
SMI-S Enables Storage Automation for SCVMM 2012
and Multiple Storage Arrays
Abstract
Microsoft® System Center 2012 - Virtual Machine Manager introduces new storage automation features
enabled by the Storage Management Initiative Specification (SMI-S) and supported by EMC Symmetrix,
CLARiiON, and VNX storage systems. This paper explains the new storage architecture and shows you
how to set up a preproduction environment to explore and validate these new storage capabilities.
July 2012
EMC | Microsoft
Reference Architecture | Best Practices
SMI-S Enables Storage Automation for Microsoft SCVMM 2012 and EMC Storage Arrays
Reference Architecture | Best Practices
Copyright © 2012 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
Copyright © 2012 Microsoft Corporation. All Rights Reserved.
This document is provided "as-is". Information and views expressed in this document, including URL and other Internet Web site
references, may change without notice.
Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or
should be inferred.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and
use this document for your internal, reference purposes. You may modify this document for your internal, reference purposes.
Microsoft, Active Directory, Hyper-V, SQL Server, Windows, Windows PowerShell, and Windows Server are trademarks of the
Microsoft group of companies. All other trademarks are property of their respective owners.
Revision History
Release Date: July 2012
Change History: Final draft (prepublication)
Table of Contents
0 ABOUT THIS DOCUMENT .................................................................................................................. 1
1 OVERVIEW ........................................................................................................................................ 2
  1.1 WHY AUTOMATE STORAGE? ............................................................................................................. 2
  1.2 STANDARDS-BASED STORAGE AUTOMATION ......................................................................................... 2
  1.3 MORE SOPHISTICATED, YET SIMPLER AND FASTER .................................................................................. 3
  1.4 TESTING STORAGE AUTOMATION IN A PRIVATE CLOUD............................................................................ 4
  1.5 JOINT PRIVATE CLOUD PLANNING AND IMPLEMENTATION ........................................................................ 4
2 ARCHITECTURE ................................................................................................................................. 5
  2.1 INDUSTRY STANDARDS FOR A NEW STORAGE ARCHITECTURE .................................................................... 5
  2.2 STORAGE AUTOMATION ARCHITECTURE ............................................................................................... 8
  2.3 STORAGE AUTOMATION SCENARIOS .................................................................................................. 14
3 PLAN A PRIVATE CLOUD ................................................................................................................. 22
  3.1 COORDINATE STORAGE REQUESTS AND STORAGE ALLOCATION NEEDS...................................................... 22
  3.2 COORDINATE STORAGE-RELATED SECURITY MEASURES ......................................................................... 27
  3.3 REVIEW FREQUENTLY ASKED QUESTIONS (FAQS) ................................................................................ 29
  3.4 REVIEW KNOWN ISSUES AND LIMITATIONS ......................................................................................... 31
4 BUILD YOUR PREPRODUCTION TEST INFRASTRUCTURE ................................................................. 34
  4.1 PREVIEW THE TEST ENVIRONMENT.................................................................................................... 34
  4.2 SET UP EMC STORAGE DEVICES FOR STORAGE VALIDATION TESTING ....................................................... 37
  4.3 SET UP EMC SMI-S PROVIDER FOR STORAGE VALIDATION TESTING ........................................................ 43
  4.4 SET UP VMM FOR STORAGE VALIDATION TESTING .............................................................................. 49
5 VALIDATE STORAGE AUTOMATION IN YOUR TEST ENVIRONMENT ................................................ 81
  5.1 SET UP THE MICROSOFT VMM STORAGE AUTOMATION VALIDATION SCRIPT............................................. 81
  5.2 CONFIGURE TRACE LOG COLLECTION ................................................................................................. 84
  5.3 REVIEW THE FULL TEST CASE LIST DEVELOPED BY VMM........................................................................ 89
  5.4 TEST CASE LIST BY EMC ARRAY PRODUCT FAMILY................................................................................ 90
  5.5 TEST STORAGE AUTOMATION IN YOUR PRE-PRODUCTION ENVIRONMENT ................................................. 93
6 PREPARE FOR PRODUCTION DEPLOYMENT .................................................................................... 94
  6.1 IDENTIFY ISSUES UNIQUE TO YOUR PRODUCTION ENVIRONMENT............................................................. 94
  6.2 PRODUCTION DEPLOYMENT RESOURCES ............................................................................................ 94
APPENDIX A: INSTALL VMM .................................................................................................................. 97
APPENDIX B: ARRAY MASKING AND HYPER-V HOST CLUSTERS .......................................................... 101
APPENDIX C: ENABLE LARGE LUNS ON SYMMETRIX ARRAYS .............................................................. 110
APPENDIX D: CONFIGURE SYMMETRIX TIMEFINDER FOR RAPID VM PROVISIONING ......................... 112
APPENDIX E: TERMINOLOGY ............................................................................................................... 116
APPENDIX F: REFERENCES ................................................................................................................... 126
0 About This Document
Are you investigating options for deploying a private cloud? EMC supports new storage automation
features introduced in Microsoft® System Center 2012 – Virtual Machine Manager. New functionality
builds on the Storage Management Initiative Specification (SMI-S) developed by the Storage Networking
Industry Association (SNIA). EMC set up a test environment to validate the new VMM storage
capabilities with supported EMC arrays. This document can serve as a guide to build and test a similar
environment.
Introducing a private cloud into your IT infrastructure requires joint planning by multiple stakeholders,
including those listed in the following Document Map.
Document Map

Section: Overview
Primary Audience: Technical Decision Makers
Secondary Audience: Hyper-V and other Server Administrators; Solution Architects; Cloud Administrators; Storage Administrators; VMM Administrators; Network Administrators; Self-Service Portal Administrators; Security Administrators

Section: Architecture
Primary Audience: Solution Architects; Cloud Administrators; Storage Administrators; VMM Administrators
Secondary Audience: Hyper-V and other Server Administrators; Network Administrators; Self-Service Portal Administrators; Security Administrators

Section: Plan a Private Cloud
Primary Audience: Cloud Administrators; VMM Administrators; Storage Administrators
Secondary Audience: Hyper-V and other Server Administrators; Network Administrators; Self-Service Portal Administrators; Security Administrators

Section: Build Your Pre-Production Test Infrastructure
Primary Audience: VMM Administrators; Cloud Administrators

Section: Validate Storage Automation in Your Test Environment
Primary Audience: Cloud Administrators; Storage Administrators; Hyper-V and other Server Administrators; VMM Administrators

Section: Prepare for Production Deployment
Primary Audience: Solution Architects; Cloud Administrators; VMM Administrators
Secondary Audience: Hyper-V and other Server Administrators; Network Administrators; Storage Administrators; Security Administrators

Section: Appendix A: Install VMM
Primary Audience: VMM Administrators; Cloud Administrators

Section: Appendix B: Array Masking and Hyper-V Host Clusters
Primary Audience: Cloud Administrators; VMM Administrators; Storage Administrators

Section: Appendix C: Enabling Large LUNs on Symmetrix Arrays
Primary Audience: Cloud Administrators; VMM Administrators; Storage Administrators

Section: Appendix D: Configuring TimeFinder for Rapid VM Provisioning
Primary Audience: Cloud Administrators; VMM Administrators; Storage Administrators

Section: Appendix E: Terminology
Primary Audience: Cloud Administrators; VMM Administrators

Section: Appendix F: References
Primary Audience: Anyone
1 Overview
EMC and Microsoft collaborate to deliver a private cloud with new and enhanced storage automation
features. Microsoft® System Center 2012 - Virtual Machine Manager (VMM 2012) introduces automatic
discovery of storage resources and automated administration of those resources within a private cloud.
Multiple EMC storage systems support these new capabilities.
Standards-based storage management is a major part of what is new in VMM 2012. VMM can manage
arrays from the EMC® Symmetrix, CLARiiON, and VNX storage families through a standards-based
interface. Microsoft System Center and EMC solutions deliver cost-effective and agile data center
services that enable integrated management of physical, virtual, and cloud environments.
1.1 Why Automate Storage?
The VMM product team asked customers if they automate any storage tasks now. The answer was "no"
from 86% of respondents.
Why?
Of the 86% who do not currently automate storage:
 Half indicate that they do not have in-house expertise to automate storage tasks.
 Half indicate that they have so many different types of arrays that the development effort and time required to automate storage tasks often blocks major storage automation initiatives.
The 14% of respondents who do automate storage tasks typically do just enough automation to reduce
the chance of human error. More advanced automation is a goal, but often a deferred goal — it requires
expertise and time that are in short supply. An industry standard is needed that enables automation of
storage tasks, yet simplifies storage automation across multiple types of array.
1.2 Standards-Based Storage Automation
VMM 2012 introduces standards-based discovery and automation of iSCSI/Fibre Channel (FC) block storage resources in a virtualized data center environment. These new capabilities build on the Storage Management Initiative Specification (SMI-S) developed by the Storage Networking Industry Association (SNIA). The SMI-S standardized management interface enables an application such as VMM to discover, assign, configure, and automate storage for heterogeneous arrays in a unified way. An SMI-S Provider uses SMI-S to enable storage management.

VMM is not an SRM
VMM focuses on automating storage tasks in a Hyper-V® environment. VMM is not a Storage Resource Management (SRM) tool, nor does it replace array management consoles or SAN administration tools:
 Storage administrators continue to design, implement, and manage storage resources.
 Cloud administrators use VMM to consume storage resources available to a private cloud.

To take advantage of this new storage capability, EMC updated its SMI-S Provider to support the VMM 2012 RTM release.

The EMC SMI-S Provider aligns with the SNIA goal to design a single interface that supports unified management of multiple types of storage array. The one-to-many model enabled by the SMI-S standard makes it possible for VMM to interoperate, via the EMC SMI-S Provider, with multiple disparate storage systems from the same VMM Console that is used to manage all other VMM private cloud components.
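As an illustrative sketch, the following Windows PowerShell fragment registers an SMI-S Provider with VMM and then lists the storage it exposes. The cmdlet names come from the VMM 2012 PowerShell module; the server name, port, and Run As account name are hypothetical placeholders, and exact parameters may vary by VMM release.

```powershell
# Hypothetical example: register the EMC SMI-S Provider with VMM.
# "smis01.contoso.com" and the "SMISAdmin" Run As account are placeholders.
$runAs = Get-SCRunAsAccount -Name "SMISAdmin"

# Point VMM at the host running the EMC SMI-S Provider (ECOM, default port 5988).
Add-SCStorageProvider -Name "EMC SMI-S Provider" `
    -ComputerName "smis01.contoso.com" -TCPPort 5988 `
    -RunAsAccount $runAs

# After discovery completes, list the arrays and pools VMM now knows about.
Get-SCStorageArray | Select-Object Name, Model
Get-SCStoragePool  | Select-Object Name, StorageClassification
```

Because discovery runs as a VMM job, the arrays and pools appear in the VMM Console (and to these cmdlets) only after the provider refresh finishes.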
1.3 More Sophisticated, Yet Simpler and Faster
Multiple EMC storage systems are built to take advantage of VMM storage automation capabilities.
Table 1: Advantages provided by the integration of VMM and EMC storage systems

Integrate Virtualization and Storage
 Coordination: Fosters integrated planning of virtual and storage infrastructures.
 Visibility: Allows VMM infrastructure administrators and VMM cloud administrators (both are members of the VMM Administrator role) to access supported EMC storage systems.
 End-to-end map: Automates end-to-end discovery, and maps virtualization to storage assets:
  First: VMM discovers all relevant storage assets and their relationships.
  Next: VMM maps a VM to its respective storage resource, creating a full end-to-end map directly accessible either from the VMM Console or by using a VMM PowerShell script.
  Result: Outdated diagrams and additional consoles to access storage are no longer needed. Administrators can discover available storage assets and understand how the underlying storage area network (SAN) infrastructure interacts with other private cloud resources.

Reduce Costs
 On-Demand Storage: Aligns IT costs with business priorities by synchronizing storage allocation with fluctuating user demand. The VMM elastic infrastructure supports "thin provisioning"; that is, VMM supports expanding (or contracting) the allocation of storage resources on EMC storage systems in response to waxing or waning demand.
 Ease-of-Use: Simplifies consumption of storage capacity, and thus saves time and lowers costs, by enabling the interaction of EMC storage systems with, and the integration of storage automation capabilities within, the VMM private cloud.

Simplify Administration
 Private Cloud GUI: Allows administration of private cloud assets (including storage) through a single management UI, the VMM Console, available to VMM or cloud administrators.
 Private Cloud CLI: Enables automation through VMM's comprehensive set of Windows PowerShell™ commands ("cmdlets"), including 25 new storage-specific cmdlets.
 Reduce errors: Minimizes errors by providing the VMM UI or CLI to view and request storage.
 Private Cloud Self-Service Portal: Provides a Web-based interface that permits users to create VMs, as needed, with a storage capacity that is based on predefined classifications.
 Simpler storage requests: Automates storage requests to eliminate delays of days or weeks.

Deploy Faster
 Deploy VMs faster and at scale: Supports rapid provisioning of VMs to Hyper-V® hosts or host clusters at scale. VMM can communicate directly with your SAN arrays to provision storage for your VMs. VMM 2012 can provision storage for a VM in the following ways:
  Create a new logical unit from an available storage pool — you can control the number and size of each logical unit.
  Create a writeable snapshot of an existing logical unit — you can provision many VMs quickly by rapidly creating multiple copies of an existing virtual disk; this puts minimal load on hosts and uses space on the array efficiently.
  Create a clone of an existing logical unit — you can offload the creation of a full copy of a virtual disk from the host to the array; typically, clones are not as space-efficient as snapshots and take longer to create.
 Reduce load: Rapid provisioning of VMs using SAN-based storage resources takes full advantage of EMC array capabilities while placing no load on the network.
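The first of the provisioning paths above can be sketched in Windows PowerShell: create a new logical unit from a discovered pool, then assign (unmask) it to a Hyper-V host. The cmdlet names are from the VMM 2012 module; the pool, LUN, and host names are hypothetical placeholders, and parameters may differ in your release.

```powershell
# Hypothetical example: carve a new logical unit out of a discovered pool
# and unmask it to a Hyper-V host. All names below are placeholders.
$pool = Get-SCStoragePool -Name "Pool_Gold01"

# Create a 50 GB logical unit in the selected pool.
$lun = New-SCStorageLogicalUnit -StoragePool $pool `
    -Name "VMStore01" -DiskSizeMB 51200

# Unmasking (registering) assigns the logical unit to the host
# so that Hyper-V can bring the disk online and use it.
$vmHost = Get-SCVMHost -ComputerName "hyperv01.contoso.com"
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHost $vmHost
```

The snapshot and clone paths follow the same pattern but start from an existing logical unit on the array rather than from raw pool capacity.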
1.4 Testing Storage Automation in a Private Cloud
The SMI-S Provider model introduces an interface between VMM and storage arrays available to the
VMM private cloud. To ensure that customers can successfully use the full set of new storage features,
Microsoft developed a set of automated validation tests that can be used in both preproduction and
production environments to confirm that systems are correctly configured and deployed. Scenarios
tested include end-to-end discovery, Hyper-V host and host cluster storage management, and VM rapid
provisioning.
1.5 Joint Private Cloud Planning and Implementation
Making optimal use of VMM storage automation capabilities with supported EMC storage systems
requires an organization’s administrators, particularly cloud and storage administrators, to jointly plan,
design, implement, verify, and manage their integrated cloud environment. As described in detail later
in this document, this brings together what were formerly separate disciplines in a new way that
requires a greater level of cooperation and coordination than was needed in the past.
2 Architecture
VMM 2012 architecture builds on, and extends, the architectural design used in VMM 2008 R2 SP1. Storage is one major area where VMM architecture, and corresponding functionality, is significantly enhanced.
The following diagram is a high-level depiction of VMM components available in the 2012 release of
VMM. The highlighted "Storage Management" Fabric component is the focus of this paper.
Figure 1: The Storage Management component within the Fabric in VMM 2012
Delivering the new storage functionality required innovation and transformation in the following areas:
 Industry standards for a new storage architecture
 Storage automation architecture
 Storage automation scenarios
The following subsections describe each of these areas in turn.
2.1 Industry Standards for a New Storage Architecture
A virtualized data center typically includes heterogeneous compute, network, and storage resources.
Achieving interoperability among these diverse technologies requires, first, a common way to describe
and represent each element and, second, the development of accepted industry standards that enable
management of these elements.
2.1.1 SNIA SMI-S and Related Industry Standards
The following subsections summarize the major models and standards that make possible the storage architecture that enables VMM to communicate with supported EMC arrays in a private cloud.
Major organizations that develop standards integral to IT, including storage automation, include:
 Storage Networking Industry Association (SNIA): SNIA develops management standards related to data, storage, and information management. SNIA standards are designed to address challenges such as interoperability, usability, and complexity. The SNIA standard that is central to this paper is:
  Storage Management Initiative Specification (SMI-S)
 Distributed Management Task Force (DMTF): DMTF develops platform-independent management standards. DMTF standards are designed to promote interoperability for enterprise and Internet environments. DMTF standards relevant to VMM and EMC storage include:
  Common Information Model (CIM)
  Web-Based Enterprise Management (WBEM)
The following table briefly describes some of the most important standards used by VMM and EMC to deliver storage-related services.
Table 2: Standards used by VMM and EMC to provide integrated storage automation

SMI-S
 The SMI-S standard enables an SMI-S Provider to manage specific storage hardware. This standard defines a management interface that SNIA promotes to simplify and facilitate secure monitoring and operation of heterogeneous storage resources in multi-vendor environments.
Note SMI-S builds on CIM and WBEM standards from DMTF.

EMC SMI-S Provider
 The EMC SMI-S Provider (used by VMM) is certified by SNIA as compliant with the SMI-S standard.
 VMM uses the EMC SMI-S Provider to discover arrays, storage pools, and logical units; to classify storage; to assign storage to one or more VMM host groups; to create, clone, snapshot, or delete logical units; and to unmask or mask logical units to a Hyper-V host or cluster. Unmasking assigns a logical unit to a host or cluster; masking hides a logical unit from a host or cluster.

WBEM
 WBEM is a collection of standards (published by DMTF) for accessing information about and managing compute, network, and storage resources in an enterprise-scale distributed environment.
 WBEM includes:
  A model, CIM, to represent resources
  An XML representation of CIM models and messages (xmlCIM) that travel via CIM-XML
  An XML-based protocol, CIM-XML over HTTP, that lets network components communicate
  A SOAP-based protocol, Web Services for Management (WS-Management, or WS-Man), that supports communication between network components

CIM
 The CIM standard provides a model for representing heterogeneous compute, network, and storage resources as objects and for representing relationships among those objects. CIM lets VMM administer dissimilar elements in a common way. Both SMI-S and WBEM build on CIM.
  CIM Infrastructure Specification defines the object-oriented architecture of CIM.
  CIM Schema defines a common, extensible language for representing dissimilar objects.
  CIM Classes identify specific types of IT resources (for example, CIM_StorageVolume).
Note EMC SMI-S Provider V4.3.2 (or later) supports DMTF CIM Schema V2.31.0.

ECIM
 The EMC Common Information Model (ECIM) defines a CIM-based model for representing IT objects (for example, EMC_StorageVolume, which is a subclass of CIM_StorageVolume).

ECOM
 EMC Common Object Manager (ECOM) implements the DMTF WBEM infrastructure for EMC. The EMC SMI-S Provider utilizes ECOM to provide a single WBEM infrastructure across all EMC hardware and software platforms.
2.1.2 EMC SMI-S Provider and the SNIA Conformance Testing Program (CTP)
The SNIA Conformance Testing Program (CTP) validates SMI-S Providers against different versions of the
standard. EMC works closely with SNIA to ensure that the EMC SMI-S Provider supports the latest SMI-S
standard.
EMC SMI-S Provider is certified by SNIA as compliant with SMI-S 1.3, 1.4, and 1.5. EMC plans to update
EMC SMI-S Provider, as appropriate, to keep current with the SMI-S standard as both the standard itself,
and VMM’s support for the standard, evolve.
For information about the SNIA CTP program and EMC participation in that program, see:
 SNIA Conformance Testing Program (SNIA-CTP): http://www.snia.org/ctp/
 SMI-S Conforming Provider Companies: http://www.snia.org/ctp/conformingproviders/index.html
2.2 Storage Automation Architecture
The following figure depicts the storage architecture that VMM 2012 delivers in conjunction with the
EMC SMI-S Provider and EMC storage systems.
Figure 2: SMI-S Provider is the interface between VMM and storage arrays in a VMM private cloud
The preceding diagram depicts the primary components of the new storage architecture enabled in
VMM by the SMI-S standard.
2.2.1 Storage Automation Architecture Elements
The following subsections describe each element that appears in the preceding architecture figure, and
the relationships among those elements.
2.2.1.1 VMM Server
The VMM Management Server, often referred to as the VMM Server, is the service that cloud and VMM
administrators use to manage VMM objects, including hypervisor physical servers, VMs, storage
resources, networks, clouds, and services (a set of VMs deployed together).
The VMM Server uses WS-Man and Windows Management Instrumentation (WMI), the Microsoft implementation of DMTF's WBEM and CIM standards, to enable management applications to share information:
 Web Services Management (WS-Man): The Microsoft implementation of WS-Man is Windows Remote Management (WinRM). VMM components use this client interface to communicate with the Microsoft Storage Management Service through WMI. VMM does not use WS-Man to communicate with SMI-S Providers; it uses the Microsoft Storage Management Service, which, in turn, uses the CIM-XML protocol for communications with the SMI-S Provider.
 Windows Management Instrumentation Service (WMI Service): WMI provides programmatic access to a system so that administrators can collect and set configuration details on a wide variety of hardware; operating system components and subsystems; and software. This service is:
  The Windows® implementation of a standard CIM Object Manager (CIMOM)
  A self-hosted service that provides hosting for the Microsoft Storage Management Service (WMI provider)
 Microsoft Storage Management Service (a WMI Provider): This service is a new WMI provider (installed, by default, on the VMM Server) for managing VMM storage operations. This service:
  Is an SMI-S client that communicates with the SMI-S Provider server over the network
  Uses the SMI-S Module to convert SMI-S objects to Storage Management Service objects
  Discovers storage objects (such as arrays, storage pools, and LUNs) as well as host initiators and storage controller endpoints on the arrays
  Performs storage operations against storage arrays
 SMI-S Module: A component of the Storage Management Service on the VMM Server that maps Storage Management Service objects to SMI-S objects. The CIM-XML Client uses only SMI-S objects.
 CIM-XML Client: A component of the Storage Management Service that enables communication with the SMI-S Provider through the CIM-XML protocol.
2.2.1.2 CIM-XML
CIM-XML is the protocol used as the communication mechanism between the VMM Server and the SMI-S Provider. The use of the CIM-XML protocol is mandated by the SMI-S standard.
2.2.1.3 EMC SMI-S Provider
The EMC SMI-S Provider is the SMI-S-compliant management server that enables VMM to manage
storage resources on EMC storage systems in a unified way.
The EMC SMI-S Provider kit contains the following components:
 EMC SMI-S Provider
 ECOM
 EMC Solutions Enabler
The EMC SMI-S Provider kit also makes available the providers listed in the following table.
Table 3: EMC SMI-S Provider includes three provider components

Array Provider
 The VMM storage feature requires the installation of the Array Provider. This provider allows the client (VMM) to manage an array that is in the Symmetrix, CLARiiON, or VNX product family.
 This document uses the terms "EMC SMI-S Provider" and "EMC SMI-S Array Provider" interchangeably because this is the provider that enables VMM to access EMC arrays.

Host Provider (N/A to VMM)
 The Host Provider is not used for VMM storage operations. Do not install the Host Provider in your test environment.

VASA Provider (N/A to VMM)
 The VASA Provider is installed automatically whenever you select the option to install the Array Provider because the VASA Provider has a dependency on the Array Provider.
 VMM does not use the VASA Provider for VMM storage operations. However, if your environment includes vSphere as well as VMM, you have the option to use the same EMC SMI-S Provider in both environments.
2.2.1.4 Array
A storage array is a disk storage system that contains multiple disk drives attached to a storage area
network (SAN) in order to make storage resources available to servers that have access to the SAN.
In the context of a VMM private cloud, storage arrays, also called storage systems, make storage resources available for use by cloud and VMM administrators and by cloud users.
EMC arrays support one or more of the following storage protocols:
 iSCSI
 Fibre Channel (FC)
 Fibre Channel over Ethernet (FCoE)
Each array communicates with its EMC SMI-S Provider as follows:
 CLARiiON and VNX arrays: All management traffic between the provider and array travels over the TCP/IP network.
 Symmetrix arrays: The communication path between the SMI-S Provider server and the array is inband via FC, FCoE, or iSCSI. (Communication with Symmetrix arrays also requires gatekeeper LUNs; EMC recommends that six gatekeeper LUNs be created on each Symmetrix array.)
Within an array, the storage elements most important to VMM are:
 Storage Pools: A pool of storage is located on an array. You can use VMM to categorize storage pools based on service level agreement (SLA) factors such as performance. One typical naming convention is to classify pools as "Gold," "Silver," "Bronze," and so on.
 Logical Units: A logical unit of storage (a storage volume) is located within a storage pool. In VMM, a logical unit is typically a virtual disk that contains the VHD file for a VM. The SMI-S term for a logical unit is storage volume. (A SAN logical unit is often, if somewhat imprecisely, referred to as a logical unit number or LUN.)
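The classification convention above can be applied from Windows PowerShell. The following sketch creates a "Gold" classification and assigns it to a discovered pool; the cmdlet names are from the VMM 2012 module, the pool name is a hypothetical placeholder, and parameters may vary by release.

```powershell
# Hypothetical example: classify a discovered storage pool as "Gold".
$gold = New-SCStorageClassification -Name "Gold" `
    -Description "High-performance pools for production VMs"

# Apply the classification to an existing pool (the pool name is a placeholder).
$pool = Get-SCStoragePool -Name "Symm_Pool_R1"
Set-SCStoragePool -StoragePool $pool -StorageClassification $gold
```

Once pools carry classifications, self-service VM requests can be satisfied by classification ("Gold") rather than by naming a specific pool or array.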
2.2.1.5 Element Manager Tool
The storage administrator uses a vendor-provided Element Manager tool to access and manage storage
arrays and, typically, the administrative domain. An Element Manager is one of an administrator’s key
Storage Resource Management (SRM) tools. EMC Unisphere is an example of an Element Manager.
2.2.1.6 Library Server/Hyper-V Host and Host Cluster
Currently, VMM supports storage automation only for Hyper-V hosts and host clusters.
In the architecture figure depicted earlier (and in the test infrastructure described later in this paper), the standalone Hyper-V server is both a VM host and a VMM library server:
 VM host: A physical computer managed by VMM and on which you can deploy one or more VMs. VMM 2012 supports Hyper-V hosts (on which the VMM agent is installed), VMware ESX hosts, and Citrix XenServer hosts. However, in the current release, VMM supports storage provisioning only for Hyper-V hosts.
 Library Server: A file server managed by VMM that you can use as a repository to store files used for VMM tasks. These files include virtual hard disks (VHDs), ISOs, scripts, VM templates (typically used for rapid provisioning), service templates, application installation packages, and other files.
  You can use VHD files stored on the VMM library server to provision VMs. VHD files used to support VM rapid provisioning are contained within LUNs on storage arrays but are mounted to folders on the VMM library server.
  You can install the VMM library server on the VMM Server, on a VM host, or on a standalone Hyper-V host. However, to fully implement (and test) all VMM 2012 storage functionality, the VMM library server must be installed on a standalone Hyper-V host that is configured as a VM host. (For more information, see the section "Minimum Hardware Requirements Explained" later in this paper.)
Hyper-V hosts or host clusters in a VMM private cloud must be able to access one or more storage arrays:

• iSCSI Initiator (on the host) to access an iSCSI SAN: If you use an iSCSI SAN, each Hyper-V host will access a storage array using the Microsoft iSCSI Initiator, which is part of the operating system. During storage operations, such as creating a logical unit and assigning it to the host, the iSCSI initiator on the host is logged on to the array.

  An iSCSI initiator (on the Hyper-V host) is the endpoint that initiates a SCSI session with an iSCSI target (the storage array). The target (array) is the endpoint that waits for commands from initiators and returns requested information.

  Note: Whether you use an iSCSI HBA, a TCP/IP Offload Engine (TOE), or a network interface card (NIC), you use the Microsoft iSCSI Initiator to manage them and to manage sessions established through them.
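Because VMM relies on the Microsoft iSCSI Initiator already present in the operating system, a common preparatory step is simply to confirm that the initiator service is running on each Hyper-V host. The following PowerShell sketch (run on the host itself, not on the VMM Server) uses the standard service cmdlets and the built-in MSiSCSI service name:

```powershell
# Ensure the Microsoft iSCSI Initiator service (MSiSCSI) starts automatically
# and is running before VMM manages iSCSI sessions on this Hyper-V host
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Verify the service status
Get-Service -Name MSiSCSI
```

Run these commands in an elevated PowerShell session on each host that will access the iSCSI SAN.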
• HBA Provider (on the host) to access an FC SAN: If you use an FC SAN, each Hyper-V host that will access a storage array must have a host bus adapter (HBA) installed. An HBA connects a host system (the computer) to a storage fabric. Storage devices are also connected to this fabric. Each host and its related storage devices must be zoned correctly so that the host can access the storage arrays.

• NPIV Provider (on the host) for an FC SAN: VMM supports N_Port ID Virtualization (NPIV) on an FC SAN. NPIV uses HBA technology (which creates virtual HBA ports, also called vPorts, on hosts) to enable a single physical FC port to function as multiple logical ports, each with its own identity —
one purpose of which is to provide an identity for a VM on the host. In this case, a vPort enables the
host to see the LUN that is used by the VM. VMM 2012 does not support creation or deletion of
vPorts on the host as an individual operation. However, for an existing VM, VMM 2012 can move
the vPort that identifies that particular VM from the source host to the destination host (when SAN
transfer is used to migrate the VM). "Moving" the vPort refers to deleting the vPort from the source
host and creating the vPort on the destination host.
VMM storage automation requires discovery of storage objects not only on arrays but also on each host and host cluster:

• VMM agent and software VDS on the host for discovery: Just as the Microsoft Storage Management Service on the VMM Server enables VMM to discover (via the SMI-S Provider) storage objects on external arrays, VMM can also discover storage-related information on Hyper-V hosts and host clusters.

  • VMM agent on the host: VMM uses the VMM agent installed on a physical Hyper-V host computer to ask the iSCSI initiator (on the host side) for a list of iSCSI targets (on the array side); similarly, the VMM agent queries the FC HBA APIs for FC ports.

  • Microsoft VDS Software Provider on the host: VMM uses the VDS API (VDS software provider) on the host to retrieve disk and volume information on the host; to initialize and partition disks on the host; and to format and mount volumes on the host.

• VDS Hardware Provider on the VMM Server (only for arrays that do not support SMI-S): The VDS hardware provider is used by VMM 2008 R2 SP1 to discover and communicate with SAN arrays. In VMM 2012, the SMI-S Provider supersedes the VDS hardware provider because SMI-S provides more extensive support for storage automation. The VDS hardware provider is still available in VMM 2012 and can be used to enable SAN transfers if no SMI-S Provider is available; however, if an SMI-S Provider is available, do not install the VDS hardware provider in a VMM 2012 environment.
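As an early preview of the setup steps described later in this paper, the following VMM PowerShell sketch shows the general shape of registering an SMI-S Provider with VMM. It assumes a provider is already installed and reachable; the Run As account name, provider address, and port are placeholders for your environment, and cmdlet parameter names should be verified against your VMM build:

```powershell
# Run As account that holds the SMI-S Provider's credentials (placeholder name)
$runAs = Get-SCRunAsAccount -Name "SMISProviderAccount"

# Register the provider with VMM; the address and port are environment-specific
Add-SCStorageProvider -Name "EMC SMI-S Provider" -RunAsAccount $runAs `
    -NetworkDeviceName "https://smis01.contoso.com" -TCPPort 5989

# Rescan the provider so that VMM performs Level 1 discovery of its arrays
Get-SCStorageProvider -Name "EMC SMI-S Provider" | Read-SCStorageProvider
```

The rescan at the end triggers the Level 1 discovery described in Scenario 1.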
2.2.2 VMM Supports Multiple SMI-S Providers and Vendors
The SMI-S standard and VMM 2012 make it possible for one instance of VMM to use a single provider to
communicate with one or more arrays of different types. In addition, a single VMM instance can
communicate with multiple providers at the same time. Some vendors implement more than one
provider. Some customers might choose to use multiple providers from different vendors and might
incorporate storage systems from different vendors in their private cloud. In addition, multiple VMM
instances can communicate, simultaneously, with multiple providers.
The following figure depicts a common configuration, in which VMM uses a separate provider to
manage arrays from different vendors.
Figure 3: Managing one or more storage arrays from multiple vendors
The next figure depicts a configuration in which multiple instances of VMM are in use. It is important to
realize that the instances of VMM do not communicate with each other.
Figure 4: Managing storage from multiple instances of VMM
Administrators who deploy a cloud infrastructure with multiple instances of VMM need to be aware that it is possible, for example, for two VMM instances to compete simultaneously for the same storage resource — whichever transaction completes first "wins" the resource. The VMM team has not tested storage automation capabilities in this configuration.
2.2.3 Top-Down Design Precedes Bottom-Up Implementation
The order in which you design and plan the components of a VMM-based private cloud is the inverse of
the order in which you implement those components.
Table 4: Sequence for storage design is the inverse of implementation

  Top-down design:          Cloud Infrastructure¹ → VMM → SMI-S Provider → Arrays
  Bottom-up implementation: Arrays → SMI-S Provider → VMM → Cloud Infrastructure¹

¹ Cloud infrastructure details unrelated to storage are not included in this document.
After you complete a top-down design, you implement from the bottom up. However, before you can
start building your preproduction test environment, it is useful to become familiar with some of the
scenarios that illustrate why storage automation matters (described next) and to review the issues and
limitations summarized later in the planning section.
2.3 Storage Automation Scenarios
The three scenarios described next explain what the new storage architecture enables you to do:

• Scenario 1: End-to-end discovery and end-to-end mapping

• Scenario 2: Storage on demand — host and cluster storage management

• Scenario 3: SAN-based VM rapid provisioning using snapshots or clones

Automation of common and recurring storage tasks enables VMM and cloud administrators to become more productive, and more responsive, with storage resources. Storage automation also frees time to focus on other critical tasks.

In VMM 2012, the deep integration of storage provisioning with the VMM Console and VMM PowerShell substantially reduces the learning curve for administrators. For example, you do not need a special plug-in to add shared storage capacity to a Hyper-V cluster, nor do you have to learn complex new skills to perform rapid provisioning of VMs. These capabilities are built into and delivered by VMM.
2.3.1 Scenario 1: End-to-End Discovery and End-to-End Mapping
VMM discovers both local and remote storage. The first storage automation scenario includes end-to-end discovery of all storage objects (on each array and on each Hyper-V host). This scenario also includes end-to-end mapping of each discovered association between an array object and a host object, as well as a complete VM-to-LUN map.
2.3.1.1 End-to-End Discovery of Array and Host Objects
VMM 2012 discovers two broad categories of storage — remote (on the array) and local (on the host) —
as summarized in the following table.
Table 5: VMM uses various services to discover storage objects on an array or on a host

Array object – Level 1 discovery (1 of 2)
  Discovery agent: Microsoft Storage Management Service. Resides on the VMM Server; discovers (via the SMI-S Provider) storage objects on remote arrays.
  Discovered objects: Level 1 discovery uses an SMI-S Provider registered with VMM to return the following array objects:
  • Storage pools
  • Storage endpoints (FC ports, iSCSI targets)
  • Storage iSCSI portals

Array object – Level 2 discovery (2 of 2)
  Discovery agent: Microsoft Storage Management Service. Resides on the VMM Server; discovers (via the SMI-S Provider) storage objects on remote arrays.
  Discovered objects: Level 2 discovery is targeted against storage pools already under VMM management and returns the following array objects:
  • Storage logical units (commonly called LUNs) associated with that storage pool
  • Storage initiators associated with the imported LUNs
  • Storage groups¹ (often called masking sets) associated with the imported LUNs

Host object – Agent discovery (1 of 2)
  Discovery agent: VMM Agent. Resides on a Hyper-V server (a VM host); discovers specific storage objects on the local host.
  Discovered objects: VMM Agent discovery returns information about the following Hyper-V (VM host) storage objects:
  • FC endpoints
  • FC ports
  • iSCSI endpoints (iSCSI targets)
  • iSCSI portals

Host object – VDS discovery (2 of 2)
  Discovery agent: Virtual Disk Service (VDS software provider). Resides on a Hyper-V server (a VM host); discovers specific storage objects on the local host.
  Discovered objects: VDS discovery returns information about the following Hyper-V (VM host) storage objects:
  • Disks
  • Volumes

¹ Storage groups (described in the next subsection) are discovered by VMM but are not displayed in the VMM Console. You can display storage groups by using the following VMM PowerShell command:

Get-SCStorageArray -All | Select-Object Name,ObjectType,StorageGroups | Format-List
2.3.1.2 End-to-End Mapping of LUN and Array Objects to Hosts
As indicated in the preceding subsection, VMM Level 1 discovery retrieves information about all storage
objects of specific types (storage pools, endpoints, and iSCSI portals) on an array with which VMM is
configured to interact through the SMI-S Provider.
Level 2 discovery starts by retrieving information about logical units (only about logical units for storage
pools that have already been brought under VMM management), and then retrieves storage initiators
and storage groups associated with the imported logical units.
As part of importing information about logical units, VMM also populates the VMM database with any
discovered associations between storage group objects and logical unit objects. In VMM, storage groups
are defined as objects that bind together host initiators (on a Hyper-V host or host cluster) with target
ports and logical units (on the target storage array). Thus, if a storage group contains a host initiator, the
logical unit is unmasked to (assigned to) that host (or cluster). If no association exists, the logical unit is
masked (that is, it is not visible to the host or cluster).
By default, when VMM manages the assignment of logical units for a host cluster, VMM creates storage
groups per node (although it is also possible to specify storage groups per cluster instead of by individual
node). A storage group has one or more host initiators, one or more target ports, and one or more
logical units. For more information about how VMM handles storage groups in the context of
masking/unmasking for Hyper-V host clusters, see "Appendix B: Array Masking and Hyper-V Host
Clusters" in this document.
End-to-end mapping takes place as follows:

• LUN-to-host map: With information about discovered associations between storage groups and logical units now stored in the VMM database, VMM has an initial logical "map" of each discovered logical unit that is associated with a specific host.

• Array-to-host map: Detailed information about a Hyper-V host is available only if the VMM agent is installed on the host. Because the VMM agent is installed on any Hyper-V server that acts as a VM host (and is therefore managed by VMM), a more detailed map between storage objects on a VM host and on any associated arrays is automatically created. This information tells you which arrays a given host can "see."

• VM-to-LUN map: After VMM discovers all available storage assets, VMM maps each VM that consumes storage from the SAN (VHDs or pass-through disks) to its LUN and then creates a complete VM-to-LUN map. The administrator can access this VM-to-LUN map in the VMM Console or by using a VMM PowerShell script. A sample script is provided in the blog "List all the VMs hosted on a specific SAN array" at:
http://blogs.technet.com/b/hectorl/archive/2011/07/26/list-all-the-vms-hosted-on-a-specific-san-array.aspx
Why does any of this matter?

• Example 1 – VM deployment based on storage pool classification

  Discovery provides VMM with rich information about the arrays under management, but it is not initially obvious which array offers the best storage for which purpose. Therefore, VMM lets you tag each storage pool with a user-defined classification that indicates the capabilities of that pool. One common classification scheme is to label high-performance storage pools as "Gold," good-performance pools as "Silver," moderate-performance pools as "Bronze," and so on. VMM stores this information in the VMM database.

  You can take advantage of the storage classification capability to initiate automated deployment of VMs to only those hosts or clusters that have access to a storage pool of a given classification. Afterwards, you can see, for any VM, what its storage classification is.
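In VMM PowerShell, tagging a pool with a classification might look like the following sketch. The classification and pool names are examples, and the Set-SCStoragePool parameter name should be verified against your VMM build:

```powershell
# Create a "Gold" classification and apply it to a pool under VMM management
$gold = New-SCStorageClassification -Name "Gold" `
    -Description "High-performance storage pools"

# "Pool0001" is a placeholder for a storage pool already managed by VMM
Get-SCStoragePool -Name "Pool0001" |
    Set-SCStoragePool -StorageClassification $gold
```

The classification then becomes available as a placement criterion when deploying VMs.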
• Example 2 – Identifying underlying storage associated with a service instance

  In VMM 2012, a service instance is a collection of connected VMs that together provide a service to users. Each VM in this service contains one or more virtual hard disks (on a host volume within a logical disk) and/or a pass-through disk (represented by a different logical disk).
Each logical disk on the VM host is associated with a specific logical unit on the array; all logical
units associated with this service instance are contained by one or more storage pools on a
specific array.
You can use VMM 2012 to identify the underlying storage array, storage pool, and logical units
associated with a specific service instance.
• Example 3 – Simpler array decommissioning

  When a storage array must be decommissioned, you can use the VMM Console (or VMM PowerShell) to identify quickly which Hyper-V hosts have data on that array by enumerating all of the logical units on that array and then determining which hosts each logical unit is unmasked to (associated with).

  You can then use VMM to move that data to another location before the array is taken out of service. You move data by manually unassigning and then reassigning logical units or by using SAN transfer to migrate the data.
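A minimal VMM PowerShell sketch of the enumeration step might look like the following. The array name is a placeholder, the StorageGroups property is the same one surfaced by the command in the footnote to Table 5, and the initiator and logical-unit property names are assumptions to verify against your VMM build:

```powershell
# Enumerate the storage groups on the retiring array to see which host
# initiators each logical unit is unmasked to
$array = Get-SCStorageArray -Name "RetiringArray"   # placeholder array name

foreach ($group in $array.StorageGroups) {
    # Property names below are illustrative; inspect $group | Get-Member
    $group | Select-Object Name, StorageInitiators, StorageLogicalUnits
}
```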
2.3.2 Scenario 2: Storage on Demand – Host and Cluster Storage Management
VMM automates the assignment of storage to a Hyper-V VM host or to a Hyper-V host cluster (by
unmasking LUNs to a host or cluster), and monitors and manages any supported storage system
associated with a host or a cluster.
Note Although VMM supports VMware ESX hosts and Citrix XenServer hosts in addition to Hyper-V
hosts, in the current release, the storage provisioning functionality of VMM applies only to Hyper-V
hosts.
2.3.2.1 VMM Allocates Storage by VMM Host Group in a Cloud
Local storage on a VM host or on a host cluster is, by definition, always available to the host or cluster.
By contrast, remote storage must be assigned explicitly to a host or host cluster. Instead of assigning
storage resources directly to a host or cluster, however, VMM uses the more flexible mechanism of
allocating storage first to a VMM host group. This approach enables administrators to make storage
available to different sets of users (such as different VMM roles; different types of IT administrators; or
end-users in separate business units) independently of when — or whether, at any given moment —
that allocated storage is assigned to a particular host or cluster.
Each VMM cloud must have one or more VMM host groups. Before you can provision new logical units or assign storage to a host or cluster, you must first allocate storage to a host group. You can allocate both logical units and storage pools to a VMM host group.
It is important not to confuse storage allocation with storage assignment. Allocation of storage to a
VMM host group is simply a way of staging storage capacity (thus making it available, for example, to
different types of IT administrators and/or to multiple business units). Allocation of storage to a host
group does not assign the storage to each (or to any) host or cluster in the host group. In fact, you can
allocate storage without having yet added any hosts or clusters to the host group. Whenever hosts or
clusters are added to that host group, storage that has been allocated to that host group will be
available to the hosts or clusters in the host group (and thus to the set of users with permissions to use
resources in that host group).
Storage pools and logical units are allocated to host groups differently:

• Storage pools can be allocated to one or multiple VMM host groups.
• Logical units are allocated exclusively to a specific VMM host group and can be used only by hosts or clusters in that host group.

However, allocating storage to a VMM host group does not imply that the storage can be unmasked to all of the hosts in that host group. Allocation of storage (whether pools or logical units) to a host group takes place at multiple levels, according to the following rules:

• Capacity management: The administrator determines the storage, or the subset of storage, that can be consumed by the hosts or clusters in a VMM host group. At this level, which host or cluster can see the SAN is determined by FC zoning or by iSCSI target logon.

• Storage pools: As part of the rapid provisioning workflow (which uses SAN-copy-capable templates), the VMM placement feature determines whether the template VHD resides on a storage logical unit that is in a pool allocated to the host group. During assignment (unmasking) of storage to a host or cluster, new storage logical units can be created from allocated pools.

• Storage logical units: VMM does not place VMs on allocated logical units. Allocation is used when assigning (unmasking) a LUN to a host or cluster. Only allocated LUNs can be unmasked.
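These allocation rules correspond roughly to the following VMM PowerShell sketch. The host group, pool, and LUN names are placeholders, and the -AddVMHostGroup and -VMHostGroup parameters are assumptions to verify against your VMM build:

```powershell
$hostGroup = Get-SCVMHostGroup -Name "Marketing"    # placeholder host group

# Allocate a storage pool to the host group
# (a pool can be allocated to multiple host groups)
Get-SCStoragePool -Name "Pool0001" |
    Set-SCStoragePool -AddVMHostGroup $hostGroup

# Allocate an existing logical unit exclusively to the same host group
Get-SCStorageLogicalUnit -Name "LUN042" |
    Set-SCStorageLogicalUnit -VMHostGroup $hostGroup
```

Note that allocation here only stages capacity; it does not unmask anything to a host.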
2.3.2.2 VMM Storage Provisioning
After storage resources available to your private cloud are discovered and allocated to a host group, you
can start to make use of those storage resources. For example, you can set up your private cloud so that
users with different types of requirements — such as a software development team, members of a
marketing department, and inventory-control staff — know what storage resources are allocated to
them. From storage allocated to their business unit, they can assign what they need to a Hyper-V host or
host cluster and can focus quickly on their job-related tasks because VMM automates the provisioning
process.
Table 6: Sequence in which VMM provisions storage to an existing Hyper-V VM host or host cluster

Logical unit operations

  Task: Provision new storage from a storage pool allocated to a VMM host group.
  How: You can use VMM to provision new storage from a storage pool allocated to a VMM host group in one of three ways:
  • Create a new logical unit from available capacity.
  • Create a new logical unit by cloning an existing logical unit.
  • Create a new logical unit by creating a snapshot of an existing logical unit.
  In each case, the new logical unit can be used to deploy a new VM to a host, or it can be used by an existing VM (pass-through).

  Task: Assign a newly created logical unit (or an existing one) to a Hyper-V VM host or host cluster.
  How: You can use VMM to assign a newly created logical unit, or an existing one, to a Hyper-V VM host or to an existing host cluster by unmasking (assigning) the logical unit to that host or cluster.
Host disk / volume operations

  Task: Prepare disks and volumes.
  How: After storage is assigned to a host or cluster, VMM lets you perform the following tasks on the host or cluster:

  Disk (LUN) on a standalone host:
  Format the volume as an NTFS volume (optional):
  • Specify partition type: GPT or MBR
  • Specify a volume label
  • Specify allocation unit size
  • Choose Quick format (optional)
  Specify the mount point:
  • Specify a drive letter, a path to an empty NTFS folder, or none

  Cluster disk (LUN):
  Format the volume as an NTFS volume (required):
  • Specify partition type: GPT or MBR
  • Specify a volume label
  • Specify allocation unit size
  • Choose Quick format (optional)
  Note: No "mount point" fields exist for a cluster disk.
In addition, you can assign storage to a new cluster by using the new cluster wizard. VMM supports the
creation of a new cluster from available Hyper-V hosts. In the new cluster wizard you can select which
logical units to assign to the cluster. As part of creating the new cluster, the logical units are unmasked
to all of the nodes and prepared as cluster shared volumes (CSVs).
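The provisioning sequence in Table 6 might be sketched in VMM PowerShell as follows. The pool, LUN, and host names are placeholders; Register-SCStorageLogicalUnit is shown here as the unmasking step, and parameter names should be verified against your VMM build:

```powershell
# 1. Provision: create a new logical unit from capacity in an allocated pool
$pool = Get-SCStoragePool -Name "Pool0001"          # placeholder pool name
$lun  = New-SCStorageLogicalUnit -StoragePool $pool `
            -Name "NewLun01" -DiskSizeMB 102400

# 2. Assign: unmask the new logical unit to a managed Hyper-V host
$vmHost = Get-SCVMHost -ComputerName "hyperv01.contoso.com"
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHost $vmHost
```

After the unmasking step, the disk and volume preparation tasks in Table 6 can be performed from the VMM Console.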
2.3.3 Scenario 3: SAN-Based VM Rapid Provisioning Using Snapshots or Clones
VMM rapid provisioning encompasses both the rapid provisioning of a large number of VMs and the ability to rapidly migrate a VM from one computer to another by using a SAN transfer.
2.3.3.1 Use Snapshots or Clones to Create a Large Number of New VMs Rapidly
VMM 2008 R2 SP1 introduced a type of VM rapid provisioning by using Windows PowerShell scripts to
duplicate a logical unit that then is used to create and deploy a VM. However, with VMM 2008 R2 SP1,
rapid provisioning on the SAN side requires vendor tools (external to VMM).
This earlier rapid provisioning capability is, in VMM 2012, greatly extended by the introduction of SMI-S
support, which enables automated SAN-based rapid provisioning of new VMs on a large scale. With
VMM 2012, the entire process is intrinsic to VMM, and you can use either the VMM Console or VMM
PowerShell to rapidly provision new VMs.
Copying a VHD on a LUN from one location to another on a SAN (SAN transfer) when two VM hosts are
connected to the same SAN is far faster than copying a VHD from one computer to another over a local
area network (LAN transfer).
As outlined in the following table, with VMM 2012, you can create and customize easy-to-use SAN-copy-capable templates (SCC templates) to perform automated large-scale rapid provisioning of VMs either to
standalone Hyper-V VM hosts or to Hyper-V host clusters. These templates, once created, are stored in
the VMM library and are therefore reusable.
Table 7: VMM 2012 automates the entire workflow for VM rapid provisioning

Identify an SCC VHD in the library
  Identify a SAN-copy-capable VHD (SCC VHD) in the VMM library that resides on a SAN array. The array must support copying a logical unit by cloning it or by creating a writeable snapshot of it (or both).

Create an SCC template
  Create an SCC template that uses the SCC VHD as the source for repeatedly creating new VMs with identical hardware and software characteristics (as specified in this particular template). This is a SAN-copy-capable template (SCC template). Like the SCC VHD, it is stored in the VMM library and is available for reuse.

Use the SCC template to rapidly deploy a new VM
  The template:
  • Finds one or more potential hosts: The template uses the VMM placement engine to automatically identify appropriate hosts based not only on characteristics of available hosts but also by automatically identifying which of those hosts are attached to the same storage array where the SCC VHD resides.
  • Clones or snapshots the SCC VHD: The template creates a copy of the SCC VHD and uses the copy to create a new VM, customizing the VM as specified by the settings in the template.
  • Assigns storage for the new VM to a host: The template unmasks (assigns) the new VM's logical unit either to a standalone Hyper-V host or to all of the nodes in a Hyper-V host cluster, as specified in the template.
See also:

• For step-by-step details about how to create and use an SCC template, see the following sections later in this document:
  • "4.4.3.4 Configure arrays for VM rapid provisioning (select snapshots or clones)"
  • "4.4.4 Create SAN-Copy-Capable Templates for Testing VM Rapid Provisioning"

• For information about creating reserved LUNs for snapshots on CLARiiON or VNX arrays, see:
  • "EMC CLARiiON Reserved LUN Pool Configuration Considerations: Best Practices Planning" (September 2010) at http://www.emc.com/collateral/hardware/white-papers/h1585-clariion-resvd-lun-wp-ldf.pdf
  • EMC Unisphere online help
2.3.3.2 Use SAN Transfer to Migrate Existing VMs Rapidly
Existing VMs that use a dedicated logical unit can be migrated by using SAN transfer (also called SAN
migration). A Hyper-V-based VM can have either a virtual hard disk (VHD) file attached or a passthrough
disk. In either case, SAN transfer will move the LUN regardless of whether the manifestation of the LUN
on the Hyper-V side is a VHD or a passthrough disk.
Example: SAN Transfer of a VM with a VHD
In the case of a VM with a VHD attached (the LUN contains the VHD), using SAN transfer to migrate the
VM from a source host to a destination host simply transfers the path to the LUN from one Hyper-V
server to another. Assuming that both the source and destination Hyper-V VM hosts can access the
storage array, the only change required is to the path.
The mechanism for moving the LUN path is unmasking/masking. The path to the storage volume (to the
LUN) is masked (hidden) from the source host and unmasked (exposed) to the destination host. The
storage volume is mounted on the destination host so that the VHD can be accessed.
A SAN transfer is much faster than copying a VHD file over a local area network (LAN) to move a VM
from a source to a destination host. The LUN is not moved; the only change made is that the path to the
LUN changes.
VMM supports SAN transfer for both iSCSI and FC storage:

• iSCSI migration
  VMM can use either of the following methods (based on what the underlying array supports):
  • Unmask and mask
  • iSCSI initiator logon/logoff

• FC migration
  Prerequisite: Zoning must be set up appropriately.
  VMM can use either of the following methods:
  • Unmask and mask
  • NPIV vPort creation/deletion
3 Plan a Private Cloud
The architecture that you use to design the preproduction environment will follow the standards-based
architecture described in the preceding section.
Before you set up your environment for testing, consider the following:

• Coordinate storage requests and storage allocation needs
• Coordinate storage-related security measures
• Review frequently asked questions (FAQs)
• Review known issues and limitations

The following subsections provide information about each of these topics.
Developing your approach now for coordinating private cloud and storage requirements, developing
jointly agreed-on security measures, and gaining familiarity with FAQs and known issues will enable you
to set up your preproduction test environment in an optimal way. Planning and coordination will also
help ensure a more efficient deployment into your production environment later.
3.1 Coordinate Storage Requests and Storage Allocation Needs
When you plan the infrastructure for a VMM private cloud that is now capable of supporting far more
sophisticated storage automation functionality, it is critical to include storage considerations as an
integral part of the earliest planning phase.
[Figure 5 shows two side-by-side panels: "What the VMM/Cloud Administrator Manages Now" and "What the Storage Administrator Manages Now."]
Figure 5: Cloud and storage administration (formerly separate) now require coordinated planning
Coordination between cloud and storage administrators, starting with design and planning, is critical to
the successful deployment of one or more private clouds that can take advantage of all available storage
automation capabilities.
However, the necessity for coordinated planning of all aspects of a private cloud goes beyond cloud and
storage administrators. Administrators who need to identify and coordinate storage-related needs for a
VMM-based private cloud include:
• Storage Administrators
• VMM Administrators
• Cloud Administrators
• Self-Service Portal Administrators
• Hyper-V and other Server Administrators
• Network Administrators
• Security Administrators
Note This document assumes that, in an enterprise-scale heterogeneous environment, the role
"VMM cloud administrator" or "cloud administrator" refers to a person who focuses on, and is
responsible for, cloud services provided to users. Although a VMM cloud administrator must have VMM
Administrator or VMM Delegated Administrator permissions to view and manage cloud storage systems,
the VMM cloud administrator role is different from the "VMM administrator" role. The VMM
administrator role focuses on managing and monitoring the VMM infrastructure that supports the cloud
and ensures that cloud services remain accessible at all times.
3.1.1 Identify Global VMM-Storage Planning Issues
Understanding your global storage requirements starts with an understanding of the services that you
want to provision in your clouds. This is true whether you plan to deploy a private, hybrid, community,
or public cloud. Because a service can be migrated across one or more types of cloud, it is important to
plan for these workflows early so that your cloud remains elastic and can continue to handle dynamic
workloads over time.
Design your cloud infrastructure based on the number and types of private clouds that VMM will host.
Each cloud will have capacity, performance, scale, and elasticity requirements defined as SLAs.
For storage-related issues, consider:

• Which services will be migrated across clouds?
• For each of those services:
  • What are the storage allocation requirements for that service?
  • What is the storage provisioning completion time requirement for that service?
  • What are the storage security requirements for that service?
  • How should storage be allocated and classified for that service?
Given the sophisticated storage automation capabilities introduced with the VMM 2012 private cloud, storage and non-storage administrators need to develop systematic ways to communicate requirements, preferences, and limitations to each other.
The following table lists some areas where the storage administrator will likely need to take a leadership
role when working with other IT administrators.
Table 8: Storage administrators: Private cloud storage requests and processes (global issues)

Systematize | Role of Storage Administrator (Working With Non-Storage IT Administrators)

Identify private cloud storage needs:
 Gain familiarity with new and enhanced storage capabilities delivered by the VMM private cloud.
 Ask whether the new capabilities require:
   Installing ancillary software on a storage system?
   Enabling functionality on a storage system that is OFF by default?
   Adding more storage to a storage system?
   Rebalancing storage usage across storage systems (and ask whether support for this rebalancing capability exists)?

Balance competing requests:
 Respond to multiple, often simultaneous, competing storage and SAN requests from VMM, cloud, self-service portal, server, and network administrators.
 Determine whether existing methods for balancing competing requests will be modified to handle increased demand from private cloud administrators and users. For example:
   Can you expect to install additional SMI-S Providers in order to provide load balancing by reducing the number of arrays managed by each provider?
   Do you need to install additional SMI-S Providers in order to eliminate a service or workload interdependency?

Allocate for a private cloud:
 Allocate storage in a systematic way that is appropriate for the new private cloud environment.
 Ask whether rapid provisioning will alter storage administration:
   How much? Will the quantity of storage allocated in a very short time in order to rapidly provision VMs change how storage resources are tracked and allocated?
   How fast? Will the speed at which storage is made available need to be expedited to keep up with rapid provisioning of large numbers of VMs?
 Define and create the appropriate storage classifications. Consider the following for each storage classification:
   Disk drive types
   Tiered storage
   Caching
   Thick and thin pools
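As a sketch of how planned storage classifications might later be created in VMM, the following PowerShell example uses the New-SCStorageClassification cmdlet from the VMM 2012 command shell. The classification names and descriptions are illustrative, not prescribed values:

```powershell
# Illustrative sketch: names and descriptions are examples only.
# Run from the VMM 2012 command shell.
New-SCStorageClassification -Name "GOLD" `
    -Description "FC drives, tiered storage, caching, thick pools"
New-SCStorageClassification -Name "SILVER" `
    -Description "SAS drives, caching, thin pools"
New-SCStorageClassification -Name "BRONZE" `
    -Description "SATA drives, thin pools"
```

A classification created this way can later be assigned to discovered storage pools, so that clouds and SLAs reference the classification rather than a specific array.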
The following table lists some areas where IT administrators (other than storage administrators) will
likely need to take a proactive role in communicating their needs to storage administrators.
Table 9: IT administrators: Private cloud storage requests and processes (global issues)

Systematize | Role of IT Administrators (Working With Storage Administrators)

Understand storage domain:
 Gain familiarity with the impact of storage requests on the storage domain.
 How will storage administrators classify and allocate storage for IT areas?

Communicate storage needs:
 Communicate to the storage administrator, in a predictable way, the specific storage needs for each IT area, and the specific storage needs of your users.

Identify available storage — quantity?:
 Ascertain how much storage the storage administrator can make available to each IT area and to each set of users within that area.
 Ascertain how the storage administrator plans to handle storage allocation for sets of users whose needs fluctuate (wax and wane) significantly based on factors such as shopping season, accounting quarters, project development cycles, and so on.

Identify available storage — where?:
 Ascertain which specific storage pools the storage administrator can make available to each IT area and to each set of users in that area.
3.1.2 Specific Storage Requests from IT Administrators to Storage Administrators
Planning for the storage requests that other IT administrators will make to storage administrators
involves sharing information, taking action, and coordinating joint actions.
Table 10: Planning that IT administrators coordinate with storage administrators (specific issues)

IT Administrator | Needs <This> from Storage Administrators

VMM Administrators:
 Storage to support the VMM database
 Storage to support the VMM Library Server (for example, library LUNs)
 SMI-S Server(s) to act as an interface for all storage resources available to VMM

Cloud Administrators:
 Storage to support existing VMs and services (if any), and expansion of VMs and services
 Capacity planning requirements that meet expected cloud workflow demands, including prepopulating the Reserved LUN Pool with sufficient capacity to support rapid provisioning with snapshots
 Recovery of storage from deleted VMs and services
 Storage for new VMs and services
 Classification of storage based on established SLAs
 Required storage system features

Self-Service Administrators:
 Storage to support existing VMs or expanding growth in VM requests
 Recovery of storage from deleted VMs
 Storage for new VMs
 Classification of storage based on established SLAs

Hyper-V Administrators:
 Storage required for saving VMs
 Host zoning requirements

Server Administrators (non-Hyper-V):
 Required host-based software, such as EMC PowerPath (license required)
 Number of host bus adapters (HBAs) and ports needed

Network Administrators:
 Required storage protocols, such as FC, FCoE, or iSCSI
 Required bandwidth and quality of service
 Multipath requirements
3.1.3 Storage Administrator Requests to Specific IT Administrators
Planning and resource requests that storage administrators make to other IT administrators
involve sharing information, taking action, and coordinating joint actions.
Table 11: Planning that storage administrators coordinate with IT administrators (specific issues)

IT Administrator | Storage Administrator Needs <This> from Other IT Administrators

VMM Administrators:
 How many VMM servers need an SMI-S Server configured with a specific storage system?
 The SMI-S Server cannot be clustered. What is the impact of that on meeting the availability requirements for each configured storage system?
 Does your organization require that each VMM server have a separate account to access each SMI-S Provider server, or can all VMM servers share the same account?

Cloud Administrators:
 What storage SLAs must be provided to the storage administrator?
 Only explicitly specified storage pools can be administered by VMM. Is there a systematic way for the storage administrator to notify the cloud administrator which storage pools are available to VMM?
Note Currently, VMM does not have a built-in capability to restrict (deny) which storage pools are brought under management.
 There might, or might not, be LUNs in a storage pool that should be treated as reserved. Is there a systematic way for the storage administrator to notify the cloud administrator which LUNs are not available to VMM?
Note Currently, VMM does not have a built-in capability to restrict (deny) the ability to create a LUN.
 What are the backup and recovery requirements for non-self-service VMs and for their storage?

Self-Service Administrators:
 What are the backup and recovery requirements for self-service VMs and for their storage?

Hyper-V Administrators:
 What is the location (geographical location, subnet, Active Directory® domain, VMM host group, and so on) of the Hyper-V hosts and host clusters that are managed by VMM?
 Will VM storage migration and/or SAN copy be used?

Server Administrators (non-Hyper-V):
 Do I need to install storage management software on one or more servers?
 What dependent software is required?

Network Administrators:
 What HBAs have been installed on the Hyper-V hosts?
 Are iSCSI and/or Fibre Channel supported on the Hyper-V hosts?
Note If a Hyper-V host connects to the same array with both iSCSI and FC, VMM uses FC by default.

Security Administrators:
 What SMI-S Provider communications requirements exist (such as HTTP/HTTPS and available ports)?
 What SMI-S Provider security protocols and accounts are configured?
3.2 Coordinate Storage-Related Security Measures
Storage-related security issues that require coordination between the cloud administrator, security
administrator, and storage administrator when planning to deploy a VMM-based private cloud include:
 VMM role-based access control to grant rights to VMM host groups and clouds
 Run As accounts and Basic Authentication
 Storage system global administrator account
 SMI-S Provider object security
The following subsections address these issues.
3.2.1 VMM Role-Based Access Control to Grant Rights to VMM Host Groups and Clouds
VMM supports role-based access control (RBAC) security for defining the scope within which a specific
VMM user role can perform tasks. In VMM, this refers to having rights to perform all administrative
tasks on all objects within the scope allowed for that user role. The scope for the VMM Administrators
role extends to all objects that VMM manages. The scope for any particular Delegated Administrator
role is limited to objects within the assigned scope, which can include one or more (or all) host groups,
clouds, and library servers.
The current RBAC model allows members of the VMM roles Administrator and Delegated Administrator
to add and remove SMI-S Providers from VMM.
Members of the VMM Administrator and Delegated Administrator roles can also allocate storage, but
which storage they can allocate is limited to those VMM host groups or clouds that they have the right
to access.
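The scope of a Delegated Administrator role can also be inspected from the VMM command shell. This is a hedged sketch: the role name is hypothetical, and the property names shown should be verified with Get-Member in your environment:

```powershell
# "LMDelegatedAdmin" is a hypothetical role name.
$role = Get-SCUserRole -Name "LMDelegatedAdmin"
$role.Profile    # role type, for example DelegatedAdmin
$role.Scope      # host groups (and any clouds) within this role's scope
```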
The following screenshots illustrate how VMM defines the scope for a Delegated Administrator role that
is limited to host groups (but does not include clouds) and can administer storage resources allocated to
the defined host group:
 Properties for Delegated Administrator: The Scope page shows a defined scope for this role that (in this case) includes only the VMM host group named LMHostGroup1.
Figure 6: This Delegated Administrator role’s scope includes only one host group
 Properties for LDMHostGroup1 host group: The Storage page shows total storage capacity (in GB) and represents allocated storage in terms of logical units and storage pools.
Figure 7: Storage (storage pools and/or logical units) is allocated by host group
3.2.2 Run As Accounts and Basic Authentication
VMM security includes Run As accounts. Two VMM user roles, Administrators and Delegated
Administrators, can create and manage Run As accounts.
However, VMM Run As accounts do not grant or deny rights to administer storage associated with a
specific SMI-S Provider. Instead, you use the EMC SMI-S Provider Run As account (which is not
associated with any VMM user role) to access storage. This account is an ECOM account and must be
created separately (outside of VMM) by using the ECOM Configuration Tool. This account allows a
connection from the VMM Server to the provider by using Basic Authentication.
The storage administrator should work with the VMM Administrator and Security Administrator to
determine the security model to use for storage in a VMM-based private cloud. This includes
determining how many Run As accounts are needed based on the number of VMM and SMI-S Provider
management servers.
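As a minimal sketch of this model, the following PowerShell creates a Run As account from an ECOM account (which must already exist, created outside VMM with the ECOM Configuration Tool) and registers the SMI-S Provider server with VMM. The server name, account name, and port are illustrative; verify the Add-SCStorageProvider parameters against the VMM 2012 cmdlet reference:

```powershell
# Hypothetical names throughout; the ECOM account was created beforehand
# with the ECOM Configuration Tool on the SMI-S Provider server.
$cred = Get-Credential   # supply the ECOM account credentials
$runAs = New-SCRunAsAccount -Name "EMC-SMIS-RunAs" -Credential $cred

# Register the provider; 5988 is the typical ECOM HTTP (Basic Authentication) port.
Add-SCStorageProvider -Name "EMC SMI-S Provider" -RunAsAccount $runAs `
    -ComputerName "smisserver01.contoso.com" -Port 5988
```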
See Also:
 For more information about Basic Authentication when configuring the EMC SMI-S Provider, see the section "Install and Configure the EMC SMI-S Provider" later in this document.
 For more information about Run As accounts in VMM 2012, see the Microsoft TechNet help topic "Configuring Run As Accounts in VMM" at http://technet.microsoft.com/en-us/library/gg675096.aspx.
 For the most up-to-date EMC information, see the latest EMC SMI-S Provider Release Notes.
3.2.3 Storage System Global Administrator Account
The storage administrator must use a storage domain global administration account for a storage
system when configuring the storage system for management by the SMI-S Provider server. Depending
on your storage domain security configuration, a single account or multiple accounts might be required.
3.2.4 SMI-S Provider Object Security
Currently, ECOM and the SMI-S Provider support only class-level security, not instance-level
security. Instance-level security, if it existed, would enable a storage administrator to restrict access
to specific existing pools and to LUNs within a pool. In the current model, you define pools and use
VMM’s role-based security to restrict access and operations; the provider itself enforces no
instance-level security. Access to SMI-S objects is allowed only if the VMM provider Run As account
has permission to access those objects. Permissions are set through ECOM role-based security.
3.3 Review Frequently Asked Questions (FAQs)
The following table provides answers to questions that EMC customers commonly ask.
Table 12: Frequently Asked Questions (FAQs)
Question:
Can I install the SMI-S Provider and VMM on the same computer?
Answer:
No, Microsoft and EMC recommend that you do not install the SMI-S Provider on the VMM
Server in either a preproduction or a production environment. This configuration is
untested and therefore unsupported. Install the SMI-S Provider on a dedicated server with
sufficient resources to support your performance requirements. For more information, see
the section "Set Up EMC SMI-S Provider for Storage Validation Testing" later in this
document.
Question:
On what type of server should the SMI-S Provider be installed?
Answer:
To build the preproduction test infrastructure described later in this document, EMC
recommends installing the SMI-S Provider — specifically, the 64-bit version of the SMI-S
Array Provider — on a Windows Server® 2008 R2 SP1 64-bit computer with at least 2 cores
and 8 GB RAM.
Note EMC SMI-S Provider can be installed on other Windows and Linux platforms (listed
in the EMC SMI-S Provider Release Notes). However, the validation tests in this document
were performed with an EMC SMI-S Provider installed on a Windows Server 2008 R2 SP1
64-bit computer.
Question:
Can I install the SMI-S Provider into a cluster?
Answer:
No, installing the EMC SMI-S Provider on a cluster is an unsupported configuration.
Question:
Is there a limit to the number of arrays per SMI-S Provider?
Answer:
In a production environment, EMC recommends that you configure an SMI-S Provider
server with no more than five arrays to ensure optimal performance. Within this limit, the
specific recommended ratio of arrays to provider can vary depending on the expected load
for a specific SMI-S Provider. Overloading a provider can cause the VMM server to
experience timeouts with the result that workflows will not complete.
Question:
Specifically, why would you install fewer than five arrays per SMI-S Provider?
Answer:
If you have an array that has a large number of storage groups or that has a large number
of storage volumes within its storage groups, reduce the number of storage systems per
SMI-S Provider to ensure acceptable performance. Storage groups are often also called
masking views or SCSI Protocol Controllers (SPCs).
Question:
Do I need to install the EMC VDS Hardware Provider on Hyper-V hosts?
Answer:
No. VMM uses the Microsoft VDS Software Provider on a Hyper-V host to retrieve and
configure disk and volume information on the host. Installation of the EMC VDS Hardware
Provider is not needed on the Hyper-V host.
Note Install the VDS hardware provider on the VMM Server only in the case where you
use arrays (such as VNXe arrays) in your private cloud environment that are not supported
by the EMC SMI-S Provider.
Question:
If I install the EMC VDS Hardware Provider on my VMM Server, will I be able to do rapid
provisioning as it is available in VMM 2012?
Answer:
No, you cannot do automated rapid provisioning at scale unless you use VMM 2012 in
conjunction with the EMC SMI-S Provider. Installing the EMC VDS Hardware Provider on
the VMM Server provides only the more limited rapid provisioning capability that was
possible with SCVMM 2008 R2 SP1.
Question:
Is there a limit on how many VMs you can rapidly provision at the same time?
Answer:
Rapid VM provisioning should be batched to contain no more than eight VMs to avoid the
possibility of VMM and/or provider timeouts. Results will vary depending on the
configuration.
Question:
What do I need to know about array management ports and their IP addresses?
Answer:
A CLARiiON or VNX array has two management port IP addresses that the SMI-S provider
uses to manage the array. To configure a CLARiiON or VNX array with the provider, you
must specify both management port IP addresses and must open port 443. The IP
addresses of both management ports must be accessible so that the provider can fully
discover and manage the array.
Secure Sockets Layer (SSL) port 443 is the port used for the communication. If a firewall
exists between the SMI-S Provider installation and a CLARiiON or VNX array, open SSL port
443 in the firewall (inbound and outbound) for management communications to occur
with the array.
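A quick way to confirm that the provider server can reach an array management port over SSL is to open a TCP connection to port 443 from the SMI-S Provider server. The management-port IP address below is illustrative:

```powershell
# Hypothetical management-port IP address; repeat for both management ports.
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect("192.168.1.50", 443)   # throws an exception if the port is blocked
$tcp.Connected                      # True when the firewall permits SSL traffic
$tcp.Close()
```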
Question:
What storage licensing issues do I need to know about?
Answer:
Some array features needed by VMM might be disabled by default and require licenses to
be added. For more information about licensing, see section "Specific Requirements by
Array Type."
Question:
What is required for a Symmetrix array to communicate with the SMI-S Provider?
Answer:
Symmetrix arrays must have an inband (iSCSI, FC, or FCoE) communication path between
the SMI-S Provider server and each array. EMC recommends that six gatekeeper LUNs be
created on each array and that the array be zoned and unmasked to the SMI-S Provider
server to enable the provider to manage the array.
3.4 Review Known Issues and Limitations
This section describes issues or limitations encountered by EMC during validation testing of the new
VMM storage capabilities when incorporating EMC storage systems into a VMM-based private cloud.
This section also calls out any scale or functionality limitations. (The test environment used is described
later in the section "Build Your Preproduction Test Infrastructure.")
Important For the most up-to-date information, see these subsections in the latest EMC SMI-S Provider
Release Notes:

Known problems and limitations
 Technical notes
The following tables list known issues or limitations identified by EMC during validation testing of EMC
storage systems with VMM. Awareness of these issues can be useful to customers who want to set up a
VMM-based private cloud that includes EMC storage systems.
Table 13: For CLARiiON arrays, you must add LUNs to increase the size of a snapshot pool

Issue or Limitation: Cannot specify a new size for a snapshot pool on a CLARiiON array
Description: You cannot specify a new size when you want to expand snapshot pool capacity on a CLARiiON array.
Fix Available? Yes
Fix Details: Because you cannot specify a new size, the workaround is to supply additional LUNs to increase the size of the reserved snapshot pool on the CLARiiON array. Use EMC Unisphere to perform this operation.
Applicable To: CLARiiON CX4 Series arrays
Table 14: Managing pools with MetaLUNs is not supported

Issue or Limitation: MetaLUNs unsupported
Description: Managing pools that contain MetaLUNs is not supported.
Fix Available? No
Fix Details: Check future releases of VMM to see whether MetaLUN support is available.
Applicable To: CLARiiON CX4 Series arrays
Table 15: VMAX 10K/VMAXe supports clones but not snapshots

Issue or Limitation: VMAX 10K/VMAXe supports only clones
Description: The VMAX 10K/VMAXe Series is a slightly different version of the VMAX product that, currently, supports clones but not snapshots.
Fix Available? N/A (by design)
Fix Details: N/A (by design)
Applicable To: Symmetrix VMAX 10K/VMAXe Series arrays
Table 16: VMM discovers and modifies cascading storage groups but cannot create them

Issue or Limitation: Cascading initiator support exists but is limited
Description: If cascading storage groups have been created and configured (outside of VMM) on Symmetrix arrays, VMM can discover those externally created cascading storage groups. VMM can perform masking operations on cascaded storage groups that it has discovered. VMM can also modify an existing cascaded storage group by assigning (or unassigning) storage from that cascaded storage group to a Hyper-V VM host or host cluster that is a member of the VMM host group. However, you cannot use the VMM Console (or VMM PowerShell commands) to create cascaded storage groups.
Fix Available? No
Fix Details: Check future releases of VMM to see whether support for cascading initiator groups is available.
Applicable To: Symmetrix arrays
Table 17: VMM cannot create a LUN larger than 240 GB unless you configure the auto_meta setting

Issue or Limitation: Configure auto_meta to create LUNs larger than 240 GB
Description: For Symmetrix arrays, VMM cannot create a LUN larger than 240 GB unless you first enable the auto_meta setting.
Fix Available? Yes
Fix Details: You can set Symmetrix-wide meta settings, including auto_meta, by using the symconfigure command and specifying a command file. The auto_meta setting is a Symmetrix-wide setting (ENABLE/DISABLE) that enables automatic creation of metadevices; the default value is DISABLE. Metadevices allow individual devices to be concatenated to create larger devices. The devices assigned in a meta sequence do not need to be adjacent. For more information, see "Appendix C: Enable Large LUNs on Symmetrix Arrays."
Applicable To: Symmetrix arrays
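The auto_meta fix can be sketched with SYMCLI as follows. This assumes EMC Solutions Enabler is installed on a host with access to the array; the Symmetrix ID is illustrative, and the command-file syntax is abbreviated, so verify it against the Solutions Enabler array controls documentation before use:

```powershell
# Contents of the command file auto_meta.txt (illustrative):
#   set symmetrix auto_meta=ENABLE;
# Preview first, then commit; -sid 1234 is a hypothetical Symmetrix ID.
symconfigure -sid 1234 -file auto_meta.txt preview
symconfigure -sid 1234 -file auto_meta.txt commit
```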
Table 18: Configuring the SMI-S Provider server to communicate with VMM using HTTPS might fail

Issue or Limitation: HTTPS configuration might fail
Description: The default configuration of ECOM conflicts with the Windows HTTPS implementation.
Fix Available? Yes
Fix Details:
1. Open the following file:
<C>:\Program Files\EMC\ECIM\ECOM\conf\security_settings.xml
2. Change the following setting from the current value:
<ECOMSetting Name="SSLClientAuthentication" Type="string" Value="Optional"/>
To:
<ECOMSetting Name="SSLClientAuthentication" Type="string" Value="None"/>
3. Restart the ECOM service.
Applicable To: EMC SMI-S Array Provider
Table 19: Timeouts appear in VMM log when performing multiple provisioning steps simultaneously

Issue or Limitation: Timeouts during provisioning operations
Description: VMM might create more connections to the SMI-S Provider server than the default configuration can support. As a result, you might see timeouts in the VMM log when you perform multiple provisioning steps at the same time.
Fix Available? Yes
Fix Details: See the section "Install and Configure the EMC SMI-S Provider" later in this document for recommended connection limit settings.
Applicable To: EMC SMI-S Array Provider
4 Build Your Preproduction Test Infrastructure
This section shows you how to build a preproduction environment that you can use for functional
storage validation testing using a basic configuration. You can use this test environment to exercise most
of VMM’s storage automation scenarios and primitives (individual capabilities), enabling you to quickly
detect and address any problems encountered.
A quick ‘preview’ of the test environment is followed by array, provider, and VMM requirements that
customers will need to consider when planning how to build and deploy a private cloud.
Note This document does not include steps to configure the virtual network required for the test
infrastructure. Search for an EMC Fast Track document that contains this information on
Support.EMC.com.
4.1 Preview the Test Environment
This preview describes the minimum test infrastructure that you need to validate the new storage
capabilities — especially VM rapid provisioning — when you deploy VMM 2012 with EMC storage
systems.
4.1.1 Minimum Hardware Requirements (Servers and Arrays) for Test Environment
Microsoft describes the ideal minimum hardware requirement for storage validation testing as follows:
 Management Servers (two physical servers):
   1 VMM server with at least 4 processor cores (includes SQL Server® for the VMM database, unless you use a separate SQL Server)
   1 EMC SMI-S Provider server
 Hyper-V servers (five physical servers):
   1 standalone Hyper-V server (also acts as the Library Server)
   1 four-node Hyper-V failover cluster (made up of 4 Hyper-V servers)
 Arrays (at least one) – for EMC, this can include one or more of the following:
   1 or more Symmetrix VMAX 10K/VMAXe or VMAX 20K/VMAX arrays running Enginuity 5875 or later, or a Symmetrix VMAX 40K array running Enginuity 5876 or later
   1 or more CLARiiON CX4 Series arrays running FLARE 30 or later
   1 or more VNX arrays running VNX OE 31 or later
Notes
 Both EMC and Microsoft used the VMM Storage Automation Validation Script (described later in the section "Validate Storage Automation in Your Test Environment") to test configurations with hardware similar to that listed above. EMC also performed validation testing using another configuration that contains an eight-node Hyper-V host cluster.
 It is possible to install the VMM Server and/or SMI-S Provider server on VMs, but Microsoft recommends installing all servers in the preceding list on physical servers for the reasons listed in the next subsection "Minimum Hardware Requirements Explained."
4.1.2 Minimum Hardware Requirements Explained
The servers and arrays listed in the preceding subsection are the minimum recommended hardware
requirements for testing storage automation in a VMM-based private cloud for the following reasons:
 Standalone host versus cluster:
   Masking operations for a standalone host and a cluster differ because VMM offers two models for creating masking sets in the cluster case; this is not applicable to standalone hosts. (See "Appendix B: Array Masking and Hyper-V Host Clusters" later in this document.)
   A 4-node cluster is the recommended minimum configuration because four nodes will typically enable you to catch the majority of issues. Optimally, you can also perform testing with 8-node and 16-node clusters to identify a larger set of issues.
   Workflow and APIs differ for a standalone server versus a cluster for disk initialization, partition creation, volume format, and volume mounting code paths, and for cluster resource creation.
 Validation testing differs for host and cluster:
   Parameters specify host or cluster. VMM PowerShell commands (called "cmdlets") use parameters to specify whether a cmdlet targets a standalone VM host or a host cluster. For example, Register-SCStorageLogicalUnit has different parameter sets for VMHostCluster and VMHost:
Register-SCStorageLogicalUnit [-StorageLogicalUnit] <StorageLogicalUnit[]> -VMHostCluster <VMHostCluster> [-JobVariable <String>] [-PROTipID <Guid>] [-RunAsynchronously <SwitchParameter>] [<CommonParameters>]
Register-SCStorageLogicalUnit [-StorageLogicalUnit] <StorageLogicalUnit[]> -JobGroup <Guid> -VMHost <Host> [-JobVariable <String>] [-PROTipID <Guid>] [-RunAsynchronously <SwitchParameter>] [<CommonParameters>]
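To make the two parameter sets concrete, here is a hedged usage sketch; the LUN, cluster, and host names are hypothetical:

```powershell
# Hypothetical object names; run from the VMM 2012 command shell.
$lun = Get-SCStorageLogicalUnit -Name "LUN0042"

# Cluster case: VMM unmasks the LUN to every node in the cluster.
$cluster = Get-SCVMHostCluster -Name "HVCluster1"
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHostCluster $cluster

# Standalone-host case: uses the VMHost parameter set (with a job group).
$vmhost = Get-SCVMHost -ComputerName "HVHost01"
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHost $vmhost `
    -JobGroup ([Guid]::NewGuid())
```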
Notes
 Verification is not performed by the VMM Storage Automation Validation Script
(described later in the section "Validate Storage Automation in Your Test Environment").
Instead, verification is handled by the VMM engine: When VMM unmasks a LUN to a
host or to a cluster, VMM validates that the disk is visible to the server (in the case of a
cluster, visible to all servers).
 VMM PowerShell is merely an API to the VMM engine. The core logic is in the VMM
engine. This means that operations that partition disks and mount volumes are the same
in the standalone host case and the cluster case — but VMM calls additional APIs in the
cluster case. This is why it is important to test primitives and end-to-end scenarios.
   Context differentiates between host and cluster. In a number of other VMM storage cmdlets, the context (standalone vs. cluster) determines how the LUN is unmasked. SMI-S calls are the same between cluster and standalone host; however, the sequence and frequency differ. This is another reason why testing primitives is not sufficient and why end-to-end scenario testing is essential. Passing SMI CTP tests is a necessary — but not sufficient — prerequisite to help ensure that a provider will work with VMM.
 A dedicated standalone Hyper-V host is required:
   For rapid provisioning with FC SAN
   To serve as a VMM library server
35
SMI-S Enables Storage Automation for Microsoft SCVMM 2012 and EMC Storage Arrays
Reference Architecture | Best Practices
Important Adding a VMM library server role to a physical Hyper-V server already configured as
a standalone VM host is required in your test environment to fully test all VMM 2012 storage
automation functionality with EMC arrays. Using the same Hyper-V server as a VM host and a
VMM library server lets you unmask and mask LUNs to that server, because the folder
mount path you specify on the VM host (in the test steps described later in this document) is a
path that is managed by the library server.
Using a physical server is a requirement primarily for FC-based arrays. There is no way to expose
FC-based storage to a VM. Exposing storage to a VM is possible only with iSCSI, by using the
iSCSI initiator that comes with Windows Server.
In addition, to streamline the setup for rapidly provisioning VMs and to enable the administrator
to work primarily in the VMM Console, Microsoft recommends co-hosting the VMM library
server with a Hyper-V server because VMM can unmask LUNs only to Hyper-V servers.
If, in your production environment, you prefer not to co-host a VM host and a VMM library
server on the same physical server, VMM also supports adding each role to separate servers. In
this case, however, you will need to do all unmasking and masking for the library server outside
of VMM (not through the VMM Console or by using VMM PowerShell cmdlets).
 SMI-S Provider servers — one is the minimum, but the number can vary depending on:
   Number of arrays
   Size of arrays (that is, the number of pools and LUNs on each array)
   Array product family and model
   Array OS version
   Connectivity
At scale, Microsoft recommends testing with physical rather than virtual servers to maximize
throughput and push the scalability and reliability of the entire system (VMM + SMI-S Provider +
arrays). Running the infrastructure servers (VMM Server, SQL Server, and SMI-S Provider) on VMs
limits throughput. The main cause of limited throughput is the CPU sharing model when running
multithreaded applications in a VM — physical hardware performs better.
The storage validation tests kick off multiple parallel operations, and VMM uses multiple threads to
handle those parallel operations:
 VMM Server – physical (recommended): requires at least 4 processor cores
 VMM Server – virtual (not recommended at scale): requires at least 4 logical processors
4.1.3 Relationships among Servers and Arrays in Test Environment
The following figure depicts the relationships among the minimum required number of servers and
arrays used by EMC to test VMM in a way that takes full advantage of the new storage capabilities.
Figure 8: Minimum servers and arrays recommended by Microsoft for validating storage capabilities
Table 20: Labels in the figure that indicate communication requirements for the test infrastructure

TCP/IP
 TCP/IP indicates that the two endpoints use TCP over an IP network to communicate.

FC –or– iSCSI
 FC indicates that the two endpoints send SCSI commands over an FC network.
 iSCSI indicates that the two endpoints send SCSI commands over an IP network.
Note If a Hyper-V host has FC and iSCSI connections to an array, VMM uses FC by default.

FC or iSCSI | TCP/IP
FC or iSCSI | TCP/IP indicates that the EMC SMI-S Provider needs one of the following
for communications (through the EMC Solutions Enabler) between the provider and the array:
 TCP/IP for CLARiiON or VNX arrays
 FC or iSCSI for Symmetrix arrays
Note Communication to Symmetrix arrays also requires gatekeeper LUNs; EMC
recommends that six gatekeeper LUNs be created on each Symmetrix array.
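For scripted sanity checks, the provider-to-array connectivity rules in Table 20 can be captured in a small lookup table. This is an illustrative sketch only; the dictionary and function names are not part of any EMC or VMM API:

```python
# Transport between the EMC SMI-S Provider (via Solutions Enabler) and each
# array family, per Table 20. Names here are illustrative, not an EMC API.
PROVIDER_LINKS = {
    "CLARiiON": "TCP/IP",
    "VNX": "TCP/IP",
    "Symmetrix": "FC or iSCSI",  # also needs six gatekeeper LUNs per array
}

def provider_link(family: str) -> str:
    """Return the transport the provider uses to reach an array family."""
    if family not in PROVIDER_LINKS:
        raise ValueError(f"unknown array family: {family}")
    return PROVIDER_LINKS[family]

print(provider_link("Symmetrix"))  # FC or iSCSI
```

Note that this covers only the provider-to-array leg; VMM itself always reaches the provider over TCP/IP.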
4.2 Set Up EMC Storage Devices for Storage Validation Testing
EMC updated its SMI-S Provider so that Symmetrix, CLARiiON, and VNX storage systems now support
VMM 2012. To perform storage automation validation testing with these storage systems, you need to
set up a test infrastructure similar to the one described in this document.
4.2.1 Summary: Tested EMC Arrays that Support VMM Storage Automation
VMM supports FC and iSCSI storage arrays. EMC storage systems that use the FC or iSCSI protocol and
that can be incorporated into a VMM private cloud are listed in the following table. EMC tested the
Symmetrix VMAX, CLARiiON, and VNX models listed in this table by using a test infrastructure similar to
the one described in this document.
For detailed results of VMM Storage Automation Validation Script testing, see the section "Test Case List
by EMC Array Product Family" later in this document.
Note If a Hyper-V host has both FC and iSCSI connectivity to the same array, VMM uses FC by default.
Table 21: Tested EMC arrays that support new VMM storage automation capabilities

Maker | Series | Model | Protocol | Minimum Array OS | Provider Version [1] | Max Arrays per Provider | Provider Download
EMC | Symmetrix | VMAX 10K/VMAXe Series | FC | Enginuity 5875 (or later) | 4.4.0 (or later) | 5 | EMC Powerlink
EMC | Symmetrix | VMAX 20K/VMAX Series | iSCSI / FC | Enginuity 5875 (or later) | 4.4.0 (or later) | 5 | EMC Powerlink
EMC | Symmetrix | VMAX 40K Series | iSCSI / FC | Enginuity 5876 (or later) | 4.4.0 (or later) | 5 | EMC Powerlink
EMC | CLARiiON | CX4 960, 480, 240, 120 | iSCSI / FC | FLARE 30 (or later) | 4.4.0 (or later) | 5 | EMC Powerlink
EMC | VNX | 5100, 5300, 5500, 5700, 7500 | FC / iSCSI [2] | VNX OE 31 (or later) | 4.4.0 (or later) | 5 | EMC Powerlink

[1] EMC SMI-S Provider V4.3.2 introduced support for VMM; the current version is V4.4.0.
[2] VNX 5100 supports FC but not iSCSI; all other VNX arrays support both FC and iSCSI. (VNXe is not supported at this time.)
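Table 21 caps each SMI-S Provider at five arrays, and the at-scale guidance later in this paper tightens that to one array per provider. A quick way to size the provider tier is a ceiling division, sketched below; the function name and interface are illustrative only:

```python
import math

def providers_needed(array_count: int, at_scale: bool = False) -> int:
    """Estimate SMI-S Provider instances for a given number of arrays.

    Table 21 recommends at most 5 arrays per provider; the at-scale
    guidance in this paper tightens that to 1 array per provider.
    """
    if array_count < 0:
        raise ValueError("array count cannot be negative")
    per_provider = 1 if at_scale else 5
    return math.ceil(array_count / per_provider)

print(providers_needed(7))                 # 2
print(providers_needed(7, at_scale=True))  # 7
```

The same arithmetic applies whether the arrays are Symmetrix, CLARiiON, or VNX, since the five-array guideline counts all families combined.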
4.2.2 Summary: Specific VMM Storage Capabilities for Each Tested EMC Array
The following table lists support, by array type, for specific storage capabilities delivered by VMM. EMC
validated, in its test environment, each storage capability listed in the "Storage Primitives" column in
this table.
EMC performed comprehensive testing by running the "VMM Storage Automation Validation Script"
provided by Microsoft on each type of array. For detailed results of VMM Storage Automation Validation
Script testing, see the section "Test Case List by EMC Array Product Family" later in this document.
Table 22: Tests run, by array type, that validate EMC support for VMM 2012 storage capabilities

EMC arrays tested, by storage feature (X = validated). Storage primitives are grouped by private
cloud scenario; the three columns are Symmetrix VMAX Series | CLARiiON CX4 Series | VNX Family [1].

End-to-End Discovery Scenario [2]
 Discover arrays: X | X | X
 Discover storage pools: X | X | X
 Discover FC ports (storage endpoints): X | X | X
 Discover iSCSI targets (storage endpoints): X | X | X
 Discover iSCSI portals: X | X | X
 Discover LUNs: X | X | X
 Discover host initiator endpoints (initiator ports): X | X | X
 Discover storage groups (masking sets): X | X | X

Host and Cluster Storage Capacity Management Scenario
 Create LUN: X | X | X
 Snapshot LUN (writeable): X [3] | X [3] | X [3]
 Clone LUN: X [3] | X [3] | X [3]
 Unmask LUN (create OR modify) to a host: X | X | X
 Mask LUN (modify OR delete) on a host: X | X | X
 Unmask LUN (create OR modify) to a cluster: X | X | X
 Mask LUN (modify OR delete) on a cluster: X | X | X
 Delete LUN: X | X | X
 Mount multiple LUNs to a host: X | X | X
 Mount multiple LUNs to a cluster: X | X | X

Rapid Provisioning of VMs on SANs at Scale Scenario [4][5]
 Concurrent create LUN: X | X | X
 Concurrent snapshot LUN: X | X | X
 Concurrent clone LUN: X | X | X
 Concurrent unmask LUN: X | X | X
 Concurrent mask LUN: X | X | X

[1] Supported arrays include the VNX Family, but not VNXe.
[2] Discovery primitives in this table refer only to VMM discovery of storage resources on arrays (not storage objects on hosts).
[3] For the number of snapshots and clones supported by each array type, see the next subsection. VMAX 10K/VMAXe Series arrays support only clones (by design).
[4] See also "Appendix B: Array Masking and Hyper-V Host Clusters."
[5] Number of arrays per provider at scale is 1 array to 1 provider. Deployment to host or cluster is limited by array capabilities.
4.2.3 Summary: Maximum Snapshots and Clones by Array Model
The following table lists the maximum number of clones and snapshots that you can create by type of
array. This information is central to VM rapid provisioning.
For detailed results of snapshot and clone testing for each EMC array family, see the section "Test Case
List by EMC Array Product Family" later in this document.
Table 23: Maximum number of clones and snapshots per source LUN

VMAX
 VMAX 10K/VMAXe Series: 0 snapshots (not supported), 15 clones
 VMAX 20K/VMAX Series: 128 snapshots, 15 clones
 VMAX 40K Series: 128 snapshots, 15 clones

CLARiiON
 CX4 120: 8 snapshots, 100 clones
 CX4 240: 8 snapshots, 100 clones
 CX4 480: 8 snapshots, 100 clones
 CX4 960: 8 snapshots, 100 clones

VNX
 VNX 5100: 8 snapshots, 100 clones
 VNX 5300: 8 snapshots, 100 clones
 VNX 5500: 8 snapshots, 100 clones
 VNX 5700: 8 snapshots, 100 clones
 VNX 7500: 8 snapshots, 100 clones
Tip To see the maximum number of clones or snapshots per source LUN in your environment, open a
VMM PowerShell command shell and type the following command:
Get-SCStorageArray -All | Select-Object Name, ObjectType, Manufacturer, Model,
LogicalUnitCopyMethod, IsCloneCapable, IsSnapshotCapable, MaximumReplicasPerSourceClone,
MaximumReplicasPerSourceSnapshot | Format-List
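If you want to estimate rapid-provisioning capacity offline, the per-LUN limits in Table 23 can be encoded as data. The sketch below is illustrative only (the dictionary keys and helper are not a VMM API), and it assumes each snapshot or clone backs exactly one rapidly provisioned VM:

```python
# Maximum replicas per source LUN, transcribed from Table 23.
# Keys are the series labels used in this paper, not VMM model strings.
MAX_REPLICAS = {
    "VMAX 10K/VMAXe Series": (0, 15),    # snapshots not supported by design
    "VMAX 20K/VMAX Series": (128, 15),
    "VMAX 40K Series": (128, 15),
    "CLARiiON CX4 (any model)": (8, 100),
    "VNX (any model)": (8, 100),
}

def max_vms_per_source_lun(series: str) -> int:
    """Upper bound on rapidly provisioned VMs from a single source LUN,
    assuming each snapshot or clone backs exactly one VM."""
    snapshots, clones = MAX_REPLICAS[series]
    return snapshots + clones

print(max_vms_per_source_lun("VNX (any model)"))   # 108
print(max_vms_per_source_lun("VMAX 40K Series"))   # 143
```

In a live environment, prefer the `Get-SCStorageArray` tip above, which reports what the array actually advertises.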
4.2.4 Specific Requirements by Array Type
This section identifies specific software packages (for each type of array) that you might need to install,
enable, or purchase (and obtain a license for) to support specific storage automation functionality. The
ancillary software that you actually need depends on which storage automation features you plan to
make available in your VMM-based private cloud.
One example of a storage automation feature obtained by purchasing add-on software is VM rapid
provisioning. VMM can quickly create a large number of LUNs used for automated rapid VM creation,
but this requires arrays that support snapshots, clones, or both. Both snapshot and clone features can
be licensed on all EMC arrays.
The following subsections list software that you need for Symmetrix, CLARiiON, or VNX arrays.
4.2.4.1 Symmetrix Requirements
The following tables list software and configuration requirements for Symmetrix arrays that support
VMM storage automation.
See Also:
 EMC SMI-S Provider Release Notes (latest version)
 Hardware/Platforms Documentation
 Symmetrix Data Sheets:
  http://www.emc.com/collateral/hardware/data-sheet/h8816-symmetrix-vmax-10k-ds.pdf
  http://www.emc.com/collateral/hardware/data-sheet/h6193-symmetrix-vmax-20k-ds.pdf
  http://www.emc.com/collateral/hardware/data-sheet/h9716-symmetrix-vmax-40k-ds.pdf
  http://www.emc.com/collateral/hardware/product-description/h6544-vmax-w-enginuitypdg.pdf
Table 24: Symmetrix VMAX software and license requirements

Firmware
Requirement: Enginuity
Description: An operating environment (OE) designed by EMC for data storage; used to control
components in a Symmetrix array. (Installed with the array)
Version:
 VMAX 10K/VMAXe Series: Enginuity 5875 (or later)
 VMAX 20K/VMAX Series: Enginuity 5875 (or later)
 VMAX 40K Series: Enginuity 5876 (or later)
Additional License Required? No

Management Software
Requirement: EMC Solutions Enabler
Description: Provides the interface between the EMC SMI-S Provider and Symmetrix, CLARiiON,
and VNX arrays
Version: V7.4.0 (or later) (Installed automatically when you install the EMC SMI-S Provider kit)
Additional License Required? No
Table 25: Symmetrix VMAX configuration and license requirements

Management Software
Requirement: EMC Symmetrix Management Console (SMC) / EMC Unisphere for VMAX
Description: Web-based interface used to discover, monitor, configure, and control Symmetrix arrays.
Additional License Required? No

Firmware
Requirement: EMC TimeFinder
Description: TimeFinder Snap is required for:
 Snapshots
 Cloning
Additional License Required? Yes

Gatekeeper Devices
Requirement: Gatekeeper Devices
Description: Gatekeepers enable the EMC SMI-S Provider to manage the Symmetrix array. EMC
recommends that six gatekeeper LUNs be created on the array and masked to the EMC SMI-S
Provider server.
Additional License Required? No
4.2.4.2 CLARiiON Requirements
The following tables list software and configuration requirements for CLARiiON arrays that support
VMM storage automation.
See Also:
 EMC SMI-S Provider Release Notes (latest version)
 Hardware/Platforms Documentation
 CLARiiON Data Sheets:
  http://www.emc.com/collateral/hardware/data-sheet/h5527-emc-clariion-cx4-ds.pdf
  http://www.emc.com/collateral/software/data-sheet/h2306-clariion-rep-snap-ds.pdf
  http://www.emc.com/collateral/hardware/data-sheet/h5521-clariion-cx4-virtual-ds.pdf
Table 26: CLARiiON software and license requirements

Firmware
Requirement: FLARE
Description: A specialized operating environment (OE) designed by EMC for data storage and used
to control components in a CLARiiON array. FLARE manages all input/output (I/O) functions of the
storage array. (Installed with the array)
Version: FLARE 30 (or later)
Additional License Required? No

Management Software
Requirement: EMC Solutions Enabler
Description: Provides the interface between the EMC SMI-S Provider and Symmetrix, CLARiiON,
and VNX arrays
Version: V7.4.0 (or later) (Installed automatically when you install the EMC SMI-S Provider kit)
Additional License Required? No
Table 27: CLARiiON configuration and license requirements

Requirement | Description | Additional License Required?

Firmware
 EMC CLARiiON SnapView | Required for SMI-S snapshots | Yes
 EMC CLARiiON SAN Copy | Required for SMI-S cloning | Yes
 EMC CLARiiON Access Logix (ACLX) | Required for masking/unmasking | Yes
 EMC CLARiiON Virtual Provisioning | Required for thin provisioning | Yes

Management Software
 EMC Unisphere | Web-based interface used to discover, monitor, configure, and control CLARiiON arrays. | No
4.2.4.3 VNX Requirements
The following tables list software and configuration requirements for VNX arrays that support VMM
storage automation.
See Also:
 EMC SMI-S Provider Release Notes (latest version)
 Hardware/Platforms Documentation
 EMC VNX Series Total Efficiency Pack
 VNX Data Sheets:
  http://www.emc.com/collateral/software/data-sheet/h8509-vnx-software-suites-ds.pdf
  http://www.emc.com/collateral/hardware/data-sheets/h8520-vnx-family-ds.pdf
  http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf
Table 28: VNX software and license requirements

Firmware
Requirement: VNX OE
Description: A specialized operating environment (OE) designed by EMC to provide file and block
code for a unified system. VNX OE contains basic features, such as thin provisioning.
Version: V31 (or later)
Additional License Required? Basic: No; Advanced [1]: Yes; Major Update: Yes

Management Software
Requirement: EMC Solutions Enabler
Description: Provides the interface between the EMC SMI-S Provider and Symmetrix, CLARiiON,
and VNX arrays
Version: V7.4.0 (or later) (Installed when you install the EMC SMI-S Provider kit)
Additional License Required? No

[1] For advanced features, you can buy add-ons, such as the Total Efficiency Pack. The FAST Suite feature, for example, is purchased as part of a pack.
Table 29: VNX configuration and license requirements

Requirement | Description | Additional License Required?

Firmware
 EMC VNX SnapView | Required for snapshots | Yes
 EMC VNX SAN Copy | Required for cloning | Yes
 EMC VNX Access Logix (ACLX) | Required for masking/unmasking | Yes
 EMC VNX Virtual Provisioning | Required for thin provisioning | Yes

Management Software
 EMC Unisphere | Web-based interface used to discover, monitor, configure, and control VNX arrays. | No
4.3 Set Up EMC SMI-S Provider for Storage Validation Testing
EMC updated its existing SMI-S provider to provide support for the new storage resource management
capabilities made available by VMM 2012. The updated EMC SMI-S Provider supports the SNIA SMI-S 1.5
standard. This standard makes available a single interface to storage objects — on multiple storage
systems in a private cloud environment — that are discovered by VMM and then allocated by
administrators for use by administrators and end-users. The current version of the EMC SMI-S Provider
is V4.4.0.
EMC SMI-S Provider is hosted by the EMC CIMOM Server (ECOM), which provides an SMI-S-compliant
interface for the EMC Symmetrix, CLARiiON, and VNX families of storage systems.
This section shows you how to set up the SMI-S Provider so that you can test VMM storage capabilities
with one or more EMC storage systems.
Important For the most up-to-date information, see these topics in the latest EMC SMI-S Provider
Release Notes:
 Installation
 Post-installation tasks
4.3.1 EMC SMI-S Provider Software Requirements
To enable EMC SMI-S Provider V4.3.2 (or later) to support VMM storage capabilities, you must install the
software listed in the following table on the SMI-S Provider server.
Table 30: Software to install on the SMI-S Provider server in your test environment

Requirement: Server Operating System
Description: Windows Server 2008 R2 SP1 64-bit

Requirement: EMC SMI-S Provider (Array Provider)
Description: EMC SMI-S Provider 64-bit (version 4.3.2 or later; current version is 4.4.0)
EMC SMI-S Provider uses SMI-S to enable VMM to interact with EMC storage systems.
Note
 EMC recommends installing the operating system for the computer that will host the 64-bit
version of the EMC SMI-S Provider on a multicore server with a minimum of 8 GB of physical
memory.
 EMC SMI-S Provider can be installed on any Windows or Linux platform listed in the EMC SMI-S
Provider Release Notes. EMC performed the tests in this document with an EMC SMI-S Provider
installed on a Windows Server 2008 R2 SP1 64-bit computer.
Array Provider (select only the Array Provider)
VMM requires the installation of the Array Provider component of the SMI-S Provider. The Array
Provider enables VMM (the "client") to retrieve information from the server about, and modify
configuration information for, Symmetrix, CLARiiON, or VNX storage systems.
Note Do not install the Host Provider in your VMM test environment. The VASA Provider is
automatically installed with the Array Provider but is not used by VMM.
Download the EMC SMI-S Provider:
 On Support.EMC.com, click Downloads on the top menu and search for SMI-S Provider; when the
results appear, under Browse Products, select the version number of the SMI-S Provider that you
want to download.

Requirement: EMC CIM Object Manager (ECOM) Server (installed with the EMC SMI-S Provider)
Description: Service that is installed with the SMI-S Provider. The ECOM Server hosts the provider,
creating an SMI-S-compliant interface for EMC Symmetrix, CLARiiON, and VNX arrays.

Requirement: Solutions Enabler (installed with the EMC SMI-S Provider kit)
Description: Solutions Enabler V7.4.0 (or later) provides the interface between the SMI-S Provider
and the Symmetrix, CLARiiON, and VNX arrays.
Note If Solutions Enabler Access Control is enabled on a Symmetrix array, the computer on which
the SMI-S Provider is installed must have sufficient privileges. At minimum, the computer must
belong to a group with access to ALL_DEVS with BASE and VLOGIX privileges.

Requirement: Visual C++ 2008 SP1 (installed with the EMC SMI-S Provider)
Description: Visual C++ 2008 SP1 Redistributable Package with KB973923 applied is required for
Windows environments (this is a Microsoft Visual Studio runtime requirement).
4.3.2 Install and Configure the EMC SMI-S Provider
The following table shows you how to install and configure the EMC SMI-S Array Provider for VMM.
Important For the most up-to-date information, see the latest EMC SMI-S Provider Release Notes.
Table 31: Install and configure the EMC SMI-S Provider for a VMM-based private cloud

Task: Download EMC SMI-S Provider
Action: Download the EMC SMI-S Provider 64-bit V4.4.0 (or later) from the following EMC site:
 On Support.EMC.com, click Downloads on the top menu and search for SMI-S Provider; when the
results appear, under Browse Products, select the version number of the SMI-S Provider that you
want to download.

Task: Start the installation
Action: On a computer running Windows Server 2008 R2 SP1 64-bit, run the following command:
se740-WINDOWS-x64-SMI.exe

Task: Install Solutions Enabler
Action: When prompted, install Solutions Enabler V7.4.0 (or later).

Task: Select install directory
Action: Accept the default installation directory for SMI-S Provider and Solutions Enabler.

Task: Select the Array Provider
Action: On Provider List, select only the Array Provider (this is the default selection).

Task: Start the installation wizard
Action: On Ready to Install the Program, click Install to start the installation wizard.

Task: Complete the installation wizard
Action: After Installation Wizard Completed appears, click Finish.

Task: Update environment variable path (optional)
Action: On the SMI-S Provider server, you can choose to update your environment variable path to
include the Solutions Enabler installation directory and the ECOM directory so that you can run
command-line utilities from any directory.
The default path for the Solutions Enabler installation directory is:
C:\Program Files\EMC\SYMCLI\bin
The default path for the ECOM directory is:
C:\Program Files\EMC\ECIM\ECOM\bin

Task: Update the Windows Firewall settings
Action: To enable inbound communication with the SMI-S Provider server, add the following rules
to Windows Firewall by using these commands:
netsh advfirewall firewall add rule name="SLP-udp" dir=in protocol=UDP localport=427 action=allow
netsh advfirewall firewall add rule name="SLP-tcp" dir=in protocol=TCP localport=427 action=allow
netsh advfirewall firewall add rule name="CIM-XML in" dir=in protocol=TCP localport=5988-5989 action=allow

Task: Increase ECOM external connection limit and HTTPS options
Action:
1. On the SMI-S Provider server, if necessary, stop Ecom.exe (type services.msc to open Services,
click ECOM, and then click Stop).
2. Open Windows Explorer, and then navigate to and open the following XML file:
C:\Program Files\EMC\ECIM\ECOM\Conf\Security_Settings.xml
3. To increase the ECOM external connection limit, change the following settings:
Change the default value for ExternalConnectionLimit (100) to 600:
<ECOMSetting Name="ExternalConnectionLimit" Type="uint32" Value="600"/>
Change the default value for ExternalConnectionLimitPerHost (100) to 600:
<ECOMSetting Name="ExternalConnectionLimitPerHost" Type="uint32" Value="600"/>
4. Change the default value for SSLClientAuthentication to "None":
<ECOMSetting Name="SSLClientAuthentication" Type="string" Value="None"/>
5. Save the Security_Settings.xml file and then restart ECOM.
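If you would rather script the Security_Settings.xml edits above than make them by hand, the element changes can be automated. This is a minimal sketch using Python's standard xml.etree module; the sample XML below stands in for the real file, whose full layout (and default values) may differ:

```python
import xml.etree.ElementTree as ET

# Values recommended above for a VMM-scale deployment.
NEW_SETTINGS = {
    "ExternalConnectionLimit": "600",
    "ExternalConnectionLimitPerHost": "600",
    "SSLClientAuthentication": "None",
}

def update_ecom_settings(xml_text: str) -> str:
    """Return xml_text with the ECOMSetting values above applied."""
    root = ET.fromstring(xml_text)
    for setting in root.iter("ECOMSetting"):
        if setting.get("Name") in NEW_SETTINGS:
            setting.set("Value", NEW_SETTINGS[setting.get("Name")])
    return ET.tostring(root, encoding="unicode")

# Tiny stand-in for Security_Settings.xml (not the real file contents):
sample = """<ECOMSettings>
  <ECOMSetting Name="ExternalConnectionLimit" Type="uint32" Value="100"/>
  <ECOMSetting Name="SSLClientAuthentication" Type="string" Value="Optional"/>
</ECOMSettings>"""
print(update_ecom_settings(sample))
```

As in the manual procedure, stop ECOM before rewriting the file and restart it afterward.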
4.3.3 Configure the EMC SMI-S Provider to Manage EMC Storage Systems
The following table shows you how to configure EMC arrays for the SMI-S Provider and VMM.
Table 32: Configure EMC SMI-S Provider to manage arrays

Task: Before You Start
Action: For optimal performance, EMC recommends that no more than five storage systems
(Symmetrix, CLARiiON, or VNX combined) be managed by any one SMI-S Array Provider.

Task: Add Symmetrix Storage Systems
Action:
1. On the SMI-S Provider server, configure six gatekeepers for each Symmetrix array.
Note For details, search for "EMC Solutions Enabler Symmetrix Array Management CLI Product
Guide" on the EMC website Support.EMC.com.
2. After configuring the gatekeepers, restart ECOM. After ECOM restarts, the SMI-S Provider
automatically discovers all Symmetrix arrays connected to the server on which the provider is
running.
3. On the SMI-S Provider server, open a command prompt and type the following command:
%ProgramFiles%\EMC\ECIM\ECOM\bin\TestSmiProvider.exe
4. Enter the requested connection information. To accept the default values (displayed just left of
the colon), press Enter for each line:
Connection Type (ssl,no_ssl) [no_ssl]:
Host [localhost]:
Port [5988]:
Username [admin]:
Password [#1Password]:
Log output to console [y|n (default y)]:
Log output to file [y|n (default y)]:
Logfile path [Testsmiprovider.log]:
5. To verify your configuration, type the following command:
dv
6. Confirm that the Symmetrix arrays are configured correctly by checking that they are listed in
the output of the dv command.

Task: Add CLARiiON and VNX Storage Systems
Action:
1. On the SMI-S Provider server, open a command prompt and type the following command:
%ProgramFiles%\EMC\ECIM\ECOM\bin\TestSmiProvider.exe
2. Enter the requested connection information. To accept the default values (displayed just left of
the colon), press Enter for each line:
Connection Type (ssl,no_ssl) [no_ssl]:
Host [localhost]:
Port [5988]:
Username [admin]:
Password [#1Password]:
Log output to console [y|n (default y)]:
Log output to file [y|n (default y)]:
Logfile path [Testsmiprovider.log]:
3. After connecting, a menu displays a list of commands, followed by these entries:
Namespace: root/emc
repeat count: 1
(localhost:5988) ?
4. At the prompt, type addsys and then enter the other values shown in this example:
(localhost:5988) ? addsys
Add System {y|n} [n]: y
ArrayType (1=Clar, 2=Symm) [1]: 1
One or more IP address or Hostname or Array ID
Elements for Addresses
IP address or hostname or array id 0 (blank to quit): <YourIPAddress1>
IP address or hostname or array id 1 (blank to quit): <YourIPAddress2>
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above.
(1=URL, 2=IP/Nodename, 3=Array ID)
Address Type (0) [default=2]: 2
Address Type (1) [default=2]: 2
User [null]: <YourGlobalAdminAccountName>
Password [null]: <YourGlobalAdminAccountPwd>
KEY:
<YourIPAddress1> - Management port for SPA /*For CLARiiON or VNX, you must specify*/
<YourIPAddress2> - Management port for SPB /*both addresses to connect to the array*/
<YourGlobalAdminAccountName> - User name to connect to the storage system
<YourGlobalAdminAccountPwd> - Password to connect to the storage system
5. Repeat for each storage system to be managed.
6. To verify your configuration, type the following command:
dv
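TestSmiProvider is interactive, but the addsys answer sequence shown above is fixed enough to generate programmatically, for example when piping input to the tool. The helper below is purely illustrative (it is not an EMC utility) and simply mirrors the CLARiiON/VNX dialog order from Table 32:

```python
def addsys_answers(spa_addr: str, spb_addr: str,
                   user: str, password: str) -> list[str]:
    """Input lines answering TestSmiProvider's addsys dialog for a
    CLARiiON or VNX array, in prompt order (illustrative helper only)."""
    return [
        "addsys",   # request to add a storage system
        "y",        # Add System {y|n} [n]
        "1",        # ArrayType (1=Clar, 2=Symm)
        spa_addr,   # management port for SPA
        spb_addr,   # management port for SPB
        "",         # blank to end the address list
        "2",        # Address Type (0): IP/Nodename
        "2",        # Address Type (1): IP/Nodename
        user,       # global administrator account name
        password,   # global administrator account password
    ]

# Example: join with newlines and feed to the tool's standard input.
print("\n".join(addsys_answers("10.0.0.1", "10.0.0.2", "admin", "secret")))
```

Both SPA and SPB management addresses must be supplied, matching the requirement called out in the KEY above.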
4.3.4 EMC SMI-S Information Resources
To find the most current version of the EMC SMI-S Provider Release Notes:
1. On Support.EMC.com, on the top menu, click Support By Product; in the field labeled Find a
Product, type SMI-S Provider and then click the search icon.
2. In the results pane, click the entry for SMI-S Provider Release Notes (whatever the current version
number is at the time you run this search) to open the release notes document.
See Also:
 EMC Community Network (parent page for the Everything Microsoft at EMC page) at:
https://community.emc.com/index.jspa
 Virtual Machine Manager page (online VMM product team page) at:
http://technet.microsoft.com/en-us/library/gg610610.aspx
 "Appendix F: References" in this document
4.4 Set Up VMM for Storage Validation Testing
VMM 2012 uses the SMI-S standard to provide advanced storage capabilities for storage systems that
support this standard. This section shows you how to set up VMM so that you can test VMM storage
capabilities using the EMC SMI-S Provider and one or more EMC storage systems.
This guide describes a simple installation of VMM that is sufficient for storage validation testing.
See Also:
 For comprehensive installation and configuration instructions for VMM 2012:
http://technet.microsoft.com/en-us/library/gg610610.aspx
 For more about the VMM 2012 private cloud:
Microsoft Virtualization and Private Cloud Solutions at:
http://www.emc.com/platform/microsoft/microsoft-virtualization-private-cloud-solutions.htm
Microsoft Private Cloud at:
http://www.microsoft.com/privatecloud
4.4.1 VMM Prerequisites
This section lists hardware and software requirements for installing the VMM Server in the storage
validation test environment.
4.4.1.1 VMM Server Hardware
Install VMM on a server running Windows Server 2008 R2 SP1 with at least four processor cores. For
large-scale testing, Microsoft recommends installing VMM on a physical server.
See Also:
 "Minimum Hardware Requirements (Servers and Arrays) for Test Environment" in this document
 "Minimum Hardware Requirements Explained" in this document
4.4.1.2 VMM Server Software
The software prerequisites for VMM listed in the following table are required for the test deployment
described in this guide. For a comprehensive list of requirements for installing VMM 2012 in a
production environment, see "System Requirements: VMM Management Server" at:
http://technet.microsoft.com/en-us/library/gg610562.aspx
Table 33: Software required for installing VMM in your test environment

Requirement: Active Directory
Description: One Active Directory domain
You must join the VMM Server, SQL Server (if it is on a separate server from the VMM Server), and
Hyper-V servers (VM host/library server and cluster nodes) to the domain. Optionally, you can join
the EMC SMI-S Provider server to the domain.
Note VMM supports Active Directory with a domain functional level of Windows Server 2003 (or
later) that includes at least one Windows Server 2003 (or later) domain controller.

Requirement: Windows Server
Description: Windows Server 2008 R2 SP1 (full installation)
Edition: Standard, Enterprise, or Datacenter
Service Pack: Service Pack 1 or earlier
Architecture: x64
Processors: At least four processor cores
Domain joined: Yes
iSCSI access: Yes (initiator logged into target) [if applicable]
FC access: Yes (zoned to array) [if applicable]

Requirement: WinRM (installed with Windows Server)
Description: Windows Remote Management (WinRM) 2.0
WinRM 2.0 is included in Windows Server 2008 R2 SP1 and, by default, is set to start automatically
(delayed start).

Requirement: SQL Server
Description: Microsoft SQL Server 2008
Install any version of SQL Server 2008 RTM or later. SQL Server stores the VMM database.
Note
 SQL Express not supported: VMM 2012 does not support Microsoft SQL Server 2008 Express
(which was available in earlier releases of VMM).
 Co-host VMM and SQL Server for test: A dedicated SQL Server is not required for the test
environment described in this document. Therefore, one option is to install SQL Server on the
same server as the VMM Server.
 Separate SQL Server for production: In a full-scale production environment — for example, one
that might contain 400 hosts on which 8,000 VMs are deployed — Microsoft recommends using a
dedicated SQL Server to store the VMM database.

Requirement: WAIK
Description: Windows Automated Installation Kit (Windows AIK, or WAIK) for Windows 7
You can download WAIK from the Microsoft Download site at:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=5753

Requirement: .NET 3.5 SP1
Description: Microsoft .NET Framework 3.5 SP1 (or later)
On a computer running Windows Server 2008 R2 SP1, if the Microsoft .NET Framework 3.5 Service
Pack 1 (SP1) feature is not installed (it is not installed by default), the VMM Setup wizard will install
the feature.
4.4.2 Install VMM
For the steps to install a VMM Server in your preproduction environment, see "Appendix A: Install
VMM." The steps to install VMM are in an appendix because no steps specifically related to storage
automation occur in the VMM Setup wizard.
4.4.3 Configure VMM to Discover and Manage Storage
The steps in this section show you how to set up a test environment that you can use to validate storage
functionality in a VMM private cloud that includes EMC arrays.
4.4.3.1 Add a standalone Hyper-V Server as a VM host to VMM
A standalone Hyper-V host is required in your test environment in order to test VMM 2012 storage
automation functionality with EMC arrays.
Before You Start:
 Hyper-V server: You must have a physical server running Windows Server 2008 R2 SP1 with the
Hyper-V server role installed. Join this server to the same Active Directory domain to which the
VMM Server belongs.
If the Windows server computer that you want to add as a VM host does not already have the
Hyper-V server role installed, make sure that the BIOS on the computer is configured to support
Hyper-V. If the BIOS is enabled to support Hyper-V but the Hyper-V role is not already installed
on the server, VMM automatically adds and enables the Hyper-V role when you add the server.
See Also:
 "Minimum Hardware Requirements (Servers and Arrays) for Test Environment" and
"Minimum Hardware Requirements Explained" earlier in this document
 "Hyper-V Installation Prerequisites" and "System Requirements: Hyper-V Hosts." Note,
however, that this test environment might not need to meet all requirements recommended
for a production environment.
 Run As Account: You must have, or will create, a Run As account with the following
characteristics:
 You must use an Active Directory domain account, and that account must be added to the
local Administrators group on the Hyper-V host that you want to add as a VM host to VMM.
 If you configured your VMM Server to use a domain account when you installed the VMM
Server, do not use the same domain account to add or remove VM hosts.
 Group Policy and WinRM: If you use Group Policy to configure Windows Remote Management
(WinRM) settings, before you add a Hyper-V host to VMM management, see the "Prerequisites"
section of the online help topic "How to Add Trusted Hyper-V Hosts and Host Clusters" for steps
you might need to take, at:
http://technet.microsoft.com/en-us/library/gg610648
Table 34: Add a standalone Hyper-V server as VM host to VMM

Start the Add Resource Wizard [On VMM Server]
On the VMM Console, in the lower-left pane, click Fabric; on the ribbon, click the Home tab; click Add Resources, and then select Hyper-V Hosts and Clusters.

Resource location page
Specify the location of the server that you want to add as a VM host by selecting Windows Server Computers in a trusted Active Directory domain.

Credentials page
Create a new Run As account:
Select Use an existing Run As account and then click Browse.
On Select a Run As Account, click Create Run As Account, and then specify an Active Directory domain account that already has (or will have) local Administrator privileges on the Hyper-V host that you want to add to VMM:
Name:
Description: (optional)
User name:
Password:
Confirm Password:
Accept the default selection for Validate domain credentials, click OK to return to Select a Run As Account, and then click the name of the new Run As account that you just created.
Click OK to return to the Credentials page, and then click Next.

Add new Run As account to host [On the host]
Add the new Run As account to the Hyper-V host (if necessary):
Open Server Manager, expand Configuration, and then expand Local Users and Groups.
Click Groups, double-click Administrators to open the Administrators Properties page, click Add, and then in Enter the object names to select, type:
<DomainName>\<NewRunAsAccountName>
Click Check Names, and then click OK twice.

Discovery scope page [On VMM Server]
On the VMM Console, return to the Discovery scope page in the Add Resources Wizard.
Select Specify Windows Server computers by names, and then, under Computer names, type the name (or part of the name) of the computer that you want to add as a VM host.

Target resources page
Wait until the name of the server you specified on the preceding page appears, and then, under Discovered computers, select the server name.

Host settings page
Specify:
 For Host group, assign the host to a host group by selecting All Hosts or by selecting the name of a specific host group.
 For Add the following path, do one of the following to specify the path to the directory on the host where you want to store VM files that will be deployed on this host:
 To accept the default VM placement path (%SystemDrive%\ProgramData\Microsoft\Windows\Hyper-V), leave this field blank.
 To specify a VM placement path other than the default, type the path, and then click Add. Example: C:\MyVMs
Note Add a path only for a standalone host; for a host cluster, VMM automatically manages the paths that are available for VMs based on the shared storage available to the host cluster.

Summary page
Confirm the settings that you selected in the wizard, and then click Finish.

Jobs dialog box
Confirm that the job to add the host completes successfully, and then close the Jobs dialog box.
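The wizard steps above can also be scripted. The following VMM PowerShell sketch adds a standalone Hyper-V server as a VM host; the host name, host group, and Run As account name are placeholders for your environment, and parameter details may vary slightly between VMM versions.

```powershell
# Sketch only: add a standalone Hyper-V server as a VM host.
# "HV-HOST01", "LDMHostGroup1", and "HyperVHostRunAs" are placeholder names.
$runAs = Get-SCRunAsAccount -Name "HyperVHostRunAs"
$hostGroup = Get-SCVMHostGroup -Name "LDMHostGroup1"

# Add the host; VMM adds and enables the Hyper-V role if it is missing.
Add-SCVMHost "HV-HOST01.contoso.com" -VMHostGroup $hostGroup -Credential $runAs

# Confirm the host was added.
Get-SCVMHost -ComputerName "HV-HOST01" | Select-Object Name, OverallState
```

As in the wizard, the account passed in -Credential must already be a member of the local Administrators group on the target host.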
4.4.3.2 Add an existing Hyper-V host cluster to VMM
A Hyper-V host cluster is required in your test environment in order to test VMM 2012 storage
automation functionality with EMC arrays.
Before You Start:
 Hyper-V servers configured as a host cluster. You must have four servers running Windows Server 2008 R2 SP1 with the Hyper-V server role installed. These servers must belong to the same Active Directory domain as the VMM Server.
These four servers should be the nodes of an existing host cluster. The steps in the following procedure assume that you have an existing Hyper-V host cluster that you want to add to VMM.
See Also:
 "Minimum Hardware Requirements (Servers and Arrays) for Test Environment" and "Minimum Hardware Requirements Explained" earlier in this document
 "Hyper-V: Using Hyper-V and Failover Clustering." Note, however, that the test environment might not need to meet all requirements recommended for a production environment.
 With VMM 2012, it is also possible to create a host cluster as described in "How to Create a Hyper-V Host Cluster in VMM."
Run As Account. You must have, or must create, a Run As account with the following characteristics:
 You must use an Active Directory domain account, and that account must be added to the local Administrators group on each node (each Hyper-V host) in the cluster.
 If you configured your VMM Server to use a domain account when you installed the VMM Server, do not use the same domain account to add or remove host clusters.
Group Policy and WinRM. If you use Group Policy to configure Windows Remote Management (WinRM) settings, before you add a host cluster to VMM management, see the "Prerequisites" section for steps you might need to take in the online help topic "How to Add Trusted Hyper-V Hosts and Host Clusters" at:
http://technet.microsoft.com/en-us/library/gg610648
Table 35: Add an existing Hyper-V host cluster to VMM

Start the Add Resource Wizard
On the VMM Console, in the lower-left pane, click Fabric; on the ribbon, click the Home tab; click Add Resources, and then select Hyper-V Hosts and Clusters.

Resource location page
Specify the location of the cluster that you want to add by selecting Windows Server Computers in a trusted Active Directory domain.

Credentials page
Select Use an existing Run As account and then click Browse.
On the Select a Run As Account dialog box, click the name of the Run As account that you want to use (the one that you created for each of the cluster nodes), click OK to return to the Credentials page, and then click Next.

Discovery scope page
To search for the cluster that you want to add to VMM, select Specify Windows Server computers by names, and then under Computer names type either:
 The NetBIOS name of the cluster. Example: LAMANNA-CLUS01
 The fully qualified domain name (FQDN) of the cluster. Example: LAMANNA-CLUS01.sr5fdom.eng.emc.com
Do not select Skip AD verification (leave the checkbox unchecked).

Target resources page
Under Discovered computers, wait until the Discovered computers pane is populated, and then select the name of the cluster that you specified on the preceding page.
Notice that not only does the FQDN of the cluster appear under Discovered computers, but the FQDN of each cluster node (each VM host) also appears.

Host settings page
Specify:
 For Host group, assign the cluster to a host group by selecting All Hosts or by selecting the name of a specific host group.
Important Notice that the wizard recognizes that you have chosen to add a cluster (rather than a standalone host) and that, therefore, the field Add the following path does not appear on this wizard page (you add a VM path only for a standalone host). For a cluster, VMM automatically manages paths for VMs based on the shared storage available to the cluster.¹

Summary page
Confirm the settings that you specified, and then click Finish.

Jobs dialog box
Confirm that the job to add a cluster completes successfully, and then close the Jobs dialog box.

Open cluster properties
After all jobs complete successfully, on the VMM Console, in the lower-left pane, click Fabric.
In the upper-left pane, expand Servers, expand All Hosts, navigate to the host group to which you added the cluster, right-click the cluster name, and then click Properties.

<ClusterName> Properties page
On the General tab, for Cluster reserve (nodes), specify 0, and then click OK.
Note This setting specifies the number of node failures that a cluster must be able to sustain while still supporting all virtual machines deployed on the host cluster. For more information, see "Configuring Hyper-V Host Cluster Properties."

¹ Here is how VMM handles paths for Hyper-V host clusters:
 For shared storage, VMM uses the Failover Clustering WMI API to list the paths for shared storage; typically C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on.
 For SAN deployments to a cluster, VMM uses a volume GUID path (\\?\{GUID}), so in this case also, the administrator does not need to specify a path.
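The cluster steps above can likewise be scripted. The following VMM PowerShell sketch adds an existing cluster and sets the cluster reserve to 0, as in the wizard; the Run As account name and host group are placeholders, and the cluster FQDN is the example used in this document.

```powershell
# Sketch only: add an existing Hyper-V host cluster to VMM.
# "ClusterNodeRunAs" is a placeholder Run As account name.
$runAs = Get-SCRunAsAccount -Name "ClusterNodeRunAs"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

Add-SCVMHostCluster -Name "LAMANNA-CLUS01.sr5fdom.eng.emc.com" `
    -VMHostGroup $hostGroup -Credential $runAs

# Set the cluster reserve to 0 nodes, matching the wizard steps above.
$cluster = Get-SCVMHostCluster -Name "LAMANNA-CLUS01.sr5fdom.eng.emc.com"
Set-SCVMHostCluster -VMHostCluster $cluster -ClusterReserve 0
```

The account passed in -Credential must be a local Administrator on every cluster node, per the Run As account requirements above.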
4.4.3.3 Add EMC SMI-S Provider to VMM and place storage pools under management
The EMC SMI-S Provider is required in your test environment in order to test VMM 2012 storage
automation functionality with EMC arrays.
Before You Start:
 Confirm that the SMI-S Provider server is installed. You must have already installed the EMC SMI-S Provider on a server as described earlier in the sections "Install and Configure the EMC SMI-S Provider" and "Configure the EMC SMI-S Provider to Manage EMC Storage Systems."
 Ascertain which port to use for the SMI-S Provider server. The default ports for the EMC SMI-S Provider are 5988 (non-SSL) and 5989 (SSL). When adding a provider, VMM assumes you will use Secure Sockets Layer (SSL). Ask your storage administrator which port to use in your environment; a specific security policy might be required, and the provider might have been configured with ports different from the defaults.
 Confirm availability of storage pools. Check with your Storage Administrator to see which storage pools are available for you to add to your VMM private cloud.
Caution Identifying available storage pools is particularly important if you plan to use storage arrays in your test environment from which the Storage Administrator has already allocated some storage pools to the production environment.
 Create a separate ECOM Run As account on the SMI-S Provider server for VMM use. Before you can add the SMI-S Provider to VMM, you must create a Run As account for the SMI-S Provider. This Run As account must be an ECOM administrator account. VMM uses the account when it connects to ECOM and to the provider with Basic Authentication.
EMC recommends that you create a separate account solely for VMM use so that any required security policies can be applied independently of any other IT services in your environment that use the same provider. Consult your security and storage administrators for additional guidance.
To create the ECOM account, use the ECOM Administration Web Server:
A. Open a browser on the SMI-S Provider server that needs the new ECOM account and enter the following URL:
http://localhost:5988/ecomconfig
B. Confirm that the ECOM Administration Login Page appears.
C. Log on to the ECOM Administration Login Page with appropriate credentials.
Option 1: Use the default Administrator credentials:
 Username: admin
 Password: #1Password
Option 2: Use another set of credentials with Administrator rights.
D. When the ECOM Administration page opens, click Add User.
E. When the ECOM Security Admin Add User page opens, create an ECOM account that you will use solely for VMM operations:
 User Name: <YourNameForEcomUserAccountForVMM>
 Password: <YourPassword>
 Role: administrator
 Scope: Local
 Password never expires: false or true (depending on your organization's security policies; if you select false, the password expires every 90 days)
F. Click Add User to create the new ECOM account.
G. Click Back to return to the ECOM configuration page.
H. On the main page, click Logout.
I. Be sure to use this account (in the following steps) when you use the VMM Console to create a Run As account for this provider.
Use the steps in the following table to add the EMC SMI-S Provider to the VMM private cloud and to
bring EMC storage pools under VMM management.
Table 36: Add EMC SMI-S Provider to VMM and place storage pools under management

Review existing providers
On the VMM Console, in the lower-left pane, click Fabric; in the upper-left pane, expand Storage; click Providers, and then review the list of existing SMI-S Providers (if any).

Start Add Storage Devices Wizard
On the VMM Console, in the lower-left pane, click Fabric; on the ribbon, click the Home tab, click Add Resources, and then select Storage Devices.
Specify Discovery Scope page
Specify:
 For IP address or FQDN and port, type one of the following:
<Provider IP Address>:5988
<Provider FQDN>:5988
<Provider IP Address>:5989
<Provider FQDN>:5989
Note
 5988 is the default non-secure port.
 5989 is the secure port (use 5989 only if the provider uses SSL).
 For Use Secure Sockets Layer (SSL) connection, select or clear the checkbox based on the port in the path you received from the storage administrator:
 Select SSL if the port is an SSL port (this is the default for VMM).
 Clear SSL if the port is not an SSL port (this is the default for the EMC SMI-S Provider).
 For Run As account, click Browse, select a Run As account (this must be a Run As account that you created earlier on this SMI-S Provider server by using the ECOM Administration Web Server), and then click Next.

Gather Information page
Wait for discovery to finish importing storage device information, confirm that the discovered storage arrays appear, and then click Next.
Select Storage Pools page
Select one or more storage pools that you want to place under management.
Caution You might see storage pools that the Storage Administrator has assigned to other IT administrators. Make sure that you know which storage pools on this array you can place under VMM management in your test environment.
Click Create classification.

New Classification dialog box
Specify a name and (optionally) a description for the storage classification you want to create. For example, depending on the quality of the storage pool, you might specify:
Name: Gold; Description: High performance storage pool
Name: Silver; Description: Good performance storage pool
Name: Bronze; Description: Moderate performance storage pool
Click Add to add the new classification and to return to the Select Storage Pools page.

Select Storage Pools page
If appropriate, select additional storage pools to place under VMM management, and either create a new storage classification or select an existing classification from the drop-down list.
When you have added and classified all of the storage pools you want, click Next.
Summary page
Confirm the settings that you specified, and then click Finish.

Jobs dialog box
Wait until the jobs to add the SMI-S Provider server and to discover and import storage information complete successfully, and then close the Jobs dialog box.

View Storage Pools
On the VMM Console, in the lower-left pane, click Fabric; in the upper-left pane, expand Storage, and then click Classification and Pools.
In the main pane, confirm that you can see the storage pool (or pools) that you brought under VMM management when you added the EMC SMI-S Provider to VMM.
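For reference, the provider can also be added and a pool classified from VMM PowerShell. This is a sketch only: the provider FQDN, Run As account name, and pool name are placeholders, port 5988 is the EMC SMI-S Provider's default non-SSL port, and parameter details may differ between VMM versions.

```powershell
# Sketch only: add the EMC SMI-S Provider and classify a storage pool.
# "SMISPROV01.contoso.com" and "EcomVmmRunAs" are placeholder names.
$runAs = Get-SCRunAsAccount -Name "EcomVmmRunAs"

# 5988 = default non-SSL ECOM port (see the port discussion above).
Add-SCStorageProvider -Name "EMC SMI-S Provider" -RunAsAccount $runAs `
    -ComputerName "SMISPROV01.contoso.com" -TCPPort 5988

# Create a classification and assign it to a discovered pool.
$gold = New-SCStorageClassification -Name "Gold" -Description "High performance storage pool"
$pool = Get-SCStoragePool -Name "SMI-Thin"   # placeholder pool name
Set-SCStoragePool -StoragePool $pool -StorageClassification $gold
```

The Run As account supplied here must be the ECOM administrator account created earlier on the provider, since VMM uses it for Basic Authentication against ECOM.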
4.4.3.4 Configure arrays for VM rapid provisioning (select snapshots or clones)
Enable the array or arrays (that you brought under VMM management when you added the
SMI-S Provider) for VM rapid provisioning.
Before You Start:
 Prepopulate the Reserved LUN Pool. Ask your storage administrator to prepopulate your Reserved LUN Pool with sufficient capacity to support rapid provisioning with snapshots. The pool should contain a sufficient number of LUNs, of the appropriate size, to handle the load in your environment.
 Configure snapshots or clones on CLARiiON or VNX arrays. See:
 "EMC CLARiiON Reserved LUN Pool Configuration Considerations: Best Practices Planning" (September 2010) at: http://www.emc.com/collateral/hardware/white-papers/h1585clariion-resvd-lun-wp-ldf.pdf
 EMC Unisphere online help
 Configure snapshots or clones on Symmetrix arrays. See:
 "Appendix D: Configure Symmetrix TimeFinder for Rapid VM Provisioning" in this document
 "EMC Symmetrix TimeFinder Product Guide" at: https://support.emc.com/docu31118_Symmetrix-TimeFinder-ProductGuide.pdf?language=en_US
Table 37: Select snapshots or clones on an array

Display arrays
On the VMM Console, in the lower-left pane, click Fabric; in the upper-left pane, expand Storage; and then click Arrays to display the arrays under VMM management in the main pane.

Display array properties
In the main pane, right-click an array, and then click Properties.

Specify snapshots or clones
Click the Settings tab to display the Storage array settings page, select the method (snapshots or clones) that you want to use for VM rapid provisioning, and then click OK.
The choice you make depends on the capabilities of the array:
 Use snapshots. If the array supports creating snapshots at scale, select snapshots.
 Clone logical units. If the snapshot technology for this array is not designed or optimized for application data, select clones.
Note The default value depends on the capabilities that the array returns to VMM. These capabilities depend on the array, in the case of Symmetrix, or on the software packages installed on a CLARiiON or VNX. If the array supports both snapshots and clones, the VMM default is snapshots.
Repeat for each array
Be sure to specify clones or snapshots for each array that you bring under VMM management.
4.4.3.5 Specify the default behavior for creating storage groups for a Hyper-V host cluster
By default, VMM sets the value for CreateStorageGroupsPerCluster (a property on a storage array object)
to False, which means that VMM creates storage groups per node for a Hyper-V host cluster and adds
host initiators to storage groups by node (not by cluster). Storage groups are also called masking sets.
For some storage arrays, if the provider does not scale for unmasking storage volumes to a cluster, it is
preferable to specify that VMM manage storage groups for the entire cluster. In this case, VMM adds
host initiators for all cluster nodes (as a set) to a single storage group.
Table 38: Change the default, on an array, for how VMM creates storage groups for a cluster

Open VMM PowerShell
On the VMM Console, in the ribbon, click PowerShell to open the Windows PowerShell – Virtual Machine Manager command shell.

Display array information
Storage groups on an array are discovered by VMM but are not displayed in the VMM Console. To display storage groups, and other information about the arrays in your test environment, type:
Get-SCStorageArray -All | Select-Object Name, Model, ObjectType, StorageGroups, LogicalUnitCopyMethod, CreateStorageGroupsPerCluster | fl
Confirm that output similar to the following displays:
Name                          : APM00101000787
Model                         : Rack Mounted CX4_240
ObjectType                    : StorageArray
StorageGroups                 : {Storage Group}¹
LogicalUnitCopyMethod         : Snapshot
CreateStorageGroupsPerCluster : False

Name                          : 000194900376
Model                         : VMAX-1SE
ObjectType                    : StorageArray
StorageGroups                 : {ACLX View, ACLX View, ACLX View, ACLX View}¹
LogicalUnitCopyMethod         : Snapshot
CreateStorageGroupsPerCluster : False

Name                          : APM00111102546
Model                         : Rack Mounted VNX5100
ObjectType                    : StorageArray
StorageGroups                 : {Storage Group}¹
LogicalUnitCopyMethod         : Snapshot
CreateStorageGroupsPerCluster : False

Tip To view specific properties of StorageGroups for a specific array, type:
$Arrays = Get-SCStorageArray -All
$Arrays[0].StorageGroups | Select-Object ObjectType, Name, Description | fl
Specify, by array, how VMM creates storage groups for a cluster
To change the default value for CreateStorageGroupsPerCluster, type:
$Array = Get-SCStorageArray -Name "YourArrayName"
Set-SCStorageArray -StorageArray $Array -CreateStorageGroupsPerCluster $True

¹ For the StorageGroups property, two possible values correspond to objects in the list returned for the Name property:
{Storage Group} – the SPC type for a CLARiiON or VNX array (SPC stands for SCSI Protocol Controller)
{ACLX View} – the SPC type for a Symmetrix array (ACLX stands for Access Logix)
4.4.4 Create SAN-Copy-Capable Templates for Testing VM Rapid Provisioning
This section provides step-by-step instructions for creating two SAN-copy-capable (SCC) templates for
VM rapid provisioning.
4.4.4.1 Create local shares, add shares as Library Shares, designate a VM Host as a Library Server
Adding the VMM Library Server role to a Hyper-V server already configured as a standalone VM host is required in your test environment if you want to fully test all VMM 2012 storage automation functionality with EMC arrays. Using the same Hyper-V server as both a VM host and a Library Server lets you unmask and mask LUNs to that server, because the folder mount path that you specify on the VM host (in the test steps described in this document) is a path that is managed by the Library Server.
Note If, in your production environment, you prefer not to co-host a VM host and a Library Server on
the same physical server, VMM also supports adding each role to different servers. In this case,
however, you would have to do all unmasking and masking for the Library Server outside of VMM (you
would not be able to use the VMM Console or VMM PowerShell).
Before You Start:
 VM Host: You need an existing server running Windows Server 2008 R2 with the Hyper-V role installed that belongs to the same Active Directory domain as the VMM Server. This server must already have been added to VMM as a VM host. In this example test environment, this server is the VM host that you added earlier in the section "Add a standalone Hyper-V Server as a VM host."
This Library Server must be on a server that is also a VM host so that you can use VMM to assign a logical unit to the server. VMM assigns logical units to the VM host component (it cannot assign logical units to Library Servers).
See Also:
 "Minimum Hardware Requirements (Servers and Arrays) for Test Environment" and "Minimum Hardware Requirements Explained" earlier in this document
 "System Requirements: VMM Library Server" (at: http://technet.microsoft.com/en-us/library/gg610631). Note, however, that this test environment might not need to meet all requirements recommended for a production environment.
 Run As Account: When you add a library server to VMM, you must provide credentials for a domain account that has administrative rights on the computer that you want to add. In this procedure, you can use the same Run As account that you used earlier to add the VM host.
 Firewall: When you add a library server to VMM, the firewall on the server that you want to add must allow File and Print Sharing (SMB) traffic so that VMM can display available shares.
 Windows shared folders become VMM library shares. To add resources to a library share, an administrator typically needs to access the share through Windows Explorer.
Table 39: Create local shares, add shares as Library Shares, and add a VM host as a Library Server

Create folders on the VM host [On the host]
On the VM host that you want to add to VMM as a library server, open Windows Explorer and create the following parent folder:
C:\Library
Create child folders. For example, create:
HATemplateShare
SATemplateShare

Open <Folder> Properties
In Windows Explorer, right-click the Library parent folder that you just created, and then click Properties.

Share the parent folder
On the Properties page, click the Sharing tab; click Advanced Sharing; and then, on the Advanced Sharing dialog box, select Share this folder.

Open Add Library Server wizard [On VMM Server]
On the VMM Console, in the lower-left pane, click Library; in the upper-left pane, right-click Library Servers, and then click Add Library Server.

Credentials page
Under Use an existing Run As account, click Browse, and then select a Run As account with permissions on the VM host that you will now add as a Library Server.
Click OK to return to the Credentials page, and then click Next.

Select Library Servers page
Specify the following:
 For Domain, confirm that the domain field is prepopulated.
 For Computer name, type the name of the VM host that you want to add as a Library Server, click Add, and confirm that the name now appears in the Selected servers pane.
 Under Selected servers, click the computer name you added, and then click Next.

Add Library Shares page
Under Select library shares to add, click the name of the local share called Library that you created earlier on the host, and then click Next.

Summary page
Confirm that the name of the server you want to add as a Library Server appears under Confirm the Settings, and then click Add Library Servers.
Jobs dialog box
Confirm that the job to add the Library Server completes successfully, and then close the Jobs dialog box.
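The Library Server steps above have a PowerShell equivalent. This sketch uses placeholder names for the host and Run As account; the share itself is still created on the host (as in the first rows of the table).

```powershell
# Sketch only: add the VM host as a Library Server and confirm its shares.
# "HV-HOST01.contoso.com" and "HyperVHostRunAs" are placeholder names.
$runAs = Get-SCRunAsAccount -Name "HyperVHostRunAs"
Add-SCLibraryServer -ComputerName "HV-HOST01.contoso.com" -Credential $runAs

# List the shares that VMM now manages on the new Library Server.
Get-SCLibraryShare | Select-Object Name, Path, LibraryServer
```

As noted in the prerequisites, the firewall on the target server must allow SMB traffic or VMM cannot enumerate the shares.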
4.4.4.2 Allocate a storage pool to a host group
In VMM, you allocate a storage pool on an array to a VMM host group. This action makes that storage
pool available for use by Hyper-V VM hosts (or by Hyper-V host clusters) in the VMM host group. In
VMM, the storage available to a host or cluster from a storage pool is used only for VM workloads.
Table 40: Allocate a storage pool to the host group to which the VM host/Library Server belongs

Open host group Properties
On the VMM Console, in the lower-left pane, click Fabric; in the upper-left pane, expand Servers; right-click the host group where the VM host/Library Server resides (in this example, the host group name is LDMHostGroup1); and then click Properties.

<HostGroup> Properties page
Click the Storage tab; click Allocate Storage Pools; on the Allocate Storage Pools dialog box, select a storage pool, and click Add; click OK to return to the Storage tab; and then click OK.

Jobs dialog box
Confirm that the job to allocate a specific storage pool to the VMM host group (in which the VM host/Library Server computer resides) completes successfully, and then close the Jobs dialog box.
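The same allocation can be sketched in VMM PowerShell. The pool and host group names below are the examples used in this document.

```powershell
# Sketch only: allocate a storage pool to a host group so that hosts and
# clusters in that group can consume capacity from it for VM workloads.
$pool = Get-SCStoragePool -Name "SMI-Thin"
$hostGroup = Get-SCVMHostGroup -Name "LDMHostGroup1"
Set-SCStoragePool -StoragePool $pool -AddVMHostGroup $hostGroup
```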
4.4.4.3 Create and mount a storage logical unit on the VM host/Library Server for HA Template
To run the validation tests described later, you must create a logical unit and mount it on the standalone
VM host that is also a Library Server. In a later step ("Create an HA VM Template for Testing Rapid
Deployment to a Host Cluster"), you use this logical unit to create the HA Template that you will use to
test rapid provisioning of VMs to a Hyper-V host cluster.
Tip VHD files used to support rapid provisioning of VMs are contained within LUNs on the arrays but
are mounted to folders on the VMM library server.
Before You Start:
 Confirm VM host connectivity to the array: Be sure that you have configured the VM host correctly to access the storage array. This configuration varies by array.
Optionally, when configuring the connection from the host to the array, you can add the Microsoft Multipath I/O (MPIO) feature to the host to improve host access to an FC or iSCSI array. MPIO supports multiple data paths to storage and, in some cases, can increase throughput by using multiple paths simultaneously. For more information, see "How to Configure Storage on a Hyper-V Host" and "Support for Multipath I/O (MPIO)."
 SAN Type:
 FC SAN: If you use an FC SAN, the VM host must have a host bus adapter (HBA) installed and must be zoned so that the host can access the array.
 iSCSI SAN: If you use an iSCSI SAN, the VM host must have the Microsoft iSCSI Initiator Service started and set to Automatic startup.
Note You can use the following VMM PowerShell command to determine whether this VM host is connected to an FC SAN and/or to an iSCSI SAN:
Get-SCVMHost <YourVMHostName> | Select-Object Name, ObjectType, FibreChannelSANStatus, ISCSISANStatus | Format-List
Example output:
Name                  : <YourVMHostName>.<YourDomainName>.com
ObjectType            : VMHost
FibreChannelSANStatus : Success (0)
ISCSISANStatus        : Success (0)
Table 41: Create and mount a LUN (HATemplateLU1) for HATemplate on the VM host/Library Server

Open VM Host Properties
On the VMM Console, in the lower-left pane, click Fabric.
In the upper-left pane, expand Servers, expand the host group where the VM host/Library Server resides (in this example, the host group name is LDMHostGroup1), right-click the VM host, and then click Properties.

Storage tab: Disk: Add
Click the Storage tab; in the upper-left menu, click the Add option (Disk: Add) to open the screen that lets you create a logical unit.
Do not click OK (remain on the Storage tab).

Open Create Logical Unit dialog box
On the Storage tab, click Create logical unit to open the Create Logical Unit dialog box, and then specify:
Storage Pool: SMI-Thin [select one of your storage pools under VMM management]
Name: HATemplateLU1
Size: 25
Click OK to return to the Storage tab. Wait until this step completes, and then remain on the Storage tab for the next step.
Storage tab: Specify a mount point
On the Storage tab, confirm that HATemplateLU1 now appears in the Logical unit field, and then, under Mount point, select Mount in the following empty NTFS folder.

Open Select Destination Folder
Click Browse to open the Select Destination Folder dialog box; under the server name, expand the C:\ drive, expand C:\Library, and then click the folder HATemplateShare.
Click OK to return to the Storage tab, and then click OK to close the VM host Properties page.

Jobs dialog box
Confirm that the job to create HATemplateLU1 completes successfully, and then close the Jobs dialog box.
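The create-and-assign portion of the steps above can be sketched in VMM PowerShell. The pool name is the example from this document and the host name is a placeholder; the mount of the new disk in the empty NTFS folder C:\Library\HATemplateShare is performed by the console's Mount point step.

```powershell
# Sketch only: create a 25 GB logical unit from an allocated pool and
# unmask it to the VM host/Library Server.
$pool = Get-SCStoragePool -Name "SMI-Thin"
$lun = New-SCStorageLogicalUnit -StoragePool $pool -Name "HATemplateLU1" -DiskSizeMB 25600

# "HV-HOST01" is a placeholder for the VM host/Library Server name.
$vmHost = Get-SCVMHost -ComputerName "HV-HOST01"
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHost $vmHost
```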
4.4.4.4 Create and mount a storage logical unit on the VM host/Library Server for SA Template
To run the validation tests described later, you must create a second logical unit and mount it on the
standalone VM host that is also a Library Server. In a later step ("Create a VM Template for Testing Rapid
Deployment to a Standalone Host"), you use this logical unit to create the SA Template that you will use
to test rapid provisioning of VMs to a standalone Hyper-V host.
The following procedure omits screenshots because these steps are identical to those in the preceding
procedure, except for the folder and logical unit names used.
Table 42: Create and mount a LUN (SATemplateLU1) for SATemplate on the VM host/Library Server
Task or
Wizard Page
Action
Open VM Host
Properties
On the VMM Console, in the lower-left pane, click Fabric.
In the upper-left pane, expand Servers; expand the host group where the VM
host/Library Server is stored (in this example, the host group name is
LDMHostGroup1); right-click the VM host, and then click Properties.
Storage tab:
Disk: Add
Click the Storage tab; in the upper-left menu, click the Add option (Disk: Add) to
open the screen that lets you create a logical unit.
Open Create
Logical Unit
dialog box
On the Storage tab, click Create logical unit to open the Create Logical Unit dialog
box, and then specify:
Storage Pool: SMI-Thin [select one of your storage pools that is under VMM management]
Name: SATemplateLU1
Size: 25
Click OK to return to the Storage tab. Wait until this step completes, and then
remain on the Storage tab for the next step.
Storage tab:
Specify a mount
point
On the Storage tab, confirm that SATemplateLU1 now appears in the Logical unit
field, and then, under Mount point, select Mount in the following empty NTFS
folder.
Open Select
Destination
Folder
Click Browse to open the Select Destination Folder dialog box; under the server
name, expand the C:\ drive, expand C:\Library, and then click the folder
SATemplateShare.
Click OK to return to the Storage tab, and then click OK to close the VM host
Properties page.
Jobs dialog box
Confirm that the job to create SATemplateLU1 completes successfully, and then
close the Jobs dialog box.
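For reference, the same create-and-register operation can also be driven from the VMM PowerShell interface instead of the Properties dialog. The following is a sketch only: the cmdlet names and parameters are from the VMM 2012 cmdlet set and should be verified with Get-Help before use, the host name is a placeholder, and the size assumes the dialog's value of 25 is gigabytes.

```powershell
# Sketch: create a 25 GB LUN in a VMM-managed pool and register it to a host.
# Verify cmdlet names and parameters against your VMM 2012 installation (Get-Help).
$pool   = Get-SCStoragePool -Name "SMI-Thin"                # pool used in the dialog above
$lun    = New-SCStorageLogicalUnit -StoragePool $pool `
            -Name "SATemplateLU1" -DiskSizeMB 25600         # 25 GB, assuming the dialog size is in GB
$vmHost = Get-SCVMHost -ComputerName "LIBHOST01"            # placeholder host name
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHost $vmHost
# Mounting the LUN in an empty NTFS folder is then done on the Storage tab as described above.
```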
4.4.4.5 Copy a "Dummy" operating system VHD to local shared folders and import into VMM
You must copy a VHD into the HATemplateShare and SATemplateShare folders on the host. You will use
this VHD later to create the actual VM templates called (in this example test environment) HATemplate
and SATemplate, respectively.
Before You Start:
 Windows OS VHD: For this procedure, you could use a VHD that contains an operating system, but
you do not need an operating system to test storage automation. Typically, administrators use a
"dummy" VHD (an empty VHD) for this procedure. In this example procedure, the VHD is named
DummyWin2k8r2.vhd so that it is clear that the VHD does not really contain an operating system.
Table 43: Copy a "dummy" OS VHD to both template folders; import the VHDs into VMM
Task or
Wizard Page
Action
[On the Host]
Copy OS VHD to Library Template folders
On the standalone VM host/Library Server, open Windows Explorer, navigate to the
location where the OS VHD (or "dummy" OS VHD) is stored, and then copy that VHD
to both of the following folders:
 C:\Library\HATemplateShare
 C:\Library\SATemplateShare
[On VMM
server]
Open Add
Library Shares
On the VMM Console, in the lower-left pane, click Library; in the upper-left pane,
expand Library Servers; click <ServerName> (of the server on which you created the
folder C:\Library); above the ribbon, click the Library Server tab, and then click Add
Library Shares.
This step opens the Add Library Shares page.
Add Library
Share to VMM
On Add Library Shares, select C:\Library, click Next, and then on the Summary
page, click Add Library Shares.
Jobs dialog box
Confirm that the job to import the library share completes successfully, and then
close the Jobs dialog box.
Confirm that the
"dummy" VHD
now appears in
VMM Library
On the VMM Console, in the lower-left pane, click Library; in the upper-left pane,
expand Library Servers; expand <ServerName> (on which you created the folder
C:\Library); expand Library, and then expand HATemplateShare.
Confirm that DummyWin2k8r2.vhd appears with SAN Copy Capable set to Yes.
Note If you had copied the VHD file into a Windows folder that is an existing VMM
Library Share, you would not need to add a library share in this procedure. Instead,
you would right-click <LibraryShareName>, and then click Import.
4.4.4.6 Create an HA VM template to test rapid deployment to a host cluster
You are now ready to create a VM template — called HATemplate in this test environment — that you
can use to deploy VMs to a Hyper-V host cluster.
Table 44: Create an HA VM template to use to rapidly deploy VMs to a Hyper-V host cluster
Task or
Wizard Page
Action
Start Create VM Template Wizard
On the VMM Console, in the lower-left pane, click Library; on the ribbon, click the
Home tab, and then click Create VM Template to launch the Create VM Template
Wizard.
VM Template
Source dialog
box
Select Use an existing VM template or a virtual hard disk stored in the library.
Click Browse to open the dialog box Select VM Template Source, and then select
the VHD (DummyWin2k8r2.vhd) in the HATemplateShare folder.
Caution Do not select DummyWin2k8r2.vhd in the SATemplateShare folder.
Click OK to return to the Select Source page, and then click Next.
VM Template
Identity page
On VM Template Identity, for VM Template name, type HATemplate, and then
click Next.
Configure
Hardware page
On Configure Hardware, in the center pane under Advanced, click Availability; in
the main pane, select Make this virtual machine highly available, and then click
Next.
Note Selecting this option when you create HATemplate is the only step that
differs from the steps to create SATemplate (in the next procedure).
Tip If you do not see the Availability option, in the center pane, collapse
Compatibility, General, Bus Configuration, and Network Adapter, and then expand
Advanced.
Configure
Operating
System page
On Configure Operating System, in the drop-down list labeled Guest OS profile,
select [None – customization not required], and then click Next.
Note
 Typically, it is rare that you would choose not to install and customize an
operating system when creating a VM template to use to create and deploy new
VMs.
 However, because HATemplate and SATemplate use "dummy" VHDs (to save
time when testing storage automation in the test environment) this option is
appropriate.
 If you do choose [None – customization not required], the new VM template
wizard skips the Configure Application and Configure SQL Server pages in the
wizard and moves directly to the Summary page.
Summary page
Confirm that the only setting specified is that the VM Template is HATemplate, and
then click Create.
Jobs dialog box
Confirm that the job to create HATemplate completes successfully, and then close
the Jobs dialog box.
4.4.4.7 Create an SA VM template to test rapid deployment to a standalone host
You are now ready to create a VM template — called SATemplate in this test environment — that you
can use to deploy VMs to a standalone Hyper-V host.
The following procedure omits screenshots because these steps are identical to those in the preceding
procedure, except for the template name (in this case, you use SATemplate, not HATemplate) and the
omission (in this case) of the highly available option.
Table 45: Create an SA VM Template to use to rapidly deploy VMs to an individual Hyper-V VM host
Task or
Wizard Page
Action
Start Create VM
Template Wizard
On the VMM Console, in the lower-left pane, click Library; on the ribbon, click the
Home tab, and then click Create VM Template to launch the Create VM Template
Wizard.
VM Template
Source dialog
box
Select Use an existing VM template or a virtual hard disk stored in the library.
Click Browse to open the dialog box Select VM Template Source and select the VHD
(DummyWin2k8r2.vhd) in the SATemplateShare folder.
Caution Do not select DummyWin2k8r2.vhd in the HATemplateShare folder.
Click OK to return to the Select Source page, and then click Next.
VM Template
Identity page
On VM Template Identity, for VM Template name, type SATemplate, and then click
Next.
Configure Hardware page
On Configure Hardware, no customization is needed, so click Next.
Caution Do not select the option Make this virtual machine highly available. This
is the only step where creating SATemplate differs from creating HATemplate.
Configure Operating System page
On Configure Operating System, in the drop-down list Guest OS profile, select
[None – customization not required], and then click Next.
Summary page
Confirm that the only setting specified is that the VM Template is SATemplate (not
HATemplate), and then click Create.
Jobs dialog box
Confirm that the job to create SATemplate completes successfully, and then close
the Jobs dialog box.
4.4.4.8 View the two new SCC templates in the VMM Library
You have now created the two SAN-copy-capable (SCC) VM templates that you will use to validate
storage automation in your test environment.
This procedure shows you how to verify that both templates exist and are available for use in the VMM
Library.
Table 46: Confirm that the two SCC templates you just created are in the VMM Library
Task or
Wizard Page
Action
View VMM Library Templates
On the VMM Console, in the lower-left pane, click Library; in the upper-left pane,
expand Templates; click VM Templates, and then confirm that you see both
templates in the main pane.
View
HATemplate
Properties
In the main pane, right-click HATemplate, click Properties, click the Hardware
Configuration tab; under Advanced in the center pane, click Availability to confirm
that this template can be used to create a highly available VM.
View
SATemplate
Properties
In the main pane, right-click SATemplate, click Properties, click the Hardware
Configuration tab; under Advanced in the center pane, click Availability to confirm
that this template can be used to create a VM with normal availability.
Before You Start Automated Testing (optional)
Any VM template stored in the VMM Library is reusable. Therefore, if you wish, you
can experiment with non-automated (non-scripted) VM provisioning by using either
or both of the VM templates that you just created.
Right-click one of the templates, select Create Virtual Machine, and then complete
the Create Virtual Machine Wizard.
For an alternative entry point to the Create Virtual Machine Wizard, see the steps
to deploy a VM manually in the online help topic "How to Deploy a New Virtual
Machine from the SAN-Copy-Capable Template."
5 Validate Storage Automation in Your Test Environment
Microsoft partners — including EMC — who develop an SMI-S Provider and storage systems that
support VMM 2012 use a Windows PowerShell validation script developed by the Microsoft VMM
product team to perform comprehensive validation testing. The tests provided by the script validate
VMM storage automation functionality with vendor SMI-S Providers and storage arrays. Each storage
vendor that performs this testing then publishes vendor-specific support in a document similar to this
one.
EMC and Microsoft co-authored this document (with a structure defined by Microsoft and common to
all vendors who perform similar testing) to capture:
 The configuration of VMM, the EMC SMI-S Provider, and managed EMC arrays
 Best practices; software and hardware configurations required to enable specific storage
features; and limitations and known issues that emerged from the development and testing
process
Customers can use this document as a guide to deploy a configuration in their lab or data center similar
to the EMC preproduction environment described in this document. Setting up a similar test
environment enables customers to benefit directly from the storage automation validation testing
performed by EMC and described later in this section.
Customers can take EMC's testing one step further: after setting up a test environment like the one
EMC used, you can then run the same VMM storage validation script that EMC (and other vendors) use.
In addition, the validation configuration results can be useful to customers later as a reference even
after deploying one or more VMM private clouds into the production environment. You can run the
VMM validation script again to confirm configuration changes and, if necessary, to gain information
useful for troubleshooting.
5.1 Set Up the Microsoft VMM Storage Automation Validation Script
The Microsoft VMM product team developed the Windows PowerShell-based validation script — called
the VMM Storage Automation Validation Script — that EMC used to test multiple scenario-based and
functionality-based test cases.
The purpose of the validation script is to validate that the EMC SMI-S Provider and each supported EMC
array meet VMM’s defined functionality and scale requirements.
5.1.1 Download the Microsoft VMM Validation Script
You can download the VMM validation script and store it on the VMM Server as follows.
Table 47: Download the storage validation script
Task
Action
Download the
VMM validation
Script
Download the VMM Storage Automation Validation Script from the appropriate
Microsoft site.
Notes
 The script that EMC used to validate the preproduction environment described
in this paper is named StorageAutomationScript.ps1.
 Check to see whether a later version of the script (for VMM 2012 SP1) has been
released.
Copy Script to
VMM Server
On the VMM Server, open Windows Explorer and create the following folder:
C:\Toolbox\VMMValidationScript
Unzip the contents of the downloaded validation script to:
C:\Toolbox\VMMValidationScript
5.1.2 Use a Script Editor that Supports Breakpoints
When you choose a Windows PowerShell script editor, use one that lets you insert breakpoints.
Breakpoints are useful both for learning how the test cases are structured and for debugging them.
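For example, with the built-in PowerShell debugger you can break whenever a particular test function runs. This sketch assumes the script's test cases are exposed as functions named as in Table 51 and that the script can be dot-sourced; adjust it to match the script version you downloaded.

```powershell
Set-Location C:\Toolbox\VMMValidationScript
# Break each time the single-LUN create/delete test function is invoked
Set-PSBreakpoint -Command "Test102_CreateDeleteOneLun"
# Dot-source the script so its test functions load into the current session
. .\StorageAutomationScript.ps1
Test102_CreateDeleteOneLun -LunSizeinMB 10240
```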
5.1.3 Script Configuration Input — StorageConfig.xml
When the VMM Storage Automation Validation Script starts, it reads the file StorageConfig.xml to obtain
configuration information. The contents of this XML file are defined in the following table. A sample
EMC StorageConfig.xml file follows the table.
Table 48: Contents of StorageConfig.xml input file read by the VMM validation script

VmmServer: Name of the server on which the VMM Management Server is installed
ProviderName: Name of the provider used when you add it to the VMM Management Server: ServerName:Port
UserName: Name of the ECOM user account used to add the provider to the VMM Management Server
Password: Password of the ECOM user account used to add the provider to the VMM Management Server
NetName: URL for the provider computer to which to connect: http://ServerName
Port: Port on which the provider listens to a client (such as VMM)
PoolName: Name of a storage pool that is managed by VMM
ArrayName: Name of the array from which the storage pool should be selected; typically, this is the serial number of the array. (Required only if the provider manages multiple arrays and two or more have duplicate names for storage pools; otherwise, this tag is optional.)
ClassificationName: Any name to be used for classifying types of storage. This name must agree with the pool specified (see the sample StorageConfig.xml file below this table).
HostName1: Name of the standalone VM host against which validation tests will be run
ClusterName1: Name of the Hyper-V host cluster against which validation tests will be run
ClusterNodes: A list that contains the name of each node in the specified cluster
Node: Name of a node in the cluster (add a Node entry for each node in the cluster)
LunDescPrefix: Prefix to be used for all LUNs that are created by the validation test; this prefix will facilitate clean-up in case tests fail to complete
ParallelLunsCount: Number of LUNs created in parallel (simultaneously); this value can be overwritten in the test function
ParallelSnapshotCount: Number of parallel operations for creating snapshots; this value can be overwritten in the test function
ParallelCloneCount: Number of parallel operations for creating clones; this value can be overwritten in the test function
VmNamePrefix: Prefix to be used for new VMs that are created
ServiceNamePrefix: Prefix to be used for new services that are created (see also "Creating and Deploying Services in VMM" at http://technet.microsoft.com/en-us/library/gg675074)
VmTemplate: Template name used for creating and deploying new VMs to a standalone host (in this example document, SATemplate)
HaVmTemplate: Template name used for creating and deploying new VMs to a Hyper-V host cluster (in this example document, HATemplate)
VmLocation: Path to the location on the VM host where new VMs will be stored. (For SAN deployments to a Hyper-V host cluster and for VM rapid provisioning to a cluster, no paths are required.)
DomainUserName: Name of the Active Directory user account that is a VMM Administrator or Delegated Administrator for the specified host and storage resources
DomainPassword: Password of the DomainUserName account
OutputCSVFile: Name of the CSV file that contains the results of each test together with the completion time for each operation
LibServer (optional)¹: Name of the library server; in this example document, the library server is the same computer as the standalone VM host
LibLocalShare (optional)¹: Local path to the shared folder (on the Library Server computer) where the LUNs that will be used to create SCC templates are mounted
LibShareName (optional)¹: Name of the VMM Library Share for the specified local LibLocalShare folder
VhdName (optional)¹: Name of the virtual hard disk that will be copied onto the SCC LUN

¹ Assuming that you already created the templates as specified earlier in this document, the following four values in the StorageConfig.xml file are optional: LibServer, LibLocalShare, LibShareName, and VhdName (and they will not be used even if you do fill them in).
The following sample XML file shows the tags and contents of a StorageConfig.xml file that EMC used
during one of its actual validation tests.
Figure 9: EMC sample StorageConfig.xml file
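Figure 9 appears in the original document as an image. As an illustration only, a StorageConfig.xml file built from the tags in Table 48 would have roughly the following shape; the root element name, the nesting, and all values shown here are placeholders, not EMC's actual test values.

```xml
<StorageConfig>
  <VmmServer>VMMSERVER01</VmmServer>
  <ProviderName>SMISPROVIDER01:5988</ProviderName>
  <UserName>admin</UserName>
  <Password>password</Password>
  <NetName>http://SMISPROVIDER01</NetName>
  <Port>5988</Port>
  <PoolName>SMI-Thin</PoolName>
  <ArrayName>000194900123</ArrayName>
  <ClassificationName>GOLD</ClassificationName>
  <HostName1>LIBHOST01</HostName1>
  <ClusterName1>HVCLUSTER01</ClusterName1>
  <ClusterNodes>
    <Node>HVNODE01</Node>
    <Node>HVNODE02</Node>
  </ClusterNodes>
  <LunDescPrefix>VMMTEST</LunDescPrefix>
  <ParallelLunsCount>10</ParallelLunsCount>
  <ParallelSnapshotCount>10</ParallelSnapshotCount>
  <ParallelCloneCount>10</ParallelCloneCount>
  <VmNamePrefix>TestVM</VmNamePrefix>
  <ServiceNamePrefix>TestSvc</ServiceNamePrefix>
  <VmTemplate>SATemplate</VmTemplate>
  <HaVmTemplate>HATemplate</HaVmTemplate>
  <VmLocation>C:\VMs</VmLocation>
  <DomainUserName>CONTOSO\vmmadmin</DomainUserName>
  <DomainPassword>password</DomainPassword>
  <OutputCSVFile>C:\Toolbox\VMMValidationScript\Results.csv</OutputCSVFile>
</StorageConfig>
```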
5.2 Configure Trace Log Collection
This section shows you how to configure trace log collection for the Microsoft Storage Management
Service and for ECOM.
5.2.1 Configure Tracing for the Microsoft Storage Management Service
The Microsoft Storage Management Service introduced with VMM 2012 communicates with SMI-S–
based providers from storage vendors, including the EMC SMI-S Provider. To facilitate troubleshooting,
VMM now includes substantial storage-related tracing information in its own logs. Whenever possible,
VMM also includes the CIM-XML output from the vendor's SMI-S Provider.
In some cases, however, you will need to obtain CIM-XML output and trace output from the Storage
Management Service directly to help you troubleshoot further.
The three levels of tracing that you will need, and how to enable each one, are described in the
following table.
Table 49: Configure trace logging for storage automation validation testing
Task
Action
Enable VMM Tracing
VMM traces will produce error and exception information. Be sure that you collect
traces on the VMM server. You need Hyper-V host traces only if the failure occurs
on the Hyper-V side (for example, if you encounter volume mount issues).
To set up VMM tracing, refer to the instructions in Microsoft KB article 970066,
"How to collect traces in System Center Virtual Machine Manager."
Enable SCX CIM-XML Command Tracing
Microsoft Storage Management Service uses CIM-XML to communicate with the
SMI-S Provider.
To enable SCX CIM-XML command tracing:
1. Open the Registry Editor.
2. Add a registry subkey in the following location called CIMXMLLog:
HKLM\Software\Microsoft\Storage Management\CIMXMLLog
3. Add a registry DWORD named LogLevel with the value 4.
4. Add a registry String named LogFileName, and specify the full path and file
name to use for logging. Make sure that the directory exists and that the
Network Service account has read-and-write access to that directory.
5. Close the Registry Editor.
6. Open Services to stop and start Microsoft Storage Management Service.
Note Logging will fail to start if a trailing space exists in any of the registry value
names. Compare:
 "LogFileName" is correct.
 "LogFileName " has a trailing space, and logging will not start.
The output produced by SCX CIM-XML command tracing is the raw call-and-response
interaction between the service and the provider. This information is very
verbose, so (to help minimize noise) collect this information only when you
reproduce the issue.
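The registry changes in steps 2 through 4 can also be applied from a .reg file. This sketch assumes the LogLevel and LogFileName values live directly under the CIMXMLLog subkey, as the step order suggests; the log path shown is a placeholder, and the folder must already exist with read-and-write access for the Network Service account.

```reg
Windows Registry Editor Version 5.00

; Values assumed to live under the CIMXMLLog subkey (per steps 2-4 above)
[HKEY_LOCAL_MACHINE\Software\Microsoft\Storage Management\CIMXMLLog]
"LogLevel"=dword:00000004
; Placeholder path; the directory must exist and grant Network Service read/write
"LogFileName"="C:\\SMTrace\\cimxml.log"
```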
The following sample shows you the type of information collected in the trace:
Enable Traceview ETL
Microsoft Storage Management Service has its own trace output, which you can
collect by using Traceview with an event trace log (ETL) file.
To enable Traceview ETL:
1. Download the Windows Driver Kit (WDK) 7.1.0 from the Microsoft Download
Center, and install the WDK on the VMM Server.
2. Download Traceview.zip from SCVMM 2012: Collecting storage related traces, and
unzip it to a local folder on the VMM Server.
3. Copy Traceview.exe from the WDK folder to the same local folder.
4. Run Traceview.exe with administrator rights on the VMM Server.
5. In Traceview, click File, click Open Workspace, select SCX, and then click OK.
The Traceview UI starts to display trace information when the next storage
operation occurs. This information is also logged to the StorageService.etl file
(located in the same folder as Traceview.exe).
The following screenshot is an example of the type of information collected in the
trace:
5.2.2 ECOM Input/Output Tracing for the EMC SMI-S Provider
The following steps show you how to configure ECOM tracing for the EMC SMI-S Provider.
Table 50: Configure ECOM trace logging for storage automation validation testing
Task
Action
Shut down ECOM
In a command shell on the EMC SMI-S Provider server, shut down Ecom.exe:
C:\Program Files\EMC\ECIM\ECOM\bin\sm_service Stop Ecom.exe
Note Alternatively, you can use either of the following tools:
 Service Manager
 Command shell command "net stop ECOM"
Clean up log files
Delete existing log files in the ECOM log folder:
C:\Program Files\EMC\ECIM\ECOM\log
Edit
Log_settings.xml to
turn on ECOM http
input/output
tracing
Open the Log_settings.xml file at the following location:
C:\Program Files\EMC\ECIM\ECOM\conf\Log_settings.xml
Make the following changes:
Change the value for Severity from this:
<ECOMSetting Name="Severity" Type="string" Value="NAVI_WARNING"/>
To this:
<ECOMSetting Name="Severity" Type="string" Value="NAVI_TRACE"/>
Change the value for HTTPTraceOutput from this:
<ECOMSetting Name="HTTPTraceOutput" Type="boolean" Value="false"/>
To this:
<ECOMSetting Name="HTTPTraceOutput" Type="boolean" Value="true"/>
Change the value for HTTPTraceInput from this:
<ECOMSetting Name="HTTPTraceInput" Type="boolean" Value="false"/>
To this:
<ECOMSetting Name="HTTPTraceInput" Type="boolean" Value="true"/>
Change the value for HTTPTraceMaxVersions from this:
<ECOMSetting Name="HTTPTraceMaxVersions" Type="uint32" Value="3"/>
To this:
<ECOMSetting Name="HTTPTraceMaxVersions" Type="uint32" Value="30"/>
Save the Log_settings.xml file.
Start ECOM
Start Ecom.exe:
C:\Program Files\EMC\ECIM\ECOM\bin\sm_service Start Ecom.exe
Note Alternatively, you can use either of the following tools:
 Service Manager
 Command shell command "net start ECOM"
Reproduce the issue
Run a test that reproduces the issue for which you want to enable ECOM tracing.
Tip Run the test only long enough to trigger the issue.
Shut down ECOM
Shut down Ecom.exe:
C:\Program Files\EMC\ECIM\ECOM\bin\sm_service Stop Ecom.exe
Note Alternatively, you can use either of the following tools:
 Service Manager
 Command shell command "net stop ECOM"
Collect all files
Collect all of the files in both of the following locations:
C:\Program Files\EMC\ECIM\ECOM\log
C:\Program Files\EMC\SYMAPI\log
Undo changes to Log_settings.xml
Undo each change made to Log_settings.xml by reverting the value for each
ECOMSetting modified above to its original value.
Restart ECOM
Start Ecom.exe:
C:\Program Files\EMC\ECIM\ECOM\bin\sm_service Start Ecom.exe
Note Alternatively, you can use either of the following tools:
 Service Manager
 Command shell command "net start ECOM"
5.3 Review the Full Test Case List Developed by VMM
The VMM Storage Automation Validation Script contains multiple test cases that exercise functionality
and scale. Vendors capture test case results in an Excel spreadsheet named Provider-StabilizationTestsTemplate.xlsx.
Table 51: Tests developed by VMM that exercise storage automation functionality and scale

Single Operations:
Test102_CreateDeleteOneLun -LunSizeinMB 10240
Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240
Test104_CreateOneCloneOfLun -LunSizeinMB 10240
Test105_RegisterUnRegisterOneLunToHost
Test155_RegisterUnRegisterOneLunToCluster
Test106_RegisterOneLunAndMountToHost -LunSizeinMB 10240
Test107_RapidCreateOneVMToHost
Test157_RapidCreateOneVMToCluster

End-to-End Scenarios (baseline scale test):
Test101_AddRemoveProvider
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
Test207_RapidCreateMultipleVMsToHost -Count 10
Test257_RapidCreateMultipleVMsToCluster -Count 10
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
Test400_PerformAllClusterTests

End-to-End Scenarios (full scale test):
Test101_AddRemoveProvider
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
Test207_RapidCreateMultipleVMsToHost -Count 10
Test257_RapidCreateMultipleVMsToCluster -Count 10
Test207_BatchRapidCreateMultipleVMsToCluster -BatchSize 10 -NumberofBatches 25 ¹
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
Test400_PerformAllClusterTests

¹ Test207_BatchRapidCreateMultipleVMsToCluster is the only test that is new in this row (full-scale tests) compared to the preceding row (baseline-scale tests).
5.4 Test Case List by EMC Array Product Family
EMC test results obtained by using the VMM Storage Automation Validation Script are provided in this
section for the Symmetrix, CLARiiON, and VNX product families. Each of these storage system families
supports VMM 2012 storage functionality. These tests validate the operation of each supported array, its
Operating Environment, and the EMC SMI-S Provider that communicates with the Microsoft Storage
Management Service.
5.4.1 Test Results – EMC Symmetrix Family
The following table lists the results of EMC testing obtained by using the VMM Storage Automation
Validation Script for the Symmetrix product family.
Table 52: Tests developed by VMM that EMC ran successfully on Symmetrix family arrays

Each test is listed with its result: Pass, Fail, or N/A.

Single Operations:
Test102_CreateDeleteOneLun -LunSizeinMB 10240: Pass
Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240: Pass
Test104_CreateOneCloneOfLun -LunSizeinMB 10240: Pass
Test105_RegisterUnRegisterOneLunToHost: Pass
Test155_RegisterUnRegisterOneLunToCluster: Pass
Test106_RegisterOneLunAndMountToHost -LunSizeinMB 10240: Pass
Test107_RapidCreateOneVMToHost: Pass
Test157_RapidCreateOneVMToCluster: Pass

End-to-End Scenarios (baseline scale test):
Test101_AddRemoveProvider: Pass
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240: Pass
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240: Pass
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240: Pass
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240: Pass
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240: Pass
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10: Pass
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240: Pass
Test207_RapidCreateMultipleVMsToHost -Count 10: Pass
Test257_RapidCreateMultipleVMsToCluster -Count 10: Pass
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10: Pass
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10: Pass
Test400_PerformAllClusterTests: Pass

End-to-End Scenarios (full scale test):
Test101_AddRemoveProvider: Pass
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240: Pass
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240: Pass
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240: Pass
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240: Pass
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240: Pass
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10: Pass
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240: Pass
Test207_RapidCreateMultipleVMsToHost -Count 10: Pass
Test257_RapidCreateMultipleVMsToCluster -Count 10: Pass
Test207_BatchRapidCreateMultipleVMsToCluster -BatchSize 10 -NumberofBatches 25: Pass
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10: Pass
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10: Pass
Test400_PerformAllClusterTests: Pass
5.4.2 Test Results – EMC CLARiiON Family
The following table lists the results of EMC testing obtained by using the VMM Storage Automation
Validation Script for the CLARiiON product family.
Table 53: Tests developed by VMM that EMC ran successfully on EMC CLARiiON family arrays
Each row lists the test command and its result (Pass, Fail, or N/A).

Single Operations
Test102_CreateDeleteOneLun -LunSizeinMB 10240    Pass
Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240    Pass
Test104_CreateOneCloneOfLun -LunSizeinMB 10240    Pass
Test105_RegisterUnRegisterOneLunToHost    Pass
Test155_RegisterUnRegisterOneLunToCluster    Pass
Test106_RegisterOneLunAndMountToHost -LunSizeinMB 10240    Pass
Test107_RapidCreateOneVMToHost    Pass
Test157_RapidCreateOneVMToCluster    Pass

End-to-End Scenarios (baseline scale test)
Test101_AddRemoveProvider    Pass
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240    Pass
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240    Pass
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240    Pass
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240    Pass
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10    Pass
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test207_RapidCreateMultipleVMsToHost -Count 10    Pass
Test257_RapidCreateMultipleVMsToCluster -Count 10    Pass
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10    Pass
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10    Pass
Test400_PerformAllClusterTests    Pass

End-to-End Scenarios (full scale test)
Test101_AddRemoveProvider    Pass
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240    Pass
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240    Pass
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240    Pass
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240    Pass
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10    Pass
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test207_RapidCreateMultipleVMsToHost -Count 10    Pass
Test257_RapidCreateMultipleVMsToCluster -Count 10    Pass
Test207_BatchRapidCreateMultipleVMsToCluster -BatchSize 10 -NumberofBatches 25    Pass
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10    Pass
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10    Pass
Test400_PerformAllClusterTests    Pass
5.4.3 Test Results – EMC VNX Family
The following table shows the results of EMC testing obtained by using the VMM Storage Automation
Validation Script for the VNX product family.
Table 54: Tests developed by VMM that EMC ran successfully on EMC VNX family arrays
Each row lists the test command and its result (Pass, Fail, or N/A).

Single Operations
Test102_CreateDeleteOneLun -LunSizeinMB 10240    Pass
Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240    Pass
Test104_CreateOneCloneOfLun -LunSizeinMB 10240    Pass
Test105_RegisterUnRegisterOneLunToHost    Pass
Test155_RegisterUnRegisterOneLunToCluster    Pass
Test106_RegisterOneLunAndMountToHost -LunSizeinMB 10240    Pass
Test107_RapidCreateOneVMToHost    Pass
Test157_RapidCreateOneVMToCluster    Pass

End-to-End Scenarios (baseline scale test)
Test101_AddRemoveProvider    Pass
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240    Pass
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240    Pass
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240    Pass
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240    Pass
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10    Pass
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test207_RapidCreateMultipleVMsToHost -Count 10    Pass
Test257_RapidCreateMultipleVMsToCluster -Count 10    Pass
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10    Pass
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10    Pass
Test400_PerformAllClusterTests    Pass

End-to-End Scenarios (full scale test)
Test101_AddRemoveProvider    Pass
Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240    Pass
Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240    Pass
Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240    Pass
Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240    Pass
Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10    Pass
Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240    Pass
Test207_RapidCreateMultipleVMsToHost -Count 10    Pass
Test257_RapidCreateMultipleVMsToCluster -Count 10    Pass
Test207_BatchRapidCreateMultipleVMsToCluster -BatchSize 10 -NumberofBatches 25    Pass
Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10    Pass
Test502_MigrateMultipleVMFromCluster2Host -VMCount 10    Pass
Test400_PerformAllClusterTests    Pass
5.5 Test Storage Automation in Your Pre-Production Environment
You now know how EMC built a preproduction test environment to validate VMM 2012 storage
functionality. You also know exactly what storage functionality the VMM Storage Automation Validation
Script is designed to test, and you have seen the results that EMC obtained by running the Microsoft
validation script.
Now, you can build your own preproduction test environment, download the VMM validation script,
and run your own validation testing. This will enable you to learn about VMM 2012, EMC storage arrays,
and how your private cloud components interact in your own environment.
Specifically, in a preproduction environment, you can use the validation script to validate that the
configuration is working as expected before deploying a private cloud into your production
environment. The script enables you to establish a baseline of what the environment can do. After
production deployment, you can compare the current performance and behavior to that baseline.
For example, suppose that masking operations were working in the preproduction setting, but now
they start to fail against the same 16-node cluster that you used earlier. Experience will tell you to
eliminate issues with VMM first by restarting the failed masking job. If the job completes, the next thing
to investigate is whether the job timed out. Timeouts on the provider side might indicate an
overloaded provider.
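As a sketch of that first troubleshooting step, you can restart a failed masking job from the VMM PowerShell console. This assumes the VMM console (and its PowerShell module) is installed and connected to your VMM server; the "*mask*" name filter is a hypothetical pattern that you should adjust to match the job names in your environment:

```powershell
# Find failed VMM jobs whose names suggest a masking operation,
# then restart each one (VMM restarts a job from its last checkpoint).
$failedJobs = Get-SCJob | Where-Object { $_.Status -eq "Failed" -and $_.Name -like "*mask*" }
foreach ($job in $failedJobs) {
    Restart-SCJob -Job $job
}
```

If the restarted job completes, check its duration against your preproduction baseline to decide whether a provider-side timeout is the likely cause.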
6 Prepare for Production Deployment
You must carefully plan the transition from a preproduction or lab environment to a production
environment. Effectively using VMM to create and manage one or more private clouds requires that you
design and implement your management of storage resources with VMM in mind. This section can help
you do that by describing how to get started and identifying important resources.
6.1 Identify Issues Unique to Your Production Environment
Optimally, your preproduction environment accurately models your production environment. If it does
not, it is important to identify the differences and limitations.
It is also critical to understand what issues might exist in your production environment that limit the
performance and scale of your private cloud. One example is the number of nodes that you use (or plan
to use) in your production Hyper-V host clusters. Another example is the set of rapid provisioning
requirements that your organization plans to specify for VMM host groups.
6.2 Production Deployment Resources
A number of resources are available to support you in designing, implementing, and verifying your
private cloud environment. Which resources you choose to use will depend on your goals for building,
operating, and maintaining a private cloud.
If you are just starting with VMM, viewing Microsoft Private Cloud Videos can help familiarize you with
VMM features and functionality. After viewing the videos, contact an EMC representative about how to
use the Microsoft Technology Centers (MTCs) to aid you in building your private cloud. The
representative might recommend validated Fast Track configurations to expedite deploying VMM and
EMC storage in your production environment.
6.2.1 Microsoft Private Cloud Videos
You can use the Microsoft videos described briefly in the following subsections as an introduction to
how to deploy a private cloud.
6.2.1.1 Microsoft IT Showcase Video
You can find the IT Showcase video "How Microsoft IT Uses System Center Virtual Machine Manager to
Manage the Private Cloud" at http://technet.microsoft.com/en-us/edge/Video/hh748210
VMM 2012 helps enable centralized management of both physical and virtual IT infrastructure,
increases server utilization, and improves dynamic resource optimization across multiple virtualization
platforms. Microsoft uses VMM to plan, deploy, manage, and optimize its own virtual infrastructure
while maximizing its datacenter resources.
6.2.1.2 Microsoft Jump Start Videos
The following jump start videos by Microsoft introduce VMM 2012:
• "Private Cloud Jump Start (01): Introduction to the Microsoft Private Cloud with System Center
  2012" at http://technet.microsoft.com/en-US/edge/private-cloud-jump-start-01-introduction-to-the-microsoft-private-cloud-with-system-center-2012
• "Private Cloud Jump Start (02): Configure and Deploy Infrastructure Components" at
  http://technet.microsoft.com/en-us/edge/video/private-cloud-jump-start-02-configure-and-deploy-infrastructure-components
6.2.2 Microsoft Technology Centers (MTCs)
Today, you can find Microsoft Technology Centers (MTCs) all over the world. These centers bring
together Microsoft and its partners in a joint effort to help enterprise customers find innovative
solutions for their unique environments.
EMC participates in these centers as a member of the Microsoft Technology Center Alliances Program.
Through the Alliances program, working directly with an MTC Alliance Manager, EMC provides
hardware, software, and services to all of the MTC facilities.
Customers can meet with solution and technology experts at an MTC location and find answers to
questions such as the following:
• What is the best solution?
• How do we get a solution to market faster?
• How do we solve this difficult problem?
• What are the appropriate development best practices to apply?
• Should we be looking at release or pre-release software?
• Can we verify this proposed solution before making a purchase?
• What are the appropriate products to purchase for the solution?
• Can we see a live demo of the solution?
MTCs have three types of offerings, each of which focuses on a different stage of your organization’s
search for a solution:
• Strategy Briefing
• Architecture Design Session
• Proof-of-Concept Workshop
No matter what development stage you are at with your solution, the MTC can help get you to the next
step.
MTCs focus on the following business goals:
• Build customer connections
• Drive real-world business process
• Drive business performance
• Enable your mobile workforce
• Optimize your application platform
• Optimize your business productivity infrastructure
• Optimize and secure your core infrastructure
• Test and tune performance
For more information about EMC as an MTC Alliance partner, watch the following video:
• Inside The Partnership (EMC/MSFT) ITP01 - The MTC
To engage with EMC at the MTC, contact your local Microsoft or EMC account manager. MTC visits are
free, with convenient locations and flexible schedules. You can also schedule a visit by using the EMC
online booking request form in one of the following ways:
• Click Microsoft Technology Center Booking Request Form.
–OR–
1. Open Powerlink.emc.com, click Solutions, select Application Solutions, select Microsoft, and then
   click Microsoft Technology Centers.
2. In the main pane, click Microsoft Technology Center Booking Request Form, and then fill in the
   required fields.
For more information about MTCs, visit www.microsoft.com/mtc or speak to your local Microsoft or
EMC account manager.
6.2.3 Microsoft Private Cloud Fast Track Program
The Microsoft Private Cloud Fast Track program helps accelerate customer deployments of a Microsoft
private cloud into a production environment by defining a specific configuration that implements best
practice guidance. These pre-validated configurations utilize multiple points of integration with the
Microsoft System Center product set. For System Center 2012 — Virtual Machine Manager 2012
(VMM 2012), Fast Track solutions defined by EMC in conjunction with associated server and SAN
vendors utilize the SMI-S Provider to deliver end-to-end storage system automated operations.
Additional integration between System Center and EMC is provided in Fast Track deliverables that
include System Center 2012 — Operations Manager and System Center 2012 — Orchestrator as well as
other solutions that include EMC PowerShell components. Customers implementing Microsoft Private
Cloud Fast Track solutions are provided with a pre-staged, validated configuration of storage, compute,
and network resources that fulfill all private cloud requirements. These solutions significantly improve
return on investment for private cloud deployments.
Table 55: EMC and Microsoft sources for the Fast Track program

Source      Website                                               Link
EMC         Microsoft Virtualization and Private Cloud Solutions  http://www.emc.com/hypervcloud
Microsoft   Microsoft Private Cloud                               http://www.microsoft.com/privatecloud
Appendix A: Install VMM
The following steps show you how to install the VMM management server.
Before You Start:
• Review the following sections earlier in this document:
  - "Minimum Hardware Requirements (Servers and Arrays) for Test Environment"
  - "Minimum Hardware Requirements Explained"
  - "VMM Prerequisites"
Table 56: Install VMM
Start the Installation Wizard
  On the VMM installation media, right-click Setup.exe, and then click Run as administrator to open
  the Microsoft System Center 2012 Virtual Machine Manager Setup Wizard.

Select features to install
  On the opening screen, click Install, and then select the following options:
  • VMM management server
  • VMM console (selected automatically when you select VMM management server)

Specify product registration information
  Specify:
  • Name:
  • Organization: (optional)
  • Product key:

Indicate whether you accept the license agreement
  Select:
  • I have read, understood, and agree with the terms of the license agreement.

Indicate whether you want to join CEIP
  Select Yes or No, depending on whether or not you want to join CEIP.

Specify Microsoft Update behavior (if this page appears)
  If the Specify Microsoft Update behavior screen appears, specify whether or not you want to use
  Microsoft Update.
  Note: If, on this computer, you earlier chose to use Microsoft Update, this page does not appear in
  the VMM Setup wizard.

Specify the installation location
  Accept the default installation path:
  C:\Program Files\Microsoft\Microsoft System Center 2012\Virtual Machine Manager

Review prerequisite warnings (if any appear)
  The Setup wizard checks whether all hardware and software requirements for VMM 2012 are met;
  it displays a page with warnings if any requirements are not met.
  Review warnings (if any):
  • Fix any errors
  • Fix or ignore any warnings
  Tip: If any errors or warnings appear, see "VMM Prerequisites" earlier in this document and see
  "System Requirements for System Center 2012 – Virtual Machine Manager".

Specify SQL database configuration information for VMM
  Specify the following settings for the SQL Server that will contain the VMM database:
  • Server name:
    Example: If the SQL Server is on the same computer on which you are now installing VMM, type
    localhost, or type the server name (such as vmmserver01).
  • Port: <blank – unless all of the following are true>
    - SQL Server is on a remote computer
    - SQL Server Browser service is not started on that computer
    - SQL Server is not configured to use the default port (1433)
  Optionally, specify:
  • Domain\Username:
  • Password:
  Specify:
  • Instance name:
    Note: The default instance name is MSSQLSERVER. A server can host only one default instance of
    SQL Server, so if you plan to install multiple instances of SQL Server on this computer, specify a
    named instance. For more information, see the MSDN® topic "Instance Configuration."
  Specify whether you will create a new database or use an existing database:
  • New database:
    Example name: VMMDatabase01
    Important: If the account you use to install the VMM server does not have permissions to create
    a new SQL Server database, select Use the following credentials and provide the user name and
    password of an account that does have permissions.
  -or-
  • Existing database:

Configure service account and distributed key management information
  Select:
  • Local System account
  Important: This is the account that the VMM service uses. Changing the VMM service account
  after installation is unsupported; this includes changing from the local system account to a domain
  account, from a domain account to the local system account, or from one domain account to
  another domain account.
  If you specify a domain account, the account must be a member of the local Administrators group
  on the computer.
  If you plan to use shared ISO images with Hyper-V virtual machines, you must use a domain
  account.
  For more information about which type of account to use, see Specifying a Service Account for
  VMM.
  Optionally, you can select:
  • Store my keys in Active Directory
  However, for this preproduction test installation, you might not need to select this option. (For
  more information, see "Configuring Distributed Key Management in VMM.")

Specify the ports for various VMM features
  Typically, for this test installation, you can accept the following default values for ports:
  Important: The values you assign for these ports during Setup cannot be changed without
  uninstalling and reinstalling the VMM server.
  • 8100  Communication with the VMM console
  • 5975  Communication to agents on hosts and library servers
  • 443   File transfers to agents on hosts and library servers
  • 8102  Communication with Windows Deployment Services
  • 8101  Communication with Windows PE agents
  • 8013  Communication with Windows PE agent for time synchronization

Specify Library configuration information
  Specify a share for the VMM library by selecting:
  • Create a new library share
  Accept the pre-populated default values for:
  • Share name: MSSCVMMLibrary
  • Share location: C:\ProgramData\Virtual Machine Manager Library Files
  • Share description: VMM Library Share
  Important: MSSCVMMLibrary is the default library share name; its location is
  %SYSTEMDRIVE%\ProgramData\Virtual Machine Manager Library Files. Because ProgramData is a
  hidden folder, if you want to see its contents in Windows Explorer, you must configure Windows
  Explorer to show hidden folders.
  After VMM Setup completes, you can add library shares (and additional library servers) by using the
  VMM console or by using VMM PowerShell.

Review Installation summary page
  Review your selections, and then click Install.

Confirm that Installing features completes
  Wait until the installation completes for both of the following:
  • VMM management server
  • VMM console

Confirm Setup completed successfully
  When you see a message that the Setup wizard has completed, click Close.

Configure Storage
  After you have successfully installed the VMM server in your preproduction environment, complete
  the steps in the following sections earlier in this document:
  • "Configure VMM to Discover and Manage Storage"
  -and-
  • "Create SAN-Copy-Capable Templates for Testing VM Rapid Provisioning"
Appendix B: Array Masking and Hyper-V Host Clusters
Storage groups unmask, or associate, servers with specific logical units on an array and with target ports
on that array (these are the ports through which a logical unit is visible to the server). When a cluster is
involved, how VMM unmasks storage to a cluster varies depending on factors described in this section.
This appendix can help administrators determine the appropriate configuration for unmasking storage
to host clusters in a way that helps avoid issues such as timeouts.
Before addressing how VMM handles unmasking operations for clusters, the first of the following
subsections introduces the concept of storage groups and explains how VMM uses storage groups to
bind logical units on arrays to specific VM host servers.
Storage Groups Unmask Logical Units to Hyper-V VM Hosts
In a VMM 2012 private cloud, the purpose of storage groups is to make storage on an array available to
Hyper-V VM hosts or to Hyper-V host clusters. The mechanism to enable unmasking (assigning) a logical
unit on an array to a host is to use storage groups to bind (map) initiator endpoints on Hyper-V VM hosts
(or initiator endpoints on clusters) to target endpoints on the storage array.
VMM creates new storage groups and modifies existing storage groups.
The following table lists commonly used synonyms for storage groups, initiator endpoint, and target
endpoint.
Table 57: Commonly used synonyms for storage groups, initiators, and targets

Synonyms for the interface that binds initiators to targets:
• Storage groups
• Masking sets
• Masking Views
• Views
• SCSI Protocol Controllers (SPCs)
Note: SPC is the term typically used by SMI-S. SCSI is the common protocol used (over FC or over
Ethernet) when storage is assigned remotely to a server.

Synonyms for the endpoint on a Hyper-V host:
• Initiator
• Storage initiator
• Host initiator
• Host initiator endpoint
• Host initiator port
• Initiator port
• Port
• Hardware ID
• A specific implementation (FC SAN): FC initiator port, HBA port, HBA (see note 1)
• A specific implementation (iSCSI SAN): iSCSI initiator port, iSCSI initiator

Synonyms for the endpoint on a storage array:
• Target
• Target endpoint
• Target port
• Target portal
• Target iSCSI portal
• Storage endpoint
• Storage target
• Storage port
• Port
• iSCSI portal (see note 2)
• A specific implementation (FC SAN): FC target port
• A specific implementation (iSCSI SAN): iSCSI target port, iSCSI target

Note 1: HBA is the physical adapter. An HBA might have one or more physical ports. In the NPIV case,
one physical port can have multiple virtual ports associated with it, each with its own World Wide
Name (WWN).
Note 2: The "portal" in "iSCSI portal" refers to the IP address that initiators use to first gain access to
iSCSI targets.
As indicated in the table above, the term storage groups is sometimes used interchangeably with SPCs.
SCSI as the first element of the SPC acronym is appropriate because SCSI is the protocol used for both FC
and iSCSI communications in a SAN. From an SMI-S perspective, a storage group is an instance of the
CIM class CIM_SCSIProtocolController, as illustrated in the following figure.
Figure 10: A storage group is an instance of the CIM class SCSIProtocolController
VMM 2012 discovers existing storage groups during Level 2 discovery when it retrieves storage groups
(and storage endpoints) associated with discovered logical units in VMM-managed storage pools on an
array. VMM populates the VMM database not only with discovered storage objects but also with any
discovered association between a host and a logical unit — storage groups act as the interface that
binds host initiator endpoints (called InitiatorPorts in the figure) on a Hyper-V VM host (or Hyper-V host
cluster) to storage endpoints (called TargetPorts in the figure) for specific logical units on target arrays.
Figure 11: VMM modifies storage groups during masking operations to unmask LUNs to hosts
Thus, if a storage group contains a host initiator endpoint (InitiatorPort in the figure) on the host side
that maps to TargetPorts on the array side, VMM unmasks the logical unit to that host through the
association established by the storage group. If no association exists, the logical unit is masked (the
logical unit is not visible to the host).
Factors that Affect Unmasking for Hyper-V Host Clusters in VMM
Array-side properties that affect how VMM 2012 configures unmasking for Hyper-V host clusters
include:
• Ports per View (Ports refers to target ports on an array; View refers to storage groups)
  This property indicates that the array supports one of the following options:
  - Only one target port per storage group
  - All target ports per storage group
  - One or multiple or all target ports per storage group
• Hardware ID per View (Hardware ID refers to the host initiator on a VM host; View refers to
  storage groups)
  This property indicates that the array supports one of the following options:
  - Only one hardware ID per storage group
  - Multiple hardware IDs per storage group
  Important: The Hardware ID per View setting does not apply to EMC arrays but is included in
  this document for completeness. If you run the following VMM PowerShell commands in an
  environment with EMC arrays, you can see that the value returned for
  MaskingOneHardwareIDPerView is always FALSE:
  $Arrays = Get-SCStorageArray -All
  $Arrays | Select-Object ObjectType, Name, Model, MaskingOneHardwareIDPerView, HardwareIDFlags
A host-side configurable setting, storage groups, is affected by the values of these two array-side
properties. Hardware ID per View and Ports per View, individually and together, determine how you
should configure VMM to manage storage groups for Hyper-V host clusters:
• Storage groups (storage groups are also referred to as masking views or SPCs)
  VMM manages storage groups in one of the following ways:
  - Per node
  - Per cluster
By default, VMM manages storage groups for clusters per node (not per cluster). However, you might
need to change this setting so that VMM instead manages storage groups per cluster. Understanding
the array-side Hardware ID per View and Ports per View properties can help you decide which option for
Storage Groups per Cluster is appropriate in your VMM-based private cloud.
Ports per View Property — One or Multiple or All
In the context of unmasking or masking a logical unit to a host or host cluster, the Ports per View
property on an array specifies the number of target ports per masking view (per SPC or storage group)
that the underlying storage array supports. The value returned from Ports per View indicates the
requirement from the array; its value is not configurable.
Valid values for the Ports per View property are a set of read-only strings limited to those in the
following list. In each case, the value returned indicates the option that a specific type of array supports:
• OnePortPerView (traditional):
  - Adding only one target port to the storage group is the only option
  - Not implemented by EMC VMAX, CLARiiON, or VNX arrays that support VMM 2012
• AllPortsShareTheSameView (simplest):
  - Adding all target ports to the storage group is required
  - Supported by EMC CLARiiON and VNX arrays
• MultiplePortsPerView (most flexible):
  - Any of the following is supported:
    adding one target port to the storage group,
    adding multiple target ports to the storage group, or
    adding all target ports to the storage group
  - Supported by EMC VMAX arrays
The Ports per View property is an array-based property; its value is not set by VMM, nor can you modify
its value by using VMM. However, the value of this property is made available to VMM through the
SMI-S Provider; you can therefore use VMM cmdlets to return its value.
Example commands (run in the VMM PowerShell command shell):
$Arrays = Get-SCStorageArray -All
$Arrays | Select-Object ObjectType, Name, Model, MaskingPortsPerView | Format-List
Example output:

ObjectType          : StorageArray
Name                : APM00101000787
Model               : Rack Mounted CX4_240
MaskingPortsPerView : AllPortsShareTheSameView

ObjectType          : StorageArray
Name                : 000194900376
Model               : VMAX-1SE
MaskingPortsPerView : MultiplePortsPerView

ObjectType          : StorageArray
Name                : APM00111102546
Model               : Rack Mounted VNX5100
MaskingPortsPerView : AllPortsShareTheSameView
Hardware ID per View Property — One or Multiple
In the context of unmasking or masking a logical unit to a host or host cluster, the Hardware ID per View
property refers to an object on the array that corresponds to a host initiator endpoint on a host (or on a
node of a host cluster). The value for Hardware ID per View is not configurable.
Important The Hardware ID per View setting does not apply to EMC arrays but is included in this
document for completeness.
VMM creates a new masking set if one does not already exist. The array then detects which hardware
IDs exist on the host, and a corresponding hardware ID object is created on the array.
The Boolean value returned for the Hardware ID per View property indicates one of the following:
• True (traditional):
  - This type of array supports only one hardware ID object (host initiator port) per masking view
    (per SPC or storage group)
  - Not implemented by EMC VMAX, CLARiiON, or VNX arrays that support VMM 2012
• False (more flexible):
  - This type of array supports multiple hardware ID objects (host initiator ports) per masking view
    (per SPC or storage group); storage groups can contain multiple host initiator ports, and more
    than one masking view can exist
  - Supported by EMC VMAX, CLARiiON, and VNX arrays
The Hardware ID per View property is an array-based property; its value is not set by VMM, nor can you
modify its value by using VMM. However, the True or False value for this property is made available to
VMM through the SMI-S Provider; you can therefore use VMM cmdlets to return its value.
Example commands (run in the VMM PowerShell command shell):
$Arrays = Get-SCStorageArray -All
$Arrays[0] | Select-Object ObjectType, Name, Model, MaskingOneHardwareIDPerView, HardwareIDFlags
Example output:

ObjectType                  : StorageArray
Name                        : APM00101000787
Model                       : Rack Mounted CX4_240
MaskingOneHardwareIDPerView : False
HardwareIDFlags             : SupportsPortWWN, SupportsISCSIName
Storage Groups Setting — Per Node or Per Cluster
Create Storage Groups per Cluster is a VMM 2012 configurable setting. By default, VMM sets the value
for CreateStorageGroupsPerCluster (a property on a storage array object) to FALSE for any VMM-managed
array. The default specifies that storage groups are created per node (rather than per cluster).
You can manually change the default value to specify that storage groups be created per cluster. Note
that this setting is scoped to the array and therefore affects all host clusters that have storage
allocated on this array.
The Boolean value that you can configure for Create Storage Groups per Cluster specifies:

CreateStorageGroupsPerCluster = False
(more flexible; default)

Creates storage groups on an array at the node level — each storage group contains all initiator
ports for one node. Thus, the LUN (or LUNs) associated with this storage group are made
available to a single node or to a subset of nodes in the cluster.

Drivers:
 Supports the ability to make a specific LUN available to just one node, which means that you
can have a separate LUN for boot-from-SAN scenarios. In the boot-from-SAN scenario, the
boot LUN must be specific to a particular host, and only that host can access that LUN.
 Supported by EMC VMAX, CLARiiON, and VNX arrays
CreateStorageGroupsPerCluster = True
(simplest; also improves performance because there is only one storage group to manage)

Creates storage groups on an array at the cluster level — the storage group contains all host
initiator ports for all nodes in that cluster. Thus, the LUN (or LUNs) associated with this storage
group are made available to all nodes in the cluster.

Drivers:

On some arrays, masking operations are serialized, which means that the time required to
unmask or mask a LUN increases if there are multiple masking requests. In this case,
timeouts can occur so you should consider setting CreateStorageGroupsPerCluster to TRUE.

If you have a large number of nodes (8 to 16) in a cluster, you might encounter timeout
issues. The more nodes, the greater is the chance of timeouts. If so, consider setting
CreateStorageGroupsPerCluster to TRUE.

If you have fewer than 8 nodes per cluster but if the cluster is heavily used, you might
encounter timeout issues. If so, consider setting CreateStorageGroupsPerCluster to TRUE.

Important If you do set CreateStorageGroupsPerCluster to TRUE, be aware that you lose the
ability to make a specific LUN available to just one node or to a subset of nodes. This means that
a separate LUN is no longer available for boot from SAN scenarios.

Setting CreateStorageGroupsPerCluster to True is supported by (and appropriate for) EMC
VMAX arrays; supported by but not typically recommended for CLARiiON or VNX arrays.
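The guidance above can be summarized as a small decision sketch (illustrative Python only; the function name and inputs are hypothetical and are not part of VMM):

```python
def recommend_storage_groups_per_cluster(node_count, heavily_used, needs_boot_from_san):
    """Summarize the drivers above for the CreateStorageGroupsPerCluster setting."""
    if needs_boot_from_san:
        # A boot-from-SAN LUN must be visible to exactly one host, which
        # requires per-node storage groups (setting = False).
        return False
    if node_count >= 8 or heavily_used:
        # Many nodes or a heavily used cluster multiply masking requests;
        # on arrays that serialize masking operations this risks timeouts,
        # so one storage group per cluster (setting = True) is suggested.
        return True
    # Otherwise keep the more flexible default (per node).
    return False

print(recommend_storage_groups_per_cluster(node_count=12, heavily_used=False,
                                           needs_boot_from_san=False))  # True
```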
You can change the default FALSE value to TRUE by using VMM cmdlets.
Example commands (run in the VMM PowerShell command shell):
$Arrays = Get-SCStorageArray -All
$Arrays[0] | Select-Object ObjectType, Name, Model, StorageGroups, CreateStorageGroupsPerCluster | Format-List
Example output:
ObjectType                    : StorageArray
Name                          : APM00101000787
Model                         : Rack Mounted CX4_240
StorageGroups                 : {Storage Group, Storage Group}
CreateStorageGroupsPerCluster : False
The following subsection ties together the array-side Hardware ID per View and Ports per View
properties with the host-side Storage Groups per Cluster setting. It shows how the former two
determine the appropriate way to unmask a LUN either to cluster nodes or to the entire cluster.
How Ports per View and Hardware ID per View Influence Unmasking to a Cluster
As noted earlier, configuring storage groups per cluster or per node is a VMM 2012 setting, whereas the
value for Ports per View (one; all; or one or multiple or all) and for Hardware ID per View (TRUE or
FALSE) are array-based read-only properties. Because SMI-S makes available to VMM the values for both
of these properties, VMM can utilize both properties to help determine the appropriate (or required)
value for CreateStorageGroupsPerCluster.
Impact of Ports per View and Hardware ID on Storage Groups for Clusters
The following matrix shows how the intersection of the value for the Hardware ID per View property
with the value for the Ports per View property influences, or determines, the configuration that VMM
can or must use for host clusters. For each "cell" in the table, the combination of the value for these two
array-side properties indicates whether CreateStorageGroupsPerCluster is TRUE or FALSE or Not
Applicable.
Note

Recall that the term "storage group" is used interchangeably with "SPC" and "masking view."

The values of these array-side properties affect how storage groups are managed or modified if
storage groups already exist, or how storage groups are created if none currently exist.
Table 58: Array-side properties whose values affect how storage groups are set for host clusters

SETTING: 1 Initiator Port per Storage Group = FALSE
 All Target Ports Share Same Storage Group: CreateStorageGroupsPerCluster = TRUE or FALSE
 Multiple Target Ports Per Storage Group: CreateStorageGroupsPerCluster = TRUE or FALSE
 One Target Port Per Storage Group: CreateStorageGroupsPerCluster = TRUE or FALSE

SETTING: 1 Initiator Port per Storage Group = TRUE
 All Target Ports Share Same Storage Group: CreateStorageGroupsPerCluster – N/A to EMC Storage Arrays
 Multiple Target Ports Per Storage Group: CreateStorageGroupsPerCluster – N/A to EMC Storage Arrays
 One Target Port Per Storage Group: CreateStorageGroupsPerCluster – N/A to EMC Storage Arrays
The following two diagrams depict cells 1–3 (top row) in the preceding table. Figures for cells 4–6
(bottom row) are not included because these cases are not applicable to EMC storage systems.
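The table's logic can be condensed into a few lines (illustrative Python; the function name is hypothetical):

```python
def allowed_create_storage_groups_per_cluster(one_hardware_id_per_view):
    """Encode Table 58: if the array allows only one hardware ID per masking
    view (True), the setting is not applicable to EMC storage arrays; if it
    allows multiple hardware IDs per view (False, the EMC case), either value
    may be used, regardless of the Ports per View value."""
    if one_hardware_id_per_view:
        return None  # N/A to EMC storage arrays
    return (True, False)  # either TRUE or FALSE is permitted

print(allowed_create_storage_groups_per_cluster(False))  # (True, False)
```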
Figure 12: Each storage group has at least 2 target ports; set CreateStorageGroupsPerCluster to either TRUE or FALSE
[Figure depicts cells 1 and 2 in the table. In the first case, VMM creates one storage group for
the entire cluster (for all nodes in the cluster). In the second case, VMM creates one storage
group for each node in the cluster.]
The next example is less intuitive than the preceding one: even when you set
CreateStorageGroupsPerCluster to TRUE, the result is one storage group per node. The reason is
explained below the figure.
Figure 13: Each storage group has only 1 target port; set CreateStorageGroupsPerCluster to either TRUE or FALSE
[Figure depicts cell 3 in the table.]
When CreateStorageGroupsPerCluster is set to TRUE:
 Only one storage group is allowed per target port, so you must have a minimum of one storage
group per target port. In this example, 2 target ports exist and therefore 2 storage groups
must exist.
 2 or more initiator ports are allowed in a storage group, so each storage group in this
example contains both initiator ports for a given node.
 CreateStorageGroupsPerCluster is set to TRUE, so typically you would expect to see only 1
storage group, but you must have more than one, as explained in the first bullet.
 Here, you unmask the LUN through both initiator ports on a given node.
When CreateStorageGroupsPerCluster is set to FALSE:
 Only one storage group is allowed per target port, so you must have a minimum of one storage
group per target port, but you can have more than one storage group per target (as in this
example).
 2 or more initiator ports are allowed in a storage group, so each storage group in this
example contains initiator ports for a given node. Because each node in this example has 2
storage groups, both storage groups must contain both initiator ports.
 CreateStorageGroupsPerCluster is set to FALSE, so VMM can create more than one storage group
for each node in the cluster, but it remains true that each storage group must contain both
initiator ports for each node.
 Here, you have the flexibility to unmask the LUN through one initiator port on a given node
but not through the other initiator port on that node.
Using VMM 2012 cmdlets to display information about Storage Groups per Cluster
You can display storage groups — and information about the Hardware ID per View and Ports per View
properties and related information — in your own environment by using VMM cmdlets as shown by the
following examples.
Example commands (run in the VMM PowerShell command shell):
$Arrays = Get-SCStorageArray -All
$Arrays | Select-Object ObjectType, Name, Model, StorageGroups, CreateStorageGroupsPerCluster, StoragePools, StorageInitiators, StorageEndpoints, StorageiSCSIPortals, MaskingPortsPerView, MaskingOneHardwareIDPerView, HardwareIDFlags
Example command:
To use $Arrays to see details about a specific storage group:
$StorageGroups = $Arrays[4].StorageGroups
$StorageGroups[0] | Select-Object ObjectType, Name, ObjectId, StorageArray, StorageInitiators,
StorageEndpoints, StorageLogicalUnits
Example output:
ObjectType          : StorageGroup
Name                : Storage Group
ObjectId            : root/emc:hSMIS-SRV-VM1.SR5DOM.ENG.EMC.COM:5988;Clar_LunMaskingSCSIProtocolController.CreationClassName=%'Clar_LunMaskingSCSIProtocolController%',DeviceID=%'CLARiiON+APM00111102546+b266edfa68a4e011bd47006016372cc9%',SystemCreationClassName=%'Clar_StorageSystem%',SystemName=%'CLARiiON+APM00111102546%'
StorageArray        : APM00111102546
StorageInitiators   : {5001438001343E40}
StorageEndpoints    : {500601603DE00835, 500601683DE00835}
StorageLogicalUnits : {LaurieTestLun}
Example command:
To use $Arrays to see details about a specific LUN:
$LUNs = $StorageGroups[0].StorageLogicalUnits
$LUNs | Select-Object ObjectType, Name, HostGroup, HostDisks, StorageGroups, StoragePool,
NumberOfBlocks, ConsumableBlocks, AccessDescription, TotalCapacity, AllocatedCapacity, InUseCapacity,
RemainingCapacity
Example output:
ObjectType        : StorageLUN
Name              : LaurieTestLun
HostGroup         : All Hosts
HostDisks         : {\\.\PHYSICALDRIVE2, \\.\PHYSICALDRIVE2}
StorageGroups     : {Storage Group}
StoragePool       : Pool 1
NumberOfBlocks    : 33554432
ConsumableBlocks  : 33554432
AccessDescription : Read/Write Supported
TotalCapacity     : 17179869184
AllocatedCapacity : 0
InUseCapacity     : 0
RemainingCapacity : 17179869184
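The capacity fields in this output are consistent with a 512-byte block size, which you can verify with a short sketch (illustrative Python; the 512-byte block size is an assumption inferred from the numbers above):

```python
BLOCK_SIZE = 512  # bytes per block (assumption inferred from the output above)
number_of_blocks = 33554432

# TotalCapacity is reported in bytes: 33554432 blocks x 512 bytes = 16 GB.
total_capacity = number_of_blocks * BLOCK_SIZE
print(total_capacity)      # 17179869184

# AllocatedCapacity and InUseCapacity are 0 here, so RemainingCapacity
# equals TotalCapacity for this LUN.
remaining_capacity = total_capacity - 0
print(remaining_capacity)  # 17179869184
```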
Appendix C: Enable Large LUNs on Symmetrix Arrays
For Symmetrix arrays, VMM cannot create a LUN larger than 240 GB unless you first configure the
auto_meta setting. You set Symmetrix-wide meta settings, including the auto_meta setting, by using the
symconfigure command to specify a command file.
Note The contents of this appendix are adapted from "EMC Solutions Enabler Symmetrix Array
Controls CLI" available at:
https://support.emc.com/docu40313_EMC-Solutions-Enabler-Symmetrix-Array-Controls-CLI-V7.4-ProductGuide.pdf?language=en_US
Maximum Device Size Limits
The maximum size for Symmetrix devices depends on the Enginuity version:
 For Enginuity 5874 and later, the maximum device size in cylinders is 262668.
 For Enginuity 5773 and earlier, the maximum device size in cylinders is 65520.
Metadevices Let You Exceed Maximum Device Size Limits
EMC first introduced the auto_meta feature in Solutions Enabler V6.5.1, running Enginuity version 5773.
The auto_meta setting enables automatic creation of metadevices (a set of logical volumes) in a single
configuration change session. A metadevice is also referred to as a metavolume.
If the auto_meta feature is set to DISABLED (the default value) and you try to create a device larger than
the allowable maximum, creating the device will fail. However, if you set auto_meta to ENABLE and then
specify the creation of a single standard device larger than the maximum allowable size, Symmetrix will
create a metadevice instead of a standard device.
The following table shows, by Enginuity version, the metadevice sizes that are enabled by the
auto_meta feature.
Table 59: Metadevice sizes enabled by the auto_meta feature

Enginuity Version | Max Single Device Size (CYL) | Max Single Device Size (GB) | Min_auto_meta_size (CYL) | Auto_meta_member_size (CYL)
5874              | 262668                       | 240                         | 262669                   | 262668
5773              | 65520                        | 59                          | 65521                    | 65520
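The CYL and GB columns for Enginuity 5874 are consistent with a cylinder of 1920 blocks of 512 bytes each; this geometry is an assumption (it varies by Enginuity version) used here only to sanity-check the table:

```python
# Assumed Enginuity 5874 geometry: 1 cylinder = 1920 blocks x 512 bytes.
BYTES_PER_CYLINDER = 1920 * 512
max_single_device_cyl = 262668

# Convert the maximum single-device size from cylinders to GB (2**30 bytes).
max_single_device_gb = round(max_single_device_cyl * BYTES_PER_CYLINDER / 2**30)
print(max_single_device_gb)  # 240, matching Table 59
```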
Parameter Dependencies
You can enable the auto_meta setting only if the following auto_meta parameters are set to valid
values:
 Min_auto_meta_size: Specifies the size threshold that triggers auto_meta creation.
 When you create a device larger than min_auto_meta_size, and auto_meta is enabled, a
metadevice is created.
 The min_auto_meta_size cannot be set to a value smaller than the auto_meta_member_size.
 The min_auto_meta_size must be smaller than or equal to the value in the preceding table.
 Auto_meta_member_size: Specifies the default meta member size in cylinders when the auto_meta
feature is enabled:
 The auto_meta_member_size must be smaller than or equal to the value in the preceding table.
 Auto_meta_config: Specifies the default meta config when the auto_meta feature is enabled:
 Valid values include CONCATENATED, STRIPED, or NONE.
These settings are Symmetrix-wide.
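The dependencies above can be collected into a small validation sketch (illustrative Python; the function name is hypothetical, and all sizes are in cylinders):

```python
def auto_meta_params_valid(min_auto_meta_size, auto_meta_member_size,
                           table_min_limit, table_member_limit):
    """Check the auto_meta parameter dependencies described above."""
    # min_auto_meta_size cannot be smaller than auto_meta_member_size.
    if min_auto_meta_size < auto_meta_member_size:
        return False
    # Each value must not exceed its limit from Table 59.
    return (min_auto_meta_size <= table_min_limit
            and auto_meta_member_size <= table_member_limit)

# Enginuity 5874 limits from Table 59: 262669 (min) and 262668 (member size).
print(auto_meta_params_valid(262669, 262668, 262669, 262668))  # True
```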
How to enable auto_meta by using EMC Solutions Enabler:
1. Open the command shell (you can use the Windows Command Prompt, Windows PowerShell, or a
Linux or Unix shell).
2. Run the following command to verify whether auto_meta is disabled:
symcfg list -sid xxxx -v
Note Replace xxxx with your Symmetrix ID (SID).
3. If auto_meta is disabled, create a file called 1.txt, and then add the following text to that file:
set Symmetrix auto_meta=enable, min_auto_meta_size=xxxx,
auto_meta_member_size=xxxx, auto_meta_config=xxxx;
4. Run the following command:
symconfigure -sid xxxx -f 1.txt commit -nop
Note Replace xxxx with your Symmetrix ID (SID).
5. Verify that auto_meta is enabled:
symcfg list -sid xxxx -v
Note Replace xxxx with your Symmetrix ID (SID).
How to enable auto_meta by using the Symmetrix Management Console (SMC):
1. In the Console, right-click Symmetrix ID, and then select Symmetrix Admin.
2. Select Set Symmetrix Attributes, and then enable the Auto Meta feature.
3. Enter appropriate values for each of the following parameters:
 Minimum Auto Meta Size
 Auto Meta Member Size
 Auto Meta Configuration
To determine the appropriate values, review the information provided earlier in this appendix.
4. Select Add to Config Session List (which will create a configuration session task).
5. Commit the task from the Config Session menu.
Appendix D: Configure Symmetrix TimeFinder for Rapid VM
Provisioning
EMC Symmetrix TimeFinder capabilities include:
 TimeFinder/Snap: Creates pointer-based logical copies that consume less storage than clones
 TimeFinder/Clone: Creates full-device and extent-level point-in-time copies
Automated rapid VM provisioning with VMM requires EMC TimeFinder. This appendix provides an
overview of TimeFinder and outlines what you need to know about both TimeFinder/Snap and
TimeFinder/Clone in order to set up your preproduction environment to test rapid VM provisioning.
Specifically, you can use this appendix to help determine which configuration steps to perform before
deploying VMs (as described earlier in the section "Create SAN-Copy-Capable Templates for Testing VM
Rapid Provisioning").
See Also:
 EMC Solutions Enabler Symmetrix TimeFinder Family CLI V7.4 Product Guide at
https://support.emc.com/docu40317_EMC-Solutions-Enabler-Symmetrix-TimeFinder-FamilyCLI-V7.4-Product-Guide.pdf?language=en_US
 "EMC Symmetrix Timefinder Product Guide" at
https://support.emc.com/docu31118_Symmetrix-TimeFinder-ProductGuide.pdf?language=en_US
TimeFinder/Snap Overview
TimeFinder/Snap creates space-saving, logical point-in-time images called snapshots. You can create
multiple snapshots simultaneously on multiple target devices from a single source device. Snapshots are
not complete copies of data; they are logical images of the original information, based on the time the
snapshot was created.
Virtual Device (VDEV)
TimeFinder/Snap uses source and target devices where the target device is a special Symmetrix device
known as a virtual device (VDEV). Through the use of device pointers to the original data, VDEVs allow
you to allocate space based on changes to a device (using an Asynchronous Copy on First Write, or
ACOFW, mechanism) rather than replicating the complete device.
A VDEV is a Symmetrix host-addressable cache device used in TimeFinder/Snap operations to store
pointers to point-in-time copies of the source device. Virtual devices are space efficient because they
contain only address pointers to the actual data tracks stored on the source device or in a pool of SAVE
devices (described next).
SAVE Device
A SAVE device is a Symmetrix device that is not accessible to the host and can be accessed only through
VDEVs that store data on SAVE devices. SAVE devices provide pooled physical storage and are
configured with any supported RAID scheme. SAVE devices are placed within logical constructs called
Snap pools (also referred to as SAVE pools) in order to aggregate or isolate physical disk resources for
the purpose of storing data associated with TimeFinder/Snap. The following figure shows the
relationship between Source devices, VDEVs, and SAVE devices.
Figure 14: Configuring TimeFinder / Snap for VM deployment
[Graphic source: EMC Solutions Enabler Symmetrix TimeFinder Family CLI V7.4 Product Guide]
Configuring TimeFinder/Snap for VM Deployment
To support rapid VM deployment in a VMM 2012 private cloud with TimeFinder/Snap, the default snap
pool, named DEFAULT_POOL, must be pre-populated with SAVE devices sized appropriately to accept
the expected write workload associated with the VMs to be deployed. The configuration of the SAVE
devices and their placement into the DEFAULT_POOL is beyond the scope of this document. Please refer
to the appropriate EMC Solutions Enabler or Symmetrix Management Console documentation for how
to configure the default snap pool. (For more information, search Support.EMC.com, search
Powerlink.EMC.com, or refer to your storage documentation.)
To support snapshot operations, the EMC SMI-S provider can automatically select appropriately sized
VDEVs, or it can create new VDEVs. By default, the SMI-S provider first attempts to find pre-created
VDEVs within the Symmetrix array before the provider creates new VDEVs. You can find the settings that
control this behavior in the file called OSLSProvider.conf (located in the EMC\ECIM\ECOM\Providers
installation directory on your SMI-S Provider server). (These settings are described in the table labeled
"Property descriptions and default values in the OSLSProvider.conf settings file" at the end of this
section.)
For the provider to select existing VDEVs automatically, those VDEVs:
 Must be the same size as the source
 Must have the same metadevice configuration as the source
 Must not be in a snap relationship
One benefit of pre-creating VDEVs for automatic selection is that doing so accelerates the VM
deployment process, especially when multiple snapshots are requested in parallel. When the
SMI-S Provider creates devices, it does so in a serial fashion. If multiple snapshot requests occur, and if
those requests must create VDEVs as part of establishing the snapshot relationship, the VM deployment
process will be extended (slower). By pre-creating VDEVs of the appropriate size and metadevice
configuration, the provider need only choose a VDEV and create the snapshot relationship, which
substantially speeds up VM deployment.
When a VM is deleted from VMM (by using the VMM Console or VMM PowerShell), a request is sent to
the provider to automatically terminate the snapshot relationship. However, the VDEV is not deleted as
a part of the VM delete process.
TimeFinder/Clone Overview
TimeFinder/Clone is a local Symmetrix replication solution that creates full-device point-in-time copies
that you can use for backups, decision support, data warehouse refreshes, or any other process that
requires parallel access to production data. To support rapid VM deployment in a VMM 2012 private
cloud, TimeFinder/Clone is used to create full-device copies. VMM uses these copies to deploy VMs
from VM templates that reside on a source LUN on an array that the VM host can access.
Configuring TimeFinder/Clone for VM Deployment
When using TimeFinder/Clone, by default, the SMI-S provider creates a full volume, non-differential
copy of the source device. Non-differential means that after the clone copy is complete, no incremental
relationship is maintained between the source device and the clone target. The VM deployment process
waits for the full data copy (from the source to the clone target) to complete before VMM continues the
associated VM deployment job. After the copy completes, the provider terminates the clone
relationship.
Similar to VM deployment with TimeFinder/Snap, with TimeFinder/Clone, the SMI-S provider can
automatically select appropriately sized clone targets, or it can create new clone targets to support the
clone operation. By default, the SMI-S provider does not attempt to find pre-created clone devices
within the Symmetrix array before the provider creates new devices.
You can find the settings that control this behavior in the file OSLSProvider.conf (located in the
EMC\ECIM\ECOM\Providers installation directory). For the provider to select existing clone devices
automatically, you must change the default setting in the file OSLSProvider.conf. (The possible default
values are listed in the table at the end of this subsection labeled "Property descriptions and default
values in the OSLSProvider.conf settings file.")
In addition, the clone target must:
 Reside in the same disk group as the source
 Be the same size as the source
 Have the same metadevice configuration as the source
 Be the same RAID type as the source
 Not be visible to a host, including not being mapped to any front-end ports
 Not be labeled (not have a user-friendly name)
 Not be in a clone relationship with another device
If the SMI-S Provider cannot find an appropriate clone target, by default, the provider will create a clone
target of the correct size, of the same RAID type as the source, and within the same disk group as the
source.
One benefit of pre-creating clone targets for automatic selection is that doing so accelerates the VM
deployment process, especially when multiple clones are requested in parallel. When the SMI-S Provider
creates devices, it does so in a serial fashion. By default, the clone copy process also occurs serially when
there are multiple requests. If multiple clone requests occur, and if those requests must create clone
targets as part of establishing the clone relationship, the VM deployment process will be slower. By
pre-creating clone targets based on the requirements listed in the bullets, the provider need only
choose the clone target, establish the clone copy session, and then wait for the clone copy to complete.
When a VM is deleted from VMM (by using the VMM Console or VMM PowerShell), a request is sent to
the provider to automatically delete the device (in the case of a clone, the device is not a VDEV) that is
associated with the virtual machine. This frees space within the disk group.
Table 60: Property descriptions and default values in the OSLSProvider.conf settings file

SMI-S Provider Property: OSLSProvider/com.emc.cmp.osls.se.array.ReplicationService.provider.creates.snap.target
 Values (optional | default): false | true
 Description: If true, the provider can create target snap elements

SMI-S Provider Property: OSLSProvider/com.emc.cmp.osls.se.array.ReplicationService.provider.autoselects.snap.target
 Values (optional | default): false | true
 Description: If true, the provider first tries to find a suitable snap target before creating one

SMI-S Provider Property: OSLSProvider/com.emc.cmp.osls.se.array.ReplicationService.provider.creates.clone.target
 Values (optional | default): false | true
 Description: If true, the provider can create target clone elements

SMI-S Provider Property: OSLSProvider/com.emc.cmp.osls.se.array.ReplicationService.provider.autoselects.clone.target
 Values (optional | default): true | false
 Description: If true, the provider first tries to find a suitable clone target before creating one
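As a rough illustration of how these name/value settings might be read, the sketch below parses simple "name = value" lines; the exact OSLSProvider.conf syntax is an assumption, so treat this only as a reading aid for the table:

```python
def parse_provider_settings(text):
    """Parse simple 'name = value' lines into {name: bool} (assumed syntax)."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, sep, value = line.partition("=")
        if sep:
            settings[name.strip()] = value.strip().lower() == "true"
    return settings

sample = ("OSLSProvider/com.emc.cmp.osls.se.array."
          "ReplicationService.provider.autoselects.clone.target = true")
print(parse_provider_settings(sample))
```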
Appendix E: Terminology
The following table defines terms used in this document.
Table 61: Terms defined in the context of VMM 2012, SMI-S, and EMC
Term
Definition
Array
See the entry for "storage array."
Array OS
Refers to the collection of software components resident within an array
that controls array hardware; the array operating system (array OS) makes
available an interface that supports the virtualization and management of
storage. See also the entry for "Operating Environment (OE)."
boot from SAN
Refers to a computer booting (loading) its operating system over a connection
to a SAN rather than from a local hard disk on the computer.
CIM
See the entry for "Common Information Model (CIM)."
CIM-XML client
A component on the VMM Server that enables the Microsoft Storage
Management Service (via the SMI-S Module) to communicate with the SMI-S
Provider by using the CIM-XML protocol.
CIM-XML protocol
The communication mechanism between the VMM Server's Storage
Management Service and the SMI-S Provider.
Common Information
Model (CIM)
A DMTF standard that provides a model for representing heterogeneous
compute, network, and storage resources as objects; the model also includes
the relationships among those objects.
 CIM Infrastructure Specification defines the object-oriented architecture
of CIM.
 CIM Schema defines a common, extensible language for representing
dissimilar objects.
 CIM Classes identify specific types of IT resources (for example:
CIM_NetworkPort).
CIM enables VMM to administer dissimilar elements (storage-related
objects) in a common way through the SMI-S Provider. The EMC SMI-S
Provider V4.3.2 (or later; the current version is 4.4.0) supports CIM Schema
V2.31.0.
Discovery
VMM discovers storage objects on a storage array or on a Hyper-V VM host.
See also the section "Scenario 1: End-to-End Discovery and End-to-End
Mapping."
Distributed
Management Task
Force (DMTF)
An international organization that promotes the development of standards
that simplify management of millions of IT systems worldwide. DMTF creates
standards that enable interoperability at the enterprise level among multivendor systems, tools, and solutions.
DMTF
See the entry for "Distributed Management Task Force (DMTF)."
ECIM
See the entry for "EMC Common Information Model (ECIM)."
ECOM
See the entry for "EMC Common Object Manager (ECOM)."
EMC Common
Information Model
(ECIM)
Defines a CIM-based model for representing IT objects (for example:
EMC_NetworkPort, which is a subclass of the CIM class CIM_NetworkPort).
EMC Common Object
Manager (ECOM)
Serves as the interoperability hub for the EMC Common Management
Platform (CMP) that manages EMC storage systems.
EMC SMI-S Provider
EMC software that uses SMI-S to allow management of EMC arrays. EMC
SMI-S Provider V4.3.2 (or later; V4.4.0 is current) is certified by SNIA as
compliant with SMI-S 1.3, 1.4, and 1.5. VMM uses the EMC SMI-S Provider to
discover arrays, storage pools, and logical units; to classify storage; to assign
storage to one or more host groups; to create, clone, snapshot, or delete
logical units; and to unmask or mask logical units to a Hyper-V host or
cluster.
EMC WBEM
Uses ECOM to provide a single WBEM infrastructure across all EMC
hardware and software platforms. WBEM is the standard that enables ECOM
to serve as the interoperability hub of the EMC CMP. EMC SMI-S Provider
4.2.3 (or later) uses EMC WBEM.
endpoint
(host initiator endpoints
-andstorage endpoints)
Two endpoints are associated with each other and are thus best described
together:
 host initiator endpoints
 storage endpoints
Host initiator endpoints on a Hyper-V VM host are bound (mapped) to
storage endpoints on the target array. This mapping is done through an
intermediary called a storage group (also called a masking set or SPC).
See also the lists of synonyms for initiator endpoints and storage endpoints
in section "Storage Groups Unmask Logical Units to Hyper-V VM Hosts"
earlier in this document.
Fibre Channel (FC)
A gigabit-speed network technology used to connect devices on enterprise-scale
storage area networks (SANs). FC is an ANSI standard. FC signaling can
use not only fiber-optic cables but also twisted-pair copper wire.
Fibre Channel Protocol
(FCP)
A transport protocol (analogous to TCP on IP networks) that sends SCSI
commands over Fibre Channel networks. All EMC storage systems support
FCP.
gatekeeper
(EMC Symmetrix arrays)
Gatekeepers on a Symmetrix storage array provide communication paths
into the array used by external software to monitor and/or manage the
array. A gatekeeper "opens the gate" to enable low-level SCSI commands to
be routed to the array.
hardware VDS
See the entry for "Virtual Disk Service (VDS)"
HBA
See the entry for "host bus adapter (HBA)."
host agent
Service installed on Hyper-V servers (VM hosts) that communicates with the
VMM Server. VMM does not install host agents for Citrix XenServer hosts or
VMware ESX hosts.
host bus adapter (HBA)
Connects a host computer to a storage device for input/output (I/O)
processing. An HBA is a physical device that contains one or more ports; a
single system contains one or more HBAs. FC HBAs are more common, but
iSCSI HBAs also exist:
 FC HBA: A physical card on the host that acts as the initiator that sends
commands from the host to storage devices on a target array
 iSCSI HBA: A physical card on the host that acts as the initiator that sends
commands from the host to storage devices on a target array
A computer with more than one HBA can connect to multiple storage
devices. HBA is used in this paper specifically to refer to one or more devices
on a VM host that initiates a connection, typically via an FC HBA, to storage
arrays.
IETF
See the entry for "Internet Engineering Task Force (IETF)."
initiator / target
These terms are binary opposites and are thus best defined together:
 initiator (on the host): The endpoint (a SCSI port or an FC port) on the
host that requests information and receives responses from the target array.
 target (on the array): The endpoint (a SCSI port or an FC port) that
returns information requested by the initiator. A target consists of one or
more LUNs and, typically, returns one or more LUNs to the initiator.
See also the entry for "endpoint."
Internet Engineering
Task Force (IETF)
An international organization that promotes the publication of high-quality,
relevant technical documents and Internet Standards that influence the way
that people design, use, and manage the Internet. IETF focuses on improving
the Internet from an engineering point of view. The IETF's official products
are documents, called RFCs, published free of charge.
Internet SCSI
See the entry for "Internet Small Computer System Interface (iSCSI)."
Internet Small
Computer System
Interface (iSCSI)
An IP-based standard developed by IETF that links data storage devices to
each other and to computers. iSCSI carries SCSI packets (SCSI commands)
over TCP/IP networks, including local area networks (LANs), wide area
networks (WANs), and the Internet. iSCSI supports storage area networks
(SANs) by enabling location-independent data storage and retrieval and by
increasing the speed of transmission of storage data. Almost all EMC storage
systems support iSCSI in addition to supporting FC (one exception is the VNX
5100, which supports only FC).
iSCSI
See the entry for "Internet Small Computer System Interface (iSCSI)."
iSCSI initiator
See the entry for "initiator / target."
iSCSI target
See the entry for "initiator / target."
logical unit
A unit of storage within a storage pool on a storage array in a SAN. (A logical unit is identified by a logical unit number, or LUN.) Each logical unit exported by an array controller corresponds to a virtual disk. From the perspective of a host computer that can access that logical unit, the logical unit appears as a disk drive.
In VMM, a logical unit is typically a virtual disk that contains the VHD file for a VM.
Example commands (run in the VMM PowerShell command shell):
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,ServerConnection | fl
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,Description,Enabled,SMDisplayName,SMName,SMLunIdFormat,SMLunIdDescription
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,BlockSize,NumberOfBlocks,ConsumableBlocks,TotalCapacity,InUseCapacity,AllocatedCapacity,RemainingCapacity
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,WorkloadType,Status,ThinlyProvisioned,StoragePool,StorageGroups,HostGroup,IsAssigned,IsViewOnly | fl
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,SourceLogicalUnit,LogicalUnitCopies,LogicalUnitCopySource | fl
Example commands — Register a logical unit with a host:
$VMHost = Get-SCVMHost -ComputerName "VMHost01"
$LU = Get-SCStorageLogicalUnit -Name "LUN01"
Register-SCStorageLogicalUnit -StorageLogicalUnit $LU -VMHost $VMHost
logical unit number
(LUN)
A number that identifies a logical unit of storage within a storage pool on a
SAN array. Frequently, the acronym LUN is used as a synonym for the logical
unit that it identifies.
LUN mapping
Refers to configuring access paths (via a target port) to logical units to make
storage represented by logical units available for use by servers.
LUN masking
Refers to configuring access permissions to determine which hosts have access to specific logical units on a SAN.
LUN mask
A LUN mask is a set of access permissions that identify which initiator (on a
host) can access specific LUNs on a target (an array). This mask makes
available a LUN (and the logical unit of storage identified by that LUN) to
specified hosts, and makes that LUN unavailable to other hosts.
mask / unmask
These terms are binary opposites and are thus best defined together:
 Unmask: Assign a logical unit to a host or host cluster.
 Mask: Hide a logical unit from a host or host cluster.
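In VMM terms, unmasking corresponds to registering a logical unit with a host, and masking to unregistering it. A minimal sketch (run in the VMM PowerShell command shell; the host and LUN names are placeholders):

```powershell
# Unmask: assign (register) the logical unit to the host
$VMHost = Get-SCVMHost -ComputerName "VMHost01"
$LU = Get-SCStorageLogicalUnit -Name "LUN01"
Register-SCStorageLogicalUnit -StorageLogicalUnit $LU -VMHost $VMHost

# Mask: hide (unregister) the logical unit from the host
Unregister-SCStorageLogicalUnit -StorageLogicalUnit $LU -VMHost $VMHost
```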
Microsoft Storage
Management Service
A service (a WMI provider installed by default on the VMM Server) used by
VMM to discover storage objects and to manage storage operations. This
service is an SMI-S client that communicates with the SMI-S Provider server
over the network; it converts retrieved SMI-S objects to Storage
Management Service objects that VMM can manage.
N_Port
(applies only to Fibre
Channel)
A port on the node (located either on a host or on a storage device) in a
Fibre Channel SAN. Also known as a node port.
N_Port ID Virtualization
(NPIV)
(applies only to Fibre
Channel)
Enables multiple N_Port IDs to share a single physical N_Port. This allows
multiple FC initiators to occupy a single physical port, easing hardware
requirements for SANs.
In VMM, the NPIV Provider (on a VM host) uses HBA technology to create virtual HBA ports (also called vPorts) on hosts, enabling a single physical FC port to function as multiple logical ports, each with its own identity. VMM 2012 automates the creation (and deletion) of vPorts as part of the SAN transfer of a VM (from one computer to another) on an FC SAN. VMM 2012 does not create vPorts when creating a new VM.
NPIV
See the entry for "N_Port ID Virtualization (NPIV)."
OE
See the entry for Operating Environment (OE).
Operating Environment
(OE)
Operating Environment (array OS) on an EMC storage array:
 Enginuity. A specialized operating environment (OE) designed by EMC for
data storage; used to control components in a Symmetrix array.
 FLARE. A specialized operating environment (OE) designed by EMC for
data storage and used to control components in a CLARiiON array. FLARE
manages all input/output (I/O) functions of the storage array.
 VNX OE. A specialized operating environment designed by EMC to
provide file and block code for a unified system. VNX OE contains basic
features, such as thin provisioning. For advanced features, you can buy
add-ons, such as the Total Efficiency Pack.
See also the entry for "Array OS."
rapid provisioning
VM creation that uses SAN snapshot or clone technologies.
SCSI initiator
See the entry for "initiator / target."
SCSI target
See the entry for "initiator / target."
self-hosted service
A service that runs within a process (application) that the developer created.
The developer controls its lifetime, sets the properties of the service, opens
the service (which sets it into a listening mode), and closes the service.
Services can be self-hosted or can be managed by an existing hosting
process.
Small Computer
Systems Interface (SCSI)
A set of standards that define how to physically connect, and transfer data
between, computers and external devices such as storage arrays. SCSI
standards define commands, protocols, and electrical and optical interfaces.
Typically, a computer is an "initiator" and a data storage device is a "target."
SMI-S
See the entry for "Storage Management Initiative Specification (SMI-S)."
SMI-S module
A component of the Microsoft Storage Management Service that maps
Storage Management Service objects to SMI-S objects.
SMI-S Provider
An implementation of the SMI-S standard. An SMI-S Provider is software
developed by a storage vendor to enable management of diverse storage
devices in a common way. Thus, an SMI-S Provider provides the interface
between a management application (such as VMM) and multiple storage
arrays. The EMC SMI-S Provider is the EMC implementation of the SMI-S
standard.
SNIA
See the entry for "Storage Networking Industry Association (SNIA)."
software VDS
See the entry for "Virtual Disk Service (VDS)."
storage area network
(SAN)
A dedicated network that provides access to consolidated, block level data
storage, thus making storage devices, such as disk arrays, accessible to
servers. Storage devices appear, to the server's operating system, like locally
attached devices.
VMM 2012 supports FC and iSCSI SANs:
 FC SAN: The VM host uses a host bus adapter (HBA) to access the array
by initiating a connection to a target on the array.
 iSCSI SAN: The VM host uses the Microsoft iSCSI Initiator Service to
access the array by issuing a SCSI command to a target on the array.
storage array
A disk storage system that contains multiple disk drives attached to a SAN in
order to make storage resources available to servers. Also called a storage
system.
 SANs make storage arrays available to servers; arrays appear like locally
attached devices to the server operating system.
 EMC storage systems that support the VMM private cloud include the
Symmetrix VMAX family, the CLARiiON CX4 Series, and the VNX family.
 VMM discovers storage resources on storage arrays and can then make
storage resources available to VM hosts. An array in a VMM private cloud
must support the FC or iSCSI storage protocol, or both. Within an array,
the storage elements most important to VMM are storage pools and
logical units.
storage classification
A string value defined in VMM and associated with a storage pool that
represents a level of service or quality of service guarantee. One typical
naming convention used is to categorize storage pools as "Gold," "Silver,"
"Bronze," and so on.
storage group
Binds host initiator endpoints on a Hyper-V host to storage endpoints on the
target array. VMM discovers existing storage groups but does not display
storage groups in the VMM Console. Instead, you can display storage groups
by using the following VMM PowerShell command:
Get-SCStorageArray -All | Select-Object Name,ObjectType,StorageGroups |
Format-List
Synonyms:
 Masking set
 SCSI Protocol Controller (SPC)
See also:
 The entry for "endpoint."
 The entry for "initiator / target."
 Appendix B: Array Masking and Hyper-V Host Clusters
Storage Management
Initiative Specification
(SMI-S)
A standard developed by the Storage Networking Industry Association
(SNIA). SMI-S defines a standardized management interface that enables a
management application, such as VMM, to discover, assign, configure, and
automate functionality for heterogeneous storage systems in a unified way.
An SMI-S Provider implements the SMI-S standard. The EMC SMI-S Provider enables VMM to manage EMC VMAX, CLARiiON, and VNX arrays in a unified way.
storage management
service
See the entry for "Microsoft Storage Management Service."
Storage Networking
Industry Association
(SNIA)
An international organization that develops management standards related
to data, storage, and information management in order to address
challenges such as interoperability, usability, and complexity. The SNIA
standard that is central to VMM 2012 storage automation is the Storage
Management Initiative Specification (SMI-S).
storage pool
A repository of homogeneous or heterogeneous physical disks on a storage
array from which logical units (often called LUNs) can be created. A storage
pool on an array can be categorized by VMM based on service level
agreement (SLA) factors such as performance. One typical naming
convention used is to categorize storage pools as "Gold," "Silver," "Bronze,"
and so on.
To see information about the storage pools in your environment, run the following in the VMM PowerShell command shell:
$Pools = Get-SCStoragePool
$Pools | Select-Object ObjectType, Name, StorageArray, IsManaged, Classification, TotalManagedSpace, RemainingManagedSpace, StorageLogicalUnits | where {$_.IsManaged -eq "True" -and $_.Name -eq "Pool 1"}
Example output:
ObjectType            : StoragePool
Name                  : Pool 1
StorageArray          : APM00111102546
IsManaged             : True
Classification        : EMC_VNX_Bronze
TotalManagedSpace     : 2301219569664
RemainingManagedSpace : 2100037681152
StorageLogicalUnits   : {LUN 73, LUN 67, LUN 56, LUN 30...}
storage system
See the entry for "storage array."
target / initiator
See the entry for "initiator / target."
thin provisioning
Configurable feature that lets you allocate storage based on fluctuating demand: a thinly provisioned logical unit presents its full size to the host, but the array consumes physical capacity only as data is written.
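If the array and SMI-S Provider support thin provisioning, a new logical unit can be created as thin from the VMM PowerShell command shell. A sketch only: the pool and LUN names are placeholders, and availability of the ProvisioningType parameter depends on your VMM release and on provider support:

```powershell
# Create a 100-GB thinly provisioned logical unit in a managed pool
$Pool = Get-SCStoragePool -Name "Pool 1"
New-SCStorageLogicalUnit -StoragePool $Pool -Name "ThinLUN01" `
    -DiskSizeMB 102400 -ProvisioningType Thin
```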
unmask / mask
See the entry for "mask / unmask."
Virtual Disk Service
(VDS)
VDS can refer to either of the following, which should not be confused:
 VDS software provider on the VM host (central to VMM 2012): Retrieves
disk and volume information on the host, initializes and partitions disks
on the host, and formats and mounts volumes on the host.
 VDS hardware provider on the VMM Server (deprecated in VMM 2012):
Used only for storage arrays that do not support SMI-S. The VDS
hardware provider can discover and communicate with SAN arrays and
can enable SAN transfers, but the VDS hardware provider does not
support automated provisioning.
VM host
A physical computer (managed by VMM) on which you can deploy one or more VMs. VMM 2012 supports Hyper-V hosts (on which the VMM agent is installed), VMware ESX hosts, and Citrix XenServer hosts. However, in the current release, VMM supports storage provisioning only for Hyper-V hosts.
VMM PowerShell
command shell
Command-line interface (CLI) for the VMM Server. VMM 2012 provides 450
Windows PowerShell cmdlets developed specifically for VMM to perform all
tasks that are available in the VMM Console. VMM 2012 includes 25 new
storage-specific cmdlets.
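One way to enumerate the storage-specific cmdlets is to query the VMM module by noun prefix. The module name shown here is the VMM 2012 module name:

```powershell
# List the storage cmdlets that ship with VMM 2012
Import-Module virtualmachinemanager
Get-Command -Module virtualmachinemanager -Noun SCStorage* |
    Sort-Object Noun | Format-Table Name, Noun -AutoSize
```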
VMM Console
Graphical user interface (GUI) for the VMM Server. You can use the VMM
Console on the VMM Server or from a remote computer.
VMM library server
File server (managed by VMM) used as a repository to store files (used for VMM tasks) such as virtual hard disks (VHDs), ISOs, scripts, VM templates (typically used for rapid provisioning), service templates, application installation packages, and other files.
VHD files used to support rapid provisioning of VMs are contained within
LUNs on the arrays but are mounted to folders on the Library server.
You can install the VMM Library on the VMM Server, on a VM host, or on a
standalone Hyper-V host.
VMM Management
Server (VMM Server)
Service used to manage VMM objects such as virtual machines, hypervisor
physical servers, storage, network, clouds, and services. Also called VMM
Server.
WBEM
See the entry for "Web-Based Enterprise Management (WBEM)."
Web Services Management (WS-Man)
Enables IT systems to access and exchange management information. WS-Man is a DMTF standard that supports the use of Web services to enable remote access to network devices and promotes interoperability between management applications and managed resources.
Web-Based Enterprise
Management (WBEM)
A group of standards that enable accessing information about and managing
compute, network, and storage resources in an enterprise-scale distributed
environment. WBEM includes:
 CIM: A model to represent resources
 CIM-XML: An XML-based protocol, CIM-XML over HTTP, that lets network components communicate
 WS-Man: A SOAP-based protocol, Web Services for Management (WS-Management, or WS-Man), that lets network components communicate
 xmlCIM: An XML representation of CIM models and messages that travel via CIM-XML
Windows Management
Instrumentation (WMI)
The Microsoft implementation of the WBEM standard that enables accessing
management information in an enterprise-scale distributed environment.
 WMI uses the CIM standard to represent systems, applications, networks,
devices, and other managed components.
 The WMI Service is the Windows implementation of the CIM Object
Manager (CIMOM), which provides applications with uniform access to
management data.
 The Microsoft Storage Management Service that VMM 2012 uses to
communicate with the SMI-S Provider is implemented as a WMI provider.
Windows Remote
Management (WinRM)
The Microsoft implementation of WS-Man. WinRM enables Windows
PowerShell 2.0 cmdlets and scripts to be invoked on one or more remote
machines.
WS-Man
See the entry for "Web Services Management (WS-Man)."
Appendix F: References
Sources listed in this appendix focus on storage automation enabled by the SNIA SMI-S standard in the
context of EMC storage systems and the VMM 2012 private cloud.
Standards Sources
Table 62: SNIA, DMTF, and other standards related to storage automation
Source | Website | Link
DMTF | CIM Infrastructure Specification | http://dmtf.org/sites/default/files/standards/documents/DSP0004_2.6.0_0.pdf
DMTF | CIM Operations Over HTTP | http://www.dmtf.org/sites/default/files/standards/documents/DSP0200_1.3.1.pdf
DMTF | CIM Schema 2.31.0 Release Notes | http://www.dmtf.org/sites/default/files/cim/cim_schema_v2310/releasenotes.html
DMTF | CIM Schema: Version 2.8.2 (Final) | http://dmtf.org/standards/cim/cim_schema_v282
DMTF | Common Information Model (CIM) | http://dmtf.org/standards/cim
DMTF | DMTF Tutorial | http://www.wbemsolutions.com/tutorials/DMTF/index.html
DMTF | Standards and Technology | http://dmtf.org/standards
DMTF | Web Services Management (WS-MAN) | http://dmtf.org/standards/wsman
DMTF | Web-Based Enterprise Management (WBEM) | http://dmtf.org/standards/wbem
Microsoft | Windows Management Instrumentation (WMI) | http://msdn.microsoft.com/en-us/library/windows/desktop/aa394582(v=vs.85).aspx
SNIA | SMI Specification | http://www.snia.org/sites/default/files/SMI-Sv1.6r4Block.book_.pdf
SNIA | SMI-Lab Program | http://www.snia.org/forums/smi/tech_programs/lab_program
SNIA | SMI-S Conforming Provider Companies | http://www.snia.org/ctp/conformingproviders/index.html
SNIA | SNIA – SMI-S Conformance Testing Program – Official CTP Test Results – EMC Corporation | http://www.snia.org/ctp/conformingproviders/emc.html
SNIA | SNIA Conformance Testing Program (SNIA-CTP) | http://www.snia.org/ctp/
SNIA | SNIA Storage Management Initiative (SMI) home page | http://www.snia.org/smi/home/
SNIA | Storage Management Initiative (SMI) forums | http://www.snia.org/forums/smi
SNIA | Storage Management Initiative Specification (SMI-S) | http://www.snia.org/tech_activities/standards/curr_standards/smi
SNIA | Storage Management Technical Specification Overview | http://www.snia.org/sites/default/files/SMI-Sv1.3r6_Overview.book_.pdf
SNIA | Storage Networking Industry Association (SNIA) | http://www.snia.org/
SNIA | Video – SMI Overview | http://www.snia.org/forums/smi/video/smioverview
EMC Sources
Typically, you can find all EMC documents on EMC Powerlink at:
http://powerlink.emc.com
You can also find EMC documentation on the What's New page at EMC Support:
https://support.emc.com
The following table lists EMC sources relevant to storage systems that support VMM storage automation.
Table 63: EMC sources related to VMM 2012 storage automation
Source | Website | Link
EMC | Arrays – Announcing the EMC Symmetrix VMAX 40K, 20K, 10K Series and Enginuity 5876 | http://powerlink.emc.com/km/live1/en_US/Offering_Basics/Articles_and_Announcements/Symmetrix-VMAX-40K-Enginuity-5876-article.docx
EMC | Arrays – CLARiiON Data Sheet – EMC CLARiiON CX4 Series | http://www.emc.com/collateral/hardware/data-sheet/h5527-emc-clariion-cx4-ds.pdf
EMC | Arrays – CLARiiON Data Sheet – EMC CLARiiON CX4 Series – Virtual | http://www.emc.com/collateral/hardware/data-sheet/h5521-clariion-cx4-virtual-ds.pdf
EMC | Arrays – CLARiiON Data Sheet – EMC Replication Manager and SnapView Replication for EMC CLARiiON Arrays in Physical and Virtual Environments | http://www.emc.com/collateral/software/data-sheet/h2306-clariion-rep-snap-ds.pdf
EMC | Arrays – EMC Hardware/Platforms Documentation | http://powerlink.emc.com/km/appmanager/km/secureDesktop?_nfpb=true&_pageLabel=image7b&internalId=0b01406680024e24&_irrt=true
EMC | Arrays – "Introduction to the EMC CLARiiON CX4 Series Featuring UltraFlex Technology" (December 2009) | http://www.emc.com/collateral/hardware/white-papers/h5534-intro-clariion-cx4-series-ultraflex-tech-wp.pdf
EMC | Arrays – Introduction to the EMC VNX Series: A Detailed Review (September 2011) | http://www.emc.com/collateral/hardware/white-papers/h8217-introduction-vnx-wp.pdf
EMC | Arrays – Symmetrix Data Sheet – VMAX 10K | http://www.emc.com/collateral/hardware/data-sheet/h8816-symmetrix-vmax-10k-ds.pdf
EMC | Arrays – Symmetrix Data Sheet – VMAX 20K | http://www.emc.com/collateral/hardware/data-sheet/h6193-symmetrix-vmax-20k-ds.pdf
EMC | Arrays – Symmetrix Data Sheet – VMAX 40K | http://www.emc.com/collateral/hardware/data-sheet/h9716-symmetrix-vmax-40k-ds.pdf
EMC | Arrays – VNX Data Sheet – EMC VNX Family: Next-generation unified storage, optimized for virtualized applications | http://www.emc.com/collateral/hardware/data-sheets/h8520-vnx-family-ds.pdf
EMC | Arrays – VNX Data Sheet – EMC VNX Series Total Efficiency Pack | http://www.emc.com/collateral/software/data-sheet/h8509-vnx-software-suites-ds.pdf
EMC | Cloud – Everything Microsoft at EMC | https://community.emc.com/community/connect/everything_microsoft
EMC | Cloud – Inside the Partnership (EMC/MSFT) – ITP23 | http://www.youtube.com/watch?v=9trcD-oGkkQ
EMC | Cloud – Microsoft Virtualization and Private Cloud Solutions (on EMC.com) | http://www.emc.com/hypervcloud
EMC | Cloud – Solutions for Microsoft | http://www.emc.com/solutions/microsoft
EMC | ECIM – EMC Common Information Model | https://corpusweb172.corp.emc.com/eRoom/spoadvtech/EMCSMI/0_135a
EMC | ECOM – ECOM Deployment and Configuration Guide | http://developer.emc.com/developer/devcenters/storage/snia/smi-s/downloads/SMIProvider_V430_ECOMDeploymentandConfigurationGuide.pdf
EMC | EMC and the SNIA SMI-S | http://developer.emc.com/developer/devcenters/storage/snia/smi-s/index.htm
EMC | SMI-S Provider Download | http://powerlink.emc.com/km/appmanager/km/secureDesktop?_nfpb=true&_pageLabel=servicesDownloadsTemplatePg&internalId=0b014066800251b8&_irrt=true
EMC | EMC SMI-S Provider Release Notes V4.4.0 – direct link to the current version | http://powerlink.emc.com/km/live1/en_US/Offering_Technical/Technical_Documentation/300-013-992.pdf?mtcs=ZXZlbnRUeXBlPUttQ2xpY2tTZWFyY2hSZXN1bHRzRXZlbnQsZG9jdW1lbnRJZD0wOTAxNDA2NjgwNjZmOTdmLGRhdGFTb3VyY2U9RENUTV9lbl9VU18w
EMC | EMC SMI-S Provider Release Notes V4.4.0 (or later) – navigate to the most recent version | 1. Open http://powerlink.emc.com 2. In Search Powerlink, type: "SMI-S Provider Release Notes" (Tip: include the quotation marks.)
Microsoft Sources
Table 64: Microsoft sources related to VMM 2012 storage automation
Source | Website | Link
Microsoft | Storage automation in VMM 2012 | http://blogs.technet.com/b/scvmm/archive/2011/03/29/storage-automation-in-vmm-2012.aspx
Microsoft | Cloud – Microsoft Private Cloud | http://www.microsoft.com/privatecloud
Microsoft | MTC – Microsoft Technology Center Alliances Program – EMC | http://www.microsoft.com/en-us/mtc/partners/emc2.aspx
Microsoft | MTC – Microsoft Technology Centers | http://www.microsoft.com/en-us/mtc/default.aspx
Microsoft | MTC – Microsoft Technology Centers Alliances Program | http://www.microsoft.com/en-us/mtc/partners/alliance.aspx
Microsoft | SMI-S – Microsoft SMI-S Roadmap Update | http://www.snia.org/sites/default/files2/SDC2011/presentations/wednesday/JeffGoldner_Microsoft_Roadmap_Update.pdf
Microsoft | VMM – System Requirements: VMM Management Server (for installing VMM 2012 in a production environment) | http://technet.microsoft.com/en-us/library/gg610562.aspx
Microsoft | VMM – Technical Documentation Download for System Center 2012 – Virtual Machine Manager | http://www.microsoft.com/download/en/details.aspx?id=6346
Microsoft | VMM – Virtual Machine Manager page (online VMM product team page) | http://technet.microsoft.com/en-us/library/gg610610.aspx
Microsoft | Video – How Microsoft IT Uses System Center Virtual Machine Manager to Manage the Private Cloud | http://technet.microsoft.com/en-us/edge/Video/hh748210
Microsoft | Video – Private Cloud Jump Start (01): Introduction to the Microsoft Private Cloud with System Center 2012 | http://technet.microsoft.com/en-US/edge/private-cloud-jump-start-01-introduction-to-the-microsoft-private-cloud-with-system-center-2012
Microsoft | Video – Private Cloud Jump Start (02): Configure and Deploy Infrastructure Components | http://technet.microsoft.com/en-us/edge/video/private-cloud-jump-start-02-configure-and-deploy-infrastructure-components