
Proven Infrastructure
EMC® VSPEX™ PRIVATE CLOUD
VMware vSphere® 5.1 for up to 100 Virtual Machines
Enabled by Microsoft Windows Server 2012, EMC VNXe™, and
EMC Next-Generation Backup
EMC VSPEX
Abstract
This document describes the EMC VSPEX Proven Infrastructure solution for
private cloud deployments, including VMware vSphere with EMC VNXe for up to
100 virtual machines using NFS or iSCSI storage.
March 2013
Copyright © 2013 EMC Corporation. All rights reserved. Published in the USA.
Published March 2013
EMC believes the information in this publication is accurate as of its publication
date. The information is subject to change without notice.
The information in this publication is provided "as is." EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose. Use, copying, and distribution of any EMC software
described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC
Corporation in the United States and other countries. All other trademarks used
herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical
documentation and advisories section on the EMC online support website.
Part Number H11328.2
Contents
Chapter 1    Executive Summary ........................................................................ 13
Introduction .................................................................................................. 14
Target audience ............................................................................................ 14
Document purpose ....................................................................................... 14
Business needs ............................................................................................ 15
Chapter 2    Solution Overview .......................................................................... 17
Overview ....................................................................................................... 18
Virtualization ................................................................................................ 18
Compute ....................................................................................................... 18
Network ........................................................................................................ 19
Storage ......................................................................................................... 19
Chapter 3    Solution Technology Overview ....................................................... 21
Overview ....................................................................................................... 22
Summary of key components ........................................................................ 22
Virtualization ................................................................................................ 23
Overview .............................................................................................................. 23
VMware vSphere 5.1 ............................................................................................ 23
VMware vCenter ................................................................................................... 24
VMware vSphere High Availability (VMHA) ........................................................... 24
Integration with other VMware products ............................................................... 24
EMC Virtual Storage Integrator for VMware ........................................................... 28
VNXe VMware vStorage API for Array Integration support...................................... 29
Compute ....................................................................................................... 29
Network ........................................................................................................ 31
Storage ......................................................................................................... 33
Overview .............................................................................................................. 33
EMC VNXe series .................................................................................................. 33
Backup and recovery..................................................................................... 34
Overview .............................................................................................................. 34
EMC NetWorker .................................................................................................... 34
EMC Avamar......................................................................................................... 34
Other technologies ....................................................................................... 35
EMC XtremSW Cache (optional) ............................................................................ 35
Chapter 4    Solution Architecture Overview ...................................................... 37
Overview ....................................................................................................... 38
Solution architecture .................................................................................... 38
Architecture for up to 50 virtual machines ............................................................ 38
Architecture for up to 100 virtual machines .......................................................... 39
Key components .................................................................................................. 40
Hardware resources ............................................................................................. 41
Software resources .............................................................................................. 43
Server configuration guidelines .................................................................... 44
Overview .............................................................................................................. 44
VMware vSphere memory virtualization for VSPEX................................................ 44
Memory configuration guidelines ......................................................................... 46
Network configuration guidelines ................................................................. 47
Overview .............................................................................................................. 47
VLAN .................................................................................................................... 47
Enable jumbo frames ........................................................................................... 48
Link Aggregation .................................................................................................. 48
Storage configuration guidelines .................................................................. 49
Overview .............................................................................................................. 49
VMware vSphere storage virtualization for VSPEX ................................................ 49
Storage layout for 50 virtual machines ................................................................. 50
Storage layout for 100 virtual machines ............................................................... 53
High Availability and failover ........................................................................ 55
Overview .............................................................................................................. 55
Virtualization layer ............................................................................................... 55
Compute layer...................................................................................................... 55
Network layer ....................................................................................................... 56
Storage layer ........................................................................................................ 56
Validation test profile ................................................................................... 57
Profile characteristics........................................................................................... 57
Backup environment configuration guide ..................................................... 58
Backup characteristics ......................................................................................... 58
Backup layout ...................................................................................................... 58
Sizing guidelines .......................................................................................... 59
Reference workload ...................................................................................... 59
Defining the reference workload........................................................................... 59
Applying the reference workload ................................................................... 60
Overview .............................................................................................................. 60
Example 1: Custom-built application.................................................................... 60
Example 2: Point of sale system ........................................................................... 61
Example 3: Web server ......................................................................................... 61
Example 4: Decision support database ................................................................ 61
Summary of examples.......................................................................................... 62
Implementing the reference architectures ..................................................... 62
Overview .............................................................................................................. 62
Resource types .................................................................................................... 62
CPU resources ...................................................................................................... 63
Memory resources................................................................................................ 63
Network resources ............................................................................................... 63
Storage resources ................................................................................................ 64
Implementation summary .................................................................................... 64
Quick assessment......................................................................................... 65
Overview .............................................................................................................. 65
CPU requirements ................................................................................................ 65
Memory requirements .......................................................................................... 65
Storage performance requirements ...................................................................... 66
Storage capacity requirements ............................................................................. 67
Determining Equivalent Reference Virtual Machines ............................................ 67
Fine tuning hardware resources ........................................................................... 70
Chapter 5    VSPEX Configuration Guidelines .................................................... 75
Configuration overview ................................................................................. 76
Deployment process ............................................................................................ 76
Pre-deployment tasks ................................................................................... 77
Overview .............................................................................................................. 77
Deployment prerequisites .................................................................................... 77
Customer configuration data ......................................................................... 78
Prepare switches, connect network, and configure switches......................... 79
Overview .............................................................................................................. 79
Prepare network switches .................................................................................... 79
Configure infrastructure network .......................................................................... 79
Configure VLANs .................................................................................................. 81
Complete network cabling.................................................................................... 81
Prepare and configure storage array ............................................................. 82
VNXe configuration .............................................................................................. 82
Install and configure VMware vSphere hosts ................................................ 86
Overview .............................................................................................................. 86
Install ESXi ........................................................................................................... 86
Configure ESXi networking ................................................................................... 86
Jumbo frames....................................................................................................... 87
Connect VMware datastores................................................................................. 87
Plan virtual machine memory allocations ............................................................. 87
Install and configure SQL Server database .................................................... 90
Overview .............................................................................................................. 90
Create a virtual machine for Microsoft SQL Server ................................................ 90
Install Microsoft Windows on the virtual machine ................................................ 91
Install SQL Server ................................................................................................. 91
Configure database for VMware vCenter ............................................................... 91
Configure database for VMware Update Manager ................................................. 91
Install and configure VMware vCenter Server ................................................ 92
Overview .............................................................................................................. 92
Create the vCenter host virtual machine ............................................................... 93
Install the vCenter guest OS ................................................................................. 93
Create vCenter ODBC connections ........................................................................ 93
Install vCenter Server ........................................................................................... 93
Apply vSphere license keys .................................................................................. 93
Deploy the VNX VAAI NFS plug-in (NFS implementation only) ............................... 93
Install the EMC VSI plug-in ................................................................................... 93
Summary ............................................................................................................. 94
Chapter 6    Validating the Solution ................................................................... 95
Overview ....................................................................................................... 96
Post-install checklist ..................................................................................... 97
Deploy and test a single virtual server .......................................................... 97
Verify the redundancy of the solution components ....................................... 97
Appendix A    Bill of Materials ........................................................................... 99
Bill of materials ........................................................................................... 100
Appendix B    Customer Configuration Data Sheet .......................................... 103
Customer configuration data sheets ........................................................... 104
Appendix C    References ................................................................................ 107
References .................................................................................................. 108
EMC documentation ........................................................................................... 108
Other documentation ......................................................................................... 108
Appendix D    About VSPEX ............................................................................. 109
About VSPEX ............................................................................................... 110
Figures
Figure 1.   VSPEX private cloud components ....................................................... 22
Figure 2.   Compute layer flexibility ..................................................................... 30
Figure 3.   Example of highly available network design ....................................... 32
Figure 4.   Logical architecture for 50 virtual machines........................................ 39
Figure 5.   Logical architecture for 100 virtual machines ..................................... 39
Figure 6.   Hypervisor memory consumption ....................................................... 45
Figure 7.   Required networks ............................................................................. 48
Figure 8.   VMware virtual disk types ................................................................... 50
Figure 9.   iSCSI storage architecture for 50 virtual machines on EMC VNXe3150 . 51
Figure 10.  NFS storage architecture for 50 virtual machines on EMC VNXe3150 .. 52
Figure 11.  iSCSI storage architecture for 100 virtual machines on EMC VNXe3300 .................................................................................................... 53
Figure 12.  NFS storage architecture for 100 virtual machines on EMC VNXe3300  54
Figure 13.  High Availability at the virtualization layer .......................................... 55
Figure 14.  Redundant Power Supplies................................................................. 55
Figure 15.  Network layer High Availability ........................................................... 56
Figure 16.  VNXe series high availability .............................................................. 57
Figure 17.  Resource pool flexibility ..................................................................... 62
Figure 18.  Required resources from the Reference Virtual Machine Pool ............. 68
Figure 19.  Aggregate resource requirements from the Reference Virtual Machine Pool.................................................................................................... 69
Figure 20.  Customizing server resources ............................................................. 70
Figure 21.  Sample Ethernet network architecture (iSCSI) ..................................... 80
Figure 22.  Sample Ethernet network architecture (NFS) ....................................... 80
Figure 23.  Virtual machine memory settings ....................................................... 89
Tables
Table 1.    VNXe customer benefits ..................................................................... 33
Table 2.    Solution hardware .............................................................................. 41
Table 3.    Solution software ............................................................................... 43
Table 4.    Server hardware ................................................................................. 44
Table 5.    Network hardware .............................................................................. 47
Table 6.    Storage hardware ............................................................................... 49
Table 7.    Solution profile characteristics ........................................................... 57
Table 8.    Backup profile characteristics ............................................................ 58
Table 9.    Virtual machine characteristics........................................................... 60
Table 10.   Blank worksheet row .......................................................................... 65
Table 11.   Reference virtual machine resources .................................................. 67
Table 12.   Example worksheet row ...................................................................... 68
Table 13.   Example applications ......................................................................... 69
Table 14.   Server resource component totals ...................................................... 71
Table 15.   Blank customer worksheet ................................................................. 73
Table 16.   Deployment process overview ............................................................ 76
Table 17.   Tasks for pre-deployment ................................................................... 77
Table 18.   Deployment prerequisites checklist .................................................... 77
Table 19.   Tasks for switch and network configuration ........................................ 79
Table 20.   Tasks for storage configuration........................................................... 82
Table 21.   Tasks for server installation ................................................................ 86
Table 22.   Tasks for SQL Server database setup .................................................. 90
Table 23.   Tasks for vCenter configuration .......................................................... 92
Table 24.   Tasks for vCenter configuration validation .......................................... 96
Table 25.   List of components used in the VSPEX solution for 50 virtual machines ......................................................................................................... 100
Table 26.   List of components used in the VSPEX solution for 100 virtual machines ......................................................................................................... 101
Table 27.   Common server information ............................................................. 104
Table 28.   ESXi server information .................................................................... 104
Table 29.   Array information.............................................................................. 105
Table 30.   Network infrastructure information ................................................... 105
Table 31.   VLAN information ............................................................................. 105
Table 32.   Service accounts .............................................................................. 105
Chapter 1
Executive Summary
This chapter presents the following topics:
Introduction............................................................................................... 14
Target audience ......................................................................................... 14
Document purpose .................................................................................... 14
Business needs ......................................................................................... 15
Introduction
VSPEX validated and modular architectures are built with proven best-of-breed
technologies to create complete virtualization solutions that enable you to make an
informed decision at the hypervisor, compute, and networking layers. VSPEX helps to
reduce the burden of virtualization planning and configuration. Whether you are
embarking on server virtualization, virtual desktop deployment, or IT consolidation,
VSPEX accelerates IT transformation by enabling faster deployments, flexibility of
choice, greater efficiency, and lower risk.
This document is intended to be a comprehensive guide to the technical aspects of
this solution. Server capacity is provided in generic terms for required minimums of
CPU, memory, and network interfaces. The customer is free to select the server and
networking hardware that meet or exceed the stated minimums.
Target audience
The reader of this document is expected to have the necessary training and
background to install and configure VMware vSphere, EMC VNXe series storage
systems, and associated infrastructure as required by this implementation. External
references are provided where applicable and it is recommended that the reader be
familiar with these documents.
Readers are also expected to be familiar with the infrastructure and database security
policies of the customer installation.
Individuals focused on selling and sizing a VMware private cloud infrastructure
should pay particular attention to the first four chapters of this document. After
purchase, implementers of the solution may want to focus on the configuration
guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate
references and appendices.
Document purpose
This document contains an initial introduction to the VSPEX architecture, explanation
of how to modify the architecture for specific engagements, and instructions on how
to effectively deploy the system.
The VSPEX private cloud architecture provides the customer with a modern system
capable of hosting many virtual machines at a consistent performance level. This
solution runs on the VMware vSphere virtualization layer that is backed by the highly
available VNX family of storage. The compute and network components, defined by
the customer, are selected to be redundant and sufficiently powerful to handle the
processing and data needs of the virtual machine environment.
The 50 and 100 virtual machine environments discussed are based on a defined
reference workload. While not every virtual machine has the same requirements, this
document contains methods and guidance to adjust the system to be cost-effective
when deployed.
A private cloud architecture is a complex system offering. This document facilitates
its setup by providing up-front software and hardware material lists, step-by-step
sizing guidance and worksheets, and verified deployment steps. When the last
component has been installed, there are validation tests to ensure that your system
is operating properly. Following the procedures defined in this document ensures an
efficient and painless journey to the cloud.
Business needs
Business applications are moving into consolidated compute, network, and storage
environments. The EMC VSPEX private cloud solution using VMware reduces the
complexity of configuring every component of a traditional deployment model. The
complexity of integration management is reduced while application design and
implementation options are maintained. Administration is unified, while process
separation can be adequately controlled and monitored. The following are the
business needs met by the EMC VSPEX VMware private cloud architectures:
•	Provides an end-to-end virtualization solution to utilize the capabilities of the
unified infrastructure components
•	Provides a VSPEX for VMware private cloud solution for efficiently virtualizing
up to 100 virtual machines for varied customer use cases
•	Provides a reliable, flexible, and scalable reference design
Chapter 2
Solution Overview
This chapter presents the following topics:
Overview ................................................................................................... 18
Virtualization ............................................................................................. 18
Compute ................................................................................................... 18
Network..................................................................................................... 19
Storage ..................................................................................................... 19
Overview
The EMC VSPEX private cloud solution for VMware vSphere 5.1 provides a complete
system architecture capable of supporting up to 100 virtual machines with a
redundant server and network topology and highly available storage. The core
components that make up this particular solution are virtualization, compute,
network, and storage.
Virtualization
VMware vSphere is the leading virtualization platform in the industry. For years, it has
provided flexibility and cost savings to end users by enabling the consolidation of
large, inefficient server farms into nimble, reliable cloud infrastructures. The core
VMware vSphere components are the VMware vSphere Hypervisor and the VMware®
vCenter™ Server for system management.
The VMware vSphere Hypervisor runs on a dedicated server and allows multiple
operating systems to execute on the system at one time as virtual machines. These
hypervisor systems can be connected to operate in a clustered configuration. These
clustered configurations are then managed as a larger resource pool through the
vCenter product and allow for dynamic allocation of CPU, memory and storage across
the cluster.
Features like vMotion®, which allows a virtual machine to move between different
servers with no disruption to the operating system, and Distributed Resource
Scheduler (DRS), which performs vMotion operations automatically to balance load,
make vSphere a solid business choice.
With the release of vSphere 5.1, a VMware virtualized environment can host virtual
machines with up to 64 virtual CPUs and 1 TB of virtual RAM.
Compute
VSPEX allows the flexibility of designing and implementing the customer’s choice of
server components. The infrastructure has to conform to the following attributes:
 Sufficient CPU cores and RAM to support the required number and types of virtual machines
 Sufficient network connections to enable redundant connectivity to the system switches
 Excess capacity to withstand a server failure and allow failover within the environment
Network
VSPEX allows the flexibility of designing and implementing the customer’s choice of
network components. The infrastructure has to conform to the following attributes:
 Redundant network links for the hosts, switches, and storage
 Support for Link Aggregation
 Traffic isolation based on industry-accepted best practices
Storage
The EMC VNX storage family is the number one shared storage platform in the
industry. Its ability to provide both file and block access with a broad feature set
makes it an ideal choice for any private cloud implementation.
The VNXe storage components include the following, which are sized for the stated
reference architecture workload:
 Host adapter ports – Provide host connectivity via the fabric into the array.
 Storage processors – The compute component of the storage array, responsible for all aspects of moving data into, out of, and between arrays, and for protocol support.
 Disk drives – The actual spindles that contain the host/application data, and their enclosures.
The 50 and 100 virtual machine VMware private cloud solutions discussed in this
document are based on the VNXe3150 and VNXe3300 storage arrays respectively.
The VNXe3150 can support a maximum of 100 drives and the VNXe3300 can host up
to 150 drives. Two storage configurations were tested: one using NFS and the other
using iSCSI.
The EMC VNXe series supports a wide range of business class features ideal for the
private cloud environment including:
 Thin Provisioning
 Replication
 Snapshots
 File Deduplication and Compression
 Quota Management
Chapter 3
Solution Technology Overview
This chapter presents the following topics:
Overview ................................................................................................... 22
Summary of key components ..................................................................... 22
Virtualization ............................................................................................. 23
Compute ................................................................................................... 29
Network..................................................................................................... 31
Storage ..................................................................................................... 33
Backup and recovery ................................................................................. 34
Other technologies .................................................................................... 35
Overview
This solution uses the EMC VNXe series and VMware vSphere 5.1 to provide storage
and server hardware consolidation. The new virtualized infrastructure is centrally
managed, allowing efficient deployment and management of a scalable number of
virtual machines and associated shared storage.
Figure 1 depicts the solution components.
Figure 1. VSPEX private cloud components
These components are described in more detail in the following sections.
Summary of key components
This section briefly describes the key components of this solution.
 Virtualization
The virtualization layer allows the physical implementation of resources to be
decoupled from the applications that use them. In other words, the
application’s view of the resources available to it is no longer directly tied to
the hardware. This enables many key features in the private cloud concept.
 Compute
The compute layer provides memory and processing resources for the
virtualization layer software and for the applications running in the private
cloud. The VSPEX program defines the minimum amount of compute layer
resources required, but allows the customer to fulfill the requirements using
any suitable server hardware.
 Network
The network layer connects the users of the private cloud to the resources in
the cloud, and connects the storage layer to the compute layer. The VSPEX
program defines the minimum number of network ports required for the
solution, and provides general guidance on network architecture, but allows
the customer to fulfill the requirements using any suitable network hardware.
 Storage
The storage layer is a critical resource for the implementation of the private
cloud. By allowing multiple hosts to access a shared set of data, it enables
many of the use cases defined in the private cloud concept. The EMC VNX
family of storage used in this solution provides high performance data storage
while maintaining high availability.
 Backup and recovery
The optional Backup and Recovery components of the solution provide data
protection in the event that the data in the primary system is deleted,
damaged, or otherwise unusable.
The Solution architecture section provides details on all of the components that
make up the reference architecture.
Virtualization
Overview
The virtualization layer is a key component of any server virtualization or private cloud
solution. It allows the application resource requirements to be decoupled from the
underlying physical resources that serve them. This enables greater flexibility in the
application layer by eliminating hardware downtime for maintenance, and even
allowing the physical capability of the system to change without affecting the hosted
applications. In a server virtualization or private cloud use case, the virtualization
layer allows multiple independent virtual machines to share the same physical
hardware, rather than being directly implemented on dedicated hardware.
VMware vSphere 5.1
VMware vSphere 5.1 transforms a computer’s physical resources by virtualizing the
CPU, memory, storage, and network functions. This transformation creates fully
functional virtual machines that run isolated and encapsulated operating systems
and applications just like physical computers.
The high-availability features of VMware vSphere 5.1 such as vMotion and Storage
vMotion enable seamless migration of virtual machines and stored files from one
vSphere server to another with minimal or no performance impact. Coupled with
vSphere DRS and Storage DRS, virtual machines have access to the appropriate
resources at any point in time through load balancing of compute and storage
resources.
VMware vCenter
VMware® vCenter™ is a centralized management platform for the VMware Virtual
Infrastructure. It provides administrators with a single interface for all aspects of
monitoring, managing, and maintaining the virtual infrastructure that can be
accessed from multiple devices.
VMware vCenter is also responsible for managing some of the more advanced
features of the VMware virtual infrastructure like VMware vSphere High Availability
(VMHA) and Distributed Resource Scheduling (DRS), along with vMotion and Update
Manager.
VMware vSphere High Availability (VMHA)
The VMware vSphere High Availability feature allows the virtualization layer to restart
virtual machines automatically in various failure conditions.
 If the virtual machine operating system has an error, the virtual machine can be restarted automatically on the same hardware.
 If the physical hardware has an error, the impacted virtual machines can be restarted automatically on other servers in the cluster.
Note: To restart virtual machines on different hardware, those servers must have
available resources. The Compute section below provides specific recommendations
to enable this functionality.
VMware vSphere High Availability allows you to configure policies to determine which
machines are restarted automatically, and under what conditions these operations
should be attempted.
Integration with other VMware products
VMware vCloud Director
VMware vCloud® Director™, part of the vCloud Suite 5.1, orchestrates the
provisioning of software-defined datacenter services as complete virtual
datacenters that are ready for consumption in minutes. vCloud Director uses pools
of resources abstracted from the underlying physical resources in VSPEX to enable
the automated deployment of virtual resources.
With VMware vCloud Director, you can:
 Build secure, multi-tenant private clouds by pooling infrastructure resources from VSPEX into virtual datacenters and exposing them to users through web-based portals and programmatic interfaces as fully automated, catalog-based services
 Build out complete virtual datacenters, delivering compute, networking, storage, security, and a complete set of necessary services to make workloads operational in minutes
Software-defined datacenter services and the virtual datacenter paradigm
fundamentally simplify infrastructure provisioning, and enable IT to move at business
speed. VMware vCloud Director integrates with existing or new VSPEX VMware
vSphere private cloud deployments and supports existing and future applications by
providing elastic storage and networking services, such as Layer-2 connectivity and
broadcasting, between virtual machines. VMware vCloud Director utilizes open
standards to preserve deployment flexibility and pave the way to the hybrid cloud.
The key features of VMware vCloud Director include:
 Virtual machine snapshot and revert capabilities
 Integrated vSphere profile, security, and vCenter Single Sign-on
 Fast Provisioning
 vApp catalogs
 Isolated multi-tenant capabilities
 Self-service web portals
 VMware vCloud API services
 OVF support
 Integration of private, public, and hybrid clouds
All VSPEX Proven Infrastructures can leverage vCloud Director to orchestrate the
deployment of virtual datacenters based on single or multiple VSPEX deployments
facilitating simple and efficient deployments of virtual machines, applications, and
virtual networks to secure, private infrastructures within any given VSPEX instance.
vCloud Networking and Security 5.1
VMware vShield Edge™, Application, and Data Security capabilities have been
integrated and enhanced in vCloud Networking and Security 5.1, which is part of the
VMware vCloud Suite. VSPEX private cloud solutions with VMware vCloud Networking
and Security enable customers to adopt virtualized networks, eliminating the
rigidity and complexity associated with physical equipment.
Physical equipment creates artificial barriers to operating an optimized virtual
network architecture. Physical networking does not keep pace with datacenter
virtualization, limiting the ability of businesses to rapidly deploy, move, scale,
and protect applications and data according to business needs.
To solve these datacenter challenges, VSPEX with VMware vCloud Networking and
Security virtualizes networks and security to create efficient, agile, extensible logical
constructs that meet the performance and scale requirements of virtualized
datacenters, in the following ways:
 VMware vCloud Networking and Security delivers software-defined networks and security with a broad range of services in a single solution, including a virtual firewall, virtual private network (VPN) support, load balancing, and VXLAN-extended networks.
 Management integration with VMware vCenter Server and VMware vCloud Director reduces the cost and complexity of datacenter operations and unlocks the operational efficiency and agility of private cloud computing.
VSPEX for virtualized applications can also take advantage of vCloud Networking and
Security features. VSPEX enables businesses to virtualize Microsoft applications.
With VMware vCloud, the applications can be protected and isolated from risk as
administrators have greater visibility into virtual traffic flows so that they can enforce
policies and implement compliance controls on in-scope systems by implementing
logical grouping and virtual firewalls.
Administrators who deploy virtual desktops with VSPEX End-User Computing for
VMware vSphere and View can also benefit from vCloud Networking and Security by
creating logical security around individual or groups of virtual desktops. This ensures
that the users deployed on the VSPEX Proven Infrastructure can only access the
applications and data that they are authorized to, preventing broader access to the
datacenter. vCloud also enables rapid diagnosis of traffic and potential trouble spots.
Administrators can effectively create software-defined networks that scale and move
virtual workloads within their VSPEX Proven Infrastructures without physical
networking or security constraints, which can be streamlined via VMware vCenter and
VMware vCloud Director Integration.
VMware vSphere Data Protection
VMware vSphere Data Protection (VDP) is a proven solution for backing up and
restoring VMware virtual machines. VDP is built on EMC’s award-winning Avamar®
technology and has many integration points with vSphere, providing simple
discovery of virtual machines and efficient policy creation.
One of the challenges that traditional backup systems face with virtual machines is
the large amount of data that their files contain. VDP helps to solve this challenge
in the following ways:
 VDP uses a variable-length deduplication algorithm, which ensures that the minimum amount of disk space is used and that ongoing backup storage growth is reduced. Data is deduplicated across all virtual machines associated with the VDP virtual appliance.
 VDP leverages the vStorage APIs for Data Protection (VADP) to send only the daily changed blocks of data, so that only a fraction of the data is sent over the network.
 VDP enables up to eight virtual machines to be backed up concurrently. Because VDP resides in a dedicated virtual appliance, all backup processing is offloaded from production virtual machines.
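The variable-length (content-defined) chunking idea behind this kind of deduplication can be sketched as follows. This is a toy illustration only; Avamar's actual algorithm is proprietary, and the window, mask, and size parameters here are arbitrary choices for the example:

```python
import hashlib

def chunks(data: bytes, window=16, mask=0x3F, min_size=32, max_size=256):
    """Cut data into variable-length chunks wherever a rolling sum over
    the last `window` bytes matches a boundary pattern. Because cut points
    depend on content rather than fixed offsets, identical regions produce
    identical chunks even when surrounding data shifts."""
    start, rolling = 0, 0
    for i, b in enumerate(data):
        rolling += b - (data[i - window] if i >= window else 0)
        size = i + 1 - start
        if (size >= min_size and rolling & mask == 0) or size >= max_size:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def dedup_store(streams):
    """Store each unique chunk once, keyed by its digest; duplicate chunks
    across all streams (e.g. similar virtual machines) consume no space."""
    store, logical = {}, 0
    for data in streams:
        for c in chunks(data):
            logical += len(c)
            store[hashlib.sha1(c).hexdigest()] = c
    physical = sum(len(c) for c in store.values())
    return logical, physical
```

Backing up the same data twice doubles the logical size while the physical store stays flat, which is why ongoing backup storage growth stays low across many similar virtual machines.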
VDP can alleviate the burdens of restore requests from administrators by enabling
end users to restore their own files using a web-based tool called vSphere Data
Protection Restore Client. Users can browse their system backups in an easy-to-use
interface that has search and version control. Within a few simple clicks, the users
can restore individual files or directories without any intervention from IT, freeing up
valuable time and resources, and resulting in a better end user experience.
Smaller deployments of VSPEX Proven Infrastructure can leverage VDP as well. VDP is
deployed as a virtual appliance with four virtual CPUs (vCPUs) and 4 GB of RAM.
Three configurations of usable backup storage capacity are available: 0.5 TB, 1 TB,
and 2 TB, which consume 850 GB, 1,300 GB, and 3,100 GB of actual storage capacity,
respectively. Proper planning should be performed to ensure correct sizing, because
additional storage capacity cannot be added after the appliance is deployed. Storage
capacity requirements are based on the number of virtual machines being backed up,
the amount of data, retention periods, and typical data change rates. VSPEX Proven
Infrastructures are sized based on reference virtual machines, so any deployment of
VDP alters that sizing and should be taken into account.
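As a rough planning aid, the appliance choice amounts to picking the smallest configuration whose usable capacity covers the estimated backup data. The capacity figures below are the stated VDP configurations (0.5/1/2 TB usable consuming 850/1,300/3,100 GB); the estimation formula itself is a deliberately crude hypothetical heuristic, not an EMC sizing method:

```python
# (usable capacity in TB, actual storage consumed in GB) for the three
# VDP appliance configurations.
VDP_CONFIGS = [(0.5, 850), (1.0, 1300), (2.0, 3100)]

def pick_vdp_config(vm_count, avg_vm_gb, daily_change_rate, retention_days):
    """Return the smallest VDP configuration covering a naive estimate of
    backup storage: one full copy plus retained daily changes. Real sizing
    must also account for deduplication ratios across virtual machines."""
    full_gb = vm_count * avg_vm_gb
    incremental_gb = full_gb * daily_change_rate * retention_days
    needed_tb = (full_gb + incremental_gb) / 1024
    for usable_tb, consumed_gb in VDP_CONFIGS:
        if usable_tb >= needed_tb:
            return usable_tb, consumed_gb
    return None  # exceeds the largest appliance; revisit the backup design

# Example: 10 VMs of ~40 GB with 5% daily change kept for 14 days.
choice = pick_vdp_config(10, 40, 0.05, 14)
```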
vSphere Replication
vSphere Replication is a feature of the vSphere platform that provides business
continuity. vSphere Replication copies a virtual machine that is defined in the VSPEX
Infrastructure to a second instance of VSPEX or within the clustered servers in a single
VSPEX deployment. vSphere Replication continues to protect the virtual machine on
an ongoing basis and replicates the changes to the target virtual machine. This
ensures that the virtual machine remains protected and is available for recovery
without requiring a restore from backup.
Application virtual machines that are defined in VSPEX can be easily replicated to
ensure application-consistent data with a single click when replication is set up.
Administrators who are managing VSPEX for virtualizing Microsoft applications
can leverage the vSphere Replication automatic integration with Microsoft Volume
Shadow Copy Service (VSS) to ensure that applications such as Microsoft Exchange
or Microsoft SQL Server databases are handled properly and are in a consistent state
when replica data is being generated. A call to the virtual machine VSS layer flushes
the database writers for an instant to ensure that the data replicated is static and
recoverable. This automated approach simplifies the management and increases the
efficiency of your VSPEX-based virtual environment.
VMware vCenter Operations Manager Suite
The VMware vCenter Operations Manager™ Suite provides unparalleled visibility into
the VSPEX virtual environments. It collects and analyzes data, correlates
abnormalities and helps to identify the root cause of performance problems while
providing administrators with the information they need to optimize and tune their
VSPEX virtual infrastructures. vCenter Operations Manager provides an automated
approach to optimizing the VSPEX-powered virtual environment by delivering
self-learning analytic tools that are tightly integrated to provide better
performance, capacity usage, and configuration management.
The VMware vCenter Operations Management Suite delivers a comprehensive set of
management capabilities, including performance, capacity, change, configuration
and compliance management, application discovery and monitoring, and cost
metering.
The VMware vCenter Operations Management Suite includes five components:
 VMware vCenter Operations Manager - provides the operational dashboard interface, which makes visualizing issues in your VSPEX virtual environment simple
 VMware vCenter Configuration Manager™
 VMware vFabric™ Hyperic® - monitors the physical hardware resources, operating systems, middleware, and applications that you may have deployed on VSPEX
 VMware vCenter Infrastructure Navigator™ - provides visibility into the application services running over the virtual machine infrastructure, and their interrelationships, for day-to-day operational management
 VMware vCenter Chargeback Manager™ - enables accurate cost measurement, analysis, and reporting of virtual machines, providing visibility into the actual cost of the VSPEX Proven Infrastructure used to support business services
VMware Single Sign-on
With the introduction of VMware vCenter Single Sign-on in VMware vSphere 5.1,
administrators now have a deeper level of authentication services available for
managing their VSPEX Proven Infrastructures. Authentication by vCenter Single
Sign-on makes the VMware cloud infrastructure platform more secure by enabling the
vSphere software components to communicate with each other through a secure
token-exchange mechanism, instead of requiring each component to authenticate a
user separately with a directory service such as Active Directory.
When users log in to the vSphere Web Client with a username and password, their
credentials are sent to the vCenter Single Sign-on server. The credentials are then
authenticated against the backend identity source(s) and exchanged for a security
token that is returned to the client to access the solutions within the environment.
Single Sign-on translates into time savings and streamlined workflows, which can
yield significant cost savings when factored across the entire IT organization.
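The token-exchange flow can be sketched with a toy security-token service. This is purely illustrative (vCenter Single Sign-on actually issues SAML tokens); the class name and HMAC scheme here are invented for the example:

```python
import hashlib
import hmac
import secrets

class TokenService:
    """Toy single sign-on flow: verify credentials once against an
    identity source, then hand back a signed token that any participating
    service can validate without re-querying the directory service."""
    def __init__(self, identity_source):
        self._users = identity_source        # e.g. {"user": "password"}
        self._key = secrets.token_bytes(32)  # signing key trusted by services

    def login(self, user, password):
        if self._users.get(user) != password:
            raise PermissionError("bad credentials")
        sig = hmac.new(self._key, user.encode(), hashlib.sha256).hexdigest()
        return f"{user}:{sig}"               # bearer token for later calls

    def validate(self, token):
        user, sig = token.rsplit(":", 1)
        expect = hmac.new(self._key, user.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expect)

sso = TokenService({"admin@vsphere.local": "secret"})
token = sso.login("admin@vsphere.local", "secret")
# Components accept the token instead of authenticating the user again.
```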
New in vSphere 5.1, users have a single pane-of-glass view of their entire vCenter
Server environment because multiple vCenter Servers and their inventories are
displayed. This does not require Linked Mode unless users share roles, permissions,
and licenses among vSphere 5.x vCenter Servers.
Administrators can now deploy multiple solutions within an environment with true
single sign-on that creates trust between solutions without requiring authentication
every time a solution is accessed.
VSPEX private cloud solutions with VMware vSphere 5.1 are designed to be simple,
efficient and flexible. VMware Single Sign-on simplifies the authentication, improves
the work efficiency, and enables administrators with the flexibility to make Single
Sign-on Servers local or global.
EMC Virtual Storage Integrator for VMware
EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere
client. It provides a single management interface that is used for managing EMC
storage within the vSphere environment. Features can be added and removed from
VSI independently, which provides flexibility for customizing VSI user environments.
Features are managed by using the VSI Feature Manager. VSI provides a unified user
experience, which allows new features to be introduced rapidly in response to
changing customer requirements.
The following features are used during the validation testing:
 Storage Viewer (SV) — Extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vSphere hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage-mapping tools into a few seamless vSphere client views.
 Unified Storage Management — Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new Network File System (NFS) and Virtual Machine File System (VMFS) datastores, and RDM volumes, seamlessly within the vSphere client.
Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for
more information.
VNXe VMware vStorage API for Array Integration support
Hardware acceleration with VMware vStorage API for Array Integration (VAAI) is a
storage enhancement in vSphere 5.1 that enables vSphere to offload specific storage
operations to compatible storage hardware such as the VNXe series platforms. With
storage hardware assistance, vSphere performs these operations faster and
consumes less CPU, memory, and storage fabric bandwidth.
Compute
The choice of a server platform for an EMC VSPEX infrastructure is based not only on
the technical requirements of the environment, but also on the supportability of the
platform, existing relationships with the server provider, advanced performance and
management features, and many other factors. For this reason, EMC VSPEX solutions
are designed to run on a wide variety of server platforms. Instead of requiring a given
number of servers with a specific set of requirements, VSPEX documents a number of
processor cores and an amount of RAM that must be achieved. This can be
implemented with 2 servers or 20 and still be considered the same VSPEX solution.
For example, assume that the compute layer requirements for a given implementation
are 25 processor cores and 200 GB of RAM. One customer might want to use white-box
servers containing 16 processor cores and 64 GB of RAM, while a second customer
might choose a higher-end server with 20 processor cores and 144 GB of RAM.
Figure 2. Compute layer flexibility
The first customer needs four of the chosen servers, while the second customer
needs only two.
Note: To enable high availability at the compute layer, each customer needs one
additional server, so that if a server fails the system has enough resources to
maintain business operations.
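The sizing arithmetic in this example can be captured in a small helper (hypothetical, not part of any VSPEX tool): the server count is driven by whichever resource runs out first, plus the spare server noted above.

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb,
                   ha_spares=1):
    """Minimum number of identical servers for a VSPEX compute layer:
    enough to cover both the core and RAM requirements, plus spare
    capacity so the cluster survives a server failure."""
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    return max(by_cores, by_ram) + ha_spares

# The 25-core / 200 GB example from the text:
customer1 = servers_needed(25, 200, 16, 64)   # white-box servers: 4 + 1 spare
customer2 = servers_needed(25, 200, 20, 144)  # higher-end servers: 2 + 1 spare
```

Note that the RAM requirement, not the core count, dictates the first customer's server count.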
The following best practices should be observed in the compute layer:
 It is a best practice to use a number of identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
 If you are implementing hypervisor-layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
 It is recommended to implement the high-availability features available in the virtualization layer, and to ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and to tolerate single-unit failures.
Within the boundaries of these recommendations and best practices, the compute
layer for EMC VSPEX can be very flexible to meet your specific needs. The key
constraint is that you provide sufficient processor cores and RAM per core to meet the
needs of the target environment.
Network
The infrastructure network requires redundant network links for each vSphere host,
the storage array, the switch interconnect ports, and the switch uplink ports. This
configuration provides both redundancy and additional network bandwidth. This
configuration is required regardless of whether the network infrastructure for the
solution already exists, or is being deployed alongside other components of the
solution. An example of this kind of highly available network topology is depicted in
Figure 3.
Figure 3. Example of highly available network design
This validated solution uses virtual local area networks (VLANs) to segregate network
traffic of various types to improve throughput, manageability, application separation,
high availability, and security.
EMC unified storage platforms provide network high availability or redundancy by
using link aggregation. Link aggregation enables multiple active Ethernet connections
to appear as a single link with a single MAC address, and potentially multiple IP
addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on
the VNXe, combining multiple Ethernet ports into a single virtual device. If a link is
lost on one Ethernet port, traffic fails over to another port. All network traffic is
distributed across the active links.
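Conceptually, the aggregate behaves like this sketch: each flow hashes onto one active link, and when a link fails the flow simply re-hashes onto the survivors. This is illustrative only; actual LACP hash policies (MAC-, IP-, or port-based) vary by platform and configuration:

```python
def pick_link(src_mac: str, dst_mac: str, active_links: list) -> str:
    """Choose the physical port for a flow by hashing its addresses.
    A given flow always lands on the same link while the link set is
    stable, and traffic from many flows spreads across all links."""
    if not active_links:
        raise RuntimeError("no surviving links in the aggregate")
    return active_links[hash((src_mac, dst_mac)) % len(active_links)]

links = ["eth2", "eth3"]
flow = ("00:50:56:aa:bb:01", "08:00:1b:cc:dd:02")
port = pick_link(*flow, links)
# After eth2 fails, the same flow transparently uses a surviving port:
failover_port = pick_link(*flow, ["eth3"])
```

Because selection is per flow, a single flow never exceeds one physical link's bandwidth; the aggregate gains throughput only across many concurrent flows.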
Storage
Overview
The storage layer is also a key component of any cloud infrastructure solution,
serving the data generated by applications and operating systems in the datacenter.
Consolidating storage in this layer increases storage efficiency and management
flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNXe
series storage arrays provide virtualization at the storage layer.
EMC VNXe series
The EMC VNX™ family is optimized for virtual applications, delivering
industry-leading innovation and enterprise capabilities for file and block storage
in a scalable, easy-to-use solution. This next-generation storage platform combines
powerful and flexible hardware with advanced efficiency, management, and protection
software to meet the demanding needs of today’s enterprises.
The VNXe series is powered by the Intel® Xeon processor for intelligent storage that
automatically and efficiently scales in performance, while ensuring data integrity and
security.
The VNXe series is purpose-built for the IT manager in smaller environments and the
VNX™ series is designed to meet the high-performance, high-scalability
requirements of midsize and large enterprises. Table 1 shows the customer benefits.
Table 1. VNXe customer benefits
 Next-generation unified storage, optimized for virtualized applications
 Capacity optimization features, including compression, deduplication, thin provisioning, and application-centric copies
 High availability, designed to deliver five 9s availability
 Multiprotocol support for file and block
 Simplified management with EMC Unisphere™ for a single management interface for all NAS, SAN, and replication needs
Note: VNXe does not support block compression.
Software Suites
 Local Protection Suite—Increases productivity with snapshots of production data.
 Remote Protection Suite—Protects data against localized failures, outages, and disasters.
 Application Protection Suite—Automates application copies and proves compliance.
 Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.
Software Packs
VNXe Total Value Pack—Includes the Remote Protection, Application Protection, and
Security and Compliance Suites.
Backup and recovery
Overview
The backup and recovery component of this VSPEX solution provides data protection
by backing up data files or volumes on a defined schedule, and by restoring data
from backup for recovery after a disaster. In this VSPEX solution, EMC NetWorker
provides backup and recovery capabilities for the 50-virtual-machine architecture,
and EMC Avamar provides them for the 100-virtual-machine architecture.
This section provides guidelines on how to set up a backup and recovery environment
for this VSPEX solution, and describes the characteristics and layout of the backup
components.
EMC NetWorker
EMC NetWorker, coupled with Data Domain deduplication storage systems, integrates
seamlessly into virtual environments, providing rapid backup and restore
capabilities. By leveraging Data Domain Boost technology, Data Domain deduplication
results in vastly less data traversing the network, greatly reducing the amount of
data being backed up and stored, and translating into storage, bandwidth, and
operational savings.
The following are two of the most common recovery requests made to backup
administrators:
 File-level recovery: Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.
 System recovery: Although complete system recovery requests are less frequent than file-level recovery requests, this bare-metal restore capability is vital to the enterprise. Common root causes for full system recovery requests are viral infestation, registry corruption, and unidentifiable, unrecoverable issues.
The NetWorker System State protection functionality adds backup and recovery
capabilities in both of these scenarios.
EMC Avamar
EMC’s Avamar data deduplication technology seamlessly integrates into virtual
environments, providing rapid backup and restoration capabilities. Avamar’s
deduplication results in less data travelling across the network, reduced quantities of
data being backed up and stored, and savings in storage, bandwidth, and
operational costs.
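The principle behind deduplicated backup can be sketched in a few lines. The following is a simplified, illustrative model using fixed-size chunks and content hashing; Avamar's actual variable-length chunking and client-side processing are considerably more sophisticated.

```python
import hashlib

def dedup_backup(chunks, store):
    """Store only chunks whose content hash is not already in the store.

    Returns the number of bytes actually sent and stored for this backup.
    """
    sent = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # new, unique data
            store[digest] = chunk
            sent += len(chunk)
    return sent

# Two backups that share most of their data: only the changed
# chunk travels the network on the second run.
store = {}
first = [b"A" * 4096, b"B" * 4096, b"C" * 4096]
second = [b"A" * 4096, b"B" * 4096, b"D" * 4096]
print(dedup_backup(first, store))   # 12288 bytes sent
print(dedup_backup(second, store))  # 4096 bytes sent
```

The second backup transmits only the one chunk the store has not seen, which is the source of the bandwidth and storage savings described above.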
Other technologies
In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the technologies listed below.
EMC XtremSW Cache (optional)
EMC XtremSW Cache™ is a server Flash caching solution that reduces latency and
increases throughput to improve application performance by using intelligent caching
software and PCIe Flash technology.
Server-side flash caching for maximum speed
XtremSW Cache software caches the most frequently referenced data on the server-based PCIe card, thereby putting the data closer to the application.
XtremSW Cache caching optimization automatically adapts to changing workloads by
determining which data is most frequently referenced and promoting it to the server
Flash card. This means that the “hottest” or most active data automatically resides on
the PCIe card in the server for faster access.
XtremSW Cache offloads the read traffic from the storage array, which allows it to
allocate greater processing power to other workloads. While one workload is
accelerated with XtremSW Cache, the array’s performance for other workloads is
maintained or even slightly enhanced.
Write-through caching to the array for total protection
XtremSW Cache accelerates reads and protects data by using a write-through cache
to the storage to deliver persistent high availability, integrity, and disaster recovery.
Application agnostic
XtremSW Cache is transparent to applications, so no rewriting, retesting, or
recertification is required to deploy XtremSW Cache in the environment.
Integration with vSphere
XtremSW Cache enhances both virtualized and physical environments. Integration
with the VSI plug-in to VMware vSphere vCenter simplifies the management and
monitoring of XtremSW Cache.
Minimum impact on system resources
XtremSW Cache does not require a significant amount of memory or CPU cycles; all flash and wear-leveling management is done on the PCIe card and does not consume server resources. Unlike some other PCIe solutions, XtremSW Cache imposes no significant overhead on server resources.
XtremSW Cache creates the most efficient and intelligent I/O path from the
application to the data store, which results in an infrastructure that is dynamically
optimized for performance, intelligence, and protection for both physical and virtual
environments.
XtremSW Cache active/passive clustering support
The XtremSW Cache clustering scripts ensure that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. An XtremSW Cache-enabled active/passive cluster can ensure data integrity while accelerating application performance.
XtremSW Cache performance considerations
The following are XtremSW Cache performance considerations:
- On a write request, XtremSW Cache first writes to the array, then to the cache, and then completes the application I/O.
- On a read request, XtremSW Cache satisfies the request with cached data or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be on the order of milliseconds, so the array limits how fast the cache can work. As the number of writes increases, XtremSW Cache performance decreases.
XtremSW Cache is most effective for workloads with a read/write ratio of at least 70/30 and small, random I/O (8 KB is ideal). I/O larger than 128 KB is not cached in XtremSW Cache 1.5.
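The read and write paths above can be sketched as a small write-through cache model. This is a conceptual illustration only, with Python dictionaries standing in for the array and the PCIe flash card; the 128 KB cutoff mirrors the XtremSW Cache 1.5 behavior noted above.

```python
class WriteThroughCache:
    """Conceptual sketch of write-through caching as described above."""

    def __init__(self, array, max_io=128 * 1024):
        self.array = array    # backing dict standing in for the storage array
        self.cache = {}       # stands in for the PCIe flash card
        self.max_io = max_io  # I/O larger than 128 KB is not cached

    def write(self, lba, data):
        # Writes go to the array first, then to the cache, then the
        # application I/O completes.
        self.array[lba] = data
        if len(data) <= self.max_io:
            self.cache[lba] = data

    def read(self, lba):
        # Serve hits from flash; on a miss, fetch from the array
        # (milliseconds) and promote cacheable data into the cache.
        if lba in self.cache:
            return self.cache[lba]
        data = self.array[lba]
        if len(data) <= self.max_io:
            self.cache[lba] = data
        return data

array = {}
c = WriteThroughCache(array)
c.write(0, b"x" * 8192)      # small, random I/O: written through and cached
c.write(1, b"y" * 262144)    # 256 KB write: stored on the array, not cached
assert c.read(0) == b"x" * 8192
```

Because every write lands on the array before the I/O completes, the cache never holds data the array does not, which is what makes write-through caching safe for availability and disaster recovery.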
Note: For more information, refer to the XtremSW Cache Installation and Administration Guide v1.5.
Chapter 4: Solution Architecture
Overview
This chapter presents the following topics:
- Overview
- Solution architecture
- Server configuration guidelines
- Network configuration guidelines
- Storage configuration guidelines
- High availability and failover
- Validation test profile
- Backup environment configuration guide
- Sizing guidelines
- Reference workload
- Applying the reference workload
- Implementing the reference architectures
- Quick assessment
Solution Architecture Overview
Overview
This section provides a comprehensive guide to the major aspects of this solution.
Server capacity is specified in generic terms for required minimums of CPU, memory,
and network interfaces; the customer is free to select the server and networking
hardware that meet or exceed the stated minimums. The specified storage
architecture, along with a system meeting the server and network requirements
outlined, has been validated by EMC to ensure that it provides high levels of
performance while delivering a highly available architecture for your private cloud
deployment.
Each VSPEX Virtual Infrastructure balances the storage, network, and compute
resources needed for a set number of virtual machines that have been validated by
EMC. In practice, each virtual machine has its own set of requirements that rarely fit a
pre-defined idea of what a virtual machine should be. In any discussion about virtual
infrastructures, it is important to first define a reference workload. Not all servers
perform the same tasks, and it is impractical to build a reference that takes into
account every possible combination of workload characteristics.
Solution architecture
The VSPEX solution for VMware vSphere with EMC VNXe is validated at two points of scale: one configuration supporting up to 50 virtual machines and one supporting up to 100 virtual machines, both defined in terms of the reference workload.
Note: Because the reference workload is a core concept of the VSPEX program, do not assume that consolidating 10 servers into the VSPEX infrastructure requires 10 reference virtual machines of capability. Evaluate your workload in terms of the reference workload to arrive at an appropriate point of scale. This process is described in Applying the reference workload.

Architecture for up to 50 virtual machines
The architecture diagram shown in Figure 4 characterizes the infrastructure validated to support up to 50 virtual machines.
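The evaluation the note describes can be sketched as a back-of-the-envelope translation from existing servers to reference-VM equivalents. The vCPU and RAM figures for the reference VM (1 vCPU, 2 GB RAM) come from this solution's server guidelines; the IOPS figure below is a purely illustrative placeholder, not a value from this document.

```python
import math

# Reference VM characteristics: vCPU and RAM per this solution's guidelines;
# the IOPS value is an illustrative assumption only.
REF = {"vcpus": 1, "ram_gb": 2, "iops": 25}

def reference_vm_equivalents(server):
    """Express one existing server as a number of reference VMs.

    A server counts as the number of reference VMs needed to cover its
    largest resource dimension.
    """
    return max(
        math.ceil(server["vcpus"] / REF["vcpus"]),
        math.ceil(server["ram_gb"] / REF["ram_gb"]),
        math.ceil(server["iops"] / REF["iops"]),
    )

# Ten servers do not automatically mean ten reference VMs: nine lightly
# loaded servers map to one reference VM each, while one busy database
# server maps to several.
workload = [{"vcpus": 1, "ram_gb": 2, "iops": 15}] * 9 + [
    {"vcpus": 4, "ram_gb": 16, "iops": 200}
]
total = sum(reference_vm_equivalents(s) for s in workload)
print(total)  # 17 -> fits within the 50-virtual-machine configuration
```

The sum, not the server count, determines which point of scale applies.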
Figure 4. Logical architecture for 50 virtual machines

Architecture for up to 100 virtual machines
Figure 5 shows the logical architecture of the infrastructure validated for support of up to 100 virtual machines.

Figure 5. Logical architecture for 100 virtual machines

Note: The networking components for both solutions can be implemented using 10 GbE IP networks if sufficient bandwidth and redundancy are provided to meet the listed requirements.
Key components
The architecture includes the following components:
VMware vSphere 5.1 Server—Provides a common virtualization layer to host a
virtualized server environment. The specifics of the validated environment are listed
in Table 2. vSphere 5.1 provides highly available infrastructure through such features
as:
- vMotion—Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.
- Storage vMotion—Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption.
- vSphere High Availability (HA)—Detects and provides rapid recovery for a failed virtual machine in a cluster.
- Distributed Resource Scheduler (DRS)—Provides load balancing of computing capacity in a cluster.
- Storage Distributed Resource Scheduler (SDRS)—Provides load balancing across multiple datastores, based on space use and I/O latency.
VMware vCenter Server 5.1—Provides a scalable and extensible platform that forms
the foundation of virtualization management for the VMware vSphere 5.1 cluster. All
vSphere hosts and their virtual machines are managed from vCenter.
VSI for VMware vSphere—EMC Virtual Storage Integrator (VSI) for VMware vSphere is
a plug-in to the vSphere client that provides storage management for EMC arrays
directly from the client. VSI is highly customizable and helps provide a unified
management interface.
SQL Server — VMware vCenter Server requires a database service to store
configuration and monitoring details. A Microsoft SQL 2012 server is used for this
purpose.
DNS Server — DNS services are required for the various solution components to
perform name resolution. The Microsoft DNS Service running on a Windows 2008 R2
server is used for this purpose.
Active Directory Server — Active Directory services are required for the various
solution components to function properly. The Microsoft AD Directory Service running
on a Windows Server 2012 server is used for this purpose.
Shared Infrastructure — DNS and authentication/authorization services like Microsoft
Active Directory can be provided via existing infrastructure or set up as part of the
new virtual infrastructure.
IP/Storage Networks — All network traffic is carried by a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while NFS or iSCSI storage traffic is carried over a private, non-routable subnet.
EMC VNXe3150 array — Provides storage by presenting NFS or iSCSI datastores to
vSphere hosts for up to 50 virtual machines.
EMC VNXe3300 array — Provides storage by presenting NFS or iSCSI datastores to
vSphere hosts for up to 100 virtual machines.
Hardware resources
VNXe series storage arrays include the following components:
- Storage processors (SPs) support block and file data with UltraFlex I/O technology, which supports the iSCSI, CIFS/SMB, and NFS protocols. The SPs provide access for all external hosts and for the file side of the VNXe array.
- Disk-array enclosures (DAEs) house the drives used in the array.
Table 2 lists the hardware used in this solution.
Table 2. Solution hardware

VMware vSphere servers (configured as a single vSphere cluster):
- CPU: one vCPU per virtual machine; four vCPUs per physical core
- Memory: 2 GB RAM per virtual machine; 100 GB RAM across all servers for the 50-virtual-machine configuration; 200 GB RAM across all servers for the 100-virtual-machine configuration; 2 GB RAM reservation per vSphere host
- Network: two 10 GbE NICs per server
Note: To implement VMware vSphere High Availability (HA) functionality and to meet the required listed minimums, the infrastructure should have one additional server.

Network infrastructure (redundant LAN configuration), minimum switching capacity:
- Two physical switches
- Two 10 GbE ports per vSphere server
- One 1 GbE port per storage processor for management

Storage, common hardware (this can include the initial disk pack on the VNXe):
- Two storage processors (active/active)
- Two 10 GbE interfaces per storage processor

Storage for 50 virtual machines (EMC VNXe3150):
- iSCSI storage option: 45 x 300 GB 15k RPM 3.5-inch SAS disks, plus 2 x 300 GB 15k 3.5-inch SAS disks as hot spares
- NFS storage option: 30 x 600 GB 15k RPM 3.5-inch SAS disks, plus 1 x 600 GB 15k RPM 3.5-inch SAS disk as a hot spare

Storage for 100 virtual machines (EMC VNXe3300):
- iSCSI storage option: 77 x 300 GB 15k RPM 3.5-inch SAS disks, plus 3 x 300 GB 15k 3.5-inch SAS disks as hot spares
- NFS storage option: 63 x 600 GB 15k RPM 3.5-inch SAS disks, plus 3 x 600 GB 15k RPM 3.5-inch SAS disks as hot spares

Shared infrastructure: In most cases, a customer environment already has infrastructure services such as Active Directory and DNS configured; the setup of these services is beyond the scope of this document. If this solution is implemented with no existing infrastructure, a minimum number of additional servers is required:
- Two physical servers
- 16 GB RAM per server
- Four processor cores per server
- Two 10 GbE ports per server
These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

EMC Next-Generation Backup:
- For 50 virtual machines: NetWorker; 3 DD160 Factory
- For 100 virtual machines: Avamar Business Edition
Software resources
Table 3 lists the software used in this solution.

Table 3. Solution software
- VMware vSphere server: 5.1
- VMware vCenter Server: 5.1
- Operating system for vCenter Server: Windows Server 2012 Standard Edition
- Microsoft SQL Server: 2008 R2 Standard Edition
- EMC VNXe software version: 2.3.1.18703
- EMC VSI for VMware vSphere: Unified Storage Management: version 5.1
- EMC VSI for VMware vSphere: Storage Viewer: version 5.1
- Next-Generation Backup: Avamar 6.1 SP1; Data Domain OS 5.2 SP1; NetWorker 8.0 SP1
- Virtual machines (used for validation; not required for deployment): base operating system Microsoft Windows Server 2012 Datacenter Edition
Server configuration guidelines
Overview
When designing and ordering the compute/server layer of the VSPEX solution, several factors may alter the final purchase. From a virtualization perspective, if a system's workload is well understood, features like memory ballooning and transparent page sharing can reduce the aggregate memory requirement.
If the virtual machine pool does not have a high level of peak or concurrent usage, the number of vCPUs can be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and the amount of memory purchased may need to be increased.
Table 4. Server hardware

VMware vSphere servers (configured as a single vSphere cluster):
- CPU: one vCPU per virtual machine; four vCPUs per physical core
- Memory: 2 GB RAM per virtual machine; 100 GB RAM across all servers for 50 virtual machines; 200 GB RAM across all servers for 100 virtual machines; 2 GB RAM reservation per vSphere host
- Network: two 10 GbE NICs per server
Note: To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have one additional server.
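The stated ratios translate directly into minimum compute requirements. The sketch below derives physical cores from the four-vCPUs-per-core ratio and total RAM from the per-VM and per-host figures above; the host counts passed in are illustrative assumptions, since the actual number of hosts depends on the servers selected.

```python
import math

def compute_minimums(n_vms, hosts):
    """Minimum physical cores and RAM implied by the stated ratios.

    n_vms: number of reference virtual machines.
    hosts: assumed number of vSphere hosts (illustrative; depends on
           the server hardware chosen).
    """
    cores = math.ceil(n_vms / 4)       # four vCPUs per physical core, 1 vCPU/VM
    ram_gb = n_vms * 2 + hosts * 2     # 2 GB per VM plus 2 GB reservation/host
    return cores, ram_gb

print(compute_minimums(50, 3))   # (13, 106)
print(compute_minimums(100, 5))  # (25, 210)
```

The 100 GB and 200 GB totals in the table are the per-VM portion of this figure; the per-host reservations are added on top.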
VMware vSphere memory virtualization for VSPEX
VMware vSphere 5.1 has a number of advanced features that help to maximize
performance and overall resource utilization. The most important of these are in the
area of memory management. This section describes some of these features and the
items you need to consider when using them in the environment.
In general, you can consider virtual machines on a single hypervisor consuming
memory as a pool of resources, as shown in Figure 6.
Figure 6. Hypervisor memory consumption
Memory overcommitment
Memory overcommitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques such as ballooning and transparent page sharing, vSphere is able to handle memory overcommitment without performance degradation. However, if more memory than is present on the server is actively in use, vSphere might resort to swapping out portions of a virtual machine's memory.
Non-Uniform Memory Access
VMware vSphere uses a Non-Uniform Memory Access (NUMA) load-balancer to assign
a home node to a virtual machine. Because memory for the virtual machine is
allocated from the home node, memory access is local and provides the best
performance possible. Even applications that do not directly support NUMA benefit
from this feature.
Transparent page sharing
Virtual machines running similar operating systems and applications typically have similar sets of memory content. Page sharing allows the hypervisor to reclaim the redundant copies and keep only one, reducing total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced, increasing consolidation ratios.
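The effect of page sharing can be illustrated with a toy model: hash each VM's pages and count how many distinct physical copies are actually needed. This is a conceptual sketch only; a real hypervisor hashes candidate pages and then verifies matches with a byte-for-byte comparison before sharing them.

```python
import hashlib

def shared_memory_pages(vm_pages):
    """Count pages needed without and with page sharing.

    vm_pages maps a VM name to its list of 4 KB page contents.
    Identical pages across VMs collapse to a single physical copy.
    """
    total = 0
    unique = set()
    for pages in vm_pages.values():
        for page in pages:
            total += 1
            unique.add(hashlib.sha1(page).digest())
    return total, len(unique)

# Three VMs running the same OS share most of their pages.
os_pages = [bytes([i]) * 4096 for i in range(10)]
vms = {
    "vm1": os_pages + [b"a" * 4096],
    "vm2": os_pages + [b"b" * 4096],
    "vm3": os_pages + [b"c" * 4096],
}
print(shared_memory_pages(vms))  # (33, 13)
```

Here 33 guest pages collapse to 13 physical copies because the 10 OS pages are identical across all three VMs, which is exactly the consolidation-ratio gain described above.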
Memory ballooning
By using a balloon driver loaded in the guest operating system, the hypervisor can
reclaim host physical memory if memory resources are under contention. This is done
with little to no impact to the performance of the application.
Memory configuration guidelines
This section provides guidelines for allocating memory to virtual machines. The
guidelines outlined here take into account vSphere memory overhead and the virtual
machine memory settings.
vSphere memory overhead
There is some overhead associated with the virtualization of memory resources. This memory space overhead has two components:
- The system overhead for the VMkernel
- Additional overhead for each virtual machine
The VMkernel overhead is fixed, while the overhead for each virtual machine depends on the number of virtual CPUs and the amount of memory configured for the guest operating system.
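The two overhead components can be sketched as a simple sizing formula. The constants below are illustrative placeholders only; for real values, consult VMware's published per-VM overhead tables, which vary by vSphere version, vCPU count, and configured memory.

```python
def vm_memory_requirement(configured_mb, vcpus,
                          per_vcpu_overhead_mb=20, base_overhead_mb=100):
    """Configured guest memory plus per-VM virtualization overhead.

    The overhead constants are illustrative assumptions, not VMware's
    actual figures.
    """
    return configured_mb + base_overhead_mb + per_vcpu_overhead_mb * vcpus

# 50 single-vCPU VMs with 2 GB each, plus an assumed fixed VMkernel footprint.
vmkernel_mb = 1024  # assumed fixed VMkernel overhead
total_mb = vmkernel_mb + sum(
    vm_memory_requirement(2048, 1) for _ in range(50)
)
print(total_mb)  # 109424
```

The point of the sketch is that the per-VM overhead scales with VM count and configuration while the VMkernel term is paid once per host.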
Allocating memory to virtual machines
The proper sizing for virtual machine memory in VSPEX architectures is based on
many factors. With the number of application services and use cases available,
determining a suitable configuration for an environment requires creating a baseline
configuration, testing, and making adjustments for optimal results.
Note: Virtual machines require a certain amount of available overhead memory to power on. When considering the memory sizing of the virtual machines, be aware of the amount of this overhead.
Network configuration guidelines
Overview
This section provides guidelines for setting up a redundant, highly available network
configuration. The guidelines outlined here take into account jumbo frames, VLANs,
and Link Aggregation Control Protocol (LACP) features available on EMC unified
storage. Table 5 lists the detailed network resource requirements.
Table 5. Network hardware

Network infrastructure (redundant LAN configuration), minimum switching capacity:
- Two physical switches
- Two 10 GbE ports per vSphere server
- One 1 GbE port per storage processor for management

VLAN
It is a best practice to isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs:
- Client access
- Storage
- Management
Figure 7 illustrates these VLANs.
Figure 7. Required networks

Note: The diagram demonstrates the network connectivity requirements for a VNXe3300 using 10 GbE network connections. Create a similar topology when using the VNXe3150 array or 1 GbE network connections.
The client access network is for users of the system (clients) to communicate with the infrastructure. The storage network is used for communication between the compute layer and the storage layer. The management network provides administrators with a dedicated way to access the management connections on the storage array, network switches, and hosts.
Note: Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. These additional networks may be implemented if desired, but they are not required.
Enable jumbo frames
This EMC VSPEX private cloud solution recommends setting the MTU to 9000 (jumbo frames) for efficient storage and migration traffic.
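When validating that jumbo frames work end to end, a common technique is a don't-fragment ping sized to exactly fill one frame. The arithmetic is simple but easy to get wrong; the sketch below computes the largest ICMP payload for a given MTU (for example, for use with a tool such as `vmkping -d -s <size>` on ESXi hosts).

```python
def max_ping_payload(mtu=9000, ip_header=20, icmp_header=8):
    """Largest ICMP payload that fits in one frame at a given MTU.

    Assumes a standard 20-byte IPv4 header with no options and an
    8-byte ICMP echo header.
    """
    return mtu - ip_header - icmp_header

print(max_ping_payload())      # 8972 for MTU 9000
print(max_ping_payload(1500))  # 1472 for a standard MTU
```

If a don't-fragment ping of 8972 bytes succeeds between host and array but a larger one fails, the 9000-byte MTU is configured correctly along the whole path.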
Link Aggregation
A link aggregation resembles an Ethernet channel but uses the Link Aggregation Control Protocol (LACP) IEEE 802.3ad standard, which supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNXe, combining multiple Ethernet ports into a single virtual device. If a link is lost on one Ethernet port, traffic fails over to another port, and all network traffic is distributed across the active links.
Storage configuration guidelines
Overview
VMware vSphere allows more than one method of utilizing storage when hosting virtual machines. The solutions described here were tested using both NFS and iSCSI, and the storage layout described adheres to all current best practices. A customer or architect with the relevant background can make modifications based on their understanding of the system's usage and load, if required.
Table 6. Storage hardware

Storage, common hardware (this can include the initial disk pack on the VNXe):
- Two storage processors (active/active)
- Two 10 GbE interfaces per storage processor

For 50 virtual machines (EMC VNXe3150):
- iSCSI storage option: 45 x 300 GB 15k RPM 3.5-inch SAS disks, plus 2 x 300 GB 15k 3.5-inch SAS disks as hot spares
- NFS storage option: 30 x 600 GB 15k RPM 3.5-inch SAS disks, plus 1 x 600 GB 15k RPM 3.5-inch SAS disk as a hot spare

For 100 virtual machines (EMC VNXe3300):
- iSCSI storage option: 77 x 300 GB 15k RPM 3.5-inch SAS disks, plus 3 x 300 GB 15k 3.5-inch SAS disks as hot spares
- NFS storage option: 63 x 600 GB 15k RPM 3.5-inch SAS disks, plus 3 x 600 GB 15k RPM 3.5-inch SAS disks as hot spares
This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.
VMware vSphere storage virtualization for VSPEX
VMware ESXi provides host-level storage virtualization. It virtualizes the physical
storage and presents the virtualized storage to virtual machines.
A virtual machine stores its operating system and all of the other files related to its activities in a virtual disk. The virtual disk itself is one or more files. VMware uses a virtual SCSI controller to present the virtual disk to the guest operating system running inside the virtual machine.
A virtual disk resides in a datastore. Depending on the type used, the datastore can be either an iSCSI VMware Virtual Machine File System (VMFS) datastore or an NFS datastore. An additional option, Raw Device Mapping, allows the virtual infrastructure to connect a physical device directly to a virtual machine.
Figure 8. VMware virtual disk types
VMFS
VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local device (direct-attached or SAN) or over networked storage devices using protocols such as iSCSI.
Raw Device Mapping
VMware also provides a mechanism named Raw Device Mapping (RDM). RDM allows
a virtual machine to access a volume directly on the physical storage, and can only be
used with Fibre Channel or iSCSI.
NFS
VMware supports using NFS file systems from an external NAS storage system or device as a virtual machine datastore.
Storage layout for 50 virtual machines
The architecture diagram in this section shows the physical disk layout. Disk
provisioning on the VNXe series is simplified with wizards. The administrator does
not choose which disks belong to a given storage pool. The wizard may choose any
available disk of the proper type, regardless of where the disk physically resides in
the array.
Figure 9 shows the iSCSI storage architecture for 50 virtual machines on VNXe3150.
Figure 9. iSCSI storage architecture for 50 virtual machines on EMC VNXe3150
The reference architecture uses the following configuration:
- Forty-five 300 GB SAS disks are allocated to a single storage pool as nine 4+1 RAID 5 groups (sold as nine packs of five disks).
- At least one hot spare disk is allocated for every 30 disks of a given type.
- At least four iSCSI LUNs are allocated to the ESXi cluster from the single storage pool to serve as datastores for the virtual servers.
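The disk counts above follow from two simple rules: each N+1 RAID 5 group yields N data disks of capacity, and at least one hot spare is allocated per 30 disks of a type. The sketch below checks the arithmetic for both the 50-VM and 100-VM iSCSI layouts; the capacity figures are raw pool capacity only, before file system and virtualization overhead.

```python
def raid5_pool_capacity(groups, data_disks, disk_gb):
    """Raw usable capacity of a pool built from N x (data+1) RAID 5 groups."""
    return groups * data_disks * disk_gb

def hot_spares_required(disks, per=30):
    """At least one hot spare for every 30 disks of a given type."""
    return -(-disks // per)  # ceiling division

# 50-VM iSCSI layout: nine 4+1 RAID 5 groups of 300 GB disks.
print(raid5_pool_capacity(9, 4, 300))   # 10800 GB raw usable
print(hot_spares_required(45))          # 2 hot spares for 45 disks

# 100-VM iSCSI layout: eleven 6+1 RAID 5 groups of 300 GB disks.
print(raid5_pool_capacity(11, 6, 300))  # 19800 GB raw usable
print(hot_spares_required(77))          # 3 hot spares for 77 disks
```

The spare counts match the 2 and 3 hot spares listed in the hardware tables for the 45-disk and 77-disk configurations.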
Figure 10 shows the NFS storage architecture for 50 virtual machines on VNXe3150.
Figure 10. NFS storage architecture for 50 virtual machines on EMC VNXe3150
The reference architecture uses the following configuration:
- Thirty 600 GB SAS disks are allocated to a single storage pool as six 4+1 RAID 5 groups (sold as 5-disk packs).
- At least one hot spare disk is allocated for every 30 disks of a given type.
- At least two NFS shares are allocated to the vSphere cluster from the single storage pool to serve as datastores for the virtual servers.
Storage layout for 100 virtual machines
Figure 11 shows the iSCSI storage architecture for 100 virtual machines on VNXe3300.
Figure 11. iSCSI storage architecture for 100 virtual machines on EMC VNXe3300
The reference architecture uses the following configuration:
- Seventy-seven 300 GB SAS disks are allocated to a single storage pool as eleven 6+1 RAID 5 groups (sold as 11 packs of seven disks).
- At least one hot spare disk is allocated for every 30 disks of a given type.
- At least 10 iSCSI LUNs are allocated to the ESXi cluster from the single storage pool to serve as datastores for the virtual servers.
Figure 12 shows the NFS storage architecture for 100 virtual machines on VNXe3300.
Figure 12. NFS storage architecture for 100 virtual machines on EMC VNXe3300
The reference architecture uses the following configuration:
- Sixty-three 600 GB SAS disks are allocated to a single storage pool as nine 6+1 RAID 5 groups (sold as 7-disk packs).
- At least one hot spare disk is allocated for every 30 disks of a given type.
- At least two NFS shares are allocated to the vSphere cluster from the single storage pool to serve as datastores for the virtual servers.
High availability and failover
Overview
This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with minimal or no impact on business operations.
Virtualization layer
Configure high availability in the virtualization layer, and allow the hypervisor to automatically restart virtual machines that fail. Figure 13 illustrates the hypervisor layer responding to a failure in the compute layer.
Figure 13. High availability at the virtualization layer
Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.
Compute layer
While the choice of servers to implement in the compute layer is flexible, it is recommended to use enterprise-class servers designed for the datacenter. This type of server has redundant power supplies, which should be connected to separate power distribution units (PDUs) in accordance with your server vendor's best practices.
Figure 14. Redundant power supplies
Configure high availability in the virtualization layer. This means that the compute
layer must be configured with enough resources so that the total number of available
resources meets the needs of the environment, even with a server failure, as
demonstrated in Figure 13.
Network layer
The advanced networking features of the VNX family provide protection against
network connection failures at the array. Each vSphere host has multiple connections
to user and storage Ethernet networks to guard against link failures. These
connections should be spread across multiple Ethernet switches to guard against
component failure in the network.
Figure 15. Network layer High Availability
By ensuring that there are no single points of failure in the network layer, the compute layer can continue to access storage and communicate with users even if a component fails.
Storage layer
The VNX family is designed for five 9s availability by using redundant components
throughout the array. All of the array components are capable of continued operation
in case of hardware failure. The RAID disk configuration on the array provides
protection against data loss due to individual disk failures, and the available hot
spare drives can be dynamically allocated to replace a failing disk.
Figure 16. VNXe series high availability
EMC Storage arrays are designed to be highly available by default. When configured
according to the directions in their installation guides, no single unit failure results in
data loss or unavailability.
Validation test profile
Profile characteristics
The VSPEX solution is validated with the environment profile listed in Table 7.
Table 7. Solution profile characteristics

Profile characteristic                                        Value
Number of virtual machines                                    50/100
Virtual machine OS                                            Windows Server 2012 Datacenter Edition
Processors per virtual machine                                1
Number of virtual processors per physical CPU core            4
RAM per virtual machine                                       2 GB
Average storage available for each virtual machine            100 GB
Average I/O Operations per Second (IOPS) per virtual machine  25 IOPS
Number of datastores to store virtual machine disks           2/4
Number of virtual machines per datastore                      25
Disk and RAID type for datastores (iSCSI)                     RAID 5, 300 GB, 15k RPM, 3.5-inch SAS disks
Disk and RAID type for datastores (NFS)                       RAID 5, 600 GB, 15k RPM, 3.5-inch SAS disks
Backup environment configuration guide
Backup characteristics
The solution is sized with the application environment profile shown in Table 8.
Table 8. Backup profile characteristics

Profile characteristic                  50 virtual machines              100 virtual machines
Number of users                         500                              1000
Number of virtual machines              50 (20% DB, 80% Unstructured)    100 (20% DB, 80% Unstructured)
Exchange data                           0.5 TB (1 GB mailbox per user)   1 TB (1 GB mailbox per user)
SharePoint data                         0.25 TB                          0.5 TB
SQL Server data                         0.25 TB                          0.5 TB
User data                               2.5 TB (5.0 GB per user)         5 TB (5.0 GB per user)

Daily change rate for the applications
Exchange data                           10%
SharePoint data                         2%
SQL Server data                         5%
User data                               2%

Retention per data type
All DB data                             14 dailies
User data                               30 dailies, 4 weekly, 1 monthly

Backup layout
Backup layout for 50 virtual machines
NetWorker Fast Start provides various deployment options depending on the specific
use case and the recovery requirements. In this case, the solution is deployed with
both NetWorker Fast Start and Data Domain managed as a single solution. This
enables the backup of unstructured user data directly to the Data Domain system for
simple file level recovery. The database is managed by the NetWorker Fast Start
software, but it is directed to the Data Domain system with the embedded Boost
client library. The backup solution unifies the backup process and achieves
dramatically increased levels of performance and efficiency.
Backup layout for 100 virtual machines
Avamar provides various deployment options depending on the specific use case and
the recovery requirements. In this case, the solution is deployed with both Avamar
and Data Domain managed as a single solution. This enables the unstructured user
data to be backed up directly to the Avamar system for simple file level recovery. The
database and virtual machine images are managed by the Avamar software, but are
directed to the Data Domain system with the embedded Boost client library. This
backup solution unifies the backup process with industry leading deduplication
backup software and storage, and achieves the highest levels of performance and
efficiency.
Sizing guidelines
The following sections provide definitions of the reference workload used to size and
implement the VSPEX architectures discussed in this document. Guidance is provided
on how to correlate those reference workloads to actual customer workloads and how
that may change the end delivery from the server and network perspective.
Modification to the storage definition can be made by adding drives for greater
capacity and performance. The disk layouts have been created to provide support for
the appropriate number of virtual machines at the defined performance level and
typical operations like snapshots. Decreasing the number of recommended drives or
stepping down an array type can result in lower I/O Operations per Second (IOPS) per
virtual machine and a reduced user experience due to higher response time.
Reference workload
When considering moving an existing server into a virtual infrastructure, you have the
opportunity to gain efficiency by right-sizing the virtual hardware resources assigned
to that system.
Each VSPEX Virtual Infrastructure balances the storage, network, and compute
resources needed for a set number of virtual machines that have been validated by
EMC. In practice, each virtual machine has its own set of requirements, which rarely
fit a pre-defined idea of what a virtual machine should be. In any discussion about
virtual infrastructures, it is important to first define a reference workload. Not all
servers perform the same tasks, and it is impractical to build a reference that takes
into account every possible combination of workload characteristics.
Defining the reference workload
To simplify the discussion, we have defined a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose.
For the VSPEX solutions, the reference workload is defined as a single virtual
machine. Table 9 shows the characteristics of this virtual machine.
Table 9. Virtual machine characteristics

Characteristic                                   Value
Virtual machine operating system                 Microsoft Windows Server 2012 Datacenter Edition
Virtual processors per virtual machine           1
RAM per virtual machine                          2 GB
Available storage capacity per virtual machine   100 GB
IOPS per virtual machine                         25
I/O pattern                                      Random
I/O read/write ratio                             2:1
This specification for a virtual machine is not intended to represent any specific
application. Rather, it represents a single common point of reference against which
other virtual machines can be measured.
Applying the reference workload
Overview
When considering moving an existing server into a virtual infrastructure, you have the
opportunity to gain efficiency by right-sizing the virtual hardware resources assigned
to that system.
The reference architectures create a pool of resources that are sufficient to host a target number of Reference virtual machines with the characteristics shown in Table 9. The customer virtual machines may not exactly match the specifications above. In
that case, define a single specific customer virtual machine as the equivalent of some
number of Reference virtual machines, and assume the virtual machines are in use in
the pool. Continue to provision virtual machines from the resource pool until no
resources remain.
Consider the following examples.
Example 1: Custom-built application
A small custom-built application server needs to move into this virtual infrastructure.
The physical hardware that supports the application is not fully utilized. A careful
analysis of the existing application reveals that the application can use 1 processor,
and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about
30 GB on local hard drive storage.
Based on these numbers, the following resources are needed from the resource pool:
 CPU resources for one virtual machine

 Memory resources for two virtual machines

 Storage capacity for one virtual machine

 IOPS for one virtual machine
In this example, a single virtual machine uses the resources for two of the Reference
virtual machines. If the original pool had the resources to provide 100 Reference
virtual machines, the resources for 98 Reference virtual machines remain.
Example 2: Point of sale system
The database server for a customer’s Point of Sale system needs to move into this
virtual infrastructure. It is currently running on a physical system with 4 CPUs and 16
GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average
busy cycle.
The following are the requirements to virtualize this application:
 CPUs of four Reference virtual machines

 Memory of eight Reference virtual machines

 Storage of two Reference virtual machines

 IOPS of eight Reference virtual machines
In this case, the one virtual machine uses the resources of eight Reference virtual
machines. Implementing this one machine on a pool for 100 Reference virtual
machines would consume the resources of eight Reference virtual machines, and
leave resources for 92 Reference virtual machines.
Example 3: Web server
The customer’s web server needs to move into this virtual infrastructure. It is currently
running on a physical system with 2 CPUs and 8 GB of memory. It uses 25 GB of
storage and generates 50 IOPS during an average busy cycle.
The following are the requirements to virtualize this application:
 CPUs of two Reference virtual machines

 Memory of four Reference virtual machines

 Storage of one Reference virtual machine

 IOPS of two Reference virtual machines
In this case, one virtual machine would use the resources of four Reference virtual
machines. If this is implemented on a resource pool for 100 Reference virtual
machines, resources for 96 Reference virtual machines remain.
Example 4: Decision support database
The database server for a customer’s decision support system needs to move into
this virtual infrastructure. It is currently running on a physical system with 10 CPUs
and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an
average busy cycle.
The following are the requirements to virtualize this application:
 CPUs of 10 Reference virtual machines

 Memory of 32 Reference virtual machines

 Storage of 52 Reference virtual machines

 IOPS of 28 Reference virtual machines
In this case, one virtual machine uses the resources of 52 Reference virtual machines.
If this is implemented on a resource pool for 100 Reference virtual machines,
resources for 48 Reference virtual machines remain.
Summary of examples
The four examples illustrate the flexibility of the resource pool model. In all four cases
the workloads simply reduce the amount of available resources in the pool. All four
examples can be implemented on the same virtual infrastructure with an initial
capacity for 100 Reference virtual machines, and resources for 34 Reference virtual
machines would remain in the resource pool as shown in Figure 17.
Figure 17. Resource pool flexibility
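The pool accounting across the four examples can be sketched as a simple running total. The per-application counts below are the Equivalent Reference virtual machine values from the examples above:

```python
POOL_SIZE = 100  # Reference virtual machines available in this solution

# Equivalent Reference virtual machines from Examples 1-4
examples = {
    "custom-built application": 2,
    "point of sale database": 8,
    "web server": 4,
    "decision support database": 52,
}

consumed = sum(examples.values())     # total drawn from the pool
remaining = POOL_SIZE - consumed      # capacity left for growth

print(consumed)   # 66
print(remaining)  # 34
```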
In more advanced cases, there may be tradeoffs between memory and I/O or other
relationships where increasing the amount of one resource decreases the need for
another. In these cases, the interactions between resource allocations become highly
complex, and are outside the scope of the document. Once the change in resource
balance has been examined and the new level of requirements is known, these
virtual machines can be added to the infrastructure using the method described in
the examples.
Implementing the reference architectures
Overview
The reference architectures require a set of hardware to be available for the CPU,
memory, network, and storage needs of the system. These are presented as general
requirements that are independent of any particular implementation. This section
describes some considerations for implementing the requirements.
Resource types
The reference architectures define the hardware requirements for the solution in
terms of four basic types of resources:
 CPU resources

 Memory resources

 Network resources

 Storage resources
This section describes the resource types, how they are used in the reference
architecture, and key considerations for implementing them in a customer
environment.
CPU resources
The architectures define the number of CPU cores that are required, but not a specific
type or configuration. It is intended that new deployments use recent revisions of
common processor technologies. It is assumed that these perform as well as, or
better than, the systems used to validate the solution.
In any running system, it is important to monitor the utilization of resources and
adapt as needed. The Reference virtual machine and required hardware resources in
the reference architectures assume that there are no more than four virtual CPUs for
each physical processor core (4:1 ratio). In most cases, this provides an appropriate
level of resources for the hosted virtual machines; however, this ratio may not be
appropriate in all use cases. Monitor the CPU utilization at the hypervisor layer to
determine if more resources are required.
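A minimal sketch of the 4:1 consolidation arithmetic; the helper name is illustrative, and the ratio is the validated assumption stated above:

```python
import math

VCPU_PER_CORE = 4  # validated 4:1 virtual-to-physical CPU ratio

def physical_cores_needed(total_vcpus: int) -> int:
    """Minimum physical cores to host a given number of virtual CPUs at 4:1."""
    return math.ceil(total_vcpus / VCPU_PER_CORE)

print(physical_cores_needed(100))  # 25 cores for 100 single-vCPU machines
print(physical_cores_needed(17))   # 5
```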
Memory resources
Each virtual server in the reference architecture is defined to have 2 GB of memory. In a virtual environment, it is common to provision virtual machines with more memory than the hypervisor physically has, due to budget constraints. This memory overcommitment technique takes advantage of the fact that each virtual machine often does not fully utilize the memory allocated to it, so oversubscribing memory to some degree can make business sense. The administrator is responsible for proactively monitoring the oversubscription rate so that paging and swapping activity does not shift the bottleneck from the server to the storage subsystem.
If VMware ESXi runs out of memory for the guest operating systems, paging begins to take place, resulting in extra I/O activity going to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues, as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vswap activity, more disks need to be added, not for capacity, but to meet the increased performance demand. It is then up to the administrator to decide whether it is more cost effective to add more physical memory to the server, or to increase the amount of storage.
This solution is validated with statically assigned memory and no overcommitment of memory resources. If memory overcommitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.
Network resources
The reference architecture outlines the minimum needs of the system. If additional
bandwidth is needed, it is important to add capability at both the storage array and
the hypervisor host to meet the requirements. The options for network connectivity on
the server depend on the type of server. The storage arrays have a number of
included network ports, and have the option to add ports using EMC FLEX I/O
modules.
For reference purposes in the validated environment, EMC assumes that each virtual
machine generates 25 I/Os per second with an average size of 8 KB. This means that
each virtual machine is generating at least 200 KB/s of traffic on the storage network.
For an environment rated for 100 virtual machines, this comes out to a minimum of
approximately 20 MB/sec. This is well within the bounds of modern networks.
However, this does not consider other operations. For example, additional bandwidth
is needed for:
 User network traffic

 Virtual machine migration

 Administrative and management operations
The requirements for each of these vary depending on how the environment is being
used. It is not practical to provide concrete numbers in this context. However, the
network described in the reference architecture for each solution should be sufficient
to handle average workloads for the above use cases.
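The baseline storage-traffic estimate above can be sketched as follows. This covers only the validated per-virtual-machine I/O assumptions; user traffic, migration, and management bandwidth are excluded, as noted:

```python
# Baseline storage-network traffic from the validated assumptions:
# 25 IOPS per virtual machine at an 8 KB average I/O size.
IOPS_PER_VM = 25
IO_SIZE_KB = 8
VM_COUNT = 100

kbps_per_vm = IOPS_PER_VM * IO_SIZE_KB        # 200 KB/s per virtual machine
total_mbps = kbps_per_vm * VM_COUNT / 1024    # aggregate in MB/s

print(kbps_per_vm)           # 200
print(round(total_mbps, 1))  # roughly 20 MB/s for 100 virtual machines
```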
Regardless of the network traffic requirements, always have at least two physical network connections serving each logical network so that a single link failure does not impact the availability of the system. The network should be designed so
that the aggregate bandwidth in the event of a failure is sufficient to accommodate
the full workload.
Storage resources
The reference architectures contain layouts for the disks used in the validation of the
system. Each layout balances the available storage capacity with the performance
capability of the drives. There are a few layers to consider when examining storage
sizing. Specifically, the array has a collection of disks that are assigned to a storage
pool. From that storage pool, you can provision datastores to the VMware vSphere
Cluster. Each layer has a specific configuration that is defined for the solution and
documented in Chapter 5.
It is generally acceptable to replace drives with a type that has more capacity and the
same performance characteristics; or with ones that have higher performance
characteristics and the same capacity. Similarly, it is acceptable to change the
placement of drives in the drive shelves in order to comply with updated or new drive
shelf arrangements.
In other cases where there is a need to deviate from the proposed number and type of
drives specified, or the specified pool and datastore layouts, ensure that the target
layout delivers the same or greater resources to the system.
Implementation summary
The requirements that are stated in the reference architecture are what EMC considers
the minimum set of resources to handle the workloads required based on the stated
definition of a reference virtual server. In any customer implementation, the load of a
system varies over time as users interact with the system. However, if the customer
virtual machines differ significantly from the reference definition, and vary in the
same resource group, you may need to add more of that resource to the system.
Quick assessment
Overview
An assessment of the customer environment helps ensure that you implement the
correct VSPEX solution. This section provides an easy-to-use worksheet to simplify
the sizing calculations, and help assess the customer environment.
First, summarize the applications that are planned for migration into the VSPEX
Virtual Infrastructure. For each application, determine the number of virtual CPUs, the
amount of memory, the required storage performance, the required storage capacity,
and the number of Reference virtual machines required from the resource pool.
Applying the reference workload provides examples of this process.
Fill out a row in the worksheet for each application, as shown in Table 10.
Table 10. Blank worksheet row

Application           Resource                                 CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent Reference virtual machines
Example application   Resource requirements
                      Equivalent Reference virtual machines
Fill out the resource requirements for the application. The row requires inputs on four
different resources: CPU, Memory, IOPS, and Capacity.
CPU requirements
Optimizing CPU utilization is a significant goal for almost any virtualization project. A
simple view of the virtualization operation suggests a one-to-one mapping between
physical CPU cores and virtual CPU cores regardless of the physical CPU utilization. In
reality, consider whether the target application can effectively use all of the CPUs that
are presented. Use a performance-monitoring tool, such as perfmon in Microsoft
Windows or ESXtop in vSphere to examine the CPU Utilization counter for each CPU. If
they are equivalent, then implement that number of virtual CPUs when moving into
the virtual infrastructure. However, if some CPUs are used and some are not, consider
decreasing the number of virtual CPUs that are required.
In any operation involving performance monitoring, it is a best practice to collect data
samples for a period of time that includes all of the operational use cases of the
system. Use either the maximum or 95th percentile value of the resource
requirements for planning purposes.
Memory requirements
Server memory plays a key role in ensuring application functionality and
performance. Therefore, each application process has different targets for the
acceptable amount of available memory. When moving an application into a virtual
environment consider the current memory available to the system, and monitor the
free memory by using a performance monitoring tool, like Microsoft Windows
perfmon, to determine if it is being used efficiently.
Storage performance requirements
The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system. The first is the number of requests coming in, or IOPS. Equally important is the size of the requests, or I/O size: a request for 4 KB of data is significantly easier and faster to process than a request for 4 MB of data. That distinction becomes important with the third factor, which is the average I/O response time, or I/O latency.
IOPS
The Reference virtual machine calls for 25 IOPS. To monitor this on an existing system
use a performance-monitoring tool like Microsoft Windows perfmon. Perfmon
provides several counters that can help here. The most common are:
 Logical Disk\Disk Transfers/sec

 Logical Disk\Disk Reads/sec

 Logical Disk\Disk Writes/sec
For non-Windows systems such as VMware, use an equivalent tool for that system. For additional information on monitoring storage metrics using ESXtop, visit the following URL:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008205
The Reference virtual machine assumes a 2:1 read/write ratio. Use these counters to
determine the total number of IOPS, and the approximate ratio of reads to writes for
the customer application.
I/O size
The I/O size is important because smaller I/O requests are faster and easier to
process than large I/O requests. The Reference virtual machine assumes an average
I/O request size of 8 KB, which is appropriate for a large range of applications. Use
perfmon or another appropriate tool to monitor the “Logical Disk\Avg. Disk
Bytes/Transfer” counter to see the average I/O size. Most applications use I/O sizes
that are even powers of 2: 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. The performance counter reports a simple average, so it is common to see values like 11 KB or 15 KB instead of the common I/O sizes.
The Reference virtual machine assumes an 8 KB I/O size. If the average customer I/O
size is less than 8 KB, use the observed IOPS number. However, if the average I/O
size is significantly higher, apply a scaling factor to account for the large I/O size.
A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if
the application is using mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB
= 4). If that application is doing 100 IOPS at 32 KB, the factor indicates to plan for
400 IOPS since the Reference virtual machine assumed 8 KB I/O sizes.
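The scaling rule described above can be sketched as a small helper. The function name is illustrative; the logic follows the text: observed IOPS are used as-is at or below 8 KB, and scaled by (I/O size / 8 KB) above it:

```python
REFERENCE_IO_KB = 8  # I/O size assumed by the Reference virtual machine

def scaled_iops(observed_iops: float, io_size_kb: float) -> float:
    """Scale observed IOPS to 8 KB-equivalent IOPS for sizing purposes."""
    if io_size_kb <= REFERENCE_IO_KB:
        return observed_iops                   # use the observed number directly
    factor = io_size_kb / REFERENCE_IO_KB      # e.g. 32 KB / 8 KB = 4
    return observed_iops * factor

print(scaled_iops(100, 32))  # 400.0, matching the 32 KB example above
```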
I/O latency
The average I/O response time, or I/O latency, is a measurement of how quickly I/O
requests are processed by the storage system. The VSPEX solutions are designed to
meet a target average I/O latency of 20 ms. The recommendations in this sizing guide should allow the system to continue to meet that target; however, it is worthwhile to monitor the system and re-evaluate the resource pool utilization if needed.
To monitor I/O latency use the “Logical Disk\Avg. Disk sec/Transfer” counters in
Microsoft Windows perfmon, or the equivalent counter for non-Windows systems. If
the I/O latency is continuously over the target, re-evaluate the virtual machines in the
environment to ensure that they are not using more resources than intended.
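The latency check described above can be sketched as follows. The helper is hypothetical; it assumes samples of the "Logical Disk\Avg. Disk sec/Transfer" counter, which perfmon reports in seconds, compared against the 20 ms VSPEX target:

```python
TARGET_LATENCY_MS = 20  # VSPEX average I/O latency target

def latency_over_target(samples_sec: list) -> bool:
    """True if the average sampled latency (in seconds) exceeds the target."""
    avg_ms = 1000 * sum(samples_sec) / len(samples_sec)
    return avg_ms > TARGET_LATENCY_MS

print(latency_over_target([0.012, 0.018, 0.015]))  # False: average 15 ms
print(latency_over_target([0.030, 0.025, 0.028]))  # True: average ~27.7 ms
```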
Storage capacity requirements
The storage capacity requirement for a running application is usually the easiest
resource to quantify. Determine how much space on disk the system is using, and
add an appropriate factor to accommodate growth. For example, to virtualize a server
that is currently using 40 GB of a 200 GB internal drive with anticipated growth of
approximately 20% over the next year, 48 GB are required. Reserve space for regular
maintenance patches and swap files. In addition, some file systems, like Microsoft
NTFS, degrade in performance if they become too full.
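The growth calculation in the example above can be sketched as a small helper; the function name and rounding choice are assumptions for illustration:

```python
import math

def required_capacity_gb(used_gb: float, growth_rate: float) -> int:
    """Capacity needed after applying an anticipated growth factor.
    Reserve additional space for patches, swap files, and file-system headroom."""
    return math.ceil(used_gb * (1 + growth_rate))

print(required_capacity_gb(40, 0.20))  # 48, as in the example above
```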
Determining equivalent Reference virtual machines
With all of the resources defined, determine an appropriate value for the Equivalent
Reference virtual machines line by using the relationships in Table 11. Round all
values up to the closest whole number.
Table 11. Reference virtual machine resources

Resource   Value for Reference virtual machine   Relationship between requirements and equivalent Reference virtual machines
CPU        1                                     Equivalent Reference virtual machines = resource requirements
Memory     2                                     Equivalent Reference virtual machines = (resource requirements)/2
IOPS       25                                    Equivalent Reference virtual machines = (resource requirements)/25
Capacity   100                                   Equivalent Reference virtual machines = (resource requirements)/100
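The relationships in Table 11, combined with the round-up rule and the worksheet maximum, can be sketched as a small calculator. The function name is illustrative; the values are verified against Example 2 (Point of Sale):

```python
import math

# Per-resource capacity of one Reference virtual machine (Table 11)
REFERENCE_VM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_reference_vms(cpu, memory_gb, iops, capacity_gb) -> int:
    """Round each resource ratio up, then take the worksheet maximum."""
    ratios = (
        cpu / REFERENCE_VM["cpu"],
        memory_gb / REFERENCE_VM["memory_gb"],
        iops / REFERENCE_VM["iops"],
        capacity_gb / REFERENCE_VM["capacity_gb"],
    )
    return max(math.ceil(r) for r in ratios)

# Example 2 (Point of Sale): 4 vCPUs, 16 GB, 200 IOPS, 200 GB
print(equivalent_reference_vms(4, 16, 200, 200))  # 8
```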
For example, the Point of Sale system used in Example 2: Point of sale system earlier
in the paper requires four CPUs, 16 GB of memory, 200 IOPS and 200 GB of storage.
This translates to four Reference virtual machines of CPU, eight Reference virtual machines of memory, eight Reference virtual machines of IOPS, and two Reference virtual machines of capacity. Table 12 demonstrates how that machine fits into the
worksheet row.
Table 12. Example worksheet row

Application           Resource                                 CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent Reference virtual machines
Example application   Resource requirements                    4                    16            200    200
                      Equivalent Reference virtual machines    4                    8             8      2               8
Use the maximum value of the row to fill in the column for Equivalent Reference
virtual machines. As shown below, eight Reference virtual machines are required.
Figure 18. Required resources from the Reference Virtual Machine Pool
Once the worksheet has been filled out for each application that the customer wants
to migrate into the virtual infrastructure, compute the sum of the “Equivalent
Reference Virtual Machines” column on the right side of the worksheet as shown in
Table 13, to calculate the total number of Reference virtual machines that are
required in the pool. In the example, the result of the calculation from Table 11 is
shown for clarity, along with the value, rounded up to the nearest whole number, to
use.
Table 13. Example applications

Application                   Resource                                 CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Reference virtual machines
Example application #1:       Resource requirements                    1                    3             15     30
Custom-built application      Equivalent Reference virtual machines    1                    2             1      1               2
Example application #2:       Resource requirements                    4                    16            200    200
Point of sale system          Equivalent Reference virtual machines    4                    8             8      2               8
Example application #3:       Resource requirements                    2                    8             50     25
Web server                    Equivalent Reference virtual machines    2                    4             2      1               4
Example application #4:       Resource requirements                    10                   64            700    5120 (5 TB)
Decision support database     Equivalent Reference virtual machines    10                   32            28     52              52
Total equivalent Reference virtual machines                                                                                      66
The VSPEX solutions define discrete resource pool sizes. For this solution set, the pool can support 100 Reference virtual machines. Figure 19 shows that 34 Reference virtual machines are available after applying all four examples in the 100-virtual-machine solution.
Figure 19. Aggregate resource requirements from the Reference Virtual Machine Pool
As shown in Table 13, the customer requires 66 Reference virtual machines of
capability from the pool. Therefore, the 100 virtual machine resource pool provides
sufficient resources for the current needs as well as room for growth.
Fine tuning hardware resources
In most cases, the recommended hardware for servers and storage is sized
appropriately based on the process described. The customer may wish to further
customize the hardware resources that are available to the system. A complete
description of system architecture is beyond the scope of this document. Additional
customization can be done at this point.
Storage resources
In some applications, there is a need to separate application data from other
workloads. The storage layouts in the VSPEX architectures put all of the virtual
machines in a single resource pool. To separate the workloads, purchase additional
disk drives for the application workload and add them to a dedicated pool.
It is not appropriate to reduce the size of the main resource pool in order to support
application isolation, or to reduce the capability of the pool. The storage layouts
presented in the 50 and 100 virtual machine solutions are designed to balance many
different factors in terms of high availability, performance, and data protection.
Changing the components of the pool can have significant and difficult-to-predict
impacts on other areas of the system.
Server resources
For the server resources in the VSPEX solution, it is possible to customize the
hardware resources more effectively.
Figure 20.
70
Customizing server resources
VMware vSphere 5.1 for up to 100 Virtual Machines
Enabled by Microsoft Windows Server 2012, EMC VNXe, and EMC NextGeneration Backup
To do this, first total the resource requirements for the server components, as shown in Table 14. Note the Server Resource Component Totals line at the bottom of the worksheet, which sums the server resource requirements of all the applications in the table.
Table 14. Server resource component totals

                                     Server resources      Storage resources
Application                          CPU       Memory      IOPS    Capacity   Reference
                                     (virtual  (GB)                (GB)       virtual
                                     CPUs)                                    machines

Example application #1:
Custom built application
   Resource requirements             1         3           15      30
   Equivalent Reference
   virtual machines                  1         2           1       1          2

Example application #2:
Point of sale system
   Resource requirements             4         16          200     200
   Equivalent Reference
   virtual machines                  4         8           8       2          8

Example application #3:
Web server
   Resource requirements             2         8           50      25
   Equivalent Reference
   virtual machines                  2         4           2       1          4

Example application #4:
Decision support database
   Resource requirements             10        64          700     5120
   Equivalent Reference
   virtual machines                  10        32          28      52         52

Total equivalent Reference virtual machines                                   66

Server resource component totals     17        155
In this example, the target architecture requires 17 virtual CPUs and 155 GB of memory. This translates to five physical processor cores and 155 GB of memory, plus 2 GB for the hypervisor on each physical server. In contrast, the 100-reference-virtual-machine resource pool documented in the Reference Architecture calls for 200 GB of memory, plus 2 GB per physical server to run the hypervisor, and at least 25 physical processor cores. In this environment, the solution can be effectively implemented with fewer server resources.
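The translation above can be sketched as follows, assuming the consolidation ratio of four virtual CPUs per physical core implied by this document (17 vCPUs round up to five cores; 100 reference virtual machines map to at least 25 cores) and the 2 GB per-server hypervisor allowance. The two-server split is a hypothetical example.

```python
import math

VCPUS_PER_CORE = 4          # consolidation ratio implied by this solution
HYPERVISOR_OVERHEAD_GB = 2  # per physical server, for the hypervisor itself

def physical_requirements(total_vcpus, total_memory_gb, server_count):
    """Translate worksheet totals into physical cores and memory."""
    cores = math.ceil(total_vcpus / VCPUS_PER_CORE)
    memory = total_memory_gb + HYPERVISOR_OVERHEAD_GB * server_count
    return cores, memory

# Worksheet totals from Table 14, spread across (say) two physical servers
cores, memory = physical_requirements(17, 155, server_count=2)
print(cores, memory)  # 5 159
```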
Note: Keep high-availability requirements in mind when customizing the resource pool hardware.
Table 15 is a blank worksheet.
Table 15. Blank customer worksheet

                                     Server resources      Storage resources
Application                          CPU       Memory      IOPS    Capacity   Reference
                                     (virtual  (GB)                (GB)       virtual
                                     CPUs)                                    machines

   Resource requirements             ____      ____        ____    ____
   Equivalent Reference
   virtual machines                  ____      ____        ____    ____       ____

(The worksheet provides one pair of rows per application; repeat as needed.)

Total equivalent Reference virtual machines                                   ____

Server resource component totals     ____      ____
Chapter 5 VSPEX Configuration Guidelines
This chapter presents the following topics:

Configuration overview
Pre-deployment tasks
Customer configuration data
Prepare switches, connect network, and configure switches
Prepare and configure storage array
Install and configure VMware vSphere hosts
Install and configure SQL Server database
Install and configure VMware vCenter Server
Configuration overview

Deployment process

The deployment process is divided into the stages shown in Table 16. Upon completion of the deployment, the VSPEX infrastructure is ready for integration with the existing customer network and server infrastructure.

Table 16 lists the main stages in the solution deployment process, with references to the sections where the relevant procedures are provided.
Table 16. Deployment process overview

Stage  Description                                   Reference
1      Verify prerequisites                          Pre-deployment tasks
2      Obtain the deployment tools                   Deployment prerequisites
3      Gather customer configuration data            Customer configuration data
4      Rack and cable the components                 Refer to the vendor documentation
5      Configure the switches and networks,          Prepare switches, connect network,
       connect to the customer network               and configure switches
6      Install and configure the VNXe                Prepare and configure storage array
7      Configure virtual machine datastores          Prepare and configure storage array
8      Install and configure the servers             Install and configure VMware vSphere hosts
9      Set up SQL Server (used by VMware vCenter)    Install and configure SQL Server database
10     Install and configure vCenter and             Install and configure VMware vCenter Server
       virtual machine networking
11     Perform several test scenarios to ensure      Install and configure VMware vCenter Server
       that the deployment has been successful
Pre-deployment tasks
Overview
Pre-deployment tasks, shown in Table 17, include procedures that do not directly
relate to environment installation and configuration, but whose results are necessary
at the time of installation. Pre-deployment tasks include collecting hostnames, IP
addresses, VLAN IDs, license keys, installation media, and so on. These tasks should
be performed before the customer visit to decrease the time required onsite.
Table 17. Tasks for pre-deployment

Task              Description                                          Reference
Gather documents  Gather the related documents listed in Appendix C.   EMC documentation
                  These are used throughout this document to provide
                  detail on setup procedures and deployment best
                  practices for the various components of the
                  solution.
Gather tools      Gather the required and optional tools for the       Table 18: Deployment
                  deployment. Use Table 18 to confirm that all         prerequisites checklist
                  equipment, software, and appropriate licenses are
                  available before the deployment process.
Gather data       Collect the customer-specific configuration data     Appendix B
                  for networking, naming, and required accounts.       Other documentation
                  Enter this information into the Customer
                  Configuration Data worksheet for reference during
                  the deployment process.

Deployment prerequisites

Table 18 itemizes the hardware, software, and license requirements to configure the solution. For additional information on hardware and software, refer to Table 2 and Table 3.
Table 18. Deployment prerequisites checklist

Task      Description                                                  Reference
Hardware  Physical servers to host virtual servers: sufficient         Table 2
          physical server capacity to host 50-100 virtual servers

          VMware vSphere 5.1 servers to host virtual
          infrastructure servers
          Note: This requirement may be covered in the existing
          infrastructure

          Networking: switch port capacity and capabilities as
          required by the virtual server infrastructure

          EMC VNXe: multiprotocol storage array with the required
          disk layout

Software  VMware ESXi 5.1 installation media

          VMware vCenter Server 5.1 installation media

          EMC VSI for VMware vSphere: Unified Storage Management       EMC Online Support

          EMC VSI for VMware vSphere: Storage Viewer

          Microsoft Windows Server 2012 or Microsoft Windows
          Server 2008 R2 installation media

          Microsoft SQL Server 2008 or newer installation media
          Note: This requirement may be covered in the existing
          infrastructure

          EMC vStorage API for Array Integration (VAAI) plug-in        EMC Online Support

Licenses  VMware vCenter 5.1 license key

          VMware ESXi 5.1 license keys

          Microsoft Windows Server 2012 (or higher) Datacenter
          license keys
          Note: This requirement may be covered by an existing
          Microsoft Key Management Server (KMS)

          Microsoft SQL Server license key
          Note: This requirement may be covered in the existing
          infrastructure
Customer configuration data
To reduce the onsite time, information such as IP addresses and hostnames should
be assembled as part of the planning process.
Appendix B provides a table to maintain a record of relevant information. This form
can be expanded or contracted as required, and information may be added, modified,
and recorded as deployment progresses.
Additionally, complete the VNXe Series Configuration Worksheet, available on the
EMC Online Support website, to provide the most comprehensive array-specific
information.
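A pre-visit completeness check of the assembled configuration data can be sketched as follows. The field names here are hypothetical placeholders, not the actual Appendix B or VNXe worksheet fields.

```python
# Illustrative pre-deployment check: every field gathered during planning
# (hostnames, IP addresses, VLAN IDs, license keys) should be filled in
# before the onsite visit. Field names are hypothetical examples.
REQUIRED_FIELDS = [
    "esxi_hostnames", "esxi_ip_addresses", "vnxe_management_ip",
    "vlan_id_management", "vlan_id_storage", "vlan_id_vmotion",
    "vcenter_license_key",
]

def missing_fields(worksheet: dict) -> list:
    """Return the worksheet entries that are still empty or absent."""
    return [f for f in REQUIRED_FIELDS if not worksheet.get(f)]

# A partially completed worksheet flags everything still to be collected
worksheet = {"esxi_hostnames": ["esxi-01", "esxi-02"], "vlan_id_storage": 20}
print(missing_fields(worksheet))
```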
Prepare switches, connect network, and configure switches

Overview

This section provides the requirements for the network infrastructure needed to support this architecture. Table 19 summarizes the tasks for switch and network configuration, with references for further information.

Table 19. Tasks for switch and network configuration

Task            Description                                   Reference
Configure       Configure storage array and ESXi host         Your vendor's switch
infrastructure  infrastructure networking as specified in     configuration guide
network         the solution document.
Configure       Configure private and public VLANs as
VLANs           required.
Complete        - Connect the switch interconnect ports.
network         - Connect the VNXe ports.
cabling         - Connect the ESXi server ports.
Prepare network switches

For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 2. If the existing infrastructure meets these requirements, no new hardware installation is needed.
Configure infrastructure network
The infrastructure network requires redundant network links for each ESXi host, the
storage array, the switch interconnect ports, and the switch uplink ports. This
configuration provides both redundancy and additional network bandwidth. This
configuration is required regardless of whether the network infrastructure for the
solution already exists or is being deployed alongside other components of the
solution.
Figure 21 shows a sample redundant Ethernet infrastructure for this solution. The
diagram illustrates the use of redundant switches and links to ensure that no single
points of failure exist in network connectivity.
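The no-single-point-of-failure rule described above can be expressed as a simple check: every device needs at least two links landing on at least two different switches. Device and switch names below are hypothetical.

```python
# Sketch of the redundancy rule: each host, storage processor, and uplink
# must have at least two links on at least two distinct switches, so no
# single switch or cable is a single point of failure.
links = {
    "esxi-01":  ["switch-a", "switch-b"],
    "esxi-02":  ["switch-a", "switch-b"],
    "vnxe-spa": ["switch-a", "switch-b"],
    "vnxe-spb": ["switch-a", "switch-a"],  # misconfigured: both ports on one switch
}

def single_points_of_failure(links):
    """Return devices whose connectivity depends on a single switch or link."""
    return [dev for dev, switches in links.items()
            if len(switches) < 2 or len(set(switches)) < 2]

print(single_points_of_failure(links))  # ['vnxe-spb']
```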
Figure 21. Sample Ethernet network architecture (iSCSI)

Figure 22. Sample Ethernet network architecture (NFS)
Configure VLANs

Ensure adequate switch ports for the storage array, and configure the ESXi hosts with a minimum of three VLANs for:

- Virtual machine networking, ESXi management, and CIFS traffic (customer-facing networks, which may be separated if desired)
- NFS or iSCSI networking (private network)
- vMotion (private network)

Complete network cabling

Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is a complete connection to the existing customer network.
Note: At this point, the new equipment is being connected to the existing customer network. Be careful that unforeseen interactions do not cause service issues on the customer network.
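A trivial sanity check of the three-VLAN separation described above: the customer-facing, storage, and vMotion traffic classes should not share a VLAN ID. The IDs shown are placeholders, not recommendations.

```python
# Minimal check that the three traffic classes use distinct VLAN IDs.
vlan_plan = {
    "vm_and_management": 100,  # customer-facing networks
    "storage":           200,  # NFS or iSCSI (private)
    "vmotion":           300,  # vMotion (private)
}

def vlans_are_separated(plan):
    """True when every traffic class has its own VLAN ID."""
    return len(set(plan.values())) == len(plan)

print(vlans_are_separated(vlan_plan))  # True
```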
Prepare and configure storage array

VNXe configuration overview

This section describes how to configure the VNXe storage array. In this solution, the VNXe series provides Network File System (NFS) or iSCSI Virtual Machine File System (VMFS) data storage for VMware hosts.
Table 20. Tasks for storage configuration

Task              Description                                 Reference
Set up initial    Configure the IP address information and    VNXe System Installation Guide
VNXe              other key parameters on the VNXe.           VNXe Series Configuration
configuration                                                 Worksheet
Set up VNXe       Configure LACP on the VNXe and network      Your vendor's switch
networking        switches.                                   configuration guide
Provision         Based on your implementation (iSCSI or
storage for NFS   NFS), choose one of the following
or iSCSI          provisioning methods:
datastores        - Create iSCSI servers (targets) to be
                    presented to the ESXi servers (iSCSI
                    initiators) as VMFS datastores that
                    host the virtual servers.
                  - Create NFS file systems to be presented
                    to the ESXi servers as NFS datastores
                    that host the virtual servers.
Prepare VNXe

The VNXe3150 System Installation Guide and VNXe3300 System Installation Guide provide instructions on assembling, racking, cabling, and powering the VNXe. There are no specific setup steps for this solution.
Set up initial VNXe configuration

After completing the initial VNXe setup, configure key information about the existing environment so that the storage array can communicate with it. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:

- DNS
- NTP
- Storage network interfaces
- Storage network IP address
- CIFS services and Active Directory domain membership
The reference documents listed in Table 20 provide more information on how to configure the VNXe platform. Storage layout for 50 virtual machines and Storage layout for 100 virtual machines provide more information on the disk layout of the two solutions.

Note: Only one of the following provisioning methods is required, based on your implementation (iSCSI or NFS).
Provision storage for iSCSI datastores

Complete the following steps in Unisphere to configure iSCSI servers on the VNXe array to store the virtual servers:

1. Create a pool with the appropriate number of disks.

   a. In Unisphere, select System > Storage Pools.

   b. Select Configure Disks and manually create a new pool by Disk Type for SAS drives. The validated configuration uses a single pool with 45 drives (for 50 virtual machines) or 77 drives (for 100 virtual machines). In other scenarios, create separate pools. The Storage configuration guidelines section provides additional information.

   Note: Create your hot spare disks at this point. Refer to the VNXe3150 or VNXe3300 System Installation Guide for additional information.

   Figure 9 depicts the target storage layout for 50 virtual machines, and Figure 11 depicts the storage layout for 100 virtual machines.

   Note: As a performance best practice, all of the drives in the pool should be of the same size.

2. Create an iSCSI server.

   a. In Unisphere, select Settings > iSCSI Server Settings > Add iSCSI Server. The wizard appears.

   b. Refer to the VNXe3150/VNXe3300 System Installation Guide for detailed instructions to create an iSCSI server.

3. Create a VMware storage resource.

   a. In Unisphere, select Storage > VMware > Create. Create an iSCSI datastore in the pool and iSCSI server. The size of the datastore is determined by the number of virtual machines that it contains. The Storage configuration guidelines section provides additional information about partitioning virtual machines into separate datastores. The validated configuration uses four 1.5 TB datastores (for 50 virtual machines) or ten 750 GB datastores (for 100 virtual machines, with each virtual machine sized at 70 GB).

   Note: Do not enable Thin Provisioning.

   b. If snapshot data protection is needed, configure the required protection space.
The validated configuration also enables the use of array-based snapshots to maintain point-in-time views of the datastores. The snapshots can be used as sources for backups or for other use cases. When utilizing snapshots, consider the following issues that customers may experience:

- There is a short-term increase in I/O latency when taking an iSCSI snapshot. To keep this increase from being noticeable, do not set multiple snapshots to occur on the same schedule.

- When the most recent snapshot is deleted on a large LUN, a new snapshot cannot be created for a period while snapshot reconciliation occurs. To avoid this situation, use the snapshot scheduling tool in Unisphere.

Note: This solution is validated with VNXe Operating Environment version 2.2.0.16150. There is a known issue with array-based snapshots in this version, which is addressed in a hot fix. Later revisions of the VNXe Operating Environment will incorporate the necessary changes. Contact EMC Customer Support, or reference Primus article emc293164, to obtain this hot fix.
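The datastore sizing above can be approximated with a small helper. The grouping of ten virtual machines per datastore and the five percent headroom are assumptions used for illustration, not EMC guidance.

```python
import math

def datastore_layout(vm_count, vm_size_gb, vms_per_datastore, headroom=1.05):
    """Illustrative sizing: group VMs into datastores and add headroom.
    The grouping and headroom factors are assumptions, not EMC guidance."""
    count = math.ceil(vm_count / vms_per_datastore)
    size_gb = math.ceil(vms_per_datastore * vm_size_gb * headroom)
    return count, size_gb

# Roughly reproduces the validated 100-VM layout: ten datastores of ~750 GB,
# with each virtual machine sized at 70 GB
print(datastore_layout(100, 70, 10))  # (10, 735)
```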
Provision storage for NFS datastores

Complete the following steps in Unisphere to configure NFS file systems on the VNXe array that are used to store the virtual servers. Figure 10 and Figure 12 show the storage layouts for 50 and 100 virtual machines.

1. Using the Unisphere storage provisioning wizard, create a Performance Pool of six RAID 5 (4+1) groups, totaling thirty 600 GB SAS drives (for 50 virtual machines), or sixty-three 600 GB SAS drives (for 100 virtual machines).

   a. Go to System > Storage Pools.

   b. Click Configure Disks.

   c. Manually create a new pool with the option Pool created for VMware Storage – Datastore in the first step of the Disk Configuration Wizard.

   d. Choose thirty 600 GB SAS drives (for 50 virtual machines), or sixty-three 600 GB SAS drives (for 100 virtual machines), to create RAID 5 (4+1) groups in the following steps.

   Note: Create your hot spare disks at this point. Consult the VNXe System Installation Guide for additional information.

2. Create multiple file systems from the NAS pool to present to the ESXi servers as NFS datastores. The validated solution uses two 3 TB (for 50 virtual machines) or 5 TB (for 100 virtual machines) file systems from the pool. In a customer implementation, it may be desirable to create logical separation between virtual machine groups by assigning some to one file system and others to a separate one. In cases where there is a need to deviate from the proposed number and type of drives, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resource levels to the system.

3. Create the NFS datastores and export them to the ESXi servers.

   a. Go to Storage > VMware.

   b. Click Create in the VMware Storage Wizard.

   c. Choose Network File System (NFS) as the datastore type. The wizard helps to complete the storage provisioning.

   Export the file systems using NFS, and give root access to the ESXi servers.
Install and configure VMware vSphere hosts

Overview

This section provides the requirements for the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 21 describes the tasks that must be completed.

Table 21. Tasks for server installation

Task            Description                                  Reference
Install ESXi    Install the ESXi 5.1 hypervisor on the       vSphere Installation and
                physical servers being deployed for the      Setup Guide
                solution.
Configure ESXi  Configure ESXi networking, including NIC     vSphere Networking
networking      trunking, VMkernel ports, virtual machine
                port groups, and jumbo frames.
Connect VMware  Connect the VMware datastores to the ESXi    vSphere Storage Guide
datastores      hosts deployed for the solution.

Install ESXi
Upon initial power-up of the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in each server's BIOS. If the servers are equipped with a RAID controller, it is recommended to configure mirroring on the local disks.

Boot the ESXi 5.1 installation media and install the hypervisor on each of the servers. ESXi hostnames, IP addresses, and a root password are required for installation. Appendix B provides appropriate values.
Configure ESXi networking

During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and meet bandwidth requirements, an additional NIC must be added, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client.

Each VMware ESXi server should have multiple interface cards for each virtual network to ensure redundancy and to provide for the use of network load balancing, link aggregation, and network adapter failover.

VMware ESXi networking configuration, including load balancing, link aggregation, and failover options, is described in vSphere Networking. Choose the appropriate load-balancing option based on what is supported by the network infrastructure.

Create VMkernel ports as required, based on the infrastructure configuration:

- VMkernel port for NFS or iSCSI traffic
- VMkernel port for VMware vMotion
- Virtual server port groups (used by the virtual servers to communicate on the network)
vSphere Networking describes the procedure for configuring these settings. Refer to
the list of documents in Appendix C for more information.
Jumbo frames

A jumbo frame is an Ethernet frame with a payload larger than the standard 1500 bytes. The payload size is also known as the Maximum Transmission Unit (MTU), and the generally accepted maximum size for a jumbo frame is an MTU of 9000 bytes. Processing overhead is proportional to the number of frames; therefore, enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent, which increases network throughput. Jumbo frames should be enabled end to end, including on the network switches, the ESXi servers, and the VNXe storage processors.

Jumbo frames can be enabled on the ESXi server at two different levels. If all the ports on the virtual switch need jumbo frames, select the properties of the virtual switch and edit the MTU setting from within vCenter. If only specific VMkernel ports need jumbo frames, edit the VMkernel port under network properties from vCenter.

To enable jumbo frames on the VNXe, in Unisphere select Settings > More Configuration > Advanced Configuration. Select the appropriate I/O module and Ethernet port, and then set the MTU to 9000.

Jumbo frames may also need to be enabled on each network switch. Consult your switch configuration guide for instructions.
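The frame-count argument above is easy to quantify: for a given transfer size, moving the MTU from 1500 to 9000 bytes cuts the number of frames, and therefore the per-frame processing, by roughly a factor of six. The 9 MB transfer below is an arbitrary example.

```python
import math

def frames_needed(payload_bytes, mtu):
    """Number of Ethernet frames needed to carry a payload at a given MTU."""
    return math.ceil(payload_bytes / mtu)

transfer = 9_000_000  # a 9 MB transfer, for illustration
standard = frames_needed(transfer, 1500)  # 6000 frames at the standard MTU
jumbo = frames_needed(transfer, 9000)     # 1000 frames with jumbo frames
print(standard / jumbo)  # 6.0 -> about 6x fewer frames to process
```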
Connect VMware datastores

Connect the datastores configured in Prepare and configure storage array to the appropriate ESXi servers. These include the datastores configured for:

- Virtual server storage
- Infrastructure virtual machine storage (if required)
- SQL Server storage (if required)

The vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. Refer to the list of documents in Appendix C.

The EMC VAAI plug-in must be installed after VMware vCenter has been deployed, as described in Install and configure VMware vCenter Server.
Plan virtual machine memory allocations

Server capacity is required in the solution for two purposes:

- To support the new virtualized server infrastructure
- To support the required infrastructure services, such as authentication/authorization, DNS, and databases

For information on minimum infrastructure requirements, refer to Table 2. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.
Memory configuration

Proper sizing and configuration of the solution requires careful server memory allocation. The following sections provide general guidance on memory allocation for the virtual servers, factoring in vSphere overhead and the virtual machine configuration. We begin with an overview of how memory is managed in a VMware environment.

ESX/ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources, such as memory, in order to provide resource isolation across multiple virtual machines while avoiding resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, this abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself via a feature known as shadow page tables.

vSphere employs the following memory management techniques:

- Memory overcommitment: allocation of memory resources greater than those physically available to the virtual machines.

- Transparent page sharing: identical memory pages shared across virtual machines are merged, and duplicate pages are returned to the host free memory pool for reuse.

- Memory compression: ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compression cache located in main memory.

- Memory ballooning: host resource exhaustion can be relieved by requesting that free pages be released from the virtual machine to the host for reuse.

- Hypervisor swapping: as a last resort, the host forces arbitrary virtual machine pages out to disk.

Additional information is available at:
http://www.vmware.com/files/pdf/mem_mgmt_perf_vsphere5.pdf
Virtual machine memory concepts

Figure 23 shows the memory settings parameters within the virtual machine.

Figure 23. Virtual machine memory settings

- Configured memory: physical memory allocated to the virtual machine at the time of creation.

- Reserved memory: memory that is guaranteed to the virtual machine.

- Touched memory: memory that is active or in use by the virtual machine.

- Swappable: memory that can be de-allocated from the virtual machine, via ballooning, compression, or swapping, if the host is under memory pressure from other virtual machines.

The following are the recommended best practices:

- Do not disable the default memory reclamation techniques. These lightweight processes enable flexibility with minimal impact to workloads.

- Intelligently size the memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources. Overcommitting can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping is encountered, virtual machine performance is likely to be adversely affected. Having performance baselines of your virtual machine workloads assists in this process.
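As a sketch of the sizing guidance above, assuming no memory overcommitment and the 2 GB per-host hypervisor allowance used elsewhere in this document; the per-host virtual machine count is a hypothetical example.

```python
# Conservative host memory sizing: give each virtual machine its full
# configured memory (no overcommitment) and reserve 2 GB per host for
# the hypervisor, matching the allowance used in this document.
def required_host_memory_gb(vm_memory_gb, vms_on_host, hypervisor_gb=2):
    return vms_on_host * vm_memory_gb + hypervisor_gb

# e.g. 25 reference virtual machines (2 GB each) on one of four hosts
print(required_host_memory_gb(2, 25))  # 52
```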
Install and configure SQL Server database

Overview

This section and Table 22 describe how to set up and configure a SQL Server database for the solution. At the end of this section, you will have Microsoft SQL Server installed on a virtual machine, with the databases required by VMware vCenter configured for use.

Table 22. Tasks for SQL Server database setup

Task                 Description                              Reference
Create a virtual     Create a virtual machine to host SQL     http://msdn.microsoft.com
machine for          Server. Verify that the virtual server
Microsoft SQL        meets the hardware and software
Server               requirements.
Install Microsoft    Install Microsoft Windows Server 2012    http://technet.microsoft.com
Windows on the       Standard Edition on the virtual
virtual machine      machine created to host SQL Server.
Install Microsoft    Install Microsoft SQL Server on the      http://technet.microsoft.com
SQL Server           virtual machine designated for that
                     purpose.
Configure database   Create the database required for the     Preparing vCenter Server
for VMware vCenter   vCenter server on the appropriate        Databases
                     datastore.
Configure database   Create the database required for         Preparing the Update Manager
for VMware Update    Update Manager on the appropriate        Database
Manager              datastore.
Deploy the VNX VAAI  Using VMware Update Manager, deploy      EMC VNX VAAI NFS Plug-in
for NFS plug-in      the VNX VAAI for NFS plug-in to all      Installation HOWTO video,
(NFS implementation  ESXi hosts.                              available on www.youtube.com
only)                                                         vSphere Storage APIs for Array
                                                              Integration (VAAI) Plug-in
                                                              Installing and Administering
                                                              VMware vSphere Update Manager
                                                              Note: The VNX plug-in supports
                                                              the VNXe platform.
Create a virtual machine for Microsoft SQL Server

Create the virtual machine with enough computing resources on one of the ESXi servers designated for infrastructure virtual machines, and use the datastore designated for the shared infrastructure.

Note: The customer environment may already contain a SQL Server that is designated for this role. In that case, refer to Configure database for VMware vCenter.
Install Microsoft Windows on the virtual machine

The SQL Server instance must run on Microsoft Windows. Install Windows on the virtual machine, selecting the appropriate network, time, and authentication settings.

Install SQL Server

Install SQL Server on the virtual machine from the SQL Server installation media.

One of the installable components in the SQL Server installer is SQL Server Management Studio (SSMS). You can install this component on the SQL Server directly, as well as on an administrator's console. SSMS must be installed on at least one system.

In many implementations, you may want to store data files in locations other than the default path. To change the default path, right-click the server object in SSMS and select Database Properties. This action opens a properties interface from which you can change the default data and log directories for new databases created on the server.
Note
Configure
database for
VMware vCenter
For high availability, SQL Server can be installed in a Microsoft Failover
Cluster, or on a virtual machine protected by VMware vSphere High
Availability clustering. It is not recommended to combine these technologies.
To use VMware vCenter in this solution, create a database for the service to use. The
requirements and steps to configure the vCenter Server database correctly are
covered in Preparing vCenter Server Databases. Refer to the list of documents in
Appendix C.
Note: Do not use the Microsoft SQL Server Express-based database option for this solution.
It is a best practice to create individual login accounts for each service accessing a
database on SQL Server.
Configure database for VMware Update Manager
To use VMware Update Manager in this solution, create a database for the service to
use. The requirements and steps to configure the Update Manager database correctly
are covered in Preparing the Update Manager Database. Refer to the list of
documents in Appendix C. It is a best practice to create individual login accounts for
each service accessing a database on SQL Server. Consult your database
administrator for your organization’s policy.
Install and configure VMware vCenter Server
Overview
This section provides information on how to configure the VMware vCenter Server components. Table 23 describes the tasks that must be completed.
Note: The table shows all potentially relevant steps for vCenter configuration. Only the major steps are further described in this chapter.
Table 23. Tasks for vCenter configuration

Task | Description | Reference
Create the vCenter host virtual machine | Create a virtual machine to be used for the VMware vCenter Server | vSphere Virtual Machine Administration
Install the vCenter guest OS | Install Windows Server 2012 Standard Edition on the vCenter host virtual machine |
Update the virtual machine | Install VMware Tools, enable hardware acceleration, and allow remote console access | vSphere Virtual Machine Administration
Create vCenter ODBC connections | Create the 64-bit vCenter and 32-bit vCenter Update Manager ODBC connections | vSphere Installation and Setup; Installing and Administering VMware vSphere Update Manager
Install vCenter Server | Install vCenter Server software | vSphere Installation and Setup
Install vCenter Update Manager | Install vCenter Update Manager software | Installing and Administering VMware vSphere Update Manager
Create a virtual datacenter | Create a virtual datacenter | vCenter Server and Host Management
Apply vSphere license keys | Enter vSphere license keys in the vCenter licensing menu | vSphere Installation and Setup
Add ESXi hosts | Connect vCenter to ESXi hosts | vCenter Server and Host Management
Configure vSphere clustering | Create a vSphere cluster and move the ESXi hosts into it | vSphere Resource Management
Perform array ESXi host discovery | Perform ESXi host discovery within the Unisphere console | Use the Unisphere help function
Install the vCenter Update Manager plug-in | Install the vCenter Update Manager plug-in on the administration console | Installing and Administering VMware vSphere Update Manager
Create the vCenter host virtual machine
If the VMware vCenter Server is to be deployed as a virtual machine on an ESXi server
installed as part of this solution, connect directly to an Infrastructure ESXi server
using the vSphere Client. Create a virtual machine on the ESXi server with the
customer’s guest OS configuration, using the Infrastructure server datastore
presented from the storage array. The memory and processor requirements for the
vCenter Server are dependent on the number of ESXi hosts and virtual machines
being managed. The requirements are outlined in the vSphere Installation and Setup
Guide. Refer to the list of documents in Appendix C.
Install the vCenter guest OS
Install the guest OS on the vCenter host virtual machine. VMware recommends using
Windows Server 2012 Standard Edition. Refer to vSphere Installation and Setup
Guide to ensure that adequate space is available on the vCenter and vSphere Update
Manager installation drive. Refer to the list of documents in Appendix C.
Create vCenter ODBC connections
Before installing vCenter Server and vCenter Update Manager, you must create the
ODBC connections required for database communication. These ODBC connections
use SQL Server authentication for database authentication. Appendix B provides SQL
login information.
Refer to vSphere Installation and Setup and Installing and Administering VMware
vSphere Update Manager for instructions on how to create the necessary ODBC
connections. Refer to the list of documents in Appendix C.
Install vCenter Server
Install vCenter by using the VMware VIMSetup installation media. Use the customer-provided username, organization, and vCenter license key when installing vCenter.
Apply vSphere license keys
To perform license maintenance, log in to the vCenter Server and select the Administration > Licensing menu from the vSphere Client. Use the vCenter license console to enter the license keys for the ESXi hosts. The keys can then be applied to the ESXi hosts as they are imported into vCenter.
Deploy the VNX VAAI NFS plug-in (NFS implementation only)
The VAAI for NFS plug-in enables support for the vSphere 5.1 NFS primitives. These primitives offload specific storage-related tasks from the hypervisor to free resources for other operations. Additional information about the VAAI for NFS plug-in is available with the plug-in download, vSphere Storage APIs for Array Integration (VAAI) Plug-in. Refer to the list of documents in Appendix C.
Note: The same version of the plug-in supports both the VNX and VNXe platforms.
The VAAI for NFS plug-in is installed by using vSphere Update Manager. The process for distributing the plug-in is demonstrated in the EMC VNX VAAI NFS plug-in installation HOWTO video, available on the www.youtube.com website. To enable the plug-in after installation, you must reboot the ESXi server.
Install the EMC VSI plug-in
The VNX storage system can be integrated with VMware vCenter by using the EMC Virtual Storage Integrator (VSI) for VMware vSphere Unified Storage Management plug-in. This provides administrators with the ability to manage VNX storage tasks
from vCenter. After the plug-in is installed on the vSphere console, administrators can
use vCenter to:
- Create datastores on VNX and mount them on ESXi servers
- Extend datastores
- Perform a FAST/full clone of virtual machines

Summary
In this chapter, we presented the steps required to deploy and configure the various
aspects of the VSPEX solution, which included both the physical and logical
components. At this point, you should have a fully functional VSPEX solution. The
following chapter covers post-installation and validation activities.
Chapter 6
Validating the Solution
This chapter presents the following topics:
Overview ................................................................................................... 96
Post-install checklist.................................................................................. 97
Deploy and test a single virtual server ........................................................ 97
Verify the redundancy of the solution components ..................................... 97
Overview
This chapter provides a list of items that should be reviewed once the solution has
been configured. The goal of this chapter is to verify the configuration and
functionality of specific aspects of the solution, and ensure that the configuration
supports core availability requirements.
Table 24 describes the tasks that must be completed.
Table 24. Tasks for vCenter configuration validation

Task | Description | Reference
Post-install checklist | Verify that adequate virtual ports exist on each vSphere host virtual switch. | vSphere Networking
Post-install checklist | Verify that each vSphere host has access to the required datastores and VLANs. | vSphere Storage Guide; vSphere Networking
Post-install checklist | Verify that the vMotion interfaces are configured correctly on all vSphere hosts. | vSphere Networking
Deploy and test a single virtual server | Deploy a single virtual machine using the vSphere interface. | vCenter Server and Host Management; vSphere Virtual Machine Management
Verify redundancy of the solution components | Perform a reboot of each storage processor in turn, and ensure that LUN connectivity is maintained. | Steps shown below
Verify redundancy of the solution components | Disable each of the redundant switches in turn and verify that the vSphere host, virtual machine, and storage array connectivity remains intact. | Reference the vendor's documentation
Verify redundancy of the solution components | On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host. | vCenter Server and Host Management
Post-install checklist
The following configuration items are critical to the functionality of the solution, and
should be verified prior to deployment into production.
- On each vSphere server, verify that the vSwitch that hosts the client VLANs has been configured with sufficient ports to accommodate the maximum number of virtual machines it may host.
- On each vSphere server used as part of this solution, verify that all required virtual machine port groups have been configured and that each server has access to the required VMware datastores.
- On each vSphere server used in the solution, verify that an interface is configured correctly for vMotion using the material in the vSphere Networking guide. Refer to the list of documents in Appendix C of this document for more information.
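The checklist above can also be captured as a small script so the verification is repeatable. The sketch below is purely illustrative: the inventory structure, host names, datastore names, and port counts are assumptions, not part of the VSPEX specification, and in practice the recorded values would be gathered from vCenter (for example, through the vSphere API).

```python
# Hypothetical post-install checks. All names and values below are
# illustrative assumptions; gather real values from vCenter before use.

REQUIRED_DATASTORES = {"infra_ds", "vm_ds1"}  # assumed datastore names

def check_host(name, host):
    """Return a list of checklist failures for one vSphere host.

    `host` is a dict describing the recorded configuration:
    vswitch_ports, max_vms, datastores (set), vmotion_interface (bool).
    """
    failures = []
    # Sufficient vSwitch ports for the maximum number of VMs the host may run.
    if host["vswitch_ports"] < host["max_vms"]:
        failures.append(f"{name}: vSwitch has only {host['vswitch_ports']} ports "
                        f"for up to {host['max_vms']} virtual machines")
    # Access to every required datastore.
    missing = REQUIRED_DATASTORES - host["datastores"]
    if missing:
        failures.append(f"{name}: missing datastores {sorted(missing)}")
    # A configured vMotion interface.
    if not host["vmotion_interface"]:
        failures.append(f"{name}: no vMotion interface configured")
    return failures

# Example inventory for two assumed hosts.
inventory = {
    "esxi-host-1": {"vswitch_ports": 256, "max_vms": 50,
                    "datastores": {"infra_ds", "vm_ds1"},
                    "vmotion_interface": True},
    "esxi-host-2": {"vswitch_ports": 24, "max_vms": 50,
                    "datastores": {"infra_ds"},
                    "vmotion_interface": False},
}

for host_name, host_config in inventory.items():
    for failure in check_host(host_name, host_config):
        print(failure)
```

An empty failure list for every host indicates the checklist items above are satisfied for the recorded configuration.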
Deploy and test a single virtual server
To verify the operation of the solution, deploy a virtual machine and confirm that the procedure completes as expected. Verify that the virtual machine has been joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.
Verify the redundancy of the solution components
To ensure that the various components of the solution maintain availability
requirements, it is important to test specific scenarios related to maintenance or
hardware failure.
1. Perform a reboot of each VNXe storage processor in turn and verify that connectivity to VMware datastores is maintained throughout each reboot. Use these steps:
   a. In Unisphere, navigate to Settings > Service System.
   b. In the System Components pane, select Storage Processor SPA.
   c. In the Service Actions pane, select Reboot.
   d. Click Execute service action.
   e. During the reboot cycle, check for the presence of datastores on the ESXi hosts.
   f. Wait until the SP has finished rebooting and shows up as available within Unisphere.
   g. Repeat steps b to e for Storage Processor B.
2. To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each switching infrastructure is disabled, verify that all the components of the solution maintain connectivity to each other and to any existing client infrastructure.
3. On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.
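Step 1e, checking datastore visibility during each SP reboot, can be automated with a simple polling loop. The sketch below is an illustration only: the `check_fn` callable is an assumed hook that returns the set of datastores currently visible on a host (for example, parsed from `esxcli storage filesystem list` output or gathered through the vSphere API).

```python
import time

def watch_datastores(check_fn, expected, duration_s, interval_s=5.0,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll datastore visibility for the duration of an SP reboot.

    check_fn() returns the set of datastore names currently visible;
    `expected` is the set that must stay visible throughout.
    Returns a list of (seconds_since_start, missing_datastores)
    observations, which is empty if connectivity was maintained.
    """
    gaps = []
    start = clock()
    while clock() - start < duration_s:
        missing = set(expected) - set(check_fn())
        if missing:
            gaps.append((clock() - start, sorted(missing)))
        sleep(interval_s)
    return gaps
```

An empty result means every poll saw all expected datastores; any entries pinpoint when connectivity was lost and which datastores disappeared.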
Appendix A
Bill of Materials
This appendix presents the following topic:
Bill of materials ....................................................................................... 100
Bill of materials
Table 25. List of components used in the VSPEX solution for 50 virtual machines

Component | Solution for 50 virtual machines
VMware vSphere servers | CPU: 1 x vCPU per virtual machine; 4 x vCPUs per physical core; 50 x vCPUs; minimum of 13 physical CPUs
VMware vSphere servers | Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per vSphere host; minimum of 100 GB RAM
VMware vSphere servers | Network (10 Gb): 2 x 10 GbE NICs per server

Note: To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure | Common: 2 x physical switches; 1 x 1 GbE port per storage processor for management
Network infrastructure | 10 Gb network: 2 x 10 GbE ports per vSphere server; 2 x 10 GbE ports per storage processor for data
EMC Next-Generation Backup | Data Domain: NetWorker; 3 x DD160 Factory
EMC VNXe series storage array | Common: EMC VNXe3150; 2 x storage processors (active/active)
EMC VNXe series storage array | iSCSI storage option: 45 x 300 GB 15k RPM 3.5-inch SAS disks; 2 x 300 GB 15k RPM 3.5-inch SAS disks as hot spares
EMC VNXe series storage array | NFS storage option: 30 x 600 GB 15k RPM 3.5-inch SAS disks; 1 x 600 GB 15k RPM 3.5-inch SAS disk as a hot spare
EMC VNXe series storage array | 10 Gb network: 1 x 10 Gb I/O module for each storage processor (each module includes two ports)
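The CPU and memory minimums in the bill of materials follow directly from the stated ratios: 1 vCPU per virtual machine, 4 vCPUs per physical core, and 2 GB RAM per virtual machine. The arithmetic can be sketched as follows; the function and constant names are our own illustration, not part of the VSPEX specification:

```python
import math

# Ratios stated in the bill of materials.
VCPUS_PER_VM = 1
VCPUS_PER_CORE = 4
RAM_PER_VM_GB = 2

def size_compute(n_vms):
    """Return (minimum physical CPUs, minimum RAM in GB) for n_vms VMs."""
    vcpus = n_vms * VCPUS_PER_VM
    cores = math.ceil(vcpus / VCPUS_PER_CORE)  # round up partial cores
    ram_gb = n_vms * RAM_PER_VM_GB             # excludes per-host reservations
    return cores, ram_gb

print(size_compute(50))   # 50-VM solution:  (13, 100)
print(size_compute(100))  # 100-VM solution: (25, 200)
```

This reproduces the minimums of 13 and 25 physical CPUs and 100 GB and 200 GB of RAM listed in Tables 25 and 26; the 2 GB per-host RAM reservation is additional.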
Table 26. List of components used in the VSPEX solution for 100 virtual machines

Component | Solution for 100 virtual machines
VMware vSphere servers | CPU: 1 x vCPU per virtual machine; 4 x vCPUs per physical core; 100 x vCPUs; minimum of 25 physical CPUs
VMware vSphere servers | Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per vSphere host; minimum of 200 GB RAM
VMware vSphere servers | Network (10 Gb): 2 x 10 GbE NICs per server

Note: To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure | Common: 2 x physical switches; 1 x 1 GbE port per storage processor for management
Network infrastructure | 10 Gb network: 2 x 10 GbE ports per vSphere server; 2 x 10 GbE ports per storage processor for data
EMC Next-Generation Backup | Avamar: Avamar Business Edition
EMC VNXe series storage array | Common: EMC VNXe3300; 2 x storage processors (active/active)
EMC VNXe series storage array | iSCSI storage option: 77 x 300 GB 15k RPM 3.5-inch SAS disks; 3 x 300 GB 15k RPM 3.5-inch SAS disks as hot spares
EMC VNXe series storage array | NFS storage option: 63 x 600 GB 15k RPM 3.5-inch SAS disks; 3 x 600 GB 15k RPM 3.5-inch SAS disks as hot spares
EMC VNXe series storage array | 10 Gb network: 2 x 10 Gb I/O modules for each storage processor (each module includes two ports)
Appendix B
Customer Configuration Data Sheet
This appendix presents the following topic:
Customer configuration data sheets ......................................................... 104
Customer configuration data sheets
Before you start the VSPEX configuration, gather the customer-specific network and host configuration information. The following tables provide a template for recording the required network and host address, numbering, and naming information. The completed worksheet can also be used as a "leave behind" document for future reference.
Cross-reference the VNXe Series Configuration Worksheet to confirm customer information.
Table 27. Common server information

Server name | Purpose | Primary IP
 | Domain Controller |
 | DNS Primary |
 | DNS Secondary |
 | DHCP |
 | NTP |
 | SMTP |
 | SNMP |
 | vCenter Console |
 | SQL Server |
Table 28. ESXi server information

Server name | Purpose | Primary IP | Private net (storage) addresses | VMkernel IP | vMotion IP
 | ESXi Host 1 | | | |
 | ESXi Host 2 | | | |
 | … | | | |
Table 29. Array information

Array name |
Admin account |
Management IP |
Storage pool name |
Datastore name |
NFS server or iSCSI target IP |
Table 30. Network infrastructure information

Name | Purpose | IP | Subnet mask | Default gateway | VLAN ID | Allowed subnets
Ethernet Switch 1 | | | | | |
Ethernet Switch 2 | | | | | |
… | | | | | |
Table 31. VLAN information

Name | Network purpose
 | Virtual machine networking
 | ESXi management
 | NFS or iSCSI storage network
 | vMotion
Table 32. Service accounts

Account | Purpose | Password (optional, secure appropriately)
 | Windows Server administrator |
root | ESXi root |
 | Array administrator |
 | vCenter administrator |
 | SQL Server administrator |
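Once the worksheet is filled in, it can be convenient to verify programmatically that nothing was missed before configuration begins. The sketch below is illustrative only: the field names mirror the tables above, but the dictionary structure and sample values are assumptions, not real customer data.

```python
# Hypothetical completeness check for the customer configuration data sheet.
# Sample values are placeholders, not real customer data.

worksheet = {
    "Domain Controller": "10.0.0.10",
    "DNS Primary": "10.0.0.11",
    "DNS Secondary": "",       # still to be gathered
    "NTP": "10.0.0.12",
    "vCenter Console": "",     # still to be gathered
    "SQL Server": "10.0.0.20",
}

def missing_fields(sheet):
    """Return the worksheet entries that still need customer input."""
    return sorted(name for name, value in sheet.items() if not value.strip())

print(missing_fields(worksheet))  # fields to chase before starting
```

Any names returned identify worksheet rows that must be completed with the customer before the VSPEX configuration starts.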
Appendix C
References
This appendix presents the following topic:
References .............................................................................................. 108
References
EMC documentation
The following documents, located on the EMC Online Support website, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.
- VNXe3150 System Installation Guide
- VNXe3300 System Installation Guide
- VSI for VMware vSphere: Storage Viewer—Product Guide
- XtremSW Cache Installation and Administration Guide v1.5
- EMC VSI for VMware vSphere: Unified Storage Management—Product Guide
- VNXe Series Configuration Worksheet

Other documentation
The following documents, located on the VMware website, provide additional and relevant information:
- vSphere Installation and Setup Guide
- vSphere Networking
- vSphere Storage Guide
- Preparing vCenter Server Databases
- Preparing the Update Manager Database
- vSphere Virtual Machine Administration
- vSphere Virtual Machine Management
- Installing and Administering VMware vSphere Update Manager
- vCenter Server and Host Management
- vSphere Resource Management
- vSphere Storage APIs for Array Integration (VAAI) Plug-in
For documentation on Microsoft SQL Server, refer to the Microsoft websites:
- http://www.microsoft.com
- http://technet.microsoft.com
- http://msdn.microsoft.com
Appendix D
About VSPEX
This appendix presents the following topic:
About VSPEX ........................................................................................... 110
About VSPEX
EMC has joined forces with industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates the deployment of cloud-based infrastructures. Built with best-of-breed technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk.
Validation by EMC ensures predictable performance and enables customers to select
technology that leverages their existing IT infrastructure while significantly reducing
planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure
for customers looking to gain simplicity that is characteristic of truly converged
infrastructures while at the same time gaining more choice in individual components.
VSPEX solutions are proven by EMC, and packaged and sold exclusively by EMC channel partners. VSPEX provides channel partners with more opportunities, faster sales cycles, and end-to-end enablement. By working even more closely together, EMC and its channel partners can now deliver infrastructure that accelerates the journey to the cloud for even more customers.