VCE Vblock® and VxBlock™ Systems 740 Architecture Overview
www.vce.com
Document revision 1.7
September 2016
Revision history
Date | Document revision | Description of changes
September 2016 | 1.7 | Added support for AMP-2S and AMP enhancements. Added support for the Cisco MDS 9396S Multilayer Fabric Switch.
August 2016 | 1.6 | Added support for EMC embedded management (eMGMT).
July 2016 | 1.5 | Updated the EMC VMAX configuration information to indicate support for a single engine.
April 2016 | 1.4 | Removed physical planning information from this book and moved it to the VCE Systems Physical Planning Guide. Added the EMC VMAX All Flash option. Added support for the Cisco Nexus 3172TQ Switch.
October 2015 | 1.3 | Added support for vSphere 6.0 with Cisco Nexus 1000V switches.
August 2015 | 1.2 | Added support for VxBlock Systems. Added support for vSphere 6.0 with VDS.
February 2015 | 1.1 | Updated Intelligent Physical Infrastructure appliance information.
December 2014 | 1.0 | Initial release.
© 2014-2016 VCE Company, LLC.
All Rights Reserved.
Contents
Introduction................................................................................................................................................. 5
System overview.........................................................................................................................................6
System architecture and components.................................................................................................... 6
Base configurations and scaling.............................................................................................................8
Connectivity overview...........................................................................................................................10
Network topology........................................................................................................................... 11
Compute layer overview...........................................................................................................................13
Compute overview................................................................................................................................13
Cisco UCS............................................................................................................................................13
Cisco UCS fabric interconnects............................................................................................................13
Cisco Trusted Platform Module............................................................................................................ 14
Scaling up compute resources............................................................................................................. 14
VCE bare metal support policy.............................................................................................................15
Disjoint layer 2 configuration................................................................................................................ 16
Storage layer overview.............................................................................................................................18
Storage layer hardware........................................................................................................................ 18
EMC VMAX3 storage arrays................................................................................................................ 19
EMC VMAX 400K storage arrays.................................................................................................. 21
EMC VMAX 200K storage arrays.................................................................................................. 22
EMC VMAX 100K storage arrays.................................................................................................. 22
EMC VMAX All Flash storage arrays............................................................................................. 22
Network layer overview............................................................................................................................24
Network layer hardware....................................................................................................................... 24
Port utilization.......................................................................................................................................25
Cisco Nexus 5548UP Switch......................................................................................................... 26
Cisco Nexus 5596UP Switch......................................................................................................... 26
Cisco Nexus 9396PX Switch......................................................................................................... 27
Cisco MDS 9706 Multilayer Director and Cisco MDS 9148S Multilayer Fabric Switch segregated networking...................................................................................................................27
Cisco Nexus 3172TQ Switch - management networking...............................................................28
Cisco MDS 9396S Multilayer Fabric Switch...................................................................................29
Cisco Nexus 3064-T Switch - management networking................................................................ 30
Virtualization layer overview....................................................................................................................32
Virtualization components.................................................................................................................... 32
VMware vSphere Hypervisor ESXi.......................................................................................................32
VMware vCenter Server....................................................................................................................... 33
Management..............................................................................................................................................36
Management components overview.....................................................................................................36
Management hardware components....................................................................................................36
Management software components..................................................................................................... 38
Management network connectivity....................................................................................................... 39
Sample configurations............................................................................................................................. 45
Sample VCE Systems with EMC VMAX 400K..................................................................................... 45
Sample VCE Systems with EMC VMAX 200K..................................................................................... 48
Sample VCE Systems with EMC VMAX 100K..................................................................................... 52
Additional references............................................................................................................................... 57
Virtualization components.................................................................................................................... 57
Compute components.......................................................................................................................... 57
Network components............................................................................................................................58
Storage components............................................................................................................................ 59
Introduction
This document describes the high-level design of the VCE™ System and the hardware and software
components.
In this document, the Vblock System and VxBlock System are referred to as VCE Systems.
The VCE Glossary provides terms, definitions, and acronyms that are related to VCE.
System overview
System architecture and components
VCE™ Systems are modular platforms with defined scale points that meet the higher performance and
availability requirements of an enterprise's business-critical applications. They are designed for
deployments involving large numbers of VMs and users.
Refer to the VCE Systems Physical Planning Guide for information about cabinets and their components,
the Intelligent Physical Infrastructure solution, and environmental, security, power, and thermal
management.
VCE Systems provide the following features:
• Delivers a multi-controller, scale-out architecture with consolidation and efficiency for the enterprise
• Allows scaling of resources through common and fully redundant building blocks
• Uses a SAN storage medium
• Offers optional local boot disks, available only for bare metal blades
The following table lists the components in the VCE Systems architecture:

Component | VCE Systems
Cisco B-series blade chassis | 64 chassis maximum (16 per Cisco UCS domain, to a maximum of four Cisco UCS domains)
Cisco B-series blades (maximum) | Half-width = 512; full-width = 256; double-height = 128
Back-end buses | Two SAS loops per engine
Datastore type | VMFS
Boot path | SAN
Disk drives (maximum) | EMC VMAX 400K = 5760; EMC VMAX 200K = 2880; EMC VMAX 100K = 1440; EMC VMAX All Flash 850F and 850FX = 1920; EMC VMAX All Flash 450F and 450FX = 960
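The blade maximums above follow from the chassis limit multiplied by per-chassis slot counts. As a purely illustrative cross-check (the slot counts per form factor, eight half-width, four full-width, or two double-height blades per Cisco UCS 5108 chassis, are assumptions consistent with the table rather than values stated in it):

```python
# Assumption (not stated in the table above): a Cisco UCS 5108 chassis
# holds 8 half-width, 4 full-width, or 2 double-height blades.
SLOTS_PER_CHASSIS = {"half-width": 8, "full-width": 4, "double-height": 2}

MAX_CHASSIS = 64  # 16 chassis per Cisco UCS domain x 4 domains

def max_blades(form_factor: str) -> int:
    """Maximum blades of one form factor across a fully populated system."""
    return MAX_CHASSIS * SLOTS_PER_CHASSIS[form_factor]

print(max_blades("half-width"))     # 512
print(max_blades("full-width"))     # 256
print(max_blades("double-height"))  # 128
```

The computed values match the maximums listed in the table.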
VCE Systems contain the following key hardware and software components:

VCE System management:
• VCE Vision™ Intelligent Operations System Library
• VCE Vision™ Intelligent Operations Plug-in for vCenter
• VCE Vision™ Intelligent Operations Compliance Checker
• VCE Vision™ Intelligent Operations API for System Library
• VCE Vision™ Intelligent Operations API for Compliance Checker

Virtualization and management:
• VMware vSphere Server Enterprise Plus
• VMware vSphere ESXi
• VMware vCenter Server
• VMware vSphere Web Client
• VMware Single Sign-On (SSO) Service
• Cisco UCS C240 Servers for AMP-2
• EMC PowerPath/VE
• Cisco UCS Manager
• EMC Unisphere for VMAX
• EMC Secure Remote Support (ESRS)
• EMC PowerPath Electronic License Manager Server (ELMS)
• Cisco Data Center Network Manager (DCNM) for SAN

Compute:
• Cisco UCS 5108 Blade Server Chassis
• Cisco UCS B-Series M3 Blade Servers with Cisco UCS VIC 1240, optional port expander, or Cisco UCS VIC 1280
• Cisco UCS B-Series M4 Blade Servers with Cisco UCS VIC 1340, optional port expander, or Cisco UCS VIC 1380
• Cisco UCS 2204XP Fabric Extenders or Cisco UCS 2208XP Fabric Extenders
• Cisco UCS 6248UP Fabric Interconnects or Cisco UCS 6296UP Fabric Interconnects

Network:
• Cisco Nexus 5548UP Switches, Cisco Nexus 5596UP Switches, or Cisco Nexus 9396PX Switches
• Cisco MDS 9148S or Cisco MDS 9396S 16G Multilayer Fabric Switches, or Cisco MDS 9706 Multilayer Directors
• Cisco Nexus 3064-T Switches or Cisco Nexus 3172TQ Switches
• Optional VMware NSX Virtual Networking for VxBlock Systems
• Optional Cisco Nexus 1000V Series Switches
• Optional VMware vSphere Distributed Switch (VDS) for VxBlock Systems

Storage:
• EMC VMAX 400K
• EMC VMAX 200K
• EMC VMAX 100K
• EMC VMAX All Flash 850F and 850FX
• EMC VMAX All Flash 450F and 450FX
The VCE Release Certification Matrix provides a list of the certified versions of components for VCE
Systems. For information about VCE System management, refer to the VCE Vision™ Intelligent
Operations Technical Overview.
Base configurations and scaling
In VCE™ Systems, the base configuration is a minimum set of compute and storage components, as well as fixed network resources.
These components are integrated in one or more 28-inch 42U cabinets. In the base configuration, you
can customize the following hardware aspects:
Compute blades: Cisco UCS B-Series blade type, including all VCE supported M3/M4 blade configurations.

Compute chassis:
• Up to sixteen Cisco UCS server chassis per Cisco UCS domain
• Up to four Cisco UCS domains (four pairs of fabric interconnects):
  - Supports up to 128 half-width Cisco UCS blade servers per domain
  - Supports up to 64 full-width Cisco UCS blade servers per domain
  - Supports up to 32 double-height Cisco UCS blade servers per domain

Edge servers (with optional VMware NSX): four to six Cisco UCS B-Series Blade Servers, including the B200 M4 with VIC 1340 and VIC 1380. For more information, see the VCE VxBlock™ Systems for VMware NSX Architecture Overview.

Storage: supports 2½ inch drives, 3½ inch drives, and a mix of both (EMC VMAX 400K, EMC VMAX 200K, and EMC VMAX 100K only):
• EMC VMAX 400K
  - Contains one to eight engines
  - Contains a maximum of 256 front-end ports
  - Supports 10 to 5760 drives
• EMC VMAX 200K
  - Contains one to four engines
  - Contains a maximum of 128 front-end ports
  - Supports 10 to 2880 drives
• EMC VMAX 100K
  - Contains one or two engines
  - Contains a maximum of 64 front-end ports
  - Supports 10 to 1440 drives

Supports 2½ inch drives (EMC VMAX All Flash models only):
• EMC VMAX All Flash 850F and 850FX
  - Contains one to eight engines
  - Contains a maximum of 192 front-end ports
  - Supports 17 to 1920 drives
• EMC VMAX All Flash 450F and 450FX
  - Contains one to four engines
  - Contains a maximum of 96 front-end ports
  - Supports 17 to 960 drives

Storage policies: policy levels are applied at the storage group level of array masking.
Array storage is organized by the following service level objectives (EMC VMAX 400K, EMC VMAX 200K, and EMC VMAX 100K only):
• Optimized (default): system optimized
• Bronze: 12 millisecond (ms) response time, emulating 7.2K RPM drive performance
• Silver: 8 ms response time, emulating 10K RPM drives
• Gold: 5 ms response time, emulating 15K RPM drives
• Platinum: 3 ms response time, emulating 15K RPM drives and enterprise flash drives (EFD)
• Diamond: <1 ms response time, emulating EFD
Array storage is organized by the following service level objective (EMC VMAX All Flash models only):
• Diamond: <1 ms response time, emulating EFD

Supported disk drives:
EMC VMAX 400K, EMC VMAX 200K, and EMC VMAX 100K only:
• Tier 1 drives: solid state, 200/400/800/1600 GB
• Tier 2 drives: 15K RPM, 300 GB; 10K RPM, 300/600/1200 GB
• Tier 3 drives: 7.2K RPM, 2/4 TB
EMC VMAX All Flash models only:
• Tier 1 drives: solid state, 960/1920/3840 GB

Management hardware options: AMP-2 is available in multiple configurations that use their own resources to run workloads without consuming resources on the VCE System.
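The service level objectives above amount to a lookup from SLO name to response-time target. The following sketch is illustrative only; the targets come from the list above, and the helper function is a hypothetical convenience, not part of any EMC tooling:

```python
# Service level objectives (SLOs) for EMC VMAX 400K/200K/100K arrays,
# with expected maximum response time in milliseconds (None for the
# system-optimized default, which has no fixed target).
SLO_RESPONSE_MS = {
    "Optimized": None,  # default, system optimized
    "Bronze": 12.0,     # emulates 7.2K RPM drive performance
    "Silver": 8.0,      # emulates 10K RPM drives
    "Gold": 5.0,        # emulates 15K RPM drives
    "Platinum": 3.0,    # emulates 15K RPM drives and EFD
    "Diamond": 1.0,     # <1 ms, emulates EFD (only SLO on All Flash models)
}

def slowest_slo_meeting(target_ms: float) -> str:
    """Return the least aggressive named SLO whose target meets target_ms."""
    candidates = {name: ms for name, ms in SLO_RESPONSE_MS.items()
                  if ms is not None and ms <= target_ms}
    if not candidates:
        raise ValueError("no named SLO meets this target")
    return max(candidates, key=candidates.get)

print(slowest_slo_meeting(6.0))  # Gold
```

For example, an application requiring at most 6 ms response time would fit the Gold SLO; a tighter 2 ms requirement would need Diamond.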
Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the
compute and storage arrays in VCE Systems. All components have N+N or N+1 redundancy.
These resources can be scaled up as necessary to meet increasingly stringent requirements. The
maximum supported configuration differs based on core components. To scale up compute resources,
add blade packs and chassis.
Optionally, add expansion cabinets with additional resources.
Connectivity overview
Components and interconnectivity in VCE™ Systems are broken down into compute, storage, and
network layers.
The following describes each layer:

Compute: contains the following compute components:
• Cisco UCS server chassis
• Cisco UCS blade servers
• Cisco UCS fabric interconnects

Storage: contains the following storage array components:
• EMC VMAX 400K
• EMC VMAX 200K
• EMC VMAX 100K
• EMC VMAX All Flash 850F and 850FX
• EMC VMAX All Flash 450F and 450FX

Network: contains the following components to provide switching and routing between the compute and storage layers in a VCE System, and between the VCE System and the external network:
• Cisco MDS switches
• Cisco Nexus switches
All components incorporate redundancy into the design.
Related information
Compute overview (see page 13)
Storage layer hardware (see page 18)
Network layer hardware (see page 24)
Network topology
In the network topology for VCE Systems, LAN and SAN connectivity is segregated into separate Cisco
Nexus switches.
LAN switching uses the Cisco Nexus 9396PX Switch, the Cisco Nexus 5548UP Switch, or the Cisco Nexus 5596UP Switch. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, the Cisco MDS 9396S Multilayer Fabric Switch, or the Cisco MDS 9706 Multilayer Director.
Note: The optional VMware NSX feature uses the Cisco Nexus 9396PX switches for LAN switching. For
more information, see the VCE VxBlock™ Systems for VMware NSX Architecture Overview.
The compute layer connects to both the Ethernet and Fibre Channel (FC) components of the network
layer. Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network
through 10 GbE port channels and to the Cisco MDS switches through port channels made up of multiple
8 Gb links.
The front-end IO modules in the storage array connect to the Cisco MDS switches in the network layer
over 16 Gb FC links.
The following illustration shows a network block storage configuration for VCE Systems:
SAN boot storage configuration
VMware vSphere ESXi hosts always boot over the FC SAN from a 10 GB boot LUN, which contains the
hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage
can be presented as Virtual Machine File System (VMFS) data stores or as raw device mappings
(RDMs).
Compute layer
Compute overview
Cisco UCS B-Series Blades installed in the Cisco UCS chassis provide computing power in a VCE
System.
Fabric extenders (FEX) in the Cisco UCS chassis connect to Cisco fabric interconnects over converged
Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to the fabric
interconnects, regardless of the number of blades in the chassis. These connections carry IP and storage
traffic.
VCE uses multiple 8 Gb Fibre Channel (FC) ports on each fabric interconnect. These ports connect to Cisco MDS storage switches, and the connections carry FC traffic between the compute layer and the storage layer. These connections also enable SAN booting of the Cisco UCS blades.
Cisco UCS
The Cisco UCS data center platform unites compute, network, and storage access. Optimized for
virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb Ethernet unified network fabric with
enterprise-class, x86-based Cisco UCS B-Series Servers.
In a VCE System, each chassis also includes Cisco UCS fabric extenders and Cisco UCS B-Series
Converged Network Adapters.
VCE Systems powered by Cisco UCS offer the following features:
• Built-in redundancy for high availability
• Hot-swappable components for serviceability, upgrade, or expansion
• Fewer physical components than in a comparable system built piece by piece
• Reduced cabling
• Improved energy efficiency over traditional blade server chassis
Cisco UCS fabric interconnects
Cisco UCS 6248UP or Cisco UCS 6296UP Fabric Interconnects provide network connectivity and
management capabilities to the Cisco UCS blades and chassis.
Cisco UCS fabric interconnects offer line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions. Northbound, the fabric interconnects connect directly to Cisco network switches for Ethernet access into the customer network. They also connect directly to Cisco MDS switches for FC access to the attached VCE System storage. These connections are currently 8 Gb FC. EMC VMAX storage arrays have 16 Gb FC connections into the MDS switches.
The optional VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the port count needed for VMware NSX external connectivity (edges). For more information, refer to the VCE VxBlock™ Systems for VMware NSX Architecture Overview.
Cisco Trusted Platform Module
Cisco Trusted Platform Module (TPM) provides authentication and attestation services that provide safer
computing in all environments.
Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption
keys that are used to authenticate remote and local server sessions. Cisco TPM is available by default as
a component in the Cisco UCS B-Series blade servers, and is shipped disabled.
VCE supports only the Cisco TPM hardware. VCE does not support the Cisco TPM functionality. Because
making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant
experience in trusted computing, VCE defers to the software stack vendor for configuration and
operational considerations relating to the Cisco TPM.
Related information
www.cisco.com
Scaling up compute resources
This topic describes what you can add to VCE Systems to scale up compute resources.
Blade packs
To scale up compute resources, you can add blade packs and chassis activation kits when VCE Systems are built or after they are deployed. Cisco UCS blades are sold in packs of two identical blades.
The base configuration of VCE Systems includes two blade packs. The maximum number of blade packs
depends on the selected scale point.
Each blade type must have a minimum of two blade packs as a base configuration and can be increased
in single blade pack increments thereafter. Each blade pack is added along with license packs for the
following software:
• Cisco UCS Manager (UCSM)
• VMware vSphere ESXi
• Cisco Nexus 1000V Series Switch (Cisco Nexus 1000V Advanced Edition only)
• EMC PowerPath/VE
Note: License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switch, and EMC
PowerPath/VE are not available for bare metal blades.
Additional chassis
The power supplies and fabric extenders for all chassis are pre-populated and cabled, and all required Twinax cables and transceivers are populated. In base VCE Systems configurations, there is a minimum of two Cisco UCS 5108 Blade Server Chassis, and there are no unpopulated server chassis unless they are ordered that way.

As more blades are added and additional chassis are required, additional chassis are added automatically to an order. The kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered; this limited licensing reduces the entry cost for VCE Systems. Additional chassis can be added up front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.
VCE bare metal support policy
Because some applications cannot be virtualized for technical or commercial reasons, VCE™ Systems support bare metal deployments, such as non-virtualized operating systems and applications.

While it is possible for VCE Systems to support these workloads, due to the nature of bare metal deployments, VCE can provide only reasonable effort support for systems that comply with the following requirements:
• VCE Systems contain only VCE published, tested, and validated hardware and software components. The VCE Release Certification Matrix provides a list of the certified versions of components for VCE Systems.
• The operating systems used on bare metal deployments for compute and storage components must comply with the published hardware and software compatibility guides from Cisco and EMC.
• For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by VCE. VCE support is provided only on VMware hypervisors.
VCE reasonable effort support includes VCE acceptance of customer calls, a determination of whether a
VCE System is operating correctly, and assistance in problem resolution to the extent possible.
VCE is unable to reproduce problems or provide support on the operating systems and applications
installed on bare metal deployments. In addition, VCE does not provide updates to or test those operating
systems or applications. The OEM support vendor should be contacted directly for issues and patches
related to those operating systems and applications.
Disjoint Layer 2 configuration
In the Disjoint Layer 2 configuration, traffic is split between two or more different networks at the fabric
interconnect to support two or more discrete Ethernet clouds.
The Cisco UCS servers connect to two different clouds. Upstream Disjoint Layer 2 networks allow two or
more Ethernet clouds that never connect to be accessed by VMs located in the same Cisco UCS domain.
The following illustration provides an example implementation of Disjoint Layer 2 networking into a Cisco
UCS domain:
Virtual port channels (vPCs) 101 and 102 are production uplinks that connect to the network layer of the
VCE™ System. vPCs 105 and 106 are external uplinks that connect to other switches.
If you use Ethernet performance port channels (103 and 104 by default), port channels 101 through 104
are assigned to the same VLANs.
Disjoint Layer 2 network connectivity can also be configured with an individual uplink on each fabric
interconnect.
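The defining constraint of a Disjoint Layer 2 design is that the Ethernet clouds never connect, so a VLAN may be carried on only one upstream network per fabric interconnect. The following sketch checks that constraint; the uplink group names and VLAN IDs are hypothetical, and only the vPC IDs 101/102 and 105/106 come from the example above:

```python
# Disjoint Layer 2 sanity check: the VLAN sets assigned to disjoint
# uplink groups must not overlap, since each VLAN may belong to only
# one upstream network. VLAN IDs below are illustrative placeholders.
uplink_vlans = {
    "vpc-101-102-production": {100, 101, 102},  # production uplinks
    "vpc-105-106-external": {200, 201},         # external uplinks
}

def overlapping_vlans(assignments: dict) -> set:
    """Return VLAN IDs that appear in more than one uplink group."""
    seen, dupes = set(), set()
    for vlans in assignments.values():
        dupes |= seen & vlans  # VLANs already claimed by another group
        seen |= vlans
    return dupes

assert overlapping_vlans(uplink_vlans) == set()  # valid disjoint config
```

A non-empty result would indicate a VLAN pinned to two upstream networks, which the fabric interconnect cannot honor in a disjoint configuration.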
Storage layer
Storage layer hardware
EMC VMAX3 storage arrays are high-end storage systems built for the virtual data center.
Architected for reliability, availability, and scalability, these storage arrays use specialized engines, each
of which includes two redundant director modules providing parallel access and replicated copies of all
critical data.
EMC VMAX3 offers the following software components:

• EMC Symmetrix Virtual Provisioning (Virtual Pools): Virtual Provisioning, based on thin provisioning, presents an application with more capacity than is physically allocated in the storage array. The physical storage is then allocated to the application on demand as it is needed from the shared pool of capacity. Each disk group in the EMC VMAX3 (all-inclusive) is carved into a separate virtual pool.
• EMC Symmetrix Storage Resource Pool (SRP): An SRP is a collection of virtual pools that make up an EMC FAST domain. A virtual pool can be included in only one SRP. Each EMC VMAX initially contains a single SRP that contains all virtual pools in the array.
• EMC Symmetrix Fully Automated Storage Tiering for Virtual Pools (FAST VP): EMC Symmetrix FAST migrates sub-LUN chunks of data between the various virtual pools in the SRP. Tiering is automatically optimized by dynamically allocating and relocating application workloads based on the defined service level objective (SLO).
• EMC Symmetrix SLO: An SLO defines the ideal performance operating range of an application. Each SLO contains an expected maximum response time range. An SLO uses multiple virtual pools in the SRP to achieve its response time objective. SLOs are predefined with the array and are not customizable.
• EMC Embedded Management (eMGMT): The management model for EMC VMAX3, a combination of EMC Solutions Enabler and EMC Unisphere for VMAX running locally on the EMC VMAX using virtual servers.
• EMC HYPERMAX OS: The storage operating environment for EMC VMAX3, delivering performance, array tiering, availability, and data integrity.
• EMC Unisphere for VMAX: Browser-based GUI for creating, managing, and monitoring devices on EMC storage arrays.
EMC VMAX3 storage arrays
This topic provides an overview of the EMC VMAX3 storage array characteristics that are common across
all models.
EMC VMAX3 storage arrays include the following features:
• Two 16 Gb multimode (MM) Fibre Channel (FC) four-port IO modules per director (four per engine); two slots for additional front-end connectivity are available per director. For the EMC VMAX All Flash 450F and 450FX and EMC VMAX All Flash 850F and 850FX, one slot is available for additional front-end connectivity.
• Minimum of five drives, with a maximum of 360 3½ inch drives or 720 2½ inch drives per engine.
• Option of 2½ inch drives, 3½ inch drives, or a combination of both.
• Racks may be dispersed; however, each rack must be within 25 meters of the first rack.
• The number of supported Cisco UCS domains depends on the number of array engines.

Note: Only 2½ inch drives are supported for EMC VMAX All Flash models.
The following illustration shows the interconnection of the EMC VMAX3 in VCE Systems:
The following table shows the engines, with the maximum domains, chassis, and half-width blades:

Engines | Maximum domains | Maximum chassis | Maximum blades (half-width)
1 | 1 | 16 | 128
2 | 2 | 32 | 256
3 | 3 | 48 | 384
4 | 4 | 64 | 512
5 | 4 | 64 | 512
6 | 4 | 64 | 512
7 | 4 | 64 | 512
8 | 4 | 64 | 512
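The engine-to-compute scaling can be captured as a simple lookup. This sketch is illustrative; the values are reconstructed from the table above, with the four-engine row taken as four domains, consistent with the 16-chassis-per-domain limit stated earlier:

```python
# Maximum Cisco UCS domains, chassis, and half-width blades supported
# per number of EMC VMAX3 engines (values reconstructed from the table).
ENGINE_LIMITS = {
    1: (1, 16, 128),
    2: (2, 32, 256),
    3: (3, 48, 384),
    4: (4, 64, 512),
    5: (4, 64, 512),
    6: (4, 64, 512),
    7: (4, 64, 512),
    8: (4, 64, 512),
}

def limits_for(engines: int) -> tuple:
    """Return (max domains, max chassis, max half-width blades)."""
    return ENGINE_LIMITS[engines]

# Each chassis holds up to 8 half-width blades, so blades = chassis * 8.
assert all(blades == chassis * 8
           for _, chassis, blades in ENGINE_LIMITS.values())
```

Note that compute scaling plateaus at four engines; adding engines beyond that increases storage capability without raising the compute maximums.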
Supported drives
The following tables show the supported drives for EMC VMAX3 models:

Drive | 3½ inch capacity/speed | 2½ inch capacity/speed | RAID protection
Flash | 200 GB, 400 GB, 800 GB | 200 GB, 400 GB, 800 GB | R5 (3+1) (default), R5 (7+1)
SAS | 300 GB 10K, 300 GB 15K, 600 GB 10K, 1200 GB 10K | 300 GB 10K, 300 GB 15K, 600 GB 10K, 1200 GB 10K | Mirrored (default), R5 (3+1), R5 (7+1)
NL-SAS | 2 TB 7.2K, 4 TB 7.2K | (none) | R6 (6+2) (default), R6 (14+2)

The following table shows the supported drives for EMC VMAX All Flash models:

Drive | 2½ inch capacity/speed | RAID protection
Flash | 960 GB, 1920 GB, 3840 GB | R5 (7+1), R6 (14+2)
EMC VMAX 400K storage arrays
Use this topic for an overview of the EMC VMAX 400K storage arrays.
EMC VMAX 400K storage arrays include the following features:
•	Minimum configuration contains one engine and the maximum contains eight engines
•	EMC VMAX 400K engines contain Intel 2.7 GHz Ivy Bridge processors with 48 cores
•	Available cache options are 512 GB, 1024 GB, or 2048 GB per engine
EMC VMAX 200K storage arrays
Use this topic for an overview of the EMC VMAX 200K storage arrays.
EMC VMAX 200K storage arrays include the following features:
•	Minimum configuration contains one engine and the maximum contains four engines
•	EMC VMAX 200K engines contain Intel 2.6 GHz Ivy Bridge processors with 32 cores
•	Available cache options are 512 GB, 1024 GB, or 2048 GB per engine
EMC VMAX 100K storage arrays
Use this topic for an overview of the EMC VMAX 100K storage arrays.
EMC VMAX 100K storage arrays include the following features:
•	Minimum configuration contains one engine and the maximum contains two engines
•	EMC VMAX 100K engines contain Intel 2.1 GHz Ivy Bridge processors with 24 cores
•	Available cache options are 512 GB and 1024 GB per engine
EMC VMAX All Flash storage arrays
Availability of the EMC VMAX All Flash option is based on the drive capacity and available software of the EMC VMAX All Flash 850F, 850FX, 450F, and 450FX.
Overview
EMC VMAX All Flash models include the following features:
•	The EMC VMAX All Flash 850F and 850FX and EMC VMAX All Flash 450F and 450FX storage arrays are defined by an EMC V-Brick that contains one engine, two DAEs, and 53 TBu of storage.
	—	Each engine is configured with 1 TB or 2 TB of cache.
	—	All engines in the EMC V-Brick must contain identical cache types.
•	The EMC V-Brick has a minimum of 16 drives with 3.84 TB capacity in RAID 5 (7+1).
The following table describes the EMC VMAX All Flash versions:

Software package | EMC VMAX All Flash 850F, EMC VMAX All Flash 450F | EMC VMAX All Flash 850FX, EMC VMAX All Flash 450FX
Base software (included) | HyperMAX operating system; thin provisioning; QoS or host I/O limits; D@RE (disabled by default) | HyperMAX operating system; thin provisioning; QoS or host I/O limits; D@RE
Optional software | — | By default, all software is included.
The VCE Systems with EMC VMAX All Flash arrays contain the following key components:

Component | EMC VMAX All Flash 850F and 850FX | EMC VMAX All Flash 450F and 450FX
Base EMC V-Brick | 1 to 8; 53 TBu dense flash, one engine (1 or 2 TB cache) | 1 to 4; 53 TBu dense flash, one engine (1 TB cache)*
Number of engines | 1 to 8 engines containing Intel 2.7 GHz Ivy Bridge processors with 48 cores | 1 to 4 engines containing Intel 2.6 GHz Ivy Bridge processors with 32 cores
Flash capacity | Up to 1920 drives (up to 4 PBe per system) | Up to 960 drives (up to 2 PBe per system)

* A 2 TB cache EMC V-Brick upgrade is available; maximum capacity requires 2 TB cache EMC V-Brick upgrades.
Network layer
LAN and SAN make up the network layer of the VCE™ System.
Network layer hardware
LAN and SAN make up the network layer of the VCE™ System and include Cisco Nexus switches and
Cisco MDS switches.
LAN layer
Each VCE System requires a pair of Cisco Nexus 3064-T Switches or a pair of Cisco Nexus 3172TQ Switches for all out-of-band device management connectivity and traffic. Each Cisco Nexus 3064-T Switch provides 48 ports of 100 Mbps/1000 Mbps/10 Gbps twisted pair connectivity and four QSFP+ ports. Each Cisco Nexus 3172TQ Switch provides 48 ports of 100 Mbps/1000 Mbps/10 Gbps twisted pair connectivity and six 40 GbE QSFP+ ports. The 48 ports on each switch provide management interface connectivity for all devices in the VCE System.
The Cisco Nexus 5548UP Switch, Cisco Nexus 5596UP Switch, and Cisco Nexus 9396PX Switch in the
network layer provide 10 Gb connectivity using SFP+ modules for all VCE System production traffic.
The following table shows the LAN layer components:

Component | Description
Cisco Nexus 5548UP Switch | 1RU appliance; 32 fixed 10 Gbps SFP+ ports; expands to 48 10 Gbps SFP+ ports through an available expansion module
Cisco Nexus 5596UP Switch | 2RU appliance; 48 fixed 10 Gbps SFP+ ports; expands to 96 10 Gbps SFP+ ports through three available expansion slots
Cisco Nexus 9396PX Switch | 2RU appliance; 48 fixed 10 Gbps SFP+ ports and 12 fixed 40 Gbps QSFP+ ports
Cisco Nexus 3172TQ Switch | 1RU appliance; 48 fixed 100 Mbps/1000 Mbps/10 Gbps twisted pair ports and six fixed 40 Gbps QSFP+ ports for the management layer of the VCE System
Cisco Nexus 3064-T Switch | 1RU appliance; 48 fixed 10GBase-T RJ45 ports and four fixed 40 Gbps QSFP+ ports for the management layer of the VCE System
SAN layer
VCE Systems contain two Cisco MDS switches that provide FC connectivity between the compute and storage layer components. These switches comprise two separate fabrics. Connections from the storage components provide 16 Gb of FC bandwidth. Cisco UCS fabric interconnects provide an FC port channel of four 8 Gb connections (32 Gb of bandwidth) to each fabric. This can be increased to eight connections for 64 Gb of bandwidth or sixteen connections for 128 Gb of bandwidth. These connections also facilitate SAN booting of the blades in the compute layer.
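The bandwidth figures above are simple multiples of the per-link speed. A small sketch of the arithmetic (hypothetical helper, assuming the 8 Gb uplinks and port-channel sizes described in this section):

```python
LINK_SPEED_GB = 8  # each Cisco UCS FI uplink runs at 8 Gb, per the text above

def port_channel_bandwidth_gb(uplinks: int) -> int:
    """Aggregate FC bandwidth of a port channel to one fabric, in Gb."""
    if uplinks not in (4, 8, 16):  # port-channel sizes named in this section
        raise ValueError("port channels use 4, 8, or 16 uplinks per fabric")
    return uplinks * LINK_SPEED_GB
```

Four uplinks give 32 Gb, eight give 64 Gb, and sixteen give 128 Gb, as stated above.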
Two Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9396S 16G Multilayer Fabric Switches, or
Cisco MDS 9706 Multilayer Directors provide:
•
FC connectivity between the compute layer components and the storage layer components
•
Connectivity for backup and business continuity requirements when configured
Note: Inter-Switch Links (ISLs) to an existing SAN or between switches are not permitted.
The following table shows the SAN network layer components:

Component | Description
Cisco MDS 9148S Multilayer Fabric Switch | 1RU appliance; provides 12 to 48 line-rate ports for non-blocking 16 Gbps throughput; 12 ports are licensed, and additional ports can be licensed as required
Cisco MDS 9396S 16G Multilayer Fabric Switch | 2RU appliance; provides 48 to 96 line-rate ports for non-blocking 16 Gbps throughput; 48 ports are licensed, and additional ports can be licensed in 12-port increments as required
Cisco MDS 9706 Multilayer Director | 9RU appliance; provides up to 12 Tbps of front-panel FC line-rate, non-blocking, system-level switching; VCE leverages the advanced 48-port line cards at a line rate of 16 Gbps for all ports; consists of one 48-port line card per director, and up to three additional 48-port line cards can be added; VCE requires that four fabric modules are included with all Cisco MDS 9706 Multilayer Directors for an N+1 configuration; four PDUs; two supervisors
Note: Both fabric switches must have the same SAN switch hardware configuration.
Port utilization
This section describes the switch port utilization for Cisco Nexus switches in the networking configuration.
Cisco Nexus 5548UP Switch
The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN
traffic.
The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10G | Twinax
Uplinks to management | 3 | 10G | Twinax
Customer IP backup | 4 | 1G or 10G | SFP+
If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, 28 additional ports are available to provide additional network connectivity.
Cisco Nexus 5596UP Switch
The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1G or 10G connectivity for LAN
traffic.
The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS fabric interconnect | 8 | 10G | Twinax
Uplinks to customer core | 8 | Up to 10G | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10G | Twinax
Uplinks to management | 2 | 10G | Twinax
The remaining ports in the base Cisco Nexus 5596UP Switch (no module) support the following additional connectivity option:

Feature | Used ports | Port speeds | Media
Customer IP backup | 4 | 1G or 10G | SFP+
If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports are available for network connectivity.
Cisco Nexus 9396PX Switch
The base Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1G or 10G connectivity and 12
40G QSFP+ ports for LAN traffic.
The following table shows core connectivity for the Cisco Nexus 9396PX Switch:

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect | 8 | 10G | Twinax
Uplinks to customer core | 8 (10G) / 2 (40G) | Up to 40G | SFP+/QSFP+
vPC peer links | 2 | 40G | Twinax
Uplinks to management | 2 | 10G | Twinax
The remaining ports in the Cisco Nexus 9396PX Switch support a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
Customer IP backup | 8 | 1G or 10G | SFP+
Uplinks from Cisco UCS FIs for Ethernet bandwidth enhancement | 8 | 10G | Twinax
Cisco MDS 9706 Multilayer Director and Cisco MDS 9148S Multilayer
Fabric Switch
VCE Systems incorporate the Cisco MDS 9706 Multilayer Director and Cisco MDS 9148S Multilayer
Fabric Switch to provide 16G FC connectivity from storage to Cisco MDS.
Cisco MDS 9706 Switch
The following table shows connectivity for a single Cisco MDS 9706 Multilayer Director:

Feature | Available ports | Port speed | Media
EMC VMAX3 storage arrays | 64* | 16G | SFP
Uplinks from fabric interconnect (FI) | 32** | 8G | SFP
AMP-2 | 3*** | 8G | SFP

* Based on eight engines (EMC VMAX3) with eight FC ports per engine.
** Based on four Cisco UCS domains with eight uplinks each; the number of ports from the FI depends on the number of Cisco UCS domains and configurations.
***AMP-2 may consist of two or more servers connecting to each fabric
The Cisco MDS 9706 Multilayer Director provides additional connectivity to optional devices such as FC
backup or direct attachment servers depending on the configuration options. The Cisco MDS 9706
Multilayer Director allows the addition of IO modules for port expansion.
Cisco MDS 9148S Multilayer Fabric Switch
Cisco MDS 9148S Multilayer Fabric Switch is a fixed switch with no IOM expansion for additional ports.
The Cisco MDS 9148S Multilayer Fabric Switch provides connectivity for up to 40 ports from Cisco UCS
FI switches and an EMC VMAX3 storage array, and supports up to four engines.
The following table shows connectivity for the Cisco MDS 9148S Multilayer Fabric Switch:

Number of domains | Mgmt | Domain 1 | Domain 2 | Domain 3 | Domain 4 | Maximum storage ports | Total FI FC ports
1 | 2* | 8 | - | - | - | 32 | 8
1 | 2* | 16 | - | - | - | 24 | 16
2 | 2* | 8 | 8 | - | - | 24 | 16
2 | 2* | 8 | 16 | - | - | 16 | 24
2 | 2* | 16 | 16 | - | - | 8 | 32
3 | 2* | 8 | 8 | 8 | - | 16 | 24
3 | 2* | 8 | 8 | 16 | - | 8 | 32
4 | 2* | 8 | 8 | 8 | 8 | 8 | 32
* AMP-2 connections may require more than two per switch.
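Each row balances FI uplinks against storage ports within the roughly 40 ports the 9148S makes available for FI and storage connectivity (management ports are counted separately). A sketch of that budget, assuming the 40-port figure stated earlier in this section:

```python
# Up to 40 ports on one fabric serve Cisco UCS FI uplinks plus VMAX3
# storage connections, per the text above (illustrative helper only).
FI_AND_STORAGE_PORTS = 40

def max_storage_ports(fi_uplinks_per_domain: list) -> int:
    """Storage ports remaining on one fabric after FI port channels are allocated."""
    total_fi = sum(fi_uplinks_per_domain)
    if total_fi > FI_AND_STORAGE_PORTS:
        raise ValueError("FI uplinks exceed the 40-port budget")
    return FI_AND_STORAGE_PORTS - total_fi
```

For example, four domains with eight uplinks each leave 40 − 32 = 8 storage ports, matching the four-domain row.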
Cisco Nexus 3172TQ Switch - management networking
The base Cisco Nexus 3172TQ Switch provides 48 100 Mbps/1 GbE/10 GbE BaseT fixed ports and six
QSFP+ ports to provide 40 GbE connections.
The following table shows core connectivity for the Cisco Nexus 3172TQ Switch for management networking and reflects the AMP-2 base for two servers:

Feature | Used ports | Port speeds | Media
Management uplinks from fabric interconnect (FI) | 2 | 1GbE | Cat6
Uplinks to customer core | 2 | Up to 10G | Cat6
vPC peer links | 2 QSFP+ | 10GbE/40GbE | Cat6/MMF 50µ/125 LC/LC
Uplinks to management | 1 | 1GbE | Cat6
Cisco Nexus management ports | 1 | 1GbE | Cat6
Cisco MDS management ports | 2 | 1GbE | Cat6
AMP2-CIMC ports | 1 | 1GbE | Cat6
AMP2-1GbE ports | 2 | 1GbE | Cat6
AMP2-10GbE ports | 2 | 10GbE | Cat6
EMC VNXe management ports | 1 | 1GbE | Cat6
EMC VNXe NAS ports | 4 | 10GbE | Cat6
Gateways | 14 | 100Mb/1GbE | Cat6
The remaining ports in the Cisco Nexus 3172TQ Switch provide support for additional domains and their
necessary management connections.
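With the port counts above, the two-server AMP-2 base consumes 32 of the 48 BaseT ports, leaving 16 for additional domains; the vPC peer links ride the separate QSFP+ ports. A quick tally of that budget (illustrative only; labels are shorthand for the table rows):

```python
# BaseT port usage on the Cisco Nexus 3172TQ for the AMP-2 base with two
# servers, taken from the table above. vPC peer links use QSFP+ ports
# and are therefore excluded from this tally.
BASET_PORTS = 48
USED_PORTS = {
    "FI management uplinks": 2, "customer core uplinks": 2,
    "uplinks to management": 1, "Cisco Nexus management": 1,
    "Cisco MDS management": 2, "AMP2 CIMC": 1, "AMP2 1GbE": 2,
    "AMP2 10GbE": 2, "VNXe management": 1, "VNXe NAS": 4, "gateways": 14,
}

def remaining_baset_ports() -> int:
    """BaseT ports left over for additional Cisco UCS domains."""
    return BASET_PORTS - sum(USED_PORTS.values())
```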
Cisco MDS 9396S Multilayer Fabric Switch
VCE Systems incorporate the Cisco MDS 9396S Multilayer Fabric Switch to provide 16G FC connectivity
from storage to Cisco MDS.
Cisco MDS 9396S Switch
The following table shows connectivity for a single Cisco MDS 9396S Multilayer Fabric Switch:

Feature | Available ports | Port speed | Media
EMC VMAX3 storage arrays | TBD* | 16G | SFP
Uplinks from fabric interconnect (FI) | TBD** | 16G | SFP
AMP-2 | TBD*** | 16G | SFP

* Based on eight engines (EMC VMAX3) with eight FC ports per engine.
** Based on four Cisco UCS domains with eight uplinks each; the number of ports from the FI depends on the number of Cisco UCS domains and configurations.
***AMP-2 may consist of two or more servers connecting to each fabric
Number of domains | Mgmt | Domain 1 | Domain 2 | Domain 3 | Domain 4 | Maximum storage ports | Total FI FC ports
1 | 2* | 8 | - | - | - | 32 | 8
1 | 2* | 16 | - | - | - | 24 | 16
2 | 2* | 8 | 8 | - | - | 24 | 16
2 | 2* | 8 | 16 | - | - | 16 | 24
2 | 2* | 16 | 16 | - | - | 8 | 32
3 | 2* | 8 | 8 | 8 | - | 16 | 24
3 | 2* | 8 | 8 | 16 | - | 8 | 32
4 | 2* | 8 | 8 | 8 | 8 | 8 | 32
Cisco Nexus 3064-T Switch - management networking
The base Cisco Nexus 3064-T Switch provides 48 100 Mbps/1GbE/10GbE Base-T fixed ports and four QSFP+ ports to provide 40 GbE connections.
The following table shows core connectivity for the Cisco Nexus 3064-T Switch for management
networking and reflects the AMP-2 HA base for two servers:
Feature | Used ports | Port speeds | Media
Management uplinks from fabric interconnect (FI) | 2 | 1GbE | Cat6
Uplinks to customer core | 2 | Up to 10G | Cat6
vPC peer links | 2 QSFP+ | 10GbE/40GbE | Cat6/MMF 50µ/125 LC/LC
Uplinks to management | 1 | 1GbE | Cat6
Cisco Nexus management ports | 1 | 1GbE | Cat6
Cisco MDS management ports | 2 | 1GbE | Cat6
AMP2-CIMC ports | 1 | 1GbE | Cat6
AMP2-Gi ports | 2 | 1GbE | Cat6
AMP2-10G ports | 2 | 10GbE | Cat6
EMC VNXe management ports | 1 | 1GbE | Cat6
EMC VNXe NAS ports | 4 | 10GbE | Cat6
Gateways | 14 | 100Mb/1GbE | Cat6
The remaining ports in the Cisco Nexus 3064-T Switch provide support for additional domains and their
necessary management connections.
Virtualization layer
Virtualization components
VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core
VMware vSphere components are the VMware vSphere ESXi and VMware vCenter Server for
management.
Depending on the version that you are running, VMware vSphere 5.x includes a Single Sign-on (SSO)
component as a standalone Windows server or as an embedded service on the vCenter server. VMware
vSphere 6.0 includes a pair of Platform Service Controller Linux appliances to provide the Single Sign-on
(SSO) service.
The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of
resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility
with the use of VMware vMotion and Storage vMotion technology.
VMware vSphere Hypervisor ESXi
The VMware vSphere Hypervisor ESXi runs in the management servers and in VCE Systems using
VMware vSphere Server Enterprise Plus.
The lightweight hypervisor requires very little space to run (less than six GB of storage required to install)
and has minimal management overhead.
VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor
ESXi boots from the SAN through an independent Fibre Channel (FC) LUN presented from the EMC
storage array to the compute blades. The FC LUN also contains the hypervisor's locker for persistent
storage of logs and other diagnostic files to provide stateless computing in VCE Systems. The stateless
hypervisor (PXE boot into memory) is not supported.
Cluster configuration
VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters
contain the CPU, memory, network, and storage resources available for allocation to virtual machines
(VMs). Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.1/5.5 and 64 hosts for
VMware vSphere 6.0. Clusters can support thousands of VMs.
The clusters can also support a variety of Cisco UCS blades running inside the same cluster.
Note: Some advanced CPU functionality might be unavailable if more than one blade model is running in
a given cluster.
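The cluster host limits above depend on the vSphere release. A trivial sketch of the check (hypothetical helper names; limits taken from the text):

```python
# Cluster host maximums per VMware vSphere release, from the text above.
MAX_CLUSTER_HOSTS = {"5.1": 32, "5.5": 32, "6.0": 64}

def cluster_within_limit(vsphere_version: str, host_count: int) -> bool:
    """Return True if the cluster size is within the limit for this release."""
    return host_count <= MAX_CLUSTER_HOSTS[vsphere_version]
```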
Data stores
VCE Systems support a mixture of data store types: block level storage using VMFS or file level storage
using NFS. The maximum size per VMFS volume is 64 TB (50 TB VMFS3 @ 1 MB). Beginning with
VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum
of 255 volumes.
VCE optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in VCE Systems
to maximize the throughput and scalability of NFS data stores. VCE Systems currently support a
maximum of 256 NFS data stores per host.
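The limits in the two paragraphs above can be collected into a simple layout check (a sketch with hypothetical names, not VCE tooling; figures are for VMware vSphere 5.5 and later as stated above):

```python
# Data store limits from the text: 64 TB per VMFS volume, 255 volumes per
# host/cluster, and 256 NFS data stores per host.
VMFS_MAX_TB = 64
MAX_VMFS_VOLUMES = 255
MAX_NFS_DATASTORES = 256

def datastore_violations(vmfs_volume_sizes_tb, nfs_datastore_count):
    """Return a list of limit violations for a proposed data store layout."""
    issues = []
    if len(vmfs_volume_sizes_tb) > MAX_VMFS_VOLUMES:
        issues.append("more than 255 VMFS volumes")
    for i, size_tb in enumerate(vmfs_volume_sizes_tb):
        if size_tb > VMFS_MAX_TB:
            issues.append(f"VMFS volume {i} exceeds 64 TB")
    if nfs_datastore_count > MAX_NFS_DATASTORES:
        issues.append("more than 256 NFS data stores")
    return issues
```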
Virtual networks
Virtual networking in the Advanced Management Platform uses the VMware Virtual Standard Switch
(VSS). Virtual networking is managed by either the Cisco Nexus 1000V distributed virtual switch or
VMware vSphere Distributed Switch (VDS). The Cisco Nexus 1000V Series Switch ensures consistent,
policy-based network capabilities to all servers in the data center by allowing policies to move with a VM
during live migration. This provides persistent network, security, and storage compliance.
Alternatively, virtual networking in VCE Systems is managed by VMware VDS with comparable features
to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware VSS
and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor.
The implementation of Cisco Nexus 1000V for VMware vSphere 5.1/5.5 and VMware VDS for VMware
vSphere 5.5 use intelligent network Class of Service (CoS) marking and Quality of Service (QoS) policies
to appropriately shape network traffic according to workload type and priority. With VMware vSphere 6.0,
QoS is set to Default (Trust Host). The vNICs are equally distributed across all available physical adapter
ports to ensure redundancy and maximum bandwidth where appropriate. This provides general
consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS Virtual
Interface Card (VIC) hardware. Thus, VMware vSphere ESXi has a predictable uplink interface count. All
applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the virtual network
interface cards (vNICs) to ensure consistency in case the uplinks need to be migrated to the VMware
vSphere Distributed Switch (VDS) after manufacturing.
VMware vCenter Server
This topic describes the VMware vCenter Server, which is a central management point for the hypervisors and VMs.
VMware vCenter is installed on a 64-bit Windows Server. VMware Update Manager is installed on a 64-bit Windows Server and runs as a service to assist with host patch management.
VMware vCenter Server provides the following functionality:
•	Cloning of VMs
•	Template creation
•	VMware vMotion and VMware Storage vMotion
•	Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters
VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. System administrators can create and apply alarms to all managed objects in VMware vCenter Server, including:
•	Data center, cluster, and host health, inventory, and performance
•	Data store health and capacity
•	VM usage, performance, and health
•	Virtual network usage and health
Databases
The back-end database that supports VMware vCenter Server and VUM is a remote Microsoft SQL Server 2008 (VMware vSphere 5.1) or Microsoft SQL Server 2012 (VMware vSphere 5.5/6.0). The SQL Server service can be configured to use a dedicated service account.
Authentication
VMware Single Sign-On (SSO) Service integrates multiple identity sources including Active Directory,
Open LDAP, and local accounts for authentication. VMware SSO is available in VMware vSphere 5.x and
later. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and VUM run as
separate Windows services, which can be configured to use a dedicated service account depending on
security and directory services requirements.
VCE supported features
VCE supports the following VMware vCenter Server features:
•	VMware SSO Service (version 5.x and later)
•	VMware vSphere Web Client (used with VCE Vision™ Intelligent Operations)
•	VMware vSphere Distributed Switch (VDS)
•	VMware vSphere High Availability
•	VMware DRS
•	VMware Fault Tolerance
•	VMware vMotion
	—	Layer 3 capability available for compute resources (version 6.0 and higher)
•	VMware Storage vMotion
•	Raw Device Maps
•	Resource Pools
•	Storage DRS (capacity only)
•	Storage-driven profiles (user-defined only)
•	Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)
•	VMware Syslog Service
•	VMware Core Dump Collector
•	VMware vCenter Web Services
Management
Management components overview
The Advanced Management Platform (AMP-2) provides a single management point for VCE™ Systems.
For VCE Systems, the AMP-2 provides the ability to:
•	Run the core and VCE optional management workloads
•	Monitor and manage health, performance, and capacity
•	Provide network and fault isolation for management
•	Eliminate resource overhead
The core management workload is the minimum required management software to install, operate, and
support the VCE System. This includes all hypervisor management, element managers, virtual networking
components, and VCE Vision™ Intelligent Operations Software.
The VCE optional management workload consists of non-core management workloads supported and installed by VCE, whose primary purpose is to manage components in the VCE System. The list includes, but is not limited to, VCE Data Protection, security, and storage management tools such as EMC Avamar Administrator, EMC InsightIQ for Isilon, and VMware vCNS appliances (vShield Edge/Manager).
Management hardware components
AMP-2 is available in multiple configurations that use their own resources to run workloads without
consuming resources on the VCE System.
The following list shows the operational relationship between the Cisco UCS servers and VMware vSphere versions:
•	VCE Systems with Cisco UCS C240 M3 servers are configured with VMware vSphere 5.5 or 6.0.
•	VCE Systems with Cisco UCS C220 M4 servers are configured with VMware vSphere 6.0.
The following table describes the various AMP-2 options:

AMP-2 option | Number of Cisco UCS C2x0 servers | Storage | Description
AMP-2HA Baseline | 2 | FlexFlash SD for VMware vSphere ESXi boot; EMC VNXe3200 with FAST Cache for VM data stores | Provides HA/DRS functionality and shared storage using the EMC VNXe3200.
AMP-2HA Performance | 3 | FlexFlash SD for VMware vSphere ESXi boot; EMC VNXe3200 with FAST Cache for VM data stores | Adds additional compute capacity with a third server and storage performance with the inclusion of EMC FAST VP.
AMP-2S | 2 to 12 | FlexFlash SD for VMware vSphere ESXi boot; EMC VNXe3200 with FAST Cache and FAST VP for VM data stores | Provides a scalable configuration using Cisco UCS C2x0 servers and additional storage expansion capacity.
Note: AMP-2S is supported on Cisco UCS C220 M4 servers with VMware vSphere 6.0.
Management software components
The Advanced Management Platform (AMP-2) is delivered with the following installed software components, depending on the selected VCE Release Certification Matrix (RCM):
•	Microsoft Windows Server 2008 R2 SP1 Standard x64
•	Microsoft Windows Server 2012 R2 Standard x64
•	VMware vSphere Enterprise Plus
•	VMware vSphere Hypervisor ESXi
•	VMware Single Sign-On (SSO) Service
•	VMware vSphere Platform Services Controller
•	VMware vSphere Web Client Service
•	VMware vSphere Inventory Service
•	VMware vCenter Server

	Note: For VMware vSphere 6.0 and later, the preferred instance is created using the VMware vSphere vCenter Server Appliance. An alternate instance may be created using the Windows version. Only one of these options can be implemented.

•	VMware vCenter Database using Microsoft SQL Server 2012 Standard Edition
•	VMware vCenter Update Manager (VUM)

	Note: For VMware vSphere 6.0 and later, the preferred configuration embeds the SQL server on the same VM as the VUM. The alternate configuration leverages the remote SQL server with VMware vCenter Server. Only one of these options can be implemented.

•	VMware vSphere client
•	VMware vSphere Syslog Service (optional)
•	VMware vSphere Core Dump Service (optional)
•	VMware vSphere Distributed Switch (VDS) or Cisco Nexus 1000V virtual switch (VSM)
Note: VCE™ Systems cannot include both VMware VDS and Cisco Nexus 1000V VSM VMs.
•	EMC PowerPath/VE Electronic License Management Server (ELMS)
•	EMC Secure Remote Support (ESRS)
•	Array management modules, including but not limited to:
	—	EMC Unisphere Client
	—	EMC Unisphere Service Manager
	—	EMC VNX Initialization Utility
	—	EMC VNX Startup Tool
	—	EMC SMI-S Provider
	—	EMC PowerPath Viewer
•	Cisco Prime Data Center Network Manager and Device Manager
•	(Optional) EMC RecoverPoint management software that includes the management application and deployment manager
Management network connectivity
The Vblock and VxBlock Systems 740 offer several types of AMP network connectivity and server
assignments.
AMP-2S network connectivity with Cisco UCS C2x0 M4 servers
The following illustration shows the network connectivity for the AMP-2S on the Cisco UCS C2x0 M4
servers:
AMP-2S server assignments with Cisco UCS C2x0 M4 servers
The following illustration shows the VM server assignments for AMP-2S on the Cisco UCS C2x0 M4
servers, which implements the default VMware vCenter Server configuration using the VMware 6.x
vCenter Server Appliance and VMware Update Manager with embedded MS SQL Server database:
The following illustration shows the VM server assignments for AMP-2S on the Cisco UCS C2x0 M4
servers, which implements the alternate VMware vCenter Servers configuration, using the VMware 6.x
vCenter Server, Database Servers, and VMware Update Manager:
VCE Systems that use VMware vSphere Distributed Switch (VDS) do not include the Cisco Nexus 1000V
VSM VMs.
The AMP-2S option leverages the DRS functionality of the VMware vCenter to optimize resource usage
(CPU and memory) so that VM assignment to a VMware vSphere ESXi host is managed automatically.
AMP-2HA network connectivity on Cisco UCS C2x0 M3 servers
The following illustration shows the network connectivity for AMP-2HA on the Cisco UCS C2x0 M3
servers:
AMP-2HA server assignments with Cisco UCS C2x0 M3 servers
The following illustration shows the VM server assignment for AMP-2HA with Cisco UCS C2x0 M3
servers:
Sample configurations
Sample VCE Systems with EMC VMAX 400K
VCE Systems with EMC VMAX 400K cabinet elevations vary based on the specific configuration
requirements.
The following elevations are provided for sample purposes only. For specifications for a specific VCE
System design, consult your vArchitect.
Note: EMC VMAX array cabinets are excluded from the sample elevations.
VCE System with EMC VMAX 400K cabinet front view
VCE System with EMC VMAX 400K cabinet rear view
VCE System with EMC VMAX 400K cabinet 1
VCE System with EMC VMAX 400K cabinet 2
Sample VCE Systems with EMC VMAX 200K
VCE Systems with EMC VMAX 200K cabinet elevations vary based on the specific configuration
requirements.
The following elevations are provided for sample purposes only. For specifications for a specific VCE
System design, consult your vArchitect.
Note: EMC VMAX array cabinets are excluded from the sample elevations.
VCE System with EMC VMAX 200K front view
VCE System with EMC VMAX 200K rear view
VCE System with EMC VMAX 200K cabinet 1
VCE System with EMC VMAX 200K cabinet 2
Sample VCE Systems with EMC VMAX 100K
VCE Systems with EMC VMAX 100K cabinet elevations vary based on the specific configuration
requirements.
The following elevations are provided for sample purposes only. For specifications for a specific VCE
System design, consult your vArchitect.
Note: EMC VMAX array cabinets are excluded from the sample elevations.
VCE System with EMC VMAX 100K front view
VCE System with EMC VMAX 100K rear view
VCE System with EMC VMAX 100K cabinet 1
VCE System with EMC VMAX 100K cabinet 2
Additional references
Virtualization components
Product: VMware vCenter Server
Description: Provides a scalable and extensible platform that forms the foundation for virtualization management.
Link: http://www.vmware.com/products/vcenter-server/

Product: VMware vSphere ESXi
Description: Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS).
Link: http://www.vmware.com/products/vsphere/
Compute components
Product: Cisco UCS B-Series Blade Servers
Description: Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization.
Link: www.cisco.com/en/US/products/ps10280/index.html

Product: Cisco UCS Manager
Description: Provides centralized management capabilities for the Cisco Unified Computing System (UCS).
Link: www.cisco.com/en/US/products/ps10281/index.html

Product: Cisco UCS 2200 Series Fabric Extenders
Description: Bring unified fabric into the blade server chassis, providing up to eight 10 Gbps connections each between the blade servers and the fabric interconnect.
Link: http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html

Product: Cisco UCS 5100 Series Blade Server Chassis
Description: Chassis that supports up to eight blade servers and up to two fabric extenders in a six rack unit (RU) enclosure.
Link: www.cisco.com/en/US/products/ps10279/index.html

Product: Cisco UCS 6200 Series Fabric Interconnects
Description: Cisco UCS family of line-rate, low-latency, lossless 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel interconnects that provide network connectivity and management capabilities.
Link: www.cisco.com/en/US/products/ps11544/index.html
Network components
Product: Cisco Nexus 1000V Series Switches
Description: A software switch on a server that delivers Cisco VN-Link services to virtual machines hosted on that server.
Link: www.cisco.com/en/US/products/ps9902/index.html

Product: VMware vSphere Distributed Switch (VDS)
Description: A VMware vCenter-managed software switch that delivers advanced network services to virtual machines hosted on that server.
Link: http://www.vmware.com/products/vsphere/features/distributed-switch.html

Product: Cisco Nexus 5000 Series Switches
Description: Simplifies data center transformation by enabling a standards-based, high-performance unified fabric.
Link: http://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html

Product: Cisco MDS 9000 Series Multilayer Directors
Description: Provide industry-leading availability, scalability, security, and management. The Cisco MDS 9000 family allows deployment of high-performance SANs with the lowest TCO.
Link: http://www.cisco.com/c/en/us/products/storage-networking/mds-9000-series-multilayer-switches/index.html

Product: Cisco MDS 9706 Multilayer Director
Description: Director-class switch that supports up to 192 line-rate 16 Gbps Fibre Channel ports with redundant supervisors, fabric modules, and power supplies.
Link: http://www.cisco.com/c/en/us/products/storage-networking/mds-9706-multilayer-director/index.html

Product: Cisco MDS 9148S Multilayer Fabric Switch
Description: Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports.
Link: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9148s-16g-multilayer-fabric-switch/datasheet-c78-731523.html

Product: Cisco Nexus 3064-T Switch
Description: Provides management access to all VCE System components and uses vPC technology to increase redundancy and scalability.
Link: http://www.cisco.com/c/en/us/support/switches/nexus-3064-t-switch/model.html

Product: Cisco Nexus 3172TQ Switch
Description: Provides management access to all VCE System components and uses vPC technology to increase redundancy and scalability.
Link: http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/data_sheet_c78-729483.html

Product: Cisco MDS 9396S Multilayer Fabric Switch
Description: Provides up to 96 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports.
Link: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9396s-16g-multilayer-fabric-switch/datasheet-c78-734525.html
Product: Cisco Nexus 9396PX Switch
Description: Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. Designed to support Cisco Application Centric Infrastructure (ACI).
Link: http://www.cisco.com/c/en/us/support/switches/nexus-9396px-switch/model.html
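The Nexus 3064-T and 3172TQ entries above note that the management switches use vPC to dual-home attached components. As an illustration only, the following is a minimal NX-OS sketch of one member of a vPC pair; the domain ID, peer-keepalive addresses, and port-channel numbers are hypothetical placeholders, not values from a VCE System build.

```
! Hedged sketch of one vPC peer (hypothetical IDs and addresses)
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1

! vPC peer link between the two management switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! Downstream port channel presented to a dual-homed device
interface port-channel 20
  switchport mode trunk
  vpc 20
```

A mirrored configuration on the second switch (with the keepalive source and destination reversed) completes the pair, allowing a device to bundle links to both switches into a single port channel.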
Storage components
This topic provides a description of the storage components.
Product: EMC VMAX 400K
Description: Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments.
Link: http://www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3-ss.pdf

Product: EMC VMAX 200K
Description: EMC high-end storage array that delivers infrastructure services in the next-generation data center. Built for reliability, availability, and scalability, the EMC VMAX 200K uses specialized engines, each of which includes two redundant director modules providing parallel access and replicated copies of all critical data.
Link: http://www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3-ss.pdf

Product: EMC VMAX 100K
Description: Delivers full enterprise-class availability, data integrity, and security.
Link: http://www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3-ss.pdf

Product: EMC VMAX All Flash
Description: Provides high-density flash storage.
Link: https://support.emc.com/products/40306_VMAX-All-Flash
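The VMAX arrays listed above are commonly queried with EMC Solutions Enabler (SYMCLI). The following is a brief sketch of read-only inventory commands, assuming Solutions Enabler is installed on a management host and `1234` stands in for a real array serial number (SID).

```
# List the VMAX arrays visible to this management host
symcfg list

# List the configured devices on a specific array (hypothetical SID 1234)
symdev list -sid 1234
```

These commands only report configuration and do not modify the array; consult the Solutions Enabler documentation for the full command set.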
www.vce.com
About VCE
VCE, an EMC Federation Company, is the world market leader in converged infrastructure and converged solutions. VCE accelerates the adoption of converged infrastructure and cloud-based computing models that reduce IT costs while improving time to market. VCE delivers the industry's only fully integrated and virtualized cloud infrastructure systems, available through an extensive partner network and covering horizontal applications, vertical industry offerings, and application development environments, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure.
For more information, go to http://www.vce.com.
Copyright 2014-2016 VCE Company, LLC. All rights reserved. VCE, VCE Vision, VCE Vscale, Vblock, VxBlock, VxRack,
VxRail, and the VCE logo are registered trademarks or trademarks of VCE Company LLC. All other trademarks used herein
are the property of their respective owners.