Dell EMC Vscale Architecture Overview

Document revision 1.0
February 2017

Revision history

Date          | Document revision | Description of changes
February 2017 | 1.0               | Initial version
Contents
Introduction................................................................................................................................................. 4
Architecture overview................................................................................................................................ 5
Connecting system resources.................................................................................................................10
Connectivity..........................................................................................................................................10
Physical components........................................................................................................................... 11
LAN architecture...................................................................................................................................12
LAN configurations.........................................................................................................................15
VXLANs......................................................................................................................................... 16
SAN architecture.................................................................................................................................. 17
SAN switches.................................................................................................................................18
Architecture resources.............................................................................................................................20
Physical components........................................................................................................................... 20
Technology connects........................................................................................................................... 21
Compute resources.............................................................................................................................. 24
Storage resources................................................................................................................................ 25
File storage.................................................................................................................................... 28
Data protection resources.................................................................................................................... 29
System resources.................................................................................................................................31
Hosting management applications......................................................................................................... 33
Compute components.......................................................................................................................... 35
Storage components............................................................................................................................ 35
Network components............................................................................................................................36
Network architecture...................................................................................................................... 36
VMware vSphere virtual switch designs...............................................................................................41
Management workload cluster and resource pools..............................................................................43
Virtualization.........................................................................................................................................47
Sample configurations............................................................................................................................. 48
Sample ACI configuration with the Cisco Nexus 9504 Switch............................................................. 48
Sample ACI configuration with the Cisco Nexus 9508 Switch............................................................. 49
Sample technology connect configuration............................................................................................51
Sample system with compute...............................................................................................................52
Sample system with storage................................................................................................................ 53
Sample Vscale Management Platform with VNX5200......................................................................... 54
Additional references............................................................................................................................... 56
Virtualization components.................................................................................................................... 56
Compute components.......................................................................................................................... 56
Network components............................................................................................................................57
Storage components............................................................................................................................ 58
Introduction
This document describes the high-level design of the Vscale Architecture.
Vscale Architecture is an architectural framework that uses Vscale Fabric to connect modular building
blocks such as Converged Systems and Vscale Fabric Technology Extensions. This provides Vscale
Architecture the flexibility to accommodate a wide variety of application requirements using an IT
infrastructure that can scale from a single Converged System to the largest data center.
The target audience for this document includes sales engineers, field consultants, advanced services
specialists, and customers.
The following table provides a description of related documentation:
Document | Provides
Release Certification Matrix | A list of the certified versions of software, firmware, and hardware.
Converged Systems Physical Planning Guide | A description of the physical components and elevations.
Converged Systems Powering On and Off Guide | Instructions on how to manage power.
Integrated Data Protection Guide | Information on advanced planning and backup guidelines.
Vision Intelligent Operations Administration Guide | Information on how to manage Converged Systems.
Glossary | Definitions of terms specific to Converged Systems.
Architecture overview
Vscale Architecture is a framework that enables the building of data-center scale IT systems comprised of
resources logically connected using a Vscale Fabric to form logical systems.
The Vscale Architecture modular building blocks include the following:

• New or existing Converged Systems (Vblock Systems, VxBlock Systems, or VxRack Systems).
• Converged Systems that can be expanded with Converged Technology Extensions.
• Vscale Fabric, which delivers scalable LAN and SAN switching for connectivity between Vscale Fabric Technology Extensions and Converged Systems.
  — Vscale Fabric incorporates a scalable spine/leaf LAN architecture with optional software-defined networking (SDN) and a core/edge SAN architecture.
  — Vscale Fabric Technology Extensions are modular containers that provide connectivity for compute, storage, and data protection resources consumed by other resources attached to the Vscale Fabric.
• The Vscale Border Technology Connect provides external intranet and internet connectivity for external-facing routers, firewalls, and other edge functions to communicate with the Vscale Architecture.
• The Vscale Open Technology Connect enables an organization to integrate non-Dell EMC resources into the Vscale Architecture to provide investment protection and flexibility.
• Vscale Management Platform is a scalable management platform that hosts core management applications. It can also be extended to support an IT organization's management and orchestration stacks, as well as other applications such as logging and SIEM.
By connecting multiple modular components to a scalable network fabric, a wide variety of application requirements at any scale can be accommodated. Vscale Architecture offers a high degree of flexibility and scale that complements Converged Systems with the following benefits:

• Dell EMC engineered and validated architecture.
• Dell EMC lifecycle management and support.
• Dell EMC Release Certification Matrix (RCM) certification for all Dell EMC-provided components, including LAN and SAN fabrics.
• Uplinks to Vscale Fabric for unified storage access and file services.
• Out-of-band management networks.
• VMP as a scalable platform used to host Converged Systems element managers for all infrastructure components in the data center. VMP can be extended to support Dell EMC ecosystem management workloads and other management applications.
• Logical build guidelines that include prescriptive connectivity and management.
• Vision Intelligent Operations software for managing the health, Dell EMC RCM compliance, and security compliance of Converged Systems across the Vscale Architecture.
The following illustration shows how Vscale Architecture combines the modular components:
Vscale Fabric
Vscale Fabric consists of two discrete, switched LAN and SAN networks. The Vscale LAN Fabric contains
spine and leaf switches that provide Ethernet and IP connectivity. The spine switches are high-throughput
switches that forward traffic between the leaf switches. The leaf switches provide a network connection
point for resources in the Vscale Fabric Technology Extensions and Converged Systems to connect to
the spine switches.
The following illustration shows a Vscale LAN Fabric where the spine, leaf, and Vscale Fabric Technology
Extensions connect:
The Vscale SAN Fabric is a flexible, core-edge or edge-core-edge architecture that uses FC SAN
switches for storage connectivity. The core switches are high throughput, director-class switches with high
port density that provide connectivity for storage arrays and edge switches. The edge switches provide a
SAN connection point for compute and storage resources in Vscale Fabric Technology Extensions and
Converged Systems.
The following illustration shows how the SAN core and Vscale Fabric Technology Extensions connect:
Vscale Fabric Technology Extensions
Vscale Fabric Technology Extensions are modular containers that connect Dell EMC compute, storage,
and data protection resources to the Vscale Fabric. These resources can be logically configured to form
logical systems or consumed by other systems attached to the Vscale Fabric.
Vscale Fabric Technology Extensions contain the following components:

• Intelligent physical infrastructure is a 42 RU enclosure that includes an intelligent gateway that gathers information about power, thermals, security, alerts, and all components in the physical cabinet.
• Two LAN leaf switches (optional if LAN connectivity is not required).
• Two SAN edge switches (optional if SAN connectivity is not required).
• One or more Dell EMC management switches.
Vscale Fabric Technology Extensions can be populated with storage, compute or data protection
resources, or a combination of resources to meet a particular operational model or application use cases.
For example, a Vscale Fabric Technology Extension can be configured with storage or compute only or
with both storage and data protection resources.
Technology connects

The open technology connect is a special use case of Vscale Fabric Technology Extensions that can include resource types that Dell EMC does not support. The open technology connect is used solely for non-Dell EMC, third-party resources and provides connectivity for non-Dell EMC supplied, third-party IT infrastructure, which can include compute servers, storage, or data protection resources, and provides VXLAN connectivity.

An open technology connect can be populated with any third-party, non-Dell EMC supplied assets. This enables organizations to integrate technology assets that have not been depreciated, or that have strategic value, but cannot be replatformed to x86 technology. An open technology connect cannot be used for Dell EMC-provided systems or resources.

The open technology connect contains the following components:

• Intelligent physical infrastructure
• Two LAN leaf switches
• Two SAN edge switches
• One or more Dell EMC management switches

The border technology connect contains similar components as the open technology connect, but is used to provide external connectivity for customer routers and other edge devices such as firewalls, application delivery controllers, or intrusion detection and prevention appliances. SAN edge switches are not required for a border technology connect.
Management
Vscale Management Platform is used to manage all components in a Vscale Architecture deployment.
VMP is a scalable management platform that hosts core Dell EMC management workloads such as
element managers, VMware vCenter Servers, and Secure Remote Support. The VMP can also be extended to host management workloads that Dell EMC does not provide, such as management and orchestration stacks or logging.
Vision Intelligent Operations, contained in the VMP, simplifies management and operations of converged
infrastructure and helps to ensure that all shared resources in a Vscale Architecture data center are
compatible and available for applications to consume.
Refer to the appropriate Architecture Overview for more information about your system.
Related information
Connecting system resources (see page 10)
Architecture resources (see page 20)
Hosting management applications (see page 33)
Connecting system resources
Vscale Fabric provides high-performance LAN and SAN switched networks for connectivity between
Vscale Architecture system resources. The Vscale LAN Fabric is a spine and leaf architecture and the
Vscale SAN Fabric is a core and edge architecture.
LAN architecture
The LAN network contains the spine and leaf architecture that provides a high-performance, routed underlay network for transparent VXLAN overlay connectivity between system resources attached to the Vscale Fabric Technology Extension leaf switches. LAN connectivity consists of an equal number of connections from each leaf switch to all spine switches, using a minimum of one and a maximum of four 40 Gbps connections per spine, depending on 40 GbE capacity and the number of deployed spine switches.
SAN architecture
The SAN network is a flexible core-edge or edge-core-edge architecture. Two separate Vscale SAN
Fabrics provide high availability. Compute resources only connect to SAN edge switches. Storage and
data protection resources may connect to edge or core switches, which is considered a collapsed core
design.
Connectivity
Vscale Fabric combines Vscale LAN Fabric spine switches and Vscale SAN Fabric core switches for
connectivity between the Converged Systems and Vscale Fabric Technology Extensions.
The LAN leaf switches and SAN edge switches combine to connect resources in a Vscale Fabric
Technology Extension to the Vscale LAN Fabric and Vscale SAN Fabric.
Important: To simplify traffic management and scaling in the border technology connect, only customer
upstream core routers and switches should connect to the border leaf switches.
Vscale LAN Fabric connectivity
Vscale LAN Fabric is a spine and leaf architecture that adheres to the following connectivity rules (a minimal validation sketch follows the list):

• Vscale Fabric leaf switches connect to the spine switches.
• Vscale Fabric leaf switches cannot be directly connected to other leaf switches.
• Vscale Fabric spine switches cannot be directly connected to other spine switches.
• Hosts, such as servers, IP storage (NAS), or routers, cannot be directly connected to the spine switches.
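These rules lend themselves to a simple programmatic check. The following Python sketch is illustrative only and is not part of any Dell EMC tooling; the switch names, roles, and links are hypothetical inputs supplied by the reader.

```python
# Sketch: validate a proposed set of Vscale LAN Fabric links against the
# spine/leaf connectivity rules listed above (illustrative only).

def validate_lan_links(links, roles):
    """links: iterable of (switch_a, switch_b); roles: dict switch -> 'spine'|'leaf'|'host'."""
    errors = []
    for a, b in links:
        pair = {roles[a], roles[b]}
        if pair == {"leaf"}:
            errors.append(f"{a}-{b}: leaf switches cannot connect directly to other leaf switches")
        elif pair == {"spine"}:
            errors.append(f"{a}-{b}: spine switches cannot connect directly to other spine switches")
        elif "host" in pair and "spine" in pair:
            errors.append(f"{a}-{b}: hosts cannot connect directly to spine switches")
    # Every leaf must have uplinks to every spine.
    spines = {s for s, r in roles.items() if r == "spine"}
    for leaf in (s for s, r in roles.items() if r == "leaf"):
        attached = {b if a == leaf else a for a, b in links if leaf in (a, b)}
        missing = spines - attached
        if missing:
            errors.append(f"{leaf}: missing uplinks to spines {sorted(missing)}")
    return errors

roles = {"spine1": "spine", "spine2": "spine", "leaf1a": "leaf", "leaf1b": "leaf", "nas1": "host"}
links = [("leaf1a", "spine1"), ("leaf1a", "spine2"), ("leaf1b", "spine1"), ("nas1", "spine2")]
print(validate_lan_links(links, roles))  # flags the host-to-spine link and leaf1b's missing uplink
```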
Vscale SAN Fabric connectivity
The Vscale SAN Fabric is a classic redundant SAN A/B architecture with the following connectivity:

• Edge switches do not connect directly to other edge switches.
• Edge switches can provide connectivity for the following:
  — Storage arrays
  — Compute resources
  — Data protection resources
• Core switches only connect to other core switches in the same Vscale SAN Fabric.
• Core switches can provide connectivity for the following:
  — Storage arrays
  — Data protection resources
  — Other core switches, for future meshing at the core for expansion
  — Edge switches
Physical components
Vscale Fabric consists of LAN and SAN switching that is used to interconnect Converged Systems and
resources within Vscale Fabric Technology Extensions.
LAN switches
The following table provides the LAN switches that are supported in the Vscale Fabric:
Component | Spine | Leaf
Cisco Nexus 3172TQ Switch | | X
Cisco Nexus 9332PQ, 32 port, QSFP+ based switch | X (NX-OS) | X (ACI for VxRack Systems only)
Cisco Nexus 9336PQ ACI Spine Switch | X |
Cisco Nexus 9396PX Switch with M12PQ, 12 port, QSFP+ uplink card | | X (Default)
Cisco Nexus 9504 Switch with N9K-X9636PQ, 36 port, QSFP+ line cards, NX-OS | X |
Cisco Nexus 9508 Switch with N9K-X9636PQ, 36 port, QSFP+ line cards, NX-OS | X (Default) |
Cisco Nexus 9504 Switch with N9K-X9736PQ, 36 port, QSFP+ line cards, ACI | X |
Cisco Nexus 9508 Switch with N9K-X9736PQ, 36 port, QSFP+ line cards, ACI | X (Default) |
SAN switches
The following table provides the SAN switches that are supported in the Vscale Fabric:
Component | Core | Edge
Cisco MDS 9396S 16G Multilayer Fabric Switch | | X
Cisco MDS 9706 Multilayer Director, 48 port, 16 Gb FC Module | X | X
Cisco MDS 9710 Multilayer Director, 48 port, 16 Gb FC Module | X (Default) | X
Cisco MDS 9148S Multilayer Fabric Switch | | X (Default)
Cisco MDS 9148 Multilayer Fabric Switch | | X
LAN architecture
The LAN contains the spine and leaf architecture providing high performance, non-blocking, multi-stage
switched network connectivity between system resources attached to the Vscale Fabric Technology
Extension leaf switches.
Access to the Vscale LAN Fabric is provided through a leaf switch connected to all spine switches. All
bandwidth must be the same between each spine and leaf switch in the fabric. Any increase in bandwidth
must be performed across all leaf switches. For example, if one leaf switch requires additional bandwidth
to support an application and an additional 40 GbE uplink per spine is provisioned, all leaf switches connected to the Vscale LAN Fabric must be upgraded with the same additional uplink.
The Vscale LAN Fabric underlay is a Layer 3, routed network supporting overlay VXLANs that use Multiprotocol Border Gateway Protocol (MP-BGP) extensions for Ethernet VPN (EVPN) distributed control plane operations. MP-BGP supports the EVPN address family, which advertises Layer 2 reachability information between VXLAN endpoints. If MP-BGP is configured, route reflectors (RR) are used to simplify configuration as the Vscale Fabric scales in size.

When a host VM attaches to a leaf, the leaf uses MP-BGP EVPN to advertise the MAC address and IP address of the host to the other leaf switches. As hosts move between the Converged Systems and Vscale Fabric Technology Extensions, MP-BGP updates reachability information across leaf switches to ensure forwarding information is current and correct.
The following illustration shows how EVPN is used:
Vscale LAN Fabric uses open shortest path first (OSPF) routing protocol to establish and maintain the
routing topology, and provides equal-cost multi-pathing (ECMP) for VXLAN tunnel end point (VTEP)
address reachability. Protocols such as intermediate system to intermediate system (IS-IS) protocol can
be used as a fabric underlay VTEP address routing protocol. If VXLAN is deployed, the default is to use
the distributed control plane features of EVPN for anycast gateway and head-end replication.
The following illustration shows the difference in connectivity between the IP Layer 3 and bridged Layer 2
host:
The following illustration shows the transport network for scalability and border leaf connectivity:
Multiprotocol internal BGP (MP-iBGP) carries EVPN, the control plane protocol used to distribute Layer 2 network layer reachability information (NLRI) between switches. The leaf provides its hosts Layer 2 connectivity and VXLAN functionality to extend Layer 2 domains across the Layer 3 fabric. In operation, each leaf switch uses OSPF to advertise its network virtualization endpoint (NVE) VTEP IP address to every other leaf switch connected to the fabric to exchange routes and form a full point-to-point mesh network.
LAN configurations
LAN configurations are based on the configuration of the spine and leaf switches.
Spine switch configurations
Each leaf switch in the Vscale LAN Fabric has a minimum of one 40 Gbps uplink to each spine switch. The maximum number of spine switches depends on the number of 40 GbE uplink ports on each leaf switch and the number of uplinks provisioned to each spine. For example, if a leaf switch has 12 uplink ports and two uplinks per spine, the maximum number of spine switches is six. Each Vscale Fabric Technology Extension has two leaf switches, which yields 960 Gbps of bi-sectional bandwidth between any two Vscale Fabric Technology Extensions.
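The following Python sketch restates the uplink arithmetic in the preceding paragraph; it is illustrative only, and assumes 40 Gbps uplinks and two leaf switches per technology extension as stated in the text.

```python
# Sketch of the spine-count and bandwidth arithmetic described above (not a sizing tool).

UPLINK_GBPS = 40

def max_spine_switches(leaf_uplink_ports, uplinks_per_spine):
    # A leaf must reach every spine, so its uplink ports bound the spine count.
    return leaf_uplink_ports // uplinks_per_spine

def bisectional_bandwidth_gbps(leaf_uplink_ports, leaves_per_extension=2):
    # All uplinks on both leaf switches contribute to extension-to-extension bandwidth.
    return leaves_per_extension * leaf_uplink_ports * UPLINK_GBPS

# Example from the text: a 12-port leaf with two uplinks per spine supports six spines,
# and two fully uplinked leaf switches yield 960 Gbps between any two extensions.
print(max_spine_switches(12, 2))         # 6
print(bisectional_bandwidth_gbps(12))    # 960
```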
Important: Dell EMC recommends deploying a minimum of three and a maximum of six spine switches
for Vscale LAN Fabric.
The number of Vscale Fabric Technology Extensions that can connect to a Vscale LAN Fabric is limited
by a number of factors such as VXLAN tunnel endpoint (VTEP) interface limitations or port density in the
spine. Vscale Fabric Technology Extensions use two connections per spine switch if one uplink per leaf
switch is connected, and four connections if two uplinks per leaf switch are connected. All Cisco Nexus
9500 Series Switches have a mandatory base configuration such as PDUs, system fabrics, and fans, for
line rate operation and fault tolerance.
The following table provides base configurations and limitations for spine switches:
Cisco Nexus 9332PQ:
• 32 port, QSFP+ fixed port switch
• 16 blocks: one connection per leaf
• 8 blocks: two connections per leaf

Cisco Nexus 9336PQ:
• 36 port, QSFP+ fixed port switch
• 18 blocks: one connection per leaf
• 9 blocks: two connections per leaf

Cisco Nexus 9504:
• 4 slot chassis
• 6 fabric modules
• 2 supervisors
• 1 Cisco Nexus 9636PQ 36 port, 40 GbE, QSFP+ line card (NX-OS)
• 1 Cisco Nexus 9736PQ 36 port, 40 GbE, QSFP+ line card (ACI)
• 72 blocks: one connection per leaf
• 36 blocks: two connections per leaf

Cisco Nexus 9508:
• 8 slot chassis
• 6 fabric modules
• 2 Sup-A supervisors
• 1 Cisco Nexus 9636PQ 36 port, 40 GbE, QSFP+ line card (NX-OS)
• 1 Cisco Nexus 9736PQ 36 port, 40 GbE, QSFP+ line card (ACI)
• 144 blocks: one connection per leaf
• 72 blocks: two connections per leaf
The number of supported Vscale Fabric Technology Extensions is based on the number of leaf switch
uplinks and spine switch port density. For example, if you deploy three Cisco Nexus 9332PQ Switches, each with 32 40 GbE ports, and each leaf switch connects to the spine switches with one 40 GbE uplink, you can
deploy a maximum of fifteen Converged Systems or Vscale Fabric Technology Extensions and one
border technology connect. If more are required, use larger spine switches such as the Cisco Nexus 9504
or Cisco Nexus 9508 Switch.
Important: The Vscale LAN Fabric supports up to 128 Converged Systems or Vscale Fabric Technology Extensions due to current VTEP limitations.
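A short Python sketch of this capacity arithmetic follows; it is illustrative only and simply restates the figures above (two leaf switches per Converged System or extension, the Cisco Nexus 9332PQ example, and the 256-VTEP fabric limit).

```python
# Sketch of the extension-count arithmetic described above (illustrative only).

def extensions_by_spine_ports(ports_per_spine, uplinks_per_leaf, leaves_per_extension=2):
    # Each attached system consumes (uplinks per leaf x leaves) ports on every spine.
    return ports_per_spine // (uplinks_per_leaf * leaves_per_extension)

def extensions_by_vtep_limit(vteps_available=256, vteps_per_extension=2):
    return vteps_available // vteps_per_extension

# Cisco Nexus 9332PQ example: 32 ports per spine with one uplink per leaf allows
# 16 attachments, that is 15 Converged Systems or extensions plus one border
# technology connect; the VTEP limit caps the fabric at 128 regardless.
spine_limited = extensions_by_spine_ports(ports_per_spine=32, uplinks_per_leaf=1)
print(spine_limited, spine_limited - 1)   # 16 attachments, 15 plus the border
print(extensions_by_vtep_limit())         # 128
```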
Leaf switch configurations
Each Vscale Fabric Technology Extension contains two leaf switches, each connected to every spine switch. Every leaf switch has a minimum of one and a maximum of three uplinks to each spine switch. All
Vscale Fabric Technology Extensions must have the same number of links between the leaf switches and
the spine.
The Cisco Nexus 9396PX Switch has a 12 port, 40 GbE, QSFP+ uplink module and provides the following features:

• 48 1/10 GbE, SFP+ non-blocking ports
• 12 40 GbE, QSFP+ non-blocking uplink ports, or optionally six 40 GbE, QSFP+ non-blocking uplink ports
The Cisco Nexus 9332PQ Switch is only supported on VxRack Systems.
VXLANs
The overlay LAN architecture is based on VXLAN enabling the extension of Layer 2, broadcast domains
across arbitrary, Layer 3 routed topologies.
The Vscale LAN Fabric provides:

• An OSPF-routed Layer 3 backbone for VXLAN tunnel end point (VTEP) reachability
• A Multiprotocol BGP (MP-BGP) Ethernet VPN (EVPN) distributed control plane
• VXLAN overlay networking with anycast gateway and unicast head-end replication for multi-tenant operations
Networking
Leaf switches can be configured to use VXLAN to extend Layer 2 domains across the Layer 3 fabric in each tenant overlay, using MP-BGP EVPN as the distributed control plane. Using the Cisco Nexus 9000 Series Switches as leaf switches, all VXLAN operations are performed in the switch hardware application-specific integrated circuits (ASICs) to achieve line-rate performance for all VXLAN-enabled traffic.

The spine is used only for high speed transport and is never involved in the overlay VXLAN encapsulation and de-capsulation operations.
Each Vscale Fabric Technology Extension has a pair of leaf switches configured to use an anycast IP address and shared anycast MAC address for each VXLAN-enabled VLAN SVI interface. Each Vscale Fabric Technology Extension leaf switch pair uses a proxy VTEP address, also known as the virtual port channel (vPC) shared address, along with the individual switch VTEP addresses for vPC load balancing and redundancy for end-host connectivity. The IP address is advertised within the open shortest path first (OSPF) routing table.
The anycast IP address provides redundancy and efficient load distribution and routing. All routing is local
to each system at their respective top-of-rack switches to optimize both east/west and north/south traffic
flows. To advertise and maintain Layer 2 forwarding information, MP-BGP with EVPN is used as a control plane among the leaf switches in the underlay network.
As hosts connect to the Vscale Fabric Technology Extension, the leaf switch learns the MAC address and
IP address of the host. These addresses are inserted in network layer reachability information (NLRI)
advertised by MP-BGP to all of the leaf switches connected to the Vscale LAN Fabric.
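The following Python sketch is a toy model of the behavior just described, not NX-OS configuration or a real routing implementation: when a host attaches, its leaf learns the MAC and IP addresses and the reachability information is distributed to every other leaf. All names are illustrative.

```python
# Toy model of EVPN host advertisement across leaf switches (illustrative only).

class Leaf:
    def __init__(self, name):
        self.name = name
        self.host_table = {}          # MAC -> (IP, name of leaf where the host lives)

class EvpnFabric:
    def __init__(self, leaves):
        self.leaves = leaves

    def host_attached(self, leaf, mac, ip):
        # The local leaf learns the host, then the route is distributed to all leaves,
        # so any leaf can forward toward the VTEP of the leaf that owns the host.
        for peer in self.leaves:
            peer.host_table[mac] = (ip, leaf.name)

leaf_a, leaf_b = Leaf("leaf-a"), Leaf("leaf-b")
fabric = EvpnFabric([leaf_a, leaf_b])
fabric.host_attached(leaf_a, "00:50:56:aa:bb:cc", "10.1.1.10")
print(leaf_b.host_table)   # leaf-b now knows the host sits behind leaf-a
```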
The leaf switch supports routing frames between different VXLANs and switching frames inside the same VXLAN-enabled VLAN at line rate, within the same tenant overlay. All VXLAN operations, including
encapsulation, de-encapsulation, bridging, and routing are transparent to the host systems and the traffic
generated by the systems. No additional configuration is required beyond the setup of typical network
connectivity for the hosts.
This differs from the external routing where the destination of the packet is not inside the tenant overlay,
such as traffic destined for the internet, non-VXLAN corporate resources, or for resources located in a
different tenant overlay. Traffic exits the VXLAN fabric through the border leaf switches routed at the
customer core.
SAN architecture
Vscale SAN Fabric provides FC connectivity that enables Converged Systems and Vscale Fabric
Technology Extensions resources to be interconnected to support applications.
Uplink support
Each edge switch connects to the core switch using eight or 16 uplinks. The Cisco MDS 9148S Multilayer
Fabric Switch is a fixed, form-factor switch. The Cisco MDS 9706 Multilayer Director and Cisco MDS 9710
Multilayer Director support a 48 port, FC port module with 48 line-rate 16 Gbps FC ports (with at least
three fabric modules).
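A minimal Python sketch of the edge-to-core uplink arithmetic follows; it is illustrative only and assumes the eight or 16 uplinks of 16 Gbps each mentioned above.

```python
# Sketch of edge-to-core uplink bandwidth (illustrative only).

def uplink_bandwidth_gbps(uplinks, link_speed_gbps=16):
    return uplinks * link_speed_gbps

print(uplink_bandwidth_gbps(8))    # 128 Gbps per edge switch
print(uplink_bandwidth_gbps(16))   # 256 Gbps per edge switch
```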
All FC ports on the 48 FC port modules should be populated with a 16 Gbps small form-factor pluggable (SFP) transceiver. Mixing 8 Gbps and 16 Gbps FC connections on a Vscale SAN Fabric is not recommended.
An enterprise license is not required for most Vscale Fabric Technology Extensions. A full Cisco Data
Center Network Manager license is not required.
Topology
The SAN network architecture is defined by two-tier and three-tier topologies. The number of switch hops
between the host and the storage for a given topology is as follows:
• Two-tier design: fabric interconnect -> edge -> core -> storage = 1 hop
• Three-tier design: fabric interconnect -> edge -> core -> edge -> array = 2 hops
Converged Systems with Cisco MDS 9500 Series Switches should not connect to the
Vscale SAN Fabric because the SAN directors can severely limit the growth capacity of the
SAN.
The two-tier topology is a core-edge/collapsed core-edge topology that exists when storage arrays are connected to the core and the compute servers are attached to the host edge switch. The core-edge/collapsed core-edge topology is designed as follows:

• All servers are connected to edge SAN switches.
• Storage arrays may be directly connected to the core switches.
• VPLEX, RecoverPoint, and other service nodes must be connected to the core switches acting as intermediary devices. Placing these types of capabilities at the core provides the most bandwidth-efficient interception point if multi-array replication is required.
The three-tier topology is an edge-core-edge topology. This topology exists when the storage array is
attached to the storage edge switch, and the servers are attached to the host edge switch.
The edge-core-edge topology is designed as follows:

• Edge switches can be a pair of Cisco MDS 9148S Multilayer Fabric Switches or Cisco MDS 9700 Series Switches.
• All storage and servers are connected to edge switches.
• Storage arrays are not connected to the core switches.
• VPLEX, RecoverPoint, and other service nodes can directly connect to the core switches acting as intermediary devices. Placing these capabilities at the core provides the most bandwidth-efficient interception point if multi-array replication is required.
SAN switches
The core switch connects Converged Systems and Vscale Fabric Technology Extensions, while an edge
switch allows resources to be accessed within the Vscale Fabric.
Core switches
The following requirements apply:

• Full high availability with field-replaceable components, such as supervisors, fabric modules, fans, and power supplies
• All ports must be line-rate capable
• Support for 16 Gbps or higher speeds at line rate
• Inter-VSAN routing (IVR) capable
• Smart-zoning capable
• Support for N_Port virtualization (NPV)
• Support for port channeling
Edge switches
The following table shows the connection topologies for SAN connections within Vscale Fabric:
The accessing resource is the source and the sharing resource is the target.
Model | Description
Local | Source and targets are within the same network.
Core-edge | Target is connected at the SAN core, and the source is located one edge link away.
Edge-core-edge | Target is connected to an edge separate from the source. Traffic must traverse through core switches to arrive at the target.
Edge switches connect to the SAN core switch using eight or 16 uplinks.
Architecture resources
Vscale Fabric Technology Extensions connect directly to the Vscale Fabric and contain one or more
compute, storage, and/or data protection resources.
Every resource, except non-Dell EMC third-party resources, can be deployed in a single Vscale Fabric Technology Extension.
Important: Other direct connections for the Vscale Fabric are not supported.
The following scalability limits are imposed by the physical port availability for a Vscale Fabric Technology Extension (a small sketch of the arithmetic follows the table):

Area | Limit | Description
SAN | 77 FC switches | Cisco SAN fabric supports a maximum of 80 switches. A maximum of 3 SAN core switches are supported per fabric, which limits the edge switches to 77 per SAN fabric.
SAN | 10,000 world-wide port names (WWPN), including storage and host ports | Cisco SAN fabric supports a maximum of 10,000 WWPNs in the global name server database. This increases to 20,000 when using Cisco MDS 9700 Series switches exclusively in the SAN fabric.
LAN | 128 Vscale Fabric Technology Extensions | The number of VXLAN tunnel end points (VTEPs) available within a VXLAN fabric is 256. Each Vscale Fabric Technology Extension requires two VTEPs.
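The sketch below restates how the SAN limit in the table is derived; it is illustrative only, and the LAN limit of 128 extensions follows from the same VTEP arithmetic shown earlier in this document.

```python
# Sketch of the SAN switch-count limit (illustrative only).

SAN_FABRIC_SWITCH_LIMIT = 80   # maximum switches in a Cisco SAN fabric
SAN_CORE_SWITCHES = 3          # maximum core switches per fabric

print(SAN_FABRIC_SWITCH_LIMIT - SAN_CORE_SWITCHES)   # 77 edge switches per SAN fabric
```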
Dell EMC Sales and Professional Services work with Dell EMC Manufacturing to gather the required
information to configure Vscale Fabric Technology Extensions in the factory. This requires modifications
to the Logical Configuration Survey (LCS). In some cases, only a minimal configuration for Release
Certification Matrix (RCM) will be done at the factory. The RCM provides a list of the certified versions of
components for Vscale Fabric Technology Extensions.
Physical components
The type of Vscale Fabric Technology Extension determines the type of Cisco Nexus 9300 or 9500 leaf
switches required for LAN connectivity and Cisco MDS switches for SAN connectivity.
The following illustration shows a sample configuration:
Cisco Nexus 3172TQ Switches connected to a pair of Cisco Nexus 3164Q aggregation switches provide
out-of-band management networking.
Technology connects
The Vscale Border Technology Connect provides external access to Vscale Architecture as a specialized
function within Vscale Fabric. The Vscale Open Technology Connect contains third-party, non-Dell EMC resources that are provided to the Vscale Architecture.
Vscale Border Technology Connect
The Vscale Border Technology Connect includes two leaf switches with no end-host connectivity. Only
external connectivity is provided in and out of the Vscale Architecture through the border leaf switches.
Components
The following table provides a description of the Vscale Border Technology Connect components:
Component | Provides
Intelligent Physical Infrastructure (IPI) Appliance | 42 RU enclosure with intelligent PDUs and environmental monitoring.
Cisco Nexus 9396PX Switch | Ethernet leaf switch for external network access.
Cisco Nexus 3172TQ Switch | Management switch.
Connectivity
The border resources connect directly to the following components:

• Vscale LAN Fabric (Ethernet spine)
• External routers or switches for external network connectivity

Border resources connect to the Vscale Fabric in the same manner as the other leaf switches. Connections to external networking components vary by deployment.
External connectivity
Vscale Fabric uses border resources to provide external northbound connectivity only. The border leaf
switches peer with the external routers or switches using supported dynamic routing protocols. The leaf
switches can be used to connect other edge devices such as firewalls, application delivery controllers,
intrusion detection and prevention appliances. Any device connecting to the border leaf switches must
support a dynamic routing protocol, have sufficient Layer 3 interfaces, and contain virtual routing and
forwarding (VRF) support for multi-tenancy.
The border leaf switches require dedicated, symmetrical, and redundant Layer 3 interfaces in each tenant
VRF towards each external northbound switch or router deployed in the fabric environment.
Important: Extra precaution should be used for the external router when connecting multiple VXLAN
overlay environments using only the default VRF for exchanging routes.
The border does not require the vn-segment ID creation for all VXLAN enabled VLANs because there is
no end-host connectivity. If absolute isolation of each tenant is required in a multi-tenant environment,
each VXLAN overlay Layer 3 interface must be connected to a Layer 3 port residing in dedicated VRFs to
ensure isolation. Failure to place links in different VRFs in multi-tenant environments with isolation
requirements may result in all routes being installed into a single routing table. This could allow isolated
tenants to access each other.
Vscale Open Technology Connect
The following conditions apply:

• Dell EMC resources cannot connect to an open technology connect.
• Non-Dell EMC, third-party resources can be added to the Vscale Fabric for LAN and SAN.
• Dell EMC does not support components connected to resources beyond the Ethernet or FC port demarcation point. This applies even if the connected component firmware is compatible with a published release certification matrix (RCM).
Components
The following table provides a description of open components:
Component | Provides
IPI Appliance | 42 RU enclosure with intelligent PDUs and environmental monitoring.
Cisco Nexus 9396PX Switches | Ethernet leaf switch required for LAN access.
Cisco MDS 9148S Switches, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Directors (large environments) | FC edge switch required for FC block access.
Third-party servers | Servers
Third-party storage arrays | Storage
Cisco Nexus 3172TQ Switch | Management switch
Connectivity
Third-party resources require a separate VSAN and inter-VSAN Routing (IVR) to provide or consume
storage resources within the Vscale Fabric.
The following connection limits apply:

• External SAN switches cannot connect to an open technology connect.
• Only end devices, such as storage array front-end ports, can connect into the open technology connect.
• External network switches providing Layer 3 services cannot connect to a third-party resource.
Data flow
The following table provides possible data flows for open resources between the source and destination
devices:
Device | <->Edge<-> | <->Core<-> | <->Edge<-> | Device | Permitted | Comments
Host | Open-edge | Core | | Array | Y | Inter-VSAN routing (IVR) required.
Host | Open-edge | Core | Edge | Array | Y | IVR required.
Host | Open-edge | | | Array | N | Devices are not supported.
Host | Open-edge | Core | | VPLEX | Y | IVR required.
Host | Open-edge | Core | Edge | VPLEX | N | Too many hops.
Array | Open-edge | Core | | VPLEX | N | Interoperability of the end array is not managed.
Array | Open-edge | Core | Edge | VPLEX | N | Too many hops.
Array | Open-edge | Core | Edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores from open-edge connected arrays.
Array | Open-edge | Core | Converged System-edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores from open-edge connected arrays.
Array | Open-edge | Core | | RPA | N |
Array | Open-edge | Core | Converged System-edge | VPLEX | N |
Compute resources
Compute resources can be added to a Vscale Fabric Technology Extension to provide additional
compute resources within the Vscale Fabric.
Components
The following table provides a description of compute components:
Component | Provides
Cisco Nexus 9396PX Switches | Ethernet leaf switch for LAN access.
Cisco MDS 9148S Switches, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Directors | FC edge switch for FC block access.
Cisco UCS 62xxUP fabric interconnects, Cisco UCS 5108 Blade Server Chassis, and Cisco UCS 22xxXP fabric extenders | Compute connectivity.
Cisco UCS B-Series Blade Servers | Optional Cisco UCS B200 M4, B260 M4, B420 M3, or B460 M4 blades.
Cisco UCS C-Series Servers | Optional Cisco UCS C220 and C240 M4 servers.
Cisco Nexus 3172TQ Switch | Management switch.
Compute resources connect through fabric interconnects to the Ethernet leaf and SAN edge switches.
The Vscale Fabric Technology Extension can contain up to four Cisco UCS domains, depending on port
availability on the Ethernet and SAN switches.
Disjoint Layer 2
In the Disjoint Layer 2 configuration, traffic is split between two or more networks at the fabric
interconnect. This enables Cisco UCS servers in a Converged System to connect to two or more discrete
Ethernet clouds. Upstream Disjoint Layer 2 networks allow two or more Ethernet clouds to be accessed
by servers or VMs located in the same Cisco UCS domain.
Connectivity
The following table provides connection models for compute resources:
Connection | Description
Server-to-FI | Cisco UCS C-Series Servers use the same connection models as the Cisco UCS Technology Extension.
FEX-to-FI | Cisco UCS C-Series Servers with FEX connections use the same connection models as the Cisco UCS Technology Extension. The Cisco UCS 22xxXP FEX offers two or four links to the FI; the Cisco UCS 2208XP FEX offers eight.
FI-to-Cisco Nexus | The Cisco UCS 6248UP Fabric Interconnect has eight Ethernet links. The Cisco UCS 6296UP defaults to eight Ethernet links and can expand to 16.
FI-to-Cisco MDS | The Cisco UCS 6248UP Fabric Interconnect defaults to four FC links and can expand to eight. The Cisco UCS 6296UP defaults to eight FC links and can expand to 16.
Data flows
The following table provides possible data flows for compute resources between the source and
destination devices:
Device | <->Edge<-> | <->Core<-> | <->Edge<-> | Device | Permitted | Comments
Host | Edge | | | Array | Y | Uses production VSAN.
Host | Edge | Core | | Array | Y | Uses production VSAN.
Host | Edge | Core | Edge | Array | Y | Uses production VSAN.
Host | Edge | Core | Open-edge | Array | Y | Uses an Inter-VSAN routing (IVR) VSAN. Cannot boot from remote SAN device over IVR.
Host | Edge | Core | Converged System-edge | Array | Y | Assumes there is no domain conflict in the merging VSAN and the switch is using the default production VSAN.
Host | Edge | | | VPLEX | Y | Uses production VSAN. Requires the array for VPLEX to be in the same edge.
Host | Edge | Core | | VPLEX | Y | Requires the array for VPLEX to be in the same core switch as the array ports. Can use production VSANs or IVR.
Storage resources
Storage resources can be added to a Vscale Fabric Technology Extension to provide additional storage
resources within the Vscale Fabric.
Dell EMC storage
The storage resource has the following characteristics:

• Contains only Dell EMC-certified components
• Prescriptive connectivity, Intelligent Physical Infrastructure (IPI) Appliance, and management
• System resource operating environment is based on the Vscale Fabric Release Certification Matrix (RCM)

Third-party storage

Third-party storage resources have the following characteristics:

• Third-party compute and storage resources do not have RCM certification
• Components sourced directly from Dell EMC partners do not have RCM certification
• External LAN or SAN resources are not permitted
• Vision Intelligent Operations does not discover and manage any device below the fabric
Components
The following table provides the available storage components:
Component | Provides
Cisco Nexus 9396PX Switches | Optional Ethernet leaf switch required for file access.
Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Directors | Optional FC edge switch for FC block access.
Isilon (per the technology extension for Isilon), Unity, VMAX, VMAX3, VMAX3 All Flash Array (AFA), VNX, or XtremIO | One or more storage arrays.
Connectivity
Storage arrays connect through the core switches for FC block access and through Cisco Nexus 9396PX
leaf switches for file access. Dell EMC recommends connecting all storage arrays shared between multiple Vscale Fabric Technology Extensions directly to the core for block access.
The following table provides connection models:

Connection | Description
Edge-to-core | Edge switches connect to the core using a minimum of eight 16 Gb FC ports in a single port channel that can be expanded to 16 ports. Inter-VSAN routing (IVR) is supported, but not required.
Unity | Uses the same connectivity standards as the Unity storage arrays within the VxBlock and Vblock Systems 350.
VNX | Uses the same connectivity standards as the VNX storage arrays within the VxBlock and Vblock Systems 340.
VMAX | Uses the same connectivity standards as the VMAX storage arrays within the Vblock System 720.
VMAX3, VMAX3 AFA | Uses the same connectivity standards as the VMAX storage arrays within the VxBlock and Vblock Systems 740.
XtremIO | Uses the same connectivity standards as the XtremIO storage arrays within the VxBlock and Vblock Systems 540.

Storage port connections use dynamic port mapping and FC.
Data flows
The following table provides possible data flows for storage resources between the source and
destination devices for storage arrays:
<->Edge<-> | <->Core<-> | <->Edge<-> | Device | Permitted | Comments
Edge | | | Host | Y | Uses production VSAN.
Edge | Core | Edge | Host | Y | Uses production VSAN.
| Core | Edge | Host | Y | Uses production VSAN.
Edge | | | EMC VPLEX | Y | Uses production VSAN.
| Core | | EMC VPLEX | Y | Uses production VSAN.
Edge | Core | Open-edge | Host | Y | IVR required.
| Core | Open-edge | Host | Y | IVR required.
Edge | | | EMC RPA | | Uses production VSAN.
| Core | | EMC RPA | | Uses production VSAN.
Edge | Core | Dell EMC System-edge | Host | | No technical reason to block. Collisions with world-wide node names (WWNN)/WWPNs require IVR, tier 2, ascertained prior to sale.
| Core | Dell EMC System-edge | Host | | No technical reason to block. Collisions with WWNN/WWPNs require IVR, tier 2, ascertained prior to sale.
The following table provides the data flow for Isilon:
<->Leaf<-> | <->Spine<-> | <->Leaf<-> | Device | Permitted | Comments
Leaf | | | Host | Y | Jumbo frames VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Leaf | Host | Y | Jumbo frames VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Dell EMC System-leaf | Host | Y | Jumbo frames VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Open-leaf | Host | Y | Jumbo frames VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Border-leaf | Mgmt-Host | Y | Jumbo frames VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Border-leaf | Cust | N |
The following table provides the data flow for X-Blades or NAS shares:
<->Leaf<-> | <->Spine<-> | <->Leaf<-> | Device | Permitted | Comments
Leaf | | | Host | Y | Jumbo frames Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional, requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Leaf | Host | Y | Jumbo frames Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional, requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Dell EMC System-leaf | Host | Y | Jumbo frames Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional, requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Open-leaf | Host | Y | Jumbo frames Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional, requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Border-leaf | Mgmt-host | Y | Jumbo frames Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional, requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Border-leaf | Cust | N |
Refer to the Architecture Overview for your system for additional information on each component and
storage array.
File storage
File-level storage is deployed in a Network Attached Storage (NAS) system and configured with the NFS or
SMB/CIFS protocol. The storage system is connected directly to the Vscale Fabric leaf switches to
provide Ethernet connections.
The following file storage options are available:

Isilon:
• Allows you to scale out NAS capacity and performance up to 50 PB per cluster.
• Connects to leaf switches to provide file services.
• Contains internal storage capacity and does not require external SAN connectivity.

Unity:
• Unity Storage Processors (SPs) can provide NAS file stores from the same array providing block storage to SAN hosts.
• Unity 10 Gb Ethernet ports connect to leaf switches to provide file services.
• Unity storage arrays can connect to multiple leaf switches provided there are sufficient ports on each SP to provide redundant connections.

Unified VNX:
• One or more X-Blades connected to a VNX block storage array can provide NAS file stores from the same array providing block storage to SAN hosts.
• Contains dedicated links to the VNX block storage and does not require SAN access.
• X-Blades connect to leaf switches to provide file services.
• X-Blades can connect to multiple leaf switches provided there is a failover X-Blade with corresponding connections.

VMAX3 eNAS, VMAX3 AFA eNAS:
• Uses internal ports to connect to storage devices on the array, which provide the capacity for file shares.
• Ethernet ports (on the VMAX3 engines) dedicated for NAS can connect to leaf switches in the same manner as the unified VNX X-Blades.

NAS gateways:
• Gateways do not contain internal storage and must connect to a Vscale Fabric Technology Extension with VMAX resources. This provides the same NAS functionality as a unified VNX X-Blade.
• Uses SAN to connect to an array where the file systems are stored. SAN ports can connect to core, but must be local (on the same SAN switch) to the array providing the block storage.
• X-Blades can connect to leaf switches in the same manner as the unified VNX X-Blades.
Data protection resources
Data protection resources can be added to a Vscale Fabric Technology Extension to provide resources to
infrastructure components within the Vscale Fabric.
Components
The following table provides a description of data protection components:
Component | Provides
Cisco Nexus 9396PX Switches | Ethernet leaf switch for LAN access.
Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Director | FC edge switch for FC block access.
Avamar, Data Domain, RecoverPoint Appliance, or VPLEX | One or more data protection instances.
Cisco Nexus 3172TQ Switch | Management switch.
Connectivity
Place data protection resources directly on a Vscale Fabric Technology Extension with shared resources
for SAN connections and FC access. VPLEX connections must be in close proximity to back-end storage
arrays. Data protection resources connect directly to the Vscale Fabric from the Ethernet leaf switch to
access Avamar and/or Data Domain.
Data flow
The following table provides possible data flows between the source and destination devices:
Device | <->Edge<-> | <->Core<-> | <->Edge<-> | Device | Permitted | Comments
Host | Open-edge | Core | | Array | Y | Inter-VSAN routing (IVR) required.
Host | Open-edge | Core | Edge | Array | Y | IVR required.
Host | Open-edge | | | Array | N | Dell EMC does not support devices.
Host | Open-edge | Core | | VPLEX | Y | IVR required.
Host | Open-edge | Core | Edge | VPLEX | N | Exceeds the maximum number of hops.
Array | Open-edge | Core | | VPLEX | N | Dell EMC does not manage interoperability of the end array.
Array | Open-edge | Core | Edge | VPLEX | N | Exceeds the maximum number of hops.
Array | Open-edge | Core | Edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores from open-edge connected arrays.
Array | Open-edge | Core | Dell EMC System-edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores from open-edge connected arrays.
Array | Open-edge | Core | | RPA | N |
Array | Open-edge | Core | Dell EMC System-edge | VPLEX | N |
The Dell EMC Integrated Data Protection Product Guide contains additional information about data
protection options.
System resources
Additional resources are required for Converged Systems to connect to the Vscale Fabric.
Components
The following table provides a description of required resources:
Component | Provides
Converged System | Vblock System, VxBlock System, or VxRack System functionality.
Cisco Nexus 9396PX Switches | Ethernet leaf switch required for LAN access.
Cisco MDS 9148S Multilayer Fabric Switches or Cisco MDS 9700 Series Multilayer Directors | FC edge switch required for FC block access. Required if the existing block is using unified networking.
Cisco Nexus 3172TQ Switch | Management switch.
Connectivity
Unified networking switches cannot connect directly to the Vscale Fabric. Cisco MDS 9148 Multilayer
Fabric Switch, Cisco MDS 9506 Multilayer Director, and Cisco MDS 9513 Multilayer Directors require
impact assessment to determine validity for connections. System resources connect directly to the Vscale
Fabric using Ethernet spine.
The following table provides connection models for the system resources:
Model | Description
Ethernet | Same connectivity as blocks with multiple resources.
FC | Unified networking systems require Cisco MDS switches.
Data flow
The following table provides possible data flows for system resources between the source and destination
devices:
Device | <->Edge<-> | <->Core<-> | <->Edge<-> | Device | Permitted | Comments
Host | Converged System-edge | | | Array | Y |
Host | Converged System-edge | Core | | Array | Y |
Host | Converged System-edge | Core | Edge | Array | Y | Inter-VSAN routing (IVR) required for collisions.
Host | Converged System-edge | Core | Open-edge | Array | Y | IVR required.
Host | Converged System-edge | | | VPLEX | N |
Array | Converged System-edge | Core | | VPLEX | Y | IVR required for collisions.
Array | Converged System-edge | | | Host | Y |
Array | Converged System-edge | Core | Edge | Host | Y | IVR required for collisions.
Array | Converged System-edge | Core | Open-edge | Host | Y | IVR required for collisions.
Array | Converged System-edge | | | VPLEX | Y |
Array | Converged System-edge | Core | | VPLEX | N |
Array | Converged System-edge | Core | Edge | VPLEX | N | Too many hops.
Array | Converged System-edge | | | RecoverPoint | Y |
Array | Converged System-edge | Core | | RecoverPoint | Y | IVR required for collisions.
Hosting management applications
The Vscale Management Platform (VMP) is an extensible platform used to host Dell EMC core
management applications required for Converged Systems and Vscale Fabric operations.
VMP may also host ecosystem and optional management applications.
VMP is based on a standard Vblock System that manages Converged Systems and Vscale Fabric
Technology Extensions in a single data center or across multiple data centers.
To maintain systems operation and stability, VMP workloads are categorized into three discrete classes
that are partitioned into dedicated clusters and data stores to maintain system performance, availability,
and security.
The following table describes the different types of management workloads:
Management workload | Description
Core | Contains management applications that are required to install, operate, and support a Converged System, and are core to systems operation.
Dell EMC optional | Contains non-core management workloads that extend the Dell EMC core systems capability, such as data protection, security, or storage management tools. These are supported and installed by Dell EMC to manage the Converged System or Vscale Fabric Technology Extension components. These include, but are not limited to, Avamar Administrator, InsightIQ for Isilon, and VMware vRealize Operations Manager.
Ecosystem | Contains management workloads, other than core or Dell EMC optional, that an organization can use to manage Converged System or Vscale Fabric Technology Extension components and its IT environment, such as logging, management and orchestration, and security information and event management (SIEM).
The following illustration shows core VMP concepts:
The following table provides examples of each type of management workload:

Core:
• VMware vCenter Hypervisor management
• Element Manager: Unisphere
• Fabric Manager (subset of the Data Center Network Manager)
• Secure Remote Support
• PowerPath
• Vision Intelligent Operations
• Tools: resources to install, operate, and support a Converged System or Vscale Fabric Technology Extension

Dell EMC optional (this list includes, but is not limited to):
• Data protection, security, or storage management tools
• RecoverPoint or VPLEX
• Avamar Administrator
• InsightIQ for Isilon
• VMware vCloud Network and Security appliances (VMware vShield Edge/Manager)
• VMware vRealize Operations Manager

Ecosystem (this list includes, but is not limited to):
• VMware vCloud Director
• VMware View Connection Brokers
• Cisco UCS Director
Compute components
The Vscale Management Platform (VMP) does not have a physical advanced management platform
(AMP), which is the management server for Converged Systems. All management functions are located
within the compute component.
Compute requires a minimum of four servers to support the core management and Dell EMC optional
workload layers. If an ecosystem workload is present, two additional servers are required to support that
configuration.
The following table provides the minimum requirements for compute servers in VMP:
Components | Minimum requirement
Memory | 128 GB of RAM
CPU | 2x E5-2600 v2 series CPUs with 6 cores each
Refer to the appropriate Architecture Overview for more information about your system.
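As a quick illustration of these minimums, the short Python sketch below computes the server count and aggregate resources for a VMP build. The figures come from the table above; the function and constant names are illustrative assumptions, not part of any Dell EMC tooling.

# Illustrative sizing helper based on the VMP compute minimums above.
# Names are assumptions for clarity only.

CORE_AND_OPTIONAL_SERVERS = 4   # minimum servers for core and Dell EMC optional workloads
ECOSYSTEM_EXTRA_SERVERS = 2     # additional servers when an ecosystem workload is present
RAM_GB_PER_SERVER = 128         # minimum memory per server
CORES_PER_SERVER = 2 * 6        # 2x E5-2600 v2 series CPUs with 6 cores each

def vmp_compute_minimums(ecosystem_workload: bool) -> dict:
    """Return the minimum server count and aggregate RAM/cores for a VMP."""
    servers = CORE_AND_OPTIONAL_SERVERS + (ECOSYSTEM_EXTRA_SERVERS if ecosystem_workload else 0)
    return {
        "servers": servers,
        "total_ram_gb": servers * RAM_GB_PER_SERVER,
        "total_cores": servers * CORES_PER_SERVER,
    }

print(vmp_compute_minimums(ecosystem_workload=True))
# {'servers': 6, 'total_ram_gb': 768, 'total_cores': 72}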
Storage components
The Vscale Management Platform (VMP) consists of the storage and file disk pools.
Sizing of the storage and file disk pools depends on the core, Dell EMC optional, and ecosystem
workload applications deployed on the hosts managed by the VMP.
The following table provides the storage/file disk pool specifications:
Storage disk pool (core and non-core enabled clusters):
• 12.5 TB minimum usable disk space (70% threshold/target IOPS 18K)
• Storage tier 1 - 200 GB SSD RAID-5 (4+1)
• Storage tier 2 - 600 GB SAS 10K RAID-5 (4+1)
• Storage tier 3 - 2 TB NL-SAS 7.2K RAID-6 (6+2)
• This is not applicable with XtremIO.

File disk pool:
• 3 TB minimum usable disk space
• 10K SAS minimum or NL-SAS with FAST VP
• RAID 5
VMP offers the following features with Unity, VNX, and VMAX only:
• FAST VP with flash disks (recommended for heavily used VMware vCenter and VMware vCenter Operations Manager environments)
• FAST (recommended for VMP storage environments where available)
Refer to the appropriate Architecture Overview for more information about your system.
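The pool minimums above can also be expressed as a simple capacity check. The following Python sketch is illustrative only; the thresholds restate the specifications above, the names and structure are assumptions, and actual sizing depends on the workloads deployed.

# Illustrative capacity check against the VMP disk pool minimums above.
# Names and structure are assumptions for clarity only.

STORAGE_POOL_MIN_USABLE_TB = 12.5   # storage disk pool minimum usable capacity
STORAGE_POOL_THRESHOLD = 0.70       # keep utilization at or below 70%
STORAGE_POOL_TARGET_IOPS = 18_000   # target IOPS for the storage disk pool
FILE_POOL_MIN_USABLE_TB = 3.0       # file disk pool minimum usable capacity

def check_pools(storage_usable_tb, storage_used_tb, file_usable_tb, offered_iops=None):
    """Return warnings if a proposed VMP disk layout misses the minimums above."""
    warnings = []
    if storage_usable_tb < STORAGE_POOL_MIN_USABLE_TB:
        warnings.append("Storage disk pool below 12.5 TB minimum usable capacity.")
    if storage_usable_tb and storage_used_tb / storage_usable_tb > STORAGE_POOL_THRESHOLD:
        warnings.append("Storage disk pool utilization exceeds the 70% threshold.")
    if file_usable_tb < FILE_POOL_MIN_USABLE_TB:
        warnings.append("File disk pool below 3 TB minimum usable capacity.")
    if offered_iops is not None and offered_iops > STORAGE_POOL_TARGET_IOPS:
        warnings.append("Expected IOPS exceed the 18K target for the storage disk pool.")
    return warnings

print(check_pools(storage_usable_tb=12.5, storage_used_tb=9.5, file_usable_tb=3.0))
# ['Storage disk pool utilization exceeds the 70% threshold.']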
Network components
The Vscale Management Platform (VMP) network architecture is based on a standard Converged System
configuration.
The VMP environment has a hybrid virtual switch design that uses the VMware vSphere Standard Switch
and VMware vSphere Distributed Switch (VDS). Dell EMC does not support the Cisco Nexus 1000V
Series Switches with VMP.
VMP differs from a standard Converged System in the following ways:
•
VMP is a centralized management platform that hosts all of the element managers associated
with the Vscale Architecture. This simplifies the management infrastructure since an Advanced
Management Platform (AMP) is not required for each Converged System or Vscale Fabric
Technology Extension. In addition, this enables better security, and simplifies patching and
maintenance of the management infrastructure.
•
All resources contained within Converged Systems or Vscale Fabric Technology Extensions are
managed by VMP across the out-of-band network.
•
Layer 3 connectivity is configured between specific hosts and VLANs to enable management
functionality. These routing configurations are implemented in customer-provided, network
infrastructure. The VLAN design is similar to a standard Converged System.
•
For Layer 2 connectivity, core VLANs are extended through the Vscale Architecture network into
the Converged System or Vscale Fabric Technology Extension managed by VMP.
•
The logical network design for VMP reduces impact from network outages and optimizes the
environment for advanced security implementations.
Network architecture
The Vscale Management Platform (VMP) uses in-band, out-of-band, inter-Converged System, and virtual
switch network architecture.
In-band
In-band network traffic traverses the production network switches in the VMP.
The following table provides a list of the VLANs for in-band network traffic:
vcesys_esx_mgmt: Local VMware management and applications that may impact production
vcesys_esx_L3vmotion: vMotion traffic between VMware vSphere ESXi hosts in the VMP
vcesys_esx_ft: VMware fault-tolerance traffic between VMware vSphere ESXi hosts in the VMP
vcesys_nfs: NFS traffic internal to VMP
vcesys_esx_build: Automated deployment of VMware vSphere ESXi hosts
vcesys_brs_data: Backup/recovery with data protection solution
fcoe_fabric_a: FC over Ethernet (FCoE) A VLAN for Cisco UCS connectivity
fcoe_fabric_b: FCoE B VLAN for Cisco UCS connectivity
Out-of-band
Out-of-band network traffic traverses the management network switches in the VMP and cannot be
leveraged for any production data.
The following table provides a list of the VLANs for out-of-band network traffic:
vcesys_oob_mgmt: VMs and device ports are used for control plane only. There is no data on this VLAN.
Inter-Converged System network
Inter-Converged System VLANs provide network connectivity to the Converged System managed by the
VMP, or to the management and production networks.
The following table provides a list of the VLANs for inter-Converged System network traffic:
vmp_oob_mgmt: Control plane only. There is no data on this VLAN.
vcesys_esx_L3vmotion: Cross-vCenter vMotion traffic between VMware vSphere ESXi hosts.
vcesys_esx_L3prov: Isolates traffic for cold migration, VM clones, and snapshots.
vcesys_nfs: NFS traffic internal and external to VMP.
vmp_esx_mgmt: VMware management and applications that may impact production.
vmp_vceopt_mgmt: Dell EMC optional management workload VMs (may be collapsed into core).
vmp_eco_mgmt: Ecosystem management workload VMs.
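Taken together, the in-band, out-of-band, and inter-Converged System VLANs form a small, checkable plan. The Python sketch below is illustrative only: the dictionary layout and the validation rule are assumptions used to restate the tables above (for example, that the out-of-band VLANs carry control-plane traffic only), not a Dell EMC configuration format.

# Illustrative VLAN plan derived from the tables above. The structure and
# the check below are assumptions for clarity, not a Dell EMC format.

VLAN_PLAN = {
    "in_band": [
        "vcesys_esx_mgmt", "vcesys_esx_L3vmotion", "vcesys_esx_ft",
        "vcesys_nfs", "vcesys_esx_build", "vcesys_brs_data",
        "fcoe_fabric_a", "fcoe_fabric_b",
    ],
    "out_of_band": ["vcesys_oob_mgmt"],
    "inter_converged_system": [
        "vmp_oob_mgmt", "vcesys_esx_L3vmotion", "vcesys_esx_L3prov",
        "vcesys_nfs", "vmp_esx_mgmt", "vmp_vceopt_mgmt", "vmp_eco_mgmt",
    ],
}

# VLANs that must stay control-plane only (no production or backup data).
CONTROL_PLANE_ONLY = {"vcesys_oob_mgmt", "vmp_oob_mgmt"}

def validate_plan(plan, data_vlans):
    """Flag data-carrying VLANs that are undefined or control-plane only."""
    defined = {name for names in plan.values() for name in names}
    return {
        "undefined": sorted(set(data_vlans) - defined),
        "control_plane_violations": sorted(CONTROL_PLANE_ONLY & set(data_vlans)),
    }

# Example: flag a misconfiguration where the OOB VLAN carries backup data.
print(validate_plan(VLAN_PLAN, data_vlans=["vcesys_brs_data", "vcesys_oob_mgmt"]))
# {'undefined': [], 'control_plane_violations': ['vcesys_oob_mgmt']}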
Virtual switch
VMP uses the VMware vSphere Standard Switch and the VMware vSphere Distributed Switch (VDS) in a
hybrid design for virtual networking. Regardless of the virtual networking technology, VMP does not
extend its virtual networking capabilities to non-VMP VMware hosts or clusters. VMP can use a virtual
networking solution different from the ones used by the Converged Systems it manages.
Multiple Converged Systems managed by a single VMP can use different virtual networking solutions. For
example, one Converged System can use the Cisco Nexus 1000V Switch with Advanced Edition while a
Vscale Fabric Technology Extension can use a Cisco Nexus 1000V Switch with Essentials. The VMP that
is managing the two infrastructure components can use a VMware VDS.
If the Cisco Nexus 1000V Essentials or Advanced Edition is selected on a Converged System or Vscale
Fabric Technology Extension managed by VMP, enable Layer 3 mode on each system. If the existing
system uses Layer 2 mode, modify the VLAN to Layer 3 for the Cisco Nexus 1000V Switch. Management
of the Cisco Nexus 1000V Switch Layer 3 control is collapsed into the VMP esx_mgmt VLAN.
Deploy the virtual networking components on the VMP and arrange them to support a maximum level of
redundancy (where available).
Production management
The following table lists the production management VLANs:
vmp_oob_mgmt: Carries inter-Converged System management traffic to and from the VMs in the
VMP_PROD-COMMON management workload. The following VLAN design requirements apply (a
subnet-size check appears after this table):
• If Layer 2 network connectivity is required, the assigned subnet must be a /22 subnet or larger to accommodate IP addressing for a minimum of 650 (up to 1,000) VMware vSphere ESXi hosts.
• If Layer 3 network connectivity is required, the assigned subnet must be sized to accommodate fewer than 30 VMs, so a /27 should be sufficient based on the current design.

vmp_esx_mgmt: Carries inter-Converged System management traffic to and from the VMP_PROD-CENTRAL
management workload. The following VLAN design requirements apply:
• If Layer 2 network connectivity is required, the assigned subnet must be a /22 subnet or larger to accommodate IP addressing for a minimum of 650 (up to 1,000) VMware vSphere ESXi hosts.
• If Layer 3 network connectivity is required, the assigned subnet must be sized according to the number of VMs required to support the external, managed Converged System or Vscale Fabric Technology Extension. Each VMware vCenter instance requires four VMs. The common VMs (such as element manager and fabric manager) require six VMs.
• Must have Layer 2 or Layer 3 connectivity through the customer-provided network to establish management functionality between VMP and the managed Converged System or Vscale Fabric Technology Extension. For each option, refer to the requirements listed in the use cases for inter-Converged System networking.

vmp_vceopt_mgmt: Carries inter-Converged System management traffic to and from the VMs in the
Optional Management Workload vSphere cluster resource pool. VLAN design requirements are to be supplied.

vmp_eco_mgmt: Carries inter-Converged System management traffic to and from the VMs in the ecosystem
management workload VMware vSphere cluster resource pool beneath the VXVMP-ECO VMware vSphere
cluster. VLAN design requirements are to be supplied.
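The /22 and /27 requirements above are easy to verify with standard subnet arithmetic. The Python sketch below uses the standard-library ipaddress module; the prefixes 10.0.0.0/22 and 10.0.4.0/27 are placeholder examples, not addresses from this design.

import ipaddress

# Verify the subnet sizes called out in the table above.
# The prefixes used here are placeholder examples only.

l2_subnet = ipaddress.ip_network("10.0.0.0/22")
l3_subnet = ipaddress.ip_network("10.0.4.0/27")

# A /22 provides 1024 addresses, 1022 of them usable for hosts,
# which covers the 650 to 1,000 VMware vSphere ESXi hosts.
print(l2_subnet.num_addresses - 2)   # 1022

# A /27 provides 32 addresses, 30 of them usable,
# which covers the fewer-than-30 VMs noted for Layer 3 connectivity.
print(l3_subnet.num_addresses - 2)   # 30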
In production management, the client network can access hosts and VMs on VLANs 101, 201, 105, and
205 within the Vscale Fabric Technology Extension that contains VMP.
The following illustration reflects the connections between the devices (but not the quantity of those
connections) for the logical network connectivity of the VMP in the production management environment:
To manage VMP resources within the Vscale Fabric Technology Extension:
•
The VMP local VMware vCenter environment has hosts and VMs on vcesys_esx_mgmt and VMs
on vcesys_oob_mgmt.
•
The VMP production VMware vCenter environment has VMs on vmp_oob_mgmt and
vmp_esx_mgmt.
These are accessible from external networks.
The following VLANs are local to the Vscale Fabric network environment:
•
vcesys_nfs leverages Layer 3 connectivity to Vscale Fabric Technology Extension for routed NFS
using VXLAN
•
vcesys_esx_L3vmotion leverages Layer 3 connectivity to Vscale Fabric Technology Extension for
cross VMware vCenter vMotion
•
vcesys_esx_L3prov leverages Layer 3 connectivity to Vscale Fabric Technology Extension for
OVF provisioning and cold vMotion
•
vmp_oob_mgmt leverages Layer 3 connectivity to Vscale Fabric Technology Extension for OOB
•
vmp_esx_mgmt leverages Layer 3 connectivity to Vscale Fabric Technology Extension for
VMware vSphere ESX management
The VMP must be in the same data center as, or within metro latency distance (150 milliseconds RTT) of,
the Converged System, VxRack System, or Vscale Fabric Technology Extension managed by VMP.
VM placement and VLAN assignment
The following illustration shows the placement of each VM within the VMP along with its corresponding
VLAN:
VMware vSphere virtual switch designs
The Vscale Management Platform (VMP) combines the VMware vSphere Standard Switch and the
VMware vSphere Distributed Switch (VDS) into a hybrid design for virtual networking.
Each VMware vSphere ESXi host has VMkernel port groups for vMotion and NFS (if used) configured on
the VMware VDS. The remaining port groups are configured on the VMware vSphere Standard Switch,
and hosts are managed by the local VMware vCenter Server that resides in the local management
workload pool.
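A minimal way to capture this hybrid layout is a simple mapping of port groups to switch types. The Python sketch below is an assumption-laden illustration of the split described above (port group names such as vmotion, nfs, and management are placeholders), not a VMware or Dell EMC configuration artifact.

# Illustrative view of the hybrid virtual switch layout described above.
# Port group names are placeholders; only the VDS versus Standard Switch
# split reflects the text.

HYBRID_SWITCH_LAYOUT = {
    "vSphere Distributed Switch (VDS)": [
        "vmotion",      # vMotion VMkernel port group
        "nfs",          # NFS VMkernel port group (only if NFS is used)
    ],
    "vSphere Standard Switch": [
        "management",   # remaining port groups stay on the Standard Switch
        "ft",
        "build",
    ],
}

def switch_for(port_group):
    """Return which virtual switch a port group lands on in this layout."""
    for switch, groups in HYBRID_SWITCH_LAYOUT.items():
        if port_group in groups:
            return switch
    return "vSphere Standard Switch"  # default per the hybrid design above

print(switch_for("vmotion"))  # vSphere Distributed Switch (VDS)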
If migrating an existing VMware vCenter Server environment from an external Converged System into a
centralized VMware vCenter Server instance on the VMP, the following conditions apply:
•
The VMP and individual compute hosts must be within the same data center and latency limitations.
•
The managed Cisco Nexus 1000V Switches must be in Layer 3 mode.
•
Any Converged System managed by VMP must be at a supported Release Certification Matrix
(RCM).
•
VMware vCenter resources from a VMware vCenter instance running on VMP cannot be used.
•
Management must be moved in its entirety, including the consolidation or migration of all
associated VMware vCenter Services.
VMware vSphere Standard Switch
The following illustration shows the connections between the devices for the VMware vSphere Standard
Switch on the VMP:
VMware vSphere Distributed Switch
The following illustration shows the VMware VDS configuration on the VMP:
Management workload cluster and resource pools
Dedicated clusters and system resource pools segregate the resources that each management workload
requires, so workloads run efficiently and maintain performance without hindering one another.
The following clusters support the management workloads:
•
Core cluster (VXVMP-CORE)
•
Ecosystem cluster (VXVMP-ECO)
Core cluster
The core cluster consists of the following resource pool workloads that support Vscale
Management Platform (VMP) and external Converged System management:
VMP management: Manages components local to the VMP. The local management workload consists of the VMs required for the VMware vSphere management components that run the Vscale Fabric Technology Extension with VMP.

Optional management: This VMware vSphere system resource pool provides all the available data protection software components.

Production shared management: Manages all external Converged Systems and provides the central workloads for the VMware vSphere management components and common components such as element manager, Secure Remote Services, fabric manager, and PowerPath.
The following table lists the VMs that belong in the local and production management workloads:
Workload: VMP management
Components:
• Local database server
• Local VMware vCenter server
• Local update manager
• Local VMware vCenter Platform Service Controller 1 and 2
• Local element manager
Manages: Local Converged System components

Workload: Production management
Components:
• Production element manager
• Production fabric manager
• Production PowerPath license server
• Production database server
• Production VMware vCenter Server
• Production update manager
• Production VMware vCenter Platform Service Controller 1 and 2
• Production Secure Remote Services appliance
Manages: Primary point of management for Converged System resources to be managed (including storage, compute, virtualization, and network)
Ecosystem cluster
The ecosystem management workload consists of non-Dell EMC supported management tools from
Cisco, Dell EMC, and VMware. The workload also consists of software that is certified as Vblock System
ready, such as VMware vRealize Suite, VMware Horizon View, Cisco UCS Director, Ionix Unified
Infrastructure Manager (UIM/P and UIM/O), VMware VMTurbo, and Cloud Lifecycle Management.
A new VMware vSphere system resource pool is created within a new and separate VMware vSphere
cluster known as VXVMP-ECO. The separation of the ecosystem management workload enables the
required control and shaping of the enterprise resource management tools.
Managing resource usage and VMs
Creating resource pools and vApps and grouping the associated VMs within them provides an effective
and efficient way to manage resource usage and to run power on/off maintenance tasks.
The following illustration provides a configuration example of pools and vApps for resource management
and VM control:
Virtualization
The Vscale Management Platform (VMP LOCAL) workloads have a local VMware vCenter Server instance
to manage the workloads and one or more VMware vCenter Servers for the production environment.
Local VMware vCenter Server
The local management workload resides on the Vscale Fabric Technology Extension with VMP and is
controlled by the local VMware vCenter instance.
Production VMware vCenter Server
The production management workload (VMP PROD-CENTRAL/VMPprod01) for VMP operates as a
monolithic VMware vCenter Server instance for centralized management of the VMware vSphere ESXi
hosts from external Converged Systems.
Sample configurations
Cabinet elevations vary based on the specific configuration requirements.
Sample ACI configuration with the Cisco Nexus 9504 Switch
Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.
Cabinet 1
Cabinet 2
Sample ACI configuration with the Cisco Nexus 9508 Switch
Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.
Cabinets 1 and 2
Cabinet 3
Sample technology connect configuration
Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.
Cabinet 1
Sample system with compute
Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.
Cabinet 1
Sample system with storage
Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.
Cabinet 1
Sample Vscale Management Platform with VNX5200
Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.
Cabinet 1
Additional references
Virtualization components
Virtualization component information and links to documentation are provided.
Product | Description | Link to documentation
VMware vCenter Server | Provides a scalable and extensible platform that forms the foundation for virtualization management and provides VMware high availability (HA) and dynamic resource scheduling (DRS). | www.vmware.com/products/vcenterserver/
VMware vSphere ESXi | Abstracts hardware to support virtualized workloads. | www.vmware.com/products/vsphere/
Compute components
Compute component information and links to documentation are provided.
Product | Description | Link
Cisco UCS B-Series Blade Servers | Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization. | www.cisco.com/en/US/products/ps10280/index.html
Cisco UCS Manager | Provides centralized management capabilities for the Cisco Unified Computing System (UCS). | www.cisco.com/en/US/products/ps10281/index.html
Cisco UCS 2200 Series Fabric Extenders | Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-serieshome.html
Cisco UCS 2300 Series Fabric Extenders | Bring unified fabric into the blade-server chassis, providing up to four 40 Gbps connections each between blade servers and the fabric interconnect. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2300-series-fabric-extenders/tsd-products-support-serieshome.html
Cisco UCS 5108 Series Blade Server Chassis | Chassis that supports up to eight blade servers and up to two fabric extenders in a six-RU enclosure. | www.cisco.com/en/US/products/ps10279/index.html
Cisco UCS 6200 Series Fabric Interconnects | Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. | www.cisco.com/en/US/products/ps11544/index.html
Cisco UCS 6300 Series Fabric Interconnects | Cisco UCS family of line-rate, low-latency, lossless, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-6300-series-fabricinterconnects/tsd-products-supportseries-home.html
Network components
Network component information and links to documentation are provided.
Product | Description | Link
Cisco Nexus 1000V Series Switches | Delivers Cisco VN-Link services to VMs hosted on that server. | www.cisco.com/en/US/products/ps9902/index.html
Cisco Nexus 3000 Series Switches | Provides management access to all Converged System components using vPC technology to increase redundancy and scalability. | www.cisco.com/c/en/us/products/switches/nexus-3000-series-switches/index.html
Cisco Nexus 5000 Series Switches | Simplifies data center transformation by enabling a standards-based, high-performance, unified fabric. | www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html
Cisco MDS 9000 Series Switches | Provides industry-leading availability, scalability, security, and management. | www.cisco.com/c/en/us/products/storage-networking/mds-9000-seriesmultilayer-switches/index.html
Cisco Nexus 9000 Series Switches | Delivers proven high performance and density, low latency, and exceptional power efficiency in a broad range of compact form factors. | www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html
VMware vSphere Distributed Switch (VDS) | Delivers advanced network services to VMs hosted on that server. | www.vmware.com/products/vsphere/features/distributed-switch.html
Storage components
Storage component information and links to documentation are provided.
Product | Description | Link to documentation
Unity | Delivers a fully integrated SAN and NAS array with seamless tiered storage and streamlined management. | www.emc.com/en-us/storage/unity.htm
XtremIO | Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments. | www.emc.com/collateral/software/specification-sheet/h12451-XtremIOss.pdf
VMAX3 AFA Family | Delivers performance, scale, high availability, and advanced data services for all mission-critical applications. | www.emc.com/en-us/storage/vmax-allflash.htm
VMAX3 Hybrid Family | Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments. | www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3ss.pdf
VNX Series Gateways | Provides NAS storage in a centrally managed information storage system. The gateways allow you to grow, share, and cost-effectively manage the Vblock System with multi-protocol file access. | www.emc.com/storage/vnx/vnx-seriesgateways.htm
VNX | Delivers high-performance, unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. | www.emc.com/products/series/vnxseries.htm
The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness
for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of
Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA in
February 2017.
Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to
change without notice.