Design Guide to run VMware NSX for vSphere with Cisco ACI
White Paper
First published: January 2018
© 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
Introduction
Goal of this document
Cisco ACI fundamentals
    Cisco ACI policy model
    Cisco ACI policy-based networking and security
    Cisco ACI VMM domains
VMware NSX fundamentals
    NSX for vSphere
    NSX for vSphere network requirements
Running vSphere Infrastructure as an application with Cisco ACI
    vSphere infrastructure
    Physically connecting ESXi hosts to the fabric
    Mapping vSphere environments to Cisco ACI network and policy model
        Obtaining per-cluster visibility in APIC
        Securing vSphere Infrastructure
    VMware vSwitch design and configuration considerations
Option 1. Running NSX-v security and virtual services using a Cisco ACI integrated overlay for network virtualization
    Using NSX Distributed Firewall and Cisco ACI integrated overlays
Option 2. NSX-v overlays as an application of the Cisco ACI fabric
    NSX-v VXLAN architecture
        NSX VXLAN—Understanding BUM traffic replication
        NSX transport zones
        NSX VTEP subnetting considerations
    Running NSX VXLAN on a Cisco ACI fabric
        Bridge domain–EPG design when using NSX hybrid replication
        Bridge domain–EPG design when using NSX unicast replication
        Providing visibility of the underlay for the vCenter and NSX administrators
        Virtual switch options: Single VDS versus dual VDS
    NSX Edge Clusters—NSX routing and Cisco ACI
        Introduction to NSX routing
        Connecting ESG with NAT to the Cisco ACI fabric
        ESG routing through the fabric
        ESG peering with the fabric using L3Out
    Bridging between logical switches and EPGs
Conclusion
    Do you need NSX when running a Cisco ACI fabric?
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University
of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights
reserved. Copyright © 1981, Regents of the University of California.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual
addresses and phone numbers. Any examples, command display output, network topology diagrams, and other
figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone
numbers in illustrative content is unintentional and coincidental.
This product includes cryptographic software written by Eric Young ([email protected]).
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit.
This product includes software written by Tim Hudson ([email protected]).
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other
countries. To view a list of Cisco trademarks, go to this URL: Third-party
trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a
partnership relationship between Cisco and any other company. (1110R)
With the launch of the Cisco Application Centric Infrastructure (Cisco ACI) solution in 2013, Cisco continued the
trend of providing best-in-class solutions for VMware vSphere environments. Cisco ACI is a comprehensive
Software-Defined Networking (SDN) architecture that delivers a better network, implementing distributed Layer 2
and Layer 3 services across multiple sites using integrated Virtual Extensible LAN (VXLAN) overlays. Cisco ACI
also enables distributed security for any type of workload, and introduces policy-based automation with a single
point of management. The core of the Cisco ACI solution, the Cisco Application Policy Infrastructure Controller
(APIC), provides deep integration with multiple hypervisors, including VMware vSphere, Microsoft Hyper-V, and
Red Hat Virtualization; and with modern cloud and container cluster management platforms, such as OpenStack
and Kubernetes. The APIC not only manages the entire physical fabric but also manages the native virtual
switching offering for each of the hypervisors or container nodes.
Since its introduction, Cisco ACI has seen incredible market adoption and is currently deployed by thousands of
customers across all industry segments.
In parallel, some vSphere customers may choose to deploy hypervisor-centric SDN solutions, such as VMware
NSX, oftentimes as a means of improving security in their virtualized environments. This leads customers to
wonder how to best combine NSX and Cisco ACI. This document is intended to help those customers by
explaining the design considerations and options for running VMware NSX with a Cisco ACI fabric.
Goal of this document
This document explains the benefits of Cisco ACI as a foundation for VMware vSphere, as well as how it makes
NSX easier to deploy, more cost effective, and simpler to troubleshoot when compared to running NSX on a
traditional fabric design.
As Cisco ACI fabrics provide a unified overlay and underlay, two possible NSX deployment options are discussed
(Figure 1):
Option 1. Running NSX-V security and virtual services with a Cisco ACI integrated overlay: In this
model, Cisco ACI provides overlay capability and distributed networking, while NSX is used for distributed
firewalling and other services, such as load balancing, provided by NSX Edge Services Gateway (ESG).
Option 2. Running NSX overlay as an application: In this deployment model, the NSX overlay is used to
provide connectivity between vSphere virtual machines, and the Cisco APIC manages the underlying
networking, as it does for vMotion, IP Storage, or Fault Tolerance.
Figure 1. VMware NSX deployment options
These two deployment options are not mutually exclusive. While Cisco ACI offers substantial benefits in both of
these scenarios when compared to a traditional device-by-device managed data center fabric, the first option is
recommended, because it allows customers to avoid the complexities and performance challenges associated with
deploying and operating NSX ESGs for north-south traffic and eliminates the need to deploy any VXLAN-to-VLAN
gateway functions.
Regardless of the chosen option, using Cisco ACI as a fabric for vSphere with NSX brings significant advantages, including:
Best-in-class performance: Cisco ACI builds on best-in-class Cisco Nexus 9000 Series Switches to
implement a low-latency fabric that uses Cisco Cloud Scale smart buffering and provides the highest
performance on a leaf-and-spine architecture.
Simpler management: Cisco ACI offers a single point of management for the physical fabric with full
FCAPS¹ capabilities, thus providing a much simpler environment for running all required vSphere
services with high levels of availability and visibility.
Simplified NSX networking: Because of the programmable fabric capabilities of the Cisco ACI solution,
customers can deploy NSX VXLAN Tunnel Endpoints (VTEPs) with minimal fabric configuration, as
opposed to device-by-device subnet and VLAN configurations. In addition, customers can optimize, reduce,
or completely eliminate the need for certain functions such as NSX ESGs. This contributes to requiring
fewer computing resources and simplifying the virtual topology.
¹ FCAPS is the ISO Telecommunications Management Network model and framework for network management. FCAPS is an acronym for fault, configuration, accounting, performance, and security—the management categories the ISO model uses to define network management tasks.
Operational benefits: The Cisco ACI policy-based model with single point of management facilitates setting
up vSphere clusters while providing better visibility, enhanced security, and easier troubleshooting of
connectivity within and between clusters. Furthermore, Cisco ACI provides many built-in network
management functions, including consolidated logging with automatic event correlation, troubleshooting
wizards, software lifecycle management, and capacity management.
Lower total cost of ownership: Operational benefits provided by Cisco ACI and the savings in resources and
licenses from enabling optimal placing of NSX ESG functions, along with faster time to recovery and easier
capacity planning, add up to reduced costs overall.
This document is intended for network, security, and virtualization administrators who will deploy NSX on a
vSphere environment running over a Cisco ACI fabric. We anticipate that the reader is familiar with NSX for
vSphere and with Cisco ACI capabilities. General networking knowledge is presupposed as well.
Cisco ACI fundamentals
Cisco ACI is the industry’s most widely adopted SDN for data center networking. Cisco ACI pioneered the
introduction of intent-based networking in the data center. It builds on a leaf-and-spine fabric architecture with
the APIC acting as the unifying point of policy and management.
The APIC implements a modern object model to provide a complete abstraction of every element in the fabric. This
model includes all aspects of the physical devices, such as interfaces or forwarding tables, as well as its logical
elements, like network protocols and all connected virtual or physical endpoints. The APIC extends the principles of
Cisco UCS® Manager software and its service profiles to the entire network: everything in the fabric is represented
in an object model at the APIC, enabling declarative, policy-based provisioning for all fabric functions and a single
point of management for day 2 operations.
Networks are by nature distributed systems. This distributed characteristic has brought significant challenges when
managing fabrics: if a network administrator wishes to modify a certain network attribute, touching discrete
switches or routers is required. This necessity poses significant challenges when deploying new network constructs
or troubleshooting network issues.
Cisco ACI fixes that problem by offloading the management plane of network devices to a centralized controller.
This way, when provisioning, managing, and operating a network, the administrator only needs to access the APIC.
It is very important to note that in the Cisco ACI architecture, centralizing the management and policy planes in the
APIC does not impose scalability bottlenecks in the network, as the APIC fabric management functions do not
operate in the data plane of the fabric. Both the control plane (intelligence) and data plane (forwarding) functions
are performed within the switching layer by intelligent Nexus 9000 Series Switches, which use a combination of
software and hardware features.
A centralized management and policy plane also does not mean that the network is less reliable or has a single
point of failure. As the intelligence function stays at the switches, the switches can react to any network failure
without having to ask the controller what to do.
Because a highly available scale-out cluster of at least three APIC nodes is used, any controller outage does not
diminish the capabilities of the network. In the unlikely event of a complete controller cluster outage, the fabric can
still react to such events as the addition of new endpoints or the movement of existing endpoints across
hypervisors (for instance, when performing virtual machine vMotion operations).
The Cisco ACI policy model provides a complete abstraction from the physical devices to allow programmable
deployment of all network configurations. Everything can be programmed through a single, open API, whether it is
physical interface settings, routing protocols, or application connectivity requirements inclusive of advanced
network services.
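As a sketch of what that single API looks like in practice, the snippet below builds the REST call that would create a tenant on the APIC. The controller address and tenant name are hypothetical, and the request is only constructed and printed, not sent.

```python
import json

APIC = "https://apic.example.com"  # hypothetical APIC address

def tenant_payload(name: str, descr: str = "") -> dict:
    """Managed-object JSON for a tenant (class fvTenant) in the APIC object model."""
    return {"fvTenant": {"attributes": {"name": name, "descr": descr}}}

# Every configuration object is POSTed to the same API root under the
# policy universe ("uni"); only the JSON body changes per object class.
url = f"{APIC}/api/mo/uni.json"
body = json.dumps(tenant_payload("vsphere-infra", "vSphere infrastructure tenant"))
print(url)
print(body)
```

The same pattern applies to interface settings, routing protocols, and application connectivity objects: each is just another class in the same management information tree, addressed through the same endpoint.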
The Cisco ACI fabric is a VXLAN-based leaf-and-spine architecture that provides Layer 2 and Layer 3 services with
integrated overlay capabilities. Cisco ACI delivers integrated network virtualization for all workloads connected, and
the APIC can manage not only physical devices but also virtual switches. Virtual and physical endpoints can
connect to the fabric without any need for gateways or additional per-server software and licenses. The Cisco ACI
solution works with all virtualized compute environments, providing tight integration with leading virtualization
platforms like VMware vSphere, Microsoft System Center VMM, or Red Hat Virtualization. APIC also integrates
with the leading open source cloud management solution, OpenStack, by having APIC program distributed
services on Open vSwitch using OpFlex. Finally, the Cisco ACI declarative model for defining application
connectivity also goes hand in hand with modern frameworks for running Linux containers, and Cisco ACI has the
same level of integration with Kubernetes, OpenShift, or Cloud Foundry clusters.
Figure 2. APIC declarative model enables intent-based networking
This configuration provides an enormous operational advantage because the APIC has visibility into all the
attached endpoints and has automatic correlation between virtual and physical environments and their application
or tenant context. Integration with virtualization solutions is implemented by defining virtual machine manager
domains in APIC.
Cisco ACI policy model
The Cisco ACI solution is built around a comprehensive policy model that manages the entire fabric, including the
infrastructure, authentication, security, services, applications, and diagnostics. A set of logical constructs, such as
Virtual Route Forwarding (VRF) tables, bridge domains, Endpoint Groups (EPGs), and contracts, define the
complete operation of the fabric, including connectivity, security, and management.
At the upper level of the Cisco ACI model, tenants are network-wide administrative folders. The Cisco ACI tenancy
model can be used to isolate separate organizations, such as sales and engineering, or different environments
such as development, test, and production, or combinations of both. It can also be used to isolate infrastructure for
different technologies or fabric users, for instance, VMware infrastructure versus OpenStack versus big data,
Mesos, and so forth. The use of tenants facilitates organizing and applying security policies to the network and
providing automatic correlation of statistics, events, failures, and audit data.
Cisco ACI supports thousands of tenants that are available for users. One special tenant is the “common tenant,”
which can be shared across all other tenants, as the name implies. Other tenants can consume any object that
exists within the common tenant. Customers that choose to use a single tenant may configure everything under the
common tenant, although in general it is best to create a dedicated user tenant and keep the common tenant for shared services.
Cisco ACI policy-based networking and security
Cisco ACI has been designed to provide a complete abstraction of the network. As shown in Figure 3, each Cisco
ACI tenant contains various network constructs:
Layer 3 contexts known as VRF tables: These provide routing isolation and enable running overlapping
address spaces between tenants, or even within a single tenant, and can contain one or more bridge domains.
Layer 2 flooding domains, called bridge domains: These provide scoping for Layer 2 flooding. Bridge
domains belong to a particular VRF table and can contain one or more subnets.
External bridged or routed networks: These are referred to as L2Out or L3Out interfaces and connect to
other networks, such as legacy spanning-tree or Cisco FabricPath networks, or simply to data center routers.
Figure 3. Cisco ACI tenancy model
The Cisco ACI tenancy model facilitates the administrative boundaries of all network infrastructure. Objects in the
common tenant can be consumed by any tenant. For instance, in Figure 3, Production and Testing share the same
VRF tables and bridge domains from the common tenant.
Figure 4 provides a snapshot of the tenant Networking constructs from the APIC GUI, showing how the VRF tables
and bridge domains are independent of the topology and can be used to provide connectivity across a number of
workloads, including vSphere, Hyper-V, OpenStack, bare metal, and IP storage solutions.
Figure 4. Flexibility of the Cisco ACI networking model
Endpoints that require a common policy are grouped together into endpoint groups (EPGs); an Application Network
Profile (ANP) is a collection of EPGs and the contracts that define the connectivity required between them. By
default, connectivity is allowed between the endpoints that are part of the same EPG (intra-EPG connectivity). This
default can be changed by configuring isolated EPGs (in which connectivity is not allowed), or by adding intra-EPG
contracts. Also, by default, communication between endpoints that are members of different EPGs is allowed only
when contracts between them are applied.
These contracts can be compared to traditional Layer 2 to Layer 4 firewall rules from a security standpoint. In
absence of a contract, no communication happens between two EPGs. Contracts not only define connectivity rules
but also include Quality of Service (QoS) and can be used to insert advanced services like load balancers or Next-Generation Firewalls (NGFWs) between any two given EPGs. Contracts are tenant-aware and can belong to a
subset of the tenant resources (to a VRF table only) or be shared across tenants.
The EPGs of an ANP do not need to be associated to the same bridge domain or VRF table, and the definitions
are independent of network addressing. For instance, contracts are not defined based on subnets or network
addresses, making policy much simpler to configure and automate. In this sense, ANPs can be seen as constructs
that define the application requirements and consume network and security resources, including bridge domains,
VRF tables, contracts, or L3Out interfaces. Figure 5 shows an example of a three-tier application with three
environments (Production, Testing, and Development) where the Production Web EPG (Web-prod) allows only
Internet Control Message Protocol (ICMP) and SSL from the external network accessible through an L3Out
interface. Other similar contracts govern connectivity between tiers. Because there are no contracts between the
Development and Testing or Production EPGs, the environments are completely isolated regardless of the
associated bridge domain or VRF table or IP addresses of the endpoints.
Figure 5. Example of an application network profile for a three-tier application with three environments
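The Web-prod contract just described could be expressed as an APIC REST payload along the lines of the following sketch. All object names are illustrative, and the filters "icmp" and "https" are assumed to be predefined (for example, in the common tenant).

```python
import json

# Managed-object JSON for a contract (class vzBrCP) whose single subject
# permits ICMP and SSL traffic by attaching two existing filters.
def web_prod_contract() -> dict:
    return {
        "vzBrCP": {
            "attributes": {"name": "ext-to-web-prod"},
            "children": [{
                "vzSubj": {
                    "attributes": {"name": "icmp-and-ssl"},
                    "children": [
                        {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "icmp"}}},
                        {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "https"}}},
                    ],
                }
            }],
        }
    }

body = json.dumps(web_prod_contract())
```

The L3Out would then be configured as the consumer of this contract and the Web-prod EPG as the provider, so the rule applies regardless of the endpoints' subnets or IP addresses.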
The ANP model is ideal for use in combination with cloud management and container frameworks. An application
design stack like that provided by Cisco CloudCenter™ software or VMware vRealize can directly map into Cisco
ACI application network profiles.
From a networking point of view, the fabric implements a distributed default gateway for all defined subnets. This
ensures optimal traffic flows between any two workloads for both east-west and north-south traffic, without
hair-pinning through a centralized gateway.
At the simplest level, connectivity can be modeled with an ANP using VLANs and related subnets. A simple ANP
can contain one or more VLANs associated with an environment represented as one EPG per VLAN associated to
a single bridge domain. This still provides the benefit of using the distributed default gateway, eliminating the need
for First-Hop Redundancy Protocol and providing better performance, while using contracts for inter-VLAN security.
In this sense, Cisco ACI provides flexibility and options to maintain traditional network designs while rolling out
automated connectivity from a cloud platform.
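A minimal version of this VLAN-centric model can be sketched as the following payload, which defines one bridge domain with a distributed gateway subnet and one EPG (representing a VLAN) bound to it. All object names and the gateway address are illustrative assumptions.

```python
# One bridge domain with a default-gateway subnet, plus one EPG representing
# a VLAN bound to that bridge domain (classes fvBD, fvSubnet, fvAp, fvAEPg).
def vlan_as_epg(vlan_name: str, gateway_cidr: str) -> list:
    bd = {
        "fvBD": {
            "attributes": {"name": f"bd-{vlan_name}"},
            "children": [{"fvSubnet": {"attributes": {"ip": gateway_cidr}}}],
        }
    }
    app = {
        "fvAp": {
            "attributes": {"name": "vlan-networks"},
            "children": [{
                "fvAEPg": {
                    "attributes": {"name": f"epg-{vlan_name}"},
                    # Bind the EPG to its bridge domain by name.
                    "children": [{"fvRsBd": {"attributes": {"tnFvBDName": f"bd-{vlan_name}"}}}],
                }
            }],
        }
    }
    return [bd, app]

objs = vlan_as_epg("vlan10", "10.10.10.1/24")
```

Repeating this per VLAN gives one EPG per VLAN on a shared gateway model, with contracts then providing the inter-VLAN security described above.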
Workloads are connected to the fabric using “domains” that are associated to the EPGs. Bare metal workloads are
connected through physical domains, and data center routers are connected as external routing domains. For
virtualization platforms, Cisco ACI uses the concept of Virtual Machine Management (VMM) domains.
Cisco ACI VMM domains
Cisco ACI empowers the fabric administrator with the capability of integrating the APIC with various VMM
solutions, including VMware vCenter (with or without NSX), Microsoft System Center Virtual Machine Manager
(SCVMM), Red Hat Virtualization (RHV), and OpenStack.²
This integration brings the benefit of consolidated visibility and simpler operations, because the fabric has a full
view of physical and virtual endpoints and their location, as shown in Figure 6. APIC can also automate
provisioning of virtual networking within the VMM domain.
Figure 6. Once a VMM domain is configured, the APIC has a full view of physical and virtual endpoints
² In Cisco ACI 3.1, the VMM domain tab changed from VM Networking to Virtual Networking because the VMM domain concept now also extends to containers, not just virtual machines. Cisco ACI supports this level of integration with Kubernetes and OpenShift, with Pivotal Cloud Foundry and other integrations planned for a future release.
The fabric and vSphere administrators work together to register APIC with vCenter using proper credentials. In
addition, it is also possible to install a Cisco ACI plug-in for the vSphere web client. The plug-in is registered with
the APIC and uses the latter’s Representational State Transfer (REST) API. The plug-in allows the fabric
administrator to provide vCenter administrators with the ability to see and configure tenant-level elements, such
as VRF tables, bridge domains, EPGs, and contracts, as shown in Figure 7, where we see the entry screen of the
Cisco ACI plug-in and can observe the various functions exposed on the right-hand side.
In vSphere environments, when a VMM domain is created in the APIC, it automatically configures a vSphere
Distributed Switch (VDS) through vCenter with the uplink settings that match the corresponding Cisco ACI fabric
port configurations, in accordance with the configured interface policies. This provides for automation of interface
configuration on both ends (ESXi host and Cisco ACI Top-of-Rack [ToR] switch, referred to as Cisco ACI Leaf),
and ensures consistency of Link Aggregation Control Protocol (LACP), Link Layer Discovery Protocol (LLDP),
Cisco Discovery Protocol, and other settings. In addition, once the VMM domain is created, the fabric administrator
can see a complete inventory of the vSphere domain, including hypervisors and virtual machines. If the
Cisco ACI vCenter plug-in is used, the vSphere administrator can also have a view of the relevant fabric aspects,
including non-vSphere workloads (such as bare metal servers, virtual machines running on other hypervisors, or
even Linux containers).
Figure 7. The Cisco ACI plug-in for vCenter
It is important to note that using a VMM domain enables consolidated network provisioning and operations across
physical and virtual domains while using standard virtual switches from the hypervisor vendors.³ For instance, in
the case of Microsoft Hyper-V, APIC provisions logical networks on the native Hyper-V virtual switch, using the
open protocol OpFlex to interact with it. Similarly, in OpenStack environments, Cisco ACI works with Open vSwitch
using OpFlex and Neutron ML2 or the Group-Based Policy plug-in within OpenStack.⁴
³ Cisco also offers the Cisco ACI Virtual Edge (AVE) solution as a virtual switch option for vSphere environments. This offers additional capabilities, but its use is entirely optional.
⁴ For more details about the Group-Based Policy model on OpenStack, refer to this link:
Specifically in the context of this white paper, which focuses on VMware environments, the APIC provisions a
native vSphere Distributed Switch (VDS) and can use either VLAN or VXLAN⁵ encapsulation as attach points to the fabric.
The APIC does not need to create the VDS. When defining a vCenter VMM domain, APIC can operate on an
existing VDS. In that case, APIC expects the existing VDS to be placed in a folder with the same name as the
VDS. Since Cisco ACI 3.1, it is also possible to define a vCenter VMM domain in read-only mode. In read-only
mode, APIC will not provision dvPortGroups in the VDS, but the fabric administrator can leverage the added
visibility obtained by VMM integration.
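As a sketch of how that visibility can be consumed programmatically, the helper below builds a read-only class query against the APIC REST API for the virtual machine inventory learned through VMM integration. The controller address and VM name are hypothetical, and the query URL is only constructed, not sent.

```python
# Hypothetical APIC address; compVm is the object class under which the APIC
# stores virtual machines discovered through a VMM domain.
APIC = "https://apic.example.com"

def class_query(cls: str, **eq_filters) -> str:
    """Build a class-query URL, optionally filtered on attribute equality."""
    url = f"{APIC}/api/node/class/{cls}.json"
    if eq_filters:
        conds = ",".join(f'eq({cls}.{k},"{v}")' for k, v in eq_filters.items())
        url += f"?query-target-filter=and({conds})"
    return url

# All discovered VMs, then only the one whose name matches an assumed VM name.
all_vms = class_query("compVm")
one_vm = class_query("compVm", name="web-01")
```

Because physical endpoints, hypervisors, and VMs all live in the same object tree, the same query pattern answers questions across both domains from one place.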
VMware NSX fundamentals
NSX for vSphere
NSX for vSphere is an evolution of VMware vCloud Networking and Security. At the heart of the system is NSX
Manager, which is similar to the former vShield Manager and is instrumental in managing all other components,
from the installation to provisioning and upgrading.
NSX for vSphere has the following components:
NSX Manager: Provides the northbound API and a graphical management interface through a plug-in to the
vCenter web client. It also automates the installation for all other components and is the configuration and
management element for the NSX distributed firewall.
NSX Controller: An optional component that is required when running VXLAN overlays with a unicast control plane.
Running the VXLAN overlay with multicast flood and learn does not require NSX controllers, and NSX
controllers are not required to run the NSX distributed firewall.
ESXi kernel modules: A set of virtual installation bundles that are installed on each ESXi host during the
NSX host-preparation process to provide distributed firewall, distributed routing, VXLAN capabilities, and
optional data security (known as “guest introspection”).
Distributed Logical Router (DLR) control virtual machines: One or more virtual machines deployed from the
NSX Edge image. When deployed this way, the virtual machine provides the control plane for distributed
logical routers.
NSX Edge Services Gateways (ESGs): One or more virtual machines deployed from the NSX Edge image.
When deployed as an ESG, the virtual machine provides control plane and data plane for Edge features
including the north-south routing that is required to communicate from the VXLAN overlay to external
networks or between different VXLAN overlay environments (between independent NSX Managers or
between NSX transport zones under a single NSX Manager). The ESGs also provide services such as
Network Address Translation (NAT), load balancing, and VPN.
Some customers may adopt NSX for its network virtualization capabilities, while others are interested only in its
security features. With the exception of guest introspection security features that are required for the integration of
certain technology partner solutions, Cisco ACI provides equivalent functionality to NSX and in many cases offers a
superior feature set that better meets real-world customer requirements.
⁵ The APIC can program the VDS to use VXLAN when working with vShield environments or when interacting with NSX Manager as a vShield Manager.
NSX for vSphere network requirements
Hypervisor-based network overlays like those provided by VMware NSX are intended to provide network
virtualization over any physical fabric. Their scope is limited to the automation of VXLAN-based virtual network
overlays created between software-based VXLAN tunnel endpoints (VTEPs) running at the hypervisor level. In the
case of NSX-v, these software VTEPs are only created within the vSphere Distributed Switch by adding NSX
kernel modules that enable VXLAN functionality.
The virtual network overlay must run on top of a physical network infrastructure, referred to as the underlay
network. The NSX Manager and NSX Controller do not provide any level of configuration, monitoring,
management, or reporting for this physical layer.
VMware’s design guide for implementing NSX-v describes a recommended Layer 3 routed design of the physical
fabric or underlay network. The following points summarize the key design recommendations:
Fabric design:
◦ Promotes a leaf-and-spine topology design with sufficient bisectional bandwidth. When network
virtualization is used, mobility domains are expected to become larger and traffic flows between racks
may increase as a consequence. Therefore, traditional or legacy designs may be insufficient in terms of
available bandwidth.
◦ Suggests a Layer 3 access design with redundant ToR switches, limiting Layer 2 within the rack.
◦ Recommends using IP Layer 3 equal-cost multipathing (ECMP) to achieve fabric load balancing and high
availability.
◦ Per-rack (per–ToR switch pair) subnets for vSphere infrastructure traffic, including Management, vMotion,
IP storage (Internet Small Computer Systems Interface [iSCSI], Network File System [NFS]), and NSX VTEP
interfaces.
◦ QoS implemented by trusting Differentiated Services Code Point (DSCP) marking from the VDS.
Server-to-fabric connectivity:
◦ Redundant connections between ESXi hosts and the ToR switches are configured at the VDS level, using
either LACP, routing based on originating port, routing based on source MAC, or active/standby.
The Layer 3 access fabric design imposes constraints on the NSX overlay design. For example, the NSX Edge
services gateways require connecting to VLAN-backed port groups to route between the NSX overlay and any
external networks. On a traditional Layer 3 ToR design, those VLANs will not be present across the infrastructure
but instead will be limited to a small number of dedicated servers on a specific rack. This is one reason the NSX
architecture recommends dedicating vSphere clusters to a single function: running NSX Edge virtual machines in
so-called edge clusters.
Another example is that the VTEP addressing for VXLAN VMkernel interfaces needs to consider per-rack subnets.
That requirement can be accomplished by using Dynamic Host Configuration Protocol (DHCP) with option 82 but
not by using simpler-to-manage NSX Manager IP pools.
Note: Some literature states that when using the Open vSwitch Database (OVSDB) protocol the NSX controller can “manage” the underlay
hardware VTEPs. In fact, this protocol enables only provisioning of basic Layer 2 bridging between VXLAN and VLANs, and it
serves only point use cases, as opposed to full switch management.
This design recommendation also promotes a legacy device-by-device operational model for the data center fabric
that has hindered agility for IT organizations. Furthermore, it assumes that the network fabric will serve only
applications running on ESXi hosts running NSX. But for most customer environments, the network fabric must
also serve bare metal workloads, applications running in other hypervisors, or core business applications from a
variety of vendors, such as IBM, Oracle, and SAP.
vSphere infrastructure traffic (management, vMotion, virtual storage area network [VSAN], fault tolerance, IP
storage, and so forth) that is critical to the correct functioning of the virtualized data center is not considered: it is
neither visible to nor secured by NSX, and as a result it remains the sole responsibility of the physical
network administrator.
From a strategic perspective, the physical fabric of any modern IT infrastructure must be ready to accommodate
connectivity requirements for emerging technologies—for instance, clusters dedicated to container-based
applications such as Kubernetes, OpenShift, and Mesos.
Finally, just as traffic between user virtual machines needs to be secured within the NSX overlay, traffic between
subnets for different infrastructure functions must be secured. By placing the first-hop router at each ToR pair, it is
easy to hop from the IP storage subnet to the management subnet or the vMotion network and vice versa. The
network administrator will need to manage Access Control Lists (ACLs) to prevent this from happening. This
means configuring ACLs in all access devices (in all ToR switches) to provide proper filtering of traffic between
subnets and therefore ensure correct access control to common services like Domain Name System (DNS), Active
Directory (AD), syslog, performance monitoring, and so on.
Customers deploying NSX on top of a Cisco ACI fabric will have greater flexibility to place components anywhere
in the infrastructure, and may also avoid the need to deploy NSX Edge virtual machines for perimeter routing
functions, potentially resulting in significant cost savings on hardware resources and software licenses. They will
also benefit from using Cisco ACI contracts to implement distributed access control, ensuring that infrastructure
networks follow a whitelist, zero-trust model.
Running vSphere Infrastructure as an application with Cisco ACI
vSphere infrastructure
vSphere Infrastructure has become fundamental for a number of customers because so many mission-critical
applications now run on virtual machines. At a very basic level, the vSphere infrastructure is made up of the
hypervisor hosts (the servers running ESXi), and the vCenter servers. vCenter is the heart of vSphere and has
several components, including the Platform Services Controller, the vCenter server itself, the vCenter SQL
database, Update Manager, and others. This section provides some ideas for configuring the Cisco ACI fabric to
deploy vSphere infrastructure services. The design principles described here apply to vSphere environments
regardless of whether they will run NSX.
The vSphere infrastructure generates different types of IP traffic, as illustrated in Figure 8, including management
traffic between the various vCenter components and the hypervisor host agents, vMotion, and storage traffic
(iSCSI, NFS, VSAN). This traffic is handled through kernel-level interfaces (VMkernel network interface cards, or
VMKNICs) at the hypervisor level.
Figure 8.
ESXi VMkernel common interfaces
From a data center fabric perspective, iSCSI, NFS, vMotion, fault tolerance, and VSAN are all just application
traffic for which the fabric must provide connectivity. It is important to note that these applications must also be
secured, in terms of allowing only the required protocols from the required devices where needed.
Lateral movement must be restricted at the infrastructure level as well. For example, from the vMotion VMkernel
interface there is no need to access management nodes, and vice versa. Similarly, VMkernel interfaces dedicated
to connecting iSCSI targets do not need to communicate with other VMkernel interfaces of other hosts in the
cluster. And only authorized hosts on the management network need access to vCenter or to enterprise
configuration management systems such as a Puppet master.
In traditional Layer 3 access fabric designs, where each pair of ToR switches has dedicated subnets for every
infrastructure service, it is very difficult to restrict lateral movement. In a Layer 3 access design, every pair of ToRs
must be the default gateway for each of the vSphere services and route toward other subnets corresponding to
other racks. Restricting access between different service subnets then requires ACL configurations on every
access ToR for every service Layer 3 interface. Limiting traffic within a service subnet is even more complicated—
and practically impossible.
Cisco ACI simplifies configuring the network connectivity required for vSphere traffic. It also enables securing the
infrastructure using Cisco ACI contracts. The next two sections review how to configure physical ports to
redundantly connect ESXi hosts and then how to configure Cisco ACI logical networking constructs to enable
secure vSphere traffic connectivity.
Physically connecting ESXi hosts to the fabric
ESXi software can run on servers with different physical connectivity options. Sometimes physical Network
Interface Cards (NICs) are dedicated for management, storage, and other functions. In other cases, all traffic is
placed on the same physical NICs, and traffic may be segregated by using different port groups backed by different
VLANs.
It is beyond the scope of this document to cover all possible options or provide a single prescriptive design
recommendation. Instead, let’s focus on a common example where a pair of physical NICs is used to obtain
redundancy for ESXi host-to-fabric connectivity. In modern servers, these NICs could be dual 10/25GE or even
dual 40GE.
When using redundant ports, it is better to favor designs that enable active/active redundancy to maximize the
bandwidth available to the server. For instance, when using Cisco ACI EX or FX leaf models, all access ports
support 25 Gbps. With modern server NICs also adding 25GE support, it becomes affordable to have 50 Gbps of
bandwidth available to every server.
In Cisco ACI, interface configurations are done using interface policy groups. For redundant connections to a pair of leaf
switches, a VPC policy group is required. VPCs are configured using VPC policy groups, under Fabric Access
Policies. Within a VPC policy group, the administrator can select multiple policies to control the interface behavior.
Such settings include Storm Control, Control Plane Policing, Ingress or Egress rate limiting, LLDP, and more.
These policies can be reused across multiple policy groups. For link redundancy, port-channel policies must be set
to match the configuration on the ESXi host. Table 1 summarizes the options available in vSphere distributed
switches and the corresponding settings recommended for Cisco ACI port-channel policy configuration.
Table 1.
Cisco ACI port-channel policy configuration

  VDS Teaming and Failover Configuration   Redundancy Expected with Dual vmnic per Host   ACI Port-Channel Policy Configuration
  Route Based on Originating Port          Active/active                                  MAC Pinning
  Route Based on Source MAC Hash           Active/active                                  MAC Pinning
  LACP (802.3ad)                           Active/active                                  LACP Active: Graceful Convergence, Fast Select Hot Standby Ports (remove Suspend Individual Port option)
  Route Based on IP Hash                   Active/active                                  Static Channel Mode On
  Explicit Failover Order                  Active/standby                                 MAC Pinning or Access Policy Group
Of the options shown in Table 1, we do not recommend Explicit Failover Order, as it keeps only one link active. We
recommend LACP as the best option, for the following reasons:
◦ LACP is an IEEE standard (802.3ad) implemented by all server, hypervisor, and network vendors.
◦ LACP is easy to configure, well understood by network professionals, and enables active/active optimal
load balancing.
◦ LACP enables very fast convergence on the VMware VDS, independent of the number of virtual machine
MAC addresses learned.
◦ LACP has extensive support in vSphere, beginning with version 5.1.
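On APIC, the recommended LACP row of Table 1 maps to a port-channel policy object. The sketch below builds the REST payload for it; the class and attribute strings (`lacpLagPol`, `mode`, `ctrl`) follow the commonly documented APIC object model, but verify them against your APIC version before use:

```python
def lacp_active_policy(name):
    """Build an APIC port-channel policy payload: LACP active with
    graceful convergence and fast select of hot-standby ports enabled,
    and the suspend-individual option removed, per Table 1."""
    return {
        "lacpLagPol": {
            "attributes": {
                "name": name,
                "mode": "active",
                # The APIC default ctrl string also includes
                # "susp-individual"; it is deliberately omitted here.
                "ctrl": "fast-sel-hot-stdby,graceful-conv",
            }
        }
    }

# The payload would typically be POSTed to
# https://<apic>/api/mo/uni/infra.json after authenticating.
policy = lacp_active_policy("LACP-Active")
```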
Another key setting of the interface policy groups is the Attachable Entity Profile (AEP). AEPs can be considered
the “where” of the fabric configuration and are used to group domains with similar requirements.
AEPs are tied to interface policy groups. One or more domains (physical or virtual) can be added to an AEP.
Grouping domains into AEPs and associating them enables the fabric to know where the various devices in the
domain live, so the APIC can push and validate VLANs and policy where they need to go.
AEPs are configured in the Global Policies section of Fabric Access Policies. Specific to vSphere, ESXi hosts can
be connected to the fabric as part of a physical domain or a Virtual Machine Manager (VMM) domain, or both. An
AEP should be used to identify a set of servers with common access requirements. It is possible to use anything
from a single AEP for all servers to one AEP per server at the other extreme. The best design is to use an AEP for a
group of similar servers, such as one AEP per vSphere cluster or perhaps one AEP per NSX transport zone.
Figure 9, for instance, shows a VPC policy group associated with an AEP for a particular vSphere environment
that has both a VMM and a physical domain associated.
Each domain type will have associated encapsulation resources, such as a VLAN or VXLAN pool. Having interface
configurations such as VPC policy groups associated to AEPs simplifies several tasks, for instance:
Network access that must be present for all servers of a particular kind can be implemented by associating
the relevant EPGs to the AEP directly. For example, all ESXi hosts in a particular vSphere environment
require connections to vMotion, Management, or IP storage networks. Once the corresponding EPGs are
attached to the AEP for the ESXi hosts, all leaf switches with connected ESXi hosts are automatically
configured with those EPG network encapsulations and the required bridge domains, Switch Virtual
Interfaces (SVIs), and VRF tables, without any further user intervention.
Encapsulation mistakes can be identified and flagged by the APIC. If the network administrator chooses a
VLAN pool for a group of servers or applications, the VLAN ID pool will be assigned to the corresponding
domain and, by means of the AEP, associated to relevant interfaces. If a VLAN from the wrong pool is then
chosen by mistake for a port or VPC connecting to a particular server type (as identified by the AEP), the
APIC will flag an error on the port and the EPG.
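The AEP-to-domain grouping described above can also be provisioned programmatically. The sketch below builds the REST payload for an AEP holding both a physical and a VMM domain; the class names (`infraAttEntityP`, `infraRsDomP`) follow the commonly documented APIC object model, and the AEP and domain names are hypothetical:

```python
def aep_with_domains(aep_name, domain_dns):
    """Build an Attachable Entity Profile payload that groups the given
    physical and/or VMM domains as infraRsDomP children."""
    return {
        "infraAttEntityP": {
            "attributes": {"name": aep_name},
            "children": [
                {"infraRsDomP": {"attributes": {"tDn": dn}}}
                for dn in domain_dns
            ],
        }
    }

# Hypothetical example: one AEP per vSphere environment, referencing
# both the physical domain and the vCenter VMM domain, as in Figure 9.
payload = aep_with_domains(
    "AEP-vSphere-Prod",
    ["uni/phys-vSphere-Phys", "uni/vmmp-VMware/dom-vSphere-VMM"],
)
```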
Figure 9.
ESXi host connected using a VPC policy group
Mapping vSphere environments to Cisco ACI network and policy model
The Cisco ACI solution provides multiple options for achieving various levels of isolation for applications. You can
use different VRF tables to achieve Layer 3 level isolation, use different bridge domains and EPGs for Layer 2
isolation, and use contracts to implement Layer 2–4 access control between or within EPGs.
In addition, the fabric administrator can also leverage the Cisco ACI tenancy model to create administrative
isolation on top of logical network isolation. This can be useful for customers that have development or testing
environments that must be completely isolated from production, or in environments where fabric resources are
shared by multiple organizations.
For an IT organization without specific requirements for multi-tenancy, vSphere infrastructure traffic is probably well
served by keeping it under the common tenant in Cisco ACI. As mentioned earlier, the common tenant facilitates
sharing resources with other tenants. Also, because the common tenant is one of the APIC system tenants it
cannot be deleted.
That said, for administrative reasons, it may be desirable to keep a dedicated tenant for infrastructure applications
such as vSphere traffic. Within this tenant, connectivity for vSphere and other virtualization platforms and
associated storage is configured using Cisco ACI application profiles. This method allows the fabric administrator
to benefit from APIC automatically correlating events, logs, statistics, and audit data specific to the infrastructure.
Figure 10 shows a tenant called Virtual-Infrastructure and indicates how the fabric administrator can see at a
glance the faults that impact the tenant. In this view, faults are automatically filtered out to show only those relevant
to infrastructure traffic.
Figure 10.
View of tenant-level fault aggregation in APIC
From a networking perspective, we recommend using different bridge domains for different vSphere traffic types.
Figure 11 shows a configuration example with a bridge domain and an associated subnet for each of the main
types of traffic: management, vMotion, IP storage, and hyper-converged storage (for example, VMware vSAN or
Cisco HyperFlex™ server nodes). Each bridge domain can be configured with large subnets and can expand
across the fabric, serving many clusters. In its simplest design option, all VMkernel interfaces for specific
functions are grouped into a single EPG by traffic type.
Figure 11.
Configuration example with a bridge domain and associated subnet for each main traffic type
Within each bridge domain, traffic sources are grouped into EPGs. Each ESXi host therefore represents not one
but a number of endpoints, with each VMkernel interface being an endpoint in the fabric with its own policy.
Within each of the infrastructure bridge domains, the VMkernel interfaces for every specific function can be
grouped into a single EPG per service, as shown in Figure 11.
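The per-service bridge domain and EPG pairing just described can be sketched as an APIC REST payload. Class names (`fvBD`, `fvSubnet`, `fvAEPg`, `fvRsBd`, `fvRsCtx`) follow the commonly documented APIC object model; the tenant, VRF, and naming convention are hypothetical:

```python
def service_bd_and_epg(vrf, service, gateway_cidr):
    """Build a per-service bridge domain (with its distributed gateway
    subnet) and a single EPG bound to it, as in the Figure 11 design."""
    bd_name, epg_name = "BD-" + service, "EPG-" + service
    bd = {
        "fvBD": {
            "attributes": {"name": bd_name, "unicastRoute": "yes"},
            "children": [
                {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
                {"fvSubnet": {"attributes": {"ip": gateway_cidr}}},
            ],
        }
    }
    epg = {
        "fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd_name}}}],
        }
    }
    return bd, epg

# Hypothetical example: the vMotion service under an infrastructure VRF.
bd, epg = service_bd_and_epg("VRF-Infra", "vMotion", "10.10.20.1/24")
```

An orchestrator would POST both objects under the chosen tenant (for example, the Virtual-Infrastructure tenant discussed earlier).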
Obtaining per-cluster visibility in APIC
The previous model of one EPG or bridge domain per vSphere service enables the simplest configuration possible.
It is similar to legacy deployments where a single VLAN is used for all vMotion VMKNICs, for instance, albeit with
the benefits of a distributed default gateway, Layer 2 and Layer 3 ECMP, and so forth.
Although simple is usually good, such an approach limits the understanding of the vSphere infrastructure at the
network level. For example, if you have 10 vSphere clusters, each of which has 32 servers, you will have 320
endpoints in the vMotion EPG. By looking at any two endpoints, it is impossible to tell whether vMotion traffic was
initiated by vCenter (as is the case inside a VMware Distributed Resource Scheduler [DRS] cluster) or by an
administrator (as is the case between two clusters).
It may be convenient to represent the vSphere cluster concept in the network configuration. The Cisco ACI EPG
model enables fabric administrators to accomplish this without imposing changes in the IP subnetting for the
infrastructure.
For instance, VMkernel interfaces for each function may be grouped on a per-rack or per-vSphere-cluster basis.
Again, this does not have to impact subnetting: multiple EPGs can be associated to the same bridge domain, so
all clusters can still have the same default gateway and subnet for a particular traffic type. This approach, shown
in Figure 12, enables more granular control and provides a simple and cost-effective way for the fabric
administrator to gain visibility by cluster. Because APIC automatically correlates statistics, audit data, events, and
health scores at the EPG level, these now represent a specific rack or cluster. The per-cluster EPGs for each
traffic type require extra configuration, but it is easy to automate at the time of cluster creation: an orchestrator
such as Cisco UCS Director or vRealize Orchestrator can create the EPGs at the same time the cluster is being
set up. This method has no impact on IP address management, as all clusters can still share the same subnet,
and it takes advantage of APIC's automatic per-application and per-EPG statistics, health scores, and event
correlation.
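The orchestrator step that stamps out one EPG per cluster and traffic type, all reusing the shared per-service bridge domains, can be sketched as follows (cluster, service, and object names are hypothetical; class names follow the commonly documented APIC object model):

```python
def per_cluster_epgs(clusters, services):
    """One EPG per (cluster, service) pair; each EPG binds to the shared
    per-service bridge domain, so IP subnetting is unchanged."""
    epgs = []
    for cluster in clusters:
        for service in services:
            epgs.append({
                "fvAEPg": {
                    "attributes": {"name": "EPG-%s-%s" % (service, cluster)},
                    "children": [
                        # All clusters share BD-<service> and its subnet.
                        {"fvRsBd": {"attributes": {"tnFvBDName": "BD-" + service}}}
                    ],
                }
            })
    return epgs

# Hypothetical example: two clusters, two infrastructure traffic types.
epgs = per_cluster_epgs(["COMPUTE-01", "COMPUTE-02"], ["vMotion", "Mgmt"])
```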
Figure 12.
Configuration example with VMkernel interfaces grouped by function and Cluster
Let’s continue looking at vMotion traffic as an example. Within a vSphere cluster, all VMkernel interfaces for
vMotion are grouped into an EPG. EPG traffic statistics now represent vMotion within the cluster. Figure 13 shows
how the fabric administrator can look at the tenant-level statistics to view an aggregate of all the traffic for the
entire vSphere infrastructure (not virtual machine traffic, only infrastructure traffic), then drill down into
vMotion-specific traffic to identify a spike, and finally check that the vMotion activity was within a specific EPG for
the COMPUTE-01 cluster.
Figure 13.
Monitoring vMotion-specific traffic on a per-cluster basis
The same approach works for iSCSI, Cisco HyperFlex, or VSAN traffic. It is convenient for troubleshooting and
capacity planning to be able to correlate traffic volumes to a specific cluster and/or to communications between
clusters. Table 2 summarizes the recommended bridge domains, their settings, and associated EPGs to provide
vSphere infrastructure connectivity, whether single or multiple EPGs are used.
Table 2.
Recommended bridge domains, settings, and EPGs for vSphere infrastructure connectivity

Each per-service bridge domain (with its corresponding subnet, such as the vMotion subnet or the iSCSI subnet)
uses the same settings:

  Hardware Proxy: Yes
  ARP Flooding: Yes
  L3 Unknown Multicast Flooding: Flood
  Multi-Destination Flooding: Flood in BD
  Unicast Routing: Yes
  Enforce Subnet Check: No
A common question is whether a subnet must be configured under the bridge domain for services that will not be
routed. For instance, for services like vMotion, NFS, or iSCSI you have the option not to configure a subnet on the
bridge domain. However, if no subnet is configured, then Address Resolution Protocol (ARP) flooding must be
enabled when hardware proxy is used. ARP flooding is not strictly required as long as a subnet is configured and
unicast routing is enabled on the bridge domain, because in that case the Cisco ACI spines can perform ARP
gleaning when hardware proxy is configured.
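The Table 2 settings translate into bridge domain attributes roughly as sketched below. The attribute names (`unkMacUcastAct`, `arpFlood`, and so on) follow the commonly documented APIC object model and should be verified against your APIC version; the subnet check setting is omitted because it is configured elsewhere:

```python
def infra_bd(name, gateway=None):
    """Bridge domain with the Table 2 settings: hardware proxy, ARP
    flooding, flood unknown L3 multicast, flood multi-destination
    traffic in the BD, and unicast routing enabled. A gateway subnet
    is optional, per the discussion of non-routed services."""
    attrs = {
        "name": name,
        "unkMacUcastAct": "proxy",     # Hardware Proxy: Yes
        "arpFlood": "yes",             # ARP Flooding: Yes
        "unkMcastAct": "flood",        # L3 Unknown Multicast: Flood
        "multiDstPktAct": "bd-flood",  # Multi-Destination: Flood in BD
        "unicastRoute": "yes",         # Unicast Routing: Yes
    }
    children = []
    if gateway:
        children.append({"fvSubnet": {"attributes": {"ip": gateway}}})
    return {"fvBD": {"attributes": attrs, "children": children}}

# Hypothetical example: a vMotion bridge domain with a gateway subnet.
vmotion_bd = infra_bd("BD-vMotion", "10.10.20.1/24")
```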
Securing vSphere Infrastructure
The designs outlined above include examples of using a single EPG per service and multiple EPGs per service.
When all vMotion VMKNICs are on the same EPG it is clear that vMotion can work, because the default policy for
traffic internal to an EPG is to allow all communications. When using multiple EPGs, however, the default is to
block all communication between different EPGs. Therefore, for the model in Figure 12 to work and allow
intercluster vMotion, this default must be changed.
One way is to place all the EPGs in Figure 12 in the “preferred group.” Another way is to disable policy
enforcement on the VRF table where these EPGs belong. In either case, the fabric behaves like a traditional
network, allowing traffic connectivity within and between subnets.
However, it is safer and recommended to implement a zero-trust approach to infrastructure traffic. Let’s use the
vMotion example from the previous section to illustrate how to use the Cisco ACI contract model to secure the
vSphere infrastructure using a zero-trust model.
In the design where an EPG is used for each service and cluster, intercluster vMotion communication requires
inter-EPG traffic. Fabric administrators can leverage the Cisco ACI contract model to ensure that only vMotion
traffic is allowed within the vMotion EPG or between vMotion EPGs (or both).
Figure 14 shows an example where per-cluster EPGs have been configured for vMotion, all within the same bridge
domain (same subnet). A contract called vMotion-Traffic is configured that allows only the required ports and
protocols for vSphere vMotion. This contract can be associated to all vMotion EPGs. To allow vMotion traffic
between clusters, all vMotion EPGs will consume and provide the contract. To allow vMotion traffic within a cluster,
but restricted to vMotion ports and protocols, each vMotion EPG will add an Intra-EPG contract association. Once
this is done, only vMotion traffic is accepted by the network from the vMotion VMkernel interfaces; the fabric will
apply the required filters on every access leaf in a distributed way. The same concept can be applied to NFS,
iSCSI, management, and so forth: contracts can be used to allow only the required traffic.
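The vMotion-Traffic contract and its EPG associations can be sketched as the following payloads. Class names (`vzFilter`, `vzBrCP`, `fvRsProv`, `fvRsCons`, `fvRsIntraEpg`) follow the commonly documented APIC object model; TCP 8000 is the port traditionally used by vMotion, but confirm the ports for your ESXi version against the referenced VMware Knowledge Base article:

```python
def vmotion_contract_objects(contract="vMotion-Traffic"):
    """Filter allowing TCP/8000 plus a contract subject wrapping it."""
    flt = {
        "vzFilter": {
            "attributes": {"name": "vMotion-ports"},
            "children": [{
                "vzEntry": {"attributes": {
                    "name": "tcp-8000", "etherT": "ip", "prot": "tcp",
                    "dFromPort": "8000", "dToPort": "8000"}}
            }],
        }
    }
    brcp = {
        "vzBrCP": {
            "attributes": {"name": contract},
            "children": [{
                "vzSubj": {
                    "attributes": {"name": "vmotion"},
                    "children": [{"vzRsSubjFiltAtt": {
                        "attributes": {"tnVzFilterName": "vMotion-ports"}}}],
                }
            }],
        }
    }
    return flt, brcp

def epg_contract_bindings(contract="vMotion-Traffic"):
    """Children added to each vMotion EPG: provide + consume for
    inter-cluster vMotion, and an intra-EPG association so that traffic
    inside the cluster is also restricted to the contract filters."""
    return [
        {"fvRsProv": {"attributes": {"tnVzBrCPName": contract}}},
        {"fvRsCons": {"attributes": {"tnVzBrCPName": contract}}},
        {"fvRsIntraEpg": {"attributes": {"tnVzBrCPName": contract}}},
    ]
```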
Fabric administrators must pay special attention when configuring intra-EPG contracts. Support for intra-EPG
contracts requires Cisco ACI 3.0 or later and is provided only on Cisco Nexus EX and FX leaf switches or later
models. In addition, when intra-EPG contracts are used, the fabric implements proxy ARP. Therefore, when the
contract is applied to an EPG with known endpoints, traffic will be interrupted until the endpoint ARP cache expires
or is cleaned. Once this is done, traffic will resume for the traffic allowed by the contract filters. (This interruption is
not an issue in green field deployments.)
Note: A VMware Knowledge Base article lists the required ports for the different vSphere services depending on the ESXi
version.
Figure 14.
Configuration with multiple vSphere DRS clusters using per-cluster vMotion EPGs
The fabric administrator has a view of specific VMkernel interfaces, easily mapping the IP address or MAC
address to the appropriate access leaf, access port, or VPC with cluster context and filter information, as shown in
Figure 14.
One primary advantage of the EPG model for vSphere infrastructure is the security enhancement it provides.
Another important advantage is operational, as it is easy for each vSphere administrator to view statistics and
health scores by cluster, as illustrated in Figure 13.
VMware vSwitch design and configuration considerations
Another important consideration is how to map EPGs to port group configurations in vCenter. The vSphere VDS
can be connected to the Cisco ACI leaf switches as a physical domain, as a VMM domain, or both.
Using a VMM domain integration provides various benefits. From a provisioning standpoint, it helps ensure that the
vSwitch dvUplinks are configured in a way that is consistent with the access switch port configuration. It also helps
to automate creating dvPortGroups with the correct VLAN encapsulation for each EPG and to avoid configuring the
EPG on specific access switches or ports. From an operational standpoint, using a VMM domain increases the
fabric administrator’s visibility into the virtual infrastructure, as shown in Figure 15.
Figure 15.
Example of data from one hypervisor in one of the VMM domains
When using a VMM domain, the APIC leverages vCenter’s northbound API to get access to the VDS, giving the
fabric administrator several advantages:
Ensures that the dvUplink configurations of the VDS match those of the fabric.
Monitors the VDS statistics from the APIC, to view dvUplink, VMKNIC, and virtual machine–level traffic
statistics.
Automates dvPortGroup configuration by mapping EPGs created on APIC to the VMM domain. APIC
creates a dvPortGroup on the VDS and automatically assigns a VLAN from the pool of the VMM domain,
thus completely automating all network configurations.
Automates EPG and VLAN provisioning across physical and virtual domains. When the vCenter
administrator assigns VMKNICs to a dvPortGroup provisioned by the APIC, the latter automatically configures
the EPG encapsulation on the required physical ports on the switch connecting to the server.
Enables the fabric administrator to have a contextual view of the virtualization infrastructure. APIC provides
a view of the vCenter inventory and uses it to correlate virtual elements with the fabric.
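The EPG-to-VMM-domain association that triggers this automated dvPortGroup creation is a single child object on the EPG. A sketch of its payload, with attribute names (`fvRsDomAtt`, `resImedcy`, `instrImedcy`) per the commonly documented APIC object model and a hypothetical domain DN:

```python
def epg_vmm_association(vmm_domain_dn, resolution="immediate",
                        deployment="immediate"):
    """fvRsDomAtt child for an EPG: once posted, APIC creates the
    corresponding dvPortGroup and allocates a VLAN from the VMM
    domain's pool."""
    return {
        "fvRsDomAtt": {
            "attributes": {
                "tDn": vmm_domain_dn,
                "resImedcy": resolution,    # resolution immediacy
                "instrImedcy": deployment,  # deployment immediacy
            }
        }
    }

# Hypothetical VMM domain DN for a vCenter-managed VDS.
assoc = epg_vmm_association("uni/vmmp-VMware/dom-vSphere-VMM")
```

Immediacy values of `immediate` versus `lazy` trade earlier policy programming on the leaf against resource usage; either works for infrastructure EPGs.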
Figure 16 represents a VDS with redundant uplinks to a Cisco ACI leaf pair. The VDS can be automatically created
by APIC when the VMM domain is configured, or it can be created by the vCenter administrator prior to VMM
configuration. If it was created by the vCenter administrator, the fabric administrator must use the correct VDS
name at the moment of creation of the VMM domain. The APIC ensures that the dvUplinks have the correct
configuration.
Figure 16.
VDS with redundant uplinks to a Cisco ACI leaf pair
When the fabric administrator creates EPGs for vSphere services, as in Figures 11 and 12 earlier in the document,
they only need to associate the EPGs to the VMM domain. No further configuration is required; the APIC
configures the required dvPortGroups and physical switch ports automatically.
If a VMM domain is not used, then the EPGs for vSphere infrastructure must be mapped to a physical domain. The
dvPortGroups must be configured separately by the vCenter administrator using the VLAN encapsulation
communicated by the fabric administrator. In this case, it is best to use statically assigned VLANs.
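Without VMM integration, the same EPG is instead bound to a physical domain and, unless the AEP mapping described later is used, to explicit static paths with a fixed VLAN. A sketch of those EPG children (class names per the commonly documented APIC object model; the domain name, VPC path, and VLAN ID are hypothetical):

```python
def epg_physical_bindings(phys_domain_dn, vpc_path_dn, vlan_id):
    """Children for an EPG mapped to a physical domain: the domain
    association plus a static path with a statically assigned VLAN."""
    return [
        {"fvRsDomAtt": {"attributes": {"tDn": phys_domain_dn}}},
        {"fvRsPathAtt": {"attributes": {
            "tDn": vpc_path_dn,
            "encap": "vlan-%d" % vlan_id,
            "mode": "regular",  # trunked; the dvPortGroup tags this VLAN
        }}},
    ]

# Hypothetical VPC path to a pair of leaf switches (nodes 101-102).
bindings = epg_physical_bindings(
    "uni/phys-vSphere-Phys",
    "topology/pod-1/protpaths-101-102/pathep-[VPC-ESXi-01]",
    1030,
)
```

The vCenter administrator then creates the matching dvPortGroup tagged with the same VLAN ID.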
The fabric administrator then needs to configure the required EPGs on the access leaf switches. Although VMM
integration offers the simplest configuration, Cisco ACI still offers advantages in simplicity and automation over
traditional fabrics when using physical domains.
For instance, going back to the design in Figure 11, which is partially reproduced in Figure 17, we can see that this
design approach is extremely simple to configure in Cisco ACI, even for large deployments using physical
domains.
Assuming that an Attachable Entity Profile (AEP) has been configured for ESXi servers in a particular vSphere
environment, it is sufficient to associate the EPGs from Figure 11 to the corresponding AEP. The APIC will take
care of automatically configuring the correct VLAN encapsulation and the distributed default gateway for the
corresponding subnet in every leaf switch where the AEP is present. Adding new ESXi hosts requires only
configuring a VPC and selecting the correct AEP, nothing more. As illustrated in Figure 17, associating the service
EPGs to the AEP automatically provisions all the required VLAN encapsulations, bridge domains, SVIs, and VRF
tables on all required access switches.
Figure 17.
Configuring EPGs at the AEP used on all VPC policy groups for all ESXi hosts on the vSphere clusters
We recommend using VMM domains for the simplified configuration and enhanced visibility they provide, as
outlined above. Table 3 compares the use of VMM to that of physical domains, specifically in the context of
vSphere infrastructure.
Table 3.
Comparing VMM and physical domains for vSphere infrastructure

  VDS Connection to ACI                           Physical Domain                              VMM Domain
  APIC connection to the vCenter API              Not required                                 Required
  for VDS management
  dvUplinks port configuration                    Manually by virtual administrator            Automated by APIC
  dvPortGroups configuration                      Manually by virtual administrator            Automated by APIC
  VLAN assignment                                 Static                                       Dynamic or static
  EPG configuration on leaf switches              Static path or EPG mapped to AEP, or both    Automated by APIC
  VMkernel NIC (VMKNIC) visibility in the fabric  IP and MAC addresses                         IP and MAC addresses, hypervisor association and configuration
  Virtual Machine NIC (VMNIC) visibility          Not available                                Hypervisor association, faults, statistics
  in the fabric
  Virtual machine level visibility                IP and MAC addresses                         Virtual machine objects, virtual NIC configuration, statistics
  Consolidated statistics                         Not available                                APIC can monitor VDS statistics
Starting with Cisco ACI 3.1, it is also possible to configure a read-only VMM domain. In that mode, APIC interfaces with an existing VDS in view-only mode. A read-only VMM domain does not enable automated configuration by APIC but provides many of the visibility capabilities outlined above.
Option 1. Running NSX-v security and virtual services using a Cisco ACI integrated overlay for
network virtualization
Some customers are interested in using VMware NSX security capabilities, or perhaps using NSX integration with
specific ecosystem partners, but do not want to incur the complexity associated with running the NSX overlay, such
as deploying and operating the NSX VXLAN controllers, Distributed Logical Router (DLR) control virtual machines,
ESG virtual machines, and so forth. Sometimes separation of roles must also be maintained, with network teams
responsible for all network connectivity and virtualization or security teams responsible for NSX security features.
These customers can utilize the integrated overlay capabilities of the Cisco ACI fabric for automated and dynamic
provisioning of connectivity for applications while using the NSX network services such as the ESG Load Balancer,
Distributed Firewall, and other NSX security components and NSX technology partners.
Using NSX Distributed Firewall and Cisco ACI integrated overlays
This design alternative uses supported configurations on both Cisco ACI and VMware NSX. The Cisco ACI fabric
administrator works with the VMware vSphere Administrator to define a VMM domain on the APIC for vSphere
integration, as described in earlier parts of this document.
The VMM domain enables APIC to have visibility into the vCenter inventory. When the VMM domain is configured,
the APIC uses a standard vSphere Distributed Switch (VDS) that corresponds to the VMM domain on APIC. As this
document is being written, only VLAN mode is supported for the VMM integration and APIC can use any supported
VDS version. The APIC VMM domain integration interacts with the VDS through vCenter in the same way as when
PowerShell, vRealize Orchestrator, or another orchestration system is used.
The NSX administrator installs the NSX Manager and registers it with the same vCenter used for the APIC VMM
domain. Then the NSX administrator continues with the NSX installation by preparing hosts in the vSphere clusters
to install the NSX kernel modules. However, there is no need to deploy or maintain NSX controllers, nor to
configure VXLAN or NSX transport zones.
The option to deploy NSX security features without adding the NSX controllers and VXLAN configuration is familiar to users of vCloud Networking and Security. The NSX Distributed Firewall, Service Composer, Guest Introspection,
and Data Security features are fully functional and can work without the use of NSX overlay networking features.
This type of installation is shown in Figure 18.
At the time of this writing, there must be a 1:1 mapping between NSX Manager and vCenter. APIC does not have this limitation and can support multiple VMM domains at once, whether for multiple vCenter servers or for other vendors' virtualization managers. Therefore, you can have multiple vCenter and NSX Manager pairs mapped to a single Cisco ACI fabric.
Figure 18.
Installation screens for deploying an NSX environment and its security features
In this model of operation, the connectivity requirements for virtual machine traffic are provisioned on APIC by
creating bridge domains and EPGs, instead of creating logical switches and DLRs. The required bridge domains
and EPGs can be configured on APIC from a variety of interfaces, including the APIC GUI, NX-OS CLI, or APIC
REST API; through a cloud management platform; or from the vCenter web client by using the Cisco ACI vCenter
plug-in mentioned earlier in this document. APIC also supports out-of-the-box integration with key cloud
management platforms, including VMware vRealize Automation, Cisco CloudCenter, and others, to automate these configurations.
In a nutshell, instead of deploying NSX ESG and DLR virtual machines and configuring NSX logical switches, the
administrator defines VRF tables, bridge domains, and EPGs and maps the EPGs to the VMM domain. Instead of
connecting virtual machines to logical switches, the virtual administrator connects virtual machines to the
dvPortGroups created to match each EPG.
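The EPG-to-VMM-domain mapping described above can likewise be expressed as a REST payload. This is a hedged sketch: the tenant, bridge domain, EPG, and VMM domain names are hypothetical, and the attributes should be verified against your APIC release.

```python
def epg_with_vmm(epg, bd, vmm_dom):
    """Payload sketch creating an EPG bound to a bridge domain (fvRsBd)
    and attached to a vCenter VMM domain (fvRsDomAtt); the domain
    attachment is what triggers automatic dvPortGroup creation.
    All names are hypothetical."""
    return {"fvAEPg": {
        "attributes": {"name": epg},
        "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": bd}}},
            {"fvRsDomAtt": {"attributes": {
                "tDn": f"uni/vmmp-VMware/dom-{vmm_dom}",
                "resImedcy": "immediate"}}}]}}

# Would be POSTed to https://<apic>/api/node/mo/uni/tn-<tenant>/ap-<ap>.json
body = epg_with_vmm("Web_Prod", "Web_BD", "vSphere-VMM")
```

Once the EPG exists, the vSphere administrator simply connects virtual machines to the resulting dvPortGroup.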
We recommend using a different Cisco ACI tenant for virtual machine data traffic from the one used for vSphere
Infrastructure connectivity. This method ensures isolation of user data from infrastructure traffic as well as
separation of administrative domains. For instance, a network change for a user tenant network can never affect
the vSphere infrastructure, and vice versa.
Within the user tenant (or tenants), network connectivity is provided using VRF tables, bridge domains, and EPGs
with associated contracts to enable Layer 3 and Layer 2 forwarding as described for infrastructure traffic earlier in
this document. Virtual machines leverage Cisco ACI Distributed Default Gateway technology in hardware to enable
optimal distributed routing without performance compromises, regardless of the nature of the workload (physical,
virtual, containers) or traffic flow (east-west and north-south).
We can illustrate this model using a classic example of a three-tier application with web, application, and database
tiers, as shown in Figure 19. The web and app tiers are running in vSphere virtual machines. The database tier is running on bare metal. Additional requirements include low-latency access from the WAN, load balancing, and secure
connectivity between tiers.
Figure 19.
A typical three-tier application with a nonvirtualized database
This setup can be configured using the fabric for distributed routing and switching. Figure 20 shows a VRF table,
three bridge domains for the various application tiers, and an L3Out interface to connect to endpoints external to
the fabric. Cisco ACI provides distributed routing and switching. Connectivity between bare metal and virtual
machines is routed internally in the fabric, and connectivity with external endpoints is routed via physical ports that
are part of L3Out.
Figure 20.
The ACI fabric provides programmable distributed routing and switching for all workloads
Figure 21 shows the rest of the logical configuration, with three EPGs corresponding to our application tiers
(web, application, database). The DB_Prod EPG is mapped to a physical domain to connect bare metal databases,
while the Web_Prod and App_Prod EPGs are mapped to the vSphere VMM domain so that the APIC automatically
creates corresponding dvPortGroups on the VDS using standard vSphere API calls to vCenter. The port groups
are backed using locally significant VLANs that are dynamically assigned by APIC. All routing and switching
functionality is done in hardware in the Cisco ACI fabric. The default gateway for virtual machines is implemented
by the distributed default gateway on every Cisco ACI leaf switch that has attached endpoints on those EPGs. The
EPGs can map to both virtual and physical domains and enable seamless connectivity between virtual machines
and nonvirtual devices. In the vSphere domain, each EPG corresponds to a dvPortGroup that is automatically
created by APIC by means of the VMM domain integration.
In the figure, the Web_Prod and App_Prod EPGs are also placed in the Preferred group, allowing unrestricted
connectivity between these EPGs, just as if they were logical switches connected to a DLR. The only difference
from using NSX DLRs is that if two virtual machines are located on the same ESXi host, traffic is routed at the leaf.
However, on a given vSphere cluster only a small percentage of traffic could ever stay local to the hypervisor, and switching through the leaf provides low-latency, line-rate connectivity.
Figure 21.
Basic example of an application network profile, with three EPGs and contracts between them
In this way, the administrator can easily ensure that all EPGs created for connecting virtual machines (that is, those mapped to the VMM domain) are placed in the Preferred group, while policy between them is controlled using the NSX Distributed Firewall (DFW).
The advantage of this model of network virtualization is that no gateways are required when a virtual machine
needs to communicate with endpoints outside of vSphere. In addition, as the NSX DFW cannot configure security
policies for physical endpoints, the DB_Prod EPG, and any other bare metal or external EPG can be placed
outside of the Preferred group, and Cisco ACI contracts can be used to provide security. This functionality is not
limited to communication with bare metal. For instance, in the same way, the fabric administrator can enable a
service running on a Kubernetes cluster to access a vSphere virtual machine without involving any gateways.
The vSphere administrator can create virtual machines and place them into the relevant dvPort groups using
standard vCenter processes, API calls, or both. This process can be automated through vRealize Automation,
Cisco CloudCenter, or other platforms. The virtual machines can then communicate with other virtual machines in
the different application tiers as per the policies defined in the NSX DFW. All routing and bridging required between
virtual machines happens in a distributed way in the fabric hardware using low-latency switching. This method
means that virtual machines can communicate with bare metal servers or containers on the same or on different
subnets without any performance penalties or bottlenecks. Security policies between endpoints that are not inside
vSphere can be configured using Cisco ACI contracts.
The vSphere hosts have the NSX kernel modules running, and the NSX administrator can use all NSX security
features, including NSX Distributed Firewall and Data Security for Guest Introspection, as shown in Figure 22. The
administrator can also add antivirus or antimalware partner solutions.
Figure 22.
Example of NSX security features: Service Composer is used to create security groups for application and web tiers
routed by the Cisco ACI fabric
The approach of combining NSX security features with Cisco ACI integrated overlay offers the clear advantage of
better visibility, as shown in Figure 23. Virtual machines that are part of a security group are also clearly identified
as endpoints in the corresponding EPG.
Figure 23.
Virtual machines that are part of an NSX security group are visible inside the APIC EPG
It is important to note that, in this model, the use of Cisco ACI contracts is entirely optional. Since we are assuming
that the NSX DFW will be used to implement filtering or insert security services, the fabric administrator may
choose to disable contract enforcement inside the VRF tables to eliminate the need for contracts. However, instead
of completely disabling policy inside the VRF, we recommend placing EPGs mapped to the VMM domain in the
Preferred group to allow open communication between them while allowing the fabric administrator to use
contracts for other EPGs inside the same VRF.
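For completeness, disabling contract enforcement VRF-wide is also a single attribute on the VRF object (fvCtx). The snippet below is a sketch with a hypothetical VRF name:

```python
def vrf_enforcement(vrf, enforced=True):
    """Payload setting contract enforcement on a VRF (fvCtx attribute
    pcEnfPref). 'unenforced' disables contract filtering for the whole
    VRF; as noted in the text, placing VM EPGs in the preferred group
    is usually the better option. VRF name is hypothetical."""
    return {"fvCtx": {"attributes": {
        "name": vrf,
        "pcEnfPref": "enforced" if enforced else "unenforced"}}}

open_vrf = vrf_enforcement("Prod_VRF", enforced=False)
```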
This design approach does not limit NSX to security features only. Other services, such as using the ESG for load
balancing, can also be used.
Both NSX Distributed Firewall and Cisco ACI security are limited to Layer 2–4 filtering. Full-featured firewalls are
also capable of filtering based on URLs, DNS, IP options, packet size, and many other parameters. In addition,
Next-Generation Firewalls (NGFWs) are also capable of performing deep packet inspection to provide advanced
threat management.
It may be desirable to insert more advanced security services to protect north-south or east-west traffic flows, or
both. In this context, east-west commonly refers to traffic between virtual machines in vSphere clusters with a
common NSX installation. Any other traffic must be considered north-south for NSX, even if it is traffic to endpoints
connected in the fabric.
The need for service insertion is not limited to NGFWs; it extends to other security services such as next-generation network intrusion prevention, or to advanced Application Delivery Controllers (ADCs) that can also perform SSL
offload in hardware for high-performance web applications. Figure 24 illustrates the various insertion points
available in this design. Because Cisco ACI provides the overlay where all endpoints are connected, organizations
can use Cisco ACI service graphs to insert services such as physical and virtual NGFWs between tiers without
affecting NSX service insertion, which does not leverage VXLAN and is independent of the NSX controllers.
Service insertion is implemented using the NSX API and can be used to insert virtual services between virtual
machines running in the vSphere hypervisors.
Figure 24.
Advanced services can be added using both Cisco ACI and NSX service partners
Option 2. NSX-v overlays as an application of the Cisco ACI fabric
Using the Cisco ACI integrated overlay offers many advantages, and customers who look at NSX to implement
better security for their vSphere environments are encouraged to follow that model.
Here we will explain instead how to best configure Cisco ACI in the case where customers also use NSX overlay networking.
NSX-v VXLAN architecture
This section describes various alternatives for running NSX-v VXLAN overlay capabilities on a Cisco ACI fabric.
Figure 25 shows a general representation of the reference architecture for NSX-v as outlined in the NSX for
vSphere Design Guide. In the NSX reference architecture, VMware recommends dedicating compute resources for
user applications and for running NSX Edge functions, all connected through a leaf-and-spine fabric to maximize
bisectional bandwidth. In the figure, servers that cannot be part of NSX overlay are shown in blue, including bare
metal, other hypervisors, container platforms, and vSphere clusters without NSX installed.
Figure 25.
Compute clusters, management cluster, and edge cluster for a multiple vCenter solution
In NSX-v architecture, the NSX Manager is the heart of the system and has a 1:1 relation with a single vCenter.
Therefore, organizations that require using multiple vCenter servers also must run and maintain multiple NSX
Managers. The controller clusters can be dedicated per NSX Manager or shared among up to eight NSX Managers
if the customer can use universal configurations—that is, configurations that are synchronized across NSX
Managers. The use of universal configurations may impose limits on the overall system scale and prevent certain features and configurations from being used, such as using rich vCenter object names for defining security groups.
The NSX reference design shown in Figure 25 makes a clear distinction between ESXi compute clusters dedicated
to running applications (that is, vSphere clusters running user or tenant virtual machines), and those clusters
dedicated to running NSX routing and services virtual machines (that is, NSX Edge services gateway virtual
machines). The architecture calls these dedicated clusters compute clusters and edge clusters.
VMware Validated Design documents for the Software-Defined Data Center (SDDC) also describe “converged” designs, in which ESG virtual machines can coexist with the user virtual machines, eliminating the need for dedicated edge clusters. The key implication, as we will see, is that ESG virtual machines that route between the NSX overlay and the rest of the IT infrastructure require that routing configurations and policies be defined on the physical switches they connect to. For this reason, at scale, it becomes complicated, if not impossible, to enable the ESG virtual machines to be placed anywhere in the infrastructure, especially when using a Layer 3 access leaf-and-spine fabric design. This limitation does not occur when using Cisco ACI.
NSX VXLAN—Understanding BUM traffic replication
The NSX network virtualization capabilities rely on implementing a software VXLAN Tunnel Endpoint (VTEP) on
the vSphere Distributed Switch (VDS). This is accomplished by adding NSX-v kernel modules to each ESXi host.
The hypervisor software’s lifecycle management is thereby tied to that of NSX. The NSX kernel modules are
programmed from both NSX Manager and the NSX Controller cluster using VMware proprietary protocols.
During the NSX host preparation process for an ESXi cluster, the NSX administrator enables VXLAN on ESXi
clusters from the NSX Manager. This process requires selection of a VDS that must already exist during the NSX
host preparation. At this time, a new VMkernel interface is created on each ESXi host on the given cluster using a
dedicated TCP/IP stack specific to the VTEP traffic. The NSX Manager also creates a new dvPortGroup on the
VDS. The VXLAN VMKNIC is placed in that dvPortGroup and given an IP address: the NSX VTEP address. The
NSX VTEP address can be configured through either DHCP or an NSX Manager–defined IP pool.
The NSX VTEP IP address management must be considered carefully, because it may have an impact on how
NSX handles broadcast, unknown unicast, and multicast traffic (known as BUM traffic). NSX-v can handle BUM
replication in three ways:
●  Multicast mode: This mode does not require using NSX-v controllers and implies running VXLAN in flood-and-learn mode, with BUM replication accomplished by leveraging IP multicast in the underlay. This mode is essentially equivalent to the legacy vShield implementation and relies on Protocol Independent Multicast (PIM) or Internet Group Management Protocol (IGMP) in the network fabric for efficient multicast transport.
●  Unicast mode: This mode requires using NSX-v controllers. In this mode, the ingress ESXi host receiving a BUM packet does head-end replication. For each VTEP subnet, one ESXi host is selected as a unicast VXLAN proxy (uTEP). The ingress ESXi host performs BUM replication by creating a copy for each other interested VTEP in its own VTEP subnet, plus one copy for the uTEP proxy of each other VTEP subnet in the transport zone. The uTEP proxy on each VTEP subnet in turn replicates the packets for each VTEP known in the subnet. This is less than optimal in terms of CPU usage and bandwidth utilization if a lot of VTEPs exist in a subnet.
●  Hybrid mode: This mode also requires using the NSX-v controllers. It uses a combination of both unicast and multicast to handle flooding of BUM traffic. Within each VTEP subnet, an ESXi host is chosen as a multicast VXLAN VTEP (mTEP) proxy. The ESXi host receiving a BUM packet sends it to the physical network encapsulated in IP multicast and relies on the network device to replicate it for all VTEPs in the subnet. All the VTEPs in the subnet subscribe to the same multicast group (send an IGMP join) to receive the BUM packet from the network device. The receiving host also sends the BUM traffic as unicast to the mTEP of each of the other VTEP subnets, which in turn sends it to its local physical network as multicast as well. In this model, the physical network does not require routing IP multicast, but it is assumed that Layer 2 multicast is supported.
NSX transport zones
As part of NSX-v VXLAN configuration, the administrator must also define a transport zone to which ESXi clusters
are mapped. A transport zone defines the scope of VXLAN segments (called logical switches) in NSX overlays.
This mapping determines which virtual machines can access a specific logical network based on the cluster to
which they are associated. The replication mode configured for each transport zone defines the default VXLAN
control plane mode and can be set to multicast, unicast, or hybrid.
More importantly, as mentioned, the transport zone defines the scope of a logical switch. A logical switch is a
dvPortGroup backed by a VXLAN segment created by NSX and is limited to a single transport zone. This means
that virtual machines in different transport zones cannot be on the same Layer 2 segment or use the same NSX
constructs. Scalability limits also define how many vSphere DRS clusters can be part of a transport zone, so the scope of a logical switch is limited not by the physical network but by the supported server count per NSX transport zone (limited to 512 servers in NSX version 6.3). At scale, a typical NSX deployment therefore consists of multiple
transport zones, requiring careful consideration for VDS and transport zone design, and the alignment and
connectivity between them. Virtual machines connected to different transport zones must communicate through
gateways routing between them.
NSX VTEP subnetting considerations
When considering the NSX replication models explained earlier in this document, it will be clear that VTEP
subnetting has important implications for the system’s behavior. For instance, if we imagine 250 VTEPs in a
subnet, when using unicast mode, an ESXi host in the worst case scenario would have to create 250 copies for
each received BUM packet. The same setup in hybrid mode would require creating a single copy, and the physical
network would handle the replication. IGMP snooping can restrict the flooding in the fabric.
When deploying a traditional Layer 3 access fabric, subnetting for NSX VTEPs offers a single choice: one subnet
per rack. This statement has three important implications:
●  It becomes complicated to use NSX Manager IP pools for VTEP addressing unless vSphere clusters are limited to a single rack.
●  Using DHCP for VTEP addressing requires configuring DHCP scopes for every rack, using option 82 as the rack discriminator.
●  Applying security to prevent other subnets from accessing the VTEP subnets requires configuring ACLs on every ToR switch.
In contrast, when using a Cisco ACI fabric, subnets can span multiple racks and ToR switches, or even the entire
fabric, for greater simplicity and flexibility in deployment. Deploying NSX on Cisco ACI enables customers to use IP
pools or DHCP with complete flexibility.
Running NSX VXLAN on a Cisco ACI fabric
NSX VXLAN traffic must run on a Cisco ACI bridge domain. NSX VXLAN traffic will primarily be unicast, but depending on configuration, it may also require multicast. This requirement will influence the Cisco ACI bridge domain configuration; the best configuration will depend on the NSX BUM replication model that is prevalent in the environment.
Administrators select a BUM replication mode when configuring an NSX transport zone. This configuration may be
overridden later by the settings for each logical switch. In other words, a transport zone may be configured to use
hybrid replication mode, and subsequently a logical switch in that transport zone can use unicast mode instead.
The following two sections cover the recommended Cisco ACI bridge domain and EPG configurations for NSX
VTEP traffic for unicast and hybrid replication modes.
Bridge domain–EPG design when using NSX hybrid replication
If an NSX deployment will primarily use hybrid replication, the simplest and best performing design is to use a
single bridge domain and one NSX VTEP subnet for each NSX transport zone, as illustrated in Figure 26. This
example expands on the design for vSphere infrastructure outlined earlier in this document, using a bridge domain
for each of the vSphere services: vMotion, IP storage, and so forth. A bridge domain is added with a subnet for the
NSX VXLAN traffic. This configuration allows multiple vSphere clusters to share the same NSX VTEP pool while
allowing complete flexibility in how clusters are deployed to racks. In this configuration, NSX administrators can use
either IP pools or DHCP for VTEP addressing. Clusters can be deployed tied to a rack as shown in Figure 27, or
they can expand across racks.
Figure 26.
Example of an NSX VTEP pool matching a Cisco ACI BD and EPG created for the transport zone
Since hybrid mode is used, the fabric must take care of NSX BUM traffic replication using Layer 2 multicast. To
minimize unnecessary flooding, we recommend configuring the bridge domain with IGMP snooping and enabling the IGMP querier function, as illustrated in Figure 27.
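A hedged sketch of the corresponding bridge domain payload follows; the class and attribute names (fvRsIgmpsn, fvSubnet with querier control) follow the APIC object model, but the policy name, gateway address, and exact control flags are assumptions to verify on your APIC release.

```python
def bd_for_hybrid_vteps(bd, gateway, snoop_policy):
    """Bridge-domain payload sketch for NSX hybrid-mode VTEP traffic:
    reference an IGMP snooping policy (fvRsIgmpsn) and enable the
    querier on the BD subnet (fvSubnet ctrl='querier'). All names
    and addresses are illustrative."""
    return {"fvBD": {
        "attributes": {"name": bd},
        "children": [
            {"fvRsIgmpsn": {"attributes": {
                "tnIgmpSnoopPolName": snoop_policy}}},
            {"fvSubnet": {"attributes": {
                "ip": gateway, "ctrl": "querier"}}}]}}

bd = bd_for_hybrid_vteps("NSX-VTEP-BD", "192.168.150.1/24",
                         "igmp-snoop-enabled")
```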
Figure 27.
Example showing a single BD and single EPG for all NSX VTEP interfaces
In this design model, if maximum simplicity is desired, a single EPG can be used for the NSX VTEP VMkernel IP addresses of all clusters.
It is important to note that the EPG (or EPGs) dedicated to NSX VTEP VMKNIC interfaces must always be mapped to a physical domain. As explained earlier, the EPG can be mapped directly at the AEP level, thereby simplifying the configuration of the fabric.
However, as explained earlier for other vSphere services, it may be desirable to use one EPG per cluster for the NSX VXLAN VTEP interfaces. This configuration is possible when using a single subnet for NSX VTEP addresses, as
shown in Figure 28, and offers the fabric administrator better visibility and understanding of the virtual environment.
Figure 28.
Example using a single BD for the NSX Transport Zone with per-cluster EPG for NSX VTEP interfaces
Using one or multiple EPGs for NSX VTEP VMkernel configuration does not complicate NSX configurations for
VXLAN. In all cases, the NSX VTEP is on the same subnet, associated to a Cisco ACI bridge domain, and can use
the same VLAN encapsulation on the NSX host VXLAN configuration. However, for this to work, two
considerations are important:
●  If using multiple EPGs per cluster, the fabric administrator must configure local VLAN significance on the interface policy group to reuse the same VLAN encapsulation as for other EPGs.
●  It is not possible to use the same VLAN for two EPGs mapped to the same bridge domain on the same leaf. Therefore, this model, if using a single bridge domain, works only if clusters do not expand onto multiple racks. If clusters do expand onto multiple racks, different VLANs must be used per EPG to ensure that no VLAN conflicts occur.
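The VLAN-reuse rule in the second consideration can be checked mechanically. The helper below is an illustrative sketch, not an ACI tool: it flags two EPGs that share a bridge domain, leaf, and VLAN.

```python
def vlan_conflicts(bindings):
    """Flag violations of the rule above: two EPGs mapped to the same
    bridge domain cannot reuse a VLAN on the same leaf. 'bindings' is
    a list of (epg, bridge_domain, leaf, vlan) tuples; returns the
    offending (epg_a, epg_b, key) combinations."""
    first_epg = {}
    conflicts = []
    for epg, bd, leaf, vlan in bindings:
        key = (bd, leaf, vlan)
        if key in first_epg and first_epg[key] != epg:
            conflicts.append((first_epg[key], epg, key))
        else:
            first_epg.setdefault(key, epg)
    return conflicts

# Same BD, same leaf, same VLAN across two per-cluster EPGs: conflict.
bad = vlan_conflicts([("EPG-C1", "VTEP-BD", "leaf101", 150),
                      ("EPG-C2", "VTEP-BD", "leaf101", 150)])
# Different bridge domains: reusing the VLAN on one leaf is fine.
ok = vlan_conflicts([("EPG-C1", "BD-C1", "leaf101", 150),
                     ("EPG-C2", "BD-C2", "leaf101", 150)])
```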
Figure 29 illustrates again the advantages of per-cluster EPGs. In this case, the NSX VTEP VMkernel interfaces are grouped by cluster, and Cisco ACI allows the fabric administrator to keep track of per-cluster health scores, events, and statistics.
Figure 29.
The NSX VTEP interfaces are viewed at the EPG level, and VXLAN-specific traffic statistics are automatically correlated
Bridge domain–EPG design when using NSX unicast replication
It is possible to use a single bridge domain and a single subnet for all NSX VTEPs when using unicast mode
replication for BUM traffic. However, in that case, head-end replication performed by NSX at the ingress hypervisor
may have a significant negative impact on the performance of the NSX overlay, especially for environments with a larger number of ESXi hosts and, consequently, a larger number of VTEPs.
For this reason, when using NSX unicast replication mode, we recommend using one bridge domain and subnet
per vSphere cluster or per rack. In Layer 3 fabrics, it is not possible to do this per cluster if the clusters expand
across multiple racks. This fact is yet another reason to use Cisco ACI as the underlay for NSX, as opposed to a
traditional Layer 3 fabric. The capability of Cisco ACI for extending a bridge domain wherever needed across the
fabric allows for maximum flexibility: vSphere clusters can be local to a rack or they can expand across racks. But it
is better to tie the subnet to the cluster, not the rack. By tying the VTEP subnet to the cluster, the NSX
administrator can still use NSX IP pools to address the VTEP interfaces, defining them by cluster, and is not forced
to use DHCP as the only option. Alternatively, DHCP relay can be configured at the bridge domain level, if that is
the selected option for NSX VTEP IP address management.
Figure 30 illustrates how using one bridge domain or EPG per cluster allows the same VLAN to be used for all the
VMKNIC NSX VXLAN configuration. This method requires configuring local VLAN significance on the policy groups
if the clusters will expand onto multiple racks but raises no problems because two EPGs can use the same VLAN
on the same leaf, if the EPGs belong to different bridge domains, as is the case here.
Figure 30.
A design using different NSX VTEP subnets per cluster, each with a dedicated BD and EPG in Cisco ACI
Providing visibility of the underlay for the vCenter and NSX administrators
Regardless of the design chosen, much of the operational information in APIC can be made available to the NSX
or vCenter administrator leveraging the APIC API. Cisco ACI offers a vCenter plug-in that is included with the APIC
license. The Cisco ACI vCenter plug-in is extremely lightweight and uses the APIC API to provide visibility of the Cisco ACI fabric directly from within vCenter, as shown in Figure 31. Using this tool, the NSX administrator can
leverage the vCenter web client to confirm that the fabric has learned the correct IP or MAC address for NSX
VTEPs, to view security configurations in Cisco ACI, to troubleshoot fabric-level VTEP-to-VTEP communication,
and more. The vSphere administrator can also use the plug-in to configure or view tenants, VRF tables, bridge
domains, and EPGs on APIC. The figure shows the EPGs related to an NSX transport zone, including the VXLAN
contract and NSX VTEPs inside an EPG.
Figure 31.
A view of the NSX VTEP EPGs for various clusters as seen from the Cisco ACI vSphere plug-in
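As an illustration of the kind of API call the plug-in relies on, the sketch below builds an APIC REST query listing the endpoints (class fvCEp) learned under an EPG; the host name and EPG distinguished name are placeholders.

```python
def endpoint_query_url(apic, epg_dn):
    """Build an APIC REST query listing the endpoints (class fvCEp)
    learned under an EPG -- for example, to confirm that the NSX VTEP
    IP and MAC addresses were learned by the fabric. The APIC host
    and EPG DN are placeholders."""
    return (f"https://{apic}/api/node/mo/{epg_dn}.json"
            "?query-target=children&target-subtree-class=fvCEp")

url = endpoint_query_url("apic1.example.com",
                         "uni/tn-Infra/ap-vSphere/epg-NSX-VTEP-C1")
```

Issuing the query (after authenticating against /api/aaaLogin.json) returns one fvCEp object per learned endpoint, each carrying the IP, MAC, and encapsulation.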
The Cisco ACI fabric also provides excellent visibility and troubleshooting tools built in to the APIC. One example is
the troubleshooting wizard available on the APIC to monitor details of the endpoints in the topology. Let’s imagine
that there are connectivity problems between any two given ESXi hosts. The fabric administrator can use the Cisco
ACI Visibility and Troubleshooting tool to have APIC draw the topology between those two hosts. From this tool, we
can quickly pull all the statistics and packet drops for every port (virtual or physical) involved in the path between
the referred endpoints. We can also pull all events, configuration change logs, or failures that are specific to the
devices involved in the path during the specified time window, and we can also pull the statistics of the contracts.
Other troubleshooting tools accessible from the same wizard help the fabric administrator configure a specific
SPAN session, inject traffic to simulate the protocol conversations end to end across the path, and the like. This
visibility is available on the APIC for any two endpoints connected to the fabric, whether they are virtual or physical.
Virtual switch options: Single VDS versus dual VDS
Previous sections describe how to best provide NSX connectivity and security requirements in terms of using Cisco
ACI constructs such as VRF tables, bridge domains, application network profiles, EPGs, and contracts.
As explained earlier, NSX VXLAN settings require an existing VDS. One design approach is to have one VDS
dedicated solely for NSX logical switches and their corresponding dvPortGroups and to have another VDS
dedicated for vSphere infrastructure traffic, such as vMotion, IP Storage, and Management. This approach is
illustrated in Figure 32. The infrastructure VDS is managed by APIC through vCenter using a VMM domain. The
NSX VDS connects to Cisco ACI using a physical domain and is dedicated to NSX traffic: the NSX VTEP
dvPortGroup is created on it by NSX Manager, as are all logical switch port groups.
Figure 32.
Dual-VDS design with separate VDS for vSphere infrastructure and for NSX
We recommend using a Cisco ACI VMM domain for managing and monitoring the VDS for infrastructure traffic, as
described in the previous sections, and keeping a separate domain for the NSX VXLAN VMkernel interface.
Configuring a VMM in APIC for the vCenter gives the fabric administrator maximum visibility into the vSphere
infrastructure, while the vSphere administrator retains full visibility through the use of native vCenter tools, and
deploying vSphere clusters becomes much easier.
The downside of dedicating one VDS for infrastructure and another for virtual machine data is that more uplinks are
required on the ESXi host. However, considering the increasing relevance of hyperconverged infrastructure and IP
storage in general, it is not a bad idea to have separate 10/25GE uplinks for user data and storage.
Alternatively, to reduce the number of uplinks required, customers can run the NSX VXLAN VMkernel interface on
the same VDS used for vSphere infrastructure traffic. This way, APIC can have management access to the same
VDS through the vCenter API. Cisco has tested and validated this configuration. The VMM domain configuration on
Cisco ACI and the corresponding VDS must be provisioned before NSX host preparation. The configuration of the
dvPortGroup used for the VXLAN VMKNIC is done by NSX Manager in any case, and therefore should not be
done by mapping the VXLAN EPG to the VMM domain. All logical switch dvPortGroups are created on the same
VDS. The NSX VXLAN VMkernel interface must always be attached to an EPG mapped to a physical domain.
Figure 33.
Single-VDS design using same VDS for vSphere infrastructure and NSX
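The physical-domain mapping for the VTEP EPG described above can be provisioned through the APIC API. The sketch below builds the JSON an administrator might POST to create the VTEP EPG, attach it to a physical domain (fvRsDomAtt), and statically bind a VPC path toward the ESXi hosts (fvRsPathAtt). All names, the VLAN ID, and the path are illustrative placeholders.

```python
import json

def vtep_epg_payload(tenant="NSX", ap="Infra", epg="VTEP",
                     phys_dom="NSX-Phys", vlan=250,
                     path="topology/pod-1/protpaths-101-102/pathep-[ESXi-VPC]"):
    """JSON for a POST to /api/mo/uni.json that creates the VTEP EPG,
    attaches it to a physical domain (fvRsDomAtt), and statically binds
    the VPC path toward the ESXi hosts (fvRsPathAtt). All names, IDs,
    and paths are illustrative placeholders."""
    return {
        "fvAEPg": {
            "attributes": {"dn": f"uni/tn-{tenant}/ap-{ap}/epg-{epg}"},
            "children": [
                {"fvRsDomAtt": {"attributes": {"tDn": f"uni/phys-{phys_dom}"}}},
                {"fvRsPathAtt": {"attributes": {"tDn": path,
                                                "encap": f"vlan-{vlan}"}}},
            ],
        }
    }

print(json.dumps(vtep_epg_payload(), indent=2))
```

Because the binding is expressed as data, the same payload template can be reused for every cluster VTEP EPG by changing only the parameters.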
Using a VMM domain may also be convenient when considering the Edge clusters. The compute resources required for
running NSX Edge virtual machines will need to participate on the NSX overlay, but they will also need
communication with the physical network. The connectivity between the NSX ESG virtual machines and the
physical network must be done using standard dvPortGroups backed by VLANs. These ESG-uplink dvPortGroups
are not automated through NSX, nor are they created from NSX Manager.
As we will see in the following sections, the dvPortGroups used for connecting NSX ESG virtual machine uplinks to
the physical world can be configured by creating EPGs that provide the required connectivity and policy and then
mapping those EPGs to the specific VMM domain. This method helps in automating the connection of the ESG to
the physical world and provides additional benefits in terms of mobility, visibility, and security.
NSX Edge Clusters—NSX routing and Cisco ACI
Introduction to NSX routing
When using NSX VXLAN overlays, traffic that requires connecting to subnets outside of the overlay must be routed
through NSX Edge virtual machines, also known as Edge Services Gateway (ESG). NSX does not perform any
automation of the physical network. However, the ESG virtual machines require connection to the physical network
for routing. Since legacy networks lack programmability, to minimize physical network configurations, some
VMware documentation recommends concentrating all ESG virtual machines in dedicated clusters. This way,
network administrators do not need to know which ToR to configure for ESG network connectivity as all ESGs will
appear in the same rack. This approach can lead to suboptimal use of compute resources. On one hand, it is
possible for the vSphere clusters dedicated to running ESG virtual machines to be underprovisioned in capacity, in
which case application performance will suffer. On the other hand, the opposite may happen and Edge clusters
could be overprovisioned, wasting compute resources and associated software licenses. This section discusses
how best to connect ESG virtual machines to the Cisco ACI fabric to minimize or eliminate these constraints.
Edge clusters follow the same designs presented in previous sections in terms of NSX VTEP configuration.
Specifically, a VXLAN VMKNIC is created for NSX VTEP addresses, and it must be connected to the required EPG
in the Cisco ACI fabric.
NSX Edge is a virtual appliance that can be deployed in two different “personalities”: an Edge services gateway or
a Distributed Logical Router (DLR) control virtual machine.
The ESG virtual machine is a Linux-based basic IP router. Each ESG runs its own control plane and data plane,
which operate independently of the NSX controllers. In fact, the ESG can run without the NSX controllers and be
connected to Cisco ACI provisioned port groups as well. In addition to IPv4 routing, the ESG can also be
configured to perform functions such as Network Address Translation (NAT), basic Layer 2–4 firewall, IP
Security (IPsec) or SSL VPN, and basic load-balancing services similar to those offered by open source HAProxy.
The DLR uses an NSX Edge image deployed as a control virtual machine. When using a DLR, the routing data
plane is distributed to the hypervisor kernel modules, but the control plane for routing protocols runs on the DLR
control virtual machine. In other words, protocols like Open Shortest Path First (OSPF) or Border Gateway
Protocol (BGP) run on the DLR control virtual machine, which will communicate routing prefixes to the NSX
Controller Cluster, which in turn programs them on the hypervisor DLR kernel modules.
In most NSX overlay designs, both ESG and DLR control virtual machines are required. The DLR is recommended
for routing traffic between virtual machines in the NSX-enabled ESXi clusters. The ESG is used for routing traffic
between virtual machines in those clusters and subnets external to the overlay. It is important to note that traffic
between different NSX transport zones also will need to be routed through ESG virtual machines. The ESG virtual
machines may also be used for implementing NAT or DHCP services or for running basic load-balancing services.
Typically, a DLR may have various logical switches connected, each representing a different subnet inside the
NSX overlay. To route traffic toward ESGs, the DLR will use a VXLAN logical switch that will serve as “transit”
between the DLR and the ESG. This two-tier routing is illustrated in Figure 34. The DLR control virtual machine
requires a second virtual machine (not depicted) to implement control plane redundancy. The ESG may be
deployed in Active/Standby mode, as in the figure, or in Active/Active mode and leverage ECMP between the DLR
and ESG sets. In the latter case, the ESG can only do IPv4 dynamic routing: NAT, firewall, and load balancing are
not supported in ECMP mode.
As the ESG and DLR have no concept of VRF, when routing isolation is required, additional DLR-ESG sets are
required as well. It can be said that implementing the equivalent of a VRF requires at least four virtual machines:
two ESGs and two DLR control virtual machines (assuming redundancy is required).
Figure 34.
Typical two-tier minimum logical routing design with NSX for vSphere
In some deployments, the ESG does not need to run a dynamic routing protocol, most often when the subnets on
the DLR use RFC1918 private address space and the ESG performs Network Address Translation (NAT) to a
given routable address.
In such scenarios, some NSX designs propose adding a third routing tier inside the overlay, such as that shown in
Figure 35. In that model, different tenants may get RFC1918 subnets for their apps and route outside towards an
ESG that performs NAT. Those ESGs connect using another Transit logical switch to an ECMP ESG set that
performs routing to external subnets.
Figure 35.
Typical scalable three-tier overlay routing design with NSX
The sections that follow describe how best to connect ESGs to the Cisco ACI fabric for these three scenarios:
Connecting ESG running NAT to the Cisco ACI fabric. ESG does not do dynamic routing.
ESG running dynamic routing through the fabric. ESG peers with another router, not with the Cisco ACI fabric.
ESG running dynamic routing peering with the fabric. ESG peers with the Cisco ACI fabric border leaf switches.
Regardless of the routing scenario, the ESG will always be a virtual machine with at least two virtual NICs (vNICs),
as illustrated in Figure 36. One vNIC connects to the overlay network, typically to the transit logical switch; this is a
downlink interface. The other vNIC is an uplink and may connect to a dvPortGroup backed by a VLAN or
sometimes also to another transit logical switch. In the figure, the downlink vNIC connects to a logical switch
and the DLR, while the uplink connects to a dvPortGroup backed by VLAN 100.
Figure 36.
Representation of the ESG virtual machine with two vNICs, one connected to the overlay and one to the physical network
Connecting ESG with NAT to the Cisco ACI fabric
In certain cloud solutions it is common to use RFC1918 private address spaces for tenant subnets, for instance, in
certain topologies deployed with vCloud Director or vSphere Integrated OpenStack. In those cases, the private
subnets are configured at the DLR, and the ESG will have one or more routable addresses that it uses to translate
addresses for the DLR private subnets.
From a Cisco ACI fabric perspective, an ESG performing NAT does not require routing configuration since it
appears as a regular endpoint in an EPG. The simplest design in Cisco ACI is to create a bridge domain for the
subnet that will be used to provide addresses for the NAT pool of the ESGs. The ESG default route will point to the
bridge domain default gateway IP address. The uplink dvPortGroup for the ESG will correspond with an EPG
where the ESG appears connected. Figure 36 shows an ESG with two vNICs, one connected to a VLAN-backed
dvPortGroup that in turn would connect to a Cisco ACI EPG. In this design, the additional ESGs shown in Figure
35 to provide routing using ECMP between the NSX tenant ESG and the rest of the world are not required. The
fabric can route between different NSX tenant ESGs, toward other subnets connected to the same Cisco ACI
fabric, or, as shown in Figure 37, toward subnets external to the Cisco ACI fabric through an L3Out interface.
Figure 37.
NSX tenant ESGs perform NAT and connect to a Cisco ACI EPG. The Cisco ACI fabric provides all routing required
between ESG virtual machines and any other subnet.
In this case, with Cisco ACI it is very simple to eliminate the need for the second tier of ESGs by leveraging the
L3Out model in the fabric as depicted in Figure 37. This configuration is simple to implement and to automate. It
also has great benefits in terms of performance and availability. Performance-wise, any traffic switched from a NAT
ESG toward an endpoint attached to the fabric will benefit from optimal routing, directly from the first connected leaf
switch. Traffic toward addresses external to the fabric will go through Cisco ACI L3Outs that use 10GE or 40GE
links with hardware-based, low-latency and low-jitter line-rate throughput. The design in Figure 37 is far better than
routing through a second tier of ESG shown in Figure 35. It is also more cost-effective, as no additional servers are
required for a second tier of ESG. Availability-wise, as of this writing it is impossible to guarantee sub-second
convergence in reaction to any failure vector affecting an ESG router (be it a link, switch, server, or ESG
failure). However, by reducing or eliminating reliance on ESG routing and performing it in the fabric, customers
achieve better availability, both by eliminating failure vectors and because Cisco ACI provides sub-second
convergence on switch or link failures.
Finally, this configuration can also help customers benefit financially. By reducing the number of ESGs required,
customers require fewer servers, fewer vSphere and NSX licenses, and fewer network ports to connect them.
ESG routing through the fabric
When not doing NAT, more often than not the ESG virtual machines are running a dynamic routing protocol with
external routers and with the DLR control virtual machine. The DLR announces connected subnets to the ESG that
will propagate them to the routers outside of the NSX overlay. The DLR provides a distributed default gateway that
runs on the hypervisor distributed router kernel module and provides routing between virtual machines connected
to the logical switches attached to the DLR. However, for external connectivity, traffic is sent to the ESG, which
removes the VXLAN header and routes the packets into a VLAN to reach external routes.
Typically, the data center routers are physically connected to the Cisco ACI fabric. If it is expected that each ESG
will route only to subnets external to the Cisco ACI fabric, the ESG virtual machines may be peering with data
center routers. In this case, it is only necessary to provide the Layer 2 path between the router interfaces and the
ESG external vNIC. In such a scenario, the Cisco ACI fabric does Layer 2 transit between the ESG and the data
center routers using regular EPGs. Route peering is configured between the ESG and the external routers, as
illustrated in Figure 38, which shows the ESG uplink port group corresponding to an EPG for the ESG external
vNICs on the same bridge domain as the EPG used to connect the WAN router ports. Note that the ESG and the
data center routers are connected to different leaf switches simply for illustration purposes; they could equally be
connected to the same leaf switches.
Figure 38.
The ESG uplink connects to a dvPortGroup mapped to an EPG where the ACI fabric provides transit at Layer 2
towards the data center WAN routers.
It is possible to use a single EPG for connecting the ESG uplink port group and the physical WAN router ports.
However, keeping them on separate EPGs facilitates visibility and allows the fabric administrator to leverage Cisco
ACI contracts between ESG and WAN routers, offering several benefits:
The fabric administrator can control the protocols allowed toward the NSX overlay, and vice versa, using
Cisco ACI contracts.
If required, the Cisco ACI contracts can drop undesired traffic in hardware, thus preventing the NSX ESG
from having to use compute cycles on traffic that will be dropped anyway.
The Cisco ACI fabric administrator can use service graphs and automate perimeter NGFWs or intrusion
detection and prevention, whether using physical or virtual firewalls.
The Cisco ACI fabric administrator can use tools such as Switched Port Analyzer (SPAN), Encapsulated
Remote Switched Port Analyzer (ERSPAN), or Copy Service to capture in hardware all the traffic to/from
specific ESGs.
In addition, by creating different EPGs for different tenant ESG external vNICs, the NSX and fabric administrators
benefit from isolation provided within Cisco ACI. For instance, imagine a scenario in which you want to prevent
traffic from some networks behind one ESG from reaching other specific ESGs. With NSX, the only way to block
that traffic is by configuring complex route filtering or firewall policies to isolate tenants, assuming all ESGs are
peering through the same logical switch or external uplink port group. With the Cisco ACI EPG model, however, if
two ESG vNICs are on different EPGs, by default those ESGs cannot communicate. This separation could also be
accomplished by creating a single isolated EPG for all ESG-external vNICs.
Figure 39 shows an example of this configuration option. A new bridge domain is created to provide a dedicated
flooding domain for the Layer 2 path between the ESG and the external routers. Although this domain does not
require a subnet to be associated to it, it is always best to configure one and enable routing, so that the fabric
handles ARP flooding in the most optimal way and can learn IP addresses of all endpoints to enhance visibility,
facilitate troubleshooting, and so forth. In Figure 39 we see two NSX tenants, each with dedicated ESG and DLR
sets, as well as redundant ESG virtual machines. The redundant DLR control virtual machine is not shown, but
would be expected. The ESG external vNICs are mapped to dedicated Cisco ACI EPGs for each tenant, as well,
ensuring isolation without requiring complex routing policies. In the basic example, the contract between the ESG
vNIC EPG and the router port EPG is a Permit Any contract. This contract could also be used for more sophisticated
filtering, or to insert an NGFW using a service graph.
Figure 39.
ESG to WAN over EPG—Isolating NSX routing domains using different EPGs and contracts
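The per-tenant ESG uplink EPG described above lends itself to programmatic definition. The sketch below shows the kind of JSON that could be posted to APIC to create one tenant's ESG uplink EPG: placed on the shared bridge domain (fvRsBd), mapped to the vCenter VMM domain (fvRsDomAtt, so APIC pushes the dvPortGroup), and consuming the contract the WAN-router EPG provides (fvRsCons). All names are illustrative placeholders.

```python
import json

def esg_uplink_epg(tenant="NSX", ap="Edge", epg="Tenant1-ESG-Uplink",
                   bd="ESG-Transit-BD", vmm_dom="vCenter-VDS",
                   contract="To-WAN"):
    """JSON sketch of a per-tenant ESG uplink EPG: on the shared bridge
    domain, mapped to the vCenter VMM domain, and consuming the contract
    that the WAN-router EPG provides. All names are illustrative."""
    return {
        "fvAEPg": {
            "attributes": {"dn": f"uni/tn-{tenant}/ap-{ap}/epg-{epg}"},
            "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": bd}}},
                {"fvRsDomAtt": {"attributes":
                    {"tDn": f"uni/vmmp-VMware/dom-{vmm_dom}"}}},
                {"fvRsCons": {"attributes": {"tnVzBrCPName": contract}}},
            ],
        }
    }

print(json.dumps(esg_uplink_epg(), indent=2))
```

Creating one such EPG per tenant yields the default-deny isolation between tenants described above, since EPGs without a shared contract cannot communicate.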
In addition to facilitating tenant isolation, mapping the vNIC external connectivity to an EPG has additional benefits
in terms of both automation and mobility. This EPG, like any other, can be defined programmatically and mapped
to a VMM domain directly from a cloud platform at the same time the ESG is instantiated, for instance, by
leveraging the Cisco ACI plug-in for VMware vRealize Automation and Orchestrator. Additionally, the EPG is not
tied to a specific leaf or rack. The ESG in this sense is like any other virtual machine connecting to the fabric. If the
virtual machine moves, the fabric keeps the virtual machine attached to the right EPG and policy anywhere it
roams within the fabric, so the Edge clusters are no longer tied to specific racks. Administrators can now create an
Edge cluster anywhere in the topology and add or remove servers to a cluster as they see fit.
This flexibility contributes to reducing costs, because there is no longer a need to overprovision the Edge clusters
or to risk underprovisioning them and consequently suffering from additional performance problems on top of those
already inherent to using software routing in the data center. Using this approach, the NSX Edge virtual machine
can run in any cluster; there is no need to run clusters dedicated solely to Edge virtual machines. However, note
that it is better to follow VMware recommendations and keep dedicated compute resources for ESG because of the
difficulty in providing any performance guarantees today for routing functions on a virtual machine. The advantage
in any case is that any vSphere cluster can be used for Edge services.
Using Cisco ACI as a Layer 2 fabric to facilitate ESG peering with the data center WAN routers provides some very
simple and flexible design options. However, traffic between virtual machines in the NSX overlay communicating
with endpoints in subnets connected to the Cisco ACI fabric needs to wind its way through the ESG and then
through the WAN routers as well, as illustrated in Figure 40. Note that this is also the case if traffic flows between
different NSX transport zones.
Figure 40.
Traffic from a virtual machine in the NSX overlay that needs to communicate with an endpoint connected in the ACI
fabric will need to traverse the WAN routers if the fabric is Layer-2 only
ESG peering with the fabric using L3Out
In the designs presented earlier in this document, routing is always done from the NSX ESG to an external data
center router where the fabric is used to provide a programmable Layer 2 path between them. If the traffic from the
NSX overlays is all going to subnets external to the fabric, and not to subnets routed within the fabric, this solution
is optimal, given its simplicity.
However, if there is a frequent need for the virtual machines in the NSX overlays to communicate with endpoints
attached to the fabric, such as bare metal servers, IP storage, or backup systems, routing through the external
routers is not efficient.
In these cases, routing can happen directly with the Cisco ACI fabric to obtain more optimal paths, better
performance, and lower latency. This method essentially means using L3Outs between the fabric leaf switches and
the ESG virtual machines.
An L3Out interface is a configuration abstraction that enables routing between Cisco ACI fabric nodes and external
routers. A Cisco ACI node with an L3Out configuration is commonly called a border leaf. There are various ways to
configure L3Outs in Cisco ACI, including using subinterfaces, physical ports, and Switch Virtual Interfaces (SVIs).
When peering with virtual routers, using SVI L3Outs is often the most convenient way, given that the virtual router
may not be tied to a specific hypervisor and therefore may be connecting to different leaf switches throughout its
lifetime. An L3Out can span up to eight border leaf switches, enabling a fairly large mobility range for
virtual routers peering with Cisco ACI.
There are multiple ways to connect NSX ESGs to Cisco ACI L3Outs. When deciding on the best option, we
consider the following design principles:
Consistency: The design is better if it allows consistent configurations with other ESXi nodes. For instance,
VDS configurations should be the same on an ESXi node, whether it runs ESGs or not.
Simplicity: The design should involve the smallest number of configuration elements.
Redundancy: The fabric should provide link and switch node redundancy, with the fastest possible convergence.
Figure 41 shows a high-level design for connecting ESGs to Cisco ACI SVI L3Outs. The fundamental aspects of
this configuration align with the design principles:
Consistency: Use Link Aggregation Control Protocol (LACP) Virtual Port Channels (VPCs) between leaf
switches and ESXi hosts. LACP is standards-based and simple. This way the VDS uplink configuration is
always the same whether an ESXi host is part of a compute cluster, an edge cluster, or a converged design.
Simplicity: A single ESG uplink port group. This element minimizes configuration, as only a single SVI L3Out
and a single uplink on the ESG are required.
Redundancy: The ESG will peer with all leaf switches in the L3Out (up to eight if needed). Load balancing
happens at the LACP level between every ESXi host and the fabric border leafs. Link failure convergence
does not rely on routing protocols, but rather on LACP; VDS LACP provides subsecond convergence for
link failures between leaf and ESXi host.
Figure 41.
ESG to Cisco ACI routing design with a single SVI L3Out and single uplink port group, link redundancy achieved
with VPC and LACP.
Figure 41 highlights the fact that, while the ESG will see two routing peers, one per Cisco ACI leaf switch, all
prefixes resolve to the same next-hop MAC address corresponding to the SVI L3Out. Traffic is load balanced
through the LACP bundle between the ESXi host and the border leafs and is always routed optimally.
To understand this concept better, let’s look at Figure 42, which shows an SVI L3Out using VLAN 312 spanning
across two racks (four Cisco ACI leaf switches, assuming redundant leaf switches per rack). The configuration is
very simple at the Layer 2 level, since all VDS uplinks and corresponding Cisco ACI VPC policy groups always use
the same redundant configuration, based on standard LACP.
Figure 42.
A Cisco ACI SVI L3Out can span multiple leafs, allowing ESG virtual machines to run route peering
even when running on clusters that span multiple racks.
Although the figure refers to BGP, the concept is the same if the subnet is advertised using OSPF. Both ESG-01
and ESG-02 will have a single uplink and four BGP peers. Note that this example uses four Cisco ACI leaf
switches to illustrate that the ESG can run on a cluster stretching multiple racks, and therefore the administrator
could employ vMotion to move an ESG across racks.
Figure 43 shows these same two ESGs learning a subnet configured on a Cisco ACI bridge domain.
This bridge domain has an association with the L3Out and the subnet is flagged as Public to be advertised. Both
ESG-01 and ESG-02 have four possible routing next-hops for the subnet in question in their routing table, one per
Cisco ACI leaf. However, the forwarding table on the ESG for the next-hops resolves to the same router MAC
address for all four routes. Therefore, when any one of the ESGs needs to send traffic toward that subnet, it may
choose to send to leaf-01, leaf-02, leaf-03, or leaf-04, but the traffic will always leave the ESXi host having the
same VLAN encapsulation (VLAN 312) and the same destination MAC address (00:22:BD:F8:19:FF). When any of
the four leaf switches receives packets to that destination MAC address, it performs a route lookup on the packet
and routes it to the final destination optimally. For instance, ESG-01 is running on the ESXi host physically
connected to leaf-01 and leaf-02. If ESG-01 sends a packet hitting on the entry pointing to leaf-03, the packet will
reach the VDS, where LACP load balances it across the two links between the ESXi host and leaf-01 and
leaf-02. Then one of these two switches looks up the packet, because it is sent to the L3Out router MAC address, and
forwards the packet to the right destination.
In other words, at all times, all traffic sent from ESG-01 will use the links toward leaf-01 and leaf-02, the
traffic from ESG-02 will use the links toward leaf-03 and leaf-04, and all leaf switches will do a single lookup to
route to the final destination. The traffic back from the subnet will be load balanced across the four leaf switches.
Figure 43.
An example of the routing and forwarding table for ESG-01 and ESG-02 when peering with a Cisco ACI SVI L3Out
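The forwarding behavior just described can be sketched in a few lines: whichever of the four learned next-hops the ESG selects, its ARP entry resolves to the single L3Out SVI router MAC, so every frame leaves the host on VLAN 312 toward the same MAC. The next-hop IP addresses below are made up; the MAC and VLAN follow the example in the text.

```python
# The four BGP next-hops that ESG-01 learns (one per border leaf) all resolve
# to the same SVI router MAC, so whichever ECMP route the ESG picks, the frame
# leaves the host on VLAN 312 toward that one MAC. Next-hop IPs are made up.
SVI_MAC = "00:22:BD:F8:19:FF"                   # L3Out router MAC (see text)
NEXT_HOPS = ["192.0.2.1", "192.0.2.2", "192.0.2.3", "192.0.2.4"]
ARP = {hop: SVI_MAC for hop in NEXT_HOPS}       # every hop shares the SVI MAC

def frame_for(next_hop, vlan=312):
    """Layer-2 rewrite the ESG applies for a given routing next-hop."""
    return {"vlan": vlan, "dst_mac": ARP[next_hop]}

# All four ECMP choices produce an identical encapsulation, so any receiving
# leaf routes the packet to its destination in a single lookup:
unique_macs = {frame_for(h)["dst_mac"] for h in NEXT_HOPS}
print(unique_macs)
```

This is why vMotion of an ESG across racks does not disturb forwarding: the encapsulation the ESG emits is the same no matter which border leaf ends up receiving it.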
One advantage of this design is shown in Figure 44, where ESG-01 has moved to the ESXi host connected to
leaf-03 and leaf-04, on another rack. ESG-01 can be moved live with vMotion, and the routing adjacency will not
drop, nor will traffic be impacted. Of course, in the scenario in this figure, traffic from ESG-01 and ESG-02 flows
only through leaf-03 and leaf-04. Load balancing continues, but now both ESGs send and receive traffic through
two leaf switches instead of four.
Figure 44.
ESG-01 migrated to another host without impacting the BGP peering status or the routed traffic
Using VMM integration also provides benefits in this design scenario. When connecting the ESG toward an L3Out,
the VMM domain is not used to configure the ESG uplink dvPortGroups, which must be mapped to an external
router domain. But if the VMM domain is configured and monitoring the VDS, the fabric administrator can identify
ESG endpoints using the same semantics as the NSX administrator.
For instance, Figure 45 shows partial screenshots of the APIC GUI, where the administrator can find an ESG by its
name, identify the ESXi host where it is running (and consequently the fabric ports), and monitor the ESG traffic
statistics after verifying that the vNICs are connected.
Figure 45.
The VMM domain gives the fabric administrator greater visibility and simplifies troubleshooting workflows; the display
shows details of an ESG VM and its reported traffic
Figure 46 illustrates the final result: Edge clusters can be designed across multiple racks, and ESG virtual
machines can move within the rack or across racks without impacting routing status or traffic forwarding. Because
all ESXi hosts can connect using the same link redundancy mode (VDS), optimal link utilization is achieved while
keeping cluster configuration consistency. This facilitates converged designs where ESG and user virtual machines
share the same clusters.
Figure 46.
Using SVI L3Outs delivers the fastest possible convergence for link-failure scenarios and enables mobility of ESGs within
and across racks
It is important to understand that the scenario illustrated in Figure 46 does not remove the potential bottleneck that
the ESG may represent. The DLR load balancing across multiple ESGs is based on source and destination IP
address, and the ESG will in turn use a single virtual CPU (vCPU) for a single flow. The maximum performance per
flow will be limited by the capacity of the vCPU, and as a flow is defined by source-destination IP addresses, this
configuration can create a bottleneck for any two endpoints communicating. In addition, there is a high probability
that multiple IP source-destinations will be sent to the same ESG and the same vCPU.
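A minimal sketch of why a flow pins to one ESG: if the load-balancing decision hashes only the source and destination IP pair, every packet between the same two endpoints selects the same ESG. The hash function and ESG names below are illustrative stand-ins, not VMware's implementation.

```python
import zlib

ESGS = ["esg-1", "esg-2", "esg-3"]   # illustrative ECMP ESG set

def pick_esg(src_ip, dst_ip):
    """Illustrative stand-in for the DLR ECMP decision: only the source and
    destination IPs feed the hash, so every packet between the same two
    endpoints lands on the same ESG (and then on a single vCPU there)."""
    h = zlib.crc32(f"{src_ip}->{dst_ip}".encode())
    return ESGS[h % len(ESGS)]

# The same endpoint pair always pins to one ESG, capping per-flow throughput
# at what one ESG vCPU can forward:
print(pick_esg("10.1.1.10", "192.168.5.20") ==
      pick_esg("10.1.1.10", "192.168.5.20"))  # → True
```

Adding more ESGs to the ECMP set raises aggregate capacity but never the throughput available to any single endpoint pair, which is the bottleneck described above.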
Bridging between logical switches and EPGs
Sometimes virtual machines and bare metal servers are required to be on the same network subnet. When running
an NSX VXLAN overlay for virtual machines and a VLAN trunk, or untagged access port, for the bare metal servers,
you need to bridge from the VXLAN encapsulated traffic to VLAN encapsulated traffic. This bridging allows the
virtual machines and bare metal servers running on the same subnet to communicate with each other.
NSX offers this functionality in software through the deployment of NSX Layer 2 bridging, allowing virtual machines
to be connected at Layer 2 to a physical network through VXLAN-to-VLAN ID mapping. This bridging functionality
is configured in the NSX DLR and occurs on the ESXi host that is running the active DLR control virtual machine.
The ESXi host running the backup DLR control virtual machine is the backup bridge in case the primary ESXi host
fails. In Figure 47, the database virtual machine running on the ESXi host is on the same subnet as the bare metal
database server. They communicate at the Layer 2 level. After the traffic leaves the virtual machine, it is
encapsulated in VXLAN by the NSX VTEP in the hypervisor and sent to the VTEP address of the ESXi host with
the active DLR control virtual machine. That ESXi host removes the VXLAN header, puts on the appropriate VLAN
header, and forwards it to the physical leaf switch. The return traffic goes from the bare metal database server
encapsulated in a VLAN header to the Cisco ACI leaf. The leaf forwards the packets to the ESXi host running the
DLR bridging, because the MAC address for the database virtual machine is learned on the ports connecting to
that host. The host removes the VLAN header, puts the appropriate VXLAN header on the packet, and sends it
back to the Cisco ACI leaf, but this time to be sent on the VTEP EPG. The VXLAN packet is delivered to the
appropriate ESXi VTEP for forwarding to the virtual machine.
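The packet walk above amounts to a header rewrite driven by a VNI-to-VLAN mapping table. The following is a minimal sketch of that rewrite, assuming an illustrative VNI of 5001 mapped to VLAN 1001 (mirroring the DB-LS-1 example later in this section); real NSX bridging operates on full Ethernet/VXLAN frames, not Python dicts.

```python
# One table entry per bridged segment: VNI -> VLAN (values are examples).
BRIDGE_TABLE = {5001: 1001}
VLAN_TO_VNI = {vlan: vni for vni, vlan in BRIDGE_TABLE.items()}

def vxlan_to_vlan(frame: dict) -> dict:
    """VM -> bare metal direction: strip the VXLAN header, tag the mapped VLAN."""
    vlan = BRIDGE_TABLE[frame.pop("vni")]
    return {**frame, "vlan": vlan}

def vlan_to_vxlan(frame: dict) -> dict:
    """Bare metal -> VM direction: strip the VLAN tag, re-encapsulate in VXLAN."""
    vni = VLAN_TO_VNI[frame.pop("vlan")]
    return {**frame, "vni": vni}

out = vxlan_to_vlan({"vni": 5001, "dst_mac": "00:50:56:aa:bb:cc"})
assert out["vlan"] == 1001
assert vlan_to_vxlan(out)["vni"] == 5001
```

Both directions are handled by the ESXi host running the active DLR control virtual machine, which is why that host's leaf ports matter in the Cisco ACI configuration that follows.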
Figure 47.
Bare metal database server and database virtual machine running on the ESXi host share the same subnet
To configure NSX bridging, a unique dvPortGroup is created on the VDS, as shown in Figure 48. This is the
dvPortGroup that will be used for the VLAN side of the bridge. A different dvPortGroup is needed for each VXLAN
Network Identifier (VNI) that will be bridged. Note the VLAN on the dvPortGroups, as this will be used in the Cisco
ACI configuration.
Figure 48.
Example of a dvPortGroup configured on the VDS, backed by a VLAN that connects to an EPG
Once a dvPortGroup is created for the bridge, the bridge can be configured on the DLR. This is where the logical
switch (VNI) is mapped to a VLAN using the correct dvPortGroup. In the example in Figure 49, a bridge is mapped
between the DB-LS-1 logical switch and the Bridge-For-DB port group, which uses VLAN 1001.
Figure 49.
Detail of NSX Manager configuration showing the bridge between a Logical Switch and a dvPortGroup
Now we can create an EPG and, using static port bindings, map the ports connected to the two ESXi hosts running
the active and standby DLR control virtual machines (as these are the hosts that can perform the bridging
operations) and to any bare metal servers that need to be on the same subnet, using the VLAN tag configured in the
dvPortGroup. One EPG is created per bridged VNI-VLAN pair. Figure 50 shows the two EPGs used for bridging.
Figure 50.
Illustration of two EPGs configured to bridge to specific Logical Switches in NSX.
Figure 51 shows the static binding configuration for an EPG using VLAN 1001. While it is also possible to configure
this mapping at the AEP level, for EPGs connected to NSX Logical Bridges it is good practice to use static path
bindings configured only on the ports of the servers running the DLR bridge.
Figure 51.
A sample of the configuration of the static path binding. Each path represents ports connected to the DLR Bridge
ESXi hosts.
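A static path binding is expressed in the APIC object model as an fvRsPathAtt object attached to the EPG. The following hedged sketch builds the JSON body for such a binding; the pod, leaf, and interface identifiers are made up for illustration, and VLAN 1001 matches the bridge example above. This is a sketch of the payload shape, not a complete provisioning script (authentication and the POST itself are omitted).

```python
import json

def static_path_binding(pod: int, leaf: int, port: str, vlan: int) -> dict:
    """Build the fvRsPathAtt payload for one static path binding."""
    return {
        "fvRsPathAtt": {
            "attributes": {
                "tDn": f"topology/pod-{pod}/paths-{leaf}/pathep-[{port}]",
                "encap": f"vlan-{vlan}",
                "mode": "regular",  # 802.1Q tagged, matching the dvPortGroup VLAN
            }
        }
    }

# One binding per leaf port facing an ESXi host that can run the DLR bridge.
body = static_path_binding(1, 101, "eth1/10", 1001)
print(json.dumps(body))
```

One such object would be posted under the EPG for each port shown in Figure 51.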
This paper has discussed various options to design a vSphere infrastructure that uses VMware NSX with a Cisco
ACI fabric. It explored two main options:
Option 1: Using Cisco ACI integrated overlay for network virtualization combined with NSX for vSphere
network services, distributed firewall, and security APIs.
Option 2: Using NSX for vSphere network virtualization with Cisco ACI fabric as underlay.
Customers using NSX for vSphere will realize many benefits when using a Cisco ACI fabric:
Cisco ACI offers industry-leading performance on a cost-effective 40/100-Gbps network fabric, leveraging
Cisco cloud-scale innovations such as smart buffering and dynamic packet prioritization to enhance application
performance.
APIC centralizes the management and policy plane of the data center fabric, providing automated
provisioning and visibility into vSphere infrastructure traffic such as iSCSI, NFS, vMotion, and fault tolerance,
as well as securing management access.
Cisco ACI offers a simplified view of the health of the fabric on a per-tenant and per-application basis,
underpinned by granular telemetry information that allows for faster identification and resolution of
problems. This provides immediate benefits for general vSphere infrastructure and NSX alike.
Cisco ACI can contribute to reducing the complexity of the routing design in NSX in a number of ways: It
can help reduce tiers of ESG virtual machines, automate insertion of perimeter firewalls in front of the ESG
tiers, and contribute to eliminating the ESG tiers altogether by allowing administrators to leverage the
integrated overlay model intrinsic to Cisco ACI.
Cisco ACI allows customers to standardize and simplify all server networking by leveraging standard
protocols like LACP for server redundancy. Cisco ACI allows running dynamic routing over vPC, and
because L3Out interfaces can expand across multiple leaf switches, they enable designs with stretched edge
clusters.
The Cisco ACI VMM integration enables fabric administrators to find ESG virtual machines by name,
identify their connectivity settings, monitor traffic statistics, and more—from a single tool.
Customers can use NSX for its security features while leveraging Cisco ACI network virtualization
capabilities, completely eliminating the need for DLR and ESG virtual machines. This approach enables
customers to use all NSX security and load-balancing features and partners while lowering the cost of
deploying NSX and eliminating bottlenecks and complexity.
Finally, because Cisco ACI provides integrated overlays along with a contract model that can be used to
implement zero-trust designs with microsegmentation, organizations can consider whether, once Cisco ACI
has been adopted, they need to continue running NSX.
Do you need NSX when running a Cisco ACI fabric?
Cisco ACI was designed to offer a fully integrated overlay and underlay solution for virtual and physical endpoints
with distributed security. As a result, for most customers, Cisco ACI offers all of the security and network
virtualization functions required for a VMware vSphere environment, and therefore makes the deployment of
VMware NSX unnecessary.
When working with vSphere environments, Cisco ACI supports using the native vSphere Distributed Switch. Cisco
ACI programs the VDS by means of the vCenter northbound API. The vSphere VDS and vCenter API allow
programming of networking constructs such as port groups, VLANs, isolated private VLANs, uplink policies, and
more.
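As a sketch of what this programming looks like in practice: when a VMM domain is associated with an EPG, the APIC creates a dvPortGroup named from the tenant, application profile, and EPG, backed by a VLAN drawn from the domain's dynamic pool. The delimiter, pool range, and names below are illustrative, and the allocation logic is simplified for the example.

```python
def port_group_name(tenant: str, app_profile: str, epg: str, delim: str = "|") -> str:
    """Compose the dvPortGroup name from the EPG's identity."""
    return delim.join((tenant, app_profile, epg))

def allocate_vlan(pool_in_use: set, start: int = 1100, end: int = 1199) -> int:
    """Grab the lowest free VLAN in the dynamic pool (illustrative logic)."""
    for vlan in range(start, end + 1):
        if vlan not in pool_in_use:
            pool_in_use.add(vlan)
            return vlan
    raise RuntimeError("dynamic VLAN pool exhausted")

in_use = set()
pg = port_group_name("Prod", "WebApp", "Web-EPG")
vlan = allocate_vlan(in_use)
assert pg == "Prod|WebApp|Web-EPG" and vlan == 1100
```

The result is that vSphere administrators simply attach virtual machines to the generated port group, while the fabric enforces the EPG's policy.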
Optionally, Cisco ACI can also use the Cisco Application Centric Infrastructure Virtual Edge solution for vSphere.
The Cisco ACI Virtual Edge allows the APIC to interface with the virtual networking layer in vSphere in the same
way that it natively does with other hypervisor vendors, through the use of the open OpFlex protocol. This protocol
allows policy that defines network connectivity to be transmitted from a controller to a remote device such as a
physical or virtual switch. The Cisco ACI Virtual Edge offers customers the option to implement OpFlex control of
the virtual switching layer in vSphere, facilitating features such as VXLAN tunneling towards the hypervisor or
performing stateful packet filtering running within the Cisco ACI Virtual Edge software.
Whether using native VDS or Cisco ACI Virtual Edge, customers deploying vSphere on Cisco ACI fabrics get many
benefits out of the box:
Workload mobility: Cisco ACI allows extending networks across any number of vSphere clusters within
single or multiple data centers. Solutions such as Cisco ACI Multi-Pod and Cisco ACI Multi-Site allow
customers to easily implement vSphere Metro Storage Clusters or even multiple-vCenter deployments and
seamlessly migrate workloads anywhere in the infrastructure.
Simplified Site Recovery Manager deployments: Cisco ACI makes it extremely simple to extend Layer 2 and
Layer 3 networks across multiple sites, including virtual machine–traffic networks and infrastructure
networks serving vMotion, IP storage, and fault tolerance.
Microsegmentation: Cisco ACI offers microsegmentation and workload isolation capabilities using both the
native VDS and Virtual Edge. Cisco ACI also extends microsegmentation support beyond vSphere clusters
into other hypervisors and bare metal servers.
Layer 4–7 automation: Cisco ACI includes integration with a broad range of Layer 4–7 device partners and
can provide policy-based redirection to both physical and virtual service devices.
Stateful Layer 2–4 firewall: Cisco ACI Virtual Edge includes an integrated stateful packet-filtering
capability covering Layers 2 through 4.
OpFlex is an open protocol described in an IETF draft. Open-source
implementations are available for both OpenStack and OpenDaylight.
Out-of-the-box integration with leading cloud-management platforms, including VMware vRealize Automation: by
leveraging the Cisco ACI vRealize plug-in, customers can use a large number of predefined workflows to automate
many Cisco ACI fabric configurations.
For organizational reasons, sometimes customers choose to use NSX with Cisco ACI fabrics. For instance, they
may be deploying turnkey solutions that use NSX as a component, or they may be interested in specific security
partners of the NSX ecosystem. Also, many service providers use VMware vCloud Director, a product that also
uses VMware NSX for vSphere.
Nonetheless, considering the rich capabilities included in Cisco ACI, many customers find that deploying Cisco ACI
meets their requirements without adding a second network overlay, thus eliminating the additional cost of VMware
NSX licenses and the hardware required to run all the associated components.
Printed in USA