ACI Constructs Design

• Common Tenant and User-Configured Tenant Policy Usage
• Common Pervasive Gateway
• Contracts and Policy Enforcement
• Contract Labels
• Taboo Contracts
• Bridge Domains
• Application-Centric and Network-Centric Deployments
• Layer 2 Extension
• Infrastructure VXLAN Tunnel Endpoint Pool
• Virtual Routing and Forwarding Instances
• Stretched Fabric
• Access Policies
• Managed Object Naming Convention
Common Tenant and User-Configured Tenant Policy Usage
About Common Tenant and User-Configured Tenant Policy Usage
A tenant is a logical container for application, networking, and security policies. The rules governing policy
reuse across tenants differ between user-configured tenants and the system-defined common tenant.
An example would be that user-configured tenant "A" has a bridge domain, while user-configured tenant "B"
has an endpoint group. By default, tenant B's endpoint group will never be able to make an association to
tenant A's bridge domain. Objects within user-configured tenants cannot form relationships with objects in
other user-configured tenants unless specified with explicit configurations. One example of this is the process
of exporting a contract from one user-configured tenant to another. Otherwise, that contract can only be
referenced by other objects within the same tenant.
When utilizing the system-generated tenant common, this rule does not apply. Objects within tenant common
can be accessed by all other tenants within a Cisco Application Centric Infrastructure (ACI) fabric. This means
that tenant B's endpoint group would be able to use a bridge domain configured within tenant common.
Similarly, tenant B's endpoint group would be able to use a contract that exists within tenant common without
needing to be exported.
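This resolution behavior can be illustrated with the XML for a named relation. The following is a minimal sketch (the tenant, application profile, endpoint group, and bridge domain names are hypothetical): an endpoint group references its bridge domain by name through an fvRsBd relation, and if no bridge domain with that name exists in the local tenant, the APIC resolves the name against tenant common:
<fvTenant name="B">
  <fvAp name="app1">
    <fvAEPg name="web">
      <!-- Resolves to a local bridge domain named "shared-BD" if one exists;
           otherwise resolves to uni/tn-common/BD-shared-BD -->
      <fvRsBd tnFvBDName="shared-BD"/>
    </fvAEPg>
  </fvAp>
</fvTenant>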
Prerequisites for Common Tenant and User-Configured Tenant Policy Usage
You must meet the following prerequisites to use the common tenant and user-configured tenant policies:
• Tenant common is system generated and has no prerequisite configuration to allow its policies to be
accessed by other tenants.
• A user-configured tenant must be created before usage. Not all user-configured tenant policies can be
made accessible to other tenants. The following policies can be exported from one user-configured tenant
to another to form a relationship:
◦ Contracts
◦ Layer 4 to Layer 7 devices
Guidelines and Limitations for Common Tenant and User-Configured Tenant
Policy Usage
The following guidelines and limitations apply for common tenant and user-configured tenant policy usage:
• There are specific policies within a user-configured tenant that can be exported to another tenant for
relationship usage.
• A VRF named "myVRF" within user-configured tenant A is not the same as a VRF named "myVRF"
within user-configured tenant B. This difference can be observed by looking at the distinguished name
(DN) of both VRFs, as shown in the example after this list.
• Depending on the intended usage of these exported policies, there might be other configuration changes
required to complete inter-tenant communication. For more information, see About Shared Services.
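As an example of the DN difference noted in the second bullet, two VRFs that share the name "myVRF" but live in different tenants have distinct DNs (the tenant names here are hypothetical):
uni/tn-A/ctx-myVRF
uni/tn-B/ctx-myVRF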
Recommended Configuration Procedure for Common Tenant and
User-Configured Tenant Policy Usage
The following procedure exports contracts and Layer 4 to Layer 7 devices from a user-configured tenant using
the Application Policy Infrastructure Controller (APIC) GUI, which you can then import into another
user-configured tenant. You must use the advanced GUI mode.
Procedure
Step 1 Export a contract. On the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click the desired tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Security Policies > Contracts.
Step 4 In the Work pane, choose Actions > Export Contract.
Step 5 In the Export Contract dialog box, fill out the fields as necessary.
For a contract to be used between endpoint groups within separate VRFs, the contract scope must be changed
to Global. The scope is set to VRF by default.
Step 6 Export a Layer 4 to Layer 7 device. On the menu bar, choose Tenants > All Tenants.
Step 7 In the Work pane, double-click the user-configured tenant's name from which you will export the device.
Step 8 In the Navigation pane, choose Tenant tenant_name > L4-L7 Services > L4-L7 Devices.
Step 9 In the Work pane, choose Actions > Export L4-L7 Devices.
Step 10 In the Export L4-L7 Devices dialog box, fill out the fields as necessary.
Verifying the Common Tenant and User-Configured Tenant Policy Usage
A general guide to understanding where a policy resides is to understand the distinguished name (DN) of that
object. This can be said for almost every policy within Cisco Application Centric Infrastructure (ACI), but
especially so for those configured within tenants. Most objects in the GUI allow you to right-click on them
and choose Save As. This will allow you to pull either an XML or JSON representation of the object you
chose, and potentially its children objects as well if desired.
The following procedure provides an example of saving a contract named "BP-contract" that was created in
the tenant "ACI-BP":
Procedure
Step 1 On the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click ACI-BP.
Step 3 In the Navigation pane, choose Tenant ACI-BP > Security Policies > Contracts > BP-contract.
Step 4 Right-click the contract and choose Save as ....
Step 5 In the Save As dialog box, click Only Configuration, Self, and xml.
Step 6 Click Download.
The saved XML file contains the following lines:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
<vzBrCP scope="context" prio="unspecified" ownerTag="" ownerKey=""
name="BP-contract" dn="uni/tn-ACI-BP/brc-BP-contract" descr=""/>
</imdata>
The dn parameter has a value of "uni/tn-ACI-BP/brc-BP-contract." Without examining the classes, you can
see that this contract exists directly under tenant ACI-BP and that the contract name is "BP-contract."
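The same object can also be read back directly by its DN through the APIC REST API. The following is a minimal sketch (assuming API access to the APIC and the same tenant and contract names as above); the response contains the same vzBrCP element shown in the saved file:
GET https://<apic>/api/mo/uni/tn-ACI-BP/brc-BP-contract.xml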
Configuration Examples for Common Tenant and User-Configured Tenant Policy
Usage
When selecting a policy for use, you can typically see the tenant association during the selection process. For
example, when attempting to associate a contract to an endpoint group within a user-configured tenant, a
variety of contract choices might display, such as in the following example list:
• multiservice/CTRCT1
• multiservice/JT-BigIP1
• multiservice/JT-BigIP2
• common/TK_common
• common/TK_dev
• common/TK_shared
The contract naming convention is "tenant/contract_name." From the example contract names, you can infer
that all choices that begin with "common/" exist within the common tenant, while all choices prefixed with
"multiservice/" have been created within the user-configured tenant "multiservice."
Additional References for Common Tenant and User-Configured Tenant Policy
Usage
For more information about tenants, see the Cisco Application Centric Infrastructure (ACI) policy model
chapter in the Cisco Application Centric Infrastructure Fundamentals Guide.
Common Pervasive Gateway
About Common Pervasive Gateway
Multiple Cisco Application Centric Infrastructure (ACI) fabrics can be configured with an IPv4 common
gateway on a per-bridge-domain basis. Doing so enables moving one or more virtual machines (VMs) or
conventional hosts across the fabrics while the host retains its IP address. VM host moves across fabrics can
be done automatically by the VM hypervisor. The ACI fabrics can be co-located, or provisioned across multiple
sites. The Layer 2 connection between the ACI fabrics can be a local link, or can be across a routed WAN
link. The following figure illustrates the basic common pervasive gateway topology:
Figure 1: Common Pervasive Gateway Topology
Prerequisites for Common Pervasive Gateway
You must meet the following prerequisites to use common pervasive gateway (CPG):
• Subnets should be determined for CPG
• Common vMAC and unique pMACs across fabrics should be determined
• Hosts to utilize CPG should be set to use the VIP gateway address
• Layer 2 connectivity between fabrics should be established
Guidelines and Limitations for Common Pervasive Gateway
The following guidelines and limitations apply for common pervasive gateway (CPG):
• The bridge domain MAC (pMAC) values for each fabric must be unique.
The default bridge domain MAC (pMAC) address values are the same for all Cisco Application Centric
Infrastructure (ACI) fabrics. The common pervasive gateway requires an administrator to configure the
bridge domain MAC (pMAC) values to be unique for each ACI fabric.
• The bridge domain virtual MAC (vMAC) address and the subnet virtual IP address must be the same
across all ACI fabrics for that bridge domain. Multiple bridge domains can be configured to communicate
across connected ACI fabrics. The virtual MAC address and the virtual IP address can be shared across
bridge domains.
• For endpoints residing in bridge domains with a CPG, the fabric will only route traffic that enters the
bridge domain destined to the vMAC. Any traffic that enters the ACI fabric destined to the pMAC and
addressed to an endpoint will not be routed. This is normally not a concern if the source device performs
an ARP lookup before sending a reply, as the gateway entry for the end device should be the VIP/vMAC
combination. Traffic sourced from the ACI bridge domain will always exit the fabric using the pMAC,
not the vMAC. This will cause communication issues for certain appliances that use forwarding features
that bypass the ARP lookup and instead use the source MAC of the received packet as the destination
MAC of the reply. The following list contains examples of features that bypass ARP lookup:
◦ EMC "Packet Reflect"
◦ F5 "Auto Last Hop"
◦ NetApp "Fast Path"
Recommended Configuration Procedure for Common Pervasive Gateway
The following information applies when configuring common pervasive gateway (CPG):
• Ensure that all end devices utilizing a CPG as their gateway perform ARP lookups in all communication
scenarios. Any device that uses a feature that bypasses this lookup will have communication issues when
trying to reach another subnet within the fabric.
• The pMAC for bridge domains across two separate Cisco Application Centric Infrastructure (ACI)
fabrics must be unique.
• The vMAC across matching bridge domains should be configured the same across both ACI fabrics that
are utilizing CPG.
• The VIP address will be set as a virtual IP and will act as the gateway for hosts within this subnet.
Verifying the Common Pervasive Gateway Using the GUI
The following procedure verifies the common pervasive gateway (CPG) configuration using the Application
Policy Infrastructure Controller (APIC) GUI.
Procedure
Step 1 On the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click the desired tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Networking > Bridge Domains >
bridge_domain_name.
Step 4 In the Work pane, choose the Policy > L3 Configurations tabs.
The Work pane displays the configuration pieces that are needed for a common pervasive gateway.
Step 5 The Custom MAC Address field is the pMAC that must be unique between both Cisco Application Centric
Infrastructure (ACI) fabrics sharing the CPG. By default, all ACI fabrics have the same value. If the value is
the same for both fabrics, change the value on either of the fabrics.
Step 6 The Virtual MAC Address field is the vMAC that must be the same between both bridge domains across
both ACI fabrics. Replace the “Not Configured” text with a valid MAC address.
Step 7 Put a check in the Treat as virtual IP address check box to define the subnet to be the VIP address under
the bridge domain.
This should be done for the address that will be shared across both bridge domains and act as the gateway for
hosts on this subnet. In addition, another subnet/bridge domain address will need to be created that is unique
to this fabric. For example, assume that 192.168.1.1 will be the VIP and exist as the virtual IP address on both
fabrics' bridge domains. Fabric 1 will have a second subnet under the bridge domain set as 192.168.1.2, and
Fabric 2 will have a second subnet under the bridge domain set as 192.168.1.3. These second subnets will not
be virtual IPs, but instead will act as the bridge domain SVI.
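The worked example above can be expressed as the bridge domain configuration pushed to each fabric. The following is a hedged XML sketch (the bridge domain name and MAC values are illustrative only; the default bridge domain MAC, 00:22:BD:F8:19:FF, is identical on every ACI fabric, which is why the pMAC must be changed on at least one side):
Fabric 1:
<fvBD name="BD1" mac="00:22:BD:F8:19:01" vmac="00:22:BD:F8:19:AA">
  <!-- Shared virtual IP acting as the gateway on both fabrics -->
  <fvSubnet ip="192.168.1.1/24" virtual="yes"/>
  <!-- Unique per-fabric address acting as the bridge domain SVI -->
  <fvSubnet ip="192.168.1.2/24"/>
</fvBD>
Fabric 2:
<fvBD name="BD1" mac="00:22:BD:F8:19:02" vmac="00:22:BD:F8:19:AA">
  <fvSubnet ip="192.168.1.1/24" virtual="yes"/>
  <fvSubnet ip="192.168.1.3/24"/>
</fvBD>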
Additional References for Common Pervasive Gateway
For more information on the common pervasive gateway traffic flow, see the tenants chapter of the Operating
Cisco Application Centric Infrastructure document at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Contracts and Policy Enforcement
About Contracts and Policy Enforcement
Contracts
By default, a VRF is in enforced mode, which means that without a contract, different endpoint groups are
unable to communicate to each other. Endpoint groups associate to a contract with provider/consumer
relationships. ACLs, rules, and filters are created in the leaf switches to realize the intent of contracts that will
be programmed on the ternary content-addressable memory (TCAM). The following figure illustrates endpoint
groups communicating through contracts:
Figure 2: Endpoint Group Communication Through Contracts
Policy information in Cisco Application Centric Infrastructure (ACI) is programmed into two TCAM tables:
• Policy TCAM contains entries for the allowed endpoint-group-to-endpoint-group traffic
• App TCAM contains shared destination Layer 4 port ranges
The size of the policy TCAM depends on the generation of Cisco ASIC that is in use. For ALE-based systems,
the policy TCAM size is 4k entries. For ALE2-based systems, 32k hardware entries are available. In certain
larger scale environments, it is important to take policy TCAM usage into account and ensure that the limits
are not exceeded.
TCAM entries are generally specific to each endpoint group pair. In other words, even if the same contract
is reused, new TCAM entries are installed for every pair of endpoint groups, as shown in the following figure:
Figure 3: TCAM Entries Per Endpoint Group Pair
An approximate calculation for the number of TCAM entries is as follows:
Number of entries in a contract * Number of Consumer EPGs * Number of Provider EPGs * 2
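For example, a contract containing 10 filter entries that is consumed by 20 endpoint groups and provided by 2 endpoint groups consumes approximately 10 * 20 * 2 * 2 = 800 policy TCAM entries. The figures here are illustrative only; this is an approximation, and actual consumption can vary with the contract options in use.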
vzAny
The "Any" endpoint group is a collection of all of the endpoint groups within a context, which is also known
as a virtual routing and forwarding (VRF), that allows for a shorthand way to refer to all of the endpoint groups
within that context. This shorthand referral eases management by allowing for a single point of contract
configuration for all endpoint groups within a context, and also optimizes hardware resource consumption by
applying the contract to this one group rather than to each endpoint group individually.
Consider the example shown in the following figure:
Figure 4: Multiple Endpoint Groups Consuming a Single Contract
In this scenario, a single endpoint group named "Shared" is providing a contract, with multiple endpoint groups
consuming that contract. Although this setup works, it has some drawbacks. First, the administrative burden
increases, as each endpoint group must be configured separately to consume the contract. Second, the number
of hardware TCAM entries increases each time an endpoint group associates with a contract. A very high
number of endpoint groups all providing or consuming a contract can, in extreme cases, lead to exhaustion
of the hardware resources.
To overcome these issues, the "vzAny" object can be used. vzAny is a managed object within Cisco Application
Centric Infrastructure (ACI) that represents all endpoint groups within a VRF. This object can be used to
provide or consume contracts, so in the example above, you can consume the contract from vzAny with the
same results, as shown in the following figure:
Figure 5: vzAny Consuming a Contract
This is not only easier to configure (although automation can eliminate this benefit), but also represents the
most efficient use of fabric hardware resources, so it is recommended in cases where every endpoint
group within a VRF must consume or provide a given contract.
Whenever the use of the vzAny object is being considered, the administrator must plan for its use carefully.
Once the vzAny object is configured to provide or consume a contract, any new endpoint groups that are
associated with the VRF will inherit the policy; a new endpoint group added to the VRF will provide or
consume the same contracts that are configured under vzAny. If it is likely that new endpoint groups added
later might not need to consume the same contract as every other endpoint group in the VRF, then vzAny
might not be the most suitable choice. You should carefully consider this situation before you use vzAny.
To apply a contract to the vzAny group, choose a tenant in the Application Policy Infrastructure Controller
(APIC) GUI. In the Navigation pane, navigate to Tenant tenant_name > Networking > VRFs > vrf_name
> EPG Collection for Context. vrf_name is the name of the VRF for which you want to configure vzAny.
EPG Collection for Context is the vzAny object; contracts can be applied here.
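The equivalent object configuration can also be posted through the REST API. The following is a minimal sketch (the tenant, VRF, and contract names are hypothetical) that attaches a consumed contract to the vzAny object of a VRF; vzRsAnyToProv would be used to provide a contract instead:
<fvTenant name="myTenant">
  <fvCtx name="myVRF">
    <vzAny>
      <!-- All endpoint groups in myVRF consume this contract -->
      <vzRsAnyToCons tnVzBrCPName="Shared-contract"/>
    </vzAny>
  </fvCtx>
</fvTenant>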
Using vzAny with the "Established Flag"
An additional example of the use of the vzAny policy to reduce resource consumption is to use it in conjunction
with the "established" flag. By doing so, you can configure contracts as unidirectional in nature, which further
reduces hardware resource consumption.
Consider the example shown in the following figure:
Figure 6: Bi-Directional Contracts - Regular Configuration
In this example, two contracts are configured for SSH and HTTP. Both contracts are provided by EPG2 and
consumed by EPG1. The Apply Both Directions and Reverse Filter Ports options are checked, resulting in
the four TCAM entries shown in the figure.
You can reduce the TCAM utilization by half by making the contract unidirectional, as shown in the following
figure:
Figure 7: Unidirectional Contracts
However, having a unidirectional contract presents a problem: return traffic is not allowed in the contract,
and therefore the connections cannot be completed and traffic fails. To allow return traffic to pass, you can
configure a rule that allows traffic between all ports with the "established" flag. We can take advantage of
vzAny in this case to configure a single contract for the "established" traffic and apply it to the entire VRF,
as shown in the following figure:
Figure 8: Use of vzAny with an "Established" Contract
In an environment with a large number of contracts being consumed and provided, this can reduce the number
of TCAM entries significantly.
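A filter entry that matches only established TCP traffic can be sketched as follows (the filter and entry names are hypothetical); a contract subject applied under vzAny would then reference this filter:
<vzFilter name="allow-established">
  <!-- tcpRules="est" matches TCP packets with the established flag set -->
  <vzEntry name="tcp-est" etherT="ip" prot="tcp" tcpRules="est"/>
</vzFilter>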
Ingress Policy Enforcement for Border Leaf TCAM Scalability
Software release 1.2 introduced a new policy enforcement model whereby security rules for all flows are
enforced on the leaf node to which internal hosts are connected, rather than at the border leaf. This results in
a more even distribution of security rules, rather than being concentrated at the border leaf as was the case
prior to release 1.2.
For more information, see About L3Out Ingress Policy Enforcement.
Guidelines and Limitations for Contracts and Policy Enforcement
The following guidelines and limitations apply when using a vzAny contract:
• When vzAny is used with a contract with scope = Application-Profile, this setting causes rule expansion
in the leaf switches and therefore is not recommended
• vzAny is supported as a consumer of a shared service, but is not supported as a provider of a shared
service
• vzAny is used only to optimize the specification of a source endpoint group or destination endpoint
group, by specifying a wildcard for either or both endpoint groups.
• If there are port ranges in a filter used with a vzAny contract, the port ranges are expanded in the TCAM
to implement the ranges
Additional References for Contracts and Policy Enforcement
For more information about contracts, including procedures for administering contracts, see the Operating
Cisco Application Centric Infrastructure document at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Contract Labels
About Contract Labels
Contracts are key objects within the Cisco Application Centric Infrastructure (ACI) policy model to express
intended communication flows. Endpoint groups can only communicate with other endpoint groups according
to the contract rules. A contract can be thought of as an ACL that opens ports between endpoint groups. An
administrator uses a contract to select the types of traffic that can pass between endpoint groups, including
the protocols and ports allowed. If there are no contracts connecting two endpoint groups, inter-endpoint
group communication is disabled by default as long as the VRF is set to Enforced. This is a representation
of the white-list policy model that ACI is built around. There is no contract required for intra-endpoint group
communication; intra-endpoint group communication is always implicitly allowed regardless of VRF settings.
There are configurations that can block intra-endpoint group communication, but this functionality is provided
by microsegmentation and is not covered in this section.
Contracts can contain multiple communication rules, and multiple endpoint groups can both consume and
provide multiple contracts. Labels allow for control over which subjects and filters to apply when
communicating between a specific pair of endpoint groups. Without labels, a contract will apply every subject
and filter between consumer and provider endpoint groups. A policy designer can use labels to compactly
represent a complex communication scenario, within the scope of a single contract, then re-use this contract
while specifying only a subset of its policies across multiple endpoint groups.
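As a hedged sketch of how labels narrow a contract (the names here are hypothetical, and exact label placement should be verified against the APIC management information model), a provider endpoint group can carry a subject label so that only the matching subject of a multi-subject contract is applied to it:
<fvAEPg name="web">
  <fvRsProv tnVzBrCPName="multi-subject-contract"/>
  <!-- Only subjects carrying a matching provider subject label apply to this EPG -->
  <vzProvSubjLbl name="http-only"/>
</fvAEPg>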
Prerequisites for Contract Labels
You must meet the following prerequisites to use contract labels:
• Contracts should be configured
• Depending on the type of matching to be done, the contract can contain multiple subjects (for subject
labels to be useful)
• Have an understanding of the scope of the contract and how to change that setting (the default is VRF)
Guidelines and Limitations for Contract Labels
The following guidelines and limitations apply for contract labels:
• Understand the scope of a label. Labels can be applied to a variety of provider and consumer managed
objects. This includes endpoint groups, contracts, bridge domains, DHCP relay policies, and DNS
policies. Labels do not apply across object types; a label on an application endpoint group has no relevance
to a label on a bridge domain.
• Labels are managed objects with only one property: a name. Labels enable the classification of which
objects can and cannot communicate with one another. Label matching is done first. If the labels do not
match, no other contract or filter information is processed.
• Label matching can be applied based on logical operators. The label match attribute can be one of these
values: at least one (the default), all, none, or exactly one.
• Because labels are named references, do not use duplicate label names unless the intent is to chain
those flows together.
Recommended Configuration Procedure for Contract Labels
In general, contract labels are not required for contract deployments. For these general scenarios, a single
flow can be presented per contract (single subject/group of filters specific to that flow). Utilizing labels does
not save resources compared to defining distinct contracts; labels are only another method available to provision
contracts while defining specific flows.
Verifying the Contract Labels Using the GUI
The following procedure verifies the programmed rules of a contract under the EPG by using the Application
Policy Infrastructure Controller (APIC) GUI. You can use either the advanced or the basic GUI mode.
Procedure
Step 1 On the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click the tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Application Profiles > application_profile_name
> Application EPGs > EPG EPG_name.
Step 4 In the Work pane, choose the Operational > Contracts tabs.
The Work pane displays programmed rules for the contracts. You can ensure that the contract labels are
configured properly.
Configuration Examples for Contract Labels
The following procedure provides an example of configuring contract labels using the Application Policy
Infrastructure Controller (APIC) GUI.
Procedure
Step 1 Configure contract labels (consumer and provider). On the menu bar, choose Tenants > All Tenants.
Step 2 In the Work pane, double-click the tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Security Policies > Contracts > contract_name >
contract_subject_name.
Step 4 In the Work pane, choose the Policy > Label tabs.
The Work pane displays the existing consumed and provided contract labels, and you can configure new
labels.
Step 5 Configure endpoint group subject labels. In the Navigation pane, choose Tenant tenant_name > Application
Profiles > application_profile_name > Application EPGs > EPG EPG_name.
Step 6 In the Work pane, choose the Policy > Subject Labels tabs.
The Work pane displays the existing consumed and provided endpoint group subject labels, and you can
configure new labels.
Step 7 Configure an endpoint group label when associating a contract as a consumer or provider. In the Navigation
pane, choose Tenant tenant_name > Application Profiles > application_profile_name > Application
EPGs > EPG EPG_name > Contracts.
Step 8 In the Work pane, choose Action > Add Provided Contract or Action > Add Consumed Contract.
Step 9 In the Add Provided Contract or Add Consumed Contract dialog box, fill out the fields as appropriate and
specify the contract label and subject label.
Additional References for Contract Labels
For more information about contracts and contract labels, see the Cisco Application Centric Infrastructure
Fundamentals Guide at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
For more information about Application Policy Infrastructure Controller (APIC) policy enforcement, see the
Cisco Application Policy Infrastructure Controller Data Center Policy Model white paper at the following
URL:
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/
white-paper-c11-731310.html
Taboo Contracts
About Taboo Contracts
Taboo contracts are special contract managed objects in the model that the network administrator can use to
deny specific classes of traffic. Taboos can be used to drop traffic matching a pattern, such as traffic from
any endpoint group, traffic from a specific endpoint group, or traffic matching the results of a filter. Taboo
rules are applied in the hardware before the rules of regular contracts are applied.
Prerequisites for Taboo Contracts
Taboo contracts do not have any specific prerequisites that you must meet.
Guidelines and Limitations for Taboo Contracts
In general, the use cases for taboo contracts are very specialized and are not seen in a typical deployment. Due
to the whitelist nature of Cisco Application Centric Infrastructure (ACI), all flows are blocked by default, and
those that are to be allowed must be specified by a consumer/provider contract relationship.
Recommended Configuration Procedure for Taboo Contracts
The following procedure configures a taboo contract.
Procedure
Step 1 Configure a taboo contract within the security policies of a tenant. On the menu bar, choose Tenants > All
Tenants.
Step 2 In the Work pane, double-click the desired tenant's name.
Step 3 In the Navigation pane, choose Tenant tenant_name > Security Policies > Taboo Contracts.
Step 4 In the Work pane, choose Action > Create Taboo Contract.
Step 5 In the Create Taboo Contract dialog box, fill in the fields as necessary. You must specify the Name and
add at least one subject.
The subject determines what flow to deny explicitly when the taboo contract is applied.
Step 6 Add a taboo contract to an endpoint group. In the Navigation pane, choose Tenant tenant_name > Application
Profiles > application_profile_name > Application EPGs > EPG_name > Contracts.
Step 7 In the Work pane, choose Action > Add Taboo Contract.
Step 8 In the Add Taboo Contract dialog box, choose an existing taboo contract or create a new taboo contract.
When adding a taboo contract to an endpoint group, there is no consumer/provider relationship needed to
complete the contract flow. The taboo contract will insert a deny specific to that endpoint group once it has
been associated to an endpoint group.
Step 9 (Optional) If you are creating a new taboo contract, in the Create Taboo Contract dialog box, fill in the
fields as necessary. You must specify the Name and add at least one subject.
The subject determines what flow to deny explicitly when the taboo contract is applied.
Configuration Examples for Taboo Contracts
One scenario in which taboo contracts can be used is while defining subnets under an L3Out, specifically in
the case that subnets are to be blocked. Generally speaking, for an L3Out, the first subnet to be defined is
0.0.0.0/0 as the network, which allows all subnets into the fabric given proper configuration, although this
definition is not required. If there are specific subnets for which you want to restrict access into the fabric from
this L3Out, you can do so by creating another network under the same L3Out, specifying the subnet to be
blocked, and then associating the subnet with a taboo contract.
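The taboo contract itself can be sketched in XML as follows (the tenant, taboo contract, subject, and filter names are hypothetical); the network to be blocked is then associated with the taboo contract, which in object terms is an fvRsProtBy relation:
<fvTenant name="ACI-BP">
  <vzTaboo name="deny-badnet">
    <vzTSubj name="drop-all">
      <!-- References a filter describing the traffic to deny -->
      <vzRsDenyRule tnVzFilterName="any-traffic"/>
    </vzTSubj>
  </vzTaboo>
</fvTenant>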
Additional References for Taboo Contracts
For more information on taboo contract fundamentals, see the Cisco Application Centric Infrastructure
Fundamentals Guide at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Bridge Domains
About Bridge Domains
Within a private network, one or more bridge domains must be defined. A bridge domain is a Layer 2 forwarding
construct within the fabric, used to constrain broadcast and multicast traffic.
Bridge domains in Cisco Application Centric Infrastructure (ACI) have a number of configuration options to
allow the administrator to tune the operation in various ways. The configuration options are as follows:
• L2 Unknown Unicast—This option can be set to either Flood or Hardware Proxy. If this option is set
to Flood, Layer 2 unknown unicast traffic will be flooded inside the fabric. If the Hardware Proxy option
is set, the fabric mapping database will be queried for Layer 2 unknown unicast traffic. This option does
not have any impact on what the mapping database actually learns; the mapping database is always
populated for Layer 2 entries regardless of this configuration.
• ARP Flooding—If ARP flooding is enabled, ARP traffic will be flooded inside the fabric as per regular
ARP handling in traditional networks. If this option is disabled, the fabric will attempt to unicast the
ARP traffic to the destination. This option only applies if unicast routing is enabled on the bridge domain.
If unicast routing is disabled, ARP traffic is always flooded, regardless of the status of the ARP Flooding
option.
• Unicast Routing—This option enables the learning of IP addresses in the fabric mapping database. MAC
addresses are always learned by the mapping database. Use of the unicast routing option is generally
recommended, even when only Layer 2 traffic is present, to assist troubleshooting (such as with the
Traceroute tool) and to allow advanced functionality, such as dynamic endpoint attachment with Layer
4 to Layer 7 services. Enabling unicast routing helps to reduce flooding in a bridge domain, as disabling
ARP flooding depends upon it. When considering unicast routing, consideration must be given to the
desired topology. If an external device (such as a firewall) is acting as the default gateway and there is
routing between two bridge domains, enabling unicast routing might cause traffic to be routed on the
fabric and bypass the external device.
• Enforce Subnet Check for IP Learning—If this option is checked, the fabric will not learn IP addresses
from a subnet other than the one configured on the bridge domain. For example, if a bridge domain is
configured with a subnet address of 10.1.1.0/24, the fabric would not learn the IP address of an endpoint
by using an address that is outside of this range, such as 20.1.1.1/24. This feature does not affect the
data path; in other words, it will not drop packets coming from the wrong subnet. The feature simply
prevents the fabric from learning endpoint information in this scenario.
Given the above options, it might not be immediately obvious how a bridge domain should be configured.
The following sections explain when and why particular options should be selected.
Guidelines and Limitations for Bridge Domains
A bridge domain can contain multiple subnets. When you configure a bridge domain with multiple subnets,
the first subnet added becomes the primary IP address on the SVI interface. Subsequent subnets are configured
as secondary IP addresses. When the switch reloads, the primary IP address might change unless it is marked
explicitly.
When using a DHCP relay configuration for bridge domains with multiple subnets, DHCP relay policy can
only be configured for the primary IP address on the SVI interface.
If there are DHCP clients that use multiple subnets, make sure you define different bridge domains with each
subnet to accommodate that requirement.
To configure a bridge domain subnet as primary, view the subnet's properties and put a check in the Make
this IP address primary check box.
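In object terms, this maps to the subnet's preferred attribute, as in the following minimal sketch (the bridge domain name and addresses are hypothetical):
<fvBD name="BD1">
  <!-- Primary address on the SVI -->
  <fvSubnet ip="10.1.1.1/24" preferred="yes"/>
  <!-- Secondary address -->
  <fvSubnet ip="10.1.2.1/24"/>
</fvBD>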
Recommended Configuration Procedure for Bridge Domains
The following sections provide the recommended settings for common bridge domain scenarios.
Scenario 1: IP Address-Based Routed Traffic
In this scenario, the bridge domain has the following configuration:
• IP address-based routed traffic
• Firewalls and load balancers cannot be connected to this bridge domain
• The bridge domain cannot have clusters or similar things that might rely on "floating" IP addresses (that
is, IP addresses that might move to different MACs)
• Silent hosts are not expected to be connected to the bridge domain
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast—Hardware Proxy
• ARP Flooding—Disabled
• Unicast Routing—Enabled
• Subnet Configured—Yes, if required
• Enforce Subnet Check for IP Learning—Yes
In this scenario, most of the bridge domain settings can be left at their default, optimized values. A subnet
(that is, a gateway address) should be configured as required and you should enforce the subnet check for IP
learning.
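These settings map to attributes on the bridge domain object. The following is a hedged sketch of a bridge domain configured for this scenario (the name and address are illustrative; the subnet-check option maps to the limitIpLearnToSubnets attribute in some software releases, so verify the attribute name against your release):
<fvBD name="BD-routed" unkMacUcastAct="proxy" arpFlood="no" unicastRoute="yes"
      limitIpLearnToSubnets="yes">
  <!-- Gateway address for the bridge domain -->
  <fvSubnet ip="10.1.1.1/24"/>
</fvBD>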
Scenario 2: IP Address-Based Routed Traffic, Possible Silent Hosts
In this scenario, the bridge domain has the following configuration:
• IP address-based routed traffic
• Firewalls and load balancers cannot be connected to this bridge domain
• The bridge domain cannot have clusters or similar things that might rely on "floating" IP addresses (that
is, IP addresses that might move to different MACs)
• There might be silent hosts connected to the bridge domain
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast—Hardware Proxy
• ARP Flooding—Disabled
• Unicast Routing—Enabled
• Subnet Configured—Yes
• Enforce Subnet Check for IP Learning—Yes
The bridge domain settings for this scenario are similar to scenario 1; however, in this case the subnet address
should be configured. As silent hosts can exist within the bridge domain, a mechanism must exist to ensure
those hosts are learned correctly inside the Cisco Application Centric Infrastructure (ACI) fabric. ACI
implements an ARP gleaning mechanism that allows the spine switches to generate an ARP request for an
endpoint using the subnet IP address as the source address. This ARP gleaning mechanism ensures that silent
hosts are always learned, even when using optimized bridge domain features such as hardware proxy.
The following figure shows the ARP gleaning mechanism when endpoints are not present in the mapping
database:
Figure 9: ARP Gleaning Mechanism in ACI
If a subnet IP address cannot be configured for any reason, ARP flooding should be enabled as an alternative
to allow the silent hosts to be learned.
Scenario 3: Non-IP Address-Based Switched Traffic, Possible Silent Hosts
In this scenario, the bridge domain has the following configuration:
• Non-IP address-based switched traffic
• Firewalls and load balancers cannot be connected to this bridge domain
• The bridge domain cannot have clusters or similar things that might rely on "floating" IP addresses (that
is, IP addresses that might move to different MACs)
• There might be silent hosts connected to the bridge domain
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast: Flood
• ARP Flooding: N/A (enabled automatically due to no unicast routing)
• Unicast Routing: Disabled
• Subnet Configured: No
• Enforce Subnet Check for IP Learning: N/A
In this scenario, all optimizations inside the bridge domain are disabled and the bridge domain is operating
in a "traditional" manner. Silent hosts are dealt with through normal ARP flooding, which is always enabled
when unicast routing is turned off.
Also, when operating the bridge domain in a "traditional" mode, the size of the bridge domain should be kept
manageable. That is, limit the subnet size and number of hosts as you would in a regular VLAN environment.
Scenario 4: Non-IP Address or IP Address-Based, Routed or Switched Traffic, Possible "Floating" IP Addresses
In this scenario, the bridge domain has the following configuration:
• IP address-based or non-IP address-based routed or switched traffic
• Firewalls and load balancers might be connected to this bridge domain
• Hosts or devices where the IP address might "float" between MAC addresses
• Silent hosts are not expected to be connected to the bridge domain
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast: Hardware Proxy
• ARP Flooding: Enabled
• Unicast Routing: Enabled
• Subnet Configured: Yes
• Enforce Subnet Check for IP Learning: Yes
In this scenario, the bridge domain contains devices where the IP address might move from one device to
another, meaning that the IP address moves to a new MAC address. This might be the case where routed
firewalls are operating in active/standby mode, or where server clustering is used. Where this is a requirement,
it is useful for gratuitous ARPs to be flooded inside the bridge domains to update the ARP cache of other
hosts.
In this example, unicast routing and subnet configuration are enabled for troubleshooting purposes, such as
for using traceroute, or for advanced features that require it, such as dynamic endpoint attachment.
Scenario 5: Migrating to ACI, Legacy Network Connected Through a Layer 2 Extension, Gateways on Legacy
Network
In this scenario, you are migrating to ACI. You are extending Layer 2 from ACI to your legacy network, and
Layer 3 gateways still reside on the legacy network.
Given the above requirements, the recommended bridge domain settings are as follows:
• L2 Unknown Unicast: Hardware Proxy
• ARP Flooding: Enabled
• Unicast Routing: Enabled
• Subnet Configured: If required
• Enforce Subnet Check for IP Learning: If required
In this scenario, the user is migrating hosts and services from the legacy network into the ACI fabric. A Layer
2 connection has been set up between the two environments and the Layer 3 gateway functionality will continue
to exist in the legacy network for some time. The following figure illustrates the topology of this configuration:
Figure 10: Layer 2 Connection to Fabric with External Gateways
In this situation, ensure that ARP flooding is enabled in the bridge domain.
Application-Centric and Network-Centric Deployments
About Application-Centric and Network-Centric Deployments
When discussing a Cisco Application Centric Infrastructure (ACI) deployment, there are two main strategies
that can be taken: application-centric and network-centric.
Application-Centric Deployment
When taking an application-centric approach to an ACI deployment, the applications within an organization
should be allowed to define the network requirements. A true application-centric deployment will make full
use of the available fabric constructs, such as endpoint groups, contracts, filters, labels, external endpoint
groups, and so on, to define how applications and their tiers should communicate.
With an application-centric approach, it is generally the case that the gateways for endpoints will reside in
the fabric itself (rather than on external entities such as firewalls or load balancers). This enables the application
environment to get the maximum benefit from the ACI fabric.
In an application-centric deployment, much of the complexity associated with traditional networks (such as
VRFs, VLANs, and subnets) is hidden from the administrator.
The following figure shows an example of an application-centric deployment:
Figure 11: Application-Centric Deployment
An application-centric approach is generally recommended when users fully understand their application
profiles, such as the application tiers and components, and know which applications (or application components)
need to communicate with each other and on what protocols or ports.
An application-centric deployment is also seen as an approach to onboard new applications.
Benefits of using this approach include:
• Workload mobility and flexibility, with placement of computing and storage resources anywhere in the
data center
• Capability to manage the fabric as a whole instead of using device-centric operations
• Capability to monitor the network as a whole using the Application Policy Infrastructure Controller
(APIC) in addition to the existing operation monitoring tools; the APIC offers new monitoring and
troubleshooting tools, such as health scores and atomic counters
• Lower TCO and a common network that can be shared across multiple tenants in the data center
• Reduced application downtime for network-related changes
• Rapid application deployment and agility through programmability and integrated automation
• Centralized auditing of configuration changes
• Enhanced data center security for east-west application traffic, with microsegmentation to contain threats
and prevent threats from spreading laterally across tenants and applications inside the data center
• Direct visibility into the health of the application infrastructure, benefitting application owners
• Template-based configuration, which increases efficiency and enables self-service
Network-Centric Deployment
A network-centric deployment takes the opposite approach to the application-centric deployment in that the
traditional network constructs, such as VLANs and VRFs, are mapped as closely as possible to the new
constructs within the ACI fabric.
As an example, a traditional network deployment might consist of the following tasks:
• Define 2 server VLANs at the access and aggregation layers
• Configure the access ports to map server to VLANs
• Define a VRF at the aggregation layer
• Define an SVI for each VLAN, and map them to the VRF
• Define the HSRP parameters for each SVI
• Apply features such as ACLs to control traffic between server VLANs, and from server VLANs to the
core
The comparable ACI deployment when taking a network-centric approach might be as follows:
• Deploy the fabric
• Create a tenant and VRF
• Define bridge domains for the purposes of external routing entity communication
• Create an external/outside endpoint group to communicate with external networks
• Create two bridge domains and assign a network to each indicating the gateway IP address (such as
10.10.10.1/24 and 10.10.11.1/24)
• Define the endpoint groups
• Define a "permit any" contract to allow any to any EPG communication, as a VRF would do in ‘classic’
model without ACLs
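As an example of the last item, a "permit any" contract can be sketched by referencing the default filter from tenant common, which matches all traffic (the contract and subject names here are hypothetical):
<vzBrCP name="permit-any" scope="context">
  <vzSubj name="any">
    <!-- The "default" filter in tenant common permits all traffic -->
    <vzRsSubjFiltAtt tnVzFilterName="default"/>
  </vzSubj>
</vzBrCP>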
If external gateways are defined (such as firewalls or load balancers) for endpoints to use, this constitutes a
network-centric approach. In this scenario, no contracts are required to allow access to the default gateway
from endpoints. Although there are still benefits to be had in terms of centralized control, the fabric might
become more of a Layer 2 transport in certain situations where the gateways are not inside the fabric.
The following figure shows an example of a network-centric approach:
Figure 12: Network-Centric Deployment Approach
A network-centric deployment is typically seen as a starting point for initially migrating from a legacy network
to the ACI fabric. In a legacy infrastructure that is segmented by VLANs, using a VLAN = EPG = BD
mapping helps administrators understand the ACI constructs better and makes the transition easier.
Using this approach does not require any changes to the existing infrastructure or processes. It can still leverage
the benefits that ACI offers, as listed below:
• Enables a next-generation data center network with high-speed 10- and 40-Gbps access or an aggregation
network
• East-west data center traffic optimization to support virtualized, dynamic environments as well as
non-virtualized workloads
• Supports workload mobility and flexibility, with placement of computing and storage resources anywhere
in the data center
• Capability to manage the fabric as a whole instead of using device-centric operations
• Capability to monitor the network as a whole using the APIC in addition to the existing operation
monitoring tools; the APIC offers new monitoring and troubleshooting tools, such as health scores and
atomic counters
• Lower TCO and a common network that can be shared securely across multiple tenants in the data center
• Rapid network deployment and agility through programmability and integrated automation
• Centralized auditing of configuration changes
• Direct visibility into the health of the application infrastructure
Layer 2 Extension
About Layer 2 Extension
When extending a Layer 2 domain outside of the Cisco Application Centric Infrastructure (ACI) fabric to
support migrations from the existing network to a new ACI fabric, or to interconnect dual ACI fabrics at Layer
2, there are two methods to extend your Layer 2 domain:
• Extend the endpoint group out of the ACI fabric using endpoint group static path binding
• Extend the bridge domain out of the ACI fabric using an external bridged domain (also known as a Layer
2 outside)
Note
When extending the bridge domain, only a single Layer 2 outside can be created per bridge domain.
Endpoint group extension is the most popular approach to extend Layer 2 domains, where each individual
endpoint group is extended using a dedicated VLAN beyond the fabric. This method is the most commonly
used, as it is easy to deploy and does not require the use of contracts between the inside and outside networks.
However, if you use one bridge domain with multiple endpoint groups, then when you interconnect ACI
fabrics in Layer 2, you should not use the endpoint group extension method due to the risk of loops.
Configuration Examples for Layer 2 Extension
When designing a Cisco Application Centric Infrastructure (ACI) environment for dual data centers, one
topology option is to use separate fabrics, one per site, with a Layer 2 interconnection between them. In this
scenario, each fabric is managed by its own Application Policy Infrastructure Controller (APIC) cluster, with
no sharing or synchronization of policies between each.
The following figure illustrates interconnecting ACI fabrics at Layer 2:
Figure 13: Interconnect Fabrics at Layer 2 with Multiple Endpoint Groups per Bridge Domain (Scenario Not Recommended)
In this example, multiple endpoint groups are associated with a single bridge domain. In this scenario, you
should not extend each individual endpoint group between fabrics as shown in the figure, as this might result
in loops between the fabrics. Instead, a Layer 2 Outside should be used to extend the entire bridge domain
using a single VLAN, as shown in the following figure:
Figure 14: Interconnect Fabrics at Layer 2 - Multiple Endpoint Groups per Bridge Domain (Recommended Scenario)
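A Layer 2 outside of this kind can be sketched as follows (the names and VLAN encapsulation are hypothetical); the entire bridge domain is extended on a single VLAN, and contracts are then required between the external endpoint group and the internal endpoint groups:
<l2extOut name="DC-interconnect">
  <!-- Extends bridge domain BD1 on VLAN 100 -->
  <l2extRsEBd tnFvBDName="BD1" encap="vlan-100"/>
  <!-- External endpoint group representing the remote fabric -->
  <l2extInstP name="remote-fabric"/>
</l2extOut>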
Additional References for Layer 2 Extension
For more information about Layer 2 extension, see the "ACI Layer 2 Connection to the Outside Network"
section of the Connecting Application Centric Infrastructure (ACI) to Outside Layer 2 and 3 Networks white
paper at the following URL:
http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/
white-paper-listing.html
Infrastructure VXLAN Tunnel Endpoint Pool
About Infrastructure VXLAN Tunnel Endpoint Pool
The Cisco Application Centric Infrastructure (ACI) fabric is brought up in a cascading manner, starting with
the leaf nodes that are directly attached to the Application Policy Infrastructure Controller (APIC). LLDP and
control-plane IS-IS convergence occurs in parallel to this boot process. The ACI fabric uses LLDP- and
DHCP-based fabric discovery to automatically discover the fabric switch nodes, assign the infrastructure
VXLAN tunnel endpoint (VTEP) addresses, and install the firmware on the switches.
The VTEP pool, which is specified only once during fabric discovery, is the pool of addresses used while
building the fabric. That is, each switch node added to the fabric gets an address. The VTEP pool is used for
other infrastructure related extensions, such as extending the infrastructure into a host for Application Virtual
Switch (AVS) integration.
Prerequisites for Infrastructure VXLAN Tunnel Endpoint Pool
You must meet the following prerequisites to use infrastructure VXLAN Tunnel Endpoint Pool (VTEP):
• The Application Policy Infrastructure Controllers (APICs) are clean and have no configuration. The
only time the VTEP pool gets set for the infrastructure is during the startup script on the APICs.
• The leaf and spine nodes to be added to the fabric are running a Cisco Application Centric Infrastructure
(ACI) image and not an NX-OS standalone image.
• The leaf and spine nodes to be added to the fabric are not part of another ACI fabric.
Guidelines and Limitations for Infrastructure VXLAN Tunnel Endpoint Pool
The following guidelines and limitations apply for infrastructure VXLAN Tunnel Endpoint Pool (VTEP):
• The infrastructure VTEP address cannot be changed once the fabric is built around it.
• To change the VTEP pool, the fabric must be rebuilt from scratch. This is a disruptive process and will
require the configuration to be exported, then imported after the initial fabric steps are completed.
• Generally, the infrastructure subnet stays internal to the fabric. The subnet exists within its own VRF
and is rarely exposed beyond that.
• There are a few scenarios, such as Application Virtual Switch (AVS) integration, where this subnet gets
exposed outside of the fabric. Due to this, ensure that this subnet does not overlap with another subnet
that is in use within the data center.
• While the minimum supported subnet size is a /22, this is not an ideal pool size and will cause scale
issues while attempting to grow the fabric. Subnet size /22 is only recommended for a small lab
environment.
If subnet size is a concern, a recommended subnet size for the VTEP pool is a /19. Otherwise, the ideal
subnet size for the VTEP pool is a /16.
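As a rough sizing illustration: a /22 pool yields about 1,000 addresses (2^10), a /19 about 8,000 (2^13), and a /16 about 65,000 (2^16). Because the pool is carved into multiple internal DHCP pools, and each switch node added to the fabric (and, with AVS integration, each host) consumes addresses from it, the larger pools leave far more headroom for fabric growth.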
Recommended Configuration Procedure for Infrastructure VXLAN Tunnel
Endpoint Pool
The Infrastructure VTEP pool only ever gets set on the Application Policy Infrastructure Controller (APIC)
during the startup script before the fabric is ever built.
Verifying the Infrastructure VXLAN Tunnel Endpoint Pool
The point at which the infrastructure VTEP pool can be verified is right before accepting the configuration
within the startup script on the Application Policy Infrastructure Controller (APIC). The APIC asks if the
configuration is correct, including the VTEP pool address assignment. After you confirm that the configuration
is correct, the larger pool gets broken into multiple DHCP pools for various purposes within the fabric and
there is currently no straightforward way to verify the initial pool size after startup script acceptance.
That being said, with the APIC connected to the fabric, the following procedure can be used to observe the
pools that the initial TEP pool was carved up into, and subsequently the initial network it is carved from.
Procedure
Use the moquery -c dhcpPool command to view the TEP pool configuration.
Example:
Apic1# moquery -c dhcpPool
...
dn : prov-3/net-[10.0.0.0/16]/pool-7
Within the output distinguished name (DN) of this class, there is a section that begins with "net-". In the
example snippet above, the APIC was configured with 10.0.0.0/16 as its TEP pool during the setup script.
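The same check can be performed over the APIC REST API instead of moquery. The following is a minimal
sketch; the APIC address and credentials are placeholders, and certificate verification is disabled only for
illustration.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder APIC address
session = requests.Session()
session.verify = False  # lab illustration only

# Standard APIC REST login
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login)

# Class query equivalent to "moquery -c dhcpPool"
resp = session.get(f"{APIC}/api/node/class/dhcpPool.json")
for obj in resp.json()["imdata"]:
    # The net-[...] portion of each DN reveals the configured TEP pool
    print(obj["dhcpPool"]["attributes"]["dn"])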
Configuration Examples for Infrastructure VXLAN Tunnel Endpoint Pool
The default configuration is 10.0.0.0/16. The configuration is only set once during the startup script on the
Application Policy Infrastructure Controller (APIC).
Additional References for Infrastructure VXLAN Tunnel Endpoint Pool
For more information on setting up the Application Policy Infrastructure Controller (APIC), see the Cisco
APIC Getting Started Guide at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Virtual Routing and Forwarding Instances
About Virtual Routing and Forwarding Instances
A virtual routing and forwarding (VRF) instance, also called a context, represents an application policy domain
and Layer 3 forwarding. A tenant can have one or more VRF instances, and a single VRF instance can have
one or more bridge domains. A VRF instance in Cisco Application Centric Infrastructure (ACI) is equivalent
to a VRF instance in a traditional network.
Guidelines and Limitations for Virtual Routing and Forwarding Instances
The following guidelines and limitations apply for virtual routing and forwarding (VRF) instances:
• Within a single VRF instance, IP addresses must be unique. Between different VRF instances, you can
have overlapping IP addresses.
• If shared services are used between VRF instances or tenants, make sure that there are no overlapping IP
addresses.
• Any VRF instances that are created in the common tenant are visible to all user-configured tenants.
• A VRF instance supports enforced or unenforced mode. By default, a VRF instance is in enforced mode,
which means that endpoint groups within the same VRF instance cannot communicate with each other
unless there is a contract in place (see the example following this list).
• Switching from enforced to unenforced mode (or vice versa) is disruptive.
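As an illustration of the enforced setting, the following sketch creates a VRF instance by posting XML to the
APIC REST API. The tenant and VRF names are hypothetical, and the login pattern is the same as in the
earlier example.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# fvCtx models a VRF instance; pcEnfPref="enforced" is the default and
# requires contracts between EPGs, while "unenforced" permits all traffic
# within the VRF instance.
payload = """
<fvTenant name="TenantA">
  <fvCtx name="ProdVRF" pcEnfPref="enforced"/>
</fvTenant>
"""
session.post(f"{APIC}/api/mo/uni.xml", data=payload)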
Additional References for Virtual Routing and Forwarding Instances
For more information about virtual routing and forwarding (VRF) instances, see the Cisco Application Centric
Infrastructure Fundamentals Guide at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Stretched Fabric
About Stretched Fabric
The stretched fabric allows users to manage multiple data center sites as a single fabric by using the same
Application Policy Infrastructure Controller (APIC) cluster. The stretched Cisco Application Centric
Infrastructure (ACI) fabric behaves the same way as a regular ACI fabric to support workload portability and
virtual machine mobility. The following figure illustrates the stretched fabric topology:
Figure 15: ACI Stretched Fabric Topology
Guidelines and Limitations for Stretched Fabric
The following guidelines and limitations apply for stretched fabric:
• Cisco Application Centric Infrastructure (ACI) stretched fabric site-to-site connectivity options include
dark fiber, dense wavelength division multiplexing (DWDM), and Ethernet over MPLS (EoMPLS)
pseudowire.
• The currently validated stretched fabric supports three sites.
• The maximum validated and supported distance between two sites is 800 km (500 miles), or a latency
within 10 msec RTT, to allow the Application Policy Infrastructure Controller (APIC) cluster to keep
control and data synchronized.
• With software release 1.2(2g), the ACI fabric supports up to six MP-BGP route reflectors. In a stretched
fabric implementation with three sites, place two route reflectors at each site to provide redundancy.
• Transit leaf refers to the leaf switches that provide connectivity among sites. There are no special
requirements and no additional configurations required for transit leaf switches.
• Transit leaf switches in all sites connect to both the local and remote spine switches.
• One or more transit leaf switches can be used. The number of transit leaf switches and links are dictated
by redundancy and bandwidth capacity decisions.
• In the event of a link failure between sites, bring the failed links back up promptly to avoid system
performance degradation and to prevent a split-fabric scenario from developing.
• Bridge domains/IP subnets can be stretched between sites.
Additional References for Stretched Fabric
For more information about stretched fabric, including failure scenarios and more operational guidelines, see
the ACI Stretched Fabric Design knowledge base article at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Access Policies
About Access Policies
The Fabric tab in the Cisco Application Policy Infrastructure Controller (APIC) GUI is used to configure
system-level features including, but not limited to, device discovery and inventory management, diagnostic
tools, domain configuration, and switch and port behavior. The fabric pane is split into three sections: Inventory,
Fabric Policies, and Access Policies. Understanding how fabric and access policies configure the fabric is
key to maintaining these policies, both for the internal connections between fabric leaf nodes and for
connections to external entities such as servers, networking equipment, and storage arrays.
This section lists guidelines and provides common configuration examples for key objects in the Fabric >
Access Policies view. The Access Policies view is split into folders separating out different types of policies
and objects that affect fabric behavior. For example, the Interface Policies folder is where port behavior is
configured such as port speed and the controls for specifying whether or not to run protocols, such as LACP,
on switch interfaces. Domains and AEPs are also created in the Access Policies view. The fabric access
policies provide the fabric with the base configuration of the access ports on the leaf switches. For more
information, see Additional References for Access Policies, on page 37.
Guidelines and Limitations for Access Policies
Cisco has established several best practices for fabric configuration. These are not requirements, and might
not work for all environments or applications, but might help simplify day-to-day operation of the Cisco
Application Centric Infrastructure (ACI) fabric.
This section contains basic guidelines for access policies.
General Guidelines
• Policies should be created once and reused when connecting new devices to the fabric. Maximizing the
reusability of policies and objects makes day-to-day operations faster and makes large-scale changes
easier.
Note
The usage of these policies can be viewed by clicking the Show Usage button in the
Application Policy Infrastructure Controller (APIC) GUI. Use this to determine what
objects are using a certain policy to understand the impact when making changes.
• Avoid using the Basic GUI or Quick Start wizards, as these may create many automatic configurations
that are not intuitive during troubleshooting.
Interface Policy Guidelines
• Avoid using the default interface policies where possible; create explicitly named policies instead.
• Reuse policies whenever possible. For example, create separate interface policies for LACP active,
LACP passive, and MAC pinning; for 1-GE and 10-GE port speeds; and for CDP and LLDP.
• When naming interface policies, use names that clearly describe the setting. For example, a policy that
enables LACP in active mode could be called "LACP-Active". There are many default policies out of
the box, but it can be hard to remember what all the defaults are, which is why policies should be clearly
named to avoid mistakes when adding new devices to the fabric. A configuration sketch follows this list.
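The following sketch creates a set of clearly named, reusable interface policies by posting XML to the APIC
REST API. The policy names are examples only; session setup follows the same pattern as the earlier sketches.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# One policy per setting, named after what it does, so the intent is
# obvious when the policy is attached to an interface policy group.
payload = """
<infraInfra>
  <cdpIfPol name="CDP-Enabled" adminSt="enabled"/>
  <cdpIfPol name="CDP-Disabled" adminSt="disabled"/>
  <lldpIfPol name="LLDP-Enabled" adminRxSt="enabled" adminTxSt="enabled"/>
  <lacpLagPol name="LACP-Active" mode="active"/>
</infraInfra>
"""
session.post(f"{APIC}/api/mo/uni/infra.xml", data=payload)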
Domain Guidelines
• Build one physical domain per tenant for bare metal servers or servers without hypervisor integration
requiring similar treatment.
• Build one external routed/bridged domain per tenant for external connectivity.
• For VMM domains, if both DVS and AVS are in use, create a separate VMM domain to support each
environment.
• For large deployments where domains (physical, VMM, and so on) need to be leveraged across multiple
tenants, a single physical domain or VMM domain can be created and associated with all leaf ports
where services are connected.
AEP Guidelines
• Multiple domains can be associated with a single AEP for simplicity. There are some cases where multiple
AEPs may need to be configured, such as to enable the infrastructure VLAN, to accommodate overlapping
VLAN pools, or to limit the scope of VLANs across the fabric.
• Another scenario in which multiple AEPs should be utilized is when making an association to VMM
domains. The AEP also contains relationships to the vSwitch policies, which are then pushed to the
vCenter VDS or AVS. If there are multiple VMM domains deployed with differing vSwitch policies,
multiple AEPs should be created to account for the various potential vSwitch policy combinations.
• When utilizing AVS for VMM, Hyper-V, SCVMM, or OpenStack OpFlex integration, the AEP is where
the option to enable the infrastructure VLAN is selected. In most cases, this VLAN should not be extended
outside of the fabric except when performing this type of integration. For that purpose, it is beneficial
to create an AEP specific to the AVS VMM domain, if one is used. A configuration sketch follows this list.
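The following sketch creates an AEP and associates an existing physical domain with it over the REST API.
The AEP and domain names are hypothetical, and the domain (uni/phys-...) is assumed to exist already. To
the best of my understanding, enabling the infrastructure VLAN adds an infraProvAcc child under the AEP;
verify this on your APIC version before automating it.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# infraAttEntityP models the AEP; infraRsDomP associates a domain with it.
# Multiple infraRsDomP children can be added to associate several domains.
payload = """
<infraInfra>
  <infraAttEntityP name="AEP-BareMetal">
    <infraRsDomP tDn="uni/phys-PhysDom-BareMetal"/>
  </infraAttEntityP>
</infraInfra>
"""
session.post(f"{APIC}/api/mo/uni/infra.xml", data=payload)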
Configuration Examples for Access Policies
This section describes two common methods for deploying your leaf switches, explains how to create and
associate switch and interface profiles, and shows how to create a port channel policy and a vPC domain.
Creating Access Policies for Switches
The following describes two common methods for deploying your leaf switches:
Cisco Application Centric Infrastructure Best Practices Guide
33
ACI Constructs Design
Configuration Examples for Access Policies
• Create a switch profile for each leaf switch individually regardless of vPC definition existence.
• Create a switch profile for each leaf switch individually. Additionally, create a switch profile for each
vPC pair (if using vPC).
For both methods, you also create an interface profile for each switch profile. Each interface profile groups
all the interface selectors associated with that specific switch. When ports are added or deleted, changes are
made only under the interface profiles, because those interface profiles are already associated with the
corresponding switch profiles.
Consider the following vPC topology as an example:
• When a switch profile is created for each leaf switch individually regardless of vPC definitions:
• Switch profiles example: Leaf_201, Leaf_202
• Interface profiles example: Leaf_201_IPR, Leaf_202_IPR
In the example above, all ports (vPC or non-vPC) are added in both Leaf_201_IPR and Leaf_202_IPR
respectively.
The benefits of creating a switch profile for each leaf individually regardless of vPC definitions are that
there are fewer switch and interface profiles to manage, it is more flexible when ports need to be changed,
and it supports asymmetric connections for host-facing ports. However, the interface policy group needs
to be configured consistently on both interface selectors.
• When a switch profile is created for each leaf switch individually and also for each vPC pair:
• Switch profiles example: Leaf_201, Leaf_202, Leaf_201_202
• Interface profiles example: Leaf_201_IPR, Leaf_202_IPR, Leaf_201_202_IPR
In the example above, vPC related ports are only added in Leaf_201_202_IPR. Non-vPC related ports
are added to either Leaf_201_IPR or Leaf_202_IPR respectively.
The benefit of creating a switch profile for each leaf and also for each vPC pair is that the configuration
is simpler in a large-scale environment with a symmetric, replicated setup. However, it is difficult to
repurpose ports that are already in use, because changing those interfaces impacts both switches.
This section explains how to create and associate switch and interface profiles.
Creating a Switch Profile
This section explains how to create a switch profile (leaf or spine).
Before You Begin
You must have a configured leaf or spine switch.
Procedure
Step 1 From the Fabric tab, click Access Policies.
Step 2 In the Navigation pane, choose Switch Policies > Profiles.
The Leaf Profile and Spine Profile options appear in the Navigation pane.
Step 3 Choose Leaf Profile or Spine Profile.
Step 4 In the Work pane, click Actions and choose the option to create a profile.
A dialog appears. When creating a leaf profile, the Create Leaf Profile dialog appears. When creating a spine
profile, the Create Spine Profile dialog appears.
Step 5 Enter the appropriate values in the fields of the dialog.
Note For an explanation of a field, click the 'i' icon on the top-right corner of the dialog box to display the
help file.
Step 6 When done, click Finish.
Creating an Interface Profile
This section explains how to create an interface profile (leaf or spine).
Before You Begin
You must have a configured leaf or spine switch.
Procedure
Step 1 From the Fabric tab, click Access Policies.
Step 2 In the Navigation pane, choose Interface Policies > Profiles.
The Leaf Profile and Spine Profile options appear in the Navigation pane.
Step 3 Choose Leaf Profile or Spine Profile.
Step 4 In the Work pane, click Actions and choose the option to create a profile.
A dialog appears. When creating a leaf profile, the Create Leaf Interface Profile dialog appears. When
creating a spine profile, the Create Spine Interface Profile dialog appears.
Step 5 Enter the appropriate values in the fields of the dialog.
Note For an explanation of a field, click the 'i' icon on the top-right corner of the dialog box to display the
help file.
Step 6 When done, click Submit.
Associating Switch and Interface Profiles
This section explains how to associate switch profiles with interface profiles.
Before You Begin
• You have created a switch (leaf or spine) profile.
• You have created an interface (leaf or spine) profile.
Procedure
Step 1 From the Fabric tab, click Access Policies.
Step 2 In the Navigation pane, click Switch Policies > Profiles.
The Leaf Profile and Spine Profile options appear in the Navigation pane.
Step 3 Click the Leaf Profile or Spine Profile drop-down arrow.
Your profile icons appear in the drop-down list in the Navigation pane.
Step 4 In the Navigation pane, click a profile icon to choose a switch profile.
Your profile details appear in the Work pane.
Step 5 From the Associated Interface Selector Profiles table in the Work pane, click the + (plus) symbol.
The Create Interface Profile dialog appears.
Step 6 Click the Interface Select Profile drop-down arrow and choose an interface profile to associate with
your switch profile.
Step 7 When done, click Submit.
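Using the Leaf_201 names from the example above, the following sketch builds the equivalent objects over
the REST API: an interface profile with a port selector, and a switch profile that selects node 201 and references
the interface profile. The access port policy group (AccessPG) is a hypothetical, pre-existing policy group.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

payload = """
<infraInfra>
  <!-- Interface profile: selects eth1/10 and binds it to a policy group -->
  <infraAccPortP name="Leaf_201_IPR">
    <infraHPortS name="Server_Ports" type="range">
      <infraPortBlk name="blk1" fromCard="1" toCard="1" fromPort="10" toPort="10"/>
      <infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-AccessPG"/>
    </infraHPortS>
  </infraAccPortP>
  <!-- Switch profile: selects node 201 and references the interface profile -->
  <infraNodeP name="Leaf_201">
    <infraLeafS name="Leaf_201_Sel" type="range">
      <infraNodeBlk name="blk1" from_="201" to_="201"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-Leaf_201_IPR"/>
  </infraNodeP>
</infraInfra>
"""
session.post(f"{APIC}/api/mo/uni/infra.xml", data=payload)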
Creating a Port Channel Policy
This section explains how to create a port channel policy.
Procedure
Step 1 From the Fabric tab, click Access Policies.
Step 2 In the Navigation pane, choose Interface Policies > Policies > Port Channel.
Step 3 From the Work pane, click Actions > Create Port Channel Policy.
The Specify Port Channel Policy dialog appears.
Step 4 Enter the appropriate values in the Specify Port Channel Policy dialog fields.
Note
• For an explanation of a field, click the 'i' icon on the top-right corner of the dialog box to display
the help file.
• The LACP Active option for the Mode field sets a port to the suspended state if it does not
receive an LACP PDU from the peer. Although this feature helps prevent loops created by
misconfigurations, in some cases it can cause servers to fail to boot because they require LACP
to logically bring up the port. This is typically seen with PXE boot. As a workaround, uncheck
the Suspend Individual Port check box in the Control options to put a port into an individual
state.
Step 5 When finished, click Submit.
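The workaround in the note above can also be expressed in policy form. To the best of my understanding,
the Suspend Individual Port check box maps to the susp-individual token of the lacpLagPol ctrl attribute, so
omitting it from the control list leaves ports in the individual state; verify the token names on your APIC
version before using this sketch.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# LACP active, with "susp-individual" deliberately left out of ctrl so that
# ports that receive no LACP PDUs (for example, during PXE boot) stay up
# as individual ports instead of being suspended.
payload = """
<infraInfra>
  <lacpLagPol name="LACP-Active-NoSuspend" mode="active"
              ctrl="fast-sel-hot-stdby,graceful-conv"/>
</infraInfra>
"""
session.post(f"{APIC}/api/mo/uni/infra.xml", data=payload)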
Creating a vPC Domain
For server active/active deployments, vPC can be used to provide larger uplink bandwidth and faster
convergence upon link or switch failures.
Unlike traditional vPC design, there is no requirement for setting up either a vPC peer-link or vPC
peer-keepalive in the Cisco Application Centric Infrastructure (ACI) fabric. The fabric itself serves as the
peer-link. The rich interconnectivity between spine switches and leaf switches makes it very unlikely that all
the redundant paths between vPC peers fail at the same time. Hence, if the peer switch becomes unreachable,
it is assumed to have crashed. The slave switch does not bring down vPC links.
For more information, see the Operating Cisco Application Centric Infrastructure document at the following
URL: http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html.
Procedure
Step 1 From the Fabric tab, click Access Policies.
Step 2 In the Navigation pane, click Switch Policies > Policies > Virtual Port Channel default.
The Virtual Port Channel Security Policy - Virtual Port Channel default window appears.
Step 3 Enter the appropriate values in the fields of the Virtual Port Channel Security Policy - Virtual Port
Channel default window.
Note For an explanation of a field, click the 'i' icon on the top-right corner of the dialog box to display the
help file.
Step 4 When finished, click Submit.
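A vPC pair is defined by an explicit protection group under the vPC default policy. The following sketch
pairs leaf nodes 201 and 202 with a hypothetical vPC domain ID of 10; the node IDs and names are examples
only.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# fabricExplicitGEp defines the vPC pair; the two fabricNodePEp children
# are the member leaf nodes. No peer-link or peer-keepalive is configured,
# because the fabric itself serves as the peer-link.
payload = """
<fabricProtPol>
  <fabricExplicitGEp name="Leaf_201_202_VPC" id="10">
    <fabricNodePEp id="201"/>
    <fabricNodePEp id="202"/>
  </fabricExplicitGEp>
</fabricProtPol>
"""
session.post(f"{APIC}/api/mo/uni/fabric/protpol.xml", data=payload)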
Additional References for Access Policies
For more information, see the Operating Cisco Application Centric Infrastructure document at the following
URL: http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html.
Mis-Cabling Protocol
About the Mis-Cabling Protocol
Unlike traditional networks, the Cisco Application Centric Infrastructure (ACI) fabric does not participate in
the Spanning Tree Protocol (STP) and does not generate bridge protocol data units (BPDUs). BPDUs are
instead transparently forwarded through the fabric between ports mapped to the same endpoint group. Therefore,
ACI relies to a certain degree on the loop prevention capabilities of external devices.
Some scenarios, such as the accidental cabling of two leaf ports together, are handled directly using LLDP in
the fabric. However, there are some situations where an additional level of protection is necessary; in those
cases, enabling the Mis-Cabling Protocol (MCP) can help.
Consider the example in the following figure:
Figure 16: VLAN Misconfiguration
In this example, two endpoint groups are configured on the ACI fabric, both associated with the same bridge
domain. An external switch has one port connected to each of the endpoint groups. In this example, a
misconfiguration has occurred whereby the external switch is allowing VLAN 10 on port 1/20; however, the
endpoint group associated with port 1/10 on leaf 102 is configured for VLAN 11. In this case, port 1/10 on
leaf 102 will not be able to receive BPDUs for VLAN 10. As a result, the spanning tree cannot detect the loop
and all ports will be forwarding.
MCP, if enabled, provides additional protection against this type of misconfiguration. MCP is a lightweight
protocol designed to protect against loops that cannot be discovered by either STP or LLDP. You should
enable MCP on all ports facing external switches or similar devices.
Configuration Examples for the Mis-Cabling Protocol
To enable the Mis-Cabling Protocol (MCP) in the fabric, you must enable MCP globally through the global
policies and also on individual ports or port channels through the interface policy group configuration.
Procedure
Step 1 On the menu bar, choose Fabric > Access Policies.
Step 2 In the Navigation pane, choose Global Policies > MCP Instance Policy default.
Step 3 In the Work pane, for the Admin State buttons, choose Enabled.
Step 4 For the remaining properties, change the values as desired.
• Key and Confirm Key—A key that uniquely identifies MCP packets within the fabric.
• Initial Delay (sec)—The delay time in seconds before MCP begins taking action.
• Loop Detect Multiplication Factor—The number of continuous packets that a port must receive
before declaring a loop.
Step 5 Enable MCP at the interface level, which is done when you create an access port policy group. On the
menu bar, choose Fabric > Access Policies.
Step 6 In the Navigation pane, choose Interface Policies > Policy Groups.
Step 7 In the Work pane, choose Actions > Create Access Policy Group.
Step 8 In the Create Access Policy Group dialog box, in the MCP Policy drop-down list, choose MCP-Enabled.
Step 9 Fill out the remaining fields as necessary.
Step 10 Click Submit.
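Both halves of the MCP configuration can be scripted. In the following sketch, the global policy is the
mcpInstPol named default and the interface-level policy is an mcpIfPol named MCP-Enabled, which then
appears in the MCP Policy drop-down list. The key value is a placeholder, and the class names reflect my
understanding of the model; verify them (for example, with moquery) on your APIC version.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Enable MCP globally (mcpInstPol "default") and create a reusable
# interface-level policy (mcpIfPol) for access port policy groups.
payload = """
<infraInfra>
  <mcpInstPol name="default" adminSt="enabled" key="ExampleKey123"/>
  <mcpIfPol name="MCP-Enabled" adminSt="enabled"/>
</infraInfra>
"""
session.post(f"{APIC}/api/mo/uni/infra.xml", data=payload)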
Additional References for the Mis-Cabling Protocol
For more information about the Mis-Cabling Protocol (MCP), see the section about loop detection in the Cisco
Application Centric Infrastructure Fundamentals Guide at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/
application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Port Tracking
About Port Tracking
Port tracking policies are used to monitor the status of links between leaf switches and spine switches. When
an enabled port tracking policy is triggered, the leaf switches take down all access interfaces on the switch
that have endpoint groups deployed on them.
Port tracking addresses a scenario in which a leaf node might lose connectivity to the spine node and where
hosts connected to the affected leaf node in an active/standby manner might not be aware of the failure for a
period of time. The following figure illustrates this scenario:
The port tracking feature detects a loss of fabric connectivity on a leaf node and brings down the host facing
ports. This allows the host to fail over to the second link, as shown in the following figure:
Note
The preferred host connectivity to the Cisco Application Centric Infrastructure (ACI) fabric is vPC wherever
possible. Port tracking is useful in situations where hosts are connected using active/standby NIC teaming.
Guidelines and Limitations for Port Tracking
• The preferred host connectivity to the ACI fabric is vPC wherever possible.
• Port tracking is useful in situations where hosts are connected using active/standby NIC teaming.
Recommended Configuration Procedure for Port Tracking
To enable and set global port tracking for the ACI fabric, complete the following steps.
Procedure
Step 1 In the Advanced GUI, navigate to the Port Tracking window: click Fabric > Access Policies > Global
Policies > Port Tracking.
Step 2 In the Port Tracking window, locate the Port Tracking state field and set it to on.
Step 3 Set the Delay restore timer parameter.
This timer controls the number of seconds that the fabric waits before bringing host ports up after the
leaf-to-spine links re-converge.
Step 4 Set the Number of Active Spine Links parameter.
This value specifies how low the number of active links can drop before port tracking is triggered. The value
'0' configures port tracking to be triggered after the number of active links to the spine drops to zero.
Step 5 Click Submit.
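After submitting, the settings can be read back over the REST API. The class name used below,
infraPortTrackPol, reflects my understanding of how this policy is modeled; confirm it on your APIC (for
example, with moquery) before relying on this sketch.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

resp = session.get(f"{APIC}/api/node/class/infraPortTrackPol.json")
for obj in resp.json()["imdata"]:
    attrs = obj["infraPortTrackPol"]["attributes"]
    # adminSt, delay, and minlinks correspond to the GUI fields above
    print(attrs.get("adminSt"), attrs.get("delay"), attrs.get("minlinks"))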
VLAN Pools
About VLAN Pools
Within Cisco Application Centric Infrastructure (ACI), there is the concept of access policies, which are a
group of objects that define how traffic can get access into the fabric. Access policy definition matters when
an EPG is created for use. For example, an EPG that has a static path (for example, node 101, interface
eth1/10, trunked with VLAN 10) but no corresponding access policies is essentially being told to use a set
of policies to which it does not have access. At this point, you will see faults indicating path issues. The
access policies and the subsequent domain-to-EPG association tell the EPG that it has access to a subset of
nodes, interfaces, and VLANs that it can then use in path definitions.
VLAN pools are just one piece of the complete access policy definition. A VLAN pool is a container composed
of encap blocks, which contain the actual VLAN definitions.
Prerequisites for VLAN Pools
• A Cisco ACI fabric that has been initialized.
• An understanding of access policies and their purpose. For information on access policies, see About
Access Policies, on page 32.
Guidelines and Limitations for VLAN Pools
• VLAN pools containing overlapping encap block definitions should not be associated with the same
AEP (and subsequently the same leaf nodes). Overlap can cause issues with BPDU forwarding through
the fabric if the domains associated with an EPG have overlapping VLAN block definitions.
• VLAN pools with an allocation mode of Dynamic are typically used for VMM integration deployments.
VMM integration generally does not require explicit VLAN assignment, so a dynamic pool allows the
system to pull free resources as needed.
• VLAN pools with an allocation mode of Static are typical for the majority of other deployment scenarios,
including static paths and L2Out and L3Out definitions (see the example following this list).
• A dynamic VLAN pool can have a static encap block defined within it. This is generally only done for
the specific case of utilizing the "pre-provision" resolution immediacy.
• A static VLAN pool cannot have a dynamic encap block. This will be rejected by the Application Policy
Infrastructure Controller (APIC), as there are no features that utilize this configuration.
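The following sketch creates one static and one dynamic VLAN pool, each with a single encap block, by
posting XML to the APIC REST API. The pool names and VLAN ranges are hypothetical.
Example:
import requests

APIC = "https://apic.example.com"  # placeholder
session = requests.Session()
session.verify = False  # lab illustration only
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# fvnsVlanInstP models the VLAN pool; fvnsEncapBlk holds the VLAN range.
payload = """
<infraInfra>
  <fvnsVlanInstP name="StaticPool" allocMode="static">
    <fvnsEncapBlk from="vlan-100" to="vlan-199"/>
  </fvnsVlanInstP>
  <fvnsVlanInstP name="VMM_Pool" allocMode="dynamic">
    <fvnsEncapBlk from="vlan-1000" to="vlan-1099"/>
  </fvnsVlanInstP>
</infraInfra>
"""
session.post(f"{APIC}/api/mo/uni/infra.xml", data=payload)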
Recommended Configuration Procedures for VLAN Pools
See Guidelines and Limitations for VLAN Pools.
Configuration Examples for VLAN Pools
For configuration examples of VLAN pools, see the Creating Domains, Attach Entity Profiles, and VLANs
to Deploy an EPG on a Specific Port knowledge base article at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Additional References for VLAN Pools
For additional information on access policies, including VLAN pools, see the Cisco Application Centric
Infrastructure Fundamentals document at the following URL:
http://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
Managed Object Naming Convention
About the Managed Object Naming Convention
Cisco Application Centric Infrastructure (ACI) is based upon the managed object (MO) model, where each
object requires a name. A clear and consistent naming convention is therefore essential to aid manageability
and troubleshooting.
Any change in naming convention for any MO, such as profiles or policies, requires disruption, because an
object's name is embedded in its distinguished name (DN). It is highly recommended to plan ahead and define
the policy naming convention before deploying the ACI fabric to ensure that all policies are named consistently.
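As an illustration (the policy name here is hypothetical, and the exact relative-name prefix varies by class),
a port channel policy named LACP-Active appears in the management information tree with its name inside
its DN:
Example:
Apic1# moquery -c lacpLagPol
...
dn : uni/infra/lacplagp-LACP-Active
Renaming such a policy therefore means deleting and re-creating the object, and updating every relationship
that references it.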