Nexus 1000v
Brandon Morgan
Brandon -at-
January 2010
Presentation Overview
What this assumes
– You already know something about VMware; this is not a guide to explain it.
– You have some level of network experience and understand VLANs, trunks, uplinks, etc.
What this is
– An overview and collection of information to get you started
– References to published documentation; not everything in this document is
100% mine, as some of the text and pictures are from Cisco or VMware documentation.
See the Appendix for links to the documentation
What this is NOT
– All of the answers
– Certified by Cisco
– Certified by VMware
Product Overview
Cisco Nexus™ 1000V Series Switches are virtual machine access switches: an
intelligent software switch implementation for VMware vSphere environments,
running the Cisco® NX-OS operating system.
Operating inside the VMware ESX hypervisor, the Cisco Nexus 1000V Series
supports Cisco VN-Link server virtualization technology to provide:
Policy-based virtual machine connectivity
Mobile virtual machine security and network policy
Non-disruptive operational model for your server virtualization and networking
Bottom Line: Virtualizing the switching environment with the Nexus 1000V
distributed switch model allows you to avoid issues associated with physical
NIC types in different server platforms.
Cisco Nexus™ 1000V consists of two parts
• VSM – Virtual Supervisor Module
– VSM controls multiple VEMs as one logical modular switch.
– Configuration is performed through the VSM and is automatically propagated
to the VEMs.
– One VSM can manage up to 64 VEMs
– Cisco recommends deploying two VSMs as a high-availability pair, acting like the
two supervisors (SUPs) in a modular chassis
• VEM – Virtual Ethernet Module
– VEM runs as part of the VMware ESX or ESXi kernel and replaces the VMware
virtual switch (vSwitch).
Distributed Switch Models
Ethernet/NIC View
System Requirements
VMware vSphere 4.0 or later with vNetwork Distributed Switch
– Currently supported only with the vSphere Enterprise Plus license
Cisco Nexus 1000V Series VSM:
– VSM can be deployed as a virtual machine on VMware ESX or ESXi 3.5U2 or higher or
ESX or ESXi 4.0
– Hard disk: 3 GB
– RAM: 2 GB
– 1 virtual CPU at 1.5 GHz
Cisco Nexus 1000V Series VEM
VMware ESX or ESXi 4.0
Hard disk space: 6.5 MB
RAM: 150 MB
Number of VLANs connecting VSM and VEM
Minimum: 1
Recommended: 3
Server on the VMware Hardware Compatibility List
Compatible with any upstream physical switches, including all Cisco Nexus and Cisco
Catalyst® switches as well as Ethernet switches from other vendors
The Cisco Nexus 1000V Series is licensed per physical CPU (socket) on the
server where the VEM runs, with up to 12 cores counting as one CPU. For
example, a two-socket server needs two CPU licenses.
Part Number
Nexus 1000V VSM on Physical Media
Nexus 1000V Paper CPU License Qty 1-Pack
Nexus 1000V Paper CPU License Qty 4-Pack
Nexus 1000V Paper CPU License Qty 16-Pack
Nexus 1000V Paper CPU License Qty 32-Pack
Nexus 1000V eDelivery CPU License Qty 1-Pack
Nexus 1000V eDelivery CPU License Qty 4-Pack
Nexus 1000V eDelivery CPU License Qty 16-Pack
Nexus 1000V eDelivery CPU License Qty 32-Pack
• Read the Step by step guide before you start setting up your 1000v
Cisco recommended VLANs
• Management
– Managing the VSM
• Packet
– Used for protocols such as CDP, LACP
• Control
– Communication between VSM & VEM
– Netflow exports from VEM to VSM then exported to Netflow collector
– VEM notification to VSM for port info
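The three VLANs above are tied together in the VSM's SVS domain configuration. The sketch below is a hedged example; the domain ID and VLAN IDs are made-up placeholders, not recommendations:

```
! Hedged sketch of the VSM domain configuration (NX-OS CLI);
! domain id 100 and VLANs 260/261 are hypothetical examples.
svs-domain
  domain id 100
  control vlan 260
  packet vlan 261
  svs mode L2
! The management VLAN is simply the VLAN carrying the VSM's mgmt0
! interface; it is configured on mgmt0, not in the svs-domain block.
```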
Nuts and Bolts
Have more than one VMware service console configured (it is interesting when you disconnect yourself…)
Each VSM and VEM acts like a line card in a modular switch
– Slots 1 & 2 are the VSMs
– Slots 3 through 64 are VEMs (slots are assigned in the order each server/VEM is added to the switch)
– The VSM keeps the VEM slot order by VEM UUID
– Virtual Ethernet ports are setup for the switch ports (See Ethernet/NIC View)
– Port profiles are configuration information for switch ports that the VM guests connect to
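To make the port-profile idea concrete, here is a hedged sketch of a vEthernet port profile for VM guests; the profile name and VLAN are hypothetical, and on early releases the `type vethernet` keyword may not be present:

```
port-profile type vethernet VM-Data   ! name is a made-up example
  vmware port-group                   ! exports a port group to vCenter
  switchport mode access
  switchport access vlan 100          ! hypothetical VLAN
  no shutdown
  state enabled                       ! activate and push to vCenter
```

Any VM whose vNIC is attached to the resulting port group inherits this configuration, and the policy follows the VM during VMotion.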
VMware note on scaling:
• Scaling maximums should be considered when migrating to a vDS. The following
virtual network configuration maximums are supported in the first release of vSphere 4:
– 64 ESX/ESXi Hosts per vDS
– 16 Distributed Switches (vDS or Nexus 1000V) per vCenter Server
– 512 Distributed Virtual Port Groups per vCenter Server
– 6000 Distributed Virtual Switch Ports per vCenter
– 4096 total vSS and vDS virtual switch ports per host
– Note: These configuration maximums are subject to change.
VSM and VEM need to be Layer 2 connected
After you set up the VSM, but before adding a VEM to it, you need to configure at the very least the uplink port
profile in the Nexus 1000V
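As a hedged sketch, an uplink port profile might look like the following; the VLAN IDs are hypothetical, and depending on the release the profile is marked with `type ethernet` or `capability uplink`. The `system vlan` statement keeps the control and packet VLANs forwarding before the VEM is fully programmed by the VSM:

```
port-profile type ethernet system-uplink     ! name is a made-up example
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100,260,261  ! data + control/packet (hypothetical IDs)
  system vlan 260,261                        ! control and packet VLANs
  no shutdown
  state enabled
```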
Connecting the initial VSM and VEM together
Decide on and configure the VSM VLANs
– Management, Control, Packet
Associate the VSM with the VMware vSwitch on a second uplink, then connect the VEM to the VSM
Double-check that you have a trunk configured on the port connecting you to the upstream switch
– The above VLANs are allowed on the trunk
License your installs
ESX Host
– esxcfg-vswitch
– esxcfg-vswif
– esxcfg-vmknic
ESX Host with VEM installed (There may be a way to run these from the VSM with the module command,
but I have yet to use it)
– vemcmd show port
– vemcmd show trunk
Nexus VSM
– show module
– show module vem mapping
– show port-profile usage
– show interface brief
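An annotated sketch of how these commands might be run while troubleshooting (outputs omitted; the `-l` list flags are the common forms):

```
# On the ESX host (service console):
esxcfg-vswitch -l    # list vSwitches/DVSes and their uplinks
esxcfg-vswif -l      # list service console interfaces
esxcfg-vmknic -l     # list VMkernel NICs

# On a host with the VEM installed:
vemcmd show port     # per-port state as the VEM sees it
vemcmd show trunk    # trunk/VLAN state on VEM uplinks

# On the VSM (NX-OS):
show module                # VSM/VEM slots and status
show module vem mapping    # VEM slot to host UUID mapping
show port-profile usage    # which interfaces use each profile
show interface brief
```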
Not covered in this presentation, but a KB article came out around the time of this presentation.
Download PDF