Cisco Nexus 1000v, what is the best Layer 3 control option?

Cisco Nexus 1000v is very popular in VMware environments, provided that the physical network consists of Cisco devices. It's not a requirement, but it definitely makes more sense. One big advantage of the Nexus 1000v is that it allows the network team to manage the virtual access network with the same tools they already use to manage the physical network. So if you don't have existing Cisco NX-OS based switches, this advantage may turn into a constraint, as it requires learning a new technology. It also has other significant advantages in terms of security and network services chaining with vPath.

A quick reminder on the Nexus 1000v (for VMware): it's a distributed virtual switch that is composed of the two following modules:

  • The VEM, or Virtual Ethernet Module, which has to be installed on every ESXi host. You can think of it as a remote line card of a modular switch.
  • The VSM, or Virtual Supervisor Module, which is shipped as a virtual machine. If we use the same analogy, it would be the supervisor of your modular switch. It should be deployed in a redundant fashion (as your supervisors should be). The VSM doesn't play any role in the data plane, but it has privileged communication channels to talk to the VEMs.

These channels are comprised of the following interfaces:

  • The Control interface is used for communication between VSM and VEM within a specific SVS (Server Virtualization Switch) domain. It allows the VSM to push configuration to the VEMs, but also allows the VEMs to send notifications to the supervisor, such as ports being attached to or detached from the distributed switch. It is also used for high-availability purposes between the primary and the secondary VSM.
  • The Packet interface is used to carry packets that need to be processed by the VSM itself. This includes CDP and IGMP.

Let's focus on design choices when deploying the Nexus 1000v. We have the following possibilities:

  • Use Layer 2 mode, where control and packet traffic are carried over two distinct VLANs. Interfaces don't have any IP address assigned.
  • Use Layer 3 mode, where IP addresses are effectively used for communications between VSM and VEM.

Cisco currently recommends the second option as it gives you several advantages over the old Layer 2 deployment model:

  • Troubleshooting the control plane is a lot easier as you can use IP-based tools (e.g. ping, traceroute).
  • You can manage VEMs that reside in another VLAN or subnet.
  • Better integration with other products using Layer 3.
  • You need a port-profile with Layer 3 capabilities for ERSPAN anyway, so you could re-use the existing one. (Although you may want to dedicate another port-profile to monitor sessions.)

In the Nexus 1000v System Management Guide, we can read this sentence:

"With Layer 3 control, a VSM can be Layer 3 accessible and can control hosts that reside in a separate Layer 2 network."

In other words, VSM and VEM can be in different subnets. But what happens if they are in the same subnet? It's not explicitly mentioned. I do like Cisco documentation when it's as clear as London at 7 am.

Layer 3 means IP, right? So theoretically I can use IPs that are in the same subnet or VLAN. With Cisco you're never sure until you ask the support team, and there is a great resource when you're a partner: pre-sales support. I have to say they're quite efficient, and you can definitely expect an answer within 4 hours on average, which is not bad at all!

The engineer confirmed it was OK, and I'm actually convinced that this deployment model greatly simplifies the design. Indeed, why add a design constraint if it is not mandatory? By constraint I also mean risk, as introducing a new device between the VSM and VEM adds a component that may fail. When the VSM and VEM are in different subnets, this device may be a router or a firewall.

You could deploy them in a redundant model for sure, but what about operations, human errors, upgrades, etc.? If something bad happens, it will not only have a negative impact on your existing routed traffic, but also on your LAN switched environment, which is traditionally not affected by those problems. In addition, putting the VEM and VSM in different subnets adds complexity, as you would need to add a static route on each ESXi host. This requires specific documentation or host profile configuration. It is a configuration step that can easily be forgotten, because the GUI only lets you set one default gateway for all vmkernel interfaces, so you have to add the route through the CLI. If you're running vSphere 5.1 or above this is fine, but for previous releases you have to add the configuration in /etc/rc.local, as the route is not persistent across reboots and host profiles don't save or apply these static routes (see VMware KB 2001426).
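
For reference, this is what adding such a static route from the ESXi shell could look like. This is only a sketch: the destination subnet (where the VSM lives) and the gateway are hypothetical values that you would replace with your own.

# vSphere 5.1 and above (the route persists across reboots)
~ # esxcli network ip route ipv4 add --network 10.10.20.0/24 --gateway 10.10.10.1

# Earlier releases: not persistent, typically appended to /etc/rc.local
~ # esxcfg-route -a 10.10.20.0/24 10.10.10.1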

The alternative idea is to use the ESXi management vmkernel to communicate with the VSM mgmt0 interface, with both interfaces configured with IPs in the same subnet. If you have any security concern, then telling you that the traffic is encrypted with a 128-bit algorithm should be enough (hopefully).

Let's have a look at this design:

[Diagram: Nexus 1000v Layer 3 control design]

This example comprises a vSphere cluster with two hosts where the Nexus 1000v VEM module has been installed. In addition, a pair of VSMs has been deployed for redundancy (active/standby). We are leveraging the ESXi management vmkernel to talk to the VSM over the control plane. Similarly, we use the mgmt0 interface on the VSM as the control interface.
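
Before adding hosts, it's worth checking that the ESXi management vmkernel can actually reach the VSM mgmt0 interface. A quick test from the ESXi shell could look like the command below; the vmkernel name and the VSM address are assumptions for the example, and the -I option (which forces the source interface) is only available on vSphere 5.1 and later:

# ping the VSM mgmt0 address, sourced from the management vmkernel
~ # vmkping -I vmk0 10.10.10.5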

One thing to notice here is the possibility to leverage either the mgmt0 or the control0 interface on the VSM as the L3 control interface. Although used for the same purpose in this particular scenario, they have different characteristics. The VSM is a virtual appliance configured with 3 virtual network interfaces, which are respectively the control, the management and the packet interfaces:

  • In Layer 3 mode with the control0 interface as the L3 interface, control and packet traffic is handled by the first vNIC.
  • In Layer 3 mode with the mgmt0 interface as the L3 interface, control and packet traffic is handled by the second vNIC.
  • When Layer 3 mode is used, the third interface (packet) is not used, but don't disconnect the vNIC!

Another key difference between the mgmt0 and control0 interfaces is VRF membership. The control0 interface is part of the default VRF, while the mgmt0 interface belongs to the management VRF.
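
This matters when troubleshooting from the VSM: with mgmt0 as the L3 interface, IP tools have to run in the management VRF. As a quick illustration (the address below is just a hypothetical VEM vmkernel IP):

N1KV# show vrf
N1KV# ping 10.10.10.11 vrf management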

In our scenario, we use mgmt0 as the dedicated L3 interface on the VSM. Control, management and packet traffic are all handled by the mgmt0 interface, which is the second vNIC of the VSM virtual appliance. The manual implementation steps would be as follows (alternatively, you can use the Cisco GUI installer):

  1. Deploy the VSM pair in HA mode with a basic configuration (Layer 2 mode).
  2. Create a Layer 3 SVS domain and set the mgmt0 interface as the L3 control interface.
  3. Create an SVS connection and link the VSM with the vCenter and the appropriate datacenter inventory object.
  4. Create the required port-profiles (ethernet and vethernet) and add Layer 3 capability to your ESXi management vmkernel port-profile.
  5. Install the VEM on ESXi hosts.
  6. Add ESXi hosts to the Nexus 1000v distributed switch. Once the management vmkernel has been migrated to the Nexus 1000v, the VEM will appear as a remote module of the Nexus 1000v.

The following commands should be used:

Create the Server Virtualization Switch (SVS) domain in Layer 3 mode:

N1KV# conf t
N1KV(config)# svs-domain
N1KV(config-svs-domain)# domain id 10
N1KV(config-svs-domain)# no packet vlan
N1KV(config-svs-domain)# no control vlan
N1KV(config-svs-domain)# svs mode L3 interface mgmt0
N1KV(config-svs-domain)# show svs domain
SVS domain config:
 Domain id:    10
  Control vlan:  NA
  Packet vlan:   NA
  L2/L3 Control mode: L3
  L3 control interface: mgmt0
  Status: Config push to VC successful
  Control type multicast: No 

Create the SVS connection:

N1KV(config)# svs connection vcenter
N1KV(config-svs-conn)# protocol vmware-vim
N1KV(config-svs-conn)# remote ip address 10.0.0.50
N1KV(config-svs-conn)# vmware dvs datacenter-name nvelab
N1KV(config-svs-conn)# connect
N1KV(config-svs-conn)# show svs connections
connection vcenter:
    ip address: 10.0.0.50
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: nvelab
    admin: n1kUser(user)
    max-ports: 8192
    DVS uuid: ab 89 14 50 b5 19 95 ec-25 37 69 5c 9e 40 1e 3a
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 5.5.0 build-1312298
    vc-uuid: 025C35D3-DC84-4F80-A789-49D259FC69C8

Create the ESXi management vmkernel port-profile with Layer 3 capability:

N1KV# conf t
N1KV(config)# port-profile type vethernet mgmt-l3
N1KV(config-port-prof)# capability l3control
Warning: Port-profile 'mgmt-l3' is configured with 'capability l3control'. Also configure the corresponding access vlan as a system vlan in:
    * Port-profile 'mgmt-l3'.
    * Uplink port-profiles that are configured to carry the vlan
N1KV(config-port-prof)# vmware port-group
N1KV(config-port-prof)# switchport mode access
N1KV(config-port-prof)# switchport access vlan 10
N1KV(config-port-prof)# no shut
N1KV(config-port-prof)# system vlan 10
N1KV(config-port-prof)# state enabled
N1KV(config-port-prof)# show port-profile name mgmt-l3
port-profile mgmt-l3
 type: Vethernet
 description:
 status: enabled
 max-ports: 32
 min-ports: 1
 inherit:
 config attributes:
  switchport mode access
  switchport access vlan 10
  no shutdown
 evaluated config attributes:
  switchport mode access
  switchport access vlan 10
  no shutdown
 assigned interfaces:
 port-group: mgmt-l3
 system vlans: 10
 capability l3control: yes
 capability iscsi-multipath: no
 capability vxlan: no
 capability l3-vservice: no
 port-profile role: none
 port-binding: static
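
As the earlier warning points out, VLAN 10 also has to be declared as a system VLAN on the uplink (ethernet) port-profile that carries it. The following is only a sketch of what such an uplink port-profile could look like; the profile name, the allowed VLAN list and the MAC pinning choice are assumptions that depend on your environment:

N1KV(config)# port-profile type ethernet uplink-trunk
N1KV(config-port-prof)# vmware port-group
N1KV(config-port-prof)# switchport mode trunk
N1KV(config-port-prof)# switchport trunk allowed vlan 10,20,30
N1KV(config-port-prof)# channel-group auto mode on mac-pinning
N1KV(config-port-prof)# system vlan 10
N1KV(config-port-prof)# no shutdown
N1KV(config-port-prof)# state enabled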

A couple of steps are still left: install the VEM on the ESXi hosts and add the hosts to the distributed switch. You can choose not to migrate the management vmkernel at this stage. However, the VEM will not be displayed when using the show module command on the Nexus 1000v. It is only when you migrate the management vmkernel to the L3 vmkernel port-profile that the remote module discovery is initiated, because this is what establishes the link between the VEM and the VSM over the control plane network.
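
If you install the VEM manually rather than through VMware Update Manager, the commands below give an idea of the process. The bundle path is hypothetical and the exact VIB depends on your ESXi and Nexus 1000v versions; the last two commands are simply verification steps, on the host and on the VSM respectively:

# on the ESXi host: install the VEM bundle and check its status
~ # esxcli software vib install -d /vmfs/volumes/datastore1/cisco-vem-bundle.zip
~ # vem status -v

# on the VSM: once the management vmkernel has been migrated, the host shows up as a module
N1KV# show module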

There are a couple of other considerations when putting together a design with the Nexus 1000v, such as traffic pinning, QoS, the auto-expand feature, or security. I will dig into these important topics in a future post. So stay tuned!
