
HP Virtual Connect in tunnel mode breaks vCloud Director Network Isolation (vCDNI)

March 21st, 2013

VMware and HP have both released advisories saying you shouldn’t use HP Virtual Connect in tunnel mode if you are using vCloud Director Network Isolation (vCDNI), which relies on MAC-in-MAC encapsulation.

There are two network modes available with Virtual Connect: Tunnel Mode and Mapped Mode.

When using Tunnel Mode, Virtual Connect passes all tagged and untagged packets through the Virtual Connect switch down to selected blades, where the VLANs are split into port groups. The uplinks are therefore considered dedicated uplinks: control over which VLANs are trunked is done at the upstream switch, so you can’t have a different set of VLANs going to Blade 1 and Blade 2 while still utilising the same uplinks. You could obviously have separate sets of uplinks for Blade 1 and Blade 2 to achieve this. The advantage of tunnel mode is that you specify your VLANs once at the upstream switch, pass all of them down the same trunk to multiple blades, and only have to manage VLANs at the upstream switch and port groups on the ESXi host or within the vSphere Distributed Switch.
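
In tunnel mode the only per-VLAN object on the vSphere side is the port group, so adding a VLAN is typically an upstream switch change plus a new port group. As a rough illustration, here is a minimal pyVmomi sketch that creates tagged port groups on a vSphere Distributed Switch; the vCenter address, credentials, vDS name and VLAN IDs are placeholders, not values taken from the advisories.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- substitute your own vCenter and credentials.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the distributed switch by name (assumed here to be called "dvSwitch0").
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch0")

    # One tagged port group per VLAN -- the only per-VLAN object needed host-side.
    specs = []
    for vlan_id in (10, 20, 30):
        spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
        spec.name = "pg-vlan%d" % vlan_id
        spec.type = "earlyBinding"
        spec.numPorts = 16
        port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
        port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=vlan_id, inherited=False)
        spec.defaultPortConfig = port_cfg
        specs.append(spec)

    dvs.AddDVPortgroup_Task(specs)
    Disconnect(si)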

In Mapped Mode, the Virtual Connect switch examines all the VLANs, and by defining an Ethernet Network for each VLAN on the Virtual Connect switch you can selectively pass all or some of the VLANs down to the blades. The uplinks are considered shared, as you can trunk every VLAN you will need for any blade and, for example, have some VLANs going to Blade 1 and other VLANs going to Blade 2 while sharing the same uplinks. In order to do this you have to create a separate Ethernet Network for every VLAN (possibly two for redundancy) and manage VLANs at both the upstream and Virtual Connect switches as well as port groups on the ESXi host or within the vSphere Distributed Switch.
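
To make the administrative difference concrete, here is a tiny illustrative Python sketch of the objects you end up keeping in sync in mapped mode. It is not a Virtual Connect API and all names and counts are made up: an Ethernet Network per VLAN per side on Virtual Connect plus the matching port group on the vSphere side.

    # Illustrative only: rough model of per-VLAN objects managed in mapped mode.
    vlans = {10: "Prod", 20: "vMotion", 30: "NFS"}   # hypothetical VLAN plan

    vc_networks = []   # Ethernet Networks defined on the Virtual Connect switch
    portgroups = []    # port groups on the ESXi hosts / vSphere Distributed Switch

    for vlan_id, name in vlans.items():
        # One Ethernet Network per VLAN per side (A/B) for redundant uplink sets.
        vc_networks.append("%s-%d-A" % (name, vlan_id))
        vc_networks.append("%s-%d-B" % (name, vlan_id))
        # Each VLAN still needs a port group tagged with the same VLAN ID.
        portgroups.append(("pg-%s" % name, vlan_id))

    print("%d VLANs -> %d VC Ethernet Networks + %d port groups to keep in sync"
          % (len(vlans), len(vc_networks), len(portgroups)))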

VMware says:

vCDNI uses the same MAC address for all communication on a per ESXi host basis. Each vApp uses a different VLAN, so having the same MAC address is acceptable as long as the VLAN is different, from a standard physical switch perspective.

When you put the HP Virtual Connect in tunnel mode it only has one MAC table and does not keep track of the VLAN for each MAC, resulting in the appearance of a MAC flap, which causes packet loss.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2040993
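
The KB’s point is easier to see with a toy learning-bridge model. The sketch below illustrates the general principle only, with made-up ports and VLAN IDs: keyed by (VLAN, MAC) as on a standard physical switch, the same MAC behind different ports on different VLANs is stable, but keyed by MAC alone, as in Virtual Connect’s single table for a tunneled network, the entry appears to flap between ports.

    class Bridge:
        """Toy learning bridge. per_vlan=True keys the MAC table by (vlan, mac),
        like a standard physical switch; per_vlan=False keys by mac only, like
        the single table Virtual Connect keeps for a tunneled network."""
        def __init__(self, per_vlan):
            self.per_vlan = per_vlan
            self.table = {}
            self.flaps = 0

        def learn(self, mac, vlan, port):
            key = (vlan, mac) if self.per_vlan else mac
            if key in self.table and self.table[key] != port:
                self.flaps += 1   # same key now seen behind a different port
            self.table[key] = port

    # The same source MAC shows up on different VLANs behind different ports
    # (hypothetical ports/VLANs, loosely modelled on vCDNI's shared per-host MAC).
    frames = [("00:50:56:aa:aa:aa", 100, "port1"),
              ("00:50:56:aa:aa:aa", 200, "port2"),
              ("00:50:56:aa:aa:aa", 100, "port1"),
              ("00:50:56:aa:aa:aa", 200, "port2")]

    for per_vlan in (True, False):
        b = Bridge(per_vlan)
        for mac, vlan, port in frames:
            b.learn(mac, vlan, port)
        mode = "per-VLAN table" if per_vlan else "single flat table"
        print("%s: %d MAC moves observed" % (mode, b.flaps))
    # per-VLAN table: 0 MAC moves observed
    # single flat table: 3 MAC moves observed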

HP expands on this to include load balancers:

Devices that bridge network traffic at Layer 2 (for example, some third-party load balancers) may experience connectivity problems for the server blades using the networks with tunneled VLAN tags.

When VC is in VLAN tunnel mode, it maintains a single MAC Address table for the tunneled VC network even though it encompasses multiple VLANs (this is normal). The result is that when a host (physical or VM) inside the VC Domain sends a broadcast like an ARP, it is sent out the VC uplink on one VLAN, traverses through the load balancer and is broadcast on the load-balanced VLAN. If that VLAN is also sent to the VC uplink port, the MAC address of the host is learned outside of VC. Like any 802.1d bridge, subsequent traffic sent to that host’s MAC address and received on the VC uplink is discarded as VC has learned that the MAC address resides outside the domain. The normal MAC address aging time in VC and most other switches is 5 minutes, so this condition will exist until the entry ages out.

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c02684783&lang=en&cc=us&taskId=101&prodSeriesId=3794423&prodTypeId=3709945
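
A rough way to picture HP’s last point: once Virtual Connect has learned the blade’s MAC on its own uplink, frames arriving on that uplink for the blade are discarded until the entry ages out (around five minutes by default). The sketch below is a toy model of that timeline, not Virtual Connect behaviour taken from code; the port names and timings are assumptions, apart from the five-minute aging value quoted above.

    MAC_AGING_SECONDS = 300   # ~5 minutes, the default aging time in the advisory

    class VcMacEntry:
        def __init__(self, port, learned_at):
            self.port = port            # where the MAC was (re)learned
            self.learned_at = learned_at

    # Hypothetical timeline: the blade's ARP broadcast loops back through the
    # L2 load balancer, so VC (re)learns the blade's MAC on its own uplink at t=0.
    entry = VcMacEntry(port="uplink1", learned_at=0.0)

    def handle_frame_from_uplink(dst_entry, now):
        """Frames received on an uplink and destined to a MAC that VC believes
        lives outside the domain (also learned on an uplink) are discarded."""
        aged_out = (now - dst_entry.learned_at) > MAC_AGING_SECONDS
        if dst_entry.port == "uplink1" and not aged_out:
            return "discard"            # blade is unreachable from outside
        return "forward to blade"

    for t in (10, 120, 299, 301):
        print("t=%3ds: %s" % (t, handle_frame_from_uplink(entry, t)))
    # The blade stays unreachable from outside until the entry ages out at ~300s.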

The workaround is not to use tunnel mode and instead reconfigure Virtual Connect to use mapped VLANs, which may significantly alter your Virtual Connect network design.

  1. March 23rd, 2013 at 03:56 | #1

    Thanks Julian! Very good information to have at hand when designing a VC architecture for a VMware environment.

  2. Matt
    August 16th, 2013 at 16:48 | #2

    Great info, Julian. We’ve just run into this in our environment when doing a POC with vCloud. Have you seen anyone successfully set up a large vCloud deployment using vCDNI in a Virtual Connect environment?
