
What’s New in vSphere 6.0: Cross vCenter and Long Distance vMotion

August 26th, 2014

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vMotion is one of the most basic yet coolest features of vSphere. People generally consider the first time they saw vMotion work as their “wow” moment showing the power of virtualisation. In vSphere 5.5, vMotion is possible within a single cluster and across clusters within the same datacenter and vCenter. With vSphere 6.0, vMotion is being expanded to include vMotion across vCenters, across virtual switches, across long distances and over routed vMotion networks, aligning vMotion capabilities with larger data centre environments.

vMotion across vCenters will simultaneously change compute, storage, networks, and management. This leverages vMotion with unshared storage and will support local, metro and cross-continental distances.

You will need the same SSO domain for both vCenters if you use the GUI to initiate the vMotion, as the VM UUID can be maintained across vCenter Server instances, but it is possible with the API to vMotion across different SSO domains. VM historical data is preserved, such as Events, Alarms and Task History. Performance data will be preserved once the VM is moved but is not aggregated in the vCenter UI; the information can still be accessed using third-party tools or the API, using the VM instance ID, which remains the same across vCenters.

When a VM moves across vCenters, HA properties are preserved and DRS anti-affinity rules are honoured. The standard vMotion compatibility checks are executed. You will need 250 Mbps network bandwidth per vMotion operation.
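As a back-of-the-envelope planning aid, the stated 250 Mbps-per-operation requirement puts a ceiling on how many vMotions a given link can run at once. A minimal sketch (the 250 Mbps figure is from the text above; the function name and link speeds are purely illustrative):

```python
# Rough planning helper: how many concurrent vMotion operations a link can
# sustain, given the 250 Mbps-per-operation bandwidth requirement.

PER_VMOTION_MBPS = 250

def max_concurrent_vmotions(link_mbps: float) -> int:
    """Return how many simultaneous vMotions fit within the link's bandwidth."""
    return int(link_mbps // PER_VMOTION_MBPS)

print(max_concurrent_vmotions(1000))   # 1 Gbps link -> 4
print(max_concurrent_vmotions(10000))  # 10 Gbps link -> 40
```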

Another new function is being able to vMotion or clone powered-off VMs across vCenters. This uses the VMware Network File Copy (NFC) protocol.

vMotion previously could only occur within a network managed by a single virtual switch, either a Virtual Standard Switch (VSS) or a Virtual Distributed Switch (VDS). vMotion across vCenters will allow VMs to vMotion to a network managed by a different virtual switch, effectively switching networks seamlessly. This will include:

  • from VSS to VSS
  • from VSS to VDS
  • from VDS to VDS

You will not be able to vMotion from a VDS to a VSS. VDS port metadata will be transferred and cross vCenter vMotion is still transparent to the guest OS. You will still need Layer 2 VM network connectivity.
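The allowed combinations above amount to a small compatibility matrix. A minimal sketch of that rule, assuming the matrix as stated (the names and structure here are illustrative, not a VMware API):

```python
# The virtual-switch migration matrix described above: VSS->VSS, VSS->VDS
# and VDS->VDS are allowed; VDS->VSS is not.

ALLOWED_SWITCH_MIGRATIONS = {
    ("VSS", "VSS"),
    ("VSS", "VDS"),
    ("VDS", "VDS"),
}

def switch_migration_allowed(source: str, destination: str) -> bool:
    """Return True if a vMotion between these virtual switch types is supported."""
    return (source, destination) in ALLOWED_SWITCH_MIGRATIONS

print(switch_migration_allowed("VSS", "VDS"))  # True
print(switch_migration_allowed("VDS", "VSS"))  # False
```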

In vSphere 5.5, vMotion requires Layer 2 connectivity for the vMotion network. vSphere 6.0 will allow VMs to vMotion using routed vMotion networks.

Another great addition in vSphere 6.0 is being able to do Long-distance vMotion. The idea is to be able to support cross-continental US distances with up to 100+ms RTTs while still maintaining standard vMotion guarantees. Use cases are:

  • Disaster avoidance
  • SRM and disaster avoidance testing
  • Multi-site load balancing and capacity utilisation
  • Follow-the-sun scenarios

You can also use Long-distance vMotion to live migrate VMs onto vSphere-based public clouds, including VMware vCHS, now called vCloud Air.

This may be long-distance vMotion, but it’s still vMotion: a Layer 2 connection is required for the VM network at both source and destination, and the same VM IP address needs to be available at the destination. The vCenters need to connect via Layer 3, and the vMotion network can now be a Layer 3 connection. The vMotion network can be secured either by being dedicated or by being encrypted (VM memory is copied across this network).

vMotion not only involves moving a VM’s CPU and memory state; storage also needs to be taken into consideration if you are moving VMs across sites and arrays. There are various storage replication architectures to allow this. Active-Active replication over a shared site, as with a metro cluster, appears as shared storage to a VM, so this works like classic vMotion. For geo-distance vMotion, where Active-Active storage replication is not possible, VVols will be required, which creates a whole new use case for VVols.
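To get a feel for what moving VMs across sites implies for planning, here is a back-of-the-envelope transfer-time estimate for pushing a VM’s data over a WAN link. This is purely illustrative: real vMotion pre-copies memory iteratively, so actual times also depend on the page dirty rate, and the function name and sizes are assumptions:

```python
# Back-of-the-envelope estimate: seconds to move data_gb gigabytes over a
# link_mbps link at line rate. Ignores protocol overhead and iterative
# memory pre-copy, so treat the result as a lower bound.

def transfer_time_seconds(data_gb: float, link_mbps: float) -> float:
    bits = data_gb * 8 * 1000**3      # decimal gigabytes -> bits
    return bits / (link_mbps * 1000**2)

# A VM with 100 GB of memory and storage over a dedicated 1 Gbps link:
print(round(transfer_time_seconds(100, 1000)))  # ~800 seconds (~13 minutes)
```

Even this optimistic lower bound shows why the WAN pipe size, not the 100+ ms RTT tolerance, tends to be the practical constraint for bulk long-distance moves.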

  1. _mb
    August 27th, 2014 at 06:26 | #1

    How and where are you getting this information?
I’m at VMworld in SF right now, and every VMware employee I ask about vSphere 6.0 is refusing to answer or giving vague answers…

The most disappointing thing about VMworld 2014 is their lack of information about vSphere 6.0, it seems all the hype is about NSX this year 🙁

  2. P. Cruiser
    August 28th, 2014 at 16:58 | #2

    You must be one of those people who skips the keynotes 😉 It was in the day 2 keynote.

  3. September 13th, 2014 at 06:42 | #3

This is exactly the writeup I was looking for. Thanks for putting this together. I’m curious on your thoughts of LVMotion between vCenters and the practical number of VMs and associated data capacity that this would support. Seems that the listed use cases are only practical for a few VMs, depending on the amount of data, size of the WAN pipe, and distance.
