One of the major new features of vSphere 6 is the ability to vMotion over very long distances. With previous releases of vSphere the maximum supported network Round Trip Time (RTT) was 10 ms, which equates to a distance of almost 400 miles. With Long-Distance vMotion the supported RTT has been increased to a whopping 100 ms, which extends the distance to around 4,000 miles – far enough to move a VM from London to New York.
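As a back-of-the-envelope sanity check, light travels through fibre at roughly 200,000 km/s, so a given RTT puts a hard ceiling on the one-way distance. The sketch below (my own illustrative figures, not VMware's) computes that theoretical ceiling; the practical distances quoted above are lower because real networks add routing and equipment latency:

```python
# Back-of-the-envelope: maximum one-way fibre distance for a given RTT.
# Assumes a signal speed of ~200,000 km/s in fibre and zero equipment
# latency, so real-world usable distances are considerably shorter.

FIBRE_SPEED_KM_S = 200_000  # roughly two-thirds the speed of light in a vacuum
KM_PER_MILE = 1.609

def max_one_way_miles(rtt_ms: float) -> float:
    one_way_seconds = (rtt_ms / 1000) / 2   # half the round trip
    return FIBRE_SPEED_KM_S * one_way_seconds / KM_PER_MILE

print(f"10 ms RTT : ~{max_one_way_miles(10):,.0f} miles theoretical maximum")
print(f"100 ms RTT: ~{max_one_way_miles(100):,.0f} miles theoretical maximum")
```

The gap between the ~620-mile theoretical figure for 10 ms and the ~400 miles quoted above gives a feel for how much of the latency budget real-world routing consumes.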
Of course you do still require a stretched Layer 2 network (L2 adjacency), which is where technologies like VMware NSX come in, but what about the storage?
vSphere 5.1 introduced Enhanced vMotion, which combined vMotion and Storage vMotion into a single operation so that shared storage was no longer required. Essentially, Long-Distance vMotion moves all of a VM's storage along with its memory.
This does not sound ideal, as you would need to move an awful lot of data between London and New York: it is one thing to move a few GBs of RAM, but hundreds of GBs of disk per VM is another matter. The answer is to combine Long-Distance vMotion with asynchronous replication, either storage array based or vSphere Replication.
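To put some illustrative numbers on that, here is a rough sketch of bulk copy times over a WAN link. The sizes and link speed are my own assumptions, and it deliberately ignores compression, dedup and protocol overhead:

```python
# Rough estimate of the time to copy a VM's data over a WAN link.
# All figures are illustrative; compression, dedup and protocol
# overhead are ignored, so treat the results as orders of magnitude.

def transfer_hours(data_gb: float, link_mbps: float) -> float:
    bits = data_gb * 8 * 1000**3          # decimal GB to bits
    seconds = bits / (link_mbps * 10**6)  # link speed in megabits/s
    return seconds / 3600

# e.g. 16 GB of RAM vs 500 GB of disk over a dedicated 1 Gbps link
for size_gb in (16, 500):
    print(f"{size_gb} GB over 1 Gbps: ~{transfer_hours(size_gb, 1000):.2f} hours")
```

A few minutes for the memory versus over an hour for the disks, even on a dedicated gigabit link, is exactly why pre-seeding the data with asynchronous replication makes sense.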
For vSphere Replication this should be relatively straightforward, as it is integrated into the hypervisor and works at the VM level. Storage array replication will be much more of a challenge, as it typically replicates an entire datastore containing many VMs.
This is where Virtual Volumes (VVols) come into play, as they should allow replication to be controlled at the VM level. Long-Distance vMotion would need to synchronise the replication and switch the active site. That sounds like a complex task, but it would bring tremendous advantages and make Disaster Avoidance available over almost any distance.
I am quite sure that this is something VMware is currently working on with the likes of EMC and NetApp, so watch this space – hopefully Long-Distance vMotion is only going to get better.