Long Distance VMotion is by far the best session I’ve attended and the most exciting news for me of the VMworld week so far. The session presented a research project performed by VMware, EMC and Cisco, covering four options for performing a long distance VMotion using stock vSphere and existing technologies – well, almost. Three of the four use technologies currently available.
Why would you want to do a long distance VMotion? In my case, we have two datacenters – geographically close to one another. We currently stretch our cluster between the two locations, which allows us to float VMs between them using VMotion. The problem is that all storage is presented from our primary datacenter, so if we lose that site, everything goes with it. Long Distance VMotion is the notion of having two separate clusters, one in each datacenter, and being able to VMotion between them.
What was really news to me from this session (I’ll get to what was presented) was that we can present the same datastores to two different clusters and have them recognized on both. I’m pretty sure I tried this way back in the 3.0 days and it failed to work. This must have been added in 3.5 or 4.0 – I have not tried it in recent years.
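To illustrate what that looks like from the API side, here’s a minimal sketch using pyVmomi that lists which hosts have a given datastore mounted and which cluster each host belongs to; if the same VMFS volume shows up under hosts in both clusters, it’s being presented to both. The vCenter address, credentials and datastore name are made up for illustration.

```python
# Sketch: list the hosts (and their clusters) that mount a given datastore.
# Hostnames, credentials and the datastore name are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the datastore by name anywhere in the inventory
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == 'shared-vmfs-01')
view.Destroy()

# ds.host is a list of HostMount entries; .key is the HostSystem mounting the volume
for mount in ds.host:
    host = mount.key
    cluster = host.parent.name  # the cluster (ComputeResource) the host belongs to
    print(f"{ds.name} mounted on {host.name} in cluster {cluster}")

Disconnect(si)
```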
So, what was presented? The three companies worked together to identify and trial a solution for long distance VMotion. At this point, a very narrow set of criteria must be satisfied for this to work and be supported. Much of the restriction comes from the storage side, but the network presents some problems as well. Apparently, everything you need is already in vSphere, as long as you separate each datacenter into its own set of hosts.
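For what it’s worth, moving a VM to a host in a different cluster under the same vCenter is just the standard RelocateVM_Task call against the vSphere API. Here’s a rough pyVmomi sketch; the VM name, destination host and connection details are assumptions of mine, not anything from the session.

```python
# Sketch: VMotion a running VM to a host in the other cluster via vCenter.
# Names and credentials are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, 'app-server-01')               # VM at site A
dest_host = find_by_name(vim.HostSystem, 'esx01.siteb.example.com')  # host at site B

# No datastore in the spec: the VM stays on the shared VMFS volume and only
# its running state moves to the destination cluster.
spec = vim.vm.RelocateSpec()
spec.host = dest_host
spec.pool = dest_host.parent.resourcePool  # destination cluster's root resource pool

task = vm.RelocateVM_Task(spec, vim.VirtualMachine.MovePriority.highPriority)
print(f"Started relocation task: {task.info.key}")

Disconnect(si)
```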
Requirements
- Distance between datacenters must be less than 200 km
- A single instance of vCenter to control both clusters
- Each site must be configured as its own cluster – the cluster cannot be stretched
- Dedicated gigabit ethernet network
- A single VMware Distributed Switch stretched across both clusters (OK, I didn’t know we could do that either)
- The same IP subnet configured on both clusters so the VM keeps its address after the move
- Cisco DCI (Data Center Interconnect) or similar technology – if you have something comparable from another vendor, you’ll be supported. This means your core network must be able to route traffic for the VM’s IP to either location; the VM networks must be stretched between datacenters
- Datacenter storage should be R/W on both sides
- VMFS storage is presented to both clusters – VMs in each datacenter run from the local storage LUNs
- Must have less than 5 ms latency and at least 622 Mbps of bandwidth (OC-12) – see the sketch after this list
- No FT across sites!
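Since the 5 ms figure is the one most likely to trip people up, here’s a quick-and-dirty sketch for sanity-checking it: it times a handful of TCP connects from one site to an ESX host’s management port at the other. The hostname is an assumption, and a TCP connect is only a rough proxy for round-trip latency, but it’s enough to tell whether you’re in the right ballpark.

```python
# Rough latency sanity check between sites (hostname below is made up).
# A TCP connect takes roughly one round trip, so this slightly overstates RTT.
import socket
import time

REMOTE_HOST = 'esx01.siteb.example.com'  # a host at the other datacenter
PORT = 443                               # ESX management interface
SAMPLES = 10

times_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((REMOTE_HOST, PORT), timeout=2):
        pass
    times_ms.append((time.perf_counter() - start) * 1000.0)
    time.sleep(0.5)

print(f"avg: {sum(times_ms) / len(times_ms):.2f} ms, max: {max(times_ms):.2f} ms")
print("looks OK for VMotion" if max(times_ms) < 5.0 else "over the 5 ms limit")
```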
Surprisingly, we have most of this configured in our environment and it’s been the status quo for us for several years. The biggest difference between our environment and this spec configuration is that we run a stretched cluster to achieve it. Our datacenters are very close to one another and we only present storage from our primary datacenter so that we don’t have a split-brain scenario. But it does give me new things to think about and talk about with co-workers. We currently don’t run two clusters or SRM because we like the flexibility to VMotion between datacenters – with that now a possibility, we may have something new to investigate…