For my first post here on Hypervisor Agnostic, I thought it would be appropriate to open up with a debate that I’ve been sucked into as of late due to a project I’ve been involved with at work. Has Hyper-V finally reached the point that it can go toe to toe with VMware as an enterprise hypervisor product? Or is Hyper-V merely an entry-level solution for small businesses that lack the budget to invest in VMware? My answer is yes, Hyper-V is an enterprise product, but is it right for your enterprise? The answer to that is a little more complicated.
To appreciate where Hyper-V is today, you have to understand where it began. Microsoft first entered the virtualization market when it bought the Virtual PC product line from Connectix back in 2003. Virtual PC was originally a virtualization program for Apple Macintosh computers that ran various x86 versions of Windows (and other x86 OSes) on MacOS. Today, that doesn’t seem like anything special, because modern Macs can dual boot Windows with no problem. But back in those days, Macs and “IBM clones” were based on entirely different processor architectures, and getting Windows to run on a Mac was no small feat. To be fair, getting Windows to run stably on any hardware in the 90s was a minor miracle, but I digress. The point is, Connectix had a pretty decent little virtualization engine for the time, and MS wanted it for their own. From Virtual PC came Virtual Server, which ran on Windows Server platforms and hosted other server operating systems. It competed with VMware Server, another Type 2 hypervisor (a virtualization engine that runs as a program within a conventional operating system). But Type 1 baremetal hypervisors like VMware’s ESX were starting to become affordable, viable solutions, and Type 2 platforms started to become relegated to desktops rather than data centers.
So in order to get in on the baremetal hypervisor party, Microsoft announced that Windows Server 2008 would include Hyper-V – a free, baremetal hypervisor system that offered better performance than Virtual Server. Hyper-V ended up being a separate download from the RTM version of Server 2008, and it was somewhat underwhelming when it was released, being years behind what ESX and Citrix Xen were capable of. There was no way to migrate a VM from one node to another without downtime, VMs in a failover cluster had to be placed on their own LUNs, guest operating system support was very limited, as were the specs of virtual machines. 2008 R2′s release of Hyper-V improved in many of these areas, offering clustered storage (Cluster Shared Volumes, so VMs no longer needed dedicated LUNs), live migration of VMs, and a slightly expanded list of supported guest operating systems.
But it was with Hyper-V 2012 that Microsoft really came out swinging, offering specs that (on paper at least) out-scale VMware, improved live migration, storage migration (previously only available with System Center Virtual Machine Manager), a completely rebuilt networking stack, and several other features that helped to close the gap between Hyper-V and VMware.
But the gap is still there, and that is the point of this initial blog post. I’ve spent the better part of this past year working with Hyper-V 2012, and there are some things I absolutely love about it, and some things I loathe. Here’s my run down of what’s good, and what’s bad.
Scalability

MS advertises that Hyper-V can support 64 nodes in a cluster, as compared to VMware’s 32 nodes. They also claim Hyper-V nodes can support 4TB of RAM and 320 logical processors, and can support VMs with 1TB of RAM. To me, speccing out a hypervisor like that seems somewhat ridiculous. I’m much more of a fan of “scale out” than “scale up” – I’d rather have 12 nodes in a cluster with 256GB of RAM than 3 nodes with 1TB each. Sure, Hyper-V can support 64 of those nodes with 1TB of RAM. I don’t, however, want to be the guy who has to handle maintenance on that cluster, and wait on all the VMs filling up that 1TB of RAM to migrate from node to node when I want to install Windows updates.
Still, it is great that Hyper-V finally supports VMs with decent specs. I don’t foresee myself ever needing to give a VM one terabyte of memory, but it’s a lot better than 32 or 64GB, and the ability to add more than 4 virtual CPUs is a much-needed improvement. Combine that with a new virtual hard disk format (VHDX, which supports disks up to 64TB), and Hyper-V VMs can be built to a decent scale. That does, however, leave the door open for a lot of overbuilt VMs, but that’s another rant for another day.
High Availability on the Cheap

This is one area where Hyper-V is really positioned to eat VMware’s lunch. If we compare apples to apples, i.e. free product vs. free product, Hyper-V has one significant advantage over VMware’s free ESXi offering: free Hyper-V can be part of a Windows failover cluster. Free ESXi is standalone only – and has a pretty limited RAM cap to boot.
So if you want a cheap, highly available virtualization solution, Hyper-V is the way to go. The freebie version of Hyper-V (meaning the standalone, downloadable version – not the Hyper-V role enabled in Windows Server 2012) is not feature limited compared to its Windows Server brethren. To get HA/failover capability in VMware’s product, you’re going to spend several thousand dollars.
But the drawback to this is that it’s built around Windows Failover Clustering, which has its own set of issues. First of all, let’s not forget the Windows cluster’s reliance on Active Directory.
If you virtualize all your domain controllers and have some kind of network issue that prevents a node from finding a domain controller, hilarity will ensue, and by hilarity, I mean a bunch of VMs dying/failing over. Second, management of many clustered nodes is possible without System Center Virtual Machine Manager, but it is controlled chaos at best. Once Hyper-V nodes are clustered, you should generally do all node and VM management from the Windows Failover Cluster management console if you’re not using SCVMM. However, MS didn’t include a way to manage Hyper-V networks from the Failover Cluster manager, so you still have to do that through the standalone Hyper-V management console, node by node. Yes, you can script it through PowerShell, and from what I can see, PowerShell seems to be the only “one stop shop” for dealing with Hyper-V. Without PowerShell, you’ll find yourself bouncing back and forth between the Windows control panel, Hyper-V Manager, and Failover Cluster Manager to handle most day-to-day tasks. It’s doable, but it’s ugly. VMware’s management is much more streamlined and intuitive.
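To give a concrete sketch of leaning on PowerShell for that missing network management: the snippet below creates the same external virtual switch on every node of a cluster, the kind of thing Failover Cluster Manager has no UI for. The switch name and adapter name are hypothetical examples, not a prescription for your environment:

```powershell
# Create an identical external virtual switch on every cluster node.
# "External-LAN" and "Ethernet 2" are example names - adjust for your hosts.
# Requires the FailoverClusters and Hyper-V PowerShell modules (Server 2012).
$nodes = (Get-ClusterNode).Name
foreach ($node in $nodes) {
    Invoke-Command -ComputerName $node -ScriptBlock {
        New-VMSwitch -Name "External-LAN" `
                     -NetAdapterName "Ethernet 2" `
                     -AllowManagementOS $true
    }
}
```

It’s not pretty compared to a proper GUI, but it beats clicking through Hyper-V Manager against each node one at a time.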
Live Migration vs. vMotion
vMotion is the feature that allowed VMware to take over the virtualization world – the ability to move VMs from node to node with no downtime was huge, and no one else ever figured out how to do it quite as well, or as fast. But there are some limitations – in vSphere 5.1, you can only run 4 concurrent vMotion operations per host on anything less than a 10Gb network link; with a 10Gb NIC, you can do up to 8 per host. MS took the “let the administrator decide” approach with Hyper-V 2012, and you can now set the number of concurrent live migrations to whatever you want. Off the top of my head, I believe Hyper-V 2008 R2 only allowed one live migration at a time, so this is a huge improvement.
That said, before you think you’re going to team a couple of 1GB NIC ports in your Hyper-V host and crank the max number of migrations up to 10, 15, 20, or beyond, keep in mind that there’s a very good reason VMware sets the limits they do on vMotion. There’s more to the equation than just the network here – host memory, storage I/O, and CPU usage on the host are all impacted during migrations. So, take a cautious approach to this, and steadily increase the live migration count on your Hyper-V hosts rather than deciding right off the bat that 12 is a great number to start off with.
That said, if you have a dedicated live migration network with decent bandwidth, and your hosts can handle it, 10 simultaneous live migrations at a time can significantly decrease your cluster maintenance times.
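If you do decide to raise the cap, the knobs live in the Hyper-V PowerShell module. A minimal sketch – the migration count and the subnet below are illustrative values, not recommendations:

```powershell
# Enable live migration on this host and raise the concurrency limit.
# 10 concurrent migrations is an example value - size it to your hardware.
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 10 `
           -MaximumStorageMigrations 2

# Restrict live migration traffic to a dedicated network
# (example subnet - substitute your own migration VLAN).
Add-VMMigrationNetwork "192.168.50.0/24"
```

Run on each node (or wrapped in Invoke-Command against the whole cluster), this is about as close as Hyper-V 2012 gets to a single pane of glass for migration settings without SCVMM.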
Memory Over Commitment vs. Dynamic Memory

This is one of those Coke vs. Pepsi, Ford vs. Chevy, Mac vs. PC type debates. VMware zealots absolutely hate the fact that Hyper-V does not allow memory over commitment, and view it as a sign of Hyper-V’s inferiority. Hyper-V fanboys think that handing out resources you don’t have is a bad thing, and that Hyper-V’s dynamic memory is the way to handle fluctuating memory demands. This is one thing I am 100% on the Hyper-V side of the fence on. Look, it’s great that VMware doesn’t have any hard and fast limits on resource assignment. It’s great that DRS can see that a host is getting low on memory, and can move a memory-hungry VM to a host with more free memory. But sometimes the cluster ends up over committed, a node goes down, and there are no hosts with the resources to satisfy those now-homeless VMs. Or an admin set the cluster to allow VMs to power on even if the resources aren’t there. If the memory isn’t there, and VMware can’t find any VMs that are hoarding memory they’re not actually using, then you end up with VMs swapping their RAM to disk. Outside of a critical production system being down, a critical production system swapping RAM to disk is pretty much my worst case scenario. It’s ugly.
Hyper-V allows you to assign a startup value for a VM’s RAM, as well as a minimum/maximum value. When a VM reaches a defined threshold, it will request more memory from the host, until it reaches the maximum value. When it’s not using the RAM, it will release it until it reaches the minimum value. Yes, this requires a bit more management overhead. But this is one of those things I’d rather have some degree of control over than just leaving the hypervisor to its own devices.
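In PowerShell terms, those startup/minimum/maximum settings map onto Set-VMMemory. A sketch with made-up values – the VM name and sizes here are purely illustrative:

```powershell
# Enable dynamic memory with an explicit floor (MinimumBytes) and
# ceiling (MaximumBytes). Buffer is the percentage of extra memory
# the VM requests ahead of actual demand.
# "WEB01" and all sizes below are example values, not recommendations.
Set-VMMemory -VMName "WEB01" `
             -DynamicMemoryEnabled $true `
             -StartupBytes 2GB `
             -MinimumBytes 1GB `
             -MaximumBytes 8GB `
             -Buffer 20
```

The explicit floor and ceiling are exactly the control I’m talking about: the VM can flex between 1GB and 8GB, but it can never demand memory the host hasn’t been told it may have.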
Guest Operating System Support
This is one area where VMware runs away with it. If you’re a primarily Windows shop running current versions of Windows, then Hyper-V’s got you covered. But if you’re running any Unix-like servers other than a very narrowly defined subset of popular Linux distros, Hyper-V can’t do much for you. And of those supported Linux distros, you’ll find that some features, like dynamic memory, are Windows exclusive.
And if you want to run archaic versions of Windows, you’re out of luck on Hyper-V as well – you’re limited to what MS currently provides support for, which is generally 2 versions behind whatever the latest version is. But if you feel the need to run Windows 3.1, Windows 98, or Windows 2000, then VMware’s got your hookup.
I know, surprise surprise – the Windows-based hypervisor is geared toward Windows guest OSes. But if Microsoft really wants Hyper-V to make a dent in the enterprise, they need to come to grips with the fact that some companies run operating systems that aren’t Windows, Red Hat, CentOS, SUSE, or Ubuntu.
Wrapping It All Up
So as of September 2013, with the release of Windows Server 2012 R2 next month, Hyper-V has supplanted Citrix Xen as the clear number 2 hypervisor platform, in my mind. It can do at least 90% of what VMware can do, at a fraction of the cost. But is that last 10% worth the price?
For small businesses, and smaller enterprise customers that are running primarily Windows in their server rooms and data centers, Hyper-V is priced to move, even if you tack on the cost of SCVMM to manage it – pretty much a must for larger clusters. At the high end of the scale, in heterogeneous environments, VMware is still the king of high availability and load balancing, and its management is much more streamlined. Even with SCVMM (which brings an entirely new set of headaches, but again, more on that at another time), the thought of trying to manage a 64-node Hyper-V cluster makes my head spin.
So yes, Hyper-V is there. But VMware’s not going anywhere any time soon.
Disclosure: The company I work for is partners with several vendors - http://www.latisys.com/partners/strategic_partnerships.html