Advice From The Community

Richie Murray
Which hypervisor provides the best network performance at 10Gb or higher? What do you all think?
Hassan Ismail
Real User

On a basic, structural level, virtual networks aren't that different from physical networks.

In virtualization, virtual switches are used to establish the connection between the virtual network and the physical network.

Once the vSwitch has bridged the connection between the virtual network and the physical network, the virtual machines residing on the host server can begin transferring data to, and receiving data from, all of the network-capable devices connected to the physical network. That is to say, the virtual machines are no longer limited to communicating solely across the virtual network.
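
As a rough illustration of that bridging, here is a minimal sketch (assuming shell access to an ESXi host, where the standard esxcli namespace is available) that lists each standard vSwitch together with the physical NICs acting as its uplinks:

```python
# Minimal sketch: list standard vSwitches and their physical uplinks on an
# ESXi host, showing which physical NICs bridge each virtual network.
# Assumes it runs where esxcli is available (the ESXi shell or over SSH).
import subprocess

def list_vswitch_uplinks():
    out = subprocess.run(
        ["esxcli", "network", "vswitch", "standard", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        line = line.strip()
        # Each vSwitch block contains a "Name:" line and an "Uplinks:" line
        # naming the vmnicX adapters that bridge it to the physical network.
        if line.startswith(("Name:", "Uplinks:")):
            print(line)

if __name__ == "__main__":
    list_vswitch_uplinks()
```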

What I want to say is that network performance depends on many factors beyond the hypervisor itself. In my long experience with virtualization, having worked on VMware, OVM, KVM, Hyper-V, and Nutanix AHV, we can get the best performance from any of these hypervisors if we use the proper NIC, physical server, and physical switches.

From my point of view, Nutanix can provide the best performance due to its data locality, which can deliver more than 10 Gb/s to the hosted virtual machines.

But again, you can get the best performance from VMware if you have the right design.

Wil van Lierop
Real User

I felt the need to, again, make some remarks.

Left out of the discussion is the question of which architecture is planned and which OSes will run as guests. In the past, Intel CPUs on the x86 ISA were used almost exclusively, but that landscape is rapidly shifting.

There is a big change coming. Apart from the new x86 Epyc CPUs from AMD, which show much better gains on a lot of virtualization platforms, the latest developments now point in the direction of other ISAs like ARM and RISC-V. Not for the faint of heart as of yet, but it is coming, and it won’t be stopped this time.

If you look carefully at the AMD Epyc CPU line, with its many PCIe lanes and much better performance figures than can be obtained from the current Platinum and Gold Intel Xeon CPUs, you quickly discover the benefits. And yes, this platform is rapidly maturing. This is something to consider when choosing a hypervisor; not all hypervisors perform equally well on those platforms. Initial testing I did with Epyc Rome suggests that the more mature Linux hypervisors are taking the lead.

It all depends on your particular needs for that 10Gbit speed you want to implement. Without further details, it is hard to offer good advice. If your workload is SQL Server, the backend plays a much more important role. In that respect, XenServer 8.0 on Epyc takes the crown, but only if your backend is of good quality too. All-flash backends are not always better for that particular network load and workload. I think that if your wallet is deep enough, all-M.2 storage on Epyc CPUs is top of the line, no matter which hypervisor is chosen.

Jose Alberto Oliveros Garcia-Alcañiz
User

I have had good experience with the VMware hypervisor over 10G networks; the following link covers all the relevant best practices.

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/Perf_Best_Practices_vSphere65.pdf

Hardware Networking Considerations

Before undertaking any network optimization effort, you should understand the physical aspects of the network. The following are just a few aspects of the physical layout that merit close consideration:

* Consider using server-class network interface cards (NICs) for the best performance.
* Make sure the network infrastructure between the source and destination NICs doesn’t introduce bottlenecks. For example, if both NICs are 10Gb/s, make sure all cables and switches are capable of the same speed and that the switches are not configured to a lower speed.

For the best networking performance, we recommend the use of network adapters that support the following hardware features (a sketch for checking some of them follows the list):

* Checksum offload
* TCP segmentation offload (TSO)
* Ability to handle high-memory DMA (that is, 64-bit DMA addresses)
* Ability to handle multiple Scatter Gather elements per Tx frame
* Jumbo frames (JF)
* Large receive offload (LRO)
* When using a virtualization encapsulation protocol, such as VXLAN or GENEVE, the NICs should support offload of that protocol’s encapsulated packets.
* Receive Side Scaling (RSS)
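
On a Linux host (e.g., under KVM or Proxmox), a quick way to check most of these features is `ethtool -k`; the sketch below is illustrative, and the interface name eth0 is a placeholder. Jumbo frames are a function of the configured MTU rather than an ethtool feature flag, and RSS queue counts are reported separately by `ethtool -l`.

```python
# Minimal sketch: report offload features on a Linux NIC via "ethtool -k".
# The interface name "eth0" is a placeholder; requires ethtool installed.
import subprocess

FEATURES = (
    "tx-checksumming",           # checksum offload
    "tcp-segmentation-offload",  # TSO
    "scatter-gather",            # multiple scatter-gather elements per frame
    "large-receive-offload",     # LRO
    "tx-udp_tnl-segmentation",   # VXLAN/GENEVE encapsulation offload
)

def show_offloads(iface="eth0"):
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.strip().startswith(FEATURES):
            print(line.strip())

if __name__ == "__main__":
    show_offloads()
```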

Make sure network cards are installed in slots with enough bandwidth to support their maximum throughput. As described in “Hardware Storage Considerations” on page 13, be careful to distinguish between similar-sounding—but potentially incompatible—bus architectures.

Ideally, single-port 10Gb/s Ethernet network adapters should use PCIe x8 (or higher) or PCI-X 266 and dual-port 10Gb/s Ethernet network adapters should use PCIe x16 (or higher). There should preferably be no “bridge chip” (e.g., PCI-X to PCIe or PCIe to PCI-X) in the path to the actual Ethernet device (including any embedded bridge chip on the device itself), as these chips can reduce performance.

Ideally, 40Gb/s Ethernet network adapters should use PCIe Gen3 x8/x16 slots (or higher).
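
The arithmetic behind those recommendations is straightforward. Using the usual per-lane approximations after encoding overhead (~2 Gb/s for Gen1, ~4 Gb/s for Gen2, ~7.9 Gb/s for Gen3, per direction), a quick sketch:

```python
# Back-of-the-envelope sketch: usable PCIe bandwidth per direction vs. NIC
# line rate. Per-lane figures are approximations after 8b/10b (Gen1/Gen2)
# or 128b/130b (Gen3) encoding; real throughput is lower still due to
# protocol overhead, which is why extra headroom is recommended.
PCIE_GBPS_PER_LANE = {"gen1": 2.0, "gen2": 4.0, "gen3": 7.9}

def slot_headroom(gen, lanes, nic_gbps):
    slot_gbps = PCIE_GBPS_PER_LANE[gen] * lanes
    verdict = "OK" if slot_gbps > nic_gbps else "bottleneck"
    print(f"PCIe {gen} x{lanes}: ~{slot_gbps:.0f} Gb/s vs {nic_gbps} Gb/s NIC -> {verdict}")

slot_headroom("gen2", 8, 10)   # single-port 10GbE: ~32 Gb/s of slot bandwidth
slot_headroom("gen2", 4, 20)   # dual-port 10GbE in an x4 slot: ~16 Gb/s, too tight
slot_headroom("gen3", 8, 40)   # 40GbE in a Gen3 x8 slot: ~63 Gb/s, fine
```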

Multiple physical network adapters between a single virtual switch (vSwitch) and the physical network constitute a NIC team. NIC teams can provide passive failover in the event of hardware failure or network outage and, in some configurations, can increase performance by distributing the traffic across those physical network adapters.

When using load balancing across multiple physical network adapters connected to one vSwitch, all the NICs should have the same line speed.

If the physical network switch (or switches) to which your physical NICs are connected support Link Aggregation Control Protocol (LACP), configuring both the physical network switches and the vSwitch to use this feature can increase throughput and availability.
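
Note that on vSphere, LACP is only supported on the distributed vSwitch, not the standard vSwitch. On a Linux/KVM host, one quick sanity check that LACP has actually negotiated is the kernel's bonding status file; in this sketch the bond name bond0 is a placeholder:

```python
# Minimal sketch: confirm an 802.3ad (LACP) bond has formed on a Linux host
# by reading the kernel's bonding status file. "bond0" is a placeholder.
from pathlib import Path

def check_lacp(bond="bond0"):
    status = Path(f"/proc/net/bonding/{bond}").read_text()
    for line in status.splitlines():
        line = line.strip()
        # "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" confirms LACP;
        # each member appears as "Slave Interface:" with its "MII Status:".
        if line.startswith(("Bonding Mode:", "Slave Interface:", "MII Status:")):
            print(line)

if __name__ == "__main__":
    check_lacp()
```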

Fábio Rabelo
Real User

Using Intel X520 and X540 network cards (I have not tested any X550 yet), Proxmox gets the best performance, but not by much; XEN and VMware come really close, so I do not think it can be the "deciding" factor. With Broadcom network cards the results change a lot: Proxmox gets WAY better performance than XEN and VMware, but is a little slower than with the Intel cards. I will not provide numbers because my tests are very informal and relaxed: just copying a big file, or opening a bunch of queries on an SQL server.

Patrick Ringelberg
Real User

I have worked only with the VMware hypervisor and have seen that, for most customers, a 2 x 10Gbit connection works fine when used in combination with a distributed virtual switch that has a network resource profile (VMware QoS) configured on the DVS.

Sudiro Sudiro
User

In my company, we use VMware vSphere and OpenStack as hypervisor platforms. We tested VMware because it is the main platform for business-critical workloads. Testing with the iperf tool, two 10Gb/s NICs teamed to Cisco ACI can reach 18Gb/s, as expected.

The most important things to achieve high throughput are:
1. Make sure the hypervisor version and the NIC firmware are compatible.
2. Load testing (sending a 1.5TB file) using 2 servers as clients/sources and 1 server as the target. Of course, before performing a test, all physical layers should already be error-free (optical cables, NICs, switch ports).

Note: we found that errors (CRC errors) and packet drops appear if the firmware is not compatible.
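
A repeatable version of that kind of test might look like the sketch below, assuming iperf3 is installed and `iperf3 -s` is running on the target; the server address is a placeholder. Multiple parallel streams are usually needed, because hash-based NIC teaming pins any single flow to one physical uplink:

```python
# Minimal sketch: multi-stream iperf3 throughput test with JSON output.
# Assumes "iperf3 -s" is running on the target; the address is a placeholder.
import json
import subprocess

def run_iperf3(server="10.0.0.2", streams=8, seconds=30):
    out = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    result = json.loads(out)
    # Aggregate receive-side throughput across all parallel streams.
    gbps = result["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{streams} streams for {seconds}s -> {gbps:.1f} Gb/s aggregate")

if __name__ == "__main__":
    run_iperf3()
```

As in the test described above, driving traffic from two client hosts (rather than one) helps the teaming hash spread flows across both uplinks, which is what makes an aggregate beyond a single 10Gb/s port achievable.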

Marty Pochmara (Nutanix)
Vendor

Two answers:

1) Likely Nutanix, because of data locality.
2) What workload is running that requires 10Gb of throughput or IOPS? The reality is that most workloads do not tax a 10Gb port. If yours does, then great: scale-out infrastructure like Nutanix can help distribute that workload, as can a number of modern databases such as MongoDB and other NoSQL stores.

Fredrick Massawe
Real User

I have found VMware vSphere to be far better equipped to meet the demands of an enterprise datacenter than other hypervisors; it delivers the production-ready performance and scalability needed to implement an efficient and responsive data center.


What is Server Virtualization Software?

Server virtualization software, sometimes also called platform virtualization software, is a staple of the modern data center. Virtualization involves emulating a complete physical computer in virtual form. As a result, it becomes possible to run multiple “virtual machines” (VMs) on a single physical device. Given the importance of virtualization to infrastructure strategy, current product offerings for server virtualization tend to be feature rich and highly sophisticated.

When evaluating server virtualization solutions, IT Central Station member comments reflect the depth of functionality and nuances of products on the market today. Preferences go well beyond basics like wanting a virtualization package to be flexible and easy to install and configure.

Enterprises now essentially run their entire infrastructures on top of server virtualization software. As a result, users look for capabilities like automatic mirroring/backup using runtime snapshots, as a way to minimize fail-recovery procedures and downtime. IT Central Station members pay attention to how well a server virtualization solution can handle multiple physical machines, balancing processing demand to achieve optimal usage of each physical machine. The goal is to minimize overload at peak times.

Infrastructure managers emphasize host clustering support. For instance, will a server virtualization software package support at least 8 nodes and 90 VMs? They pay attention to replication and disaster recovery, as well as live migration (concurrent VM migration without downtime). Users seem to want a granular administration model. Stability is prized.

Portability and usability also drive the selection of virtualization managers. With admins on call anywhere, some users want the software to run on a laptop without using up much battery power. The quality of the web client also matters in this context.
