VMware vSphere Review

I have tried XenServer, Hyper-V, and KVM... but ESXi has been doing it better for longer

What is most valuable?

You can download and install it for free. It allows you to do more with less. It's easy to use and simple to configure. There are hardware vendor-specific builds of the software, increasing visibility and manageability of the product.

How has it helped my organization?

We have fewer physical servers to monitor and put under warranty.

For how long have I used the solution?

10+ years, in its various forms: GSX, ESX, and now ESXi.

What was my experience with deployment of the solution?

Deployment options with ESXi are varied, so the experience depends on what you're trying to achieve within your business.

What do I think about the stability of the solution?

Due to the multitude of configuration options, you can occasionally experience compatibility issues with third-party storage vendors, such as NetApp, which recently had a known issue with NFS reporting all ports down.

What do I think about the scalability of the solution?

I never encountered any scalability issues with this product. It is truly enterprise-grade.

How are customer service and technical support?

Customer Service: Good to excellent.

Technical Support: Good to excellent, depending on which engineer is assigned to you.

Which solution did I use previously and why did I switch?

I have tried other hypervisor technologies, including XenServer, Hyper-V, KVM, Parallels, and VirtualBox. They all do the same thing, but ESXi has been doing it better for longer.

How was the initial setup?

Exceedingly simple setup. You can make it more complex depending on how truly enterprise your needs are, such as stateless implementations of ESXi.

What was our ROI?

Reduced electricity bills, reduced hardware and warranty costs, reduced server implementation time, and increased manageability and availability of corporate services.

Which other solutions did I evaluate?

Not on this occasion, but I have assessed other hypervisors.

What other advice do I have?

Assess why you think virtualisation is the answer to your problem. Research hypervisor choices, perform proof-of-concept exercises with the products you choose to assess, and most of all think about the legacy of what you're doing, i.e., what do you want to leave behind?
Disclosure: I am a real user, and this review is based on my own experience and opinions.

it_user2652 (Project Manager at a non-tech company with 10,001+ employees):

Which storage are you using with ESXi? Do you think there is any performance impact between using local disk and using disks from shared storage?

it_user71133 (Senior Manager of Network at a tech company with 51-200 employees):

Hi Kapilmalik1983,

Well, for my home lab I use a mixture of storage. Essentially I have the following devices, which allow me to explore most of the storage options with ESXi, such as iSCSI, NFS, and vSAN (with the exception of Fibre Channel and Fibre Channel over Ethernet):

1 x Synology DS411 with 4 x 240 GB SSD (Tier 1 Storage RAID 5)
1 x Synology DS411 with 4 x 500 GB 7200 RPM Disks (Tier 2 Storage RAID 5)
1 x QNAP SS-839 with 6 x 500 GB 5400 RPM Disks (Tier 3 Storage RAID 10) and 2 x 500 GB 7200 RPM (Tier 2 Storage RAID 1)
9 x 250 GB 7200 RPM (Tier 2) and 3 x 40 GB SSD for vSAN

The question of performance really comes down to a few things: connectivity type, spindle speed, capacity, and number of disks. With SSD becoming more and more affordable, though, it's changing the dynamics of workloads for file- and block-level storage. More disks means more I/O, and more I/O means more performance capacity.

So to answer your question, "Do you think there is any performance impact using local disk and using disks from storage?", I would have to say yes, but it could go either way. An iSCSI-based SAN with 10 x SATA-speed disks of 5,400-7,200 RPM probably isn't going to perform as well as a server with 8 x SSD. It really depends on what your workloads are, which means doing some monitoring and measuring before you start investing in kit, so you can work out what you need. So many times people measure on capacity alone: "We need a SAN that is (holds hands apart) about this big."
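A rough sketch of that comparison, where the per-disk IOPS figures are ballpark assumptions (not vendor specs) and RAID write penalties are ignored:

```python
# Ballpark raw IOPS per disk type -- assumed figures for illustration only.
IOPS_ESTIMATES = {
    "sata_5400": 75,    # assumed 5,400 RPM SATA spindle
    "sata_7200": 100,   # assumed 7,200 RPM SATA spindle
    "ssd_sata": 5000,   # assumed consumer SATA SSD
}

def aggregate_iops(disk_type: str, count: int) -> int:
    """Total raw IOPS for `count` disks of `disk_type` (ignores RAID penalty)."""
    return IOPS_ESTIMATES[disk_type] * count

san = aggregate_iops("sata_7200", 10)  # 10-spindle iSCSI SAN
local = aggregate_iops("ssd_sata", 8)  # 8-SSD local server
print(san, local)  # the SSD box wins on raw IOPS by a wide margin
```

The takeaway is the same as above: measure your workload's IOPS first, because either configuration could be the right answer depending on what you actually need.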

Hope this helps.


it_user113166 (Engineer at a tech services company):

It is not a matter of local storage versus iSCSI disk; it is about the amount of data you need to provision for. Keep the iSCSI traffic on a dedicated network. Servers do not have the storage capacity that iSCSI devices have. The speed of the network that the storage is on can make the difference between local and iSCSI pretty much a non-factor. Dedicated iSCSI controllers also reduce the performance difference.

Henry (Tata Consultancy Services):

Yes, Sven.

Also if you've got deeper pockets for ESX :>)



it_user71133 (Senior Manager of Network at a tech company with 51-200 employees):

I think the point I am going to make here, KapilMalik, is that some people are not interested in answering your question, but instead are trying to tell you that your question is wrong, which I am certainly not. What I am trying to accomplish here is to get you to understand that in order to answer your question (performance of local vs. SAN-based storage) you may have to answer more questions (what loads will my VMs generate in IOPS, and would those be OK on local or SAN storage?) so that you can determine what your requirements are before investing money in anything.
With the advent of vSAN being coded directly into the kernel of ESXi, we are without a doubt going to see more and more of the ready-built virtualisation solutions that have everything included in the chassis, like Cisco's UCS, which is essentially a lot of compute and storage in one chassis, so people can scale their virtualisation infrastructures up and out really simply. Think really modular, single-vendor solutions. Need more storage? Buy another chassis with its default 10 TB of SAS capacity and 200 GB to 1 TB of SSD for caching. Need it for backup and archiving? Go for the 20 TB SATA option. Just need more memory? Then buy the chassis with no storage and add more compute to your hypervisor clusters.
The point I am making is that you need to start thinking about abstracting your storage from its actual physicality and tying it in with your VM IOPS requirements, as well as security, redundancy, and resilience. It's not, as Russell says, purely about capacity; that's just part of the equation. If he's right and all you need is capacity to cover 1 TB, then why not buy a single 1 TB disk?

it_user133545 (User at a tech services company):

There will be no performance issues as long as you run only the number of VMs that is supported by the hardware.

Here is a simple calculation to find the number of VMs you can run on your hardware. A hard disk delivers roughly:

7,200 RPM - 100 IOPS
10,000 RPM - 150 IOPS
15,000 RPM - 200 IOPS

For example: you have 9 hard disks configured in RAID. Count the number of hard disks that contribute to the storage. In this case all 9 hard disks contribute, and each disk spins at 10,000 RPM, so each delivers 150 IOPS.

Number of hard disks * IOPS per disk = total IOPS
9 * 150 = 1,350

If you would like to run 40 virtual machines:
1,350 / 40 = 33.75, so each VM would get around 33 IOPS, which would give you no performance issues.

If instead you chose to run 50 VMs:
1,350 / 50 = 27, so each VM would get around 27 IOPS, which could cause performance issues.
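The arithmetic above can be wrapped in a small sketch. The per-RPM IOPS figures are the commenter's rules of thumb, and the acceptability threshold of roughly 30 IOPS per VM is an assumption drawn from the example, not a VMware guideline:

```python
# Rule-of-thumb raw IOPS per spindle speed, as given in the comment above.
IOPS_BY_RPM = {7200: 100, 10000: 150, 15000: 200}

def iops_per_vm(disk_count: int, rpm: int, vm_count: int) -> float:
    """Evenly divide total raw IOPS across VMs (ignores RAID write penalty)."""
    total_iops = disk_count * IOPS_BY_RPM[rpm]
    return total_iops / vm_count

print(iops_per_vm(9, 10000, 40))  # 33.75 IOPS per VM -> should be fine
print(iops_per_vm(9, 10000, 50))  # 27.0 IOPS per VM -> likely contention
```

Note this is a sizing heuristic only: real workloads are bursty and uneven, so measuring actual VM IOPS (as suggested earlier in the thread) beats dividing a theoretical total evenly.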