The valuable features are:
- It consolidates all our virtual platforms onto a small number of servers.
- It removes the dependency on expensive SAN storage units, which drastically reduces our electricity and cooling expenses.
- It gives us extra peace of mind by providing different levels of high availability.
Improvements to My Organization:
We can deploy new servers faster than ever. Our capacity to grow is greater than when we depended on SAN storage. We can now deploy a pool of QA virtual machines for testing purposes in minutes rather than hours.
Room for Improvement:
I would like to see faster re-sync and recovery times after a host failure. Restoring a normal situation after a failure is difficult because there is a large amount of data to re-sync. We have a 1Gb vSAN network, and the restore process can last several hours or even days.
I would also like to see a granular sync system, rather than the current “all data” transfer.
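To put the re-sync concern in perspective, here is a rough, back-of-envelope estimate of transfer time on a 1Gb versus a 10Gb vSAN network. The data volume and usable-bandwidth factor below are illustrative assumptions, not measurements from my environment:

```python
# Back-of-envelope estimate of vSAN re-sync time after a host failure.
# The 10 TB data volume and the 0.7 efficiency factor are hypothetical
# assumptions for illustration only.

def resync_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours needed to re-sync `data_tb` terabytes over a `link_gbps` Gb/s
    link, assuming only `efficiency` of the raw bandwidth is usable."""
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

print(f"10 TB over  1 Gb/s: {resync_hours(10, 1):.1f} h")
print(f"10 TB over 10 Gb/s: {resync_hours(10, 10):.1f} h")
```

Even with optimistic assumptions, a 1Gb link turns a full re-sync into a day-long operation, which matches our experience.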
Use of Solution:
I have been using this solution since 2014.
During normal activity, the vSAN’s behavior is excellent. Performance and stability are awesome.
We have only encountered issues related to the host update process, because updates increase the data movement between cluster hosts, which can end up saturating the network.
Scalability is built into the core of vSAN. Although it has a broad HCL, you have to choose new components carefully when adding nodes to ensure that you won't introduce any bottlenecks. We didn't encounter any issues like that with our vSAN installation.
We haven't needed help from VMware technical support yet. In the beginning, there was not much troubleshooting information available on the internet.
This product is now more mature and there is a lot of information available, such as VMware or independent blogs and forums, that help with vSAN problems.
We used the traditional solution of a pool of hypervisor hosts attached to common, shared iSCSI storage. It did the job until we ran into storage-related scalability problems.
Buying new iSCSI storage would have been more expensive than rethinking our existing setup. For this reason, we changed to vSAN technology.
The installation was as complex as any iSCSI scenario can be. However, it was radically simpler on the networking side.
In our case, we migrated from standard virtual switches to distributed switches in order to meet vSAN's requirements. We also had to take the disk/RAID controller configuration into consideration. To strike an acceptable balance between performance and cost, we created a RAID 0 for each disk in each server of the cluster and made them available to vSAN.
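For illustration, the per-host steps of tagging a VMkernel adapter for vSAN traffic and claiming disks into a disk group can be sketched with esxcli. The adapter name and device IDs below are placeholders, not values from my cluster:

```shell
# Run on each ESXi host in the cluster.
# vmk1 and the naa.* device IDs are hypothetical placeholders.

# Tag a VMkernel adapter for vSAN traffic:
esxcli vsan network ipv4 add -i vmk1

# Claim one cache device (-s, SSD) and one capacity device (-d)
# into a vSAN disk group:
esxcli vsan storage add -s naa.CACHE_DEVICE_ID -d naa.CAPACITY_DEVICE_ID
```

In practice we did the equivalent through vCenter, but the command-line form shows how little networking setup vSAN actually needs compared with an iSCSI fabric.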
Cost and Licensing Advice:
We adjusted the pricing and licensing costs based primarily on the number of physical processors per server. Because vSAN is licensed per physical processor, we chose single-processor nodes for the cluster, after first calculating the performance requirements of our entire virtual platform to confirm that a one-processor configuration would be sufficient.
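As a sketch of how per-processor licensing drove the single-socket decision, the cost difference reduces to a trivial calculation. The per-CPU price here is a made-up placeholder, not a VMware quote:

```python
# Illustrative per-processor licensing comparison.
# PRICE_PER_CPU is a hypothetical placeholder, not a real VMware price.
PRICE_PER_CPU = 2500

def license_cost(nodes: int, cpus_per_node: int) -> int:
    """vSAN is licensed per physical processor, so total cost scales
    with nodes * CPUs per node."""
    return nodes * cpus_per_node * PRICE_PER_CPU

print(license_cost(4, 1))  # four single-socket nodes
print(license_cost(4, 2))  # dual-socket nodes double the license bill
```

Choosing single-socket nodes halves the licensing cost for the same node count, provided one processor per node can carry the workload.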
Other Solutions Considered:
We didn't evaluate other options apart from traditional iSCSI storage solutions. We wanted to keep working with the same virtualization-based system, and we wanted a solution with the smallest possible footprint. vSAN met these requirements.
This is a very good solution if you have an adequate budget for the related requirements and recommendations, e.g., a 10Gb network. It covers a wide catalog of use cases and meets the highest performance requirements at all levels. Without any doubt, I recommend this solution.
Disclosure: I am a real user, and this review is based on my own experience and opinions.