We use this solution exclusively for our VDI.
We are running vSAN on six Cisco C240 M4 servers.
The newer versions of this solution are much more stable and easier to manage.
We had a near meltdown with version 5.5; upgrading firmware and vSphere versions is a hassle.
The most valuable feature of this solution is that it is cheap storage.
This solution would benefit from better collaboration with Cisco for driver updates.
The support from VMware is phenomenal.
We deliver the only end-to-end enterprise technology platform exclusively designed for quick service and food service communities. Our primary use case for this solution is for customer use in our internal labs. As partners with the vendor VMware vSAN, we leverage their tech to build customer-specific simulated environments, to provide unique, controlled individual environments to gain insightful perspectives and capture helpful data.
As a function of our core business, it's a sought-after tool that helps us provide analytical support across a wide spectrum of client needs. It's allowed us to test things out in our connected restaurant - "TheWorks" - a fully-functional restaurant experience center that allows our clients to discover the value of our connected solutions firsthand. We deploy vSAN in this customer-like environment within a hyperconverged infrastructure (HCI) to give our clients a better understanding and help optimize data and the end-users' experience.
The feature I've been most pleased with is the import management functionality.
I would like to see more template-based VMware systems available, combined with the ability to check and measure multiple, converging data segments. Another issue I've seen is that the tool seems slow when first starting up.
Stability is and was always good with VMware. I've never had any issues.
I've only contacted technical support once. My experience with them from what I can remember was good. I was on the call for something like five minutes.
The initial setup was straightforward, not complex at all.
I would suggest that anyone looking to deploy this solution do their due diligence and try out other competitive products first, like Nutanix. I've used Nutanix in the past. I found it to be a more agile tool compared to VMware. VMware has only just recently started offering this HCI solution.
If I were to rate vSAN from one to ten, ten being best, I would give it an eight. Not a ten, primarily because I haven't tested some aspects of the arrays at this point.
It is used as a remote/branch office solution for a new site that we acquired.
It consolidated our workload and brought the cost down over the long term.
Ease of use and implementation.
More modularity in terms of how nodes are provisioned (currently, all nodes have to be the same size when deployed).
It helped to reduce storage costs.
Straightforward and easy to use.
Data services like remote replication.
We primarily use this solution for consolidation on the cloud.
The most valuable feature is the ability to continue our business needs and have higher visibility. It has definitely increased our business productivity levels.
I would love for this product to be cheaper and easier to configure.
It is a very stable and strong product that is easily deliverable to our users.
I know it can scale up or scale out but I have not had the need to do so.
We also evaluated Dell EMC before choosing this solution.
We have vSAN with built-in storage capabilities. We have many hosts, and we use those hosts through our providers with vSAN storage. This improves everything because it is all internal between the servers. We use NSX, which creates an internal network between hosts, so there is no need for an external switch. We create an internal connection between the hosts and the VMware product, so traffic is all internal, and you can create all the firewalls and switches virtually. But it is sometimes complicated when you try to deploy new systems or when you have to scale a system very quickly.
I think the vSAN product uses vSphere to monitor the system. It is sometimes difficult to manage the PCs within the system. VMware is currently working towards moving things to the cloud network. This is a great new addition to the VMware product.
To me, it is very stable. I never have problems. I have used VMware for 15 years and I never had any problems with stability. Like any normal system, you may sometimes have problems with one little platform, or with a host that is not working. But, there are no major issues.
We have 130,000 people connected to the platform and to the servers. Eventually, we want to use the cloud, which will help with the volume.
You can speak with VMware and they will provide the service that you need.
I can set up a VMware platform in a week, easily. It took me about a week to deploy our platform: we set up all the servers, the network, and everything else. Then it took about two or three days to patch everything and cable everything in.
The older versions were a little more complicated. Nowadays, there is more documentation, along with videos and tutorials, so it is less complicated. There are still some issues until you have looked at everything. But because there is more documentation and information now, you can speak with VMware and they can provide you service.
The only problem I see with VMware is the price tag. This may start causing problems because there are other solutions out there, like AWS, that are open source and free. So, there is no license fee. VMware is very good, but expensive, in comparison.
I compared VMware to Oracle. They're very good, but Oracle is expensive, so people buy it and then start using open source. Oasis is another option because it's cheaper and it's a similar process. So that is the problem: I think VMware is going to have to compete with them in the future, and it is only going to get worse.
To me, VMware is a leader of virtualization. I think everyone just follows VMware.
The reason we use VMware is because of all the areas VMware can cover; they fill a need for our platforms. There are other platforms now that provide similar solutions. In the old days, it was a simple Microsoft platform with no management costs. Now they use VMN to create a cross-test and link all of the servers they want, so they can provide restoration of servers. Furthermore, they are now integrating the movement toward cloud solutions. The only issue concerning the future of vSAN is the price. If someone builds a platform that is free, where you only have to pay a license fee for a server, that may cause a problem for VMware.
We are using it for management of all the data that we collect from our customer bases and from our 500-plus locations. There is also the data that we use to manage employee systems, so it's both ends of the business. It's the actual retail side of the business, as well as the internal operations.
vSAN has improved the organization based on the overall speed. It's a lot faster than what we've used in the past. The old-school storage systems were kind of slow and cumbersome. This is much faster, and much more reliable.
The most valuable feature that vSAN offers is reliability. In my mind, as long as the storage is up and running, we can always access what we need when we need it; that's what's important. Reliability is super important, particularly for internal operations such as employee data and payroll management, as well as for the customer side of the equation, with customer information and customer databases.
Areas of improvement could be the UIs. I've seen them. I've worked with them a little bit. The UIs are kind of cumbersome.
There could be an easier way than having the UUIDs associated with the LUNs. That could be simplified, with better naming conventions, to make it a little easier to track them down and to improve overall ease of utilization.
The stability of vSAN has been pretty much flawless for us.
Scalability: pretty simple. You just add more and away you go.
The data sets are constantly growing, so we have internal needs, new VMs are getting spun up all the time. They're gobbling up all kinds of storage space. We try not to over-commit too much, but everybody does, right? But it's constantly growing and we're constantly adding to it.
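The "try not to over-commit too much" point above is the kind of thing worth sanity-checking with quick arithmetic: compare the storage promised to VMs against what the datastore can actually hold. A minimal sketch, where the datastore size, VMDK sizes, threshold, and the `overcommit_ratio` helper are all invented for illustration (this is not a VMware API):

```python
# Back-of-the-envelope thin-provisioning check: how much storage have we
# promised to VMs, relative to the datastore's real capacity?

def overcommit_ratio(provisioned_gb, capacity_gb):
    """Ratio of total provisioned VM storage to actual datastore capacity."""
    return sum(provisioned_gb) / capacity_gb

vm_disks_gb = [500, 500, 250, 250, 100]  # thin-provisioned VMDK sizes
datastore_gb = 1000

ratio = overcommit_ratio(vm_disks_gb, datastore_gb)
print(f"overcommit ratio: {ratio:.2f}x")  # 1.60x with these numbers
if ratio > 1.5:  # an arbitrary example threshold
    print("warning: heavily over-committed; watch actual usage closely")
```

A ratio above 1.0 means the environment is betting that VMs won't all fill their disks at once, which is exactly the "everybody does it, but it keeps growing" tension the reviewer describes.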
I have personally not contacted tech support at VMware for vSAN.
The company has been around for quite a while, so we go back to some of the earliest days of spinning disks and a local, small data center at the corporate office, to the point now where we've grown to have our own data center and racks upon racks upon racks of storage.
I was not involved in the setup on that side, either. That's a different team that does that.
The primary ROI for this is its stability. That's the key. I can't really speak to the cost side of the equation, but I can speak to the stability side, and I know that it's critically important to us to have our data available to us when we need it. Since we've gone over to the vSAN solution, it's been very stable.
When we're choosing a vendor, there are two factors involved, and the lowest price isn't always the most important. We need a vendor who provides really good support and products that really meet our needs well.
I'm going to rate it as a ten out of ten, because it just works. It's always solid.
We are using vSAN as a product in vSphere. Recently, we signed up for the 6.7 version of vSAN. We use it on all-flash NVMe; all the disks that we use are NVMe disks.
We provide and manufacture our own local storage. With our own storage, we can pair it with the host. So it's beneficial for us to have local storage attached to a host, and vSAN is awesome for that.
With vSAN coming in, we have stability within the cluster of resources that have been grouped together on local storage. This is a wonderful feature of vSAN.
We are finding that vSAN is going down the right path, but vSAN has specific profiles that it supports for vSAN disks, whereas our company has its own storage, so we have different profiles of configuration. Some of those profiles and motherboards, vSAN doesn't support. We face challenges and work with VMware to get other providers and their drivers onto the VMware compatibility list. Since our hardware is customizable, we are looking for drivers from other vendors, as well as from VMware, for compatibility. There is room for improvement in the latest version's compatibility with the VMware product, especially for vSAN with other vendors, like Intel and AMD, on their motherboards and driver configurations.
It is stable for me. We are getting a good amount of IOPS (the expected amount). The configuration of vSAN is pretty simple; it's just at the cluster level.
The stability is very much required. vSAN provides default HA configurations, where if any host goes down, the VMs move around within the remaining hosts. Even though the disks are local, the VMs move around with the vSAN disks, and vSAN provides high availability on its own.
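The HA behavior described above - VMs from a failed host being restarted on the survivors - can be pictured with a toy model. This is only an illustrative sketch; host names, capacities, and the `failover` helper are invented for the example and do not reflect how vSphere HA actually schedules restarts:

```python
# Toy sketch of HA failover: when a host fails, restart its VMs on the
# least-loaded surviving hosts, respecting a per-host VM capacity.
# All names and numbers here are made up for illustration.

def failover(placement, failed_host, capacity):
    """Move VMs off failed_host onto surviving hosts, fewest-loaded first."""
    survivors = {h: vms[:] for h, vms in placement.items() if h != failed_host}
    for vm in placement[failed_host]:
        target = min(survivors, key=lambda h: len(survivors[h]))
        if len(survivors[target]) >= capacity[target]:
            raise RuntimeError(f"no capacity to restart {vm}")
        survivors[target].append(vm)
    return survivors

placement = {"esx1": ["vm1", "vm2"], "esx2": ["vm3"], "esx3": []}
capacity = {"esx1": 4, "esx2": 4, "esx3": 4}
print(failover(placement, "esx1", capacity))
```

The design point the reviewer makes is that because the data is mirrored across vSAN disks, the restarted VMs still see their storage regardless of which surviving host picks them up.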
vSAN is scalable for us. If any additional capacity needs to be included, we just add to the host and configure the vSAN cluster.
Currently, we work with tech support as a VMware partner. We are receiving a good amount of troubleshooting support, over email as well as through tickets. Overall, it's going well.
We had out-of-the-box solutions. When vSAN came in, all the local storage became attached. The solution has improved a lot considering the local storage for vSAN configuration.
We are involved in the beta phase of the vSphere product, as well vSAN and newer product versions of VMware.
One of the best features is that configuring vSAN at the cluster level is pretty simple. People have a lot of issues configuring different kinds of storage, but vSAN brings in flexibility: as a vSphere admin, people can go and just configure the storage. VI admins don't need storage knowledge when they are working with vSAN. It is simple for us to use.
With vSAN, we didn't find the market that competitive. VMware is doing well at pooling local storage into a cluster configuration, and vSAN is doing great with it.
We give it a nine out of ten. They are going down the right path. Since they started, we have seen a lot of improvements and a lot of focus on the product, even at VMworld, where there were announcements of feature improvements for vSAN. We continue to see VMware keeping vSAN up-to-date, not putting the product aside.
We use vSAN primarily as an R&D tool to test our products and see how they work on it, and it is absolutely phenomenal. It is one of the best hyperconverged solutions I've been able to get my hands on.
vSAN has improved our organization by allowing us to perform faster workflows, get better overall performance, and create some really new solutions.
The most valuable features for us are the ability to scale out the nodes independently, and the flexibility of the nodes. We can put almost any type of server in there with our connectivity and everything works great.
The biggest room for improvement I see in vSAN is the lack of SAN connectivity. I've kind of joked around that there is no "SAN" in vSAN. And it's something that we've worked to try and introduce some options for, and we're going to continue to work towards that. But it looks like the door is starting to open and there may be some options, with some of the announcements that came out of VMworld 2018.
vSAN has been very stable for us. Once we get it up and settled in and the workflows going, usually we don't have to intervene at all. Things just keep working. Stability is important for us with vSAN because it becomes the rock that we depend on. When we need an application to stay up and maintain that ability to bounce between hosts, to work in a true hyperconverged manner, it's the only choice for us.
Scalability in vSAN has been really good. It's very easy to add nodes in, to automatically generate the drives and the disk groups. It has been a piece of cake, surprisingly so.
We have not needed to use vSAN tech support, believe it or not. We have not had any kind of an instance where we couldn't resolve it on our own, or it didn't fix itself.
We had no hyperconverged solution beforehand. We knew that we needed to do some testing with them. It started off as a compatibility test and just kept ballooning from there until we implemented it.
When choosing a vendor, our most important criteria are reputation and stability. You can't go into something without understanding just how good it is, and if you roll the dice, sometimes you get burned. We're a risk-averse company.
I was involved in the initial vSAN setup. The experience was really wonderful, it was really easy, it was very intuitive. There were some learning curves for us because we had never done it before but, overall, the wizard and the experience with the online tutorials that we were able to find solved every concern or question that we had, very quickly.
ROI for us comes in uptime, keeping applications up and running. That's important to us because that's directly attributable to our revenue stream.
Do your research, dig, find out what your particular needs are, what would the overall cost be to - sometimes it's a forklift, sometimes it's a migration. But look at all the factors, look at the requirements of vSAN, look at the requirements of other hyperconverged solutions, and then make the decision.
I would rate vSAN as a solid nine. To get it to a ten it would need: the ability to support a SAN and a little bit of a larger scale. Those would be the two things that I would request.
Our primary use case for vSAN has been our branch locations and multiple different office locations. We are running vSAN as an alternative to external storage arrays, and it's working really well to provide us with data storage at these branch sites.
The most valuable features of vSAN are its simplicity to deploy and that we can use commodity disks in our servers without complexity or need for external storage arrays or storage specialists on our teams. It's part of our vSphere admin's duties as opposed to storage experts.
The features of vSAN allow us to reduce our operational complexity to a large degree. It's a single pane of glass for the administrator, and we're able to somewhat reduce costs, other than the fact that vSAN is somewhat expensive to license.
I see room for improvement for vSAN just around general hardware compatibility and expanding that sort of matrix. It's pretty wide already, but everything else within vSAN seems to work really well. It is very well-integrated.
I don't see a lot to complain about at this point.
Stability with vSAN has been really good. We've had very few issues. When we have had maintenance issues, the vSAN has come back and healed them automatically for us. I don't think that we've had to actually engage support a single time in the six months that we've been running vSAN in our corporate office.
I can't really speak to scalability. We have a fairly limited deployment at this point with three nodes, so it's a bare minimum sort of configuration.
We have not had to engage technical support for vSAN. At this point, we've been able to solve all the problems or basically work through the GUI intuitively to be able to resolve anything that has happened.
The decision to switch away from standard array to vSAN was a fairly simple one for us. We had been decreasing the amount of operations that we do inside of our branch sites. For the sites which remain, vSAN is a good fit versus the legacy Dell EMC VNX arrays that we had been deploying.
We are finding that vSAN is a lot more scalable and adaptable, because we can go in with hybrid arrays for our lower-end storage needs or with all-flash versions of vSAN for places where we need more performance, and it's coming in at a lower cost point than an actual traditional array.
The initial setup for vSAN was extremely simple. There are some concepts that you need to understand before you go in, install, and click the buttons, but once you have your drives configured and inside of the individual nodes, the configuration takes just a few minutes. Everything gets done and orchestrated for you directly from the vSphere or vCenter consoles.
If I had to rate vSAN, I would give it a nine out of ten.
When we're choosing a vendor, we're looking at the vendor's ability to stay in business, among other factors.
These have a lot to do with our decision to work with a particular vendor. We typically seek out the best-of-breed solutions and try to adhere to those. At the same time, we try to work with the same vendors over and over, because we have existing relationships to leverage and existing expertise around the solutions that are adjacent to what we may be evaluating.
Our primary use case for vSAN is server virtualization. We've used it to virtualize close to 500 servers which would normally have been on physical hardware. We have virtualized and consolidated them down to run on nine nodes of vSAN. That workload primarily consists of web servers running Linux, or Windows Servers supporting the Windows Active Directory that we have for the environment onsite.
It's improved the organization overall, primarily because the storage is local on the boxes. Before vSAN, we were on another iSCSI product, a clustered product that went across the network. We had multiple instances where we would have a network hiccup (caused by us, or caused by the device) that took a whole bunch of VMs down, with a lot of repercussions, and it took a long time to recover. By eliminating the dependency on that back-end storage, we now depend on everything that's in the VMkernel with vSAN. So, we eliminate the middleman.
We like that it is a hyperconverged solution. Everything is in a box: you get the compute, memory, and storage. So, we can scale out by adding nodes as we go and eliminate the back-end storage, whether that's a NAS or an iSCSI device.
You get the benefit of local storage, but you have the protection of shared storage.
I see room for improvement with vSAN, particularly in the reporting realm. Now, with vSAN 6.7, they're starting to include vRealize Operations components in the vSphere Client, even if you're not a vRealize Operations customer. That's really good; it exposes some really low-level reporting, and I would like to see more of that. However, you have to be a vRealize Operations customer to obtain the rest, and I would like to see more of it included in the vSAN licensing.
The vSAN licensing is not inexpensive; it costs more than the hypervisor. I would like to see more basic reporting, or even expert reporting. I think that with our licensing we've paid our dues, and we should get that information.
Stability is working very well. vSAN is very dependent upon your network. If your network is stable, vSAN will most likely be stable.
Our network is very stable. Therefore, we have not had issues.
We started with a three-node cluster. We are now at a nine-node cluster. We can just add nodes piecemeal as needed to add capacity. It's been very transparent. Users have never noticed when we've had to do that. So, scalability has worked real well for us.
We've been with vSAN since the early days of ESX 5.5, when it first went general availability. In those early days, we used support quite a bit. They were very good. The vSAN team that VMware has are top-notch. I think they pick the best of their support people and make them vSAN representatives. In the early days, I used them a lot. Not so much lately, because the product has gotten so much better.
I was involved with the initial deployment of vSAN at our site. The most complex thing is that you have to live and die by the vSAN HCL. You can't put a product or component into a vSAN node that is not on the hardware compatibility list, particularly the SSDs and their firmware, which is specified on the HCL. You have to match that explicitly to get good results.
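The "match the HCL explicitly" discipline described above amounts to checking each component's model and firmware pair against an approved list before building a node. A minimal sketch of that idea; the model names, firmware strings, and the `check_node` helper are invented for illustration and are not VMware's actual HCL data or tooling:

```python
# Hypothetical HCL-style check: a component passes only if its exact
# (model, firmware) pair appears on the approved list. Right model with
# the wrong firmware still fails, which is the point the reviewer makes.

APPROVED = {
    ("SSD-MODEL-A", "FW1.2"),
    ("SSD-MODEL-A", "FW1.3"),
    ("NIC-MODEL-X", "FW7.0"),
}

def check_node(components):
    """Return the components that are NOT on the approved list."""
    return [c for c in components if c not in APPROVED]

bad = check_node([("SSD-MODEL-A", "FW1.1"), ("NIC-MODEL-X", "FW7.0")])
print(bad)  # the SSD fails: approved model, unapproved firmware
```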
I see ROI on vSAN because we have gotten out of the business of depending on the back-end NAS device or the back-end iSCSI device. We get the return on investment by decreased administrators' time, decrease exposure to network issues and stuff that would take a lot of VMs down. That's where we see our ROI.
We looked at Nutanix before we went with vSAN. For budgeting reasons, we weren't able to pursue Nutanix after a pilot.
The product is at least an eight or eight and a half out of ten, because of the feature growth I've seen them put into the product since we've been with them, since 5.5. They are innovating with each release, adding more features, and all of that adds up to a better return on our investment.
As we were consolidating so many servers, we had a really high consolidation ratio. We wanted to have something that was close to being local disk. However, we also needed to have redundancy so we could take a node down for maintenance or if a node would crash. All the same standard reasons of why you would want high availability.
What I look to see in a vendor is good customer support. I want to talk technical with someone. I don't want a lot of marketing PowerPoint stuff. I want to talk to people that know the product very well. Because if I start using the product, I will need that support on the back-end. I don't want to be flailing by myself in the wind. I want to have good expertise that I can call on to help.
Primary use is just for VMDK storage. We're running an all-flash array with an NVMe caching tier. The performance is really good; we're using SATA drives. We're about to do a complete rebuild with 12-gig SATA drives as the capacity tier, and bigger, newer, faster NVMe for the caching tier.
vSAN has improved our organization by giving us yet another high-speed data store. Previously, we were using VNX that had some Nearline-SAS drives with some SSD caching on it. But the all-flash vSAN is obviously much, much faster. We also use a Pure Storage array that we just got in a few months ago.
The most valuable feature would be: You own the hardware already. Why not just throw some drives into it and have a software-defined network storage system?
I know they're working on this: better support for an all-NVMe array, and better metrics.
vSAN itself is a great storage platform, but one of the issues with it is that you have to be fully locked into the VMware package to use it. We're going to be deploying 72 Kubernetes nodes, and we're not going to buy VMware licenses for 72 of them, just so they can access vSAN. That's what we're using the Pure for. Opening it up so you could have vSAN as a data store, use it as a data lake, hit it with an NFS, S3 from outside the VMware ecosystem, would be great.
Stability is okay. We do see weird things crop up every now and again. It will say that a drive gets kicked off even though it's fine, and we have to re-add it. So a few gremlins here and there, but for the most part, it's pretty good.
So far, for scalability, we've just been running it on five nodes at our primary data center, and we're building out a second data center. It's going to be running on five nodes there. We haven't really scaled it up since we built it.
I've had to use tech support once or twice. It went okay, as with any tech support.
When we started with VMware, it was a three-node package with the VSA, virtual storage appliance, which was sort of the precursor to vSAN. And it just came as a package, so we said, "Okay, great. We have our storage and our compute tied together."
I'd say vSAN, on a scale of one to ten, would be a seven or an eight now; if I have to choose, it's a seven. But with what I've heard while I've been at VMworld, I'd say they'll probably go up to an eight.
Our primary use case for vSAN is for our corporate cluster, and we have many different use cases using vSAN. It was a perfect solution for us. We were there for the beginning of vSAN. We created our own vSAN environment with their early installers and now we have a professional one. It's a great solution.
vSAN improved our organization by taking a whole bunch of servers that we had that were depreciated and letting us remove all of those workloads and put them on one, centralized solution, and have great storage in the back end. It's really helped us consolidate a lot of workloads that were in different silos, and now we're back to managing everything from one place.
The valuable features of vSAN are that
The product can be improved in a couple of ways. One of those would be that they have a lot of hidden features, available through the CLI, that would be great to have in the GUI, or to just be more open about. There's something called RVC, a tool on the back end. It's a really great tool, but I had to find it through Reddit. More information on things like that would be great.
Also, in the user interface, giving us more features and more reporting that we can do from vSphere itself would be helpful.
Now it's great. The stability of vSAN is getting better every day. We had some hiccups in the past, but we worked through it with some great techs. They were there with us the whole way, and we got through most of our hiccups.
There are definitely some things you need to know about vSAN going into it, like don't over-commit your storage, that we didn't know. We hit every problem you can probably hit with vSAN, but we're good. We're still up and running.
We started with three nodes, added a fourth. It was easy to do, gave us more storage, very scalable. You can just keep on growing and growing.
I was involved with the initial setup. It was fairly easy to get up and running, at first. We had some networking hiccups here and there but, overall, it took about a day to get us ready to go.
The ROI data on vSAN: I would definitely say it's my staff cutting their time by something like 90 percent. They're only dealing with one stack of servers right now. All of them are able to perform the storage tasks needed. Everyone can manage it. We don't have to wait for that one guy to come in and do what he has to do. My entire staff is trained on vSAN. We usually spend no time in it. Before, we were dealing with a lot of different solutions that took up a lot of our time, so time saved is a good reason for our ROI.
If I had a colleague in the field, what I would tell him is that vSAN is great. I would do four nodes instead of three. Make sure that you're safe. Four or five will get you right where you need to be. You won't have any problems. That would be a tip I would give: Go for four nodes. vSAN is definitely worth the money.
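The "go for four nodes instead of three" tip above can be reasoned about with some rough arithmetic: with mirrored data (two full copies per object), usable capacity is roughly half the raw pool, and a fourth node leaves somewhere to rebuild copies after a node failure. A back-of-the-envelope sketch; the node sizes and both helper functions are invented round numbers for illustration, not a sizing tool:

```python
# Rough sketch: mirrored storage keeps two copies of every object, so
# usable capacity is about half of raw, and surviving a node loss
# requires the remaining nodes to hold all the mirrored data.

def usable_tb(node_tb, copies=2):
    """Approximate usable capacity of a mirrored pool."""
    return sum(node_tb) / copies

def survives_rebuild(node_tb, used_tb, copies=2):
    """After losing the largest node, can the rest still hold the data?"""
    remaining = sorted(node_tb)[:-1]  # drop the biggest node
    return usable_tb(remaining, copies) >= used_tb

three = [10, 10, 10]
four = [10, 10, 10, 10]
print(usable_tb(three))               # ~15 TB usable from 30 TB raw
print(survives_rebuild(three, 12.0))  # False: 2 nodes can't hold 12 TB mirrored
print(survives_rebuild(four, 12.0))   # True: 3 remaining nodes have headroom
```

This is a simplification (it ignores witness components, slack space, and policy details), but it captures why an extra node beyond the minimum buys rebuild headroom.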
I would say it's a nine out of ten. It's not perfect, but it's almost there, and it's great.
We're primarily using it in a VDI environment, a four-node VDI environment. Performance is very good. We're very happy with it. Networking setup was a little bit of a challenge, but we got around that.
Reduced complexity: we don't have to worry about a physical SAN anymore, which makes things easier. The learning curve as well: when people learn vSAN, they find it very easy to manage compared to a physical SAN.
Flexibility, growth, and expansion are probably the more important features for us.
As our environment grows, the more users come on, the more VDI workstations that we need, we can easily expand either horizontally or vertically with the environment. We're very happy with that.
A bit more information on the upgrade path, upgrade availability, how to upgrade, that would be very useful.
We find the stability very good. It really reduces our overall operations.
We find the scalability very good. We've been able to upgrade very easily as users come on, as we need to create more VDI workstations. Adding the extra drives gives us the capacity we need.
We haven't needed to use technical support so far; nothing at all.
Up until about a year and a half ago, we were using physical SANs. Space is a problem in the environments that we deploy, so we knew we had to get rid of the physical SAN and move toward a more virtual environment. Given the number of nodes we deploy, we need the room. By integrating vSAN, we're able to meet the space requirements we have.
I was involved in the initial setup. In fact, I was involved with the selection of vSAN compared to other products, as well as physical SANs, and I was involved in some of the design and configuration.
It was fairly straightforward, actually. After we got around the networking issues, we found that the vSAN setup was very good.
In terms of return on investment, we don't have any kind of requirement there.
We considered EMC as well. We considered HPE LeftHand, which we had used in the past, so we were familiar with the virtualized SAN. We like the vSAN a lot.
The advice I would give is to properly analyze your host infrastructure. Make sure that your network cards are sufficient for the environment you're trying to deploy in, whether it be all-flash. There are already some Ready Nodes available. Go with the Ready Nodes when it comes to vSAN. Don't try and buy your own parts - something we looked at originally that we scrapped. That would be my main advice. Go with Ready Nodes when it comes to virtual SAN.
In terms of improving the product, we're very familiar with the new features in 6.7, which we're going to be upgrading to. Data encryption, we would like to deploy, as well as compression and deduplication. Those features are already available in the new version. We just have to take the time to deploy them.
Out of ten, I'd give it an eight. We're very happy with the product. To bring it to a ten we'd rather not upgrade as often. Right now, we're at 6.2 and that wasn't long ago. They're already going to 6.8 now. We'd like to have a little bit of a normalization period before we get to the next product. I understand it's a focus for VMware. We're very happy they're focusing on it.
We use it for our DMZ and any test environments that we put into our industry.
It's performing pretty well. We have no issues with vSAN at all.
It has improved our organization by making it easier to scale.
We can scale it very easily for a test environment. We were able to segment our DMZ so it wasn't connected to anything, which we really liked.
One thing in vSAN that I would like to improve is using vSAN as a repository for files or other things. For example, with Horizon, maybe we can save profiles with UEM on there. That would be a good feature that I would like.
The stability has been great with vSAN. We have not yet seen downtime.
We scale it with our test environment. We are looking to do it with Horizon. We are able to scale it to see how many VMs that we can host and how long it will take us to add new hosts, if needed.
Technical support has been very good. They respond pretty fast, especially if we have a critical issue. Their responses have been great.
vSAN is one of the easiest implementations of any VMware product. It's almost like click it to enable it, then you're almost done. So, vSAN is very easy to set up.
We did consider other hyperconverged solutions. It usually came down to price. vSAN was the most cost-effective option. That's why we went with it. Also, we didn't have to get a connected array. We can put it in small places, remote sites, etc.
Nutanix, Cisco HyperFlex Edge, and VxRail were on our shortlist.
I would rate the solution an eight out of ten. To make it a ten, it needs to be able to scale the amount of data that we can hold so we can put bigger, more data-intensive apps on it.
My advice to a person looking at vSAN is get your hands dirty in the labs. Show how easy it is to set up, because it's not very complicated. It's an easy solution that you can implement at your company.
Most important criteria when selecting a vendor: We're a hospital system with multiple hospitals in the area. We look at local site resiliency, because we're looking to see if we can put it in each of our hospitals.
We recently adopted vSAN. We adopted VDI for our desktop solution about ten years ago and we have a single KPI for delivery which is clinical data accessed in five seconds.
Throughout the last decade, as new back-end technologies have come to market, we have always been investing in the hosting end of VDI. Five years ago, we went to an all-flash array, and two years ago, we went to the vSAN hyperconverged.
When we went to vSAN, at that point in time, we doubled the density of our desktops per host and, for the first time ever, I could demonstrate a significantly lower TCO for a VDI desktop versus a rich or fat client.
For my organization, the most valuable features of vSAN are as follows:
Room for improvement could be in the planning stage of going to hyperconverged. And this is a big ask: some modeling tools or guidance on how to work out the optimal TCO. For example, core count and the amount of RAM that you're running, versus the licensing cost you're up for with, say, Microsoft Datacenter, versus the number of hosts you're going to run and have to license for vSAN. It's quite a complex equation and it's really difficult to work out, in advance of implementing the solution, whether you've got it right. That creates some uncertainty around the total cost of ownership.
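The sizing equation described here can at least be roughed out in a few lines of code. The sketch below is a hypothetical model, with made-up placeholder prices (not real VMware or Microsoft list prices) and a simplified licensing rule, just to show how host count, core count, and per-CPU licensing interact:

```python
# Hypothetical TCO sketch for comparing vSAN cluster designs.
# All prices below are invented placeholders. Microsoft Datacenter
# licensing is modeled here, for simplicity, as 16-core packs per host.

def cluster_tco(hosts, cores_per_host, host_hw_cost,
                vsan_license_per_cpu, cpus_per_host,
                ms_datacenter_per_16_cores):
    """Rough per-cluster cost: hardware + vSAN licensing (per CPU)
    + OS Datacenter licensing (16-core packs, rounded up per host)."""
    hw = hosts * host_hw_cost
    vsan = hosts * cpus_per_host * vsan_license_per_cpu
    packs_per_host = -(-cores_per_host // 16)  # ceiling division
    ms = hosts * packs_per_host * ms_datacenter_per_16_cores
    return hw + vsan + ms

# Fewer, larger hosts vs. more, smaller hosts (placeholder prices):
big = cluster_tco(hosts=4, cores_per_host=32, host_hw_cost=30000,
                  vsan_license_per_cpu=2500, cpus_per_host=2,
                  ms_datacenter_per_16_cores=6000)
small = cluster_tco(hosts=8, cores_per_host=16, host_hw_cost=18000,
                    vsan_license_per_cpu=2500, cpus_per_host=2,
                    ms_datacenter_per_16_cores=6000)
```

Even with toy numbers, a model like this makes the trade-off visible: the denser design saves on per-host licensing, while the wider design buys more failure domains.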
Stability on the vSAN has been 100 percent. As part of the implementation process, the VMware customer success team for vSAN assisted us. We actually retrofitted hard disk into our own existing hosts and they went through a process of review and remediation to get all the "green ticks". We went through that process in advance of putting it into production for our data center, which we did this year. So, there have been absolutely no problems from that perspective.
When talking about scalability, the real value is that, for the first time, I can just build it out one host at a time. Over the years, I'm sure everyone has experienced hitting the wall on their array where it's too old or the technology has changed, and they're up for a large sum of money in one hit. The predictable, incremental cost of increasing the storage is very valuable.
On a scale of one to ten, I am giving it a nine. It's probably because I can't bring myself to give a ten for anything, in case it could be improved.
We use it for all our virtual desktop storage.
It's definitely cheaper to buy it piece by piece, instead of an entire shelf at a time.
Also, for setting up new clusters for VDI quickly, it's nice. You don't have to wait on an order for a storage vendor to ship you a system and help you configure it, you do it all yourself. It's kind of convenient that way. And the sizing guides are pretty straightforward.
I would like to see better performance graphs, maybe something that you can export outside to a different console, and maybe a little bit longer time period. The 18-hour maximum, or 24-hour maximum, is kind of short.
Also, the hardware compatibility limitations are a little frustrating sometimes, but as everybody's starting to adopt vSAN more, you get more options for hardware.
It's stable. We haven't had any major issues.
Scalability is easy. You just buy a node and go.
The vSAN technical support guys are great.
We chose it because of cost considerations. We already had an enterprise agreement with VMware, so vSAN licensing was included.
There was a small learning curve, but it's pretty straightforward once you understand the basics of how everything works.
We did evaluate other vendors initially but this was our second hyperconverged solution. We went with it because of the cost.
Do your homework. Make sure you know what kind of IOPS and latency requirements you need to meet. Picking hardware is not hard anymore. Everybody has an HCL. vSAN has a great list. Just pick what you want and go, it's not that hard.
I rate it at eight out of ten because nothing is perfect. I'm hard to please. I'm not saying there are growing pains, but vSAN was still new at the time. They didn't have dedupe and compression yet. The performance was pretty good. Most of it was hybrid in the beginning, but now with all-flash, it's speedy when it needs to be. It's a young product and nobody gets a ten out of the gate.
We use it for VDI.
It's supposed to provide low-cost storage arrays for VDI. We're on the fence with it. We're still looking at other solutions. We're not sold on it.
It has provided some value when it's working. Instead of hitting our production SAN array, it has its own array, storage-wise. It keeps workload off production.
It could be more robust. The latency is also an issue for us, and the reliability. I would like it to be faster and a little more flexible.
On a scale of one to ten I would give the stability a six.
Scalability should be pretty good, but we're not getting the performance we want out of it right now, so we're not going to scale it unless something changes.
The initial setup is pretty straightforward.
We have seen value in it but, since it's not performing the way we think it should, we're probably not going to move forward with it.
We went with it because of the cost. It's definitely cheaper than buying a storage array.
We use it for hosting all our business products on virtual machines.
The only thing I care about is that the solution is stable, reliable. They need to improve on those factors. I don't want to have to wake up at night to deal with problems.
It's pretty stable now. We had some challenges when we deployed it. There were software bugs.
The scalability is pretty good. I'm pretty satisfied with it.
Technical support, at times, has not been very good, but we are okay with it now. The problem was that they were not taking care of our issues promptly. They would average a couple of days to get back to us. But if there was a tough question, it would take them days or weeks.
The initial setup was straightforward.
We probably reduced our hardware footprint by 50 percent, which is a lot.
We looked at other vendors but we chose VMware because it has a good reputation and because the underlying technology is pretty solid.
The solution is an eight out of ten. To get to a ten it would need to be more stable and easier to upgrade.
It's going to be employed for our VDI infrastructure and, potentially, we will move it into our VSI infrastructure.
Considering that we have many storage arrays, this seems to keep us a little bit more contained and it's easier to manage versus some of the legacy storage where we don't have manageability, or we're losing manageability for it.
We have greater uptimes, we're not down nearly as much, and we can identify and deal with solutions to problems that we're encountering in those environments.
I would like to see more ease of use, more compatibility with different areas.
The stability is good.
We have a couple of problems but we're working through them. In the deployments we have in our Dev environment, it's more about how the hardware is interacting. We have them on Dell EMC vSAN Ready Nodes and we're just working through some of the driver issues and some random rebooting that we're having to deal with. But we have support contracts. Everything seems to be doing fine.
Our experience working with technical support has been good.
The most important criteria when selecting a vendor for us are the stability of the product, as much uptime as we can get, and service contracts so that we can get people to react more quickly to cases that we open and get things escalated properly.
I rate vSAN at nine out of ten. What would help make it a ten would be if we didn't have so much inconsistency in the information around how to deploy it. That would be a little bit better.
The primary use case is for VDI. In fact, we have created what's called a virtual research desktop with VDI, which is insulated because we're dealing with HIPAA data. I think it has performed pretty well.
I like the fact that I've got some degree of redundancy built in and, of course, the performance is great.
It would be much improved if we could somehow integrate a better backup with it. Right now, we're using Veeam and it's okay, but I would like more of a VDP vSAN solution. That would be excellent. The VDP, at least the last time we looked at, it was just not quite there.
I was a little bit worried about the stability initially, because I had an experience about three years ago and I wasn't very happy. But so far, it looks pretty good. I'm actually very surprised that its stability has been improved significantly. So far, so good.
I would have liked it to have been more scalable. It's scalable but not as much as, for example, the ScaleIO systems were or the Kaminario. We looked at Kaminario but that was a risky technology, so we didn't want to go there. I think vSAN is okay. It could use a bit more work on the scalability. I think that's key.
I have not had to use technical support myself but my team has. One of the things that I've heard from my team is that, even when they run into significant issues, they have to work their way through the whole support hierarchy, and they get frustrated. They get a level-one guy or girl, and that person knows less than my team members do, so that's frustrating. When they get to a level-two or level-three, it's okay.
We were using Compellent. I was okay with it, but it wasn't performing as well as I would've liked and, certainly, the expense and scaling the thing was just too expensive. The other issue was that the natural redundancy you can build with vSAN, you can't really build that with Compellent, unless you have at least two of them. With two you can replicate between them, but, again, they are expensive systems.
When selecting a vendor, what's important to me is a partnership. That sums it up. To me, a vendor has to go in with us for the long haul. We can help the vendor and the vendor can help us. We can help each other out. To me, a partnership is key.
So far, we've been able to replace two Compellents which have cost an arm and a leg. And they're just not as performant as the vSAN. So the ROI has been good.
Let's put it this way: I think the VDI/vSAN has replaced quite a few of our desktops or laptops. Over the course of time, give us another year or two, I think the ROI will be very significant.
While vSAN performs pretty well, when we were doing all the performance tests, ScaleIO did pretty well. In fact, it did better than vSAN, but we liked vSAN better because it was more integrated with our VMware environment, obviously. We chose it and we're happy with it.
The hybrid storage strategy is not the best thing you can do; for example, when you're mixing standard drives and flash drives, SSDs. Do all SSDs if you can afford it.
I give vSAN an eight out of ten. It can stand some improvement, but it's much better than it was three years ago when I looked at it.
Because our company is an architecting company, we require a lot of IOPS going from the server side to the clients who are using the models. They require faster transactions and that's the reason we thought of having a type of HCI solution. That's why we went with the vSAN solution.
Previously, we were going to use traditional systems, so when vSAN was launched it gave us a lot of value. The admins have been able to relax a bit, they don't have as many outages to deal with.
We want to see a better monitoring tool in vSAN. Monitoring is not that great as of now because it shows us false alarms in the Health status. We would like that to be improved.
It's pretty much stable for us now, apart from some issues which can be tackled. But 80 percent of the time it's stable. The issues are probably on our end, network issues. That's what we have to figure out.
We don't scale that much because we have a three-year refresh cycle. We tend to acquire capacity based on how much we predict we will scale up over the next three years.
We have used technical support quite a few times but not frequently. We have had a good experience with them. We usually get good engineers on our calls.
Initially, it was quite difficult to understand the solution because we tend to do a PoC. Later on we got used to it. Now it's quite easy for us, but at first it was not easy. We now have about 48 locations where we have deployed vSAN.
When vSAN was introduced we were quite excited about it. We were looking for something that was not traditional and we wanted something hyperconverged. vSAN was a perfect fit for us.
I rate the solution an eight out of ten. To get to a ten it would need improvement in the Health status checkup.
We do reference architectures using our SSDs so we're all about All-Flash vSAN. It's part of our portfolio.
I would love to see vSAN integrate Persistent Memory and NVDIMMs. I know they're supposed to be working on an elastic tier so that we don't have the issues with destaging from the cache to the capacity. Those are the things that I'm interested in.
I'm not an end-user, I'm a partner, we put together proofs of concept for end-users. So my biggest desire is for the VMware/vSAN team to perfect the single tier or what they're calling the elastic tier so that you can pool SSDs as well as NVDIMMs.
The stability is fine, it's as stable as the vSphere, and vSphere has been around for a long time.
We've documented that it scales out per node. The more disk groups, the more nodes, the better the performance.
We have a team of engineers who do the performance evaluation so we don't normally use technical support. We only occasionally use it.
We published the first All-Flash vSAN in 2015. It wasn't straightforward but we got it done.
The primary use case is all of our VMware workloads. In terms of performance, it does alright with the general workloads. I've had some issues with the dedupe clusters, but that comes down to right-sizing the cache.
It has helped break down the silos, and we have not needed a separate storage team since the introduction of vSAN.
The most valuable feature is the simplification of storage. We no longer need to deal with Fibre Channel and the external storage arrays.
There are features that we could use that are coming out: File Services, data backup, and a better way to do Maintenance Mode with vSAN, which takes a while.
So far, except for a couple of glitches in past revisions, the stability has been alright. We had some issues with dedupe and compression in 6.2, where we had to delete all the storage off of it and recreate the storage groups. But besides that, it's been working well.
It scales really well. However, we're going to be in need of some, not external storage, but ways to expand storage without adding additional nodes to the cluster.
We're an MCS customer with VMware so we get great support.
For HCI, we didn't have anything else in place. For servers, this was our introduction to HCI. We have other products for VDI, but not for server workloads.
The initial setup was very straightforward.
If you're going to run vSAN, make sure that you stick to the HCL and that your firmware and your drivers match what's on the HCL before you implement it or go live with it.
When selecting a vendor, for us, support is number one, the support that we can get from them. The other factor would be the forward-looking direction of the company.
In a lot of cases, the primary use case for vSAN is in small to medium businesses, where they may not have the space or the funds for an actual storage array to provide a shared storage medium for their virtual environment. And even if they do, they may not have the expertise to maintain that and a separate network. vSAN gives them the ability to make use of storage they already own, across their hosts. As they add more hosts, they add more storage, more compute, and more memory. It makes their environment simpler to manage and keeps it moving smoothly for them.
The most valuable feature is the simplicity of its scalability: being able to grow it without having to make sure you get the right disks and the right nodes.
The solution is also easy to manage. It's all right there in the vSphere Client; you're not going through multiple tools. Once you've created the vSAN node, you add storage, it sees it, and you create your datastore from there. Everything is right there for you.
What I would like to see, for the really small customers, is the ability to have two nodes.
I find it to be incredibly stable.
I've seen it scale up to large databases. I've got some customers who utilize a small vSAN cluster for their Exchange environments because it keeps it encapsulated for them.
The initial setup is very straightforward.
I would definitely go with the vSAN solution. A lot of times, it's less expensive than third-party software, and it's not managed via third-party plugins. It's there, it's native to the ecosystem, and it works.
Our vSAN setup is used in our development system, not our production system, for ease of use and ease of access.
The benefit is easier deployment of storage. We don't have to order a storage system, we can just use whatever we have on hand and roll it into our virtualization system.
I would like to see a little bit more documentation on the initial setup, and a little bit more explanation on the expandability: How to extend out your vSAN much more simply through the console because, a lot of the time, you have to do it through the command line.
So far, the stability has been very good.
We haven't tested the scalability as much, but the small amount we have done has been very good.
We have not had to use technical support.
We use in-place storage systems, but I wanted to be able to spin something up quickly, for the development side, for our clusters. Since it's not a permanent thing, it's much easier to go in and re-do it without having to re-blow-out a whole storage system. It works well.
When selecting a vendor, what's important for me are support and value. The support is especially important. When I have a problem I need solutions. And return on investment is very big for me. I want to make sure that when we buy something, it's going to return the investment very quickly.
The initial setup was pretty straightforward. I had a couple of Knowledge Bases I followed, but it was straightforward, once I read all of them.
It has provided good value on the development side. Once I'm comfortable with it, we'll start looking at moving towards a production setup. But for now, just development.
I would definitely tell colleagues to move towards this solution. I've had a lot of people wanting to go to Hyper-V, not VMware. I have told them VMware is much more mature, it's got the feature list, it has a lot of good qualities.
We use it for our primary infrastructure. In terms of performance, vSAN is fine.
Being able to do maintenance on the fly is a real benefit: migrating off, updating, and then moving the guest back on to the nodes.
Everything that has been mentioned as part of Update 1 solves part of the HCL list issue. They're handling the firmware version but, at the moment, they're only handling the storage IO. They're not handling the rest, which would be firmware, the BIOS, the fNIC, and so forth. After speaking with them, they said they're looking at that for a future update.
Because of the vendor, we are very neutral on the stability at this moment. The main issue is drivers. Every time we move to a new vSAN version, we're having problems finding the correct drivers for the vendor.
The scalability is fine. Adding new nodes is very simple.
Our experience with technical support has been excellent. Every single time we've had an issue so far, they've been able to find the issue with the vendor.
Because of the time that we've had to spend dealing with the vendor, we haven't seen a return on investment yet.
Go with the full managed support, something like VxRail or, if you go with Cisco, get their full central management system.
vSAN alone, with the current features and version we're at, rates an eight out of ten. The vendor would be a definite one out of ten.
To make the solution a ten, it needs to be vanilla. There shouldn't be any custom drivers, any custom anything. It should just be, "Hey, you know what? These drivers are going to work for this version, the next version, and the version after that." That's the difficulty in this. It takes too much upkeep.
The primary use case is bringing redundancy into our plants for failover. It has been performing great.
The most valuable feature is the flexibility, the ability to move the machines around without hesitation.
The UI could certainly be better. Insight into what's actually going on with vSAN would be nice to have.
There have been a few issues, but VMware support has been tremendous in resolving them, so it's been good.
Scalability is easy to do. It's just drop-and-add and you're good.
The process with technical support is pretty good. Escalation up to the top-tier engineers is really good. We have a direct path there. There are no problems with tech support.
We probably already reached our ROI after two and a half years.
Make sure your storage network is strong. But I would recommend vSAN.
It's a pretty solid product now that it's at 6.5 Update 2. I know that it's going to get better, but right now I'm pretty happy with where we're at. I would rate it at seven out of ten. Nothing's perfect. There's always room for improvement.
The primary use of the product is for storage for VDI plus some other storage for file servers and the like. The performance is great. We use it on all-flash.
The most valuable features are the performance and the ability to use all-flash.
I would like to see it be more hardware-agnostic.
Other than that, the only other complication is - and it has gotten better with the newer versions - that lately, once you're running an all-flash, if you need to grow or scale down your infrastructure, it's a long process. You need to evacuate all the data and make sure you have enough space on the host, then add more hosts or take out hosts. That process is a little bit complex. You cannot scale as needed or shrink as needed.
Right now, the stability is pretty good. It's getting a lot better.
It has its quirks but the scalability is good. Given that you have to have the hardware, the right driver, the right framework, and so on, it's not easy to put it together, it's not a plug-and-play solution. But once you get all of that done, it becomes a good product.
I have used the technical support, but most of the time it comes down to the manufacturer of the hardware; Cisco or whoever we're using for it. It's a compatibility type of thing. But tech support is okay.
Our previous solution was SAN-based. I wanted to bring in something new and not only stay with the market, where it's going with the trends, but also to bring in something that is stable enough for production.
Once we got all of the driver configurations done, etc., it was easy enough.
We have definitely seen value, especially in performance.
Give it a try.
Our primary use case is production data and the performance has been great.
I'd like to see better integration with the Update Manager, in terms of firmware updates for hardware.
It has been pretty stable for us.
It's very scalable. I like that. Adding a node is easy. Adding a disk group is easy.
Tech support has been very knowledgeable for the issues that we've had. They have been able to troubleshoot or determine exactly what is going on and then resolve it in a timely manner.
We were end-of-life on our previous storage and looking at replacements. It made sense to look at something that was going to integrate both the servers and the storage.
The most important criteria, for me, when selecting a vendor are
We went with vSAN because of cost and ultimate value. Ease of use and the cost, compared to some of the alternatives, were pretty compelling. I also liked that we could choose whatever hardware we wanted, rather than having to use one particular vendor.
The setup had some complexity, and some of that was figuring out newer releases. Networking, originally, was kind of a pain, with having to have everything talk Multicast. They've gone to Unicast which simplifies things.
It has simplified things for us. It was one purchase for servers and storage so that made it easier on us. It's been a good product, it's something that we'll continue to use.
For our shortlist, we looked at SimpliVity, some Dell EMC solutions, and Nutanix.
Make sure you do a proof of concept. And look at your options for hardware if you're looking at vSAN, compared to some competitors where you have just one option.
I would rate the solution at eight out of ten. To get to a ten they would have to drop the cost. That would get a point right there. Then, going forward, I'd like to see better integration with Update Manager. Some of the manual processes that you still have to do, being able to automate those, have it do them on its own, would be great.
Today, we use it for general compute and VDI. We have not put our VDI into production yet, but on the general compute side, it works great. The performance has been exemplary.
The most valuable features are ease of deployment and ease of management. If you compare it to other software-defined storage products, it's much easier. It's a checkbox. It's a lot easier to manage.
The Snapshots feature looks pretty cool, so that will be nice to have. External storage would be a good thing to have in the next release, something other than iSCSI; not necessarily HA, but a little more production-oriented than iSCSI.
So far we haven't had any issues at all. It has worked very well.
We're not that large at Boys Town. We probably only have 500 VMs. Realistically, we have about 50 ESXi hosts. So for us, it's great because we can just buy servers and expand any cluster we need. We split clusters based on other needs, like licensing or something else. It's not like we get to 64 nodes. So we don't have any issues with scalability. It works great for us.
We were having some problems with another software-defined storage vendor so we switched to vSAN. We had problems with the previous vendor's support. While I have never talked to VMware vSAN support, I've talked to GSS, but I've never had issues with GSS, other than their not calling you back right away.
For me, the most important criteria when selecting a vendor are
We've had no issues with the product. We put it in in two days. The initial deployment was straightforward, easy.
On our shortlist were Dell EMC VxRack FLEX and VxRail, and we looked at Nutanix a little bit. We chose vSAN because we had done PoCs in the past and, comparing it to every other software-defined storage product out there, its ease of use is unparalleled. It's very easy to set up and very easy to administer, comparatively.
I would ask a colleague who is looking at this type of solution, "Do you need storage for VMs?" Hands-down, if you need storage for VMs, vSAN is your option. If you need a SAN for some other reason, other than storage for VMs, then go for it. But if you're running VMware VMs, buy vSAN.
I like vSAN because they release features incrementally, every year, and you don't have to upgrade your hardware to get those features. If you bought a traditional SAN, you would have to upgrade your hardware constantly, every three years: You would get it, and it is how it is for three years. But on vSAN, you upgrade when you have to, when your hardware gets old or when you need more capacity. It's great, you get new features constantly.
I would rate vSAN at eight out of ten. It could get to a ten, once we have more time running it.
We use it for all of our Production and it has been very effective.
It's more scalable and faster than what we had, and it's easier to support.
I would like to see some of the more traditional SAN functions that are out there now. I can list them: being able to Snapshot on the back-end, better de-dupe, and better compression. Those are the major ones.
We haven't had any issues with the stability.
The scalability is very good. You plug it in and it goes.
We have not had to use technical support for vSAN yet.
We knew we needed a new solution. The other one was too complex and too costly and was never really maintained properly. Too many teams had too many hands in it. With the new HCI solution with the VxRack, and SDDC, everything is a lot more easily managed.
The most important criterion when selecting a vendor is reputation.
The initial setup was straightforward.
It's a little hard to say what our ROI is because we bought it to replace an old, traditional setup. It was either pay for maintenance and the like, refresh it, or go to an HCI. We went to an HCI. I don't know what the cost to refresh the other environment was, so I don't know exact numbers for return on investment.
Our shortlist was really just EMC. That decision was made before I took over the project. We were always an EMC shop, so we moved away from Cisco and went to Dell EMC for it. I don't know why, exactly, but they said to me, "Here, make it work."
Be careful of your FTT policies.
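The caution about FTT (failures to tolerate) policies matters because the policy directly multiplies the raw capacity a vSAN datastore consumes: RAID-1 keeps FTT+1 full copies, while RAID-5 (3+1) and RAID-6 (4+2) erasure coding trade rebuild cost for lower overhead. A minimal sketch of those standard overhead factors (the helper function is my own illustration, not a VMware tool; witness components and slack space are ignored):

```python
# Rough raw-capacity calculator for common vSAN FTT policies.
# RAID-1 mirroring stores FTT+1 copies; RAID-5 is 3 data + 1 parity;
# RAID-6 is 4 data + 2 parity. Witness/slack overhead is ignored.

def raw_capacity_needed(usable_tb, ftt=1, raid="RAID-1"):
    if raid == "RAID-1":
        factor = ftt + 1          # FTT=1 -> 2x, FTT=2 -> 3x
    elif raid == "RAID-5" and ftt == 1:
        factor = 4 / 3            # 3 data + 1 parity
    elif raid == "RAID-6" and ftt == 2:
        factor = 6 / 4            # 4 data + 2 parity
    else:
        raise ValueError("unsupported FTT/RAID combination")
    return usable_tb * factor

mirror = raw_capacity_needed(10, ftt=1, raid="RAID-1")  # 20 TB raw
ec5 = raw_capacity_needed(12, ftt=1, raid="RAID-5")     # 16.0 TB raw
ec6 = raw_capacity_needed(10, ftt=2, raid="RAID-6")     # 15.0 TB raw
```

A policy change from RAID-1 to erasure coding at the same FTT can reclaim a quarter or more of raw capacity, which is why getting the policy right up front matters.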
I rate it a nine out of ten. It would be a ten if it had better deduping, compression, and the ability to Snapshot volumes on the back-end.
It runs our core virtualization, both in our data centers and our edge or remote-site data centers. The performance has exceeded our expectations and exceeded our traditional converged infrastructure.
It's also intuitive and easy to use because one size fits all. Obviously, it scales out, but it's the same solution at every physical location I manage.
After hearing more today, here at VMworld 2018, about what's coming, it seems that what's coming covers us: It's the Snapshotting and the DR and the replication. Historically, we've had to leverage third-parties. They were third-party solutions we were happy with, but all-in-one would be better.
It has been stable.
It scales out.
I haven't used the technical support but my team has. No issues have been escalated to me, so that's a good sign.
We were using traditional converged infrastructure with storage, network, and compute tiers. We had a mandate from a U.S. government entity that required physical separation of a lot of our infrastructure. Thus, we had an urgent need to duplicate everything we had. So it was a technology refresh.
There were a handful of important criteria when selecting a vendor:
We didn't calculate a formal ROI on it because it was a technology refresh, but, "seat-of-the-pants," it's less expensive than traditional infrastructure.
We looked at Nutanix, we looked at Cisco, and we looked at Dell in the hyperconverged space. On the flip side, we were looking at the traditional SAN vendors and the traditional compute and networking vendors. We selected vSAN because it met the three criteria that I called out.
I would tell a colleague to highly consider it. Do your research and test it. If it fits, it fits.
We've been live about nine months so I would rate it at eight out of ten right now, just because I haven't used it long enough to be confident to say ten. To get it to a ten it will need to be stable for 12 months.
We use it for localized storage converted into virtual storage. The performance is perfect, awesome. No complaints.
The most valuable features are
I haven't utilized it enough to even know all the features available, much less what might be needed still. It's hitting all of our points pretty well.
The stability is awesome. We love it.
We haven't dealt that much with scalability because we're rural. It's a small area with small community-type banks. Being able to convert existing storage into vSAN is really a perfect solution for a lot of our customers.
I haven't needed to contact technical support yet.
What made us go with this solution was the price point. When you can utilize existing storage infrastructure, and not have to continually purchase new SAN products that keep going up in price, it's a wonderful thing.
When selecting a vSAN vendor, the most important criteria were
I've been using VMware for many years, and I'm still using it. That's a testament to how well it works.
The initial setup was very straightforward, a very simple implementation. It's just an easy product to use. VMware, in general, is a very easy product to use.
The timeframe for return on investment is about three years, and we hit that pretty consistently, if not even sooner.
Look at the ROI carefully, and make sure that you can hit that before pushing the product.
It's cheap, easy, and good for low-end customers. We're a small market, rural area, so we have low-end customers. Price point is just about everything for us.
I would rate vSAN at nine out of ten. What would make it a ten would be lower pricing.
We use our vSAN primarily for our VCF deployment. We run our production workloads on it, mostly for Microsoft SQL databases and various WebSphere and web-based front-end applications.
It performs pretty well for the most part. The older versions had some issues, specifically regarding upgrade paths and the robustness of the product, but in the last two or three versions they've really addressed those issues and brought it up to speed and made it a real enterprise solution.
I would like to see more comprehensive lifecycle management. The current path and process for upgrading or updating the firmware, as well as the storage controller software to interact with that firmware, is fairly manual and not very well documented. A little more time and effort spent on the documentation of the lifecycle management for vSAN would be really great.
Currently, it's very stable. For the previous versions, which are still active and out there online, my advice is: upgrade to the new version.
Scalability is slightly limited in that you're pinned by the physical disks in your hosts, but provided that your solution doesn't require you to have specific disk technology, you can get the size you need and expand it out as much as you need to.
I give technical support an A-plus, from my experience. It was perfect, it was awesome. They helped us recover from a very major outage and we would have been down for much longer had they not been involved.
We were on old hardware and we needed to move to a new solution.
It completely removes the need for a storage network and for a storage administrator and all of that infrastructure and the costs that are involved with them. That, right there, is a huge return.
It's great for DevTest and, as long as you're not going to be consuming data at huge rates, it's great for Prod too.
I would rate vSAN as six-and-a-half or seven out of ten, but only because of the major problems we experienced with them a few months ago that led to some big outages. From what I understand, the current version alleviates those issues. If we're evaluating the current version, I would give it an eight.
It would be a ten if there were more robust lifecycle management and a better-documented implementation within vSphere.
We use it for our developer clusters.
It's a little too early to tell what the benefits are. We've only implemented it over the past three to six months.
Perhaps they could provide encryption without having to use an encryption manager.
No issues so far. It's been pretty stable.
The scalability has been pretty good for us so far.
We are primarily NetApp. The decision to invest in a new solution was a C-level-down recommendation.
The initial setup was pretty straightforward.
Go for it. As long as you don't have a very high IOPS-oriented application, it's a great way to go.
I rate it eight out of 10. While it's a little too early to tell, it doesn't seem like it gives the performance that an actual SAN would give for heavy IOPS, read/writes.
The primary use case is that we're getting ready to deploy a VDI solution across the campus and our healthcare network.
The opportunity gained with the relationship we have now is limitless, as new features and products roll out, especially with today's announcements: the news about microsegmentation, the RDS in the cloud with AWS, as well as some security features. It's a constant evolution for us. That's really why we're with vSAN.
The most valuable feature for us, long-term, is the integration with VMware that we're going to be using. We're currently using AirWatch, we're working in Workspace ONE. We want to make sure that our VDIs, with the integration of the Windows 10 solution - as well as any-device, anywhere, anytime mobility - work, yet still offer them the ability to gain access to that VDI. That is huge for us.
If you want to get down to the nuts and bolts of room for improvement, we would really like them to look at what Nutanix did for day-one/day-two operations deployment: Bringing in the equipment, getting it deployed, getting it setup, and ease of use of one-click for deploying our 30-node solution. With vSAN we had to go into each one individually and set it up.
The stability is there.
It absolutely scales, that's the beauty of it.
We actually involved VMware from the beginning. We brought in Nutanix, Simplivity, and vSAN technicians, as well as integration with our hardware platforms. But the true key was bringing those guys in, helping us set up the best environment, and seeing exactly what our endpoint was going to look like with our business integration. That was better than, "Yay, we can deploy 40 VDIs in 10 seconds." What does that do for the environment we're currently existing in? So for them to help us set up as a true test in our actual environment, that was a huge help, from all three that we tested. It was really impressive.
I am the manager of the guys who will be implementing the product. We recently received our client from Dell and we have installed it. My two main CI guys are here with me at VMworld 2018 this week, so we're on a temporary hiatus, but we did get one full rack installed so far, and we're getting ready to deploy the vSAN to it.
The solution is only as good as the technicians you have and the investment put into proof of concept testing. My two technicians are some of the smartest people. You always hire someone smarter than you and I definitely did with these two guys. They've already got it worked out. We had the tasks laid out, what we were going to do day-one, day-two, rolling it into a test environment, and then production. We already had that done before we had the equipment on site.
We're just wrapping up year-two of our five-year ROI plan and this VDI solution, with vSAN, is part of it.
We purchased a VMware Enterprise agreement so vSAN was already included with what we had. It was just a smart choice, given where we were heading eventually, to go with vSAN. That was one of the deciding factors.
We just wrapped up proofs of concept for both hardware and software. We did vSAN, we did Nutanix, and we did Simplivity. We looked at HPE hardware and we looked at Dell EMC hardware, among others.
We actually decided to go with Dell with a vSAN solution, even though Nutanix had better day-one/day-two operations, straight out of the box for us. Long-term, we felt that the vSAN solution itself was going to serve us in terms of utilizing and leveraging the power of VMware, going either to a private and hybrid-cloud solution or a public and hybrid-cloud solution.
As far as the hardware goes, we didn't really have that much of a preference among the three, but we did see that Dell EMC's OpenManage solution for managing the hardware, the bare metal itself, was much more productive than the other two.
You'd want to give it a 10 out of 10 based on what they're doing in the future, but if you always give a company a 10 they'll feel like they're already there. I would actually rate vSAN one below Nutanix, as far as maturity of the model goes.
I would give vSAN a very solid eight. There is room for improvement to catch up to Nutanix. Nutanix is definitely a nine. Again I don't like giving anybody a 10 because we always want to see what the next evolution or innovation is that they're bringing to the table. The way vSAN would get to a 10 depends on how they get me to "tomorrow".
We use it to provide and sell infrastructure as a service.
The performance for us is very good. Our infrastructure now is only solid-state disks, with two different levels. There is one for write-intensive and one for read-intensive. Our decision was to change traditional storage to vSAN.
One of the valuable features for us is the ability to restrict the performance capacity per client. Other solutions don't have this feature.
I would like to be able to limit IOPS.
When we began with this product, we made some mistakes. But through collaboration with the vendor we were able to find a solution to the problems and, today, it is a stable solution.
We have about 2,000 machines under this solution with about 100 hosts. It can scale beyond what our needs are. We have no problems with scalability.
We have used technical support a lot for this product and for other VMware products. For vSAN, in the beginning, we used tech support intensively. The support is very good for us because we get technical support in Spanish, in Panama.
We are using the different levels of support for different kinds of problems. We are online with them and the response time is very good.
We previously used traditional storage solutions such as HPE, Dell Compellent, Hitachi, and others. We did not use a software storage solution before vSAN.
It depends on the project, but vSAN, in particular, is an easy setup.
Our model is different. Our interest is in how we provide a solution for our clients. vSAN results in indirect benefits for our clients because it helps us reduce costs. But the client does not necessarily know that vSAN is the product behind the solution.
When we began the program with vSAN, it was more expensive than it is now. The price is improving over time. In addition, it includes more features in the same bundle. That is really good for us.
We compared it with Nutanix but Nutanix was so expensive for us because our infrastructure is not as high-end as in America. In Chile, it's lower-end. Also, because we are a service provider, the price of vSAN is not expensive for us. Other products, like Nutanix, don't have a program for service providers and the price is prohibitive for us.
For me, vSAN is a nine out of 10. I don't know what could make it a 10 because I have not really compared it with other products in the last three years. Maybe today there are other products that are better. When we started using it three years ago, vSAN was, perhaps, a seven out of 10 but they have improved the features.
For us, vSAN is a really good option for our edge network sites. We're able to use it in a highly available environment that enables our end-users to get to the data they need. We're heavily leveraging it for our VDI deployments.
It has helped us reach a much higher satisfaction rate in our VDI deployments. With the VDI, we didn't really focus on an ROI, although we did see some ROI benefits.
The ability to have a disaster recovery option for our end-users by being able to use VDI and the vSANs, and the ability to do replication across multiple data centers, are valuable to us.
One thing I would have said I'm looking for is vSAN in the cloud but, obviously, they announced that here today at VMworld 2018. That is something that I'm looking forward to.
vSAN has come a long way. It's a highly stable product and something that everyone should look at. Even in a large data center, now, vSAN makes sense.
For me, it scales really well. We have multiple product vendors. We're able to leverage all of them using the vSAN capabilities of all of those vendors.
I was not involved in the initial setup but I have taken it over since then and I have implemented some of the newer features that vSAN has come out with; capabilities that we weren't using when I came in.
We leveraged a partner who helped to make it an easy implementation.
My advice is to look beyond what your initial scope is. If you're looking at using it just for VDI implementations, look at more than just that and how you can leverage it for a lot of different datasets in your data center.
When I look to work with a vendor it's important to find one that is agnostic to either software or hardware and a solution that fits our specific environment.
We use it as a primary storage for our Horizon View environment.
The product is great. It runs well.
It helped us survive power outages in one of our data centers, then continued to function without a hitch.
I would like a better Hardware Compatibility List (HCL). The HCL should be a little easier to deal with.
Making the hardware compatibility not as much of an issue would be a good thing.
It scales well. We have plenty of room to grow. It should be a good long-term solution for us.
Technical support has been fantastic. We always get answers quickly whenever we call.
We wanted to give more redundant access to the users' desktops than they previously had. Before, we were on a single SAN which was causing us issues if we had either an issue with the SAN or an issue with our environment when the SAN would go down. By using vSAN, it would allow us to spread our data across multiple data centers on our campus and be more fault tolerant.
It was really straightforward.
We had some help from Venture Technologies, who helped us get it going. They didn't really have to do too much. We figured it out.
We have increased our user productivity. However, being in Higher Education, we don't really measure it.
Give it a look. It will save you time and money.
Our primary use of vSAN is to set up a deployment of a small subset of clusters that we have out in our gas and oil processing plants, in remote areas.
Performance-wise, it has gone above and beyond what we originally spec'ed it for. From that respect, for us, it's like the "golden gun".
It gave us the ability to get the storage-processing and CPU power that we needed in remote areas. It's something like "the big bullet in a small gun", where it actually works and does what it needs to do. It's very useful for what we need it to do.
The most valuable feature is that we're not spending any additional money on an external storage solution for it. It gives us the all-in-one, Swiss Army knife kind of solution.
The usability is pretty good but it could use a little tweaking on the UI, with a clearer definition of exactly what some of the things do. For example, sometimes when sticking hosts into maintenance mode, you have to re-read the definition a couple of times. I have to say to myself, "Okay. I actually want to evacuate the data off of this host. Or no, I actually don't. I want to keep it there but I still put the host into maintenance mode." So a little bit more clear and concise definition of what some of the options do would help.
The first impressions of its stability were really good. After using it a little bit more and going through some issues with it, it still shows that it's a very robust tool. From that point of view, I'm going to keep on using it.
Scalability is very easy. We've already run into one scenario where we've needed some more storage. We were able to provision the drives, slide them into our current hosts in that cluster, and expand it. It was very easy.
I have used technical support and it leaves a little bit to be desired. I've gone through a few people to get to the person who actually has all the knowledge, who can actually solve the problem.
There was a lot of Hyper-V deployed out in this environment, and things of that nature. Hardware was coming to a service-contract end, so the next step for us was to get rid of a lot of one-on-one virtualization that was happening with the Hyper-V environment and start consolidating and bringing it down into something that was a little bit more manageable.
If you're coming from a small enough environment, where you have to provision out a stand-alone datastore for this, and you don't have the resources to do it, I would definitely say go look at vSAN for that, because you can definitely combine your compute and resources into one environment.
We use it for storage and redundancy.
It has changed the way we design our infrastructure. We're looking at a new infrastructure.
Also, it allows us to put our infrastructure in remote locations and still get the same performance we get from our onsite SAN solutions.
I like the availability aspects of it.
The stability has been very good. I don't think we've had any real issues from what we have been setting up so far.
It's very scalable. That is a really good feature of the product.
The initial setup was pretty straightforward.
I rate it at 10 out of 10 because it is just a really good product. I've used other products like it and it seems to be the most stable and easiest to configure.
We use it for our compute clusters, for running our virtual machines. We use it for our vROps clusters. Our customers use it for their compute workloads.
It is scalable, overall. If you need to add storage, it makes it easy to scale by adding additional hard drives into the existing servers or you can add storage by just adding more servers.
I would like to see replication as part of it. I would also like to see direct file access, being able to run CIFS shares and NFS and the like. I think that would be critical to continuing the use of it, going forward.
One of the things that we've had challenges with is when we place hosts into maintenance mode. Sometimes doing so triggers large re-sync processes which can be time-consuming and which have, at times, pushed the capacity to the threshold. I definitely think making some changes in that area would provide some big improvements.
Overall, it's stable. When it's designed properly for the proper workloads, it's a very stable product. We had some challenges, initially, with getting the workloads aligned to the proper storage policies and configurations, but since we worked through that it has been very stable.
Technical support is getting better. We've been using vSAN for a couple of years now. Initially, it was a little more challenging, but it seems like GSS is scaling up as well and, perhaps, learning the product along with us, at times. But overall, they do a great job in giving us support when we need it.
The initial setup is pretty easy. I would like to have some additional automation wrapped around it. In the earlier versions, PowerCLI was very limited, but as the versions have progressed the modules have progressed as well. It's getting better. I consider it to still be a fairly new product and, over time, it's continually getting better and better.
Properly align your workloads to the storage policies and make sure you know what your workloads are before you leverage vSAN. Have a good understanding of the size of your VMs, the amount of change that they have, and how you are going to be doing maintenance in your cluster. Understand the workload and what you're going to be doing with it before you jump in.
We use vSAN because it's a VMware product which integrates with all the other virtualization, and it simplifies hyper-converged environments.
The latest version is very stable. It had a couple of hiccups in the earlier versions. The deployment and integration were simple, but we did have some bugs that we hit on, which have since been fixed.
Adding new nodes and expanding vSAN forward is simple and non-disruptive for a lot of our customers. It makes it simpler so we are not doing late night deployments, and we can answer the needs of the business immediately.
It is easy to find information out there, not only from searching the web, but even the times I have engaged VMware support. We were able to get an engineer within minutes of opening a case who understood vSAN, and they were able to help us out.
The initial setup for vSAN is simple. A couple clicks, we were up and running.
It takes more time to rack it than to actually configure it.
As far as software-based storage control products go, it is great. They are staying ahead of a lot of the competition out there. vSAN is what a lot of the competition is using.
Dataprev has a strategic partnership with VMware and the federal government of Brazil. We're developing a new public cloud and private cloud for the whole government of Brazil.
There are so many valuable features.
I need some additional features, and to learn more, to develop best practices for the Brazilian federal government.
I would like to see machine-learning. This is the biggest problem because, in Brazil, our federal government doesn't know about moving to the cloud. We have city, state, and federal governments to move to the cloud. Dataprev is beginning the work towards a private cloud and machine-learning would be an important feature, one I really need.
I'm really impressed with the stability of vSAN.
My team is starting to develop and make use of the scalability. The team in Brazil is very big in cloud performance but we are just beginning to move into a cloud program.
The technical account team works with my team in Brazil, together, whether in London, China, India - many teams working with us in Brazil. I would rate technical support as very good.
In Brazil, our strategy is that we need to move to the cloud. But there are federal rules and, connected to the government's strategy, there are some questions with many of the solutions. All governments have a problem moving to AWS, to Google, or to Microsoft. Dataprev's strategy, in the employ of the federal government, is to apply the new features while staying within the principles set by the federal government. All governments have a big problem with many data centers, a lot of code, with auditors, etc. I can't go into our strategy in depth here.
The government decided to move to the cloud but there are many problems with regulations, with agencies' sensitive information. VMware provides primary and strategic development features, in working with us in the federal government.
When looking at vendors the most important criterion for us is trust. We need to be able to trust the vendor, the solution, the whole technical development team, because the technical account manager and other teams work with my team inside my data centers.
I can't comment on the initial setup.
I rate vSAN a 10 out of 10 because the VMware team works with my team to develop a better, more timely response. We have made improvements for the federal government. We have been working with VMware for almost 15 years.
We use it for our management cluster. All of our network services are on this cluster, on vSAN. That way, it's off the production network, it's off by itself. We have four nodes in case there is an issue with it, it has the failover capabilities.
The performance is very good. We have NVMe performance in it so it's very fast.
The most valuable features are being able to keep it off by itself and the ease of use.
We have been talking to VMware about things we'd like to see and I think they have done them in their 6.6 release. I don't think we need any more enhancements at this time.
The stability is very good. We have some HCI solutions like this in our environment and this one is on par with those solutions.
The scalability is very good. If we know that we need more CPU, more memory, we can add more nodes to it. We don't need to do that today but we know, tomorrow, that we have that capability.
We have a VMware TAM and they have helped us out with technical support. We haven't needed to call support. Things have been very smooth, no issues whatsoever.
We knew from doing the DR project and from having some issues with our production vSphere that we needed some type of solution to help us out, to keep it off the production network. But we did not have a product before this one. This is a new product for us.
For us, the most important criteria when selecting a new vendor are
The initial setup was a little complex. We did it a couple of years ago and we've heard that it is so much easier now. I know that they are working on that capability right now.
I don't see this solution as an ROI type of thing. We tried to do it as a DR solution, or for making sure that it's a solution that is off by itself. At this point, cost was not a major factor for this.
We were using Dell and then we had a Dell EMC box, a hybrid. But it was a lot more money and it seemed we would always be a version behind. But with this one, the vSAN that we chose, we can upgrade it as needed. We can always be at the latest and greatest.
Make sure you use a solution that is supported. There are a lot of companies out there that are new and sometimes they don't have a life. We have been in that situation before where we have bought something and then it has gone end-of-life or no more support. Make sure you get a solution that is going to be supported for five to seven years, such as vSAN.
I would rate it at nine out of 10. I know it's very young and that they're growing it or doing a lot of updates to it, so I'm thinking it will be a 10. It's just very new to us. To make it a 10 will take some time.
Easy to use.
Raw disk and block disk.
VMware was used to manage multiple servers in a DMZ for an eCommerce service (Coy). VMware made server management, migration, and backup/maintenance efficient.
The management of servers was easy from a central standpoint. The server rooms were less cluttered as servers were virtual and easy to manage.
The migration of servers feature makes server rack maintenance easy.
It is a memory-intensive app, which should be improved. Also, the server files are larger than before.
We are thinking of using vSAN instead of the traditional SAN. We are just starting to explore how vSAN can benefit us.
This is not yet deployed. It seems very expensive to obtain a vSAN license.
Based on my findings, it seems easier to deploy than the traditional SAN. I was told vSAN can be deployed in a few minutes.
Dedupe on non-flash drives can be improved. With PFTT=2, only 67% of the raw capacity is usable.
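The 67% figure follows from the erasure-coding layout. As a rough Python sketch (assuming vSAN's standard layouts: RAID-5 for PFTT=1 writes 3 data + 1 parity components, RAID-6 for PFTT=2 writes 4 data + 2 parity):

```python
# Usable fraction of raw capacity under vSAN's erasure-coding layouts.
def usable_fraction(data_components: int, parity_components: int) -> float:
    return data_components / (data_components + parity_components)

# RAID-5 (PFTT=1): 3 data + 1 parity  -> 75% usable
# RAID-6 (PFTT=2): 4 data + 2 parity  -> ~67% usable, matching the figure above
print(f"RAID-5: {usable_fraction(3, 1):.0%}")
print(f"RAID-6: {usable_fraction(4, 2):.0%}")
```

Even so, erasure coding is more space-efficient than mirroring, where PFTT=2 would leave only about 33% of raw capacity usable.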
Our primary use case is server workload and mission critical work.
It has improved our organization in all situations.
All the features are working great.
Only the stretched cluster requires a minor improvement.
As a VAR, it has been about gaining expertise in the platform. Additionally, it has allowed us to benchmark against traditional systems. We are now in a good position to help our clients decide when and where to deploy this solution.
The ability to have an HA cluster in the absence of a shared storage device or SAN. Not having to retain SAN expertise and the cost of a storage area network (SAN) warranty are big pluses, too.
Perhaps a bundle, like Essentials, would allow more businesses to make the leap to the product.
I would like to see this technology be made available to smaller businesses, who might benefit from high availability but struggle with the entry fee.
Coming from the early networking days, when storage first became software-defined, the announcement of this product caught my interest. The platform has improved much over the first version. Today, we are comfortable running any of our mission-critical apps on it.
We use vSAN as our server virtualization solution for Dell installs across our customer base, and vSAN is our primary solution.
vSAN can help customers save on storage system costs, and also save on the human cost. For an SI (like us), vSAN can save tech service time and easily deploy for maintenance.
VMware vSphere with the vSAN HCI system: it is easy to train customers to operate the system, whether or not they have VMware operator knowledge. Most customers can save tech service time via vSAN. vSAN is easy to deploy and maintain, so some customers can do service themselves.
Simple management with only one datastore. vSAN has just one datastore, so customers do not need to think about where to put their VMs, or how to design the physical disk RAID, the LUN sizes, the LUN mappings, etc., as they do when they use NetApp/EMC/HDS or other storage systems.
vSAN does not have offline dedupe. When inline dedupe is turned on, performance is lower than with it off.
A virtual machine's disk size cannot exceed a single node. For a VDI user, a single node's storage space may not be enough to hold a file server or Exchange server.
Teams required to manage the storage for the entire VDI infrastructure were not required after implementing the vSAN solution. Any seasoned VMware engineer can easily manage the whole vSAN without any issues.
It is simple to manage, very easy to implement and troubleshoot in case of any failures.
Any VMware engineer can easily manage vSAN, troubleshoot issues, and perform an upgrade on the vSAN without any downtime. Since the storage space is local to the hosts, it reduces the overall response time and improves the performance.
Some storage-tiering options could be included, like other mature storage systems have. Some intelligence could be added to the newest version to provide more flexibility between storage tiers, like Nutanix, to make this a true software-defined storage product.
For a new full site, vSAN was used instead of going with the usual fibre SAN. Since vSAN requires SSDs, it was a great way to introduce that tech to the company. With a traditional SAN, SSDs would only have been an option, so this is a debatable benefit.
We gained fantastic performance with the benefit of a simplified hardware stack that requires less specialized knowledge to run and maintain.
The simplicity of everything, even though it was a new technology at the time with some quirks. The lower skill cost of maintaining it meant that we could do more with the people that we had.
When it was implemented, we were one of the first to use vSAN in production. The main problem we had was hardware compatibility: finding the right certified hardware. This caused further problems because the hardware reseller had little knowledge of the requirements, and we even had firmware issues with the hardware vendor. This delayed implementation by a few months. It should not be an issue today, but still be cautious when choosing hardware.
We previously used a standard Fibre Channel SAN infrastructure. We switched to get away from the fibre switches, fibre cards, and the fibre SAN itself.
The setup was very easy if you have the correct hardware and firmware.
Factor in operational costs.
We compared it to a similarly sized Fibre Channel SAN.
I'm working as a consultant, so I can't directly say how it helped my customer. But I know that my customer has started to equip some branches with our building block, and it replaces NetApp filers. We are using a building block of two vSAN nodes, with the witness appliance in the main datacenter. With the next release of our building block, based on vSphere 6.5 and vSAN 6.5, we are switching to direct cabling, so no 10GbE switch is needed for vSAN traffic.
I’m often asked for a vSAN stretched cluster in combination with erasure coding. Currently, with vSAN 6.5, you can use one of them but not both at the same time. It is a typically German habit to have two datacenters in an active/active architecture with a synchronized mirror. For this type of customer, it’s pretty important to get a vSAN stretched cluster with erasure coding.
I have been using it for three months now. I use VMware vSphere 6.0 Update 2 and vSAN 6.2 (hybrid).
We have not had stability issues. Even losing the witness appliances is no big deal. vSAN 6.2, as well as vSAN 6.5, seems to be a pretty stable and reliable platform.
We have not had scalability issues in both ways. Scaling down to two hosts with direct cabling is possible for ROBO, as well as big clusters with over 32 hosts.
I rate technical support 4.5/5.
My customer switched (or currently is switching) from NetApp filers to vSAN. The main reason is cost. You need the ESXi host hardware anyway, but you now save the costs of storage maintenance. The costs per vSAN license (and the maintenance) are usually lower than for NetApp in this case. Plus, you gain the benefit of only having one management console which is well known and built-in to the management tools used for the central datacenters.
The initial setup is straightforward, but only after deploying the vCenter service. Once vCenter is up and running, it is pretty easy to enable vSAN. Although disks can be selected automatically, we chose manual selection, and setting up vSAN was still extremely easy.
When you don’t have a chance to build upon an existing vCenter service, you have to think about the deployment of vCenter without having vSAN. There are several options, like deploying vCenter temporarily on a client PC and then migrating it later onto the vSAN cluster. But it’s always a bit tricky and you probably need some extra time to get the installation done. In most of my vSAN installations, the vCenter was already up and running, so the initial setup of the vSAN cluster literally takes minutes.
Licensing is pretty straightforward. Have a look at the features you need and choose the license that fits. For ROBO scenarios, there is a special ROBO license that could save you some money.
dvSwitch functionality is included in every vSAN license. You don’t need vSphere Enterprise Plus to use dvSwitches; you only need vSAN licenses. In addition, all-flash functionality is included in every vSAN license.
My customer was focusing on continuing with NetApp filers and ESXi hosts or vSAN for ROBO.
Have a look at the simplicity of vSAN and how it easily integrates into the existing management tools. It’s not even the ease of implementation; it’s the ease of managing and maintaining the complete stack.
The solution is easy to deploy and manage. It offers stable performance.
Deduplication and compression usage data display is not real-time and is not that accurate.
We have used this solution for more than three months.
There were no issues with stability.
There were no issues with scalability.
Technical support is not very good.
We did not use a different solution before this one.
The initial setup was easy if you have that kind of experience; it is a little tricky the first time.
We did not evaluate other options.
It is precisely the ability to extend the cluster's storage and compute capacity simply by adding one or more physical servers that made us choose this solution with confidence.
Moreover, with storage policies, we were able to create different protection policies for virtual machines according to their performance or availability needs.
The most important functionality is the ability to extend cluster storage and compute power securely, without loss of data. Also valuable is the ability to set up a stretched cluster across multiple sites in a much simpler and easier way than with a traditional storage solution.
This product can be used for a wide range of applications.
It is also possible, for example, to configure a stretched cluster without resorting to a costly solution based on conventional storage, such as MetroCluster or another solution of the same level.
No, for the moment we have not had any stability problems, as long as the technical prerequisites are met (hardware on the VMware Compatibility Guide, for example) and design best practices are followed.
No. It's no problem. To be honest, we have not upgraded any vSAN infrastructure to date.
I just had to increase the capacity of a production cluster by simply adding hosts; once compatibility with the existing hardware had been checked, there were no problems.
Adding a host is very simple: drag and drop it into the vSAN cluster (after first placing it in maintenance mode). Then you only have to add its HDDs and SSDs to the existing vSAN cluster; they are claimed either automatically or manually, depending on the cluster configuration.
I have never had to contact VMware technical support.
No, I didn’t use other solutions previously, because converged solutions are new technologies and I knew VMware's reputation. But I am developing other competencies, especially with Nutanix technology.
Once you are used to the VMware vSphere Web Client interface, there are no problems.
Activation is really simple: it is done with a single click in the VMware vSphere cluster settings.
Then, depending on the number of hosts in the cluster, it is possible to define a certain number of fault domains, that is, the number of host losses that can be tolerated.
Moreover, depending on the amount of RAM available in each host and the number of SSDs, we can define one or more disk groups, and host more or fewer objects. But this is a design question, which must of course be addressed upstream.
The biggest difficulty lies in designing the solution to match the company's service-level agreement. There are many possibilities depending on the number of fault domains you want, whether vSAN spans one site or multiple sites, the degree of protection for virtual machines, and so on.
Basically, vSAN is a license in addition to the classic VMware vSphere license, which is also mandatory.
The easiest approach is to contact VMware directly; it's much simpler and easier.
Yes, at the time, like everyone else, we looked at the classic option: a storage array with SSD cache. But the prices were not especially attractive.
Moreover, that option requires knowing all the management consoles and the specifics of each type of storage, network, and infrastructure.
With VMware vSAN, we have not had that worry: by mastering the VMware vSphere Web Client, I manage everything from a single interface, and it is very simple.
Today, I am looking at Nutanix products and at VxRail, with the goal of identifying a competing product that might interest us.
Test the product before implementation to see if it fits your needs. Above all, be careful with technical prerequisites and other technical constraints.
Being assisted by the vendor's pre-sales team would be the simplest approach.
Most importantly, the interface is simple, but it is clear that mishandling can have unfortunate consequences.
A simple example: if I lose a node in a vSAN cluster that is also used as an HA cluster, I lose not only part of the storage, which is not necessarily serious (depending on the vSAN cluster's configuration), but also a compute node, which can quickly complicate things.
The most valuable features are:
In a production environment, these features speed up provisioning, improve security, and provide faster deployment.
Not much improvement is needed. If you work with a hyper-converged platform, you do not touch vSAN directly; the HC appliance takes care of it.
I have worked with VxRail, which is a hyper-converged platform with vSAN embedded; its vSAN configuration is fully automated.
As I worked only on analyzing VxRail's functionality, not on the implementation, it would be unfair for me to say how vSAN could be improved.
I have used this solution for nine months, but only for sales proposals; it has not been implemented, just used for technical pre-sales information.
I did not encounter any stability issues.
I did not encounter any scalability issues.
Whenever I needed technical support, they answered promptly.
As I have worked with the HC Platform, the setup was very simple and easy.
For Latin America, the costs are very high, even considering the depth of functionality, but it is still sellable.
Work hard on sizing; it matters.
vSAN is easy to configure; I have set it up within 30 minutes. Its IOPS make it one of the best options for running a VDI solution. Policy-based profiling is a major advantage, and the iSCSI support VMware recently launched for vSAN is an added bonus.
We started using this two months ago.
Yes. MTU settings will vary depending on the network switch.
I haven't contacted any technical support.
Earlier, the customer used NetApp and EMC storage, with which they faced stability and scalability issues.
I have deployed this solution for an end customer.
Not yet calculated
It is reasonable compared with other storage vendors.
If we are looking at value, we can go with an all-flash vSAN cluster, which provides data compression and deduplication (for example, 30 TB of actual data might be stored in 10 TB after deduplication, saving 20 TB of storage).
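The arithmetic behind the reviewer's deduplication example can be sketched as follows. This is a minimal illustration, assuming a simple logical-to-physical ratio; real vSAN dedup/compression ratios vary by workload.

```python
def dedup_savings(logical_tb: float, ratio: float) -> tuple[float, float]:
    """Return (physical TB consumed, TB saved) for a given dedup/compression ratio."""
    physical = logical_tb / ratio
    return physical, logical_tb - physical

# The reviewer's example: 30 TB of data at an assumed 3:1 ratio.
physical, saved = dedup_savings(30, 3.0)
print(f"stored: {physical:.0f} TB, saved: {saved:.0f} TB")  # stored: 10 TB, saved: 20 TB
```

A 2:1 ratio on the same data would instead consume 15 TB; the savings figure is entirely a function of the ratio your data achieves.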
Firstly, I want to offer an example in terms of the deployment process and manageability of the vSAN storage environment. vSphere admins can handle all infrastructure tasks and, thanks to policy-based storage, we can manage I/O performance as well.
vSAN health monitoring has room for improvement: it has many known and unknown bugs, which may be resolved in a future release.
We have been using it for the last two and a half years. We started with vSAN 5.5 and on-disk format version 1, and in the last six months we upgraded to vSAN 6.2 and on-disk format version 3.
Yes. In some cases, after I built a big vSAN cluster of 64 nodes, hosts started showing up in different network partition groups. Until that is corrected, you can’t proceed to the next step.
I didn’t face any scalability issues.
I can give them an 8 out of ten. Because this is game-changing technology, we need to add more vSAN engineers to our team.
In my past experience, I didn’t use policy based storage; I always worked with standard storage.
Initial setup is straightforward, but you first need to understand the high-level topology and how it works.
In terms of pricing and licensing, we need to understand the requirements of the project and the cost model as well, because that has a very important effect on our project delivery.
Nutanix and VxRail because these also serve the same function.
The feature that is most valuable is the simplicity of implementation, as you only have to enable the feature on the already existing cluster(s).
For a PaaS platform which I’ve developed, the scalability of VMware vSAN was a necessary feature enabling us to grow with the onboard customers.
Although the product is very scalable, it is not scalable in the sense that hosts of different sizes can effectively be added to an existing cluster. All the host and disk configurations have to be consistent for a consistent performance experience.
I’ve been using VMware vSAN for about two years, i.e., since VMware vSAN 6.0 was released.
The stability of VMware vSAN 6.0 is good. You sometimes have to resynchronize the data over the cluster (which is a single button task).
As stated earlier, all the hosts have to be exactly the same for a consistent performance experience, which limits the scalability of the product. Also, the compute and storage components within the HCI solution are linked to each other; it’s not possible to add storage-only nodes.
The documentation of VMware vSAN is good. I’ve had no experience with VMware support regarding vSAN.
I haven’t used a different HCI solution before.
The initial setup is really straightforward, you only have to enable it on the VMware cluster. But, before the initial setup you will have to check the HCL of vSAN for the compatibility of the different components. With VMware vSAN-ready nodes, this process is made simple, but it still is something you have to take into consideration.
VMware vSAN is licensed per CPU, and that cost comes on top of the other VMware (and Microsoft) products. vSAN itself is reasonably priced, but with the addition of more nodes to the cluster, the required CPU licenses (for VMware, Microsoft, etc.) increase rapidly, which makes it an expensive solution.
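The per-CPU scaling the reviewer describes can be sketched with some simple arithmetic. The price figure below is purely hypothetical; only the per-socket licensing model comes from the review.

```python
def licenses_needed(nodes: int, sockets_per_node: int) -> int:
    # Per-CPU(-socket) licensing: every socket in every cluster node needs a license,
    # and the same socket count typically drives vSphere and guest-OS licensing too.
    return nodes * sockets_per_node

def license_cost(nodes: int, sockets_per_node: int, price_per_socket: float) -> float:
    return licenses_needed(nodes, sockets_per_node) * price_per_socket

# Growing a dual-socket cluster from 4 to 8 nodes doubles every per-socket line item.
for nodes in (4, 8):
    print(nodes, "nodes ->", licenses_needed(nodes, 2), "socket licenses")
```

This is why adding nodes for capacity alone gets expensive: each new node brings its full stack of per-socket licenses with it, not just the vSAN license.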
I’ve looked at HPE SimpliVity, but it has a special hardware requirement, which disqualified it against the project requirements.
Use VMware vSAN for special use-cases only and don’t use it as an all-purpose storage solution.
Use VMware vSAN for VDI, small VSI, and dev-test environments. Don’t use it for messaging/database solutions as the licensing costs are huge.
Storage policies govern how storage is provisioned for VMs. This feature allows you to create storage policies for VMs covering performance, high availability, I/O behavior, and so on.
Hardware supported by VMware vSAN: The list of supported hardware should grow in the future. I would improve this area by increasing the number of hardware partners so as to support as many configurations as possible.
We have not had stability issues.
We have not had scalability issues.
Technical support is perfect. VMware provides some of the best support in the market.
We had no previous solution.
With a good hardware design, the setup is straightforward.
I have no advice about pricing.
We evaluated Cisco vSAN.
It is easy to design and deploy to react to a changing environment.
Having high availability without the need for a full vCenter/host license is a plus that, along with not needing a physical SAN, makes this solution great when you need functionality without the extra overhead of additional hardware and licences.
It accelerated our P2V plan.
There are bugs in the vSAN Health Check utility. It misreports latency issues when the hosts are actually within the correct tolerances. I have been on the phone with VMware about this, and they confirmed it’s a bug.
I have used it for 10 months.
I have not encountered any stability issues yet.
We have not crossed this bridge yet.
So far, technical support is 8/10.
We did not previously use a different solution.
Initial setup was straightforward.
Licensing is fairly straightforward.
Before choosing this product, I did not evaluate other options.
Take a look at the network requirements and use 10GbE.
We have been using vSAN in one environment for about eight months and in another environment for about four months.
The only issue I encountered during deployment was with the hardware and not with vSAN itself.
The disks in the new servers were installed at the factory as RAID disks. I had to mark them as non-RAID disks so that vSAN would be able to see them correctly in order to add them to disk groups.
There have been no issues with stability.
We have had no issues with scalability.
Fortunately, I have not had to contact support for any issues with my implementations.
We chose VMware vSAN for these reasons:
We have a Nutanix environment running in production as well.
The initial setup was straightforward as was learning the vSAN environment.
The complexity comes in setting up and managing the storage policies. These can be simple or complex depending on the environment.
When using VMware Horizon View, there are several storage policies that are auto-created and managed. Creating and managing your own policies and rule sets depend on your needs and workloads.
VMware vSAN is included in the enterprise plus level of software that we purchased. Our cost savings were due to buying commodity server hardware with local hard drives instead of investing in large SAN hardware.
If you really want to squeeze all of the value out of this solution, it should be deployed in an all-flash configuration. The all-flash vSAN solution allows customers to take advantage of newer features such as erasure coding, deduplication and compression, greater swap file efficiency, and other enhanced management capabilities.
The erasure coding (aka RAID-5/6) feature increases storage capacity efficiency compared to the default RAID-1 fault tolerance method that consumes more space but provides the best performance. Some virtual workloads do not require all of the performance provided by RAID-1. An administrator simply defines a capacity-based storage policy configured for RAID-5/6, which is then quickly applied to the VMs that would require it.
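The capacity trade-off described above can be made concrete. The sketch below computes raw capacity needed per unit of usable data for each fault-tolerance method, using vSAN's standard overheads (RAID-1 stores FTT+1 full copies; RAID-5 is a 3+1 layout, RAID-6 a 4+2 layout); the 100 TB figure is just an illustrative workload size.

```python
# Raw-to-usable overhead factor for each vSAN fault-tolerance method.
OVERHEAD = {
    "RAID-1 (FTT=1)": 2.0,      # 2 full copies
    "RAID-1 (FTT=2)": 3.0,      # 3 full copies
    "RAID-5 (FTT=1)": 4 / 3,    # 3 data + 1 parity
    "RAID-6 (FTT=2)": 6 / 4,    # 4 data + 2 parity
}

def raw_needed(usable_tb: float, method: str) -> float:
    """Raw TB required to store usable_tb under the given policy."""
    return usable_tb * OVERHEAD[method]

for method in OVERHEAD:
    print(f"{method}: {raw_needed(100, method):.1f} TB raw for 100 TB usable")
```

So moving a workload from RAID-1 (FTT=1) to RAID-5 cuts its raw footprint from 200 TB to about 133 TB, which is exactly the efficiency gain that justifies applying an erasure-coding policy to VMs that don't need RAID-1 performance.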
vSAN is a very cost-effective solution for just about any data center. It is very easy to deploy, scale and manage. The entire solution is built on commodity hardware, so customers do not have to break the bank (or budget) to invest in this technology compared to a much more costly centralized storage array.
Snapshot management is something that continues to improve with each release of vSAN. Earlier versions experienced performance degradation, but each version gets more and more efficient with snapshots. The new snapshot format known as “vsanSparse” was introduced in vSAN 6.0, which replaced the traditional “VMFSsparse” formats which involved redo logs.
I have been working with VMware vSAN for quite some time now, dating back to the old vSphere Storage Appliance and then vSAN in vSphere 5.5. It has come a long way in a short period of time with many improvements.
Anytime I have encountered issues with stability, it usually was the result of a poor design or poor implementation. If you are looking to deploy VMware vSAN properly aligned to your business needs, you should consider a vSAN assessment before anything else. Properly sizing and spec’ing the solution will ensure stability.
Scalability is not a major issue with vSAN. The latest version can scale up to 64 nodes per vSAN-enabled cluster. The nodes can be configured to be very dense when it comes to CPU, memory and local disk configurations. A majority of the 2U servers out there contain up to 24 slots (SSD or HDD). All-flash configurations provide more disk capacity thus making the solution more dense. Scaling the solution is also very easy. Scale up or scale out; it all depends on how the solution was initially sized during the design phase.
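To give a feel for the density figures above, here is a quick sizing sketch. The node count and slot count come from the review; the split between cache and capacity slots and the drive size are hypothetical choices for illustration.

```python
def cluster_raw_capacity_tb(nodes: int, capacity_drives_per_node: int,
                            drive_tb: float) -> float:
    # Only capacity-tier drives count toward raw capacity;
    # cache-tier devices are excluded.
    return nodes * capacity_drives_per_node * drive_tb

# Hypothetical dense all-flash build: the 64-node maximum, with 20 of the
# 24 slots per 2U server used for capacity drives of 3.84 TB each.
raw = cluster_raw_capacity_tb(64, 20, 3.84)
print(f"{raw:.1f} TB raw")
```

Roughly 4.9 PB raw for a maxed-out cluster under these assumptions, before any fault-tolerance overhead is applied, which is why scaling is rarely the constraint; design-phase sizing is.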
The stability of the solution has limited the number of times that I have been on a support call for vSAN. The handful of times that I have had to call VMware for support on vSAN, the support experience was phenomenal. The support staff responded swiftly and were very knowledgeable.
I did not previously use a different solution but there are various solutions out there in the hyper-converged market that work very well.
The actual implementation of vSAN is very easy to do. Once the equipment is racked, stacked, powered on and installed with ESXi, the vSAN cluster can be up and running very quickly. To avoid any hiccups, it should be properly sized and designed.
Review all of the options available with each vSAN version (Standard, Advanced, Enterprise, ROBO) and look at the solution from a “long-term” perspective. One example would be a vSAN solution that will eventually span multiple sites. The primary site is ready now but the second and third sites are a year or so away from being production ready. In this case, I would recommend to my customer the Enterprise Edition, so they can take advantage of the stretched cluster feature. Once the other sites are ready, the stretched cluster vSAN can be quickly deployed because the proper licensing is already in place.
I would certainly consider other options, but I apply that logic to any solution. Always weigh the pros and cons of the solution that you are looking for. Does it satisfy your solution requirements? Does it fit with the long term goals? What type of workloads are being deployed? Cloud integration or some type of automation required? Many factors can and will come into play with choosing the proper hyper-converged solution. Look very closely at each one and do a comparison to determine which solution aligns with your needs the most. Once you have narrowed things down to two or three solutions you can then use the results of the assessment to assist with the final decision.
Invest the time and resources to properly design and size vSAN early on, long before hardware is purchased. It is very important to ensuring stability and its overall functionality. Contact a trusted solution provider or expert and evaluate the existing infrastructure or environment to determine the correct hardware and software configuration. Lastly, VMware is very consistent with releasing up-to-date ready node configurations that are certified and tested for vSAN functionality. Adhere to those guidelines and the solution will be successful.
Significant increase in IOPS: VMware, on paper, guarantees up to 3 million IOPS on vSAN. The more efficient your disks, the better the IOPS. And since vSAN works on the local storage of the cluster, there is very little IOPS loss compared to traditional SAN boxes, which require Fibre Channel connectivity.
Significant reduction in total cost of ownership: Because of the local storage architecture involved in vSAN, the disks are significantly cheaper than the SAN disks in SAN boxes. The price difference is anywhere between 20% and 40%, which is significant.
Working in the banking and finance industry, speed is of paramount importance to us, since we deal with millions of records being fetched every day. vSAN helped us leverage this and speed up the response time of our applications for end users.
The hardware compatibility list (HCL) is a sore point for vSAN. You need to thoroughly check and re-check the list with multiple vendors, like VMware in the first instance, and the manufacturer (like Dell, IBM, HPE, etc.), as the compatibility list is very narrow. I would definitely be happy if there were significant additional support for more models of servers from Dell, IBM, HPE, etc.
I have been using vSAN for 1.5 years.
We did have some stability issues. Initially, we faced issues due to lack of visibility of the HCL from VMware and the hardware vendor (Dell). But once the issue was sorted out, the product gave us rock-solid stability.
We did have some scalability issues. Similarly, when we added a new host in the existing cluster, we faced a similar issue with the HCL, but that was resolved soon.
I rate technical support 4/5.
We used traditional SAN technology before using vSAN.
Initial setup was pretty straightforward.
Verify, and then verify again, the hardware compatibility list before you place an order for the hardware.
We didn’t look at alternatives.
This will definitely reduce your TCO by at least 50%. Hence, if you are planning to go with this product, just go ahead. But again, as I have said previously, please MAKE SURE that you take a look at the HCL up to the micro level.
The most valuable features of vSAN are:
The most valuable feature of ESXi is that it is free. I strongly recommend this for those who have a huge development environment. ESXi is the best no-cost virtualization platform in the market right now, where you can consolidate your server into one platform.
The virtualization itself really helped me as a network and system administrator with a lot of servers to maintain; that's a pain otherwise. A virtualized environment is really easy to manage, with almost everything in one dashboard. This gives us more time for research and innovation, and less time spent on maintenance and upgrades.
The minimal downtime alone is a winning blow for both management and IT. Unexpected downtime is inevitable; it's been part of every organization's reality. Addressing that pitfall really gives an edge from a business perspective.
Long-term savings, both in not having to buy more servers in the future and certainly in power consumption, not to mention the data center space it freed up.
The mobility, flexibility, and scalability are really amazing and astonishing features.
I would like to see lowered cost. vSAN is very expensive.
I have used vSAN for two or three months. ESXi has been with us for around three years.
We are using vSAN 6.2, ESXi 5.5 and 6.0, and vSphere 6.0.
We have not had any stability issues so far.
Scalability is one of its strengths.
We haven't called technical support so far, but the web (Google) actually has plenty of good articles, forums, and discussions. The website also has one of the best FAQ and DIY sections, covering perhaps 90% of what you would need from technical support.
We did not have a chance to try other virtualization platforms because the first one we tried really gave us a strong enough reason to stay loyal.
Initial setup was straightforward. You'll only have what you want.
Hopefully, over the next few years the pricing will be dramatically lower.
We are biased from the start to use VMware products only.
Study and evaluate your current setup. Conduct a case study to see if the advantages really outweigh the disadvantages. Virtualization really is the future. Especially here in my corner, almost all or most of the data centers are still in bare-metal setup. Because of the big price (CAPEX), most of the time, management will disapprove this project. But, if you help them see the big picture, I'm sure they are going to promote you for providing this project.
The most important feature for us is the converged infrastructure, which is all this tool is about. There is no need to manage separate storage areas in SAN/NAS environments. Storage management comes built-in with the vSAN tool. Storage is managed via policies. Define a policy and apply it to the datastore/virtual machine and the software-defined storage does the rest. These are valuable features.
Scalability and future upgrades are a piece of cake. If you want more IOPS, then add disk groups and/or nodes on the fly. If you want to upgrade the hardware, then add new servers and retire the old ones. No service breaks at all.
The feature that we have not yet implemented but are looking at, is the ability to extend the cluster to our other site in order to handle DR situations.
Provisioning virtual machines has been simplified, as there is no longer a separate storage layer to provision and manage.
The management client, i.e., the Flash-based client, is just not up to the mark. I’m really waiting for the HTML5 client to be fully ready and all the features are implemented to it. This, of course, is not a vSAN issue but a vSphere issue.
Of course, as vSAN is tightly embedded into vSphere, it is managed by the same tool. vSphere management is done via browser, and currently the only supported client is the Flash-based one. VMware is rolling out a new HTML5-based client, but that is a slow process. It began as a Fling, and since then there have been quite a number of releases as new features are added. Today it is quite usable, but still not complete.
There is also the C# client, also known as the fat client, which is installed on a management system. Recent versions of vSphere no longer support the C# client, so the browser is the only possibility with current versions.
So, my criticism is aimed towards the current Flash-based client, which is utterly slow, and Flash itself being deprecated technology. The sooner we can get rid of it, the happier we all will be.
I have used this solution for around a year.
Stability has not been an issue for us. We have not run into any serious software faults. VMware ESXi is a mature product with very few problems and today, vSAN is also getting there.
The scalability of the product is way beyond our needs.
L1 technical support, which I have mostly been dealing with, has been pretty solid, especially the guys in Ireland, who do handle it pretty well, both technically and in reference to the customer service aspect.
We did not have any comparable solution previously. We did previously use traditional SAN / NAS environments from where the storage areas were provisioned for the VMware clusters.
The initial setup was quite straightforward. All in all, it took three days to complete the entire process; that included installation of the hardware itself, installation of ESXi onto the hardware, creating the data center and the cluster, configuring the networks and multicasting on the surrounding network infrastructure, defining all the disk groups and networks at the cluster, and finally turning the vSAN on. vSAN was the simplest part of the whole process.
As VMware products are licensed per number of sockets, you need to think this fully through. However, don’t go cheap on the number of hosts. You’ll thank me later.
We got presentations from both SimpliVity and Nutanix, but no serious evaluation of other products was made. We did evaluate vSAN a couple of months before the purchase, so as to get familiar with it, and we now have a lab environment to play with.
In hindsight, we could have carried out a more thorough evaluation of vSAN to get a really good feel for it; maybe even run part of the actual production workload there for an extended period to see all the pros and cons.
Study the VMware Hardware Compatibility List (HCL) carefully with your server hardware provider and make sure all the components/firmware versions are on the HCL; either that or buy predefined hardware, a.k.a. vSAN-ready nodes, from a certified vendor. Always make sure that the hardware and firmware levels are on par with the HCL. You may have to upgrade; for example, you may need to upgrade the disk controller firmware when the updates to ESXi are installed. VMware does a pretty good job here and vCenter tells you that there are inconsistencies. However, you should still be prepared for that in advance, before actually installing the updates.
Don’t go with the minimum number of (storage) nodes, as that won’t give you enough room for a hardware failure during a scheduled maintenance break. For a minimum setup without advanced options in vSAN 6.5 (such as deduplication and compression) and with Failures to Tolerate (FTT) = 1, the required number of nodes is three. VMware’s best practices recommend a minimum of four nodes. Do yourself a favour and go with at least that; even five would be good.
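The node minimums above follow from how mirroring places components: an object with FTT=n needs n+1 data replicas plus n witnesses, each on its own host, giving 2n+1 hosts. A quick sketch, with the recommended headroom expressed (as an assumption) as the minimum plus one spare:

```python
def min_hosts_mirroring(ftt: int) -> int:
    # RAID-1 mirroring: FTT+1 data replicas + FTT witness components,
    # each on a separate host => 2*FTT + 1 hosts minimum.
    return 2 * ftt + 1

# FTT=1 -> 3 hosts minimum; the four-node best practice above is this
# minimum plus one spare, so a failed host can be rebuilt (or taken down
# for maintenance) without dropping below the required host count.
for ftt in (1, 2, 3):
    minimum = min_hosts_mirroring(ftt)
    print(f"FTT={ftt}: minimum {minimum} hosts, recommended {minimum + 1}")
```

The same logic explains why losing one host in a three-node FTT=1 cluster leaves no target to rebuild onto: you are at exactly the floor.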
When designing disk groups, it is always better to have more, smaller disk groups than a few large ones. This increases availability, shortens recovery from disk failures and improves performance, as there are more cache devices.
If your budget allows it, go with all-flash storage. If not, go with even more disk groups. Our cluster performs quite well; although we have spinning disks, read latency usually stays below 1 ms and write latency below 2 ms.
Plan your network infrastructure carefully, especially that part which handles the vSAN traffic. Go with separate 10G switches and dual interfaces for each server just for vSAN. Handle the virtual machine traffic, migration traffic and management traffic elsewhere. Go with 10G or faster, if you need that. Don’t use 1G for vSAN traffic, unless your environment is really small or is a lab.
Plan your backup/restore strategy really well and test it thoroughly. Periodically test restores of both full virtual machines and single files inside virtual machines. Test restores are always important, but with vSAN even more so, as all your eggs are in one basket and there are no traditional .vmdk files to fiddle with. A separate test/lab vSAN cluster is really useful for trying out things such as installing updates, restoring backups, etc.
I like this solution because policies (such as resiliency) are applied per virtual disk instead of applied on an entire volume.
In a standard SAN solution, and in almost all software-defined storage solutions, resiliency is applied to an entire volume. For example, you create a volume (or LUN) and choose RAID 1, RAID 5, RAID 10 and so on. With vSAN, the notion of a volume as we know it from SAN doesn’t exist. Instead, storage is object-based: each virtual disk is its own object. Thanks to this, we can apply specific settings, such as resiliency, per virtual disk. It is more flexible because we don’t need to dedicate an entire volume to a specific resiliency level.
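A small sketch makes the per-disk flexibility concrete. These raw-capacity multipliers are vSAN's standard figures: RAID-1/FTT=1 writes two full copies (2x), RAID-5/FTT=1 uses 3+1 erasure coding (1.33x), and RAID-6/FTT=2 uses 4+2 (1.5x); each virtual disk can carry a different policy on the same datastore:

```python
# Raw-capacity multiplier per storage policy (standard vSAN figures).
POLICY_OVERHEAD = {
    "RAID-1/FTT=1": 2.0,    # two full mirrors
    "RAID-5/FTT=1": 4 / 3,  # 3 data + 1 parity
    "RAID-6/FTT=2": 1.5,    # 4 data + 2 parity
}

def raw_gb(vmdk_gb: float, policy: str) -> float:
    """Raw capacity a single virtual disk consumes under its policy."""
    return vmdk_gb * POLICY_OVERHEAD[policy]

# The same 100 GB disk costs different raw capacity per policy,
# without dedicating a whole volume/LUN to one RAID level:
assert raw_gb(100, "RAID-1/FTT=1") == 200.0
assert raw_gb(100, "RAID-6/FTT=2") == 150.0
```

On a traditional array, picking between these trade-offs means carving separate LUNs; here it is just a policy attribute on the disk object.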
I’m a consultant, so I don’t have vSAN in my organization. But customers take this solution to increase efficiency, scalability and ease of management.
Currently, vSAN supports stretched clusters. You need the exact same number of nodes at each site, and only RAID 1 resiliency is supported. I hope that in the future vSAN will also support RAID 5 and RAID 6 resiliency for stretched clusters.
I have been working with this solution for seven months.
Some customers report that resync doesn’t work very well.
We have not had scalability issues.
I rate technical support 3.5/5.
As a consultant, I use different solutions, such as Microsoft Storage Spaces Direct, and Nutanix.
The initial setup is straightforward because a wizard helps you to enable vSAN.
The license price is too expensive compared to other players in the market.
I will evaluate alternatives depending on customer’s needs, but I compare it with Microsoft Storage Spaces Direct and Nutanix.
Be careful about the hardware you choose, especially the HBA, storage devices, and CPU, depending on whether or not you use deduplication.
The vSAN technology is clearly the big game changer here. VMware's software-defined storage finally enables us to build a private cloud solution that scales much easier than we are used to.
We wanted to be able to grow much more dynamically than what we have been able to until now. Instead of big investments and complex storage installations, we now have an infrastructure where expansion is a lot easier because we can just buy four more new servers, plug them in and add them to the pool of resources.
We are moving faster every day and are developing new systems and services all the time. We expect the number of projects this year to be 4-5 times as many as last year, and we will be able to support that growth with this solution.
We did plan on using deduplication in our original specification, but during the planning of the configuration, we were advised against it by VMware.
It was a brand new feature, so it was, at the time, perhaps, too early to use it. I am expecting that we will use it in the future when it has matured.
I have used vSphere for seven months for the latest installation, but we have been running VMware for the last 10 years.
We haven't had stability issues that have affected our running servers. However, that is partly because we pay attention to new releases and what they contain, and we don't update just because a new version is available.
Some updates that we chose not to install had bugs that could have caused instability. Also, because we run such a wide range of products from VMware, one has to look at the support matrix before updating/upgrading software, as it may take some time before all products support each other.
We have had a few alarms and alerts in the system, but they have been resolved without any downtime.
Scalability is one of the major advantages of this new installation.
Technical support is no better or worse than what we have seen from other vendors. Usually it works well, but once in a while there are cases that seem to run in circles where you need to get in touch with your account manager and have them escalate the case to get progress.
We have used VMware for virtualization and NetApp for storage for about 10 years.
We stayed with VMware and decided to switch to vSAN because they have had a good track record here with stable products and we could save money (and grow more gradually) by running vSAN instead of a traditional storage system.
I would say initial setup is complex. But we decided to go with best practices and we had consultants from VMware designing and planning the configuration for us, so it wasn't an issue.
Make sure your designs are complete so you can buy all the licenses and products you need as one purchase to get the best deal.
We did not look into alternative solutions for the virtualization part. But for storage, we looked at other vendors. For example: NetApp, Tintri, and Nimble.
Start from scratch. Reject all your old dogmas about how things should be and what is right and embrace the functionality that is available.
We designed our system so we can use NSX and all the other features VMware has to offer, even though we didn't plan on using it in the beginning.
If you are putting constraints on your design because of ties to old legacy systems and designs, then you will never get the full benefits.
We had several servers we used in our VMware cluster, as well as a storage device. The implementation of vSAN reduced the rack space, since we no longer required several slots in the cabinet to rack a storage device. vSAN also made it very easy for us to scale out. Power consumption was also reduced within our datacentre.
I like the scalability and the fact that it reduces your total cost for storage over several years.
The only thing I can think of at this time is to improve the performance monitoring and performance visibility within the GUI. They have already made several improvements in vSAN 6.2, but there's always room for improvement when it comes to monitoring performance.
We had no stability issues.
We had no scalability issues.
VMware technical support provides a great service.
We switched to move towards a software-defined datacentre.
It is very easy to configure and set up. vSAN is already part of vSphere ESXi. You simply need to apply a license and do some minor configuration to get it working.
The first 1-2 years of purchasing vSAN will be expensive. Thereafter, the longer you are running it, the more cost savings you will have.
We looked into several other products, such as Pure Storage and Dell solutions.
Keep it simple, and don’t try and over-complicate things. Make sure to follow VMware best practices when it comes to implementing your vSAN solution. Read those whitepapers and make sure you understand how you want to implement it in your environment.
There was a significant improvement in both IOPS and cost. On paper, VMware promises up to 3 million IOPS on vSAN. The more efficient the hard disk drives (HDDs), the better the IOPS. Since vSAN works on local storage within the cluster, there is very little IOPS loss compared to traditional SAN boxes, which require FC connectivity.
There was a significant reduction in the total cost of ownership. Due to the local storage architecture involved in vSAN, the prices are significantly cheaper if compared to the SAN disks that you have in the SAN boxes. The price difference is anywhere between 20% to 40%, which is a significant amount.
Since I am working in the banking and finance industry, speed is of paramount importance to us since we deal with millions of records fetching data everyday. vSAN helped us to leverage this and speed up the response time from our applications to the end users.
The vSAN Hardware Compatibility List Checker needs to improve; it is currently a sore point for vSAN. You need to thoroughly check and re-check the HCL, first with VMware and then with manufacturers like Dell, IBM, and HPE, as the compatibility list is very narrow. I would definitely be happy to see support for significantly more server models from Dell, IBM, HPE, etc.
I have used this solution for a year and a half.
We did encounter stability issues. Initially, we faced issues due to the lack of visibility of the HCL from VMWare and the hardware vendor (Dell). But once the issue was sorted out, the product gave rock-solid stability.
We did encounter scalability issues. Similarly, when we added a new host in the existing cluster, we faced a similar issue on HCL, but that was resolved soon.
I would give the technical support an 8/10 rating.
We were using traditional SAN technology before moving over to vSAN.
The initial setup was pretty straightforward.
Make sure that you verify and again verify the HCL, before you place an order for the hardware.
This will definitely reduce your TCO by at least 50%. Hence, if you are planning to go with this product, just go ahead. But again, as I have said previously, please make sure that you take a look at the HCL up to the micro-level.
I find that vSAN allows for very easy administration. The fact that you don't have LUNs to set up and assign is great. The ability to set up storage policies and assign them at the disk level is also a great part of this product. You can allow for different setups for different workload requirements.
vSAN allowed for the expansion of our Public Library Patron computer environment into a three-node VMware cluster using commodity servers. This eliminated the need for expensive disk arrays and controllers while providing greater reliability and performance.
We have been using vSAN in one environment for about eight months and another environment for about four months.
The only issue I encountered during deployment was with the hardware and not with vSAN itself. The disks in the new servers were installed at the factory as RAID disks. I had to mark them as non-RAID disks, so that vSAN would be able to see them correctly for addition to disk groups.
We have had no issues with stability.
Fortunately, I have not had to contact support for any issues with my implementations.
We have a Nutanix environment running in production as well. We chose VMware vSAN for several reasons. First, the vSAN solution is part of the ESXi kernel. This allows for the product to be very fast with little overhead. Secondly, vSAN is included in the Enterprise Plus version of ESXi which, compared to competing products, provides a great cost savings.
The initial setup was straightforward, as was learning the vSAN environment. The complexity comes in setting up and managing the storage policies. These can be simple or complex depending on the environment. When using VMware Horizon View, there are several storage policies that are auto-created and -managed. Creating and managing your own policies and rule sets depend on your needs and workloads.
VMware vSAN is included in the Enterprise Plus level of software that we purchase. Our cost savings is in buying commodity server hardware with local hard drives instead of investing in large SAN hardware.
In our model, the price of vSAN storage space is a bit lower than SATA-based storage space from other storages, and vSAN usually has better characteristics (IOPS + latency).
We can easily scale up our vSAN cluster horizontally. All we need is to buy the same hardware nodes and put them in racks.
vSAN has better integration with virtualization than any other datastore.
Stretched All Flash vSAN is the leading product to build a disaster recovery solution. We have a plan to build it in near future.
It’s simple: We are service provider and if a solution can give us new opportunities, it is a good solution. We can build economically effective IaaS clusters on top of vSAN.
vSAN is very complex inside. For example, you need a plan for any emergency situation, beginning from the PoC stage: how you monitor SSDs and HDDs, and how you replace them. It looks simple, but you cannot just remove a broken component and install a new one. Under the vSAN layer, many careful steps are needed to make these simple actions happen.
And when you operate a big environment, you need more tools to monitor the health of the solution, to troubleshoot issues and so on. VMware has improved this side from 5.5 to 6.5, and there’s still room for improvement.
vSAN is not a hardware-agnostic product. We would like to have more compatible SAS controllers and other components in the market. There is room for improvement for both hardware vendors and for VMware.
On the other hand, vSAN is a production-ready solution and all these possible improvements are cosmetic issues.
We have used it from the vSAN 5.5 release date, more than two years.
We use VMware vSAN 5.5 with the latest updates in our products.
The first product is a B2B sector solution, CloudLine, and we sell space on vSAN as one of the storage tiers.
The second one is our B2C solution, CloudLite.ru. It looks like Digital Ocean – we sell IaaS to retail customers in the mass market.
We have plans to build new clusters using vSAN 6.5.
We have encountered stability issues. We had run many tests with vSAN before production. To avoid any issues with vSAN stability, one needs HCL hardware and compatible BIOS drivers for each of the components. The crucial part is that you need HBA without RAID and with disk pass-through, which is important. Finally, you need strong network expertise and a solid network.
We have not encountered any scalability issues; you can scale vSAN horizontally without any issues. But you need to start from 5 (!) nodes; not 3 or 4. It’s a long story – why? :)
Rating technical support is not a simple question. VMware has great technical experts at level 2 and 3, and they are always available if you have severity 1 issue. Technical support is not so good for minor issues.
Previously, we used traditional datastores - NetApp, EMC, IBM - and we continue to use them.
Initially, you need to have enough expertise. You need to read some popular bloggers and select hardware from “recommended nodes”. And then you can start a PoC.
We are part of the VMware vCAN program, so our licensing is different from the retail model and it’s comfortable for us.
We keep an eye on all solutions that come to the market. We have tested SimpliVity and Nutanix. We use MS Storage Spaces in our production. All these products have their pros and cons.
You need to use it for the reason of economical efficiency. It’s one of VMware’s great products.
vSAN is a great product, and we see improvement from 5.5 to 6 and 6.5.
The reduction in cost of storage: In my most recent deployment, we reduced cost from around $20,000 per TB (CapEx) to less than $1,000 per TB (CapEx). This is not taking into account deduplication/compression or the ability to add disks and scale vertically, not incurring licensing costs, which would drive the cost down further.
Traditional SANs require large up-front costs, and with "forklift" upgrades, you end up spending a very large amount of money initially and then expect to recoup the costs over the lifetime of the array. This is not how vSAN – or any other HCI (hyperconverged infrastructure) product – works. The idea is to have a small initial investment and, with horizontal/vertical scaling, you can grow into the needs of your environment. This can be accomplished several ways, by either adding more disks to each host (vertical scaling) or by adding more nodes to the cluster (horizontal scaling). This allows for much greater flexibility with your storage. Before HCI, you were required to guess how much storage you were going to need, and were stuck with what you guessed at.
Upgrades are also much simpler. Because the system is software-defined, you simply upgrade the software rather than the entire hardware stack. If you want to upgrade the hardware, you would then simply add nodes in, and remove older nodes. It is also possible to create a new cluster and do a swing migration; however, this is similar to older-style upgrades. The point is that there are a lot of options available with HCI systems.
Management of the environments is overall simpler, allowing for during-hours patching with no downtime and little risk; also allowing us to stay more current with patching, reducing the overall risk of the environments.
The worst part of vSAN, as with most VMware products, is that you need to use the vSphere Web Client to interact with it. The vSphere Web Client is slow and clunky, making interacting with the system difficult and often times painful. I have been told that the new version of the web client will be significantly better, but do not have personal experience with it. Other than being difficult to work with, it can cause outage scenarios to take significantly longer to troubleshoot because you waste a lot of time waiting for the client to load information, or just load in general. It is a huge drawback for an otherwise very good product.
I have used it in various deployment scenarios since 2015, or about 1.5 years.
I have observed no stability issues when the product is deployed as instructed. It can and will have stability issues if you do not follow the hardware compatibility list (HCL) or the vSAN Deployment and Sizing Guide.
The product scales easily; scaling up is easier than scaling down, due to the need to remove the disks and migrate the data off the nodes you wish to remove from the cluster.
Actual support engineers are excellent; however, opening cases is often difficult/frustrating.
In my current project, the customer previously used EMC VMAX arrays. As detailed elsewhere, the CapEx savings were incredible.
During my current project, initial setup was very complex, though this was by our own choosing and was needlessly complex. In the past, setups were often very straightforward, though you need to verify your design properly, as mentioned.
VMware licensing is per socket for vSAN, like everything else. The platform is very flexible, so be sure to look at all your options.
I was not part of the evaluation process but cost was a major factor, as well as high availability.
Discuss the deployment with VMware sales; I've met several of them and they are generally smart people looking to help get you the best deployment possible.
Currently, we are on version 6.2. Having all flash, I would say that the most valuable feature for us is deduplication, as it gives us better utilization of the space available. In the latest release, there are already features that we have been waiting for. iSCSI presentation, for example, is something we were waiting for. With iSCSI presentation, we will be able to present the vSAN datastore to our other blade servers; therefore better utilising our investment.
We face the same challenges most organisations do; probably the most common one being that of keeping up with growth and expansion, while keeping within the budgets. vSAN is very scalable, so we can plan our costs well in advance, knowing that additional nodes will be expanding both our compute and storage resources.
I think that the product is evolving in the right direction, most of the improvements and suggestions we had in mind are already available in 6.5. Obviously, there is always room for improvement.
For example, in our case, we had to go with the vSAN Advanced license in order to have all-flash. I remember attending the vSAN summit at VMworld 2015, when this licensing issue came up during the discussions, as did the request to present vSAN via iSCSI and the 2-node direct connect for ROBOs. In 6.5, all-flash is now supported by all vSAN editions, and ROBO sites can be deployed with a 2-node crossover cable, so it looks like VMware is taking on board the suggestions we make, as always. :)
We have been using vSAN for the last two years, now. Initially, we decided to try vSAN in our test and dev environment. We started with the hybrid solution using some hardware that we already had in-house. Our development team had already noticed faster build and deployment time frames, so we explored the vSAN option further. Today, we moved to an all-flash solution, which we are now using both for dev and production.
The only issue that I recall having was with a controller driver that did not pass the HCL check; this happened following an update to 6.2, but a patch was released soon after. We did not experience any service interruption or downtime.
Customer support for vSAN was very good; response time was very fast and within the agreed support time frames. The technical guys were very knowledgeable and helped address our queries and issues right away.
In most of our environments, we still have "traditional" storage, some of which is becoming end of life and will be decommissioned. Others are relatively still recent and are being used as a secondary storage together with vSAN. It’s like having the best of both worlds in a way. We have been using and implementing most of the VMware products for several years now; vSAN keeps consolidating our infrastructure under one vendor.
When we were setting it up the very first time, we had to start over a few times, but again it was just a learning curve. I think during the first setup, especially if it’s in a testing environment, it’s the best time to hammer it and experiment a little.
We do implementations as service vendors and obviously implemented our own. My advice to whoever is considering vSAN is to try it out, even if it’s just on some hardware you already have. If you don’t have any hardware, most service vendors will be willing to give you a remotely accessible demo. My advice when it comes to production, in regards of hardware, is definitely to go for vSAN-Ready nodes (“VMware-approved hardware”).
In some of our environments, introducing vSAN helped reduce our datacentre hosting costs. In one case, we were able to completely remove a cabinet that had a legacy blade chassis and a legacy SAN. We only had two cabinets in this environment; by consolidating storage and compute in a few servers, we reduced the hosting costs by half. As for pricing and licensing, I think this is something which needs to be discussed on a case-by-case basis; I do not think it’s a “one size fits all”.
I think vSAN together with other alternatives is the future. Actually, it has already been here with us for a while; network, compute and storage are merging in one box. It’s just a matter of time for it to become the norm.
My rating is for this point in time. However, there have been improvements and new features in the latest release, which will probably make me increase my rating in the coming days.
Deduplication and compression: Software-based deduplication and compression optimizes the all-flash storage capacity.
Compared to other vendors, vSAN is compatible with more expensive hardware, while Nutanix is available on multiple hardware platforms, such as Supermicro, Dell and Lenovo.
I have used it for two months; just for test purposes.
We have not encountered any stability issues.
We have not encountered any scalability issues.
Technical support is 10/10.
We did not previously use a different solution.
Initial setup was straightforward; I had the KB from VMware to help me deploy the solution.
Before choosing this product, we evaluated OpenStack Object Storage.
It is a good solution for customers that are looking for performance, storage efficiency, and scalability.
When configuring a HA vSphere cluster, you need shared storage. Traditionally, one would need a SAN or NAS to provide this kind of HA. Using vSAN, you can use the same servers as the hypervisor uses for the vSAN storage. No SAN or NAS is needed and much less hardware is needed to provide the same HA solution.
I would like to see improvement in monitoring and performance statistics. When installing the product, it has limited statistics. The default vCenter statistics are available, but deep IOPS/latency and block sizing is absent. You can connect vRealize Operations to vSAN, giving much more information, but this is not available by default.
We have been using this solution for two years.
I did not encounter any issues with stability.
I did not encounter any issues with scalability. I suggest starting with a four-node cluster.
I would give technical support a rating of 7/10.
We use this solution along with another solution, so there was no hard switch.
It is easy for a VMware administrator to install.
We use it in a cloud-provider model based on usage. The end user pricing is not known.
Start with a four-node cluster.
The storage policies allow the administrator to define which VMs have specific storage requirements. For example, our critical VMs have an increased flash read cache percentage enabled, which improves their overall performance. The ability to specify policies for every kind of VM in your data center improves storage efficiency as well as performance, redundancy, and so on for specific VMs. With traditional SANs, this was only configurable at the LUN level. With vSAN, we can do it on the VM objects themselves.
One of the things that surprised me was the way vSAN handles a disk failure. It auto-rebuilds the vSAN objects when a failure has been detected. (Note: There are two kinds of failures, and this has a different effect on the rebuild timer.) But, in the end, the cluster is self-healing without any user input needed. The only thing that is affected is purely the raw storage that is lost with the drive.
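The two failure kinds the reviewer alludes to are vSAN's "degraded" and "absent" states, and they drive the rebuild timer differently: a degraded component (e.g. a dead disk) is rebuilt immediately, while an absent one (e.g. a host that is rebooting) waits out a repair delay, 60 minutes by default, in case it comes back. A minimal sketch of that decision logic:

```python
def rebuild_delay_minutes(failure: str, repair_delay: int = 60) -> int:
    """When vSAN starts re-protecting objects after a component failure.
    'degraded' -> device reported a hard failure; rebuild starts at once.
    'absent'   -> component may return (host reboot, pulled disk);
                  vSAN waits out the repair-delay timer (default 60 min)
                  before consuming capacity on a full rebuild."""
    if failure == "degraded":
        return 0
    if failure == "absent":
        return repair_delay
    raise ValueError(f"unknown failure state: {failure}")

assert rebuild_delay_minutes("degraded") == 0
assert rebuild_delay_minutes("absent") == 60
```

Either way, as the reviewer notes, the re-protection is self-healing and needs no operator input; only the raw capacity of the lost device is gone.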
The ease of managing and configuring vSAN. This means that all our VMware administrators are now able to do the daily maintenance and operations. Previously, only a couple of IT administrators were responsible for maintaining our previous storage solution and the complex tasks that came with it.
We have been using this solution for a year.
Keep a close eye on the vSAN HCL. As vSAN is continuously in development, the HCL changes and gets updated as well.
When you are planning to upgrade the vSAN version, all other components (ESX version, server firmware, server BIOS) need to be checked to see if they are all on that version’s HCL.
Scalability on vSAN is extremely easy. If the host is compliant with the prerequisites (one SSD and one spinning disk), it will be accepted by the cluster instantaneously. All raw storage will be committed to the vSAN data store and directly available for usage.
In terms of sizing the cluster, as deduplication and compression are only available on all-flash arrays, this can heavily impact the storage capacity of the vSAN cluster.
Since we chose a hybrid-configuration, the lack of deduplication and compression caused a storage growth that exceeded the limits quite rapidly. We had to scale up and address the issue in other ways.
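The sizing trap described above can be sketched as simple arithmetic. With mirroring, a cluster needs usable × (FTT+1) raw capacity, and a hybrid cluster gets no deduplication/compression to offset that; the dedup ratio below is a hypothetical planning assumption, not a guaranteed figure:

```python
def raw_needed_tb(usable_tb: float, ftt: int = 1, dedup_ratio: float = 1.0) -> float:
    """Raw capacity needed for a mirrored vSAN cluster.
    Hybrid clusters have no dedup/compression, so dedup_ratio stays 1.0;
    for all-flash you might plan with a hypothetical ratio such as 1.5-2x."""
    return usable_tb * (ftt + 1) / dedup_ratio

# Hybrid: 20 TB usable at FTT=1 needs 40 TB raw.
assert raw_needed_tb(20) == 40.0
# All-flash with an assumed 2x dedup ratio would halve that.
assert raw_needed_tb(20, dedup_ratio=2.0) == 20.0
```

This is why the hybrid choice made the cluster outgrow its limits faster than an equivalent all-flash design would have.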
Technical support is good. When encountering issues with vSAN, 99% of the time a VMware support case needs to be opened. All of the standard steps of a support case are run through. In the end, a VMware engineer will solve the issue with you and bring the cluster back to a fully healthy state.
Our previous hyper-converged system broke down due to a power failure. A new system was needed. vSAN was the logical choice, as we are a VMware Partner.
The way VMware integrated the vSAN hyper-converged storage functionalities in their vSphere Kernel is really revolutionary.
It allows the environment to scale out on storage resources when the business needs it. You no longer have to buy those expensive traditional SAN setups scaled for the “future requirements” that you had in mind at the time.
Even an IT administrator with some basic VMware experience would be able to set up vSAN in just a couple of minutes. This is one of the easiest setups I have had in a while.
We had previous solutions, but vSAN was the logical choice.
I would definitely recommend vSAN to others. The old, cumbersome, and traditional storage environments are done and belong to the past. Hyper-converged is the next big thing. It is more cost effective, easier to manage, and scaling up can be done almost on the fly.
I recommend going for an all-flash vSAN setup, if the budget allows it. Some vSAN features like deduplication/compression are only available on an all-flash configuration.
With the falling GB/$, an all-flash is becoming the evident choice. The benefits are there (more features and all-flash performance for all VMs).
Snapshot: You know, that is amazing.
In our routine work, there is repeatable testing and validation. With the snapshot feature, we are able to keep our system in a specific status, including application parameters and network settings.
That has saved us a great deal of valuable time.
Currently, the work style of our organization has changed; when we get new projects, we can rapidly handle them.
If vSAN developed a "unified system" (that can support block + file + object), it would help users a lot in facing hybrid cloud environments.
I would prefer to use a complete and deep dashboard so I can give a supervisor a way to easily monitor the status of all drives and pool tiers. I think that would be a powerful feature for the future.
I have used it for two years.
Stability is great; however, you have to notify your counterparts if the system breaks down. :-D
Many popular SDS products can support up to 1000 nodes. This is an area where I hope vSAN is improved.
Hmmm, the level of technical support depends on the engineer who supports you when you contact the call center.
Traditional storage is the scale-up type, which means there are a lot of supplier limitations and it costs too much.
Why not break through this situation? Flash is now getting cheaper and bigger. With the changes mentioned, I think this will only stimulate SDS.
Initial setup was very simple for me. It is easy to set up if you get used to using vCenter.
SDS is going to be popular and common. If vSAN wants to remain #1 in the market, offering more discounts or something to attract customers is inevitable.
Currently, our environment is running VMware; therefore, we can consider using original distributed storage that connects directly to the kernel. That would reduce latency and data transfer loss.
It is easy to use and rapidly build up.
The valuable features: it is less costly than typical storage, faster to set up than a typical SAN, and does not require “storage competency.”
During some intensive I/O workloads, on a configuration with undersized SSDs, we reached the limit of the system. When our SSDs became full (due to having too much I/O to manage), performance went down.
We have been using this solution since March, 2015.
There have been no stability issues.
I have never asked anything from technical support. It’s handled by VMware.
The setup is really easy and straightforward. vSAN is built in vSphere, and you have a dashboard to manage the system.
The pricing and licensing models are quite simple. Be careful with the sizing of the SSDs.
Be careful with the sizing of the SSDs, as they’re a big part of the infrastructure. Don’t hesitate to go to 10Gb for the network, even if it can work with 1Gb.
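The SSD-sizing advice above can be put in rough numbers. A minimal Python sketch, assuming VMware's commonly cited rule of thumb that the flash cache tier should be roughly 10% of anticipated consumed capacity; the ratio and the example figures are illustrative assumptions, not numbers from this review:

```python
# Hedged sketch: estimate the SSD cache tier for a vSAN cluster using
# the ~10%-of-consumed-capacity rule of thumb. Both the ratio and the
# example workload size are assumptions for illustration only.

def cache_tier_size_gb(consumed_capacity_gb: float, ratio: float = 0.10) -> float:
    """Return a rough flash cache size for the given consumed capacity."""
    return consumed_capacity_gb * ratio

# Example: 20 TB of anticipated consumed VM data across the cluster
print(cache_tier_size_gb(20_000))  # 2000.0 GB of SSD cache, cluster-wide
```

Undersizing this tier is exactly what the earlier reviewer described: once the SSDs fill under heavy I/O, performance drops.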
The most valuable features are scalability and speed.
The ability to throw in extra disks on the fly and extend storage with no limits is very useful. I already had to do this twice.
I think performance of my vSAN is better than that of a SAN, even though I am only working with 10 VMs per site. I don’t know how many performance hits we would get if I had more VMs.
Typically, when you get a SAN, there’s a size limit or cap; adding more storage means buying an extra shelf.
In our environment, we use Dell 530s (8 bays). The original setup provided only 4TB of usable storage from a pair, but we later added 2 extra disks per ESXi host to make a 12TB volume. I still have 4 open bays and could easily add 8TB drives there, on the fly, if I needed to.
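The capacity arithmetic behind growing a mirrored pool on the fly can be sketched as follows. This is a hedged illustration assuming FTT=1 mirroring (every object stored twice), so usable capacity is roughly half the raw pool; the disk counts and sizes are examples, not the reviewer's exact configuration:

```python
# Hedged sketch of vSAN capacity growth under FTT=1 mirroring (RAID-1).
# Each object is stored twice, so usable capacity is about half of raw.
# All numbers below are illustrative assumptions.

def usable_capacity_tb(disks_per_host: int, hosts: int, disk_tb: float,
                       replication_factor: int = 2) -> float:
    raw = disks_per_host * hosts * disk_tb
    return raw / replication_factor

# Two hosts with 4 x 1 TB data disks each -> 8 TB raw, ~4 TB usable
before = usable_capacity_tb(4, 2, 1.0)
# Add 2 more disks per host -> 12 TB raw, ~6 TB usable
after = usable_capacity_tb(6, 2, 1.0)
print(before, after)  # 4.0 6.0
```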
In terms of performance, it beats going over the wire, since the disks are on the local bus and, with caching, IOPS are plentiful.
Furthermore, we have a power limitation at our communities, and adding one more box (a SAN) would require an extra 8 amps of juice.
There is no single point of failure. Although SANs are very reliable these days, there are connections and switches to contend with; with vSAN, you can now connect 2 ESXi servers directly without needing a 10Gb switch.
Refresh cycles: my storage follows my ESX servers, so no more extra new hardware to purchase.
vSAN ROBO licenses are inexpensive to own and maintain; the enterprise version is a tad more.
I am able to utilize ESXi hardware at my ROBO sites without needing to add a SAN or NAS.
I would like to see the following:
When disks are getting full or near 70%, there’s a potential for receiving out-of-sync nodes. One node may have more content than the other, and the re-sync button starts a process that never ends. This is a known issue.
When looking at space details, the available free space shows the sum of the two nodes. In reality, it should show only half, or even less. I would like to see a gauge that marks a safe zone of under 70% utilization.
The reality is that once you go over 70%, the sync issue comes into play, performance hits are unavoidable, and the rebuild could take a long time.
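The "safe zone" gauge requested above could be approximated along these lines. A minimal sketch assuming a 2-node mirrored cluster where the UI reports the sum of both nodes; the 70% threshold comes from the reviewer's own experience, and the function name and figures are hypothetical:

```python
# Hedged sketch of the capacity "safe zone" gauge the reviewer asks for.
# On a 2-node mirrored cluster the UI reports the sum of both nodes, but
# every write lands on both, so the effective pool is half the reported
# figure. The 70% threshold reflects the reviewer's resync experience.

SAFE_THRESHOLD = 0.70

def capacity_status(reported_total_tb: float, used_tb: float) -> str:
    effective_total = reported_total_tb / 2   # mirroring halves capacity
    utilization = used_tb / effective_total
    return "SAFE" if utilization < SAFE_THRESHOLD else "RESYNC RISK"

print(capacity_status(20.0, 6.0))   # 6 of 10 effective TB -> SAFE
print(capacity_status(20.0, 7.5))   # 7.5 of 10 effective TB -> RESYNC RISK
```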
We have used this solution for over a year.
When dealing with seasoned vSAN experts, the experience was outstanding. Getting them to respond quickly is always an issue. I sometimes had to go ahead and perform a rebuild, as it was quicker than waiting for a callback.
The installation was easy.
I deployed it myself with trial and error support from VMware.
The ROI is negative; the capex is OK, but the opex is outrageous. They need to drop the opex to 20%.
See if you can really afford it and make sure you have the expertise on hand to deal with initial deployment issues.
I found that buying a new SAN from Tegile is less expensive, less complex, and inexpensive to maintain. In addition, their support is the best in the business.
Simplified datacenter failover in the VMware environment is the most valuable feature of this product.
Previously, when using SRM (VMware Site Recovery Manager) we’d have to configure VMware objects (VMs) for failover. With vSAN, all objects in the datastore are replicated and can be failed over using the built-in high availability feature.
vSAN significantly reduced the complexity of our data center failover along with the data center design requirements.
vSAN health reporting needs some work. There were a few instances where the vSAN would report health issues with disks, even though it was functioning correctly. I believe VMware stated this would be corrected in future versions.
We also had some issues with reinstalling hosts that had vSAN enabled. JBOD disks would retain the vSAN configuration information and would need to be manually cleared to allow for the new vSAN instance to be enabled.
I tested it over a period of five months.
We didn’t have any stability problems. Once configured, vSAN operated without issue.
We didn’t have any scalability problems. vSAN scaled quite well.
Technical support is excellent. VMware provides top notch support.
This was our first time moving to a HCI storage solution.
Setup was straightforward. With ESXi as the base, it was quite easy to then enable vSAN. We used the just a bunch of disks (JBOD) configuration and vSAN consumes those disks easily through the vCenter web GUI.
vSAN is not cheap. Weigh the benefits of a reduction in complexity against the cost.
We did not evaluate any other options.
Use the GUI rather than scripting for the vSAN implementation, at least for ESXi 6.0. We found that it was much quicker (and still fairly simple) to implement via the GUI. I’ve heard this may have gotten better in ESXi 6.5.
The valuable features are:
We can deploy new servers faster than ever. Our capacity to grow is bigger than when we had SAN storage dependency. We are now able to deploy a pool of QA virtual machines for testing purposes in minutes rather than in hours.
I would like to see faster re-sync and recovery times after a host failure. It’s so difficult to restore a normal situation after a failure. There is a large amount of data to re-sync after a host failure. We have a 1Gb vSAN network, and the restore process can last several hours or days.
I would also like to see a granular sync system, rather than the current “all data” transfer.
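A back-of-envelope calculation shows why resyncs over a 1Gb vSAN network last hours or days. The sketch below assumes decimal terabytes and an illustrative 70% link efficiency; real resync throughput depends on many factors, so these are rough figures, not measurements:

```python
# Hedged sketch: rough resync-time estimate after a host failure.
# A 1 Gb link tops out at ~125 MB/s in theory; real throughput is lower,
# modeled here with an assumed 70% efficiency factor.

def resync_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    bytes_total = data_tb * 1000**4            # decimal terabytes
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return bytes_total / bytes_per_sec / 3600

print(round(resync_hours(5.0, 1.0), 1))   # 5 TB over 1 Gb: roughly 15.9 h
print(round(resync_hours(5.0, 10.0), 1))  # same data over 10 Gb: ~1.6 h
```

This is also why several reviewers in this collection recommend going straight to 10Gb networking even though vSAN can run on 1Gb.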
I have been using this solution since 2014.
During normal activity, the vSAN’s behavior is excellent. Performance and stability are awesome.
We have only encountered some issues related to the host update process, because it increases the data movement between cluster hosts and can end up saturating the network.
The vSAN solution has scalability inside its core. Although it has a widely supported HCL, you have to choose the new components when adding nodes to ensure that you won’t have any bottlenecks. With our vSAN installation, we didn’t encounter any issues like that.
We haven’t required help from VMware technical support yet. At the beginning, there was not much information about troubleshooting available on the internet.
This product is now more mature and there is a lot of information available, such as VMware or independent blogs and forums, that help with vSAN problems.
We used the traditional solution of a pool of hypervisor hosts with a common storage attached (iSCSI class). It did the job until we had scalability problems that were related to storage.
The cost of buying a new iSCSI storage was more expensive than rethinking our current solution. For this reason, we changed to vSAN technology.
The installation was as complex as any iSCSI scenario can be. However, it was radically simple in terms of the networking part.
In our case, we passed from our standard virtual switches to distributed ones in order to meet the vSAN’s requirements. We had to take into consideration the disks/RAID controller configuration. We chose an acceptable balance between performance and cost, creating a RAID 0 with each disk of each server on the cluster and made them available for vSAN.
We adjusted the pricing and licensing costs based primarily on the physical processors per server. We chose each node of the cluster with one physical processor since vSAN is licensed per processor. We calculated the performance requirements of our entire virtual platform to decide if one processor solution was a good decision.
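The per-processor licensing math described above can be sketched like this. The price per CPU is a placeholder for illustration, not an actual VMware list price:

```python
# Hedged sketch of per-CPU vSAN licensing math. vSAN is licensed per
# physical processor, so single-socket nodes halve the license count of
# a dual-socket design. The price below is a placeholder assumption.

def vsan_license_cost(nodes: int, sockets_per_node: int,
                      price_per_cpu: float) -> float:
    return nodes * sockets_per_node * price_per_cpu

single = vsan_license_cost(nodes=4, sockets_per_node=1, price_per_cpu=2500.0)
dual = vsan_license_cost(nodes=4, sockets_per_node=2, price_per_cpu=2500.0)
print(single, dual)  # 10000.0 20000.0
```

The trade-off the reviewer describes is exactly this: fewer sockets cost less to license, provided one processor per node still meets the platform's performance requirements.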
We didn’t evaluate other options, except for the line of traditional iSCSI storage solutions. We wanted to continue working with the same virtualization-based system. We wanted to get a solution with the smallest possible footprint. The vSAN solution met these requirements.
This is a very good solution if you have an adequate budget for the related requirements and recommendations, e.g., a 10Gb network. It has a wide catalog of uses that fulfill the highest performance requirements at all levels. Without any doubt, I recommend this solution.
The valuable features are:
The solution reduced the deployment administration of the storage components.
The areas of improvement are:
We have been using this solution for over a year.
I did not encounter stability issues because I used certified hardware and installed the required firmware/drivers.
However, I have the following issues with stability:
There have been no scalability issues at this stage.
Technical support is strong in their technical knowledge.
I have deployed several Nutanix and VSAN systems. I have never had to switch between products. Being a technical consultant, our customers generally have decided on the preferred technology before they engage me to design and implement their solution. I openly discuss my view on each product when asked.
I found challenges in setting up a VSAN Cluster that were not related to VMware VSAN itself. They were related to server hardware and network configurations.
Licenses are expensive wherever you go. Many people don't appreciate the long-term savings with a technology like vSphere and VSAN, and therefore complain about the up-front costs.
I would prefer if VSAN were free with the Enterprise edition. It would make its adoption more palatable.
I have deployed Nutanix and VMware VSAN clusters.
RTFM and have realistic expectations about the product.
The most valuable vSAN features are:
We are able to deploy vSAN clusters to remote locations very easily at a fraction of the cost. This saves us time and money. We don’t have to worry about stability issues.
Support for iSCSI access would be great, but this may be supported in the latest versions of vSAN.
We have a few physical servers in our environment and it would be great, if these servers could also access the storage in vSAN. With vSAN iSCSI support, we would be able to connect our physical servers to vSAN as well.
We have been using this solution for two years.
In terms of stability, vSAN is very resilient, self-adapting, and self-healing. In the two years that I’ve worked with vSAN, I haven’t experienced any vSAN stability issues.
There haven't been any issues with scalability. Adding additional storage was as simple as inserting a hard drive into a hard drive bay or adding an additional server node to the data center cluster. That was all we had to do, and vSAN auto-configured everything.
We had a VMware vSAN engineer present to set up our very first vSAN cluster. There was nothing to it, but it was great to have an expert on-site for questions and to provide us with training. Other than that, we have never had to log a support request with VMware for vSAN.
We didn’t use a virtual SAN solution previously. We just used traditional, and very expensive, SAN storage arrays. We moved to vSAN because our budget wasn’t getting any bigger, but our storage requirements were increasing.
The setup was straightforward. It literally took a few mouse clicks to setup vSAN.
You get better value for your money with a vSAN solution than with a traditional SAN, with a lower TCO.
We looked briefly at alternatives, but nothing stood out like vSAN. Nutanix was another option, but surprisingly, it would have cost us more.
Get a vSAN specialist to come out and spec your vSAN cluster according to your requirements. Have him configure it and test that it is performing properly.
Some of the valuable features of this product are:
It has helped us reduce the waiting time to provision new storage and build new VMs, allowing us to meet customer SLAs.
Some areas where this product can be improved are:
I have used this solution for around 14 months.
We did see some backup failures due to .vmx lock files in the vSAN datastore.
We have encountered some scalability issues and got a couple of performance tickets.
The technical support from VMware is good.
We were not using any other solution previously. This is our first attempt at a software-defined storage system, and Nimble is the product we use for testing purposes.
It was straightforward. There is a single checkbox, provided the prerequisites are met.
The pricing policy varies. VMware brought this in as a free upgrade, so we did not evaluate any other options, but Nimble is the next one we will consider.
vSAN 6.2 has a lot of new features which can be good for small and medium-sized server and VDI infrastructures.
Storage policies and I/O are the most valuable features. The storage policies are useful in my job to create my own policies and prioritize some apps over others, and create high availability for some virtual machines.
It increases the performance of the virtual machines and reduces the TCO for storage deployment.
Hardware compatibility needs to be increased to be able to use more RAID controllers available on the market.
I have used it for three years.
I have not encountered any stability issues.
I have not encountered any scalability issues.
Technical support is 8/10.
We previously used another solution. We switched because it reduced the TCO.
Changes have been made in version 6.5.
Before choosing this product, we evaluated EMC ScaleIO.
It is easy to design and easy to implement.
Centered on the VMs, it provides simple and centralized management from a single console. VMware vSAN is focused on the virtual machine and not on a datastore or LUN. This allows it to adapt to the workload faster, with specific storage policies for virtual machines, without needing to change the storage as in a traditional environment.
With a single datastore for virtual machines, the productivity of IT administrators has improved, because they do not need to work with many LUNs and storage arrays.
The web console, VMware vSphere Web Client, is not based on HTML5, which makes it difficult to manage. It slows down and page refresh is not fast; time is wasted. I know that vSphere 6.5 is already based on HTML5.
I have used it for one year.
I did not encounter any stability issues, as long as it complies with the compatibility matrix.
I have not encountered any scalability issues; very easy to scale.
I have not encountered any problems; no calls to support, but support is very good.
We previously used a traditional environment. We switched because hyperconverged systems are very easy to deploy, they scale, and they perform well.
If you do not know about this technology, you cannot put it into production easily, but I know about vSAN, so it was very easy to deploy a vSAN environment.
It's a bit pricey. Indeed, there is hardly any price difference compared with a traditional setup, but it makes that up with the management and ease of use.
Before choosing this product, we also evaluated HPE VSA, Nutanix, and DataCore.
Both vSAN and Nutanix give very good performance, but when the infrastructure runs on VMware, vSAN support is simpler; with Nutanix, you have two support vendors if the hypervisor is VMware. Nutanix has a proprietary hypervisor based on KVM.
The most valuable features of the product are its basic functionality and that it's all so simple to implement. The performance is also another very useful feature :-)
We are a partner and we're using Virtual SAN for more than half of our VMware-based customers. We use it as the basis for DMZ environments, production environments, and DR sites. It makes it a lot easier to sell VMware solutions and to make the customer happy.
I'm part of the Beta program, so I know what's going to come up in the next version.
Room for improvement would be support for more NVMe-based devices and especially firmware combinations; that's sometimes a problem. Also, support for special SAS controllers. We have some special customer settings where we solved the customer’s special configuration nearly two years ago, and now it's no longer supported officially for the newest release. There’s room for improvement there.
We have used VMware Virtual SAN since the beginning of version 5.5. It is awesome to see the evolution of the product. We implemented it at a customer site since the first version.
We had some purple screens of death at the beginning, but that was only due to hardware problems. Today, it's very stable and nearly rock solid; so, very nice.
Most of our customers are using it for up to eight hosts in a cluster. Normally, we know - and our customers know - that you can easily scale up to 64 machines, but today, up to eight is absolutely enough.
Technical support is very good. I have needed them only twice. There was a driver firmware issue; that's all. We extracted all the log files and prepared them for support, and they were able to identify the problem within about four to six hours; really good.
We were pretty happy with the release before, the VSA version, but it was discontinued. We have many customers who implement the VSAN ROBO solution. We are part of the roadmap discussion and we're going to know what comes up next, so we're pretty happy with the new release.
Initial setup and implementation was pretty easy. It's all about the design and all about the thinking process at the beginning of a product; so implementation was pretty easy.
I would give them a perfect rating if the VMware driver issues, especially with NVMe devices, were fixed. Then I would absolutely agree with a perfect rating, because we've set customers up with VSAN Hybrid, we have customers using VSAN All-Flash, and it's so simple for the customer to implement and troubleshoot. It's all about the design and thinking process at the beginning of a project. That's why we are there as a partner.
My advice is to definitely test it out; not listen to all the marketing stuff. Test it out on real-life environments, and especially test it out on newer systems. Don’t test it out on five- or six-year-old servers, because you won’t be able to get the best performance.
I think performance and cost are the most valuable features of VMware Virtual SAN. We're standing up an entire virtualization environment for VDI and RDSH through Horizon View. When we compared the cost of a traditional SAN versus VSAN, that's what actually made it all possible for us. We're able to deploy Virtual SAN for a fraction, about 1/5th, of the cost of our SAN. It was crazy. The reduced cost made it very palatable, and then the actual performance made it that much more functional.
I'm from the cloud virtualization side of things, so consolidating the data allowed us to set up the VSAN instead of a traditional SAN, and allowed us to do faster deployments without having to interact with as many teams. It's simplified our deployment methodology a fair amount, and it gave us the better performance we're looking for from a SAN perspective.
Beyond that, it didn't change a lot how we function, necessarily, but it gave us a better tool, or a tool specific for our use case, or something that opens up the door for more. I think that the product itself is going to be paramount in other expansions and other aspects of the corporation. We'll likely keep expanding it into general computing and servers across the globe. It might help with some of the other deployments, cache centers and data centers, so that we don't necessarily have to buy SAN. It gives us the performance for the cost that really makes it attractive overall. Beyond that, I don't know.
I know it's coming, but I'm really excited for the encryption. I know it's on the all-flash, which is fine, because we're migrating to that anyways. Nonetheless, the encryption would be great for at-rest data, because I don't want to rely on a third party. I don't want to get some self-encrypting drives or anything like that; drives me nuts. That would be very good to get.
I'm looking forward to being able to do VSAN shares with other clusters; sharing the VSAN storage outside of its existing cluster so that we can actually move data a little bit easier between them, or allocate VMs across the entire frame and all the different VSAN storage. I want to try to make more use of the VSAN storage and do some better vMotions across hosts and clusters. That, I think, would be the best.
I like its stability. I think we probably need to get an additional node in there. Right now, we're running some 4-node VSANs. We probably should be at a 5 with a 2-RAID parity on that. Four is okay; it's stable, it's efficient. I haven't really run into any issues with it.
Some of the earlier versions were a little rough; we saw some weird, crooked behavior. Beyond that, it's been solid, and it just works. No issues yet.
Our early deployments of VSAN ran into a few issues with performance. Some of the nodes we installed initially had very high I/O utilization when nothing was occurring on the disks, likely related to some replication tasks. Additionally, our fault tolerance was low using just a four-node VSAN (giving an N+1 configuration). We really should be at N+2 (which apparently takes six nodes, not five…).
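The parenthetical node-count math can be made explicit. A hedged sketch based on published vSAN minimums as I understand them: RAID-1 mirroring with FTT failures to tolerate needs 2×FTT+1 hosts, while the erasure-coded RAID-5 and RAID-6 layouts need 4 and 6 hosts respectively, which is why N+2 with erasure coding takes six nodes rather than five:

```python
# Hedged sketch of minimum host counts per vSAN protection scheme.
# RAID-1 mirroring: 2*FTT + 1 hosts (the +1 acts as witness/quorum).
# RAID-5 (3+1) and RAID-6 (4+2) erasure coding have fixed minimums.

def min_hosts(ftt: int, scheme: str = "raid1") -> int:
    if scheme == "raid1":
        return 2 * ftt + 1
    if scheme == "raid5" and ftt == 1:
        return 4
    if scheme == "raid6" and ftt == 2:
        return 6
    raise ValueError("unsupported scheme/FTT combination")

print(min_hosts(1))           # 3 hosts for FTT=1 mirroring
print(min_hosts(2))           # 5 hosts for FTT=2 mirroring
print(min_hosts(2, "raid6"))  # 6 hosts for RAID-6 erasure coding
```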
Performance since then has been outstanding.
We're actually scaling out right now from several 4-node VSAN clusters to - I think we're going to go to - some 8-, and then eventually 12-, node VSANs. That's one of the really nice parts about it; we'll just be able to scale out. The only downside I think I have with it from a scale perspective is, we've got some hybrid VSAN right now. That's what we all started out with. We really liked the all-flash VSAN arrays that you can get, so we're doing that. However, we can't merge the two, so we have to create whole new clusters for the all-flash VSAN. That makes scaling a little bit rough there, but I don't think that will be much of an issue going forward, because flash is pretty inexpensive now and that's probably going to be the standard from here on out.
I think we used technical support earlier on. I didn't personally, but I know our engineers had to work with technical support on some issues with a couple of our VSAN nodes kind of going crazy when they were doing some initial configuration setup. They were just sitting there idle, and one of them would spike up; I don't know if it was trying to replicate data or do something odd. They worked with the support team, got it resolved and addressed it, upgraded to a new version and haven't seen any problems since.
I think there could always be improvement. Whenever we interact with the VMware technical support, it's usually because we have issues that aren't easily solved. We've got our own set of engineers that are really intelligent guys, very capable individuals. Whenever we call in, we always get the initial first line of defense, "Hey, give us your logs." Okay, here's our logs. And then they ask us silly questions and basic troubleshooting and, "Did you do this?" Of course we did. I guess the initial support services guys are just that basic line of defense. They don't always really understand the people that they're dealing with nor have that knowledge of the customer base. That knowledge set they're working with makes it difficult to interact with them a lot of times and getting issues escalated. It's always been kind of a tricky thing for us.
We've been using traditional SAN for a long time. Our engineers had to do testing for an initial project with some developer builds, and they wanted persistent VMs with humongous amounts of storage, because they're crazy people. The goal was to give them virtual machines to replace all the physical machines they had, because whenever they mess up a machine and want to rebuild, it takes a long time. You have to rebuild the whole machine, give it back to them, and then they have to build it out all over again.
Using the VDI solution, Horizon View, together with VSAN made it actually cost-effective, because if we were to try to provide the amount of storage they were looking for on the VMs with a traditional SAN, it would have cost a lot more than anybody was willing to spend or endure. VSAN made it very possible and gave us the performance needed; the VMs even perform better than the physical boxes they were using, which is good. It all helped.
At the time, we did not look into other solutions. It was either SAN or VSAN. From a SAN perspective, we have a partnership with HP for some 3PAR storage, and we have some EMC storage as well. When it comes to VSAN, it was included in our ELA that we agreed with from VMware's perspective. We figured, if we're paying for it, we might as well try using it. It worked out really well.
When selecting a vendor like VMware, a lot of the decision comes down to functionality. Functionality, performance, and cost, those are the usually big factors. A lot of times, my company's really focused on cost, which is a pain in the butt. We're a very big VMware shop to start with, so whenever we can use a product that can simplify deployments, simplify management, and integrate with everything that we already have, that makes it really desirable. That's I think what VSAN did; it really simplified the way for us to get our storage for virtual machines and give us that performance and at a lower cost. That satisfied all the different aspects we look for in products.
I gave VSAN a perfect rating because it's been great. We really haven't had any problems with it; it's been solid. I haven't had to deal with the SAN guys, so that makes my life much better. We get much better performance out of it than I would have ever thought. We get all the IOPS we need from it; we get dedupe on the all-flash array. It's my own little SAN that nobody else gets to mess with. I think it's fantastic. I just love it, I really do.
If you have the budget or it's available to you, definitely go for it, because it's going to save money over the traditional SAN.
The only caveat I ever give to anybody about it is that the initial investments are a little rough. You can't just build a 1-node VSAN; you can do a 2-node VSAN, but, boy, no one ever wants to do it. To really get to a point where you get the data redundancy and the high availability, you need a 4-node VSAN, which can cost a fair amount for that initial investment.
If you're trying to do something small, it doesn't make a lot of sense, but if you're in a larger organisation like we are and you have to do a lot, this is a fantastic tool.
The most valuable features of VSAN are consistent and increased performance with a linear cost which helped us in our data center.
Using VSAN Observer, we were able to see exactly what the VSAN environment is doing on a day to day basis, so we've gotten to really enjoy that interface.
The benefits that we're seeing are directly related to our customers. They have better experiences using their EMR and practice management systems.
The manageability is better, it's definitely fully integrated into the VMware stack so it's very easy to use from the web client.
The features I am most looking forward to are the performance monitoring capabilities of VSAN Observer being transitioned into the web client. That's what I'm really looking forward to.
UPDATE: Capacity and performance monitoring is now available in the web client and works well in 6.2. I am looking forward to DARE(data at rest encryption) in the next version.
We have used vSAN for approximately two years.
We had one issue with deployment, which was related to using the legacy vSphere client to place the hosts into maintenance mode. It is easily resolved by using the web client for maintenance.
The stability exceeds what we're currently on from a standard SAN platform.
The scalability is much greater than the current SAN that we're on, because there we're technically locked in to a certain number of disks and a certain level of performance, so the scalability is drastically improved. We currently have a four-node cluster and we're going to be incrementally moving off of our legacy SAN.
UPDATE: We expanded our cluster to five vSAN nodes; however, we are now in the process of retrofitting four legacy hosts for a total of nine vSAN nodes.
The technical support staff were very responsive. Specifically, when patching hosts, we inadvertently caused vSAN data evacuations during the middle of the day. If you enter maintenance mode with no data evacuation, that wouldn't happen, and they were able to get to the root cause and explain why it happened.
We made the jump to VSAN primarily due to cost renewals going up year over year on traditional platforms. The software and hardware costs that we see now are just linear; we know what they're going to be.
We actually have been with VMware for quite a while so we made the choice to use VSAN because of that partnership that we have had over the years. We're fully focused in VMware and we love the product. That's why we chose VSAN.
I wasn't familiar at all with VSAN at the time, so there was a little learning curve there but outside of that it would be comparable to setting up a legacy SAN environment.
Just by incrementally increasing the cost of our servers, plus the licensing, we were able to linearly scale our environment as opposed to doing forklift upgrades.
We evaluated other all-flash arrays and hyperconverged infrastructure.
Everybody wants to say 10, and I would say it's going to be a 10. I love VSAN, but I would say it's probably an 8 right now; there's room for improvement. It's constantly being worked on, and I think it's going to be the storage platform going forward.
For colleagues looking into VSAN, I would recommend looking into the VSAN Ready Nodes. They're pre-configured, and you can customize your build to whatever you want without necessarily having to build your own.
We aren't currently using the Ready Nodes, but I could see where a Ready Node would be beneficial for deployment. The time to deploy would be improved using a Ready Node.
Peer reviews and peer contents are amazing things to be doing. That's part of the reason why we come here. We want to maintain our relevance, industry wide, and so we always constantly bounce ideas off of other peers in the industry.
We have a private cloud that we host in our data center. All of our servers are on VSAN and we have customer servers that we host in our data center on our hardware that is on top of VSAN.
Datastore management: you don't have to carve out LUN after LUN and then map datastore after datastore. Good performance.
It makes it really modular, too, so we can grow as needed. That's actually the case I submitted for this talk: another customer that we host in our rack at our data center wanted a small entry footprint but then wanted to grow as their business acquired other businesses.
The benefit is being able to grow as needed. We don't have to drop half a million on a SAN for storage that we may or may not use, and it eases the pain of managing a lot of storage. You still have to deal with the networking of it, making sure that everything is networked together, but it radically simplifies the storage administration piece.
Some of the problems I have with traditional SANs when administering them: whenever I do edit operations, I have to be extremely careful. It requires a lot of planning up front to deploy the LUNs and to make sure everything matches all the way through from end to end, so that when I decommission a datastore on the SAN after I'm done using it, I'm not turning off the wrong one and taking down the entire environment. I don't have to deal with that, because it's just one datastore and it does what I need it to do.
Another big use case for us is Horizon View for VDI customers. We use it internally, and the contrast between our internal deployment, which runs off an NFS store, and a VSAN deployment is like night and day. Our internal one is kind of slow and kludgy. It's not a central part of our day-to-day work, so it doesn't impact us as much, but I can see how big the difference is between the performance of a Horizon View deployment on an NFS target and how tightly it works with VSAN, and how much performance and throughput VSAN delivers with the read and write caching on the flash drives. We haven't gotten to play much with the all-flash VSAN yet, but I'm sure we will soon.
The dedupe is awesome. The stretched clustering is crazy, in my opinion; it's really cool. We've been talking about it internally. We have a lot of school districts, and it actually makes a lot of sense for a school district, because they have the fiber runs between buildings, so they can hit the five-millisecond latency and ten-gigabit bandwidth requirements of the network, and I feel it would be a good use case for them. We have to look at the reality of it, of course, because it was announced just yesterday, but it's really exciting to see some of this stuff, especially dedupe. Dedupe for root would be really cool. It's really taking that mindset I see in a lot of people, that VSAN isn't enterprise-ready, and putting it to rest.
We are a partner with VMware and we do deployment services. Professional services are a lot of what we do, and we're growing our managed services to incorporate VMware monitoring and alerting, both proactive and reactive, to stabilize customer environments and give them the best performance they can get out of their products.
Starting out there were some stability issues, but I don't see them the same way I did. There were bugs, there was firmware, the HCL seemed a little fluid, but things have stabilized significantly. There haven't been any major outages that I would say weren't our fault or weren't due to a configuration error somewhere in the stack. And the best part is that even when we did have stability issues and outages, VSAN never dropped data.
It wasn't until we had gone through five or six dirty reboots that it started to drop objects from the metadata tables, so we couldn't address or see the objects, but they were technically still there; there was just no owner for them. If we had gone in with a higher-level engineer who knew how to take ownership of them back, we would have been able to recover them, but it was a VDI deployment so we didn't really care; we just scorched earth and began again. Data resiliency is something that VSAN evangelists really talk about, and it's something the product really delivers. You're not dropping data, as long as you stick to the HCL, of course.
Scalability is good. We haven't had to scale a lot. We scaled from a three-node to a four-node cluster, and we're deciding whether to go to five nodes; it's pretty easy. Once you have the networking piece set up, that's one and done. It's an upfront cost, and then you just bolt everything on the side, because you blast out the same config, same quotes, same everything. Get the exact same hardware, stick it on, and it scales out.
Once you get to the VSAN team, they know what they're doing, bar none. They are incredibly receptive. They're very good at giving you root cause analyses and helping you work through issues.
We've been a strong VMware partner for a long time. My direct boss is John Nickelson; he's a vExpert and a huge storage person. He really identified the value VSAN was going to bring and how impressive the technology was: this decoupling from the big SAN box that sits in the corner. It really makes a lot of sense for certain use cases.
There are use cases where a traditional SAN is the right move, if you want the capacity and so on, but VSAN really helps, especially with VDI. That was our biggest play initially: Horizon View mixed with VSAN.
We usually do a four-node deployment. That's, in our opinion, the best configuration. Three nodes is the minimum, but we like four so we can do rolling upgrades without losing our n+1 fault tolerance. When we initially started using this (technically before I started working there), we'd roll it out and just take advantage of the performance improvement it made. Getting the right cache with the flash drives allowed fast spin-up and spin-down, fast log-in times, and fast application delivery. It really makes a difference.
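The four-node reasoning above can be sketched with a little arithmetic. This is an illustration of the reviewer's rule of thumb, not official VMware sizing guidance: mirrored (RAID-1) placement for n failures-to-tolerate needs 2n+1 hosts, and one spare host preserves that tolerance while a node is in maintenance mode for an upgrade.

```python
def min_hosts(ftt: int, rolling_upgrade_headroom: bool = True) -> int:
    """Rough minimum host count for a mirrored (RAID-1) vSAN cluster.

    Tolerating `ftt` failures needs ftt+1 data replicas plus ftt witness
    components, i.e. 2*ftt + 1 hosts.  An extra host lets one node sit in
    maintenance mode without dropping below that tolerance.
    """
    base = 2 * ftt + 1
    return base + (1 if rolling_upgrade_headroom else 0)

print(min_hosts(1, rolling_upgrade_headroom=False))  # 3 -- the supported minimum
print(min_hosts(1))                                  # 4 -- room for rolling upgrades
```

With FTT=1 this reproduces the reviewer's numbers: three nodes is the floor, four nodes keeps n+1 tolerance intact during a rolling upgrade.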
If you're looking at a traditional SAN you're already looking at a lot of money anyway. So, VSAN is a contender in a lot of cases.
To my knowledge we never did Citrix or anything like that, and we didn't deploy VDI on traditional SANs, so I think we have done pretty much all VSAN coupled with VDI, because it just makes so much sense.
Obviously, it saves rack space, and that's something you have to consider. It's important because you have to pay for power and cooling, and if you need more cabinets because you have another SAN coming in, that's more money for capacity you may not be fully utilizing. VSAN really helps with that efficiency; your rack space is doing as much for you as it can, because you have compute, storage, memory, and in some cases even GPU offload, all in a small footprint in your rack. We have three of the exact same deployments stacked right on top of each other. Two of them are customers' and one of them is ours, one on top of the other, where otherwise it would probably take a full rack rather than just a quarter of a rack, and that's very beneficial.
I'd probably rate it a seven right now. In six months it'll probably be an eight or a nine. There are growing pains, obviously; it's a fairly new product, so there are baby steps to deal with, like getting the HCL right. With the Ready Nodes work they've been doing, they've pretty much replaced the HCL; that was actually our initial offering for VSAN. That simplifies the process a lot and helps make sure you're not deploying something that isn't going to work.
You have to size the compute, the memory, and the storage, right? You have to make sure all of those make sense so that you can hit them within the confines of VSAN. You only get the one flash disk per disk group, and you want to make sure you're hitting at least ten percent flash relative to your magnetic disk capacity, so you have to evaluate it. Make sure that it makes sense, and don't discount it just because you think it's not enterprise-ready or that it's too expensive.
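The ten-percent figure above translates into a one-line sizing estimate. The 8 TB input below is a hypothetical example, not a number from the review; treat this as a sketch of the rule of thumb, not a sizing tool.

```python
def flash_cache_needed_gb(consumed_gb: float, flash_ratio: float = 0.10) -> float:
    """Flash cache to provision, per the ~10%-of-consumed-capacity
    rule of thumb the reviewer mentions."""
    return consumed_gb * flash_ratio

# e.g. roughly 8 TB of anticipated consumed magnetic-disk capacity
print(flash_cache_needed_gb(8000))  # 800.0 GB of flash across the cluster
```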
Some of the most valuable features of VSAN include the ability to provision and grow your storage as you need to, without a very large upfront cost. There's also the ability to carry the licenses along as part of a refresh; with traditional storage systems, you end up losing that investment at every refresh, which usually occurs every three to five years.
The great thing about VSAN in terms of resiliency and recoverability is that with policy-based storage you can actually decide what level of recoverability and redundancy you want. It's no longer a case of trying to figure out complex RAID systems or anything like that: you set the policy, and you get the level of redundancy and resiliency you asked for. Something that has been in the enterprise space for quite some time, with some of the more expensive arrays, can now be brought down into the commercial and even the mid-market space. That's pretty amazing.
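The policy knob described above has a direct capacity cost that's worth seeing in numbers. A minimal sketch, assuming the simple mirrored-placement case: with failures-to-tolerate set to ftt, mirroring keeps ftt + 1 full copies of every object, so raw consumption scales linearly with the policy.

```python
def raw_capacity_gb(usable_gb: float, ftt: int) -> float:
    """Raw capacity a mirrored (RAID-1) storage policy consumes.

    With failures-to-tolerate = ftt, mirroring keeps ftt + 1 full
    copies of every object, so a 100 GB VM disk at FTT=1 occupies
    about 200 GB of raw capacity (witness overhead ignored here).
    """
    return usable_gb * (ftt + 1)

print(raw_capacity_gb(100, ftt=1))  # 200.0 -- two copies
print(raw_capacity_gb(100, ftt=2))  # 300.0 -- three copies
```

The point of the per-VM policy model is that this multiplier is chosen per workload, not once for the whole array.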
For scalability of VSAN, you've seen the blog posts out there; they've taken it up to four million IOPS. In terms of scalability, we haven't seen any roof, any limit, any ceiling. We are extremely surprised that VSAN has been able to keep up with solutions that are four or five times more expensive.
Technical support for VSAN has been surprising in a good way. In our experience, very few vendors take full ownership of a problem when it occurs. Whenever there is a VSAN issue or question, as long as the hardware is on the hardware compatibility list, VMware takes full ownership and escalates with the hardware vendor. It's really one throat to choke. Where else can you say that?
We've been a VMware partner for quite some time. When VSAN was announced, we were working with the beta. We decided this was the track we wanted to follow because we believe in the software-defined data center. Everything is becoming software-defined; for us not to do the same thing with storage when we're doing it with networking and compute just doesn't make sense. The same kind of savings that server virtualization brought, the same flexibility and agility, can also be applied to storage, so it seemed like the next natural place to go. For us as a VMware-focused partner, it made sense to get on board with VSAN from the get-go.
Before VSAN, we were using a whole host of different technologies, because there were a lot of corner cases where we would have to use an enterprise array, and other times we would end up using something a little smaller. VSAN has not only bridged some of the gaps we had in storage, it has allowed us to replace a lot of solutions that didn't really fit. Now we have a more custom fit to offer our customers, addressing about 80% of customers' needs.
From a cost-benefit perspective, especially in regard to total cost of ownership (TCO), CFOs and CEOs are looking to cut down the cost of their storage systems because storage is becoming a larger part of the overall IT budget. That uncontrolled cost runs along the same lines as the uncontrolled growth in data. So when more and more of the IT budget is going to storage, you have CFOs and CEOs looking to control those costs. VSAN allows us to give them an enterprise-class solution at usually half the cost of traditional arrays.
I would easily give it a 9, because 10 would be perfect and nothing is perfect. After the next few releases, who knows? Maybe that 10 will happen.
For people evaluating bringing VSAN into their environment, one of the most important things is to get an idea of what the performance requirements are and what workloads are going to go into that environment. That's best done with an assessment. Right now, VMware partners are providing an assessment service for VSAN. It's a great jumping-off point to make sure the VSAN implementation goes as expected and delivers an immediate win.
It's not a storage array, which is a very valuable aspect of it, and its maintenance isn't priced like a traditional storage array's. For me, the biggest leap is that there's a compelling cost reason to step into it. You don't have to make a snap decision and abandon what you have; I can keep what I have and dip my toe into VSAN without risking an all-or-nothing decision.
VSAN is really simple to manage. Its GUI is part of the ecosystem, so it looks and feels like the rest of VMware. A VMware engineer or operations person can manage and provision storage without having to touch an array, which is generally a higher-profile skill set, so there's a cost reduction through headcount.
VSAN manageability is much easier because it's part of the vSphere world, so it looks and feels like any other object people are used to seeing metrics on, and there have been great improvements in management. In 5.5, there was a bit of a lack of information; in the newer versions, there's a lot more data about what's going on in the system, right in the GUI, easily consumable.
The features I'd like to see in future releases of VSAN are around backup and recovery. There is a great way to replicate data now, but I'd like to see them focus on making off-site recovery from snapshots part of the core product.
It's very stable. If you take the time to build the system correctly and do your research, then once it's in place it stays very stable and performs as advertised.
I look at two different ways of scaling the system: one for speed and one for mass. It scales for mass based on what size of disk you choose, and it scales for speed based on solid-state drive size. Those are two different avenues that both work well for us.
I haven't had a technical support case open, but we do watch the forums and try to avoid issues and problems based on what's in the publicly available space. That's something VMware has always done really well: making issues public so we can avoid them.
We chose it from a cost perspective. In media we are always looking to save money; it's a publicly traded company, so the money I give back is smiled upon. We saw a way to stop paying maintenance on expensive systems and to run on a system that performs on par with what we already own.
With a traditional storage array, you pay maintenance based on the purchase price of the array plus any software you bought with it, so that residual number is high: if you paid a million dollars for the machine, you may have to pay $200,000 for maintenance at some point. With VSAN I'm paying server-based maintenance, and that's a much lower number.
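The maintenance comparison above is easy to make concrete. The $1M array and $200K maintenance figures come from the review; the 20% rate is implied by them, and the $250K server spend below is a purely hypothetical number for contrast.

```python
def annual_maintenance(purchase_price: float, rate: float) -> float:
    """Annual maintenance billed as a fraction of the purchase price."""
    return purchase_price * rate

# Traditional array: $1M purchase at an implied ~20% rate -> $200,000/yr
array_maint = annual_maintenance(1_000_000, 0.20)

# Hypothetical server-based spend at the same rate -> a far smaller bill
server_maint = annual_maintenance(250_000, 0.20)

print(array_maint, server_maint)  # 200000.0 50000.0
```

The reviewer's point is not the exact rate but that the maintenance base shrinks from an array's purchase price to a server bill.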
The top criteria we looked at when considering VSAN were performance and cost. We wanted to make sure we could deliver the performance people are used to, using a system that costs less than a traditional array model. We did not look at other vendors because there really isn't another vendor doing this. There are people that come close, but a traditional hyper-converged box includes a bunch of things I don't need. With VSAN I have the technical backing from VMware to back-stop the product; it does what I need and no more, so there's a cost saving from not buying features and compute that I don't need.
I would certainly give it an 8, and I would split that into two parts. After the initial configuration of VSAN, once the system's in place, it manages and runs without much attention. That's where it's really shining at the moment: once it's in production, it doesn't require a lot of care and feeding.
My recommendation is to make sure you've got a hardware vendor who guarantees that the equipment you get is on the HCL, the compatibility list of what VMware supports; that's important to a successful VSAN deployment. Taking the time to do that, and to install and build the system correctly first, will give you years of good results. Not doing that is a headache.
When looking at any new technology, having peer reviews and information available about what it's doing, how many people have adopted it, and whether or not it's a good technology is critically important. It's good to be on the edge, but you don't want to be the first guy to take the blind leap, so having that out there and having the forums available has been very important.
The main thing is the comprehensive data-center-management type of features: the overall management dashboard; the capability to have multiple clusters, linked clones, and distributed computing, with vCenters in different geographies; Site Recovery Manager for failover; VSAN for storage; the EVO:RAIL mechanism combined with the VSAN architecture; and the automation capabilities of vCloud Automation Center. Previously, I had experience with vCloud Director, but obviously everything's being transported onto vCloud Automation Center now.
The biggest benefit is cost, for someone looking to deploy low-cost storage that still integrates with their virtualization architecture. It's a very good fit for smaller companies that have multiple nodes and can leverage commodity hardware. VSAN, by its architecture, has built-in features for reliability and load balancing. You can enable vSphere Flash Read Cache alongside VSAN, so it works and integrates with a lot of other VMware technologies.
I would love to see VSAN transform into an EVO:RAIL type of technology, but EVO:RAIL has a separate use case, and I think it's not meant for all companies either. VSAN does serve that purpose and addresses the primary need there. At the same time, EVO:RAIL is limited to certain hardware manufacturers and providers who combine everything into one package and sell it off, whereas customers like to use commodity hardware and regular software and do things their own way. If VSAN continues to offer that flexibility, which it does today, I think there's great significance to it. If it integrates with replication and SRM, that takes it in a really good direction, right to the area where it can be heavily adopted.
I have experience, personally, as a VMUG leader and as a vExpert in the areas of vSphere 6 and SRM. I've tested VSAN in my home lab and have worked with replication technologies, plus a little bit of vCloud Automation Center and vCloud Director.
I personally consider VSAN to be a very stable product. You really want a minimum of four nodes. That said, the minimum spec is three nodes, but if you have four nodes or more, VSAN is a very stable product.
It's all about adding nodes and the number of drives. VSAN is very scalable. Just for lab purposes, I was able to scale it up to 10 terabytes, having started at four, which tells you it was easy to scale from 4 to 10 terabytes. From the online reviews and white papers I've read, the same mechanisms take it up to quite a few hundred terabytes.
Very straightforward. You obviously need to follow the configuration guide, and read it in advance, just so you understand the components around VSAN. Then it was just a matter of enabling VSAN and provisioning all the data storage it needs. You obviously need a solid-state drive to go with it; many people don't realize that, but you must have one. That's what allows the performance required from commodity hardware to be scaled up.
For our VMUG group, I was trying to set up a lab, and I went with VSAN for storage. It's a very rock-solid product, very robust. Compared to the previous iterations, it is very flexible and very strong now. It was a breeze to set up; it didn't take much time. The reliability of VSAN is really good: I set it up with four nodes and purposely took out one node just to see what would happen, and it just kept working fine.
Whether you look at VSAN or a different solution really depends on the use case. Someone looking to set up an Oracle database on ASM is not going to think of VSAN first, but if you design VSAN the right way, it can host Oracle databases. It's just a matter of how much compute and storage power you throw at it and how you design the pool. If you have done things the right way and you have sufficient cache and sufficient solid-state, I think it can be a really good fit for many different organizations.
It offers a lot of scalability to customers. For people looking to scale up in terms of nodes when they need to, it's a perfect fit.
The value that VSAN brings to our organization falls into two major areas. One is the ability to replace very expensive, proprietary SANs. The other is the need to replicate and keep data available at all times across three separate data centers. Those two elements are really where VSAN plays.
Probably the biggest benefit we get is the replacement of the SANs, and it's purely a cost one. To give you an idea, we spend roughly 50% less on equivalent storage by using VSAN to replace our more traditional SAN architecture. Further, the operating costs are 20 to 30 percent less. The ability to scale our storage as we need it is far simpler with VSAN than the more traditional route. So I would argue that's probably the single best feature we get.
There are features I would love to see added to VSAN, and I think they're being worked on. One of the major limitations is its inability to provide storage to things outside the hyper-converged world. Any traditional SAN we have left over in our institution will be for that function. Ultimately, if we can remove that by simply extending VSAN's access outside of its little virtual bubble, so to speak, that's the key. And as I said, I think that's going to be added.
VeriTech is a consulting and engineering firm specializing in health care. We provide management and technical skills, often acting as the CTO of healthcare institutions. In one of our engagements, I'm actually the interim acting CTO of Baystate Health in western Massachusetts. VSAN is one of the primary products, but software-defined architecture and complete hyper-convergence is really what we use VMware for. We use NSX and VSAN as part of our total infrastructure, and that's all part of a vCloud initiative. We also use Horizon for our VDI implementation. Those products are 99% of what we use.
The stability of VSAN so far has been excellent. We're just beginning to enter production, migrating our data off traditional SANs, which are a collection of EMC, IBM, NetApp, a whole range of them, onto the VSAN platform, and so far we haven't had any problems.
It's actually the scaling that I think gets us the greatest savings. With VSAN I simply add disk drives and hosts to my infrastructure at any of the facilities I have. The net result is an increase in both storage and processing.
In the older model, if I need to add, say, a terabyte of space for some particular tier-one application, I have to add a terabyte from, say, EMC into data center one, a terabyte into data center two, and a terabyte into data center three. And if, in adding those, I cross one of those magic boundaries where I'm out of cabinet space or whatever, I have all those expenses too. None of that is true with VSAN. In VSAN, I simply add drives into a chassis anywhere in my system. If I need more space, I buy a simple chassis, throw it in there, and continue to add drives. Much more scalable. There really is no limit to it.
Technical support on VSAN has been excellent as well. It's been a bit of a paradigm shift for our employees. They're used to that traditional, big-iron, stair-step, limited approach, and it's taken a little bit of skill-building to get them used to it, but VMware has been right there for us from the beginning. They've helped our people understand the difference, and we're pretty much self-sufficient now.
The choice of VSAN was almost made for us. Let me step back for a minute: it wasn't particularly about the product, although we love the product. After quite a lot of testing of other, competing products, we knew that traditional SAN architecture and the cost of deploying and maintaining it was unsustainable. Our budgets in healthcare IT are flat; no one is giving us extra money. But with all the images and the doctors and the sharing of data, the need to store data is not being held flat. It's going way up.
We simply don't have the money, so we needed some new way to address storage, and that meant software-defined storage. That was a given. The next step was that we needed something that would provide the levels of service and stability we have with the traditional architecture, but at a far lower price. That's where VSAN shone. When we did all the necessary testing and reviews, VSAN delivered in the security, performance, and cost areas we needed.
The selection of VSAN is really part of a larger hyper-convergence model, and for technical reasons and for simplicity, we wanted integrated products. If we were going to move our entire siloed approach, storage here, processing here, networking there, onto one single platform, we wanted all of those abilities baked into the abstraction, the hypervisor level, itself. We didn't want to buy independent little products and snap them in, so to speak. That really meant the only solution suite was the VMware world of products: NSX for networking, VSAN for storage, and vCloud for everything else. So it really was a no-brainer. That was the essential relationship between VSAN and the other products.
The implementation of VSAN, along with the implementation of all hyper-convergence technology, is tricky. Although we benefit greatly from it now, there were a lot of issues we simply had to work through, and these aren't really issues with the product itself but with the nature of what the product does. Since VSAN is a software component that allows you to add storage to your hyper-converged system, which in turn is based on products like Cisco's UCS, the revision of code in the Cisco UCS chassis, the types of drives, and the driver levels across the entire platform are essential to keep in lockstep. So we had many cases where, as we added capacity, turned on new features, and began to migrate, we ran into all sorts of difficulty. But the truth is, between our people, VMware's, and Cisco's, everybody supplied the skills we needed, and now we're pretty much there.
Well, VSAN is a solution of replacement. VSAN is going to replace all of our traditional SAN, so ultimately, a couple of years from now, almost all of our storage should be on VSAN. There should be very little, if anything, left.
When we selected VSAN, as I said, remember, it was part of a total package, so the better question is: when we were selecting hyper-convergence, who would be the vendor? There aren't that many options out there; there are really three. You have Microsoft, you have OpenStack and open-source solutions, and then you have VMware. The Microsoft product, although engaging, isn't really ready for prime time according to our needs. The OpenStack option is potentially interesting but requires a great deal of internal engineering and support that healthcare systems really don't have. That left VMware as the only viable, affordable, complete solution, and hence we chose it.
On one side are the strategic vendors, and that's where VMware and Microsoft sit, along with, in the medical case, Cerner, which is a large application provider. There are four or five vendors I would consider strategic: vendors we simply could not operate without the function they provide. When a vendor is classified as strategic and we look at the function they provide, there has to be a level of commitment. They must be a market leader. They must have enormous R&D capabilities. They must be flexible. They must interact with our engineers at a peer level, not simply in a dictatorial "here, use this, that's what's good for you and no more" way. VMware clearly acts appropriately in that regard. Because VSAN is part of hyper-convergence, and hyper-convergence is a strategic imperative, you can connect the dots to why a company like VMware is necessary.
I would say they are definitely there. They're a high nine [out of 10]. Anybody looking to do hyper-convergence needs to understand a few basic principles, and all of these apply to VSAN as they apply to any of the elements of hyper-convergence. This is a long project. It's not something that's going to happen all at once, and the value, after completion, is the sum total of the parts.
If you go through a project like this, for example at Baystate, it's a two-to-three-year project with required funding across that period. If, for some reason, we withdrew funding halfway through, we would end up with less than the sum of our parts: a lot of disconnected stuff. So make sure that your management and the people involved understand that this is a major commitment. It's not an "I'm just going to buy this once and forget it" purchase.
The other thing I would suggest paying attention to is the effect this has on your people: your engineers, your workers, your HR considerations. A traditional environment like ours is siloed. We have our storage guys here, our networking guys there, and so on: very expensive, with a lot of duplication. In a hyper-converged model, all of that becomes one. What you end up with is a set of better-trained, more effective engineers, but fewer of them. That doesn't mean you fire people.
It means you put those people onto other projects that have been languishing because you could never get around to them. That's a big thing to understand: you will affect the way your people work. If they're not willing to learn new skills, if they're not willing to cross boundaries that were once siloed, your project could be in jeopardy.
When researching anything like hyper-convergence, the more information the better. We spent a great deal of time talking not just to healthcare institutions (to be fair, this is a relatively new trend in health care, so there really aren't that many to talk to) but also to a number of non-healthcare institutions that are further along in some of these projects than healthcare is. We spoke with them, we spoke with vendors, and we spoke with other consulting firms. I think it's very important to gather as much information as you can before embarking on this.
Finding the resources for gathering this information is both hard and easy, depending on the source. Getting information from other institutions outside of healthcare (and remember, I'm speaking from a healthcare point of view) may be difficult, because they may not be allowed to share certain information. Getting consulting information is difficult unless you engage them, of course. And I would argue that it's not a bad idea to engage, for a small amount of money, the relevant experts at some of these consulting firms and just have a quick conversation with them. If they seem knowledgeable and your homework on them checks out, a further engagement is not necessarily a bad idea. But you do have to put some effort into finding the information; it's not just going to fall out of the sky.
With Virtual SAN we liked the performance, the simplicity, the fact that it's very easy to manage and upgrade, and the integration with all of the VMware technologies we're very familiar with. We also liked the fact that it's completely different from the old paradigm of provisioning storage on separate storage systems, and that it saves rack space: we use less physical space than we would deploying storage systems.
When we were first shown Virtual SAN, we compared the traditional storage system we had against Virtual SAN in one of our labs, and the Virtual SAN performance was impressive. So we started adding more applications to it and expanded that Virtual SAN proof of concept. We realized that the performance is comparable to all-flash storage arrays at a much better cost structure for us as cloud providers. The fact that it is very fast, very simple, and easy to consume as a cloud provider made it a no-brainer for us.
The simplicity of provisioning VMs and applying policies to specific VMs for our customers is one of the most important features for us, without having to manage storage as a separate area like we did before with our traditional storage system. As a cloud provider, the biggest challenge with storage is that you get completely mixed workloads. You don't know what the customers will be landing on, so there is no way to predict a customer's storage performance needs before they actually start using the systems.
There are some features we would really like to see in future releases: encryption as part of the offering. Deduplication and compression would be nice to have in the next Virtual SAN releases, along with the capability to serve storage through other protocols, like NFS or iSCSI, for other vendors and for applications that are not VMware. We have solutions like Nexenta that we use to do that now, but it would be nice if Virtual SAN supported this natively, so we have one vendor to deal with.
Cloud Carib is a cloud provider in the Bahamas targeting enterprise customers in the Caribbean. The company started about three years ago, and because we have been using VMware products since the beginning, we are working with Virtual SAN as the new storage offering in addition to our vCloud stack, which includes vCloud Director, vSphere, and VCNS.
There has never been an issue with Virtual SAN over the last year that we've been testing it. Support is really efficient because it's the same global VMware support that we've been with and are familiar with; as far as I know, it's some of the highest-rated support from any vendor that we use. We did all sorts of tests before we deployed in the production environment. We simulated disk failures, host failures, and network failures, and Virtual SAN didn't have any issues dealing with any of them, so we're confident in deploying more applications and more VMs on this system.
Since the original deployment, we have doubled the capacity of our cluster with zero downtime. The more nodes we have, the more capacity and performance we get, with no downtime. It's very easy to scale up and scale out with VSAN, and all it takes is a few clicks. It's a very efficient way to upgrade your storage without adding more rack space than you actually need: by converging storage, network, and compute capacity, we don't waste rack space, which is at a premium where we are in the Caribbean. We liked that we can scale our compute and our storage at the same time without wasting rack space.
The technical support that we get from VMware as part of our cloud provider contract is very efficient, with very quick responses. The support staff are always knowledgeable and can resolve the issues. As far as I know, it's some of the highest-rated support from any vendor that we use, and this is very important for our storage system, because customers' data is the most critical aspect. They need to have confidence that their data is on a solid, stable system.
The way we found out about Virtual SAN was that we kept running into performance and capacity problems with the storage we had before, and Virtual SAN is a radically different approach. We were intrigued by that and, after testing it, we realized that this approach met all of our requirements. Some of the criteria we look at when we evaluate vendors are the vendor's credibility in the industry, our support history with them, and customer references.
With VMware, we know that VMware is already used by everybody in the Fortune 500; huge companies globally rely on VMware for day-to-day operations. So we're fully confident in running basically all of our infrastructure on VMware technologies.
We spent quite a bit of time studying the design guides from VMware about proper implementation and the hardware compatibility list. We ran all of the self-checks listed on the website, and we used hardware that was certified on the HCL. Having passed those requirements, it was very simple to enable Virtual SAN. After the initial deployment design, we were able to implement it in a matter of hours. It was very, very simple, much easier than deploying a hardware-based storage system.
The vendors that were directly competing with Virtual SAN for our project were hardware vendors that were providing all-flash systems. This would be the comparison for us. The cost of the all-flash systems was prohibitive for us. We are a relatively small cloud provider in the Caribbean and, with Virtual SAN, we like the fact that we can pay through the VCOM program where we pay for only what we use and this was another huge benefit with Virtual SAN in our use case.
There are no issues that we have with it so far; we're very happy with it. I would highly recommend Virtual SAN for any demanding application that is running on VMware. We have no reservations at all with Virtual SAN, and we recommend it to anybody who has demanding applications.
A lot of people think that Virtual SAN is a new, unproven solution that people might use for testing or development, but what we actually see when we talk with our peers and our customers is that people are using it in production, and we are using it in production. If it can be used in production for a demanding environment like a cloud provider, then it can definitely be used in production by any company with demanding storage requirements.
Peer reviews and peer comments are very important factors when evaluating storage solutions or any other high-priced IT solutions, because they are raw data from people who are using a product the way we are going to use it. It's not marketing, and it's not untested. So this carries the most weight for us when considering a solution.
Performance is the most valuable feature because you are moving the storage closer to the CPU. It’s also cheap. We also evaluated an all-flash array, but even a low-end flash is much more expensive. This is much cheaper.
Concrete benefits would be manageability; we don’t have a storage guy because there is less stuff to deal with.
The savings is not the issue but I can scale my system – I’m building the node for 200 users, but all I will have to do is order another host and it will be configured exactly the same, and they are over-provisioned in terms of memory.
I have been using VMWare since it was a beta test.
I don’t know, but my gut feeling is that it distributes across the hosts, which should be very stable, and it’s all done at the hypervisor level. I don’t think we’ll have any issues.
I think it’s scalable in a linear fashion. We’ve outgrown our low-end SAN and hit a wall. We didn’t have a storage guy so we hit a wall when we hit 180 users and it was thrashing the SAN. With VSAN, that kind of issue – especially using the sizing tool – says that you should be more than fine. We're a small shop so we don’t have any doubt that it will scale to size.
They are the best in class – I hold everyone else to their standard. They solve the problem and work the problem. I’m kind of spoiled because I also get federal support so I get especially good service. I have always found their support to be stellar.
I had an issue a few years ago where my hosts were dropping and I couldn’t connect to them, so for three days I worked with VMWare. I went through four shifts of support staff, and they stayed with me. It was a 72 hour outage and I got back around to my original guy, and he figured it out. They are amazing. They don’t point a finger – with IBM they would hand it off from one guy to another and will never ever tell you that.
We replaced our infrastructure and did a proper POC. It’s cheap enough that we can still use the hosts and hook a SAN in, and everyone will get an SSD at their desks, so most of the cost is infrastructure. I loved it when I heard about it – virtualized storage and a distributed RAID. Makes total sense.
Their licensing gets a bit confusing, it’s hard to get the hang of that.
From what I saw, you can create the SAN in a small environment, and then grow. That’s a valuable feature of VSAN and makes it cost effective.
It's cost effective because you can start small and grow as needed.
From my experience testing it, VSAN could be more stable.
We tested it for about three months.
I was not sure about its stability because we have a big SAN shop and I got the impression that it’s good for small offices and not the larger ones.
The scalability seems ok – I would give it 6/10 because in a traditional SAN you can go up to a few terabytes. However with VSAN, it seems you can only get a couple hundred terabytes, and I expected more.
We haven’t had a chance to use it for VSAN, but in general we've had pretty good support from VMware, so I think VSAN tech support will also be good.
We haven’t fully implemented, but it should be simple and straightforward.
We will implement it by ourselves without a vendor team.
We looked into Dell and Nutanix, and chose VSAN because of ease of setup.
Customer support, the actual technology, how robust or stable it is, and the ease of deployment are the criteria to look for when selecting a vendor.
I would say that if you’re a medium IT organization and looking for a cost effective solution, VSAN is worthwhile; but, if you’re a bigger environment, I would go with a bigger SAN like EMC, NetApp, and IBM.
I tried to install it on one cluster and the host got stuck.
Very easy, just a few clicks. You don't need special knowledge of storage, because it can be fully automated, or you can set it manually if you want.
There are a few products on the market. vSAN has lots of competitors, but if you want to play with a single software provider, go with vSAN. However, if you want more hardware, maybe go with Nutanix.
They lose points because it's tricky to fully understand.
The ability to scale as you need – we can start with a very small footprint as opposed to a monolithic storage solution where you buy the entire solution up front. We use everything – Hitachi, NetApp, but we're using it more and more because we can start small and scale as you need. Cost saving essentially.
I would like to see software-based disk-level encryption in the next release. We deal a lot with the Department of Defense, and with arms, munitions, and government-regulated material, so we would like to see more there. From their roadmap, I see it's coming, but it has been an impediment.
It's not quite there yet. We've had a few outages that were addressed. It's not 100% there yet -- give it another six months.
Scalability is why we're using it, especially with v6. Any scalability issues we had were addressed.
It was excellent. The response time was great, and as we're a large customer, we had no issues.
Initial setup was not difficult to do at all.
We implemented on our own.
We have played with Nutanix but it wasn’t there yet – VSAN is more attractive because it operates kernel level, as opposed to Nutanix.
Picking a vendor also depends on which segment is looking – I run most of the IT stuff and to me peer reviews are very important. Others within our company look to Gartner.
I would say that the main reason it's attractive is that you can grow as you need. The other thing that makes it especially attractive is that, from an I/O perspective, VSAN can perform more efficiently because it operates within the hypervisor. It's VMware-specific, so that can be a downside, but for pure VMware shops, VSAN is the best option in my opinion.
I think that it brings speed and security as we have patient-sensitive data that we need to store.
I love VMware – it’s allowed us to virtualize our server infrastructure, but I haven’t tested the stability of VSAN extensively yet.
From what I’ve seen it’s extremely scalable.
VMware is top notch, but I can’t evaluate yet for VSAN.
You should look at scalability and integration within the vSphere environment.
It's lowered our storage costs while still maintaining High Availability and with easy installation.
Expand the hardware compatibility list – it's pretty short. Definitely also the diagnostic and monitoring could be improved. That stuff is still very new.
We have been using it since it came out in March 2015.
So far so good.
Unsure – all I know is what I read, if it does what it says it does I'm very impressed.
Very good – quality support.
We have three hosts in a cluster, and it was surprisingly easy.
Try it out – that’s the best way to know whether it's right for your organization.
It's fast – it’s really blazing fast.
It saves us the expense of an all-flash array. All-flash would work for us, but VSAN is cheaper. I think that this solution is really new, but it has real benefits over all-flash arrays.
We are seeing some improvements coming up, but at the moment you have to store every object on multiple disks to protect it, and they should be better distributed over disks to help parity.
It's very stable – we have had no failures.
It’s really scalable in terms of both capacity and performance, at least for our needs.
We haven’t had to use it – the product is really stable.
We were using a traditional storage array from Dell and we will see more VSAN usage in the future.
The initial setup was a little bit complicated because we have to do everything from scratch. It’s a new world, and much easier in the newer releases.
We looked at other vendors – classic storage vendors – but we thought this direction was faster as things are moving towards a software designed storage.
I think you should try it; it's really stable and valuable, and helps drive your costs down.
The ability to scale out incrementally instead of doing a big five-year capital expense purchase that hurts the budget. With vSAN, you buy x86 servers and you're done, and you can scale up.
The cost. VSAN allows us to do storage cheaper and better than before.
A deduplication feature is needed; I'm thinking it may need a lot of nodes to store all the redundant data. This will be addressed in their next beta version.
Assuming we get policies set up where they need to be, it's a stable technology. Its distributed architecture is slick and will hold up.
First and foremost, it offers a different way to scale, it’s smaller and easier to digest in smaller bites. As older ESXi hosts are phased out, you can replace with VSAN and add more nodes. Very incremental approach.
I've never had to use it.
It feels forward-thinking and I can see it as a big game changer, which is exciting, but it's only one and a half years old, a toddler. There are some minor things wrong, but its potential value could far overshadow its current weaknesses. Currently we don't have NSX, and that could be a pivotal thing, but we don't have that licensing yet.
The ability to scale out incrementally instead of doing a big five year capital expense purchase that hurts the budget.
In the storage world, when it's time to buy an array, you either buy one that's half populated and buy more disk shelves later, which is not cost effective, or you buy it all and don't use some of it. But with VSAN, you buy x86 servers as needed and you're done, and you can scale up.
The cost. We had a directive from the CIO to check out and play with VSAN and then to do storage cheaper and better than before, which we have achieved.
The management platforms have some gaps. It's difficult to see what’s going on with the hardware at times. The only platform available doesn’t run full time, and there is a management pack but it requires a product that not many people have (vRealize Operations). So it could use more work in management areas.
Also, it lacks deduplication, so we're using a lot more storage than we necessarily need to.
Prior to deployment, make sure you check your hardware compatibility, along with the drivers and firmware, because any one of those three things can cause an outage.
It's very stable, but we ran into some early issues with drivers and firmware; this is resolved now. You must be careful to be covered in that respect, otherwise you will have issues.
We’re scaling in a very phased process, running dev test environment with just a small three node cluster, but gradually shifting.
I've never had to use it.
It was pretty straightforward, but we had some issues with drivers, although nothing in the setup itself. After some time, we were losing some disks, which turned out to be driver issues.
Previously, we couldn’t consolidate more workloads on different types of storage. Now with VSAN, we have the ability to virtualize across multiple data centers.
It's currently doing everything we've asked of it and it meets all our needs right now. To be honest, I don't think too much about the future of the product or what we might need it to do as our requirements change.
It's got good stability - 10/10.
It's got good scalability - 10/10.
They are good – the response time is quick. You pay for it, but they are good.
No previous solution was used.
It was simple and straightforward.
We've implemented it both on our own and with a vendor team, and it's straightforward with both.
No other options were looked at, but peer reviews are important. My peer reviews usually come from social media channels, but they matter.
Product knowledge is the most important criteria we look for when selecting a vendor.
Try it and evaluate it – it's not a fit for every company, but you should at least do an evaluation to see if it is a fit.
Simple to set up, manage, and integrate with tools you're already familiar with (vCenter, vClient).
It also gives us a policy-based storage on a per-VM level.
Also, you can apply different redundancy settings to different machines.
There was some difficulty finding compatible hardware, but if you follow the HCL provided by VMware and make sure you're buying the correct nodes, storage devices, and SSDs that are all supported, then it's a stable product. Even if you have problems, it's still only one phone call.
It supports up to 64 nodes, so it has huge scalability.
As a VMware customer for many years, sometimes it takes a few calls, but they have some brilliant people who can solve difficult technical problems.
It loses points because it lacks many of the performance and deduplication capabilities that competitors have.
The total cost of ownership, as it's really cheap for us and we have budgetary constraints. Plus, as we're a hospital, doctors need to access their patient data quickly, which VSAN allows them to do.
We haven’t had an issue and we've been using it for about six months now.
I find it’s easy to scale, so if you need 100 more VMs, you know the amount of users per node, and you know exactly how much it’s going to cost you to scale up.
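This kind of predictability is just back-of-the-envelope math. As a rough sketch, assuming you know how many VMs fit per node and what a node costs (the figures below are hypothetical placeholders, not from the review):

```python
import math

def nodes_needed(extra_vms, vms_per_node):
    """How many additional hosts are required for the extra VMs."""
    return math.ceil(extra_vms / vms_per_node)

def scale_cost(extra_vms, vms_per_node, cost_per_node):
    """Total cost of scaling out by whole nodes."""
    return nodes_needed(extra_vms, vms_per_node) * cost_per_node

# 100 more VMs, assuming 40 VMs fit per node and a node costs $25,000
print(nodes_needed(100, 40))        # 3
print(scale_cost(100, 40, 25_000))  # 75000
```

Because capacity and compute grow together per node, the cost of "100 more VMs" is known in advance, which is the point the reviewer is making.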
Never had an issue.
The only thing is that as we were early adopters, we found tech support was difficult to deal with because our hardware was Cisco, and they didn’t know what we were talking about.
We had one issue with an MTU, but it didn't take me even a day to set up.
Basically we're a one man shop – we like to keep our list short and simple: VMWare and Cisco.
Try it out. It’s worth it.
It's relatively low-cost for a high-performance solution. By using VSAN, we've been able to simplify our infrastructure considerably.
VSAN currently has no data deduplication, and having such a feature would both be an improvement and provide a feature that Nutanix has.
The stability is very good.
Having now deployed it, it's actually difficult to have any downtime or to even lose data.
Considering I've only done one VSAN cluster, I'd say that the scalability is good. We haven't yet tried to add more clusters.
Our company currently has 20,000 users and we expect further growth, so we'll likely have to scale down the road. That said, I don't anticipate really any issues with scalability at that point.
Tech support is very good as they're responsive.
We have the Technical Account Manager service, which is very helpful.
Setup was not complex at all, was very straightforward, and was easily implemented by everyone on the team.
We implemented it with our in-house team.
We chose VSAN based on a POC. We looked at Nutanix, but VSAN was more robust for our needs and less expensive.
We normally use Gartner as a source, as well as some testing and a POC. The POC was the most important criteria, so my advice would be to do that before committing the resources.
We just started implementation, so it's hard to give our perspective as we're still doing our evaluation. We purchased the product, and we have ten-fold service on it.
If it works out well, storage is our most important element of our infrastructure. We're looking for a stable and high performing solution and think this is it.
I'd like to see support for iSCSI. Right now it's all internal protocols, and they promise it in the next version. They also need to support more types of hardware; the list is too narrow.
It's stable, but it's really picky on the hardware. We knew that going in, but the scale was a surprise, not truly as agnostic as we thought it would be. They have a list, and if you deviate a bit, then it won’t support the environment. We had an issue where we deviated slightly, so we probably will have to follow their hardware compatibility list.
Very scalable, it's one of the reasons we bought it. They are in v2.0, and we feel like now it’s mature.
Support is generally good, but a little slow sometimes. You need to stick to their compatibility list if you want their full support.
We were using EMC and we knew we needed something new. Cost is important to look at, because we're nonprofit, as well as the integration with the other VMware products, and the stability of the product too.
Setup is very straightforward.
It’s a good solution – the trend is going towards converged infrastructure. It's all policy based – you can set general policy and then trust VSAN to do everything else.
Scalability, and flexibility.
My conversations, now, have to do with trying to help customers on how to grow with VSAN.
It offers a lower cost of growth for a lot of our customers. They can meet immediate needs, but don’t need to spend a lot of money now. Balancing between capital budget and operational budget, instead of buying SAN to SAN, they can buy what they need now and then have operational costs after that.
If you don't have vRealize Operations, it would detract from usability of VSAN. It allows our customers to see more granularly than other storage solutions.
Never used it. Last week, I got in touch with a channel partner, and he talked about different tools and different things they had implemented. Our team is excited about it, because we don't have many resources on our own, but now we do through the channel partner.
We set it up for our healthcare customer with our in-house team only.
Look beyond upfront costs, because they'll be equivalent to Nutanix. Its biggest value is its scalability: you can buy a little bit at a time instead of a whole infrastructure box when you want to grow. Customers can spin up another half a dozen hosts quickly if they want.
I have a lot of confidence in it, but it's a challenge to convince customers, because they're intrigued but don't want to take the first steps. The specs and the concept of having storage within the servers are interesting to customers, but they're not ready to pull the trigger. If we can sell it together with Horizon, the licenses are included in the pricing, and they must refresh their hosts anyway.
We've decreased the time it takes for us to roll out new solutions. It's sped up that process for us.
It needs to allow for more customizations and individualization specific to each user.
It needs to be more malleable and adjustable to changing requirements. There are too many hard-set limitations.
We've used it for two months.
It's not a consideration, because in my impression VSAN is deployed in set-sizing and is not customizable.
Zero issues with tech support. Our TAM answers after some time, but it's not a negative because they're dedicated just to our company.
It's not difficult, but there are still limitations even in in-depth white papers because it's so new.
It's not enterprise class yet because it's a new iteration and a work in progress. Just make sure it fits and meets your requirements.
Getting rid of shared storage, especially with VSAN 6. That would be even better than having an all-flash array.
I hear a lot of issues of stability whenever you go to maintenance, but people who are having spectacular experiences are not speaking the loudest so it can be hard to tell.
I haven’t looked at configuration maximums but it seems like you can scale it up pretty hard in terms of clusters with vSphere 6.
In general, VMware customer support is world class. Response time is really quick, and you get connected to experts much faster than at other companies, like Microsoft for example.
All I've seen is community support, especially from bloggers and community experts. I haven’t had any experience.
It's not very different from vSphere 3. If you're comfortable with VMware, it's straightforward. From what I've seen, it's a simple install once you have all the hardware, though I have heard you have to tweak it performance-wise.
Support is up there in the top five things to look at: whether you can call, whether there are online communities, and whether there's easy access to articles. I would also add being able to get through quickly to someone who has deep knowledge of the product.
Stability: the issue we have run into is vendors that are fly-by-night, brand-new startups, where you can get stranded without support.
You need to vet the company; they need to still be around in a few weeks to help you. Also, peer reviews are very important, invaluable even. Salesmen will tell you anything; we also look at whitepapers and vendor-supplied information. Google is your friend.
Originally posted in Spanish at https://www.rhpware.com/2015/02/introduccion-vmware...
The second generation of Virtual SAN arrives with vSphere 6.0 and shares its version number. The jump from version 1.0 (with vSphere 5.5) to 6.0 was really worth it: this second generation of converged storage, integrated into the VMware hypervisor, significantly increases performance and adds features aimed at much higher performance and larger enterprise-scale workloads, including business-critical and Tier 1 applications.
Virtual SAN 6.0 delivers a new architecture based entirely on flash to provide high performance and predictable response times below one millisecond for almost all business-critical applications. This version also doubles scalability, up to 64 nodes per cluster and up to 200 VMs per host, and improves snapshot and cloning technology.
The hybrid architecture of Virtual SAN 6.0 provides nearly double the performance of the previous version, and the all-flash architecture of Virtual SAN 6.0 provides four times the performance, measured in the IOPS you get in clusters with similar workloads, with predictability and low latency.
Because the hyper-converged architecture is built into the hypervisor, it efficiently optimizes I/O operations and dramatically minimizes the impact on the CPU, an advantage over products from other companies. The hypervisor-based distributed architecture reduces bottlenecks, allowing Virtual SAN to move data and run I/O operations in a much more streamlined way at very low latencies, without compromising the platform's compute resources and while maintaining VM consolidation. The Virtual SAN datastore is also highly resilient, preventing data loss in the event of a physical failure of a disk, host, network, or rack.
The Virtual SAN distributed architecture allows you to scale elastically without interruption. Capacity and performance can be scaled together when a new host is added to a cluster, or scaled independently simply by adding disks to existing hosts.
The major new capabilities of Virtual SAN 6.0 features include:
With its all-flash configuration, Virtual SAN 6.0 achieves predictable performance of up to 100,000 IOPS per host with response times below one millisecond, making it ideal for critical workloads.
This version doubles the limits of the previous version:
Virtual SAN 6.0 requires vCenter Server 6.0; both the Windows version and the vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configured and monitored exclusively through the vSphere Web Client. It also requires a minimum of 3 vSphere hosts with local storage. This number is not arbitrary: it allows the cluster to meet the fault-tolerance requirement of surviving the failure of at least one host, disk, or network component.
Each vSphere host contributing storage to the Virtual SAN cluster requires a disk controller, which can be a SAS or SATA HBA, or a RAID controller. However, a RAID controller must operate in one of the following modes:
Pass-through (JBOD or HBA) is the preferred mode for Virtual SAN 6.0, as it lets Virtual SAN manage RAID configurations through the storage policy attributes and performance requirements defined for a virtual machine.
When the hybrid architecture of Virtual SAN 6.0 is used, each vSphere host must have at least one SAS, NL-SAS, or SATA disk in order to participate in the Virtual SAN cluster.
In the flash-based (all-flash) architecture of Virtual SAN 6.0, flash devices can be used both as a cache layer and for persistent storage. In hybrid architectures, each host must have at least one flash-based device (SAS, SATA, or PCIe) in order to contribute a disk group to the Virtual SAN cluster.
In the all-flash architecture, each vSphere host must have at least one flash device marked as a capacity device and one marked for performance (cache) in order to participate in the Virtual SAN cluster.
In hybrid Virtual SAN architectures, each vSphere host must have at least one 1 Gb or 10 Gb network adapter. VMware's recommendation is 10 Gb.
All-flash architectures only support 10 Gb Ethernet NICs. For redundancy and high availability, you can configure NIC teaming per host; NIC teaming is not supported for link aggregation (performance).
Virtual SAN 6.0 is supported by both VMware vSphere Distributed Switch (VDS) and the vSphere Standard Switch (VSS). Other virtual switches are not supported in this release.
You must create a VMkernel port on each host, tagged for Virtual SAN traffic, for Virtual SAN communication. This interface is used for intracluster communication as well as for read and write operations whenever a vSphere host in the cluster owns a particular VM but the actual data blocks are housed on a remote host in the cluster.
In that case, I/O operations must travel across the network between cluster hosts. If this network interface is created on a vDS, you can use the Network I/O Control feature to configure shares or reservations for Virtual SAN traffic.
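As a rough summary of the requirements above, the sketch below checks a cluster description against them: a minimum of 3 hosts, at least one flash device per host, a magnetic disk for hybrid mode, and 10 Gb networking for all-flash. The data layout is invented purely for illustration; it is not a VMware API.

```python
# Illustrative check of the Virtual SAN 6.0 requirements described
# above. The host-description dicts are hypothetical, not vSphere objects.

def validate_vsan_cluster(hosts, architecture="hybrid"):
    errors = []
    if len(hosts) < 3:
        errors.append("Virtual SAN requires a minimum of 3 hosts")
    for host in hosts:
        # Every participating host needs at least one flash device.
        if not any(d["type"] == "flash" for d in host["disks"]):
            errors.append(f"{host['name']}: needs at least one flash device")
        # Hybrid mode additionally needs a magnetic capacity disk.
        if architecture == "hybrid" and not any(
            d["type"] == "magnetic" for d in host["disks"]
        ):
            errors.append(f"{host['name']}: hybrid mode needs a magnetic disk")
        # All-flash requires 10 Gb NICs; hybrid allows 1 Gb.
        min_nic = 10 if architecture == "all-flash" else 1
        if host["nic_gb"] < min_nic:
            errors.append(f"{host['name']}: needs a {min_nic} Gb NIC")
    return errors

hosts = [
    {"name": "esx01", "nic_gb": 10,
     "disks": [{"type": "flash"}, {"type": "magnetic"}]},
    {"name": "esx02", "nic_gb": 10,
     "disks": [{"type": "flash"}, {"type": "magnetic"}]},
]
print(validate_vsan_cluster(hosts))  # flags the missing third host
```

Adding a third compliant host to the list makes the returned error list empty, which mirrors why the 3-host minimum exists: it is the smallest cluster that can tolerate one failure.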
This new second generation of Virtual SAN is an enterprise-class, hypervisor-level storage solution that combines the compute and storage resources of the hosts. With its two supported architectures (hybrid and all-flash), Virtual SAN 6.0 meets the demands of all virtualized applications, including business-critical ones.
Without doubt, Virtual SAN 6.0 is a storage solution that realizes VMware's vision of Software-Defined Storage (SDS), offering great benefits both to customers and to the vSphere administrators who face new challenges and complexities every day. It is certainly an architecture that will change how we see storage systems from now on.
vMotion and the Distributed Resource Scheduler (DRS) load-balancing resources are the most valuable features.
We can manage capacity and performance in linear fashion.
We get better performance with a better cost efficiency.
The management of vSAN (dashboard, alerts, monitoring) has a significant amount of growth potential.
No issues encountered.
The stability is dependent on how we scale and stabilize I/O across the host(s). We have encountered issues, but have worked through them.
There are no issues because it is linear.
Prior to VSAN, we used SAN Storage, and we switched because we needed a more cost-effective solution for our cloud environment, coupled with easy scalability. Currently, SAN Storage has risks and bottlenecks, due to having only two storage processors which are not enough to handle our needs.
It was straightforward.
We implemented in-house.
Make sure you size correctly when you do the initial implementation.
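To make that sizing advice concrete, here is a minimal Python sketch of the usable-capacity maths, assuming RAID-1 style mirroring (FTT + 1 full copies) and a roughly 30% slack reservation for rebuilds; the six-host disk layout is purely hypothetical:

```python
def usable_capacity_gb(raw_gb, ftt=1, slack=0.30):
    """Rough usable capacity after mirroring and slack space.

    Assumptions (illustrative, not from the article): RAID-1 keeps
    ftt + 1 full copies of the data, and ~30% of the cluster is
    reserved as slack for rebuilds and rebalancing.
    """
    return raw_gb / (ftt + 1) * (1 - slack)

# Hypothetical layout: six hosts, each contributing 8 x 1,200 GB HDDs
raw = 6 * 8 * 1200  # 57,600 GB raw
print(round(usable_capacity_gb(raw)))  # ~20,160 GB usable
```

The point of the sketch is that raw capacity shrinks quickly: mirroring halves it and slack takes another slice, so undersizing at initial implementation is easy to do.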
Originally posted at vcdx133.com.
I previously posted about my “Baby Dragon Triplets” VSAN Home Lab that I recently built. One of the design requirements was to meet 5,000 IOPS @ 4K 50/50 R/W, 100% Random, which from the performance testing below has been met.
The performance testing was executed with two tools:
Iometer – Test configuration
Iometer – Results
VMware I/O Analyser – Test configuration
VMware I/O Analyser – Results
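As a quick sanity check, the stated target translates into the following aggregate numbers (simple arithmetic, independent of either test tool):

```python
# Back-of-the-envelope check of the target: 5,000 IOPS at a 4 KB
# block size, 50/50 read/write, 100% random.
iops = 5000
block_kb = 4
throughput_mb_s = iops * block_kb / 1024  # IOPS x block size
write_iops = iops * 0.5                   # half the mix are writes

print(throughput_mb_s)  # 19.53125 MB/s aggregate
print(write_iops)       # 2500.0 writes/s landing on the SSD buffer first
```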
Software-defined and hyper-converged storage solutions are now a viable alternative to conventional storage arrays, so let’s take a quick look at how two of the most popular solutions compare: VMware Virtual SAN (VSAN) and EMC ScaleIO.
On vSphere this is an easy win for VMware, as VSAN is delivered using kernel modules, which provides the shortest path for the IO, offers per-Virtual-Machine policy-based management, and is tightly integrated with vCenter and Horizon View.
ScaleIO is delivered as Virtual Machines, which is not likely to be as efficient, and is managed separately from the hypervisor – on all other platforms ScaleIO is delivered as lightweight software components not Virtual Machines.
VSAN also has the advantage of being built by the hypervisor vendor, but of course the downside of this is that it is tied to vSphere.
Win for EMC, since with VSAN the failure of a single SSD disables an entire Disk Group. Although VSAN can support up to three disk failures whereas ScaleIO supports only one, in reality the capacity and performance overhead of supporting more than one failure means that VSAN will nearly always be used with just RAID 1 mirroring.
If you need double disk failure protection, you are almost certainly better off using a storage array.
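The capacity overhead behind that advice is easy to quantify. A small Python sketch, assuming mirroring keeps FTT + 1 full copies of the data:

```python
def raw_needed_tb(usable_tb, ftt):
    """Raw capacity required under RAID-1 style mirroring, where
    tolerating `ftt` failures means keeping ftt + 1 full copies.
    (ScaleIO's two-copy mesh mirroring corresponds to ftt = 1.)"""
    return usable_tb * (ftt + 1)

for ftt in (1, 2, 3):
    # 10 usable TB needs 20 / 30 / 40 raw TB respectively
    print(ftt, raw_needed_tb(10, ftt))
```

Doubling the failures tolerated from one to two raises the raw capacity bill by 50%, which is why single-failure mirroring dominates in practice.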
Easy win for VMware, as VSAN uses SSDs as a write buffer and read cache; ScaleIO can only utilise a RAM read cache.
Easy win for EMC, given the flexibility ScaleIO offers in how nodes, pools and volumes can be organised.
VSAN has a more rigid architecture of Disk Groups, each consisting of one SSD and up to seven HDDs.
Easy win for EMC as ScaleIO supports up to 1,024 nodes, 256 Protection Domains and 1,024 Storage Pools, and auto-rebalances the data when storage is added or removed.
ScaleIO can also throttle the rebuilding and rebalancing process so that it minimises the impact to the applications.
Easy win for EMC as ScaleIO provides Redirect-on-Write writeable snapshots, QoS (Bandwidth/IOPS limiter), Volume masking and lightweight encryption.
This is a tricky one. VSAN has the more customer-friendly licensing, as it is per CPU; as new CPUs, SSDs and HDDs are released, you will be able to support more performance and capacity per license.
ScaleIO has a capacity-based license, which is likely to mean that further licenses are required as your capacity inevitably increases over time. There are also two ScaleIO licences: Basic and Enterprise (which adds QoS, Volume masking, Snapshots, RAM caching, Fault Sets and Thin provisioning).
The one downside of VSAN licensing is that you need to licence all the hosts in the cluster, even if they are not used to provision or consume VSAN storage.
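The shape of the two licensing models can be sketched with made-up numbers (the prices below are purely illustrative, not vendor list prices):

```python
def per_cpu_cost(hosts, cpus_per_host=2, price_per_cpu=2500):
    # Per-CPU model (VSAN-style): cost is fixed by socket count,
    # not by how much capacity the disks provide.
    return hosts * cpus_per_host * price_per_cpu

def per_tb_cost(usable_tb, price_per_tb=500):
    # Capacity model (ScaleIO-style): cost grows with every TB added.
    return usable_tb * price_per_tb

# Eight dual-socket hosts whose disks are later refreshed
# from 20 TB to 60 TB usable:
print(per_cpu_cost(8))                   # 40000 -- unchanged by the refresh
print(per_tb_cost(20), per_tb_cost(60))  # 10000 -> 30000 -- triples
```

Under the per-CPU model a disk refresh is free from a licensing point of view, whereas the capacity model bills for the growth, which is exactly the trade-off described above.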
Conventional storage arrays
What are the advantages of a conventional mid-range array?
What are the advantages of hyper-converged software-defined solutions?
So which is best?
As always each vendor will build a strong case that their solution is the best, in reality each solution has strengths and weaknesses, and it really depends on your requirements, budget and preferences as to which is right for you.
For me the storage array is not going away, but it is under pressure from software-defined and cloud based solutions, therefore it will need to deliver more innovation and value moving forward. The choice between VSAN and ScaleIO really comes down to your commitment to vSphere – if there is little chance that your organisation will be moving away, then VSAN has to be the way to go, otherwise the cross-platform capabilities of ScaleIO are very compelling.
Over the past decade VMware has changed the way IT is provisioned through the use of Virtual Machines, but if we want a truly Software-Defined Data Centre we also need to virtualise the storage and the network.
For storage virtualisation VMware has introduced Virtual SAN and Virtual Volumes (expected to be available in 2015), and for network virtualisation NSX. In this, the first of a three-part series, we will take a look at Virtual SAN (VSAN).
So why VSAN?
Large Data Centres, built by the likes of Amazon, Google and Facebook, utilise commodity compute, storage and networking hardware (that scale-out rather than scale-up) and a proprietary software layer to massively drive down costs. The economics of IT hardware tend to be the inverse of economies of scale (i.e. the smaller the box you buy the less it costs per unit).
Most organisations, no matter their size, do not have the resources to build their own software layer like Amazon, so this is where VSAN (and vSphere and NSX) come in – VMware provides the software and you bring your hardware of choice.
There are a number of hyper-converged solutions on the market today that can combine compute and storage into a single host that can scale-out as required. None of these are Software-Defined (see What are the pros and cons of Software-Defined Storage?) and typically they use Linux Virtual Machines to provision the storage. VSAN is embedded into ESXi, so you now have the choice of having your hyper-converged storage provisioned from a Virtual Machine or integrated into the hypervisor – I know which I would prefer.
Typical use cases are VDI, Tier 2 and 3 applications, Test, Development and Staging environments, DMZ, Management Clusters, Backup and DR targets and Remote Offices.
To create a VSAN you need a minimum of three ESXi hosts, each contributing at least one SSD and one HDD, connected by a VMkernel network enabled for VSAN traffic (10 GbE recommended).
Each host groups its local disks into one or more Disk Groups, with a single SSD acting as a cache in front of the HDDs that provide the capacity.
The solution is simple to manage as it is tightly integrated into vSphere, highly resilient as there is zero data loss in the event of hardware failures and highly performant through the use of Read/Write flash acceleration.
The VSAN cluster can grow or shrink non-disruptively with linear performance and capacity scaling – up to 32 hosts, 3,200 VMs, 2M IOPS and 4.4 PB. Scaling is very granular, as single nodes or disks can be added, and there are no dedicated hot-spare disks; instead, the free space across the cluster acts as a “hot-spare”.
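Dividing the published maximums evenly across a full cluster gives a rough feel for the per-host averages (simple division only; real limits depend on hardware and configuration):

```python
# Published cluster maximums: 32 hosts, 3,200 VMs, 2M IOPS, 4.4 PB
hosts, vms, iops, capacity_pb = 32, 3200, 2_000_000, 4.4

print(vms // hosts)                          # 100 VMs per host
print(iops // hosts)                         # 62500 IOPS per host
print(round(capacity_pb * 1024 / hosts, 1))  # 140.8 TB per host
```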
Per-Virtual Machine policies for Availability (number of failures to tolerate), Performance (number of disk stripes per object and flash read-cache reservation) and Capacity (object space reservation) can be configured.
The Read/Write process
Typically a VMDK will exist on two hosts, but the Virtual Machine may or may not be running on one of these. VSAN takes advantage of the fact that 10 GbE latency is an order of magnitude lower than even SSD latency, so there is no real-world difference between local and remote IO. The net result is a simplified architecture (which is always a good thing) without the complexity and IO overhead of trying to keep compute and storage on the same host.
All writes are first written to the SSD and, to maintain redundancy, also immediately written to an SSD in another host. A background process sequentially de-stages the data to the HDDs as efficiently as possible. 70% of the SSD cache is used for reads and 30% for writes, so where possible reads are delivered from the SSD cache.
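The fixed 70/30 split described above is simple to express; `cache_split_gb` is a hypothetical helper name, not part of any VMware API:

```python
def cache_split_gb(ssd_gb, read_frac=0.70):
    """Split an SSD device between read cache and write buffer using
    the fixed 70/30 ratio described above."""
    read_cache = ssd_gb * read_frac
    write_buffer = ssd_gb - read_cache
    return read_cache, write_buffer

print(cache_split_gb(400))  # a 400 GB SSD -> (280.0, 120.0)
```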
So what improvements would we like to see in the future?
VSAN was released early this year after many years of development; the focus of the initial version is to get the core platform right and deliver a reliable, high-performance product. I am sure there is an aggressive road-map of product enhancements coming from VMware, but what would we like to see?
The top priorities have to be efficiency technologies like redirect-on-write snapshots, de-duplication and compression along with the ability to have an all-flash datastore with even higher-performance flash used for the cache – all of these would lower the cost of VDI storage even further.
Next up would be a two-node cluster, multiple flash drives per disk group, Parity RAID, and kernel modules for synchronous and asynchronous replication (today vSphere Replication is required which supports asynchronous replication only).
So are we about to see the death of the storage array? I doubt it very much, but there are going to be certain use cases (i.e. VDI) whereby VSAN is clearly the better option. For the foreseeable future I would expect many organisations to adopt a hybrid approach, mixing a combination of VSAN with conventional storage arrays. In 5 years’ time who knows what that mix will be, but one thing is for sure: the percentage of storage delivered from the host is only likely to go up.
Some final thoughts on EVO:RAIL
EVO:RAIL is very similar in concept to the other hyper-converged appliances available today (i.e. it is not a Software-Defined solution). It is built on top of vSphere and VSAN so in essence it cannot do anything that you cannot do with VSAN. Its advantage is simplicity – you order an appliance, plug it in, power it on and you are then ready to start provisioning Virtual Machines.
The downside … it goes against VMware’s and the industry’s move towards more Software-Defined solutions and all the benefits they provide.