This solution reduced our costs by consolidating several types of disparate storage. The savings come mostly from power consumption and density. One thing that became clear when we built our most recent data center is that every rack position has a value tied to it. Moving to a flash solution gave us a lower power footprint as well as higher density, which essentially means more capacity in a smaller space. When it costs several hundred million dollars to build a data center, each of those spots has a real cost associated with it, so every server rack has to justify its share of that investment. When we look at those costs and everything else, going to AFF, with its really high density, saved us money. It's only getting better, because the newer models coming out will be denser still.
Being able to easily and quickly pull data out of snapshots is something that benefits us. Our recovery times for a lot of things are measured in minutes rather than hours. It takes the same amount of time for us to spin up a FlexClone of a ten-terabyte VM as it does a one-terabyte VM, which is really valuable to us. We can provide somebody with a VM, regardless of size, and tell them how long it will take before they can get on it. That excludes the extra work that happens on the back end, such as vMotion, but since they can already touch the VM, we don't really worry about it.
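The reason clone time is independent of VM size is that a snapshot-backed clone copies block *references*, not the blocks themselves; data only diverges when a block is written. The toy sketch below is a conceptual illustration of that copy-on-write idea, not NetApp's actual FlexClone implementation.

```python
# Toy copy-on-write clone. Illustrates (conceptually) why cloning time
# scales with the number of block references, not with terabytes of data.

class Volume:
    def __init__(self, blocks):
        # blocks: dict of block_id -> data; entries are shared by
        # reference between a parent and its clones
        self.blocks = blocks

    def clone(self):
        # Shallow copy of the pointer table only -- no data is copied,
        # so a 10 TB volume clones as fast as a 1 TB one
        return Volume(dict(self.blocks))

    def write(self, block_id, data):
        # Copy-on-write: the clone gets its own copy of a block only
        # when that block is first modified
        self.blocks[block_id] = data

parent = Volume({i: b"x" * 4096 for i in range(1000)})
clone = parent.clone()        # fast: 1000 references copied, zero data
clone.write(0, b"y" * 4096)   # only the written block diverges
```

After the write, the clone and parent differ in exactly one block; the other 999 are still shared.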
One of the other things that helped us out was the inline efficiencies such as the deduplication, compaction, and compression. That made this solution shine in terms of how we're utilizing the environment and minimizing our footprint.
With respect to how simple this solution is around data protection, I would say that it's in the middle. I think that the data protection services they offer, like SnapCenter, are terrible. We hit an issue in our environment where, if a fully qualified domain name was too long or had too many periods in it, SnapCenter simply wouldn't work. They recently fixed this, but after a problem like that, the product is clearly not enterprise-ready. Overall, I see NetApp as really good for data protection, but SnapCenter is the weak point. I'd be much more willing to go with something like Veeam, which leverages those direct NetApp features. NetApp has the technology, but personally, I don't think their implementation is there yet on the data protection side.
I think that this solution simplifies our IT operations by unifying data services across SAN and NAS environments. In fact, this is one of the reasons that we wanted to switch to this solution, because of the simplicity that it adds.
In terms of being able to leverage data in new ways because of this solution, I cannot think of anything in particular that is not offered by other vendors. One example of something that is game-changing is in-place snapshotting, but we're seeing that from a lot of vendors.
The thin provisioning capability provided by this solution has absolutely allowed us to add new applications without having to purchase additional storage. The thin provisioning coupled with the storage efficiencies is really helpful. The one thing we've had to worry about is our VMware teams, or other teams, thin provisioning on top of our thin provisioning, which everyone knows is not good. The problem is that you then lose insight into how much capacity you're actually utilizing.
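The danger with thin-on-thin is that the overcommit ratios multiply, so the capacity a guest sees bears little relation to the physical flash behind it. A minimal arithmetic sketch, using entirely made-up numbers, shows how quickly it compounds:

```python
# Hypothetical figures only -- two layers of thin provisioning
# (array + hypervisor) compound the overcommit ratio.

array_physical_tb = 100     # real flash capacity in the array
array_provisioned_tb = 250  # thin volumes/LUNs carved from it
vm_provisioned_tb = 600     # thin virtual disks carved from those LUNs

array_overcommit = array_provisioned_tb / array_physical_tb    # 2.5x
vm_overcommit = vm_provisioned_tb / array_provisioned_tb       # 2.4x
effective_overcommit = vm_provisioned_tb / array_physical_tb   # 6.0x

# A VM admin who sees hundreds of terabytes "free" has no idea that
# only 100 TB physically exists -- real usage has to be tracked at
# the array layer, not inside the guest.
print(f"effective overcommit: {effective_overcommit:.1f}x")
```

The effective ratio is the product of the per-layer ratios, which is exactly why visibility into actual utilization disappears once teams stack thin provisioning.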
This solution has enabled us to move lots of data between the data center and the cloud without interruption to the business. We have SVM DR relationships between data centers, so even if we lost a whole data center, we could fail over.
This solution has improved our application response time, but I was not with the company prior to implementation so I do not have specific metrics.
We have been using this solution's feature that automatically tiers data to the cloud, but it is not to a public cloud. Rather, we store cold data on our private cloud. It's still using object storage, but not on a public cloud.
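The idea behind automatic tiering is a simple age-based policy: data that hasn't been touched within a cooling period is moved to cheaper object storage. The sketch below is an illustrative toy policy, not the product's internal logic, and the cooling period is an assumed value.

```python
import time

# Toy tiering policy (illustrative only): blocks untouched for longer
# than a cooling period are classified as cold and would be written
# out to the (private) object store.

COOLING_PERIOD_S = 31 * 86400  # assumed ~31-day cooling window

def tier(blocks, now, cooling=COOLING_PERIOD_S):
    """blocks: dict of name -> last_access_epoch.
    Returns (hot, cold) lists of block names."""
    hot = [n for n, t in blocks.items() if now - t < cooling]
    cold = [n for n, t in blocks.items() if now - t >= cooling]
    return hot, cold

now = time.time()
blocks = {"db-index": now, "old-backup": now - 90 * 86400}
hot, cold = tier(blocks, now)
print(hot, cold)  # the cold list is what gets tiered to object storage
```

Whether the object store endpoint is public or private is just a destination detail; the hot/cold classification works the same way either way.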
I would say that this solution has, in a way, freed us from worrying about storage as a limiting factor. The main reason, as funny as it sounds, is that our network is now the limiting factor. We can easily max out links with the all-flash array, so we are now looking at upgrading the rest of the infrastructure to keep up with the flash. Right now we don't even have a strong NDMP footprint, because we couldn't support it; it would need far too much bandwidth.