For us, the XIV is pretty much set-it-and-forget-it storage. We use it behind the SAN Volume Controller. Having presented that storage to the SAN, we find that the latency is very predictable, the capacity is excellent, and the reliability is fabulous. I have been very happy with the XIVs we have here. We have had them for about three years and (knock, knock) they have yet to fail a single drive. I think this might be because the system is optimized to use RAM, "flash assist" (an 800G flash unit per XIV shelf), and the 4G disk drives for the things that each do best.
Improvements to My Organization:
It has saved a lot of time. We haven't had any outages, and outages in the healthcare industry are terrible. They're like earthquakes, with repercussions for years afterwards.
Room for Improvement:
It could be cheaper, but considering that a fully loaded system gives you a rack of 325 TB usable, it's really not that expensive. The base package also comes with a lot of value-add (snapshots, etc.), so those things need to be considered when comparing it to other systems that do not include them.
I guess we'd like to see the XIV keep pace with how storage is going in terms of speed and latency. Basically, what you would buy at this point is the A9000R and that's probably the fastest system on the planet right now. They're basically doing what I'd like to see.
It's really stable. We haven't had any outages. We haven't even had a single failed drive since we installed it.
It gives you 325 TB usable per rack, so it's very scalable. Since we're using it behind the SVC, we could just bring in five more, if we had the money, and just completely scale it behind the SVC without any difficulty whatsoever.
We have an arrangement with IBM. They come out and do the disk drive replacement, which they haven't had to do yet. They basically won't come out until there are three failed drives, because that's the way the model works. They have done a couple of firmware upgrades, on the frame itself and on the drives. That's all handled really well by their support center.
We have really good tech support from IBM; really good contacts. They're sort of our personal liaison.
I used 3PAR, sort of, in a test mode; NetApp in a test mode; and the SVC. We tested all three of those in-house. We didn't test the EMC in-house.
The 3PAR was actually OK. In my opinion, it was the best of the other ones that we looked at, but the GUI was a little bit difficult to use. The IBM GUI is much easier to use and the SVC provides a lot of features that just aren't in the 3PAR. You could use anything behind the SVC and the SVC would make it easy.
I found the NetApp fairly cumbersome. We use OpenVMS and that was another problem with the NetApp: trying to use that with VMS.
The most important criteria for me when selecting a vendor to work with are reliability; ease of use in managing the storage because we manage it a lot, we have a lot of changes to make; and interoperability with the systems that we have in house, such as OpenVMS.
Initial setup was pretty straightforward. We're really used to setting up arrays there. It has six ports per fabric; that's what we’re using. We just set up the cabling, did the zoning, and followed the recommended procedures; there were no problems, whatsoever. There were no unexpected surprises.
Other Solutions Considered:
I didn't pick out XIV personally, but we looked at a bunch of different storage vendors. The technical people really wanted to go with IBM compared to the other vendors, mostly because we had already been familiar with it, but we did several in-house PoCs for other vendors too. The IBM just worked much better with the stuff we had. The GUI was much better. The CLI was better. For us, it was much better.
For our stuff, we looked at 3PAR, NetApp and EMC.
We have had no problems with it and it exceeded expectations as far as speed, latency and reliability.
If it starts out being perfect and then there are problems, the rating would go down. But so far there haven't been any problems.
The interface is fine. We don't use it that much, because we just take big chunks of it and present them as MDisks to the SVC.
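To give a sense of what presenting big chunks of the XIV as MDisks looks like on the SVC side, here is a rough sketch of the SVC CLI steps. The pool and volume names are hypothetical, and the extent size, MDisk names, and I/O group are site-specific; this is an illustration, not our exact configuration.

```shell
# On the SVC cluster CLI (SSH to the config node), after zoning the XIV ports:
svcinfo lsmdisk                               # XIV volumes appear as unmanaged MDisks
svctask mkmdiskgrp -name xiv_pool -ext 1024   # create a pool (extent size is site-specific)
svctask addmdisk -mdisk mdisk0:mdisk1:mdisk2 xiv_pool   # add the XIV MDisks to the pool
svctask mkvdisk -mdiskgrp xiv_pool -iogrp 0 -size 500 -unit gb -name host_vol01
```

From there, the volume is mapped to a host like any other SVC-managed storage, which is why the XIV's own interface rarely gets used day to day.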
This system, for us, is actually a set-and-forget type of system. We have presented the array to the SAN Volume Controller and manage it from there, and we are also using the FlashSystem 900 for our super-fast storage (we are very lucky to have this storage architecture; it's really good).
As you are probably aware, they have since come out with the A9000 and A9000R, which pair the XIV architecture with a FlashSystem 900 flash back end, including compression and dedupe. I'm sure we will be looking at this when it is time to refresh. I don't think anyone yet has a system that truly competes for speed with the FlashSystem 900. EMC now has one new system (very expensive), as does Pure Storage (I think it's the FlashBlade system, but don't quote me on it); they are way late to the game, and all the others are even further behind, still using the "SSD" form factor. Basically all of the devices, except the aforementioned, use what is essentially a disk interface (e.g., SAS), which slows down their flash.
For absolute maximum performance, one would probably still use a standalone FlashSystem 900, but the A9000R will give great performance while reducing the price quite a bit through compression and dedupe.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Dec 21 2016