Several months ago I walked through some of the issues we faced when XtremIO hit the floor and turned out not to be exactly what the marketing collateral might suggest. While the product was very much a 1.0 (in spite of its Gen2 name), EMC Support gave a full-court-press response to the issues, and our account team delivered on additional product. Now it's 100% in production, and we live or die by its field performance. So how's it doing?
For an organized rundown, I’ll hit the high points of Justin Warren’s Storage Field Day 5 (SFD5) review and append a few of my own notes.
Scale-Out vs. Scale-Up: The Impact of Sharing
True to Justin's review, XtremIO in practice scales up, not out; anything else is disruptive. EMC Support does their best to compensate by readily offering swing hardware, but it's still an impact. Storage vMotion works for us, but I'm sure spare hardware isn't the panacea for everyone, especially those with physical servers.
The impact of sharing is key as well. An architecture where XtremIO shares everything can share more than just the good stuff. In April, ours "shared" a panic over the InfiniBand connection when EMC replaced a storage controller to address one bad FC port. I believe they've since fixed that issue (or at least widely publicized to their staff how to swap an SC without inducing a panic, until code can protect against it), but it was production-down for us. Thankfully we were only one foot in, so our key systems kept going on other storage. We seem to have found the InfiniBand exceptions, so I don't think this is cause for widespread worry; just stating the facts.
I could elaborate further, but choosing XtremIO means being prepared to swing your data for disruptive activities. If you expect to expand, plan for it: rack space, power, connections, etc., for the swing hardware, or whatever other method you choose.
Compression: Needed & Coming
This was the deficit that led to us needing four times the XtremIO capacity to meet our Pure POC's abilities. At the time, we thought Pure achieved a "deduplication" ratio of 4.5 to 1 and were sorely disappointed when XtremIO didn't. Then we realized it was data "reduction", which incorporated both compression and deduplication. Pure's dedupe is likely still more efficient since it uses variable block sizes (like EMC Avamar), but variable block sizes take time and post-processing.
When compression comes in the XIOS 3.0 release later this year, I hope to see our data reduction ratio converge with what we saw on Pure. As it stands, we fluctuate around 1.4:1 deduplication (which feels like the wrong word; "dedupe" seems to imply a minimum of 2:1). I choose to ignore the "Overall Efficiency" ratio at the top, as it is a combination of dedupe and thin provisioning savings, the latter of which nearly everyone has. We've thin provisioned for nearly 6 years with our outgoing 3PAR, so that wasn't a selling point; it was an assumption. As a last note on this, Pure Storage asks the pertinent question: "The new release will come with an upgrade to compression for current customers. Can I enable it non-disruptively, or do I have to migrate all my data off and start over?"
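To make the ratio terminology concrete, here's a rough sketch of how dedupe, compression, and thin provisioning multiply into the headline "Overall Efficiency" number. The figures are hypothetical, chosen only to illustrate the arithmetic; they aren't pulled from any array's actual reporting.

```python
# Illustrative breakdown of dedupe vs. data reduction vs. "overall efficiency".
# All sizes are hypothetical and in the same unit (e.g. TB).
def ratios(logical_written, after_dedupe, after_compression, provisioned):
    dedupe = logical_written / after_dedupe           # e.g. our ~1.4:1
    compression = after_dedupe / after_compression    # savings on top of dedupe
    data_reduction = dedupe * compression             # the number Pure reports
    thin = provisioned / logical_written              # thin-provisioning "savings"
    overall = data_reduction * thin                   # the flattering headline ratio
    return dedupe, compression, data_reduction, overall

# Hypothetical: 10 TB written, 7.14 TB after dedupe (~1.4:1),
# 3.57 TB after 2:1 compression, 30 TB provisioned.
d, c, r, o = ratios(10, 7.14, 3.57, 30)
print(f"dedupe {d:.1f}:1, compression {c:.1f}:1, "
      f"reduction {r:.1f}:1, overall {o:.1f}:1")
```

The point the math makes: a modest 1.4:1 dedupe becomes a respectable reduction ratio once compression lands, while the "overall" figure is inflated by thin provisioning that everyone already has.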
Snapshots & Replication
I won’t say much on these items, because we haven’t historically used the first, and other factors have hindered the second. Given that our first EMC CX300 array even had snapshots, the feature arrival in 2.4 was more of an announcement that XtremIO had fully shown up to the starting line of the SAN race (it was competing extremely well in other areas, but was hard to understand the lag here). We may actually use this feature with Veeam’s Backup & Replication product as it offers the ability to do array-level snapshots and transfer them to a backup proxy for offloaded processing.
As for replication, my colleagues and I see it as a feature with huge differentiating potential, at least where deduplication ratios are high. VDI or other clone-heavy deployments with 5:1, 7:1, or even higher ratios could benefit greatly if only unique data blocks were shipped to the partnering array(s). For now, VPLEX is that answer (sans the dedupe).
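The potential savings are easy to ballpark. This back-of-envelope sketch (my assumption, not a vendor claim) estimates wire traffic if replication shipped only unique blocks, assuming the change set dedupes at the same ratio as the array overall:

```python
# Hypothetical estimate of replication traffic if only unique blocks ship.
def replicated_tb(changed_logical_tb, dedupe_ratio):
    """TB actually on the wire, assuming the changed data dedupes
    at the given ratio (a simplifying assumption)."""
    return changed_logical_tb / dedupe_ratio

# Per 10 TB of logical change, at our ratio vs. clone-heavy VDI ratios:
for ratio in (1.4, 5.0, 7.0):
    print(f"{ratio}:1 dedupe -> {replicated_tb(10, ratio):.2f} TB shipped")
```

At 7:1, ten terabytes of logical change costs well under two terabytes of WAN traffic, which is why dedupe-aware replication would be such a differentiator for clone-based workloads.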
XtremIO > Alternatives? It Depends
As I mentioned in the past, we started this flash journey with a Pure Storage POC. It wasn’t without challenges, or I probably wouldn’t be writing about XtremIO now, but those issues weren’t necessarily as objectively bad or unique to them as I felt at the time. Everyone has caveats and weaknesses. In our case, Pure’s issues with handling large block I/O gave us pause and cause to listen to EMC’s XtremIO claims.
Those claims panned out in some ways, but not in others (at least not without more hardware). Both products can make the I/O meters scream with numbers unlikely to be found in daily production, though it's nice to see the potential. The rubber meets the road when your data is on their box and you see what it does as a result. No assessment tool can tell you that; only field experience can.
If unwavering low-latency metrics are the goal, XtremIO wins the prize. It doesn't compromise or slow down for anything; the data flies in and out regardless of block size or volume. Is no-compromise ideal? It depends.
Deduplication is the magic sauce that turned us on to Pure, and XtremIO marketing said, “we can do that, too!” Without compromising speed, though, and without post-processing, the result isn’t the same. That’s the point of the compression mentioned earlier.
Then there are the availability arguments. Pure doesn't have any backup batteries (though it protects in-flight writes with NVRAM, so that's not a deal-breaker), which EMC can point out. EMC uses 23+2 RAID/parity, which Pure is quick to highlight as a weakness. Everyone wants to be able to fail four drives and keep flying, right?
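The 23+2 debate is really a capacity-efficiency versus fault-tolerance trade-off. A quick sketch of the per-stripe arithmetic (comparison layouts are my own picks for illustration, not anything a vendor published):

```python
# Usable-capacity fraction and per-stripe failure tolerance for a few
# parity layouts. 23+2 is XtremIO's XDP; the others are for comparison.
def raid_stats(data_drives, parity_drives):
    total = data_drives + parity_drives
    efficiency = data_drives / total   # fraction of raw capacity that's usable
    return efficiency, parity_drives   # parity count = failures survivable per stripe

for label, d, p in (("23+2 (XDP)", 23, 2), ("8+2 (RAID 6)", 8, 2), ("Mirroring", 1, 1)):
    eff, tol = raid_stats(d, p)
    print(f"{label}: {eff:.0%} usable, survives {tol} drive failure(s) per stripe")
```

The wide 23+2 stripe buys 92% usable capacity, which matters when the media is expensive flash; the cost is that a stripe still only survives two concurrent drive failures, which is the angle competitors attack.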
From what I've heard, Hitachi will take an entirely different angle and argue that magic is unnecessary: just use their 1.6TB and 3.2TB flash drives and swim in an ocean of space. Personally, I think that's short-sighted, but they're welcome to that opinion.
In production, day to day, notwithstanding our noted glitches, XtremIO delivers. Furthermore, it has the heft of EMC behind it, and the vibe I get is that they aren't content with second place. Vendors may disagree on sub-component philosophies, but nothing trips up XtremIO's performance. Is there potential for improvement, efficiency gains (especially data reduction), and even hybrid considerations (why not a little optional post-processing?)? Absolutely. And I've met the XtremIO engineers from Israel who aim to do just that. Time will tell.
This article originally appeared here.