Dell EMC XtremIO Flash Review

What is most valuable?

  • Speed of operation: We had several SQL databases that pounded our EMC CX4-480 and EMC VMAX 20K arrays.
  • XtremIO handles that load well, serving I/O from controller memory rather than going straight to the drives.

How has it helped my organization?

It has offloaded high IOPS processes and cleared the main arrays for bulk work.

What needs improvement?

Even with the fast SSD drives and processing on the controllers, there was still lag on the FC ports.

The initial brick came with only two FC ports per controller. We were used to having multiple ports on the VMAX, which let us spread traffic over several VSANs.

For more detail:

I had four DH2i servers running PowerPath hitting it, along with four VMware clusters of eight hosts each. On an X1 brick, we had only two controllers, each with two FC ports, for a total of four FC ports.

Compare that to the VMAX 20K, where I had eight ports on VSAN 2, six ports on VSAN 100, and eight ports on VSAN 50, so I could spread the traffic around between processes. One VMAX had two directors, whereas the other had three.

With only four ports on the XtremIO, the most I could do was send traffic on two ports to two different VSANs, one on each controller. So my comment was: get additional ports, so the DH2i servers don't hog all the IOPS.

I recommend getting the second X2 brick and the matrix switch; with eight FC connections, you can start spreading the traffic.
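
For anyone setting up something similar, this is roughly the shape of the MDS configuration for splitting XtremIO ports across two VSANs and zoning a host to them. It is only a sketch; the VSAN IDs, interface numbers, and pWWNs below are placeholders, not our actual values.

    ! Rough NX-OS sketch for an MDS 9500. VSAN IDs, interface
    ! numbers, and pWWNs are placeholders, not real values.
    configure terminal
    vsan database
      ! One XtremIO port from each controller into each fabric VSAN
      vsan 50 name XIO-FAB-A
      vsan 100 name XIO-FAB-B
      vsan 50 interface fc1/1
      vsan 100 interface fc2/1
    ! Zone a host HBA to an XtremIO target port in VSAN 50
    zone name xio_c1_esx01 vsan 50
      member pwwn 21:00:00:aa:bb:cc:dd:01
      member pwwn 51:4f:0c:50:00:00:00:01
    zoneset name xio_fab_a vsan 50
      member xio_c1_esx01
    zoneset activate name xio_fab_a vsan 50

With more ports, you repeat the interface assignments and zones per fabric, which is what lets you keep heavy hosts like the DH2i servers on their own paths.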

The company had me routing the data through a Cisco MDS 9500 fabric switch, separate from the main traffic, as this was a test. Most of production was on four other MDS 9500 switches.

Monitoring of the switch did not show a bottleneck going to the servers, only on the four 8 Gb FC ports going to the XtremIO. It helps to connect those ports to different blades on the 9500.
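
If you want to spot that kind of imbalance yourself, the per-port counters on the MDS make it fairly easy; something like the following on each XtremIO-facing port (the port number is a placeholder):

    ! Cumulative frames/bytes in and out on the port
    show interface fc1/1 counters
    ! Current rx/tx rates and errors on the port
    show interface fc1/1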

I don't think they have touched it since I left, nor the other eight SAN units.


For how long have I used the solution?

We have been using the solution for two years.

What do I think about the stability of the solution?

We had some stability issues. Initially, one of the ports failed. The unit also could not use a LUN larger than 2 TB. After testing all our variables, it was determined that the issue was the XtremIO, and a patch was created.

The servers were attached with both PowerPath and VMware 5.1 datastores, via an MDS 9500 fabric switch network.
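
One thing worth verifying in a setup like this is that the VMware datastore LUNs actually use Round Robin path selection, so I/O spreads across all four XtremIO ports instead of favoring one path. A minimal example from an ESXi 5.x shell (the naa device ID is a placeholder):

    # List devices with their current path selection policy
    esxcli storage nmp device list

    # Switch one LUN to Round Robin (the naa ID is a placeholder)
    esxcli storage nmp device set --device naa.514f0c5000000001 --psp VMW_PSP_RR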

What do I think about the scalability of the solution?

We never expanded to the second X2 brick, although that was a stated option.

How are customer service and technical support?

The technical support was poor, even during the port failure and the 2 TB LUN limit issue. It was rare to hear back from the technical analyst looking at the unit through ESRS.

Which solutions did we use previously?

Over my thirty years in the IT field, I have tried many solutions. I worked with:

  • NetApp
  • EMC SANs
  • Direct attached SCSI drive units
  • An IBM 4300 unit attached via VMware 2.5

How was the initial setup?

Compared to others, the setup and operation are easy. I worked at the company for almost three years, learning XtremIO with little assistance from co-workers or the vendor.

What's my experience with pricing, setup cost, and licensing?

Even before Dell bought EMC, the pricing was steep.

Which other solutions did I evaluate?

We evaluated Pure Storage and NetApp.

What other advice do I have?

Our company didn't send anyone to operations training until we had owned the unit for two years. I would advise you to send your technical experts to take the training early on.

Disclosure: I am a real user, and this review is based on my own experience and opinions.