HPE Superdome X Review

It gave us the very large memory we needed for our large-scale graph-handling applications.


What is most valuable?

The Superdome X gave us the very large memory we need for large-scale graph-handling applications. Those turn up in genomics for genome sequence assembly and in metagenomics; people are now even doing metatranscriptomics, and those algorithms and applications currently require a very large amount of RAM. They are not distributed, at least not in the tools biologists commonly use in the field, so we need the large memory for that. We also need it for some machine-learning and large-scale Java workflows that likewise don't distribute.
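As a rough illustration of why these workloads favor one large-memory node over a cluster: a non-distributed Java graph tool simply keeps the whole structure in a single JVM heap. The sketch below is hypothetical (the class name and sizes are mine, not from any specific genomics tool) and assumes the JVM is launched with a heap sized to the node, e.g. via -Xmx.

```java
// Hypothetical sketch: a non-distributed, in-memory graph held in one JVM heap.
// Run on a large-memory node with something like: java -Xmx10000g InMemoryGraphDemo
// (class name and sizes are illustrative, not taken from any real assembly tool)
import java.util.ArrayList;
import java.util.List;

public class InMemoryGraphDemo {
    public static void main(String[] args) {
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max JVM heap: %.1f GiB%n", maxHeapBytes / (1024.0 * 1024 * 1024));

        // Adjacency lists for a graph; everything lives in one address space,
        // so there is no partitioning or message passing to manage.
        int nodeCount = 1_000_000; // small here; real assembly graphs are far larger
        List<List<Integer>> adjacency = new ArrayList<>(nodeCount);
        for (int i = 0; i < nodeCount; i++) {
            adjacency.add(new ArrayList<>());
        }
        // ... graph construction and traversal would go here ...
        System.out.println("Allocated adjacency lists for " + nodeCount + " nodes in one heap.");
    }
}
```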

How has it helped my organization?

For me, the main benefit of the Superdome X is the very large memory. We chose 12 terabytes as the sweet spot, and we could have gone higher with higher-density DIMMs. It also has a very high-bandwidth backplane connecting the blades into the cache-coherent NUMA (ccNUMA) architecture.

What needs improvement?

This is just speculation on my part, but I'd like to see storage-class memory added to expand the already very large RAM into something even bigger. When you want to do work on a node with 12 terabytes of RAM, getting the data on and off the node can be a limiting factor, so a richer storage hierarchy within the node could enable some interesting capabilities.

What do I think about the stability of the solution?

Stability has been great; the systems are very solid.

What do I think about the scalability of the solution?

We have been running applications on individual Superdome X nodes. For us, they are a scale-up solution: we built them with a very large amount of RAM, so they give us the scale we need.

How are customer service and technical support?

We may have made a couple of calls to the support team over the year and a half that the systems have been running. The technical support has been consistently good.

Which solutions did I use previously?

Previously, we ran the world's largest shared-memory machine, called Blacklight, which was based on the SGI UV. It was really productive and fantastic for science, and that is why we wanted large shared memory again. With Bridges, we essentially expanded our large shared-memory capacity by 50 percent, so today a lot more of that research can be done.

How was the initial setup?

The setup was reasonably straightforward. We brought these in when Omni-Path was a very new product, which made it a little challenging on the Superdome X. Other than that, with other interconnects, it would be extremely straightforward.

Which other solutions did I evaluate?

We did look at other solutions. There are not many vendors in the large shared-memory space, but we looked into the one or two others that were out there. We felt that HPE's solution offered the best performance of the group.

Selecting a vendor really comes down to:

  • Reliability.
  • Their place in the marketplace, to assure that we will have early access to their products.
  • Their engineering, so they can deliver reliable, very high-quality implementations.
  • Price-performance.

What other advice do I have?

For large workloads, i.e., large-memory workloads, the HPE Superdomes are really interesting. We also have people looking at them for very novel applications of Spark using very large memory. That's a work in progress, but I think it will be interesting to follow as well.
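As a sketch of what Spark on a single large-memory node might look like, the snippet below runs Spark in local mode across all cores of one machine instead of a cluster. The application name, thread setting, and workload are my own assumptions, and the driver heap would be sized at launch time (e.g. spark-submit --driver-memory) rather than in code.

```java
// Hypothetical sketch of Spark on one large-memory node (local mode, no cluster).
// Heap size is set at launch, e.g.: spark-submit --driver-memory 10000g ...
import org.apache.spark.sql.SparkSession;

public class SingleNodeSparkSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("large-memory-local-spark")  // illustrative name
                .master("local[*]")                   // use every core on the node, no cluster manager
                .getOrCreate();

        // With the working set held in one node's RAM, there is no network
        // partitioning to manage; shuffles stay local to the machine.
        long count = spark.range(0, 1_000_000_000L).filter("id % 2 == 0").count();
        System.out.println("Even numbers counted: " + count);

        spark.stop();
    }
}
```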

Disclosure: I am a real user, and this review is based on my own experience and opinions.