HPE ProLiant DL Servers Review
You have a consistent way of managing them across all of the product lines. Hardware support is more challenging than support in other areas.


What is most valuable?

What is good about HPE servers is that you have a consistent way to manage them across all the lines. You don't have to learn something for one type of server and then learn something else for a different type. If you have different types of servers, you can always build on the knowledge you already have, and you have a unified approach to configuration, setup, maintenance, and so on.
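That unified approach shows up concretely in HPE's iLO management controllers, which expose the standard DMTF Redfish REST API in the same shape across ProLiant lines. A minimal, hypothetical sketch of why that matters for operators (the payload below is illustrative, not captured from a real server):

```python
# Hypothetical sketch: because Redfish standardizes the ComputerSystem
# schema, one summary function works against any Redfish-capable server,
# whether it's a DL rack server or a blade. The sample payload mimics the
# shape of GET /redfish/v1/Systems/1; it is illustrative only.
import json

def summarize_system(redfish_system: dict) -> dict:
    """Pull the fields an operator checks first from a Redfish
    ComputerSystem resource."""
    return {
        "model": redfish_system.get("Model"),
        "power": redfish_system.get("PowerState"),
        "health": redfish_system.get("Status", {}).get("Health"),
    }

# Illustrative response body (not from a live system)
sample = json.loads("""{
    "Model": "ProLiant DL380 Gen10",
    "PowerState": "On",
    "Status": {"Health": "OK"}
}""")

print(summarize_system(sample))
```

The point is not this particular function but that the same tooling and the same operator knowledge carry over from one server line to the next.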

How has it helped my organization?

The organization is always hamstrung by the staff it has available to run these systems. If you have a trained staff, you don't want to throw all that training overboard just to get a new server. You have an evolving but steadily moving ecosystem for how these things get set up, connected, and maintained. That's probably even more valuable than, "Hey, competitor A or B has 2% more efficiency or 2% more power to deliver."

What needs improvement?

It's always about the next generation of hardware, of course: who does the better job? You can also look back and say, "Hey, we went all blades, we went with Virtual Connect," and did specific things that way. We learned certain lessons doing that, of course.

For the next generation, we probably won't have that many blades. We will probably revert to rack-mounted servers, but with bigger servers instead of smaller ones. That also evolves with the workloads you have. Over the period we typically run these systems, around five years, there's a lot of change in what users request from us. And of course there are new developments. For example, before we started VDI, we said, "OK, if we want to do VDI going forward, we probably want to incorporate some GPUs." That would probably lead to a new architecture, and then we want to do other things on it as well, such as high-performance computing. The next generation will probably look completely different from what we have now.

For how long have I used the solution?

We have been using HPE servers for a very long time. The current implementation was done in 2013 and 2014, but we have been using HPE servers for 20 years or more. They weren't necessarily called HPE at the time; they came from one of the companies HPE has acquired over the decades.

What do I think about the stability of the solution?

Stability is a non-issue. As long as you don't touch anything, nothing will really happen. If you update things here and there, you have to pay real attention. We have a complex setup with servers, storage, networking, and storage networking. Once you change one component, all the others might blow up in your face if you don't do it correctly. Especially in the storage space, we rely heavily on HPE to make sure the compatibility matrix is correct and to do all the maintenance at that level.

What do I think about the scalability of the solution?

Scaling is fairly easy. With the blades, I think the only barrier here is, once you fill up the enclosure, you need a new enclosure. That's the primary barrier. As long as you can grow inside the enclosure, that's a non-issue. Otherwise, you have a steeper investment, but then again, it scales up from a single server to the full enclosure, to the full rack.

We never had to go that way, though. Everything we did always fit into one enclosure in one rack. We had two of them, spread across sites. Even if one of our data centers fails, we can still run all of the workload out of the other data center. Thanks to the software stack we have around it, that works without a mishap. With the storage and the virtualization layer, you don't really even notice it. It all happens in the blink of an eye, automatically, which is very important for us. It's also reproducible, of course, and it works in both directions. With some solutions you can fail over, but failing back, with data synchronization, would take a myriad of highly skilled IT professionals; this solution really does it all.

How is customer service and technical support?

Support on the hardware side is a little more challenging than in other areas because, if you look at servers, there are so many components involved. Many vendors provide components to HPE, and you have to mix and match everything. You really need a professional support organization to help you with that. If you do the wrong thing, apply the wrong update, it might hamstring your whole operation because you can't get anywhere anymore.

How was the initial setup?

The setup is quite straightforward. It's really just a bunch of servers, but of course that involves getting all of the components together, having everything configured to order, and then configuring the software stack. We brought in HPE partners to do that for us, and then we took over and said, "Okay, from now on, we evolve this system until the end of its lifetime." We went from one version of the hypervisor to the current version, and we're going to the next, and the next, and the next. Setting up is the first step, but from that point on, you can take it over and drive it yourself.

Which other solutions did I evaluate?

For the blade offerings, most of the competitors have similar capabilities. However, they have probably only evolved them within the last five years, whereas I would say HPE has a much longer runway there. They have a much more established, proven platform. The c-Class BladeSystem has been around for years now; I think we have our second procurement of it. By the end of its lifetime, we will have run it for 10 years, whereas others have changed their blade strategies two or three times. I think that's the worst thing you can do, having to change strategy like that.

The C7000 and C3000 have been around for 10, maybe 15, years already. Everything that came afterwards, such as Synergy or the Superdome X, builds on top of that. The C9000, or whatever they call the Synergy enclosure, really takes the best from the established path and adds the latest technology to it.

If you have that knowledge and ability, and you can leverage that, you have a big advantage over all the others who come to the market with a new solution and try to find customers.

What other advice do I have?

For server technology, most of the features can nowadays be found with most of the vendors, so they're probably at the stage where HPE was five years ago. The ecosystem is so mature and still evolving. It's never, "Hey, we have this feature; we don't change it." The management, the procurement, the provisioning, all of that is really taking off going forward. With the next generation, I'd probably give it a higher rating.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
