HPE Synergy Review
We can get more things in the individual blades and deal with higher thermals on the CPUs.


Valuable Features

It increases throughput. With the C7000, we had problems with the down-link speed to the individual blades and with the up-link speeds. Memory was also a constraint for us.

Changing the form factor in Synergy allows us to have more RAM, which is significantly helpful for us.

One of the bigger changes is that, with the larger form factor, we can fit more into the individual blades. We can also handle higher thermals on the CPUs, both of which are significant for us.

We're still testing the storage device to see whether it's going to be useful for us or not.

The idea of taking 3PAR and directly attaching it could be compelling for us. We just have a few more things to test to see whether they were fixed after the beta process.

Improvements to My Organization

Mainly, it gives us the next generation of the C7000, which we've been using since 2009, so we stay within the same useful pattern. The combination of Virtual Connect and OneView is compelling. It extends our existing operational knowledge and gives us a longer run life with that kind of pattern. It still solves our issues with cabling and power in the data centers, and it uses newer technologies that solve the issues we had with the C7000s.

Room for Improvement

One of the things that I would like to see, and it could be on their road map, is getting Virtual Connect to 100Gb throughput.

What they're coming out with initially on the road map is a 40Gb up-link on Virtual Connect; that would be one of the things we'd like. It would also be useful for us if they added an AMD CPU to the product line in the 2018 time frame.

Stability Issues

We are currently testing stability. The beta system had some issues. They were supposed to fix them for the production release, and we'll confirm that when we get to it.

Scalability Issues

In terms of scalability, we're happy with it in general and look forward to what we can do with it. We believe it should be able to replace what we've been doing with the C7000s. Mechanically, it would reduce the number of enclosures we'd be running, but because we're growing, we still need to add enclosures.

Customer Service and Technical Support

We used HPE technical support for this solution during the beta process; we were heavily tied into it. They were great. Some of the bugs they fixed led us to other bugs, but according to the product manager, everything we identified as a bug has now been fixed in the GA product. We'll confirm this later.

Previous Solutions

Before Synergy, we were using C7000s. We knew that the road map of new technology coming to the C7000 was coming to an end.

If you're going to buy new capacity and you're not going to fully populate the enclosures, then you need to move off the C7000 and go to Synergy.

When selecting a vendor, I look for operational stability. One of the things that drove us to stay on HPE, as opposed to Cisco UCS, was the fact that UCS basically stops at the hypervisor. HPE goes all the way up to the OS and beyond. If you have an issue with SQL, you can get help from HPE; you can't really get that from Cisco.

Initial Setup

The initial setup, because the product was still in beta, was complex. We discovered several bugs in the networking and in the way some of the iLO functions worked. We were one of the more prolific groups in the beta program. Those issues should be fixed, and we'll confirm that later.

Other Advice

Think about where you want to be in five years and choose the products in the Synergy family that will help you get to that point. You have a lot of options and if you just buy what is cost effective today, you may find yourself in trouble five years from now.

Disclosure: I am a real user, and this review is based on my own experience and opinions.

1 Comment

ChiefInfcee9 (Real User)

I wanted to post an update.
As technology moves forward, copper and two-strand fiber Ethernet cables should have 10/25 Gbps as the minimum speed, with auto-sensing solutions. Since finding auto-sensing optics is proving to be a problem, even manually configuring links as 10 or 25 Gbps would mean designing the blades for 25 Gbps, with 50 Gbps by 2020, and providing options of 12- or 24-strand OM4 fiber connectors that would allow two-fiber links of 10, 25, or 50 Gbps while offering 40, 100, and 250 Gbps uplinks by 2020. There should also be a focus on NVMe over Fabrics to expand storage beyond the blade at faster speeds than normal storage solutions support.
Between 2022 and 2025, the chassis should make power and fabric connections easier, and the fabric may be GenZ based. GenZ may require cable plants to be single-mode and may have a different mechanical connector, justified by being eight times the speed of the PCIe v3 we use today and by being a memory-addressable fabric rather than just a block/packet-forwarding solution.
The biggest issue to me with blades is lock-in, as the newest technology and most options ship in rack configurations, not in the OEM (think HPE or Dell) blade form factor. While the OEMs are at risk of being displaced by commodity gear from the ODMs (who supply the OEMs) using components specified by the Open Compute Project (OCP), the impact of CPU flaws could trip up the industry. Some ARM vendor may step in with a secure, low-cost container compute platform in an OCP-compliant form factor, using GenZ to make computing and storage fabrics that are software-defined by design.
In 2016, the two-socket server was the most-shipped server worldwide, but 60% of them shipped with only one CPU populated. By 2020, the core counts from Intel and AMD should make it a world where 90% of systems shipped are single-socket systems. High CPU capacity and PCIe v5 or GenZ will more radically change what we will be buying at the beginning of the next decade, which makes buying a blade enclosure today from which you want 5-8 years of functional life like testing the law of diminishing returns. While the OEM may provide support and pre-2022 parts, post-2022 you will be frozen in technology time. So while enclosures fully populated with 2019 gear may provide value, any empty slots will be at risk of becoming lost value.
While I wait for blade enclosures designed for the problems of the next decade rather than the last one, I think buying rack-mount servers is the best solution for enterprises that buy capacity on a project-by-project funding basis, given this gap between blade value and design limitations. Because the costs of rack servers are charged directly per project, the re-hosting/refactoring in the next decade to the next great hosting concept will be easier to account for, while minimizing the orphaned, lagging systems that tend to move slower than the rest of the enterprise.

19 February 2018