Cisco UCS B-Series Review

We use it in our converged infrastructure to push out profiles and firmware and to provide console access.

What is most valuable?

We are using it in our converged infrastructure with the common UCS Manager for:

  • Profiles
  • Firmware
  • Console access
  • VLAN configurations
  • Troubleshooting

How has it helped my organization?

Running in the VCE Vblock gives us the flexibility to deploy a large virtual workload of servers. We use a mix of mainly Windows servers and a few Linux appliances.

I had one blade server fail. The replacement was up and operating quickly after the blade was swapped over.

What needs improvement?

Smaller locations are held back by the requirement to deploy a pair of converged infrastructure interconnects for redundancy.

To deploy a standard Cisco blade system with redundancy for maintenance and reliability, you have to purchase two converged infrastructure 6296 or 6396 interfaces/switches, plus the chassis, uplink interfaces, and the blade servers to populate one or more blade chassis. From my point of view, the initial cost to do this is too high for a small regional office, where we usually have the computing gear in a dedicated network closet alongside the switches and servers.

Cisco now has a “Mini” solution where they have put the converged infrastructure and management into the chassis via the slots where the uplink interfaces normally install. From what I have read (but have never used or experienced), this setup can support multiple blades, and even external C-Series chassis, in a converged environment all sharing some form of external storage.

Most of my company's need is for data distribution from file-sharing server(s), a domain controller, and possibly a local database server. I can cover all of this with one 2U server from another company, into which I can cram 3-6 TB of DAS/RAID disks for file storage, with enough RAM and CPU cores across two sockets to cover my compute/VM needs.

My demands for servers at most remote sites are different from most. Our end users all have either a laptop or a powerful CAD workstation for their engineering work. We don't do VDI via VDI terminals. We do use VDI for 2D engineering apps on our Vblock, and in C-Series UCS servers with NVIDIA shared video cards for CAD/3D rendering in our VDI pools.

For how long have I used the solution?

The original M2 servers were in operation for more than five years. The new M4s have been up for under a year.

What do I think about the stability of the solution?

There was only one server failure during my use of 24 blades in my old system. There were 20 blades in my new/replacement implementation. In reality, this is a small installation.

What do I think about the scalability of the solution?

We have not encountered any scalability issues. We had open slots in the chassis and added blades along the way, and we upgraded the RAM in existing systems for more VM headroom.

How is customer service and technical support?

There were no issues with technical support, as most was handled via VCE.

Which solutions did we use previously?

We had standalone 2U servers from HPE that were tied to a SAN for shared storage.

Previously, we had limited memory expansion. We did dual Vblock installations to absorb the multiple little clusters of VM hosts that we had on separate servers.

We still use HPE servers as standalone VMware hosts in smaller sites.

The newer generation HPE servers have very high disk capacity; we can get 3 TB of disk in a 2U host.

How was the initial setup?

The Vblock system was installed and operational at handover. We had to provide IP ranges for servers, management interfaces, etc. However, the VCE installation teams did the actual configurations of the hosts, SAN, and network connectivity.

What's my experience with pricing, setup cost, and licensing?

Although I was not closely involved in the pricing or licensing costs, I do have to monitor the allocation of VMware CPU licenses.

I know that Cisco licenses the number of ports and uplinks on various interfaces inside the Vblock. However, we have not done any upgrades beyond our initial purchase of the replacement Vblocks to run into any new licensing additions.

Which other solutions did I evaluate?

We evaluated other options, such as BladeSystem from HPE and standalone server stacks, at least five years ago when we purchased the original set of Vblocks.

It was the only integrated system that fit our needs. It filled the requirement for new computing power, an updated network, and SAN storage, and it offered the expansion possibilities of a data center in a box with nearly a single point of contact for support.

What other advice do I have?

Look closely at your needs.

  • Do you need more computing power and memory or storage expansion possibility?
  • Do you need redundancy across installation sites (HA/DRS)?
  • If you do HA/DRS, does it need to be near real-time disk writes, or more managed recovery/failover?
Disclosure: My company has a business relationship with this vendor other than being a customer: We are one of the few that had the arrangement to purchase the Vblock directly from VCE rather than via a third-party VAR, as when the original systems were put out for bid. After we had done all the specification work with the VCE configuration team, the VAR tried to tack on a percentage for passing the order through to VCE, and that almost canceled the whole system.