Vblock Review
I cannot praise the support I got from them enough, but a vBlock only makes really good sense if your existing infrastructure is Cisco-based.


So, you want to get into the whole virtualization scene, and you don’t want to deal with a vast number of vendors, contracts and all the other things that tend to follow? A modern, virtualized infrastructure can be a pain, but VCE has a remedy for this, at least within certain parameters.

The VCE vBlock™ is an all-in-one virtualization platform that comes complete with a midrange, tiered FC SAN from EMC, Cisco Nexus 5548 switches to tie into your existing infrastructure (assuming you already have one, that is) and a Cisco UCS blade chassis for processing power. All of it fits into a couple of pretty racks, delivered and configured (if you want it so) by capable professionals.

Okay. So far, so good, so what?

Let’s discuss the good part first: you get a complete package, and a decent UI to go with it. All you need to do is provision a set number of datastores, hosts and VLANs, press deploy, and 2-3 hours later you are ready to go. No mucking about with WWNs, LUN provisioning, CDs with ESXi and so on. UIM (EMC’s Unified Infrastructure Manager), the management tool, feels a bit clunky right off the bat, but you get used to it, and chances are that you won’t see all that much of it once you have deployed your stuff anyway.
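To give a sense of what that deploy button spares you, here is a minimal sketch of doing just one slice of the manual work yourself, using VMware’s pyVmomi Python SDK (an illustration only, not part of the vBlock/UIM toolchain; the hostname, credentials and datastore name are placeholders): rescan the HBAs and turn a freshly zoned LUN into a VMFS datastore on a single host.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (placeholder address/credentials; lab-style cert bypass).
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    # Grab the first ESXi host in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]

    # 1. Rescan the HBAs so the newly zoned/masked LUN becomes visible.
    host.configManager.storageSystem.RescanAllHba()

    # 2. Format the first unclaimed LUN as a VMFS datastore.
    ds_sys = host.configManager.datastoreSystem
    disks = ds_sys.QueryAvailableDisksForVmfs()
    if disks:
        option = ds_sys.QueryVmfsDatastoreCreateOptions(disks[0].devicePath)[0]
        option.spec.vmfs.volumeName = "tier1-ds01"  # placeholder name
        ds_sys.CreateVmfsDatastore(option.spec)

    Disconnect(si)

Now multiply that by every host, add the vSwitch and port-group plumbing for each VLAN, and you see why a one-click deploy is genuinely pleasant.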

EMC’s tiering also seems to work OK, from, I admit, my limited experience with it. If it works, there is no reason to mess with it overly much.

And now for the not-so-good, at least in this author’s not-so-humble opinion.

A vBlock only makes really good sense if your existing infrastructure is Cisco-based. Cisco has its own way of doing things and does not play nice with other vendors’ equipment. The processing hardware isn’t really that good either, especially considering what Cisco likes to charge for what is nothing more than midrange x86 blades.

In everyday operations you hit another couple of snags. The default setup is based on the (in VMware circles) highly debated Nexus 1000V™ distributed switch. I will not get into the love-hate relationship VMware admins have with this piece of software, but I feel obliged to mention that it died on me no fewer than three times in a two-month span, taking the entire production environment with it. Put a couple of hundred servers on a vBlock and that is costly downtime. There is nothing stopping you from using standard VMware switches instead, but the Nexus 1000V™ is somewhat implied.

A word on VCE support: They are very competent and the most helpful support team I have ever come across in my 15 years in this business. I cannot praise the support I got from them enough.

Three considerations you need to weigh are:

Can I afford this? The vBlock is marketed as a high-end piece of machinery. The problem is that all the components are midrange at best.

Can I live with the configuration limitations? You are at Cisco’s mercy if you want to upgrade. Cisco does a lot of things well, and getting paid is one of them.

How about scaling? This is a possible issue for the enterprise market. Each vBlock is its own entity. The VMs on a vBlock are stuck there and can’t be moved off it without downtime and some pretty heavy admin magic, assuming that is even available. 10 vBlocks means 10 SANs, 20 physical 5548 switches and so on to administer. Imagine the horror of administering 100 of these babies!

PROS:

Easy setup and roll out

Comes in a complete package with one vendor and excellent support

CONS:

Price

Scalability

(Expensive) vendor lock-in

Disclosure: I am a real user, and this review is based on my own experience and opinions.
