What is our primary use case?
We use it to replace some Hyper-V infrastructure. We were hoping for some de-complication. Our old Hyper-V cluster was three Dell R410 servers with two Cisco switches, connected by iSCSI to a VNX. The VNX was coming toward end-of-life. I've de-cabled it now, taken it out of the rack, and I've got a box of Ethernet cables left over. There was a massive amount of stuff doing the same job as two servers and a couple of Mellanox cards.
Although it was end-of-life, we got some quite severe warning emails from EMC saying, "This is it guys. Your support is terminating. If anything goes wrong with it, good luck." We could have purchased a third-party warranty on it if we'd wished, but then it would have been a matter of luck in terms of the parts. Although nothing ever actually went wrong on the VNX, hardware-wise, it was about not having that parachute.
How has it helped my organization?
It's just taken over the job of something that was going out-of-support. The only massive improvement we have really noticed is with live migrations: because it's disk-based rather than iSCSI, it is super-fast now.
It's fairly instant. Before, a live migration meant leaving it on a countdown. If we had to move stuff around quickly, we had to do some quick live migrations; each one would take a few minutes and only one could be done at a time. There is also an improvement in having a new Windows Server. The 2008 R2 server that we replaced didn't have the PowerShell module for Hyper-V, but obviously this version does. We've just scripted it and, bang, with the improved response times from it being disk-based instead of iSCSI, shoving an 8 GB memory file through goes a lot quicker. It's not really something that has saved our operations at any point, but the improved performance is pleasing.
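As a rough illustration, the sort of scripted bulk migration described above can be done with the Hyper-V PowerShell module. This is a minimal sketch under assumptions, not the actual script; the node name "HV-NODE2" is an illustrative placeholder.

```powershell
# Sketch: live-migrate every running VM on this host to the partner
# node. "HV-NODE2" is a placeholder, not a name from this environment.
Import-Module Hyper-V

$destination = 'HV-NODE2'

Get-VM | Where-Object { $_.State -eq 'Running' } | ForEach-Object {
    # Move-VM performs the live migration, memory state and all.
    Move-VM -Name $_.Name -DestinationHost $destination
}
```

With a script like this, draining a node for patching becomes a one-liner rather than a queue of manual, one-at-a-time migrations.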
It hasn't increased redundancy or failover capabilities; it has just been a like-for-like replacement. We did have three servers, two switches, and a disk array, whereas now we just have two servers. So we've consolidated: we're doing the same with less. It has also saved us money in the sense that there is less equipment to pay out warranty on.
What is most valuable?
We bought their ProActive Premium Support. That's why they email us when we have rebooted to patch, and they check with us that everything is okay. We've not really had any problems with it, so it has not really presented any real-world benefits yet. Obviously there are benefits to it because it's monitored. We do monitor stuff onsite, but it's good to have backup. We're a small team, so that is one of the major benefits of it.
The software is great. It's very easy to understand. I've not delved into any of the command-line stuff, but there's no real need to script it. Since it went in, pretty much the only thing that I have needed to do is increase device image sizes, and that process is very straightforward. As part of the installation, the StarWind representative took me through it. We just migrate everything to the other server, put it into maintenance mode, increase the size, and commit.
There really isn't any maintenance. It's fairly self-sufficient.
What needs improvement?
We were slightly disappointed with the hardware footprint. We were led to believe, and all the pre-sales technical information pointed to the fact, that it was coming on Dell hardware. Then it came on white-box servers. They asked for some email addresses for iDRAC and the like. We thought, "Oh good, it's Dell. We're familiar with that kind of hardware infrastructure." Our other servers here are Dell, so we know how the Dell ecosystem works. But then, these weren't Dell. These are Supermicro, which, when you boil it down, use the same Intel parts. But it's a little reminiscent of putting together OEM PCs. That's how the servers look. Still, they're in and they're working.
What you're not paying for, and that may be why it was £36,000 instead of £110,000, are those Dell concierge services. Dell has a well-rounded iDRAC infrastructure and we could have integrated it into our other stuff. We're all used to how that style of lights-out management works. But here it's, "Oh, Supermicro. It all looks a bit '2002.'" It's not what we were expecting, but it works.
For how long have I used the solution?
We've been using the appliance for two or three months.
What do I think about the stability of the solution?
The stability has been great. There have been no problems, not a hiccup or anything. So far, it seems fine.
What do I think about the scalability of the solution?
It would be fairly easy to add to it. We could add a third node with another card.
How are customer service and technical support?
Tech support is very prompt, very friendly. They're knowledgeable. I don't think I have come across anything that they couldn't answer.
Which solution did I use previously and why did I switch?
It was just a straight one-for-one swap. De-complication really was the main driver for it. Troubleshooting problems meant working on Windows Server Core over iSCSI and logging into a rather unfriendly VNX with no info panel on it; if it was struggling, it had a lot of trouble telling you. We had to actually order a special cable to be able to serial into it at one point. This solution is relatively straightforward by comparison.
We came across StarWind by just having a look at what options were out there. I liked StarWind because, when you look at their material online, they seem more geared towards education. They've got quite an extensive Knowledge Base and they are very good at tutorials. Other companies seemed to put more emphasis on the marketing: "Look at our shiny boxes."
How was the initial setup?
The initial setup was fairly straightforward. The only thing that wasn't straightforward was, "Oh, we've never had Supermicro before." It was a matter of getting used to, and documenting, how stuff works.
There were no instructions. We just got two boxes. There wasn't any "Welcome to your StarWind Hyperconverged Appliance." It was just two brown boxes with two servers in them.
We just racked it up, had a phone call with them, and let the guys at StarWind know when it was online. It was up and running in our environment pretty much straight away. The only problem I had was the SFP cables: "Which way up do these go? And does it go A to B, or A to A and B to B?" So that required a phone call.
The only other problem we encountered, which protracted the migration, was that while they've got good V-to-V migration software, our old environment was 2008 R2 and wasn't supported by it. So we had to "handle" it. It was a matter of recreating each VM. I scripted it in PowerShell myself and did them one or two each weekend, over a period of three or four weeks. They're production servers, so they had to be down for the Hyper-V conversion process. Our file server took a while; it is about a terabyte-and-a-half and took about 11 hours to convert, but I had it scripted anyway. Once it converted, the script copied from the source to the StarWind storage as part of the copy process, then created the VM, booted it, and notified me.
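The per-VM weekend job was along these lines. This is a hedged sketch using standard Hyper-V cmdlets; all paths and names are placeholders, not details from the actual migration.

```powershell
# Sketch: pull a legacy VHD off the old 2008 R2 host, convert it, and
# recreate the VM on the new cluster. Paths/names are illustrative.
Import-Module Hyper-V

$sourceVhd = '\\oldhost\exports\FileServer.vhd'
$targetVhd = 'D:\VMs\FileServer.vhdx'

# Convert-VHD copies and converts the disk in one pass; this is the
# long step (roughly 11 hours for a 1.5 TB file server in our case).
Convert-VHD -Path $sourceVhd -DestinationPath $targetVhd

# Recreate the VM around the converted disk and boot it. Generation 1
# suits a guest that originated on 2008 R2-era Hyper-V.
New-VM -Name 'FileServer' -MemoryStartupBytes 8GB -Generation 1 -VHDPath $targetVhd
Start-VM -Name 'FileServer'
```

Because the conversion is the only long-running step, scripting it end-to-end means the downtime window is mostly unattended.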
For the setup it was just me involved.
Which other solutions did I evaluate?
We looked at Nutanix and found it did almost the same thing but for more money. In fact, StarWind was nearly one-third of the price; it cost us £36,000. That includes five years of monitoring: if we have to reboot, we get an email from them asking, "Is everything okay, guys?" We tell them, "Yeah, yeah, it's fine. Don't worry. Patching." The Nutanix was near enough £110,000 for relatively the same amount of performance and storage.
There were no additional fees for StarWind. That amount is for five years, done and paid for.
What other advice do I have?
They're not really appliances; they're just two servers with a bit of software on them. It's slightly misleading to call them hyperconverged appliances. It's just two white-box servers with a Mellanox card in each.
In terms of improvement to IOPS or latency from using it, we haven't seen anything drastic. But then again, we weren't really hitting it hard before. I've not measured it. It has just not caused us any trouble. So it's all good.
I would give it a solid eight out of ten. It's trouble-free and it's very clear to use. It's not one of those implementations where you're tearing your hair out; if you are tearing your hair out, it's about other things, not the actual StarWind part of it. I would probably have given it a ten if the hardware were a bit slicker, or if there was more "Welcome to your new StarWind implementation. Here's where everything plugs in" type of documentation. We did get some email material, but it tended to look like it was more for Dell hardware than for Supermicro, white-box, no-name servers.