Most Helpful Review
Gives us full redundancy in both compute and storage; we could lose a full node and still keep everything up and running.
Find out what your peers are saying about Red Hat Ceph Storage vs. StarWind HyperConverged Appliance and other solutions. Updated: January 2020.
We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
The configuration of the solution and the user interface are both quite good.
Replicated and erasure coded pools have allowed for multiple copies to be kept, easy scale-out of additional nodes, and easy replacement of failed hard drives. The solution continues working even when there are errors.
Ceph has simplified my storage integration. I no longer need two or three storage systems, as Ceph can support all my storage needs. I no longer need OpenStack Swift for REST object storage access, I no longer need NFS or GlusterFS for filesystem sharing, and most importantly, I no longer need LVM or DRBD for my virtual machines in OpenStack.
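To make the object-storage point concrete: applications can talk to a Ceph cluster directly through librados instead of going through a separate Swift deployment. Below is a minimal sketch using the Python rados bindings; the config path and pool name are illustrative assumptions, not details from the review.

```python
# Minimal sketch: storing and reading an object directly in a Ceph pool
# via the librados Python bindings (python3-rados). Assumes a reachable
# cluster described by /etc/ceph/ceph.conf and an existing pool named
# "data"; both are placeholders for illustration.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

ioctx = cluster.open_ioctx("data")           # open an I/O context on the pool
ioctx.write_full("greeting", b"hello ceph")  # write an object in one call
print(ioctx.read("greeting"))                # read it back

ioctx.close()
cluster.shutdown()
```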
Data redundancy is a key feature, since it can survive failures (disks/servers). We didn’t lose our data or have a service interruption during server/disk failures.
We are using Ceph with internal, inexpensive disks and get data redundancy without spending extra money on external storage.
Without any extra costs, I was able to provide a redundant environment.
The community support is very good.
It has helped to save money and scale the storage without limits.
The most valuable features of the solution are the redundancy and its cost. I used to have a SAN, a Dell EMC EqualLogic. Unfortunately, it was what they call an "inverted pyramid of doom." It was two or three hosts, two switches, and one storage array at the very bottom. But the SAN, the storage array at the very bottom, is a single point of failure...
The support is the most valuable feature. The support has been amazing. It's around the clock. One of our hard disks accidentally ejected without me knowing or being onsite. They called and told me about it before I had a chance to see it myself.
What makes it valuable is the high-availability. In the education field, when you've got students in classrooms, any loss of service disrupts the lessons to a point that the whole lesson is affected. For part of the business which isn't business-critical, to have a little bit of a hiccup wouldn't be such a big thing, but here, it's the high availability of service that is important.
The hardware footprint is great. We've got two 2U servers which replaced four 2U servers. Granted, they were about three years old at that point, but we actually increased our processing capacity by about 50 percent while keeping our storage capacity about the same. We've actually been able to downgrade to a half rack from a full rack because we've gotten rid of some of our network equipment and some of our additional storage arrays.
The most valuable feature is the high availability. We have three nodes, and all data is synced instantly across all the nodes. Even if we had a disaster where two nodes containing dozens of critical machines failed, all the loads would, almost automatically, be run on the remaining node.
Overall, the solution has improved our system's performance. I was concerned about the physical-to-virtual conversion of our database server. It's actually much faster now, as a virtualized host on this Hyper-V cluster.
The software is great. It's very easy to understand. I've not delved into any of the command-line stuff, but there's no real need to script it. Since it went in, pretty much the only thing that I have needed to do is increase device image sizes and that process is very straightforward.
The hardware footprint is perfect. It fits in our rack perfectly, and we were able to condense a lot of physical servers we had. It has greatly eliminated the excess stuff in our server rack...
The management features are pretty good, but they still have room for improvement.
It needs a better UI for easier installation and management.
I have encountered issues with stability when the replication factor was not 3, which is the default and recommended value. Go below 3 and problems will arise.
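For context on that recommendation, the replication factor of a Ceph pool is its size setting (with min_size governing how many copies must be available for I/O). The following is a small sketch of checking and restoring the recommended value of 3, assuming the standard ceph CLI is on PATH and using a placeholder pool name.

```python
# Sketch: check a pool's replication factor and raise it back to the
# recommended 3 copies if needed. Assumes the standard `ceph` CLI is
# available and that a pool named "data" exists (placeholder name).
import json
import subprocess

POOL = "data"

out = subprocess.run(
    ["ceph", "osd", "pool", "get", POOL, "size", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
size = json.loads(out)["size"]

if size < 3:
    # Keep 3 copies, and require at least 2 to be available for I/O.
    subprocess.run(["ceph", "osd", "pool", "set", POOL, "size", "3"], check=True)
    subprocess.run(["ceph", "osd", "pool", "set", POOL, "min_size", "2"], check=True)
```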
Rebalancing and recovery are a bit slow.
This product uses a lot of CPU and network bandwidth. It needs some deduplication features, and it should use deltas for rebalancing.
Geo-replication needs improvement. It is a new feature, and not well supported yet.
During deployment, we need to create config files to enable Ceph functions in the OpenStack modules (Nova, Cinder, Glance). It would be useful to have a tool that validates the format of the data in those files before deployment, rather than discovering the failures afterward.
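A pre-flight check of the kind this reviewer asks for could be a short script that parses the OpenStack config files and confirms the Ceph/RBD options are present before deploying. The sketch below is only illustrative: the file paths, section names, and option keys reflect one common Ceph-backed layout and would need adjusting to a real deployment.

```python
# Sketch of a pre-deployment check for Ceph/RBD settings in OpenStack
# config files. File paths, section names, and option names below are
# illustrative assumptions based on a typical Ceph-backed setup; adjust
# them to your own deployment.
import configparser

CHECKS = {
    "/etc/glance/glance-api.conf": ("glance_store", ["stores", "rbd_store_pool", "rbd_store_user"]),
    "/etc/cinder/cinder.conf":     ("DEFAULT",      ["enabled_backends"]),
    "/etc/nova/nova.conf":         ("libvirt",      ["images_type", "images_rbd_pool", "rbd_user"]),
}

def validate(path, section, options):
    cfg = configparser.ConfigParser(strict=False, interpolation=None)
    if not cfg.read(path):
        return [f"{path}: file missing or unreadable"]
    problems = []
    for opt in options:
        if not cfg.has_option(section, opt):
            problems.append(f"{path}: [{section}] is missing '{opt}'")
    return problems

if __name__ == "__main__":
    issues = [p for path, (sec, opts) in CHECKS.items() for p in validate(path, sec, opts)]
    print("\n".join(issues) if issues else "All checked Ceph options are present.")
```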
Ceph is not a mature product at this time. Guides are misleading and incomplete. You will run into all kinds of bugs and errors trying to install the system for the first time. It requires very experienced personnel to support and keep the system in working condition, and to install all the necessary packages.
One area for improvement of the solution is that I had to get Windows, which I really didn't want because of the extra maintenance or overhead, as well as viruses, etc. It's going to take time for them to get their Linux to that point. They already have Linux but it's not as mature and they don't really support it on HCAs. They have it for individuals who want to use it on their servers, but not on HCAs.
The only real flaw that I have seen so far is this hard drive that was accidentally ejected: when it was added back into the RAID, there was an error. It was not added back correctly, so I have an outstanding hard disk. Apparently, a guy just knocked it with his hand while he was in my office, so it was just a small eject. He said that he didn't crash into anything. That is the only thing that has reared its head.
There is room for improvement in the setup and installation phase. We had massive problems connecting the StarWind appliances to our network infrastructure. That wasn't necessarily a StarWind problem. I don't know if their business partner in the UK wasn't used to having to deal with the supply of the cabling infrastructure, but that's where the problems started.
That situation, where Dell EMC servers were going down, has been my only real difficulty... it ended up being something that the wider audience of Dell EMC was actually aware of as an issue. Neither the StarWind technicians nor the Dell EMC technicians were able to actually identify that problem sooner than a week or so... The communication between Dell EMC support and StarWind support, in that particular scenario, left something to be desired, for me. I did express those concerns to StarWind and they were very responsive to that.
At the moment, the initial configuration is very technical and error-prone. That is the reason StarWind does it for you as a service, which is a great thing. But it would be nice if we could change or rearrange storage assignments ourselves.
The only critique I might have is that the support is overseas in Eastern Europe and, on occasion, there has been a language issue. But in general, they're as good as can be...
We were slightly disappointed with the hardware footprint. We were led to believe, and all the pre-sales tech information requirements pointed to the fact, that it was coming on Dell hardware. Then it came on bulk servers.
I wish I understood what goes into the StarWind software a little bit better. To me, it's kind of magic the way some of it works. As an IT professional, you don't really want things to be magic. I do wish there was a little more "Here's how it works." There could be more documentation given to administrators...
Pricing and Cost Advice
If you can afford a product like Red Hat Ceph Storage then go for it. If you cannot, then you need to test Ceph and get your hands dirty.
We never used the paid support.
Most of the time, you can get Ceph with the OpenStack solution in a subscription as a bundle.
In terms of cost, a storage array is more expensive... For half the cost of Compellent, I got two hosts, more storage, and redundancy.
There is a bit of a start-up cost. Having never used HCAs before, I was reluctant to buy it. I would suggest that you jump in and do it, as I wish I hadn't wasted so much time.
Our entire package was around $35,000 for everything, including three years of support.
We looked at Nutanix and found it did almost the same thing but for more money. In fact, StarWind was nearly one-third of the price; it cost us £36,000. That includes five years of monitoring... The Nutanix was near enough £110,000 for relatively the same amount of performance and storage.
The Nutanix piece was about $45,000, getting close to $50,000 with all the licensing involved, whereas the StarWind was less than half of that, after Microsoft licensing and such.
I honestly feel that there's no one else in the market doing what they're doing for the price point that they're doing it at. That's why I asked them about investing in their company. I think that the options they're providing and the software that they have is sort of revolutionary for the price point... The total cost was $24,400.
The other solutions we were looking at were priced much higher than this and they didn't necessarily have full redundancy... Nutanix and VxRail were in the final running... but it came down to our price point.
When I researched, they came out as the most cost-effective.
Overview
Red Hat Ceph Storage is an enterprise open source platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data.
For SMBs, ROBO sites, and enterprises looking to bring quick deployment and operational simplicity to virtualization workloads while reducing related expenses, StarWind's solution is the HyperConverged Appliance (HCA). It unifies commodity servers, disks, and flash; a hypervisor of choice; StarWind Virtual SAN, Microsoft Storage Spaces Direct, or VMware Virtual SAN; and associated software into a single manageable layer. The HCA supports scale-up by adding disks and flash, and scale-out by adding extra nodes.
StarWind HyperConverged Appliance consists of StarWind Virtual SAN, Microsoft Storage Spaces Direct, or VMware Virtual SAN "Ready Nodes," targeting those who are building their virtualization infrastructure from scratch. Where there is an existing set of servers, StarWind offers a software-only version, which is essentially the field-proven StarWind Virtual SAN that powers the HCA.
Sample customers: Dell and DreamHost (Red Hat Ceph Storage); Sears Home and Franchise Business (StarWind HyperConverged Appliance).
Top industries among reviewers (two breakdowns): Software R&D Company 32% / 18%, Comms Service Provider 18% / 11%, K-12 Educational Company or School 15%, Non-Tech Company 8%.