What is our primary use case?
We are using DataPlatform and DataProtect, we are running NAS natively on it now, and we are doing some cloud archiving off of it.
The primary use for DataPlatform is that we are a managed service provider. We use it as our platform for backing up our infrastructure-as-a-service clients and our remote clients. Recently, we replaced multiple platforms with Cohesity: we replaced our Quantum storage appliances with DataPlatform, we replaced our Veeam storage repositories, and we also replaced our Commvault storage platforms, which held all of our internal data.
In the end, we consolidated all of our backup storage onto the DataPlatform, and we run DataProtect with that. We are also a Veeam Cloud Connect partner, so our Veeam Cloud Connect service actually stores its data on Cohesity. It is a very versatile solution.
We are also now doing backup for our Microsoft 365 service, and that runs natively off Cohesity as well.
What is most valuable?
Because DataPlatform is a hyper-converged architecture, it scales very well for an MSP (Managed Service Provider): we can just add nodes as we gain business. Its scalability is hugely important to us, but the simplicity of it is really good too. It is simple and easy to use, which for an MSP is pretty important, because I could be supporting hundreds of clients on that platform. Try doing that in a Commvault environment. Commvault is just super complicated, and it would be impossible.
Simplicity, ease of use, and scale are just phenomenal with DataPlatform.
What needs improvement?
We have been leaning on Cohesity a little to start looking at providing tier-one storage capability off the platform. With the NAS workloads, we have some tiered workloads that we will put on it, but it has never been touted as a tier-one storage platform; it would not be considered tier-one for NAS-based workloads anyway. They recently released all-SSD nodes, and we believe that upgrade in performance is going to be a huge benefit to us, because we already use it as a target destination for our Zerto-based workloads and we get to take advantage of the dedupe. The idea was that when we do a recovery, we can do a native NAS recovery and it performs pretty well, but then we immediately had to migrate the virtual workload to primary disk. That meant we always had to have a pool of tier-one storage sitting there unused in case of a DR (Disaster Recovery) event or some critical situation experienced by a client. Now, with SSDs in there, we believe we are not going to have to do that anymore.
That lack of tier-one capability is the only pain point or area for improvement, but they are working on it. They have all-SSD nodes now, and we will be testing actual full recoveries on the NAS with their SmartFiles. If I can run 30 or 40 workloads simultaneously with relatively high I/O requirements, then we are going to be extremely happy.
They have their CDP (Continuous Data Protection) capability now, but we need CDP in a multitenant solution, which is on their roadmap and not available to us yet today. So that is something we are anxiously waiting for. We run the multitenant edition, and CDP is one feature we cannot yet use in our current multitenant configuration.
For how long have I used the solution?
We have been working with Cohesity DataPlatform for approximately a year.
What do I think about the scalability of the solution?
It is a pretty stable solution, and it was largely scalable before just because it is a hybrid storage solution. The only thing that limits your performance is the amount of cache you have in each node. But now, if each node is 100% SSD, performance is no longer hindered by spinning-disk performance.
How are customer service and technical support?
The technical support is just awesome. They are top notch. There were a couple of bugs that we came across in some earlier releases. They identified them as bugs, escalated them to engineering right away, and took care of them. We have been able to get bug fixes turned around in as little as a day or two in some cases. So the support has been really good for dealing with bugs, but even for questions about how to do things, they have been awesome.
How was the initial setup?
The initial setup was very simple. You can have a three-node or a twenty-node configuration; it really does not matter. You are up and running right away, and within a few hours you are just cruising along.
What's my experience with pricing, setup cost, and licensing?
The cost is based on capacity. There is also a hardware component, because you have to buy nodes and disks for those nodes. That is typical server pricing, whether you are buying Cisco or HP or whatever: you are buying rackmount boxes full of disks.
If I am comparing to the competition, let's say we compare Veeam to Cohesity. Veeam makes their solution look extremely inexpensive because they are just selling software: you run the solution as a virtual machine, and the idea they suggest is that you do not need anything else in the way of dedicated hardware. But in actuality, Veeam still needs a lot of compute power to perform, especially if you need compression, encryption, and the like. So Veeam makes it sound like it has advantages because of the way it is deployed, but any backup product needs a lot of horsepower. Their claims end up not being the reality.
When you factor in the claim of not needing hardware, you have to try to compare apples to apples. Veeam does not really have an immutable storage capability because they do not manage their own storage. From an attack-footprint perspective, Veeam runs on top of Windows, so its attack surface is pretty large. When I run Veeam, I also have a dependency on SQL Server. Those are all things we looked at, and we realized we might not want those risks. We had to look at the risk and decide whether it was high enough to say that Veeam was not an option. That was a deciding factor, because we did not want our backup platform to be the same platform that is our largest attack surface, namely our app servers and all the Windows servers out there.
Right now, what we are seeing in the industry is that hackers go after the backup systems first. If they get those, then they take out the production data, and you are done. That makes immutable storage absolutely critical to us. The obvious answer was not having the platform be Windows-based, which was hugely important to security.
What other advice do I have?
My advice to people looking for this type of solution is that you really have to look at your secondary data, which can come in many forms. If you look at Cohesity only as a backup product, you may think it is too costly for just that. But when you look at all of Cohesity's capabilities, and you find two or three things you can take advantage of on the platform, it becomes more desirable and the decision is a no-brainer.
Now I have one platform that is doing my backup to disk, my data protection, my cloud archive, and my NAS protection. It is serving NAS and acting as my DevOps environment. On top of all that, I can run antivirus against that normally stagnant, stale data, and I can run Splunk against it, so it becomes useful data to me. By comparison, data in a Veeam backup repository is no good to anybody but Veeam.
On a scale from one to ten where one is the worst and ten is the best, I would rate the Cohesity DataPlatform as a ten-out-of-ten. It does everything we want, it is a unified solution, and it is only getting better.
Which deployment model are you using for this solution?