Most Helpful Review
We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
There is unified storage, which provides flexibility. It is set up perfectly for performance and provisioning. We are able to monitor everything using a separate application. It provides error and critical warnings that allow us to take immediate action through ONTAP. We are able to manage everything, log a case, and follow up with the support team, who can fix it. That is how it is unified.
The feature which I like the most is that it has the capabilities that the traditional storage system offers. It provides all the functionality. The deduplication and compression work exactly like ONTAP's traditional storage. So people who have experience with that find it very easy to manage.
The most valuable features are tiering to S3 and being able to turn it on and off, based on a schedule.
If you have a larger amount of data than normal in the cloud, it is easy to provision and maintain. Waiting for delivery of the controller, configuring enclosures, and so on are all eliminated compared to an on-premises deployment.
Its features help us to have a backup of our volumes using the native technology of NetApp ONTAP. That way, we don't have to invest in other solutions for our backup requirement. Also, it helps us to replicate the data to another geographic location so that helps us to save on the costs of backup products.
They have a very good support team that is very helpful. They will help you with every aspect of getting the deployment done.
One of the most valuable features is its similarity to the physical app, which makes it familiar. It's almost identical to a real NetApp, which means you can run all of the associated NetApp processes and services with it. Otherwise, we would definitely have to deploy some hardware on a site somewhere, which could be a challenge in terms of CapEx.
This solution provides unified storage, no matter what kind of data you have.
The interface is user-friendly.
It provides high performance and business-critical storage.
The ability to pool the storage to leverage thin-provisioning is a huge saving in space and costs.
During maintenance periods on any part of the storage, or during VMware migrations, we have had no downtime.
Support for Automated Storage Tiering (AST) is a good feature that saves money.
DataCore's ability to seamlessly move virtual volume data between storage pools as well as their synchronous mirroring has made maintenance and disaster recovery planning achievable.
DataCore has helped provide flexible, highly available, high-performance storage that otherwise would have been outside our price range.
The features I have found most valuable are active-active, or so-called grid technology, and the integration with our VMware vCenter.
The SwiftStack Controller, which is the web UI, provides out of band management. This has been one of the best features of it. It allows us to be able to do upgrades and look at performance metrics. It is a top feature and reason to choose the product.
The most valuable feature is its versatility. We use 1space and we can use it for almost anything: for our cloud service and for backups of VMs.
SwiftStack is also quite flexible when it comes to hardware. It depends, of course, on the use case and the kind of hardware you want to buy. But you have quite a bit of choice in hardware. The SwiftStack software itself does not impose anything on you.
It has helped us with the ability to distribute data to different data centers. As part of our DR strategy, we have nodes automatically replicating data from one data center to the other. This makes it easier for us to not have to shift tapes around.
The general consensus on what we've done is that the restores coming back from it have been faster than they were from our prior vendor. Ingest speeds are fine. The restore speeds have improved.
The scalability is phenomenal. It seems infinite, as long as you put enough storage in place, add enough nodes.
The performance is good. It is a secondary storage platform designed for archive and backup, so performance for the right use cases is very good. We have been pretty happy in that regard.
The biggest feature, the biggest reason we went with SwiftStack, rather than deploying our own model with OpenStack Swift, was their deployment model. That was really the primary point in our purchase decision, back when we initially deployed. It took my installation time from days to hours, for deployment in our environment, versus deploying OpenStack Swift ourselves, manually.
We are getting a warning alert about not being able to connect to Cloud Manager when we log into it. The support has provided links, but this particular issue is not fixed yet.
When it comes to critical or read-write-intensive applications, it doesn't provide the performance that some applications require, especially SAP. The SAP HANA database requires a write latency of less than 2 milliseconds, and the CVO solution does not fit there. It could be used for other databases where the requirements are not so demanding, especially when it comes to write latency.
I would like to see more aggressive management of the aggregate space. On the Cloud Volumes ONTAP that we use for offsite backup copies, most of the data sits in S3. There are also the EBS volumes on the Cloud Volumes ONTAP itself. Sometimes what happens is that the aggregate size just stays the same. If it allocates 8 terabytes initially, it just stays at 8 terabytes for a long time, even though we're only using 20 percent of that 8 terabytes. NetApp could downsize that more aggressively.
I would like NetApp to come up with an easier setup for the solution.
The automated deployment was a bit complex using the public APIs. When we had to deploy Cloud Volumes ONTAP on a regular basis using automation, it could be a bit of a challenge.
We want to be able to add more than six disks to an aggregate, but there is a limit on the number of disks per aggregate. In GCP, the limit is six disks per aggregate, while in Azure the same solution allows 12 disks per aggregate, twice as many. They should raise the disks-per-aggregate limit so we don't have to migrate from one aggregate to another when capacity is full.
There is room for improvement with the capacity. There's a very hard limit on how many disks and how much space you can have. That is something they should work to fix, because it's limiting. Right now, the limit is about 360 terabytes, or 36 disks.
The solution is very expensive. Due to its design, it's not cost-efficient compared to a physical environment of similar size.
I would like to see reporting added, such as a monthly connectivity report.
I miss deduplication and compression.
Having an enterprise "Storage Dashboard" that can show capacity, usage, performance, and any issues would be very beneficial.
I think the performance reporting can be improved by storing historical statistics in a database for comparison purposes.
We are waiting for container support (on the roadmap), as well as a user-friendly full web-administration capability, and an improved API.
DataCore needs a more efficient and better way to keep track of metrics and counters so that we can do baseline analysis to measure performance.
The cost is becoming prohibitive since they moved to a subscription model.
I think an easier way to open a service call, right through the DataCore GUI, would be an improvement, especially when there is an urgent issue.
The file access needs improvement. The NFS was rolled out as a single service. It needs to be fully integrated into the proxy in a highly available fashion, like the regular proxy access is. I know it's on the roadmap.
At the moment we are using erasure coding in an 8+4 configuration. What would be nice is more versatility beyond standard configurations like 15+4 and 8+4, so we could, for example, select 8+6 or the like.
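For context on what the reviewer is asking for, the trade-off between erasure-coding layouts such as 8+4 and 8+6 can be sketched numerically. This is a generic illustration of erasure-coding arithmetic, not SwiftStack code; the `ec_profile` helper and its field names are hypothetical.

```python
def ec_profile(data_frags: int, parity_frags: int, usable_tb: float) -> dict:
    """Summarize an erasure-coding layout: raw capacity required for a
    given usable capacity, and how many simultaneous fragment (disk or
    node) losses the layout survives."""
    total_frags = data_frags + parity_frags
    overhead = total_frags / data_frags  # raw bytes stored per usable byte
    return {
        "scheme": f"{data_frags}+{parity_frags}",
        "overhead": round(overhead, 2),
        "raw_tb_needed": round(usable_tb * overhead, 1),
        "survivable_losses": parity_frags,
    }

# Compare the layouts mentioned in the review for 100 TB of usable data.
for scheme in [(8, 4), (8, 6), (15, 4)]:
    print(ec_profile(*scheme, usable_tb=100))
```

The sketch shows why 8+6 is attractive for durability (it tolerates six fragment losses instead of four) at the cost of higher raw-capacity overhead (1.75x versus 1.5x for 8+4), which is exactly the flexibility the reviewer wants to choose per use case.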
On the controller features, there needs to be a bit more clean up of the user interface. There are a lot of options available on the GUI which might be better organized or compartmentalized. There are times when you are going through the user interface and you have to look around for where the setting may be. A little bit more attention to the organization of the user interface would be helpful.
They should provide a more concise hardware calculator when you're putting your capacity together.
I would like to see better client integrations, support for a broader client library. SwiftStack could be a little bit more involved in the client side: Python, Java, C, etc.
The biggest room for improvement is the maturity of the proxyFS solution. That piece of code is relatively new, so most of our issues have been around the proxyFS.
[One] thing that I've been looking for, for years as an end user and customer, for any object store, including SwiftStack, is some type of automated method for data archiving. Something where you would have a metadata tagging policy engine and a data mover all built into a single system that would automatically be able to take your data off your primary and put it into an object store in a non-proprietary way - which is key.
Pricing and Cost Advice
If a customer is only using, say, less than 10 terabytes, I don't think CVO would be a good option. A customer using at least 100 or 200 terabytes should get a reasonable price from NetApp.
Once we deploy with the pay-as-you-go model, we cannot convert the product to a BYOL model. This is a concern that we have.
They have a very good price which keeps our customers happy.
They give us a good price for CVO licenses. It is one of the reasons that we went with the product.
For NetApp it's about $20,000 for a single node and $30,000 for the HA.
Our licensing costs are folded into the hardware purchases and I have never differentiated between the two.
Cost is a big factor, because a lot of companies can't afford enterprise grade equipment all the time. They skimp where they can. I would recommend that they improve the cost.
Cloud is cloud. It's still expensive. Any good solution comes with a price tag. That's where we are looking to see how well we can manage our data in the cloud by trying to optimize the costs.
Pricing has improved but it is still expensive.
This solution allows the use of off-the-shelf hardware and charges by the TB of storage.
Make sure you are made aware of the annual subscription cost when purchasing.
The cost is at the same level as other storage solutions and it is easy to understand the licensing.
The pricing and licensing are better with DataCore.
We are able to dynamically grow storage at a lower cost. We can repurpose hardware and buy commodity hardware. There is a huge cost savings, on average $100,000 a year compared to traditional storage for what we have at our size.
The pricing model is great and makes sense. We have talked about how to get into more of a frequent billing cycle than once a year. That would be an interesting concept to add into the product, having the ability to have monthly billing instead of having to do a one-year licensing renewal. However, the way the license works by charging for storage consumed is definitely what makes them the most competitive.
In total dollars, it costs us more because we are storing more. However, if you look at it from a cost-per-gigabyte perspective, we have dropped our costs significantly.
We find the pricing rather steep. Of course, you get quality for your money, that's absolutely true... [But] when you look at the prices of the licensing and the prices of your hardware, it's quite substantial.
The annual support and maintenance costs, compared to our old backup solution, came to about a two-thirds savings, roughly 60% per year on our support and maintenance contract. That savings funded additional expansion out of what the support and maintenance contracts on the old solution had been costing us.
The pricing and licensing are capacity-based, so it's hard to put my finger on them, because so many different vendors charge in different ways. We are still saving significantly over any of the other options that we evaluated because we can choose the best hardware at the best price, then put SwiftStack software on it. So, it's hard to complain, even though a part of me goes, "It would be nicer if it were less expensive."
We have had a 40 to 50 percent reduction in CAPEX on the acquisition of new hardware, which is probably conservative.
Also Known As
NetApp Cloud Volumes ONTAP: ONTAP Cloud, CVO, NetApp CVO
DataCore SANsymphony: SANsymphony, DataCore Virtual SAN
The leading enterprise-grade storage management solution delivers secure, proven storage management services and supports capacities of up to 368TB. The software service supports various use cases, such as:
- File shares and block-level storage serving NAS (NFS, SMB/CIFS) and SAN (iSCSI)
- Disaster recovery, backup, and archive
- DevOps
- Databases (SQL, Oracle, NoSQL)
Cloud Volumes ONTAP is offered in a standard single-node configuration or in a High Availability (HA) configuration.
DataCore™ SANsymphony™ enterprise-class Software-defined Storage (SDS) platform provides a high-performance, highly available and agile storage infrastructure with the lowest Total Cost of Ownership (TCO).
SwiftStack enables you to do more with storage: store more data, enable more applications, and serve more users. We do this by delivering a proven object storage solution that's built on an open-source core and is fully enterprise-ready. Our object storage software is an alternative to complex, expensive, on-premises hardware-based storage solutions. SwiftStack delivers the features and flexibility you need to easily manage and scale object storage behind your firewall. Customers are demanding storage where they can pay as they grow, that is easier to consume, and that can scale infinitely. Today, our customers use SwiftStack for archiving active data, serving web content, building private clouds, sharing documents, and storing backups.
Take Control with Unified Cloud Storage
Sign up for a 30-day trial to see how Cloud Volumes ONTAP can help you optimize cloud storage costs and performance, while enhancing enterprise-grade data protection, security, and compliance - wherever your data lives.
Learn more about DataCore SANsymphony SDS
Learn more about SwiftStack
NetApp Cloud Volumes ONTAP: Rohit, AdvancedMD, D2L, Trinity Mirror, Eidos Media, WireStorm, Cordant Group, JFK Medical Center, ALD Automotive, Healthix, City of Baton Rouge, ON Semiconductor
DataCore SANsymphony: Volkswagen, Maimonides Medical Center, The Biodesign Institute, ISCO Industries, Pee Dee Electric Cooperative, United Financial Credit Union, Derby Supply Chain Solutions, Mission Community Hospital, Bellarmine College Preparatory, Colby-Sawyer College, Mount Sinai Health System, The Royal Institute of International Affairs, Quorn Foods, Bitburger, University of Birmingham, Stadtverwaltung Heidelberg, NetEnt, to name a few
SwiftStack: Pac-12 Networks, Georgia Institute of Technology, Budd Van Lines