Right now, it's a backup storage system, so it's used as a backup repository. We point our Commvault backups to it and soon to be Veeam backups.
It has helped us with the ability to distribute data to different data centers. As part of our DR strategy, we have nodes automatically replicating data from one data center to the other. This makes it easier for us to not have to shift tapes around or do anything else like that.
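Under the hood, this kind of cross-data-center replication can be expressed with plain OpenStack Swift container sync. SwiftStack's controller manages replication for you, so this is only an illustrative sketch of the underlying Swift mechanism, and every URL, realm, account, and key below is a hypothetical placeholder:

```python
# Sketch: OpenStack Swift container sync is enabled by POSTing the
# X-Container-Sync-To / X-Container-Sync-Key headers to the source container.
# We only build the request here; nothing is sent over the network.
import urllib.request

def make_container_sync_request(source_url, sync_to, sync_key, auth_token):
    """Build (but do not send) the POST that enables container sync."""
    req = urllib.request.Request(source_url, method="POST")
    req.add_header("X-Auth-Token", auth_token)
    # Destination is expressed in //realm/cluster/account/container form
    req.add_header("X-Container-Sync-To", sync_to)
    req.add_header("X-Container-Sync-Key", sync_key)
    return req

req = make_container_sync_request(
    "https://swift-dc1.example.org/v1/AUTH_backup/commvault-backups",
    "//backuprealm/dc2/AUTH_backup/commvault-backups",
    "shared-sync-secret",
    "AUTH_tk_hypothetical",
)
```

With SwiftStack, the controller sets up the equivalent policy through its GUI, which is exactly why a dedicated Swift expert isn't needed day to day.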
We have seen a tremendous increase in the amount of data being produced, but at the same time, we haven't needed to increase our head count to manage this backup storage. We have seen eight times more data without having to add staff to manage the solution.
In theory, we could stop using SwiftStack tomorrow and still have the solution up and running. It would just be up to us to manage and maintain it. However, their solution makes it a lot easier to manage the OpenStack Swift environment, so we don't have to have a dedicated resource who knows the ins and outs and nuances of OpenStack Swift. SwiftStack makes it easier to use this solution.
On the controller features, the user interface needs a bit more cleanup. There are a lot of options in the GUI that could be better organized or compartmentalized. There are times when you are going through the interface and have to hunt around for where a setting may be. A little more attention to the organization of the user interface would be helpful.
They should provide a more concise hardware calculator when you're putting your capacity together.
The stability has been great.
We have seen good performance. Right now, we are somewhat hamstrung by the data center network that we're on at the moment. We have been happy with SwiftStack's performance, but there's probably more upside that we're not seeing yet.
As long as the stability is there on the platform, it allows us to pit hardware vendors against each other because the hardware doesn't all have to be homogeneous. E.g., maybe Dell has a great system this year, so we buy a Dell EMC system, but then next year we find out that Supermicro has the best bang for the buck. Now, we have Dell EMC and Supermicro in the same cluster providing resources. That it doesn't have to be a single-brand, single-vendor system is a tremendous benefit.
The scalability has been fantastic.
We are not quite at the petabyte range yet; we're just barely under one petabyte. Because it's primarily a backup target, ingest has not been an issue thus far. We haven't had any significant restores that needed to be done, but the general consensus is that the restores coming back from it have been faster than they were from our prior vendor. Ingest speeds are fine, and restore speeds have improved.
It is backing up enterprise systems in our data center. Those enterprise systems are being used by close to 600 staff, both administrative and scientific. That staff doesn't directly interact with SwiftStack, but the data that they store is on our primary storage system and gets backed up to it. Indirectly, they're being covered by the capacity on the SwiftStack system.
We are kicking the tires on the SwiftStack 1space feature right now. We're trying to determine the namespace that we'd want to use. Because we're in a transitional period, we're looking at increasing our capacity. We're going to bring on two new SwiftStack nodes with additional capacity. Part of that will be 1space, with the ability to move out to Azure and AWS, which right now are our two primary cloud providers. However, GCP will also likely be involved. Therefore, we're in the process of formulating a long-term plan that won't require us to re-architect after we get up and running.
Our use case for the 1space feature is cold archives and the ability to move some of the data out of our on-premise data centers because of federal mandates for data center consolidation initiatives. It puts more emphasis on shrinking space in the data center, which means denser, greater storage capacity on-premise, but then some of that data needs to move out to the cloud.
The possibility is also there for doing some bursting to the cloud for compute. This is a future desire that we would like to look into, though it is not on our official roadmap. The transition to public cloud hasn't even been tied in yet.
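As a rough illustration of the kind of cold-archive flow we have in mind for 1space: this is not the actual 1space API, just a sketch of an age-based tiering rule, and the 90-day threshold is a hypothetical number chosen for the example.

```python
# Sketch of an age-based tiering decision: cold backup objects get routed to
# a cloud archive tier, fresh ones stay on the dense on-premise nodes.
from datetime import datetime, timedelta, timezone

COLD_AGE = timedelta(days=90)  # hypothetical cut-off for "cold" backup data

def tier_for(last_modified, now):
    """Return the target tier for an object based on its age."""
    return "cloud-archive" if now - last_modified > COLD_AGE else "on-prem"

now = datetime(2020, 6, 1, tzinfo=timezone.utc)
old_backup = datetime(2020, 1, 1, tzinfo=timezone.utc)     # ~5 months old
fresh_backup = datetime(2020, 5, 20, tzinfo=timezone.utc)  # ~12 days old
```

A rule like this is what lets us shrink on-premise footprint for the data center consolidation mandates while keeping recent backups close by.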
Going forward, we will be adding two nodes. We are still waiting on the hard drives. Once they are added, we will be expanding our capacity by 70 percent.
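For a sense of scale, with usable capacity just barely under a petabyte today, a 70 percent expansion works out roughly as follows. The 1,000 TB starting figure is an approximation for illustration, not an exact number from our cluster:

```python
# Back-of-the-envelope capacity math for the planned two-node expansion.
current_tb = 1000                 # ~1 PB today, expressed in TB (approximate)
expansion = 0.70                  # the two new nodes add about 70 percent
after_expansion_tb = current_tb * (1 + expansion)
```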
Their tech support has been great. They are responsive, answer questions, are very knowledgeable, and have been willing to jump on calls with the backup software vendor to assist in troubleshooting any issue that we've run into with our backup software (Commvault). Their support staff has jumped on and lent their expertise to the Commvault staff, explaining or helping to troubleshoot why Commvault was having issues.
We did use a different solution. We switched because of cost and scalability. The initial purchase price and ongoing support and maintenance costs, coupled with the difficulty of scaling, made it almost a no-brainer for us to move off of it.
The initial setup was pretty straightforward. Their support staff was there to walk us through the setup and explain things along the way. When we did run across anything that wasn't clear, they were right there to clear it up. From that respect, the setup was made significantly easier because we were able to work with their support staff. They were able to remote in and assist with any questions or configurations that we might have otherwise had trouble with.
We had two data centers and needed the data to replicate between the two. Therefore, we purchased two storage nodes, one for each data center, with a satellite controller at the secondary data center to control both of them. Data essentially replicated from our primary data center to the secondary data center.
Our setup time was really extended by our own availability. It didn't take more than two weeks, but it was an hour and a half here and an hour there. The initial setup itself was done in a day of overall time. In under 48 hours, we were able to have it up and running and fairly optimized.
It doesn't even require a full-time staff to maintain it. It requires someone keeping tabs on it, but we don't need a full-time storage admin to keep the system up and running. After the initial deployment, it runs. We just keep an eye on it. Since it's essentially redundant, we've got the two nodes, and they're replicating. It doesn't require a ton of staff to stand up and maintain.
We used a reseller, Alliance Technology. They did an initial install and configuration of the system drives on the storage nodes.
There were some issues with their setup: whoever configured it didn't set the system volume up as a RAID 1 mirror for high availability. However, their pricing and recommendation of the hardware were very good.
We were happy with them, with the exception of the misconfiguration. We did run into an issue with the initial quote. For our initial implementation, they had only one storage node and we needed two. So, we had to go back and address that. Then, it also didn't include the optical transceivers that we needed for the 10G connections, but we were able to work with the reseller and get those shipped out. These were minor inconveniences in the procurement process. That was the initial delivery. Alliance did not assist with the deployment from then on.
We worked directly with SwiftStack and their support team once the equipment was in-house, and we went into production with it.
The annual support and maintenance costs, compared to our old backup solution, dropped by about two-thirds, so roughly a 60 percent annual savings on our support and maintenance contract. That savings funded additional expansion out of what it had been costing us for the support and maintenance contracts on the old solution: we bought an additional node. It also saved us from having to do forklift, rip-and-replace upgrades. Rather than doing those, we could purchase an additional node and add higher-capacity drives, which is something that other backup vendors weren't certifying.
This gives us annual savings on the support and maintenance contract, but also savings on our time and efforts. We don't have to do a disruptive rip and replace, moving everything over. We can add a node, migrate the capacity, and elegantly decommission a node if we choose to do so. If it's no longer supported or if there's hardware issues, we can migrate the data and do it in a non-disruptive fashion.
We have had a 40 to 50 percent reduction in CAPEX on the acquisition of new hardware, which is probably conservative.
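To make those percentages concrete, here is the arithmetic with a purely hypothetical dollar figure, since only the percentages come from our actual contracts:

```python
# Illustrative savings math; the old support cost below is made up.
old_support_cost = 90_000.0    # hypothetical annual support/maintenance spend
support_savings_rate = 0.60    # ~60 percent annual savings vs. the old solution
annual_savings = old_support_cost * support_savings_rate  # funds extra nodes
capex_reduction = (0.40, 0.50)  # 40 to 50 percent CAPEX reduction, conservative
```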
We didn't repurpose existing servers. While it was an option on the table, we had the money within our budget to buy new hardware because of the 60 percent or more savings from no longer having to pay for support and maintenance on our old systems.
We bought new hardware which provided better capacity and density. Then, this year we did the same thing because we've had several years of savings, so we bought new hardware. This is on the storage nodes.
We did actually repurpose the controller. The controller didn't need to be physical; it could have been virtual. At the time, our virtual infrastructure was a little overextended, and we had a physical system that was capable of hosting the controller side of things. So, we did repurpose hardware for the controller. However, for the storage systems, because capacity and density were such an important part of what we were doing going forward, we bought chassis geared more toward getting as much capacity into as small an amount of space as possible.
There is no vendor lock-in. The ongoing support and maintenance costs are not through the roof like some vendors, especially those that specialize in backup appliances. So, they're very economical in regards to their ongoing support and maintenance.
The pricing and licensing are capacity-based, so it's hard to put my finger on them, because so many different vendors charge in different ways. We are still saving significantly over any of the other options that we evaluated because we can choose the best hardware at the best price, then put SwiftStack software on it. So, it's hard to complain, even though a part of me goes, "It would be nicer if it were less expensive."
We sort of reevaluated the use of our Spectra Logic tape library, as well as the Data Domain that we had been using. We also looked at a couple of other object storage solutions at the time: Cleversafe and DDN.
At the time, these solutions were more expensive and seemed to follow a similar pattern of providing appliances that would then lock you in. You were beholden to the manufacturer to deliver an upgraded hardware box, and you still had to buy their certified drives. E.g., even though you were going from a dedupe appliance to an object storage system, you were almost going from a dedupe appliance to an object store appliance, which had its own vendor lock-in.
We wanted the capability to add capacity and drives without having to pay a markup for them, and to be able to look at the best bang for the buck across all the vendors. We knew that we were going to lose that flexibility if we went with DDN or Cleversafe, because they were selling the entire box, whereas SwiftStack is agnostic. They don't care what the box is underneath; as long as it's an x86-compatible system, you can mix and match. That hardware-agnostic approach was one of the deciding factors. At the end of the day, if we wanted to, we could basically take SwiftStack out of the picture, and we would still continue to function. They just facilitate making the management of the system easier. You could do it without them, but it would be a lot more difficult and require a lot more time, effort, and energy, or someone who knows the technology.
We chose SwiftStack over NetApp or Dell EMC because it is hardware agnostic and the initial capital expenditure was significantly less. Also, the ongoing support and maintenance were significantly less. The flexibility and the ability to scale faster because we weren't tied to any one particular vendor's certification of specific hardware or specific hard drives was another driver.
Know your use cases and how you plan on utilizing it. As part of that use case, understand the flow of your data and how you want it to look. If you're going to send data out to the cloud, understanding that is an important part of an evaluation. Some of the competition out there sends you to their cloud, trying to commoditize you or potentially lock you in.
The biggest thing about SwiftStack is freedom: freedom from vendor lock-in, freedom from any one cloud provider, and freedom to scale how you want and when you want. Look at how easy or painful it is to perform upgrades. How long do you have to wait for the manufacturer or the vendor to provide a new chassis or certify new hard drives? Depending on how big a pain point that is within your organization, and depending on what your budgets look like, those are all things to take into consideration when you're looking at the SwiftStack solution.
You need to have somebody who understands Linux, and that's not uncommon in the data center. However, if you're a Windows-only shop and only have Windows admins, then that's something to take into consideration. But if you have a Linux admin, even a junior admin, you can deploy this solution with the help of their support team and be perfectly happy. Look at what you have and how easy or painful the upgrade process is, the initial purchase price, the ongoing support and maintenance price, and the innovation: how quickly can you bring the latest and greatest into your solution?
If you start hitting pain points on any of those, SwiftStack gives you the capability to get past some of those obstacles, because you're not tied down waiting for the vendor to innovate or deliver certification on hardware that's been out for six months. It provides us a lot of freedom.
SwiftStack has their finger on the pulse of the storage industry. They are doing a good job of understanding that there's a significant portion of people who don't want vendor lock-in. They look at what is in the best interest of the customer.