What is our primary use case?
We are currently using NetApp Cloud Backup at our Bangalore data center. Regulatory compliance requires a long retention period, so our periodic backups have to follow those regulatory processes.
We have a lot of trade-related data that falls under these compliance requirements, meaning we have to back up that data as well as the containers. So, for example, if an image has been installed, then under the SECURE Act we are required to secure the container images. And securing, in this sense, includes taking backups.
Typically, we're able to see the target, the dashboard, and the different containers that are running. And then we define the backup strategy for that. Along with NetApp Cloud Backup, we make major use of other products like Veeam and Nutanix with our hyper-converged solutions.
So it starts with the backup strategy, and then there are the backup policies based on that strategy. Those policies are enforced on the dashboard for the various hosts, and we can watch the dashboard update in real time.
Basically, our usage depends on the number of licenses that we have. On the cloud, it is a pay-per-use kind of model. Currently it is pretty well-aligned with our needs, so once the backup manager is installed and it's up and running, whether in the cloud or on-premises, we're okay, unless the licensing agreement changes or something impacts the number of users. Sometimes we have to cut down on licensing and restrict the number of users as per the contractual agreement.
As we are a customer of NetApp, procurement has been channeled through the vendor onboarding team, which has representation from the APAC region up to the central vendor-onboarding function. A system integrator will typically be involved if we need to procure certain specific services, particularly when it comes to professional support. If support is provided by a NetApp partner, we consider that as well. There are a few available in India, and it depends on whether we go directly to the OEM or through an OEM partner.
What is most valuable?
One feature that works well for us is that the Cloud Manager is a completely agentless solution. There's a similar dashboard in both the on-premises and cloud versions, and the Cloud Manager is a little faster because there's nothing to be installed as such. Being agentless, it doesn't require any agent to be deployed on the targets where the backups are triggered.
What needs improvement?
One area that can be improved is how we define the different KPIs, in particular the business KPIs. I have my own in-house application for the business KPIs. For example, our retention policy is a period of seven years, so I have to read parameters like that from other applications, and I need them to integrate well.
NetApp Cloud Backup Manager should help get this integrated seamlessly with other applications, meaning that it will populate the data for the different parameters. These parameters could be things like the retention period, the backup schedule, or anything else. It might be an ITSM ticket, where a workflow is triggered somewhere and the ticket has been created for a particular environment, such as my development environment, an INT environment, or a UAT environment. This kind of process needs to integrate well with my own application, and there are some challenges. For example, if it allows for consuming RESTful APIs, that's how we will usually integrate, but there are certain challenges when it comes to integrating with our own application around KPIs, whether business KPIs or technical KPIs.
What I want is to populate that data from my own applications. So we have the headroom in the KPI, and we have the throughput, the volumes, the transactions per second, etc., which are all defined. These are global parameters, and they affect all the lines of business. It's a central application that is consumed by most of the lines of business, and it's all around the KPIs.
Earlier, this used to be based on Quest Foglight, an application that was taken up and customized. It was made in-house as a core service and used as a core building block. But our use of Quest Foglight has become outdated. There is no support available anymore, and it has been there as a kind of legacy application for more than ten years now in the organization. Now it comes down to the question: Is this an investment, or will we need to divest ourselves of it? So there has to be an option to remediate it. One possibility is to integrate the existing application and then completely decommission it. Here it would help if there were better ways of defining or handling the KPIs in the Cloud Manager, so that most of the parameters are not defined directly by me but instead come from the global parameters defined across all the lines of business.
There are some integration challenges when it comes to this, and I've spoken to the support team, who say they have the REST APIs, but the integration still isn't going as smoothly as it could. Most of the time, when things aren't working out, we need dedicated engineers to be put in for the entire integration, and then it becomes more of a challenge on top of everything. So if the Cloud Manager isn't being fed all the parameters from the backup strategy, whether around ITSM and incident tickets, backup schedules, or anything related to the backup policies, then it takes a while. Ideally, I would want it to be read directly from our in-house applications.
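To illustrate the kind of integration I'm describing, here is a minimal Python sketch of reading global KPI parameters from a central application and merging them into a backup-policy payload for a REST call. Everything here is hypothetical: the GLOBAL_KPIS values, the field names, and any endpoint you would POST the payload to are assumptions for illustration, not the actual Cloud Manager API.

```python
import json

# Hypothetical global KPI parameters, as mandated centrally by the risk
# management team and consumed by all lines of business.
GLOBAL_KPIS = {
    "retention_years": 7,
    "backup_schedule": "daily",
    "headroom_percent": 20,
}

def build_policy_payload(environment, overrides=None):
    """Merge the global KPI parameters into a backup-policy payload.

    The resulting dict is what we would POST to a (hypothetical)
    Cloud Manager REST endpoint, so that retention, schedule, and
    headroom are populated from our in-house application rather
    than defined by hand in the dashboard.
    """
    payload = dict(GLOBAL_KPIS)
    if overrides:
        # A line of business may tighten a parameter, e.g. a longer
        # retention period mandated for a specific environment.
        payload.update(overrides)
    payload["environment"] = environment
    return payload

if __name__ == "__main__":
    # Example: UAT environment where retention has been raised to 10 years.
    print(json.dumps(build_policy_payload("UAT", {"retention_years": 10}), indent=2))
```

The point of the sketch is only the direction of data flow: the global parameters are the source of truth, and the backup tool should consume them, not the other way around.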
And this is more to do with our processes; that is, it's not our own choice to make. The risk management team has mandated, as part of compliance, that we strictly enforce the KPIs, the headroom, and the rest of the global parameters defined for the different lines of business. So if my retention period changes from seven years to, let's say, 10 or 15 years, then those rules have to be strictly enforced.
Ultimately, we would like better support for ITSM. ITSM tools like ServiceNow and BMC Remedy are already adding multiple new features, so they get upgraded over time, and NetApp has to provision for that and factor it in. Some AI-based capabilities are there now, and those have to be incorporated somehow.
One last thing is that NetApp could provide better flash storage. Since they're already on block storage and are doing well in that segment, it makes sense that they will have to step up when it comes to flash array storage and so on. I have been evaluating NetApp's flash array storage solutions versus some others like Toshiba's flash array and Fujitsu's storage array, which are quite cost-effective.
For how long have I used the solution?
I have been using NetApp Cloud Backup for almost three years.
What do I think about the scalability of the solution?
Regarding scalability, we can just keep increasing the nodes, so scaling is fast and we can keep scaling up to the required performance. That's fine in the cloud. If I'm doing it on-premises, then it becomes difficult. With NetApp as a cloud service provider that takes care of all the computing requirements, it's sufficiently scalable.
How was the initial setup?
They have the support teams, so if you have any challenges around integration, upgrades, or troubleshooting on a day-to-day basis, they can help.
We need visibility into the entire product roadmap, so if the versions change, we need to know which dependencies are required and how things will be phased out. I will not stick to just one or two roll-outs; instead, we're looking at at least 10 to 15. Some may take the attitude of "as long as I'm getting my work done," but that kind of attitude is not tolerated.
As managers, we are hard-pressed to get complete product roadmaps from vendors. NetApp helps provide us with visibility and support for all the roll-outs and upgrades they may have from time to time, whether every year, every two years, or whatever the cadence is.
There are sometimes challenges that result in dead-ends when we're sorting out the dependencies. For example, it can be difficult to tell exactly which dependencies are required. But this is something we work out while liaising with NetApp, in addition to getting information on things like end-of-support timelines. All of these aspects are part of our infrastructure and compliance best practices from a risk management perspective.
What's my experience with pricing, setup cost, and licensing?
Our usage depends on the number of licenses we have. On the cloud, it's a pay-per-use kind of model that suits our needs well. Once we have the Cloud Manager installed, the licensing process is fine, regardless of whether we're running backups in the cloud or on-premises. Sometimes we have to restrict the number of users as per the contractual agreement, and in that case we simply cut down on the licensing.
When considering costs, NetApp works out better for us than the backup solution from Veeam. Although Veeam has a good footprint in India and the APAC region, NetApp has a bigger global footprint, making it more cost-effective for us to go with NetApp.
Which other solutions did I evaluate?
I have also evaluated the cloud backup solution by Veeam.
What other advice do I have?
I would rate NetApp Cloud Backup a seven out of ten.
Which deployment model are you using for this solution?