What is our primary use case?
We primarily use the solution to protect data in the cloud and data in the data center.
If something such as ransomware comes along and corrupts our production data, I roll the volumes back to the last snapshot. More commonly, somebody deletes or corrupts a file inadvertently. In some cases we can roll back to the last snapshot; however, that usually isn't a viable option because other data in the volume would be lost. That said, the system gives me the ability to mount a snapshot, go get the data they were looking for, and move it back to where they need it.
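For a single-file restore, that workflow amounts to pulling the file out of a read-only snapshot rather than reverting the whole volume. A minimal sketch, assuming an NFS-mounted volume that exposes its snapshots under a hidden `.snapshot` directory (all paths and snapshot names here are hypothetical):

```shell
# Simulate a mounted volume with one hourly snapshot (hypothetical layout).
VOL=/tmp/demo_vol
mkdir -p "$VOL/projects" "$VOL/.snapshot/hourly.0/projects"
echo "quarterly figures" > "$VOL/.snapshot/hourly.0/projects/report.txt"

# The live copy of the file has been deleted by a user...
rm -f "$VOL/projects/report.txt"

# ...so copy it back out of the snapshot instead of rolling the
# entire volume back and losing everyone else's newer data.
cp "$VOL/.snapshot/hourly.0/projects/report.txt" "$VOL/projects/report.txt"

cat "$VOL/projects/report.txt"   # -> quarterly figures
```

The point of the sketch is the trade-off described above: copying out of a snapshot touches one file, while a volume-level rollback would discard everything written since the snapshot was taken.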
What is most valuable?
I really believe the NetApp product is awesome. There may be others in niche spaces that can fill a particular use case better than NetApp (Pure could be an example of this), but in our environment, NetApp is currently our go-to product line. At some point in time, I may find something that provides an even less expensive alternative, but for the moment, NetApp is my vendor of choice. There will be specific use cases that bring other things into the data center, so I'm not a purist; however, we've had phenomenal success with NetApp and their support. It's been a great relationship for the entire duration. They have evolved well technologically, and they've done a great job of getting past the idea of being a vendor for spinning disks. They've really repositioned themselves as a management system for your data, regardless of where it resides. I can't speak highly enough of them.
Snapshot, SnapMirror, and SnapVault have worked really well for us over the years. The next piece of that puzzle that we will be adding is data tiering, particularly as we start to move some of the data that I currently house on SATA disk (e.g., departmental shares, user shares, etc.). A lot of that data is accessed frequently, and a lot of it is not.
NetApp's FabricPool technology will allow me to basically set up a series of rules and then tell it, "Okay, go do it." The minute a block becomes hot, it brings it back into my data center. The minute the block cools off, it goes out into warm storage. If it cools down even further, it goes to cheaper and deeper storage, and the system is capable of sitting there and moving blocks in and out as it needs to. There's a lot of promise there, because the cloud is never cheaper than on-prem until you can take advantage of some of that cheap and deep storage.
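In ONTAP terms, those rules boil down to attaching a cloud bucket to an aggregate and picking a per-volume tiering policy. The following is only an illustrative sketch: the aggregate, bucket, vserver, and volume names are all hypothetical, and the exact options available vary by ONTAP version.

```
# Attach a cloud object store (bucket) to an all-flash aggregate.
storage aggregate object-store attach -aggregate aggr1 -object-store-name demo_bucket

# "auto" tiers any block that goes cold; the cooling-days setting controls
# how long a block must sit untouched before it counts as cold.
volume modify -vserver svm1 -volume dept_shares -tiering-policy auto -tiering-minimum-cooling-days 31
```

Once set, the movement in both directions is automatic, which is the "set up the rules and tell it to go" behavior described above.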
The integration with the cloud is seamless. They have a singular management interface that makes it so you don't really have to know or care where the data resides.
The greatest value in the snapshot technology lies in the fact that we can mirror these snapshots to a remote site. In fact, one of the features I have been looking forward to (it has been around for a while now, but it's still above the version I'm running) is a continuous data protection scheme with near real-time mirroring. A lot of times, my snapshot schedule might be every hour. By definition, if I snap it and mirror it every hour, I could lose 59 minutes and 59 seconds' worth of data. In most cases, that is acceptable for our business. With the addition of synchronous mirroring, we can further protect more critical data.
Because of SnapMirror and SnapVault, I can keep (for example) two weeks' worth of data on my primary storage, yet keep a year's worth of weekly backups on the remote array. If somebody says, "Gosh, you know, we had this file. I don't know exactly when we deleted it, but the last time we know we had it was March," then I have those weekly snapshots and can go and try to recover that data for them. It's not as slick as it could be. Most traditional backup solutions will allow me to just type in the file name, and they will tell me where the data is. With the NetApp snapshot approach, the search is very manual, but it is doable, and it does give us a longer-term retention strategy. The snapshots are immutable, so if I end up getting hit by ransomware or something like that, we have the facility to roll back.
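That split retention (a short history on primary, a long history on the vault destination) is typically expressed as label-based rules on the vault relationship's policy. A hypothetical sketch in the ONTAP CLI, with the vserver and policy names invented and the keep counts matching the two-weeks-local, year-of-weeklies scheme described above:

```
# Vault policy on the destination: retain 14 dailies and 52 weeklies.
snapmirror policy create -vserver svm_dr -policy long_term_vault -type vault
snapmirror policy add-rule -vserver svm_dr -policy long_term_vault -snapmirror-label daily -keep 14
snapmirror policy add-rule -vserver svm_dr -policy long_term_vault -snapmirror-label weekly -keep 52
```

Snapshots on the source carry those labels, and the vault transfers and ages them out independently of the primary's much shorter local schedule.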
From a functional standpoint, it's been pretty much bulletproof. I have never gone to a snapshot and not been able to do what I needed to do.
It's extremely user-friendly; it's a set-it-and-forget-it kind of setup.
What needs improvement?
It would be ideal if snapshots were searchable. Right now, it's a very manual activity. If I have to go looking for a file in a stack of snapshots, it's mount one and look, mount another and look, and so on. That's one of the things that traditional backup products bring to the table that, to my knowledge, NetApp does not. I'm not sure whether or not they ever will. They are very tightly partnered with backup vendors like Rubrik, so they may leave searchability as a third-party option; I can't say with certainty. I don't necessarily have all the software that NetApp makes available, so for us, some tasks, like retrieving files, are very manual. I'm more focused in the coming year on adding better management tools and cloud than on worrying about occasionally having to go get a file by hand.
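In the absence of an index, the search really is brute force: walk every snapshot and look for the file by name. A self-contained sketch of that loop (the `.snapshot` layout, snapshot names, and file name are hypothetical stand-ins for a real mount):

```shell
# Simulate two weekly snapshots of a volume (hypothetical layout).
VOL=/tmp/demo_search
mkdir -p "$VOL/.snapshot/weekly.0/dept" "$VOL/.snapshot/weekly.1/dept"
touch "$VOL/.snapshot/weekly.1/dept/budget.xlsx"

# Walk every snapshot and report which ones still hold the file.
for snap in "$VOL"/.snapshot/*/; do
    find "$snap" -name 'budget.xlsx' -print
done
# Only the snapshot that actually contains the file prints a path.
```

It works, but every snapshot gets scanned in full, which is exactly the "mount one and look, mount another and look" cost described above; a backup product with a catalog answers the same question from its index.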
The UI is probably their biggest weakness. There are always glitches in the HTML UI, but those are minor annoyances. They're not functional problems.
For how long have I used the solution?
I've worked with the solution for 15 years. We were an EMC shop for quite a while, but we moved to NetApp, and we have never looked back.
What do I think about the stability of the solution?
The solution is quite stable. It's pretty much bulletproof. As with all vendors, there are periodic software updates, bug fixes, and security updates, but I am not aware of any direct connection between the updates and the snapshots per se.
What do I think about the scalability of the solution?
The scalability is very good. If I want to add more storage, I just add more storage. It happens all the time.
There are some limits to the size of the aggregates; however, that's never been an issue for us. You're into large numbers of terabytes before you hit them, and the limit is really a function of the size of the disks in the aggregate.
All of our users store data on the NetApp, but that is completely transparent to them. The only person who uses the software interfaces is me; I'm the only one who administers the product.
How are customer service and technical support?
I've never had to call technical support for an issue regarding snapshots. It's a very stable technology so there are very few issues.
Which solution did I use previously and why did I switch?
Previously, we were using EMC. That was really before I had any involvement with our storage apparatus. I was part of the same team, but I was not 'the storage guy', so I really can't speak to the motivation for our changing storage providers.
When NetApp came into our environment, my immediate supervisor said, "Hey, I need to start backing off from some of the tactical work. Would you look at taking this over?" After that, bit by bit, starting in 2007, I learned more and more about NetApp, and eventually, when he moved on, I took over his job, so I inherited the solution.
How was the initial setup?
The initial setup is fundamental to the product, so architecting the correct solution is the primary effort during implementation. You can mirror at the volume level or mirror an entire storage virtual machine. With MetroCluster, there are even more alternatives, but we are not currently using that technology. The point is that there are different levels at which you can mirror and snapshot; it's an integral part of the product. That has more to do with why we bought NetApp than just its management of local disks.
What about the implementation team?
We have an implementation partner that assists us with engineering the solution.
What other advice do I have?
We're just customers. We don't have a business relationship with the company.
A lot of our data protection strategy is still centered around NetApp.
Over the next three years, we will be migrating to a more cloud-enabled strategy that will still be centered around NetApp technology. We looked at all on-prem, cloud as much as possible, and a couple of points in between, but the problem with migrating from on-prem to cloud is that we would have had to lift and shift a serious amount of data from the data center to the cloud.
If you account for ingress fees and all those sorts of things, that's just part of the cost of doing that kind of business, but data availability would have been grossly impacted, and we don't have a large enough downtime window anywhere in our scheduling to do that effectively. What we elected to do was go all on-prem for one more round.
Then we can figure out how to break the data transition to the cloud into smaller chunks. I bought five years of support for all my SSD-related hardware. I only bought three years for all my spinning disks. The plan is in the next three years to eliminate the need for spinning disks, but this buys me three years to move stuff to the cloud in a piecemeal fashion rather than trying to do it in a 'big bang'.
I'm a big fan of NetApp. I'm not saying that they're the only storage vendor I would ever do business with. The days of the data center having one of anything are kind of passing us by. In the modern data center, we're going to end up with tiered everything. You'll have multiple public clouds. You'll have a private cloud. You'll have multiple providers for storage, multiple providers for compute. And essentially what we all end up with eventually is a data center where if somebody wants to spin up a server, they pick items à la carte off a menu with a price at the bottom of the screen and say, "Okay, I can live with that." The challenge will be to provide that level of service without incurring tremendous administrative overhead.
The Snapshot technology rides along with the management interface on the controllers. I'm using ONTAP 9.3, and the latest is 9.6. When we bring the new hardware in January, we will immediately follow it with an upgrade project. There are some new features they've enabled that we can take advantage of. I'm not currently in a position to talk about what all of those are. I've done some reading and pretty much said, "Huh, that'll be cool one day," and then put it out of my mind. We have implementation partners who will help with "Here's what makes sense for you." I'm looking forward to getting there; however, we're a couple of versions behind.
They're pretty good at knowing what their marketplace is looking for. They are probably the most technologically proficient in the storage arena. There are other niche players that do one thing very well, and they might do it better than NetApp; however, when you look at storage as a whole, NetApp really stands out. It is the center of my IT universe; everything else hangs off of it. I've got hosts that boot to it. I've got most of our VMs in NetApp volumes. If it's not in HCI, it's on the NetApp, and that's probably 85% of our storage. It's significant. Between data and backup, I've got about a petabyte and a half.
I'd rate it a nine out of ten because of the searchability issue, and, as I've said, NetApp may have a solution for that in a software package that I don't own. In the course of doing my job, I never have to sit there and worry about whether the storage technology is working right. It just does what it needs to do and gives me the ability to focus on other things.
Which deployment model are you using for this solution?