What is our primary use case?
One use case is user files, where customers place their unstructured data and then access it remotely. A second case is VDI. All VDI users have their home drives hosted on NetApp. In addition, we use NetApp for general-purpose workloads, such as Unix applications, database archives, and big data, where they need a lot of reads and fewer writes. That data comes into NAS. In our firm, we use it for tier-three and tier-four applications, which need response times of less than 20 milliseconds. Those types of applications are deployed on NetApp.
How has it helped my organization?
In terms of VDI, pretty much every employee of our firm is a customer of our NAS infrastructure. Everybody's home drive is on NAS, so it's highly critical. Even a minimal outage would expose the firm to a lot of potential business risk. NetApp has come up with performance management features to improve performance, and it has all-flash and hybrid aggregates to improve performance through caching. It's really excellent.
As we scale, as we add more data into our data pool, we really need faster disk drives and quicker response times for our customers, to make sure they get their data whenever they need it.
What is most valuable?
I love the replication technologies, which keep customers out of risk. At any time, we can do a seamless failover/failback and have the latest data on it.
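As an illustration, a SnapMirror failover and reverse resync on the ONTAP command line typically looks something like the following. This is a hedged sketch: the SVM and volume names are hypothetical, and exact options vary by ONTAP release.

```
# On the DR cluster: stop replication and activate the destination volume
snapmirror quiesce -destination-path dr_svm:vol_users
snapmirror break   -destination-path dr_svm:vol_users

# Failback: reverse-resync the latest data from DR back to the original source
snapmirror resync -source-path dr_svm:vol_users -destination-path prod_svm:vol_users
```

Because SnapMirror resync transfers only the changed blocks since the common Snapshot copy, the failback does not require a full re-baseline.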
SnapVault is another excellent feature. It's used for remote, disk-based backups, so we don't need to depend on tape backups with their long restore times.
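For example, restoring from a SnapVault secondary is a single CLI operation rather than a tape recall. The paths and Snapshot name below are hypothetical, shown only to sketch the shape of the command:

```
# Restore a volume from the disk-based SnapVault secondary,
# picking a specific archived Snapshot copy
snapmirror restore -source-path vault_svm:vol_users_vault \
                   -destination-path prod_svm:vol_users \
                   -source-snapshot daily.2018-10-01_0010
```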
What needs improvement?
SnapLock is the feature we would like to see enhanced. As a bank, we store data for compliance for a long time: ten or 15 years. The data would be locked during that period. So they should enhance the SnapLock features.
At the same time, customers want seamless failover and failback for SnapLock. As a bank, we need to validate data availability, so every quarter we fail over and fail back. Today, with SnapLock we can fail over but we can't fail back. We'd like to be able to do both.
What do I think about the stability of the solution?
On average, the data that lives on the ONTAP hardware is there for four to six years, and then the hardware reaches its end-of-support life. When it gets there, it tends to have a greater number of hardware breaks and failures. From a data perspective, that's a big risk for us.
As part of a tech refresh, we plan the data movement. One year before the hardware reaches end-of-support life, we predominantly migrate the data onto clustered Data ONTAP (CDOT) or whatever latest all-flash technology NetApp provides us.
What do I think about the scalability of the solution?
In CDOT, theoretically, you can have 24 nodes in a cluster, but we are careful about that. Right now, we have ten-node clusters. We feel CDOT provides scalability in terms of the virtual world. You can keep adding nodes, you can keep adding disk shelves, and you can scale your volumes. And then you can virtually move your failover capabilities from node A to node B, whichever node you want. When you want to do maintenance, you can just virtually move your LIFs (logical interfaces) to other nodes and then safely fail over. That's great, amazing.
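The maintenance workflow described above can be sketched with ONTAP CLI commands along these lines. The node, SVM, and LIF names are hypothetical; treat this as an illustrative outline, not an exact runbook.

```
# Move data LIFs off the node that needs maintenance
network interface migrate -vserver svm1 -lif data_lif1 -destination-node cluster1-02

# Fail the node over to its HA partner, perform the maintenance, then give back
storage failover takeover -ofnode cluster1-01
storage failover giveback -ofnode cluster1-01

# Return the LIF to its home port once the node is healthy
network interface revert -vserver svm1 -lif data_lif1
```

Because clients keep talking to the same LIF addresses throughout, the maintenance is largely transparent to them.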
The only thing that they have to improve in NetApp is that CDOT still relies on pairing nodes active-active. That has to go away. They should look at scalability at the platform level: one file system spanning multiple nodes, so that if one node fails, any node in the cluster could take over its functions. But today, it relies entirely on that active-active pairing. That needs to be improved so that it is one namespace: if a node goes down, any node in the cluster should take over and run that environment. It should also keep its stability, high availability, and data protection. All of this happens today in the virtual world, but it has to happen in the physical layout as well.
How is customer service and technical support?
Tech support is okay. We have given our feedback, and we have seen it evolve over time. So far it's okay. It still has not reached a level I would call "great," but it's okay. It's going in the right direction.
We have performance issues and capacity issues, among other things. We don't get the right engineer, the right attention, the first time, so cases need escalation. We need to raise the priority of the cases to make sure we grab NetApp's attention. Those situations have to be avoided. There needs to be a proactive approach instead of a reactive one.
What was our ROI?
We do see ROI from the capacity perspective, although I don't have data points at the moment.
What other advice do I have?
I would rate ONTAP at eight out of ten. It's an industry standard. It supports pretty much all the protocols and it delivers what the customer needs. It operates from a use-case perspective. Instead of having thousands of features - what is the use of that if a customer only wants ten percent of them? - NetApp is really focusing on that ten percent and delivering what the customer really needs.
It would be a ten out of ten with the cluster enhancements and support improvements. Those are the things they should improve. I hope that in a couple of years, when I come to the next NetApp Insight conference, I'll be able to tell you it's a ten.