All-Flash Storage Arrays Cache Reviews

Showing reviews of the top-ranking products in All-Flash Storage Arrays that contain the term "Cache"
NetApp AFF (All Flash FAS): Cache
DM
IT Director at a legal firm

This product was brought in when I started with the company, so it's hard for me to say how it has improved my organization. I would say that it's improved the performance of our virtual machines, because we weren't using all-flash before this; we were only using Flash Cache. Stepping up from Flash Cache with SAS drives to an all-flash system made a notable difference.

Thin provisioning enables us to add new applications without having to purchase additional storage. Virtually anything we need to get started with is going to be smaller at the beginning than what the salespeople who sell our services tell us. We're about to bring in five terabytes of data, but due to the nature of our business operations, that could happen over a series of months or even a year. We get that data from our clients. Thin provisioning allows us to use only the storage we need, when we need it.
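The thin-provisioning behavior described above can be sketched in a few lines of Python. This is a minimal illustration only; the volume size follows the reviewer's five-terabyte example, but the write amounts are invented for the sketch.

```python
# Minimal sketch of thin provisioning: a volume presents its full logical
# size to the application while consuming only the blocks actually written.
class ThinVolume:
    def __init__(self, logical_tb):
        self.logical_tb = logical_tb   # size the application sees up front
        self.written_tb = 0.0          # capacity actually consumed so far

    def write(self, tb):
        # Consumption grows only as data arrives, capped at the logical size.
        self.written_tb = min(self.written_tb + tb, self.logical_tb)

vol = ThinVolume(logical_tb=5.0)       # the 5 TB engagement, provisioned day one
vol.write(0.8)                         # first month of client data arrives
print(vol.logical_tb, vol.written_tb)  # 5.0 0.8 -> only 0.8 TB actually consumed
```

The point of the sketch is the gap between `logical_tb` and `written_tb`: purchased capacity only needs to track the second number.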

The solution allows the movement of large amounts of data from one data center to another, without interrupting the business. We're only doing that right now for disaster recovery purposes. With that said, it would be much more difficult to move our data at a file-level than at the block level with SnapMirror. We needed a dedicated connection to the DR location regardless, but it's probably saved our IT operations some bandwidth there.

I'm inclined to say the solution reduced our data center costs, but I don't have good modeling on that. The solution was brought in right when I started, so I wasn't part of any cost-modeling conversation.

The solution freed us from worrying about storage as a limiting factor. In our line of business, we deal with some highly duplicative data; it has to do with what our customers send us to store and process on their behalf. With block-level deduplication and compression, redundant storage due to business workflows doesn't penalize us on the storage side, and it can make a really big difference there. In some cases, some of the data we host for clients gets the same kind of compression you would see in a VDI-type environment. It's been really advantageous to us there.
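The saving from block-level deduplication and compression on duplicative data comes down to simple division. The review gives no actual figures, so the 4:1 deduplication and 2:1 compression ratios below are purely illustrative assumptions:

```python
# Illustrative arithmetic for dedupe-plus-compression savings.
# Ratios are assumed (4:1 dedupe, 2:1 compression), not the reviewer's data.
def physical_usage(logical_tb, dedupe_ratio, compression_ratio):
    """Physical capacity consumed after deduplication, then compression."""
    return logical_tb / dedupe_ratio / compression_ratio

# 40 TB of highly duplicative client data would land on just 5 TB of flash:
print(physical_usage(40, 4.0, 2.0))  # 5.0
```

This is why redundant copies created by business workflows "don't penalize" the storage side: only the unique, compressed blocks are actually stored.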

View full review »
SB
Director at a tech services company with 11-50 employees

The most important features are the IOPS and the ease of ONTAP manageability.

The deduplication process is performed in the cache before the data goes to storage, which means that we don't use as much storage.

The versatility of NetApp is what makes it really nice.

View full review »
IBM FlashSystem: Cache
Storage Manager at a financial services firm with 10,001+ employees

I would like to see an improvement in the handling of large amounts of writes. An all-flash system that doesn't do compression or deduplication will flush the writes straight through from the host to the flash modules; it doesn't keep them in the cache. Systems that do compression and deduplication have to perform that work in the controller's memory and cache, so they have to keep the data there; otherwise you will find yourself stuck with performance issues.

View full review »
Dell EMC Unity XT: Cache
ST
Cloud Engineer at a tech services company with 51-200 employees

The UEMCLI is not an object-oriented CLI, and the more object-rich PowerCLI has been discontinued. Only people with bash experience can realistically operate it. Even today, feeding objects from one command into another is a burden with such a CLI. When adding a few disks to a cluster, the CLI queues each disk individually, requiring multiple rescans on every member host before proceeding with the second disk, and then scanning all hosts once again. One could instead add all the disks at once and queue a single rescan for everything.
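The rescan overhead behind this complaint can be shown with back-of-the-envelope arithmetic. The host and disk counts below are hypothetical, chosen only to illustrate the difference between per-disk and batched LUN additions:

```python
# Why batching LUN additions matters: adding disks one at a time triggers
# a rescan of every host per disk, while adding them all at once needs
# only one rescan per host. Counts below are hypothetical.
def rescans_serial(hosts, disks):
    """One rescan of every host for each disk added individually."""
    return hosts * disks

def rescans_batched(hosts, disks):
    """All disks mapped first, then a single rescan per host."""
    return hosts

print(rescans_serial(8, 10), rescans_batched(8, 10))  # 80 8
```

With each rescan taking minutes on a loaded ESX host, the serial approach is how "a few LUNs to a few hosts" stretches into hours.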

There is no means to add volume groups or host groups, a feature that every other solution I have worked with so far has. Working this way, it's a burden to ensure that each LUN gets the same LUN ID on every host. As of the June 2021 release, code OE 5.1, it finally seems to offer host groups!

The integration with vCenter comes with a side effect, in that it takes control of the vSphere scan process; moreover, every ESX host is scanned multiple times. It easily takes a few hours to add a few LUNs to a few hosts. Rather painful. Even adding LUNs manually through the Unisphere GUI, you can keep up with the pace of your script.

Support responsiveness and time to fix bugs should be improved. Over the past one and a half years we had occasional controller reboots, and we went all the way from OE 4.5 through 5.02 to 5.03 and eliminated the most common causes. We still face a stress-triggered cache merge issue, and though we provided the dumps and engineering acknowledged the bug, we were told that addressing it requires substantial code rewriting and that the problem will be fixed in the next major code release (OE 6.x). A year later there is still no fix, but fortunately we hit the condition only once, on one out of five arrays, during that year.

View full review »
Huawei OceanStor: Cache
Senior Consultant at a tech services company with 11-50 employees

On a scale from one to ten, I'd rate it at a seven. 

I gave it a score of seven based on my experience with other storage solutions. We've used NetApp, HP, and EMC. We've used quite a number. In fact, I gave this score because of the support. It is quite a high score. But frankly, we're still going through the steps of implementing. We have our main database there. Now we are going to optimize further. It's not only because All-Flash costs money. We should use it optimally. There is a tendency for users to put all their data on the All-Flash. We also have cache solutions whereby a part of the data is on the All-Flash. We are trying to convince users that they don't really need All-Flash for all their data, for all their applications. Maybe in more time, I can compare and give you an idea, but not yet at this stage. I can't tell you which features we need, not yet at this stage.

View full review »
Lenovo ThinkSystem DE Series: Cache
DP
Solutions Developer at Next Dimension Inc.

You cannot buy a lot of options for these devices, and there are a lot of things they do not do. One thing we would like is easy tiering: if you have spindles and you want to cache a couple of terabytes of storage on SSD, that is something we would like to see, but currently it does not have the capacity to do it.

What it comes down to is that Lenovo needs to add more of the software features that would allow the ThinkSystem line to compete with the other products that we sell. Other than that, it is what it is.

View full review »
HPE Primera: Cache
BM
Service Delivery Manager at a tech services company with 11-50 employees

I work with an HPE authorized partner in Malta and we offer storage solutions for customers. HPE Primera is one such product that I have experience with.

We have noticed that these days, most of the customers are implementing a solution that is a hybrid between Nimble and Primera All-Flash. There are both spinning disks and flash, where flash is used as the cache, which makes the price more competitive.

The customers are primarily using it for disaster recovery. They have their clusters and they replicate to one another to provide business continuity and disaster recovery for applications.

View full review »
Dell EMC PowerMax NVMe: Cache
VF
Presales Engineer Information System and Security at a tech services company with 10,001+ employees
  1. The optimization of the cache memory of each engine and the use of persistent memory. 
  2. I/O density with predictable performance as delivered to the host, since the ceiling supported by the PowerMax is too far away to be reached, regardless of workload and storage capacity utilization. 
View full review »
Infrastructure Lead at Umbra Ltd.

With the SCM memory, it has been "set it and forget it." It is being used as a cache drive, and there is very little configuration for us to do. We just know that it is working.

PowerMax NVMe's QoS capabilities give us a lot of visibility into taking a look at what could be a potential performance issue. However, because it is so fast, we haven't really noticed any slowdowns from the date of deployment even until today.

It is a very good storage appliance for enterprise-level, mission-critical IT workloads because of its high redundancy and parity drives. It gives us the ability to not worry about our data. If something were to go wrong, e.g., a drive pops, then we have our mission-critical warranty: we get a drive the same day and have it swapped by the next business day at the latest.

PowerMax NVMe has made it a lot easier to understand how much we are able to provision, and a lot faster to provision new things; 90% of my provisioning time has been eliminated. It has also made everything behind it very easy to understand and see, versus the older heritage Dell EMC arrays, which were very convoluted and hard to get working. Things that used to take an hour now take five to ten minutes.

View full review »
Zadara: Cache
CTO at Pratum

One of the most valuable features is its integration with other cloud solutions. We have a presence within Amazon EC2 and we leverage compute instances in there. Being able to integrate with compute, both locally within Zadara, as well as with other cloud vendors such as Amazon, is very helpful, while also being able to maintain extremely low latency between those connections. We have leveraged 10-Gig direct connections between them to be able to hook up the storage element within Zadara with the cloud platforms such as Amazon EC2. That is one of the primary technical driving factors.

The other large one is the partnership and the managed service offering from Zadara. That means they have a vested interest and are able to understand any issues or problems that we have. They are there to help identify and work through them and come to solutions for us. We have a unique workload, so problems that we may have to identify and work through could be unique to us. Other customers that are just looking to manage a smaller amount of data would not ever identify or have to work through the kinds of things we do. Having a partner that is interested in helping to work through those issues, and make recommendations based on their expertise, is very valuable to us.

Zadara's dedicated cores and memory provide us with a single-tenant experience. We are multi-tenant in that we manage multiple organizations and customers within our environment. We send all of that data to that single-tenant management aspect within Zadara. We have a couple of different virtual, private storage arrays, a couple of them in high-availability. The I/O engine type we're leveraging is the 2400s.

We also have disaster recovery set up on the other side of the U.S. for replication and remote mirroring. Being able to manage that within the platform allows us to add additional storage ourselves, to change the configuration of the VPSA to scale up or scale down, and to make any changes to meet budgetary needs. It truly allows us to manage things from a performance standpoint as well. We can also rely upon Zadara, as a managed-services provider, to manage those requests on our behalf. In the event that we needed to submit a ticket  and say, "Hey, can you add additional storage or volumes?" it's very helpful to have them leverage their time and expertise to perform that on our behalf.

It is also very important that Zadara provides drive options such as SSD, NL-SAS, and SSD cache for our workload in particular. We require our data to not only be accessible, but to be fast. Typically, most stored data that is hotter or more active is pushed onto faster storage, something like flash cache. The flash cache we began with during our first year with Zadara worked pretty well initially. But our workload is a little unique, and after that the volume of data exceeded the kind of logic that type of cache can apply: it just looks at what data is most frequently accessed, and usually the "first in" sits on that hot flash cache. Our workload was a bit more random than that, so we weren't getting as much of the benefit from the flash cache.

The fact that Zadara provides us with the ability to actually add a hybrid of both SSDs and SATA allows us to specifically designate what volumes and what data should be on those faster drives, while still taking into account budget constraints. That way, we can manage that hybrid and reduce the performance on some of the drives that are housing data that is really being stored long-term and not accessed. Having that hybrid capability has tremendously helped with the flexibility to manage our needs from a performance standpoint as well as a cost perspective.

As far as I know, they also have solid support for the major cloud vendors out there, in addition to some others that I hadn't heard of. But they certainly support Amazon EC2 and Google and Rackspace, among others. Those integrations are very important. Most organizations have some sort of a cloud presence today, whether they're hosting certain servers or compute instances or some other workload out in the cloud. Being able to integrate with the cloud and obtain data and store data, especially with all these next-generation threats and things like ransomware out there, is important. Having backups and storage locations that you can push data to, offsite, or integrate with, is definitely key.

View full review »
CEO at Momit Srl

The object storage feature is wonderful. With traditional storage, you have a cost per gigabyte that is extremely high or related to the number of disks. With Zadara Storage Cloud, you have a cost per gigabyte that you can cut and tailor to your needs independent from the number or size of the disks. 

We have a lot of tenants, so there are a lot of cores and a lot of memory under pressure in this service. The good thing is that every single tenant is isolated and defined within their own compute engine. This means that one customer is not able to create a problem for another customer, even if they get attacked, spoofed, or run malware.

It is absolutely important that the solution provides drive options such as SSD, NL-SAS, and SSD cache because we have a lot of customers. As managed service providers, we have all kinds of solutions. We have a customer that only has five servers, which means very few I/O disks. However, we also have a system with a cluster of databases that requires high IOPS, which means SSD, NVMe, and all the latest, fastest technologies.

View full review »
Chief Technology Officer at Harbor Solution

Our initial application was probably the simplest one. We were sunsetting a product, but we needed to do some movement and we needed some additional storage, but we knew that what we needed was going to change within six months as we got rid of one product and brought in another. To handle this, we started deploying Block storage with Zadara, which we then changed to Object storage and effectively sent back the drives related to the Block storage as we did that migration. This meant that we did not have to invest in new technology or different platforms but rather, we could do it all on one platform and we can manage that migration very easily.

We use Zadara for most of our storage and it provides us with a single-tenant experience. We have a lot more customer environments running on it and although we don't use the compute services at the moment, we do use it for multi-tenant deployment for all of our storage.

I appreciate that they also offer compute services. Although we don't use it at the moment, it is something that we're looking at.

The fact that Zadara provides drive options such as SSD, NL-SAS, and SSD Cache is really useful for us. Much like in the way we can offer different deployments to our customers, having different drive sizes and different drive types means that we can mix and match, depending on customer requirements at the time they come in.

With available protocols including NFS, CIFS, and iSCSI, Zadara supports all of the main things that you'd want to support.

In terms of integration, Zadara supports all of the public and private clouds that we need it to. I'm not sure if it supports all of them on the market, but it works for everything that we require. This is something that is important to us because of the flexibility we have in that regardless of whether our customers are on-premises, in AWS, or otherwise, we can use Zadara storage to support that.

I would characterize Zadara's solution as elastic in all directions. There clearly are some limits to what technology can do, but from Zadara's perspective, it's very good.

With respect to performance, it was not a major factor for us so I don't know whether Zadara improved it or not. Flexibility around capacity is really the key aspect for us.

Zadara has not actually helped us to reduce our data center footprint but that's because we're adding a lot more customers. Instead, we are growing. It has helped us to redeploy people to more strategic projects. This is not so true with the budget, since it was factored in, but we do focus on more strategic projects.

View full review »
GW
Platform and Infrastructure Manager at a tech services company with 1,001-5,000 employees

We use Zadara as a multi-tenanted experience and it is key to us that we have dedicated resources for each tenant because it maintains a consistent level of performance, regardless of how it scales.

The fact that Zadara provides drive options such as SSD and NL-SAS, as well as SSD Cache, is very important because we need that kind of performance in our recovery environments. For example, when the system is used in anger by a customer, it's critical that it's able to perform there and then. This is a key point for us.

At the moment, we don't use the NFS or CIFS protocols. We are, however, big users of iSCSI and Object, and the ability to just have one single solution that covers all of those areas was important to us. I expect that we will be using NFS and CIFS in the future, but that wasn't a day-one priority for us.

The importance of multi-protocol support stems from the fact that historically, we've had to buy different products to support specific use cases. This meant purchasing equipment from different vendors to support different storage workloads, such as Object or File or Block protocols. Having everything all in one was very attractive to us and furthermore, as we retired old equipment, it can all go onto one central platform.

Another important point is that having a single vendor means it's a lot easier for us to support. Our engineers only need to have experience on one storage platform, rather than the three or four that we've previously had to have.

It is important to us that Zadara integrates with all of the public cloud providers, as well as private clouds because what we're starting to see now, especially in the DR business, is the adoption of hybrid working from our customers. As they move into the cloud, they want to utilize our services in the same way. Because Zadara works exactly the same way in a public cloud as it does on-premises, it's a seamless move for us. We don't have to do anything clever or look at alternative products to support it.

It is important to us that this solution can be configured for on-premises, co-location, and cloud environments because it provides us with a seamless experience. It is really helpful that we have one solution that stretches across on-premises, hybrid, and public cloud systems that looks and works the same.

An example of how Zadara has benefited our company is that during the lockdown due to the global pandemic, we've had a big surge in demand for our products. The ability of Zadara to ramp up quickly and expand the system seamlessly has been a key selling point for us, and it's somewhat fueled our growth. As our customer take-up has grown, Zadara's been the backbone in helping us to cope with that increased demand and that increased capacity.

It's been really easy to do, as well. They've been really easy to work with, and we've substantially increased our usage of Zadara. Even though we've only been using it for just about five months, in that time, we've deployed four Zadara systems across four different data centers. Their servicing capacity has been available within about four weeks of saying, "Can you do this?" and them saying "Yes, we can."

With respect to our recovery solutions, using Zadara has perhaps doubled the performance of what we had before. A bit of that is because it's a newer technology, and a bit of that is also in the way we can scale the engine workload. When the workload is particularly high, we can upgrade the engine, in-place, to be a higher-performance engine, and then when the workload scales down, we can drop back to a lower-performance one. 

That flexibility in the performance of not only being able to take advantage of the latest flash technology but also being able to scale the power of the storage engines, up and down as needed, has been really good for us.

Using Zadara has not at the moment helped to reduce our data center footprint, although I expect that it will do so in the future. In fact, at this point, we've taken up more data center footprint to install Zadara, but within six months we will have removed a lot of the older systems. It takes time to migrate our data but the expectation is that we will probably save between 25% and 30%, compared to our previous footprint.

This solution has had a significant effect on our budgeting. Previously, we would have had to spend money as a capital expense to buy storage. Now, it's an operational expense and I don't need to go and find hundreds of thousands of pounds to buy a new storage system. That's helped tremendously with our budgeting.

Compared to the previous solution, we are expecting a saving of about 40% over five years. When we buy new equipment, our write-down period is five years. So, once we've bought it, it has to earn its keep in that time. Using Zadara has not only saved us money but it will continue to save us money over the five years.

It has saved us in terms of incurring costs because I haven't had to spend the money all upfront, and I'm effectively spreading the cost over the five years. We do see an advantage in that the upfront capital costs are eliminated and overall, we expect between 30% and 40% savings over the lifetime if we'd had to buy the equipment.
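The capex-versus-opex comparison over a five-year write-down can be sketched numerically. All prices below are hypothetical, chosen only to reproduce a saving in the 40% range the reviewer reports:

```python
# Hypothetical five-year cost comparison: up-front capital purchase with
# annual support vs. a monthly consumption fee. All figures are invented
# for illustration; the review gives percentages, not prices.
def five_year_cost_capex(purchase_price, annual_support):
    """Buy the array outright, then pay support for the write-down period."""
    return purchase_price + 5 * annual_support

def five_year_cost_opex(monthly_fee):
    """Pay as you go for the same five years."""
    return monthly_fee * 12 * 5

capex = five_year_cost_capex(250_000, 20_000)  # 350,000, mostly up front
opex = five_year_cost_opex(3_500)              # 210,000, spread over 5 years
print(round(1 - opex / capex, 2))              # 0.4 -> a 40% saving
```

Beyond the headline percentage, the sketch shows the cash-flow point the reviewer makes: the opex figure never requires finding hundreds of thousands up front.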

View full review »
IA
Chief Information Officer at a tech services company with 201-500 employees

The fact that we have offsite storage that is provided to us using iSCSI as a service has allowed me to offload certain storage-related workloads into Zadara. This means that when I have a planned failover, if I need to maintain the local storage that I have in my data center, I simply shift all of the new incoming traffic into Zadara storage. None of my customers even know that it has happened. In this regard, it allows us to scale in an infinite way because we do not have to keep adding more capacity inside our physical data center, which includes power, networking, footprint, and so on. The fact that Zadara handles all of that for me behind the scenes, somewhere in Virginia, is my biggest selling point.

With its dedicated cores and memory, we feel that Zadara provides us with a single-tenant experience. This is important for us because we are aware that in the actual physical environment where Zadara is hosting our data, they have other clients. Yet the fact that we have not had any performance issues, and we don't have the noisy-neighbor problem, makes it feel like we are the only ones on that particular storage area network (SAN). It's really important for us.

Zadara provides drive options such as SSD and NL-SAS, as well as SSD cache, and this has been important for us. These options allow us to decide for different volumes, what kind of services we're going to be running on them. For example, if it happens to be a database that requires fast throughput, then we will choose a certain type of drive. If we require volume, but not necessarily performance, then we can choose another drive.

A good thing about Zadara is you do not buy a solution that is fixed at the time of purchase. For instance, if I buy an off-the-shelf storage area network, then whatever that device can do at the time of purchase, give or take one or two upgrades, is where I am. With Zadara, they always improve and they always add more functionalities and more capacities.

One example is that when we became customers, their largest drives were only nine terabytes in size. A year or so later, they improved the technology and now have 14-terabyte drives available, roughly a 50% increase. It is helpful because we were able to take advantage of those higher densities and higher capacities. We were able to migrate our volumes from the nine-terabyte drives to the 14-terabyte drives with virtually no downtime and no interruption to service. This type of scalability, and the fact that you are future-proofing your purchase and your operations, is another great advantage that we see with Zadara.

As far as I know, Zadara integrates with all of the public cloud providers. The fact that they are physically located in the vicinity of public cloud regions is a major selling point for them. From my perspective, it is not yet very important because we are not in the public cloud. We have our own private cloud in Miami, and not part of Amazon or Azure. This means that for us, the fact that they happen to be in Virginia next to Amazon does not play a major role. That said, they are in a place where there is a lot of connectivity, so in that regard, there is an advantage. We are not benefiting from the fact that they are playing nice with public clouds, simply because we are not in the public cloud, but I'm sure that's an advantage for many others who are.

Absolutely, we are taking advantage of the fact that they integrate with private clouds.

Zadara saves me money in a couple of ways. One is that my operational costs are very consistent. The second is that the system is consistent and reliable, and this avoids a lot of the headaches that are associated with downtime, reputation, and all of that. So, knowing that we have a reputable, reliable, and consistent vendor on our side, that to me is important.

It is difficult to estimate how much we have saved because it wouldn't be comparing apples to apples. We would be buying a system versus paying for it operationally and I don't really have those kinds of numbers off-hand. Of course, I cannot put a price tag on my reputation.

View full review »