We use it mainly for storage. We have various databases with different applications, and it serves primarily as storage for our systems.
All-Flash Storage Arrays AI Reviews
Showing reviews of the top-ranking products in All-Flash Storage Arrays that contain the term AI
NetApp AFF (All Flash FAS): AI
It's very stable. It's always there when we need it. With the dual controllers, if one drops out, the other comes right online. We don't use any iSCSI, so there would be a little bit of a latency break there, but over NFS we don't notice the switchover. We can do maintenance in the middle of the day, literally rip a whole controller out of the chassis, and do what we need to do with it.
The initial setup was straightforward. Nothing to it. The professional services from NetApp came in to help us out, and they knew their stuff.
It gives us the power and agility to spin up VMs as quickly as possible.
We have also standardized on NetApp. All the storage that we have for our services runs on NetApp. Being standardized, it's easy for our Operations. We can train them on a single platform.
It helps improve performance for enterprise applications, data analytics, and VMs. With the power of flash, we moved from traditional hybrid storage to all-flash. With the full power of flash and the controllers, performance has doubled compared to what we used to get.
Finally, our total cost of ownership has decreased by approximately 10 to 12 percent.
I just got through the session where it looks like they are going to support Oracle running on Linux with SnapCenter. That is one of the main things that we are hoping to get integrated.
It's the combination of the hardware and the operating system that produces the stability. Based on its data protection and its resilience in case of a component failure, it is well designed on both the hardware and software fronts.
I am satisfied with the stability.
I would like to see them make the storage virtual machines movable. They have integrated DR, so you can fail over to your DR site, but there's no automated way to fail over and fail back. It's all manual. I'd like to see it all automated.
Support has been good. I've had a few cases where support wasn't able to answer the question or took quite a while, but the majority of issues have been answered fairly quickly.
We have more storage capacity. Managing it is easier and it's available anytime we want it.
It was really straightforward, for the most part. We were used to working with FAS already and this is just adding All Flash and SSD to the mix. It's a lot of the same standards we had already.
Our initial setup involved a lot of development. It was complex mainly because we had to make it simple. We had to simplify it for our own customers, so it was complex for us but it's a very easy solution for enterprises.
The stability is excellent. It's highly stable. We've just never really had a failure since we put it in. It's been two years.
Technical support is a little hit and miss, at least for the particular things that I've called about. The SRA component that integrates with SRM is a problem point. It's a pain point. The support personnel aren't always knowledgeable about that product. At times, they are not even aware of which product is supported and which is not, when one has been deprecated and a new one is out, or what the bug fixes in the newer version are.
We have customers who are not NetApp customers. We teach them what the capabilities and challenges are. Our main goal is to understand and meet our customers' challenges. If NetApp really fits their needs, we move on from there. Where we need to transition a whole infrastructure from a different storage brand to NetApp, we'll do that.
If the customer is an existing user, it's easier for us to convince them. If they're a non-NetApp user, it takes time because we have to do proofs of concept to justify it to them. If they agree technically, then the commercial conversation starts. Normally, the commercial conversation does not take that long, because the technical team has already agreed to the solution.
The initial setup is easy and straightforward; there is no complexity.
We are able to offer higher performance to meet the business needs. We see far fewer issues with applications complaining that they are not getting the throughput or IOPS they need, or that latency is too high. We put them on AFF and the issues go away.
The user experience with AFF is fast and secure, with continuous access to data. Our users typically don't know where we're putting their data unless we have some benefit in telling them. If they say, "It's not fast enough," we move it over here, and they say, "It's good now. We're happy." We have to be judicious in how we move it, because the storage is a bit expensive, although the higher storage efficiencies somewhat compensate for that.
The solution is giving IT more headroom so we can deliver higher performance to more applications. Like every business, our data footprint is growing. Our application count is growing, and we're able to keep up with it somewhat better than we were before.
We are spending less time putting out fires, so there's a tangible benefit right there.
It is straightforward. The whole cluster configuration is pretty straightforward: just bring up the node and add it to the existing cluster. We didn't encounter any difficulties.
It takes us one day to set up and provision enterprise applications using this product. Migration takes a lot of time, but provisioning, which is setting up the cluster, takes one day.
The stability is solid. It doesn't fail on us, which is exactly what we want. We are in a critical business where we can't have any downtime. Therefore, if it stays up, that is what we want. We have depended on NetApp for almost a decade now.
AFF is the primary storage for our data centers. We use it for our multi-tenancy data center. We like the crypto-erase function available on the SSDs, and we needed the high performance and IOPS that you can get from SSDs.
We have a big problem in our organization where I can't get the application engineers to give me performance requirements. Now, with the SSDs, I don't need to worry about that anymore; all of our applications perform at a high level. Even our test applications perform at a higher level now.
It has improved the performance of our enterprise applications, data analytics, and VMs because we get higher I/O from the disks now. We run a lot of write-intensive VMs, so the solution definitely helps out.
Our total cost of ownership has decreased because, by the nature of SSDs, their mean time to failure is much longer. They don't fail as often, and that reduces cost. And because we upgraded to All Flash and bigger SSDs, we reduced our footprint. I increased my capacity by 500 percent and reduced my footprint in the data center by 95 percent.
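As a rough illustration of the capacity and footprint change described above, the arithmetic works out like this. The review gives only percentages, so the absolute starting numbers (100 TB in 40U of rack space) are assumptions, and "increased capacity 500 percent" is read here as five times the original:

```python
# Hypothetical starting point; the review only states percentages.
old_capacity_tb = 100   # assumed original usable capacity
old_footprint_u = 40    # assumed original rack space, in rack units

# "Increased my capacity 500 percent" -> read as 5x the original capacity.
new_capacity_tb = old_capacity_tb * 5

# "Reduced my footprint ... by 95 percent" -> 5% of the original rack space.
new_footprint_u = old_footprint_u * 0.05

# Density (capacity per rack unit) improves by a factor of 100 in this reading.
density_gain = (new_capacity_tb / new_footprint_u) / (old_capacity_tb / old_footprint_u)
print(new_capacity_tb, new_footprint_u, density_gain)
```

Under these assumptions, 100 TB in 40U becomes 500 TB in 2U, a hundredfold improvement in capacity per rack unit.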
We use it for our EHR. We have 4,000 users who need access to a very large EHR called Epic. We are sharing a Caché database through AIX servers.
The primary use case is availability, performance, bandwidth, and throughput with respect to our applications.
We are currently using an on-premises solution.
Many reports are run against the applications, and we now receive them very quickly. We used to wait a long time for them; now you only need to wait a moment.
It takes us just minutes to set up and provision an enterprise application using AFF.
I can definitely say it has helped our organization. We have a SQL application server on our NetApp storage whose records contain the transactions. Since my company is a financial company, we always look into transactions. The NetApp all-flash array is faster than what we're used to. The read and write, and the random IOPS, are all up to speed. I don't see much of a difference when I run 100k random IOPS with a 70% read and 30% write mix, or vice versa, 70% write and 30% read. That's a big improvement that we've seen since we started using this solution. It is a valuable asset.
It takes a good administrator, or someone with knowledge of the product, to manage it. That was one of the downfalls that we had with AFF. We have a large offshore team that we have to spend a lot of time training to get up to speed. However, once they're up to speed, they know the product pretty well, and it seems to be okay.
The hardware is a little difficult to configure and operate. However, with that configuration and operation come a lot of different nerd knobs that you can use to design and tune the environment.
From the automation point of view, we want zero downtime for our clients, with good scalability and good performance. Client satisfaction is the most important thing to us.
We haven't received any negative feedback yet. If we are not receiving any complaints from the client side, that tells us the client is okay with the product.
This solution helps us improve performance for our enterprise applications, data analytics, and VMs.
Besides the speed, one of the most valuable features that AFF gives me is its robust hardware. It's simplistic. It deploys very easily. It's already built from the factory to take advantage of the all-flash array.
I would describe the user experience of the solution as very simplistic. There's a very easy GUI to use, and then when you need to get very, very detailed, you have a robust command line that you could do anything you want with to enhance performance for your solutions. Really what we're using the AFF for is solely for speed. We really need the power of the backbone and the speed of the disks because we have to move so much data.
Setting up and provisioning enterprise applications takes minutes. It's just not difficult. We only have to use the GUI, create the spaces, and go. I've set up entire NetApp systems in a morning.
We run almost all of our virtualization workloads on All Flash. Before we migrated to All Flash, we used a different vendor for our NAS solution. Some of it was NAS and some was block storage. Now, logging ETLs are maybe ten times faster than they used to be. We are getting amazing speeds off of the FAS that we never had before.
We also use a lot of the AFF for end-user storage. All the shared file systems, all the file systems that a particular user has, as a G drive, E drive, F drive, or shared drives between various customers and departments, are running off of the All Flash file system. The rendering off of the FAS is so much faster than it used to be. On top of that, we used to do block: we would take block storage and use NFS or Samba to share those file systems with the users. Now, because they are coming straight off of NFS 3 and 4, the speed is marvelous. They are almost five to seven times faster rendering, saving, and retrieving all their files. It's amazing.
I don't know how much bearing IT support has on the All Flash file system. The main things we provide that are better now are speed and stability. If you count those as capabilities, then of course IT has provided the additional capability of faster rendering and getting work done a little quicker.
The biggest workload we have is virtual: maybe 95 to 97% of all our virtual workloads are now running on All Flash. It has dramatically changed the way all of our VMs work. Not only are they faster, but in addition we take snapshots off of our flash storage. So not only are the workloads faster, but if a virtual machine goes down, the restore is 20 times faster than it ever used to be. We don't have to go to a spinning disk; we can restore from our flash storage straight back onto flash, with no spinning disk involved, and the restore takes almost seconds to come back.
Total cost of ownership has two different components. One is strictly the capital cost; the other is the operational cost. You've got to look at the CapEx and how much it costs. That is currently a little higher than it will be in two or three years. The OpEx is where things are getting really nice. The maintenance is less, disk failures are really low, and data issues or corruption are rare. The CapEx is currently high, but the OpEx is getting down to almost insignificant numbers.
All Flash is improving our organization because we used to have the databases on different tiers and now All Flash is reducing the report time. All of the reports and processing is taking less time, so all the information is ready in the morning for the executives to make decisions.
This solution is also bringing up a new initiative for our company to include more databases or more reports into the All Flash because of the speed of getting the information.
For enterprise apps, we mostly use Oracle. All of the Oracle applications have improved a lot since we began using All Flash. Processing and ETL, for instance, used to take 25 hours; now it takes three. That improves many aspects of the applications.
TCO has decreased. After we acquired the AFF 8080, we got a couple of A700s, and they are cheaper than the 8080.
The main use for the all-flash we have is Oracle. For us to provision a new VM with new databases takes exactly 35 minutes.
The most valuable features are FabricPool, with which we take our cold data and pump it straight into an S3 bucket, and the efficiency. We're getting upwards of two and a half times data efficiency through compaction, compression, and deduplication, as well as the smaller size. When we refreshed from two or three racks of spinning disks down to 5U of rack space, it not only saved us a whole heap of costs in our data center environment, but it's also nice to be green. The power savings alone equated to about 50 tons of CO2 a year that we no longer emit. It's a big game changer.
The user experience, from my point of view as the person who drives it most of the time, is a really good one. The toolsets are really easy to use, and on the service side we're able to offer non-disruptive upgrades. It just works and keeps going. It's hard to explain the good things when so few bad things actually occur in the environment. From a user's point of view, the file shares work, everyone's happy, and I'm happy because it's usually not storage that's causing the problem.
We have a pretty amazing story about using AFF. When I came into this organization, we had a 59% uptime ratio, and at the time we were looking at how to improve efficiency and how to bring good technology initiatives together to make a digital transformation happen. When the Affordable Care Act came out, it started mandating that many health care organizations implement an electronic medical record system. Since health care has been behind the curve when it comes to technology, this was a major problem: the organization had that 59% uptime ratio, wanted to implement an electronic medical record system throughout the facility, and we didn't have the technology in place.
One of my key initiatives at the time was to determine what we wanted to do as a whole organization. We wanted to focus on the digital transformation. We needed to find some good business partners, so we selected NetApp. We were trying to create a better, more efficient process, with very strong security practices as well. We selected an All-Flash FAS solution because we were starting to implement virtual desktop infrastructure with VMware.
We wanted to roll out zero clients throughout the whole organization for the physicians, which allowed them to do single sign-on. A physician would be able to go to a specific office, tap his badge, and sign in to the system from there. His floating profile would come over with him, and that created some great efficiencies. The security practices behind the ONTAP solution, and the security that we were experiencing with NetApp, were absolutely out of this world. I've been very impressed with it. One of the main reasons I started with NetApp was their strong focus on health care initiatives. I was asked to sit on a NetApp-facilitated health care advisory group that looked at the overall roadmap of NetApp. When you have a good business partner like NetApp, versus a vendor who's going to come in, sell me a solution, and just call me a year later to say they want us to sign something, I'm not looking for people like that. I'm looking for business partners. What I like to say is, "My success is your success, and your success is ours." That's really a critical point that NetApp has demonstrated.
NetApp AFF has improved our organization through the use of clusters. We previously migrated from Dell EMC, where we had a lot of difficulty moving data around. Now, if we need to move data to slower storage, we can do it with just a volume move within the cluster. Even moving data between clusters is extremely simple using SnapMirror. The data mobility options in All Flash FAS have been awesome.
AFF has given us the ability to explore different technology initiatives because of the flexibility that it has, being able to fit it in like a puzzle piece to different products. For example, any other solutions that we've looked at, a lot of times those vendors have integration directly into NetApp, which we haven't found with other storage providers and so it's extremely helpful to have that tie-in.
This solution has also helped us to improve performance. We have hybrid arrays as well, so we can keep things on slower storage. For the times that we need extremely fast storage, we can put it on AFF, and we can use vVols if we need to, to have different tiers and automatically put things where they need to be. It has really helped us to nail down performance problems by putting workloads where the extreme performance can fix them.
Total cost of ownership has definitely dropped because, with deduplication, compression, and compaction always on, we're able to fit a whole lot more into a smaller amount of space and still provide more performance than we had before. Our total cost per gigabyte ends up being less by going to All Flash.
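As a back-of-the-envelope sketch of why always-on efficiency lowers cost per gigabyte, consider the following. The price and efficiency ratio are hypothetical, not figures from the review:

```python
# Hypothetical numbers: raw flash price and an efficiency ratio in the
# range reviewers describe (e.g. ~2.5:1 from dedup/compression/compaction).
raw_cost_per_gb = 0.50   # $ per physical gigabyte of flash (assumed)
efficiency_ratio = 2.5   # logical GB stored per physical GB (assumed)

# Each physical gigabyte holds `efficiency_ratio` logical gigabytes,
# so the cost per logical gigabyte actually stored drops proportionally.
effective_cost_per_gb = raw_cost_per_gb / efficiency_ratio
print(effective_cost_per_gb)  # 0.2
```

In this reading, $0.50/GB of raw flash behaves like $0.20 per logical gigabyte stored, which is how an all-flash array can end up cheaper per gigabyte than it first appears.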
We stay in communication with the customer to show and explore new technologies.
Having separate storage virtual machines with completely different setups for NFS and Windows solves problems the FAS has when the domain controllers are unreachable.
We have deployed NetApp AFF with four nodes; two of these are in our primary data center, and the remaining two are in the second data center. We are using Cluster Mode configurations.
We don't use NetApp AFF for machine learning or artificial intelligence applications.
With respect to latency, we basically don't have any. If it's there then nobody knows it and nobody can see it. I'm probably the only one that can recognize that it's there, and I barely catch it. This solution is all-flash, so the latency is almost nonexistent.
The data protection level is great. You can have three disks fail and still get your data; I think it takes four failures before you can't access data. The snapshot capability is there, which we use a lot, along with other really wonderful tools. We depend very heavily on the data protection because it's so reliable. We have not had any data become inaccessible because of any kind of drive failure since we started, and that includes our original FAS8040. This is a pretty robust and reliable system, and we don't worry much about the data that is on it. In fact, I don't worry about it at all because it just works.
Using this solution has helped us by making things go faster, but we have not really implemented some of the things that we want to do. For example, we're getting ready to use the VDI capability where we do virtualization of systems. We're still trying to get the infrastructure in place. We deal with different locations around the world and rather than shipping hard drives that are not installed into PCs, then re-installing them at the main site, we want to use VDI. With VDI, we turn on a dumb system that has no permanent storage. It goes in, they run the application and we can control it all from one location, there in our data center. So, that's what we're moving towards. The reason for the A300 is so that our latency is so low that we can do large-scale virtualization. We use VMware a tremendous amount.
NetApp helps us to unify data services across SAN and NAS environments, but I cannot give specifics because the details are confidential.
I have extensive experience with storage systems, and so far, NetApp AFF has not allowed me to leverage data in ways that I have not previously thought of.
Implementing NetApp has allowed us to add new applications without having to purchase additional storage. This is true, in particular, for one of our end customers who spent three years deciding on the necessity of purchasing an A300. Ultimately, the customer ran out of storage space and found that upgrading the existing FAS8040 would have cost three times more. Their current system has quadruple the space of the previous one.
With respect to moving large amounts of data, we are not allowed to move data outside of our data center. However, when we installed the new A300, the moving of data from our FAS8040 was seamless. We were able to move all of the data during the daytime and nobody knew that we were doing it. It ran in the background and nobody noticed.
We have not relocated resources that have been used for storage because I am the only full-time storage resource. I do have some people that are there to help back me up if I need some help or if I go on vacation, but I'm the only dedicated storage guy. Our systems architect, who handles the design for network, storage, and other systems, is also familiar with our storage. We also have a couple of recent hires who will be trained, but they will only be used if I need help or am not available.
Talking about application response time, I know that it has improved since we started using this solution, but I don't think the users have actually noticed. They know that it is a little bit snappier, but I don't think they understand how much faster it really is. I noticed because I can look at System Manager or Unified Manager to see the performance numbers. I can see where the numbers were higher before, in places where there was a lot of disk I/O. We had a mix of SATA, SAS, and flash, but now we are one hundred percent flash, so the performance graph barely moves along the bottom. The users haven't really noticed yet because they're not really putting a load on it. At least not yet. Give them a chance, though. Once they figure it out, they'll use it. I would say that in another year, they'll figure it out.
NetApp AFF has reduced our data center costs, considering the increase in the amount of data space. Had we moved to the same capacity with our older FAS8040 then it would have cost us four and a half million dollars, and we would not have even had new controller heads. With the new A300, it cost under two million, so it was very cost-effective. That, in itself, saved us money. Plus, the fact that it is all solid-state with no spinning disks means that the amount of electricity is going to be less. There may also be savings in terms of cooling in the data center.
As far as worrying about the amount of space, that was the whole reason for buying the A300. Our FAS8040 was a very good unit that did not have a single failure in three years, but when it ran out of space it was time to upgrade.
Our primary use for NetApp AFF is backup for our production. It's mainly for the databases behind all of our retail operations at Nordstrom. We've got to keep them running every day, so we've got to make sure that we have all the databases backed up for three years or more.
This product was brought in when I started with the company, so it's hard for me to say how it has improved my organization. I would say that it has improved the performance of our virtual machines, because we weren't using flash before this; we were only using Flash Cache. Stepping up from Flash Cache with SAS drives to an all-flash system made a notable difference.
Thin provisioning enables us to add new applications without having to purchase additional storage. Virtually anything that we need to get started with is going to be smaller at the beginning than what the sales guys that sell our services tell us. We're about to bring in five terabytes of data. Due to the nature of our business operations that could happen over a series of months or even a year. We get that data from our clients. Thin provisioning allows us to use only the storage we need when we need it.
The solution allows the movement of large amounts of data from one data center to another, without interrupting the business. We're only doing that right now for disaster recovery purposes. With that said, it would be much more difficult to move our data at a file-level than at the block level with SnapMirror. We needed a dedicated connection to the DR location regardless, but it's probably saved our IT operations some bandwidth there.
I'm inclined to say the solution reduced our data center costs, but I don't have good modeling on that. The solution was brought in right when I started, so in regards to any cost modeling, I wasn't part of that conversation.
The solution freed us from worrying about storage as a limiting factor. In our line of business, we deal with some highly duplicative data. It has to do with what our customers send us to store and process through on their behalf. Redundant storage due to business workflows doesn't penalize us on the storage side when we get to block-level deduplication and compression. It can make a really big difference there. In some cases, some of the data we host for clients gets the same type of compression you would see in a VDI type environment. It's been really advantageous to us there.
The procurement process could be improved. It takes a long time for us to receive stuff. The product is good. It's not the product, it's just that it takes forever to get it. It's not our reseller's problem; it's usually held up at NetApp.
Waiting for equipment is one of our biggest hiccups. I live in Pennsylvania and we flew out to Washington state to do an install. We were there for three days, but the product didn't show up. We left and the product came the next day. Then we had to send somebody else out. That's because things were getting held up in shipping and stuff like that. The shipping is my only beef with NetApp.
The initial setup of this solution is straightforward, at least for me. I've deployed NetApp before in my previous jobs, and it was easy with my experience. That said, it is not very complex.
During a maintenance cycle, there are outages for NAS. There is a small timeout when there is a failover from one node to another, and some applications are sensitive to that.
We are in the process of swapping our main controller, and there is no easy way to migrate the data without doing a volume move. I would like a better way to swap hardware.
Technical support could use some improvement.
Our primary use case for NetApp AFF is performance-based applications. Whenever our customers complain about performance, we move their data to an all-flash system to improve it.
We have our own data center and don't share our network with others.
I've set up a NetApp network previously. The setup was pretty straightforward.
I would like to see NetApp improve more of its offline tools and utilities. Drilling down into their Active IQ technology: it's great if your cluster is online and attached to the internet, with the ability to post and forward AutoSupport, but for a standalone, offline cluster, none of those utilities work. If there were something similar to NetApp's Unified Manager, but on-premises, where the user could deploy it and AutoSupport could be forwarded to it, perhaps a slimmed-down Active IQ solution, I'd be interested in that.
I need a FlexVol to FlexGroup conversion solution.
I would like to see the FAS and AFF platforms simplified so that the differences will disappear at some point. This would reduce the complexity for the end-storage engineers.
Our primary use case for NetApp AFF is unstructured data. We set it up for high availability and minimal downtime.
Prior to deploying this product, we were having such severe latency issues that certain applications and certain services were becoming unavailable at times. Moving to the AFF completely obliterated all those issues that we were having.
With regard to the overall latency, NetApp AFF is almost immeasurably fast.
Data protection and data management features are simple to use with the web management interface.
We do not have any data on the cloud, but this solution definitely helps to simplify IT operations by unifying data that we have on-premises. We are using a mixture of mounting NFS, CIFS, and then using fiber channel, so data is available to multiple platforms with multiple connectivity paradigms.
Thin provisioning has allowed us to add new applications without having to purchase additional storage. The best example is our recent deployment of an entire server upgrade from Windows 2008 to Windows 2016. Had we not been using thin provisioning, we would never have had enough disk space to complete it without upgrading the hardware.
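A minimal sketch of the thin-provisioning idea described above (with hypothetical volume names and sizes, not the reviewer's actual figures): volumes are presented at their full provisioned size, but physical space is consumed only as data is written, so an upgrade can fit inside the physical capacity even when the provisioned total exceeds it:

```python
# Minimal thin-provisioning model. All names and numbers are hypothetical.
physical_capacity_gb = 10_000

# Each volume advertises a provisioned size but only consumes what is written.
volumes = {
    "win2016_upgrade": {"provisioned": 8_000, "written": 2_500},
    "file_shares":     {"provisioned": 6_000, "written": 3_000},
    "databases":       {"provisioned": 4_000, "written": 1_500},
}

provisioned_total = sum(v["provisioned"] for v in volumes.values())
written_total = sum(v["written"] for v in volumes.values())

# Thick provisioning would need the full provisioned total up front,
# while thin provisioning only needs the space actually written.
print(provisioned_total)  # 18000 -> would not fit in 10,000 GB if thick
print(written_total)      # 7000  -> fits with headroom when thin
print(written_total <= physical_capacity_gb)  # True
```

The trade-off is that an over-provisioned system must be monitored: if written data ever approaches the physical capacity, more storage has to be added before the volumes fill up.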
We're a pretty small team, so we have never had dedicated storage resources.
NetApp AFF has reduced our application response time. In some cases, our applications have gone from almost unusable to instantaneous response times.
Storage is always a limiting factor, simply because it's not unlimited. However, this solution has enabled us to present the option of less expensively adding more storage for very specific application uses, which we did not have before.
Prior to NetApp AFF, we were using an HPE storage solution. It was a little more difficult to swap out the drives on the XP series. You have to shut down the drive and then wait for a prompt to remove it. It's a long process, and if somebody pulls one out hot and puts another one in, you have to do a complete rebuild. It is not as robust or stable when you are swapping parts.
Speed, reliability, and ease of use are the most valuable features.
The overall latency in our environment is very good.
We don't use the solution for artificial intelligence or machine learning applications.
The simplicity around data protection and data management is very good. We use SnapVault for data protection which works very well. SnapMirror is also good. We mainly use the command line a lot, so we don't tend to use many provisioning tools.
We have not used this solution for artificial intelligence or machine learning applications as of yet. This product has reduced our total latency, going from spinning disks to flash disks. We rarely see any latency, and when we do, it's not the disks, it's the network. The overall latency right now is about two milliseconds or less.
AFF hasn't enabled us to relocate resources, or employees that we were previously using for storage operations.
It has improved application response time. We had applications with thirty to forty milliseconds of latency; now they have dropped to approximately one to three milliseconds, with a maximum of five. It's a huge improvement.
We use both technologies and we have simplified things. We are trying to shift away from SAN because it is not as easy to fail over to the opposite data center.
We are trying to switch over to have everything one hundred percent NFS. Once the switch to NFS is complete our cutover time will be one hour versus six.
The primary use case is enterprise storage for our email database system.
We have just been using it on-premises. We are looking to move the workloads to the cloud, but right now it's just on-premises.
We stay away from what is called a silo architecture. The NetApp cluster enables us to do a volume move to different nodes and share the entire cluster with the various sub-setups, as well as use the most storage we have on ONTAP. We are able to tailor and carve out storage at the file level, block level, or power level for our various clients.
On the Fibre Channel side, there is a limit of sixteen terabytes on each LUN, and we would like to see this raised because we are having to use some other products.
ONTAP has improved my organization because we now have better performance. We can scale up and we can create servers a lot faster now. With the storage that we had, it used to take a lot longer, but now we can provide the business what they need a lot faster.
It simplifies IT operations by unifying data services across SAN and NAS environments. We use our own type of SAN and NAS for CIFS and also for virtual servers. It's pretty basic. I didn't realize how simple it was to create storage and manage storage until I started using NetApp ONTAP. We use it daily.
Response time has improved. Reads between the storage and the end-users are a hundred times faster than they used to be. When we migrated from 7-Mode to cluster mode and went to an all-flash system, the speed and performance were amazing. The business commented on that, which was good for us.
Data center costs have definitely been reduced with the compression that we get with all-flash. We're getting 20:1, so it's definitely a huge saving.
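As a rough illustration of what a 20:1 data-reduction ratio means for capacity planning, here is a back-of-the-envelope sketch. The array size and ratios below are made-up numbers for illustration, not figures from this reviewer's environment:

```python
# Toy calculation: effective (logical) capacity from raw flash capacity
# at a given dedupe/compression ratio. Numbers are hypothetical.
def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Logical data that fits on raw_tb of flash at a ratio
    such as 20.0 for 20:1, or 1.0 for no reduction."""
    return raw_tb * reduction_ratio

raw = 50.0  # hypothetical raw flash capacity in TB
print(effective_capacity_tb(raw, 20.0))  # 20:1 -> 1000.0 TB logical
print(effective_capacity_tb(raw, 1.0))   # no reduction -> 50.0 TB
```

This is why a few rack units of flash can replace racks of spinning disk: the savings multiply through power, cooling, and floor space, not just the raw drive cost.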
It has enabled us to stop worrying about storage as a limiting factor. We can thin provision data now and we can over-provision compared to the actual physical hardware that we have. We have a lot of flexibility compared to what we had before.
Prior to bringing in NetApp, we would do a lot of Commvault backups. We utilize Commvault, so we were just backing up the data that way, and recovering that way. Utilizing Snapshots and SnapMirror allows us to recover a lot faster. We use it on a daily basis to recover end-users' files that have been deleted. It's a great tool for that.
We use Workflow Automation. Latency is great on our writes, although we do find that with AFF systems, and it may just be what we're doing with them, the read latency is a little bit higher than we would expect from SSDs.
With regard to the simplicity of data protection and data management, it's great. SnapMirror is a breeze to set up and utilize, and SnapVault is the same way.
NetApp absolutely simplifies our IT operations by unifying data services.
The thin provisioning is great, and we have used it in lieu of purchasing additional storage. Talking about the storage efficiencies that we're getting, on VMware for instance, we are getting seven to one on some volumes, which is great.
NetApp has allowed us to move large amounts of data between data centers. We are migrating our data center from on-premises to a hosted data center, so we're utilizing this functionality all the time to move loads of data from one center to another. It has been a great tool for that.
Our application response time has absolutely improved. In terms of latency, before when we were running Epic Caché, the latency on our FAS was ten to fifteen milliseconds. Now, running off of the AFFs, we have perhaps one or two milliseconds, so it has greatly improved.
Whether our data center costs are reduced remains to be seen. We've always been told that solid-state is supposed to be cheaper and go down in price, but we haven't been able to see that at all. It's disappointing.
The stability of the solution is very good. The reliability is just top-notch. We have not had any outage or unscheduled downtime. Sometimes a disk or an SSD fails, but it gets replaced without any users knowing about it, because there are no service interruptions.
This solution has helped simplify our IT operations. We can easily move data from on-premises to the cloud, or from one cloud to another cloud. NetApp SnapShots and SnapMirror are also helpful.
The thin provisioning has allowed us to add new applications without having to purchase additional storage. We are shrinking the data with functions like deduplication, saving almost two hundred percent. It is very helpful.
This solution has allowed us to move very large amounts of data without affecting IT operations. We have moved four petabytes to the cloud. We have moved data from on-premises to the cloud, and also between clouds. It is easy to do. For example, if you want DR or a backup in a second location, then you just use SnapShot. If you have a database that you want to have available in more than one location then you can synchronize them easily. We are very happy with these features.
Our application response time has been improved since implementing this solution. The AFF cluster is awesome. Our response time is now below two milliseconds, whereas it used to be four or five milliseconds. This is very useful.
The costs of our data center have definitely been reduced by using this solution. Power consumption and space have been reduced, obviously, because this solution is very small.
We have been using this solution to automatically tier cold data to the cloud. I would not say that it has affected our TCO.
This solution has not changed our position in terms of worrying about storage as a limiting factor.
I can't remember the last time we had an issue or an outage.
It is one of the best solutions out there right now. It is extremely simple, reliable, and seldom ever breaks. It's extremely easy to set up. It's reliable, which is important for us in healthcare. It doesn't take a lot of management or support, as it just works correctly.
Our NetApp environment has been so stable and simple that we don't have a lot of resources allocated to support it right now. For our entire infrastructure, we probably have three engineers in the entire enterprise supporting our NetApp infrastructure. So, we haven't necessarily reallocated resources, but we already run pretty thin as it is.
This solution reduced our costs by consolidating several types of disparate storage. The savings come mostly in power consumption and density. One of our big data center costs, which was clear when we built our recent data center, is that each space basically has a value tied to it. Going to a flash solution enabled us to have a lower power footprint, as well as higher density. This essentially means that we have more capacity in a smaller space. When it costs several hundred million dollars to build a data center, you have to think that each of those spots has a cost associated with it. This means that each server rack in there is worth that much in the end. When we look at those costs and everything else, it saved us money to go to AFF, where we have that really high density. It's getting even better, because the newer models that are coming out are going to be denser still.
Being able to easily and quickly pull data out of snapshots is something that benefits us. Our times for recovery on a lot of things are going to be in the minutes, rather than in the range of hours. It takes the same amount of time for us to put a FlexClone out with a ten terabyte VM as it does a one terabyte VM. That is really valuable to us. We can provide somebody with a VM, regardless of size, and we can tell them how much time it will take to be able to get on it. This excludes the extra stuff that happens on the back end, like vMotion. They can already touch the VM, so we don't really worry about it.
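The reason a clone of a ten-terabyte VM takes no longer than a clone of a one-terabyte VM is that metadata-based clones such as FlexClone share the parent's blocks and record only the changes. A minimal copy-on-write sketch of the idea, in plain Python (an illustration of the general technique, nothing NetApp-specific):

```python
# Toy copy-on-write clone: creating the clone copies no data blocks,
# so creation cost is independent of volume size.
class Volume:
    def __init__(self, blocks):
        self.blocks = blocks          # block_id -> data

class Clone:
    def __init__(self, parent):
        self.parent = parent          # shared, read-only view of parent
        self.delta = {}               # only changed blocks live here
    def read(self, block_id):
        return self.delta.get(block_id, self.parent.blocks[block_id])
    def write(self, block_id, data):
        self.delta[block_id] = data   # copy-on-write: divergence only

base = Volume({i: f"data{i}" for i in range(1000)})
clone = Clone(base)                   # O(1), regardless of volume size
clone.write(0, "changed")
print(clone.read(0))                  # "changed" (from the delta)
print(clone.read(1))                  # "data1" (still shared with parent)
```

Recovery from snapshots is fast for the same reason: the data is already on the array, and only pointers need to be materialized.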
One of the other things that helped us out was the inline efficiencies such as the deduplication, compaction, and compression. That made this solution shine in terms of how we're utilizing the environment and minimizing our footprint.
With respect to how simple this solution is around data protection, I would say that it's in the middle. I think that the data protection services that they offer, like SnapCenter, are terrible. There was an issue that we had in our environment where if you had a fully qualified domain name that was too long, or had too many periods in it, then it wouldn't work. They recently fixed this, but clearly, after having a problem like this, the solution is not enterprise-ready. Overall, I see NetApp as really good for data protection, but SnapCenter is the weak point. I'd be much more willing to go with something like Veeam, which utilizes those direct NetApp features. They have the technology, but personally, I don't think that their implementation is there yet on the data protection side.
I think that this solution simplifies our IT operations by unifying data services across SAN and NAS environments. In fact, this is one of the reasons that we wanted to switch to this solution, because of the simplicity that it adds.
In terms of being able to leverage data in new ways because of this solution, I cannot think of anything in particular that is not offered by other vendors. One example of something that is game-changing is in-place snapshotting, but we're seeing that from a lot of vendors.
The thin provisioning capability provided by this solution has absolutely allowed us to add new applications without having to purchase additional storage. I would say that the thin provisioning coupled with the storage efficiencies are really helpful. The one thing we've had to worry about as a result of thin provisioning is our VMware teams, or other teams, thin provisioning on top of our thin provisioning, which you always know is not good. The problem is that you don't really have any insight into how much you're actually utilizing.
This solution has enabled us to move lots of data between the data center and cloud without interruption to the business. We have SVM DR relationships between data centers, so for us, even if we lost the whole data center, we could failover.
This solution has improved our application response time, but I was not with the company prior to implementation so I do not have specific metrics.
We have been using this solution's feature that automatically tiers data to the cloud, but it is not to a public cloud. Rather, we store cold data on our private cloud. It's still using object storage, but not on a public cloud.
I would say that this solution has, in a way, freed us from worrying about storage as a limiting factor. The main reason, as funny as it sounds, is that our network is now the limiting factor. We can easily max out links with the all-flash array. Now we are looking at going back and upgrading the rest of the infrastructure to be able to keep up with the flash. I think that right now we don't even have a strong NDMP footprint, because we couldn't support it, as we would need far too much speed.
We have been using the FAS series product, and AFF is pretty similar to the FAS products, as it still runs the ONTAP operating system. We are using AFF because it comes with all-flash disks, which gives us better performance with a smaller footprint. We use it mainly to store our block and NAS data.
Speed is the most valuable feature. It is all-flash, so it is fast.
It simplifies since it is integrated with the other platforms as well. It's maintainable; it does not take too much to maintain the stuff. Creating users and sessions is easy on it.
The primary use case for AFF is as SAN storage for our SQL database and VMware environment, which drives our treatment systems. We do not currently use it for AI or machine learning.
We are running ONTAP 9.6.
The most valuable features are dedupe, compression, compaction, and the flexibility to offload your cold data to StorageGRID. This is the biggest key point which drove our whole move to the NetApp AFF solution.
AFF has opened our eyes to how storage value works. In the past, we looked at it more as just a container where we could dump our customers' databases and let the customers use it, in terms of efficiency. Today, we can replicate that data to a different location, use it to recover our environment, and have real flexibility with the solution and the data. These are the things which piqued our interest, and something that we're willing to provide as a solution to our customers.
The primary use case for AFF is for use in our production environment. Within our production environment, we have a number of different data stores that AFF serves. We use a number of protocols from NFS to CIFS, as well from the file system protocols, and in the block level we use iSCSI.
We are a fully on-prem business as far as the positioning of our data sets.
We don't have real-time applications that we run in-house, being a law firm. The most important thing is the availability of our environments and applications that we serve to our client base. We don't have real-time applications whose improvement could be measured in a real, tangible form that would make a huge difference for us. Nevertheless, the way it goes: the faster, the better; the more powerful, the better; and the more resources you can get from it, the better.
We did it to consolidate eight filers. We needed the speed to make sure that it worked when we consolidated.
We've been using AFF for file shares for about 14 years now. So it's hard for me to remember how things were before we had it. For the Windows drives, they switched over before I started with the company, so it's hard for me to remember before that. But for the NFS, I do remember that things were going down all the time and clusters had to be managed like they were very fragile children ready to fall over and break. All of that disappeared the moment we moved to ONTAP. Later on, when we got into the AFF realm, all of a sudden performance problems just vanished because everything was on flash at that point.
Since we've been growing up with AFF, through the 7-Mode to Cluster Mode transition, and the AFF transition, it feels like a very organic growth that has been keeping up with our needs. So it's not like a change. It's been more, "Hey, this is moving in the direction we need to move." And it's always there for us, or close to being always there for us.
One of the ways that we leverage data now, that we wouldn't have been able to do before — and we're talking simple file shares. One of the things we couldn't do before AFF was really search those things in a reasonable timeframe. We had all this unstructured data out there. We had all these things to search for and see: Do we already have this? Do we have things sitting out there that we should have or that we shouldn't have? And we can do those searches in a reasonable timeframe now, whereas before, it was just so long that it wasn't even worth bothering.
AFF thin provisioning allows us to survive. Every volume we have is over-provisioned and we use thin provisioning for everything. Things need to see they have a lot of space, sometimes, to function well, from the file servers to VMware shares to our database applications spitting stuff out to NFS. They need to see that they have space even if they're not going to use it. Especially with AFF, because there's a lot of deduplication and compression behind the scenes, that saves us a lot of space and lets us "lie" to our consumers and say, "Hey, you've got all this space. Trust us. It's all there for you." We don't have to actually buy it until later, and that makes it function at all. We wouldn't even be able to do what we do without thin provisioning.
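The "lie" described above boils down to an over-commit ratio: the sum of what consumers believe they have, divided by the physical capacity actually installed. A toy sketch, with hypothetical volume sizes:

```python
# Toy model of thin provisioning: volumes are promised space up front,
# but physical capacity is consumed only as data is actually written.
# All sizes below are hypothetical.
promised_tb = [20, 30, 50]   # provisioned (advertised) volume sizes
written_tb  = [4, 6, 10]     # what each volume has actually written
physical_tb = 40             # installed flash capacity

overcommit = sum(promised_tb) / physical_tb   # 100 / 40 = 2.5x
used = sum(written_tb)                        # 20 TB actually consumed
headroom = physical_tb - used                 # buy more only as this shrinks
print(f"over-commit {overcommit:.1f}x, {used} TB used, {headroom} TB free")
```

Dedupe and compression push the real consumption below `written_tb`, which is what makes the over-commit safe in practice; the operational job is just to watch the headroom and buy capacity before it runs out.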
AFF has definitely improved our response time. I don't have data for you — nothing that would be a good quote — but I do know that before AFF, we had complaints about response time on our file shares. After AFF, we don't. So it's mostly anecdotal, but it's pretty clear that going all-flash made a big difference in our organization.
AFF has probably reduced our data center costs. It's been so long since we considered anything other than it, so it's hard to say. I do know that doing some of the things that we do would certainly cost more without AFF, because we'd have to buy more storage to pull them off. So with AFF's dedupe and compression, and the fact that it works so well on our files, I think it has probably saved us some money, at least 10 to 20 percent versus other solutions, if not way more.
Dell EMC XtremIO: AI
I am not too impressed with XtremIO because we had a major failure.
We had some difficulty when we tried to expand our XtremIO because it had to be done in a certain way and only EMC personnel were allowed to do it. That wasn't very flexible. But it seems to be better in the new version. Scalability is always a problem.
It's a great solution. We have 100% high availability and 100% business continuity. All our banking is All-Flash behind the VPLEX.
We've seen great enhancements from the performance point of view. There's good availability, stability, and continuity, but the performance actually has increased by 60 or 70%.
We mostly use it for backup because we cannot measure anything and we are afraid to use it for surveillance systems. We were planning to use it mostly for surveillance systems.
The management should be improved and the GUI interface could be better and easier.
In the next release, they should improve the replication. There should be high availability. You can't do replication from one EMC array to another; the way it is now, you would need to use another tool.
The initial setup of this solution is straightforward.
I would say that you can deploy this solution in an hour if you know how to do it.
I work as a technical consultant and our company is a reseller. We sell hyper-converged solutions to our customers. We use mainly NetApp HCI and SolidFire. We use a variety of versions depending on the customer's requirements. Our main use of the product is for ESX environments and Hyper-V environments.
The initial setup is straightforward and quick.
I have a SolidFire grid set up and I find that it is a stable solution. I did have to replace a disk on one occasion, which is something that the technical support contacted me about. While I have not used SolidFire in production, I have not heard complaints about stability from any of our customers.
One of the most valuable aspects of the solution is the fact that it's all in one and in a very small physical footprint. It has all of your major components, including your storage area network, servers, and networking.
The delivery of the product is very fast, and the solution itself deploys quickly; it is up and running within hours.
The product is competitively priced and technical support is good.
You can easily and effectively scale this solution. It's one of the main selling points and one of the features that makes it far superior to competitors.
Tintri VMstore: AI
- It is fast and reliable. There hasn’t been a single failure in three years of use.
- The VMs have worked fine, and the bandwidth to the Tintri SAN has never been an issue.
Pure Storage FlashArray: AI
I would like to see more detailed reporting on the data. Sure, it is great to see usage, trends, latency, and all the common stuff. However, it would be nice to know what a given VM's exact usage is after deduplication, and/or what that VM's actual latency and bandwidth are, outside of VMware.
The initial setup is straightforward.
We forget they're there. We plugged the first one in, then we didn't look at it for months. We copied more and more stuff into it over that first year and got more and more impressed at how effective Pure's data-reduction technology was. You copy more and more stuff into them and they just sit there, working away. Now that a lot of our daily operations are automated, we barely even log into them.
We sell a SaaS offering of the storage to our customers. We use the storage as our main storage and also as our backup storage.
Try it out. It is easy to get it up and running, and simple to migrate your Oracle workloads over to run an apples to apples comparison. The performance numbers speak for themselves. If you factor in the ease in terms of operations, as well as the cost of the array compared to other solid state arrays, it becomes a clear positive for Pure Storage.
All of our customers are looking at submillisecond latency, which is the common Pure Storage metric, and we have definitely seen it there. Everything has been great in terms of throughput and availability has been fantastic.
- The performance of the high speed FlashArrays.
- They have a good API set. Their flash snapshot technologies are efficient.
- The deduplication in the array is one of the main reasons that it's a cost-effective platform. Combining it with the snap technologies allows the product to be remotely controlled, manually controlled, or scheduled. It does an efficient job of storing data while still delivering the performance that you would normally expect from a higher-priced solution.
Pure has become the main storage solution for our customers. It is mainly used for our customers' Oracle databases.
We are doing a project in tandem with Boeing to develop a security solution for their Oracle databases. We've been doing it in the VMware virtual solutions lab, which is back-ended by Pure Storage. It's a very complex project. Pure made it fast enough that we could cycle through the things that we needed to cycle through to get it exactly right. We were able to do so a lot of times, to rev it enough to get it refined to where the process was exactly right every time. There's no way we would have had time to rev it that much had it been on anything slower.
It helps simplify storage. When you're running Pure all-flash, you don't have to do a lot of the old Oracle best practices. You don't have to worry about putting log files on a different disk channel than the data files, and those types of issues. As long as you don't max out the bandwidth of your connectivity, your Fibre Channel, then it doesn't matter. That has pushed the bottleneck down to the connectivity to the storage, as opposed to the different spindle groups on your storage. That has made it vastly easier to do large volumes, rapid provisioning in databases, without taking a performance hit.
We like the data reduction rates. That has been really helpful. You get 4U of Pure Storage replacing something like two racks of spinning disks. One of the things that has contributed to that are the data reduction rates. Not only that, it helps dramatically speed the read coming back in, because you don't have to read it 400 times. Actually, the write doesn't hurt anything either because the write goes in once and then it gets deduplicated and that's that. It does help speed I/O because then everything is coming right off the front end of cache. Certainly, in terms of space, it's probably the most helpful.
The initial setup was straightforward, very fast. We had done a PoC before.
Pure Storage has helped improve our organization because, before them, we had a giant 3PAR V400, and every day we would lose a disk or a magazine. We had to call a guy out to come onsite. It was a massive three-rack thing. Pure Storage is really modular; we're maxing out shelves where we can, and it doesn't take up as much space, it's not as hot, and it's a lot better than 3PAR.
Replication is the main reason we have it. It has helped to simplify our storage in that there's nothing to really set up. Once we have the arrays linked, we ship the data over and we set our RTOs and our RPOs.
As dedupe and compression go up and we get more out of it, then we do see reduction in total cost of ownership. We're also throwing more and more on than we ever had before, so it's hard to tell, but we're getting more data on a smaller array than we ever had before.
The 3PAR SSD arrays that we have are still failing a lot, so even though we're under warranty, we still have to get someone out and usually have someone troubleshoot, which adds to the cost. With Pure, we've had a disk fail; we pop it out, pop a new one in, and it's good to go.
In terms of performance metrics, depending on what we have on it, some of our databases will get 4.8:1. When we do a big release, our SQL tables change values, so we'll see that reduced, and we'll sometimes go up to 110% utilization. We're working with Pure Storage to try to fix that and see why we're changing so much. We also mistakenly had 10 PB on Pure, so that data churn really reduced our usable storage. We're learning how to use Pure properly.
The company started off with a small chunk of the product. Now they have moved up to where Pure Storage became the direct responder in our Australian office; they said it was very stable on their end.
We have a capital of storage with EMC, our previous solution. The fact that Pure has a petabyte of storage means that Pure Storage will become a de-facto standard in all the global organizations.
We run a lot of Oracle workloads and we need a lot of development environments and this solution allows us to snapshot those environments. It releases those to new teams within minutes at a very small storage cost amount.
It really helps simplify storage. It's very, very simple to use. The web interface is also very easy to use. Pure's OS is just perfect; there's nothing really complicated about it. With the help of the array, it's very easy to navigate. We can see the volumes and our protection groups. It's a breath of fresh air compared to the legacy storage that we were using.
Technical support is very responsive. We had an SSD fail and they replaced it within 24 hours.
The initial setup was straightforward in the way that it was a database vacuum storage.
The data reduction is working well for the expected usage of VMs and other stuff like that. I do see it's not working very well for already compressed data which is expected. I know this solution is true to the expectation and how it's advertised.
I would like to see active replication. I know that it's available now but I haven't tried it yet. I hope that it works.
We've had zero drive failures and zero problems with it. We've had it in place for about a year and a half and have had zero complaints, other than that box-to-box replication is not encrypted.
This solution has improved our organization in the way that in the past we had reports that were taking up to two hours and after switching to SSD storage the overall processing power dropped to half an hour. The end users saw an immediate performance gain.
Our primary use case is a big bucket of storage for VMware. We run our virtual machines on it, and we mostly make sure that our SQL databases sit on Pure Storage, because it's the fastest storage we have available.
We switched to Pure Storage mainly because of the frustration of dealing with performance on the old platforms that we used to use.
It helps us simplify our storage because we use it for a specific use case of replication between sites. We have two data centers: a primary data center and a secondary data center. We got a Pure Storage device in each location and we do backups of critical data in both locations and then replicate them back and forth between the sites. This is the biggest thing it does for us.
We have seen a reduction in total costs of ownership. Most of the data that's on the Pure came off of Dell EMC VNX. The money I saved by not renewing maintenance on the Dell EMC devices paid for the Pure Storage devices. I've saved a lot of money and gotten better-performing storage.
With every update we get, we get a reduction in the space used which has been pretty dramatic with each one of the upgrades that we've gone through.
I would evaluate the technical support as good. I have a team who calls in for support if there is an issue. They have not complained to me about any problems.
Technical support is good, but not as good as we would like. We have to get our Pure account team involved often, and they are stars. That always solves the problem. Support is available 24/7, but sometimes they're not as detail-oriented as we would like in investigating problems.
The most valuable feature is that maintenance is free.
The way Pure Storage handles the controller storage warranty or replacement has been an issue for some people who just replace the controllers every couple of years, and that's where some of the confusion with pricing and support has come in. They should be clear on how the controller replacements happen, as it is important to know whether or not you can get a good return on them, because it can be a little confusing.
I rated the solution as a nine out of ten because I knew about a disk failure. Other than that, it would probably be a ten. Disk failures are out of anybody's control.
VMware is currently our main use case because it dedupes really well.
The initial setup was straightforward in the way that the configuration was simple. It's simple to manage.
Technical support is excellent. I've had very good responses from technical support. We had a couple of cases where we needed support. Some of the communications were purely over email, and some have been actual calls to the service desk.
Snapshot recovery has been very helpful. When there have been snapshots that we've had to restore it's been easy for our SAN team to make those available for our server team.
It has improved my organization in the way that now we have lower latency, we get fewer complaints from customers, and we see a constant response time.
The first setup we had was really straightforward and simple.
Everything with Pure Storage is so straightforward. It was an easy setup, and we were storing data almost immediately.
The initial setup was straightforward. We started with about 60TB and have grown from there.
I have 19 years of experience with Dell EMC products, and almost two years of experience with Pure Storage. The main difference between Dell EMC All Flash and Pure Storage FlashArray is that the Dell EMC product is built on a traditional architecture. You have more functionalities and more connectivity possibilities with Dell EMC at this moment. Of course, Pure Storage FlashArray is on a quick road to closing the gap.
The security operating system is its most valuable feature because it's very simple and easy to use and operate. You don't have to do very serious training to operate this equipment. It's user-friendly and pretty straightforward.
The performance analytics are moderate. It's not the best performance platform out there but it's the easiest to operate.
The credentials on the iSCSI interface are only available to type in with the Chrome browser, and not with the Firefox browser. Hopefully, in the next release, this will be fixed.
The initial setup is very straightforward. It is clear, simple, and easy. While it has a human interface, there are a lot of operations that are done automatically by the unit itself.
Its ease of use is a very big thing for our customers. It's easy to set up and easy to maintain. The support is automated, which is very good.
Pure Storage has proven to be proactive with support. Even when we have small problems, they open a support case before we even notify them that there has been an actual issue.
We receive good quality of support from the first line of support, so we don't need to escalate or wait through a long process.
The solution is very stable.
Only one disk has had a problem, and it didn't create problems for our customers; we were able to maintain performance throughout.
The sales and executive support have been outstanding compared to the rest of the market. I replaced another couple of vendors that I had in place for storage, who over-promised and under-delivered on their technical expectations, and who certainly over-promised on their ability to do conversions from one array to another. My upgrade paths have been simple on the Pure.
It's a very stable product, all self-contained and very well-supported as well.
Everything could be cheaper. Other areas where we would always like to see improvement with products like this are in compression and deduplication. Increasing the overall storage efficiency of the platform would be great.
One thing I'd like to see in a future release is integration between their main storage array and what they call their FlashBlade product; to be able to snapshot directly from the primary array into multiple different backup copies on FlashBlade. That would be an intriguing and interesting feature for us. Other than that, we've not had any big needs or demands.
We use tech support only infrequently; we rarely need to call them. The solution is easy to use and straightforward. Once it's set up, it does what we need it to do.
The initial setup was very easy and straightforward.
How long deployment takes depends on the configuration and the size of the project. Some of our larger machine-learning deployments, where we have to put in an AI-ready infrastructure box, tend to take a little bit longer. It's a newer product and they're still figuring all that out, but it's comparable to any other vendor out there.
I would like to see the NAS add-on component become more fault-tolerant than just a single virtual machine running inside the array. I'm unwilling to use it for that reason. I have other solutions that work, but I would use it if they had a little bit more fault-tolerance or if somebody explained to me that it's better than I think it is.
It's easy to use, and the maintenance upgrades that provide free controllers work really well.
The initial setup was straightforward and simple.
Our HANA installation was a greenfield. So, we started the Pure Storage system with HANA.
I would like to have support available in Spanish.
It needs to work a little more closely with solutions like VMware, so that it understands the particular workloads that are on it. Today, it does not understand the applications that are running against it.
Performance is its most valuable feature. Nobody else comes close, not that I have seen.
They are on the money with the predictive performance analytics. They claim high performance, and they do have it.
The GUI is simplistic and basic. It is somewhat self-explanatory, but not enough; it needs a little more to it.
I would like better training, such as an hour-long class or more online training.
The most valuable feature is its performance.
The solution’s inline deduplication and compression are very good.
The upgrade architecture is very good.
Our data reduction rates, latency, and availability are all good.
They make a reliable storage. We use it as a very critical system, and we don't want any corruption on our system.
Since our design is a high availability design, it can work 24/7.
We have done a lot of different things with Pure Storage. We have included some real-time analytics that we developed for our eCommerce website and run those on FlashBlade. We used FlashBlade as it was the only storage platform fast enough to keep up with that data flow.
We are able to monitor I/O, latency, read/write, capacity used, and all the different metrics that the Pure gives us the ability to monitor.
It has definitely affected our ability to capacity plan, but in a good way. We have full visibility into capacity, forecasting, and all the metrics the solution provides us with.
It takes drastically less time to manage and administer the solution. We used to have three or four people dedicated just to storage, with only one person who could actually do the Hitachi replication, because it used archaic technology called HORCM files. That's not true in the Pure Storage realm: all our junior administrators can administer the storage arrays. It is simple and easy to use, and we don't have to dedicate a whole team of full-time people to work on it.
The initial setup was straightforward. I had done the preparation first. I had a good relationship with the presales engineer. It went as expected.
Manageability is its most valuable feature.
It is simplified storage, as we don't have to maintain or administer it on a daily basis, which is good. We don't have to be experts in managing the storage. We can depend on the solution's ability to phone home and leverage the built-in support function of the product.
It has strong statistics and historical metrics with Pure1. Therefore, it has been everything that we have needed out of a platform.
We noticed a dramatic increase in application performance when moving it from NetApp to Pure Storage.
Pure Storage seemed more cost-effective than NetApp. When we did our POC, we saw big performance gains between all-flash on NetApp and all-flash on Pure Storage. It was significantly better.
The initial setup was pretty straightforward and simple.
The inline deduplication and compression have exceeded our expectations. The rep from Pure Storage kept promising us 4:1, and we were very skeptical about getting that; we were anticipating closer to 1.5:1. So far, with the VMs, we have been running closer to 5:1 deduplication and compression, which is amazing to us.
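For reference, the ratios quoted here are simple arithmetic: logical data written divided by physical space consumed. A minimal sketch of that calculation, where the sample figures are hypothetical and not this reviewer's actual numbers:

```python
def data_reduction_ratio(logical_bytes, physical_bytes):
    """Combined deduplication + compression ratio, as arrays commonly report it."""
    if physical_bytes <= 0:
        raise ValueError("physical space must be positive")
    return logical_bytes / physical_bytes

# Hypothetical example: 100 TB of VM data written, 20 TB physically stored.
ratio = data_reduction_ratio(logical_bytes=100e12, physical_bytes=20e12)
print(f"{ratio:.1f}:1")  # 5.0:1, the kind of figure quoted above
```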
It has reduced our overhead management time on storage, since it is so simple to get in, provision a volume, present it to the host, and be done. With the old HPE system, there were quite a few more steps to deal with. Therefore, it has definitely reduced our management burden.
The initial setup was straightforward. We just followed the information on the screen: click, click, click.
The start up process is very easy.
The product added speed to our SQL environment, and we get somewhat better compression. It gave us a little more space when I moved my SQL environment off the competitor onto Pure Storage. Therefore, I gained a bit of space and saw an increase in performance.
They have really good baked-in analytics that show you trends and growth history, which helps with future planning for data growth.
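Trend-based planning of the kind described here amounts to fitting the growth history and extrapolating. A minimal sketch using an ordinary least-squares line over monthly usage samples; the history values below are invented for illustration, and real tools such as Pure1 do this automatically:

```python
def linear_forecast(usage_tb, months_ahead):
    """Fit a straight line to monthly capacity samples and extrapolate forward."""
    n = len(usage_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_tb) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_tb))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    slope = numerator / denominator
    intercept = mean_y - slope * mean_x
    # Project from the last sample point forward.
    return intercept + slope * (n - 1 + months_ahead)

# Hypothetical history: roughly 2 TB/month of steady growth.
history = [40.0, 42.1, 43.9, 46.0, 48.2, 50.1]
print(round(linear_forecast(history, 6), 1))  # roughly 62 TB in six months
```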
The initial setup was very straightforward and we used FlairTech for the deployment.
Prior to this solution, we were using the IBM Storage Network. The support was not very good, and the feature set was very limited.
We needed something that was simpler to manage and maintain.
As partners, we should have the option to download the software, rather than have to go back through Pure to obtain it.
Our previous SAN storage environment never performed with the same levels as this does. The performance levels and the storage have improved my organization.
It has benefited our IT organization because we're a 95% virtualized environment and we're able to allocate resources as needed and manage our whole infrastructure that way.
We are running VMware on Pure. Our main driver for this was to isolate our Citrix environment from the general SAN storage board.
The joint solution has benefited my organization in the way that it isolates it, giving peak performance and does not share it with other environments that have any infrastructures or competing resources.
The initial setup was straightforward.
We deployed it in our DR site first. We got it set up in DR and made sure everything was working. Then we brought it into production and deployed it on the production side.
We tested it first on the DR site for about two weeks; we didn't test a separate unit beyond that. Our vendor had approved it and used it themselves, so we went on the advice of our vendor and got the system.
We primarily use this solution for our SQL server in an on-premises deployment.
Having a dedicated array for our SQL server is very nice.
We are running VMware on Pure, and the main driver for that is because it is all-flash. Also, we wanted a dedicated solution for our SQL environment. Running on Pure has given us the ability to scale out our SQL environments. We tripled our environment in the past three years since implementing this solution, and we have not had any issues with the storage keeping up with the workloads.
We are making use of some of the VMware integrations that have been developed by Pure, but we are really waiting for the copy data management part.
The stability of this solution has been great. We did have a recent problem but it was probably poor capacity management on our part, where we allowed the system to become too full and it was unable to do its own correction. Besides that though, it runs great. It's very low-touch compared to some other vendors we have used in the past. In some cases, we used to really have to have an expert to run the storage network and now with Pure, that's not as important. Once it's installed and ready to go, it's very easy to maintain, very easy to provision new space, and very easy to expand the hardware. It's been transformational just in the way that you consume the product. It's a service now.
We use the FlashArray X20, M20, and M10. We have regulations against cloud, so we're mostly on-prem. However, we do use Office 365 for email, and we have Azure for development on another team, but I don't manage that team.
Our primary use case of this solution is to house data stores on virtual machines.
The seamless integration into the public cloud has improved my organization. It also benefits my IT organization in many ways. We sell it, we use it, and it makes us faster.
The joint solution, VMware on Pure, has helped our organization. It's tested a lot of stuff and been put in production. It's also used for customers.
Our organization takes advantage of the VMware integrations developed by Pure, any APIs that are available to be using.
It offers seamless integrations and has made it easy for us to do. It's a simple product.
The solution is very scalable because it is fairly smooth and easy to upgrade.
I find the speed of the solution its most valuable feature. It is really fast and also very easy to use. You can basically set it up and forget about it; you don't have to manage it on a day-to-day basis. I also like the plugins. For instance, if I need to extend a datastore, I can go straight to the plugin, extend the datastore, refresh, and see the new store. I don't have to log in to the array with my credentials, so I save time and it is easy.
The technical support is fair and the team was helpful.
It replaced an earlier tier, 3PAR storage, and gave us faster performance for the databases.
VMware has benefited our IT organization because we're 100% VMware, everything is running on it.
We are running VMware on Pure. Our main driver was the performance for SQL servers. The joint solution has helped my organization in the way that the databases run faster.
My organization is taking advantage of the VMware integration developed by Pure. We've deployed it. I think it gives the storage administrator some additional insight into metrics. I don't think we're using it to actually manage the datastores; he's getting more insight into metrics. Pure has a VAAI plugin that allows you to manage the datastores. We're not doing that, but I think it gives them heightened analytics in addition to Pure1, the web interface. The integrations have helped by being another dashboard to have. Somebody could think that the databases are running slow, and our database administrator can look at that tool and say, "No, it's unique to your SQL databases; it's not the other VMs on the datastores."
Having fast storage allows the actual servers to perform at high capacity, so we don't have slowdowns in our applications.
It benefits our IT organization in the way that it drives down costs, allows us to migrate servers from one data center to another, and gives the flexibility that having bare metal servers wouldn't allow.
We run VMware on Pure and our main driver was for cost and performance.
We used to use a product called XtremIO, which was a pretty significant improvement on the old way of deploying storage through standalone SANs, and we also used EMC VMAX. That was really expensive. We saw a vast improvement when we switched from XtremIO to the Pure Storage model. It just made us that much more competitive. We were able to offer those workloads to our clients, we sold more, and we keep selling it.
VMware absolutely benefited our IT organization. VMware has always been a cut above the rest in terms of virtualization. I was not part of the organization before VMware became the prevalent powerhouse it is today, but I know that back in the day, we used to have every server in a single box. Now we've trimmed down so much of our infrastructure, as well as some of our clients', by moving to VMware, and it's been a significant improvement.
We are and we aren't running VMware on Pure. Our ESXi hosts are not running on Pure Storage, but we use Pure Storage for the back-end datastores. We don't run the hypervisor on Pure, but we run a lot of our clients' virtual machines on Pure Storage.
The main driver of running VMware on Pure is more IOPS. It's a growing trend in the industry: more clients need more IOPS and lower latency. It's an ongoing battle. When it comes down to it, there's going to be demand for even lower latency, even more speed, and more IOPS. We haven't hit that quite yet, but it will happen. It's just the nature of the business.
The joint solution has benefited our organization. The ability to have tier-one storage from Pure has allowed us not only to sell more at a higher price, but also to separate certain workloads from others. We have tier-one storage on Pure and tier-two storage from a different provider, which gives us more capacity while reserving Pure Storage for those who really need it. This provides better performance for those VMs.
The initial setup was easy and straightforward.
It has improved my organization in the way that we have high reliability and faster access to our data.
It has improved our IT organization in the way that we are able to provide systems to our customers quickly and provide high availability and reliability for their applications.
We are running VMware on Pure. Our main driver was speed. The joint solution has helped our organization through speed of delivery and speed of applications.
The solution is stable. There are fewer complaints and less downtime, which helps us work in that environment more effectively.
It's fast because it's Flash storage so the IT team doesn't have to worry about it.
Besides virtualization and the benefits associated with that, we're a Workspace ONE customer, we're going to be starting that deployment Q4 of this year and we're looking forward to improving the patient experience with the doctors and the rest of the medical staff.
We are delivering a better experience for doctors and the other staff that deliver desirable outcomes. Again, it's easy on the IT staff. It's important to have infrastructure that you can rely on and not have to worry about failing.
We use SRM for VMware integration. The failovers with SRM are fantastic. It's fast and reliable. It just works, which is sometimes difficult to achieve.
We primarily use the solution for desktop virtualization.
High IOPS is the reason we require this storage for our virtualized media, and we're able to maintain it with Pure. The encryption is pretty strong; we like having encryption right on the storage.
We use the solution for VM storage in a private cloud model. The main motivations we had to run VMware on Pure were the simplicity and cost.
We're using the M70 R2.
With respect to comparing other solutions, when you put all of the features in a box, leverage them and migrate your application to one of these arrays, it will give you a lot of benefits. Some people have compared benchmark performance tests against other arrays and from my point of view, overall as a whole package when you sum everything up, Pure Storage is the winner.
HPE Nimble Storage: AI
Nimble Storage is our primary production storage vendor. We use it with VMware on a daily basis, including a new AFA5000 all-flash array for our DMS system.
Straightforward and very easy, as always.
- Ease of use
The product is up 99.99 percent of the time, and it just stays up. We have it on multiple power supplies. The product runs constantly. When there is a problem, it notifies you of the problem.
We have never had any critical failures with it. It is always up. Every time a single component is broken, it has been repaired within 24 hours.
The parts are hot swappable. We just get a part in the mail, and we are good to go. You literally walk in, pull one out, put a new one in, and the thing is running again.
- Simplicity of use
I don't think it is officially released yet, but the main reason that we chose Nimble is because of the sync rep feature. So, I would like to see that further evolve. This feature will be essential for our setups.
There has been no downtime, which is probably the best thing.
We use InfoSight predictive analytics. It helps us from a performance perspective by identifying potential bottlenecks.
InfoSight has identified controller failures or performance issues.
This is a storage solution and while it is faster than our old storage platform, that in and of itself hasn't really improved any of the operational aspects of the company.
Performance has been restored to the same level of what we replaced, although it has taken six months of working with Hewlett Packard to allow them to understand our unique environment.
I don't think that it's fair to say that All-Flash is for growth. It's the next logical progression that we had to make.
We can have fewer resources manning and monitoring the storage and we can reallocate resources to work on other things while maintaining confidence in our storage solution.
We have successfully integrated various applications, such as SAP Business One and Microsoft Dynamics GP; all of the ERP systems we have tried work.
The Nimble Storage solution has enhanced performance over the previous system.
I haven't got the details of the IOPS (Input/Output Operations Per Second) so I don't know it exactly, but definitely performance on the service is much better.
For us, it is about speed and stability. There are a lot of redundancies in place. I am able to access what I need to access.
Our situation is sort of unique. We need fast disk for compute, but then we also need more traditional disk for our images. Having Nimble, where I can have both fast and traditional disk in one pane, and still see everything, is pretty awesome.
We use InfoSight for predictive analysis because the answer to most of our problems is, "It isn't our problem," even though we get blamed for them. With InfoSight, I can prove that we aren't the ones causing the problem. For example, one of our applications was acting weird, and we had the application vendor on the call, but they couldn't really answer much. As one of my troubleshooting methods, I said, "Let me check InfoSight." I logged in, and I could see a VM that was heavily pegged, almost in a critical status. That VM was the reason for the issue. It wasn't our infrastructure setup, and it was still an application issue, but I was able to pinpoint exactly what it was based on that.
That application had about 30 servers. As I'm not the application vendor, I don't know which servers serve what purpose within the application. I was able to go into InfoSight, and it told me that one server in particular needed to be worked on, so I didn't have to waste time looking at the other 29. I knew that was the one to work on and the one that needed to be fixed.
The InfoSight platform and the reporting help us to identify network issues and compatibility problems.
All-flash also positions our company for growth. We've deployed 3PAR all-flash for our core applications and will not transition outside of flash from this point forward.
Nimble has increased performance, with better IOPS and mixed-workload capacity. It has also improved throughput, which means we've been able to transition off our remaining rack mounts onto Nimble plus a virtualization structure, in a cost-effective manner.
The initial setup was very straightforward.
In terms of scalability, when I said "cloud," that was one of the things that we looked at when considering how we would grow, how we would expand. We are still evaluating. We do have some cloud storage, but we want to have one solution for that. We definitely think that with this product's features, we can go into the cloud and scale to whatever we like.
Our primary use case is for our central data storage. This contains our files, financial services, and customer data.
We want rock solid stability, making sure all of our customers have complete 100 percent uptime, which is our goal. Nimble achieves that pretty well.
InfoSight is a regular part of our weekly routine. We don't use it every single day, but we do check in on things every once in a while. Luckily, with Nimble, you can forget it and you don't really have to worry about it. However, if we do need to look into an issue, we definitely use InfoSight.
InfoSight has increased the availability of analytics and our ability to quickly get to them. We can migrate things faster. We can pull stuff out of production. We can restore from backups more quickly with the Nimble system than anything else.
We have had a couple of networking issues, temperature alarms, and a few things in our data center going on where InfoSight will scale back our utilization (or whatever) in order to keep us productive and up.
InfoSight has enabled us to get our servers back up faster, especially on the back-end. We have instant recovery. We are able to access that fast storage within seconds, which is very helpful. It enabled us to get service back up in a minute and a half.
The solution has improved our throughput tremendously, with on-demand access over 10 Gb fiber. Even though some of the disks are spinning disks, Nimble's custom configuration hits the cache first and then moves data off to cold storage later. This is perfect, and it has improved everything.
This product is definitely stable and we use it on a daily basis.
The solution needs higher availability.
The pricing of the solution isn't ideal. They should work to make it more affordable. It's very expensive.
I'd like to be able to configure the solution from vCenter, which isn't possible right now.
It would be great if the solution offered even more integrations and plugins.
HPE 3PAR StoreServ: AI
It has high availability, and it's flexible for tuning by the system admin.
- Four-node performance
- No split IO groups as on IBM SVC clusters.
- Easy tiering (with a small % of cache) did a good job in a large-scale environment of 1,000 VMs on 350 TB.
- External monitoring, giving a detailed dashboard.
- A nice virtual appliance for remote callout support to HPE services.
The product is really stable. The main thing is the solution is easy to use, and my administrators don't spend a lot of time on maintaining or troubleshooting issues because of it.
We never have a problem; the system just runs. One of the main things is that we are in the Caribbean, where we have more than 60 percent more power outages than in the US. The 3PAR can handle that. A lot of systems just don't work after the power goes out and comes back. We never had that; the 3PAR always came back up. I had problems with other servers, but not with the 3PAR.
I would like to have more detail in the alerting. It is not very granular right now. What it gives you is fairly basic, and we can't do a lot of tweaking on our own. We would like to be able to tweak some of the alerts for our team.
It is our main storage solution for our entire VMware environment.
Everything run on the solution is core: MEDITECH, all the EMRs, and back-ends support services.
We use a combination of flash and spinning disk. For some of our less critical functions, since we run everything on the 3PAR, there is no reason to spend the extra money on flash to run the stuff that is not super mission-critical.
It seems pretty stable. Once we got over the birthing pains, it has been pretty reliable.
As long as the array is not full, it is available. We filled it up.
It is our primary storage; the entire company runs off 3PAR. Right now, we are in a VMware environment, and all of our virtual machines run off 3PAR, along with all of our EMR applications, practice management solutions, and email. Our file server is on there too.
We're currently running two 3PAR 7200 storage units in high availability. We have three workload tiers: Nearline, Fast Class, and SSD. Our primary ERP system is Oracle JD Edwards running on Microsoft SQL Server 2008 R2, which is all on SSD. Then we have other workloads for our barcoding; our engineering solutions run on Fast Class, and most of our traditional file, print, and storage workloads run on Nearline SATA. We also have two 4200 LeftHand SANs in the environment, where I put very low-priority VMs; they are minor application servers that don't need a whole lot of performance. However, the LeftHand SANs are now seven years old and the 3PAR SANs are five years old. I have to replace everything in 2020, and I'm looking at HPE SimpliVity, Nimble, and potentially 3PAR as the storage architecture for that environment.
Our JD Edwards, which is our ERP system, that is critical. Also, our barcode scanning, because we do a lot of barcode scanning out in the shipping and manufacturing warehouse. Our accounting system is part of the JD Edwards too. All of that is on the SSD. We're currently evaluating whether we upgrade to JD Edwards 9.2 or if we deploy Microsoft Finance and Operations. If we go with Microsoft Finance and Operations, that'll be totally in the cloud, and I'll be able to carve a third of my storage requirements out because it will no longer be necessary to run an on-premise ERP solution.
My directive when I was hired in 2016 as a direct IT manager versus an outsourced IT manager, as I was when I started in 2014, is anything and everything I can take to the cloud goes to the cloud. If I do that, it reduces the need for all SSD on-premise, and that's actually what I'm trying to get to, because I'd rather utilize Microsoft Cloud, Azure, Office 365, and Dynamics 365. I want to utilize that cloud for my performance, whereas on-premise traditional file, print, and storage doesn't really need SSD.
- Easy to expand
- Easy to maintain
The solution’s deduplication functionality works great. We are getting about a 16:1 dedupe ratio on our VM workloads.
We are using Synergy, so it is the next evolution of blades, which has been great. We have had no complaints, except for the problem with the Spectre vulnerability.
The failover/failback: Just to keep our mission-critical stuff running all the time.
The solution’s deduplication functionality has helped us save a lot of space. People save their files over and over again or email them around, then everybody has a copy of the same thing. Therefore, the deduplication is very helpful.
We started off with the EVAs, and as the EVAs aged out, we were moving up. However, it was the EVA that failed on us, and 3PAR was simply the next, better solution for our scale of need.
We have two use cases:
- We use it with our internal applications, so for internal use.
- We are a provider of national research computing infrastructure, so we are using it out there with all our systems.
There are not many mission-critical applications or processes that we run on 3PAR. The mission-critical applications are usually the ones for internal university purposes, like ERP systems. Our research systems are not mission-critical, since our researchers can rerun their computations in a week.
It provides us some disaster recovery capabilities. The all-flash storage gives us the performance that we need.
The remote copy group failover is very useful and has helped us.
We use InfoSight predictive analytics. The most useful part of it is being able to see the growth curve.
Before using centralized storage, we needed to make sure that we have enough physical disks installed in a server. Now, we know exactly the capacity that we need for the upcoming year, and it's much easier for us to enlarge the capacity and expose these disk volumes to the relevant servers. Again, in our case, it's mostly the databases.
All-flash positions our organization for growth in a way, mostly for performance, because again, we're using all-flash for the performance that it provides, and we have critical databases running on it. It's providing day-to-day functionality, the way I see it.
- High performance
Never go with your first impression regarding 3PAR. They say that hybrid is the best thing anytime, but if you read the small print, it depends on how you use it, etc. So, we went with an all-flash for that reason. So, don't go with your first feeling. Investigate and try it out. Try to get a demo to make sure it works.
I like that the solution's availability gives me:
- Ease of use
- Good support from HPE.
This solution provides flash storage for our servers. Our environment contains Linux operating systems, VMware, and some web servers.
This product has met our expectations. Once we got past the minor configuration issues, it's been smooth sailing, so I'm very happy with it. It is important to understand the terminology upfront because it helps prepare to do the actual implementation.
I would rate this solution a ten out of ten.
It allows us to cohost as needed. We are able to put more systems on one data storage system and it is still able to deliver the availability and speed that we need it to deliver.
All-flash also positions us for growth. We can look to simplify things while still maintaining the reliability and speed that we need to deliver quality healthcare.
In addition, it has increased our performance and it has improved our throughput. The latter improvement means that we're able to ensure that the users can get to their data as quickly as they need it, and that it responds to any queries that they have. It's able to meet their daily needs.
The increased throughput has allowed us to scale and maintain performance, or even have better performance.
In terms of the mission-critical applications that we run on this solution, our application is benefit adjudication.
We have been able to scale faster and get our applications out in much less time. We don't need to worry about the platform's ability to manage the workload, so we are pretty happy.
Our VMware platform sits on 3PAR. We also have databases, ERP applications, and websites running on it.
All-flash also positions our organization for growth. It certainly has its place. We don't use all-flash everywhere because the performance of the existing arrays does the job, but I can certainly see where it could assist us if we are doing data-intensive operations.
We deployed InfoSight predictive analytics not too long ago. It improved our management of VMs. We are now able to see a lot more using InfoSight and we have a pretty good idea of exactly what's going on in our storage array.
The storage array absolutely increases performance. Compared with what we had before 3PAR, this has certainly done its job.
The solution has also helped us reduce time to deployment, I would say by at least 30%. It's easier for us to deploy. We get our servers up and running quickly and that way we support our environment faster so we can be more agile.
It has also significantly improved throughput, so we don't need to worry about performance for any of our platforms.
Technical support is pretty good. We always have somebody available to support us. We do have a maintenance contract with third-party vendors for HPE but we've been attended to very well in this field.
This solution has allowed for massive performance acceleration of all workloads and massively increased availability (with peer persistence/transparent failover feature).
HPE 3PAR provides fast and reliable storage for our critical systems like the database (MSSQL and Oracle). It also improved the availability of the system and at the same time provides a Disaster Recovery solution by using the remote-copy feature.
The adaptive optimization is also a factor in maximizing the capability of the system.
I would like to have support for on-the-fly data reallocation when using VVols. To explain further: when using VVols, you can have three tiers in storage, and these tiers differ in IO and capacity characteristics. Tier 0 typically supports maximum IO with minimum capacity and uses SSDs. Tier 1 sits between Tier 0 and Tier 2 in both IO and capacity, and is known as FC (Fast Class). Finally, Tier 2 offers capacity only and usually uses NL disks. On the VMware cluster side, the equivalent concept is the Gold, Silver, and Bronze groups defined in a Storage Policy. You may want to first move a virtual machine's disks to the Silver group, and then later move it to another group, such as Gold or Bronze. This depends on a capability the storage device provides, and it is not yet available on this device; according to HPE's documentation, it will be delivered in an update called T05.
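The tier-to-policy mapping described above can be sketched as a small model. This is purely illustrative: the `Tier` class and the `reallocate` helper are invented for the example and are not an HPE or VMware API.

```python
# Illustrative model of the three VVol tiers described above.
# Names and structure are assumptions for the example, not vendor APIs.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str    # storage-side tier name
    policy: str  # VMware storage-policy equivalent (Gold/Silver/Bronze)
    media: str   # disk technology backing the tier

TIERS = {
    0: Tier("Tier 0", "Gold", "SSD: maximum IO, minimum capacity"),
    1: Tier("Tier 1", "Silver", "FC (Fast Class): between Tier 0 and Tier 2"),
    2: Tier("Tier 2", "Bronze", "NL: capacity only"),
}

def reallocate(target_policy: str) -> int:
    """Return the tier a VM's disks would land on for a new storage policy."""
    for level, tier in TIERS.items():
        if tier.policy == target_policy:
            return level
    raise ValueError(f"unknown policy: {target_policy}")
```

The on-the-fly reallocation asked for above would amount to the array performing this re-placement itself when the policy changes, without a manual migration.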
Hitachi Virtual Storage Platform F Series: AI
This machine was being used in the DMZ area of the company. It was isolated. The external users were something around 4.5 million customers. I don't know if every one of them uses the machine, but this machine is the gateway to the website and host application.
Installing and doing maintenance on this machine at this scale was a one-person job, handled by a storage admin for this use case. If I look only at that solution, it's not a full-time job.
We increased the capacity of the solution one year after we initially bought it. Before I left my last job, one of the decisions I made was to change the solution to another platform in this environment. It was also out of warranty. They're still using this appliance, but they need to migrate out of it to another solution.
It may be a newer version of the Hitachi Virtual Storage Platform F Series or a new solution from another vendor. It depends on the product features and total cost.
It's the public sector, so everything is running on a bid process. The suppliers bid to give a better offer to win the opportunity. Hitachi came up with a better deal and pricing.
The company is opening up opportunities for vendors. The vendors that supply the solution at the lowest prices win the bidding. It's a public sector or government contract. There was special pricing in this case since it's the government sector.
If you do it correctly, the initial setup is straightforward.
The most valuable aspects of this solution are the storage system and availability.
We are a solution provider and I work with a lot of different SAN products, depending on the needs of the customers. We have implemented this solution, as well as the G Series, for some of our clients.
I have a project right now that involves revising and fine-tuning a storage network. This network contains two Hitachi VSP G Series units. There is not a major difference between the F Series and the G Series; both of them are enterprise-scale and efficient for many data centers. It is used as primary storage in industries such as banking, automotive, healthcare, and insurance, by large companies or companies that have an IBM mainframe.
If the solution requires very high IOPS (input/output operations per second) with sub-millisecond response times, then they should select the F Series because it has better performance.
The primary use case of this solution is for storage in some industries such as banking, automotive, healthcare, and insurance companies. It is for large companies or companies with a large mainframe such as an IBM mainframe. Our primary use case was for core banking.
I think the management should be improved, because it is not very user-friendly at all. I also think the support could be better - at least for mid-range users. I think historically Hitachi systems are made for really big organizations like banks or insurance companies. Usually those companies have dedicated personnel dealing with the storage. And because they're very valuable clients, their support contracts are much more tailored to their businesses. Since this program is more aimed towards smaller companies, they don't have support solutions for mid-range companies.
I also think that their management is not very user-friendly; if it were, so much support wouldn't be necessary. It's a very reliable system, but the whole management user interface is very unfriendly. I know they're aware of that. They have a lot of management solutions, but none of them has matured yet.
IBM FlashSystem: AI
The cluster should be improved because non-disruptive failover was supported only on a few operating systems.
- CLI, though intuitive; no other API available
- Lacks scheduling or a cron
- Has no built-in short-term performance graphs
- IBM TPC exceeds the monitoring needs; had to fall back on STOR2RRD
- Support response times are bad
As a result of the accelerated read and write operations from disks, productivity across the enterprise has increased in daily work.
The initial setup was a little complicated, especially the storage part of the implementation. The solution itself is complicated as well.
There are multiple applications which require better performance as well as space. Both were a big issue. Some features were expensive, but we had to meet the storage requirement as well as the performance for these and had to design a solution to meet these two requirements.
It was an implementation handled in phases, so it took about four to five months to deploy the complete solution.
We went phase by phase, and at the end of each phase we did a performance analysis to make sure that the storage performance was not going down and we were getting the best performance out of the boxes. Throughout each phase we also had multiple performance and penetration tests; this was all mandatory.
We had about five people helping with the implementation. For maintenance, as long as you have an experienced person or team, you only need one or two people.
The support is simply not there, so it needs to be improved.
This solution needs a management console where we are alerted to issues and can report them, or escalate them through email or another method. If something happens to our storage, for example, then we will be notified, and we can report it through the console.
Performance is not a problem anymore and the space available is enough for about five years of operations. We are now busy with cross-DC failover, which will use the capabilities of this system extensively.
The setup of the IBM devices is really straightforward. The actual deployment for the IBM FlashSystem took about seven hours. One of our partners was responsible for this, and we are very satisfied with his performance.
The ease of installation should be improved. We had issues with the configuration model.
In the next release, there should be flash and caching features. Customers also have problems accessing their files from the storage. That's what they usually complain about. This is something they should improve.
The main issue is the speed in terms of accessing the data. That is the customer's big complaint. They also complain about the speed of the hard drive.
They have good technical support. IBM has offices in Ukraine with knowledgeable engineers. They speak Ukrainian.
We have three administrators who take care of the different applications and data that are hosted on this storage. We don't perform maintenance on a daily basis. We may extract some stats for the performance and for evaluating capability. However, when it comes to maintenance, we probably work on it once or twice a month.
NetApp EF-Series All Flash Arrays: AI
Our company used this solution primarily for databases. The customer who currently uses it mainly uses it for the data store. He uses it as a single silo with the storage it offers, so he implements the project and uses what he needs. The solution is not flexible enough to change the workload afterward; it depends on what you designed beforehand, so you have to size it for your load, or for your use case, before you do the setup.
The initial setup was pretty straightforward.
The initial setup was straightforward. I read the documentation and it was simple for me. The deployment took around three days.
Here in Egypt, we do not have an official office or central point of support. This is our biggest complaint. We do not want to have remote support. Rather, we want an office here. It is very difficult to get an engineer here, on-site, from NetApp. This is true even pre-sales; we want to sit with the NetApp team, and not with partners. It's not that partners are bad, but it's better to meet with NetApp directly.
The main advantage of this solution is performance.
This solution does not have any compression or deduplication, but instead gains better performance through concurrency.
There is a lot of room for improvement. What I don't like is that they do not create barriers in the areas. The data management is based on the software and they do not use segmentation on the storage. That is the main problem - there is no segmentation. You cannot segment the data on the database. You put the data there but you don't know where the data goes on each disk. The information will be there but there is no segmentation. There needs to be improvement in data segmentation.
In future releases, I'd like to see federation and segmentation. Those are the big problems with NetApp at the moment. Compared to HP, Dell and HPE 3PAR, they cannot do the federation which is very important. We have to do remote replication and work with two or more storage sites in different locations. If I have a site and I have a second or third site - they require working federation and NetApp cannot do this right now.
The All Flash Array is stable and highly available.
Dell EMC Unity XT: AI
- One-to-many replication.
- Data deduplication.
- Asynchronous Fibre Channel replication. It is asynchronous on iSCSI and I would like to have that on the Fibre Channel.
- Unisphere-wise, I have to log in to each Unity as a unique environment. In VNX, I logged in to the domain and I was logged in to every VNX. So that's missing.
- I miss storage groups. Now, if I have to add a LUN to a cluster, multiple host, I have to know which host is in that cluster. I have to write it down and that makes it hard. In VNX and earlier, I could simply put a LUN on a storage group and every host in the group had the LUN. This lack bothers me a lot because it takes a lot of time and mistakes are made. Sometimes, a Hyper-V host gets a VMware LUN and vice-versa. Not good.
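The storage-group behavior missed above can be captured in a few lines. This is an illustrative model of the concept, not Dell EMC code; the class and method names are invented for the example.

```python
# Toy model of a VNX-style storage group: every host in the group
# automatically sees every LUN placed in the group.
class StorageGroup:
    def __init__(self, name):
        self.name = name
        self.hosts = set()
        self.luns = set()

    def add_host(self, host):
        self.hosts.add(host)

    def add_lun(self, lun):
        # One operation presents the LUN to the whole cluster, so a Hyper-V
        # host can't accidentally be handed a VMware LUN (or vice versa).
        self.luns.add(lun)

    def visible_luns(self, host):
        return self.luns if host in self.hosts else set()
```

Adding a LUN once presents it to every host in the group, which is exactly the mistake-proofing the reviewer describes losing.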
Stability has been 100%. We have had zero failures.
Unity is a lot like "no one gets fired for buying IBM." I think you will get what you pay for, but a lot of competitors have better efficiencies, better programs, easier installations. I'd be looking elsewhere. I don't feel the product is the leader in the market anymore.
I rate the Unity at eight out of ten. It gets the job done, it does it well, I can rely on it. It's just not cutting-edge in any way right now. To get to a ten, as I said, the upgrade process needs improvement. I should be able to swap it out, with zero downtime, with another array, down the road. I don't think Dell EMC has anything in the roadmap for this product line. I just don't want to have to deal with that anymore, and all of our customers feel pretty much the same.
It's a great product if you want something that just works, and works fairly well. It's a product that's tested, tried, and true, where a multitude of customers have depended on the product for the overall requirements of their companies' data. Typically, a company's data is the lifeline of the company. So, if you want something that's tried, tested, and true, that is relatively feature-rich, and that just works, go for it, right. It's a fantastic product.
The initial setup is straightforward and easy.
Ease of use would really be the best feature. We were easily able to get the correct performance details from it. And the configuration was great, it was relatively easy as well; that was brilliant.
In terms of managing it, the performance metrics that it gives, generic stuff, it does everything that we need it to do. We didn't have to create any custom reporting. It all went well.
We've had almost no stability issues.
We had an issue once and it turned out to be a bug. There was a memory leak and we had an issue in our DR site where one controller would reboot and then come back up and then, later on, the other controller would reboot and come back up. Then it happened once on our production site where both controllers went down at the same time. We worked with Dell Support and they found a memory leak and they recommended we upgrade to the latest code version.
They have a script you run, a utility to gather the logs, etc., and then they analyze. The hardest problem was that, because they're analyzing logs, they have a certain SLA in which to do that. Even though we had a production issue and we wanted it resolved right away, it took them a few days to analyze the logs and get back to us.
Obviously its speed is the main reason why we got it but we've really loved being able to use the interface. We've been able to create LUNs easily. We're able to get in there and create what we need, do everything we need to do, configure it the way it needed to be configured.
In terms of simplicity of ownership, it was almost plug-and-go. We did have some help getting it set up but, as for licensing and being able to get support through Dell EMC's site, it has been really easy. The interface makes it really easy to manage.
The primary use case is for our reporting environment, business intelligence and analytics. We run our Oracle and SAS-based applications on it right now. The performance is sufficient and we don't have any complaints about it.
- Easy to scale
- Easy to maintain
In most situations, tech support works really well. If there are technical logs that they can diagnose and actually pull something out of, fantastic. If there aren't, if it's an abstract sort of issue, like the fan issues we're having, where they cycle every six minutes, it's taken me about six weeks.
They didn't believe me that the environment was not too hot. So they sent a technician out just to make sure that I could read the thermostat, that it was 68 degrees in our office. Then, they sent someone out to reseat each component, which I had already done. I didn't appreciate that part because I had done those basics; I did exactly what they had said on the phone. The third time, they actually replaced some components, and the fourth time they just sent the components to be replaced. The fan issue did appear to go away, but it came back a couple of weeks later, after an update. I'm not sure if it's update-related, but it came back.
The biggest benefit is where it fits in the cost profile. It's for VMs that, again, aren't mission-critical but do need some performance. It fits really well there for that. We get exactly what we want from it, what we expected.
I would definitely recommend Unity because, compared to VNX and other storage solutions, it is the easiest way to deploy for VMware and physical operating system services.
Regarding ownership, it is very easy. It's a single point of contact. We have the type of support from Dell EMC where, in case of any failure, we get an immediate response from them. For the purchasing process, we just validate the bill of materials and then we reach out to the Dell EMC salesperson to get it delivered to our data center.
We are working on the vSphere integration. Once that integration is done we will easily be able to do everything on the vSphere console.
We had a VNX before and the one that we were using was starting to be phased out. We needed to keep on support and we need to stay with a solution, for our clients, that is newer and cutting edge. We were aimed towards Unity.
When selecting a vendor, the most important criterion is interoperability. It has to be able to integrate really well.
Our old arrays, the VNXs - we had a 5400 and a 5700 - were reaching the end of their days, and we wanted to go to the next step up, but not quite to the XtremIO level. Unity was the obvious choice.
When selecting a vendor, support has to be rock solid. And then, ease of use: Do they have all the features we need? Are there any outstanding issues that are going to clash with our onsite stuff (which usually ends up being with AIX)? As far as Dell EMC goes, we've been pretty good with them for a while.
On the data domains - for the Unity product, but specifically for data domains - I would like a much easier interface for managing, for actually going in and having one place where I could get all of the different parts of the overall unit. And I would also like to be able to identify individual disks a lot more easily.
I don't have to spend nearly as much time getting in to manage the device on a daily basis because it functions very smoothly. We don't have any issues with it. Usually, on a daily basis, we don't mess with it. It's been hands-off since we got it set up and configured. It's been great.
Our primary usage is for our users on our civilian side. We deal with both military and civilian, but it's mainly for our civilian users. We recently started using it, six months ago. Our customers like it a lot. It's an improvement from what we were using. We use it for our Outlook and Exchange but we haven't implemented with our VMware yet.
We just recently started using the Dynamic Pools, so while it's scalable, we actually find it valuable that we can just pop in one or two drives when we need to, instead of having to add a whole RAID set. That has actually been very handy for us. A lot of the time, as a government organization, we don't always get all the money we ask for. Sometimes, the money that gets slated to us gets pulled out, last-minute, so we're trying to buy drives and hoard them. We always put drives in last-minute, and that's been extremely helpful.
I know that's not exactly the question in terms of scalability, but that has been more helpful to us than being able to add a zillion disks at a time. Being able to add onesies, twosies to a pool is really helpful.
We just started doing a bunch of automation where, if an end-user's home directory or Departmental share gets filled, I can set certain things through a Unity API so that if it reaches 95 or 98 percent full, it will automatically expand. Now, instead of our getting a ticket and having to go in and do it manually, it does that for us.
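The auto-expand automation just described might look roughly like the sketch below. The threshold arithmetic is the substance of the example; the `session.modify_filesystem` call and the `sizeTotal`/`sizeUsed` field names are stand-ins for whatever the Unity REST API actually exposes, not documented endpoints.

```python
# Hedged sketch of the threshold-based auto-expand logic described above.
def plan_expansion(size_gb, used_gb, threshold=0.95, grow_by=0.20):
    """Return a new size in GB if usage is at/over the threshold, else None."""
    if size_gb <= 0:
        raise ValueError("size must be positive")
    if used_gb / size_gb >= threshold:
        return round(size_gb * (1 + grow_by), 2)
    return None

def expand_if_needed(share, session):
    """Check one share and expand it if needed (REST call left as a stub)."""
    new_size = plan_expansion(share["sizeTotal"], share["sizeUsed"])
    if new_size is not None:
        # Hypothetical call: a real script would POST a "modify" action for
        # the file-system instance against the array's REST API.
        session.modify_filesystem(share["id"], new_size)
    return new_size
```

A scheduled loop over all home directories and departmental shares, calling `expand_if_needed`, replaces the manual, ticket-driven expansion.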
Our end-users are happy with the product, there are no issues.
We had a couple issues, but they were very minor, related to storage Snapshots and our backup product, which is Veeam. That turned out to be a Veeam issue.
My only complaint would be that some of the CLI Help files could be a little more detailed, but that's a very minor complaint. We were trying to run some commands just to see how the storage snaps were interacting with the storage array, and it was a little difficult to look up exactly which commands should be run. The Help files detailing what the commands did were limited in scope and not as detailed as we would have wanted.
More integration with VMware would always be helpful, plugins that go directly into the vSphere management. A single pane of glass is always beneficial.
The pricing is competitive. We miss some of the feature functionality that we had with the XtremIOs, but it's certainly suitable for the purpose.
It's fairly scalable. We went through a scale-up and it performed as expected.
Excellent stability. A lot of ours are older, even past what would be considered end-of-life, but they still have a very low failure rate.
We're probably going to be looking into vSAN just to minimize the footprint. We've already minimized the footprint going from VNX to the Unity, but as we're virtualizing more and more, once we're completely virtualized, we'd probably be looking into vSAN through either VxRail or VxRack, and go that way. The smaller the footprint at the data center, the less cost there is.
We have had issues with the capacity, and some misunderstandings about how much compression we should be able to see out-of-the-box. When we originally bought the box, it was before the merger. The salesman promised us at least 50 percent compression, so we ordered it with 2TB of storage. That was a mistake, because now we are locked into smaller drives. When it comes down to it, we are running out of space.
We realized that we were barely getting a 12 percent compression offset, not the 50 percent, and this came around the time of the merger. A lot of people at the company did not return emails then; I guess they were no longer with the company or knew they wouldn't be, but that's just speculation. It took us several months and almost ruined our reputation during that period. They did make it right in the end and sent us several drives to double the storage on our devices for free, but it took a while.
The iSCSI and the VMware integration using vSphere could be less confusing.
It's very stable. We have not had any issues with it since we put it in. We've had one drive fail in two years. It was easily replaced, a hot swap and done. It has been incredibly easy and been stable for two years.
We would like an AI feature that would protect the backup and minimize the consumed space so that we can maintain the quality of the backup. This would help us minimize our IT cost in terms of backup procedures.
In addition, we would like to see the solution integrate easier with any cloud provider. There is a rising demand for moving to the hybrid cloud environment and Dell EMC needs to integrate to these needs.
Moving away from Java-based Unisphere to the HTML5 version of Unity is a huge improvement for our day-to-day management. We are still in the process of getting things in place, but at this stage, I can say the configuration is pretty straightforward and doesn't require additional training to learn the product.
Moving also from a hybrid to an all-flash array helped us to minimize footprints in our data centers. It's like two racks of VNX 8000 down to a quarter rack of Unity 650F.
The initial setup was straightforward for the most part. The synchronous replication between the two sites made that piece of the setup challenging; they did not know how to set it up initially. We ended up having to do bidirectional synchronous replication.
It is pretty stable. I like the stability, because everything works like it should. We made it all redundant. So, we don't have anything to worry about.
We are so virtual that we have two of us managing the whole infrastructure. Everything is taken care of and highly available. Nothing is vulnerable at all. Everything is good. There have been no issues at all, so far.
I don't hear from any of my tech team. We put it in, and it has been stable. We have been through three patch cycles. Junior resources are taking care of it with no issues. Once we show them how it works, very little training is needed to get them up to speed.
The solution helps us be more competitive in the market against our competitors.
We have multiple systems, a heterogeneous environment with Unix and Windows. It's not easy to share multiple files through different platforms. Unity solves this issue.
Also, replication gives us high-availability, and thus quick recovery, and snapshots give us faster recovery within the box, in case there are problems within the box itself.
The initial setup and migration were straightforward.
It's our primary storage. It is just for VMware with a lot of failover clusters.
For our mission-critical applications, we run SQL, Oracle, failover server clusters, VMware, and databases. We use it for our primary VMware environments, with a VPLEX, just for failover and performance. We use it for Windows clusters because you need shared storage. In addition, we use it for healthcare systems.
We only use it for block storage. We don't use any other features. We have a VPLEX for applications.
I have been with the company for a little over a year maintaining the product.
It gives me flexibility with its ability to replicate to itself and the ability to use the Dell EMC Cloud as an option. That's always sitting there and waiting if we need it.
I like the fact that it comes with a cloud option out-of-the-box. Just purchasing it gave us an unlimited amount of storage. It allows us to dip our toes in without a major commitment. With AWS or Azure, you're locked in and you're using up the contract and you're always worried that you'll spend a lot more. The use case for us would be disaster recovery or cold storage.
We use our VMware Site Recovery Manager and we use the device to replicate all of those hot VMs over to our DR site. We've actually tested it and it takes 19 seconds for us to get a virtual machine up and running, in the event of a disaster, because of the replication between the two systems.
Right now, Unity is a backup target.
The IT challenge we resolved with this solution was having a backup target. With Unity we've got DDVE, or Data Domain Virtual Edition loaded. It was an array that was not being used for anything in particular and we had a need for the data domain capacity, so we're using it as a backup target under DDVE.
As the solution continues to grow and gain more traction, things will come up that will just continue to deepen the integration between VMware, vCenter, and all those other components. Anything adding visibility there, and additional tools, is always great.
We have deployed it at remote locations; in a converged platform it really helps. We don't have to have two different storage systems, which helps to minimize the footprint.
It is a platform that we have standardized on for remote sites which enables us to have engineers and admins who are trained on and knowledgeable about the platform across the board. That enables them to support those sites, which is super-beneficial for us because we can do more with less.
The ability to mix and match SSD flash with spinning disk in there really allows us to meet our performance requirements.
It has met our overall performance expectations. The solution runs as we need it to, without any issues. It hasn't failed.
When it comes to provisioning and management, the solution has reduced complexity because we combined several systems down into one. We're utilizing that technology to see what we have available for file, instead of multiple technologies, and trying to converge all of that together to understand what our capacity management needs are.
Also, Unitys are more easily administered, so we need fewer people to do the administration. We have less overhead because of that.
Speed and ease of use of the interface are its most valuable features.
The Unity interface is much more advanced than some of the older ones that we had, or that I've experienced. It has made deployment, configuration, and maintenance a lot simpler.
For private cloud, it works very well.
I don't have any complaints from the customers or end users, who are using this solution. It's up and running with no worries.
Engineered from the ground up to meet market demands for all-flash performance, efficiency, and lifecycle simplicity, the Dell EMC Unity XT All-Flash Storage Arrays are NVMe-ready, implement a dual active architecture, contain dual-socket Intel processors with up to 16-cores, and have more system memory.
All of these modern features enable Dell EMC Unity XT to deliver 2X performance and 75% less latency compared to previous generations. They are:
- Designed for Performance
- Optimized for Efficiency
- Built for Multi-cloud
The Unity Arrays are easy to deploy and maintain. The All-Flash models are intuitive and easy to work with, in addition to providing high IOPS with low latency to support Business Critical applications. Because of the newer features and performance, it's easy to maintain and support remotely.
Huawei OceanStor: AI
The initial setup was straightforward. The time it takes to deploy depends on the configuration. It can take a few hours if you are including mounting the storage and bigger deployments with the many controllers can take longer. It can take up to three days, depending on architectural complexities.
I like the solution's speed and the use of SSD technology. If I have to compare it to what we had before, I would say that it's easier to maintain and to support (on top of performance). It offers great capacity.
It serves not only as SAN but also as NAS. The connectivity has a lot of options and prepares us for the use of 10Gbit Ethernet.
The configuration (web interface) is very simple and the dashboard provides all the basic information (health, etc.) at a glance.
The most valuable feature is the availability.
At first, we had some component failures - none that were critical because our system has built-in redundancy. We had a number of these failures at a component level, but that was quickly resolved through a firmware upgrade. We had some doubts we were going in the right direction by using the OceanStor solution, but all the problems got resolved through the firmware upgrade. I don't have much to say. Maybe our period of using it is too limited for me to say much about it.
The initial setup isn't complex. It's quite easy and straightforward.
Initial setup is very straightforward, very easy. We didn't have any challenges.
There are some small things in the solution that can be improved. Supporting software is one of them and the integration with mainstream solution technologies could be better. They are small issues and generally the technology functions well. It's not an issue caused by the vendor but rather due to external circumstances and the cessation of cooperation between Chinese and US companies.
Huawei OceanStor Dorado: AI
We used an integrator for the installation and the initial setup was really straightforward.
The initial setup is straightforward. Deployment takes about a day.
We cannot complain because everything is perfect. The service is perfect. Reliability is perfect. We are very impressed with the solution we implemented.
The logistics can be improved because sometimes we have to wait a long time for the product to be delivered, despite there being stores available in Europe. Some of our customers are discouraged due to this long wait time.
The marketing for this product needs to be improved because it does not have enough exposure.
This solution does not support VMware VVols 2.0. However, I do not feel that this is necessary.
My company has an in-house team that handles the implementation and services for our clients. If the client purchases a maintenance contract then we take care of that, as well.
Dell EMC SC Series: AI
What I really like, from the model line starting with the 3000 all the way up, is the flexibility. You can have spinning disk, you can have flash, you can have a combination.
Another valuable feature is the performance of the auto-tiering. It will move hot data up to your fastest Tier 1 or move your slow data down. Data progression is what it's called. With the auto-tiering you can have multiple tiers, you can have your Tier 1 be either spinning or flash, all the way down to 7.2K. It will change the RAID on the fly so your writes come in at RAID 10. After they sit for a while, they get converted to RAID 5, then they'll cool off and move down the tiers. Your performance is kept going, while the cold data is moved to your slow, non-performance tiers.
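The Data Progression behavior described above - writes landing at RAID 10 on the fastest tier, then cooling off, converting to RAID 5, and sinking down the tiers - can be sketched as a toy model. The tier names, demotion threshold, and cycle mechanics below are illustrative assumptions for the sketch, not Dell EMC's actual parameters:

```python
# Toy model of auto-tiering data progression: new writes land as
# RAID 10 on the fastest tier; blocks that stay cold are converted
# to RAID 5 and demoted one tier per cycle. Thresholds are made up.
TIERS = ["tier1_flash", "tier2_10k", "tier3_7.2k"]
COLD_AFTER = 2  # demote after this many cycles without an access

class Block:
    def __init__(self, name):
        self.name = name
        self.tier = 0          # index into TIERS (0 = fastest)
        self.raid = "RAID 10"  # writes always arrive as RAID 10
        self.idle = 0          # cycles since last access

    def touch(self):
        """An access resets the idle counter, so hot data stays put."""
        self.idle = 0

    def progress(self):
        """One progression cycle: cold blocks become RAID 5 and sink."""
        self.idle += 1
        if self.idle >= COLD_AFTER:
            self.raid = "RAID 5"
            self.tier = min(self.tier + 1, len(TIERS) - 1)

b = Block("vm_disk_page")
b.progress()  # still warm: stays RAID 10 on the flash tier
print(b.raid, TIERS[b.tier])
b.progress()  # now cold: converted to RAID 5 and demoted
print(b.raid, TIERS[b.tier])
```

The point of the sketch is only the reviewer's observation: the write path always gets mirror-level performance, while parity RAID and slower spindles are reserved for data that has stopped being accessed.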
With federation, you can have multiple systems across sites. You can treat them as one, and with a live migration, volumes don't go down. You can move them from site to site, doing maintenance, and keep your environment up.
They already integrate with Dell Storage Manager, so you can manage multiple systems, set up replication, and you've got monitoring for vSphere and Hyper-V.
I have no complaints, so far, about the stability.
The stability is fine until it comes to patching, and then we have issues. Whenever we have issues related to driver and firmware, it's a pain.
Overall, it has really helped us to virtualize a lot of workloads where server or application owners were very hesitant to move away from their physical boxes because they were used to having local disks and the performance that came with that. With the SC Series SAN, the performance that we've gotten out of the boxes alleviated anyone's concerns. We do not get complaints about the performance of our virtual infrastructure.
Also, with auto-tiering, it's easier to understand than most arrays, knowing that all of your writes go to the tier that you specify, with easy-to-create storage profiles.
One of the nice things about it is that there is no forklift upgrade required to upgrade the storage. That's why a lot of our customers like it.
The maintenance is usually pretty good. It's not like some of the others where they increase it in the fourth, fifth, or sixth years. That's another reason the customers like it.
We've been able to maintain a lot of customers over the last six or seven years. We actually started selling the SC Series - it was called the Compellent - before Dell purchased them.
We had picked it as a strategic storage for our company to sell. It has been good to us over the years. We continue to make a lot of sales.
We use it for VDI, mainly. In terms of performance, there were some difficulties to begin with, with a lot of different upgrades. It took a lot of time because we've got several of them. With all the upgrades done, it has run pretty smoothly.
Right now, we've just got one particular system on it, where we're just trying to test the waters to figure out if it's good because we use a combination of Dell EMC and Cisco equipment. So far, the Dell EMC seems to be doing pretty well. There are some applications that we've run where it appears that the Dell EMC would be a better solution.
Very stable. We rarely have any types of issues or failures.
The solution is affordable, but we are a large customer with Dell EMC and we have corporate level discounts. Having said that, everybody definitely, obviously, wants the cost to be lower. And now there is All-Flash and its price is going down day by day.
We no longer have an issue, especially in the support arena, of wasted hours: getting support, waiting for support to arrive onsite. It's been a huge time saver for us.
In terms of the initial setup, we did two. We did the software-only and we also did the hardware. Software-only was pretty straightforward. With hardware, the challenge we had was that we're pretty siloed in our environment so we had to get networking involved. It was the complexity of the network which was difficult, to make sure we had the performance in the environment.
It's a very stable product.
Honestly, most of the complaints I hear about this solution are either because systems have been misconfigured or sized improperly. Probably 90 percent of the issues people have are because it's not sized properly.
It's secure and fast. There's no downtime and the HA works great. Everything is easy and their support is great.
Mostly, during an upgrade process, there is no downtime at all. The way they do it is really great. Very easy, straightforward. There is a pre-check and then, when they finish, they do a post-check. It's great the way they do the upgrade, no downtime at all.
The most valuable feature is the ability to replicate. We are running a financial company and it needs to be available 24/7. We can't afford any downtime.
The response time is also great.
We had XtremIO for the past three or four years and, prior to that, we had NetApp. I think the SC Series works better. We are pretty happy with it. In terms of performance with mixed workloads, the SC Series is pretty good. We don't see a lot of latency as we saw with NetApp. But I would say XtremIO and SC are similar in that regard.
Most storage platforms are the same, but when it comes to the performance and dedupe, as I said, those were the main criteria, what we were after when we talked to Dell EMC. The relationship and trust are also very important.
We work with different solutions. The most important point for me is how disk virtualization is implemented. In this solution, disk management is really simple and disk utilization is efficient. All disks are added to a group and RAID is organized on 2 MB stripes. The stripes are organized into two RAID levels automatically: by default, 20% in RAID 10 to write blocks with very effective performance, and 80% in RAID 5 to store data with good use of space. When you add more disks to the folder, they are automatically integrated into the profile and add performance to production. There is also the possibility to implement two systems in a cluster, and they can be integrated in a federated mode to aggregate multiple systems into one global storage.
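Back-of-the-envelope math shows why that 20/80 split uses space well. The efficiency figures below are textbook values (mirroring yields 50% usable space; RAID 5 yields (n-1)/n for an n-disk group), and the 9-disk group size is an assumption for illustration, not a vendor-published number:

```python
def usable_capacity(raw_tb, raid10_share=0.20, raid5_group=9):
    """Estimate usable space for a pool split between RAID 10
    (mirroring: 50% efficiency) and RAID 5 (one parity disk per
    group: (n-1)/n efficiency). Shares and group size are
    illustrative defaults, not array-specific values."""
    raid10_raw = raw_tb * raid10_share
    raid5_raw = raw_tb * (1 - raid10_share)
    return raid10_raw * 0.5 + raid5_raw * (raid5_group - 1) / raid5_group

# 100 TB raw: 20 TB mirrored -> 10 TB usable,
# 80 TB in 9-disk RAID 5 groups -> ~71.1 TB usable
print(round(usable_capacity(100), 1))
```

So roughly 81% of raw capacity stays usable, while the small mirrored slice absorbs all incoming writes at full speed.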
The initial setup was rather straightforward. Deployment took about a month for the 5000, and the SC 3000 took about a week.
I think that Dell EMC is one of the best technical support services in Ukraine.
I am satisfied with the technical support.
Compellent's setup is straightforward.
The initial setup is straightforward. It is complex but straightforward. It takes approximately 45 minutes to configure.
The setup was straightforward. I implement it for my customers. It can be deployed within an hour.
You only need one person for the deployment. One person is more than enough. There's no dedicated team required.
We use it for various types of data, but mainly for virtual environments.
The solution is used as shared storage for the ESX cluster, VMware, or vCenter cluster. It hosts space for virtual servers. The primary reason we use the solution is to host the core infrastructure, the virtual servers including file servers, domain controllers, application servers, SQL servers, etc. Basically, the servers that run the business.
Lenovo ThinkSystem DM Series: AI
The main thing is to look at the needs of your customer! It is very, very important to know where their pain points are. If they need a completely ready storage solution, without any thinking about licenses or anything like that, maybe Nimble will fit best. If they need more flexibility in scalability and the superior feature set of an already-proven storage solution, the Lenovo DM Series may be a better match.
Scalability is available and could be done, but we bought everything that we needed at the beginning. The system covers our needs so we don't have any reason to improve, update, or scale.
In this solution, I like the option of clustering two storage systems together so that there can be HA availability. You can get two DM Series units, which are the 5000 Series, and you can cluster them into a chained solution. This is a very important feature. If you want to build this kind of solution with brands such as EMC, you would have to pay a lot of money because it requires buying equipment like a VPLEX, which is very expensive. With the other device, you can launch it as soon as it appears. But with this DM Series, you only need two of these units. It's a very good feature.
It's difficult to calculate pricing on the solution. Lenovo does help us, as a partner, however, there are different types of storage at different price points, and also certain items that are built into the cost already.
Normally, they break down and show you how they get to the final price. Typically, a client will tell you what they need and you run some calculations based on workloads. Once you have a plan that aligns with the customer's needs, you go to Lenovo for pricing, and you need to negotiate with them to try to lessen costs.
Once you have decided on the costs, there are no extras beyond that which a customer would have to pay for. It's all one set negotiated cost.
Lenovo ThinkSystem DE Series: AI
Usually, when a part fails, we have to replace it and it takes a while. It would be better if parts were available locally here in Kenya so that we don't have to wait so long to repair the solution.
If you compare this solution to IBM, IBM has a feature that sends alerts so that you can place a part order immediately. Lenovo should offer this. It would shorten the part-replacement process.
Pure Storage FlashBlade: AI
This solution has improved how our organization functions in the way that it has essentially made us not have to think about storage. That's probably the biggest selling point. Storage is boring and you don't want to think about it. From that point of view, it's provided a lot of benefits. It ticks the big boxes. It's fast, it's scalable, it's essentially zero-touch maintenance. All of those get us eighty, ninety percent of the way.
The biggest feedback is that none of the teams have noticed. I'm not getting negative feedback which is a big improvement from where we were.
The most valuable feature is its ease of use. We went from six different storage vendors down to one. The training for the operations staff and others that had to deal with this on a provisioning standpoint was much easier to do. The process was seamless and very easy to learn.
It has absolutely simplified our storage because the dashboards on the consoles give a clear understanding of where you are, and it is also very easy to provision. This has been a big help for our teams.
I am very happy with the latency, availability, and reduction rates.
The ease of deployment and management has helped us simplify our storage. We also do not have to worry about capacity management as much. A lot of these things are native to Pure Storage.
Our availability is at nine nines.
I'm not directly involved with the technical support but I haven't heard any complaints or any issues about it. The vendor helped us to install and deploy the solution.
I like the scalability options. We recently expanded the storage and it's not straightforward but their team helped make it happen.
I would like to have Snapshots and SnapMirror in the next release. People who come from a NetApp background especially expect these features.
I would rate the technical support as a five (out of ten). They need to improve. When we open a case, it is auto assigned to a support tech person. Nine out of ten times, we get an email right back saying that person is off until tomorrow. I cannot handle that. They just did this over the weekend to us, too. I had to call our rep and have them do something about it.
The initial setup was straightforward.
We will be upgrading our controllers soon.
The initial setup of this solution was straightforward. We didn't run into any major issues.
This solution is deployed in our on-premises lab.
We are running VMware on Pure for improved performance. We have seen an increase in performance. We are using the VMware integrations developed by Pure to some extent, but I do not have specific details.
For our customers, they're mainly using it for a backup repository and for NFS data storage.
Our use cases vary, but usually we use the solution for cloud-based solutions. We use it for containers, which provide secure on-premises test environments for our customers. Most of our customers are medium to large enterprises based in Bulgaria, where we are located. We are resellers, distributors, and system integrators. We have a partnership for FlashBlade.