The reasons we switched were performance and the number of IOPS in the previous product. It was an older product which was dog-slow. Some of the larger file servers were the worst, and that dragged down everything else that was sharing the storage with it.
All-Flash Storage Arrays Performance Reviews
Showing reviews of the top-ranking products in All-Flash Storage Arrays, containing the term Performance
NetApp AFF (All Flash FAS): Performance
Response times on some of our volumes were 30 to 40 milliseconds. When we moved to all-flash, our response times were reduced to microseconds. There was a tremendous improvement. In terms of dedupe and compression, it is squeezing the physical size down to where we are now seeing an 80 percent reduction, which is very positive.
The solution has affected IT’s ability to positively support new business initiatives.
It has improved performance for our enterprise applications, data analytics, and VMs. These improvements are a result of all-flash, throughput, reliability, compression, etc.
It gives us the power and agility to spin up VMs as quickly as possible.
We have also standardized on NetApp. All the storage that we have for our services runs on NetApp. Being standardized, it's easy for our Operations. We can train them on a single platform.
It helps improve performance for enterprise applications, data analytics, and VMs. With the power of flash, we moved from a traditional hybrid storage to all-flash. Having the full-fledged power of flash, and the controllers, it has doubled the performance compared to what we used to get.
Finally, our total cost of ownership has decreased by approximately 10 - 12 percent.
It has helped us improve the performance for our enterprise applications, data analytics and VMs across the board. We recently upgraded from a FAS3250 platform to the AFF A300 all-flash array. Batch times went from approximately seven hours down to about two and a half. Functionality during the day, such as taking or removing snapshots and cloning instances, is higher than it has ever been.
We are employing the native encryption on disk along with NVMe. Therefore, it is a more secure solution. Our user experience and performance have been remarkably better as well.
A lot of application administrators have a lot more time. We have been able to do some things that we were unable to do before, so it has helped streamline our business a lot.
Our primary use case is achieving higher performance across the board, which wasn't achievable with regular spinning drives. We wanted breakthrough performance with a flash-based solution using all SSD drives.
- Space savings
We were pushing what we had too far on performance. It wasn't so good, so that's when we looked at All Flash.
The solution has drastically and positively affected IT's ability to support new business initiatives. It's a very easily automated solution using REST APIs.
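The review above credits the REST APIs for making the array easy to automate. As a rough illustration, here is a minimal sketch of what such automation can look like, assuming ONTAP 9's `/api/storage/volumes` REST endpoint; the cluster, SVM, aggregate, and volume names are all hypothetical:

```python
import json

def volume_create_request(cluster, volume, svm, aggregate, size_gb):
    """Build the URL and JSON body for an ONTAP REST volume-create call.

    The request would then be sent with any HTTP client, e.g.
    requests.post(url, data=body, auth=(user, password)).
    """
    url = f"https://{cluster}/api/storage/volumes"
    body = {
        "name": volume,
        "svm": {"name": svm},
        "aggregates": [{"name": aggregate}],
        "size": size_gb * 1024 ** 3,   # ONTAP expects bytes
        "guarantee": {"type": "none"},  # thin provisioned
    }
    return url, json.dumps(body)

url, body = volume_create_request("cluster1.example.com", "app_vol01",
                                  "svm1", "aggr1", 100)
```

Wrapping calls like this in a provisioning script is what makes the "spin up in minutes" claims elsewhere in these reviews achievable.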
Combined with OnCommand, the solution helps improve the performance of our enterprise applications.
I don't know if it improved the way our organization functions, but I know we don't have any storage outages or slowdowns at this point. We just did a refresh about six months ago to the A700s and we have been very happy with the performance of those boxes.
Our latency is extremely low. We average below a millisecond.
We have a range of customers, from manufacturing to oil & gas, in Malaysia. We have been using NetApp for quite some time, but now performance is a big issue for our customers, along with other challenges for them, so they are opting to go to All Flash.
NetApp is doing a good job of delivering to and satisfying customers. All Flash cloud technology has helped them a lot.
We use it for high performance, block storage, and file storage.
The apps with the highest performance needs are usually deployed on AFF. We're using adaptive QoS to identify which applications require higher performance and moving those volumes over to the AFF.
The improvement for us has been space savings on the All Flash FAS platform. The data space savings are almost three times better than what we have right now, which is a two-to-one ratio.
Regarding the user experience, it's pretty fast. For applications that require high throughput, this platform is pretty solid. It also helps improve the performance of enterprise applications, data analytics, and VMs because it's pretty fast. We moved to a different tier of platform, from a hybrid setup to a completely SSD aggregate, so it tripled the performance for the customer.
AFF is our primary source for our data centers. We use it for our multi-tenancy data center. We like the crypto erase function available on the SSDs, and we needed the high performance and IOPS that you can get from SSDs.
We have a big problem in our organization where I can't get the application engineers to give me performance requirements. Now, with the SSDs, I don't need to worry about that anymore. All of our applications are treated as high-performance, and even our test applications perform at a higher level now.
It has improved performance of our enterprise applications, data analytics, and VMs because we have a higher IO from the disk now. We run a lot of write-intensive VMs. For sure the solution helps out.
Our total cost of ownership has decreased because of the nature of the SSDs: their mean time to failure is much higher. They don't fail as often, and that reduces costs. And because we upgraded to the All Flash and the bigger SSDs, we reduced our footprint. I increased my capacity 500 percent and reduced my footprint in the data center by 95 percent.
It made everything faster. The user performance went from about eight seconds, for certain screens, down to three seconds per screen. That was the primary reason. Our users can multitask faster. The way Epic works is that you have multiple screens up at the same time. When you have multiple screens up at the same time and you have a patient sitting in front of you, speed is quality. Where before, the patient would have to wait for answers, now they get them almost instantaneously. Our users can run multiple things at the same time. For the users, the nurses and doctors, it is faster. All around faster.
As for IT's ability to support new business initiatives as a result of using this product, we are upgrading to Epic 2018 next year. The older system couldn't have supported it. That is another reason we went to a faster system. Epic has very high standards to make sure that, if you buy the upgrade, you will be able to support the upgrade. They advised me, top to bottom, make sure you can do it. Our new system passed everything. It's way faster.
We have VMs and we were running VDI. We're running VMware Horizon View. We have about 900 VMs running on it and we have about another 400 Hyper-V servers running on it. Our footprint is very tiny now versus before. We now have some 30 servers running 1,000 machines where we used to have 1,000 machines running 1,000 machines. We have Exchange, SQL, and Oracle and huge databases running out of it with no problem at all, including Epic. It's full but it's very fast.
It takes us a minute or two minutes to set up and provision enterprise applications using the product. We can spin up a VM in about 30 seconds and have SQL up and running, for the DBAs to go in and do their work, in about two minutes.
The primary use case is availability, performance, bandwidth, and throughput with respect to our applications.
We are currently using an on-premise solution.
NetApp is introducing All Flash FAS with the all-flash array. Our customers like performance; they don't want to deal with latency. Using an all-flash array, our customers see a real performance impact.
We were running into a lot of storage roadblocks that were performance based. Also, the IBM product that we were using was at the end of life for 90 percent of our enterprise.
I spent 15 years with IBM. Anytime I go into a data center, and I see Big Blue, it is the first thing that I replace.
Whenever we face any performance issues, particularly with our most demanding storage sites, we recommend an all-flash service, because we rely on our primary solution at all times. If it seems like there are issues, we bring in different vendors as a buffer. We have adopted an all-flash primary solution for this use case.
Besides the speed, one of the most valuable features that the AFF gives me is the robust hardware. It's simplistic. It deploys very easily. It's already built from the factory to take advantage of the all-flash array.
I would describe the user experience of the solution as very simplistic. There's a very easy GUI to use, and then when you need to get very, very detailed, you have a robust command line that you could do anything you want with to enhance performance for your solutions. Really what we're using the AFF for is solely for speed. We really need the power of the backbone and the speed of the disks because we have to move so much data.
Setting up and provisioning enterprise applications takes minutes. It's just not difficult. We only have to use the GUI, create the spaces, and go. I've set up entire NetApp systems in a morning.
The most valuable feature for us is the speed of the read of the information. We can get the information as fast as possible.
The user experience we are getting from All Flash is excellent. The performance is great. The administration is exactly the same as all the other storage in NetApp which is great. It is very good, we are so pleased.
AFF improves how our organization functions because of its speed. Reduction in batch times means that we're able to get better information out of SAP and into BW faster. Those kinds of things are a bit hard to put my finger on. Generally, when we start shrinking the times we need to do things, and we're doing them on a regular basis, it has a flow on impact that the rest of the business can enjoy. We also have more capacity to call on for things like stock take.
AFF is supporting new business because we've got the capacity to do more. In the past, with a spinning disc and our older FAS units, we had plenty of disc capacity but not enough CPU horsepower and the controllers to drive it and it was beginning to really hurt. With the All Flash FAS, we could see that there are oodles of power, not only from disc utilization figures on the actual storage backend but also from the CPU consumption of the storage controllers. When somebody says "we want to do this" it's not a problem. The job gets done and we don't have to do a thing. It's all good.
All Flash FAS has improved performance for our enterprise applications, data analytics, and VMs which are enterprise applications. It powers the VM fleet as well. It does provide some of our BW capabilities but that's more of an SAP HANA thing now. Everything runs off it, all of our critical databases also consume storage off of the All Flash FAS for VMs.
For us TCO has definitely decreased, we pay less in data center fees. We also have the ability with the fabric pool to actually save on our storage costs.
NetApp AFF has improved our organization through the use of clusters. Previously, we had migrated from Dell EMC and had a lot of difficulties moving data around. Now, if we need to move data to any slower storage, we can move it with just a vol move within the cluster. Even moving data between clusters is extremely simple using SnapMirror. The mobility options for data in All Flash FAS have been awesome.
AFF has given us the ability to explore different technology initiatives because of the flexibility that it has, being able to fit it in like a puzzle piece to different products. For example, any other solutions that we've looked at, a lot of times those vendors have integration directly into NetApp, which we haven't found with other storage providers and so it's extremely helpful to have that tie-in.
This solution has also helped us to improve performance. We have hybrid arrays as well, so we can keep some things on slower storage. For the times that we need extremely fast storage, we can put it on AFF, and we can use V-vaults if we need to have different tiers and automatically put things where they need to be. It's really helped us nail down performance problems when we need to, by putting workloads in the places that fix them, simply by having the extreme performance available.
Total cost of ownership has definitely dropped because, with deduplication, compression, and compaction always on, we're able to fit a whole lot more into a smaller amount of space and still provide more performance than we had before. Our total cost per gigabyte ends up being less by going to All Flash.
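The TCO argument in this review reduces to simple arithmetic: always-on efficiency increases the logical capacity each physical gigabyte provides. A small sketch with hypothetical prices (the figures are illustrative, not from the review):

```python
def effective_cost_per_gb(raw_cost_per_gb, efficiency_ratio):
    """$/GB of logical data after dedupe/compression/compaction.

    efficiency_ratio is logical:physical, e.g. 3.0 for 3:1 savings.
    """
    return raw_cost_per_gb / efficiency_ratio

# Hypothetical numbers: flash that costs more per raw GB can still win
# once the higher efficiency ratio is applied.
flash = effective_cost_per_gb(0.90, 3.0)  # 0.30 per logical GB
disk = effective_cost_per_gb(0.40, 1.2)   # about 0.33 per logical GB
```

This is why reviewers here quote efficiency ratios (2:1, 3:1, even 20:1) alongside cost: the ratio is the lever that flips the per-gigabyte comparison.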
FAS2554; need to scale out with space and performance.
- Its incredible performance
- Proactiveness for possible errors
- Powerful tools for management.
We don't use NetApp AFF for machine learning or artificial intelligence applications.
With respect to latency, we basically don't have any. If it's there then nobody knows it and nobody can see it. I'm probably the only one that can recognize that it's there, and I barely catch it. This solution is all-flash, so the latency is almost nonexistent.
The DP protection level is great. You can have three disks failing and you would still get your data. I think it takes four to fail before you can't access data. The snapshot capability is there, which we use a lot, along with those other really wonderful tools that can be used. We depend very heavily on just the DP because it's so reliable. We have not had any data inaccessible because of any kind of drive failure, at all since we started. That was with our original FAS8040. This is a pretty robust and pretty reliable system, and we don't worry too much about the data that is on it. In fact, I don't worry about it at all because it just works.
Using this solution has helped us by making things go faster, but we have not really implemented some of the things that we want to do. For example, we're getting ready to use the VDI capability where we do virtualization of systems. We're still trying to get the infrastructure in place. We deal with different locations around the world and rather than shipping hard drives that are not installed into PCs, then re-installing them at the main site, we want to use VDI. With VDI, we turn on a dumb system that has no permanent storage. It goes in, they run the application and we can control it all from one location, there in our data center. So, that's what we're moving towards. The reason for the A300 is so that our latency is so low that we can do large-scale virtualization. We use VMware a tremendous amount.
NetApp helps us to unify data services across SAN and NAS environments, but I cannot give specifics because the details are confidential.
I have extensive experience with storage systems, and so far, NetApp AFF has not allowed me to leverage data in ways that I have not previously thought of.
Implementing NetApp has allowed us to add new applications without having to purchase additional storage. This is true, in particular, for one of our end customers who spent three years deciding on the necessity of purchasing an A300. Ultimately, the customer ran out of storage space and found that upgrading the existing FAS8040 would have cost three times more. Their current system has quadruple the space of the previous one.
With respect to moving large amounts of data, we are not allowed to move data outside of our data center. However, when we installed the new A300, the moving of data from our FAS8040 was seamless. We were able to move all of the data during the daytime and nobody knew that we were doing it. It ran in the background and nobody noticed.
We have not relocated resources that have been used for storage because I am the only full-time storage resource. I do have some people that are there to help back me up if I need some help or if I go on vacation, but I'm the only dedicated storage guy. Our systems architect, who handles the design for network, storage, and other systems, is also familiar with our storage. We also have a couple of recent hires who will be trained, but they will only be used if I need help or am not available.
Talking about application response time, I know that it has improved since we started using this solution, but I don't think that the users have actually noticed it. They know that it is a little bit snappier, but I don't think they understand how much faster it really is. I noticed because I can look at System Manager or Unified Manager to see the performance numbers. I can see where the numbers were higher before in places where there was a lot of disk I/O. We had a mix of SATA, SAS, and flash, but now we have one hundred percent flash, so the performance graph is barely moving along the bottom. The users have not really noticed yet because they're not really putting a load on it. At least not yet. Give them a chance though. Once they figure it out, they'll use it. I would say that in another year, they'll figure it out.
NetApp AFF has reduced our data center costs, considering the increase in the amount of data space. Had we moved to the same capacity with our older FAS8040 then it would have cost us four and a half million dollars, and we would not have even had new controller heads. With the new A300, it cost under two million, so it was very cost-effective. That, in itself, saved us money. Plus, the fact that it is all solid-state with no spinning disks means that the amount of electricity is going to be less. There may also be savings in terms of cooling in the data center.
As far as worrying about the amount of space, that was the whole reason for buying the A300. Our FAS8040 was a very good unit that did not have a single failure in three years, but when it ran out of space it was time to upgrade.
This product was brought in when I started with the company, so that's hard for me to answer how it has improved my organization. I would say that it's improved the performance of our virtual machines because we weren't using Flash before this. We were only using Flash Cache. Stepping from Flash Cache with SAS drives up to an all-flash system really had a notable difference.
Thin provisioning enables us to add new applications without having to purchase additional storage. Virtually anything that we need to get started with is going to be smaller at the beginning than what the sales guys that sell our services tell us. We're about to bring in five terabytes of data. Due to the nature of our business operations that could happen over a series of months or even a year. We get that data from our clients. Thin provisioning allows us to use only the storage we need when we need it.
The solution allows the movement of large amounts of data from one data center to another, without interrupting the business. We're only doing that right now for disaster recovery purposes. With that said, it would be much more difficult to move our data at a file-level than at the block level with SnapMirror. We needed a dedicated connection to the DR location regardless, but it's probably saved our IT operations some bandwidth there.
I'm inclined to say the solution reduced our data center costs, but I don't have good modeling on that. The solution was brought in right when I started, so in regards to any cost modeling, I wasn't part of that conversation.
The solution freed us from worrying about storage as a limiting factor. In our line of business, we deal with some highly duplicative data. It has to do with what our customers send us to store and process on their behalf. Redundant storage due to business workflows doesn't penalize us on the storage side once we get to block-level deduplication and compression. It can make a really big difference there. In some cases, some of the data we host for clients gets the same kind of compression you would see in a VDI-type environment. It's been really advantageous to us there.
NetApp helped us with its ease of deployment and ease of use.
The solution's data protection and data management are also easy.
AFF has improved our response time by about 30%.
We have enough storage, especially with the enhanced deduplication and compaction. It is good to be able to have a multitude of environments without having to worry about space when deploying. We always have a good amount of space. We also have multiple performance tiers, with different layers for slower and quicker storage.
Coming from a financial background, we are very dependent on performance. Using an all-flash solution, we have a performance guarantee that our applications are going to run fine, no matter how many IOPS we do.
We use NetApp for both SAN and NAS, and this solution has simplified our operations. Specifically, we use it for SAN on VMware, and all of our NFS storage is on NAS. They are unified in that it is the same physical box for both.
This solution has not helped us to leverage data in new ways.
Thin provisioning has allowed us to add new applications without having to purchase additional storage. This is one of the reasons that we purchased NetApp AFF. We almost always run it at seventy percent utilized, and we only purchase new physical storage when we reach the eighty or eighty-five percent mark.
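The 70/85-percent rule this reviewer describes is easy to encode as a monitoring check. A minimal sketch (the thresholds come from the review; the function and example figures are hypothetical):

```python
def pool_status(provisioned_tb, used_tb, physical_tb, buy_threshold=0.85):
    """Summarize a thin-provisioned pool.

    overcommit > 1.0 means more logical capacity has been promised
    than physically exists; buy_more fires at the purchase threshold.
    """
    physical_used = used_tb / physical_tb
    return {
        "overcommit": provisioned_tb / physical_tb,
        "physical_used": physical_used,
        "buy_more": physical_used >= buy_threshold,
    }

# At the usual 70 percent utilization nothing fires; at 85 percent it does.
steady = pool_status(provisioned_tb=200, used_tb=70, physical_tb=100)
full = pool_status(provisioned_tb=200, used_tb=85, physical_tb=100)
```

The point of thin provisioning is the overcommit figure: applications see the provisioned capacity immediately, while physical purchases are deferred until real usage approaches the threshold.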
I find that we do have better application response time, although it is not something that I can benchmark.
As a storage team, we are not worried about storage as a limiting factor. When other teams point out that storage might be an issue, we tell them that we've got the right tools to say that it is not.
We have been happy with the performance and it has not given us any issues.
I like the simplicity of data protection and data management. We use snapshots for our FAS recovery, and we use SnapVault for our backups.
NetApp definitely simplifies our IT operations by unifying services. We only use this solution on-premises, but with NAS, we don't need Microsoft Windows to create a share. It's all on our NetApp platform. I like it because we do not have to switch.
I wouldn't say that we have reallocated resources that were previously dedicated to storage operations, although it does give us time to do other things.
We have used NetApp to move large amounts of data between data centers. It has made it easier for us, and RPOs are shorter because of it.
With respect to the response time for applications, I can definitely say that it has improved, although we have not done any benchmarking. I perceive the improvement through monitoring the applications.
This solution is pretty expensive, so I'm not sure whether it has reduced our data center costs.
NetApp has helped eliminate storage as a limiting factor in our business. My customers are happier because they have no issues with performance or accessing their data.
Our primary use case for NetApp AFF is performance-based applications. Whenever our customers complain about performance, we move their data to an all-flash system to improve it.
We have our own data center and don't share our network with others.
The performance of NetApp AFF allows our developers and researchers to run their models and tests within a single workday instead of spreading them across multiple workdays.
For our machine learning applications, the latency is less than one millisecond.
The simplicity of data protection and data management is standard with the rest of NetApp's portfolio. We leverage SnapMirror and SnapVault.
In my environment, currently, we only use NAS. I can't talk about simplifying across NAS and SAN, but I can say that it provides simplification across multiple locations, multiple clusters, and data centers.
We have used NetApp to move large amounts of data between data centers, but we do not currently use the cloud.
Our users have told me that the application response time is faster.
The price of the A800 is very expensive, so our data center costs have not been reduced.
We are using ONTAP in combination with StorageGRID for a full data fabric. It provides us with a cold-hot tiering solution that we haven't experienced before.
Thin provisioning has allowed us to over-provision existing storage, especially NVMe SSD, the more expensive disk tier. Along with data efficiencies such as compaction, deduplication, and compression, it allows us to put more data on a single disk.
Adding StorageGRID has reduced our TCO and allows us to better leverage the faster NVMe SSDs, hot tiering to those and cold tiering to StorageGRID.
Our previous NetApp system was a SAS and SATA spinning disk solution that was reaching end-of-life, and we were overrunning it. We were ready for an upgrade and we stuck with NetApp because of the ease of cross-upgrading, as well as the performance.
There are little things that need improvement. For example, if you are setting up a SnapMirror through the GUI, you are forced to change the destination name of the volume, and we like to keep the volume names the same.
When you have SVM DR and multiple aggregates that you're writing data to on the source array, SVM DR will put the data on whatever aggregates it wants on the destination, instead of keeping the layout synced on both sides.
This solution doesn't help leverage the data in ways that I didn't think were possible before.
We are not using it any differently than we were using it from many years ago. We were getting the benefits. What we are seeing right now is the speed, lower latency, and performance, all of the great things that we haven't had in years.
This solution hasn't freed us from worrying about usage; we are already reaching the eighty percent mark, so we are worried about usage, which is why we are looking toward the cloud, moving to FabricPool with cloud volumes to tier off our snapshots into the cloud.
I wish the forced change of the volume name would be fixed or removed; then I wouldn't have to go to the command line to avoid it at all.
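For readers hitting the same constraint: the command-line workaround the reviewer alludes to is that, from the ONTAP CLI, the destination volume is whatever you create, so the names can match. A hedged sketch (the SVM, volume, and aggregate names are hypothetical; flags follow the ONTAP 9 CLI):

```
# Create the DP destination volume with the SAME name as the source,
# then wire up and initialize the mirror.
vol create -vserver svm_dr -volume app_vol01 -aggregate aggr1 -size 100g -type DP
snapmirror create -source-path svm_prod:app_vol01 -destination-path svm_dr:app_vol01 -policy MirrorAllSnapshots
snapmirror initialize -destination-path svm_dr:app_vol01
```

Keeping source and destination names identical simplifies failover runbooks, which is presumably why the reviewer wants the GUI to stop forcing a rename.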
The most valuable feature is that it's fast. We do not use the solution for artificial intelligence or machine learning applications, but our overall latency is low. With our SQL Server and Oracle servers, compared to the older filers, like 7-Mode or the 8000s in cluster mode, or performance on Pure flash systems, you can't compare. We are seeing sub-millisecond latency, which is pretty nice.
The solution has enabled us to move large amounts of data from one data center to another (on-premise) without interruption to the business using SnapMirror.
The solution has improved application response time. Compared to the 3250s and 8000s, it has been night and day.
The monitoring and performance reporting need improvement. Right now we are using Active IQ OnCommand Unified Manager, but we also have to use Grafana for performance graphs. I hope we will see improvement in Active IQ's performance graphing, and it should also be more detailed.
In the next release, I'm looking for FlexGroup support because that is the next level of volumes, an extended volume beyond FlexVol. In the FlexVol environment, we run into the capacity limitation of a hundred terabytes, and in oil and gas, like us, when the seismic data is too big, sometimes a hundred terabytes is not big enough. We have to go to the next level, which is FlexGroup, and I hope it has features like being able to transfer a volume to a FlexGroup. I think they said they will add a few more features to FlexGroup. I would also like to see the non-disruptive conversion from FlexVol to FlexGroup made easier, so we don't have to have any downtime.
The price to performance ratio with NetApp is unmatched by any other vendor right now.
ONTAP has improved my organization because we now have better performance. We can scale up and we can create servers a lot faster now. With the storage that we had, it used to take a lot longer, but now we can provide the business what they need a lot faster.
It simplifies IT operations by unifying data services across SAN and NAS environments. We use our own type of SAN and NAS for CIFS and also for virtual servers. It's pretty basic. I didn't realize how simple it was to create storage and manage storage until I started using NetApp ONTAP. We use it daily.
Response time has improved. Reading from storage and getting data to the end users is a hundred times faster than it used to be. When we migrated from 7-Mode to cluster mode and went to an all-flash system, the speed and performance were amazing. The business commented on that, which was good for us.
Datacenter costs have definitely been reduced with the compression that we get with all-flash. We're getting 20 to one so it's definitely a huge saving.
It has enabled us to stop worrying about storage as a limiting factor. We can thin provision data now and we can over-provision compared to the actual physical hardware that we have. We have a lot of flexibility compared to what we had before.
The most valuable features of the solution are speed, performance, and reliability.
The solution has improved application response time. We are using the All Flash FAS boxes of the AFF line, and our primary use case is around file shares. These aren't really that performance-intensive. Therefore, overall, response times have improved, but it's not necessarily something that can be seen.
From a sheer footprint savings, we're in the process of moving one of our large Oracle environments which currently sits on a VMAX array, taking up about an entire rack, to an AFF A800 that is 4U. From just the sheer power of cooling and rack-space savings, there have been savings.
I haven't seen ROI on it yet, but we're working on it.
The primary use case is for customers who need absolute low latency and have low latency in their workloads. They need maximum performance in their virtualization and file storage environments.
We have been using the FAS series product, and AFF is pretty similar to the FAS products, as it still runs the ONTAP operating system. They are using AFF because it comes with all-flash disks, which gives us better performance with a smaller footprint. We use it mainly to store our block and NAS data.
We primarily utilize AFFs for engineering VDIs. We are utilizing it to host VDI and performance is the primary expectation from AFFs. We are satisfied with the product.
We like AFF because it has a very high reliability rate with very high performance. We are using it for top tier performance on application and virtual machine storage, as well as just being able to separate out SVMs for different security and network needs for all of our different customers across the state.
We use the Snapshot feature to simplify backups for data protection. We set different policies that let our agencies choose what backup policy they want for their Snapshots. It's very simple. Users can be given the opportunity to look at previous versions directly from the Windows interface, or they can call/put in a ticket seeking support from our IT group if they need a larger system restore, because their data is protected with NetApp and replicated as well.
With AFF, the benefit is that we have 27 data centers across the country, and we are able to standardize across all of them and do storage replication. The simplicity of offloading cold data to StorageGRID with the tiering layers that NetApp provides makes it easier for us to reduce labor hours, operations, and time wasted trying to figure out moving data. The simplicity of tiering is a big bonus for us.
In terms of data protection, we have been leveraging SnapMirror with Snapshots to do cloning. We find it simple to SnapMirror to a DR site, so in a disaster situation we can recover, and the speed of recovery is much more efficient. We find it much easier than what other vendors have done in the past. For us, being able to SnapMirror a volume and restore it immediately with a few commands is more effective.
AFF has helped us in terms of performance, taking Snapshots, and being able to do cloning. We had a huge struggle with our backup system doing snapshots at the VM level. Using AFF, it has given us the flexibility to take a Snapshot more quickly.
We were early adopters of the cDOT environment five or six years ago. In the early stages of deployment, we saw some challenges around cDOT. However, in the last two to four years, the product has matured incredibly. Ever since the introduction of ONTAP 9.x, we haven't seen any issues in terms of availability and performance.
We are upgrading to ONTAP, which will give us a data encryption level at an aggregate layer of the ONTAP environment. We are looking forward to that.
We are using SnapMirror and not seeing any issues. Let us hope it stays like that.
We've been using AFF for file shares for about 14 years now. So it's hard for me to remember how things were before we had it. For the Windows drives, they switched over before I started with the company, so it's hard for me to remember before that. But for the NFS, I do remember that things were going down all the time and clusters had to be managed like they were very fragile children ready to fall over and break. All of that disappeared the moment we moved to ONTAP. Later on, when we got into the AFF realm, all of a sudden performance problems just vanished because everything was on flash at that point.
Since we've been growing up with AFF, through the 7-Mode to Cluster Mode transition, and the AFF transition, it feels like a very organic growth that has been keeping up with our needs. So it's not like a change. It's been more, "Hey, this is moving in the direction we need to move." And it's always there for us, or close to being always there for us.
One of the ways we leverage data now that we couldn't before involves simple file shares. One of the things we couldn't do before AFF was search those shares in a reasonable timeframe. We had all this unstructured data out there, and all these things to search for and check: Do we already have this? Do we have things sitting out there that we should have, or that we shouldn't have? We can do those searches in a reasonable timeframe now, whereas before it took so long that it wasn't even worth bothering.
AFF thin provisioning allows us to survive. Every volume we have is over-provisioned and we use thin provisioning for everything. Things need to see they have a lot of space, sometimes, to function well, from the file servers to VMware shares to our database applications spitting stuff out to NFS. They need to see that they have space even if they're not going to use it. Especially with AFF, because there's a lot of deduplication and compression behind the scenes, that saves us a lot of space and lets us "lie" to our consumers and say, "Hey, you've got all this space. Trust us. It's all there for you." We don't have to actually buy it until later, and that makes it function at all. We wouldn't even be able to do what we do without thin provisioning.
AFF has definitely improved our response time. I don't have data for you — nothing that would be a good quote — but I do know that before AFF, we had complaints about response time on our file shares. After AFF, we don't. So it's mostly anecdotal, but it's pretty clear that going all-flash made a big difference in our organization.
AFF has probably reduced our data center costs. It's been so long since we considered anything else that it's hard to say. I do know that doing some of the things that we do without AFF would certainly cost more, because we'd have to buy more storage to pull them off. So with AFF dedupe and compression, and the fact that it works so well on our files, I think it has probably saved us some money, at least ten to 20 percent versus other solutions, if not way more.
Dell EMC XtremIO: Performance
The performance is very good. As for the use case, we decided to go all-flash a couple of years ago, and XtremIO was one of the vendors that our EMC partner recommended, so there was no discussion of what kind of storage we would buy.
The performance of some of the features of the solution is very powerful. XtremIO was powerful enough to allow us to disable features of other items.
If I didn't have to think about the cost, I would rate the solution ten out of ten. This solution is geared toward enterprise-level companies. Small and medium-sized businesses would find it extremely expensive.
What I like about this solution is that it is really fast and really good at compression, so you can put a lot of data on it. I've used it for cases like VDI in a healthcare environment, because we use multiple copies of it. It is very good and has very good performance. It can handle big workloads.
The solution's most valuable feature is its high performance.
It's a great solution. We have 100% high availability and 100% business continuity. All our banking is All-Flash behind the VPLEX.
We've seen great enhancements from the performance point of view. There's good availability, stability, and continuity, but the performance actually has increased by 60 or 70%.
The most important thing for the system engineer is to check if there is latency in the IOPS for any run. You cannot measure the number of IOPS or whether or not it is overloaded; you cannot measure anything like this in EMC. Most solutions, especially HP, improved our failover performance with our database and servers. Most of our servers are HP, and we now use EMC only for backup.
One thing that should be improved is the reporting and monitoring tools. It should use real-time monitoring for storage, IOPS, latency, etc.
The most valuable feature is the performance, as well as how you manage performance on the system.
Pure Storage FlashArray: Performance
Pure Storage is all-flash, so it sometimes tends to be a bit more expensive at the outset. But once a customer gets a demo and starts using Pure Storage, and sees its ease of use, stability, and performance, that encourages them to purchase the product.
Our primary use case has been our production Oracle campus management database environment. We use Oracle PeopleSoft as our campus management solution and underneath that we have about six terabytes of Oracle Database. Our most demanding use-case for Pure Storage has been hosting these high performance, transactional databases, while also hosting all of our other critical application storage needs (MSSql data-warehouse, BI/Analytics, VMWare).
The most interesting feature is the speed at which it executes I/O. After moving to Pure Storage, I have noticed that our databases are considerably faster.
Our performance has improved by at least four times.
We do a lot of Oracle implementations and getting Oracle workloads to run faster and better. For a lot of our customers, they are looking at Pure Storage for its underlying storage. It makes everything a lot easier for them in terms of increasing performance, lowering operational costs, and making their day-to-day lives easier.
Most of our customers who use Pure Storage have one of two scenarios:
- They have production data with high performance requirements running on Pure Storage, and they want an efficient way to make a copy of that data onto some other storage for backup and DR purposes. For this scenario, we have an integration with Pure Storage that allows us to very efficiently leverage their APIs to capture that data without repeated full copies. It leverages their snapshot APIs and differential APIs, which tell us what's different from one snap to the next.
- The customer has their data, maybe it is on Pure Storage or it's on some other array, then they want to use Actifio to get a copy onto a Pure Storage array.
For example, an Oracle user might need to make a copy of a large Oracle Database. They would want us to spin that database up in one or more lower, testing, or QA environments. These environments sometimes have high performance requirements, which can be met by placing a copy of the data on Pure Storage.
Another example is a customer who has Oracle Exadata. Obviously, Oracle engineered systems have very high performance, and they don't want to have all of their test and dev copies in that Exadata platform, because of the cost of the platform. Therefore, Pure Storage, combined with Actifio, captures the data efficiently from the Exadata environment, then stores it on the Pure Storage disk. We then present that data to their test servers, which can be the Exadata Compute Servers or it can be any non-Exadata Linux-based Oracle servers. Then, they can have great performance because of the high speed delivery of data from Pure Storage using Actifio.
I would recommend Pure Storage.
We investigated some flash storage implementations for it, and based on the way that the appliance works, the added cost of flash doesn't scale with the performance that you get from it, so it hits our middle ground. It works perfectly for us. We don't need to look at any type of flash storage.
We're providing some ESXi solutions to our customers with high performance.
The ease of use. That's what our customers love. They say it's very easy, they don't need special training, they don't need to call us or any other company or integrator to help them do their job. That's the main reason they purchase Pure.
Also, performance. The box gives them extreme performance, but ease of use is the main reason they love Pure.
We are doing a project in tandem with Boeing to develop a security solution for their Oracle databases. We've been doing it in the VMware virtual solutions lab, which is back-ended by Pure Storage. It's a very complex project. Pure made it fast enough that we could cycle through the things that we needed to cycle through to get it exactly right. We were able to do so a lot of times, to rev it enough to get it refined to where the process was exactly right every time. There's no way we would have had time to rev it that much had it been on anything slower.
It helps simplify storage. When you're running Pure all-flash, you don't have to do a lot of the old Oracle best practices. You don't have to worry about putting log files on a different disk channel than the data files, and those types of issues. As long as you don't max out the bandwidth of your connectivity, your Fibre Channel, then it doesn't matter. That has pushed the bottleneck down to the connectivity to the storage, as opposed to the different spindle groups on your storage. That has made it vastly easier to do large volumes, rapid provisioning in databases, without taking a performance hit.
We like the data reduction rates. That has been really helpful. You get 4U of Pure Storage replacing something like two racks of spinning disks. One of the things that has contributed to that are the data reduction rates. Not only that, it helps dramatically speed the read coming back in, because you don't have to read it 400 times. Actually, the write doesn't hurt anything either because the write goes in once and then it gets deduplicated and that's that. It does help speed I/O because then everything is coming right off the front end of cache. Certainly, in terms of space, it's probably the most helpful.
We used to run VDI on other storage. The performance wasn't great, but when we moved to Pure, latency dropped to just a few microseconds. Latency is the most important aspect for us.
Pure Storage has helped improve our organization. Before them, we had a giant 3PAR V400, and every day we would lose a disk or a magazine. We had to call a guy out to come onsite. It was a massive three-rack thing. Pure Storage is really modular; we're maxing out shelves where we can, it doesn't take up as much space, it's not as hot, and it's a lot better than the 3PAR.
Replication is the main reason we have it. It has helped to simplify our storage in that there's nothing to really set up. Once we have the arrays linked, we ship the data over and we set our RTOs and our RPOs.
As dedupe and compression go up and we get more out of it, then we do see reduction in total cost of ownership. We're also throwing more and more on than we ever had before, so it's hard to tell, but we're getting more data on a smaller array than we ever had before.
The 3PAR SSD arrays that we have are still failing a lot, so even though we're under warranty, we still have to get someone out and usually have someone troubleshoot, which adds to the cost. With Pure, we've had a disk fail, and we just pop it out, pop a new one in, and it's good to go.
In terms of performance metrics, depending on what we have on it, some of our databases will get 4.8:1 data reduction. When we do a big release, our SQL table values change, so we'll see that ratio reduced and we'll sometimes go up to 110 percent utilization. We're working with Pure Storage to try to fix that and see why we're changing so much. We also mistakenly had a 10pb on Pure, so that data churn really reduced our usable storage. We're learning how to use Pure properly.
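As a rough illustration (a hypothetical sketch, not vendor tooling), here is how a data-reduction ratio like the 4.8:1 figure above translates into physical capacity consumed, and why a drop in that ratio after heavy data churn can push utilization well past what was planned:

```python
def physical_tb(logical_tb: float, reduction_ratio: float) -> float:
    """Physical capacity consumed for a given logical footprint,
    assuming a combined dedupe + compression reduction ratio."""
    return logical_tb / reduction_ratio

# 48 TB of logical database data at a 4.8:1 reduction ratio
print(physical_tb(48, 4.8))  # 10.0 TB physical

# If a big release churns the SQL tables and the ratio drops to 3:1,
# the same logical data now needs 16 TB physical - a 60% jump,
# which is how utilization can overshoot a sizing plan.
print(physical_tb(48, 3.0))  # 16.0 TB physical
```

The numbers here are illustrative only; actual reduction ratios vary per workload and over time.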
Our primary use of Pure Storage was for a data virtualization project using Belfrics. We needed the latency that would be required for the project.
The analytics that we gather is used for just one environment (which is big in the banking industry). Production wise, it's running Oracle. Performance wise, it's basically running enterprise applications.
Ease of use is the most valuable feature for us. It just does what it says. It's very efficient, really quick, and replication is great.
Predictive performance analytics are also good. The compression and the predictive analytics tell us how much storage we're using and how much longer we have before it runs out. The compression algorithms are perfect.
This solution was installed at my organization before I got there but having worked with it in the past, I would say that the responsiveness with any SQL applications has remarkably improved.
It has simplified our storage. It's a "set it and forget it."
It's too early to tell if we've seen a reduction in total cost of ownership. The solution is expensive. It's hard to monetize the difference in performance that we're seeing, but it's obviously there and measurable.
Performance and scalability are the most valuable features for us.
Our primary use case of this solution is for storage. We use it to ensure better application performance and to improve the user experience of the application. The cross-storage appliance improves the overall application experience. We have been using this solution as an on-premise solution. It has been useful for our critical applications.
It is easy to manage. You don't have to have the same people who used to manage the Dell EMC arrays because the solution is more intuitive.
I like the fact that, by default, we encrypt at rest. So, with database encryption, we no longer have to layer it using Transparent Data Encryption; we can use the native storage encryption. This helps lessen the performance impact and simplifies configuration.
We switched to Pure Storage mainly because of the frustration of dealing with performance on the old platforms that we used to use.
It's a product that we hardly ever call tech support for, because it just works. The performance and ease of use are all there, which is what we were looking for. We don't want to always have to call into tech support for something. It's one of those products where you forget about it because it just works.
We were previously on a legacy storage system. After moving to Pure, the stability and performance both dramatically improved.
We don't have to worry about storage anymore. Previously, we had to babysit our storage system, doing things like managing the volumes, looking at the capacity, predicting when would we run out of space, and replication work. All of those created a lot of challenges with the previous system. Since moving to Pure, we no longer have to worry. We defined the policies once, and things mostly work.
Pure Storage simplifies the management, overall.
This solution has improved my organization because it has good performance. The interface is simple. Its ease of use has simplified storage for us.
There are a lot of companies that deliver solid performance, and a lot of places you can get flash. The pricing wasn't that much different; it's really the simplicity that makes the difference. If the data starts flowing too fast, it slows things down and handles them later. Those features are the winners for us.
This solution has improved our organization in that we used to see latency, but now we don't. It also has good performance. Latencies have come down for our SQL databases. In terms of space savings, we can put a lot more in a lot less. We also save data center space and get good deduplication.
It has also helped us simplify storage in the way that it's easy to manage. It's the most simple storage solution.
This solution has improved my organization because we can easily snapshot and share the same storage platform for non-production production and so we've been able to get very high performance from non-production environments as well.
The most valuable features would be its performance, retrieval, recovery, and backup. It meets the customer's expectations.
Compared to VMware, it has two to three times better performance.
The density and the sizing for when we put it in the data center, as well as the performance and price.
We were previously on legacy storage systems. After moving to Pure Storage, our stability and performance both drastically improved.
Pure Storage is now our de facto standard product to use.
The analytics were gathered for this environment, and the environment is big. Production-wise, it is running Oracle, and performance-wise, it is running enterprise applications.
We are using Pure Storage as an all-flash product. It is a niche product, and only used for high performance data.
With Dell EMC, they have all-flash arrays, but they also have other types of storage. Our clients use the solution for DevOps and their high-speed databases.
We like the speed. It's very low latency. In virtualization, you can mask lots of problems, and even in code you can mask lots of problems, with low latency. It's just pure speed and low latency.
We also like the compactness, the small footprint. It takes up very little space in a data center and uses little power.
Finally, we love the predictive performance analytics. It's an excellent tool. It's something we were asking for in the past. When they rolled it out, it made a difference.
The Purity operating system is its most valuable feature because it's very simple and easy to use and operate. You don't have to do very serious training to operate this equipment. It's user-friendly and pretty straightforward.
The performance analytics are moderate. It's not the best performance platform out there but it's the easiest to operate.
I would recommend Pure Storage, as it is well-established. It also simplifies and optimizes the right space.
The predictive performance analytics are good.
It's a high performance storage array. We want some deals regarding replication and stretch cluster.
One customer didn't have the budget to renew all the VM and VDI infrastructure. It was not so huge (approximately 100 VMs). The VMware partner provided the Horizon View solution, suggested to upgrade it to Windows 10 (for example), but the customer didn't want to recreate the infrastructure.
Without touching anything, we migrated from the traditional two-tier Dell EMC storage infrastructure to a flash array. We were able to guarantee the overall performance and consistency for the Windows 7 machines without upgrading anything, which was a huge improvement without additional cost. Then, we added a lot of additional VMs.
We have workloads that demand high IOPS, so a lot of speed, fast access, time, and overall high performance.
We don't have anymore performance issues, which is good.
The job of the storage engineers who provide support has dramatically changed. Provisioning is now automated and much quicker. We can now focus on things that bring more value to the company than just managing storage.
We use it for performance, the capacity of deduplication, and compression of the data.
It's a great return on investment, based on the mission. When you're interested in high-performance there isn't much else that competes with it.
My rating of Pure Storage is a ten out of ten because of the price for performance and footprint - the overall value.
It was less expensive than some of the alternatives. It's not as though it was a premium price to get that kind of quality. It's a very competitive product from a price perspective, but I would say better than many in terms of performance and service.
It provides awesome performance, and it has let us shrink our data center from two-and-a-half racks to one rack while giving better performance with SSD drives versus spinning disk drives. It has saved on overall costs for heat and power within our data center, where we're now just powering a 3U device.
I rate the product at ten out of ten because the performance of the storage is just unbelievable.
Running SAP on Pure Storage helps a lot without doing any further tuning to improve application performance. Our internal clients are happy.
The most valuable feature is its simplicity. It simplifies the administration and backup.
The predictive performance analytics works well.
The most valuable feature is its performance. It helps us store large amounts of data while providing faster retrieval of that data.
The back-end data reporting for Pure Storage is phenomenal. The data that you can see on the performance of your customers' arrays, so you can be proactive about upgrades or enhancements, is a phenomenal tool to have access to as a partner. I haven't seen this type of thing from any of the other storage systems.
Pure Storage has a lot of statistics which help out with capacity planning.
As a partner administrating the solution, the back-end reporting has positively affected the time involved in managing and administrating.
- Speed: It's fast. It's like I don't notice anything.
- It's very easy to use. The GUI is simplistic, which can be nice.
The predictive performance analytics are great. I get everything I need to know: IOPS, latency, etc. The tech support works with me if I have any questions that need to be answered.
It provides better performance for our desktops.
It has positively affected our space requirements.
We have reduced the time involved in managing and administrating our storage.
We haven't done as much capacity planning as we should have. I am sure it would help us.
This was our first all-flash storage enclosure, so we saw a huge boost in performance for all of our servers. It has definitely helped us in terms of performance, which is what we needed it for.
We don't have to build any type of storage device, which takes an IT guy a long time to do. This makes storage setup much easier, because it can be done almost the same day that it is purchased.
For flash storage, the speed access is its most valuable feature.
The solution’s inline deduplication and compression are very good.
The predictive performance analytics is a very good feature, as our system is performing better than before.
Reliability and performance are its most valuable feature.
Its ability to simplify storage is great.
I look at the performance metrics periodically, which are spectacular.
We have undergone upgrades of controllers with mixed results. Some have gone well, and some have not gone so well.
We would like more extended historical data to help with some of the capacity planning. This is something that we are asking for all the time. E.g., what was the historical performance of this particular volume? So, we would like more historicals.
The performance is great.
The predictive performance analytics are good.
We noticed a dramatic increase in application performance when moving it from NetApp to Pure Storage.
Pure Storage seemed more cost-effective than NetApp. When we did our POC, we saw big performance gains between all-flash on NetApp and all-flash on Pure Storage. It was significantly better.
We are hoping that we get to scale up, at some point. My initial impression is that it should be very easy for us to expand just by replacing the disk groups or by adding a shelf. As far as my impressions of being able to scale, I think it will be pretty simple. Until we get to that point, I don't know.
We haven't really seen much on the performance side, because we only have five VMs on there right now. I can definitely say that it is extremely fast, much faster than our legacy HPE spinning disks. However, until we get a lot more servers on it, I won't know if we're going to hit a bottleneck or cap it out at all. I don't think we will, but until we get more on there, I won't know.
The product added speed to our SQL environment, and we get a bit better compression. It gave us a little more space when I moved my SQL environment off the competitor onto Pure Storage. Therefore, I gained a bit of space and saw an increase in performance.
They have really good baked in analytics to show you trends for growth history, so it does help with future planning for data growth.
We realized that we needed to invest in a new solution when we ran out of space. We didn't really switch over to Pure; we basically just put the non-critical apps on our Unity storage and brought in Pure to be the tier 1 for the performance of critical applications. We had a few products on our shortlist, like Dell EMC and Pure. We actually have all three on-site currently.
Our customer has been able to migrate some of their cloud services back on-premises, which is of benefit because they were having some performance issues in the cloud.
Our previous SAN storage environment never performed with the same levels as this does. The performance levels and the storage have improved my organization.
It has benefited our IT organization because we're a 95% virtualized environment and we're able to allocate resources as needed and manage our whole infrastructure that way.
We are running VMware on Pure. Our main driver for this was to isolate our Citrix environment from the general SAN storage board.
The joint solution has benefited my organization by isolating the environment, giving peak performance without sharing resources with other environments or competing infrastructures.
We use the solution for the vendor support. It's a banking software system. It's an IBM system and it requires some Pure Storage for the backend and SSDs for performance. The vendor supports Pure Storage.
This solution has improved our performance. We run a lot of security tools that scan for different things, and this would greatly impact our other storage arrays that were either spinning disks or hybrid storage. Even though we did see an impact on Pure, none of our applications that ran on Pure had experienced any problems.
Part of it was to simply go to an all-flash technology that shielded us from that, but it was also that the toolset was very valuable. We could quickly see how we were performing. With some of the other vendors' tools, it's really hard to know where the problem is or how it's performing. You just see the results. You see the symptoms of the problems, and it's hard to come to understand where they are coming from.
It helps us maintain uptime much better than other solutions we've used in the past and the support is extremely quick and responsive.
The ease of management, cutting edge technology, and higher availability benefits our IT organization.
We are running VMware on Pure. The main driver for this was the speed of the virtual machines and the ease of administration with Pure is pretty seamless.
The joint solution has helped my organization. Cody from Pure Storage has been a really big advocate for cutting edge technologies within Pure Storage. He's given us as a customer a lot of tools from his social media to help us do our jobs easier. That's been amazing. It's been awesome for us. The support's been great. Our SC has been great, and our sales reps have been great. Performance is awesome.
The performance and the Evergreen maintenance are the most valuable features of this solution.
We used an older product, but it was too slow. The main reason for switching over to run our VMware on Pure was the speed and, after several meetings with other vendors, we decided to go with the all-flash model. We replace our storage every five years because we want the best performance.
We previously used Hitachi, 3PAR, and HP but we had performance limitations.
It replaced an earlier tier. It replaced 3PAR Storage and gave us faster performance than the single databases.
VMware has benefited our IT organization because we're 100% VMware, everything is running on it.
We are running VMware on Pure. Our main driver was the performance for SQL servers. The joint solution has helped my organization in the way that the databases run faster.
My organization is taking advantage of the VM integration developed by Pure. We've deployed it. It gives the storage administrator some additional insights on metrics. We're not using it to actually manage the data stores; he's getting more insights on metrics. Pure has a VAAI plugin that allows you to manage the data stores. We're not doing that, but it gives them heightened analytics in addition to Pure1, a web interface. The integrations have helped by giving us another dashboard. Somebody might think the databases are running slow, and our database administrator can look at that tool and say, "No, it's unique to your SQL databases; it's not the other VMs on the data stores."
Having fast storage allows actual servers to perform in high capacity so we don't have slowdowns on our applications.
It benefits our IT organization in the way that it drives down costs, allows us to migrate servers from one data center to another, and gives the flexibility that having bare metal servers wouldn't allow.
We run VMware on Pure and our main driver was for cost and performance.
We used to use a product called XtremIO, which was a pretty significant improvement over the old way of deploying storage through standalone SANs; we also used EMC VMAX, which was really expensive. We saw a vast improvement when we switched from XtremIO to the Pure Storage model. It just made us that much more competitive. We were able to offer those workloads to our clients, we sold more, and we keep selling it.
VMware absolutely benefited our IT organization. VMware has always been just above the rest in terms of virtualization. I was not part of the organization prior to VMware being a prevalent powerhouse like it is today. But I know that back in the day of our organization, we used to have every server in a single box. Now, we've trimmed down so much of our infrastructure as well as some of our other client's that we've moved to VMware and it's been a significant improvement.
We are and we aren't running VMware on Pure. Our ESXi hosts are not running on Pure Storage, but we use Pure Storage for the back-end data stores. We don't necessarily run the hypervisor on Pure, but we run a lot of our clients' virtual machines on Pure Storage.
The main driver of running VMware on Pure is for more IOPS. It's a growing trend in the industry that we have more clients that need more IOPS and low latency. It's an ongoing battle in the industry. When it comes down to it, there's going to be a higher demand for even lower latency, even more speed, and more IOPS. We haven't hit that quite yet, but it will happen. It's just the nature of the business.
The joint solution has benefited our organization. The tier-one storage from Pure Storage has allowed us not only to sell more at a higher price, but also to separate certain workloads from others. We have tier-one storage, and then tier-two storage on a different provider that gives us more capacity, so we can reserve Pure Storage for those who really need it. This provides better performance for those VMs.
The performance is the most valuable feature.
Compared to what we used to use, it has improved the utilization. It has improved the statistics for all the users as well. It's better, and people are happy, but we're not quite there yet.
The joint solution has helped my organization. The users are more satisfied. They were looking for better performance, which they got once we moved them into Pure Storage compared to what we had before. Now they are trying to add more and more applications because they're getting better performance and stability. There's a lot of stability now. We have fewer problems, fewer outages.
I'd like to see a move towards individual VM metrics, showing the performance of each VM in a VDI infrastructure. I can see the overall volume, but I would love to see things at a more granular level on the VM side. I'd like to be able to ask, "This particular VDI instance, what is its performance? How much I/O is it using, what are the issues, what is the CPU usage?" I'd like to see that laid out in the portal. That would be great for us.
With respect to comparing other solutions, when you put all of the features in a box, leverage them and migrate your application to one of these arrays, it will give you a lot of benefits. Some people have compared benchmark performance tests against other arrays and from my point of view, overall as a whole package when you sum everything up, Pure Storage is the winner.
HPE Nimble Storage: Performance
The AF5000 array is the primary storage for our iManage DMS 10 document management systems. It delivers the best performance for users of the system.
- Ease of use
It has increased our performance and allowed us to expand out what we can deliver.
We migrated from a hybrid array to all-flash. We have seen our average latency go from four milliseconds to 0.4. Therefore, we are getting ten times better performance down to the end user on everything. We have also seen a tenfold increase in our IOPS.
InfoSight is good. We watch the capacity side of it; that is about all we have used on there. InfoSight does allow us to get servers back up faster. We run a lot of virtual servers, so deployment is about ten times faster, from start until the server is up.
The solution has increased performance.
There has been no downtime, which is probably the best thing.
We use InfoSight predictive analytics. It helps us from a performance perspective by identifying potential bottlenecks.
InfoSight has identified controller failures or performance issues.
This is a storage solution and while it is faster than our old storage platform, that in and of itself hasn't really improved any of the operational aspects of the company.
Performance has been restored to the same level of what we replaced, although it has taken six months of working with Hewlett Packard to allow them to understand our unique environment.
I don't think that it's fair to say that All-Flash is for growth. It's the next logical progression that we had to make.
We can have fewer resources manning and monitoring the storage and we can reallocate resources to work on other things while maintaining confidence in our storage solution.
We have successfully integrated various applications, such as SAP Business One and Microsoft Dynamics GP; all of the ERP systems we have tried work.
The Nimble Storage solution has enhanced performance over the previous system.
I haven't got the details of the IOPS (Input/Output Operations Per Second) so I don't know it exactly, but definitely performance on the service is much better.
At my previous place of employment, I mentioned to my previous boss about this solution because it would have been good at my prior place of employment. They were in a similar situation. They had flash, spinning disks, etc. However, they used Pure Storage, Hitachi, and even some Dell EMC. When you have so many different arrays, or so many different companies, that you have to work with, it is very easy when there is a problem for a vendor to point their finger at another vendor. For a better chance of a successful integration, keep the products (and vendors) down to a minimum.
I don't really have to do a whole lot to it. Plug it in, and it does its job successfully.
The performance was already good. This isn't reactionary; we're being proactive. We are taking these measures to ensure that we don't have an issue.
The biggest lesson learned is to keep using Nimble.
The InfoSight platform and the reporting help us to identify network issues and compatibility problems.
All-flash also positions our company for growth. We've deployed 3PAR all-flash for our core applications and will not transition outside of flash from this point forward.
Nimble has increased performance, with better IOPS and mixed-workload capacity. It also improves throughput, which means we've been able to transition off of our remaining rack mounts onto Nimble plus a virtualization structure, in a cost-effective manner.
This solution has given us reliability that is evident by the fact that it has been running for five years with virtually no hiccups.
InfoSight has automatically predicted or resolved problems in our environment. It has given us insights, such as showing that a VM was using more resources than we thought.
Our performance has increased by approximately ten to fifteen percent.
We feel that we can rely on this solution more for the business-critical applications we have, compared to what we had earlier.
Also, the all-flash positions our organization for growth. Video quality keeps increasing. From 4K we are now moving to 8K and we expect that the size of each video file is going to grow very high. So our data size is increasing very fast.
In addition, we have noticed that the solution has increased performance.
We are a 24/7 operation, even though our corporate office is a typical Monday through Friday operation. It is vital that we have all of our data up and running, all of the time because a lot of our customers have what they call "volume runs" during the month. For example, one of our customers is subscription-based, so we have to run about three million packages during a two-week timeframe. All of the data has to be fully accessible and we can't afford to have any downtime.
This solution has increased our performance by ten to fifteen percent. From both an operational standpoint and management perspective, it is an improvement. It has also reduced costs and allowed us to allocate more towards other projects.
My engineers have said that this solution has improved our throughput. This has helped because when a customer comes and asks for a solution, we can guarantee it will actually meet the demand for their product or service.
The All-Flash storage positions us for growth because of the speed aspect. We have pool data that doesn’t have to be accessed quickly, but when it comes to other things then we are required to be on the spot. This is especially true for SQL databases.
We have seen tremendous ROI. It pays for itself ten times over in a matter of four years.
The solution has increased our performance. We have about 20 times the IOPS that we used to, which was a huge selling point for us. We don't use anywhere close to the amount it is capable of handling, but it is certainly 20 times faster than what we had before.
HPE 3PAR StoreServ: Performance
- Four-node performance
- No split IO groups as on IBM SVC clusters.
- Easy tiering (with a small % of cache) did a good job in a large-scale environment of 1,000 VMs on 350TB
- External monitoring, giving a detailed dashboard
- A nice virtual appliance for remote call-out support to HPE services
We use this solution for low latency, high-performance workloads.
For the solution that we were looking at, an ERP system, and what we needed to do with it, 3PAR was one of the best. On top of that, the company used to use another product called LeftHand. After LeftHand, we moved over to 3PAR. When I saw the performance of 3PAR compared to LeftHand, it was a very good improvement and the way to go.
Speed is what we are all looking for right now. Before, people could wait for data, but now, the moment they wait five minutes, and are not typing, that's the minute they say the system is down. In the past, we used to have a different way of storing data. Since we moved over to the 3PAR, where we have two different sizes, the replication and accessibility are much faster.
The solution has increased our performance.
It has reduced time to deployment by about 30 percent, mainly from the virtualization server standpoint.
It allows us to grow. We added almost 110TB last year alone. Not a lot of product let you throw that in, resulting in the performance that we have been seeing.
It has definitely reduced our time to deployment. We can call up and say, "I need 110TB," and they configure it so my IOPS stay consistent across 3PAR. Then, I don't actually have to worry about the IOPS. HPE takes care of that for me. I need the space, and they take care of the rest. They install it, and I just provision it, which is nice.
If you can handle the IOPS, throughput is a natural byproduct. Usually, IOPS is where you are capped. HPE has done a great job in making sure that our IOP-intensive EMRs stay up and running. We have really good performance on them.
We run approximately half a billion I/Os every six months. This 3PAR seems to handle it just fine.
We moved to 3PAR from a different array, which was a smaller array with fewer controller cards in it. So, 3PAR did not increase our performance, and it has increased our latency by at least double.
We went with 3PAR because we have HP-UX systems. Since we already knew HP-UX, they offered us a significantly cheaper solution than the one that we had for storage.
We have seen ROI. We are able to see more patients now, bringing more money into the practice.
3PAR has increased our performance.
The product has definitely improved throughput. We are able to more efficiently see patients because all of our medical records and practice management software seems to run faster. Uploading images and charts is a lot faster. Recalling information in the exam rooms is faster. The overall throughput of data, going back and forth, is so we can more efficiently see patients, and it also helps increase our patient flow. We can see patients a lot faster, getting them in and out a lot more quickly.
It has helped our organization with DR and replication of our VM environment and Oracle databases.
The solution has improved our throughput, which has improved our performance.
We're currently running two 3PAR 7200 storage units in high availability. We have three workload tiers: Nearline, FAST class, and SSD. Our primary ERP system is Oracle JD Edwards running on Microsoft SQL Server 2008 R2; that is all on SSD. Then we have other workloads for our barcode and engineering solutions running on FAST class, and most of our traditional file, print, and storage workloads are running on Nearline SATA.

We also have two 4200 LeftHand SANs in the environment. I put very low-priority VMs on those two LeftHand SANs. They are minor application servers and don't need a whole lot of performance. However, the LeftHand SANs are now seven years old, the 3PAR SANs are five years old, and I have to replace everything in 2020. I'm looking at HPE SimpliVity, Nimble, and potentially 3PAR as the storage architecture for that environment.
Our JD Edwards ERP system is critical, as is our barcode scanning, because we do a lot of barcode scanning out in the shipping and manufacturing warehouse. Our accounting system is part of JD Edwards too. All of that is on the SSD. We're currently evaluating whether we upgrade to JD Edwards 9.2 or deploy Microsoft Finance and Operations. If we go with Microsoft Finance and Operations, that will be entirely in the cloud, and I'll be able to carve a third of my storage requirements out because it will no longer be necessary to run an on-premises ERP solution.
My directive when I was hired in 2016 as a direct IT manager (versus an outsourced IT manager, as I was when I started in 2014) is that anything and everything I can take to the cloud goes to the cloud. If I do that, it reduces the need for all-SSD on-premises, and that's actually what I'm trying to get to, because I'd rather utilize Microsoft Cloud, Azure, Office 365, and Dynamics 365. I want to utilize the cloud for my performance, whereas on-premises traditional file, print, and storage doesn't really need SSD.
We used a reseller, High Performance Technologies, for the deployment. Our experience with them was okay.
They are not a good fit for what we need now. So, we have moved onto another provider.
It helps our core applications run very fast. It has increased overall performance by about 60 percent. There is one process which has gone from taking seven hours to taking one hour. It's a key for storage in our organization.
The 3PAR arrays replicate offsite. Everything is safe and optimized. There's automatic promoting and demoting of blocks, moving hot ones to the flash storage and the less used ones onto Nearline storage. This optimizes everything and uses the resources to their best ability.
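The promote/demote behavior described above can be sketched with a simple access-count policy. This is only an illustrative toy, not 3PAR's actual algorithm; the thresholds, block names, and tier labels are assumptions for the example:

```python
from collections import Counter

# Hypothetical two-tier policy: frequently accessed blocks are promoted
# to flash, rarely accessed blocks are demoted to Nearline. The real
# array uses its own heuristics; these thresholds are made up.
PROMOTE_THRESHOLD = 100   # accesses per sampling window (assumed)
DEMOTE_THRESHOLD = 10

def retier(access_counts: Counter, current_tier: dict) -> dict:
    """Return the new tier for each block based on its access count."""
    new_tier = {}
    for block, tier in current_tier.items():
        hits = access_counts.get(block, 0)
        if hits >= PROMOTE_THRESHOLD:
            new_tier[block] = "flash"       # hot block: promote
        elif hits <= DEMOTE_THRESHOLD:
            new_tier[block] = "nearline"    # cold block: demote
        else:
            new_tier[block] = tier          # warm block: leave in place
    return new_tier

tiers = {"blk1": "nearline", "blk2": "flash", "blk3": "flash"}
counts = Counter({"blk1": 500, "blk2": 5, "blk3": 50})
print(retier(counts, tiers))
# → {'blk1': 'flash', 'blk2': 'nearline', 'blk3': 'flash'}
```

The point of the policy is exactly what the reviewer observes: resources are used to their best ability because only the hot working set occupies the expensive flash tier.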
It has increased performance since we added the flash drives. Originally, we had 2-Tier storage (the Nearline storage and SAS storage), but adding the flash storage really improved performance by maybe 30 percent.
We can cut a VM quite quickly, so we can probably stand up a workload in half an hour. So, the time to deployment is quite good.
We have been able to back up our data more frequently now that we have everything on flash. It responds a lot faster, so the IOPs are a lot faster.
We have seen ROI. While the costs were quite high at the time of purchase for our environment, the ease of use and the fact that it hasn't failed all the time, working fine, that makes it worth buying.
3PAR has increased our performance. At the time that we purchased 3PAR, it was much more powerful than any of our previous storage.
3PAR has helped our company reduce the time to deployment by 60 percent. It is easier than before.
The solution has improved our throughput.
It provides us some disaster recovery capabilities. The all-flash storage gives us the performance that we need.
The remote copy group failover is very useful and has helped us.
We use InfoSight predictive analytics. The most useful part of it is being able to see the growth curve.
Before using centralized storage, we needed to make sure that we have enough physical disks installed in a server. Now, we know exactly the capacity that we need for the upcoming year, and it's much easier for us to enlarge the capacity and expose these disk volumes to the relevant servers. Again, in our case, it's mostly the databases.
All-flash positions our organization for growth in a way, mostly for performance, because again, we're using all-flash for the performance that it provides, and we have critical databases running on it. It's providing day-to-day functionality, the way I see it.
All-flash provides optimization through deduplication and compression and provides high performance for both databases and virtualization. It has increased IOPS by 200 percent. In addition, it has reduced deployment time by ten percent.
One of our customers, using SAP, had 16 terabytes of data. When we implemented this solution for them, their storage was reduced to two terabytes.
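The SAP figures above work out to a straightforward data-reduction ratio; a quick check of the arithmetic:

```python
# Worked example using the figures quoted above: 16 TB reduced to 2 TB.
logical_tb = 16.0    # data as the host sees it
physical_tb = 2.0    # data actually stored after dedupe/compression

ratio = logical_tb / physical_tb                     # 8:1 reduction
savings_pct = (1 - physical_tb / logical_tb) * 100   # capacity saved

print(f"{ratio:.0f}:1 reduction, {savings_pct:.1f}% saved")
# → 8:1 reduction, 87.5% saved
```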
- Performance that it gives me.
- The speed of how we process data on the manufacturing floor.
The availability of the server has given us increased stability in our environment.
The mission-critical apps and processes that we use are our Oracle database, VMware, and some web services.
The All-flash positions us for growth because of its better performance, which means that our applications are faster.
This solution has improved throughput. It has helped when we deploy non-production servers.
This solution handles everything to do with our business. If it’s down then we can’t deliver to our customers. We can do more, faster, whether it's spinning up more virtual machines or handling large amounts of data.
The All-Flash has positioned us for growth because we can do more. Going past traditional hard drives has really been fantastic for us. Our performance has increased by anywhere from fifty to one hundred percent. Moreover, our deployment time has been reduced by about fifty percent.
For us, the increase in throughput translates to an increase in productivity.
It allows us to cohost as needed. We are able to put more systems on one data storage system and it is still able to deliver the availability and speed that we need it to deliver.
All-flash also positions us for growth. We can look to simplify things while still maintaining the reliability and speed that we need to deliver quality healthcare.
In addition, it has increased our performance and it has improved our throughput. The latter improvement means that we're able to ensure that the users can get to their data as quickly as they need it, and that it responds to any queries that they have. It's able to meet their daily needs.
The increased throughput has allowed us to scale and maintain performance, or even have better performance.
In terms of the mission-critical applications that we run on this solution, our application is benefit adjudication.
We have been able to scale faster and get our applications out in much less time. We don't need to worry about the platform's ability to manage the workload, so we are pretty happy.
Our VMware platform sits on 3PAR. We also have databases, ERP applications, and websites running on it.
All-Flash also positions our organization for growth. It certainly has its place. We don't use All-Flash because the performance of the existing arrays does the job, but I can certainly see where, if we were doing data-intensive operations, it could assist us.
We deployed InfoSight predictive analytics not too long ago. It improved our management of VMs. We are now able to see a lot more using InfoSight and we have a pretty good idea of exactly what's going on in our storage array.
The storage array absolutely increases performance. Compared with what we had before 3PAR, this has certainly done its job.
The solution has also helped us reduce time to deployment, I would say by at least 30%. It's easier for us to deploy. We get our servers up and running quickly and that way we support our environment faster so we can be more agile.
It has also significantly improved throughput, so we don't need to worry about performance for any of our platforms.
This solution has allowed for massive performance acceleration of all workloads and massively increased availability (with peer persistence/transparent failover feature).
Hitachi Virtual Storage Platform F Series: Performance
The Hitachi Virtual Storage Platform F Series is a very steady solution. It's a state of the art solution in storage systems.
High-availability and performance are the strongest aspects of these machines.
The high performance of flash storage is especially valuable to us.
We are a solution provider and I work with a lot of different SAN products, depending on the needs of the customers. We have implemented this solution, as well as the G Series, for some of our clients.
I have a project right now that involves revising and fine-tuning a storage network. This network contains two Hitachi VSP G Series units. There is not a major difference between the F Series and the G Series. Both of them are enterprise-scale and efficient for many data centers. It is used as primary storage in industries such as banking, automotive, health care, and insurance, by large companies or companies that have an IBM mainframe.
If the solution requires very high IOPS (input/output operations per second) with sub-millisecond response times, then they should select the F Series because it has better performance.
Businesses are looking for simple storage solutions that can exceed expectations and meet the challenge of delivering continuous availability and high performance while maximizing the value of physical and virtual infrastructures. Simultaneously, enterprise and mission-critical business application environments are becoming increasingly unpredictable and organizations need the ability to deliver and orchestrate automated operations.
Hitachi Universal Storage VM [EOL]: Performance
EMC VNX [EOL]: Performance
It didn't perform as advertised: the instant snapshotting feature, while working as advertised, slowed down the performance of the entire storage solution.
When we started the process of looking for a SAN vendor, I had four primary criteria: reliability, performance, cost, and simplicity. We were really looking for a unicorn and fully expected to have to make compromises in one of those areas. The Nimble CS-Series was recommended by our solutions provider, and we were blown away. Not only was it the most affordable option, but it was the easiest to deploy, had the best performance, and has proven to be reliable with 100% uptime.
IBM FlashSystem: Performance
- CLI, though intuitive; no other API available
- Lacks scheduling or a cron
- Has no built-in short-term performance graphs
- IBM TPC exceeds the monitoring needs; had to fall back on Stor2RRd
- Support response times are bad
We worked together with a local IBM partner for the setup of the product. They performed satisfactorily. Data migration was handled by our own team.
The performance of the all-flash system is very good. There is enhanced performance and data protection in the solution, which I appreciate.
Performance is not a problem anymore, and the space available is enough for about five years of operations. We are now busy with cross-DC failover, which will use the capabilities of this system extensively.
I would like to see an improvement in the handling of large amounts of writes. An all-flash system that doesn't do compression or deduplication will flush writes directly from the host through to the flash modules; it doesn't keep them in the cache. Systems with compression and deduplication have to do that work in the memory and cache of the controller, so they have to keep the data there; otherwise, you will find yourself stuck with performance issues.
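The reviewer's point is that dedupe/compression forces writes to be staged in controller memory before they reach flash. A minimal sketch of why, using content-hash deduplication; the class, block size, and cache structure are assumptions for illustration, not IBM's implementation:

```python
import hashlib
import zlib

class DedupeStage:
    """Toy inline dedupe/compression stage. Each write is held and
    fingerprinted in controller memory; only new, compressed data is
    forwarded to 'flash' (the store dict). A pass-through array would
    skip this stage and flush writes straight to the flash modules."""

    def __init__(self):
        self.store = {}         # fingerprint -> compressed block ("flash")
        self.physical_bytes = 0

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()  # fingerprint in cache
        if fp not in self.store:                # only unique data lands on flash
            compressed = zlib.compress(block)
            self.store[fp] = compressed
            self.physical_bytes += len(compressed)
        return fp

stage = DedupeStage()
a = stage.write(b"A" * 4096)
b = stage.write(b"A" * 4096)   # duplicate: deduped, no new flash write
assert a == b and len(stage.store) == 1
```

Under a heavy write burst, this staging work (hashing, compressing, holding data in cache) is exactly where the performance issues the reviewer describes can appear.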
NetApp EF-Series All Flash Arrays: Performance
The NetApp EF-Series gave our organization easy access to our databases. What's great about this solution is that it speeds up our data store while being a cheap way to get flash performance.
The main advantage of this solution is performance.
This solution does not have any compression or deduplication, but instead gains better performance through concurrency.
The most valuable feature is the ability to set a specific margin of performance to a specific workload. This feature is unique to this vendor and the competitors do not have it.
Dell EMC Unity XT: Performance
Primary use is mid-range storage. We have two variants, we have the hybrid version and the all-flash version. It's for general use. For high performance, we have different systems.
The speed and performance that we get through the SSD hard drives. That's a big factor for us.
We're using it for block storage in a lab, supporting Fortune 500 customers, testing out solutions. We have a number of other competitive solutions in the lab and we try out upgrades for customers, we test out all the different features and functions.
Performance of the system is fine, I really don't have any issues with the actual raw IO of the system, but the competitors are pushing a lot of all-flash solutions in front of us.
We're not doing any integrated Snapshotting of the applications. Some of our team is working on being able to Snapshot Oracle RAC clusters but, for the most part, we're focusing on doing mostly backup solutions, data protection software.
We use it for mass and block storage.
We have not had issues nor performance problems with it.
Ease of use would really be the best feature. We were easily able to get the correct performance details from it. And the configuration was great, it was relatively easy as well; that was brilliant.
In terms of managing it, the performance metrics that it gives, generic stuff, it does everything that we need it to do. We didn't have to create any custom reporting. It all went well.
We're using Dell EMC Unity as our primary storage for our production and for our DR site. We've had no performance issues with it, whatsoever.
We're using it for our data storage, for our virtual machines. It's the only array that we have, so we're not doing tiering at all. Everything is on the unit. We're using it for the data storage that we replicate to our DR site, and for the data that just stays local. We're using it for allocating raw disk-mapping, for mapping storage from the SAN directly to virtual machines for super-clusters and the like. We're using it for everything.
The primary use case is for our reporting environment, business intelligence and analytics. We run our Oracle and SAS-based applications on it right now. The performance is sufficient and we don't have any complaints about it.
We use it for ESXi data stores and performance seems to be okay so far. We've only had it a couple months. We have it integrated with VMware.
The primary use case is mid-tier processing for our hospitals. We have a lot of VM infrastructure on the Unity, but not our most mission-critical. The performance has been great.
For most of our general-purpose cluster, we are using a Unity as Tier 2 and Tier 3 storage. Earlier, we were using a VNX box. Compared to VNX we are getting better performance.
We use it for enterprise SAN. We have multiple units. We just started getting them in and the performance has been good. It back-ends our enterprise Oracle, which is for our financials. We have some Mission-Support applications that it supports as well. We have both structured and unstructured data.
We have it set up for storage for VDI. It is as advertised: Very easy to set up, very easy to manage, and the performance is great. We have integrated the solution with Horizon VDI and there was no additional cost to do so.
The product is pretty easy to use. The GUI is nice, really easy to use, and the performance is good.
Obviously, our customers rely on us for uptime. We've had no problems with it so far. Migration to it went very smoothly, so in terms of value to us, it's been very good at keeping our workload and our uptime going.
Also, it has definitely provided much faster performance.
It's easier to administer than some of the alternatives that we had. The teams find it easy to manage. We're a big EMC shop anyway, so for us it was just a lower-tier alternative at a good performance point for the price.
We are leveraging its integration with other applications and there were no significant costs to do so.
We use it for post to all our data stores or virtual environment.
We have had no performance issues.
We used VNX previously and although it was fast, the performance was poor.
We would like to see the synchronous replication process included in the next release. Not having this downgraded our performance by 65 percent. This really needs to be improved.
We have 30 to 40 Unities out there in the field. We don't even scratch the full capabilities of the Unity. We are at about 20 to 30 percent utilization. It is just provisioned so well that we are sitting at 90 percent performance level. We have it well-provisioned so we don't need to worry about performance for the next five years.
It is quite scalable. If you want to add on, you can add on easily. We have a 25-slot enclosure and are probably at 15 right now. If we purchase a big company, need to scale up, we can easily scale up.
It has improved the utilization of our own internal resources and performance across our managed service platform, meeting our customers SLAs.
Unity has reduced the complexity and improved productivity tenfold compared to what it used to be.
The valuable features of Unity include:
- Flexibility
- A friendly user interface
- Good performance
It's not complicated. Any beginner can work with this environment.
It is not an enterprise-level solution, it's for mid-range companies, but it includes a lot of features like compression and encryption.
It has good performance.
The solution is extremely functional for the price that we pay for it. It is worth the investment.
It's our primary storage. It is just for VMWare with a lot of Fail Over clusters.
For our mission-critical applications, we run SQL, Oracle, failover server clusters, VMware, and databases. We use it for our primary VMware environments, with a VPLEX, just for failover and performance. We use it for Windows clusters because you need shared storage. In addition, we use it for healthcare systems.
We only use it for block storage. We don't use any other features. We have a VPLEX for applications.
We went from two boxes that were 8U down to a 2U box. Dell EMC Unity XT reduced the electricity we were using just by making that one change.
On a performance level, an SQL query would take 60 seconds. That doesn't sound like a long time, but when people are staring at a spinning icon, they can get outraged. This solution has cut it down to about 22 seconds per query, so it's a lot faster. The difference was astronomical. We were using an EqualLogic hybrid array, which had spinning disk and SSD, and the Unity just blew it out of the water.
When it comes to provisioning and management, when you compare Unity to EqualLogic, it's night and day. The EqualLogic wasn't nearly as flexible as Unity is. Once we saw what the Unity was capable of, there was no going back to the EqualLogic at all.
Unity is solid and there is not anything to be afraid of in purchasing it. I would recommend it.
Ours is not a very complicated use case and the performance has been adequate for what we've tasked it to do.
I give the Unity a ten out of ten for two reasons:
- ease of use.
It is lightning fast, low on power and heat, and has a small footprint with great performance.
If you don't know your mixed use case, or what you're going to do with it, it's a nice mixed use storage subsystem. It easily integrates with great visibility. It is very easy to maintain and operate. It is just a nice platform, especially if you're setting yourself in a new direction and you don't quite know what you're doing.
We have deployed it at remote locations; in a converged platform it really helps. We don't have to have two different storage system which helps to minimize the footprint.
It is a platform that we have standardized on for remote sites which enables us to have engineers and admins who are trained on and knowledgeable about the platform across the board. That enables them to support those sites, which is super-beneficial for us because we can do more with less.
The ability to mix and match SSDs with flash, and spinning disk in there as well, really allows us to meet our performance requirements.
Price and performance are its most valuable features.
It streamlines processes.
My advice is to take this solution. It does what it tells you it's going to do.
Instead of using multiple types of backup or file storage, we were trying to combine all of that into Unity. Now we're trying to refresh that again and go with the newer technology, the enterprise-level storage. Unity met our overall performance expectations for what it is, and then we obviously needed the enterprise level, so we're going with the PowerMax now.
I would rate Unity at eight out of ten. Any application or product has room for improvement. I don't see anything out there that's a ten. Unity is functional for what it needs to be.
We know that we can add another whole tray or two of disks if we need to. We started with a high-density to begin with, but we knew we had significant space to expand when we needed it.
It has exceeded our performance expectations. We were expecting an improvement, but we were expecting to eventually in the short term come close to hitting capacity on it, and we haven't. Performance-wise, it's held up very well.
We replaced our legacy storage, which was Oracle. We couldn't afford the maintenance agreement for it any longer. We saved millions of dollars by not going back with Oracle.
This solution has met our overall performance expectations. We were going for form, fit, and function. We had to meet certain guidelines: we couldn't put in anything bigger and, physically, we couldn't put in any additional capabilities. We had to meet the existing network connectivity without modifying the other systems. The versatility of the product, with the optional PCI inputs, allowed us to get that. We are able to scale it up or down, for actual storage, to meet the capacity that we need. We're using it in two cases where we're doing a form-fit-function replacement: one as a direct replacement, and another for overall modernization of the same systems. We're able to take the same product and scale it up to almost three times its size with very little effort.
Engineered from the ground up to meet market demands for all-flash performance, efficiency, and lifecycle simplicity, the Dell EMC Unity XT All-Flash Storage Arrays are NVMe-ready, implement a dual active architecture, contain dual-socket Intel processors with up to 16-cores, and have more system memory.
All of these modern features enable Dell EMC Unity XT to deliver 2X performance and 75% less latency compared to previous generations. They are:
- Designed for Performance
- Optimized for Efficiency
- Built for Multi-cloud
The Unity Arrays are easy to deploy and maintain. The All-Flash models are intuitive and easy to work with, in addition to providing high IOPS with low latency to support Business Critical applications. Because of the newer features and performance, it's easy to maintain and support remotely.
Huawei OceanStor: Performance
I like the solution's speed and the use of SSD technology. If I have to compare it to what we had before, I would say that it's easier to maintain and to support (on top of performance). It offers great capacity.
It serves not only as SAN but also as NAS. The connectivity has a lot of options and prepares us for the use of 10 Gbit Ethernet.
The configuration (web interface) is very simple and the dashboard provides all the basic information (health, etc.) at a glance.
Huawei OceanStor Dorado: Performance
We did previously use a different solution, but we switched due to Huawei's better price-performance ratio.
The OceanStor V3 5000 was, when we started, the first real NVMe all-flash storage solution. With NVMe, the performance, particularly the seek latency, was much more impressive, and that was unmatched. When we later sold our machine, the supplier did not have any in stock. Only later was Dell able to introduce PowerMax, and IBM introduced their own new solution integrating NVMe.
The other advantage is that the HyperMetro functionality is application-aware for VMware or a virtualized environment, giving more reliability and higher capability. It is therefore possible to have all the data synchronized while using less storage. All the other features inside the system are very reliable, and the installation time was shorter. We use less space for storage now; it decreased from two racks to only four rack units. That is really impressive.
We are an IT distributor and this is one of the storage solutions that we implement for our clients. The primary use case is for VMware virtualization, although it is also used for database system storage. Oracle and other SQL databases require a lot of performance in terms of IO per second, which is met by using OceanStor Dorado.
Dell EMC SC Series: Performance
What I really like, from the model line starting with the 3000 all the way up, is the flexibility. You can have spinning disk, you can have flash, you can have a combination.
Another valuable feature is the performance of the auto-tiering. It will move hot data up to your fastest Tier 1 or move your slow data down. Data progression is what it's called. With the auto-tiering you can have multiple tiers, you can have your Tier 1 be either spinning or flash, all the way down to 7.2K. It will change the RAID on the fly so your writes come in at RAID 10. After they sit for a while, they get converted to RAID 5, then they'll cool off and move down the tiers. Your performance is kept going, while the cold data is moved to your slow, non-performance tiers.
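The data progression behavior described above can be thought of as a demotion policy driven by how long data has sat cold. As a rough illustrative model only (not Dell EMC's actual implementation; the tier names and idle thresholds here are hypothetical):

```python
from dataclasses import dataclass

# Hypothetical tiers, fastest first; thresholds are days since last access.
TIERS = [
    ("tier1-flash", "RAID 10", 0),   # all new writes land here
    ("tier2-10k",   "RAID 5", 14),   # demoted after cooling off
    ("tier3-7.2k",  "RAID 5", 60),   # long-term cold data
]

@dataclass
class Block:
    days_idle: int
    tier: str = "tier1-flash"
    raid: str = "RAID 10"

def progress(block: Block) -> Block:
    """Demote a block to the deepest tier whose idle threshold it exceeds."""
    for name, raid, threshold in TIERS:
        if block.days_idle >= threshold:
            block.tier, block.raid = name, raid
    return block

hot = progress(Block(days_idle=1))    # stays in tier1-flash at RAID 10
cold = progress(Block(days_idle=90))  # ends up in tier3-7.2k at RAID 5
```

The key idea the reviewer describes is captured here: writes always enter the top tier at RAID 10, and only cold data migrates down to the slower, parity-protected tiers.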
With federation, you can have multiple systems across sites. You can treat them as one, and with a live migration, volumes don't go down. You can move them from site to site, doing maintenance, and keep your environment up.
They already integrate with Dell Storage Manager, so you can manage multiple systems, set up replication, and get monitoring, along with vSphere and Hyper-V integration.
It's the storage backbone for our virtual environment. So far the performance has been very good.
Overall, it has really helped us to virtualize a lot of workloads where server or application owners were very hesitant to move away from their physical boxes because they were used to having local disks and the performance that came with that. With the SC Series SAN, the performance that we've gotten out of the boxes alleviated anyone's concerns. We do not get complaints about the performance of our virtual infrastructure.
Also, with auto-tiering, it's easier to understand than most arrays, knowing that all of your writes go to the tier that you specify, with easy-to-create storage profiles.
The most valuable feature is the no-forklift upgrade. While the thing is running, I can change out the controllers one at a time and keep the customer up and running. I can add shelves and storage and SSD drives or spinning drives to the system, while it's running. I can bring all that in and rebalance the load across the new disks or, if we take disks away, rebalance the load across what's remaining, and it just works.
Also, in terms of auto-tiering, Compellent writes all of its writes into Tier 1, unless of course you do something silly and pin a whole bunch of LUNs, which means you're telling your VMs and data stores that they have to live in that top-tier storage. As long as you follow the best-practices recommendations and let it do its own auto-tiering, it works very well.
In most cases the customers with all-flash, most of their active data lives in flash. So, they're really using all of those IOPS and performance in that tier, and the other tier or tiers are just being used for cold data storage. It works very well, as long as customers follow the best practices there.
We use it for VDI, mainly. In terms of performance, there were some difficulties to begin with, with a lot of different upgrades. It took a lot of time because we've got several of them. With all the upgrades done, it has run pretty smoothly.
Right now, we've just got one particular system on it, where we're just trying to test the waters to figure out if it's good because we use a combination of Dell EMC and Cisco equipment. So far, the Dell EMC seems to be doing pretty well. There are some applications that we've run where it appears that the Dell EMC would be a better solution.
We use it for storage. We have gotten really good performance out of it, fast IOPS.
We don't use the hybrid solution, or the built-in data migration capabilities, protocols, or DIP inline upgrades.
We use it for our production loads and for whatever is running on all the production VMs, including our database and regular applications. In terms of performance, we have had issues once or twice, but apart from that, it has been amazing.
It's definitely scalable, definitely fits the majority of our use cases, in high-performance environments such as Oracle, SQL, and so forth.
- Ease of use
- Value for price
We use it for storage. The performance is great. So far there are no issues at all.
We had XtremIO for the past three or four years and, prior to that, we had NetApp. I think the SC Series works better. We are pretty happy with it. In terms of performance with mixed workloads, the SC Series is pretty good. We don't see a lot of latency as we saw with NetApp. But I would say XtremIO and SC are similar in that regard.
Most storage platforms are the same, but when it comes to the performance and dedupe, as I said, those were the main criteria, what we were after when we talked to Dell EMC. The relationship and trust are also very important.
We work on different solutions. The most important point for me is how disk virtualization is implemented. In this solution, disk management is really simple and disk utilization is efficient. All disks are added to a group, and RAID is organized on 2 MB stripes. The stripes are automatically organized into two RAID levels: by default, 20 percent in RAID 10 to write blocks with very effective performance, and 80 percent in RAID 5 to store data with a good use of space. When you add more disks to the folder, they are automatically integrated into the profile and add performance to production. It is also possible to implement two systems in a cluster, which can additionally be integrated in a federated mode to aggregate multiple systems into one global storage.
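To make the space trade-off of that 20/80 split concrete: RAID 10 mirrors every block (about 50 percent usable), while RAID 5 across n disks yields roughly (n-1)/n usable. A back-of-the-envelope sketch, where the raw capacity and RAID 5 group width are hypothetical values chosen for illustration:

```python
def usable_capacity(raw_tb: float, raid5_disks: int = 8,
                    raid10_share: float = 0.20) -> float:
    """Estimate usable TB for a pool split between RAID 10 and RAID 5.

    RAID 10 stores a mirrored copy (50% efficiency); RAID 5 across
    `raid5_disks` members loses one disk's worth of parity per stripe.
    """
    raid10_usable = raw_tb * raid10_share * 0.5
    raid5_usable = raw_tb * (1 - raid10_share) * (raid5_disks - 1) / raid5_disks
    return raid10_usable + raid5_usable

# e.g. 100 TB raw with the default 20% RAID 10 / 80% RAID 5 split
# and 8-disk RAID 5 groups: 10 + 70 = 80 TB usable.
print(round(usable_capacity(100.0), 1))
```

This is why keeping only the write-hot 20 percent at RAID 10 costs relatively little capacity while preserving write performance.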
The solution could use more integration with popular backup systems. Dell storage solutions are not very integrated. There are no dedicated models for, for example, Veeam backup or Redhouse backup, etc.
The lower model, the 3000, should have deduplication. It doesn't right now. It's only offered from the 5000 up, but it depends on performance. It could be that they don't offer it on the lower models because deduplication is too much of a burden on performance.
Performance-wise it's high speed. It's also more stable and scalable.
With the hybrid storage approach, we can balance between cheap space and good performance.
The solution's most valuable features are its performance and redundancy. The solution works quite well for that. The redundancy is important to us for snapshot and recovery purposes.
The product offers good performance and is quite powerful.
The implementation is straightforward.
Lenovo ThinkSystem DM Series: Performance
We're a Lenovo partner.
I'd rate the solution six out of ten.
With Lenovo, there are only two solutions: the DE or DM series. For common workloads, we tend to recommend the DE series as it's the best match for smaller companies. The DM series is more for those with many workloads that need very high performance. In use cases where the workload needs performance, we advise our customers to take advantage of Lenovo's best-in-breed DM systems.
We typically advise customers to choose the DM series.
In Germany, we have many small firms and smaller environments. Most people will tell clients they absolutely need flash. However, we don't think that is always the case. It's similar with the DM series. You can't sell clients with small issues or small storage requirements something that offers flash all the time; it's expensive. You need to think about costs and be strategic to ensure you're meeting your clients' needs responsibly.
Lenovo ThinkSystem DE Series: Performance
Pure Storage FlashBlade: Performance
Scalability has not been an issue. Performance has been great. It is just the capacity and dedupe part, which is a little light.
This solution has improved my organization in that we used to have a lot of spinning disks. There had to be multiple racks because of the capacity and performance required; now we can put it all in just a couple of rack units.
The ease of use of this solution has absolutely simplified storage for us.
We've seen some good data reduction rates. We get about 4:1 at our data center. We're getting around 2:1 for the one that we're using for video editing.
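Data reduction ratios like those quoted above translate directly into how much logical data fits on a given amount of physical flash. A trivial sketch of that arithmetic (the physical capacities are hypothetical; only the 4:1 and 2:1 ratios come from the review):

```python
def effective_capacity(physical_tb: float, reduction_ratio: float) -> float:
    """Logical data that fits per physical TB at a given reduction ratio."""
    return physical_tb * reduction_ratio

# With 50 TB of physical flash (a made-up figure):
print(effective_capacity(50.0, 4.0))  # 200.0 TB logical at the 4:1 ratio
print(effective_capacity(50.0, 2.0))  # 100.0 TB logical at the 2:1 ratio
```

The spread between the two ratios also explains the reviewer's observation: already-compressed data such as video reduces far less than general data-center workloads.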
I would rate this solution a nine. The only reason I wouldn't give it a ten is because the servers are a little big. We have seen smaller products with the same storage capacity. We switched because we were looking for a solution that could meet our data storage and performance needs. We researched the costs for different storage solutions and Pure Storage was the one that fit our needs. Its cost-benefit was the most valuable part for us.
We are pretty sensitive to response times and had been running other storage before Pure. We used to not have great response times, but once we moved to this solution, it saved on total production storage usage. It has better overall performance.
Pure provides predictive analysis to us and the customer. It's useful for us to translate what is expected. We can calculate and plan capacity based on the predictive analysis.
It has improved the lifecycle of our database testing and has improved QA in the space that we use it for.
Storage has been simplified for us in the way that it's easy to manage. Their automatic monitoring really helps when things break or are about to break. They see a problem coming and alert us even before our own system does.
It's good to have a good way to back up high-performance arrays like this, especially if we could back it up on the back end versus the front side, where things are impacted.
As a Pure Storage platform, the cost-benefit is hugely based on the deduplication and capacity. Otherwise, we would have had to have a bunch of arrays to support that capacity. The physical footprint of the array, the capacity that it's using, and the high performance are all things that we have seen benefit from.
They are doing very well with the product.
We have integrated the solution with VMware. The integration process was user-friendly.
We use SolarWinds to evaluate our performance metrics.
The management is its most valuable feature.
The predictive performance analytics are pretty good.
The simplicity of the solution is great.
Our primary use case for this solution is backup.
We are running VMware on Pure because our old storage was very poor. Running on Pure helps because it improves our performance in general.
This solution is deployed in our on-premises lab.
We are running VMware on Pure for improved performance. We have seen an increase in performance. We are using the VMware integrations developed by Pure to some extent, but I do not have specific details.
HPE Primera: Performance
Our customers have given very positive feedback about InfoSight, which is the management software for this solution.
Primera has good performance and the compression is also good.
It integrates well with other software including Docker and Kubernetes.
It is easy to expand.