1. The speed of the Pure FlashArray is very fast; nothing on the market compares to it. We also use the VMware integrations developed by Pure, their plugins in our vCenter environment. They help by allowing our non-technical operations teams to deploy new datastores and resize datastores without me having to get involved every time for those simple tasks.
2. This solution makes it easy to manage storage, provision new workloads, and scale up. The All-Flash models are pretty fast for the vast majority of our remote workloads.
3. Things that have been really useful are the clustering features: being able to stay online during failovers and code upgrades, and being able to seamlessly move data around without disrupting end users' ability to get to those files. And we can take advantage of new shelves and new hardware, and upgrade in place. It's kind of magic when it comes to doing those sorts of things.
4. If there is a problem, the HPE facility will detect it and immediately contact me. The most valuable feature of this solution is the native full mesh (multipathing).
5. The solution runs pretty fast. The most valuable feature is data mirroring.
6. The most valuable features are deduplication and compression, which together enable you to have more space. The setup is now easy.
7. The initial setup is pretty quick. It uses the same platform for connectivity, so integration is seamless.
8. The product offers good performance and is quite powerful. The solution has a wide variety of valuable features. The data progression works well. We use the snapshot functionality quite a bit and really like it.

Advice From The Community

Read answers to top All-Flash Storage Arrays questions. 442,283 professionals have gotten help from our community of experts.
What are some major benefits of all-flash storage arrays? Why should companies invest in all-flash as opposed to a different storage solution? 
EGonzalez
Real User

Speed (IOPS), increased reliability, and compactness.

If your application does not need those, you can go with legacy solutions as long as the price justifies it.

Sooner or later, even price will be in favour of all-flash.

Rony_Sklar
Community Manager

Thanks for your input @EGonzalez. What types of storage solutions would you suggest if all-flash is too expensive?

Krishnamohan Velpuri

We can use cache tiers with flash disks as part of storage pool creation. That way we can get better performance; it may not reach all-flash levels, but it is better than SAS drives.

Rony_Sklar
Community Manager

@Krishnamohan Velpuri Thanks for your input :)

PhPr
Real User

In general, all-flash arrays have much better price/performance than HDD-only or hybrid arrays (provided that turning on DECO, deduplication and compression, does not slow the array down; some vendors have this issue, so a PoC is needed). They offer higher performance and lower power consumption per TB. Support costs for HDD-only and hybrid arrays will get more and more expensive, as the HDD share of the market shrinks and the main R&D moves to all-flash arrays. Of course, in some cases (e.g. video surveillance, D2D, and maybe several others), HDD-only arrays are the better option, so it's best to decide case by case.

Krishnamohan Velpuri

All-flash arrays are costlier than other storage arrays, but they deliver more performance with less latency compared to any other array.

Rony_Sklar
Community Manager

@Krishnamohan Velpuri Do the benefits of performance with less latency justify the cost of all-flash?

Krishnamohan Velpuri

@Rony_Sklar Of course, it's costly if that performance is required. More money = more performance, as of now; maybe in the future we will get it for a lower price.
It depends on whether the customer can bear the cost and whether they definitely need lower latency for their applications.

How do thick and thin provisioning affect all-flash storage array performance? What are the relative benefits of each?
Mark S. Cruce

No performance implications. It's just a provisioning strategy...

In thick provisioning, if I need 1 GB, I provision 1 GB, even if only 10 MB is being used. In thin provisioning, I initially provision 10 MB and, as the need for more storage grows, I grow the volume with it up to the maximum of 1 GB...

Most everyone uses thin provisioning unless there's a specific reason not to.

Krishnamohan Velpuri

Let's take an example.

Suppose I have 100 GB of storage in my array and a customer requests a 150 GB LUN/volume. We can provision 150 GB using thin provisioning, but only up to 100 GB (or less) using thick provisioning. Of course, we have to keep about 10% free space for I/O and system operations.

There is no concept of over-provisioning with thick provisioning, but with thin provisioning we may over-provision by allocating more space than is actually available. In that case we have to monitor the storage carefully; we should keep an eye on utilization...
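To make that monitoring concrete, here is a minimal Python sketch of a thin-pool utilization check, reusing the figures from the example above (100 GB array, 150 GB thin LUN, ~10% reserved). The function names and the 90% alert threshold are illustrative assumptions, not any vendor's tooling.

```python
# Minimal sketch of a thin-provisioning utilization check.
# Figures mirror the example above; the threshold is illustrative.

def overprovision_ratio(provisioned_gb: float, usable_gb: float) -> float:
    """Logical capacity promised per GB of physical capacity."""
    return provisioned_gb / usable_gb

def check_pool(provisioned_gb: float, used_gb: float, usable_gb: float,
               alert_at: float = 0.90) -> None:
    ratio = overprovision_ratio(provisioned_gb, usable_gb)
    utilization = used_gb / usable_gb
    print(f"over-provisioning: {ratio:.2f}x, pool utilization: {utilization:.0%}")
    if utilization >= alert_at:
        print("WARNING: thin pool nearly full; writes fail once it fills.")

# 100 GB array with ~10% reserved, a 150 GB thin LUN, 85 GB actually written:
check_pool(provisioned_gb=150, used_gb=85, usable_gb=100 * 0.90)
```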

Rony_Sklar
Community Manager

@Krishnamohan Velpuri Interesting, thanks!

Christian Baldauf
Real User

With thick provisioning, the allocated space is reserved completely. The provided capacity is immediately subtracted from the total capacity of the storage and is therefore no longer available.

With thin provisioning, the configured size is displayed, but the storage only ever consumes as much capacity as has actually been written.

This makes over-provisioning possible, so the capacity of the storage can be better utilized.

Mohamed Y Ahmed

Thick and thin provisioning is a configuration choice tied to the service. As a simple example, use the thick option when creating a virtual hard disk that will hold a database, so the disk is fully allocated up front and heavy writes are not affected by file expansion in the middle of a transaction. Use thin provisioning when the disk will hold file-server data such as images, where the delay of growing the virtual hard disk file will never impact data writing.

For write-heavy data, thick provisioning is more suitable.

For read-heavy data, thin provisioning is no problem.

I hope my answer helps you...

Marc Staimer
Real User

Applications require shared block storage to be provisioned. The provisioning is by capacity per LUN (logical unit number) or volume. Thick provisioning means all of the allocated capacity is owned and tied up by that application whether it's used or not; unused capacity is not sharable by other applications. Thin provisioning essentially virtualizes the provisioning, so the application thinks it has a certain amount of exclusive capacity when, in reality, it's shared. This makes the capacity more flexible and reduces the capacity wasted by over-allocating space that never gets used.

Fatih Altunbas
Real User

First things first: you have to think about what your target is. As the other colleagues mentioned, when you use thick provisioning, all of the capacity is consumed at once; when you start with a thin-provisioned system, the system allocates just the capacity that is needed.

Unfortunately, storage systems with few intelligence features react like dumb systems: you are the admin, so you have to know how much you need. When you use thick provisioning, there may be no room left for the system to shuffle data out of the used space, for example to pack matching blocks together and gain performance from such housekeeping. And when there is no capacity left, what happens? Correct: the system slows down until it freezes!

Intelligent systems, on the contrary, don't give you that many choices. You can choose thick provisioning without thinking about how much capacity to leave for the system's maintenance duties (for example, shuffling data), because the system already takes one of two approaches:

One: it lets you over-provision capacity up to a calculated threshold.

Two: on the contrary, it only shows you the space you can actually use, without you having to think about the issue described above, because the system knows what it needs and how often.

OK, there is something in between the two approaches described above: there are some really interesting systems that can use both ways in one.

As mentioned, it depends primarily on the system in use; you have to know what kind of approach your system takes to holding data.

Ariel Lindenfeld asks: What is the most important aspect to look for when evaluating all-flash storage arrays?
it_user221634 (User with 10,001+ employees)

Customers should consider not only performance, which is really table stakes for an all-flash array (AFA), but also the resilience design and the data services offered on the platform. AFAs are most often used for Tier-1 apps, so the definition of what is required to support a Tier-1 application should not be compromised to fit what a particular AFA does or does not support. Simplicity and interoperability with other non-AFA assets are also key: AFAs should support replication to, and data portability between, themselves and non-AFAs. Further, these capabilities should be native and not require additional hardware or software (virtual or physical). Lastly, don't get hung up on the minutiae of de-dupe, compression, compaction, or data-reduction metrics. All leading vendors have approaches that leverage what their technologies can do to make the most efficient use of flash and preserve its duty cycle. At the end of the day, you should compare two ratios: storage seen by the host / storage consumed on the array (or, put another way, provisioned vs. allocated), and $/GB. These are the most useful in comparing what you are getting for your money. The $/IOPS conversation is old and challenging to relate to real costs, as IOPS is a more ephemeral concept than GB.
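As a rough illustration of those two ratios, here is a minimal Python sketch comparing two hypothetical arrays on raw versus effective $/GB once an expected data-reduction ratio is applied. Every price, capacity, and reduction figure below is a made-up assumption, not a vendor quote.

```python
# Compare arrays on provisioned-vs-consumed efficiency and $/GB.
# All figures are illustrative assumptions.

def effective_cost_per_gb(price_usd: float, usable_gb: float,
                          data_reduction: float) -> float:
    """$/GB after the expected data-reduction ratio (host-seen / consumed)."""
    return price_usd / (usable_gb * data_reduction)

arrays = {
    "vendor_a": {"price_usd": 250_000, "usable_gb": 50_000, "data_reduction": 3.5},
    "vendor_b": {"price_usd": 200_000, "usable_gb": 40_000, "data_reduction": 4.0},
}

for name, a in arrays.items():
    raw = a["price_usd"] / a["usable_gb"]
    eff = effective_cost_per_gb(**a)
    print(f"{name}: ${raw:.2f}/GB raw, ${eff:.2f}/GB effective")
```

Run the same arithmetic with the reduction ratio you actually measured in a PoC, rather than the marketing number, and the ranking can flip.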

it_user208149 (Presales Technical Consultant Storage at a tech vendor with 1,001-5,000 employees)

The primary requirement for me is data reduction using deduplication algorithms. The second requirement is the SSDs' wear gauge: I need to be sure that the SSDs installed in a flash array will last as many years as possible. So the vendor with the best offering on those two topics has the best flash array.

it_user202749 (Principal Architect with 1,001-5,000 employees)

It depends on your requirements. Are you looking at flash for performance, ease of use, or improved data management?
Performance: you likely want an array with a larger block size, and one where compression and deduplication can be enabled or disabled on select volumes.

Data reduction and data management: deduplication and compression help manage storage growth; however, you do need to understand your data. If you have many Oracle databases, then block size will be key. Most products use a 4-8K block size. Oracle writes a unique ID on its data blocks, which makes them look like unique data. If your product has a smaller block size, your compression and deduplication will be better (below 4K is better for reduction, though performance may suffer slightly).

Deduplication: if you have several Test, Dev, and QA databases that are all copies of production, deduplication might help significantly. If deduplication is your goal, you need to look at the deduplication boundaries: products that offer array-wide or grid-wide deduplication will provide the most benefit.
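As a toy illustration of why block size and alignment matter to deduplication, the sketch below runs fixed-block dedup over two synthetic "clones" of the same data, each with a small unique header. It is not any vendor's algorithm, and the sizes are contrived: duplicates that line up at a 4K block size are missed entirely at 8K and 16K, because the unique headers shift the shared data off the block boundaries.

```python
# Toy fixed-block deduplication: hash each block, count unique hashes.
import hashlib
import os

def dedup_ratio(data: bytes, block_size: int) -> float:
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Two "clones" of 256 KB of shared random content, each with a unique
# 4 KB header, similar to Test/Dev copies of a production database:
shared = os.urandom(256 * 1024)
clone1 = b"ID-1".ljust(4096, b"\0") + shared
clone2 = b"ID-2".ljust(4096, b"\0") + shared

for bs in (4096, 8192, 16384):
    print(f"{bs // 1024}K blocks: {dedup_ratio(clone1 + clone2, bs):.2f}x reduction")
```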

Remote replication: if this is a requirement, you need to look at it carefully; each vendor does it differently, and some products need a separate inline appliance to accommodate replication. Replication with no rehydration of data is preferred, as this reduces WAN bandwidth requirements and remote storage volumes.
Ease of use: can the daily and weekly tasks be completed easily? How difficult is it to add or change storage volumes, LUNs, or aggregates? Do you need aggregates at all? Can you meet the RTO/RPO business requirements with the storage, or will you need a backup tool set to do this? You should include the cost of meeting the RTO/RPO in the solution cost evaluation.

Reporting: you need to look at the canned reports. Do they have the reports you need to sufficiently manage your data? And, equally important, do they have the reports needed to show the business the efficiencies provided by the storage infrastructure? Do you need bill-back reports (compression and deduplication rates, I/O latency reports, etc.)?

it_user213957 (User)

Understanding your particular use case is key to selecting the proper technology for your environment. We are big data, Oracle, data warehouse, non-OLTP, so we are hypersensitive to performance and HA. Flash and flash-hybrid are the future for the datacenter, but there are varying ways of skinning this cat, so pay close attention to the details. For me, HA is paramount: our storage must be NDU (non-disruptively upgradable) in all situations. Performance is important, but we are seeing numbers well in excess of anyone's requirements. So what's next? Sustained performance: how are write-cliff issues addressed? Datacenter cost is important if you're in a colo, so a smaller footprint and lower kW draw should be considered. Then, of course, cost. Be careful with the usable-capacity number; deduplication and compression are often factored into the marketing figures, so a PoC is important to understand true usable capacity.
I go back to my original statement: understanding what YOUR company needs is the most important piece of data one should take into the conversation with any and all of the suitors.

sedson52
Real User

Storage virtualization and the ability to tailor the storage solution to the needs of the end user and the associated compute resources are the biggest factors. Being able to easily implement tiers of storage performance is key to being more efficient with the money spent on storage.

Bob Whitcombe
Real User

AFAs have two major advantages over spinning disk, latency and IOPS, so you must understand your workload before jumping into an AFA selection process. Today, an AFA costs 3-5x more than a traditional array. When that price delta narrows to 50% more, I will probably go all-flash. Note that as we get to "all flash everywhere," new hyper-converged architectures will also figure prominently in my AFA analysis.

With the current gap in pricing, however, we must engineer a solution for a need. Quantify the need: am I providing EPIC for a 5,000-person hospital, analytics, transaction processing, etc.? What is the critical gating factor, and what are the target SLAs? Do I need more IOPS, more throughput, or lower latency? Will going to an AFA bring my response time down from 50ms today to 10ms tomorrow? Do I need to remember I have a 100ms SLA? As many note above, AFAs excel in many critical performance areas over traditional arrays, but I don't start with the array; I start with the workloads and the level of service my consumers require.

it_user609312 (Sr. Systems Administrator at a healthcare company with 501-1,000 employees)

How many IOPS are you averaging right now? Most organizations have far fewer IOPS than you would think. If you're looking to just speed up some apps, then use a flash array for compute and a SAN for long-term storage. Or, better yet, go with a high-end, easy-to-scale-out hyperconverged system and get the best of both worlds! Look for good dedupe and compression numbers.
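If you want a quick read on your current random-read IOPS before talking to vendors, a purpose-built tool such as fio is the right choice; purely to show what such a measurement does, here is a minimal single-threaded Python sketch. The file path, block size, and duration are placeholder assumptions, and the OS page cache will inflate the result unless the test file is much larger than RAM.

```python
# Crude single-threaded random 4K read IOPS probe against an existing file.
# "testfile.bin" is a placeholder; pre-create a large file there first.
import os
import random
import time

PATH = "testfile.bin"
BLOCK = 4096
DURATION = 10  # seconds

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

ops = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
    ops += 1
os.close(fd)

print(f"~{ops / DURATION:.0f} random 4K read IOPS (single thread, cache-affected)")
```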

it_user618633 (Founder and Group CEO at a tech services company with 51-200 employees)

There is not really any one single thing to consider. These systems are complex for a good reason, and the overall outcome will only be as good as the sum of all the moving parts...

1. How many disks and therefore how many total IOPS are available?
2. What class of SSD (SLC, cMLC, etc.)?
3. Are the controllers able to deliver the full capability of the disks behind them?
4. Are the controllers able to flood the interconnects in front of them?
5. Are there enough controllers to prevent down-time during failure or maintenance without significant degradation of performance?
6. Do the software smarts of the system provide added benefits such as dynamic prioritization of presented volumes allowing you to deliver multiple classes of storage from a single medium?
7. Can the controllers cope with running these software smarts at full speed without affecting the data transport?

And most importantly, perhaps this is the single most important thing...

8. What is the ecosystem behind the storage? That is, the vendor and their overall capability to patch known issues, properly test releases before publishing them, effectively communicate changes, and give you confidence in the long-term operation of the SAN. Do they have local support, and is it of high quality?


What Are All-Flash Storage Arrays?

The all-flash storage array has matured to the point where it now powers much of the growth in the enterprise storage business. Advances in the design, performance, and management capabilities of solid-state drives (SSDs), coupled with declines in cost, make flash storage viable for many workloads. The category includes NAND flash memory, SATA SSDs, and tiered storage. Enterprise storage is relentlessly demanding, though, so potential buyers need to think critically about what makes the best choice of flash array.

IT Central Station members who have experience with solid-state storage emphasize ease of use as an essential selection criterion. They suggest asking whether daily and weekly tasks can be completed easily. For example, how difficult is it to add or change storage volumes and logical unit numbers (LUNs)?

Some popular user comparisons: 

3PAR vs Unity

Nimble Storage vs 3PAR 

NetApp AFF vs Pure Storage

Performance also matters, though many members comment that virtually all flash arrays offer strong performance. Look closely at IOPS, reduced footprint, and lower power usage. In addition, they suggest asking whether one needs an array with a larger block size, and one where compression and deduplication can be enabled or disabled on select volumes. But, reviewers add, it's important to understand one's data. For example, with an Oracle database, block size will matter a great deal.

Data reduction and data management capabilities factor into many comments about solid-state drive selection. Deduplication and compression help manage storage growth. Reviewers also point out that recovery abilities matter with solid-state drives: no one wants data loss if the storage array powers down suddenly.

Data storage companies offer many different reporting options, and IT Central Station members stress the importance of this feature. For instance, do reports show the business the efficiencies provided by the storage infrastructure? Or do they report on the specifics of data compression, deduplication rates, I/O latency, and so forth?
