  1. It's very fast and very easy to use. It performs well and is both flexible and compatible. At this point, I don't know of anything that they could provide in a better way.
  2. Technical support is good. The cloning and snapshot features are the most valuable. With snapshot backup, we can clone a big database in minutes. We take a lot of snapshots for clients in different environments.
  3. Integration is easy with this product. This solution makes it easy to manage storage, provision new workloads, and scale up.
  4. The features which are most valuable are the availability of the system and the management. The speed is very good.
  5. The deduplication and compression capabilities are powerful. Good architecture, and it produces a lot of IOPS.
  6. The power systems are very reliable if you are running 24/7 operations. For ongoing mission-critical applications, it's the best solution. I like most of the features. Its speed, performance, and availability are valuable. We are implementing the data reduction technology the most.
  7. The most valuable features are the Metro clustering and disaster recovery. It's very easy to use.
  8. One time, we had a drive fail and we were notified before we even saw it on the device. They have a feature called Live Migrate; it's been very, very useful.

Advice From The Community

Read answers to top All-Flash Storage Arrays questions. 464,757 professionals have gotten help from our community of experts.
Rony_Sklar
What are some major benefits of all-flash storage arrays? Why should companies invest in all-flash as opposed to a different storage solution? 
AliBizmark
Real User

It has better performance than hybrid storage, and customers can get more IOPS. For mission-critical applications, customers are better off using all-flash arrays.

reviewer810810 (Systems Technician at a tech services company with 51-200 employees)
Real User

Speed (IOPS), increased reliability, compactness.

If your application does not need those, you can go with legacy solutions as long as the price justifies it.

Sooner or later, even price will be in favour of all-flash.

PhPr
Real User

In general, all-flash arrays have much better price/performance than HDD-only or hybrid arrays (provided that turning on DECO does not slow the array down - some vendors have this issue, so a PoC is needed): higher performance and lower power consumption per TB. Support costs for HDD-only and hybrid arrays will become more and more expensive as the share of HDDs shrinks and the main R&D moves to all-flash arrays. Of course, in some cases (e.g. video surveillance, D2D backup, and maybe several others) HDD-only arrays are the better option, so it's best to decide case by case.

reviewer1155498 (System Administrator at a university with 5,001-10,000 employees)
Real User

It all depends on the budget as well. If high IOPS are required for databases or other read/write-intensive services, and budget is not an issue, then go for all-flash. Otherwise, combining SSD with NL-SAS along with tiering/cache features will suffice in most conditions.

Mir Gulzar Ahmed
Real User

The selection of an all-flash storage solution depends on the requirements and the goal the customer is looking to achieve.

All-flash is not a fit for all requirements.

"If a customer needs a storage solution with high to extreme performance that is rack-space efficient, power and cooling efficient (environmentally friendly), and has built-in compression, then they should choose an all-flash solution."

It is not all about performance, even though storage manufacturers are very aggressive in selling their all-flash boxes on that basis.

Just think about SAS disks: those were used for performance and SATA for capacity.

Enterprise SAS disk technology is limited to 1.8TB per disk; that is why storage manufacturers are promoting all-flash to fill the capacity gap created by that limitation.

Modern flash disks cover both requirements (capacity + performance) but are still expensive (assuming the same raw capacity).

If a customer comes with requirements that fit an enterprise hybrid storage solution, like 40% SSD, 30% SAS, and 30% SATA with 400TB raw in total, then they should select all-flash instead, as the two solutions will not differ significantly in cost.

If a customer's requirements (performance + capacity + cost-effectiveness) fit SAS, then they should choose a SAS-based storage solution.

If a customer needs archival storage, then they should choose a SATA-based solution.

If a customer needs a storage solution with high to extreme performance that is rack-space, power, and cooling efficient (environmentally friendly), with built-in compression, then they should choose an all-flash solution.

If a customer needs fewer SSDs/flash disks, like 10% SSD + 45% SAS + 45% SATA, then a hybrid storage solution will be a good and cost-effective option.
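
To make the 400TB hybrid example above concrete, here is the arithmetic as a short Python sketch. The $/TB figures are invented placeholders purely for illustration, not vendor pricing:

```python
# Capacity split for the 40/30/30 hybrid mix described above, with a rough
# all-flash comparison. All $/TB figures are hypothetical placeholders.
raw_total_tb = 400
mix = {"SSD": 0.40, "SAS": 0.30, "SATA": 0.30}
price_per_tb = {"SSD": 800, "SAS": 350, "SATA": 150}   # hypothetical hybrid tier pricing
all_flash_per_tb = 500                                  # hypothetical all-flash pricing

for tier, share in mix.items():
    print(f"{tier}: {raw_total_tb * share:.0f} TB")

hybrid_cost = sum(raw_total_tb * share * price_per_tb[tier] for tier, share in mix.items())
all_flash_cost = raw_total_tb * all_flash_per_tb
print(f"hybrid ~${hybrid_cost:,.0f} vs. all-flash ~${all_flash_cost:,.0f}")
```

Under these placeholder assumptions the two configurations land in the same ballpark, which is the point the answer makes; with real quotes the comparison should of course be rerun.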

Krishnamohan Velpuri
User

All-flash arrays are more costly than other storage arrays, but they give more performance with lower latency than any other array type.

Ariel Lindenfeld
Let the community know what you think. Share your opinions now!
it_user208149 (Presales Technical Consultant Storage at a tech vendor with 1,001-5,000 employees)
Vendor

The primary requirement for me is data reduction using data de-duplication algorithms. The second requirement is the SSDs' wear gauge: I need to be sure that the SSDs installed in a flash array will last as many years as possible. So the vendor with the best offering on those two topics has the best flash array.

it_user221634 (User with 10,001+ employees)
Vendor

Customers should consider not only performance, which is really table stakes for an all-flash array, but also the resilience design and the data services offered on the platform. AFAs are most often used for Tier-1 apps, so the definition of what is required to support a Tier-1 application should not be compromised to fit what a particular AFA does or does not support. Simplicity and interoperability with other non-AFA assets are also key. AFAs should support replication to, and data portability between, themselves and non-AFAs. Further, these capabilities should be native and not require additional hardware or software (virtual or physical). Lastly, don't get hung up on the minutiae of de-dupe, compression, compaction, or data reduction metrics. All leading vendors have approaches that leverage what their technologies can do to make the most efficient use of flash and preserve its duty cycle. At the end of the day, you should compare two ratios: storage seen by the host / storage consumed on the array (or, put another way, provisioned vs. allocated), and $/GB. These are the most useful in comparing what you are getting for your money. The $/IOPS conversation is old and hard to relate to real costs, as IOPS is a more ephemeral concept than GB.
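
As a minimal, made-up illustration of comparing arrays on the two ratios suggested above (host-visible capacity over consumed capacity, and $/GB), something like the following works; every figure here is hypothetical:

```python
# Illustrative comparison of two arrays using the ratios suggested above.
# All capacities and prices are hypothetical placeholders, not vendor data.
arrays = {
    "Array A": {"host_visible_tb": 500, "consumed_tb": 180, "price_usd": 400_000},
    "Array B": {"host_visible_tb": 500, "consumed_tb": 250, "price_usd": 350_000},
}

for name, a in arrays.items():
    efficiency = a["host_visible_tb"] / a["consumed_tb"]         # provisioned vs. allocated
    usd_per_gb = a["price_usd"] / (a["host_visible_tb"] * 1024)  # $ per host-visible GB
    print(f"{name}: efficiency {efficiency:.2f}:1, ${usd_per_gb:.2f}/GB")
```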

it_user213957 (User)
Vendor

Understanding your particular use case is key to selecting the proper technology for your environment. We are big data, Oracle, data warehouse - non-OLTP - so we are hypersensitive to performance and HA. Flash and flash hybrid are the future for the datacenter, but there are varying ways of skinning this cat, so pay close attention to the details. For me, HA is paramount; our storage must be NDU in all situations. Performance is important, but we are seeing numbers well in excess of anyone's requirements. So what's next? Sustained performance: how are write-cliff issues addressed? Datacenter cost, if you are in a co-lo, is important, so a smaller footprint and lower kW draw should be considered. Then, of course, cost: be careful with the "usable" number. De-duplication and compression are often factored into the marketing, so a PoC is important to understand true usable capacity.
I go back to my original statement: understanding what YOUR company needs is the most important piece of data one should take into the conversation with any and all of the suitors.

it_user202749 (Principal Architect with 1,001-5,000 employees)
Vendor

It depends on your requirements. Are you looking at flash for performance, ease of use, or improved data management?
Performance: you likely want an array with a larger block size, and one where compression and de-duplication can be enabled or disabled on select volumes.

Data reduction or data management: de-duplication and compression help manage storage growth; however, you do need to understand your data. If you have many Oracle databases, then block size will be key. Most products use a 4-8K block size. Oracle writes a unique ID on its data blocks, which makes them look like unique data. If your product uses a smaller block size, your compression and de-duplication will be better (below 4K is better, but performance may suffer slightly).

De-duplication: if you have several test, dev, and QA databases that are all copies of production, de-duplication might help significantly. If de-duplication is your goal, you need to look at the de-duplication boundaries: products that offer array-wide or grid-wide de-duplication will provide the most benefit.

Remote replication: if this is a requirement, look at it carefully, as each vendor does it differently; some products need a separate inline appliance to accommodate replication. Replication with no rehydration of data is preferred, as this will reduce WAN bandwidth requirements and remote storage volumes.
Ease of use: can daily and weekly tasks be completed easily? How difficult is it to add or change storage volumes, LUNs, or aggregates? Do you need aggregates at all? Can you meet the business RTO/RPO requirements with the storage, or will you need a backup tool set to do this? You should include the cost of meeting the RTO/RPO in the solution cost evaluation.

Reporting: look at the canned reports - do they include the reports you need to sufficiently manage your data? Equally important, do they have the reports needed to show the business the efficiencies provided by the storage infrastructure? Do you need bill-back reports? (Compression and de-duplication rates, I/O latency reports, etc.)
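
To make the block-size point above more tangible, here is a small, purely synthetic sketch - not any vendor's dedupe engine - showing how chunking the same data at different block sizes changes how much duplication a dedupe process can find. The per-block "unique header" stands in for the Oracle block ID mentioned above:

```python
# Toy illustration of why a smaller block size can expose more duplicate data.
# Synthetic data only; real dedupe engines fingerprint and index differently.
import hashlib
import random

random.seed(0)
base = bytes(random.getrandbits(8) for _ in range(64 * 1024))
# Simulate a "copy" whose 8K blocks each carry a small unique header.
copy = bytearray(base)
for off in range(0, len(copy), 8192):
    copy[off:off + 16] = random.getrandbits(128).to_bytes(16, "big")
data = base + bytes(copy)

def dedupe_ratio(buf: bytes, block_size: int) -> float:
    blocks = [buf[i:i + block_size] for i in range(0, len(buf), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

for bs in (4096, 8192, 16384):
    print(f"{bs // 1024}K blocks: dedupe ratio {dedupe_ratio(data, bs):.2f}:1")
```

With this toy data, 4K chunking still finds the unchanged halves of each modified block, while 8K and 16K chunking see every block as unique.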

Terence Canaday
Real User

Flash changed the whole way we address storage in the first place. Being able to use components whose lifetime is measured not in hours but in write cycles makes it possible to use them far longer than three or five years. Flash storage systems make it possible to change the whole lifecycle approach compared to the early days, when systems were end-of-life after five years at the latest and you had to buy a new system with no chance of reusing the SSDs or DFMs after the five-year lifespan of a traditional system.
Where you discussed aspects of IOPS and RAID groups in the past, you now discuss dedupe and compression efficiencies and the lifetime of the system. >100,000 IOPS with the smallest systems should be enough for everyone. :-)

sedson52
Real User

Storage virtualization and the ability to tailor the storage solution to the needs of the end user and the associated compute resources are the biggest factors. Being able to easily implement tiers of storage performance is key to being more efficient with the money spent on storage.

Bob Whitcombe
Real User

AFAs have two major advantages over spinning disk - latency and IOPS - so you must understand your workload before jumping into an AFA selection process. Today, an AFA costs 3-5x more than a traditional array. When that price delta narrows to 50% more, I will probably go all-flash. Note that as we get to "all flash everywhere," new hyper-converged architectures will also figure prominently in my AFA analysis.

With the current gap in pricing, however, we must engineer a solution for a need. Quantify the need - am I providing EPIC for a 5,000-person hospital, analytics, transaction processing, etc.? What is the critical gating factor and what are the target SLAs? Do I need more IOPS, more throughput, lower latency? Will going to an AFA bring my response time down from 50ms today to 10ms tomorrow? Or do I only need to meet a 100ms SLA? As many note above, AFAs excel in many critical performance areas over traditional arrays - but I don't start with the array; I start with the workloads and what level of service my consumers require.

it_user609312 (Sr. Systems Administrator at a healthcare company with 501-1,000 employees)
Vendor

How many IOPS are you averaging right now? Most organizations have far fewer IOPS than you would think. If you're looking to just speed up some apps, then use a flash array for compute and a SAN for long-term storage. Or better yet, go with a high-end, easy-to-scale-out hyperconverged system and get the best of both worlds! Look for good dedupe and compression numbers.
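
If you want a rough baseline before a sizing exercise, a quick sketch like the one below estimates current IOPS on a Linux host by sampling /proc/diskstats twice; "sda" is a placeholder device name, and a real assessment would capture peaks over a much longer window:

```python
# Rough IOPS estimate for one block device on Linux, sampled over a short window.
# Assumes /proc/diskstats is present; "sda" is a placeholder device name.
import time

def completed_ios(device: str) -> int:
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == device:
                reads_completed = int(parts[3])
                writes_completed = int(parts[7])
                return reads_completed + writes_completed
    raise ValueError(f"device {device!r} not found")

INTERVAL = 10  # seconds
before = completed_ios("sda")
time.sleep(INTERVAL)
after = completed_ios("sda")
print(f"~{(after - before) / INTERVAL:.0f} IOPS over the last {INTERVAL}s")
```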

Rony_Sklar
How do thick and thin provisioning affect all-flash storage array performance? What are the relative benefits of each?
Mark S. Cruce
User

No performance implications. It's just a provisioning strategy.

In thick provisioning, if I need 1GB, I provision 1GB, even if only 10MB is being used. In thin provisioning, I initially provision 10MB and, as the need for more storage grows, I grow the volume with it up to the maximum of 1GB.

Most everyone uses thin provisioning unless there's a specific reason not to.
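
A minimal sketch of the accounting difference described above, using invented numbers and a toy Pool class (not any array's actual API):

```python
# Minimal model of thick vs. thin allocation accounting (illustrative only).
class Pool:
    def __init__(self, physical_gb: float):
        self.physical_gb = physical_gb
        self.reserved_gb = 0.0   # capacity taken out of the pool up front
        self.written_gb = 0.0    # capacity actually consumed by data

    def provision(self, size_gb: float, thin: bool) -> dict:
        if not thin:
            # Thick: the full size is reserved immediately, used or not.
            self.reserved_gb += size_gb
        return {"size_gb": size_gb, "thin": thin, "used_gb": 0.0}

    def write(self, volume: dict, amount_gb: float):
        volume["used_gb"] += amount_gb
        self.written_gb += amount_gb
        if volume["thin"]:
            # Thin: backend space is consumed only as data lands.
            self.reserved_gb += amount_gb

pool = Pool(physical_gb=1000)
thick_vol = pool.provision(100, thin=False)  # 100 GB gone from the pool now
thin_vol = pool.provision(100, thin=True)    # nothing consumed yet
pool.write(thin_vol, 10)                     # only 10 GB consumed so far
print(pool.reserved_gb)                      # 110.0
```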

Terence Canaday
Real User

Thick and thin don't make that much difference with all-flash arrays nowadays, because the system recognizes "free" space and won't consume it physically, thanks to zero-detection features and UNMAP.
BUT: there are still systems out there which have a performance impact with thin provisioning, related to the first mapping of a free block, which can be noticeable in terms of user experience. If you are unsure, you can always use thick provisioning with eager zeroing, which for many operating systems is the most compatible setting to use. :-)


Mir Gulzar Ahmed
Real User

Option 1: Thick provisioning is "provisioning the storage space (100GB) now."

Option 2: Thin provisioning is "provisioning the storage space (100GB) on demand."

For option 1, 100GB of disk space is immediately deducted from the storage/back-end disks.

If your data today is 1GB and the most it will grow to in 3 years is 50GB, then you are putting unnecessary I/O load on the back-end storage, as it will read/write as if it were 100GB from day one. However, the 100GB is already formatted and provisioned.

For option 2, the 100GB is provisioned at the front end but not immediately deducted from the storage/back-end disks. Instead, it is used/filled up when required.

If your data today is 1GB and it grows to at most 50GB in 3 years, then the I/O load on the back-end storage will read/write as if it were only 50GB after 3 years.

However, the 100GB is not formatted on the back-end storage but only provisioned at the front end/host; so whenever a read/write request touches a new block/cell, it also asks the storage system to format that new block/cell, which results in extra I/O to and from the storage system.

Option 2 is better if your data today is 1GB and will grow to at most 50GB in 3 years.

There is no benefit to thin provisioning if you need the full 100GB right now.

I do not recommend thin provisioning for databases. Only use it where you are sure you will get additional back-end space later on.

For all-flash storage systems, use built-in compression and de-duplication instead. Thin provisioning should be used only for less critical or non-critical applications.

Thin provisioning actually makes you lazy about correctly provisioning your storage system.

Krishnamohan Velpuri
User

Let's take an example.

Suppose I have 100GB of storage in my array and a customer requests a 150GB LUN/volume. We can provision 150GB using thin provisioning, but only up to 100GB or less using thick provisioning. Of course, we have to keep about 10% free space for I/O and system operations.

There is no concept of over-provisioning with thick, but with thin we may over-provision by allocating more space than is available. In that case we have to monitor the storage carefully and keep an eye on utilization.
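
A small sketch of the monitoring this implies - tracking the oversubscription ratio and physical utilization against a headroom threshold. The volumes and the 85% threshold are hypothetical examples:

```python
# Track thin-provisioning oversubscription and physical utilization (illustrative).
physical_gb = 100
volumes = [
    {"name": "db01", "provisioned_gb": 150, "used_gb": 40},
    {"name": "app01", "provisioned_gb": 80, "used_gb": 25},
]

provisioned = sum(v["provisioned_gb"] for v in volumes)
used = sum(v["used_gb"] for v in volumes)

oversubscription = provisioned / physical_gb   # 2.3x in this example
utilization = used / physical_gb               # 65% of physical capacity written

print(f"oversubscription {oversubscription:.1f}x, utilization {utilization:.0%}")
if utilization > 0.85:                         # keep ~10-15% headroom
    print("WARNING: pool nearly full - add capacity or migrate volumes")
```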

Christian Baldauf
Real User

With thick provisioning, the allocated space is reserved completely. The provisioned capacity is immediately subtracted from the total capacity of the storage and is therefore no longer available.

With thin provisioning, the configured size is displayed, but only as much capacity as has actually been written is consumed on the storage.

This makes over-provisioning possible, so the capacity of the storage can be better utilized.

Mohamed Y Ahmed
Reseller

Thick and thin provisioning is a workload-related configuration choice. As a simple example, you should use the thick option when creating a virtual hard disk that will hold database storage, so the disk is ready for heavy writing and transactions are not affected by expansion of the file. Thin provisioning should be used when the disk will hold file-server data such as images, where the delay of growing the virtual hard disk file will not impact data writing.

For heavy data writing, thick provisioning is more suitable.

For heavy data reading, thin provisioning is fine.

I hope my answer helps you.



Marc Staimer
Real User

Applications require shared block storage to be provisioned. The provisioning is by capacity per LUN (logical unit number) or volume. Thick provisioning means all of the capacity allocated is owned and tied up by that application whether it's used or not. Unused capacity is not sharable by other applications. Thin provisioning essentially virtualizes the provisioning so the application thinks it has a certain amount of exclusive capacity when in reality, it's shared. This makes the capacity more flexible and reduces over provisioning.

Fatih Altunbas
Real User

First things first: you have to think about what your target is. As the other colleagues mentioned, when you use thick provisioning, all of the capacity is consumed at once; when you start with thin-provisioned systems, the system allocates just the capacity that is needed.

Unfortunately, storage systems with few intelligence features react like dumb systems - you are the admin, you have to know how much you need. When you use thick provisioning, there may be no space left for the system to move data around (for example, to pack matching blocks together and gain performance from such actions). And when there is no capacity left, what happens? Correct: the system slows down until it freezes.

Intelligent systems, by contrast, don't give you that many choices: you can choose thick provisioning without having to think about how much capacity to leave for the system's maintenance duties (for example, moving data around). Such a system takes one of two approaches:

One: it lets you over-provision the capacity up to a calculated threshold.

Two: on the contrary, it only shows you the space you can actually use, without you having to think about the issue above, because the system knows what it needs and how often.

OK - there is also something in between: there are really interesting systems which combine both approaches.

As mentioned, it depends primarily on the system in use - you have to know what kind of approach that system takes to holding data.

See more All-Flash Storage Arrays questions »

What Are All-Flash Storage Arrays?

The all-flash storage array has matured to the point where it is now powering much of the growth in the enterprise storage business. Advances in the design, performance, and management capabilities of solid-state drives (SSDs), coupled with declines in cost, make flash storage viable for many workloads. The category includes NAND flash memory, SATA SSDs, and tiered storage. Enterprise storage is relentlessly demanding, though, so potential buyers need to think critically about what makes the best choice of flash array.

IT Central Station members who have experience with solid state storage emphasize ease of use as an essential selection criterion.  They suggest asking if daily and weekly tasks can be completed easily. For example, how difficult is it to add or change storage volumes and logical unit numbers (LUNs)?

Some popular user comparisons: 

3PAR vs Unity

Nimble Storage vs 3PAR 

NetApp AFF vs Pure Storage

Performance also matters, though many members comment that virtually all flash arrays offer strong performance. Look closely at input/output operations per second (IOPS), reduced footprint, and lower power usage. In addition, they suggest asking whether one needs an array with a larger block size and one where compression and de-duplication can be enabled or disabled on select volumes. But, reviewers add, it's important to understand one's data. For example, with an Oracle database, block size will matter a great deal.

Data reduction and data management capabilities factor into many comments about solid-state drive selection. Deduplication and compression help manage storage growth. Reviewers also point out that recovery capabilities matter with solid-state drives. No one wants data loss if the storage array powers down suddenly.

Data storage companies offer many different reporting options. IT Central Station members stress the importance of this feature. For instance, do reports show the business the efficiencies provided by the storage infrastructure? Or do they report on the specifics of data compression, de-duplication rates, I/O latency, and so forth?
