When evaluating Enterprise Flash Array Storage, what aspect do you think is the most important to look for?


22 Answers

Bob Whitcombe

AFAs have two major advantages over spinning disk - latency and IOPS - so you must understand your workload before jumping into an AFA selection process. Today, an AFA costs 3-5x more than a traditional array; when that price delta narrows to around 50% more, I will probably go all-flash. Note that as we get to "all flash everywhere", new hyper-converged architectures will also figure prominently in my AFA analysis.

With the current gap in pricing, however, we must engineer a solution for a need. Quantify the need: am I providing EPIC for a 5,000-person hospital, analytics, transaction processing, etc.? What is the critical gating factor and what are the target SLAs? Do I need more IOPS, more throughput, lower latency? Will going to an AFA bring my response time down from 50ms today to 10ms tomorrow? Or do I need to remember that my SLA is actually 100ms? As many note here, AFAs excel in many critical performance areas over traditional arrays - but I don't start with the array; I start with the workloads and what level of service my consumers require.
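As a crude illustration of that workload-first mindset, here is a minimal sketch with purely made-up numbers (the SLA, latency and IOPS figures are assumptions, not measurements from any product) of the kind of check worth doing before paying the AFA premium:

```python
# Hypothetical back-of-envelope check: does the workload actually need flash-class performance?
# All figures below are illustrative placeholders, not vendor data.

workload = {
    "name": "EPIC-style OLTP",
    "measured_p95_latency_ms": 50.0,   # what the current array delivers
    "sla_latency_ms": 100.0,           # what the business actually requires
    "required_iops": 40_000,
    "current_array_iops": 25_000,
}

afa_price_multiplier = 4.0   # "3-5x more than a traditional array" -> assume 4x

def needs_afa(w):
    """Return True only if the current platform misses either the latency SLA or the IOPS need."""
    misses_latency = w["measured_p95_latency_ms"] > w["sla_latency_ms"]
    misses_iops = w["current_array_iops"] < w["required_iops"]
    return misses_latency or misses_iops

if needs_afa(workload):
    print(f"{workload['name']}: current platform misses the requirement; "
          f"an AFA at ~{afa_price_multiplier:.0f}x the cost may be justified.")
else:
    print(f"{workload['name']}: requirements are already met; "
          f"paying ~{afa_price_multiplier:.0f}x more buys headroom, not a need.")
```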

01 March 17

How many IOPS are you averaging right now? Most organizations have far fewer IOPS than you would think. If you're looking to just speed up some apps, then use a flash array for compute and a SAN for long-term storage. Or better yet, go with a high-end, easy-to-scale-out hyperconverged system and get the best of both worlds! Look for good dedupe and compression numbers.
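If you don't already know your numbers, you can get a rough answer on a Linux host by sampling /proc/diskstats; this is only a sketch (the device name and interval are assumptions), not a replacement for proper array-side monitoring:

```python
# Rough IOPS sampler for Linux: reads /proc/diskstats twice and computes the delta.
# Device name and sampling interval are assumptions; adjust for your environment.
import time

DEVICE = "sda"       # assumed device
INTERVAL_S = 10      # assumed sampling window

def completed_ios(device):
    """Return (reads_completed, writes_completed) for one block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3]), int(fields[7])  # field 4: reads, field 8: writes
    raise ValueError(f"device {device!r} not found")

r1, w1 = completed_ios(DEVICE)
time.sleep(INTERVAL_S)
r2, w2 = completed_ios(DEVICE)

read_iops = (r2 - r1) / INTERVAL_S
write_iops = (w2 - w1) / INTERVAL_S
print(f"{DEVICE}: ~{read_iops:.0f} read IOPS, ~{write_iops:.0f} write IOPS over {INTERVAL_S}s")
```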

01 March 17
Bruce Trevarthen, Real User, TOP 20

There isn't really any one single thing to consider. These systems are complex for good reason, and the overall outcome will only be as good as the sum of all the moving parts...

1. How many disks, and therefore how many total IOPS, are available?
2. What class of SSD (SLC, cMLC, etc.)?
3. Are the controllers able to deliver the full capability of the disks behind them?
4. Are the controllers able to flood the interconnects in front of them? (See the back-of-envelope sketch after this list.)
5. Are there enough controllers to prevent downtime during failure or maintenance without significant degradation of performance?
6. Do the software smarts of the system provide added benefits, such as dynamic prioritization of presented volumes, allowing you to deliver multiple classes of storage from a single medium?
7. Can the controllers cope with running these software smarts at full speed without affecting the data transport?

And finally, perhaps the single most important thing...

1. What is the ecosystem behind the storage, i.e. the vendor and their overall capability to patch known issues, properly test releases before publishing them, effectively communicate changes, and give you confidence in the long-term operation of the SAN? Do they have local support, and is it of high quality?
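As a back-of-envelope illustration of points 1, 3, 4 and 5 above, here is a hypothetical sketch; every per-device figure is an assumption you would replace with datasheet or PoC numbers:

```python
# Back-of-envelope bottleneck check for an AFA configuration.
# Every figure below is a placeholder assumption; use real datasheet / PoC values.

ssd_count = 24
iops_per_ssd = 90_000            # assumed sustained small-block IOPS per drive
mbps_per_ssd = 450               # assumed sustained throughput per drive (MB/s)

controllers = 2
iops_per_controller = 600_000    # assumed controller limit with data services enabled
mbps_per_controller = 6_000      # assumed controller bandwidth (MB/s)

front_end_ports = 8
mbps_per_port = 1_600            # assumed 16Gb FC port, ~1.6 GB/s usable

disk_iops = ssd_count * iops_per_ssd
disk_mbps = ssd_count * mbps_per_ssd
ctrl_iops = controllers * iops_per_controller
ctrl_mbps = controllers * mbps_per_controller
port_mbps = front_end_ports * mbps_per_port

print(f"Disks can supply     : {disk_iops:,} IOPS, {disk_mbps:,} MB/s")
print(f"Controllers can move : {ctrl_iops:,} IOPS, {ctrl_mbps:,} MB/s")
print(f"Front-end ports carry: {port_mbps:,} MB/s")
print(f"Deliverable          : {min(disk_iops, ctrl_iops):,} IOPS, "
      f"{min(disk_mbps, ctrl_mbps, port_mbps):,} MB/s")

# Point 5: what is left if one controller is out for maintenance or has failed?
print(f"With one controller down: {(controllers - 1) * iops_per_controller:,} IOPS")
```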

01 March 17
Adam Wick, Vendor

When considering an AFA, you should factor in the obvious: it is always going to scale with SSD. What most clients find is that within the first year of filling up an AFA, their data naturally grows, creating new hot data while they are now also retaining warm data. A tiered solution would be nice, but with an AFA you have to expand at scale, and although you may only need to expand your lower tier, you are going to have to buy SSDs. I've seen clients buy a separate SAN (not their original plan) once they saw their SSD expansion quote for an AFA design. So consider the TCO and total costs. If I'm allowed to mention it, an architecture like HPE 3PAR allows an organization to start out with an AFA model; then some months later, when scale becomes a topic, the client can add disk to the array and really maximize TCO.
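To make the TCO point concrete, here is a purely illustrative sketch; the prices, capacities and warm-data fraction are invented assumptions, not quotes from HPE or anyone else:

```python
# Illustrative-only expansion cost comparison: growing an AFA with more SSD
# versus adding a lower (nearline) tier for warm data.
# All prices and capacities are made-up assumptions, not vendor quotes.

growth_tb = 100            # extra capacity needed in year one
warm_fraction = 0.7        # assumed share of that growth that is warm, not hot

price_per_tb_ssd = 1_500   # assumed effective $/TB for AFA expansion
price_per_tb_nl = 300      # assumed effective $/TB for nearline expansion

all_flash_cost = growth_tb * price_per_tb_ssd
tiered_cost = (growth_tb * (1 - warm_fraction) * price_per_tb_ssd
               + growth_tb * warm_fraction * price_per_tb_nl)

print(f"Expand all-flash : ${all_flash_cost:,.0f}")
print(f"Expand tiered    : ${tiered_cost:,.0f}")
print(f"Delta            : ${all_flash_cost - tiered_cost:,.0f}")
```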

01 March 17

Dear friend,
All-flash arrays are amazing in many respects. The first thing you should evaluate is the cost versus the benefit the application will get from it. Will the reduction in I/O latency actually affect the business result? Regarding the technologies available in the market, you should look for the best (lowest) latency and the right software features, together with the highest endurance and capacity. Let's look at each factor:
- Latency: lower is better, but remember that hard disks average around 5 ms, so anything below 1 ms can give your application enough acceleration. Microlatency below 200 microseconds will be reached only by some flash systems that use ASICs rather than RAID controllers, for example IBM FlashSystem.
- Software features: if you only need to accelerate a specific application, use a flash system without any software features such as compression, snapshots, or virtualization; this is what we call tier 0 storage. If you need a flash array for general storage use (tier 1 storage), find a supplier that gives you a subsystem with all the enterprise features: snapshots, migration, and virtualization of other storage, such as IBM's V9000.
- Endurance: all flash media are based on NAND memory, and each NAND cell can only be written a very limited number of times. Reads are not a problem, but each time you rewrite a cell it gets weaker, so find out how the supplier handles garbage collection (every supplier does this). Does the supplier control it, or do they rely on the SSD's internal microcode and not control it? The latter is a bad idea. Is there other intelligence managing the data in each cell? For example, some manufacturers have an internal cell auto-tier that moves data that changes less often onto cells that are already weak, extending their usable life.
- Capacity: what is the real, usable capacity that the manufacturer commits to achieving? Be careful: some sell with a promise of reaching a certain capacity and later just give you more media when it doesn't reach the promised amount. Test it with your data before buying. Getting more media for free will not give you more physical space in your datacenter or reduce your electricity bill.
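As a hedged sketch of that capacity check, with made-up numbers standing in for a real PoC result:

```python
# Sanity-check a vendor's "effective capacity" promise against your own data.
# The reduction ratio below must come from a test with YOUR data; all figures are placeholders.

raw_tb = 50.0                  # physical flash shipped
promised_effective_tb = 200.0  # what the quote assumes (implies 4:1 reduction)
measured_reduction = 2.5       # ratio actually achieved in a PoC with your data (assumption)

achievable_tb = raw_tb * measured_reduction
shortfall_tb = promised_effective_tb - achievable_tb

print(f"Promised effective capacity : {promised_effective_tb:.0f} TB")
print(f"Achievable with your data   : {achievable_tb:.0f} TB")
if shortfall_tb > 0:
    print(f"Shortfall: {shortfall_tb:.0f} TB - expect the vendor to ship more media, "
          "which still costs rack space and power.")
```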

In summary: check whether you really need flash now, how the array handles garbage collection, whether the software features fit your needs, and whether the capacity reduction is real for your data. Don't buy without testing with your own data.

Hope the information was helpful
Christian Paglioli

01 March 17

1. Response time
2. Connectivity of the flash storage with hosts, considering the large number of IOPS the storage generates
3. Backup strategy for storage-based backups
4. Ability to scale out, considering most flash storage systems are appliances

01 March 17

In my experience of being involved in independent flash vendor comparisons, often in the vendors' own labs, I have observed that there is no single right answer; it depends on the workload(s) being driven.

For example, highly write-intensive workloads using random data patterns and larger block sizes place different demands on flash solutions than highly read-intensive, sequential ones with steady 4K block sizes.

I have tested many different flash products, and sometimes a vendor performs to a very high standard for one workload, and then for the next workload we see unacceptable latencies, often exceeding 50ms, when running scaled-up workload levels.

The workload demands coupled with an appropriate product configuration determine the best outcomes in my experience.

I would encourage you to look at the workload profiles and how they will be mixed. Other factors also impact performance, such as the vendor's support for the protocol: FC vs iSCSI vs NFS support will often lead to wild performance variations between vendors.

How vendors cope with the metadata command mix also massively affects performance.

As an example, I have seen two comparable flash product configurations where one hit sub-5ms latency for a 50K IOPS, mostly write-intensive workload at a 5:1 data reduction ratio, while the next vendor hit 80ms latency for the exact same workload conditions. Until you test and compare them at scale, it's nothing more than guesswork.
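To echo the point about testing at scale, here is a rough sketch of driving one write-heavy profile with the open-source fio load generator; the target device and job parameters are assumptions, and it must never be pointed at a LUN holding real data:

```python
# Sketch: drive a write-heavy random workload against a scratch LUN with fio and
# capture the output. The target path and job parameters are assumptions -
# never point this at a device holding real data.
import subprocess

TARGET = "/dev/mapper/test_lun"   # assumed scratch device presented by the array under test

cmd = [
    "fio",
    "--name=write_heavy_mix",
    f"--filename={TARGET}",
    "--rw=randrw", "--rwmixwrite=70",   # ~70% random writes, 30% reads
    "--bs=16k",                         # larger blocks, as in the write-intensive example
    "--iodepth=32", "--numjobs=8",      # scale up outstanding I/O to stress the array
    "--direct=1", "--ioengine=libaio",
    "--time_based", "--runtime=300",
    "--group_reporting",
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)   # compare clat percentiles (p99/p99.9) per vendor, not just averages
```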

There is a free-to-use portal at WorkloadCentral.com with workload analytics, where storage logs can be uploaded and analysed to better understand current workload behaviour. There is also the chance to see other workloads, such as Oracle, SQL and VDI, and download them to replay in your own lab against products under consideration.

Good luck with your investigations!

01 March 17

When evaluating enterprise-class all-flash arrays, there are quite a few things to look for, as these arrays differ fundamentally from standard spinning-disk-based arrays or hybrid arrays. It comes down to flash/SSD as a medium:

1. What type of SSDs are being used: eMLC, TLC, 3D NAND, etc.?
2. Writes are particularly crucial, as SSDs have a preset, finite number of write cycles. How does the underlying intelligence handle writes?
3. What data efficiency measures are used (de-duplication and compression), and are these inline or post-process? (See the sketch after this list.)
4. What storage protocols are supported?
5. Are capacity enhancements such as erasure coding supported?
6. If the all-flash array is scale-out in nature (e.g. XtremIO), what is the interconnect protocol?
7. Points of integration with orchestration/automation and data management tools
8. Does the array support capabilities such as external storage virtualization?
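As an illustrative-only model of points 2 and 3, the sketch below shows how inline data reduction changes how much NAND is actually written per day; every ratio is an assumption to be replaced with PoC results:

```python
# Illustrative arithmetic: how inline data reduction and write amplification
# change the amount of NAND actually written per day.
# All ratios are placeholder assumptions; real values come from a PoC.

host_writes_tb_per_day = 10.0
dedup_ratio = 2.0          # assumed: half the incoming data is duplicate
compression_ratio = 1.5    # assumed: remaining data compresses 1.5:1
write_amplification = 1.3  # assumed internal amplification after garbage collection

nand_writes_tb_per_day = (host_writes_tb_per_day / dedup_ratio
                          / compression_ratio * write_amplification)

print(f"Host writes                              : {host_writes_tb_per_day:.1f} TB/day")
print(f"NAND actually written (inline reduction) : {nand_writes_tb_per_day:.2f} TB/day")
# A post-process design would land the full 10 TB/day on flash first and reduce it later,
# consuming more write cycles for the same host workload.
```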

01 March 17

The most important aspect is to make sure the application requirements are met, the same as with traditional storage arrays. There are plenty of advanced functions available via snapshots, clones, and recovery options; you need to make sure you understand exactly how the new functions will be used in your environment.

01 March 17
Paul Bell, MCSE, Real User, TOP 10

Performance in a shared environment with mixed workloads.
API access for backups, clones, and deployments, to support an increasingly Agile/DevOps world.
Analytics on usage, trends, hotspots, and bottlenecks.
vCenter/Hyper-V integration.

01 March 17
Rick Karbowski, Consultant, TOP 20

There are relatively few systems that need the high performance of all-flash, especially as all-flash capacities go up from dozens of terabytes to several petabytes. Server and storage virtualization can deliver similar performance at a fraction of the cost. Make sure your need truly warrants it.

01 March 17

The choice many people on a tighter budget make comes down to price per gigabyte.

01 March 17
Hitesh Chhaya, Real User, TOP 10

A block-device layer can emulate a disk drive, so that a general-purpose file system can be used on a flash-based storage device.

01 March 17
Chris Childerhose, Real User, TOP REVIEWER, ELITE SQUAD

Further to all the great suggestions above, another thing to look at is: what sets the vendor apart from all the other vendors? What makes them unique or different, rather than similar to all the others? An example of this would be an analytics website or a unique dashboard.

01 March 17

All-flash arrays are for primary storage; therefore, data protection (snapshots), DevOps (zero-copy clones), and programmability are key attributes.

31 October 16

Recovery is also a consideration. Even though SSDs are fast at writing blocks, we still need to make sure there is no data loss if the power goes down suddenly.

08 September 15
Eli Lopez, Vendor

I would want to know what UNIQUE features the array has - they all pretty much have very similar features - give me what differentiates them (if anything...)

04 June 15
Miguel Angel Saiz Fernandez

In addition to the typical flash characteristics: the type of technology used (SLC, eMLC, cMLC, TLC, 16LC, ...) directly affects pricing and performance (IOPS and < 1 ms average latency), along with the amount of over-provisioning, the right wear levelling for the I/O pattern (random/sequential), and the management of garbage collection and write amplification (inline data compression and dedup are very important efficiency factors), together with longer drive endurance and DWPD (Device Writes Per Day).
The typical tier-1 characteristics will also continue to be equally or more important: high performance; a scale-out architecture with active/active LUN nodes/controllers and multi-tenancy; 99.9999% availability; DR (synchronous replication with RPO = 0 and asynchronous replication with RPO < 5 minutes, with consistency groups); application integration (VMware, Hyper-V, Oracle, SQL, Exchange, SAP, ...); efficiency (thin techniques); easy, intuitive management (self-configuring, optimizing and tuning); and data mobility.
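As a quick, hypothetical endurance check using DWPD (all inputs below are assumptions for illustration):

```python
# Quick endurance check using DWPD. All inputs are assumptions for illustration.

drive_capacity_tb = 3.84
dwpd = 1.0                 # assumed rating: one full drive write per day over the warranty
warranty_years = 5
drives_in_array = 24

expected_array_writes_tb_per_day = 12.0   # assumed steady-state writes landing on flash

endurance_tb_per_drive = drive_capacity_tb * dwpd * 365 * warranty_years
per_drive_writes_tb_per_day = expected_array_writes_tb_per_day / drives_in_array
years_to_wear_out = endurance_tb_per_drive / (per_drive_writes_tb_per_day * 365)

print(f"Rated endurance per drive : {endurance_tb_per_drive:,.0f} TB written")
print(f"Actual load per drive     : {per_drive_writes_tb_per_day:.2f} TB/day")
print(f"Projected wear-out        : {years_to_wear_out:.1f} years "
      f"(warranty is {warranty_years} years)")
```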

19 May 15

Customers should consider not only performance, which is really table stakes for an all-flash array, but also the resilience design and the data services offered on the platform. AFAs are most often used for tier-1 apps, so the definition of what is required to support a tier-1 application should not be compromised to fit what a particular AFA does or does not support.

Simplicity and interoperability with other, non-AFA assets is also key. AFAs should support replication to, and data portability between, themselves and non-AFAs. Further, these capabilities should be native and not require additional hardware or software (virtual or physical).

Lastly, don't get hung up on the minutiae of de-dupe, compression, compaction, or data reduction metrics. All leading vendors have approaches that leverage what their technologies can do to make the most efficient use of flash and preserve its duty cycle. At the end of the day, you should compare two ratios: storage seen by the host vs. storage consumed on the array (put another way, provisioned vs. allocated), and $/GB. These are the most useful in comparing what you are getting for your money. The $/IOPS conversation is old and hard to relate to real costs, as IOPS is a more ephemeral concept than GB.
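A minimal sketch of comparing those two ratios, using hypothetical prices and capacities:

```python
# Compare the two ratios suggested above: host-visible vs consumed capacity, and $/GB.
# All prices and capacities are hypothetical.

def effective_cost(name, price_usd, host_visible_tb, consumed_on_array_tb):
    efficiency = host_visible_tb / consumed_on_array_tb      # provisioned vs allocated
    usd_per_host_gb = price_usd / (host_visible_tb * 1024)   # $ per GB the hosts can actually use
    print(f"{name}: efficiency {efficiency:.2f}:1, ${usd_per_host_gb:.2f} per host-visible GB")

effective_cost("Array A", price_usd=400_000, host_visible_tb=300, consumed_on_array_tb=100)
effective_cost("Array B", price_usd=300_000, host_visible_tb=180, consumed_on_array_tb=90)
```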

10 April 15

Understanding your particular use case is key to selecting the proper technology for your environment. We are big data, Oracle, data warehouse, non-OLTP, so we are hypersensitive to performance and HA. Flash and flash hybrid are the future for the datacenter, but there are varying ways of skinning this cat, so pay close attention to the details. For me, HA is paramount: our storage must be NDU (non-disruptively upgradable) in all situations. Performance is important, but we are seeing numbers well in excess of anyone's requirements. So what's next? Sustained performance: how are write-cliff issues addressed? Datacenter cost, if you are in a co-lo, is important, so a smaller footprint and lower kW draw should be considered. Then of course cost: be careful with the usable number; de-duplication and compression are often factored into the marketing, so a POC is important to understand true usable capacity.
I go back to my original statement: understanding what YOUR company needs is the most important piece of data one should take into the conversation with any and all of the suitors.

26 March 15

My primary requirement is data reduction using de-duplication algorithms. My second requirement is the SSDs' wear gauge: I need to be sure that the SSDs installed in a flash array will last as many years as possible. So the vendor with the best offering on those two topics has the best flash array.

14 March 15

It depends on your requirements. Are you looking at flash for performance, ease of use, or improved data management?
Performance: you likely want an array with a larger block size, and one where compression and de-duplication can be enabled or disabled on selected volumes.

Data reduction or data management: de-duplication and compression help manage storage growth; however, you do need to understand your data. If you have many Oracle databases, then block size will be key. Most products use a 4-8K block size. Oracle writes a unique ID on its data blocks, which makes them look like unique data. If your product has a smaller block size, your compression and de-duplication will be better (below 4K is better, but performance may suffer slightly).

De-duplication: if you have several test, dev, and QA databases that are all copies of production, de-duplication might help significantly. If de-duplication is your goal, you need to look at the de-duplication boundaries: products that offer array-wide or grid-wide de-duplication will provide the most benefit.
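As a rough, assumption-laden estimate of that benefit (sizes and change rates are invented for illustration):

```python
# Rough estimate of dedup benefit when test/dev/QA are copies of production.
# Sizes and change rates are assumptions for illustration only.

prod_db_tb = 8.0
copies = 5                      # test, dev, QA, staging, training
unique_change_per_copy = 0.10   # assumed 10% of each copy diverges from production

logical_tb = prod_db_tb * (1 + copies)
physical_tb = prod_db_tb * (1 + copies * unique_change_per_copy)

print(f"Logical capacity  : {logical_tb:.1f} TB")
print(f"Physical (deduped): {physical_tb:.1f} TB")
print(f"Effective ratio   : {logical_tb / physical_tb:.1f}:1")
# Note: this only holds if all copies land inside the same dedup boundary
# (array-wide or grid-wide), as discussed above.
```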

Remote replication: if this is a requirement, you need to look at it carefully, as each vendor does it differently; some products need a separate inline appliance to accommodate replication. Replication with no rehydration of data is preferred, as this will reduce WAN bandwidth requirements and remote storage volumes.

Ease of use: can the daily and weekly tasks be completed easily? How difficult is it to add or change storage volumes, LUNs, or aggregates? Do you need aggregates at all? Can you meet the business RTO/RPO requirements with the storage, or will you need a backup tool set to do this? You should include the cost of meeting the RTO/RPO in the solution cost evaluation.

Reporting: you need to look at the canned reports. Do they include the reports you need to sufficiently manage your data? Equally important, do they include the reports needed to show the business the efficiencies provided by the storage infrastructure? Do you need bill-back reports? (Compression and de-duplication rates, I/O latency reports, etc.)

03 March 15