All-Flash Storage Arrays Forum

IT Central Station
Jul 03 2020
How do thick and thin provisioning affect all-flash storage array performance? What are the relative benefits of each?
Mark S. Cruce: No performance implications; it's just a provisioning strategy. In thick provisioning, if I need 1 GB, I provision 1 GB, even if only 10 MB is being used. In thin provisioning, I initially provision 10 MB and, as the need for more storage grows, I grow the volume with it up to the maximum of 1 GB. Most everyone uses thin provisioning unless there's a specific reason not to.
Marc Staimer: Applications require shared block storage to be provisioned. The provisioning is by capacity per LUN (logical unit number) or volume. Thick provisioning means all of the capacity allocated is owned and tied up by that application whether it's used or not; unused capacity is not sharable by other applications. Thin provisioning essentially virtualizes the provisioning, so the application thinks it has a certain amount of exclusive capacity when in reality it's shared. This makes the capacity more flexible and reduces over-provisioning.
Mohamed Y Ahmed: Thick and thin provisioning is a service-related configuration choice. As a simple example, use the thick option when creating a virtual hard disk that will hold a database, so the disk is fully allocated in advance and ready for heavy writing, and transactions aren't affected by partition expansion. Use thin provisioning when you are holding file-server data such as images, where the delay of growing the virtual hard disk file will not impact data writing. In short: for write-heavy data, thick provisioning is more suitable; for read-heavy data, thin provisioning is no problem. I hope my answer helps you...
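A filesystem-level analogy can make the thick/thin distinction above concrete (this is not how an array implements it internally). In the sketch below, a "thick" volume is a file whose bytes are all written up front, while a "thin" volume is a sparse file whose apparent size is set immediately but whose blocks are only allocated as data is written. The file names and sizes are illustrative, and sparse-file behavior depends on the filesystem.

```python
import os

def provision_thick(path, size):
    """Thick: write every byte up front; full capacity is consumed immediately."""
    with open(path, "wb") as f:
        f.write(b"\0" * size)

def provision_thin(path, size):
    """Thin: set the apparent size only; blocks are allocated as data arrives."""
    with open(path, "wb") as f:
        f.truncate(size)  # creates a sparse file on most filesystems

SIZE = 1 * 1024 * 1024  # a hypothetical 1 MiB "volume"
provision_thick("thick.img", SIZE)
provision_thin("thin.img", SIZE)

for name in ("thick.img", "thin.img"):
    st = os.stat(name)
    # st_blocks is counted in 512-byte units (POSIX)
    print(name, "apparent:", st.st_size, "on disk:", st.st_blocks * 512)
```

Both files report the same apparent size to the "host", but the thin file consumes little or no disk until data is actually written, which is exactly the capacity flexibility described above.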
IT Central Station
Jun 03 2020
How does VDI work?  Is All-Flash required, or can it work on other storage arrays?
reviewer1222509: VDI is a server farm for virtualizing dedicated user desktops. Detailed information can be found, for example, in VMware's "How does VDI work?" materials. In my organization's case, 20 TB of data has been reserved for 2,000 users. VDI solutions achieve large data reduction (deduplication and compression, up to 20:1); if the data is repetitive, this factor may be even higher. Is all-flash required, or can it work on other storage arrays? VDI can work on hybrid arrays, but they are not as efficient during a boot storm (e.g., a reboot of the whole farm) or under the requirements for low response times below 1 ms and a user experience similar to a physical PC.
Rodney Carlson: VDI uses a server-side solution for virtual desktops. This way the horsepower needed to run applications comes from the server side and not the client side. This enables a couple of main things: a similar desktop experience for the end user no matter where they are and what hardware they are using, and cost savings on the hardware required for the end user. It enables patching and security benefits as well. The drawbacks are bandwidth requirements and server infrastructure cost. Using an all-flash storage array would help with the I/O limitations of the server, because any storage requests would be fulfilled quickly. Pros: consistency across all desktops (they are all the same), speed and performance, patching and security, easier upgrades, lower desktop machine cost, centralized server maintenance. Cons: high server hardware requirements, higher server storage cost, higher bandwidth needed between server and desktop, server maintenance overhead. Any decision to use VDI needs to consider the cost-benefit. Would using a virtual desktop be worth it? You decide.
reviewer1243038: No, you don't need all-flash, but it depends on how many users. If you deploy "desktop just in time" then all VDI instances are in RAM (with a small footprint on the storage). All-flash is recommended but not required. It's also recommended to have separate storage for the VDI solution. Look at the site; they have great documents and videos on how it works.
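The answers above can be turned into a back-of-the-envelope capacity sizing. Only the 20:1 data reduction ratio comes from the thread; the desktop count and per-desktop footprint below are hypothetical inputs for illustration.

```python
def raw_capacity_needed(desktops, gb_per_desktop, reduction_ratio):
    """Physical capacity (GB) required after dedupe/compression.

    logical = total capacity the desktops think they have;
    physical = what the array actually has to store.
    """
    logical = desktops * gb_per_desktop
    return logical / reduction_ratio

# Hypothetical sizing: 2,000 desktops at 50 GB each, 20:1 data reduction
logical_gb = 2000 * 50                          # 100,000 GB logical
physical_gb = raw_capacity_needed(2000, 50, 20)
print(f"logical: {logical_gb} GB, physical: {physical_gb:.0f} GB")
```

This is why VDI farms can be provisioned with far less physical flash than the sum of the virtual desktops suggests: near-identical desktop images deduplicate extremely well.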
Menachem D Pritzker
Director of Growth
IT Central Station
May 01 2020
Are all types of SSD flash, and vice versa? What's the difference? Thanks! I appreciate the help. 
Khurram Saood: I believe all-flash means a storage box provisioned with flash media only; that may be SSDs, or it may be other flash media like IBM's FlashCore Modules. All-flash hardware will only support SSDs, flash drives, or IBM's FlashCore Modules. Hybrid storage, by contrast, is capable of accommodating other media types, like NL-SAS and SAS drives, alongside SSDs.
Chetan Woodun: I have seen a number of replies; just clarifying. There are three storage tiers: NVMe, SSD, and SAS/NL-SAS. An SSD is a disk that doesn't have moving parts, unlike SAS and NL drives. Flash is what SSDs are implemented with: SSDs are made mostly of flash memory.
Jason Guo: • Flash == NAND == flash memory chip == NVM == SLC + MLC + TLC == small pieces of chip made for storage manufacturers' use, not ready for the end user. • SSD == (a few NAND chips + an interface) == (SATA SSD, SAS SSD, NVMe SSD, M.2 SSD) == 4 or 8 flash memory chips welded together on a PCB to become a hard drive, named a Solid State Drive, ready for end-user use or for other storage manufacturers' use. • All-flash storage array == (lots of SSDs + array controller + SAN + LUN masking + dedupe + GUI + replication + snapshots...) == any type of SSD or other raw flash put together with a provisioning interface to become a storage array, ready for the end user.
Senior Manager with 1-10 employees
Jan 29 2020
Hello everybody, can someone help me understand the differences between Oracle ZFS5-2 and Oracle FS1-2? Thank you.
Dave-Krenik: This question may be moot now. Oracle EOL'd the FS1 and RIF'd most of the FS1 staff a while ago. While both ZFS (now ZFS7) and FS1 are/were unified storage systems, ZFS is primarily NAS; FS1 was primarily SAN, with some QoS and multi-tenancy features and such.
Ariel Lindenfeld
Sr. Director of Community
IT Central Station
Jan 14 2020
Let the community know what you think. Share your opinions now!
it_user202749: It depends on your requirements. Are you looking at flash for performance, ease of use, or improved data management?

Performance: you likely want an array with a larger block size and one where compression and de-duplication can be enabled or disabled on select volumes.

Data reduction and data management: de-duplication and compression help manage storage growth; however, you do need to understand your data. If you have many Oracle databases, then block size will be key. Most products use a 4-8K block size. Oracle writes a unique ID on its data blocks, which makes them look like unique data. If your product has a smaller block size, your compression and de-duplication will be better (below 4K is better, though performance may suffer slightly).

De-duplication: if you have several test, dev, and QA databases that are all copies of production, de-duplication might help significantly. If de-duplication is your goal, you need to look at the de-duplication boundaries: products that offer array-wide or grid-wide de-duplication will provide the most benefit.

Remote replication: if this is a requirement, you need to look at it carefully; each vendor does it differently, and some products need a separate inline appliance to accommodate replication. Replication with no rehydration of data is preferred, as this will reduce WAN bandwidth requirements and remote storage volumes.

Ease of use: can the daily and weekly tasks be completed easily? How difficult is it to add or change storage volumes, LUNs, and aggregates? Do you need aggregates at all? Can you meet the business's RTO/RPO requirements with the storage, or will you need a backup tool set to do this? You should include the cost of meeting the RTO/RPO in the solution cost evaluation.

Reporting: you need to look at the canned reports. Do they have the reports you need to sufficiently manage your data? And, equally important, do they have the reports needed to show the business the efficiencies provided by the storage infrastructure? Do you need bill-back reports (compression and de-duplication rates, I/O latency reports, etc.)?
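The block-size point in it_user202749's answer can be illustrated with a toy fixed-block dedupe model. The "database pages with unique IDs" data below is synthetic (mimicking Oracle blocks that carry a unique ID and therefore look unique at page granularity), and real arrays use far more sophisticated schemes.

```python
import hashlib

def dedupe_ratio(data: bytes, block_size: int) -> float:
    """Ratio of logical blocks to unique blocks under fixed-block dedupe."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Synthetic "database pages": 4 KiB pages identical except a unique 4-byte ID
data = b"".join(i.to_bytes(4, "big") + b"\0" * 4092 for i in range(100))

print(dedupe_ratio(data, 4096))  # 4 KiB blocks: every block holds an ID, ratio 1.0
print(dedupe_ratio(data, 512))   # 512 B blocks: the zero-fill dedupes, ratio ~7.9
```

At the larger block size, every block contains the unique ID and nothing deduplicates; at the smaller block size, the ID is confined to one sub-block per page and the rest dedupe away, which is exactly why a smaller dedupe granularity can win on databases (at some performance cost).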
it_user208149: The primary requirement for me is data reduction using de-duplication algorithms. The second requirement is the SSDs' wear gauge: I need to be sure that the SSDs installed in a flash array will work for as many years as possible. So the vendor with the best offering on those two topics has the best flash array.
it_user221634: Customers should consider not only performance, which is really table stakes for an all-flash array (AFA), but also resilience design and the data services offered on the platform. AFAs are most often used for Tier-1 apps, so the definition of what is required to support a Tier-1 application should not be compromised to fit what a particular AFA does or does not support. Simplicity and interoperability with other non-AFA assets is also key: AFAs should support replication to, and data portability between, themselves and non-AFAs. Further, these capabilities should be native and not require additional hardware or software (virtual or physical). Lastly, don't get hung up on the minutiae of de-dupe, compression, compaction, or data reduction metrics; all leading vendors have approaches that leverage what their technologies can do to make the most efficient use of flash and preserve its duty cycle. At the end of the day, you should compare two ratios: storage seen by the host divided by storage consumed on the array (put another way, provisioned vs. allocated), and $/GB. These are the most useful in comparing what you are getting for your money. The $/IOPS conversation is old and challenging to relate to real costs, as IOPS is a more ephemeral concept than GB.
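The two ratios recommended above are easy to compute per vendor quote. The vendor names and numbers in this sketch are made up purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class ArrayQuote:
    name: str
    price_usd: float
    host_visible_gb: float   # storage seen by the host (provisioned)
    consumed_gb: float       # storage consumed on the array (allocated)

    @property
    def efficiency(self) -> float:
        """Provisioned vs. allocated ratio (higher is better)."""
        return self.host_visible_gb / self.consumed_gb

    @property
    def cost_per_gb(self) -> float:
        """$/GB of host-visible capacity (lower is better)."""
        return self.price_usd / self.host_visible_gb

# Hypothetical quotes for illustration only
quotes = [
    ArrayQuote("Vendor A", 250_000, 500_000, 100_000),
    ArrayQuote("Vendor B", 200_000, 300_000, 100_000),
]
for q in sorted(quotes, key=lambda q: q.cost_per_gb):
    print(f"{q.name}: efficiency {q.efficiency:.1f}:1, ${q.cost_per_gb:.2f}/GB")
```

Note how the pricier array can still win on $/GB once data reduction efficiency is factored in; that is the point of comparing provisioned-vs.-allocated rather than raw capacity.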