Modular SAN Features

Read what people say are the most valuable features of the solutions they use.
Samuel Rothenbühler says in a Nutanix review
Lead Architect Enterprise Cloud - Member of the Management Board at Amanox Solutions (S&T Group)
Some years ago, when we started working with Nutanix, the solution was essentially a stable, user-friendly hyper-converged solution offering a less feature-rich version of what is now called the distributed storage fabric. This is what competing solutions typically offer today, and for many customers it isn't easy to understand the added value Nutanix offers today in comparison to other approaches (I would argue these capabilities should in fact be a requirement). Over the years, Nutanix has added lots of enterprise functionality like deduplication, compression, erasure coding, snapshots, (a)synchronous replication, and so on. These features are very useful, scale extremely well on Nutanix, and offer VM-granular configuration (if you don't care about granularity, apply them cluster-wide by default). But it is other, perhaps less obvious features, or I should say design principles, which should interest most customers the most:

Upgradeable with a single click

This was introduced a while ago, I believe around version 4 of the product. At first it was mainly used to upgrade the Nutanix software (Acropolis OS, or AOS), but today we use it for pretty much anything, from the hypervisor to the system BIOS and the disk firmware, and also to upgrade sub-components of the Acropolis OS. There is, for example, a standardized system check (around 150 checks) called NCC (Nutanix Cluster Check) which can be upgraded throughout the cluster with a single click, independent of AOS. The one-click process also allows you to perform a granular hypervisor upgrade, such as an ESXi offline bundle (which could be a patch release). The Nutanix cluster will then take care of the rolling reboot, vMotion, etc., in a fully hyper-converged fashion (e.g., it won't reboot multiple nodes at the same time). If you consider how this compares to a traditional three-tier architecture (including converged generation 1), you have a much simpler and well-tested workflow, and it is what you use by default.
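The rolling, one-node-at-a-time upgrade flow the reviewer describes can be sketched roughly as follows. This is a minimal illustration only; the function names and steps are hypothetical, not the actual Nutanix implementation:

```python
# Hypothetical sketch of a rolling cluster upgrade: precheck every node
# first, then upgrade strictly one node at a time so the cluster never
# loses more than one member. Names/steps are illustrative, not Nutanix code.

def precheck(node, compat_matrix):
    """A node passes only if its target version is on the compatibility matrix."""
    return node["target"] in compat_matrix

def rolling_upgrade(nodes, compat_matrix):
    # 1. Automatic prechecks: abort before touching anything if any node fails.
    failed = [n["name"] for n in nodes if not precheck(n, compat_matrix)]
    if failed:
        raise RuntimeError(f"precheck failed on: {failed}")

    log = []
    # 2. Upgrade one node at a time (never reboot two nodes at once).
    for node in nodes:
        log.append(f"migrate VMs off {node['name']}")   # e.g. vMotion
        log.append(f"upgrade+reboot {node['name']} to {node['target']}")
        log.append(f"rejoin {node['name']}")            # wait until healthy again
    return log

nodes = [{"name": f"node{i}", "target": "5.0"} for i in range(3)]
steps = rolling_upgrade(nodes, compat_matrix={"5.0"})
```

The key property is the serialization in step 2: with replication factor 2, taking down a second node before the first has rejoined would risk data unavailability.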
And yes, it does automatic prechecks and also ensures that what you are updating is on the Nutanix compatibility matrix. It is also worth mentioning that upgrading AOS (the complete Nutanix software layer) doesn't require a host reboot, since it isn't part of the hypervisor but is installed as a VSA (a regular VM). It also doesn't require any VMs to migrate away from the node/host during or after the upgrade (I love that, since bigger clusters tend to have some hiccups when using vMotion and similar techniques, especially if you have 100 VMs on a host), not to mention the network impact.

Linearly scalable

Nutanix has several unique capabilities to ensure linear scalability. The key ingredients are data locality, a fully distributed metadata layer, and granular data management. The first is important especially when you grow your cluster. It is true that 10G networks offer very low latency, but the overhead counts towards every single read IO, so you should consider the sum of them (and you get a lot of read IOs out of every single Nutanix node!). If you look at the development currently ongoing in the field of persistent flash storage, you will see that network overhead will only become more important going forward. The second key point is the fully distributed metadata database. Every node holds a part of the database (for the most part, the metadata belonging to its currently local data, plus replica information from other nodes). All metadata is stored on at least three nodes for redundancy (each node writes to its neighbor nodes in a ring structure; there are no metadata master nodes). No matter how many nodes your cluster holds (or will hold), there is always a defined number of nodes (three or five) involved when a metadata update is performed (a lookup/read is typically local).
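The ring-structured metadata replication described above can be sketched as follows. This is an illustrative model under simplifying assumptions (simple hash-based ownership, fixed replication factor of three), not the actual Nutanix metadata layer:

```python
# Illustrative sketch of ring-structured metadata replication: each entry
# is owned by one node and replicated to its next neighbors in the ring,
# so an update always touches a fixed number of nodes (the replication
# factor), no matter how large the cluster grows.

def replica_nodes(key, nodes, rf=3):
    """Owner plus the next rf-1 neighbors clockwise in the ring."""
    owner = hash(key) % len(nodes)
    return [nodes[(owner + i) % len(nodes)] for i in range(rf)]

small = [f"n{i}" for i in range(4)]
large = [f"n{i}" for i in range(32)]

# A metadata update involves exactly rf nodes in either cluster:
assert len(replica_nodes("vm1-extent7", small)) == 3
assert len(replica_nodes("vm1-extent7", large)) == 3
```

Because the number of nodes touched per update is independent of cluster size, and there is no master node to funnel requests through, metadata write cost stays flat as the cluster grows.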
I like to describe this architecture using Big O notation: since a metadata update always touches a fixed number of nodes regardless of cluster size, you can think of it as O(1), and since there are no master nodes there aren't any bottlenecks at scale. The last key point is the fact that Nutanix acts as an object store (you work with so-called vDisks), but the objects are split into small pieces (called extents) and distributed throughout the cluster, with one copy residing on the local node and each replica residing on other cluster nodes. If your VM writes three blocks to its virtual disk, they will all end up on the local SSD, and the replicas (for redundancy) will be spread out in the cluster for fast replication (they can go to three different nodes in the cluster, avoiding hot spots). If you move your VM to another node, data locality (for read access) will automatically be rebuilt (of course, only for the extents your VM currently uses). You might think that you don't want to migrate those extents from the previous node to the now-local node, but since each extent has to be fetched anyhow, why not save it locally and serve it directly from the local SSD going forward, instead of discarding it and reading it over the network every single time? This is possible because the data structure is very granular. If you had to migrate the whole vDisk (e.g., VMDK) because that is the way your storage layer saves its underlying data, then you simply wouldn't do it (imagine vSphere DRS migrating your VMs around while your cluster constantly migrates whole VMDKs). If you wonder how all this matters when a rebuild (disk failure, node failure) is required, there is good news too! Nutanix immediately starts self-healing (rebuilding lost replica extents) whenever a disk or node is lost. During a rebuild, all nodes are potentially used as both source and target to rebuild the data. Since extents are used (not big objects), data is evenly spread out within the cluster.
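The extent placement scheme described above (one local copy, replicas spread across the rest of the cluster) can be sketched like this. The round-robin replica spread is a simplifying assumption for illustration, not the actual placement algorithm:

```python
# Hypothetical sketch of extent placement with data locality: every extent
# keeps one copy on the VM's local node, while replicas are spread
# round-robin across the remaining nodes to avoid hot spots.

def place_extents(extent_ids, local, others):
    placement = {}
    for i, ext in enumerate(extent_ids):
        replica = others[i % len(others)]    # spread replicas around the cluster
        placement[ext] = [local, replica]    # local copy + remote replica
    return placement

p = place_extents(["e1", "e2", "e3"], local="nodeA", others=["nodeB", "nodeC", "nodeD"])

# Every extent is readable locally; the three replicas land on three
# different nodes, so no single remote node becomes a hot spot.
local_reads = all(copies[0] == "nodeA" for copies in p.values())
replica_spread = {copies[1] for copies in p.values()}
```

The same spread is what makes rebuilds fast: when a node dies, its lost replicas are scattered over many surviving nodes, so all of them can participate in the rebuild in parallel.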
A bigger cluster increases the probability of a disk failure, but the speed of a rebuild is higher, since a bigger cluster has more participating nodes. Furthermore, a rebuild of cold data (on SATA) happens directly on all remaining SATA drives within the cluster (it doesn't use your SSD tier), since Nutanix can directly address all disks (and disk tiers) within the cluster.

Predictable

Thanks to data locality, a large portion of your IOs (all reads, which can be 70% or more of the total) are served from local disks and therefore only impact the local node. While writes are replicated for data redundancy, they take second priority behind the local writes of the destination node(s). This gives you a high degree of predictability: you can plan with a certain number of VMs per node and be confident that this will be reproducible when adding new nodes to the cluster. As I mentioned above, the architecture doesn't constantly read all data over the network, nor does it use metadata master nodes to track where everything is stored. Looking at other hyper-converged architectures, you won't get that kind of assurance, especially when you scale your infrastructure and the network can't keep up with all the read IOs and metadata updates going over it. With Nutanix, a single VM can't take over the whole cluster's performance. It will have an influence on other VMs on the local node, since they share the local hot tier (SSD), but that's much better than today's noisy-neighbor and IO-blender issues with external storage arrays. If you have too little local hot storage (SSD), your VMs are allowed to consume remote SSD, with secondary priority behind the other node's local VMs. This means no more data locality, but it is better than falling back to local SATA. Once you move some VMs away or the load on the VM shrinks, you automatically get your data locality back.
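A small worked example shows why the local-read fraction translates directly into predictable latency. The latency figures below are illustrative assumptions, not measurements of any product:

```python
# Illustrative arithmetic (numbers are assumptions, not measurements):
# average read latency as a function of the fraction of reads served locally.

LOCAL_US = 100   # local flash read, microseconds (assumed)
REMOTE_US = 150  # same read plus network round-trip overhead (assumed)

def avg_read_latency(local_fraction):
    return local_fraction * LOCAL_US + (1 - local_fraction) * REMOTE_US

all_local = avg_read_latency(1.0)    # full data locality
no_locality = avg_read_latency(0.0)  # every read crosses the network
```

With these numbers, losing locality inflates every read by 50%, and, just as importantly, makes per-node performance depend on cluster-wide network load rather than on the node itself.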
As described further down, Nutanix can tell you exactly which virtual disk uses how much local (and possibly remote) data; you get full transparency there as well.

Extremely fast

I think it is well known that hyper-converged systems offer very high storage performance. Not much to add here, except to say that it is indeed extremely fast compared to traditional storage arrays. And yes, an all-flash Nutanix cluster is as fast as (if not faster than) an external all-flash storage array, with the added benefit that you read from your local SSD and don't have to traverse the network/SAN to get the data (that, and of course all the other hyper-convergence benefits). Performance was the area where Nutanix focused the most when releasing 4.6 earlier this year. The great flexibility of working with small blocks (extents) rather than whole objects on the storage layer comes at the price of much greater metadata complexity, since you need to track all these small entities throughout the cluster. To my understanding, Nutanix invested a great deal of engineering effort to make their metadata layer efficient enough to even beat the performance of an object-based implementation. As a partner, we regularly conduct IO tests in our lab and at our customers, and it was very impressive to see how all existing customers could benefit from 30-50% better performance simply by applying the latest software (using the one-click upgrade, of course).

Intelligent

Since Nutanix has full visibility into every single virtual disk of every single VM, it also has lots of ways to optimize how it deals with our data. This goes beyond the simple random-vs-sequential way of processing data; it also, for example, prevents one application from taking over all system performance and letting others starve. During a support case we can see all sorts of detailed information (I have a storage background, so I can get pretty excited about this), like where exactly your application consumes its resources (local or remote disks).
What block size is used (random/sequential), the working set size (hot data), and lots more, all with single-virtual-disk granularity. At some point they were even thinking of making a tool which would look inside your VM and tell you which files (actually at the sub-file level) are currently hot, because the data is there and just needs to be visualized.

Extensible

If you take a look at the up...
GILLES THEAUDIERE says in a Nutanix review
Consultant at a tech services company with 10,001+ employees
The fact that there is only one interface to deploy a complete solution for maximum storage is fantastic.
Alex Lovett says in a HPE 3PAR Flash Storage review
IT Operations Manager
* The Remote Copy Group is amazing for the replication stuff.
* We also use dynamic optimization to go between tiers.
* The reporting, as well. We can quickly see performance per CPG and per LUN. You can drill right down to see actual performance to the virtual volumes themselves. That's really good.
Henry says in a NetApp FAS Series review
Team Lead at Tata Consultancy Services
* Powerful
* Easy to use
* High availability
* DFM
* OCI
* Data fabric
* FPolicy
* Cluster-mode
* Hybrid cloud solution
MikeBollman says in a HPE 3PAR Flash Storage review
IT Director at an energy/utilities company with 5,001-10,000 employees
From a 3PAR perspective, it's the ease of setup and use. You can train somebody pretty quickly how to do basic operations, especially if it's a very dynamic environment where you're constantly having to make changes with exposing additional VVs to a given host or spin up new servers. It's really simple to use.
Mohammed Alakhouch says in a Nutanix review
Direction Générale des Impôts at a sports company with 201-500 employees
I think that the most interesting features are the replication and redundancy. It is a good solution and it is easy to work on a Nutanix platform.
it_user517800 says in a Nutanix review
Chef du service informatique at a healthcare company with 501-1,000 employees
* Simple to use
* Ergonomic
Steffen Hornung says in a Nutanix review
Data Consultant with 501-1,000 employees
* Single-click actions are definitely the most important feature. They were not even aware that they wanted this.
* Performance was the main point for buying it.
* No virtualization-vendor lock-in is another main point.
* But if you have ever had good support, Nutanix Support will still overwhelm you.
Hashem Dabbas says in an IBM Storwize review
Senior Systems Engineer & Support Contracts Manager at a tech services company with 51-200 employees
Virtualization and data migration. The V7000 is built with IBM Spectrum Virtualize software, which is part of the IBM Spectrum Storage family.

The V7000 software:
* Provides a single pool or multiple pools of storage
* Provides logical unit virtualization
* Manages logical volumes
* Mirrors logical volumes

The V7000 hardware provides the following features:
* Large scalable cache, through the IO-group methodology
* Copy services: Metro Mirror, Global Mirror, data migration, point-in-time copy, and active-active copy (HyperSwap)
* Space management: thin provisioning, Easy Tier, and compression

The Storwize V7000 nodes in a clustered system operate as a single system and present a single point of control for system management and service.
RESC says in an EMC VNX [EOL] review
Storage Solutions Architect IV at a manufacturing company with 1,001-5,000 employees
The VNX 5700 provides a large number of features, including the ability to work with both file and block as a unified array. Running a unified array is a great advantage when company economics do not allow you to have separate arrays for your servers and another for your file systems, using the unit as a NAS. We manage the array through a single interface called UNISPHERE™, which allows you to administer both block and file in a single pane. The process of extending the size of either a LUN or a file system is very intuitive. It also provides a command line interface that you can connect to using Putty.exe, but some of the commands that you can execute through the GUI are not available in the CLI, or at least that is what EMC support has told me.

Another great feature is checkpoints for file systems. We use the checkpoints as backups for our file systems instead of having them in a separate product. It is a risk to put backups and production data in the same array, but we understood that and went with it. We have had limited occasions when our checkpoints became corrupted for one reason or another (such as a file system reaching its maximum size of 16TB), but the majority of the time it works according to plan.

For block, you can use LUN cloning to present a copy of the same LUN used for production to your test environment, and test applications, programs, the OS, or anything else without affecting your production environment.

The replication feature provides another way to protect your data. In our scenario, we only use it for disaster recovery. Since we have a global presence and are tasked with protecting data from around the globe, we have enabled replication between locations. Replication works well between the VNX and Celerra NS480s, NS40s, NS20s, or NX4s. You can also set up replication with a first-generation VNXe (like the VNXe3300, VNXe3150, or others) where your VNXe is the source.
However, you cannot set up file system replication when the VNXe is the target of your replication job. There are also limitations with second-generation VNXe replication (e.g., the VNXe3200), since it will not allow you to establish replication partners.

The UNISPHERE™ Analyzer allows you to see hot spots on the block side. When you enable the Analyzer, it collects performance data, and then, with EMC support, you can see the problem areas in the array. It allows you to justify adding more space or moving your servers around to other LUNs with less saturation. Once users become familiar with how the Analyzer works, they can run it without engaging support and identify areas of concern. It is very helpful for showing the application teams whether or not the storage is causing the slowness of a particular application hosted on a specific LUN.
KristianPalsmar says in a Compellent review
Director of Technology with 501-1,000 employees
The feature I have found the most valuable is that the interface is easy to use, which makes the product user-friendly.
reviewer1122030 says in a Compellent review
Chief Business Technology Consultant at a tech services company with 51-200 employees
The most valuable features would be the ease of use and the cost-effectiveness. Compared to other mid-range technologies from Dell, this is a much cheaper solution.
OlegDubitskyi says in a Compellent review
Pre-Sales Architect at a tech services company with 1-10 employees
The most valuable features are the compression and deduplication.
reviewer1072977 says in an IBM Storwize review
Chief Engineer at a government with 1-10 employees
The ability to create LUNs and modify them is the most valuable feature of this solution. It has a very good storage system.
reviewer744948 says in an IBM Storwize review
Systems Administrator at a tech services company with 11-50 employees
The most valuable features are deduplication and compression. The interface is also good.
SystemEng457 says in a Compellent review
System Engineer at a tech services company with 51-200 employees
The compression capability is the solution's most valuable feature.
Coo5678 says in a Compellent review
COO at a tech services company with 11-50 employees
The solution's most valuable features are its storage capabilities and its duplication.
Stephane says in a Compellent review
Consultant at a software R&D company
Compellent is a very easy-to-use solution because after the first installation there are no more parameters to set. You can add disks easily, and all disks are in one group or folder. It is also very easy to expand. The solution is very complete: you can use it on its own as primary storage for production, and if you want to implement a cluster with two storage systems, all you need to do is include it in the storage system's software. So it's easy and very efficient. When I have to choose a solution for a customer, I must find out what kind of security the customer wants, and with this solution I have all kinds of installation possibilities. So, for me, it is a very good solution.
Senior Customer Service Engineer at DP IRAN
The solution is very user-friendly in terms of maintenance and configuration. It's also possible to connect the solution to other storage management solutions.
Payman Maher says in a HPE 3PAR Flash Storage review
Storage Specialist at a financial services firm with 51-200 employees
The most valuable feature of this solution is the native full mesh (multi-pathing).
Ricky Santos says in a HPE 3PAR Flash Storage review
System Administrator at a manufacturing company with 10,001+ employees
The most valuable features of this solution are Remote-copy and Adaptive Optimization. Remote-copy provides high availability and disaster recovery for the connected clients. The Adaptive Optimization provides tiering and optimizes the storage requirement of the client based on its load from time to time.
Solutions Consultant
The most valuable features of this solution are its Data Mobility and High-performance flash capabilities.
Carsten Cimander says in a HPE 3PAR Flash Storage review
Senior Consultant at a tech services company with 10,001+ employees
The most valuable features are:
* Peer Persistence, where storage failover is transparent to servers and there is no complex cluster-software integration.
* Thin Provisioning, which translates to reduced management effort on the server side, where calculated growth of application data and LUNs can be provisioned at first install.
Aklilu Shiferaw says in a NetApp FAS Series review
System Engineer at a tech company with 51-200 employees
The most valuable feature is that this acts as unified storage, both SAN and NAS, for all types of workloads.
DataCent1a09 says in a HPE 3PAR Flash Storage review
Data Center Operations at a tech services company with 10,001+ employees
Stability is what we consider to be the best feature it provides; it is what wins us over, every day. The deduplication is pretty good too. It does the work.