Hyper-Converged Infrastructure (HCI) Features

Read what people say are the most valuable features of the solutions they use.
Samuel Rothenbühler says in a Nutanix review
Lead Architect Enterprise Cloud - Member of the Management Board at Amanox Solutions (S&T Group)
Some years ago, when we started working with Nutanix, the solution was essentially a stable, user-friendly hyper-converged product offering a less feature-rich version of what is now called the distributed storage fabric. That is roughly what competing solutions offer today, and for many customers it isn't easy to understand the added value Nutanix now provides compared to other approaches (I would argue these capabilities should in fact be a requirement). Over the years Nutanix has added a lot of enterprise functionality such as deduplication, compression, erasure coding, snapshots, synchronous and asynchronous replication, and so on. These features are very useful, scale extremely well on Nutanix, and offer VM-granular configuration (if you don't care about granularity, simply apply them cluster-wide by default). But it is other, perhaps less obvious features, or rather design principles, that should interest most customers the most:

Upgradeable with a single click

This was introduced a while ago, I believe around version 4 of the product. At first it was mainly used to upgrade the Nutanix software (Acropolis OS, or AOS), but today we use it for pretty much everything, from the hypervisor to the system BIOS and disk firmware, and also to upgrade sub-components of AOS. There is, for example, a standardized system check (around 150 checks) called NCC (Nutanix Cluster Check) which can be upgraded throughout the cluster with a single click, independently of AOS. The one-click process also allows a granular hypervisor upgrade, such as an ESXi offline bundle (for example a patch release). The Nutanix cluster then takes care of the rolling reboots, vMotion and so on in a fully hyper-converged fashion (e.g. it never reboots multiple nodes at the same time). Compared to a traditional three-tier architecture (including first-generation converged systems), this is a much simpler and well-tested workflow, and it is what you use by default. And yes, it runs automatic pre-checks and ensures that what you are updating is on the Nutanix compatibility matrix. It is also worth mentioning that upgrading AOS (the complete Nutanix software layer) doesn't require a host reboot, since it isn't part of the hypervisor but is installed as a VSA (a regular VM). It also doesn't require any VMs to migrate away from the node during or after the upgrade. I love that fact, since bigger clusters tend to have some hiccups when using vMotion and similar techniques, especially if you have 100 VMs on a host, not to mention the network impact.
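As a rough illustration of the one-click workflow described above, the orchestration boils down to a precheck-everything-first, then one-node-at-a-time loop. The sketch below is purely illustrative Python, modelling the hypervisor/firmware case (as noted above, AOS-only upgrades need neither a reboot nor VM migration); every helper here is a hypothetical placeholder, not a Nutanix or vSphere API:

```python
# Hypothetical sketch of a one-click rolling upgrade: precheck everything first,
# then upgrade strictly one node at a time. These helpers are placeholders,
# not real Nutanix (or vSphere) APIs.

def precheck(node: str, bundle: str) -> bool:
    return True  # placeholder: compatibility-matrix and cluster-health checks

def evacuate_vms(node: str) -> None:
    print(f"live-migrating VMs off {node}")  # e.g. vMotion on an ESXi cluster

def apply_bundle(node: str, bundle: str) -> None:
    print(f"applying {bundle} to {node}")    # hypervisor patch, BIOS or disk firmware

def reboot_and_wait_healthy(node: str) -> None:
    print(f"rebooting {node}, waiting until storage and hypervisor services are healthy")

def rolling_upgrade(nodes: list[str], bundle: str) -> None:
    if not all(precheck(n, bundle) for n in nodes):
        raise RuntimeError("precheck failed; aborting before touching any node")
    for node in nodes:                 # never more than one node down at a time
        evacuate_vms(node)
        apply_bundle(node, bundle)
        reboot_and_wait_healthy(node)  # only then move on to the next node

rolling_upgrade(["node-1", "node-2", "node-3"], "esxi-offline-bundle.zip")
```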
Linearly scalable

Nutanix has several unique capabilities that ensure linear scalability. The key ingredients are data locality, a fully distributed metadata layer, and granular data management. The first is especially important when you grow your cluster. It is true that 10G networks offer very low latency, but that overhead counts towards every single read IO, so you should consider the sum of them (and there are a lot of read IOs coming out of every single Nutanix node!). If you look at the development currently going on in persistent flash storage, you will see that network overhead will only become more important going forward.

The second key point is the fully distributed metadata database. Every node holds a part of the database (for the most part, the metadata belonging to its currently local data, plus replica information from other nodes). All metadata is stored on at least three nodes for redundancy (each node writes to its neighbour nodes in a ring structure; there are no metadata master nodes). No matter how many nodes your cluster holds (or will hold), there is always a fixed number of nodes (three or five) involved when a metadata update is performed (a lookup/read is typically local). In Big O terms you can think of a metadata update as O(1) with respect to cluster size, and since there are no master nodes there are no bottlenecks at scale.

The last key point is that Nutanix acts as an object store (you work with so-called vDisks), but the objects are split into small pieces (called extents) and distributed throughout the cluster, with one copy residing on the local node and each replica residing on other cluster nodes. If your VM writes three blocks to its virtual disk, they will all end up on the local SSD, and the replicas (for redundancy) will be spread out in the cluster for fast replication (they can go to three different nodes, avoiding hot spots). If you move your VM to another node, data locality (for read access) will automatically be rebuilt, of course only for the extents your VM currently uses. You might think you wouldn't want to migrate those extents from the previous node to the now-local one, but since each extent has to be fetched anyhow, why not save it locally and serve it directly from the local SSD going forward, instead of discarding it and reading it over the network every single time? This is possible because the data structure is very granular. If you had to migrate the whole vDisk (e.g. a VMDK) because that is how your storage layer stores its underlying data, you simply wouldn't do it (imagine vSphere DRS migrating your VMs around while your cluster constantly migrates whole VMDKs).

If you wonder how all this matters when a rebuild (disk failure, node failure) is required, there is good news too. Nutanix immediately starts self-healing (rebuilding lost replica extents) whenever a disk or node is lost. During a rebuild, all nodes are potentially used as both source and target. Since extents are used (not big objects), the data is evenly spread out within the cluster. A bigger cluster increases the probability of a disk failure, but rebuilds are faster because more nodes participate. Furthermore, a rebuild of cold data (on SATA) happens directly on all remaining SATA drives within the cluster (it doesn't use your SSD tier), since Nutanix can directly address all disks (and disk tiers) in the cluster.
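To make two of the points above more concrete, the sketch below models a metadata ring in which every key involves a fixed number of neighbouring nodes regardless of cluster size, and an extent placement rule that keeps one copy local and spreads the replicas. This is a deliberately simplified illustration, not Nutanix's actual implementation; the hashing and placement choices are assumptions made only for the demonstration:

```python
# Simplified illustration of (1) ring-based metadata replication, where a key is
# owned by one node and copied to its next ring neighbours, so every update touches
# a fixed number of nodes however large the cluster, and (2) extent placement,
# where the first copy stays local and replicas spread across other nodes.
# Not Nutanix's real implementation; hashing/placement rules are assumptions.
import hashlib
from bisect import bisect_right

METADATA_RF = 3  # each metadata entry is kept on three nodes

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def metadata_replicas(key: str, nodes: list[str]) -> list[str]:
    ring = sorted(nodes, key=_hash)                        # nodes placed on a hash ring
    points = [_hash(n) for n in ring]
    owner = bisect_right(points, _hash(key)) % len(ring)   # first node clockwise of the key
    return [ring[(owner + i) % len(ring)] for i in range(METADATA_RF)]

def extent_replicas(local_node: str, extent_id: str, nodes: list[str], rf: int = 2) -> list[str]:
    # First copy on the node running the VM (data locality); remaining copies are
    # spread deterministically over the other nodes to avoid hot spots.
    remote = sorted((n for n in nodes if n != local_node), key=lambda n: _hash(extent_id + n))
    return [local_node] + remote[:rf - 1]

nodes = [f"node-{i}" for i in range(1, 9)]
print(metadata_replicas("vdisk-42/extent-7", nodes))          # always exactly three nodes
print(extent_replicas("node-3", "vdisk-42/extent-7", nodes))  # local copy first, replica elsewhere
```

The point of the model is simply that metadata_replicas returns the same number of nodes whether the cluster has four members or forty, which is the constant-cost-per-update behaviour discussed above.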
Predictable

Thanks to data locality, a large portion of your IOs (all reads, which can be 70% or more of the total) are served from local disks and therefore only impact the local node. Writes are replicated for data redundancy, but they take second priority behind the local writes of the destination node(s). This gives you a high degree of predictability: you can plan with a certain number of VMs per node and be confident that this will be reproducible when adding new nodes to the cluster. As I mentioned above, the architecture doesn't constantly read all data over the network, nor does it rely on metadata master nodes to track where everything is stored. Looking at other hyper-converged architectures, you won't get that kind of assurance, especially when you scale the infrastructure and the network can't keep up with all the read IOs and metadata updates crossing it. With Nutanix, a single VM can't take over the whole cluster's performance. It will have an influence on other VMs on the local node, since they share the local hot tier (SSD), but that is much better than today's noisy-neighbour and IO-blender issues with external storage arrays. If you have too little local hot storage (SSD), your VMs are allowed to consume remote SSD, with secondary priority behind the other node's local VMs. That means no more data locality, but it is better than falling back to local SATA. Once you move some VMs away or the load on the VM shrinks, you automatically get your data locality back. As described further down, Nutanix can tell you exactly which virtual disk uses how much local (and possibly remote) data, so you get full transparency there as well.

Extremely fast

I think it is well known that hyper-converged systems offer very high storage performance. Not much to add here, except to say that it is indeed extremely fast compared to traditional storage arrays. And yes, an all-flash Nutanix cluster is as fast as (if not faster than) an external all-flash storage array, with the added benefit that you read from your local SSD and don't have to traverse the network/SAN to get the data (plus, of course, all the other hyper-convergence benefits). Performance was the area where Nutanix focused most when releasing 4.6 earlier this year. The great flexibility of working with small blocks (extents) rather than whole objects at the storage layer comes at the price of much greater metadata complexity, since you need to track all these small entities throughout the cluster. To my understanding, Nutanix invested a great deal of engineering in making its metadata layer extremely efficient, to the point of beating the performance of an object-based implementation. As a partner we regularly run IO tests in our lab and at our customers, and it was very impressive to see all existing customers gain 30-50% better performance simply by applying the latest software (using the one-click upgrade, of course).

Intelligent

Since Nutanix has full visibility into every single virtual disk of every single VM, it also has many ways to optimize how it handles your data. This goes beyond simple random-versus-sequential handling; it also prevents one application from taking over all system performance and starving the others, to name one example. During a support case we can see all sorts of detailed information (I have a storage background, so I can get pretty excited about this), such as exactly where your application consumes its resources (local or remote disks), what block size is used, random versus sequential, working set size (hot data), and much more, all at single-virtual-disk granularity. At some point they were even considering a tool that would look inside your VM and tell you which files (actually at sub-file level) are currently hot, because the data is there and just needs to be visualized.

Extensible

If you take a look at the up...
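Going back to the "Predictable" argument in this review: with data locality, only replica writes cross the network, so the per-node fabric load stays flat as the cluster grows. A back-of-the-envelope model makes the difference visible; the workload figures below are illustrative assumptions, not measurements from any system:

```python
# Back-of-the-envelope model of per-node network traffic with and without data
# locality. All workload figures are illustrative assumptions, not measurements.
READ_IOPS    = 20_000   # assumed reads per second per node
WRITE_IOPS   = 5_000    # assumed writes per second per node
BLOCK_KIB    = 8        # assumed IO size
EXTRA_COPIES = 1        # replication factor 2 -> one extra copy per write

def mib_per_s(iops: int) -> float:
    return iops * BLOCK_KIB / 1024

with_locality    = mib_per_s(WRITE_IOPS * EXTRA_COPIES)               # only replica writes leave the node
without_locality = mib_per_s(READ_IOPS + WRITE_IOPS * EXTRA_COPIES)   # reads fetched remotely as well

print(f"with data locality:    ~{with_locality:.0f} MiB/s of network traffic per node")
print(f"without data locality: ~{without_locality:.0f} MiB/s of network traffic per node")
# Roughly 39 vs 195 MiB/s under these assumptions: locality keeps the shared
# fabric (and network latency) out of the dominant read path.
```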
Kristopher Skully says in a StarWind HyperConverged Appliance review
Systems Administrator at Hospice of the Western Reserve
The biggest thing we were looking for was redundancy, with both the compute and the storage, so that we could lose a full node and still keep everything up and running without having to worry about it. Another of the most important features we were looking for, since we're short on time, was something that we could deploy quickly and easily. They were offering what they call a "turnkey solution": we could just buy it, they would preconfigure it, we would put it in our environment, do some very minimal configuration on the phone with them, and be up and running. Then we just needed to start moving our virtual machines over, using Hyper-V’s shared-nothing live migration feature. The solution's hardware footprint is great. We have three 1U servers, a total of 3U, and that's replacing a full rack of equipment. We haven't had to use the ProActive Premium Support feature much yet. But they contacted me one time because there was a network glitch on one server. We hadn't actually started migrating virtual machines over to it yet, but they contacted me within ten minutes of the issue happening, while I was still trying to figure it out. I have not seen the problem since. The ProActive Premium Support was another factor that we evaluated when we made the decision to purchase this solution, to make everything easier with less work for us.
GILLES THEAUDIERE says in a Nutanix review
Consultant at a tech services company with 10,001+ employees
The fact that there is only one interface to deploy a complete solution for maximum storage is fantastic.
Zeray Assefa says in a Cisco HyperFlex HX-Series review
Director of Network Operations at a government
The most valuable feature is that it is easy to manage. Another valuable feature is the fact that you can individually upgrade specific parts of the product. With other products like SimpliVity or Nutanix, you have to upgrade the whole node if you want to expand memory or storage. In HyperFlex, you can upgrade at will: if you want to upgrade your memory, you upgrade it. That is a big advantage over any other hyper-convergence product I've seen. VxRail or Nutanix and products like that are not as flexible, and that is the biggest reason we chose this solution.
KevinRapson says in a NetApp HCI review
Director at Citrus Consulting
We like SnapMirror and we've been using it for many years. We also like the object storage tools, as well as Cloud Sync for customers wanting to integrate between cloud and local storage.
Scott Howell says in an HPE SimpliVity review
IT Director at McInnes Cooper
* The ease of management in these deployments was one of the most valuable things.
* The data storage infrastructure that it provides is pretty fantastic.
* The ease of use on the backup, DR, and replication side of things is good. It can be done by a VMware admin with no additional training.
Mohammed Alakhouch says in a Nutanix review
Direction Générale des Impôts at a sports company with 201-500 employees
I think that the most interesting features are the replication and redundancy. It is a good solution and it is easy to work on a Nutanix platform.
Divisional Manager - Engineering at an engineering company with 1,001-5,000 employees
The feature that we are most interested in is the scalability. When needed, we are able to add more nodes and scale it up further. That is the feature we sought. All three nodes are similar and maintaining paths is easy. Our company has various business divisions. One of the business divisions has shifted to a new location. Other divisions will join in the future. We have started a data center for the first division that shifted, and as and when more divisions join, we'll be adding more nodes. That is the reason we see scalability as an important feature.
MarkMgr says in a StarWind Virtual SAN review
IT Manager at a hospitality company with 51-200 employees
We just use it for the storage replication. We haven't really utilized any of the other StarWind functionality in it.
SrSolutic070 says in a Cisco HyperFlex HX-Series review
Sr Solution Architect at NetDesign
Most of our users are Cisco customers, so it fits in with the suites we use. Primarily for me, as a solution architect, it's the technology and architecture that are the most valuable. I know from my technical colleagues that it's easy to use, and for the customer, the uptime is the most valuable asset. It keeps running and has a low failure rate, which is why we use it.
Network Engineer at a tech services company
The storage system is its most valuable feature. It has eliminated the need to worry about storage. We were storing a lot of syslog data and using a lot of templates in our data center. With the storage system, we are now saving an enormous amount of space.
Enterpridf49 says in a Cisco HyperFlex HX-Series review
Enterprise Architect at an aerospace/defense firm with 1,001-5,000 employees
The flexibility is its most valuable feature; the ability to quickly deploy a number of help machines. We are not constrained in what we want to do.
Sean Henry says in a NetApp HCI review
Senior MIS Manager at a transportation company with 501-1,000 employees
It was designed from the ground up to work together. It’s not disparate technologies that the vendors put together and decided, “These can work together.” It was designed from the ground up to work together. You get a slight economy of scale from the fact that it was designed that way. It becomes more than the sum of its pieces, rather than less.
Director of Technology at FAFCO, INC.
Without question, the support. StarWind has been a valuable technology partner for us from the beginning. On the rare occasion when we have had a problem, or have simply needed to do an upgrade, they have provided first-rate support. Anyone who has dealt with common off-shore support understands the frustration of dealing with incompetent support staff who are difficult to communicate with. Not so with StarWind: their support staff is comprised of true experts who can also communicate clearly. For highly technical products this is essential. The best part about their tech support, though, is that you probably won't need it much! On the technical side, the performance has been excellent. I have never found a reason to regret not going with a traditional SAN. The configuration is so much simpler, with fewer points of failure to worry about. Integration with Microsoft clustering has been perfect, allowing us to leverage our investment in MS server licensing to the fullest. Updates to server hardware are now painless and done during working hours with zero stress. We had a RAID failure a few months back, and nobody in the building even noticed; there was no after-hours time used for the repair. It's the best!
Justin Brooks says in an HPE SimpliVity review
Systems Engineer III at a logistics company with 1,001-5,000 employees
* The backup and recovery is very fast, effective, and easy to use.
* The ability to manage the environment without special consoles or interfaces is a plus.
* Managing storage doesn't require a specialized skill set.
* Software upgrades and scalability can be done during normal business hours with no downtime.
anush santhanam says in an HPE SimpliVity review
Technical Architect at a tech services company with 10,001+ employees
* Simple management
* HyperGuarantee
* Accelerator card
* Globally federated architecture

The simple management comes in handy since a standard VMware admin can manage it. The HyperGuarantee is unique and the accelerator card is the primary IP in the product. The 4KB-based dedupe and optimization are definitely helpful, as observed by our clients. The globally federated architecture means that backup across sites does not consume precious MPLS bandwidth, which is cool.
SeniorIm1567 says in an HPE SimpliVity review
Senior IM Manager
It's very simple to manage. It has reduced the footprint in the datacenters quite a lot.
it_user517800 says in a Nutanix review
Head of IT Department at a healthcare company with 501-1,000 employees
* Simple to use
* Ergonomic
Steffen Hornung says in a Nutanix review
Data Consultant with 501-1,000 employees
* Single-click actions are definitely the most important feature. They were not even aware that they wanted this.
* Performance was the main point for buying it.
* No virtualization-vendor lock-in is another main point.
* And if you have ever had good support before, Nutanix Support will still overwhelm you.
Andre C A Griffith MSc MBCS MCTS MCP ITIL says in a StarWind Virtual SAN review
User at an insurance company with 10,001+ employees
The most valuable features of this solution are as follows:
* Consolidated storage among all three hosts
* Synchronization of data between hosts
* Using VMware vMotion
* Incorporating HA in VMware
* Management tools available to manage the box
* Email alerts if there are any problems with managed storage
* Excellent support
Phil Weingart says in a VxRail review
Manager of Solutions Architecture with 501-1,000 employees
The most valuable feature of this solution is the automation and integration points with other automation tools.
reviewer938199 says in a NetApp HCI review
System Engineer with 201-500 employees
It's an all-flash solution. NetApp guarantees a 3-to-1 ratio or better, depending on the type of information. Each node has guaranteed performance of around 50K IOPS.
Rishan-Ahmed says in an Acronis Cyber Infrastructure review
User at Dataguard MEA
The most valuable feature is the backup capability.
reviewer1078002 says in a Pivot3 review
Solution Architect at a tech services company with 201-500 employees
The most valuable feature is the ease of implementation. This solution is easy to use and the interface has improved since the previous version.
Dmitry Sysin says in a StarWind Virtual SAN review
IT at an engineering company with 51-200 employees
In addition to the main functions of the software, I want to note the excellent work of their technical support.