Hyper-Converged Infrastructure Features

Read what people say are the most valuable features of the solutions they use.
Manager Enterprise Cloud System Engineering at Amanox Solutions (S&T Group)
Uses Nutanix
Some years ago, when we started working with Nutanix, the solution was essentially a stable, user-friendly hyper-converged platform offering a less feature-rich version of what is now called the distributed storage fabric. This is what competing solutions typically offer today, and for many customers it isn't easy to understand the added value Nutanix offers in comparison to other approaches (I would argue these capabilities should in fact be a requirement). Over the years, Nutanix has added lots of enterprise functionality like deduplication, compression, erasure coding, snapshots, (a)synchronous replication and so on. These features are very useful, scale extremely well on Nutanix, and offer VM-granular configuration (if you don't care about granularity, apply them cluster-wide by default). But it is other, maybe less obvious features, or I should say design principles, which should interest most customers the most:

Upgradeable with a single click

This was introduced a while ago, I believe around version 4 of the product. At first it was mainly used to upgrade the Nutanix software (Acropolis OS, or AOS), but today we use it for pretty much anything, from the hypervisor to the system BIOS and the disk firmware, and also to upgrade sub-components of the Acropolis OS. There is, for example, a standardized system check (around 150 checks) called NCC (Nutanix Cluster Check) which can be upgraded throughout the cluster with a single click, independent of AOS. The one-click process also allows you to apply a granular hypervisor upgrade such as an ESXi offline bundle (this could be a patch release). The Nutanix cluster will then take care of the rolling reboot, vMotion, etc., in a fully hyper-converged fashion (e.g. it won't reboot multiple nodes at the same time). If you compare this to a traditional three-tier architecture (including converged generation 1), you have a much simpler and well-tested workflow, and it is what you use by default.
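The rolling-upgrade behavior described above (pre-checks first, then strictly one node offline at a time) can be sketched roughly as follows. This is a minimal illustration, not Nutanix's actual implementation; the function names and callbacks are hypothetical.

```python
# Hypothetical sketch of a one-click rolling upgrade: run pre-checks across
# the cluster first, then upgrade nodes one at a time so no two nodes are
# ever down together. All names here are illustrative assumptions.

def rolling_upgrade(nodes: list[str], precheck, upgrade_node) -> list[str]:
    """Pre-check every node, then upgrade strictly one node at a time."""
    failed = [n for n in nodes if not precheck(n)]
    if failed:
        raise RuntimeError(f"pre-checks failed on: {failed}")
    upgraded = []
    for node in nodes:          # rolling: only one node offline at any moment
        upgrade_node(node)      # evacuate VMs, apply bundle, reboot, rejoin
        upgraded.append(node)
    return upgraded

order = rolling_upgrade(["node-1", "node-2", "node-3"],
                        precheck=lambda n: True,
                        upgrade_node=lambda n: None)
```

The key design point is the strict serialization: availability is preserved because the data replicas on the remaining nodes keep serving IO while one node reboots.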
And yes, it does automatic pre-checks and also ensures that what you are updating is on the Nutanix compatibility matrix. It is also worth mentioning that upgrading AOS (the complete Nutanix software layer) doesn't require a host reboot, since AOS isn't part of the hypervisor but is installed as a VSA (a regular VM). It also doesn't require any VMs to migrate away from the node/host during or after the upgrade. I love that, since bigger clusters tend to have some hiccups when using vMotion and similar techniques, especially if you have 100 VMs on a host, not to mention the network impact.

Linearly scalable

Nutanix has several unique capabilities to ensure linear scalability. The key ingredients are data locality, a fully distributed metadata layer, and granular data management. The first is important especially when you grow your cluster. It is true that 10G networks offer very low latency, but the overhead counts towards every single read IO, so you should consider the sum of them (and you get a lot of read IOs out of every single Nutanix node!). If you look at the ongoing development in the field of persistent flash storage, you will see that network overhead will only become more important going forward. The second key point is the fully distributed metadata database. Every node holds a part of the database (mostly the metadata belonging to its currently local data, plus replica information from other nodes). All metadata is stored on at least three nodes for redundancy (each node writes to its neighbor nodes in a ring structure; there are no metadata master nodes). No matter how many nodes your cluster holds (or will hold), a defined number of nodes (three or five) is involved when a metadata update is performed (a lookup/read is typically local).
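The ring-structured metadata replication described above can be sketched as follows. This is a simplified model under my own assumptions (a fixed replication factor, neighbors chosen by adjacent position on the ring); the function name and layout are illustrative, not Nutanix's actual code.

```python
# Sketch of ring-structured metadata replication: a metadata write goes to
# the owning node plus its next rf-1 neighbors on the ring, so the number
# of nodes touched per update is constant regardless of cluster size.
# Names and the replication factor handling are illustrative assumptions.

def metadata_replica_nodes(owner: int, cluster_size: int, rf: int = 3) -> list[int]:
    """Return the node indices holding a metadata entry owned by `owner`:
    the owner itself and its next rf-1 neighbors on the ring."""
    return [(owner + i) % cluster_size for i in range(rf)]

# A write touches exactly rf nodes whether the cluster has 4 or 40 nodes:
small = metadata_replica_nodes(2, 4)    # [2, 3, 0]
large = metadata_replica_nodes(2, 40)   # [2, 3, 4]
```

Because every node is an equal peer on the ring, adding nodes adds metadata capacity without creating a master that every update must pass through.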
I like to describe this architecture using Big O notation: a metadata update touches a constant number of nodes, so you can think of it as O(1), and since there are no master nodes there aren't any bottlenecks at scale. The last key point is that Nutanix acts as an object store (you work with so-called vDisks), but the objects are split into small pieces (called extents) and distributed throughout the cluster, with one copy residing on the local node and each replica residing on other cluster nodes. If your VM writes three blocks to its virtual disk, they will all end up on the local SSD, and the replicas (for redundancy) will be spread out in the cluster for fast replication (they can go to three different nodes in the cluster, avoiding hot spots). If you move your VM to another node, data locality (for read access) will automatically be rebuilt (of course only for the extents your VM currently uses). You might think that you wouldn't want to migrate those extents from the previous node to the now-local node, but consider that each extent has to be fetched over the network anyhow; why not save it locally and serve it directly from the local SSD going forward, instead of discarding it and reading it over the network every single time? This is possible because the data structure is very granular. If you had to migrate the whole vDisk (e.g. a VMDK) because that is the way your storage layer stores its underlying data, then you simply wouldn't do it (imagine vSphere DRS migrating your VMs around while your cluster constantly migrates whole VMDKs). If you wonder how all this matters when a rebuild (disk failure, node failure) is required, there is good news too: Nutanix immediately starts self-healing (rebuilding lost replica extents) whenever a disk or node is lost. During a rebuild, all nodes are potentially used as both source and target.
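The extent-placement idea above (one copy local for fast reads, replicas spread across other nodes) can be sketched like this. It is a toy model under stated assumptions: the function name, the use of a per-extent deterministic random spread, and the replication factor are all hypothetical, not the real placement algorithm.

```python
import random

# Toy sketch of extent placement with data locality: the first copy lands on
# the node running the VM, and the remaining replicas are spread across the
# other nodes to avoid hot spots. All names here are illustrative.

def place_extent(extent_id: int, local_node: int, nodes: list[int], rf: int = 2) -> list[int]:
    """One copy on the local node (local reads); rf-1 replicas elsewhere."""
    remote = [n for n in nodes if n != local_node]
    rng = random.Random(extent_id)       # deterministic spread per extent
    replicas = rng.sample(remote, rf - 1)
    return [local_node] + replicas

placement = place_extent(7, local_node=0, nodes=[0, 1, 2, 3])
```

After a VM migration, rebuilding locality is cheap precisely because the unit of movement is one small extent, not a whole vDisk.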
Since extents are used (not big objects), data is evenly spread out within the cluster. A bigger cluster increases the probability of a disk failure, but a rebuild is also faster, since a bigger cluster has more participating nodes. Furthermore, a rebuild of cold data (on SATA) happens directly on all remaining SATA drives within the cluster (it doesn't use your SSD tier), since Nutanix can directly address all disks (and disk tiers) in the cluster.

Predictable

Thanks to data locality, a large portion of your IOs (all reads, which can be 70% or more) are served from local disks and therefore only impact the local node. While writes are replicated for data redundancy, they have second priority behind the local writes of the destination node(s). This gives you a high degree of predictability: you can plan with a certain number of VMs per node and be confident that this will be reproducible when adding new nodes to the cluster. As I mentioned above, the architecture doesn't constantly read all data over the network, nor does it use metadata master nodes to track where everything is stored. Looking at other hyper-converged architectures, you won't get that kind of assurance, especially when you scale your infrastructure and the network can't keep up with all the read IOs and metadata updates crossing it. With Nutanix, a single VM can't take over the whole cluster's performance. It will have an influence on other VMs on the local node, since they share the local hot tier (SSD), but that is much better than today's noisy-neighbor and IO-blender issues with external storage arrays. If you have too little local hot storage (SSD), your VMs are allowed to consume remote SSD, with secondary priority behind the other node's local VMs. This means no more data locality, but it is better than accessing local SATA instead.
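The claim that bigger clusters rebuild faster follows from simple arithmetic: every surviving node contributes rebuild bandwidth, so aggregate throughput grows with node count. A back-of-the-envelope sketch, with a purely illustrative per-node rebuild rate:

```python
# Back-of-the-envelope model: rebuild time after losing one node, assuming
# every surviving node contributes equally as source and target. The
# per-node rate (0.5 TB/h) is an illustrative assumption, not a measured figure.

def rebuild_time_hours(lost_tb: float, nodes: int, per_node_rate_tb_h: float = 0.5) -> float:
    """Time to re-replicate `lost_tb` of data across the surviving nodes."""
    surviving = nodes - 1
    return lost_tb / (surviving * per_node_rate_tb_h)

# Losing 10 TB: a 16-node cluster rebuilds ~3x faster than a 6-node one.
t_small = rebuild_time_hours(10, 6)    # 4.0 hours
t_large = rebuild_time_hours(10, 16)
```

So while a larger cluster sees more individual disk failures, each failure's exposure window shrinks, which is the trade-off the review describes.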
Once you move some VMs away or the load on the VM gets smaller, you automatically get your data locality back. As described further down, Nutanix can tell you exactly which virtual disk uses how much local (and possibly remote) data; you get full transparency there as well.

Extremely fast

I think it is well known that hyper-converged systems offer very high storage performance. There is not much to add here, except to say that it is indeed extremely fast compared to traditional storage arrays. And yes, an all-flash Nutanix cluster is as fast as (if not faster than) an external all-flash storage array, with the added benefit that you read from your local SSD and don't have to traverse the network/SAN to get the data (that, and of course all the other hyper-convergence benefits). Performance was the area where Nutanix focused the most when releasing 4.6 earlier this year. The great flexibility of working with small blocks (extents) rather than whole objects on the storage layer comes at the price of much greater metadata complexity, since you need to track all these small entities throughout the cluster. To my understanding, Nutanix invested a great deal of engineering to make their metadata layer extremely efficient, to the point of beating the performance of an object-based implementation. As a partner, we regularly conduct IO tests in our lab and at our customers, and it was very impressive to see how all existing customers could get 30-50% better performance simply by applying the latest software (using the one-click upgrade, of course).

Intelligent

Since Nutanix has full visibility into every single virtual disk of every single VM, it also has lots of ways to optimize how it deals with our data. This is not only the simple random-vs-sequential way of processing data; it also makes it possible, to name one example, to prevent one application from taking over all system performance and letting the others starve.
During a support case we can see all sorts of detailed information (I have a storage background, so I can get pretty excited about this), like where exactly your application consumes its resources (local or remote disks), what block size is used, random vs. sequential access, working set size (hot data) and lots more, all with single-virtual-disk granularity. At some point they were even thinking about making a ...
Manager Enterprise Cloud System Engineering at Amanox Solutions (S&T Group)
Uses Nutanix
* Very easy management (e.g. daily tasks and also major upgrades)
* Simple and fast to implement as a partner
* Very mature and stable, with outstanding Nutanix support if needed (we are an L1 and L2 support partner as well)
* Potential to replace 80-90% of all customer use cases we see in Switzerland
Enterprise Infrastructure Architect at loanDepot
Uses HPE SimpliVity
* Backups
* Recovery
* VMware VirtualCenter integration
* DR
* Being able to present the solution as NFS SAN to existing compute
Senior Systems Engineer at a manufacturing company with 1,001-5,000 employees
Uses Nutanix
The converged storage infrastructure is a great benefit that removes the necessity of a separate storage network. The web-based management portal (Prism) is very robust and easy to understand. There is very little to manually configure, for the Nutanix or (in our case) VMware OS, once the scripted installation has completed. Very knowledgeable support engineers.
Sr. Systems Administrator at a healthcare company with 501-1,000 employees
Uses HPE SimpliVity
The ease of managing this system! We recently added the all-flash CN3400F and, oh my goodness, these nodes are fast as lightning! I love having a private cloud for my organization. A public cloud will never care for my organization's data more than I do.
ICT Network Administrator at a maritime company with 501-1,000 employees
Uses VMware vSAN
The most important feature for us is the converged infrastructure, which is all this tool is about. There is no need to manage separate storage areas in SAN/NAS environments. Storage management comes built-in with the vSAN tool. Storage is managed via policies. Define a policy and apply it to the datastore/virtual machine and the software-defined storage does the rest. These are valuable features. Scalability and future upgrades are a piece of cake. If you want more IOPS, then add disk groups and/or nodes on the fly. If you want to upgrade the hardware, then add new servers and retire the old ones. No service breaks at all. The feature that we have not yet implemented but are looking at, is the ability to extend the cluster to our other site in order to handle DR situations.
Senior IM Manager
Uses HPE SimpliVity
It's very simple to manage. It has reduced the footprint in the datacenters quite a lot.
Head of IT at a healthcare company with 501-1,000 employees
Uses Nutanix
* Simple to use
* Ergonomic
Technical Consultant at a tech services company
Uses Nutanix
Hypervisor agnostic. I can use a VMware or Hyper-V license if I already have one. But if I use the Nutanix Acropolis hypervisor (AHV), which comes embedded in the Nutanix solution, I can manage all my virtual environments with everything I need without spending more money on licenses.
Virtualization System Administrator at an integrator with 51-200 employees
Uses VMware vSAN
The most important functionality is the ability to extend cluster storage and cluster computing power securely without loss of data. Also, the ability to set up an extended cluster on multiple sites in a much simpler and easier way than with a traditional storage solution.
System Architect at a tech services company with 201-500 employees
Uses Nutanix
The Prism interface is valuable. It is very easy to manage, is based on HTML5, and all CVMs are able to assume the management role, so there is no single point of failure.
Server\Storage Administrator at a manufacturing company with 501-1,000 employees
Uses VxRail
Being able to perform upgrades and check the system through the VxRail Manager has been very helpful. It allows files to be uploaded, performs a pre-check against the system, then upgrades the appliance from the hardware up through the VMware environment.
User at a consultancy with 1-10 employees
Uses Nutanix
* Data protection in its various flavours.
* Overall performance of the platform.
* Scalability.
* Ease of implementation.
* Great GUI.
Senior Director, IT Global Service & Support at a tech services company with 1,001-5,000 employees
Uses Stratoscale
The most valuable feature is the ability to dynamically allocate resources between different projects.
Global VP Technology & President at a tech services company with 51-200 employees
Uses Stratoscale
The scalability of the product, and the ability to provide an AWS region to our clients everywhere; this is one of the most important features for us.
Senior Systems Administrator at a consultancy with 51-200 employees
Uses HPE SimpliVity
* Redundancy
* Cross-site federation
* Data virtualization
* Integration with vSphere
Senior IT Systems Administrator at a tech services company with 51-200 employees
Uses VMware vSAN
The valuable features are:
* It concentrates all our virtual platforms onto a really small number of servers.
* It removes the dependency on expensive SAN storage units, which decreases our electricity and cooling expenses drastically.
* It gives us an extra layer of comfort by providing different levels of high availability.
Senior Engineer/Virtualization Specialist at a tech services company with 51-200 employees
Uses HPE SimpliVity
The backup and DR applications are very good and easy to use. Due to inline deduplication, optimization and compression of data, very high IO throughput is possible. No additional training is necessary for support personnel and engineers, as the SimpliVity federation is managed via a plugin in the VMware vSphere client or web client.
GM IT Operations at an insurance company with 501-1,000 employees
Uses HPE SimpliVity
The rapid backup and restore functionality improved our efficiency in the development area.
IT Infrastructure at a healthcare company with 501-1,000 employees
Uses HPE SimpliVity
The built-in ability to do backups to DR, do internal backups, and have on-the-fly deduplication is by far the most valuable to me.
IS Specialist at a K-12 educational company or school with 51-200 employees
Uses HPE SimpliVity
Seamless backup, quick deployment.
Head of IT at Law Firm
Uses HPE SimpliVity
Inline deduplication, instant recovery of data across datacenters, low latency, and ease of management. The solution is fully integrated into the vCenter client, which makes for a very lean learning curve. SimpliVity adds a plugin into vCenter and delivers a fully featured dashboard. Backup policies are VM-centric, but can be applied at a datastore level as well. The dashboard displays live and historic latency values, overall and for each VM, and is used as a GUI for backup and restore. Backups and restores are performed automatically but can also be performed manually at any time. This is done with a couple of clicks and takes a few seconds.
Sr. Systems Administrator at a tech vendor with 501-1,000 employees
Uses HPE SimpliVity
In-line data-deduplication and data virtualization. DR replication. Fast cloning, fast restores, and fast backups.
IT Manager at a non-tech company with 501-1,000 employees
Uses HPE SimpliVity
Backup and restore.
Technical Manager at a tech services company with 51-200 employees
Uses HPE SimpliVity
It’s difficult to pick a single valuable feature, because all of the features are good. The key point is the Data Virtualization Platform, which allows you to achieve great efficiency in storage use, data management, backup and disaster recovery. I appreciated the simple and quick deployment, the vCenter-integrated management features, the integrated backup and DR, and finally, importantly, the very good performance.
