Hyper-Converged (HCI) Forum

Rhea Rapps
Content Specialist
IT Central Station
Sep 19 2018
One of the most popular comparisons on IT Central Station is HPE SimpliVity vs Nutanix. People like you are trying to decide which one is best for their company. Can you help them out? What is the biggest difference between HPE SimpliVity and Nutanix? Which of these two solutions would you recommend for hyper-converged and why? Thanks for helping your peers make the best decision! --Rhea
reviewer543450: In my position as Head of IT, I looked at and compared both solutions for the first time about 3-4 years ago, and we clearly went for SimpliVity, for these reasons: data efficiency (deduplication and compression) and resiliency. Nutanix's file system wasn't able to handle a two-drive failure, so that was a clear showstopper for us; SimpliVity can handle even three! Nutanix was also lacking real deduplication of the kind SimpliVity has.

My experience with SimpliVity has been great, so positive that we moved all of our server infrastructure onto it. The accelerator card with deduplication reduces I/Os, and this simple thing pays off in many aspects: improved general performance (because of fewer I/Os) and faster restores and backups (because of dedup) - so fast that we are NOT using our traditional backup solution (a market leader) for ANY restore anymore, because it is so slow in comparison! It integrates into VMware vCenter, so there is basically nothing to learn to manage SimpliVity. Deduplication allowed us to squeeze the content of our SAN at that time into 'smaller' storage in terms of TB, and to grow on it for three years without reaching its limits. We scaled up and added new nodes last year, and are going to get new ones now, as our company has grown and the server infrastructure with it.

I had a look at Nutanix again last summer, and will do so again next month. My impression: great vision, especially because of its own hypervisor. This is interesting to me because it could potentially save license costs in the long run by removing VMware. And I do mean the long run: last year it was still a 'concept' at Nutanix. Some important features were on the roadmap, but not yet there. It then has to be carefully evaluated whether this hypervisor really can be a substitute for VMware, the market leader for more than a decade. I guess it will need a few years. It is a great vision, but it still lacks execution.

The greatest downside I saw was the weakness of deduplication in Nutanix. Dedup is modest in terms of the ratio that may be reached (the vendor did not dare to name anything better than 2:1 to me, which is a very modest result in comparison to SimpliVity). It was also explained to me that dedup and compression are features that need to be enabled at the datastore level, and that could limit performance. In other words: you have a set of features/policies which you may enable there, but you should not enable them all on every datastore. This sounds limiting to me, when I consider that it is exactly having dedup and compression in place at all times, everywhere, that reduces I/Os on SimpliVity and makes the solution great and fast in moving, copying, and restoring data.

I would really look at the use case: if you are starting from scratch, do not care which hypervisor you use, or do not need the benefits of VMware, then Nutanix might be interesting. In that case I would compare the total costs of hardware and software, including the hypervisor licenses, and also consider the costs of backup/restore given the functionality Nutanix offers vs. SimpliVity. If you are an existing VMware shop, which is probably true for most companies these days, and/or want to consolidate your datacentre, to me the choice is clear: SimpliVity. I still have not found any reason to prefer Nutanix and am really surprised at how widespread Nutanix is while it lacks such key features. Regards, Matteo
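For readers weighing the ratios quoted in this thread, here is a rough back-of-the-envelope sketch in Python of how a data-efficiency ratio translates into raw capacity. The data set size is illustrative; the 2:1 figure is the vendor quote above, and the 10:1 figure is the SimpliVity guarantee cited in the next post.

```python
# Back-of-the-envelope: raw capacity needed to hold a given logical
# data set at different data-efficiency ratios. Numbers are illustrative.

logical_tb = 100  # logical data to store, in TB (hypothetical)

for label, ratio in [("Nutanix (vendor-quoted 2:1)", 2.0),
                     ("SimpliVity (guaranteed 10:1)", 10.0)]:
    raw_tb = logical_tb / ratio
    print(f"{label}: {logical_tb} TB logical -> {raw_tb:.1f} TB raw")
```

On these assumed numbers, the same 100 TB of logical data would need 50 TB of raw capacity at 2:1 but only 10 TB at 10:1, which is the footprint argument Matteo is making.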
Kerim Ozsoydan: Please find below my comparison and what SimpliVity does better. A few additional things to pay attention to: Nutanix uses a small microserver, while HPE uses the Gen10 server, a market-leading server. Yes, with SimpliVity the price is slightly higher, but you get a one-box solution where virtualization and backup are already included, whereas Nutanix has gone to third-party vendors. Plus, with Nutanix, when you go over three nodes the price goes up massively. Support is also very important: with Nutanix support we don't know what it covers, and you need separate support packs for all the third-party products as well, whereas we can give one support pack that covers everything. Let me know if you need anything else.

Issue: Data Efficiency
HPE SimpliVity:
1. HPE SimpliVity data efficiency applies to all data, all of the time.
2. The accelerator card eliminates production CPU overhead and improves overall performance.
3. SimpliVity data efficiency is typically 30-40:1.
4. SimpliVity data efficiency of 10:1 is guaranteed.
Nutanix:
1. Dedup and compression consume additional vCPU resources.
2. Dedup is not recommended for server workloads and snapshots.
3. No published customer production-system data efficiency results.
4. No guaranteed data efficiency gains.
Why this matters: HPE SimpliVity 380 data efficiency is so effective that HPE can guarantee 10x data efficiency in customer production systems; customers typically report 30-40x. This results in dramatically reduced server and storage costs and a reduced data center footprint. Nutanix data efficiency features will be several times less effective, and Nutanix dedupe and compression draw from the same resources as production, degrading app performance.

Issue: Data Protection
HPE SimpliVity:
1. Built-in backup and disaster recovery capabilities fully leverage SimpliVity's global data deduplication and compression.
2. HPE SimpliVity guarantees 60 seconds or less, on average, for local backup or restore of a 1 TB VM.
Nutanix:
1. Relies on third-party backup.
2. Native snapshots impose overhead on storage performance.
3. Native snapshots are now 15 minutes or less, with a 15-minute RPO.
4. No guaranteed backup and restore performance outcomes in production environments.
Why this matters:
1. Superior data protection reduces recovery point and recovery time objectives, thereby reducing the risk of interrupted operations or data loss due to human or technical errors.
2. Reduces the time required to back up and recover virtualized workloads from hours or days to minutes or seconds.
3. Reduces the cost of implementing disaster recovery.

Issue: Resilience
HPE SimpliVity:
1. Tolerates ANY 3 simultaneous drive failures without data loss.
2. Achieves high availability with a minimum of 2 nodes.
Nutanix:
1. An RF-2 configuration can lose data when just 2 drives fail (1 each on separate nodes in a cluster).
2. The production-grade resiliency setting (RF-3) requires a 5-node minimum and imposes additional sequential-write and random-write performance overhead.
Why this matters: HPE SimpliVity provides superior resiliency to failure, with better performance and lower cost than Nutanix. (A capacity sketch illustrating the RF2/RF3 trade-off follows this comparison.)

Issue: Scalability - Data Center
HPE SimpliVity:
1. Can scale from a 1-node minimum (2 nodes for HA).
2. Up to 1,000 VMs per cluster, with linear I/O performance while doing backups, restores, and clones.
3. 1,000 VDI logins in 1,000 seconds; 1,000 desktops deployed in 70 minutes.
4. Validated by ESG and Login VSI.
5. Independently scale compute with any x86 server.
Nutanix:
1. 3-node minimum (RF2) or 5-node minimum (RF3) for the data center; 1- and 2-node configs are supported only in ROBO (no HA). RF3 is recommended for enterprise production workloads.
2. Nodes are burdened by data efficiency overhead.
3. Nodes have 30% less desktop capacity per node.
4. No "Validated by Login VSI" VDI white papers.
5. No external customer-supplied compute nodes supported.
Why this matters:
1. HPE SimpliVity provides better economics for single-site deployment, today and tomorrow.
2. SimpliVity can scale from a smaller minimum and has been validated by independent third parties to handle large server and desktop workloads under severe test conditions.

Issue: Scalability - Multi-site
HPE SimpliVity:
1. Minimum of 1 node per site, 2 nodes per site for HA.
2. No 10-gig switch required for 2-node configurations.
3. Built-in data protection.
4. Global data efficiency reduces WAN bandwidth costs.
5. Add or change systems, and change backup/DR targets, without reconfiguration.
Nutanix:
1. Now has 1- and 2-node minimums, but only for ROBO; 3-node minimum (RF2) for the data center, 5-node minimum for RF3. RF3 is recommended for enterprise production workloads.
2. A 10G switch is required for most models.
3. Nodes are burdened by data efficiency overhead.
4. Partial global deduplication.
5. No federated management; Prism Central is a manager of managers.
Why this matters:
1. HPE SimpliVity provides superior economics for ROBO, today and tomorrow.
2. HPE SimpliVity can scale from a smaller minimum, with built-in data protection, direct-connect 10G networking, WAN data efficiency, and federated global management for the lowest TCO.

Issue: Management
HPE SimpliVity:
1. Integrated into vCenter, the tool you already use; no new consoles.
2. Globally unified: the same right-click operation for local and remote-site backup.
3. Turnkey integration with VMware vRealize Automation and Cisco UCS Director.
4. No management licenses.
Nutanix:
1. Nutanix Prism is not integrated into vCenter, forcing you to learn and train on a new GUI and making you captive to Nutanix.
2. Third-party tools are required for full third-party backup.
3. Separate Prism instances are needed for each site.
Why this matters: No separate screens, no separate GUIs, no management licenses, no new training needed: if you know how to use vCenter, in 5 minutes you'll know how to use SimpliVity.
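For context on the RF-2/RF-3 figures in the resilience and scalability rows above: a replication factor (RF) of N keeps N copies of each piece of data, so usable capacity is roughly raw capacity divided by N, before any dedup or compression savings. A minimal sketch, with hypothetical per-node capacity (real sizing also has to account for controller-VM overhead and failure headroom):

```python
# Simplified usable-capacity estimate under a replication factor (RF):
# RF2 keeps 2 copies of every piece of data, RF3 keeps 3, so usable
# space is roughly raw / RF before dedup or compression savings.

def usable_tb(raw_tb_per_node, nodes, rf):
    return raw_tb_per_node * nodes / rf

raw_per_node = 20  # TB raw per node (hypothetical)
for nodes, rf in [(3, 2), (5, 3)]:
    print(f"{nodes} nodes at RF{rf}: "
          f"~{usable_tb(raw_per_node, nodes, rf):.0f} TB usable "
          f"of {raw_per_node * nodes} TB raw")
```

On these assumed numbers, a 3-node RF2 cluster yields about 30 TB usable from 60 TB raw, while a 5-node RF3 cluster yields about 33 TB from 100 TB raw, which is why the choice of resiliency setting matters for cost comparisons.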
anush santhanam: Absolutely. I have architected solutions on both of these, and here is my feedback. From a technology standpoint, SimpliVity's claim to fame revolves around the use of a proprietary accelerator card that allows for inline de-dupe/compression and optimization; SimpliVity was a pioneer of this technology in the HCI world. Next, SimpliVity's global federation is very powerful, particularly when talking about ROBO-type scenarios. It also includes a rich feature set, including native snapshot-based backups, replication, and support for DR technologies (RapidDR). Its cloud connect capabilities allow backups to be moved to the cloud. Backups are policy driven, which is pretty cool. All in all, a pretty solid box for ROBO, infra apps, web servers, and VDI. However, SimpliVity at the disk level still relies on good old RAID technologies, which, while not a problem given its accelerator card, is nothing unique. Also, not all workloads benefit from de-dupe, and you cannot turn the feature on and off: it is always on.

Now to Nutanix. Very powerful thought leadership from day one. It is a solid DC solution and has all the bells and whistles. The NDFS file system is unique, as is its ability to deliver its capabilities entirely at the software layer. Nutanix has strong OEM ties with Dell, Lenovo, Cisco, etc. For about 85-90% of workloads, Nutanix would be a great fit. Nutanix's native Acropolis Hypervisor is KVM-based and is free, which makes for a strong DR use case. Support is awesome. Nutanix now has cloud orchestration and micro-segmentation capabilities in its Acropolis hypervisor.

The simple answer would be to look at the actual use case. For ROBO, I would go the SimpliVity route. Leaving commercials aside, SimpliVity would also be OK for lightweight workloads. If this is a DC use case, particularly for things like Oracle/MS SQL, you would be better off with Nutanix. Both support all-flash options, and Nutanix allows both block and file storage to be presented. The last point: remember, when you go the HCI route, you most definitely want to look at your apps first. Things like Oracle, if not architected to best practices, can and will fall flat on their face. A word of caution. Thanks and regards, Anush Santhanam, HCL Technologies
Ariel Lindenfeld
Sr. Director of Community
IT Central Station
Rhea Rapps
Content Specialist
IT Central Station
Aug 16 2018
We all know that it's important to conduct a trial and/or proof-of-concept as part of the buying process. Do you have any advice for the community about the best way to conduct a trial or POC? How do you conduct a trial effectively? Are there any mistakes to avoid?
anush santhanam: Hi. When evaluating HCI, it is absolutely essential to run a trial/POC to evaluate the system against the candidate workloads it will be expected to run in production. However, there are quite a few things to watch out for. Here is a short list:
1. Remember that most HCI depends on a distributed architecture, which means it is NOT the same as a standard storage array. That means that if you want to do any performance benchmarking with tools such as IOMeter, you need to be extremely careful in the way you create your test VMs and how you provision disks. Guys such as Nutanix have their own tool, X-Ray. However, I would still stick to a more traditional approach.
2. Look at the list of apps you will be looking to run. If you are going to go for a KVM type of hypervisor solution, you need to see if the apps are certified. More importantly, keep an eye out for OS certification. While HCI vendors will claim they can and will run anything and everything, you need the certification to come from the app/OS OEM.
3. Use industry-standard benchmarking tools. Remember, unless you are using a less "standard" type of hypervisor such as KVM or Xen, you really don't need to waste your time on the hypervisor part, as VMware is the same anywhere.
4. Your primary interest should, without question, be the storage layer and the distributed architecture. Remember, with HCI the compute does not change and the hypervisor (assuming VMware) does not change; what changes is the storage. Next there are the ancillary elements such as management, monitoring, and other integration pieces. Look at these closely.
5. Use workload-specific testing tools. Examples include LoginVSI for VDI, jMeter, and Paessler/Badboy for web server benchmarking.
6. Remember to look at best practices on a per-app basis. The reason I suggest this is the following: you may have been running an app like Oracle in your environment for ages in a monolithic way, but when you try the same app out on HCI it may not give you the performance you want. This has to do with the way the app has been configured/deployed, so looking at app best practices is something to note.
7. If you are looking at DR/backup and so on, evaluate your approaches. Are you using a native backup or replication capability, or an external tool? Evaluate these accordingly, and remember your RTO/RPO. Not all HCI will support synchronous replication.
8. If you are looking at native HCI capabilities around data efficiency (inline de-dupe and compression), you will need to design the testing for these carefully.
9. Lastly, if you are looking at multiple HCI products, ensure you use a common approach across products; otherwise your comparison will be like comparing apples and oranges (a minimal sketch of that idea follows below).
Hope this helps.
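As a concrete illustration of points 3 and 9, here is a minimal sketch of running one identical, industry-standard benchmark job on every platform under test. It uses fio rather than the IOMeter/X-Ray tools named above, assumes fio is installed inside the test VM, and parses fio's standard JSON output. Treat it as a starting template under those assumptions, not a complete methodology:

```python
# Sketch: run the same fio job spec on every HCI platform under test so
# the results are directly comparable (point 9 above). Assumes fio is
# installed on the test VM; the flags below are standard fio options.
import json
import subprocess

FIO_ARGS = [
    "fio", "--name=hci-randrw", "--rw=randrw", "--rwmixread=70",
    "--bs=4k", "--iodepth=32", "--numjobs=4", "--size=10G",
    "--runtime=300", "--time_based", "--direct=1", "--group_reporting",
    "--output-format=json",
]

def run_benchmark(label):
    """Run one fio pass and pull out the headline numbers."""
    out = subprocess.run(FIO_ARGS, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    read, write = job["read"], job["write"]
    print(f"{label}: read {read['iops']:.0f} IOPS / "
          f"write {write['iops']:.0f} IOPS, "
          f"read p99 {read['clat_ns']['percentile']['99.000000'] / 1e6:.2f} ms")

run_benchmark("candidate-platform-A")  # repeat unchanged on platform B, C...
```

The key design point is that the job spec is frozen in one place: whatever you change (block size, queue depth, mix), you change it once and rerun on every candidate, so no platform gets a friendlier workload than another.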
MohamedMostafa1: There are several ways to evaluate HCI solutions before buying. Customers need to contact the HCI vendors, or one of the local resellers who propose the same technology. Both vendors and resellers will be able to demonstrate the technology in three different scenarios:
1 – Conduct a cloud-based demo, in which the presenter illustrates the product's features and characteristics in a ready-made environment and can also demonstrate daily administration activities and reports.
2 – Conduct a hosted POC, in which the presenter works with the customer to build a dedicated environment for him and simulate his current infrastructure components.
3 – Conduct a live POC, in which the presenter ships appliances to the customer's data center, deploys the solution, and migrates/creates VMs for testing purposes, to evaluate performance, manageability, and reporting.
If the vendor or a qualified reseller is doing the POC, there should be no mistakes, because it's a straightforward procedure.
Bob Whitcombe: Selecting an HCI path is pretty straightforward, and it goes through the cloud. You first select your workloads and what performance is needed for success. Since the key differentiation across HCI platforms today is software, you should be able to construct a target load of the apps you want to test and run them in a vendor's cloud sandbox. You want to align your hardware solutions so you can leverage your existing support models and contracts, but you are testing software platforms for usability, performance, and adaptability to your current operations model. Once your workload homework is complete, you have selected an application type (VDI, OLTP, data warehouse, etc.), and you have determined worst-case response times, you can throw a target workload at the cloud for evaluation. At this point you are looking for hiccups and deployment gotchas. HCI and cloud processes may be new to you, so you may need to stretch beyond your deployment models. This is a good thing. Recognize that HCI is a leading-edge trend and is one step removed from the cloud - which is where you will be in 5-10 years. You want to look for key software features that lower the cost and complexity of managing the installation. Except for a corner case or three, most applications will fit squarely in the middle of the "good" zone for today's SSD-based HCI solutions. With cloud testing of a target HCI platform you should learn how your applications perform, see the key features you really, really want, and satisfy yourself that these systems can be managed without significant incremental effort by your current staff. Then you do the grid: is the target aligned with my current hardware vendor; endorsements from people running similar applications; killer features; and a drop-dead signing bonus that justifies adding this platform to my portfolio of aging IT equipment? If and only if you come down to a near tie between two vendors should you go to the trouble of a full-meal-deal on-site PoC. It may not provide any more information than the version in the cloud, it requires physical hosting on your site and an assigned project manager, and then you get to deal with the loser - who may very well be your current vendor - and what a joy that will be.
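Building on Bob's point about determining worst-case response times before throwing a workload at a vendor's sandbox, here is a small sketch of reducing a load-test log to the percentile figures you would use as pass/fail gates. The file name, log format (one response time in milliseconds per line), and the 200 ms budget are all hypothetical:

```python
# Sketch: reduce a load-test log of response times (one float per line,
# in milliseconds) to the percentile figures used as pass/fail gates.
import statistics

with open("response_times_ms.txt") as f:
    samples = sorted(float(line) for line in f if line.strip())

def percentile(data, pct):
    # nearest-rank percentile on pre-sorted data
    k = max(0, min(len(data) - 1, round(pct / 100 * len(data)) - 1))
    return data[k]

p95, p99 = percentile(samples, 95), percentile(samples, 99)
print(f"median {statistics.median(samples):.1f} ms, "
      f"p95 {p95:.1f} ms, p99 {p99:.1f} ms")
print("PASS" if p99 <= 200.0 else "FAIL (worst-case budget exceeded)")
```

Gating on a high percentile rather than the average is the practical way to encode "worst-case response time": an HCI platform can post a fine mean while still stalling the handful of requests your users actually notice.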
Rhea Rapps
Content Specialist
IT Central Station
Aug 15 2018
One of the most popular comparisons on IT Central Station is DataCore Virtual SAN vs Nutanix. One user says about DataCore Virtual SAN, "Mirroring is the most valuable feature because I can provide a high level of service and optimize the use of obsolete storage." Another user says about Nutanix, "As a system integrator, Nutanix offers a highly standardized solution which can be deployed in a timely fashion compared to legacy three-tier, generation-one converged, and most competing hyper-converged solutions. This allows us to move quickly with a small team of architects and implementation specialists for large projects." In your opinion, which is better and why?
Bob Whitcombe: In a two-product comparison, three elements are required: a list of pros and cons for each product, and the weighting criteria you will use to score the competitors. As a bowler, I love my golf score. For hyper-converged infrastructure, I focus on mid-size (>500 employees, >$200M) companies and enterprises. Both products let you build your own hyper-converged infrastructure. I give DataCore high marks for its SAN and storage focus, but I see Nutanix as a stronger vehicle for mid-size organizations looking at hybrid cloud and higher levels of feature sophistication. While striving for neutrality, I favor Nutanix for my customers: the incremental costs are manageable, and this group is less interested in building than in buying services. For many smaller customers - especially ones who will go the extra mile to save a buck and see options like ScaleIO from EMC or VMware's vSAN as too expensive - I expect DataCore Virtual SAN to find a good home. As the HCI standard-bearer, the application support question is very clear for Nutanix, both from the SW vendor and from Nutanix; the DataCore support path is fuzzier. DataCore is of interest to customers seeking to re-purpose a brownfield array of older servers for an application they are very familiar with. Most larger IT shops have no time to build something out and will pay for reliability and reduced risk, given the mass of users they support.
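Bob's "weighting criteria" can be made concrete with a simple weighted scorecard. A minimal sketch follows; the criteria, weights, and 1-5 scores below are purely illustrative placeholders, not measured results:

```python
# Sketch of a weighted scorecard: criteria weights sum to 1.0 and each
# product gets a 1-5 score per criterion. All numbers are illustrative.

weights = {"storage focus": 0.25, "hybrid-cloud": 0.30,
           "app support": 0.25, "cost": 0.20}

scores = {
    "DataCore Virtual SAN": {"storage focus": 5, "hybrid-cloud": 2,
                             "app support": 3, "cost": 4},
    "Nutanix":              {"storage focus": 4, "hybrid-cloud": 5,
                             "app support": 5, "cost": 3},
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

for product, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{product}: weighted score {total:.2f} / 5")
```

The value of writing the grid down is that the argument moves from "which product is better" to "which criteria matter and by how much", which is usually the easier conversation to settle.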
Shadi Khoury: I think Nutanix is suitable for medium-sized companies because it is an appliance with standardization and compatibility between equipment; but for the same reason, it is bad for scalability if you try to scale your environment with appliances from other vendors. DataCore, on the other hand, requires more professionalism and experience.
Benjamin Bodenheim: I have used Nutanix for almost a year now, and the product just works. I sleep at night knowing my systems replicate from one data center to the other.
Jean Louis Ferrie
User
Jul 25 2018
Which hypervisor(s) are supported with HPE SimpliVity? Thank you.
Greg-Mathias
IT Services Bureau Chief with 201-500 employees
Apr 27 2018
Working for a state agency, we have to purchase off a state contract unless we can justify buying a product not on the contract. We have seen Nutanix and feel it is the hyper-converged infrastructure that will fill our needs. I need specific examples of what Nutanix provides that competing products (EMC VxRail, Cisco HyperFlex, VMware) will not. I have come up with the following: data locality; Hyper-V support (we are currently on Hyper-V and will be migrating to Acropolis, but we DO NOT want to pay for VMware, so Hyper-V support is required); a hypervisor alternative included in the base product; integrated backup and DR included in the base product; native hybrid deduplication; the ability to scale compute and storage independently; a single-pane-of-glass management console included; network visualization via the management console; and a high NPS score. The answers that I have found to the above all point to Nutanix as the sole HCI that will fulfill these needs. Am I missing anything from a Nutanix standpoint, or am I misconstruing something that other products satisfy when I say they do not?
Charles Tustison: There are a lot of players in this space, and the market should make all of them competitive. Most of the time the cost of the hardware is minor compared to warranty agreements, advanced functions, and support. Nutanix doesn't have the same education buy-in that VMware has, but you really don't need a lot of expertise with these solutions, especially in a smaller department or company. You really can own your IT environment without having to keep a lot of experts on staff to manage it. Unless you are dependent on Hyper-V for licensing (using enterprise server versions to stack Windows VMs), Acropolis (KVM) on Nutanix's CentOS-based OS should give you a solid experience for provisioning machines, which you then manage as Windows systems. I don't think having a competitor running on a competitor is a good thing (Hyper-V or VMware on Nutanix); there is too much opportunity for the vendors to shift blame. I feel the same about build-your-own, since your reputation, not the OEM's reputation, is on the line.

Hit Lenovo up hard for pricing. They haven't had much success in the data center since they were spun off from IBM (blame it on IBM not wanting to sell Intel gear and keeping all the salespeople with the relationships), and they might be very aggressive on pricing and on helping stand up the solution. The triangular competitive environment you want is Dell Nutanix vs. Nutanix vs. Lenovo Nutanix. For Lenovo, call up and ask to talk to the data center technical sales rep for federal/state or for your area. They should be able to give you the competitive info you need and maybe a good price. Also, the hardware support for Lenovo is still IBM. For Nutanix, talk to them after Dell and Lenovo come back with proposals. That is when the fun begins!

Also, don't think that you need one huge infrastructure. If they give you a better price with smaller blocks, build your virtualized Windows infrastructure on smaller Nutanix blocks. You might want to look at having Nutanix from Dell or Lenovo for some roles and straight Nutanix for others. Finally, make sure that there are no restrictions on who you can purchase from. Some bureaucrats think the brand label on the outside of a "made in China" box means it is more secure. That is not the case, but there are sometimes legal issues that come up for some government sales.
Shailesh Surroop: We looked at EMC VxRail, SimpliVity, and a custom-designed solution based on EMC ScaleIO, but Nutanix came out on top, as we wanted a cost-effective solution that would give us the ability to manage any workload and allow us to do box-to-box replication. Prism (Nutanix's management interface) played a big part in the assessment as well, and everyone on the team just loves it. Our solution is based on Cisco UCS hardware and a Nutanix Ultimate license. We have had large clusters on Hyper-V, VMware, and XenServer since 2010, and we had no second thoughts when opting for the Acropolis hypervisor (AHV). We chose AHV as we wanted to move all files to AFS (the Acropolis file system). We are currently moving our VDI infrastructure to Nutanix as well, and so far have 300 Windows 10 VMs running without any performance issues. Our Nutanix HCI project is without doubt the best technological investment we have made in a while. As regards Nutanix support, they are very quick to respond, with a resolution time of less than 30 minutes.
Quincy Johnson: If you have multiple workloads that require high performance, you should also consider DataCore's hyper-converged solution. Its management snaps in well with Hyper-V, and it is hypervisor agnostic. It works over both Fibre Channel and iSCSI network protocols, and you can scale up storage via DAS or even leverage existing storage on the network. It's robust, and I know of situations where an end client put it in their environment to handle the bigger enterprise workloads that a few other HCI products could not deliver the needed performance for. You can start from 2 nodes and scale from there. Your storage mix can be straight SSD, HDD, or hybrid with 10% SSD. Check their website for detailed features and capabilities.
