Hyper-Converged (HCI) Forum

Rony_Sklar
IT Central Station
Jul 07 2020
What are some important factors to keep in mind and to compare when choosing between HCI solutions? 
Rony_Sklar
IT Central Station
Jun 19 2020
From my own research, it seems that Converged Infrastructure relies on hardware, whereas Hyper-Converged Infrastructure is software-based. What does this mean in practical terms? What are the pros and cons of each?
ROBIN JACKSON: In principle you're right that "Converged Infrastructure relies on hardware, whereas Hyper-Converged Infrastructure is software-based." But the software layer keeps advancing, toward managing containers, VMs, storage, and networking within a single architecture. As a Red Hat partner, we are aware of coming developments based on Red Hat OpenShift that significantly simplify operations and provide complete management and portability across on-prem, hybrid, and multi-cloud environments.
Bart Heungens: Software matters in a converged infrastructure too. Converged, to me, is a combination of hardware components sold as a single solution, with a software layer added to make management easier; the hardware still consists mostly of individual server, storage, and networking components. Most hyperconverged solutions go further by integrating the storage layer into the server layer, removing a layer of hardware, with software inside the solution creating a shared storage pool for the server stack. The management layer is simplified as well, just as with the converged solution: less hardware (or hardware used differently) and more software inside. I see it as a typical evolution of IT infrastructure. Keep in mind that "converged" and "hyperconverged" are marketing terms rather than products as such; I saw converged and hyperconverged solutions 20 years ago, before the labels existed. Just look for what you need and pick the right solution.
Norman Allen: A Converged Infrastructure has more hardware. Compute is on one set of hardware, storage is on another set of direct-attached (or other) hardware, and networking is separated too. In a Hyper-Converged Infrastructure, compute and storage are on the same hardware, and depending on the complexity of the solution, sometimes dedicated networking isn't even needed, because you can directly connect the nodes to each other if you only have 2 nodes. Adding nodes is as simple as duplicating the hardware and scaling up or out accordingly. A Hyper-Converged Infrastructure requires less hardware and gives you a more simplified solution. It is also less expensive to procure, operate, and maintain.
Rony_Sklar
IT Central Station
Jun 06 2020
What are key factors that businesses should take into consideration when choosing between traditional SAN and hyper-converged solutions?
German Infante: There are many variables to consider. First, keep in mind that a trend is not a rule: your needs should drive the decision, so don't choose HCI just because it's the new kid on the block.

Start with your budget. SAN is expensive if you are building the infrastructure from scratch; the cables, switches, and HBAs cost more than traditional LAN components. SAN also requires more experienced staff to manage the connections and troubleshoot issues. On the other hand, SAN has particular benefits in sharing storage and server functions: you can have disk and backup on the same SAN and use specialized backup software to move data between storage components without directly impacting server traffic.

SAN cabling has its own considerations around distance and speed. The quality (purity) of the fiber is critical to achieving distance, the longer the distance the lower the supported speed, and transceiver costs can be your worst nightmare. That said, a SAN can connect storage boxes hundreds of miles apart, whereas the LAN cabling used by HCI has a limit of about 100 meters unless you add a WAN, repeaters, or cascaded switches, which introduce additional risk.

Think about required capacity: do you need terabytes or petabytes? Some dozens of TB can be fine on HCI, but at PB scale you should think SAN. What about availability? Several commodity nodes replicating around the world, while respecting latency limits, can be handled by HCI; but if you need the highest availability while replicating a large amount of data, choose a SAN.

On speed: the LAN for HCI starts at a minimum of 10 Gb and can rise to 100 Gb if you have the money, while SAN fabrics currently top out at 32 Gb, and your storage controllers must match that speed, which can drive the cost sky-high.

On scalability: HCI can have dozens of nodes replicating and adding capacity, performance, and availability around the world; it is a scale-out model. With SAN storage you have a limited number of replication relationships between arrays; depending on the manufacturer you can typically keep about 4 copies of the same volume distributed around the world, and scalability is bounded by the controllers, a scale-up model.

On functionality: SAN storage can handle features such as deduplication, compression, and multiple kinds of traffic (file, block, or object) in hardware; HCI handles only block and needs extra hardware to accelerate processes like dedupe. HCI is a way to share storage over the LAN and has dependencies such as the hypervisor and software or hardware accelerators. A SAN is the way to share storage with servers: it is like a VIP lounge, where an exclusive set of server guests shares the buffet and can draw on the performance of hundreds of drives to support the most critical response times.
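As a rough illustration of how these trade-offs can be weighed side by side, here is a minimal Python sketch of a SAN-vs-HCI screening check. The thresholds only encode the rules of thumb from the answer above (dozens of TB suit HCI, PB scale leans SAN, roughly 100 m LAN reach, latency-critical or file/object workloads lean SAN); they are assumptions for illustration, not product limits.

```python
# Rough SAN-vs-HCI screening sketch. The thresholds encode the rules of thumb
# from the answer above; they are illustrative assumptions, not vendor limits.

def screen_platform(capacity_tb, site_distance_m, latency_sensitive, needs_file_or_object):
    reasons = []
    san_points = hci_points = 0

    if capacity_tb >= 1000:            # PB scale favours scale-up SAN arrays
        san_points += 1; reasons.append("PB-scale capacity -> SAN")
    else:                               # dozens of TB sit comfortably on HCI
        hci_points += 1; reasons.append("TB-scale capacity -> HCI")

    if site_distance_m > 100:           # beyond ~100 m LAN reach you need WAN/repeaters
        san_points += 1; reasons.append("long distance -> SAN fabric (or accept WAN risk)")

    if latency_sensitive:               # critical response times benefit from a dedicated fabric
        san_points += 1; reasons.append("latency-critical workload -> SAN")

    if needs_file_or_object:            # hardware file/block/object support lives on arrays
        san_points += 1; reasons.append("file/object protocols -> SAN")
    else:
        hci_points += 1; reasons.append("block-only -> HCI is enough")

    leaning = "SAN" if san_points > hci_points else "HCI"
    return leaning, reasons

print(screen_platform(capacity_tb=60, site_distance_m=40,
                      latency_sensitive=False, needs_file_or_object=False))
```

Treat the output as a starting point for discussion, not a verdict; real sizing has to come from your own requirements and vendor quotes.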
Tim Williams: Whether to go 3-tier (aka SAN) or HCI boils down to asking yourself what matters most to you:
- Customization and tuning (SAN)
- Simplicity and ease of management (HCI)
- A single number to call for support (HCI)
- Opex vs. Capex
- Pay-as-you-grow/scalability (HCI)
- Budget cycles

If you are a company that only gets budget once every 4-5 years and can't get capital expenditure for storage outside that cycle, pay-as-you-grow becomes less viable, and HCI is designed with that in mind. It doesn't rule out HCI, but it does reduce some of the value gained. Likewise, if you are on a budget cycle that replaces storage and compute at different times, with no means to repurpose them, HCI is a tougher sell to upper management: HCI requires you to replace both at the same time, and sometimes capital budgets don't work out.

There are also workloads that will work better on a 3-tier solution than on HCI and vice versa. HCI works very well for anything except VMs with very large storage footprints. One of the key aspects of HCI performance is local reads and writes; a workload that is a single large VM will consume essentially two full HCI nodes and will require more storage than compute. Video workloads come to mind: body cams for police, surveillance cameras for businesses and schools, graphic editing. Those workloads don't reduce well and are better suited to a SAN with very few features, such as an HPE MSA.

HCI runs VDI exceptionally well, and nobody should do 3-tier for VDI going forward. General server virtualization can realize the value of HCI, as it radically simplifies management. 3-tier requires complex management and time, because you have to manage the storage, the storage fabric, and the hosts separately and with different toolsets. This also leads to support issues, as you will frequently see the three vendor support teams blame each other. With HCI you call a single number and they support everything, so you can drastically reduce your opex by simplifying support and management.

If you're planning for growth up front and cannot pay as you grow, 3-tier will probably be cheaper. HCI gives you the option of not spending capital if you end up missing growth projections, and of growing past planned growth much more easily, since adding a node is much simpler than expanding storage, networking, and compute independently. In general, it's best to start with HCI and work to disqualify it rather than the other way around.
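To make the "single large VM needs roughly two full HCI nodes" point concrete, here is a tiny sketch under two stated assumptions: 2-way replication (every block is stored twice) and a hypothetical 20 TB of usable capacity per node. Both numbers are placeholders, not vendor figures.

```python
# Why a large-footprint VM eats whole HCI nodes: with replication factor 2,
# every terabyte the VM stores consumes two terabytes of raw cluster capacity.
# Node capacity and replication factor below are assumptions for the example.

def nodes_for_vm(vm_storage_tb, node_usable_tb=20.0, replication_factor=2):
    raw_needed = vm_storage_tb * replication_factor   # each write is stored RF times
    return raw_needed / node_usable_tb

# A 20 TB surveillance-video VM consumes roughly two nodes' worth of capacity
# while barely touching their CPU, which is why such workloads often fit a plain SAN better.
print(f"{nodes_for_vm(20):.1f} nodes of capacity consumed")
```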
Bart Heungens: It all depends on how you understand and use HCI. If you see HCI as an integrated solution where storage is integrated into the servers, and software-defined storage is used to create a shared pool of storage across compute nodes, then performance will be the deciding factor between HCI and a traditional SAN. Most vendors' HCI solutions write data 2 or 3 times for redundancy across compute nodes, so there is a performance impact on the applications due to the latency of the network between the nodes. Putting in 25Gb networks, as some vendors recommend, is not always a solution, since it is not the bandwidth but the latency of the network that defines the performance. Low-latency application requirements might push customers to a traditional SAN in this case.

If you use HCI for ease of management through a single pane of glass, I see many storage vendors delivering plugins for server and application software that eliminate the need to use legacy SAN tools to create volumes and present them to the servers. Often it is possible to create a volume directly from within the hypervisor console and attach it to the hypervisor hosts. So for that scenario, I don't see a reason to choose one over the other.

Today there is a vendor (HPE) that combines a traditional SAN with an HCI experience and calls it dHCI. It gives you the HCI user experience, the independent scalability of storage and compute, and the low latency that is often required. In time I expect other vendors to follow the same path and deliver these kinds of solutions as well.
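Here is a minimal sketch of why latency rather than bandwidth dominates, as described above: each write must be acknowledged by a replica on another node, so the network round trip sits inside the write path. The microsecond figures are illustrative assumptions, not measurements of any product.

```python
# Minimal model of a replicated HCI write: the local write and the remote
# replica write are issued in parallel, and the acknowledgement can only
# return after the slower path (network round trip + remote SSD write).
# All figures are illustrative assumptions.

def replicated_write_latency_us(ssd_write_us=100.0, network_rtt_us=200.0):
    return max(ssd_write_us, network_rtt_us + ssd_write_us)

for rtt in (50, 200, 500):  # quiet top-of-rack fabric vs. busy fabric vs. stretched cluster
    print(f"RTT {rtt} us -> ~{replicated_write_latency_us(network_rtt_us=rtt):.0f} us per write")
```

Moving from 10Gb to 25Gb links raises bandwidth, but the round-trip term barely changes, which is exactly the point made above.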
Rony_Sklar
IT Central Station
May 27 2020
What are the benefits of using cloud versus hyper-converged infrastructure? What should enterprises take into account when choosing between these storage options?
Carlos Etchart: I think the key points to consider are security, performance, and CAPEX vs. OPEX.

Security: Having HCI on-premises allows you to keep your current security policies. For some customers, having sensitive data in the cloud is not even an option because of their policies. If you go to the cloud, remember that you, not the cloud service provider, are responsible for the security of your data, and new policy schemes may be needed.

Performance: You have to evaluate whether the cloud provides the bandwidth, throughput, and availability that your operation requires compared with on-premises.

CAPEX vs. OPEX: Even though there are schemes that let you consume HCI on-premises as EaaS (Everything as a Service, like HPE GreenLake), most customers own their HCI infrastructure, so depending on your expenditure preference you will favor one or the other.
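For the CAPEX vs. OPEX point, a back-of-the-envelope comparison like the sketch below can frame the discussion. All prices and counts are placeholders to show the arithmetic; substitute real quotes before drawing any conclusion.

```python
# Back-of-the-envelope CAPEX-vs-OPEX comparison. Every number here is a
# placeholder; the point is only the shape of the calculation.

def on_prem_monthly_cost(capex, support_per_year, lifetime_years=5):
    # straight-line amortisation of the purchase plus ongoing support
    return capex / (lifetime_years * 12) + support_per_year / 12

def cloud_monthly_cost(vm_count, per_vm_month, egress_per_month=0.0):
    return vm_count * per_vm_month + egress_per_month

hci = on_prem_monthly_cost(capex=300_000, support_per_year=30_000)
cloud = cloud_monthly_cost(vm_count=150, per_vm_month=120, egress_per_month=1_500)
print(f"HCI ~${hci:,.0f}/month vs cloud ~${cloud:,.0f}/month")
```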
Tim Williams: HCI is on-prem, so it's simpler and easier to manage and to integrate with your applications and your network. Something like Nutanix can give you a lot of the functionality of the cloud without the massive headache of designing your network and applications to use the cloud effectively (for infrastructure). SaaS is a fantastic use of the cloud, but infrastructure-as-a-service hasn't matured enough in process or manageability to justify it. It will always cost more to be in the cloud, and it will always be more difficult to get to it. The cloud is amazing if you use it right.
Chaan Beard: There are several benefits of both cloud and HCI that a user of a feature-rich HCI stack can leverage in a hybrid style. First, many applications have not been designed for the cloud and require an on-premises stack that can save data in the cloud while offering the same simplicity as cloud operations. If you select an HCI vendor that supports all of the hypervisors and all of the clouds, you can make your applications leverage each technology to your best advantage and lower OPEX costs by up to 60% without rewriting your applications to be cloud-friendly.

You can also simplify the entire stack and enjoy 5-microsecond latency, without making storage API calls that leave the kernel and introduce even more latency while they access storage from SAN and NAS devices. You can also serve up applications using Frame technology with this stack, which lets you deploy fully secure solutions for remote workers in minutes. AOS offers full encryption and FIPS 140-2 level security built into the HCI stack right out of the box, with no need to bolt on complex Frankenstein solutions like NSX that require several residents with deep knowledge of 8 different VMware stacks to operate the whole enchilada, which increases your OPEX costs dramatically.

AOS-based HCI eliminates separate SAN, NAS, and object store silos; it also eliminates the system security and server/virtualization silos and condenses them into one stack, so simple that 8-year-old children can administer it in a few mouse clicks. Mature HCI also offers BC/DR benefits that let you use the cloud for what the cloud is good at: BC/DR. Mature HCI vendors also offer their entire HCI stack on AWS and Azure, so you can drag virtual machines from on-prem to the cloud seamlessly. The San Jose-based HCI vendor that does this is 4 years ahead of its competition (Dell EMC), which only works with one hypervisor, while they work with any hypervisor and all the cloud stacks concurrently. Nutanix Acropolis Operating System is the wave of the future, and it runs on any hardware on their HCL from any server vendor; the HCL list is long. It is also a cluster-based architecture that can be expanded one node at a time, and they have GPU nodes as well. Nutanix software-defined Valhalla is here today, so advanced everyone will think you are with the gods!
Mike McCaffery
Consulting Systems Engineer at a reseller with 1,001-5,000 employees
Apr 09 2020
I work for a VAR and I have a customer who is interested in HCI, but one of their requirements is segment routing. Segment routing is a forwarding paradigm that provides source routing, meaning the source can define the path the packet will take. Segment routing still uses MPLS to forward the packets, but the labels are carried by an IGP. In segment routing, every node has a unique identifier called the node SID. I could not find anything about segment routing and HCI solutions, so I was wondering if you were aware of any HCI solutions that support segment routing? Thanks! I appreciate the help.
Henry A. McKelvey: I think that asking whether HCI supports SR is not quite the right question, because it really doesn't; rather, SR supports HCI. Let me explain. SR is a network function and HCI is a compute function. Therefore, HCI can use SR to provide faster data transfers for cloud computing. The HCI virtual systems can use the efficiency of an SR network to deliver data where it needs to go by preselecting the desired path. This could be used to provide remote virtual computing, if you can imagine an HCI system functioning on an SR network.
Michael Samaniego: An HCI solution with VMware vSAN does not include all of the network virtualization features of VMware NSX. You would need to analyze the scope of the service you want to provide and combine vSAN with the features offered by the VMware NSX solution.
reviewer1223523: You could use Nutanix with Flow for microsegmentation!
Miriam Tover
Content Specialist
IT Central Station
A lot of community members are trying to decide between hyper-convergence and traditional server capabilities. How do you decide between traditional IT infrastructure, Converged Infrastructure (CI), and Hyperconverged Infrastructure (HCI)? What advice do you have for your peers? It's really hard to cut through all of the vendor hype about these solutions. Thanks for sharing and helping others make the right decision.
Shawn Saunders: This is truly a TCO decision, but not TCO as some do it: a comprehensive TCO that includes the cost of the new CI/HCI plus installation, training, and staffing, and the difference in operational costs over the life of the solution. It should also consider the points from Scott and Werner about consistency of support and compatibility of the components. The best opportunity I see is in an environment with substantial IT debt, or when you can align the refreshes of the various components; that helps the TCO conversation dramatically.

Keep away from the "shiny object" argument. Just because "everyone" is doing it is not the right reason, nor does it make sense just because all your vendors or your technical team are pushing it. Again, steer clear of the shiny object.

I recommend getting demos from 3-5 HCI vendors and capturing the capabilities they provide. Then spend some time with your business and technical teams to understand what their requirements really are, in terms of capabilities; separate the must-haves from the should-haves and the nice-to-haves. Then build a cost model of what you are currently spending to support your environments (understand your current TCO) and build your requirements document accordingly. Release this as an RFP to find a solution that can meet your TCO requirements. Your goal should be a better TCO than the status quo, unless there are specific business benefits that outweigh a simple TCO; in that case you may need to ask the business to fund the difference so they can have that specific business value proposition.
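A comprehensive TCO of the kind described above can be laid out as a simple model like the sketch below. Every figure is a placeholder for your own numbers; the point is that acquisition, installation, training, staffing, and operations all belong in the same comparison.

```python
# Sketch of a "comprehensive TCO" comparison: purchase price plus installation,
# training, staffing and operating costs over the life of the solution.
# All numbers are placeholders for your own figures.

def total_cost_of_ownership(acquisition, installation, training,
                            annual_staffing, annual_operations, years=5):
    return acquisition + installation + training + years * (annual_staffing + annual_operations)

status_quo = total_cost_of_ownership(acquisition=0, installation=0, training=0,
                                     annual_staffing=260_000, annual_operations=180_000)
hci_option = total_cost_of_ownership(acquisition=450_000, installation=40_000, training=25_000,
                                     annual_staffing=180_000, annual_operations=110_000)
print(f"status quo 5-year TCO: ${status_quo:,}  |  HCI 5-year TCO: ${hci_option:,}")
```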
John Barnhart: HCI lowers operating and capital costs because it depends on the integration of commoditized hardware and certified/validated software, such as Dell Intel-based x86 servers with DataCore SANsymphony, or VMware vSAN, etc. The idea is to reduce cost and complexity by converging networking, compute, storage, and software into one system, thereby avoiding technology silos and providing a cloud-like cost model and deployment/operation/maintenance/support experience for the admin, developer, and tenant/end users. HCI platforms can also help with scalability, flexibility, and the reduction of single points of failure, as well as HA, BURA, security, compliance, and more.

HCI provides a unified resource pool for running applications more efficiently and with better performance, because it is more rack-dense and because different technologies are converged into one solution. Placing the technology inside the same platform is beneficial if for no other reason than the physical fact that data and electronic signals have less distance to travel. For instance, using internal flash memory to support a read/write-intensive database is a good idea, because a separate external array is no longer necessary. It is no different from how far water from a glacier has to travel down the river after it melts in order to reach the ocean: the closer the original source is to the final destination, the better.

The above is a simple answer, given without knowing the current needs and use case of the business, and is only intended to give a very simple perspective on why HCI. You must consider your IT organization, goals, and business needs very carefully before choosing any solution. I always advise considering the following:
1. Is "your world" changing?
2. Why?
3. What benefits do you expect to achieve from making incremental changes?
4. What happens if you do nothing?
Bob Whitcombe: Should I or shouldn't I: that is the HCI question. Not to wax poetic over a simple engineering decision, but HCI is about understanding the size and scale of your application space. Currently, most HCI implementations are limited to 32 nodes. That can make for a very powerful platform, assuming you make the right choices at the outset.

What does HCI do that is fundamentally different from the traditional client-server architecture? HCI manages the entire site as a single cluster, whether it has 4 nodes, 14, 24, or 32. It does this by trading ultimate granularity for well-defined Lego blocks of storage, network, and compute. When seeking to modify a traditional architecture, you need to coordinate between three separate teams (storage, server, and network) to add, move, update, or change anything. With HCI, if you need more capacity, you add another block of storage, compute, and network. You trade the ultimate "flexibility" of managing every detail of disk size, CPU type, and networking in a traditional architecture for a standard module in an HCI environment that is replicated as you scale. Later, you can scale by adding a standard block or one that is compute- or storage-centric. With that constraint, you don't have to worry about the complexity of managing, scaling, or increasing system performance. But you do need to pick the right-sized module, which means that for multiple needs you may end up with different HCI clusters, each based on a different starting block.

The decision to use HCI or traditional servers comes down to scale. For most needs today (general virtualization, DevOps, and many legacy apps that have been virtualized), HCI is more cost-effective. To handle Epic for a large hospital chain or a global SAP implementation for a major multinational, you probably need a traditional architecture. If you need more than 32 servers to run an application today, that application will need to be cut up to fit on most HCI platforms. The trend is to use containers and virtualization to parse legacy applications into discrete modules that fit on smaller, cheaper platforms, but any time you even think of starting a conversation with "We just need the software dogs to...", you're barking up the wrong tree.

Let's look at three examples of HCI clusters: replacing a legacy application platform where maintenance is killing you, building out a new virtualization cluster for general application use, and a DevOps environment for remote and local developer teams.

A legacy application, say one running logistics for a distribution or manufacturing operation, is typically pretty static. It does what it does, and it is constrained by the size of the physical plant, so it will probably not grow much beyond its current sizing. In that case I would scope the requirements and, if it is under 100TB, probably opt for a hybrid HCI solution where each node has 2 or 4 SSDs acting as a data cache in front of a block of hard drives, typically 2-10 2.4TB 2.5" 10K rpm units in a 2U chassis. You need to decide how many cores are needed for the application, and then add the cores the HCI software requires for its management overhead, which can range from 16 to 32. You start with dual-socket 8-core CPUs and move up as needed: 12, 16, 18, 22, 24, etc. Most systems use Intel CPUs, which are well characterized for performance, so CPU selection is no different from today, other than the need to accommodate the incremental cores for the HCI software overhead.
Most IT groups have standardized on CPUs, either because core selection is constrained by software license costs tied to core counts, or because they go full meal deal on core counts and frequencies to get the most from their VMware ELAs. For HCI networking, since the network is how disk traffic and inter-process communications are handled, you go with commoditized 10Gb switching. Most server nodes in HCI platforms will have 4-port 10Gb cards to provide up to 40Gb of bandwidth per node, tied together by a pair of commodity 10Gb switches. If your legacy application is a moderately sized transaction-processing engine, then simply move from a hybrid system using a flash cache and spinning rust to an all-flash environment. You trade ~120 IOPS HDDs for 4,000 IOPS SSDs, and then the network becomes your limiting factor.

If I were to build out a new virtualization platform for general applications, I would focus on the types of applications and look at their IOPS requirements, but in general I would propose all-flash, just as we do today in traditional disk arrays. As noted earlier, the base performance of an HCI cluster is tied to the disk IOPS of the core building blocks, then network latency and bandwidth. The current trend is for flash density and performance to grow while HDDs have plateaued. While spinning disk costs are fairly stable, flash prices are falling. If I build a cluster today for a virtualization environment that needs 5,000 IOPS now and 10,000 IOPS next year as it doubles in size, I get better performance from the system today, and future SSD prices will fall, allowing me to increase performance by adding nodes while the price per node drops. Don't forget that as the utility of a compute service increases, so does the number of users, and maintaining user response times as more are added is about lower latency. Read: SSD.

For a DevOps environment, I want those teams on an all-flash, high-core-count CPU design limited to 6-8 nodes, depending on how many developers there are and the size of the applications they work on. I would insist on a separate dev/test environment that mimics production for them to deploy and test against, to verify performance and application response times before anything is released to the tender mercies of the user community.

Obviously I have made extensive generalizations in these recommendations, but like the Pirate Code, they are just guidelines. Traditional IT architectures are okay, but HCI is better for appropriately sized applications. Push for all-flash HCI; there are fewer issues in the future from users. When the budget dogs come busting in to disturb the serenity of the IT tower, go hybrid. Measure twice, scale over and over, and document carefully. Have the user and budget dogs sign off, because they will be the first to complain when the system is suddenly asked to scale rapidly and cannot deliver incremental performance as user counts grow. User support is tied to IOPS, and when your core building block is 120 IOPS per drive, you have 30 times less potential than when you use SSD. Once the software dogs catch up and all the applications live in modular containers that can be combined over high-speed networks, there will be no "if HCI": just do.
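As a rough companion to the sizing walk-through above, here is a small sketch that adds the HCI software's core overhead to the application's core count and compares hybrid versus all-flash backend IOPS. The overhead, drive counts, and the 120 vs. 4,000 IOPS figures are the illustrative numbers used in this answer, not vendor specifications.

```python
# Rough HCI cluster sizing sketch: application cores plus a per-node overhead
# for the HCI software, and aggregate IOPS from the chosen drive type.
# All constants are illustrative assumptions taken from the answer above.

import math

def size_cluster(app_cores, cores_per_node=32, hci_overhead_cores_per_node=8,
                 drives_per_node=8, iops_per_drive=120):
    usable_cores_per_node = cores_per_node - hci_overhead_cores_per_node
    nodes = max(4, math.ceil(app_cores / usable_cores_per_node))  # assume a 4-node minimum
    aggregate_iops = nodes * drives_per_node * iops_per_drive
    return nodes, aggregate_iops

# hybrid (10K rpm HDD behind SSD cache, ~120 IOPS/drive) vs all-flash (~4,000 IOPS/drive)
for label, iops in (("hybrid", 120), ("all-flash", 4000)):
    nodes, agg = size_cluster(app_cores=200, iops_per_drive=iops)
    print(f"{label}: {nodes} nodes, ~{agg:,} raw backend IOPS")
```

The same node count delivers wildly different backend IOPS depending on the drive choice, which is the "push for all-flash" argument in numeric form.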
Ariel Lindenfeld
Sr. Director of Community
IT Central Station
There are a lot of vendors offering HCI solutions. What is the #1 most important criteria to look for when evaluating solutions? Help your peers cut through the vendor hype and make the best decision.
ROBERT BLUESTEIN: Cost metrics (ROI, Capex and Opex savings, and even a TCO) should be accounted for.
1) Operational efficiency assumptions based on assessments. These should yield time to deploy, VM-to-admin ratios, device consolidation, and power usage.
2) My most important criterion is the Recovery Time Objective and how well the solution sustains operation without data loss. The Recovery Point Objective measures how far back you can go without data loss, and the RTO is how long it takes to bring mission-critical systems back online (see the sketch after this answer).
Since you will find yourself managing VMs, you might consider a cost analysis there as well (remember, you won't be managing devices any longer). The benefits of using HCI are: 1) a VM-centric approach; 2) a software-defined datacenter (less replacement, better utilization, pay as you go); 3) data protection; 4) lower costs; 5) centralized and even automated self-management tools.
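To keep RPO and RTO straight, the sketch below checks made-up recovery measurements against made-up objectives: RPO bounds how much data you can afford to lose (time since the last good copy), while RTO bounds how long mission-critical systems may stay down.

```python
# Tiny RPO/RTO check. The targets and measurements are placeholders; the point
# is only the distinction between the two objectives.

def meets_objectives(minutes_since_last_copy, minutes_to_restore,
                     rpo_minutes=15, rto_minutes=60):
    rpo_ok = minutes_since_last_copy <= rpo_minutes   # data-loss window within tolerance?
    rto_ok = minutes_to_restore <= rto_minutes        # downtime within tolerance?
    return rpo_ok, rto_ok

print(meets_objectives(minutes_since_last_copy=10, minutes_to_restore=90))
# -> (True, False): data loss is acceptable, but recovery took too long
```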
Bart Heungens: For me an HCI solution should provide:
- ease of management: 1 console does all, no experts needed, a cloud experience but with on-premise guarantees
- invisible IT: don't care about the underlying hardware, 1 stack
- built-in intelligence based on AI for monitoring and configuration
- guaranteed performance for any workload, also when failures occur
- data efficiency with always-on dedupe and compression
- data protection including backup and restore
- scalability: ease of adding resources independent of each other (scale up & out)
- a single line of support
Bharat Bedi: While there is a long list of features and functions to look at for HCI, in my experience of creating HCI solutions and selling them to multiple customers, here are some of the key things most customers boil it down to:

1) Shrinking the data center: This is one of the key customer pitches all the big players make: "We will help you reduce your footprint with hyperconverged infrastructure." It is worth understanding how much of a reduction they are actually offering. Can 10 racks come down to two, fewer, or more? With the many data-reduction technologies included, and compute and storage residing in the same nodes, that kind of reduction is possible, especially if you are sitting on legacy infrastructure (see the sketch after this answer).

2) Ease of running it: The other reason for buying and running HCI is "set it and forget it". Look not only at how easy the system is to set up and install, but also at how long it takes to provision new VMs, storage, and so on. It is worth probing your vendors on what they do about QoS, centralized policy management, etc. Remember that most HCI portfolios differ at the software layer, and some of the features mentioned above are bundled into their code and work differently with different vendors.

3) Performance: This can be an architecture-level difference. In the race to shrink the hardware footprint, you could hit performance glitches. For example: when you switch on deduplication and compression, how much do they affect overall CPU performance, and thereby the VMs? Ask your vendors how they deal with this; I know some of them offload such operations to a separate accelerator card.

4) Scaling up and scaling out: How easy is it to add nodes, for both compute and storage? How long does adding nodes take, and is there any disruption to service? What technologies does the vendor use to create a multi-site cluster, and can the cluster include remote sites? Can you add storage-only or compute-only nodes if needed? All of the above have cost implications in the long run.

5) No finger-pointing: Remember point number two? Most HCI offerings are based on other vendors' hardware, wrapped with the vendor's own HCI software to make it behave in a specific way. If something goes wrong, is your vendor willing to take full accountability rather than asking you to speak with the hardware vendor? It is a good idea to look for a vendor with a bigger customer base (not just for HCI but for compute and storage in general), making them a single point of contact with more resources to help you if anything goes wrong.
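The "shrink the data center" arithmetic in point 1 can be sanity-checked with a sketch like the one below. The reduction ratio, node density, replication factor, and rack density are assumptions for the example; ask each vendor what ratios they will commit to for your actual data set.

```python
# Footprint-reduction sanity check: how many nodes and racks a given usable
# capacity needs, once dedupe/compression and replication are factored in.
# Every constant below is an illustrative assumption, not a vendor figure.

import math

def racks_needed(usable_tb_required, dedupe_compress_ratio=3.0,
                 raw_tb_per_node=40.0, replication_factor=2, nodes_per_rack=16):
    raw_tb = usable_tb_required / dedupe_compress_ratio * replication_factor
    nodes = math.ceil(raw_tb / raw_tb_per_node)
    return math.ceil(nodes / nodes_per_rack), nodes

racks, nodes = racks_needed(usable_tb_required=1200)
print(f"~{nodes} nodes in ~{racks} rack(s)")   # e.g. 1.2 PB usable -> ~20 nodes in ~2 racks
```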
Nurit Sherman
Content Specialist
IT Central Station
We all know that it's important to conduct a trial and/or proof-of-concept as part of the buying process.  Do you have any advice for the community about the best way to conduct a trial or POC? How do you conduct a trial effectively?  Are there any mistakes to avoid?
Manish Bhatia: I would say: gather and understand the requirements, share and check them with vendors, and invite the vendors to propose a solution with a POC in your environment. Ask for use cases, and for any legacy application or hardware ask for the compatibility matrix. Then you will have a good idea of the capabilities of each solution and vendor.
anush santhanam: Hi. When evaluating HCI, it is absolutely essential to run a trial/POC to evaluate the system against the candidate workloads it will be expected to run in production. However, there are quite a few things to watch out for. Here is a short list:

1. Remember that most HCI depends on a distributed architecture, which means it is NOT the same as a standard storage array. If you want to do any performance benchmarking with tools such as IOMeter, you need to be extremely careful about how you create your test VMs and how you provision disks. Vendors such as Nutanix have their own tool, X-Ray. I would still stick to a more traditional approach.

2. Look at the list of apps you will be running. If you are going for a KVM-type hypervisor solution, check that the apps are certified. More importantly, keep an eye on OS certification. While HCI vendors will claim they can run anything and everything, you need the certification to come from the app/OS OEM.

3. Use industry-standard benchmarking tools. Unless you are using a less "standard" hypervisor such as KVM or Xen, you really don't need to waste time benchmarking the hypervisor itself, as VMware is the same anywhere.

4. Your primary interest should be the storage layer and the distributed architecture. With HCI, the compute does not change and the hypervisor (assuming VMware) does not change; what changes is the storage. Next come the ancillary elements such as management, monitoring, and other integration pieces. Look at these closely.

5. Use workload-specific testing tools. Examples include LoginVSI, JMeter, and Paessler/Badboy for web server benchmarking.

6. Remember to look at best practices on a per-app basis. You may have been running an app like Oracle in your environment for ages in a monolithic way, but when you try the same app on HCI it may not give you the performance you want. This has to do with how the app has been configured and deployed, so reviewing app best practices is important.

7. If you are looking at DR/backup, evaluate your approaches. Are you using native backup or replication capabilities, or an external tool? Evaluate these accordingly, and remember your RTO/RPO: not all HCI will support synchronous replication.

8. If you are looking at native HCI data-efficiency capabilities (inline dedupe and compression), you will need to design the testing for these carefully.

9. Lastly, if you are evaluating multiple HCI products, make sure you use a common approach across products; otherwise your comparison will be apples and oranges (see the sketch below).

Hope this helps.
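One practical way to follow point 9 and keep a multi-vendor POC apples-to-apples is to define a single test matrix up front and run identical profiles on every product. The sketch below is generic scaffolding, not a recommended benchmark suite; run_profile is a hypothetical callable standing in for whatever drives your load tool (IOMeter, fio, LoginVSI, and so on).

```python
# Common POC test matrix so every vendor is measured against identical profiles.
# The profiles are generic examples; replace them with workloads that match
# what you actually intend to run in production.

TEST_MATRIX = [
    {"name": "oltp-like",  "block_kb": 8,   "read_pct": 70, "outstanding_io": 32, "working_set_gb": 200},
    {"name": "vdi-boot",   "block_kb": 4,   "read_pct": 80, "outstanding_io": 64, "working_set_gb": 100},
    {"name": "backup-seq", "block_kb": 256, "read_pct": 0,  "outstanding_io": 8,  "working_set_gb": 500},
]

def run_poc(vendor, run_profile):
    """run_profile(vendor, profile) is a placeholder hook that drives your chosen load tool."""
    return {p["name"]: run_profile(vendor, p) for p in TEST_MATRIX}

# results = {v: run_poc(v, run_profile) for v in ("vendor-a", "vendor-b", "vendor-c")}
```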
MohamedMostafa1: There are several ways to evaluate HCI solutions before buying. Customers need to contact the HCI vendors or one of the local resellers who carry the technology. Both the vendors and the resellers can demonstrate the technology in three different scenarios:
1. Conduct a cloud-based demo, in which the presenter illustrates product features and characteristics on a ready-made environment and can also demonstrate daily administration activities and reporting.
2. Conduct a hosted POC, in which the presenter works with the customer to build a dedicated environment and simulate the customer's current infrastructure components.
3. Conduct a live POC, in which the presenter ships appliances to the customer's data center, deploys the solution, and migrates or creates VMs for testing purposes, so the customer can evaluate performance, manageability, and reporting.
If the vendor or a qualified reseller is running the POC, there should be no mistakes, because it's a straightforward procedure.