
VMware EVO:RAIL [EOL] Overview

What is VMware EVO:RAIL [EOL]?
VMware EVO:RAIL combines VMware compute, networking, and storage resources into a hyper-converged infrastructure appliance to create a simple, easy-to-deploy, all-in-one solution offered by Qualified EVO:RAIL Partners.

VMware EVO:RAIL [EOL] is also known as EVO:RAIL.


VMware EVO:RAIL [EOL] Customers
ASAHI INTERACTIVE, Fukuoka-Hibiki Shinkin Bank

Archived VMware EVO:RAIL [EOL] Reviews (more than two years old)

it_user335727
Deputy Director of Information Technologies at County of El Dorado, California
Video Review
Vendor
We've just gotten started with it. We've seen that the single interface makes it easy to manage and the expandability makes it flexible for us.

What is most valuable?

One of the more valuable features of EVO:RAIL for us is the fact that it's a hyper-converged appliance. We're using it in a remote data center as a disaster recovery device, and we like the idea of having a single device that doesn't take up a lot of space and doesn't require a lot of peripherals, power supplies, extra UPSs, and things like that. So it's very easy to manage.

How has it helped my organization?

Well, we've just gotten started with it, so it's hard to say what we have seen as benefits. But going forward, I think the expandability is going to be absolutely crucial for us. Eventually we want to make our disaster recovery site into more than just a bare-bones continuity facility. We'd like to have a full data center there where we can replicate everything, and EVO:RAIL is a great way for us to get started on that path.

The combination of VMware SRM and vSphere makes it very manageable. We're looking forward to exploring more of the capabilities of the box itself. It has established a really strong base for us; as our clients' needs change and our IT infrastructure requirements change, this is going to be a basis for us going forward.

It is easy to manage because we have a single interface to it. With it being a hyper-converged box, I guess the real advantage is having a single interface to manage the box.

What needs improvement?

The ability to expand, both inside the box and by stacking more EVO:RAIL units in the enclosure, gives us the flexibility and expandability that we're looking for.

As for future features, I can't really say we have any preferences yet. It's early for us.

What do I think about the stability of the solution?

Our impression so far is that it's absolutely stable, bulletproof really. Everything is contained in one box and very easily managed. We're running SRM too, basically to protect critical servers, and we're seeing that it's pretty easy to operate and maintain.

What do I think about the scalability of the solution?

Well, it's very scalable from what we've seen. Being able to expand inside the box, and then outside of the box itself with additional units, is really attractive. Managing basically a single environment that is able to expand like that is something that's attractive to us.

How are customer service and technical support?

Our technical support has been very good. We had professional services from VMware come in and help us set up the box, and the services were delivered beyond our expectations. We finished ahead of schedule and accomplished more than we had expected to during the implementation period. So the set-up went very well.

Which solution did I use previously and why did I switch?

Our IT strategy has been leading us toward disaster recovery capability. We're a very distributed small government in El Dorado County; it's a very rural county, and we recognize that our data center, even though it's in one of the larger towns in the county, is still vulnerable to natural disasters. Last year, one of the biggest wildfires in California was in El Dorado County, so we are fully aware of the need to protect our infrastructure.

We have a very small space for a disaster recovery site that became available, about 60 miles away from our main data center. It's in a relatively unattended space inside another facility, and the EVO:RAIL was just the perfect match for that. Being one box to manage, and easily managed remotely, it was very attractive to us.

What's my experience with pricing, setup cost, and licensing?

It actually came out priced significantly less than a traditional data center solution of similar size. That's a benefit right away for a budget-constrained organization like a small government.

Which other solutions did I evaluate?

We acquired the EVO:RAIL through a partnership between Dell and VMware. We've dealt extensively with both Dell and VMware in the past, and having this offer come to us as a partnership just made it that much easier for us to move ahead.

We didn't really consider other vendors. We had been looking at several other options from Dell, and again, when the partnership offering came through, it seemed like it would work best for us.

What other advice do I have?

On a scale from one to ten, right now I would have to put it on the high end, because again, we've just gotten started with it. But what we've experienced so far is very positive, so I'm going to put it at an eight or a nine, and I'm sure that over time, as we become more familiar with it, we'll be able to confidently say it's a ten.

I would recommend to any of our peers that are looking at this to consider the ease of management and the physical support required for a hyper-converged box like the EVO:RAIL compared to a traditional data center configuration. With less power supply, less switching, less of everything to worry about because it's all in one box, especially for our situation using it at a remote site, it's a great solution.

I'd say that peer reviews are very important. It's difficult to find accurate third-party reviews of complex solutions, so any ability to extract an honest third-party review of a product line, peer-reviewed and endorsed by other organizations of a similar size, is very important to us.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Chief Technology Officer at Oakland Unified School District
Video Review
Vendor
We like having four built-in nodes. We lost one, but our load kept going. In future releases, I want to be able to bring up appliances in different data centers with smooth replication.

What is most valuable?

For us, EVO:RAIL is valuable because I have a small IT staff, and we were able to deploy it easily and quickly to meet our needs. Manageability for EVO:RAIL has been good. You hear the saying, "a single pane of glass." We like that, being able to see everything with one interface.

How has it helped my organization?

The benefits of EVO:RAIL are the simple configuration and the scalability: as we add VDI users, we just plan to add more EVO:RAIL appliances.

The availability of EVO:RAIL has been excellent. We like having the four nodes built-in. We actually lost one, and it just kept on running and kept our load going.

What needs improvement?

In future releases of EVO:RAIL, I want to be able to bring up appliances in different data centers and have them replicate across smoothly. One thing that we want to make sure of is that we have redundancy that is easy to manage, so we want to be able to put EVO:RAIL boxes in different places and have them easily replicate to each other.

What do I think about the scalability of the solution?

Scalability-wise, what we like is just adding new appliances. We have about 3,000 users, and as they adopt the VDI solution, we're just planning to keep adding boxes.

How are customer service and technical support?

Technical support for EVO:RAIL has been awesome. We're working with Dell, and they've been excellent, and when we've needed to bring in VMware, we've worked with a VMware systems engineer. We were in the Early Adopter Program, so we had access to a VMware employee, which has been excellent.

Which solution did I use previously and why did I switch?

Before EVO:RAIL, we had an old solution that had a standalone rack for storage. We had another standalone rack for servers, different vendors, and then a different vendor for top of rack networking, so it was pretty complicated. When we evaluated EVO:RAIL, we had certain criteria that we were following:

  1. Ease of use
  2. An integrated platform
  3. Honestly, price

How was the initial setup?

Our implementation went smoothly. We were missing a few cables that we had to order, but other than that, we were able to get up quickly and launch on time.

What's my experience with pricing, setup cost, and licensing?

The cost-benefit has been awesome for us. We actually got a good price, and being able to add additional boxes at a known cost has been a great feature for us.

Which other solutions did I evaluate?

We initially chose EVO:RAIL because of my small IT staff. We wanted to be able to standardize on one vendor and have that single point of support, and we wanted to have one appliance that covered the full stack of compute, storage, and networking, so our guys could learn that and just move forward.

When we were making our selection of a hyper-converged platform, we looked at EVO:RAIL because we're a long-time VMware customer, and we also looked at Nutanix and SimpliVity. After looking at all three solutions, we chose EVO:RAIL because of our relationship with VMware and its long-term value: we know that VMware is there for the long term, and we believe the EVO:RAIL roadmap is really going to hit our needs.


What other advice do I have?

I would rate EVO:RAIL, on a scale of one to 10, a nine, to be honest with you. I still want to see some solutions in the future to help with replicating between data centers, but other than that, we're super happy with it.

I work in the education sector and in K-12 in particular. For us, budget is really a big issue, and maybe not so much for hardware, but for people. So, for us, I recommend EVO:RAIL because it is a great solution if you have a small IT staff, again, to learn one platform that can hold your whole virtual infrastructure.

When we're reviewing enterprise technology solutions, it's really important for us to do a little research, so we basically go online. I'm actually a subscriber to Gartner, so we use that. That's a good solution. They have a peer review section. We also go on the internet. We are members of certain groups of peers for K-12 CIOs. We have a newsgroup that we share with, so running solutions by people and finding out what other people have done is really important. Oftentimes, it's a lot like reviewing a restaurant online. You want to see what other people think of the solution, so that's what we do.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
ITCS user
IT Director with 51-200 employees
Vendor
Easy to deploy, configure and manage. The cost is potentially limiting.

As written in the previous post, there are different approaches to implementing an SDDC, and EVO:RAIL represents VMware's way of implementing a Hyper-Converged Infrastructure quickly and simply.

EVO represents an entirely new family of Hyper-Converged Infrastructure offerings from VMware, and EVO:RAIL represents the first product in this family. It was announced at VMworld 2014 (with a first list of EVO partners) and will be available during the second half of 2014.

Note that this product will be available only through qualified EVO partners (similar in concept to how vSphere can be sold through a pre-installed OEM channel). But in this case it will be available ONLY this way: if you want to play with EVO:RAIL, you must buy an entire solution or use the Hands-on Lab. Announced partners include Dell, EMC, Fujitsu, Inspur, NetOne and SuperMicro.

From the technical point of view, the EVO:RAIL "building block" is a 2U system composed of a 4-node unit (essentially microservers), where each node is an independent physical server within the 2U enclosure (this is what makes microservers different, for example, from blades).

The form factor (initially this will be the only one) has been chosen to simplify not only the decision (no choices, just add more nodes) but also the deployment and the efficiency.

Each of the four nodes in an EVO:RAIL appliance has, at a minimum (a quick storage tally follows the list):

  • Two Intel E5-2620 v2 six-core CPUs
  • 192GB of memory
  • Internal drive bays for the entire appliance – up to 24 hot-plug 2.5" drives
  • One SLC SATADOM or SAS HDD as the ESXi™ boot device
  • Three SAS 10K RPM 1.2TB HDDs for the VMware Virtual SAN™ datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One Virtual SAN-certified pass-through disk controller
  • Two 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for remote (out-of-band) management
  • One PCI-E expansion slot
  • Dual PSUs (rated around 1600W)
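As a quick tally, the appliance-level Virtual SAN figures quoted elsewhere fall straight out of these per-node minimums. The sketch below uses only the numbers from the list (raw capacity; usable space is lower once Virtual SAN replicas and metadata are accounted for):

```python
# Raw storage per appliance, derived from the per-node minimums listed above.
nodes = 4
hdd_gb = nodes * 3 * 1200   # three 1.2TB 10K SAS disks per node
ssd_gb = nodes * 1 * 400    # one 400GB MLC SSD per node

print(f"{hdd_gb / 1000} TB raw Virtual SAN capacity")  # 14.4 TB per appliance
print(f"{ssd_gb / 1000} TB of read/write cache")       # 1.6 TB per appliance
```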

Each EVO partner could make something different, but those will be the minimum requirements, and the form factor is currently fixed (although I suppose there will be other form factors, for example for VDI environments where GPUs are needed).

It will be possible to scale out up to four Hyper-Converged Infrastructure Appliances (HCIA), for a maximum of 16 nodes in a cluster (considering the vSphere limit, it should be possible to have more in a future release).

The top-of-rack (ToR) switch is not included in the EVO:RAIL requirements, but of course it will be needed (possibly with good redundancy), so some EVO partners will probably build a complete offering.

Note that storage will be provided by Virtual SAN, as could be expected in a Hyper-Converged Infrastructure.

But the most interesting aspect of EVO:RAIL is how easy it is to deploy, configure, and manage:

  • Rapid configuration in minutes
  • Simple management with a pure HTML5 interface
  • Easy non-disruptive upgrades
  • Automatically scales out

The installation is simple (and fast), based on a guided and automatic deployment mode both for ESXi (note that this is something new, not the AutoDeploy system) and for the virtual appliances (like vCenter Server).

Management will also be simple, through the new HTML5 interface (compatible with any browser, without any plugin, and with any device!). Note that it is just a "wrapper" around the vSphere API, so you can choose to use the new interface or still use the vSphere Web Client to manage your environment.
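Because the new interface is only a wrapper, any standard vSphere SDK keeps working against the appliance's embedded vCenter. Here is a minimal sketch with the pyVmomi SDK (the hostname and credentials are placeholders, and certificate verification is disabled purely for illustration):

```python
# Sketch: list the appliance's four nodes through the plain vSphere API,
# independently of the EVO:RAIL HTML5 interface. Hostname and credentials
# are placeholders; install pyVmomi first (pip install pyvmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="evorail-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:  # the four nodes of a single appliance
        print(host.name, host.runtime.connectionState)
    view.DestroyView()
finally:
    Disconnect(si)
```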

The new interface will be simpler and faster for common tasks and could also become something more (for example, a self-provisioning and/or multi-tenant UI).

For sure this could be awesome for customers that need to build new infrastructure, and potentially (as seen in several posts) it could be valuable for SMBs and the mid-size market. The only limit that I see is potentially the cost: the vSphere edition will be Enterprise Plus, whose cost is not very SMB-friendly! But we will have to wait to see the first EVO:RAIL appliance proposals.

Is it all perfect? Probably not, but it seems really promising. It's just version 1.0, and while the UI is fast and responsive (it seems a lot of experience was derived from vCloud Hybrid Service), some features are missing in this version: for example, the hardware management could be better (the hardware lifecycle is separate, and other tools are needed, e.g. for firmware upgrades), and EVO:RAIL lacks an API and probably also plug-ins… but a lot of new features are already planned for the next release.

Will this solution kill other converged or hyper-converged infrastructure? The situation will certainly be a strange one: most of the EVO partners have existing converged or hyper-converged "building block" solutions (for example, Dell has vStart but also the Nutanix alliance), and it will be interesting to see whether those solutions remain (for example, for other hypervisors or for OpenStack implementations) or are dropped.

And will it change the role of system admins (or virtualization admins)? Maybe, but to be honest, the existing "block-based" solutions are already reducing the deployment and installation effort, and existing management/monitoring tools are reducing the operational effort. Good architects are still needed; skills may change, but integration, automation, and organizational capabilities will still be required.

About the name: it seems confirmed that EVO stands for 'Evolutionary' and RAIL simply represents the 'rail mount' attached to the 2U/4-node server platform that allows it to slide easily into a datacenter rack. Previously, EVO:RAIL was also known by its project names: Marvin (the Marvin droid is visible in some screenshots) or Mystic.


Note that EVO:RAIL is the first product of the EVO family, but no longer the only one: EVO:RACK (a hyper-converged infrastructure project) has been announced as a Technology Preview.


Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user315723
Systems Engineer at a media company with 1,001-5,000 employees
Vendor
We can plug it in, go to a web interface, and it automatically configures into the existing cluster.

Valuable Features

One-click setup and the expandability -- plug it in, go to a web interface, and then it automatically configures into the existing cluster. Because we have remote offices, ease of deployment is key.

Room for Improvement

They are already addressing this, but it could be better in terms of licensing and overall costs. As they expand the list of partners you can buy EVO:RAIL from, it will get better.

Stability Issues

Rock solid stability.

Scalability Issues

Scalability for EVO:RAIL seems very good; for a medium organization it offers great scaling.

Customer Service and Technical Support

Customer Service:

In general, VMware customer support is world class. Response time is really quick – you get connected to experts much faster than in other companies, like Microsoft for example.

Technical Support:

All I've seen is community support, especially from bloggers and community experts. I haven't had any direct experience with technical support.

Initial Setup

EVO:RAIL setup is super simple; when you order it, the provider sets up everything before they ship it to you.

Other Advice

Support is up there in the top five things to look at: whether you can call, whether there are online communities, and whether there is easy access to articles. I would also add whether you can quickly get through to someone who has deep knowledge of the product. On stability, the issue we have run into is with fly-by-night, brand-new startups where you can get stranded without support. You need to vet the company; they need to still be around in a few weeks to help you. Also, peer reviews are very important – invaluable. Salesmen will tell you anything, so we look at whitepapers and vendor-supplied information. Google is your friend.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user320949
Info Sys Spl Prgmng - IS at New York Presbyterian Hospital
Vendor
We can use it at a remote site in our other branches without paying for more bandwidth. We cannot, however, add more VM clusters to scale up.

Valuable Features

It's an all-in-one solution that fits a small environment.

Improvements to My Organization

Remote sites: we have several hospitals, so we can use it at a remote site in our other branches if we don't want to pay for more bandwidth.

Room for Improvement

It wasn't scalable enough for our needs; it could be more scalable in adding more VM clusters. Sometimes it didn't play nicely when you tried to bring in another cluster. We wanted to see how far we could go, and it didn't perform as we needed. Performance was lacking.

Stability Issues

Very high – we didn’t see any issues with stability during our POC.

Scalability Issues

It seemed limited to us because we are enterprise.

Customer Service and Technical Support

We went through Dell, and support was excellent. This is key when I'm selecting new vendors.

Initial Setup

It was straightforward.

Implementation Team

We used a vendor team.

Other Solutions Considered

No one does anything like VMware does.

Other Advice

Your size will determine how you should approach picking a new vendor. I know they are trying to push EVO:RAIL for big business, but I think it's more for medium business at this point.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
IT Administrator and Sr. VMware Engineer at a retailer with 501-1,000 employees
Real User
Top 5
It forms an SDDC that offers computing, networking, storage, and centralized administration to support a private or hybrid cloud for end users, testing, and development.

Originally posted in Spanish at www.rhpware.com/2014/08/introduccion-vmware-evo-rail

VMware EVO:RAIL combines computing power, networking, and storage in a hyper-converged appliance to create a simple, easy-to-deploy, one-stop solution offered by qualified VMware partners.

Simplicity transformed

EVO:RAIL lets you create a virtual machine and power it on in a matter of minutes. It also handles VM deployment and applies updates and patches non-disruptively with a single click, thus offering simplified management.

Building blocks defined by the software

EVO:RAIL is a building block that forms a software-defined datacenter (SDDC), offering computing, networking, storage, and centralized administration to support a private or hybrid cloud for end users, testing and development environments, and branch-office environments.

Reliable base

Built on VMware vSphere, vCenter Server, and VMware Virtual SAN technology, EVO:RAIL provides the first hyper-converged infrastructure appliance using 100% VMware software.

Highly resilient by design

The highly resilient design starts with four separate hosts and a distributed Virtual SAN datastore, ensuring zero application downtime during scheduled maintenance, or during disk, network, or host failures.

Infrastructure at the speed of innovation

It lets you meet accelerating business demands for solid infrastructure by simplifying design with predictable scaling and sizing, enabling a fluid purchase with a single SKU, and reducing both CapEx and OpEx.

Freedom of choice

EVO:RAIL is offered as a complete appliance consisting of hardware, software, and support from the most recognized brands, and customers can choose the option they want.

Hardware

VMware is not entering the hardware market, as the EVO:RAIL software package is available only through qualified partners; indeed, it is the partner who will provide all the hardware and software support to customers.

Appliance

Each EVO:RAIL appliance has four independent nodes with computing power, storage, and networking as mentioned above, and all these resources are redundant to eliminate single points of failure (SPOF).

Nodes

Each of the four EVO:RAIL nodes has:

  • Two six-core Intel E5-2620 v2 CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD as the boot device for the ESXi hypervisor
  • Three 1.2TB SAS 10K RPM disks for the VMware Virtual SAN datastore
  • One enterprise-grade 400GB MLC SSD for read/write cache
  • One Virtual SAN-certified pass-through disk controller
  • Two 10GbE NICs (configured for either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for remote management

Reliability and fault tolerance

Each EVO:RAIL appliance has the following reliability and fault-tolerance features:

  • Four ESXi hosts in a single appliance, allowing it to be resilient to hardware failures and maintenance
  • Two redundant power supplies
  • Two 10GbE NIC ports per node
  • A dedicated ESXi hypervisor boot device
  • Enterprise-grade mechanical and solid-state disks

Automatic scaling

EVO:RAIL Version 1.0 can scale up to four appliances, giving a total of 16 ESXi hosts and one Virtual SAN datastore, backed by a single vCenter Server and EVO:RAIL instance. EVO:RAIL handles the deployment, configuration, and management, so growth in compute capacity and the Virtual SAN datastore is automatic. New appliances are discovered automatically and added to the EVO:RAIL cluster in the simplest way, with just a few clicks.

Software

EVO:RAIL provides the first hyper-converged appliance 100% powered by VMware software and its product suite, offered, as already mentioned above, through qualified VMware business partners. This software package comes fully installed and loaded onto the partner's hardware.

The bundle includes:

  • Deployment, configuration, and administration of EVO:RAIL
  • VMware vSphere Enterprise Plus, including ESXi
  • Virtual SAN
  • vCenter Server
  • vCenter Log Insight

EVO:RAIL is optimized for new VMware users as well as for experienced administrators. Minimal experience is required for installation and deployment, as well as for configuration and management, enabling use in environments where IT staff is limited or almost nil. Because VMware products are at its core, EVO:RAIL administrators can apply all their VMware knowledge, best practices, and processes.

EVO:RAIL uses the same vCenter Server database; therefore, configuration and management changes carried out in EVO:RAIL are automatically reflected in vCenter Server, and vice versa.

Computing, networking, storage, and management

EVO:RAIL - Compute

VM density

EVO:RAIL is sized to run 100 average-sized, general-purpose VMs in a datacenter. The actual capacity varies according to the size and workload of the virtual machines. There are no restrictions on application types; every application that runs on vSphere is supported.

General-purpose VM profile: 2 vCPUs, 4GB vMem, 60GB vDisk, with redundancy

EVO:RAIL is optimized for VMware Horizon View, with settings that allow up to 250 View virtual machines on a single EVO:RAIL appliance. Of course, the actual capacity varies according to the size and workload of the remote desktops.

Horizon View desktop profile: 2 vCPUs, 2GB vMem, 32GB vDisk for linked clones
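As a rough sanity check, the general-purpose figure is consistent with the appliance's memory alone. The sketch below uses only the numbers quoted above, plus one stated assumption; real sizing must also account for CPU, storage, and Virtual SAN overhead:

```python
# Back-of-envelope check of the "100 general-purpose VMs per appliance" figure.
ram_per_node_gb = 192            # per-node memory from the hardware list
nodes_per_appliance = 4
vm_ram_gb = 4                    # general-purpose profile: 2 vCPUs / 4GB vMem

# Assumption: keep one node's worth of RAM free as failover headroom.
usable_gb = ram_per_node_gb * (nodes_per_appliance - 1)
print(usable_gb // vm_ram_gb)    # 144 -> comfortably above the quoted 100 VMs
```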

EVO:RAIL - Network

Connections

Each EVO:RAIL node has two 10GbE network ports. Each port must be connected to a 10GbE top-of-rack (TOR) switch that supports IPv4 and IPv6 multicast.

Remote management is available on each node through a 1GbE IPMI port that connects to the management network. In some configurations, these ports may be covered and disabled.

Traffic

EVO:RAIL supports four types of traffic: Management, vSphere vMotion, Virtual SAN, and Virtual Machine.

Separating the vSphere vMotion, Virtual SAN, and VM traffic onto dedicated VLANs is recommended. EVO:RAIL Version 1.0 does not place management traffic on a VLAN.

The TOR switches must support multicast for IPv4 and IPv6. EVO:RAIL's automatic scaling feature uses IPv6 link-local addressing, so full IPv6 routing across the network is not required.
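The discovery behind automatic scaling leans on a standard IPv6 property: every interface derives a link-local address directly from its MAC, so appliances can find each other with no prior IP configuration. Here is a sketch of that generic EUI-64 derivation (standard IPv6 behaviour, not EVO:RAIL code):

```python
# Standard EUI-64 derivation of an IPv6 link-local address from a MAC address.
def mac_to_link_local(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)              # left uncompressed for readability

print(mac_to_link_local("00:1b:21:3c:4d:5e"))  # fe80::021b:21ff:fe3c:4d5e
```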

VLANs are not required when customizing the EVO:RAIL configuration, but they are highly recommended. The Just Go! option assumes that the VLANs are already configured.

EVO:RAIL - Storage

EVO:RAIL creates a single Virtual SAN datastore from all the available disks in a cluster, using the SSDs as read/write cache. Total storage capacity is 16TB per EVO:RAIL appliance:

  • 14.4TB of HDD capacity (approximately 13TB usable) per appliance, hosted on the Virtual SAN datastore for VMs
  • 1.6TB of SSD capacity per appliance for read/write cache
  • Size of the pre-provisioned management VM: 30GB

EVO:RAIL - Administration

EVO:RAIL enables deployment, configuration, and administration through an intuitive new HTML5-based interface. In turn, EVO:RAIL provides non-disruptive, zero-downtime software upgrades and automatic scaling of EVO:RAIL appliances.

Deployment, configuration and management

Deploying EVO:RAIL is very simple and consists of only four steps:

Step 1: Decide on the EVO:RAIL network topology (VLANs and top-of-rack switches). Detailed instructions are given in the User Guide included with EVO:RAIL.

Step 2: Rack and cable: connect the 10GbE adapters on EVO:RAIL to the 10GbE TOR switch.

Step 3: Power on EVO:RAIL.

Step 4: Connect a laptop or client PC to the TOR switch and configure its network address to communicate with EVO:RAIL. Then open the IP address assigned to EVO:RAIL in a browser, using the following format: https://192.168.0.100:7443
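Before opening the browser, it can be handy to confirm that the configuration UI is actually listening. Here is a small sketch (the address is the example above; 7443 is the port from the URL format):

```python
# Pre-flight check that the EVO:RAIL configuration UI port is reachable.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if port_open("192.168.0.100", 7443):
    print("UI reachable - open https://192.168.0.100:7443")
else:
    print("Port 7443 unreachable - check cabling, VLANs and the client's IP settings")
```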

EVO:RAIL configuration

EVO:RAIL configuration has three options:

  1. Just Go!
  2. Customize Me!
  3. Upload Configuration File

With Just Go!, EVO:RAIL configures a default set of IP addresses, allowing quick setup. You only need to configure the TOR switch, click the 'Just Go!' button, and create two passwords.

With ‘Customize Me!’ customers can specify the following parameters:

  • Names for the ESXi hosts and vCenter Server
  • Networking (IP ranges and/or VLAN IDs) for ESXi, Virtual SAN, vSphere vMotion, vCenter Server, and VM networks
  • Passwords for the ESXi hosts and vCenter Server, with optional Active Directory authentication
  • Global settings: time zone, NTP servers, DNS, proxy servers, plus options to send logs to vCenter Log Insight or to your own or third-party log servers

With Upload Configuration File, you can select and load the options from a configuration file:

EVO:RAIL verifies the configuration data and builds the appliance: it implements the data services and creates the new set of ESXi hosts and the vCenter Server. The final screen contains a link to the administration interface.
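To make the upload option concrete, here is a purely illustrative configuration mirroring the parameter categories from the 'Customize Me!' list above. The field names are invented for illustration only; the actual file format is defined in the EVO:RAIL User Guide:

```python
# Illustrative only: invented field names, mirroring the parameter categories
# above (hostnames, per-traffic networks/VLANs, passwords, global settings).
import json

config = {
    "hostnames": {"esxi_prefix": "evo-node", "vcenter": "evo-vcenter"},
    "networks": {
        "esxi_mgmt":  {"ip_range": "192.168.10.1-192.168.10.4", "vlan": 10},
        "vsan":       {"ip_range": "192.168.20.1-192.168.20.4", "vlan": 20},
        "vmotion":    {"ip_range": "192.168.30.1-192.168.30.4", "vlan": 30},
        "vm_network": {"vlan": 110},
    },
    "passwords": {"esxi": "********", "vcenter": "********"},
    "global": {"timezone": "UTC", "ntp": ["0.pool.ntp.org"],
               "dns": ["192.168.10.53"], "proxy": None},
}

with open("evorail-config.json", "w") as f:
    json.dump(config, f, indent=2)
```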

EVO:RAIL Management

EVO:RAIL Management consists of a dashboard that lets you view all VMs, and sort and filter them according to the criteria you select. Users can create virtual machines with just a few clicks, selecting the operating system, size, VLAN, and security options. EVO:RAIL simplifies VM sizing by offering a one-click option that lets you choose among small, medium, and large configurations.

EVO:RAIL Management modernizes infrastructure management with live monitoring of CPU, memory, and storage health and of virtual machine usage, from complete EVO:RAIL clusters down to individual appliances and individual nodes. EVO:RAIL Management handles the collection of logs and log files, licenses, and settings such as language internationalization. It also makes scaling compute, network, and storage simple and easy: new appliances can be added transparently and dynamically to any cluster. EVO:RAIL Management also lets users check for available updates or patches to vCenter, ESXi, and EVO:RAIL, and download and apply them simply and non-disruptively, without service interruption.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
IT Director with 51-200 employees
Vendor
The scale of the hyper-converged infrastructure appliances has now doubled, but it is still on vSphere 5.5, not vSphere 6.

VMware EVO:RAIL represents the first Hyper-Converged Infrastructure offering from VMware, announced at VMworld 2014 and available from the second half of 2014, and based (on the software side) on vSphere 5.5 and VSAN 1.0.

Now there is a new software release of VMware EVO:RAIL, which also includes support for the VMware EVO:RAIL vSphere Loyalty Program.

One of the most notable improvements is that the scale of the hyper-converged infrastructure appliances has now doubled: from the initial four appliances in a cluster and 16 nodes overall to eight appliances in a cluster and 32 nodes overall. Appliances running the updated EVO:RAIL software will now support approximately 800 general-purpose virtual machines or 2,000 virtual desktop virtual machines per cluster.

This new limit is aligned with the vSphere 5.5 limits (where the maximum number of nodes per cluster is 32).
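Those figures are internally consistent, as a quick arithmetic check shows (every input is a number quoted above):

```python
# Consistency check of the published scaling figures.
nodes_per_appliance = 4
appliances_per_cluster = 8            # doubled from 4 in the first release
assert nodes_per_appliance * appliances_per_cluster == 32   # vSphere 5.5 cluster maximum

gp_vms_per_appliance = 100            # general-purpose sizing per appliance
vdi_vms_per_appliance = 250           # Horizon View sizing per appliance
print(gp_vms_per_appliance * appliances_per_cluster)    # 800 general-purpose VMs per cluster
print(vdi_vms_per_appliance * appliances_per_cluster)   # 2,000 VDI VMs per cluster
```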

And there are some enhancements in management: the hardware replacement process for components in the appliances has been automated. The management interface in the new release of the EVO:RAIL software now automates that process for HDDs, SSDs, and 10GbE NICs.

But note that EVO:RAIL still remains on vSphere 5.5… and not on vSphere 6.0! It will be really interesting to see how the major release upgrade is handled. It could be a good sign of when VMware itself considers the new product really ready for enterprise production environments (see also the 'upgrade or not upgrade' post).

Dell has also two new updates to their line of Dell Engineered Solutions for VMware EVO:RAIL:

  • They are introducing Dell Engineered Solutions for VMware EVO:RAIL Horizon Edition, a hyper-converged end-to-end virtual desktop infrastructure appliance for VMware EVO:RAIL.
  • They are introducing an updated Dell Engineered Solutions for VMware EVO:RAIL, infrastructure edition 1.2, which includes support for the VMware EVO:RAIL vSphere Loyalty Program. You can read all of the news from Dell here.


Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user234723
Senior Cloud Engineer at a comms service provider with 51-200 employees
Vendor
Its simplicity is its strongest selling point; however, the lack of flexibility is probably the biggest constraint for customers.

Summary: At VMworld in August VMware announced their new hyperconverged offering, EVO:RAIL. I found myself discussing this during TechFieldDay Extra at VMworld Barcelona and this post details my thoughts having spent a bit longer investigating. I’m not the first to write about EVO:RAIL so I’ll quickly recap the basics before giving my thoughts and some things to bear in mind if you’re considering EVO:RAIL.

Briefly, what is EVO:RAIL?

There's no point in reinventing the wheel so I'll simply direct you to Julian Wood's excellent series: EVO:RAIL introduction, EVO:RAIL hardware dissected, and EVO:RAIL management.

As of October 2014 there are now eight qualified OEM partners although beyond that list there’s very little actual information available yet. Most of the vendors have an information page but products aren’t actually shipping yet and it’s difficult to know how they’ll differentiate and compete with each other. Several partners already have their own offerings in the converged infrastructure space so it’ll be interesting to see how well EVO:RAIL fits into their overall product portfolios and how motivated they are to sell it (good thoughts on that for EMC, HP, and Dell). Unlike their own solutions, the form factor and hardware specifications are largely fixed so it’s going to be management additions (ILO cards, integration with management suites like HP OneView etc), service, and support that vary. For partners without an existing converged offering this is a great opportunity to easily and quickly compete in a growing market segment.

Things to note about EVO:RAIL

In my ‘introduction to converged infrastructure’ post last year I listed a set of considerations – let’s run through them from an EVO:RAIL perspective;

Management. The hyperconverged nature should mean improved management as VMware (and their partners) have done the heavy lifting of integration, licencing, performance tuning etc. EVO:RAIL also offers a lightweight GUI for those that value simplicity while also offering the usual vSphere Web Client and VMware APIs for those that want to use them. This is however a converged appliance and that comes with some limitations – you can manage it using the new HCIA interface or the Web Client, but it comes with its own vCSA instance so you can't add it to an existing vCenter without losing support. It won't use VUM for patching (although it does promise non-disruptive upgrades), though you can add the vCSA to an existing vCOps instance.

Simplicity. This is the strongest selling point in my opinion – EVO:RAIL is a turnkey deployment of familiar VMware technology. EVO:RAIL handles the deployment, configuration, and management and you can grow the compute and storage automatically as additional appliances are discovered and added. As the technology itself isn’t new there’s not much for support staff to learn, plus there’s ‘one throat to choke’ for both hardware and software (the OEM partner). Some people have pointed out that it doesn’t even use a distributed switch, despite being licenced with Ent+. Apparently the choice of a standard vSwitch was because of a potential performance issue with vDS and VSAN, which eventually turned out not to be an issue. Simplicity was also a key consideration and VMware felt there was no need for a vDS at this scale. I imagine we’ll see a vDS in the next iteration.

Flexibility. This is probably the biggest constraint for customers – it’s a ‘fixed’ appliance and there’s limited scope for change. The hardware and software you get with EVO:RAIL is fixed (4 nodes, 192GB RAM per node, no NSX etc) so even though you have a choice of who to buy it from, what you buy is largely the same regardless of who you choose. There is currently only one model so you have to scale linearly – you can’t buy a storage heavy node or a compute heavy node for example. EVO RAIL is sold 4 nodes at a time and the SMB end of the market may find it hard to finance that kind of CAPEX. As mentioned earlier the partner is responsible for updates (firmware and patching) – you won’t be able to upgrade to the new version of vSphere until they’ve validated and released it for example. Likewise you can’t plug in that nice EMC VNX you have lying around to provide extra storage – you have to use the provided VSAN. Flexibility vs simplicity is always a tradeoff!

Interoperability/integration. In theory this is a big plus for EVO:RAIL as it’s the usual VMware components which have probably the best third party integration in the market (I’m assuming you can use full API access). Another couple of notable integration requirements;

  • 10GbE networking (ToR switch) is a requirement as it's used to connect the four servers inside the 2U form factor, given the lack of a backplane. You'll therefore need 8 ports per appliance. I spoke to VMware engineers at VMworld on this and was told VMware looked for a 2U form factor where they could avoid this but couldn't. Many SMBs have not adopted 10GbE yet so it's a potential stumbling block – of course partners may use this opportunity to bundle 10GbE networking, which would be a good way to differentiate their solution.
  • IPv6 is required for the discovery feature used when more EVO:RAIL appliances are added. This discovery process is proprietary to VMware though it operates much like Apple's Bonjour, and apparently IPv6 is the only protocol which works (it guarantees a link-local address).

Risk. This is always a consideration when adopting new technology but being a VMware-backed solution using familiar components will go a considerable way to reducing concern. VSAN is a v1.0 product, as is HCIA, although as the latter is simply a thin wrapper around existing, mature, and best-of-breed components it's probably safe to say VSAN maturity is the only concern for some people (given initial teething issues). Duncan Epping has a blogpost about this very subject but his summary is 'it's fully supported' so make sure you know your own comfort level when adopting new technology.

Cost. A choice of partners is great as it'll allow customers to leverage existing relationships. It's worth pointing out that you buy from the partner, so any existing licencing agreements (site licences etc) with VMware probably won't be applicable. At VMworld I was told VMware have had several customers enquire about large orders (in the hundreds) so it'll be interesting to see how price affects adoption. I don't think this is really targeted at service providers and I've no idea how pricing would work for them. Having spent considerable time compiling orders, having a single SKU for ordering is very welcome!

Pricing

Talking of pricing, let's have a look at ballpark costs. I've heard, though not been officially quoted, a cost of around €150,000 per 4-node block (or £120,000 for us Brits). This might seem high but bear in mind what you need (a quick tally of the figures follows the list below);

UPDATE: 30th Nov – I realised I’d priced in four Supermicro chassis, rather than one, so I’ve updated the pricing.

  • Hardware. Let's say approx £11k per node, so £45k for four nodes, i.e. one appliance (this is approx – don't quote!);
    • Supermicro FatTwin chassis (inc 10GB NICs) £3500 (one chassis for all four nodes)
    • 2 x E2620 CPUs £400 each
    • 12 x 16GB DIMMs (192GB RAM) = £2000
    • 400GB Enterprise SSD = £4500 (yep!)
    • Three 1.2TB 10k rpm SAS disks = £600 x 3 = £1800
    • …plus power supplies, sundries
  • Software. List pricing is approx £11k per node plus vCenter, so a shade under £50k
    • vCenter (vCSA) 5.5 = £2000
    • vSphere 5.5 = £2750 per socket = £5500 per node
    • VSAN v1 = £1500 per socket = £3000 per node
    • Log Insight = £1500 per socket = £3000 per node
  • Support and maintenance for 3 years on both hardware and software – approx £15k
  • Total cost: £110,000
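Totalling those ballpark figures (same rough numbers as the list, so treat the output as indicative only):

```python
# Indicative tally of the ballpark component prices listed above (GBP).
nodes = 4
hardware_per_node = 2 * 400 + 2000 + 4500 + 3 * 600    # CPUs, RAM, SSD, SAS disks
hardware = 3500 + nodes * hardware_per_node            # one chassis covers all four nodes
software = 2000 + nodes * (5500 + 3000 + 3000)         # vCenter + per-node vSphere/VSAN/Log Insight
support = 15000                                        # 3 years, hardware and software

print(hardware)                       # 39,900 -> roughly the £45k quoted once PSUs/sundries are added
print(software)                       # 48,000 -> "a shade under £50k"
print(hardware + software + support)  # ~102,900, in the region of the £110,000 quoted
```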

Once pricing is announced by the partners we’ll see just how much of a premium is being charged for the simplicity, automation, and integration that’s baked in to the EVO:RAIL appliance. There are of course multitudes of pricing options – you could just buy four commodity servers and an entry level SAN but there’s not much value in comparing apples and oranges (and I only have so much time to spend on this blogpost).

UPDATE 1st Dec 2014 – Howard Marks has done a more detailed price breakdown where he also compares a solution using Tegile storage. Christian Mohn also poses a question and potential ‘gotcha’ about the licencing – worth a read.

Competition

VMware aren't the first to offer a converged appliance – in fact they're several years behind. The likes of VCE's vBlock were first back in 2010, and that was followed by hyperconverged vendors like Nutanix and Simplivity. As John Troyer mentioned on vSoup's VMworld podcast, Scale Computing use KVM to offer an EVO:RAIL competitor at cheaper prices (and have done for a few years). Looking at Gartner's magic quadrant for converged infra it's a pretty crowded market.

Microsoft recently announced their Cloud Platform Services (Cloud Pro thoughts on it) which was developed with Dell (who are obviously keeping their converged options wide open as they’ve also partnered with Nutanix and VMware on EVO:RAIL). While more similar to the upcoming EVO:RACK it’s another validation of the direction customers are expected to take.

Final thoughts

From a market perspective I think VMware’s entry into the hyperconverged marketplace is both a big deal and a non-event. It’s big news because it will increase adoption of hyperconverged infrastructure, particularly in the SMB space, through increased awareness and because EVO:RAIL is backed by large tier 1 vendors. It’s a non-event in that EVO:RAIL doesn’t offer anything new other than form factor – it’s standard VMware technologies and you could already get similar (some would say superior) products from the likes of Nutanix, Simplivity and others.

Personally I'm optimistic and positive about EVO:RAIL. Reading the interview with Dave Shanley it's impressive how much was achieved in 8 months by 6 engineers (backed by a large company, but nonetheless). If VMware can address the current limitations around management, integration, and flexibility, while maintaining the simplicity, it seems likely to be a winner.

Pricing for EVO:RAIL customers will be key although not all of the chosen partners are likely to compete on price.

UPDATE: April 2015 – a recent article from Business Insider implies that pricing is proving a considerable barrier to adoption for EVO:RAIL.

Further Reading

Good post by Marcel VanDeBerg and another from Victoriouss

Mike Laverick has a lot of useful material on his site, but as the VMware Evangelist for EVO:RAIL you'd expect that, right? The guys over at the vSoup Podcast also had a chat with Mike.

A comparison of EVO:RAIL and Nutanix (from a Nutanix employee)

Good thoughts over at The Virtualization Practice

VMworld session SDDC1337 – Technical Deep Dive on EVO:RAIL (requires VMworld subscription)

Microsoft’s Cloud Platform System at Network World – good read by Brandon Butler

UPDATED 9th Dec – Some detail about HP’s offerings

UPDATED 16th Dec – EVO:RAIL differentiation between vendors

UPDATED 1st May – EVO:RAIL adoption slow for customers

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Senior Manager, Infrastructure and Operations at an agriculture company with 1,001-5,000 employees
Vendor
Each node has dual Intel Ivy Bridge processors and the appliance has 14.4TB of raw storage. I have also seen synchronous replication work great over distances.

An important announcement that came out today was EMC's launch of its hyper-converged EVO:RAIL appliance, promising to redefine simplicity.

With this launch, EMC has moved further forward with its Converged Infrastructure positioning, using differentiating factors like EMC value-add software, global enterprise data protection, management, and support. It's not that they didn't have converged infrastructure in the past – it used to be the vBlock.

But the hyper-converged infrastructure appliance is a lot more software-defined.

With vBlock, the compute and storage were pre-integrated. It offered stability/predictability for specific applications and hardware environments through a reference architecture. It was a proven blueprint through the VSPEX architecture.

Hyper-converged is really for smaller footprints and the IT generalist. From what I saw in a demo recently, it is all about simplicity. I have put together some of my own notes from the demo, and it looks like a great product with a comprehensive feature set right at launch.

The EMC VSPEX Blue is powered by EMC hardware and VMware EVO:RAIL software. Inside the one appliance there are four servers. The architecture has been kept this way to offer agility, scalability, and efficient support.

Hardware –

One appliance that has 4 independent nodes inside it.

Each node has dual Intel Ivy Bridge processors, and the appliance provides 14.4TB of raw storage – including both SSD and HDD

Two models of the appliance are being released – Standard and Performance.

  • The Performance model is for VDI-type workloads.
  • The difference between the Standard and Performance models is memory.

VSPEX Blue, as it is called, has four key components:

Software

for hardware monitoring, integration with EMC Dial Home, and value-added EMC software.

EVO:RAIL Engine

automates cluster deployment and configuration, with a clean interface and pre-sized VM templates with single-click policies.

Resilient Cluster architecture

As required by EVO:RAIL, VSAN provides a distributed datastore that is consistent and fault-tolerant. vMotion provides system availability during maintenance, and DRS balances the workload.

The final component is the software-defined datacenter (SDDC) building block

– it combines compute, storage, network, and management resources into a single software stack with vSphere and VSAN.

In a recent demo that I attended, the dashboard looked clean. It had ESRS embedded in the interface, a management framework was in place for adding EMC value-add software, and the orchestration was clearly defined.

One differentiator that I saw, and would like to confirm later, was that the EMC VSPEX BLUE offers information not available in EVO:RAIL. It also mapped alerts to a graphical representation of the hardware layout, which helps with part identification for field services. The appliance was integrated with vRealize Log Insight, so detailed performance metrics are available.

EMC customers like myself, who have experience with the ESRS piece, like the fact that a remote engineer can dial in to acknowledge a call home and fix hardware issues, or dispatch a CE or parts to fix problems. This reduces a lot of operational overhead in terms of troubleshooting and resource availability.

On the dashboard there is an area – Installed Apps and Market – that allows you either to display installed software or to get access to value-add software from EMC, like EMC RecoverPoint for VMs, VMware vSphere Data Protection Advanced (VDPA), and so on.

Looking to the future, the VSPEX BLUE appliance includes an EMC CloudArray Virtual Edition license, entitling you to 1TB of cache and 10TB of cloud storage, with support, for free. Companies that want a hybrid model and to store some data in the cloud for cost benefit or resiliency will definitely find this very useful. Encryption in flight and at rest with secure local key management is available to address security concerns. For network bandwidth issues, throttling and data compression are built in. Finally, there is NAS support providing CIFS and NFS file services.

There is also no requirement that the virtual appliance be installed on each ESXi node with the protected VMs.

As an existing RecoverPoint customer of EMC, I have seen synchronous replication work great over distances. WAN optimization helps tremendously and offers built-in deduplication and compression functionality. The replication robustness is suited for environments with up to 300ms of latency, so that addresses a lot of environments and geographical distances.

I don't know much about pricing, but the demo seemed really good and the pricing was mentioned as highly competitive. So you may want to check with the local EMC sales team to get a budgetary quote. I personally like to understand ballpark pricing for products from different vendors so that, while architecting the environment, there is at least some understanding of whether a solution will fit within the planned cost.

Finally, a note on licensing: VSPEX Blue is available as a single SKU, which makes for easy ordering. The appliance software includes the VMware EVO:RAIL software bundle, management, ESRS, RecoverPoint for VMs (15 VMs), and the cloud extension.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user234735
Technology Consultant, ASEAN at a tech services company with 501-1,000 employees
Consultant
I’m impressed with the interface - simple to use.

The same VMware EVO:RAIL vs Nutanix questions keep popping up over and over again. I figured I would do a quick VMware EVO:RAIL overview post so that I can compare it with Nutanix or SimpliVity.

What is EVO:RAIL?

EVO represents a new family of ‘Evolutionary’ Hyper-Converged Infrastructure offerings from VMware. RAIL represents the first product within the EVO family that will ship during the second half of 2014. EVO:RAIL is the next evolution of infrastructure building blocks for the SDDC. It delivers compute, storage and networking in a 2U / 4 node package with an intuitive interface that allows for full configuration within 15 minutes.

Minimum number of EVO:RAIL hosts?

The minimum number is four hosts. Each EVO:RAIL appliance has four independent nodes with dedicated compute, network, and storage resources and dual, redundant power supplies.

Each of the four EVO:RAIL nodes has (at a minimum):

  • Two Intel E5-2620 v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD as the ESXi™ boot device
  • Three SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN™ datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One Virtual SAN-certified pass-through disk controller
  • Two 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for remote (out-of-band) management

What VMware software is included with an EVO:RAIL appliance?

  • vSphere Enterprise Plus
  • vCenter Server
  • Virtual SAN
  • Log Insight
  • Support and Maintenance for 3 years

Total Storage Capacity per Appliance?

  • 14.4TB HDD capacity (approximately 13TB usable) per appliance, allocated to the Virtual SAN datastore for virtual machines
  • 1.6TB SSD capacity per appliance for read/write cache
  • Size of pre-provisioned management VM: 30GB
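
As a sanity check, these appliance totals follow directly from the per-node hardware list above; a minimal sketch of the arithmetic (raw capacity only, before Virtual SAN overhead) looks like this:

```python
# Rough capacity arithmetic for one EVO:RAIL appliance (4 nodes),
# based on the per-node spec list above. Raw capacity only; the
# "approximately 13TB usable" figure reflects Virtual SAN overhead.
NODES_PER_APPLIANCE = 4
HDDS_PER_NODE = 3          # SAS 10K RPM drives for the Virtual SAN datastore
HDD_TB = 1.2
SSDS_PER_NODE = 1          # MLC enterprise-grade SSD for read/write cache
SSD_TB = 0.4

hdd_capacity = NODES_PER_APPLIANCE * HDDS_PER_NODE * HDD_TB   # 14.4 TB
ssd_capacity = NODES_PER_APPLIANCE * SSDS_PER_NODE * SSD_TB   # 1.6 TB

print(f"HDD capacity per appliance: {hdd_capacity:.1f} TB")
print(f"SSD cache per appliance:   {ssd_capacity:.1f} TB")
```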

How many EVO:RAIL appliances can I scale to?

  • With the current release, EVO:RAIL scales to four appliances (16 hosts)

Who are the EVO:RAIL partners?

  • The following partners were announced at VMworld: Dell, EMC, Fujitsu, Inspur, Net One Systems, Supermicro
  • All support is provided through the OEM.

How does EVO:RAIL run?

  • EVO:RAIL runs on vCenter Server, which is powered on automatically when the appliance is started. EVO:RAIL uses the vCenter Server Appliance, and you can use the vSphere Web Client to manage VMs (a small API-access sketch follows below).
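
Because EVO:RAIL exposes a standard vCenter Server Appliance, ordinary vCenter tooling should work against it. Purely as an illustration (the hostname and credentials are placeholders, and this assumes the open-source pyvmomi library rather than anything EVO:RAIL-specific), listing the VMs might look like:

```python
# Hypothetical sketch: list VMs via the vCenter API using pyvmomi.
# Hostname and credentials are placeholders, not EVO:RAIL defaults.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # appliances often ship self-signed certs
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```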

EVO:RAIL Networks

  • Each node in EVO:RAIL has 2 x 10GbE NICs (SFP+). This means there are 8 x 10GbE NIC ports per appliance.
  • IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN.
  • EVO:RAIL supports four types of traffic: Management, vSphere vMotion®, Virtual SAN, and Virtual Machine. Traffic isolation on separate VLANs is recommended for vSphere vMotion, Virtual SAN, and VMs. EVO:RAIL Version 1.0 does not put management traffic on a VLAN (a hypothetical VLAN plan is sketched below).
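
To make the isolation recommendation concrete, here is a purely hypothetical VLAN plan of the kind the configuration wizard asks for; the VLAN IDs are invented examples, not defaults:

```python
# Hypothetical example VLAN plan for the four EVO:RAIL traffic types.
# VLAN IDs are illustrative only; management is untagged in version 1.0.
vlan_plan = {
    "Management":      None,   # EVO:RAIL 1.0 does not tag management traffic
    "vSphere vMotion": 110,    # example VLAN ID
    "Virtual SAN":     120,    # example VLAN ID
    "Virtual Machine": 130,    # example VLAN ID
}
for traffic, vlan in vlan_plan.items():
    print(f"{traffic:<16} VLAN: {'untagged' if vlan is None else vlan}")
```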

EVO:RAIL Deployment

EVO:RAIL deployment is simple, with just four steps:

  1. Decide on the EVO:RAIL network topology (VLANs and top-of-rack switch). Important instructions for your top-of-rack switch are provided in the EVO:RAIL User Guide.
  2. Rack and cable: connect the 10GbE adapters on EVO:RAIL to the 10GbE top-of-rack switch.
  3. Power on EVO:RAIL.
  4. Connect a client workstation/laptop to the top-of-rack switch and configure its network address so it can talk to EVO:RAIL. Then browse to the EVO:RAIL IP address, for example https://ipaddress:7443. (A quick reachability check is sketched below.)
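
Before opening the browser in step 4, it can help to confirm that the workstation can actually reach the configuration service. A minimal sketch, assuming the example port above and a placeholder address (substitute your own):

```python
# Minimal reachability check for the EVO:RAIL configuration UI (step 4).
# The address is a placeholder; port 7443 matches the example above.
import socket

HOST, PORT = "192.168.10.200", 7443   # substitute your EVO:RAIL IP

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"https://{HOST}:{PORT} is reachable - open it in a browser")
except OSError as exc:
    print(f"Cannot reach {HOST}:{PORT}: {exc}")
```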

The wizard asks questions about the host names, networking configuration (VLANs and IPs, etc.), passwords, and other things.

After completing the wizard, you get a snazzy little build-process indicator that shows a high-level workflow of what the engine is doing.

Once completed, you get a very happy completion screen that lets you log into EVO:RAIL’s management interface.

Once logged in, you are presented with a dashboard that contains data on the virtual machines, health of the system, configuration items, various tasks, and the ability to build more virtual machines.

The interface makes it easy to manage virtual machines. It has pre-defined virtual machine sizes (small/medium/large) and even security profiles that can be applied to the virtual machine configuration! (The sizing idea is illustrated below.)
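
The actual preset values are not documented in this review, so purely as an illustration of the idea, small/medium/large sizing could be modelled as a lookup table; the numbers below are invented for the sketch, not EVO:RAIL's real values:

```python
# Illustrative only: small/medium/large VM presets as a lookup table.
# These numbers are invented for the sketch, not EVO:RAIL's actual values.
VM_SIZES = {
    "small":  {"vcpus": 1, "memory_gb": 2, "disk_gb": 20},
    "medium": {"vcpus": 2, "memory_gb": 4, "disk_gb": 40},
    "large":  {"vcpus": 4, "memory_gb": 8, "disk_gb": 80},
}

def describe(size: str) -> str:
    spec = VM_SIZES[size]
    return (f"{size}: {spec['vcpus']} vCPU, "
            f"{spec['memory_gb']}GB RAM, {spec['disk_gb']}GB disk")

print(describe("medium"))
```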

EVO:RAIL also provides monitoring capabilities in a simple overview.

Conclusion

I’m quite impressed with the interface for EVO:RAIL; it uses HTML5 and is very simple and friendly to use. Welcome to the hyper-converged world. Next discussion: EVO:RAIL vs. Nutanix.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Solutions Architect with 51-200 employees
Vendor
It is a nice piece of technology, but it is way too expensive.

My view on EVO:RAIL has always been that it is a nice piece of technology, but it is way too expensive, with a deeply flawed licensing model (more thoughts at VMware EVO:RAIL or VSAN – which makes the most sense?). It has just been “improved” in that you can now use existing vSphere licenses, which will dramatically reduce the cost of the appliance (more details here).

Even though VMware had a great chance to really make things so much better, they have wasted the opportunity – amazingly, they are still forcing you to use Enterprise Plus, whereas Essentials Plus would be more appropriate in most cases. It is also not clear if the vSphere licences can be moved, if Virtual SAN and Log Insight are still tied to the hardware, or if existing licenses can be used as well.

So this still leaves us with the following questions:

  1. Why would you have to use vSphere Enterprise Plus?
  2. Why would you not have perpetual rights to all of the software?
  3. Why would you want 4 under-powered nodes?
  4. Why would you want the minimum number of nodes to be 4 (2 or 3 would be better)?
  5. Why would you scale in 4 node increments (1 would be better)?
  6. Why would you not allow the addition of extra drives?

The bottom line is I would love to know what VMware’s agenda is for EVO:RAIL – if anyone knows please get in touch because I just do not get it.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with VMware.
ITCS user
Solutions Architect with 51-200 employees
Vendor
VSAN vs. EVO:RAIL

I really like what VMware is doing with their Software-Defined Data Centre strategy – the idea of allowing customers to use commoditised, low-cost compute, storage, and networking hardware for their infrastructure has got to be a good thing. We are hopefully on the verge of making IT both much simpler and cheaper.

What I am not so sure about is EVO:RAIL. I get VSAN (see An introduction to VMware Virtual SAN Software-Defined Storage technology and What are the pros and cons of Software-Defined Storage?), but does EVO:RAIL actually make sense?

There are some advantages – it is easy to order, as it is a fixed configuration, and it is easy to deploy: just plug in, power on, and go.

But compared to VSAN it has some serious constraints:

  1. Why can’t we specify a CPU and memory quantity (6-cores seems a bit behind the times today)?
  2. Why can’t we specify the SSD and HDD configuration (the supplied capacity seems a bit on the low side)?
  3. Why can’t we start with 3 nodes and then add nodes one at a time (purchasing 4 nodes at a time does not seem ideal)?
  4. Why can’t we re-use existing vSphere and VSAN licences?
  5. Why can’t we choose to use something other than vSphere Enterprise Plus (Standard or Essentials Plus may well be more appropriate)?
  6. Why can’t we transfer the VMware licences to another EVO:RAIL appliance or standard server (the licences are OEM based and tied to the hardware)?

I would also argue that VMware has done a great job of making vSphere and VSAN easy to deploy; yes, it is going to take a bit longer than EVO:RAIL, but you are not talking about a significant amount of extra time.

So for me, EVO:RAIL just does not make sense – not from a technical point of view, but commercially. If VMware were to follow their strategy of Software-Defined solutions, surely they would allow customers to buy EVO:RAIL-compliant hardware and EVO:RAIL software separately.

Even better, just have a special EVO:RAIL build of vSphere that uses standard vSphere/VSAN licencing – that way the customer can move their licences between whatever hardware form they like. Is that not the point of the Software-Defined Data Centre?

It looks to me a bit like the vRAM tax, and hopefully VMware will listen and make some adjustments.

Comments would be very much appreciated as I am sure there are plenty of people with different opinions.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with VMware.
it_user234735
Technology Consultant, ASEAN at a tech services company with 501-1,000 employees
Consultant
Nutanix vs. EVO:RAIL

2015 IT Trends: Convergence, Automation, and Integration.

Hyper-convergence has been gaining momentum over the last few years, and more and more customers are taking notice. During VMworld 2014 in August, VMware announced its take on hyper-convergence: EVO:RAIL, a combination of virtualization software loaded onto four server nodes that slide, on rails, into a 2U space of a server rack. It represents compute, storage, and networking in a single modular unit.

Please read my other post for VMware EVO:RAIL and Nutanix.

VMware software included with an EVO:RAIL appliance:

  • vSphere Enterprise Plus
  • vCenter Server
  • Virtual SAN
  • Log Insight
  • Support and Maintenance for 3 years

Hardware:

Hypervisor:

Some customers are implementing non-VMware products to virtualize workloads, so the flexibility to support more than VMware is quickly becoming important. VMware EVO:RAIL only supports VMware, while Nutanix supports KVM or Hyper-V in addition to VMware.

Read my other post for VMware and Microsoft Hyper-V 2012R2 here.

Storage:

This comparison does not cover performance; it only compares the availability and data services that the hyper-converged platforms offer.

Nutanix uses a Virtual Storage Appliance (VSA). There is a VSA on each node in the storage cluster, and together they act like scale-out storage controllers. VMware, by contrast, has taken the approach of building VSAN as a module in the vSphere kernel. Each approach has its benefits and drawbacks. The VSA model uses more host resources to provide storage services, but it allows vendors to offer deduplication, compression, backup, and replication, among other services. VMware's integrated approach uses far fewer resources, but it currently lags in the data services it can offer. (A back-of-the-envelope resource comparison follows.)
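
To make the resource-overhead point concrete, here is a rough sketch; the per-node VSA memory reservation is an assumed figure for illustration, not a published Nutanix or VMware number:

```python
# Back-of-the-envelope overhead comparison: VSA-per-node vs in-kernel module.
# The VSA reservation figure is an illustrative assumption, not a vendor spec.
NODES = 4
NODE_RAM_GB = 192                 # matches the EVO:RAIL per-node spec above
VSA_RAM_GB_PER_NODE = 16          # assumed controller-VM memory reservation

vsa_total = NODES * VSA_RAM_GB_PER_NODE
cluster_ram = NODES * NODE_RAM_GB

print(f"VSA model reserves {vsa_total}GB of {cluster_ram}GB "
      f"({100 * vsa_total / cluster_ram:.1f}%) of cluster RAM for storage controllers")
# An in-kernel approach like VSAN avoids dedicating whole controller VMs,
# at the cost (per this review) of fewer data services today.
```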

Disclosure: I am a real user, and this review is based on my own experience and opinions.