Tuesday, October 14, 2014

VMware Software Defined Infrastructure with EVO:RAIL


VMware EVO:RAIL™ is the first hyper-converged infrastructure appliance powered 100% by VMware’s compute, networking, storage, and management software.

EVO:RAIL Deployment, Configuration, and Management streamlines initial setup and ongoing operations, including zero-downtime updates and automatic scale-out. Once racked, cabled, and powered on, EVO:RAIL is fully deployed in minutes.

Custom Builds – These are the traditional, build-your-own style data centers, in which engineers or architects in the organization take the siloed components – network, storage, compute, and management – and cobble them together using a combination of experience and know-how.

Converged Infrastructure – Offerings that combine the components from Custom Builds into a single package, either as a product (such as Vblock) or as a reference architecture (such as FlexPod). The benefit is that most of the architecture challenges have been solved for the consumer; in the case of Vblock, the solution even ships directly from the factory as a set of cabinets fully racked and stacked.


Hyper-Converged Infrastructure – This is usually a clean-slate approach that uses COTS (commodity, off-the-shelf) components to build a node that contains the compute power, storage, and upstream network interfaces. Nodes are pieced together into a seamless fabric using dedicated or shared network interfaces, varying from InfiniBand to traditional Ethernet, to look and feel like a single, logical entity. There is still a need to plumb these nodes into the physical or underlay network, typically a top-of-rack (ToR) or end-of-row (EoR) grid that connects up into a leaf-spine or three-tier topology.



Exposing the EVO

Get ready for some new acronyms. I’ve already spilled the beans on Software Defined Infrastructure in the headline, but now we also have Hyper-Converged Infrastructure Appliance, or HCIA. Under the covers, VMware has taken a COTS approach and layered on VMware vSphere alongside Virtual SAN (VSAN). This provides all of the software bits necessary to put together an appliance offering that VMware plans to support end-to-end with a “one support call” model. EVO is sold as a single SKU – including the hardware, software, and SnS (support/maintenance) – to make the procurement model less painful. We all know procurement will never be completely painless for most enterprises.

Each HCIA (again, that’s the Hyper-Converged Infrastructure Appliance for those playing at home) is a 2U chassis with 4 nodes inside. The version 1.0 release will allow for 4 HCIAs to be put together, resulting in 8 RU of rack space and 16 nodes’ worth of EVO. That’s half the size allowed by the vSphere 5.5 cluster maximum of 32 hosts, so I would imagine that the number will grow at some point beyond the 1.0 release. At least, let’s hope so.

EVO Use Cases


If you’re curious what VMware is targeting for EVO, it boils down to just about everything. Here’s a slide that could have easily been renamed to “all your base are belong to us” and not been far off the mark.


Simplicity with the EVO:RAIL Engine


Although the components within EVO:RAIL are the vSphere bits you know and love, there’s an additional piece that turns EVO:RAIL into a product: the EVO:RAIL Engine, which is essentially the front-end interface into the appliance.

  • Just Go! – EVO:RAIL automatically configures the IP addresses and hostnames that you specified when you ordered EVO:RAIL. Configure your ToR switch, create two passwords, and click the Just Go! button.
  • Customize Me! – all required configuration parameters are supplied for you by default, except for the ESXi and vCenter Server passwords, and this option lets you easily change the defaults.
  • Upload Configuration File – select and upload an existing configuration file instead (see the sketch below for the kinds of parameters involved).
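
To make that last option concrete, here is a minimal sketch, expressed as a Python dictionary purely for illustration, of the kinds of parameters the engine asks for. The field names and layout are hypothetical and do not reflect the actual EVO:RAIL configuration file schema.

```python
# Hypothetical illustration only: these keys do NOT match the real EVO:RAIL
# configuration file schema. They simply group the kinds of values the engine
# collects: hostnames, per-traffic-type IP pools and VLANs, and the two
# passwords you are required to create.
import json

evo_rail_config = {
    "hostnames": {
        "esxi_prefix": "esxi-",          # nodes become esxi-01 .. esxi-04
        "vcenter": "vcenter.lab.local",
        "domain": "lab.local",
    },
    "networks": {
        "management": {"vlan": 10, "ip_pool": "192.168.10.101-192.168.10.116"},
        "vmotion":    {"vlan": 20, "ip_pool": "192.168.20.101-192.168.20.116"},
        "vsan":       {"vlan": 30, "ip_pool": "192.168.30.101-192.168.30.116"},
    },
    "passwords": {
        "esxi_root": "********",         # the two passwords you must set
        "vcenter_admin": "********",
    },
}

if __name__ == "__main__":
    # Serialize the sketch to JSON just to show it could be saved and uploaded.
    print(json.dumps(evo_rail_config, indent=2))
```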


Once configuration is complete, you get a very happy completion screen that lets you log into EVO:RAIL’s management interface.



Once logged in, you are presented with a dashboard that shows data on the virtual machines, the health of the system, configuration items, various tasks, and the ability to build more virtual machines. Notice that the configuration screen also includes build versions of vCenter Server, ESXi, and EVO:RAIL, along with the ability to license the product and push offline updates (important for environments without an internet-facing connection) to the EVO.


Hardware Components

EVO:RAIL comes with the pre-defined hardware components listed below; a quick capacity calculation for a fully scaled cluster follows the list:

Per HCIA
  • 24 hot-plug 2.5" drives
  • Dual PSUs ~1600W

Per Node
  • Dual-socket Intel E5-2620 v2 (6 cores each)
  • Up to 192 GB of RAM
  • 1 x PCI-E expansion slot: disk controller with pass-through capability (Virtual SAN requirement)
  • 1 x 146 GB SAS 10K-RPM HDD or 32 GB SATADOM (ESXi boot)
  • 1 x SSD up to 400 GB (Virtual SAN requirement for read/write cache)
  • 3 x 1.2 TB SAS 10K-RPM HDD (Virtual SAN data store)
  • 2 x Network – 10 GbE RJ45 or SFP+
  • 1 x Management RJ45 – 100/1000 NIC
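
As a quick sanity check on scale, here is a small Python sketch that multiplies the per-node figures above out to the largest supported configuration in 1.0: four appliances, or 16 nodes. The raw Virtual SAN number ignores the SSD cache tier and replication (failures-to-tolerate) overhead, so usable capacity will be lower.

```python
# Back-of-the-envelope capacity for a fully scaled EVO:RAIL cluster,
# using the per-node numbers from the hardware list above.
NODES_PER_APPLIANCE = 4
APPLIANCES = 4                        # maximum supported in release 1.0

nodes = NODES_PER_APPLIANCE * APPLIANCES
cores_per_node = 2 * 6                # dual-socket E5-2620 v2, six cores per socket
ram_per_node_gb = 192                 # maximum RAM per node
vsan_tb_per_node = 3 * 1.2            # three 1.2 TB SAS drives feed the VSAN datastore

print(f"Nodes:              {nodes}")
print(f"Physical cores:     {nodes * cores_per_node}")
print(f"RAM:                {nodes * ram_per_node_gb} GB")
print(f"Raw VSAN capacity:  {nodes * vsan_tb_per_node:.1f} TB (before replication overhead)")
```

That works out to 192 physical cores, about 3 TB of RAM, and roughly 57.6 TB of raw Virtual SAN capacity across the full cluster.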

Adding a new HCIA involves cabling the appliance and then letting EVO:RAIL detect and connect. The rest is handled for you. You can only add one appliance at a time in release version 1.0.


Network Layout

The virtual switch is configured with two vmnics (vmnic0 and vmnic1), and pretty much all traffic uses vmnic0; the only traffic that uses vmnic1 is Virtual SAN.
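
If you want to confirm that layout on a running system, here is a minimal sketch using pyVmomi (VMware's Python SDK for the vSphere API) that lists each host's standard vSwitches, their uplinks, and the active uplinks per port group. The vCenter hostname and credentials are placeholders; EVO:RAIL does not require this step, it is just one way to inspect the NIC assignment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for the appliance's vCenter Server.
ssl_ctx = ssl._create_unverified_context()   # lab use only: skips certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl_ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for vswitch in host.config.network.vswitch:
            # vswitch.pnic holds keys such as 'key-vim.host.PhysicalNic-vmnic0'
            uplinks = [p.split("-")[-1] for p in vswitch.pnic]
            print(f"  {vswitch.name}: uplinks {uplinks}")
        for pg in host.config.network.portgroup:
            teaming = pg.spec.policy.nicTeaming
            active = teaming.nicOrder.activeNic if teaming and teaming.nicOrder else "(inherited)"
            print(f"  portgroup {pg.spec.name}: active uplinks {active}")
finally:
    Disconnect(si)
```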


This configuration requires that you provide a 10 GbE top-of-rack (ToR) switch for connectivity, as well as the following:

  • IPv4 and IPv6 multicast must be enabled on all ports on the ToR switch. When using multiple ToR switches, ISL multicast traffic for IPv4 and IPv6 must be able to pass between the switches. (EVO:RAIL uses IPv6 multicast for auto-discovery; a quick way to sanity-check multicast delivery is sketched after this list.)
  • Configure a management VLAN on your ToR switch(es) and set it to allow multicast traffic to pass through.

To allow multicast traffic to pass through, you have two options, applied either to all EVO:RAIL ports on your ToR switch or to the Virtual SAN and management VLANs (if you have VLANs configured):
  • Enable IGMP Snooping on your ToR switch(es) AND enable IGMP Querier. By default, most switches enable IGMP Snooping but disable IGMP Querier.
  • Disable IGMP Snooping on your ToR switch(es). This option may lead to additional multicast traffic on your network.
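
As referenced above, here is a small, self-contained Python sketch (not part of EVO:RAIL or its tooling) that you could run on two hosts attached to the management VLAN to confirm that IPv6 multicast actually makes it through the ToR switch. The multicast group, port, and interface name are arbitrary placeholders, not the group EVO:RAIL's auto-discovery uses.

```python
import socket
import struct
import sys

# Arbitrary placeholders: NOT the multicast group EVO:RAIL itself uses.
GROUP = "ff02::7777"      # link-local IPv6 multicast group
PORT = 50000
IFACE = "eth0"            # interface attached to the management VLAN

def listen():
    """Join the multicast group and print the first datagram that arrives."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    ifindex = socket.if_nametoindex(IFACE)
    mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", ifindex)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)
    data, addr = sock.recvfrom(1024)
    print(f"received {data!r} from {addr[0]}")

def send():
    """Send one test datagram to the multicast group via the chosen interface."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF,
                    socket.if_nametoindex(IFACE))
    sock.sendto(b"evo-multicast-test", (GROUP, PORT))

if __name__ == "__main__":
    listen() if sys.argv[1:] == ["listen"] else send()
```

Run it with the argument listen on one host and with no argument on another; if the listener prints the test message, multicast is passing on that VLAN.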

Here’s an example ToR configuration to set up the EVO:RAIL:


