Tuesday, October 14, 2014

VMware Software Defined Infrastructure with EVO:RAIL


VMware EVO:RAIL™ is the first hyper-converged infrastructure appliance powered 100% by VMware’s compute, networking, storage, and management software.

EVO:RAIL streamlines deployment, configuration, and ongoing management, including updates with zero downtime and automatic scale-out. Once racked, cabled, and powered on, EVO:RAIL is fully deployed in minutes.

Custom Builds – These are the traditional, build-your-own style data centers, in which engineers or architects in the organization take the silo components – network, storage, compute, and management – and cobble them together using a combination of experience and know-how.

Converged Infrastructure – Offerings that combine the components from Custom Builds into a single package, either as a product (such as Vblock) or as a reference architecture (such as FlexPod). The benefit is that most of the architecture challenges have been solved for the consumer, and in the case of Vblock, the system ships directly from the factory as a set of cabinets fully racked and stacked.


Hyper-Converged Infrastructure – This is usually a clean-slate approach that uses COTS (commodity off-the-shelf) components to build a node that contains the compute power, storage, and upstream network interfaces. Nodes are pieced together into a seamless fabric using dedicated or shared network interfaces, varying from InfiniBand to traditional Ethernet, to look and feel like a single, logical entity. There is still a need to plumb these nodes into the physical or underlay network, typically a top-of-rack (ToR) or end-of-row (EoR) grid that connects up into a leaf-spine or three-tier topology.



Exposing the EVO

Get ready for some new acronyms. I’ve already spilled the beans on Software Defined Infrastructure in the headline, but now we also have Hyper-Converged Infrastructure Appliance, or HCIA. Under the covers, VMware has taken a COTS approach and layered on VMware vSphere alongside Virtual SAN (VSAN). This provides all of the software bits necessary to put together an appliance offering that VMware plans to support end-to-end with a “one support call” model. EVO is sold as a single SKU – this includes the hardware, software, and SnS (support/maintenance) – to make the procurement model relatively less painful. We all know procurement will never be completely painless for most enterprises.

Each HCIA (again, that’s the Hyper-Converged Infrastructure Appliance for those playing at home) is a 2U chassis with 4 nodes inside. The version 1.0 release will allow for 4 HCIAs to be put together, resulting in 8 RU of rack space and 16 nodes’ worth of EVO. That’s half the size allowed by the vSphere 5.5 cluster maximum of 32 hosts, so I would imagine that number will grow at some point beyond the 1.0 release. At least, let’s hope so.
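As a quick back-of-the-envelope check on that math (a sketch only; the 32-host figure is the published vSphere 5.5 cluster maximum, and the per-appliance numbers come straight from the paragraph above):

```python
# Back-of-the-envelope EVO:RAIL 1.0 scale math (illustrative only).
NODES_PER_APPLIANCE = 4            # each 2U HCIA holds four nodes
RACK_UNITS_PER_APPLIANCE = 2
MAX_APPLIANCES_V1 = 4              # version 1.0 limit
VSPHERE_55_CLUSTER_MAX_HOSTS = 32  # published vSphere 5.5 cluster maximum

nodes = MAX_APPLIANCES_V1 * NODES_PER_APPLIANCE              # 16 nodes
rack_units = MAX_APPLIANCES_V1 * RACK_UNITS_PER_APPLIANCE    # 8 RU
print(f"{nodes} nodes in {rack_units} RU "
      f"({nodes / VSPHERE_55_CLUSTER_MAX_HOSTS:.0%} of the cluster host maximum)")
```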

EVO Use Cases


If you’re curious what VMware is targeting for EVO, it boils down to just about everything. Here’s a slide that could have easily been renamed to “all your base are belong to us” and not been far off the mark.


Simplicity with the EVO:RAIL Engine


Although the components within RAIL are the vSphere bits you know and love, there’s an additional component that turns EVO:RAIL into a product. That’s the EVO:RAIL Engine. It’s essentially a front-end interface into the product.

  • Just Go! – EVO:RAIL automatically configures the IP addresses and hostnames that you specified when you ordered EVO:RAIL. Configure your ToR switch, click the Just Go! button, and all you have to create are two passwords.
  • Customize Me! – When you customize EVO:RAIL, all required configuration parameters are supplied for you by default, except for the ESXi and vCenter Server passwords. Customize Me! lets you easily change the defaults.
  • Upload Configuration File – An existing configuration file can be selected and uploaded (a rough sketch of the kinds of parameters such a file carries follows this list).
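To give a feel for what those configuration parameters cover, here is a rough sketch, expressed as a Python dictionary, of the kinds of values Customize Me! and an uploaded configuration file deal with. The field names and addresses are purely illustrative assumptions, not the actual EVO:RAIL file schema.

```python
# Illustrative only: hypothetical field names and values, not the real EVO:RAIL schema.
evo_rail_config = {
    "hostnames": {"esxi_prefix": "esxi-host", "vcenter": "vcenter01"},
    "networks": {
        "management": {"vlan": 10, "ip_pool": "192.168.10.101-192.168.10.116"},
        "vmotion":    {"vlan": 20, "ip_pool": "192.168.20.101-192.168.20.116"},
        "vsan":       {"vlan": 30, "ip_pool": "192.168.30.101-192.168.30.116"},
        "vm_networks": [{"name": "Production", "vlan": 110}],
    },
    "globals": {"dns": ["192.168.10.2"], "ntp": ["192.168.10.3"], "timezone": "UTC"},
    # The only values with no supplied defaults are the two passwords you must create.
    "passwords": {"esxi_root": "<set at deployment>", "vcenter_admin": "<set at deployment>"},
}
```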


Once completed, you get a very happy completion screen that lets you log into EVO:RAIL’s management interface.



Once logged in, you are presented with a dashboard that contains data on the virtual machines, the health of the system, configuration items, various tasks, and the ability to build more virtual machines. Notice that the configuration screen also includes build versions of vCenter, ESXi, and EVO:RAIL, along with the ability to license the product and push offline updates (important for those without an internet-facing connection) to the EVO.


Hardware Components

The EVO:RAIL comes with the pre-defined hardware components listed below (a quick raw-capacity sketch follows the list):

Per HCIA
  • 24 hot-plug 2.5-inch drives
  • Dual PSUs ~1600W

Per Node
  • Dual-socket Intel E5-2620 v2 (6 cores each)
  • Up to 192 GB of RAM
  • 1 x PCI-E expansion slot: disk controller with pass-through capability (Virtual SAN requirement)
  • 1 x 146 GB SAS 10K-RPM HDD or 32 GB SATADOM (ESXi boot)
  • 1 x SSD up to 400 GB (Virtual SAN requirement for read/write cache)
  • 3 x 1.2 TB SAS 10K-RPM HDD (Virtual SAN datastore)
  • 2 x 10 GbE network ports – RJ45 or SFP+
  • 1 x 100/1000 management NIC – RJ45
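As a rough sizing sketch based on the disk counts above: raw Virtual SAN capacity comes from the three 1.2 TB capacity disks per node, the SSD is read/write cache rather than capacity, and a failures-to-tolerate setting of 1 roughly halves the usable figure. The numbers below are illustrative and ignore slack space, metadata, and formatting overhead.

```python
# Rough raw-capacity math for a fully populated EVO:RAIL, using the specs listed above.
# Illustrative only: ignores Virtual SAN slack space, metadata, and formatting overhead.
NODES_PER_APPLIANCE = 4
CAPACITY_HDD_TB_PER_NODE = 3 * 1.2   # 3 x 1.2 TB SAS HDDs back the Virtual SAN datastore
CACHE_SSD_TB_PER_NODE = 0.4          # up to 400 GB SSD is cache, not counted as capacity
MAX_APPLIANCES_V1 = 4
FTT = 1                              # Virtual SAN failures-to-tolerate (mirrored copies)

raw_per_appliance = NODES_PER_APPLIANCE * CAPACITY_HDD_TB_PER_NODE       # 14.4 TB
raw_full_cluster = raw_per_appliance * MAX_APPLIANCES_V1                 # 57.6 TB
usable_estimate = raw_full_cluster / (FTT + 1)                           # ~28.8 TB before overhead
cache_full_cluster = NODES_PER_APPLIANCE * CACHE_SSD_TB_PER_NODE * MAX_APPLIANCES_V1

print(f"Raw capacity per appliance: {raw_per_appliance:.1f} TB")
print(f"Raw capacity at four appliances: {raw_full_cluster:.1f} TB "
      f"(plus {cache_full_cluster:.1f} TB of SSD cache)")
print(f"Usable estimate at FTT={FTT}: ~{usable_estimate:.1f} TB")
```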

Adding a new HCIA involves cabling the appliance and then letting EVO:RAIL detect and connect. The rest is handled for you. You can only add one appliance at a time in release version 1.0.


Network Layout

The virtual switch is configured with two vmnics (vmnic0 and vmnic1), and nearly all traffic uses vmnic0; the only traffic placed on vmnic1 is Virtual SAN.
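If you want to verify that layout for yourself once an appliance is up, a minimal sketch along these lines, using the open-source pyVmomi SDK against a standard vSwitch, will list each host's vSwitch uplinks and VMkernel ports. The hostname and credentials are placeholders, and nothing here is EVO:RAIL-specific; it simply reads the host networking configuration that EVO:RAIL sets up.

```python
# Minimal sketch: list vSwitch uplinks and VMkernel ports on ESXi hosts with pyVmomi.
# Hostname and credentials are placeholders; certificate checking is disabled for a lab only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter01.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        net = host.config.network
        for vswitch in net.vswitch:
            # pnic entries are keys such as 'key-vim.host.PhysicalNic-vmnic0'
            print(f"{host.name} {vswitch.name} uplinks: {vswitch.pnic}")
        for vmk in net.vnic:
            print(f"  {vmk.device} portgroup={vmk.portgroup} ip={vmk.spec.ip.ipAddress}")
finally:
    Disconnect(si)
```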


This configuration will require that you have provided a 10 GbE top of rack (ToR) switch for connectivity, as well as the following:

  • IPv4 and IPv6 multicast must be enabled on all ports on the ToR switch. When using multiple ToR switches, ISL multicast traffic for IPv4 and IPv6 must be able to pass between the switches. (EVO:RAIL uses IPv6 multicast for auto-discovery.)
  • Configure a management VLAN on your ToR switch(es) and set it to allow multicast traffic to pass through.

To allow multicast traffic to pass through, you have two options, applied either to all EVO:RAIL ports on your ToR switch or only to the Virtual SAN and management VLANs (if you have VLANs configured):
  • Enable IGMP Snooping on your ToR switch(es) AND enable IGMP Querier. By default, most switches enable IGMP Snooping but disable IGMP Querier.
  • Disable IGMP Snooping on your ToR switch(es). This option may lead to additional multicast traffic on your network.

Here’s an example ToR configuration to set up the EVO:RAIL:



Sunday, October 12, 2014

VMware vSphere 5.5 C# Client

vSphere 5.5 Update 2 has just been released, and among the various bug fixes, the one that stands out the most to me, and that I am sure many of you will be quite happy about, is the ability to edit a Virtual Hardware version 10 virtual machine using the legacy vSphere C# Client. Previously, if you tried to edit a virtual machine running the latest virtual hardware (version 10), you would get a warning message prompting you to use the vSphere Web Client, and the operation would be blocked.


Direct download link for the vSphere 5.5 Update 2 C# Client: 





Note: You do not need to install vSphere 5.5 Update 2 to use this new functionality. Just upgrade your vSphere C# Client to the vSphere 5.5 Update 2 release and you will be able to connect to previous versions of vSphere 5.5 (vCenter Server and ESXi).

VMware vSphere 5.5 Update 2 Released!!!

VMware has released vSphere 5.5 Update 2, now available from the VMware site. It includes a few additional features and more than 100 bug fixes over vSphere 5.5 Update 1.

This is a minor update, but it brings some important database support updates; it's great to see SQL Server 2014 now supported.

SRM 5.8 requires vCenter 5.5 Update 2, so whenever SRM 5.8 comes out, be sure to upgrade your vCenter Server prior to deployment.

vSphere 5.5 Update 2 also allows the “legacy” vSphere Client to modify some properties of hardware version 10 VMs: RAM, vCPU, network port group changes, device removal, ISO mounts, disk space increases, reservations, and advanced settings.

What's New in VMware vSphere 5.5 Update 2?

  • Support for hosts with 6 TB of RAM – vSphere 5.5 Update 2 adds support for hosts with 6 TB of RAM.
  • VMware vShield Endpoint Thin Agent is renamed the VMware Tools Guest Introspection plugin – The vShield Endpoint driver bundled with VMware Tools is now called Guest Introspection.
  • Resolved issues: Take a look at the list of resolved issues with the release of VMware ESXi 5.5 Update 2.
  • vCenter Server database support: vCenter Server now supports the following external databases: Oracle 12c (important: for prerequisite requirements, see KB 2079443), Microsoft SQL Server 2012 Service Pack 1, and Microsoft SQL Server 2014.
  • vCloud Hybrid Service: The vCloud Hybrid Service (vCHS) introduces a new container, Hybrid Cloud Service, on the vSphere Web Client home page. The Hybrid Cloud Service container contains the vCHS installer and the new vCloud Connector installer.
  • Customer Experience Improvement Program: The vSphere Customer Experience Improvement Program is introduced to collect configuration data for vSphere and transmit it weekly to VMware for analysis, to help understand usage and improve the product.
  • Resolved issues: Take a look at the list of resolved issues with the release of VMware vCenter Server 5.5 Update 2.

Product Support Notices

  • vSphere Web Client: Starting with vSphere 5.5 Update 2, Windows XP and Windows Vista are no longer supported as vSphere Web Client operating systems. You can find the complete list of operating systems supported by the vSphere Web Client in the VMware Compatibility Guide.
  • vSphere Web Client: Because Linux platforms are no longer supported by Adobe Flash, vSphere Web Client is not supported on the Linux OS. Third party browsers that add support for Adobe Flash on the Linux desktop OS might continue to function.
  • VMware vCenter Server Appliance: In vSphere 5.5, the VMware vCenter Server Appliance meets high-governance compliance standards through the enforcement of the DISA Security Technical Implementation Guides (STIG). Before you deploy VMware vCenter Server Appliance, see the VMware Hardened Virtual Appliance Operations Guide for information about the new security deployment standards and to ensure successful operations.
  • vCenter Server database: vSphere 5.5 removes support for IBM DB2 as the vCenter Server database.
  • VMware Tools: Beginning with vSphere 5.5, all information about how to install and configure VMware Tools in vSphere is merged with the other vSphere documentation. For information about using VMware Tools in vSphere, see the vSphere documentation. The standalone Installing and Configuring VMware Tools guide is not relevant to vSphere 5.5 and later.
  • vSphere Data Protection: vSphere Data Protection 5.1 is not compatible with vSphere 5.5 because of a change in the way vSphere Web Client operates. vSphere Data Protection 5.1 users who upgrade to vSphere 5.5 must also update vSphere Data Protection to continue using vSphere Data Protection.

VCE Vision Intelligent Operations

VCE Vision Intelligent Operations enables and simplifies converged operations. The software acts as a mediation layer between Vblock™ Systems and data center management tools, dynamically informing those tools about Vblock Systems so that all customer management toolsets get a consistent and comprehensive view of the entire infrastructure. Vision software delivers intelligent discovery to provide a single, objective perspective of Vblock Systems, along with comprehensive awareness of the industry-leading components that comprise them, promoting infrastructure standardization through automated validation and system assurance. Integration capabilities make it possible to provide this level of intelligence to any toolset: the software is integrated with the VMware virtualization and cloud management portfolio and also supports API-enabled integration into other standard industry tools.




Deeper Dive:

Discovery
  • Detects what is in a Vblock System
  • Provides up-to-date inventory
  • Details component interconnections
Validation
  • Checks compliance to RCM
  • Collects required RCM updates*
  • Checks security status*
Health
  • Consolidated Health Status via API
  • Consolidated SNMP MIB & Traps
  • Consolidated SysLog
Logging
  • Archives component configurations
  • Checkpoints at set intervals
  • Consolidated collection

VCE launches the new Vblock System 540 with an all-flash array

VCE, the leader in converged infrastructure systems, has announced the release of a new line of products led by the industry’s first converged infrastructure system with an all-flash array.










The Vblock System 540 contains the latest in next-generation Cisco UCS servers, Cisco ACI-ready network devices, and an EMC XtremIO array, all of which combine to provide a whopping 1M+ potential IOPS with sub-millisecond application response times, perfect for Big Data, online transaction processing (OLTP), online analytical processing (OLAP), and end-user computing. The Vblock System 540 is extremely scalable, with up to 192 Cisco UCS M3 or M4 B-series blade servers and between 10 TB and 120 TB of raw storage capacity, with the option of attaching an EMC Isilon storage array. The Vblock System 540 datasheet with all the details can be found here.


VCE has also released an updated version of its flagship converged infrastructure platform in the new Vblock System 740. The Vblock System 740 is designed with unmatched performance and capacity in mind, using the newest next-generation Cisco UCS servers, Cisco ACI-ready network devices, and EMC VMAX3 storage arrays to offer up to 3x the performance and 2x the storage bandwidth of the previous Vblock System 700 model. This beast has support for up to 512 Cisco UCS blades and 4 PB of usable storage! Check out the full set of specs on the Vblock System 740 here.

To round out the spectrum of products, VCE has updated the entry level converged infrastructure platform with the new Vblock System 240. The Vblock System 240 is the perfect pre-configured system for private cloud solutions, utilizing the VNX5200 unified storage system, all in a single rack solution ready to drop into your datacenter in as little as 45 days after your order. See the full set of details here.

Couple these new products with the VCE Support team, which provides world-class white-glove service, and you have your next datacenter purchase. Visit http://www.vce.com or contact an authorized VCE reseller to see about acquiring your Vblock System.

Wednesday, October 1, 2014

VMware EVO RAIL

Introducing EVO:RAIL





VMware EVO:RAIL combines compute, networking, and storage resources into a hyper-converged infrastructure appliance to create a simple, easy-to-deploy, all-in-one solution offered by Qualified EVO:RAIL Partners.

EVO:RAIL enables power-on to virtual machine creation in minutes, radically easy VM deployment, non-disruptive patches and upgrades, simplified management…you get the idea.

Software-Defined Building Block

EVO:RAIL is a scalable Software-Defined Data Center (SDDC) building block that delivers compute, networking, storage, and management to empower private and hybrid cloud, end-user computing, test/dev, and branch office environments.

Trusted Foundation

Building on the proven technology of VMware vSphere, vCenter Server™, and VMware Virtual SAN™, EVO:RAIL delivers the first hyper-converged infrastructure appliance 100 percent powered by VMware software.

Highly Resilient by Design

Resilient appliance design starting with four independent hosts and a distributed Virtual SAN datastore ensures zero application downtime during planned maintenance or during disk, network, or host failures.

Infrastructure at the Speed of Innovation

Meet accelerating business demands by simplifying infrastructure design with predictable sizing and scaling, by streamlining purchase and deployment with a single appliance SKU, and by reducing CapEx and OpEx.

EVO:RAIL Software Bundle

  • EVO:RAIL rapid deployment, configuration and management engine.
  • Compute, network and storage virtualization enabled with vSphere and Virtual SAN

