Monday, November 25, 2019

ESXi 6.0, 6.5 and 6.7: New Builds Released

VMware has released new builds of ESXi 6.0, 6.5 and 6.7. These are bug-fix and security-fix releases, and they include new Intel CPU microcode.


The ESXi 6.7 build also includes a fix for “Sensor -1 type” hardware health alarms that may fill the vCenter SEAT database disk, among other fixes.

The ESXi 6.0 build also includes a fix for CBT corruption when reverting snapshots.

See the release notes for each version here:

Release notes ESXi 6.0.
Release notes ESXi 6.5.
Release notes ESXi 6.7.
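
Once the patches are applied, it can be handy to confirm which build each host is actually running. Below is a minimal sketch using pyVmomi, assuming a reachable vCenter Server and valid credentials; the hostname and account shown are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Skip certificate verification for lab use only.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            about = host.config.product  # vim.AboutInfo for the host
            print(host.name, about.version, "build", about.build)
        view.Destroy()
    finally:
        Disconnect(si)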


I will share the updated information shortly. I hope this has been informative and thank you for reading! 

Wednesday, November 20, 2019

Free Stuff from VMware: A New Ebook, Service Mesh For Dummies

VMware has very kindly made a free ebook called Service Mesh For Dummies available for download.

The book is available as a PDF download and consists of six chapters over 65 pages.

The book is authored by Niran Even-Chen, Oren Penso, and Susan Wu.

The six chapters are:
  •     The Rise of Microservices and Cloud-Native Architecture
  •     Service Mesh: A New Paradigm
  •     Service Mesh Use Cases
  •     Recognizing Complexity Challenges in Service Mesh
  •     Transforming the Multi-Cloud Network with NSX Service Mesh
  •     Ten (Or So) Resources to Help You Get Started with Service Mesh
The book is available for download from VMware.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Monday, October 21, 2019

vSphere 6.0 Reaches End Of General Support (EOGS) in March 2020

We would like to remind you that the End of General Support (EOGS) for vSphere 6.0 and the products listed below is March 12, 2020.

This includes the following releases:

  • vCenter Server 6.0
  • vCenter Update Manager 6.0
  • ESXi 6.0
  • Site Recovery Manager 6.0 and 6.1
  • vSAN 6.0, 6.1 and 6.2
  • vSphere Data Protection 6.0 and 6.1
  • vSphere Replication 6.0 and 6.1

Learn more about VMware’s Lifecycle Support dates at vmware.com/go/lifecycle.

To maintain your full level of Support and Subscription Services, VMware recommends upgrading to vSphere 6.5 or 6.7. Note that by upgrading to vSphere 6.5 or 6.7 you not only get all the latest capabilities of vSphere but also the latest vSAN release and capabilities (with a separate vSAN license). You can learn more about vSphere 6.7 through a series of blog posts available here.

If you are unable to upgrade from vSphere 6.0 before EOGS and are active on Support and Subscription Services, you may purchase Extended Support in one-year increments for up to two years beyond the EOGS date. Visit VMware Extended Support for more information.

Technical Guidance for vSphere 6.0 is available until March 12, 2022, primarily through the self-help portal. During the Technical Guidance phase, VMware will not offer new hardware support, server/client/guest OS updates, new security patches or bug fixes unless otherwise noted. For more information, visit VMware Lifecycle Support Phases.

Upgrading to vSphere 6.7

For more information on the benefits of upgrading and how to upgrade, visit the VMware vSphere Upgrade Center. For detailed technical guidance, visit vSphere Central. VMware has extended general support for vSphere 6.5 to a full five years from its release date, ending on November 15, 2021. The same end-of-general-support date also applies to vSphere 6.7.

If you require assistance upgrading to a newer version of vSphere, VMware’s vSphere Upgrade Service is available. This service delivers a comprehensive guide to upgrading your virtual infrastructure, including recommendations for planning and testing the upgrade, the actual upgrade itself, validation guidance, and rollback procedures. For more information, contact your VMware account team.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Wednesday, September 11, 2019

VMware Ports and Protocols

VMware recently released a new tool called VMware Ports and Protocols. The tool documents all the network ports and protocols, including communication source and target, required by the VMware products listed below:

  • vSphere
  • vSAN
  • NSX for vSphere
  • vRealize Network Insight
  • vRealize Operations Manager
  • vRealize Automation

https://ports.vmware.com/
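
If you want to quickly verify that a documented port is actually reachable from a given machine, a plain TCP connect test is often enough. Here is a minimal Python sketch; the vCenter hostname and the ports chosen are illustrative examples only, so check the tool for the ports that matter to your products:

    import socket

    def port_open(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical vCenter hostname; 443 (HTTPS) and 902 (host management) as examples.
    for port in (443, 902):
        print("vcenter.example.com port", port, "reachable:",
              port_open("vcenter.example.com", port))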


I will share the updated information shortly. I hope this has been informative and thank you for reading!

Thursday, September 5, 2019

Introducing Project Pacific

VMware announced Project Pacific, what I believe to be the biggest evolution of vSphere in easily the last decade.  Simply put, we are re-architecting vSphere to deeply integrate and embed Kubernetes. The introduction of Project Pacific anchors the announcement of VMware Tanzu, a portfolio of products and services that transform how the enterprise builds software on Kubernetes.

Project Pacific evolves vSphere to be a native Kubernetes platform. What’s driving this shift?  Fundamentally it goes to what constitutes a modern application.  Modern apps are often complex combinations of many different technologies – traditional in-house apps, databases, modern apps in containers, and potentially even modern apps in functions.  Managing these apps across that heterogeneity is a complex task for both developers and operators.  Indeed, enabling dev and ops to work better together is a key problem many businesses face.

When we looked at this space and asked ourselves how we can help our customers here, it was clear that vSphere would play a central role.  But we realized that newer technologies, such as Kubernetes, were also critical to the solution.  So we thought – why not combine them and get the best of both worlds?

This is exactly what Project Pacific achieves.  Project Pacific fuses vSphere with Kubernetes to enable our customers to accelerate development and operation of modern apps on vSphere.  This will allow our customers to take advantage of all the investments they’ve made in vSphere and the vSphere ecosystem in terms of technology, tools, and training while supporting modern applications.


Specifically, Project Pacific will deliver the following capabilities:


    vSphere with Native Kubernetes

Project Pacific will embed Kubernetes into the control plane of vSphere, for unified access to compute, storage and networking resources, and also converge VMs and containers using the new Native Pods that are high performing, secure and easy to consume. Concretely this will mean that IT Ops can see and manage Kubernetes objects (e.g. pods) from the vSphere Client.  It will also mean all the various vSphere scripts, 3rd party tools, and more will work against Kubernetes.

    App-focused Management

Rather than managing individual VMs (and now containers!), Project Pacific will enable app-level control for applying policies, quota and role-based access to developers. With Project Pacific, IT will have unified visibility into vCenter Server for Kubernetes clusters, containers and existing VMs, as well as apply enterprise-grade vSphere capabilities (like High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion) at the app level.

    Dev & IT Ops Collaboration

IT operators will use vSphere tools to deliver Kubernetes clusters to developers, who can then use Kubernetes APIs to access SDDC infrastructure. With Project Pacific, both developers and IT operators will gain a consistent view via Kubernetes constructs in vSphere.
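
To give a feel for the developer side of this workflow, here is a minimal sketch using the standard Kubernetes Python client to list pods in a cluster. Nothing in it is Project Pacific specific; it simply assumes IT Ops has already handed the developer a kubeconfig context for the provisioned cluster:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (context supplied by IT Ops).
    config.load_kube_config()

    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)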

VMware’s extensive ecosystem of partners will also benefit from Project Pacific which will enable their tools to work against container-based applications seamlessly and without any modifications. Ultimately, Project Pacific will help enterprises to accelerate the deployment of modern apps while reducing the complexity involved to manage them in the hybrid cloud. Project Pacific is currently in technology preview*.

This is a truly groundbreaking innovation for vSphere.  I’m really excited about our broader vision for helping customers build software on Kubernetes.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Wednesday, August 14, 2019

HCX Overview



HCX is the Swiss Army knife of workload mobility. It abstracts and removes the boundaries of the underlying infrastructure and focuses on the workloads. An HCX vMotion, for example, requires no direct connectivity to ESXi hosts in either direction, unlike a vSphere vMotion. All HCX vMotion traffic is managed through the HCX vMotion Proxy at each location. The HCX vMotion Proxy appears as an ESXi host within the vCenter Server inventory. It is deployed at the data center level by default, and no intervention is necessary. One thing to mention is that the HCX vMotion Proxy is added to the vCenter Server host count by default; the HCX team is aware and will be changing this in the future, but it has no impact on your vSphere licensing.


Another boundary HCX removes is the vSphere version: it supports releases from vSphere 5.0 up to the most current release, vSphere 6.7 Update 1. This provides flexibility in moving workloads across vSphere versions, on-premises locations, and vSphere SSO domains. For on-premises to on-premises migrations, an NSX Hybrid Connect license is required per HCX site pairing. We will cover site pairing in the configuration blog post. Migrating workloads from on-premises to VMC does not require a separate HCX license. When deploying a VMC SDDC, HCX is included as an add-on and is enabled by default. From the VMC add-ons tab, all that is required is clicking Open Hybrid Cloud Extension and then Deploy HCX. The deployment of HCX is completely automated within the VMC SDDC.



In order to start migrating workloads, network connectivity between the source and destination needs to be in place. The good news is that it is all built into the product. HCX has WAN optimization, deduplication, and compression to increase efficiency while decreasing the time it takes to perform migrations. The minimum network bandwidth required to migrate workloads with HCX is 100 Mbps. HCX can leverage your internet connection as well as Direct Connect, and the established network tunnel is secured using Suite B encryption.

On-premises workloads being migrated with no downtime will need to reside on a vSphere Distributed Switch (VDS); HCX also supports one third-party switch, the Nexus 1000v. Cold and bulk HCX migrations are currently the only two migration types that support the use of a vSphere Standard Switch, but they imply downtime for the workload (the HCX team is working on adding Standard Switch support for the other migration types). To minimize migration downtime, HCX has a single-click option to extend on-premises networks (L2 stretch) to other on-premises sites or VMware Cloud on AWS. Once the workloads have been migrated, there is also an option to migrate the extended network, if you choose. Other built-in functionality includes:
  •     Native scheduler for migrations
  •     Per-VM EVC
  •     Upgrade VM Tools / Compatibility (hardware)
  •     Retain mac address
  •     Remove snapshots
  •     Force Unmount ISO images
  •     Bi-directional migration support


HCX provides enhanced functionality on top of the built-in vSphere VM mobility options. Customers can now use HCX to migrate workloads seamlessly from on-premises to other paired on-premises sites (multisite) and VMware Cloud on AWS. Workload mobility can also help with hardware refreshes, as well as with upgrading from unsupported vSphere 5.x versions. The next post will cover the different migration options available within HCX, followed by how to set up and configure the product.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Monday, August 5, 2019

vSAN Space Efficiency Features

vSAN includes several space efficiency features:
  • Deduplication
  • Compression 
  • Erasure Coding
These features reduce the total cost of ownership (TCO) of storage and are all built directly into vSAN. Let’s go into each one a little more in-depth to learn how we’re saving money and storage while increasing performance at the same time.

Deduplication & Compression

Enabling deduplication and compression can reduce the amount of physical storage consumed by as much as 7x. For example, let’s say you have 20 Windows Server 2012 R2 VMs, each with its own specific purpose (AD, Exchange, app, web, DB, etc.). Without deduplication and compression, we would be storing largely the same operating system data 20 times over.

Environments with redundant data such as similar operating systems typically benefit the most. Likewise, compression offers more favorable results with data that compresses well like text, bitmap, and program files. Data that is already compressed such as certain graphics formats and video files, as well as files that are encrypted, will yield little or no reduction in storage consumption from compression. Deduplication and compression results will vary based on the types of data stored in an all flash vSAN environment.
Note: Deduplication and compression are enabled together as a single cluster-wide setting that is disabled by default and can be turned on using a drop-down menu in the vSphere Web Client.
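
As a back-of-the-envelope illustration of why similar guest OS images deduplicate so well, here is a small sketch; the per-VM sizes are purely hypothetical and real-world ratios will vary with the data:

    # Hypothetical sizing for 20 near-identical Windows Server 2012 R2 VMs.
    vm_count = 20
    os_image_gb = 30           # space a single copy of the shared OS blocks would need
    unique_data_gb_per_vm = 8  # app/config data unique to each VM

    logical_gb = vm_count * (os_image_gb + unique_data_gb_per_vm)   # what the guests wrote
    physical_gb = os_image_gb + vm_count * unique_data_gb_per_vm    # ideal post-dedup footprint

    print("Logical:", logical_gb, "GB  Physical:", physical_gb, "GB")
    print("Dedup ratio: %.1fx" % (logical_gb / physical_gb))        # roughly 4x in this example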

RAID 5/6 Erasure Coding

RAID-5/6 erasure coding is a space efficiency feature optimized for all flash configurations. Erasure coding provides the same levels of redundancy as mirroring, but with a reduced capacity requirement. In general, erasure coding is a method of taking data, breaking it into multiple pieces and spreading it across multiple devices, while adding parity data so it may be recreated in the event one of the pieces is corrupted or lost.


Unlike deduplication and compression, which offer variable levels of space efficiency, erasure coding guarantees capacity reduction over a mirroring data protection method at the same failure tolerance level. As an example, let’s consider a 100GB virtual disk. Surviving one disk or host failure requires 2 copies of data at 2x the capacity, i.e., 200GB. If RAID-5 erasure coding is used to protect the object, the 100GB virtual disk will consume 133GB of raw capacity—a 33% reduction in consumed capacity versus RAID-1 mirroring.
RAID-5 erasure coding requires a minimum of four hosts. Let’s look at a simple example of a 100GB virtual disk. When a policy containing a RAID-5 erasure coding rule is assigned to this object, three data components and one parity component are created. To survive the loss of a disk or host (FTT=1), these components are distributed across four hosts in the cluster.

RAID-6 erasure coding requires a minimum of six hosts. Using our previous example of a 100GB virtual disk, the RAID-6 erasure coding rule creates four data components and two parity components. This configuration can survive the loss of two disks or hosts simultaneously (FTT=2). While erasure coding provides significant capacity savings over mirroring, understand that erasure coding requires additional processing overhead. This is common with any storage platform. Erasure coding is only supported in all flash vSAN configurations. Therefore, the performance impact is negligible in most cases due to the inherent performance of flash devices.
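
To make the capacity math concrete, the short sketch below computes the raw capacity consumed by the 100GB example disk under RAID-1 mirroring (FTT=1), RAID-5 (3 data + 1 parity), and RAID-6 (4 data + 2 parity):

    def raw_capacity_gb(logical_gb, scheme):
        """Raw vSAN capacity consumed for a given logical disk size."""
        overhead = {
            "RAID-1": 2.0,      # two full copies for FTT=1
            "RAID-5": 4.0 / 3,  # 3 data + 1 parity components
            "RAID-6": 6.0 / 4,  # 4 data + 2 parity components
        }
        return logical_gb * overhead[scheme]

    for scheme in ("RAID-1", "RAID-5", "RAID-6"):
        print(scheme, "->", round(raw_capacity_gb(100, scheme)), "GB raw")
    # RAID-1 -> 200 GB, RAID-5 -> 133 GB, RAID-6 -> 150 GB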

I will share the updated information shortly. I hope this has been informative and thank you for reading!
