Tuesday, December 24, 2019

vSphere 6.7 - Syncing VM Templates Using the Content Library

Most of us have stored VMware content such as ISO images, OVF and OVA templates, and valuable scripts on a local or remote datastore and used it to build out Windows virtual machines (VMs), Linux VMs, or ESXi hosts. That content is accessible to anyone with admin rights to the vCenter environment on which it is hosted, so anyone with access to the vCenter Server and its datastore can initiate an unauthorized VM build from unprotected VMware content.

Syncing of native VM templates (VMTX) between Content Libraries is available when vCenter Server is configured for Enhanced Linked Mode (ELM). Published libraries are now subscriber-aware, allowing newly published items to replicate to subscribed Content Libraries.


Option  Description
Local content library  A local content library is accessible, by default, only in the vCenter Server instance where you create it.
Subscribed content library  A subscribed content library originates from a published content library. Use this option to take advantage of already existing content libraries. You can sync the subscribed library with the published library to see up-to-date content, but you cannot add or remove content from the subscribed library. Only an administrator of the published library can add, modify, and remove content from the published library.




  • Using the Content Library to publish (sync) a native VM template (VMTX) to subscriber libraries in vCenter Server Appliance 6.7.
 
  •  List of Content Libraries accessible to the vCenter Server. Click on the Local Library named "OnPrem-Publisher".

  • Three templates are displayed: two OVF and one VMTX. To publish (sync) the VMTX template to subscriber libraries, select the VMTX template named "Win2012-tmp".
  • The details of the VMTX template are displayed. Click Actions to continue.

  • Once the Subscriber Library is selected, clicking Publish will sync the VMTX template from the Local Content Library over to the Subscriber Library.

  • Click Publish to sync this VMTX template from the local Content Library over to the Subscriber Library.

  • Alternatively, we can also publish VMTX templates directly from the Local Library view instead of the template details page. Select the Subscriber Library to publish to.


  • A notification confirms that only VMTX templates will be published to the Subscriber Library. Click Publish to continue.
  • We can also verify that the template made it over by clicking on the Subscriber Library named "VC-026 Lib".

 

  • Review the current templates in the Subscriber Library. Here we can see that our VMTX (VM Template) has synchronized to the Subscriber Content Library (VC-026 Lib) from the local Content Library.

  • Once synchronization completes, all templates can be seen from this view. If there are other files in the Content Library that you need to review, click Other Types to see them.



  • Other file types include items such as ISOs, text files, etc.

I hope this has been informative and thank you for reading!

Monday, December 9, 2019

VMware Skyline Health for vSAN

VMware Skyline Health for vSAN unifies two in-product support offerings: vSAN Health and Skyline. Users will experience less downtime thanks to new, proactive Skyline notifications, and can resolve issues faster from a single source of support. Skyline Health for vSAN provides self-service findings for configuration, patches, upgrades, and security for vSAN 6.7 and later. All customers with active support are entitled to Skyline Health for vSAN.
 
For customers that want a more holistic proactive support experience, VMware also offers Skyline Advisor to customers with Production and Premier support subscriptions. Skyline Advisor includes cross-product support for vSphere, vSAN, NSX, vROps, Horizon, VMware Validated Designs (VVD), and VMware Cloud Foundation, as well as for Dell EMC VxRail, a jointly engineered hyper-converged system. Skyline Advisor extends proactive support to older versions of VMware products, supporting vSphere versions back to 5.5. Customers also get Log Assist, which automates the collection of logs in the event of a support request. Premier support customers receive all of those key capabilities plus advanced findings and white-glove troubleshooting and remediation.

I hope this has been informative and thank you for reading!

Monday, November 25, 2019

New ESXi 6.0, 6.5 and 6.7 Builds Released

VMware has released new builds of ESXi 6.0, 6.5 and 6.7. These builds contain bug fixes and security fixes, including new Intel CPU microcode.


The ESXi 6.7 build also includes a fix for the “Sensor -1 type” hardware health alarms that can fill the vCenter SEAT database disk, among other fixes.

The ESXi 6.0 build also includes a fix for CBT corruption when reverting snapshots.

See the release notes for each version here:

Release notes ESXi 6.0.
Release notes ESXi 6.5.
Release notes ESXi 6.7.


I will share the updated information shortly. I hope this has been informative and thank you for reading! 

Wednesday, November 20, 2019

Free stuff from VMware, a new ebook, Service Mesh For Dummies

VMware has very kindly made available for download a free ebook called Service Mesh For Dummies.

The book is a PDF consisting of six chapters over 65 pages, authored by Niran Even-Chen, Oren Penso, and Susan Wu.

The six chapters are:
  •     The Rise of Microservices and Cloud-Native Architecture
  •     Service Mesh: A New Paradigm
  •     Service Mesh Use Cases
  •     Recognizing Complexity Challenges in Service Mesh
  •     Transforming the Multi-Cloud Network with NSX Service Mesh
  •     Ten (Or So) Resources to Help You Get Started with Service Mesh
The book is available for download from VMware.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Monday, October 21, 2019

vSphere 6.0 Reaches End Of General Support (EOGS) in March 2020

We would like to remind you that the End of General Support (EOGS) for vSphere 6.0 and the products listed below is March 12, 2020.

This includes the following releases:

  • vCenter Server 6.0
  • vCenter Update Manager 6.0
  • ESXi 6.0
  • Site Recovery Manager 6.0 and 6.1
  • vSAN 6.0, 6.1 and 6.2
  • vSphere Data Protection 6.0 and 6.1
  • vSphere Replication 6.0 and 6.1

Learn more about VMware’s Lifecycle Support dates at vmware.com/go/lifecycle.

To maintain your full level of Support and Subscription Services, VMware recommends upgrading to vSphere 6.5 or 6.7. Note that by upgrading to vSphere 6.5 or 6.7 you not only get all the latest capabilities of vSphere but also the latest vSAN release and capabilities (with a separate vSAN license). You can learn more about vSphere 6.7 through a series of blog posts available here.

If you are unable to upgrade from vSphere 6.0 before EOGS and are active on Support and Subscription Services, you may purchase Extended Support in one-year increments for up to two years beyond the EOGS date. Visit VMware Extended Support for more information.

Technical Guidance for vSphere 6.0 is available until March 12, 2022 primarily through the self-help portal. During the Technical Guidance phase, VMware will not offer new hardware support, server/client/guest OS updates, new security patches or bug fixes unless otherwise noted. For more information, visit VMware Lifecycle Support Phases.

Upgrading to vSphere 6.7

For more information on the benefits of upgrading and how to upgrade, visit the VMware vSphere Upgrade Center. For detailed technical guidance, visit vSphere Central. VMware has extended general support for vSphere 6.5 to a full five years from its date of release, ending on November 15, 2021. The same end-of-general-support date applies to vSphere 6.7.

If you require assistance upgrading to a newer version of vSphere, VMware’s vSphere Upgrade Service is available. This service delivers a comprehensive guide to upgrading your virtual infrastructure, including recommendations for planning and testing the upgrade, the actual upgrade itself, validation guidance, and rollback procedures. For more information, contact your VMware account team.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Wednesday, September 11, 2019

VMware Ports and Protocols

VMware recently released a new tool called VMware Ports and Protocols. The tool documents all network ports and protocols required for communication between source and target components of the listed VMware products:

  • vSphere
  • vSAN
  • NSX for vSphere
  • vRealize Network Insight
  • vRealize Operations Manager
  • vRealize Automation

https://ports.vmware.com/


I will share the updated information shortly. I hope this has been informative and thank you for reading!

Thursday, September 5, 2019

Introducing Project Pacific

VMware announced Project Pacific, what I believe to be the biggest evolution of vSphere in easily the last decade.  Simply put, we are re-architecting vSphere to deeply integrate and embed Kubernetes. The introduction of Project Pacific anchors the announcement of VMware Tanzu, a portfolio of products and services that transform how the enterprise builds software on Kubernetes.

Project Pacific evolves vSphere to be a native Kubernetes platform. What’s driving this shift?  Fundamentally it goes to what constitutes a modern application.  Modern apps are often complex combinations of many different technologies – traditional in-house apps, databases, modern apps in containers, and potentially even modern apps in functions.  Managing these apps across that heterogeneity is a complex task for both developers and operators.  Indeed, enabling dev and ops to work better together is a key problem many businesses face.

When we looked at this space and asked ourselves how we can help our customers here, it was clear that vSphere would play a central role.  But we realized that newer technologies, such as Kubernetes, were also critical to the solution.  So we thought – why not combine them and get the best of both worlds?

This is exactly what Project Pacific achieves.  Project Pacific fuses vSphere with Kubernetes to enable our customers to accelerate development and operation of modern apps on vSphere.  This will allow our customers to take advantage of all the investments they’ve made in vSphere and the vSphere ecosystem in terms of technology, tools, and training while supporting modern applications.


Specifically, Project Pacific will deliver the following capabilities:


    vSphere with Native Kubernetes

Project Pacific will embed Kubernetes into the control plane of vSphere, for unified access to compute, storage and networking resources, and also converge VMs and containers using the new Native Pods that are high performing, secure and easy to consume. Concretely this will mean that IT Ops can see and manage Kubernetes objects (e.g. pods) from the vSphere Client.  It will also mean all the various vSphere scripts, 3rd party tools, and more will work against Kubernetes.

    App-focused Management

Rather than managing individual VMs (and now containers!), Project Pacific will enable app-level control for applying policies, quota and role-based access to developers. With Project Pacific, IT will have unified visibility into vCenter Server for Kubernetes clusters, containers and existing VMs, as well as apply enterprise-grade vSphere capabilities (like High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion) at the app level.

Dev & IT Ops Collaboration

IT operators will use vSphere tools to deliver Kubernetes clusters to developers, who can then use Kubernetes APIs to access SDDC infrastructure. With Project Pacific, both developers and IT operators will gain a consistent view via Kubernetes constructs in vSphere.

VMware’s extensive ecosystem of partners will also benefit from Project Pacific which will enable their tools to work against container-based applications seamlessly and without any modifications. Ultimately, Project Pacific will help enterprises to accelerate the deployment of modern apps while reducing the complexity involved to manage them in the hybrid cloud. Project Pacific is currently in technology preview*.

This is a truly groundbreaking innovation for vSphere. I’m really excited about our broader vision for helping customers build software on Kubernetes.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Wednesday, August 14, 2019

HCX Overview



HCX is the Swiss Army knife of workload mobility. It abstracts and removes the boundaries of the underlying infrastructure, focusing on the workloads. An HCX vMotion, for example, requires no direct connectivity to ESXi hosts in either direction, unlike a vSphere vMotion. All HCX vMotion traffic is managed through the HCX vMotion Proxy at each location. The HCX vMotion Proxy resembles an ESXi host within the vCenter Server inventory. It is deployed at the data center level by default; no intervention is necessary. One thing to mention is that the HCX vMotion Proxy gets added to the vCenter Server host count by default. The HCX team is aware and will change this in the future, but it has no impact on your vSphere licensing.


Another boundary HCX removes is vSphere version compatibility: it supports versions going back to vSphere 5.0, up through the most current release, vSphere 6.7 Update 1. This provides flexibility in moving workloads across vSphere versions, on-premises locations, and vSphere SSO domains. For on-premises to on-premises migrations, an NSX Hybrid Connect license is required per HCX site pairing. We will cover site pairing in the configuration blog post. Migrating workloads from on-premises to VMC does not require a separate HCX license. When deploying a VMC SDDC, HCX is included as an add-on and enabled by default. From the VMC add-ons tab, all that is required is clicking Open Hybrid Cloud Extension and then Deploy HCX. The deployment of HCX is completely automated within the VMC SDDC.



In order to start migrating workloads, network connectivity between the source and destination needs to be in place. The good news is that it's all built into the product. HCX has WAN optimization, deduplication, and compression to increase efficiency while decreasing the time it takes to perform migrations. The minimum network bandwidth required to migrate workloads with HCX is 100 Mbps. HCX can leverage your internet connection as well as Direct Connect. The established network tunnel is secured using Suite B encryption. On-premises workloads being migrated with no downtime need to reside on a vSphere Distributed Switch (VDS); one third-party switch, the Nexus 1000v, is also supported. Cold and bulk migrations are currently the only two HCX migration types that support the use of a vSphere Standard Switch, but both imply downtime for the workload (the HCX team is working on adding Standard Switch support for the other migration types). To minimize migration downtime, HCX has a single-click option to extend on-premises networks (L2 stretch) to other on-premises sites or VMware Cloud on AWS. Once the workloads have been migrated, there is also an option to migrate the extended network, if you choose. Other built-in functionality includes:
  •     Native scheduler for migrations
  •     Per-VM EVC
  •     Upgrade VM Tools / Compatibility (hardware)
  •     Retain mac address
  •     Remove snapshots
  •     Force Unmount ISO images
  •     Bi-directional migration support
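As a rough illustration of the bandwidth requirement above, here is a back-of-envelope estimate of transfer time at HCX's 100 Mbps minimum. This is a sketch only: it ignores the WAN optimization, deduplication, and compression that would shrink the transferred data in practice, and the VM size is an arbitrary assumption for the example.

```python
# Back-of-envelope transfer-time estimate at a given link speed.
# Ignores WAN optimization, dedup, and compression (which reduce
# the bytes actually sent). VM size below is an assumption.

def transfer_hours(vm_size_gb: float, link_mbps: float) -> float:
    """Hours to move vm_size_gb (decimal GB) over link_mbps."""
    megabits = vm_size_gb * 8_000   # 1 GB = 8,000 megabits (decimal)
    return megabits / link_mbps / 3600

# A hypothetical 500 GB workload over the 100 Mbps HCX minimum.
print(f"{transfer_hours(500, 100):.1f} hours")  # 11.1 hours
```

At the minimum bandwidth even a mid-sized VM takes hours to move, which is why the built-in optimizations (and scheduling migrations off-hours) matter.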


HCX provides enhanced functionality on top of the built-in vSphere VM mobility options. Customers can now use HCX to migrate workloads seamlessly from on-premises to other paired on-premises sites (multisite) and VMware Cloud on AWS. Workload mobility can also help with hardware refreshes, as well as upgrading from unsupported vSphere 5.x versions. The next post will cover the different migration options available within HCX, followed by how to set up and configure the product.

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Monday, August 5, 2019

vSAN Space Efficiency Features

vSAN includes space efficiency features, all built directly into the product, that reduce the total cost of ownership (TCO) of storage:
  • Deduplication
  • Compression
  • Erasure Coding
Let's go into each one a little more in-depth to learn how we're saving money and storage while increasing performance at the same time.

Deduplication & Compression

Enabling deduplication and compression can reduce the amount of physical storage consumed by as much as 7x. For example, let's say you have 20 Windows Server 2012 R2 VMs, each with its own specific purpose (AD, Exchange, app, web, DB, etc.). Without deduplication and compression, we would be holding largely the same set of data 20 times over.

Environments with redundant data such as similar operating systems typically benefit the most. Likewise, compression offers more favorable results with data that compresses well like text, bitmap, and program files. Data that is already compressed such as certain graphics formats and video files, as well as files that are encrypted, will yield little or no reduction in storage consumption from compression. Deduplication and compression results will vary based on the types of data stored in an all flash vSAN environment.
Note: Deduplication and compression is a single cluster-wide setting that is disabled by default and can be enabled using a drop-down menu in the vSphere Web Client.
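The 20-VM example above can be sketched as simple arithmetic. The 7x ratio is the best case cited in this section, and the 40 GB per-VM figure is an assumption for illustration; real savings depend entirely on the data set.

```python
# Illustrative sketch: physical capacity consumed for a given logical
# footprint at an assumed dedup+compression ratio. The ratio and VM
# size are assumptions for the example, not measured values.

def effective_capacity_gb(logical_gb: float, dedup_ratio: float) -> float:
    """Physical capacity consumed after dedup/compression."""
    return logical_gb / dedup_ratio

# 20 Windows Server 2012 R2 VMs, assumed 40 GB of mostly duplicate
# guest OS data each -> 800 GB logical.
logical = 20 * 40
print(f"{effective_capacity_gb(logical, dedup_ratio=7.0):.0f} GB")  # 114 GB
```

In the best case, 800 GB of near-identical guest OS data collapses to a little over 100 GB of physical capacity.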

RAID 5/6 Erasure Coding

RAID-5/6 erasure coding is a space efficiency feature optimized for all flash configurations. Erasure coding provides the same levels of redundancy as mirroring, but with a reduced capacity requirement. In general, erasure coding is a method of taking data, breaking it into multiple pieces and spreading it across multiple devices, while adding parity data so it may be recreated in the event one of the pieces is corrupted or lost.


Unlike deduplication and compression, which offer variable levels of space efficiency, erasure coding guarantees capacity reduction over a mirroring data protection method at the same failure tolerance level. As an example, let’s consider a 100GB virtual disk. Surviving one disk or host failure requires 2 copies of data at 2x the capacity, i.e., 200GB. If RAID-5 erasure coding is used to protect the object, the 100GB virtual disk will consume 133GB of raw capacity—a 33% reduction in consumed capacity versus RAID-1 mirroring.
RAID-5 erasure coding requires a minimum of four hosts. Let’s look at a simple example of a 100GB virtual disk. When a policy containing a RAID-5 erasure coding rule is assigned to this object, three data components and one parity component are created. To survive the loss of a disk or host (FTT=1), these components are distributed across four hosts in the cluster.

RAID-6 erasure coding requires a minimum of six hosts. Using our previous example of a 100GB virtual disk, the RAID-6 erasure coding rule creates four data components and two parity components. This configuration can survive the loss of two disks or hosts simultaneously (FTT=2). While erasure coding provides significant capacity savings over mirroring, understand that erasure coding requires additional processing overhead. This is common with any storage platform. Erasure coding is only supported in all flash vSAN configurations. Therefore, the performance impact is negligible in most cases due to the inherent performance of flash devices.
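The capacity math in this section can be sketched in a few lines. This reproduces the 100 GB examples above: mirroring keeps FTT+1 full copies, while erasure coding's overhead is (data + parity) / data.

```python
# Sketch of the raw-capacity math for vSAN protection methods,
# using the 100 GB virtual disk examples from this section.

def raid1_raw_gb(usable_gb: float, ftt: int) -> float:
    """RAID-1 mirroring keeps FTT+1 full copies of the data."""
    return usable_gb * (ftt + 1)

def erasure_raw_gb(usable_gb: float, data: int, parity: int) -> float:
    """Erasure coding overhead is (data + parity) / data."""
    return usable_gb * (data + parity) / data

print(f"RAID-1 FTT=1: {raid1_raw_gb(100, 1):.0f} GB")       # 200 GB
print(f"RAID-5 (3+1): {erasure_raw_gb(100, 3, 1):.0f} GB")  # 133 GB
print(f"RAID-6 (4+2): {erasure_raw_gb(100, 4, 2):.0f} GB")  # 150 GB
```

Note that RAID-6 at FTT=2 (150 GB) consumes half the raw capacity of RAID-1 at the same failure tolerance (300 GB), which is where the real savings appear.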

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Understanding How vSAN Protects Data

vSAN protects data in many different ways. We will discuss these in brief.

Storage Policy-Based Management

Storage Policy-Based Management (SPBM) from VMware enables precise control of storage services. Like other storage solutions, vSAN provides services such as availability levels, capacity consumption, and stripe widths for performance. A storage policy contains one or more rules that define service levels.
 
Storage policies are created and managed using the vSphere Web Client. Policies can be assigned to virtual machines and individual objects such as a virtual disk. Storage policies are easily changed or reassigned if application requirements change. These modifications are performed with no downtime and without the need to migrate virtual machines from one datastore to another. SPBM makes it possible to assign and modify service levels with precision on a per-virtual machine basis.
 
Failures to Tolerate (FTT)

Defines how many failures an object can tolerate before it becomes unavailable.
Fault Domains: “Fault domain” is a term that comes up often in availability discussions. In IT, a fault domain usually refers to a group of servers, storage, and/or networking components that would be impacted collectively by an outage. A common example of this is a server rack. If a top-of-rack switch or the power distribution unit for a server rack were to fail, it would take all the servers in that rack offline even though the server hardware is functioning properly. That server rack is considered a fault domain.
 
Each host in a vSAN cluster is an implicit fault domain. vSAN automatically distributes components of a vSAN object across fault domains in a cluster based on the Number of Failures to Tolerate rule in the assigned storage policy. The following diagram shows a simple example of component distribution across hosts (fault domains). The two larger components are mirrored copies of the object and the smaller component represents the witness component.
 


To mitigate this risk, place the servers in a vSAN cluster across server racks and configure a fault domain for each rack in the vSAN UI. This instructs vSAN to distribute components across server racks to eliminate the risk of a rack failure taking multiple objects offline. This feature is commonly referred to as “Rack Awareness”. The diagram below shows component placement when three servers in each rack are configured as separate vSAN fault domains.

 

Disk Group

A disk group is a unit of physical storage capacity on a host and a group of physical devices that provide performance and capacity to the vSAN cluster. On each ESXi host that contributes its local devices to a vSAN cluster, devices are organized into disk groups. Each disk group must have one flash cache device and one or multiple capacity devices. The devices used for caching cannot be shared across disk groups, and cannot be used for other purposes. A single caching device must be dedicated to a single disk group. In hybrid clusters, flash devices are used for the cache layer and magnetic disks are used for the storage capacity layer.
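The disk-group rules above can be sketched as a small validation check. This is a toy model, not a VMware API; the device names are made up for illustration.

```python
# Toy validation of the disk-group rules described above: exactly one
# cache device per group, at least one capacity device, and no device
# shared between groups. Device names are hypothetical.

def valid_disk_group(cache_devices: list, capacity_devices: list) -> bool:
    return len(cache_devices) == 1 and len(capacity_devices) >= 1

def no_shared_devices(groups: list) -> bool:
    seen = set()
    for cache, capacity in groups:
        for dev in cache + capacity:
            if dev in seen:
                return False  # a device may belong to only one disk group
            seen.add(dev)
    return True

groups = [(["nvme0"], ["ssd0", "ssd1"]), (["nvme1"], ["ssd2"])]
print(all(valid_disk_group(c, d) for c, d in groups))  # True
print(no_shared_devices(groups))                       # True
```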

Consumed Capacity

Consumed capacity is the amount of physical capacity consumed by one or more virtual machines at any point. Many factors determine consumed capacity, including the consumed size of your VMDKs, protection replicas, and so on. When calculating for cache sizing, do not consider the capacity used for protection replicas.


Object-Based Storage

vSAN stores and manages data in the form of flexible data containers called objects. An object is a logical volume that has its data and metadata distributed across the cluster. For example, every VMDK is an object, as is every snapshot. When you provision a virtual machine on a vSAN datastore, vSAN creates a set of objects comprised of multiple components for each virtual disk. It also creates the VM home namespace, which is a container object that stores all metadata files of your virtual machine. Based on the assigned virtual machine storage policy, vSAN provisions and manages each object individually, which might also involve creating a RAID configuration for every object.

When vSAN creates an object for a virtual disk and determines how to distribute the object in the cluster, it considers the following factors:
  • vSAN verifies that the virtual disk requirements are applied according to the specified virtual machine storage policy settings.
  • vSAN verifies that the correct cluster resources are used at the time of provisioning. For example, based on the protection policy, vSAN determines how many replicas to create. The performance policy determines the amount of flash read cache allocated for each replica, how many stripes to create for each replica, and where to place them in the cluster.
  • vSAN continually monitors and reports the policy compliance status of the virtual disk. If you find any noncompliant policy status, you must troubleshoot and resolve the underlying problem.

vSAN Datastore

After you enable vSAN on a cluster, a single vSAN datastore is created. It appears as another type of datastore in the list of datastores that might be available, including Virtual Volume, VMFS, and NFS. A single vSAN datastore can provide different service levels for each virtual machine or each virtual disk. In vCenter Server, storage characteristics of the vSAN datastore appear as a set of capabilities. You can reference these capabilities when defining a storage policy for virtual machines. When you later deploy virtual machines, vSAN uses this policy to place virtual machines in the optimal manner based on the requirements of each virtual machine.

Objects and Components

Each object is composed of a set of components, determined by capabilities that are in use in the VM Storage Policy. For example, with Primary level of failures to tolerate set to 1, vSAN ensures that the protection components, such as replicas and witnesses, are placed on separate hosts in the vSAN cluster, where each replica is an object component. In addition, in the same policy, if the Number of disk stripes per object is configured to two or more, vSAN also stripes the object across multiple capacity devices, and each stripe is considered a component of the specified object. When needed, vSAN might also break large objects into multiple components.

Virtual Machine Compliance Status

A virtual machine can be Compliant or Noncompliant. A virtual machine is considered noncompliant when one or more of its objects fail to meet the requirements of its assigned storage policy. For example, the status might become noncompliant when one of the mirror copies is inaccessible. If your virtual machines are in compliance with the requirements defined in the storage policy, the status of your virtual machines is compliant. From the Physical Disk Placement tab on the Virtual Disks page, you can verify the virtual machine object compliance status.


Component State: Degraded and Absent States

vSAN acknowledges the following failure states for components:
  • Degraded. A component is Degraded when vSAN detects a permanent component failure and determines that the failed component cannot recover to its original working state. As a result, vSAN starts to rebuild the degraded components immediately. This state might occur when a component is on a failed device.
  • Absent. A component is Absent when vSAN detects a temporary component failure where components, including all their data, might recover and return vSAN to its original state. This state might occur when you are restarting hosts or if you unplug a device from a vSAN host. vSAN starts to rebuild the components in absent status after waiting for 60 minutes.
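The component-state handling above can be modeled as a small decision function: Degraded components are rebuilt immediately, while Absent components are given 60 minutes (the default repair delay) to return before a rebuild starts. This is a toy model of the behavior described, not vSAN code.

```python
# Toy model of vSAN component-state handling: rebuild Degraded
# components immediately; wait 60 minutes before rebuilding Absent
# components, since they may recover on their own.

ABSENT_REBUILD_DELAY_MIN = 60

def rebuild_action(state: str, minutes_elapsed: int = 0) -> str:
    if state == "degraded":
        return "rebuild immediately"
    if state == "absent":
        if minutes_elapsed >= ABSENT_REBUILD_DELAY_MIN:
            return "rebuild"
        return "wait for component to return"
    return "no action"

print(rebuild_action("degraded"))     # rebuild immediately
print(rebuild_action("absent", 30))   # wait for component to return
print(rebuild_action("absent", 61))   # rebuild
```

The delay exists because an Absent component (a rebooting host, an unplugged device) usually comes back, and rebuilding gigabytes of components for a transient event would waste cluster resources.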

Object State

Depending on the type and number of failures in the cluster, an object might be in one of the following states:
  • Healthy. When at least one full RAID 1 mirror is available, or the minimum required number of data segments are available, the object is considered healthy.
  • Unhealthy. An object is considered unhealthy when no full mirror is available, or the minimum required number of data segments are unavailable for RAID 5 or RAID 6 objects. If fewer than 50 percent of an object's votes are available, the object is unhealthy. Multiple failures in the cluster can cause objects to become unhealthy. When the operational status of an object is considered unhealthy, it impacts the availability of the associated VM.

Witness

A witness is a component that contains only metadata and does not contain any actual application data. It serves as a tiebreaker when a decision must be made regarding the availability of the surviving datastore components after a potential failure. A witness consumes approximately 2 MB of space for metadata on the vSAN datastore when using on-disk format 1.0, and 4 MB for on-disk format version 2.0 and later. vSAN 6.0 and later maintains a quorum by using an asymmetrical voting system where each component might have more than one vote to decide the availability of objects. Greater than 50 percent of the votes that make up a VM's storage object must be accessible at all times for the object to be considered available. When 50 percent or fewer votes are accessible to all hosts, the object is no longer accessible on the vSAN datastore. Inaccessible objects can impact the availability of the associated VM.
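The voting rule above reduces to a one-line check: an object needs strictly more than 50 percent of its votes accessible to remain available, which is why exactly half is not enough and why a witness tiebreaker matters for even splits.

```python
# Toy model of the vSAN quorum rule described above: an object is
# available only when strictly more than 50% of its votes are accessible.

def object_available(votes_accessible: int, votes_total: int) -> bool:
    return votes_accessible * 2 > votes_total

print(object_available(2, 3))  # True  (2 of 3 votes is > 50%)
print(object_available(1, 2))  # False (exactly 50% is not enough)
```

With two mirrored replicas and a witness (3 votes), losing one host still leaves 2 of 3 votes and the object stays available; without the witness, a split would leave exactly 50% on each side and the object would go inaccessible.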

I hope this has been informative and thank you for reading!

Wednesday, July 31, 2019

Performance Study on PMEM in vSphere


Persistent Memory (PMEM) is a modern technology that provides a new storage tier, beneficial for enterprise applications that need reduced latency and flexible data access. Examples include (in-memory) database platforms, which can use it for log acceleration by caching, data writes, and reduced recovery times. High Performance Computing (HPC) workloads can also greatly benefit from PMEM, for example by using it for in-memory checkpointing.


PMEM converges memory and storage. We now have the possibility to store data at unprecedented speed, as PMEM's average latency is less than 0.5 microseconds. PMEM is persistent, like storage, meaning it holds its content through power cycles. The beauty of PMEM is that it has characteristics like typical DDR memory: it is byte-addressable, allowing random access to data. Applications can access PMEM directly with load and store instructions, without the need to go through the storage stack.
 
vSphere exposes PMEM to virtual machines in two modes:

  • vPMEMDisk: vSphere presents PMEM as a regular disk attached to the VM. No guest OS or application change is needed to leverage this mode; for example, legacy applications on legacy OSes can utilize it. Note that the vPMEMDisk configuration is available only in vSphere and not in a bare-metal OS.
  • vPMEM: vSphere presents PMEM as an NVDIMM device to the VM. Most of the latest operating systems (for example, Windows Server 2016 and CentOS 7.4) support NVDIMM devices and can expose them to applications as block or byte-addressable devices. Applications can use vPMEM as a regular storage device by going through the thin layer of a direct-access (DAX) file system, or by mapping a region from the device and accessing it directly in a byte-addressable manner. This mode can be used by legacy or newer applications running on newer OSes.

vMotion performance: vSphere supports vMotion of both vPMEMDisk and vPMEM. A vPMEMDisk vMotion is conducted as XvMotion, where both local storage and memory contents are transferred to another host. A vPMEM vMotion is conducted as a compute vMotion, where vPMEM is transferred as memory along with vRAM. Note that PMEM is host-local storage, and seamless live migration like vMotion is only possible in a vSphere environment (unlike a bare-metal OS). The study used two identical hosts connected over a 40 GbE network, with three vmknics created for vMotion over the 40 GbE physical NIC.
 

I will share the updated information shortly. I hope this has been informative and thank you for reading!

Wednesday, July 24, 2019

Hybrid Cloud Extension (HCX)

 

Hybrid Cloud Extension (HCX) is an all-in-one solution for workload mobility. Customers can freely move workloads between multiple on-premises environments as well as VMware Cloud on AWS (VMC). The data center evacuation used a new HCX migration type called Cloud Motion with vSphere Replication, with the workloads migrated from an on-premises data center to VMC. The big deal here is that the impact to users during the migration was NONE, ZIP, ZERO downtime. There was also no replatforming of the workloads or applications.


What is Cross-Cloud Mobility?

Cross-Cloud Mobility is the capability to pair any cloud infrastructure together and expect each cloud to act as an extension to the other(s). The HCX platform becomes the basis on which Cross-Cloud Mobility is provided by leveraging infrastructure services (Any-Any vSphere Zero downtime migration, seamless Disaster Recovery, enabling Hybrid Architectures, etc.) to provide tangible business value.



There is a lot to consider when using Hybrid Cloud Extension (HCX) to migrate workloads from one location to another. The network is at the top of the list: ensure adequate bandwidth and routing are in place, not only from data center to data center or data center to cloud, but also across regions and remote sites. Validating vSphere version compatibility between the source and destination is also important. Other workload migration considerations include:
    •     Moving across different vSphere SSO domains independent of enhanced linked mode
    •     Mapping workload dependencies
    •     Older hardware preventing an upgrade to vSphere 6.x
    •     Bi-directional migrations independent of vSphere SSO domains, vSphere versions, hardware, or networks
    •     Validation of resources (compute and storage) on the destination side
    •     SLA and downtime involved
    •     Mobility options correlating to SLAs (downtime, low downtime, no downtime)
    •     No replatforming of the workloads or applications including no changes to IP or MAC Addresses, VM UUID, certs
I will share the updated information shortly. I hope this has been informative and thank you for reading!

Azure VMware Solution by CloudSimple - AVS
