Wednesday, July 31, 2019

Performance Study on PMEM in vSphere


Persistent Memory (PMEM) is a modern technology that provides us with a new storage tier, beneficial for enterprise applications that need reduced latency and flexible data access. Examples are (in-memory) database platforms, which can use it for log acceleration, write caching, and reduced recovery times. High Performance Computing (HPC) workloads can also greatly benefit from PMEM, for example by using it for in-memory check-pointing.


PMEM converges memory and storage. We now have the possibility to store data at unprecedented speed, as PMEM's average latency is less than 0.5 microseconds. PMEM is persistent, like storage: it holds its content through power cycles. The beauty of PMEM is that it has characteristics like typical DDR memory; it is byte-addressable, allowing for random access to data. Applications can access PMEM directly with load and store instructions, without the need to go through the storage stack.
 
vSphere exposes PMEM to virtual machines in two modes:
1. vPMEMDisk: vSphere presents PMEM as a regular disk attached to the VM. No guest OS or application change is needed to leverage this mode. For example, legacy applications on legacy OSes can utilize this mode. Note that the vPMEMDisk configuration is available only in vSphere and not in a bare-metal OS.
2. vPMEM: vSphere presents PMEM as an NVDIMM device to the VM. Most of the latest operating systems (for example, Windows Server 2016 and CentOS 7.4) support NVDIMM devices and can expose them to applications as block or byte-addressable devices. Applications can use vPMEM as a regular storage device by going through the thin layer of the direct-access (DAX) file system, or by mapping a region from the device and accessing it directly in a byte-addressable manner. This mode can be used by legacy or newer applications running on newer OSes.
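
To make the byte-addressable access path concrete, here is a minimal Python sketch, intended to run inside a guest OS with a vPMEM device. It assumes (purely for illustration) that the device has been formatted and mounted with the DAX option at /mnt/pmem; real applications would typically use a PMEM-aware library such as PMDK, which can flush CPU caches directly instead of calling msync.

import mmap
import os

# Assumption: a vPMEM device is formatted and DAX-mounted inside the guest,
# e.g. "mount -o dax /dev/pmem0 /mnt/pmem". Path and size are placeholders.
PMEM_FILE = "/mnt/pmem/example.dat"
SIZE = 4096  # map one page for the example

# Create or extend the backing file to the desired size.
with open(PMEM_FILE, "wb") as f:
    f.truncate(SIZE)

# Map the file and access it in a byte-addressable manner; with DAX, the
# loads and stores hit the persistent memory region, not the page cache.
fd = os.open(PMEM_FILE, os.O_RDWR)
try:
    with mmap.mmap(fd, SIZE) as m:
        m[0:5] = b"hello"   # store bytes directly
        m.flush()           # msync; PMDK's pmem_persist() avoids this syscall
        print(m[0:5])       # load bytes directly
finally:
    os.close(fd)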

vMotion Performance

vSphere supports vMotion of both vPMEMDisk and vPMEM. vPMEMDisk vMotion is conducted as XvMotion, where both local storage and memory contents are transferred to another host. vPMEM vMotion is conducted as compute vMotion, where vPMEM is transferred as memory along with vRAM. Note that PMEM is host-local storage, and seamless live migration such as vMotion is only possible in a vSphere environment (unlike a bare-metal OS). We used two identical hosts connected over a 40 GbE network. We created three vmknics for vMotion over the 40 GbE physical NIC.
 

I will share updated information shortly. I hope this has been informative and thank you for reading!

Wednesday, July 24, 2019

Hybrid Cloud Extension (HCX)

 

Hybrid Cloud Extension (HCX) is an all-in-one solution for workload mobility. Customers can freely move workloads between multiple on-premises environments as well as VMware Cloud on AWS (VMC). The data center evacuation used a new HCX migration type called Cloud Motion with vSphere Replication. The workloads were migrated from an on-premises data center to VMware Cloud on AWS (VMC). The big deal here is that the impact to users during the migration was NONE, ZIP, ZERO downtime. There was also no replatforming of the workloads or applications.


What is Cross-Cloud Mobility?

Cross-Cloud Mobility is the capability to pair any cloud infrastructure together and expect each cloud to act as an extension to the other(s). The HCX platform becomes the basis on which Cross-Cloud Mobility is provided by leveraging infrastructure services (Any-Any vSphere Zero downtime migration, seamless Disaster Recovery, enabling Hybrid Architectures, etc.) to provide tangible business value.



There are several considerations when using Hybrid Cloud Extension (HCX) to migrate workloads from one location to another. The network is at the top of the list: ensure adequate bandwidth and routing are in place, not only from data center to data center or data center to cloud, but also across regions and remote sites. Validating vSphere version compatibility between the source and destination is also important. Other workload migration considerations could include:
    •     Moving across different vSphere SSO domains independent of enhanced linked mode
    •     Mapping workload dependencies
    •     Older hardware preventing an upgrade to vSphere 6.x
    •     Bi-directional migrations independent of vSphere SSO domains, vSphere versions, hardware, or networks
    •     Validation of resources (compute and storage) on the destination side (see the sketch after this list)
    •     SLA and downtime involved
    •     Mobility options correlating to SLAs (downtime, low downtime, no downtime)
    •     No replatforming of the workloads or applications including no changes to IP or MAC Addresses, VM UUID, certs
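
As a rough illustration of the resource-validation item above, the pyVmomi sketch below reads the effective CPU and memory of a destination cluster and the free space of its datastores. The vCenter host name, credentials, and cluster name are placeholders, and this is only a quick capacity check with the vSphere API, not part of HCX itself.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for the destination vCenter.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="dst-vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the destination cluster by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Destination-Cluster")

# Effective CPU (MHz) and memory (MB) available to run workloads.
print("Effective CPU: %d MHz" % cluster.summary.effectiveCpu)
print("Effective memory: %d MB" % cluster.summary.effectiveMemory)

# Free space on each datastore reachable from the cluster.
for ds in cluster.datastore:
    free_gb = ds.summary.freeSpace / (1024 ** 3)
    print("Datastore %s: %.1f GB free" % (ds.summary.name, free_gb))

Disconnect(si)
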
I will share updated information shortly. I hope this has been informative and thank you for reading!

Sunday, July 21, 2019

Enhanced vMotion Compatibility (EVC) Explained

vSphere Enhanced vMotion Compatibility (EVC) ensures that workloads can be live migrated, using vMotion, between ESXi hosts in a cluster that are running different CPU generations. The general recommendation is to have EVC enabled, as it will help you in the future when you scale your clusters with new hosts that might contain newer CPU models. Enabling EVC in a brownfield scenario can be challenging. That's why we stress having it enabled from the get-go. This blog post will go into detail about EVC and the per-VM EVC feature.

How does EVC work?

The way EVC allows for uniform vMotion compatibility is by enforcing a CPUID (instruction) baseline for the virtual machines running on the ESXi hosts. That means EVC will allow and expose CPU instruction sets to the virtual machines depending on the chosen and supported compatibility level. If you add a newer host to the cluster, containing newer CPU packages, EVC would potentially hide the new CPU instructions from the virtual machines. By doing so, EVC ensures that all virtual machines in the cluster run on the same CPU instructions, allowing virtual machines to be live migrated (vMotion) between the ESXi hosts.

EVC determines what instructions to mask from the guest OS by using the CPUID. Basically, you can look at the CPUID as being an API for the CPU. It allows EVC to learn what instruction sets the CPU is capable of executing, and what instructions need to be masked depending on the configured EVC baseline. When EVC is enabled on the cluster, all ESXi hosts in the cluster must adhere to that setting.
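
As a purely conceptual sketch (not VMware code), the bit-mask arithmetic below shows the masking idea: the features a virtual machine sees are the host's CPUID feature bits ANDed with the baseline mask. The feature names and bit positions are made up for illustration and do not reflect the real CPUID layout or EVC tables.

# Illustrative feature bits (assumed positions, not the real CPUID layout).
FEATURES = {"sse4_2": 1 << 0, "aes": 1 << 1, "avx": 1 << 2,
            "avx2": 1 << 3, "avx512f": 1 << 4}

def exposed_features(host_bits, baseline_mask):
    """Return the feature names a VM would see under a given baseline."""
    visible = host_bits & baseline_mask
    return [name for name, bit in FEATURES.items() if visible & bit]

# A newer host supports everything; an older baseline hides AVX-512.
host = sum(FEATURES.values())
baseline = host & ~FEATURES["avx512f"]

print(exposed_features(host, baseline))  # avx512f is masked from the VM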

This VMware KB article goes into more detail about all current EVC baselines and what CPU instructions they expose to the virtual machines.


Per-VM EVC

EVC is a cluster-level setting that supports virtual machine mobility within a cluster. When a virtual machine is moved to another cluster, either on-premises or in a hybrid cloud environment, it loses its EVC configuration depending on the destination environment. Next to that, it is challenging to change a cluster EVC baseline in an environment with live workloads.

By implementing per-VM EVC, the EVC mode becomes an attribute of the virtual machine rather than of the specific processor generation it happens to be booted on in the cluster. This allows for more flexibility with EVC enablement and baselines. We introduced the per-VM EVC feature in vSphere 6.7. Virtual machine hardware version 14 or later is required to enable per-VM EVC. When a virtual machine is powered off, you can change its per-VM EVC baseline.
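
As a hedged pyVmomi sketch of those prerequisites, the helper below checks that a virtual machine is powered off and at hardware version 14 (vmx-14) or later before you attempt to change its per-VM EVC baseline. The function name is my own, and the VM object is assumed to have been looked up elsewhere (for example with a container view, as in the earlier cluster example).

from pyVmomi import vim

def can_change_per_vm_evc(vm):
    """Check the per-VM EVC prerequisites for a given vim.VirtualMachine."""
    powered_off = vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff
    # config.version looks like "vmx-14"; take the numeric part.
    hw_version = int(vm.config.version.split("-")[1])
    return powered_off and hw_version >= 14

# Usage sketch: if can_change_per_vm_evc(vm) is True, the baseline can be
# changed in the vSphere Client, or programmatically via the vSphere 6.7+
# ApplyEvcModeVM_Task method on the virtual machine object.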
 



The per-VM EVC configuration is saved in the vmx file. The vmx file is used as a value dictionary that holds the configuration of the virtual machine. If the virtual machine is migrated to another cluster, the per-VM EVC configuration moves along with the virtual machine itself. The vmx file will contain featMask.vm.cpuid lines like the following when per-VM EVC is enabled:

featMask.vm.cpuid.Intel = "Val:1"
featMask.vm.cpuid.FAMILY = "Val:6"
featMask.vm.cpuid.MODEL = "Val:0x4f"
featMask.vm.cpuid.STEPPING = "Val:0"
featMask.vm.cpuid.NUMLEVELS = "Val:0xd"
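
To make the structure of those entries concrete, here is a small Python sketch that collects the featMask.vm.cpuid.* lines from a vmx file into a dictionary. The file path in the usage comment is a placeholder.

import re

def read_per_vm_evc_masks(vmx_path):
    """Collect featMask.vm.cpuid.* entries from a .vmx file."""
    masks = {}
    pattern = re.compile(r'^featMask\.vm\.cpuid\.(\S+)\s*=\s*"Val:([^"]+)"')
    with open(vmx_path) as f:
        for line in f:
            match = pattern.match(line.strip())
            if match:
                masks[match.group(1)] = match.group(2)
    return masks

# Usage sketch (placeholder path):
#   read_per_vm_evc_masks("/vmfs/volumes/datastore1/vm1/vm1.vmx")
#   -> {"Intel": "1", "FAMILY": "6", "MODEL": "0x4f", "STEPPING": "0", "NUMLEVELS": "0xd"}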

I will share updated information shortly. I hope this has been informative and thank you for reading!

Introducing the Network Insight Virtual Machine Search Poster


VMware vRealize Network Insight can be a revealing experience. It has every single bit of data you ever wanted to see about anything in your infrastructure and it’s available at your fingertips. Because of the vast amount of available data, the search engine is extremely versatile and there are many options available in its syntax.

To make traversing the search engine a bit easier, we’re launching a Search Poster series that will be showcasing the most useful search queries for a specific area. Starting with virtual machines, you can find the first poster in this series on our vRealize microsite here. This will certainly kickstart your search engine learning!

https://vrealize.vmware.com/t/vmware-network-management/virtual-machine-search-poster/






I will share updated information shortly. I hope this has been informative and thank you for reading!

 
