Thursday, June 25, 2015

Datastore unmounting | Dead path issue (APD)

From vSphere 5.1 onwards, the process of removing a datastore has changed: before removing any datastore from an ESXi host or cluster, right-click the datastore and unmount it.

Unmounting is not the only step in removing a LUN from ESXi hosts; there are a few additional pre-checks and post tasks. In particular, detaching the device from the host is a must before we ask the storage administrator to unpresent the LUN from the backend storage array.

This process must be followed properly; otherwise it may cause serious issues such as dead paths or an APD (All Paths Down) condition on the ESXi host. Before unmounting, verify the following:
  1. If the LUN is being used as a VMFS datastore, all objects (such as VMs, snapshots, templates, and HA configuration) stored on the VMFS datastore have been moved or removed.
  2. Ensure the datastore is not used for vSphere HA heartbeating.
  3. Ensure the datastore is not part of an SDRS cluster.
  4. Storage I/O Control should be disabled for the datastore.
  5. Ensure no ISO mounts, scripts, or utilities are accessing the datastore.
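A few of these pre-checks can be sketched from the ESXi Shell. This is an illustrative sketch only, assuming SSH access to the host and an example datastore named DS01 (your datastore name will differ); the HA heartbeat and SDRS checks are easier to confirm in the vSphere Client.

```shell
# Example pre-checks from the ESXi Shell ("DS01" is an assumed datastore name)
vim-cmd vmsvc/getallvms | grep DS01    # any VMs still registered on the datastore?
ls /vmfs/volumes/DS01                  # any leftover files (ISOs, templates, scripts)?
esxcli storage filesystem list         # note the UUID and backing naa.xxx device
```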


Process to remove a datastore or LUN from ESXi 5.x hosts

  • Select the ESXi host -> Configuration -> Storage -> Datastores

(note down the NAA ID for the datastore, which starts with naa.xxx)



  • Right-click the datastore that you want to unmount and select Unmount.



  • Confirm that all the datastore unmount pre-checks are marked with a green check mark and click OK.
  • Select the ESXi host -> Configuration -> Storage -> Devices. Match the device with the NAA ID (naa.xx) noted earlier, right-click the device, and select Detach. Verify all the green checks and click OK to detach the LUN.

           Repeat the same steps on every ESXi host from which you want to unpresent this datastore.


  • Inform your storage team to physically unpresent the LUN from the ESXi hosts using the appropriate array tools.
Then rescan the ESXi hosts and verify that the detached LUNs have disappeared from the hosts.
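The same workflow can also be driven from the command line. A hedged sketch using esxcli, where the datastore label DS01 and the device ID naa.xxx are placeholders; VMware's KB article on unmounting a LUN from ESXi 5.x hosts documents the full procedure.

```shell
# Equivalent esxcli workflow (run on each host; DS01 / naa.xxx are placeholders)
esxcli storage filesystem list                         # find the datastore and its backing device
esxcli storage filesystem unmount -l DS01              # unmount by label (or -u <uuid>)
esxcli storage core device set --state=off -d naa.xxx  # detach the device from the host
esxcli storage core device detached list               # verify the device shows as detached
esxcli storage core adapter rescan --all               # rescan after storage unpresents the LUN
```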


Monday, June 8, 2015

vSphere 6: vMotion enhancements

With vSphere 6.0 you can migrate virtual machines across virtual switches. The new vMotion workflow allows you to choose the destination network, which can be on a vSwitch or a vDS. This feature eliminates the need to span virtual switches across two locations.
 
VMware vSphere vMotion capabilities have been enhanced in vSphere 6, enabling users to perform live migration of virtual machines across virtual switches, across vCenter Server systems, and across long distances of up to 100 ms RTT.
 
vSphere administrators now can migrate across vCenter Server systems, enabling migration from a Windows version of vCenter Server to vCenter Server Appliance or vice versa, depending on specific requirements. Previously, this was a difficult task and caused a disruption to virtual machine management. This can now be accomplished seamlessly without losing historical data about the virtual machine.
 
Cross vSwitch vMotion
 
Cross vSwitch vMotion allows you to seamlessly migrate a virtual machine across different virtual switches while performing a vMotion. This means that you are no longer restricted by the networks you created on the vSwitches when you vMotion a virtual machine.
 
vMotion will work across a mix of switches (standard and distributed). Previously, you could only vMotion from vSS to vSS or within a single vDS. This limitation has been removed.
 
The following Cross vSwitch vMotion migrations are possible:
  • vSS to vSS migration.
  • vSS to vDS migration.
  • vDS to vDS migration (transferring VDS port metadata)
  • vDS to vSS migration is not allowed.
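The migration matrix above can be encoded in a small shell sketch (illustrative only, not a VMware tool):

```shell
#!/bin/sh
# Encode the Cross vSwitch vMotion compatibility matrix described above:
# vDS -> vSS is the only combination that is not allowed.
vswitch_vmotion_allowed() {
  case "$1-$2" in                       # $1 = source switch type, $2 = destination
    vss-vss|vss-vds|vds-vds) echo "allowed" ;;
    vds-vss)                 echo "not allowed" ;;
    *)                       echo "unknown" ;;
  esac
}

vswitch_vmotion_allowed vss vds   # prints "allowed"
vswitch_vmotion_allowed vds vss   # prints "not allowed"
```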
Cross vCenter vMotion


But Cross vSwitch vMotion is not the only vMotion enhancement. vSphere 6 also introduces support for Cross vCenter vMotion. vMotion can now perform the following changes simultaneously:
 
  • Change compute (vMotion) – Performs the migration of virtual machines across compute hosts.
  • Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores.
  • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches.
  • Change vCenter (Cross vCenter vMotion) – Moves the virtual machine to a different vCenter Server instance, changing which vCenter manages the VM.
 
All of these types of vMotion are seamless to the guest OS.
 
As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity, since the IP address of the VM will not be changed. This functionality builds upon Enhanced vMotion, and shared storage is not required. Targeted support covers local (single-site), metro (multiple well-connected sites), and cross-continental distances.

With vSphere 6 vMotion you can now:

  • Migrate from a VCSA to a Windows version of vCenter and vice versa.
  • Replace or retire a vCenter Server without disruption.
  • Pool resources across vCenter Servers where additional vCenters were deployed due to vCenter scalability limits.
  • Migrate VMs across local, metro, and continental distances.
  • Span public/private cloud environments with several vCenters.

There are several requirements for Cross vCenter vMotion to work:

  • Only vCenter 6.0 and greater is supported. All instances of vCenter prior to version 6.0 will need to be upgraded before this feature will work. For example, migration between a vCenter 5.5 instance and a vCenter 6.0 instance will not work.
  • Both the source and the destination vCenter servers will need to be joined to the same SSO domain if you want to perform the vMotion using the vSphere Web Client. If the vCenter servers are joined to different SSO domains, it’s still possible to perform a Cross vCenter vMotion, but you must use the API.
  • You will need at least 250 Mbps of available network bandwidth per vMotion operation.
  • Lastly, although not technically required for the vMotion to complete successfully, L2 connectivity is required on the source and destination portgroups. When a Cross vCenter vMotion is performed, a Cross vSwitch vMotion is done as well. The portgroups for the VM will need to share an L2 network because the IP address within the guest OS will not be updated.
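Because the 250 Mbps figure is a per-operation requirement, it also bounds how many migrations an uplink can carry at once. A quick arithmetic sketch (the 10 Gbps vMotion uplink is an assumed example):

```shell
# How many concurrent vMotion operations fit on a given uplink, using the
# 250 Mbps-per-operation requirement (the uplink speed is an assumed example).
uplink_mbps=10000          # e.g. a dedicated 10 GbE vMotion uplink
per_vmotion_mbps=250       # minimum available bandwidth per vMotion operation
echo $(( uplink_mbps / per_vmotion_mbps ))   # prints 40
```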

vSphere 6.0 New Features – Content Library

One of the new features of vSphere 6 is the Content Library. The Content Library provides simple and effective management of VM templates, vApps, ISO images, and scripts (collectively called "content") for vSphere administrators.
 
Sometimes ISOs and other files (needed for VM creation, etc.) are spread across datastores because multiple administrators manage the vSphere infrastructure. This can lead to duplication of content. To mitigate this issue, the concept of the Content Library was introduced in vSphere 6.0, which provides a centralized place for storing your content.
 

Advantages of the Content Library
 
The Content Library can be synchronized across sites and vCenter Servers. Sharing consistent templates and files across multiple vCenter Servers in the same or different locations brings consistency, compliance, efficiency, and automation to deploying workloads at scale.
 
Following are some of the features of the content library:
 
  • Store and manage content – One central location to manage all content such as VM templates, vApps, ISOs, scripts, etc. This release has a maximum of 10 libraries and 250 items per library, and it is a built-in feature of vCenter Server, not a plug-in that you have to install separately.
  • Share content – Once the content is published on one vCenter Server, you can subscribe to it from other vCenter Servers. This is similar to the catalog option in vCloud Director.
  • Templates – The VMs will be stored as OVF packages rather than templates. This will affect the template creation process. If you want to make changes to a certain OVF template in the Content Library, you have to create a VM from it first, make the changes, and then export it back to an OVF template and into the library.
  • Network – The Content Library communicates over port 443, and there is an option to limit the sync bandwidth.
  • Storage – The Content Library can be stored on datastores, NFS, CIFS, local disks, etc., as long as the path to the library is accessible locally from the vCenter Server and the vCenter Server has read/write permission.
 
Creating a Content Library

 
 
Selecting Storage for the Content Library
 

Deploying a Virtual Machine to a Content Library




You can clone virtual machines or virtual machine templates to templates in the content library and use them later to provision virtual machines on a virtual data center, a data center, a cluster, or a host.
 
Publishing a Content Library for External Use
 


You can publish a content library for external use and add password protection by editing the content library settings:

Users access the library through the subscription URL that is system generated.
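For example, a subscriber (or another vCenter Server) can fetch the published library index over HTTPS. This is a hedged sketch: the vCenter host name, library ID, and password are hypothetical placeholders, and it assumes the system-generated subscription URL ends in lib.json, as it does in vSphere 6.0.

```shell
# Fetch the published library index (host, library ID, and password are
# hypothetical); -u supplies the optional password protection configured above.
curl -k -u 'vcsp:MyLibraryPassword' \
  'https://vcenter.example.com:443/cls/vcsp/lib/<library-id>/lib.json'
```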
