Thursday, March 31, 2011

VMware - Raw Device Mapping (RDM)

Before we discuss what Raw Device Mapping is, we need to know why it is required; once we know that, the first question becomes simple to answer.

RDM is used when

1. You wish to cluster VMs across boxes, or physical to virtual. In any MSCS clustering scenario that spans physical hosts (virtual-to-virtual clusters as well as physical-to-virtual clusters), cluster data and quorum disks should be configured as RDMs rather than as files on a shared VMFS.
2. To enable the use of SAN management software inside VMs.

Let's get to the concept of RDM.

Imagine an RDM as a symbolic link from a VMFS volume to a raw LUN; the mapping makes the LUN appear as a file on VMFS. It is the RDM, not the raw LUN, that is referenced in the virtual machine configuration. When you map a LUN to VMFS, a file with the extension .vmdk is created that points to the raw LUN. This is just a mapping file containing information about the raw LUN, and it is locked by vCenter so that the VM can write to the LUN. In short, the actual data is written to the raw disk itself.
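
One way to see that the mapping file is only a pointer is to compare its apparent size with the space it actually consumes on the VMFS volume. A session sketch from the ESX service console; the datastore and file names here are hypothetical:

```
# The mapping file reports the full size of the mapped LUN...
ls -lh /vmfs/volumes/datastore1/myvm/myvm-rdm.vmdk

# ...but consumes almost no space on VMFS, since the data lives on the raw LUN.
du -h /vmfs/volumes/datastore1/myvm/myvm-rdm.vmdk
```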

Let's see how to map a SAN LUN:

1) When you add a disk, you have the option to select whether you want to map a SAN LUN.

Click Next.

2) Select the datastore on which you would like to map the SAN LUN.

Press Next

3) Select a compatibility mode: physical or virtual.

With physical compatibility mode, your VM can access the LUN directly. This is generally used when an application inside the VM wants to access the LUN directly. However, with physical compatibility mode you lose the option to clone the VM, make it a template, or migrate it when the migration involves moving the disks. When you wish to implement a Microsoft cluster, you have to select physical compatibility mode.

With virtual compatibility mode, you get several features, such as enabling snapshots on the disk. Virtual compatibility allows the LUN to behave as a VMDK, which enables features like cloning to a template, cloning to a VM, and migrations.

4) Depending upon the choice you selected above, you will see different screens. You can then select options such as the virtual disk mode.
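
The same mapping can also be created from the ESX service console with vmkfstools, which is roughly what the wizard does for you. A sketch, assuming hypothetical device and datastore paths; -r creates a virtual compatibility RDM and -z a physical compatibility (pass-through) one:

```
# Virtual compatibility mode RDM (device and paths are placeholders):
vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 \
    /vmfs/volumes/datastore1/myvm/myvm-rdm.vmdk

# Physical compatibility mode (pass-through) RDM:
vmkfstools -z /vmfs/devices/disks/vmhba1:0:3:0 \
    /vmfs/volumes/datastore1/myvm/myvm-rdmp.vmdk
```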

Managing paths for a RAW LUN

It is similar to the way you manage paths for datastores.

You then set the policy used to manage paths by selecting Manage Paths.
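
From the service console you can first list the available paths before choosing a policy in the Manage Paths dialog; a quick sketch (the output depends on your storage):

```
# List all LUNs together with their paths and path states:
esxcfg-mpath -l
```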

I just came across a nice picture showing how RDM fits into the whole picture where there is external storage. The grey dotted lines are RDMs. This is a screenshot from a NetApp PDF; I hope they won't mind.


ESX Server Restore steps

A common question that arises on the VMware Communities Forum is how to back up VMware ESX so that you can restore the backup if there is a problem, the theory being that this would be faster than reinstalling the server.

As stated in VMware KB article 1000761, it is possible to restore ESX to identical hardware; however, you need to reinstall ESX first and then restore the data you backed up while making changes to how the system boots. Otherwise, the Universally Unique Identifier (UUID) written by the installation will no longer work, because you will have overwritten it with the data from your backup.

This method will effectively restore everything to identical hardware; however, if you want to use new hardware, perhaps with different PCI devices, the restoration will fail to properly configure the new devices. It may even fail to properly configure NICs if there are any IRQ differences between the supposedly identical hardware.

So in these cases you would have to at least verify the configuration and fix anything that was broken. This could lead to a set of unknowns from a security perspective: you are, after all, trusting that the backup was restored properly, and if it was not, you could end up with security issues. So the verification step would have to be extremely well documented.

It is far easier to reinstall VMware ESX on the hardware and to use either an installation document, a kickstart script, or another type of script to configure all the devices for you, using either the Remote CLI or the VMware ESX CLI.

When restoring VMware ESX or VMware ESXi, the best tool to have is very good installation documentation that is easy to follow and has graphics and text for every step of the configuration. These documents can be reviewed for security concerns and used to derive the scripts that do the work for you. The remainder of this section describes how to restore an ESX host to a previous configuration in the event of a failure or re-installation.

Warning: This procedure is an unsupported workaround. This may lead to corruption if done incorrectly.
Backup Procedure
Create backups of these items:

    * The /etc/passwd file
    * The /etc/shadow file
    * The contents of /home directory
    * The contents of /root directory
    * The contents of the /etc/vmware directory, excluding:
          o Any soft links
          o /etc/vmware/patchdb
          o /etc/vmware/ssl
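
The list above can be captured with a small tar wrapper. This is a sketch under assumptions (the archive layout and exclusion handling are mine, not VMware's); on a real host you would run it from the service console with ROOT set to / and then copy the archive off the machine:

```shell
#!/bin/sh
# backup_esx_config ROOT OUTPUT
# Archives the backup items listed above, relative to ROOT, while
# excluding /etc/vmware/patchdb and /etc/vmware/ssl. Passing a ROOT
# other than / lets you rehearse the procedure on a staging copy.
backup_esx_config() {
    root=$1
    out=$2
    tar -C "$root" -czf "$out" \
        --exclude='etc/vmware/patchdb' \
        --exclude='etc/vmware/ssl' \
        etc/passwd etc/shadow home root etc/vmware
}
```

Note that tar stores soft links as links rather than skipping them; if you want them excluded entirely, prune them before restoring.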

Restoring Procedure

To restore configuration:

   1.      Reinstall ESX to the same patch level as the failed one.
   2.      Get the information on the currently configured core dump partition and copy and paste the output into a text editor:

      esxcfg-dumppart -l

   3.      Get the information on the currently configured cos core file and copy and paste the output into a text editor:

      cat /etc/vmware/esx.conf | grep CosCorefile

   4.       Restore /etc/vmware from a previous backup.
   5.      Update the new configuration file with core dump partition information:

      esxcfg-dumppart -s vmhbaX:X:X:X

      Where  vmhbaX:X:X:X is the dump partition name noted from step 2.

   6.      Edit /etc/vmware/esx.conf and update the CosCorefile information to match the path copied in step 3.
   7.       Get the new UUID for the root partition:

      cat /boot/grub/menu.lst | grep UUID

      This generates at least 3 lines with root=UUID=xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx where x is a hexadecimal number.

   8.      Update the configuration with the new root device UUID by executing the following command:

      esxcfg-boot -d "UUID=xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx"

   9. Reboot the ESX host. The ESX host reboots with the old profile.
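
Steps 7 and 8 can be glued together with a small helper. The helper itself is generic shell; the esxcfg-boot line only exists on the ESX host, so it is shown as a comment:

```shell
# extract_root_uuid MENU_LST
# Prints the UUID value from the first root=UUID=... entry in a GRUB
# menu.lst, i.e. the value needed for esxcfg-boot in step 8.
extract_root_uuid() {
    grep -o 'root=UUID=[0-9a-fA-F-]*' "$1" | head -n 1 | cut -d= -f3
}

# On the ESX host you would then run:
#   esxcfg-boot -d "UUID=$(extract_root_uuid /boot/grub/menu.lst)"
```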

PXE Booting VMware ESX 4.0

I recently had the opportunity to work on a proof of concept (PoC) in which we wanted to help a customer streamline the processes needed to deploy new hosts and reduce the amount of time it took overall. One of the tools we used in the PoC for this purpose was PXE booting VMware ESX for an automated installation. Here are the details on how we made this work. Before I get into the details, I’ll provide this disclaimer: there are probably easier ways of making this work. I specifically didn’t use UDA or similar because I wanted to gain the experience of how to do this the “old fashioned” way. I also wanted to be able to walk the customer through the “old fashioned” way and explain all the various components. With that in mind, here are the components you’ll need to make this work:
  • You’ll need a DHCP server to pass down the PXE boot information. In this particular instance, I used an existing Windows-based DHCP server. Any DHCP server should work; feel free to use the Linux ISC DHCP server if you prefer.
  • You’ll need an FTP server to host the kickstart script and VMware ESX 4.0 Update 1 installation files. In this case, I used a third-party FTP server running on the same Windows-based server as DHCP. Again, feel free to use a Linux-based FTP server if you prefer.
  • You will need a TFTP server to provide the boot files. The third-party FTP server used in the previous step also provided TFTP functionality. Use whatever TFTP server you prefer.

Make sure that each of these components is working as expected before proceeding. Otherwise, you’ll spend time troubleshooting problems that aren’t immediately apparent.
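
If you opt for the Linux ISC DHCP server instead of Windows, the equivalent configuration is a fragment like this; every address here is a placeholder for your own environment:

```
# /etc/dhcp/dhcpd.conf fragment (all addresses are placeholders)
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.10;     # TFTP server address (option 066 on Windows)
    filename "pxelinux.0";        # boot file name (option 067 on Windows)
}
```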

Preparing for the Automated ESX Installation

First, copy the contents for the VMware ESX 4.0 Update 1 DVD—not the actual ISO, but the contents of the ISO—to a directory on the FTP server. Test it to make sure that the files can be accessed via an anonymous FTP user. Also go ahead and create a simple kickstart script that automates the installation of VMware ESX. I won’t bother to go into detail on this step here; it’s been quite adequately documented elsewhere. You’ll need to put this kickstart script on the FTP server as well. At this point, you’re ready to proceed with gathering the PXE boot files.
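
For orientation only, here is a minimal ESX 4.x kickstart sketch; every value (password, FTP address, drive name) is a placeholder, and you should build your own script from the documentation referenced above:

```
accepteula
rootpw mysecretpassword
install url ftp://A.B.C.D/esx
autopart --drive=sda
network --device=vmnic0 --bootproto=dhcp
reboot
```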

Gathering the PXE Boot Files

The first task you’ll need to complete is gathering the necessary files for a PXE boot environment. First, copy the vmlinuz and initrd.img files from the VMware ESX 4.0 Update 1 ISO image. Since I use a Mac, for me this was a simple case of mounting the ISO image and copying out the files I needed. Linux or Windows users, it might be a bit more complicated for you. These files, by the way, are in the ISOLINUX folder on the DVD image. Next, you’ll need the PXE boot files. Specifically, you’ll need the menu.c32 and pxelinux.0 files. These files are not on the DVD ISO image; you’ll have to download Syslinux from this web site. Once you download Syslinux, extract the files into a temporary directory. You’ll find menu.c32 in the com32/menu folder; you’ll find pxelinux.0 in the core folder. Copy both of these files, along with vmlinuz and initrd.img, into the root directory of the TFTP server. (If you don’t know the root directory of the TFTP server, double-check its configuration.) You’re now ready to configure the PXE boot process.

Configuring the PXE Boot Environment

Once the necessary files have been placed into the root directory of the TFTP server, you're ready to configure the PXE boot environment. To do this, you'll need to create a PXE configuration file on the TFTP server. The file should be placed into a folder named pxelinux.cfg under the root of the TFTP server. The PXE configuration file should be named something like this:

01-<MAC address of the host, with the colons replaced by dashes>. If the MAC address of the host was 01:02:03:04:05:06, the name of the text file in the pxelinux.cfg folder on the TFTP server would be 01-01-02-03-04-05-06.

The PoC in which I was engaged involved Cisco UCS, so we knew in advance what the MAC addresses were going to be (the MAC address is assigned in the UCS service profile). The contents of this file should look something like this (lines have been wrapped here for readability and are marked by backslashes; don’t insert any line breaks in the actual file):

      default menu.c32
      menu title Custom PXE Boot Menu Title
      timeout 30

      label scripted
      menu label Scripted installation
      kernel vmlinuz
      append initrd=initrd.img mem=512M ksdevice=vmnic0 \
          ks=ftp://A.B.C.D/ks.cfg
      IPAPPEND 1

You'll want to replace ftp://A.B.C.D/ks.cfg with the correct IP address and path for the kickstart script on the FTP server. Only one step remains: configuring the DHCP server.

Configuring the DHCP Server for PXE Boot

As I mentioned earlier, I used the Windows DHCP server as a matter of ease and convenience; feel free to use whatever DHCP server best suits your needs. There are only two options that are necessary for PXE boot:

      066 Boot Server Host Name (specify the IP address of the TFTP server)
      067 Bootfile Name (specify pxelinux.0)

In this particular example, I created reservations for each MAC address. Because the values were the same for all reservations, I used server-wide DHCP options, but you could use reservation-specific DHCP options if you wanted different boot options on a per-MAC address (i.e., per-reservation) basis.

The End Result

Recall that this PoC was using Cisco UCS blades. Thus, in this environment, to prepare for a new host coming online we only had to make sure that we had a PXE configuration file and create a matching DHCP reservation. The MAC address would get assigned via the service profile, and when the blade booted then it would automatically proceed with an unattended installation. Combined with Host Profiles in VMware vCenter, this took the process of bringing new ESX/ESXi hosts online down to mere minutes. A definite win for any customer!
