Friday, May 27, 2011

VMware is still the best

Now, you can nit-pick the measurements he made or the criteria he chose, but in general I think it’s a solid test of up-to-date versions.

The best conclusions I can draw from his report are these:

VMware might not always be the cheapest, and it might not always be the fastest, but VMware is still the one with the most diverse OS support (any x86 OS can be virtualized), the best management toolkit and the most reliable architecture.

The article also shows some interesting trends. If you go back in time a bit, you can certainly remember that Citrix was aiming at the server virtualization market when they bought Xen. They even re-branded their entire portfolio with it after the purchase. When you look at their results in the test Paul did, the conclusion I draw is that Citrix has run out of fuel in this part of town and is concentrating on the desktop again.
Another remarkable trend can be seen at Microsoft and Red Hat. A few years back, Red Hat didn’t really compete in this part of town, and Microsoft was more or less the laughing stock of the bunch. Nobody really considered running Hyper-V in their data center, as it was not even ready for a proper single-server deployment, let alone a complete data center cluster.

Well, Microsoft did what was to be expected of them; they improved and improved again. One thing can be said of their current version: it can be deployed in a data center scenario. But as the shoot-out shows, there is still a lot of room for improvement. And we all know that statements made about previous versions of Hyper-V like ‘who needs live migration’ quickly changed into ‘look, we can do live migration too!’. Reliability and scalability have hugely improved, but management is still a pain in the butt for Redmond. One thing strikes me most when I am helping select a virtualization platform: many clients tend to think Hyper-V is free. You get it with any server license you buy. Indeed you do. But keep in mind that you burn that specific license for your virtual platform AND you still need to buy a collection of management software to properly manage the lot. But I guess we haven’t seen the last of this yet.

Red Hat is one of the most remarkable companies in this list. They have been on the virtualization train for quite some time, but as this test shows, they really can compete with the big three in this field and come out second. It seems that open source is quickly closing the gap with the enterprise players and really showing what it can do, although implementation is a bit limited, with Windows and Red Hat Linux as the only supported guest VMs.

EMC Isilon 15PB Storage system

EMC has announced a new storage system called Isilon 108NL. The Isilon storage system consists of 36-disk 4U nodes which can be joined together to create a 144-node storage system with a capacity of 15 PB.
It’s a scale-out storage system where each node contains its own 1 Gbps and 10 Gbps interfaces so you can add capacity while maintaining throughput.

According to EMC, the Isilon 108NL can achieve a throughput of 85 GBps and a maximum of 1.4 M IOPS. The Isilon can be equipped with 1, 2 or 3 TB Hitachi SATA disks, giving it 36, 72 or 108 TB per storage node. The Isilon is powered by the OneFS operating system, which supports NFS, SMB, iSCSI, HTTP and FTP and delivers N+4 data protection on cluster, directory or file level.
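As a quick sanity check on those numbers, the per-node and cluster capacities follow directly from the disk counts. A rough sketch in decimal units (the variable names are mine):

```python
# Back-of-the-envelope check of the Isilon 108NL capacity figures.
disks_per_node = 36
tb_per_disk = 3                 # largest Hitachi SATA option
nodes_max = 144                 # maximum cluster size

tb_per_node = disks_per_node * tb_per_disk   # 108 TB, hence "108NL"
raw_pb = nodes_max * tb_per_node / 1000      # raw cluster capacity in PB

print(tb_per_node, raw_pb)      # 108 TB per node, roughly 15.5 PB raw
```

Which lines up with the quoted "15 PB" once you allow for the N+4 protection overhead.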

Monday, May 16, 2011

Distributed Resource Scheduling (DRS) for Storage on next vSphere version

DRS for storage will enhance Storage vMotion in order to provide automatic load balancing for storage. Users will be able to define groups of data stores, called “storage pods,” that automatically load-balance based on capacity, increasing storage utilization. Storage Distributed Resource Scheduler (DRS) will use Storage vMotion to perform automatic load balancing if a disk becomes overloaded. Users can then provision virtual machines (VMs) to a specific storage pod rather than to a specific data store.
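To make the idea concrete, here is a small Python sketch of capacity-based placement across a storage pod. The data structures, names and the 80% threshold are my own illustration of the concept, not VMware’s actual algorithm:

```python
# Illustrative sketch of capacity-based placement in a "storage pod":
# place new VMs on the datastore with the most free space, and flag
# datastores whose utilization crosses a threshold for rebalancing.

def pick_datastore(pod):
    """Return the datastore in the pod with the most free capacity."""
    return max(pod, key=lambda ds: ds["capacity_gb"] - ds["used_gb"])

def needs_rebalance(ds, threshold=0.80):
    """True if a datastore's utilization exceeds the threshold."""
    return ds["used_gb"] / ds["capacity_gb"] > threshold

pod = [
    {"name": "ds01", "capacity_gb": 1024, "used_gb": 900},
    {"name": "ds02", "capacity_gb": 1024, "used_gb": 300},
]

print(pick_datastore(pod)["name"])                    # ds02: most headroom
print([ds["name"] for ds in pod if needs_rebalance(ds)])  # ds01 is overloaded
```

In the real feature the rebalancing is performed by Storage vMotion moving virtual disks between the datastores in the pod.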

Sunday, May 15, 2011

New Vblock Announcements at EMC World 2011

EMC isn't the only company with news to unveil at EMC World 2011. VCE has some announcements as well, all revolving around brand new Vblocks!

The first announcement that affects VCE is the unveiling of Unified Infrastructure Manager 2.1. UIM is standard with a Vblock and is the major hardware-orchestration piece, with many new road-map additions tying it in with other VMware products. Check out Chad Sakac's post, EMC UIM v2.1 Provisioning and Operations, because he has already covered this really in depth.

The second announcement from VCE is the availability of the new VNX-based Vblocks. The original Vblock names are still there, and I've created a chart that helps depict the new differences.

Vblock Name   | EMC Array | Other Notes and Features
Vblock 0      | -         | NAS, SAN, or Both
Vblock 1      | -         | NAS, SAN, or Both
Vblock 1U     | -         | NAS, SAN, or Both
Vblock 300 EX | VNX 5300  | SAN or Unified
Vblock 300 FX | VNX 5500  | SAN or Unified
Vblock 300 GX | VNX 5700  | SAN or Unified
Vblock 300 HX | VNX 7500  | SAN or Unified
Vblock 2      | -         | SAN, or NAS with a Gateway

All new 300 series Vblocks come in SAN or Unified (SAN/NAS) configurations. No longer can Vblocks be ordered as NAS only. Why? Vblocks boot from SAN. When a Vblock was shipped as NAS only, UCS blades had to be populated with internal hard drives. Boot from SAN gives a lot of flexibility in virtual environments: less spinning media to worry about, lower power consumption in the blades, easy recovery (if a blade fails it's very simple to replace it and boot it up without moving hard drives or re-installing ESX), UCS profiles with SAN boot make VMware 4.1 seem stateless, and UIM can configure blades that boot from SAN.
The Vblock 300 EX is actually cheaper than the original Vblock 0 because of new hardware components.
Some other things you may want to know about the Vblock 300 series are the minimums and maximums. All new Vblocks have specific minimums that can be shipped out. The minimum on the compute side is at least 4 blades. VCE has decided on 4 blades because that is the recommended bare-minimum VMware cluster size to account for N+1 plus maintenance. All UCS blade upgrades can be done in packs of 2 to account for redundancy. On the storage side, a Vblock can be shipped with as little as 18 drives. On the compute side the maximum number of chassis depends on the Vblock type, and on the storage side the maximum number of drives depends on the array.
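The blade arithmetic above can be sketched as follows (my own helper functions illustrating the stated rules, not a VCE sizing tool):

```python
# Sketch of the Vblock 300 compute sizing rules: a 4-blade minimum
# covers N+1 plus one host in maintenance, and upgrades ship in pairs.

def usable_hosts(blades):
    """Hosts left for workloads after reserving one for HA failover (N+1)
    and one for a host in maintenance mode."""
    return max(blades - 2, 0)

def next_valid_count(blades, minimum=4, pack=2):
    """Round a requested blade count up to the minimum and to whole packs."""
    blades = max(blades, minimum)
    return blades + (-blades % pack)

print(usable_hosts(4))       # 2: the smallest config still runs workloads
print(next_valid_count(5))   # 6: odd requests round up to the next pack of 2
```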
So what's going to happen with the original Vblock 0, 1, and 1U? Nothing. VCE still offers the original Vblocks and will continue to do so until EMC puts an end-of-life statement on the arrays.
Your last question might be: what's up with the 300 branding? Since it's a single number, there is room for newer Vblocks to fit in the range of 0-1000. I can't give any more details, but things are in the pipeline!
Lastly, if you happen to be at EMC World, you can catch a glimpse of the new racks. VCE is now shipping brand new custom racks built by Panduit. Simply stunning. This one looks like a Vblock 300 FX.

Friday, May 13, 2011

VMware Clarifies Support for Microsoft Clustering

VMware published KB Article 1037959 on April 18, 2011 in an effort to clarify its position on running Microsoft Clustering technologies on vSphere. Below is a snapshot of the support matrix published in the KB (always refer to KB 1037959 for the most current information).

For those familiar with VMware’s previous position on Microsoft Clustering, you will notice a couple of changes. First, VMware now distinguishes between Microsoft Clustering technologies by segmenting them into shared-disk and non-shared-disk solutions.

  • Shared Disk – a solution in which the data resides on the same disks and the VMs share those disks (think MSCS)

  • Non-shared Disk – a solution in which the data resides on different disks and a replication technology keeps the data in sync (think Exchange 2007 CCR / 2010 DAG).

Next, VMware has extended support for Microsoft Clustering to include in-guest iSCSI for MSCS.

For those interested in leveraging Microsoft SQL Mirroring, the KB states that VMware does not consider it a clustering solution and will fully support Microsoft SQL Mirroring on vSphere.

Under the Disk Configurations section, the KB discusses how, if using VMFS, the virtual disks used as shared storage for clustered virtual machines must reside on VMFS datastores and must be created using the eagerzeroedthick option. The KB provides detail on how to create eagerzeroedthick disks for both ESX and ESXi via the command line or the GUI. Additional information regarding eagerzeroedthick can be found in KB article 1011170. Something to note in KB 1011170: at the bottom of the article it states that using the vmkfstools -k command you can convert a preallocated (eagerzeroed) virtual disk to eagerzeroedthick and maintain any existing data. Note, the VM must be powered off for this action.
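For reference, the two vmkfstools operations the KBs describe look like this when run from the ESX/ESXi console (the datastore path and disk size below are placeholders, not values from the KBs):

```shell
# Create a new 40 GB eagerzeroedthick virtual disk on a VMFS datastore:
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Convert an existing preallocated (eagerzeroed) disk to eagerzeroedthick
# in place, keeping its data. The VM must be powered off first:
vmkfstools -k /vmfs/volumes/datastore1/vm1/vm1.vmdk
```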

In closing, the VMware support statement exists to explicitly define what VMware will and will not support. It is very important to remember that these support statements do not make any determination (either directly or indirectly) about what the software ISV (Independent Software Vendor) will and will not support. So be sure to review the official support statements from your ISV and carefully choose a configuration that makes sense for your organization and is supported by each vendor.

Wednesday, May 11, 2011

VMware Interview Questions

1. Explain your production environment. How many clusters, ESX hosts and data centers, what hardware, etc.?
2. How does VMotion work? What port number does it use?
3. What are the prerequisites for VMotion?
4. How does HA work? Which port? How many host failures are allowed and why?
5. What are active hosts / primary hosts in HA? Explain.
6. What are the prerequisites for HA?
7. How does DRS work? Which technology does it use? What are the priority levels used to migrate VMs?
8. How do snapshots work?
9. Which files are created when you create a VM, and which after powering it on?
10. If the VMDK header file is corrupt, what will happen? How do you troubleshoot it?
11. What are the prerequisites for VC and Update Manager?
12. Have you ever patched an ESX host? What are the steps involved?
13. Have you ever installed an ESX host? What are the pre- and post-installation steps involved? Which partitions are created, and what is the maximum size of each?
14. I turned on maintenance mode on an ESX host and all the VMs migrated to another host, but one VM failed to migrate. What are the possible reasons?
15. How do you start / stop a VM from the command prompt?
16. I upgraded a VM from 4 to 8 GB of RAM and it fails at 90% while powering on. How do you troubleshoot that?
17. The storage team provided a new LUN ID to you. How do you configure the LUN in VC? What block size would you use (say, for a 500 GB volume)?
18. I want to add a new VLAN to the production network. What are the steps involved, and how do you enable it?
19. Explain VCB. What is the minimum priority (*) to consolidate a machine?
20. How does VDR work?
21. What's the difference between the top and esxtop commands?
22. How do you check network bandwidth utilization on an ESX host from the command prompt?
23. How do you generate a report listing the ESX hosts, VMs, RAM and CPU used in your vSphere environment?
24. What's the difference between connecting to the ESX host through VC and through the vSphere client? What services are involved? Which port numbers are used?
25. How does FT work? What are the prerequisites, and which port is used?
26. Can I VMotion between 2 different data centers? Why?
27. Can I deploy a VM from a template in a different data center?
28. I want to increase the system partition size (Windows 2003 Server guest OS) of a VM. How do you do it without any interruption to the end user?
29. Which port number is used when 2 ESX hosts transfer data between them?
30. You are unable to connect to VC through the vSphere client. What could be the reason? How do you troubleshoot it?
31. Have you ever upgraded ESX 3.5 to 4.0? How did you do it?
32. What are the special features of vSphere 4.0, VC 4.0, ESX 4.0 and VM hardware version 7.0?
33. What is AAM? Where is it used? How do you start or stop it from the command prompt?
34. Have you ever called VMware support? Etc.
35. Explain vSphere licensing and the license server.
36. How do you change the service console IP?
37. What's the difference between ESX and ESXi?
38. What's the difference between ESX 3.5 and ESX 4.0?

Sunday, May 1, 2011

Awesome Video Demoing The Next Generation of Digital Books

A must-watch video for every tech lover; this is the future of our books. "Al Gore's Our Choice" is an interactive app for the Apple iPad and iPhone featuring the former U.S. vice president's narrated tour, spiced up with great photography, interactive graphics, animations, and more than an hour of engrossing documentary footage. It is simply a great experience.

Thanks to the device's groundbreaking multi-touch interface, the app provides users with an immersive experience, letting them enjoy the content seamlessly. Do check the video demonstration posted after the jump.

Al Gore's Our Choice
