Thursday, December 1, 2011

VCP-510 Dumps Certified Professional on vSphere 5

 

What Are VCP-510 Dumps?

VCP-510 dumps are questions that appeared in the real VCP-510 Certified Professional on vSphere 5 exam. A recent set of dumps can be very helpful when preparing to clear the VCP-510 exam.

How To Download VCP-510 Dumps?

You can download the VCP-510 Certified Professional on vSphere 5 dumps from our site free of cost.

http://www.filesonic.com/file/4063871064/VCP-510_Dump_Latest.pdf

Monday, November 21, 2011

Storage I/O Control Enhancements in vSphere 5.0

Storage I/O Control (SIOC) was introduced in vSphere 4.1 and allows for cluster-wide control of disk resources. The primary aim is to prevent a single VM on a single ESX host from hogging all the I/O bandwidth to a shared datastore. For example, a low-priority VM running a data-mining type application could impact the performance of other, more important business VMs sharing the same datastore.

Configuring Storage I/O Control

Let's take a brief look at how to configure SIOC. SIOC is enabled very simply via the properties of the datastore. This example uses a datastore built on a LUN from an EMC VNX 5500:


The Advanced button allows you to modify the latency threshold figure. SIOC doesn't do anything until this threshold is exceeded. By default in vSphere 5.0, the latency threshold is 30ms, but this can be changed if you want a lower or higher latency threshold value:


Through SIOC, virtual machines can now be assigned a priority when contention arises on a particular datastore. Priority is established using the concept of shares: the more shares a VM has, the more bandwidth it gets to a datastore when contention arises. Although a disk shares mechanism existed in the past, it was only respected by VMs on the same ESX host, so it wasn't much use on shared storage accessed by multiple ESX hosts. Storage I/O Control enables the honoring of share values across all ESX hosts accessing the same datastore.

The shares mechanism is triggered when the latency to a particular datastore rises above the pre-defined latency threshold seen earlier. Note that the latency is calculated cluster-wide. Storage I/O Control also allows you to place a maximum on the number of IOPS that a particular VM can generate against a shared datastore. The shares and IOPS values are configured on a per-VM basis: edit the settings of the VM, select the Resources tab, and the Disk setting will let you set the shares value used when contention arises (Normal/1000 by default) and limit the IOPS that the VM can generate on the datastore (Unlimited by default):
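To make the shares mechanism more concrete, here is a minimal Python sketch (not VMware code, and not the actual SIOC algorithm) showing how share-proportional allocation and a per-VM IOPS limit interact once the latency threshold is crossed. The VM names, share values, and datastore throughput are made-up examples.

# Illustrative sketch only: once datastore latency exceeds the threshold,
# each VM's slice of the datastore's available IOPS is proportional to its
# share value, capped by any per-VM IOPS limit.

def allocate_iops(vms, datastore_iops, latency_ms, threshold_ms=30):
    if latency_ms <= threshold_ms:
        # Below the threshold SIOC does not intervene; limits still apply.
        return {name: cfg.get("limit", float("inf")) for name, cfg in vms.items()}
    total_shares = sum(cfg["shares"] for cfg in vms.values())
    allocation = {}
    for name, cfg in vms.items():
        fair_share = datastore_iops * cfg["shares"] / total_shares
        allocation[name] = min(fair_share, cfg.get("limit", float("inf")))
    return allocation

vms = {
    "prod-db":    {"shares": 2000},                # High shares
    "app-server": {"shares": 1000},                # Normal (default) shares
    "data-miner": {"shares": 500, "limit": 1000},  # Low shares, 1000 IOPS cap
}

print(allocate_iops(vms, datastore_iops=10000, latency_ms=42))
# -> prod-db ~5714, app-server ~2857, data-miner capped at 1000

In the real feature, each ESXi host throttles its device queue depth to achieve this proportionality; the sketch only illustrates the end result.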



More information on Storage I/O Control can be found in this whitepaper.

Thursday, October 13, 2011

vSphere 5 Configuration Maximums

Sub-sections:

1: vSphere 5 Compute Configuration Maximums
2: vSphere 5 Memory Configuration Maximums
3: vSphere 5 Networking Configuration Maximums
4: vSphere 5 Orchestrator Configuration Maximums
5: vSphere 5 Storage Configuration Maximums
6: vSphere 5 Update Manager Configuration Maximums
7: vSphere 5 vCenter Server, and Cluster and Resource Pool Configuration Maximums
8: vSphere 5 Virtual Machine Configuration Maximums

1: vSphere 5 Compute Configuration Maximums

1 = Maximum number of virtual CPUs per Fault Tolerance protected virtual machine
4 = Maximum Fault Tolerance protected virtual machines per ESXi host
16 = Maximum number of virtual disks per Fault Tolerance protected virtual machine
25 = Maximum virtual CPUs per core
160 = Maximum logical CPUs per host
512 = Maximum virtual machines per host
2048 = Maximum virtual CPUs per host
64GB = Maximum amount of RAM per Fault Tolerance protected virtual machine

2: vSphere 5 Memory Configuration Maximums

1 = Maximum number of swap files per virtual machine
1TB = Maximum swap file size
2TB = Maximum RAM per host

3: vSphere 5 Networking Configuration Maximums

2 = Maximum forcedeth 1Gb Ethernet ports (NVIDIA) per host
4 = Maximum VMDirectPath PCI/PCIe devices per virtual machine
4 = Maximum concurrent vMotion operations per host (1Gb/s network)
8 = Maximum concurrent vMotion operations per host (10Gb/s network)
8 = Maximum VMDirectPath PCI/PCIe devices per host
8 = Maximum nx_nic 10Gb Ethernet ports (NetXen) per host
8 = Maximum ixgbe 10Gb Ethernet ports (Intel) per host
8 = Maximum be2net 10Gb Ethernet ports (Emulex) per host
8 = Maximum bnx2x 10Gb Ethernet ports (Broadcom) per host
16 = Maximum bnx2 1Gb Ethernet ports (Broadcom) per host
16 = Maximum igb 1Gb Ethernet ports (Intel) per host
24 = Maximum e1000e 1Gb Ethernet ports (Intel PCI-e) per host
32 = Maximum tg3 1Gb Ethernet ports (Broadcom) per host
32 = Maximum e1000 1Gb Ethernet ports (Intel PCI-x) per host
32 = Maximum distributed switches (VDS) per vCenter
256 = Maximum Port Groups per Standard Switch (VSS)
256 = Maximum ephemeral port groups per vCenter
350 = Maximum hosts per VDS
1016 = Maximum active ports per host (VSS and VDS ports)
4088 = Maximum virtual network switch creation ports per standard switch (VSS)
4096 = Maximum total virtual network switch ports per host (VSS and VDS ports)
5000 = Maximum static port groups per vCenter
30000 = Maximum distributed virtual network switch ports per vCenter
6x10Gb + 4x1Gb = Maximum combination of 10Gb and 1Gb Ethernet ports per host

4: vSphere 5 Orchestrator Configuration Maximums

10 = Maximum vCenter Server systems connected to vCenter Orchestrator
100 = Maximum hosts connected to vCenter Orchestrator
150 = Maximum concurrent running workflows
15000 = Maximum virtual machines connected to vCenter Orchestrator

5: vSphere 5 Storage Configuration Maximums

2 = Maximum concurrent Storage vMotion operations per host
4 = Maximum Qlogic 1Gb iSCSI HBA initiator ports per server
4 = Maximum Broadcom 1Gb iSCSI HBA initiator ports per server
4 = Maximum Broadcom 10Gb iSCSI HBA initiator ports per server
4 = Maximum software FCoE adapters
8 = Maximum non-vMotion provisioning operations per host
8 = Maximum concurrent Storage vMotion operations per datastore
8 = Maximum number of paths to a LUN (software iSCSI and hardware iSCSI)
8 = Maximum NICs that can be associated or port bound with the software iSCSI stack per server
8 = Maximum number of FC HBAs of any type
10 = Maximum VASA (vSphere storage APIs – Storage Awareness) storage providers
16 = Maximum FC HBA ports
32 = Maximum number of paths to a FC LUN
32 = Maximum datastores per datastore cluster
62 = Maximum Qlogic iSCSI: static targets per adapter port
64 = Maximum Qlogic iSCSI: dynamic targets per adapter port
64 = Maximum hosts per VMFS volume
64 = Maximum Broadcom 10Gb iSCSI dynamic targets per adapter port
128 = Maximum Broadcom 10Gb iSCSI static targets per adapter port
128 = Maximum concurrent vMotion operations per datastore
255 = Maximum FC LUN IDs
256 = Maximum VMFS volumes per host
256 = Maximum datastores per vCenter
256 = Maximum targets per FC HBA
256 = Maximum iSCSI LUNs per host
256 = Maximum FC LUNs per host
256 = Maximum NFS mounts per host
256 = Maximum software iSCSI targets
1024 = Maximum number of total iSCSI paths on a server
1024 = Maximum number of total FC paths on a server
2048 = Maximum Powered-On virtual machines per VMFS volume
2048 = Maximum virtual disks per host
9000 = Maximum virtual disks per datastore cluster
30,720 = Maximum files per VMFS-3 volume
130,690 = Maximum files per VMFS-5 volume
1MB = Maximum VMFS-5 block size (for volumes not upgraded from VMFS-3)
8MB = Maximum VMFS-3 block size
256GB = Maximum file size (1MB VMFS-3 block size)
512GB = Maximum file size (2MB VMFS-3 block size)
1TB = Maximum file size (4MB VMFS-3 block size)
2TB – 512 bytes = Maximum file size (8MB VMFS-3 block size)
2TB – 512 bytes = Maximum VMFS-3 RDM size
2TB – 512 bytes = Maximum VMFS-5 RDM size (virtual compatibility)
64TB = Maximum VMFS-3 volume size
64TB = Maximum FC LUN size
64TB = Maximum VMFS-5 RDM size (physical compatibility)
64TB = Maximum VMFS-5 volume size

6: vSphere 5 Update Manager Configuration Maximums

1 = Maximum ESXi host upgrades per cluster
24 = Maximum VMware Tools upgrades per ESXi host
24 = Maximum virtual machine hardware upgrades per host
70 = Maximum VUM Cisco VDS updates and deployments
71 = Maximum ESXi host remediations per VUM server
71 = Maximum ESXi host upgrades per VUM server
75 = Maximum virtual machine hardware scans per VUM server
75 = Maximum virtual machine hardware upgrades per VUM server
75 = Maximum VMware Tools scans per VUM server
75 = Maximum VMware Tools upgrades per VUM server
75 = Maximum ESXi host scans per VUM server
90 = Maximum VMware Tools scans per ESXi host
90 = Maximum virtual machine hardware scans per host
1000 = Maximum VUM host scans in a single vCenter server
10000 = Maximum VUM virtual machines scans in a single vCenter server

7: vSphere 5 vCenter Server, and Cluster and Resource Pool Configuration Maximums

100% = Maximum failover as percentage of cluster
8 = Maximum resource pool tree depth
32 = Maximum concurrent host HA failover
32 = Maximum hosts per cluster
512 = Maximum virtual machines per host
1024 = Maximum children per resource pool
1600 = Maximum resource pools per host
1600 = Maximum resource pools per cluster
3000 = Maximum virtual machines per cluster

8: vSphere 5 Virtual Machine Configuration Maximums

1 = Maximum IDE controllers per virtual machine
1 = Maximum USB 3.0 devices per virtual machine
1 = Maximum USB controllers per virtual machine
1 = Maximum Floppy controllers per virtual machine
2 = Maximum Floppy devices per virtual machine
3 = Maximum Parallel ports per virtual machine
4 = Maximum IDE devices per virtual machine
4 = Maximum Virtual SCSI adapters per virtual machine
4 = Maximum Serial ports per virtual machine
4 = Maximum VMDirectPath PCI/PCIe devices per virtual machine
6 (if 2 of them are Teradici devices) = Maximum VMDirectPath PCI/PCIe devices per virtual machine
10 = Maximum Virtual NICs per virtual machine
15 = Maximum Virtual SCSI targets per virtual SCSI adapter
20 = Maximum xHCI USB controllers per virtual machine
20 = Maximum USB devices connected to a virtual machine
32 = Maximum Virtual CPUs per virtual machine (Virtual SMP)
40 = Maximum concurrent remote console connections to a virtual machine
60 = Maximum Virtual SCSI targets per virtual machine
60 = Maximum Virtual Disks per virtual machine (PVSCSI)
128MB = Maximum Video memory per virtual machine
1TB = Maximum Virtual Machine swap file size
1TB = Maximum RAM per virtual machine
2TB – 512B = Maximum virtual machine Disk Size
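If you want a handful of these limits available during design reviews, they are easy to encode as a quick sanity check. The snippet below is only an illustration using a few of the values from the lists above; it is not an official or complete validation tool.

# Hypothetical sanity check of a planned design against a few of the
# vSphere 5 maximums listed above. Extend the dictionary as needed.

VSPHERE5_MAX = {
    "vcpus_per_vm": 32,
    "ram_per_vm_gb": 1024,   # 1TB
    "vms_per_host": 512,
    "hosts_per_cluster": 32,
    "vms_per_cluster": 3000,
    "hosts_per_vmfs_volume": 64,
}

def check_design(design):
    """Return (key, planned, maximum) tuples for every exceeded limit."""
    violations = []
    for key, planned in design.items():
        maximum = VSPHERE5_MAX.get(key)
        if maximum is not None and planned > maximum:
            violations.append((key, planned, maximum))
    return violations

planned = {"vcpus_per_vm": 48, "hosts_per_cluster": 24, "vms_per_cluster": 1200}
for key, value, limit in check_design(planned):
    print(f"{key}: planned {value} exceeds the vSphere 5 maximum of {limit}")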

Monday, September 26, 2011

vSphere 5 and the new vSphere Distributed Switch – NetFlow

 Introduction

With vSphere 5 comes a plethora of new features and functionality across the entire VMware virtualization platform. One of the core components that received a nice upgrade is the vSphere Distributed Switch (vDS). For those of you who have not had the chance to use the vDS, it is a centralized administrative interface that lets you manage and update network configuration in one location rather than on each separate ESX host. This saves vSphere administrators and network engineers a lot of operational configuration time and/or scripting effort. The vDS is a feature that is packaged with Enterprise Plus licensing. Here are some of the new features included with vDS 5.0:
  • New stateless firewall that is built into the ESXi kernel (iptables is no longer used)
  • Network I/O Control improvements (user-defined network resource pools and 802.1p priority tagging support)
  • LLDP standard is now supported for network discovery (no longer just CDP support)
  • The ability to mirror ports for advanced network troubleshooting or analysis
  • The ability to configure NetFlow for visibility into inter-VM communication (NetFlow version 5)
NetFlow Basics

I could do a write-up on each of these components, as they are all worth discussing in more detail, but I wanted to focus on the NetFlow feature for this post because I think it’s an awesome addition. NetFlow has had experimental support in vSphere for some time, but VMware has now integrated the functionality right into the vDS, and it is officially supported.

NetFlow gives the administrator the ability to monitor virtual machine network communications to assist with intrusion detection, network profiling, compliance monitoring and, in general, network forensics. Enabling this functionality can give you real insight into what is going on within your environment from a network perspective. Having “cool features” is nice, but having features you can actually use to show value back to the business is a completely different value-add.

Let’s look at how to set up NetFlow on the new vDS, then take a look at the data you can extract from NetFlow with a third-party NetFlow viewer. Once you see the value of the data, you can make some important IT business decisions about how to mitigate risk and protect your investment by getting ahead of the curve (for example, with VMware vShield or other third-party software).

Setup your vDS 5 Switch

Ensure you are running VMware vSphere 5.0 and have activated Enterprise Plus licensing before setting up the vDS switch in your environment. You can see below the new option to deploy a vDS 5.0 switch; backwards compatibility is of course available for those who need to deploy to their 4.x environments. Select the 5.0 version and click Next.


In the “General” section, give the vDS a name; in this example I am calling it “dvSwitch5”. Next, select the number of network interface cards you want to participate in the switch, then click Next.


For each host in your cluster that you want to participate in the vDS, configure the network interfaces that will support this vDS implementation. In this example I have selected vmnic4 and vmnic5 as members of the vDS 5 switch. Click Next.


That’s it. Review the summary and click Finish; once your vDS configuration comes online, you can begin configuring NetFlow.


Set up NetFlow on the vDS 5

Now that you have a fully functioning vDS 5.0 switch, you can actually start to use it! First let’s configure NetFlow on the dvSwitch and enable it on the dvPortGroup, then we will move some virtual machines over to the new vDS so we can get some real data flowing. Right-click your newly created dvSwitch, select “Edit Settings”, and go to the “NetFlow” tab across the top of the page. You will need to give your vDS an IP address so your NetFlow tool knows where to collect the data from. Populate an IP address for the vDS, then enter the IP address of the collector you plan on using to pull the data. Make sure you enter the correct port number (default is 1) to match how you set up your NetFlow application to communicate.
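If you do not have a commercial NetFlow viewer handy, a few lines of Python are enough to verify that the vDS is actually exporting flows. The sketch below is a minimal NetFlow version 5 listener (version 5 is what the vDS exports); the listening port 2055 is simply a common collector port chosen for this example, so match it to whatever port you configure on the vDS.

# Minimal NetFlow v5 collector sketch: binds a UDP socket and decodes the
# 24-byte v5 header plus the 48-byte flow records that follow it.
import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")   # version, count, uptime, secs, nsecs, seq, engine type/id, sampling
RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # one v5 flow record (48 bytes)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))   # arbitrary example port; use the port you configured

while True:
    data, addr = sock.recvfrom(65535)
    version, count = struct.unpack("!HH", data[:4])
    if version != 5 or len(data) < HEADER.size + count * RECORD.size:
        continue
    offset = HEADER.size
    for _ in range(count):
        rec = RECORD.unpack_from(data, offset)
        src, dst = socket.inet_ntoa(rec[0]), socket.inet_ntoa(rec[1])
        packets, octets = rec[5], rec[6]
        print(f"{addr[0]} exported flow {src} -> {dst}: {packets} pkts, {octets} bytes")
        offset += RECORD.size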


Right-click the dvPortGroup within the vDS, select the “Monitoring” option, and enable NetFlow so you can begin to collect data.


Move a few VMs over to the new vDS so you can begin to capture some real data with your newly established NetFlow configuration. I have highlighted below how you can change the network connection on a VM to use the dvSwitch5 we created earlier.
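If you would rather script this last step, something along the lines of the pyVmomi sketch below should reassign a VM's first NIC to the new distributed port group. Treat it as a hedged sketch rather than a finished tool: it assumes the VM and port group objects have already been looked up elsewhere, and you should double-check the property names against the vSphere API reference for your version.

# Hypothetical pyVmomi sketch: point an existing VM NIC at a dvPortGroup.
# 'vm' is a vim.VirtualMachine and 'pg' a vim.dvs.DistributedVirtualPortgroup
# that you have already retrieved from vCenter; names are placeholders.
from pyVmomi import vim

def move_nic_to_dvportgroup(vm, pg):
    # Find the first virtual NIC on the VM.
    nic = next(d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualEthernetCard))

    # Back the NIC with a connection to the distributed port group.
    conn = vim.dvs.PortConnection(
        portgroupKey=pg.key,
        switchUuid=pg.config.distributedVirtualSwitch.uuid,
    )
    nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(port=conn)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=nic,
    )
    # Returns a task; wait on it with your usual task helper.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))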



Conclusion

VMware vSphere 5 offers some great new features integrated into the new vSphere 5 Distributed Switch. Start to leverage your existing investment by examining your network infrastructure with the NetFlow data you can now extract. Once you have gathered this data, consider how you can mitigate some of the security and compliance risks within your organization. VMware vShield is one product that can help in this regard and will integrate into your current environment.

Wednesday, September 14, 2011

VMware Workstation 8 Released


Why Choose VMware Workstation?

Winner of more than 50 industry awards, VMware Workstation is recognized for its broad operating system support, rich user experience, comprehensive feature set, and high performance. It's the perfect companion for any technical professional who wants to save significant time with a tool backed by world-class support.


Introducing VMware Workstation 8!

VMware Workstation 8 is your on-ramp to the cloud. With more than 50 new features, it’s going to dramatically change the way you work with virtual machines. Save time, enhance collaboration, and do more than you ever thought possible with a PC.

Access Anytime, Anywhere

VMware Workstation provides a seamless way to access all of the virtual machines you need, regardless of where they are running. Connect to Server enables remote connections to virtual machines running on VMware Workstation, VMware vSphere, and VMware vCenter. Now you can work with local and server-hosted virtual machines side by side within the same interface. You are no longer constrained by the power of your PC to run multiple virtual machines at the same time.

Share the Benefits

Sharing a virtual machine is the quickest way to share and test applications with your team in a more production-like environment. Run VMware Workstation as a server to share virtual machines with your teammates, department, or organization. VMware Workstation provides enterprise-caliber security to control user access and levels of control.


Unleash the Power of Your PC

VMware Workstation takes advantage of the latest hardware to replicate server and desktop environments. Create virtual machines with up to 64GB of RAM and significantly improved virtual SMP performance. And for those times when you need it, virtual VT enables you to run 64-bit nested virtual machines. Additional improvements include better NAT performance and support for HD audio, SuperSpeed USB (USB 3.0), and Bluetooth.

From PC to Datacenter

Simply drag and drop a virtual machine to move it from your PC to a VMware vSphere server. It’s the easiest way to deploy a complete application environment from your PC to a server for further testing, demoing, or analysis.

Download a copy of VMware Workstation 8 from here.

vSphere 5 Product Documentation - PDF and E-book Formats

VMware really did an outstanding job with the availability of vSphere 5 information at their new vSphere 5 Documentation Center. It offers a wide range of documents in searchable HTML format and also offers all guides in PDF, ePub, and mobi formats. It even has a link to one downloadable zip file with all the vSphere 5 PDFs you need.

Archive of all PDFs in this list [zip]
vSphere Basics [pdf | epub | mobi]
vSphere Installation and Setup [pdf | epub | mobi]
vSphere Upgrade [pdf | epub | mobi]
vSphere vCenter Server and Host Management [pdf | epub | mobi]
vSphere Virtual Machine Administration [pdf | epub | mobi]
vSphere Host Profiles [pdf | epub | mobi]
vSphere Networking [pdf | epub | mobi]
vSphere Storage [pdf | epub | mobi]
vSphere Security [pdf | epub | mobi]
vSphere Resource Management [pdf | epub | mobi]
vSphere Availability [pdf | epub | mobi]
vSphere Monitoring and Performance [pdf | epub | mobi]
vSphere Troubleshooting [pdf | epub | mobi]
vSphere Examples and Scenarios [pdf | epub | mobi]

VCP5 Mock Exam is available

VMware has released the mock exam for VCP on vSphere 5. The VCP5 mock exam consists of 30 questions and is available here. The official VCP5 exam can be scheduled at the end of this month. Check your local VUE test centre for available dates.


Tuesday, September 13, 2011

Vblock Infrastructure Packages - Integrated best-of-breed packages from VMware, Cisco and EMC


IMAGINE a different model, a hybrid model, where best-of-breed companies in disciplines critical to IT – networking, servers, storage, and the virtualization layer – all come together to deliver IT to the business in a new, accelerated, deceptively simple, and startlingly cost-effective way. IMAGINE no more. Cisco and EMC, together with VMware, are putting you on a new road to greater efficiency, control, and choice. A faster road to unprecedented IT agility and unbounded business opportunities. With the Virtual Computing Environment’s Vblock experience.


Vblock Infrastructure Packages: A Scalable Platform for Building Solutions




 • Vblock 2 (3000 – 6000+ VMs)
  • A high-end configuration, extensible to meet the most demanding IT needs
  • Typical use case: business-critical ERP and CRM systems
 • Vblock 1 (800 – 3000+ VMs)
  • A mid-sized configuration with a broad range of IT capabilities for organizations of all sizes
  • Typical use case: shared services – email, file and print, virtual desktops, etc.
 • Vblock 0 (300 – 800+ VMs), ~1H 2010
  • An entry-level configuration that addresses small datacenters or organizations
  • Test/development platform for partners and customers



Vblock 0 Components

Compute: Cisco UCS B-series
Network: Cisco Nexus 1000V, Cisco MDS 9506
Storage: EMC CLARiiON CX4
Hypervisor: VMware vSphere 4
Management: EMC Ionix Unified Infrastructure Manager, VMware vCenter, EMC NaviSphere, EMC PowerPath, Cisco UCS Manager, Cisco Fabric Manager
Vblock 1 Components

Compute: Cisco UCS B-series
Network: Cisco Nexus 1000V, Cisco MDS 9506
Storage: EMC Symmetrix V-Max
Hypervisor: VMware vSphere 4
Management: EMC Ionix Unified Infrastructure Manager, VMware vCenter, EMC Symmetrix Management Console, EMC PowerPath, Cisco UCS Manager, Cisco Fabric Manager

Vblock 2 Components

  • Network and storage components: balanced system performance, capability, and capacity
  • Compute components: high-density compute environment
  • Accelerating virtualization: accelerates IT standardization and simplification

Vblock: O/S and Application Support

Vblock accelerates virtualization of applications by standardizing IT infrastructure and IT processes.

  • Broad range of O/S support
  • Over 300 enterprise applications explicitly supported
Vblock Applications

SAP
VMware View 3.5 (View 4 in test)
Oracle RAC
Exchange
SharePoint

Accelerate virtualization, standardize IT infrastructure.

[Diagram: VMware vSphere 4.0 architecture – vCompute, vStorage, vNetwork; Scalability, Security, Availability; vCenter 4.0; Infrastructure and Application APIs]
