Friday, December 11, 2015

Installing ESXi 6 Using PXE - vSphere 6

You can use the preboot execution environment (PXE) to boot a host. Starting with vSphere 6.0, you can PXE boot the ESXi installer from a network interface on hosts with legacy BIOS or UEFI firmware.

ESXi is distributed in an ISO format that is designed to install to flash memory or to a local hard drive. You can extract the files and boot by using PXE. PXE uses Dynamic Host Configuration Protocol (DHCP) and Trivial File Transfer Protocol (TFTP) to boot an operating system over a network.

PXE booting requires some network infrastructure and a machine with a PXE-capable network adapter. Most machines that can run ESXi have network adapters that can PXE boot.

NOTE PXE booting with legacy BIOS firmware is possible only over IPv4. PXE booting with UEFI firmware is possible with either IPv4 or IPv6.

Overview of PXE Boot Installation Process


The interaction between the ESXi host and other servers proceeds as follows:
  1. The user boots the target ESXi host.
  2. The target ESXi host makes a DHCP request.
  3. The DHCP server responds with the IP information and the location of the TFTP server.
  4. The ESXi host contacts the TFTP server and requests the file that the DHCP server specified.
  5. The TFTP server sends the network boot loader, and the ESXi host executes it. The initial boot loader might load additional boot loader components from the TFTP server.
  6. The boot loader searches for a configuration file on the TFTP server, downloads the kernel and other ESXi components from the HTTP server or the TFTP server, and boots the kernel on the ESXi host.
  7. The installer runs interactively or using a kickstart script, as specified in the configuration file.
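
To make the flow above concrete, here is a minimal Python sketch (an illustration, not the official procedure) that writes the two server-side pieces behind steps 2 through 6: an ISC dhcpd fragment pointing legacy-BIOS clients at a TFTP server and the PXELINUX boot loader, and a PXELINUX menu entry that chain-loads the ESXi boot loader (mboot.c32) with the installer's boot.cfg. Every IP address, path, and directory name in it is a placeholder for your own environment.

```python
# A rough sketch only: generate an ISC dhcpd fragment and a PXELINUX menu entry
# for legacy-BIOS PXE boot of the ESXi installer. All values are placeholders.
from pathlib import Path

TFTP_ROOT = Path("/tftpboot")      # directory your TFTP daemon serves
TFTP_SERVER = "192.168.10.5"       # IP of the TFTP server ("next-server" in DHCP terms)
ESXI_DIR = "esxi60"                # subdirectory holding the extracted ESXi ISO contents

dhcpd_fragment = f"""
subnet 192.168.10.0 netmask 255.255.255.0 {{
    range 192.168.10.100 192.168.10.200;
    option routers 192.168.10.1;
    next-server {TFTP_SERVER};     # step 3: tells the host where the TFTP server is
    filename "pxelinux.0";         # steps 4-5: the initial network boot loader to fetch
}}
"""

pxelinux_default = f"""
DEFAULT install
LABEL install
  KERNEL mboot.c32
  APPEND -c {ESXI_DIR}/boot.cfg
  IPAPPEND 2
"""
# Note: if the ESXi files live in a subdirectory, the prefix= line inside boot.cfg
# usually needs to point at that directory as well.

(TFTP_ROOT / "pxelinux.cfg").mkdir(parents=True, exist_ok=True)
(TFTP_ROOT / "pxelinux.cfg" / "default").write_text(pxelinux_default.strip() + "\n")
Path("dhcpd-esxi.conf").write_text(dhcpd_fragment.strip() + "\n")
print("Wrote pxelinux.cfg/default and dhcpd-esxi.conf; merge the latter into dhcpd.conf.")
```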

Alternative Approaches to PXE Booting

Alternative approaches to PXE booting different software on different hosts are also possible, for example:
  1. Configuring the DHCP server to provide different initial boot loader filenames to different hosts depending on MAC address or other criteria. See your DHCP server's documentation.
  2. Using iPXE as the initial boot loader with an iPXE configuration file that selects the next boot loader based on the MAC address or other criteria.

Tuesday, October 27, 2015

VMware ESXi 6.0 U1a / vSphere 6.0 Update 1 Download available


VMware has recently published a patch for ESXi 6.0.


VMware ESXi 6.0 Update 1a Release Notes

What's New


This release of VMware ESXi contains the following enhancements:

I/O Filter: vSphere APIs for I/O Filtering (VAIO) provide a framework that allows third parties to create software components called I/O filters. The filters can be installed on ESXi hosts and can offer additional data services to virtual machines by processing I/O requests that move between the guest operating system of a virtual machine and virtual disks. 

Exclusive affinity to additional system contexts associated with a low-latency VM: This release introduces a new VMX option sched.cpu.latencySensitivity.sysContexts to address issues on vSphere 6.0 where most system contexts are still worldlets. The Scheduler utilizes the sched.cpu.latencySensitivity.sysContexts option for each virtual machine to automatically identify a set of system contexts that might be involved in the latency-sensitive workloads. For each of these system contexts, exclusive affinity to one dedicated physical core is provided. The VMX option sched.cpu.latencySensitivity.sysContexts denotes how many exclusive cores a low-latency VM can get for the system contexts. 
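
The release notes do not show how the option is applied. As a rough illustration only, the pyVmomi sketch below adds it to a VM's advanced settings (extraConfig); the vCenter address, credentials, VM name, and the value "1" are placeholders, not values taken from the release notes.

```python
# Hedged sketch: set an advanced VMX option on an existing VM with pyVmomi.
import atexit
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
atexit.register(Disconnect, si)

# Find the VM by name (first match) via a container view.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "lowlatency-vm01")
view.Destroy()

# extraConfig is the generic mechanism for adding advanced VMX options via the API.
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="sched.cpu.latencySensitivity.sysContexts", value="1")])
WaitForTask(vm.ReconfigVM_Task(spec=spec))
print("Reconfigure task completed.")
```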

ESXi Authentication for Active Directory: ESXi is modified to support only AES256-CTS/AES128-CTS/RC4-HMAC encryption for Kerberos communication between ESXi and Active Directory.

Support for SSLv3: Support for SSLv3 has been disabled by default. For further details, see Knowledge Base article 2121021.

Dying Disk Handling (DDH): The Dying Disk Handling feature provides a latency monitoring framework in the kernel, a daemon to detect high latency periods, and a mechanism to unmount individual disks and disk groups.

Stretched Clusters: Virtual SAN 6.0 Update 1 supports stretched clusters that span geographic locations to protect data from site failures or loss of network connection.



Monday, September 7, 2015

VMware Site Recovery Manager (SRM) 6.1

Site Recovery Manager 6.1 introduces some useful and long-awaited features:

  • Support for stretched storage and orchestrated vMotion
  • Enhanced integration with VMware NSX
  • Storage Profile Based Protection

As a quick reminder, Site Recovery Manager enables simplified automation of disaster recovery, and the architecture of SRM has not changed. Depending on RPO requirements, SRM can use vSphere Replication or array-based replication.

Storage Profile Based Protection


Site Recovery Manager 6.1 adds a new type of protection group: the storage policy-based protection group. Storage policy-based protection groups use vSphere storage policies to identify protected datastores and virtual machines. They automate the process of protecting and unprotecting virtual machines and of adding and removing datastores from protection groups. Storage policy-based protection groups enable deep integration with virtual machine provisioning tools like VMware vRealize Automation. This combination makes it easier than ever to deploy and protect virtual machines.

Storage policy-based protection groups use vSphere tags in combination with vSphere storage policy based management to enable automated, policy-based protection for virtual machines. Storage policy-based management enables vSphere administrators to automate the provisioning and management of virtual machine storage to meet requirements such as performance, availability, and protection. vSphere tags attach metadata to vSphere inventory objects, in this case datastores, which makes those objects easier to sort, search, and associate with storage policies.


Stretched Storage and Orchestrated vMotion

Site Recovery Manager 6.1 now supports using cross-vCenter vMotion in combination with stretched storage, such as NetApp MetroCluster, so you can combine the benefits of Site Recovery Manager with the advantages of stretched storage.

Prior to Site Recovery Manager 6.1, customers had to choose between Site Recovery Manager and vSphere Metro Storage Clusters/stretched storage to provide a multisite solution optimized for either site mobility or disaster recovery, without being able to attain the benefits of both simultaneously. Site Recovery Manager 6.1 removes that trade-off.




Enhanced integration with VMware NSX




Networking is typically one of the more complex and cumbersome aspects of a disaster recovery plan. Ensuring that the proper networks, firewall rules, and routing are configured correctly and available can be quite challenging. Building an isolated test network with all the same capabilities can be even more so. Additionally, solutions like cross-vCenter vMotion require a stretched layer-2 network, which can create even more difficulty.

NSX 6.2 has a number of new features which enhance Site Recovery Manager. This means that organizations can now use NSX and Site Recovery Manager to simplify the creation, testing and execution of recovery plans as well as accelerate recovery times.

NSX 6.2 supports creating “Universal Logical Switches”, which allow for the creation of layer-2 networks that span vCenter boundaries. This means that when using Universal Logical Switches with NSX, there will be a virtual port group at both the protected and recovery sites that connects to the same layer-2 network.

Friday, August 7, 2015

Hyper-Converged




Corporate technology undergoes a massive shift every so often as new models emerge to meet changing business needs. This chapter is about hyper-converged infrastructure, which is the culmination and conglomeration of a number of trends, all of which provide specific value to the modern enterprise.

So, what is hyper-convergence? At the highest level, hyper-convergence is a way to enable cloud-like economics and scale without compromising the performance, reliability, and availability you expect in your own data center. Hyper-converged infrastructure provides significant benefits:

  • Elasticity: Hyper-convergence makes it easy to scale out/ in resources as required by business demands.
  • VM-centricity: A focus on the virtual machine (VM) or workload as the cornerstone of enterprise IT, with all supporting constructs revolving around individual VMs.
  • Data protection: Ensuring that data can be restored in the event of loss or corruption is a key IT requirement, made far easier by hyperconverged infrastructure.
  • VM Mobility: Hyper-convergence enables greater application/workload mobility.
  • High availability: Hyper-convergence enables higher levels of availability than possible in legacy systems.
  • Data efficiency: Hyper-converged infrastructure reduces storage, bandwidth, and IOPS requirements.
  • Cost efficiency: Hyper-converged infrastructure brings to IT a sustainable step-based economic model that eliminates waste.
Hyperconvergence is the ultimate in an overall trend of convergence that has hit the market in recent years. Convergence is intended to bring simplicity to increasingly complex data centers.

Hyper-Convergence Constructs

Convergence comes in many forms. At its most basic, convergence simply brings together existing individual storage, compute, and network switching products into pre-tested, pre-validated solutions sold as a single solution. However, this level of convergence only simplifies the purchase and upgrade cycle. It fails to address ongoing operational challenges that have been introduced with the advent of virtualization. There are still LUNs to create, WAN optimizers to acquire and configure, and third-party backup and replication products to purchase and maintain.

Hyper-convergence is a ground-up rethinking of all the services that comprise the data center. With a focus on the virtual machine or workload, all the elements of the hyper-converged infrastructure support the virtual machine as the basic construct of the data center.

The results are significant and include lower CAPEX as a result of lower upfront prices for infrastructure, lower OPEX through reductions in operational expenses and personnel, and faster time-to-value for new business needs. On the technical side, newly emerging infrastructure engineers, people with broad knowledge of infrastructure and business needs, can easily support hyper-converged infrastructure. No longer do organizations need to maintain separate islands of resource engineers to manage each aspect of the data center. To fully understand hyper-convergence, it’s important to understand the trends that have led the industry to this point. These include post-virtualization headaches and the rise of the software-defined data center and cloud.

Tuesday, July 21, 2015

How to upgrade Site Recovery Manager 5.8 to version 6.0

Site Recovery Manager 6.0 has been out for a while now, so it is a good time to look at the upgrade process, especially since the original release build received a patch in April 2015. Before we jump into the upgrade process, let’s have a quick look at what’s new in Site Recovery Manager 6.0.


  • Support for VMware vSphere 6.0, including integration with shared infrastructure components such as Platform Services Controller and vCenter Single Sign On.
  • Support for Storage vMotion and Storage DRS on both the protected and recovery sites.
  • Protection and recovery of virtual machines in IPv6 environments.
  • IP customization enhancements to support dual-protocol IP configurations and independent IPv4 and IPv6 configurations.
As in previous releases, you can upgrade an existing vCenter Site Recovery Manager installation, and the upgrade process will preserve existing information and settings (except the advanced settings) in your current vCenter Site Recovery Manager deployment. Before you begin, make sure you have read the following:
Note: As per the vCenter Site Recovery Manager Installation and Configuration guide, upgrading from SRM 5.0.x and 5.1.x to SRM 6.0 is not supported. Upgrade SRM 5.0.x and 5.1.x to SRM 5.5.x or 5.8.x before you upgrade to SRM 6.0.

Improved interoperability with SDRS and Storage vMotion

In previous versions of SRM there was basic compatibility with SDRS and Storage vMotion, with some documented caveats. Specifically, Storage vMotion would not throw a warning if an attempt was made to move a VM out of its consistency group, and for SDRS, datastore clusters could only contain datastores from the same consistency group, because otherwise SDRS could easily move VMs out of it.

With SRM 6.0 we now have full compatibility with SDRS and greatly enhanced support for Storage vMotion. Datastore clusters can now contain any set of datastores, with no restrictions, and SDRS will only make automatic migrations between two non-replicated datastores or between datastores in the same consistency group. Also, Storage vMotion will now generate a warning if a user attempts to move an SRM-protected VM.

This makes it much easier for users to create and use datastore clusters, SDRS, and Storage vMotion without having to worry about the impact on recovery of their VMs.

Simplified SSL certificate requirements


SRM 6.0 is now deeply integrated with SSO, using it for authentication and SAML token acquisition, among other things. This integration with SSO also allowed the external certificate requirements to be relaxed significantly. Previously, certificates were used both for authenticating to the associated vCenter Server and between SRM instances. This imposed a number of restrictions and requirements that made deploying certificates time consuming and difficult. The SRM 6.0 Installation guide provides a detailed description of the new simplified certificate requirements. These new requirements make SRM environments more secure and easier to deploy and maintain.

Integration with vSphere 6.0 platform services (SSO, Authorization, Licensing, Tags, etc.)


SRM is now fully integrated with and supported by vSphere 6.0. This has benefits for authentication (SSO), tagging (now shared), and the lookup service, to name just a few. SRM 6.0 requires:
  • vCenter Server 6.0
  • vSphere Web Client 6.0
  • If using vSphere Replication, version 6.0

Note: The large majority of SRAs that were compatible with SRM 5.8 remain compatible with SRM 6.0. Check the Compatibility Guide for details and confirm with your array vendor if you have questions.

Also, since vSphere Replication 6.0 now supports up to 2000 VMs, this is now supported in SRM 6.0 as well. All other SRM limits remain the same for this release.

The integration of SRM and the separation of the platform services controller (PSC) from vCenter creates a number of new topology possibilities. These topologies can impact SRM so make sure to read this KB List of recommended topologies for vSphere 6.0.x (2108548) when planning your upgrade or deployment. A more detailed post about topologies and SRM will be published soon.

Also be aware that, because SRM integrates with SSO, vCenter Server, and the PSC, time synchronization among all of these components is important.

IP customization enhancements

The dr-ip-customizer tool for updating VM IP addresses now allows IPv4 and IPv6 static addresses to be specified simultaneously and is backward compatible with spreadsheets generated by SRM 5.x. This increases the flexibility of the tool while maintaining compatibility with previous releases.

Upgrade Order

As described in the vCenter Site Recovery Manager Installation and Configuration guide, you must upgrade certain components of your vSphere environment before you upgrade vCenter Site Recovery Manager. As always, upgrade the components on the protected site before you upgrade the components on the recovery site. Upgrading the protected site first allows you to perform a disaster recovery on the recovery site if you encounter problems during the upgrade that render the protected site unusable. The exception is the ESXi hosts, which you can upgrade after you finish upgrading the other components on the protected and recovery sites.

When upgrading to Site Recovery Manager 6.0, follow the steps described below.

1. Upgrade all components of vCenter Server on the protected site.
2. If you use vSphere Replication, upgrade the vSphere Replication deployment on the protected site.
3. Upgrade Site Recovery Manager Server on the protected site.
4. If you use array-based replication, upgrade the storage replication adapters (SRA) on the protected site.
5. Upgrade all components of vCenter Server on the recovery site.
6. If you use vSphere Replication, upgrade the vSphere Replication deployment on the recovery site.
7. Upgrade Site Recovery Manager Server on the recovery site.
8. If you use array-based replication, upgrade the storage replication adapters (SRA) on the recovery site.
9. Verify the connection between the Site Recovery Manager sites.
10. Verify that your protection groups and recovery plans are still valid.
11. Upgrade ESXi Server on the recovery site (optional).
12. Upgrade ESXi Server on the protected site (optional).
13. Upgrade the virtual hardware and VMware Tools on the virtual machines on the ESXi hosts.

Source: vCenter Site Recovery Manager 6.0 Installation and Configuration guide.

At this point the Site Recovery Manager is upgraded to version 6.0. If you are logged in to the vSphere Web Client, log out, clear the browser cache and log back in. The Site Recovery Manager extension should be visible now.

Next, upgrade the Protected Site and then verify your protection groups, recovery plans etc.


Thursday, June 25, 2015

Datastore un-mounting | Dead path issue (APD)

From vSphere 5.1 onwards the process for removing a datastore has changed: before removing any datastore from ESXi hosts or a cluster, you must right-click the datastore and unmount it.

Unmounting is not the only step in removing a LUN from ESXi hosts; there are a few additional pre-checks and post-tasks. In particular, detaching the device from the host is a must before you request that the storage administrator unpresent the LUN from the backend storage array.

This process needs to be followed properly, otherwise it may cause serious issues such as dead paths or an all-paths-down (APD) condition on the ESXi host. Before unmounting, verify the following:
  1. If the LUN is being used as a VMFS datastore, ensure all objects (such as virtual machines, snapshots, templates, and vSphere HA configuration files) stored on the VMFS datastore are unregistered or moved to another datastore.
  2. Ensure the datastore is not used for vSphere HA heartbeating.
  3. Ensure the datastore is not part of a Storage DRS (SDRS) datastore cluster.
  4. Ensure Storage I/O Control is disabled for the datastore.
  5. Ensure no ISO images are mounted from the datastore and no scripts or utilities are accessing it.


Process to remove a datastore or LUN from ESXi 5.x hosts

  • Select the ESXi host -> Configuration -> Storage -> Datastores. (Note down the NAA ID of the datastore, which starts with naa.xxx.)
  • Right-click the datastore you want to unmount and select Unmount.
  • Confirm that all datastore unmount pre-checks are marked with a green check mark and click OK.
  • Select the ESXi host -> Configuration -> Storage -> Devices. Match the device with the NAA ID (naa.xxx), right-click the device, and select Detach. Verify all the green checks and click OK to detach the LUN. Repeat the same steps on every ESXi host from which you want to unpresent this datastore (a scripted equivalent is sketched after this list).
  • Ask your storage team to physically unpresent the LUN from the ESXi hosts using the appropriate array tools.
  • Rescan the ESXi hosts and verify that the detached LUNs have disappeared.
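
For larger environments, the unmount and detach steps can be scripted. The pyVmomi sketch below is an illustration only and assumes the pre-checks above are already done; the vCenter address, credentials, datastore name, and naa. identifier are placeholders.

```python
# Hedged sketch: unmount a VMFS datastore and detach its backing LUN on every host.
import atexit
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
atexit.register(Disconnect, si)

DS_NAME = "old-datastore01"                         # datastore to remove (placeholder)
NAA_ID = "naa.600xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # device backing it (placeholder)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
hosts = list(view.view)
view.Destroy()

for host in hosts:
    ss = host.configManager.storageSystem
    # Unmount the VMFS volume on each host where the datastore is present.
    for ds in host.datastore:
        if ds.name == DS_NAME and isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            ss.UnmountVmfsVolume(vmfsUuid=ds.info.vmfs.uuid)
    # Detach the backing SCSI LUN (the "Detach" step in the UI).
    for lun in ss.storageDeviceInfo.scsiLun:
        if lun.canonicalName == NAA_ID:
            ss.DetachScsiLun(lunUuid=lun.uuid)
    # Rescan so the detached device disappears from the host's view.
    ss.RescanAllHba()
```

If you prefer the command line, the per-host esxcli equivalents are esxcli storage filesystem unmount and esxcli storage core device set --state=off.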


Monday, June 8, 2015

vSphere 6: vMotion enhancements

With vSphere 6.0 you can migrate Virtual Machines across virtual switches. The new vMotion workflow allows you to choose the destination network which can be on a vSwitch or vDS. This feature eliminates the need to span virtual switches across two locations.
 
VMware vSphere vMotion capabilities have been enhanced in vSphere 6, enabling users to perform live migration of virtual machines across virtual switches, across vCenter Server systems, and across long distances of up to 100 ms RTT.
 
vSphere administrators now can migrate across vCenter Server systems, enabling migration from a Windows version of vCenter Server to vCenter Server Appliance or vice versa, depending on specific requirements. Previously, this was a difficult task and caused a disruption to virtual machine management. This can now be accomplished seamlessly without losing historical data about the virtual machine.
 
Cross vSwitch vMotion
 
Cross vSwitch vMotion allows you to seamlessly migrate a virtual machine across different virtual switches while performing a vMotion. This means that you are no longer restricted by the networks you created on the vSwitches in order to vMotion a virtual machine.
 
vMotion will work across a mix of switches (standard and distributed). Previously, you could only vMotion from vSS to vSS or within a single vDS. This limitation has been removed.
 
The following Cross vSwitch vMotion migrations are possible:
  • vSS to vSS migration.
  • vSS to vDS migration.
  • vDS to vDS migration (transferring VDS port metadata)
  • vDS to vSS migration is not allowed.
Cross vCenter vMotion


But Cross vSwitch vMotion is not the only vMotion enhancement. vSphere 6 also introduces support for Cross vCenter vMotion. vMotion can now perform the following changes simultaneously:
 
  • Change compute (vMotion) – Performs the migration of virtual machines across compute hosts.
  • Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores.
  • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches.
  • Change vCenter (Cross vCenter vMotion) – Changes the vCenter Server instance that manages the VM.
 
All of these types of vMotion are seamless to the guest OS.
 
As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity, since the IP of the VM will not be changed. This functionality builds upon Enhanced vMotion, and shared storage is not required. Target support covers local (single site), metro (multiple well-connected sites), and cross-continental sites.

With vSphere 6 vMotion you can now:

  • Migrate from a VCSA to a Windows version of vCenter and vice versa.
  • Replace or retire a vCenter Server without disruption.
  • Pool resources across vCenter Servers where additional vCenters were used due to vCenter scalability limits.
  • Migrate VMs across local, metro, and continental distances.
  • Span public/private cloud environments with several vCenters.

There are several requirements for Cross vCenter vMotion to work:

  • Only vCenter 6.0 and greater is supported. All instances of vCenter prior to version 6.0 will need to be upgraded before this feature will work. For example, a mix of vCenter 5.5 and 6.0 will not work.
  • Both the source and the destination vCenter Servers will need to be joined to the same SSO domain if you want to perform the vMotion using the vSphere Web Client. If the vCenter Servers are joined to different SSO domains, it’s still possible to perform a Cross vCenter vMotion, but you must use the API (see the sketch after this list).
  • You will need at least 250 Mbps of available network bandwidth per vMotion operation.
  • Lastly, although not technically required for the vMotion to complete successfully, L2 connectivity is needed between the source and destination port groups. When a Cross vCenter vMotion is performed, a Cross vSwitch vMotion is done as well. The source and destination port groups for the VM will need to share an L2 network because the IP address within the guest OS is not updated.
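
For the API route, the sketch below shows what a cross-vCenter relocate can look like with pyVmomi: the RelocateSpec carries a ServiceLocator that tells the source vCenter Server how to reach the destination one. Treat it as an illustration; all host names, credentials, and object names are placeholders, and error handling is omitted.

```python
# Hedged sketch: cross-vCenter relocation of a VM via RelocateVM_Task.
import atexit
import hashlib
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(si, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

def ssl_thumbprint(host, port=443):
    """SHA-1 thumbprint of the server certificate, colon separated, as vCenter expects."""
    der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
    digest = hashlib.sha1(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

ctx = ssl._create_unverified_context()            # lab only; validate certificates in production
src = SmartConnect(host="vc-src.lab.local", user="administrator@vsphere.local",
                   pwd="***", sslContext=ctx)
dst = SmartConnect(host="vc-dst.lab.local", user="administrator@vsphere.local",
                   pwd="***", sslContext=ctx)
atexit.register(Disconnect, src)
atexit.register(Disconnect, dst)

vm = find_obj(src, vim.VirtualMachine, "app01")
spec = vim.vm.RelocateSpec(
    host=find_obj(dst, vim.HostSystem, "esx01.lab.local"),
    pool=find_obj(dst, vim.ClusterComputeResource, "Cluster-A").resourcePool,
    datastore=find_obj(dst, vim.Datastore, "datastore-a"),
    folder=find_obj(dst, vim.Folder, "vm"),
    # The ServiceLocator tells the source vCenter how to reach the destination vCenter.
    service=vim.ServiceLocator(
        url="https://vc-dst.lab.local",
        instanceUuid=dst.content.about.instanceUuid,
        sslThumbprint=ssl_thumbprint("vc-dst.lab.local"),
        credential=vim.ServiceLocatorNamePassword(
            username="administrator@vsphere.local", password="***")))

task = vm.RelocateVM_Task(spec=spec)
print("Started cross-vCenter relocation task:", task.info.key)
```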

vSphere 6.0 New Features – Content Library

One of the new features of vSphere 6 is the Content Library. The Content Library provides simple and effective management of VM templates, vApps, ISO images, and scripts, collectively called “content”, for vSphere administrators.
 
Sometimes ISO images and other files needed for VM creation are spread across datastores because multiple administrators manage the vSphere infrastructure, which can lead to duplicated content. To mitigate this issue, vSphere 6.0 introduces the concept of the Content Library, which provides a centralized place for storing your content.
 

Advantages of the Content Library
 
The Content Library can be synchronized across sites and vCenter Servers. Sharing consistent templates and files across multiple vCenter Servers in the same or different locations brings consistency, compliance, efficiency, and automation to deploying workloads at scale.
 
Following are some of the features of the content library:
 
  • Store and manage content – One central location to manage all content such as VM templates, vApps, ISOs, and scripts. This release has a maximum of 10 libraries and 250 items per library, and it is a built-in feature of vCenter Server, not a plug-in that you have to install separately.
  • Share content – Once the content is published on one vCenter Server, you can subscribe to it from other vCenter Servers. This is similar to the catalog option in vCloud Director.
  • Templates – VMs are stored as OVF packages rather than templates. This affects the template creation process: if you want to make changes to an OVF template in the Content Library, you have to create a VM out of it first, make the changes, and then export it back to an OVF template and into the library.
  • Network – The Content Library communicates over port 443, and there is an option to limit the sync bandwidth.
  • Storage – The Content Library can be stored on datastores, NFS, CIFS, local disks, etc., as long as the path to the library is accessible locally from the vCenter Server and the vCenter Server has read/write permissions.
 
Creating a Content Library
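
Libraries can also be created programmatically. The sketch below uses the vSphere Automation REST endpoints as they appear in later 6.x releases (vSphere 6.5 and newer); on vSphere 6.0 itself the same operations are exposed through the vSphere Automation SDKs, so treat the URL paths and payload shape as assumptions to verify against your environment. The vCenter address, credentials, and datastore ID are placeholders.

```python
# Rough sketch: create (and optionally publish) a local content library over REST.
import requests

VC = "https://vcenter.lab.local"
S = requests.Session()
S.verify = False                              # lab only; use proper certificates in production

# Authenticate and keep the session token in the header the API expects.
token = S.post(f"{VC}/rest/com/vmware/cis/session",
               auth=("administrator@vsphere.local", "***")).json()["value"]
S.headers["vmware-api-session-id"] = token

create_spec = {
    "create_spec": {
        "name": "corp-templates",
        "description": "Shared VM templates and ISO images",
        "type": "LOCAL",
        # Back the library with a datastore the vCenter Server can read and write.
        "storage_backings": [{"type": "DATASTORE", "datastore_id": "datastore-11"}],
        # Optional: publish the library so other vCenter Servers can subscribe to it.
        "publish_info": {"published": True, "authentication_method": "NONE"},
    }
}
resp = S.post(f"{VC}/rest/com/vmware/content/local-library", json=create_spec)
print("Library ID:", resp.json().get("value"))
```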

 
 
Selecting Storage for the Content Library
 

Deploying a Virtual Machine to a Content Library




You can clone virtual machines or virtual machine templates to templates in the content library and use them later to provision virtual machines on a virtual data center, a data center, a cluster, or a host.
 
Publishing a Content Library for External Use
 


You can publish a content library for external use and add password protection by editing the content library settings:

Users access the library through the system-generated subscription URL.

Monday, April 6, 2015

VCP6-DCV Certification Now Available




VMware announced the new roadmap of version 6 certifications and mentioned that there would be migration paths for those who held certifications on the previous version. The new version 6 exams are listed below.



  • VCP6: Data Center Virtualization Exam (exam number: 2V0-621) 
  • VCP6: Data Center Virtualization Delta Exam (exam number: 2V0-621D)
  • VCP6: Desktop & Mobility Exam (exam number: 2V0-651) 
  • VCP6: Cloud Management and Automation Exam (exam number: 2V0-631) 
  • VCP6: Network Virtualization Exam (exam number: 2V0-641) 


Source: VMware

VMware vSphere 5.x and 6.x Configuration Maximums

Here are the updated configuration limits and maximums for VMware vSphere, covering versions 5.x and 6.0.

Please find the table summaries below.


Compute

Virtual Machine Maximums
vSphere Configuration Item                          v5.0    v5.1    v5.5    v6.0
Virtual CPUs per virtual machine (Virtual SMP)      32      64      64      128

Memory

Virtual Machine Maximums
vSphere Configuration Item          v5.0    v5.1    v5.5    v6.0
RAM per virtual machine             1TB     1TB     1TB     4TB
Virtual machine swap file size      1TB     1TB     1TB     4TB

Virtual Peripheral Ports

Virtual Machine Maximums
vSphere Configuration Item          v5.0    v5.1    v5.5    v6.0
Serial ports per virtual machine    4       4       4       32

Storage Virtual Adapters and Devices

Virtual Machine Maximums
vSphere Configuration Item                      v5.0                  v5.1                  v5.5    v6.0
Virtual disk size                               2TB minus 512 bytes   2TB minus 512 bytes   62TB    62TB
Virtual SATA adapters per virtual machine       -                     -                     4       4
Virtual SATA devices per virtual SATA adapter   -                     -                     30      30

Graphics video device

Virtual Machine Maximums

Host CPU maximums

ESXi Host Maximums
vSphere Configuration Item    v5.0    v5.1    v5.5    v6.0
Logical CPUs per host         160     160     320     480
NUMA nodes per host           8       8       16      16

Virtual machine maximums

ESXi Host Maximums
vSphere Configuration Item    v5.0    v5.1    v5.5    v6.0
Virtual machines per host     512     512     512     1024
Virtual CPUs per host         2048    2048    4096    4096
Virtual CPUs per core         25      25      32      32

Fault Tolerance maximums

ESXi Host Maximums
vSphere Configuration Item          v5.0    v5.1    v5.5    v6.0
Virtual CPUs per virtual machine    1       1       1       4
Virtual CPUs per host               -       -       -       8

Memory Maximums

ESXi Host Maximums
vSphere Configuration Item    v5.0    v5.1    v5.5    v6.0
RAM per host                  2TB     2TB     4TB     6TB (12TB on specific OEM certified platforms)

Fibre Channel

ESXi Host Maximums
vSphere Configuration Item    v5.0    v5.1    v5.5    v6.0
LUN ID                        255     255     255     1023

VMFS5

ESXi Host Maximums
vSphere Configuration Item                         v5.0                  v5.1                  v5.5    v6.0
Raw Device Mapping size (virtual compatibility)    2TB minus 512 bytes   2TB minus 512 bytes   62TB    62TB
Raw Device Mapping size (physical compatibility)   2TB minus 512 bytes   2TB minus 512 bytes   64TB    64TB
File size                                          2TB minus 512 bytes   2TB minus 512 bytes   62TB    62TB

Physical NICs

Networking Maximums

VMDirectPath limits

Networking Maximums

vSphere Standard and Distributed Switch

Networking Maximums

Cluster (all clusters including HA and DRS)

Cluster and Resource Pool Maximums

Resource Pool

Cluster and Resource Pool Maximums

vCenter Server Scalability

vCenter Server Maximums

vCenter Server Appliance

vCenter Server Maximums

vCenter Server Windows embedded/packaged vPostgres

vCenter Server Maximums

Content Library

vCenter Server Maximums

Host Profile

vCenter Server Maximums

Single Sign On

vCenter Server Maximums

Domain/Replication

Platform Service Controller maximums

Identity Source

Platform Service Controller maximums

Enhanced Linked Mode/Lookup Service

Platform Service Controller maximums

VMCA/Certificate

Platform Service Controller maximums

VMware vCenter Update Manager

vCenter Update Manager Maximums

VMware vCenter Orchestrator

vCenter Orchestrator Maximums

Storage DRS

Storage DRS Maximums

Virtual SAN ESXi host

Virtual SAN Maximums

Virtual SAN Cluster

Virtual SAN Maximums

Virtual SAN virtual machines

Virtual SAN Maximums

Virtual Volumes

Virtual Volumes Maximums

Network I/O Control (NIOC)

Network I/O Control (NIOC) Maximums

vCloud Director Maximums

vCloud Director Maximums
