Most Hyper-Converged Infrastructure (HCI) solutions require computing resources for storage processing that has traditionally been offloaded to dedicated storage arrays. Nearly all other HCI solutions require the deployment of storage virtual appliances on some or all hosts in the cluster. These appliances provide storage services to each host. Storage virtual appliances typically require dedicated CPU and/or memory to avoid resource contention with other virtual machines.
Running a storage virtual appliance on every host in the cluster reduces the overall amount of computing resources available to run regular virtual machine workloads. Consolidation ratios are lower and total cost of ownership rises when these storage virtual appliances are present and competing for the same resources as regular virtual machine workloads.
Storage virtual appliances can also introduce additional latency, which negatively affects performance. This is due to the number of steps required to handle and replicate write operations, as shown in the figure below.
Figure: Storage Controller Virtual Appliance HCI Solution
vSAN is Native in the vSphere Hypervisor
vSAN does not require the deployment of storage virtual appliances or the installation of a vSphere Installation Bundle (VIB) on every host in the cluster. vSAN is native in the vSphere hypervisor and typically consumes less than 10% of the computing resources on each host. vSAN does not compete with other virtual machines for resources and the I/O path is shorter.
A shorter I/O path and the absence of resource-intensive storage virtual appliances enable vSAN to provide excellent performance with minimal overhead. Higher virtual machine consolidation ratios translate into lower total cost of ownership.
vSAN Cluster Types
vSAN runs on standard x86 servers from more than 15 OEMs. Deployment options include over 500 vSAN ReadyNode choices, integrated systems such as Dell EMC VxRail and Dell EMC VxRack SDDC, and build-your-own configurations using validated hardware on the VMware Compatibility List. vSAN is a great fit for deployments large and small, with options ranging from a 2-node cluster for small implementations to multiple clusters, each with as many as 64 nodes, all centrally managed by vCenter Server.
vSAN supports a standard cluster of three or more nodes, a 2-node cluster for remote offices, and a stretched cluster solution.
Standard Cluster
A standard vSAN cluster consists of a minimum of three physical nodes and can be scaled to 64 nodes. All the hosts in a standard cluster are commonly located at a single location and are well-connected on the same Layer-2 network. 10Gb network connections are required for all-flash configurations and highly recommended for hybrid configurations.
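To make these sizing and network rules concrete, the following minimal Python sketch checks a proposed standard cluster design against them. It is illustrative only; the function name and structure are assumptions for this example, not part of any VMware tool.

def validate_standard_cluster(node_count: int, all_flash: bool, nic_gbps: int) -> list:
    """Check a proposed standard vSAN cluster design against the basic rules
    described above. Illustrative sketch only, not an official VMware utility."""
    issues = []
    if node_count < 3:
        issues.append("A standard vSAN cluster requires at least 3 nodes.")
    if node_count > 64:
        issues.append("A standard vSAN cluster scales to a maximum of 64 nodes.")
    if all_flash and nic_gbps < 10:
        issues.append("All-flash configurations require 10Gb networking.")
    if not all_flash and nic_gbps < 10:
        issues.append("10Gb networking is highly recommended for hybrid configurations.")
    return issues

# Example: a 4-node all-flash design on 1Gb NICs fails the network requirement.
print(validate_standard_cluster(node_count=4, all_flash=True, nic_gbps=1))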
2 Node Cluster
A 2-node cluster consists of two physical nodes in the same location. These hosts are usually connected to the same network switch or are directly connected. Direct connections between hosts eliminate the need to procure and manage an expensive network switch for a 2-node cluster, which lowers costs especially in scenarios such as remote office deployments. While 10Gbps connections may be directly connected, 1Gbps connections will require a crossover cable.
A third “vSAN Witness Host” is required for a 2-node configuration to avoid “split-brain” issues when network connectivity is lost between the two physical nodes. We will discuss the vSAN Witness Host in more detail shortly.
Stretched Cluster
A vSAN Stretched Cluster provides resiliency against the loss of an entire site. The hosts in a Stretched Cluster are distributed across two sites. The two sites are well-connected from a network perspective with a round trip time (RTT) latency of no more than five milliseconds (5ms). A vSAN Witness Host is placed at a third site to avoid “split-brain” issues if connectivity is lost between the two Stretched Cluster sites. A vSAN Stretched Cluster may have a maximum of 30 hosts, which can be distributed evenly or unevenly between the two sites. In cases where there is a need for more hosts across sites, additional vSAN Stretched Clusters may be used.
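As a simple illustration of the host-count and latency constraints just described, here is a small Python sketch. The function name and structure are assumptions for this example, not a VMware utility.

def validate_stretched_cluster(site_a_hosts: int, site_b_hosts: int, rtt_ms: float) -> list:
    """Check a proposed vSAN Stretched Cluster design against the constraints
    described above. Illustrative sketch only, not an official VMware utility."""
    issues = []
    if site_a_hosts + site_b_hosts > 30:
        issues.append("A Stretched Cluster supports a maximum of 30 hosts across both sites.")
    if rtt_ms > 5:
        issues.append("Inter-site round trip time must be no more than 5ms.")
    return issues

# Example: 20 + 15 hosts exceeds the 30-host maximum, so the additional hosts
# would need to go into a second Stretched Cluster.
print(validate_stretched_cluster(site_a_hosts=20, site_b_hosts=15, rtt_ms=4.0))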
vSAN Witness Host
While not a cluster type, it is important to understand the use of a vSAN Witness Host in 2 Node and Stretched Cluster vSAN deployments. This “Witness” stores metadata commonly called “witness components” for vSAN objects. Virtual machine data such as virtual disks and virtual machine configuration files are not stored on the vSAN Witness Host. The purpose of the vSAN Witness Host is to serve as a “tie-breaker” in cases where sites are network isolated or disconnected.
A vSAN Witness Host may be a physical vSphere host or a VMware-provided virtual appliance, which can be easily deployed from an OVA. When using a physical host as a vSAN Witness Host, additional licensing is required, and the host must meet some general configuration requirements. When using a vSAN Witness Appliance as the vSAN Witness Host, it can easily reside on other or existing vSphere infrastructure, with no additional licensing required.
When using 2 Node clusters for deployments such as remote office/branch office (ROBO) locations, it is common practice for vSAN Witness Appliances to reside at a primary datacenter. They may instead run at the ROBO site itself, but this requires additional infrastructure at that site.
vSAN Witness Hosts providing quorum for Stretched Clusters must be located at a tertiary site that is independent of the Preferred and Secondary Stretched Cluster sites.
One vSAN Witness Host is required for each 2 Node or Stretched Cluster vSAN deployment. Bandwidth requirements to the vSAN Witness Host are determined by the number of vSAN components in a cluster. During failover scenarios, ownership of vSAN components must be transferred to the surviving site within a five-second (5s) window. The rule of thumb is 2Mbps for every 1000 vSAN components. Maximum latency requirements to and from the vSAN Witness Host depend on the number of hosts in the cluster: 2 Node configurations are allowed up to five hundred milliseconds (500ms), while Stretched Clusters are allowed two hundred milliseconds (200ms) or one hundred milliseconds (100ms) depending on the number of hosts in the Stretched Cluster.
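To put the bandwidth rule of thumb into numbers, the short Python sketch below estimates the required bandwidth to the witness for a given component count. It is a rough illustration of the 2Mbps-per-1000-components guideline above, not an official sizing tool.

def witness_bandwidth_mbps(vsan_components: int) -> float:
    """Estimate required bandwidth to the vSAN Witness Host using the rule of
    thumb above: roughly 2Mbps for every 1000 vSAN components."""
    return 2.0 * (vsan_components / 1000.0)

# Example: a cluster with 25,000 components needs roughly 50Mbps to the witness.
print(f"{witness_bandwidth_mbps(25000):.0f} Mbps")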
Using the VMware-provided vSAN Witness Appliance is generally recommended over using a physical vSphere host as the vSAN Witness Host. The utilization of a vSAN Witness Appliance is relatively low during normal operations; it is not until a failover process occurs that a vSAN Witness Host sees any significant utilization. Because of this, especially in large 2 Node deployments to ROBO sites, multiple vSAN Witness Appliances may be run on the same shared vSphere infrastructure. VMware supports running the vSAN Witness Appliance on any vSphere 5.5 or higher infrastructure, which can include a standalone ESXi host, a typical vSphere infrastructure, OVH (the service formerly known as vCloud Air), any vCloud Air Network Partner, or any service provider, shared, or co-location environment where vSphere is used.
When using a vSAN Witness Appliance, it is patched in the same fashion as any other ESXi host. It is the last host updated when performing 2 Node and Stretched Cluster upgrades and should not be backed up.
I hope this has been informative and thank you for reading!