
Administering VMware Virtual SAN
VMware vSphere 6.5
vSAN 6.6
This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs.
EN-002503-01
You can find the most up-to-date technical documentation on the VMware Web site at:
hp://www.vmware.com/support/
The VMware Web site also provides the latest product updates.
If you have comments about this documentation, submit your feedback to:
docfeedback@vmware.com
Copyright © 2017 VMware, Inc. All rights reserved. Copyright and trademark information.
VMware, Inc.
3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com

Contents

About VMware Virtual SAN 7

1 Updated Information 9

2 Introduction to Virtual SAN 11
   Virtual SAN Concepts 11
   Virtual SAN Terms and Definitions 13
   Virtual SAN and Traditional Storage 16
   Building a Virtual SAN Cluster 17
   Integrating with Other VMware Software 17
   Limitations of Virtual SAN 18

3 Requirements for Enabling Virtual SAN 19
   Hardware Requirements for vSAN 19
   Cluster Requirements for Virtual SAN 20
   Software Requirements for Virtual SAN 20
   Networking Requirements for Virtual SAN 21
   License Requirements 21

4 Designing and Sizing a Virtual SAN Cluster 23
   Designing and Sizing Virtual SAN Storage Components 23
   Designing and Sizing vSAN Hosts 29
   Design Considerations for a Virtual SAN Cluster 31
   Designing the Virtual SAN Network 32
   Best Practices for Virtual SAN Networking 34
   Designing and Sizing Virtual SAN Fault Domains 34
   Using Boot Devices and vSAN 35
   Persistent Logging in a Virtual SAN Cluster 35

5 Preparing a New or Existing Cluster for Virtual SAN 37
   Selecting or Verifying the Compatibility of Storage Devices 37
   Preparing Storage 38
   Providing Memory for Virtual SAN 41
   Preparing Your Hosts for Virtual SAN 42
   Virtual SAN and vCenter Server Compatibility 42
   Preparing Storage Controllers 42
   Configuring Virtual SAN Network 43
   Considerations about the Virtual SAN License 44

6 Creating a Virtual SAN Cluster 45
   Characteristics of a Virtual SAN Cluster 45
   Before Creating a Virtual SAN Cluster 46
   Enabling Virtual SAN 47
   Using Virtual SAN Configuration Assist and Updates 56

7 Extending a Datastore Across Two Sites with Stretched Clusters 61
   Introduction to Stretched Clusters 61
   Stretched Cluster Design Considerations 63
   Best Practices for Working with Stretched Clusters 64
   Network Design for Stretched Clusters 64
   Configure Virtual SAN Stretched Cluster 65
   Change the Preferred Fault Domain 66
   Change the Witness Host 66
   Deploying a Virtual SAN Witness Appliance 66
   Configure Network Interface for Witness Traffic 67
   Convert a Stretched Cluster to a Standard Virtual SAN Cluster 69

8 Increasing Space Efficiency in a Virtual SAN Cluster 71
   Introduction to Virtual SAN Space Efficiency 71
   Using Deduplication and Compression 71
   Using RAID 5 or RAID 6 Erasure Coding 76
   RAID 5 or RAID 6 Design Considerations 76

9 Using Encryption on a Virtual SAN Cluster 77
   How Virtual SAN Encryption Works 77
   Design Considerations for Virtual SAN Encryption 78
   Set Up the KMS Cluster 78
   Enable Encryption on a New Virtual SAN Cluster 83
   Generate New Encryption Keys 83
   Enable Virtual SAN Encryption on Existing Virtual SAN Cluster 84
   Virtual SAN Encryption and Core Dumps 85

10 Upgrading the Virtual SAN Cluster 89
   Before You Upgrade Virtual SAN 89
   Upgrade the vCenter Server 91
   Upgrade the ESXi Hosts 91
   About the Virtual SAN Disk Format 93
   Verify the Virtual SAN Cluster Upgrade 97
   Using the RVC Upgrade Command Options 98

11 Device Management in a Virtual SAN Cluster 99
   Managing Disk Groups and Devices 99
   Working with Individual Devices 101

12 Expanding and Managing a Virtual SAN Cluster 107
   Expanding a Virtual SAN Cluster 107
   Working with Maintenance Mode 111
   Managing Fault Domains in Virtual SAN Clusters 113
   Using the Virtual SAN iSCSI Target Service 117
   Migrate a Hybrid Virtual SAN Cluster to an All-Flash Cluster 120
   Power off a Virtual SAN Cluster 121

13 Using Virtual SAN Policies 123
   About Virtual SAN Policies 123
   View Virtual SAN Storage Providers 126
   About the Virtual SAN Default Storage Policy 127
   Assign a Default Storage Policy to Virtual SAN Datastores 128
   Define a Virtual Machine Storage Policy for Virtual SAN 129

14 Monitoring Virtual SAN 131
   Monitor the Virtual SAN Cluster 131
   Monitor Virtual SAN Capacity 132
   Monitor Virtual Devices in the Virtual SAN Cluster 133
   About Virtual SAN Cluster Resynchronization 133
   Monitor Devices that Participate in Virtual SAN Datastores 135
   Monitoring Virtual SAN Health 135
   Monitoring Virtual SAN Performance 138
   About Virtual SAN Cluster Rebalancing 142
   Using the Virtual SAN Default Alarms 143
   Using the VMkernel Observations for Creating Alarms 145

15 Handling Failures and Troubleshooting Virtual SAN 147
   Using esxcli Commands with Virtual SAN 147
   Virtual SAN Configuration on an ESXi Host Might Fail 150
   Not Compliant Virtual Machine Objects Do Not Become Compliant Instantly 150
   Virtual SAN Cluster Configuration Issues 151
   Handling Failures in Virtual SAN 152
   Shutting Down the Virtual SAN Cluster 164

Index 167

About VMware Virtual SAN

Administering VMware Virtual SAN describes how to configure, manage, and monitor a VMware Virtual SAN (vSAN) cluster in a VMware vSphere® environment. In addition, Administering VMware Virtual SAN explains how to organize the local physical storage resources that serve as storage capacity devices in a Virtual SAN cluster, define storage policies for virtual machines deployed to Virtual SAN datastores, and manage failures in a Virtual SAN cluster.
Intended Audience
This information is for experienced virtualization administrators who are familiar with virtualization technology, day-to-day data center operations, and Virtual SAN concepts.
vSphere Client HTML5 for Virtual SAN
The vSphere Client is a new HTML5-based client that ships with vCenter Server alongside the vSphere Web Client. The new vSphere Client uses many of the same interface terminologies, topologies, and workflows as the vSphere Web Client. However, the vSphere Client does not support Virtual SAN. Users of Virtual SAN should continue to use the vSphere Web Client for those processes.

Note: Not all functionality in the vSphere Web Client has been implemented for the vSphere Client in the vSphere 6.5 release. For an up-to-date list of unsupported functionality, see Functionality Updates for the vSphere Client Guide at http://www.vmware.com/info?id=1413.

1 Updated Information

Administering VMware Virtual SAN is updated with each release of the product or when necessary.

This table provides the update history of Administering VMware Virtual SAN.

Revision       Description
EN-002503-01   - The topic “Configure Network Interface for Witness Traffic,” on page 67 was updated to correct the syntax of a command in the procedure.
               - Additional minor revisions.
EN-002503-00   Initial release.

2 Introduction to Virtual SAN

VMware Virtual SAN (vSAN) is a distributed layer of software that runs natively as a part of the ESXi hypervisor. Virtual SAN aggregates local or direct-attached capacity devices of a host cluster and creates a single storage pool shared across all hosts in the Virtual SAN cluster.
While supporting VMware features that require shared storage, such as HA, vMotion, and DRS, Virtual SAN eliminates the need for external shared storage and simplifies storage configuration and virtual machine provisioning activities.
This chapter includes the following topics:
- “Virtual SAN Concepts,” on page 11
- “Virtual SAN Terms and Definitions,” on page 13
- “Virtual SAN and Traditional Storage,” on page 16
- “Building a Virtual SAN Cluster,” on page 17
- “Integrating with Other VMware Software,” on page 17
- “Limitations of Virtual SAN,” on page 18

Virtual SAN Concepts

VMware Virtual SAN uses a software-defined approach that creates shared storage for virtual machines. It virtualizes the local physical storage resources of ESXi hosts and turns them into pools of storage that can be divided and assigned to virtual machines and applications according to their quality of service requirements. Virtual SAN is implemented directly in the ESXi hypervisor.

You can configure Virtual SAN to work as either a hybrid or all-flash cluster. In hybrid clusters, flash devices are used for the cache layer and magnetic disks are used for the storage capacity layer. In all-flash clusters, flash devices are used for both cache and capacity.

You can activate Virtual SAN on your existing host clusters and when you create new clusters. Virtual SAN aggregates all local capacity devices into a single datastore shared by all hosts in the Virtual SAN cluster. You can expand the datastore by adding capacity devices or hosts with capacity devices to the cluster. VMware recommends that the ESXi hosts in the cluster share similar or identical configurations across all cluster members, including similar or identical storage configurations. This ensures balanced virtual machine storage components across all devices and hosts in the cluster. Hosts without any local devices also can participate and run their virtual machines on the Virtual SAN datastore.

If a host contributes its local storage devices to the Virtual SAN datastore, it must provide at least one device for flash cache and at least one device for capacity, also called a data disk.

The devices on the contributing host form one or more disk groups. Each disk group contains one flash cache device, and one or multiple capacity devices for persistent storage. Each host can be configured to use multiple disk groups.
For best practices, capacity considerations, and general recommendations about designing and sizing a Virtual SAN cluster, see the VMware Virtual SAN Design and Sizing Guide.

Characteristics of Virtual SAN

This topic summarizes characteristics that apply to Virtual SAN, as well as its clusters and datastores.
Virtual SAN provides numerous benets to your environment.
Table 2-1. Virtual SAN Features

Shared storage support
   Virtual SAN supports VMware features that require shared storage, such as HA, vMotion, and DRS. For example, if a host becomes overloaded, DRS can migrate virtual machines to other hosts in the cluster.

Just a Bunch Of Disks (JBOD)
   Virtual SAN supports JBOD for use in a blade server environment. If your cluster contains blade servers, you can extend the capacity of the datastore with JBOD storage that is connected to the blade servers.

On-disk format
   Virtual SAN 6.6 supports on-disk virtual file format 5.0, which provides highly scalable snapshot and clone management support per Virtual SAN cluster. For information about the number of virtual machine snapshots and clones supported per Virtual SAN cluster, see the Configuration Maximums documentation.

All-flash and hybrid configurations
   Virtual SAN can be configured for an all-flash or hybrid cluster.

Fault domains
   Virtual SAN supports configuring fault domains to protect hosts from rack or chassis failure when the Virtual SAN cluster spans across multiple racks or blade server chassis in a data center.

Stretched cluster
   Virtual SAN supports stretched clusters that span across two geographic locations.

Virtual SAN health service
   The Virtual SAN health service includes preconfigured health check tests to monitor, troubleshoot, diagnose the cause of cluster component problems, and identify any potential risk.

Virtual SAN performance service
   The Virtual SAN performance service includes statistical charts used to monitor IOPS, throughput, latency, and congestion. You can monitor performance of a Virtual SAN cluster, host, disk group, disk, and VMs.

Integration with vSphere storage features
   Virtual SAN integrates with vSphere data management features traditionally used with VMFS and NFS storage. These features include snapshots, linked clones, vSphere Replication, and vSphere APIs for Data Protection.

Virtual Machine Storage Policies
   Virtual SAN works with VM storage policies to support a VM-centric approach to storage management. If you do not assign a storage policy to the virtual machine during deployment, the Virtual SAN Default Storage Policy is automatically assigned to the VM.

Rapid provisioning
   Virtual SAN enables rapid provisioning of storage in the vCenter Server® during virtual machine creation and deployment operations.

Virtual SAN Terms and Definitions

Virtual SAN introduces specific terms and definitions that are important to understand.
Before you get started with Virtual SAN, review the key Virtual SAN terms and definitions.
Disk group
A disk group is a unit of physical storage capacity on a host and a group of physical devices that provide performance and capacity to the Virtual SAN cluster. On each ESXi host that contributes its local devices to a Virtual SAN cluster, devices are organized into disk groups.
Each disk group must have one flash cache device and one or multiple capacity devices. The devices used for caching cannot be shared across disk groups, and cannot be used for other purposes. A single caching device must be dedicated to a single disk group. In hybrid clusters, flash devices are used for the cache layer and magnetic disks are used for the storage capacity layer. In an all-flash cluster, flash devices are used for both cache and capacity. For information about creating and managing disk groups, see Chapter 11, “Device Management in a Virtual SAN Cluster,” on page 99.
Consumed capacity
Consumed capacity is the amount of physical capacity consumed by one or more virtual machines at any point. Consumed capacity is determined by many factors, including the consumed size of your VMDKs, protection replicas, and so on. When calculating for cache sizing, do not consider the capacity used for protection replicas.
Object-based storage
Virtual SAN stores and manages data in the form of flexible data containers called objects. An object is a logical volume that has its data and metadata distributed across the cluster. For example, every VMDK is an object, as is every snapshot. When you provision a virtual machine on a Virtual SAN datastore, Virtual SAN creates a set of objects comprised of multiple components for each virtual disk. It also creates the VM home namespace, which is a container object that stores all metadata files of your virtual machine. Based on the assigned virtual machine storage policy, Virtual SAN provisions and manages each object individually, which might also involve creating a RAID configuration for every object.
When Virtual SAN creates an object for a virtual disk and determines how to distribute the object in the cluster, it considers the following factors:
- Virtual SAN verifies that the virtual disk requirements are applied according to the specified virtual machine storage policy settings.

- Virtual SAN verifies that the correct cluster resources are utilized at the time of provisioning. For example, based on the protection policy, Virtual SAN determines how many replicas to create. The performance policy determines the amount of flash read cache allocated for each replica and how many stripes to create for each replica and where to place them in the cluster.

- Virtual SAN continually monitors and reports the policy compliance status of the virtual disk. If you find any noncompliant policy status, you must troubleshoot and resolve the underlying problem.

  Note: When required, you can edit VM storage policy settings. Changing the storage policy settings does not affect virtual machine access. Virtual SAN actively throttles the storage and network resources used for reconfiguration to minimize the impact of object reconfiguration to normal workloads. When you change VM storage policy settings, Virtual SAN might initiate an object recreation process and subsequent resynchronization. See “About Virtual SAN Cluster Resynchronization,” on page 133.

- Virtual SAN verifies that the required protection components, such as mirrors and witnesses, are placed on separate hosts or fault domains. For example, to rebuild components during failure, Virtual SAN looks for ESXi hosts that satisfy the placement rules where protection components of virtual machine objects must be placed on two different hosts (not on the same host), or across different fault domains.
Virtual SAN datastore
After you enable Virtual SAN on a cluster, a single Virtual SAN datastore is created. It appears as another type of datastore in the list of datastores that might be available, including Virtual Volume, VMFS, and NFS. A single Virtual SAN datastore can provide different service levels for each virtual machine or each virtual disk. In vCenter Server®, storage characteristics of the Virtual SAN datastore appear as a set of capabilities. You can reference these capabilities when defining a storage policy for virtual machines. When you later deploy virtual machines, Virtual SAN uses this policy to place virtual machines in the optimal manner based on the requirements of each virtual machine. For general information about using storage policies, see the vSphere Storage documentation.
A Virtual SAN datastore has specic characteristics to consider.
- Virtual SAN provides a single Virtual SAN datastore accessible to all hosts in the cluster, whether or not they contribute storage to the cluster. Each host can also mount any other datastores, including Virtual Volumes, VMFS, or NFS.

- You can use Storage vMotion to move virtual machines between Virtual SAN datastores, NFS datastores, and VMFS datastores.

- Only magnetic disks and flash devices used for capacity can contribute to the datastore capacity. The devices used for flash cache are not counted as part of the datastore.
Objects and components
Each object is composed of a set of components, determined by capabilities that are in use in the VM Storage Policy. For example, when the Primary level of failures to tolerate policy is configured to one, Virtual SAN ensures that the protection components, such as replicas and witnesses of the object, are placed on separate hosts in the Virtual SAN cluster, where each replica is an object component. In addition, in the same policy, if the Number of disk stripes per object is configured to two or more, Virtual SAN also stripes the object across multiple capacity devices and each stripe is considered a component of the specified object. When needed, Virtual SAN might also break large objects into multiple components.
A Virtual SAN datastore contains the following object types:

VM Home Namespace
   The virtual machine home directory where all virtual machine configuration files are stored, such as .vmx files, log files, vmdks, snapshot delta description files, and so on.

VMDK
   A virtual machine disk or .vmdk file that stores the contents of the virtual machine's hard disk drive.

VM Swap Object
   Created when a virtual machine is powered on.

Snapshot Delta VMDKs
   Created when virtual machine snapshots are taken.

Memory object
   Created when the snapshot memory option is selected when creating or suspending a virtual machine.
Virtual Machine Compliance Status: Compliant and Noncompliant
A virtual machine is considered noncompliant when one or more of its objects fail to meet the requirements of its assigned storage policy. For example, the status might become noncompliant when one of the mirror copies is inaccessible. If your virtual machines are in compliance with the requirements defined in the storage policy, the status of your virtual machines is compliant. From the Physical Disk Placement tab on the Virtual Disks page, you can verify the virtual machine object compliance status. For information about troubleshooting a Virtual SAN cluster, see “Handling Failures in Virtual SAN,” on page 152.
Component State: Degraded and Absent states
Virtual SAN acknowledges the following failure states for components:
- Degraded. A component is Degraded when Virtual SAN detects a permanent component failure and determines that the failed component will never recover to its original working state. As a result, Virtual SAN starts to rebuild the degraded components immediately. This state might occur when a component is on a failed device.

- Absent. A component is Absent when Virtual SAN detects a temporary component failure where components, including all its data, might recover and return Virtual SAN to its original state. This state might occur when you are restarting hosts or if you unplug a device from a Virtual SAN host. Virtual SAN starts to rebuild the components in absent status after waiting for 60 minutes.
Object State: Healthy and Unhealthy
Depending on the type and number of failures in the cluster, an object might be in one of the following states:
- Healthy. When at least one full RAID 1 mirror is available, or the minimum required number of data segments are available, the object is considered healthy.

- Unhealthy. When no full mirror is available, or the minimum required number of data segments are unavailable for RAID 5 or RAID 6 objects, or fewer than 50 percent of an object’s votes are available, the object is considered unhealthy. This may be due to multiple failures in the cluster. When the operational status of an object is considered unhealthy, it impacts the availability of the associated VM.
Witness
A witness is a component that contains only metadata and does not contain any actual application data. It serves as a tiebreaker when a decision needs to be made regarding the availability of the surviving datastore components, after a potential failure. A witness consumes approximately 2 MB of space for metadata on the Virtual SAN datastore when using on-disk format 1.0, and 4 MB for on-disk format version 2.0 and later.
Virtual SAN 6.0 and later maintains quorum with an asymmetrical voting system where each component might have more than one vote to decide the availability of objects. Greater than 50 percent of the votes that make up a VM’s storage object must be accessible at all times for the object to be considered available. When 50 percent or fewer votes are accessible to all hosts, the object is no longer available to the Virtual SAN datastore. This impacts the availability of the associated VM.
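To make the voting rule concrete, here is a minimal sketch in Python; the function name and vote counts are illustrative assumptions, not part of any VMware tooling:

    def object_available(accessible_votes, total_votes):
        """An object remains available only while more than 50 percent of
        its votes are accessible; exactly 50 percent is not enough."""
        return accessible_votes > total_votes / 2

    # Example: an object with two replicas and one witness, one vote each.
    print(object_available(2, 3))  # True: one component lost, quorum holds
    print(object_available(1, 3))  # False: the object is unavailable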
Storage Policy-Based Management (SPBM)
When you use Virtual SAN, you can define virtual machine storage requirements, such as performance and availability, in the form of a policy. Virtual SAN ensures that the virtual machines deployed to Virtual SAN datastores are assigned at least one virtual machine storage policy. When you know the storage requirements of your virtual machines, you can define storage policies and assign the policies to your virtual machines. If you do not apply a storage policy when deploying virtual machines, Virtual SAN automatically assigns a default Virtual SAN policy with Primary level of failures to tolerate configured to one, a single disk stripe for each object, and a thin provisioned virtual disk. For best results, you should define your own virtual machine storage policies, even if the requirements of your policies are the same as those defined in the default storage policy. For information about working with Virtual SAN storage policies, see Chapter 13, “Using Virtual SAN Policies,” on page 123.
Ruby vSphere Console (RVC)
The Ruby vSphere Console (RVC) provides a command-line interface used for managing and troubleshooting the Virtual SAN cluster. RVC gives you a cluster-wide view, instead of the host-centric view offered by esxcli. RVC is bundled with vCenter Server Appliance and vCenter Server for Windows, so you do not need to install it separately. For information about the RVC commands, see the RVC Command Reference Guide.
vSphere PowerCLI
VMware vSphere PowerCLI adds command-line scripting support for Virtual SAN, to help you automate configuration and management tasks. vSphere PowerCLI provides a Windows PowerShell interface to the vSphere API. PowerCLI includes cmdlets for administering Virtual SAN components. For information about using vSphere PowerCLI, see vSphere PowerCLI Documentation.
Virtual SAN Observer
The VMware Virtual SAN Observer is a Web-based tool that runs on RVC and is used for in-depth performance analysis and monitoring of the Virtual SAN cluster. Use Virtual SAN Observer for information about the performance statistics of the capacity layer, detailed statistical information about physical disk groups, current CPU usage, consumption of Virtual SAN memory pools, and physical and in-memory object distribution across Virtual SAN clusters.
For information about conguring, launching, and using RVC and the Virtual SAN Observer, see the Virtual SAN Troubleshooting Reference Manual.

Virtual SAN and Traditional Storage

Although Virtual SAN shares many characteristics with traditional storage arrays, the overall behavior and function of Virtual SAN is different. For example, Virtual SAN can manage and work only with ESXi hosts, and a single Virtual SAN instance can support only one cluster.
Virtual SAN and traditional storage also dier in the following key ways:
- Virtual SAN does not require external networked storage for storing virtual machine files remotely, such as on a Fibre Channel (FC) or Storage Area Network (SAN).

- Using traditional storage, the storage administrator preallocates storage space on different storage systems. Virtual SAN automatically turns the local physical storage resources of the ESXi hosts into a single pool of storage. These pools can be divided and assigned to virtual machines and applications according to their quality of service requirements.

- Virtual SAN has no concept of traditional storage volumes based on LUNs or NFS shares, although the iSCSI target service uses LUNs to enable an initiator on a remote host to transport block-level data to a storage device in the Virtual SAN cluster.

- Some standard storage protocols, such as FCP, do not apply to Virtual SAN.

- Virtual SAN is highly integrated with vSphere. You do not need dedicated plug-ins or a storage console for Virtual SAN, compared to traditional storage. You can deploy, manage, and monitor Virtual SAN by using the vSphere Web Client.

- A dedicated storage administrator does not need to manage Virtual SAN. Instead a vSphere administrator can manage a Virtual SAN environment.

- With Virtual SAN usage, VM storage policies are automatically assigned when you deploy new VMs. The storage policies can be changed dynamically as needed.

Building a Virtual SAN Cluster

If you are considering Virtual SAN, you can choose from more than one configuration solution for deploying a Virtual SAN cluster.
Depending on your requirement, you can deploy Virtual SAN in one of the following ways.
Virtual SAN Ready Node
The Virtual SAN Ready Node is a preconfigured solution of the Virtual SAN software provided by VMware partners, such as Cisco, Dell, Fujitsu, IBM, and Supermicro. This solution includes validated server configuration in a tested, certified hardware form factor for Virtual SAN deployment that is recommended by the server OEM and VMware. For information about the Virtual SAN Ready Node solution for a specific partner, visit the VMware Partner web site.
User-Defined Virtual SAN Cluster
You can build a Virtual SAN cluster by selecting individual software and hardware components, such as drivers, firmware, and storage I/O controllers that are listed in the Virtual SAN Compatibility Guide (VCG) web site at http://www.vmware.com/resources/compatibility/search.php. You can choose any servers, storage I/O controllers, capacity and flash cache devices, memory, any number of cores you must have per CPU, and so on that are certified and listed on the VCG Web site. Review the compatibility information on the VCG Web site before choosing software and hardware components, drivers, firmware, and storage I/O controllers that are supported by Virtual SAN. When designing a Virtual SAN cluster, use only devices, firmware, and drivers that are listed on the VCG Web site. Using software and hardware versions that are not listed in the VCG might cause cluster failure or unexpected data loss. For information about designing a Virtual SAN cluster, see Chapter 4, “Designing and Sizing a Virtual SAN Cluster,” on page 23.

Integrating with Other VMware Software

After you have Virtual SAN up and running, it is integrated with the rest of the VMware software stack. You can do most of what you can do with traditional storage by using vSphere components and features including vSphere vMotion, snapshots, clones, Distributed Resource Scheduler (DRS), vSphere High Availability, vCenter Site Recovery Manager, and more.
Integrating with vSphere HA
You can enable vSphere HA and Virtual SAN on the same cluster. As with traditional datastores, vSphere HA provides the same level of protection for virtual machines on Virtual SAN datastores. This level of protection imposes specific restrictions when vSphere HA and Virtual SAN interact. For specific considerations about integrating vSphere HA and Virtual SAN, see “Using Virtual SAN and vSphere HA,” on page 54.
Integrating with VMware Horizon View
You can integrate Virtual SAN with VMware Horizon View. When integrated, Virtual SAN provides the following benefits to virtual desktop environments:

- High-performance storage with automatic caching
- Storage policy-based management, for automatic remediation
For information about integrating Virtual SAN with VMware Horizon, see the VMware Horizon with View documentation. For designing and sizing VMware Horizon View for Virtual SAN, see the Designing and Sizing Guide for Horizon View.

Limitations of Virtual SAN

This topic discusses the limitations of Virtual SAN.
When working with Virtual SAN, consider the following limitations:
- Virtual SAN does not support hosts participating in multiple Virtual SAN clusters. However, a Virtual SAN host can access other external storage resources that are shared across clusters.

- Virtual SAN does not support vSphere DPM and Storage I/O Control.

- Virtual SAN does not support SCSI reservations.

- Virtual SAN does not support RDM, VMFS, diagnostic partition, and other device access features.
3 Requirements for Enabling Virtual SAN
Before you activate Virtual SAN, verify that your environment meets all requirements.
This chapter includes the following topics:
- “Hardware Requirements for vSAN,” on page 19
- “Cluster Requirements for Virtual SAN,” on page 20
- “Software Requirements for Virtual SAN,” on page 20
- “Networking Requirements for Virtual SAN,” on page 21
- “License Requirements,” on page 21

Hardware Requirements for vSAN

Verify that the ESXi hosts in your organization meet the vSAN hardware requirements.
Storage Device Requirements
All capacity devices, drivers, and firmware versions in your Virtual SAN configuration must be certified and listed in the Virtual SAN section of the VMware Compatibility Guide.
Table 3-1. Storage Device Requirements for vSAN Hosts

Cache
   - One SAS or SATA solid-state disk (SSD) or PCIe flash device.
   - Before calculating the Primary level of failures to tolerate, check the size of the flash caching device in each disk group. Verify that it provides at least 10 percent of the anticipated storage consumed on the capacity devices, not including replicas such as mirrors.
   - vSphere Flash Read Cache must not use any of the flash devices reserved for vSAN cache.
   - The cache flash devices must not be formatted with VMFS or another file system.

Virtual machine data storage
   - For hybrid disk group configurations, make sure that at least one SAS, NL-SAS, or SATA magnetic disk is available.
   - For all-flash disk group configurations, make sure that at least one SAS or SATA solid-state disk (SSD), or PCIe flash device, is available.

Storage controllers
   One SAS or SATA host bus adapter (HBA), or a RAID controller that is in passthrough mode or RAID 0 mode.
Memory
The memory requirements for vSAN depend on the number of disk groups and devices that the ESXi hypervisor must manage. Each host must contain a minimum of 32 GB of memory to accommodate the maximum number of disk groups (5) and maximum number of capacity devices per disk group (7).
Flash Boot Devices
During installation, the ESXi installer creates a coredump partition on the boot device. The default size of the coredump partition satisfies most installation requirements.
- If the ESXi host has 512 GB of memory or less, you can boot the host from a USB, SD, or SATADOM device. When you boot a vSAN host from a USB device or SD card, the size of the boot device must be at least 4 GB.

- If the ESXi host has more than 512 GB of memory, you must boot the host from a SATADOM or disk device. When you boot a vSAN host from a SATADOM device, you must use a single-level cell (SLC) device. The size of the boot device must be at least 16 GB.

Note: vSAN 6.5 and later enables you to resize an existing coredump partition on an ESXi host in a vSAN cluster, so you can boot from USB/SD devices. For more information, see the VMware knowledge base article at http://kb.vmware.com/kb/2147881.

When you boot an ESXi 6.0 or later host from a USB device or from an SD card, vSAN trace logs are written to RAMDisk. These logs are automatically offloaded to persistent media during shutdown or system crash (panic). This is the only supported method for handling vSAN traces when booting an ESXi host from a USB stick or SD card. If a power failure occurs, vSAN trace logs are not preserved.

When you boot an ESXi 6.0 or later host from a SATADOM device, vSAN trace logs are written directly to the SATADOM device. Therefore it is important that the SATADOM device meets the specifications outlined in this guide.

Cluster Requirements for Virtual SAN

Verify that a host cluster meets the requirements for enabling Virtual SAN.
- All capacity devices, drivers, and firmware versions in your Virtual SAN configuration must be certified and listed in the Virtual SAN section of the VMware Compatibility Guide.

- A Virtual SAN cluster must contain a minimum of three hosts that contribute capacity to the cluster. For information about the considerations for a three-host cluster, see “Design Considerations for a Virtual SAN Cluster,” on page 31.

- A host that resides in a Virtual SAN cluster must not participate in other clusters.

Software Requirements for Virtual SAN

Verify that the vSphere components in your environment meet the software version requirements for using Virtual SAN.
To use the full set of Virtual SAN capabilities, the ESXi hosts that participate in Virtual SAN clusters must be version 6.5 or later. During the Virtual SAN upgrade from previous versions, you can keep the current on-disk format version, but you cannot use many of the new features. Virtual SAN 6.6 and later software supports all on-disk formats.

Networking Requirements for Virtual SAN

Verify that the network infrastructure and the networking configuration on the ESXi hosts meet the minimum networking requirements for Virtual SAN.
Table 3-2. Networking Requirements for Virtual SAN

Host Bandwidth
   Each host must have minimum bandwidth dedicated to Virtual SAN:
   - Dedicated 1 Gbps for hybrid configurations
   - Dedicated or shared 10 Gbps for all-flash configurations
   For information about networking considerations in Virtual SAN, see “Designing the Virtual SAN Network,” on page 32.

Connection between hosts
   Each host in the Virtual SAN cluster, regardless of whether it contributes capacity, must have a VMkernel network adapter for Virtual SAN traffic. See “Set Up a VMkernel Network for Virtual SAN,” on page 47.

Host network
   All hosts in your Virtual SAN cluster must be connected to a Virtual SAN Layer 2 or Layer 3 network.

IPv4 and IPv6 support
   The Virtual SAN network supports both IPv4 and IPv6.

License Requirements

Verify that you have a valid license for Virtual SAN.
Using Virtual SAN in production environments requires a special license that you assign to the Virtual SAN clusters.
You can assign a standard Virtual SAN license to the cluster, or a license that covers advanced functions. Advanced features include RAID 5/6 erasure coding, and deduplication and compression. An enterprise license is required for IOPS limits and stretched clusters. For information about assigning licenses, see “Configure License Settings for a Virtual SAN Cluster,” on page 52.
The capacity of the license must cover the total number of CPUs in the cluster.
4 Designing and Sizing a Virtual SAN Cluster
For best performance and use, plan the capabilities and configuration of your hosts and their storage devices before you deploy Virtual SAN in a vSphere environment. Carefully consider certain host and networking configurations within the Virtual SAN cluster.
The Administering VMware vSAN documentation examines the key points about designing and sizing a Virtual SAN cluster. For detailed instructions about designing and sizing a Virtual SAN cluster, see VMware Virtual SAN Design and Sizing Guide.
This chapter includes the following topics:
- “Designing and Sizing Virtual SAN Storage Components,” on page 23
- “Designing and Sizing vSAN Hosts,” on page 29
- “Design Considerations for a Virtual SAN Cluster,” on page 31
- “Designing the Virtual SAN Network,” on page 32
- “Best Practices for Virtual SAN Networking,” on page 34
- “Designing and Sizing Virtual SAN Fault Domains,” on page 34
- “Using Boot Devices and vSAN,” on page 35
- “Persistent Logging in a Virtual SAN Cluster,” on page 35

Designing and Sizing Virtual SAN Storage Components

Plan capacity and cache based on the expected consumption. Consider the requirements for availability and endurance.
- Planning Capacity in Virtual SAN on page 24
  You can size the capacity of a Virtual SAN datastore to accommodate the virtual machine (VM) files in the cluster and to handle failures and maintenance operations.

- Design Considerations for Flash Caching Devices in Virtual SAN on page 26
  Plan the configuration of flash devices for Virtual SAN cache and all-flash capacity to provide high performance and required storage space, and to accommodate future growth.

- Design Considerations for Flash Capacity Devices in Virtual SAN on page 27
  Plan the configuration of flash capacity devices for Virtual SAN all-flash configurations to provide high performance and required storage space, and to accommodate future growth.

- Design Considerations for Magnetic Disks in Virtual SAN on page 28
  Plan the size and number of magnetic disks for capacity in hybrid configurations by following the requirements for storage space and performance.

- Design Considerations for Storage Controllers in Virtual SAN on page 29
  Include storage controllers on the hosts of a Virtual SAN cluster that best satisfy the requirements for performance and availability.

Planning Capacity in Virtual SAN

You can size the capacity of a Virtual SAN datastore to accommodate the virtual machine (VM) files in the cluster and to handle failures and maintenance operations.
Raw Capacity
To determine the raw capacity of a Virtual SAN datastore, multiply the total number of disk groups in the cluster by the size of the capacity devices in those disk groups, and subtract the overhead required by the Virtual SAN on-disk format.
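As a minimal sketch of this raw-capacity arithmetic (the function name and example values are illustrative assumptions; the 1 GB per-device overhead figure is the on-disk format 1.0 value given later in this chapter):

    def raw_capacity_gb(total_disk_groups, devices_per_group, device_size_gb,
                        overhead_gb_per_device=1.0):
        """Raw vSAN datastore capacity: the size of all capacity devices in
        all disk groups, minus the on-disk format overhead per device."""
        devices = total_disk_groups * devices_per_group
        return devices * (device_size_gb - overhead_gb_per_device)

    # Example: 8 disk groups across the cluster, each with 5 capacity
    # devices of 1200 GB, on-disk format 1.0 (about 1 GB overhead each).
    print(raw_capacity_gb(8, 5, 1200))  # 47960.0 GB of raw capacity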
Primary level of Failures to Tolerate
When you plan the capacity of the Virtual SAN datastore, not including the number of virtual machines and the size of their VMDK files, you must consider the Primary level of failures to tolerate and the Failure tolerance method attributes of the virtual machine storage policies for the cluster.
The Primary level of failures to tolerate has an important role when you plan and size storage capacity for Virtual SAN. Based on the availability requirements of a virtual machine, the setting might result in doubled consumption or more, compared with the consumption of a virtual machine and its individual devices.
For example, if the Failure tolerance method is set to RAID-1 (Mirroring) - Performance and the Primary level of failures to tolerate (PFTT) is set to 1, virtual machines can use about 50 percent of the raw capacity. If the PFTT is set to 2, the usable capacity is about 33 percent. If the PFTT is set to 3, the usable capacity is about 25 percent.
But if the Failure tolerance method is set to RAID-5/6 (Erasure Coding) - Capacity and the PFTT is set to 1, virtual machines can use about 75 percent of the raw capacity. If the PFTT is set to 2, the usable capacity is about 67 percent. For more information about RAID 5/6, see “Using RAID 5 or RAID 6 Erasure Coding,” on page 76.
For information about the attributes in a Virtual SAN storage policy, see Chapter 13, “Using Virtual SAN Policies,” on page 123.
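The usable-capacity percentages above follow from the replica and segment layouts. A minimal sketch, with illustrative names (not a VMware API):

    def usable_fraction(pftt, erasure_coding=False):
        """Approximate fraction of raw capacity available to VMs.

        RAID-1 mirroring keeps PFTT + 1 full copies, so usable space is
        1 / (PFTT + 1). RAID-5/6 erasure coding uses a 3+1 layout for
        PFTT=1 (75 percent) and a 4+2 layout for PFTT=2 (about 67 percent).
        """
        if not erasure_coding:
            return 1.0 / (pftt + 1)
        if pftt == 1:
            return 3.0 / 4.0
        if pftt == 2:
            return 4.0 / 6.0
        raise ValueError("RAID-5/6 erasure coding supports PFTT of 1 or 2")

    print(usable_fraction(1))                       # 0.5, about 50 percent
    print(usable_fraction(3))                       # 0.25, about 25 percent
    print(usable_fraction(1, erasure_coding=True))  # 0.75
    print(usable_fraction(2, erasure_coding=True))  # about 0.67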
Calculating Required Capacity
Plan the capacity required for the virtual machines in a cluster with RAID 1 mirroring based on the following criteria (a sizing sketch follows the list):

1 Calculate the storage space that the virtual machines in the Virtual SAN cluster are expected to consume.

   expected overall consumption = number of VMs in the cluster * expected percentage of consumption per VMDK

2 Consider the Primary level of failures to tolerate attribute configured in the storage policies for the virtual machines in the cluster. This attribute directly impacts the number of replicas of a VMDK file on hosts in the cluster.

   datastore capacity = expected overall consumption * (PFTT + 1)

3 Estimate the overhead requirement of the Virtual SAN on-disk format.

   - On-disk format version 3.0 and later adds an additional overhead, typically no more than 1-2 percent capacity per device. Deduplication and compression with software checksum enabled require additional overhead of approximately 6.2 percent capacity per device.
   - On-disk format version 2.0 adds an additional overhead, typically no more than 1-2 percent capacity per device.
   - On-disk format version 1.0 adds an additional overhead of approximately 1 GB per capacity device.
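A minimal sketch that combines the three steps with the under-70-percent consumption target from the guidelines that follow (all names and example values are illustrative assumptions):

    def required_datastore_capacity_gb(num_vms, avg_vmdk_consumption_gb, pftt,
                                       on_disk_overhead=0.02,
                                       max_utilization=0.70):
        """Estimate the RAID-1 datastore capacity needed for a vSAN cluster."""
        expected_consumption = num_vms * avg_vmdk_consumption_gb  # step 1
        with_replicas = expected_consumption * (pftt + 1)         # step 2
        with_overhead = with_replicas * (1 + on_disk_overhead)    # step 3
        return with_overhead / max_utilization  # keep consumption below ~70%

    # Example: 100 VMs consuming about 50 GB each, PFTT = 1.
    print(round(required_datastore_capacity_gb(100, 50, 1)))  # about 14571 GB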
Capacity Sizing Guidelines
- Keep at least 30 percent unused space to prevent Virtual SAN from rebalancing the storage load. Virtual SAN rebalances the components across the cluster whenever the consumption on a single capacity device reaches 80 percent or more. The rebalance operation might impact the performance of applications. To avoid these issues, keep storage consumption to less than 70 percent.

- Plan extra capacity to handle potential failure or replacement of capacity devices, disk groups, and hosts. When a capacity device is not reachable, Virtual SAN recovers the components from another device in the cluster. When a flash cache device fails or is removed, Virtual SAN recovers the components from the entire disk group.

- Reserve extra capacity to make sure that Virtual SAN recovers components after a host failure or when a host enters maintenance mode. For example, provision hosts with enough capacity so that you have sufficient free capacity left for components to successfully rebuild after a host failure or during maintenance. This is important when you have more than three hosts, so you have sufficient free capacity to rebuild the failed components. If a host fails, the rebuilding takes place on the storage available on another host, so that another failure can be tolerated. However, in a three-host cluster, Virtual SAN will not perform the rebuild operation if the Primary level of failures to tolerate is set to 1, because when one host fails, only two hosts remain in the cluster. To tolerate a rebuild after a failure, you must have at least four hosts.

- Provide enough temporary storage space for changes in the Virtual SAN VM storage policy. When you dynamically change a VM storage policy, Virtual SAN might create a new layout of the replicas that make up an object. When Virtual SAN instantiates and synchronizes those replicas with the original replica, the cluster must temporarily provide additional space.

- If you plan to use advanced features, such as software checksum or deduplication and compression, reserve additional capacity to handle the operational overhead.
Considerations for Virtual Machine Objects
When you plan the storage capacity in the Virtual SAN datastore, consider the space required in the datastore for the VM home namespace objects, snapshots, and swap files.
- VM Home Namespace. You can assign a storage policy specifically to the home namespace object for a virtual machine. To prevent unnecessary allocation of capacity and cache storage, Virtual SAN applies only the Primary level of failures to tolerate and the Force provisioning settings from the policy on the VM home namespace. Plan storage space to meet the requirements for a storage policy assigned to a VM Home Namespace whose Primary level of failures to tolerate is greater than 0.

- Snapshots. Delta devices inherit the policy of the base VMDK file. Plan additional space according to the expected size and number of snapshots, and to the settings in the Virtual SAN storage policies.
  The space that is required might be different. Its size depends on how often the virtual machine changes data and how long a snapshot is attached to the virtual machine.

- Swap files. Virtual SAN uses an individual storage policy for the swap files of virtual machines. The policy tolerates a single failure, defines no striping and read cache reservation, and enables force provisioning.

Design Considerations for Flash Caching Devices in Virtual SAN

Plan the conguration of ash devices for Virtual SAN cache and all-ash capacity to provide high performance and required storage space, and to accommodate future growth.
Choosing Between PCIe or SSD Flash Devices
Choose PCIe or SSD flash devices according to the requirements for performance, capacity, write endurance, and cost of the Virtual SAN storage.
- Compatibility. The model of the PCIe or SSD devices must be listed in the Virtual SAN section of the VMware Compatibility Guide.

- Performance. PCIe devices generally have faster performance than SSD devices.

- Capacity. The maximum capacity that is available for PCIe devices is generally greater than the maximum capacity that is currently listed for SSD devices for Virtual SAN in the VMware Compatibility Guide.

- Write endurance. The write endurance of the PCIe or SSD devices must meet the requirements for capacity or for cache in all-flash configurations, and for cache in hybrid configurations.
  For information about the write endurance requirements for all-flash and hybrid configurations, see the VMware Virtual SAN Design and Sizing Guide. For information about the write endurance class of PCIe and SSD devices, see the Virtual SAN section of the VMware Compatibility Guide.

- Cost. PCIe devices generally have higher cost than SSD devices.
Flash Devices as Virtual SAN Cache
Design the conguration of ash cache for Virtual SAN for write endurance, performance, and potential growth based on these considerations.
Table 4-1. Sizing Virtual SAN Cache

All-flash and hybrid configurations
   - The flash caching device must provide at least 10 percent of the anticipated storage that virtual machines are expected to consume, not including replicas such as mirrors (a sizing sketch follows this table). The Primary level of failures to tolerate attribute from the VM storage policy does not impact the size of the cache.
   - A higher cache-to-capacity ratio eases future capacity growth. Oversizing cache enables you to easily add more capacity to an existing disk group without the need to increase the size of the cache.
   - Flash caching devices must have high write endurance.
   - When a flash caching device is at the end of its life, replacing it is more complicated than replacing a capacity device because such an operation impacts the entire disk group.
   - If you add more flash devices to increase the size of the cache, you must create more disk groups. The ratio between flash cache devices and disk groups is always 1:1. A configuration of multiple disk groups provides the following advantages:
      - Reduced risk of failure because fewer capacity devices are affected if a single caching device fails.
      - Potentially improved performance if you deploy multiple disk groups that contain smaller flash caching devices.
     However, when you configure multiple disk groups, the memory consumption of the hosts increases.

All-flash configurations
   In all-flash configurations, Virtual SAN uses the cache layer for write caching only. The write cache must be able to handle very high write activities. This approach extends the life of capacity flash that might be less expensive and might have lower write endurance.

Hybrid configurations
   If the read cache reservation is configured in the active VM storage policy for performance reasons, the hosts in the Virtual SAN cluster must have sufficient cache to satisfy the reservation during a post-failure rebuild or maintenance operation. If the available read cache is not sufficient to satisfy the reservation, the rebuild or maintenance operation fails. Use read cache reservation only if you must meet a specific, known performance requirement for a particular workload.
   The use of snapshots consumes cache resources. If you plan to use several snapshots, consider dedicating more cache than the conventional 10 percent cache-to-consumed-capacity ratio.
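A minimal sketch of the 10 percent cache-to-consumed-capacity guideline from the table above (names and values are illustrative):

    def min_cache_size_gb(anticipated_vm_consumption_gb, ratio=0.10):
        """Flash cache should be at least 10 percent of the storage that
        VMs are anticipated to consume, excluding replicas such as
        mirrors; PFTT does not factor into this calculation."""
        return anticipated_vm_consumption_gb * ratio

    # Example: VMs expected to consume 20 TB before replication need at
    # least about 2 TB of flash cache across all disk groups.
    print(min_cache_size_gb(20000))  # 2000.0 GB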

Design Considerations for Flash Capacity Devices in Virtual SAN

Plan the conguration of ash capacity devices for Virtual SAN all-ash congurations to provide high performance and required storage space, and to accommodate future growth.
Choosing Between PCIe or SSD Flash Devices
Choose PCIe or SSD flash devices according to the requirements for performance, capacity, write endurance, and cost of the Virtual SAN storage.
- Compatibility. The model of the PCIe or SSD devices must be listed in the Virtual SAN section of the VMware Compatibility Guide.

- Performance. PCIe devices generally have faster performance than SSD devices.

- Capacity. The maximum capacity that is available for PCIe devices is generally greater than the maximum capacity that is currently listed for SSD devices for Virtual SAN in the VMware Compatibility Guide.

- Write endurance. The write endurance of the PCIe or SSD devices must meet the requirements for capacity or for cache in all-flash configurations, and for cache in hybrid configurations.
  For information about the write endurance requirements for all-flash and hybrid configurations, see the VMware Virtual SAN Design and Sizing Guide. For information about the write endurance class of PCIe and SSD devices, see the Virtual SAN section of the VMware Compatibility Guide.

- Cost. PCIe devices generally have higher cost than SSD devices.
Flash Devices as Virtual SAN Capacity
In all-flash configurations, Virtual SAN does not use cache for read operations and does not apply the read-cache reservation setting from the VM storage policy. For cache, you can use a small amount of more expensive flash that has high write endurance. For capacity, you can use flash that is less expensive and has lower write endurance.
Plan a conguration of ash capacity devices by following these guidelines:
For beer performance of Virtual SAN, use more disk groups of smaller ash capacity devices.
n
For balanced performance and predictable behavior, use the same type and model of ash capacity
n
devices.

Design Considerations for Magnetic Disks in Virtual SAN

Plan the size and number of magnetic disks for capacity in hybrid configurations by following the requirements for storage space and performance.
SAS, NL-SAS, and SATA Magnetic Devices
Use SAS, NL-SAS, or SATA magnetic devices by following the requirements for performance, capacity, and cost of the Virtual SAN storage.
- Compatibility. The model of the magnetic disk must be certified and listed in the Virtual SAN section of the VMware Compatibility Guide.

- Performance. SAS and NL-SAS devices have faster performance than SATA disks.

- Capacity. The capacity of SAS, NL-SAS, and SATA magnetic disks for Virtual SAN is available in the Virtual SAN section of the VMware Compatibility Guide. Consider using a larger number of smaller devices instead of a smaller number of larger devices.

- Cost. SAS and NL-SAS devices are more expensive than SATA disks.
  Using SATA disks instead of SAS and NL-SAS devices is justifiable in environments where capacity and reduced cost have higher priority than performance.
Magnetic Disks as Virtual SAN Capacity
Plan a magnetic disk conguration by following these guidelines:
For beer performance of Virtual SAN, use many magnetic disks that have smaller capacity.
n
You must have enough magnetic disks that provide adequate aggregated performance for transferring data between cache and capacity. Using more small devices provides beer performance than using fewer large devices. Using multiple magnetic disk spindles can speed up the destaging process.
28 VMware, Inc.
Chapter 4 Designing and Sizing a Virtual SAN Cluster
In environments that contain many virtual machines, the number of magnetic disks is also important for read operations when data is not available in the read cache and Virtual SAN reads it from the magnetic disk. In environments that contain a small number of virtual machines, the disk number impacts read operations if the Number of disk stripes per object in the active VM storage policy is greater than one.
For balanced performance and predictable behavior, use the same type and model of magnetic disks in
n
a Virtual SAN datastore.
Dedicate a high enough number of magnetic disks to satisfy the value of the Primary level of failures
n
to tolerate and the Number of disk stripes per object aributes in the dened storage policies. For information about the VM storage policies for Virtual SAN, see Chapter 13, “Using Virtual SAN
Policies,” on page 123.

Design Considerations for Storage Controllers in Virtual SAN

Include storage controllers on the hosts of a Virtual SAN cluster that best satisfy the requirements for performance and availability.
- Use storage controller models, and driver and firmware versions that are listed in the VMware Compatibility Guide. Search for Virtual SAN in the VMware Compatibility Guide.

- Use multiple storage controllers, if possible, to improve performance and to isolate a potential controller failure to only a subset of disk groups.

- Use storage controllers that have the highest queue depths in the VMware Compatibility Guide. Using controllers with high queue depth improves performance; for example, when Virtual SAN is rebuilding components after a failure or when a host enters maintenance mode.

- Use storage controllers in passthrough mode for best performance of Virtual SAN. Storage controllers in RAID 0 mode require higher configuration and maintenance efforts compared to storage controllers in passthrough mode.

Designing and Sizing vSAN Hosts

Plan the conguration of the hosts in the vSAN cluster for best performance and availability.
Memory and CPU
Size the memory and the CPU of the hosts in the vSAN cluster based on the following considerations.
Table 4-2. Sizing Memory and CPU of vSAN Hosts

Memory
   - Memory per virtual machine
   - Memory per host, based on the expected number of virtual machines
   - At least 32-GB memory for fully operational vSAN with 5 disk groups per host and 7 capacity devices per disk group
   Hosts that have 512-GB memory or less can boot from a USB, SD, or SATADOM device. If the memory of the host is greater than 512 GB, boot the host from a SATADOM or disk device.

CPU
   - Sockets per host
   - Cores per socket
   - Number of vCPUs based on the expected number of virtual machines
   - vCPU-to-core ratio
   - 10% CPU overhead for vSAN

Host Networking

Provide more bandwidth for vSAN traffic to improve performance.

- If you plan to use hosts that have 1-GbE adapters, dedicate adapters for vSAN only. For all-flash configurations, plan hosts that have dedicated or shared 10-GbE adapters.

- If you plan to use 10-GbE adapters, they can be shared with other traffic types for both hybrid and all-flash configurations.

- If a 10-GbE adapter is shared with other traffic types, use a vSphere Distributed Switch for vSAN traffic to isolate the traffic by using Network I/O Control and VLANs.

- Create a team of physical adapters for vSAN traffic for redundancy.
Multiple Disk Groups
If the ash cache or storage controller stops responding, an entire disk group can fail. As a result, vSAN rebuilds all components for the failed disk group from another location in the cluster.
Use of multiple disk groups, with each disk group providing less capacity, provides the following benets and disadvantages:
Benets
n
Performance is improved because the datastore has more aggregated cache, and I/O operations are
n
faster.
Risk of failure is spread among multiple disk groups.
n
If a disk group fails, vSAN rebuilds fewer components, so performance is improved.
n
Disadvantages
n
Costs are increased because two or more caching devices are required.
n
More memory is required to handle more disk groups.
n
Multiple storage controllers are required to reduce the risk of a single point of failure.
n
Drive Bays
For easy maintenance, consider hosts whose drive bays and PCIe slots are at the front of the server body.
Blade Servers and External Storage
The capacity of blade servers usually does not scale in a vSAN datastore because they have a limited number of disk slots. To extend the planned capacity of blade servers, use external storage enclosures. For information about the supported models of external storage enclosures, see VMware Compatibility Guide.
Hot Plug and Swap of Devices
Consider the storage controller passthrough mode support for easy hot plugging or replacement of magnetic disks and flash capacity devices on a host. If a controller works in RAID 0 mode, you must perform additional steps before the host can discover the new drive.

Design Considerations for a Virtual SAN Cluster

Design the conguration of hosts and management nodes for best availability and tolerance to consumption growth.
Sizing the Virtual SAN Cluster for Failures to Tolerate
You congure the Primary level of failures to tolerate (PFTT) aribute in the VM storage policies to handle host failures. The number of hosts required for the cluster is calculated as follows: 2 * PFTT + 1. The more failures the cluster is congured to tolerate, the more capacity hosts are required.
If the cluster hosts are connected in rack servers, you can organize the hosts into fault domains to improve failure management. See “Designing and Sizing Virtual SAN Fault Domains,” on page 34.
Limitations of a Two-Host or Three-Host Cluster Configuration
In a two-host or three-host conguration, you can tolerate only one host failure by seing the V of failures to tolerate to 1. Virtual SAN saves each of the two required replicas of virtual machine data on separate
hosts. The witness object is on a third host. Because of the small number of hosts in the cluster, the following limitations exist:
When a host fails, Virtual SAN cannot rebuild data on another host to protect against another failure.
n
If a host must enter maintenance mode, Virtual SAN cannot reprotect evacuated data. Data is exposed
n
to a potential failure while the host is in maintenance mode.
You can use only the Ensure data accessibility data evacuation option. The Evacuate all data option is not available because the cluster does not have a spare host that it can use for evacuating data.
As a result, virtual machines are at risk because they become inaccessible if another failure occurs.
Balanced and Unbalanced Cluster Configuration
Virtual SAN works best on hosts with uniform configurations.
Using hosts with different configurations has the following disadvantages in a Virtual SAN cluster:
- Reduced predictability of storage performance, because Virtual SAN does not store the same number of components on each host.
- Different maintenance procedures.
- Reduced performance on hosts in the cluster that have smaller or different types of cache devices.
Deploying vCenter Server on Virtual SAN
If you deploy vCenter Server on the Virtual SAN datastore, you might not be able to use vCenter Server for troubleshooting if a problem occurs in the Virtual SAN cluster.

Designing the Virtual SAN Network

Consider networking features that can provide availability, security, and bandwidth guarantee in a Virtual SAN cluster.
For details about the Virtual SAN network configuration, see the VMware Virtual SAN Design and Sizing Guide and the Virtual SAN Network Design Guide.
Networking Failover and Load Balancing
Virtual SAN uses the teaming and failover policy that is configured on the backing virtual switch for network redundancy only. Virtual SAN does not use NIC teaming for load balancing.
If you plan to configure a NIC team for availability, consider these failover configurations.
Teaming Algorithm | Failover Configuration of the Adapters in the Team
Route based on originating virtual port | Active/Passive
Route based on IP hash | Active/Active with static EtherChannel for standard switch and LACP port channel for distributed switch
Route based on physical network adapter load | Active/Active
Virtual SAN supports IP-hash load balancing, but cannot guarantee improved performance for all configurations. IP hash can help when Virtual SAN is one of many consumers of the adapter team, because the load is then balanced across the physical adapters. If Virtual SAN is the only consumer, you might notice no improvement. This behavior specifically applies to 1-GbE environments. For example, if you use four 1-GbE physical adapters with IP hash for Virtual SAN, you might not be able to use more than 1 Gbps. This behavior also applies to all NIC teaming policies that VMware supports.
Virtual SAN does not support multiple VMkernel adapters on the same subnet. You can use multiple VMkernel adapters on different subnets, such as another VLAN or separate physical fabric. Providing availability by using several VMkernel adapters has configuration costs that involve vSphere and the network infrastructure. You can achieve network availability more easily by teaming physical network adapters, which requires less setup.
Using Unicast in Virtual SAN Network
In Virtual SAN 6.6 and later releases, multicast is not required on the physical switches that support the Virtual SAN cluster. You can design a simple unicast network for Virtual SAN. Earlier releases of Virtual SAN rely on multicast to enable heartbeat and to exchange metadata between hosts in the cluster. If some hosts in your Virtual SAN cluster are running earlier versions of software, a multicast network is still required. For more information about using multicast in a Virtual SAN cluster, refer to an earlier version of Administering VMware Virtual SAN.
N The following conguration is not supported: vCenter Server deployed on a Virtual SAN 6.6 cluster that is using IP addresses from DHCP without reservations. You can use DHCP with reservations, because the assigned IP addresses are bound to the MAC addresses of VMkernel ports.
Allocating Bandwidth for Virtual SAN by Using Network I/O Control
If Virtual SAN trac uses 10-GbE physical network adapters that are shared with other system trac types, such as vSphere vMotion trac, vSphere HA trac, virtual machine trac, and so on, you can use the vSphere Network I/O Control in vSphere Distributed Switch to guarantee the amount of bandwidth that is required for Virtual SAN.
In vSphere Network I/O Control, you can configure reservation and shares for the Virtual SAN outgoing traffic.
- Set a reservation so that Network I/O Control guarantees that minimum bandwidth is available on the physical adapter for Virtual SAN.
- Set shares so that when the physical adapter assigned for Virtual SAN becomes saturated, certain bandwidth is available to Virtual SAN, and so that Virtual SAN does not consume the entire capacity of the physical adapter during rebuild and synchronization operations. For example, the physical adapter might become saturated when another physical adapter in the team fails and all traffic in the port group is transferred to the other adapters in the team.
For example, on a 10-GbE physical adapter that handles traffic for Virtual SAN, vSphere vMotion, and virtual machines, you can configure certain bandwidth and shares.

Table 4-3. Example Network I/O Control Configuration for a Physical Adapter That Handles Virtual SAN

Traffic Type | Reservation, Gbps | Shares
Virtual SAN | 1 | 100
vSphere vMotion | 0.5 | 70
Virtual machine | 0.5 | 30
If the 10-GbE adapter becomes saturated, Network I/O Control allocates 5 Gbps to Virtual SAN on the physical adapter.
For information about using vSphere Network I/O Control to configure bandwidth allocation for Virtual SAN traffic, see the vSphere Networking documentation.
Marking Virtual SAN Traffic
Priority tagging is a mechanism to indicate to the connected network devices that Virtual SAN traffic has high Quality of Service (QoS) demands. You can assign Virtual SAN traffic to a certain class and mark the traffic accordingly with a Class of Service (CoS) value from 0 (low priority) to 7 (high priority), using the traffic filtering and marking policy of vSphere Distributed Switch.
Segmenting Virtual SAN Traffic in a VLAN
Consider isolating Virtual SAN traffic in a VLAN for enhanced security and performance, especially if you share the capacity of the backing physical adapter among several traffic types.
Jumbo Frames
If you plan to use jumbo frames with Virtual SAN to improve CPU performance, verify that jumbo frames are enabled on all network devices and hosts in the cluster.
By default, the TCP segmentation offload (TSO) and large receive offload (LRO) features are enabled on ESXi. Consider whether using jumbo frames improves the performance enough to justify the cost of enabling them on all nodes on the network.
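If you enable jumbo frames, the MTU must match on the virtual switches, the VMkernel adapters, and the physical switch ports end to end. The following commands are a hedged sketch from the ESXi Shell, where vSwitch1 and vmk2 are placeholder names for a standard switch and the VMkernel adapter that carry Virtual SAN traffic:

esxcli network vswitch standard set -v vSwitch1 -m 9000
# Sets the MTU of the standard switch to 9000 bytes.

esxcli network ip interface set -i vmk2 -m 9000
# Sets the MTU of the VMkernel adapter to 9000 bytes.

esxcli network ip interface list
# Verify that the new MTU value appears for the adapter.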

Creating Static Routes for Virtual SAN Networking

You might need to create static routes in your Virtual SAN environment.
In traditional congurations, where vSphere uses a single default gateway, all routed trac aempts to reach its destination through this gateway.
However, certain Virtual SAN deployments might require static routing. For example, deployments where the witness is on a dierent network, or the stretched cluster deployment, where both the data sites and the witness host are on dierent sites.
To congure static routing on your ESXi hosts, use the esxcli command:
esxcli network ip route ipv4 add –n remote-network -g gateway-to-use
remote-network is the remote network that your host must access, and gateway-to-use is the interface to use when trac is sent to the remote network.
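For example, to reach a witness network 192.168.100.0/24 through a gateway at 172.16.10.253 (both addresses are hypothetical), you might run:

esxcli network ip route ipv4 add -n 192.168.100.0/24 -g 172.16.10.253
# Adds a static route for the remote witness network.

esxcli network ip route ipv4 list
# Verify that the new route appears in the routing table.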
For more information, see “Network Design for Stretched Clusters,” on page 64.

Best Practices for Virtual SAN Networking

Consider networking best practices for Virtual SAN to improve performance and throughput.
For hybrid congurations, dedicate at least 1-GbE physical network adapter. Place Virtual SAN trac
n
on a dedicated or shared 10-GbE physical adapter for best networking performance.
For all-ash congurations, use a dedicated or shared 10-GbE physical network adapter.
n
Provision one additional physical NIC as a failover NIC.
n
If you use a shared 10-GbE network adapter, place the Virtual SAN trac on a distributed switch and
n
congure Network I/O Control to guarantee bandwidth to Virtual SAN.

Designing and Sizing Virtual SAN Fault Domains

The Virtual SAN fault domains feature instructs Virtual SAN to spread redundancy components across the servers in separate computing racks. In this way, you can protect the environment from a rack-level failure such as loss of power or connectivity.
Fault Domain Constructs
Virtual SAN requires at least two fault domains, each of which consists of one or more hosts. Fault domain definitions must acknowledge physical hardware constructs that might represent a potential zone of failure, for example, an individual computing rack enclosure.
If possible, use at least four fault domains. Three fault domains do not support certain data evacuation modes, and Virtual SAN is unable to reprotect data after a failure. In this case, you need an additional fault domain with capacity for rebuilding, which you cannot provide with only three fault domains.
If fault domains are enabled, Virtual SAN applies the active virtual machine storage policy to the fault domains instead of the individual hosts.
Calculate the number of fault domains in a cluster based on the Primary level of failures to tolerate (PFTT) attribute from the storage policies that you plan to assign to virtual machines.
number of fault domains = 2 * PFTT + 1
If a host is not a member of a fault domain, Virtual SAN interprets it as a stand-alone fault domain.
Using Fault Domains Against Failures of Several Hosts
Consider a cluster that contains four server racks, each with two hosts. If the Primary level of failures to tolerate is set to one and fault domains are not enabled, Virtual SAN might store both replicas of an object on hosts in the same rack enclosure. In this way, applications might be exposed to a potential data loss on a rack-level failure. When you configure hosts that could potentially fail together into separate fault domains, Virtual SAN ensures that each protection component (replicas and witnesses) is placed in a separate fault domain.
If you add hosts and capacity, you can use the existing fault domain configuration or you can define fault domains.
For balanced storage load and fault tolerance when using fault domains, consider the following guidelines:
- Provide enough fault domains to satisfy the Primary level of failures to tolerate that is configured in the storage policies.
- Define at least three fault domains. Define a minimum of four domains for best protection.
- Assign the same number of hosts to each fault domain.
- Use hosts that have uniform configurations.
- Dedicate one fault domain of free capacity for rebuilding data after a failure, if possible.

Using Boot Devices and vSAN

Starting an ESXi installation that is a part of a vSAN cluster from a flash device imposes certain restrictions.
When you boot a vSAN host from a USB/SD device, you must use a high-quality USB or SD flash drive of 4 GB or larger.
When you boot a vSAN host from a SATADOM device, you must use a single-level cell (SLC) device. The size of the boot device must be at least 16 GB.
During installation, the ESXi installer creates a coredump partition on the boot device. The default size of the coredump partition satisfies most installation requirements.
If the ESXi host has 512 GB of memory or less, you can boot the host from a USB, SD, or SATADOM device. If the ESXi host has more than 512 GB of memory, you must boot the host from a SATADOM or disk device.
N vSAN 6.5 and later enables you to resize an existing coredump partition on an ESXi host in a vSAN cluster, and enables you to boot from USB/SD devices. For more information, see the VMware knowledge base article at hp://kb.vmware.com/kb/2147881.
Hosts that boot from a disk have a local VMFS datastore. If the disk with VMFS also runs virtual machines, you must dedicate a separate disk for the ESXi boot that is not used for vSAN. In this case, you need separate controllers.
Log Information and Boot Devices in vSAN
When you boot ESXi from a USB or SD device, log information and stack traces are lost on host reboot. They are lost because the scratch partition is on a RAM drive. Use persistent storage for logs, stack traces, and memory dumps.
Do not store log information on the vSAN datastore. This configuration is not supported because a failure in the vSAN cluster could impact the accessibility of log information.
Consider the following options for persistent log storage:
- Use a storage device that is not used for vSAN and is formatted with VMFS or NFS.
- Configure the ESXi Dump Collector and vSphere Syslog Collector on the host to send memory dumps and system logs to vCenter Server.
For information about seing up the scratch partition with a persistent location, see the vSphere Installation and Setup documentation.

Persistent Logging in a Virtual SAN Cluster

Provide storage for persistence of the logs from the hosts in the Virtual SAN cluster.
If you install ESXi on a USB or SD device and you allocate local storage to Virtual SAN, you might not have enough local storage or datastore space left for persistent logging.
To avoid potential loss of log information, configure the ESXi Dump Collector and vSphere Syslog Collector to redirect ESXi memory dumps and system logs to a network server. See the vSphere Installation and Setup documentation.
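As a minimal sketch from the ESXi Shell, where 192.0.2.10 is a placeholder address for the collector host and vmk0 is a placeholder management adapter; flag names can vary slightly across ESXi releases:

esxcli system syslog config set --loghost=udp://192.0.2.10:514
esxcli system syslog reload
# Redirects system logs to a remote syslog server and applies the change.

esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.0.2.10 --server-port 6500
esxcli system coredump network set --enable true
# Points network core dumps at an ESXi Dump Collector instance and enables them.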
Preparing a New or Existing Cluster
for Virtual SAN 5
Before you enable Virtual SAN on a cluster and start using it as virtual machine storage, provide the infrastructure that is required for correct operation of Virtual SAN.
This chapter includes the following topics:
- “Selecting or Verifying the Compatibility of Storage Devices,” on page 37
- “Preparing Storage,” on page 38
- “Providing Memory for Virtual SAN,” on page 41
- “Preparing Your Hosts for Virtual SAN,” on page 42
- “Virtual SAN and vCenter Server Compatibility,” on page 42
- “Preparing Storage Controllers,” on page 42
- “Configuring Virtual SAN Network,” on page 43
- “Considerations about the Virtual SAN License,” on page 44

Selecting or Verifying the Compatibility of Storage Devices

An important step before you deploy Virtual SAN is to verify that your storage devices, drivers, and firmware are compatible with Virtual SAN by consulting the VMware Compatibility Guide.
You can choose from several options for Virtual SAN compatibility.
- Use a Virtual SAN Ready Node server, a physical server that OEM vendors and VMware validate for Virtual SAN compatibility.
- Assemble a node by selecting individual components from validated device models.
In the VMware Compatibility Guide, verify the following component types in each section:

Systems
- Physical server that runs ESXi.

Virtual SAN
- Magnetic disk SAS or SATA model for hybrid configurations.
- Flash device model that is listed in the VMware Compatibility Guide. Certain models of PCIe flash devices can also work with Virtual SAN. Consider also write endurance and performance class.
- Storage controller model that supports passthrough. Virtual SAN can work with storage controllers that are configured for RAID 0 mode if each storage device is represented as an individual RAID 0 group.

Preparing Storage

Provide enough disk space for Virtual SAN and for the virtualized workloads that use the Virtual SAN datastore.

Preparing Storage Devices

Use ash devices and magnetic disks based on the requirements for Virtual SAN.
Verify that the cluster has the capacity to accommodate anticipated virtual machine consumption and the Primary level of failures to tolerate in the storage policy for the virtual machines.
The storage devices must meet the following requirements so that Virtual SAN can claim them:
- The storage devices are local to the ESXi hosts. Virtual SAN cannot claim remote devices.
- The storage devices do not have any preexisting partition information. A quick check is shown after this list.
- On the same host, you cannot have both all-flash and hybrid disk groups.
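As a quick check from the ESXi Shell, you can list the partition table of a candidate device. This is a hedged example with a placeholder device identifier:

partedUtil getptbl /vmfs/devices/disks/naa.50000000000000001
# Prints the partition table type and disk geometry. A device that vSAN can
# claim reports no partition entries after the geometry line.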
Prepare Devices for Disk Groups
Each disk group provides one flash caching device and at least one magnetic disk or one flash capacity device. The capacity of the flash caching device must be at least 10 percent of the anticipated consumed storage on the capacity device, without the protection copies.
Virtual SAN requires at least one disk group on a host that contributes storage to a cluster that consists of at least three hosts. Use hosts that have uniform configuration for best performance of Virtual SAN.
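Disk groups are typically claimed from the vSphere Web Client, but you can also create one from the ESXi Shell. A minimal sketch, where the device identifiers are placeholders for one flash cache device and one capacity device on the host:

esxcli vsan storage add -s naa.500a07510f86d685 -d naa.500a07510f86d686
# -s names the flash device for the cache tier; -d names a capacity device.
# Repeat the -d option to add more capacity devices to the same disk group.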
Raw and Usable Capacity
Provide raw storage capacity that is greater than the capacity for virtual machines to handle certain cases.
- Do not include the size of the flash caching devices as capacity. These devices do not contribute storage and are used as cache unless you have added flash devices for storage.
- Provide enough space to handle the Primary level of failures to tolerate (PFTT) value in a virtual machine storage policy. A PFTT that is greater than 0 extends the device footprint. If the PFTT is set to 1, the footprint is double. If the PFTT is set to 2, the footprint is triple, and so on.
- Verify whether the Virtual SAN datastore has enough space for an operation by examining the space on the individual hosts rather than on the consolidated Virtual SAN datastore object. For example, when you evacuate a host, all free space in the datastore might be on the host that you are evacuating, and the cluster cannot accommodate the evacuation to another host.
- Provide enough space to prevent the datastore from running out of capacity, if workloads that have thinly provisioned storage start consuming a large amount of storage.
- Verify that the physical storage can accommodate the reprotection and maintenance mode of the hosts in the Virtual SAN cluster.
- Consider the Virtual SAN overhead to the usable storage space.
  - On-disk format version 1.0 adds an additional overhead of approximately 1 GB per capacity device.
  - On-disk format version 2.0 adds an additional overhead, typically no more than 1-2 percent capacity per device.
  - On-disk format version 3.0 and later adds an additional overhead, typically no more than 1-2 percent capacity per device. Deduplication and compression with software checksum enabled require an additional overhead of approximately 6.2 percent capacity per device.
For more information about planning the capacity of Virtual SAN datastores, see the VMware Virtual SAN Design and Sizing Guide.
Virtual SAN Policy Impact on Capacity
The Virtual SAN storage policy for virtual machines affects the capacity devices in several ways.

Table 5-1. Virtual SAN VM Policy and Raw Capacity

Policy changes
- The Primary level of failures to tolerate (PFTT) influences the physical storage space that you must supply for virtual machines. The greater the PFTT is for higher availability, the more space you must provide. When PFTT is set to 1, it imposes two replicas of the VMDK file of a virtual machine. With PFTT set to 1, a VMDK file that is 50 GB requires 100 GB of space on different hosts. If the PFTT is changed to 2, you must have enough space to support three replicas of the VMDK across the hosts in the cluster, or 150 GB.
- Some policy changes, such as a new number of disk stripes per object, require temporary resources. Virtual SAN recreates the objects that are affected by the change, and for a certain time the physical storage must accommodate both the old and new objects.

Available space for reprotecting or maintenance mode
- When you place a host in maintenance mode or you clone a virtual machine, the datastore might not be able to evacuate the virtual machine objects, even though the Virtual SAN datastore indicates that enough space is available, because the free space is on the host that is being placed in maintenance mode.

Mark Flash Devices as Capacity Using ESXCLI

You can manually mark the flash devices on each host as capacity devices using esxcli.
Prerequisites
Verify that you are using Virtual SAN 6.5 or later.
Procedure
1 To learn the name of the flash device that you want to mark as capacity, run the following command on each host.
a In the ESXi Shell, run the esxcli storage core device list command. The command lists all device information identified by ESXi.
b Locate the device name at the top of the command output and write the name down.
The esxcli vsan storage tag command takes the following options:

Table 5-2. Command Options

-d|--disk=str | The name of the device that you want to tag as a capacity device. For example, mpx.vmhba1:C0:T4:L0.
-t|--tag=str | Specify the tag that you want to add or remove. For example, the capacityFlash tag is used for marking a flash device for capacity.
2 In the output, verify that the Is SSD attribute for the device is true.
3 To tag a ash device as capacity, run the esxcli vsan storage tag add -d <device name> -t
capacityFlash command.
For example, the esxcli vsan storage tag add -t capacityFlash -d mpx.vmhba1:C0:T4:L0 command, where mpx.vmhba1:C0:T4:L0 is the device name.
4 Verify whether the ash device is marked as capacity.
a In the output, identify whether the IsCapacityFlash aribute for the device is set to 1.
Example: Command Output
You can run the vdq -q -d <device name> command to verify the IsCapacityFlash aribute. For example, running the vdq -q -d mpx.vmhba1:C0:T4:L0 command, returns the following output.
\{
"Name" : "mpx.vmhba1:C0:T4:L0",
"VSANUUID" : "",
"State" : "Eligible for use by VSAN",
"ChecksumSupport": "0",
"Reason" : "None",
"IsSSD" : "1",
"IsCapacityFlash": "1",
"IsPDL" : "0",
\},

Untag Flash Devices Used as Capacity Using ESXCLI

You can untag ash devices that are used as capacity devices, so that they are available for caching.
Procedure
1 To untag a ash device marked as capacity, run the esxcli vsan storage tag remove -d <device name>
-t capacityFlash command. For example, the esxcli vsan storage tag remove -t capacityFlash -d mpx.vmhba1:C0:T4:L0 command, where mpx.vmhba1:C0:T4:L0 is the device name.
2 Verify whether the ash device is untagged.
a In the output, identify whether the IsCapacityFlash aribute for the device is set to 0.
Example: Command Output
You can run the vdq -q -d <device name> command to verify the IsCapacityFlash attribute. For example, running the vdq -q -d mpx.vmhba1:C0:T4:L0 command returns the following output:
[
   {
      "Name" : "mpx.vmhba1:C0:T4:L0",
      "VSANUUID" : "",
      "State" : "Eligible for use by VSAN",
      "ChecksumSupport": "0",
      "Reason" : "None",
      "IsSSD" : "1",
      "IsCapacityFlash": "0",
      "IsPDL" : "0",
   },
]

Mark Flash Devices as Capacity using RVC

Run the vsan.host_claim_disks_differently RVC command to mark storage devices as flash, capacity flash, or magnetic disk (HDD).
You can use the RVC tool to tag flash devices as capacity devices either individually, or in a batch by specifying the model of the device. When you want to tag flash devices as capacity devices, you can include them in all-flash disk groups.
NOTE: The vsan.host_claim_disks_differently command does not check the device type before tagging devices. The command tags any device that you append with the capacity_flash command option, including magnetic disks and devices that are already in use. Make sure you verify the device status before tagging.
For information about the RVC commands for Virtual SAN management, see the RVC Command Reference Guide.
Prerequisites
- Verify that you are using Virtual SAN version 6.5 or later.
- Verify that SSH is enabled on the vCenter Server Appliance.
Procedure
1 Open an SSH connection to the vCenter Server Appliance.
2 Log into the appliance by using a local account that has administrator privilege.
3 Start the RVC by running the following command.
rvc local_user_name@target_vCenter_Server
For example, to use the same vCenter Server Appliance to mark flash devices for capacity as a user root, run the following command:
rvc root@localhost
4 Enter the password for the user name.
5 Navigate to the vcenter_server/data_center/computers/cluster/hosts directory in the vSphere
infrastructure.
6 Run the vsan.host_claim_disks_differently command with the --claim-type capacity_flash
--model model_name options to mark all flash devices of the same model as capacity on all hosts in the cluster.
vsan.host_claim_disks_differently --claim-type capacity_flash --model model_name *
What to do next
Enable Virtual SAN on the cluster and claim capacity devices.

Providing Memory for Virtual SAN

You must provision hosts with memory according to the maximum number of devices and disk groups that you intend to map to Virtual SAN.
To satisfy the case of the maximum number of devices and disk groups, you must provision hosts with 32 GB of memory for system operations. For information about the maximum device configuration, see the vSphere Configuration Maximums documentation.

Preparing Your Hosts for Virtual SAN

As a part of the preparation for enabling Virtual SAN, review the requirements and recommendations about the configuration of hosts for the cluster.
- Verify that the storage devices on the hosts, and the driver and firmware versions for them, are listed in the Virtual SAN section of the VMware Compatibility Guide.
- Make sure that a minimum of three hosts contribute storage to the Virtual SAN datastore.
- For maintenance and remediation operations on failure, add at least four hosts to the cluster.
- Designate hosts that have uniform configuration for best storage balance in the cluster.
- Do not add hosts that have only compute resources to the cluster, to avoid unbalanced distribution of storage components on the hosts that contribute storage. Virtual machines that require a lot of storage space and run on compute-only hosts might store a great number of components on individual capacity hosts. As a result, the storage performance in the cluster might be lower.
- Do not configure aggressive CPU power management policies on the hosts for saving power. Certain applications that are sensitive to CPU speed latency might have very low performance. For information about CPU power management policies, see the vSphere Resource Management documentation.
- If your cluster contains blade servers, consider extending the capacity of the datastore with an external storage enclosure that is connected to the blade servers and is listed in the Virtual SAN section of the VMware Compatibility Guide.
- Consider the configuration of the workloads that you place on a hybrid or all-flash disk configuration.
  - For high levels of predictable performance, provide a cluster of all-flash disk groups.
  - For balance between performance and cost, provide a cluster of hybrid disk groups.

Virtual SAN and vCenter Server Compatibility

Synchronize the versions of vCenter Server and of ESXi to avoid potential faults because of differences in the Virtual SAN support in vCenter Server and ESXi.
For best integration between Virtual SAN components on vCenter Server and ESXi, deploy the latest version of the two vSphere components. See the vSphere Installation and Setup and vSphere Upgrade documentation.

Preparing Storage Controllers

Congure the storage controller on a host according to the requirements of Virtual SAN.
Verify that the storage controllers on the Virtual SAN hosts satisfy certain requirements for mode, driver, and rmware version, queue depth, caching and advanced features.
Table 5-3. Examining Storage Controller Configuration for Virtual SAN

Required mode
- Review the Virtual SAN requirements in the VMware Compatibility Guide for the required mode, passthrough or RAID 0, of the controller.
- If both passthrough and RAID 0 modes are supported, configure passthrough mode instead of RAID 0. RAID 0 introduces complexity for disk replacement.

RAID mode
- In the case of RAID 0, create one RAID volume per physical disk device.
- Do not enable a RAID mode other than the mode listed in the VMware Compatibility Guide.
- Do not enable controller spanning.

Driver and firmware version
- Use the latest driver and firmware version for the controller according to the VMware Compatibility Guide.
- If you use the in-box controller driver, verify that the driver is certified for Virtual SAN. OEM ESXi releases might contain drivers that are not certified and listed in the VMware Compatibility Guide.

Queue depth
- Verify that the queue depth of the controller is 256 or higher. Higher queue depth provides improved performance.

Cache
- Disable the storage controller cache, or set it to 100 percent read if disabling the cache is not possible.

Advanced features
- Disable advanced features, for example, HP SSD Smart Path.
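You can read the queue depth that a controller and driver present for a device from the ESXi Shell. A hedged example with a placeholder device name:

esxcli storage core device list -d naa.500a07510f86d685 | grep -i "Queue Depth"
# The Device Max Queue Depth value shows the depth available for the device
# through the controller and its driver.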

Configuring Virtual SAN Network

Before you enable Virtual SAN on a cluster and ESXi hosts, you must construct the necessary network to carry the Virtual SAN communication.
Virtual SAN provides a distributed storage solution, which implies exchanging data across the ESXi hosts that participate in the cluster. Preparing the network for installing Virtual SAN includes certain configuration aspects.
For information about network design guidelines, see “Designing the Virtual SAN Network,” on page 32.
Placing Hosts in the Same Subnet
Hosts must be connected in the same subnet for best networking performance. In Virtual SAN 6.0 and later, you can also connect hosts in the same Layer 3 network if required.
Dedicating Network Bandwidth on a Physical Adapter
Allocate at least 1 Gbps of bandwidth for Virtual SAN. You might use one of the following configuration options:
- Dedicate 1-GbE physical adapters for a hybrid host configuration.
- Use dedicated or shared 10-GbE physical adapters for all-flash configurations.
- Use dedicated or shared 10-GbE physical adapters for hybrid configurations if possible.
- Direct Virtual SAN traffic on a 10-GbE physical adapter that handles other system traffic, and use vSphere Network I/O Control on a distributed switch to reserve bandwidth for Virtual SAN.
Configuring a Port Group on a Virtual Switch
Congure a port group on a virtual switch for Virtual SAN.
Assign the physical adapter for Virtual SAN to the port group as an active uplink.
n
In the case of a NIC team for network availability, select a teaming algorithm based on the connection of the physical adapters to the switch.
If designed, assign Virtual SAN trac to a VLAN by enabling tagging in the virtual switch.
n
Examining the Firewall on a Host for Virtual SAN
Virtual SAN sends messages on certain ports on each host in the cluster. Verify that the host firewalls allow traffic on these ports.

Table 5-4. Ports on the Hosts in Virtual SAN

Virtual SAN Service | Traffic Direction | Communicating Nodes | Transport Protocol | Port
Virtual SAN Vendor Provider (vsanvp) | Incoming and outgoing | vCenter Server and ESXi | TCP | 8080
Virtual SAN Clustering Service | | ESXi | UDP | 12345, 23451
Virtual SAN Transport | | ESXi | TCP | 2233
Unicast agent | | ESXi | UDP | 12321
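You can list the firewall rulesets from the ESXi Shell to confirm that the Virtual SAN services are allowed. Ruleset names vary by release, so treat the names in the filter below as examples:

esxcli network firewall ruleset list | grep -iE "vsan|cmmds|rdt"
# Shows the vSAN-related rulesets and whether each one is enabled.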

Considerations about the Virtual SAN License

When you prepare your cluster for Virtual SAN, review the requirements of the Virtual SAN license.
- Make sure that you obtained a valid license for full host configuration control in the cluster. The license should be different from the one that you used for evaluation purposes. After the license or the evaluation period of Virtual SAN expires, you can continue to use the current configuration of Virtual SAN resources. However, you cannot add capacity to a disk group or create disk groups.
- If the cluster consists of all-flash disk groups, verify that the all-flash feature is available under your license.
- If the Virtual SAN cluster uses advanced features such as deduplication and compression or stretched cluster, verify that the feature is available under your license.
- Consider the CPU capacity of the Virtual SAN license across the cluster when adding and removing hosts to the cluster. Virtual SAN licenses have per-CPU capacity. When you assign a Virtual SAN license to a cluster, the amount of license capacity that is used equals the total number of CPUs on the hosts that participate in the cluster.

Creating a Virtual SAN Cluster 6

You can activate Virtual SAN when you create a cluster or enable Virtual SAN on your existing clusters.
This chapter includes the following topics:
- “Characteristics of a Virtual SAN Cluster,” on page 45
- “Before Creating a Virtual SAN Cluster,” on page 46
- “Enabling Virtual SAN,” on page 47
- “Using Virtual SAN Configuration Assist and Updates,” on page 56

Characteristics of a Virtual SAN Cluster

Before working on a Virtual SAN environment, you should be aware of the characteristics of a Virtual SAN cluster.
A Virtual SAN cluster includes the following characteristics:
- You can have multiple Virtual SAN clusters for each vCenter Server instance. You can use a single vCenter Server to manage more than one Virtual SAN cluster.
- Virtual SAN consumes all devices, including flash cache and capacity devices, and does not share devices with other features.
- Virtual SAN clusters can include hosts with or without capacity devices. The minimum requirement is three hosts with capacity devices. For best results, create a Virtual SAN cluster with uniformly configured hosts.
- If a host contributes capacity, it must have at least one flash cache device and one capacity device.
- In hybrid clusters, the magnetic disks are used for capacity and flash devices for read and write cache. Virtual SAN allocates 70 percent of all available cache for read cache and 30 percent of available cache for write buffer. In this configuration, the flash devices serve as a read cache and a write buffer.
- In all-flash clusters, one designated flash device is used as a write cache, and additional flash devices are used for capacity. In all-flash clusters, all read requests come directly from the flash pool capacity.
- Only local or direct-attached capacity devices can participate in a Virtual SAN cluster. Virtual SAN cannot consume other external storage, such as SAN or NAS, attached to the cluster.
For best practices about designing and sizing a Virtual SAN cluster, see Chapter 4, “Designing and Sizing a Virtual SAN Cluster,” on page 23.

Before Creating a Virtual SAN Cluster

This topic provides a checklist of software and hardware requirements for creating a Virtual SAN cluster. You can also use the checklist to verify that the cluster meets the guidelines and basic requirements.
Requirements for Virtual SAN Cluster
Before you get started, verify specic models of hardware devices, and specic versions of drivers and rmware in the VMware Compatibility Guide Web site at
hp://www.vmware.com/resources/compatibility/search.php. The following table lists the key software and
hardware requirements supported by Virtual SAN.
C Using uncertied software and hardware components, drivers, controllers, and rmware might cause unexpected data loss and performance issues.
Table 6-1. Virtual SAN Cluster Requirements

ESXi Hosts
- Verify that you are using the latest version of ESXi on your hosts.
- Verify that there are at least three ESXi hosts with supported storage configurations available to be assigned to the Virtual SAN cluster. For best results, configure the Virtual SAN cluster with four or more hosts.

Memory
- Verify that each host has a minimum of 8 GB of memory.
- For larger configurations and better performance, you must have a minimum of 32 GB of memory in the cluster. See “Designing and Sizing vSAN Hosts,” on page 29.

Storage I/O controllers, drivers, firmware
- Verify that the storage I/O controllers, drivers, and firmware versions are certified and listed in the VCG Web site at http://www.vmware.com/resources/compatibility/search.php.
- Verify that the controller is configured for passthrough or RAID 0 mode.
- Verify that the controller cache and advanced features are disabled. If you cannot disable the cache, you must set the read cache to 100 percent.
- Verify that you are using controllers with higher queue depths. Using controllers with queue depths less than 256 can significantly impact the performance of your virtual machines during maintenance and failure.

Cache and capacity
- Verify that Virtual SAN hosts contributing storage to the cluster have at least one cache device and one capacity device. Virtual SAN requires exclusive access to the local cache and capacity devices of the hosts in the Virtual SAN cluster. These devices cannot be shared with other uses, such as Virtual Flash File System (VFFS), VMFS partitions, or an ESXi boot partition.
- For best results, create a Virtual SAN cluster with uniformly configured hosts.

Network connectivity
- Verify that each host is configured with at least one network adapter.
- For hybrid configurations, verify that Virtual SAN hosts have a minimum dedicated bandwidth of 1 GbE.
- For all-flash configurations, verify that Virtual SAN hosts have a minimum bandwidth of 10 GbE.
For best practices and considerations about designing the Virtual SAN network, see “Designing the Virtual SAN Network,” on page 32 and “Networking Requirements for Virtual SAN,” on page 21.
Table 6-1. Virtual SAN Cluster Requirements (Continued)

Virtual SAN and vCenter Server Compatibility
- Verify that you are using the latest version of the vCenter Server.

License key
- Verify that you have a valid Virtual SAN license key.
- To use the all-flash feature, your license must support that capability.
- To use advanced features, such as stretched clusters or deduplication and compression, your license must support those features.
- Verify that the amount of license capacity that you plan on using equals the total number of CPUs in the hosts participating in the Virtual SAN cluster. Do not provide license capacity only for hosts providing capacity to the cluster. For information about licensing for Virtual SAN, see the vCenter Server and Host Management documentation.

For detailed information about Virtual SAN Cluster requirements, see Chapter 3, “Requirements for Enabling Virtual SAN,” on page 19.
For in-depth information about designing and sizing the Virtual SAN cluster, see the VMware Virtual SAN Design and Sizing Guide.

Enabling Virtual SAN
To use Virtual SAN, you must create a host cluster and enable Virtual SAN on the cluster.
A Virtual SAN cluster can include hosts with capacity and hosts without capacity. Follow these guidelines when you create a Virtual SAN cluster.
- A Virtual SAN cluster must include a minimum of three ESXi hosts. For a Virtual SAN cluster to tolerate host and device failures, at least three hosts that join the Virtual SAN cluster must contribute capacity to the cluster. For best results, consider adding four or more hosts contributing capacity to the cluster.
- Only ESXi 5.5 Update 1 or later hosts can join the Virtual SAN cluster.
- All hosts in the Virtual SAN cluster must have the same on-disk format.
- Before you move a host from a Virtual SAN cluster to another cluster, make sure that the destination cluster is Virtual SAN enabled.
- To be able to access the Virtual SAN datastore, an ESXi host must be a member of the Virtual SAN cluster.
After you enable Virtual SAN, the Virtual SAN storage provider is automatically registered with vCenter Server and the Virtual SAN datastore is created. For information about storage providers, see the vSphere Storage documentation.
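After Virtual SAN is enabled, you can confirm the cluster membership of an individual host from the ESXi Shell. A minimal check:

esxcli vsan cluster get
# Reports the cluster UUID, the local node UUID and state (for example,
# MASTER, BACKUP, or AGENT), and the number of member hosts.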

Set Up a VMkernel Network for Virtual SAN

To enable the exchange of data in the Virtual SAN cluster, you must provide a VMkernel network adapter for Virtual SAN traffic on each ESXi host.
Procedure
1 In the vSphere Web Client, navigate to the host.
2 Click the Configure tab.
3 Under Networking, select VMkernel adapters.
4 Click the Add host networking icon to open the Add Networking wizard.
5 On the Select connection type page, select VMkernel Network Adapter and click Next.
6 Congure the target switching device.
7 On the Port properties page, select vSAN .
8 Complete the VMkernel adapter conguration.
9 On the Ready to complete page, verify that Virtual SAN is Enabled in the status for the VMkernel
adapter, and click Finish.
Virtual SAN network is enabled for the host.
What to do next
You can enable Virtual SAN on the host cluster.
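If you prefer the command line, you can apply the same VMkernel tagging from the ESXi Shell. A hedged sketch, where vmk2 is a placeholder for a VMkernel adapter that already has an IP address on the Virtual SAN network:

esxcli vsan network ip add -i vmk2
# Tags the VMkernel adapter for Virtual SAN traffic.

esxcli vsan network list
# Verify that the adapter is listed with the vsan traffic type.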

Create a Virtual SAN Cluster

You can enable Virtual SAN when you create a cluster.
Procedure
1 Right-click a data center in the vSphere Web Client and select New Cluster.
2 Type a name for the cluster in the Name text box.
This name appears in the vSphere Web Client navigator.
3 Select the vSAN Turn ON check box and click OK.
The cluster appears in the inventory.
4 Add hosts to the Virtual SAN cluster. See “Add a Host to the Virtual SAN Cluster,” on page 108.
Virtual SAN clusters can include hosts with or without capacity devices. For best results, add hosts with capacity.
Enabling Virtual SAN creates a Virtual SAN datastore and registers the Virtual SAN storage provider. Virtual SAN storage providers are built-in software components that communicate the storage capabilities of the datastore to vCenter Server.
What to do next
Verify that the Virtual SAN datastore has been created. See “View Virtual SAN Datastore,” on page 53.
Verify that the Virtual SAN storage provider is registered. See “View Virtual SAN Storage Providers,” on page 126.
Claim the storage devices or create disk groups. See Chapter 11, “Device Management in a Virtual SAN
Cluster,” on page 99.
Congure the Virtual SAN cluster. See “Congure a Cluster for Virtual SAN,” on page 49.

Configure a Cluster for Virtual SAN

You can use the Configure Virtual SAN wizard to complete the basic configuration of your Virtual SAN cluster.
Prerequisites
You must create a cluster and add hosts to the cluster before using the Configure Virtual SAN wizard to complete the basic configuration.
Procedure
1 Navigate to an existing cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General and click the Configure button.
4 Select vSAN capabilities.
a (Optional) Select the Deduplication and Compression check box if you want to enable
deduplication and compression on the cluster.
You can select the Allow Reduced Redundancy check box to enable deduplication and compression on a Virtual SAN cluster that has limited resources, such as a three-host cluster with the Primary level of failures to tolerate set to 1. If you allow reduced redundancy, your data might be at risk during the disk reformat operation.
b (Optional) Select the Encryption check box if you want to enable data at rest encryption, and select
a KMS.
c Select the fault tolerance mode for the cluster.
   Do not configure: Default setting used for a single-site Virtual SAN cluster.
   2 host virtual SAN cluster: Provides fault tolerance for a cluster that has two hosts at a remote office, with a witness host in the main office. Set the Primary level of failures to tolerate policy to 1.
   Stretched cluster: Supports two active sites, each with an even number of hosts and storage devices, and a witness host at a third site.
   Configure fault domains: Supports fault domains that you can use to group Virtual SAN hosts that might fail together. Assign one or more hosts to each fault domain.
d You can select the Allow Reduced Redundancy check box to enable encryption or deduplication and compression on a Virtual SAN cluster that has limited resources, for example, a three-host cluster with the Primary level of failures to tolerate set to 1. If you allow reduced redundancy, your data might be at risk during the disk reformat operation.
5 Click Next.
6 On the Network validation page, check the settings for Virtual SAN VMkernel adapters, and click Next.
7 On the Claim disks page, select the disks for use by the cluster and click Next. For each host that contributes storage, select one flash device for the cache tier, and one or more devices for the capacity tier.
8 Follow the wizard to complete the configuration of the cluster, based on the fault tolerance mode.
a If you selected Configure two host vSAN cluster, choose a witness host for the cluster, and claim disks for the witness host.
b If you selected Configure stretched cluster, define fault domains for the cluster, choose a witness host, and claim disks for the witness host.
c If you selected Configure fault domains, define fault domains for the cluster.
For more information about fault domains, see “Managing Fault Domains in Virtual SAN Clusters,” on page 113.
For more information about fault domains, see “Managing Fault Domains in Virtual SAN Clusters,” on page 113.
For more information about stretched clusters, see Chapter 7, “Extending a Datastore Across Two Sites
with Stretched Clusters,” on page 61.
9 On the Ready to complete page, review the configuration, and click Finish.

Edit Virtual SAN Settings

You can edit the seings of your Virtual SAN cluster to change the method for claiming disks and to enable deduplication and compression.
Edit the seings of an existing Virtual SAN cluster if you want to enable deduplication and compression, or to enable encryption. If you enable deduplication and compression, or if you enable encryption, the on-disk format of the cluster is automatically upgraded to the latest version.
Procedure
1 Navigate to the Virtual SAN host cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General.
4 Next to Virtual SAN is turned ON, click the Edit button.
5 (Optional) If you want to enable deduplication and compression on the cluster, select the Deduplication
and compression check box.
Virtual SAN will automatically upgrade the on-disk format, causing a rolling reformat of every disk group in the cluster.
6 (Optional) If you want to enable encryption on the cluster, select the Encryption check box, and select a KMS server.
Virtual SAN will automatically upgrade the on-disk format, causing a rolling reformat of every disk group in the cluster.
7 Click OK.

Enable Virtual SAN on an Existing Cluster

You can edit cluster properties to enable Virtual SAN for an existing cluster.
After enabling Virtual SAN on your cluster, you cannot move Virtual SAN hosts from a Virtual SAN enabled cluster to a non-Virtual SAN cluster.
Prerequisites
Verify that your environment meets all requirements. See Chapter 3, “Requirements for Enabling Virtual
SAN,” on page 19.
Procedure
1 Navigate to an existing host cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General and click Edit to edit the cluster seings.
4 If you want to enable deduplication and compression on the cluster, select the Deduplication and
compression check box.
Virtual SAN automatically upgrades the on-disk format, causing a rolling reformat of every disk group in the cluster.
5 (Optional) If you want to enable encryption on the cluster, select the Encryption check box, and select a KMS server.
Virtual SAN automatically upgrades the on-disk format, causing a rolling reformat of every disk group in the cluster.
6 Click OK.
What to do next
Claim the storage devices or create disk groups. See Chapter 11, “Device Management in a Virtual SAN
Cluster,” on page 99.

Disable Virtual SAN

You can turn off Virtual SAN for a host cluster.
When you disable the Virtual SAN cluster, all virtual machines located on the shared Virtual SAN datastore become inaccessible. If you intend to use the virtual machines while Virtual SAN is disabled, make sure you migrate them from the Virtual SAN datastore to another datastore before disabling the Virtual SAN cluster.
Prerequisites
Verify that the hosts are in maintenance mode.
Procedure
1 Navigate to the host cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General and click Edit to edit Virtual SAN settings.
4 Deselect the vSAN Turn On check box.

Configure License Settings for a Virtual SAN Cluster

You must assign a license to a Virtual SAN cluster before its evaluation period expires or its currently assigned license expires.
If you upgrade, combine, or divide Virtual SAN licenses, you must assign the new licenses to Virtual SAN clusters. When you assign a Virtual SAN license to a cluster, the amount of license capacity that is used equals the total number of CPUs in the hosts participating in the cluster. The license usage of the Virtual SAN cluster is recalculated and updated every time you add or remove a host from the cluster. For information about managing licenses and licensing terminology and definitions, see the vCenter Server and Host Management documentation.
When you enable Virtual SAN on a cluster, you can use Virtual SAN in evaluation mode to explore its features. The evaluation period starts when Virtual SAN is enabled, and expires after 60 days. To use Virtual SAN, you must license the cluster before the evaluation period expires. Just like vSphere licenses, Virtual SAN licenses have per-CPU capacity. Some advanced features, such as all-flash configuration and stretched clusters, require a license that supports the feature.
Prerequisites
To view and manage Virtual SAN licenses, you must have the Global.Licenses privilege on the vCenter Server systems where the vSphere Web Client runs.
Procedure
1 In the vSphere Web Client, navigate to a cluster where you have enabled Virtual SAN.
2 Click the Configure tab.
3 Select Licensing, and click Assign License.
4 Select a licensing option.
- Select an existing license and click OK.
- Create a new Virtual SAN license.
a Click the Create New License icon.
b In the New Licenses dialog box, type or copy and paste a Virtual SAN license key and click Next.
c On the Edit license names page, rename the new license as appropriate and click Next.
d Click Finish.
e In the Assign License dialog, select the newly created license and click OK.

View Virtual SAN Datastore

After you enable Virtual SAN, a single datastore is created. You can review the capacity of the Virtual SAN datastore.
Prerequisites
Activate Virtual SAN and configure disk groups.
Procedure
1 Navigate to Storage in the vSphere Web Client.
2 Select the Virtual SAN datastore.
3 Click the Configure tab.
4 Review the Virtual SAN datastore capacity.
The size of the Virtual SAN datastore depends on the number of capacity devices per ESXi host and the number of ESXi hosts in the cluster. For example, if a host has seven 2-TB capacity devices, and the cluster includes eight hosts, the approximate storage capacity is 7 x 2 TB x 8 = 112 TB. When using the all-flash configuration, flash devices are used for capacity. For hybrid configuration, magnetic disks are used for capacity.
Some capacity is allocated for metadata.
- On-disk format version 1.0 adds approximately 1 GB per capacity device.
- On-disk format version 2.0 adds capacity overhead, typically no more than 1-2 percent capacity per device.
- On-disk format version 3.0 and later adds capacity overhead, typically no more than 1-2 percent capacity per device. Deduplication and compression with software checksum enabled require additional overhead of approximately 6.2 percent capacity per device.
What to do next
Use the storage capabilities of the Virtual SAN datastore to create a storage policy for virtual machines. For information, see the vSphere Storage documentation.

Using Virtual SAN and vSphere HA

You can enable vSphere HA and Virtual SAN on the same cluster. As with traditional datastores, vSphere HA provides the same level of protection for virtual machines on Virtual SAN datastores. This level of protection imposes specific restrictions when vSphere HA and Virtual SAN interact.
ESXi Host Requirements
You can use Virtual SAN with a vSphere HA cluster only if the following conditions are met:
- The cluster's ESXi hosts all must be version 5.5 Update 1 or later.
- The cluster must have a minimum of three ESXi hosts. For best results, configure the Virtual SAN cluster with four or more hosts.
Networking Differences
Virtual SAN uses its own logical network. When Virtual SAN and vSphere HA are enabled for the same cluster, the HA interagent traffic flows over this storage network rather than the management network. vSphere HA uses the management network only when Virtual SAN is disabled. vCenter Server chooses the appropriate network when vSphere HA is configured on a host.
N You must disable vSphere HA before you enable Virtual SAN on the cluster. Then you can reenable vSphere HA.
When a virtual machine is only partially accessible in all network partitions, you cannot power on the virtual machine or fully access it in any partition. For example, if you partition a cluster into P1 and P2, the VM namespace object is accessible to the partition named P1 and not to P2. The VMDK is accessible to the partition named P2 and not to P1. In such cases, the virtual machine cannot be powered on and it is not fully accessible in any partition.
The following table shows the differences in vSphere HA networking whether or not Virtual SAN is used.
Table 6-2. vSphere HA Networking Differences

Network used by vSphere HA
- Virtual SAN enabled: Virtual SAN storage network
- Virtual SAN disabled: Management network

Heartbeat datastores
- Virtual SAN enabled: Any datastore mounted to more than one host, but not Virtual SAN datastores
- Virtual SAN disabled: Any datastore mounted to more than one host

Host declared isolated
- Virtual SAN enabled: Isolation addresses not pingable and Virtual SAN storage network inaccessible
- Virtual SAN disabled: Isolation addresses not pingable and management network inaccessible
If you change the Virtual SAN network configuration, the vSphere HA agents do not automatically acquire the new network settings. To make changes to the Virtual SAN network, you must reenable host monitoring for the vSphere HA cluster by using the vSphere Web Client:
1 Disable Host Monitoring for the vSphere HA cluster.
2 Make the Virtual SAN network changes.
3 Right-click all hosts in the cluster and select Reconfigure for vSphere HA.
4 Reenable Host Monitoring for the vSphere HA cluster.
Capacity Reservation Settings
When you reserve capacity for your vSphere HA cluster with an admission control policy, this setting must be coordinated with the corresponding Primary level of failures to tolerate policy setting in the Virtual SAN rule set. It must not be lower than the capacity reserved by the vSphere HA admission control setting. For example, if the Virtual SAN rule set allows for only two failures, the vSphere HA admission control policy must reserve capacity that is equivalent to only one or two host failures. If you are using the Percentage of Cluster Resources Reserved policy for a cluster that has eight hosts, you must not reserve more than 25 percent of the cluster resources. In the same cluster, with the Primary level of failures to tolerate policy, the setting must not be higher than two hosts. If vSphere HA reserves less capacity, failover activity might be unpredictable. Reserving too much capacity overly constrains the powering on of virtual machines and intercluster vSphere vMotion migrations. For information about the Percentage of Cluster Resources Reserved policy, see the vSphere Availability documentation.
Virtual SAN and vSphere HA Behavior in a Multiple Host Failure
After a Virtual SAN cluster fails with a loss of failover quorum for a virtual machine object, vSphere HA might not be able to restart the virtual machine even when the cluster quorum has been restored. vSphere HA guarantees the restart only when it has a cluster quorum and can access the most recent copy of the virtual machine object. The most recent copy is the last copy to be written.
Consider an example where a Virtual SAN virtual machine is provisioned to tolerate one host failure. The virtual machine runs on a Virtual SAN cluster that includes three hosts, H1, H2, and H3. All three hosts fail in a sequence, with H3 being the last host to fail.
After H1 and H2 recover, the cluster has a quorum (one host failure tolerated). Despite this quorum, vSphere HA is unable to restart the virtual machine because the last host that failed (H3) contains the most recent copy of the virtual machine object and is still inaccessible.
In this example, either all three hosts must recover at the same time, or the two-host quorum must include H3. If neither condition is met, HA attempts to restart the virtual machine when host H3 is online again.

Deploying Virtual SAN with vCenter Server Appliance

You can create a Virtual SAN cluster as you deploy a vCenter Server Appliance, and host the appliance on that cluster.
The vCenter Server Appliance is a preconfigured Linux virtual machine, which is used for running VMware vCenter Server on Linux systems. This feature enables you to configure a Virtual SAN cluster on new ESXi hosts without using vCenter Server.
When you use the vCenter Server Appliance Installer to deploy a vCenter Server Appliance, you can create a single-host Virtual SAN cluster, and host the vCenter Server Appliance on the cluster. During Stage 1 of the deployment, when you select a datastore, click Install on a new Virtual SAN cluster containing the target host. Follow the steps in the Installer wizard to complete the deployment.
The vCenter Server Appliance Installer creates a one-host Virtual SAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the Virtual SAN cluster.
After you complete the deployment, you can manage the single-host Virtual SAN cluster with the vCenter Server Appliance. You must complete the configuration of the Virtual SAN cluster.
You can deploy a Platform Services Controller and vCenter Server on the same Virtual SAN cluster or on separate clusters.
- You can deploy a Platform Services Controller and vCenter Server to the same Virtual SAN cluster. Deploy the PSC and vCenter Server to the same single-host Virtual SAN datastore. After you complete the deployment, the Platform Services Controller and vCenter Server both run on the same cluster.
- You can deploy a Platform Services Controller and vCenter Server to different Virtual SAN clusters. Deploy the Platform Services Controller and vCenter Server to separate single-host Virtual SAN clusters. After you complete the deployment, you must complete the configuration of each Virtual SAN cluster separately.

Using Virtual SAN Configuration Assist and Updates

You can use Configuration Assist to check the configuration of your Virtual SAN cluster, and resolve any issues.
Virtual SAN Configuration Assist enables you to verify the configuration of cluster components, resolve issues, and troubleshoot problems. The configuration checks cover hardware compatibility, network, and Virtual SAN configuration options.
The Configuration Assist checks are divided into categories. Each category contains individual configuration checks.
Table 6-3. Configuration Assist Categories

Configuration Category    Description
Hardware compatibility    Checks the hardware components for the Virtual SAN cluster, to ensure
                          that they are using supported hardware, software, and drivers.
vSAN configuration        Checks Virtual SAN configuration options.
Generic cluster           Checks basic cluster configuration options.
Network configuration     Checks Virtual SAN network configuration.
Burn-in test              Checks burn-in test operations.
If storage controller firmware or drivers do not meet the requirements listed in the VMware Compatibility Guide, you can use the Updates page to update the controllers.

Check Virtual SAN Configuration

You can view the configuration status of your Virtual SAN cluster, and resolve issues that affect the operation of your cluster.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Configuration Assist to review the Virtual SAN configuration categories.
   If the Test Result column displays a warning icon, expand the category to review the results of individual configuration checks.
4 Select an individual configuration check and review the detailed information at the bottom of the page.
   You can click the Ask VMware button to open a knowledge base article that describes the check and provides information about how to resolve the issue.
   Some configuration checks provide additional buttons that help you complete the configuration.
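As a quick supplement to these checks, you can confirm basic cluster state from any host's shell. A minimal sketch:

# Shows this host's view of the vSAN cluster: cluster UUID, local
# node state, and the host's role (master, backup, or agent).
esxcli vsan cluster get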

Configure Distributed Switch for Virtual SAN

You can use the Configure New Distributed Switch for Virtual SAN wizard to configure a vSphere Distributed Switch to support Virtual SAN traffic.
If your cluster does not have a vSphere Distributed Switch configured to support Virtual SAN traffic, the Configuration Assist page issues a warning for Network configuration > Use vDS for vSAN.
Procedure
1 Navigate to your Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select Configuration Assist and click to expand the Network configuration category.
4 Click Use vDS for vSAN. In the lower half of the page, click Create vDS.
5 In Name and type, enter a name for the new distributed switch, and choose whether to create a new switch or migrate an existing standard switch.
6 Select the unused adapters you want to migrate to the new distributed switch, and click Next.
7 (Optional) In Migrate infrastructure VMs, select the VM to treat as an infrastructure VM during the migration of an existing standard switch, and click Next.
   This step is not necessary if you are creating a new distributed switch.
8 In Ready to complete, review the configuration, and click Finish.

Create VMkernel Network Adapter for Virtual SAN

You can use the New VMkernel Network Adapters for vSAN wizard to configure vmknics to support Virtual SAN traffic.
If ESXi hosts in your cluster do not have vmknics configured to support Virtual SAN traffic, the Configuration Assist page issues a warning for Network configuration > All hosts have a vSAN vmknic configured.
Procedure
1 Navigate to your Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select Configuration Assist and click to expand the Network configuration category.
4 Click All hosts have a vSAN vmknic configured. In the lower half of the page, click Create VMkernel Network Adapter.
5 In Select hosts, select the check box for each host that does not have a vmknic configured for Virtual SAN, and click Next.
   Hosts without a Virtual SAN vmknic are listed in the Configuration Assist page.
6 In Location and services, select a distributed switch and select the vSAN traffic check box. Click Next.
7 In vSAN adapter settings, select a port group, IP settings and configuration, and click Next.
8 In Ready to complete, review the configuration, and click Finish.
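If you prefer to perform the equivalent configuration from a host's shell, the commands look roughly like the following sketch. For brevity it uses a standard switch port group rather than the distributed switch this wizard assumes; the port group name vsanpg and the addresses are placeholders, not values from this procedure:

# Create a new VMkernel adapter on the port group "vsanpg".
esxcli network ip interface add -i vmk3 -p vsanpg
# Assign a static IPv4 address to the new adapter.
esxcli network ip interface ipv4 set -i vmk3 -t static -I 172.10.0.11 -N 255.255.0.0
# Tag the adapter for vSAN traffic.
esxcli vsan network ip add -i vmk3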

Install Controller Management Tools for Driver and Firmware Updates

Storage controller vendors provide a software management tool that Virtual SAN can use to update controller drivers and firmware. If the management tool is not present on ESXi hosts, you can download the tool.
The Updates page only supports specific storage controller models from selected vendors.
Prerequisites
- Verify hardware compatibility on the Configuration Assist page.
- DRS must be enabled if you must keep VMs running during the software updates.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Updates to review the components that are missing or ready to install.
4 Select the Management (Mgmt) tool for your controller, and click the Download icon.
The Management tool is downloaded from the Internet to your vCenter Server.
5 Click the Update All icon to install the management tool on the ESXi hosts in your cluster.
Confirm whether you want to update all hosts at once, or if you want to use a rolling update.
6 Click the Refresh icon.
The Updates page displays controller components that require an update.
What to do next
When the storage controller Management tool is available, the Updates page lists any missing drivers or firmware. You can update those missing components.

Update Storage Controller Drivers and Firmware

You can use Virtual SAN to update old or incorrect drivers and firmware on storage controllers.
Configuration Assist verifies that your storage controllers use the latest driver and firmware version according to the VMware Compatibility Guide. If controller drivers or firmware do not meet the requirements, use the Updates page to perform driver and firmware updates.
Prerequisites
The controller Management tools for your storage devices must be present on the ESXi host.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Updates to review the components that are missing or ready to install.
The Updates page lists any missing firmware and driver components.
NOTE If the controller Management (Mgmt) tool is not available, you are prompted to download and install the Management tool. When the management tool is available, any missing drivers or firmware are listed.
4 Select the component you want to update, and click the Update icon to update the component on the ESXi hosts in your cluster. Or you can click the Update All icon to update all missing components.
   Confirm whether you want to update all hosts at once, or if you want to use a rolling update.
NOTE For some management tools and drivers, the update process bypasses maintenance mode and performs a reboot based on the installation result. In these cases, the MM Required and Reboot Required fields are empty.
5 Click the Refresh icon.
The updated components are removed from the display.
Chapter 7 Extending a Datastore Across Two Sites with Stretched Clusters
You can create a stretched cluster that spans two geographic locations (or sites). Stretched clusters enable you to extend the Virtual SAN datastore across two sites to use it as stretched storage. The stretched cluster continues to function if a failure or scheduled maintenance occurs at one site.
This chapter includes the following topics:
- “Introduction to Stretched Clusters,” on page 61
- “Stretched Cluster Design Considerations,” on page 63
- “Best Practices for Working with Stretched Clusters,” on page 64
- “Network Design for Stretched Clusters,” on page 64
- “Configure Virtual SAN Stretched Cluster,” on page 65
- “Change the Preferred Fault Domain,” on page 66
- “Change the Witness Host,” on page 66
- “Deploying a Virtual SAN Witness Appliance,” on page 66
- “Configure Network Interface for Witness Traffic,” on page 67
- “Convert a Stretched Cluster to a Standard Virtual SAN Cluster,” on page 69

Introduction to Stretched Clusters

Stretched clusters extend the Virtual SAN cluster from a single site to two sites for a higher level of availability and intersite load balancing. Stretched clusters are typically deployed in environments where the distance between data centers is limited, such as metropolitan or campus environments.
You can use stretched clusters to manage planned maintenance and avoid disaster scenarios, because maintenance or loss of one site does not affect the overall operation of the cluster. In a stretched cluster configuration, both sites are active sites. If either site fails, Virtual SAN uses the storage on the other site. vSphere HA restarts any VM that must be restarted on the remaining active site.
You must designate one site as the preferred site. The other site becomes a secondary or nonpreferred site. The preferred site matters only when there is a loss of network connection between the two active sites: the site designated as preferred is the one that remains operational.
A Virtual SAN stretched cluster can tolerate one link failure at a time without data becoming unavailable. A link failure is a loss of network connection between the two sites or between one site and the witness host. During a site failure or loss of network connection, Virtual SAN automatically switches to fully functional sites.
For more information about working with stretched clusters, see the Virtual SAN Stretched Cluster Guide.
Witness Host
Each stretched cluster consists of two sites and one witness host. The witness host resides at a third site and contains the witness components of virtual machine objects. It contains only metadata, and does not participate in storage operations.
The witness host serves as a tiebreaker when a decision must be made regarding availability of datastore components when the network connection between the two sites is lost. In this case, the witness host typically forms a Virtual SAN cluster with the preferred site. But if the preferred site becomes isolated from the secondary site and the witness, the witness host forms a cluster using the secondary site. When the preferred site is online again, data is resynchronized to ensure that both sites have the latest copies of all data.
If the witness host fails, all corresponding objects become noncompliant but are fully accessible.
The witness host has the following characteristics:
- The witness host can use low bandwidth/high latency links.
- The witness host cannot run VMs.
- A single witness host can support only one Virtual SAN stretched cluster.
- The witness host must have one VMkernel adapter with Virtual SAN traffic enabled, with connections to all hosts in the cluster. The witness host uses one VMkernel adapter for management and one VMkernel adapter for Virtual SAN data traffic. The witness host can have only one VMkernel adapter dedicated to Virtual SAN.
- The witness host must be a standalone host dedicated to the stretched cluster. It cannot be added to any other cluster or moved in inventory through vCenter Server.
The witness host can be a physical host or an ESXi host running inside a VM. The VM witness host does not provide other types of functionality, such as storing or running VMs. Multiple witness hosts can run as VMs on a single physical server. For patching and basic networking and monitoring configuration, the VM witness host works in the same way as a typical ESXi host. You can manage it with vCenter Server, patch it and update it by using esxcli or vSphere Update Manager, and monitor it with standard tools that interact with ESXi hosts.
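For example, patching a witness host from an offline depot works the same way as on any ESXi host. A minimal sketch; the bundle path is a placeholder:

# Enter maintenance mode, apply the patch bundle, then exit.
# Reboot the host afterward if the update requires it.
esxcli system maintenanceMode set --enable true
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi-patch-bundle.zip
esxcli system maintenanceMode set --enable false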
You can use a witness virtual appliance as the witness host in a stretched cluster. The witness virtual appliance is an ESXi host in a VM, packaged as an OVF or OVA. The appliance is available in different options, based on the size of the deployment.
Stretched Cluster Versus Fault Domains
Stretched clusters provide redundancy and failure protection across data centers in two geographical locations. Fault domains provide protection from rack-level failures within the same site. Each site in a stretched cluster resides in a separate fault domain.
A stretched cluster requires three fault domains: the preferred site, the secondary site, and a witness host.
In Virtual SAN 6.6 and later releases, you can provide an additional level of local fault protection for virtual machine objects in stretched clusters. When you configure a stretched cluster with four or more hosts in each site, the following policy rules are available for objects in the cluster:
- Primary level of failures to tolerate (PFTT). This rule defines the number of host and device failures that a virtual machine object can tolerate across the two sites. The default value is 1, and the maximum value is 3.
- Secondary level of failures to tolerate. This rule defines the number of host and object failures that a virtual machine object can tolerate within a single site. The default value is 0, and the maximum value is 3.
- Affinity. This rule is available only if the Primary level of failures to tolerate is set to 0. You can set the Affinity rule to None, Preferred, or Secondary. This rule enables you to restrict virtual machine objects to a selected site in the stretched cluster. The default value is None.
NOTE When you configure the Secondary level of failures to tolerate for the stretched cluster, the Fault tolerance method rule applies to the Secondary level of failures to tolerate. The failure tolerance method used for the Primary level of failures to tolerate (PFTT) defaults to RAID 1.
In a stretched cluster with local fault protection, even when one site is unavailable, the cluster can perform repairs on missing or broken components in the available site.

Stretched Cluster Design Considerations

Consider these guidelines when working with a Virtual SAN stretched cluster.
- Configure DRS settings for the stretched cluster.
  - DRS must be enabled on the cluster. If you place DRS in partially automated mode, you can control which VMs to migrate to each site.
  - Create two host groups, one for the preferred site and one for the secondary site.
  - Create two VM groups, one to hold the VMs on the preferred site and one to hold the VMs on the secondary site.
  - Create two VM-Host affinity rules that map VMs to host groups, and specify which VMs and hosts reside in the preferred site and which VMs and hosts reside in the secondary site.
  - Configure VM-Host affinity rules to perform the initial placement of VMs in the cluster.
- Configure HA settings for the stretched cluster.
  - HA must be enabled on the cluster.
  - HA rule settings should respect VM-Host affinity rules during failover.
  - Disable HA datastore heartbeats.
- Stretched clusters require on-disk format 2.0 or later. If necessary, upgrade the on-disk format before configuring a stretched cluster. See “Upgrade Virtual SAN Disk Format Using vSphere Web Client,” on page 95.
- Configure the Primary level of failures to tolerate to 1 for stretched clusters.
- Virtual SAN stretched clusters do not support symmetric multiprocessing fault tolerance (SMP-FT).
- When a host is disconnected or not responding, you cannot add or remove the witness host. This limitation ensures that Virtual SAN collects enough information from all hosts before initiating reconfiguration operations.
- Using esxcli to add or remove hosts is not supported for stretched clusters.

Best Practices for Working with Stretched Clusters

When working with Virtual SAN stretched clusters, follow these recommendations for proper performance.
- If one of the sites (fault domains) in a stretched cluster is inaccessible, new VMs can still be provisioned in the sub-cluster containing the other two sites. These new VMs are implicitly force provisioned and will be non-compliant until the partitioned site rejoins the cluster. This implicit force provisioning is performed only when two of the three sites are available. A site here refers to either a data site or the witness host.
- If an entire site goes offline due to a power outage or loss of network connection, restart the site immediately, without much delay. Instead of restarting Virtual SAN hosts one by one, bring all hosts online approximately at the same time, ideally within a span of 10 minutes. By following this process, you avoid resynchronizing a large amount of data across the sites.
- If a host is permanently unavailable, remove the host from the cluster before you perform any reconfiguration tasks.
- If you want to clone a VM witness host to support multiple stretched clusters, do not configure the VM as a witness host before cloning it. First deploy the VM from OVF, then clone the VM, and configure each clone as a witness host for a different cluster. Or you can deploy as many VMs as you need from the OVF, and configure each one as a witness host for a different cluster.

Network Design for Stretched Clusters

All three sites in a stretched cluster communicate across the management network and across the Virtual SAN network. The VMs in both data sites communicate across a common virtual machine network.
A Virtual SAN stretched cluster must meet certain basic networking requirements.
- The management network requires connectivity across all three sites, using a Layer 2 stretched network or a Layer 3 network.
- The Virtual SAN network requires connectivity across all three sites. VMware recommends using a Layer 2 stretched network between the two data sites and a Layer 3 network between the data sites and the witness host.
- The VM network requires connectivity between the data sites, but not the witness host. VMware recommends using a Layer 2 stretched network between the data sites. In the event of a failure, the VMs do not require a new IP address to work on the remote site.
- The vMotion network requires connectivity between the data sites, but not the witness host. VMware supports using a Layer 2 stretched or a Layer 3 network between data sites.
Using Static Routes on ESXi Hosts
If you use a single default gateway on ESXi hosts, note that each ESXi host contains a default TCP/IP stack that has a single default gateway. The default route is typically associated with the management network TCP/IP stack.
The management network and the Virtual SAN network might be isolated from one another. For example, the management network might use vmk0 on physical NIC 0, while the Virtual SAN network uses vmk2 on physical NIC 1 (separate network adapters for two distinct TCP/IP stacks). This configuration implies that the Virtual SAN network has no default gateway.
Consider a Virtual SAN network that is stretched over two data sites on a Layer 2 broadcast domain (for example, 172.10.0.0) and the witness host is on another broadcast domain (for example, 172.30.0.0). If the VMkernel adapters on a data site try to connect to the Virtual SAN network on the witness host, the connection will fail because the default gateway on the ESXi host is associated with the management network and there is no route from the management network to the Virtual SAN network.
You can use static routes to resolve this issue. Define a new routing entry that indicates which path to follow to reach a particular network. For a Virtual SAN network on a stretched cluster, you can add static routes to ensure proper communication across all hosts.
For example, you can add a static route to the hosts on each data site, so requests to reach the 172.30.0.0 witness network are routed through the 172.10.0.0 interface. Also add a static route to the witness host so that requests to reach the 172.10.0.0 network for the data sites are routed through the 172.30.0.0 interface.
N If you use static routes, you must manually add the static routes for new ESXi hosts added to either site before those hosts can communicate across the cluster. If you replace the witness host, you must update the static route conguration.
Use the esxcli network ip route command to add static routes.
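For the example networks above, the routes might look like the following sketch. The gateway addresses 172.10.0.1 and 172.30.0.1 are assumptions for illustration, not values from this guide:

# On each data-site host, route the witness network through the
# local vSAN gateway.
esxcli network ip route ipv4 add -n 172.30.0.0/16 -g 172.10.0.1
# On the witness host, add the reverse route to the data sites.
esxcli network ip route ipv4 add -n 172.10.0.0/16 -g 172.30.0.1
# Verify the routing table.
esxcli network ip route ipv4 list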

Configure Virtual SAN Stretched Cluster

Configure a Virtual SAN cluster that stretches across two geographic locations or sites.
Prerequisites
- Verify that you have a minimum of three hosts: one for the preferred site, one for the secondary site, and one host to act as a witness.
- Verify that you have configured one host to serve as the witness host for the stretched cluster. Verify that the witness host is not part of the Virtual SAN cluster, and that it has only one VMkernel adapter configured for Virtual SAN data traffic.
- Verify that the witness host is empty and does not contain any components. To configure an existing Virtual SAN host as a witness host, first evacuate all data from the host and delete the disk group.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Fault Domains and Stretched Cluster.
4 Click the Stretched Cluster Configure button to open the stretched cluster configuration wizard.
5 Select the fault domain that you want to assign to the secondary site and click >>.
   The hosts that are listed under the Preferred fault domain are in the preferred site.
6 Click Next.
7 Select a witness host that is not a member of the Virtual SAN stretched cluster and click Next.
8 Claim storage devices on the witness host and click Next.
   Select one flash device for the cache tier, and one or more devices for the capacity tier.
9 On the Ready to complete page, review the configuration and click Finish.

Change the Preferred Fault Domain

You can configure the secondary site as the preferred site. The current preferred site becomes the secondary site.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Fault Domains and Stretched Cluster.
4 Select the secondary fault domain and click the Mark Fault Domain as preferred for Stretched Cluster icon.
5 Click Yes to confirm.
The selected fault domain is marked as the preferred fault domain.
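You can verify the change from any host in the cluster. A minimal check, assuming this subcommand is present in your esxcli build:

# Displays the fault domain currently marked as preferred.
esxcli vsan cluster preferredfaultdomain get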

Change the Witness Host

You can change the witness host for a Virtual SAN stretched cluster.
Change the ESXi host used as a witness host for your Virtual SAN stretched cluster.
Prerequisites
Verify that the witness host is not in use.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Fault Domains and Stretched Cluster.
4 Click the Change witness host button.
5 Select a new host to use as a witness host, and click Next.
6 Claim disks on the new witness host, and click Next.
7 On the Ready to complete page, review the configuration, and click Finish.

Deploying a Virtual SAN Witness Appliance

Specific Virtual SAN configurations, such as a stretched cluster, require a witness host. Instead of using a dedicated physical ESXi host as a witness host, you can deploy the Virtual SAN witness appliance. The appliance is a preconfigured virtual machine that runs ESXi and is distributed as an OVA file.
Unlike a general-purpose ESXi host, the witness appliance does not run virtual machines. Its only purpose is to serve as a Virtual SAN witness.
The workflow to deploy and configure the Virtual SAN witness appliance includes this process.
1 Download the appliance from the VMware Web site.
2 Deploy the appliance to a Virtual SAN host or cluster. For more information, see Deploying OVF Templates in the vSphere Virtual Machine Administration documentation.
3 Configure the Virtual SAN network on the witness appliance.
4 Configure the management network on the witness appliance.
5 Add the appliance to vCenter Server as a witness ESXi host. Make sure to configure the Virtual SAN VMkernel interface on the host.

Set Up the Virtual SAN Network on the Witness Appliance

The Virtual SAN witness appliance includes two preconfigured network adapters. You must change the configuration of the second adapter so that the appliance can connect to the Virtual SAN network.
Procedure
1 In the vSphere Web Client, navigate to the virtual appliance that contains the witness host.
2 Right-click the appliance and select Edit Settings.
3 On the Virtual Hardware tab, expand the second Network adapter.
4 From the drop-down menu, select the vSAN port group and click OK.

Configure Management Network

Configure the witness appliance so that it is reachable on the network.
By default, the appliance can automatically obtain networking parameters if your network includes a DHCP server. If not, you must configure appropriate settings.
Procedure
1 Power on your witness appliance and open its console.
Because your appliance is an ESXi host, you see the Direct Console User Interface (DCUI).
2 Press F2 and navigate to the Network Adapters page.
3 On the Network Adapters page, verify that at least one vmnic is selected for transport.
4 Configure the IPv4 parameters for the management network.
   a Navigate to the IPv4 Configuration section and change the default DHCP setting to static.
   b Enter the following settings:
      - IP address
      - Subnet mask
      - Default gateway
5 Configure DNS parameters.
   - Primary DNS server
   - Alternate DNS server
   - Hostname
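If DHCP is unavailable and you prefer the ESXi Shell to the DCUI, the equivalent commands look like this sketch. All addresses and names are placeholders:

# Set a static IPv4 address on the management VMkernel adapter.
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.10.20 -N 255.255.255.0
# Set the default gateway.
esxcli network ip route ipv4 add -n default -g 192.168.10.1
# Configure DNS and the host name.
esxcli network ip dns server add --server=192.168.10.2
esxcli system hostname set --host=vsan-witness --domain=example.com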

Configure Network Interface for Witness Traffic

vSAN data traffic requires a low-latency, high-bandwidth link. Witness traffic can use a high-latency, low-bandwidth and routable link. To separate data traffic from witness traffic, you can configure a dedicated VMkernel network adapter for vSAN witness traffic.
You can separate data traffic from witness traffic in supported stretched cluster configurations. The VMkernel adapter used for vSAN data traffic and the VMkernel adapter used for witness traffic must be connected to the same physical switch.
You can add support for a direct network cross-connection to carry vSAN data traffic in a two-host vSAN stretched cluster. You can configure a separate network connection for witness traffic. On each data host in the cluster, configure the management VMkernel network adapter to also carry witness traffic. Do not configure the witness traffic type on the witness host.
Prerequisites
- Verify that the data site to witness traffic connection has a minimum bandwidth of 100 Mbps and a latency of less than 200 ms RTT.
- Verify that vSAN traffic can be carried over a direct Ethernet cable connection with a speed of 10 Gbps.
- Verify that data traffic and witness traffic use the same IP version.
Procedure
1 Open an SSH connection to the ESXi host.
2 Use the esxcli network ip interface list command to determine which VMkernel network adapter is used for management traffic.
For example:
esxcli network ip interface list
vmk0
Name: vmk0
MAC Address: e4:11:5b:11:8c:16
Enabled: true
Portset: vSwitch0
Portgroup: Management Network
Netstack Instance: defaultTcpipStack
VDS Name: N/A
VDS UUID: N/A
VDS Port: N/A
VDS Connection: -1
Opaque Network ID: N/A
Opaque Network Type: N/A
External ID: N/A
MTU: 1500
TSO MSS: 65535
Port ID: 33554437
vmk1
Name: vmk1
MAC Address: 00:50:56:6a:3a:74
Enabled: true
Portset: vSwitch1
Portgroup: vsandata
Netstack Instance: defaultTcpipStack
VDS Name: N/A
VDS UUID: N/A
VDS Port: N/A
VDS Connection: -1
Opaque Network ID: N/A
Opaque Network Type: N/A
External ID: N/A
MTU: 9000
TSO MSS: 65535
Port ID: 50331660
N Multicast information is included for backward compatibility. vSAN 6.6 and later releases do not require multicast.
3 Use the esxcli vsan network ip add command to configure the management VMkernel network adapter to support witness traffic.
esxcli vsan network ip add -i vmkx -T=witness
4 Use the esxcli vsan network list command to verify the new network configuration.
For example:
esxcli vsan network list
Interface
VmkNic Name: vmk0
IP Protocol: IP
Interface UUID: 8cf3ec57-c9ea-148b-56e1-a0369f56dcc0
Agent Group Multicast Address: 224.2.3.4
Agent Group IPv6 Multicast Address: ff19::2:3:4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group IPv6 Multicast Address: ff19::1:2:3
Master Group Multicast Port: 12345
Host Unicast Channel Bound Port: 12321
Multicast TTL: 5
Traffic Type: witness
Interface
VmkNic Name: vmk1
IP Protocol: IP
Interface UUID: 6df3ec57-4fb6-5722-da3d-a0369f56dcc0
Agent Group Multicast Address: 224.2.3.4
Agent Group IPv6 Multicast Address: ff19::2:3:4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group IPv6 Multicast Address: ff19::1:2:3
Master Group Multicast Port: 12345
Host Unicast Channel Bound Port: 12321
Multicast TTL: 5
Traffic Type: vsan
In the vSphere Web Client, the management VMkernel network interface is not selected for vSAN traffic. Do not re-enable the interface in the vSphere Web Client.

Convert a Stretched Cluster to a Standard Virtual SAN Cluster

You can decommission a stretched cluster and convert it to a standard Virtual SAN cluster.
When you disable a stretched cluster, the witness host is removed, but the fault domain configuration remains. Because the witness host is not available, all witness components are missing for your virtual machines. To ensure full availability for your VMs, repair the cluster objects immediately.
Procedure
1 Navigate to the Virtual SAN stretched cluster in the vSphere Web Client.
2 Disable the stretched cluster.
   a Click the Configure tab.
   b Under vSAN, click Fault Domains and Stretched Cluster.
   c Click the Stretched Cluster Configure button.
      The stretched cluster configuration wizard is displayed.
   d Click Disable, and click Yes to confirm.
3 Remove the fault domain configuration.
   a Select a fault domain and click the Remove selected fault domains icon. Click Yes to confirm.
   b Select the other fault domain and click the Remove selected fault domains icon. Click Yes to confirm.
4 Repair the objects in the cluster.
   a Click the Monitor tab and select Virtual SAN.
   b Under Virtual SAN, click Health and click vSAN object health.
   c Click Repair object immediately.
Virtual SAN recreates the witness components within the cluster.
Chapter 8 Increasing Space Efficiency in a Virtual SAN Cluster
You can use space efficiency techniques to reduce the amount of space for storing data. These techniques reduce the total storage space required to meet your needs.
This chapter includes the following topics:
- “Introduction to Virtual SAN Space Efficiency,” on page 71
- “Using Deduplication and Compression,” on page 71
- “Using RAID 5 or RAID 6 Erasure Coding,” on page 76
- “RAID 5 or RAID 6 Design Considerations,” on page 76

Introduction to Virtual SAN Space Efficiency

You can use space efficiency techniques to reduce the amount of space for storing data. These techniques reduce the total storage capacity required to meet your needs.
You can enable deduplication and compression on a Virtual SAN cluster to eliminate duplicate data and reduce the amount of space needed to store data.
You can set the Failure tolerance method policy attribute on VMs to use RAID 5 or RAID 6 erasure coding. Erasure coding can protect your data while using less storage space than the default RAID 1 mirroring.
You can use deduplication and compression, and RAID 5 or RAID 6 erasure coding to increase storage space savings. RAID 5 or RAID 6 provides clearly defined space savings over RAID 1. Deduplication and compression can provide additional savings.

Using Deduplication and Compression

Virtual SAN can perform block-level deduplication and compression to save storage space. When you enable deduplication and compression on a Virtual SAN all-flash cluster, redundant data within each disk group is reduced.
Deduplication removes redundant data blocks, whereas compression removes additional redundant data within each data block. These techniques work together to reduce the amount of space required to store the data. Virtual SAN applies deduplication and then compression as it moves data from the cache tier to the capacity tier.
You can enable deduplication and compression as a cluster-wide setting, but they are applied on a disk group basis. When you enable deduplication and compression on a Virtual SAN cluster, redundant data within a particular disk group is reduced to a single copy.
You can enable deduplication and compression when you create a new Virtual SAN all-flash cluster or when you edit an existing Virtual SAN all-flash cluster. For more information about creating and editing Virtual SAN clusters, see “Enabling Virtual SAN,” on page 47.
When you enable or disable deduplication and compression, Virtual SAN performs a rolling reformat of every disk group on every host. Depending on the data stored on the Virtual SAN datastore, this process might take a long time. It is recommended that you do not perform these operations frequently. If you plan to disable deduplication and compression, you must first verify that enough physical capacity is available to place your data.
NOTE Deduplication and compression might not be effective for encrypted VMs, because VM Encryption encrypts data on the host before it is written out to storage. Consider storage tradeoffs when using VM Encryption.
How to Manage Disks in a Cluster with Deduplication and Compression
Consider the following guidelines when managing disks in a cluster with deduplication and compression enabled.
- Avoid adding disks to a disk group incrementally. For more efficient deduplication and compression, consider adding a new disk group to increase cluster storage capacity.
- When you add a new disk group manually, add all of the capacity disks at the same time.
- You cannot remove a single disk from a disk group. You must remove the entire disk group to make modifications.
- A single disk failure causes the entire disk group to fail.
Verifying Space Savings from Deduplication and Compression
The amount of storage reduction from deduplication and compression depends on many factors, including the type of data stored and the number of duplicate blocks. Larger disk groups tend to provide a higher deduplication ratio. You can check the results of deduplication and compression by viewing the Deduplication and Compression Overview in the Virtual SAN Capacity monitor.
You can view the Deduplication and Compression Overview when you monitor Virtual SAN capacity in the vSphere Web Client. It displays information about the results of deduplication and compression. The Used Before space indicates the logical space required before applying deduplication and compression, while the Used After space indicates the physical space used after applying deduplication and compression. The Used After space also displays an overview of the amount of space saved, and the Deduplication and Compression ratio.
The Deduplication and Compression ratio is based on the logical (Used Before) space required to store data before applying deduplication and compression, in relation to the physical (Used After) space required after applying deduplication and compression. Specifically, the ratio is the Used Before space divided by the Used After space. For example, if the Used Before space is 3 GB, but the physical Used After space is 1 GB, the deduplication and compression ratio is 3x.
When deduplication and compression are enabled on the Virtual SAN cluster, it might take several minutes for capacity updates to be reflected in the Capacity monitor as disk space is reclaimed and reallocated.

Deduplication and Compression Design Considerations

Consider these guidelines when you configure deduplication and compression in a Virtual SAN cluster.
- Deduplication and compression are available only on all-flash disk groups.
- On-disk format version 3.0 or later is required to support deduplication and compression.
- You must have a valid license to enable deduplication and compression on a cluster.
- You can enable deduplication and compression only if the storage-claiming method is set to manual. You can change the storage-claiming method to automatic after deduplication and compression has been enabled.
- When you enable deduplication and compression on a Virtual SAN cluster, all disk groups participate in data reduction through deduplication and compression.
- Virtual SAN can eliminate duplicate data blocks within each disk group, but not across disk groups.
- Capacity overhead for deduplication and compression is approximately five percent of total raw capacity.
- Policies with 100 percent proportional capacity reservations are always honored. Using these policies can make deduplication and compression less efficient.
- Policies with less than 100 percent proportional capacity are treated as if no proportional capacity reservation was requested. The object remains compliant with the policy, and no events are logged.

Enable Deduplication and Compression on a New Virtual SAN Cluster

You can enable deduplication and compression when you configure a new Virtual SAN all-flash cluster.
Procedure
1 Navigate to an existing cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under Virtual SAN, select General and click the Configure vSAN button.
4 Configure deduplication and compression on the cluster.
   a On the vSAN capabilities page, select the Enable check box under Deduplication and Compression.
   b (Optional) Enable reduced redundancy for your VMs. See “Reducing VM Redundancy for Virtual SAN Cluster,” on page 75.
5 On the Claim disks page, specify which disks to claim for the Virtual SAN cluster.
   a Select a flash device to be used for capacity and click the Claim for capacity tier icon.
   b Select a flash device to be used as cache and click the Claim for cache tier icon.
6 Complete your cluster configuration.

Enable Deduplication and Compression on Existing Virtual SAN Cluster

You can enable deduplication and compression by editing configuration parameters on an existing Virtual SAN cluster.
Prerequisites
Create a Virtual SAN cluster.
Procedure
1 Navigate to the Virtual SAN host cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General.
4 In the vSAN is turned ON pane, click the Edit button.
5 Configure deduplication and compression.
   a Set deduplication and compression to Enabled.
   b (Optional) Enable reduced redundancy for your VMs. See “Reducing VM Redundancy for Virtual SAN Cluster,” on page 75.
   c Click OK to save your configuration changes.
While enabling deduplication and compression, Virtual SAN changes the disk format on each disk group of the cluster. To accomplish this change, Virtual SAN evacuates data from the disk group, removes the disk group, and recreates it with a new format that supports deduplication and compression.
The enablement operation does not require virtual machine migration or DRS. The time required for this operation depends on the number of hosts in the cluster and the amount of data. You can monitor the progress on the Tasks and Events tab.

Disable Deduplication and Compression

You can disable deduplication and compression on your Virtual SAN cluster.
When deduplication and compression are disabled on the Virtual SAN cluster, the size of the used capacity in the cluster can expand (based on the deduplication ratio). Before you disable deduplication and compression, verify that the cluster has enough capacity to handle the size of the expanded data.
Procedure
1 Navigate to the Virtual SAN host cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General.
4 In the vSAN is turned ON pane, click the Edit button.
5 Disable deduplication and compression.
   a Set the disk claiming mode to Manual.
   b Set deduplication and compression to Disabled.
   c Click OK to save your configuration changes.
While disabling deduplication and compression, Virtual SAN changes the disk format on each disk group of the cluster. To accomplish this change, Virtual SAN evacuates data from the disk group, removes the disk group, and recreates it with a format that does not support deduplication and compression.
The time required for this operation depends on the number of hosts in the cluster and the amount of data. You can monitor the progress on the Tasks and Events tab.

Reducing VM Redundancy for Virtual SAN Cluster

When you enable deduplication and compression, in certain cases, you might need to reduce the level of protection for your virtual machines.
Enabling deduplication and compression requires a format change for disk groups. To accomplish this change, Virtual SAN evacuates data from the disk group, removes the disk group, and recreates it with a new format that supports deduplication and compression.
In certain environments, your Virtual SAN cluster might not have enough resources for the disk group to be fully evacuated. Examples of such deployments include a three-node cluster with no resources to evacuate the replica or witness while maintaining full protection, or a four-node cluster with RAID-5 objects already deployed. In the latter case, you have no place to move part of the RAID-5 stripe, since RAID-5 objects require a minimum of four nodes.
You can still enable deduplication and compression and use the Allow Reduced Redundancy option. This option keeps the VMs running, but the VMs might be unable to tolerate the full level of failures defined in the VM storage policy. As a result, temporarily during the format change for deduplication and compression, your virtual machines might be at risk of experiencing data loss. Virtual SAN restores full compliance and redundancy after the format conversion is completed.

Adding or Removing Disks when Deduplication and Compression Is Enabled

When you add disks to a Virtual SAN cluster with deduplication and compression enabled, specific considerations apply.
- You can add a capacity disk to a disk group with deduplication and compression enabled. However, for more efficient deduplication and compression, instead of adding capacity disks, create a new disk group to increase cluster storage capacity.
- When you remove a disk from a cache tier, the entire disk group is removed. Removing a cache tier disk when deduplication and compression is enabled triggers data evacuation.
- Deduplication and compression are implemented at the disk group level. You cannot remove a capacity disk from a cluster with deduplication and compression enabled. You must remove the entire disk group.
- If a capacity disk fails, the entire disk group becomes unavailable. To resolve this issue, identify and replace the failing component immediately. When removing the failed disk group, use the No Data Migration option.
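For example, removing a failed disk group from the host's shell without evacuating data might look like the following sketch. The cache device name is a placeholder, and the -m evacuation-mode option is assumed to be available in your esxcli build:

# Removing the cache-tier device removes the entire disk group.
# Evacuation mode "noAction" corresponds to the No Data Migration option.
esxcli vsan storage remove -s naa.55cd2e404c185332 -m noAction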

Using RAID 5 or RAID 6 Erasure Coding

You can use RAID 5 or RAID 6 erasure coding to protect against data loss and increase storage efficiency. Erasure coding can provide the same level of data protection as mirroring (RAID 1), while using less storage capacity.
RAID 5 or RAID 6 erasure coding enables Virtual SAN to tolerate the failure of up to two capacity devices in the datastore. You can configure RAID 5 on all-flash clusters with four or more fault domains. You can configure RAID 5 or RAID 6 on all-flash clusters with six or more fault domains.
RAID 5 or RAID 6 erasure coding requires less additional capacity to protect your data than RAID 1 mirroring. For example, a VM protected by a Primary level of failures to tolerate value of 1 with RAID 1 requires twice the virtual disk size, but with RAID 5 it requires 1.33 times the virtual disk size. The following table shows a general comparison between RAID 1 and RAID 5 or RAID 6.
Table 8-1. Capacity Required to Store and Protect Data at Different RAID Levels

RAID Configuration                        Primary level of        Data Size    Capacity Required
                                          Failures to Tolerate
RAID 1 (mirroring)                        1                       100 GB       200 GB
RAID 5 or RAID 6 (erasure coding)         1                       100 GB       133 GB
with four fault domains
RAID 1 (mirroring)                        2                       100 GB       300 GB
RAID 5 or RAID 6 (erasure coding)         2                       100 GB       150 GB
with six fault domains
RAID 5 or RAID 6 erasure coding is a policy attribute that you can apply to virtual machine components. To use RAID 5, set Failure tolerance method to RAID-5/6 (Erasure Coding) - Capacity and Primary level of failures to tolerate to 1. To use RAID 6, set Failure tolerance method to RAID-5/6 (Erasure Coding) - Capacity and Primary level of failures to tolerate to 2. RAID 5 or RAID 6 erasure coding does not support a Primary level of failures to tolerate value of 3.
To use RAID 1, set Failure tolerance method to RAID-1 (Mirroring) - Performance. RAID 1 mirroring requires fewer I/O operations to the storage devices, so it can provide better performance. For example, a cluster resynchronization takes less time to complete with RAID 1.
N In a Virtual SAN stretched cluster, the Failure tolerance method of RAID-5/6 (Erasure Coding) ­Capacity applies only to the Secondary level of failures to tolerate.
For more information about configuring policies, see Chapter 13, “Using Virtual SAN Policies,” on page 123.

RAID 5 or RAID 6 Design Considerations

Consider these guidelines when you configure RAID 5 or RAID 6 erasure coding in a Virtual SAN cluster.
- RAID 5 or RAID 6 erasure coding is available only on all-flash disk groups.
- On-disk format version 3.0 or later is required to support RAID 5 or RAID 6.
- You must have a valid license to enable RAID 5/6 on a cluster.
- You can achieve additional space savings by enabling deduplication and compression on the Virtual SAN cluster.
Chapter 9 Using Encryption on a Virtual SAN Cluster
You can use data at rest encryption to protect data in your Virtual SAN cluster.
Virtual SAN can perform data at rest encryption. Data is encrypted after all other processing, such as deduplication, is performed. Data at rest encryption protects data on storage devices, in case a device is removed from the cluster.
Using encryption on your Virtual SAN cluster requires some preparation. After your environment is set up, you can enable encryption on your Virtual SAN cluster.
Virtual SAN encryption requires an external Key Management Server (KMS), the vCenter Server system, and your ESXi hosts. vCenter Server requests encryption keys from an external KMS. The KMS generates and stores the keys, and vCenter Server obtains the key IDs from the KMS and distributes them to the ESXi hosts.
vCenter Server does not store the KMS keys, but keeps a list of key IDs.
This chapter includes the following topics:
- “How Virtual SAN Encryption Works,” on page 77
- “Design Considerations for Virtual SAN Encryption,” on page 78
- “Set Up the KMS Cluster,” on page 78
- “Enable Encryption on a New Virtual SAN Cluster,” on page 83
- “Generate New Encryption Keys,” on page 83
- “Enable Virtual SAN Encryption on Existing Virtual SAN Cluster,” on page 84
- “Virtual SAN Encryption and Core Dumps,” on page 85

How Virtual SAN Encryption Works

When you enable encryption, Virtual SAN encrypts everything in the Virtual SAN datastore. All files are encrypted, so all virtual machines and their corresponding data are protected. Only administrators with encryption privileges can perform encryption and decryption tasks.
Virtual SAN uses encryption keys as follows:
- vCenter Server requests an AES-256 Key Encryption Key (KEK) from the KMS. vCenter Server stores only the ID of the KEK, but not the key itself.
- The ESXi host encrypts disk data using the industry standard AES-256 XTS mode. Each disk has a different randomly generated Data Encryption Key (DEK).
- Each ESXi host uses the KEK to encrypt its DEKs, and stores the encrypted DEKs on disk. The host does not store the KEK on disk. If a host reboots, it requests the KEK with the corresponding ID from the KMS. The host can then decrypt its DEKs as needed.
- A host key is used to encrypt core dumps, not data. All hosts in the same cluster use the same host key. When collecting support bundles, a random key is generated to re-encrypt the core dumps. Use a password when you encrypt the random key.
When a host reboots, it does not mount its disk groups until it receives the KEK. This process can take several minutes or longer to complete. You can monitor the status of the disk groups in the Virtual SAN health service, under Physical disks > Software state health.

Design Considerations for Virtual SAN Encryption

Consider these guidelines when working with Virtual SAN encryption.
- Do not deploy your KMS server on the same Virtual SAN datastore that you plan to encrypt.
- Encryption is CPU intensive. AES-NI significantly improves encryption performance. Enable AES-NI in your BIOS.
- The witness host in a stretched cluster does not participate in Virtual SAN encryption. Only metadata is stored on the witness host.
- Establish a policy regarding core dumps. Core dumps are encrypted because they can contain sensitive information such as keys. If you decrypt a core dump, carefully handle its sensitive information. ESXi core dumps might contain keys for the ESXi host and for the data on it.
- Always use a password when you collect a vm-support bundle. You can specify the password when you generate the support bundle from the vSphere Web Client or using the vm-support command. The password recrypts core dumps that use internal keys to use keys that are based on the password. You can later use the password to decrypt any encrypted core dumps that might be included in the support bundle. Unencrypted core dumps or logs are not affected.
- The password that you specify during vm-support bundle creation is not persisted in vSphere components. You are responsible for keeping track of passwords for support bundles.

Set Up the KMS Cluster

A Key Management Server (KMS) cluster provides the keys that you can use to encrypt the Virtual SAN datastore.
Before you can encrypt the Virtual SAN datastore, you must set up a KMS cluster to support encryption. That task includes adding the KMS to vCenter Server and establishing trust with the KMS. vCenter Server provisions encryption keys from the KMS cluster.
The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard.

Add a KMS to vCenter Server

You add a Key Management Server (KMS) to your vCenter Server system from the vSphere Web Client.
vCenter Server creates a KMS cluster when you add the first KMS instance. If you configure the KMS cluster on two or more vCenter Servers, make sure you use the same KMS cluster name.
NOTE Do not deploy your KMS servers on the Virtual SAN cluster you plan to encrypt. If a failure occurs, hosts in the Virtual SAN cluster must communicate with the KMS.
- When you add the KMS, you are prompted to set this cluster as a default. You can later change the default cluster explicitly.
- After vCenter Server creates the first cluster, you can add KMS instances from the same vendor to the cluster.
- You can set up the cluster with only one KMS instance.
- If your environment supports KMS solutions from different vendors, you can add multiple KMS clusters.
Prerequisites
- Verify that the key server is in the vSphere Compatibility Matrixes and is KMIP 1.1 compliant.
- Verify that you have the required privileges: Cryptographer.ManageKeyServers.
- Connecting to a KMS by using only an IPv6 address is not supported.
- Connecting to a KMS through a proxy server that requires a user name or password is not supported.
Procedure
1 Log in to the vCenter Server system with the vSphere Web Client.
2 Browse the inventory list and select the vCenter Server instance.
3 Click the Configure tab and click Key Management Servers.
4 Click Add KMS, specify the KMS information in the wizard, and click OK.
KMS cluster: Select Create new cluster for a new cluster. If a cluster exists, you can select that cluster.
Cluster name: Name for the KMS cluster. You can use this name to connect to the KMS if your vCenter Server instance becomes unavailable.
Server alias: Alias for the KMS. You can use this alias to connect to the KMS if your vCenter Server instance becomes unavailable.
Server address: IP address or FQDN of the KMS.
Server port: Port on which vCenter Server connects to the KMS.
Proxy address: Optional proxy address for connecting to the KMS.
Proxy port: Optional proxy port for connecting to the KMS.
User name: Some KMS vendors allow users to isolate encryption keys that are used by different users or groups by specifying a user name and password. Specify a user name only if your KMS supports this functionality, and if you intend to use it.
Password: Some KMS vendors allow users to isolate encryption keys that are used by different users or groups by specifying a user name and password. Specify a password only if your KMS supports this functionality, and if you intend to use it.
Establish a Trusted Connection by Exchanging Certificates
After you add the KMS to the vCenter Server system, you can establish a trusted connection. The exact process depends on the certificates that the KMS accepts, and on company policy.
Prerequisites
Add the KMS cluster.
Procedure
1 Log in to the vSphere Web Client, and select a vCenter Server system.
2 Click the Configure tab and select Key Management Servers.
3 Select the KMS instance with which you want to establish a trusted connection.
4 Click Establish trust with KMS.
5 Select the option appropriate for your server and complete the steps.

Root CA certificate: See “Use the Root CA Certificate Option to Establish a Trusted Connection,” on page 80.
Certificate: See “Use the Certificate Option to Establish a Trusted Connection,” on page 80.
New Certificate Signing Request: See “Use the New Certificate Signing Request Option to Establish a Trusted Connection,” on page 81.
Upload certificate and private key: See “Use the Upload Certificate and Private Key Option to Establish a Trusted Connection,” on page 81.

Use the Root CA Certificate Option to Establish a Trusted Connection
Some KMS vendors such as SafeNet require that you upload your root CA certificate to the KMS. All certificates that are signed by your root CA are then trusted by this KMS.
The root CA certificate that vSphere Virtual Machine Encryption uses is a self-signed certificate that is stored in a separate store in the VMware Endpoint Certificate Store (VECS) on the vCenter Server system.
NOTE Generate a root CA certificate only if you want to replace existing certificates. If you do, other certificates that are signed by that root CA become invalid. You can generate a new root CA certificate as part of this workflow.
Procedure
1 Log in to the vSphere Web Client, and select a vCenter Server system.
2 Click the Configure tab and select Key Management Servers.
3 Select the KMS instance with which you want to establish a trusted connection.
4 Select Root CA certificate and click OK.
The Download Root CA Certificate dialog box is populated with the root certificate that vCenter Server uses for encryption. This certificate is stored in VECS.
5 Copy the certificate to the clipboard or download the certificate as a file.
6 Follow the instructions from your KMS vendor to upload the certificate to their system.
NOTE Some KMS vendors, for example SafeNet, require a restart of the KMS to pick up the root certificate that you upload.
What to do next
Finalize the certificate exchange. See “Complete the Trust Setup,” on page 82.
Use the Certificate Option to Establish a Trusted Connection
Some KMS vendors such as Vormetric require that you upload the vCenter Server certificate to the KMS. After the upload, the KMS accepts traffic that comes from a system with that certificate.
vCenter Server generates a certificate to protect connections with the KMS. The certificate is stored in a separate key store in the VMware Endpoint Certificate Store (VECS) on the vCenter Server system.
Procedure
1 Log in to the vSphere Web Client, and select a vCenter Server system.
2 Click the Configure tab and select Key Management Servers.
3 Select the KMS instance with which you want to establish a trusted connection.
4 Select Certificate and click OK.
The Download Certificate dialog box is populated with the root certificate that vCenter Server uses for encryption. This certificate is stored in VECS.
NOTE Do not generate a new certificate unless you want to replace existing certificates.
5 Copy the certificate to the clipboard or download it as a file.
6 Follow the instructions from your KMS vendor to upload the certificate to the KMS.
What to do next
Finalize the trust relationship. See “Complete the Trust Setup,” on page 82.
Use the New Certificate Signing Request Option to Establish a Trusted Connection
Some KMS vendors, for example Thales, require that vCenter Server generate a Certificate Signing Request (CSR) and send that CSR to the KMS. The KMS signs the CSR and returns the signed certificate. You can upload the signed certificate to vCenter Server.
Using the New Certificate Signing Request option is a two-step process. First you generate the CSR and send it to the KMS vendor. Then you upload the signed certificate that you receive from the KMS vendor to vCenter Server.
Procedure
1 Log in to the vSphere Web Client, and select a vCenter Server system.
2 Click the Configure tab and select Key Management Servers.
3 Select the KMS instance with which you want to establish a trusted connection.
4 Select New Certificate Signing Request and click OK.
5 In the dialog box, copy the full certificate in the text box to the clipboard or download it as a file, and click OK.
Use the Generate new CSR button in the dialog box only if you explicitly want to generate a CSR. Using that option makes any signed certificates that are based on the old CSR invalid.
6 Follow the instructions from your KMS vendor to submit the CSR.
7 When you receive the signed certificate from the KMS vendor, click Key Management Servers again, and select New Certificate Signing Request again.
8 Paste the signed certificate into the bottom text box or click Upload File and upload the file, and click OK.
What to do next
Finalize the trust relationship. See “Complete the Trust Setup,” on page 82.
Use the Upload Certificate and Private Key Option to Establish a Trusted Connection
Some KMS vendors such as HyTrust require that you upload the KMS server certificate and private key to the vCenter Server system.
Some KMS vendors generate a certificate and private key for the connection and make them available to you. After you upload the files, the KMS trusts your vCenter Server instance.
Prerequisites
■ Request a certificate and private key from the KMS vendor. The files are X509 files in PEM format.
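Before uploading, you can optionally confirm that the files you received are PEM-encoded X509 material by inspecting them with a standard tool such as openssl. The file names below are placeholders for the files your KMS vendor provides, and the second command applies only if the private key is an RSA key:
openssl x509 -in kms-cert.pem -text -noout
openssl rsa -in kms-key.pem -check -noout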
Procedure
1 Log in to the vSphere Web Client, and select a vCenter Server system.
2 Click the Configure tab and select Key Management Servers.
3 Select the KMS instance with which you want to establish a trusted connection.
4 Select Upload certificate and private key and click OK.
5 Paste the certificate that you received from the KMS vendor into the top text box, or click Upload File to upload the certificate file.
6 Paste the key file into the bottom text box, or click Upload File to upload the key file.
7 Click OK.
What to do next
Finalize the trust relationship. See “Complete the Trust Setup,” on page 82.

Set the Default KMS Cluster

You must set the default KMS cluster if you do not make the first cluster the default cluster, or if your environment uses multiple clusters and you remove the default cluster.
Prerequisites
As a best practice, verify that the Connection Status in the Key Management Servers tab shows Normal and a green check mark.
Procedure
1 Log in to the vSphere Web Client and select a vCenter Server system.
2 Click the Configure tab and click Key Management Servers under More.
3 Select the cluster and click Set KMS cluster as default.
Do not select the server. The menu to set the default is available only for the cluster.
4 Click Yes.
The word default appears next to the cluster name.

Complete the Trust Setup

Unless the Add Server dialog box prompted you to trust the KMS, you must explicitly establish trust after the certificate exchange is complete.
You can complete the trust setup, that is, make vCenter Server trust the KMS, either by trusting the KMS or by uploading a KMS certificate. You have two options:
■ Trust the certificate explicitly by using the Refresh KMS certificate option.
■ Upload a KMS leaf certificate or the KMS CA certificate to vCenter Server by using the Upload KMS certificate option.
NOTE If you upload the root CA certificate or the intermediate CA certificate, vCenter Server trusts all certificates that are signed by that CA. For strong security, upload a leaf certificate or an intermediate CA certificate that the KMS vendor controls.
Procedure
1 Log in to the vSphere Web Client, and select a vCenter Server system.
2 Click the Configure tab and select Key Management Servers.
3 Select the KMS instance with which you want to establish a trusted connection.
4 To establish the trust relationship, refresh or upload the KMS certificate.

Refresh KMS certificate:
a Click All Actions, and select Refresh KMS certificate.
b In the dialog box that appears, click Trust.

Upload KMS certificate:
a Click All Actions, and select Upload KMS certificate.
b In the dialog box that appears, click Upload file, upload a certificate file, and click OK.

Enable Encryption on a New Virtual SAN Cluster

You can enable encryption when you configure a new Virtual SAN cluster.
Prerequisites
■ Required privileges:
   ■ Host.Inventory.EditCluster
   ■ Cryptographer.ManageEncryptionPolicy
   ■ Cryptographer.ManageKMS
   ■ Cryptographer.ManageKeys
■ You must have set up a KMS cluster and established a trusted connection between vCenter Server and the KMS.
Procedure
1 Navigate to an existing cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General and click the Configure vSAN button.
4 On the vSAN capabilities page, select the Encryption check box, and select a KMS cluster.
NOTE Make sure the Erase disks before use check box is deselected, unless you want to wipe existing data from the storage devices as they are encrypted.
5 On the Claim disks page, specify which disks to claim for the Virtual SAN cluster.
a Select a flash device to be used for capacity and click the Claim for capacity tier icon.
b Select a flash device to be used as cache and click the Claim for cache tier icon.
6 Complete your cluster configuration.
Encryption of data at rest is enabled on the Virtual SAN cluster. Virtual SAN encrypts all data added to the Virtual SAN datastore.

Generate New Encryption Keys

You can generate new encryption keys, in case a key expires or becomes compromised.
The following options are available when you generate new encryption keys for your Virtual SAN cluster.
■ If you generate a new KEK, all hosts in the Virtual SAN cluster receive the new KEK from the KMS. Each host's DEK is re-encrypted with the new KEK.
■ If you choose to re-encrypt all data using new keys, a new KEK and new DEKs are generated. A rolling disk reformat is required to re-encrypt the data.
Prerequisites
■ Required privileges:
   ■ Host.Inventory.EditCluster
   ■ Cryptographer.ManageKeys
■ You must have set up a KMS cluster and established a trusted connection between vCenter Server and the KMS.
Procedure
1 Navigate to the Virtual SAN host cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General.
4 In the vSAN is turned ON pane, click the Generate new encryption keys button.
5 To generate a new KEK, click OK. The DEKs are re-encrypted with the new KEK.
■ To generate a new KEK and new DEKs, and re-encrypt all data in the Virtual SAN cluster, select the following check box: Also re-encrypt all data on the storage using new keys.
■ If your Virtual SAN cluster has limited resources, select the Allow Reduced Redundancy check box. If you allow reduced redundancy, your data might be at risk during the disk reformat operation.

Enable Virtual SAN Encryption on Existing Virtual SAN Cluster

You can enable encryption by editing the configuration parameters of an existing Virtual SAN cluster.
Prerequisites
■ Required privileges:
   ■ Host.Inventory.EditCluster
   ■ Cryptographer.ManageEncryptionPolicy
   ■ Cryptographer.ManageKMS
   ■ Cryptographer.ManageKeys
■ You must have set up a KMS cluster and established a trusted connection between vCenter Server and the KMS.
■ The cluster's disk-claiming mode must be set to manual.
Procedure
1 Navigate to the Virtual SAN host cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General.
4 In the vSAN is turned ON pane, click the Edit button.
5 On the Edit vSAN settings dialog, select the Encryption check box, and select a KMS cluster.
6 (Optional) If the storage devices in your cluster contain sensitive data, select the Erase disks before use check box.
This setting directs Virtual SAN to wipe existing data from the storage devices as they are encrypted.
7 Click OK.
A rolling reformat of all disk groups takes place as Virtual SAN encrypts all data in the Virtual SAN datastore.

Virtual SAN Encryption and Core Dumps

If your Virtual SAN cluster uses encryption, and if an error occurs on the ESXi host, the resulting core dump is encrypted to protect customer data. Core dumps that are included in the vm-support package are also encrypted.
N Core dumps can contain sensitive information. Follow your organization's data security and privacy policy when handling core dumps.
Core Dumps on ESXi Hosts
When an ESXi host crashes, an encrypted core dump is generated and the host reboots. The core dump is encrypted with the host key that is in the ESXi key cache. What you can do next depends on several factors.
■ In most cases, vCenter Server retrieves the key for the host from the KMS and attempts to push the key to the ESXi host after reboot. If the operation is successful, you can generate the vm-support package, and you can decrypt or re-encrypt the core dump.
■ If vCenter Server cannot connect to the ESXi host, you might be able to retrieve the key from the KMS.
■ If the host used a custom key, and that key differs from the key that vCenter Server pushes to the host, you cannot manipulate the core dump. Avoid using custom keys.
Core Dumps and vm-support Packages
When you contact VMware Technical Support because of a serious error, your support representative usually asks you to generate a vm-support package. The package includes log files and other information, including core dumps. If support representatives cannot resolve the issues by looking at log files and other information, you can decrypt the core dumps to make relevant information available. Follow your organization's security and privacy policy to protect sensitive information, such as host keys.
Core Dumps on vCenter Server Systems
A core dump on a vCenter Server system is not encrypted. vCenter Server already contains potentially sensitive information. At a minimum, ensure that the Windows system on which vCenter Server runs, or the vCenter Server Appliance, is protected. You also might consider turning off core dumps for the vCenter Server system. Other information in log files can help determine the problem.

Collect a vm-support Package for an ESXi Host in an Encrypted Virtual SAN Cluster

If encryption is enabled on a Virtual SAN cluster, any core dumps in the vm-support package are encrypted. You can collect the package from the vSphere Web Client, and you can specify a password if you expect to decrypt the core dump later.
The vm-support package includes log files, core dump files, and more.
Prerequisites
Inform your support representative that encryption is enabled for the Virtual SAN cluster. Your support representative might ask you to decrypt core dumps to extract relevant information.
N Core dumps can contain sensitive information. Follow your organization's security and privacy policy to protect sensitive information such as host keys.
Procedure
1 Log in to vCenter Server with the vSphere Web Client.
2 Click Hosts and Clusters, and right-click the ESXi host.
3 Select Export System Logs.
4 In the dialog box, select Password for encrypted core dumps, and specify and confirm a password.
5 Leave the defaults for other options, or make changes if requested by VMware Technical Support, and click Finish.
6 Specify a location for the file.
7 If your support representative asked you to decrypt the core dump in the vm-support package, log in to any ESXi host and follow these steps.
a Log in to the ESXi host and change to the directory where the vm-support package is located.
The filename follows the pattern esx.date_and_time.tgz.
b Make sure that the directory has enough space for the package, the uncompressed package, and the recompressed package, or move the package.
c Extract the package to the local directory.
vm-support -x *.tgz .
The resulting file hierarchy might contain core dump files for the ESXi host, usually in /var/core, and might contain multiple core dump files for virtual machines.
d Decrypt each encrypted core dump file separately.
crypto-util envelope extract --offset 4096 --keyfile vm-support-incident-key-file --password encryptedZdump decryptedZdump
vm-support-incident-key-file is the incident key file that you find at the top level in the directory.
encryptedZdump is the name of the encrypted core dump file.
decryptedZdump is the name for the file that the command generates. Make the name similar to the encryptedZdump name.
e Provide the password that you specified when you created the vm-support package.
f Remove the encrypted core dumps, and compress the package again.
vm-support --reconstruct
8 Remove any files that contain confidential information.

Decrypt or Re-encrypt an Encrypted Core Dump

You can decrypt or re-encrypt an encrypted core dump on your ESXi host by using the crypto-util CLI.
You can decrypt and examine the core dumps in the vm-support package yourself. Core dumps might contain sensitive information. Follow your organization's security and privacy policy to protect sensitive information, such as host keys.
For details about re-encrypting a core dump and other features of crypto-util, see the command-line help.
N crypto-util is for advanced users.
Prerequisites
The ESXi host key that was used to encrypt the core dump must be available on the ESXi host that generated the core dump.
Procedure
1 Log directly in to the ESXi host on which the core dump occurred.
If the ESXi host is in lockdown mode, or if SSH access is disabled, you might have to enable access first.
2 Determine whether the core dump is encrypted.

Monitor core dump: crypto-util envelope describe vmmcores.ve
zdump file: crypto-util envelope describe --offset 4096 zdumpFile

3 Decrypt the core dump, depending on its type.

Monitor core dump: crypto-util envelope extract vmmcores.ve vmmcores
zdump file: crypto-util envelope extract --offset 4096 zdumpEncrypted zdumpUnencrypted

Upgrading the Virtual SAN Cluster 10

Upgrading Virtual SAN is a multistage process, in which you must perform the upgrade procedures in the order described here.
Before you attempt to upgrade, make sure you understand the complete upgrade process clearly to ensure a smooth and uninterrupted upgrade. If you are not familiar with the general vSphere upgrade procedure, first review the vSphere Upgrade documentation.
NOTE Failure to follow the sequence of upgrade tasks described here will lead to data loss and cluster failure.
The Virtual SAN cluster upgrade proceeds in the following sequence of tasks.
1 Upgrade the vCenter Server. See the vSphere Upgrade documentation.
2 Upgrade the ESXi hosts. See “Upgrade the ESXi Hosts,” on page 91. For information about migrating and preparing your ESXi hosts for upgrade, see the vSphere Upgrade documentation.
3 Upgrade the Virtual SAN disk format. Upgrading the disk format is optional, but for best results, upgrade the objects to use the latest version. The on-disk format exposes your environment to the complete feature set of Virtual SAN. See “Upgrade Virtual SAN Disk Format Using RVC,” on page 96.
This chapter includes the following topics:
■ “Before You Upgrade Virtual SAN,” on page 89
■ “Upgrade the vCenter Server,” on page 91
■ “Upgrade the ESXi Hosts,” on page 91
■ “About the Virtual SAN Disk Format,” on page 93
■ “Verify the Virtual SAN Cluster Upgrade,” on page 97
■ “Using the RVC Upgrade Command Options,” on page 98

Before You Upgrade Virtual SAN

Plan and design your upgrade to be fail-safe. Before you attempt to upgrade Virtual SAN, verify that your environment meets the vSphere hardware and software requirements.
Upgrade Prerequisite
Consider the aspects that could delay the overall upgrade process. For guidelines and best practices, see the vSphere Upgrade documentation.
Review the key requirements before you upgrade your cluster to Virtual SAN 6.6.
Table 10-1. Upgrade Prerequisites

Software, hardware, drivers, firmware, and storage I/O controllers: Verify that the software and hardware components, drivers, firmware, and storage I/O controllers that you plan on using are supported by Virtual SAN 6.6 and later, and are listed on the VMware Compatibility Guide Web site at http://www.vmware.com/resources/compatibility/search.php.

Virtual SAN version: Verify that you are using the latest version of Virtual SAN. If you are currently running a beta version and plan on upgrading Virtual SAN to 6.6, your upgrade will fail. When you upgrade from a beta version, you must perform a fresh deployment of Virtual SAN.

Disk space: Verify that you have enough space available to complete the software version upgrade. The amount of disk storage needed for the vCenter Server installation depends on your vCenter Server configuration. For guidelines about the disk space required for upgrading vSphere, see the vSphere Upgrade documentation.

Virtual SAN disk format: Verify that you have enough capacity available to upgrade the disk format. If you do not have free space equal to the consumed capacity of the largest disk group, available on disk groups other than the ones being converted, you must choose Allow reduced redundancy as the data migration option. For example, if the largest disk group in a cluster has 10 TB of physical capacity but only 5 TB is consumed, an additional 5 TB of spare capacity is needed elsewhere in the cluster, excluding the disk groups that are being migrated. When upgrading the Virtual SAN disk format, verify that the hosts are not in maintenance mode. When any member host of a Virtual SAN cluster enters maintenance mode, the cluster capacity is automatically reduced, because the member host no longer contributes storage to the cluster and the capacity on the host is unavailable for data. For information about the evacuation modes, see “Place a Member of Virtual SAN Cluster in Maintenance Mode,” on page 112.

Virtual SAN hosts: Verify that you have placed the Virtual SAN hosts in maintenance mode and selected the Ensure data accessibility or Evacuate all data option. You can use vSphere Update Manager to automate and test the upgrade process. However, when you use vSphere Update Manager to upgrade Virtual SAN, the default evacuation mode is Ensure data accessibility. When you use this mode, your data is not completely protected, and if you encounter a failure while upgrading Virtual SAN, you might experience unexpected data loss. However, the Ensure data accessibility mode is faster than the Evacuate all data mode, because you do not need to move all data to another host in the cluster. For information about the evacuation modes, see “Place a Member of Virtual SAN Cluster in Maintenance Mode,” on page 112.

Virtual Machines: Verify that you have backed up your virtual machines.
Recommendations
Consider the following recommendations when deploying ESXi hosts for use with Virtual SAN:
■ If ESXi hosts are configured with memory capacity of 512 GB or less, use SATADOM, SD, USB, or hard disk devices as the installation media.
■ If ESXi hosts are configured with memory capacity greater than 512 GB, use a separate magnetic disk or flash device as the installation device. If you are using a separate device, verify that Virtual SAN is not claiming the device.
■ When you boot a Virtual SAN host from a SATADOM device, you must use a single-level cell (SLC) device, and the size of the boot device must be at least 16 GB.
Virtual SAN 6.5 and later enables you to adjust the boot size requirements for an ESXi host in a Virtual SAN cluster. For more information, see the VMware knowledge base article at http://kb.vmware.com/kb/2147881.
Upgrading the Witness Host in a Two Host or Stretched Cluster
The witness host for a two host cluster or stretched cluster is located outside of the Virtual SAN cluster, but it is managed by the same vCenter Server. You can use the same process to upgrade the witness host as you use for a Virtual SAN data host.
Do not upgrade the witness host until all data hosts have been upgraded and have exited maintenance mode.
Using vSphere Update Manager to upgrade hosts in parallel can result in the witness host being upgraded in parallel with one of the data hosts. To avoid upgrade problems, configure vSphere Update Manager so that it does not upgrade the witness host in parallel with the data hosts.

Upgrade the vCenter Server

This rst task to perform during the Virtual SAN upgrade is a general vSphere upgrade, which includes upgrading vCenter Server and ESXi hosts.
VMware supports in-place upgrades on 64-bit systems from vCenter Server 4.x, vCenter Server 5.0.x, vCenter Server 5.1.x, and vCenter Server 5.5 to vCenter Server 6.0 and later. The vCenter Server upgrade includes a database schema upgrade and an upgrade of the vCenter Server software. Instead of performing an in-place upgrade to vCenter Server, you can use a different machine for the upgrade. For detailed instructions and various upgrade options, see the vSphere Upgrade documentation.

Upgrade the ESXi Hosts

After you upgrade the vCenter Server, the next task for the Virtual SAN cluster upgrade is upgrading the ESXi hosts to use the current version.
If you have multiple hosts in the Virtual SAN cluster and you use vSphere Update Manager to upgrade the hosts, the default evacuation mode is Ensure data accessibility. If you use this mode and you encounter a failure while upgrading Virtual SAN, your data will be at risk. For information about working with evacuation modes, see “Place a Member of Virtual SAN Cluster in Maintenance Mode,” on page 112.
For information about using vSphere Update Manager, see the documentation Web site at https://www.vmware.com/support/pubs/vum_pubs.html.
Before you attempt to upgrade the ESXi hosts, review the best practices discussed in the vSphere Upgrade documentation. VMware provides several ESXi upgrade options. Choose the upgrade option that works best with the type of host that you are upgrading. For more information about the various upgrade options, see the vSphere Upgrade documentation.
Prerequisites
■ Verify that you have sufficient disk space for upgrading the ESXi hosts. For guidelines about the disk space requirement, see the vSphere Upgrade documentation.
■ Verify that you are using the latest version of ESXi. You can download the latest ESXi installer from the VMware product download Web site at https://my.vmware.com/web/vmware/downloads.
■ Verify that you are using the latest version of vCenter Server.
■ Verify the compatibility of the network configuration, storage I/O controller, storage device, and backup software.
■ Verify that you have backed up the virtual machines.
■ Use Distributed Resource Scheduler (DRS) to prevent virtual machine downtime during the upgrade. Verify that the automation level for each virtual machine is set to Fully Automated mode to help DRS migrate virtual machines when hosts are entering maintenance mode. Alternatively, you can power off all virtual machines or perform manual migration.
Procedure
1 Place the host that you intend to upgrade in maintenance mode.
You must begin your upgrade path with ESXi 5.5 or later hosts in the Virtual SAN cluster.
a Right-click the host in the vSphere Web Client navigator and select Maintenance Mode > Enter Maintenance Mode.
b Select the Ensure data accessibility or Evacuate all data evacuation mode, depending on your requirements, and wait for the host to enter maintenance mode.
If you are using vSphere Update Manager to upgrade the host, or if you are working with a three-host cluster, the default evacuation mode available is Ensure data accessibility. This mode is faster than the Evacuate all data mode, but it does not fully protect your data. During a failure, your data might be at risk, and you might experience downtime and unexpected data loss.
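If you prefer to enter maintenance mode from the host's own shell, esxcli offers an equivalent. This is a sketch that assumes SSH access to the host, and that the vSAN data evacuation mode is selected with the --vsanmode flag:
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility
To evacuate all data instead, pass --vsanmode evacuateAllData.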
2 Upload the software to the datastore of your ESXi host and verify that the file is available in the directory inside the datastore. For example, you can upload the software to /vmfs/volumes/<datastore>/VMware-ESXi-6.0.0-1921158-depot.zip.
3 Install the software with the esxcli software vib command:
esxcli software vib install -d /vmfs/volumes/53b536fd-34123144-8531-00505682e44d/depot/VMware-ESXi-6.0.0-1921158-depot.zip --no-sig-check
After the software has installed successfully, you see the following message:
The update completed successfully, but the system needs to be rebooted for the changes to be effective.
4 You must manually restart your ESXi host from the vSphere Web Client.
a Navigate to the ESXi host in the vSphere Web Client inventory.
b Right-click the host, select Power > Reboot, click Yes to confirm, and then wait for the host to restart.
c Right-click the host, select Connection > Disconnect, and then select Connection > Connect to reconnect to the host.
To upgrade the remaining hosts in the cluster, repeat this procedure for each host.
If you have multiple hosts in your Virtual SAN cluster, you can use vSphere Update Manager to upgrade the remaining hosts.
5 Exit maintenance mode.
What to do next
1 (Optional) Upgrade the Virtual SAN disk format. See “Upgrade Virtual SAN Disk Format Using RVC,” on page 96.
2 Verify the host license. In most cases, you must reapply your host license. You can use the vSphere Web Client and vCenter Server to apply host licenses. For more information about applying host licenses, see the vCenter Server and Host Management documentation.
3 (Optional) Upgrade the virtual machines on the hosts by using the vSphere Web Client or vSphere Update Manager.

About the Virtual SAN Disk Format

The disk format upgrade is optional and a Virtual SAN cluster continues to run smoothly if you use a previous disk format version.
For best results, upgrade the objects to use the latest on-disk format. The latest on-disk format provides the complete feature set of Virtual SAN.
Depending on the size of disk groups, the disk format upgrade can be time-consuming because the disk groups are upgraded one at a time. For each disk group upgrade, all data from each device is evacuated and the disk group is removed from the Virtual SAN cluster. The disk group is then added back to Virtual SAN with the new on-disk format.
N Once you upgrade the on-disk format, you cannot roll back software on the hosts or add certain older hosts to the cluster.
When you initiate an upgrade of the on-disk format, Virtual SAN performs several operations that you can monitor from the Resyncing Components page. The table summarizes each process that takes place during the disk format upgrade.
Table 10-2. Upgrade Progress

0%-5%: Cluster check. Cluster components are checked and prepared for the upgrade. This process takes a few minutes. Virtual SAN verifies that no outstanding issues exist that can prevent completion of the upgrade:
■ All hosts are connected.
■ All hosts have the correct software version.
■ All disks are healthy.
■ All objects are accessible.

5%-10%: Disk group upgrade. Virtual SAN performs the initial disk upgrade with no data migration. This process takes a few minutes.

10%-15%: Object realignment. Virtual SAN modifies the layout of all objects to ensure that they are properly aligned. This process can take a few minutes for a small system with few snapshots. It can take many hours, or even days, for a large system with many snapshots, many fragmented writes, and many unaligned objects.

15%-95%: Disk group removal and reformat. Each disk group is removed from the cluster, reformatted, and added back to the cluster. The time required for this process varies, depending on the megabytes allocated and the system utilization. A system at or near its I/O capacity transfers slowly.

95%-100%: Final object version upgrade. Object conversion to the new on-disk format and resynchronization is completed. The time required for this process varies, depending on the amount of space used and whether the Allow reduced redundancy option is selected.
During the upgrade, you can monitor the upgrade process from the vSphere Web Client when you navigate to the Resyncing Components page. See “Monitor the Resynchronization Tasks in the Virtual SAN Cluster,” on page 134. You also can use the RVC vsan.upgrade_status <cluster> command to monitor the upgrade. Use the optional -r <seconds> ag to refresh the upgrade status periodically until you press Ctrl+C. The minimum number of seconds allowed between each refresh is 60.
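For example, to refresh the status every 60 seconds, assuming the cluster path format used in the RVC procedures later in this chapter:
vsan.upgrade_status -r 60 /192.168.0.1/BetaDC/computers/VSANCluster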
You can monitor other upgrade tasks, such as device removal and upgrade, from the vSphere Web Client in the Recent Tasks pane of the status bar.
The following considerations apply when upgrading the disk format:
■ If you upgrade a cluster with three hosts, and you want to perform a full evacuation, the evacuation fails for objects with a Primary level of failures to tolerate greater than zero. A three-host cluster cannot reprotect a disk group that is being fully evacuated using the resources of only two hosts. For example, when the Primary level of failures to tolerate is set to 1, Virtual SAN requires three protection components (two mirrors and a witness), where each protection component is placed on a separate host. For a three-host cluster, you must choose the Ensure data accessibility evacuation mode. When in this mode, any hardware failure might result in data loss. You also must ensure that enough free space is available. The space must be equal to the logical consumed capacity of the largest disk group. This capacity must be available on a disk group separate from the one that is being migrated.
■ When upgrading a three-host cluster, or a cluster with limited resources, allow the virtual machines to operate in a reduced redundancy mode. Run the vsan.ondisk_upgrade RVC command with the --allow-reduced-redundancy option, as shown in the example after this list.
■ Using the --allow-reduced-redundancy command option means that certain virtual machines might be unable to tolerate failures during the migration. This lowered tolerance for failure also can cause data loss. Virtual SAN restores full compliance and redundancy after the upgrade is completed. During the upgrade, the compliance status of virtual machines and their redundancies is temporarily noncompliant. After you complete the upgrade and finish all rebuild tasks, the virtual machines become compliant.
■ While the upgrade is in progress, do not remove or disconnect any host, and do not place a host in maintenance mode. These actions might cause the upgrade to fail.
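For example, assuming the same cluster path format as the other RVC examples in this chapter:
vsan.ondisk_upgrade --allow-reduced-redundancy /192.168.0.1/BetaDC/computers/VSANCluster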
For information about the RVC commands and command options, see the RVC Command Reference Guide.

Upgrade Virtual SAN Disk Format Using vSphere Web Client

After you have finished upgrading the Virtual SAN hosts, you can perform the disk format upgrade.
NOTE If you enable encryption or deduplication and compression on an existing Virtual SAN cluster, the on-disk format is automatically upgraded to the latest version and this procedure is not required. You can avoid reformatting the disk groups twice. See “Edit Virtual SAN Settings,” on page 50.
Prerequisites
■ Verify that you are using the updated version of vCenter Server.
■ Verify that you are using the latest version of ESXi on the hosts.
■ Verify that the disks are in a healthy state. Navigate to the Disk Management page in the vSphere Web Client to verify the object status.
■ Verify that the hardware and software that you plan on using are certified and listed in the VMware Compatibility Guide Web site at http://www.vmware.com/resources/compatibility/search.php.
■ Verify that you have enough free space to perform the disk format upgrade. Run the RVC command vsan.whatif_host_failures to determine whether you have enough capacity to successfully finish the upgrade, or to perform a component rebuild if you encounter a failure during the upgrade. An example follows this list.
■ Verify that your hosts are not in maintenance mode. When upgrading the disk format, do not place the hosts in maintenance mode. When any member host of a Virtual SAN cluster enters maintenance mode, the available resource capacity in the cluster is reduced because the member host no longer contributes capacity to the cluster, and the cluster upgrade might fail.
■ Verify that there are no component rebuilding tasks currently in progress in the Virtual SAN cluster. See “Monitor the Resynchronization Tasks in the Virtual SAN Cluster,” on page 134.
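For example, assuming the cluster path format used in the RVC procedures later in this chapter:
vsan.whatif_host_failures /192.168.0.1/BetaDC/computers/VSANCluster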
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, select General.
4 (Optional) Under On-disk Format Version, click Pre-check Upgrade.
The upgrade pre-check analyzes the cluster to uncover any issues that might prevent a successful upgrade. Some of the items checked are host status, disk status, network status, and object status. Upgrade issues are displayed in the Disk pre-check status field.
5 Under On-disk Format Version, click Upgrade.
6 Click Yes on the Upgrade dialog to perform the upgrade of the on-disk format.
Virtual SAN performs a rolling reformat of each disk group in the cluster. The On-disk Format Version column displays the disk format version of the storage devices in the cluster. The Disks with outdated version column indicates the number of devices that still use the older format. When the upgrade is successful, the Disks with outdated version count is 0.
If a failure occurs during the upgrade, you can check the Resyncing Components page in the vSphere Web Client. Wait for all resynchronizations to complete, and run the upgrade again. You also can check the cluster health using the health service. After you have resolved any issues raised by the health checks, you can run the upgrade again.

Upgrade Virtual SAN Disk Format Using RVC

After you have finished upgrading the Virtual SAN hosts, you can use the Ruby vSphere Console (RVC) to continue with the disk format upgrade.
Prerequisites
■ Verify that you are using the updated version of vCenter Server.
■ Verify that the version of the ESXi hosts running in the Virtual SAN cluster is 6.5 or later.
■ Verify that the disks are in a healthy state from the Disk Management page in the vSphere Web Client. You can also run the vsan.disks_stats RVC command to verify disk status.
■ Verify that the hardware and software that you plan on using are certified and listed in the VMware Compatibility Guide Web site at http://www.vmware.com/resources/compatibility/search.php.
■ Verify that you have enough free space to perform the disk format upgrade. Run the RVC vsan.whatif_host_failures command to determine whether you have enough capacity to successfully finish the upgrade, or to perform a component rebuild in case you encounter a failure during the upgrade.
■ Verify that you have PuTTY or a similar SSH client installed for accessing RVC. For detailed information about downloading the RVC tool and using the RVC commands, see the RVC Command Reference Guide.
■ Verify that your hosts are not in maintenance mode. When upgrading the on-disk format, do not place your hosts in maintenance mode. When any member host of a Virtual SAN cluster enters maintenance mode, the available resource capacity in the cluster is reduced because the member host no longer contributes capacity to the cluster, and the cluster upgrade might fail.
■ Verify that there are no component rebuilding tasks currently in progress in the Virtual SAN cluster by running the RVC vsan.resync_dashboard command.
Procedure
1 Log in to your vCenter Server using RVC.
2 Run the vsan.disks_stats /<vCenter IP address or hostname>/<data center name>/computers/<cluster name> command to view the disk status.
For example: vsan.disks_stats /192.168.0.1/BetaDC/computers/VSANCluster
The command lists the names of all devices and hosts in the Virtual SAN cluster. The command also displays the current disk format and its health status. You can also check the current health of the devices in the Health Status column on the Disk Management page. For example, the device status appears as Unhealthy in the Health Status column for the hosts or disk groups that have failed devices.
3 Run the vsan.ondisk_upgrade <path to vsan cluster> command.
For example: vsan.ondisk_upgrade /192.168.0.1/BetaDC/computers/VSANCluster
4 Monitor the progress in RVC.
RVC upgrades one disk group at a time.
After the disk format upgrade has completed successfully, the following message appears.
Done with disk format upgrade phase
There are n v1 objects that require upgrade
Object upgrade progress: n upgraded, 0 left
Object upgrade completed: n upgraded
Done VSAN upgrade
5 Run the vsan.obj_status_report command to verify that the object versions are upgraded to the new on-disk format.

Verify the Virtual SAN Disk Format Upgrade

After you finish upgrading the disk format, verify that the Virtual SAN cluster is using the new on-disk format.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Disk Management.
The current disk format version appears in the Disk Format Version column. For example, if you are using disk format 2.0, it appears as version 2 in the Disk Format Version column. For on-disk format 3.0, the disk format version appears as version 3.

Verify the Virtual SAN Cluster Upgrade

The Virtual SAN cluster upgrade is not complete until you have verified that you are using the latest version of vSphere and that Virtual SAN is available for use.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab, and verify that vSAN is listed.
You also can navigate to your ESXi host, select the Summary tab, and verify that you are using the latest version of ESXi.
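If you prefer the host's command line, you can also confirm the running ESXi version with esxcli:
esxcli system version get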

Using the RVC Upgrade Command Options

The vsan.ondisk_upgrade command provides various command options that you can use to control and manage the Virtual SAN cluster upgrade. For example, you can allow reduced redundancy to perform the upgrade when you have little free space available.
Run the vsan.ondisk_upgrade --help command to display the list of RVC command options.
Use these command options with the vsan.ondisk_upgrade command.
Table 10-3. Upgrade Command Options

--hosts_and_clusters: Use to specify paths to all host systems in the cluster or the cluster's compute resources.

--ignore-objects, -i: Use to skip the Virtual SAN object upgrade. You can also use this command option to eliminate the object version upgrade. When you use this command option, objects continue to use the current on-disk format version.

--allow-reduced-redundancy, -a: Use to remove the requirement of having free space equal to one disk group during the disk upgrade. With this option, virtual machines operate in a reduced redundancy mode during the upgrade, which means certain virtual machines might be unable to tolerate failures temporarily, and that inability might cause data loss. Virtual SAN restores full compliance and redundancy after the upgrade is completed.

--force, -f: Use to enable force-proceed and automatically answer all confirmation questions.

--help, -h: Use to display the help options.
For information about using the RVC commands, see the RVC Command Reference Guide.
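For example, to run the upgrade without confirmation prompts and skip the object version upgrade, assuming the cluster path format from the earlier examples:
vsan.ondisk_upgrade -i -f /192.168.0.1/BetaDC/computers/VSANCluster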
Device Management in a Virtual SAN Cluster 11
You can perform various device management tasks in a Virtual SAN cluster. You can create hybrid or all-flash disk groups, enable Virtual SAN to claim devices for capacity and cache, enable or disable LED indicators on devices, mark devices as flash, mark remote devices as local, and so on.
This chapter includes the following topics:
■ “Managing Disk Groups And Devices,” on page 99
■ “Working with Individual Devices,” on page 101

Managing Disk Groups And Devices

When you enable Virtual SAN on a cluster, choose a disk-claiming mode to organize devices into groups.
Virtual SAN 6.6 and later releases have a uniform workow for claiming disks across all scenarios. It groups all available disks by model and size, or by host. You must select which devices to use for cache and which to use for capacity.
Create a Disk Group on a Host
When you create disk groups, you must manually specify each host and each device to be used for the Virtual SAN datastore. You organize cache and capacity devices into disk groups.
To create a disk group, you define the disk group and individually select the devices to include in it. Each disk group contains one flash cache device and one or more capacity devices.
When you create a disk group, consider the ratio of flash cache to consumed capacity. Although the ratio depends on the requirements and workload of the cluster, use a flash cache to consumed capacity ratio of at least 10 percent (not including replicas such as mirrors).
The Virtual SAN cluster initially contains a single Virtual SAN datastore with zero bytes consumed.
As you create disk groups on each host and add cache and capacity devices, the size of the datastore grows according to the amount of physical capacity added by those devices. Virtual SAN creates a single distributed Virtual SAN datastore using the local empty capacity available from the hosts added to the cluster.
If the cluster requires multiple flash cache devices, you must create multiple disk groups manually, because a maximum of one flash cache device is allowed per disk group.
N If a new ESXi host is added to the Virtual SAN cluster, the local storage from that host is not added to the Virtual SAN datastore automatically. You have to manually create a disk group and add the devices to the disk group in order to use the new storage from the new ESXi host.
Claim Disks for the Virtual SAN Cluster
You can select multiple devices from your hosts, and Virtual SAN creates default disk groups for you.
When you add more capacity to the hosts, or add new hosts with capacity to the Virtual SAN cluster, you can select the new devices to increase the capacity of the Virtual SAN datastore. In an all-flash cluster, you can mark flash devices for use as capacity.
After Virtual SAN has claimed the devices, it creates the Virtual SAN shared datastore. The total size of the datastore reflects the capacity of all capacity devices in disk groups across all hosts in the cluster. Some capacity overhead is used for metadata.

Create a Disk Group on a Virtual SAN Host

You can manually combine specific cache devices with specific capacity devices to define disk groups on a particular host.
In this method, you manually select devices to create a disk group for a host. You add one cache device and at least one capacity device to the disk group.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.
3 Under vSAN, click Disk Management.
4 Select the host and click the Create a new disk group icon.
■ Select the flash device to be used for cache.
■ From the Capacity type drop-down menu, select the type of capacity disks to use, depending on the type of disk group you want to create (HDD for hybrid or Flash for all-flash).
■ Select the devices you want to use for capacity.
5 Click OK.
The new disk group appears in the list.
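As a hedged command-line alternative, you can typically create the same pairing of cache and capacity devices on the host itself with esxcli. The device identifiers below are placeholders for the naa IDs of your own disks; -s names the flash cache device, and -d names a capacity device (repeat -d for additional capacity devices):
esxcli vsan storage add -s naa.5000000000000001 -d naa.5000000000000002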

Claim Storage Devices for a Virtual SAN Cluster

You can select a group of cache and capacity devices, and Virtual SAN organizes them into default disk groups.
Procedure
1 Navigate to the Virtual SAN cluster in the vSphere Web Client.
2 Click the Configure tab.