vSphere Resource Management

This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs.

EN-002644-00
You can find the most up-to-date technical documentation on the VMware Web site at:
http://www.vmware.com/support/
The VMware Web site also provides the latest product updates.
If you have comments about this documentation, submit your feedback to:
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Contents
About vSphere Resource Management 7

1 Getting Started with Resource Management 9
   Resource Types 9
   Resource Providers 9
   Resource Consumers 10
   Goals of Resource Management 10

2 Configuring Resource Allocation Settings 11
   Resource Allocation Shares 11
   Resource Allocation Reservation 12
   Resource Allocation Limit 12
   Resource Allocation Settings Suggestions 13
   Edit Resource Settings 13
   Changing Resource Allocation Settings—Example 14
   Admission Control 15

3 CPU Virtualization Basics 17
   Software-Based CPU Virtualization 17
   Hardware-Assisted CPU Virtualization 18
   Virtualization and Processor-Specific Behavior 18
   Performance Implications of CPU Virtualization 18

4 Administering CPU Resources 19
   View Processor Information 19
   Specifying CPU Configuration 19
   Multicore Processors 20
   Hyperthreading 20
   Using CPU Affinity 22
   Host Power Management Policies 23

5 Memory Virtualization Basics 27
   Virtual Machine Memory 27
   Memory Overcommitment 28
   Memory Sharing 28
   Types of Memory Virtualization 29

6 Administering Memory Resources 33
   Understanding Memory Overhead 33
   How ESXi Hosts Allocate Memory 34
   Memory Reclamation 35
   Using Swap Files 36
   Sharing Memory Across Virtual Machines 40
   Memory Compression 41
   Measuring and Differentiating Types of Memory Usage 42
   Memory Reliability 43
   About System Swap 43

7 Configuring Virtual Graphics 45
   View GPU Statistics 45
   Add an NVIDIA GRID vGPU to a Virtual Machine 45
   Configuring Host Graphics 46
   Configuring Graphics Devices 47

8 Managing Storage I/O Resources 49
   About Virtual Machine Storage Policies 50
   About I/O Filters 50
   Storage I/O Control Requirements 50
   Storage I/O Control Resource Shares and Limits 51
   Set Storage I/O Control Resource Shares and Limits 52
   Enable Storage I/O Control 52
   Set Storage I/O Control Threshold Value 53
   Storage DRS Integration with Storage Profiles 54

9 Managing Resource Pools 55
   Why Use Resource Pools? 56
   Create a Resource Pool 57
   Edit a Resource Pool 58
   Add a Virtual Machine to a Resource Pool 58
   Remove a Virtual Machine from a Resource Pool 59
   Remove a Resource Pool 60
   Resource Pool Admission Control 60

10 Creating a DRS Cluster 63
   Admission Control and Initial Placement 63
   Virtual Machine Migration 65
   DRS Cluster Requirements 67
   Configuring DRS with Virtual Flash 68
   Create a Cluster 68
   Edit Cluster Settings 69
   Set a Custom Automation Level for a Virtual Machine 71
   Disable DRS 72
   Restore a Resource Pool Tree 72

11 Using DRS Clusters to Manage Resources 73
   Adding Hosts to a Cluster 73
   Adding Virtual Machines to a Cluster 75
   Removing Virtual Machines from a Cluster 75
   Removing a Host from a Cluster 76
   DRS Cluster Validity 77
   Managing Power Resources 82
   Using DRS Affinity Rules 86

12 Creating a Datastore Cluster 91
   Initial Placement and Ongoing Balancing 92
   Storage Migration Recommendations 92
   Create a Datastore Cluster 92
   Enable and Disable Storage DRS 93
   Set the Automation Level for Datastore Clusters 93
   Setting the Aggressiveness Level for Storage DRS 94
   Datastore Cluster Requirements 95
   Adding and Removing Datastores from a Datastore Cluster 96

13 Using Datastore Clusters to Manage Storage Resources 97
   Using Storage DRS Maintenance Mode 97
   Applying Storage DRS Recommendations 99
   Change Storage DRS Automation Level for a Virtual Machine 100
   Set Up Off-Hours Scheduling for Storage DRS 100
   Storage DRS Anti-Affinity Rules 101
   Clear Storage DRS Statistics 104
   Storage vMotion Compatibility with Datastore Clusters 105

14 Using NUMA Systems with ESXi 107
   What is NUMA? 107
   How ESXi NUMA Scheduling Works 108
   VMware NUMA Optimization Algorithms and Settings 109
   Resource Management in NUMA Architectures 110
   Using Virtual NUMA 110
   Specifying NUMA Controls 111

15 Advanced Attributes 115
   Set Advanced Host Attributes 115
   Set Advanced Virtual Machine Attributes 118
   Latency Sensitivity 120
   About Reliable Memory 120

16 Fault Definitions 123
   Virtual Machine is Pinned 124
   Virtual Machine not Compatible with any Host 124
   VM/VM DRS Rule Violated when Moving to another Host 124
   Host Incompatible with Virtual Machine 124
   Host Has Virtual Machine That Violates VM/VM DRS Rules 124
   Host has Insufficient Capacity for Virtual Machine 124
   Host in Incorrect State 124
   Host Has Insufficient Number of Physical CPUs for Virtual Machine 125
   Host has Insufficient Capacity for Each Virtual Machine CPU 125
   The Virtual Machine Is in vMotion 125
   No Active Host in Cluster 125
   Insufficient Resources 125
   Insufficient Resources to Satisfy Configured Failover Level for HA 125
   No Compatible Hard Affinity Host 125
   No Compatible Soft Affinity Host 125
   Soft Rule Violation Correction Disallowed 125
   Soft Rule Violation Correction Impact 126

17 DRS Troubleshooting Information 127
   Cluster Problems 127
   Host Problems 130
   Virtual Machine Problems 133

Index 137
About vSphere Resource Management
vSphere Resource Management describes resource management for VMware® ESXi and vCenter® Server
environments.
This documentation focuses on the following topics:

- Resource allocation and resource management concepts
- Virtual machine attributes and admission control
- Resource pools and how to manage them
- Clusters, vSphere® Distributed Resource Scheduler (DRS), vSphere Distributed Power Management (DPM), and how to work with them
- Datastore clusters, Storage DRS, Storage I/O Control, and how to work with them
- Advanced resource management options
- Performance considerations
Intended Audience
This information is for system administrators who want to understand how the system manages resources
and how they can customize the default behavior. It’s also essential for anyone who wants to understand
and use resource pools, clusters, DRS, datastore clusters, Storage DRS, Storage I/O Control, or vSphere
DPM.
This documentation assumes you have a working knowledge of VMware ESXi and of vCenter Server.
Task instructions in this guide are based on the vSphere Web Client. You can also perform most of the tasks
in this guide by using the new vSphere Client. The new vSphere Client user interface terminology, topology,
and workflow are closely aligned with the same aspects and elements of the vSphere Web Client user
interface. You can apply the vSphere Web Client instructions to the new vSphere Client unless otherwise
instructed.
NOTE Not all functionality in the vSphere Web Client has been implemented for the vSphere Client in the vSphere 6.5 release. For an up-to-date list of unsupported functionality, see Functionality Updates for the vSphere Client Guide at http://www.vmware.com/info?id=1413.
1 Getting Started with Resource Management
To understand resource management, you must be aware of its components, its goals, and how best to implement it in a cluster setting.

Resource allocation settings for a virtual machine (shares, reservation, and limit) are discussed, including how to set them and how to view them. Admission control, the process whereby resource allocation settings are validated against existing resources, is also explained.
Resource management is the allocation of resources from resource providers to resource consumers.

The need for resource management arises from the overcommitment of resources, that is, more demand than capacity, and from the fact that demand and capacity vary over time. Resource management allows you to dynamically reallocate resources, so that you can more efficiently use available capacity.
This chapter includes the following topics:

- "Resource Types," on page 9
- "Resource Providers," on page 9
- "Resource Consumers," on page 10
- "Goals of Resource Management," on page 10
Resource Types
Resources include CPU, memory, power, storage, and network resources.
NOTE ESXi manages network bandwidth and disk resources on a per-host basis, using network traffic shaping and a proportional share mechanism, respectively.
Resource Providers
Hosts and clusters, including datastore clusters, are providers of physical resources.
For hosts, available resources are the host's hardware specification, minus the resources used by the virtualization software.
A cluster is a group of hosts. You can create a cluster using vSphere Web Client, and add multiple hosts to
the cluster. vCenter Server manages these hosts’ resources jointly: the cluster owns all of the CPU and
memory of all hosts. You can enable the cluster for joint load balancing or failover. See Chapter 10, “Creating
a DRS Cluster,” on page 63 for more information.
A datastore cluster is a group of datastores. Like DRS clusters, you can create a datastore cluster using the
vSphere Web Client, and add multiple datastores to the cluster. vCenter Server manages the datastore
resources jointly. You can enable Storage DRS to balance I/O load and space utilization. See Chapter 12,
“Creating a Datastore Cluster,” on page 91.
Resource Consumers
Virtual machines are resource consumers.
The default resource settings assigned during creation work well for most machines. You can later edit the
virtual machine settings to allocate a share-based percentage of the total CPU, memory, and storage I/O of
the resource provider or a guaranteed reservation of CPU and memory. When you power on that virtual
machine, the server checks whether enough unreserved resources are available and allows power on only if
there are enough resources. This process is called admission control.
A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped
into hierarchies and used to hierarchically partition available CPU and memory resources. Accordingly,
resource pools can be considered both resource providers and consumers. They provide resources to child
resource pools and virtual machines, but are also resource consumers because they consume their parents’
resources. See Chapter 9, “Managing Resource Pools,” on page 55.
ESXi hosts allocate each virtual machine a portion of the underlying hardware resources based on a number of factors:

- Resource limits defined by the user.
- Total available resources for the ESXi host (or the cluster).
- Number of virtual machines powered on and resource usage by those virtual machines.
- Overhead required to manage the virtualization.
Goals of Resource Management
When managing your resources, you must be aware of what your goals are.
In addition to resolving resource overcommitment, resource management can help you accomplish the following:

- Performance Isolation: Prevent virtual machines from monopolizing resources and guarantee predictable service rates.
- Efficient Usage: Exploit undercommitted resources and overcommit with graceful degradation.
- Easy Administration: Control the relative importance of virtual machines, provide flexible dynamic partitioning, and meet absolute service-level agreements.
2 Configuring Resource Allocation Settings
When available resource capacity does not meet the demands of the resource consumers (and virtualization
overhead), administrators might need to customize the amount of resources that are allocated to virtual
machines or to the resource pools in which they reside.
Use the resource allocation settings (shares, reservation, and limit) to determine the amount of CPU, memory, and storage resources provided for a virtual machine. In particular, administrators have several options for allocating resources.

- Reserve the physical resources of the host or cluster.
- Set an upper bound on the resources that can be allocated to a virtual machine.
- Guarantee that a particular virtual machine is always allocated a higher percentage of the physical resources than other virtual machines.
This chapter includes the following topics:

- "Resource Allocation Shares," on page 11
- "Resource Allocation Reservation," on page 12
- "Resource Allocation Limit," on page 12
- "Resource Allocation Settings Suggestions," on page 13
- "Edit Resource Settings," on page 13
- "Changing Resource Allocation Settings—Example," on page 14
- "Admission Control," on page 15
Resource Allocation Shares
Shares specify the relative importance of a virtual machine (or resource pool). If a virtual machine has twice
as many shares of a resource as another virtual machine, it is entitled to consume twice as much of that
resource when these two virtual machines are competing for resources.
Shares are typically specified as High, Normal, or Low and these values specify share values with a 4:2:1 ratio, respectively. You can also select Custom to assign a specific number of shares (which expresses a proportional weight) to each virtual machine.
Specifying shares makes sense only with regard to sibling virtual machines or resource pools, that is, virtual
machines or resource pools with the same parent in the resource pool hierarchy. Siblings share resources
according to their relative share values, bounded by the reservation and limit. When you assign shares to a
virtual machine, you always specify the priority for that virtual machine relative to other powered-on
virtual machines.
The following table shows the default CPU and memory share values for a virtual machine. For resource
pools, the default CPU and memory share values are the same, but must be multiplied as if the resource
pool were a virtual machine with four virtual CPUs and 16 GB of memory.
Table 2-1. Share Values

Setting   CPU share values               Memory share values
High      2000 shares per virtual CPU    20 shares per megabyte of configured virtual machine memory
Normal    1000 shares per virtual CPU    10 shares per megabyte of configured virtual machine memory
Low       500 shares per virtual CPU     5 shares per megabyte of configured virtual machine memory

For example, an SMP virtual machine with two virtual CPUs and 1GB RAM with CPU and memory shares set to Normal has 2x1000=2000 shares of CPU and 10x1024=10240 shares of memory.

NOTE Virtual machines with more than one virtual CPU are called SMP (symmetric multiprocessing) virtual machines. ESXi supports up to 128 virtual CPUs per virtual machine.

The relative priority represented by each share changes when a new virtual machine is powered on. This affects all virtual machines in the same resource pool. All of the virtual machines have the same number of virtual CPUs. Consider the following examples.
- Two CPU-bound virtual machines run on a host with 8GHz of aggregate CPU capacity. Their CPU shares are set to Normal and get 4GHz each.
- A third CPU-bound virtual machine is powered on. Its CPU shares value is set to High, which means it should have twice as many shares as the machines set to Normal. The new virtual machine receives 4GHz and the two other machines get only 2GHz each. The same result occurs if the user specifies a custom share value of 2000 for the third virtual machine.
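The share arithmetic in this example can be worked through in a few lines of Python. This is only a sketch of the proportional-share rule described above; the helper function is illustrative and is not part of any VMware interface.

```python
# Sketch of the proportional-share rule: competing sibling VMs split host CPU
# capacity in proportion to their share values (helper name is hypothetical).

def cpu_entitlements(host_mhz, shares_by_vm):
    """Return each VM's CPU entitlement in MHz when all VMs are CPU-bound."""
    total_shares = sum(shares_by_vm.values())
    return {vm: host_mhz * s / total_shares for vm, s in shares_by_vm.items()}

# Two Normal (1000-share) virtual machines on an 8GHz host get 4GHz each.
print(cpu_entitlements(8000, {"VM1": 1000, "VM2": 1000}))

# Adding a third VM set to High (2000 shares): it gets 4GHz, the others 2GHz each.
print(cpu_entitlements(8000, {"VM1": 1000, "VM2": 1000, "VM3": 2000}))
```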
Resource Allocation Reservation
A reservation specifies the guaranteed minimum allocation for a virtual machine.

vCenter Server or ESXi allows you to power on a virtual machine only if there are enough unreserved resources to satisfy the reservation of the virtual machine. The server guarantees that amount even when the physical server is heavily loaded. The reservation is expressed in concrete units (megahertz or megabytes).
For example, assume you have 2GHz available and specify a reservation of 1GHz for VM1 and 1GHz for
VM2. Now each virtual machine is guaranteed to get 1GHz if it needs it. However, if VM1 is using only
500MHz, VM2 can use 1.5GHz.
Reservation defaults to 0. You can specify a reservation if you need to guarantee that the minimum required
amounts of CPU or memory are always available for the virtual machine.
Resource Allocation Limit
Limit specifies an upper bound for CPU, memory, or storage I/O resources that can be allocated to a virtual machine.

A server can allocate more than the reservation to a virtual machine, but never allocates more than the limit, even if there are unused resources on the system. The limit is expressed in concrete units (megahertz, megabytes, or I/O operations per second).

CPU, memory, and storage I/O resource limits default to unlimited. When the memory limit is unlimited, the amount of memory configured for the virtual machine when it was created becomes its effective limit.
In most cases, it is not necessary to specify a limit. There are benefits and drawbacks:

- Benefits: Assigning a limit is useful if you start with a small number of virtual machines and want to manage user expectations. Performance deteriorates as you add more virtual machines. You can simulate having fewer resources available by specifying a limit.
- Drawbacks: You might waste idle resources if you specify a limit. The system does not allow virtual machines to use more resources than the limit, even when the system is underutilized and idle resources are available. Specify the limit only if you have good reasons for doing so.
Resource Allocation Settings Suggestions

Select resource allocation settings (reservation, limit and shares) that are appropriate for your ESXi environment.

The following guidelines can help you achieve better performance for your virtual machines.

- Use Reservation to specify the minimum acceptable amount of CPU or memory, not the amount you want to have available. The amount of concrete resources represented by a reservation does not change when you change the environment, such as by adding or removing virtual machines. The host assigns additional resources as available based on the limit for your virtual machine, the number of shares and estimated demand.
- When specifying the reservations for virtual machines, do not commit all resources (plan to leave at least 10% unreserved). As you move closer to fully reserving all capacity in the system, it becomes increasingly difficult to make changes to reservations and to the resource pool hierarchy without violating admission control. In a DRS-enabled cluster, reservations that fully commit the capacity of the cluster or of individual hosts in the cluster can prevent DRS from migrating virtual machines between hosts.
- If you expect frequent changes to the total available resources, use Shares to allocate resources fairly across virtual machines. If you use Shares, and you upgrade the host, for example, each virtual machine stays at the same priority (keeps the same number of shares) even though each share represents a larger amount of memory, CPU, or storage I/O resources.
Edit Resource Settings

Use the Edit Resource Settings dialog box to change allocations for memory and CPU resources.

Procedure

1 Browse to the virtual machine in the vSphere Web Client navigator.

2 Right-click and select Edit Resource Settings.

3 Edit the CPU Resources.

  Option        Description
  Shares        CPU shares for this resource pool with respect to the parent's total. Sibling resource pools share resources according to their relative share values bounded by the reservation and limit. Select Low, Normal, or High, which specify share values respectively in a 1:2:4 ratio. Select Custom to give each virtual machine a specific number of shares, which expresses a proportional weight.
  Reservation   Guaranteed CPU allocation for this resource pool.
  Limit         Upper limit for this resource pool's CPU allocation. Select Unlimited to specify no upper limit.
4 Edit the Memory Resources.

  Option        Description
  Shares        Memory shares for this resource pool with respect to the parent's total. Sibling resource pools share resources according to their relative share values bounded by the reservation and limit. Select Low, Normal, or High, which specify share values respectively in a 1:2:4 ratio. Select Custom to give each virtual machine a specific number of shares, which expresses a proportional weight.
  Reservation   Guaranteed memory allocation for this resource pool.
  Limit         Upper limit for this resource pool's memory allocation. Select Unlimited to specify no upper limit.

5 Click OK.
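The same allocations can be scripted. The following is a hedged sketch using pyVmomi, the vSphere Python SDK; the connection details, virtual machine name, and the chosen reservation, limit, and shares values are placeholders rather than values taken from this procedure.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter Server (placeholder credentials; certificate verification
# is disabled here only to keep the sketch short).
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find a virtual machine by name with a simple container-view search.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "VM-QA")
view.DestroyView()

# Build a reconfiguration spec: shares, reservation, and limit for CPU and memory.
spec = vim.vm.ConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(
    reservation=500,                                      # MHz guaranteed
    limit=-1,                                             # -1 means unlimited
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.high))
spec.memoryAllocation = vim.ResourceAllocationInfo(
    reservation=1024,                                     # MB guaranteed
    limit=-1,
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal))

vm.ReconfigVM_Task(spec=spec)                             # returns a Task object
Disconnect(si)
```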
Changing Resource Allocation Settings—Example

The following example illustrates how you can change resource allocation settings to improve virtual machine performance.

Assume that on an ESXi host, you have created two new virtual machines—one each for your QA (VM-QA) and Marketing (VM-Marketing) departments.

Figure 2-1. Single Host with Two Virtual Machines (an ESXi host running VM-QA and VM-Marketing)

In the following example, assume that VM-QA is memory intensive and accordingly you want to change the resource allocation settings for the two virtual machines to:

- Specify that, when system memory is overcommitted, VM-QA can use twice as much CPU and memory resources as the Marketing virtual machine. Set the CPU shares and memory shares for VM-QA to High and for VM-Marketing set them to Normal.
- Ensure that the Marketing virtual machine has a certain amount of guaranteed CPU resources. You can do so using a reservation setting.
Procedure

1 Browse to the virtual machines in the vSphere Web Client navigator.

2 Right-click VM-QA, the virtual machine for which you want to change shares, and select Edit Settings.

3 Under Virtual Hardware, expand CPU and select High from the Shares drop-down menu.

4 Under Virtual Hardware, expand Memory and select High from the Shares drop-down menu.

5 Click OK.

6 Right-click the marketing virtual machine (VM-Marketing) and select Edit Settings.

7 Under Virtual Hardware, expand CPU and change the Reservation value to the desired number.

8 Click OK.
If you select the cluster's Resource Reservation tab and click CPU, you should see that shares for VM-QA are twice that of the other virtual machine. Also, because the virtual machines have not been powered on, the Reservation Used fields have not changed.
Admission Control
When you power on a virtual machine, the system checks the amount of CPU and memory resources that
have not yet been reserved. Based on the available unreserved resources, the system determines whether it
can guarantee the reservation for which the virtual machine is configured (if any). This process is called admission control.

If enough unreserved CPU and memory are available, or if there is no reservation, the virtual machine is powered on. Otherwise, an Insufficient Resources warning appears.

NOTE In addition to the user-specified memory reservation, for each virtual machine there is also an amount of overhead memory. This extra memory commitment is included in the admission control calculation.

When the vSphere DPM feature is enabled, hosts might be placed in standby mode (that is, powered off) to reduce power consumption. The unreserved resources provided by these hosts are considered available for admission control. If a virtual machine cannot be powered on without these resources, a recommendation to power on sufficient standby hosts is made.
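The check itself is simple bookkeeping. The following minimal Python sketch (a hypothetical helper, not an ESXi interface) restates the rule: a reservation, plus overhead memory, must fit in the currently unreserved capacity.

```python
# Hypothetical helper restating the admission control rule described above.

def can_power_on(unreserved_cpu_mhz, unreserved_mem_mb,
                 cpu_reservation_mhz, mem_reservation_mb, overhead_mem_mb):
    """True if the host's unreserved capacity covers the VM's reservations."""
    cpu_ok = unreserved_cpu_mhz >= cpu_reservation_mhz
    mem_ok = unreserved_mem_mb >= mem_reservation_mb + overhead_mem_mb
    return cpu_ok and mem_ok

# 1.5GHz and 2GB unreserved; the VM reserves 1GHz and 1GB with ~90MB overhead.
print(can_power_on(1500, 2048, 1000, 1024, 90))   # True, so power-on is allowed
```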
3 CPU Virtualization Basics

CPU virtualization emphasizes performance and runs directly on the processor whenever possible. The
underlying physical resources are used whenever possible and the virtualization layer runs instructions
only as needed to make virtual machines operate as if they were running directly on a physical machine.
CPU virtualization is not the same thing as emulation. ESXi does not use emulation to run virtual CPUs.
With emulation, all operations are run in software by an emulator. A software emulator allows programs to
run on a computer system other than the one for which they were originally written. The emulator does this by emulating, or reproducing, the original computer's behavior by accepting the same data or inputs and achieving the same results. Emulation provides portability and runs software designed for one platform across several platforms.

When CPU resources are overcommitted, the ESXi host time-slices the physical processors across all virtual machines so each virtual machine runs as if it has its specified number of virtual processors. When an ESXi host runs multiple virtual machines, it allocates to each virtual machine a share of the physical resources. With the default resource allocation settings, all virtual machines associated with the same host receive an equal share of CPU per virtual CPU. This means that a single-processor virtual machine is assigned only half of the resources of a dual-processor virtual machine.
This chapter includes the following topics:

- "Software-Based CPU Virtualization," on page 17
- "Hardware-Assisted CPU Virtualization," on page 18
- "Virtualization and Processor-Specific Behavior," on page 18
- "Performance Implications of CPU Virtualization," on page 18
Software-Based CPU Virtualization
With software-based CPU virtualization, the guest application code runs directly on the processor, while the
guest privileged code is translated and the translated code runs on the processor.
The translated code is slightly larger and usually runs more slowly than the native version. As a result,
guest applications, which have a small privileged code component, run with speeds very close to native.
Applications with a significant privileged code component, such as system calls, traps, or page table updates, can run slower in the virtualized environment.
Hardware-Assisted CPU Virtualization
Certain processors provide hardware assistance for CPU virtualization.
When using this assistance, the guest can use a separate mode of execution called guest mode. The guest
code, whether application code or privileged code, runs in the guest mode. On certain events, the processor
exits out of guest mode and enters root mode. The hypervisor executes in the root mode, determines the
reason for the exit, takes any required actions, and restarts the guest in guest mode.
When you use hardware assistance for virtualization, there is no need to translate the code. As a result,
system calls or trap-intensive workloads run very close to native speed. Some workloads, such as those
involving updates to page tables, lead to a large number of exits from guest mode to root mode. Depending
on the number of such exits and total time spent in exits, hardware-assisted CPU virtualization can speed up execution significantly.

Virtualization and Processor-Specific Behavior

Although VMware software virtualizes the CPU, the virtual machine detects the specific model of the processor on which it is running.

Processor models might differ in the CPU features they offer, and applications running in the virtual machine can make use of these features. Therefore, it is not possible to use vMotion® to migrate virtual machines between systems running on processors with different feature sets. You can avoid this restriction,
in some cases, by using Enhanced vMotion Compatibility (EVC) with processors that support this feature.
See the vCenter Server and Host Management documentation for more information.
Performance Implications of CPU Virtualization
CPU virtualization adds varying amounts of overhead depending on the workload and the type of
virtualization used.
An application is CPU-bound if it spends most of its time executing instructions rather than waiting for
external events such as user interaction, device input, or data retrieval. For such applications, the CPU
virtualization overhead includes the additional instructions that must be executed. This overhead takes CPU
processing time that the application itself can use. CPU virtualization overhead usually translates into a
reduction in overall performance.
For applications that are not CPU-bound, CPU virtualization likely translates into an increase in CPU use. If
spare CPU capacity is available to absorb the overhead, it can still deliver comparable performance in terms
of overall throughput.
ESXi supports up to 128 virtual processors (CPUs) for each virtual machine.
NOTE Deploy single-threaded applications on uniprocessor virtual machines, instead of on SMP virtual machines that have multiple CPUs, for the best performance and resource use.

Single-threaded applications can take advantage only of a single CPU. Deploying such applications in dual-processor virtual machines does not speed up the application. Instead, it causes the second virtual CPU to use physical resources that other virtual machines could otherwise use.
4 Administering CPU Resources

You can configure virtual machines with one or more virtual processors, each with its own set of registers
and control structures.
When a virtual machine is scheduled, its virtual processors are scheduled to run on physical processors. The
VMkernel Resource Manager schedules the virtual CPUs on physical CPUs, thereby managing the virtual
machine’s access to physical CPU resources. ESXi supports virtual machines with up to 128 virtual CPUs.
This chapter includes the following topics:

- "View Processor Information," on page 19
- "Specifying CPU Configuration," on page 19
- "Multicore Processors," on page 20
- "Hyperthreading," on page 20
- "Using CPU Affinity," on page 22
- "Host Power Management Policies," on page 23
View Processor Information
You can access information about current CPU configuration in the vSphere Web Client.

Procedure

1 Browse to the host in the vSphere Web Client navigator.

2 Click and expand Hardware.

3 Select Processors to view the information about the number and type of physical processors and the number of logical processors.

NOTE In hyperthreaded systems, each hardware thread is a logical processor. For example, a dual-core processor with hyperthreading enabled has two cores and four logical processors.
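If you prefer to read the same processor details programmatically, the following hedged pyVmomi sketch prints them from a host object; `host` is assumed to be a connected vim.HostSystem (see the earlier pyVmomi connection sketch).

```python
# Hedged pyVmomi sketch: read processor details from a host. `host` is assumed
# to be a vim.HostSystem obtained from an existing connection.
cpu_info = host.hardware.cpuInfo

print("CPU packages (sockets):", cpu_info.numCpuPackages)
print("CPU cores:             ", cpu_info.numCpuCores)
print("Logical processors:    ", cpu_info.numCpuThreads)  # 2x cores with hyperthreading

# Per-package details, such as the processor model string.
for pkg in host.hardware.cpuPkg:
    print("Package", pkg.index, ":", pkg.description)
```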
Specifying CPU Configuration

You can specify CPU configuration to improve resource management. However, if you do not customize CPU configuration, the ESXi host uses defaults that work well in most situations.

You can specify CPU configuration in the following ways:

- Use the attributes and special features available through the vSphere Web Client. The vSphere Web Client allows you to connect to the ESXi host or a vCenter Server system.
- Use advanced settings under certain circumstances.
- Use the vSphere SDK for scripted CPU allocation (see the sketch after this list).
- Use hyperthreading.
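As a hedged illustration of the scripted route, the following pyVmomi sketch changes a powered-off virtual machine's virtual CPU topology; `vm` is assumed to be a vim.VirtualMachine object and the values are placeholders.

```python
from pyVmomi import vim

# Hedged sketch: reconfigure the virtual CPU topology of a powered-off VM.
spec = vim.vm.ConfigSpec()
spec.numCPUs = 4             # total virtual CPUs presented to the guest
spec.numCoresPerSocket = 2   # arranged as 2 virtual sockets x 2 cores each
vm.ReconfigVM_Task(spec=spec)
```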
Multicore Processors
Multicore processors provide many advantages for a host performing multitasking of virtual machines.
Intel and AMD have developed processors which combine two or more processor cores into a single
integrated circuit (often called a package or socket). VMware uses the term socket to describe a single
package which can have one or more processor cores with one or more logical processors in each core.
A dual-core processor, for example, provides almost double the performance of a single-core processor, by
allowing two virtual CPUs to run at the same time. Cores within the same processor are typically configured
with a shared last-level cache used by all cores, potentially reducing the need to access slower main memory.
A shared memory bus that connects a physical processor to main memory can limit performance of its
logical processors when the virtual machines running on them are running memory-intensive workloads
which compete for the same memory bus resources.
Each logical processor of each processor core is used independently by the ESXi CPU scheduler to run
virtual machines, providing capabilities similar to SMP systems. For example, a two-way virtual machine
can have its virtual processors running on logical processors that belong to the same core, or on logical
processors on different physical cores.
The ESXi CPU scheduler can detect the processor topology and the relationships between processor cores
and the logical processors on them. It uses this information to schedule virtual machines and optimize
performance.
The ESXi CPU scheduler can interpret processor topology, including the relationship between sockets, cores, and logical processors. The scheduler uses topology information to optimize the placement of virtual CPUs onto different sockets. This optimization can maximize overall cache usage and improve cache affinity by minimizing virtual CPU migrations.
Hyperthreading
Hyperthreading technology allows a single physical processor core to behave like two logical processors.
The processor can run two independent applications at the same time. To avoid confusion between logical
and physical processors, Intel refers to a physical processor as a socket, and the discussion in this chapter
uses that terminology as well.
Intel Corporation developed hyperthreading technology to enhance the performance of its Pentium IV and
Xeon processor lines. Hyperthreading technology allows a single processor core to execute two independent
threads simultaneously.
While hyperthreading does not double the performance of a system, it can increase performance by better
utilizing idle resources leading to greater throughput for certain important workload types. An application
running on one logical processor of a busy core can expect slightly more than half of the throughput that it
obtains while running alone on a non-hyperthreaded processor. Hyperthreading performance
improvements are highly application-dependent, and some applications might see performance degradation
with hyperthreading because many processor resources (such as the cache) are shared between logical
processors.
NOTE On processors with Intel Hyper-Threading technology, each core can have two logical processors which share most of the core's resources, such as memory caches and functional units. Such logical processors are usually called threads.
Many processors do not support hyperthreading and as a result have only one thread per core. For such processors, the number of cores also matches the number of logical processors. The following processors support hyperthreading and have two threads per core.

- Processors based on the Intel Xeon 5500 processor microarchitecture.
- Intel Pentium 4 (HT-enabled)
- Intel Pentium EE 840 (HT-enabled)
Hyperthreading and ESXi Hosts
A host that is enabled for hyperthreading should behave similarly to a host without hyperthreading. You
might need to consider certain factors if you enable hyperthreading, however.
ESXi hosts manage processor time intelligently to guarantee that load is spread smoothly across processor
cores in the system. Logical processors on the same core have consecutive CPU numbers, so that CPUs 0 and 1 are on the first core together, CPUs 2 and 3 are on the second core, and so on. Virtual machines are preferentially scheduled on two different cores rather than on two logical processors on the same core.
If there is no work for a logical processor, it is put into a halted state, which frees its execution resources and
allows the virtual machine running on the other logical processor on the same core to use the full execution
resources of the core. The VMware scheduler properly accounts for this halt time, and charges a virtual
machine running with the full resources of a core more than a virtual machine running on a half core. This
approach to processor management ensures that the server does not violate any of the standard ESXi
resource allocation rules.
Consider your resource management needs before you enable CPU affinity on hosts using hyperthreading. For example, if you bind a high priority virtual machine to CPU 0 and another high priority virtual machine to CPU 1, the two virtual machines have to share the same physical core. In this case, it can be impossible to meet the resource demands of these virtual machines. Ensure that any custom affinity settings make sense for a hyperthreaded system.
Enable Hyperthreading
To enable hyperthreading, you must first enable it in your system's BIOS settings and then turn it on in the vSphere Web Client. Hyperthreading is enabled by default.

Consult your system documentation to determine whether your CPU supports hyperthreading.

Procedure

1 Ensure that your system supports hyperthreading technology.

2 Enable hyperthreading in the system BIOS.

  Some manufacturers label this option Logical Processor, while others call it Enable Hyperthreading.

3 Ensure that hyperthreading is enabled for the ESXi host.

  a Browse to the host in the vSphere Web Client navigator.

  b Click .

  c Under System, click Advanced System Settings and select VMkernel.Boot.hyperthreading.

    You must restart the host for the setting to take effect. Hyperthreading is enabled if the value is true.

4 Under Hardware, click Processors to view the number of Logical processors.

Hyperthreading is enabled.
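The VMkernel.Boot.hyperthreading value named in step 3 can also be inspected and set through the host's advanced options. The following pyVmomi sketch is a hedged illustration; `host` is assumed to be a connected vim.HostSystem, and a host restart is still required for a change to take effect.

```python
from pyVmomi import vim

opt_mgr = host.configManager.advancedOption

# Read the current value of the hyperthreading boot option.
current = opt_mgr.QueryOptions("VMkernel.Boot.hyperthreading")[0]
print("VMkernel.Boot.hyperthreading =", current.value)

# Enable it if it is off (takes effect only after the host is restarted).
if not current.value:
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="VMkernel.Boot.hyperthreading", value=True)])
```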
Using CPU Affinity

By specifying a CPU affinity setting for each virtual machine, you can restrict the assignment of virtual machines to a subset of the available processors in multiprocessor systems. By using this feature, you can assign each virtual machine to processors in the specified affinity set.

CPU affinity specifies virtual machine-to-processor placement constraints and is different from the relationship created by a VM-VM or VM-Host affinity rule, which specifies virtual machine-to-virtual machine host placement constraints.

In this context, the term CPU refers to a logical processor on a hyperthreaded system and refers to a core on a non-hyperthreaded system.

The CPU affinity setting for a virtual machine applies to all of the virtual CPUs associated with the virtual machine and to all other threads (also known as worlds) associated with the virtual machine. Such virtual machine threads perform processing required for emulating mouse, keyboard, screen, CD-ROM, and miscellaneous legacy devices.

In some cases, such as display-intensive workloads, significant communication might occur between the virtual CPUs and these other virtual machine threads. Performance might degrade if the virtual machine's affinity setting prevents these additional threads from being scheduled concurrently with the virtual machine's virtual CPUs. Examples of this include a uniprocessor virtual machine with affinity to a single CPU or a two-way SMP virtual machine with affinity to only two CPUs.

For the best performance, when you use manual affinity settings, VMware recommends that you include at least one additional physical CPU in the affinity setting to allow at least one of the virtual machine's threads to be scheduled at the same time as its virtual CPUs. Examples of this include a uniprocessor virtual machine with affinity to at least two CPUs or a two-way SMP virtual machine with affinity to at least three CPUs.
Assign a Virtual Machine to a Specific Processor

Using CPU affinity, you can assign a virtual machine to a specific processor. This allows you to restrict the assignment of virtual machines to a specific available processor in multiprocessor systems.

Procedure

1 Find the virtual machine in the vSphere Web Client inventory.

  a To find a virtual machine, select a data center, folder, cluster, resource pool, or host.

  b Click the Related Objects tab and click Virtual Machines.

2 Right-click the virtual machine and click Edit Settings.

3 Under Virtual Hardware, expand CPU.

4 Under Scheduling Affinity, select physical processor affinity for the virtual machine.

  Use '-' for ranges and ',' to separate values.

  For example, "0, 2, 4-7" would indicate processors 0, 2, 4, 5, 6 and 7.

5 Select the processors where you want the virtual machine to run and click OK.
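The same scheduling affinity can be applied through the API. The following hedged pyVmomi sketch uses the "0, 2, 4-7" example above; `vm` is assumed to be a vim.VirtualMachine object.

```python
from pyVmomi import vim

# Hedged sketch: pin the VM's scheduling affinity to processors 0, 2, 4, 5, 6, 7.
spec = vim.vm.ConfigSpec()
spec.cpuAffinity = vim.vm.AffinityInfo(affinitySet=[0, 2, 4, 5, 6, 7])
vm.ReconfigVM_Task(spec=spec)

# Reconfiguring later with an empty affinitySet removes the constraint.
```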
Potential Issues with CPU Affinity

Before you use CPU affinity, you might need to consider certain issues.

Potential issues with CPU affinity include:

- For multiprocessor systems, ESXi systems perform automatic load balancing. Avoid manual specification of virtual machine affinity to improve the scheduler's ability to balance load across processors.
- Affinity can interfere with the ESXi host's ability to meet the reservation and shares specified for a virtual machine.
- Because CPU admission control does not consider affinity, a virtual machine with manual affinity settings might not always receive its full reservation. Virtual machines that do not have manual affinity settings are not adversely affected by virtual machines with manual affinity settings.
- When you move a virtual machine from one host to another, affinity might no longer apply because the new host might have a different number of processors.
- The NUMA scheduler might not be able to manage a virtual machine that is already assigned to certain processors using affinity.
- Affinity can affect the host's ability to schedule virtual machines on multicore or hyperthreaded processors to take full advantage of resources shared on such processors.
Host Power Management Policies
You can apply several power management features in ESXi that the host hardware provides to adjust the
balance between performance and power. You can control how ESXi uses these features by selecting a power
management policy.
Selecting a high-performance policy provides more absolute performance, but at lower efficiency and performance per watt. Low-power policies provide less absolute performance, but at higher efficiency.

You can select a policy for the host that you manage by using the VMware Host Client. If you do not select a policy, ESXi uses Balanced by default.

Table 4-1. CPU Power Management Policies

Power Management Policy   Description
High Performance          Do not use any power management features.
Balanced (Default)        Reduce energy consumption with minimal performance compromise.
Low Power                 Reduce energy consumption at the risk of lower performance.
Custom                    User-defined power management policy. Advanced configuration becomes available.

When a CPU runs at lower frequency, it can also run at lower voltage, which saves power. This type of power management is typically called Dynamic Voltage and Frequency Scaling (DVFS). ESXi attempts to adjust CPU frequencies so that virtual machine performance is not affected.
When a CPU is idle, ESXi can apply deep halt states, also known as C-states. The deeper the C-state, the less
power the CPU uses, but it also takes longer for the CPU to start running again. When a CPU becomes idle,
ESXi applies an algorithm to predict the idle state duration and chooses an appropriate C-state to enter. In
power management policies that do not use deep C-states, ESXi uses only the shallowest halt state for idle
CPUs, C1.
Select a CPU Power Management Policy
You set the CPU power management policy for a host using the vSphere Web Client.
Prerequisites
Verify that the BIOS settings on the host system allow the operating system to control power management (for example, OS Controlled).

NOTE Some systems have Processor Clocking Control (PCC) technology, which allows ESXi to manage power on the host system even if the host BIOS settings do not specify OS Controlled mode. With this technology, ESXi does not manage P-states directly. Instead, the host cooperates with the BIOS to determine the processor clock rate. HP systems that support this technology have a BIOS setting called Cooperative Power Management that is enabled by default.

If the host hardware does not allow the operating system to manage power, only the Not Supported policy is available. (On some systems, only the High Performance policy is available.)

Procedure

1 Browse to the host in the vSphere Web Client navigator.

2 Click .

3 Under Hardware, select Power Management and click the Edit button.

4 Select a power management policy for the host and click OK.

The policy selection is saved in the host configuration and can be used again at boot time. You can change it at any time, and it does not require a server reboot.
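The policy can also be selected programmatically. The following hedged pyVmomi sketch looks up the policies available on the host rather than assuming fixed keys; `host` is assumed to be a connected vim.HostSystem, and the short names shown in the comments are typical values rather than guaranteed ones.

```python
# Hedged sketch: list the host's power policies and activate one of them.
power_system = host.configManager.powerSystem
available = host.config.powerSystemCapability.availablePolicy

for policy in available:
    # shortName is typically "static", "dynamic", "low", or "custom"
    print(policy.key, policy.shortName, "-", policy.name)

# Pick the Balanced policy (usually shortName "dynamic") and apply it.
balanced = next(p for p in available if p.shortName == "dynamic")
power_system.ConfigurePowerPolicy(key=balanced.key)
```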
Configure Custom Policy Parameters for Host Power Management

When you use the Custom policy for host power management, ESXi bases its power management policy on the values of several advanced configuration parameters.

Prerequisites

Select Custom for the power management policy, as described in "Select a CPU Power Management Policy," on page 24.

Procedure

1 Browse to the host in the vSphere Web Client navigator.

2 Click .

3 Under System, select Advanced System Settings.

4 In the right pane, you can edit the power management parameters that affect the Custom policy.

  Power management parameters that affect the Custom policy have descriptions that begin with In Custom policy. All other power parameters affect all power management policies.

5 Select the parameter and click the Edit button.

  NOTE The default values of power management parameters match the Balanced policy.

  Parameter                    Description
  Power.UsePStates             Use ACPI P-states to save power when the processor is busy.
  Power.MaxCpuLoad             Use P-states to save power on a CPU only when the CPU is busy for less than the given percentage of real time.
  Power.MinFreqPct             Do not use any P-states slower than the given percentage of full CPU speed.
  Power.UseStallCtr            Use a deeper P-state when the processor is frequently stalled waiting for events such as cache misses.
  Power.TimerHz                Controls how many times per second ESXi reevaluates which P-state each CPU should be in.
  Power.UseCStates             Use deep ACPI C-states (C2 or below) when the processor is idle.
  Power.CStateMaxLatency       Do not use C-states whose latency is greater than this value.
  Power.CStateResidencyCoef    When a CPU becomes idle, choose the deepest C-state whose latency multiplied by this value is less than the host's prediction of how long the CPU will remain idle. Larger values make ESXi more conservative about using deep C-states, while smaller values are more aggressive.
  Power.CStatePredictionCoef   A parameter in the ESXi algorithm for predicting how long a CPU that becomes idle will remain idle. Changing this value is not recommended.
  Power.PerfBias               Performance Energy Bias Hint (Intel-only). Sets an MSR on Intel processors to an Intel-recommended value. Intel recommends 0 for high performance, 6 for balanced, and 15 for low power. Other values are undefined.

6 Click OK.
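Because these parameters are ordinary advanced options, they can also be edited from a script. The following hedged pyVmomi sketch sets two of them to illustrative values; `host` is assumed to be a connected vim.HostSystem, and the values shown are examples, not recommendations.

```python
from pyVmomi import vim

opt_mgr = host.configManager.advancedOption

# Set two Custom-policy parameters (illustrative values only; each value must
# match the numeric type the option expects).
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Power.MaxCpuLoad", value=60),
    vim.option.OptionValue(key="Power.CStateMaxLatency", value=500),
])

# Read the values back to confirm the change.
for key in ("Power.MaxCpuLoad", "Power.CStateMaxLatency"):
    print(key, "=", opt_mgr.QueryOptions(key)[0].value)
```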
5 Memory Virtualization Basics
Before you manage memory resources, you should understand how they are being virtualized and used by
ESXi.
The VMkernel manages all physical RAM on the host. The VMkernel dedicates part of this managed
physical RAM for its own use. The rest is available for use by virtual machines.
The virtual and physical memory space is divided into blocks called pages. When physical memory is full,
the data for virtual pages that are not present in physical memory are stored on disk. Depending on
processor architecture, pages are typically 4 KB or 2 MB. See "Advanced Memory Attributes," on page 116.

This chapter includes the following topics:

- "Virtual Machine Memory," on page 27
- "Memory Overcommitment," on page 28
- "Memory Sharing," on page 28
- "Types of Memory Virtualization," on page 29
Virtual Machine Memory
Each virtual machine consumes memory based on its configured size, plus additional overhead memory for virtualization.

The configured size is the amount of memory that is presented to the guest operating system. This is different from the amount of physical RAM that is allocated to the virtual machine. The latter depends on the resource settings (shares, reservation, limit) and the level of memory pressure on the host.

For example, consider a virtual machine with a configured size of 1GB. When the guest operating system boots, it detects that it is running on a dedicated machine with 1GB of physical memory. In some cases, the virtual machine might be allocated the full 1GB. In other cases, it might receive a smaller allocation. Regardless of the actual allocation, the guest operating system continues to behave as though it is running on a dedicated machine with 1GB of physical memory.
Shares        Specify the relative priority for a virtual machine if more than the reservation is available.

Reservation   Is a guaranteed lower bound on the amount of physical RAM that the host reserves for the virtual machine, even when memory is overcommitted. Set the reservation to a level that ensures the virtual machine has sufficient memory to run efficiently, without excessive paging.

              After a virtual machine consumes all of the memory within its reservation, it is allowed to retain that amount of memory and this memory is not reclaimed, even if the virtual machine becomes idle. Some guest operating systems (for example, Linux) might not access all of the configured memory immediately after booting. Until the virtual machine consumes all of the memory within its reservation, VMkernel can allocate any unused portion of its reservation to other virtual machines. However, after the guest's workload increases and the virtual machine consumes its full reservation, it is allowed to keep this memory.

Limit         Is an upper bound on the amount of physical RAM that the host can allocate to the virtual machine. The virtual machine's memory allocation is also implicitly limited by its configured size.
Memory Overcommitment
For each running virtual machine, the system reserves physical RAM for the virtual machine’s reservation (if
any) and for its virtualization overhead.
The total configured memory sizes of all virtual machines may exceed the amount of available physical memory on the host. However, that does not necessarily mean memory is overcommitted. Memory is overcommitted when the combined working memory footprint of all virtual machines exceeds that of the host memory size.

Because of the memory management techniques the ESXi host uses, your virtual machines can use more virtual RAM than there is physical RAM available on the host. For example, you can have a host with 2GB memory and run four virtual machines with 1GB memory each. In that case, the memory is overcommitted. For instance, if all four virtual machines are idle, the combined consumed memory may be well below 2GB. However, if all four virtual machines are actively consuming memory, then their memory footprint may exceed 2GB and the ESXi host will become overcommitted.

Overcommitment makes sense because, typically, some virtual machines are lightly loaded while others are more heavily loaded, and relative activity levels vary over time.

To improve memory utilization, the ESXi host transfers memory from idle virtual machines to virtual machines that need more memory. Use the Reservation or Shares parameter to preferentially allocate memory to important virtual machines. This memory remains available to other virtual machines if it is not in use. ESXi implements various mechanisms such as ballooning, memory sharing, memory compression and swapping to provide reasonable performance even if the host is not heavily memory overcommitted.

An ESXi host can run out of memory if virtual machines consume all reservable memory in a memory overcommitted environment. Although the powered on virtual machines are not affected, a new virtual machine might fail to power on due to lack of memory.

NOTE All virtual machine memory overhead is also considered reserved.

In addition, memory compression is enabled by default on ESXi hosts to improve virtual machine performance when memory is overcommitted as described in "Memory Compression," on page 41.
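The distinction between exceeding configured sizes and true overcommitment can be restated in a short Python sketch; the helper is hypothetical and the numbers mirror the 2GB-host example above.

```python
# Hypothetical helper: configured sizes exceeding host RAM is not by itself
# overcommitment; the combined working footprint is what matters.

def memory_state(host_mb, configured_mb, working_footprint_mb):
    return {
        "configured_exceeds_host": sum(configured_mb) > host_mb,
        "overcommitted": sum(working_footprint_mb) > host_mb,
    }

# Host with 2GB of memory and four 1GB virtual machines.
print(memory_state(2048, [1024] * 4, [300] * 4))   # idle VMs: not overcommitted
print(memory_state(2048, [1024] * 4, [900] * 4))   # active VMs: overcommitted
```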
Memory Sharing
Memory sharing is a proprietary ESXi technique that can help achieve greater memory density on a host.
Memory sharing relies on the observation that several virtual machines might be running instances of the
same guest operating system. These virtual machines might have the same applications or components
loaded, or contain common data. In such cases, a host uses a proprietary Transparent Page Sharing (TPS)
technique to eliminate redundant copies of memory pages. With memory sharing, a workload running on a virtual machine often consumes less memory than it might when running on physical machines. As a result, higher levels of overcommitment can be supported efficiently. The amount of memory saved by memory sharing depends on whether the workload consists of nearly identical machines which might free up more memory. A more diverse workload might result in a lower percentage of memory savings.
NOTE Due to security concerns, inter-virtual machine transparent page sharing is disabled by default and page sharing is being restricted to intra-virtual machine memory sharing. Page sharing does not occur across virtual machines and only occurs inside a virtual machine. See "Sharing Memory Across Virtual Machines," on page 40 for more information.
Types of Memory Virtualization
There are two types of memory virtualization: Software-based and hardware-assisted memory
virtualization.
Because of the extra level of memory mapping introduced by virtualization, ESXi can effectively manage
memory across all virtual machines. Some of the physical memory of a virtual machine might be mapped to
shared pages or to pages that are unmapped, or swapped out.
A host performs virtual memory management without the knowledge of the guest operating system and
without interfering with the guest operating system’s own memory management subsystem.
The VMM for each virtual machine maintains a mapping from the guest operating system's physical
memory pages to the physical memory pages on the underlying machine. (VMware refers to the underlying
host physical pages as “machine” pages and the guest operating system’s physical pages as “physical”
pages.)
Each virtual machine sees a contiguous, zero-based, addressable physical memory space. The underlying
machine memory on the server used by each virtual machine is not necessarily contiguous.
For both software-based and hardware-assisted memory virtualization, the guest virtual to guest physical
addresses are managed by the guest operating system. The hypervisor is only responsible for translating the
guest physical addresses to machine addresses. Software-based memory virtualization combines the guest's
virtual to machine addresses in software and saves them in the shadow page tables managed by the
hypervisor. Hardware-assisted memory virtualization utilizes the hardware facility to generate the
combined mappings with the guest's page tables and the nested page tables maintained by the hypervisor.
The diagram illustrates the ESXi implementation of memory virtualization.

Figure 5-1. ESXi Memory Mapping (two virtual machines, each with guest virtual memory mapped to guest physical memory and then to machine memory)

- The boxes represent pages, and the arrows show the different memory mappings.
- The arrows from guest virtual memory to guest physical memory show the mapping maintained by the page tables in the guest operating system. (The mapping from virtual memory to linear memory for x86-architecture processors is not shown.)
- The arrows from guest physical memory to machine memory show the mapping maintained by the VMM.
- The dashed arrows show the mapping from guest virtual memory to machine memory in the shadow page tables also maintained by the VMM. The underlying processor running the virtual machine uses the shadow page table mappings.
Software-Based Memory Virtualization
ESXi virtualizes guest physical memory by adding an extra level of address translation.
- The VMM maintains the combined virtual-to-machine page mappings in the shadow page tables. The shadow page tables are kept up to date with the guest operating system's virtual-to-physical mappings and physical-to-machine mappings maintained by the VMM.
- The VMM intercepts virtual machine instructions that manipulate guest operating system memory management structures so that the actual memory management unit (MMU) on the processor is not updated directly by the virtual machine.
- The shadow page tables are used directly by the processor's paging hardware.
- There is non-trivial computation overhead for maintaining the coherency of the shadow page tables. The overhead is more pronounced when the number of virtual CPUs increases.

This approach to address translation allows normal memory accesses in the virtual machine to execute without adding address translation overhead, after the shadow page tables are set up. Because the translation look-aside buffer (TLB) on the processor caches direct virtual-to-machine mappings read from the shadow page tables, no additional overhead is added by the VMM to access the memory. Note that software MMU has a higher overhead memory requirement than hardware MMU. Hence, in order to support software MMU, the maximum overhead supported for virtual machines in the VMkernel needs to be increased. In some cases, software memory virtualization may have some performance benefit over the hardware-assisted approach if the workload induces a huge amount of TLB misses.
Performance Considerations
The use of two sets of page tables has these performance implications.
- No overhead is incurred for regular guest memory accesses.
- Additional time is required to map memory within a virtual machine, which happens when:
  - The virtual machine operating system is setting up or updating virtual address to physical address mappings.
  - The virtual machine operating system is switching from one address space to another (context switch).
- Like CPU virtualization, memory virtualization overhead depends on workload.
Hardware-Assisted Memory Virtualization
Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, provide hardware support for memory virtualization by using two layers of page tables.
The first layer of page tables stores guest virtual-to-physical translations, while the second layer of page tables stores guest physical-to-machine translations. The TLB (translation look-aside buffer) is a cache of translations maintained by the processor's memory management unit (MMU) hardware. A TLB miss is a miss in this cache, and the hardware needs to go to memory (possibly many times) to find the required translation. For a TLB miss to a certain guest virtual address, the hardware looks at both page tables to translate the guest virtual address to a machine address. The first layer of page tables is maintained by the guest operating system. The VMM only maintains the second layer of page tables.
Performance Considerations
When you use hardware assistance, you eliminate the overhead for software memory virtualization. In particular, hardware assistance eliminates the overhead required to keep shadow page tables in synchronization with guest page tables. However, the TLB miss latency when using hardware assistance is significantly higher. By default the hypervisor uses large pages in hardware assisted modes to reduce the cost of TLB misses. As a result, whether a workload benefits from hardware assistance depends primarily on the overhead that memory virtualization causes when software memory virtualization is used. If a workload involves a small amount of page table activity (such as process creation, mapping the memory, or context switches), software virtualization does not cause significant overhead. Conversely, workloads with a large amount of page table activity are likely to benefit from hardware assistance.
The performance of hardware MMU has improved since it was first introduced, with extensive caching implemented in hardware. With software memory virtualization, a typical guest might perform context switches 100 to 1000 times per second, and each context switch traps into the VMM in software MMU mode. Hardware MMU approaches avoid this issue.
By default the hypervisor uses large pages in hardware assisted modes to reduce the cost of TLB misses. The best performance is achieved by using large pages in both guest virtual to guest physical and guest physical to machine address translations.
The option LPage.LPageAlwaysTryForNPT can change the policy for using large pages in guest physical to machine address translations. For more information, see "Advanced Memory Attributes," on page 116.
NOTE Binary translation only works with software-based memory virtualization.
Administering Memory Resources
Using the vSphere Web Client you can view information about and make changes to memory allocation settings. To administer your memory resources effectively, you must also be familiar with memory overhead, idle memory tax, and how ESXi hosts reclaim memory.
When administering memory resources, you can specify memory allocation. If you do not customize memory allocation, the ESXi host uses defaults that work well in most situations.
You can specify memory allocation in several ways.
- Use the attributes and special features available through the vSphere Web Client. The vSphere Web Client allows you to connect to the ESXi host or vCenter Server system.
- Use advanced settings.
- Use the vSphere SDK for scripted memory allocation.
This chapter includes the following topics:
- "Understanding Memory Overhead," on page 33
- "How ESXi Hosts Allocate Memory," on page 34
- "Memory Reclamation," on page 35
- "Using Swap Files," on page 36
- "Sharing Memory Across Virtual Machines," on page 40
- "Memory Compression," on page 41
- "Measuring and Differentiating Types of Memory Usage," on page 42
- "Memory Reliability," on page 43
- "About System Swap," on page 43
Understanding Memory Overhead
Virtualization of memory resources has some associated overhead.
ESXi virtual machines can incur two kinds of memory overhead.
- The additional time to access memory within a virtual machine.
- The extra space needed by the ESXi host for its own code and data structures, beyond the memory allocated to each virtual machine.
ESXi memory virtualization adds little time overhead to memory accesses. Because the processor's paging hardware uses page tables (shadow page tables for the software-based approach or two-level page tables for the hardware-assisted approach) directly, most memory accesses in the virtual machine can execute without address translation overhead.
The memory space overhead has two components.
- A fixed, system-wide overhead for the VMkernel.
- Additional overhead for each virtual machine.
Overhead memory includes space reserved for the virtual machine frame buffer and various virtualization data structures, such as shadow page tables. Overhead memory depends on the number of virtual CPUs and the configured memory for the guest operating system.
Overhead Memory on Virtual Machines
Virtual machines require a certain amount of available overhead memory to power on. You should be aware of the amount of this overhead.
The following table lists the amount of overhead memory a virtual machine requires to power on. After a virtual machine is running, the amount of overhead memory it uses might differ from the amount listed in the table. The sample values were collected with VMX swap enabled and hardware MMU enabled for the virtual machine. (VMX swap is enabled by default.)
NOTE The table provides a sample of overhead memory values and does not attempt to provide information about all possible configurations. You can configure a virtual machine to have up to 64 virtual CPUs, depending on the number of licensed CPUs on the host and the number of CPUs that the guest operating system supports.
Table 6-1. Sample Overhead Memory on Virtual Machines
Memory (MB)    1 VCPU     2 VCPUs    4 VCPUs    8 VCPUs
256            20.29      24.28      32.23      48.16
1024           25.90      29.91      37.86      53.82
4096           48.64      52.72      60.67      76.78
16384          139.62     143.98     151.93     168.60
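If you script capacity planning around these samples, a lookup such as the following Python sketch can be convenient. The values are the published samples from Table 6-1 only; actual overhead depends on the full virtual machine configuration and changes while the virtual machine runs.

```python
# Sample values from Table 6-1 only; real overhead varies with the full
# configuration and changes while the virtual machine runs.
SAMPLE_OVERHEAD_MB = {
    256:   {1: 20.29,  2: 24.28,  4: 32.23,  8: 48.16},
    1024:  {1: 25.90,  2: 29.91,  4: 37.86,  8: 53.82},
    4096:  {1: 48.64,  2: 52.72,  4: 60.67,  8: 76.78},
    16384: {1: 139.62, 2: 143.98, 4: 151.93, 8: 168.60},
}

def sample_overhead_mb(memory_mb, vcpus):
    """Return the sample power-on overhead for an exact table entry."""
    return SAMPLE_OVERHEAD_MB[memory_mb][vcpus]

print(sample_overhead_mb(4096, 4))  # 60.67
```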
How ESXi Hosts Allocate Memory
A host allocates the memory specified by the Limit parameter to each virtual machine, unless memory is overcommitted. ESXi never allocates more memory to a virtual machine than its specified physical memory size.
For example, a 1GB virtual machine might have the default limit (unlimited) or a user-specified limit (for example 2GB). In both cases, the ESXi host never allocates more than 1GB, the physical memory size that was specified for it.
When memory is overcommitted, each virtual machine is allocated an amount of memory somewhere between what is specified by Reservation and what is specified by Limit. The amount of memory granted to a virtual machine above its reservation usually varies with the current memory load.
A host determines allocations for each virtual machine based on the number of shares allocated to it and an
estimate of its recent working set size.
- Shares — ESXi hosts use a modified proportional-share memory allocation policy. Memory shares entitle a virtual machine to a fraction of available physical memory.
- Working set size — ESXi hosts estimate the working set for a virtual machine by monitoring memory activity over successive periods of virtual machine execution time. Estimates are smoothed over several time periods using techniques that respond rapidly to increases in working set size and more slowly to decreases in working set size.
  This approach ensures that a virtual machine from which idle memory is reclaimed can ramp up quickly to its full share-based allocation when it starts using its memory more actively.
  Memory activity is monitored to estimate the working set sizes for a default period of 60 seconds. To modify this default, adjust the Mem.SamplePeriod advanced setting. See "Set Advanced Host Attributes," on page 115.
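The following Python sketch is a simplified illustration of proportional-share allocation, not the ESXi algorithm: each virtual machine is guaranteed its reservation, and memory above the combined reservations is divided in proportion to shares, capped at each limit. It ignores working-set estimates and the idle memory tax described in the next section.

```python
# Simplified sketch, not the ESXi algorithm: reservations are guaranteed and
# the remaining host memory is divided in proportion to shares, capped at each
# virtual machine's limit. All values are in MB.
def allocate(vms, host_memory_mb):
    alloc = {vm["name"]: vm["reservation"] for vm in vms}
    spare = host_memory_mb - sum(alloc.values())
    total_shares = sum(vm["shares"] for vm in vms)
    for vm in vms:
        extra = spare * vm["shares"] / total_shares
        alloc[vm["name"]] = min(vm["reservation"] + extra, vm["limit"])
    return alloc

vms = [
    {"name": "vm1", "shares": 2000, "reservation": 1024, "limit": 4096},
    {"name": "vm2", "shares": 1000, "reservation": 512,  "limit": 4096},
]
print(allocate(vms, host_memory_mb=4096))
```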
Memory Tax for Idle Virtual Machines
If a virtual machine is not actively using all of its currently allocated memory, ESXi charges more for idle
memory than for memory that is in use. This is done to help prevent virtual machines from hoarding idle
memory.
The idle memory tax is applied in a progressive fashion. The effective tax rate increases as the ratio of idle memory to active memory for the virtual machine rises. (In earlier versions of ESXi that did not support hierarchical resource pools, all idle memory for a virtual machine was taxed equally.)
You can modify the idle memory tax rate with the Mem.IdleTax option. Use this option, together with the Mem.SamplePeriod advanced attribute, to control how the system determines target memory allocations for virtual machines. See "Set Advanced Host Attributes," on page 115.
NOTE In most cases, changes to Mem.IdleTax are not necessary or appropriate.
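If you prefer to script these advanced settings, the following sketch uses the open-source pyVmomi SDK; it is an assumption-laden example, and the host name, credentials, and values shown are placeholders rather than recommendations.

```python
# Hedged pyVmomi sketch; host name, credentials, and values are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

adv = host.configManager.advancedOption
# Read the current values before changing anything.
print(adv.QueryOptions("Mem.IdleTax"), adv.QueryOptions("Mem.SamplePeriod"))

# Apply new values; the types must match what the host expects for each option.
adv.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Mem.IdleTax", value=75),
    vim.option.OptionValue(key="Mem.SamplePeriod", value=60),
])
Disconnect(si)
```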
VMX Swap Files
Virtual machine executable (VMX) swap files allow the host to greatly reduce the amount of overhead memory reserved for the VMX process.
NOTE VMX swap files are not related to the swap to host swap cache feature or to regular host-level swap files.
ESXi reserves memory per virtual machine for a variety of purposes. Memory for the needs of certain components, such as the virtual machine monitor (VMM) and virtual devices, is fully reserved when a virtual machine is powered on. However, some of the overhead memory that is reserved for the VMX process can be swapped. The VMX swap feature reduces the VMX memory reservation significantly (for example, from about 50MB or more per virtual machine to about 10MB per virtual machine). This allows the remaining memory to be swapped out when host memory is overcommitted, reducing overhead memory reservation for each virtual machine.
The host creates VMX swap files automatically, provided there is sufficient free disk space at the time a virtual machine is powered on.
Memory Reclamation
ESXi hosts can reclaim memory from virtual machines.
A host allocates the amount of memory specied by a reservation directly to a virtual machine. Anything
beyond the reservation is allocated using the host’s physical resources or, when physical resources are not
available, handled using special techniques such as ballooning or swapping. Hosts can use two techniques
for dynamically expanding or contracting the amount of memory allocated to virtual machines.
- ESXi systems use a memory balloon driver (vmmemctl), loaded into the guest operating system running in a virtual machine. See "Memory Balloon Driver," on page 36.
- The ESXi system swaps out a page from a virtual machine to a server swap file without any involvement by the guest operating system. Each virtual machine has its own swap file.
Memory Balloon Driver
The memory balloon driver (vmmemctl) collaborates with the server to reclaim pages that are considered least
valuable by the guest operating system.
The driver uses a proprietary ballooning technique that provides predictable performance that closely
matches the behavior of a native system under similar memory constraints. This technique increases or
decreases memory pressure on the guest operating system, causing the guest to use its own native memory
management algorithms. When memory is tight, the guest operating system determines which pages to
reclaim and, if necessary, swaps them to its own virtual disk.
Figure 6‑1. Memory Ballooning in the Guest Operating System
NOTE You must configure the guest operating system with sufficient swap space. Some guest operating systems have additional limitations.
If necessary, you can limit the amount of memory vmmemctl reclaims by setting the sched.mem.maxmemctl parameter for a specific virtual machine. This option specifies the maximum amount of memory that can be reclaimed from a virtual machine in megabytes (MB). See "Set Advanced Virtual Machine Attributes," on page 118.
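For scripted environments, a hedged pyVmomi sketch such as the following can set the sched.mem.maxmemctl option mentioned above on a single virtual machine; the value is a placeholder and the vm object is assumed to have been located in the inventory already.

```python
# Hedged pyVmomi sketch; 'vm' is an already-located vim.VirtualMachine and the
# 1024 MB cap is a placeholder value.
from pyVmomi import vim

def cap_ballooning(vm, max_balloon_mb=1024):
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="sched.mem.maxmemctl",
                               value=str(max_balloon_mb))])
    return vm.ReconfigVM_Task(spec)  # returns a task that can be waited on
```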
Using Swap Files
You can specify the location of your guest swap file, reserve swap space when memory is overcommitted, and delete a swap file.
ESXi hosts use swapping to forcibly reclaim memory from a virtual machine when the vmmemctl driver is not available or is not responsive.
- It was never installed.
- It is explicitly disabled.
- It is not running (for example, while the guest operating system is booting).
- It is temporarily unable to reclaim memory quickly enough to satisfy current system demands.
- It is functioning properly, but the maximum balloon size is reached.
Standard demand-paging techniques swap pages back in when the virtual machine needs them.
Swap File Location
By default, the swap file is created in the same location as the virtual machine's configuration file, which can be on a VMFS datastore, a vSAN datastore, or a VVol datastore. On a vSAN datastore or a VVol datastore, the swap file is created as a separate vSAN or VVol object.
The ESXi host creates a swap file when a virtual machine is powered on. If this file cannot be created, the virtual machine cannot power on. Instead of accepting the default, you can also:
- Use per-virtual machine configuration options to change the datastore to another shared storage location.
- Use host-local swap, which allows you to specify a datastore stored locally on the host. This allows you to swap at a per-host level, saving space on the SAN. However, it can lead to a slight degradation in performance for vSphere vMotion because pages swapped to a local swap file on the source host must be transferred across the network to the destination host. Currently vSAN and VVol datastores cannot be specified for host-local swap.
Enable Host-Local Swap for a DRS Cluster
Host-local swap allows you to specify a datastore stored locally on the host as the swap file location. You can enable host-local swap for a DRS cluster.
Procedure
1. Browse to the cluster in the vSphere Web Client navigator.
2. Click .
3. Under , select General to view the swap file location and click Edit to change it.
4. Select the Datastore specified by host option and click OK.
5. Browse to one of the hosts in the cluster in the vSphere Web Client navigator.
8. Click Edit and select the local datastore to use and click OK.
9. Repeat Step 5 through Step 8 for each host in the cluster.
Host-local swap is now enabled for the DRS cluster.
Enable Host-Local Swap for a Standalone Host
Host-local swap allows you to specify a datastore stored locally on the host as the swap file location. You can enable host-local swap for a standalone host.
Procedure
1. Browse to the host in the vSphere Web Client navigator.
5. Select a local datastore from the list and click OK.
Host-local swap is now enabled for the standalone host.
Swap Space and Memory Overcommitment
You must reserve swap space for any unreserved virtual machine memory (the difference between the reservation and the configured memory size) on per-virtual machine swap files.
This swap reservation is required to ensure that the ESXi host is able to preserve virtual machine memory under any circumstances. In practice, only a small fraction of the host-level swap space might be used.
If you are overcommitting memory with ESXi, to support the intra-guest swapping induced by ballooning, ensure that your guest operating systems also have sufficient swap space. This guest-level swap space must be greater than or equal to the difference between the virtual machine's configured memory size and its Reservation.
CAUTION If memory is overcommitted, and the guest operating system is configured with insufficient swap space, the guest operating system in the virtual machine can fail.
To prevent virtual machine failure, increase the size of the swap space in your virtual machines.
- Windows guest operating systems — Windows operating systems refer to their swap space as paging files. Some Windows operating systems try to increase the size of paging files automatically, if there is sufficient free disk space.
  See your Microsoft Windows documentation or search the Windows help files for "paging files." Follow the instructions for changing the size of the virtual memory paging file.
- Linux guest operating system — Linux operating systems refer to their swap space as swap files. For information on increasing swap files, see the following Linux man pages:
  - mkswap — Sets up a Linux swap area.
  - swapon — Enables devices and files for paging and swapping.
Guest operating systems with a lot of memory and small virtual disks (for example, a virtual machine with 8GB RAM and a 2GB virtual disk) are more susceptible to having insufficient swap space.
NOTE Do not store swap files on thin-provisioned LUNs. Running a virtual machine with a swap file that is stored on a thin-provisioned LUN can cause swap file growth failure, which can lead to termination of the virtual machine.
When you create a large swap file (for example, larger than 100GB), the amount of time it takes for the virtual machine to power on can increase significantly. To avoid this, set a high reservation for large virtual machines.
You can also place swap files on less costly storage using host-local swap files.
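The sizing rule above reduces to simple arithmetic, as in this sketch:

```python
# Guest swap sizing from the rule above: at least configured memory minus the
# reservation.
def minimum_guest_swap_mb(configured_memory_mb, reservation_mb):
    return max(configured_memory_mb - reservation_mb, 0)

print(minimum_guest_swap_mb(8192, 2048))  # 6144
```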
Configure Virtual Machine Swapfile Properties for the Host
Configure a swapfile location for the host to determine the default location for virtual machine swapfiles in the vSphere Web Client.
By default, swapfiles for a virtual machine are located on a datastore in the folder that contains the other virtual machine files. However, you can configure your host to place virtual machine swapfiles on an alternative datastore.
You can use this option to place virtual machine swapfiles on lower-cost or higher-performance storage. You can also override this host-level setting for individual virtual machines.
Setting an alternative swapfile location might cause migrations with vMotion to complete more slowly. For best vMotion performance, store the virtual machine on a local datastore rather than in the same directory as the virtual machine swapfiles. If the virtual machine is stored on a local datastore, storing the swapfile with the other virtual machine files will not improve vMotion.
1. Browse to the host in the vSphere Web Client navigator.
2. Click .
3. Under Virtual Machines, click Swap location.
   The selected swapfile location is displayed. If configuration of the swapfile location is not supported on the selected host, the tab indicates that the feature is not supported.
   If the host is part of a cluster, and the cluster settings specify that swapfiles are to be stored in the same directory as the virtual machine, you cannot edit the swapfile location from the host under .
   To change the swapfile location for such a host, edit the cluster settings.
4. Click Edit.
5. Select where to store the swapfile.
   - Virtual machine directory: Stores the swapfile in the same directory as the virtual machine configuration file.
   - Use a specific datastore: Stores the swapfile in the location you specify. If the swapfile cannot be stored on the datastore that the host specifies, the swapfile is stored in the same folder as the virtual machine.
6. (Optional) If you select Use a specific datastore, select a datastore from the list.
7. Click OK.
The virtual machine swapfile is stored in the location you selected.
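A per-virtual machine override can also be applied programmatically. The following pyVmomi sketch is hedged: the accepted swapPlacement values ("inherit", "vmDirectory", "hostLocal") are assumptions to verify against the vSphere API reference for your release.

```python
# Hedged pyVmomi sketch; 'vm' is an already-located vim.VirtualMachine.
from pyVmomi import vim

def set_vm_swap_placement(vm, placement="hostLocal"):
    # Assumed values: "inherit", "vmDirectory", or "hostLocal".
    spec = vim.vm.ConfigSpec(swapPlacement=placement)
    return vm.ReconfigVM_Task(spec)
```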
Configure a Virtual Machine Swap File Location for a Cluster
By default, swap files for a virtual machine are on a datastore in the folder that contains the other virtual machine files. However, you can instead configure the hosts in your cluster to place virtual machine swap files on an alternative datastore of your choice.
You can configure an alternative swap file location to place virtual machine swap files on either lower-cost or higher-performance storage, depending on your needs.
Prerequisites
Before you configure a virtual machine swap file location for a cluster, you must configure the virtual machine swap file locations for the hosts in the cluster as described in "Configure Virtual Machine Swapfile Properties for the Host," on page 38.
Procedure
1. Browse to the cluster in the vSphere Web Client.
2. Click .
3. Select > General.
4. Next to swap file location, click Edit.
5. Select where to store the swap file.
   - Virtual machine directory: Stores the swap file in the same directory as the virtual machine configuration file.
   - Datastore specified by host: Stores the swap file in the location specified in the host configuration. If the swap file cannot be stored on the datastore that the host specifies, the swap file is stored in the same folder as the virtual machine.
6. Click OK.
Delete Swap Files
If a host fails, and that host had running virtual machines that were using swap les, those swap les
continue to exist and consume many gigabytes of disk space. You can delete the swap les to eliminate this
problem.
Procedure
1. Restart the virtual machine that was on the host that failed.
2. Stop the virtual machine.
The swap file for the virtual machine is deleted.
Sharing Memory Across Virtual Machines
Many ESXi workloads present opportunities for sharing memory across virtual machines (as well as within
a single virtual machine).
ESXi memory sharing runs as a background activity that scans for sharing opportunities over time. The
amount of memory saved varies over time. For a fairly constant workload, the amount generally increases
slowly until all sharing opportunities are exploited.
To determine the effectiveness of memory sharing for a given workload, try running the workload, and use resxtop or esxtop to observe the actual savings. Find the information in the PSHARE field of the interactive mode in the Memory page.
Use the Mem.ShareScanTime and Mem.ShareScanGHz advanced settings to control the rate at which the system scans memory to identify opportunities for sharing memory.
You can also configure sharing for individual virtual machines by setting the sched.mem.pshare.enable option.
Due to security concerns, inter-virtual machine transparent page sharing is disabled by default and page sharing is restricted to intra-virtual machine memory sharing. This means page sharing does not occur across virtual machines and only occurs inside a virtual machine. The concept of salting has been introduced to help address concerns system administrators may have over the security implications of transparent page sharing. Salting can be used to allow more granular management of the virtual machines participating in transparent page sharing than was previously possible. With the new salting settings, virtual machines can share pages only if the salt value and contents of the pages are identical. A new host config option Mem.ShareForceSalting can be configured to enable or disable salting.
See Chapter 15, "Advanced Attributes," on page 115 for information on how to set advanced options.
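The two options named above can also be set programmatically. The following pyVmomi sketch is illustrative only; the values are placeholders, and you should review the security guidance on transparent page sharing before changing them.

```python
# Hedged pyVmomi sketch; 'vm' is a vim.VirtualMachine and 'host' a
# vim.HostSystem already located in the inventory. Values are placeholders.
from pyVmomi import vim

def disable_vm_page_sharing(vm):
    # Per-VM option named in this section; "FALSE" turns sharing off for the VM.
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="sched.mem.pshare.enable", value="FALSE")])
    return vm.ReconfigVM_Task(spec)

def set_share_force_salting(host, level):
    # Host-wide salting option named in this section; 'level' is an integer
    # value understood by the host.
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Mem.ShareForceSalting", value=level)])
```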
Memory Compression
ESXi provides a memory compression cache to improve virtual machine performance when you use memory overcommitment. Memory compression is enabled by default. When a host's memory becomes overcommitted, ESXi compresses virtual pages and stores them in memory.
Because accessing compressed memory is faster than accessing memory that is swapped to disk, memory compression in ESXi allows you to overcommit memory without significantly hindering performance. When a virtual page needs to be swapped, ESXi first attempts to compress the page. Pages that can be compressed to 2 KB or smaller are stored in the virtual machine's compression cache, increasing the capacity of the host.
You can set the maximum size for the compression cache and disable memory compression using the Advanced Settings dialog box in the vSphere Web Client.
Enable or Disable the Memory Compression Cache
Memory compression is enabled by default. You can use Advanced System Settings in the vSphere Web Client to enable or disable memory compression for a host.
Procedure
1. Browse to the host in the vSphere Web Client navigator.
2. Click .
3. Under System, select Advanced System Settings.
4. Locate Mem.MemZipEnable and click the Edit button.
5. Enter 1 to enable or enter 0 to disable the memory compression cache.
6. Click OK.
Set the Maximum Size of the Memory Compression Cache
You can set the maximum size of the memory compression cache for the host's virtual machines.
You set the size of the compression cache as a percentage of the memory size of the virtual machine. For
example, if you enter 20 and a virtual machine's memory size is 1000 MB, ESXi can use up to 200MB of host
memory to store the compressed pages of the virtual machine.
If you do not set the size of the compression cache, ESXi uses the default value of 10 percent.
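The cache ceiling is simple to compute, as this sketch shows:

```python
# The compression cache ceiling is Mem.MemZipMaxPct percent of the virtual
# machine's configured memory size.
def compression_cache_limit_mb(vm_memory_mb, mem_zip_max_pct=10):
    return vm_memory_mb * mem_zip_max_pct / 100

print(compression_cache_limit_mb(1000, 20))  # 200.0
```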
Procedure
1. Browse to the host in the vSphere Web Client navigator.
2. Click .
3. Under System, select Advanced System Settings.
4. Locate Mem.MemZipMaxPct and click the Edit button.
   The value of this attribute determines the maximum size of the compression cache for the virtual machine.
5. Enter the maximum size for the compression cache.
   The value is a percentage of the size of the virtual machine and must be between 5 and 100 percent.
6. Click OK.
Measuring and Differentiating Types of Memory Usage
The Performance tab of the vSphere Web Client displays several metrics that can be used to analyze
memory usage.
Some of these memory metrics measure guest physical memory while other metrics measure machine
memory. For instance, two types of memory usage that you can examine using performance metrics are
guest physical memory and machine memory. You measure guest physical memory using the Memory
Granted metric (for a virtual machine) or Memory Shared (for a host). To measure machine memory,
however, use Memory Consumed (for a virtual machine) or Memory Shared Common (for a host).
Understanding the conceptual difference between these types of memory usage is important for knowing what these metrics are measuring and how to interpret them.
The VMkernel maps guest physical memory to machine memory, but they are not always mapped one-to-one. Multiple regions of guest physical memory might be mapped to the same region of machine memory (when memory is shared) or specific regions of guest physical memory might not be mapped to machine memory (when the VMkernel swaps out or balloons guest physical memory). In these situations, calculations of guest physical memory usage and machine memory usage for an individual virtual machine or a host differ.
Consider the example in the following figure, which shows two virtual machines running on a host. Each block represents 4 KB of memory and each color/letter represents a different set of data on a block.
Figure 6‑2. Memory Usage Example
The performance metrics for the virtual machines can be determined as follows:
- To determine Memory Granted (the amount of guest physical memory that is mapped to machine memory) for virtual machine 1, count the number of blocks in virtual machine 1's guest physical memory that have arrows to machine memory and multiply by 4 KB. Since there are five blocks with arrows, Memory Granted is 20 KB.
- Memory Consumed is the amount of machine memory allocated to the virtual machine, accounting for savings from shared memory. First, count the number of blocks in machine memory that have arrows from virtual machine 1's guest physical memory. There are three such blocks, but one block is shared with virtual machine 2. So count two full blocks plus half of the third and multiply by 4 KB for a total of 10 KB Memory Consumed.
The important difference between these two metrics is that Memory Granted counts the number of blocks with arrows at the guest physical memory level and Memory Consumed counts the number of blocks with arrows at the machine memory level. The number of blocks differs between the two levels due to memory sharing and so Memory Granted and Memory Consumed differ. Memory is being saved through sharing or other reclamation techniques.
A similar result is obtained when determining Memory Shared and Memory Shared Common for the host.
- Memory Shared for the host is the sum of each virtual machine's Memory Shared. Calculate shared memory by looking at each virtual machine's guest physical memory and counting the number of blocks that have arrows to machine memory blocks that themselves have more than one arrow pointing at them. There are six such blocks in the example, so Memory Shared for the host is 24 KB.
- Memory Shared Common is the amount of machine memory shared by virtual machines. To determine common memory, look at the machine memory and count the number of blocks that have more than one arrow pointing at them. There are three such blocks, so Memory Shared Common is 12 KB.
Memory Shared is concerned with guest physical memory and looks at the origin of the arrows. Memory Shared Common, however, deals with machine memory and looks at the destination of the arrows.
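The counting rules above can be expressed in a few lines of Python. The mapping below is an invented arrangement chosen so that it reproduces the numbers in the worked example (20 KB granted and 10 KB consumed for virtual machine 1, 24 KB Memory Shared and 12 KB Memory Shared Common for the host); it is not the exact block layout of the figure.

```python
# Block mapping invented to reproduce the counts above; None = unmapped
# (for example, ballooned or swapped out). Each block is 4 KB.
from collections import Counter

BLOCK_KB = 4
vm_maps = {
    "vm1": {"p0": "m0", "p1": "m0", "p2": "m1", "p3": "m1", "p4": "m2"},
    "vm2": {"q0": "m2", "q1": "m3", "q2": "m4", "q3": None},
}
# Total arrows into each machine block, across all virtual machines.
arrows = Counter(m for mapping in vm_maps.values() for m in mapping.values() if m)

def granted_kb(vm):
    return sum(1 for m in vm_maps[vm].values() if m) * BLOCK_KB

def consumed_kb(vm):
    total = 0.0
    for m in set(t for t in vm_maps[vm].values() if t):
        sharers = sum(1 for other in vm_maps if m in vm_maps[other].values())
        total += 1.0 / sharers   # machine blocks shared between VMs count fractionally
    return total * BLOCK_KB

host_shared_kb = sum(1 for mapping in vm_maps.values()
                     for m in mapping.values() if m and arrows[m] > 1) * BLOCK_KB
shared_common_kb = sum(1 for n in arrows.values() if n > 1) * BLOCK_KB

print(granted_kb("vm1"), consumed_kb("vm1"))  # 20 10.0
print(host_shared_kb, shared_common_kb)       # 24 12
```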
The memory metrics that measure guest physical memory and machine memory might appear contradictory. In fact, they are measuring different aspects of a virtual machine's memory usage. By understanding the differences between these metrics, you can better use them to diagnose performance issues.
Memory Reliability
Memory reliability, also known as error isolation, allows ESXi to stop using parts of memory when it
determines that a failure might occur, as well as when a failure did occur.
When enough corrected errors are reported at a particular address, ESXi stops using this address to prevent
the corrected error from becoming an uncorrected error.
Memory reliability provides better VMkernel reliability despite corrected and uncorrected errors in RAM. It also enables the system to avoid using memory pages that might contain errors.
Correcting an Error Isolation Notification
With memory reliability, VMkernel stops using pages that receive an error isolation notification.
The user receives an event in the vSphere Web Client when VMkernel recovers from an uncorrectable memory error, when VMkernel retires a significant percentage of system memory due to a large number of correctable errors, or if there are a large number of pages that are unable to retire.
Procedure
1. Vacate the host.
2. Migrate the virtual machines.
3. Run memory-related hardware tests.
About System Swap
System swap is a memory reclamation process that can take advantage of unused memory resources across
an entire system.
System swap allows the system to reclaim memory from memory consumers that are not virtual machines.
When system swap is enabled, you have a trade-off between the impact of reclaiming the memory from another process and the ability to assign the memory to a virtual machine that can use it. The amount of space required for the system swap is 1GB.
Memory is reclaimed by taking data out of memory and writing it to background storage. Accessing the
data from background storage is slower than accessing data from memory, so it is important to carefully
select where to store the swapped data.
ESXi automatically determines where the system swap should be stored; this is the Preferred swap location. This decision can be aided by selecting a certain set of options. The system selects the best possible enabled option. If none of the options are feasible, then system swap is not activated.
The available options are:
- Datastore - Allow the use of the datastore specified. Note that a vSAN datastore or a VVol datastore cannot be specified for system swap files.
- Host Swap Cache - Allow the use of part of the host swap cache.
- Preferred swap file location - Allow the use of the preferred swap file location configured for the host.
Configure System Swap
You can customize the options that determine the system swap location.
Prerequisites
Select the Enabled check box in the Edit System Swap Settings dialog box.
Procedure
1. Browse to the host in the vSphere Web Client navigator.
2. Click .
3. Under System, select System Swap.
4. Click Edit.
5. Select the check boxes for each option that you want to enable.
6. If you select the datastore option, select a datastore from the drop-down menu.
7. Click OK.
Configuring Virtual Graphics
You can edit graphics settings for supported graphics implementations.
VMware supports 3D graphics solutions from AMD, Intel, and NVIDIA.
- NVIDIA GRID support.
- Allows a single NVIDIA VIB to support both vSGA and vGPU implementations.
- Provides vCenter GPU performance charts for Intel and NVIDIA.
- Enables graphics for Horizon View VDI desktops.
You can configure host graphics settings, and customize vGPU graphics settings on a per-VM basis.
This chapter includes the following topics:
- "View GPU Statistics," on page 45
- "Add an NVIDIA GRID vGPU to a Virtual Machine," on page 45
- "Configuring Host Graphics," on page 46
- "Configuring Graphics Devices," on page 47
View GPU Statistics
You can view detailed information for a host graphics card.
You can see GPU temperature, utilization, and memory usage.
NOTE These statistics are only displayed when the GPU driver is installed on the host.
Procedure
1. In the vSphere Web Client, navigate to the host.
2. Click the Monitor tab and click Performance.
3. Click Advanced and select GPU from the drop-down menu.
Add an NVIDIA GRID vGPU to a Virtual Machine
If an ESXi host has an NVIDIA GRID GPU graphics device, you can configure a virtual machine to use the
NVIDIA GRID virtual GPU (vGPU) technology.
NVIDIA GRID GPU graphics devices are designed to optimize complex graphics operations and enable
them to run at high performance without overloading the CPU.
Prerequisites
- Verify that an NVIDIA GRID GPU graphics device with an appropriate driver is installed on the host. See the vSphere Upgrade documentation.
- Verify that the virtual machine is compatible with ESXi 6.0 and later.
Procedure
1. Right-click a virtual machine and select Edit Settings.
2. On the Virtual Hardware tab, select Shared PCI Device from the drop-down menu.
3. Click Add.
4. Expand the New PCI device, and select the NVIDIA GRID vGPU passthrough device to which to connect your virtual machine.
5. Select a GPU profile.
   A GPU profile represents the vGPU type.
6. Click Reserve all memory.
7. Click OK.
The virtual machine can access the device.
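The same configuration can be scripted. The following pyVmomi sketch is a hedged example: the vGPU profile name is a placeholder that depends on the installed NVIDIA GRID hardware, and the vm object is assumed to have been located in the inventory.

```python
# Hedged pyVmomi sketch; 'vm' is an already-located vim.VirtualMachine and the
# profile name is a placeholder that depends on the installed GRID card.
from pyVmomi import vim

def add_vgpu(vm, profile="grid_k120q"):
    backing = vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo(vgpu=profile)
    device = vim.vm.device.VirtualPCIPassthrough(backing=backing)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=device)
    spec = vim.vm.ConfigSpec(
        deviceChange=[change],
        memoryReservationLockedToMax=True)  # corresponds to "Reserve all memory"
    return vm.ReconfigVM_Task(spec)
```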
Configuring Host Graphics
You can customize the graphics options on a per-host basis.
Prerequisites
Virtual machines should be powered off.
Procedure
1. Select a host and select > Graphics.
2. Under Host Graphics, select Edit.
3. In the Edit Host Graphics Settings window, select:
   - Shared: VMware shared virtual graphics
   - Shared Direct: Vendor shared passthrough graphics
4. Select a shared passthrough GPU assignment policy.
   a. Spread VMs across GPUs (best performance)
   b. Group VMs on GPU until full (GPU Consolidation)
5. Click OK.
What to do next
After clicking OK, you must restart Xorg on the host.
Configuring Graphics Devices
You can edit the graphics type for a video card.
Prerequisites
Virtual machines must be powered off.
Procedure
1. Under Graphics Devices, select a graphics card and click Edit.
   a. Select Shared for VMware shared virtual graphics.
   b. Select Shared Direct for Vendor shared passthrough graphics.
2. Click OK.
If you select a device, it shows which virtual machines are using that device if they are active.
What to do next
After clicking OK, you must restart Xorg on the host.
Managing Storage I/O Resources
vSphere Storage I/O Control allows cluster-wide storage I/O prioritization, which allows better workload consolidation and helps reduce extra costs associated with over provisioning.
Storage I/O Control extends the constructs of shares and limits to handle storage I/O resources. You can
control the amount of storage I/O that is allocated to virtual machines during periods of I/O congestion,
which ensures that more important virtual machines get preference over less important virtual machines for
I/O resource allocation.
When you enable Storage I/O Control on a datastore, ESXi begins to monitor the device latency that hosts observe when communicating with that datastore. When device latency exceeds a threshold, the datastore is considered to be congested and each virtual machine that accesses that datastore is allocated I/O resources in proportion to its shares. You set shares per virtual machine. You can adjust the number for each based on need.
The I/O filter framework (VAIO) allows VMware and its partners to develop filters that intercept I/O for each VMDK and provide the desired functionality at the VMDK granularity. VAIO works with Storage Policy-Based Management (SPBM), which allows you to set the filter preferences through a storage policy that is attached to VMDKs.
Configuring Storage I/O Control is a two-step process:
1. Enable Storage I/O Control for the datastore.
2. Set the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed for each virtual machine.
By default, all virtual machine shares are set to Normal (1000) with unlimited IOPS.
NOTE Storage I/O Control is enabled by default on Storage DRS-enabled datastore clusters.
This chapter includes the following topics:
- "About Virtual Machine Storage Policies," on page 50
- "About I/O Filters," on page 50
- "Storage I/O Control Requirements," on page 50
- "Storage I/O Control Resource Shares and Limits," on page 51
- "Set Storage I/O Control Resource Shares and Limits," on page 52
- "Enable Storage I/O Control," on page 52
- "Set Storage I/O Control Threshold Value," on page 53
- "Storage DRS Integration with Storage Profiles," on page 54
About Virtual Machine Storage Policies
Virtual machine storage policies are essential to virtual machine provisioning. The policies control which type of storage is provided for the virtual machine, how the virtual machine is placed within the storage, and which data services are offered for the virtual machine.
vSphere includes default storage policies. However, you can define and assign new policies.
You use the VM Storage Policies interface to create a storage policy. When you define the policy, you specify various storage requirements for applications that run on virtual machines. You can also use storage policies to request specific data services, such as caching or replication, for virtual disks.
You apply the storage policy when you create, clone, or migrate the virtual machine. After you apply the
storage policy, the Storage Policy Based Management (SPBM) mechanism places the virtual machine in a
matching datastore and, in certain storage environments, determines how the virtual machine storage
objects are provisioned and allocated within the storage resource to guarantee the required level of service.
The SPBM also enables requested data services for the virtual machine. vCenter Server monitors policy
compliance and sends an alert if the virtual machine is in breach of the assigned storage policy.
See vSphere Storage for more information.
About I/O Filters
I/O filters that are associated with virtual disks gain direct access to the virtual machine I/O path regardless of the underlying storage topology.
VMware offers certain categories of I/O filters. In addition, the I/O filters can be created by third-party vendors. Typically, they are distributed as packages that provide an installer to deploy filter components on vCenter Server and ESXi host clusters.
When I/O filters are deployed on the ESXi cluster, vCenter Server automatically configures and registers an I/O filter storage provider, also called a VASA provider, for each host in the cluster. The storage providers communicate with vCenter Server and make data services offered by the I/O filter visible in the VM Storage Policies interface. You can reference these data services when defining common rules for a VM policy. After you associate virtual disks with this policy, the I/O filters are enabled on the virtual disks.
See vSphere Storage for more information.
Storage I/O Control Requirements
Storage I/O Control has several requirements and limitations.
- Datastores that are Storage I/O Control-enabled must be managed by a single vCenter Server system.
- Storage I/O Control is supported on Fibre Channel-connected, iSCSI-connected, and NFS-connected storage. Raw Device Mapping (RDM) is not supported.
- Storage I/O Control does not support datastores with multiple extents.
- Before using Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, check the VMware Storage/SAN Compatibility Guide to verify whether your automated tiered storage array has been certified to be compatible with Storage I/O Control.
  Automated storage tiering is the ability of an array (or group of arrays) to migrate LUNs/volumes or parts of LUNs/volumes to different types of storage media (SSD, FC, SAS, SATA) based on user-set policies and current I/O patterns. No special certification is required for arrays that do not have these automatic migration/tiering features, including those that provide the ability to manually migrate data between different types of storage media.
Storage I/O Control Resource Shares and Limits
You allocate the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed
for each virtual machine. When storage I/O congestion is detected for a datastore, the I/O workloads of the
virtual machines accessing that datastore are adjusted according to the proportion of virtual machine shares
each virtual machine has.
Storage I/O shares are similar to shares used for memory and CPU resource allocation, which are described
in “Resource Allocation Shares,” on page 11. These shares represent the relative importance of a virtual
machine regarding the distribution of storage I/O resources. Under resource contention, virtual machines
with higher share values have greater access to the storage array. When you allocate storage I/O resources,
you can limit the IOPS allowed for a virtual machine. By default, IOPS are unlimited.
The benefits and drawbacks of setting resource limits are described in "Resource Allocation Limit," on page 12. If the limit you want to set for a virtual machine is in terms of MB per second instead of IOPS, you can convert MB per second into IOPS based on the typical I/O size for that virtual machine. For example, to restrict a backup application with 64 KB I/Os to 10 MB per second, set a limit of 160 IOPS.
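The conversion is straightforward, as in this sketch:

```python
# Convert an MB-per-second target into an IOPS limit using the typical I/O size.
def mbps_to_iops(limit_mb_per_s, typical_io_kb):
    return int(limit_mb_per_s * 1024 / typical_io_kb)

print(mbps_to_iops(10, 64))  # 160
```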
View Storage I/O Control Shares and Limits
You can view the shares and limits for all virtual machines running on a datastore. Viewing this information
allows you to compare the settings of all virtual machines that are accessing the datastore, regardless of the
cluster in which they are running.
Procedure
1. Browse to the datastore in the vSphere Web Client navigator.
2. Click the VMs tab.
The tab displays each virtual machine running on the datastore and the associated shares value, and
percentage of datastore shares.
Monitor Storage I/O Control Shares
Use the datastore Performance tab to monitor how Storage I/O Control handles the I/O workloads of the
virtual machines accessing a datastore based on their shares.
Datastore performance charts allow you to monitor the following information:
- Average latency and aggregated IOPS on the datastore
- Latency among hosts
- Queue depth among hosts
- Read/write IOPS among hosts
- Read/write latency among virtual machine disks
- Read/write IOPS among virtual machine disks
Procedure
1. Browse to the datastore in the vSphere Web Client navigator.
2. Under the Monitor tab, click the Performance tab.
3. From the View drop-down menu, select Performance.
For more information, see the vSphere Monitoring and Performance documentation.
Set Storage I/O Control Resource Shares and Limits
Allocate storage I/O resources to virtual machines based on importance by assigning a relative amount of
shares to the virtual machine.
Unless virtual machine workloads are very similar, shares do not necessarily dictate allocation in terms of
I/O operations or megabytes per second. Higher shares allow a virtual machine to keep more concurrent I/O
operations pending at the storage device or datastore compared to a virtual machine with lower shares. Two
virtual machines might experience different throughput based on their workloads.
Prerequisites
See vSphere Storage for information on creating VM storage policies and defining common rules for VM
storage policies.
Procedure
1. Find the virtual machine in the vSphere Web Client inventory.
   a. To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
   b. Click the VMs tab.
2. Right-click the virtual machine and click Edit Settings.
3. Click the Virtual Hardware tab and select a virtual hard disk from the list. Expand Hard disk.
4. Select a VM storage policy from the drop-down menu.
   If you select a storage policy, do not manually configure Shares and Limit - IOPS.
5. Under Shares, click the drop-down menu and select the relative amount of shares to allocate to the virtual machine (Low, Normal, or High).
   You can select Custom to enter a user-defined shares value.
6. Under Limit - IOPS, click the drop-down menu and enter the upper limit of storage resources to allocate to the virtual machine.
   IOPS are the number of I/O operations per second. By default, IOPS are unlimited. You select Low (500), Normal (1000), or High (2000), or you can select Custom to enter a user-defined number of shares.
7. Click OK.
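The following pyVmomi sketch applies the same shares and IOPS limit to the first virtual disk of a virtual machine; the values are placeholders, and the property and type names should be verified against the vSphere API reference for your release.

```python
# Hedged pyVmomi sketch; 'vm' is an already-located vim.VirtualMachine.
from pyVmomi import vim

def set_disk_sioc(vm, shares=2000, iops_limit=1000):
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom, shares=shares),
        limit=iops_limit)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
```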
Enable Storage I/O Control
When you enable Storage I/O Control, ESXi monitors datastore latency and throttles the I/O load if the datastore average latency exceeds the threshold.
Procedure
1. Browse to the datastore in the vSphere Web Client navigator.
2. Click the tab.
3. Click and click General.
4. Click Edit for Datastore Capabilities.
5. Select the Enable Storage I/O Control check box.
6. Click OK.
Under Datastore Capabilities, Storage I/O Control is enabled for the datastore.
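Storage I/O Control can also be enabled through the API. The following pyVmomi sketch is an assumption-based example: it presumes a vim.Datastore object has already been located and that the type and method names match your release of the vSphere API.

```python
# Hedged pyVmomi sketch; 'si' is the ServiceInstance and 'ds' a vim.Datastore.
from pyVmomi import vim

def enable_sioc(si, ds, congestion_threshold_ms=30):
    spec = vim.StorageResourceManager.IORMConfigSpec(
        enabled=True,
        congestionThreshold=congestion_threshold_ms)  # default threshold is 30ms
    return si.content.storageResourceManager.ConfigureDatastoreIORM_Task(ds, spec)
```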
Set Storage I/O Control Threshold Value
The congestion threshold value for a datastore is the upper limit of latency that is allowed for a datastore before Storage I/O Control begins to assign importance to the virtual machine workloads according to their shares.
You do not need to adjust the threshold setting in most environments.
CAUTION Storage I/O Control will not function correctly if you share the same spindles on two different datastores.
If you change the congestion threshold setting, set the value based on the following considerations.
- A higher value typically results in higher aggregate throughput and weaker isolation. Throttling will not occur unless the overall average latency is higher than the threshold.
- If throughput is more critical than latency, do not set the value too low. For example, for Fibre Channel disks, a value below 20ms could lower peak disk throughput. A very high value (above 50ms) might allow very high latency without any significant gain in overall throughput.
- A lower value will result in lower device latency and stronger virtual machine I/O performance isolation. Stronger isolation means that the shares controls are enforced more often. Lower device latency translates into lower I/O latency for the virtual machines with the highest shares, at the cost of higher I/O latency experienced by the virtual machines with fewer shares.
- A very low value (lower than 20ms) will result in lower device latency and isolation among I/Os at the potential cost of a decrease in aggregate datastore throughput.
- Setting the value extremely high or extremely low results in poor isolation.
Prerequisites
Verify that Storage I/O Control is enabled.
Procedure
1. Browse to the datastore in the vSphere Web Client navigator.
2. Click the tab and click .
3. Click General.
4. Click Edit for Datastore Capabilities.
5. Select the Enable Storage I/O Control check box.
   Storage I/O Control automatically sets the latency threshold that corresponds to the estimated latency when the datastore is operating at 90% of its peak throughput.
6. (Optional) Adjust the Congestion Threshold.
   - Select a value from the Percentage of peak throughput drop-down menu.
     The percentage of peak throughput value indicates the estimated latency threshold when the datastore is using that percentage of its estimated peak throughput.
   - Select a value from the Manual drop-down menu.
     The value must be between 5ms and 100ms. Setting improper congestion threshold values can be detrimental to the performance of the virtual machines on the datastore.
7. (Optional) Click Reset to defaults to restore the congestion threshold setting to the default value (30ms).
8. Click OK.
Storage DRS Integration with Storage Profiles
Storage Policy Based Management (SPBM) allows you to specify the policy for a virtual machine which is enforced by Storage DRS. A datastore cluster can have a set of datastores with different capability profiles. If the virtual machines have storage profiles associated with them, Storage DRS can enforce placement based on underlying datastore capabilities.
As part of Storage DRS integration with storage profiles, the Storage DRS cluster-level advanced option EnforceStorageProfiles is introduced. The advanced option EnforceStorageProfiles takes one of these integer values: 0, 1, or 2. The default value is 0. When the option is set to 0, it indicates that there is no storage profile or policy enforcement on the Storage DRS cluster. When the option is set to 1, it indicates that there is storage profile or policy soft enforcement on the Storage DRS cluster. This is analogous to DRS soft rules. Storage DRS complies with the storage profile or policy as far as possible, and violates storage profile compliance if it is required to do so. Storage DRS affinity rules take precedence over storage profiles only when storage profile enforcement is set to 1. When the option is set to 2, it indicates that there is storage profile or policy hard enforcement on the Storage DRS cluster. This is analogous to DRS hard rules. Storage DRS does not violate storage profile or policy compliance, and storage profiles take precedence over affinity rules. In this case, Storage DRS can generate the fault: could not fix anti-affinity rule violation.
Prerequisites
By default, Storage DRS will not enforce storage policies associated with a virtual machine. Configure the EnforceStorageProfiles option according to your requirements. The options are Default (0), Soft (1), or Hard (2).
Procedure
1. Log in to the vSphere Web Client as an Administrator.
2. In the vSphere Web Client, click on the Storage DRS cluster, then select Manage > > Storage
4. Click in the area under the Option heading and type EnforceStorageProfiles.
5. Click in the area under the Value heading to the right of the previously entered advanced option name and type the value of either 0, 1, or 2.
6. Click OK.
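The following pyVmomi sketch shows one way the option might be set programmatically; the Storage DRS type names and the integer value are assumptions to verify against the vSphere API reference for your release.

```python
# Hedged pyVmomi sketch; 'si' is the ServiceInstance and 'pod' the
# vim.StoragePod (datastore cluster). Type names are assumptions to verify.
from pyVmomi import vim

def set_enforce_storage_profiles(si, pod, value=1):
    # 0 = no enforcement, 1 = soft enforcement, 2 = hard enforcement
    opt = vim.option.OptionValue(key="EnforceStorageProfiles", value=value)
    spec = vim.storageDrs.ConfigSpec(
        podConfigSpec=vim.storageDrs.PodConfigSpec(option=[opt]))
    return si.content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=spec, modify=True)
```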
Managing Resource Pools
A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped
into hierarchies and used to hierarchically partition available CPU and memory resources.
Each standalone host and each DRS cluster has an (invisible) root resource pool that groups the resources of
that host or cluster. The root resource pool does not appear because the resources of the host (or cluster) and
the root resource pool are always the same.
Users can create child resource pools of the root resource pool or of any user-created child resource pool.
Each child resource pool owns some of the parent’s resources and can, in turn, have a hierarchy of child
resource pools to represent successively smaller units of computational capability.
A resource pool can contain child resource pools, virtual machines, or both. You can create a hierarchy of
shared resources. The resource pools at a higher level are called parent resource pools. Resource pools and
virtual machines that are at the same level are called siblings. The cluster itself represents the root resource
pool. If you do not create child resource pools, only the root resource pools exist.
In the following example, RP-QA is the parent resource pool for RP-QA-UI. RP-Marketing and RP-QA are
siblings. The three virtual machines immediately below RP-Marketing are also siblings.
Figure 9‑1. Parents, Children, and Siblings in Resource Pool Hierarchy
For each resource pool, you specify reservation, limit, shares, and whether the reservation should be
expandable. The resource pool resources are then available to child resource pools and virtual machines.
This chapter includes the following topics:
- "Why Use Resource Pools?," on page 56
- "Create a Resource Pool," on page 57
- "Edit a Resource Pool," on page 58
- "Add a Virtual Machine to a Resource Pool," on page 58
- "Remove a Virtual Machine from a Resource Pool," on page 59
- "Remove a Resource Pool," on page 60
- "Resource Pool Admission Control," on page 60
Why Use Resource Pools?
Resource pools allow you to delegate control over resources of a host (or a cluster), but the benefits are evident when you use resource pools to compartmentalize all resources in a cluster. Create multiple resource pools as direct children of the host or cluster and configure them. You can then delegate control over the resource pools to other individuals or organizations.
Using resource pools can result in the following benefits.
- Flexible hierarchical organization—Add, remove, or reorganize resource pools or change resource allocations as needed.
- Isolation between pools, sharing within pools—Top-level administrators can make a pool of resources available to a department-level administrator. Allocation changes that are internal to one departmental resource pool do not unfairly affect other unrelated resource pools.
- Access control and delegation—When a top-level administrator makes a resource pool available to a department-level administrator, that administrator can then perform all virtual machine creation and management within the boundaries of the resources to which the resource pool is entitled by the current shares, reservation, and limit settings. Delegation is usually done in conjunction with permissions settings.
- Separation of resources from hardware—If you are using clusters enabled for DRS, the resources of all hosts are always assigned to the cluster. That means administrators can perform resource management independently of the actual hosts that contribute to the resources. If you replace three 2GB hosts with two 3GB hosts, you do not need to make changes to your resource allocations.
  This separation allows administrators to think more about aggregate computing capacity and less about individual hosts.
- Management of sets of virtual machines running a multitier service—Group virtual machines for a multitier service in a resource pool. You do not need to set resources on each virtual machine. Instead, you can control the aggregate allocation of resources to the set of virtual machines by changing settings on their enclosing resource pool.
For example, assume a host has a number of virtual machines. The marketing department uses three of the virtual machines and the QA department uses two virtual machines. Because the QA department needs larger amounts of CPU and memory, the administrator creates one resource pool for each group. The administrator sets CPU Shares to High for the QA department pool and to Normal for the Marketing department pool so that the QA department users can run automated tests. The second resource pool with fewer CPU and memory resources is sufficient for the lighter load of the marketing staff. Whenever the QA department is not fully using its allocation, the marketing department can use the available resources.
The numbers in the following figure show the effective allocations to the resource pools.
Figure 9‑2. Allocating Resources to Resource Pools
[Figure: a host with 6GHz and 3GB available; child pool RP-QA receives an effective 4GHz and 2GB (VM-QA 1, VM-QA 2), and child pool RP-Marketing receives 2GHz and 1GB (VM-Marketing 1, VM-Marketing 2, VM-Marketing 3).]
Create a Resource Pool
You can create a child resource pool of any ESXi host, resource pool, or DRS cluster.
NOTE If a host has been added to a cluster, you cannot create child resource pools of that host. If the cluster is enabled for DRS, you can create child resource pools of the cluster.
When you create a child resource pool, you are prompted for resource pool attribute information. The system uses admission control to make sure you cannot allocate resources that are not available.
Prerequisites
The vSphere Web Client is connected to the vCenter Server system.
Procedure
1  In the vSphere Web Client navigator, select a parent object for the resource pool (a host, another resource pool, or a DRS cluster).
2  Right-click the object and select New Resource Pool.
3  Type a name to identify the resource pool.
4  Specify how to allocate CPU and memory resources.
The CPU resources for your resource pool are the guaranteed physical resources the host reserves for a
resource pool. Normally, you accept the default and let the host handle resource allocation.
Shares
    Specify shares for this resource pool with respect to the parent’s total resources. Sibling resource pools share resources according to their relative share values bounded by the reservation and limit.
    - Select Low, Normal, or High to specify share values in a 1:2:4 ratio, respectively.
    - Select Custom to give each virtual machine a specific number of shares, which expresses a proportional weight.
Reservation
    Specify a guaranteed CPU or memory allocation for this resource pool. Defaults to 0.
    A nonzero reservation is subtracted from the unreserved resources of the parent (host or resource pool). The resources are considered reserved, regardless of whether virtual machines are associated with the resource pool.
Expandable Reservation
    When the check box is selected (default), expandable reservations are considered during admission control.
    If you power on a virtual machine in this resource pool, and the combined reservations of the virtual machines are larger than the reservation of the resource pool, the resource pool can use resources from its parent or ancestors.
Limit
    Specify the upper limit for this resource pool’s CPU or memory allocation. You can usually accept the default (Unlimited).
    To specify a limit, deselect the Unlimited check box.
5  Click OK.
After you create a resource pool, you can add virtual machines to it. A virtual machine’s shares are relative
to other virtual machines (or resource pools) with the same parent resource pool.
Example: Creating Resource Pools
Assume that you have a host that provides 6GHz of CPU and 3GB of memory that must be shared between
your marketing and QA departments. You also want to share the resources unevenly, giving one department
(QA) a higher priority. This can be accomplished by creating a resource pool for each department and using the Shares attribute to prioritize the allocation of resources.
The example shows how to create a resource pool with the ESXi host as the parent resource.
1  In the Create Resource Pool dialog box, type a name for the QA department’s resource pool (for example, RP-QA).
2  Specify Shares of High for the CPU and memory resources of RP-QA.
3  Create a second resource pool, RP-Marketing.
   Leave Shares at Normal for CPU and memory.
4  Click OK.
If there is resource contention, RP-QA receives 4GHz and 2GB of memory, and RP-Marketing 2GHz and
1GB. Otherwise, they can receive more than this allotment. Those resources are then available to the virtual
machines in the respective resource pools.
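The same two-pool hierarchy can also be created through the vSphere API. The following is a minimal sketch using the open-source pyVmomi Python bindings rather than the vSphere Web Client; the vCenter Server address, credentials, and host name are placeholders, and the share levels simply mirror the RP-QA and RP-Marketing example above.

```python
# Sketch: create RP-QA (High shares) and RP-Marketing (Normal shares) under a host's
# root resource pool with pyVmomi. Connection details below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def allocation(level, reservation=0, limit=-1, expandable=True):
    """Build a CPU or memory allocation with a named share level."""
    return vim.ResourceAllocationInfo(
        reservation=reservation,          # MHz for CPU, MB for memory
        limit=limit,                      # -1 means Unlimited
        expandableReservation=expandable,
        shares=vim.SharesInfo(level=level, shares=0))

def pool_spec(level):
    """Resource pool spec that accepts the defaults except for the share level."""
    return vim.ResourceConfigSpec(cpuAllocation=allocation(level),
                                  memoryAllocation=allocation(level))

ctx = ssl._create_unverified_context()      # lab use only; validate certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

# Find the standalone host that acts as the parent resource (name is a placeholder).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.example.com')
root_pool = host.parent.resourcePool        # the host's (compute resource's) root pool

root_pool.CreateResourcePool(name='RP-QA',
                             spec=pool_spec(vim.SharesInfo.Level.high))
root_pool.CreateResourcePool(name='RP-Marketing',
                             spec=pool_spec(vim.SharesInfo.Level.normal))
Disconnect(si)
```

Admission control applies to the API call as well: creating a pool whose reservation exceeds the parent’s unreserved capacity fails.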
Edit a Resource Pool
After you create the resource pool, you can edit its CPU and memory resource settings.
Procedure
1  Browse to the resource pool in the vSphere Web Client navigator.
2  Click the tab and click .
3  (Optional) You can change all attributes of the selected resource pool as described in “Create a Resource Pool,” on page 57.
   - Under CPU Resources, click Edit to change CPU resource settings.
   - Under Memory Resources, click Edit to change memory resource settings.
4  Click OK to save your changes.
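Scripted edits map to the resource pool’s UpdateConfig call. A minimal pyVmomi sketch, reusing the connection and lookup pattern from the earlier resource pool example; the 2GHz value in the usage comment is arbitrary.

```python
from pyVmomi import vim

def set_cpu_reservation(pool, reservation_mhz):
    """Raise or lower a resource pool's CPU reservation, leaving memory settings as they are."""
    spec = vim.ResourceConfigSpec(
        cpuAllocation=vim.ResourceAllocationInfo(
            reservation=reservation_mhz,
            limit=pool.config.cpuAllocation.limit,
            expandableReservation=pool.config.cpuAllocation.expandableReservation,
            shares=pool.config.cpuAllocation.shares),
        memoryAllocation=pool.config.memoryAllocation)
    # UpdateConfig(name, config); pass name=None to keep the current name.
    pool.UpdateConfig(name=None, config=spec)

# Example (pool obtained as in the earlier sketch):
# set_cpu_reservation(rp_qa, 2000)   # reserve 2GHz for RP-QA
```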
Add a Virtual Machine to a Resource Pool
When you create a virtual machine, you can specify a resource pool location as part of the creation process.
You can also add an existing virtual machine to a resource pool.
When you move a virtual machine to a new resource pool:
- The virtual machine’s reservation and limit do not change.
- If the virtual machine’s shares are high, medium, or low, %Shares adjusts to reflect the total number of shares in use in the new resource pool.
- If the virtual machine has custom shares assigned, the share value is maintained.

NOTE Because share allocations are relative to a resource pool, you might have to manually change a virtual machine’s shares when you move it into a resource pool so that the virtual machine’s shares are consistent with the relative values in the new resource pool. A warning appears if a virtual machine would receive a very large (or very small) percentage of total shares.
- Under Monitor, the information displayed in the Resource Reservation tab about the resource pool’s reserved and unreserved CPU and memory resources changes to reflect the reservations associated with the virtual machine (if any).

NOTE If a virtual machine has been powered off or suspended, it can be moved but overall available resources (such as reserved and unreserved CPU and memory) for the resource pool are not affected.
Procedure
1  Find the virtual machine in the vSphere Web Client inventory.
   a  To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
   b  Click the VMs tab.
2  Right-click the virtual machine and click Migrate.
   - You can move the virtual machine to another host.
   - You can move the virtual machine's storage to another datastore.
   - You can move the virtual machine to another host and move its storage to another datastore.
3  Select a resource pool in which to run the virtual machine.
4  Review your selections and click Finish.
If a virtual machine is powered on, and the destination resource pool does not have enough CPU or memory
to guarantee the virtual machine’s reservation, the move fails because admission control does not allow it.
An error dialog box displays available and requested resources, so you can consider whether an adjustment
might resolve the issue.
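If you script this move, the same admission-control behavior applies: the migration task fails when the destination pool cannot guarantee the virtual machine’s reservation. A minimal pyVmomi sketch (connection and object lookup as in the earlier examples):

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def move_vm_to_pool(vm, pool):
    """Move a virtual machine into another resource pool without changing its host."""
    task = vm.MigrateVM_Task(
        pool=pool,
        priority=vim.VirtualMachine.MovePriority.defaultPriority)
    # Admission control runs inside the task; insufficient CPU or memory in the
    # destination pool makes the task fail, and WaitForTask raises that fault.
    WaitForTask(task)

# Example: move_vm_to_pool(vm_qa_3, rp_qa)
```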
Remove a Virtual Machine from a Resource Pool
You can remove a virtual machine from a resource pool either by moving the virtual machine to another
resource pool or deleting it.
When you remove a virtual machine from a resource pool, the total number of shares associated with the
resource pool decreases, so that each remaining share represents more resources. For example, assume you
have a pool that is entitled to 6GHz, containing three virtual machines with shares set to Normal. Assuming
the virtual machines are CPU-bound, each gets an equal allocation of 2GHz. If one of the virtual machines is
moved to a different resource pool, the two remaining virtual machines each receive an equal allocation of
3GHz.
Procedure
1  Browse to the resource pool in the vSphere Web Client navigator.
2  Choose one of the following methods to remove the virtual machine from a resource pool.
   - Right-click the virtual machine and select Migrate to move the virtual machine to another resource pool.
     You do not need to power off the virtual machine before you move it.
   - Right-click the virtual machine and select Delete.
     You must power off the virtual machine before you can completely remove it.
Remove a Resource Pool
You can remove a resource pool from the inventory.
Procedure
1  In the vSphere Web Client, right-click the resource pool and select Delete.
   A confirmation dialog box appears.
2  Click Yes to remove the resource pool.
Resource Pool Admission Control
When you power on a virtual machine in a resource pool, or try to create a child resource pool, the system
performs additional admission control to ensure the resource pool’s restrictions are not violated.
Before you power on a virtual machine or create a resource pool, ensure that sufficient resources are
available using the Resource Reservation tab in the vSphere Web Client. The Available Reservation value
for CPU and memory displays resources that are unreserved.
How available CPU and memory resources are computed and whether actions are performed depends on
the Reservation Type.
Table 9‑1. Reservation Types
Fixed
    The system checks whether the selected resource pool has sufficient unreserved resources. If it does, the action can be performed. If it does not, a message appears and the action cannot be performed.
Expandable (default)
    The system considers the resources available in the selected resource pool and its direct parent resource pool. If the parent resource pool also has the Expandable Reservation option selected, it can borrow resources from its parent resource pool. Borrowing resources occurs recursively from the ancestors of the current resource pool as long as the Expandable Reservation option is selected. Leaving this option selected offers more flexibility, but, at the same time, provides less protection. A child resource pool owner might reserve more resources than you anticipate.
The system does not allow you to violate preconfigured Reservation or Limit settings. Each time you reconfigure a resource pool or power on a virtual machine, the system validates all parameters so all service-level guarantees can still be met.
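A script can make the same pre-check by reading the resource pool’s runtime counters, which back the Resource Reservation tab. A minimal pyVmomi sketch; the unit assumptions (MHz for CPU, bytes for memory) reflect my reading of the API and are worth verifying.

```python
def print_reservation_summary(pool):
    """Print reserved and unreserved capacity for a resource pool."""
    cpu = pool.runtime.cpu        # CPU figures in MHz (assumed)
    mem = pool.runtime.memory     # memory figures in bytes (assumed)
    print(f"CPU: reservation used {cpu.reservationUsed} MHz, "
          f"unreserved for VMs {cpu.unreservedForVm} MHz")
    print(f"Memory: reservation used {mem.reservationUsed // 2**20} MB, "
          f"unreserved for VMs {mem.unreservedForVm // 2**20} MB")
    print(f"Expandable reservation: {pool.config.cpuAllocation.expandableReservation}")

# Example: print_reservation_summary(rp_qa)
```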
Expandable Reservations Example 1
This example shows you how a resource pool with expandable reservations works.
Assume an administrator manages pool P, and defines two child resource pools, S1 and S2, for two different users (or groups).
The administrator knows that users want to power on virtual machines with reservations, but does not know how much each user will need to reserve. Making the reservations for S1 and S2 expandable allows the administrator to more flexibly share and inherit the common reservation for pool P.
Without expandable reservations, the administrator needs to explicitly allocate S1 and S2 a specific amount. Such specific allocations can be inflexible, especially in deep resource pool hierarchies, and can complicate setting reservations in the resource pool hierarchy.
Expandable reservations cause a loss of strict isolation. S1 can start using all of P's reservation, so that no
memory or CPU is directly available to S2.
Expandable Reservations Example 2
This example shows how a resource pool with expandable reservations works.
Assume the following scenario, as shown in the figure.
- Parent pool RP-MOM has a reservation of 6GHz and one running virtual machine VM-M1 that reserves 1GHz.
- You create a child resource pool RP-KID with a reservation of 2GHz and with Expandable Reservation selected.
- You add two virtual machines, VM-K1 and VM-K2, with reservations of 2GHz each to the child resource pool and try to power them on.
- VM-K1 can reserve the resources directly from RP-KID (which has 2GHz).
- No local resources are available for VM-K2, so it borrows resources from the parent resource pool, RP-MOM. RP-MOM has 6GHz minus 1GHz (reserved by the virtual machine) minus 2GHz (reserved by RP-KID), which leaves 3GHz unreserved. With 3GHz available, you can power on the 2GHz virtual machine.
Figure 9‑3. Admission Control with Expandable Resource Pools: Successful Power-On
Now, consider another scenario with VM-M1 and VM-M2.
- Power on two virtual machines in RP-MOM with a total reservation of 3GHz.
- You can still power on VM-K1 in RP-KID because 2GHz are available locally.
- When you try to power on VM-K2, RP-KID has no unreserved CPU capacity, so it checks its parent. RP-MOM has only 1GHz of unreserved capacity available (5GHz of RP-MOM are already in use: 3GHz reserved by the local virtual machines and 2GHz reserved by RP-KID). As a result, you cannot power on VM-K2, which requires a 2GHz reservation.
Figure 9‑4. Admission Control with Expandable Resource Pools: Power-On Prevented
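The arithmetic behind both scenarios can be modeled in a few lines of plain Python. This toy sketch is not a VMware API; it only charges each child pool’s reservation to its parent and lets expandable pools borrow what remains upstream, which reproduces why VM-K2 powers on in the first scenario but not in the second.

```python
class Pool:
    """Toy model of a resource pool for reasoning about expandable reservations."""
    def __init__(self, name, reservation, expandable=False, parent=None):
        self.name, self.reservation, self.expandable = name, reservation, expandable
        self.parent, self.used = parent, 0
        if parent is not None:
            parent.used += reservation   # a child pool's reservation is taken from its parent

    def unreserved(self):
        local = self.reservation - self.used
        if self.expandable and self.parent is not None:
            return local + self.parent.unreserved()
        return local

    def try_power_on(self, vm_reservation):
        """Admit a VM if local plus borrowable capacity covers its reservation."""
        if self.unreserved() < vm_reservation:
            return False
        local = min(self.reservation - self.used, vm_reservation)
        self.used += local
        if vm_reservation - local:                  # pass the overflow up to the parent
            self.parent.try_power_on(vm_reservation - local)
        return True

# Scenario 1: RP-MOM reserves 6GHz and runs VM-M1 (1GHz); RP-KID reserves 2GHz, expandable.
mom = Pool('RP-MOM', 6)
kid = Pool('RP-KID', 2, expandable=True, parent=mom)
mom.try_power_on(1)                                   # VM-M1
print(kid.try_power_on(2), kid.try_power_on(2))       # True True: VM-K1 and VM-K2 start

# Scenario 2: RP-MOM now runs VM-M1 and VM-M2 (3GHz total) before RP-KID powers anything on.
mom = Pool('RP-MOM', 6)
kid = Pool('RP-KID', 2, expandable=True, parent=mom)
mom.try_power_on(1); mom.try_power_on(2)              # VM-M1 and VM-M2
print(kid.try_power_on(2), kid.try_power_on(2))       # True False: VM-K2 is rejected
```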
Chapter 10  Creating a DRS Cluster
A cluster is a collection of ESXi hosts and associated virtual machines with shared resources and a shared
management interface. Before you can obtain the benefits of cluster-level resource management you must create a cluster and enable DRS.
Depending on whether or not Enhanced vMotion Compatibility (EVC) is enabled, DRS behaves differently when you use vSphere Fault Tolerance (vSphere FT) virtual machines in your cluster.
Table 10‑1. DRS Behavior with vSphere FT Virtual Machines and EVC
EVC        DRS (Load Balancing)                     DRS (Initial Placement)
Enabled    Enabled (Primary and Secondary VMs)      Enabled (Primary and Secondary VMs)
Disabled   Disabled (Primary and Secondary VMs)     Disabled (Primary VMs)
                                                    Fully Automated (Secondary VMs)
This chapter includes the following topics:
- “Admission Control and Initial Placement,” on page 63
- “Virtual Machine Migration,” on page 65
- “DRS Cluster Requirements,” on page 67
- “Configuring DRS with Virtual Flash,” on page 68
- “Create a Cluster,” on page 68
- “Edit Cluster Settings,” on page 69
- “Set a Custom Automation Level for a Virtual Machine,” on page 71
- “Disable DRS,” on page 72
- “Restore a Resource Pool Tree,” on page 72
Admission Control and Initial Placement
When you attempt to power on a single virtual machine or a group of virtual machines in a DRS-enabled cluster, vCenter Server performs admission control. It checks that there are enough resources in the cluster to support the virtual machine(s).
If the cluster does not have sufficient resources to power on a single virtual machine, or any of the virtual machines in a group power-on attempt, a message appears. Otherwise, for each virtual machine, DRS generates a recommendation of a host on which to run the virtual machine and takes one of the following actions:
- Automatically executes the placement recommendation.
- Displays the placement recommendation, which the user can then choose to accept or override.
  NOTE No initial placement recommendations are given for virtual machines on standalone hosts or in non-DRS clusters. When powered on, they are placed on the host where they currently reside.
- DRS considers network bandwidth. By calculating host network saturation, DRS is able to make better placement decisions. This can help avoid performance degradation of virtual machines with a more comprehensive understanding of the environment.
Single Virtual Machine Power On
In a DRS cluster, you can power on a single virtual machine and receive initial placement recommendations.
When you power on a single virtual machine, you have two types of initial placement recommendations:
- A single virtual machine is being powered on and no prerequisite steps are needed.
  The user is presented with a list of mutually exclusive initial placement recommendations for the virtual machine. You can select only one.
- A single virtual machine is being powered on, but prerequisite actions are required.
  These actions include powering on a host in standby mode or the migration of other virtual machines from one host to another. In this case, the recommendations provided have multiple lines, showing each of the prerequisite actions. The user can either accept this entire recommendation or cancel powering on the virtual machine.
Group Power-on
You can attempt to power on multiple virtual machines at the same time (group power-on).
Virtual machines selected for a group power-on attempt do not have to be in the same DRS cluster. They can be selected across clusters but must be within the same data center. It is also possible to include virtual machines located in non-DRS clusters or on standalone hosts. These virtual machines are powered on automatically and not included in any initial placement recommendation.
The initial placement recommendations for group power-on attempts are provided on a per-cluster basis. If all the placement-related actions for a group power-on attempt are in automatic mode, the virtual machines are powered on with no initial placement recommendation given. If placement-related actions for any of the virtual machines are in manual mode, the powering on of all the virtual machines (including the virtual machines that are in automatic mode) is manual. These actions are included in an initial placement recommendation.
For each DRS cluster that the virtual machines being powered on belong to, there is a single recommendation, which contains all the prerequisites (or no recommendation). All such cluster-specific recommendations are presented together under the Power On Recommendations tab.
When a nonautomatic group power-on attempt is made, and virtual machines not subject to an initial placement recommendation (that is, the virtual machines on standalone hosts or in non-DRS clusters) are included, vCenter Server attempts to power them on automatically. If these power-ons are successful, they are listed under the Started Power-Ons tab. Any virtual machines that fail to power on are listed under the Failed Power-Ons tab.
Example: Group Power-on
The user selects three virtual machines in the same data center for a group power-on attempt. The first two virtual machines (VM1 and VM2) are in the same DRS cluster (Cluster1), while the third virtual machine (VM3) is on a standalone host. VM1 is in automatic mode and VM2 is in manual mode. For this scenario, the user is presented with an initial placement recommendation for Cluster1 (under the Power On Recommendations tab) which consists of actions for powering on VM1 and VM2. An attempt is made to power on VM3 automatically and, if successful, it is listed under the Started Power-Ons tab. If this attempt fails, it is listed under the Failed Power-Ons tab.
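For automation, the vSphere API exposes group power-on as a single data center level call, so DRS can generate its per-cluster recommendations in one pass. A minimal pyVmomi sketch (connection and lookups as in the earlier examples):

```python
from pyVim.task import WaitForTask

def group_power_on(datacenter, vms):
    """Power on a list of VMs in one request; DRS handles placement per cluster."""
    # The VMs can come from DRS clusters, non-DRS clusters, and standalone hosts,
    # as long as they all belong to this data center.
    task = datacenter.PowerOnMultiVM_Task(vm=list(vms))
    WaitForTask(task)
    return task.info.result   # per-VM attempted/notAttempted details

# Example: group_power_on(dc, [vm1, vm2, vm3])
```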
Virtual Machine Migration
Although DRS performs initial placements so that load is balanced across the cluster, changes in virtual
machine load and resource availability can cause the cluster to become unbalanced. To correct such
imbalances, DRS generates migration recommendations.
If DRS is enabled on the cluster, load can be distributed more uniformly to reduce the degree of this imbalance. For example, the three hosts on the left side of the following figure are unbalanced. Assume that Host 1, Host 2, and Host 3 have identical capacity, and all virtual machines have the same configuration and load (which includes reservation, if set). However, because Host 1 has six virtual machines, its resources might be overused while ample resources are available on Host 2 and Host 3. DRS migrates (or recommends the migration of) virtual machines from Host 1 to Host 2 and Host 3. The right side of the diagram shows the properly load-balanced configuration of the hosts that results.
Figure 10‑1. Load Balancing
[Figure: on the left, an unbalanced cluster in which Host 1 runs six virtual machines while Host 2 and Host 3 are lightly loaded; on the right, the balanced result with three virtual machines on each host.]
When a cluster becomes unbalanced, DRS makes recommendations or migrates virtual machines, depending on the default automation level:
- If the cluster or any of the virtual machines involved are manual or partially automated, vCenter Server does not take automatic actions to balance resources. Instead, the Summary page indicates that migration recommendations are available and the DRS Recommendations page displays recommendations for changes that make the most efficient use of resources across the cluster.
- If the cluster and virtual machines involved are all fully automated, vCenter Server migrates running virtual machines between hosts as needed to ensure efficient use of cluster resources.
  NOTE Even in an automatic migration setup, users can explicitly migrate individual virtual machines, but vCenter Server might move those virtual machines to other hosts to optimize cluster resources.
By default, automation level is specified for the whole cluster. You can also specify a custom automation level for individual virtual machines.
DRS Migration Threshold
The DRS migration threshold allows you to specify which recommendations are generated and then applied (when the virtual machines involved in the recommendation are in fully automated mode) or shown (if in manual mode). This threshold is also a measure of how much cluster imbalance across host (CPU and memory) loads is acceptable.
You can move the threshold slider to use one of five settings, ranging from Conservative to Aggressive. The five migration settings generate recommendations based on their assigned priority level. Each setting to the right includes one additional, lower priority level. The Conservative setting generates only priority-one recommendations (mandatory recommendations), the next level to the right generates priority-two recommendations and higher, and so on, down to the Aggressive level, which generates priority-five recommendations and higher (that is, all recommendations).
A priority level for each migration recommendation is computed using the load imbalance metric of the cluster. This metric is displayed as Current host load standard deviation in the cluster's Summary tab in the vSphere Web Client. A higher load imbalance leads to higher-priority migration recommendations. For more information about this metric and how a recommendation priority level is calculated, see the VMware Knowledge Base article "Calculating the priority level of a VMware DRS migration recommendation."
After a recommendation receives a priority level, this level is compared to the migration threshold you set. If the priority level is less than or equal to the threshold setting, the recommendation is either applied (if the relevant virtual machines are in fully automated mode) or displayed to the user for confirmation (if in manual or partially automated mode).
Migration Recommendations
If you create a cluster with a default manual or partially automated mode, vCenter Server displays
migration recommendations on the DRS Recommendations page.
The system supplies as many recommendations as necessary to enforce rules and balance the resources of the cluster. Each recommendation includes the virtual machine to be moved, the current (source) host and destination host, and a reason for the recommendation. The reason can be one of the following:
- Balance average CPU loads or reservations.
- Balance average memory loads or reservations.
- Satisfy resource pool reservations.
- Satisfy an affinity rule.
- Host is entering maintenance mode or standby mode.
NOTE If you are using the vSphere Distributed Power Management (DPM) feature, in addition to migration recommendations, DRS provides host power state recommendations.
DRS Cluster Requirements
Hosts that are added to a DRS cluster must meet certain requirements to use cluster features successfully.
Shared Storage Requirements
A DRS cluster has certain shared storage requirements.
Ensure that the managed hosts use shared storage. Shared storage is typically on a SAN, but can also be
implemented using NAS shared storage.
See the vSphere Storage documentation for information about other shared storage.
Shared VMFS Volume Requirements
A DRS cluster has certain shared VMFS volume requirements.
Configure all managed hosts to use shared VMFS volumes.
- Place the disks of all virtual machines on VMFS volumes that are accessible by source and destination hosts.
- Ensure the VMFS volume is sufficiently large to store all virtual disks for your virtual machines.
- Ensure all VMFS volumes on source and destination hosts use volume names, and all virtual machines use those volume names for specifying the virtual disks.

NOTE Virtual machine swap files also need to be on a VMFS accessible to source and destination hosts (just like .vmdk virtual disk files). This requirement does not apply if all source and destination hosts are ESX Server 3.5 or higher and using host-local swap. In that case, vMotion with swap files on unshared storage is supported. Swap files are placed on a VMFS by default, but administrators might override the file location using advanced virtual machine configuration options.
Processor Compatibility Requirements
A DRS cluster has certain processor compatibility requirements.
To avoid limiting the capabilities of DRS, you should maximize the processor compatibility of source and
destination hosts in the cluster.
vMotion transfers the running architectural state of a virtual machine between underlying ESXi hosts.
vMotion compatibility means that the processors of the destination host must be able to resume execution
using the equivalent instructions where the processors of the source host were suspended. Processor clock
speeds and cache sizes might vary, but processors must come from the same vendor class (Intel versus
AMD) and the same processor family to be compatible for migration with vMotion.
Processor families are defined by the processor vendors. You can distinguish different processor versions within the same family by comparing the processors’ model, stepping level, and extended features.
Sometimes, processor vendors have introduced significant architectural changes within the same processor family (such as 64-bit extensions and SSE3). VMware identifies these exceptions if it cannot guarantee successful migration with vMotion.
vCenter Server provides features that help ensure that virtual machines migrated with vMotion meet
processor compatibility requirements. These features include:
- Enhanced vMotion Compatibility (EVC) – You can use EVC to help ensure vMotion compatibility for the hosts in a cluster. EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ. This prevents migrations with vMotion from failing due to incompatible CPUs.
  Configure EVC from the Cluster Settings dialog box. The hosts in a cluster must meet certain requirements for the cluster to use EVC. For information about EVC and EVC requirements, see the vCenter Server and Host Management documentation.
- CPU compatibility masks – vCenter Server compares the CPU features available to a virtual machine with the CPU features of the destination host to determine whether to allow or disallow migrations with vMotion. By applying CPU compatibility masks to individual virtual machines, you can hide certain CPU features from the virtual machine and potentially prevent migrations with vMotion from failing due to incompatible CPUs.
vMotion Requirements for DRS Clusters
A DRS cluster has certain vMotion requirements.
To enable the use of DRS migration recommendations, the hosts in your cluster must be part of a vMotion
network. If the hosts are not in the vMotion network, DRS can still make initial placement
recommendations.
To be configured for vMotion, each host in the cluster must meet the following requirements:
- vMotion does not support raw disks or migration of applications clustered using Microsoft Cluster Service (MSCS).
- vMotion requires a private Gigabit Ethernet migration network between all of the vMotion enabled managed hosts. When vMotion is enabled on a managed host, configure a unique network identity object for the managed host and connect it to the private migration network.
Configuring DRS with Virtual Flash
DRS can manage virtual machines that have virtual flash reservations.
Virtual flash capacity appears as a statistic that is regularly reported from the host to the vSphere Web Client. Each time DRS runs, it uses the most recent capacity value reported.
You can configure one virtual flash resource per host. This means that during virtual machine power-on time, DRS does not need to select between different virtual flash resources on a given host.
DRS selects a host that has sufficient available virtual flash capacity to start the virtual machine. If DRS cannot satisfy the virtual flash reservation of a virtual machine, it cannot be powered on. DRS treats a powered-on virtual machine with a virtual flash reservation as having a soft affinity with its current host. DRS will not recommend such a virtual machine for vMotion except for mandatory reasons, such as putting a host in maintenance mode, or to reduce the load on an overutilized host.
Create a Cluster
A cluster is a group of hosts. When a host is added to a cluster, the host's resources become part of the
cluster's resources. The cluster manages the resources of all hosts within it. Clusters enable the vSphere
High Availability (HA) and vSphere Distributed Resource Scheduler (DRS) solutions. You can also enable
vSAN on a cluster.
Prerequisites
- Verify that you have sufficient permissions to create a cluster object.
- Verify that a data center exists in the inventory.
Procedure
1  Browse to a data center in the vSphere Web Client navigator.
2  Right-click the data center and select New Cluster.
3  Enter a name for the cluster.
4  Select DRS and vSphere HA cluster features.

   To use DRS with this cluster:
   a  Select the DRS Turn ON check box.
   b  Select an automation level and a migration threshold.

   To use HA with this cluster:
   a  Select the vSphere HA Turn ON check box.
   b  Select whether to enable host monitoring and admission control.
   c  If admission control is enabled, specify a policy.
   d  Select a VM Monitoring option.
   e  Specify the virtual machine monitoring sensitivity.

5  Select an Enhanced vMotion Compatibility (EVC) setting.
   EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ. This prevents migrations with vMotion from failing due to incompatible CPUs.
6  Enable vSAN.
   a  Select the vSAN Turn ON check box.
   b  Specify whether to add disks automatically or manually to the vSAN cluster.
7  Click OK.
The cluster is added to the inventory.
What to do next
Add hosts and resource pools to the cluster.
For information about vSAN and how to use vSAN clusters, see the vSphere Storage publication.
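The equivalent cluster can be created through the API in one call. A minimal pyVmomi sketch; the cluster name and the chosen DRS and HA options are example values, and how the numeric vmotionRate maps onto the Conservative-to-Aggressive slider should be verified before relying on it.

```python
from pyVmomi import vim

def create_drs_cluster(datacenter, name='Cluster1'):
    """Create a cluster with DRS (fully automated) and vSphere HA enabled."""
    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
        vmotionRate=3)                                          # migration threshold, 1-5
    spec.dasConfig = vim.cluster.DasConfigInfo(enabled=True)    # vSphere HA
    return datacenter.hostFolder.CreateClusterEx(name=name, spec=spec)

# Example: cluster = create_drs_cluster(dc)
```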
Edit Cluster Settings
When you add a host to a DRS cluster, the host’s resources become part of the cluster’s resources. In addition
to this aggregation of resources, with a DRS cluster you can support cluster-wide resource pools and enforce
cluster-level resource allocation policies.
The following cluster-level resource management capabilities are also available.

Load Balancing
    The distribution and usage of CPU and memory resources for all hosts and virtual machines in the cluster are continuously monitored. DRS compares these metrics to an ideal resource usage given the attributes of the cluster’s resource pools and virtual machines, the current demand, and the imbalance target. DRS then provides recommendations or performs virtual machine migrations accordingly. See “Virtual Machine Migration,” on page 65. When you power on a virtual machine in the cluster, DRS attempts to maintain proper load balancing by either placing the virtual machine on an appropriate host or making a recommendation. See “Admission Control and Initial Placement,” on page 63.

Power Management
    When the vSphere Distributed Power Management (DPM) feature is enabled, DRS compares cluster and host-level capacity to the demands of the cluster’s virtual machines, including recent historical demand. DRS then recommends that you place hosts in standby, or places hosts in standby power mode when
    sufficient excess capacity is found. DRS powers on hosts if capacity is needed. Depending on the resulting host power state recommendations, virtual machines might need to be migrated to and from the hosts as well. See “Managing Power Resources,” on page 82.

Affinity Rules
    You can control the placement of virtual machines on hosts within a cluster by assigning affinity rules. See “Using DRS Affinity Rules,” on page 86.
Prerequisites
You can create a cluster without a special license, but you must have a license to enable a cluster for vSphere
DRS (or vSphere HA).
Procedure
1  Browse to a cluster in the vSphere Web Client.
2  Click the tab and click Services.
3  Under vSphere DRS, click Edit.
4  Under DRS Automation, select a default automation level for DRS.

   Manual
       Initial placement: Recommended host is displayed.
       Migration: Recommendation is displayed.
   Partially Automated
       Initial placement: Automatic.
       Migration: Recommendation is displayed.
   Fully Automated
       Initial placement: Automatic.
       Migration: Recommendation is run automatically.

5  Set the Migration Threshold for DRS.
6  Select the Predictive DRS check box. In addition to real-time metrics, DRS responds to forecasted metrics provided by vRealize Operations server. You must also configure Predictive DRS in a version of vRealize Operations that supports this feature.
7  Override for individual virtual machines can be set from the VM Overrides page.
8  Under Additional Options, select a check box to enforce one of the default policies.

   VM Distribution
       For availability, distribute a more even number of virtual machines across hosts. This is secondary to DRS load balancing.
   Memory Metric for Load Balancing
       Load balance based on consumed memory of virtual machines rather than active memory. This setting is only recommended for clusters where host memory is not over-committed.
   CPU Over-Commitment
       Control CPU over-commitment in the cluster.
9  Under Power Management, select Automation Level.
10  If DPM is enabled, set the DPM Threshold.
11  (Optional) Select the vSphere HA Turn ON check box to enable vSphere HA.
    vSphere HA allows you to:
    - Enable host monitoring.
    - Enable admission control.
    - Specify the type of policy that admission control enforces.
    - Adjust the monitoring sensitivity of virtual machine monitoring.
12  If appropriate, enable Enhanced vMotion Compatibility (EVC) and select the mode it operates in.
13  Click OK.
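When the same edit is scripted, it becomes a partial reconfiguration of the cluster with modify=True, so only the supplied fields change. A minimal pyVmomi sketch; the behavior and threshold values in the usage comment are examples only.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def set_drs_automation(cluster, behavior, vmotion_rate):
    """Change the cluster-wide DRS automation level and migration threshold."""
    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior=behavior,
        vmotionRate=vmotion_rate)
    # modify=True merges this partial spec into the existing cluster configuration.
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))

# Example: partially automated DRS with a mid-range migration threshold.
# set_drs_automation(cluster,
#                    vim.cluster.DrsConfigInfo.DrsBehavior.partiallyAutomated, 3)
```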
Set a Custom Automation Level for a Virtual Machine
After you create a DRS cluster, you can customize the automation level for individual virtual machines to
override the cluster’s default automation level.
For example, you can select Manual for specific virtual machines in a cluster with full automation, or Partially Automated for specific virtual machines in a manual cluster.
If a virtual machine is set to Disabled, vCenter Server does not migrate that virtual machine or provide migration recommendations for it. This is known as pinning the virtual machine to its registered host.

NOTE If you have not enabled Enhanced vMotion Compatibility (EVC) for the cluster, fault tolerant virtual machines are set to DRS disabled. They appear on this screen, but you cannot assign an automation mode to them.
Procedure
1  Browse to the cluster in the vSphere Web Client navigator.
4  Select the Enable individual virtual machine automation levels check box.
5  To temporarily disable any individual virtual machine overrides, deselect the Enable individual virtual machine automation levels check box.
   Virtual machine settings are restored when the check box is selected again.
6  To temporarily suspend all vMotion activity in a cluster, put the cluster in manual mode and deselect the Enable individual virtual machine automation levels check box.
7  Select one or more virtual machines.
8  Click the Automation Level column and select an automation level from the drop-down menu.
   Manual
       Placement and migration recommendations are displayed, but do not run until you manually apply the recommendation.
   Fully Automated
       Placement and migration recommendations run automatically.
   Partially Automated
       Initial placement is performed automatically. Migration recommendations are displayed, but do not run.
   Disabled
       vCenter Server does not migrate the virtual machine or provide migration recommendations for it.
9  Click OK.
NOTE Other VMware products or features, such as vSphere vApp and vSphere Fault Tolerance, might override the automation levels of virtual machines in a DRS cluster. Refer to the product-specific documentation for details.
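Per-virtual-machine overrides live in the cluster configuration rather than on the virtual machine object. A minimal pyVmomi sketch; setting enabled=False corresponds to the Disabled option described above, and the chosen behavior in the usage comment is only an example.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def set_vm_automation_override(cluster, vm, behavior, enabled=True):
    """Add a DRS automation-level override for one virtual machine."""
    override = vim.cluster.DrsVmConfigSpec(
        # Use Operation.edit instead of add if an override for this VM already exists.
        operation=vim.option.ArrayUpdateSpec.Operation.add,
        info=vim.cluster.DrsVmConfigInfo(
            key=vm,               # the virtual machine the override applies to
            enabled=enabled,      # False pins the VM (no migrations or recommendations)
            behavior=behavior))
    spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[override])
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))

# Example: keep one VM manual inside a fully automated cluster.
# set_vm_automation_override(cluster, vm,
#                            vim.cluster.DrsConfigInfo.DrsBehavior.manual)
```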
Disable DRS
You can turn off DRS for a cluster.
When DRS is disabled, the cluster’s resource pool hierarchy and affinity rules are not reestablished when DRS is turned back on. If you disable DRS, the resource pools are removed from the cluster. To avoid losing the resource pools, save a snapshot of the resource pool tree on your local machine. You can use the snapshot to restore the resource pool when you enable DRS.
Procedure
1  Browse to the cluster in the vSphere Web Client navigator.
2  Click the tab and click Services.
3  Under vSphere DRS, click Edit.
4  Deselect the Turn On vSphere DRS check box.
5  Click OK to turn off DRS.
6  (Optional) Choose an option to save the resource pool.
   - Click Yes to save a resource pool tree snapshot on a local machine.
   - Click No to turn off DRS without saving a resource pool tree snapshot.
Restore a Resource Pool Tree
You can restore a previously saved resource pool tree snapshot.
Prerequisites
- vSphere DRS must be turned ON.
- You can restore a snapshot only on the same cluster on which it was taken.
- No other resource pools are present in the cluster.
Procedure
1  Browse to the cluster in the vSphere Web Client navigator.
2  Right-click the cluster and select Restore Resource Pool Tree.
3  Click Browse, and locate the snapshot file on your local machine.
4  Click Open.
5  Click OK to restore the resource pool tree.
Chapter 11  Using DRS Clusters to Manage Resources
After you create a DRS cluster, you can customize it and use it to manage resources.
To customize your DRS cluster and the resources it contains, you can configure affinity rules and you can add and remove hosts and virtual machines. When a cluster’s settings and resources have been defined, you should ensure that it is, and remains, a valid cluster. You can also use a valid DRS cluster to manage power
resources and interoperate with vSphere HA.
This chapter includes the following topics:
- “Adding Hosts to a Cluster,” on page 73
- “Adding Virtual Machines to a Cluster,” on page 75
- “Removing Virtual Machines from a Cluster,” on page 75
- “Removing a Host from a Cluster,” on page 76
- “DRS Cluster Validity,” on page 77
- “Managing Power Resources,” on page 82
- “Using DRS Affinity Rules,” on page 86
Adding Hosts to a Cluster
The procedure for adding hosts to a cluster is different for hosts managed by the same vCenter Server (managed hosts) than for hosts not managed by that server.
After a host has been added, the virtual machines deployed to the host become part of the cluster and DRS
can recommend migration of some virtual machines to other hosts in the cluster.
Add a Managed Host to a Cluster
When you add a standalone host already being managed by vCenter Server to a DRS cluster, the host’s
resources become associated with the cluster.
You can decide whether you want to associate existing virtual machines and resource pools with the
cluster’s root resource pool or graft the resource pool hierarchy.
NOTE If a host has no child resource pools or virtual machines, the host’s resources are added to the cluster but no resource pool hierarchy with a top-level resource pool is created.
Procedure
1  Browse to the host in the vSphere Web Client navigator.
2  Right-click the host and select Move To.
3  Select a cluster.
4  Click OK to apply the changes.
5  Select what to do with the host’s virtual machines and resource pools.
   - Put this host’s virtual machines in the cluster’s root resource pool
     vCenter Server removes all existing resource pools of the host and the virtual machines in the host’s hierarchy are all attached to the root. Because share allocations are relative to a resource pool, you might have to manually change a virtual machine’s shares after selecting this option, which destroys the resource pool hierarchy.
   - Create a resource pool for this host’s virtual machines and resource pools
     vCenter Server creates a top-level resource pool that becomes a direct child of the cluster and adds all children of the host to that new resource pool. You can supply a name for that new top-level resource pool. The default is Grafted from <host_name>.
The host is added to the cluster.
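The same move can be scripted as a single call on the cluster, where the optional resource pool argument decides between the two choices above. A minimal pyVmomi sketch; the grafting semantics described in the comment are my reading of the API and should be verified.

```python
from pyVim.task import WaitForTask

def move_host_into_cluster(cluster, host, graft_pool=None):
    """Move a host managed by the same vCenter Server into a cluster."""
    # Per my reading of the API: with resourcePool=None the host's virtual machines
    # land in the cluster's root resource pool, while passing a resource pool grafts
    # the host's existing hierarchy beneath it. Depending on the host's state,
    # vCenter Server may require the host to be in maintenance mode first.
    WaitForTask(cluster.MoveHostInto_Task(host=host, resourcePool=graft_pool))

# Example: move_host_into_cluster(cluster, standalone_host)
```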
Add an Unmanaged Host to a Cluster
You can add an unmanaged host to a cluster. Such a host is not currently managed by the same vCenter
Server system as the cluster and it is not visible in the vSphere Web Client.
Procedure
1  Browse to the cluster in the vSphere Web Client navigator.
2  Right-click the cluster and select Add Host.
3  Enter the host name, user name, and password, and click Next.
4  View the summary information and click Next.
5  Assign an existing or a new license key and click Next.
6  (Optional) You can enable lockdown mode to prevent remote users from logging directly into the host.
   If you do not enable lockdown mode, you can configure this option later by editing Security Profile in host settings.
7  Select what to do with the host’s virtual machines and resource pools.
   - Put this host’s virtual machines in the cluster’s root resource pool
     vCenter Server removes all existing resource pools of the host and the virtual machines in the host’s hierarchy are all attached to the root. Because share allocations are relative to a resource pool, you might have to manually change a virtual machine’s shares after selecting this option, which destroys the resource pool hierarchy.
   - Create a resource pool for this host’s virtual machines and resource pools
     vCenter Server creates a top-level resource pool that becomes a direct child of the cluster and adds all children of the host to that new resource pool. You can supply a name for that new top-level resource pool. The default is Grafted from <host_name>.
8  Review settings and click Finish.
The host is added to the cluster.
Adding Virtual Machines to a Cluster
You can add a virtual machine to a cluster in a number of ways.
- When you add a host to a cluster, all virtual machines on that host are added to the cluster.
- When a virtual machine is created, the New Virtual Machine wizard prompts you for the location to place the virtual machine. You can select a standalone host or a cluster and you can select any resource pool inside the host or cluster.
- You can migrate a virtual machine from a standalone host to a cluster or from a cluster to another cluster using the Migrate Virtual Machine wizard. To start this wizard, right-click the virtual machine name and select Migrate.
Move a Virtual Machine to a Cluster
You can move a virtual machine to a cluster.
Procedure
1  Find the virtual machine in the vSphere Web Client inventory.
   a  To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
   b  Click the VMs tab.
2  Right-click the virtual machine and select Move To.
3  Select a cluster.
4  Click OK.
Removing Virtual Machines from a Cluster
You can remove virtual machines from a cluster.
You can remove a virtual machine from a cluster in two ways.
- When you remove a host from a cluster, all of the powered-off virtual machines that you do not migrate to other hosts are removed as well. You can remove a host only if it is in maintenance mode or disconnected. If you remove a host from a DRS cluster, the cluster can become yellow because it is overcommitted.
- You can migrate a virtual machine from a cluster to a standalone host or from a cluster to another cluster using the Migrate Virtual Machine wizard. To start this wizard, right-click the virtual machine name and select Migrate.
Move a Virtual Machine Out of a Cluster
You can move a virtual machine out of a cluster.
Procedure
1  Find the virtual machine in the vSphere Web Client inventory.
   a  To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
   b  Click the VMs tab.
2  Right-click the virtual machine and select Migrate.
3  Select Change datastore and click Next.
4  Select a datastore and click Next.
5  Click Finish.
If the virtual machine is a member of a DRS cluster rules group, vCenter Server displays a warning
before it allows the migration to proceed. The warning indicates that dependent virtual machines are
not migrated automatically. You have to acknowledge the warning before migration can proceed.
Removing a Host from a Cluster
When you remove a host from a DRS cluster, you affect resource pool hierarchies, virtual machines, and you might create invalid clusters. Consider the affected objects before you remove the host.
- Resource Pool Hierarchies – When you remove a host from a cluster, the host retains only the root resource pool, even if you used a DRS cluster and decided to graft the host resource pool when you added the host to the cluster. In that case, the hierarchy remains with the cluster. You can create a host-specific resource pool hierarchy.
  NOTE Ensure that you remove the host from the cluster by first placing it in maintenance mode. If you instead disconnect the host before removing it from the cluster, the host retains the resource pool that reflects the cluster hierarchy.
- Virtual Machines – A host must be in maintenance mode before you can remove it from the cluster, and for a host to enter maintenance mode all powered-on virtual machines must be migrated off that host. When you request that a host enter maintenance mode, you are also asked whether you want to migrate all the powered-off virtual machines on that host to other hosts in the cluster.
- Invalid Clusters – When you remove a host from a cluster, the resources available for the cluster decrease. If the cluster has enough resources to satisfy the reservations of all virtual machines and resource pools in the cluster, the cluster adjusts resource allocation to reflect the reduced amount of resources. If the cluster does not have enough resources to satisfy the reservations of all resource pools, but there are enough resources to satisfy the reservations for all virtual machines, an alarm is issued and the cluster is marked yellow. DRS continues to run.
Place a Host in Maintenance Mode
You place a host in maintenance mode when you need to service it, for example, to install more memory. A
host enters or leaves maintenance mode only as the result of a user request.
Virtual machines that are running on a host entering maintenance mode need to be migrated to another host
(either manually or automatically by DRS) or shut down. The host is in a state of Entering Maintenance Mode until all running virtual machines are powered down or migrated to different hosts. You cannot power on virtual machines or migrate virtual machines to a host entering maintenance mode.
When no more running virtual machines are on the host, the host’s icon changes to include under maintenance and the host’s Summary panel indicates the new state. While in maintenance mode, the host does not allow you to deploy or power on a virtual machine.

NOTE DRS does not recommend (or perform, in fully automated mode) any virtual machine migrations off of a host entering maintenance or standby mode if the vSphere HA failover level would be violated after the host enters the requested mode.
Procedure
1  Browse to the host in the vSphere Web Client navigator.
2  Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
   - If the host is part of a partially automated or manual DRS cluster, a list of migration recommendations for virtual machines running on the host appears.
   - If the host is part of an automated DRS cluster, virtual machines are migrated to different hosts when the host enters maintenance mode.
3  If applicable, click Yes.
The host is in maintenance mode until you select Maintenance Mode > Exit Maintenance Mode.
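The same operation is available as a task on the host object; in a DRS cluster the task completes only after the running virtual machines are migrated or shut down, mirroring the Entering Maintenance Mode state described above. A minimal pyVmomi sketch:

```python
from pyVim.task import WaitForTask

def enter_maintenance(host, evacuate_powered_off=True):
    """Put a host into maintenance mode and wait for the evacuation to finish."""
    # In a DRS cluster the task completes only after running VMs are migrated or
    # shut down; evacuatePoweredOffVms also relocates powered-off virtual machines.
    WaitForTask(host.EnterMaintenanceMode_Task(
        timeout=0, evacuatePoweredOffVms=evacuate_powered_off))

def exit_maintenance(host):
    """Take the host out of maintenance mode again."""
    WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
```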
Remove a Host from a Cluster
You can remove hosts from a cluster.
Procedure
1  Browse to the host in the vSphere Web Client navigator.
2  Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
   When the host is in maintenance mode, move it to a different inventory location, either the top-level data center or to a different cluster.
3  Right-click the host and select Move To.
4  Select a new location for the host and click OK.
When you move the host, its resources are removed from the cluster. If you grafted the host’s resource pool hierarchy onto the cluster, that hierarchy remains with the cluster.
What to do next
After you remove a host from a cluster, you can perform the following tasks.
- Remove the host from vCenter Server.
- Run the host as a standalone host under vCenter Server.
- Move the host into another cluster.
Using Standby Mode
When a host machine is placed in standby mode, it is powered off.
Normally, hosts are placed in standby mode by the vSphere DPM feature to optimize power usage. You can also place a host in standby mode manually. However, DRS might undo (or recommend undoing) your change the next time it runs. To force a host to remain off, place it in maintenance mode and power it off.
DRS Cluster Validity
The vSphere Web Client indicates whether a DRS cluster is valid, overcommitted (yellow), or invalid (red).
DRS clusters become overcommitted or invalid for several reasons.
- A cluster might become overcommitted if a host fails.
- A cluster becomes invalid if vCenter Server is unavailable and you power on virtual machines using the vSphere Web Client.
- A cluster becomes invalid if the user reduces the reservation on a parent resource pool while a virtual machine is in the process of failing over.
- If changes are made to hosts or virtual machines using the vSphere Web Client while vCenter Server is unavailable, those changes take effect. When vCenter Server becomes available again, you might find that clusters have turned red or yellow because cluster requirements are no longer met.
When considering cluster validity scenarios, you should understand these terms.

Reservation
    A fixed, guaranteed allocation for the resource pool input by the user.
Reservation Used
    The sum of the reservation or reservation used (whichever is larger) for each child resource pool, added recursively.
Unreserved
    This nonnegative number differs according to resource pool type.
    - Nonexpandable resource pools: Reservation minus reservation used.
    - Expandable resource pools: (Reservation minus reservation used) plus any unreserved resources that can be borrowed from its ancestor resource pools.
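These definitions reduce to a small calculation. The following standalone Python sketch (not a VMware API) computes Unreserved for fixed and expandable pools and reproduces the RP1, RP2, and RP3 values used in the valid-cluster example that follows.

```python
def unreserved(reservation, reservation_used, expandable=False, ancestor_unreserved=0):
    """Unreserved capacity of a resource pool, per the definitions above."""
    local = max(reservation - reservation_used, 0)
    # Expandable pools may also borrow whatever is still unreserved upstream.
    return local + ancestor_unreserved if expandable else local

# Fixed pools from the valid-cluster example (values in GHz):
print(unreserved(4, 4))   # RP1 -> 0
print(unreserved(4, 3))   # RP2 -> 1
print(unreserved(3, 3))   # RP3 -> 0
```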
Valid DRS Clusters
A valid cluster has enough resources to meet all reservations and to support all running virtual machines.
The following figure shows an example of a valid cluster with fixed resource pools and how its CPU and memory resources are computed.
Figure 11‑1. Valid Cluster with Fixed Resource Pools
[Figure: a cluster with Total Capacity 12GHz, Reserved Capacity 11GHz, and Available Capacity 1GHz; fixed pools RP1 (Reservation 4GHz, Reservation Used 4GHz, Unreserved 0GHz), RP2 (Reservation 4GHz, Reservation Used 3GHz, Unreserved 1GHz), and RP3 (Reservation 3GHz, Reservation Used 3GHz, Unreserved 0GHz), with virtual machines VM1 through VM8.]
The cluster has the following characteristics:
- A cluster with total resources of 12GHz.
- Three resource pools, each of type Fixed (Expandable Reservation is not selected).
- The total reservation of the three resource pools combined is 11GHz (4+4+3 GHz). The total is shown in the Reserved Capacity field for the cluster.
- RP1 was created with a reservation of 4GHz. Two virtual machines (VM1 and VM7) of 2GHz each are powered on (Reservation Used: 4GHz). No resources are left for powering on additional virtual machines. VM6 is shown as not powered on. It consumes none of the reservation.
- RP2 was created with a reservation of 4GHz. Two virtual machines of 1GHz and 2GHz are powered on.
- RP3 was created with a reservation of 3GHz. One virtual machine with 3GHz is powered on. No resources for powering on additional virtual machines are available.
The following figure shows an example of a valid cluster with some resource pools (RP1 and RP3) using reservation type Expandable.
Figure 11‑2. Valid Cluster with Expandable Resource Pools
[Figure: a cluster with Total Capacity 16GHz, Reserved Capacity 16GHz, and Available Capacity 0GHz; RP1 (expandable, Reservation 4GHz, Reservation Used 6GHz, Unreserved 0GHz), RP2 (Reservation 5GHz, Reservation Used 3GHz, Unreserved 2GHz), and RP3 (expandable, Reservation 5GHz, Reservation Used 5GHz, Unreserved 0GHz).]
A valid cluster can be configured as follows:
- A cluster with total resources of 16GHz.
- RP1 and RP3 are of type Expandable, RP2 is of type Fixed.
- The total reservation used of the three resource pools combined is 16GHz (6GHz for RP1, 5GHz for RP2, and 5GHz for RP3). 16GHz shows up as the Reserved Capacity for the cluster at top level.
- RP1 was created with a reservation of 4GHz. Three virtual machines of 2GHz each are powered on. Two of those virtual machines (for example, VM1 and VM7) can use RP1’s reservations, the third virtual machine (VM6) can use reservations from the cluster’s resource pool. (If the type of this resource pool were Fixed, you could not power on the additional virtual machine.)
- RP2 was created with a reservation of 5GHz. Two virtual machines of 1GHz and 2GHz are powered on.
- RP3 was created with a reservation of 5GHz. Two virtual machines of 3GHz and 2GHz are powered on. Even though this resource pool is of type Expandable, no additional 2GHz virtual machine can be powered on because the parent’s extra resources are already used by RP1.
Overcommitted DRS Clusters
A cluster becomes overcommitted (yellow) when the tree of resource pools and virtual machines is internally consistent but the cluster does not have the capacity to support all resources reserved by the child resource pools.
There will always be enough resources to support all running virtual machines because, when a host becomes unavailable, all its virtual machines become unavailable. A cluster typically turns yellow when cluster capacity is suddenly reduced, for example, when a host in the cluster becomes unavailable. VMware recommends that you leave adequate additional cluster resources to avoid your cluster turning yellow.
Figure 11‑3. Yellow Cluster
[Figure: the cluster from the previous example after one 4GHz host fails; total capacity drops from 12GHz to 8GHz while the three resource pools still reserve a combined 12GHz, so the cluster is marked yellow.]
In this example:
- A cluster with total resources of 12GHz coming from three hosts of 4GHz each.
- Three resource pools reserving a total of 12GHz.
- The total reservation used by the three resource pools combined is 12GHz (4+5+3 GHz). That shows up as the Reserved Capacity in the cluster.
- One of the 4GHz hosts becomes unavailable, so total resources reduce to 8GHz.
- At the same time, VM4 (1GHz) and VM3 (3GHz), which were running on the host that failed, are no longer running.
- The cluster is now running virtual machines that require a total of 6GHz. The cluster still has 8GHz available, which is sufficient to meet virtual machine requirements.
The resource pool reservations of 12GHz can no longer be met, so the cluster is marked as yellow.
Invalid DRS Clusters
A cluster enabled for DRS becomes invalid (red) when the tree is no longer internally consistent, that is, resource constraints are not observed.
The total amount of resources in the cluster does not affect whether the cluster is red. A cluster can be red, even if enough resources exist at the root level, if there is an inconsistency at a child level.
You can resolve a red DRS cluster problem by powering off one or more virtual machines, moving virtual machines to parts of the tree that have sufficient resources, or editing the resource pool settings in the red part. Adding resources typically helps only when you are in the yellow state.
A cluster can also turn red if you reconfigure a resource pool while a virtual machine is failing over. A virtual machine that is failing over is disconnected and does not count toward the reservation used by the parent resource pool. You might reduce the reservation of the parent resource pool before the failover completes. After the failover is complete, the virtual machine resources are again charged to the parent resource pool. If the pool's usage becomes larger than the new reservation, the cluster turns red.
If a user is able to start a virtual machine (in an unsupported way) with a reservation of 3GHz under resource pool 2, the cluster would become red, as shown in the following figure.
Figure 11‑4. Red Cluster
Managing Power Resources
The vSphere Distributed Power Management (DPM) feature allows a DRS cluster to reduce its power consumption by powering hosts on and off based on cluster resource utilization.
vSphere DPM monitors the cumulative demand of all virtual machines in the cluster for memory and CPU resources and compares this to the total available resource capacity of all hosts in the cluster. If sufficient excess capacity is found, vSphere DPM places one or more hosts in standby mode and powers them off after migrating their virtual machines to other hosts. Conversely, when capacity is deemed to be inadequate, DRS brings hosts out of standby mode (powers them on) and uses vMotion to migrate virtual machines to them. When making these calculations, vSphere DPM considers not only current demand, but also honors any user-specified virtual machine resource reservations.
If you enable Forecasted Metrics when you create a DRS cluster, DPM issues proposals in advance, depending on the rolling forecast window you select.
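The comparison described above can be pictured with a short Python sketch. The real vSphere DPM algorithm is internal to DRS, so treat this only as an illustration of demand (bounded below by reservations) versus host capacity, with hypothetical numbers.

```python
vm_demand_mhz = {"vm1": 900, "vm2": 400, "vm3": 1200}
vm_reservation_mhz = {"vm1": 500, "vm2": 800, "vm3": 0}     # reservations are honored
host_capacity_mhz = {"esx01": 8000, "esx02": 8000, "esx03": 8000}

# Effective demand is never lower than the user-specified reservation.
effective_demand = sum(max(vm_demand_mhz[vm], vm_reservation_mhz.get(vm, 0))
                       for vm in vm_demand_mhz)
total_capacity = sum(host_capacity_mhz.values())
smallest_host = min(host_capacity_mhz.values())

# If demand still fits after removing the smallest host, that host is a
# candidate for standby; otherwise capacity is needed and hosts stay on.
if effective_demand <= total_capacity - smallest_host:
    print("excess capacity: a host could be placed in standby mode")
else:
    print("capacity is needed: keep hosts powered on")
```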
NOTE ESXi hosts cannot automatically be brought out of standby mode unless they are running in a cluster managed by vCenter Server.
vSphere DPM can use one of three power management protocols to bring a host out of standby mode: Intelligent Platform Management Interface (IPMI), Hewlett-Packard Integrated Lights-Out (iLO), or Wake-on-LAN (WOL). Each protocol requires its own hardware support and configuration. If a host does not support any of these protocols, it cannot be put into standby mode by vSphere DPM. If a host supports multiple protocols, they are used in the following order: IPMI, iLO, WOL.
NOTE Do not disconnect a host in standby mode or move it out of the DRS cluster without first powering it on; otherwise, vCenter Server is not able to power the host back on.
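A trivial sketch of that protocol ordering; the support flags are hypothetical inputs rather than any API.

```python
def pick_wake_protocol(supports):
    # vSphere DPM tries the protocols in this fixed order.
    for protocol in ("IPMI", "iLO", "WOL"):
        if supports.get(protocol):
            return protocol
    return None   # no supported protocol: the host cannot be put into standby mode

print(pick_wake_protocol({"IPMI": False, "iLO": True, "WOL": True}))   # iLO
```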
Configure IPMI or iLO Settings for vSphere DPM
IPMI is a hardware-level specification and Hewlett-Packard iLO is an embedded server management technology. Each of them describes and provides an interface for remotely monitoring and controlling computers.
You must perform the following procedure on each host.
Prerequisites
Both IPMI and iLO require a hardware Baseboard Management Controller (BMC) to provide a gateway for accessing hardware control functions, and allow the interface to be accessed from a remote system using serial or LAN connections. The BMC is powered on even when the host itself is powered off. If properly enabled, the BMC can respond to remote power-on commands.
If you plan to use IPMI or iLO as a wake protocol, you must configure the BMC. BMC configuration steps vary according to model. See your vendor's documentation for more information. With IPMI, you must also ensure that the BMC LAN channel is configured to be always available and to allow operator-privileged commands. On some IPMI systems, when you enable "IPMI over LAN" you must configure this in the BIOS and specify a particular IPMI account.
vSphere DPM using IPMI supports only MD5- and plaintext-based authentication; MD2-based authentication is not supported. vCenter Server uses MD5 if a host's BMC reports that it is supported and enabled for the Operator role. Otherwise, plaintext-based authentication is used if the BMC reports it is supported and enabled. If neither MD5 nor plaintext authentication is enabled, IPMI cannot be used with the host and vCenter Server attempts to use Wake-on-LAN.
Procedure
1. Browse to the host in the vSphere Web Client navigator.
2. Click the tab.
3. Under System, click Power Management.
4. Click Edit.
5. Enter the following information.
   - User name and password for a BMC account. (The user name must have the ability to remotely power the host on.)
   - IP address of the NIC associated with the BMC, as distinct from the IP address of the host. The IP address should be static or a DHCP address with infinite lease.
   - MAC address of the NIC associated with the BMC.
6. Click OK.
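The same BMC settings can also be supplied through the vSphere API. The following hedged pyVmomi sketch assumes the HostSystem.UpdateIpmi() method and the vim.host.IpmiInfo type are available in your API version; the vCenter address, credentials, host name, and BMC details are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(dnsName="esx01.example.com", vmSearch=False)

ipmi = vim.host.IpmiInfo(
    bmcIpAddress="192.0.2.10",                  # IP of the BMC NIC, not of the host
    bmcMacAddress="00:11:22:33:44:55",
    login="bmc-operator",                       # account able to power the host on remotely
    password="bmc-password",
)
host.UpdateIpmi(ipmiInfo=ipmi)                  # corresponds to the Power Management > Edit dialog

Disconnect(si)
```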
Test Wake-on-LAN for vSphere DPM
The use of Wake-on-LAN (WOL) for the vSphere DPM feature is fully supported, if you configure and successfully test it according to the VMware guidelines. You must perform these steps before enabling vSphere DPM for a cluster for the first time or on any host that is being added to a cluster that is using vSphere DPM.
Prerequisites
Before testing WOL, ensure that your cluster meets the prerequisites.
- Your cluster must contain at least two hosts that are version ESX 3.5 (or ESX 3i version 3.5) or later.
- Each host's vMotion networking link must be working correctly. The vMotion network should also be a single IP subnet, not multiple subnets separated by routers.
- The vMotion NIC on each host must support WOL. To check for WOL support, first determine the name of the physical network adapter corresponding to the VMkernel port by selecting the host in the inventory panel of the vSphere Web Client, selecting the tab, and clicking Networking. After you have this information, click on Network Adapters and find the entry corresponding to the network adapter. The Wake On LAN Supported column for the relevant adapter should show Yes.
- To display the WOL-compatibility status for each NIC on a host, select the host in the inventory panel of the vSphere Web Client, select the tab, and click Network Adapters. The NIC must show Yes in the Wake On LAN Supported column.
- The switch port that each WOL-supporting vMotion NIC is plugged into should be set to auto negotiate the link speed, and not set to a fixed speed (for example, 1000 Mb/s). Many NICs support WOL only if they can switch to 100 Mb/s or less when the host is powered off.
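The WOL support reported in that column can also be read through the API. This hedged pyVmomi sketch assumes an existing connection si (as in the earlier IPMI example) and the standard wakeOnLanSupported property on physical NICs.

```python
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for pnic in host.config.network.pnic:
        # Reports per physical adapter whether the hardware advertises WOL support.
        print(host.name, pnic.device, "WOL supported:", pnic.wakeOnLanSupported)
```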
After you verify these prerequisites, test each ESXi host that is going to use WOL to support vSphere DPM. When you test these hosts, ensure that the vSphere DPM feature is disabled for the cluster.
CAUTION Ensure that any host being added to a vSphere DPM cluster that uses WOL as a wake protocol is tested and disabled from using power management if it fails the testing. If this is not done, vSphere DPM might power off hosts that it subsequently cannot power back up.
Procedure
1. Browse to the host in the vSphere Web Client navigator.
2. Right-click the host and select Power > Enter Standby Mode.
   This action powers down the host.
3. Right-click the host and select Power > Power On to attempt to bring it out of standby mode.
4. Observe whether or not the host successfully powers back on.
5. For any host that fails to exit standby mode successfully, perform the following steps.
   a. Select the host in the vSphere Web Client navigator and select the tab.
   b. Under Hardware > Power Management, click Edit to adjust the power management policy.
After you do this, vSphere DPM does not consider that host a candidate for being powered off.
Enabling vSphere DPM for a DRS Cluster
After you have performed configuration or testing steps required by the wake protocol you are using on each host, you can enable vSphere DPM.
Configure the power management automation level, threshold, and host-level overrides. These settings are configured under Power Management in the cluster's Settings dialog box.
You can also create scheduled tasks to enable and disable DPM for a cluster using the Schedule Task: Change Cluster Power Settings wizard.
NOTE If a host in your DRS cluster has USB devices connected, disable DPM for that host. Otherwise, DPM might turn off the host and sever the connection between the device and the virtual machine that was using it.
Automation Level
Whether the host power state and migration recommendations generated by vSphere DPM are run automatically or not depends upon the power management automation level selected for the feature.
The automation level is configured under Power Management in the cluster's Settings dialog box.
NOTE The power management automation level is not the same as the DRS automation level.
Table 11-1. Power Management Automation Level
Option       Description
Off          The feature is disabled and no recommendations are made.
Manual       Host power operation and related virtual machine migration recommendations are made, but not automatically run. These recommendations appear on the cluster's DRS tab in the vSphere Web Client.
Automatic    Host power operations are automatically run if related virtual machine migrations can all be run automatically.
vSphere DPM Threshold
The power state (host power on or off) recommendations generated by the vSphere DPM feature are assigned priorities that range from priority-one recommendations to priority-five recommendations.
These priority ratings are based on the amount of over- or under-utilization found in the DRS cluster and the improvement that is expected from the intended host power state change. A priority-one recommendation is mandatory, while a priority-five recommendation brings only slight improvement.
The threshold is configured under Power Management in the cluster's Settings dialog box. Each level you move the vSphere DPM Threshold slider to the right allows the inclusion of one more lower level of priority in the set of recommendations that are executed automatically or appear as recommendations to be manually executed. At the Conservative setting, vSphere DPM only generates priority-one recommendations, the next level to the right only priority-two and higher, and so on, down to the Aggressive level which generates priority-five recommendations and higher (that is, all recommendations).
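A small sketch of how such a slider might map to the recommendations that get applied (an assumption about the mapping, not DPM internals):

```python
def recommendations_to_apply(recommendations, slider_level):
    """Keep recommendations whose priority (1 = mandatory .. 5 = slight benefit)
    falls within the slider level (1 = Conservative .. 5 = Aggressive)."""
    return [r for r in recommendations if r["priority"] <= slider_level]

recs = [{"host": "esx02", "action": "power off", "priority": 3},
        {"host": "esx03", "action": "power on", "priority": 1}]
print(recommendations_to_apply(recs, slider_level=2))   # only the priority-one recommendation
```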
NOTE The DRS threshold and the vSphere DPM threshold are essentially independent. You can differentiate the aggressiveness of the migration and host-power-state recommendations they respectively provide.
Host-Level Overrides
When you enable vSphere DPM in a DRS cluster, by default all hosts in the cluster inherit its vSphere DPM automation level.
You can override this default for an individual host by selecting the Host Options page of the cluster's Settings dialog box and clicking its Power Management setting. You can change this setting to the following options:
- Disabled
- Manual
- Automatic
NOTE Do not change a host's Power Management setting if it has been set to Disabled due to failed exit standby mode testing.
After enabling and running vSphere DPM, you can verify that it is functioning properly by viewing each host's Last Time Exited Standby information displayed on the Host Options page in the cluster Settings dialog box and on the Hosts tab for each cluster. This field shows a timestamp and whether vCenter Server Succeeded or Failed the last time it attempted to bring the host out of standby mode. If no such attempt has been made, the field displays Never.
NOTE Times for the Last Time Exited Standby text box are derived from the vCenter Server event log. If this log is cleared, the times are reset to Never.
Monitoring vSphere DPM
You can use event-based alarms in vCenter Server to monitor vSphere DPM.
The most serious potential error you face when using vSphere DPM is the failure of a host to exit standby mode when its capacity is needed by the DRS cluster. You can monitor for instances when this error occurs by using the preconfigured Exit Standby Error alarm in vCenter Server. If vSphere DPM cannot bring a host out of standby mode (vCenter Server event DrsExitStandbyModeFailedEvent), you can configure this alarm to send an alert email to the administrator or to send notification using an SNMP trap. By default, this alarm is cleared after vCenter Server is able to successfully connect to that host.
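Beyond the alarm, you can also poll for the underlying event yourself. This hedged pyVmomi sketch assumes an existing connection si and the EventManager.QueryEvents() call with an eventTypeId filter; the event type string and the lookback window are assumptions you may need to adjust.

```python
from datetime import datetime, timedelta
from pyVmomi import vim

content = si.RetrieveContent()
# Look back over the last 24 hours for failed exit-standby attempts.
time_filter = vim.event.EventFilterSpec.ByTime(
    beginTime=datetime.utcnow() - timedelta(hours=24))
spec = vim.event.EventFilterSpec(eventTypeId=["DrsExitStandbyModeFailedEvent"],
                                 time=time_filter)

for event in content.eventManager.QueryEvents(spec):
    print(event.createdTime, event.fullFormattedMessage)
```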
To monitor vSphere DPM activity, you can also create alarms for the following vCenter Server events.
Table 11-2. vCenter Server Events
Event Type                                                      Event Name
Entering Standby mode (about to power off host)                 DrsEnteringStandbyModeEvent
Successfully entered Standby mode (host power off succeeded)    DrsEnteredStandbyModeEvent
Exiting Standby mode (about to power on the host)               DrsExitingStandbyModeEvent
Successfully exited Standby mode (power on succeeded)           DrsExitedStandbyModeEvent
For more information about creating and editing alarms, see the vSphere Monitoring and Performance
documentation.
If you use monitoring software other than vCenter Server, and that software triggers alarms when physical hosts are powered off unexpectedly, you might have a situation where false alarms are generated when vSphere DPM places a host into standby mode. If you do not want to receive such alarms, work with your vendor to deploy a version of the monitoring software that is integrated with vCenter Server. You could also use vCenter Server itself as your monitoring solution, because starting with vSphere 4.x, it is inherently aware of vSphere DPM and does not trigger these false alarms.
Using DRS Affinity Rules
You can control the placement of virtual machines on hosts within a cluster by using affinity rules.
You can create two types of rules.
- Used to specify affinity or anti-affinity between a group of virtual machines and a group of hosts. An affinity rule specifies that the members of a selected virtual machine DRS group can or must run on the members of a specific host DRS group. An anti-affinity rule specifies that the members of a selected virtual machine DRS group cannot run on the members of a specific host DRS group.
  See “VM-Host Affinity Rules,” on page 88 for information about creating and using this type of rule.
- Used to specify affinity or anti-affinity between individual virtual machines. A rule specifying affinity causes DRS to try to keep the specified virtual machines together on the same host, for example, for performance reasons. With an anti-affinity rule, DRS tries to keep the specified virtual machines apart, for example, so that when a problem occurs with one host, you do not lose both virtual machines.
  See “VM-VM Affinity Rules,” on page 87 for information about creating and using this type of rule.
When you add or edit an affinity rule, and the cluster's current state is in violation of the rule, the system continues to operate and tries to correct the violation. For manual and partially automated DRS clusters, migration recommendations based on rule fulfillment and load balancing are presented for approval. You are not required to fulfill the rules, but the corresponding recommendations remain until the rules are fulfilled.
To check whether any enabled affinity rules are being violated and cannot be corrected by DRS, select the cluster's DRS tab and click Faults. Any rule currently being violated has a corresponding fault on this page. Read the fault to determine why DRS is not able to satisfy the particular rule. Rule violations also produce a log event.
NOTE VM-VM and VM-Host affinity rules are different from an individual host's CPU affinity rules.
Create a Host DRS Group
A VM-Host affinity rule establishes an affinity (or anti-affinity) relationship between a virtual machine DRS group and a host DRS group. You must create both of these groups before you can create a rule that links them.
Procedure
1. Browse to the cluster in the vSphere Web Client navigator.
2. Click the tab.
3. Under , select VM/Host Groups and click Add.
4. In the Create VM/Host Group dialog box, type a name for the group.
5. Select Host Group from the Type drop-down box and click Add.
6. Click the check box next to a host to add it. Continue this process until all desired hosts have been added.
7. Click OK.
What to do next
Using this host DRS group, you can create a VM-Host affinity rule that establishes an affinity (or anti-affinity) relationship with an appropriate virtual machine DRS group.
“Create a Virtual Machine DRS Group,” on page 87
“Create a VM-Host Affinity Rule,” on page 89
Create a Virtual Machine DRS Group
Affinity rules establish an affinity (or anti-affinity) relationship between DRS groups. You must create DRS groups before you can create a rule that links them.
Procedure
1. Browse to the cluster in the vSphere Web Client navigator.
2. Click the tab.
3. Under , select VM/Host Groups and click Add.
4. In the Create VM/Host Group dialog box, type a name for the group.
5. Select VM Group from the Type drop-down box and click Add.
6. Click the check box next to a virtual machine to add it. Continue this process until all desired virtual machines have been added.
7. Click OK.
What to do next
“Create a Host DRS Group,” on page 86
“Create a VM-Host Affinity Rule,” on page 89
“Create a VM-VM Affinity Rule,” on page 88
VM-VM Affinity Rules
A VM-VM affinity rule specifies whether selected individual virtual machines should run on the same host or be kept on separate hosts. This type of rule is used to create affinity or anti-affinity between individual virtual machines that you select.
When an affinity rule is created, DRS tries to keep the specified virtual machines together on the same host. You might want to do this, for example, for performance reasons.
With an anti-affinity rule, DRS tries to keep the specified virtual machines apart. You could use such a rule if you want to guarantee that certain virtual machines are always on different physical hosts. In that case, if a problem occurs with one host, not all virtual machines would be placed at risk.
Create a VM-VM Affinity Rule
You can create VM-VM affinity rules to specify whether selected individual virtual machines should run on the same host or be kept on separate hosts.
NOTE If you use the vSphere HA Specify Failover Hosts admission control policy and designate multiple failover hosts, VM-VM affinity rules are not supported.
Procedure
1. Browse to the cluster in the vSphere Web Client navigator.
2. Click the tab.
3. Under , click VM/Host Rules.
4. Click Add.
5. In the Create VM/Host Rule dialog box, type a name for the rule.
6. From the Type drop-down menu, select either Keep Virtual Machines Together or Separate Virtual Machines.
7. Click Add.
8. Select at least two virtual machines to which the rule will apply and click OK.
9. Click OK.
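For reference, the same kind of rule can be created through the API. This hedged pyVmomi sketch builds a VM-VM anti-affinity ("Separate Virtual Machines") rule; the inventory path and virtual machine names are placeholders, and it assumes an existing connection si.

```python
from pyVmomi import vim

content = si.RetrieveContent()
cluster = content.searchIndex.FindByInventoryPath("MyDatacenter/host/MyCluster")  # hypothetical path

# Select the two VMs to keep apart (only VMs in the cluster's root resource pool are listed here).
vms = [vm for vm in cluster.resourcePool.vm if vm.name in ("web01", "web02")]

rule = vim.cluster.AntiAffinityRuleSpec(name="separate-web-frontends", enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])

# Apply the incremental change to the cluster configuration; wait on the task in real code.
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```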
VM-VM Affinity Rule Conflicts
You can create and use multiple VM-VM affinity rules; however, this might lead to situations where the rules conflict with one another.
If two VM-VM affinity rules are in conflict, you cannot enable both. For example, if one rule keeps two virtual machines together and another rule keeps the same two virtual machines apart, you cannot enable both rules. Select one of the rules to apply and disable or remove the conflicting rule.
When two VM-VM affinity rules conflict, the older one takes precedence and the newer rule is disabled. DRS only tries to satisfy enabled rules; disabled rules are ignored. DRS gives higher precedence to preventing violations of anti-affinity rules than violations of affinity rules.
VM-Host Affinity Rules
A VM-Host affinity rule specifies whether or not the members of a selected virtual machine DRS group can run on the members of a specific host DRS group.
Unlike a VM-VM affinity rule, which specifies affinity (or anti-affinity) between individual virtual machines, a VM-Host affinity rule specifies an affinity relationship between a group of virtual machines and a group of hosts. There are 'required' rules (designated by "must") and 'preferential' rules (designated by "should").
A VM-Host affinity rule includes the following components.
- One virtual machine DRS group.
- One host DRS group.
- A designation of whether the rule is a requirement ("must") or a preference ("should") and whether it is affinity ("run on") or anti-affinity ("not run on").
Because VM-Host affinity rules are cluster-based, the virtual machines and hosts that are included in a rule must all reside in the same cluster. If a virtual machine is removed from the cluster, it loses its DRS group affiliation, even if it is later returned to the cluster.
Create a VM-Host Affinity Rule
You can create VM-Host affinity rules to specify whether or not the members of a selected virtual machine DRS group can run on the members of a specific host DRS group.
Prerequisites
Create the virtual machine and host DRS groups to which the VM-Host affinity rule applies.
Procedure
1. Browse to the cluster in the vSphere Web Client navigator.
2. Click the tab.
3. Under , click VM/Host Rules.
4. Click Add.
5. In the Create VM/Host Rule dialog box, type a name for the rule.
6. From the Type drop-down menu, select Virtual Machines to Hosts.
7. Select the virtual machine DRS group and the host DRS group to which the rule applies.
8. Select a specification for the rule.
   - Must run on hosts in group. Virtual machines in VM Group 1 must run on hosts in Host Group A.
   - Should run on hosts in group. Virtual machines in VM Group 1 should, but are not required to, run on hosts in Host Group A.
   - Must not run on hosts in group. Virtual machines in VM Group 1 must never run on hosts in Host Group A.
   - Should not run on hosts in group. Virtual machines in VM Group 1 should not, but might, run on hosts in Host Group A.
9. Click OK.
Using VM-Host Affinity Rules
You use a VM-Host affinity rule to specify an affinity relationship between a group of virtual machines and a group of hosts. When using VM-Host affinity rules, you should be aware of when they could be most useful, how conflicts between rules are resolved, and the importance of caution when setting required affinity rules.
One use case where VM-Host affinity rules are helpful is when the software you are running in your virtual machines has licensing restrictions. You can place such virtual machines into a DRS group and then create a rule that requires them to run on a host DRS group that contains only host machines that have the required licenses.
NOTE When you create a VM-Host affinity rule that is based on the licensing or hardware requirements of the software running in your virtual machines, you are responsible for ensuring that the groups are properly set up. The rule does not monitor the software running in the virtual machines, nor does it know what non-VMware licenses are in place on which ESXi hosts.
If you create more than one VM-Host affinity rule, the rules are not ranked, but are applied equally. Be aware that this has implications for how the rules interact. For example, a virtual machine that belongs to two DRS groups, each of which belongs to a different required rule, can run only on hosts that belong to both of the host DRS groups represented in the rules.
When you create a VM-Host affinity rule, its ability to function in relation to other rules is not checked. So it is possible for you to create a rule that conflicts with the other rules you are using. When two VM-Host affinity rules conflict, the older one takes precedence and the newer rule is disabled. DRS only tries to satisfy enabled rules; disabled rules are ignored.
DRS, vSphere HA, and vSphere DPM never take any action that results in the violation of required affinity rules (those where the virtual machine DRS group 'must run on' or 'must not run on' the host DRS group). Accordingly, you should exercise caution when using this type of rule because of its potential to adversely affect the functioning of the cluster. If improperly used, required VM-Host affinity rules can fragment the cluster and inhibit the proper functioning of DRS, vSphere HA, and vSphere DPM.
A number of cluster functions are not performed if doing so would violate a required affinity rule.
- DRS does not evacuate virtual machines to place a host in maintenance mode.
- DRS does not place virtual machines for power-on or load balance virtual machines.
- vSphere HA does not perform failovers.
- vSphere DPM does not optimize power management by placing hosts into standby mode.
To avoid these situations, exercise caution when creating more than one required affinity rule or consider using VM-Host affinity rules that are preferential only (those where the virtual machine DRS group 'should run on' or 'should not run on' the host DRS group). Ensure that the number of hosts in the cluster with which each virtual machine is affined is large enough that losing a host does not result in a lack of hosts on which the virtual machine can run. Preferential rules can be violated to allow the proper functioning of DRS, vSphere HA, and vSphere DPM.
NOTE You can create an event-based alarm that is triggered when a virtual machine violates a VM-Host affinity rule. In the vSphere Web Client, add a new alarm for the virtual machine and select VM is violating VM-Host Rule as the event trigger. For more information about creating and editing alarms, see the vSphere Monitoring and Performance documentation.
Creating a Datastore Cluster
A datastore cluster is a collection of datastores with shared resources and a shared management interface.
Datastore clusters are to datastores what clusters are to hosts. When you create a datastore cluster, you can
use vSphere Storage DRS to manage storage resources.
NOTE Datastore clusters are referred to as storage pods in the vSphere API.
When you add a datastore to a datastore cluster, the datastore's resources become part of the datastore cluster's resources. As with clusters of hosts, you use datastore clusters to aggregate storage resources, which enables you to support resource allocation policies at the datastore cluster level. The following resource management capabilities are also available per datastore cluster.
Space utilization load balancing: You can set a threshold for space use. When space use on a datastore exceeds the threshold, Storage DRS generates recommendations or performs Storage vMotion migrations to balance space use across the datastore cluster.
I/O latency load balancing: You can set an I/O latency threshold for bottleneck avoidance. When I/O latency on a datastore exceeds the threshold, Storage DRS generates recommendations or performs Storage vMotion migrations to help alleviate high I/O load.
Anti-affinity rules: You can create anti-affinity rules for virtual machine disks. For example, the virtual disks of a certain virtual machine must be kept on different datastores. By default, all virtual disks for a virtual machine are placed on the same datastore.
This chapter includes the following topics:
- “Initial Placement and Ongoing Balancing,” on page 92
- “Storage Migration Recommendations,” on page 92
- “Create a Datastore Cluster,” on page 92
- “Enable and Disable Storage DRS,” on page 93
- “Set the Automation Level for Datastore Clusters,” on page 93
- “Setting the Aggressiveness Level for Storage DRS,” on page 94
- “Datastore Cluster Requirements,” on page 95
- “Adding and Removing Datastores from a Datastore Cluster,” on page 96
Initial Placement and Ongoing Balancing
Storage DRS provides initial placement and ongoing balancing recommendations to datastores in a Storage DRS-enabled datastore cluster.
Initial placement occurs when Storage DRS selects a datastore within a datastore cluster on which to place a virtual machine disk. This happens when the virtual machine is being created or cloned, when a virtual machine disk is being migrated to another datastore cluster, or when you add a disk to an existing virtual machine.
Initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. These goals aim to minimize the risk of over-provisioning one datastore, storage I/O bottlenecks, and performance impact on virtual machines.
Storage DRS is invoked at the configured frequency (by default, every eight hours) or when one or more datastores in a datastore cluster exceed the user-configurable space utilization thresholds. When Storage DRS is invoked, it checks each datastore's space utilization and I/O latency values against the threshold. For I/O latency, Storage DRS uses the 90th percentile I/O latency measured over the course of a day to compare against the threshold.
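The latency side of that check is easy to restate numerically. The sketch below (hypothetical samples, standard library only) compares a day's 90th percentile latency against a threshold.

```python
import statistics

latency_samples_ms = [4, 6, 5, 7, 30, 8, 6, 45, 5, 9, 7, 6]   # hypothetical day of measurements
threshold_ms = 15

p90 = statistics.quantiles(latency_samples_ms, n=10)[-1]      # 90th percentile of the samples
if p90 > threshold_ms:
    print(f"90th percentile latency {p90:.1f} ms exceeds {threshold_ms} ms: "
          "Storage DRS may recommend migrations")
```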
Storage Migration Recommendations
vCenter Server displays migration recommendations on the Storage DRS Recommendations page for
datastore clusters that have manual automation mode.
The system provides as many recommendations as necessary to enforce Storage DRS rules and to balance the space and I/O resources of the datastore cluster. Each recommendation includes the virtual machine name, the virtual disk name, the name of the datastore cluster, the source datastore, the destination datastore, and a reason for the recommendation, which is one of the following:
- Balance datastore space use
- Balance datastore I/O load
Storage DRS makes mandatory recommendations for migration in the following situations:
- The datastore is out of space.
- Anti-affinity or affinity rules are being violated.
- The datastore is entering maintenance mode and must be evacuated.
In addition, optional recommendations are made when a datastore is close to running out of space or when adjustments should be made for space and I/O load balancing.
Storage DRS considers moving virtual machines that are powered off or powered on for space balancing. Storage DRS includes powered-off virtual machines with snapshots in these considerations.
Create a Datastore Cluster
You can manage datastore cluster resources using Storage DRS.
Procedure
1. Browse to data centers in the vSphere Web Client navigator.
2. Right-click the data center object and select New Datastore Cluster.
3. To complete the New Datastore Cluster wizard, follow the prompts.
4. Click Finish.
Enable and Disable Storage DRS
Storage DRS allows you to manage the aggregated resources of a datastore cluster. When Storage DRS is enabled, it provides recommendations for virtual machine disk placement and migration to balance space and I/O resources across the datastores in the datastore cluster.
When you enable Storage DRS, you enable the following functions.
- Space load balancing among datastores within a datastore cluster.
- I/O load balancing among datastores within a datastore cluster.
- Initial placement for virtual disks based on space and I/O workload.
The Enable Storage DRS check box in the Datastore Cluster Settings dialog box enables or disables all of these components at once. If necessary, you can disable I/O-related functions of Storage DRS independently of space balancing functions.
When you disable Storage DRS on a datastore cluster, Storage DRS settings are preserved. When you enable Storage DRS, the settings for the datastore cluster are restored to the point where Storage DRS was disabled.
Procedure
1. Browse to the datastore cluster in the vSphere Web Client navigator.
2. Click the tab and click Services.
3. Select Storage DRS and click Edit.
4. Select Turn ON vSphere DRS and click OK.
5. (Optional) To disable only I/O-related functions of Storage DRS, leaving space-related controls enabled, perform the following steps.
   a. Under Storage DRS select Edit.
   b. Deselect the Enable I/O metric for Storage DRS check box and click OK.
Set the Automation Level for Datastore Clusters
The automation level for a datastore cluster specifies whether or not placement and migration recommendations from Storage DRS are applied automatically.
Procedure
1. Browse to the datastore cluster in the vSphere Web Client navigator.
2. Click the tab and click Services.
3. Select DRS and click Edit.
4. Expand DRS Automation and select an automation level.
   Manual is the default automation level.
   Option                         Description
   No Automation (Manual Mode)    Placement and migration recommendations are displayed, but do not run until you manually apply the recommendation.
   Partially Automated            Placement recommendations run automatically and migration recommendations are displayed, but do not run until you manually apply the recommendation.
   Fully Automated                Placement and migration recommendations run automatically.
5. Click OK.
Setting the Aggressiveness Level for Storage DRS
The aggressiveness of Storage DRS is determined by specifying thresholds for space used and I/O latency.
Storage DRS collects resource usage information for the datastores in a datastore cluster. vCenter Server uses this information to generate recommendations for placement of virtual disks on datastores.
When you set a low aggressiveness level for a datastore cluster, Storage DRS recommends Storage vMotion migrations only when absolutely necessary, for example, when I/O load, space utilization, or their imbalance is high. When you set a high aggressiveness level for a datastore cluster, Storage DRS recommends migrations whenever the datastore cluster can benefit from space or I/O load balancing.
In the vSphere Web Client, you can use the following thresholds to set the aggressiveness level for Storage DRS:
Space Utilization: Storage DRS generates recommendations or performs migrations when the percentage of space utilization on the datastore is greater than the threshold you set in the vSphere Web Client.
I/O Latency: Storage DRS generates recommendations or performs migrations when the 90th percentile I/O latency measured over a day for the datastore is greater than the threshold.
You can also set advanced options to further configure the aggressiveness level of Storage DRS.
Space utilization difference: This threshold ensures that there is some minimum difference between the space utilization of the source and the destination. For example, if the space used on datastore A is 82% and datastore B is 79%, the difference is 3. If the threshold is 5, Storage DRS will not make migration recommendations from datastore A to datastore B.
I/O load balancing invocation interval: After this interval, Storage DRS runs to balance I/O load.
I/O imbalance threshold: Lowering this value makes I/O load balancing less aggressive. Storage DRS computes an I/O fairness metric between 0 and 1, with 1 being the fairest distribution. I/O load balancing runs only if the computed metric is less than 1 - (I/O imbalance threshold / 100).
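The two advanced options translate into simple arithmetic, sketched below with the example numbers from the text (the function names are illustrative, not part of any product):

```python
def space_migration_allowed(source_util_pct, dest_util_pct, min_difference_pct=5):
    # 82% vs. 79% is a difference of 3, below a threshold of 5: no recommendation
    return (source_util_pct - dest_util_pct) >= min_difference_pct

def io_balancing_runs(fairness_metric, imbalance_threshold_pct):
    # fairness_metric is between 0 and 1, with 1 being the fairest distribution
    return fairness_metric < 1 - (imbalance_threshold_pct / 100)

print(space_migration_allowed(82, 79))                     # False
print(io_balancing_runs(0.6, imbalance_threshold_pct=30))  # True: 0.6 < 0.7
```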
Set Storage DRS Runtime Rules
Set Storage DRS triggers and configure advanced options for the datastore cluster.
Procedure
1. (Optional) Select or deselect the Enable I/O metric for SDRS recommendations check box to enable or disable I/O metric inclusion.
   When you disable this option, vCenter Server does not consider I/O metrics when making Storage DRS recommendations. When you disable this option, you disable the following elements of Storage DRS:
   - I/O load balancing among datastores within a datastore cluster.
   - Initial placement for virtual disks based on I/O workload. Initial placement is based on space only.
2. (Optional) Set Storage DRS thresholds.
   You set the aggressiveness level of Storage DRS by specifying thresholds for used space and I/O latency.
   - Use the Utilized Space slider to indicate the maximum percentage of consumed space allowed before Storage DRS is triggered. Storage DRS makes recommendations and performs migrations when space use on the datastores is higher than the threshold.
   - Use the I/O Latency slider to indicate the maximum I/O latency allowed before Storage DRS is triggered. Storage DRS makes recommendations and performs migrations when latency is higher than the threshold.
   NOTE The Storage DRS I/O Latency threshold for the datastore cluster should be lower than or equal to the Storage I/O Control congestion threshold.
3. (Optional) Configure advanced options.
   - No recommendations until utilization difference between source and destination is: Use the slider to specify the space utilization difference threshold. Utilization is usage * 100/capacity.
     This threshold ensures that there is some minimum difference between the space utilization of the source and the destination. For example, if the space used on datastore A is 82% and datastore B is 79%, the difference is 3. If the threshold is 5, Storage DRS will not make migration recommendations from datastore A to datastore B.
   - Check imbalances every: Specify how often Storage DRS should assess space and I/O load balancing.
   - I/O imbalance threshold: Use the slider to indicate the aggressiveness of I/O load balancing. Lowering this value makes I/O load balancing less aggressive. Storage DRS computes an I/O fairness metric between 0 and 1, with 1 being the fairest distribution. I/O load balancing runs only if the computed metric is less than 1 - (I/O imbalance threshold / 100).
4. Click OK.
Datastore Cluster Requirements
Datastores and hosts that are associated with a datastore cluster must meet certain requirements to use datastore cluster features successfully.
Follow these guidelines when you create a datastore cluster.
- Datastore clusters must contain similar or interchangeable datastores.
  A datastore cluster can contain a mix of datastores with different sizes and I/O capacities, and can be from different arrays and vendors. However, the following types of datastores cannot coexist in a datastore cluster.
  - NFS and VMFS datastores cannot be combined in the same datastore cluster.
  - Replicated datastores cannot be combined with non-replicated datastores in the same Storage-DRS-enabled datastore cluster.
- All hosts attached to the datastores in a datastore cluster must be ESXi 5.0 and later. If datastores in the datastore cluster are connected to ESX/ESXi 4.x and earlier hosts, Storage DRS does not run.
- Datastores shared across multiple data centers cannot be included in a datastore cluster.
- As a best practice, do not include datastores that have hardware acceleration enabled in the same datastore cluster as datastores that do not have hardware acceleration enabled. Datastores in a datastore cluster must be homogeneous to guarantee hardware acceleration-supported behavior.
Adding and Removing Datastores from a Datastore Cluster
You add and remove datastores to and from an existing datastore cluster.
You can add to a datastore cluster any datastore that is mounted on a host in the vSphere Web Client inventory, with the following exceptions:
- All hosts attached to the datastore must be ESXi 5.0 and later.
- The datastore cannot be in more than one data center in the same instance of the vSphere Web Client.
When you remove a datastore from a datastore cluster, the datastore remains in the vSphere Web Client inventory and is not unmounted from the host.
Using Datastore Clusters to Manage Storage Resources
After you create a datastore cluster, you can customize it and use it to manage storage I/O and space
utilization resources.
This chapter includes the following topics:
- “Using Storage DRS Maintenance Mode,” on page 97
- “Applying Storage DRS Recommendations,” on page 99
- “Change Storage DRS Automation Level for a Virtual Machine,” on page 100
- “Set Up Off-Hours Scheduling for Storage DRS,” on page 100
- “Storage DRS Anti-Affinity Rules,” on page 101
- “Clear Storage DRS Statistics,” on page 104
- “Storage vMotion Compatibility with Datastore Clusters,” on page 105
Using Storage DRS Maintenance Mode
You place a datastore in maintenance mode when you need to take it out of use to service it. A datastore enters or leaves maintenance mode only as the result of a user request.
Maintenance mode is available to datastores within a Storage DRS-enabled datastore cluster. Standalone datastores cannot be placed in maintenance mode.
Virtual disks that are located on a datastore that is entering maintenance mode must be migrated to another datastore, either manually or using Storage DRS. When you attempt to put a datastore in maintenance mode, the Placement Recommendations tab displays a list of migration recommendations, datastores within the same datastore cluster where virtual disks can be migrated. On the Faults tab, vCenter Server displays a list of the disks that cannot be migrated and the reasons why. If Storage DRS affinity or anti-affinity rules prevent disks from being migrated, you can choose to enable the Ignore Affinity Rules for Maintenance option.
The datastore is in a state of Entering Maintenance Mode until all virtual disks have been migrated.
Place a Datastore in Maintenance Mode
If you need to take a datastore out of service, you can place the datastore in Storage DRS maintenance mode.
Prerequisites
Storage DRS is enabled on the datastore cluster that contains the datastore that is entering maintenance mode.
No CD-ROM image files are stored on the datastore.
There are at least two datastores in the datastore cluster.
Procedure
1. Browse to the datastore in the vSphere Web Client navigator.
2. Right-click the datastore and select Maintenance Mode > Enter Maintenance Mode.
   A list of recommendations appears for datastore maintenance mode migration.
3. (Optional) On the Placement Recommendations tab, deselect any recommendations you do not want to apply.
   NOTE The datastore cannot enter maintenance mode without evacuating all disks. If you deselect recommendations, you must manually move the affected virtual machines.
4. If necessary, click Apply Recommendations.
   vCenter Server uses Storage vMotion to migrate the virtual disks from the source datastore to the destination datastore and the datastore enters maintenance mode.
The datastore icon might not be immediately updated to reflect the datastore's current state. To update the icon immediately, click Refresh.
Ignore Storage DRS Affinity Rules for Maintenance Mode
Storage DRS affinity or anti-affinity rules might prevent a datastore from entering maintenance mode. You can ignore these rules when you put a datastore in maintenance mode.
When you enable the Ignore Affinity Rules for Maintenance option for a datastore cluster, vCenter Server ignores Storage DRS affinity and anti-affinity rules that prevent a datastore from entering maintenance mode.
Storage DRS rules are ignored only for evacuation recommendations. vCenter Server does not violate the rules when making space and load balancing recommendations or initial placement recommendations.
Procedure
1. Browse to the datastore cluster in the vSphere Web Client navigator.
2. Click the tab and click Services.
3. Select DRS and click Edit.
4. Expand Advanced Options and click Add.
5. In the Option column, type IgnoreAffinityRulesForMaintenance.
6. In the Value column, type 1 to enable the option.
   Type 0 to disable the option.
7. Click OK.
The Ignore Affinity Rules for Maintenance Mode option is applied to the datastore cluster.
Applying Storage DRS Recommendations
Storage DRS collects resource usage information for all datastores in a datastore cluster. Storage DRS uses
the information to generate recommendations for virtual machine disk placement on datastores in a
datastore cluster.
Storage DRS recommendations appear on the Storage DRS tab in the vSphere Web Client datastore view.
Recommendations also appear when you attempt to put a datastore into Storage DRS maintenance mode.
When you apply Storage DRS recommendations, vCenter Server uses Storage vMotion to migrate virtual
machine disks to other datastores in the datastore cluster to balance the resources.
You can apply a subset of the recommendations by selecting the Override Suggested DRS Recommendations
check box and selecting each recommendation to apply.
Table 13-1. Storage DRS Recommendations
Label                                                     Description
Priority                                                  Priority level (1-5) of the recommendation. (Hidden by default.)
Recommendation                                            Action being recommended by Storage DRS.
Reason                                                    Why the action is needed.
Space Utilization % Before (source) and (destination)     Percentage of space used on the source and destination datastores before migration.
Space Utilization % After (source) and (destination)      Percentage of space used on the source and destination datastores after migration.
I/O Latency Before (source)                               Value of I/O latency on the source datastore before migration.
I/O Latency Before (destination)                          Value of I/O latency on the destination datastore before migration.
Refresh Storage DRS Recommendations
Storage DRS migration recommendations appear on the Storage DRS tab in the vSphere Web Client. You
can refresh these recommendations by running Storage DRS.
Prerequisites
At least one datastore cluster must exist in the vSphere Web Client inventory.
Enable Storage DRS for the datastore cluster. The Storage DRS tab appears only if Storage DRS is enabled.
Procedure
1. In the vSphere Web Client datastore view, select the datastore cluster and click the Storage DRS tab.
2. Select the Recommendations view and click the Run Storage DRS link in the upper right corner.
The recommendations are updated. The Last Updated timestamp displays the time when Storage DRS
recommendations were refreshed.
Change Storage DRS Automation Level for a Virtual Machine
You can override the datastore cluster-wide automation level for individual virtual machines. You can also
override default virtual disk affinity rules.
Procedure
1. Browse to the datastore cluster in the vSphere Web Client navigator.
2. Click the tab and click .
3. Under VM Overrides, select Add.
4. Select a virtual machine.
5. Click the Automation level drop-down menu, and select an automation level for the virtual machine.
   Option              Description
   Default (Manual)    Placement and migration recommendations are displayed, but do not run until you manually apply the recommendation.
   Fully Automated     Placement and migration recommendations run automatically.
   Disabled            vCenter Server does not migrate the virtual machine or provide migration recommendations for it.
6. Click the Keep VMDKs together drop-down menu to override default VMDK affinity.
   See “Override VMDK Affinity Rules,” on page 103.
7. Click OK.
Set Up Off-Hours Scheduling for Storage DRS
You can create a scheduled task to change Storage DRS settings for a datastore cluster so that migrations for fully automated datastore clusters are more likely to occur during off-peak hours.
You can create a scheduled task to change the automation level and aggressiveness level for a datastore cluster. For example, you might configure Storage DRS to run less aggressively during peak hours, when performance is a priority, to minimize the occurrence of storage migrations. During non-peak hours, Storage DRS can run in a more aggressive mode and be invoked more frequently.
Prerequisites
Enable Storage DRS.
Procedure
1. Browse to the datastore cluster in the vSphere Web Client navigator.
2. Click the tab and click Services.
3. Under vSphere DRS, click the Schedule DRS button.
4. In the Edit Datastore Cluster dialog box, click SDRS Scheduling.