vSphere Resource Management
17 APR 2018
VMware vSphere 6.7
VMware ESXi 6.7
vCenter Server 6.7
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
If you have comments about this documentation, submit your feedback to
docfeedback@vmware.com
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Copyright © 2006–2018 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents

About vSphere Resource Management

1 Getting Started with Resource Management
    Resource Types
    Resource Providers
    Resource Consumers
    Goals of Resource Management

2 Configuring Resource Allocation Settings
    Resource Allocation Shares
    Resource Allocation Reservation
    Resource Allocation Limit
    Resource Allocation Settings Suggestions
    Edit Settings
    Changing Resource Allocation Settings—Example
    Admission Control

3 CPU Virtualization Basics
    Software-Based CPU Virtualization
    Hardware-Assisted CPU Virtualization
    Virtualization and Processor-Specific Behavior
    Performance Implications of CPU Virtualization

4 Administering CPU Resources
    View Processor Information
    Specifying CPU Configuration
    Multicore Processors
    Hyperthreading
    Using CPU Affinity
    Host Power Management Policies

5 Memory Virtualization Basics
    Virtual Machine Memory
    Memory Overcommitment
    Memory Sharing
    Memory Virtualization
    Support for Large Page Sizes

6 Administering Memory Resources
    Understanding Memory Overhead
    How ESXi Hosts Allocate Memory
    Memory Reclamation
    Using Swap Files
    Sharing Memory Across Virtual Machines
    Memory Compression
    Measuring and Differentiating Types of Memory Usage
    Memory Reliability
    About System Swap

7 Persistent Memory

8 Configuring Virtual Graphics
    View GPU Statistics
    Add an NVIDIA GRID vGPU to a Virtual Machine
    Configuring Host Graphics
    Configuring Graphics Devices

9 Managing Storage I/O Resources
    About Virtual Machine Storage Policies
    About I/O Filters
    Storage I/O Control Requirements
    Storage I/O Control Resource Shares and Limits
    Set Storage I/O Control Resource Shares and Limits
    Enable Storage I/O Control
    Set Storage I/O Control Threshold Value
    Storage DRS Integration with Storage Profiles

10 Managing Resource Pools
    Why Use Resource Pools?
    Create a Resource Pool
    Edit a Resource Pool
    Add a Virtual Machine to a Resource Pool
    Remove a Virtual Machine from a Resource Pool
    Remove a Resource Pool
    Resource Pool Admission Control

11 Creating a DRS Cluster
    Admission Control and Initial Placement
    Virtual Machine Migration
    DRS Cluster Requirements
    Configuring DRS with Virtual Flash
    Create a Cluster
    Edit Cluster Settings
    Set a Custom Automation Level for a Virtual Machine
    Disable DRS
    Restore a Resource Pool Tree

12 Using DRS Clusters to Manage Resources
    Adding Hosts to a Cluster
    Adding Virtual Machines to a Cluster
    Removing Virtual Machines from a Cluster
    Removing a Host from a Cluster
    DRS Cluster Validity
    Managing Power Resources
    Using DRS Affinity Rules

13 Creating a Datastore Cluster
    Initial Placement and Ongoing Balancing
    Storage Migration Recommendations
    Create a Datastore Cluster
    Enable and Disable Storage DRS
    Set the Automation Level for Datastore Clusters
    Setting the Aggressiveness Level for Storage DRS
    Datastore Cluster Requirements
    Adding and Removing Datastores from a Datastore Cluster

14 Using Datastore Clusters to Manage Storage Resources
    Using Storage DRS Maintenance Mode
    Applying Storage DRS Recommendations
    Change Storage DRS Automation Level for a Virtual Machine
    Set Up Off-Hours Scheduling for Storage DRS
    Storage DRS Anti-Affinity Rules
    Clear Storage DRS Statistics
    Storage vMotion Compatibility with Datastore Clusters

15 Using NUMA Systems with ESXi
    What is NUMA?
    How ESXi NUMA Scheduling Works
    VMware NUMA Optimization Algorithms and Settings
    Resource Management in NUMA Architectures
    Using Virtual NUMA
    Specifying NUMA Controls

16 Advanced Attributes
    Set Advanced Host Attributes
    Set Advanced Virtual Machine Attributes
    Latency Sensitivity
    About Reliable Memory
    Backing Guest vRAM with 1GB Pages

17 Fault Definitions
    Virtual Machine is Pinned
    Virtual Machine not Compatible with any Host
    VM/VM DRS Rule Violated when Moving to another Host
    Host Incompatible with Virtual Machine
    Host Has Virtual Machine That Violates VM/VM DRS Rules
    Host has Insufficient Capacity for Virtual Machine
    Host in Incorrect State
    Host Has Insufficient Number of Physical CPUs for Virtual Machine
    Host has Insufficient Capacity for Each Virtual Machine CPU
    The Virtual Machine Is in vMotion
    No Active Host in Cluster
    Insufficient Resources
    Insufficient Resources to Satisfy Configured Failover Level for HA
    No Compatible Hard Affinity Host
    No Compatible Soft Affinity Host
    Soft Rule Violation Correction Disallowed
    Soft Rule Violation Correction Impact

18 DRS Troubleshooting Information
    Cluster Problems
    Host Problems
    Virtual Machine Problems
About vSphere Resource Management
vSphere Resource Management describes resource management for VMware® ESXi and vCenter® Server environments.
This documentation focuses on the following topics.
- Resource allocation and resource management concepts
- Virtual machine attributes and admission control
- Resource pools and how to manage them
- Clusters, vSphere® Distributed Resource Scheduler (DRS), vSphere Distributed Power Management (DPM), and how to work with them
- Datastore clusters, Storage DRS, Storage I/O Control, and how to work with them
- Advanced resource management options
- Performance considerations
Intended Audience
This information is for system administrators who want to understand how the system manages resources
and how they can customize the default behavior. It’s also essential for anyone who wants to understand
and use resource pools, clusters, DRS, datastore clusters, Storage DRS, Storage I/O Control, or vSphere
DPM.
This documentation assumes you have a working knowledge of VMware ESXi and of vCenter Server.
Note In this document, "Memory" can refer to physical RAM or Persistent Memory.
vSphere Client and vSphere Web Client
The instructions in this guide are written primarily for the vSphere Client (an HTML5-based GUI). Most
instructions also apply to the vSphere Web Client (a Flex-based GUI).
For the workflows that significantly differ between the two clients, there are duplicate procedures. The
procedures indicate when they are intended exclusively for the vSphere Client or the vSphere Web Client.
Note In vSphere 6.7, most of the vSphere Web Client functionality is implemented in the vSphere Client.
For an up-to-date list of the unsupported functionality, see Functionality Updates for the vSphere Client.
1 Getting Started with Resource Management
To understand resource management, you must be aware of its components, its goals, and how best to
implement it in a cluster setting.
Resource allocation settings for a virtual machine (shares, reservation, and limit) are discussed, including
how to set them and how to view them. Admission control, the process whereby resource allocation
settings are validated against existing resources, is also explained.
Resource management is the allocation of resources from resource providers to resource consumers.
The need for resource management arises from the overcommitment of resources (more demand than
capacity) and from the fact that demand and capacity vary over time. Resource management allows you
to dynamically reallocate resources so that you can use available capacity more efficiently.
Note In this chapter, "Memory" refers to physical RAM.
This chapter includes the following topics:
- Resource Types
- Resource Providers
- Resource Consumers
- Goals of Resource Management
Resource Types
Resources include CPU, memory, power, storage, and network resources.
Note ESXi manages network bandwidth and disk resources on a per-host basis, using network traffic
shaping and a proportional share mechanism, respectively.
Resource Providers
Hosts and clusters, including datastore clusters, are providers of physical resources.
For hosts, available resources are the host’s hardware specification, minus the resources used by the
virtualization software.
A cluster is a group of hosts. You can create a cluster using the vSphere Client and add multiple hosts to
the cluster. vCenter Server manages these hosts’ resources jointly: the cluster owns all of the CPU and
memory of all hosts. You can enable the cluster for joint load balancing or failover. See Chapter 11
Creating a DRS Cluster for more information.
A datastore cluster is a group of datastores. As with DRS clusters, you can create a datastore cluster
using the vSphere Client and add multiple datastores to the cluster. vCenter Server manages the
datastore resources jointly. You can enable Storage DRS to balance I/O load and space utilization. See
Chapter 13 Creating a Datastore Cluster.
Resource Consumers
Virtual machines are resource consumers.
The default resource settings assigned during creation work well for most machines. You can later edit
the virtual machine settings to allocate a share-based percentage of the total CPU, memory, and storage
I/O of the resource provider or a guaranteed reservation of CPU and memory. When you power on that
virtual machine, the server checks whether enough unreserved resources are available and allows power
on only if there are enough resources. This process is called admission control.
A resource pool is a logical abstraction for flexible management of resources. Resource pools can be
grouped into hierarchies and used to hierarchically partition available CPU and memory resources.
Accordingly, resource pools can be considered both resource providers and consumers. They provide
resources to child resource pools and virtual machines, but are also resource consumers because they
consume their parents’ resources. See Chapter 10 Managing Resource Pools.
ESXi hosts allocate each virtual machine a portion of the underlying hardware resources based on a
number of factors:
- Resource limits defined by the user.
- Total available resources for the ESXi host (or the cluster).
- Number of virtual machines powered on and resource usage by those virtual machines.
- Overhead required to manage the virtualization.
Goals of Resource Management
When managing your resources, you must be aware of what your goals are.
In addition to resolving resource overcommitment, resource management can help you accomplish the
following:
- Performance Isolation: Prevent virtual machines from monopolizing resources and guarantee predictable service rates.
- Efficient Usage: Exploit undercommitted resources and overcommit with graceful degradation.
- Easy Administration: Control the relative importance of virtual machines, provide flexible dynamic partitioning, and meet absolute service-level agreements.
2 Configuring Resource Allocation Settings
When available resource capacity does not meet the demands of the resource consumers (and
virtualization overhead), administrators might need to customize the amount of resources that are
allocated to virtual machines or to the resource pools in which they reside.
Use the resource allocation settings (shares, reservation, and limit) to determine the amount of CPU,
memory, and storage resources provided for a virtual machine. In particular, administrators have several
options for allocating resources.
- Reserve the physical resources of the host or cluster.
- Set an upper bound on the resources that can be allocated to a virtual machine.
- Guarantee that a particular virtual machine is always allocated a higher percentage of the physical resources than other virtual machines.
Note In this chapter, "Memory" refers to physical RAM.
This chapter includes the following topics:
- Resource Allocation Shares
- Resource Allocation Reservation
- Resource Allocation Limit
- Resource Allocation Settings Suggestions
- Edit Settings
- Changing Resource Allocation Settings—Example
- Admission Control
Resource Allocation Shares
Shares specify the relative importance of a virtual machine (or resource pool). If a virtual machine has
twice as many shares of a resource as another virtual machine, it is entitled to consume twice as much of
that resource when these two virtual machines are competing for resources.
Shares are typically specified as High, Normal, or Low and these values specify share values with a
4:2:1 ratio, respectively. You can also select Custom to assign a specific number of shares (which
expresses a proportional weight) to each virtual machine.
Specifying shares makes sense only with regard to sibling virtual machines or resource pools, that is,
virtual machines or resource pools with the same parent in the resource pool hierarchy. Siblings share
resources according to their relative share values, bounded by the reservation and limit. When you assign
shares to a virtual machine, you always specify the priority for that virtual machine relative to other
powered-on virtual machines.
The following table shows the default CPU and memory share values for a virtual machine. For resource
pools, the default CPU and memory share values are the same, but must be multiplied as if the resource
pool were a virtual machine with four virtual CPUs and 16 GB of memory.
Table 2-1. Share Values

Setting   CPU share values              Memory share values
High      2000 shares per virtual CPU   20 shares per megabyte of configured virtual machine memory
Normal    1000 shares per virtual CPU   10 shares per megabyte of configured virtual machine memory
Low       500 shares per virtual CPU    5 shares per megabyte of configured virtual machine memory
For example, an SMP virtual machine with two virtual CPUs and 1GB RAM with CPU and memory shares
set to Normal has 2x1000=2000 shares of CPU and 10x1024=10240 shares of memory.
Note Virtual machines with more than one virtual CPU are called SMP (symmetric multiprocessing)
virtual machines. ESXi supports up to 128 virtual CPUs per virtual machine.
The relative priority represented by each share changes when a new virtual machine is powered on, and
this affects all virtual machines in the same resource pool. Consider the following examples, in which all
of the virtual machines have the same number of virtual CPUs. A small calculation sketch follows the
examples.

- Two CPU-bound virtual machines run on a host with 8GHz of aggregate CPU capacity. Their CPU shares are set to Normal, so each virtual machine gets 4GHz.
- A third CPU-bound virtual machine is powered on. Its CPU shares value is set to High, which means it should have twice as many shares as the machines set to Normal. The new virtual machine receives 4GHz and the two other machines get only 2GHz each. The same result occurs if the user specifies a custom share value of 2000 for the third virtual machine.
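The arithmetic behind these examples is simple proportional division among siblings. The following plain-Python sketch reproduces the figures above; the helper function is illustrative and not part of any VMware API.

def cpu_entitlement(total_mhz, vm_shares):
    """Split CPU capacity among competing sibling VMs in proportion to their shares."""
    total_shares = sum(vm_shares.values())
    return {vm: total_mhz * shares / total_shares for vm, shares in vm_shares.items()}

# Two single-vCPU VMs at Normal (1000 shares each) on an 8GHz host: 4GHz each.
print(cpu_entitlement(8000, {"VM1": 1000, "VM2": 1000}))

# A third VM at High (2000 shares) powers on: it gets 4GHz, the others drop to 2GHz each.
print(cpu_entitlement(8000, {"VM1": 1000, "VM2": 1000, "VM3": 2000}))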
Resource Allocation Reservation
A reservation specifies the guaranteed minimum allocation for a virtual machine.
vCenter Server or ESXi allows you to power on a virtual machine only if there are enough unreserved
resources to satisfy the reservation of the virtual machine. The server guarantees that amount even when
the physical server is heavily loaded. The reservation is expressed in concrete units (megahertz or
megabytes).
For example, assume you have 2GHz available and specify a reservation of 1GHz for VM1 and 1GHz for
VM2. Now each virtual machine is guaranteed to get 1GHz if it needs it. However, if VM1 is using only
500MHz, VM2 can use 1.5GHz.
Reservation defaults to 0. You can specify a reservation if you need to guarantee that the minimum
required amounts of CPU or memory are always available for the virtual machine.
Resource Allocation Limit
Limit specifies an upper bound for CPU, memory, or storage I/O resources that can be allocated to a
virtual machine.
A server can allocate more than the reservation to a virtual machine, but never allocates more than the
limit, even if there are unused resources on the system. The limit is expressed in concrete units
(megahertz, megabytes, or I/O operations per second).
CPU, memory, and storage I/O resource limits default to unlimited. When the memory limit is unlimited,
the amount of memory configured for the virtual machine when it was created becomes its effective limit.
In most cases, it is not necessary to specify a limit. There are benefits and drawbacks:
- Benefits — Assigning a limit is useful if you start with a small number of virtual machines and want to manage user expectations. Performance deteriorates as you add more virtual machines. You can simulate having fewer resources available by specifying a limit.
- Drawbacks — You might waste idle resources if you specify a limit. The system does not allow virtual machines to use more resources than the limit, even when the system is underutilized and idle resources are available. Specify the limit only if you have good reasons for doing so.
Resource Allocation Settings Suggestions
Select resource allocation settings (reservation, limit and shares) that are appropriate for your ESXi
environment.
The following guidelines can help you achieve better performance for your virtual machines.
- Use Reservation to specify the minimum acceptable amount of CPU or memory, not the amount you want to have available. The amount of concrete resources represented by a reservation does not change when you change the environment, such as by adding or removing virtual machines. The host assigns additional resources as available based on the limit for your virtual machine, the number of shares and estimated demand.
- When specifying the reservations for virtual machines, do not commit all resources (plan to leave at least 10% unreserved). As you move closer to fully reserving all capacity in the system, it becomes increasingly difficult to make changes to reservations and to the resource pool hierarchy without violating admission control. In a DRS-enabled cluster, reservations that fully commit the capacity of the cluster or of individual hosts in the cluster can prevent DRS from migrating virtual machines between hosts.
- If you expect frequent changes to the total available resources, use Shares to allocate resources fairly across virtual machines. If you use Shares, and you upgrade the host, for example, each virtual machine stays at the same priority (keeps the same number of shares) even though each share represents a larger amount of memory, CPU, or storage I/O resources.
Edit Settings
Use the Edit Settings dialog box to change allocations for memory and CPU resources.
Procedure
1 Browse to the virtual machine in the vSphere Client.
2 Right-click and select Edit Settings.
3 Edit the CPU Resources.
Shares
    CPU shares for this resource pool with respect to the parent’s total. Sibling resource pools share resources according to their relative share values bounded by the reservation and limit. Select Low, Normal, or High, which specify share values respectively in a 1:2:4 ratio. Select Custom to give each virtual machine a specific number of shares, which expresses a proportional weight.
Reservation
    Guaranteed CPU allocation for this resource pool.
Limit
    Upper limit for this resource pool’s CPU allocation. Select Unlimited to specify no upper limit.

4 Edit the Memory Resources.

Shares
    Memory shares for this resource pool with respect to the parent’s total. Sibling resource pools share resources according to their relative share values bounded by the reservation and limit. Select Low, Normal, or High, which specify share values respectively in a 1:2:4 ratio. Select Custom to give each virtual machine a specific number of shares, which expresses a proportional weight.
Reservation
    Guaranteed memory allocation for this resource pool.
Limit
    Upper limit for this resource pool’s memory allocation. Select Unlimited to specify no upper limit.
5 Click OK.
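You can apply the same allocation settings programmatically with the vSphere SDK. The following minimal sketch uses the open-source pyVmomi Python bindings; the connection details, virtual machine name, and reservation value are placeholders for illustration, and certificate handling and error checking are omitted.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for illustration only.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

# Simple inventory walk to find the virtual machine by name (assumes a unique name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "VM-QA")

# Reconfigure spec: High CPU and memory shares, a 500 MHz CPU reservation,
# and no CPU limit (-1 means Unlimited). Values are examples, not recommendations.
# The shares field is ignored unless level is "custom".
spec = vim.vm.ConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(
    reservation=500, limit=-1, shares=vim.SharesInfo(level="high", shares=0))
spec.memoryAllocation = vim.ResourceAllocationInfo(
    shares=vim.SharesInfo(level="high", shares=0))

task = vm.ReconfigVM_Task(spec=spec)  # equivalent to clicking OK in Edit Settings
Disconnect(si)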
Changing Resource Allocation Settings—Example
The following example illustrates how you can change resource allocation settings to improve virtual
machine performance.
Assume that on an ESXi host, you have created two new virtual machines—one each for your QA (VM-
QA) and Marketing (VM-Marketing) departments.
Figure 2‑1. Single Host with Two Virtual Machines
In the following example, assume that VM-QA is memory intensive and accordingly you want to change
the resource allocation settings for the two virtual machines to:
- Specify that, when system memory is overcommitted, VM-QA can use twice as many CPU and memory resources as the Marketing virtual machine. Set the CPU shares and memory shares for VM-QA to High and for VM-Marketing set them to Normal.
- Ensure that the Marketing virtual machine has a certain amount of guaranteed CPU resources. You can do so using a reservation setting.
Procedure
1 Browse to the virtual machines in the vSphere Client.
2 Right-click VM-QA, the virtual machine for which you want to change shares, and select Edit
Settings.
3 Under Virtual Hardware, expand CPU and select High from the Shares drop-down menu.
4 Under Virtual Hardware, expand Memory and select High from the Shares drop-down menu.
5 Click OK.
6 Right-click the marketing virtual machine (VM-Marketing) and select Edit Settings.
7 Under Virtual Hardware, expand CPU and change the Reservation value to the desired number.
8 Click OK.
Admission Control
When you power on a virtual machine, the system checks the amount of CPU and memory resources that
have not yet been reserved. Based on the available unreserved resources, the system determines
whether it can guarantee the reservation for which the virtual machine is configured (if any). This process
is called admission control.
If enough unreserved CPU and memory are available, or if there is no reservation, the virtual machine is
powered on. Otherwise, an Insufficient Resources warning appears.
Note In addition to the user-specified memory reservation, for each virtual machine there is also an
amount of overhead memory. This extra memory commitment is included in the admission control
calculation.
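The check itself is simple arithmetic. The following plain-Python sketch uses illustrative figures; the function and values are not part of any VMware API.

def can_power_on(unreserved_mb, vm_reservation_mb, vm_overhead_mb):
    """Admission control: the VM's reservation plus its overhead memory
    must fit in the host's (or cluster's) unreserved memory."""
    return vm_reservation_mb + vm_overhead_mb <= unreserved_mb

# Host with 1500 MB unreserved: a VM reserving 1024 MB with about 90 MB of
# overhead can power on, but a VM reserving 1450 MB cannot.
print(can_power_on(1500, 1024, 90))   # True
print(can_power_on(1500, 1450, 90))   # False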
When the vSphere DPM feature is enabled, hosts might be placed in standby mode (that is, powered off)
to reduce power consumption. The unreserved resources provided by these hosts are considered
available for admission control. If a virtual machine cannot be powered on without these resources, a
recommendation to power on sufficient standby hosts is made.
3 CPU Virtualization Basics
CPU virtualization emphasizes performance and runs directly on the processor whenever possible. The
underlying physical resources are used whenever possible and the virtualization layer runs instructions
only as needed to make virtual machines operate as if they were running directly on a physical machine.
CPU virtualization is not the same thing as emulation. ESXi does not use emulation to run virtual CPUs.
With emulation, all operations are run in software by an emulator. A software emulator allows programs to
run on a computer system other than the one for which they were originally written. The emulator does
this by emulating, or reproducing, the original computer’s behavior by accepting the same data or inputs
and achieving the same results. Emulation provides portability and runs software designed for one
platform across several platforms.
When CPU resources are overcommitted, the ESXi host time-slices the physical processors across all
virtual machines so each virtual machine runs as if it has its specified number of virtual processors. When
an ESXi host runs multiple virtual machines, it allocates to each virtual machine a share of the physical
resources. With the default resource allocation settings, all virtual machines associated with the same
host receive an equal share of CPU per virtual CPU. This means that a single-processor virtual machine
is assigned only half of the resources of a dual-processor virtual machine.
This chapter includes the following topics:
- Software-Based CPU Virtualization
- Hardware-Assisted CPU Virtualization
- Virtualization and Processor-Specific Behavior
- Performance Implications of CPU Virtualization
Software-Based CPU Virtualization
With software-based CPU virtualization, the guest application code runs directly on the processor, while
the guest privileged code is translated and the translated code runs on the processor.
The translated code is slightly larger and usually runs more slowly than the native version. As a result,
guest applications, which have a small privileged code component, run with speeds very close to native.
Applications with a significant privileged code component, such as system calls, traps, or page table
updates, can run slower in the virtualized environment.
Hardware-Assisted CPU Virtualization
Certain processors provide hardware assistance for CPU virtualization.
When using this assistance, the guest can use a separate mode of execution called guest mode. The
guest code, whether application code or privileged code, runs in the guest mode. On certain events, the
processor exits out of guest mode and enters root mode. The hypervisor executes in the root mode,
determines the reason for the exit, takes any required actions, and restarts the guest in guest mode.
When you use hardware assistance for virtualization, there is no need to translate the code. As a result,
system calls or trap-intensive workloads run very close to native speed. Some workloads, such as those
involving updates to page tables, lead to a large number of exits from guest mode to root mode.
Depending on the number of such exits and total time spent in exits, hardware-assisted CPU virtualization
can speed up execution significantly.
Virtualization and Processor-Specific Behavior
Although VMware software virtualizes the CPU, the virtual machine detects the specific model of the
processor on which it is running.
Processor models might differ in the CPU features they offer, and applications running in the virtual
machine can make use of these features. Therefore, it is not possible to use vMotion® to migrate virtual
machines between systems running on processors with different feature sets. You can avoid this
restriction, in some cases, by using Enhanced vMotion Compatibility (EVC) with processors that support
this feature. See the vCenter Server and Host Management documentation for more information.
Performance Implications of CPU Virtualization
CPU virtualization adds varying amounts of overhead depending on the workload and the type of
virtualization used.
An application is CPU-bound if it spends most of its time executing instructions rather than waiting for
external events such as user interaction, device input, or data retrieval. For such applications, the CPU
virtualization overhead includes the additional instructions that must be executed. This overhead takes
CPU processing time that the application itself can use. CPU virtualization overhead usually translates
into a reduction in overall performance.
For applications that are not CPU-bound, CPU virtualization likely translates into an increase in CPU use.
If spare CPU capacity is available to absorb the overhead, it can still deliver comparable performance in
terms of overall throughput.
ESXi supports up to 128 virtual processors (CPUs) for each virtual machine.
Note Deploy single-threaded applications on uniprocessor virtual machines, instead of on SMP virtual
machines that have multiple CPUs, for the best performance and resource use.
Single-threaded applications can take advantage only of a single CPU. Deploying such applications in
dual-processor virtual machines does not speed up the application. Instead, it causes the second virtual
CPU to use physical resources that other virtual machines could otherwise use.
4 Administering CPU Resources
You can configure virtual machines with one or more virtual processors, each with its own set of registers
and control structures.
When a virtual machine is scheduled, its virtual processors are scheduled to run on physical processors.
The VMkernel Resource Manager schedules the virtual CPUs on physical CPUs, thereby managing the
virtual machine’s access to physical CPU resources. ESXi supports virtual machines with up to 128 virtual
CPUs.
Note In this chapter, "Memory" can refer to physical RAM or Persistent Memory.
This chapter includes the following topics:
- View Processor Information
- Specifying CPU Configuration
- Multicore Processors
- Hyperthreading
- Using CPU Affinity
- Host Power Management Policies
View Processor Information
You can access information about current CPU configuration in the vSphere Client.
Procedure
1 Browse to the host in the vSphere Client.
2 Click Configure and expand Hardware.
3 Select Processors to view the information about the number and type of physical processors and the
number of logical processors.
Note In hyperthreaded systems, each hardware thread is a logical processor. For example, a dual-
core processor with hyperthreading enabled has two cores and four logical processors.
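The same counts are also available programmatically. The following is a minimal pyVmomi sketch; connection handling and host lookup are omitted, and the snippet is an outline rather than a complete script.

from pyVmomi import vim

def print_processor_info(host: vim.HostSystem) -> None:
    """Print physical package, core, and logical processor counts for an ESXi host."""
    hw = host.summary.hardware
    print("CPU model:          ", hw.cpuModel)
    print("Physical packages:  ", hw.numCpuPkgs)
    print("Physical cores:     ", hw.numCpuCores)
    print("Logical processors: ", hw.numCpuThreads)
    # On a hyperthreaded host, numCpuThreads is twice numCpuCores.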
Specifying CPU Configuration
You can specify CPU configuration to improve resource management. However, if you do not customize
CPU configuration, the ESXi host uses defaults that work well in most situations.
You can specify CPU configuration in the following ways:
- Use the attributes and special features available through the vSphere Client. The vSphere Client allows you to connect to the ESXi host or a vCenter Server system.
- Use advanced settings under certain circumstances.
- Use the vSphere SDK for scripted CPU allocation.
- Use hyperthreading.
Multicore Processors
Multicore processors provide many advantages for a host performing multitasking of virtual machines.
Note In this topic, "Memory" can refer to physical RAM or Persistent Memory.
Intel and AMD have developed processors which combine two or more processor cores into a single
integrated circuit (often called a package or socket). VMware uses the term socket to describe a single
package which can have one or more processor cores with one or more logical processors in each core.
A dual-core processor, for example, provides almost double the performance of a single-core processor,
by allowing two virtual CPUs to run at the same time. Cores within the same processor are typically
configured with a shared last-level cache used by all cores, potentially reducing the need to access
slower main memory. A shared memory bus that connects a physical processor to main memory can limit
performance of its logical processors when the virtual machines running on them are running memory-
intensive workloads which compete for the same memory bus resources.
Each logical processor of each processor core is used independently by the ESXi CPU scheduler to run
virtual machines, providing capabilities similar to SMP systems. For example, a two-way virtual machine
can have its virtual processors running on logical processors that belong to the same core, or on logical
processors on different physical cores.
The ESXi CPU scheduler can detect the processor topology and the relationships between processor
cores and the logical processors on them. It uses this information to schedule virtual machines and
optimize performance.
The ESXi CPU scheduler can interpret processor topology, including the relationship between sockets,
cores, and logical processors. The scheduler uses topology information to optimize the placement of
virtual CPUs onto different sockets. This optimization can maximize overall cache usage and improve
cache affinity by minimizing virtual CPU migrations.
Hyperthreading
Hyperthreading technology allows a single physical processor core to behave like two logical processors.
The processor can run two independent applications at the same time. To avoid confusion between
logical and physical processors, Intel refers to a physical processor as a socket, and the discussion in this
chapter uses that terminology as well.
Intel Corporation developed hyperthreading technology to enhance the performance of its Pentium IV and
Xeon processor lines. Hyperthreading technology allows a single processor core to execute two
independent threads simultaneously.
While hyperthreading does not double the performance of a system, it can increase performance by
better utilizing idle resources, leading to greater throughput for certain important workload types. An
application running on one logical processor of a busy core can expect slightly more than half of the
throughput that it obtains while running alone on a non-hyperthreaded processor. Hyperthreading
performance improvements are highly application-dependent, and some applications might see
performance degradation with hyperthreading because many processor resources (such as the cache)
are shared between logical processors.
Note On processors with Intel Hyper-Threading technology, each core can have two logical processors
which share most of the core's resources, such as memory caches and functional units. Such logical
processors are usually called threads.
Many processors do not support hyperthreading and as a result have only one thread per core. For such
processors, the number of cores also matches the number of logical processors. The following
processors support hyperthreading and have two threads per core.
- Processors based on the Intel Xeon 5500 processor microarchitecture.
- Intel Pentium 4 (HT-enabled)
- Intel Pentium EE 840 (HT-enabled)
Hyperthreading and ESXi Hosts
A host that is enabled for hyperthreading should behave similarly to a host without hyperthreading. You
might need to consider certain factors if you enable hyperthreading, however.
ESXi hosts manage processor time intelligently to guarantee that load is spread smoothly across
processor cores in the system. Logical processors on the same core have consecutive CPU numbers, so
that CPUs 0 and 1 are on the first core together, CPUs 2 and 3 are on the second core, and so on. Virtual
machines are preferentially scheduled on two different cores rather than on two logical processors on the
same core.
If there is no work for a logical processor, it is put into a halted state, which frees its execution resources
and allows the virtual machine running on the other logical processor on the same core to use the full
execution resources of the core. The VMware scheduler properly accounts for this halt time, and charges
a virtual machine running with the full resources of a core more than a virtual machine running on a half
core. This approach to processor management ensures that the server does not violate any of the
standard ESXi resource allocation rules.
Consider your resource management needs before you enable CPU affinity on hosts using
hyperthreading. For example, if you bind a high priority virtual machine to CPU 0 and another high priority
virtual machine to CPU 1, the two virtual machines have to share the same physical core. In this case, it
can be impossible to meet the resource demands of these virtual machines. Ensure that any custom
affinity settings make sense for a hyperthreaded system.
Enable Hyperthreading
To enable hyperthreading, you must first enable it in your system's BIOS settings and then turn it on in the
vSphere Client. Hyperthreading is enabled by default.
Consult your system documentation to determine whether your CPU supports hyperthreading.
Procedure
1 Ensure that your system supports hyperthreading technology.
2 Enable hyperthreading in the system BIOS.
Some manufacturers label this option Logical Processor, while others call it Enable
Hyperthreading.
3 Ensure that hyperthreading is enabled for the ESXi host.
a Browse to the host in the vSphere Client.
b Click Configure.
c Under System, click Advanced System Settings and select VMkernel.Boot.hyperthreading.
You must restart the host for the setting to take effect. Hyperthreading is enabled if the value is
true.
4 Under Hardware, click Processors to view the number of Logical processors.
Hyperthreading is enabled.
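You can also verify the state from a script. The following read-only pyVmomi sketch omits host lookup; the property and option names match the procedure above, but confirm them against your environment.

from pyVmomi import vim

def hyperthreading_status(host: vim.HostSystem) -> None:
    """Report whether hyperthreading is available and active on an ESXi host."""
    ht = host.config.hyperThread          # HostHyperThreadScheduleInfo
    print("Hyperthreading available:", ht.available)
    print("Hyperthreading active:   ", ht.active)

    # The same boot-time option shown in the procedure above:
    opts = host.configManager.advancedOption.QueryOptions("VMkernel.Boot.hyperthreading")
    if opts:
        print("VMkernel.Boot.hyperthreading =", opts[0].value)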
Using CPU Affinity
By specifying a CPU affinity setting for each virtual machine, you can restrict the assignment of virtual
machines to a subset of the available processors in multiprocessor systems. By using this feature, you
can assign each virtual machine to processors in the specified affinity set.
CPU affinity specifies virtual machine-to-processor placement constraints and is different from the
relationship created by a VM-VM or VM-Host affinity rule, which specifies virtual machine-to-virtual
machine host placement constraints.
In this context, the term CPU refers to a logical processor on a hyperthreaded system and refers to a core
on a non-hyperthreaded system.
The CPU affinity setting for a virtual machine applies to all of the virtual CPUs associated with the virtual
machine and to all other threads (also known as worlds) associated with the virtual machine. Such virtual
machine threads perform processing required for emulating mouse, keyboard, screen, CD-ROM, and
miscellaneous legacy devices.
In some cases, such as display-intensive workloads, significant communication might occur between the
virtual CPUs and these other virtual machine threads. Performance might degrade if the virtual machine's
affinity setting prevents these additional threads from being scheduled concurrently with the virtual
machine's virtual CPUs. Examples of this include a uniprocessor virtual machine with affinity to a single
CPU or a two-way SMP virtual machine with affinity to only two CPUs.
For the best performance, when you use manual affinity settings, VMware recommends that you include
at least one additional physical CPU in the affinity setting to allow at least one of the virtual machine's
threads to be scheduled at the same time as its virtual CPUs. Examples of this include a uniprocessor
virtual machine with affinity to at least two CPUs or a two-way SMP virtual machine with affinity to at least
three CPUs.
Assign a Virtual Machine to a Specific Processor
Using CPU affinity, you can assign a virtual machine to a specific processor. This allows you to restrict the
assignment of virtual machines to a specific available processor in multiprocessor systems.
Procedure
1 Browse to the virtual machine in the vSphere Client.
a To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
b Select VMs.
2 Right-click the virtual machine and click Edit Settings.
3 Under Virtual Hardware, expand CPU.
4 Under Scheduling Affinity, select physical processor affinity for the virtual machine.
Use '-' for ranges and ',' to separate values.
For example, "0, 2, 4-7" would indicate processors 0, 2, 4, 5, 6 and 7.
5 Select the processors where you want the virtual machine to run and click OK.
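The same affinity setting can be applied through the vSphere API. The following is a minimal pyVmomi sketch; the virtual machine lookup is omitted and the helper names are illustrative. Note that scheduling affinity is typically unavailable for virtual machines in a fully automated DRS cluster.

from pyVmomi import vim

def parse_affinity(spec_text: str) -> list:
    """Expand a string such as "0, 2, 4-7" into [0, 2, 4, 5, 6, 7]."""
    cpus = []
    for part in spec_text.replace(" ", "").split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def set_cpu_affinity(vm: vim.VirtualMachine, spec_text: str):
    """Pin the VM's scheduling affinity to the given logical processors."""
    spec = vim.vm.ConfigSpec()
    spec.cpuAffinity = vim.vm.AffinityInfo(affinitySet=parse_affinity(spec_text))
    return vm.ReconfigVM_Task(spec=spec)

# Example: restrict the VM to logical processors 0, 2, and 4 through 7.
# set_cpu_affinity(vm, "0, 2, 4-7")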
Potential Issues with CPU Affinity
Before you use CPU affinity, you might need to consider certain issues.
Potential issues with CPU affinity include:
- For multiprocessor systems, ESXi systems perform automatic load balancing. Avoid manual specification of virtual machine affinity to improve the scheduler’s ability to balance load across processors.
- Affinity can interfere with the ESXi host’s ability to meet the reservation and shares specified for a virtual machine.
- Because CPU admission control does not consider affinity, a virtual machine with manual affinity settings might not always receive its full reservation. Virtual machines that do not have manual affinity settings are not adversely affected by virtual machines with manual affinity settings.
- When you move a virtual machine from one host to another, affinity might no longer apply because the new host might have a different number of processors.
- The NUMA scheduler might not be able to manage a virtual machine that is already assigned to certain processors using affinity.
- Affinity can affect the host's ability to schedule virtual machines on multicore or hyperthreaded processors to take full advantage of resources shared on such processors.
Host Power Management Policies
ESXi can take advantage of several power management features that the host hardware provides to
adjust the balance between performance and power. You can control how ESXi uses these features by
selecting a power management policy.
Selecting a high-performance policy provides more absolute performance, but at lower efficiency and
performance per watt. Low-power policies provide less absolute performance, but at higher efficiency.
You can select a policy for the host that you manage by using the VMware Host Client. If you do not
select a policy, ESXi uses Balanced by default.
Table 4-1. CPU Power Management Policies

Power Management Policy   Description
High Performance          Do not use any power management features.
Balanced (Default)        Reduce energy consumption with minimal performance compromise.
Low Power                 Reduce energy consumption at the risk of lower performance.
Custom                    User-defined power management policy. Advanced configuration becomes available.
When a CPU runs at lower frequency, it can also run at lower voltage, which saves power. This type of
power management is typically called Dynamic Voltage and Frequency Scaling (DVFS). ESXi attempts to
adjust CPU frequencies so that virtual machine performance is not affected.
When a CPU is idle, ESXi can apply deep halt states, also known as C-states. The deeper the C-state,
the less power the CPU uses, but it also takes longer for the CPU to start running again. When a CPU
becomes idle, ESXi applies an algorithm to predict the idle state duration and chooses an appropriate C-
state to enter. In power management policies that do not use deep C-states, ESXi uses only the
shallowest halt state for idle CPUs, C1.
Select a CPU Power Management Policy
You set the CPU power management policy for a host using the vSphere Client.
Prerequisites
Verify that the BIOS settings on the host system allow the operating system to control power
management (for example, OS Controlled).
Note Some systems have Processor Clocking Control (PCC) technology, which allows ESXi to manage
power on the host system even if the host BIOS settings do not specify OS Controlled mode. With this
technology, ESXi does not manage P-states directly. Instead, the host cooperates with the BIOS to
determine the processor clock rate. HP systems that support this technology have a BIOS setting called
Cooperative Power Management that is enabled by default.
If the host hardware does not allow the operating system to manage power, only the Not Supported policy
is available. (On some systems, only the High Performance policy is available.)
Procedure
1 Browse to the host in the vSphere Client.
2 Click Configure.
3 Under Hardware, select Power Management and click the Edit button.
4 Select a power management policy for the host and click OK.
The policy selection is saved in the host configuration and can be used again at boot time. You can
change it at any time, and it does not require a server reboot.
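The policy can also be selected through the vSphere API. The following pyVmomi sketch omits host lookup; the policy short names shown are typical values and should be verified against the policies your host actually reports.

from pyVmomi import vim

def set_power_policy(host: vim.HostSystem, short_name: str = "dynamic") -> str:
    """Apply a host power management policy by its short name.
    Typical short names: 'static' (High Performance), 'dynamic' (Balanced),
    'low' (Low Power), and 'custom'."""
    capability = host.config.powerSystemCapability
    for policy in capability.availablePolicy:
        if policy.shortName == short_name:
            host.configManager.powerSystem.ConfigurePowerPolicy(policy.key)
            return policy.name
    raise ValueError("Policy '%s' is not offered by this host" % short_name)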
Configure Custom Policy Parameters for Host Power Management
When you use the Custom policy for host power management, ESXi bases its power management policy
on the values of several advanced configuration parameters.
Prerequisites
Select Custom for the power management policy, as described in Select a CPU Power Management
Policy.
Procedure
1 Browse to the host in the vSphere Client.
2 Click Configure.
3 Under System, select Advanced System Settings.
4 In the right pane, you can edit the power management parameters that affect the Custom policy.
Power management parameters that affect the Custom policy have descriptions that begin with In
Custom policy. All other power parameters affect all power management policies.
5 Select the parameter and click the Edit button.
Note The default values of power management parameters match the Balanced policy.
Power.UsePStates
    Use ACPI P-states to save power when the processor is busy.
Power.MaxCpuLoad
    Use P-states to save power on a CPU only when the CPU is busy for less than the given percentage of real time.
Power.MinFreqPct
    Do not use any P-states slower than the given percentage of full CPU speed.
Power.UseStallCtr
    Use a deeper P-state when the processor is frequently stalled waiting for events such as cache misses.
Power.TimerHz
    Controls how many times per second ESXi reevaluates which P-state each CPU should be in.
Power.UseCStates
    Use deep ACPI C-states (C2 or below) when the processor is idle.
Power.CStateMaxLatency
    Do not use C-states whose latency is greater than this value.
Power.CStateResidencyCoef
    When a CPU becomes idle, choose the deepest C-state whose latency multiplied by this value is less than the host's prediction of how long the CPU will remain idle. Larger values make ESXi more conservative about using deep C-states, while smaller values are more aggressive.
Power.CStatePredictionCoef
    A parameter in the ESXi algorithm for predicting how long a CPU that becomes idle will remain idle. Changing this value is not recommended.
Power.PerfBias
    Performance Energy Bias Hint (Intel-only). Sets an MSR on Intel processors to an Intel-recommended value. Intel recommends 0 for high performance, 6 for balanced, and 15 for low power. Other values are undefined.
6 Click OK.
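Under the Custom policy, these parameters are ordinary host advanced options and can be edited through the host's OptionManager, the same mechanism the vSphere Client uses. The following pyVmomi sketch is hedged: the example parameter value is for illustration only, and the exact option value typing can vary, so verify it against your environment before use.

from pyVmomi import vim

def set_host_advanced_option(host: vim.HostSystem, key: str, value) -> None:
    """Set a single host advanced option, such as one of the Power.* parameters
    that make up the Custom power policy."""
    option_manager = host.configManager.advancedOption
    option_manager.UpdateOptions(
        changedValue=[vim.option.OptionValue(key=key, value=value)])

# Illustrative value only: use P-states only when a CPU is less than 50 percent busy.
# set_host_advanced_option(host, "Power.MaxCpuLoad", 50)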
5 Memory Virtualization Basics
Before you manage memory resources, you should understand how they are being virtualized and used
by ESXi.
The VMkernel manages all physical RAM on the host. The VMkernel dedicates part of this managed
physical RAM for its own use. The rest is available for use by virtual machines.
The virtual and physical memory space is divided into blocks called pages. When physical memory is full,
the data for virtual pages that are not present in physical memory is stored on disk. Depending on
processor architecture, pages are typically 4 KB or 2 MB. See Advanced Memory Attributes.
This chapter includes the following topics:
- Virtual Machine Memory
- Memory Overcommitment
- Memory Sharing
- Memory Virtualization
- Support for Large Page Sizes
Virtual Machine Memory
Each virtual machine consumes memory based on its configured size, plus additional overhead memory
for virtualization.
The configured size is the amount of memory that is presented to the guest operating system. This is
different from the amount of physical RAM that is allocated to the virtual machine. The latter depends on
the resource settings (shares, reservation, limit) and the level of memory pressure on the host.
For example, consider a virtual machine with a configured size of 1GB. When the guest operating system
boots, it detects that it is running on a dedicated machine with 1GB of physical memory. In some cases,
the virtual machine might be allocated the full 1GB. In other cases, it might receive a smaller allocation.
Regardless of the actual allocation, the guest operating system continues to behave as though it is
running on a dedicated machine with 1GB of physical memory.
Shares
    Specify the relative priority for a virtual machine if more than the reservation is available.

Reservation
    Is a guaranteed lower bound on the amount of physical RAM that the host reserves for the virtual machine, even when memory is overcommitted. Set the reservation to a level that ensures the virtual machine has sufficient memory to run efficiently, without excessive paging.

    After a virtual machine consumes all of the memory within its reservation, it is allowed to retain that amount of memory and this memory is not reclaimed, even if the virtual machine becomes idle. Some guest operating systems (for example, Linux) might not access all of the configured memory immediately after booting. Until the virtual machine consumes all of the memory within its reservation, VMkernel can allocate any unused portion of its reservation to other virtual machines. However, after the guest’s workload increases and the virtual machine consumes its full reservation, it is allowed to keep this memory.

Limit
    Is an upper bound on the amount of physical RAM that the host can allocate to the virtual machine. The virtual machine’s memory allocation is also implicitly limited by its configured size.
Memory Overcommitment
For each running virtual machine, the system reserves physical RAM for the virtual machine’s reservation
(if any) and for its virtualization overhead.
The total configured memory size of all virtual machines might exceed the amount of available physical
memory on the host. However, that does not necessarily mean memory is overcommitted. Memory is
overcommitted when the combined working memory footprint of all virtual machines exceeds the host
memory size.
Because of the memory management techniques the ESXi host uses, your virtual machines can use
more virtual RAM than there is physical RAM available on the host. For example, you can have a host
with 2GB memory and run four virtual machines with 1GB memory each. In that case, the memory is
overcommitted. For instance, if all four virtual machines are idle, the combined consumed memory may
be well below 2GB. However, if all four 1GB virtual machines are actively consuming memory, their
memory footprint may exceed 2GB and the ESXi host becomes overcommitted.
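The distinction between configured overcommitment and actual overcommitment can be made explicit with a small plain-Python sketch using the figures from the example above; the function is illustrative only.

def is_overcommitted(host_memory_gb, working_sets_gb):
    """Memory is overcommitted only when the combined working sets exceed host memory."""
    return sum(working_sets_gb) > host_memory_gb

host_gb = 2
# Four 1GB virtual machines, mostly idle: configured memory (4GB) exceeds host memory,
# but the combined working set does not, so the host is not actively overcommitted.
print(is_overcommitted(host_gb, [0.3, 0.4, 0.2, 0.3]))   # False
# The same virtual machines under load:
print(is_overcommitted(host_gb, [0.9, 0.8, 0.7, 0.9]))   # True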
Overcommitment makes sense because, typically, some virtual machines are lightly loaded while others
are more heavily loaded, and relative activity levels vary over time.
To improve memory utilization, the ESXi host transfers memory from idle virtual machines to virtual
machines that need more memory. Use the Reservation or Shares parameter to preferentially allocate
memory to important virtual machines. This memory remains available to other virtual machines if it is not
in use. ESXi implements various mechanisms such as ballooning, memory sharing, memory compression
and swapping to provide reasonable performance even if the host is not heavily memory overcommitted.
An ESXi host can run out of memory if virtual machines consume all reservable memory in a memory
overcommitted environment. Although the powered on virtual machines are not affected, a new virtual
machine might fail to power on due to lack of memory.
Note All virtual machine memory overhead is also considered reserved.
In addition, memory compression is enabled by default on ESXi hosts to improve virtual machine
performance when memory is overcommitted as described in Memory Compression.
Memory Sharing
Memory sharing is a proprietary ESXi technique that can help achieve greater memory density on a host.
Memory sharing relies on the observation that several virtual machines might be running instances of the
same guest operating system. These virtual machines might have the same applications or components
loaded, or contain common data. In such cases, a host uses a proprietary Transparent Page Sharing
(TPS) technique to eliminate redundant copies of memory pages. With memory sharing, a workload
running on a virtual machine often consumes less memory than it might when running on physical
machines. As a result, higher levels of overcommitment can be supported efficiently. The amount of
memory saved by memory sharing depends on the workload. A workload of nearly identical virtual
machines might free up more memory, while a more diverse workload might result in a lower percentage
of memory savings.
Note Due to security concerns, inter-virtual machine transparent page sharing is disabled by default and
page sharing is being restricted to intra-virtual machine memory sharing. Page sharing does not occur
across virtual machines and only occurs inside a virtual machine. See Sharing Memory Across Virtual
Machines for more information.
Memory Virtualization
Because of the extra level of memory mapping introduced by virtualization, ESXi can effectively manage
memory across all virtual machines.
Some of the physical memory of a virtual machine might be mapped to shared pages or to pages that are
unmapped, or swapped out.
A host performs virtual memory management without the knowledge of the guest operating system and
without interfering with the guest operating system’s own memory management subsystem.