This document supports the version of each product listed and
supports all subsequent versions until the document is replaced
by a new edition. To check for more recent editions of this
document, see http://www.vmware.com/support/pubs.
EN-000107-02
vSphere Resource Management Guide
You can find the most up-to-date technical documentation on the VMware Web site at:
http://www.vmware.com/support/
The VMware Web site also provides the latest product updates.
If you have comments about this documentation, submit your feedback to: docfeedback@vmware.com
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Contents

Updated Information 5

About This Book 7

1 Getting Started with Resource Management 9
    What Is Resource Management? 9
    Configuring Resource Allocation Settings 10
    Viewing Resource Allocation Information 13
    Admission Control 16

2 Managing CPU Resources 17
    CPU Virtualization Basics 17
    Administering CPU Resources 18

3 Managing Memory Resources 25
    Memory Virtualization Basics 25
    Administering Memory Resources 28

4 Managing Resource Pools 37
    Why Use Resource Pools? 38
    Create Resource Pools 39
    Add Virtual Machines to a Resource Pool 40
    Removing Virtual Machines from a Resource Pool 41
    Resource Pool Admission Control 41

5 Creating a DRS Cluster 45
    Admission Control and Initial Placement 46
    Virtual Machine Migration 47
    DRS Cluster Prerequisites 49
    Create a DRS Cluster 50
    Set a Custom Automation Level for a Virtual Machine 51
    Disable DRS 51

6 Using DRS Clusters to Manage Resources 53
    Using DRS Rules 53
    Adding Hosts to a Cluster 55
    Adding Virtual Machines to a Cluster 56
    Remove Hosts from a Cluster 56
    Removing Virtual Machines from a Cluster 57
    DRS Cluster Validity 58
    Managing Power Resources 62

7 Viewing DRS Cluster Information 67
    Viewing the Cluster Summary Tab 67
    Using the DRS Tab 69

8 Using NUMA Systems with ESX/ESXi 73
    What is NUMA? 73
    How ESX/ESXi NUMA Scheduling Works 74
    VMware NUMA Optimization Algorithms and Settings 75
    Resource Management in NUMA Architectures 76
    Specifying NUMA Controls 77

A Performance Monitoring Utilities: resxtop and esxtop 81
    Using the esxtop Utility 81
    Using the resxtop Utility 81
    Using esxtop or resxtop in Interactive Mode 82
    Using Batch Mode 96
    Using Replay Mode 97

B Advanced Attributes 99
    Set Advanced Host Attributes 99
    Set Advanced Virtual Machine Attributes 101

Index 103
Updated Information
This vSphere Resource Management Guide is updated with each release of the product or when necessary.
This table provides the update history of the vSphere Resource Management Guide.
Revision       Description
EN-000107-02   Added information to the "Multicore Processors," on page 19 section.
EN-000107-01   Removed references to CPU.MachineClearThreshold because this advanced CPU attribute is not available through the vSphere Client.
EN-000107-00   Initial release.
About This Book
The vSphere Resource Management Guide describes resource management for VMware® ESX™, ESXi, and
VMware vCenter™ Server environments.
This guide focuses on the following topics:

- Resource allocation and resource management concepts
- Virtual machine attributes and admission control
- Resource pools and how to manage them
- Clusters, VMware Distributed Resource Scheduler (DRS), VMware Distributed Power Management (DPM), and how to work with them
- Advanced resource management options
- Performance considerations
The vSphere Resource Management Guide covers ESX, ESXi, and vCenter Server.
Intended Audience
This manual is for system administrators who want to understand how the system manages resources and
how they can customize the default behavior. It’s also essential for anyone who wants to understand and use
resource pools, clusters, DRS, or VMware DPM.
This manual assumes you have a working knowledge of VMware ESX and VMware ESXi and of vCenter
Server.
Document Feedback
VMware welcomes your suggestions for improving our documentation. If you have comments, send your
feedback to docfeedback@vmware.com.
vSphere Documentation
The VMware vSphere™ documentation consists of the combined vCenter Server and ESX/ESXi documentation
set.
Technical Support and Education Resources
The following technical support resources are available to you. To access the current version of this book and
other books, go to http://www.vmware.com/support/pubs.
Online and Telephone Support
    To use online support to submit technical support requests, view your product and contract information, and register your products, go to http://www.vmware.com/support. Customers with appropriate support contracts should use telephone support for the fastest response on priority 1 issues. Go to http://www.vmware.com/support/phone_support.html.

Support Offerings
    To find out how VMware support offerings can help meet your business needs, go to http://www.vmware.com/support/services.

VMware Professional Services
    VMware Education Services courses offer extensive hands-on labs, case study examples, and course materials designed to be used as on-the-job reference tools. Courses are available onsite, in the classroom, and live online. For onsite pilot programs and implementation best practices, VMware Consulting Services provides offerings to help you assess, plan, build, and manage your virtual environment. To access information about education classes, certification programs, and consulting services, go to http://www.vmware.com/services.
Chapter 1 Getting Started with Resource Management
To understand resource management, you must be aware of its components, its goals, and how best to
implement it in a cluster setting.
Resource allocation settings for a virtual machine (shares, reservation, and limit) are discussed, including how to set them and how to view them. Admission control, the process whereby resource allocation settings are validated against existing resources, is also explained.
This chapter includes the following topics:

- "What Is Resource Management?," on page 9
- "Configuring Resource Allocation Settings," on page 10
- "Viewing Resource Allocation Information," on page 13
- "Admission Control," on page 16
What Is Resource Management?
Resource management is the allocation of resources from resource providers to resource consumers.
The need for resource management arises from the overcommitment of resources (that is, more demand than capacity) and from the fact that demand and capacity vary over time. Resource management allows you to dynamically reallocate resources, so that you can use available capacity more efficiently.
Resource Types
Resources include CPU, memory, power, storage, and network resources.
Resource management in this context focuses primarily on CPU and memory resources. Power resource
consumption can also be reduced with the VMware® Distributed Power Management (DPM) feature.
NOTE ESX/ESXi manages network bandwidth and disk resources on a per-host basis, using network traffic
shaping and a proportional share mechanism, respectively.
Resource Providers
Hosts and clusters are providers of physical resources.
For hosts, available resources are the host’s hardware specification, minus the resources used by the
virtualization software.
A cluster is a group of hosts. You can create a cluster using VMware® vCenter Server, and add multiple hosts
to the cluster. vCenter Server manages these hosts’ resources jointly: the cluster owns all of the CPU and
memory of all hosts. You can enable the cluster for joint load balancing or failover. See Chapter 5, “Creating a
DRS Cluster,” on page 45 for more information.
Resource Consumers
Virtual machines are resource consumers.
The default resource settings assigned during creation work well for most machines. You can later edit the
virtual machine settings to allocate a share-based percentage of the total CPU and memory of the resource
provider or a guaranteed reservation of CPU and memory. When you power on that virtual machine, the server
checks whether enough unreserved resources are available and allows power on only if there are enough
resources. This process is called admission control.
A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped
into hierarchies and used to hierarchically partition available CPU and memory resources. Accordingly,
resource pools can be considered both resource providers and consumers. They provide resources to child
resource pools and virtual machines, but are also resource consumers because they consume their parents’
resources. See Chapter 4, “Managing Resource Pools,” on page 37.
An ESX/ESXi host allocates each virtual machine a portion of the underlying hardware resources based on a number of factors:

- Total available resources for the ESX/ESXi host (or the cluster).
- Number of virtual machines powered on and resource usage by those virtual machines.
- Overhead required to manage the virtualization.
- Resource limits defined by the user.
Goals of Resource Management
When managing your resources, you should be aware of what your goals are.
In addition to resolving resource overcommitment, resource management can help you accomplish the following:

- Performance Isolation—prevent virtual machines from monopolizing resources and guarantee predictable service rates.
- Efficient Utilization—exploit undercommitted resources and overcommit with graceful degradation.
- Easy Administration—control the relative importance of virtual machines, provide flexible dynamic partitioning, and meet absolute service-level agreements.
Configuring Resource Allocation Settings
When available resource capacity does not meet the demands of the resource consumers (and virtualization
overhead), administrators might need to customize the amount of resources that are allocated to virtual
machines or to the resource pools in which they reside.
Use the resource allocation settings (shares, reservation, and limit) to determine the amount of CPU and
memory resources provided for a virtual machine. In particular, administrators have several options for
allocating resources.
- Reserve the physical resources of the host or cluster.
- Ensure that a certain amount of memory for a virtual machine is provided by the physical memory of the ESX/ESXi machine.
- Guarantee that a particular virtual machine is always allocated a higher percentage of the physical resources than other virtual machines.
- Set an upper bound on the resources that can be allocated to a virtual machine.
Resource Allocation Shares
Shares specify the relative priority or importance of a virtual machine (or resource pool). If a virtual machine
has twice as many shares of a resource as another virtual machine, it is entitled to consume twice as much of
that resource when these two virtual machines are competing for resources.
Shares are typically specified as High, Normal, or Low and these values specify share values with a 4:2:1 ratio,
respectively. You can also select Custom to assign a specific number of shares (which expresses a proportional
weight) to each virtual machine.
Specifying shares makes sense only with regard to sibling virtual machines or resource pools, that is, virtual
machines or resource pools with the same parent in the resource pool hierarchy. Siblings share resources
according to their relative share values, bounded by the reservation and limit. When you assign shares to a
virtual machine, you always specify the priority for that virtual machine relative to other powered-on virtual
machines.
The following table shows the default CPU and memory share values for a virtual machine. For resource pools,
the default CPU and memory share values are the same, but must be multiplied as if the resource pool were
a virtual machine with four VCPUs and 16 GB of memory.
Table 1-1. Share Values

Setting   CPU share values               Memory share values
High      2000 shares per virtual CPU    20 shares per megabyte of configured virtual machine memory
Normal    1000 shares per virtual CPU    10 shares per megabyte of configured virtual machine memory
Low       500 shares per virtual CPU     5 shares per megabyte of configured virtual machine memory
For example, an SMP virtual machine with two virtual CPUs and 1GB RAM with CPU and memory shares set
to Normal has 2x1000=2000 shares of CPU and 10x1024=10240 shares of memory.
NOTE Virtual machines with more than one virtual CPU are called SMP (symmetric multiprocessing) virtual
machines. ESX/ESXi supports up to eight virtual CPUs per virtual machine. This is also called eight-way SMP
support.
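The default share arithmetic from Table 1-1 can be sketched in a few lines of Python. This is an illustrative helper, not VMware code; the function name is invented for this example:

```python
CPU_SHARES = {"High": 2000, "Normal": 1000, "Low": 500}  # per virtual CPU
MEM_SHARES = {"High": 20, "Normal": 10, "Low": 5}        # per MB of configured memory

def default_shares(setting, num_vcpus, memory_mb):
    """Return (cpu_shares, memory_shares) for a VM at a given setting."""
    return CPU_SHARES[setting] * num_vcpus, MEM_SHARES[setting] * memory_mb

# SMP VM with two virtual CPUs and 1GB RAM, shares set to Normal:
print(default_shares("Normal", 2, 1024))   # (2000, 10240)
```

The result matches the worked example in the text: 2x1000=2000 CPU shares and 10x1024=10240 memory shares.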
The relative priority represented by each share changes when a new virtual machine is powered on. This affects all virtual machines in the same resource pool. Consider the following examples, in which all of the virtual machines have the same number of VCPUs.
- Two CPU-bound virtual machines run on a host with 8GHz of aggregate CPU capacity. Their CPU shares are set to Normal and they get 4GHz each.
- A third CPU-bound virtual machine is powered on. Its CPU shares value is set to High, which means it should have twice as many shares as the machines set to Normal. The new virtual machine receives 4GHz and the two other machines get only 2GHz each. The same result occurs if the user specifies a custom share value of 2000 for the third virtual machine.
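The proportional division in the examples above can be modeled as follows. This is a minimal sketch of the arithmetic, not VMware's actual scheduler; the function name is hypothetical:

```python
def allocate_by_shares(capacity_mhz, shares):
    """Divide CPU capacity among competing VMs in proportion to their shares."""
    total = sum(shares.values())
    return {vm: capacity_mhz * s / total for vm, s in shares.items()}

# Two Normal (1000-share) VMs on an 8GHz host: 4000 MHz each.
print(allocate_by_shares(8000, {"vm1": 1000, "vm2": 1000}))

# Power on a third VM set to High (2000 shares): it receives 4000 MHz,
# and the two Normal VMs drop to 2000 MHz each.
print(allocate_by_shares(8000, {"vm1": 1000, "vm2": 1000, "vm3": 2000}))
```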
Resource Allocation Reservation
A reservation specifies the guaranteed minimum allocation for a virtual machine.
vCenter Server or ESX/ESXi allows you to power on a virtual machine only if there are enough unreserved
resources to satisfy the reservation of the virtual machine. The server guarantees that amount even when the
physical server is heavily loaded. The reservation is expressed in concrete units (megahertz or megabytes).
For example, assume you have 2GHz available and specify a reservation of 1GHz for VM1 and 1GHz for VM2.
Now each virtual machine is guaranteed to get 1GHz if it needs it. However, if VM1 is using only 500MHz,
VM2 can use 1.5GHz.
Reservation defaults to 0. You can specify a reservation if you need to guarantee that the minimum required
amounts of CPU or memory are always available for the virtual machine.
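The behavior in the example above, guaranteed minimums with unused reservation capacity available to others, can be sketched numerically. This is a simplified single-round model with equal shares assumed, not the actual ESX scheduler:

```python
def entitlement(capacity_mhz, demands, reservations):
    """Give each VM at least min(demand, reservation), then hand the
    leftover capacity to VMs that still demand more (simplified:
    one redistribution round, equal split among the wanting VMs)."""
    alloc = {vm: min(demands[vm], reservations[vm]) for vm in demands}
    leftover = capacity_mhz - sum(alloc.values())
    wanting = [vm for vm in demands if demands[vm] > alloc[vm]]
    for vm in wanting:
        alloc[vm] += min(demands[vm] - alloc[vm], leftover / len(wanting))
    return alloc

# VM1 uses only 500 MHz of its 1GHz reservation, so VM2 can use 1.5GHz.
print(entitlement(2000, {"VM1": 500, "VM2": 2000},
                        {"VM1": 1000, "VM2": 1000}))
```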
Resource Allocation Limit
Limit specifies an upper bound for CPU or memory resources that can be allocated to a virtual machine.
A server can allocate more than the reservation to a virtual machine, but never allocates more than the limit,
even if there is unutilized CPU or memory on the system. The limit is expressed in concrete units (megahertz
or megabytes).
CPU and memory limit default to unlimited. When the memory limit is unlimited, the amount of memory
configured for the virtual machine when it was created becomes its effective limit in most cases.
In most cases, it is not necessary to specify a limit. There are benefits and drawbacks:

- Benefits — Assigning a limit is useful if you start with a small number of virtual machines and want to manage user expectations. Performance deteriorates as you add more virtual machines. You can simulate having fewer resources available by specifying a limit.
- Drawbacks — You might waste idle resources if you specify a limit. The system does not allow virtual machines to use more resources than the limit, even when the system is underutilized and idle resources are available. Specify the limit only if you have good reasons for doing so.
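Taken together, reservation, shares, and limit bound a virtual machine's allocation roughly as min(limit, max(reservation, share-based entitlement)). The following is an illustrative approximation of that bounding, not the exact scheduler logic:

```python
def clamp_allocation(entitled_mhz, reservation_mhz, limit_mhz=None):
    """A VM receives at least its reservation and never more than its
    limit; within those bounds, shares decide the entitlement.
    limit_mhz=None models the default 'unlimited' setting."""
    alloc = max(entitled_mhz, reservation_mhz)
    if limit_mhz is not None:
        alloc = min(alloc, limit_mhz)
    return alloc

print(clamp_allocation(1800, reservation_mhz=500))                  # 1800
print(clamp_allocation(1800, reservation_mhz=500, limit_mhz=1000))  # 1000
print(clamp_allocation(300, reservation_mhz=500))                   # 500
```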
Resource Allocation Settings Suggestions
Select resource allocation settings (shares, reservation, and limit) that are appropriate for your ESX/ESXi
environment.
The following guidelines can help you achieve better performance for your virtual machines.
- If you expect frequent changes to the total available resources, use Shares to allocate resources fairly across virtual machines. If you use Shares, and you upgrade the host, for example, each virtual machine stays at the same priority (keeps the same number of shares) even though each share represents a larger amount of memory or CPU.
- Use Reservation to specify the minimum acceptable amount of CPU or memory, not the amount you want to have available. The host assigns additional resources as available based on the number of shares, estimated demand, and the limit for your virtual machine. The amount of concrete resources represented by a reservation does not change when you change the environment, such as by adding or removing virtual machines.
- When specifying the reservations for virtual machines, do not commit all resources (plan to leave at least 10% unreserved). As you move closer to fully reserving all capacity in the system, it becomes increasingly difficult to make changes to reservations and to the resource pool hierarchy without violating admission control. In a DRS-enabled cluster, reservations that fully commit the capacity of the cluster or of individual hosts in the cluster can prevent DRS from migrating virtual machines between hosts.
Changing Resource Allocation Settings—Example
The following example illustrates how you can change resource allocation settings to improve virtual machine
performance.
Assume that on an ESX/ESXi host, you have created two new virtual machines—one each for your QA (VM-QA) and Marketing (VM-Marketing) departments.

Figure 1-1. Single Host with Two Virtual Machines
In the following example, assume that VM-QA is memory intensive and accordingly you want to change the
resource allocation settings for the two virtual machines to:
- Specify that, when system memory is overcommitted, VM-QA can use twice as much memory and CPU as the Marketing virtual machine. Set the memory shares and CPU shares for VM-QA to High and for VM-Marketing set them to Normal.
- Ensure that the Marketing virtual machine has a certain amount of guaranteed CPU resources. You can do so using a reservation setting.
Procedure

1. Start the vSphere Client and connect to a vCenter Server.
2. Right-click VM-QA, the virtual machine for which you want to change shares, and select Edit Settings.
3. Select the Resources tab and, in the CPU panel, select High from the Shares drop-down menu.
4. In the Memory panel, select High from the Shares drop-down menu.
5. Click OK.
6. Right-click the marketing virtual machine (VM-Marketing) and select Edit Settings.
7. In the CPU panel, change the value in the Reservation field to the desired number.
8. Click OK.
If you select the cluster’s Resource Allocation tab and click CPU, you should see that shares for VM-QA are
twice that of the other virtual machine. Also, because the virtual machines have not been powered on, the
Reservation Used fields have not changed.
Viewing Resource Allocation Information
Using the vSphere Client, you can select a cluster, resource pool, standalone host, or a virtual machine in the
inventory panel and view how its resources are being allocated by clicking the Resource Allocation tab.
This information can then be used to help inform your resource management decisions.
Cluster Resource Allocation Tab
The Resource Allocation tab is available when you select a cluster from the inventory panel.
The Resource Allocation tab displays information about the CPU and memory resources in the cluster.
CPU Section
The following information about CPU resource allocation is shown:
Table 1-2. CPU Resource Allocation

Field                Description
Total Capacity       Guaranteed CPU allocation, in megahertz (MHz), reserved for this object.
Reserved Capacity    Number of megahertz (MHz) of the reserved allocation that this object is using.
Available Capacity   Number of megahertz (MHz) not reserved.
Memory Section
The following information about memory resource allocation is shown:
Table 1-3. Memory Resource Allocation

Field                  Description
Total Capacity         Guaranteed memory allocation, in megabytes (MB), for this object.
Reserved Capacity      Number of megabytes (MB) of the reserved allocation that this object is using.
Overhead Reservation   The amount of the "Reserved Capacity" field that is being reserved for virtualization overhead.
Available Capacity     Number of megabytes (MB) not reserved.
NOTE Reservations for the root resource pool of a cluster that is enabled for VMware HA might be larger than
the sum of the explicitly-used resources in the cluster. These reservations not only reflect the reservations for
the running virtual machines and the hierarchically-contained (child) resource pools in the cluster, but also
the reservations needed to support VMware HA failover. See the vSphere Availability Guide.
The Resource Allocation tab also displays a chart showing the resource pools and virtual machines in the DRS
cluster with the following CPU or memory usage information. To view CPU or memory information, click the
CPU button or Memory button, respectively.
Table 1-4. CPU or Memory Usage Information

Field                   Description
Name                    Name of the object.
Reservation - MHz       Guaranteed minimum CPU allocation, in megahertz (MHz), reserved for this object.
Reservation - MB        Guaranteed minimum memory allocation, in megabytes (MB), for this object.
Limit - MHz             Maximum amount of CPU the object can use.
Limit - MB              Maximum amount of memory the object can use.
Shares                  A relative metric for allocating CPU or memory capacity. The values Low, Normal, High, and Custom are compared to the sum of all shares of all virtual machines in the enclosing resource pool.
Shares Value            Actual value based on resource and object settings.
% Shares                Percentage of cluster resources assigned to this object.
Worst Case Allocation   The amount of (CPU or memory) resource that is allocated to the virtual machine based on user-configured resource allocation policies (for example, reservation, shares, and limit), and with the assumption that all virtual machines in the cluster consume their full amount of allocated resources. The values for this field must be updated manually by pressing the F5 key.
Type                    Type of reserved CPU or memory allocation, either Expandable or Fixed.
Virtual Machine Resource Allocation Tab
A Resource Allocation tab is available when you select a virtual machine from the inventory panel.
This Resource Allocation tab displays information about the CPU and memory resources for the selected
virtual machine.
CPU Section
These bars display the following information about host CPU usage:
Table 1-5. Host CPU

Field      Description
Consumed   Actual consumption of CPU resources by the virtual machine.
Active     Estimated amount of resources consumed by the virtual machine if there is no resource contention. If you have set an explicit limit, this amount does not exceed that limit.
Table 1-6. Resource Settings

Field                   Description
Reservation             Guaranteed minimum CPU allocation for this virtual machine.
Limit                   Maximum CPU allocation for this virtual machine.
Shares                  CPU shares for this virtual machine.
Worst Case Allocation   The amount of (CPU or memory) resource that is allocated to the virtual machine based on user-configured resource allocation policies (for example, reservation, shares, and limit), and with the assumption that all virtual machines in the cluster consume their full amount of allocated resources.
Memory Section
These bars display the following information about host memory usage:
Table 1-7. Host Memory

Field                  Description
Consumed               Actual consumption of physical memory that has been allocated to the virtual machine.
Overhead Consumption   Amount of consumed memory being used for virtualization purposes. Overhead Consumption is included in the amount shown in Consumed.
These bars display the following information about guest memory usage:
Table 1-8. Guest Memory

Field        Description
Private      Amount of memory backed by host memory and not being shared.
Shared       Amount of memory being shared.
Swapped      Amount of memory reclaimed by swapping.
Ballooned    Amount of memory reclaimed by ballooning.
Unaccessed   Amount of memory never referenced by the guest.
Active       Amount of memory recently accessed.
Table 1-9. Resource Settings

Field                   Description
Reservation             Guaranteed memory allocation for this virtual machine.
Limit                   Upper limit for this virtual machine's memory allocation.
Worst Case Allocation   The amount of (CPU or memory) resource that is allocated to the virtual machine based on user-configured resource allocation policies (for example, reservation, shares, and limit), and with the assumption that all virtual machines in the cluster consume their full amount of allocated resources.
Overhead Reservation    The amount of memory that is being reserved for virtualization overhead.

When you power on a virtual machine, the system checks the amount of CPU and memory resources that have not yet been reserved. Based on the available unreserved resources, the system determines whether it can guarantee the reservation for which the virtual machine is configured (if any). This process is called admission control.

If enough unreserved CPU and memory are available, or if there is no reservation, the virtual machine is powered on. Otherwise, an Insufficient Resources warning appears.
NOTE In addition to the user-specified memory reservation, for each virtual machine there is also an amount
of overhead memory. This extra memory commitment is included in the admission control calculation.
When the VMware DPM feature is enabled, hosts might be placed in standby mode (that is, powered off) to
reduce power consumption. The unreserved resources provided by these hosts are considered available for
admission control. If a virtual machine cannot be powered on without these resources, a recommendation to
power on sufficient standby hosts is made.
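The admission control check described above can be sketched as follows. This is an illustrative model only; the overhead figure in the example is a placeholder, not a real ESX number:

```python
def can_power_on(unreserved_mb, vm_reservation_mb, vm_overhead_mb):
    """Admission control: a VM powers on only if its reservation plus
    its virtualization overhead fits in the unreserved capacity."""
    return vm_reservation_mb + vm_overhead_mb <= unreserved_mb

# 2048 MB unreserved on the host; the VM reserves 1024 MB and needs
# a hypothetical ~100 MB of overhead memory.
print(can_power_on(2048, 1024, 100))   # True: power on succeeds
print(can_power_on(2048, 2000, 100))   # False: Insufficient Resources
```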
Chapter 2 Managing CPU Resources
ESX/ESXi hosts support CPU virtualization.
When you use CPU virtualization, you should understand how it works, its different types, and processor-specific behavior. You also need to be aware of the performance implications of CPU virtualization.
This chapter includes the following topics:
- "CPU Virtualization Basics," on page 17
- "Administering CPU Resources," on page 18
CPU Virtualization Basics
CPU virtualization emphasizes performance and runs directly on the processor whenever possible. The
underlying physical resources are used whenever possible and the virtualization layer runs instructions only
as needed to make virtual machines operate as if they were running directly on a physical machine.
CPU virtualization is not the same thing as emulation. With emulation, all operations are run in software by
an emulator. A software emulator allows programs to run on a computer system other than the one for which
they were originally written. The emulator does this by emulating, or reproducing, the original computer’s
behavior by accepting the same data or inputs and achieving the same results. Emulation provides portability
and runs software designed for one platform across several platforms.
When CPU resources are overcommitted, the ESX/ESXi host time-slices the physical processors across all
virtual machines so each virtual machine runs as if it has its specified number of virtual processors. When an
ESX/ESXi host runs multiple virtual machines, it allocates to each virtual machine a share of the physical
resources. With the default resource allocation settings, all virtual machines associated with the same host
receive an equal share of CPU per virtual CPU. This means that a single-processor virtual machine is assigned only half of the resources of a dual-processor virtual machine.
Software-Based CPU Virtualization
With software-based CPU virtualization, the guest application code runs directly on the processor, while the
guest privileged code is translated and the translated code executes on the processor.
The translated code is slightly larger and usually executes more slowly than the native version. As a result,
guest programs, which have a small privileged code component, run with speeds very close to native. Programs
with a significant privileged code component, such as system calls, traps, or page table updates can run slower
in the virtualized environment.
Hardware-Assisted CPU Virtualization
Certain processors (such as Intel VT and AMD SVM) provide hardware assistance for CPU virtualization.
When using this assistance, the guest can use a separate mode of execution called guest mode. The guest code,
whether application code or privileged code, runs in the guest mode. On certain events, the processor exits
out of guest mode and enters root mode. The hypervisor executes in the root mode, determines the reason for
the exit, takes any required actions, and restarts the guest in guest mode.
When you use hardware assistance for virtualization, there is no need to translate the code. As a result, system
calls or trap-intensive workloads run very close to native speed. Some workloads, such as those involving
updates to page tables, lead to a large number of exits from guest mode to root mode. Depending on the number
of such exits and total time spent in exits, this can slow down execution significantly.
Virtualization and Processor-Specific Behavior
Although VMware software virtualizes the CPU, the virtual machine detects the specific model of the processor
on which it is running.
Processor models might differ in the CPU features they offer, and applications running in the virtual machine
can make use of these features. Therefore, it is not possible to use VMotion® to migrate virtual machines
between systems running on processors with different feature sets. You can avoid this restriction, in some
cases, by using Enhanced VMotion Compatibility (EVC) with processors that support this feature. See BasicSystem Administration for more information.
Performance Implications of CPU Virtualization
CPU virtualization adds varying amounts of overhead depending on the workload and the type of
virtualization used.
An application is CPU-bound if it spends most of its time executing instructions rather than waiting for external
events such as user interaction, device input, or data retrieval. For such applications, the CPU virtualization
overhead includes the additional instructions that must be executed. This overhead takes CPU processing time
that the application itself can use. CPU virtualization overhead usually translates into a reduction in overall
performance.
For applications that are not CPU-bound, CPU virtualization likely translates into an increase in CPU use. If
spare CPU capacity is available to absorb the overhead, it can still deliver comparable performance in terms
of overall throughput.
ESX/ESXi supports up to eight virtual processors (CPUs) for each virtual machine.
NOTE Deploy single-threaded applications on uniprocessor virtual machines, instead of on SMP virtual
machines, for the best performance and resource use.
Single-threaded applications can take advantage only of a single CPU. Deploying such applications in dual-processor virtual machines does not speed up the application. Instead, it causes the second virtual CPU to use
physical resources that other virtual machines could otherwise use.
Administering CPU Resources
You can configure virtual machines with one or more virtual processors, each with its own set of registers and
control structures.
When a virtual machine is scheduled, its virtual processors are scheduled to run on physical processors. The
VMkernel Resource Manager schedules the virtual CPUs on physical CPUs, thereby managing the virtual
machine’s access to physical CPU resources. ESX/ESXi supports virtual machines with up to eight virtual
processors.
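The share-based allocation that the resource manager performs can be illustrated with a toy model. The function below is illustrative only, and is not part of any VMware API; it ignores reservations and limits, which the real VMkernel scheduler also honors:

```python
def allocate_cpu_time(shares, capacity_mhz):
    """Divide host CPU capacity among virtual machines in proportion
    to their CPU shares (toy model of proportional-share scheduling)."""
    total_shares = sum(shares.values())
    return {vm: capacity_mhz * s / total_shares for vm, s in shares.items()}

# Two VMs at 2000 shares and one at 1000 shares on a 6000 MHz host:
allocation = allocate_cpu_time({"vm1": 2000, "vm2": 2000, "vm3": 1000}, 6000)
# allocation == {"vm1": 2400.0, "vm2": 2400.0, "vm3": 1200.0}
```

Doubling one virtual machine's shares doubles its fraction of contended CPU time relative to the others, which is the behavior the Shares setting controls.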
Chapter 2 Managing CPU Resources
View Processor Information
You can access information about current CPU configuration through the vSphere Client or using the vSphere
SDK.
Procedure
1. In the vSphere Client, select the host and click the Configuration tab.
2. Select Processors.
You can view the information about the number and type of physical processors and the number of logical
processors.
NOTE In hyperthreaded systems, each hardware thread is a logical processor. For example, a dual-core
processor with hyperthreading enabled has two cores and four logical processors.
3. (Optional) You can also disable or enable hyperthreading by clicking Properties.
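The counts shown on the Processors page follow directly from the topology. A small sketch (an illustrative helper, not a VMware API):

```python
def logical_processor_count(sockets, cores_per_socket, hyperthreading):
    """Each hardware thread is one logical processor; hyperthreading
    provides two threads per core, otherwise one."""
    threads_per_core = 2 if hyperthreading else 1
    return sockets * cores_per_socket * threads_per_core

# The example from the note: one dual-core package with hyperthreading
# enabled has two cores and four logical processors.
assert logical_processor_count(1, 2, hyperthreading=True) == 4
```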
Specifying CPU Configuration
You can specify CPU configuration to improve resource management. However, if you do not customize CPU
configuration, the ESX/ESXi host uses defaults that work well in most situations.
You can specify CPU configuration in the following ways:
- Use the attributes and special features available through the vSphere Client. The vSphere Client graphical user interface (GUI) allows you to connect to an ESX/ESXi host or a vCenter Server system.
- Use advanced settings under certain circumstances.
- Use the vSphere SDK for scripted CPU allocation.
- Use hyperthreading.
Multicore Processors
Multicore processors provide many advantages for an ESX/ESXi host performing multitasking of virtual
machines.
Intel and AMD have each developed processors which combine two or more processor cores into a single
integrated circuit (often called a package or socket). VMware uses the term socket to describe a single package
which can have one or more processor cores with one or more logical processors in each core.
A dual-core processor, for example, can provide almost double the performance of a single-core processor, by
allowing two virtual CPUs to execute at the same time. Cores within the same processor are typically
configured with a shared last-level cache used by all cores, potentially reducing the need to access slower main
memory. A shared memory bus that connects a physical processor to main memory can limit performance of
its logical processors if the virtual machines running on them are running memory-intensive workloads which
compete for the same memory bus resources.
Each logical processor of each processor core can be used independently by the ESX CPU scheduler to execute
virtual machines, providing capabilities similar to SMP systems. For example, a two-way virtual machine can
have its virtual processors running on logical processors that belong to the same core, or on logical processors
on different physical cores.
The ESX CPU scheduler can detect the processor topology and the relationships between processor cores and
the logical processors on them. It uses this information to schedule virtual machines and optimize performance.
vSphere Resource Management Guide
The ESX CPU scheduler can interpret processor topology, including the relationship between sockets, cores,
and logical processors. The scheduler uses topology information to optimize the placement of virtual CPUs
onto different sockets to maximize overall cache utilization, and to improve cache affinity by minimizing
virtual CPU migrations.
In undercommitted systems, the ESX CPU scheduler spreads load across all sockets by default. This improves
performance by maximizing the aggregate amount of cache available to the running virtual CPUs. As a result,
the virtual CPUs of a single SMP virtual machine are spread across multiple sockets (unless each socket is also
a NUMA node, in which case the NUMA scheduler restricts all the virtual CPUs of the virtual machine to
reside on the same socket.)
In some cases, such as when an SMP virtual machine exhibits significant data sharing between its virtual CPUs,
this default behavior might be sub-optimal. For such workloads, it can be beneficial to schedule all of the virtual
CPUs on the same socket, with a shared last-level cache, even when the ESX/ESXi host is undercommitted. In
such scenarios, you can override the default behavior of spreading virtual CPUs across packages by including
the following configuration option in the virtual machine's .vmx configuration file:
sched.cpu.vsmpConsolidate="TRUE".
To find out whether a change in this parameter helps performance, test the change under a representative load. The effect of the parameter is difficult to predict. If you do not see a performance improvement after changing the parameter, revert it to its default value.
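For reference, the setting is a single line in the virtual machine's .vmx file; removing the line, or setting it to "FALSE" (the default), restores the default behavior of spreading virtual CPUs across sockets:

```
sched.cpu.vsmpConsolidate = "TRUE"
```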
Hyperthreading
Hyperthreading technology allows a single physical processor core to behave like two logical processors. The
processor can run two independent applications at the same time. To avoid confusion between logical and
physical processors, Intel refers to a physical processor as a socket, and the discussion in this chapter uses that
terminology as well.
Intel Corporation developed hyperthreading technology to enhance the performance of its Pentium IV and
Xeon processor lines. Hyperthreading technology allows a single processor core to execute two independent
threads simultaneously.
While hyperthreading does not double the performance of a system, it can increase performance by better utilizing idle resources, leading to greater throughput for certain important workload types. An application
running on one logical processor of a busy core can expect slightly more than half of the throughput that it
obtains while running alone on a non-hyperthreaded processor. Hyperthreading performance improvements
are highly application-dependent, and some applications might see performance degradation with
hyperthreading because many processor resources (such as the cache) are shared between logical processors.
NOTE On processors with Intel Hyper-Threading technology, each core can have two logical processors which
share most of the core's resources, such as memory caches and functional units. Such logical processors are
usually called threads.
Many processors do not support hyperthreading and as a result have only one thread per core. For such
processors, the number of cores also matches the number of logical processors. The following processors
support hyperthreading and have two threads per core:
- Processors based on the Intel Xeon 5500 processor microarchitecture
- Intel Pentium 4 (HT-enabled)
- Intel Pentium EE 840 (HT-enabled)
Hyperthreading and ESX/ESXi Hosts
An ESX/ESXi host enabled for hyperthreading should behave similarly to a host without hyperthreading. You
might need to consider certain factors if you enable hyperthreading, however.
ESX/ESXi hosts manage processor time intelligently to guarantee that load is spread smoothly across processor
cores in the system. Logical processors on the same core have consecutive CPU numbers, so that CPUs 0 and
1 are on the first core together, CPUs 2 and 3 are on the second core, and so on. Virtual machines are
preferentially scheduled on two different cores rather than on two logical processors on the same core.
If there is no work for a logical processor, it is put into a halted state, which frees its execution resources and
allows the virtual machine running on the other logical processor on the same core to use the full execution
resources of the core. The VMware scheduler properly accounts for this halt time, and charges a virtual machine
running with the full resources of a core more than a virtual machine running on a half core. This approach to
processor management ensures that the server does not violate any of the standard ESX/ESXi resource
allocation rules.
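The consecutive numbering scheme means core and sibling relationships can be computed directly. For example (illustrative helpers, not part of ESX):

```python
def core_of(logical_cpu):
    """Core index under consecutive numbering: CPUs 0 and 1 are on the
    first core, CPUs 2 and 3 on the second, and so on."""
    return logical_cpu // 2

def sibling_of(logical_cpu):
    """The other logical CPU sharing the same physical core."""
    return logical_cpu ^ 1  # flips the low bit: 0<->1, 2<->3, ...
```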
Consider your resource management needs before you enable CPU affinity on hosts using hyperthreading.
For example, if you bind a high priority virtual machine to CPU 0 and another high priority virtual machine
to CPU 1, the two virtual machines have to share the same physical core. In this case, it can be impossible to
meet the resource demands of these virtual machines. Ensure that any custom affinity settings make sense for
a hyperthreaded system.
Enable Hyperthreading
To enable hyperthreading you must first enable it in your system's BIOS settings and then turn it on in the
vSphere Client. Hyperthreading is enabled by default.
Some Intel processors, for example Xeon 5500 processors or those based on the P4 microarchitecture, support
hyperthreading. Consult your system documentation to determine whether your CPU supports
hyperthreading. ESX/ESXi cannot enable hyperthreading on a system with more than 32 physical cores,
because ESX/ESXi has a logical limit of 64 CPUs.
Procedure
1. Ensure that your system supports hyperthreading technology.
2. Enable hyperthreading in the system BIOS.
   Some manufacturers label this option Logical Processor, while others call it Enable Hyperthreading.
3. Make sure that you turn on hyperthreading for your ESX/ESXi host.
   a. In the vSphere Client, select the host and click the Configuration tab.
   b. Select Processors and click Properties.
   c. In the dialog box, you can view hyperthreading status and turn hyperthreading off or on (default).
Hyperthreading is now enabled.
Set Hyperthreading Sharing Options for a Virtual Machine
You can specify how the virtual CPUs of a virtual machine can share physical cores on a hyperthreaded system.
Two virtual CPUs share a core if they are running on logical CPUs of the core at the same time. You can set
this for individual virtual machines.
Procedure
1. In the vSphere Client inventory panel, right-click the virtual machine and select Edit Settings.
2. Click the Resources tab, and click Advanced CPU.
3. Select a hyperthreading mode for this virtual machine from the Mode drop-down menu.
Hyperthreaded Core Sharing Options
You can set the hyperthreaded core sharing mode for a virtual machine using the vSphere Client.
Table 2-1 shows the available choices for this mode.
Table 2-1. Hyperthreaded Core Sharing Modes

Any — The default for all virtual machines on a hyperthreaded system. The virtual CPUs of a virtual machine with this setting can freely share cores with other virtual CPUs from this or any other virtual machine at any time.

None — Virtual CPUs of a virtual machine should not share cores with each other or with virtual CPUs from other virtual machines. That is, each virtual CPU from this virtual machine should always get a whole core to itself, with the other logical CPU on that core being placed into the halted state.

Internal — This option is similar to none. Virtual CPUs from this virtual machine cannot share cores with virtual CPUs from other virtual machines. They can share cores with the other virtual CPUs from the same virtual machine. You can select this option only for SMP virtual machines. If applied to a uniprocessor virtual machine, the system changes this option to none.
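The three modes can be summarized as a predicate over a pair of virtual CPUs. This sketch is a simplified reading of the semantics above, not VMkernel code:

```python
def may_share_core(vm_a, mode_a, vm_b, mode_b):
    """Whether two virtual CPUs from VMs vm_a and vm_b may occupy the two
    logical CPUs of one core at the same time, given each VM's
    hyperthreaded core sharing mode ("any", "none", or "internal")."""
    for vm, mode, other in ((vm_a, mode_a, vm_b), (vm_b, mode_b, vm_a)):
        if mode == "none":
            return False  # each vCPU always gets a whole core to itself
        if mode == "internal" and vm != other:
            return False  # may share a core only within the same VM
    return True
```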
These options have no effect on fairness or CPU time allocation. Regardless of a virtual machine’s
hyperthreading settings, it still receives CPU time proportional to its CPU shares, and constrained by its CPU
reservation and CPU limit values.
For typical workloads, custom hyperthreading settings are not necessary. The options can help with unusual workloads that interact badly with hyperthreading. For example, an application with cache-thrashing problems might slow down an application sharing its physical core. You can place the virtual machine running the application in the none or internal hyperthreading mode to isolate it from other virtual machines.
If a virtual CPU has hyperthreading constraints that do not allow it to share a core with another virtual CPU, the system might deschedule it when other virtual CPUs are entitled to consume processor time. Without those constraints, the scheduler could run both virtual CPUs on the same core. On systems with a limited number of cores, there might be no core to which a descheduled virtual CPU can be migrated. As a result, virtual machines with hyperthreading set to none or internal can experience performance degradation, especially on such systems.
Quarantining
In certain rare circumstances, an ESX/ESXi host might detect that an application is interacting badly with the
Pentium IV hyperthreading technology (this does not apply to systems based on the Intel Xeon 5500 processor
microarchitecture). In such cases, quarantining, which is transparent to the user, might be necessary.
Certain types of self-modifying code, for example, can disrupt the normal behavior of the Pentium IV trace
cache and can lead to substantial slowdowns (up to 90 percent) for an application sharing a core with the
problematic code. In those cases, the ESX/ESXi host quarantines the virtual CPU running this code and places
its virtual machine in the none or internal mode, as appropriate.
Using CPU Affinity
By specifying a CPU affinity setting for each virtual machine, you can restrict the assignment of virtual
machines to a subset of the available processors in multiprocessor systems. By using this feature, you can assign
each virtual machine to processors in the specified affinity set.
In this context, the term CPU refers to a logical processor on a hyperthreaded system, but refers to a core on a
non-hyperthreaded system.
The CPU affinity setting for a virtual machine applies not only to all of the virtual CPUs associated with the
virtual machine, but also to all other threads (also known as worlds) associated with the virtual machine. Such
virtual machine threads perform processing required for emulating mouse, keyboard, screen, CD-ROM and
miscellaneous legacy devices.
In some cases, such as display-intensive workloads, significant communication might occur between the virtual
CPUs and these other virtual machine threads. Performance might degrade if the virtual machine's affinity
setting prevents these additional threads from being scheduled concurrently with the virtual machine's virtual
CPUs (for example, a uniprocessor virtual machine with affinity to a single CPU, or a two-way SMP virtual
machine with affinity to only two CPUs).
For the best performance, when you use manual affinity settings, VMware recommends that you include at
least one additional physical CPU in the affinity setting to allow at least one of the virtual machine's threads
to be scheduled at the same time as its virtual CPUs (for example, a uniprocessor virtual machine with affinity
to at least two CPUs or a two-way SMP virtual machine with affinity to at least three CPUs).
NOTE CPU affinity specifies virtual machine-to-processor placement constraints and is different from the
affinity based on DRS rules, which specifies virtual machine-to-virtual machine host placement constraints.
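The sizing recommendation above reduces to a one-line calculation (illustrative only):

```python
def min_affinity_set_size(num_vcpus):
    """Smallest recommended affinity set: one CPU per virtual CPU plus at
    least one extra, so a virtual machine thread (world) can be scheduled
    at the same time as the virtual CPUs."""
    return num_vcpus + 1

# Uniprocessor VM -> affinity to at least 2 CPUs;
# two-way SMP VM -> affinity to at least 3 CPUs.
```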
Assign a Virtual Machine to a Specific Processor
Using CPU affinity, you can assign a virtual machine to a specific processor. This allows you to restrict the
assignment of virtual machines to a specific available processor in multiprocessor systems.
Procedure
1. In the vSphere Client inventory panel, select a virtual machine and select Edit Settings.
2. Select the Resources tab and select Advanced CPU.
3. Click the Run on processor(s) button.
4. Select the processors on which you want the virtual machine to run and click OK.
Potential Issues with CPU Affinity
Before you use CPU affinity, you might need to consider certain issues.
Potential issues with CPU affinity include:
- For multiprocessor systems, ESX/ESXi systems perform automatic load balancing. Avoid manual specification of virtual machine affinity to improve the scheduler's ability to balance load across processors.
- Affinity can interfere with the ESX/ESXi host's ability to meet the reservation and shares specified for a virtual machine.
- Because CPU admission control does not consider affinity, a virtual machine with manual affinity settings might not always receive its full reservation. Virtual machines that do not have manual affinity settings are not adversely affected by virtual machines with manual affinity settings.
- When you move a virtual machine from one host to another, affinity might no longer apply because the new host might have a different number of processors.
- The NUMA scheduler might not be able to manage a virtual machine that is already assigned to certain processors using affinity.
- Affinity can affect an ESX/ESXi host's ability to schedule virtual machines on multicore or hyperthreaded processors to take full advantage of resources shared on such processors.
CPU Power Management
To improve CPU power efficiency, you can configure your ESX/ESXi hosts to dynamically switch CPU
frequencies based on workload demands. This type of power management is called Dynamic Voltage and
Frequency Scaling (DVFS). It uses processor performance states (P-states) made available to the VMkernel
through an ACPI interface.
ESX/ESXi supports the Enhanced Intel SpeedStep and Enhanced AMD PowerNow! CPU power management
technologies. For the VMkernel to take advantage of the power management capabilities provided by these
technologies, you might need to first enable power management, sometimes referred to as Demand-Based
Switching (DBS), in the BIOS.
To set the CPU power management policy, use the advanced host attribute Power.CpuPolicy. This attribute
setting is saved in the host configuration and can be used again at boot time, but it can be changed at any time
and does not require a server reboot. You can set this attribute to the following values.
static — The default. The VMkernel can detect power management features available on the host but does not actively use them unless requested by the BIOS for power capping or thermal events.

dynamic — The VMkernel optimizes each CPU's frequency to match demand in order to improve power efficiency but not affect performance. When CPU demand increases, this policy setting ensures that CPU frequencies also increase.
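The difference between the two policies can be illustrated with a toy P-state selection model. The frequencies and selection logic below are illustrative assumptions, not the actual VMkernel algorithm:

```python
def select_frequency(policy, demand_mhz, p_states_mhz):
    """Pick a CPU frequency for the given demand.

    "static" leaves the CPU at its highest P-state; "dynamic" picks the
    lowest available frequency that still covers current demand, so
    frequency rises as demand rises.
    """
    top = max(p_states_mhz)
    if policy == "static":
        return top
    return min((f for f in p_states_mhz if f >= demand_mhz), default=top)

# With P-states of 1600, 2100, and 2600 MHz:
assert select_frequency("dynamic", 500, [1600, 2100, 2600]) == 1600
assert select_frequency("dynamic", 2000, [1600, 2100, 2600]) == 2100
```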
Chapter 3 Managing Memory Resources
All modern operating systems provide support for virtual memory, allowing software to use more memory
than the machine physically has. Similarly, the ESX/ESXi hypervisor provides support for overcommitting
virtual machine memory, where the amount of guest memory configured for all virtual machines might be
larger than the amount of physical host memory.
If you intend to use memory virtualization, you should understand how ESX/ESXi hosts allocate, tax, and
reclaim memory. Also, you need to be aware of the memory overhead incurred by virtual machines.
This chapter includes the following topics:
- "Memory Virtualization Basics," on page 25
- "Administering Memory Resources," on page 28
Memory Virtualization Basics
Before you manage memory resources, you should understand how they are being virtualized and used by
ESX/ESXi.
The VMkernel manages all machine memory, except for the memory that is allocated to the service console in ESX. The VMkernel dedicates part of this managed machine memory to its own use; the rest is available to virtual machines. Virtual machines use machine memory for two purposes: each virtual machine requires its own memory, and the VMM requires some memory, plus dynamic overhead memory, for its code and data.
The virtual memory space is divided into blocks, typically 4KB, called pages. The physical memory is also divided into blocks, typically 4KB. When physical memory is full, the data for virtual pages that are not present in physical memory is stored on disk. ESX/ESXi also provides support for large pages (2MB). See "Advanced Memory Attributes," on page 100.
Virtual Machine Memory
Each virtual machine consumes memory based on its configured size, plus additional overhead memory for
virtualization.
Configured Size
The configured size is a construct maintained by the virtualization layer for the virtual machine. It is the amount
of memory that is presented to the guest operating system, but it is independent of the amount of physical
RAM that is allocated to the virtual machine, which depends on the resource settings (shares, reservation, limit)
explained below.
For example, consider a virtual machine with a configured size of 1GB. When the guest operating system boots,
it detects that it is running on a dedicated machine with 1GB of physical memory. The actual amount of physical
host memory allocated to the virtual machine depends on its memory resource settings and memory contention
on the ESX/ESXi host. In some cases, the virtual machine might be allocated the full 1GB. In other cases, it
might receive a smaller allocation. Regardless of the actual allocation, the guest operating system continues to
behave as though it is running on a dedicated machine with 1GB of physical memory.
Shares — Specify the relative priority for a virtual machine if more than the reservation is available.

Reservation — Is a guaranteed lower bound on the amount of physical memory that the host reserves for the virtual machine, even when memory is overcommitted. Set the reservation to a level that ensures the virtual machine has sufficient memory to run efficiently, without excessive paging.
After a virtual machine has accessed its full reservation, it is allowed to retain that amount of memory and this memory is not reclaimed, even if the virtual machine becomes idle. For example, some guest operating systems (for example, Linux) might not access all of the configured memory immediately after booting. Until the virtual machine accesses its full reservation, the VMkernel can allocate any unused portion of its reservation to other virtual machines. However, after the guest's workload increases and it consumes its full reservation, it is allowed to keep this memory.

Limit — Is an upper bound on the amount of physical memory that the host can allocate to the virtual machine. The virtual machine's memory allocation is also implicitly limited by its configured size.

Memory Overcommitment

For each running virtual machine, the system reserves physical memory for the virtual machine's reservation (if any) and for its virtualization overhead. Overhead memory includes space reserved for the virtual machine frame buffer and various virtualization data structures.
Because of the memory management techniques the ESX/ESXi host uses, your virtual machines can use more
memory than the physical machine (the host) has available. For example, you can have a host with 2GB memory
and run four virtual machines with 1GB memory each. In that case, the memory is overcommitted.
Overcommitment makes sense because, typically, some virtual machines are lightly loaded while others are
more heavily loaded, and relative activity levels vary over time.
To improve memory utilization, the ESX/ESXi host transfers memory from idle virtual machines to virtual
machines that need more memory. Use the Reservation or Shares parameter to preferentially allocate memory
to important virtual machines. This memory remains available to other virtual machines if it is not in use.
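The 2GB example reduces to simple arithmetic (an illustrative helper, not a VMware tool):

```python
def memory_overcommit_factor(host_memory_mb, vm_configured_sizes_mb):
    """Ratio of total configured guest memory to physical host memory.
    A value above 1.0 means memory is overcommitted."""
    return sum(vm_configured_sizes_mb) / host_memory_mb

# A 2GB host running four virtual machines configured with 1GB each:
factor = memory_overcommit_factor(2048, [1024, 1024, 1024, 1024])
# factor == 2.0, so memory is overcommitted by a factor of two.
```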
Memory Sharing
Many workloads present opportunities for sharing memory across virtual machines.
For example, several virtual machines might be running instances of the same guest operating system, have
the same applications or components loaded, or contain common data. ESX/ESXi systems use a proprietary
page-sharing technique to securely eliminate redundant copies of memory pages.
With memory sharing, a workload consisting of multiple virtual machines often consumes less memory than
it would when running on physical machines. As a result, the system can efficiently support higher levels of
overcommitment.
The amount of memory saved by memory sharing depends on workload characteristics. A workload of many
nearly identical virtual machines might free up more than thirty percent of memory, while a more diverse
workload might result in savings of less than five percent of memory.
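The effect of page sharing can be sketched by deduplicating page contents. This is a toy model; real transparent page sharing compares 4KB machine pages by content:

```python
def pages_saved_by_sharing(vm_pages):
    """Pages saved by keeping a single machine page for each distinct
    page content across all virtual machines."""
    total = sum(len(pages) for pages in vm_pages.values())
    distinct = len({page for pages in vm_pages.values() for page in pages})
    return total - distinct

# Three VMs running the same guest OS share many identical pages:
saved = pages_saved_by_sharing({
    "vm1": ["kernel", "libc", "app-data-1"],
    "vm2": ["kernel", "libc", "app-data-2"],
    "vm3": ["kernel", "libc", "app-data-3"],
})
# saved == 4: two redundant copies each of "kernel" and "libc" go away.
```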
Software-Based Memory Virtualization
ESX/ESXi virtualizes guest physical memory by adding an extra level of address translation.
- The VMM for each virtual machine maintains a mapping from the guest operating system's physical memory pages to the physical memory pages on the underlying machine. (VMware refers to the underlying host physical pages as "machine" pages and the guest operating system's physical pages as "physical" pages.) Each virtual machine sees a contiguous, zero-based, addressable physical memory space. The underlying machine memory on the server used by each virtual machine is not necessarily contiguous.
- The VMM intercepts virtual machine instructions that manipulate guest operating system memory management structures so that the actual memory management unit (MMU) on the processor is not updated directly by the virtual machine.
- The ESX/ESXi host maintains the virtual-to-machine page mappings in a shadow page table that is kept up to date with the physical-to-machine mappings (maintained by the VMM).
- The shadow page tables are used directly by the processor's paging hardware.
This approach to address translation allows normal memory accesses in the virtual machine to execute without adding address translation overhead, after the shadow page tables are set up. Because the translation lookaside buffer (TLB) on the processor caches direct virtual-to-machine mappings read from the shadow page tables, no additional overhead is added by the VMM to access the memory.
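The shadow page table described above is the composition of the two mappings. A minimal sketch, with illustrative data structures rather than VMM internals:

```python
def build_shadow_page_table(guest_page_table, vmm_phys_to_machine):
    """Compose the guest's virtual-to-physical mapping with the VMM's
    physical-to-machine mapping into a direct virtual-to-machine table
    that the paging hardware can use."""
    return {va: vmm_phys_to_machine[pa]
            for va, pa in guest_page_table.items()
            if pa in vmm_phys_to_machine}

# Guest maps virtual page 0 -> physical page 5; the VMM maps physical
# page 5 -> machine page 42, so the shadow maps virtual 0 -> machine 42.
shadow = build_shadow_page_table({0: 5, 1: 7}, {5: 42, 7: 13})
# shadow == {0: 42, 1: 13}
```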
Performance Considerations
The use of two sets of page tables has these performance implications.
- No overhead is incurred for regular guest memory accesses.
- Additional time is required to map memory within a virtual machine, which might mean:
  - The virtual machine operating system is setting up or updating virtual address to physical address mappings.
  - The virtual machine operating system is switching from one address space to another (context switch).
- Like CPU virtualization, memory virtualization overhead depends on workload.
Hardware-Assisted Memory Virtualization
Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, provide hardware support for memory
virtualization by using two layers of page tables.
The first layer of page tables stores guest virtual-to-physical translations, while the second layer stores guest physical-to-machine translations. The TLB (translation lookaside buffer) is a cache of translations maintained by the processor's memory management unit (MMU) hardware. A TLB miss is a miss in this cache, and the hardware must go to memory (possibly many times) to find the required translation. On a TLB miss for a certain guest virtual address, the hardware looks at both layers of page tables to translate the guest virtual address to the host physical address.
The diagram in Figure 3-1 illustrates the ESX/ESXi implementation of memory virtualization.
Figure 3-1. ESX/ESXi Memory Mapping
- The boxes represent pages, and the arrows show the different memory mappings.
- The arrows from guest virtual memory to guest physical memory show the mapping maintained by the page tables in the guest operating system. (The mapping from virtual memory to linear memory for x86-architecture processors is not shown.)
- The arrows from guest physical memory to machine memory show the mapping maintained by the VMM.
- The dashed arrows show the mapping from guest virtual memory to machine memory in the shadow page tables also maintained by the VMM. The underlying processor running the virtual machine uses the shadow page table mappings.
Because of the extra level of memory mapping introduced by virtualization, ESX/ESXi can effectively manage
memory across all virtual machines. Some of the physical memory of a virtual machine might be mapped to
shared pages or to pages that are unmapped, or swapped out.
An ESX/ESXi host performs virtual memory management without the knowledge of the guest operating system
and without interfering with the guest operating system’s own memory management subsystem.
Performance Considerations
When you use hardware assistance, you eliminate the overhead for software memory virtualization. In
particular, hardware assistance eliminates the overhead required to keep shadow page tables in
synchronization with guest page tables. However, the TLB miss latency when using hardware assistance is
significantly higher. As a result, whether or not a workload benefits by using hardware assistance primarily
depends on the overhead the memory virtualization causes when using software memory virtualization. If a
workload involves a small amount of page table activity (such as process creation, mapping the memory, or
context switches), software virtualization does not cause significant overhead. Conversely, workloads with a
large amount of page table activity are likely to benefit from hardware assistance.
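The trade-off can be captured in a toy cost model. All costs here are made-up illustrative units, not measured values:

```python
def memory_virtualization_cost(mode, page_table_updates, tlb_misses,
                               sync_cost=1.0, sw_walk_cost=1.0,
                               hw_walk_cost=2.5):
    """Toy model: the software MMU pays to keep shadow page tables in sync
    on every guest page-table update; hardware assist instead pays a
    longer two-layer page walk on every TLB miss."""
    if mode == "software":
        return page_table_updates * sync_cost + tlb_misses * sw_walk_cost
    return tlb_misses * hw_walk_cost

# A workload with heavy page-table activity favors hardware assistance:
assert (memory_virtualization_cost("software", 1000, 100)
        > memory_virtualization_cost("hardware", 1000, 100))
```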
Administering Memory Resources
Using the vSphere Client you can view information about and make changes to memory allocation settings.
To administer your memory resources effectively, you must also be familiar with memory overhead, idle
memory tax, and how ESX/ESXi hosts reclaim memory.
When administering memory resources, you can specify memory allocation. If you do not customize memory
allocation, the ESX/ESXi host uses defaults that work well in most situations.
You can specify memory allocation in several ways.
- Use the attributes and special features available through the vSphere Client. The vSphere Client GUI allows you to connect to an ESX/ESXi host or a vCenter Server system.
- Use advanced settings.
- Use the vSphere SDK for scripted memory allocation.
View Memory Allocation Information
You can use the vSphere Client to view information about current memory allocations.
You can view the information about the total memory and memory available to virtual machines. In ESX, you
can also view memory assigned to the service console.
Procedure
1. In the vSphere Client, select a host and click the Configuration tab.
2. Click Memory.
You can view the information shown in “Host Memory Information,” on page 29.
Host Memory Information
The vSphere Client shows information about host memory allocation.
The host memory fields are discussed in Table 3-1.
Table 3-1. Host Memory Information
Total
Total physical memory for this host.

System
Memory used by the ESX/ESXi system. ESX/ESXi uses at least 50MB of system memory for the VMkernel, plus additional memory for device drivers. This memory is allocated when ESX/ESXi is loaded and is not configurable.
The actual memory required for the virtualization layer depends on the number and type of PCI (peripheral component interconnect) devices on a host. Some drivers need 40MB, which almost doubles base system memory.
The ESX/ESXi host also attempts to keep some memory free at all times to handle dynamic allocation requests efficiently. ESX/ESXi sets this level at approximately six percent of the memory available for running virtual machines.
An ESX host uses additional system memory for management agents that run in the service console.

Virtual Machines
Memory used by virtual machines running on the selected host. Most of the host's memory is used for running virtual machines. An ESX/ESXi host manages the allocation of this memory to virtual machines based on administrative parameters and system load.
The amount of physical memory the virtual machines can use is always less than what is in the physical host because the virtualization layer takes up some resources. For example, a host with a dual 3.2GHz CPU and 2GB of memory might make 6GHz of CPU power and 1.5GB of memory available for use by virtual machines.

Service Console
Memory reserved for the service console. Click Properties to change how much memory is available for the service console. This field appears only in ESX. ESXi does not provide a service console.
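The host memory figures that the vSphere Client displays can also be read programmatically through the vSphere SDK. The following is a minimal sketch assuming the pyVmomi Python bindings; it operates on a HostSystem managed object obtained from an existing connection, and the helper name is illustrative rather than part of the SDK:

```python
def host_memory_summary(host):
    """Return (total_mb, used_mb) for a vim.HostSystem managed object.

    host.summary.hardware.memorySize is reported in bytes;
    host.summary.quickStats.overallMemoryUsage is already in MB.
    """
    total_mb = host.summary.hardware.memorySize // (1024 * 1024)
    used_mb = host.summary.quickStats.overallMemoryUsage
    return total_mb, used_mb
```

Obtain the HostSystem object with a connection from pyVim.connect.SmartConnect and a container view over vim.HostSystem; the helper itself only reads the summary properties that back the fields in Table 3-1.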
Understanding Memory Overhead
Virtualization of memory resources has some associated overhead.
ESX/ESXi virtual machines can incur two kinds of memory overhead.
- The additional time to access memory within a virtual machine.
- The extra space needed by the ESX/ESXi host for its own code and data structures, beyond the memory allocated to each virtual machine.
ESX/ESXi memory virtualization adds little time overhead to memory accesses. Because the processor's paging hardware uses page tables directly (shadow page tables for the software-based approach, or nested page tables for the hardware-assisted approach), most memory accesses in the virtual machine execute without address translation overhead.
The memory space overhead has two components.
- A fixed, system-wide overhead for the VMkernel and (for ESX only) the service console.
- Additional overhead for each virtual machine.
For ESX, the service console typically uses 272MB and the VMkernel uses a smaller amount of memory. The
amount depends on the number and size of the device drivers that are being used.
Overhead memory includes space reserved for the virtual machine frame buffer and various virtualization
data structures, such as shadow page tables. Overhead memory depends on the number of virtual CPUs and
the configured memory for the guest operating system.
ESX/ESXi also provides optimizations such as memory sharing to reduce the amount of physical memory used
on the underlying server. These optimizations can save more memory than is taken up by the overhead.
Overhead Memory on Virtual Machines
Virtual machines incur overhead memory. You should be aware of the amount of this overhead.
Table 3-2 lists the overhead memory (in MB) for each number of VCPUs.
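The static per-vCPU figures can also be compared with what a running virtual machine actually consumes. Below is a hedged pyVmomi sketch; the helper name is an assumption, while the properties read are standard vim.VirtualMachine summary and config fields:

```python
def vm_overhead_report(vm):
    """Summarize configured size and current overhead for one VM (values in MB)."""
    hw = vm.config.hardware
    return {
        "vcpus": hw.numCPU,            # number of virtual CPUs
        "configured_mb": hw.memoryMB,  # memory size configured for the guest
        # overhead memory currently consumed by the host on this VM's behalf
        "overhead_mb": vm.summary.quickStats.consumedOverheadMemory,
    }
```

Iterating this helper over the virtual machines on a host gives a live counterpart to the tabulated overhead values.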
An ESX/ESXi host allocates the memory specified by the Limit parameter to each virtual machine, unless
memory is overcommitted. An ESX/ESXi host never allocates more memory to a virtual machine than its
specified physical memory size.
For example, a 1GB virtual machine might have the default limit (unlimited) or a user-specified limit (for
example 2GB). In both cases, the ESX/ESXi host never allocates more than 1GB, the physical memory size that
was specified for it.
When memory is overcommitted, each virtual machine is allocated an amount of memory somewhere between
what is specified by Reservation and what is specified by Limit. The amount of memory granted to a virtual
machine above its reservation usually varies with the current memory load.
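The clamping rule described above can be sketched as a small function: the grant never drops below the reservation and never exceeds the smaller of the limit and the configured memory size. This is a simplified model; the real memory scheduler also weighs shares and working-set estimates:

```python
def granted_memory_mb(configured, reservation, limit, demand):
    """Simplified model of the memory a host grants a VM under overcommitment.

    limit=None models the default "unlimited" setting; all values are in MB.
    """
    ceiling = configured if limit is None else min(limit, configured)
    return max(reservation, min(demand, ceiling))
```

For the 1GB example above, a demand of 2048MB is still capped at the 1024MB configured size regardless of the limit setting.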