This document supports the version of each product listed and
supports all subsequent versions until the document is
replaced by a new edition. To check for more recent editions of
this document, see http://www.vmware.com/support/pubs.
EN-002352-00
vSphere Command-Line Interface Concepts and Examples
You can find the most up-to-date technical documentation on the VMware Web site at:
http://www.vmware.com/support/
The VMware Web site also provides the latest product updates.
If you have comments about this documentation, submit your feedback to:
Path Claiming with esxcli storage core claiming 108
Using the Reclaim Troubleshooting Command 109
Unclaiming Paths or Sets of Paths 109
Managing Claim Rules 110
Change the Current Claim Rules in the VMkernel 110
Adding Claim Rules 111
Removing Claim Rules 112
Listing Claim Rules 113
Loading Claim Rules 113
Moving Claim Rules 113
Load and Apply Path Claim Rules 114
Running Path Claim Rules 114
7 Managing Users 117
Users in the vSphere Environment 117
vicfg-user Command Syntax 118
Managing Users with vicfg-user 118
Assigning Permissions with ESXCLI 120
8 Managing Virtual Machines 123
vmware-cmd Overview 123
Connection Options for vmware-cmd 124
General Options for vmware-cmd 124
Format for Specifying Virtual Machines 124
List and Register Virtual Machines 125
Retrieving Virtual Machine Attributes 125
Managing Virtual Machine Snapshots with vmware-cmd 127
Take a Virtual Machine Snapshot 127
Reverting and Removing Snapshots 128
Powering Virtual Machines On and Off 128
Connecting and Disconnecting Virtual Devices 129
Working with the AnswerVM API 130
Forcibly Stop a Virtual Machine with ESXCLI 130
9 Managing vSphere Networking 131
Introduction to vSphere Networking 131
Networking Using vSphere Standard Switches 132
Networking Using vSphere Distributed Switches 133
Retrieving Basic Networking Information 134
Troubleshoot a Networking Setup 134
Setting Up vSphere Networking with vSphere Standard Switches 136
Setting Up Virtual Switches and Associating a Switch with a Network Interface 136
Retrieving Information About Virtual Switches 137
Adding and Deleting Virtual Switches 138
Checking, Adding, and Removing Port Groups 139
Managing Uplinks and Port Groups 140
Setting the Port Group VLAN ID 141
Managing Uplink Adapters 142
Adding and Modifying VMkernel Network Interfaces 145
Setting Up vSphere Networking with vSphere Distributed Switch 148
Managing Standard Networking Services in the vSphere Environment 149
Setting the DNS Configuration 149
Setting the DNS Configuration with ESXCLI 149
Setting the DNS Configuration with vicfg-dns 151
Manage an NTP Server 152
Manage the IP Gateway 152
Setting Up IPsec 153
Using IPsec with ESXi 154
Managing Security Associations 155
Managing Security Policies 156
Manage the ESXi Firewall 157
Monitor VXLAN 158
10 Monitoring ESXi Hosts 161
Using resxtop for Performance Monitoring 161
Managing Diagnostic Partitions 161
Managing Core Dumps 162
Manage Local Core Dumps with ESXCLI 162
Manage Core Dumps with ESXi Dump Collector 163
Manage Core Dumps with vicfg-dumppart 164
Configuring ESXi Syslog Services 164
Managing ESXi SNMP Agents 166
Configuring SNMP Communities 166
Configuring the SNMP Agent to Send Traps 166
Configuring the SNMP Agent for Polling 168
Retrieving Hardware Information 169
Index 171
About This Book
vSphere Command-Line Interface Concepts and Examples explains how to use the commands in the VMware
vSphere® Command-Line Interface (vCLI) and includes command overviews and examples.
Intended Audience
This book is for experienced Windows or Linux system administrators who are familiar with vSphere
administration tasks and data center operations and know how to use commands in scripts.
VMware Technical Publications Glossary
VMware Technical Publications provides a glossary of terms that might be unfamiliar to you. For definitions
of terms as they are used in VMware technical documentation, go to
http://www.vmware.com/support/pubs.
Related Documentation
The documentation for vCLI is available in the vSphere Documentation Center and on the vCLI
documentation page. Go to http://www.vmware.com/support/developer/vcli.
- Getting Started with vSphere Command-Line Interfaces includes information about available CLIs, enabling the ESXi Shell, as well as installing and running vCLI and DCLI commands.
- vSphere Command-Line Interface Reference is a reference to both ESXCLI commands and vicfg- commands. The vicfg- command help is generated from the POD available for each command; run pod2html for any vicfg- command to generate individual HTML files interactively. The ESXCLI reference information is generated from the ESXCLI help.
- DCLI Reference is a reference to DCLI commands for managing vCenter services.
The documentation for PowerCLI is available in the vSphere Documentation Center and on the PowerCLI
documentation page.
The vSphere SDK for Perl documentation explains how you can use the vSphere SDK for Perl and related
utility applications to manage your vSphere environment.
The vSphere Management Assistant Guide explains how to install and use the vSphere Management Assistant
(vMA). vMA is a virtual machine that includes vCLI and other prepackaged software.
Background information for the tasks discussed in this book is available in the vSphere documentation set.
The vSphere documentation consists of the combined VMware vCenter Server and ESXi documentation.
1 vSphere CLI Command Overviews
This chapter introduces the command set, presents supported commands for different versions of vSphere,
lists connection options, and discusses vCLI and lockdown mode.
This chapter includes the following topics:
- “Introduction,” on page 11
- “List of Available Host Management Commands,” on page 13
- “Targets and Protocols for vCLI Host Management Commands,” on page 15
- “Supported Platforms for vCLI Commands,” on page 15
- “Commands with an esxcfg Prefix,” on page 16
- “ESXCLI Commands Available on Different ESXi Hosts,” on page 17
- “Trust Relationship Requirement for ESXCLI Commands,” on page 17
- “Using ESXCLI Output,” on page 19
- “Connection Options for vCLI Host Management Commands,” on page 19
- “Connection Options for DCLI Commands,” on page 19
- “vCLI Host Management Commands and Lockdown Mode,” on page 19
Introduction
The commands in the vSphere CLI package allow you to perform vSphere configuration tasks using
commands from the vCLI package installed on supported platforms, or using commands from vMA. The
package consists of several command sets.
The following table lists the components of the vSphere CLI command set.
vCLI Commands     Description
ESXCLI commands   Manage many aspects of an ESXi host. You can run ESXCLI commands remotely or in the ESXi Shell. You can also run ESXCLI commands from the PowerCLI prompt by using the Get-EsxCli cmdlet.
vicfg- commands   Set of commands for many aspects of host management. Eventually, these commands will be replaced by ESXCLI commands. A set of esxcfg- commands that precisely mirrors the vicfg- commands is also included in the vCLI package.
Other commands (vmware-cmd, vifs, vmkfstools)
                  Commands implemented in Perl that do not have a vicfg- prefix. These commands are scheduled to be deprecated or replaced by ESXCLI commands.
DCLI commands     Manage VMware SDDC services. DCLI is a CLI client to the vSphere Automation SDK interface for managing VMware SDDC services. A DCLI command talks to a vSphere Automation API endpoint to locate relevant information, and then executes the command and displays the result to the user.
You can install the vSphere CLI command set on a supported Linux or Windows system. See Getting Started with vSphere Command-Line Interfaces. You can also deploy the vSphere Management Assistant (vMA) to an ESXi system of your choice.
After installation, run vCLI commands from the Linux or Windows system or from vMA.
- Manage ESXi hosts with other vCLI commands by specifying connection options such as the target host, user, and password or a configuration file. See “Connection Options for vCLI Host Management Commands,” on page 19.
- Manage vCenter services with DCLI commands by specifying a target vCenter Server system and authentication options. See Getting Started with vSphere Command-Line Interfaces for a list of connection options.
Documentation
You can find information about different aspects of vCLI in separate publications.
Getting Started with vSphere Command-Line Interfaces includes information about available CLIs, enabling the
ESXi Shell, and installing and running vCLI commands.
Reference information for vCLI and DCLI commands is available on the vCLI documentation page at
http://www.vmware.com/support/developer/vcli/ and in the vSphere Documentation Center for the product
version that you are using.
- vSphere Command-Line Interface Reference is a reference to vicfg- and related vCLI commands and includes reference information for ESXCLI commands. All reference information is generated from the help.
- A reference to esxtop and resxtop is included in the Resource Management documentation.
- The DCLI Reference is included separately from the vSphere Command-Line Interface Reference. All reference information is generated from the help.
Command-Line Help
Available command-line help differs for the different command sets.
Command Set       Available Command-Line Help
vicfg- commands   Run <vicfg-cmd> --help for an overview of each option. Run pod2html with a vicfg- command as input and pipe the output to a file for more detailed help information. This output corresponds to the information available in the vSphere Command-Line Interface Reference.
ESXCLI commands   Run --help at any level of the hierarchy for information about both commands and namespaces available from that level.
DCLI commands     Run --help for any command or namespace to display the input options, whether the option is required, and the input option type. For namespaces, --help displays all available child namespaces and commands. Run dcli --help to display usage information for DCLI.
List of Available Host Management Commands
vCLI host management commands from earlier versions have been replaced with commands that have
equivalent functionality.
The following table lists vCLI host management commands in alphabetical order and the corresponding
ESXCLI command if available. For ESXCLI, new commands and namespaces are added with each release.
See the Release Notes for the corresponding release for information.
The functionality of the DCLI command set, which is added in vSphere 6.0 and later, is different from that of
these commands. DCLI commands are not included in the table.
vCLI 4.1 Command -> vCLI 5.1 and later Command, with comments:

esxcli -> esxcli (new syntax)
    All vCLI 4.1 commands have been renamed. Significant additions have been made to ESXCLI. Many tasks previously performed with a vicfg- command are now performed with ESXCLI.

resxtop -> resxtop (No ESXCLI equivalent; supported only on Linux)
    Monitors in real time how ESXi hosts use resources. Runs in interactive or batch mode. See “Using resxtop for Performance Monitoring,” on page 161. See the vSphere Resource Management documentation for a detailed reference.

svmotion -> svmotion (No ESXCLI equivalent; must run against a vCenter Server system)
    Moves a virtual machine's configuration file, and, optionally, its disks, while the virtual machine is running. See “Migrating Virtual Machines with svmotion,” on page 63.

vicfg-advcfg -> esxcli system settings advanced
    Performs advanced configuration. The advanced settings are a set of VMkernel options. These options are typically in place for specific workarounds or debugging. Use this command as instructed by VMware.

vicfg-authconfig -> vicfg-authconfig (No ESXCLI equivalent)
    Remotely configures Active Directory settings for an ESXi host. See “Using vicfg-authconfig for Active Directory Configuration,” on page 26.

vicfg-cfgbackup -> vicfg-cfgbackup (No ESXCLI equivalent; cannot run against a vCenter Server system)
    Backs up the configuration data of an ESXi system and restores previously saved configuration data. See “Backing Up Configuration Information with vicfg-cfgbackup,” on page 24.

vicfg-dns -> esxcli network ip dns
    Specifies an ESXi host's DNS (Domain Name Server) configuration. See “Setting the DNS Configuration,” on page 149.

vicfg-dumppart -> esxcli system coredump
    Sets both the partition (esxcli system coredump partition) and the network (esxcli system coredump network) to use for core dumps. Use this command to set up ESXi Dump Collector. See “Managing Diagnostic Partitions,” on page 161.

vicfg-hostops -> esxcli system maintenanceMode, esxcli system shutdown
    Manages hosts. See “Stopping, Rebooting, and Examining Hosts,” on page 21 and “Entering and Exiting Maintenance Mode,” on page 22.

vicfg-ipsec -> esxcli network ip ipsec
    Sets up IPsec (Internet Protocol Security), which secures IP communications coming from and arriving at ESXi hosts. ESXi hosts support IPsec using IPv6. See “Setting Up IPsec,” on page 153.

vicfg-iscsi -> esxcli iscsi
    Manages hardware and software iSCSI storage. See Chapter 5, “Managing iSCSI Storage,” on page 69.
vicfg-module -> esxcli system module
    Enables VMkernel options. Use this command with the options listed in this document, or as instructed by VMware. See “Managing VMkernel Modules,” on page 25.

vicfg-mpath, vicfg-mpath35 -> esxcli storage core path
    Configures storage arrays. See “Managing Paths,” on page 50.

vicfg-nas -> esxcli storage nfs
    Manages NAS/NFS file systems. See “Managing NFS/NAS Datastores,” on page 57.

vicfg-nics -> esxcli network nic
    Manages the ESXi host's uplink adapters. See “Managing Uplink Adapters,” on page 142.

vicfg-ntp -> vicfg-ntp (No ESXCLI equivalent)
    Defines the NTP (Network Time Protocol) server. See “Manage an NTP Server,” on page 152.

vicfg-rescan -> esxcli storage core adapter rescan
    Rescans the storage configuration. See “Scanning Storage Adapters,” on page 66.

vicfg-route -> esxcli network ip route
    Manages the ESXi host's route entry. See “Manage the IP Gateway,” on page 152.

vicfg-scsidevs -> esxcli storage core device
    Finds and examines available LUNs. See “Examining LUNs,” on page 45.

vicfg-snmp -> esxcli system snmp
    Manages the SNMP agent. See “Managing ESXi SNMP Agents,” on page 166. Using SNMP in a vSphere environment is discussed in detail in the vSphere Monitoring and Performance documentation. New options added in vCLI 5.0. Expanded SNMP support added in vCLI 5.1.

vicfg-syslog -> esxcli system syslog
    Specifies log settings for ESXi hosts, including local storage policies and server and port information for network logging. See “Configuring ESXi Syslog Services,” on page 164. The vCenter Server and Host Management documentation explains how to set up system logs using the vSphere Web Client.

vicfg-user -> vicfg-user (No ESXCLI equivalent)
    Creates, modifies, deletes, and lists local direct access users and groups of users. See Chapter 7, “Managing Users,” on page 117. The vSphere Security documentation discusses security implications of user management and custom roles.

vicfg-vmknic -> esxcli network ip interface
    Adds, deletes, and modifies VMkernel network interfaces. See “Adding and Modifying VMkernel Network Interfaces,” on page 145.

vicfg-volume -> esxcli storage filesystem
    Supports resignaturing the copy of a VMFS volume, and mounting and unmounting the copy. See “Managing Duplicate VMFS Datastores,” on page 32.

vicfg-vswitch -> esxcli network vswitch
    Adds or removes virtual switches or modifies virtual switch settings. See “Setting Up Virtual Switches and Associating a Switch with a Network Interface,” on page 136.

vifs -> vifs (No ESXCLI equivalent)
    Performs file system operations such as retrieving and uploading files on the ESXi system. See “Using vifs to View and Manipulate Files on Remote ESXi Hosts,” on page 35.

vihostupdate -> esxcli software vib
    Updates legacy ESXi hosts to a different version of the same major release. You cannot run vihostupdate against ESXi 5.0 and later hosts. See “Updating Hosts,” on page 27.

vmkfstools -> vmkfstools (No ESXCLI equivalent)
    Creates and manipulates virtual disks, file systems, logical volumes, and physical storage devices on an ESXi host. See “Managing the Virtual Machine File System with vmkfstools,” on page 30.

vmware-cmd -> vmware-cmd (No ESXCLI equivalent)
    Performs virtual machine operations remotely. This includes, for example, creating a snapshot, powering the virtual machine on or off, and getting information about the virtual machine. See Chapter 8, “Managing Virtual Machines,” on page 123.
Targets and Protocols for vCLI Host Management Commands
Most vCLI commands are used to manage or retrieve information about one or more ESXi hosts. They can
target an ESXi host or a vCenter Server system.
When you target a vCenter Server system, you can use --vihost to specify the ESXi host to run the
command against. The only exception is svmotion, which you can run against vCenter Server systems, but
not against ESXi systems.
The following commands must have an ESXi system, not a vCenter Server system, as a target.
- vifs
- vicfg-user
- vicfg-cfgbackup
- vihostupdate
- vmkfstools
The resxtop command requires an HTTPS connection. All other commands support HTTP and HTTPS.
Supported Platforms for vCLI Commands
Platform support for vCLI commands differs depending on the vCenter Server and ESXi version.
You cannot run the vihostupdate command against an ESXi 5.0 or later system.
You cannot run vicfg-syslog --setserver or vicfg-syslog --setport with an ESXi 5.0 or later target.
The following table lists platform support for the different vCLI commands.

Command            ESXi 5.x and 6.x   vCenter Server 5.x and 6.x   ESXi 4.x   ESX 4.x   vCenter Server 4.x
DCLI               No                 No                           No         No        No
esxcli             Yes                Yes                          Yes        Yes       No
resxtop            Yes (from Linux)   Yes (from Linux)             Yes (from Linux)   Yes (from Linux)   Yes (from Linux)
svmotion           No                 Yes                          No         No        Yes
vicfg-advcfg       Yes                Yes                          Yes        Yes       Yes
vicfg-authconfig   Yes                Yes                          Yes        Yes       Yes
vicfg-cfgbackup    Yes                No                           Yes        No        No
vicfg-dns          Yes                Yes                          Yes        Yes       Yes
vicfg-dumppart     Yes                Yes                          Yes        Yes       Yes
vicfg-hostops      Yes                Yes                          Yes        Yes       Yes
vicfg-ipsec        Yes                No                           Yes        Yes       No
vicfg-iscsi        Yes                Yes                          Yes        Yes       Yes
vicfg-module       Yes                Yes                          Yes        Yes       Yes
vicfg-mpath        Yes                Yes                          Yes        Yes       Yes
vicfg-nas          Yes                Yes                          Yes        Yes       Yes
vicfg-nics         Yes                Yes                          Yes        Yes       Yes
vicfg-ntp          Yes                Yes                          Yes        Yes       Yes
vicfg-rescan       Yes                Yes                          Yes        Yes       Yes
vicfg-route        Yes                Yes                          Yes        Yes       Yes
vicfg-scsidevs     Yes                Yes                          Yes        Yes       Yes
vicfg-snmp         Yes                No                           Yes        Yes       No
vicfg-syslog       No                 No for 5.0 target            Yes        No        Yes
vicfg-user         Yes                No                           Yes        Yes       No
vicfg-vmknic       Yes                Yes                          Yes        Yes       Yes
vicfg-volume       Yes                Yes                          Yes        Yes       Yes
vicfg-vswitch      Yes                Yes                          Yes        Yes       Yes
vifs               Yes                No                           Yes        Yes       No
vihostupdate       Use esxcli software vib instead.   Use esxcli software vib instead.   Yes   Yes   No
vmkfstools         Yes                No                           Yes        Yes       No
vmware-cmd         Yes                Yes                          Yes        Yes       Yes
vicfg-mpath35      No                 No                           No         No        No
vihostupdate35     No                 No                           No         No        No
Commands with an esxcfg Prefix
To facilitate easy migration of shell scripts that use esxcfg- commands, the vCLI package includes a copy of
each vicfg- command that uses an esxcfg prefix.
IMPORTANT You should use ESXCLI or the vCLI commands with the vicfg prefix. Commands with the
esxcfg prefix are available mainly for compatibility reasons and are now obsolete. vCLI esxcfg- commands
are equivalent to vicfg- commands, but not completely equivalent to the deprecated esxcfg- service console
commands.
The following table lists all vCLI vicfg- commands for which a vCLI command with an esxcfg prefix is
available.
Command with vicfg Prefix   Command with esxcfg Prefix
vicfg-advcfg                esxcfg-advcfg
vicfg-cfgbackup             esxcfg-cfgbackup
vicfg-dns                   esxcfg-dns
vicfg-dumppart              esxcfg-dumppart
vicfg-module                esxcfg-module
vicfg-mpath                 esxcfg-mpath
vicfg-nas                   esxcfg-nas
vicfg-nics                  esxcfg-nics
vicfg-ntp                   esxcfg-ntp
vicfg-rescan                esxcfg-rescan
vicfg-route                 esxcfg-route
vicfg-scsidevs              esxcfg-scsidevs
vicfg-snmp                  esxcfg-snmp
vicfg-syslog                esxcfg-syslog
vicfg-vmknic                esxcfg-vmknic
vicfg-volume                esxcfg-volume
vicfg-vswitch               esxcfg-vswitch
ESXCLI Commands Available on Different ESXi Hosts
The available ESXCLI commands depend on the ESXi host version.
When you run an ESXCLI vCLI command, you must know the commands supported on the target host. For
example, if you run commands against ESXi 5.x hosts, ESXCLI 5.x commands are supported. If you run
commands against ESXi 6.x hosts, ESXCLI 6.x commands are supported.
Some commands or command outputs are determined by the host type. In addition, VMware partners
might develop custom ESXCLI commands that you can run on hosts where the partner VIB has been
installed.
Run esxcli --server <target> --help for a list of namespaces supported on the target. You can drill down
into the namespaces for additional help.
Trust Relationship Requirement for ESXCLI Commands
Starting with vSphere 6.0, ESXCLI checks whether a trust relationship exists between the machine where
you run the ESXCLI command and the ESXi host. An error results if the trust relationship does not exist.
Download and Install the vCenter Server Certificate
You can download the vCenter Server root certificate by using a Web browser and add it to the trusted
certificates on the machine where you plan to run ESXCLI commands.
Procedure
1 Enter the URL of the vCenter Server system or vCenter Server Appliance into a Web browser.
2 Click the Download trusted root link.
3 Change the extension of the downloaded file to .zip. (The file is a ZIP file of all certificates in the TRUSTED_ROOTS store.)
4 Extract the ZIP file.
A certificates folder is extracted. The folder includes files with the extension .0, .1, and so on, which are certificates, and files with the extension .r0, .r1, and so on, which are CRL files associated with the certificates.
5 Add the trusted root certificates to the list of trusted roots.
The process differs depending on the platform that you are on.
What to do next
You can now run ESXCLI commands against any host that is managed by the trusted vCenter Server system
without supplying additional information if you specify the vCenter Server system in the --server option
and the ESXi host in the --vihost option.
Using the --cacertsfile Option
Using a certificate to establish the trust relationship is the most secure option.
You can specify the certificate with the --cacertsfile parameter or the VI_CACERTFILE variable.
Using the --thumbprint Option
You can supply the thumbprint for the target ESXi host or vCenter Server system in the --thumbprint
parameter or the VI_THUMBPRINT variable.
When you run a command, ESXCLI first checks whether a certificate file is available. If not, ESXCLI checks
whether a thumbprint of the target server is available. If not, you receive an error of the following type.
Connect to sof-40583-srv failed. Server SHA-1 thumbprint: 5D:01:06:63:55:9D:DF:FE:38:81:6E:2C:FA:71:BC:63:82:C5:16:51 (not trusted).
You can run the command with the thumbprint to establish the trust relationship, or add the thumbprint to
the VI_THUMBPRINT variable. For example, using the thumbprint of the ESXi host above, you can run the
following command.
esxcli --server sof-40583-srv --thumbprint 5D:01:06:63:55:9D:DF:FE:38:81:6E:2C:FA:71:BC:63:82:C5:16:51 storage nfs list
Use the Credential Store
Your vCLI installation includes a credential store. You can establish trust for a user with the credential store.
You can manage the credential store with the credstore-admin utility application, which is located in
the /Perl/apps/general directory inside the VMware vSphere CLI directory.
IMPORTANT Updating the credential store is a two-step process. First you add the user and password for
the server, and then you add the thumbprint for the server.
Procedure
1 Add the user and password for the target ESXi host to the local credential store.
If you are using a non-default credential store file, you must pass it in with the --credstore option.
If you do not use the --credstore option, the host becomes accessible without authentication.
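For example, assuming the Perl script name credstore_admin.pl for the credstore-admin application mentioned above, an entry might be added as follows; the host name and credentials are placeholders.
credstore_admin.pl add --server <ESXi_host> --username <user> --password <password>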
Using ESXCLI Output
Many ESXCLI commands generate output you might want to use in your application. You can run esxcli
with the --formatter dispatcher option and send the resulting output as input to a parser.
The --formatter option supports three values (csv, xml, and keyvalue) and is used before any namespace.
The following example lists all file system information in CSV format.
esxcli --formatter=csv storage filesystem list
You can pipe the output to a file.
esxcli --formatter=keyvalue storage filesystem list > myfilesystemlist.txt
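As an illustration only (this script is not part of the vCLI package; the input file name matches the example above, and the key names vary by command), a short parser can split each keyvalue line at the first equal sign.

#!/usr/bin/env python
# Minimal sketch: read the key=value pairs written by
# esxcli --formatter=keyvalue ... > myfilesystemlist.txt
records = {}
with open("myfilesystemlist.txt") as f:
    for line in f:
        line = line.strip()
        if not line or "=" not in line:
            continue                     # skip blank or malformed lines
        key, value = line.split("=", 1)  # split at the first '=' only
        records[key] = value
for key, value in sorted(records.items()):
    print(key, "->", value)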
IMPORTANT You should always use a formatter for consistent output.
Connection Options for vCLI Host Management Commands
You can run host management commands such as ESXCLI commands, vicfg- commands, and other
commands with several different connection options.
You can target hosts directly or target a vCenter Server system and specify the host you want to manage. If
you are targeting a vCenter Server system, specify the Platform Services Controller, which includes the
vCenter Single Sign-On service, for best security.
IMPORTANT For connections to ESXi hosts version 6.0 or later, vCLI supports both the IPv4 protocol and the
IPv6 protocol. For earlier versions, vCLI supports only IPv4. In all cases, you can configure IPv6 on the
target host with several of the networking commands.
See the Getting Started with vSphere Command-Line Interfaces documentation for a complete list and examples.
Connection Options for DCLI Commands
DCLI is a CLI client to the vSphere Automation SDK interface for managing VMware SDDC services. A
DCLI command talks to a vSphere Automation SDK endpoint to get the vSphere Automation SDK
command information, executes the command, and displays the result to the user.
You can run DCLI commands locally or from an administration server.
- Run DCLI on the Linux shell of a vCenter Server Appliance.
- Install vCLI on a supported Windows or Linux system and target a vCenter Server Windows installation or a vCenter Server Appliance. You have to provide endpoint information to successfully run commands.
DCLI commands support different connection options than the other commands in the command set.
See the Getting Started with vSphere Command-Line Interfaces documentation for a complete list and examples.
vCLI Host Management Commands and Lockdown Mode
For additional security, an administrator can place one or more hosts managed by a vCenter Server system
in lockdown mode. Lockdown mode affects login privileges for the ESXi host.
See the vSphere Security document in the vSphere Documentation Center for a detailed discussion of normal
lockdown mode and strict lockdown mode, and of how to enable and disable them.
To make changes to ESXi systems in lockdown mode, you must go through a vCenter Server system that
manages the ESXi system as the user vpxuser and include both the --server and --vihost parameters.
esxcli --server MyVC --vihost MyESXi storage filesystem list
The command prompts for the vCenter Server system user name and password.
The following commands cannot run against vCenter Server systems and are therefore not available in
lockdown mode.
- vifs
- vicfg-user
- vicfg-cfgbackup
- vihostupdate
- vmkfstools
If you have problems running a command on an ESXi host directly, without specifying a vCenter Server
target, check whether lockdown mode is enabled on that host.
2 Managing Hosts
Host management commands can stop and reboot ESXi hosts, back up configuration information, and
manage host updates. You can also use a host management command to make your host join an Active
Directory domain or exit from a domain.
For information on updating ESXi 5.0 hosts with the esxcli software command and on changing the host
acceptance level to match the level of a VIB that you might want to use for an update, see the vSphere
Upgrade documentation in the vSphere 5.0 Documentation Center.
This chapter includes the following topics:
- “Stopping, Rebooting, and Examining Hosts,” on page 21
- “Entering and Exiting Maintenance Mode,” on page 22
- “Backing Up Configuration Information with vicfg-cfgbackup,” on page 24
- “Managing VMkernel Modules,” on page 25
- “Using vicfg-authconfig for Active Directory Configuration,” on page 26
- “Updating Hosts,” on page 27
Stopping, Rebooting, and Examining Hosts
You can stop, reboot, and examine hosts with ESXCLI or with vicfg-hostops.
Stopping and Rebooting Hosts with ESXCLI
You can shut down or reboot an ESXi host by using the vSphere Web Client or vCLI commands, such as
ESXCLI or vicfg-hostops.
Shutting down a managed host disconnects it from the vCenter Server system, but does not remove the host
from the inventory. You can shut down a single host or all hosts in a data center or cluster. Specify one of the
options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of
<conn_options>.
To shut down a host, run esxcli system shutdown poweroff. You must specify the --reason option and
supply a reason for the shutdown. A --delay option allows you to specify a delay interval, in seconds.
To reboot a host, run esxcli system shutdown reboot. You must specify the --reason option and supply a reason
for the reboot. A --delay option allows you to specify a delay interval, in seconds.
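For example, calls of the following form (the reason strings and delay value are placeholders) power off or reboot the host after a 60-second delay.
esxcli <conn_options> system shutdown poweroff --reason "hardware maintenance" --delay 60
esxcli <conn_options> system shutdown reboot --reason "applying patches" --delay 60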
Stopping, Rebooting, and Examining Hosts with vicfg-hostops
You can shut down or reboot an ESXi host by using the vSphere Web Client, ESXCLI, or the vicfg-hostops
vCLI command.
Shutting down a managed host disconnects it from the vCenter Server system, but does not remove the host
from the inventory. You can shut down a single host or all hosts in a data center or cluster. Specify one of the
options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of
<conn_options>.
- Single host - Run vicfg-hostops with --operation shutdown.
If the host is in maintenance mode, run the command without the --force option.
vicfg-hostops <conn_options> --operation shutdown
If the host is not in maintenance mode, use --force to shut down the host and all running virtual machines.
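In that case the call adds --force to the command shown above, along these lines.
vicfg-hostops <conn_options> --operation shutdown --force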
You can display information about a host by running vicfg-hostops with --operation info.
vicfg-hostops <conn_options> --operation info
The command returns the host name, manufacturer, model, processor type, CPU cores, memory capacity,
and boot time. The command also returns whether vMotion is enabled and whether the host is in
maintenance mode.
Entering and Exiting Maintenance Mode
You can instruct your host to enter or exit maintenance mode with ESXCLI or with vicfg-hostops.
Enter and Exit Maintenance Mode with ESXCLI
You place a host in maintenance mode to service it, for example, to install more memory. A host enters or
leaves maintenance mode only as the result of a user request.
esxcli system maintenanceMode set allows you to enable or disable maintenance mode.
When you run the vicfg-hostops vCLI command, you can specify one of the options listed in “Connection
Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 To enter maintenance mode, run the following command.
esxcli <conn_options> system maintenanceMode set --enable true
After all virtual machines on the host have been suspended or migrated, the host enters maintenance
mode.
NOTE You cannot deploy or power on a virtual machine on hosts in maintenance mode.
2 To exit maintenance mode, run the following command.
esxcli <conn_options> system maintenanceMode set --enable false
NOTE If you attempt to exit maintenance mode when the host is no longer in maintenance mode, an
error informs you that maintenance mode is already disabled.
Enter and Exit Maintenance Mode with vicfg-hostops
You place a host in maintenance mode to service it, for example, to install more memory. A host enters or
leaves maintenance mode only as the result of a user request.
vicfg-hostops suspends virtual machines by default, or powers off the virtual machines if you run vicfg-hostops --action poweroff.
NOTE vicfg-hostops does not work with VMware DRS. Virtual machines are always suspended.
The host is in a state of Entering Maintenance Mode until all running virtual machines are suspended or
migrated. When a host is entering maintenance mode, you cannot power on virtual machines on it or
migrate virtual machines to it.
When you run the vicfg-hostops vCLI command, you can specify one of the options listed in “Connection
Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 To enter maintenance mode, run the following command.
vicfg-hostops <conn_options> --operation enter
2 To check whether the host is in maintenance mode or in the Entering Maintenance Mode state, run the
following command.
vicfg-hostops <conn_options> --operation info
After all virtual machines on the host have been suspended or migrated, the host enters maintenance mode.
You cannot deploy or power on a virtual machine on hosts in maintenance mode.
What to do next
You can put all hosts in a cluster or data center in maintenance mode by using the --cluster or
--datacenter option. Use these options only if it is acceptable to suspend all virtual machines in that cluster
or data center.
You can later run vicfg-hostops <conn_options> --operation exit to exit maintenance mode.
Backing Up Configuration Information with vicfg-cfgbackup
After you configure an ESXi host, you can back up the host configuration data. You should always back up
your host configuration after you change the configuration or upgrade the ESXi image.
IMPORTANT The vicfg-cfgbackup command is available only for ESXi hosts. The command is not available
through a vCenter Server system connection. No equivalent ESXCLI command is supported.
Backup Tasks
During a configuration backup, the serial number is backed up with the configuration.
The number is restored when you restore the configuration. The number is not preserved when you run the
Recovery CD (ESXi Embedded) or perform a repair operation (ESXi Installable).
You can back up and restore configuration information as follows.
1 Back up the configuration by using the vicfg-cfgbackup command.
2 Run the Recovery CD or repair operation.
3 Restore the configuration by using the vicfg-cfgbackup command.
When you restore a configuration, you must make sure that all virtual machines on the host are stopped.
Backing Up Configuration Data
You can back up configuration data by running vicfg-cfgbackup with the -s option.
The following example backs up configuration data in a temporary location.
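A call of the following form fits that description; the file path, including the build number embedded in the name, is a placeholder.
vicfg-cfgbackup <conn_options> -s /tmp/ESXi_181842_backup.txt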
For the backup filename, include the number of the build that is running on the host that you are backing
up. If you are running vCLI on vMA, the backup file is saved locally on vMA. Backup files can safely be
stored locally because virtual appliances are stored in the /vmfs/volumes/<datastore> directory on the host,
which is separate from the ESXi image and configuration files.
Restore Configuration Data
If you have created a backup, you can later restore ESXi configuration data.
When you restore configuration data, the number of the build running on the host must be the same as the
number of the build that was running when you created the backup file. To override this requirement,
include the -f (force) option.
When you run the vicfg-cfgbackup vCLI command, you can specify one of the options listed in
“Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 Power off all virtual machines that are running on the host that you want to restore.
2 Log in to a host on which vCLI is installed, or log in to vMA.
3 Run vicfg-cfgbackup with the -l flag to load the host configuration from the specified backup file.
If you run the following command, you are prompted for confirmation.
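A restore call mirrors the backup example above; the backup file path is a placeholder.
vicfg-cfgbackup <conn_options> -l /tmp/ESXi_181842_backup.txt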
4 (Optional) To restore the host to factory settings, run vicfg-cfgbackup with the -r option.
vicfg-cfgbackup <conn_options> -r
Using vicfg-cfgbackup from vMA
To back up a host configuration, you can run vicfg-cfgbackup from a vMA instance. The vMA instance can
run on the host that you are backing up or restoring, also referred to as the target host, or on a remote host.
To restore a host configuration, you must run vicfg-cfgbackup from a vMA instance running on a remote
host. The host must be in maintenance mode, which means all virtual machines, including vMA, must be
suspended on the target host.
For example, a backup operation for two ESXi hosts, host1 and host2, with vMA deployed on both hosts
works as follows.
- To back up one of the host's configurations, run vicfg-cfgbackup from the vMA appliance running on either host1 or host2. Use the --server option to specify the host for which you want backup information. The information is stored on vMA.
- To restore the host1 configuration, run vicfg-cfgbackup from the vMA appliance running on host2. Use the --server option to point to host1 to restore the configuration to that host.
- To restore the host2 configuration, run vicfg-cfgbackup from the vMA appliance running on host1. Use the --server option to point to host2 to restore the configuration to that host.
Managing VMkernel Modules
The esxcli system module and vicfg-module commands support setting and retrieving VMkernel module
options.
The vicfg-module and esxcli system module commands are implementations of the deprecated esxcfg-
module service console command. The two commands support most of the options esxcfg-module supports.
vicfg-module and esxcli system module are commonly used when VMware Technical Support, a
Knowledge Base article, or VMware documentation instruct you to do so.
Manage Modules with esxcli system module
Not all VMkernel modules have settable module options.
The following example illustrates how to examine and enable a VMkernel module. Specify one of the
connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in
place of <conn_options>.
Procedure
1 List information about the module.
esxcli <conn_options> system module list --module=module_name
The system returns the name, type, value, and description of the module.
2 (Optional) List all enabled or loaded modules.
esxcli <conn_options> system module list --enabled=true
esxcli <conn_options> system module list --loaded=true
3 Enable the module.
esxcli <conn_options> system module set --module=module_name --enabled=true
4 Set the parameter.
esxcli system module parameters set --module=module_name --parameter-
string="parameter_string"
5 Verify that the module is configured.
esxcli <conn_options> system module parameters list --module=module_name
Manage Modules with vicfg-module
Not all VMkernel modules have settable module options.
The following example illustrates how to examine and enable a VMkernel module. Specify one of the
connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in
place of <conn_options>.
Procedure
1 Run vicfg-module --list to list the modules on the host.
vicfg-module <conn_options> --list
2 Run vicfg-module --set-options with connection options, the option string to be passed to the module, and the module name.
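The resulting call has the following shape; the option string and module name are placeholders.
vicfg-module <conn_options> --set-options 'option_string' module_name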
Using vicfg-authconfig for Active Directory Configuration
ESXi can be integrated with Active Directory. Active Directory provides authentication for all local services
and for remote access through the vSphere Web Services SDK, vSphere Web Client, PowerCLI, and vSphere
CLI.
You can configure Active Directory settings with the vSphere Web Client, as discussed in the vCenter Server and Host Management documentation, or use vicfg-authconfig.
vicfg-authconfig allows you to remotely configure Active Directory settings on ESXi hosts. You can list
supported and active authentication mechanisms, list the current domain, and join or leave an Active
Directory domain.
Prepare ESXi Hosts for Active Directory Integration
Before you run the vicfg-authconfig command on an ESXi host, you must prepare the host.
Procedure
1 Configure ESXi and Active Directory to use the same NTP server.
IMPORTANT All hosts that join Active Directory must also be managed by an NTP server to avoid
issues with clock skews and Kerberos tickets. You must make sure the ESXi system and the Active
Directory server are using the same time zone.
The ESXi system's time zone is always set to UTC.
2 Configure the ESXi system's DNS to be in the Active Directory domain.
Set Up Active Directory to Work with ESXi
You can run vicfg-authconfig to add the ESXi host to the Active Directory domain. You can run the
command directly against the host or against a vCenter Server system, specifying the host with --vihost.
Prerequisites
- Verify that you have installed the ESXi host, as explained in the vSphere Installation and Setup documentation.
- Verify that you have installed Windows Active Directory on a Windows Server that runs Windows 2000 Server, Windows Server 2003, or Windows Server 2008. See the Microsoft Web site for instructions and best practices.
- Verify that you have the appropriate Active Directory permissions and administrative privileges on the ESXi host.
- Verify that time between the ESXi system and Windows Active Directory is synchronized.
Procedure
1 Test that the Windows Active Directory Server can ping the ESXi host by using the host name.
ping <ESX_hostname>
2 Run vicfg-authconfig to add the host to the Active Directory domain.
vicfg-authconfig --server=<ESXi Server IP Address>
--username=<ESXi Server Admin Username>
--password=<ESXi Server Admin User's Password>
--authscheme AD --joindomain <AD Domain Name>
--adusername=<Active Directory Administrator User Name>
The system prompts for user names and passwords if you do not specify them on the command line.
Passwords are not echoed to the screen.
3 Check that a Successfully Joined <Domain Name> message appears.
4 Verify the ESXi host is in the intended Windows Active Directory domain.
vicfg-authconfig --server XXX.XXX.XXX.XXX --authscheme AD -c
You are prompted for a user name and password for the ESXi system.
Updating Hosts
When you add custom drivers or patches to a host, the process is called an update.
- Update ESXi 4.0 and ESXi 4.1 hosts with the vihostupdate command, as discussed in the vSphere Command-Line Interface Installation and Reference Guide included in the vSphere 4.1 documentation set.
- Update ESXi 5.0 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 5.0 documentation set. You cannot run the vihostupdate command against ESXi 5.0 or later.
- Update ESXi 5.1 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 5.1 documentation set.
- Update ESXi 5.5 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 5.5 documentation set.
- Update ESXi 6.0 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 6.0 documentation set.
- Update ESXi 6.5 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 6.5 documentation set.
3 Managing Files
The vSphere CLI includes two commands for file manipulation. vmkfstools allows you to manipulate VMFS
(Virtual Machine File System) and virtual disks. vifs supports remote interaction with files on your ESXi
host.
NOTE See Chapter 4, “Managing Storage,” on page 41 for information about storage manipulation
commands.
This chapter includes the following topics:
- “Introduction to Virtual Machine File Management,” on page 29
- “Managing the Virtual Machine File System with vmkfstools,” on page 30
- “Upgrading VMFS3 Volumes to VMFS5,” on page 31
- “Managing VMFS Volumes,” on page 31
- “Reclaiming Unused Storage Space,” on page 34
- “Using vifs to View and Manipulate Files on Remote ESXi Hosts,” on page 35
Introduction to Virtual Machine File Management
You can use the vSphere Web Client or vCLI commands to access different types of storage devices that your
ESXi host discovers and to deploy datastores on those devices.
NOTE Datastores are logical containers, analogous to file systems, that hide specifics of each storage device
and provide a uniform model for storing virtual machine files. Datastores can be used for storing ISO
images, virtual machine templates, and floppy images. The vSphere Web Client uses the term datastore
exclusively. In vCLI, the term datastore, as well as VMFS or NFS volume, refer to the same logical container
on the physical device.
Depending on the type of storage you use, datastores can be backed by the VMFS and NFS file system
formats.
- Virtual Machine File System (VMFS) - High-performance file system that is optimized for storing virtual machines. Your host can deploy a VMFS datastore on any SCSI-based local or networked storage device, including Fibre Channel and iSCSI SAN equipment. As an alternative to using the VMFS datastore, your virtual machine can have direct access to raw devices and use a mapping file (RDM) as a proxy.
You manage VMFS and RDMs with the vSphere Web Client, or the vmkfstools command.
- Network File System (NFS) - The NFS client built into ESXi uses the NFS protocol over TCP/IP to access a designated NFS volume that is located on a NAS server. The ESXi host can mount the volume and use it for its storage needs. vSphere supports versions 3 and 4.1 of the NFS protocol. Typically, the NFS volume or directory is created by a storage administrator and is exported from the NFS server. The NFS volumes do not need to be formatted with a local file system, such as VMFS. You can mount the volumes directly and use them to store and boot virtual machines in the same way that you use VMFS datastores. The host can access a designated NFS volume located on an NFS server, mount the volume, and use it for any storage needs.
You manage NAS storage devices from the vSphere Web Client or with the esxcli storage nfs command. The diagram below illustrates different types of storage, but it is for conceptual purposes only. It is not a recommended configuration.
Figure 3‑1. Virtual Machines Accessing Different Types of Storage
Managing the Virtual Machine File System with vmkfstools
VMFS datastores primarily serve as repositories for virtual machines.
You can store multiple virtual machines on the same VMFS volume. Each virtual machine, encapsulated in a
set of files, occupies a separate single directory. For the operating system inside the virtual machine, VMFS
preserves the internal file system semantics.
In addition, you can use the VMFS datastores to store other files, such as virtual machine templates and ISO
images. VMFS supports file and block sizes that enable virtual machines to run data-intensive applications,
including databases, ERP, and CRM, in virtual machines. See the vSphere Storage documentation.
You use the vmkfstools vCLI command to create and manipulate virtual disks, file systems, logical volumes, and
physical storage devices on an ESXi host. You can use vmkfstools to create and manage a virtual machine
file system on a physical partition of a disk and to manipulate files, such as virtual disks, stored on VMFS-3
and NFS. You can also use vmkfstools to set up and manage raw device mappings (RDMs).
IMPORTANT The vmkfstools vCLI command supports most but not all of the options that the vmkfstools ESXi Shell
command supports. See VMware Knowledge Base article 1008194.
You cannot run vmkfstools with --server pointing to a vCenter Server system.
The vSphere Storage documentation includes a complete reference to the vmkfstools command that you can
use in the ESXi Shell. You can use most of the same options with the vmkfstools vCLI command. Specify one
of the connection options listed in “Connection Options for vCLI Host Management Commands,” on
page 19 in place of <conn_options>.
The following options supported by the vmkfstools ESXi Shell command are not supported by the
vmkfstools vCLI command.
- --breaklock -B
- --chainConsistent -e
- --eagerzero -k
- --fix -x
- --lock -L
- --migratevirtualdisk -M
- --parseimage -Y
- --punchzero -K
- --snapshotdisk -I
- --verbose -v
Upgrading VMFS3 Volumes to VMFS5
vSphere 5.0 supports VMFS5 volumes, which have improved scalability and performance.
You can upgrade from VMFS3 to VMFS5 by using the vSphere Web Client, the vmkfstools ESXi Shell
command, or the esxcli storage vmfs upgrade command. You can pass the volume label or the volume
UUID to the ESXCLI command.
IMPORTANT You cannot upgrade VMFS3 volumes to VMFS5 with the vmkfstools command included in
vSphere CLI.
Managing VMFS Volumes
Different commands are available for listing, mounting, and unmounting VMFS volumes and for listing,
mounting, and unmounting VMFS snapshot volumes.
- Managing VMFS volumes
esxcli storage filesystem list shows all volumes, mounted and unmounted, that are resolved, that is, that are not snapshot volumes.
esxcli storage filesystem unmount unmounts a currently mounted file system. Use this command for snapshot volumes or resolved volumes.
- Managing snapshot volumes
esxcli storage vmfs snapshot commands can be used for listing, mounting, and resignaturing snapshot volumes. See “Mounting Datastores with Existing Signatures,” on page 32 and “Resignaturing VMFS Copies,” on page 33.
Managing Duplicate VMFS Datastores
In some cases VMFS datastores can have duplicate UUIDs.
Each VMFS datastore created in a LUN has a unique UUID that is stored in the file system superblock.
When the LUN is replicated or when a snapshot is made, the resulting LUN copy is identical, byte-for-byte,
to the original LUN. As a result, if the original LUN contains a VMFS datastore with UUID X, the LUN copy
appears to contain an identical VMFS datastore, or a VMFS datastore copy, with the same UUID X.
ESXi hosts can determine whether a LUN contains the VMFS datastore copy, and either mount the datastore
copy with its original UUID or change the UUID to resignature the datastore.
When a LUN contains a VMFS datastore copy, you can mount the datastore with the existing signature or
assign a new signature. The vSphere Storage documentation discusses volume resignaturing in detail.
Mounting Datastores with Existing Signatures
You can mount a VMFS datastore copy without changing its signature if the original is not mounted.
For example, you can maintain synchronized copies of virtual machines at a secondary site as part of a
disaster recovery plan. In the event of a disaster at the primary site, you can mount the datastore copy and
power on the virtual machines at the secondary site.
IMPORTANT You can mount a VMFS datastore only if it does not conflict with an already mounted VMFS
datastore that has the same UUID.
When you mount the VMFS datastore, ESXi allows both read and write operations to the datastore that
resides on the LUN copy. The LUN copy must be writable. The datastore mounts are persistent and valid
across system reboots.
You can mount a datastore with ESXCLI or with vicfg-volume. See “Mount a Datastore with ESXCLI,” on
page 32 or “Mount a Datastore with vicfg-volume,” on page 33.
Mount a Datastore with ESXCLI
The esxcli storage filesystem commands support mounting and unmounting volumes. You can also
specify whether to persist the mounted volumes across reboots by using the --no-persist option.
Use the esxcli storage filesystem command to list mounted volumes, mount new volumes, and unmount
a volume. Specify one of the connection options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
Procedure
1 List all volumes that have been detected as snapshots.
esxcli <conn_options> storage filesystem list
2 Run esxcli storage filesystem volume mount with the volume label or volume UUID.
esxcli <conn_options> storage filesystem volume mount --volume-label=<label>|--volume-uuid=<VMFS-UUID>
NOTE This command fails if the original copy is online.
What to do next
You can later run esxcli storage filesystem volume unmount to unmount the snapshot volume.
Mount a Datastore with vicfg-volume
The vicfg-volume command supports mounting and unmounting volumes.
Use the vicfg-volume command to list mounted volumes, mount new volumes, and unmount a volume.
Specify one of the connection options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
Procedure
1 List all volumes that have been detected as snapshots or replicas.
vicfg-volume <conn_options> --list
2 Run vicfg-volume --persistent-mount with the VMFS-UUID or label as an argument to mount a volume.
The vicfg-volume command also supports resignaturing a snapshot volume and mounting and unmounting the
volume. You can also make the mounted volume persistent across reboots and query a list of snapshot
volumes and original volumes.
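For example, calls of the following shape mount a volume persistently or resignature it; --persistent-mount is described above, while the --resignature option name is an assumption based on the vicfg-volume help.
vicfg-volume <conn_options> --persistent-mount <VMFS-UUID|label>
vicfg-volume <conn_options> --resignature <VMFS-UUID|label>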
Resignaturing VMFS Copies
You can use datastore resignaturing to retain the data stored on the VMFS datastore copy.
When resignaturing a VMFS copy, the ESXi host assigns a new UUID and a new label to the copy, and
mounts the copy as a datastore distinct from the original. Because ESXi prevents you from resignaturing the
mounted datastore, unmount the datastore before resignaturing.
The default format of the new label assigned to the datastore is snap-<snapID>-<oldLabel>, where <snapID>
is an integer and <oldLabel> is the label of the original datastore.
When you perform datastore resignaturing, consider the following points.
- Datastore resignaturing is irreversible.
- The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
- A spanned datastore can be resignatured only if all its extents are online.
- The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
- You can mount the new VMFS datastore without a risk of its UUID conflicting with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.
You can resignature a VMFS copy with ESXCLI or with vicfg-volume. See “Resignature a VMFS Copy with
ESXCLI,” on page 34 or “Resignature a VMFS Copy with vicfg-volume,” on page 34.
Resignature a VMFS Copy with ESXCLI
The esxcli storage vmfs snapshot commands support resignaturing a snapshot volume.
Specify one of the connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
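A resignature call takes the volume label or UUID, along these lines; the exact invocation follows the esxcli storage vmfs snapshot namespace named above.
esxcli <conn_options> storage vmfs snapshot resignature --volume-label=<label>|--volume-uuid=<VMFS-UUID>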
The command returns to the prompt or signals an error.
Reclaiming Unused Storage Space
When VMFS datastores reside on thin-provisioned LUNs, you can use ESXCLI commands to reclaim the
unused logical blocks of a thin-provisioned LUN formatted with VMFS.
When you run the commands, you must specify the volume label (--volume-label) or the volume UUID
(--volume-uuid), but you cannot specify both.
In each iteration, the command issues unmap commands to the number of file system blocks that are
specified by the optional reclaim-unit argument, which defaults to 200. For newly created VMFS-5 file
systems, the file system block size is always 1 MB. For VMFS-3 file systems or VMFS-5 file systems that were
upgraded from VMFS-3, the file system block size could be one of 1, 2, 4, or 8 MB.
The following examples illustrate how to use the command.
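Given the options described above, the calls take roughly this shape; the volume label, UUID, and reclaim unit are placeholders.
esxcli <conn_options> storage vmfs unmap --volume-label=<label>
esxcli <conn_options> storage vmfs unmap --volume-uuid=<VMFS-UUID> --reclaim-unit=100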
Using vifs to View and Manipulate Files on Remote ESXi Hosts
You can use the vifs utility for datastore file management.
CAUTION If you manipulate files directly, your vSphere setup might end up in an inconsistent state. Use the
vSphere Web Client or one of the other vCLI commands to manipulate virtual machine configuration files
and virtual disks.
The vifs command performs common operations such as copy, remove, get, and put on ESXi files and
directories. The command is supported against ESXi hosts but not against vCenter Server systems.
Some similarities between vifs and DOS or UNIX/Linux file system management utilities exist, but there are
many differences. For example, vifs does not support wildcard characters or current directories and, as a
result, relative pathnames. You should use vifs only as documented.
Instead of using the vifs command, you can browse datastore contents and host files by using a Web browser. Connect to the following location.
http://ESX_host_IP_Address/host
http://ESX_host_IP_Address/folder
You can view data center and datastore directories from this root URL. The following examples demonstrate
the syntax that you can use.
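For example, a URL of the following form lists the contents of a datastore directory on a standalone host; the host name is a hypothetical placeholder, and ha-datacenter is the default data center name on a standalone ESXi host.
http://esxi-host.example.com/folder/testvms?dcPath=ha-datacenter&dsName=myStorage1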
The ESXi host prompts for a user name and password.
The vifs command supports different operations for the following groups of files and directories. Different operations are available for each group, and you specify locations with a different syntax. The behavior differs for vSphere 4.x and vSphere 5.0.
Host - Host configuration files. You must specify the file's unique name identifier.
  vSphere 4.x: Specify host locations by using the /host/<path> syntax.
  vSphere 5.0: Specify host locations by using the /host/<path> syntax. You cannot list subdirectories of /host.

Temp - The /tmp directory and files in that directory.
  vSphere 4.x: Specify temp locations by using the /tmp/dir/subdir syntax.
  vSphere 5.0: Not supported.

Datastores - Datastore files and directories (vSphere 4.x and vSphere 5.0). You have two choices for specifying a datastore.
  - Use datastore prefix style '[ds_name] relative_path', as demonstrated in the following example.
    '[myStorage1] testvms/VM1/VM1.vmx' (Linux) or "[myStorage1] testvms/VM1/VM1.vmx" (Windows)
  - Use URL style /folder/dir/subdir/file?dsName=<name>, as demonstrated in the following example.
    /folder/testvms/VM1/VM1.vmx?dsName=myStorage1
  The two example paths refer to a virtual machine configuration file for the VM1 virtual machine in the testvms/VM1 directory of the myStorage1 datastore.

To avoid problems with directory names that use special characters or spaces, enclose the path in quotes for both operating systems.
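As a usage sketch, the following command downloads the VM1 configuration file by using the datastore prefix style; the server name is a hypothetical placeholder, and vifs prompts for the password.
vifs --server esxi-host.example.com --username root --get "[myStorage1] testvms/VM1/VM1.vmx" /tmp/VM1.vmx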
When you run vifs, you can specify the operation name and argument and one of the standard connection
options. Use aliases, symbolic links, or wrapper scripts to simplify the invocation syntax.
IMPORTANT The concepts of working directory and last directory or file operated on are not supported with vifs.
vifs Options
vifs command-specific options allow you to retrieve and upload files from the remote host and perform a number of other operations.
All vifs options work on datastore files or directories. Some options also work on host files and files in the temp directory. You must also specify connection options.
--copy | -c <source> <target>
Copies a file in a datastore to another location in a datastore. The <source> must be a remote source path, the <target> a remote target path or directory. The --force option replaces existing destination files.
Target: Datastore, Temp
Syntax: copy src_file_path dst_directory_path [--force]
        copy src_file_path dst_file_path [--force]

--dir | -D <remote_dir>
Lists the contents of a datastore directory.
Target: Datastore
Syntax: dir datastore_directory_path

--force | -F
Overwrites the destination file. Used with --move and --copy.
Target: Datastore, Temp
Syntax: copy src_file_path dst_file_path [--force]

--get | -g <remote_path> <local_path>
Downloads a file from the ESXi host to the machine on which you run vCLI. This operation uses HTTP GET.
Target: Datastore, Host
Syntax: get src_dstore_file_path dst_local_file_path
        get src_dstore_dir_path dst_local_file_path

--listdc | -C
Lists the data center paths available on an ESXi system.
Syntax: vifs --listdc

--listds | -S
Lists the datastore names on the ESXi system. When multiple data centers are available, use the --dc (-Z) argument to specify the name of the data center from which you want to list the datastore.
Syntax: vifs --listds

--mkdir | -M <remote_dir>
Creates a directory in a datastore. This operation fails if the parent directory of dst_datastore_file_path does not exist.
Target: Datastore
Syntax: mkdir dst_directory_path

--move | -m <source> <target>
Moves a file in a datastore to another location in a datastore. The <source> must be a remote source path, the <target> a remote target path or directory. The --force option replaces existing destination files.
Target: Datastore, Temp
Syntax: move src_file_path dst_directory_path [--force]
        move src_file_path dst_file_path [--force]

--put | -p <local_path> <remote_path>
Uploads a file from the machine on which you run vCLI to the ESXi host. This operation uses HTTP PUT. This command can replace existing host files but cannot create new files.
Target: Datastore, Host, Temp
Syntax: put src_local_file_path dst_file_path
        put src_local_file_path dst_directory_path

--rm | -r <remote_path>
Deletes a datastore file.
Target: Datastore
Syntax: rm dst_file_path

--rmdir | -R <remote_dir>
Deletes a datastore directory. This operation fails if the directory is not empty.
Target: Datastore, Temp
Syntax: rmdir dst_directory_path
vifs Examples
You can use vifs to interact with the remote ESXi or vCenter Server system in a variety of ways.
Specify one of the connection options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
NOTE The examples illustrate use on a Linux system. You must use double quotes instead of single quotes when on a Windows system.
Listing Remote Information
List all data centers on a vCenter Server system with --listdc, using --server to point to the vCenter Server system.
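A minimal sketch, assuming a vCenter Server system reachable as vc.example.com (a hypothetical address); the command prompts for the password:
vifs --server vc.example.com --username administrator --listdc
vifs --server vc.example.com --username administrator --listds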
Chapter 4 Managing Storage
A virtual machine uses a virtual disk to store its operating system, program files, and other data associated with its activities. A virtual disk is a large physical file, or a set of files, that can be copied, moved, archived, and backed up.
To store virtual disk files and manipulate the files, a host requires dedicated storage space. ESXi storage is storage space on a variety of physical storage systems, local or networked, that a host uses to store virtual machine disks.
Chapter 5, "Managing iSCSI Storage," on page 69 discusses iSCSI storage management. Chapter 6, "Managing Third-Party Storage Arrays," on page 101 explains how to manage the Pluggable Storage Architecture, including Path Selection Plugin (PSP) and Storage Array Type Plug-in (SATP) configuration.
For information on masking and unmasking paths with ESXCLI, see the vSphere Storage documentation.
This chapter includes the following topics:
- "Introduction to Storage," on page 42
- "Examining LUNs," on page 45
- "Detach a Device and Remove a LUN," on page 48
- "Reattach a Device," on page 49
- "Working with Permanent Device Loss," on page 49
- "Managing Paths," on page 50
- "Managing Path Policies," on page 54
- "Scheduling Queues for Virtual Machine I/O," on page 57
- "Managing NFS/NAS Datastores," on page 57
- "Monitor and Manage FibreChannel SAN Storage," on page 59
- "Monitoring and Managing Virtual SAN Storage," on page 60
- "Monitoring vSphere Flash Read Cache," on page 62
- "Monitoring and Managing Virtual Volumes," on page 62
- "Migrating Virtual Machines with svmotion," on page 63
- "Configuring FCoE Adapters," on page 65
- "Scanning Storage Adapters," on page 66
- "Retrieving SMART Information," on page 66
Introduction to Storage
Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays are widely used storage technologies supported by VMware vSphere to meet different data center storage needs.
The storage arrays are connected to and shared between groups of servers through storage area networks. This arrangement allows aggregation of the storage resources and provides more flexibility in provisioning them to virtual machines.
Figure 4‑1. vSphere Data Center Physical Topology
How Virtual Machines Access Storage
A virtual disk hides the physical storage layer from the virtual machine's operating system.
Regardless of the type of storage device that your host uses, the virtual disk always appears to the virtual machine as a mounted SCSI device. As a result, you can run operating systems that are not certified for specific storage equipment, such as SAN, in the virtual machine.
When a virtual machine communicates with its virtual disk stored on a datastore, it issues SCSI commands.
Because datastores can exist on various types of physical storage, these commands are encapsulated into
other forms, depending on the protocol that the ESXi host uses to connect to a storage device.
Figure 4-2 depicts five virtual machines that use different types of storage to illustrate the differences between each type.
Figure 4‑2. Virtual Machines Accessing Different Types of Storage
You can use vCLI commands to manage the virtual machine file system and storage devices.
- VMFS - Use vmkfstools to create, modify, and manage VMFS virtual disks and raw device mappings. See "Managing the Virtual Machine File System with vmkfstools," on page 30 for an introduction and the vSphere Storage documentation for a detailed reference.
- Datastores - Several commands allow you to manage datastores and are useful for multiple protocols.
  - LUNs - Use esxcli storage core or vicfg-scsidevs commands to display available LUNs and mappings for each VMFS volume to its corresponding partition. See "Examining LUNs," on page 45.
  - Path management - Use esxcli storage core or vicfg-mpath commands to list information about Fibre Channel or iSCSI LUNs and to change a path's state. See "Managing Paths," on page 50. Use the ESXCLI command to view and modify path policies. See "Managing Path Policies," on page 54.
  - Rescan - Use esxcli storage core adapter rescan or vicfg-rescan to perform a rescan operation each time you reconfigure your storage setup. See "Scanning Storage Adapters," on page 66.
- Storage devices - Several commands manage only specific storage devices.
  - NFS storage - Use esxcli storage nfs or vicfg-nas to manage NAS storage devices. See "Managing NFS/NAS Datastores," on page 57.
  - iSCSI storage - Use esxcli iscsi or vicfg-iscsi to manage both hardware and software iSCSI. See Chapter 5, "Managing iSCSI Storage," on page 69.
- Software-defined storage - vSphere supports several types of software-defined storage.
  - Virtual SAN storage - Use commands in the esxcli vsan namespace to manage Virtual SAN. See "Monitoring and Managing Virtual SAN Storage," on page 60.
  - Virtual Flash storage - Use commands in the esxcli storage vflash namespace to manage VMware vSphere Flash Read Cache.
  - Virtual volumes - Virtual volumes offer a different layer of abstraction than datastores. As a result, finer-grained management is possible. Use commands in the esxcli storage vvol namespace.
Datastores
ESXi hosts use storage space on a variety of physical storage systems, including internal and external devices and networked storage.
A host can discover storage devices to which it has access and format them as datastores. Each datastore is a special logical container, analogous to a file system on a logical volume, where the host places virtual disk files and other virtual machine files. Datastores hide specifics of each storage product and provide a uniform model for storing virtual machine files.
Depending on the type of storage you use, datastores can be backed by the following file system formats.
- Virtual Machine File System (VMFS) - High-performance file system optimized for storing virtual machines. Your host can deploy a VMFS datastore on any SCSI-based local or networked storage device, including Fibre Channel and iSCSI SAN equipment.
  As an alternative to using the VMFS datastore, your virtual machine can have direct access to raw devices and use a mapping file (RDM) as a proxy. See "Managing the Virtual Machine File System with vmkfstools," on page 30.
- Network File System (NFS) - File system on a NAS storage device. ESXi supports NFS version 3 over TCP/IP. The host can access a designated NFS volume located on an NFS server, mount the volume, and use it for any storage needs.
Storage Device Naming
Each storage device, or LUN, is identified by several device identifier names.
Device Identifiers
Depending on the type of storage, the ESXi host uses different algorithms and conventions to generate an identifier for each storage device.
- SCSI INQUIRY identifiers - The host uses the SCSI INQUIRY command to query a storage device and uses the resulting data, in particular the Page 83 information, to generate a unique identifier. SCSI INQUIRY device identifiers are unique across all hosts, persistent, and have one of the following formats.
  - naa.<number>
  - t10.<number>
  - eui.<number>
  These formats follow the T10 committee standards. See the SCSI-3 documentation on the T10 committee Web site for information on Page 83.
- Path-based identifier - If the device does not provide the information on Page 83 of the T10 committee SCSI-3 documentation, the host generates an mpx.<path> name, where <path> represents the first path to the device, for example, mpx.vmhba1:C0:T1:L3. This identifier can be used in the same way as the SCSI INQUIRY identifiers.
  The mpx identifier is created for local devices on the assumption that their path names are unique. However, this identifier is neither unique nor persistent and could change after every boot.
Typically, the path to the device has the following format.
vmhba<adapter>:C<channel>:T<target>:L<LUN>
- vmhba<adapter> is the name of the storage adapter. The name refers to the physical adapter on the host, not the SCSI controller used by the virtual machines.
- C<channel> is the storage channel number. Software iSCSI adapters and dependent hardware adapters use the channel number to show multiple paths to the same target.
- T<target> is the target number. Target numbering is determined by the host and might change if the mappings of targets that are visible to the host change. Targets that are shared by different hosts might not have the same target number.
- L<LUN> is the LUN number that shows the position of the LUN within the target. The number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
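For example, the runtime name vmhba1:C0:T3:L1 (a hypothetical path used only for illustration) identifies LUN 1 on target 3, reached through channel 0 of the storage adapter vmhba1.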
Legacy Identifiers
In addition to the SCSI INQUIRY or mpx identifiers, ESXi generates an alternative legacy name, called the VML name, for each device. Use the device UID instead.
Examining LUNs
A LUN (Logical Unit Number) is an identifier for a disk volume in a storage array target.
Target and Device Representation
In the ESXi context, the term target identifies a single storage unit that a host can access. The terms device and LUN describe a logical volume that represents storage space on a target. The terms device and LUN mean a SCSI volume presented to the host from a storage target.
Different storage vendors present their storage systems to ESXi hosts in different ways. Some vendors present a single target with multiple LUNs on it. Other vendors, especially iSCSI vendors, present multiple targets with one LUN each.
Figure 4‑3. Target and LUN Representations
In Figure 4-3, three LUNs are available in each configuration. On the left, the host sees one target, but that target has three LUNs that can be used. Each LUN represents an individual storage volume. On the right, the host sees three different targets, each having one LUN.
Examining LUNs with esxcli storage core
You can use esxcli storage core to display information about available LUNs on ESXi 5.0.
You can run one of the following commands to examine LUNs. Specify one of the connection options listed in "Connection Options for vCLI Host Management Commands," on page 19 in place of <conn_options>.
- List all logical devices known on this system with detailed information.
  esxcli <conn_options> storage core device list
  The command lists device information for all logical devices on this system. The information includes the name (UUID), device type, display name, and multipathing plugin. Specify the --device option to only list information about a specific device. See "Storage Device Naming," on page 44 for background information.
naa.5000c50037b3967e
Display Name: <name> (naa.5000c50037b3967e)
Has Settable Display Name: true
Size: 953869
Device Type: Direct-Access
...
naa.500000e014e7a4e0
Display Name: <name> (naa.500000e014e7a4e0)
Has Settable Display Name: true
Size: 70007
Device Type: Direct-Access
...
mpx.vmhba0:C0:T0:L0
Display Name: Local <name> CD-ROM (mpx.vmhba0:C0:T0:L0)
Has Settable Display Name: false
Size: 0
Device Type: CD-ROM
- List a specific logical device with its detailed information.
  esxcli <conn_options> storage core device list -d mpx.vmhba32:C0:T1:L0
- List all device unique identifiers.
  esxcli <conn_options> storage core device list
  The command lists the primary UID for each device, such as naa.xxx or other primary name, and any other UIDs for each UID (VML name). You can specify --device to only list information for a specific device.
- Print mappings for VMFS volumes to the corresponding partition, path to that partition, VMFS UUID, extent number, and volume names.
  esxcli <conn_options> storage filesystem list
- Print HBA devices with identifying information.
  esxcli <conn_options> storage core adapter list
  The return value includes adapter and UID information.
- Print a mapping between HBAs and the devices they provide paths to.
  esxcli <conn_options> storage core path list
Examining LUNs with vicfg-scsidevs
You can use vicfg-scsidevs to display information about available LUNs on ESXi 4.x hosts.
IMPORTANT You can run vicfg-scsidevs --query and vicfg-scsidevs --vmfs against ESXi version 3.5. The other options are supported only against ESXi version 4.0 and later.
You can run one of the following commands to examine LUNs. Specify one of the connection options listed in "Connection Options for vCLI Host Management Commands," on page 19 in place of <conn_options>.
- List all logical devices known on this system with detailed information.
  vicfg-scsidevs <conn_options> --list
  The command lists device information for all logical devices on this system. The information includes the name (UUID), device type, display name, and multipathing plugin. Specify the --device option to only list information about a specific device. The following example shows output for two devices. The actual listing might include multiple devices, and the precise format differs between releases.
mpx.vmhba2:C0:T1:L0
Device Type: cdrom
Size: 0 MB
Display Name: Local HL-DT-ST (mpx.vmhba2:C0:T1:L0)
Display Name: DGC Fibre Channel Disk (naa.60060...)
...
- List all logical devices with abbreviated information.
  vicfg-scsidevs <conn_options> --compact-list
  The information includes the device ID, device type, size, plugin, and device display name.
- List all device unique identifiers.
  vicfg-scsidevs <conn_options> --uids
  The command lists the primary UID for each device, such as naa.xxx or other primary name, and any other UIDs for each UID (VML name). You can specify --device to only list information for a specific device.
- List a specific logical device with its detailed information.
  vicfg-scsidevs <conn_options> -l -d mpx.vmhba32:C0:T1:L0
- Print mappings for VMFS volumes to the corresponding partition, path to that partition, VMFS UUID, extent number, and volume names.
  vicfg-scsidevs <conn_options> --vmfs
- Print HBA devices with identifying information.
  vicfg-scsidevs <conn_options> --hbas
  The return value includes the adapter ID, driver ID, adapter UID, PCI, vendor, and model.
- Print a mapping between HBAs and the devices they provide paths to.
  vicfg-scsidevs <conn_options> --hba-device-list
Detach a Device and Remove a LUN
Before you can remove a LUN, you must detach the corresponding device by using the vSphere Web Client,
or the esxcli storage core device set command.
Detaching a device brings a device offline. Detaching a device does not impact path states. If the LUN is still visible, the path state is not set to dead.
Prerequisites
- Make sure you are familiar with virtual machine migration. See the vCenter Server and Host Management documentation.
- Make sure you are familiar with datastore mounting and unmounting. See "Mount a Datastore with ESXCLI," on page 32.
Procedure
1. Migrate virtual machines from the device you plan to detach.
2. Unmount the datastore deployed on the device.
   If the unmount fails, ESXCLI returns an error. If you ignore that error, you will get an error when you attempt to detach a device with a VMFS partition still in use.
3. If the unmount failed, check whether the device is in use.
   esxcli storage core device world list -d <device>
   If a VMFS volume is using the device indirectly, the world name includes the string idle0. If a virtual machine uses the device as an RDM, the virtual machine process name is displayed. If any other process is using the raw device, the information is displayed.
4. Detach the storage device.
   esxcli storage core device set -d naa.xxx... --state=off
   Detach is persistent across reboots and device unregistration. Any device that is detached remains detached until a manual attach operation. Rescan does not bring persistently detached devices back online. A persistently detached device comes back in the off state.
   ESXi maintains the persistent information about the device's offline state even if the device is unregistered. You can remove the device information by running esxcli storage core device detached remove -d naa.12.
5. (Optional) To troubleshoot the detach operation, list all devices that were detached manually.
   esxcli storage core device detached list
6. Perform a rescan.
   esxcli <conn_options> storage core adapter rescan
Reattach a Device
When you have completed storage reconfiguration, you can reattach the storage device, mount the datastore, and restart the virtual machines.
Prerequisites
Make sure you are familiar with datastore mounting. See "Mounting Datastores with Existing Signatures," on page 32.
Procedure
1. (Optional) Check whether the device is detached.
   esxcli storage core device detached list
2. Attach the device.
   esxcli storage core device set -d naa.XXX --state=on
3. Mount the datastore and restart virtual machines.
Working with Permanent Device Loss
In some cases a permanent device loss (PDL) might occur.
With earlier ESXi releases, an APD (All Paths Down) event results when the LUN becomes unavailable. The event is difficult for administrators because they do not have enough information about the state of the LUN to know which corrective action is appropriate.
In ESXi 5.0, the ESXi host can determine whether the cause of an APD event is temporary, or whether the cause is PDL. A PDL status occurs when the storage array returns SCSI sense codes indicating that the LUN is no longer available or that a severe, unrecoverable hardware problem exists with it. ESXi has an improved infrastructure that can speed up operations of upper-layer applications in a device loss scenario.
IMPORTANT Do not plan for APD or PDL events, for example, when you want to upgrade your hardware. Instead, perform an orderly removal of LUNs from your ESXi server, which is described in "Detach a Device and Remove a LUN," on page 48, perform the operation, and add the LUN back.
Removing a PDL LUN
How you remove a PDL LUN depends on whether it was in use.
- If the LUN that goes into PDL is not in use by any user process or by the VMkernel, the LUN disappears by itself after a PDL.
- If the LUN was in use when it entered PDL, delete the LUN manually by following the process described in "Detach a Device and Remove a LUN," on page 48.
Reattach a PDL LUN
You can reattach a PDL LUN after it has been removed.
Procedure
1. Return the LUN to working order.
2. Remove any users of the device.
   You cannot bring a device back without removing active users. The ESXi host cannot know whether the device that was added back has changed. ESXi must be able to treat the device similarly to a new device being discovered.
3. Perform a rescan to get the device back in working order.
Managing Paths
To maintain a constant connection between an ESXi host and its storage, ESXi supports multipathing. With
multipathing you can use more than one physical path for transferring data between the ESXi host and the
external storage device.
In case of failure of an element in the SAN network, such as an HBA, switch, or cable, the ESXi host can fail over to another physical path. On some devices, multipathing also offers load balancing, which redistributes I/O loads between multiple paths to reduce or eliminate potential bottlenecks.
The storage architecture in vSphere 4.0 and later supports a special VMkernel layer, Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs). You can manage PSA using ESXCLI commands. See Chapter 6, "Managing Third-Party Storage Arrays," on page 101. This section assumes you are using only PSA plug-ins included in vSphere by default.
Multipathing with Local Storage and FC SANs
Multipathing is a technique that lets you use more than one physical path that transfers data between the
host and an external storage device.
In a simple multipathing local storage topology, you can use one ESXi host with two HBAs. The ESXi host connects to a dual-port local storage system through two cables. This configuration ensures fault tolerance if one of the connection elements between the ESXi host and the local storage system fails.
To support path switching with FC SAN, the ESXi host typically has two HBAs available from which the storage array can be reached through one or more switches. Alternatively, the setup can include one HBA and two storage processors so that the HBA can use a different path to reach the disk array.
In FC Multipathing, multiple paths connect each host with the storage device. For example, if HBA1 or the link between HBA1 and the switch fails, HBA2 takes over and provides the connection between the server and the switch. The process of one HBA taking over for another is called HBA failover.
Figure 4‑4. FC Multipathing
If SP1 or the link between SP1 and the switch breaks, SP2 takes over and provides the connection between
the switch and the storage device. This process is called SP failover. ESXi multipathing supports HBA and
SP failover.
After you have set up your hardware to support multipathing, you can use the vSphere Web Client or vCLI commands to list and manage paths. You can perform the following tasks.
- List path information with vicfg-mpath or esxcli storage core path. See "Listing Path Information," on page 51.
- Change path state with vicfg-mpath or esxcli storage core path. See "Changing the State of a Path," on page 53.
- Change path policies with ESXCLI. See "Set Policy Details for Devices that Use Round Robin," on page 56.
- Mask paths with ESXCLI. See the vSphere Storage documentation.
- Manipulate the rules that match paths to multipathing plugins to newly discovered devices with esxcli claimrule. See "Managing Claim Rules," on page 110.
- Run or rerun claim rules or unclaim paths. See "Managing Claim Rules," on page 110.
- Rescan with vicfg-rescan. See "Scanning Storage Adapters," on page 66.
Listing Path Information
You can list path information with ESXCLI or with vicfg-mpath.
Listing Path Information with ESXCLI
You can run esxcli storage core path to display information about Fibre Channel or iSCSI LUNs.
IMPORTANT Use industry-standard device names, with format eui.xxx or naa.xxx, to ensure consistency. Do not use VML LUN names unless device names are not available.
Names of virtual machine HBAs are not guaranteed to be valid across reboots.
You can display information about paths by running esxcli storage core path. Specify one of the options
listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of
<conn_options>.
- List all devices with their corresponding paths, state of the path, adapter type, and other information.
  esxcli <conn_options> storage core path list
- Limit the display to only a specified path or device.
  esxcli <conn_options> storage core path list --path <path>
  esxcli <conn_options> storage core path list --device <device>
- List the statistics for the SCSI paths in the system. You can list all paths or limit the display to a specific path.
  esxcli <conn_options> storage core path stats get
  esxcli <conn_options> storage core path stats get --path <path>
- List detailed information for the paths for the device specified with --device.
  esxcli <conn_options> storage core path list -d <naa.xxxxxx>
- List all adapters.
  esxcli <conn_options> storage core adapter list
- Rescan all adapters.
  esxcli <conn_options> storage core adapter rescan
Listing Path Information with vicfg-mpath
You can run vicfg-mpath to list information about Fibre Channel or iSCSI LUNs.
IMPORTANT Use industry-standard device names, with format eui.xxx or naa.xxx, to ensure consistency. Do not use VML LUN names unless device names are not available.
Names of virtual machine HBAs are not guaranteed to be valid across reboots.
You can display information about paths by running vicfg-mpath with one of the following options. Specify
one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in
place of <conn_options>.
- List all devices with their corresponding paths, state of the path, adapter type, and other information.
  vicfg-mpath <conn_options> --list-paths
- Display a short listing of all paths.
  vicfg-mpath <conn_options> --list-compact
- List all paths with adapter and device mappings.
  vicfg-mpath <conn_options> --list-map
- List paths and detailed information by specifying the path UID (long path). The path UID is the first attribute displayed for each path in the detailed path listing.
  vicfg-mpath <conn_options> -l -P <path_UID>
- List paths and detailed information by specifying the path runtime name.
  vicfg-mpath <conn_options> -l -P vmhba32:C0:T0:L0
  The return information includes the runtime name, device, device display name, adapter, adapter identifier, target identifier, plugin, state, transport, and adapter and target transport details.
- List detailed information for the paths for the device specified with --device.
  vicfg-mpath <conn_options> -l -d <device>
Changing the State of a Path
You can change the state of a path with ESXCLI or with vicfg-mpath.
Disable a Path with ESXCLI
You can temporarily disable a path with ESXCLI for maintenance or other reasons, and enable the path
when you need it again.
If you are changing a path's state, the change operation fails if I/O is active when the path setting is changed. Reissue the command. You must issue at least one I/O operation before the change takes effect.
Specify one of the options listed in "Connection Options for vCLI Host Management Commands," on page 19 in place of <conn_options>.
Procedure
1. (Optional) List all devices and corresponding paths.
   esxcli <conn_options> storage core path list
   The display includes information about each path's state.
2. Set the state of a LUN path to off.
   esxcli <conn_options> storage core path set --state off --path vmhba32:C0:T1:L0
What to do next
When you are ready, set the path state to active again.
esxcli <conn_options> storage core path set --state active --path vmhba32:C0:T1:L0
Disable a Path with vicfg-mpath
You can temporarily disable a path with vicfg-mpath for maintenance or other reasons, and enable the path
when you need it again.
If you are changing a path's state, the change operation fails if I/O is active when the path setting is changed. Reissue the command. You must issue at least one I/O operation before the change takes effect.
Specify one of the options listed in "Connection Options for vCLI Host Management Commands," on page 19 in place of <conn_options>.
Procedure
1. (Optional) List all devices and corresponding paths.
   vicfg-mpath <conn_options> --list-paths
   The display includes information about each path's state.
2. Set the state of a LUN path to off.
   vicfg-mpath <conn_options> --state off --path vmhba32:C0:T1:L0
What to do next
When you are ready, set the path state to active again.
vicfg-mpath <conn_options> --state active --path vmhba32:C0:T1:L0
Managing Path Policies
For each storage device managed by NMP, and not PowerPath, an ESXi host uses a path selection policy. If
you have a third-party PSP installed on your host, its policy also appears on the list.
Supported Path Policies
The following path policies are supported by default.

VMW_PSP_FIXED
The host uses the designated preferred path, if it has been configured. Otherwise, the host selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it through the vSphere Web Client, or by using esxcli storage nmp psp fixed deviceconfig set. See "Changing Path Policies," on page 55.
The default policy for active-active storage devices is VMW_PSP_FIXED.
NOTE If the host uses a default preferred path and the path's status turns to Dead, a new path is selected as preferred. However, if you explicitly designate the preferred path, it will remain preferred even when it becomes inaccessible.

VMW_PSP_MRU
The host selects the path that it used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices.
The VMW_PSP_MRU ranking capability allows you to assign ranks to individual paths. To set ranks to individual paths, use the esxcli storage nmp psp generic pathconfig set command. For details, see the VMware knowledge base article 2003468.

VMW_PSP_RR
The host uses an automatic path selection algorithm that rotates through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays. Automatic path selection implements load balancing across the physical paths available to your host. Load balancing is the process of spreading I/O requests across the paths. The goal is to optimize throughput performance such as I/O per second, megabytes per second, or response times.
VMW_PSP_RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs.
Path Policy Effects
The type of array and the path policy determine the behavior of the host.
Policy: Most Recently Used
  Active/Active Array: Administrator action is required to fail back after path failure.
  Active/Passive Array: Administrator action is required to fail back after path failure.
Policy: Fixed
  Active/Active Array: VMkernel resumes using the preferred path when connectivity is restored.
  Active/Passive Array: VMkernel attempts to resume by using the preferred path. This action can cause path thrashing or failure when another SP now owns the LUN.
Policy: Round Robin
  Active/Active Array: No fail back.
  Active/Passive Array: Next path in round robin scheduling is selected.
Multipathing Considerations
You should consider a number of key points when working with multipathing.
The following considerations help you with multipathing.
- If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
- When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched. If no match occurs, NMP selects a default SATP for the device.
- If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device. The device is claimed by the default SATP based on the device's transport type.
- The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if there is no active/optimized path. This path is used until a better path is available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
- While VMW_PSP_MRU is typically selected for ALUA arrays by default, certain ALUA storage arrays need to use VMW_PSP_FIXED. To check whether your storage array requires VMW_PSP_FIXED, see the VMware Compatibility Guide or contact your storage vendor. When using VMW_PSP_FIXED with ALUA arrays, unless you explicitly specify a preferred path, the ESXi host selects the most optimal working path and designates it as the default preferred path. If the host selected path becomes unavailable, the host selects an alternative available path. However, if you explicitly designate the preferred path, it remains preferred no matter what its status is.
- By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices.
Changing Path Policies
You can change path policies with ESXCLI or with vicfg-mpath.
Change the Path Policy with ESXCLI
You can change the path policy with ESXCLI.
Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on
page 19 in place of <conn_options>.
Prerequisites
Verify that you are familiar with the supported path policies. See “Managing Path Policies,” on page 54.
Procedure
1. Ensure your device is claimed by the NMP plug-in. Only NMP devices allow you to change the path policy.
   esxcli <conn_options> storage nmp device list
2. Retrieve the list of path selection policies on the system to see which values are valid for the --psp option when you set the path policy.
   esxcli storage core plugin registration list --plugin-class="PSP"
3. Set the path policy by using ESXCLI.
   esxcli <conn_options> storage nmp device set --device naa.xxx --psp VMW_PSP_RR
4. (Optional) If you specified the VMW_PSP_FIXED policy, you must make sure the preferred path is set correctly.
   a. Check which path is the preferred path for a device.
      esxcli <conn_options> storage nmp psp fixed deviceconfig get --device naa.xxx
   b. If necessary, change the preferred path.
      esxcli <conn_options> storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba3:C0:T5:L3
      The command sets the preferred path to vmhba3:C0:T5:L3.
Set Policy Details for Devices that Use Round Robin
ESXi hosts can use multipathing for failover. With some storage devices, ESXi hosts can also use
multipathing for load balancing.
To achieve better load balancing across paths, administrators can specify that the ESXi host should switch paths under specific circumstances. Different options determine when the ESXi host switches paths and what paths are chosen. Only a limited number of storage arrays support round robin.
You can use esxcli storage nmp psp roundrobin to retrieve and set round robin path options on a device controlled by the roundrobin PSP. Specify one of the options listed in "Connection Options for vCLI Host Management Commands," on page 19 in place of <conn_options>.
No vicfg- command exists for performing the operations. The ESXCLI commands for setting round robin path options have changed. The commands supported in ESXi 4.x are no longer supported.
Procedure
1. Retrieve path selection settings for a device that is using the roundrobin PSP.
   esxcli <conn_options> storage nmp psp roundrobin deviceconfig get --device naa.xxx
2. Set the path selection. You can specify when the path should change, and whether unoptimized paths should be included.
   - Use --bytes or --iops to specify when the path should change, as in the following examples.
     esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type=bytes --bytes 12345 --device naa.xxx
     Sets the device specified by --device to switch to the next path each time 12345 bytes have been sent along the current path.
     esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type=iops --iops 4200 --device naa.xxx
     Sets the device specified by --device to switch after 4200 I/O operations have been performed on a path.
   - Use --useano to specify that the round robin PSP should include paths in the active, unoptimized state in the round robin set (1), or that the PSP should use active, unoptimized paths only if no active optimized paths are available (0). If you do not include this option, the PSP includes only active optimized paths in the round robin path set.
Scheduling Queues for Virtual Machine I/O
You can use ESXCLI to enable or disable per-file I/O scheduling.
By default, vSphere provides a mechanism that creates scheduling queues for each virtual machine file. Each file has individual bandwidth controls. This mechanism ensures that the I/O for a particular virtual machine goes into its own separate queue and does not interfere with the I/O of other virtual machines.
This capability is enabled by default. You can turn it off by using the esxcli system settings kernel set -s isPerFileSchedModelActive option.
- Run esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE to disable per-file scheduling.
- Run esxcli system settings kernel set -s isPerFileSchedModelActive -v TRUE to enable per-file scheduling.
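As a sketch, you can verify the current value before and after changing it; the list command and its --option filter are standard ESXCLI, but confirm that the option name matches your ESXi version.
esxcli <conn_options> system settings kernel list --option isPerFileSchedModelActive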
Managing NFS/NAS Datastores
ESXi hosts can access a designated NFS volume located on a NAS (Network Attached Storage) server, can mount the volume, and can use it for its storage needs. You can use NFS volumes to store and boot virtual machines in the same way that you use VMFS datastores.
Capabilities Supported by NFS/NAS
An NFS client built into the ESXi hypervisor uses the Network File System (NFS) protocol over TCP/IP to
access a designated NFS volume that is located on a NAS server. The ESXi host can mount the volume and
use it for its storage needs.
vSphere supports versions 3 and 4.1 of the NFS protocol.
Typically, the NFS volume or directory is created by a storage administrator and is exported from the NFS server. The NFS volume does not need to be formatted with a local file system, such as VMFS. You can mount the volume directly on ESXi hosts, and use it to store and boot virtual machines in the same way that you use VMFS datastores.
In addition to storing virtual disks on NFS datastores, you can also use NFS as a central repository for ISO images, virtual machine templates, and so on. If you use the datastore for ISO images, you can connect the virtual machine's CD-ROM device to an ISO file on the datastore and install a guest operating system from the ISO file.
ESXi hosts support the following shared storage capabilities on NFS volumes.
- VMware vMotion and Storage vMotion
- High Availability (HA), Fault Tolerance, and Distributed Resource Scheduler (DRS)
- ISO images, which are presented as CD-ROMs to virtual machines
- Virtual machine snapshots
- Host profiles
- Virtual machines with large capacity virtual disks, or disks greater than 2 TB. Virtual disks created on NFS datastores are thin-provisioned by default, unless you use hardware acceleration that supports the Reserve Space operation. See Hardware Acceleration on NAS Devices in the vSphere Storage documentation.
To use NFS as a shared repository, you create a directory on the NFS server and then mount the directory as a datastore on all hosts.
Adding and Deleting NAS File Systems
You can list, add, and delete a NAS file system with ESXCLI or with vicfg-nas.
Manage a NAS File System with ESXCLI
You can use ESXCLI as a vCLI command with connection options or in the ESXi Shell.
For more information on connection options, see “Connection Options for vCLI Host Management
Commands,” on page 19.
Procedure
Procedure
1. List all known NAS file systems.
   esxcli <conn_options> storage nfs list
   For each NAS file system, the command lists the mount name, share name, and host name and whether the file system is mounted. If no NAS file systems are available, the system does not return a NAS file system and returns to the command prompt.
2. Add a new NAS file system to the ESXi host.
   Specify the NAS server with --host, the volume to use for the mount with --volume-name, and the share name on the remote system to use for this NAS mount point with --share.
   esxcli <conn_options> storage nfs add --host=esx42nas1 --share=/home --volume-name=FileServerHome1
   This command adds an entry to the known NAS file system list and supplies the share name of the new NAS file system. You must supply the host name, share name, and volume name for the new NAS file system.
3. Add a second NAS file system with read-only access.
   esxcli <conn_options> storage nfs add --host=esx42nas2 --share=/home --volume-name=FileServerHome2 --readonly
4. Delete one of the NAS file systems.
   esxcli <conn_options> storage nfs remove --volume-name=FileServerHome1
   This command unmounts the NAS file system and removes it from the list of known file systems.
Manage a NAS File System with vicfg-nas
You can use vicfg-nas as a vCLI command with connection options.
For more information on connection options, see “Connection Options for vCLI Host Management
Commands,” on page 19.
Procedure
1. List all known NAS file systems.
   vicfg-nas <conn_options> -l
   For each NAS file system, the command lists the mount name, share name, and host name and whether the file system is mounted. If no NAS file systems are available, the system returns a No NAS datastore found message.
2. Add a new NAS file system to the ESXi host.
   vicfg-nas <conn_options> -a --nasserver esx42nas1 -s /home FileServerHome1
   This command adds an entry to the known NAS file system list and supplies the share name of the new NAS file system. You must supply the host name and the share name for the new NAS file system.
3. Add a second NAS file system with read-only access.
   vicfg-nas <conn_options> -a -y --nasserver esx42nas2 -s /home FileServerHome2
4. Delete one of the NAS file systems.
   vicfg-nas <conn_options> -d FileServerHome1
   This command unmounts the NAS file system and removes it from the list of known file systems.
Monitor and Manage FibreChannel SAN Storage
The esxcli storage san commands help administrators troubleshoot issues with I/O devices and fabric, and include Fibre Channel, FCoE, iSCSI, and SAS protocol statistics.
The commands allow you to retrieve device information and I/O statistics from those devices. You can also issue Loop Initialization Primitives (LIP) to FC/FCoE devices and you can reset SAS devices.
For FC and FCoE devices, you can retrieve FC events such as RSCN, LINKUP, LINKDOWN, Frame Drop, and FCoE CVL. The commands log a warning in the VMkernel log if they encounter too many link toggling events or frame drops.
The following example examines and resets SAN storage through a Fibre Channel adapter. Instead of fc, the information retrieval commands can also use iscsi, fcoe, and sas.
Procedure
1. List adapter attributes.
   esxcli storage san fc list
2. Retrieve all events for a Fibre Channel I/O device.
   esxcli storage san fc events get
3. Clear all I/O Device Management events for the specified adapter.
   esxcli storage san fc events clear --adapter <adapter>
4. Reset the adapter.
   esxcli storage san fc reset
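As noted above, the same retrieval commands apply to the other transports. A minimal sketch, substituting fcoe for fc; the output depends on the adapters present on the host.
esxcli storage san fcoe list
esxcli storage san fcoe events get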
Monitoring and Managing Virtual SAN Storage
Virtual SAN is a distributed layer of software that runs natively as a part of the ESXi hypervisor. Virtual SAN aggregates local or direct-attached storage disks of a host cluster and creates a single storage pool shared across all hosts of the cluster.
While supporting VMware features that require shared storage, such as HA, vMotion, and DRS, Virtual SAN eliminates the need for external shared storage and simplifies storage configuration and virtual machine provisioning activities.
You can use ESXCLI commands to retrieve Virtual SAN information, manage Virtual SAN clusters, perform network management, add storage, set the policy, and perform other monitoring and management tasks. Type esxcli vsan --help for a complete list of commands.
Retrieve Virtual SAN Information
You can use ESXCLI commands to retrieve Virtual SAN information.
Procedure
1. Verify which VMkernel adapters are used for Virtual SAN communication.
   esxcli vsan network list
2. List storage disks that were claimed by Virtual SAN.
   esxcli vsan storage list
3. Get Virtual SAN cluster information.
   esxcli vsan cluster get
Manage a Virtual SAN Cluster
You can activate Virtual SAN when you create host clusters or enable Virtual SAN on existing clusters.
When enabled, Virtual SAN aggregates all local storage disks available on the hosts into a single datastore
shared by all hosts.
You can run these commands in the ESXi Shell for a host, or the command affects the target host that you specify as part of the vCLI connection options.
Procedure
1. Join the target host to a given Virtual SAN cluster.
   esxcli vsan cluster join --cluster-uuid <uuid>
   NOTE The UUID of the cluster is required.
2. Verify that the target host is joined to a Virtual SAN cluster.
   esxcli vsan cluster get
3. Remove the target host from the Virtual SAN cluster.
   esxcli vsan cluster leave
Add and Remove Virtual SAN Storage
You can use ESXCLI commands to add and remove Virtual SAN storage.
Procedure
1. Add an HDD or data disk for use by Virtual SAN.
   esxcli vsan storage add --disks <device_name>
   NOTE The command expects an empty disk, which will be partitioned or formatted. Specify a device name, for example, mpx.vmhba2:C0:T1:L0.
2. Add an SSD disk for use by Virtual SAN.
   esxcli vsan storage add --ssd <device_name>
   NOTE The command expects an empty disk, which will be partitioned or formatted. Specify a device name, for example, mpx.vmhba2:C0:T1:L0.
3. List the Virtual SAN storage configuration. You can display the complete list, or filter to show only a single device.
   esxcli vsan storage list --device <device>
4. Remove disks or disk groups.
   NOTE You can remove disks or disk groups only when Virtual SAN is in manual mode. For the automatic disk claim mode, the remove action is not supported.
   - Remove an individual Virtual SAN disk.
     esxcli vsan storage remove --disk <device_name>
     Instead of specifying the device name, you can specify the UUID if you include the --uuid option.
   - Remove a disk group's SSD and each of its backing HDD drives from Virtual SAN usage.
     esxcli vsan storage remove --ssd <device_name>
     Instead of specifying the device name, you can specify the UUID if you include the --uuid option.
     Any SSD that you remove from Virtual SAN becomes available for such features as Flash Read Cache.
Monitoring vSphere Flash Read Cache
Flash Read Cache™ lets you accelerate virtual machine performance through the use of host-resident flash devices as a cache.
The vSphere Storage documentation discusses vSphere Flash Read Cache in some detail.
You can reserve a Flash Read Cache for any individual virtual disk. The Flash Read Cache is created only when a virtual machine is powered on, and it is discarded when a virtual machine is suspended or powered off. When you migrate a virtual machine, you have the option to migrate the cache. By default the cache is migrated if the virtual flash module on the source and destination hosts are compatible. If you do not migrate the cache, the cache is rewarmed on the destination host. You can change the size of the cache while a virtual machine is powered on. In this instance, the existing cache is discarded and a new write-through cache is created, which results in a cache warm-up period. The advantage of creating a new cache is that the cache size can better match the application's active data.
Flash Read Cache supports write-through or read caching. Write-back or write caching are not supported. Data reads are satisfied from the cache, if present. Data writes are dispatched to the backing storage, such as a SAN or NAS. All data that is read from or written to the backing storage is unconditionally stored in the cache.
NOTE Not all workloads benefit from a Flash Read Cache. The performance boost depends on your workload pattern and working set size. Read-intensive workloads with working sets that fit into the cache can benefit from a Flash Read Cache configuration. By configuring Flash Read Cache for your read-intensive workloads, additional I/O resources become available on your shared storage, which can result in a performance increase for other workloads even though they are not configured to use Flash Read Cache.
You can manage vSphere Flash Read Cache from the vSphere Web Client. You can monitor Flash Read Cache by using commands in the esxcli storage vflash namespace. See the vSphere Command-Line Interface Reference or the online help for a list of options to each command.
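For example, the following listing commands from the esxcli storage vflash namespace show the virtual flash modules, devices, and configured caches on a host. This is a sketch; confirm command availability for your ESXi version.
esxcli <conn_options> storage vflash module list
esxcli <conn_options> storage vflash device list
esxcli <conn_options> storage vflash cache list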
Monitoring and Managing Virtual Volumes
The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays.
With Virtual Volumes, an individual virtual machine, not the datastore, becomes a unit of storage management, while storage hardware gains complete control over virtual disk content, layout, and management. The vSphere Storage documentation discusses Virtual Volumes in some detail and explains how to manage them by using the vSphere Web Client.
The following ESXCLI commands are available for displaying information about virtual volumes and for unbinding all Virtual Volumes from all vendor providers. See the vSphere Storage documentation for information on creating Virtual Volumes and configuring multipathing and SCSI-based endpoints.
Table 4-2. VVol Commands
storage vvol daemon unbindall - Unbinds all Virtual Volume instances from all storage providers that are known to the ESXi host.
storage vvol protocolendpoint list - Lists the VVol protocol endpoints currently known to the ESXi host.
storage vvol storagecontainer list - Lists the VVol storage containers currently known to the ESXi host.
storage vvol storagecontainer restore - Restores storage containers of vendor providers that are registered on the host.
storage vvol vasacontext get - Gets the VASA context (VC UUID).
storage vvol vendorprovider list - Lists the vendor providers registered on the host.
storage vvol vendorprovider restore - Restores the vendor providers that are registered on the host.
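As a usage sketch, the listing commands take the standard connection options and no additional arguments, for example:
esxcli <conn_options> storage vvol storagecontainer list
esxcli <conn_options> storage vvol vasacontext get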
Migrating Virtual Machines with svmotion
Storage vMotion moves a virtual machine's configuration file, and, optionally, its disks, while the virtual machine is running. You can perform Storage vMotion tasks from the vSphere Web Client or with the svmotion command.
IMPORTANT No ESXCLI command for Storage vMotion is available.
You can place the virtual machine and all of its disks in a single location, or choose separate locations for the virtual machine configuration file and each virtual disk. You cannot change the virtual machine's execution host during a migration with svmotion.
Storage vMotion Uses
Storage vMotion has several uses in administering your vSphere environment.
- Upgrade ESXi without virtual machine downtime in situations where virtual machine disks must be moved to shared storage to allow migration with vMotion.
- Perform storage maintenance and reconfiguration. You can use Storage vMotion to move virtual machines off a storage device to allow maintenance or reconfiguration of the storage device without virtual machine downtime.
- Redistribute storage load. You can use Storage vMotion to manually redistribute virtual machines or virtual disks to different storage volumes to balance capacity or improve performance.
Storage vMotion Requirements and Limitations
You can migrate virtual machine disks with Storage vMotion if the virtual machine and its host meet specific resource and configuration requirements.
To migrate virtual machine disks with Storage vMotion, the virtual machine and its host must meet the following requirements.
- For ESXi 5.0 and later hosts, you can migrate virtual machines that have snapshots. For earlier versions of ESXi, you cannot migrate virtual machines that have snapshots.
- Virtual machine disks must be in persistent mode or be raw device mappings (RDMs). For physical and virtual compatibility mode RDMs, you can migrate the mapping file only. For virtual compatibility mode RDMs, you can use the vSphere Web Client to convert to thick-provisioned or thin-provisioned disks during migration as long as the destination is not an NFS datastore. You cannot use the svmotion command to perform this conversion.
- The host on which the virtual machine is running must have a license that includes Storage vMotion.
- The host on which the virtual machine is running must have access to both the source and target datastores.
- A particular host can be involved in up to four migrations with vMotion or Storage vMotion at one time. See Limits on Simultaneous Migrations in the vCenter Server and Host Management documentation for details.
If you use the vSphere Web Client for migration with svmotion, the system performs several compatibility checks. These checks are not supported by the svmotion vCLI command.
Running svmotion in Interactive Mode
You can run svmotion in interactive mode by using the --interactive option. The command prompts you for the information it needs to complete the storage migration.
In interactive mode, the svmotion command uses the following syntax.
svmotion <conn_options> --interactive
When you use --interactive, all other options are ignored.
IMPORTANT When responding to the prompts, use quotes around input strings with special characters.
Running svmotion in Noninteractive Mode
You can run svmotion in noninteractive mode if you do not use the --interactive option.
IMPORTANT When you run svmotion, --server must point to a vCenter Server system.
In noninteractive mode, the svmotion command uses the following syntax.
svmotion <conn_options> --datacenter=<datacenter_name> --vm=<virtual machine config datastore path>:<new datastore>
[--disks <virtual disk datastore path>:<new datastore>,
<virtual disk datastore path>:<new datastore>]
Square brackets indicate optional elements, not datastore names.
The --vm option specifies the virtual machine and its destination. By default, all virtual disks are relocated to the same datastore as the virtual machine. This option requires the current virtual machine configuration file location. See “Determine the Path to the Virtual Machine Configuration File and Disk File,” on page 64.
The --disks option relocates individual virtual disks to different datastores. The --disks option requires the current virtual disk datastore path as an option. See “Determine the Path to the Virtual Machine Configuration File and Disk File,” on page 64.
Determine the Path to the Virtual Machine Configuration File and Disk File
To use the --vm option, you need the current virtual machine configuration file location.
Procedure
1. Run vmware-cmd -l to list all virtual machine configuration files (VMX files).
NOTE The example is for Linux. Use double quotes on Windows. The square brackets surround the datastore name and do not indicate an optional element.
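A command along the following lines performs the migration described next; the datacenter name, datastore names, and file names are illustrative.

svmotion <conn_options> --datacenter=DC1 --vm="[old_datastore] myvm/myvm.vmx:new_datastore" --disks="[old_datastore] myvm/myvm_1.vmdk:old_datastore, [old_datastore] myvm/myvm_2.vmdk:old_datastore"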
This command relocates the virtual machine's configuration file to new_datastore, but leaves the two disks, myvm_1.vmdk and myvm_2.vmdk, in old_datastore.
Configuring FCoE Adapters
ESXi can use Fibre Channel over Ethernet (FCoE) adapters to access Fibre Channel storage.
The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.
To use FCoE, you need to install FCoE adapters. The adapters that VMware supports generally fall into two categories, hardware FCoE adapters and software FCoE adapters.
- Hardware FCoE adapters include completely offloaded specialized Converged Network Adapters (CNAs) that contain network and Fibre Channel functionalities on the same card. When such an adapter is installed, your host detects and can use both CNA components. In the vSphere Web Client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as an FCoE adapter (vmhba). You do not have to configure a hardware FCoE adapter to be able to use it.
- A software FCoE adapter is software code that performs some of the FCoE processing. The adapter can be used with a number of NICs that support partial FCoE offload. Unlike the hardware FCoE adapter, the software adapter must be activated.
Scanning Storage Adapters
You must perform a rescan operation each time you reconfigure your storage setup.
You can scan by using the vSphere Web Client, the vicfg-rescan vCLI command, or the esxcli storage core adapter rescan command.
- esxcli storage core adapter rescan supports the following additional options.
  - -a|--all or -A|--adapter=<string> – Scan all adapters or a specified adapter.
  - -S|--skip-claim – Skip claiming of new devices by the appropriate multipath plug-in.
  - -F|--skip-fs-scan – Skip filesystem scan.
  - -t|--type – Specify the type of scan to perform. The command either scans for all changes (all) or for added, deleted, or updated adapters (add, delete, update).
- vicfg-rescan supports only a simple rescan operation on a specified adapter.
Rescanning a Storage Adapter with ESXCLI
The following command scans a specific adapter (vmhba1 in this example) and skips the filesystem scan that is performed by default.
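esxcli <conn_options> storage core adapter rescan --adapter vmhba1 --skip-fs-scan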
The command returns an indication of success or failure, but no detailed information.
Rescanning a Storage Adapter with vicfg-rescan
Run vicfg-rescan, specifying the adapter name.
vicfg-rescan <conn_options> vmhba1
The command returns an indication of success or failure, but no detailed information.
Retrieving SMART Information
You can use ESXCLI to retrieve information related to SMART. SMART is a monitoring system for computer hard disks that reports information about the disks.
You can use the following example syntax to retrieve SMART information.
esxcli storage core device smart get -d device
What the command returns depends on the level of SMART information that the device supports. If no information is available for a parameter, the output displays N/A.
Managing iSCSI Storage
ESXi systems include iSCSI technology to access remote storage using an IP network. You can use the vSphere Web Client, commands in the esxcli iscsi namespace, or the vicfg-iscsi command to configure both hardware and software iSCSI storage for your ESXi system.
See the vSphere Storage documentation for additional information.
This chapter includes the following topics:
- “iSCSI Storage Overview,” on page 69
- “Protecting an iSCSI SAN,” on page 71
- “Command Syntax for esxcli iscsi and vicfg-iscsi,” on page 73
- “iSCSI Storage Setup with ESXCLI,” on page 78
- “iSCSI Storage Setup with vicfg-iscsi,” on page 84
- “Listing and Setting iSCSI Options,” on page 89
- “Listing and Setting iSCSI Parameters,” on page 90
- “Enabling iSCSI Authentication,” on page 94
- “Set Up Ports for iSCSI Multipathing,” on page 97
- “Managing iSCSI Sessions,” on page 98
iSCSI Storage Overview
With iSCSI, SCSI storage commands that your virtual machine issues to its virtual disk are converted into TCP/IP protocol packets and transmitted to a remote device, or target, on which the virtual disk is located. To the virtual machine, the device appears as a locally attached SCSI drive.
To access remote targets, the ESXi host uses iSCSI initiators. Initiators transport SCSI requests and responses between ESXi and the target storage device on the IP network. ESXi supports the following types of initiators.
- Software iSCSI adapter - VMware code built into the VMkernel. Allows an ESXi host to connect to the iSCSI storage device through standard network adapters. The software initiator handles iSCSI processing while communicating with the network adapter.
- Hardware iSCSI adapter - Offloads all iSCSI and network processing from your host. Hardware iSCSI adapters are broken into two types.
  - Dependent hardware iSCSI adapter - Leverages the VMware iSCSI management and configuration interfaces.
  - Independent hardware iSCSI adapter - Leverages its own iSCSI management and configuration interfaces.
See the vSphere Storage documentation for details on setup and failover scenarios.
You must configure iSCSI initiators for the host to access and display iSCSI storage devices.
Figure 5-1 depicts hosts that use different types of iSCSI initiators.
- The host on the left uses an independent hardware iSCSI adapter to connect to the iSCSI storage system.
- The host on the right uses software iSCSI.
Dependent hardware iSCSI can be implemented in different ways and is not shown. iSCSI storage devices from the storage system become available to the host. You can access the storage devices and create VMFS datastores for your storage needs.
Figure 5-1. iSCSI Storage (Host 1 connects through hardware iSCSI HBAs, and Host 2 connects through NICs and a software iSCSI adapter, to an iSCSI storage system over an IP network)
Discovery Sessions
A discovery session is part of the iSCSI protocol. The discovery session returns the set of targets that you can access on an iSCSI storage system.
ESXi systems support dynamic and static discovery.
- Dynamic discovery - Also known as Send Targets discovery. Each time the ESXi host contacts a specified iSCSI storage server, it sends a Send Targets request to the server. In response, the iSCSI storage server supplies a list of available targets to the ESXi host. Monitor and manage with esxcli iscsi adapter discovery sendtarget or vicfg-iscsi commands.
- Static discovery - The ESXi host does not have to perform discovery. Instead, the ESXi host uses the IP addresses or domain names and iSCSI target names, IQN or EUI format names, to communicate with the iSCSI target. Monitor and manage with esxcli iscsi adapter discovery statictarget or vicfg-iscsi commands.
In either case, you set up target discovery addresses so that the initiator can determine which storage resource on the network is available for access. With dynamic discovery, all targets associated with an IP address or host name and the iSCSI name are discovered. With static discovery, you must specify the IP address or host name and the iSCSI name of the target you want to access. The iSCSI HBA must be in the same VLAN as both ports of the iSCSI array.
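For example, a command along the following lines adds a dynamic discovery address; the adapter name and target address are illustrative.

esxcli <conn_options> iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.10:3260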
Discovery Target Names
The target name is either an IQN name or an EUI name. The IQN and EUI names use specific formats.
- The IQN name uses the following format.
  iqn.yyyy-mm.{reversed domain name}:id_string
  The following IQN name contains example values.
  iqn.2007-05.com.mydomain:storage.tape.sys3.abc
  The ESXi host generates an IQN name for software iSCSI and dependent hardware iSCSI adapters. You can change that default IQN name, as shown in the example after this list.
- The EUI name is described in IETF RFC 3720 as follows.
  The IEEE Registration Authority provides a service for assigning globally unique identifiers [EUI]. The EUI-64 format is used to build a global identifier in other network protocols. For example, Fibre Channel defines a method of encoding it into a WorldWideName.
  The format is eui. followed by an EUI-64 identifier (16 ASCII-encoded hexadecimal digits).
  The following EUI name contains example values.
  eui.02004567A425678D
  Here, the eui. prefix indicates the name type, and 02004567A425678D is the EUI-64 identifier, encoded as 16 ASCII hexadecimal digits.
The IEEE EUI-64 iSCSI name format can be used when a manufacturer is registered with the IEEE Registration Authority and uses EUI-64 formatted worldwide unique names for its products.
You can check in the UI of the storage array whether an array uses an IQN name or an EUI name.
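For example, a command along the following lines changes the default IQN name of a software iSCSI adapter; the adapter name and IQN shown are illustrative.

esxcli <conn_options> iscsi adapter set --adapter=vmhba33 --name=iqn.2015-01.com.mydomain:host1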
Protecting an iSCSI SAN
Your iSCSI configuration is only as secure as your IP network. By enforcing good security standards when you set up your network, you help safeguard your iSCSI storage.

Protecting Transmitted Data
A primary security risk in iSCSI SANs is that an attacker might sniff transmitted storage data.
Neither the iSCSI adapter nor the ESXi host iSCSI initiator encrypts the data that it transmits to and from the targets, making the data vulnerable to sniffing attacks. You must therefore take additional measures to prevent attackers from easily seeing iSCSI data.
Allowing your virtual machines to share virtual switches and VLANs with your iSCSI configuration potentially exposes iSCSI traffic to misuse by a virtual machine attacker. To help ensure that intruders cannot listen to iSCSI transmissions, make sure that none of your virtual machines can see the iSCSI storage network.
Protect your system by giving the iSCSI SAN a dedicated virtual switch.
- If you use an independent hardware iSCSI adapter, make sure that the iSCSI adapter and ESXi physical network adapter are not inadvertently connected outside the host. Such a connection might result from sharing a switch.
- If you use a dependent hardware or software iSCSI adapter, which uses ESXi networking, configure iSCSI storage through a different virtual switch than the one used by your virtual machines.
You can also configure your iSCSI SAN on its own VLAN to improve performance and security. Placing your iSCSI configuration on a separate VLAN ensures that no devices other than the iSCSI adapter can see transmissions within the iSCSI SAN. With a dedicated VLAN, network congestion from other sources cannot interfere with iSCSI traffic.

Securing iSCSI Ports
You can improve the security of iSCSI ports by installing security patches and limiting the devices connected to the iSCSI network.
When you run iSCSI devices, the ESXi host does not open ports that listen for network connections. This measure reduces the chances that an intruder can break into the ESXi host through spare ports and gain control over the host. Therefore, running iSCSI does not present additional security risks at the ESXi host end of the connection.
An iSCSI target device must have one or more open TCP ports to listen for iSCSI connections. If security vulnerabilities exist in the iSCSI device software, your data can be at risk through no fault of the ESXi system. To lower this risk, install all security patches that your storage equipment manufacturer provides and limit the devices connected to the iSCSI network.
Setting iSCSI CHAP
iSCSI storage systems authenticate an initiator using a name and key pair. ESXi systems support Challenge Handshake Authentication Protocol (CHAP).
Using CHAP for your SAN implementation is a best practice. The ESXi host and the iSCSI storage system must have CHAP enabled and must have common credentials. During iSCSI login, the iSCSI storage system exchanges its credentials with the ESXi system and checks them.
You can set up iSCSI authentication by using the vSphere Web Client, as discussed in the vSphere Storage documentation, or by using the esxcli command, discussed in “Enabling iSCSI Authentication,” on page 94. To use CHAP authentication, you must enable CHAP on both the initiator side and the storage system side. After authentication is enabled, it applies to targets to which no connection has been established, but does not apply to targets to which a connection is already established. After the discovery address is set, the new volumes to which you add a connection are exposed and can be used.
For software iSCSI and dependent hardware iSCSI, ESXi hosts support per-discovery and per-target CHAP credentials. For independent hardware iSCSI, ESXi hosts support only one set of CHAP credentials per initiator. You cannot assign different CHAP credentials for different targets.
When you configure independent hardware iSCSI initiators, ensure that the CHAP configuration matches your iSCSI storage. If CHAP is enabled on the storage array, it must be enabled on the initiator. If CHAP is enabled, you must set up the CHAP authentication credentials on the ESXi host to match the credentials on the iSCSI storage.
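To inspect the current adapter-level CHAP settings, you can run the following command; the adapter name is illustrative.

esxcli <conn_options> iscsi adapter auth chap get --adapter=vmhba33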
Supported CHAP Levels
To set CHAP levels with esxcli iscsi adapter auth chap set or vicfg-iscsi, specify one of the values in Table 5-1 for <level>. Only two of the levels are supported for independent hardware iSCSI.
Mutual CHAP is supported for software iSCSI and for dependent hardware iSCSI, but not for independent hardware iSCSI.
IMPORTANT Ensure that CHAP is set to chapRequired before you set mutual CHAP, and use compatible levels for CHAP and mutual CHAP. Use different passwords for CHAP and mutual CHAP to avoid security risks.
Table 5-1. Supported Levels for CHAP

chapProhibited – Host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it. Supported for software iSCSI, dependent hardware iSCSI, and independent hardware iSCSI.
chapDiscouraged – Host uses a non-CHAP connection, but allows a CHAP connection as fallback. Supported for software iSCSI and dependent hardware iSCSI.
chapPreferred – Host uses CHAP if the CHAP connection succeeds, but uses non-CHAP connections as fallback. Supported for software iSCSI, dependent hardware iSCSI, and independent hardware iSCSI.
chapRequired – Host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. Supported for software iSCSI and dependent hardware iSCSI.
Returning Authentication to Default Inheritance
The values of iSCSI authentication settings associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent. For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
- If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.
- If you use vicfg-iscsi, the value you set overrides the inherited value.
- If you use esxcli iscsi commands, the value you set overrides the inherited value. You can set CHAP at the following levels.
  - esxcli iscsi adapter auth chap [get|set]
  - esxcli iscsi adapter discovery sendtarget auth chap [get|set]
  - esxcli iscsi adapter target portal auth chap [get|set]
Inheritance is relevant only if you want to return a dynamic discovery address or a static discovery target to its inherited value. In that case, use one of the following commands.
- Dynamic discovery: esxcli iscsi adapter discovery sendtarget auth chap set --inherit
- Static discovery: esxcli iscsi adapter target portal auth chap set --inherit
NOTE You can set target-level CHAP authentication properties to be inherited from the send target level and set send target level CHAP authentication properties to be inherited from the adapter level. Resetting adapter-level properties is not supported.
Command Syntax for esxcli iscsi and vicfg-iscsi
In vSphere 5.0 and later, you can manage iSCSI storage by using either esxcli iscsi commands or vicfg-iscsi options.
For details, see the vSphere Command-Line Interface Reference, and see “esxcli iscsi Command Syntax,” on page 74 and “vicfg-iscsi Command Syntax,” on page 75.
esxcli iscsi Command Syntax
The esxcli iscsi command includes a number of nested namespaces. The following hierarchy illustrates the namespaces; the commands available at each level are shown in brackets. Many namespaces include both commands and namespaces.

adapter [get|list|set]
    auth chap [get|set]
    capabilities get
    discovery [rediscover]
        sendtarget [add|list|remove]
            auth chap [get|set]
            param [get|set]
        statictarget [add|list|remove]
        status get
    firmware [get|set]
    param [get|set]
    target [list]
        portal [list]
            auth chap [get|set]
            param [get|set]
networkportal [add|list|remove]
    ipconfig [get|set]
physicalnetworkportal [list]
    param [get|set]
session [add|list|remove]
    connection list
ibftboot [get|import]
logicalnetworkportal list
plugin list
software [get|set]
Key to esxcli iscsi Short Options
ESXCLI commands for iSCSI management consistently use the same short options. For several options, the associated full option depends on the command.

Table 5-3. Short Options for iSCSI ESXCLI Command Options

Lower-case options:
a – --address, --alias
c – --cid
d – --direction
f – --file, --force
g – --gateway
i – --ip
k – --key
l – --level
m – --method
n – --nic
o – --option
p – --plugin
s – --isid, --subnet, --switch
v – --value

Upper-case options:
A – --adapter
D – --default
I – --inherit
M – --module
N – --authname, --name
S – --state, --secret

Numeric options:
1 – --dns1
2 – --dns2
vicfg-iscsi Command Syntax
vicfg-iscsi supports a comprehensive set of options. Among other tasks, its options let you:
- List external, vendor-specific properties of an iSCSI adapter.
- List target information.
- List iSCSI parameter information.
- List iSCSI parameter details.
- Set iSCSI parameters.
- Return parameters in a discovery target or send target to default inheritance behavior.
- Reset target-level authentication properties to be inherited from the adapter level (used with the --authentication option).
See the vSphere Command-Line Interface Reference for the complete list of options.
iSCSI Storage Setup with ESXCLI
You can set up iSCSI storage by using commands in the esxcli iscsi namespace. You can also set up iSCSI storage by using the vSphere Web Client or vicfg-iscsi commands. See “iSCSI Storage Setup with vicfg-iscsi,” on page 84.
Set Up Software iSCSI with ESXCLI
Software iSCSI setup requires a number of high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or run esxcli iscsi --help in the console. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
- Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
- Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
- Verify that you are familiar with iSCSI parameters. See “Listing and Setting iSCSI Parameters,” on page 90.
Procedure
1. Enable software iSCSI.
   esxcli <conn_options> iscsi software set --enabled=true
2. Check whether a network portal, that is, a bound port, exists for iSCSI traffic.
   esxcli <conn_options> iscsi adapter list
3. If no adapter exists, add one.
   Software iSCSI does not require port binding, but requires that at least one VMkernel NIC is available and can be used as an iSCSI NIC. You can name the adapter as you add it.
   esxcli <conn_options> iscsi networkportal add -n <portal_name> -A <vmhba>
4. (Optional) Check the status.
   esxcli <conn_options> iscsi software get
   The system prints true if software iSCSI is enabled, or false if it is not enabled.
5. (Optional) Set the iSCSI name and alias.
   esxcli <conn_options> iscsi adapter set --adapter=<iscsi adapter> --name=<name>
   esxcli <conn_options> iscsi adapter set --adapter=<iscsi adapter> --alias=<alias>
6. Add a dynamic discovery address or a static discovery address, as shown in the example commands after this procedure.
   With dynamic discovery, all storage targets associated with a host name or IP address are discovered. With static discovery, you must specify the IP address or host name and the iSCSI name of the target you want to access.
   When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
7. (Optional) Set the authentication information for CHAP.
   You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
   - Adapter-level CHAP:
     esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
   - Discovery-level CHAP:
     esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd>
   - Target-level CHAP:
     esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd>
8. (Optional) If you want to make additional iSCSI login parameter changes, you must log out of the corresponding iSCSI session and log back in.
   a. Run esxcli iscsi session remove to log out.
   b. Run esxcli iscsi session add or rescan the adapter to add the session back.
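For step 6, commands along the following lines add a dynamic and a static discovery target; the adapter name, address, and target name are illustrative.

esxcli <conn_options> iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.10:3260
esxcli <conn_options> iscsi adapter discovery statictarget add --adapter=vmhba33 --address=192.168.0.10:3260 --name=iqn.2007-05.com.mydomain:storage1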
Set Up Dependent Hardware iSCSI with ESXCLI
Dependent hardware iSCSI setup requires several high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or run esxcli iscsi --help in the console. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
- Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
- Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
- Verify that you are familiar with iSCSI parameters. See “Listing and Setting iSCSI Parameters,” on page 90.
Procedure
1. Determine the iSCSI adapter type and retrieve the iSCSI adapter ID.
   esxcli <conn_options> iscsi adapter list
2. (Optional) Set the iSCSI name and alias.
   esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --name=<name>
   esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --alias=<alias>
3. Set up port binding.
   a. Identify the VMkernel port of the dependent hardware iSCSI adapter.
      esxcli <conn_options> iscsi logicalnetworkportal list --adapter=<adapter_name>
   b. Connect the dependent hardware iSCSI initiator to the iSCSI VMkernel ports by running the following command for each port.
      esxcli <conn_options> iscsi networkportal add --adapter=<adapter_name> --nic=<bound_vmknic>
4. Add a dynamic discovery address or a static discovery address.
   With dynamic discovery, all storage targets associated with a host name or IP address are discovered. With static discovery, you must specify the IP address or host name and the iSCSI name of the target you want to access.
   When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
5. (Optional) Set the authentication information for CHAP.
   You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
   - Adapter-level CHAP:
     esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
   - Discovery-level CHAP:
     esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd>
   - Target-level CHAP:
     esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd>
9. (Optional) If you want to make additional iSCSI login parameter changes, you must log out of the corresponding iSCSI session and log back in.
   a. Run esxcli iscsi session remove to log out.
   b. Run esxcli iscsi session add or rescan the adapter to add the session back.
Set Up Independent Hardware iSCSI with ESXCLI
With independent hardware-based iSCSI storage, you use a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. This iSCSI initiator handles all iSCSI and network processing and management for your ESXi system.
You must install and configure the independent hardware iSCSI adapter for your host before you can access the iSCSI storage device. For installation information, see the vendor documentation.
Hardware iSCSI setup requires a number of high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or run esxcli iscsi --help in the console. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
- Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
- Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
- Verify that you are familiar with iSCSI parameters. See “Listing and Setting iSCSI Parameters,” on page 90.
Procedure
1. Determine the iSCSI adapter type and retrieve the iSCSI adapter ID.
   esxcli <conn_options> iscsi adapter list
2. Configure the hardware initiator (HBA) by running esxcli iscsi networkportal ipconfig with one or more of the following options.
   - -A|--adapter=<str> – iSCSI adapter name (required)
   - -1|--dns1=<str> – iSCSI network portal primary DNS address
   - -2|--dns2=<str> – iSCSI network portal secondary DNS address
   - -g|--gateway=<str> – iSCSI network portal gateway address
   - -i|--ip=<str> – iSCSI network portal IP address (required)
   - -n|--nic=<str> – iSCSI network portal (vmknic)
   - -s|--subnet=<str> – iSCSI network portal subnet mask (required)
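For example, a command along the following lines assigns the required addresses; the adapter name and addresses are illustrative.

esxcli <conn_options> iscsi networkportal ipconfig set --adapter=vmhba33 --ip=192.168.100.10 --subnet=255.255.255.0 --gateway=192.168.100.1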
3. (Optional) Set the iSCSI name and alias.
   esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --name=<name>
   esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --alias=<alias>
4. Add a dynamic discovery address or a static discovery address.
   With dynamic discovery, all storage targets associated with a host name or IP address are discovered. With static discovery, you must specify the IP address or host name and the iSCSI name of the target you want to access.
iSCSI Storage Setup with vicfg-iscsi
You can set up iSCSI storage by using the vicfg-iscsi command. You can also set up iSCSI storage by using the vSphere Web Client or commands in the esxcli iscsi namespace. See “iSCSI Storage Setup with ESXCLI,” on page 78.
Set Up Software iSCSI with vicfg-iscsi
Software iSCSI setup requires a number of high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
- Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
- Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
Procedure
1. Determine the HBA type and retrieve the HBA ID.
   vicfg-iscsi <conn_options> --adapter --list
2. Enable software iSCSI for the HBA.
   vicfg-iscsi <conn_options> --swiscsi --enable
3. (Optional) Check the status.
   vicfg-iscsi <conn_options> --swiscsi --list
   The system prints Software iSCSI is enabled or Software iSCSI is not enabled.
When you later remove a discovery address, it might still be displayed as the parent of a static target.
You can add the discovery address and rescan to display the correct parent for the static targets.
The target (-i) and name (-n) options determine what the command applies to.
- -i and -n: Command applies to per-target CHAP for static targets.
- Only -i: Command applies to the discovery address.
- Neither -i nor -n: Command applies to per-adapter CHAP.
7. (Optional) Set the authentication information for mutual CHAP by running vicfg-iscsi -A again with the -b option and a different authentication user name and password.
   For <level>, specify chapProhibited or chapRequired.
   - chapProhibited – The host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it.
   - chapRequired – The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. You can set this value for mutual CHAP only if CHAP is set to chapRequired.
   For <auth_method>, CHAP is the only valid value.
   IMPORTANT You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
8. (Optional) Set iSCSI parameters by running vicfg-iscsi -W.
9. After setup is complete, run vicfg-rescan to rescan all storage devices.
Set Up Dependent Hardware iSCSI with vicfg-iscsi
Dependent hardware iSCSI setup requires a number of high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
- Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
- Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
When you later remove a discovery address, it might still be displayed as the parent of a static target.
You can add the discovery address and rescan to display the correct parent for the static targets.
The target (-i) and name (-n) options determine what the command applies to.
- -i and -n: Command applies to per-target CHAP for static targets.
- Only -i: Command applies to the discovery address.
- Neither -i nor -n: Command applies to per-adapter CHAP.
6. (Optional) Set iSCSI parameters by running vicfg-iscsi -W.
7. After setup is complete, run vicfg-rescan to rescan all storage devices.
Set Up Independent Hardware iSCSI with vicfg-iscsi
With independent hardware-based iSCSI storage, you use a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. This iSCSI initiator handles all iSCSI and network processing and management for your ESXi system.
You must install and configure the independent hardware iSCSI adapter for your host before you can access the iSCSI storage device. For installation information, see the vendor documentation.
Hardware iSCSI setup requires a number of high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or the manpage (Linux). Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
- Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
- Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
Procedure
1. Determine the HBA type and retrieve the HBA ID.
   vicfg-iscsi <conn_options> --adapter --list
2. Configure the hardware initiator (HBA) by running vicfg-iscsi -N with one or more of the following options.
   - --list – List network properties.
   - --ip <ip_addr> – Set the HBA IPv4 address.
   - --subnetmask <subnet_mask> – Set the HBA network mask.
   - --gateway <default_gateway> – Set the HBA gateway.
   - --set ARP=true|false – Enable or disable ARP redirect.
   You can also set the HBA IPv4 address, network mask, and gateway in one command, as in the example that follows.
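The exact invocation depends on your vicfg-iscsi version; a command along the following lines, with illustrative addresses and adapter name, is one plausible form.

vicfg-iscsi <conn_options> -N --ip 192.168.100.10 --subnetmask 255.255.255.0 --gateway 192.168.100.1 vmhba33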
When you later remove a discovery address, it might still be displayed as the parent of a static target.
You can add the discovery address and rescan to display the correct parent for the static targets.
The system returns information about targets for the specified adapter, including the iSCSI name, in IQN or EUI format, and the alias. See “Discovery Target Names,” on page 71.
Listing and Setting iSCSI Parameters with ESXCLI
The following table lists all settable parameters. These parameters are also described in IETF RFC 3720. You can run esxcli iscsi adapter param get to determine whether a parameter is settable or not.
The parameters in the table apply to software iSCSI and dependent hardware iSCSI.
Table 5-5. Settable iSCSI Parameters

DataDigestType – Increases data integrity. When data digest is enabled, the system performs a checksum over each PDU's data part and verifies it by using the CRC32C algorithm. Valid values are digestProhibited, digestDiscouraged, digestPreferred, or digestRequired. NOTE Systems that use Intel Nehalem processors offload the iSCSI digest calculations for software iSCSI, thus reducing the impact on performance.
HeaderDigest – Increases data integrity. When header digest is enabled, the system performs a checksum over the header part of each iSCSI Protocol Data Unit (PDU) and verifies it by using the CRC32C algorithm.
MaxOutstandingR2T – Max Outstanding R2T defines the number of Ready to Transfer (R2T) PDUs that can be in transition before an acknowledgement PDU is received.
FirstBurstLength – Maximum amount of unsolicited data an iSCSI initiator can send to the target during the execution of a single SCSI command, in bytes.
MaxBurstLength – Maximum SCSI data payload in a Data-In or a solicited Data-Out iSCSI sequence, in bytes.
MaxRecvDataSegLen – Maximum data segment length, in bytes, that can be received in an iSCSI PDU.
NoopOutInterval – Time interval, in seconds, between NOP-Out requests sent from your iSCSI initiator to an iSCSI target. The NOP-Out requests serve as the ping mechanism to verify that a connection between the iSCSI initiator and the iSCSI target is active. Supported only at the initiator level.
NoopOutTimeout – Amount of time, in seconds, that can lapse before your host receives a NOP-In message. The message is sent by the iSCSI target in response to the NOP-Out request. When the NoopTimeout limit is exceeded, the initiator terminates the current session and starts a new one. Supported only at the initiator level.
RecoveryTimeout – Amount of time, in seconds, that can lapse while a session recovery is performed. If the timeout exceeds its limit, the iSCSI initiator terminates the session.
DelayedAck – Allows systems to delay acknowledgment of received data packets.
You can use the following ESXCLI commands to work with parameter options.
- Run esxcli iscsi adapter param get to list parameter options for the iSCSI adapter.
- Run esxcli iscsi adapter discovery sendtarget param get or esxcli iscsi adapter target portal param get to retrieve information about iSCSI parameters and whether they are settable.
- Run esxcli iscsi adapter discovery sendtarget param set or esxcli iscsi adapter target portal param set to set iSCSI parameter options.
If special characters are in the <name>=<value> sequence, for example, if you add a space, you must surround
the sequence with double quotes ("<name> = <value>").
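For example, the following command changes the NoopOutInterval parameter at the adapter level; the adapter name and value are illustrative.

esxcli <conn_options> iscsi adapter param set --adapter=vmhba33 --key=NoopOutInterval --value=15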
Returning Parameters to Default Inheritance with ESXCLI
The values of iSCSI parameters associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent. For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
- If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.
- If you use esxcli iscsi, the value you set overrides the inherited value.
Inheritance is relevant only if you want to return a dynamic discovery address or a static discovery target to its inherited value. In that case, use the following command, which requires the --name option for static discovery addresses, but not for dynamic discovery targets.
- Dynamic target: esxcli iscsi adapter discovery sendtarget param set
- Static target: esxcli iscsi adapter target portal param set
Listing and Setting iSCSI Parameters with vicfg-iscsi
You can list and set iSCSI parameters by running vicfg-iscsi -W.
The following table lists all settable parameters. These parameters are also described in IETF RFC 3720. You can also run vicfg-iscsi --parameter --list --details to determine whether a parameter is settable or not.
The parameters in the table apply to software iSCSI and dependent hardware iSCSI.
Table 5-6. Settable iSCSI Parameters

DataDigestType – Increases data integrity. When data digest is enabled, the system performs a checksum over each PDU's data part and verifies it by using the CRC32C algorithm. Valid values are digestProhibited, digestDiscouraged, digestPreferred, or digestRequired. NOTE Systems that use Intel Nehalem processors offload the iSCSI digest calculations for software iSCSI, thus reducing the impact on performance.
HeaderDigest – Increases data integrity. When header digest is enabled, the system performs a checksum over the header part of each iSCSI Protocol Data Unit (PDU) and verifies it by using the CRC32C algorithm.
MaxOutstandingR2T – Max Outstanding R2T defines the number of Ready to Transfer (R2T) PDUs that can be in transition before an acknowledgement PDU is received.
FirstBurstLength – Maximum amount of unsolicited data an iSCSI initiator can send to the target during the execution of a single SCSI command, in bytes.
MaxBurstLength – Maximum SCSI data payload in a Data-In or a solicited Data-Out iSCSI sequence, in bytes.
MaxRecvDataSegLen – Maximum data segment length, in bytes, that can be received in an iSCSI PDU.
NoopOutInterval – Time interval, in seconds, between NOP-Out requests sent from your iSCSI initiator to an iSCSI target. The NOP-Out requests serve as the ping mechanism to verify that a connection between the iSCSI initiator and the iSCSI target is active. Supported only at the initiator level.
NoopOutTimeout – Amount of time, in seconds, that can lapse before your host receives a NOP-In message. The message is sent by the iSCSI target in response to the NOP-Out request. When the NoopTimeout limit is exceeded, the initiator terminates the current session and starts a new one. Supported only at the initiator level.
RecoveryTimeout – Amount of time, in seconds, that can lapse while a session recovery is performed. If the timeout exceeds its limit, the iSCSI initiator terminates the session.
DelayedAck – Allows systems to delay acknowledgment of received data packets.
You can use the following vicfg-iscsi options to list parameter options. Specify one of the options listed in
“Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Run vicfg-iscsi -W -l to list parameter options for the HBA.
The target (-i) and name (-n) options determine what the command applies to.
- -i and -n: Command applies to per-target CHAP for static targets.
- Only -i: Command applies to the discovery address.
- Neither -i nor -n: Command applies to per-adapter CHAP.
If special characters are in the <name>=<value> sequence, for example, if you add a space, you must surround
the sequence with double quotes ("<name> = <value>").
Returning Parameters to Default Inheritance with vicfg-iscsi
The values of iSCSI parameters associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent. For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
- If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.
- If you use vicfg-iscsi, the value you set overrides the inherited value.
Inheritance is relevant only if you want to return a dynamic discovery address or a static discovery target to its inherited value. In that case, use the --reset <param_name> option, which requires the --name option for static discovery addresses, but not for dynamic discovery targets.
Enabling iSCSI Authentication
For <level>, specify chapProhibited or chapRequired.
- chapProhibited – The host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it.
- chapRequired – The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. You can set this value for mutual CHAP only if CHAP is set to chapRequired.
For direction, specify mutual.
IMPORTANT You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP. Use a different secret in CHAP and mutual CHAP.
Enable Mutual iSCSI Authentication with ESXCLI
Mutual authentication is supported for software iSCSI and dependent hardware iSCSI, but not for independent hardware iSCSI.
For information on iSCSI CHAP, see “Setting iSCSI CHAP,” on page 72.
Prerequisites
- Verify that CHAP authentication is already set up when you start setting up mutual CHAP.
- Verify that CHAP and mutual CHAP use different user names and passwords. The second user name and password are supported for mutual authentication on the storage side.
- Verify that CHAP and mutual CHAP use compatible CHAP levels.
Procedure
1. Enable authentication.
   esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --adapter=<adapter_name>
Enable Mutual iSCSI Authentication with vicfg-iscsi
You can use the vicfg-iscsi -A -c options to enable iSCSI authentication. Mutual authentication is supported for software iSCSI and dependent hardware iSCSI, but not for independent hardware iSCSI.
For information on iSCSI CHAP, see “Setting iSCSI CHAP,” on page 72.
Prerequisites
- Verify that CHAP authentication is already set up when you start setting up mutual CHAP.
- Verify that CHAP and mutual CHAP use different user names and passwords. The second user name and password are supported for mutual authentication on the storage side.
- Verify that CHAP and mutual CHAP use compatible CHAP levels.
3. After setup is complete, perform rediscovery and rescan all storage devices.
Set Up Ports for iSCSI Multipathing
With port binding, you create a separate VMkernel port for each physical NIC using 1:1 mapping. You can add all network adapter and VMkernel port pairs to a single vSwitch. The vSphere Storage documentation explains in detail how to specify port binding.
You cannot set up ports for multipathing by using vicfg-iscsi.
In the examples below, specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
IMPORTANT The ESXi 4.x ESXCLI commands for setting up iSCSI are no longer supported.
Prerequisites
Verify that you are familiar with iSCSI session removal. See “Removing iSCSI Sessions,” on page 99.
Procedure
1. Find out which uplinks are available for use with iSCSI adapters.
   esxcli <conn_options> iscsi physicalnetworkportal list --adapter=<adapter_name>
2. Connect the software iSCSI or dependent hardware iSCSI initiator to the iSCSI VMkernel ports by running the following command for each port.
   esxcli <conn_options> iscsi networkportal add --adapter=<adapter_name> --nic=<bound_nic>
3. Verify that the ports were added to the iSCSI initiator by running the following command.
   esxcli <conn_options> iscsi networkportal list --adapter=<adapter_name>
4. (Optional) If there are active iSCSI sessions between your host and targets, discontinue them. See “Removing iSCSI Sessions,” on page 99.
Managing iSCSI Sessions
To communicate with each other, iSCSI initiators and targets establish iSCSI sessions. You can use esxcli iscsi session to list and manage iSCSI sessions for software iSCSI and dependent hardware iSCSI.

Introduction to iSCSI Session Management
By default, software iSCSI and dependent hardware iSCSI initiators start one iSCSI session between each initiator port and each target port. If your iSCSI initiator or target has more than one port, your host can establish multiple sessions. The default number of sessions for each target equals the number of ports on the iSCSI adapter times the number of target ports. You can display all current sessions to analyze and debug them. You might add sessions to the default for several reasons.
- Cloning sessions - Some iSCSI arrays support multiple sessions between the iSCSI adapter and target ports. If you clone an existing session on one of these arrays, the array presents more data paths for your adapter. Duplicate sessions do not persist across reboot. Additional sessions to the target might have performance benefits, but the result of cloning depends entirely on the array. You must log out from an iSCSI session if you want to clone it. You can use the esxcli iscsi session add command to clone a session.
- Enabling Header and Data Digest - If you are logged in to a session and want to enable the Header and Data Digest parameters, you must set the parameter, then remove and re-add the session for the parameter change to take effect.
- Establishing target-specific sessions - You can establish a session to a specific target port. This can be useful if your host connects to a single-port storage system that, by default, presents only one target port to your initiator, but can redirect additional sessions to a different target port. Establishing a new session between your iSCSI initiator and another target port creates an additional path to the storage system.
CAUTION Some storage systems do not support multiple sessions from the same initiator name or endpoint. Attempts to create multiple sessions to such targets can result in unpredictable behavior of your iSCSI environment.
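For example, commands along the following lines log out of a session and then clone it; the adapter name, target name, and ISID placeholder are illustrative.

esxcli <conn_options> iscsi session remove --adapter=vmhba33 --name=iqn.2007-05.com.mydomain:storage1
esxcli <conn_options> iscsi session add --adapter=vmhba33 --name=iqn.2007-05.com.mydomain:storage1 --isid=<session_isid>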
Listing iSCSI Sessions
You can use esxcli iscsi session to list sessions.
The following example scenario uses the available commands. Run esxcli iscsi session --help and each command with --help for reference information. The example uses a configuration file to log in to the host. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
IMPORTANT The ESXi 4.x ESXCLI commands for managing iSCSI sessions are not supported against ESXi 5.0 hosts.
- List a software iSCSI session at the adapter level.
  esxcli <conn_options> iscsi session list --adapter=<iscsi_adapter>
- List a software iSCSI session at the target level.
  esxcli <conn_options> iscsi session list --name=<target> --adapter=<iscsi_adapter>
Logging in to iSCSI Sessions
You can use esxcli iscsi session to log in to a session. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
- Log in to a session on the current software iSCSI or dependent hardware iSCSI configuration at the adapter level.
  esxcli <conn_options> iscsi session add --adapter=<iscsi_adapter>