VMware ESXi 6.5 User Manual

vSphere Command-Line Interface
Concepts and Examples
ESXi 6.5
vCenter Server 6.5
This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs.
You can find the most up-to-date technical documentation on the VMware Web site at:
hp://www.vmware.com/support/
The VMware Web site also provides the latest product updates.
If you have comments about this documentation, submit your feedback to:
docfeedback@vmware.com
Copyright © 2007–2017 VMware, Inc. All rights reserved. Copyright and trademark information.
VMware, Inc.
3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com

Contents

About This Book 9
1 vSphere CLI Command Overviews 11
Introduction 11
Documentation 12
Command-Line Help 12
List of Available Host Management Commands 13
Targets and Protocols for vCLI Host Management Commands 15
Supported Platforms for vCLI Commands 15
Commands with an esxcfg Prefix 16
ESXCLI Commands Available on Different ESXi Hosts 17
Trust Relationship Requirement for ESXCLI Commands 17
Download and Install the vCenter Server Certificate 17
Using the --cacertsfile Option 18
Using the --thumbprint Option 18
Use the Credential Store 18
Using ESXCLI Output 19
Connection Options for vCLI Host Management Commands 19
Connection Options for DCLI Commands 19
vCLI Host Management Commands and Lockdown Mode 19
2 Managing Hosts 21
Stopping, Rebooting, and Examining Hosts 21
Stopping and Rebooting Hosts with ESXCLI 21
Stopping, Rebooting, and Examining Hosts with vicfg-hostops 22
Entering and Exiting Maintenance Mode 22
Enter and Exit Maintenance Mode with ESXCLI 22
Enter and Exit Maintenance Mode with vicfg-hostops 23
Backing Up Configuration Information with vicfg-cfgbackup 24
Backup Tasks 24
Backing Up Configuration Data 24
Restore Configuration Data 24
Using vicfg-cfgbackup from vMA 25
Managing VMkernel Modules 25
Manage Modules with esxcli system module 25
Manage Modules with vicfg-module 26
Using vicfg-authconfig for Active Directory Configuration 26
Prepare ESXi Hosts for Active Directory Integration 26
Set Up Active Directory to Work with ESXi 27
Updating Hosts 27
3 Managing Files 29
Introduction to Virtual Machine File Management 29
Managing the Virtual Machine File System with vmkfstools 30
Upgrading VMFS3 Volumes to VMFS5 31
Managing VMFS Volumes 31
Managing Duplicate VMFS Datastores 32
Mounting Datastores with Existing Signatures 32
Resignaturing VMFS Copies 33
Reclaiming Unused Storage Space 34
Using vifs to View and Manipulate Files on Remote ESXi Hosts 35
vifs Options 36
vifs Examples 37
4 Managing Storage 41
Introduction to Storage 42
How Virtual Machines Access Storage 42
Datastores 44
Storage Device Naming 44
Examining LUNs 45
Target and Device Representation 45
Examining LUNs with esxcli storage core 46
Examining LUNs with vicfg-scsidevs 47
Detach a Device and Remove a LUN 48
Reaach a Device 49
Working with Permanent Device Loss 49
Removing a PDL LUN 49
Reaach a PDL LUN 49
Managing Paths 50
Multipathing with Local Storage and FC SANs 50
Listing Path Information 51
Changing the State of a Path 53
Managing Path Policies 54
Multipathing Considerations 54
Changing Path Policies 55
Set Policy Details for Devices that Use Round Robin 56
Scheduling Queues for Virtual Machine I/O 57
Managing NFS/NAS Datastores 57
Capabilities Supported by NFS/NAS 57
Adding and Deleting NAS File Systems 58
Monitor and Manage FibreChannel SAN Storage 59
Monitoring and Managing Virtual SAN Storage 60
Retrieve Virtual SAN Information 60
Manage a Virtual SAN Cluster 60
Add and Remove Virtual SAN Storage 61
Monitoring vSphere Flash Read Cache 62
Monitoring and Managing Virtual Volumes 62
Migrating Virtual Machines with svmotion 63
Storage vMotion Uses 63
Storage vMotion Requirements and Limitations 63
Running svmotion in Interactive Mode 64
Running svmotion in Noninteractive Mode 64
Conguring FCoE Adapters 65
Scanning Storage Adapters 66
Retrieving SMART Information 66
5 Managing iSCSI Storage 69
iSCSI Storage Overview 69
Discovery Sessions 70
Discovery Target Names 71
Protecting an iSCSI SAN 71
Protecting Transmied Data 71
Securing iSCSI Ports 72
Seing iSCSI CHAP 72
Command Syntax for esxcli iscsi and vicfg-iscsi 73
esxcli iscsi Command Syntax 74
Key to esxcli iscsi Short Options 74
vicfg-iscsi Command Syntax 75
iSCSI Storage Setup with ESXCLI 78
Set Up Software iSCSI with ESXCLI 78
Set Up Dependent Hardware iSCSI with ESXCLI 80
Set Up Independent Hardware iSCSI with ESXCLI 82
iSCSI Storage Setup with vicfg-iscsi 84
Set Up Software iSCSI with vicfg-iscsi 85
Set Up Dependent Hardware iSCSI with vicfg-iscsi 86
Set Up Independent Hardware iSCSI with vicfg-iscsi 87
Listing and Seing iSCSI Options 89
Listing iSCSI Options with ESXCLI 89
Seing MTU with ESXCLI 89
Listing and Seing iSCSI Options with vicfg-iscsi 89
Listing and Seing iSCSI Parameters 90
Listing and Seing iSCSI Parameters with ESXCLI 90
Returning Parameters to Default Inheritance with ESXCLI 92
Listing and Seing iSCSI Parameters with vicfg-iscsi 92
Returning Parameters to Default Inheritance with vicfg-iscsi 94
Enabling iSCSI Authentication 94
Enable iSCSI Authentication with ESXCLI 94
Enable Mutual iSCSI Authentication with ESXCLI 95
Enable iSCSI Authentication with vicfg-iscsi 96
Set Up Ports for iSCSI Multipathing 97
Managing iSCSI Sessions 98
Introduction to iSCSI Session Management 98
Listing iSCSI Sessions 98
Logging in to iSCSI Sessions 99
Removing iSCSI Sessions 99
6 Managing Third-Party Storage Arrays 101
Managing NMP with esxcli storage nmp 101
Device Management with esxcli storage nmp device 102
Listing Paths with esxcli storage nmp path 102
Managing Path Selection Policy Plug-Ins with esxcli storage nmp psp 103
Fixed Path Selection Policy Operations 104
Customizing Round Robin Setup 105
Managing SATPs 106
Path Claiming with esxcli storage core claiming 108
Using the Reclaim Troubleshooting Command 109
Unclaiming Paths or Sets of Paths 109
Managing Claim Rules 110
Change the Current Claim Rules in the VMkernel 110
Adding Claim Rules 111
Removing Claim Rules 112
Listing Claim Rules 113
Loading Claim Rules 113
Moving Claim Rules 113
Load and Apply Path Claim Rules 114
Running Path Claim Rules 114
7 Managing Users 117
Users in the vSphere Environment 117
vicfg-user Command Syntax 118
Managing Users with vicfg-user 118
Assigning Permissions with ESXCLI 120
8 Managing Virtual Machines 123
vmware-cmd Overview 123
Connection Options for vmware-cmd 124
General Options for vmware-cmd 124
Format for Specifying Virtual Machines 124
List and Register Virtual Machines 125
Retrieving Virtual Machine Attributes 125
Managing Virtual Machine Snapshots with vmware-cmd 127
Take a Virtual Machine Snapshot 127
Reverting and Removing Snapshots 128
Powering Virtual Machines On and Off 128
Connecting and Disconnecting Virtual Devices 129
Working with the AnswerVM API 130
Forcibly Stop a Virtual Machine with ESXCLI 130
9 Managing vSphere Networking 131
Introduction to vSphere Networking 131
Networking Using vSphere Standard Switches 132
Networking Using vSphere Distributed Switches 133
Retrieving Basic Networking Information 134
Troubleshoot a Networking Setup 134
Seing Up vSphere Networking with vSphere Standard Switches 136
Seing Up Virtual Switches and Associating a Switch with a Network Interface 136
Retrieving Information About Virtual Switches 137
Adding and Deleting Virtual Switches 138
Checking, Adding, and Removing Port Groups 139
Managing Uplinks and Port Groups 140
Seing the Port Group VLAN ID 141
Managing Uplink Adapters 142
Adding and Modifying VMkernel Network Interfaces 145
Seing Up vSphere Networking with vSphere Distributed Switch 148
Managing Standard Networking Services in the vSphere Environment 149
Seing the DNS Conguration 149
Seing the DNS Conguration with ESXCLI 149
Seing the DNS Conguration with vicfg-dns 151
Manage an NTP Server 152
Manage the IP Gateway 152
Seing Up IPsec 153
Using IPsec with ESXi 154
Managing Security Associations 155
Managing Security Policies 156
Manage the ESXi Firewall 157
Monitor VXLAN 158
10 Monitoring ESXi Hosts 161
Using resxtop for Performance Monitoring 161
Managing Diagnostic Partitions 161
Managing Core Dumps 162
Manage Local Core Dumps with ESXCLI 162
Manage Core Dumps with ESXi Dump Collector 163
Manage Core Dumps with vicfg-dumppart 164
Conguring ESXi Syslog Services 164
Managing ESXi SNMP Agents 166
Conguring SNMP Communities 166
Conguring the SNMP Agent to Send Traps 166
Conguring the SNMP Agent for Polling 168
Retrieving Hardware Information 169
Index 171

About This Book

vSphere Command-Line Interface Concepts and Examples explains how to use the commands in the VMware vSphere® Command-Line Interface (vCLI) and includes command overviews and examples.
Intended Audience
This book is for experienced Windows or Linux system administrators who are familiar with vSphere administration tasks and data center operations and know how to use commands in scripts.
VMware Technical Publications Glossary
VMware Technical Publications provides a glossary of terms that might be unfamiliar to you. For definitions of terms as they are used in VMware technical documentation, go to http://www.vmware.com/support/pubs.
Related Documentation
The documentation for vCLI is available in the vSphere Documentation Center and on the vCLI documentation page. Go to http://www.vmware.com/support/developer/vcli.
Geing Started with vSphere Command-Line Interfaces includes information about available CLIs, enabling
n
the ESXi Shell, as well as installing and running vCLI and DCLI commands.
VMware, Inc.
vSphere Command-Line Interface Reference is a reference to both ESXCLI commands and vicfg-
n
commands. The vicfg- command help is generated from the POD available for each command, run
pod2html for any vicfg- command to generate individual HTML les interactively. The ESXCLI
reference information is generated from the ESXCLI help.
DCLI Reference is a reference to DCLI commands for managing vCenter services.
n
The documentation for PowerCLI is available in the vSphere Documentation Center and on the PowerCLI documentation page.
The vSphere SDK for Perl documentation explains how you can use the vSphere SDK for Perl and related utility applications to manage your vSphere environment.
The vSphere Management Assistant Guide explains how to install and use the vSphere Management Assistant (vMA). vMA is a virtual machine that includes vCLI and other prepackaged software.
Background information for the tasks discussed in this book is available in the vSphere documentation set. The vSphere documentation consists of the combined VMware vCenter Server and ESXi documentation.

Chapter 1 vSphere CLI Command Overviews

This chapter introduces the command set, presents supported commands for different versions of vSphere, lists connection options, and discusses vCLI and lockdown mode.
This chapter includes the following topics:
- "Introduction," on page 11
- "List of Available Host Management Commands," on page 13
- "Targets and Protocols for vCLI Host Management Commands," on page 15
- "Supported Platforms for vCLI Commands," on page 15
- "Commands with an esxcfg Prefix," on page 16
- "ESXCLI Commands Available on Different ESXi Hosts," on page 17
- "Trust Relationship Requirement for ESXCLI Commands," on page 17
- "Using ESXCLI Output," on page 19
- "Connection Options for vCLI Host Management Commands," on page 19
- "Connection Options for DCLI Commands," on page 19
- "vCLI Host Management Commands and Lockdown Mode," on page 19

Introduction

The commands in the vSphere CLI package allow you to perform vSphere configuration tasks by using commands from the vCLI package installed on supported platforms, or by using commands from vMA. The package consists of several command sets.
The following table lists the components of the vSphere CLI command set.

ESXCLI commands: Manage many aspects of an ESXi host. You can run ESXCLI commands remotely or in the ESXi Shell. You can also run ESXCLI commands from the PowerCLI prompt by using the Get-EsxCli cmdlet.

vicfg- commands: Set of commands for many aspects of host management. Eventually, these commands will be replaced by ESXCLI commands. A set of esxcfg- commands that precisely mirrors the vicfg- commands is also included in the vCLI package.

Other commands (vmware-cmd, vifs, vmkfstools): Commands implemented in Perl that do not have a vicfg- prefix. These commands are scheduled to be deprecated or replaced by ESXCLI commands.

DCLI commands: Manage VMware SDDC services. DCLI is a CLI client to the vSphere Automation SDK interface for managing VMware SDDC services. A DCLI command talks to a vSphere Automation API endpoint to locate relevant information, and then executes the command and displays the result to the user.
You can install the vSphere CLI command set on a supported Linux or Windows system. See Getting Started with vSphere Command-Line Interfaces. You can also deploy the vSphere Management Assistant (vMA) to an ESXi system of your choice.
After installation, run vCLI commands from the Linux or Windows system or from vMA.
- Manage ESXi hosts with other vCLI commands by specifying connection options such as the target host, user, and password or a configuration file. See "Connection Options for vCLI Host Management Commands," on page 19.
- Manage vCenter services with DCLI commands by specifying a target vCenter Server system and authentication options. See Getting Started with vSphere Command-Line Interfaces for a list of connection options.

Documentation

You can nd information about dierent aspects of vCLI in separate publications.
Geing Started with vSphere Command-Line Interfaces includes information about available CLIs, enabling the ESXi Shell, and installing and running vCLI commands.
Reference information for vCLI and DCLI commands is available on the vCLI documentation page
hp://www.vmware.com/support/developer/vcli/ and in the vSphere Documentation Center for the product
version that you are using.
vSphere Command-Line Interface Reference is a reference to vicfg- and related vCLI commands and
n
includes reference information for ESXCLI commands. All reference information is generated from the help.
A reference to esxtop and resxtop is included in the Resource Management documentation.
n
The DCLI Reference is included separately from the vSphere Command-Line Interface Reference. All
n
reference information is generated from the help.

Command-Line Help

Available command-line help differs for the different command sets.

vicfg- commands: Run <vicfg-cmd> --help for an overview of each option. Run pod2html with a vicfg- command as input and pipe the output to a file for more detailed help information, for example:
pod2html vicfg-authconfig.pl > vicfg-authconfig.html
This output corresponds to the information available in the vSphere Command-Line Interface Reference.

ESXCLI commands: Run --help at any level of the hierarchy for information about both commands and namespaces available from that level.

DCLI commands: Run --help for any command or namespace to display the input options, whether the option is required, and the input option type. For namespaces, --help displays all available child namespaces and commands. Run dcli --help to display usage information for DCLI.

List of Available Host Management Commands

vCLI host management commands from earlier versions have been replaced with commands that have equivalent functionality.
The following table lists vCLI host management commands in alphabetical order and the corresponding ESXCLI command if available. For ESXCLI, new commands and namespaces are added with each release. See the Release Notes for the corresponding release for information.
Functionality of the DCLI command set, which was added in vSphere 6.0 and later, is different from these commands. DCLI commands are not included in the table.
vCLI 4.1 Command → vCLI 5.1 and later Command, followed by a comment:

esxcli → esxcli (new syntax)
  All vCLI 4.1 commands have been renamed. Significant additions have been made to ESXCLI. Many tasks previously performed with a vicfg- command are now performed with ESXCLI.

resxtop → resxtop (No ESXCLI equivalent)
  Monitors in real time how ESXi hosts use resources. Runs in interactive or batch mode. Supported only on Linux. See "Using resxtop for Performance Monitoring," on page 161. See the vSphere Resource Management documentation for a detailed reference.

svmotion → svmotion (No ESXCLI equivalent)
  Moves a virtual machine's configuration file, and, optionally, its disks, while the virtual machine is running. Must run against a vCenter Server system. See "Migrating Virtual Machines with svmotion," on page 63.

vicfg-advcfg → esxcli system settings advanced
  Performs advanced configuration. The advanced settings are a set of VMkernel options. These options are typically in place for specific workarounds or debugging. Use this command as instructed by VMware.

vicfg-authconfig → vicfg-authconfig (No ESXCLI equivalent)
  Remotely configures Active Directory settings for an ESXi host. See "Using vicfg-authconfig for Active Directory Configuration," on page 26.

vicfg-cfgbackup → vicfg-cfgbackup (No ESXCLI equivalent)
  Backs up the configuration data of an ESXi system and restores previously saved configuration data. Cannot run against a vCenter Server system. See "Backing Up Configuration Information with vicfg-cfgbackup," on page 24.

vicfg-dns → esxcli network ip dns
  Specifies an ESXi host's DNS (Domain Name Server) configuration. See "Setting the DNS Configuration," on page 149.

vicfg-dumppart → esxcli system coredump
  Sets both the partition (esxcli system coredump partition) and the network (esxcli system coredump network) to use for core dumps. Use this command to set up ESXi Dump Collector. See "Managing Diagnostic Partitions," on page 161.

vicfg-hostops → esxcli system maintenancemode, esxcli system shutdown
  Manages hosts. See "Stopping, Rebooting, and Examining Hosts," on page 21, and "Entering and Exiting Maintenance Mode," on page 22.

vicfg-ipsec → esxcli network ip ipsec
  Sets up IPsec (Internet Protocol Security), which secures IP communications coming from and arriving at ESXi hosts. ESXi hosts support IPsec using IPv6. See "Setting Up IPsec," on page 153.

vicfg-iscsi → esxcli iscsi
  Manages hardware and software iSCSI storage. See Chapter 5, "Managing iSCSI Storage," on page 69.
vicfg-module → esxcli system module
  Enables VMkernel options. Use this command with the options listed in this document, or as instructed by VMware. See "Managing VMkernel Modules," on page 25.

vicfg-mpath, vicfg-mpath35 → esxcli storage core path
  Configures storage arrays. See "Managing Paths," on page 50.

vicfg-nas → esxcli storage nfs
  Manages NAS/NFS filesystems. See "Managing NFS/NAS Datastores," on page 57.

vicfg-nics → esxcli network nic
  Manages the ESXi host's uplink adapters. See "Managing Uplink Adapters," on page 142.

vicfg-ntp → vicfg-ntp (No ESXCLI equivalent)
  Defines the NTP (Network Time Protocol) server. See "Manage an NTP Server," on page 152.

vicfg-rescan → esxcli storage core adapter rescan
  Rescans the storage configuration. See "Scanning Storage Adapters," on page 66.

vicfg-route → esxcli network ip route
  Manages the ESXi host's route entry. See "Manage the IP Gateway," on page 152.

vicfg-scsidevs → esxcli storage core device
  Finds and examines available LUNs. See "Examining LUNs," on page 45.

vicfg-snmp → esxcli system snmp
  Manages the SNMP agent. New options were added in vCLI 5.0, and expanded SNMP support was added in vCLI 5.1. See "Managing ESXi SNMP Agents," on page 166. Using SNMP in a vSphere environment is discussed in detail in the vSphere Monitoring and Performance documentation.

vicfg-syslog → esxcli system syslog
  Specifies log settings for ESXi hosts, including local storage policies and server and port information for network logging. See "Configuring ESXi Syslog Services," on page 164. The vCenter Server and Host Management documentation explains how to set up system logs using the vSphere Web Client.

vicfg-user → vicfg-user (No ESXCLI equivalent)
  Creates, modifies, deletes, and lists local direct access users and groups of users. See Chapter 7, "Managing Users," on page 117. The vSphere Security documentation discusses security implications of user management and custom roles.

vicfg-vmknic → esxcli network ip interface
  Adds, deletes, and modifies VMkernel network interfaces. See "Adding and Modifying VMkernel Network Interfaces," on page 145.

vicfg-volume → esxcli storage filesystem
  Supports resignaturing the copy of a VMFS volume, and mounting and unmounting the copy. See "Managing Duplicate VMFS Datastores," on page 32.

vicfg-vswitch → esxcli network vswitch
  Adds or removes virtual switches or modifies virtual switch settings. See "Setting Up Virtual Switches and Associating a Switch with a Network Interface," on page 136.

vifs → vifs (No ESXCLI equivalent)
  Performs file system operations such as retrieving and uploading files on the ESXi system. See "Using vifs to View and Manipulate Files on Remote ESXi Hosts," on page 35.

vihostupdate → esxcli software vib
  Updates legacy ESXi hosts to a different version of the same major release. You cannot run vihostupdate against ESXi 5.0 and later hosts. See "Managing VMkernel Modules," on page 25.

vmkfstools → vmkfstools (No ESXCLI equivalent)
  Creates and manipulates virtual disks, file systems, logical volumes, and physical storage devices on an ESXi host. See "Managing the Virtual Machine File System with vmkfstools," on page 30.

vmware-cmd → vmware-cmd (No ESXCLI equivalent)
  Performs virtual machine operations remotely. This includes, for example, creating a snapshot, powering the virtual machine on or off, and getting information about the virtual machine. See Chapter 8, "Managing Virtual Machines," on page 123.

Targets and Protocols for vCLI Host Management Commands

Most vCLI commands are used to manage or retrieve information about one or more ESXi hosts. They can target an ESXi host or a vCenter Server system.
When you target a vCenter Server system, you can use --vihost to specify the ESXi host to run the command against. The only exception is svmotion, which you can run against vCenter Server systems, but not against ESXi systems.
The following commands must have an ESXi system, not a vCenter Server system as a target.
- vifs
- vicfg-user
- vicfg-cfgbackup
- vihostupdate
- vmkfstools
The resxtop command requires an HTTPS connection. All other commands support HTTP and HTTPS.

Supported Platforms for vCLI Commands

Platform support for vCLI commands differs depending on the vCenter Server and ESXi version.
You cannot run the vihostupdate command against an ESXi 5.0 or later system.
You cannot run vicfg-syslog --setserver or vicfg-syslog --setport with an ESXi 5.0 or later target.
The following table lists platform support for the different vCLI commands.
Command | ESXi 5.x and 6.x | vCenter Server 5.x and 6.x | ESXi 4.x | ESX 4.x | vCenter Server 4.x
DCLI | No | No | No | No | No
esxcli | Yes | Yes | Yes | Yes | No
resxtop | Yes (from Linux) | Yes (from Linux) | Yes (from Linux) | Yes (from Linux) | Yes (from Linux)
svmotion | No | Yes | No | No | Yes
vicfg-advcfg | Yes | Yes | Yes | Yes | Yes
vicfg-authconfig | Yes | Yes | Yes | Yes | Yes
vicfg-cfgbackup | Yes | No | Yes | No | No
vicfg-dns | Yes | Yes | Yes | Yes | Yes
vicfg-dumppart | Yes | Yes | Yes | Yes | Yes
vicfg-hostops | Yes | Yes | Yes | Yes | Yes
vicfg-ipsec | Yes | No | Yes | Yes | No
vicfg-iscsi | Yes | Yes | Yes | Yes | Yes
vicfg-module | Yes | Yes | Yes | Yes | Yes
vicfg-mpath | Yes | Yes | Yes | Yes | Yes
vicfg-nas | Yes | Yes | Yes | Yes | Yes
vicfg-nics | Yes | Yes | Yes | Yes | Yes
vicfg-ntp | Yes | Yes | Yes | Yes | Yes
vicfg-rescan | Yes | Yes | Yes | Yes | Yes
vicfg-route | Yes | Yes | Yes | Yes | Yes
vicfg-scsidevs | Yes | Yes | Yes | Yes | Yes
vicfg-snmp | Yes | No | Yes | Yes | No
vicfg-syslog | No | No for 5.0 target | Yes | No | Yes
vicfg-user | Yes | No | Yes | Yes | No
vicfg-vmknic | Yes | Yes | Yes | Yes | Yes
vicfg-volume | Yes | Yes | Yes | Yes | Yes
vicfg-vswitch | Yes | Yes | Yes | Yes | Yes
vifs | Yes | No | Yes | Yes | No
vihostupdate | Use esxcli software vib instead. | Use esxcli software vib instead. | Yes | Yes | No
vmkfstools | Yes | No | Yes | Yes | No
vmware-cmd | Yes | Yes | Yes | Yes | Yes
vicfg-mpath35 | No | No | No | No | No
vihostupdate35 | No | No | No | No | No

Commands with an esxcfg Prefix

To facilitate easy migration of shell scripts that use esxcfg- commands, the vCLI package includes a copy of each vicfg- command that uses an esxcfg prefix.
IMPORTANT You should use ESXCLI or the vCLI commands with the vicfg prefix. Commands with the esxcfg prefix are available mainly for compatibility reasons and are now obsolete. vCLI esxcfg- commands are equivalent to vicfg- commands, but not completely equivalent to the deprecated esxcfg- service console commands.
The following table lists all vCLI vicfg- commands for which a vCLI command with an esxcfg prefix is available.
Command with vicfg Prefix Command with esxcfg Prefix
vicfg-advcfg esxcfg-advcfg
vicfg-cfgbackup esxcfg-cfgbackup
vicfg-dns esxcfg-dns
vicfg-dumppart esxcfg-dumppart
vicfg-module esxcfg-module
vicfg-mpath esxcfg-mpath
vicfg-nas esxcfg-nas
vicfg-nics esxcfg-nics
vicfg-ntp esxcfg-ntp
vicfg-rescan esxcfg-rescan
vicfg-route esxcfg-route
vicfg-scsidevs esxcfg-scsidevs
vicfg-snmp esxcfg-snmp
vicfg-syslog esxcfg-syslog
vicfg-vmknic esxcfg-vmknic
vicfg-volume esxcfg-volume
vicfg-vswitch esxcfg-vswitch

ESXCLI Commands Available on Different ESXi Hosts

The available ESXCLI commands depend on the ESXi host version.
When you run an ESXCLI vCLI command, you must know the commands supported on the target host. For example, if you run commands against ESXi 5.x hosts, ESXCLI 5.x commands are supported. If you run commands against ESXi 6.x hosts, ESXCLI 6.x commands are supported.
Some commands or command outputs are determined by the host type. In addition, VMware partners might develop custom ESXCLI commands that you can run on hosts where the partner VIB has been installed.
Run esxcli --server <target> --help for a list of namespaces supported on the target. You can drill down into the namespaces for additional help.

Trust Relationship Requirement for ESXCLI Commands

Starting with vSphere 6.0, ESXCLI checks whether a trust relationship exists between the machine where you run the ESXCLI command and the ESXi host. An error results if the trust relationship does not exist.

Download and Install the vCenter Server Certificate

You can download the vCenter Server root certificate by using a Web browser and add it to the trusted certificates on the machine where you plan to run ESXCLI commands.
Procedure
1 Enter the URL of the vCenter Server system or vCenter Server Appliance into a Web browser.
2 Click the Download trusted root CA certificates link.
3 Change the extension of the downloaded file to .zip. (The file is a ZIP file of all certificates in the TRUSTED_ROOTS store.)
4 Extract the ZIP le.
A certicates folder is extracted. The folder includes les with the extension .0. .1, and so on, which are
certicates, and les with the extension .r0, r1, and so on which are CRL les associated with the certicates.
5 Add the trusted root certificates to the list of trusted roots.
The process differs depending on the platform that you are on.
What to do next
You can now run ESXCLI commands against any host that is managed by the trusted vCenter Server system without supplying additional information if you specify the vCenter Server system in the --server option and the ESXi host in the --vihost option.
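As an illustration only (not part of vCLI), the extraction and classification steps of this procedure can be sketched in Python. The file names below are hypothetical; the only assumption taken from the procedure is the .0/.1 certificate and .r0/.r1 CRL naming convention.

```python
import zipfile
from io import BytesIO

def list_zip_members(zip_bytes: bytes):
    """Return the member names of the downloaded ZIP (the certs folder)."""
    with zipfile.ZipFile(BytesIO(zip_bytes)) as zf:
        return zf.namelist()

def classify_cert_files(names):
    """Split extracted file names into certificates (.0, .1, ...) and
    CRLs (.r0, .r1, ...), matching the layout described in the procedure."""
    certs, crls = [], []
    for name in names:
        suffix = name.rsplit(".", 1)[-1]
        if suffix.isdigit():
            certs.append(name)
        elif suffix.startswith("r") and suffix[1:].isdigit():
            crls.append(name)
    return certs, crls
```

How you then add the certificates to the trusted roots remains platform-specific, as noted above.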

Using the --cacertsfile Option

Using a certicate to establish the trust relationship is the most secure option.
You can specify the certicate with the --cacertsfile parameter or the VI_CACERTFILE variable.

Using the --thumbprint Option

You can supply the thumbprint for the target ESXi host or vCenter Server system in the --thumbprint parameter or the VI_THUMBPRINT variable.
When you run a command, ESXCLI first checks whether a certificate file is available. If not, ESXCLI checks whether a thumbprint of the target server is available. If not, you receive an error of the following type.
Connect to sof-40583-srv failed. Server SHA-1 thumbprint: 5D:01:06:63:55:9D:DF:FE:38:81:6E:2C:FA:71:BC:63:82:C5:16:51 (not trusted).
You can run the command with the thumbprint to establish the trust relationship, or add the thumbprint to the VI_THUMBPRINT variable. For example, using the thumbprint of the ESXi host above, you can run the following command.
esxcli --server myESXi --username user1 --password 'my_password' --thumbprint 5D:01:06:63:55:9D:DF:FE:38:81:6E:2C:FA:71:BC:63:82:C5:16:51 storage nfs list
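If you prefer to obtain a host's SHA-1 thumbprint programmatically rather than copying it from the error message, you can compute it from the server's certificate. The sketch below is not part of vCLI; the host name is a placeholder, and it assumes the server presents its certificate on port 443.

```python
import hashlib
import ssl

def sha1_thumbprint(der_cert: bytes) -> str:
    """Return a colon-separated, uppercase SHA-1 thumbprint of a DER-encoded certificate."""
    digest = hashlib.sha1(der_cert).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def host_thumbprint(host: str, port: int = 443) -> str:
    """Fetch the server certificate (PEM), convert it to DER, and compute the thumbprint."""
    pem = ssl.get_server_certificate((host, port))
    return sha1_thumbprint(ssl.PEM_cert_to_DER_cert(pem))

# Example (requires network access to the host; host name is hypothetical):
# print(host_thumbprint("myESXi"))
```

You can then pass the resulting string to the --thumbprint option or store it in the VI_THUMBPRINT variable.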

Use the Credential Store

Your vCLI installation includes a credential store. You can establish trust for a user with the credential store.
You can manage the credential store with the credstore-admin utility application, which is located in the /Perl/apps/general directory inside the VMware vSphere CLI directory.
I Updating the credential store is a two-step process. First you add the user and password for the server, and then you add the thumbprint for the server.
Procedure
1 Add the user and password for the target ESXi host to the local credential store.
credstore_admin.pl add --server <esxi_HOSTNAME_OR_IP> --username <user> --password <pwd>
2 Add the thumbprint for the target ESXi host. This thumbprint was returned in the error when you attempted to connect to the host.
credstore_admin.pl add --server <esxi_HOSTNAME_OR_IP> --thumbprint <thumbprint>
3 If you are using a non-default credential store file, you must pass it in with the --credstore option.
If you do not use the --credstore option, the default credential store is used, and the host becomes accessible without specifying authentication information on the command line.

Using ESXCLI Output

Many ESXCLI commands generate output you might want to use in your application. You can run esxcli with the --formatter dispatcher option and send the resulting output as input to a parser.
The --formatter option supports three values (csv, xml, and keyvalue) and is used before any namespace.
The following example lists all file system information in CSV format.
esxcli --formatter=csv storage filesystem list
You can pipe the output to a le.
esxcli --formatter=keyvalue storage filesystem list > myfilesystemlist.txt
I You should always use a formaer for consistent output.

Connection Options for vCLI Host Management Commands

You can run host management commands, such as ESXCLI commands, vicfg- commands, and other commands, with several different connection options.
You can target hosts directly or target a vCenter Server system and specify the host you want to manage. If you are targeting a vCenter Server system, specify the Platform Services Controller, which includes the vCenter Single Sign-On service, for best security.
I For connections to ESXi hosts version 6.0 or later, vCLI supports both the IPv4 protocol and the IPv6 protocol. For earlier versions, vCLI supports only IPv4. In all cases, you can congure IPv6 on the target host with several of the networking commands.
See the Geing Started with vSphere Command-Line Interfaces documentation for a complete list and examples.

Connection Options for DCLI Commands

DCLI is a CLI client to the vSphere Automation SDK interface for managing VMware SDDC services. A DCLI command talks to a vSphere Automation SDK endpoint to get the vSphere Automation SDK command information, executes the command, and displays the result to the user.
You can run DCLI commands locally or from an administration server.
- Run DCLI on the Linux shell of a vCenter Server Appliance.
- Install vCLI on a supported Windows or Linux system and target a vCenter Server Windows installation or a vCenter Server Appliance. You have to provide endpoint information to successfully run commands.
DCLI commands support different connection options than the other commands in the command set.
See the Getting Started with vSphere Command-Line Interfaces documentation for a complete list and examples.

vCLI Host Management Commands and Lockdown Mode

For additional security, an administrator can place one or more hosts managed by a vCenter Server system in lockdown mode. Lockdown mode affects login privileges for the ESXi host.
See the vSphere Security document in the vSphere Documentation Center for a detailed discussion of normal lockdown mode and strict lockdown mode, and of how to enable and disable them.
vSphere Command-Line Interface Concepts and Examples
To make changes to ESXi systems in lockdown mode, you must go through a vCenter Server system that manages the ESXi system as the user vpxuser and include both the --server and --vihost parameters.
esxcli --server MyVC --vihost MyESXi storage filesystem list
The command prompts for the vCenter Server system user name and password.
The following commands cannot run against vCenter Server systems and are therefore not available in lockdown mode.
- vifs
- vicfg-user
- vicfg-cfgbackup
- vihostupdate
- vmkfstools
If you have problems running a command on an ESXi host directly, without specifying a vCenter Server target, check whether lockdown mode is enabled on that host.

Managing Hosts 2

Host management commands can stop and reboot ESXi hosts, back up configuration information, and manage host updates. You can also use a host management command to make your host join an Active Directory domain or exit from a domain.
For information on updating ESXi 5.0 hosts with the esxcli software command and on changing the host acceptance level to match the level of a VIB that you might want to use for an update, see the vSphere Upgrade documentation in the vSphere 5.0 Documentation Center.
This chapter includes the following topics:
- “Stopping, Rebooting, and Examining Hosts,” on page 21
- “Entering and Exiting Maintenance Mode,” on page 22
- “Backing Up Configuration Information with vicfg-cfgbackup,” on page 24
- “Managing VMkernel Modules,” on page 25
- “Using vicfg-authconfig for Active Directory Configuration,” on page 26
- “Updating Hosts,” on page 27

Stopping, Rebooting, and Examining Hosts

You can stop, reboot, and examine hosts with ESXCLI or with vicfg-hostops.

Stopping and Rebooting Hosts with ESXCLI

You can shut down or reboot an ESXi host by using the vSphere Web Client or vCLI commands, such as ESXCLI or vicfg-hostops.
Shuing down a managed host disconnects it from the vCenter Server system, but does not remove the host from the inventory. You can shut down a single host or all hosts in a data center or cluster. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of
<conn_options>.
To shut down a host, run esxcli system shutdown poweroff. You must specify the --reason option and supply a reason for the shutdown. A --delay option allows you to specify a delay interval, in seconds.
To reboot a host, run esxcli system shutdown reboot. You must specify the --reason option and supply a reason for the reboot. A --delay option allows you to specify a delay interval, in seconds.
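As a sketch, the two commands described above look as follows. The host name, reason text, and delay value are placeholders, and the commands are echoed rather than executed so the script is safe to run without an ESXi host.

```shell
HOST=MyESXi                      # placeholder host name
REBOOT_CMD="esxcli --server $HOST system shutdown reboot --reason Patching --delay 60"
POWEROFF_CMD="esxcli --server $HOST system shutdown poweroff --reason Maintenance"
# Preview only; remove the echo wrappers to run against a real host.
echo "$REBOOT_CMD"
echo "$POWEROFF_CMD"
```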

Stopping, Rebooting, and Examining Hosts with vicfg-hostops

You can shut down or reboot an ESXi host by using the vSphere Web Client, ESXCLI, or the vicfg-hostops vCLI command.
Shutting down a managed host disconnects it from the vCenter Server system, but does not remove the host from the inventory. You can shut down a single host or all hosts in a data center or cluster. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
- Single host - Run vicfg-hostops with --operation shutdown.
  - If the host is in maintenance mode, run the command without the --force option.
    vicfg-hostops <conn_options> --operation shutdown
  - If the host is not in maintenance mode, use --force to shut down the host and all running virtual machines.
    vicfg-hostops <conn_options> --operation shutdown --force
- All hosts in data center or cluster - To shut down all hosts in a cluster or data center, specify --cluster or --datacenter.
  vicfg-hostops <conn_options> --operation shutdown --cluster <my_cluster>
  vicfg-hostops <conn_options> --operation shutdown --datacenter <my_datacenter>
You can reboot a single host or all hosts in a data center or cluster.
- Single host - Run vicfg-hostops with --operation reboot.
  - If the host is in maintenance mode, run the command without the --force option.
    vicfg-hostops <conn_options> --operation reboot
  - If the host is not in maintenance mode, use --force to reboot the host and all running virtual machines.
    vicfg-hostops <conn_options> --operation reboot --force
- All hosts in data center or cluster - You can specify --cluster or --datacenter to reboot all hosts in a cluster or data center.
  vicfg-hostops <conn_options> --operation reboot --cluster <my_cluster>
  vicfg-hostops <conn_options> --operation reboot --datacenter <my_datacenter>
You can display information about a host by running vicfg-hostops with --operation info.
vicfg-hostops <conn_options> --operation info
The command returns the host name, manufacturer, model, processor type, CPU cores, memory capacity, and boot time. The command also returns whether vMotion is enabled and whether the host is in maintenance mode.

Entering and Exiting Maintenance Mode

You can instruct your host to enter or exit maintenance mode with ESXCLI or with vicfg-hostops.

Enter and Exit Maintenance Mode with ESXCLI

You place a host in maintenance mode to service it, for example, to install more memory. A host enters or leaves maintenance mode only as the result of a user request.
esxcli system maintenanceMode set allows you to enable or disable maintenance mode.
When you run the command, you can specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 To enter maintenance mode, run the following command.
esxcli <conn_options> system maintenanceMode set --enable true
After all virtual machines on the host have been suspended or migrated, the host enters maintenance mode.
N You cannot deploy or power on a virtual machine on hosts in maintenance mode.
2 To exit maintenance mode, run the following command.
esxcli <conn_options> system maintenanceMode set --enable false
N If you aempt to exit maintenance mode when the host is no longer in maintenance mode, an error informs you that maintenance mode is already disabled.

Enter and Exit Maintenance Mode with vicfg-hostops

You place a host in maintenance mode to service it, for example, to install more memory. A host enters or leaves maintenance mode only as the result of a user request.
vicfg-hostops suspends virtual machines by default, or powers off the virtual machines if you run vicfg-hostops --action poweroff.
N vicfg-hostops does not work with VMware DRS. Virtual machines are always suspended.
The host is in a state of Entering Maintenance Mode until all running virtual machines are suspended or migrated. When a host is entering maintenance mode, you cannot power on virtual machines on it or migrate virtual machines to it.
When you run the vicfg-hostops vCLI command, you can specify one of the options listed in “Connection
Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 To enter maintenance mode, run the following command.
vicfg-hostops <conn_options> --operation enter
2 To check whether the host is in maintenance mode or in the Entering Maintenance Mode state, run the
following command.
vicfg-hostops <conn_options> --operation info
After all virtual machines on the host have been suspended or migrated, the host enters maintenance mode. You cannot deploy or power on a virtual machine on hosts in maintenance mode.
What to do next
You can put all hosts in a cluster or data center in maintenance mode by using the --cluster or --datacenter option. Use those options only if suspending all virtual machines in that cluster or data center is acceptable.
You can later run vicfg-hostops <conn_options> --operation exit to exit maintenance mode.
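A sketch of the cluster-wide maintenance mode operations described above. MyVC and MyCluster are placeholder names, and the commands are echoed for preview instead of executed.

```shell
VCENTER=MyVC                     # placeholder vCenter Server name
CLUSTER=MyCluster                # placeholder cluster name
ENTER_CMD="vicfg-hostops --server $VCENTER --operation enter --cluster $CLUSTER"
EXIT_CMD="vicfg-hostops --server $VCENTER --operation exit --cluster $CLUSTER"
# Preview only; remove the echo wrappers to run for real.
echo "$ENTER_CMD"
echo "$EXIT_CMD"
```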

Backing Up Configuration Information with vicfg-cfgbackup

After you configure an ESXi host, you can back up the host configuration data. You should always back up your host configuration after you change the configuration or upgrade the ESXi image.
IMPORTANT The vicfg-cfgbackup command is available only for ESXi hosts. The command is not available through a vCenter Server system connection. No equivalent ESXCLI command is supported.

Backup Tasks

During a configuration backup, the serial number is backed up with the configuration.
The number is restored when you restore the configuration. The number is not preserved when you run the Recovery CD (ESXi Embedded) or perform a repair operation (ESXi Installable).
You can back up and restore configuration information as follows.
1 Back up the configuration by using the vicfg-cfgbackup command.
2 Run the Recovery CD or repair operation.
3 Restore the configuration by using the vicfg-cfgbackup command.
When you restore a configuration, you must make sure that all virtual machines on the host are stopped.

Backing Up Configuration Data

You can back up configuration data by running vicfg-cfgbackup with the -s option.
The following example backs up configuration data in a temporary location.
vicfg-cfgbackup <conn_options> -s /tmp/ESXi_181842_backup.txt
For the backup filename, include the number of the build that is running on the host that you are backing up. If you are running vCLI on vMA, the backup file is saved locally on vMA. Backup files can safely be stored locally because virtual appliances are stored in the /vmfs/volumes/<datastore> directory on the host, which is separate from the ESXi image and configuration files.
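One way to follow the naming advice above is to build the file name from the build number in a small wrapper script. The build number here is set manually for illustration; on a real host you would look it up first, for example with vmware -v in the ESXi Shell.

```shell
BUILD=181842                                  # illustrative build number
BACKUP_FILE="/tmp/ESXi_${BUILD}_backup.txt"
# Preview of the backup command; replace <conn_options> with real connection options.
echo "vicfg-cfgbackup <conn_options> -s $BACKUP_FILE"
```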

Restore Configuration Data

If you have created a backup, you can later restore ESXi configuration data.
When you restore configuration data, the number of the build running on the host must be the same as the number of the build that was running when you created the backup file. To override this requirement, include the -f (force) option.
When you run the vicfg-cfgbackup vCLI command, you can specify one of the options listed in
“Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 Power off all virtual machines that are running on the host that you want to restore.
2 Log in to a host on which vCLI is installed, or log in to vMA.
3 Run vicfg-cfgbackup with the -l flag to load the host configuration from the specified backup file.
- If you run the following command, you are prompted for confirmation.
  vicfg-cfgbackup <conn_options> -l /tmp/ESXi_181842_backup.tgz
- If you run the following command, you are not prompted for confirmation.
  vicfg-cfgbackup <conn_options> -l /tmp/ESXi_181842_backup.tgz -q
4 (Optional) To restore the host to factory settings, run vicfg-cfgbackup with the -r option.
vicfg-cfgbackup <conn_options> -r
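If the build numbers differ, the restore described in step 3 fails unless you force it. The following preview sketches the forced variant; the backup path is a placeholder and the command is echoed rather than executed.

```shell
BACKUP=/tmp/ESXi_181842_backup.tgz            # placeholder backup file
FORCE_CMD="vicfg-cfgbackup <conn_options> -l $BACKUP -f"
# Preview only; substitute real connection options before running.
echo "$FORCE_CMD"
```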

Using vicfg-cfgbackup from vMA

To back up a host configuration, you can run vicfg-cfgbackup from a vMA instance. The vMA instance can run on the host that you are backing up or restoring, also referred to as the target host, or on a remote host.
To restore a host configuration, you must run vicfg-cfgbackup from a vMA instance running on a remote host. The host must be in maintenance mode, which means all virtual machines, including vMA, must be suspended on the target host.
For example, a backup operation for two ESXi hosts, host1 and host2, with vMA deployed on both hosts works as follows.
- To back up one host's configuration, run vicfg-cfgbackup from the vMA appliance running on either host1 or host2. Use the --server option to specify the host for which you want backup information. The information is stored on vMA.
- To restore the host1 configuration, run vicfg-cfgbackup from the vMA appliance running on host2. Use the --server option to point to host1 to restore the configuration to that host.
- To restore the host2 configuration, run vicfg-cfgbackup from the vMA appliance running on host1. Use the --server option to point to host2 to restore the configuration to that host.
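The cross-host restore described above can be sketched as follows. The host name and backup path are placeholders; the command is echoed for preview and would be run from the vMA appliance on the other host.

```shell
TARGET=host1                                  # host being restored (placeholder)
RESTORE_CMD="vicfg-cfgbackup --server $TARGET -l /tmp/ESXi_${TARGET}_backup.tgz"
# Preview only; run from the vMA appliance on the remote host.
echo "$RESTORE_CMD"
```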

Managing VMkernel Modules

The esxcli system module and vicfg-module commands support setting and retrieving VMkernel module options.
The vicfg-module and esxcli system module commands are implementations of the deprecated esxcfg-module service console command. The two commands support most of the options esxcfg-module supports. vicfg-module and esxcli system module are commonly used when VMware Technical Support, a Knowledge Base article, or VMware documentation instructs you to use them.

Manage Modules with esxcli system module

Not all VMkernel modules have settable module options.
The following example illustrates how to examine and enable a VMkernel module. Specify one of the connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 List information about the module.
esxcli <conn_options> system module list --module=module_name
The system returns the name, type, value, and description of the module.
2 (Optional) List all enabled or loaded modules.
esxcli <conn_options> system module list --enabled=true
esxcli <conn_options> system module list --loaded=true
3 Enable the module.
esxcli <conn_options> system module set --module=module_name --enabled=true
4 Set the parameter.
esxcli system module parameters set --module=module_name --parameter-string="parameter_string"
5 Verify that the module is configured.
esxcli <conn_options> system module parameters list --module=module_name

Manage Modules with vicfg-module

Not all VMkernel modules have settable module options.
The following example illustrates how to examine and enable a VMkernel module. Specify one of the connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 Run vicfg-module --list to list the modules on the host.
vicfg-module <conn_options> --list
2 Run vicfg-module --set-options with connection options, the option string to be passed to a module,
and the module name.
vicfg-module <conn_options> --set-options '<parameter_name>=<value>' <module_name>
3 (Optional) To retrieve the option string that is configured to be passed to a module when the module is loaded, run vicfg-module --get-options. This verifies that the module is configured.
NOTE This string is not necessarily the option string currently in use by the module.
vicfg-module <conn_options> --get-options module_name

Using vicfg-authconfig for Active Directory Configuration

ESXi can be integrated with Active Directory. Active Directory provides authentication for all local services and for remote access through the vSphere Web Services SDK, vSphere Web Client, PowerCLI, and vSphere CLI.
You can configure Active Directory settings with the vSphere Web Client, as discussed in the vCenter Server and Host Management documentation, or use vicfg-authconfig.
vicfg-authconfig allows you to remotely configure Active Directory settings on ESXi hosts. You can list supported and active authentication mechanisms, list the current domain, and join or leave an Active Directory domain.

Prepare ESXi Hosts for Active Directory Integration

Before you run the vicfg-authconfig command on an ESXi host, you must prepare the host.
Procedure
1 Congure ESXi and Active Directory to use same NTP server.
I All hosts that join Active Directory must also be managed by an NTP server to avoid issues with clock skews and Kerberos tickets. You must make sure the ESXi system and the Active Directory server are using the same time zone.
The ESXi system’s time zone is always set to UTC.
2 Congure the ESXi system’s DNS to be in the Active Directory domain.

Set Up Active Directory to Work with ESXi

You can run vicfg-authconfig to add the ESXi host to the Active Directory domain. You can run the command directly against the host or against a vCenter Server system, specifying the host with --vihost.
Prerequisites
- Verify that you have installed the ESXi host, as explained in the vSphere Installation and Setup documentation.
- Verify that you have installed Windows Active Directory on a Windows Server that runs Windows 2000 Server, Windows Server 2003, or Windows Server 2008. See the Microsoft Web site for instructions and best practices.
- Verify that you have the appropriate Active Directory permissions and administrative privileges on the ESXi host.
- Verify that time between the ESXi system and Windows Active Directory is synchronized.
Procedure
1 Test that the Windows Active Directory Server can ping the ESXi host by using the host name.
ping <ESX_hostname>
2 Run vicfg-authconfig to add the host to the Active Directory domain.
vicfg-authconfig --server=<ESXi Server IP Address>
--username=<ESXi Server Admin Username>
--password=<ESXi Server Admin User's Password>
--authscheme AD --joindomain <AD Domain Name>
--adusername=<Active Directory Administrator User Name>
--adpassword=<Active Directory Administrator User's Password>
The system prompts for user names and passwords if you do not specify them on the command line. Passwords are not echoed to the screen.
3 Check that a Successfully Joined <Domain Name> message appears.
4 Verify the ESXi host is in the intended Windows Active Directory domain.
vicfg-authconfig --server XXX.XXX.XXX.XXX --authscheme AD -c
You are prompted for a user name and password for the ESXi system.

Updating Hosts

When you add custom drivers or patches to a host, the process is called an update.
- Update ESXi 4.0 and ESXi 4.1 hosts with the vihostupdate command, as discussed in the vSphere Command-Line Interface Installation and Reference Guide included in the vSphere 4.1 documentation set.
- Update ESXi 5.0 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 5.0 documentation set. You cannot run the vihostupdate command against ESXi 5.0 or later.
- Update ESXi 5.1 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 5.1 documentation set.
- Update ESXi 5.5 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 5.5 documentation set.
- Update ESXi 6.0 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 6.0 documentation set.
- Update ESXi 6.5 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 6.5 documentation set.
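As a sketch, an esxcli software vib update against a downloaded offline depot might look as follows. The depot path is a placeholder and the command is echoed for preview; see the vSphere Upgrade documentation for the supported workflow on your release.

```shell
DEPOT=/vmfs/volumes/datastore1/patch-depot.zip   # placeholder depot file
UPDATE_CMD="esxcli software vib update --depot=$DEPOT"
# Preview only; remove the echo to apply the update on a real host.
echo "$UPDATE_CMD"
```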

Managing Files 3

The vSphere CLI includes two commands for file manipulation. vmkfstools allows you to manipulate VMFS (Virtual Machine File System) and virtual disks. vifs supports remote interaction with files on your ESXi host.
NOTE See Chapter 4, “Managing Storage,” on page 41 for information about storage manipulation commands.
This chapter includes the following topics:
- “Introduction to Virtual Machine File Management,” on page 29
- “Managing the Virtual Machine File System with vmkfstools,” on page 30
- “Upgrading VMFS3 Volumes to VMFS5,” on page 31
- “Managing VMFS Volumes,” on page 31
- “Reclaiming Unused Storage Space,” on page 34
- “Using vifs to View and Manipulate Files on Remote ESXi Hosts,” on page 35

Introduction to Virtual Machine File Management

You can use the vSphere Web Client or vCLI commands to access different types of storage devices that your ESXi host discovers and to deploy datastores on those devices.
NOTE Datastores are logical containers, analogous to file systems, that hide specifics of each storage device and provide a uniform model for storing virtual machine files. Datastores can be used for storing ISO images, virtual machine templates, and floppy images. The vSphere Web Client uses the term datastore exclusively. In vCLI, the term datastore, as well as VMFS or NFS volume, refer to the same logical container on the physical device.
Depending on the type of storage you use, datastores can be backed by the VMFS and NFS file system formats.
- Virtual Machine File System (VMFS) - High-performance file system that is optimized for storing virtual machines. Your host can deploy a VMFS datastore on any SCSI-based local or networked storage device, including Fibre Channel and iSCSI SAN equipment. As an alternative to using the VMFS datastore, your virtual machine can have direct access to raw devices and use a mapping file (RDM) as a proxy.
  You manage VMFS and RDMs with the vSphere Web Client, or the vmkfstools command.
- Network File System (NFS) - The NFS client built into ESXi uses the NFS protocol over TCP/IP to access a designated NFS volume that is located on a NAS server. The ESXi host can mount the volume and use it for its storage needs. vSphere supports versions 3 and 4.1 of the NFS protocol. Typically, the NFS volume or directory is created by a storage administrator and is exported from the NFS server. The NFS volumes do not need to be formatted with a local file system, such as VMFS. You can mount the volumes directly and use them to store and boot virtual machines in the same way that you use VMFS datastores. The host can access a designated NFS volume located on an NFS server, mount the volume, and use it for any storage needs.
  You manage NAS storage devices from the vSphere Web Client or with the esxcli storage nfs command.
The figure below illustrates different types of storage, but it is for conceptual purposes only. It is not a recommended configuration.
Figure 3-1. Virtual Machines Accessing Different Types of Storage

Managing the Virtual Machine File System with vmkfstools

VMFS datastores primarily serve as repositories for virtual machines.
You can store multiple virtual machines on the same VMFS volume. Each virtual machine, encapsulated in a set of files, occupies a separate single directory. For the operating system inside the virtual machine, VMFS preserves the internal file system semantics.
In addition, you can use the VMFS datastores to store other files, such as virtual machine templates and ISO images. VMFS supports file and block sizes that enable virtual machines to run data-intensive applications, including databases, ERP, and CRM, in virtual machines. See the vSphere Storage documentation.
You use the vmkfstools vCLI command to create and manipulate virtual disks, file systems, logical volumes, and physical storage devices on an ESXi host. You can use vmkfstools to create and manage a virtual machine file system on a physical partition of a disk and to manipulate files, such as virtual disks, stored on VMFS-3 and NFS. You can also use vmkfstools to set up and manage raw device mappings (RDMs).
IMPORTANT The vmkfstools vCLI command supports most but not all of the options that the vmkfstools ESXi Shell command supports. See VMware Knowledge Base article 1008194.
You cannot run vmkfstools with --server pointing to a vCenter Server system.
The vSphere Storage documentation includes a complete reference to the vmkfstools command that you can use in the ESXi Shell. You can use most of the same options with the vmkfstools vCLI command. Specify one of the connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
The following options supported by the vmkfstools ESXi Shell command are not supported by the vmkfstools vCLI command.
- --breaklock -B
- --chainConsistent -e
- --eagerzero -k
- --fix -x
- --lock -L
- --migratevirtualdisk -M
- --parseimage -Y
- --punchzero -K
- --snapshotdisk -I
- --verbose -v

Upgrading VMFS3 Volumes to VMFS5

vSphere 5.0 supports VMFS5 volumes, which have improved scalability and performance.
You can upgrade from VMFS3 to VMFS5 by using the vSphere Web Client, the vmkfstools ESXi Shell command, or the esxcli storage vmfs upgrade command. You can pass the volume label or the volume UUID to the ESXCLI command.
I You cannot upgrade VMFS3 volumes to VMFS5 with the vmkfstools command included in vSphere CLI.

Managing VMFS Volumes

Dierent commands are available for listing, mounting, and unmounting VMFS volumes and for listing, mounting, and unmounting VMFS snapshot volumes.
- Managing VMFS volumes
  esxcli storage filesystem list shows all volumes, mounted and unmounted, that are resolved, that is, that are not snapshot volumes.
  esxcli storage filesystem unmount unmounts a currently mounted file system. Use this command for snapshot volumes or resolved volumes.
- Managing snapshot volumes
  esxcli storage vmfs snapshot commands can be used for listing, mounting, and resignaturing snapshot volumes. See “Mounting Datastores with Existing Signatures,” on page 32 and “Resignaturing VMFS Copies,” on page 33.

Managing Duplicate VMFS Datastores

In some cases VMFS datastores can have duplicate UUIDs.
Each VMFS datastore created in a LUN has a unique UUID that is stored in the file system superblock. When the LUN is replicated or when a snapshot is made, the resulting LUN copy is identical, byte-for-byte, to the original LUN. As a result, if the original LUN contains a VMFS datastore with UUID X, the LUN copy appears to contain an identical VMFS datastore, or a VMFS datastore copy, with the same UUID X.
ESXi hosts can determine whether a LUN contains the VMFS datastore copy, and either mount the datastore copy with its original UUID or change the UUID to resignature the datastore.
When a LUN contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature. The vSphere Storage documentation discusses volume resignaturing in detail.

Mounting Datastores with Existing Signatures

You can mount a VMFS datastore copy without changing its signature if the original is not mounted.
For example, you can maintain synchronized copies of virtual machines at a secondary site as part of a disaster recovery plan. In the event of a disaster at the primary site, you can mount the datastore copy and power on the virtual machines at the secondary site.
I You can mount a VMFS datastore only if it does not conict with an already mounted VMFS datastore that has the same UUID.
When you mount the VMFS datastore, ESXi allows both read and write operations to the datastore that resides on the LUN copy. The LUN copy must be writable. The datastore mounts are persistent and valid across system reboots.
You can mount a datastore with ESXCLI or with vicfg-volume. See “Mount a Datastore with ESXCLI,” on page 32 or “Mount a Datastore with vicfg-volume,” on page 33.
Mount a Datastore with ESXCLI
The esxcli storage filesystem commands support mounting and unmounting volumes. You can also specify whether to persist the mounted volumes across reboots by using the --no-persist option.
Use the esxcli storage filesystem command to list mounted volumes, mount new volumes, and unmount a volume. Specify one of the connection options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
Procedure
1 List all volumes that have been detected as snapshots.
esxcli <conn_options> storage filesystem list
2 Run esxcli storage filesystem mount with the volume label or volume UUID.
esxcli <conn_options> storage filesystem mount --volume-label=<label>|--volume-uuid=<VMFS-UUID>
NOTE This command fails if the original copy is online.
What to do next
You can later run esxcli storage filesystem unmount to unmount the snapshot volume.
esxcli <conn_options> storage filesystem unmount --volume-label=<label>|--volume-uuid=<VMFS-UUID>
Mount a Datastore with vicfg-volume
The vicfg-volume command supports mounting and unmounting volumes.
Use the vicfg-volume command to list mounted volumes, mount new volumes, and unmount a volume. Specify one of the connection options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
Procedure
1 List all volumes that have been detected as snapshots or replicas.
vicfg-volume <conn_options> --list
2 Run vicfg-volume --persistent-mount with the VMFS-UUID or label as an argument to mount a
volume.
vicfg-volume <conn_options> --persistent-mount <VMFS-UUID|label>
N This command fails if the original copy is online.
What to do next
You can later run vicfg-volume --unmount to unmount the snapshot or replica volume.
vicfg-volume <conn_options> --unmount <VMFS-UUID|label>
The vicfg-volume command supports resignaturing a snapshot volume and mounting and unmounting the volume. You can also make the mounted volume persistent across reboots and query a list of snapshot volumes and original volumes.

Resignaturing VMFS Copies

You can use datastore resignaturing to retain the data stored on the VMFS datastore copy.
When resignaturing a VMFS copy, the ESXi host assigns a new UUID and a new label to the copy, and mounts the copy as a datastore distinct from the original. Because ESXi prevents you from resignaturing the mounted datastore, unmount the datastore before resignaturing.
The default format of the new label assigned to the datastore is snap-<snapID>-<oldLabel>, where <snapID> is an integer and <oldLabel> is the label of the original datastore.
When you perform datastore resignaturing, consider the following points.
- Datastore resignaturing is irreversible.
- The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
- A spanned datastore can be resignatured only if all its extents are online.
- The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
- You can mount the new VMFS datastore without a risk of its UUID conflicting with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.
You can resignature a VMFS copy with ESXCLI or with vicfg-volume. See “Resignature a VMFS Copy with
ESXCLI,” on page 34 or “Resignature a VMFS Copy with vicfg-volume,” on page 34.
Resignature a VMFS Copy with ESXCLI
The esxcli storage vmfs snapshot commands support resignaturing a snapshot volume.
Specify one of the connection options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
Procedure
1 List unresolved snapshots or replica volumes.
esxcli <conn_options> storage vmfs snapshot list
2 (Optional) Unmount the copy.
esxcli <conn_options> storage filesystem unmount
3 Run the resignature command.
esxcli <conn_options> storage vmfs snapshot resignature --volume-label=<label>|--volume-uuid=<id>
The command returns to the prompt or signals an error.
What to do next
After resignaturing, you might have to perform the following operations.
- If the resignatured datastore contains virtual machines, update references to the original VMFS datastore in the virtual machine files, including .vmx, .vmdk, .vmsd, and .vmsn.
- To power on virtual machines, register them with the vCenter Server system.
Resignature a VMFS Copy with vicfg-volume
You can use vicfg-volume to mount, unmount, and resignature VMFS volumes.
Prerequisites
Verify that the VMFS copy you want to resignature is not mounted.
Procedure
Run vicfg-volume with the --resignature option.
vicfg-volume <conn_options> --resignature <VMFS-UUID|label>
The command returns to the prompt or signals an error.

Reclaiming Unused Storage Space

When VMFS datastores reside on thin-provisioned LUNs, you can use ESXCLI commands to reclaim the unused logical blocks of a thin-provisioned LUN formatted with VMFS.
When you run the commands, you must specify either the volume label (--volume-label) or the volume ID (--volume-uuid), but you cannot specify both.
In each iteration, the command issues unmap commands to the number of file system blocks that are specified by the optional reclaim-unit argument, which defaults to 200. For newly created VMFS-5 file systems, the file system block size is always 1 MB. For VMFS-3 file systems, or VMFS-5 file systems that were upgraded from VMFS-3, the file system block size could be one of 1, 2, 4, or 8 MB.
Chapter 3 Managing Files
The following examples illustrate how to use the command.
# esxcli storage vmfs unmap --volume-label datastore1 --reclaim-unit 100
# esxcli storage vmfs unmap -l datastore1 -n 100
# esxcli storage vmfs unmap --volume-uuid 515615fb-1e65c01d-b40f-001d096dbf97 --reclaim-unit 500
# esxcli storage vmfs unmap -u 515615fb-1e65c01d-b40f-001d096dbf97 -n 500
# esxcli storage vmfs unmap -l datastore1
# esxcli storage vmfs unmap -u 515615fb-1e65c01d-b40f-001d096dbf97
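The reclaim-unit arithmetic described above can be sketched in Python. The helper name is hypothetical; the block sizes and the default of 200 blocks per iteration come from the text.

```python
MB = 1024 * 1024

def unmap_iteration_bytes(block_size_mb: int, reclaim_unit: int = 200) -> int:
    """Bytes covered by one iteration of esxcli storage vmfs unmap:
    reclaim-unit file system blocks per iteration (default 200)."""
    return reclaim_unit * block_size_mb * MB

# Newly created VMFS-5 always uses 1 MB blocks: 200 MB per default iteration.
print(unmap_iteration_bytes(1) // MB)
# An upgraded volume with 8 MB blocks and --reclaim-unit 100: 800 MB per iteration.
print(unmap_iteration_bytes(8, 100) // MB)
```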

Using vifs to View and Manipulate Files on Remote ESXi Hosts

You can use the vifs utility for datastore file management.
CAUTION If you manipulate files directly, your vSphere setup might end up in an inconsistent state. Use the vSphere Web Client or one of the other vCLI commands to manipulate virtual machine configuration files and virtual disks.
The vifs command performs common operations such as copy, remove, get, and put on ESXi files and directories. The command is supported against ESXi hosts but not against vCenter Server systems.
Some similarities between vifs and DOS or UNIX/Linux file system management utilities exist, but there are many differences. For example, vifs does not support wildcard characters or current directories and, as a result, relative pathnames. You should use vifs only as documented.
Instead of using the vifs command, you can browse datastore contents and host files by using a Web browser. Connect to the following location.
http://ESX_host_IP_Address/host
http://ESX_host_IP_Address/folder
You can view data center and datastore directories from this root URL. The following examples demonstrate the syntax that you can use.
http://<ESXi_addr>/folder?dcPath=ha-datacenter
http://<ESXi_host_name>/folder?dcPath=ha-datacenter
The ESXi host prompts for a user name and password.
The vifs command supports different operations for the following groups of files and directories. Different operations are available for each group, and you specify locations with a different syntax. The behavior differs between vSphere 4.x and vSphere 5.0.
Host Host conguration les. You must
Temp
Datastores Datastore les and directories. You have two choices for specifying a
vSphere 4.x vSphere 5.0
Host conguration les. You must
specify the le’s unique name
identier.
Specify host locations by using the /host/<path> syntax.
The /tmp directory and les in that directory.
Specify temp locations by using the /tmp/dir/subdir syntax.
datastore.
n
Use datastore prex style '[ds_name] relative_path' as demonstrated in the following example.
'[myStorage1] testvms/VM1/VM1.vmx'(Linux) or "[myStorage1] testvms/VM1/VM1.vmx" (Windows)
n
Use URL style /folder/dir/subdir/file?dsName=<name> as demonstrated in the following example.
'/folder/testvms/VM1/VM1.vmx?dsName=myStorage1' (Linux) "/folder/testvms/VM1/VM1.vmx?dsName=myStorage1" (Windows)
The two example paths refer to a virtual machine conguration le for the VM1 virtual machine in the testvms/VM1 directory of the myStorage1 datastore.
specify the le’s unique name
identier.
Specify host locations by using the /host/<path> syntax.
You cannot list subdirectories of /host.
Not supported.
To avoid problems with directory names that use special characters or spaces, enclose the path in quotes for both operating systems.
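The two datastore path styles encode the same information, so converting between them is mechanical. A minimal Python sketch, assuming well-formed input; the function name is hypothetical, the two formats are as documented above.

```python
import re

def bracket_to_url(path: str) -> str:
    """Convert datastore prefix style '[ds_name] relative_path'
    to URL style /folder/relative_path?dsName=ds_name."""
    m = re.fullmatch(r"\[([^\]]+)\] (.+)", path)
    if m is None:
        raise ValueError(f"not a datastore prefix path: {path!r}")
    ds_name, rel = m.groups()
    return f"/folder/{rel}?dsName={ds_name}"

print(bracket_to_url("[myStorage1] testvms/VM1/VM1.vmx"))
# /folder/testvms/VM1/VM1.vmx?dsName=myStorage1
```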
When you run vifs, you can specify the operation name and argument and one of the standard connection options. Use aliases, symbolic links, or wrapper scripts to simplify the invocation syntax.
I The concepts of working directory and last directory or le operated on are not supported with
vifs.

vifs Options

vifs command-specific options allow you to retrieve and upload files from the remote host and perform a number of other operations.
All vifs options work on datastore files or directories. Some options also work on host files and files in the temp directory. You must also specify connection options.
--copy | -c <source> <target>
   Description: Copies a file in a datastore to another location in a datastore. The <source> must be a remote source path, the <target> a remote target path or directory. The --force option replaces existing destination files.
   Target: Datastore, Temp
   Syntax: copy src_file_path dst_directory_path [--force]
           copy src_file_path dst_file_path [--force]
--dir | -D <remote_dir>
   Description: Lists the contents of a datastore directory.
   Target: Datastore, Temp
   Syntax: dir datastore_directory_path
--force | -F
   Description: Overwrites the destination file. Used with --move and --copy.
   Target: Datastore, Temp
   Syntax: copy src_file_path dst_file_path [--force]
--get | -g <remote_path> <local_path>
   Description: Downloads a file from the ESXi host to the machine on which you run vCLI. This operation uses HTTP GET.
   Target: Datastore, Host
   Syntax: get src_dstore_file_path dst_local_file_path
           get src_dstore_dir_path dst_local_file_path
--listdc | -C
   Description: Lists the data center paths available on an ESXi system.
   Target: Datastore, Host
   Syntax: vifs --listdc
--listds | -S
   Description: Lists the datastore names on the ESXi system. When multiple data centers are available, use the --dc (-Z) argument to specify the name of the data center from which you want to list the datastore.
   Target: Datastore, Host
   Syntax: vifs --listds
--mkdir | -M <remote_dir>
   Description: Creates a directory in a datastore. This operation fails if the parent directory of dst_datastore_file_path does not exist.
   Target: Datastore, Temp
   Syntax: mkdir dst_directory_path
--move | -m <source> <target>
   Description: Moves a file in a datastore to another location in a datastore. The <source> must be a remote source path, the <target> a remote target path or directory. The --force option replaces existing destination files.
   Target: Datastore, Temp
   Syntax: move src_file_path dst_directory_path [--force]
           move src_file_path dst_file_path [--force]
--put | -p <local_path> <remote_path>
   Description: Uploads a file from the machine on which you run vCLI to the ESXi host. This operation uses HTTP PUT. This command can replace existing host files but cannot create new files.
   Target: Datastore, Host, Temp
   Syntax: put src_local_file_path dst_file_path
           put src_local_file_path dst_directory_path
--rm | -r <remote_path>
   Description: Deletes a datastore file.
   Target: Datastore, Temp
   Syntax: rm dst_file_path
--rmdir | -R <remote_dir>
   Description: Deletes a datastore directory. This operation fails if the directory is not empty.
   Target: Datastore, Temp
   Syntax: rmdir dst_directory_path

vifs Examples

You can use vifs to interact with the remote ESXi or vCenter Server system in a variety of ways.
Specify one of the connection options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
N The examples illustrate use on a Linux system. You must use double quotes instead of single quotes when on a Windows system.
Listing Remote Information
- List all data centers on a vCenter Server system with --listdc, using --server to point to the vCenter Server system.
  vifs --server <my_vc> --username administrator --password <pswd> --listdc
- List all datastores on a vCenter Server system with --listds.
  vifs --server <my_vc> --username administrator --password <pswd> --dc kw-dev --listds
- List all datastores on an ESXi host with --listds.
  vifs --server <my_ESXi> --username root --password <pswd> --listds
The command lists the names of all datastores on the specified server.
You can use each name that has been returned to refer to datastore paths by using square bracket notation.
'[my_datastore] dir/subdir/file'
- List the content of a directory in a datastore.
  vifs --server <my_ESXi> --username root --password <pswd> --dir '[Storage1]'
  vifs --server <my_ESXi> --username root --password <pswd> --dir '[Storage1] WindowsXP'
The command lists the directory content. In this example, the command lists the contents of a virtual machine directory.
Content Listing
_________________
vmware-37.log
vmware-38.log
...
vmware.log
...
winxpPro-sp2.vmdk
winxpPro-sp2.vmx
winxpPro-sp2.vmxf
...
- List the contents of one of the datastores.
  vifs <conn_options> --dir '[osdc-cx700-02]'
The command lists the complete contents of the datastore.
Working with Directories and Files on the Remote Server
- Create a new directory in a datastore with --mkdir <remote_dir>.
  vifs --server <my_ESXi> --username root --password <pswd> --mkdir '[Storage1] test'
- Remove a directory with --rmdir <remote_dir>.
  vifs --server <my_ESXi> --username root --password <pswd> --rmdir '[Storage1] test'
- Forcibly remove a directory with --rmdir --force <remote_dir>.
  vifs --server <my_ESXi> --username root --password <pswd> --rmdir '[Storage1] test2' --force
- Update a file on the remote server with --put <local_path> <remote_path>.
  vifs --server <my_ESXi> --username root --password <pswd> --put /tmp/testfile '[Storage1] test/testfile'
- Retrieve a file from the remote server with --get <remote_path> <local_path>|<local_dir>. The command overwrites the local file if it exists. If you do not specify a file name, the file name of the remote file is used.
  vifs --server <my_ESXi> --username root --password <pswd> --get '[Storage1] test/testfile' /tmp/tfile
  vifs --server <my_ESXi> --username root --password <pswd> --get '[Storage1] test/testfile' /tmp
- Delete a file on the remote server with --rm <remote_path>.
  vifs --server <my_ESXi> --username root --password <pswd> --rm '[Storage1] test2/testfile'
- Forcibly remove a file on the remote server with --rm <remote_path> --force.
  vifs --server <my_ESXi> --username root --password <pswd> --rm '[Storage1] test2/testfile2' --force
- Move a file from one location on the remote server to another location with --move <remote_source_path> <remote_target_path>. If you specify a file name, the file is moved and renamed at the same time.
  vifs --server <my_ESXi> --username root --password <pswd> --move '[Storage1] test/tfile' '[Storage1] newfile'
  If the target file already exists on the remote server, the command fails unless you use --force.
  vifs --server <my_ESXi> --username root --password <pswd> --move '[Storage1] test/tfile2' '[Storage1] test2/tfile' --force
- Create a copy of a file on the remote server at a different location on the remote server.
  vifs --server <my_ESXi> --username root --password <pswd> --copy '[Storage1] test/tfile' '[Storage1] test/tfile2'
  If the target file already exists on the remote server, the command fails unless you use --force.
  vifs --server <my_ESXi> --username root --password <pswd> --copy '[Storage1] test/tfile' '[Storage1] test/tfile2' --force
Manage Files and Directories on the Remote ESXi System
The following example scenario illustrates other uses of vifs.
1 Create a directory in the datastore.
   vifs <conn_options> --mkdir '[osdc-cx700-03] vcli_test'
   NOTE You must specify the precise path. There is no concept of a relative path.
2 Place a file that is on the system from which you are running the commands into the newly created directory.
   vifs <conn_options> --put /tmp/test_doc '[osdc-cx700-03] vcli_test/test_doc'
3 Move a file into a virtual machine directory.
   vifs <conn_options> --move '[osdc-cx700-03] vcli_test/test_doc' '[osdc-cx700-03] winxpPro-sp2/test_doc'
   A message indicates success or failure.
4 Retrieve one of the files from the remote ESXi system.
   vifs <conn_options> --get '[osdc-cx700-03] winxpPro-sp2/vmware.log' ~user1/vmware.log
   The command retrieves a log file for analysis.
5 Clean up by removing the file and directory you created earlier.
   vifs <conn_options> --rm '[osdc-cx700-03] vcli_test/test_doc'
   vifs <conn_options> --rmdir '[osdc-cx700-03] vcli_test'

Managing Storage 4

A virtual machine uses a virtual disk to store its operating system, program files, and other data associated with its activities. A virtual disk is a large physical file, or a set of files, that can be copied, moved, archived, and backed up.
To store virtual disk files and manipulate the files, a host requires dedicated storage space. ESXi storage is storage space on a variety of physical storage systems, local or networked, that a host uses to store virtual machine disks.
Chapter 5, “Managing iSCSI Storage,” on page 69 discusses iSCSI storage management. Chapter 6, “Managing Third-Party Storage Arrays,” on page 101 explains how to manage the Pluggable Storage Architecture, including Path Selection Plugin (PSP) and Storage Array Type Plug-in (SATP) configuration.
For information on masking and unmasking paths with ESXCLI, see the vSphere Storage documentation.
This chapter includes the following topics:
- “Introduction to Storage,” on page 42
- “Examining LUNs,” on page 45
- “Detach a Device and Remove a LUN,” on page 48
- “Reattach a Device,” on page 49
- “Working with Permanent Device Loss,” on page 49
- “Managing Paths,” on page 50
- “Managing Path Policies,” on page 54
- “Scheduling Queues for Virtual Machine I/O,” on page 57
- “Managing NFS/NAS Datastores,” on page 57
- “Monitor and Manage FibreChannel SAN Storage,” on page 59
- “Monitoring and Managing Virtual SAN Storage,” on page 60
- “Monitoring vSphere Flash Read Cache,” on page 62
- “Monitoring and Managing Virtual Volumes,” on page 62
- “Migrating Virtual Machines with svmotion,” on page 63
- “Configuring FCoE Adapters,” on page 65
- “Scanning Storage Adapters,” on page 66
- “Retrieving SMART Information,” on page 66

Introduction to Storage

Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays are widely used storage technologies supported by VMware vSphere to meet different data center storage needs.
The storage arrays are connected to and shared between groups of servers through storage area networks. This arrangement allows aggregation of the storage resources and provides more flexibility in provisioning them to virtual machines.
Figure 4-1. vSphere Data Center Physical Topology

How Virtual Machines Access Storage

A virtual disk hides the physical storage layer from the virtual machine's operating system.
Regardless of the type of storage device that your host uses, the virtual disk always appears to the virtual machine as a mounted SCSI device. As a result, you can run operating systems that are not certified for specific storage equipment, such as SAN, in the virtual machine.
When a virtual machine communicates with its virtual disk stored on a datastore, it issues SCSI commands. Because datastores can exist on various types of physical storage, these commands are encapsulated into other forms, depending on the protocol that the ESXi host uses to connect to a storage device.
Figure 4-2 depicts five virtual machines that use different types of storage to illustrate the differences between each type.
Figure 4-2. Virtual Machines Accessing Different Types of Storage
Chapter 4 Managing Storage
You can use vCLI commands to manage the virtual machine file system and storage devices.
- VMFS - Use vmkfstools to create, modify, and manage VMFS virtual disks and raw device mappings. See “Managing the Virtual Machine File System with vmkfstools,” on page 30 for an introduction and the vSphere Storage documentation for a detailed reference.
- Datastores - Several commands allow you to manage datastores and are useful for multiple protocols.
  - LUNs - Use esxcli storage core or vicfg-scsidevs commands to display available LUNs and mappings for each VMFS volume to its corresponding partition. See “Examining LUNs,” on page 45.
  - Path management - Use esxcli storage core or vicfg-mpath commands to list information about Fibre Channel or iSCSI LUNs and to change a path's state. See “Managing Paths,” on page 50. Use the ESXCLI command to view and modify path policies. See “Managing Path Policies,” on page 54.
  - Rescan - Use esxcli storage core or vicfg-rescan adapter rescan to perform a rescan operation each time you reconfigure your storage setup. See “Scanning Storage Adapters,” on page 66.
- Storage devices - Several commands manage only specific storage devices.
  - NFS storage - Use esxcli storage nfs or vicfg-nas to manage NAS storage devices. See “Managing NFS/NAS Datastores,” on page 57.
  - iSCSI storage - Use esxcli iscsi or vicfg-iscsi to manage both hardware and software iSCSI. See Chapter 5, “Managing iSCSI Storage,” on page 69.
- Software-defined storage - vSphere supports several types of software-defined storage.
  - Virtual SAN storage - Use commands in the esxcli vsan namespace to manage Virtual SAN. See “Monitoring and Managing Virtual SAN Storage,” on page 60.
  - Virtual Flash storage - Use commands in the esxcli storage vflash namespace to manage VMware vSphere Flash Read Cache.
  - Virtual volumes - Virtual volumes offer a different layer of abstraction than datastores. As a result, finer-grained management is possible. Use commands in the esxcli storage vvol namespace.

Datastores

ESXi hosts use storage space on a variety of physical storage systems, including internal and external devices and networked storage.
A host can discover storage devices to which it has access and format them as datastores. Each datastore is a special logical container, analogous to a file system on a logical volume, where the host places virtual disk files and other virtual machine files. Datastores hide specifics of each storage product and provide a uniform model for storing virtual machine files.
Depending on the type of storage you use, datastores can be backed by the following file system formats.
- Virtual Machine File System (VMFS) - High-performance file system optimized for storing virtual machines. Your host can deploy a VMFS datastore on any SCSI-based local or networked storage device, including Fibre Channel and iSCSI SAN equipment.
  As an alternative to using the VMFS datastore, your virtual machine can have direct access to raw devices and use a mapping file (RDM) as a proxy. See “Managing the Virtual Machine File System with vmkfstools,” on page 30.
- Network File System (NFS) - File system on a NAS storage device. ESXi supports NFS version 3 over TCP/IP. The host can access a designated NFS volume located on an NFS server, mount the volume, and use it for any storage needs.

Storage Device Naming

Each storage device, or LUN, is identified by several device identifier names.
Device Identifiers
Depending on the type of storage, the ESXi host uses different algorithms and conventions to generate an identifier for each storage device.
- SCSI INQUIRY identifiers - The host uses the SCSI INQUIRY command to query a storage device and uses the resulting data, in particular the Page 83 information, to generate a unique identifier. SCSI INQUIRY device identifiers are unique across all hosts, persistent, and have one of the following formats.
  - naa.<number>
  - t10.<number>
  - eui.<number>
  These formats follow the T10 committee standards. See the SCSI-3 documentation on the T10 committee Web site for information on Page 83.
- Path-based identifier - If the device does not provide the information on Page 83 of the T10 committee SCSI-3 documentation, the host generates an mpx.<path> name, where <path> represents the first path to the device, for example, mpx.vmhba1:C0:T1:L3. This identifier can be used in the same way as the SCSI INQUIRY identifiers.
  The mpx. identifier is created for local devices on the assumption that their path names are unique. However, this identifier is neither unique nor persistent and could change after every boot.
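The identifier families above can be distinguished mechanically by their prefix. A short Python sketch, assuming the prefixes listed in this section are the only ones of interest; the function name and return strings are hypothetical.

```python
def identifier_type(uid: str) -> str:
    """Classify an ESXi storage device identifier by its prefix."""
    if uid.startswith(("naa.", "t10.", "eui.")):
        return "SCSI INQUIRY (unique, persistent)"
    if uid.startswith("mpx."):
        return "path-based (not guaranteed unique or persistent)"
    if uid.startswith("vml."):
        return "legacy VML name"
    return "unknown"

print(identifier_type("naa.5000c50037b3967e"))
# SCSI INQUIRY (unique, persistent)
print(identifier_type("mpx.vmhba1:C0:T1:L3"))
# path-based (not guaranteed unique or persistent)
```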
Typically, the path to the device has the following format.
vmhba<adapter>:C<channel>:T<target>:L<LUN>
- vmhba<adapter> is the name of the storage adapter. The name refers to the physical adapter on the host, not the SCSI controller used by the virtual machines.
- C<channel> is the storage channel number. Software iSCSI adapters and dependent hardware adapters use the channel number to show multiple paths to the same target.
- T<target> is the target number. Target numbering is determined by the host and might change if the mappings of targets that are visible to the host change. Targets that are shared by different hosts might not have the same target number.
- L<LUN> is the LUN number that shows the position of the LUN within the target. The number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
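The runtime path format above can be unpacked with a small Python sketch. The function name and result shape are hypothetical; the vmhba<adapter>:C<channel>:T<target>:L<LUN> format itself is as documented.

```python
import re

def parse_runtime_path(path: str) -> dict:
    """Split vmhba<adapter>:C<channel>:T<target>:L<LUN> into its components."""
    m = re.fullmatch(r"vmhba(\d+):C(\d+):T(\d+):L(\d+)", path)
    if m is None:
        raise ValueError(f"not a runtime path: {path!r}")
    adapter, channel, target, lun = map(int, m.groups())
    return {"adapter": adapter, "channel": channel, "target": target, "lun": lun}

print(parse_runtime_path("vmhba1:C0:T1:L3"))
# {'adapter': 1, 'channel': 0, 'target': 1, 'lun': 3}
```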
Legacy Identifiers
In addition to the SCSI INQUIRY or mpx. identifiers, ESXi generates an alternative legacy name, called the VML name, for each device. Use the device UID instead.

Examining LUNs

A LUN (Logical Unit Number) is an identifier for a disk volume in a storage array target.

Target and Device Representation

In the ESXi context, the term target identifies a single storage unit that a host can access. The terms device and LUN describe a logical volume that represents storage space on a target.
The terms device and LUN mean a SCSI volume presented to the host from a storage target.
Different storage vendors present their storage systems to ESXi hosts in different ways. Some vendors present a single target with multiple LUNs on it. Other vendors, especially iSCSI vendors, present multiple targets with one LUN each.
Figure 4-3. Target and LUN Representations
In Figure 4-3, three LUNs are available in each configuration. On the left, the host sees one target, but that target has three LUNs that can be used. Each LUN represents an individual storage volume. On the right, the host sees three different targets, each having one LUN.

Examining LUNs with esxcli storage core

You can use esxcli storage core to display information about available LUNs on ESXi 5.0.
You can run one of the following commands to examine LUNs. Specify one of the connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
- List all logical devices known on this system with detailed information.
  esxcli <conn_options> storage core device list
  The command lists device information for all logical devices on this system. The information includes the name (UUID), device type, display name, and multipathing plugin. Specify the --device option to only list information about a specific device. See “Storage Device Naming,” on page 44 for background information.
naa.5000c50037b3967e
Display Name: <name> (naa.5000c50037b3967e)
Has Settable Display Name: true
Size: 953869
Device Type: Direct-Access
...
naa.500000e014e7a4e0
Display Name: <name> (naa.500000e014e7a4e0)
Has Settable Display Name: true
Size: 70007
Device Type: Direct-Access
...
mpx.vmhba0:C0:T0:L0
Display Name: Local <name> CD-ROM (mpx.vmhba0:C0:T0:L0)
Has Settable Display Name: false
Size: 0
Device Type: CD-ROM
- List a specific logical device with its detailed information.
  esxcli <conn_options> storage core device list -d mpx.vmhba32:C0:T1:L0
- List all device unique identifiers.
  esxcli <conn_options> storage core device list
  The command lists the primary UID for each device, such as naa.xxx or other primary name, and any other UIDs for each UID (VML name). You can specify --device to only list information for a specific device.
- Print mappings for VMFS volumes to the corresponding partition, path to that partition, VMFS UUID, extent number, and volume names.
  esxcli <conn_option> storage filesystem list
- Print HBA devices with identifying information.
  esxcli <conn_options> storage core adapter list
  The return value includes adapter and UID information.
- Print a mapping between HBAs and the devices it provides paths to.
  esxcli <conn_options> storage core path list
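The device list output shown earlier in this section is line-oriented: an unindented line starts a device record and indented "Key: value" lines describe it. A Python sketch of a parser for that shape; the exact field set varies by release, and this only assumes the layout shown above.

```python
def parse_device_list(text: str) -> dict:
    """Parse 'esxcli storage core device list' style output into
    {device_uid: {field: value}}, keyed on indentation."""
    devices, current = {}, None
    for line in text.splitlines():
        if not line.strip() or line.strip() == "...":
            continue                       # skip blanks and elided lines
        if not line.startswith(" "):       # unindented line: new device record
            current = line.strip()
            devices[current] = {}
        elif current is not None and ":" in line:
            key, _, value = line.strip().partition(":")
            devices[current][key.strip()] = value.strip()
    return devices

sample = """naa.5000c50037b3967e
   Size: 953869
   Device Type: Direct-Access
mpx.vmhba0:C0:T0:L0
   Size: 0
   Device Type: CD-ROM"""
print(parse_device_list(sample)["mpx.vmhba0:C0:T0:L0"]["Device Type"])  # CD-ROM
```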

Examining LUNs with vicfg-scsidevs

You can use vicfg-scsidevs to display information about available LUNs on ESXi 4.x hosts.
I You can run vicfg-scsidevs --query and vicfg-scsidevs --vmfs against ESXi version 3.5. The other options are supported only against ESXi version 4.0 and later.
You can run one of the following commands to examine LUNs. Specify one of the connection options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
- List all logical devices known on this system with detailed information.
  vicfg-scsidevs <conn_options> --list
  The command lists device information for all logical devices on this system. The information includes the name (UUID), device type, display name, and multipathing plugin. Specify the --device option to only list information about a specific device. The following example shows output for two devices. The actual listing might include multiple devices, and the precise format differs between releases.
mpx.vmhba2:C0:T1:L0
Device Type: cdrom
Size: 0 MB
Display Name: Local HL-DT-ST (mpx.vmhba2:C0:T1:L0)
Plugin: NMP
Console Device: /vmfs/devices/cdrom/mpx.vmhba2:C0:T1:L0
Devfs Path: /vmfs/devices/cdrom/mpx.vmhba2:C0:T1:L0
Vendor: SONY Model: DVD-ROM GDRXX8XX Revis: 3.00
SCSI Level: 5 Is Pseudo: Status:
Is RDM Capable: Is Removable:
Other Names:
vml.000N000000XXXdXXXXXXXXaXXXaXX
VAAI Status: nnnn
naa.60060...
Device Type: disk
Size: 614400 MB
Display Name: DGC Fibre Channel Disk (naa.60060...)
...
- List all logical devices with abbreviated information.
  vicfg-scsidevs <conn_options> --compact-list
  The information includes the device ID, device type, size, plugin, and device display name.
- List all device unique identifiers.
  vicfg-scsidevs <conn_options> --uids
  The command lists the primary UID for each device, such as naa.xxx or other primary name, and any other UIDs for each UID (VML name). You can specify --device to only list information for a specific device.
- List a specific logical device with its detailed information.
  vicfg-scsidevs <conn_options> -l -d mpx.vmhba32:C0:T1:L0
- Print mappings for VMFS volumes to the corresponding partition, path to that partition, VMFS UUID, extent number, and volume names.
  vicfg-scsidevs <conn_options> --vmfs
- Print HBA devices with identifying information.
  vicfg-scsidevs <conn_options> --hbas
  The return value includes the adapter ID, driver ID, adapter UID, PCI, vendor, and model.
- Print a mapping between HBAs and the devices it provides paths to.
  vicfg-scsidevs <conn_options> --hba-device-list

Detach a Device and Remove a LUN

Before you can remove a LUN, you must detach the corresponding device by using the vSphere Web Client, or the esxcli storage core device set command.
Detaching a device brings a device offline. Detaching a device does not impact path states. If the LUN is still visible, the path state is not set to dead.
Prerequisites
- Make sure you are familiar with virtual machine migration. See the vCenter Server and Host Management documentation.
- Make sure you are familiar with datastore mounting and unmounting. See “Mount a Datastore with ESXCLI,” on page 32.
Procedure
1 Migrate virtual machines from the device you plan to detach.
2 Unmount the datastore deployed on the device.
If the unmount fails, ESXCLI returns an error. If you ignore that error, you will get an error when you attempt to detach a device with a VMFS partition still in use.
3 If the unmount failed, check whether the device is in use.
esxcli storage core device world list -d <device>
If a VMFS volume is using the device indirectly, the world name includes the string idle0. If a virtual machine uses the device as an RDM, the virtual machine process name is displayed. If any other process is using the raw device, the information is displayed.
4 Detach the storage device.
esxcli storage core device set -d naa.xxx... --state=off
Detach is persistent across reboots and device unregistration. Any device that is detached remains detached until a manual attach operation. Rescan does not bring persistently detached devices back online. A persistently detached device comes back in the off state.
ESXi maintains the persistent information about the device's offline state even if the device is unregistered. You can remove the device information by running esxcli storage core device detached remove -d naa.12.
5 (Optional) To troubleshoot the detach operation, list all devices that were detached manually.
esxcli storage core device detached list
6 Perform a rescan.
esxcli <conn_options> storage core adapter rescan
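Step 3 of the procedure above interprets the world list output. The following Python sketch captures that decision logic; the heuristics are exactly those stated in the text, while the function name and input shape (a list of world names) are hypothetical.

```python
def classify_device_users(world_names: list[str]) -> str:
    """Interpret 'esxcli storage core device world list' output for one device,
    following the rules in the detach procedure above."""
    if any("idle0" in name for name in world_names):
        return "VMFS volume is using the device indirectly"
    if world_names:
        return "another process holds the device (for example, a VM using it as an RDM)"
    return "no users found; the device can be detached"

print(classify_device_users(["idle0"]))
# VMFS volume is using the device indirectly
print(classify_device_users([]))
# no users found; the device can be detached
```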

Reattach a Device

When you have completed storage reconfiguration, you can reattach the storage device, mount the datastore, and restart the virtual machines.
Prerequisites
Make sure you are familiar with datastore mounting. See “Mounting Datastores with Existing Signatures,” on page 32.
Procedure
1 (Optional) Check whether the device is detached.
esxcli storage core device detached list
2 Aach the device.
esxcli storage core device set -d naa.XXX --state=on
3 Mount the datastore and restart virtual machines.

Working with Permanent Device Loss

In some cases a permanent device loss (PDL) might occur.
With earlier ESXi releases, an APD (All Paths Down) event results when the LUN becomes unavailable. The event is difficult for administrators because they do not have enough information about the state of the LUN to know which corrective action is appropriate.
In ESXi 5.0, the ESXi host can determine whether the cause of an APD event is temporary, or whether the cause is PDL. A PDL status occurs when the storage array returns SCSI sense codes indicating that the LUN is no longer available or that a severe, unrecoverable hardware problem exists with it. ESXi has an improved infrastructure that can speed up operations of upper-layer applications in a device loss scenario.
I Do not plan for APD or PDL events, for example, when you want to upgrade your hardware. Instead, perform an orderly removal of LUNs from your ESXi server, which is described in “Detach a Device
and Remove a LUN,” on page 48, perform the operation, and add the LUN back.

Removing a PDL LUN

How you remove a PDL LUN depends on whether it was in use.
- If the LUN that goes into PDL is not in use by any user process or by the VMkernel, the LUN disappears by itself after a PDL.
- If the LUN was in use when it entered PDL, delete the LUN manually by following the process described in "Detach a Device and Remove a LUN," on page 48.

Reattach a PDL LUN

You can reaach a PDL LUN after it has been removed.
Procedure
1 Return the LUN to working order.
2 Remove any users of the device.
You cannot bring a device back without removing active users. The ESXi host cannot know whether the device that was added back has changed. ESXi must be able to treat the device similarly to a new device being discovered.
3 Perform a rescan to get the device back in working order.

Managing Paths

To maintain a constant connection between an ESXi host and its storage, ESXi supports multipathing. With multipathing you can use more than one physical path for transferring data between the ESXi host and the external storage device.
In case of failure of an element in the SAN network, such as an HBA, switch, or cable, the ESXi host can fail over to another physical path. On some devices, multipathing also offers load balancing, which redistributes I/O loads between multiple paths to reduce or eliminate potential bottlenecks.
The storage architecture in vSphere 4.0 and later supports a special VMkernel layer, Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs). You can manage PSA using ESXCLI commands. See Chapter 6,
“Managing Third-Party Storage Arrays,” on page 101. This section assumes you are using only PSA plug-ins
included in vSphere by default.

Multipathing with Local Storage and FC SANs

Multipathing is a technique that lets you use more than one physical path that transfers data between the host and an external storage device.
In a simple multipathing local storage topology, you can use one ESXi host with two HBAs. The ESXi host connects to a dual-port local storage system through two cables. This configuration ensures fault tolerance if one of the connection elements between the ESXi host and the local storage system fails.
To support path switching with FC SAN, the ESXi host typically has two HBAs available from which the storage array can be reached through one or more switches. Alternatively, the setup can include one HBA and two storage processors so that the HBA can use a different path to reach the disk array.
In FC Multipathing, multiple paths connect each host with the storage device. For example, if HBA1 or the link between HBA1 and the switch fails, HBA2 takes over and provides the connection between the server and the switch. The process of one HBA taking over for another is called HBA failover.
Figure 4-4. FC Multipathing (each host's HBAs connect through redundant switches to storage processors SP1 and SP2 on the storage array)
If SP1 or the link between SP1 and the switch breaks, SP2 takes over and provides the connection between the switch and the storage device. This process is called SP failover. ESXi multipathing supports HBA and SP failover.
After you have set up your hardware to support multipathing, you can use the vSphere Web Client or vCLI commands to list and manage paths. You can perform the following tasks.
- List path information with vicfg-mpath or esxcli storage core path. See "Listing Path Information," on page 51.
- Change path state with vicfg-mpath or esxcli storage core path. See "Changing the State of a Path," on page 53.
- Change path policies with ESXCLI. See "Set Policy Details for Devices that Use Round Robin," on page 56.
- Mask paths with ESXCLI. See the vSphere Storage documentation.
- Manipulate the rules that match paths to multipathing plug-ins for newly discovered devices with esxcli claimrule. See "Managing Claim Rules," on page 110.
- Run or rerun claim rules or unclaim paths. See "Managing Claim Rules," on page 110.
- Rescan with vicfg-rescan. See "Scanning Storage Adapters," on page 66.

Listing Path Information

You can list path information with ESXCLI or with vicfg-mpath.
Listing Path Information with ESXCLI
You can run esxcli storage core path to display information about Fibre Channel or iSCSI LUNs.
I Use industry-standard device names, with format eui.xxx or naa.xxx to ensure consistency. Do not use VML LUN names unless device names are not available.
Names of virtual machine HBAs are not guaranteed to be valid across reboots.
You can display information about paths by running esxcli storage core path. Specify one of the options listed in "Connection Options for vCLI Host Management Commands," on page 19 in place of <conn_options>.
- List all devices with their corresponding paths, state of the path, adapter type, and other information.
esxcli <conn_options> storage core path list
- Limit the display to only a specified path or device.
esxcli <conn_options> storage core path list --path <path>
esxcli <conn_options> storage core path list --device <device>
- List the statistics for the SCSI paths in the system. You can list all paths or limit the display to a specific path.
esxcli <conn_options> storage core path stats get
esxcli <conn_options> storage core path stats get --path <path>
- List detailed information for the paths for the device specified with --device.
esxcli <conn_options> storage core path list -d <naa.xxxxxx>
- List all adapters.
esxcli <conn_options> storage core adapter list
- Rescan all adapters.
esxcli <conn_options> storage core adapter rescan
Listing Path Information with vicfg-mpath
You can run vicfg-mpath to list information about Fibre Channel or iSCSI LUNs.
I Use industry-standard device names, with format eui.xxx or naa.xxx to ensure consistency. Do not use VML LUN names unless device names are not available.
Names of virtual machine HBAs are not guaranteed to be valid across reboots.
You can display information about paths by running vicfg-mpath with one of the following options. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
- List all devices with their corresponding paths, state of the path, adapter type, and other information.
vicfg-mpath <conn_options> --list-paths
- Display a short listing of all paths.
vicfg-mpath <conn_options> --list-compact
- List all paths with adapter and device mappings.
vicfg-mpath <conn_options> --list-map
- List paths and detailed information by specifying the path UID (long path). The path UID is the first item in the vicfg-mpath --list display.
vicfg-mpath <conn_options> --list -P sas.5001c231c79c4a00-sas.1221000001000000-naa.5000c5000289c61b
- List paths and detailed information by specifying the path runtime name.
vicfg-mpath <conn_options> -l -P vmhba32:C0:T0:L0
The return information includes the runtime name, device, device display name, adapter, adapter identifier, target identifier, plugin, state, transport, and adapter and target transport details.
- List detailed information for the paths for the device specified with --device.
vicfg-mpath <conn_options> -l -d mpx.vmhba32:C0:T1:L0
vicfg-mpath <conn_options> --list --device naa.60060...

Changing the State of a Path

You can change the state of a path with ESXCLI or with vicfg-mpath.
Disable a Path with ESXCLI
You can temporarily disable a path with ESXCLI for maintenance or other reasons, and enable the path when you need it again.
If you are changing a path's state, the change operation fails if I/O is active when the path setting is changed. Reissue the command. You must issue at least one I/O operation before the change takes effect.
Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 (Optional) List all devices and corresponding paths.
esxcli <conn_options> storage core path list
The display includes information about each path's state.
2 Set the state of a LUN path to off.
esxcli <conn_options> storage core path set --state off --path vmhba32:C0:T1:L0
What to do next
When you are ready, set the path state to active again.
esxcli <conn_options> storage core path set --state active --path vmhba32:C0:T1:L0
Disable a Path with vicfg-mpath
You can temporarily disable a path with vicfg-mpath for maintenance or other reasons, and enable the path when you need it again.
If you are changing a path's state, the change operation fails if I/O is active when the path setting is changed. Reissue the command. You must issue at least one I/O operation before the change takes effect.
Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Procedure
1 (Optional) List all devices and corresponding paths.
vicfg-mpath <conn_options> --list-paths
The display includes information about each path's state.
2 Set the state of a LUN path to off.
vicfg-mpath <conn_options> --state off --path vmhba32:C0:T1:L0
What to do next
When you are ready, set the path state to active again.
vicfg-mpath <conn_options> --state active --path vmhba32:C0:T1:L0

Managing Path Policies

For each storage device managed by NMP, and not PowerPath, an ESXi host uses a path selection policy. If you have a third-party PSP installed on your host, its policy also appears on the list.
Supported Path Policies
The following path policies are supported by default.
VMW_PSP_FIXED
The host uses the designated preferred path, if it has been configured. Otherwise, the host selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it through the vSphere Web Client, or by using esxcli storage nmp psp fixed deviceconfig set. See "Changing Path Policies," on page 55. The default policy for active-active storage devices is VMW_PSP_FIXED.
NOTE If the host uses a default preferred path and the path's status turns to Dead, a new path is selected as preferred. However, if you explicitly designate the preferred path, it will remain preferred even when it becomes inaccessible.
VMW_PSP_MRU
The host selects the path that it used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices.
The VMW_PSP_MRU ranking capability allows you to assign ranks to individual paths. To set ranks to individual paths, use the esxcli storage nmp psp generic pathconfig set command. For details, see VMware knowledge base article 2003468.
VMW_PSP_RR
The host uses an automatic path selection algorithm that rotates through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays. Automatic path selection implements load balancing across the physical paths available to your host. Load balancing is the process of spreading I/O requests across the paths. The goal is to optimize throughput performance such as I/O per second, megabytes per second, or response times.
VMW_PSP_RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs.
Path Policy Effects
The type of array and the path policy determine the behavior of the host.
Most Recently Used
Active/Active array: Administrator action is required to fail back after path failure.
Active/Passive array: Administrator action is required to fail back after path failure.
Fixed
Active/Active array: VMkernel resumes using the preferred path when connectivity is restored.
Active/Passive array: VMkernel attempts to resume by using the preferred path. This action can cause path thrashing or failure when another SP now owns the LUN.
Round Robin
Active/Active array: No fail back.
Active/Passive array: Next path in round robin scheduling is selected.

Multipathing Considerations

You should consider a number of key points when working with multipathing.
The following considerations help you with multipathing.
- If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
- When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched. If no match occurs, NMP selects a default SATP for the device.
- If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device. The device is claimed by the default SATP based on the device's transport type.
- The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if there is no active/optimized path. This path is used until a better path is available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
- While VMW_PSP_MRU is typically selected for ALUA arrays by default, certain ALUA storage arrays need to use VMW_PSP_FIXED. To check whether your storage array requires VMW_PSP_FIXED, see the VMware Compatibility Guide or contact your storage vendor. When using VMW_PSP_FIXED with ALUA arrays, unless you explicitly specify a preferred path, the ESXi host selects the most optimal working path and designates it as the default preferred path. If the host-selected path becomes unavailable, the host selects an alternative available path. However, if you explicitly designate the preferred path, it remains preferred no matter what its status is.
- By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices.
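To see how these rules played out for a particular device, you can query the claiming SATP and PSP directly. This is a sketch; naa.xxx is a placeholder device identifier, and both commands appear elsewhere in this chapter or in the ESXCLI online help.

```shell
# Show the SATP and PSP that claimed each device; add --device for one device.
esxcli storage nmp device list --device naa.xxx

# List the SATP claim rules (driver, vendor/model, and transport rules)
# that the search order described above walks through.
esxcli storage nmp satp rule list
```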

Changing Path Policies

You can change path policies with ESXCLI or with vicfg-mpath.
Change the Path Policy with ESXCLI
You can change the path policy with ESXCLI.
Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
Verify that you are familiar with the supported path policies. See “Managing Path Policies,” on page 54.
Procedure
1 Ensure your device is claimed by the NMP plug-in.
Only NMP devices allow you to change the path policy.
esxcli <conn_options> storage nmp device list
2 Retrieve the list of path selection policies on the system to see which values are valid for the --psp option when you set the path policy.
esxcli storage core plugin registration list --plugin-class="PSP"
3 Set the path policy by using ESXCLI.
esxcli <conn_options> storage nmp device set --device naa.xxx --psp VMW_PSP_RR
4 (Optional) If you specied the VMW_PSP_FIXED policy, you must make sure the preferred path is set
correctly.
a Check which path is the preferred path for a device.
esxcli <conn_options> storage nmp psp fixed deviceconfig get --device naa.xxx
b If necessary, change the preferred path.
esxcli <conn_options> storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba3:C0:T5:L3
The command sets the preferred path to vmhba3:C0:T5:L3. Run the command with --default to clear the preferred path selection.
Change the Path Policy with vicfg-mpath
You can change the path policy with vicfg-mpath.
Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
Verify that you are familiar with the supported path policies. See “Managing Path Policies,” on page 54.
Procedure
1 List all multipathing plugins loaded into the system.
vicfg-mpath <conn_options> --list-plugins
At a minimum, this command returns NMP (Native Multipathing Plug-in) and MASK_PATH. If other MPP plug-ins have been loaded, they are listed as well.
2 Set the path policy by using ESXCLI.
esxcli <conn_options> storage nmp device set --device naa.xxx --psp VMW_PSP_RR
3 (Optional) If you specied the VMW_PSP_FIXED policy, you must make sure the preferred path is set
correctly.
a Check which path is the preferred path for a device.
esxcli <conn_options> storage nmp psp fixed deviceconfig get -d naa.xxxx
b If necessary, change the preferred path.
esxcli <conn_options> storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba3:C0:T5:L3
The command sets the preferred path to vmhba3:C0:T5:L3.

Set Policy Details for Devices that Use Round Robin

ESXi hosts can use multipathing for failover. With some storage devices, ESXi hosts can also use multipathing for load balancing.
To achieve beer load balancing across paths, administrators can specify that the ESXi host should switch paths under specic circumstances. Dierent options determine when the ESXi host switches paths and what paths are chosen. Only a limited number of storage arrays support round robin.
You can use esxcli storage nmp psp roundrobin to retrieve and set round robin path options on a device controlled by the roundrobin PSP. Specify one of the options listed in "Connection Options for vCLI Host Management Commands," on page 19 in place of <conn_options>.
No vicfg- command exists for performing these operations. The ESXCLI commands for setting round robin path options have changed. The commands supported in ESXi 4.x are no longer supported.
Procedure
1 Retrieve path selection settings for a device that is using the roundrobin PSP.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig get --device naa.xxx
2 Set the path selection. You can specify when the path should change, and whether unoptimized paths should be included.
- Use --bytes or --iops to specify when the path should change, as in the following examples.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type "bytes" -B 12345 --device naa.xxx
Sets the device specified by --device to switch to the next path each time 12345 bytes have been sent along the current path.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type=iops --iops 4200 --device naa.xxx
Sets the device specified by --device to switch after 4200 I/O operations have been performed on a path.
- Use --useano to specify that the round robin PSP should include paths in the active, unoptimized state in the round robin set (1), or that the PSP should use active, unoptimized paths only if no active optimized paths are available (0). If you do not include this option, the PSP includes only active optimized paths in the round robin path set.
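A sketch that combines the two options above. The device naa.xxx is a placeholder, and the boolean form accepted by --useano may vary by ESXCLI version, so verify it against the command help on your host.

```shell
# Switch paths every 4200 I/O operations and include active, unoptimized paths.
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=4200 --useano=1 --device naa.xxx

# Confirm the resulting round robin configuration for the device.
esxcli storage nmp psp roundrobin deviceconfig get --device naa.xxx
```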

Scheduling Queues for Virtual Machine I/O

You can use ESXCLI to enable or disable per-file I/O scheduling.
By default, vSphere provides a mechanism that creates scheduling queues for each virtual machine file. Each file has individual bandwidth controls. This mechanism ensures that the I/O for a particular virtual machine goes into its own separate queue and does not interfere with the I/O of other virtual machines.
This capability is enabled by default. You can turn it off by using the esxcli system settings kernel set -s isPerFileSchedModelActive option.
- Run esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE to disable per-file scheduling.
- Run esxcli system settings kernel set -s isPerFileSchedModelActive -v TRUE to enable per-file scheduling.
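Before changing the option, you can check its current value. A minimal sketch:

```shell
# Display the current value of the per-file scheduling kernel setting.
esxcli system settings kernel list -o isPerFileSchedModelActive

# Disable per-file scheduling, then re-enable it.
esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE
esxcli system settings kernel set -s isPerFileSchedModelActive -v TRUE
```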

Managing NFS/NAS Datastores

ESXi hosts can access a designated NFS volume located on a NAS (Network Attached Storage) server, mount the volume, and use it for their storage needs. You can use NFS volumes to store and boot virtual machines in the same way that you use VMFS datastores.

Capabilities Supported by NFS/NAS

An NFS client built into the ESXi hypervisor uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume that is located on a NAS server. The ESXi host can mount the volume and use it for its storage needs.
vSphere supports versions 3 and 4.1 of the NFS protocol.
VMware, Inc. 57
vSphere Command-Line Interface Concepts and Examples
Typically, the NFS volume or directory is created by a storage administrator and is exported from the NFS server. The NFS volume does not need to be formatted with a local file system, such as VMFS. You can mount the volume directly on ESXi hosts, and use it to store and boot virtual machines in the same way that you use VMFS datastores.
In addition to storing virtual disks on NFS datastores, you can also use NFS as a central repository for ISO images, virtual machine templates, and so on. If you use the datastore for ISO images, you can connect the virtual machine's CD-ROM device to an ISO file on the datastore and install a guest operating system from the ISO file.
ESXi hosts support the following shared storage capabilities on NFS volumes.
- VMware vMotion and Storage vMotion
- High Availability (HA), Fault Tolerance, and Distributed Resource Scheduler (DRS)
- ISO images, which are presented as CD-ROMs to virtual machines
- Virtual machine snapshots
- Host profiles
- Virtual machines with large capacity virtual disks, or disks greater than 2 TB. Virtual disks created on NFS datastores are thin-provisioned by default, unless you use hardware acceleration that supports the Reserve Space operation. See Hardware Acceleration on NAS Devices in the vSphere Storage documentation.
To use NFS as a shared repository, you create a directory on the NFS server and then mount the directory as a datastore on all hosts.

Adding and Deleting NAS File Systems

You can list, add, and delete a NAS file system with ESXCLI or with vicfg-nas.
Manage a NAS File System with ESXCLI
You can use ESXCLI as a vCLI command with connection options or in the ESXi Shell.
For more information on connection options, see "Connection Options for vCLI Host Management Commands," on page 19.
Procedure
1 List all known NAS file systems.
esxcli <conn_options> storage nfs list
For each NAS file system, the command lists the mount name, share name, and host name and whether the file system is mounted. If no NAS file systems are available, the system does not return a NAS file system and returns to the command prompt.
2 Add a new NAS file system to the ESXi host.
Specify the NAS server with --host, the volume to use for the mount with --volume-name, and the share name on the remote system to use for this NAS mount point with --share.
esxcli <conn_options> storage nfs add --host=dir42.eng.vmware.com --share=/<mount_dir> --volume-name=nfsstore-dir42
This command adds an entry to the known NAS file system list and supplies the share name of the new NAS file system. You must supply the host name, share name, and volume name for the new NAS file system.
3 Add a second NAS file system with read-only access.
esxcli <conn_options> storage nfs add --host=dir42.eng.vmware.com --share=/home --volume-name=FileServerHome2 --readonly
4 Delete one of the NAS file systems.
esxcli <conn_options> storage nfs remove --volume-name=FileServerHome2
This command unmounts the NAS file system and removes it from the list of known file systems.
Manage a NAS File System with vicfg-nas
You can use vicfg-nas as a vCLI command with connection options.
For more information on connection options, see "Connection Options for vCLI Host Management Commands," on page 19.
Procedure
1 List all known NAS file systems.
vicfg-nas <conn_options> -l
For each NAS file system, the command lists the mount name, share name, and host name and whether the file system is mounted. If no NAS file systems are available, the system returns a No NAS datastore found message.
2 Add a new NAS file system to the ESXi host.
vicfg-nas <conn_options> --add --nasserver dir42.eng.vmware.com -s /<mount_dir> nfsstore-dir42
This command adds an entry to the known NAS file system list and supplies the share name of the new NAS file system. You must supply the host name and the share name for the new NAS file system.
3 Add a second NAS file system with read-only access.
vicfg-nas <conn_options> -a -y -n esx42nas2 -s /home FileServerHome2
4 Delete one of the NAS le systems.
vicfg-nas <conn_options> -d FileServerHome1
This command unmounts the NAS file system and removes it from the list of known file systems.

Monitor and Manage FibreChannel SAN Storage

The esxcli storage san commands help administrators troubleshoot issues with I/O devices and fabric, and include Fibre Channel, FCoE, iSCSI, and SAS protocol statistics.
The commands allow you to retrieve device information and I/O statistics from those devices. You can also issue Loop Initialization Primitives (LIPs) to FC/FCoE devices, and you can reset SAS devices.
For FC and FCoE devices, you can retrieve FC events such as RSCN, LINKUP, LINKDOWN, Frame Drop, and FCoE CVL. The commands log a warning in the VMkernel log if they encounter too many link toggling events or frame drops.
The following example examines and resets SAN storage through a FibreChannel adapter. Instead of fc, the information retrieval commands can also use iscsi, fcoe, and sas.
Procedure
1 List adapter aributes.
esxcli storage san fc list
2 Retrieve all events for a Fibre Channel I/O device.
esxcli storage san fc events get
3 Clear all I/O Device Management events for the specified adapter.
esxcli storage san fc events clear --adapter adapter
4 Reset the adapter.
esxcli storage san fc reset

Monitoring and Managing Virtual SAN Storage

Virtual SAN is a distributed layer of software that runs natively as a part of the ESXi hypervisor. Virtual SAN aggregates local or direct-attached storage disks of a host cluster and creates a single storage pool shared across all hosts of the cluster.
While supporting VMware features that require shared storage, such as HA, vMotion, and DRS, Virtual SAN eliminates the need for external shared storage and simplifies storage configuration and virtual machine provisioning activities.
You can use ESXCLI commands to retrieve Virtual SAN information, manage Virtual SAN clusters, perform network management, add storage, set the policy, and perform other monitoring and management tasks. Type esxcli vsan --help for a complete list of commands.

Retrieve Virtual SAN Information

You can use ESXCLI commands to retrieve Virtual SAN information.
Procedure
1 Verify which VMkernel adapters are used for Virtual SAN communication.
esxcli vsan network list
2 List storage disks that were claimed by Virtual SAN.
esxcli vsan storage list
3 Get Virtual SAN cluster information.
esxcli vsan cluster get

Manage a Virtual SAN Cluster

You can activate Virtual SAN when you create host clusters or enable Virtual SAN on existing clusters. When enabled, Virtual SAN aggregates all local storage disks available on the hosts into a single datastore shared by all hosts.
You can run these commands in the ESXi Shell for a host, or the commands affect the target host that you specify as part of the vCLI connection options.
Procedure
1 Join the target host to a given Virtual SAN cluster.
esxcli vsan cluster join --cluster-uuid <uuid>
N The UUID of the cluster is required.
2 Verify that the target host is joined to a Virtual SAN cluster.
esxcli vsan cluster get
3 Remove the target host from the Virtual SAN cluster.
esxcli vsan cluster leave
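A typical join sequence, sketched with a placeholder UUID, retrieves the cluster UUID from a host that is already a member and passes it to the joining host:

```shell
# On an existing cluster member, read the cluster UUID from the output.
esxcli vsan cluster get

# On the joining host, supply that UUID (the value below is a placeholder).
esxcli vsan cluster join --cluster-uuid 52xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Verify membership on the joining host.
esxcli vsan cluster get
```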

Add and Remove Virtual SAN Storage

You can use ESXCLI commands to add and remove Virtual SAN storage.
Procedure
1 Add an HDD or data disk for use by Virtual SAN.
esxcli vsan storage add --disks <device_name>
NOTE The command expects an empty disk, which will be partitioned or formatted. Specify a device name, for example, mpx.vmhba2:C0:T1:L0.
2 Add an SSD disk for use by Virtual SAN.
esxcli vsan storage add --ssd <device_name>
N The command expects an empty disk, which will be partitioned or formaed. Specify a device name, for example, mpx.vmhba2:C0:T1:L0.
3 List the Virtual SAN storage configuration. You can display the complete list, or filter to show only a single device.
esxcli vsan storage list --device <device>
4 Remove disks or disk groups.
N You can remove disks or disk groups only when Virtual SAN is in manual mode. For the automatic disk claim mode, the remove action is not supported.
- Remove an individual Virtual SAN disk.
esxcli vsan storage remove --disk <device_name>
Instead of specifying the device name, you can specify the UUID if you include the --uuid option.
- Remove a disk group's SSD and each of its backing HDD drives from Virtual SAN usage.
esxcli vsan storage remove --ssd <device_name>
Instead of specifying the device name, you can specify the UUID if you include the --uuid option. Any SSD that you remove from Virtual SAN becomes available for such features as Flash Read Cache.
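A sketch of a removal workflow in manual mode. The device name and UUID below are placeholders; take the real values from the esxcli vsan storage list output.

```shell
# Find device names and UUIDs of the disks claimed by Virtual SAN.
esxcli vsan storage list

# Remove a single disk by device name...
esxcli vsan storage remove --disk mpx.vmhba2:C0:T1:L0

# ...or remove an entire disk group by specifying its SSD's UUID.
esxcli vsan storage remove --uuid 52xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```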

Monitoring vSphere Flash Read Cache

Flash Read Cache™ lets you accelerate virtual machine performance through the use of host-resident flash devices as a cache.
The vSphere Storage documentation discusses vSphere Flash Read Cache in some detail.
You can reserve a Flash Read Cache for any individual virtual disk. The Flash Read Cache is created only when a virtual machine is powered on, and it is discarded when a virtual machine is suspended or powered off. When you migrate a virtual machine, you have the option to migrate the cache. By default the cache is migrated if the virtual flash modules on the source and destination hosts are compatible. If you do not migrate the cache, the cache is rewarmed on the destination host. You can change the size of the cache while a virtual machine is powered on. In this instance, the existing cache is discarded and a new write-through cache is created, which results in a cache warm-up period. The advantage of creating a new cache is that the cache size can better match the application's active data.
Flash Read Cache supports write-through or read caching. Write-back or write caching are not supported. Data reads are satisfied from the cache, if present. Data writes are dispatched to the backing storage, such as a SAN or NAS. All data that is read from or written to the backing storage is unconditionally stored in the cache.
N Not all workloads benet with a Flash Read Cache. The performance boost depends on your workload paern and working set size. Read-intensive workloads with working sets that t into the cache can benet from a Flash Read Cache conguration. By conguring Flash Read Cache for your read-intensive workloads additional I/O resources become available on your shared storage, which can result in a performance increase for other workloads even though they are not congured to use Flash Read Cache.
You can manage vSphere Flash Read Cache from the vSphere Web Client. You can monitor Flash Read Cache by using commands in the esxcli storage vflash namespace. The following table lists available commands. See the vSphere Command-Line Interface Reference or the online help for a list of options to each command.
Table 4-1. Commands for Monitoring vSphere Flash Read Cache
Command Description
storage vflash cache get Gets individual vflash cache info.
storage vflash cache list Lists individual vflash caches.
storage vflash cache stats get Gets vflash cache statistics.
storage vflash cache stats reset Resets vflash cache statistics.
storage vflash device list Lists vflash SSD devices.
storage vflash module get Gets vflash module info.
storage vflash module list Lists vflash modules.
storage vflash module stats get Gets vflash module statistics.
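As a sketch of how these commands might be combined in an inventory check, the following invocations use placeholder connection options (the host name and user below are assumptions; substitute any of the standard vCLI connection options):

```shell
# Placeholder connection options; replace with your own host and credentials.
CONN="--server=esxi01.example.com --username=root"

esxcli $CONN storage vflash module list   # loaded virtual flash modules
esxcli $CONN storage vflash device list   # SSD devices backing virtual flash
esxcli $CONN storage vflash cache list    # individual vflash caches
```

Each command returns tabular output; run the corresponding `stats get` command to drill into a specific module or cache.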

Monitoring and Managing Virtual Volumes

The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays.
With Virtual Volumes, an individual virtual machine, not the datastore, becomes a unit of storage management, while storage hardware gains complete control over virtual disk content, layout, and management. The vSphere Storage documentation discusses Virtual Volumes in some detail and explains how to manage them by using the vSphere Web Client.
The following ESXCLI commands are available for displaying information about Virtual Volumes and for unbinding all Virtual Volumes from all vendor providers. See the vSphere Storage documentation for information on creating Virtual Volumes and configuring multipathing and SCSI-based endpoints.
Table 4-2. VVol Commands
Command Description
storage vvol daemon unbindall Unbinds all Virtual Volume instances from all storage providers that are known to the ESXi host.
storage vvol protocolendpoint list Lists the VVol protocol endpoints currently known to the ESXi host.
storage vvol storagecontainer list Lists the VVol storage containers currently known to the ESXi host.
storage vvol storagecontainer restore Restores storage containers of vendor providers that are registered on the host.
storage vvol vasacontext get Gets the VASA context (VC UUID).
storage vvol vendorprovider list Lists the vendor providers registered on the host.
storage vvol vendorprovider restore Restores the vendor providers that are registered on the host.
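A minimal inspection sequence might look like the following sketch. The host name and credentials are placeholders, and `unbindall` is shown only as an example of a maintenance step; run it only when you intend to release all VVol bindings:

```shell
# Placeholder connection options; replace with your own host and credentials.
CONN="--server=esxi01.example.com --username=root"

# Inspect the VVol configuration known to this host.
esxcli $CONN storage vvol vasacontext get
esxcli $CONN storage vvol vendorprovider list
esxcli $CONN storage vvol protocolendpoint list
esxcli $CONN storage vvol storagecontainer list

# Example maintenance step: release all VVol bindings on this host.
esxcli $CONN storage vvol daemon unbindall
```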

Migrating Virtual Machines with svmotion

Storage vMotion moves a virtual machine's configuration file, and, optionally, its disks, while the virtual machine is running. You can perform Storage vMotion tasks from the vSphere Web Client or with the svmotion command.
I No ESXCLI command for Storage vMotion is available.
You can place the virtual machine and all of its disks in a single location, or choose separate locations for the virtual machine configuration file and each virtual disk. You cannot change the virtual machine's execution host during a migration with svmotion.

Storage vMotion Uses

Storage vMotion has several uses in administering your vSphere environment.
- Upgrade ESXi without virtual machine downtime in situations where virtual machine disks must be moved to shared storage to allow migration with vMotion.
- Perform storage maintenance and reconfiguration. You can use Storage vMotion to move virtual machines off a storage device to allow maintenance or reconfiguration of the storage device without virtual machine downtime.
- Redistribute storage load. You can use Storage vMotion to manually redistribute virtual machines or virtual disks to different storage volumes to balance capacity or improve performance.

Storage vMotion Requirements and Limitations

You can migrate virtual machine disks with Storage vMotion if the virtual machine and its host meet specific resource and configuration requirements.
To migrate virtual machine disks with Storage vMotion, the virtual machine and its host must meet the following requirements.
- For ESXi 5.0 and later hosts, you can migrate virtual machines that have snapshots. For earlier versions of ESXi, you cannot migrate virtual machines that have snapshots.
- Virtual machine disks must be in persistent mode or be raw device mappings (RDMs). For physical and virtual compatibility mode RDMs, you can migrate the mapping file only. For virtual compatibility mode RDMs, you can use the vSphere Web Client to convert to thick-provisioned or thin-provisioned disks during migration as long as the destination is not an NFS datastore. You cannot use the svmotion command to perform this conversion.
- The host on which the virtual machine is running must have a license that includes Storage vMotion.
- The host on which the virtual machine is running must have access to both the source and target datastores.
- A particular host can be involved in up to four migrations with vMotion or Storage vMotion at one time. See Limits on Simultaneous Migrations in the vCenter Server and Host Management documentation for details.
If you use the vSphere Web Client for migration with Storage vMotion, the system performs several compatibility checks. These checks are not performed by the svmotion vCLI command.

Running svmotion in Interactive Mode

You can run svmotion in interactive mode by using the --interactive option. The command prompts you for the information it needs to complete the storage migration.
In interactive mode, the svmotion command uses the following syntax.
svmotion <conn_options> --interactive
When you use --interactive, all other options are ignored.
I When responding to the prompts, use quotes around input strings with special characters.

Running svmotion in Noninteractive Mode

You can run svmotion in noninteractive mode if you do not use the --interactive option.
I When you run svmotion, --server must point to a vCenter Server system.
In noninteractive mode, the svmotion command uses the following syntax.
svmotion [standard vCLI options] --datacenter=<datacenter_name>
--vm <VM config datastore path>:<new datastore>
[--disks <virtual disk datastore path>:<new datastore>,
<virtual disk datastore path>:<new datastore>]
Square brackets indicate optional elements, not datastores.
The --vm option species the virtual machine and its destination. By default, all virtual disks are relocated to the same datastore as the virtual machine. This option requires the current virtual machine conguration le location. See “Determine the Path to the Virtual Machine Conguration File and Disk File,” on page 64.
The --disks option relocates individual virtual disks to dierent datastores. The --disks option requires the current virtual disk datastore path as an option. See “Determine the Path to the Virtual Machine
Conguration File and Disk File,” on page 64.
Determine the Path to the Virtual Machine Configuration File and Disk File
To use the --vm option, you need the current virtual machine configuration file location.
Procedure
1 Run vmware-cmd -l to list all virtual machine configuration files (VMX files).
vmware-cmd -H <vc_server> -U <login_user> -P <login_password> -h <esx_host> -l
2 Choose the VMX le for the virtual machine of interest.
By default, the virtual disk le has the same name as the VMX le but has a .vmdk extension.
3 (Optional) Use vifs to verify that you are using the correct VMDK le.
Relocate a Virtual Machine's Storage
You can relocate a virtual machine's storage, including its disks.
Procedure
1 Determine the path to the virtual machine configuration file.
2 Run svmotion by using the following syntax.
svmotion
--url=https://myvc.mycorp.com/sdk --datacenter=DC1
--vm="[storage1] myvm/myvm.vmx:new_datastore"
N The example is for Windows. Use single quotes on Linux.
Relocate a Virtual Machine's Configuration File
You can relocate a virtual machine's configuration file, but leave the virtual disks in place.
Procedure
1 Determine the path to the virtual disk files and the virtual machine configuration file.
2 Run svmotion by using the following syntax.
svmotion
<conn_options>
--datacenter='My DC'
--vm='[old_datastore] myvm/myvm.vmx:new_datastore'
--disks='[old_datastore] myvm/myvm_1.vmdk:old_datastore, [old_datastore] myvm/myvm_2.vmdk:old_datastore'
N The example is for Linux. Use double quotes on Windows. The square brackets surround the datastore name and do not indicate an optional element.
This command relocates the virtual machine's configuration file to new_datastore, but leaves the two disks, myvm_1.vmdk and myvm_2.vmdk, in old_datastore.

Configuring FCoE Adapters

ESXi can use Fibre Channel over Ethernet (FCoE) adapters to access Fibre Channel storage.
The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.
To use FCoE, you need to install FCoE adapters. The adapters that VMware supports generally fall into two categories: hardware FCoE adapters and software FCoE adapters.
- Hardware FCoE adapters include completely offloaded specialized Converged Network Adapters (CNAs) that contain network and Fibre Channel functionalities on the same card. When such an adapter is installed, your host detects and can use both CNA components. In the vSphere Web Client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as an FCoE adapter (vmhba). You do not have to configure a hardware FCoE adapter to be able to use it.
- A software FCoE adapter is software code that performs some of the FCoE processing. The adapter can be used with a number of NICs that support partial FCoE offload. Unlike the hardware FCoE adapter, the software adapter must be activated.

Scanning Storage Adapters

You must perform a rescan operation each time you reconfigure your storage setup.
You can scan by using the vSphere Web Client, the vicfg-rescan vCLI command, or the esxcli storage core adapter rescan command.
esxcli storage core adapter rescan supports the following additional options.
- -a|--all or -A|--adapter=<string> – Scan all adapters or a specified adapter.
- -S|--skip-claim – Skip claiming of new devices by the appropriate multipath plug-in.
- -F|--skip-fs-scan – Skip filesystem scan.
- -t|--type – Specify the type of scan to perform. The command either scans for all changes (all) or for added, deleted, or updated adapters (add, delete, update).
vicfg-rescan supports only a simple rescan operation on a specified adapter.
Rescanning a Storage Adapter with ESXCLI
The following command scans a specific adapter and skips the filesystem scan that is performed by default.
esxcli <conn_options> storage core adapter rescan --adapter=vmhba33 --skip-claim
The command returns an indication of success or failure, but no detailed information.
Rescanning a Storage Adapter with vicfg-rescan
Run vicfg-rescan, specifying the adapter name.
vicfg-rescan <conn_options> vmhba1
The command returns an indication of success or failure, but no detailed information.

Retrieving SMART Information

You can use ESXCLI to retrieve information related to SMART. SMART is a monitoring system for computer hard disks that reports information about the disks.
You can use the following example syntax to retrieve SMART information.
esxcli storage core device smart get -d device
What the command returns depends on the level of SMART information that the device supports. If no information is available for a parameter, the output displays N/A, as in the following sample output.
Parameter Value Threshold Worst
-----------------------------------------------------
Health Status OK N/A N/A
Media Wearout Indicator N/A N/A N/A
Write Error Count N/A N/A N/A
Read Error Count 119 6 74
Power-on Hours 57 0 57
Power Cycle Count 100 20 100
Reallocated Sector Count 100 36 100
Raw Read Error Rate 119 6 74
Drive Temperature 38 0 49
Driver Rated Max Temperature 62 45 51
Write Sectors TOT Count 200 0 200
Read Sectors TOT Count 100 0 253
Initial Bad Block Count N/A N/A N/A
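Because the output is a fixed set of space-separated columns, it can be scripted against. The following sketch extracts the Value column for a named parameter from captured output; the helper function name and the captured sample are illustrative, not part of ESXCLI:

```shell
# Extract the Value column for a given SMART parameter from captured
# "esxcli storage core device smart get" output. Parameter names can
# contain spaces, so strip the last three columns (Value, Threshold,
# Worst) to recover the name, then compare.
get_smart_value() {
  param="$1"
  awk -v p="$param" '
    NR > 2 {
      name = $0
      # Remove the trailing three whitespace-separated fields.
      sub(/[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]*$/, "", name)
      if (name == p) print $(NF-2)
    }'
}

# Demo against a captured sample of the output shown above.
get_smart_value "Power-on Hours" <<'EOF'
Parameter Value Threshold Worst
-----------------------------------------------------
Health Status OK N/A N/A
Power-on Hours 57 0 57
Power Cycle Count 100 20 100
EOF
```

On a live system you would pipe the command output into the function instead of using a here-document.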

Chapter 5 Managing iSCSI Storage

ESXi systems include iSCSI technology to access remote storage using an IP network. You can use the vSphere Web Client, commands in the esxcli iscsi namespace, or the vicfg-iscsi command to configure both hardware and software iSCSI storage for your ESXi system.
See the vSphere Storage documentation for additional information.
This chapter includes the following topics:
- “iSCSI Storage Overview,” on page 69
- “Protecting an iSCSI SAN,” on page 71
- “Command Syntax for esxcli iscsi and vicfg-iscsi,” on page 73
- “iSCSI Storage Setup with ESXCLI,” on page 78
- “iSCSI Storage Setup with vicfg-iscsi,” on page 84
- “Listing and Setting iSCSI Options,” on page 89
- “Listing and Setting iSCSI Parameters,” on page 90
- “Enabling iSCSI Authentication,” on page 94
- “Set Up Ports for iSCSI Multipathing,” on page 97
- “Managing iSCSI Sessions,” on page 98

iSCSI Storage Overview

With iSCSI, SCSI storage commands that your virtual machine issues to its virtual disk are converted into TCP/IP protocol packets and transmitted to a remote device, or target, on which the virtual disk is located. To the virtual machine, the device appears as a locally attached SCSI drive.
To access remote targets, the ESXi host uses iSCSI initiators. Initiators transport SCSI requests and responses between ESXi and the target storage device on the IP network. ESXi supports the following types of initiators.
- Software iSCSI adapter - VMware code built into the VMkernel. Allows an ESXi host to connect to the iSCSI storage device through standard network adapters. The software initiator handles iSCSI processing while communicating with the network adapter.
- Hardware iSCSI adapter - Offloads all iSCSI and network processing from your host. Hardware iSCSI adapters are broken into two types.
- Dependent hardware iSCSI adapter - Leverages the VMware iSCSI management and configuration interfaces.
- Independent hardware iSCSI adapter - Leverages its own iSCSI management and configuration interfaces.
See the vSphere Storage documentation for details on setup and failover scenarios.
You must congure iSCSI initiators for the host to access and display iSCSI storage devices.
Figure 5-1 depicts hosts that use dierent types of iSCSI initiators.
- The host on the left uses an independent hardware iSCSI adapter to connect to the iSCSI storage system.
- The host on the right uses software iSCSI.
Dependent hardware iSCSI can be implemented in different ways and is not shown. iSCSI storage devices from the storage system become available to the host. You can access the storage devices and create VMFS datastores for your storage needs.
Figure 5-1. iSCSI Storage

Discovery Sessions

A discovery session is part of the iSCSI protocol. The discovery session returns the set of targets that you can access on an iSCSI storage system.
ESXi systems support dynamic and static discovery.
- Dynamic discovery - Also known as Send Targets discovery. Each time the ESXi host contacts a specified iSCSI storage server, it sends a Send Targets request to the server. In response, the iSCSI storage server supplies a list of available targets to the ESXi host. Monitor and manage with esxcli iscsi adapter discovery sendtarget or vicfg-iscsi commands.
- Static discovery - The ESXi host does not have to perform discovery. Instead, the ESXi host uses the IP addresses or domain names and iSCSI target names, IQN or EUI format names, to communicate with the iSCSI target. Monitor and manage with esxcli iscsi adapter discovery statictarget or vicfg-iscsi commands.
In either case, you set up target discovery addresses so that the initiator can determine which storage resource on the network is available for access. With dynamic discovery, all targets associated with an IP address or host name and the iSCSI name are discovered. With static discovery, you must specify the IP address or host name and the iSCSI name of the target you want to access. The iSCSI HBA must be in the same VLAN as both ports of the iSCSI array.

Discovery Target Names

The target name is either an IQN name or an EUI name.
The IQN and EUI names use specic formats.
- The IQN name uses the following format.
  iqn.yyyy-mm.{reversed domain name}:id_string
  The following IQN name contains example values.
  iqn.2007-05.com.mydomain:storage.tape.sys3.abc
  The ESXi host generates an IQN name for software iSCSI and dependent hardware iSCSI adapters. You can change that default IQN name.
- The EUI name is described in IETF RFC 3720 as follows.
  The IEEE Registration Authority provides a service for assigning globally unique identifiers [EUI]. The EUI-64 format is used to build a global identifier in other network protocols. For example, Fibre Channel defines a method of encoding it into a WorldWideName.
  The format is eui. followed by an EUI-64 identifier (16 ASCII-encoded hexadecimal digits).
The following EUI name contains example values.
eui.02004567A425678D
The eui. prefix indicates the type, and the remaining 16 ASCII-encoded hexadecimal digits are the EUI-64 identifier.
The IEEE EUI-64 iSCSI name format can be used when a manufacturer is registered with the IEEE Registration Authority and uses EUI-64 formatted worldwide unique names for its products.
You can check in the UI of the storage array whether an array uses an IQN name or an EUI name.
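As an illustration of the two formats, the following shell checks distinguish IQN-style from EUI-style names. The patterns are a sketch only; RFC 3720 permits a wider character set in the IQN id_string than these expressions accept, so do not use them as a strict validator:

```shell
# Rough shape checks for the two iSCSI target name formats described above.
# Illustrative only; not an RFC 3720-complete validator.
is_iqn() {
  printf '%s\n' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'
}
is_eui() {
  printf '%s\n' "$1" | grep -Eq '^eui\.[0-9A-Fa-f]{16}$'
}

is_iqn "iqn.2007-05.com.mydomain:storage.tape.sys3.abc" && echo "IQN format"
is_eui "eui.02004567A425678D" && echo "EUI format"
```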

Protecting an iSCSI SAN

Your iSCSI conguration is only as secure as your IP network. By enforcing good security standards when you set up your network, you help safeguard your iSCSI storage.

Protecting Transmitted Data

A primary security risk in iSCSI SANs is that an attacker might sniff transmitted storage data.
Neither the iSCSI adapter nor the ESXi host iSCSI initiator encrypts the data that it transmits to and from the targets, making the data vulnerable to sniffing attacks. You must therefore take additional measures to prevent attackers from easily seeing iSCSI data.
Allowing your virtual machines to share virtual switches and VLANs with your iSCSI configuration potentially exposes iSCSI traffic to misuse by a virtual machine attacker. To help ensure that intruders cannot listen to iSCSI transmissions, make sure that none of your virtual machines can see the iSCSI storage network.
Protect your system by giving the iSCSI SAN a dedicated virtual switch.
- If you use an independent hardware iSCSI adapter, make sure that the iSCSI adapter and ESXi physical network adapter are not inadvertently connected outside the host. Such a connection might result from sharing a switch.
- If you use a dependent hardware or software iSCSI adapter, which uses ESXi networking, configure iSCSI storage through a different virtual switch than the one used by your virtual machines.
You can also congure your iSCSI SAN on its own VLAN to improve performance and security. Placing your iSCSI conguration on a separate VLAN ensures that no devices other than the iSCSI adapter can see transmissions within the iSCSI SAN. With a dedicated VLAN, network congestion from other sources cannot interfere with iSCSI trac.

Securing iSCSI Ports

You can improve the security of iSCSI ports by installing security patches and limiting the devices connected to the iSCSI network.
When you run iSCSI devices, the ESXi host does not open ports that listen for network connections. This measure reduces the chances that an intruder can break into the ESXi host through spare ports and gain control over the host. Therefore, running iSCSI does not present an additional security risk at the ESXi host end of the connection.
An iSCSI target device must have one or more open TCP ports to listen for iSCSI connections. If security vulnerabilities exist in the iSCSI device software, your data can be at risk through no fault of the ESXi system. To lower this risk, install all security patches that your storage equipment manufacturer provides and limit the devices connected to the iSCSI network.

Setting iSCSI CHAP

iSCSI storage systems authenticate an initiator using a name and key pair. ESXi systems support Challenge Handshake Authentication Protocol (CHAP).
Using CHAP for your SAN implementation is a best practice. The ESXi host and the iSCSI storage system must have CHAP enabled and must have common credentials. During iSCSI login, the iSCSI storage system exchanges its credentials with the ESXi system and checks them.
You can set up iSCSI authentication by using the vSphere Web Client, as discussed in the vSphere Storage documentation or by using the esxcli command, discussed in “Enabling iSCSI Authentication,” on page 94. To use CHAP authentication, you must enable CHAP on both the initiator side and the storage system side. After authentication is enabled, it applies for targets to which no connection has been established, but does not apply to targets to which a connection is established. After the discovery address is set, the new volumes to which you add a connection are exposed and can be used.
For software iSCSI and dependent hardware iSCSI, ESXi hosts support per-discovery and per-target CHAP credentials. For independent hardware iSCSI, ESXi hosts support only one set of CHAP credentials per initiator. You cannot assign different CHAP credentials for different targets.
When you configure independent hardware iSCSI initiators, ensure that the CHAP configuration matches your iSCSI storage. If CHAP is enabled on the storage array, it must be enabled on the initiator. If CHAP is enabled, you must set up the CHAP authentication credentials on the ESXi host to match the credentials on the iSCSI storage.
Supported CHAP Levels
To set CHAP levels with esxcli iscsi adapter auth chap set or vicfg-iscsi, specify one of the values in Table 5-1 for <level>. Only two levels are supported for independent hardware iSCSI.
Mutual CHAP is supported for software iSCSI and for dependent hardware iSCSI, but not for independent hardware iSCSI.
I Ensure that CHAP is set to chapRequired before you set mutual CHAP, and use compatible levels for CHAP and mutual CHAP. Use dierent passwords for CHAP and mutual CHAP to avoid security risks.
Table 5-1. Supported Levels for CHAP
Level Description Supported
chapProhibited Host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it. Software iSCSI, dependent hardware iSCSI, independent hardware iSCSI
chapDiscouraged Host uses a non-CHAP connection, but allows a CHAP connection as fallback. Software iSCSI, dependent hardware iSCSI
chapPreferred Host uses CHAP if the CHAP connection succeeds, but uses non-CHAP connections as fallback. Software iSCSI, dependent hardware iSCSI, independent hardware iSCSI
chapRequired Host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. Software iSCSI, dependent hardware iSCSI
Returning Authentication to Default Inheritance
The values of iSCSI authentication settings associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent. For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
- If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.
- If you use vicfg-iscsi, the value you set overrides the inherited value.
- If you use esxcli iscsi commands, the value you set overrides the inherited value. You can set CHAP at the following levels.
  - esxcli iscsi adapter auth chap [get|set]
  - esxcli iscsi adapter discovery sendtarget auth chap [get|set]
  - esxcli iscsi adapter target portal auth chap [get|set]
Inheritance is relevant only if you want to return a dynamic discovery address or a static discovery target to its inherited value. In that case, use one of the following commands.
- Dynamic discovery: esxcli iscsi adapter discovery sendtarget auth chap set --inherit
- Static discovery: esxcli iscsi adapter target portal auth chap set --inherit
N You can set target-level CHAP authentication properties to be inherited from the send target level and set send target level CHAP authentication properties to be inherited from the adapter level. Reseing adapter-level properties is not supported.

Command Syntax for esxcli iscsi and vicfg-iscsi

In vSphere 5.0 and later, you can manage iSCSI storage by using either esxcli iscsi commands or vicfg-iscsi options.
For details, see the vSphere Command-Line Interface Reference, “esxcli iscsi Command Syntax,” on page 74, and “vicfg-iscsi Command Syntax,” on page 75.

esxcli iscsi Command Syntax

The esxcli iscsi command includes a number of nested namespaces.
The following tree illustrates the namespace hierarchy. Commands at each level appear in brackets. Many namespaces include both commands and nested namespaces.
esxcli iscsi
    adapter [get|list|set]
        auth chap [get|set]
        capabilities get
        discovery [rediscover]
            sendtarget [add|list|remove]
                auth chap [get|set]
                param [get|set]
            statictarget [add|list|remove]
            status get
        firmware [get|set]
        param [get|set]
        target [list]
            portal [list]
                auth chap [get|set]
                param [get|set]
    ibftboot [get|import]
    logicalnetworkportal list
    networkportal [add|list|remove]
        ipconfig [get|set]
    physicalnetworkportal [list]
        param [get|set]
    plugin list
    session [add|list|remove]
        connection list
    software [get|set]

Key to esxcli iscsi Short Options

ESXCLI commands for iSCSI management consistently use the same short options. For several options, the associated full option depends on the command.
Table 5-3. Short Options for iSCSI ESXCLI Command Options
Short Option Full Option
-a --address, --alias
-A --adapter
-c --cid
-d --direction
-D --default
-f --file, --force
-g --gateway
-i --ip
-I --inherit
-k --key
-l --level
-m --method
-M --module
-n --nic
-N --authname, --name
-o --option
-p --plugin
-s --isid, --subnet, --switch
-S --state, --secret
-v --value
-1 --dns1
-2 --dns2

vicfg-iscsi Command Syntax

vicfg-iscsi supports a comprehensive set of options.
Table 5‑4. Options for vicfg-iscsi
Option Suboptions Description
-A -­authentication
-A -­authentication
-A -­authentication
-c <level>
-m <auth_method> -b
-v <ma_username>
-x <ma_password> [-i <stor_ip_addr|stor_hostname> [:<portnum>] [-n <iscsi_name>]] <adapter_name>
--level <level>
--method <auth_method> --mutual
--mchap_username <ma_username>
--mchap_password <ma_password> [--ip <stor_ip_addr|stor_hostname> [:<portnum>] [--name <iscsi_name>]] <adapter_name>
-c <level>
-m <auth_method>
-u <auth_u_name>
-w <a_password> [-i <stor_ip_addr|stor_hostname> [:<portnum>] [-n <iscsi_name>]] <adapter_name>
--level <level>
--method <auth_method>
--chap_password <auth_u_name>
--chap_username <chap_password> [--ip <stor_ip_addr|stor_hostname> [:<portnum>] [--name <iscsi_name>]] <adapter_name>
-l <adapter_name>
--list <adapter_name>
Enables mutual authentication. You must enable authentication before you can enable mutual authentication.
Enables authentication using the specied options.
Lists supported authentication methods.
VMware, Inc. 75
vSphere Command-Line Interface Concepts and Examples
Table 54. Options for vicfg-iscsi (Continued)
Option Suboptions Description
-D --discovery -a -i <stor_ip_addr|stor_hostname[:<portnum>] <adapter_name>
--add --ip <stor_ip_addr|stor_hostname> [:<portnum>] <adapter_name>
-D --discovery -l <adapter_name>
--list <adapter_name>
-D --discovery -r -i <stor_ip_addr|stor_hostname>[:<portnum>] <adapter_name>
--remove --ip <stor_ip_addr|stor_hostname> [:<portnum>] <adapter_name>
-H -l [<adapter_name>]
--list [<adapter_name>]
-L --lun -l <adapter_name>
--list <adapter_name>
-L --lun -l -t <target_ID> <adapter_name>
--list --target_id <target_id> <adapter_name>
-N --network
(Independent hardware iSCSI only)
-N --network
(Independent hardware iSCSI only)
-N --network
(Independent hardware iSCSI only)
-N --network
(Independent hardware iSCSI only)
-N --network
(Independent hardware iSCSI only)
-p --pnp
(Independent hardware iSCSI only)
-p --pnp
(Independent hardware iSCSI only)
-I --iscsiname -a <alias_name> <adapter_name>
-I --iscsiname -n <iscsi_name> <adapter_name>
-I --iscsiname -l <adapter_name>
-l <adapter_name>
--list <adapter_name>
-i <ip_addr> <adapter_name>
--ip <ip_addr> <vmhba>
-s <subnet_mask> <adapter_name>
--subnetmask <subnet_mask> <adapter_name>
-g <default_gateway> <adapter_name>
--gateway <default_gateway> <adapter_name>
-i <ip_addr> -s <subnet mask>
-g <default_gateway> <adapter_name>
--ip <ip_addr> --subnetmask <subnet_mask>
--gateway <default_gateway> <adapter_name>
-l <adapter_name>
--list <adapter_name>
-M <mtu_size> <adapter_name>
--mtu <mtu-size> <adapter_name>
--alias <alias_name> <adapter_name>
--name <iscsi_name> <adapter_name>
--list <adapter_name>
Adds a dynamic discovery address.
Lists dynamic discovery addresses.
Removes a dynamic discovery address.
Lists all iSCSI adapters or a specied adapter.
Lists LUN information.
Lists LUN information for a specic target.
Lists network properties.
Sets the HBA IPv4 address to ip_addr.
Sets the HBA network mask to subnet_mask.
Sets the HBA gateway to default_gateway.
Sets the IP address, subnet mask, and default gateway in one command.
Lists physical network portal options.
Sets physical network portal options.
Sets the iSCSI initiator alias.
Sets the iSCSI initiator name.
Lists iSCSI initiator options.
76 VMware, Inc.
Chapter 5 Managing iSCSI Storage
Table 54. Options for vicfg-iscsi (Continued)
Option Suboptions Description

-M|--mtu
-p -M <mtu_size> <adapter_name>
--pnp --mtu <mtu_size> <adapter_name>
Sets the MTU size. Used with the --pnp option.

-S|--static
-l <adapter_name>
--list <adapter_name>
Lists static discovery addresses.

-S|--static
-r -i <stor_ip_addr|stor_hostname>[:<portnum>] -n <target_name> <adapter_name>
--remove --ip <stor_ip_addr|stor_hostname>[:<portnum>] --name <target_name> <adapter_name>
Removes a static discovery address.

-S|--static
-a -i <stor_ip_addr|stor_hostname>[:<portnum>] -n <target_name> <adapter_name>
--add --ip <stor_ip_addr|stor_hostname>[:<portnum>] --name <target_name> <adapter_name>
Adds a static discovery address.

-P|--phba
-l <adapter_name>
--list <adapter_name>
Lists external, vendor-specific properties of an iSCSI adapter.

-T|--target
-l <adapter_name>
--list <adapter_name>
Lists target information.

-W|--parameter
-l [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
--list [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
Lists iSCSI parameter information.

-W|--parameter
-l -k [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
--list --detail [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
Lists iSCSI parameter details.

-W|--parameter
-j <name>=<value> [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
--set <name>=<value> [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
Sets iSCSI parameters.

-W|--parameter
-o <param_name> [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
--reset <param_name> [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
Returns parameters in a discovery target or send target to the default inheritance behavior.

-z|--reset_auth
-A -z -m <auth_method> -b [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
--authentication --reset_auth --method <auth_method> [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
Resets target-level authentication properties to be inherited from the adapter level. Used with the --authentication option.
vSphere Command-Line Interface Concepts and Examples

iSCSI Storage Setup with ESXCLI

You can set up iSCSI storage by using commands in the esxcli iscsi namespace.
You can also set up iSCSI storage by using the vSphere Web Client or vicfg-iscsi commands. See “iSCSI
Storage Setup with vicfg-iscsi,” on page 84.

Set Up Software iSCSI with ESXCLI

Software iSCSI setup requires a number of high-level tasks.
You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or run esxcli iscsi --help in the console. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of
<conn_options>.
Prerequisites
■ Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
■ Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
■ Verify that you are familiar with iSCSI parameters. See “Listing and Setting iSCSI Parameters,” on page 90.
Procedure
1 Enable software iSCSI.
esxcli <conn_options> iscsi software set --enabled=true
2 Check whether a network portal, that is, a bound port, exists for iSCSI traffic.
esxcli <conn_options> iscsi adapter list
3 If no adapter exists, add one.
Software iSCSI does not require port binding, but requires that at least one VMkernel NIC is available and can be used as an iSCSI NIC. You can name the adapter as you add it.
esxcli <conn_options> iscsi networkportal add -n <portal_name> -A <vmhba>
4 (Optional) Check the status.
esxcli <conn_options> iscsi software get
The system prints true if software iSCSI is enabled, or false if it is not enabled.
5 (Optional) Set the iSCSI name and alias.
esxcli <conn_options> iscsi adapter set --adapter=<iscsi adapter> --name=<name>
esxcli <conn_options> iscsi adapter set --adapter=<iscsi adapter> --alias=<alias>
6 Add a dynamic discovery address or a static discovery address.
■ With dynamic discovery, all storage targets associated with a host name or IP address are discovered. You can run the following command.
esxcli <conn_options> iscsi adapter discovery sendtarget add --address=<ip/dns[:port]> --adapter=<adapter_name>
■ With static discovery, you must specify the host name or IP address and the iSCSI name of the storage target. You can run the following command.
esxcli <conn_options> iscsi adapter discovery statictarget add --address=<ip/dns[:port]> --adapter=<adapter_name> --name=<target_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
7 (Optional) Set the authentication information for CHAP.
You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
Option Command
Adapter-level CHAP    esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
Discovery-level CHAP  esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --address=<sendtarget_address>
Target-level CHAP     esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --name=<iscsi_iqn_name>
The following example sets adapter-level CHAP.
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=preferred --secret=uni_secret --adapter=vmhba33
8 (Optional) Set the authentication information for mutual CHAP by running esxcli iscsi adapter auth chap set again with --direction set to mutual and a different authentication user name and secret.
Option Command
Adapter-level CHAP    esxcli iscsi adapter auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba>
Discovery-level CHAP  esxcli iscsi adapter discovery sendtarget auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --address=<sendtarget_address>
Target-level CHAP     esxcli iscsi adapter target portal auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --name=<iscsi_iqn_name>
I You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
9 (Optional) Set iSCSI parameters.
Option Command
Adapter-level parameters    esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Discovery-level parameters  esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<sendtarget_address>
Target-level parameters     esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
10 After setup is complete, perform rediscovery and rescan all storage devices.
The following example performs the rediscovery and rescan operations.
esxcli <conn_options> iscsi adapter discovery rediscover
esxcli <conn_options> storage core adapter rescan --adapter=vmhba36
11 (Optional) If you want to make additional iSCSI login parameter changes, you must log out of the corresponding iSCSI session and log back in.
a Run esxcli iscsi session remove to log out.
b Run esxcli iscsi session add or rescan the adapter to add the session back.
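For example, the following commands log out of the session for one target and then add it back. The adapter name vmhba33 and the target IQN are placeholder values chosen for illustration.
esxcli <conn_options> iscsi session remove --adapter=vmhba33 --name=iqn.1998-01.com.example:storage.target01
esxcli <conn_options> iscsi session add --adapter=vmhba33 --name=iqn.1998-01.com.example:storage.target01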

Set Up Dependent Hardware iSCSI with ESXCLI

Dependent hardware iSCSI setup requires several high-level tasks.
You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or run esxcli iscsi --help in the console. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of
<conn_options>.
Prerequisites
■ Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
■ Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
■ Verify that you are familiar with iSCSI parameters. See “Listing and Setting iSCSI Parameters,” on page 90.
Procedure
1 Determine the iSCSI adapter type and retrieve the iSCSI adapter ID.
esxcli <conn_options> iscsi adapter list
2 (Optional) Set the iSCSI name and alias.
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --name=<name>
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --alias=<alias>
3 Set up port binding.
a Identify the VMkernel port of the dependent hardware iSCSI adapter.
esxcli <conn_options> iscsi logicalnetworkportal list --adapter=<adapter_name>
b Connect the dependent hardware iSCSI initiator to the iSCSI VMkernel ports by running the
following command for each port.
esxcli <conn_options> iscsi networkportal add --nic=<bound_vmknic> --adapter=<iscsi_adapter>
c Verify that the ports were added to the dependent hardware iSCSI initiator.
esxcli <conn_options> iscsi physicalnetworkportal list --adapter=<adapter_name>
4 Add a dynamic discovery address or a static discovery address.
■ With dynamic discovery, all storage targets associated with a host name or IP address are discovered. You can run the following command.
esxcli <conn_options> iscsi adapter discovery sendtarget add --address=<ip/dns[:port]> --adapter=<adapter_name>
■ With static discovery, you must specify the host name or IP address and the iSCSI name of the storage target. You can run the following command.
esxcli <conn_options> iscsi adapter discovery statictarget add --address=<ip/dns[:port]> --adapter=<adapter_name> --name=<target_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
5 (Optional) Set the authentication information for CHAP.
You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
Option Command
Adapter-level CHAP    esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
Discovery-level CHAP  esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --address=<sendtarget_address>
Target-level CHAP     esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --name=<iscsi_iqn_name>
The following example sets adapter-level CHAP.
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=preferred --secret=uni_secret --adapter=vmhba33
6 (Optional) Set the authentication information for mutual CHAP by running esxcli iscsi adapter auth chap set again with --direction set to mutual and a different authentication user name and secret.
Option Command
Adapter-level CHAP    esxcli iscsi adapter auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba>
Discovery-level CHAP  esxcli iscsi adapter discovery sendtarget auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --address=<sendtarget_address>
Target-level CHAP     esxcli iscsi adapter target portal auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --name=<iscsi_iqn_name>
IMPORTANT   You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
7 (Optional) Set iSCSI parameters.
Option Command
Adapter-level parameters    esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Discovery-level parameters  esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<sendtarget_address>
Target-level parameters     esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
8 After setup is complete, perform rediscovery and rescan all storage devices.
The following example performs the rediscovery and rescan operations.
esxcli <conn_options> iscsi adapter discovery rediscover
esxcli <conn_options> storage core adapter rescan --adapter=vmhba36
9 (Optional) If you want to make additional iSCSI login parameter changes, you must log out of the corresponding iSCSI session and log back in.
a Run esxcli iscsi session remove to log out.
b Run esxcli iscsi session add or rescan the adapter to add the session back.

Set Up Independent Hardware iSCSI with ESXCLI

With independent hardware-based iSCSI storage, you use a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. This iSCSI initiator handles all iSCSI and network processing and management for your ESXi system.
You must install and configure the independent hardware iSCSI adapter for your host before you can access the iSCSI storage device. For installation information, see vendor documentation.
Hardware iSCSI setup requires a number of high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or run esxcli iscsi
--help in the console. Specify one of the options listed in “Connection Options for vCLI Host Management
Commands,” on page 19 in place of <conn_options>.
Prerequisites
■ Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
■ Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
■ Verify that you are familiar with iSCSI parameters. See “Listing and Setting iSCSI Parameters,” on page 90.
Procedure
1 Determine the iSCSI adapter type and retrieve the iSCSI adapter ID.
esxcli <conn_options> iscsi adapter list
2 Configure the hardware initiator (HBA) by running esxcli iscsi networkportal ipconfig with one or more of the following options.
Option Description
-A|--adapter=<str>    iSCSI adapter name (required)
-1|--dns1=<str>       iSCSI network portal primary DNS address
-2|--dns2=<str>       iSCSI network portal secondary DNS address
-g|--gateway=<str>    iSCSI network portal gateway address
-i|--ip=<str>         iSCSI network portal IP address (required)
-n|--nic=<str>        iSCSI network portal (vmknic)
-s|--subnet=<str>     iSCSI network portal subnet mask (required)
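For example, the following sketch assigns a static IPv4 configuration to an independent hardware iSCSI adapter, assuming the ipconfig set subcommand; the adapter name vmhba33 and the addresses are placeholder values.
esxcli <conn_options> iscsi networkportal ipconfig set --adapter=vmhba33 --ip=192.0.2.10 --subnet=255.255.255.0 --gateway=192.0.2.1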
3 (Optional) Set the iSCSI name and alias.
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --name=<name>
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --alias=<alias>
4 Add a dynamic discovery address or a static discovery address.
■ With dynamic discovery, all storage targets associated with a host name or IP address are discovered. You can run the following command.
esxcli <conn_options> iscsi adapter discovery sendtarget add --address=<ip/dns[:port]> --adapter=<adapter_name>
■ With static discovery, you must specify the host name or IP address and the iSCSI name of the storage target. You can run the following command.
esxcli <conn_options> iscsi adapter discovery statictarget add --address=<ip/dns[:port]> --adapter=<adapter_name> --name=<target_name>
5 (Optional) Set the authentication information for CHAP.
You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
Option Command
Adapter-level CHAP    esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
Discovery-level CHAP  esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --address=<sendtarget_address>
Target-level CHAP     esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --name=<iscsi_iqn_name>
The following example sets adapter-level CHAP.
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=preferred --secret=uni_secret --adapter=vmhba33
N Mutual CHAP is not supported for independent hardware iSCSI storage.
6 (Optional) Set iSCSI parameters.
Option Command
Adapter-level parameters    esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Discovery-level parameters  esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<sendtarget_address>
Target-level parameters     esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
7 After setup is complete, run esxcli storage core adapter rescan --adapter=<iscsi_adapter> to rescan all storage devices.
8 After setup is complete, perform rediscovery and rescan all storage devices.
The following example performs the rediscovery and rescan operations.
esxcli <conn_options> iscsi adapter discovery rediscover
esxcli <conn_options> storage core adapter rescan --adapter=vmhba36

iSCSI Storage Setup with vicfg-iscsi

You can set up iSCSI storage by using the vicfg-iscsi command.
You can also set up iSCSI storage by using the vSphere Web Client or commands in the esxcli iscsi namespace. See “iSCSI Storage Setup with ESXCLI,” on page 78.

Set Up Software iSCSI with vicfg-iscsi

Software iSCSI setup requires a number of high-level tasks.
You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command. Specify one of the options listed in “Connection Options for vCLI Host
Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
■ Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
■ Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
Procedure
1 Determine the HBA type and retrieve the HBA ID.
vicfg-iscsi <conn_options> --adapter --list
2 Enable software iSCSI for the HBA.
vicfg-iscsi <conn_options> --swiscsi --enable
3 (Optional) Check the status.
vicfg-iscsi <conn_options> --swiscsi --list
The system prints Software iSCSI is enabled or Software iSCSI is not enabled.
4 (Optional) Set the iSCSI name and alias.
vicfg-iscsi <conn_options> -I -n <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --name <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> -I -a <alias_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --alias <alias_name> <adapter_name>
5 Add a dynamic discovery address or a static discovery address.
■ With dynamic discovery, all storage targets associated with a host name or IP address are discovered. You can run the following command.
vicfg-iscsi <conn_options> --discovery --add --ip <ip_addr | domain_name> <adapter_name>
■ With static discovery, you must specify the host name or IP address and the iSCSI name of the storage target. You can run the following command.
vicfg-iscsi <conn_options> --static --add --ip <ip_addr | domain_name> --name <iscsi_name> <adapter_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
6 Set the authentication information for CHAP.
vicfg-iscsi <conn_options> -A -c <level> -m <auth_method> -u <auth_u_name> -w <chap_password> [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
vicfg-iscsi <conn_options> --authentication --level <level> --method <auth_method> --chap_username <auth_u_name> --chap_password <chap_password> [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
The target (-i) and name (-n) options determine what the command applies to.
Option Result
-i and -n            Command applies to per-target CHAP for static targets.
Only -i              Command applies to the discovery address.
Neither -i nor -n    Command applies to per-adapter CHAP.
7 (Optional) Set the authentication information for mutual CHAP by running vicfg-iscsi -A again with the -b option and a different authentication user name and password.
For <level>, specify chapProhibited or chapRequired.
■ chapProhibited – The host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it.
■ chapRequired – The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. You can set this value for mutual CHAP only if CHAP is set to chapRequired.
For <auth_method>, CHAP is the only valid value.
IMPORTANT   You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
8 (Optional) Set iSCSI parameters by running vicfg-iscsi -W.
9 After setup is complete, run vicfg-rescan to rescan all storage devices.

Set Up Dependent Hardware iSCSI with vicfg-iscsi

Dependent hardware iSCSI setup requires a number of high-level tasks.
You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command. Specify one of the options listed in “Connection Options for vCLI Host
Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
■ Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
■ Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
Procedure
1 Determine the HBA type and retrieve the HBA ID.
vicfg-iscsi <conn_options> --adapter --list
2 (Optional) Set the iSCSI name and alias.
vicfg-iscsi <conn_options> -I -n <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --name <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> -I -a <alias_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --alias <alias_name> <adapter_name>
3 Set up port binding.
a Identify the VMkernel port of the dependent hardware iSCSI adapter.
esxcli <conn_options> swiscsi vmknic list -d <vmhba>
b Connect the dependent hardware iSCSI initiator to the iSCSI VMkernel ports by running the
following command for each port.
esxcli <conn_options> swiscsi nic add -n <port_name> -d <vmhba>
c Verify that the ports were added to the dependent hardware iSCSI initiator.
esxcli <conn_options> swiscsi nic list -d <vmhba>
d Rescan the dependent hardware iSCSI initiator.
vicfg-rescan <conn_options> <vmhba>
4 Add a dynamic discovery address or a static discovery address.
■ With dynamic discovery, all storage targets associated with a host name or IP address are discovered. You can run the following command.
vicfg-iscsi <conn_options> --discovery --add --ip <ip_addr | domain_name> <adapter_name>
■ With static discovery, you must specify the host name or IP address and the iSCSI name of the storage target. You can run the following command.
vicfg-iscsi <conn_options> --static --add --ip <ip_addr | domain_name> --name <iscsi_name> <adapter_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
5 Set the authentication information for CHAP.
vicfg-iscsi <conn_options> -A -c <level> -m <auth_method> -u <auth_u_name> -w <chap_password> [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
vicfg-iscsi <conn_options> --authentication --level <level> --method <auth_method> --chap_username <auth_u_name> --chap_password <chap_password> [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
The target (-i) and name (-n) options determine what the command applies to.
Option Result
-i and -n            Command applies to per-target CHAP for static targets.
Only -i              Command applies to the discovery address.
Neither -i nor -n    Command applies to per-adapter CHAP.
6 (Optional) Set iSCSI parameters by running vicfg-iscsi -W.
7 After setup is complete, run vicfg-rescan to rescan all storage devices.

Set Up Independent Hardware iSCSI with vicfg-iscsi

With independent hardware-based iSCSI storage, you use a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. This iSCSI initiator handles all iSCSI and network processing and management for your ESXi system.
You must install and configure the independent hardware iSCSI adapter for your host before you can access the iSCSI storage device. For installation information, see vendor documentation.
Hardware iSCSI setup requires a number of high-level tasks. You should be familiar with the corresponding command for each task. You can refer to the relevant documentation for each command or the manpage (Linux). Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Prerequisites
■ Verify that you are familiar with iSCSI authentication. See “Enabling iSCSI Authentication,” on page 94.
■ Verify that you are familiar with CHAP. See “Setting iSCSI CHAP,” on page 72.
Procedure
1 Determine the HBA type and retrieve the HBA ID.
vicfg-iscsi <conn_options> --adapter --list
2 Configure the hardware initiator (HBA) by running vicfg-iscsi -N with one or more of the following options.
■ --list – List network properties.
■ --ip <ip_addr> – Set HBA IPv4 address.
■ --subnetmask <subnet_mask> – Set HBA network mask.
■ --gateway <default_gateway> – Set HBA gateway.
■ --set ARP=true|false – Enable or disable ARP redirect.
You can also set the HBA IPv4 address, network mask, and gateway in one command.
vicfg-iscsi <conn_options> --ip <ip_addr> --subnetmask <subnet_mask> --gateway <default_gateway>
3 (Optional) Set the iSCSI name and alias.
vicfg-iscsi <conn_options> -I -n <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --name <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> -I -a <alias_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --alias <alias_name> <adapter_name>
4 Add a dynamic discovery address or a static discovery address.
■ With dynamic discovery, all storage targets associated with a host name or IP address are discovered. You can run the following command.
vicfg-iscsi <conn_options> --discovery --add --ip <ip_addr | domain_name> <adapter_name>
■ With static discovery, you must specify the host name or IP address and the iSCSI name of the storage target. You can run the following command.
vicfg-iscsi <conn_options> --static --add --ip <ip_addr | domain_name> --name <iscsi_name> <adapter_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
5 Set the authentication information for CHAP.
vicfg-iscsi <conn_options> -A -c <level> -m <auth_method> -u <auth_u_name> -w <chap_password> [-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
vicfg-iscsi <conn_options> --authentication --level <level> --method <auth_method> --chap_username <auth_u_name> --chap_password <chap_password> [--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
The target (-i) and name (-n) options determine what the command applies to.
Option Result
-i and -n            Command applies to per-target CHAP for static targets.
Only -i              Command applies to the discovery address.
Neither -i nor -n    Command applies to per-adapter CHAP.
N Mutual CHAP is not supported for independent hardware iSCSI storage.
6 (Optional) Set iSCSI parameters by running vicfg-iscsi -W.
7 After setup is complete, run vicfg-rescan to rescan all storage devices.

Listing and Setting iSCSI Options

You can list and set iSCSI options with ESXCLI or with vicfg-iscsi.
You can also manage parameters. See “Listing and Setting iSCSI Parameters,” on page 90.

Listing iSCSI Options with ESXCLI

You can use esxcli iscsi information retrieval commands to list external HBA properties, information about targets, and LUNs.
You can use the following esxcli iscsi options to list iSCSI parameters. Specify one of the options listed in
“Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
■ Run esxcli iscsi adapter firmware to list or upload the firmware for the iSCSI adapter.
esxcli <conn_options> iscsi adapter firmware get --adapter=<adapter_name>
esxcli <conn_options> iscsi adapter firmware set --file=<firmware_file_path>
The system returns information about the vendor, model, description, and serial number of the HBA.
■ Run commands in the esxcli iscsi adapter target namespace.
  ■ esxcli iscsi adapter target portal lists and sets authentication and portal parameters.
  ■ esxcli iscsi adapter target list lists LUN information.
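For example, the following commands list the targets and the target portal information known to the host's iSCSI adapters; the exact output columns vary by release.
esxcli <conn_options> iscsi adapter target list
esxcli <conn_options> iscsi adapter target portal list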

Setting MTU with ESXCLI

You can change MTU settings by using ESXCLI.
If you want to change the MTU used for your iSCSI storage, you must make the change in two places.
■ Run esxcli network vswitch standard set to change the MTU of the virtual switch.
■ Run esxcli network ip interface set to change the MTU of the network interface.
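For example, the following commands set an MTU of 9000 in both places. The vSwitch name vSwitch1 and the VMkernel interface vmk1 are placeholder values.
esxcli <conn_options> network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli <conn_options> network ip interface set --interface-name=vmk1 --mtu=9000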

Listing and Setting iSCSI Options with vicfg-iscsi

You can use vicfg-iscsi information retrieval options to list external HBA properties, information about targets, and LUNs.
You can use the following vicfg-iscsi options to list iSCSI parameters. Specify one of the options listed in
“Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
■ Run vicfg-iscsi -P|--phba to list external (vendor-specific) properties of an iSCSI adapter.
vicfg-iscsi <conn_options> -P -l <adapter_name>
vicfg-iscsi <conn_options> --phba --list <adapter_name>
The system returns information about the vendor, model, description, and serial number of the HBA.
■ Run vicfg-iscsi -T|--target to list target information.
vicfg-iscsi <conn_options> -T -l <adapter_name>
vicfg-iscsi <conn_options> --target --list <adapter_name>
The system returns information about targets for the specified adapter, including the iSCSI name, in IQN or EUI format, and alias. See “Discovery Target Names,” on page 71.
■ Run vicfg-iscsi -L|--lun to list LUN information.
vicfg-iscsi <conn_options> -L -l <adapter_name>
vicfg-iscsi <conn_options> --lun --list <adapter_name>
The command returns the operating system device name, bus number, target ID, LUN ID, and LUN size for the LUN.
■ Run vicfg-iscsi -L with -t to list only LUNs on a specified target.
vicfg-iscsi <conn_options> -L -l -t <target_ID> <adapter_name>
vicfg-iscsi <conn_options> --lun --list --target_id <target_id> <adapter_name>
The system returns the LUNs on the specified target and the corresponding device name, device number, LUN ID, and LUN size.
■ Run vicfg-iscsi -p|--pnp to list physical network portal information for independent hardware iSCSI devices. You can also use this option with --mtu.
vicfg-iscsi <conn_options> -p -l <adapter_name>
vicfg-iscsi <conn_options> --pnp --list <adapter_name>
The system returns information about the MAC address, MTU, and current transfer rate.
■ Run vicfg-iscsi -I -l to list information about the iSCSI initiator. ESXi systems use a software-based iSCSI initiator in the VMkernel to connect to storage. The command returns the iSCSI name, alias name, and alias settable bit for the initiator.
vicfg-iscsi <conn_options> -I -l vmhba42
■ Run vicfg-iscsi -p -M to set the MTU for the adapter. You must specify the size and adapter name.
vicfg-iscsi <conn_options> -p -M <mtu_size> <adapter_name>
vicfg-iscsi <conn_options> --pnp --mtu <mtu_size> <adapter_name>

Listing and Setting iSCSI Parameters

You can list and set iSCSI parameters for software iSCSI and for dependent hardware iSCSI by using ESXCLI or vicfg-iscsi.

Listing and Setting iSCSI Parameters with ESXCLI

You can list and set iSCSI parameters for software iSCSI and for dependent hardware iSCSI by using ESXCLI.
You can retrieve and set iSCSI parameters by running one of the following commands.
Chapter 5 Managing iSCSI Storage
Parameter Type               Command
Adapter-level parameters     esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Target-level parameters      esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
Discovery-level parameters   esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address>
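The three scopes above differ only in the esxcli namespace and in the address and name options they require. As a rough illustration (not a VMware API; the helper function and the example values are hypothetical), a few lines of Python can assemble the command forms from the table:

```python
# Illustrative helper, not a VMware API; it assembles the command forms
# shown above for the three parameter scopes.
def param_set_cmd(scope, adapter, key, value, address=None, name=None):
    namespaces = {
        "adapter": "esxcli iscsi adapter param set",
        "target": "esxcli iscsi adapter target portal param set",
        "discovery": "esxcli iscsi adapter discovery sendtarget param set",
    }
    parts = [namespaces[scope], f"--adapter={adapter}",
             f"--key={key}", f"--value={value}"]
    if scope in ("target", "discovery"):
        parts.append(f"--address={address}")   # both scopes need the address
    if scope == "target":
        parts.append(f"--name={name}")         # only targets need the IQN
    return " ".join(parts)

# DelayedAck is one of the settable parameters listed below.
print(param_set_cmd("adapter", "vmhba33", "DelayedAck", "false"))
```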
The following table lists all settable parameters. These parameters are also described in IETF RFC 3720. You can run esxcli iscsi adapter param get to determine whether a parameter is settable or not.
The parameters in the table apply to software iSCSI and dependent hardware iSCSI.
Table 5-5. Settable iSCSI Parameters

DataDigestType
    Increases data integrity. When data digest is enabled, the system performs a checksum over each PDU's data part and verifies it using the CRC32C algorithm. Valid values are digestProhibited, digestDiscouraged, digestPreferred, or digestRequired.
    NOTE Systems that use Intel Nehalem processors offload the iSCSI digest calculations for software iSCSI, thus reducing the impact on performance.

HeaderDigest
    Increases data integrity. When header digest is enabled, the system performs a checksum over the header part of each iSCSI Protocol Data Unit (PDU) and verifies it using the CRC32C algorithm.

MaxOutstandingR2T
    Defines the number of Ready to Transfer (R2T) PDUs that can be in transition before an acknowledgment PDU is received.

FirstBurstLength
    Maximum amount of unsolicited data an iSCSI initiator can send to the target during the execution of a single SCSI command, in bytes.

MaxBurstLength
    Maximum SCSI data payload in a Data-In or a solicited Data-Out iSCSI sequence, in bytes.

MaxRecvDataSegLen
    Maximum data segment length, in bytes, that can be received in an iSCSI PDU.

NoopOutInterval
    Time interval, in seconds, between NOP-Out requests sent from your iSCSI initiator to an iSCSI target. The NOP-Out requests serve as the ping mechanism to verify that a connection between the iSCSI initiator and the iSCSI target is active. Supported only at the initiator level.

NoopOutTimeout
    Amount of time, in seconds, that can elapse before your host receives a NOP-In message. The message is sent by the iSCSI target in response to the NOP-Out request. When the NoopOutTimeout limit is exceeded, the initiator terminates the current session and starts a new one. Supported only at the initiator level.

RecoveryTimeout
    Amount of time, in seconds, that can elapse while a session recovery is performed. If the timeout exceeds its limit, the iSCSI initiator terminates the session.

DelayedAck
    Allows systems to delay acknowledgment of received data packets.
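Both digest parameters name the CRC32C (Castagnoli) algorithm. The following Python sketch shows the checksum itself, as an illustration only; the digests are computed by the iSCSI initiator and target over PDUs, not by any vCLI command:

```python
# Bit-at-a-time CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
# Illustration only; the iSCSI initiator and target compute the digests.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
print(hex(crc32c(b"123456789")))  # 0xe3069283
```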
You can use the following ESXCLI commands to list parameter options.

Run esxcli iscsi adapter param get to list parameter options for the iSCSI adapter.

Run esxcli iscsi adapter discovery sendtarget param get or esxcli iscsi adapter target portal param get to retrieve information about iSCSI parameters and whether they are settable.
Run esxcli iscsi adapter discovery sendtarget param set or esxcli iscsi adapter target portal param set to set iSCSI parameter options.
If special characters are in the <name>=<value> sequence, for example, if you add a space, you must surround the sequence with double quotes ("<name> = <value>").

Returning Parameters to Default Inheritance with ESXCLI

The values of iSCSI parameters associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent.
For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.

If you use esxcli iscsi, the value you set overrides the inherited value.
Inheritance is relevant only if you want to return a dynamic discovery address or a static discovery target to its inherited value. In that case, use the following command, which requires the --name option for static discovery targets, but not for dynamic discovery addresses.
Target Type      Command
Dynamic target   esxcli iscsi adapter discovery sendtarget param set
Static target    esxcli iscsi adapter target portal param set
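The inheritance and reset behavior can be pictured with a small model (purely conceptual; the Scope class and the parameter values below are hypothetical, not a VMware API):

```python
# Conceptual model only; Scope is not a VMware API.
class Scope:
    """A parameter scope that inherits values from its parent."""

    def __init__(self, parent=None):
        self.parent = parent
        self._overrides = {}

    def set(self, key, value):
        # An explicitly set value overrides the inherited one.
        self._overrides[key] = value

    def reset(self, key):
        # Returning to default inheritance removes the override.
        self._overrides.pop(key, None)

    def get(self, key):
        if key in self._overrides:
            return self._overrides[key]
        return self.parent.get(key) if self.parent else None

adapter = Scope()                     # parent scope: the iSCSI adapter
adapter.set("NoopOutInterval", 15)
sendtarget = Scope(parent=adapter)    # child scope: a dynamic discovery address
sendtarget.set("NoopOutInterval", 30)   # explicit value overrides inheritance
sendtarget.reset("NoopOutInterval")     # return to default inheritance
inherited = sendtarget.get("NoopOutInterval")
print(inherited)  # 15, the adapter-level value
```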

Listing and Setting iSCSI Parameters with vicfg-iscsi

You can list and set iSCSI parameters by running vicfg-iscsi -W.
The following table lists all settable parameters. These parameters are also described in IETF RFC 3720. You can also run vicfg-iscsi --parameter --list --details to determine whether a parameter is settable or not.
The parameters in the table apply to software iSCSI and dependent hardware iSCSI.
Table 5-6. Settable iSCSI Parameters

DataDigestType
    Increases data integrity. When data digest is enabled, the system performs a checksum over each PDU's data part and verifies it using the CRC32C algorithm. Valid values are digestProhibited, digestDiscouraged, digestPreferred, or digestRequired.
    NOTE Systems that use Intel Nehalem processors offload the iSCSI digest calculations for software iSCSI, thus reducing the impact on performance.

HeaderDigest
    Increases data integrity. When header digest is enabled, the system performs a checksum over the header part of each iSCSI Protocol Data Unit (PDU) and verifies it using the CRC32C algorithm.

MaxOutstandingR2T
    Defines the number of Ready to Transfer (R2T) PDUs that can be in transition before an acknowledgment PDU is received.

FirstBurstLength
    Maximum amount of unsolicited data an iSCSI initiator can send to the target during the execution of a single SCSI command, in bytes.

MaxBurstLength
    Maximum SCSI data payload in a Data-In or a solicited Data-Out iSCSI sequence, in bytes.

MaxRecvDataSegLen
    Maximum data segment length, in bytes, that can be received in an iSCSI PDU.

NoopOutInterval
    Time interval, in seconds, between NOP-Out requests sent from your iSCSI initiator to an iSCSI target. The NOP-Out requests serve as the ping mechanism to verify that a connection between the iSCSI initiator and the iSCSI target is active. Supported only at the initiator level.

NoopOutTimeout
    Amount of time, in seconds, that can elapse before your host receives a NOP-In message. The message is sent by the iSCSI target in response to the NOP-Out request. When the NoopOutTimeout limit is exceeded, the initiator terminates the current session and starts a new one. Supported only at the initiator level.

RecoveryTimeout
    Amount of time, in seconds, that can elapse while a session recovery is performed. If the timeout exceeds its limit, the iSCSI initiator terminates the session.

DelayedAck
    Allows systems to delay acknowledgment of received data packets.
You can use the following vicfg-iscsi options to list parameter options. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.

Run vicfg-iscsi -W -l to list parameter options for the HBA.

vicfg-iscsi <conn_options> -W -l
[-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
vicfg-iscsi <conn_options> --parameter --list
[--ip <stor_ip_addr|stor_hostname>[:<portnum>] [--name <iscsi_name>]] <adapter_name>
The target (-i) and name (-n) options determine what the command applies to.

Option               Result
-i and -n            Command applies to static targets.
Only -i              Command applies to the discovery address.
Neither -i nor -n    Command applies to per-adapter parameters.

Run vicfg-iscsi -W -l -k to list iSCSI parameters and whether they are settable.

vicfg-iscsi <conn_options> -W -l -k
[-i <stor_ip_addr|stor_hostname>[:<port_num>] [-n <iscsi_name>]] <adapter_name>
vicfg-iscsi <conn_options> --parameter --list --detail
[--ip <stor_ip_addr|stor_hostname>[:<port_num>] [--name <iscsi_name>]] <adapter_name>

Run vicfg-iscsi -W -j to set iSCSI parameter options.

vicfg-iscsi <conn_options> -W -j <name>=<value>
[-i <stor_ip_addr|stor_hostname>[:<port_num>] [-n <iscsi_name>]] <adapter_name>
vicfg-iscsi <conn_options> --parameter --set <name>=<value>
[--ip <stor_ip_addr|stor_hostname>[:<port_num>] [--name <iscsi_name>]] <adapter_name>
The target (-i) and name (-n) options determine what the command applies to.

Option               Result
-i and -n            Command applies to per-target CHAP for static targets.
Only -i              Command applies to the discovery address.
Neither -i nor -n    Command applies to per-adapter CHAP.
If special characters are in the <name>=<value> sequence, for example, if you add a space, you must surround the sequence with double quotes ("<name> = <value>").
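To see why the quotes matter, the following Python sketch uses shlex, which follows shell-style word splitting, to compare the two forms (the parameter name and value are illustrative):

```python
import shlex  # shlex follows POSIX-shell word-splitting rules

cmd_unquoted = 'vicfg-iscsi -W -j MaxBurstLength = 262144 vmhba33'
cmd_quoted = 'vicfg-iscsi -W -j "MaxBurstLength = 262144" vmhba33'

unquoted = shlex.split(cmd_unquoted)  # the sequence splits into 3 words
quoted = shlex.split(cmd_quoted)      # the quotes keep it as 1 word

print(unquoted[3:6])  # ['MaxBurstLength', '=', '262144']
print(quoted[3])      # MaxBurstLength = 262144
```

With the quotes, the whole <name> = <value> sequence arrives as a single argument, which is what the command expects.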

Returning Parameters to Default Inheritance with vicfg-iscsi

The values of iSCSI parameters associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent.
For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.

If you use vicfg-iscsi, the value you set overrides the inherited value.
Inheritance is relevant only if you want to return a dynamic discovery address or a static discovery target to its inherited value. In that case, use the --reset <param_name> option, which requires the --name option for static discovery targets, but not for dynamic discovery addresses.
vicfg-iscsi <conn_options> --parameter --reset <param_name>
--ip <stor_ip_addr|stor_hostname>[:<port_num>] <adapter_name>
vicfg-iscsi <conn_options> -W -o <param_name>
-i <stor_ip_addr|stor_hostname>[:<port_num>] <adapter_name>

Enabling iSCSI Authentication

You can enable iSCSI authentication by using ESXCLI or vicfg-iscsi.

Enable iSCSI Authentication with ESXCLI

You can use the esxcli iscsi adapter auth commands to enable iSCSI authentication.
For information on iSCSI CHAP, see “Setting iSCSI CHAP,” on page 72.
Procedure
1 (Optional) Set the authentication information for CHAP.
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<adapter_name>
You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
Option               Command
Per-adapter CHAP     esxcli iscsi adapter auth chap set
Per-discovery CHAP   esxcli iscsi adapter discovery sendtarget auth chap set
Per-target CHAP      esxcli iscsi adapter target portal auth chap set
The following example sets adapter-level CHAP.

esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=User1 --chap_password=MySpecialPwd --level=preferred --secret=uni_secret --adapter=vmhba33
2 (Optional) Set the authentication information for mutual CHAP by running esxcli iscsi adapter auth chap set again with the -d option set to mutual and a different authentication user name and secret.

esxcli <conn_options> iscsi adapter auth chap set --direction=mutual --mchap_username=<m_name> --mchap_password=<m_pwd> --level=[prohibited, required] --secret=<string> --adapter=<adapter_name>
For <level>, specify prohibited or required.
Option       Description
prohibited   The host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it.
required     The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. You can set this value for mutual CHAP only if CHAP is set to chapRequired.
For direction, specify mutual.
I You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP. Use a dierent secret in CHAP and mutual CHAP.

Enable Mutual iSCSI Authentication with ESXCLI

Mutual authentication is supported for software iSCSI and dependent hardware iSCSI, but not for independent hardware iSCSI.
For information on iSCSI CHAP, see “Setting iSCSI CHAP,” on page 72.
Prerequisites
Verify that CHAP authentication is already set up when you start setting up mutual CHAP.

Verify that CHAP and mutual CHAP use different user names and passwords. The second user name and password are supported for mutual authentication on the storage side.

Verify that CHAP and mutual CHAP use compatible CHAP levels.
Procedure
1 Enable authentication.
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pw> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<adapter_name>
The specied chap_username and secret must be supported on the storage side.
2 List possible VMkernel NICs to bind.
esxcli <conn_options> iscsi logicalnetworkportal list
3 Enable mutual authentication.
esxcli <conn_options> iscsi adapter auth chap set --direction=mutual --mchap_username=<m_name> --mchap_password=<m_pwd> --level=[prohibited, required] --secret=<string> --adapter=<adapter_name>
The specied mchap_username and secret must be supported on the storage side.
4 After setup is complete, perform rediscovery and rescan all storage devices.
The following example performs the rediscovery and rescan operations.
esxcli <conn_options> iscsi adapter discovery rediscover
esxcli <conn_options> storage core adapter rescan --adapter=vmhba36

Enable iSCSI Authentication with vicfg-iscsi

You can use the vicfg-iscsi -A -c options to enable iSCSI authentication. Mutual authentication is supported for software iSCSI and dependent hardware iSCSI, but not for independent hardware iSCSI.
For information on iSCSI CHAP, see “Setting iSCSI CHAP,” on page 72.
Prerequisites
Verify that CHAP authentication is already set up when you start setting up mutual CHAP.

Verify that CHAP and mutual CHAP use different user names and passwords. The second user name and password are supported for mutual authentication on the storage side.

Verify that CHAP and mutual CHAP use compatible CHAP levels.
Procedure
1 Enable authentication on the ESXi host.
vicfg-iscsi <conn_options> -A -c <level> -m <auth_method> -u <auth_u_name> -w <chap_password>
[-i <stor_ip_addr|stor_hostname>[:<portnum>] [-n <iscsi_name>]] <adapter_name>
The specied user name and password must be supported on the storage side.
2 Enable mutual authentication on the ESXi host.
vicfg-iscsi <conn_options> -A -c <level> -m <auth_method> -b -u <ma_username>
-w <ma_password> [-i <stor_ip_addr|stor_hostname>[:<portnum>]
[-n <iscsi_name>]] <adapter_name>
3 After setup is complete, perform rediscovery and rescan all storage devices.

Set Up Ports for iSCSI Multipathing

With port binding, you create a separate VMkernel port for each physical NIC using 1:1 mapping.
You can add all network adapter and VMkernel port pairs to a single vSwitch. The vSphere Storage documentation explains in detail how to specify port binding.
You cannot set up ports for multipathing by using vicfg-iscsi.
In the examples below, specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.

IMPORTANT The ESXi 4.x ESXCLI commands for setting up iSCSI are no longer supported.
Prerequisites
Verify that you are familiar with iSCSI session removal. See “Removing iSCSI Sessions,” on page 99.
Procedure
1 Find out which uplinks are available for use with iSCSI adapters.
esxcli <conn_options> iscsi physicalnetworkportal list --adapter=<adapter_name>
2 Connect the software iSCSI or dependent hardware iSCSI initiator to the iSCSI VMkernel ports by running the following command for each port.

esxcli <conn_options> iscsi networkportal nic add --adapter=<adapter_name> --nic=<bound_nic>
3 Verify that the ports were added to the iSCSI initiator by running the following command.
esxcli <conn_options> iscsi networkportal list --adapter=<adapter_name>
4 (Optional) If there are active iSCSI sessions between your host and targets, discontinue them. See “Removing iSCSI Sessions,” on page 99.
5 Rescan the iSCSI initiator.
esxcli <conn_options> storage core adapter rescan --adapter <iscsi adapter>
6 To disconnect the iSCSI initiator from the ports, run the following command.
esxcli <conn_options> iscsi networkportal remove --adapter=<adapter_name> --nic=<bound_nic>
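Step 2 above runs the bind command once for each VMkernel port. As a sketch (the adapter and VMkernel NIC names are hypothetical), a short loop can generate the per-port command lines:

```python
# Hypothetical names; substitute your own adapter and bound VMkernel NICs.
adapter = "vmhba33"
bound_nics = ["vmk1", "vmk2"]  # 1:1 VMkernel ports from the port-binding setup

# One bind command per port, mirroring step 2 of the procedure.
cmds = [
    f"esxcli iscsi networkportal nic add --adapter={adapter} --nic={nic}"
    for nic in bound_nics
]
for cmd in cmds:
    print(cmd)
```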

Managing iSCSI Sessions

To communicate with each other, iSCSI initiators and targets establish iSCSI sessions. You can use esxcli iscsi session to list and manage iSCSI sessions for software iSCSI and dependent hardware iSCSI.

Introduction to iSCSI Session Management

By default, software iSCSI and dependent hardware iSCSI initiators start one iSCSI session between each initiator port and each target port.
If your iSCSI initiator or target has more than one port, your host can establish multiple sessions. The default number of sessions for each target equals the number of ports on the iSCSI adapter times the number of target ports. You can display all current sessions to analyze and debug them. You might add sessions to the default for several reasons.
Cloning sessions - Some iSCSI arrays support multiple sessions between the iSCSI adapter and target ports. If you clone an existing session on one of these arrays, the array presents more data paths for your adapter. Duplicate sessions do not persist across reboot. Additional sessions to the target might have performance benefits, but the result of cloning depends entirely on the array. You must log out from an iSCSI session if you want to clone a session. You can use the esxcli iscsi session add command to clone a session.

Enabling Header and Data Digest - If you are logged in to a session and want to enable the Header and Data Digest parameters, you must set the parameter, remove the session, and add the session back for the parameter change to take effect.
Establishing target-specific sessions - You can establish a session to a specific target port. This can be useful if your host connects to a single-port storage system that, by default, presents only one target port to your initiator, but can redirect additional sessions to a different target port. Establishing a new session between your iSCSI initiator and another target port creates an additional path to the storage system.
C Some storage systems do not support multiple sessions from the same initiator name or endpoint. Aempts to create multiple sessions to such targets can result in unpredictable behavior of your iSCSI environment.

Listing iSCSI Sessions

You can use esxcli iscsi session to list sessions.
The following example scenario uses the available commands. Run esxcli iscsi session --help and each command with --help for reference information. The example uses a configuration file to log in to the host. Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
I The ESXi 4.x ESXCLI commands for managing iSCSI sessions are not supported against ESXi
5.0 hosts.
List a software iSCSI session at the adapter level.
n
esxcli <conn_options> iscsi session list --adapter=<iscsi_adapter>
List a software iSCSI session at the target level.
n
esxcli <conn_options> iscsi session list --name=<target> --adapter=<iscsi_adapter>

Logging in to iSCSI Sessions

You can use esxcli iscsi session to log in to a session.
Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Log in to a session on the current software iSCSI or dependent hardware iSCSI configuration at the adapter level.

esxcli <conn_options> iscsi session add --adapter=<adapter_name>
The following example applies custom values.
esxcli --config /host-config-file iscsi session add --adapter=vmhba36
Log in to a session on the current software iSCSI or dependent hardware iSCSI configuration at the target level.

esxcli <conn_options> iscsi session add --name=<target> --adapter=<adapter_name>

The following example applies custom values.

esxcli --config /host-config-file iscsi session add --name=iqn.xxx --adapter=vmhba36
Add duplicate sessions with target and session IDs in the current software iSCSI or dependent hardware iSCSI configuration.

esxcli <conn_options> iscsi session add --name=<iqn.xxxx> --isid=<session_id> --adapter=<iscsi_adapter>

iqn.xxxx is the target IQN, which you can determine by listing all sessions. session_id is the session's iSCSI ID. The following example applies custom values.

esxcli --config /host-config-file iscsi session add --name=iqn.xxx --isid='00:02:3d:00:00:01' --adapter=vmhba36

Removing iSCSI Sessions

You can use esxcli iscsi session to remove iSCSI sessions.
Specify one of the options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of <conn_options>.
Remove sessions from the current software iSCSI or dependent hardware iSCSI configuration at the adapter level.

esxcli <conn_options> iscsi session remove --adapter=<iscsi_adapter>

The following example applies custom values.

esxcli iscsi session remove --adapter=vmhba36
Remove sessions from the current software iSCSI or dependent hardware iSCSI configuration at the target level.

esxcli <conn_options> iscsi session remove --name=<iqn> --adapter=<iscsi_adapter>

The following example applies custom values.

esxcli <conn_options> iscsi session remove --name=iqn.xxx --adapter=vmhba38
Remove sessions from the current software iSCSI or dependent hardware iSCSI configuration with target and session ID.

esxcli <conn_options> iscsi session remove --name=<iqn.xxxx> --isid=<session_id> --adapter=<iscsi_adapter>

iqn.xxxx is the target IQN, which you can determine by listing all sessions. session_id is the session's iSCSI ID. The following example applies custom values.

esxcli --config /host-config-file iscsi session remove --name=iqn.xxx --isid='00:02:3d:01:00:01' --adapter=vmhba36