
Citrix XenServer ® 6.0 Administrator's Guide

Published Friday, 02 March 2012
1.1 Edition
Copyright © 2012 Citrix Systems, Inc. All Rights Reserved. Version: 6.0
Citrix, Inc. 851 West Cypress Creek Road Fort Lauderdale, FL 33309 United States of America
Disclaimers
This document is furnished "AS IS." Citrix, Inc. disclaims all warranties regarding the contents of this document, including, but not limited to, implied warranties of merchantability and fitness for any particular purpose. This document may contain technical or other inaccuracies or typographical errors. Citrix, Inc. reserves the right to revise the information in this document at any time without notice. This document and the software described in this document constitute confidential information of Citrix, Inc. and its licensors, and are furnished under a license from Citrix, Inc.
Trademarks
Citrix®, XenServer®, XenCenter®
Citrix Systems, Inc., the Citrix logo, Citrix XenServer and Citrix XenCenter are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. All other trademarks and registered trademarks are property of their respective owners.

Contents

Document Overview .......................................................................................... 1
Introducing XenServer ........................................................................................................ 1
Benefits of Using XenServer ........................................................................................ 1
Administering XenServer ............................................................................................. 2
XenServer Editions ...................................................................................................... 2
New Features in XenServer 6.0 ........................................................................................... 2
XenServer Documentation .................................................................................................. 4
Managing Users ................................................................................................. 5
Authenticating Users With Active Directory (AD) .................................................................. 5
Configuring Active Directory Authentication ................................................................. 6
User Authentication .................................................................................................... 8
Removing Access for a User ........................................................................................ 9
Leaving an AD Domain .............................................................................................. 10
Role Based Access Control ................................................................................................ 10
Roles ........................................................................................................................ 11
Definitions of RBAC Roles and Permissions ................................................................. 12
Using RBAC with the CLI ........................................................................................... 17
To List All the Available Defined Roles in XenServer ............................................ 17
To Display a List of Current Subjects: ................................................................. 18
To Add a Subject to RBAC ................................................................................. 19
To Assign an RBAC Role to a Created subject ...................................................... 19
To Change a Subject's RBAC Role: ...................................................................... 19
Auditing ................................................................................................................... 20
Audit Log xe CLI Commands .............................................................................. 20
To Obtain All Audit Records From the Pool ......................................................... 20
To Obtain Audit Records of the Pool Since a Precise Millisecond Timestamp .......... 20
To Obtain Audit Records of the Pool Since a Precise Minute Timestamp ................ 20
How Does XenServer Compute the Roles for the Session? ........................................... 20
XenServer Hosts and Resource Pools .............................................................. 22
Hosts and Resource Pools Overview .................................................................................. 22
Requirements for Creating Resource Pools ......................................................................... 22
Creating a Resource Pool .................................................................................................. 23
Creating Heterogeneous Resource Pools ............................................................................ 23
Adding Shared Storage ...................................................................................................... 24
Removing a XenServer Host from a Resource Pool .............................................................. 25
Preparing a Pool of XenServer Hosts for Maintenance ........................................................ 25
High Availability ................................................................................................................ 26
HA Overview ............................................................................................................ 26
Overcommitting ................................................................................................ 26
Overcommitment Warning ................................................................................ 27
Host Fencing .................................................................................................... 27
Configuration Requirements ...................................................................................... 27
Restart Priorities ....................................................................................................... 28
Enabling HA on a XenServer Pool ...................................................................................... 29
Enabling HA Using the CLI ......................................................................................... 29
Removing HA Protection from a VM using the CLI ...................................................... 29
Recovering an Unreachable Host ............................................................................... 30
Shutting Down a host When HA is Enabled ................................................................ 30
Shutting Down a VM When it is Protected by HA ....................................................... 30
Host Power On ................................................................................................................. 30
Powering on Hosts Remotely ..................................................................................... 30
Using the CLI to Manage Host Power On ................................................................... 31
To Enable Host Power On Using the CLI ............................................................. 31
To Turn on Hosts Remotely Using the CLI ........................................................... 31
Configuring a Custom Script for XenServer's Host Power On Feature ............................ 31
Key/Value Pairs ................................................................................................. 32
host.power_on_mode ............................................................................... 32
host.power_on_config .............................................................................. 32
Sample Script ................................................................................................... 32
Storage ............................................................................................................. 34
Storage Overview ............................................................................................................. 34
Storage Repositories (SRs) ......................................................................................... 34
Virtual Disk Images (VDIs) ......................................................................................... 34
Physical Block Devices (PBDs) .................................................................................... 34
Virtual Block Devices (VBDs) ..................................................................................... 35
Summary of Storage objects ..................................................................................... 35
Virtual Disk Data Formats ......................................................................................... 35
VHD-based VDIs ............................................................................................... 35
VHD Chain Coalescing ............................................................................... 36
Space Utilization ....................................................................................... 36
LUN-based VDIs ................................................................................................ 36
Storage Repository Types .................................................................................................. 37
Local LVM ................................................................................................................. 37
Creating a Local LVM SR (lvm) ........................................................................... 38
Local EXT3 VHD ........................................................................................................ 38
Creating a Local EXT3 SR (ext) ........................................................................... 38
udev ........................................................................................................................ 38
ISO ........................................................................................................................... 38
Software iSCSI Support ............................................................................................. 39
XenServer Host iSCSI configuration .................................................................... 39
Citrix StorageLink SRs ................................................................................................ 39
Upgrading XenServer with StorageLink SRs ......................................................... 40
Creating a Shared StorageLink SR ...................................................................... 40
Managing Hardware Host Bus Adapters (HBAs) .......................................................... 44
Sample QLogic iSCSI HBA setup ......................................................................... 44
Removing HBA-based SAS, FC or iSCSI Device Entries .......................................... 45
LVM over iSCSI ......................................................................................................... 45
Creating a Shared LVM Over iSCSI SR Using the Software iSCSI Initiator (lvmoiscsi) ................ 45
Creating a Shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba) .......... 46
NFS VHD .................................................................................................................. 48
Creating a Shared NFS SR (NFS) ......................................................................... 49
LVM over Hardware HBA ........................................................................................... 49
Storage Configuration ....................................................................................................... 49
Creating Storage Repositories .................................................................................... 49
Upgrading LVM Storage from XenServer 5.0 or Earlier ................................................. 50
LVM Performance Considerations .............................................................................. 50
VDI Types ......................................................................................................... 50
Creating a Raw Virtual Disk Using the xe CLI ...................................................... 51
Converting Between VDI Formats .............................................................................. 51
Probing an SR ........................................................................................................... 51
Storage Multipathing ................................................................................................ 54
MPP RDAC Driver Support for LSI Arrays ...................................................................... 55
Managing Storage Repositories ......................................................................................... 55
Destroying or Forgetting a SR .................................................................................... 55
Introducing an SR ..................................................................................................... 55
Resizing an SR .......................................................................................................... 56
Converting Local Fibre Channel SRs to Shared SRs ...................................................... 56
Moving Virtual Disk Images (VDIs) Between SRs ......................................................... 57
Copying All of a VM's VDIs to a Different SR ....................................................... 57
Copying Individual VDIs to a Different SR ........................................................... 57
Adjusting the Disk IO Scheduler ................................................................................ 57
Automatically Reclaiming Space When Deleting Snapshots .......................................... 58
Reclaiming Space Using the Off Line Coalesce Tool ............................................. 58
Virtual Disk QoS Settings ................................................................................................... 59
Configuring VM Memory ................................................................................. 61
What is Dynamic Memory Control (DMC)? ......................................................................... 61
The Concept of Dynamic Range ................................................................................. 61
The Concept of Static Range ..................................................................................... 62
DMC Behaviour ........................................................................................................ 62
How Does DMC Work? ............................................................................................. 62
Memory Constraints ................................................................................................. 63
Supported Operating Systems ................................................................................... 63
xe CLI Commands ............................................................................................................. 64
Display the Static Memory Properties of a VM ........................................................... 64
Display the Dynamic Memory Properties of a VM ....................................................... 65
Updating Memory Properties .................................................................................... 65
Update Individual Memory Properties ....................................................................... 66
Upgrade Issues ................................................................................................................. 66
Workload Balancing Interaction ......................................................................................... 66
Xen Memory Usage ......................................................................................... 67
Setting Control Domain Memory ....................................................................................... 67
Networking ...................................................................................................... 69
Networking Support .......................................................................................................... 69
vSwitch Networks ............................................................................................................. 69
XenServer Networking Overview ....................................................................................... 70
Network Objects ....................................................................................................... 70
Networks .................................................................................................................. 71
VLANs ...................................................................................................................... 71
Using VLANs with Management Interfaces ......................................................... 71
Using VLANs with Virtual Machines ................................................................... 71
Using VLANs with Dedicated Storage NICs .......................................................... 71
Combining Management Interfaces and Guest VLANs on a Single Host NIC ........... 71
NIC Bonds ................................................................................................................ 71
Switch Configuration ......................................................................................... 73
Active-Active Bonding ....................................................................................... 74
Active-Passive Bonding ...................................................................................... 75
Initial Networking Configuration ................................................................................ 76
Managing Networking Configuration .................................................................................. 76
Cross-Server Private networks ................................................................................... 76
Creating Networks in a Standalone Server ................................................................. 77
Creating Networks in Resource Pools ......................................................................... 78
Creating VLANs ......................................................................................................... 78
Creating NIC Bonds on a Standalone Host .................................................................. 78
Creating a NIC bond ......................................................................................... 79
Controlling the MAC Address of the Bond .......................................................... 79
Reverting NIC bonds ......................................................................................... 80
Creating NIC bonds in resource pools ........................................................................ 80
Adding NIC bonds to new resource pools ........................................................... 80
Adding NIC bonds to an existing pool ................................................................ 81
Configuring a dedicated storage NIC .......................................................................... 81
Using SR-IOV Enabled NICs ........................................................................................ 82
Controlling the rate of outgoing data (QoS) ................................................................ 82
Changing networking configuration options ............................................................... 83
Hostname ......................................................................................................... 84
DNS servers ...................................................................................................... 84
Changing IP address configuration for a standalone host ..................................... 84
Changing IP address configuration in resource pools ........................................... 84
Primary management interface ......................................................................... 85
Disabling management access ........................................................................... 85
Adding a new physical NIC ................................................................................ 85
Networking Troubleshooting ............................................................................................. 85
Diagnosing network corruption ................................................................................. 86
Recovering from a bad network configuration ............................................................ 86
Disaster Recovery and Backup ........................................................................ 87
Understanding XenServer DR ............................................................................................. 87
DR Infrastructure Requirements ........................................................................................ 88
Deployment Considerations .............................................................................................. 89
Steps to Take Before a Disaster ................................................................................. 89
Steps to Take After a Disaster ................................................................................... 89
Steps to Take After a Recovery .................................................................................. 89
Enabling Disaster Recovery in XenCenter ........................................................................... 89
Recovering VMs and vApps in the Event of Disaster (Failover) ............................................. 90
Restoring VMs and vApps to the Primary Site After Disaster (Failback) ................................. 90
Test Failover ..................................................................................................................... 91
vApps ............................................................................................................................... 92
Using the Manage vApps dialog box in XenCenter ...................................................... 93
Backing Up and Restoring XenServer Hosts and VMs .......................................................... 93
Backing up Virtual Machine metadata ....................................................................... 94
Backing up single host installations .................................................................... 95
Backing up pooled installations ......................................................................... 95
Backing up XenServer hosts ...................................................................................... 95
Backing up VMs ........................................................................................................ 96
VM Snapshots .................................................................................................................. 97
Regular Snapshots .................................................................................................... 97
Quiesced Snapshots .................................................................................................. 97
Snapshots with memory ........................................................................................... 97
Creating a VM Snapshot ........................................................................................... 97
Creating a snapshot with memory ............................................................................. 98
To list all of the snapshots on a XenServer pool .......................................................... 98
To list the snapshots on a particular VM .................................................................... 98
Restoring a VM to its previous state .......................................................................... 99
Deleting a snapshot .......................................................................................... 99
Snapshot Templates ................................................................................................ 100
Creating a template from a snapshot ............................................................... 100
Exporting a snapshot to a template ................................................................. 100
Advanced Notes for Quiesced Snapshots .......................................................... 101
VM Protection and Recovery ........................................................................................... 102
Naming convention for VM archive folders ............................................................... 102
Coping with machine failures .......................................................................................... 102
Member failures ..................................................................................................... 103
Master failures ....................................................................................................... 103
Pool failures ........................................................................................................... 104
Coping with Failure due to Configuration Errors ........................................................ 104
Physical Machine failure .......................................................................................... 104
Monitoring and Managing XenServer ........................................................... 106
Alerts ............................................................................................................................. 106
Customizing Alerts .................................................................................................. 107
Configuring Email Alerts .......................................................................................... 108
Custom Fields and Tags ................................................................................................... 109
Custom Searches ............................................................................................................ 109
Determining throughput of physical bus adapters ............................................................. 109
Troubleshooting ............................................................................................. 110
XenServer host logs ........................................................................................................ 110
Sending host log messages to a central server .......................................................... 110
XenCenter logs ............................................................................................................... 111
Troubleshooting connections between XenCenter and the XenServer host ......................... 111
A. Command Line Interface ........................................................................... 112
Basic xe Syntax ............................................................................................................... 112
Special Characters and Syntax ......................................................................................... 113
Command Types ............................................................................................................. 113
Parameter Types ..................................................................................................... 114
Low-level Parameter Commands .............................................................................. 115
Low-level List Commands ........................................................................................ 115
xe Command Reference .................................................................................................. 116
Appliance Commands ............................................................................................. 116
Appliance Parameters ..................................................................................... 116
appliance-assert-can-be-recovered ................................................................... 116
appliance-create ............................................................................................. 116
appliance-destroy ........................................................................................... 117
appliance-recover ........................................................................................... 117
appliance-shutdown ........................................................................................ 117
appliance-start ................................................................................................ 117
Audit Commands .................................................................................................... 117
audit-log-get parameters ................................................................................. 117
audit-log-get ................................................................................................... 117
Bonding Commands ................................................................................................ 118
Bond Parameters ............................................................................................ 118
bond-create .................................................................................................... 118
bond-destroy .................................................................................................. 118
CD Commands ........................................................................................................ 118
CD Parameters ................................................................................................ 118
cd-list ............................................................................................................. 119
Console Commands ................................................................................................ 120
Console Parameters ........................................................................................ 120
Disaster Recovery (DR) Commands .......................................................................... 120
drtask-create .................................................................................................. 120
drtask-destroy ................................................................................................. 121
vm-assert-can-be-recovered ............................................................................ 121
appliance-assert-can-be-recovered ................................................................... 121
appliance-recover ........................................................................................... 121
vm-recover ..................................................................................................... 121
sr-enable-database-replication ......................................................................... 121
sr-disable-database-replication ........................................................................ 121
Example Usage ............................................................................................... 121
Event Commands .................................................................................................... 122
Event Classes .................................................................................................. 122
event-wait ...................................................................................................... 123
GPU Commands ...................................................................................................... 123
Physical GPU (pGPU) Parameters ..................................................................... 123
GPU Group Parameters ................................................................................... 124
Virtual GPU (vGPU) Parameters ....................................................................... 124
vgpu-create .................................................................................................... 125
vgpu-destroy ................................................................................................... 125
Host Commands ..................................................................................................... 125
Host Selectors ................................................................................................. 125
Host Parameters ............................................................................................. 126
host-backup .................................................................................................... 129
host-bugreport-upload .................................................................................... 129
host-crashdump-destroy .................................................................................. 129
host-crashdump-upload ................................................................................... 129
host-disable .................................................................................................... 129
host-dmesg ..................................................................................................... 129
host-emergency-management-reconfigure ....................................................... 130
host-enable .................................................................................................... 130
host-evacuate ................................................................................................. 130
host-forget ..................................................................................................... 130
host-get-system-status .................................................................................... 130
host-get-system-status-capabilities ................................................................... 131
host-is-in-emergency-mode ............................................................................. 132
host-apply-edition .......................................................................................... 132
host-license-add ............................................................................................. 132
host-license-view ............................................................................................ 132
host-logs-download ......................................................................................... 132
host-management-disable ............................................................................... 132
host-management-reconfigure ......................................................................... 133
host-power-on ................................................................................................ 133
host-get-cpu-features ...................................................................................... 133
host-set-cpu-features ...................................................................................... 133
host-set-power-on .......................................................................................... 133
host-reboot .................................................................................................... 133
host-restore .................................................................................................... 134
host-set-hostname-live .................................................................................... 134
host-shutdown ................................................................................................ 134
host-syslog-reconfigure ................................................................................... 134
host-data-source-list ....................................................................................... 135
host-data-source-record .................................................................................. 135
host-data-source-forget ................................................................................... 135
host-data-source-query ................................................................................... 135
Log Commands ....................................................................................................... 136
log-set-output ................................................................................................. 136
Message Commands ............................................................................................... 136
Message Parameters ....................................................................................... 136
message-create ............................................................................................... 136
message-destroy ............................................................................................. 137
message-list .................................................................................................... 137
Network Commands ............................................................................................... 137
Network Parameters ....................................................................................... 137
network-create ............................................................................................... 138
network-destroy ............................................................................................. 138
Patch (Update) Commands ...................................................................................... 138
Patch Parameters ............................................................................................ 138
patch-apply .................................................................................................... 139
patch-clean ..................................................................................................... 139
patch-pool-apply ............................................................................................. 139
patch-precheck ............................................................................................... 139
patch-upload .................................................................................................. 139
PBD Commands ...................................................................................................... 139
PBD Parameters .............................................................................................. 139
pbd-create ...................................................................................................... 140
pbd-destroy .................................................................................................... 140
pbd-plug ......................................................................................................... 140
pbd-unplug ..................................................................................................... 140
PIF Commands ........................................................................................................ 140
PIF Parameters ............................................................................................... 141
pif-forget ........................................................................................................ 143
pif-introduce ................................................................................................... 143
pif-plug ........................................................................................................... 143
pif-reconfigure-ip ............................................................................................ 143
pif-scan .......................................................................................................... 144
pif-unplug ....................................................................................................... 144
Pool Commands ...................................................................................................... 144
Pool Parameters ............................................................................................. 144
pool-designate-new-master ............................................................................. 145
pool-dump-database ....................................................................................... 145
pool-eject ....................................................................................................... 146
pool-emergency-reset-master .......................................................................... 146
pool-emergency-transition-to-master ............................................................... 146
pool-ha-enable ............................................................................................... 146
pool-ha-disable ............................................................................................... 146
pool-join ......................................................................................................... 146
pool-recover-slaves ......................................................................................... 146
pool-restore-database ..................................................................................... 146
pool-sync-database ......................................................................................... 146
Storage Manager Commands ................................................................................... 147
SM Parameters ............................................................................................... 147
SR Commands ........................................................................................................ 147
SR Parameters ................................................................................................ 147
sr-create ......................................................................................................... 148
sr-destroy ....................................................................................................... 149
sr-enable-database-replication ......................................................................... 149
sr-disable-database-replication ........................................................................ 149
sr-forget ......................................................................................................... 149
sr-introduce .................................................................................................... 149
sr-probe ......................................................................................................... 149
sr-scan ........................................................................................................... 149
Task Commands ...................................................................................................... 150
Task Parameters .............................................................................................. 150
task-cancel ..................................................................................................... 151
Template Commands .............................................................................................. 151
Template Parameters ...................................................................................... 151
template-export .............................................................................................. 157
Update Commands ................................................................................................. 157
update-upload ................................................................................................ 158
User Commands ..................................................................................................... 158
user-password-change ..................................................................................... 158
VBD Commands ...................................................................................................... 158
VBD Parameters .............................................................................................. 158
vbd-create ...................................................................................................... 159
vbd-destroy .................................................................................................... 160
vbd-eject ........................................................................................................ 160
vbd-insert ....................................................................................................... 160
vbd-plug ......................................................................................................... 160
vbd-unplug ..................................................................................................... 160
VDI Commands ....................................................................................................... 160
VDI Parameters ............................................................................................... 161
vdi-clone ........................................................................................................ 162
vdi-copy ......................................................................................................... 162
vdi-create ....................................................................................................... 162
vdi-destroy ..................................................................................................... 163
vdi-forget ........................................................................................................ 163
vdi-import ...................................................................................................... 163
vdi-introduce .................................................................................................. 163
vdi-resize ........................................................................................................ 163
vdi-snapshot ................................................................................................... 164
vdi-unlock ....................................................................................................... 164
VIF Commands ....................................................................................................... 164
VIF Parameters ............................................................................................... 164
vif-create ........................................................................................................ 166
vif-destroy ...................................................................................................... 166
vif-plug ........................................................................................................... 166
vif-unplug ....................................................................................................... 166
VLAN Commands .................................................................................................... 166
vlan-create ..................................................................................................... 167
pool-vlan-create .............................................................................................. 167
vlan-destroy .................................................................................................... 167
VM Commands ....................................................................................................... 167
VM Selectors .................................................................................................. 167
VM Parameters ............................................................................................... 167
vm-assert-can-be-recovered ............................................................................ 174
vm-cd-add ...................................................................................................... 174
vm-cd-eject .................................................................................................... 174
vm-cd-insert ................................................................................................... 174
vm-cd-list ....................................................................................................... 174
vm-cd-remove ................................................................................................ 174
vm-clone ........................................................................................................ 175
vm-compute-maximum-memory ...................................................................... 175
vm-copy ......................................................................................................... 175
vm-crashdump-list .......................................................................................... 175
vm-data-source-list ......................................................................................... 176
vm-data-source-record .................................................................................... 176
vm-data-source-forget ..................................................................................... 176
vm-data-source-query ..................................................................................... 176
vm-destroy ..................................................................................................... 177
vm-disk-add .................................................................................................... 177
vm-disk-list ..................................................................................................... 177
vm-disk-remove .............................................................................................. 177
vm-export ....................................................................................................... 177
vm-import ...................................................................................................... 178
vm-install ........................................................................................................ 178
vm-memory-shadow-multiplier-set .................................................................. 179
vm-migrate ..................................................................................................... 179
vm-reboot ...................................................................................................... 179
vm-recover ..................................................................................................... 179
vm-reset-powerstate ....................................................................................... 179
vm-resume ..................................................................................................... 180
vm-shutdown ................................................................................................. 180
vm-start ......................................................................................................... 180
vm-suspend .................................................................................................... 180
vm-uninstall .................................................................................................... 181
vm-vcpu-hotplug ............................................................................................. 181
vm-vif-list ....................................................................................................... 181
Workload Balancing XE Commands .......................................................................... 181
pool-initialize-wlb ............................................................................................ 181
pool-param-set other-config ............................................................................ 181
pool-retrieve-wlb-diagnostics ........................................................................... 182
host-retrieve-wlb-evacuate-recommendations .................................................. 182
vm-retrieve-wlb-recommendations .................................................................. 182
pool-certificate-list .......................................................................................... 182
pool-certificate-install ...................................................................................... 182
pool-certificate-sync ........................................................................................ 182
pool-param-set ............................................................................................... 182
pool-deconfigure-wlb ...................................................................................... 183
pool-retrieve-wlb-configuration ....................................................................... 183
pool-retrieve-wlb-recommendations ............................................................... 183
pool-retrieve-wlb-report ................................................................................. 183
pool-send-wlb-configuration ........................................................................... 184
B. Workload Balancing Service Commands ................................................... 186
Service Commands ......................................................................................................... 186
Logging in to the Workload Balancing Virtual Appliance ............................................ 186
service workloadbalancing restart ............................................................................ 186
service workloadbalancing start ............................................................................... 186
service workloadbalancing stop ............................................................................... 186
service workloadbalancing status ............................................................................. 186
Modifying the Workload Balancing configuration options .......................................... 187
Editing the Workload Balancing configuration file ..................................................... 187
Increasing the Detail in the Workload Balancing Log ................................................. 188

Document Overview

This document is a system administrator's guide for Citrix XenServer®, the complete server virtualization platform from Citrix®. It contains procedures to guide you through configuring a XenServer deployment. In particular, it focuses on setting up storage, networking, and resource pools, and on administering XenServer hosts using the xe command line interface.
This document covers the following topics:
• Managing users with Active Directory and Role Based Access Controls
• Creating resource pools and setting up High Availability
• Configuring and managing storage repositories
• Configuring virtual machine memory using Dynamic Memory Control
• Setting control domain memory on a XenServer host
• Configuring networking
• Recovering virtual machines using Disaster Recovery and backing up data
• Monitoring and managing XenServer
• Troubleshooting XenServer
• Using the XenServer xe command line interface

Introducing XenServer

Citrix XenServer® is the complete server virtualization platform from Citrix®. The XenServer package contains all you need to create and manage a deployment of virtual x86 computers running on Xen®, the open-source paravirtualizing hypervisor with near-native performance. XenServer is optimized for both Windows and Linux virtual servers.
XenServer runs directly on server hardware without requiring an underlying operating system, which results in an efficient and scalable system. XenServer works by abstracting elements from the physical machine (such as hard drives, resources and ports) and allocating them to the virtual machines running on it.
A virtual machine (VM) is a computer composed entirely of software that can run its own operating system and applications as if it were a physical computer. A VM behaves exactly like a physical computer and contains its own virtual (software-based) CPU, RAM, hard disk and network interface card (NIC).
XenServer lets you create VMs, take VM disk snapshots and manage VM workloads. For a comprehensive list of major XenServer features and editions, visit www.citrix.com/xenserver.

Benefits of Using XenServer

Using XenServer reduces costs by:
• Consolidating multiple VMs onto physical servers
• Reducing the number of separate disk images that need to be managed
• Allowing for easy integration with existing networking and storage infrastructures
Using XenServer increases flexibility by:
• Allowing you to schedule zero downtime maintenance by using XenMotion to live migrate VMs between XenServer hosts
• Increasing availability of VMs by using High Availability to configure policies that restart VMs on another XenServer host if one fails
• Increasing portability of VM images, as one VM image will work on a range of deployment infrastructures

Administering XenServer

There are two methods by which to administer XenServer: XenCenter and the XenServer Command-Line Interface (CLI).
XenCenter is a graphical, Windows-based user interface. XenCenter allows you to manage XenServer hosts, pools and shared storage, and to deploy, manage and monitor VMs from your Windows desktop machine.
The XenCenter Help is a great resource for getting started with XenCenter.
The XenServer Command-line Interface (CLI) allows you to administer XenServer using the Linux-based xe commands.
For a comprehensive list of xe commands and descriptions, see the XenServer Administrator's Guide.

XenServer Editions

The features available in XenServer depend on the edition. The four editions of XenServer are:
Citrix XenServer (Free): Proven virtualization platform that delivers uncompromised performance, scale, and flexibility at no cost.
Citrix XenServer Advanced Edition: Key high availability and advanced management tools that take virtual infrastructure to the next level.
Citrix XenServer Enterprise Edition: Essential integration and optimization capabilities for production deployments of virtual machines.
Citrix XenServer Platinum Edition: Advanced automation and cloud computing features for enterprise-wide virtual environments.
For more information about how the XenServer edition affects the features available, visit www.citrix.com/xenserver.

New Features in XenServer 6.0

XenServer 6.0 includes a number of new features and ongoing improvements, including:
Integrated Site Recovery (Disaster Recovery):
• Automated remote data replication between storage arrays with fast recovery and failback capabilities. Integrated Site Recovery replaces StorageLink Gateway Site Recovery used in previous versions, removes the Windows VM requirement, and works with any iSCSI or Hardware HBA storage repository.
Integrated StorageLink:
• Access to use existing storage array-based features such as data replication, de-duplication, snapshot and cloning. Replaces the StorageLink Gateway technology used in previous editions and removes the requirement to run a VM with the StorageLink components.
GPU Pass-Through:
• Enables a physical GPU to be assigned to a VM providing high-end graphics. Allows applications to leverage GPU instructions in XenDesktop VDI deployments with HDX 3D Pro.
Virtual Appliance Support (vApp):
• Ability to create multi-VM and boot sequenced virtual appliances (vApps) that integrate with Integrated Site Recovery and High Availability. vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard.
Rolling Pool Upgrade Wizard:
• Simplifies upgrades (automated or semi-automated) to XenServer 6.0 with a wizard that performs pre-checks and provides a step-by-step process that blocks unsupported upgrades.
Microsoft SCVMM and SCOM Support:
• Manage XenServer hosts and VMs with System Center Virtual Machine Manager (SCVMM) 2012. System Center Operations Manager (SCOM) 2012 will also be able to manage and monitor XenServer hosts and VMs. System Center integration is available with a special supplemental pack from Citrix. For more information refer to Microsoft System Center Virtual Machine Manager 2012.
Distributed Virtual Switch Improvements:
• A new fail-safe mode allows Cross-Server Private Networks, ACLs, QoS, RSPAN and NetFlow settings to continue to be applied to a running VM in the event of vSwitch Controller failure.
Increased Performance and Scale:
• Supported limits have been increased to 1 TB memory for XenServer hosts, and up to 16 virtual processors and 128 GB virtual memory for VMs. Improved XenServer Tools with a smaller footprint.
Networking Improvements:
• Open vSwitch is now the default networking stack in XenServer 6.0 and provides formal support for Active-Backup NIC bonding.
VM Import and Export Improvements:
• Full support for VM disk and OVF appliance imports directly from XenCenter with the ability to change VM parameters (virtual processor, virtual memory, virtual interfaces, and target storage repository) with the Import wizard. Full OVF import support for XenServer, XenConvert and VMware.
SR-IOV Improvements:
• Improved scalability and certification with the SR-IOV Test Kit. Experimental SR-IOV with XenMotion support with Solarflare SR-IOV adapters.
Simplified Installer:
• Host installations only require a single ISO.
Enhanced Guest OS Support:
• Support for Ubuntu 10.04 (32/64-bit).
• Updated support for Debian Squeeze 6.0 64-bit, Oracle Enterprise Linux 6.0 (32/64-bit), and SLES 10 SP4 (32/64-bit).
• Experimental VM templates for CentOS 6.0 (32/64-bit), Ubuntu 10.10 (32/64-bit) and Solaris 10.
Workload Balancing Improvements:
• New, ready-to-use Linux-based virtual appliance with a smaller footprint replaces the Windows-based virtual appliance and eliminates the Windows licensing dependency.
XenDesktop Enhancements:
• HDX enhancements for optimized user experience with virtual desktops, GPU Pass-Through, and increased VM and XenServer host limits.
VM Protection and Recovery:
• Now available for Advanced, Enterprise and Platinum Edition customers.
NFS Support for High Availability:
• HA Heartbeat disk can now reside on an NFS storage repository.
XenCenter Improvements:
• XenCenter operations now run in parallel, and XenCenter will be available in Japanese and Simplified Chinese (ETA Q4 2011).
Host Architectural Improvements:
• XenServer 6.0 now runs on the Xen 4.1 hypervisor, provides GPT support and a smaller, more scalable Dom0.

XenServer Documentation

XenServer documentation shipped with this release includes:
Release Notes cover known issues that affect this release.
XenServer Quick Start Guide provides an introduction for new users to the XenServer environment and components. This guide steps through the installation and configuration essentials to get XenServer and the XenCenter management console up and running quickly. After installation, it demonstrates how to create a Windows VM, VM template and pool of XenServer hosts. It introduces basic administrative tasks and advanced features, such as shared storage, VM snapshots and XenMotion live migration.
XenServer Installation Guide steps through the installation, configuration and initial operation of XenServer and the XenCenter management console.
XenServer Virtual Machine Installation Guide describes how to install Windows and Linux VMs within a XenServer environment. This guide explains how to create new VMs from installation media, from VM templates included in the XenServer package and from existing physical machines (P2V). It explains how to import disk images and how to import and export appliances.
XenServer Administrator's Guide gives an in-depth description of the tasks involved in configuring a XenServer deployment, including setting up storage, networking and pools. It describes how to administer XenServer using the xe Command Line Interface.
vSwitch Controller User Guide is a comprehensive user guide to the vSwitch and Controller for XenServer.
Supplemental Packs and the DDK introduces the XenServer Driver Development Kit, which can be used to modify and extend the functionality of XenServer.
XenServer Software Development Kit Guide presents an overview of the XenServer SDK. It includes code samples that demonstrate how to write applications that interface with XenServer hosts.
XenAPI Specification is a reference guide for programmers to the XenServer API.
For additional resources, visit the Citrix Knowledge Center.

Managing Users

Defining users, groups, roles and permissions allows you to control who has access to your XenServer hosts and pools and what actions they can perform.
When you first install XenServer, a user account is added to XenServer automatically. This account is the local super user (LSU), or root, which is authenticated locally by the XenServer computer.
The local super user (LSU), or root, is a special user account used for system administration and has all rights or permissions. In XenServer, the local super user is the default account at installation. The LSU is authenticated by XenServer and not an external authentication service. This means that if the external authentication service fails, the LSU can still log in and manage the system. The LSU can always access the XenServer physical server through SSH.
You can create additional users by adding their Active Directory accounts through either the XenCenter's Users tab or the CLI. All editions of XenServer can add user accounts from Active Directory. However, only XenServer Enterprise and Platinum editions let you assign these Active Directory accounts different levels of permissions (through the Role Based Access Control (RBAC) feature). If you do not use Active Directory in your environment, you are limited to the LSU account.
The permissions assigned to users when you first add their accounts vary according to your version of XenServer:
• In the XenServer and XenServer Advanced edition, when you create (add) new users, XenServer automatically grants the accounts access to all features available in that version.
• In the XenServer Enterprise and Platinum editions, when you create new users, XenServer does not assign newly created user accounts roles automatically. As a result, these accounts do not have any access to the XenServer pool until you assign them a role.
If you do not have one of these editions, you can add users from Active Directory. However, all users will have the Pool Administrator role.
These permissions are granted through roles, as discussed in the section called “Authenticating Users With Active Directory (AD)”.

Authenticating Users With Active Directory (AD)

If you want to have multiple user accounts on a server or a pool, you must use Active Directory user accounts for authentication. This lets XenServer users log in to a pool's XenServers using their Windows domain credentials.
The only way you can configure varying levels of access for specific users is by enabling Active Directory authentication, adding user accounts, and assigning roles to those accounts.
Active Directory users can use the xe CLI (passing appropriate -u and -pw arguments) and also connect to the host using XenCenter. Authentication is done on a per-resource pool basis.
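For example, an Active Directory user could run a remote xe command against a host by passing their domain credentials with the standard xe connection options; the host address, user name and command below are placeholders:
xe -s <xenserver_host> -u mydomain\\user1 -pw <password> vm-list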
Access is controlled by the use of subjects. A subject in XenServer maps to an entity on your directory server (either a user or a group). When external authentication is enabled, the credentials used to create a session are first checked against the local root credentials (in case your directory server is unavailable) and then against the subject list. To permit access, you must create a subject entry for the person or group you wish to grant access to. This can be done using XenCenter or the xe CLI.
If you are familiar with XenCenter, note that the XenServer CLI uses slightly different terminology to refer to Active Directory and user account features:
XenCenter Term XenServer CLI Term
Users Subjects
Add users Add subjects
Understanding Active Directory Authentication in the XenServer Environment
Even though XenServers are Linux-based, XenServer lets you use Active Directory accounts for XenServer user accounts. To do so, it passes Active Directory credentials to the Active Directory domain controller.
When added to XenServer, Active Directory users and groups become XenServer subjects, generally referred to as simply users in XenCenter. When a subject is registered with XenServer, users/groups are authenticated with Active Directory on login and do not need to qualify their user name with a domain name.
Note:
By default, if you do not qualify the user name (that is, you do not enter either mydomain\myuser or myuser@mydomain.com), XenCenter always attempts to log users in to Active Directory authentication servers using the domain to which it is currently joined. The exception to this is the LSU account, which XenCenter always authenticates locally (that is, on the XenServer) first.
The external authentication process works as follows:
1. The credentials supplied when connecting to a server are passed to the Active Directory domain controller for authentication.
2. The domain controller checks the credentials. If they are invalid, the authentication fails immediately.
3. If the credentials are valid, the Active Directory controller is queried to get the subject identifier and group membership associated with the credentials.
4. If the subject identifier matches the one stored in the XenServer, the authentication is completed successfully.
When you join a domain, you enable Active Directory authentication for the pool. However, when a pool is joined to a domain, only users in that domain (or a domain with which it has trust relationships) can connect to the pool.
Note:
Manually updating the DNS configuration of a DHCP-configured network PIF is unsupported and might cause Active Directory integration, and consequently user authentication, to fail or stop working.
Upgrading XenServer
When you upgrade from an earlier version of XenServer, any user accounts created in the previous XenServer version are assigned the role of pool-admin. This is done for backwards compatibility reasons. As a result, if you are upgrading from a previous version of XenServer, make sure you revisit the role associated with each user account to make sure it is still appropriate.

Configuring Active Directory Authentication

XenServer supports use of Active Directory servers using Windows 2003 or later.
Active Directory authentication for a XenServer host requires that the same DNS servers are used for both the Active Directory server (configured to allow for interoperability) and the XenServer host. In some configurations, the Active Directory server may provide the DNS itself. This can be achieved either by using DHCP to provide the IP address and a list of DNS servers to the XenServer host, or by setting values in the PIF objects, or by using the installer if a manual static configuration is used.
Citrix recommends enabling DHCP to broadcast host names. In particular, the host names localhost or linux should not be assigned to hosts.
Warning:
XenServer hostnames should be unique throughout the XenServer deployment.
Note the following:
• XenServer labels its AD entry on the AD database using its hostname. Therefore, if two XenServer hosts have the same hostname and are joined to the same AD domain, the second XenServer will overwrite the AD entry of the first XenServer, regardless of whether they are in the same or in different pools, causing the AD authentication on the first XenServer to stop working.
It is possible to use the same hostname in two XenServer hosts, as long as they join different AD domains.
• The XenServer hosts can be in different time-zones, as it is the UTC time that is compared. To ensure synchronization is correct, you may choose to use the same NTP servers for your XenServer pool and the Active Directory server.
• Mixed-authentication pools are not supported (that is, you cannot have a pool where some servers in the pool are configured to use Active Directory and some are not).
• The XenServer Active Directory integration uses the Kerberos protocol to communicate with the Active Directory servers. Consequently, XenServer does not support communicating with Active Directory servers that do not utilize Kerberos.
• For external authentication using Active Directory to be successful, it is important that the clocks on your XenServer hosts are synchronized with those on your Active Directory server. When XenServer joins the Active Directory domain, this will be checked and authentication will fail if there is too much skew between the servers.
Warning:
Host names must consist solely of no more than 63 alphanumeric characters, and must not be purely numeric.
Once you have Active Directory authentication enabled, if you subsequently add a server to that pool, you are prompted to configure Active Directory on the server joining the pool. When you are prompted for credentials on the joining server, enter Active Directory credentials with sufficient privileges to add servers to that domain.
Active Directory integration
Make sure that the following firewall ports are open for outbound traffic in order for XenServer to access the domain controllers.
Port Protocol Use
53 UDP/TCP DNS
88 UDP/TCP Kerberos 5
123 UDP NTP
137 UDP NetBIOS Name Service
139 TCP NetBIOS Session (SMB)
389 UDP/TCP LDAP
445 TCP SMB over TCP
464 UDP/TCP Machine password changes
3268 TCP Global Catalog Search
Note:
To view the firewall rules on a Linux computer using iptables, run the following command:
iptables -nL
Note:
XenServer uses Likewise (Likewise uses Kerberos) to authenticate the AD user in the AD server, and to encrypt communications with the AD server.
How does XenServer manage the machine account password for AD integration?
Similarly to Windows client machines, Likewise automatically updates the machine account password, renewing it once every 30 days, or as specified in the machine account password renewal policy in the AD server. For more information, refer to http://support.microsoft.com/kb/154501.
Enabling external authentication on a pool
External authentication using Active Directory can be configured using either XenCenter or the CLI using the
command below.
xe pool-enable-external-auth auth-type=AD \
    service-name=<full-qualified-domain> \
    config:user=<username> \
    config:pass=<password>
The user specified needs to have Add/remove computer objects or workstations privileges, which is the default for domain administrators.
Note:
If you are not using DHCP on the network used by Active Directory and your XenServer hosts, you can use the following approaches to set up your DNS:
1. Set up your domain DNS suffix search order for resolving non-FQDNs:
xe pif-param-set uuid=<pif-uuid_in_the_dns_subnetwork> \
    "other-config:domain=suffix1.com suffix2.com suffix3.com"
2. Configure the DNS server to use on your XenServer hosts:
xe pif-reconfigure-ip mode=static dns=<dnshost>
3. Manually set the primary management interface to use a PIF that is on the same network as your DNS server:
xe host-management-reconfigure pif-uuid=<pif_in_the_dns_subnetwork>
Note:
External authentication is a per-host property. However, Citrix advises that you enable and disable this on a per-pool basis – in this case XenServer will deal with any failures that occur when enabling authentication on a particular host and perform any roll-back of changes that may be required, ensuring that a consistent configuration is used across the pool. Use the host-param-list command to inspect properties of a host and to determine the status of external authentication by checking the values of the relevant fields.
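For example, a quick way to check the status of external authentication on a particular host is to read the host's external-auth fields; the UUID below is a placeholder, and if Active Directory authentication is enabled the external-auth-type field is expected to show AD:
xe host-param-get uuid=<host_uuid> param-name=external-auth-type
xe host-param-get uuid=<host_uuid> param-name=external-auth-service-name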
Disabling external authentication
Use XenCenter to disable Active Directory authentication, or the following xe command:
xe pool-disable-external-auth

User Authentication

To allow a user access to your XenServer host, you must add a subject for that user or a group that they are in. (Transitive group memberships are also checked in the normal way, for example: adding a subject for group A, where group A contains group B and user 1 is a member of group B would permit access to user 1.) If
you wish to manage user permissions in Active Directory, you could create a single group that you then add and remove users to/from; alternatively, you can add and remove individual users from XenServer, or a combination of users and groups, as would be appropriate for your authentication requirements. The subject list can be managed from XenCenter or using the CLI as described below.
When authenticating a user, the credentials are first checked against the local root account, allowing you to recover a system whose AD server has failed. If the credentials (that is, username and password) do not match/authenticate, then an authentication request is made to the AD server – if this is successful the user's information will be retrieved and validated against the local subject list, otherwise access will be denied. Validation against the subject list will succeed if the user or a group in the transitive group membership of the user is in the subject list.
Note:
When using Active Directory groups to grant access for Pool Administrator users who will require host ssh access, the number of users in the Active Directory group must not exceed 500.
Allowing a user access to XenServer using the CLI
To add an AD subject to XenServer:
xe subject-add subject-name=<entity name>
The entity name should be the name of the user or group to which you want to grant access. You may optionally include the domain of the entity (for example, '<xendt\user1>' as opposed to '<user1>') although the behavior will be the same unless disambiguation is required.
Removing access for a user using the CLI
1. Identify the subject identifier for the subject whose access you wish to revoke. This would be the user or the group containing the user (removing a group would remove access to all users in that group, providing they are not also specified in the subject list). You can do this using the subject-list command:
xe subject-list
You may wish to apply a filter to the list, for example to get the subject identifier for a user named user1 in the testad domain, you could use the following command:
xe subject-list other-config:subject-name='testad\user1'
2. Remove the user using the subject-remove command, passing in the subject identifier you learned in the previous step:
xe subject-remove subject-uuid=<subject-uuid>
3. You may wish to terminate any current session this user has already authenticated. See Terminating all authenticated sessions using xe and Terminating individual user sessions using xe for more information about terminating sessions. If you do not terminate sessions, the users whose permissions have been revoked may be able to continue to access the system until they log out.
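As a combined sketch of the steps above, assuming a hypothetical user user1 in the testad domain (the UUID and subject identifier are placeholders taken from the output of the first command):
xe subject-list other-config:subject-name='testad\user1'
xe subject-remove subject-uuid=<subject_uuid>
xe session-subject-identifier-logout subject-identifier=<subject_identifier>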
Listing subjects with access
To identify the list of users and groups with permission to access your XenServer host or pool, use the following command:
xe subject-list

Removing Access for a User

Once a user is authenticated, they will have access to the server until they end their session, or another user terminates their session. Removing a user from the subject list, or removing them from a group that is in the subject list, will not automatically revoke any already-authenticated sessions that the user has; this means that
they may be able to continue to access the pool using XenCenter or other API sessions that they have already created. In order to terminate these sessions forcefully, XenCenter and the CLI provide facilities to terminate individual sessions, or all currently active sessions. See the XenCenter help for more information on procedures using XenCenter, or below for procedures using the CLI.
Terminating all authenticated sessions using xe
Execute the following CLI command:
xe session-subject-identifier-logout-all
Terminating individual user sessions using xe
1. Determine the subject identifier whose session you wish to log out. Use either the session-subject-identifier-list or subject-list xe commands to find this (the first shows users who have sessions, the second shows all users but can be filtered, for example, using a command like xe subject-list other-config:subject-name=xendt\\user1 – depending on your shell you may need a double-backslash as shown).
2. Use the session-subject-logout command, passing the subject identifier you have determined in the previous step as a parameter, for example:
xe session-subject-identifier-logout subject-identifier=<subject-id>

Leaving an AD Domain

Warning:
When you leave the domain (that is, disable Active Directory authentication and disconnect a pool or server from its domain), any users who authenticated to the pool or server with Active Directory credentials are disconnected.
Use XenCenter to leave an AD domain. See the XenCenter help for more information. Alternatively, run the pool-disable-external-auth command, specifying the pool uuid if required.
Note:
Leaving the domain will not cause the host objects to be removed from the AD database. See this knowledge base article for more information about this and how to remove the disabled host entries.

Role Based Access Control

Note:
The full RBAC feature is only available in Citrix XenServer Enterprise Edition or higher. To learn more about upgrading XenServer, click here.
XenServer's Role Based Access Control (RBAC) allows you to assign users, roles, and permissions to control who has access to your XenServer and what actions they can perform. The XenServer RBAC system maps a user (or a group of users) to defined roles (a named set of permissions), which in turn have associated XenServer permissions (the ability to perform certain operations).
As users are not assigned permissions directly, but acquire them through their assigned role, management of individual user permissions becomes a matter of simply assigning the user to the appropriate role; this simplifies common operations. XenServer maintains a list of authorized users and their roles.
RBAC allows you to easily restrict which operations different groups of users can perform, thus reducing the probability of an accident by an inexperienced user.
To facilitate compliance and auditing, RBAC also provides an Audit Log feature and its corresponding Workload Balancing Pool Audit Trail report.
RBAC depends on Active Directory for authentication services. Specifically, XenServer keeps a list of authorized users based on Active Directory user and group accounts. As a result, you must join the pool to the domain and add Active Directory accounts before you can assign roles.
The local super user (LSU), or root, is a special user account used for system administration and has all rights or permissions. In XenServer, the local super user is the default account at installation. The LSU is authenticated via XenServer and not an external authentication service, so if the external authentication service fails, the LSU can still log in and manage the system. The LSU can always access the XenServer physical host via SSH.
RBAC process
This is the standard process for implementing RBAC and assigning a user or group a role:
1. Join the domain. See Enabling external authentication on a pool
2. Add an Active Directory user or group to the pool. This becomes a subject. See the section called “To Add a Subject to RBAC”.
3. Assign (or modify) the subject's RBAC role. See the section called “To Assign an RBAC Role to a Created subject”.
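For example, a minimal CLI sketch of these three steps, assuming a hypothetical domain mydomain.example.com and a subject that is granted the vm-operator role (all other values are placeholders; the individual commands are described in the sections referenced above):
xe pool-enable-external-auth auth-type=AD service-name=mydomain.example.com \
    config:user=<domain_admin> config:pass=<password>
xe subject-add subject-name=<AD_user_or_group>
xe subject-role-add uuid=<subject_uuid> role-name=vm-operator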

Roles

XenServer is shipped with the following six pre-established roles:
Pool Administrator (Pool Admin) – the same as being the local root. Can perform all operations.
Note:
The local super user (root) will always have the "Pool Admin" role. The Pool Admin role has the same permissions as the local root.
Pool Operator (Pool Operator) – can do everything apart from adding/removing users and modifying their roles. This role is focused mainly on host and pool management (i.e. creating storage, making pools, managing the hosts etc.)
Virtual Machine Power Administrator (VM Power Admin) – creates and manages Virtual Machines. This role is focused on provisioning VMs for use by a VM operator.
Virtual Machine Administrator (VM Admin) – similar to a VM Power Admin, but cannot migrate VMs or perform snapshots.
Virtual Machine Operator (VM Operator) – similar to VM Admin, but cannot create/destroy VMs; it can, however, perform start/stop lifecycle operations.
Read-only (Read Only) – can view resource pool and performance data.
Note:
You cannot add, remove or modify roles in this version of XenServer.
Warning:
You cannot assign the role of pool-admin to an AD group which has more than 500 members, if you want users of the AD group to have SSH access.
For a summary of the permissions available for each role and more detailed information on the operations available for each permission, see the section called “Definitions of RBAC Roles and Permissions”.
All XenServer users need to be allocated to an appropriate role. By default, all new users will be allocated to the Pool Administrator role. It is possible for a user to be assigned to multiple roles; in that scenario, the user will have the union of all the permissions of all their assigned roles.
A user's role can be changed in two ways:
1. Modify the subject -> role mapping (this requires the assign/modify role permission, only available to a Pool Administrator.)
2. Modify the user's containing group membership in Active Directory.

Definitions of RBAC Roles and Permissions

The following table summarizes which permissions are available for each role. For details on the operations available for each permission, see Definitions of permissions.
Table 1. Permissions available for each role
The following list shows, for each role permission, the roles that hold it. "All roles" means Pool Admin, Pool Operator, VM Power Admin, VM Admin, VM Operator and Read Only.
Assign/modify roles: Pool Admin
Log in to (physical) server consoles (through SSH and XenCenter): Pool Admin
Server backup/restore: Pool Admin
Import/export OVF/OVA packages and disk images: Pool Admin
Log out active user connections: Pool Admin, Pool Operator
Create and dismiss alerts: Pool Admin, Pool Operator
Cancel task of any user: Pool Admin, Pool Operator
Pool management: Pool Admin, Pool Operator
VM advanced operations: Pool Admin, Pool Operator, VM Power Admin
VM create/destroy operations: Pool Admin, Pool Operator, VM Power Admin, VM Admin
VM change CD media: Pool Admin, Pool Operator, VM Power Admin, VM Admin, VM Operator
View VM consoles: Pool Admin, Pool Operator, VM Power Admin, VM Admin, VM Operator
XenCenter view mgmt ops: Pool Admin, Pool Operator, VM Power Admin, VM Admin, VM Operator
Cancel own tasks: all roles
Read audit logs: all roles
Configure, Initialize, Enable, Disable WLB: Pool Admin, Pool Operator
Apply WLB Optimization Recommendations: Pool Admin, Pool Operator
Modify WLB Report Subscriptions: Pool Admin, Pool Operator
Accept WLB Placement Recommendations: Pool Admin, Pool Operator, VM Power Admin
Display WLB Configuration: all roles
Generate WLB Reports: all roles
Connect to pool and read all pool metadata: all roles
Definitions of Permissions
The following table provides additional details about permissions:
Table 2. Definitions of permissions
Assign/modify roles
Allows the assignee to:
• Add/remove users
• Add/remove roles from users
• Enable and disable Active Directory integration (being joined to the domain)
Rationale/Comments: This permission lets the user grant himself or herself any permission or perform any task. Warning: This role lets the user disable the Active Directory integration and all subjects added from Active Directory.
Log in to server consoles
Allows the assignee to:
• Server console access through ssh
• Server console access through XenCenter
Rationale/Comments: Warning: With access to a root shell, the assignee could arbitrarily reconfigure the entire system, including RBAC.
Server backup/restore
Allows the assignee to:
• Back up and restore servers
• Back up and restore pool metadata
Rationale/Comments: The ability to restore a backup lets the assignee revert RBAC configuration changes.
Import/export OVF/OVA packages and disk images
Allows the assignee to:
• Import OVF and OVA packages
• Import disk images
• Export VMs as OVF/OVA packages
Log out active user connections
Allows the assignee to:
• Ability to disconnect logged in users
Create/dismiss alerts
Rationale/Comments: Warning: A user with this permission can dismiss alerts for the entire pool. Note: The ability to view alerts is part of the Connect to Pool and read all pool metadata permission.
Cancel task of any user
Allows the assignee to:
• Cancel any user's running task
Rationale/Comments: This permission lets the user request XenServer cancel an in-progress task initiated by any user.
Pool management
Allows the assignee to:
• Set pool properties (naming, default SRs)
• Enable, disable, and configure HA
• Set per-VM HA restart priorities
• Enable, disable, and configure Workload Balancing (WLB)
• Add and remove server from pool
• Emergency transition to master
• Emergency master address
• Emergency recover slaves
• Designate new master
• Manage pool and server certificates
• Patching
• Set server properties
• Configure server logging
• Enable and disable servers
• Shut down, reboot, and power-on servers
• System status reports
• Apply license
• Live migration of all other VMs on a server to another server, due to either WLB, Maintenance Mode, or HA
• Configure server management interfaces
• Disable server management
• Delete crashdumps
• Add, edit, and remove networks
• Add, edit, and remove PBDs/PIFs/VLANs/Bonds/SRs
• Add, remove, and retrieve secrets
Rationale/Comments: This permission includes all the actions required to maintain a pool. Note: If the primary management interface is not functioning, no logins can authenticate except local root logins.
VM advanced operations
Allows the assignee to:
• Adjust VM memory (through Dynamic Memory Control)
• Create a VM snapshot with memory, take VM snapshots, and roll-back VMs
• Migrate VMs
• Start VMs, including specifying physical server
• Resume VMs
Rationale/Comments: This permission provides the assignee with enough privileges to start a VM on a different server if they are not satisfied with the server XenServer selected.
VM create/destroy operations
Allows the assignee to:
• Install or delete VMs
• Clone VMs
• Add, remove, and configure virtual disk/CD devices
• Add, remove, and configure virtual network devices
• Import/export VMs
• VM configuration change
VM change CD media
Allows the assignee to:
• Eject current CD
• Insert new CD
VM change power state
Allows the assignee to:
• Start VMs (automatic placement)
• Shut down VMs
• Reboot VMs
• Suspend VMs
• Resume VMs (automatic placement)
Rationale/Comments: This permission does not include start_on, resume_on, and migrate, which are part of the VM advanced operations permission.
View VM consoles
Allows the assignee to:
• See and interact with VM consoles
Rationale/Comments: This permission does not let the user view server consoles.
Configure, Initialize, Enable, Disable WLB
Allows the assignee to:
• Configure WLB
• Initialize WLB and change WLB servers
• Enable WLB
• Disable WLB
Rationale/Comments: When a user's role does not have this permission, this functionality is not visible.
Apply WLB Optimization Recommendations
Allows the assignee to:
• Apply any optimization recommendations that appear in the WLB tab
Modify WLB Report Subscriptions
Allows the assignee to:
• Change the WLB report generated or its recipient
Accept WLB Placement Recommendations
Allows the assignee to:
• Select one of the servers Workload Balancing recommends for placement ("star" recommendations)
Display WLB Configuration
Allows the assignee to:
• View WLB settings for a pool as shown on the WLB tab
Generate WLB Reports
Allows the assignee to:
• View and run WLB reports, including the Pool Audit Trail report
XenCenter view mgmt operations
Allows the assignee to:
• Create and modify global XenCenter folders
• Create and modify global XenCenter custom fields
• Create and modify global XenCenter searches
Rationale/Comments: Folders, custom fields, and searches are shared between all users accessing the pool.
Cancel own tasks
Allows the assignee to:
• Cancel their own tasks
Read audit log
Allows the assignee to:
• Download the XenServer audit log
Connect to pool and read all pool metadata
Allows the assignee to:
• Log in to pool
• View pool metadata
• View historical performance data
• View logged in users
• View users and roles
• View messages
• Register for and receive events
Rationale/Comments: Note: In some cases, a Read Only user cannot move a resource into a folder in XenCenter, even after receiving an elevation prompt and supplying the credentials of a more privileged user. In this case, log on to XenCenter as the more privileged user and retry the action.

Using RBAC with the CLI

To List All the Available Defined Roles in XenServer
• Run the command: xe role-list
This command returns a list of the currently defined roles, for example:
uuid ( RO): 0165f154-ba3e-034e-6b27-5d271af109ba
    name ( RO): pool-admin
    description ( RO): The Pool Administrator role can do anything

uuid ( RO): b9ce9791-0604-50cd-0649-09b3284c7dfd
    name ( RO): pool-operator
    description ( RO): The Pool Operator can do anything but access Dom0 and manage subjects and roles

uuid ( RO): 7955168d-7bec-10ed-105f-c6a7e6e63249
    name ( RO): vm-power-admin
    description ( RO): The VM Power Administrator role can do anything affecting VM properties across the pool

uuid ( RO): aaa00ab5-7340-bfbc-0d1b-7cf342639a6e
    name ( RO): vm-admin
    description ( RO): The VM Administrator role can do anything to a VM

uuid ( RO): fb8d4ff9-310c-a959-0613-54101535d3d5
    name ( RO): vm-operator
    description ( RO): The VM Operator role can do anything to an already existing VM

uuid ( RO): 7233b8e3-eacb-d7da-2c95-f2e581cdbf4e
    name ( RO): read-only
    description ( RO): The Read-Only role can only read values
Note:
This list of roles is static; it is not possible to add, remove, or modify roles.
To Display a List of Current Subjects:
• Run the command xe subject-list
This will return a list of XenServer users, their uuid, and the roles they are associated with:
uuid ( RO): bb6dd239-1fa9-a06b-a497-3be28b8dca44
    subject-identifier ( RO): S-1-5-21-1539997073-1618981536-2562117463-2244
    other-config (MRO): subject-name: example01\user_vm_admin; subject-upn: user_vm_admin@XENDT.NET; subject-uid: 1823475908; subject-gid: 1823474177; subject-sid: S-1-5-21-1539997073-1618981536-2562117463-2244; subject-gecos: user_vm_admin; subject-displayname: user_vm_admin; subject-is-group: false; subject-account-disabled: false; subject-account-expired: false; subject-account-locked: false; subject-password-expired: false
    roles (SRO): vm-admin

uuid ( RO): 4fe89a50-6a1a-d9dd-afb9-b554cd00c01a
    subject-identifier ( RO): S-1-5-21-1539997073-1618981536-2562117463-2245
    other-config (MRO): subject-name: example02\user_vm_op; subject-upn: user_vm_op@XENDT.NET; subject-uid: 1823475909; subject-gid: 1823474177; subject-sid: S-1-5-21-1539997073-1618981536-2562117463-2245; subject-gecos: user_vm_op; subject-displayname: user_vm_op; subject-is-group: false; subject-account-disabled: false; subject-account-expired: false; subject-account-locked: false; subject-password-expired: false
    roles (SRO): vm-operator

uuid ( RO): 8a63fbf0-9ef4-4fef-b4a5-b42984c27267
    subject-identifier ( RO): S-1-5-21-1539997073-1618981536-2562117463-2242
    other-config (MRO): subject-name: example03\user_pool_op; subject-upn: user_pool_op@XENDT.NET; subject-uid: 1823475906; subject-gid: 1823474177; subject-sid: S-1-5-21-1539997073-1618981536-2562117463-2242; subject-gecos: user_pool_op; subject-displayname: user_pool_op; subject-is-group: false; subject-account-disabled: false; subject-account-expired: false; subject-account-locked: false; subject-password-expired: false
    roles (SRO): pool-operator
To Add a Subject to RBAC
In order to enable existing AD users to use RBAC, you will need to create a subject instance within XenServer, either for the AD user directly, or for one of their containing groups:
1. Run the command xe subject-add subject-name=<AD user/group>
This adds a new subject instance.
To Assign an RBAC Role to a Created subject
Once you have added a subject, you can assign it to an RBAC role. You can refer to the role by either its uuid or name:
1. Run the command:
xe subject-role-add uuid=<subject uuid> role-uuid=<role_uuid>
or
xe subject-role-add uuid=<subject uuid> role-name=<role_name>
For example, the following command adds a subject with the uuid b9b3d03b-3d10-79d3-8ed7-a782c5ea13b4 to the Pool Administrator role:
xe subject-role-add uuid=b9b3d03b-3d10-79d3-8ed7-a782c5ea13b4 role-name=pool-admin
To Change a Subject's RBAC Role:
To change a user's role it is necessary to remove them from their existing role, and add them to a new role:
1. Run the commands:
xe subject-role-remove uuid=<subject uuid> role-name=<role_name_to_remove>
xe subject-role-add uuid=<subject uuid> role-name=<role_name_to_add>
To ensure that the new role takes effect, the user should be logged out and logged back in again (this requires the "Logout Active User Connections" permission - available to a Pool Administrator or Pool Operator).
Warning:
Once you have added or removed a pool-admin subject, there can be a delay of a few seconds for ssh sessions associated to this subject to be accepted by all hosts of the pool.

Auditing

The RBAC audit log will record any operation taken by a logged-in user.
• the message will explicitly record the Subject ID and user name associated with the session that invoked the operation.
• if an operation is invoked for which the subject does not have authorization, this will be logged.
• if the operation succeeded then this is recorded; if the operation failed then the error code is logged.
Audit Log xe CLI Commands
xe audit-log-get [since=<timestamp>] filename=<output filename>
This command downloads to a file all the available records of the RBAC audit file in the pool. If the optional parameter 'since' is present, then it only downloads the records from that specific point in time.
To Obtain All Audit Records From the Pool
Run the following command:
xe audit-log-get filename=/tmp/auditlog-pool-actions.out
To Obtain Audit Records of the Pool Since a Precise Millisecond Timestamp
Run the following command:
xe audit-log-get since=2009-09-24T17:56:20.530Z \
    filename=/tmp/auditlog-pool-actions.out
To Obtain Audit Records of the Pool Since a Precise Minute Timestamp
Run the following command:
xe audit-log-get since=2009-09-24T17:56Z \
    filename=/tmp/auditlog-pool-actions.out

How Does XenServer Compute the Roles for the Session?

1. The subject is authenticated via the Active Directory server to verify which containing groups the subject may also belong to.
2. XenServer then verifies which roles have been assigned both to the subject, and to its containing groups.
3. As subjects can be members of multiple Active Directory groups, they will inherit all of the permissions of the associated roles.
In this illustration, since Subject 2 (Group 2) is the Pool Operator and User 1 is a member of Group 2, when Subject 3 (User 1) tries to log in, he or she inherits both Subject 3 (VM Operator) and Group 2 (Pool Operator) roles. Since the Pool Operator role is higher, the resulting role for Subject 3 (User 1) is Pool Operator and not VM Operator.

XenServer Hosts and Resource Pools

This chapter describes how resource pools can be created through a series of examples using the xe command line interface (CLI). A simple NFS-based shared storage configuration is presented and a number of simple VM management examples are discussed. Procedures for dealing with physical node failures are also described.

Hosts and Resource Pools Overview

A resource pool comprises multiple XenServer host installations, bound together into a single managed entity which can host Virtual Machines. When combined with shared storage, a resource pool enables VMs to be started on any XenServer host which has sufficient memory and then dynamically moved between XenServer hosts while running with minimal downtime (XenMotion). If an individual XenServer host suffers a hardware failure, then the administrator can restart the failed VMs on another XenServer host in the same resource pool. If high availability (HA) is enabled on the resource pool, VMs will automatically be moved if their host fails. Up to 16 hosts are supported per resource pool, although this restriction is not enforced.
A pool always has at least one physical node, known as the master. Only the master node exposes an administration interface (used by XenCenter and the XenServer Command Line Interface, known as the xe CLI); the master forwards commands to individual members as necessary.
Note:
If the pool's master fails, master re-election will only take place if High Availability is enabled.

Requirements for Creating Resource Pools

A resource pool is a homogeneous (or heterogeneous with restrictions, see the section called “Creating Heterogeneous Resource Pools”) aggregate of one or more XenServer hosts, up to a maximum of 16. The definition of homogeneous is:
• the CPUs on the server joining the pool are the same (in terms of vendor, model, and features) as the CPUs on servers already in the pool.
• the server joining the pool is running the same version of XenServer software, at the same patch level, as servers already in the pool
The software will enforce additional constraints when joining a server to a pool – in particular:
• it is not a member of an existing resource pool
• it has no shared storage configured
• there are no running or suspended VMs on the XenServer host which is joining
• there are no active operations on the VMs in progress, such as one shutting down
You must also check that the clock of the host joining the pool is synchronized to the same time as the pool master (for example, by using NTP), that its primary management interface is not bonded (you can configure this once the host has successfully joined the pool), and that its management IP address is static (either configured on the host itself or by using an appropriate configuration on your DHCP server).
XenServer hosts in resource pools may contain different numbers of physical network interfaces and have local storage repositories of varying size. In practice, it is often difficult to obtain multiple servers with the exact same CPUs, and so minor variations are permitted. If you are sure that it is acceptable in your environment for hosts with varying CPUs to be part of the same resource pool, then the pool joining operation can be forced by passing a --force parameter.
Note:
The requirement for a XenServer host to have a static IP address to be part of a resource pool also applies to servers providing shared NFS or iSCSI storage for the pool.
Although not a strict technical requirement for creating a resource pool, the advantages of pools (for example, the ability to dynamically choose on which XenServer host to run a VM and to dynamically move a VM between XenServer hosts) are only available if the pool has one or more shared storage repositories. If possible, postpone creating a pool of XenServer hosts until shared storage is available. Once shared storage has been added, Citrix recommends that you move existing VMs whose disks are in local storage into shared storage. This can be done using the xe vm-copy command or XenCenter.
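For example, a hedged sketch of copying a locally stored VM into a shared SR with the xe vm-copy command (the VM name and SR UUID are placeholders):
xe vm-copy vm=<vm_name_label> new-name-label=<vm_name_label>-shared sr-uuid=<shared_sr_uuid>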

Creating a Resource Pool

Resource pools can be created using either the XenCenter management console or the CLI. When you join a new host to a resource pool, the joining host synchronizes its local database with the pool-wide one, and inherits some settings from the pool:
• VM, local, and remote storage configuration is added to the pool-wide database. All of these will still be tied to the joining host in the pool unless you explicitly take action to make the resources shared after the join has completed.
• The joining host inherits existing shared storage repositories in the pool and appropriate PBD records are created so that the new host can access existing shared storage automatically.
• Networking information is partially inherited to the joining host: the structural details of NICs, VLANs and bonded interfaces are all inherited, but policy information is not. This policy information, which must be re-configured, includes:
• the IP addresses of management NICs, which are preserved from the original configuration
• the location of the primary management interface, which remains the same as the original configuration. For example, if the other pool hosts have their primary management interfaces on a bonded interface, then the joining host must be explicitly migrated to the bond once it has joined.
• Dedicated storage NICs, which must be re-assigned to the joining host from XenCenter or the CLI, and the PBDs re-plugged to route the traffic accordingly. This is because IP addresses are not assigned as part of the pool join operation, and the storage NIC is not useful without this configured correctly. See the section called “Configuring a dedicated storage NIC” for details on how to dedicate a storage NIC from the CLI.
To join XenServer hosts host1 and host2 into a resource pool using the CLI
1. Open a console on XenServer host host2.
2. Command XenServer host host2 to join the pool on XenServer host host1 by issuing the command:
xe pool-join master-address=<host1> master-username=<administrators_username> \
    master-password=<password>
The master-address must be set to the fully-qualified domain name of XenServer host host1 and the password must be the administrator password set when XenServer host host1 was installed.
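Once the join has completed, running the following command on XenServer host host1 should list both hosts as members of the pool:
xe host-list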
Naming a resource pool
XenServer hosts belong to an unnamed pool by default. To create your first resource pool, rename the existing nameless pool. Use tab-complete to find the pool_uuid:
xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>

Creating Heterogeneous Resource Pools

Note:
Heterogeneous resource pool creation is only available to XenServer Advanced editions and above. To learn more about XenServer editions and to find out how to upgrade, visit the Citrix website here.
XenServer 6.0 simplifies expanding deployments over time by allowing disparate host hardware to be joined into a resource pool, known as heterogeneous resource pools. Heterogeneous resource pools are made possible by leveraging technologies in recent Intel (FlexMigration) and AMD (Extended Migration) CPUs that provide CPU "masking" or "leveling". These features allow a CPU to be configured to appear as providing a different make, model, or functionality than it actually does. This enables you to create pools of hosts with disparate CPUs but still safely support live migrations.
Using XenServer to mask the CPU features of a new server, so that it will match the features of the existing servers in a pool, requires the following:
• the CPUs of the server joining the pool must be of the same vendor (i.e. AMD, Intel) as the CPUs on servers already in the pool, though the specific type (family, model and stepping numbers) need not be.
• the CPUs of the server joining the pool must support either Intel FlexMigration or AMD Extended Migration.
• the features of the older CPUs must be a sub-set of the features of the CPUs of the server joining the pool.
• the server joining the pool is running the same version of XenServer software, with the same hotfixes installed, as servers already in the pool.
• XenServer Advanced edition or higher.
Creating heterogeneous resource pools is most easily done with XenCenter which will automatically suggest using CPU masking when possible. Refer to the Pool Requirements section in the XenCenter help for more details. To display the help in XenCenter press F1.
To add a heterogeneous XenServer host to a resource pool using the xe CLI
1. Find the CPU features of the Pool Master by running the xe host-get-cpu-features command.
2. On the new server, run the xe host-set-cpu-features command and copy and paste the Pool Master's features into the features parameter. For example:
xe host-set-cpu-features features=<pool_master's_cpu_features>
3. Restart the new server.
4. Run the xe pool-join command on the new server to join the pool.
To return a server with masked CPU features back to its normal capabilities, run the xe host-reset-cpu-features command.
Note:
To display a list of all properties of the CPUs in a host, run the xe host-cpu-info command.
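As a combined sketch of the procedure above, where the first command runs on the pool master and the remaining commands on the joining server (the feature string, master address and credentials are placeholders):
xe host-get-cpu-features
xe host-set-cpu-features features=<pool_master's_cpu_features>
(reboot the new server, then run:)
xe pool-join master-address=<pool_master_address> master-username=<administrators_username> master-password=<password>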

Adding Shared Storage

For a complete list of supported shared storage types, see the Storage chapter. This section demonstrates how shared storage (represented as a storage repository) can be created on an existing NFS server.
Adding NFS shared storage to a resource pool using the CLI
1. Open a console on any XenServer host in the pool.
2. Create the storage repository on <server:/path> by issuing the command
xe sr-create content-type=user type=nfs name-label=<"Example SR"> shared=true \
    device-config:server=<server> \
    device-config:serverpath=<path>
The device-config:server refers to the hostname of the NFS server and device-config:serverpath refers to the path on the NFS server. Since shared is set to true, the shared storage will be automatically connected to every XenServer host in the pool and any XenServer hosts that subsequently join will also be connected to the storage. The Universally Unique Identifier (UUID) of the created storage repository will be printed on the screen.
3. Find the UUID of the pool by the command
xe pool-list
4. Set the shared storage as the pool-wide default with the command
xe pool-param-set uuid=<pool_uuid> default-SR=<sr_uuid>
Since the shared storage has been set as the pool-wide default, all future VMs will have their disks created on shared storage by default. See Storage for information about creating other types of shared storage.
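To confirm the change, the pool's default-SR field can be read back, for example:
xe pool-param-get uuid=<pool_uuid> param-name=default-SR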

Removing a XenServer Host from a Resource Pool

Note:
Before removing a XenServer host from a pool, ensure that you shut down all the VMs running on that host. Otherwise, you may see a warning stating that the host cannot be removed.
When a XenServer host is removed (ejected) from a pool, the machine is rebooted, reinitialized, and left in a state equivalent to that after a fresh installation. It is important not to eject a XenServer host from a pool if there is important data on the local disks.
To remove a host from a resource pool using the CLI
1. Open a console on any host in the pool.
2. Find the UUID of the host by running the command
xe host-list
3. Eject the required host from the pool:
xe pool-eject host-uuid=<host_uuid>
The XenServer host will be ejected and left in a freshly-installed state.
Warning:
Do not eject a host from a resource pool if it contains important data stored on its local disks. All of the data will be erased upon ejection from the pool. If you wish to preserve this data, copy the VM to shared storage on the pool first using XenCenter, or the xe vm-copy CLI command.
When a XenServer host containing locally stored VMs is ejected from a pool, those VMs will still be present in the pool database and visible to the other XenServer hosts. They will not start until the virtual disks associated with them have been changed to point at shared storage which can be seen by other XenServer hosts in the pool, or simply removed. It is for this reason that you are strongly advised to move any local storage to shared storage upon joining a pool, so that individual XenServer hosts can be ejected (or physically fail) without loss of data.

Preparing a Pool of XenServer Hosts for Maintenance

Before performing maintenance operations on a XenServer host that is part of a resource pool, you should disable it (which prevents any VMs from being started on it), then migrate its VMs to another XenServer host in the pool. This can most readily be accomplished by placing the XenServer host into Maintenance mode using XenCenter. See the XenCenter Help for details.
Note:
Placing the master host into maintenance mode will result in the loss of the last 24hrs of RRD updates for offline VMs. This is because the backup synchronization occurs every 24hrs.
Warning:
Citrix highly recommends rebooting all XenServers prior to installing an update and then verifying their configuration. This is because some configuration changes only take effect when a XenServer is rebooted, so the reboot may uncover configuration problems that would cause the update to fail.
To prepare a XenServer host in a pool for maintenance operations using the CLI
1. Run the command
xe host-disable uuid=<xenserver_host_uuid>
xe host-evacuate uuid=<xenserver_host_uuid>
This will disable the XenServer host and then migrate any running VMs to other XenServer hosts in the pool.
2. Perform the desired maintenance operation.
3. Once the maintenance operation is completed, enable the XenServer host:
xe host-enable uuid=<xenserver_host_uuid>
Restart any halted VMs and/or resume any suspended VMs.
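For example, halted VMs can be restarted on the re-enabled host and suspended VMs resumed as follows (the VM and host names are placeholders):
xe vm-start vm=<vm_name> on=<xenserver_host_name>
xe vm-resume vm=<vm_name> on=<xenserver_host_name>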

High Availability

This section explains the XenServer implementation of virtual machine high availability (HA), and how to configure it using the xe CLI.
Note:
XenServer HA is only available with XenServer Advanced edition or above. To find out about XenServer editions, visit the Citrix website here.

HA Overview

When HA is enabled, XenServer continually monitors the health of the hosts in a pool. The HA mechanism automatically moves protected VMs to a healthy host if the current VM host fails. Additionally, if the host that fails is the master, HA selects another host to take over the master role automatically, so that you can continue to manage the XenServer pool.
To guarantee that a host is genuinely unreachable before taking action, a resource pool configured for high availability uses several heartbeat mechanisms to regularly check up on hosts. These heartbeats go through both the storage interfaces (to the Heartbeat SR) and the networking interfaces (over the management interfaces). Both of these heartbeat routes can be multi-homed for additional resilience to prevent false positives.
XenServer dynamically maintains a failover plan which details what to do if a set of hosts in a pool fail at any given time. An important concept to understand is the host failures to tolerate value, which is defined as part of HA configuration. This determines the number of host failures that can be tolerated without any loss of service. For example, if a resource pool consisted of 16 hosts, and the number of tolerated failures is set to 3, the pool calculates a failover plan that allows for any 3 hosts to fail and still be able to restart VMs on other hosts. If a plan cannot be found, then the pool is considered to be overcommitted. The plan is dynamically recalculated based on VM lifecycle operations and movement. Alerts are sent (either through XenCenter or e-mail) if changes (for example the addition of new VMs to the pool) cause your pool to become overcommitted.
Overcommitting
A pool is overcommitted if the VMs that are currently running could not be restarted elsewhere following a user-defined number of host failures.
This would happen if there was not enough free memory across the pool to run those VMs following failure. However, there are also more subtle changes which can make HA guarantees unsustainable: changes to Virtual Block Devices (VBDs) and networks can affect which VMs may be restarted on which hosts. Currently it is not possible for XenServer to check all actions before they occur and determine whether they will cause violation of HA demands. However, an asynchronous notification is sent if HA becomes unsustainable.
Overcommitment Warning
If you attempt to start or resume a VM and that action causes the pool to be overcommitted, a warning alert is raised. This warning is displayed in XenCenter and is also available as a message instance through the Xen API. The message may also be sent to an email address if configured. You will then be allowed to cancel the operation, or proceed anyway. Proceeding causes the pool to become overcommitted. The amount of memory used by VMs of different priorities is displayed at the pool and host levels.
Host Fencing
If a server failure occurs, such as loss of network connectivity or a problem with the control stack, the XenServer host self-fences to ensure that the VMs are not running on two servers simultaneously. When a fence action is taken, the server immediately and abruptly restarts, causing all VMs running on it to be stopped. The other servers detect that the VMs are no longer running, and the VMs are restarted according to the restart priorities assigned to them. The fenced server enters a reboot sequence, and when it has restarted it tries to re-join the resource pool.

Configuration Requirements

Note:
Citrix recommends that you enable HA only in pools that contain at least 3 XenServer hosts. For details on how the HA feature behaves when the heartbeat is lost between two hosts in a pool, see the Citrix Knowledge Base article CTX129721.
To use the HA feature, you need:
• Shared storage, including at least one iSCSI, NFS or Fibre Channel LUN of size 356MB or greater (the heartbeat SR). The HA mechanism creates two volumes on the heartbeat SR:
4MB heartbeat volume
Used for heartbeating.
256MB metadata volume
Stores pool master metadata to be used in the case of master failover.
Note:
For maximum reliability, Citrix strongly recommends that you use a dedicated NFS or iSCSI storage repository as your HA heartbeat disk, which must not be used for any other purpose.
If you are using a NetApp or EqualLogic SR, manually provision an NFS or iSCSI LUN on the array to use as the heartbeat SR.
• A XenServer pool (this feature provides high availability at the server level within a single resource pool).
• XenServer Advanced edition or higher on all hosts.
• Static IP addresses for all hosts.
Warning:
Should the IP address of a server change while HA is enabled, HA will assume that the host's network has failed, and will probably fence the host and leave it in an unbootable state. To remedy this situation, disable HA using the host-emergency-ha-disable command, reset the pool master using pool-emergency-reset-master, and then re-enable HA.
For a VM to be protected by the HA feature, it must be agile. This means that:
• it must have its virtual disks on shared storage (any type of shared storage may be used; the iSCSI, NFS or Fibre Channel LUN is only required for the storage heartbeat and can be used for virtual disk storage if you prefer, but this is not necessary)
• it must not have a connection to a local DVD drive configured
• it should have its virtual network interfaces on pool-wide networks.
Citrix strongly recommends the use of a bonded primary management interface on the servers in the pool if HA is enabled, and multipathed storage for the heartbeat SR.
If you create VLANs and bonded interfaces from the CLI, then they may not be plugged in and active despite being created. In this situation, a VM can appear to be not agile, and cannot be protected by HA. If this occurs, use the CLI pif-plug command to bring the VLAN and bond PIFs up so that the VM can become agile. You can also determine precisely why a VM is not agile by using the xe diagnostic-vm-status CLI command to analyze its placement constraints, and take remedial action if required.
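For example (a minimal sketch; substitute the UUIDs reported by the pif-list and vm-list commands for the placeholders):
xe pif-plug uuid=<pif_uuid>
xe diagnostic-vm-status uuid=<vm_uuid>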

Restart Priorities

Virtual machines can be assigned a restart priority and a flag that indicates whether or not they should be protected by HA. When HA is enabled, every effort is made to keep protected virtual machines live. If a restart priority is specified, any protected VM that is halted will be started automatically. If a server fails, the VMs that were running on it will be started on another server.
An explanation of the restart priorities is shown below:
HA Restart Priority    Restart Explanation
0                      attempt to start VMs with this priority first
1                      attempt to start VMs with this priority, only after having attempted to restart all VMs with priority 0
2                      attempt to start VMs with this priority, only after having attempted to restart all VMs with priority 1
3                      attempt to start VMs with this priority, only after having attempted to restart all VMs with priority 2
best-effort            attempt to start VMs with this priority, only after having attempted to restart all VMs with priority 3

HA Always Run          Explanation
True                   VMs with this setting are included in the restart plan
False                  VMs with this setting are NOT included in the restart plan
Warning:
Citrix strongly advises that only StorageLink Service VMs should be given a restart priority of 0. All other VMs (including those dependent on a StorageLink VM) should be assigned a restart priority of 1 or higher.
The "best-effort" HA restart priority must NOT be used in pools with StorageLink SRs.
The restart priorities determine the order in which XenServer attempts to start VMs when a failure occurs. In a given configuration where a number of server failures greater than zero can be tolerated (as indicated in the HA panel in the GUI, or by the ha-plan-exists-for field on the pool object on the CLI), the VMs that have restart priorities 0, 1, 2 or 3 are guaranteed to be restarted given the stated number of server failures. VMs with a best-effort priority setting are not part of the failover plan and are not guaranteed to be kept running, since capacity is not reserved for them. If the pool experiences server failures and enters a state where the number of tolerable failures drops to zero, the protected VMs will no longer be guaranteed to be restarted. If this condition is reached, a system alert will be generated. In this case, should an additional failure occur, all VMs that have a restart priority set will behave according to the best-effort behavior.
If a protected VM cannot be restarted at the time of a server failure (for example, if the pool was overcommitted when the failure occurred), further attempts to start this VM will be made as the state of the pool changes. This means that if extra capacity becomes available in a pool (if you shut down a non-essential VM, or add an additional server, for example), a fresh attempt to restart the protected VMs will be made, which may now succeed.
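For example, you can check the relevant pool fields from the CLI (the pool UUID is a placeholder):
xe pool-param-get uuid=<pool_uuid> param-name=ha-plan-exists-for
xe pool-param-get uuid=<pool_uuid> param-name=ha-host-failures-to-tolerate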
Note:
No running VM will ever be stopped or migrated in order to free resources for a VM with always-run=true to be restarted.

Enabling HA on a XenServer Pool

HA can be enabled on a pool using either XenCenter or the command-line interface. In either case, you will specify a set of priorities that determine which VMs should be given highest restart priority when a pool is overcommitted.
Warning:
When HA is enabled, some operations that would compromise the plan for restarting VMs may be disabled, such as removing a server from a pool. To perform these operations, HA can be temporarily disabled, or alternatively, VMs protected by HA can be made unprotected.

Enabling HA Using the CLI

1. Verify that you have a compatible Storage Repository (SR) attached to your pool. iSCSI, NFS or Fibre Channel
are compatible SR types. Please refer to the reference guide for details on how to configure such a storage repository using the CLI.
2. For each VM you wish to protect, set a restart priority. You can do this as follows:
xe vm-param-set uuid=<vm_uuid> ha-restart-priority=<1> ha-always-run=true
3. Enable HA on the pool:
xe pool-ha-enable heartbeat-sr-uuids=<sr_uuid>
4. Run the pool-ha-compute-max-host-failures-to-tolerate command. This command returns the maximum
number of hosts that can fail before there are insufficient resources to run all the protected VMs in the pool.
xe pool-ha-compute-max-host-failures-to-tolerate
The number of failures to tolerate determines when an alert is sent: the system recomputes a failover plan as the state of the pool changes, and with this computation the system identifies the capacity of the pool and how many more failures are possible without loss of the liveness guarantee for protected VMs. A system alert is generated when this computed value falls below the specified value for ha-host-failures-to-tolerate.
5. Specify the number of failures to tolerate parameter. This should be less than or equal to the computed
value:
xe pool-param-set ha-host-failures-to-tolerate=<2> uuid=<pool-uuid>

Removing HA Protection from a VM using the CLI

To disable HA features for a VM, use the xe vm-param-set command to set the ha-always-run parameter to false. This does not clear the VM restart priority settings. You can enable HA for a VM again by setting the ha-always-run parameter to true.
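For example:
xe vm-param-set uuid=<vm_uuid> ha-always-run=false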

Recovering an Unreachable Host

If a host cannot access the HA statefile for any reason, it may become unreachable. To recover your XenServer installation it may be necessary to disable HA using the host-emergency-ha-disable command:
xe host-emergency-ha-disable --force
If the host was the pool master, then it should start up as normal with HA disabled. Slaves should reconnect and automatically disable HA. If the host was a Pool slave and cannot contact the master, then it may be necessary to force the host to reboot as a pool master (xe pool-emergency-transition-to-master) or to tell it where the new master is (xe pool-emergency-reset-master):
xe pool-emergency-transition-to-master uuid=<host_uuid>
xe pool-emergency-reset-master master-address=<new_master_hostname>
When all hosts have successfully restarted, re-enable HA:
xe pool-ha-enable heartbeat-sr-uuids=<sr_uuid>

Shutting Down a Host When HA is Enabled

When HA is enabled, special care needs to be taken when shutting down or rebooting a host to prevent the HA mechanism from assuming that the host has failed. To shut down a host cleanly in an HA-enabled environment, first disable the host, then evacuate the host, and finally shut down the host using either XenCenter or the CLI. To shut down a host in an HA-enabled environment on the command line:
xe host-disable host=<host_name>
xe host-evacuate uuid=<host_uuid>
xe host-shutdown host=<host_name>

Shutting Down a VM When it is Protected by HA

When a VM is protected under an HA plan and set to restart automatically, it cannot be shut down while this protection is active. To shut down a VM, first disable its HA protection and then shut it down using XenCenter or the CLI. XenCenter offers you a dialog box to automate disabling the protection if you click on the Shutdown button of a protected VM.
Note:
If you shut down a VM from within the guest, and the VM is protected, it is automatically restarted under the HA failure conditions. This helps ensure that operator error (or an errant program that mistakenly shuts down the VM) does not result in a protected VM being left shut down accidentally. If you want to shut this VM down, disable its HA protection first.

Host Power On

Powering on Hosts Remotely

You can use the XenServer Host Power On feature to turn a server on and off remotely, either from XenCenter or by using the CLI. When using Workload Balancing (WLB), you can configure Workload Balancing to turn hosts on and off automatically as VMs are consolidated or brought back online.
To enable the Host Power On feature, the server must have one of the following power-control solutions:
• A Wake-on-LAN enabled network card.
• Dell Remote Access Cards (DRAC). To use XenServer with DRAC, you must install the Dell supplemental pack to get DRAC support. DRAC support requires installing the RACADM command-line utility on the server with the remote access controller, and enabling DRAC and its interface. RACADM is often included in the DRAC management software. For more information, see Dell’s DRAC documentation.
• Hewlett-Packard Integrated Lights-Out (iLO). To use XenServer with iLO, you must enable iLO on the host and connect its interface to the network. For more information, see HP’s iLO documentation.
• A custom script based on the XenAPI that enables you to turn the power on and off through XenServer. For more information, see the section called “Configuring a Custom Script for XenServer's Host Power On Feature”.
Using the Host Power On feature requires three tasks:
1. Ensuring the hosts in the pool support controlling the power remotely (that is, they have Wake-on-LAN functionality, a DRAC or iLO card, or you have created a custom script).
2. Enabling the Host Power On functionality using the CLI or XenCenter.
3. (Optional.) Configuring automatic Host Power On functionality in Workload Balancing. For information on how
to configure Host Power On in Workload Balancing please refer to the Citrix XenServer Workload Balancing Administrator's Guide.
Note:
You must enable Host Power On and configure the Power Management feature in Workload Balancing before Workload Balancing can turn hosts on and off automatically.

Using the CLI to Manage Host Power On

You can manage the Host Power On feature using either the CLI or XenCenter. This topic provides information about managing it with the CLI.
Host Power On is enabled at the host level (that is, on each XenServer).
After you enable Host Power On, you can turn hosts on using either the CLI or XenCenter.
After enabling Host Power On, you can enable the Workload Balancing Automation and Power Management features, as described in the Workload Balancing Administrator's Guide.
To Enable Host Power On Using the CLI
1. Run the command:
xe host-set-power-on host=<host_uuid> \
  power-on-mode=("", "wake-on-lan", "iLO", "DRAC", "custom") \
  power-on-config:key=value
For iLO and DRAC the keys are power_on_ip, power_on_user, power_on_password_secret. Use power_on_password_secret to specify the password if you are using the secret feature.
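For example, a sketch of enabling Host Power On through iLO using a stored secret for the password (the IP address, user name and secret UUID are placeholders, and the example assumes the secret has already been created):
xe host-set-power-on host=<host_uuid> power-on-mode=iLO \
  power-on-config:power_on_ip=<ilo_ip_address> \
  power-on-config:power_on_user=<ilo_username> \
  power-on-config:power_on_password_secret=<secret_uuid>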
To Turn on Hosts Remotely Using the CLI
1. Run the command:
xe host-power-on host=<host_uuid>

Configuring a Custom Script for XenServer's Host Power On Feature

If your servers' remote-power solution uses a protocol that is not supported by default (such as Wake-On-Ring or Intel Active Management Technology), you can create a custom Linux Python script to turn on your XenServer computers remotely. However, you can also create custom scripts for iLO, DRAC, and Wake-On-LAN remote-power solutions.
This topic provides information about configuring a custom script for Host Power On using the key/value pairs associated with the XenServer API call host.power_on.
When you create a custom script, run it from the command line each time you want to control power remotely on XenServer. Alternatively, you can specify it in XenCenter and use the XenCenter UI features to interact with it.
The XenServer API is documented in the Citrix XenServer Management API reference, which is available from the Citrix Web site.
Note:
Do not modify the scripts provided by default in the /etc/xapi.d/plugins/ directory. You can include new scripts in this directory, but you should never modify the scripts contained in that directory after installation.
Key/Value Pairs
To use Host Power On, you must configure the host.power_on_mode and host.power_on_config keys. Their values are provided below.
There is also an API call that lets you set these fields all at once:
void host.set_host_power_on_mode(string mode, Dictionary<string,string> config)
host.power_on_mode
Definition: This contains key/value pairs to specify the type of remote-power solution (for example, Dell DRAC).
Possible values:
• An empty string, representing power-control disabled
• "iLO". Lets you specify HP iLO.
• "DRAC". Lets you specify Dell DRAC. To use DRAC, you must have already installed the Dell supplemental
pack.
• "wake-on-lan". Lets you specify Wake on LAN.
• Any other name (used to specify a custom power-on script). This option is used to specify a custom script
for power management.
Type: string
host.power_on_config
Definition: This contains key/value pairs for mode configuration. Provides additional information for iLO and DRAC.
Possible values:
• If you configured iLO or DRAC as the type of remote-power solution, you must also specify one of the
following keys:
• "power_on_ip". This is the IP address you specified configured to communicate with the power-control card. Alternatively, you can enter the domain name for the network interface where iLO or DRAC is configured.
• "power_on_user". This is the iLO or DRAC user name that is associated with the management processor, which you may or may not have changed from its factory default settings.
• "power_on_password_secret". Specifies using the secrets feature to secure your password.
• To use the secrets feature to store your password, specify the key "power_on_password_secret".
Type: Map (string,string)
Sample Script
This sample script imports the XenServer API, defines itself as a custom script, and then passes parameters specific to the host you want to control remotely. You must define the parameters session, remote_host, and power_on_config in all custom scripts.
The result is only displayed when the script is unsuccessful.
import XenAPI

def custom(session, remote_host, power_on_config):
    result = "Power On Not Successful"
    for key in power_on_config.keys():
        result = result + " key=" + key + " value=" + power_on_config[key]
    return result
Note:
After creation, save the script in the /etc/xapi.d/plugins directory with a .py extension.
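As a sketch, assuming the script above was saved as /etc/xapi.d/plugins/example.py (example is a hypothetical name), the host could then be configured to use it by specifying the script name as the power-on mode:
xe host-set-power-on host=<host_uuid> power-on-mode=example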

Storage

This chapter discusses the framework for storage abstractions. It describes the way physical storage hardware of various kinds is mapped to VMs, and the software objects used by the XenServer host API to perform storage-related tasks. Detailed sections on each of the supported storage types include procedures for creating storage for VMs using the CLI, with type-specific device configuration options, generating snapshots for backup purposes and some best practices for managing storage in XenServer host environments. Finally, the virtual disk QoS (quality of service) settings are described.

Storage Overview

This section explains what the XenServer storage objects are and how they are related to each other.

Storage Repositories (SRs)

XenServer defines a container called a storage repository (SR) to describe a particular storage target, in which Virtual Disk Images (VDIs) are stored. A VDI is a disk abstraction which contains the contents of a virtual disk.
The interface to storage hardware allows VDIs to be supported on a large number of SR types. The XenServer SR is very flexible, with built-in support for IDE, SATA, SCSI and SAS drives locally connected, and iSCSI, NFS, SAS and Fibre Channel remotely connected. The SR and VDI abstractions allow advanced storage features such as sparse provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them. For storage subsystems that do not inherently support advanced operations directly, a software stack is provided based on Microsoft's Virtual Hard Disk (VHD) specification which implements these features.
Each XenServer host can use multiple SRs and different SR types simultaneously. These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. A shared SR must be network accessible to each host. All hosts in a single resource pool must have at least one shared SR in common.
SRs are storage targets containing virtual disk images (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.
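For example, a quick way to list the SRs known to a pool and their types (the fields shown are just a convenient subset):
xe sr-list params=uuid,name-label,type,content-type,shared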
A storage repository is a persistent, on-disk data structure. For SR types that use an underlying block device, the process of creating a new SR involves erasing any existing data on the specified storage target. Other storage types such as NFS, NetApp, EqualLogic and StorageLink SRs, create a new container on the storage array in parallel to existing SRs.
CLI operations to manage storage repositories are described in the section called “SR Commands”.

Virtual Disk Images (VDIs)

Virtual Disk Images are a storage abstraction that is presented to a VM. VDIs are the fundamental unit of virtualized storage in XenServer. Similar to SRs, VDIs are persistent, on-disk objects that exist independently of XenServer hosts. CLI operations to manage VDIs are described in the section called “VDI Commands”. The actual on-disk representation of the data differs by the SR type and is managed by a separate storage plug-in interface for each SR, called the SM API.

Physical Block Devices (PBDs)

Physical Block Devices represent the interface between a physical server and an attached SR. PBDs are connector objects that allow a given SR to be mapped to a XenServer host. PBDs store the device configuration fields that are used to connect to and interact with a given storage target. For example, NFS device configuration includes the IP address of the NFS server and the associated path that the XenServer host mounts. PBD objects manage the run-time attachment of a given SR to a given XenServer host. CLI operations relating to PBDs are described in the section called “PBD Commands”.

Virtual Block Devices (VBDs)

Virtual Block Devices are connector objects (similar to the PBD described above) that allow mappings between VDIs and VMs. In addition to providing a mechanism for attaching (also called plugging) a VDI into a VM, VBDs allow for the fine-tuning of parameters regarding QoS (quality of service), statistics, and the bootability of a given VDI. CLI operations relating to VBDs are described in the section called “VBD Commands”.

Summary of Storage objects

The following image is a summary of how the storage objects presented so far are related:
Graphical overview of storage repositories and related objects

Virtual Disk Data Formats

In general, there are three types of mapping of physical storage to a VDI:
File-based VHD on a filesystem; VM images are stored as thin-provisioned VHD format files on either a local
non-shared filesystem (EXT type SR) or a shared NFS target (NFS type SR)
Logical Volume-based VHD on a LUN; The default XenServer blockdevice-based storage inserts a Logical Volume
manager on a disk, either a locally attached device (LVM type SR) or a SAN attached LUN over either Fibre Channel (LVMoHBA type SR), iSCSI (LVMoISCSI type SR) or SAS (LVMoHBA type SR). VDIs are represented as volumes within the Volume manager and stored in VHD format to allow thin provisioning of reference nodes on snapshot and clone.
LUN per VDI; LUNs are directly mapped to VMs as VDIs by SR types that provide an array-specific plug in
(NetApp, EqualLogic or StorageLink type SRs). The array storage abstraction therefore matches the VDI storage abstraction for environments that manage storage provisioning at an array level.
VHD-based VDIs
VHD files may be chained, allowing two VDIs to share common data. In cases where a VHD-backed VM is cloned, the resulting VMs share the common on-disk data at the time of cloning. Each proceeds to make its own changes in an isolated copy-on-write (CoW) version of the VDI. This feature allows VHD-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.
The VHD format used by LVM-based and File-based SR types in XenServer uses sparse provisioning. The image file is automatically extended in 2MB chunks as the VM writes data into the disk. For File-based VHD, this has the considerable benefit that VM image files take up only as much space on the physical storage as required. With LVM-based VHD the underlying logical volume container must be sized to the virtual size of the VDI; however, unused space on the underlying CoW instance disk is reclaimed when a snapshot or clone occurs. The difference between the two behaviors can be characterized in the following way:
• For LVM-based VHDs, the difference disk nodes within the chain consume only as much data as has been
written to disk but the leaf nodes (VDI clones) remain fully inflated to the virtual size of the disk. Snapshot leaf nodes (VDI snapshots) remain deflated when not in use and can be attached Read-only to preserve the deflated allocation. Snapshot nodes that are attached Read-Write will be fully inflated on attach, and deflated on detach.
• For file-based VHDs, all nodes consume only as much data as has been written, and the leaf node files grow
to accommodate data as it is actively written. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will physically be only the size of the OS data that has been written to the disk, plus some minor metadata overhead.
When cloning VMs based on a single VHD template, each child VM forms a chain where new changes are written to the new VM, and old blocks are directly read from the parent template. If the new VM was converted into a further template and more VMs cloned, the resulting chain can degrade performance. XenServer supports a maximum chain length of 30, but it is generally not recommended that you approach this limit without good reason. If in doubt, you can always "copy" the VM using XenServer or the vm-copy command, which resets the chain length back to 0.
VHD Chain Coalescing
VHD images support chaining, which is the process whereby information shared between one or more VDIs is not duplicated. This leads to a situation where trees of chained VDIs are created over time as VMs and their associated VDIs get cloned. When one of the VDIs in a chain is deleted, XenServer rationalizes the other VDIs in the chain to remove unnecessary VDIs.
This coalescing process runs asynchronously. The amount of disk space reclaimed and the time taken to perform the process depends on the size of the VDI and the amount of shared data. Only one coalescing process will ever be active for an SR. This process thread runs on the SR master host.
If you have critical VMs running on the master server of the pool and experience occasional slow IO due to this process, you can take steps to mitigate against this:
• Migrate the VM to a host other than the SR master (see the example below)
• Set the disk IO priority to a higher level, and adjust the scheduler. See the section called “Virtual Disk QoS Settings” for more information.
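For the first option above, a live migration can be performed from the CLI; for example (a minimal sketch with placeholder names):
xe vm-migrate vm=<vm_name> host=<destination_host_name> live=true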
Space Utilization
Space utilization is always reported based on the current allocation of the SR, and may not reflect the amount of virtual disk space allocated. The reporting of space for LVM-based SRs versus File-based SRs will also differ given that File-based VHD supports full thin provisioning, while the underlying volume of an LVM-based VHD will be fully inflated to support potential growth for writeable leaf nodes. Space utilization reported for the SR will depend on the number of snapshots, and the amount of difference data written to a disk between each snapshot.
LVM-based space utilization differs depending on whether an LVM SR is upgraded or created as a new SR in XenServer. Upgraded LVM SRs will retain a base node that is fully inflated to the size of the virtual disk, and any subsequent snapshot or clone operations will provision at least one additional node that is fully inflated. For new SRs, in contrast, the base node will be deflated to only the data allocated in the VHD overlay.
When VHD-based VDIs are deleted, the space is marked for deletion on disk. Actual removal of allocated data may take some time to occur as it is handled by the coalesce process that runs asynchronously and independently for each VHD-based SR.
LUN-based VDIs
Mapping a raw LUN as a Virtual Disk image is typically the most high-performance storage method. For administrators that want to leverage existing storage SAN infrastructure such as NetApp, EqualLogic or StorageLink accessible arrays, the array snapshot, clone and thin provisioning capabilities can be exploited directly using one of the array specific adapter SR types (NetApp, EqualLogic or StorageLink). The virtual machine storage
operations are mapped directly onto the array APIs using a LUN per VDI representation. This includes activating the data path on demand such as when a VM is started or migrated to another host.
Managed NetApp LUNs are accessible using the NetApp SR driver type, and are hosted on a Network Appliance device running a version of Ontap 7.0 or greater. LUNs are allocated and mapped dynamically to the host using the XenServer host management framework.
EqualLogic storage is accessible using the EqualLogic SR driver type, and is hosted on an EqualLogic storage array running a firmware version of 4.0 or greater. LUNs are allocated and mapped dynamically to the host using the XenServer host management framework.
For further information on StorageLink supported array systems and the various capabilities in each case, please refer to the StorageLink documentation directly.

Storage Repository Types

The storage repository types supported in XenServer are provided by plugins in the control domain; these can be examined, and plugins supplied by third parties can be added, in the /opt/xensource/sm directory. Modification of these files is unsupported, but visibility of these files may be valuable to developers and power users. New storage manager plugins placed in this directory are automatically detected by XenServer. Use the sm-list command (see the section called “Storage Manager Commands”) to list the available SR types.
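For example, to list the registered SR types:
xe sm-list params=name-label,type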
New storage repositories are created using the New Storage wizard in XenCenter. The wizard guides you through the various probing and configuration steps. Alternatively, use the sr-create command. This command creates a new SR on the storage substrate (potentially destroying any existing data), and creates the SR API object and a corresponding PBD record, enabling VMs to use the storage. On successful creation of the SR, the PBD is automatically plugged. If the SR shared=true flag is set, a PBD record is created and plugged for every XenServer Host in the resource pool.
All XenServer SR types support VDI resize, fast cloning and snapshot. SRs based on the LVM SR type (local, iSCSI, or HBA) provide thin provisioning for snapshot and hidden parent nodes. The other SR types support full thin provisioning, including for virtual disks that are active.
Note:
Automatic LVM metadata archiving is disabled by default. This does not prevent metadata recovery for LVM groups.
Warning:
When VHD VDIs are not attached, for example in the case of a VDI snapshot, they are stored by default thinly-provisioned. Because of this it is imperative to ensure that there is sufficient disk-space available for the VDI to become thickly provisioned when attempting to attach it. VDI clones, however, are thickly-provisioned.
The maximum supported VDI sizes are:
Storage type Maximum VDI size
EXT3 2TB
LVM 2TB
NetApp 2TB
EqualLogic 15TB
ONTAP(NetApp) 12TB

Local LVM

The Local LVM type presents disks within a locally-attached Volume Group.
By default, XenServer uses the local disk on the physical host on which it is installed. The Linux Logical Volume Manager (LVM) is used to manage VM storage. A VDI is implemented in VHD format in an LVM logical volume of the specified size.
XenServer versions 5.0 and earlier did not use the VHD format and will remain in legacy mode. See the section called
“Upgrading LVM Storage from XenServer 5.0 or Earlier” for information about upgrading a storage repository to
the new format.
Creating a Local LVM SR (lvm)
Device-config parameters for lvm SRs are:
Parameter Name Description Required?
Device              device name on the local host to use for the SR              Yes

To create a local lvm SR on /dev/sdb use the following command.
xe sr-create host-uuid=<valid_uuid> content-type=user \
  name-label=<"Example Local LVM SR"> shared=false \
  device-config:device=/dev/sdb type=lvm

Local EXT3 VHD

The Local EXT3 VHD type represents disks as VHD files stored on a local path.
Local disks can also be configured with a local EXT SR to serve VDIs stored in the VHD format. Local disk EXT SRs must be configured using the XenServer CLI.
By definition, local disks are not shared across pools of XenServer hosts. As a consequence, VMs whose VDIs are stored in SRs on local disks are not agile: they cannot be migrated between XenServer hosts in a resource pool.
Creating a Local EXT3 SR (ext)
Device-config parameters for ext SRs:
Parameter Name Description Required?
Device              device name on the local host to use for the SR              Yes
To create a local ext SR on /dev/sdb use the following command:
xe sr-create host-uuid=<valid_uuid> content-type=user \
  name-label=<"Example Local EXT3 SR"> shared=false \
  device-config:device=/dev/sdb type=ext

udev

The udev type represents devices plugged in using the udev device manager as VDIs.
XenServer has two SRs of type udev that represent removable storage. One is for the CD or DVD disk in the physical CD or DVD-ROM drive of the XenServer host. The other is for a USB device plugged into a USB port of the XenServer host. VDIs that represent the media come and go as disks or USB sticks are inserted and removed.
ISO
The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared ISO libraries. For storage repositories that store a library of ISOs, the content-type parameter must be set to iso.
For example:
xe sr-create host-uuid=<valid_uuid> content-type=iso \
  type=iso name-label=<"Example ISO SR"> \
  device-config:location=<nfs_server:path>

Software iSCSI Support

XenServer provides support for shared SRs on iSCSI LUNs. iSCSI is supported using the open-iSCSI software iSCSI initiator or by using a supported iSCSI Host Bus Adapter (HBA). The steps for using iSCSI HBAs are identical to those for Fibre Channel HBAs, both of which are described in the section called “Creating a Shared LVM over
Fibre Channel / iSCSI HBA or SAS SR (lvmohba)”.
Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits provided by LVM VDIs in the local disk case. Shared iSCSI SRs using the software-based host initiator are capable of supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable downtime.
iSCSI SRs use the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.
XenServer Host iSCSI configuration
All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these are called iSCSI Qualified Names, or IQNs.
XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random IQN during host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.
iSCSI targets commonly provide access control using iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow access by the host's initiator IQN. Similarly, targets/ LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.
Note:
iSCSI targets that do not provide access control will typically default to restricting LUN access to a single initiator to ensure data integrity. If an iSCSI LUN is intended for use as a shared SR across multiple XenServer hosts in a resource pool, ensure that multi-initiator access is enabled for the specified LUN.
The XenServer host IQN value can be adjusted using XenCenter, or using the CLI with the following command when using the iSCSI software initiator:
xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>
Warning:
It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN identifier is used, data corruption and/or denial of LUN access can occur.
Warning:
Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets or existing SRs.

Citrix StorageLink SRs

The Citrix StorageLink (CSL) storage repository provides direct access to native array APIs to offload intensive tasks such as LUN provisioning, snapshot and cloning of data. CSL provides a number of supported adapter types in order to communicate with array management APIs. Once successfully configured, the adapter will handle all provisioning and mapping of storage to XenServer hosts on demand. Data path support for LUNs includes both Fibre Channel and iSCSI as hardware permits.
CSL SRs can be created, viewed, and managed using both XenCenter and the xe CLI.
Note:
For more information on using CSL SR types with XenCenter, see the XenCenter online help.
Because the CSL SR can be used to access different storage arrays, the exact features available for a given CSL SR depend on the capabilities of the array. All CSL SRs use a LUN-per-VDI model where a new LUN is provisioned for each virtual disk (VDI).
CSL SRs can co-exist with other SR types on the same storage array hardware, and multiple CSL SRs can be defined within the same resource pool.
CSL supports the following array types:
• NetApp/ IBM N Series
Important:
When using NetApp storage with StorageLink on a XenServer host, Initiator Groups are automatically created for the host on the array. These Initiator Groups are created with Linux as the Operating System (OS).
Manually adding Initiator Groups with other OS values is not recommended.
• Dell EqualLogic PS Series
Important:
The Dell EqualLogic API uses SNMP for communication. It requires SNMP v3 and therefore requires firmware v5.0.0 or greater. If your EqualLogic array uses earlier firmware, you will need to upgrade it to v5.0.0. The firmware can be downloaded from the Dell EqualLogic Firmware download site at https://www.equallogic.com/support/download.aspx?id=1502 (note that you need a Dell Support account to access this page).
After a firmware upgrade to v5.0.0 or later, you will need to explicitly reset the administrator (grpadmin) password. This is necessary so that the password can be converted to the necessary SNMPv3 authentication and encryption keys.
To reset the password, log in to the array via telnet or ssh, and run the following command:
account select grpadmin passwd
At the prompt, enter the new password; at the next prompt, retype it to confirm. The password can be the same as the original password.
Upgrading XenServer with StorageLink SRs
If you are upgrading pools (from XenServer version 5.6 or later to the current version of XenServer) that contain StorageLink Gateway SRs, note that only the following adapters are supported: NetApp and Dell EqualLogic. If the pool contains VMs running on any other types of StorageLink Gateway SRs, do not upgrade the pool.
Note:
Before you upgrade, you will need to first detach any supported StorageLink Gateway SRs and then, once you have upgraded, re-attach them and re-enter your credentials (if you are using XenCenter, the Rolling Pool Upgrade wizard will perform this automatically).
Warning:
If the default SR in the pool you wish to upgrade is a supported StorageLink SR, you must set the default to an SR of a different type (non-StorageLink). Any VMs suspended on a StorageLink Gateway SR by the Rolling Pool Upgrade wizard will not be resumable after the upgrade.
Creating a Shared StorageLink SR
The device-config parameters for CSL SRs are:
Parameter name       Description                                                                Optional?
target               The server name or IP address of the array management console             No
storageSystemId      The storage system ID to use for allocating storage                       No
storagePoolId        The storage pool ID within the specified storage system to use for        No
                     allocating storage
username             The username to use for connection to the array management console        Yes
adapterid            The name of the adapter                                                    No
password             The password to use for connecting to the array management console        Yes
chapuser             The username to use for CHAP authentication                               Yes
chappassword         The password to use for CHAP authentication                               Yes
protocol             Specifies the storage protocol to use (fc or iscsi) for multi-protocol    Yes
                     storage systems. If not specified, fc is used if available, otherwise
                     iscsi.
provision-type       Specifies whether to use thick or thin provisioning (thick or thin);      Yes
                     default is thick
provision-options    Additional provisioning options: Set to dedup to use the de-duplication   Yes
                     features supported by the storage system

Note:
When creating a new SR on a NetApp array using StorageLink, you can choose to use either an aggregate or an existing FlexVol.
If you choose an existing FlexVol for creating the SR, each VDI would be hosted on a LUN within the FlexVol.
If you choose an aggregate instead, each VDI will be hosted on a LUN inside a new FlexVol within the aggregate.
To Create a CSL SR Using XenCenter
1. On the XenCenter toolbar, click New Storage. This displays the New Storage Repository wizard.
2. Under Virtual disk storage, select Advanced StorageLink technology and then click Next.
3. Work through the wizard to configure your specific storage array.
To Create a CSL SR Using the CLI
1. Use the sr-probe command with the device-config:target parameter and username and password
credentials to identify the available storage system IDs.
For example:
xe sr-probe type=cslg device-config:adapterid=NETAPP \
  device-config:username=**** device-config:password=**** \
  device-config:target=****

<csl__storageSystemInfoList>
  <csl__storageSystemInfo>
    <friendlyName>devfiler</friendlyName>
    <displayName>NetApp FAS3020 (devfiler)</displayName>
    <vendor>NetApp</vendor>
    <model>FAS3020</model>
    <serialNum>3064792</serialNum>
    <storageSystemId>NETAPP__LUN__0A50E2F6</storageSystemId>
    <systemCapabilities>
      <capabilities>PROVISIONING</capabilities>
      <capabilities>THIN_PROVISIONING</capabilities>
      <capabilities>MAPPING</capabilities>
      <capabilities>MULTIPLE_STORAGE_POOLS</capabilities>
      <capabilities>LUN_GROUPING</capabilities>
      <capabilities>DEDUPLICATION</capabilities>
      <capabilities>DIFF_SNAPSHOT</capabilities>
      <capabilities>REMOTE_REPLICATION</capabilities>
      <capabilities>CLONE</capabilities>
      <capabilities>RESIZE</capabilities>
      <capabilities>REQUIRES_STORAGE_POOL_CLEANUP</capabilities>
      <capabilities>SUPPORTS_OPTIMIZED_ISCSI_LOGIN</capabilities>
      <capabilities>SUPPORTS_INSTANT_CLONE</capabilities>
      <capabilities>SUPPORTS_CLONE_OF_SNAPSHOT</capabilities>
    </systemCapabilities>
    <protocolSupport>
      <capabilities>FC</capabilities>
      <capabilities>ISCSI</capabilities>
      <capabilities>NFS</capabilities>
      <capabilities>CIFS</capabilities>
    </protocolSupport>
    <csl__snapshotMethodInfoList>
      <csl__snapshotMethodInfo>
        <name>LUNClone</name>
        <displayName>LUNClone</displayName>
        <maxSnapshots>128</maxSnapshots>
        <supportedNodeTypes><nodeType>STORAGE_VOLUME</nodeType></supportedNodeTypes>
        <snapshotTypeList>
          <snapshotType>DIFF_SNAPSHOT</snapshotType>
          <snapshotType>IS_DEFAULT</snapshotType>
        </snapshotTypeList>
        <snapshotCapabilities>
          <capabilities>THIN_PROVISIONED_TARGET</capabilities>
          <capabilities>AUTO_PROVISIONED_TARGET</capabilities>
        </snapshotCapabilities>
      </csl__snapshotMethodInfo>
      <csl__snapshotMethodInfo>
        <name>SplitLUNClone</name>
        <displayName>SplitLUNClone</displayName>
        <maxSnapshots>128</maxSnapshots>
        <supportedNodeTypes><nodeType>STORAGE_VOLUME</nodeType></supportedNodeTypes>
        <snapshotTypeList><snapshotType>CLONE</snapshotType></snapshotTypeList>
        <snapshotCapabilities>
          <capabilities>THIN_PROVISIONED_TARGET</capabilities>
          <capabilities>AUTO_PROVISIONED_TARGET</capabilities>
        </snapshotCapabilities>
      </csl__snapshotMethodInfo>
    </csl__snapshotMethodInfoList>
  </csl__storageSystemInfo>
</csl__storageSystemInfoList>
You can use grep to filter the sr-probe output to just display the storage system IDs:
xe sr-probe type=cslg device-config:adapterid=NETAPP \
  device-config:username=xxxx device-config:password=xxxx \
  device-config:target=xxxx | grep storageSystemId

<csl__storageSystemInfoList>
  <csl__storageSystemInfo>
    <friendlyName>devfiler</friendlyName>
    <displayName>NetApp FAS3020 (devfiler)</displayName>
    <vendor>NetApp</vendor>
    <model>FAS3020</model>
    <serialNum>3064792</serialNum>
    <storageSystemId>NETAPP__LUN__0A50E2F6</storageSystemId>
    <systemCapabilities>
      <capabilities>PROVISIONING</capabilities>
2. Add the desired storage system ID to the sr-probe command to identify the storage pools available within the
specified storage system. You can use grep to filter the sr-probe output to just display the storage pool IDs:
xe sr-probe type=cslg device-config:adapterid=NETAPP \
  device-config:username=xxxx device-config:password=xxxx \
  device-config:target=xxxx \
  device-config:storageSystemId=NETAPP__LUN__0A50E2F6 | grep storagePoolId

<csl__storagePoolInfo>
  <displayName>aggr0</displayName>
  <friendlyName>aggr0</friendlyName>
  <storagePoolId>61393750-84b6-11dc-9a7d-00a09804ab62</storagePoolId>
  <parentStoragePoolId></parentStoragePoolId>
  <storageSystemId>NETAPP__LUN__0A50E2F6</storageSystemId>
  <sizeInMB>116262</sizeInMB>
  <freeSpaceInMB>5746</freeSpaceInMB>
  <availableFreeSpaceInMB>0</availableFreeSpaceInMB>
  <isDefault>Yes</isDefault>
  <status>0</status>
  <provisioningOptions>
    <supportedRaidTypes><raidType>RAID6</raidType>
3. Create the SR specifying the desired storage system and storage pool IDs:
xe sr-create type=cslg device-config:adapterid=NETAPP \
  device-config:target=xxxx device-config:username=xxxx \
  device-config:password=xxxx device-config:storageSystemId=xxxx \
  device-config:storagePoolId=xxxx

Managing Hardware Host Bus Adapters (HBAs)

This section covers various operations required to manage SAS, Fibre Channel and iSCSI HBAs.
Sample QLogic iSCSI HBA setup
For full details on configuring QLogic Fibre Channel and iSCSI HBAs please refer to the QLogic website.
Once the HBA is physically installed into the XenServer host, use the following steps to configure the HBA:
1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify the
appropriate values if using static IP addressing or a multi-port HBA.
/opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0
2. Add a persistent iSCSI target to port 0 of the HBA.
/opt/QLogic_Corporation/SANsurferiCLI/iscli -pa 0 <iscsi_target_ip_address>
3. Use the xe sr-probe command to force a rescan of the HBA controller and display available LUNs. See the
section called “Probing an SR” and the section called “Creating a Shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)” for more details.
Removing HBA-based SAS, FC or iSCSI Device Entries
Note:
This step is not required. Citrix recommends that only power users perform this process if it is necessary.
Each HBA-based LUN has a corresponding global device path entry under /dev/disk/by-scsibus in the format <SCSIid>-<adapter>:<bus>:<target>:<lun> and a standard device path under /dev. To remove the device entries for LUNs no longer in use as SRs use the following steps:
1. Use sr-forget or sr-destroy as appropriate to remove the SR from the XenServer host database. See the section
called “Destroying or Forgetting a SR” for details.
2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.
3. Use the sr-probe command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding to the
LUN to be removed. See the section called “Probing an SR” for details.
4. Remove the device entries with the following command:
echo "1" > /sys/class/scsi_device/<adapter>:<bus>:<target>:<lun>/device/delete
Warning:
Make absolutely sure you are certain which LUN you are removing. Accidentally removing a LUN required for host operation, such as the boot or root device, will render the host unusable.

LVM over iSCSI

The LVM over iSCSI type represents disks as Logical Volumes within a Volume Group created on an iSCSI LUN.
Creating a Shared LVM Over iSCSI SR Using the Software iSCSI Initiator (lvmoiscsi)
Device-config parameters for lvmoiscsi SRs:
Parameter Name Description Required?
target the IP address or hostname of the iSCSI filer that hosts the SR yes
targetIQN the IQN target address of iSCSI filer that hosts the SR yes
SCSIid the SCSI bus ID of the destination LUN yes
chapuser the username to be used for CHAP authentication no
chappassword the password to be used for CHAP authentication no
port the network port number on which to query the target no
usediscoverynumber the specific iSCSI record index to use no
incoming_chapuser the username that the iSCSI filer will use to authenticate against the host no
incoming_chappassword the password that the iSCSI filer will use to authenticate against the host no
To create a shared lvmoiscsi SR on a specific LUN of an iSCSI target use the following command.
xe sr-create host-uuid=<valid_uuid> content-type=user \
  name-label=<"Example shared LVM over iSCSI SR"> shared=true \
  device-config:target=<target_ip> device-config:targetIQN=<target_iqn> \
  device-config:SCSIid=<scsi_id> \
  type=lvmoiscsi
Creating a Shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)
SRs of type lvmohba can be created and managed using the xe CLI or XenCenter.
Device-config parameters for lvmohba SRs:
Parameter name Description Required?
SCSIid Device SCSI ID Yes
To create a shared lvmohba SR, perform the following steps on each host in the pool:
1. Zone in one or more LUNs to each XenServer host in the pool. This process is highly specific to the SAN
equipment in use. Please refer to your SAN documentation for details.
2. If necessary, use the HBA CLI included in the XenServer host to configure the HBA:
• Emulex: /bin/sbin/ocmanager
• QLogic FC: /opt/QLogic_Corporation/SANsurferCLI
• QLogic iSCSI: /opt/QLogic_Corporation/SANsurferiCLI
See the section called “Managing Hardware Host Bus Adapters (HBAs)” for an example of QLogic iSCSI HBA configuration. For more information on Fibre Channel and iSCSI HBAs please refer to the Emulex and QLogic websites.
3. Use the sr-probe command to determine the global device path of the HBA LUN. sr-probe forces a re-scan of
HBAs installed in the system to detect any new LUNs that have been zoned to the host and returns a list of properties for each LUN found. Specify the host-uuid parameter to ensure the probe occurs on the desired host.
The global device path returned as the <path> property will be common across all hosts in the pool and therefore must be used as the value for the device-config:device parameter when creating the SR.
If multiple LUNs are present use the vendor, LUN size, LUN serial number, or the SCSI ID as included in the <path> property to identify the desired LUN.
xe sr-probe type=lvmohba \
  host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31
Error code: SR_BACKEND_FAILURE_90
Error parameters: , The request is missing the device parameter,
<?xml version="1.0" ?>
<Devlist>
  <BlockDevice>
    <path>
      /dev/disk/by-id/scsi-360a9800068666949673446387665336f
    </path>
    <vendor>
      HITACHI
    </vendor>
    <serial>
      730157980002
    </serial>
    <size>
      80530636800
    </size>
    <adapter>
      4
    </adapter>
    <channel>
      0
    </channel>
    <id>
      4
    </id>
    <lun>
      2
    </lun>
    <hba>
      qla2xxx
    </hba>
  </BlockDevice>
  <Adapter>
    <host>
      Host4
    </host>
    <name>
      qla2xxx
    </name>
    <manufacturer>
      QLogic HBA Driver
    </manufacturer>
    <id>
      4
    </id>
  </Adapter>
</Devlist>
4. On the master host of the pool create the SR, specifying the global device path returned in the <path>
property from sr-probe. PBDs will be created and plugged for each host in the pool automatically.
xe sr-create host-uuid=<valid_uuid> \
  content-type=user \
  name-label=<"Example shared LVM over HBA SR"> shared=true \
  device-config:SCSIid=<device_scsi_id> type=lvmohba
Note:
You can use the XenCenter Repair Storage Repository function to retry the PBD creation and plugging portions of the sr-create operation. This can be valuable in cases where the LUN zoning was incorrect for one or more hosts in a pool when the SR was created. Correct
the zoning for the affected hosts and use the Repair Storage Repository function instead of removing and re-creating the SR.

NFS VHD

The NFS VHD type stores disks as VHD files on a remote NFS filesystem.
NFS is a ubiquitous form of storage infrastructure that is available in many environments. XenServer allows existing NFS servers that support NFS V3 over TCP/IP to be used immediately as a storage repository for virtual disks (VDIs). VDIs are stored in the Microsoft VHD format only. Moreover, as NFS SRs can be shared, VDIs stored in a shared SR allow VMs to be started on any XenServer hosts in a resource pool and be migrated between them using XenMotion with no noticeable downtime.
Creating an NFS SR requires the hostname or IP address of the NFS server. The sr-probe command provides a list of valid destination paths exported by the server on which the SR can be created. The NFS server must be configured to export the specified path to all XenServer hosts in the pool, or the creation of the SR and the plugging of the PBD record will fail.
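For example, to list the paths exported by an NFS server (the server address is a placeholder):
xe sr-probe type=nfs device-config:server=<nfs_server_ip>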
As mentioned at the beginning of this chapter, VDIs stored on NFS are sparse. The image file is allocated as the VM writes data into the disk. This has the considerable benefit that VM image files take up only as much space on the NFS storage as is required. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will only reflect the size of the OS data that has been written to the disk rather than the entire 100GB.
VHD files may also be chained, allowing two VDIs to share common data. In cases where a NFS-based VM is cloned, the resulting VMs will share the common on-disk data at the time of cloning. Each will proceed to make its own changes in an isolated copy-on-write version of the VDI. This feature allows NFS-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.
Note:
The maximum supported length of VHD chains is 30.
As VHD-based images require extra metadata to support sparseness and chaining, the format is not as high-performance as LVM-based storage. In cases where performance really matters, it is well worth forcibly allocating the sparse regions of an image file. This will improve performance at the cost of consuming additional disk space.
XenServer's NFS and VHD implementations assume that they have full control over the SR directory on the NFS server. Administrators should not modify the contents of the SR directory, as this can risk corrupting the contents of VDIs.
XenServer has been tuned for enterprise-class storage that uses non-volatile RAM to provide fast acknowledgments of write requests while maintaining a high degree of data protection from failure. XenServer has been tested extensively against Network Appliance FAS270c and FAS3020c storage, using Data OnTap 7.2.2.
In situations where XenServer is used with lower-end storage, it will cautiously wait for all writes to be acknowledged before passing acknowledgments on to guest VMs. This will incur a noticeable performance cost, and might be remedied by setting the storage to present the SR mount point as an asynchronous mode export. Asynchronous exports acknowledge writes that are not actually on disk, and so administrators should consider the risks of failure carefully in these situations.
The XenServer NFS implementation uses TCP by default. If your situation allows, you can configure the implementation to use UDP in situations where there may be a performance benefit. To do this, specify the device-config parameter useUDP=true at SR creation time.
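For example, the following sketch (using placeholder values in the style of the NFS example later in this section) creates an NFS SR with UDP transport enabled:
xe sr-create host-uuid=<host_uuid> content-type=user \
name-label=<"Example NFS SR over UDP"> shared=true \
device-config:server=<192.168.1.10> device-config:serverpath=</export1> \
device-config:useUDP=true type=nfs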
Warning:
Since VDIs on NFS SRs are created as sparse, administrators must ensure that there is enough disk space on the NFS SRs for all required VDIs. XenServer hosts do not enforce that the space required for VDIs on NFS SRs is actually present.
Creating a Shared NFS SR (NFS)
Device-config parameters for NFS SRs:
Parameter Name   Description                                                Required?
server           IP address or hostname of the NFS server                   Yes
serverpath       Path, including the NFS mount point, on the NFS server     Yes
                 that hosts the SR

To create a shared NFS SR on 192.168.1.10:/export1 use the following command.
xe sr-create host-uuid=<host_uuid> content-type=user \
name-label=<"Example shared NFS SR"> shared=true \
device-config:server=<192.168.1.10> device-config:serverpath=</export1> type=nfs

LVM over Hardware HBA

The LVM over hardware HBA type represents disks as VHDs on Logical Volumes within a Volume Group created on an HBA LUN providing, for example, hardware-based iSCSI or FC support.
XenServer hosts support Fibre Channel (FC) storage area networks (SANs) through Emulex or QLogic host bus adapters (HBAs). All FC configuration required to expose a FC LUN to the host must be completed manually, including storage devices, network devices, and the HBA within the XenServer host. Once all FC configuration is complete the HBA will expose a SCSI device backed by the FC LUN to the host. The SCSI device can then be used to access the FC LUN as if it were a locally attached SCSI device.
Use the sr-probe command to list the LUN-backed SCSI devices present on the host. This command forces a scan for new LUN-backed SCSI devices. The path value returned by sr-probe for a LUN-backed SCSI device is consistent across all hosts with access to the LUN, and therefore must be used when creating shared SRs accessible by all hosts in a resource pool.
The same features apply to QLogic iSCSI HBAs.
See the section called “Creating Storage Repositories” for details on creating shared HBA-based FC and iSCSI SRs.
Note:
XenServer support for Fibre Channel does not support direct mapping of a LUN to a VM. HBA-based LUNs must be mapped to the host and specified for use in an SR. VDIs within the SR are exposed to VMs as standard block devices.

Storage Configuration

This section covers creating storage repository types and making them available to a XenServer host. The examples provided pertain to storage configuration using the CLI, which provides the greatest flexibility. See the XenCenter Help for details on using the New Storage Repository wizard.

Creating Storage Repositories

This section explains how to create Storage Repositories (SRs) of different types and make them available to a XenServer host. The examples provided cover creating SRs using the xe CLI. See the XenCenter help for details on using the New Storage Repository wizard to add SRs using XenCenter.
Note:
Local SRs of type lvm and ext can only be created using the xe CLI. After creation all SR types can be managed by either XenCenter or the xe CLI.
There are two basic steps involved in creating a new storage repository for use on a XenServer host using the CLI:
1. Probe the SR type to determine values for any required parameters.
2. Create the SR to initialize the SR object and associated PBD objects, plug the PBDs, and activate the SR.
These steps differ in detail depending on the type of SR being created. In all examples the sr-create command returns the UUID of the created SR if successful.
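For a simple local SR of type ext (which, as noted above, can only be created using the xe CLI), no probe step is needed. The following is only a sketch using placeholder values; any data on the named device will be destroyed:
xe sr-create host-uuid=<host_uuid> content-type=user \
name-label=<"Example local EXT SR"> \
device-config:device=</dev/sdb> type=ext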
SRs can also be destroyed when no longer in use, to free up the physical device, or forgotten, to detach the SR from one XenServer host and attach it to another. See the section called “Destroying or Forgetting a SR” for details.
Note:
When specifying StorageLink configuration for a XenServer host or pool, supply either the default credentials of username: admin and password: storagelink, or any custom credentials specified during installation of the StorageLink Gateway service. Unlike StorageLink Manager, XenCenter does not supply the default credentials automatically.

Upgrading LVM Storage from XenServer 5.0 or Earlier

See the XenServer Installation Guide for information on upgrading LVM storage to enable the latest features. Local, LVM on iSCSI, and LVM on HBA storage types from older (XenServer 5.0 and before) product versions will need to be upgraded before they will support snapshot and fast clone.
Warning:
Upgrading SRs created in version 5.0 or before requires the creation of a 4MB metadata volume. Please ensure that there is at least 4MB of free space on your SR before attempting to upgrade the storage.
Note:
Upgrade is a one-way operation so Citrix recommends only performing the upgrade when you are certain the storage will no longer need to be attached to a pool running an older software version.

LVM Performance Considerations

The snapshot and fast clone functionality provided in XenServer 5.5 and later for LVM-based SRs comes with an inherent performance overhead. In cases where optimal performance is desired, XenServer supports creation of VDIs in the raw format in addition to the default VHD format. The XenServer snapshot functionality is not supported on raw VDIs.
Note:
Non-transportable snapshots using the default Windows VSS provider will work on any type of VDI.
Warning:
Do not try to snapshot a VM that has type=raw disks attached. This could result in a partial snapshot being created. In this situation, you can identify the orphan snapshot VDIs by checking the snapshot-of field and then deleting them.
VDI Types
In general, VHD format VDIs will be created. You can opt to use raw at the time you create the VDI; this can only be done using the xe CLI. After software upgrade from a previous XenServer version, existing data will be preserved as backwards-compatible raw VDIs but these are special-cased so that snapshots can be taken of them once you have allowed this by upgrading the SR. Once the SR has been upgraded and the first snapshot has been taken, you will be accessing the data through a VHD format VDI.
To check if an SR has been upgraded, verify that its sm-config:use_vhd key is true. To check if a VDI was created with type=raw, check its sm-config map. The sr-param-list and vdi-param-list xe commands can be used respectively for this purpose.
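For example, the following commands (with placeholder UUIDs) are one way to inspect those keys from the CLI:
xe sr-param-list uuid=<sr_uuid> | grep use_vhd
xe vdi-param-list uuid=<vdi_uuid> | grep sm-config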
Creating a Raw Virtual Disk Using the xe CLI
1. Run the following command to create a VDI given the UUID of the SR you want to place the virtual disk in:
xe vdi-create sr-uuid=<sr-uuid> type=user virtual-size=<virtual-size> \ name-label=<VDI name> sm-config:type=raw
2. Attach the new virtual disk to a VM and use your normal disk tools within the VM to partition and format,
or otherwise make use of the new disk. You can use the vbd-create command to create a new VBD to map the virtual disk into your VM.
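As a sketch of that last step (the UUIDs and device number are placeholders), a VBD can be created and, if the VM is running, plugged immediately:
xe vbd-create vm-uuid=<vm_uuid> vdi-uuid=<new_vdi_uuid> device=<device_number> type=Disk mode=RW
xe vbd-plug uuid=<vbd_uuid>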

Converting Between VDI Formats

It is not possible to do a direct conversion between the raw and VHD formats. Instead, you can create a new VDI (either raw, as described above, or VHD if the SR has been upgraded or was created on XenServer 5.5 or later) and then copy data into it from an existing volume. Citrix recommends that you use the xe CLI to ensure that the new VDI has a virtual size at least as big as the VDI you are copying from (by checking its virtual-size field, for example by using the vdi-param-list command). You can then attach this new VDI to a VM and use your preferred tool within the VM (standard disk management tools in Windows, or the dd command in Linux) to do a direct block-copy of the data. If the new volume is a VHD volume, it is important to use a tool that can avoid writing empty sectors to the disk so that space is used optimally in the underlying storage repository — in this case a file-based copy approach may be more suitable.
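As a sketch of the first part of that workflow (placeholder UUIDs; the new virtual size must be at least as large as the source's), the size can be checked and a destination VDI created as follows:
xe vdi-param-get uuid=<source_vdi_uuid> param-name=virtual-size
xe vdi-create sr-uuid=<destination_sr_uuid> type=user \
virtual-size=<at_least_source_virtual_size> name-label=<"Conversion target VDI">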

Probing an SR

The sr-probe command can be used in two ways:
1. To identify unknown parameters for use in creating a SR.
2. To return a list of existing SRs.
In both cases sr-probe works by specifying an SR type and one or more device-config parameters for that SR type. When an incomplete set of parameters is supplied the sr-probe command returns an error message indicating parameters are missing and the possible options for the missing parameters. When a complete set of parameters is supplied a list of existing SRs is returned. All sr-probe output is returned as XML.
For example, a known iSCSI target can be probed by specifying its name or IP address, and the set of IQNs available on the target will be returned:
xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10>
Error code: SR_BACKEND_FAILURE_96
Error parameters: , The request is missing or has an incorrect target IQN parameter, \
<?xml version="1.0" ?>
<iscsi-target-iqns>
    <TGT>
        <Index>0</Index>
        <IPAddress>192.168.1.10</IPAddress>
        <TargetIQN>iqn.192.168.1.10:filer1</TargetIQN>
    </TGT>
</iscsi-target-iqns>
Probing the same target again and specifying both the name/IP address and desired IQN returns the set of SCSIids (LUNs) available on the target/IQN.
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
device-config:targetIQN=iqn.192.168.1.10:filer1
Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
    <LUN>
        <vendor>IET</vendor>
        <LUNid>0</LUNid>
        <size>42949672960</size>
        <SCSIid>149455400000000000000000002000000b70200000f000000</SCSIid>
    </LUN>
</iscsi-target>
Probing the same target and supplying all three parameters will return a list of SRs that exist on the LUN, if any.
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
device-config:targetIQN=iqn.192.168.1.10:filer1 \
device-config:SCSIid=149455400000000000000000002000000b70200000f000000
<?xml version="1.0" ?>
<SRlist>
    <SR>
        <UUID>3f6e1ebd-8687-0315-f9d3-b02ab3adc4a6</UUID>
        <Devlist>
            /dev/disk/by-id/scsi-149455400000000000000000002000000b70200000f000000
        </Devlist>
    </SR>
</SRlist>
The following parameters can be probed for each SR type:
SR type      device-config parameter (in order of dependency)   Can be probed?   Required for sr-create?
lvmoiscsi    target                                             No               Yes
             chapuser                                           No               No
             chappassword                                       No               No
             targetIQN                                          Yes              Yes
             SCSIid                                             Yes              Yes
lvmohba      SCSIid                                             Yes              Yes
NetApp       target                                             No               Yes
             username                                           No               Yes
             password                                           No               Yes
             chapuser                                           No               No
             chappassword                                       No               No
             aggregate                                          No *             Yes
             FlexVols                                           No               No
             allocation                                         No               No
             asis                                               No               No
nfs          server                                             No               Yes
             serverpath                                         Yes              Yes
lvm          device                                             No               Yes
ext          device                                             No               Yes
EqualLogic   target                                             No               Yes
             username                                           No               Yes
             password                                           No               Yes
             chapuser                                           No               No
             chappassword                                       No               No
             storagepool                                        No †             Yes
cslg         target                                             No               Yes
             storageSystemId                                    Yes              Yes
             storagePoolId                                      Yes              Yes
             username                                           No               No ‡
             password                                           No               No ‡
             cslport                                            No               No ‡
             chapuser                                           No               No
             chappassword                                       No               No
             provision-type                                     Yes              No
             protocol                                           Yes              No
             provision-options                                  Yes              No
             raid-type                                          Yes              No

* Aggregate probing is only possible at sr-create time. It needs to be done there so that the aggregate can be specified at the point that the SR is created.
† Storage pool probing is only possible at sr-create time. It needs to be done there so that the storage pool can be specified at the point that the SR is created.
‡ If the username, password, or port configuration of the StorageLink service are changed from the default value then the appropriate parameter and value must be specified.

Storage Multipathing

Dynamic multipathing support is available for Fibre Channel and iSCSI storage backends. By default, it uses round-robin mode load balancing, so both routes have active traffic on them during normal operation. You can enable multipathing in XenCenter or on the xe CLI.
Before attempting to enable multipathing, verify that multiple targets are available on your storage server. For example, an iSCSI storage backend queried for sendtargets on a given portal should return multiple targets, as in the following example:
iscsiadm -m discovery --type sendtargets --portal 192.168.0.161
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie
To enable storage multipathing using the xe CLI
1. Unplug all PBDs on the host:
xe pbd-unplug uuid=<pbd_uuid>
2. Set the host's other-config:multipathing parameter:
xe host-param-set other-config:multipathing=true uuid=<host_uuid>
3. Set the host's other-config:multipathhandle parameter to dmp:
xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>
4. If there are existing SRs on the host running in single path mode but that have multiple paths:
• Migrate or suspend any running guests with virtual disks in the affected SRs
• Unplug and re-plug the PBD of any affected SRs to reconnect them using multipathing:
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>
To disable multipathing, first unplug your PBDs, set the host other-config:multipathing parameter to false, and then replug your PBDs as described above. Do not modify the other-config:multipathhandle parameter as this will be done automatically.
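A sketch of that disable sequence, using placeholder UUIDs:
xe pbd-unplug uuid=<pbd_uuid>
xe host-param-set other-config:multipathing=false uuid=<host_uuid>
xe pbd-plug uuid=<pbd_uuid>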
Multipath support in XenServer is based on the device-mapper multipathd components. Activation and deactivation of multipath nodes is handled automatically by the Storage Manager API. Unlike the standard dm-multipath tools in Linux, device mapper nodes are not automatically created for all LUNs on the system, and it is only when LUNs are actively used by the storage management layer that new device mapper nodes are provisioned. Therefore, it is unnecessary to use any of the dm-multipath CLI tools to query or refresh DM table nodes in XenServer. Should it be necessary to query the status of device-mapper tables manually, or list active device mapper multipath nodes on the system, use the mpathutil utility:
• mpathutil list
• mpathutil status
Note:
Due to incompatibilities with the integrated multipath management architecture, the standard dm-multipath CLI utility should not be used with XenServer. Please use the mpathutil CLI tool for querying the status of nodes on the host.
Note:
Multipath support in EqualLogic arrays does not encompass Storage IO multipathing in the traditional sense of the term. Multipathing must be handled at the network/NIC bond level. Refer to the EqualLogic documentation for information about configuring network failover for EqualLogic SRs/LVMoISCSI SRs.

MPP RDAC Driver Support for LSI Arrays

XenServer supports the LSI Multi-Path Proxy Driver (MPP) for the Redundant Disk Array Controller (RDAC). By default this driver is disabled.
To enable the driver:
1. Open a console on the host, and run the following command:
# /opt/xensource/libexec/mpp-rdac --enable
2. Reboot the host.
To disable the driver:
1. Open a console on the host, and run the following command:
# /opt/xensource/libexec/mpp-rdac --disable
2. Reboot the host.
Note:
This procedure must be carried out on every host in a pool.

Managing Storage Repositories

This section covers various operations required in the ongoing management of Storage Repositories (SRs).

Destroying or Forgetting a SR

You can destroy an SR, which actually deletes the contents of the SR from the physical media. Alternatively you can forget an SR, which allows you to re-attach the SR, for example, to another XenServer host, without removing any of the SR contents. In both cases, the PBD of the SR must first be unplugged. Forgetting an SR is the equivalent of the SR Detach operation within XenCenter.
1. Unplug the PBD to detach the SR from the corresponding XenServer host:
xe pbd-unplug uuid=<pbd_uuid>
2. To destroy the SR, which deletes both the SR and corresponding PBD from the XenServer host database and
deletes the SR contents from the physical media:
xe sr-destroy uuid=<sr_uuid>
3. Or, to forget the SR, which removes the SR and corresponding PBD from the XenServer host database but
leaves the actual SR contents intact on the physical media:
xe sr-forget uuid=<sr_uuid>
Note:
It might take some time for the software object corresponding to the SR to be garbage collected.

Introducing an SR

To re-introduce an SR that has been forgotten, you need to introduce the SR, create a PBD, and manually plug the PBD to the appropriate XenServer hosts to activate the SR.
The following example introduces an SR of type lvmoiscsi.
1. Probe the existing SR to determine its UUID:
xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10> \
device-config:targetIQN=<192.168.1.10:filer1> \
device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>
2. Introduce the existing SR UUID returned from the sr-probe command. The UUID of the new SR is returned:
xe sr-introduce content-type=user name-label=<"Example Shared LVM over iSCSI SR"> shared=true uuid=<valid_sr_uuid> type=lvmoiscsi
3. Create a PBD to accompany the SR. The UUID of the new PBD is returned:
xe pbd-create type=lvmoiscsi host-uuid=<valid_uuid> sr-uuid=<valid_sr_uuid> \
device-config:target=<192.168.0.1> \
device-config:targetIQN=<192.168.1.10:filer1> \
device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>
4. Plug the PBD to attach the SR:
xe pbd-plug uuid=<pbd_uuid>
5. Verify the status of the PBD plug. If successful the currently-attached property will be true:
xe pbd-list sr-uuid=<sr_uuid>
Note:
Steps 3 through 5 must be performed for each host in the resource pool, and can also be performed using the Repair Storage Repository function in XenCenter.

Resizing an SR

If you have resized the LUN on which an iSCSI or HBA SR is based, use the following procedures to reflect the size change in XenServer:
1. iSCSI SRs - unplug all PBDs on the host that reference LUNs on the same target. This is required to reset
the iSCSI connection to the target, which in turn will allow the change in LUN size to be recognized when the PBDs are replugged.
2. HBA SRs - reboot the host.
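For iSCSI SRs, the PBD unplug and replug in step 1 can be performed with commands along the following lines (placeholder UUIDs; repeat for each PBD that references a LUN on the same target):
xe pbd-list sr-uuid=<sr_uuid> params=uuid
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>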
Note:
In previous versions of XenServer explicit commands were required to resize the physical volume group of iSCSI and HBA SRs. These commands are now issued as part of the PBD plug operation and are no longer required.

Converting Local Fibre Channel SRs to Shared SRs

Use the xe CLI and the XenCenter Repair Storage Repository feature to convert a local FC SR to a shared FC SR:
1. Upgrade all hosts in the resource pool to XenServer 6.0.
2. Ensure all hosts in the pool have the SR's LUN zoned appropriately. See the section called “Probing an SR” for
details on using the sr-probe command to verify the LUN is present on each host.
3. Convert the SR to shared:
xe sr-param-set shared=true uuid=<local_fc_sr>
4. Within XenCenter the SR is moved from the host level to the pool level, indicating that it is now shared. The
SR will be marked with a red exclamation mark to show that it is not currently plugged on all hosts in the pool.
5. Select the SR and then select the Storage > Repair Storage Repository menu option.
6. Click Repair to create and plug a PBD for each host in the pool.

Moving Virtual Disk Images (VDIs) Between SRs

The set of VDIs associated with a VM can be copied from one SR to another to accommodate maintenance requirements or tiered storage configurations. XenCenter provides the ability to copy a VM and all of its VDIs to the same or a different SR, and a combination of XenCenter and the xe CLI can be used to copy individual VDIs.
Copying All of a VM's VDIs to a Different SR
The XenCenter Copy VM function creates copies of all VDIs for a selected VM on the same or a different SR. The source VM and VDIs are not affected by default. To move the VM to the selected SR rather than creating a copy, select the Remove original VM option in the Copy Virtual Machine dialog box.
1. Shut down the VM.
2. Within XenCenter select the VM and then select the VM > Copy VM menu option.
3. Select the desired target SR.
Copying Individual VDIs to a Different SR
A combination of the xe CLI and XenCenter can be used to copy individual VDIs between SRs.
1. Shut down the VM.
2. Use the xe CLI to identify the UUIDs of the VDIs to be moved. If the VM has a DVD drive its vdi-uuid will
be listed as <not in database> and can be ignored.
xe vbd-list vm-uuid=<valid_vm_uuid>
Note:
The vbd-list command displays both the VBD and VDI UUIDs. Be sure to record the VDI UUIDs rather than the VBD UUIDs.
3. In XenCenter select the VM Storage tab. For each VDI to be moved, select the VDI and click the Detach button.
This step can also be done using the vbd-destroy command.
Note:
If you use the vbd-destroy command to detach the VDI UUIDs, be sure to first check if the VBD has the parameter other-config:owner set to true. If so, set it to false. Issuing the vbd-destroy command with other-config:owner=true will also destroy the associated VDI.
4. Use the vdi-copy command to copy each of the VM VDIs to be moved to the desired SR.
xe vdi-copy uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid>
5. Within XenCenter select the VM Storage tab. Click the Attach button and select the VDIs from the new SR.
This step can also be done using the vbd-create command.
6. To delete the original VDIs, within XenCenter select the Storage tab of the original SR. The original VDIs will
be listed with an empty value for the VM field and can be deleted with the Delete button.
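The detach, copy, and re-attach steps above can also be performed entirely with the xe CLI. The following is only a sketch, with placeholder UUIDs and device number; check the other-config:owner key as described in the Note above before destroying any VBD:
xe vbd-param-list uuid=<vbd_uuid> | grep other-config
xe vbd-param-set uuid=<vbd_uuid> other-config:owner=false
xe vbd-destroy uuid=<vbd_uuid>
xe vdi-copy uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid>
xe vbd-create vm-uuid=<vm_uuid> vdi-uuid=<copied_vdi_uuid> device=<device_number> type=Disk mode=RW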

Adjusting the Disk IO Scheduler

For general performance, the default disk scheduler noop is applied on all new SR types. The noop scheduler provides the fairest performance for competing VMs accessing the same device. To apply disk QoS (see the section
called “Virtual Disk QoS Settings”) it is necessary to override the default setting and assign the cfq disk scheduler
to the SR. The corresponding PBD must be unplugged and re-plugged for the scheduler parameter to take effect. The disk scheduler can be adjusted using the following command:
xe sr-param-set other-config:scheduler=noop|cfq|anticipatory|deadline \
uuid=<valid_sr_uuid>
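For example, a sketch of switching an SR to the cfq scheduler and re-plugging its PBD so that the change takes effect (placeholder UUIDs):
xe sr-param-set other-config:scheduler=cfq uuid=<sr_uuid>
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>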
Note:
This will not affect EqualLogic, NetApp or NFS storage.

Automatically Reclaiming Space When Deleting Snapshots

When deleting snapshots with XenServer 6.0, any space allocated on LVM-based SRs is reclaimed automatically and a VM reboot is not required; this is referred to as Online Coalescing.
Note:
Online Coalescing only applies to LVM-based SRs (LVM, LVMoISCSI, and LVMoHBA), it does not apply to EXT or NFS SRs, whose behaviour remains unchanged.
In certain cases, automated space reclamation may be unable to proceed; in these cases it is advisable to use the Off Line Coalesce tool:
• Under conditions where a VM's I/O throughput is considerable
• In conditions where space is not being reclaimed after a period of time
Note:
Running the Off Line Coalesce tool will incur some downtime for the VM, due to the suspend/ resume operations performed.
Before running the tool, delete any snapshots and clones you no longer want; the script will reclaim as much space as possible given the remaining snapshots/clones. If you want to reclaim all space, delete all snapshots and clones.
All VM disks must be either on shared or local storage for a single host. VMs with disks in both types of storage cannot be coalesced.
Reclaiming Space Using the Off Line Coalesce Tool
Using XenCenter, enable hidden objects (View menu -> Hidden objects). In the Resource pane, select the VM for which you want to obtain the UUID. The UUID will be displayed in the General tab.
In the Resource pane, select the resource pool master host (the first host in the list). The UUID will be displayed in the General tab. If you are not using a resource pool, select the VM host.
1. Open a console on the host and run the following command:
xe host-call-plugin host-uuid=<host-UUID> \
plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=<VM-UUID>
For example, if the VM UUID is 9bad4022-2c2d-dee6-abf5-1b6195b1dad5 and the host UUID is b8722062-de95-4d95-9baa-a5fe343898ea, you would run this command:
xe host-call-plugin host-uuid=b8722062-de95-4d95-9baa-a5fe343898ea \
plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=9bad4022-2c2d-dee6-abf5-1b6195b1dad5
2. This command suspends the VM (unless it is already powered down), initiates the space reclamation process,
and then resumes the VM.
Note:
Citrix recommends that, before executing the off-line coalesce tool, you shutdown or suspend the VM manually (using either XenCenter or the XenServer CLI). If you execute the coalesce tool on a VM that is running, the tool automatically suspends the VM, performs the required VDI coalesce operation(s), and resumes the VM.
If the Virtual Disk Images (VDIs) to be coalesced are on shared storage, you must execute the off-line coalesce tool on the pool master.
If the VDIs to be coalesced are on local storage, you must execute the off-line coalesce tool on the server to which the local storage is attached.

Virtual Disk QoS Settings

Virtual disks have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI as described in this section.
In the shared SR case, where multiple hosts are accessing the same LUN, the QoS setting is applied to VBDs accessing the LUN from the same host. QoS is not applied across hosts in the pool.
Before configuring any QoS parameters for a VBD, ensure that the disk scheduler for the SR has been set appropriately. See the section called “Adjusting the Disk IO Scheduler” for details on how to adjust the scheduler. The scheduler parameter must be set to cfq on the SR for which the QoS is desired.
Note:
Remember to set the scheduler to cfq on the SR, and to ensure that the PBD has been re­plugged in order for the scheduler change to take effect.
The first parameter is qos_algorithm_type. This parameter needs to be set to the value ionice, which is the only type of QoS algorithm supported for virtual disks in this release.
The QoS parameters themselves are set with key/value pairs assigned to the qos_algorithm_params parameter. For virtual disks, qos_algorithm_params takes a sched key, and depending on the value, also requires a class key.
Possible values of qos_algorithm_params:sched are:
sched=rt or sched=real-time sets the QoS scheduling parameter to real time priority, which requires a class parameter to set a value
sched=idle sets the QoS scheduling parameter to idle priority, which requires no class parameter to set any value
sched=<anything> sets the QoS scheduling parameter to best effort priority, which requires a class parameter to set a value
The possible values for class are:
• One of the following keywords: highest, high, normal, low, lowest
• an integer between 0 and 7, where 7 is the highest priority and 0 is the lowest; for example, I/O requests with a priority of 5 will be given priority over I/O requests with a priority of 2.
To enable the disk QoS settings, you also need to set the other-config:scheduler to cfq and replug PBDs for the storage in question.
For example, the following CLI commands set the virtual disk's VBD to use real time priority 5:
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_type=ionice
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:sched=rt
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:class=5
xe sr-param-set uuid=<sr_uuid> other-config:scheduler=cfq
xe pbd-plug uuid=<pbd_uuid>

Configuring VM Memory

When a VM is first created, it is allocated a fixed amount of memory. To improve the utilisation of physical memory in your XenServer environment, you can use Dynamic Memory Control (DMC), a memory management feature that enables dynamic reallocation of memory between VMs.
XenCenter provides a graphical display of memory usage in its Memory tab. This is described in the XenCenter Help.
In previous editions of XenServer adjusting virtual memory on VMs required a restart to add or remove memory and an interruption to users' service.
Dynamic Memory Control (DMC) provides the following benefits:
• Memory can be added or removed without restart thus providing a more seamless experience to the user.
• When servers are full, DMC allows you to start more VMs on these servers, reducing the amount of memory allocated to the running VMs proportionally.

What is Dynamic Memory Control (DMC)?

XenServer DMC (sometimes known as "dynamic memory optimization", "memory overcommit" or "memory ballooning") works by automatically adjusting the memory of running VMs, keeping the amount of memory allocated to each VM between specified minimum and maximum memory values, guaranteeing performance and permitting greater density of VMs per server. Without DMC, when a server is full, starting further VMs will fail with "out of memory" errors: to reduce the existing VM memory allocation and make room for more VMs you must edit each VM's memory allocation and then reboot the VM. With DMC enabled, even when the server is full, XenServer will attempt to reclaim memory by automatically reducing the current memory allocation of running VMs within their defined memory ranges.
Note:
Dynamic Memory Control is only available for XenServer Advanced or higher editions. To learn more about XenServer Advanced or higher editions and to find out how to upgrade, visit the Citrix website here.

The Concept of Dynamic Range

For each VM the administrator can set a dynamic memory range – this is the range within which memory can be added/removed from the VM without requiring a reboot. When a VM is running the administrator can adjust the dynamic range. XenServer always guarantees to keep the amount of memory allocated to the VM within the dynamic range; therefore adjusting it while the VM is running may cause XenServer to adjust the amount of memory allocated to the VM. (The most extreme case is where the administrator sets the dynamic min/max to the same value, thus forcing XenServer to ensure that this amount of memory is allocated to the VM.) If new VMs are required to start on "full" servers, running VMs have their memory ‘squeezed’ to start new ones. The required extra memory is obtained by squeezing the existing running VMs proportionally within their pre-defined dynamic ranges.
DMC allows you to configure dynamic minimum and maximum memory levels – creating a Dynamic Memory Range (DMR) that the VM will operate in.
• Dynamic Minimum Memory: A lower memory limit that you assign to the VM.
• Dynamic Maximum Memory: An upper memory limit that you assign to the VM.
For example, if the Dynamic Minimum Memory was set at 512 MB and the Dynamic Maximum Memory was set at 1024 MB this would give the VM a Dynamic Memory Range (DMR) of 512 - 1024 MB, within which, it would operate. With DMC, XenServer guarantees at all times to assign each VM memory within its specified DMR.
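As a sketch, a range like this one could be applied from the CLI with the vm-memory-dynamic-range-set command described later in this chapter (the VM UUID is a placeholder):
xe vm-memory-dynamic-range-set uuid=<vm_uuid> min=512MiB max=1024MiB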

The Concept of Static Range

Many Operating Systems that XenServer supports do not fully ‘understand’ the notion of dynamically adding or removing memory. As a result, XenServer must declare the maximum amount of memory that a VM will ever be asked to consume at the time that it boots. (This allows the guest operating system to size its page tables and other memory management structures accordingly.) This introduces the concept of a static memory range within XenServer. The static memory range cannot be adjusted while the VM is running. For a particular boot, the dynamic range is constrained so that it is always contained within this static range. Note that the static minimum (the lower bound of the static range) is there to protect the administrator and is set to the lowest amount of memory that the OS can run with on XenServer.
Note:
Citrix advises not to change the static minimum level as this is set at the supported level per operating system – refer to the memory constraints table for more details.
Setting a static maximum level higher than the dynamic maximum means that, if you need to allocate more memory to a VM in the future, you can do so without requiring a reboot.

DMC Behaviour

Automatic VM squeezing
• If DMC is not enabled, when hosts are full, new VM starts fail with ‘out of memory’ errors.
• If DMC is enabled, even when hosts are full, XenServer will attempt to reclaim memory (by reducing the memory allocation of running VMs within their defined dynamic ranges). In this way running VMs are squeezed proportionally at the same distance between the dynamic minimum and dynamic maximum for all VMs on the host.
When DMC is enabled
• When the host's memory is plentiful - All running VMs will receive their Dynamic Maximum Memory level
• When the host's memory is scarce - All running VMs will receive their Dynamic Minimum Memory level.
When you are configuring DMC, remember that allocating only a small amount of memory to a VM can negatively impact it. For example, allocating too little memory:
• Using Dynamic Memory Control to reduce the amount of physical memory available to a VM may cause it to boot slowly. Likewise, if you allocate too little memory to a VM, it may start extremely slowly.
• Setting the dynamic memory minimum for a VM too low may result in poor performance or stability problems when the VM is starting.

How Does DMC Work?

Using DMC, it is possible to operate a guest virtual machine in one of two modes:
1. Target Mode: The administrator specifies a memory target for the guest. XenServer adjusts the guest's memory
allocation to meet the target. Specifying a target is particularly useful in virtual server environments, and in any situation where you know exactly how much memory you want a guest to use. XenServer will adjust the guest's memory allocation to meet the target you specify.
2. Dynamic Range Mode: The administrator specifies a dynamic memory range for the guest; XenServer chooses
a target from within the range and adjusts the guest's memory allocation to meet the target. Specifying a dynamic range is particularly useful in virtual desktop environments, and in any situation where you want
XenServer to repartition host memory dynamically in response to changing numbers of guests, or changing host memory pressure. XenServer chooses a target from within the range and adjusts the guest's memory allocation to meet the target.
Note:
It is possible to change between target mode and dynamic range mode at any time for any running guest. Simply specify a new target, or a new dynamic range, and XenServer takes care of the rest.

Memory Constraints

XenServer allows administrators to use all memory control operations with any guest operating system. However, XenServer enforces the following memory property ordering constraint for all guests:
0 ≤ memory-static-min ≤ memory-dynamic-min ≤ memory-dynamic-max ≤ memory-static-max
XenServer allows administrators to change guest memory properties to any values that satisfy this constraint, subject to validation checks. However, in addition to the above constraint, Citrix supports only certain guest memory configurations for each supported operating system. See below for further details.

Supported Operating Systems

Citrix supports only certain guest memory configurations. The range of supported configurations depends on the guest operating system in use. XenServer does not prevent administrators from configuring guests to exceed the supported limit. However, customers are strongly advised to keep memory properties within the supported limits to avoid performance or stability problems.
Operating System Supported Memory Limits

Family                     Version                  Architectures   Dynamic Minimum   Dynamic Maximum
Microsoft Windows          XP SP3                   x86             256 MB            4 GB
                           Server 2003 (+SP1,SP2)   x86             256 MB            64 GB
                                                    x64             256 MB            128 GB
                           Server 2008 (+SP2)       x86             512 MB            64 GB
                                                    x64             512 MB            128 GB
                           Server 2008 R2 (+SP1)    x64             512 MB            128 GB
                           Vista (+SP1,SP2)         x86             1 GB              4 GB
                           7 (+SP1)                 x86             1 GB              4 GB
                                                    x64             2 GB              128 GB
CentOS Linux               4.5 - 4.8                x86             256 MB            16 GB
                           5.0 - 5.6                x86 x64         512 MB            16 GB
RedHat Enterprise Linux    4.5 - 4.8                x86             256 MB            16 GB
                           5.0 - 5.6                x86 x64         512 MB            16 GB
                           6.0                      x86             512 MB            8 GB
                                                    x64             512 MB            32 GB
Oracle Enterprise Linux    5.0 - 5.6                x86             512 MB            64 GB
                                                    x64             512 MB            128 GB
                           6.0                      x86             512 MB            8 GB
                                                    x64             512 MB            32 GB
SUSE Enterprise Linux      9 SP4                    x86             256 MB            16 GB
                           10 SP1,SP2,SP3,SP4       x86             512 MB            16 GB
                                                    x64             512 MB            128 GB
                           11 (+SP1)                x86             512 MB            16 GB
                                                    x64             512 MB            128 GB
Debian GNU/Linux           Lenny (5.0)              x86             128 MB            32 GB
                           Squeeze (6.0)            x86 x64         128 MB            32 GB
Ubuntu                     10.04                    x86             128 MB            512 MB
                                                    x64             128 MB            32 GB

Additional constraints: Dynamic Minimum ≥ ¼ Static Maximum for all supported operating systems.

Warning:
When configuring guest memory, Citrix advises NOT to exceed the maximum amount of physical memory addressable by your operating system. Setting a memory maximum that is greater than the operating system supported limit may lead to stability problems within your guest.
In addition, reducing the lower limit below the dynamic minimum could also lead to stability problems. Administrators are encouraged to calibrate the sizes of their VMs carefully, and make sure that their working set of applications functions reliably at dynamic-minimum.

xe CLI Commands

Display the Static Memory Properties of a VM

1. Find the uuid of the required VM:
xe vm-list
2. Note the uuid, and then run the command, specifying param-name=memory-static-{min,max}:
xe vm-param-get uuid=<uuid> param-name=memory-static-{min,max}
For example, the following displays the static maximum memory properties for the VM with the uuid beginning ec77:
xe vm-param-get uuid=ec77a893-bff2-aa5c-7ef2-9c3acf0f83c0 \
param-name=memory-static-max
268435456
This shows that the static maximum memory for this VM is 268435456 bytes (256MB).

Display the Dynamic Memory Properties of a VM

To display the dynamic memory properties, follow the procedure as above but use the command param-name=memory-dynamic:
1. Find the uuid of the required VM:
xe vm-list
2. Note the uuid, and then run the command param-name=memory-dynamic:
xe vm-param-get uuid=<uuid> param-name=memory-dynamic-{min,max}
For example, the following displays the dynamic maximum memory properties for the VM with uuid beginning ec77
xe vm-param-get uuid=ec77a893-bff2-aa5c-7ef2-9c3acf0f83c0 \
param-name=memory-dynamic-max
134217728
This shows that the dynamic maximum memory for this VM is 134217728 bytes (128MB).

Updating Memory Properties

Warning:
It is essential that you use the correct ordering when setting the static/dynamic minimum/ maximum parameters. In addition you must not invalidate the following constraint:
0 ≤ memory-static-min ≤ memory-dynamic-min ≤ memory-dynamic-max ≤ memory-static-max
Update the static memory range of a virtual machine:
xe vm-memory-static-range-set uuid=<uuid> min=<value> max=<value>
Update the dynamic memory range of a virtual machine:
xe vm-memory-dynamic-range-set \ uuid=<uuid> min=<value> \ max=<value>
Specifying a target is particularly useful in virtual server environments, and in any situation where you know exactly how much memory you want a guest to use. XenServer will adjust the guest's memory allocation to meet the target you specify. For example:
xe vm-target-set target=<value> vm=<vm-name>
Update all memory limits (static and dynamic) of a virtual machine:
xe vm-memory-limits-set \
uuid=<uuid> \
static-min=<value> \
dynamic-min=<value> \
dynamic-max=<value> static-max=<value>
Note:
• To allocate a specific amount of memory to a VM that won't change, set the Dynamic Maximum and Dynamic Minimum to the same value.
• You cannot increase the dynamic memory of a VM beyond the static maximum.
• To alter the static maximum of a VM, you need to suspend or shut down the VM.

Update Individual Memory Properties

Warning:
Citrix advises not to change the static minimum level as this is set at the supported level per operating system – refer to the memory constraints table for more details.
Update the dynamic memory properties of a VM.
1. Find the uuid of the required VM:
xe vm-list
2. Note the uuid, and then use the command memory-dynamic-{min,max}=<value>
xe vm-param-set uuid=<uuid> memory-dynamic-{min,max}=<value>
The following example changes the dynamic maximum to 128MB:
xe vm-param-set uuid=ec77a893-bff2-aa5c-7ef2-9c3acf0f83c0 memory-dynamic-max=128MiB

Upgrade Issues

After upgrading from Citrix XenServer 5.5, XenServer sets the memory of all VMs so that the dynamic minimum is equal to the dynamic maximum.

Workload Balancing Interaction

If Workload Balancing (WLB) is enabled, XenServer defers decisions about host selection to the workload balancing server. If WLB is disabled, or if the WLB server has failed or is unavailable, XenServer will use its internal algorithm to make decisions regarding host selection.

Xen Memory Usage

When calculating the memory footprint of a Xen host there are two components that must be taken into consideration. First there is the memory consumed by the Xen hypervisor itself; then there is the memory consumed by the control domain of the host. The control domain is a privileged VM that provides low-level services to other VMs, such as providing access to physical devices. It also runs the management tool stack.

Setting Control Domain Memory

If your control domain requires more allocated memory, this can be set using the xe CLI.
Use the xe vm-memory-target-set command to set the amount of memory available to the control domain.
The xe vm-memory-target-wait command can be used to check if the control domain is currently at the requested memory target specified at the last use of the xe vm-memory-target-set command. xe vm-memory-target-wait will not return until the actual memory usage of the control domain is at the target, or will time out if the target cannot be reached, for example when the target is lower than the actual memory requirements of the VM.
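A sketch of this sequence, assuming the control domain UUID is looked up first and using an example target of 1 GiB expressed in bytes:
xe vm-list is-control-domain=true params=uuid --minimal
xe vm-memory-target-set target=1073741824 uuid=<control_domain_uuid>
xe vm-memory-target-wait uuid=<control_domain_uuid>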
The following fields on a VM define how much memory will be allocated. The default values shown are indicative of a machine with 8 GB of RAM:
name                 default                  description                                 access
memory-actual        411041792                The actual amount of memory currently       Read Only
                                              available for use by the VM
memory-target        411041792                The target amount of memory as set by       Read Only
                                              using xe vm-memory-target-set
memory-static-max    790102016                The maximum possible physical memory        Read Write when the VM is suspended;
                                                                                          Read Only when the VM is running
memory-dynamic-max   790102016                The desired maximum memory to be made       Read Write
                                              available
memory-dynamic-min   306184192                The desired minimum memory to be made       Read Write
                                              available
memory-static-min    306184192                The minimum possible physical memory        Read Write when the VM is suspended;
                                                                                          Read Only when the VM is running
memory-overhead      1048576 (for example)    The memory overhead due to virtualization
Dynamic memory values must be within the boundaries set by the static memory values. Additionally the memory target must fall in the range between the dynamic memory values.
Note:
The amount of memory reported in XenCenter on the General tab in the Xen field may exceed the values set using this mechanism. This is because the amount reported includes the memory used by the control domain, the hypervisor itself, and the crash kernel. The amount of memory used by the hypervisor will be larger for hosts with more memory.
To find out how much host memory is actually available to be assigned to VMs, get the value of the memory- free field of the host, and then use the vm-compute-maximum-memory command to get the actual amount of free memory that can be allocated to the VM:
xe host-list uuid=<host_uuid> params=memory-free
xe vm-compute-maximum-memory vm=<vm_name> total=<host_memory_free_value>

Networking

This chapter provides an overview of XenServer networking, including networks, VLANs, and NIC bonds. It also discusses how to manage your networking configuration and troubleshoot it.
Important:
As of this release, the XenServer default network stack is the vSwitch; however, you can revert to the Linux network stack if desired by using the instructions in the section called “vSwitch
Networks”.
If you are already familiar with XenServer networking concepts, you may want to skip ahead to one of the following sections:
• To create networks for standalone XenServer hosts, see the section called “Creating Networks in a Standalone
Server”.
• To create private networks across XenServer hosts, see the section called “Cross-Server Private networks”
• To create networks for XenServer hosts that are configured in a resource pool, see the section called “Creating
Networks in Resource Pools”.
• To create VLANs for XenServer hosts, either standalone or part of a resource pool, see the section called
“Creating VLANs”.
• To create bonds for standalone XenServer hosts, see the section called “Creating NIC Bonds on a Standalone
Host”.
• To create bonds for XenServer hosts that are configured in a resource pool, see the section called “Creating
NIC bonds in resource pools”.
For additional information about networking and network design, see Designing XenServer Network Configurations in the Citrix Knowledge Center.
For consistency with XenCenter, this chapter now uses the term primary management interface to refer to the IP-enabled NIC that carries the management traffic. In previous releases, this chapter used the term the management interface. However, management interface is now used generically to refer to any IP-enabled NIC, including the NIC carrying management traffic and NICs configured for storage traffic.

Networking Support

XenServer supports up to 16 physical network interfaces (or up to 16 bonded network interfaces) per XenServer host and up to 7 virtual network interfaces per VM.
Note:
XenServer provides automated configuration and management of NICs using the xe command line interface (CLI). Unlike previous XenServer versions, the host networking configuration files should not be edited directly in most cases; where a CLI command is available, do not edit the underlying files.

vSwitch Networks

When used with a controller appliance, vSwitch networks support OpenFlow and provide extra functionality, including Cross-Server Private Networks and Access Control Lists (ACL). The controller appliance for the XenServer vSwitch is known as the vSwitch Controller: it lets you monitor your networks through a graphical user interface. The vSwitch Controller:
• Supports fine-grained security policies to control the flow of traffic sent to and from a VM.
• Provides detailed visibility into the behavior and performance of all traffic sent in the virtual network environment.
A vSwitch greatly simplifies IT administration in virtualized networking environments—all VM configuration and statistics remain bound to the VM even if it migrates from one physical host in the resource pool to another. See the XenServer vSwitch Controller User Guide for more details.
Note:
To revert to the Linux network stack, run the following command:
xe-switch-network-backend bridge
Reboot your host after running this command.
Warning:
The Linux network stack is not OpenFlow-enabled, does not support Cross-Server Private Networks, and cannot be managed by the XenServer vSwitch Controller.

XenServer Networking Overview

This section describes the general concepts of networking in the XenServer environment.
One network is created for each physical network interface card (NIC) during XenServer installation. When you add a server to a resource pool, these default networks are merged so that all physical NICs with the same device name are attached to the same network.
Typically, you would only add a new network if you wanted to create an internal network, set up a new VLAN using an existing NIC, or create a NIC bond.
You can configure four different types of networks in XenServer:
Single-Server Private networks have no association to a physical network interface and can be used to provide connectivity between the virtual machines on a given host, with no connection to the outside world.
Cross-Server Private networks extend the single server private network concept to allow VMs on different hosts to communicate with each other by using the vSwitch.
External networks have an association with a physical network interface and provide a bridge between a virtual machine and the physical network interface connected to the network, enabling a virtual machine to connect to resources available through the server's physical network interface card.
Bonded networks create a bond between two NICs to create a single, high-performing channel between the virtual machine and the network.
Note:
Some networking options have different behaviors when used with standalone XenServer hosts compared to resource pools. This chapter contains sections on general information that applies to both standalone hosts and pools, followed by specific information and procedures for each.

Network Objects

This chapter uses three types of server-side software objects to represent networking entities. These objects are:
• A PIF, which represents a physical NIC on a XenServer host. PIF objects have a name and description, a globally unique UUID, the parameters of the NIC that they represent, and the network and server they are connected to.
• A VIF, which represents a virtual NIC on a virtual machine. VIF objects have a name and description, a globally unique UUID, and the network and VM they are connected to.
• A network, which is a virtual Ethernet switch on a XenServer host. Network objects have a name and description, a globally unique UUID, and the collection of VIFs and PIFs connected to them.
Both XenCenter and the xe CLI allow configuration of networking options, control over which NIC is used for management operations, and creation of advanced networking features such as virtual local area networks (VLANs) and NIC bonds.

Networks

Each XenServer host has one or more networks, which are virtual Ethernet switches. Networks that are not associated with a PIF are considered internal and can be used to provide connectivity only between VMs on a given XenServer host, with no connection to the outside world. Networks associated with a PIF are considered external and provide a bridge between VIFs and the PIF connected to the network, enabling connectivity to resources available through the PIF's NIC.

VLANs

Virtual Local Area Networks (VLANs), as defined by the IEEE 802.1Q standard, allow a single physical network to support multiple logical networks. XenServer hosts can work with VLANs in multiple ways.
Note:
All supported VLAN configurations are equally applicable to pools and standalone hosts, and bonded and non-bonded configurations.
Using VLANs with Management Interfaces
Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native VLAN or as access mode ports, can be used with primary management interfaces to place management traffic on
a desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.
Primary management interfaces cannot be assigned to a XenServer VLAN via a trunk port.
Using VLANs with Virtual Machines
Switch ports configured as 802.1Q VLAN trunk ports can be used in combination with the XenServer VLAN features to connect guest virtual network interfaces (VIFs) to specific VLANs. In this case, the XenServer host performs the VLAN tagging/untagging functions for the guest, which is unaware of any VLAN configuration.
XenServer VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified VLAN tag. XenServer networks can then be connected to the PIF representing the physical NIC to see all traffic on the NIC, or to a PIF representing a VLAN to see only the traffic with the specified VLAN tag.
For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool, see
the section called “Creating VLANs”.
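As a sketch of what that procedure involves (placeholder values), a network can be created and a VLAN PIF attached to it on top of an existing physical PIF using the vlan-create command:
xe network-create name-label=<"Example VLAN network">
xe vlan-create pif-uuid=<physical_pif_uuid> network-uuid=<network_uuid> vlan=<vlan_tag>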
Using VLANs with Dedicated Storage NICs
Dedicated storage NICs (also known as IP-enabled NICs or simply management interfaces) can be configured to use native VLAN / access mode ports as described above for primary management interfaces, or with trunk ports and XenServer VLANs as described above for virtual machines. To configure dedicated storage NICs, see the
section called “Configuring a dedicated storage NIC”.
Combining Management Interfaces and Guest VLANs on a Single Host NIC
A single switch port can be configured with both trunk and native VLANs, allowing one host NIC to be used for a management interface (on the native VLAN) and for connecting guest VIFs to specific VLAN IDs.

NIC Bonds

NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. Specifically, NIC bonding is a technique for increasing resiliency and/or bandwidth in which an administrator configures two NICs
together so they logically function as one network card. Both NICs have the same MAC address and, in the case of management interfaces, have one IP address.
If one NIC in the bond fails, the host's network traffic is automatically redirected through the second NIC. NIC bonding is sometimes also known as NIC teaming. XenServer supports up to eight bonded networks. You can bond two NICs together of any type (management interfaces or non-management interfaces).
In the illustration that follows, the primary management interface is bonded with a NIC so that it forms a bonded pair of NICs. XenServer will use this bond for management traffic.
This illustration shows three pairs of bonded NICs, including the primary management interface. Excluding the primary management interface bond, XenServer uses the other two NIC bonds and the two un-bonded NICs for VM traffic.
Specifically, you can bond two NICs together of the following types:
Primary management interfaces. You can bond a primary management interface to another NIC so that the second NIC provides failover for management traffic. However, NIC bonding does not provide load balancing for management traffic.
NICs (non-management). You can bond NICs that XenServer is using solely for VM traffic. Bonding these NICs not only provides resiliency, but doing so also balances the traffic from multiple VMs between the NICs.
Other management interfaces. You can bond NICs that you have configured as management interfaces (for example, for storage). However, for most iSCSI software initiator storage, Citrix recommends configuring multipathing instead of NIC bonding since bonding management interfaces only provides failover and not load balancing.
It should be noted that certain iSCSI storage arrays, such as Dell EqualLogic, require using bonds.
NIC bonds can work in either an Active/Active mode, with VM traffic balanced between the bonded NICs, or in an Active/Passive mode, where only one NIC actively carries traffic.
XenServer NIC bonds completely subsume the underlying physical devices (PIFs). To activate a bond, the underlying PIFs must not be in use, either as the primary management interface for the host or by running VMs with VIFs attached to the networks associated with the PIFs.
In XenServer, NIC bonds are represented by additional PIFs, including one that represents the bond itself. The bond PIF can then be connected to a XenServer network to allow VM traffic and host management functions to occur over the bonded NIC. The exact steps to create a NIC bond depend on the number of NICs in your host and whether the primary management interface of the host is assigned to a PIF to be used in the bond.
Provided you enable bonding on NICs carrying only guest traffic, both links are active and NIC bonding can balance each VM’s traffic between NICs. Likewise, bonding the primary management interface NIC to a second NIC also provides resilience. However, only one link (NIC) in the bond is active and the other remains unused unless traffic fails over to it.
If you bond a management interface, a single IP address is assigned to the bond. That is, each NIC does not have its own IP address; XenServer treats the two NICs as one logical connection.
When bonded NICs are used for VM (guest) traffic, you do not need to configure an IP address for the bond. This is because the bond operates at Layer 2 of the OSI model, the data link layer, and no IP addressing is used at this layer. When used for non-VM traffic (to connect to shared network storage or XenCenter for management), you must configure an IP address for the bond. If you bond a management interface to a non-management NIC, as of XenServer 6.0, the bond assumes the IP address of the management interface automatically.
Gratuitous ARP packets are sent when assignment of traffic changes from one interface to another as a result of fail-over.
Note:
Bonding is set up with an Up Delay of 31000ms and a Down Delay of 200ms. The seemingly long Up Delay is deliberate because of the time some switches take to actually start routing traffic. Without a delay, when a link comes back after failing, the bond could rebalance traffic onto it before the switch is ready to pass traffic. If you want to move both connections to a different switch, move one, then wait 31 seconds for it to be used again before moving the other.
Switch Configuration
Depending on your redundancy requirements, you can connect the NICs in the bond to either the same or separate switches. If you connect one of the NICs to a second, redundant switch and a NIC or switch fails, traffic fails over to the other NIC. Adding a second switch helps in the following ways:
• When you bond NICs used exclusively for VM traffic, traffic is sent over both NICs. If you connect a link to a second switch and the NIC or switch fails, the virtual machines remain on the network since their traffic fails over to the other NIC/switch.
• When you connect one of the links in a bonded primary management interface to a second switch, it prevents a single point of failure for your pool. If the switch fails, the management network still remains online and the hosts can still communicate with each other.
When you attach bonded NICs to two switches, the switches must be running in a stacked configuration. (That is, the switches must be configured to function as a single switch that is seen as a single domain – for example, when multiple rack-mounted switches are connected across the backplane.) Switches must be in a stacked configuration because the MAC addresses of VMs will be changing between switches quite often while traffic is rebalanced across the two NICs. The switches do not require any additional configuration.
The illustration that follows shows how the cables and network configuration for the bonded NICs have to match.
This illustration shows how two NICs in a bonded pair use the same network settings, as represented by the networks in each host. The NICs in the bonds connect to different switches for redundancy.
Active-Active Bonding
Active-Active, which is the default bonding mode, is an active/active configuration for guest traffic: both NICs can route VM traffic simultaneously. When bonds are used for management traffic, only one NIC in the bond can route traffic: the other NIC remains unused and provides fail-over support.
For VM traffic, active-active mode also balances traffic. However, it is important to note that "balance" refers to the quantity (MB) of data routed on the NIC. When XenServer rebalances traffic, it simply changes which VM's traffic (more precisely, which VIF's traffic) is carried across which NIC.
While NIC bonding can provide load balancing for traffic from multiple VMs, it cannot provide a single VM with the throughput of two NICs. Any given VIF only uses one of the links in a bond at a time. As XenServer rebalances traffic, VIFs are not permanently assigned to a specific NIC in the bond. However, for VIFs with high throughput, periodic rebalancing ensures that the load on the links is approximately equal.
The illustration that follows shows the differences between the three different types of interfaces that you can bond.
This illustration shows how, when configured in active-active mode, the links that are active in bonds vary according to traffic type. In the top picture of a management network, NIC 1 is active and NIC 2 is passive. For the VM traffic, both NICs in the bond are active. For the storage traffic, only NIC 3 is active and NIC 4 is passive.
XenServer load balances the traffic between NICs by using the source MAC address of the packet. Because, for management traffic, only one NIC in the bond is used, active-active mode does not balance management traffic.
API Management traffic can be assigned to a XenServer bond interface and will be automatically load-balanced across the physical NICs.
Re-balancing is provided by the existing ALB re-balance capabilities: the number of bytes going over each slave (interface) is tracked over a given period. When a packet is to be sent that contains a new source MAC address it is assigned to the slave interface with the lowest utilization. Traffic is re-balanced every 10 seconds.
Active-active mode is sometimes referred to as Source Load Balancing (SLB) bonding as XenServer uses SLB to share load across bonded network interfaces. SLB is derived from the open-source ALB mode and reuses the ALB capability to dynamically re-balance load across NICs.
Note:
Active-active bonding does not require switch support for Etherchannel or 802.3ad (LACP).
Active-Passive Bonding
Active-Passive bonding:
• routes traffic over only one of the NICs in the bond
• will failover to use the other NIC in the bond if the active NIC loses network connectivity
• can be configured with one fast path and one slow path for cost savings. In this scenario, the slow path should be used only if there is a failure on the fast path
• does not require switch support for Etherchannel or 802.3ad (LACP)
• is derived from the open source Active-Backup mode
As active-active mode is the default bonding configuration in XenServer, you must configure active-passive mode if you want to use it. You do not need to configure active-passive mode just because a network carries management or storage traffic: if a bond is configured in (or left in) active-active mode and XenServer detects management or storage traffic on it, XenServer automatically leaves one NIC in the bond unused. However, you can explicitly configure active-passive mode, if desired.
When trying to determine when to configure active-passive mode, consider configuring it in situations such as the following:
• When you are connecting one NIC to a switch that does not work well with active-active bonding.
For example, symptoms of such a switch include packet loss, an ARP table on the switch that is not updated correctly, or incorrect settings on the switch ports (for example, link aggregation configured on the ports that does not take effect).
• When you do not need load balancing or when you only intend to send traffic on one NIC.
For example, if the redundant path uses a cheaper technology (for example, a lower-performing switch or external up-link) and that results in slower performance, configure active-passive bonding instead.
Note:
As of XenServer 6.0, the vSwitch supports active-passive NIC bonding. If you are using the vSwitch as your networking configuration, you can set the bonding mode to active-passive (also known as active-backup) using XenCenter or the CLI.
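For example, when creating the bond from the CLI, the mode can be specified at creation time. This is a minimal sketch; the full procedure appears in the section called “Creating NIC Bonds on a Standalone Host”:
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2> mode=active-backup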
Tip:
Configuring active-passive mode in XenCenter is easy. You simply select Active-passive as the bond mode when you create the bond.
Important:
After you have created VIFs or your pool is in production, be extremely careful about making changes to bonds or creating new bonds.

Initial Networking Configuration

The XenServer host networking configuration is specified during initial host installation. Options such as IP address configuration (DHCP/static), the NIC used as the primary management interface, and hostname are set based on the values provided during installation.
When a host has multiple NICs the configuration present after installation depends on which NIC is selected for management operations during installation:
• PIFs are created for each NIC in the host
• the PIF of the NIC selected for use as the primary management interface is configured with the IP addressing options specified during installation
• a network is created for each PIF ("network 0", "network 1", etc.)
• each network is connected to one PIF
• the IP addressing options of all other PIFs are left unconfigured
When a XenServer host has a single NIC, the following configuration is present after installation:
• a single PIF is created corresponding to the host's single NIC
• the PIF is configured with the IP addressing options specified during installation and to enable management of the host
• the PIF is set for use in host management operations
• a single network, network 0, is created
• network 0 is connected to the PIF to enable external connectivity to VMs
In both cases the resulting networking configuration allows connection to the XenServer host by XenCenter, the xe CLI, and any other management software running on separate machines via the IP address of the primary management interface. The configuration also provides external networking for VMs created on the host.
The PIF used for management operations is the only PIF ever configured with an IP address during XenServer installation. External networking for VMs is achieved by bridging PIFs to VIFs using the network object which acts as a virtual Ethernet switch.
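To review the configuration that installation produced, you can list the networks and PIFs from the CLI. The following is a minimal sketch; the params shown (management, IP-configuration-mode, IP) are standard PIF fields, and the output identifies which PIF carries the primary management interface:
xe network-list params=uuid,name-label,bridge
xe pif-list params=uuid,device,management,IP-configuration-mode,IP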
The steps required for networking features such as VLANs, NIC bonds, and dedicating a NIC to storage traffic are covered in the sections that follow.

Managing Networking Configuration

Some of the network configuration procedures in this section differ depending on whether you are configuring a stand-alone server or a server that is part of a resource pool.

Cross-Server Private networks

Note:
Creating cross-server private networks requires Citrix XenServer Advanced editions or higher. To learn more about XenServer editions, and to find out how to upgrade, visit the Citrix website here.
Previous versions of XenServer allowed you to create single-server private networks that allowed VMs running on the same host to communicate with each other. The cross-server private network feature extends the single-server private network concept to allow VMs on different hosts to communicate with each other. Cross-server private networks combine the isolation properties of a single-server private network with the additional ability to span hosts across a resource pool. This combination enables use of VM agility features such as XenMotion live migration and Workload Balancing (WLB) for VMs with connections to cross-server private networks.
Cross-server private networks are completely isolated. VMs that are not connected to the private network cannot sniff or inject traffic into the network, even when they are located on the same physical host with VIFs connected to a network on the same underlying physical network device (PIF). VLANs provide similar functionality, though unlike VLANs, cross-server private networks provide isolation without requiring configuration of a physical switch fabric, through the use of the Generic Routing Encapsulation (GRE) IP tunnelling protocol.
Private networks provide the following benefits without requiring a physical switch:
• the isolation properties of single-server private networks
• the ability to span a resource pool, enabling VMs connected to a private network to live on multiple hosts within the same pool
• compatibility with features such as XenMotion and Workload Balancing
Cross-Server Private Networks must be created on a management interface, as they require an IP addressable PIF. Any IP-enabled PIF (referred to as a 'Management Interface' in XenCenter) can be used as the underlying network transport. If you choose to put cross-server private network traffic on a second management interface, then this second management interface must be on a separate subnet.
If both management interfaces are on the same subnet, traffic will be routed incorrectly.
Note:
To create a cross-server private network, the following conditions must be met:
• All of the hosts in the pool must be using XenServer 6.0 or greater
• All of the hosts in the pool must be using the vSwitch for the networking stack
• The vSwitch Controller must be running and you must have added the pool to it (The pool must have a vSwitch Controller configured that handles the initialization and configuration tasks required for the vSwitch connection)
• The cross-server private network must be created on a NIC configured as a management interface. This can be the primary management interface or another management interface (IP-enabled PIF) you configure specifically for this purpose, provided it is on a separate subnet.
For more information on configuring the vSwitch, see the XenServer vSwitch Controller User Guide. For UI-based procedures for configuring private networks, see the XenCenter Help.
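From the CLI, a cross-server private network is built from a network object plus a tunnel on each host's chosen management PIF. The outline below is a sketch only, assuming the xe tunnel-create command available in this release; consult the XenCenter Help or the vSwitch Controller documentation for the complete, supported procedure:
xe network-create name-label=<private_net>
xe tunnel-create network-uuid=<network_uuid> pif-uuid=<management_pif_uuid>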

Creating Networks in a Standalone Server

Because external networks are created for each PIF during host installation, creating additional networks is typically only required to:
• use a private network
• support advanced operations such as VLANs or NIC bonding
To add or remove networks using XenCenter, refer to the XenCenter online Help.
To add a new network using the CLI
1. Open the XenServer host text console.
2. Create the network with the network-create command, which returns the UUID of the newly created network:
xe network-create name-label=<mynetwork>
At this point the network is not connected to a PIF and therefore is internal.
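To make the new internal network useful, attach VM VIFs to it. The following sketch assumes an existing VM; specifying mac=random requests an autogenerated MAC address (an explicit address can be given instead), and vif-plug only succeeds while the VM is running:
xe vif-create vm-uuid=<vm_uuid> network-uuid=<network_uuid> device=<device_number> mac=random
xe vif-plug uuid=<vif_uuid>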

Creating Networks in Resource Pools

All XenServer hosts in a resource pool should have the same number of physical network interface cards (NICs), although this requirement is not strictly enforced when a XenServer host is joined to a pool.
Having the same physical networking configuration for XenServer hosts within a pool is important because all hosts in a pool share a common set of XenServer networks. PIFs on the individual hosts are connected to pool-wide networks based on device name. For example, all XenServer hosts in a pool with an eth0 NIC will have a corresponding PIF plugged into the pool-wide Network 0 network. The same will be true for hosts with eth1 NICs and Network 1, as well as other NICs present in at least one XenServer host in the pool.
If one XenServer host has a different number of NICs than other hosts in the pool, complications can arise because not all pool networks will be valid for all pool hosts. For example, if hosts host1 and host2 are in the same pool and host1 has four NICs while host2 only has two, only the networks connected to PIFs corresponding to eth0 and eth1 will be valid on host2. VMs on host1 with VIFs connected to networks corresponding to eth2 and eth3 will not be able to migrate to host host2.

Creating VLANs

For servers in a resource pool, you can use the pool-vlan-create command. This command creates the VLAN and automatically creates and plugs in the required PIFs on the hosts in the pool. See the section called “pool-vlan-create” for more information.
To connect a network to an external VLAN using the CLI
1. Open the XenServer host console.
2. Create a new network for use with the VLAN. The UUID of the new network is returned:
xe network-create name-label=network5
3. Use the pif-list command to find the UUID of the PIF corresponding to the physical NIC supporting the desired VLAN tag. The UUIDs and device names of all PIFs are returned, including any existing VLANs:
xe pif-list
4. Create a VLAN object, specifying the desired physical PIF and the VLAN tag for the new VLAN. A new PIF will be created and plugged into the specified network. The UUID of the new PIF object is returned.
xe vlan-create network-uuid=<network_uuid> pif-uuid=<pif_uuid> vlan=5
5. Attach VM VIFs to the new network. See the section called “Creating Networks in a Standalone Server” for more details.
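To confirm that the VLAN PIF was created and attached, you can filter the PIF list by VLAN tag. This sketch uses the tag 5 from the example above:
xe pif-list VLAN=5 params=uuid,device,VLAN,network-name-label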

Creating NIC Bonds on a Standalone Host

Citrix recommends using XenCenter to create NIC bonds. For instructions, see the XenCenter help.
This section describes how to use the xe CLI to bond NIC interfaces on a XenServer host that is not in a pool. See the section called “Creating NIC bonds in resource pools” for details on using the xe CLI to create NIC bonds on XenServer hosts that comprise a resource pool.
Creating a NIC bond
When you bond a NIC, the bond absorbs the PIF/NIC currently in use as the primary management interface. From XenServer 6.0 onwards, the primary management interface is automatically moved to the bond PIF.
Bonding two NICs
1. Use the network-create command to create a new network for use with the bonded NIC. The UUID of the new network is returned:
xe network-create name-label=<bond0>
2. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:
xe pif-list
3. Do one of the following:
• To configure the bond in active-active mode (default), use the bond-create command to create the bond.
Using commas to separate the parameters, specify the newly created network UUID and the UUIDs of the PIFs to be bonded:
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>
The UUID for the bond is returned after running the command.
• To configure the bond in active-passive mode, use the same syntax but add the optional mode parameter and specify active-backup:
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2> \
mode=<balance-slb | active-backup>
Note:
In previous releases, you specified the other-config:bond-mode parameter to change the bond mode. While this approach still works in XenServer 6.0, it may not be supported in future releases and it is not as efficient as the mode parameter. other-config:bond-mode requires running pif-unplug and pif-plug for the mode change to take effect.
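For reference, the legacy approach looks like the following sketch; the UUID is that of the PIF representing the bond itself, and the unplug/plug cycle is what applies the change:
xe pif-param-set uuid=<bond_pif_uuid> other-config:bond-mode=active-backup
xe pif-unplug uuid=<bond_pif_uuid>
xe pif-plug uuid=<bond_pif_uuid>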
Controlling the MAC Address of the Bond
When you bond the primary management interface, the PIF/NIC currently in use as the primary management interface is subsumed by the bond. If the host uses DHCP, in most cases the bond's MAC address is the same as the PIF/NIC currently in use, and the primary management interface's IP address can remain unchanged.
You can change the bond's MAC address so that it is different from the MAC address for the (current) primary management-interface NIC. However, as the bond is enabled and the MAC/IP address in use changes, existing network sessions to the host will be dropped.
You can control the MAC address for a bond in two ways:
• An optional mac parameter can be specified in the bond-create command. You can use this parameter to set the bond MAC address to any arbitrary address (see the example after this list).
• If the mac parameter is not specified, from XenServer 6.0 onwards, XenServer uses the MAC address of the primary management interface if this is one of the interfaces in the bond. If the primary management interface is not part of the bond, but another management interface is, the bond uses the MAC address (and also the IP address) of that management interface. If none of the NICs in the bond are management interfaces, the bond uses the MAC of the first named NIC.
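For example, to set an explicit MAC address when creating the bond (the address placeholder is purely illustrative):
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2> mac=<aa:bb:cc:dd:ee:ff>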
Reverting NIC bonds
If reverting a XenServer host to a non-bonded configuration, be aware that the bond-destroy command automatically configures the primary-slave as the interface to be used for the primary management interface. Consequently, all VIFs will be moved to the primary management interface.
The term primary-slave refers to the PIF that the MAC and IP configuration was copied from when creating the bond. When bonding two NICs, the primary slave is:
1. The primary management interface NIC (if the primary management interface is one of the bonded NICs).
2. Any other NIC with an IP address (if the primary management interface was not part of the bond).
3. The first named NIC. You can find out which one it is by running the following:
xe bond-list params=all
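Putting this together, reverting a host to a non-bonded configuration is a two-step sketch: find the bond's UUID, then destroy it. Remember that bond-destroy moves the primary management interface back to the primary-slave PIF described above:
xe bond-list params=all
xe bond-destroy uuid=<bond_uuid>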

Creating NIC bonds in resource pools

Whenever possible, create NIC bonds as part of initial resource pool creation prior to joining additional hosts to the pool or creating VMs. Doing so allows the bond configuration to be automatically replicated to hosts as they are joined to the pool and reduces the number of steps required. Adding a NIC bond to an existing pool requires one of the following:
• Using the CLI to configure the bonds on the master and then each member of the pool.
• Using the CLI to configure the bonds on the master and then restarting each member of the pool so that it inherits its settings from the pool master.
• Using XenCenter to configure the bonds on the master. XenCenter automatically synchronizes the networking settings on the member servers with the master, so you do not need to reboot the member servers.
For simplicity and to prevent misconfiguration, Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter Help.
This section describes using the xe CLI to create bonded NIC interfaces on XenServer hosts that comprise a resource pool. See the section called “Creating a NIC bond” for details on using the xe CLI to create NIC bonds on a standalone XenServer host.
Warning:
Do not attempt to create network bonds while HA is enabled. The process of bond creation will disturb the in-progress HA heartbeating and cause hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and will need the host-emergency-ha-disable command to recover.
Adding NIC bonds to new resource pools
1. Select the host you want to be the master. The master host belongs to an unnamed pool by default. To create a resource pool with the CLI, rename the existing nameless pool:
xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>
2. Create the NIC bond as described in the section called “Creating a NIC bond”.
3. Open a console on a host that you want to join to the pool and run the command:
xe pool-join master-address=<host1> master-username=root master-password=<password>
The network and bond information is automatically replicated to the new host. The primary management interface is automatically moved from the host NIC where it was originally configured to the bonded PIF (that is, the primary management interface is now absorbed into the bond so that the entire bond functions as the primary management interface).
Use the host-list command to find the UUID of the host being configured:
xe host-list
Adding NIC bonds to an existing pool
Warning:
Do not attempt to create network bonds while HA is enabled. The process of bond creation disturbs the in-progress HA heartbeating and causes hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and you will need to run the host-emergency-ha-disable command to recover them.
Note:
If you are not using XenCenter for NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond on the master and then restart the other pool members. Alternatively, you can run the service xapi restart command. This causes the bond and VLAN settings on the master to be inherited by each host. The primary management interface of each host must, however, be manually reconfigured.
To create the NIC bond, follow the procedure in the section called “Adding NIC bonds to new resource pools”.

Configuring a dedicated storage NIC

You can use either XenCenter or the xe CLI to assign a NIC an IP address and dedicate it to a specific function, such as storage traffic. When you configure a NIC with an IP address, you do so by creating a management interface. IP-enabled NICs are referred to collectively as management interfaces. (The IP-enabled NIC XenServer used for management is a type of management interface known as the primary management interface.)
When you want to dedicate a management interface for a specific purpose, you must ensure the appropriate network configuration is in place to ensure the NIC is used only for the desired traffic. For example, to dedicate a NIC to storage traffic, the NIC, storage target, switch, and/or VLAN must be configured so that the target is only accessible over the assigned NIC. If your physical and IP configuration do not limit the traffic that can be sent across the storage management interface, it is possible for other traffic, such as management traffic, to be sent across the storage NIC.
Note:
When selecting a NIC to configure as a management interface for use with iSCSI or NFS SRs, ensure that the dedicated NIC uses a separate IP subnet that is not routable from the primary management interface. If this is not enforced, then storage traffic may be directed over the main primary management interface after a host reboot, due to the order in which network interfaces are initialized.
To assign NIC functions using the xe CLI
1. Ensure that the PIF is on a separate subnet, or routing is configured to suit your network topology in order to force the desired traffic over the selected PIF.
2. Set up an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using static IP addressing, the IP, netmask, gateway, and DNS parameters (a worked static example follows this procedure):
xe pif-reconfigure-ip mode=<DHCP | Static> uuid=<pif-uuid>
3. Set the PIF's disallow-unplug parameter to true:
xe pif-param-set disallow-unplug=true uuid=<pif-uuid>
xe pif-param-set other-config:management_purpose="Storage" uuid=<pif-uuid>
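As a worked sketch, the complete procedure for dedicating a NIC to storage with a static address might look like the following; the addresses are illustrative only, and the gateway and DNS parameters can be added if your storage subnet requires them:
xe pif-list host-name-label=<hostname> params=uuid,device
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.0.2.50 netmask=255.255.255.0
xe pif-param-set disallow-unplug=true uuid=<pif-uuid>
xe pif-param-set other-config:management_purpose="Storage" uuid=<pif-uuid>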
If you want to use a storage interface that can be routed from the primary management interface also (bearing in mind that this configuration is not the best practice), you have two options:
• After a host reboot, ensure that the storage interface is correctly configured, and use the xe pbd-unplug and xe pbd-plug commands to reinitialize the storage connections on the host (see the sketch after this list). This restarts the storage connection and routes it over the correct interface.
• Alternatively, you can use xe pif-forget to remove the interface from the XenServer database and manually configure it in the control domain. This is an advanced option and requires you to be familiar with how to manually configure Linux networking.
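A sketch of the reinitialization commands mentioned in the first option; the SR UUID identifies the storage repository whose connections you want to restart, and the PBD UUIDs come from the first command's output:
xe pbd-list sr-uuid=<sr_uuid> params=uuid,host-uuid,currently-attached
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>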

Using SR-IOV Enabled NICs

Single Root I/O Virtualization (SR-IOV) is a PCI device virtualization technology that allows a single PCI device to appear as multiple PCI devices on the physical PCI bus. The actual physical device is known as a Physical Function (PF) while the others are known as Virtual Functions (VF). The purpose of this is for the hypervisor to directly assign one or more of these VFs to a Virtual Machine (VM) using SR-IOV technology: the guest can then use the VF as any other directly assigned PCI device.
Assigning one or more VFs to a VM allows the VM to directly exploit the hardware. When configured, each VM behaves as though it is using the NIC directly, reducing processing overhead and improving performance.
Warning:
If your VM has an SR-IOV VF, functions that require VM mobility, for example, Live Migration, Workload Balancing, Rolling Pool Upgrade, High Availability and Disaster Recovery, are not possible. This is because the VM is directly tied to the physical SR-IOV enabled NIC VF. In addition, VM network traffic sent via an SR-IOV VF bypasses the vSwitch, so it is not possible to create ACLs or view QoS.
Assigning a SR-IOV NIC VF to a VM
Note:
SR-IOV is supported only with SR-IOV enabled NICs listed on the XenServer Hardware Compatibility List and only when used in conjunction with a Windows Server 2008 guest operating system.
1. Open a local command shell on your XenServer host.
2. Run the command lspci to display a list of the Virtual Functions (VF). For example:
07:10.0 Ethernet controller: Intel Corporation 82559 Ethernet Controller Virtual Function (rev 01)
In the example above, 07:10.0 is the bus:device.function address of the VF.
3. Assign the required VF to the target VM by running the following commands:
xe vm-param-set other-config:pci=0/0000:<bus:device.function> uuid=<vm-uuid>
4. Start the VM, and install the appropriate VF driver for your specific hardware.
Note:
You can assign multiple VFs to a single VM, however the same VF cannot be shared across multiple VMs.
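To confirm which VF is assigned to a VM, you can read back the other-config:pci key set in step 3; this is a minimal sketch:
xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=pci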

Controlling the rate of outgoing data (QoS)

To limit the amount of outgoing data a VM can send per second, you can set an optional Quality of Service (QoS) value on VM virtual interfaces (VIFs). The setting lets you specify a maximum transmit rate for outgoing packets in kilobytes per second.
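For example, the following sketch limits a VIF to a maximum transmit rate of 100 kilobytes per second; the value is illustrative, and the setting may require the VIF to be replugged or the VM to be restarted before it takes effect:
xe vif-param-set uuid=<vif_uuid> qos_algorithm_type=ratelimit
xe vif-param-set uuid=<vif_uuid> qos_algorithm_params:kbps=100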