
Best practices for deploying Citrix XenServer on HP StorageWorks P4000 SAN
Table of contents
Executive summary
Business case
  High availability
  Scalability
  Virtualization
XenServer storage model
  HP StorageWorks P4000 SAN
  Storage repository
  Virtual disk image
  Physical block device
  Virtual block device
Overview of XenServer iSCSI storage repositories
  iSCSI using the software initiator (lvmoiscsi)
  iSCSI Host Bus Adapter (HBA) (lvmohba)
SAN connectivity
  Benefits of shared storage
Storage node
  Clustering and Network RAID
  Network bonding
Configuring an iSCSI volume
  Example
    Creating a new volume
    Configuring the new volume
    Comparing full and thin provisioning
    Benefits of thin provisioning
Configuring a XenServer Host
  Synchronizing time
    NTP for XenServer
  Network configuration and bonding
    Example
  Connecting to an iSCSI volume
    Determining or changing the host’s IQN
    Specifying IQN authentication
    Creating an SR
  Creating a VM on the new SR
  Summary
Configuring for high availability
  Configuration
  Implementing Network RAID for SRs
    Configuring Network RAID
  Pooling XenServer hosts
  Configuring VMs for high availability
    Creating a heartbeat volume
    Configuring the resource pool for HA
    Configuring multi-site high availability with a single cluster
    Configuring multi-site high availability with multiple clusters
Disaster recoverability
  Backing up configurations
    Resource pool configuration
    Host configuration
  Backing up metadata
  SAN based Snapshots
  SAN based Snapshot Rollback
  Reattach storage repositories
Virtual Machines (VMs)
  Creating VMs
  Size of the storage repository
  Increasing storage repository volume size
  Uniqueness of VMs
  Process for preparing a VM for Cloning
  Changing the Storage Repository and Virtual Disk UUID
  SmartClone the Golden Image VM
For more information
Executive summary
Using Citrix XenServer with HP StorageWorks P4000 SAN storage, you can host individual desktops and servers inside virtual machines (VMs) that are hosted and managed from a central location utilizing optimized, shared storage. This solution provides cost-effective high availability and scalable performance.
Organizations are demanding better resource utilization and higher availability, along with more flexibility to react to rapidly changing business needs. The 64-bit XenServer hypervisor provides outstanding support for VMs, including granular control over processor, network, and disk resource allocations; as a result, your virtualized servers can operate at performance levels that closely match physical platforms. Meanwhile, additional XenServer hosts deployed in a resource pool provide scalability and support for high-availability (HA) applications, allowing VMs to restart automatically on other hosts at the same site or even at a remote site.
Enterprise IT infrastructures are powered by storage. HP StorageWorks P4000 SANs offer scalable storage solutions that can simplify management, reduce operational costs, and optimize performance in your environment. Easy to deploy and maintain, HP StorageWorks P4000 SANs help ensure that crucial business data remains available; through innovative double-fault protection across the entire SAN, your storage is protected from disk, network, and storage node faults.
You can grow your HP StorageWorks P4000 SAN non-disruptively, in a single operation, by simply adding storage; thus, you can scale performance, capacity, and redundancy as storage requirements evolve. Features such as asynchronous and synchronous replication, storage clustering, Network RAID, thin provisioning, snapshots, remote copies, cloning, performance monitoring, and single pane-of-glass management can add value in your environment.
This paper explores options for configuring and using XenServer, with emphasis on best practices and tips for an HP StorageWorks P4000 SAN environment.
Target audience: This paper provides information for XenServer Administrators interested in implementing XenServer-based server virtualization using HP StorageWorks P4000 SAN storage. Basic knowledge of XenServer technologies is assumed.
Business case
Organizations implementing server virtualization typically require shared storage to take full advantage of today’s powerful hypervisors. For example, XenServer supports features such as XenMotion and HA that require shared storage to serve a pool of XenServer host systems. By leveraging the iSCSI storage protocol, XenServer hosts can access the storage just as they would local storage, but over an Ethernet network. Since standard Ethernet networks already provide the communications backbones of most IT organizations, no additional specialized hardware is required to support a Storage Area Network (SAN) implementation. Data security is handled first and foremost by authentication mechanisms at the physical, storage, and iSCSI protocol levels. Like any other data, iSCSI traffic can also be encrypted at the client, thereby satisfying data security compliance requirements.
Rapid deployment
Shared storage is not only a requirement for a highly-available XenServer configuration; it is also desirable for supporting rapid data deployment. Using simple management software, you can respond to a request for an additional VM and associated storage with just a few clicks. To minimize deployment time, you can use a golden-image clone, with both storage and operating system (OS) pre-configured and ready for application deployment.
Data de-duplication, which occurs at the iSCSI block level on the host server disk, allows you to roll out hundreds of OS images while only occupying the space needed to store the original image. Initial deployment time is reduced to the time required to perform the following activities:
- Configure the first operating system
- Configure the particular deployment for uniqueness
- Configure the applications in VMs
A server roll-out no longer needs to take days.
High availability
Highly-available storage is a critical component of a highly-available XenServer resource pool. If a XenServer host at a particular site were to fail, or the entire site were to go down, the ability of other XenServer hosts or a remote pool to take up the load of the affected VMs means that your business-critical applications can continue to run.
HP StorageWorks P4000 SAN solutions provide the following mechanisms for maximizing availability:
- Storage nodes are clustered to provide redundancy.
- Hardware RAID implemented at the storage-node level can eliminate the impact of disk drive failures.
- Configuring multiple network connections to each node can eliminate the impact of link failures.
- Synchronous replication between sites can minimize the impact of a site failure.
- Snapshots can prevent data corruption when you are rolling back to a particular point in time.
- Remote snapshots can be used to add sources for data recovery.
Comprehensive, cost-effective capabilities for high availability and disaster recovery (DR) applications are built into every HP StorageWorks P4000 SAN. There is no need for additional upgrades; simply install a storage node and start using it. When you need additional storage, higher performance, or increased availability, just add one or more storage nodes to your existing SAN.
Scalability
The storage node is the building block of an HP StorageWorks P4000 SAN, providing disk spindles, a RAID backplane, CPU processing power, memory cache, and networking throughput that, in combination, contribute toward overall SAN performance. Thus, HP StorageWorks P4000 SANs can scale linearly and predictably as your storage requirements increase.
Virtualization
Server virtualization allows you to consolidate multiple applications using a single host server or server pool. Meanwhile, storage virtualization allows you to consolidate your data using multiple storage nodes to enhance resource utilization, availability, performance, scalability and disaster recoverability, while helping to achieve the same objectives for VMs.
The following section outlines the XenServer storage model.
XenServer storage model
The XenServer storage model used in conjunction with HP StorageWorks P4000 SANs is shown in Figure 1.
Figure 1. XenServer storage model
Brief descriptions of the components of this storage model are provided below.
HP StorageWorks P4000 SAN
A SAN can be defined as an architecture that allows remote storage devices to appear to a server as though these devices are locally-attached.
In an HP StorageWorks P4000 SAN implementation, data storage is consolidated on a pooled cluster of storage nodes to enhance availability, resource utilization, and scalability. Volumes are allocated to XenServer hosts via an Ethernet infrastructure (1 Gb/second or 10 Gb/second) that utilizes the iSCSI block-based storage protocol.
Performance, capacity and availability can be scaled on-demand and on-line.
Storage repository
A storage repository (SR) is a container of storage in which XenServer virtual machine data is stored. Although SRs can support locally connected storage types such as IDE, SATA, SCSI, and SAS drives, this document discusses remotely connected iSCSI SAN storage. SRs abstract the underlying differences in storage connectivity; however, only remotely connected (shared) SRs enable specific XenServer resource pool capabilities such as high availability and XenMotion. The demands of a XenServer resource pool dictate that storage must be equally accessible to all hosts; therefore, data must not be stored on local SRs.
Virtual disk image
A virtual disk image (VDI) is the disk presented to the virtual machine and its OS as a local disk (even if the disk image is stored remotely). This image is stored in the container of an SR. Although multiple VDIs may be stored on a single SR, it is considered best practice to store one VDI per SR. It is also considered best practice to allocate one virtual machine to a single VDI. VDIs can be stored in different formats, depending upon the type of connectivity afforded to the storage repository.
Physical block device
A physical block device (PBD) is a connector that describes how XenServer hosts find and connect to an SR.
Virtual block device
A virtual block device (VBD) is a connector that describes how a VM connects to its associated VDI, which is located on an SR.
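These four objects can be inspected from the XenServer console with the xe CLI. The following is a minimal sketch, with placeholder UUIDs, that walks the chain from SR to PBD and from VM to VBD and VDI:

    # List SRs and the PBDs that connect this host to them
    xe sr-list params=uuid,name-label,type
    xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached

    # For a given VM, list its VBDs and the VDIs they reference
    xe vbd-list vm-uuid=<vm-uuid> params=uuid,vdi-uuid,device
    xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size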
Overview of XenServer iSCSI storage repositories
XenServer hosts access HP StorageWorks P4000 iSCSI storage repositories (SRs) using either the open-iSCSI software initiator or an iSCSI Host Bus Adapter (HBA). Regardless of the access method, XenServer SRs use Linux Logical Volume Manager (LVM) as the underlying file system to store virtual disk images (VDIs). Although multiple virtual disks may reside on a single storage repository, it is recommended that a single VDI occupy the space of an SR for optimum performance.
iSCSI using the software initiator (lvmoiscsi)
In this method, the open-iSCSI software initiator is used to connect to an iSCSI volume over the Ethernet network. The iSCSI volume is presented to a XenServer host or resource pool through this connection.
iSCSI Host Bus Adapter (HBA) (lvmohba)
In this method, a specialized hardware interface, an iSCSI Host Bus Adapter, is used to connect to an iSCSI volume over the Ethernet network. The iSCSI volume is presented to a XenServer host or resource pool through this connection.
lvmoiscsi is the method used in this paper. Please refer to the XenServer Administrator’s Guide for configuration requirements of lvmohba.
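As a brief illustration of the lvmoiscsi method (the full SR creation workflow is covered step by step later in this paper), the console can probe the SAN and create a shared SR as shown in the sketch below. The cluster virtual IP address and UUIDs are placeholders; the target IQN shown is the example volume used throughout this paper:

    # Probe the cluster virtual IP; the output lists the available target IQNs
    xe sr-probe type=lvmoiscsi device-config:target=10.0.1.50

    # Probe a specific target to obtain the SCSI ID of its LUN
    xe sr-probe type=lvmoiscsi device-config:target=10.0.1.50 \
      device-config:targetIQN=iqn.2003-10.com.lefthandnetworks:hp-boulder:55:xpsp2-01

    # Create the shared SR using the SCSI ID returned by the probe
    xe sr-create host-uuid=<host-uuid> name-label=XPSP2-01 shared=true \
      content-type=user type=lvmoiscsi device-config:target=10.0.1.50 \
      device-config:targetIQN=iqn.2003-10.com.lefthandnetworks:hp-boulder:55:xpsp2-01 \
      device-config:SCSIid=<scsi-id-from-probe>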
SAN connectivity
Physically connected via an Ethernet IP infrastructure, HP StorageWorks P4000 SANs provide storage for XenServer hosts using the iSCSI block-based storage protocol to carry storage data from host to storage or from storage to storage. Each host acts as an initiator (iSCSI client) connecting to a storage target (an HP StorageWorks P4000 SAN volume) in an SR, where the data is stored.
Since SCSI commands are encapsulated within an Ethernet packet, storage no longer needs to be locally-connected, inside a server. Thus, storage performance for a XenServer host becomes a function of bandwidth, based on 1 Gb/second or 10 Gb/second Ethernet connectivity.
Moving storage out of physical servers allows you to create a SAN in which servers access shared storage remotely. The mechanism for accessing this shared storage is iSCSI, which operates in much the same way as other block-based storage protocols such as Fibre Channel (FC). The SAN topology can be deployed efficiently using the standard, pre-existing Ethernet switching infrastructure.
Benefits of shared storage
The benefits of shared storage include:
- Equal access to shared storage is a basic requirement for hosts deployed in a resource pool, enabling XenServer functionality such as HA and XenMotion, which supports the migration of VMs between hosts in the event of a failover or a manual live migration.
- Since storage resources are no longer dedicated to a particular physical server, utilization is enhanced; moreover, you are now able to consolidate data.
- Storage reallocation can be achieved without cabling changes.
In much the same way that XenServer can be used to efficiently virtualize server resources, HP StorageWorks P4000 SANs can be used to virtualize and consolidate storage resources while extending storage functionality. Backup and DR are also simplified and enhanced by the ability to move VM data anywhere an Ethernet packet can travel.
Storage node
The storage node is the basic building block of an HP StorageWorks P4000 SAN and includes the following components:
- CPU
- Disk drives
- RAID controller
- Memory cache
- Multiple network interfaces
These components work in concert to respond to storage read and write requests from an iSCSI client.
The RAID controller supports a range of RAID types for the node’s disk drives, allowing you to configure different levels of fault-tolerance and performance within the node. For example, RAID 10 maximizes throughput and redundancy, RAID 6 can compensate for dual disk drive faults while better utilizing capacity, and RAID 5 provides minimal redundancy but maximizes capacity utilization.
Network interfaces can be used to provide fault tolerance or may be aggregated to provide additional bandwidth. 1 Gb/second and 10 Gb/second interfaces are supported.
CPU, memory, and cache work together to respond to iSCSI requests for reading or writing data.
All physical storage node components described above are virtualized, becoming a building block for an HP StorageWorks P4000 SAN.
Clustering and Network RAID
Since an individual storage node would represent a single point of failure (SPOF), the HP StorageWorks P4000 SAN supports a cluster of storage nodes working together and managed as a single unit. Just as conventional RAID can protect against a SPOF within a disk, Network RAID can be used to spread a volume’s data blocks across the cluster to protect against single or multiple storage node failures.
HP StorageWorks SAN/iQ, the storage software logic, performs the storage virtualization and distributes data across the cluster.
Network RAID helps prevent storage downtime for XenServer hosts accessing that volume, which is critical for ensuring that these hosts can always access VM data. An additional benefit of virtualizing a volume across the cluster is that the resources of all the nodes can be combined to increase read and write throughput, as well as capacity (the total space available for data storage is the sum of the storage node capacities).
It is a best practice to configure a minimum of two nodes in a cluster and use Network RAID at Replication Level 2.
Network bonding
Each storage node supports multiple network interfaces to help eliminate SPOFs from the communication pathway. The network interfaces are best configured by attaching both of them to the Ethernet switching infrastructure.
Network bonding (also known as NIC teaming, where “NIC” refers to a network interface card) provides a mechanism for aggregating multiple network interfaces into a single, logical interface. Bonding supports path failover in the event of a failure; in addition, depending on the particular options configured, bonding can also enhance throughput.
In its basic form, a network bond forms an active/passive failover configuration; that is, if one path in this configuration were to fail, the other would assume responsibility for communicating data.
Note
Each network interface should be connected to a different switch.
With Adaptive Load Balancing (ALB) enabled on the network bond, both network interfaces can transmit data from the storage node; however, only one interface can receive data. This configuration requires no additional switch configuration support, and the bonded connections may span multiple switches, ensuring that no single switch becomes a point of failure.
Enabling IEEE 802.3ad Link Aggregation Control Protocol (LACP) Dynamic Mode on the network bond allows both network ports to send and receive data in addition to providing fault tolerance. However, the associated switch must support this feature; pre-configuration may be required for the attached ports.
Note
LACP requires both network interfaces’ ports to be connected to a single switch, thus creating a potential SPOF.
Best practices for network configuration depend on your particular environment; however, at a minimum, you should configure an ALB bond between network interfaces.
Configuring an iSCSI volume
The XenServer SR stores VM data on a volume (iSCSI Target) that is a logical entity with specific attributes. The volume consists of storage on one or more storage nodes.
When planning storage for your VMs, you should consider the following:
- How will that storage be used?
- What are the storage requirements at the OS and application levels?
- How would data growth impact capacity and data availability?
- Which XenServer host (or, in a resource pool, which hosts) requires access to the data?
- How does your DR approach affect the data?
Example
An HP StorageWorks P4000 SAN is configured using the Centralized Management Console (CMC). In this example, the HP-Boulder management group defines a single storage site for a XenServer host resource pool (farm) or a synchronously-replicated stretch resource pool. HP-Boulder can be thought of as a logical grouping of resources.
A cluster named IT-DataCenter contains two storage nodes, v8.1-01 and v8.1-02.
Twenty volumes have currently been created. This example focuses on volume XPSP2-01, which is sized at 10GB; however, because it has been thinly provisioned, this volume occupies far less space on the SAN. Its iSCSI qualified name (IQN) is iqn.2003-10.com.lefthandnetworks:hp-boulder:55:xpsp2-01, which uniquely identifies this volume on the SAN.
Figure 2 shows how the CMC can be used to obtain detailed information about a particular storage volume.
Figure 2. Using CMC to obtain detailed information about volume XPSP2-01
Creating a new volume
The CMC is used to create volumes such as XPSP2-01, as shown in Figure 3.
Figure 3. Creating a new volume
It is a best practice to create a unique iSCSI volume for each VM in an SR. Thus, HP suggests matching the name of the VM to that of the XenServer SR and of the volume created in the CMC. Using this convention, it is always clear which VM is related to which storage allocation.
This example is based on a 10GB Windows XP SP2 VM. The name of the iSCSI volume – XPSP2-01 – is repeated when creating the SR as well as the VM.
The assignment of servers defines which iSCSI initiators (XenServer hosts) are allowed to read from and write to the storage; this is discussed later, in the Configuring a XenServer Host section.
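For scripted deployments, SAN/iQ also ships a command-line interface (CLIQ) that can create volumes non-interactively. The following one-line sketch is an illustration only: the parameter names follow the CLIQ key=value convention but should be verified against the CLIQ documentation for your SAN/iQ version, and the management IP address and credentials are placeholders.

    # Sketch: create a thinly provisioned, 2-way replicated 10GB volume
    # (verify parameter names against your SAN/iQ CLIQ documentation)
    cliq createVolume volumeName=XPSP2-01 clusterName=IT-DataCenter size=10GB replication=2 thinProvision=1 login=10.0.1.10 userName=admin passWord=<password>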
Configuring the new volume
Network RAID (2-Way replication) is selected to enhance storage availability; the cluster can now survive the failure of any single storage node (and, in larger clusters, the failure of multiple non-adjacent nodes).
Note
The more nodes there are in a cluster, the more nodes can fail without XenServer hosts losing access to data.
Thin Provisioning has also been selected to maximize data efficiency in the SAN: only data that is actually written to the volume occupies space. In functionality, this is equivalent to a sparse XenServer virtual hard drive (VHD); however, it is implemented efficiently in the storage, with no limitation on the type of volume connected within XenServer.
Figure 4 shows how to configure Thin Provisioning.
Figure 4. Configuring 2-Way Replication and Thin Provisioning
You can change volume properties at any time. However, if you change volume size, you may also need to update the XenServer configuration as well as the VM’s OS in order for the new size to be recognized.
Comparing full and thin provisioning
You have two options for provisioning volumes on the SAN:
Full Provisioning
With Full Provisioning, you reserve the same amount of space in the storage cluster as that presented to the XenServer host. Thus, when you create a fully-provisioned 10GB volume, 10GB of space is reserved for this volume in the cluster; if you also select 2-Way Replication, 20 GB of space (10 GB x 2) would be reserved. The Full Provisioning option ensures that the full space requirement is reserved for a volume within the storage cluster.
Thin Provisioning
With Thin Provisioning, you reserve less space in the storage cluster than that presented to XenServer hosts. Thus, when a thinly-provisioned 10GB volume is created, only 1GB of space is initially reserved for this volume; however, a 10GB volume is presented to the host. If you were also to select 2-Way Replication, 2GB of space (1 GB x 2) would initially be reserved for this volume.
As the initial 1GB reservation becomes almost consumed by writes, additional space is reserved from available space on the storage cluster. As more and more writes occur, the full 10GB of space will eventually be reserved.
Benefits of thin provisioning
The key advantage of using thin provisioning is that it minimizes the initial storage footprint during deployment. As your needs change, you can increase the size of the storage cluster by adding storage nodes to increase the amount of space available, creating a cost-effective, pay-as-you-grow architecture.
When undertaking a project to consolidate servers through virtualization, you typically find under-utilized resources on the bare-metal server; however, storage tends to be over-allocated. Now, XenServer’s resource virtualization approach means that storage can also be consolidated in clusters; moreover, thin provisioning can be selected to optimize storage utilization.
As your storage needs grow, you can add storage nodes to increase performance and capacity – a single, simple GUI operation is all that is required to add a new node to a management group and storage cluster. HP SAN/iQ storage software automatically redistributes your data based on the new cluster size, immediately providing additional space to support the growth of thinly-provisioned volumes. There is no need to change VM configurations or disrupt access to live data volumes.
However, there is a risk associated with the use of thin provisioning. Since less space is reserved on the SAN than that presented to XenServer hosts, writes to a thinly-provisioned volume may fail if the SAN should run out of space. To minimize this risk, SAN/iQ software monitors utilization and issues warnings when a cluster is nearly full, allowing you to plan your data growth needs in conjunction with thin provisioning. Thus, to support planned storage growth, it is a best practice to configure e-mail alerts, Simple Network Management Protocol (SNMP) triggers, or CMC storage monitoring so that you can initiate an effective response prior to a full-cluster event. Should a full-cluster event occur, writes requiring additional space cannot be accepted and will fail until such space is made available, effectively forcing the SR offline.
In order to increase available space in a storage cluster, you have the following options:
- Add another storage node to the SAN
- Delete other volumes
- Reduce the volume replication level
Note
Reducing the replication level or omitting replication frees up space; however, the affected volumes become more prone to failure.
- Replicate volumes to another cluster and then delete the originals
Note
After moving volumes to another cluster, you must reconfigure XenServer host access to match the new SR.
Adding a storage node to a cluster may be the least disruptive option for increasing space without impacting data availability.
This section has provided guidelines and best practices for configuring a new iSCSI volume. The following section describes how to configure a XenServer host.
Configuring a XenServer Host
This section provides guidelines and best practices for configuring a XenServer host so that it can communicate with an HP StorageWorks P4000 SAN, ensuring that the storage bandwidth for each VM is optimized. For example, since XenServer iSCSI SRs depend on the underlying network configuration, you can maximize availability by bonding network interfaces; in addition, you can create a dedicated storage network to achieve predictable storage throughput for VMs. After you have configured a single host in a resource pool, you can scale up with additional hosts to enhance VM availability.
The sample SRs configured below utilize the iSCSI volumes described in the previous section.
Guidelines are provided for the following tasks:
- Synchronizing time between XenServer hosts
- Setting up networks and configuring network bonding
- Connecting to iSCSI volumes in the SRs that will be created utilizing the HP StorageWorks iSCSI volumes configured in the previous section
- Creating a VM on the SR, with best practices to ensure that each virtual machine maximizes its available iSCSI storage bandwidth
The section ends with a summary.
Synchronizing time
A server’s BIOS provides a local mechanism for accurately recording time; in the case of a XenServer host, its VMs also use this time.
By default, XenServer hosts are configured to use local time for time stamping operations. Alternatively, a network time protocol (NTP) server can be used to manage time for a management group rather than relying on local settings.
Since XenServer hosts, VMs, applications, and storage nodes all utilize event logging, it is considered a best practice – particularly when there are multiple hosts – to synchronize time for the entire virtualized environment via an NTP server. Having a common time-line for all event and error logs can aid in troubleshooting, administration, and performance management.
Note
Configurations depend on local resources and networking policy. NTP synchronization updates occur every five minutes. If you do not set the time zone for the management group, Greenwich Mean Time (GMT) is used.
NTP for HP StorageWorks P4000 SAN
NTP server configuration can be found on a management group’s Time tab. A preferred NTP server of 0.pool.ntp.org is used in the example shown in Figure 5.
Figure 5. Turning on NTP using the CMC
NTP for XenServer
Although NTP server configuration may be performed during a XenServer installation, the console may also be used post-installation. Within XenCenter, highlight the XenServer host and select the Console tab, then enable NTP using xsconsole, as shown in Figure 6.
Figure 6. Turning on NTP using the XenServer xsconsole
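Alternatively, NTP can be enabled directly from the XenServer host’s command line. This is a minimal sketch, assuming the CentOS-based dom0 of XenServer 5.x; the server 0.pool.ntp.org matches the one used in the CMC example, and the service name should be verified on your release:

    # Point the host at the same NTP server used by the management group
    echo "server 0.pool.ntp.org" >> /etc/ntp.conf

    # Restart the NTP daemon and make it persistent across reboots
    service ntpd restart
    chkconfig ntpd on

    # Verify that the host is synchronizing with its peers
    ntpq -p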
Network configuration and bonding
Network traffic to XenServer hosts may consist of the following types:
- XenServer management
- VM LAN traffic
- iSCSI SAN traffic
Although a single physical network adapter can accommodate all these traffic types, its bandwidth would have to be shared by each. However, since it is critically important for iSCSI SRs to perform predictably when serving VM storage, it is considered a best practice to dedicate a network adapter to the iSCSI SAN. Furthermore, to maximize the availability of SAN access, you can bond multiple network adapters to act as a single interface, which not only provides redundant paths but also increases the bandwidth for SRs. If desired, you can create an additional bond for LAN connectivity.
Note
XenServer supports source-level balancing (SLB) bonding.
It is a best practice to ensure that the network adapters configured in a bond have matching physical network interfaces so that the appropriate failover path can be configured. In addition, to avoid a SPOF at a common switch, multiple switches should be configured for each failover path to provide an additional level of redundancy in the physical switch fabric.
You can create bonds using either XenCenter or the XenServer console; the console allows you to specify more options and must be used to set certain bonded network parameters for the iSCSI SAN. For example, the console must be used to set the disallow-unplug parameter to true.
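As a sketch of the equivalent console workflow (the UUIDs are placeholders to be taken from your own pif-list output, and the network name is illustrative):

    # Find the PIF UUIDs of the two NICs to be bonded on this host
    xe pif-list params=uuid,device,host-name-label

    # Create a new network to carry the bonded iSCSI traffic
    xe network-create name-label="iSCSI SAN bond"

    # Create the bond from the two member PIFs
    xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2>

    # Prevent the bonded storage interface from being unplugged accidentally
    xe pif-param-set uuid=<bond-pif-uuid> disallow-unplug=true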
Example
In the following example, six separate network links are available to a XenServer host. Of these, two are bonded for VM LAN traffic and two for iSCSI SAN traffic. In general, the procedure is as follows:
1. Ensure there are no VMs running on the particular XenServer host.
2. Select the host in XenCenter and open the Network tab, as shown in Figure 7.
A best practice is to add a meaningful description to each network in the description field.
Figure 7. Select host in XenCenter and open Network tab.
3. Select the NICs tab and click the Create Bond button. Add the interfaces you wish to bond, as shown in Figure 8.
Figure 8. Bonding network adapters NIC 4 and NIC 5
Figure 8 shows the creation of a network bond consisting of NIC 4 and NIC 5 to connect the host to the iSCSI SAN and, thus, the SRs that are common to all hosts. NIC 2 and NIC 3 had already been bonded to form a single logical network link for Ethernet traffic.
The network in this example uses a class C subnet mask of 255.255.255.0 with a network address of 1.1.1.0. No gateway is configured. IP addressing is set using the pif-reconfigure-ip command.
4. As shown in Figure 9, select Properties for each bonded network; rename Bond 2+3 to Bond 0 and Bond 4+5 to Bond 1; and enter appropriate descriptions for these networks.
Figure 9. Renaming network bonds
The iSCSI SAN Bond 1 interface is now ready to be used. In order for the bond’s IP address to be recognized, you can reboot the XenServer host; alternatively, use the host-management-reconfigure command.
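A minimal sketch of those two commands, using the example subnet above (the UUIDs and the host address 1.1.1.11 are placeholders):

    # Assign a static address on the 1.1.1.0/24 storage subnet to the bond PIF
    xe pif-reconfigure-ip uuid=<bond-pif-uuid> mode=static IP=1.1.1.11 netmask=255.255.255.0

    # Re-apply the host's networking configuration without a full reboot
    xe host-management-reconfigure pif-uuid=<management-pif-uuid>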
Connecting to an iSCSI volume
While HP StorageWorks iSCSI volumes were created in a previous section, no access was assigned to those volumes.
Before a volume can be recognized by a XenServer host as an SR, you must use the CMC to define the authentication method to be assigned to this volume. The following authentication methods are supported on XenServer hosts:
- IQN – You can assign a volume based on an IQN definition. Think of this as a one-to-one relationship, with one rule for one host.
- CHAP – Challenge Handshake Authentication Protocol (CHAP) provides a mechanism for defining a user name and secret password credential to ensure that access to a particular iSCSI volume is appropriate. Think of this as a one-to-many relationship, with one rule for many hosts.
XenServer hosts support only one-way CHAP authentication.
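When CHAP is used, the credentials are passed to storage operations through additional device-config keys. A minimal sketch, with a placeholder virtual IP and credentials:

    # Probe the SAN using one-way CHAP credentials
    xe sr-probe type=lvmoiscsi device-config:target=10.0.1.50 \
      device-config:chapuser=<chap-user> device-config:chappassword=<chap-secret>

The same chapuser and chappassword keys apply when the SR is created with sr-create.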
Determining or changing the host’s IQN
Each XenServer host has been assigned a default IQN that you can change, if desired; in addition, each volume was assigned a unique iSCSI name during its creation. Although no specific naming scheme is required within an iSCSI SAN, every IQN, whether initiator or target, must be unique.
A XenServer host’s IQN can be found via XenCenter’s General tab, as shown in Figure 10. Here, the IQN of XenServer host XenServer-55b-02 is iqn.2009-06.com.example:e834bedd.
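The IQN can also be read or changed from the console with the xe CLI. The host UUID and the new IQN below are placeholders; change a host’s IQN only while it has no attached iSCSI SRs:

    # Display the host's current IQN
    xe host-param-get uuid=<host-uuid> param-name=other-config param-key=iscsi_iqn

    # Assign a new IQN to the host
    xe host-param-set uuid=<host-uuid> other-config:iscsi_iqn=iqn.2009-06.com.example:newname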