
CloudPlatform (powered by Apache CloudStack) Version
4.2 Installation Guide
Revised October 27, 2013 11:15 pm Pacific
Citrix CloudPlatform
Author: Citrix CloudPlatform
© 2013 Citrix Systems, Inc. All rights reserved. Specifications are subject to change without notice. Citrix Systems, Inc., the Citrix logo, Citrix XenServer, Citrix XenCenter, and CloudPlatform are trademarks or registered trademarks of Citrix Systems, Inc. All other brands or products are trademarks or registered trademarks of their respective holders.
Installation Guide for CloudPlatform.
1. Getting More Information and Help 1
1.1. Additional Documentation Available ............................................................................... 1
1.2. Citrix Knowledge Center ............................................................................................... 1
1.3. Contacting Support ....................................................................................................... 1
2. Concepts 3
2.1. What Is CloudPlatform? ................................................................................................ 3
2.2. What Can CloudPlatform Do? ....................................................................................... 3
2.3. Deployment Architecture Overview ................................................................................ 4
2.3.1. Management Server Overview ........................................................................... 5
2.3.2. Cloud Infrastructure Overview ............................................................................ 5
2.3.3. Networking Overview ......................................................................................... 6
3. Cloud Infrastructure Concepts 9
3.1. About Regions ............................................................................................................. 9
3.2. About Zones ................................................................................................................ 9
3.3. About Pods ................................................................................................................ 11
3.4. About Clusters ........................................................................................................... 12
3.5. About Hosts ............................................................................................................... 13
3.6. About Primary Storage ............................................................................................... 13
3.7. About Secondary Storage ........................................................................................... 14
3.8. About Physical Networks ............................................................................................ 14
3.8.1. Basic Zone Network Traffic Types .................................................................... 15
3.8.2. Basic Zone Guest IP Addresses ....................................................................... 16
3.8.3. Advanced Zone Network Traffic Types .............................................................. 16
3.8.4. Advanced Zone Guest IP Addresses ................................................................ 16
3.8.5. Advanced Zone Public IP Addresses ................................................................ 17
3.8.6. System Reserved IP Addresses ....................................................................... 17
4. Upgrade Instructions 19
4.1. Upgrade from 3.0.x to 4.2 ........................................................................................... 19
4.2. Upgrade from 2.2.x to 4.2 ........................................................................................... 28
4.3. Upgrade from 2.1.x to 4.2 ........................................................................................... 37
4.4. Upgrading and Hotfixing XenServer Hypervisor Hosts ................................................... 37
4.4.1. Upgrading to a New XenServer Version ............................................................ 37
4.4.2. Applying Hotfixes to a XenServer Cluster .......................................................... 39
5. Installation 43
5.1. Who Should Read This .............................................................................................. 43
5.2. Overview of Installation Steps ..................................................................................... 43
5.3. Minimum System Requirements .................................................................................. 44
5.3.1. Management Server, Database, and Storage System Requirements ................... 44
5.3.2. Host/Hypervisor System Requirements ............................................................. 44
5.3.3. Hypervisor Compatibility Matrix ......................................................................... 45
5.4. Management Server Installation .................................................................................. 47
5.4.1. Management Server Installation Overview ......................................................... 47
5.4.2. Prepare the Operating System ......................................................................... 47
5.4.3. Install the Management Server on the First Host ............................................... 49
5.4.4. Install and Configure the Database ................................................................... 50
5.4.5. About Password and Key Encryption ................................................................ 54
5.4.6. Changing the Default Password Encryption ....................................................... 55
5.4.7. Prepare NFS Shares ....................................................................................... 56
5.4.8. Prepare and Start Additional Management Servers ............................................ 59
5.4.9. Management Server Load Balancing ................................................................ 60
5.4.10. Prepare the System VM Template .................................................................. 61
5.4.11. Installation Complete! Next Steps ................................................................... 62
5.5. Setting Configuration Parameters ................................................................................ 62
5.5.1. About Configuration Parameters ....................................................................... 62
5.5.2. Setting Global Configuration Parameters ........................................................... 64
5.5.3. Setting Local Configuration Parameters ............................................................ 64
5.5.4. Granular Global Configuration Parameters ........................................................ 64
6. User Interface 69
6.1. Supported Browsers ................................................................................................... 69
6.2. Log In to the UI ......................................................................................................... 69
6.2.1. End User's UI Overview ................................................................................... 69
6.2.2. Root Administrator's UI Overview ..................................................................... 70
6.2.3. Logging In as the Root Administrator ................................................................ 70
6.2.4. Changing the Root Password ........................................................................... 71
6.3. Using SSH Keys for Authentication ............................................................................. 71
6.3.1. Creating an Instance from a Template that Supports SSH Keys .......................... 71
6.3.2. Creating the SSH Keypair ................................................................................ 72
6.3.3. Creating an Instance ........................................................................................ 73
6.3.4. Logging In Using the SSH Keypair ................................................................... 73
6.3.5. Resetting SSH Keys ........................................................................................ 73
7. Steps to Provisioning Your Cloud Infrastructure 75
7.1. Overview of Provisioning Steps ................................................................................... 75
7.2. Adding Regions (optional) ........................................................................................... 76
7.2.1. The First Region: The Default Region ............................................................... 76
7.2.2. Adding a Region .............................................................................................. 76
7.2.3. Adding Third and Subsequent Regions ............................................................. 77
7.2.4. Deleting a Region ............................................................................................ 78
7.3. Adding a Zone ........................................................................................................... 79
7.3.1. Create a Secondary Storage Mount Point for the New Zone ............................... 79
7.3.2. Steps to Add a New Zone ................................................................................ 79
7.4. Adding a Pod ............................................................................................................. 88
7.5. Adding a Cluster ........................................................................................................ 89
7.5.1. Add Cluster: KVM or XenServer ....................................................................... 89
7.5.2. Add Cluster: OVM ........................................................................................... 89
7.5.3. Add Cluster: vSphere ....................................................................................... 90
7.6. Adding a Host ............................................................................................................ 93
7.6.1. Adding a Host (XenServer, KVM, or OVM) ........................................................ 93
7.6.2. Adding a Host (vSphere) .................................................................................. 95
7.7. Adding Primary Storage .............................................................................................. 95
7.8. Adding Secondary Storage ......................................................................................... 96
7.8.1. Adding an NFS Secondary Staging Store for Each Zone .................................... 97
7.9. Initialize and Test ....................................................................................................... 98
8. Installing XenServer for CloudPlatform 101
8.1. System Requirements for XenServer Hosts ................................................................ 101
8.2. XenServer Installation Steps ..................................................................................... 102
8.3. Configure XenServer dom0 Memory .......................................................................... 102
8.4. Username and Password .......................................................................................... 102
8.5. Time Synchronization ............................................................................................... 102
8.6. Licensing .................................................................................................................. 103
8.6.1. Getting and Deploying a License .................................................................... 103
8.7. Install CloudPlatform XenServer Support Package (CSP) ............................................ 103
8.8. Primary Storage Setup for XenServer ........................................................................ 104
8.9. iSCSI Multipath Setup for XenServer (Optional) .......................................................... 105
8.10. Physical Networking Setup for XenServer ................................................................ 106
8.10.1. Configuring Public Network with a Dedicated NIC for XenServer (Optional) ....... 106
8.10.2. Configuring Multiple Guest Networks for XenServer (Optional) ........................ 106
8.10.3. Separate Storage Network for XenServer (Optional) ....................................... 107
8.10.4. NIC Bonding for XenServer (Optional) ........................................................... 107
9. Installing KVM for CloudPlatform 111
9.1. System Requirements for KVM Hypervisor Hosts ....................................................... 111
9.1.1. Supported Operating Systems for KVM Hosts .................................................. 111
9.1.2. System Requirements for KVM Hosts ............................................................. 111
9.2. Install and configure the Agent .................................................................................. 112
9.3. Installing the CloudPlatform Agent on a KVM Host ..................................................... 112
9.4. Physical Network Configuration for KVM .................................................................... 113
9.5. Time Synchronization for KVM Hosts ......................................................................... 114
9.6. Primary Storage Setup for KVM (Optional) ................................................................. 114
10. Installing VMware for CloudPlatform 117
10.1. System Requirements for vSphere Hosts ................................................................. 117
10.1.1. Software requirements ................................................................................. 117
10.1.2. Hardware requirements ................................................................................ 117
10.1.3. vCenter Server requirements: ....................................................................... 118
10.1.4. Other requirements: ..................................................................................... 118
10.2. Preparation Checklist for VMware ............................................................................ 119
10.2.1. vCenter Checklist ......................................................................................... 119
10.2.2. Networking Checklist for VMware .................................................................. 119
10.3. vSphere Installation Steps ....................................................................................... 120
10.4. ESXi Host setup ..................................................................................................... 120
10.5. Physical Host Networking ........................................................................................ 120
10.5.1. Configure Virtual Switch ............................................................................... 120
10.5.2. Configure vCenter Management Network ...................................................... 121
10.5.3. Configure NIC Bonding for vSphere .............................................................. 121
10.6. Configuring a vSphere Cluster with Nexus 1000v Virtual Switch ................................. 122
10.6.1. About Cisco Nexus 1000v Distributed Virtual Switch ....................................... 122
10.6.2. Prerequisites and Guidelines ........................................................................ 122
10.6.3. Nexus 1000v Virtual Switch Preconfiguration ................................................. 123
10.6.4. Enabling Nexus Virtual Switch in CloudPlatform ............................................. 126
10.6.5. Configuring Nexus 1000v Virtual Switch in CloudPlatform ............................... 126
10.6.6. Removing Nexus Virtual Switch .................................................................... 127
10.6.7. Configuring a VMware Datacenter with VMware Distributed Virtual Switch ........ 127
10.7. Storage Preparation for vSphere (iSCSI only) ........................................................... 132
10.7.1. Enable iSCSI initiator for ESXi hosts ............................................................. 132
10.7.2. Add iSCSI target .......................................................................................... 132
10.7.3. Create an iSCSI datastore ............................................................................ 133
10.7.4. Multipathing for vSphere (Optional) ............................................................... 133
10.8. Add Hosts or Configure Clusters (vSphere) .............................................................. 133
11. Bare Metal Installation 135
11.1. Bare Metal Host System Requirements .................................................................... 135
11.2. About Bare Metal Kickstart Installation ..................................................................... 135
11.2.1. Limitations of Kickstart Baremetal Installation ................................................. 136
11.3. Provisioning a Bare Metal Host with Kickstart ........................................................... 136
11.3.1. Download the Software ................................................................................ 136
11.3.2. Set Up IPMI ................................................................................................ 136
11.3.3. Enable PXE on the Bare Metal Host ............................................................. 137
11.3.4. Install the PXE and DHCP Servers ............................................................... 137
11.3.5. Set Up a File Server .................................................................................... 138
11.3.6. Create a Bare Metal Image .......................................................................... 140
11.3.7. Create a Bare Metal Compute Offering ......................................................... 140
11.3.8. Create a Bare Metal Network Offering ........................................................... 141
11.3.9. Set Up the Security Group Agent (Optional) .................................................. 141
11.3.10. (Optional) Set Bare Metal Configuration Parameters ..................................... 143
11.3.11. Add a Bare Metal Zone .............................................................................. 143
11.3.12. Add a Bare Metal Cluster ........................................................................... 144
11.3.13. Add a Bare Metal Host ............................................................................... 144
11.3.14. Add the PXE Server and DHCP Server to Your Deployment ......................... 145
11.3.15. Create a Bare Metal Template .................................................................... 146
11.3.16. Provision a Bare Metal Instance .................................................................. 147
11.3.17. Test Bare Metal Installation ........................................................................ 147
11.3.18. Example CentOS 6.x Kickstart File .............................................................. 147
11.3.19. Example Fedora 17 Kickstart File ................................................................ 148
11.3.20. Example Ubuntu 12.04 Kickstart File ........................................................... 149
11.4. Using Cisco UCS as Bare Metal Host CloudPlatform ................................................ 151
11.4.1. Registering a UCS Manager ......................................................................... 151
11.4.2. Associating a Profile with a UCS Blade ......................................................... 152
11.4.3. Disassociating a Profile from a UCS Blade .................................................... 153
12. Installing Oracle VM (OVM) for CloudPlatform 155
12.1. System Requirements for OVM Hosts ...................................................................... 155
12.2. OVM Installation Overview ...................................................................................... 155
12.3. Installing OVM on the Host(s) ................................................................................. 155
12.4. Primary Storage Setup for OVM .............................................................................. 156
12.5. Set Up Host(s) for System VMs ............................................................................... 156
13. Choosing a Deployment Architecture 157
13.1. Small-Scale Deployment ......................................................................................... 157
13.2. Large-Scale Redundant Setup ................................................................................. 158
13.3. Separate Storage Network ...................................................................................... 159
13.4. Multi-Node Management Server .............................................................................. 159
13.5. Multi-Site Deployment ............................................................................................. 159
14. Network Setup 161
14.1. Basic and Advanced Networking ............................................................................. 161
14.2. VLAN Allocation Example ....................................................................................... 162
14.3. Example Hardware Configuration ............................................................................. 162
14.3.1. Dell 62xx ..................................................................................................... 162
14.3.2. Cisco 3750 .................................................................................................. 163
14.4. Layer-2 Switch ....................................................................................................... 163
14.4.1. Dell 62xx ..................................................................................................... 163
14.4.2. Cisco 3750 .................................................................................................. 164
14.5. Hardware Firewall ................................................................................................... 164
14.5.1. Generic Firewall Provisions .......................................................................... 164
14.5.2. External Guest Firewall Integration for Juniper SRX (Optional) ........................ 165
14.5.3. External Guest Firewall Integration for Cisco VNMC (Optional) ........................ 167
14.6. External Guest Load Balancer Integration (Optional) ................................................. 172
14.7. Topology Requirements .......................................................................................... 173
14.7.1. Security Requirements ................................................................................. 173
14.7.2. Runtime Internal Communications Requirements ........................................... 173
14.7.3. Storage Network Topology Requirements ...................................................... 174
14.7.4. External Firewall Topology Requirements ...................................................... 174
14.7.5. Advanced Zone Topology Requirements ....................................................... 174
14.7.6. XenServer Topology Requirements ............................................................... 174
14.7.7. VMware Topology Requirements .................................................................. 174
14.7.8. KVM Topology Requirements ....................................................................... 174
14.8. Guest Network Usage Integration for Traffic Sentinel ................................................ 174
14.9. Setting Zone VLAN and Running VM Maximums ...................................................... 175
15. Amazon Web Service Interface 177
15.1. Amazon Web Services EC2 Compatible Interface ..................................................... 177
15.2. System Requirements ............................................................................................. 177
15.3. Enabling the AWS API Compatible Interface ............................................................ 177
15.4. AWS API User Setup Steps (SOAP Only) ................................................................ 178
15.4.1. AWS API User Registration .......................................................................... 178
15.4.2. AWS API Command-Line Tools Setup .......................................................... 179
15.5. Supported AWS API Calls ....................................................................................... 179
16. Additional Installation Options 183
16.1. Installing the Usage Server (Optional) ...................................................................... 183
16.1.1. Requirements for Installing the Usage Server ................................................ 183
16.1.2. Steps to Install the Usage Server .................................................................. 183
16.2. SSL (Optional) ........................................................................................................ 183
16.3. Database Replication (Optional) .............................................................................. 184
16.3.1. Failover ....................................................................................................... 186
Chapter 1.
Getting More Information and Help

1.1. Additional Documentation Available

The following guides are available:
• Installation Guide — Covers initial installation of CloudPlatform. It aims to cover in full detail all the steps and requirements to obtain a functioning cloud deployment.
At times, this guide mentions additional topics in the context of installation tasks, but does not give full details on every topic. Additional details on many of these topics can be found in the CloudPlatform Administration Guide. For example, security groups, firewall and load balancing rules, IP address allocation, and virtual routers are covered in more detail in the Administration Guide.
• Administration Guide — Discusses how to set up services for the end users of your cloud. Also covers ongoing runtime management and maintenance. This guide discusses topics like domains, accounts, service offerings, projects, guest networks, administrator alerts, virtual machines, storage, and measuring resource usage.
• Developer's Guide — How to use the API to interact with CloudPlatform programmatically.

1.2. Citrix Knowledge Center

Troubleshooting articles by the Citrix support team are available in the Citrix Knowledge Center, at http://support.citrix.com/product/cs/.

1.3. Contacting Support

The support team is available to help customers plan and execute their installations. To contact the support team, log in to the support portal at http://support.citrix.com/cloudsupport by using the account credentials you received when you purchased your support contract.
Chapter 2.
Concepts

2.1. What Is CloudPlatform?

CloudPlatform is a software platform that pools computing resources to build public, private, and hybrid Infrastructure as a Service (IaaS) clouds. CloudPlatform manages the network, storage, and compute nodes that make up a cloud infrastructure. Use CloudPlatform to deploy, manage, and configure cloud computing environments.
Typical users are service providers and enterprises. With CloudPlatform, you can:
• Set up an on-demand, elastic cloud computing service. Service providers can sell self-service virtual machine instances, storage volumes, and networking configurations over the Internet.
• Set up an on-premise private cloud for use by employees. Rather than managing virtual machines in the same way as physical machines, with CloudPlatform an enterprise can offer self-service virtual machines to users without involving IT departments.

2.2. What Can CloudPlatform Do?

Multiple Hypervisor Support
CloudPlatform works with a variety of hypervisors. A single cloud deployment can contain multiple hypervisor implementations. You have the complete freedom to choose the right hypervisor for your workload.
CloudPlatform is designed to work with open source Xen and KVM hypervisors as well as enterprise-grade hypervisors such as Citrix XenServer, VMware vSphere, and Oracle VM (OVM).
Massively Scalable Infrastructure Management
CloudPlatform can manage tens of thousands of servers installed in multiple geographically distributed datacenters. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause a cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud.
Automatic Configuration Management
CloudPlatform automatically configures each guest virtual machine’s networking and storage settings.
CloudPlatform internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN access, console proxy, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a cloud deployment.
Graphical User Interface
CloudPlatform offers an administrator's Web interface, used for provisioning and managing the cloud, as well as an end-user's Web interface, used for running VMs and managing VM templates. The UI can be customized to reflect the desired service provider or enterprise look and feel.
API and Extensibility
CloudPlatform provides an API that gives programmatic access to all the management features available in the UI. This API enables the creation of command line tools and new user interfaces to suit particular needs.
The CloudPlatform pluggable allocation architecture allows the creation of new types of allocators for the selection of storage and hosts.
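As a rough illustration of how requests against such an API are typically authenticated, the sketch below builds a signed query string in the style used by the Apache CloudStack family of APIs: parameters are sorted by name, the query string is lowercased, and an HMAC-SHA1 digest keyed with the user's secret key is appended as the signature. The function name and parameter choices here are illustrative assumptions, not taken from this guide; consult the Developer's Guide for the authoritative request format.

```python
import base64
import hashlib
import hmac
import urllib.parse


def sign_request(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed query string for a CloudStack-style API call.

    This is a sketch of the signing scheme only: sort parameters by
    name, URL-encode the values, HMAC-SHA1 the lowercased query string
    with the secret key, and append the base64 signature.
    """
    params = dict(params, apikey=api_key)
    # Sort parameters by name and URL-encode each value.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    # The digest is computed over the lowercased query string.
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode())
    return f"{query}&signature={signature}"
```

The resulting string would be appended to the API endpoint URL; the Management Server recomputes the digest with its copy of the secret key to verify the caller.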
High Availability
CloudPlatform has a number of features to increase the availability of the system. The Management Server itself, which is the main controlling software at the heart of CloudPlatform, may be deployed in a multi-node installation where the servers are load balanced. MySQL may be configured to use replication to provide for a manual failover in the event of database loss. For the hosts, CloudPlatform supports NIC bonding and the use of separate networks for storage as well as iSCSI Multipath.

2.3. Deployment Architecture Overview

A CloudPlatform installation consists of two parts: the Management Server and the cloud infrastructure that it manages. When you set up and manage a CloudPlatform cloud, you provision resources such as hosts, storage devices, and IP addresses into the Management Server, and the Management Server manages those resources.
The minimum production installation consists of one machine running the CloudPlatform Management Server and another machine to act as the cloud infrastructure (in this case, a very simple infrastructure consisting of one host running hypervisor software). In a trial installation, a single machine can act as both the Management Server and the hypervisor host (using the KVM hypervisor).
A more full-featured installation consists of a highly-available multi-node Management Server installation and up to thousands of hosts using any of several advanced networking setups. For information about deployment options, see Chapter 13, Choosing a Deployment Architecture.

2.3.1. Management Server Overview

The Management Server is the CloudPlatform software that manages cloud resources. By interacting with the Management Server through its UI or API, you can configure and manage your cloud infrastructure.
The Management Server runs on a dedicated server or VM. It controls allocation of virtual machines to hosts and assigns storage and IP addresses to the virtual machine instances. The Management Server runs in a Tomcat container and uses a MySQL database for persistence.
The machine where the Management Server runs must meet the system requirements described in
Section 5.3, “Minimum System Requirements”.
The Management Server:
• Provides the web user interface for the administrator and a reference user interface for end users.
• Provides the APIs for CloudPlatform.
• Manages the assignment of guest VMs to particular hosts.
• Manages the assignment of public and private IP addresses to particular accounts.
• Manages the allocation of storage to guests as virtual disks.
• Manages snapshots, templates, and ISO images, possibly replicating them across data centers.
• Provides a single point of configuration for the cloud.

2.3.2. Cloud Infrastructure Overview

The Management Server manages one or more zones (typically, datacenters) containing host computers where guest virtual machines will run. The cloud infrastructure is organized as follows:
• Region: To increase reliability of the cloud, you can optionally group resources into multiple geographic regions. A region consists of one or more zones.
• Zone: Typically, a zone is equivalent to a single datacenter. A zone consists of one or more pods and secondary storage.
• Pod: A pod is usually one rack of hardware that includes a layer-2 switch and one or more clusters.
• Cluster: A cluster consists of one or more hosts and primary storage.
• Host: A single compute node within a cluster. The hosts are where the actual cloud services run in the form of guest virtual machines.
• Primary storage is associated with a cluster, and it can also be provisioned on a zone-wide basis. It stores the disk volumes for all the VMs running on hosts in that cluster.
• Secondary storage is associated with a zone, and it can also be provisioned as object storage that is available throughout the cloud. It stores templates, ISO images, and disk volume snapshots.
More Information
For more information, see Chapter 3, Cloud Infrastructure Concepts.

2.3.3. Networking Overview

CloudPlatform offers two networking scenarios:
• Basic. Provides a single network where guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
• Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing guest isolation.
For more details, see Chapter 14, Network Setup.
Chapter 3.
Cloud Infrastructure Concepts

3.1. About Regions

To increase reliability of the cloud, you can optionally group resources into multiple geographic regions. A region is the largest available organizational unit within a CloudPlatform deployment. A region is made up of several availability zones, where each zone is equivalent to a datacenter. Each region is controlled by its own cluster of Management Servers, running in one of the zones. The zones in a region are typically located in close geographical proximity. Regions are a useful technique for providing fault tolerance and disaster recovery.
By grouping zones into regions, the cloud can achieve higher availability and scalability. User accounts can span regions, so that users can deploy VMs in multiple, widely-dispersed regions. Even if one of the regions becomes unavailable, the services are still available to the end-user through VMs deployed in another region. And by grouping communities of zones under their own nearby Management Servers, the latency of communications within the cloud is reduced compared to managing widely-dispersed zones from a single central Management Server.
Usage records can also be consolidated and tracked at the region level, creating reports or invoices for each geographic region.
Regions are visible to the end user. When a user starts a guest VM on a particular CloudPlatform Management Server, the user is implicitly selecting that region for their guest. Users might also be required to copy their private templates to additional regions to enable creation of guest VMs using their templates in those regions.

3.2. About Zones

A zone is the second largest organizational unit within a CloudPlatform deployment. A zone typically corresponds to a single datacenter, although it is permissible to have multiple zones in a datacenter.
The benefit of organizing infrastructure into zones is to provide physical isolation and redundancy. For example, each zone can have its own power supply and network uplink, and the zones can be widely separated geographically (though this is not required).
A zone consists of:
• One or more pods. Each pod contains one or more clusters of hosts and one or more primary storage servers.
• (Optional) If zone-wide primary storage is desired, a zone may contain one or more primary storage servers, which are shared by all the pods in the zone. (Supported for KVM and VMware hosts)
• Secondary storage, which is shared by all the pods in the zone.
Zones are visible to the end user. When a user starts a guest VM, the user must select a zone for their guest. Users might also be required to copy their private templates to additional zones to enable creation of guest VMs using their templates in those zones.
Zones can be public or private. Public zones are visible to all users. This means that any user may create a guest in that zone. Private zones are reserved for a specific domain. Only users in that domain or its subdomains may create guests in that zone.
Hosts in the same zone are directly accessible to each other without having to go through a firewall. Hosts in different zones can access each other through statically configured VPN tunnels.
For each zone, the administrator must decide the following.
• How many pods to place in a zone.
• How many clusters to place in each pod.
• How many hosts to place in each cluster.
• (Optional) If zone-wide primary storage is being used, decide how many primary storage servers to place in each zone and total capacity for these storage servers. (Supported for KVM and VMware hosts)
• How many primary storage servers to place in each cluster and total capacity for these storage servers.
• How much secondary storage to deploy in a zone.
When you add a new zone, you will be prompted to configure the zone’s physical network and add the first pod, cluster, host, primary storage, and secondary storage.
(VMware) In order to support zone-wide functions for VMware, CloudPlatform is aware of VMware Datacenters and can map each Datacenter to a CloudPlatform zone. To enable features like storage live migration and zone-wide primary storage for VMware hosts, CloudPlatform has to make sure that a zone contains only a single VMware Datacenter. Therefore, when you are creating a new CloudPlatform zone, you can select a VMware Datacenter for the zone. If you are provisioning multiple VMware Datacenters, each one will be set up as a single zone in CloudPlatform.
Note
If you are upgrading from a previous CloudPlatform version, and your existing deployment contains a zone with clusters from multiple VMware Datacenters, that zone will not be forcibly migrated to the new model. It will continue to function as before. However, any new zone-wide operations introduced in CloudPlatform 4.2, such as zone-wide primary storage and live storage migration, will not be available in that zone.

3.3. About Pods

A pod often represents a single rack. Hosts in the same pod are in the same subnet. A pod is the third-largest organizational unit within a CloudPlatform deployment. Pods are contained within zones, and zones can be contained within regions. Each zone can contain one or more pods. A pod consists of one or more clusters of hosts and one or more primary storage servers. Pods are not visible to the end user.

3.4. About Clusters

A cluster provides a way to group hosts. To be precise, a cluster is a XenServer server pool, a set of KVM servers, a set of OVM hosts, or a VMware cluster preconfigured in vCenter. The hosts in a cluster all have identical hardware, run the same hypervisor, are on the same subnet, and access the same shared primary storage. Virtual machine instances (VMs) can be live-migrated from one host to another within the same cluster without interrupting service to the user.
A cluster is the fourth-largest organizational unit within a CloudPlatform deployment. Clusters are contained within pods, pods are contained within zones, and zones can be contained within regions. The size of a cluster is limited only by the underlying hypervisor, although CloudPlatform recommends staying below the theoretically allowed maximum cluster size in most cases.
A cluster consists of one or more hosts and one or more primary storage servers.
Even when local storage is used, clusters are still required. In this case, there is just one host per cluster.
(VMware) If you use VMware hypervisor hosts in your CloudPlatform deployment, each VMware cluster is managed by a vCenter server. The CloudPlatform administrator must register the vCenter
server with CloudPlatform. There may be multiple vCenter servers per zone. Each vCenter server may manage multiple VMware clusters.

3.5. About Hosts

A host is a single computer. Hosts provide the computing resources that run guest virtual machines. Each host has hypervisor software installed on it to manage the guest VMs. For example, a host can be a Citrix XenServer server, a Linux KVM-enabled server, or an ESXi server.
The host is the smallest organizational unit within a CloudPlatform deployment. Hosts are contained within clusters, clusters are contained within pods, pods are contained within zones, and zones can be contained within regions.
Hosts in a CloudPlatform deployment:
• Provide the CPU, memory, storage, and networking resources needed to host the virtual machines
• Interconnect using a high bandwidth TCP/IP network and connect to the Internet
• May reside in multiple data centers across different geographic locations
• May have different capacities (different CPU speeds, different amounts of RAM, etc.), although the hosts within a cluster must all be homogeneous
Additional hosts can be added at any time to provide more capacity for guest VMs. CloudPlatform automatically detects the amount of CPU and memory resources provided by the hosts. Hosts are not visible to the end user. An end user cannot determine which host their guest has been assigned to.
For a host to function in CloudPlatform, you must do the following:
• Install hypervisor software on the host
• Assign an IP address to the host
• Ensure the host is connected to the CloudPlatform Management Server.

3.6. About Primary Storage

Primary storage is associated with a cluster or (in KVM and VMware) a zone, and it stores the disk volumes for all the VMs running on hosts.
You can add multiple primary storage servers to a cluster or zone. At least one is required. It is typically located close to the hosts for increased performance. CloudPlatform manages the allocation of guest virtual disks to particular primary storage devices.
It is useful to set up zone-wide primary storage when you want to avoid extra data copy operations. With cluster-based primary storage, data in the primary storage is directly available only to VMs within that cluster. If a VM in a different cluster needs some of the data, it must be copied from one cluster to another, using the zone's secondary storage as an intermediate step. This operation can be unnecessarily time-consuming.
CloudPlatform is designed to work with all standards-compliant iSCSI and NFS servers that are supported by the underlying hypervisor, including, for example:
• Dell EqualLogic™ for iSCSI
• Network Appliance filers for NFS and iSCSI
• Scale Computing for NFS
If you intend to use only local disk for your installation, you can skip adding separate primary storage.

3.7. About Secondary Storage

Secondary storage stores the following:
• Templates — OS images that can be used to boot VMs and can include additional configuration information, such as installed applications
• ISO images — disc images containing data or bootable media for operating systems
• Disk volume snapshots — saved copies of VM data which can be used for data recovery or to create new templates
The items in secondary storage are available to all hosts in the scope of the secondary storage, which may be defined as per zone or per region.
To make items in secondary storage available to all hosts throughout the cloud, you can add object storage in addition to the zone-based NFS Secondary Staging Store. It is not necessary to copy templates and snapshots from one zone to another, as would be required when using zone NFS alone. Everything is available everywhere.
Object storage is provided through third-party software such as Amazon Simple Storage Service (S3) or any other object storage that supports the S3 interface. Additional third party object storages can be integrated with CloudPlatform by writing plugin software that uses the object storage plugin capability.
CloudPlatform includes ready-made plugins, built using this storage plugin capability, for OpenStack Object Storage (Swift, http://swift.openstack.org) and Amazon Simple Storage Service (S3). The S3 plugin can be used for any object storage that supports the Amazon S3 interface. When using one of these storage plugins, you configure Swift or S3 storage for the entire CloudPlatform installation, then set up the NFS Secondary Staging Store for each zone. The NFS storage in each zone acts as a staging area through which all templates and other secondary storage data pass before being forwarded to Swift or S3. The backing object storage acts as a cloud-wide resource, making templates and other data available to any zone in the cloud.
There is no hierarchy in the Swift storage, just one Swift container per storage object. Any secondary storage in the whole cloud can pull a container from Swift as needed.

3.8. About Physical Networks

Part of adding a zone is setting up the physical network. One or (in an advanced zone) more physical networks can be associated with each zone. The network corresponds to a NIC on the hypervisor host. Each physical network can carry one or more types of network traffic. The choices of traffic
type for each network vary depending on whether you are creating a zone with basic networking or advanced networking.
A physical network is the actual network hardware and wiring in a zone. A zone can have multiple physical networks. An administrator can:
• Add/Remove/Update physical networks in a zone
• Configure VLANs on the physical network
• Configure a name so the network can be recognized by hypervisors
• Configure the service providers (firewalls, load balancers, etc.) available on a physical network
• Configure the IP addresses trunked to a physical network
• Specify what type of traffic is carried on the physical network, as well as other properties like network speed

3.8.1. Basic Zone Network Traffic Types

When basic networking is used, there can be only one physical network in the zone. That physical network carries the following traffic types:
• Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. Each pod in a basic zone is a broadcast domain, and therefore each pod has a different IP range for the guest network. The administrator must configure the IP range for each pod.
• Management. When CloudPlatform’s internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by CloudPlatform to perform various tasks in the cloud), and any other component that communicates directly with the CloudPlatform Management Server. You must configure the IP range for the system VMs to use.
Note
We strongly recommend the use of separate NICs for management traffic and guest traffic.
• Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudPlatform UI to acquire these IPs to implement NAT between their guest network and the public network, as described in Acquiring a New IP Address. Public traffic is generated only in EIP-enabled basic zones. For information on Elastic IP, see About Elastic IP in the Administration Guide.
• Storage. Traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. CloudPlatform uses a separate Network Interface Controller (NIC) named storage NIC for storage network traffic. Use of a storage NIC that always operates on a high bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
In a basic network, configuring the physical network is fairly straightforward. In most cases, you only need to configure one guest network to carry traffic that is generated by guest VMs. If you use a NetScaler load balancer and enable its elastic IP and elastic load balancing (EIP and ELB) features,
you must also configure a network to carry public traffic. CloudPlatform takes care of presenting the necessary network configuration steps to you in the UI when you add a new zone.

3.8.2. Basic Zone Guest IP Addresses

When basic networking is used, CloudPlatform will assign IP addresses in the CIDR of the pod to the guests in that pod. The administrator must add a direct IP range on the pod for this purpose. These IPs are in the same VLAN as the hosts.

3.8.3. Advanced Zone Network Traffic Types

When advanced networking is used, there can be multiple physical networks in the zone. Each physical network can carry one or more traffic types, and you need to let CloudPlatform know which type of network traffic you want each network to carry. The traffic types in an advanced zone are:
• Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. This network can be isolated or shared. In an isolated guest network, the administrator needs to reserve VLAN ranges to provide isolation for each CloudPlatform account’s network (potentially a large number of VLANs). In a shared guest network, all guest VMs share a single network.
• Management. When CloudPlatform’s internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by CloudPlatform to perform various tasks in the cloud), and any other component that communicates directly with the CloudPlatform Management Server. You must configure the IP range for the system VMs to use.
• Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudPlatform UI to acquire these IPs to implement NAT between their guest network and the public network, as described in “Acquiring a New IP Address” in the Administration Guide.
• Storage. Traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. CloudPlatform uses a separate Network Interface Controller (NIC) named storage NIC for storage network traffic. Use of a storage NIC that always operates on a high bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
These traffic types can each be on a separate physical network, or they can be combined with certain restrictions. When you use the Add Zone wizard in the UI to create a new zone, you are guided into making only valid choices.

3.8.4. Advanced Zone Guest IP Addresses

When advanced networking is used, the administrator can create additional networks for use by the guests. These networks can span the zone and be available to all accounts, or they can be scoped to a single account, in which case only the named account may create guests that attach to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired. Additionally, the administrator can reserve a part of the IP address space for non-CloudPlatform VMs and servers (see IP Reservation in Isolated Guest Networks in the Administrator's Guide).

3.8.5. Advanced Zone Public IP Addresses

When advanced networking is used, the administrator can create additional networks for use by the guests. These networks can span the zone and be available to all accounts, or they can be scoped to a single account, in which case only the named account may create guests that attach to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired.

3.8.6. System Reserved IP Addresses

In each zone, you need to configure a range of reserved IP addresses for the management network. This network carries communication between the CloudPlatform Management Server and various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP servers.
The reserved IP addresses must be unique across the cloud. You cannot, for example, have a host in one zone which has the same private IP address as a host in another zone.
The hosts in a pod are assigned private IP addresses. These are typically RFC1918 addresses. The Console Proxy and Secondary Storage system VMs are also allocated private IP addresses in the CIDR of the pod that they are created in.
Make sure computing servers and Management Servers use IP addresses outside of the System Reserved IP range. For example, suppose the System Reserved IP range starts at 192.168.154.2 and ends at 192.168.154.7. CloudPlatform can use .2 to .7 for System VMs. This leaves the rest of the pod CIDR, from .8 to .254, for the Management Server and hypervisor hosts.
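The arithmetic in the example above can be double-checked with a quick shell calculation (the .2, .7, and .254 values are the sample figures from this section):

```shell
#!/bin/sh
# Sample System Reserved IP range from the text: 192.168.154.2 - 192.168.154.7
RESERVED_START=2
RESERVED_END=7
POD_LAST=254

RESERVED=$((RESERVED_END - RESERVED_START + 1))   # addresses reserved for System VMs
REMAINING=$((POD_LAST - RESERVED_END))            # addresses left for hosts and Management Servers

echo "System VMs: $RESERVED addresses (.$RESERVED_START to .$RESERVED_END)"
echo "Hosts and Management Servers: $REMAINING addresses (.$((RESERVED_END + 1)) to .$POD_LAST)"
```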
In all zones:
Provide private IPs for the system in each pod and provision them in CloudPlatform. For KVM and XenServer, the recommended number of private IPs per pod is one per host. If you expect a pod to grow, add enough private IPs now to accommodate the growth.
In a zone that uses advanced networking:
When advanced networking is being used, the number of private IP addresses available in each pod varies depending on which hypervisor is running on the nodes in that pod. Citrix XenServer and KVM use link-local addresses, which in theory provide more than 65,000 private IP addresses within the address block. As the pod grows over time, this should be more than enough for any reasonable number of hosts as well as IP addresses for guest virtual routers. VMware ESXi, by contrast, uses any administrator-specified subnetting scheme, and the typical administrator provides only 255 IPs per pod. Since these are shared by physical machines, the guest virtual router, and other entities, it is possible to run out of private IPs when scaling up a pod whose nodes are running ESXi.
To ensure adequate headroom to scale private IP space in an ESXi pod that uses advanced networking, use one or more of the following techniques:
• Specify a larger CIDR block for the subnet. A subnet mask with a /20 suffix will provide more than 4,000 IP addresses.
• Create multiple pods, each with its own subnet. For example, if you create 10 pods and each pod has 255 IPs, this will provide 2,550 IP addresses.
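The two sizing claims above can be verified with simple arithmetic (a /20 subnet mask, and 10 pods of 255 IPs each):

```shell
#!/bin/sh
# An IPv4 subnet with a /20 mask contains 2^(32-20) addresses.
PREFIX=20
SUBNET_SIZE=$((1 << (32 - PREFIX)))
echo "/$PREFIX subnet: $SUBNET_SIZE addresses"

# Ten pods with 255 IPs apiece:
PODS=10
IPS_PER_POD=255
echo "Multiple pods: $((PODS * IPS_PER_POD)) addresses"
```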
For vSphere with advanced networking, we recommend provisioning enough private IPs for your total number of customers, plus enough for the required CloudPlatform System VMs. Typically, about 10 additional IPs are required for the System VMs. For more information about System VMs, see Working with System Virtual Machines in the Administrator's Guide.
Chapter 4.
Upgrade Instructions

4.1. Upgrade from 3.0.x to 4.2

Perform the following to upgrade from version 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, or 3.0.7 to version 4.2.
1. If you are upgrading from 3.0.0 or 3.0.1, ensure that you query your IP address usage records and process them; for example, issue invoices for any usage that you have not yet billed users for.
Starting in 3.0.2, the usage record format for IP addresses is the same as the rest of the usage types. Instead of a single record with the assignment and release dates, separate records are generated per aggregation period with start and end dates. After upgrading, any existing IP address usage records in the old format will no longer be available.
2. While running the 3.0.x system, log in to the UI as root administrator.
3. Using the UI, add a new System VM template for each hypervisor type that is used in your cloud.
In each zone, add a system VM template for each hypervisor used in that zone.
Note
You might notice that the size of the system VM template has increased compared to previous CloudPlatform versions. This is because the new version of the underlying Debian template has an increased disk size.
a. In the left navigation bar, click Templates.
b. In Select view, click Templates.
c. Click Register template.
The Register template dialog box is displayed.
d. In the Register template dialog box, specify the following values depending on the hypervisor type (do not change these):
Hypervisor: XenServer
• Name: systemvm-xenserver-4.2
• Description: systemvm-xenserver-4.2
• URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
• Zone: Choose the zone where this hypervisor is used. If your CloudPlatform deployment includes multiple zones running XenServer, choose All Zones to make the template available in all the XenServer zones.
• Hypervisor: XenServer
• Format: VHD
• OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian release number available in the dropdown)
• Extractable: no
• Password Enabled: no
• Public: no
• Featured: no

Hypervisor: KVM
• Name: systemvm-kvm-4.2
• Description: systemvm-kvm-4.2
• URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
• Zone: Choose the zone where this hypervisor is used. If your CloudPlatform deployment includes multiple zones running KVM, choose All Zones to make the template available in all the KVM zones.
• Hypervisor: KVM
• Format: QCOW2
• OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian release number available in the dropdown)
• Extractable: no
• Password Enabled: no
• Public: no
• Featured: no

Hypervisor: VMware
• Name: systemvm-vmware-4.2
• Description: systemvm-vmware-4.2
• URL: http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova
• Zone: Choose the zone where this hypervisor is used. If your CloudPlatform deployment includes multiple zones running VMware, choose All Zones to make the template available in all the VMware zones.
• Hypervisor: VMware
• Format: OVA
• OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian release number available in the dropdown)
• Extractable: no
• Password Enabled: no
• Public: no
• Featured: no
e. Watch the screen to be sure that the template downloads successfully and enters the READY state. Do not proceed until this is successful.
f. If you use more than one type of hypervisor in your cloud, repeat these steps to download the system VM template for each hypervisor type.
Warning
If you do not repeat the steps for each hypervisor type, the upgrade will fail.
4. (KVM on RHEL 6.0/6.1 only) If your existing CloudPlatform deployment includes one or more clusters of KVM hosts running RHEL 6.0 or RHEL 6.1, you must first upgrade the operating system version on those hosts before upgrading CloudPlatform itself.
Run the following commands on every KVM host.
a. Download the CloudPlatform 4.2.0 RHEL 6.3 binaries from https://www.citrix.com/English/ss/downloads/.
b. Extract the binaries:
# cd /root
# tar xvf CloudPlatform-4.2.0-1-rhel6.3.tar.gz
c. Create a CloudPlatform 4.2 qemu repo:
# cd CloudPlatform-4.2.0-1-rhel6.3/6.3
# createrepo .
d. Prepare the yum repo for upgrade. Edit the file /etc/yum.repos.d/rhel63.repo. For example:
[upgrade]
name=rhel63
baseurl=url-of-your-rhel6.3-repo
enabled=1
gpgcheck=0

[cloudstack]
name=cloudstack
baseurl=file:///root/CloudPlatform-4.2.0-1-rhel6.3/6.3
enabled=1
gpgcheck=0
e. Upgrade the host operating system from RHEL 6.0 to 6.3:
# yum upgrade
5. Stop all Usage Servers if running. Run this on all Usage Server hosts.
# service cloud-usage stop
6. Stop the Management Servers. Run this on all Management Server hosts.
# service cloud-management stop
7. On the MySQL master, take a backup of the MySQL databases. We recommend performing this step even in test upgrades. If there is an issue, this will assist with debugging.
In the following commands, it is assumed that you have set the root password on the database, which is a CloudPlatform recommended best practice. Substitute your own MySQL root password.
# mysqldump -u root -p<mysql_password> cloud > cloud-backup.dmp
# mysqldump -u root -p<mysql_password> cloud_usage > cloud-usage-backup.dmp
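As an optional sanity check before proceeding, you can confirm the dumps are not empty or truncated. This is a sketch, not part of the official procedure; the file names match the backup commands above, and any complete mysqldump output contains CREATE TABLE statements:

```shell
#!/bin/sh
# Optional sanity check: a complete mysqldump output contains CREATE TABLE
# statements, so an empty or truncated dump is easy to spot.
for f in cloud-backup.dmp cloud-usage-backup.dmp; do
  if [ -s "$f" ] && grep -q 'CREATE TABLE' "$f"; then
    echo "$f looks complete"
  else
    echo "WARNING: $f is missing or incomplete -- re-run mysqldump" >&2
  fi
done
```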
8. (RHEL/CentOS 5.x) If you are currently running CloudPlatform on RHEL/CentOS 5.x, use the following command to set up an Extra Packages for Enterprise Linux (EPEL) repo:
rpm -Uvh http://mirror.pnl.gov/epel/5/i386/epel-release-5-4.noarch.rpm
9. Download CloudPlatform 4.2 onto the management server host where it will run. Get the software from the following link:
https://www.citrix.com/English/ss/downloads/.
You need a My Citrix account (http://www.citrix.com/lang/English/publicindex.asp?destURL=%2FEnglish%2FmyCitrix%2Findex.asp%3F#).
10. Upgrade the CloudPlatform packages. You should have a file in the form of “CloudPlatform-4.2-N-OSVERSION.tar.gz”. Untar the file, then run the install.sh script inside it. Replace the file and directory names below with those you are using:
# tar xzf CloudPlatform-4.2-N-OSVERSION.tar.gz
# cd CloudPlatform-4.2-N-OSVERSION
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
11. Choose "U" to upgrade the package.
>U
You should see some output as the upgrade proceeds, ending with a message like "Complete! Done."
12. If you have made changes to your existing copy of the configuration files components.xml, db.properties, or server.xml in your previous-version CloudPlatform installation, the changes will be preserved in the upgrade. However, you need to do the following steps to place these changes in a new version of the file which is compatible with version 4.2.
Note
How will you know whether you need to do this? If the upgrade output in the previous step included a message like the following, then some custom content was found in your old file, and you need to merge the two files:
warning: /etc/cloud.rpmsave/management/components.xml created as /etc/cloudstack/management/components.xml.rpmnew
a. Make a backup copy of your previous version file. For example (substitute the file name components.xml, db.properties, or server.xml in these commands as needed):
# mv /etc/cloudstack/management/components.xml /etc/cloudstack/management/ components.xml-backup
b. Copy the *.rpmnew file to create a new file. For example:
# cp -ap /etc/cloudstack/management/components.xml.rpmnew /etc/cloudstack/management/components.xml
c. Merge your changes from the backup file into the new file. For example:
# vi /etc/cloudstack/management/components.xml
13. Repeat steps 8 - 12 on each management server node.
14. Start the first Management Server. Do not start any other Management Server nodes yet.
# service cloudstack-management start
Wait until the databases are upgraded. Ensure that the database upgrade is complete. After confirmation, start the other Management Servers one at a time by running the same command on each node.
Note
If the Management Server fails to start, there was a problem with the upgrade. A clean start indicates that the upgrade completed successfully.
15. Start all Usage Servers (if they were running on your previous version). Perform this on each Usage Server host.
# service cloudstack-usage start
Note
After upgrade from 3.0.4 to 4.2, if the usage server fails to restart then copy db.properties from /etc/cloudstack/management to /etc/cloudstack/usage. Then start the Usage Server.
16. (VMware only) If you are upgrading from 3.0.6 or beyond and you have existing clusters, additional steps are required to update the existing vCenter password for each VMware cluster.
These steps will not affect running guests in the cloud. These steps are required only for clouds using VMware clusters:
a. Stop the Management Server:
service cloudstack-management stop
b. Perform the following on each VMware cluster:
i. Encrypt the vCenter password:
java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh input=<_your_vCenter_password_> password="`cat /etc/cloudstack/management/key`" verbose=false
Save the output from this step for later use. You need to add this in the cluster_details and vmware_data_center tables in place of the existing password.
ii. Find the ID of the cluster from the cluster_details table:
mysql -u <username> -p<password>
select * from cloud.cluster_details;
iii. Update the existing password with the encrypted one:
update cloud.cluster_details set value = <_ciphertext_from_step_i_> where id = <_id_from_step_ii_>;
iv. Confirm that the table is updated:
select * from cloud.cluster_details;
v. Find the ID of the VMware data center that you want to work with:
select * from cloud.vmware_data_center;
vi. Change the existing password to the encrypted one:
update cloud.vmware_data_center set password = <_ciphertext_from_step_i_> where id = <_id_from_step_v_>;
vii. Confirm that the table is updated:
select * from cloud.vmware_data_center;
c. Start the CloudPlatform Management Server:
service cloudstack-management start
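Steps i-vii above can be error-prone when typed by hand. As a hedged sketch (the CIPHERTEXT, CLUSTER_ID, and DC_ID variables are placeholders; the table and column names are the ones used in the steps above), you can generate the two UPDATE statements for review before pasting them into mysql:

```shell
# Sketch: emit the SQL from steps iii and vi with your values substituted.
# Fill in the placeholders from the jasypt output and the two SELECT queries.
CIPHERTEXT='<ciphertext_from_step_i>'
CLUSTER_ID='<id_from_step_ii>'
DC_ID='<id_from_step_v>'
cat <<EOF
update cloud.cluster_details set value = '$CIPHERTEXT' where id = $CLUSTER_ID;
update cloud.vmware_data_center set password = '$CIPHERTEXT' where id = $DC_ID;
EOF
```

Review the printed statements, then run them in the mysql client as in steps iii and vi.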
17. (KVM only) Additional steps are required for each KVM host. These steps will not affect running guests in the cloud. These steps are required only for clouds using KVM as hosts and only on the KVM hosts.
Note
After the software upgrade on a KVM machine, the Ctrl+Alt+Del button on the console view of a VM doesn't work. Use Ctrl+Alt+Insert to log in to the console of the VM.
a. Copy the CloudPlatform 4.2 .tgz download to the host, untar it, and cd into the resulting directory.
b. Stop the running agent.
# service cloud-agent stop
c. Update the agent software.
# ./install.sh
d. Choose "U" to update the packages.
e. Edit /etc/cloudstack/agent/agent.properties to change the resource parameter from com.cloud.agent.resource.computing.LibvirtComputingResource to com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.
f. Upgrade all the existing bridge names to new bridge names by running this script:
# cloudstack-agent-upgrade
g. Install a libvirt hook with the following commands:
# mkdir /etc/libvirt/hooks
# cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
# chmod +x /etc/libvirt/hooks/qemu
h. Restart libvirtd.
# service libvirtd restart
i. Start the agent.
# service cloudstack-agent start
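Step e above is a single-line substitution in agent.properties. As a sketch (the helper name is mine; keep the .bak backup that sed creates until the agent is confirmed working), it can be done with sed rather than a manual edit:

```shell
# Sketch: rewrite the resource class name in agent.properties (step e).
# Creates a .bak backup alongside the edited file. Requires GNU sed.
fix_agent_resource() {
  sed -i.bak \
    's/com\.cloud\.agent\.resource\.computing\.LibvirtComputingResource/com.cloud.hypervisor.kvm.resource.LibvirtComputingResource/' \
    "$1"
}
```

Usage: fix_agent_resource /etc/cloudstack/agent/agent.properties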
18. Log in to the CloudPlatform UI as administrator, and check the status of the hosts. All hosts should come to Up state (except those that you know to be offline). You may need to wait 20 or 30 minutes, depending on the number of hosts.
Note
Troubleshooting: If login fails, clear your browser cache and reload the page.
Do not proceed to the next step until the hosts show in Up state. If the hosts do not come to the Up state, contact support.
19. If you are upgrading from 3.0.1 or 3.0.2, perform the following:
a. Ensure that the admin port is set to 8096 by using the "integration.api.port" global parameter.
This port is used by the cloudstack-sysvmadm script later in the upgrade procedure. For information about how to set this parameter, see “Setting Configuration Parameters” in the Installation Guide.
b. Restart the Management Server.
Note
If you don't want the admin port to remain open, you can set it to null after the upgrade is done and restart the Management Server.
20. Run the following script to stop, then start, all System VMs including Secondary Storage VMs, Console Proxy VMs, and virtual routers.
a. Run the script once on one management server. Substitute your own IP address of the MySQL instance, the MySQL user to connect as, and the password to use for that user. In addition to those parameters, provide the "-a" argument. For example:
# nohup cloudstack-sysvmadm -d 192.168.1.5 -u cloud -p password -a > sysvm.log 2>&1 &
This might take up to an hour or more to run, depending on the number of accounts in the system.
b. After the script terminates, check the log to verify correct execution:
# tail -f sysvm.log
The content should be like the following:
Stopping and starting 1 secondary storage vm(s)...
Done stopping and starting secondary storage vm(s)
Stopping and starting 1 console proxy vm(s)...
Done stopping and starting console proxy vm(s).
Stopping and starting 4 running routing vm(s)...
Done restarting router(s).
c. If you would like additional confirmation that the new system VM templates were correctly applied when these system VMs were rebooted, SSH into the System VM and check the version.
Use one of the following techniques, depending on the hypervisor.
XenServer or KVM:
SSH in by using the link local IP address of the system VM. For example, in the command below, substitute your own path to the private key used to log in to the system VM and your own link local IP.
Run the following commands on the XenServer or KVM host on which the system VM is present:
# ssh -i /root/.ssh/id_rsa.cloud <link-local-ip> -p 3922
# cat /etc/cloudstack-release
The output should be like the following:
Cloudstack Release 4.2 Mon Aug 12 15:10:04 PST 2013
ESXi:
SSH in using the private IP address of the system VM. For example, in the command below, substitute your own path to the private key used to log in to the system VM and your own private IP.
Run the following commands on the Management Server:
# ssh -i /var/cloudstack/management/.ssh/id_rsa <private-ip> -p 3922
# cat /etc/cloudstack-release
The output should be like the following:
Cloudstack Release 4.2 Mon Sep 24 15:10:04 PST 2012
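The log check in step b can be automated. This is a sketch only (the function name is mine); it counts the "Done" markers shown in the sample output and flags any line containing "error":

```shell
# Sketch: scan sysvm.log for the success markers shown above.
check_sysvm_log() {
  if grep -qi 'error' "$1"; then
    echo "FAILED: errors found in $1"
    return 1
  fi
  echo "OK: $(grep -c '^Done' "$1") 'Done' marker(s) in $1"
}
```

Usage: check_sysvm_log sysvm.log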
21. If you want to close the admin port again (recommended in production systems), set integration.api.port to null. Then restart the Management Server. For information about how to set integration.api.port, see Section 5.5, “Setting Configuration Parameters”.
22. (XenServer only) If needed, upgrade all Citrix XenServer hypervisor hosts in your cloud to a version supported by CloudPlatform 4.2 and apply any required hotfixes. Instructions for upgrading XenServer software and applying hotfixes can be found in Section 4.4, “Upgrading and Hotfixing XenServer Hypervisor Hosts”.
23. (VMware only) After upgrade, if you want to change a Standard vSwitch zone to a VMware dvSwitch zone, perform the following:
a. Ensure that the Public and Guest traffic is not on the same network as the Management and Storage traffic.
b. Set vmware.use.dvswitch to true.
c. Access the physical network for the Public and Guest traffic, then change the traffic labels as given below:
<dvSwitch name>,<VLANID>,<Switch Type>
For example: dvSwitch18,,vmwaredvs
VLANID is optional.
d. Stop the Management Server.
e. Start the Management Server.
f. Add the new VMware dvSwitch-enabled cluster to this zone.
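A traffic label can be sanity-checked locally before the Management Server is restarted. This is a minimal sketch (the function name is mine, and it only recognizes the vmwaredvs switch type shown in the example above):

```shell
# Sketch: check that a label looks like <dvSwitch name>,<VLANID>,vmwaredvs,
# where VLANID may be empty (e.g. "dvSwitch18,,vmwaredvs").
valid_dvswitch_label() {
  case "$1" in
    ?*,*,vmwaredvs) return 0 ;;
    *) return 1 ;;
  esac
}
```

Usage: valid_dvswitch_label 'dvSwitch18,,vmwaredvs' && echo "label looks OK"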
24. (VMware only) If your existing cloud includes any deployed data centers, you should set the global configuration setting vmware.create.full.clone to false. Then restart the Management Server. For information about how to set vmware.create.full.clone, see Section 5.5, “Setting Configuration Parameters”. For information about how CloudPlatform supports full and linked clones, see “Configuring Usage of Linked Clones on VMware” in the CloudPlatform Administration Guide.
Note
Troubleshooting tip: If passwords which you know to be valid appear not to work after upgrade, or other UI issues are seen, try clearing your browser cache and reloading the UI page.
Note
(VMware only) After upgrade, whenever you add a new VMware cluster to a zone that was created with a previous version of CloudPlatform, the fields vCenter host, vCenter Username, vCenter Password, and vCenter Datacenter are required. The Add Cluster dialog in the CloudPlatform user interface incorrectly shows them as optional, and will allow you to proceed with adding the cluster even though these important fields are blank. If you do not provide the values, you will see an error message like "Your host and/or path is wrong. Make sure it's of the format http://hostname/path".

4.2. Upgrade from 2.2.x to 4.2

1. Ensure that you query your IP address usage records and process them; for example, issue invoices for any usage that you have not yet billed users for.
Starting in 3.0.2, the usage record format for IP addresses is the same as the rest of the usage types. Instead of a single record with the assignment and release dates, separate records are generated per aggregation period with start and end dates. After upgrading to 4.2, any existing IP address usage records in the old format will no longer be available.
2. If you are using version 2.2.0 - 2.2.13, first upgrade to 2.2.14 by using the instructions in the 2.2.14 Release Notes (http://download.cloud.com/releases/2.2.0/CloudStack2.2.14ReleaseNotes.pdf).
Note
(KVM only) If KVM hypervisor is used in your cloud, be sure you completed the step to insert a valid username and password into the host_details table on each KVM node as described in the 2.2.14 Release Notes. This step is critical, as the database will be encrypted after the upgrade to 4.2.
3. While running the 2.2.x system (which by this step should be at version 2.2.14 or greater), log in to the UI as root administrator.
4. Using the UI, add a new System VM template for each hypervisor type that is used in your cloud. In each zone, add a system VM template for each hypervisor used in that zone.
Note
You might notice that the size of the system VM template has increased compared to previous CloudPlatform versions. This is because the new version of the underlying Debian template has an increased disk size.
a. In the left navigation bar, click Templates.
b. In Select view, click Templates.
c. Click Register template.
The Register template dialog box is displayed.
d. In the Register template dialog box, specify the following values depending on the hypervisor type (do not change these):
XenServer
Name: systemvm-xenserver-4.2
Description: systemvm-xenserver-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
Zone: Choose the zone where this hypervisor is used. If your CloudPlatform deployment includes multiple zones running XenServer, choose All Zones to make the template available in all the XenServer zones.
Hypervisor: XenServer
Format: VHD
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no

KVM
Name: systemvm-kvm-4.2
Description: systemvm-kvm-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
Zone: Choose the zone where this hypervisor is used. If your CloudPlatform deployment includes multiple zones running KVM, choose All Zones to make the template available in all the KVM zones.
Hypervisor: KVM
Format: QCOW2
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no

VMware
Name: systemvm-vmware-4.2
Description: systemvm-vmware-4.2
URL: http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova
Zone: Choose the zone where this hypervisor is used. If your CloudPlatform deployment includes multiple zones running VMware, choose All Zones to make the template available in all the VMware zones.
Hypervisor: VMware
Format: OVA
OS Type: Debian GNU/Linux 7.0 (32-bit) (or the highest Debian release number available in the dropdown)
Extractable: no
Password Enabled: no
Public: no
Featured: no
e. Watch the screen to be sure that the template downloads successfully and enters the READY state. Do not proceed until this is successful.
f. If you use more than one type of hypervisor in your cloud, repeat these steps to download the system VM template for each hypervisor type.
Warning
If you do not repeat the steps for each hypervisor type, the upgrade will fail.
5. (KVM on RHEL 6.0, 6.1) If your existing CloudPlatform deployment includes one or more clusters of KVM hosts running RHEL 6.0 or RHEL 6.1, you must first upgrade the operating system version on those hosts before upgrading CloudPlatform itself.
Run the following commands on every KVM host.
a. Download the CloudPlatform 4.2.0 RHEL 6.3 binaries from https://www.citrix.com/English/ss/downloads/.
b. Extract the binaries:
# cd /root
# tar xvf CloudPlatform-4.2.0-1-rhel6.3.tar.gz
c. Create a CloudPlatform 4.2 qemu repo:
# cd CloudPlatform-4.2.0-1-rhel6.3/6.3
# createrepo .
d. Prepare the yum repo for upgrade. Edit the file /etc/yum.repos.d/rhel63.repo. For example:
[upgrade]
name=rhel63
baseurl=url-of-your-rhel6.3-repo
enabled=1
gpgcheck=0
[cloudstack]
name=cloudstack
baseurl=file:///root/CloudPlatform-4.2.0-1-rhel6.3/6.3
enabled=1
gpgcheck=0
e. Upgrade the host operating system from RHEL 6.0 or 6.1 to 6.3:
yum upgrade
6. Stop all Usage Servers if running. Run this on all Usage Server hosts.
# service cloud-usage stop
7. Stop the Management Servers. Run this on all Management Server hosts.
# service cloud-management stop
8. On the MySQL master, take a backup of the MySQL databases. We recommend performing this step even in test upgrades. If there is an issue, this will assist with debugging.
In the following commands, it is assumed that you have set the root password on the database, which is a CloudPlatform recommended best practice. Substitute your own MySQL root password.
# mysqldump -u root -p<mysql_password> cloud > cloud-backup.dmp
# mysqldump -u root -p<mysql_password> cloud_usage > cloud-usage-backup.dmp
9. (RHEL/CentOS 5.x) If you are currently running CloudPlatform on RHEL/CentOS 5.x, use the following command to set up an Extra Packages for Enterprise Linux (EPEL) repo:
rpm -Uvh http://mirror.pnl.gov/epel/5/i386/epel-release-5-4.noarch.rpm
10. Download CloudPlatform 4.2 onto the management server host where it will run. Get the software from the following link:
https://www.citrix.com/English/ss/downloads/
You need a My Citrix Account (http://www.citrix.com/lang/English/publicindex.asp?destURL=%2FEnglish%2FmyCitrix%2Findex.asp%3F#).
11. Upgrade the CloudPlatform packages. You should have a file in the form of “CloudPlatform-4.2­N-OSVERSION.tar.gz”. Untar the file, then run the install.sh script inside it. Replace the file and directory names below with those you are using:
# tar xzf CloudPlatform-4.2-N-OSVERSION.tar.gz
# cd CloudPlatform-4.2-N-OSVERSION
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
12. Choose "U" to upgrade the package.
> U
13. If you have made changes to your existing copy of the configuration files components.xml, db.properties, or server.xml in your previous-version CloudPlatform installation, the changes will be preserved in the upgrade. However, you need to do the following steps to place these changes in a new version of the file which is compatible with version 4.2.
Note
How will you know whether you need to do this? If the upgrade output in the previous step included a message like the following, then some custom content was found in your old file, and you need to merge the two files:
warning: /etc/cloud.rpmsave/management/components.xml created as /etc/cloudstack/management/components.xml.rpmnew
a. Make a backup copy of your previous version file. For example (substitute the file name components.xml, db.properties, or server.xml in these commands as needed):
# mv /etc/cloudstack/management/components.xml /etc/cloudstack/management/components.xml-backup
b. Copy the *.rpmnew file to create a new file. For example:
# cp -ap /etc/cloudstack/management/components.xml.rpmnew /etc/cloudstack/management/components.xml
c. Merge your changes from the backup file into the new file. For example:
# vi /etc/cloudstack/management/components.xml
14. On the management server node, run the following command. It is recommended that you use the command-line flags to provide your own encryption keys. See Password and Key Encryption in the Installation Guide.
# cloudstack-setup-encryption -e <encryption_type> -m <management_server_key> -k <database_key>
When an argument is omitted, its default value is used, as described below:
• (Optional) For encryption_type, use file or web to indicate the technique used to pass in the database encryption password. Default: file.
• (Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the properties file. Default: password. It is highly recommended that you replace this with a more secure value.
• (Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the CloudPlatform database. Default: password. It is highly recommended that you replace this with a more secure value.
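If you want something stronger than the default "password" keys, you can generate random hex keys before running cloudstack-setup-encryption. A sketch (od and /dev/urandom are assumed available, as on any RHEL host; the variable names are mine):

```shell
# Sketch: generate two 32-character hex keys for the -m and -k flags.
MS_KEY=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
DB_KEY=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
echo "cloudstack-setup-encryption -e file -m $MS_KEY -k $DB_KEY"
```

Record both keys somewhere safe; you must reuse the same keys on every Management Server node.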
15. Repeat steps 9 - 14 on every management server node. If you provided your own encryption key in step 14, use the same key on all other management servers.
16. Start the first Management Server. Do not start any other Management Server nodes yet.
# service cloudstack-management start
Wait until the databases are upgraded. Ensure that the database upgrade is complete. After confirmation, start the other Management Servers one at a time by running the same command on each node.
17. Start all Usage Servers (if they were running on your previous version). Perform this on each Usage Server host.
# service cloudstack-usage start
18. (KVM only) Additional steps are required for each KVM host. These steps will not affect running guests in the cloud. These steps are required only for clouds using KVM as hosts and only on the KVM hosts.
Note
After the software upgrade on a KVM machine, the Ctrl+Alt+Del button on the console view of a VM doesn't work. Use Ctrl+Alt+Insert to log in to the console of the VM.
a. Copy the CloudPlatform 4.2 .tgz download to the host, untar it, and cd into the resulting directory.
b. Stop the running agent.
# service cloud-agent stop
c. Update the agent software.
# ./install.sh
d. Choose "U" to update the packages.
e. Edit /etc/cloudstack/agent/agent.properties to change the resource parameter from com.cloud.agent.resource.computing.LibvirtComputingResource to com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.
f. Upgrade all the existing bridge names to new bridge names by running this script:
# cloudstack-agent-upgrade
g. Install a libvirt hook with the following commands:
# mkdir /etc/libvirt/hooks
# cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
# chmod +x /etc/libvirt/hooks/qemu
h. Restart libvirtd.
# service libvirtd restart
i. Start the agent.
# service cloudstack-agent start
19. Log in to the CloudPlatform UI as admin, and check the status of the hosts. All hosts should come to Up state (except those that you know to be offline). You may need to wait 20 or 30 minutes, depending on the number of hosts.
Do not proceed to the next step until the hosts show in the Up state. If the hosts do not come to the Up state, contact support.
20. Run the following script to stop, then start, all System VMs including Secondary Storage VMs, Console Proxy VMs, and virtual routers.
a. Run the command once on one management server. Substitute your own IP address of the MySQL instance, the MySQL user to connect as, and the password to use for that user. In addition to those parameters, provide the "-a" argument. For example:
# nohup cloudstack-sysvmadm -d 192.168.1.5 -u cloud -p password -a > sysvm.log 2>&1 &
This might take up to an hour or more to run, depending on the number of accounts in the system.
b. After the script terminates, check the log to verify correct execution:
# tail -f sysvm.log
The content should be like the following:
Stopping and starting 1 secondary storage vm(s)...
Done stopping and starting secondary storage vm(s)
Stopping and starting 1 console proxy vm(s)...
Done stopping and starting console proxy vm(s).
Stopping and starting 4 running routing vm(s)...
Done restarting router(s).
c. If you would like additional confirmation that the new system VM templates were correctly applied when these system VMs were rebooted, SSH into the System VM and check the version.
Use one of the following techniques, depending on the hypervisor.
XenServer or KVM:
SSH in by using the link local IP address of the system VM. For example, in the command below, substitute your own path to the private key used to log in to the system VM and your own link local IP.
Run the following commands on the XenServer or KVM host on which the system VM is present:
# ssh -i /root/.ssh/id_rsa.cloud <link-local-ip> -p 3922
# cat /etc/cloudstack-release
The output should be like the following:
Cloudstack Release 4.2 Mon Aug 12 15:10:04 PST 2013
ESXi
SSH in using the private IP address of the system VM. For example, in the command below, substitute your own path to the private key used to log in to the system VM and your own private IP.
Run the following commands on the Management Server:
# ssh -i /var/cloudstack/management/.ssh/id_rsa <private-ip> -p 3922
# cat /etc/cloudstack-release
The output should be like the following:
Cloudstack Release 4.2 Mon Aug 12 15:10:04 PST 2012
21. (XenServer only) If needed, upgrade all Citrix XenServer hypervisor hosts in your cloud to a version supported by CloudPlatform 4.2 and apply any required hotfixes. Instructions for upgrading and applying hotfixes can be found in Section 4.4, “Upgrading and Hotfixing XenServer Hypervisor Hosts”.
22. (VMware only) If your existing cloud includes any deployed data centers, you should set the global configuration setting vmware.create.full.clone to false. Then restart the Management Server. For information about how to set vmware.create.full.clone, see Section 5.5, “Setting Configuration Parameters”. For information about how CloudPlatform supports full and linked clones, see “Configuring Usage of Linked Clones on VMware” in the CloudPlatform Administration Guide.
Note
(VMware only) After upgrade, whenever you add a new VMware cluster to a zone that was created with a previous version of CloudPlatform, the fields vCenter host, vCenter Username, vCenter Password, and vCenter Datacenter are required. The Add Cluster dialog in the CloudPlatform user interface incorrectly shows them as optional, and will allow you to proceed with adding the cluster even though these important fields are blank. If you do not provide the values, you will see an error message like "Your host and/or path is wrong. Make sure it's of the format http://hostname/path".

4.3. Upgrade from 2.1.x to 4.2

Direct upgrades from version 2.1.0 - 2.1.10 to 4.2 are not supported. CloudPlatform must first be upgraded to version 2.2.14. For information on how to upgrade from 2.1.x to 2.2.14, see the CloudPlatform 2.2.14 Release Notes.

4.4. Upgrading and Hotfixing XenServer Hypervisor Hosts

In CloudPlatform 4.2, you can upgrade XenServer hypervisor host software without having to disconnect the XenServer cluster. You can upgrade XenServer 5.6 GA, 5.6 FP1, or 5.6 SP2 to any newer version that is supported by CloudPlatform. The actual upgrade is described in XenServer documentation, but there are some additional steps you must perform before and after the upgrade.

4.4.1. Upgrading to a New XenServer Version

To upgrade XenServer hosts when running CloudPlatform 4.2:
1. Edit the file /etc/cloudstack/management/environment.properties and add the following line:
manage.xenserver.pool.master=false
2. Restart the Management Server to put the new setting into effect.
# service cloudstack-management restart
3. Find the hostname of the master host in your XenServer cluster (pool):
a. Run the following command on any host in the pool, and make a note of the host-uuid of the master host:
# xe pool-list
b. Now run the following command, and find the host that has a host-uuid that matches the master host from the previous step. Make a note of this host's hostname. You will need to input it in a later step.
# xe host-list
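Steps a and b can be combined by parsing the command output. A hedged sketch only: the "master ( RO)" field label is how recent XenServer releases print it, so confirm against your own `xe pool-list` output before relying on this helper (whose name is mine):

```shell
# Sketch: pull the master host-uuid out of `xe pool-list` output fed on stdin.
master_uuid_from_pool_list() {
  awk -F': *' '/master \( RO\)/ {print $2}'
}
```

Usage: xe pool-list | master_uuid_from_pool_list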
4. On CloudPlatform, put the master host into maintenance mode. Use the hostname you discovered in the previous step.
Note
In the latest XenServer upgrade procedure, even after putting the master host into maintenance mode, the master host continues to stay as master.
Any VMs running on this master will be automatically migrated to other hosts, unless there is only one UP host in the cluster. If there is only one UP host, putting the host into maintenance mode will stop any VMs running on the host.
5. Disconnect the XenServer cluster from CloudPlatform. It will remain disconnected only long enough to upgrade one host.
a. Log in to the CloudPlatform UI as root.
b. Navigate to the XenServer cluster, and click Actions – Unmanage.
c. Watch the cluster status until it shows Unmanaged.
6. Upgrade the XenServer software on the master host:
a. Insert the XenServer CD.
b. Reboot the host.
c. Upgrade to the newer version of XenServer. Use the steps in XenServer documentation.
7. Cancel the maintenance mode on the master host.
8. Reconnect the XenServer cluster to CloudPlatform:
a. Log in to the CloudPlatform UI as root.
b. Navigate to the XenServer cluster, and click Actions – Manage.
c. Watch the status to see that all the hosts come up.
9. Upgrade the slave hosts in the cluster:
a. Put a slave host into maintenance mode. Wait until all the VMs are migrated to other hosts.
b. Upgrade the XenServer software on the slave.
c. Cancel maintenance mode for the slave.
d. Repeat steps a through c for each slave host in the XenServer pool.
10. You might need to change the OS type settings for VMs running on the upgraded hosts, if any of the following apply:
• If you upgraded from XenServer 5.6 GA to XenServer 5.6 SP2, change any VMs that have the OS type CentOS 5.5 (32-bit), Oracle Enterprise Linux 5.5 (32-bit), or Red Hat Enterprise Linux 5.5 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
• If you upgraded from XenServer 5.6 SP2 to XenServer 6.0.2 or higher, change any VMs that have the OS type CentOS 5.6 (32-bit), CentOS 5.7 (32-bit), Oracle Enterprise Linux 5.6 (32-bit), Oracle Enterprise Linux 5.7 (32-bit), Red Hat Enterprise Linux 5.6 (32-bit), or Red Hat Enterprise Linux 5.7 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
• If you upgraded from XenServer 5.6 to XenServer 6.0.2 or higher, do all of the above.

4.4.2. Applying Hotfixes to a XenServer Cluster

1. Edit the file /etc/cloudstack/management/environment.properties and add the following line:
manage.xenserver.pool.master=false
2. Restart the Management Server to put the new setting into effect.
# service cloudstack-management restart
3. Find the hostname of the master host in your XenServer cluster (pool):
a. Run the following command on any host in the pool, and make a note of the host-uuid of the master host:
# xe pool-list
b. Now run the following command, and find the host that has a host-uuid that matches the master host from the previous step. Make a note of this host's hostname. You will need to input it in a later step.
# xe host-list
4. On CloudPlatform, put the master host into maintenance mode. Use the hostname you discovered in the previous step.
Any VMs running on this master will be automatically migrated to other hosts, unless there is only one UP host in the cluster. If there is only one UP host, putting the host into maintenance mode will stop any VMs running on the host.
5. Disconnect the XenServer cluster from CloudPlatform. It will remain disconnected only long enough to hotfix one host.
a. Log in to the CloudPlatform UI as root.
b. Navigate to the XenServer cluster, and click Actions – Unmanage.
c. Watch the cluster status until it shows Unmanaged.
6. Hotfix the master host:
a. Add the XenServer hotfixes to the master host.
i. Assign a UUID to the update file:
xe patch-upload file-name=XS602E015.xsupdate
The command displays the UUID of the update file:
33af688e-d18c-493d-922b-ec51ea23cfe9
ii. Repeat the xe patch-upload command for all other XenServer updates: XS602E004.xsupdate, XS602E005.xsupdate. Make a note of the UUIDs of the update files; they are required in the next step.
b. Apply the XenServer hotfixes to the master host:
xe patch-apply host-uuid=<master uuid> uuid=<hotfix uuid>
c. Repeat the xe patch-apply command for all the hotfixes.
d. Install the required CSP files.
xe-install-supplemental-pack <csp-iso-file>
e. Restart the master host.
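The per-file uploads in step a can be wrapped in a helper that prints each file with its UUID, so the UUIDs are at hand for the patch-apply step. A sketch only: the function name is mine, and it assumes `xe patch-upload` prints the UUID on stdout, as shown in the example above.

```shell
# Sketch: upload each hotfix and print "<file> <uuid>", one per line.
upload_hotfixes() {
  for f in "$@"; do
    uuid=$(xe patch-upload file-name="$f")
    echo "$f $uuid"
  done
}
```

Usage: upload_hotfixes XS602E015.xsupdate XS602E004.xsupdate XS602E005.xsupdate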
7. Cancel the maintenance mode on the master host.
8. Reconnect the XenServer cluster to CloudPlatform:
a. Log in to the CloudPlatform UI as root.
b. Navigate to the XenServer cluster, and click Actions – Manage.
c. Watch the status to see that all the hosts come up.
9. Hotfix the slave hosts in the cluster:
a. Put a slave host into maintenance mode. Wait until all the VMs are migrated to other hosts.
b. Apply the XenServer hot fixes to the slave host:
xe patch-apply host-uuid=<slave uuid> uuid=<hotfix uuid>
c. Repeat steps a through b for each slave host in the XenServer pool.
d. Install the required CSP files.
xe-install-supplemental-pack <csp-iso-file>
e. Restart the slave hosts.
Wait until all the slave hosts are up. It might take several minutes for the hosts to come up.
10. Cancel the maintenance mode on the slave hosts.
11. You might need to change the OS type settings for VMs running on the upgraded hosts, if any of the following apply:
• If you upgraded from XenServer 5.6 SP2 to XenServer 6.0.2, change any VMs that have the OS type CentOS 5.6 (32-bit), CentOS 5.7 (32-bit), Oracle Enterprise Linux 5.6 (32-bit), Oracle Enterprise Linux 5.7 (32-bit), Red Hat Enterprise Linux 5.6 (32-bit), or Red Hat Enterprise Linux 5.7 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
• If you upgraded from XenServer 5.6 GA or 5.6 FP1 to XenServer 6.0.2, change any VMs that have the OS type CentOS 5.5 (32-bit), CentOS 5.6 (32-bit), CentOS 5.7 (32-bit), Oracle Enterprise Linux 5.5 (32-bit), Oracle Enterprise Linux 5.6 (32-bit), Oracle Enterprise Linux 5.7 (32-bit), Red Hat Enterprise Linux 5.5 (32-bit), Red Hat Enterprise Linux 5.6 (32-bit), or Red Hat Enterprise Linux 5.7 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
Chapter 5.
Installation

5.1. Who Should Read This

These installation instructions are intended for those who are ready to set up a full production deployment. If you only need to set up a trial installation, you will probably find more detail than you need here. Instead, you might want to start with the Trial Installation Guide.
With the following procedures, you can start using the more powerful features of CloudPlatform, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere.

5.2. Overview of Installation Steps

For anything more than a simple trial installation, you will need guidance for a variety of configuration choices. It is strongly recommended that you read the following:
• Chapter 13, Choosing a Deployment Architecture
• Section 5.3.3, “Hypervisor Compatibility Matrix”
• Chapter 14, Network Setup
• Storage Setup
• Best Practices
Prepare
1. Make sure you have the required hardware ready
2. (Optional) Fill out the preparation checklists
Install the CloudPlatform software
3. Install the Management Server (choose single-node or multi-node)
4. Log in to the UI
Provision your cloud infrastructure
5. Add a zone. Includes the first pod, cluster, and host
6. Add more pods
7. Add more clusters
8. Add more hosts
9. Add more primary storage
10. Add more secondary storage
Try using the cloud
11. Initialization and testing

5.3. Minimum System Requirements

5.3.1. Management Server, Database, and Storage System Requirements

The machines that will run the Management Server and MySQL database must meet the following requirements. The same machines can also be used to provide primary and secondary storage, such as via local disk or NFS. The Management Server may be placed on a virtual machine.
• Operating system:
• Preferred: RHEL 6.2 or 6.3 64-bit (https://access.redhat.com/downloads)
• Also supported: RHEL 5.5 64-bit
• It is highly recommended that you purchase a RHEL support license. Citrix support is not responsible for fixing issues with the underlying OS.
• 64-bit x86 CPU (more cores results in better performance)
• 4 GB of memory
• 50 GB of local disk (when secondary storage is on the same machine with the Management Server, 500GB is recommended)
• At least 1 NIC
• Statically allocated IP address
• Fully qualified domain name as returned by the hostname command
• Use the default user file-creation mode mask (umask). The value is 022. If the value is not 022, several files might not be accessible to the cloud user, which leads to
installation failure.
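The umask requirement above can be checked with a short snippet before you start installing; this is a sketch, not part of the installer:

```shell
# Verify the default file-creation mask is 022, as the installer expects.
current_umask=$(umask)
case "$current_umask" in
    022|0022) umask_ok=yes ;;
    *)        umask_ok=no ;;
esac
echo "umask=$current_umask ok=$umask_ok"
```

If the check reports "no", set the mask with "umask 022" (and correct the shell profile that changed it) before installing.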

5.3.2. Host/Hypervisor System Requirements

The host is where the cloud services run in the form of guest virtual machines. Each host is one machine that meets the following requirements:
• Must support HVM (Intel-VT or AMD-V enabled).
• 64-bit x86 CPU (more cores results in better performance)
• Hardware virtualization support required
• 4 GB of memory
• 36 GB of local disk
• At least 1 NIC
• Latest hotfixes applied to hypervisor software
• When you deploy CloudPlatform, the hypervisor host must not have any VMs already running
• All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
Hosts have additional requirements depending on the hypervisor. See the requirements listed at the top of the Installation section for your chosen hypervisor:
Chapter 8, Installing XenServer for CloudPlatform
Chapter 10, Installing VMware for CloudPlatform
Chapter 9, Installing KVM for CloudPlatform
Chapter 12, Installing Oracle VM (OVM) for CloudPlatform
Warning
Be sure you fulfill the additional hypervisor requirements and installation steps provided in this Guide. Hypervisor hosts must be properly prepared to work with CloudPlatform.

5.3.3. Hypervisor Compatibility Matrix

Find your CloudPlatform version number in the top row of the table, then look down the column to see which hypervisor versions you can use.
You can find an additional presentation of this information on the Citrix Knowledge Base at http://support.citrix.com/article/CTX134803.
5.3.3.1. CloudPlatform 4.x
                                                                    4.2.0
XenServer 6.2 with fresh CloudPlatform installation                 Yes
XenServer 6.2 with CloudPlatform upgraded from previous version     No
XenServer 6.1.0                                                     Yes
XenServer 6.0.2                                                     Yes
XenServer 6.0.0                                                     No
XenServer 5.6 SP2                                                   Yes
XenServer 5.6 FP1                                                   Yes
KVM (RHEL 6.2 or 6.3)                                               Yes
KVM (RHEL 6.0 or 6.1)                                               No
KVM (RHEL 5.x)                                                      No
VMware ESX 5 and vCenter 5.1                                        Yes
VMware ESX 5 and vCenter 5.0 (both 5.0.1 Update B)                  Yes
VMware ESX 4.1 and vCenter 4.1                                      No
5.3.3.2. CloudPlatform 3.x
                                 3.0.0  3.0.1  3.0.2  3.0.3  3.0.4  3.0.5  3.0.6  3.0.7
XenServer 5.6                    No     No     No     No     No     No     No     No
XenServer 5.6 FP1                No     No     Yes    Yes    Yes    Yes    Yes    Yes
XenServer 5.6 SP2                No     No     Yes    Yes    Yes    Yes    Yes    Yes
XenServer 6.0.0                  No     No     No     No     No     No     No     No
XenServer 6.0.2                  Yes    Yes    Yes    Yes    Yes    Yes    Yes    Yes
XenServer 6.1                    No     No     No     No     No     No     Yes    Yes
XenServer 6.2                    No     No     No     No     No     No     No     Yes (3.0.7 Patch C or greater)
KVM (RHEL 6.0, 6.1 or 6.2)       Yes    Yes    Yes    Yes    Yes    Yes    Yes    Yes
VMware ESX 4.1 and vCenter 4.1   Yes    Yes    Yes    Yes    Yes    Yes    Yes    Yes
VMware ESX 5 and vCenter 5       Yes    Yes    Yes    Yes    Yes    Yes    Yes    Yes
5.3.3.3. CloudPlatform 2.x
                                 2.1.x  2.2.x
XenServer 5.6                    Yes    Yes
XenServer 5.6 FP1                Yes    Yes
XenServer 5.6 SP2                Yes    Yes
XenServer 6.0.0                  No     No
XenServer 6.0.2                  No     No
XenServer 6.1                    No     No
KVM (RHEL 6.0 or 6.1)            Yes    Yes
VMware ESX 4.1 and vCenter 4.1   Yes    Yes
VMware ESX 5 and vCenter 5       No     No

5.4. Management Server Installation

5.4.1. Management Server Installation Overview

This section describes installing the Management Server. There are two slightly different installation flows, depending on how many Management Server nodes will be in your cloud:
• A single Management Server node, with MySQL on the same node.
• Multiple Management Server nodes, with MySQL on a node separate from the Management Servers.
In either case, each machine must meet the system requirements described in System Requirements.
Warning
For the sake of security, be sure the public Internet cannot access port 8096 or port 8250 on the Management Server.
The procedure for installing the Management Server is:
1. Prepare the Operating System
2. Install the First Management Server
3. Install and Configure the MySQL database
4. Prepare NFS Shares
5. Prepare and Start Additional Management Servers (optional)
6. Prepare the System VM Template

5.4.2. Prepare the Operating System

The OS must be prepared to host the Management Server using the following steps. These steps must be performed on each Management Server node.
1. Log in to your OS as root.
2. Check for a fully qualified hostname.
# hostname --fqdn
This should return a fully qualified hostname such as "management1.lab.example.org". If it does not, edit /etc/hosts so that it does.
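The check can be scripted as follows; the IP address and hostname in the /etc/hosts hint are illustrative only:

```shell
# A hostname counts as fully qualified here if it contains a dot.
fqdn=$(hostname --fqdn 2>/dev/null || hostname 2>/dev/null || echo localhost)
case "$fqdn" in
    *.*) echo "FQDN OK: $fqdn" ;;
    *)   echo "no FQDN; add a line such as '192.0.2.10 management1.lab.example.org management1' to /etc/hosts" ;;
esac
```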
3. Set SELinux to be permissive by default.
a. Check to see whether SELinux is installed on your machine. If not, you can skip to step 4.
In RHEL, SELinux is installed and enabled by default. You can verify this with:
# rpm -qa | grep selinux
b. Set the SELINUX variable in /etc/selinux/config to “permissive”. This ensures that the
permissive setting will be maintained after a system reboot.
# vi /etc/selinux/config
c. Then set SELinux to permissive starting immediately, without requiring a system reboot.
# setenforce 0
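Steps 3b and 3c can also be done non-interactively with sed. The sketch below edits a temporary copy of the file so it can be run anywhere; on a real host, point sed at /etc/selinux/config itself and still run setenforce 0 afterward:

```shell
# Flip SELINUX to permissive in a scratch copy of the config file.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
selinux_line=$(grep '^SELINUX=' "$cfg")
echo "$selinux_line"
rm -f "$cfg"
```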
4. Make sure that the machine can reach the Internet.
# ping www.cloudstack.org
5. If you do not have a Red Hat Network account, you need to prepare a local Yum repository.
a. If you are working with a physical host, insert the RHEL installation CD. If you are using a VM, attach the RHEL ISO.
b. Mount the CDROM to /media.
c. Create a repo file at /etc/yum.repos.d/rhel6.repo. In the file, insert the following lines:
[rhel]
name=rhel6
baseurl=file:///media
enabled=1
gpgcheck=0
6. Turn on NTP for time synchronization.
Note
NTP is required to synchronize the clocks of the servers in your cloud.
a. Install NTP.
# yum install ntp
b. Edit the NTP configuration file to point to your NTP server.
# vi /etc/ntp.conf
Add one or more server lines in this file with the names of the NTP servers you want to use.
For example:
server 0.xenserver.pool.ntp.org
server 1.xenserver.pool.ntp.org
server 2.xenserver.pool.ntp.org
server 3.xenserver.pool.ntp.org
c. Restart the NTP client.
# service ntpd restart
d. Make sure NTP will start again upon reboot.
# chkconfig ntpd on
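The server lines from step 6b can be appended in one pass. This sketch writes to a temporary file so it can be run anywhere; on a real host you would append to /etc/ntp.conf:

```shell
# Append the four example pool servers to a scratch copy of ntp.conf.
conf=$(mktemp)
for n in 0 1 2 3; do
    echo "server ${n}.xenserver.pool.ntp.org" >> "$conf"
done
server_count=$(grep -c '^server ' "$conf")
rm -f "$conf"
echo "added $server_count server lines"
```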
7. Repeat all of these steps on every host where the Management Server will be installed.
8. Continue to Section 5.4.3, “Install the Management Server on the First Host”.

5.4.3. Install the Management Server on the First Host

The first step in installation, whether you are installing the Management Server on one host or many, is to install the software on a single node.
Note
If you are planning to install the Management Server on multiple nodes for high availability, do not proceed to the additional nodes yet. That step will come later.
1. Download the CloudStack Management Server onto the host where it will run. Get the software from the following link.
https://www.citrix.com/English/ss/downloads/.
You will need a MyCitrix account1.
2. Install the CloudStack packages. You should have a file in the form of "CloudStack-VERSION-N-OSVERSION.tar.gz". Untar the file and then run the install.sh script inside it. Replace the file and directory names below with those you are using:
# tar xzf CloudStack-VERSION-N-OSVERSION.tar.gz
# cd CloudStack-VERSION-N-OSVERSION
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
3. Choose M to install the Management Server software.
> M
1 http://www.citrix.com/lang/English/publicindex.asp?destURL=%2FEnglish%2FmyCitrix%2Findex.asp%3F
4. When the installation is finished, run the following commands to start essential services:
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
5. Continue to Section 5.4.4, “Install and Configure the Database”.

5.4.4. Install and Configure the Database

CloudPlatform uses a MySQL database server to store its data. When you are installing the Management Server on a single node, you can install the MySQL server on the same node if desired. When installing the Management Server on multiple nodes, we assume that the MySQL database runs on a separate node.
5.4.4.1. Install the Database on the Management Server Node
This section describes how to install MySQL on the same machine with the Management Server. This technique is intended for a simple deployment that has a single Management Server node. If you have a multi-node Management Server deployment, you will typically use a separate node for MySQL. See
Section 5.4.4.2, “Install the Database on a Separate Node”.
1. If you already have a version of MySQL installed on the Management Server node, make one of the following choices, depending on what version of MySQL it is. The most recent version tested is
5.1.58.
• If you already have installed MySQL version 5.1.58 or later, skip to step 4.
• If you have installed a version of MySQL earlier than 5.1.58, you can either skip to step 4 or
uninstall MySQL and proceed to step 2 to install a more recent version.
Warning
It is important that you choose the right database version. Never downgrade a MySQL installation.
2. On the same computer where you installed the Management Server, re-run install.sh.
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
3. Choose D to install the MySQL server from the distribution’s repo.
> D
Troubleshooting: If you do not see the D option, you already have MySQL installed. Please go back to step 1.
4. Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS) and insert the following lines in the [mysqld] section. You can put these lines below the datadir line.
The max_connections parameter should be set to 350 multiplied by the number of Management Servers you are deploying. This example assumes one Management Server.
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
Note
The binlog-format variable is supported in MySQL versions 5.1 and greater. It is not supported in MySQL 5.0. In some versions of MySQL, an underscore character is used in place of the hyphen in the variable name. For the exact syntax and spelling of each variable, consult the documentation for your version of MySQL.
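The max_connections sizing rule above (350 per Management Server) can be computed directly; n_mgmt_servers below is a value you supply:

```shell
# One Management Server in this example, matching the configuration above.
n_mgmt_servers=1
max_connections=$((350 * n_mgmt_servers))
echo "max_connections=$max_connections"
```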
5. Restart the MySQL service, then invoke MySQL as the root user.
# service mysqld restart
# mysql -u root
6. Best Practice: MySQL does not set a root password by default. It is very strongly recommended that you set a root password as a security precaution. Run the following command, and substitute your own desired root password.
mysql> SET PASSWORD = PASSWORD('password');
From now on, start MySQL with mysql -p so it will prompt you for the password.
7. To grant access privileges to remote users, perform the following steps.
a. Run the following commands from the mysql prompt:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> exit
b. Restart the MySQL service.
# service mysqld restart
c. Open the MySQL server port (3306) in the firewall to allow remote clients to connect.
# iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
d. Edit the /etc/sysconfig/iptables file and add the following line at the beginning of the INPUT
chain.
-A INPUT -p tcp --dport 3306 -j ACCEPT
8. Set up the database. The following command creates the cloud user on the database.
• In dbpassword, specify the password to be assigned to the cloud user. You can choose to provide no password.
• In deploy-as, specify the username and password of the user deploying the database. In the following command, it is assumed the root user is deploying the database and creating the cloud user.
• (Optional) For encryption_type, use file or web to indicate the technique used to pass in the database encryption password. Default: file. See About Password and Key Encryption.
• (Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the CloudPlatform properties file. Default: password. It is highly recommended that you replace this with a more secure value. See About Password and Key Encryption.
• (Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the CloudPlatform database. Default: password. It is highly recommended that you replace this with a more secure value. See About Password and Key Encryption.
# cloudstack-setup-databases cloud:<dbpassword>@localhost --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key>
9. Now that the database is set up, you can finish configuring the OS for the Management Server. This command will set up iptables, sudoers, and start the Management Server.
# cloudstack-setup-management
10. Continue to Section 5.4.7, “Prepare NFS Shares”.
5.4.4.2. Install the Database on a Separate Node
This section describes how to install MySQL on a standalone machine, separate from the Management Server. This technique is intended for a deployment that includes several Management Server nodes. If you have a single-node Management Server deployment, you will typically use the same node for MySQL. See Section 5.4.4.1, “Install the Database on the Management Server Node”.
1. If you already have a version of MySQL installed, make one of the following choices, depending on what version of MySQL it is. The most recent version tested with CloudPlatform is 5.1.58.
• If you already have installed MySQL version 5.1.58 or later, skip to step 3.
• If you have installed a version of MySQL earlier than 5.1.58, you can either skip to step 3 or
uninstall MySQL and proceed to step 2 to install a more recent version.
Warning
It is important that you choose the right database version. Never downgrade a MySQL installation that is used with CloudPlatform.
2. Log in as root to your Database Node and run the following commands. If you are going to install a replica database, then log in to the master.
# yum install mysql-server
# chkconfig --level 35 mysqld on
3. Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS) and insert the following lines in the [mysqld] section. You can put these lines below the datadir line. The max_connections parameter should be set to 350 multiplied by the number of Management Servers you are deploying. This example assumes two Management Servers.
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=700
log-bin=mysql-bin
binlog-format = 'ROW'
Note
The binlog-format variable is supported in MySQL versions 5.1 and greater. It is not supported in MySQL 5.0. In some versions of MySQL, an underscore character is used in place of the hyphen in the variable name. For the exact syntax and spelling of each variable, consult the documentation for your version of MySQL.
4. Start the MySQL service, then invoke MySQL as the root user.
# service mysqld start
# mysql -u root
5. MySQL does not set a root password by default. It is very strongly recommended that you set a root password as a security precaution. Run the following command, and substitute your own desired root password for <password>. You can answer "Y" to all questions except "Disallow root login remotely?". Remote root login is required to set up the databases.
mysql> SET PASSWORD = PASSWORD('password');
From now on, start MySQL with mysql -p so it will prompt you for the password.
6. To grant access privileges to remote users, perform the following steps.
a. Run the following commands from the mysql prompt, then exit MySQL:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> exit
b. Restart the MySQL service.
# service mysqld restart
c. Open the MySQL server port (3306) in the firewall to allow remote clients to connect.
# iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
d. Edit the /etc/sysconfig/iptables file and add the following line at the beginning of the INPUT chain.
-A INPUT -p tcp --dport 3306 -j ACCEPT
7. Return to the root shell on your first Management Server.
8. Set up the database. The following command creates the cloud user on the database.
• In dbpassword, specify the password to be assigned to the cloud user. You can choose to provide no password.
• In dbhost, provide the hostname or IP address of the database node.
• In deploy-as, specify the username and password of the user deploying the database. For example, if you originally installed MySQL with user “root” and password “password”, provide --deploy-as=root:password.
• (Optional) For encryption_type, use file or web to indicate the technique used to pass in the database encryption password. Default: file. See Section 5.4.5, “About Password and Key
Encryption”.
• (Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the CloudPlatform properties file. Default: password. It is highly recommended that you replace this with a more secure value. See Section 5.4.5, “About
Password and Key Encryption”.
• (Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the CloudPlatform database. Default: password. It is highly recommended that you replace this with a more secure value. See Section 5.4.5, “About Password and Key
Encryption”.
# cloudstack-setup-databases cloud:<dbpassword>@<dbhost> --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key>
9. Now run a script that will set up iptables rules and SELinux for use by the Management Server, and then start the Management Server.
# cloudstack-setup-management
10. Continue to Section 5.4.7, “Prepare NFS Shares”.

5.4.5. About Password and Key Encryption

CloudPlatform stores several sensitive passwords and secret keys that are used to provide security. These values are always automatically encrypted:
• Database secret key
• Database password
• SSH keys
• Compute node root password
• VPN password
• User API secret key
• VNC password
CloudPlatform uses the Java Simplified Encryption (JASYPT) library. The data values are encrypted and decrypted using a database secret key, which is stored in one of CloudPlatform’s internal properties files along with the database password. The other encrypted values listed above, such as SSH keys, are in the CloudPlatform internal database.
Of course, the database secret key itself can not be stored in the open – it must be encrypted. How then does CloudPlatform read it? A second secret key must be provided from an external source during Management Server startup. This key can be provided in one of two ways: loaded from a file or provided by the CloudPlatform administrator. The CloudPlatform database has a configuration setting that lets it know which of these methods will be used. If the encryption type is set to “file,” the key must be in a file in a known location. If the encryption type is set to “web,” the administrator runs the utility com.cloud.utils.crypt.EncryptionSecretKeySender, which relays the key to the Management Server over a known port.
The encryption type, database secret key, and Management Server secret key are set during CloudPlatform installation. They are all parameters to the CloudPlatform database setup script (cloudstack-setup-databases). The default values are file, password, and password. It is, of course, highly recommended that you change these to more secure keys.

5.4.6. Changing the Default Password Encryption

Passwords are encoded when creating or updating users. The default preferred encoder is SHA256. It is more secure than MD5 hashing, which was used in CloudPlatform 3.x. If you take no action to customize password encryption and authentication, SHA256 Salt will be used.
If you prefer a different authentication mechanism, CloudPlatform provides a way for you to determine the default encoding and authentication mechanism for admin and user logins. Two configurable lists are provided: userPasswordEncoders and userAuthenticators. userPasswordEncoders allows you to configure the order of preference for encoding passwords, and userAuthenticators allows you to configure the order in which authentication schemes are invoked to validate user passwords.
The following method determines what encoding scheme is used to encode the password supplied during user creation or modification.
When a new user is created, the user password is encoded by using the first valid encoder loaded as per the sequence specified in the UserPasswordEncoders property in the ComponentContext.xml or nonossComponentContext.xml files. The order of authentication schemes is determined by the UserAuthenticators property in the same files. If Non-OSS components, such as VMware environments, are to be deployed, modify the UserPasswordEncoders and UserAuthenticators lists in the nonossComponentContext.xml file. For OSS environments, such as XenServer or KVM, modify the ComponentContext.xml file. It is recommended to make uniform changes across both the files.
When a new authenticator or encoder is added, you can add them to this list. While doing so, ensure that the new authenticator or encoder is specified as a bean in both the files. The administrator can change the ordering of both these properties as desired to change the order of schemes. Modify the following list properties available in client/tomcatconf/nonossComponentContext.xml.in or client/tomcatconf/componentContext.xml.in as applicable, to the desired order:
<property name="UserAuthenticators">
    <list>
        <ref bean="SHA256SaltedUserAuthenticator"/>
        <ref bean="MD5UserAuthenticator"/>
        <ref bean="LDAPUserAuthenticator"/>
        <ref bean="PlainTextUserAuthenticator"/>
    </list>
</property>
<property name="UserPasswordEncoders">
    <list>
        <ref bean="SHA256SaltedUserAuthenticator"/>
        <ref bean="MD5UserAuthenticator"/>
        <ref bean="LDAPUserAuthenticator"/>
        <ref bean="PlainTextUserAuthenticator"/>
    </list>
</property>
In the above default ordering, SHA256Salt is used first for UserPasswordEncoders. If the module is found and encoding returns a valid value, the encoded password is stored in the user table's password column. If it fails for any reason, the MD5UserAuthenticator is tried next, and so on down the list. For UserAuthenticators, SHA256Salt authentication is tried first. If it succeeds, the user is logged in to the Management Server. If it fails, MD5 is tried next, and attempts continue until one of them succeeds and the user logs in. If none of them works, the user is returned an invalid credential message.
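The first-valid-encoder behavior described above can be sketched as a loop. try_encoder below is a hypothetical stand-in (not a CloudPlatform command) in which only SHA256Salt returns a value, so it is selected first, mirroring the default ordering:

```shell
# Stand-in for an encoder module: only SHA256Salt "succeeds" here.
try_encoder() {
    case "$1" in
        SHA256Salt) echo "sha256salt-hash" ;;
        *) return 1 ;;
    esac
}

chosen=""
for enc in SHA256Salt MD5 LDAP PlainText; do
    if encoded=$(try_encoder "$enc"); then
        chosen="$enc"      # the first encoder that returns a valid value wins
        break
    fi
done
echo "chosen=$chosen encoded=$encoded"
```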

5.4.7. Prepare NFS Shares

CloudPlatform needs a place to keep primary and secondary storage (see Chapter 3, Cloud
Infrastructure Concepts). Both of these can be NFS shares. This section tells how to set up the NFS
shares before adding the storage to CloudPlatform. For primary storage, you can use iSCSI instead. The requirements for primary and secondary storage are described in:
Section 3.6, “About Primary Storage”
Section 3.7, “About Secondary Storage”
A production installation typically uses a separate NFS server. See Section 5.4.7.1, “Using a Separate
NFS Server”.
You can also use the Management Server node as the NFS server. This is more typical of a trial installation, but is technically possible in a larger deployment. See Section 5.4.7.2, “Using the
Management Server As the NFS Server”.
5.4.7.1. Using a Separate NFS Server
This section tells how to set up NFS shares for secondary and (optionally) primary storage on an NFS server running on a separate node from the Management Server.
The exact commands for the following steps may vary depending on your operating system version.
Warning
(KVM only) Ensure that no volume is already mounted at your NFS mount point.
1. On the storage server, create an NFS share for secondary storage and, if you are using NFS for primary storage as well, create a second NFS share. For example:
# mkdir -p /export/primary
# mkdir -p /export/secondary
2. To configure the new directories as NFS exports, edit /etc/exports. Export the NFS share(s) with rw,async,no_root_squash. For example:
# vi /etc/exports
Insert the following line.
/export *(rw,async,no_root_squash)
3. Export the /export directory.
# exportfs -a
4. On the management server, create a mount point for secondary storage. For example:
# mkdir -p /mnt/secondary
5. Mount the secondary storage on your Management Server. Replace the example NFS server name and NFS share paths below with your own.
# mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
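Before continuing, it is worth confirming that the share really is mounted. A minimal check, using the /mnt/secondary mount point from the steps above:

```shell
# Report whether /mnt/secondary appears in the kernel mount table.
mnt=/mnt/secondary
if grep -qs " $mnt " /proc/mounts; then
    mounted=yes
else
    mounted=no
fi
echo "$mnt mounted: $mounted"
```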
6. If you are setting up multiple Management Server nodes, continue with Section 5.4.8, “Prepare
and Start Additional Management Servers”. If you are setting up a single-node deployment,
continue with Section 5.4.10, “Prepare the System VM Template”.
5.4.7.2. Using the Management Server As the NFS Server
This section tells how to set up NFS shares for primary and secondary storage on the same node with the Management Server. This is more typical of a trial installation, but is technically possible in a larger deployment. It is assumed that you will have less than 16TB of storage on the host.
The exact commands for the following steps may vary depending on your operating system version.
1. On the Management Server host, create two directories that you will use for primary and secondary storage. For example:
# mkdir -p /export/primary
# mkdir -p /export/secondary
2. To configure the new directories as NFS exports, edit /etc/exports. Export the NFS share(s) with rw,async,no_root_squash. For example:
# vi /etc/exports
Insert the following line.
/export *(rw,async,no_root_squash)
3. Export the /export directory.
# exportfs -a
4. Edit the /etc/sysconfig/nfs file.
# vi /etc/sysconfig/nfs
Uncomment the following lines:
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
5. Edit the /etc/sysconfig/iptables file.
# vi /etc/sysconfig/iptables
Add the following lines at the beginning of the INPUT chain:
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
6. Run the following commands:
# service iptables restart
# service iptables save
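Rather than typing each rule in step 5 by hand, the eleven NFS-related rules can be generated from a protocol/port list; this sketch only prints them:

```shell
# Emit one ACCEPT rule per protocol/port pair used by the NFS services.
rules=$(for spec in "udp 111" "tcp 111" "tcp 2049" "tcp 32803" \
                    "udp 32769" "tcp 892" "udp 892" "tcp 875" \
                    "udp 875" "tcp 662" "udp 662"; do
    set -- $spec
    echo "-A INPUT -m state --state NEW -p $1 --dport $2 -j ACCEPT"
done)
echo "$rules"
rule_count=$(echo "$rules" | wc -l | tr -d ' ')
echo "generated $rule_count rules"
```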
7. If NFS v4 communication is used between client and server, add your domain to /etc/idmapd.conf on both the hypervisor host and Management Server.
# vi /etc/idmapd.conf
Remove the character # from the beginning of the Domain line in idmapd.conf and replace the value in the file with your own domain. In the example below, the domain is company.com.
Domain = company.com
8. Reboot the Management Server host.
Two NFS shares called /export/primary and /export/secondary are now set up.
9. It is recommended that you test to be sure the previous steps have been successful.
a. Log in to the hypervisor host.
b. Be sure NFS and rpcbind are running. The commands might be different depending on your OS. For example:
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
# reboot
c. Log back in to the hypervisor host and try to mount the /export directories. For example
(substitute your own management server name):
# mkdir /primarymount
# mount -t nfs <management-server-name>:/export/primary /primarymount
# umount /primarymount
# mkdir /secondarymount
# mount -t nfs <management-server-name>:/export/secondary /secondarymount
# umount /secondarymount
10. If you are setting up multiple Management Server nodes, continue with Section 5.4.8, “Prepare
and Start Additional Management Servers”. If you are setting up a single-node deployment,
continue with Section 5.4.10, “Prepare the System VM Template”.

5.4.8. Prepare and Start Additional Management Servers

For your second and subsequent Management Servers, you will install the Management Server software, connect it to the database, and set up the OS for the Management Server.
1. Perform the steps in Section 5.4.2, “Prepare the Operating System”.
2. Download the Management Server onto the additional host where it will run. Get the software from the following link.
https://www.citrix.com/English/ss/downloads/
You will need a MyCitrix account (http://www.citrix.com/lang/English/publicindex.asp?destURL=%2FEnglish%2FmyCitrix%2Findex.asp%3F).
3. Install the packages. You should have a file in the form of “CloudPlatform-VERSION-N-OSVERSION.tar.gz”. Untar the file and then run the install.sh script inside it. Replace the file and directory names below with those you are using:
Chapter 5. Installation
# tar xzf CloudPlatform-VERSION-N-OSVERSION.tar.gz
# cd CloudPlatform-VERSION-N-OSVERSION
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
4. Choose M to install the Management Server software.
> M
5. When the installation is finished, run the following commands to start essential services:
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
6. Configure the database client. Note the absence of the --deploy-as argument in this case. (For more details about the arguments to this command, see Section 5.4.4.2, “Install the Database on
a Separate Node”.)
# cloudstack-setup-databases cloud:<dbpassword>@<dbhost> -e <encryption_type> -m <management_server_key> -k <database_key>
7. (Trial installations only) If you are running the hypervisor on the same machine with the Management Server, edit /etc/sudoers and add the following line:
Defaults:cloud !requiretty
8. Configure the OS and start the Management Server:
# cloudstack-setup-management
The Management Server on this node should now be running.
9. Repeat these steps on each additional Management Server.
10. Be sure to configure a load balancer for the Management Servers. See Section 5.4.9,
“Management Server Load Balancing”.
11. Continue with Section 5.4.10, “Prepare the System VM Template”.

5.4.9. Management Server Load Balancing

CloudPlatform can use a load balancer to provide a virtual IP for multiple Management Servers. The administrator is responsible for creating the load balancer rules for the Management Servers. The application requires persistence or stickiness across multiple sessions. The following chart lists the ports that should be load balanced and whether or not persistence is required.
Even if persistence is not required, enabling it is permitted.
Source Port   Destination Port           Protocol        Persistence Required?
80 or 443     8080 (or 20400 with AJP)   HTTP (or AJP)   Yes
8250          8250                       TCP             Yes
8096          8096                       HTTP            No
In addition to the above settings, the administrator is responsible for changing the 'host' global configuration value from the Management Server IP address to the load balancer virtual IP address. If the 'host' value is not set to the VIP for port 8250 and one of your Management Servers crashes, the UI is still available, but the system VMs will not be able to contact the Management Server.
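For illustration only, the TCP rule for port 8250 from the table might be expressed as follows in an HAProxy configuration. The server names and addresses are placeholders, not values from this guide, and source-address balancing is just one way to provide the required stickiness:

```text
listen cloudstack-8250
    bind <load-balancer-vip>:8250
    mode tcp
    balance source                        # source-IP stickiness for persistence
    server mgmt1 <mgmt-server-1-ip>:8250 check
    server mgmt2 <mgmt-server-2-ip>:8250 check
```

Any load balancer can be used; the essential points are the port mappings and the persistence requirements listed in the table.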

5.4.10. Prepare the System VM Template

Secondary storage must be seeded with a template that is used for CloudPlatform system VMs.
Note
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
1. On the Management Server, run one or more of the following cloud-install-sys-tmplt commands to retrieve and decompress the system VM template. Run the command for each hypervisor type that you expect end users to run in this Zone.
If your secondary storage mount point is not named /mnt/secondary, substitute your own mount point name.
If you set the CloudPlatform database encryption type to "web" when you set up the database, you must now add the parameter -s <management-server-secret-key>. See About Password and Key Encryption.
This process will require approximately 5 GB of free space on the local file system and up to 30 minutes each time it runs.
• For XenServer:
# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2 -h xenserver -s <optional-management-server-secret-key> -F
• For vSphere:
# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova -h vmware -s <optional-management-server-secret-key> -F
• For KVM:
# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2 -h kvm -s <optional-management-server-secret-key> -F
2. If you are using a separate NFS server, perform this step. If you are using the Management Server as the NFS server, you MUST NOT perform this step.
When the script has finished, unmount secondary storage and remove the created directory.
# umount /mnt/secondary
# rmdir /mnt/secondary
3. Repeat these steps for each secondary storage server.

5.4.11. Installation Complete! Next Steps

Congratulations! You have now installed CloudPlatform Management Server and the database it uses to persist system data.
What should you do next?
• Even without adding any cloud infrastructure, you can run the UI to get a feel for what's offered and
how you will interact with CloudPlatform on an ongoing basis. See Log In to the UI.
• When you're ready, add the cloud infrastructure and try running some virtual machines on it, so you
can watch how CloudPlatform manages the infrastructure. See Provision Your Cloud Infrastructure.

5.5. Setting Configuration Parameters

5.5.1. About Configuration Parameters

CloudPlatform provides a variety of settings you can use to set limits, configure features, and enable or disable features in the cloud. Once your Management Server is running, you might need to set some of these configuration parameters, depending on what optional features you are setting up. You can set default values at the global level, which will be in effect throughout the cloud unless you override them at a lower level. You can make local settings, which will override the global configuration parameter values, at the level of an account, zone, cluster, or primary storage.
The documentation for each CloudPlatform feature should direct you to the names of the applicable parameters. The following table shows a few of the more useful parameters.
management.network.cidr
    A CIDR that describes the network that the management CIDRs reside on. This variable must be set for deployments that use vSphere. It is recommended to be set for other deployments as well. Example: 192.168.3.0/24.

xen.setup.multipath
    For XenServer nodes, this is a true/false variable that instructs CloudStack to enable iSCSI multipath on the XenServer hosts when they are added. This defaults to false. Set it to true if you would like CloudStack to enable multipath. If this is true for an NFS-based deployment, multipath will still be enabled on the XenServer host; however, this does not impact NFS operation and is harmless.

secstorage.allowed.internal.sites
    This is used to protect your internal network from rogue attempts to download arbitrary files using the template download feature. This is a comma-separated list of CIDRs. If a requested URL matches any of these CIDRs, the Secondary Storage VM will use the private network interface to fetch the URL. Other URLs will go through the public interface. We suggest you set this to one or two hardened internal machines where you keep your templates. For example, set it to 192.168.1.66/32.

use.local.storage
    Determines whether CloudStack will use storage that is local to the host for data disks, templates, and snapshots. By default CloudStack will not use this storage. You should change this to true if you want to use local storage and you understand the reliability and feature drawbacks of local storage.

host
    This is the IP address of the Management Server. If you are using multiple Management Servers, enter a load-balanced IP address that is reachable via the private network.

default.page.size
    Maximum number of items per page that can be returned by a CloudStack API command. The limit applies at the cloud level and can vary from cloud to cloud. You can override this with a lower value on a particular API call by using the page and pagesize API command parameters. For more information, see the Developer's Guide. Default: 500.

ha.tag
    The label you want to use throughout the cloud to designate certain hosts as dedicated HA hosts. These hosts will be used only for HA-enabled VMs that are restarting due to the failure of another host. For example, you could set this to ha_host. Specify the ha.tag value as a host tag when you add a new host to the cloud.

5.5.2. Setting Global Configuration Parameters

Use the following steps to set global configuration parameters. These values will be the defaults in effect throughout your CloudPlatform deployment.
1. Log in to the UI as administrator.
2. In the left navigation bar, click Global Settings.
3. In Select View, choose one of the following:
• Global Settings. This displays a list of the parameters with brief descriptions and current values.
• Hypervisor Capabilities. This displays a list of hypervisor versions with the maximum number of
guests supported for each.
4. Use the search box to narrow down the list to those you are interested in.
5. In the Actions column, click the Edit icon to modify a value. If you are viewing Hypervisor Capabilities, you must click the name of the hypervisor first to display the editing screen.
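The same edit can also be made programmatically through the updateConfiguration API call, in the style of the curl examples later in this guide. This is a sketch only: the parameter name and value are illustrative assumptions, and the host, port, and authentication must be adjusted for your deployment.

```shell
# Build an updateConfiguration request; expunge.delay and its value are
# illustrative assumptions, not values prescribed by this section.
name="expunge.delay"
value="120"
url="http://localhost:8080/?command=updateConfiguration&name=${name}&value=${value}"
echo "$url"
# curl --globoff "$url"    # uncomment to issue the call against a real Management Server
```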

5.5.3. Setting Local Configuration Parameters

Use the following steps to set local configuration parameters for an account, zone, cluster, or primary storage. These values will override the global configuration settings.
1. Log in to the UI as administrator.
2. In the left navigation bar, click Infrastructure or Accounts, depending on where you want to set a value.
3. Find the name of the particular resource that you want to work with. For example, if you are in Infrastructure, click View All on the Zones, Clusters, or Primary Storage area.
4. Click the name of the resource where you want to set a limit.
5. Click the Settings tab.
6. Use the search box to narrow down the list to those you are interested in.
7. In the Actions column, click the Edit icon to modify a value.

5.5.4. Granular Global Configuration Parameters

The following global configuration parameters have been made more granular. The parameters are listed under three different scopes: account, cluster, and zone.
remote.access.vpn.client.iprange (scope: account)
    The range of IPs to be allocated to remote access VPN clients. The first IP in the range is used by the VPN server.

allow.public.user.templates (scope: account)
    If false, users will not be able to create public templates.

use.system.public.ips (scope: account)
    If true and if an account has one or more dedicated public IP ranges, IPs are acquired from the system pool after all the IPs dedicated to the account have been consumed.

use.system.guest.vlans (scope: account)
    If true and if an account has one or more dedicated guest VLAN ranges, VLANs are allocated from the system pool after all the VLANs dedicated to the account have been consumed.

cluster.storage.allocated.capacity.notificationthreshold (scope: cluster)
    The percentage, as a value between 0 and 1, of allocated storage utilization above which alerts are sent that the available storage is below the threshold.

cluster.storage.capacity.notificationthreshold (scope: cluster)
    The percentage, as a value between 0 and 1, of storage utilization above which alerts are sent that the available storage is below the threshold.

cluster.cpu.allocated.capacity.notificationthreshold (scope: cluster)
    The percentage, as a value between 0 and 1, of CPU utilization above which alerts are sent that the available CPU is below the threshold.

cluster.memory.allocated.capacity.notificationthreshold (scope: cluster)
    The percentage, as a value between 0 and 1, of memory utilization above which alerts are sent that the available memory is below the threshold.

cluster.cpu.allocated.capacity.disablethreshold (scope: cluster)
    The percentage, as a value between 0 and 1, of CPU utilization above which allocators will disable the cluster from further usage. Keep the corresponding notification threshold lower than this value to be notified beforehand.

cluster.memory.allocated.capacity.disablethreshold (scope: cluster)
    The percentage, as a value between 0 and 1, of memory utilization above which allocators will disable the cluster from further usage. Keep the corresponding notification threshold lower than this value to be notified beforehand.

cpu.overprovisioning.factor (scope: cluster)
    Used for CPU over-provisioning calculation; the available CPU will be the mathematical product of actualCpuCapacity and cpu.overprovisioning.factor.

mem.overprovisioning.factor (scope: cluster)
    Used for memory over-provisioning calculation.

vmware.reserve.cpu (scope: cluster)
    Specify whether or not to reserve CPU when not over-provisioning; in case of CPU over-provisioning, CPU is always reserved.

vmware.reserve.mem (scope: cluster)
    Specify whether or not to reserve memory when not over-provisioning; in case of memory over-provisioning, memory is always reserved.

pool.storage.allocated.capacity.disablethreshold (scope: zone)
    The percentage, as a value between 0 and 1, of allocated storage utilization above which allocators will disable the pool because the available allocated storage is below the threshold.

pool.storage.capacity.disablethreshold (scope: zone)
    The percentage, as a value between 0 and 1, of storage utilization above which allocators will disable the pool because the available storage capacity is below the threshold.

storage.overprovisioning.factor (scope: zone)
    Used for storage over-provisioning calculation; available storage will be the mathematical product of actualStorageSize and storage.overprovisioning.factor.

network.throttling.rate (scope: zone)
    Default data transfer rate in megabits per second allowed in a network.

guest.domain.suffix (scope: zone)
    Default domain name for VMs inside a virtual network with a router.

router.template.xen (scope: zone)
    Name of the default router template on XenServer.

router.template.kvm (scope: zone)
    Name of the default router template on KVM.

router.template.vmware (scope: zone)
    Name of the default router template on VMware.

enable.dynamic.scale.vm (scope: zone)
    Enable or disable dynamic scaling of a VM.

use.external.dns (scope: zone)
    Bypass internal DNS and use the external DNS1 and DNS2.

blacklisted.routes (scope: zone)
    Routes that are blacklisted cannot be used for creating static routes for a VPC Private Gateway.
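To make the over-provisioning factors concrete, here is a small arithmetic sketch. The host counts and clock speeds are invented for illustration:

```shell
# Hypothetical cluster: 4 hosts x 8 cores x 2000 MHz of physical CPU.
actual_mhz=$((4 * 8 * 2000))             # 64000 MHz physically present
factor=2                                 # cpu.overprovisioning.factor
available_mhz=$((actual_mhz * factor))   # capacity the allocators will advertise
echo "$available_mhz"                    # prints 128000
```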
Chapter 6.
User Interface

6.1. Supported Browsers

The CloudPlatform web-based UI is available in the following popular browsers:
• Mozilla Firefox 22 or greater
• Apple Safari, all versions packaged with Mac OS X 10.5 (Leopard) or greater
• Google Chrome, all versions starting from the year 2012
• Microsoft Internet Explorer 9 or greater

6.2. Log In to the UI

CloudPlatform provides a web-based UI that can be used by both administrators and end users. The appropriate version of the UI is displayed depending on the credentials used to log in.
The URL to log in to CloudPlatform is: (substitute your own management server IP address)
http://<management-server-ip-address>:8080/client
On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you’ll see a login screen where you specify the following to proceed to your Dashboard:
Username
The user ID of your account. The default username is admin.
Password
The password associated with the user ID. The password for the default username is password.
Domain
If you are a root user, leave this field blank. If you are a user in a sub-domain, enter the full path to the domain, excluding the root domain. For example, suppose multiple levels are created under the root domain, such as Comp1/sales. Users in the Comp1 domain should enter Comp1 in the Domain field, whereas users in the Comp1/sales domain should enter Comp1/sales.
For more guidance about the choices that appear when you log in to this UI, see Logging In as the Root Administrator.

6.2.1. End User's UI Overview

The CloudPlatform UI helps users of cloud infrastructure to view and use their cloud resources, including virtual machines, templates and ISOs, data volumes and snapshots, guest networks, and IP addresses. If the user is a member or administrator of one or more CloudPlatform projects, the UI can provide a project-oriented view.

6.2.2. Root Administrator's UI Overview

The CloudPlatform UI helps the CloudPlatform administrator provision, view, and manage the cloud infrastructure, domains, user accounts, projects, and configuration settings. The first time you start the UI after a fresh Management Server installation, you can choose to follow a guided tour to provision your cloud infrastructure. On subsequent logins, the dashboard of the logged-in user appears. The various links in this screen and the navigation bar on the left provide access to a variety of administrative functions. The root administrator can also use the UI to perform all the same tasks that are present in the end-user’s UI.

6.2.3. Logging In as the Root Administrator

After the Management Server software is installed and running, you can run the CloudPlatform user interface. This UI is there to help you provision, view, and manage your cloud infrastructure.
1. Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:
http://<management-server-ip-address>:8080/client
On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you’ll see a login screen where you can enter a user ID and password and proceed to your Dashboard.
2. If you see the first-time splash screen, choose one of the following.
Continue with basic setup. Choose this if you're just trying CloudPlatform, and you want
a guided walkthrough of the simplest possible configuration so that you can get started right away. We'll help you set up a cloud with the following features: a single machine that runs CloudPlatform software and uses NFS to provide storage; a single machine running VMs under the XenServer or KVM hypervisor; and a shared public network.
The prompts in this guided tour should give you all the information you need, but if you want just a bit more detail, you can follow along in the Trial Installation Guide.
I have used CloudPlatform before. Choose this if you have already gone through a design
phase and planned a more sophisticated deployment, or you are ready to start scaling up a trial cloud that you set up earlier with the basic setup screens. In the Administrator UI, you can start using the more powerful features of CloudPlatform, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere.
The root administrator Dashboard appears.
3. You should set a new root administrator password. If you chose basic setup, you’ll be prompted to create a new password right away. If you chose experienced user, use the steps in Section 6.2.4,
“Changing the Root Password”.
Warning
You are logging in as the root administrator. This account manages the CloudPlatform deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. Please change the default password to a new, unique password.

6.2.4. Changing the Root Password

During installation and ongoing cloud administration, you will need to log in to the UI as the root administrator. The root administrator account manages the CloudPlatform deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. When first installing CloudPlatform, be sure to change the default password to a new, unique value.
1. Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:
http://<management-server-ip-address>:8080/client
2. Log in to the UI using the current root user ID and password. The default is admin, password.
3. Click Accounts.
4. Click the admin account name.
5. Click View Users.
6. Click the admin user name.
7. Click the Change Password button.
8. Type the new password, and click OK.

6.3. Using SSH Keys for Authentication

In addition to the username and password authentication, CloudPlatform supports using SSH keys to log in to the cloud infrastructure for additional security for your cloud infrastructure. You can use the createSSHKeyPair API to generate the SSH keys.
Because each cloud user has their own ssh key, one cloud user cannot log in to another cloud user's instances unless they share their ssh key files. Using a single SSH key pair, you can manage multiple instances.
6.3.1. Creating an Instance from a Template that Supports SSH Keys
Perform the following:
1. Create a new instance by using the template provided by CloudPlatform.
For more information on creating a new instance, see Creating VMs in the Administration Guide.
2. Download the script file cloud-set-guest-sshkey from the following link:
http://download.cloud.com/templates/4.2/bindir/cloud-set-guest-sshkey.in
3. Copy the file to /etc/init.d.
4. Give the necessary permissions on the script:
chmod +x /etc/init.d/cloud-set-guest-sshkey
5. Configure the script to run when the operating system starts up:
chkconfig --add cloud-set-guest-sshkey
6. Stop the instance.

6.3.2. Creating the SSH Keypair

You must make a call to the createSSHKeyPair API method. You can use either the CloudPlatform Python API library or curl to make the call to the CloudPlatform API.
For example, make a call from the CloudPlatform server to create a SSH keypair called "keypair-doc" for the admin account in the root domain:
Note
Ensure that you adjust these values to meet your needs. If you are making the API call from a different server, your URL or port number will be different, and you will need to use the API keys.
1. Run the following curl command:
curl --globoff "http://localhost:8080/?command=createSSHKeyPair&name=keypair-doc&account=admin&domainid=1"
The output is similar to the following:

<?xml version="1.0" encoding="ISO-8859-1"?><createsshkeypairresponse cloud-stack-version="3.0.0.20120228045507"><keypair><name>keypair-doc</name><fingerprint>f6:77:39:d5:5e:77:02:22:6a:d8:7f:ce:ab:cd:b3:56</fingerprint><privatekey>-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
-----END RSA PRIVATE KEY-----</privatekey></keypair></createsshkeypairresponse>
2. Copy the key data into a file. The file looks like this:
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
-----END RSA PRIVATE KEY-----
3. Save the file.
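Steps 2 and 3 can also be scripted. The following is a sketch only, not part of the official procedure: it pulls the <privatekey> element out of a saved API response and stores it with permissions that ssh will accept. A trimmed sample response stands in here for real output saved as keypair.xml:

```shell
# Create a trimmed stand-in for the saved createSSHKeyPair response.
cat > keypair.xml <<'EOF'
<createsshkeypairresponse><keypair><name>keypair-doc</name><privatekey>-----BEGIN RSA PRIVATE KEY-----
(sample key material)
-----END RSA PRIVATE KEY-----</privatekey></keypair></createsshkeypairresponse>
EOF

# Print the lines spanning the <privatekey> element, then strip the tags.
sed -n '/<privatekey>/,/<\/privatekey>/p' keypair.xml \
  | sed -e 's/.*<privatekey>//' -e 's|</privatekey>.*||' > keypair-doc
chmod 600 keypair-doc    # ssh refuses private keys that other users can read
head -1 keypair-doc      # prints the BEGIN line of the extracted key
```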

6.3.3. Creating an Instance

Ensure that you use the same SSH key name that you created.
Note

At this time, you cannot use the GUI to create an instance and associate it with the newly created SSH keypair.
A sample curl command to create a new instance is:
curl --globoff "http://localhost:<port number>/?command=deployVirtualMachine&zoneId=1&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5&account=admin&domainid=1&keypair=keypair-doc"
Substitute the template, service offering and security group IDs (if you are using the security group feature) that are in your cloud environment.

6.3.4. Logging In Using the SSH Keypair

To test your SSH key generation is successful, check whether you can log in to the cloud setup. For example, from a Linux OS, run:
ssh -i ~/.ssh/keypair-doc <ip address>
The -i parameter directs the ssh client to use the SSH private key found at ~/.ssh/keypair-doc.

6.3.5. Resetting SSH Keys

With the API command resetSSHKeyForVirtualMachine, a user can set or reset the SSH keypair assigned to a virtual machine. A lost or compromised SSH keypair can be changed, and the user can access the VM by using the new keypair. Just create or register a new keypair, then call resetSSHKeyForVirtualMachine.
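In the same style as the earlier curl examples, a reset request might be assembled as follows. This is a sketch with placeholder values: <vm-id> must be replaced with a real virtual machine ID, and the host, port, and authentication must match your deployment.

```shell
# Build a resetSSHKeyForVirtualMachine request; <vm-id> is a placeholder.
vmid="<vm-id>"
keypair="keypair-doc"
url="http://localhost:8080/?command=resetSSHKeyForVirtualMachine&id=${vmid}&keypair=${keypair}"
echo "$url"
# curl --globoff "$url"    # issue against a real Management Server
```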
Chapter 7.
Steps to Provisioning Your Cloud Infrastructure
This section tells how to add regions, zones, pods, clusters, hosts, storage, and networks to your cloud. If you are unfamiliar with these entities, please begin by looking through Chapter 3, Cloud
Infrastructure Concepts.

7.1. Overview of Provisioning Steps

After the Management Server is installed and running, you can add the compute resources for it to manage. For an overview of how a CloudPlatform cloud infrastructure is organized, see Section 2.3.2,
“Cloud Infrastructure Overview”.
To provision the cloud infrastructure, or to scale it up at any time, follow these procedures:
1. Define regions (optional). See Section 7.2, “Adding Regions (optional)”.
2. Add a zone to the region. See Section 7.3, “Adding a Zone”.
3. Add more pods to the zone (optional). See Section 7.4, “Adding a Pod”.
4. Add more clusters to the pod (optional). See Section 7.5, “Adding a Cluster”.
5. Add more hosts to the cluster (optional). See Section 7.6, “Adding a Host”.
6. Add primary storage to the cluster. See Section 7.7, “Adding Primary Storage”.
7. Add secondary storage to the zone. See Section 7.8, “Adding Secondary Storage”.
8. Initialize and test the new cloud. See Section 7.9, “Initialize and Test”.
When you have finished these steps, you will have a deployment with the basic structure described in Section 2.3.2, “Cloud Infrastructure Overview”.

7.2. Adding Regions (optional)

Grouping your cloud resources into geographic regions is an optional step when provisioning the cloud. For an overview of regions, see Section 3.1, “About Regions”.

7.2.1. The First Region: The Default Region

If you do not take action to define regions, then all the zones in your cloud will be automatically grouped into a single default region. This region is assigned the region ID of 1. You can change the name or URL of the default region by displaying the region in the CloudPlatform UI and clicking the Edit button.

7.2.2. Adding a Region

Use these steps to add a second region in addition to the default region.
1. Each region has its own CloudPlatform instance. Therefore, the first step of creating a new region is to install the Management Server software, on one or more nodes, in the geographic area where you want to set up the new region. Use the steps in the Installation guide. When you come to the step where you set up the database, use the additional command-line flag -r <region_id> to set a region ID for the new region. The default region is automatically assigned a region ID of 1, so your first additional region might be region 2.
cloudstack-setup-databases cloud:<dbpassword>@localhost --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key> -r <region_id>
2. By the end of the installation procedure, the Management Server should have been started. Be sure that the Management Server installation was successful and complete.
3. Now add the new region to region 1 in CloudPlatform.
a. Log in to CloudPlatform in the first region as root administrator (that is, log in to <region.1.IP.address>:8080/client).
b. In the left navigation bar, click Regions.
c. Click Add Region. In the dialog, fill in the following fields:
• ID. A unique identifying number. Use the same number you set in the database during Management Server installation in the new region; for example, 2.
• Name. Give the new region a descriptive name.
• Endpoint. The URL where you can log in to the Management Server in the new region. This has the format <region.2.IP.address>:8080/client.
4. Now perform the same procedure in reverse. Log in to region 2, and add region 1.
5. Copy the account, user, and domain tables from the region 1 database to the region 2 database. In the following commands, it is assumed that you have set the root password on the database,
which is a CloudPlatform recommended best practice. Substitute your own MySQL root password.
a. First, run this command to copy the contents of the database:
# mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
b. Then run this command to put the data onto the region 2 database:
# mysql -u root -p<mysql_password> -h <region2_db_host> cloud < region1.sql
6. Remove project accounts. Run this command on the region 2 database:
mysql> delete from account where type = 5;
7. Set the default zone as null:
mysql> update account set default_zone_id = null;
8. Restart the Management Servers in region 2.

7.2.3. Adding Third and Subsequent Regions

To add the third region, and subsequent additional regions, the steps are similar to those for adding the second region. However, you must repeat certain steps additional times for each additional region:
1. Install CloudPlatform in each additional region. Set the region ID for each region during the database setup step.
cloudstack-setup-databases cloud:<dbpassword>@localhost --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key> -r <region_id>
2. Once the Management Server is running, add your new region to all existing regions by repeatedly using the Add Region button in the UI. For example, if you were adding region 3:
a. Log in to CloudPlatform in the first region as root administrator (that is, log in to
<region.1.IP.address>:8080/client), and add a region with ID 3, the name of region 3, and the endpoint <region.3.IP.address>:8080/client.
b. Log in to CloudPlatform in the second region as root administrator (that is, log in to
<region.2.IP.address>:8080/client), and add a region with ID 3, the name of region 3, and the endpoint <region.3.IP.address>:8080/client.
3. Repeat the procedure in reverse to add all existing regions to the new region. For example, for the third region, add the other two existing regions:
a. Log in to CloudPlatform in the third region as root administrator (that is, log in to
<region.3.IP.address>:8080/client).
b. Add a region with ID 1, the name of region 1, and the endpoint <region.1.IP.address>:8080/client.
c. Add a region with ID 2, the name of region 2, and the endpoint <region.2.IP.address>:8080/client.
4. Copy the account, user, and domain tables from any existing region's database to the new region's database.
In the following commands, it is assumed that you have set the root password on the database, which is a CloudPlatform recommended best practice. Substitute your own MySQL root password.
a. First, run this command to copy the contents of the database:
# mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
b. Then run this command to put the data onto the new region's database. For example, for
region 3:
# mysql -u root -p<mysql_password> -h <region3_db_host> cloud < region1.sql
5. Remove project accounts. Run this command on the region 3 database:
mysql> delete from account where type = 5;
6. Set the default zone as null:
mysql> update account set default_zone_id = null;
7. Restart the Management Servers in the new region.

7.2.4. Deleting a Region

Log in to each of the other regions, navigate to the one you want to delete, and click Remove Region. For example, to remove the third region in a 3-region cloud:
1. Log in to <region.1.IP.address>:8080/client.
2. In the left navigation bar, click Regions.
3. Click the name of the region you want to delete.
4. Click the Remove Region button.
5. Repeat these steps for <region.2.IP.address>:8080/client.

7.3. Adding a Zone

Adding a zone consists of three phases:
• Create a mount point for secondary storage on the Management Server.
• Seed the system VM template on the secondary storage.
• Add the zone.

7.3.1. Create a Secondary Storage Mount Point for the New Zone

To be sure the most up-to-date system VMs are deployed in new zones, you need to seed the latest system VM template to the zone's secondary storage. The first step is to create a mount point for the secondary storage. Then seed the system VM template.
1. On the management server, create a mount point for secondary storage. For example:
# mkdir -p /mnt/secondary
2. Mount the secondary storage on your Management Server. Replace the example NFS server name and NFS share paths below with your own.
# mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
3. Secondary storage must be seeded with a template that is used for CloudPlatform system VMs. Use the steps in Section 5.4.10, “Prepare the System VM Template”. Then return here and continue with adding the zone.
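If you want the secondary storage mount to persist across Management Server reboots, one common approach is an /etc/fstab entry. The NFS server name and share path below are the same examples used above; substitute your own.

```shell
# Run as root. Appends an example NFS entry to /etc/fstab, then mounts
# everything listed there and verifies the result.
echo "nfsservername:/nfs/share/secondary /mnt/secondary nfs defaults 0 0" >> /etc/fstab
mount -a
df -h /mnt/secondary    # should show the NFS share mounted
```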

7.3.2. Steps to Add a New Zone

When you add a new zone, you will be prompted to configure the zone’s physical network and add the first pod, cluster, host, primary storage, and secondary storage.
1. Be sure you have first performed the steps to seed the system VM template.
2. Log in to the CloudPlatform UI as the root administrator. See Section 6.2, “Log In to the UI”.
3. In the left navigation, choose Infrastructure.
4. On Zones, click View More.
5. Click Add Zone. The zone creation wizard will appear.
6. Choose one of the following network types:
Basic. For AWS-style networking. Provides a single network where each VM instance is
assigned an IP directly from the network. Guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall, VPN, or load balancer support.
For more information about the network types, see Network Setup.
7. The rest of the steps differ depending on whether you chose Basic or Advanced. Continue with the steps that apply to you:
Section 7.3.2.1, “Basic Zone Configuration”
Section 7.3.2.2, “Advanced Zone Configuration”
7.3.2.1. Basic Zone Configuration
1. After you select Basic in the Add Zone wizard and click Next, you will be asked to enter the following details. Then click Next.
Name. A name for the zone.
DNS 1 and 2. These are DNS servers for use by guest VMs in the zone. These DNS servers
will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
Internal DNS 1 and Internal DNS 2. These are DNS servers for use by system VMs in the
zone (these are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary Storage VMs.) These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.
Hypervisor. Choose the hypervisor for the first cluster in the zone. You can add clusters with
different hypervisors later, after you finish adding the zone.
Network Offering. Your choice here determines what network services will be available on the
network for guest VMs.
DefaultSharedNetworkOfferingWithSGService. If you want to enable security groups for guest traffic isolation, choose this. (See Using Security Groups to Control Traffic to VMs.)
DefaultSharedNetworkOffering. If you do not need security groups, choose this.
DefaultSharedNetscalerEIPandELBNetworkOffering. If you have installed a Citrix NetScaler appliance as part of your zone network, and you will be using its Elastic IP and Elastic Load Balancing features, choose this. With the EIP and ELB features, a basic zone with security groups enabled can offer 1:1 static NAT and load balancing.
Network Domain. (Optional) If you want to assign a special domain name to the guest VM
network, specify the DNS suffix.
Public. A public zone is available to all users. A zone that is not public will be assigned to a particular domain. Only users in that domain will be allowed to create guest VMs in this zone.
2. Choose which traffic types will be carried by the physical network. The traffic types are management, public, guest, and storage traffic. For more information about
the types, roll over the icons to display their tool tips, or see Basic Zone Network Traffic Types. This screen starts out with some traffic types already assigned. To add more, drag and drop traffic types onto the network. You can also change the network name if desired.
3. Assign a network traffic label to each traffic type on the physical network. These labels must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon. A popup dialog appears where you can type the label, then click OK.
These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors, the labels can be configured after the zone is created.
4. Click Next.
5. (NetScaler only) If you chose the network offering for NetScaler, you have an additional screen to fill out. Provide the requested details to set up the NetScaler, then click Next.
IP address. The NSIP (NetScaler IP) address of the NetScaler device.
Username/Password. The authentication credentials to access the device. CloudPlatform uses
these credentials to access the device.
Type. NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or
NetScaler SDX. For a comparison of the types, see About Using a NetScaler Load Balancer.
Public interface. Interface of NetScaler that is configured to be part of the public network.
Private interface. Interface of NetScaler that is configured to be part of the private network.
Number of retries. Number of times to attempt a command on the device before considering
the operation failed. Default is 2.
Capacity. Number of guest networks/accounts that will share this NetScaler device.
Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
6. (NetScaler only) Configure the IP range for public traffic. The IPs in this range will be used for the static NAT capability which you enabled by selecting the network offering for NetScaler with EIP and ELB. Enter the following details, then click Add. If desired, you can repeat this step to add more IP ranges. When done, click Next.
Gateway. The gateway in use for these IP addresses.
Netmask. The netmask associated with this IP range.
VLAN. The VLAN that will be used for public traffic.
Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet
and will be allocated for access to guest VMs.
7. In a new zone, CloudPlatform adds the first pod for you. You can always add more pods later. For an overview of what a pod is, see Section 3.3, “About Pods”.
To configure the first pod, enter the following, then click Next:
Pod Name. A name for the pod.
Reserved system gateway. The gateway for the hosts in that pod.
Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR
notation.
Start/End Reserved System IP. The IP range in the management network that CloudPlatform
uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
8. Configure the network for guest traffic. Provide the following, then click Next:
Guest gateway. The gateway that the guests should use.
Guest netmask. The netmask in use on the subnet the guests will use.
Guest start IP/End IP. Enter the first and last IP addresses that define a range that
CloudPlatform can assign to guests.
• We strongly recommend the use of multiple NICs. If multiple NICs are used, they may be in a different subnet.
• If one NIC is used, these IPs should be in the same CIDR as the pod CIDR.
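The single-NIC rule above (guest IPs inside the pod CIDR) can be checked mechanically before you fill in the wizard. The helper below is purely illustrative and not part of CloudPlatform; it tests whether an IPv4 address falls inside a CIDR block.

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    oldifs=$IFS; IFS=.
    set -- $1
    IFS=$oldifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_cidr <ip> <network/prefix>: succeed if the IP is inside the CIDR.
in_cidr() {
    ip=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# Example: pod CIDR 192.168.1.0/24 and two candidate guest IPs.
in_cidr 192.168.1.100 192.168.1.0/24 && echo "192.168.1.100 is inside the pod CIDR"
in_cidr 192.168.2.100 192.168.1.0/24 || echo "192.168.2.100 is outside the pod CIDR"
```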
9. In a new pod, CloudPlatform adds the first cluster for you. You can always add more clusters later. For an overview of what a cluster is, see About Clusters.
To configure the first cluster, enter the following, then click Next:
Hypervisor. The type of hypervisor software that all hosts in this cluster will run. If the
hypervisor is VMware, additional fields appear so you can give information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudPlatform. See Section 7.5.3, “Add Cluster: vSphere”.
Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used
by CloudPlatform.
10. In a new cluster, CloudPlatform adds the first host for you. You can always add more hosts later. For an overview of what a host is, see About Hosts.
Note
When you add a hypervisor host to CloudPlatform, the host must not have any VMs already running.
Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. To find these installation details, see:
• Citrix XenServer Installation and Configuration
• VMware vSphere Installation and Configuration
• KVM Installation and Configuration
• Oracle VM (OVM) Installation and Configuration
To configure the first host, enter the following, then click Next:
Host Name. The DNS name or IP address of the host.
Username. The username is root.
Password. This is the password for the user named above (from your XenServer or KVM install).
Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts.
11. In a new cluster, CloudPlatform adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see About Primary Storage.
To configure the first primary storage server, enter the following, then click Next:
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or
SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here.
7.3.2.2. Advanced Zone Configuration
1. After you select Advanced in the Add Zone wizard and click Next, you will be asked to enter the following details. Then click Next.
Name. A name for the zone.
DNS 1 and 2. These are DNS servers for use by guest VMs in the zone. These DNS servers
will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
Internal DNS 1 and Internal DNS 2. These are DNS servers for use by system VMs in the zone (these are VMs used by CloudPlatform itself, such as virtual routers, console proxies, and Secondary Storage VMs.) These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.
Network Domain. (Optional) If you want to assign a special domain name to the guest VM
network, specify the DNS suffix.
Guest CIDR. This is the CIDR that describes the IP addresses in use in the guest virtual
networks in this zone. For example, 10.1.1.0/24. As a matter of good practice you should set different CIDRs for different zones. This will make it easier to set up VPNs between networks in different zones.
Hypervisor. Choose the hypervisor for the first cluster in the zone. You can add clusters with different hypervisors later, after you finish adding the zone.
Public. A public zone is available to all users. A zone that is not public will be assigned to a particular domain. Only users in that domain will be allowed to create guest VMs in this zone.
2. Choose which traffic types will be carried by the physical network. The traffic types are management, public, guest, and storage traffic. For more information about
the types, roll over the icons to display their tool tips, or see Section 3.8.3, “Advanced Zone
Network Traffic Types”. This screen starts out with one network already configured. If you have
multiple physical networks, you need to add more. Drag and drop traffic types onto a greyed-out network and it will become active. You can move the traffic icons from one network to another; for example, if the default traffic types shown for Network 1 do not match your actual setup, you can move them down. You can also change the network names if desired.
3. Assign a network traffic label to each traffic type on each physical network. These labels must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon within each physical network. A popup dialog appears where you can type the label, then click OK.
These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors, the labels can be configured after the zone is created.
(VMware only) If you have enabled Nexus dvSwitch in the environment, you must specify the corresponding Ethernet port profile names as network traffic label for each traffic type on the physical network. For more information on Nexus dvSwitch, see Configuring a vSphere Cluster with Nexus 1000v Virtual Switch. If you have enabled VMware dvSwitch in the environment, you must specify the corresponding Switch name as network traffic label for each traffic type on the physical network. For more information, see Configuring a VMware Datacenter with VMware Distributed Virtual Switch in the Installation Guide.
4. Click Next.
5. Configure the IP range for public Internet traffic. Enter the following details, then click Add. If desired, you can repeat this step to add more public Internet IP ranges. When done, click Next.
Gateway. The gateway in use for these IP addresses.
Netmask. The netmask associated with this IP range.
VLAN. The VLAN that will be used for public traffic.
Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet
and will be allocated for access to guest networks.
6. In a new zone, CloudPlatform adds the first pod for you. You can always add more pods later. For an overview of what a pod is, see Section 3.3, “About Pods”.
To configure the first pod, enter the following, then click Next:
Pod Name. A name for the pod.
Reserved system gateway. The gateway for the hosts in that pod.
Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR
notation.
Start/End Reserved System IP. The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see Section 3.8.6, “System Reserved IP Addresses”.
7. Specify a range of VLAN IDs to carry guest traffic for each physical network (see VLAN Allocation Example), then click Next.
8. In a new pod, CloudPlatform adds the first cluster for you. You can always add more clusters later. For an overview of what a cluster is, see Section 3.4, “About Clusters”.
To configure the first cluster, enter the following, then click Next:
Hypervisor. The type of hypervisor software that all hosts in this cluster will run. If the
hypervisor is VMware, additional fields appear so you can give information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudPlatform. See Section 7.5.3, “Add Cluster: vSphere”.
Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used
by CloudPlatform.
9. In a new cluster, CloudPlatform adds the first host for you. You can always add more hosts later. For an overview of what a host is, see Section 3.5, “About Hosts”.
Note
When you deploy CloudPlatform, the hypervisor host must not have any VMs already running.
Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudPlatform and what additional configuration is required to ensure the host will work with CloudPlatform. To find these installation details, see:
• Citrix XenServer Installation for CloudPlatform
• VMware vSphere Installation and Configuration
• KVM Installation and Configuration
• Oracle VM (OVM) Installation and Configuration
To configure the first host, enter the following, then click Next:
Host Name. The DNS name or IP address of the host.
Username. Usually root.
Password. This is the password for the user named above (from your XenServer or KVM
install).
Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For
more information, see HA-Enabled Virtual Machines as well as HA for Hosts, both in the Administration Guide.
10. In a new cluster, CloudPlatform adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see Section 3.6, “About Primary
Storage”.
To configure the first primary storage server, enter the following, then click Next:
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or
SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here.
NFS Server. The IP address or DNS name of the storage device.
Path. The exported path from the server.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
iSCSI Server. The IP address or DNS name of the storage device.
Target IQN. The IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
Lun. The LUN number. For example, 3.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
preSetup Server. The IP address or DNS name of the storage device.
SR Name-Label. Enter the name-label of the SR that has been set up outside CloudPlatform.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
SharedMountPoint Path. The path on each host where this primary storage is mounted. For example, "/mnt/primary".
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
VMFS Server. The IP address or DNS name of the vCenter server.
Path. A combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
11. In a new zone, CloudPlatform adds the first secondary storage server for you. For an overview of what secondary storage is, see Section 3.7, “About Secondary Storage”.
Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudPlatform System VM template. See Section 7.8, “Adding
Secondary Storage”.
To configure the first secondary storage server, enter the following, then click Next:
NFS Server. The IP address of the server.
Path. The exported path from the server.
12. Click Launch.
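Before clicking Launch, it can save a failed zone deployment to verify the secondary storage export from the Management Server. The server name and share path below are examples; showmount is part of the standard NFS utilities.

```shell
# Confirm the NFS server exports the expected path, then try a test mount.
showmount -e nfsservername
mkdir -p /mnt/secondary-test
mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary-test
ls /mnt/secondary-test     # the seeded system VM template should be visible
umount /mnt/secondary-test
```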

7.4. Adding a Pod

When you create a new zone, CloudPlatform adds the first pod for you. You can add more pods at any time using the procedure in this section.
1. Log in to the CloudPlatform UI. See Section 6.2, “Log In to the UI”.
2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone to which you want to add a pod.
3. Click the Compute and Storage tab. In the Pods node of the diagram, click View All.
4. Click Add Pod.
5. Enter the following details in the dialog.
Name. The name of the pod.
Gateway. The gateway for the hosts in that pod.
Netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
Start/End Reserved System IP. The IP range in the management network that CloudPlatform uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
6. Click OK.
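If you prefer scripting over the UI, the same pod can be created through the CloudStack API. The sketch below assumes you have the cloudmonkey CLI configured against your management server; every value shown is a placeholder.

```shell
# Sketch: add a pod via the API instead of the UI (all values are placeholders).
cloudmonkey create pod \
    zoneid=<zone-uuid> \
    name=pod2 \
    gateway=192.168.10.1 \
    netmask=255.255.255.0 \
    startip=192.168.10.10 \
    endip=192.168.10.50
```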

7.5. Adding a Cluster

You need to tell CloudPlatform about the hosts that it will manage. Hosts exist inside clusters, so before you begin adding hosts to the cloud, you must add at least one cluster.

7.5.1. Add Cluster: KVM or XenServer

These steps assume you have already installed the hypervisor on the hosts and logged in to the CloudPlatform UI.
1. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the cluster.
2. Click the Compute tab.
3. In the Clusters node of the diagram, click View All.
4. Click Add Cluster.
5. Choose the hypervisor type for this cluster.
6. Choose the pod in which you want to create the cluster.
7. Enter a name for the cluster. This can be text of your choosing and is not used by CloudPlatform.
8. Click OK.
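The equivalent API call, again assuming a configured cloudmonkey CLI and placeholder IDs, might look like this:

```shell
# Sketch: add a KVM cluster via the API (all values are placeholders).
cloudmonkey add cluster \
    zoneid=<zone-uuid> \
    podid=<pod-uuid> \
    clustername=cluster1 \
    hypervisor=KVM \
    clustertype=CloudManaged
```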

7.5.2. Add Cluster: OVM

To add a Cluster of hosts that run Oracle VM (OVM):
1. Add a companion non-OVM cluster to the Pod. This cluster provides an environment where the CloudPlatform System VMs can run. You should have already installed a non-OVM hypervisor on at least one Host to prepare for this step. Depending on which hypervisor you used:
• For VMWare, follow the steps in Add Cluster: vSphere. When finished, return here and continue
with the next step.
• For KVM or XenServer, follow the steps in Section 7.5.1, “Add Cluster: KVM or XenServer”.
When finished, return here and continue with the next step.
2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the cluster.
3. Click the Compute tab. In the Pods node, click View All. Select the same pod you used in step 1.
4. Click View Clusters, then click Add Cluster. The Add Cluster dialog is displayed.
5. In Hypervisor, choose OVM.
6. In Cluster, enter a name for the cluster.
7. Click Add.

7.5.3. Add Cluster: vSphere

Host management for vSphere is done through a combination of vCenter and the CloudPlatform UI. CloudPlatform requires that all hosts be in a CloudPlatform cluster, but the cluster may consist of a single host. As an administrator you must decide if you would like to use clusters of one host or of multiple hosts. Clusters of multiple hosts allow for features like live migration. Clusters also require shared storage.
For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudPlatform.
7.5.3.1. VMware Cluster Size Limit
The maximum number of hosts in a vSphere cluster is determined by the VMware hypervisor software. For VMware versions 4.2, 4.1, 5.0, and 5.1, the limit is 32 hosts. CloudPlatform adheres to this maximum.
Note
Best Practice: It is advisable for VMware clusters in CloudPlatform to be smaller than the VMware hypervisor's maximum size. A cluster size of up to 8 hosts has been found optimal for most real-world situations.
7.5.3.2. Adding a vSphere Cluster
To add a vSphere cluster to CloudPlatform:
1. Create the cluster of hosts in vCenter. Follow the vCenter instructions to do this.
2. Log in to the UI.
3. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the cluster.
4. Click the Compute tab, and click View All on Pods. Choose the pod to which you want to add the cluster.
5. Click View Clusters.
6. Click Add Cluster.
7. In Hypervisor, choose VMware.
8. Provide the following information in the dialog. The fields below make reference to values from vCenter.
• Cluster Name. Enter the name of the cluster you created in vCenter. For example, "cloud.cluster.2.2.1".
• vCenter Host. Enter the hostname or IP address of the vCenter server.
• vCenter Username. Enter the username that CloudPlatform should use to connect to vCenter.
This user must have all administrative privileges.
• vCenter Password. Enter the password for the user named above.
• vCenter Datacenter. Enter the vCenter datacenter that the cluster is in. For example,
"cloud.dc.VM".
If you have enabled Nexus dvSwitch in the environment, the following parameters for dvSwitch configuration are displayed:
• Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance.
• Nexus dvSwitch Username: The username required to access the Nexus VSM appliance.
• Nexus dvSwitch Password: The password associated with the username specified above.
There might be a slight delay while the cluster is provisioned. It will automatically display in the UI.
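For scripted deployments, a vSphere cluster can also be added through the API. This sketch assumes a configured cloudmonkey CLI; the URL encodes the vCenter host, datacenter, and cluster names from the dialog above, and all values are placeholders.

```shell
# Sketch: add a vCenter-managed cluster via the API (all values are placeholders).
cloudmonkey add cluster \
    zoneid=<zone-uuid> \
    podid=<pod-uuid> \
    clustername=cloud.cluster.2.2.1 \
    hypervisor=VMware \
    clustertype=ExternalManaged \
    url=http://vcenter.example.com/cloud.dc.VM/cloud.cluster.2.2.1 \
    username=administrator \
    password=<vcenter_password>
```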