CloudStack Install Guide
© 2011, 2012 Citrix Systems, Inc. All rights reserved. Specifications are subject to change without notice. Citrix Systems, Inc., the Citrix logo, Citrix XenServer, Citrix XenCenter, and CloudStack are trademarks or registered trademarks of Citrix Systems, Inc. All other brands or products are trademarks or registered trademarks of their respective holders.
May 8, 2012
Contents
What's In This Guide  11
What Is CloudStack?  12
What Can CloudStack Do?  13
Deployment Architecture Overview  14
    Management Server Overview  14
    Cloud Infrastructure Overview  15
    Networking Overview  16
Overview of Installation Steps  17
System Requirements  18
Management Server Single-Node Installation  19
    Prepare the Operating System  19
    Install the Management Server  21
    Install and Configure the Database  22
    Prepare NFS Shares  24
        Using a Separate NFS Server  24
        About Password and Key Encryption  25
        Using the Management Server as the NFS Server  25
    Prepare the System VM Template  27
    Single-Node Installation Complete! Next Steps  28
Management Server Multi-Node Installation  29
    Prepare the Operating System  29
    Install the First Management Server  31
    Install and Configure the Database  32
        About Password and Key Encryption  34
    Prepare NFS Shares  34
        Using a Separate NFS Server  35
        Using the Management Server as the NFS Server  35
    Prepare and Start Additional Management Servers  37
    Prepare the System VM Template  38
    Multi-Node Installation Complete! Next Steps  39
Log In to the CloudStack UI  40
Provision Your Cloud Infrastructure  41
Change the Root Password  42
Add a Zone  43
    About Zones  43
    About Physical Networks  44
        Basic Zone Network Traffic Types  44
        Basic Zone Guest IP Addresses  45
        Advanced Zone Network Traffic Types  45
        Advanced Zone Guest IP Addresses  45
        Advanced Zone Public IP Addresses  45
        System Reserved IP Addresses  46
    Using Security Groups to Control Traffic to VMs  47
        About Security Groups  47
        Security Groups in Basic and Advanced Zones  47
        Enabling Security Groups  47
        Working With Security Groups  47
    Adding a Zone  48
        Basic Zone Configuration  49
        Advanced Zone Configuration  53
Add More Pods (Optional)  57
    About Pods  57
    Adding a Pod  57
Add More Clusters (Optional)  59
    About Clusters  59
    Add Cluster: KVM or XenServer  59
    Add Cluster: OVM  60
    Add Cluster: vSphere  60
    Add Cluster: Bare Metal  62
Add More Hosts (Optional)  63
    About Hosts  63
        Host Allocation  63
    Install Hypervisor Software on Hosts  64
    Add Hosts to CloudStack (XenServer, KVM, or OVM)  64
        Requirements for XenServer, KVM, and OVM Hosts  64
        Adding a XenServer, KVM, or OVM Host  65
    Add Hosts (vSphere)  66
    Add Hosts (Bare Metal)  66
Add Primary Storage  67
    About Primary Storage  67
    System Requirements for Primary Storage  67
    Adding Primary Storage  67
Add Secondary Storage  69
    About Secondary Storage  69
    System Requirements for Secondary Storage  69
    Adding Secondary Storage  69
Initialization and Testing  71
Citrix XenServer Installation for CloudStack  72
    System Requirements for XenServer Hosts  72
    XenServer Installation Steps  72
    Configure XenServer dom0 Memory  73
    Username and Password  73
    Time Synchronization  73
    Licensing  74
        Getting and Deploying a License  74
    Install CloudStack XenServer Support Package (CSP)  74
    Primary Storage Setup for XenServer  75
    iSCSI Multipath Setup for XenServer (Optional)  76
    Physical Networking Setup for XenServer  76
        Configuring Public Network with a Dedicated NIC for XenServer (Optional)  77
        Configuring Multiple Guest Networks for XenServer (Optional)  77
        Separate Storage Network for XenServer (Optional)  77
        NIC Bonding for XenServer (Optional)  78
    Upgrading XenServer Versions  80
VMware vSphere Installation and Configuration  83
    System Requirements for vSphere Hosts  83
    Preparation Checklist for VMware  84
        vCenter Checklist  85
        Networking Checklist for VMware  85
    vSphere Installation Steps  86
    ESXi Host setup  86
    Physical Host Networking  87
        Configure Virtual Switch  87
        Configure vCenter Management Network  89
        Extend Port Range for CloudStack Console Proxy  91
        Configure NIC Bonding for vSphere  91
    Storage Preparation for vSphere (iSCSI only)  91
        Enable iSCSI initiator for ESXi hosts  91
        Add iSCSI target  93
        Create an iSCSI datastore  94
        Multipathing for vSphere (Optional)  94
    Add Hosts or Configure Clusters (vSphere)  95
KVM Installation and Configuration  96
    Supported Operating Systems  96
    System Requirements for KVM Hosts  96
    KVM Installation Steps  97
    Installing the CloudStack Agent on a KVM Host  97
    Physical Network Configuration for KVM  98
    Time Synchronization  99
    Primary Storage Setup for KVM (Optional)  100
Oracle VM (OVM) Installation and Configuration  101
    System Requirements for OVM Hosts  101
    OVM Installation Overview  101
    Installing OVM on the Host(s)  101
    Primary Storage Setup for OVM  102
    Set Up Host(s) for System VMs  102
Bare Metal Installation  103
    Bare Metal Concepts  103
        Bare Metal Architecture  103
        How Does Bare Metal Provisioning Work?  104
        Bare Metal Deployment Architecture  104
    Bare Metal Installation Checklist  106
    Set Up the Firewall  106
    Set Up IPMI  109
    Enable PXE on the Bare Metal Host  110
    Install the PXE and DHCP Servers  110
    Set Up a CIFS File Server  110
    Create a Bare Metal Image  111
    Add the PXE Server and DHCP Server to Your Deployment  111
    Add a Cluster, Host, and Firewall  112
    Add a Service Offering and Template  112
Choosing a Deployment Architecture  113
    Small-Scale Deployment  113
    Large-Scale Redundant Setup  114
    Separate Storage Network  115
    Multi-Node Management Server  115
    Multi-Site Deployment  116
Choosing a Hypervisor: Supported Features  120
Network Setup  122
    Basic and Advanced Networking  122
    VLAN Allocation Example  123
    Example Hardware Configuration  123
        Dell 62xx  124
        Cisco 3750  124
    Layer-2 Switch  125
    Hardware Firewall  126
        Generic Firewall Provisions  126
        External Guest Firewall Integration for Juniper SRX (Optional)  126
    Management Server Load Balancing  129
    Topology Requirements  130
        Security Requirements  130
        Runtime Internal Communications Requirements  130
        Storage Network Topology Requirements  130
        External Firewall Topology Requirements  130
        Advanced Zone Topology Requirements  130
        XenServer Topology Requirements  130
        VMware Topology Requirements  130
        KVM Topology Requirements  131
    External Guest Load Balancer Integration (Optional)  131
    Guest Network Usage Integration for Traffic Sentinel  132
    Setting Zone VLAN and Running VM Maximums  133
Storage Setup  134
    Small-Scale Setup  134
    Secondary Storage  134
    Example Configurations  134
Additional Installation Options  138
    Edit the Global Configuration Settings (Optional)  138
    Installing the Usage Server (Optional)  139
        Requirements for Installing the Usage Server  139
        Steps to Install the Usage Server  139
    SSL (Optional)  140
    Database Replication (Optional)  140
        Failover  142
Best Practices  143
    Process Best Practices  143
    Setup Best Practices  143
    Maintenance Best Practices  143
Troubleshooting  145
    Checking the Management Server Log  145
    Troubleshooting the Secondary Storage VM  145
        Running a Diagnostic Script  145
        Checking the Log Files  146
    VLAN Issues  146
    Console Proxy VM Issues  146
    Binary Logging Error when Upgrading Database  147
    Can't Add Host  147
Preparation Checklists  148
    Management Server Checklist  148
    Database Checklist  149
    Storage Checklist  150
Contacting Support  151
What Is CloudStack?
Who Should Read This
If you are new to CloudStack or you want to learn more about concepts before installing and running CloudStack, read this overview. If you just want to get started, you can skip to Overview of Installation Steps on page 17.

CloudStack is an open source software platform that pools computing resources to build public, private, and hybrid Infrastructure as a Service (IaaS) clouds. CloudStack manages the network, storage, and compute nodes that make up a cloud infrastructure. Use CloudStack to deploy, manage, and configure cloud computing environments. Typical users are service providers and enterprises. With CloudStack, you can:

- Set up an on-demand, elastic cloud computing service. Service providers can sell self-service virtual machine instances, storage volumes, and networking configurations over the Internet.
- Set up an on-premise private cloud for use by employees. Rather than managing virtual machines in the same way as physical machines, with CloudStack an enterprise can offer self-service virtual machines to users without involving IT departments.
[Figure: Simplified view of a basic deployment. Machine 1 runs the Management Server; Machine 2 runs the hypervisor.]

A more full-featured installation consists of a highly-available multi-node Management Server installation and up to thousands of hosts using any of several advanced networking setups. For information about deployment options, see Choosing a Deployment Architecture on page 113.
For additional options, including how to set up a multi-node management server installation, see Choosing a Deployment Architecture on page 113.
Secondary storage is associated with a zone, and it stores templates, ISO images, and disk volume snapshots. See About Secondary Storage on page 69.
[Figure: cloud infrastructure overview; labels include Host and Primary Storage.]
Networking Overview
CloudStack offers two types of networking scenario:
- Basic. For AWS-style networking. Provides a single network where guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
- Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks.
Overview of Installation Steps

1. Make sure you have the required hardware ready (p. 18)
2. (Optional) Fill out the preparation checklists (p. 148)

Install the CloudStack software

For anything more than a simple trial installation, you will need guidance for a variety of configuration choices. It is strongly recommended that you read the following:
- Choosing a Deployment Architecture on page 113
- Choosing a Hypervisor: Supported Features on page 120
- Network Setup on page 122
- Storage Setup on page 134
- Best Practices on page 143

5. Add a zone. Includes the first pod, cluster, and host (p. 43)
6. Add more pods (p. 57)
7. Add more clusters (p. 59)
8. Add more hosts (p. 63)
9. Add more primary storage (p. 67)
10. Add more secondary storage (p. 69)

Try using the cloud
System Requirements
The machines that will run the Management Server and MySQL database must meet the following requirements. The same machines can also be used to provide primary and secondary storage, such as via local disk or NFS. The Management Server may be placed on a virtual machine.

Operating system:
- Commercial users: Preferred: RHEL 6.2+ 64-bit (https://access.redhat.com/downloads) or CentOS 6.2+ 64-bit (http://isoredirect.centos.org/centos/6/isos/x86_64/). Also supported: RHEL and CentOS 5.4-5.x 64-bit.
- Open-source community users: RHEL 5.4-5.x 64-bit or 6.2+ 64-bit; CentOS 5.4-5.x 64-bit or 6.2+ 64-bit; Ubuntu 10.04 LTS.

Hardware:
- 64-bit x86 CPU (more cores results in better performance)
- 4 GB of memory
- 250 GB of local disk (more results in better capability; 500 GB recommended)
- At least 1 NIC
- Statically allocated IP address
- Fully qualified domain name as returned by the hostname command

The host is where the cloud services run in the form of guest virtual machines. Each host is one machine that meets the following requirements:
- Must be 64-bit and must support HVM (Intel-VT or AMD-V enabled).
- 64-bit x86 CPU (more cores results in better performance)
- Hardware virtualization support required
- 4 GB of memory
- 36 GB of local disk
- At least 1 NIC
- Statically allocated IP address
- Latest hotfixes applied to hypervisor software
- When you deploy CloudStack, the hypervisor host must not have any VMs already running

WARNING: Be sure you fulfill the additional hypervisor requirements and installation steps provided in this Guide. Hypervisor hosts must be properly prepared to work with CloudStack. For example, the requirements for XenServer are listed under Citrix XenServer Installation for CloudStack on page 72.

Hosts have additional requirements depending on the hypervisor. See the requirements listed at the top of the Installation section for your chosen hypervisor:
- Citrix XenServer Installation for CloudStack on page 72
- VMware vSphere Installation and Configuration on page 83
- KVM Installation and Configuration on page 96
- Oracle VM (OVM) Installation and Configuration on page 101
Management Server Single-Node Installation

1. Prepare the Operating System
2. Install the Management Server
3. Install and Configure the Database
4. Prepare NFS Shares
5. Prepare the System VM Template
For the sake of security, be sure the public Internet cannot access port 8096 or port 8250 on the Management Server.
This should return a fully qualified hostname such as "kvm1.lab.example.org". If it does not, edit /etc/hosts so that it does.
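For example, a check with the hostname command and a matching /etc/hosts entry might look like the following; the --fqdn flag, IP address, and hostname shown here are illustrative, so substitute your own values:

# hostname --fqdn
kvm1.lab.example.org
# cat /etc/hosts
127.0.0.1 localhost
192.168.10.5 kvm1.lab.example.org kvm1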
In Ubuntu, SELinux is not installed by default. You can verify this with:
# dpkg --list 'selinux*'
b. Set the SELINUX variable in /etc/selinux/config to permissive. This ensures that the permissive setting will be maintained after a system reboot. In RHEL or CentOS:
# vi /etc/selinux/config
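After editing, the relevant line in /etc/selinux/config should read as follows; the other lines in the file can be left as they are:

SELINUX=permissive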
In Ubuntu (do this step only if SELinux was found on the machine in the previous step):
# selinux-config-enforcing permissive
c. Then set SELinux to permissive starting immediately, without requiring a system reboot.
In CentOS:
# setenforce permissive
In RHEL:
# setenforce 0
In Ubuntu (do this step only if SELinux was found on the machine):
# setenforce permissive
4. Make sure that the Management Server can reach the Internet.
# ping www.google.com
5. (RHEL 6.2) If you do not have a Red Hat Network account, you need to prepare a local Yum repository.
a. If you are working with a physical host, insert the RHEL 6.2 installation CD. If you are using a VM, attach the RHEL6 ISO.
b. Mount the CDROM to /media.
c. Create a repo file at /etc/yum.repos.d/rhel6.repo. In the file, insert the following lines:
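A minimal local repository definition along these lines is typical; the section name is arbitrary, the baseurl assumes the ISO is mounted at /media as in the previous step, and GPG checking is disabled for the local media:

[rhel6-local]
name=RHEL 6.2 local install media
baseurl=file:///media
enabled=1
gpgcheck=0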
TIP NTP is required to synchronize the clocks of the servers in your cloud.
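On RHEL or CentOS, NTP can typically be installed from the distribution's ntp package (package name assumed):

# yum install ntp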
On Ubuntu:
# apt-get install ntp
c.
d. Make sure NTP will start again upon reboot. On RHEL or CentOS:
# chkconfig ntpd on
On Ubuntu:
# chkconfig ntp on
1. Download the CloudStack Management Server onto the host where it will run, from one of the following links. If your operating system is CentOS, use the download file for RHEL.
Open-source community: http://sourceforge.net/projects/cloudstack/files/CloudStack Acton/
Commercial customers: https://www.citrix.com/English/ss/downloads/ (you will need a MyCitrix account)
2. Install the CloudStack packages. You should have a file in the form of CloudStack-VERSION-N-OSVERSION.tar.gz. Untar the file and then run the install.sh script inside it. Replace the file and directory names below with those you are using:
# tar xzf CloudStack-VERSION-N-OSVERSION.tar.gz
# cd CloudStack-VERSION-N-OSVERSION
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
3. Choose the option to install the Management Server software. Wait for a message like Complete! Done. Then continue to Install and Configure the Database on page 22.
4. (RHEL or CentOS) When the installation is finished, run the following commands to start essential services (the commands might be different depending on your OS):
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
WARNING It is important that you make the right choice of database version. Never downgrade an existing MySQL installation that is being used with CloudStack.
1. If you have installed a version of MySQL earlier than 5.1.58, you can either skip to step 4 or uninstall MySQL and proceed to step 2 to install a more recent version.
2. On the same computer where you installed the CloudStack Management Server, re-run install.sh.
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
Troubleshooting: If you do not see the D option, you already have MySQL installed. Please go back to step 1.
4. Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS) and insert the following lines in the [mysqld] section. You can put these lines below the datadir line. The max_connections parameter should be set to 350 multiplied by the number of Management Servers you are deploying. This example assumes one Management Server.
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
NOTE: The binlog-format variable is supported in MySQL versions 5.1 and greater. It is not supported in MySQL 5.0. In some versions of MySQL, an underscore character is used in place of the hyphen in the variable name. For the exact syntax and spelling of each variable, consult the documentation for your version of MySQL.
5. Restart the MySQL service, then invoke MySQL as the root user.
On RHEL or CentOS:
# service mysqld restart
# mysql -u root
On Ubuntu, use the following. Replace the password with the root password you set during MySQL installation.
# service mysql restart
# mysql -u root -p<password>
6. (RHEL or CentOS) Best Practice: On RHEL and CentOS, MySQL does not set a root password by default. It is very strongly recommended that you set a root password as a security precaution. Run the following command, and substitute your own desired root password.
mysql> SET PASSWORD = PASSWORD('password');
From now on, start MySQL with mysql -p so it will prompt you for the password.
On Ubuntu:
# service mysql restart
c. Open the MySQL server port (3306) in the firewall to allow remote clients to connect.
d. Edit the /etc/sysconfig/iptables file and add the following line at the beginning of the INPUT chain.
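The line to add, as shown later in this guide in the multi-node database steps, is:

-A INPUT -p tcp --dport 3306 -j ACCEPT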
8. Set up the database. The following command creates the cloud user on the database.
- In dbpassword, specify the password to be assigned to the cloud user. You can choose to provide no password.
- In deploy-as, specify the username and password of the user deploying the database. In the following command, it is assumed the root user is deploying the database and creating the cloud user.
- (Optional) For encryption_type, use file or web to indicate the technique used to pass in the database encryption password. Default: file. See About Password and Key Encryption on page 25.
- (Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack properties file. Default: password. It is highly recommended that you replace this with a more secure value. See About Password and Key Encryption on page 25.
- (Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack database. Default: password. It is highly recommended that you replace this with a more secure value. See About Password and Key Encryption on page 25.
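The database setup script referred to here is cloud-setup-databases (named later in this guide under About Password and Key Encryption). A typical invocation looks roughly like the following; the flag names for the optional encryption parameters are assumptions, so check the script's help output for the exact syntax:

# cloud-setup-databases cloud:<dbpassword>@localhost --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key>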
When this script is finished, you should see a message like CloudStack has successfully initialized the database.
9. Now that the database is set up, you can finish configuring the OS for the Management Server. This command will set up iptables, sudoers, and start the Management Server.
# cloud-setup-management
You should see the message CloudStack Management Server setup is done.
1. On the storage server, create an NFS share for secondary storage.
2. Export it with rw,async,no_root_squash. For example:
# vi /etc/exports
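An export entry along these lines is typical; the share path is illustrative and matches the mount example in step 5:

/nfs/share/secondary *(rw,async,no_root_squash)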
5. Mount the secondary storage on your Management Server. Replace the example NFS server name and NFS share paths below with your own.
# mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
6. If you are using NFS for primary storage as well, repeat these steps with a different NFS share and mount point. If you are using iSCSI for primary storage, continue with Log In to the CloudStack UI on page 40.
CloudStack uses the Java Simplified Encryption (JASYPT) library. The data values are encrypted and decrypted using a database secret key, which is stored in one of CloudStack's internal properties files along with the database password. The other encrypted values listed above (SSH keys, etc.) are in the CloudStack internal database. Of course, the database secret key itself cannot be stored in the open; it must be encrypted. How then does CloudStack read it? A second secret key must be provided from an external source during Management Server startup. This key can be provided in one of two ways: loaded from a file, or provided by the CloudStack administrator. The CloudStack database has a new configuration setting that lets it know which of these methods will be used. If the encryption type is set to file, the key must be in a file in a known location. If the encryption type is set to web, the administrator runs the utility com.cloud.utils.crypt.EncryptionSecretKeySender, which relays the key to the Management Server over a known port. The encryption type, database secret key, and Management Server secret key are set during CloudStack installation. They are all parameters to the CloudStack database setup script (cloud-setup-databases). The default values are file, password, and password. It is, of course, highly recommended that you change these to more secure keys.
1. On the Management Server host, create two directories that you will use for primary and secondary storage.
For example:
# mkdir -p /export/primary
# mkdir -p /export/secondary
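These directories must then be exported over NFS before the hypervisor hosts can mount them. A typical /etc/exports entry, consistent with the mount test later in this section, plus the command to re-export, would be:

/export *(rw,async,no_root_squash)
# exportfs -a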
5. Edit the /etc/sysconfig/iptables file and add the following lines at the beginning of the INPUT chain.
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
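After saving the file, apply the new rules. On RHEL or CentOS this is typically done with the iptables service scripts (this assumes the standard init scripts are in use):

# service iptables restart
# service iptables save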
7. If NFS v4 communication is used between client and server, add your domain to /etc/idmapd.conf on both the hypervisor host and Management Server.
# vi /etc/idmapd.conf
Remove the character # from the beginning of the Domain line in idmapd.conf and replace the value in the file with your own domain. In the example below, the domain is company.com.
Domain = company.com
9. It is recommended that you test to be sure the previous steps have been successful.
a. Log in to the hypervisor host.
b. (RHEL or CentOS) Be sure NFS and rpcbind are running. The commands might be different depending on your OS. For example:
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
# reboot
c. Log back in to the hypervisor host and try to mount the /export directories. For example (substitute your own management server name):

# mkdir /primarymount
# mount -t nfs <management-server-name>:/export/primary /primarymount
# umount /primarymount
# mkdir /secondarymount
# mount -t nfs <management-server-name>:/export/secondary /secondarymount
# umount /secondarymount
Prepare the System VM Template

1. Run the following script to retrieve and decompress the system VM template. Run the command for each hypervisor type that you expect end users to run in this Zone.
- If your secondary storage mount point is not named /mnt/secondary, substitute your own mount point name.
- If you set the CloudStack database encryption type to "web" when you set up the database, you must use the parameter -s <management-server-secret-key>. See About Password and Key Encryption on page 34.
- This process will require approximately 5 GB of free space on the local file system and up to 30 minutes each time it runs.

For vSphere:
# /usr/lib64/cloud/agent/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.ova -h vmware -s <optional-management-server-secret-key> -F
For KVM:
# /usr/lib64/cloud/agent/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2 -h kvm -s <optional-management-server-secret-key> -F
For XenServer:
# /usr/lib64/cloud/agent/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2 -h xenserver -s <optional-management-server-secret-key> -F
2. When the script has finished, unmount secondary storage and remove the created directory.
# umount /mnt/secondary
# rmdir /mnt/secondary
Single-Node Installation Complete! Next Steps

[Figure: the completed single-node installation, with the Management Server and the MySQL cloud_db on one machine.]
What should you do next?
- Even without adding any cloud infrastructure, you can run the UI to get a feel for what's offered and how you will interact with CloudStack on an ongoing basis. See Log In to the CloudStack UI on page 40.
- When you're ready, add the cloud infrastructure and try running some virtual machines on it, so you can watch how CloudStack manages the infrastructure. See Provision Your Cloud Infrastructure on page 41.
- If desired, you can scale up by adding more Management Server nodes. See Management Server Multi-Node Installation on page 29.
Management Server Multi-Node Installation

1. Prepare the Operating System
2. Install the First Management Server
3. Install and Configure the Database
4. Prepare NFS Shares
5. Prepare and Start Additional Management Servers
6. Prepare the System VM Template
WARNING: For the sake of security, be sure the public Internet cannot access port 8096 or port 8250 on the Management Server.
This should return a fully qualified hostname such as "kvm1.lab.example.org". If it does not, edit /etc/hosts so that it does.
In Ubuntu, SELinux is not installed by default. You can verify this with:
# dpkg --list 'selinux*'
b. Set the SELINUX variable in /etc/selinux/config to permissive. This ensures that the permissive setting will be maintained after a system reboot. In RHEL or CentOS:
# vi /etc/selinux/config
In Ubuntu (do this step only if SELinux was found on the machine in the previous step):
# selinux-config-enforcing permissive
c. Then set SELinux to permissive starting immediately, without requiring a system reboot.
In CentOS:
# setenforce permissive
In RHEL:
# setenforce 0
In Ubuntu (do this step only if SELinux was found on the machine):
# setenforce permissive
4. Make sure that the Management Server can reach the Internet.
# ping www.google.com
5. (RHEL 6.2) If you do not have a Red Hat Network account, you need to prepare a local Yum repository.
a. If you are working with a physical host, insert the RHEL 6.2 installation CD. If you are using a VM, attach the RHEL6 ISO.
b. Mount the CDROM to /media.
c. Create a repo file at /etc/yum.repos.d/rhel6.repo. In the file, insert the following lines:
TIP NTP is required to synchronize the clocks of the servers in your cloud.
On Ubuntu:
# apt-get install ntp
c.
d. Make sure NTP will start again upon reboot. On RHEL or CentOS:
# chkconfig ntpd on
On Ubuntu:
# chkconfig ntp on
2. Install the CloudStack packages. You should have a file in the form of CloudStack-VERSION-N-OSVERSION.tar.gz. Untar the file and then run the install.sh script inside it. Replace the file and directory names below with those you are using:
# tar xzf CloudStack-VERSION-N-OSVERSION.tar.gz
# cd CloudStack-VERSION-N-OSVERSION
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
3. Choose the option to install the Management Server software.
4. Wait for a message like Complete! Done, which indicates that the software was installed successfully.
5. (RHEL or CentOS) When the installation is finished, run the following commands to start essential services (the commands might be different depending on your OS):
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
1. If you have installed a version of MySQL earlier than 5.1.58, you can either skip to step 3 or uninstall MySQL and proceed to step 2 to install a more recent version.
2. Log in as root to your Database Node and run the following commands. If you are going to install a replica database, then log in to the master.
# yum install mysql-server
# chkconfig --level 35 mysqld on
3. Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS) and insert the following
lines in the [mysqld] section. You can put these lines below the datadir line. The max_connections parameter should be set to 350 multiplied by the number of Management Servers you are deploying. This example assumes two Management Servers.
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=700
log-bin=mysql-bin
binlog-format = 'ROW'
NOTE: The binlog-format variable is supported in MySQL versions 5.1 and greater. It is not supported in MySQL 5.0. In some versions of MySQL, an underscore character is used in place of the hyphen in the variable name. For the exact syntax and spelling of each variable, consult the documentation for your version of MySQL.
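After restarting MySQL in the next step, you can confirm that the new settings are in effect from the MySQL prompt; for example:

mysql> SHOW VARIABLES LIKE 'max_connections';
mysql> SHOW VARIABLES LIKE 'binlog_format';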
4. Start the MySQL service, then invoke MySQL as the root user.
On RHEL or CentOS:
# service mysqld restart
# mysql -u root
On Ubuntu, use the following. Replace the password with the root password you set during MySQL installation.
# service mysql restart
# mysql -u root -p<password>
5. (RHEL or CentOS) Best Practice: On RHEL and CentOS, MySQL does not set a root password by default. It is very
strongly recommended that you set a root password as a security precaution. Run the following command, and substitute your own desired root password for <password>.
mysql> SET PASSWORD = PASSWORD('<password>');
From now on, start MySQL with mysql -p so it will prompt you for the password.
On Ubuntu:
# service mysql restart
c. Open the MySQL server port (3306) in the firewall to allow remote clients to connect.
d. Edit the /etc/sysconfig/iptables file and add the following lines at the beginning of the INPUT chain.
-A INPUT -p tcp --dport 3306 -j ACCEPT
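The next sub-step is not reproduced in this extract; after adding the rule you would typically reload the firewall so the change takes effect, for example:

# service iptables restart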
7. Return to the root shell on your first Management Server.
8. Set up the database. The following command creates the cloud user on the database.
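The command itself does not appear in this extract. Based on the client-only variant shown later for additional Management Servers (which omits --deploy-as), the full form run on the first Management Server is along these lines; treat the exact flags as illustrative and check your release documentation:

# cloud-setup-databases cloud:<dbpassword>@<dbhost> --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key>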
In dbpassword, specify the password to be assigned to the cloud user. You can choose to provide no password.
In dbhost, provide the hostname of the database node.
In deploy-as, specify the username and password of the user deploying the database. For example, if you originally installed MySQL with user root and password password, provide --deploy-as=root:password.
(Optional) For encryption_type, use file or web to indicate the technique used to pass in the database encryption password. Default: file. See About Password and Key Encryption on page 34.
(Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack properties file. Default: password. It is highly recommended that you replace this with a more secure value. See About Password and Key Encryption on page 34.
(Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack database. Default: password. It is highly recommended that you replace this with a more secure value. See About Password and Key Encryption on page 34.
9. Now run a script that will set up iptables rules and SELinux for use by the Management Server. It will also configure the Management Server service to start on boot (chkconfig) and start the Management Server.
# cloud-setup-management
You should see the message CloudStack Management Server setup is done.
CloudStack uses the Java Simplified Encryption (JASYPT) library. The data values are encrypted and decrypted using a database secret key, which is stored in one of CloudStack's internal properties files along with the database password. The other encrypted values listed above (SSH keys, etc.) are in the CloudStack internal database.
Of course, the database secret key itself cannot be stored in the open; it must be encrypted. How then does CloudStack read it? A second secret key must be provided from an external source during Management Server startup. This key can be provided in one of two ways: loaded from a file or provided by the CloudStack administrator. The CloudStack database has a new configuration setting that lets it know which of these methods will be used. If the encryption type is set to file, the key must be in a file in a known location. If the encryption type is set to web, the administrator runs the utility com.cloud.utils.crypt.EncryptionSecretKeySender, which relays the key to the Management Server over a known port.
The encryption type, database secret key, and Management Server secret key are set during CloudStack installation. They are all parameters to the CloudStack database setup script (cloud-setup-databases). The default values are file, password, and password. It is, of course, highly recommended that you change these to more secure keys.
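As an illustration only of how a JASYPT-encrypted value can be produced (the jar file name and location below are assumptions for the sake of the example, not paths taken from this guide), a string can be encrypted from the command line with JASYPT's bundled CLI class, using the Management Server secret key as the password:

# java -classpath /usr/share/java/jasypt-1.8.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input="newdbpassword" password="<management_server_key>" verbose=false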
1. On the storage server, create an NFS share for secondary storage and, if you are using NFS for primary storage as
well, create a second NFS share.
# mkdir -p /export/primary
# mkdir -p /export/secondary
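The export steps themselves (steps 2 and 3) are not reproduced in this extract; on a typical Linux NFS server, assuming the directories above, they would amount to adding an entry to /etc/exports and re-exporting, for example:

# echo "/export  *(rw,async,no_root_squash)" >> /etc/exports
# exportfs -a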
4. On the management server, create a mount point for secondary storage. For example:
# mkdir -p /mnt/secondary
5. Mount the secondary storage on your Management Server. Replace the example NFS server name and NFS share
paths below with your own.
# mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
6. Continue with Prepare and Start Additional Management Servers on page 37.
2. On the Management Server host, create an NFS share for secondary storage and, if you are using NFS for primary
storage as well, create a second NFS share.
# mkdir -p /export/primary
# mkdir -p /export/secondary
5. (Not applicable on Ubuntu) Edit the /etc/sysconfig/nfs file and uncomment the following lines.
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
6. (Not applicable on Ubuntu) Edit the /etc/sysconfig/iptables file and add the following lines at the beginning of the
INPUT chain.
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
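Step 7 does not appear in this extract; after editing the file you would typically reload the firewall rules so they take effect and persist across reboots, for example:

# service iptables restart
# service iptables save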
8. If NFS v4 communication is used between client and server, add your domain to /etc/idmapd.conf on both the
hypervisor host and Management Server.
# vi /etc/idmapd.conf
Remove the character # from the beginning of the Domain line in idmapd.conf and replace the value in the file with your own domain. In the example below, the domain is company.com.
Domain = company.com
11. It is recommended that you also test to be sure the previous steps have been successful.
a. Log in to the hypervisor host.
b. (Not applicable on Ubuntu) Be sure NFS and rpcbind are running. The commands might be different depending on your OS. For example (substitute your own management server name):
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
# reboot
c. Log back in to the hypervisor host and try to mount the /export directories. For example (substitute your own management server name):
# mkdir /primarymount
# mount -t nfs <management-server-name>:/export/primary /primarymount
# umount /primarymount
# mkdir /secondarymount
# mount -t nfs <management-server-name>:/export/secondary /secondarymount
# umount /secondarymount
12. Continue with Prepare and Start Additional Management Servers on page 37.
1. Perform the steps in Prepare the Operating System on page 29.
2. Run these commands on each additional Management Server. Replace the file and directory names below with those
you are using:
# tar xzf CloudStack-VERSION-1-OSVERSION.tar.gz
# cd CloudStack-VERSION-1-OSVERSION
# ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
3. Choose M to install the Management Server.
4. (RHEL or CentOS) When the installation is finished, run the following commands to start essential services (the commands might be different depending on your OS):
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
5. Configure the database client. Note the absence of the --deploy-as argument in this case.
# cloud-setup-databases cloud:<dbpassword>@<dbhost> -e <encryption_type> -m <management_server_key> -k <database_key>
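Step 6 does not appear in this extract; on each additional Management Server it would typically consist of running the same setup script used on the first server, which configures the OS and starts the Management Server:

# cloud-setup-management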
7. Be sure to configure a load balancer for the Management Servers. See Management Server Load Balancing on page
129.
1. On the Management Server, run the cloud-install-sys-tmplt script to retrieve and decompress the system VM template. Run the command for each hypervisor type that you expect end users to run in this Zone.
If your secondary storage mount point is not named /mnt/secondary, substitute your own mount point name.
If you set the CloudStack database encryption type to "web" when you set up the database, you must now add the parameter -s <management-server-secret-key>. See About Password and Key Encryption on page 34.
This process will require approximately 5 GB of free space on the local file system and up to 30 minutes each time it runs.
For vSphere:
# /usr/lib64/cloud/agent/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.ova -h vmware -s <optional-management-server-secret-key> -F
For KVM:
# /usr/lib64/cloud/agent/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2 -h kvm -s <optional-management-server-secret-key> -F
For XenServer:
# /usr/lib64/cloud/agent/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2 -h xenserver -s <optional-management-server-secret-key> -F
2. If you are using a separate NFS server, perform this step. If you are using the Management Server as the NFS server,
you MUST NOT perform this step.
When the script has finished, unmount secondary storage and remove the created directory.
# umount /mnt/secondary
# rmdir /mnt/secondary
(Diagram: a multi-node deployment, with multiple Management Servers sharing a MySQL database.)
What should you do next? Even without adding any cloud infrastructure, you can run the UI to get a feel for what's offered and how you will interact with CloudStack on an ongoing basis. See Log In to the CloudStack UI on page 40. When you're ready, add the cloud infrastructure and try running some virtual machines on it, so you can watch how CloudStack manages the infrastructure. See Provision Your Cloud Infrastructure on page 41.
1. Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:
http://<management-server-ip-address>:8080/client
On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you'll see a login screen where you can enter a user ID and password and proceed to your Dashboard.
2. If you see the first-time splash screen, choose one of the following.
Continue with basic setup. Choose this if you're just trying CloudStack, and you want a guided walkthrough of the simplest possible configuration so that you can get started using CloudStack right away. We'll help you set up a cloud with the following features: a single machine that runs CloudStack software and uses NFS to provide storage; a single machine running VMs under the XenServer hypervisor; and a shared public network. The prompts in this guided tour should give you all the information you need, but if you want just a bit more detail, you can follow along in the CloudStack Basic Installation Guide.
I have used CloudStack before. Choose this if you have already gone through a design phase and planned a more sophisticated CloudStack deployment, or you are ready to start scaling up a trial cloud that you set up earlier with the basic setup screens. In the Administrator UI, you can start using the more powerful features of CloudStack, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere. The root administrator Dashboard appears.
3. You should set a new root administrator password. If you chose basic setup, you'll be prompted to create a new password right away. If you chose experienced user, use the steps in Change the Root Password on page 42.
You are logging in as the root administrator. This account manages the CloudStack deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. Please change the default password to a new, unique password.
1. Change the Root Password on page 42
2. Add a Zone on page 43
3. Add More Pods (Optional) on page 57
4. Add More Clusters on page 59
5. Add More Hosts (Optional) on page 63
6. Add Primary Storage on page 67
7. Add Secondary Storage on page 69
8. Initialization and Testing on page 71
When you have finished these steps, you will have a deployment with the following basic structure:
(Diagram: conceptual view of a basic deployment, showing a Management Server with its MySQL cloud_db, a host, and primary storage.)
Your actual deployment can have multiple Management Servers and zones.
1. Log in to the CloudStack UI using the current root user ID and password. The default is admin, password.
2. Click Accounts.
3. Click the admin account name.
4. Click View Users.
5. Click the admin user name.
6. Click the Change Password button.
7. Type the new password, and click OK.
Add a Zone
About Zones
A zone is the largest organizational unit within a CloudStack deployment. A zone typically corresponds to a single datacenter, although it is permissible to have multiple zones in a datacenter. The benefit of organizing infrastructure into zones is to provide physical isolation and redundancy. For example, each zone can have its own power supply and network uplink, and the zones can be widely separated geographically (though this is not required).
A zone consists of:
One or more pods. Each pod contains one or more clusters of hosts and one or more primary storage servers.
Secondary storage, which is shared by all the pods in the zone.
(Diagram: a simple zone, containing a pod with hosts and primary storage, plus zone-wide secondary storage.)
Zones are visible to the end user. When a user starts a guest VM, the user must select a zone for their guest. Users might also be required to copy their private templates to additional zones to enable creation of guest VMs using their templates in those zones. Zones can be public or private. Public zones are visible to all users. This means that any user may create a guest in that zone. Private zones are reserved for a specific domain. Only users in that domain or its subdomains may create guests in that zone. Hosts in the same zone are directly accessible to each other without having to go through a firewall. Hosts in different zones can access each other through statically configured VPN tunnels.
For each zone, the administrator must decide the following.
How many pods to place in a zone.
How many clusters to place in each pod.
How many hosts to place in each cluster.
How many primary storage servers to place in each cluster and total capacity for the storage servers.
How much secondary storage to deploy in a zone.
When you add a new zone, you will be prompted to configure the zone's physical network and add the first pod, cluster, host, primary storage, and secondary storage.
Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. Each pod in a basic zone is a broadcast domain, and therefore each pod has a different IP range for the guest network. The administrator must configure the IP range for each pod.
Management. When CloudStack's internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by CloudStack to perform various tasks in the cloud), and any other component that communicates directly with the CloudStack Management Server. You must configure the IP range for the system VMs to use.
Storage. Traffic between primary and secondary storage servers, such as VM templates and snapshots.
Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudStack UI to acquire these IPs to implement NAT between their guest network and the public network, as described in Acquiring a New IP Address in the Administration Guide.
In a basic network, configuring the physical network is fairly straightforward. In most cases, you only need to configure one guest network to carry traffic that is generated by guest VMs. If you use a NetScaler load balancer and enable its elastic IP and elastic load balancing (EIP and ELB) features, you must also configure a network to carry public traffic. CloudStack takes care of presenting the necessary network configuration steps to you in the UI when you add a new zone.
These traffic types can each be on a separate physical network, or they can be combined with certain restrictions. When you use the Add Zone wizard in the UI to create a new zone, you are guided into making only valid choices.
Adding a Zone
These steps assume you have already logged in to the CloudStack UI (see page 40).
1. (Optional) If you are going to use Swift for cloud-wide secondary storage, you need to add it to CloudStack before
you add zones.
a. Log in to the CloudStack UI as administrator.
b. If this is your first time visiting the UI, you will see the guided tour splash screen. Choose Experienced user. The Dashboard appears.
c. In the left navigation bar, click Global Settings.
d. In the search box, type swift.enable and click the search button.
e. Click the edit button and set swift.enable to true.
f. Restart the Management Server.
2. In the left navigation, choose Infrastructure. On Zones, click View More.
3. (Optional) If you are using Swift storage, click Enable Swift. Provide the following:
URL. The Swift URL.
Account. The Swift account.
Username. The Swift account's username.
Key. The Swift key.
4. Click Add Zone. The Zone creation wizard will appear.
5. Choose one of the following network types:
Basic. For AWS-style networking. Provides a single network where each VM instance is assigned an IP directly from the network. Guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall, VPN, or load balancer support.
For more information about the network types, see Network Setup on page 122.
6. The rest of the steps differ depending on whether you chose Basic or Advanced. Continue with the steps that apply
to you:
Basic Zone Configuration on page 49
Advanced Zone Configuration on page 53
DefaultSharedNetworkOffering
DefaultSharedNetscalerEIPandELBNetworkOffering
Network Domain: (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
Public. A public zone is available to all users. A zone that is not public will be assigned to a particular domain. Only users in that domain will be allowed to create guest VMs in this zone.
3. (Introduced in version 3.0.1) Assign a network traffic label to each traffic type on the physical network. These labels
must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon. A popup dialog appears where you can type the label, then click OK.
These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors, the labels can be configured after the zone is created.
4. Click Next.
5. (NetScaler only) If you chose the network offering for NetScaler, you have an additional screen to fill out. Provide the
requested details to set up the NetScaler, then click Next.
IP address. The NSIP (NetScaler IP) address of the NetScaler device.
Username/Password. The authentication credentials to access the device. CloudStack uses these credentials to access the device.
Type. NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the types, see the CloudStack Administration Guide.
Public interface. Interface of NetScaler that is configured to be part of the public network.
Private interface. Interface of NetScaler that is configured to be part of the private network.
Number of retries. Number of times to attempt a command on the device before considering the operation failed. Default is 2.
Capacity. Number of guest networks/accounts that will share this NetScaler device.
Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
6. (NetScaler only) Configure the IP range for public traffic. The IPs in this range will be used for the static NAT capability
which you enabled by selecting the network offering for NetScaler with EIP and ELB. Enter the following details, then click Add. If desired, you can repeat this step to add more IP ranges. When done, click Next.
Gateway. The gateway in use for these IP addresses.
Netmask. The netmask associated with this IP range.
VLAN. The VLAN that will be used for public traffic.
Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest VMs.
7. In a new zone, CloudStack adds the first pod for you. You can always add more pods later. For an overview of what a
pod is, see About Pods on page 57. To configure the first pod, enter the following, then click Next:
Pod Name. A name for the pod.
Reserved system gateway. The gateway for the hosts in that pod.
Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses on page 46.
8. Configure the network for guest traffic. Provide the following, then click Next:
Guest gateway: The gateway that the guests should use.
Guest netmask: The netmask in use on the subnet the guests will use.
Guest start IP/End IP: Enter the first and last IP addresses that define a range that CloudStack can assign to guests.
We strongly recommend the use of multiple NICs. If multiple NICs are used, they may be in a different subnet.
If one NIC is used, these IPs should be in the same CIDR as the pod CIDR.
9. In a new pod, CloudStack adds the first cluster for you. You can always add more clusters later. For an overview of
what a cluster is, see About Clusters on page 59. To configure the first cluster, enter the following, then click Next:
Hypervisor. (Version 3.0.0 only; in 3.0.1, this field is read only) Choose the type of hypervisor software that all hosts in this cluster will run. If you choose VMware, additional fields appear so you can give information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. See Add Cluster: vSphere on page 60.
Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
10. In a new cluster, CloudStack adds the first host for you. You can always add more hosts later. For an overview of
what a host is, see About Hosts on page 63. Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see:
Citrix XenServer Installation for CloudStack on page 72
VMware vSphere Installation and Configuration on page 83
KVM Installation and Configuration on page 96
Oracle VM (OVM) Installation and Configuration on page 101
When you deploy CloudStack, the hypervisor host must not have any VMs already running.
To configure the first host, enter the following, then click Next:
Host Name. The DNS name or IP address of the host.
Username. Usually root.
Password. This is the password for the user named above (from your XenServer or KVM install).
Host Tags (Optional). Any labels that you use to categorize hosts for ease of maintenance.
11. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers later. For
an overview of what primary storage is, see About Primary Storage on page 67. To configure the first primary storage server, enter the following, then click Next:
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here.
NFS
Server. The IP address or DNS name of the storage device.
Path. The exported path from the server.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
iSCSI
Server. The IP address or DNS name of the storage device.
Target IQN. The IQN of the target. Example: iqn.1986-03.com.sun:02:01ec9bb549-1271378984
Lun #. The LUN number. Example: 3.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
PreSetup
Server. The IP address or DNS name of the storage device.
SR Name-Label. Name-label of an SR that has been set up outside CloudStack.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
SharedMountPoint
Path. The path on each host where primary storage is mounted. Example: "/mnt/primary".
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
VMFS
Server. The IP address or DNS name of the vCenter server.
Path. The datacenter and datastore as "/datacenter name/datastore name". Example: "/cloud.dc.VM/cluster1datastore".
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
For every protocol, the tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
12. In a new zone, CloudStack adds the first secondary storage server for you. For an overview of what secondary
storage is, see About Secondary Storage on page 69. Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudStack System VM template. See Adding Secondary Storage on page 69. To configure the first secondary storage server, enter the following, then click Next:
NFS Server. The IP address of the server.
Path. The exported path from the server.
3. (Introduced in version 3.0.1) Assign a network traffic label to each traffic type on each physical network. These labels
must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon within each physical network. A popup dialog appears where you can type the label, then click OK. These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors, the labels can be configured after the zone is created.
4. Click Next.
5. Configure the IP range for public Internet traffic. Enter the following details, then click Add. If desired, you can repeat
this step to add more public Internet IP ranges. When done, click Next.
Gateway. The gateway in use for these IP addresses.
Netmask. The netmask associated with this IP range.
VLAN. The VLAN that will be used for public traffic.
Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks.
6. In a new zone, CloudStack adds the first pod for you. You can always add more pods later. For an overview of what a
pod is, see About Pods on page 57. To configure the first pod, enter the following, then click Next:
Pod Name. A name for the pod.
Reserved system gateway. The gateway for the hosts in that pod.
Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses on page 46.
7. Specify a range of VLAN IDs to carry guest traffic for each physical network (see VLAN Allocation Example on page
123), then click Next.
8. In a new pod, CloudStack adds the first cluster for you. You can always add more clusters later. For an overview of
what a cluster is, see About Clusters on page 59. To configure the first cluster, enter the following, then click Next:
Hypervisor. (Version 3.0.0 only; in 3.0.1, this field is read only) Choose the type of hypervisor software that all hosts in this cluster will run. If you choose VMware, additional fields appear so you can give information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. See Add Cluster: vSphere on page 60.
Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
9. In a new cluster, CloudStack adds the first host for you. You can always add more hosts later. For an overview of
what a host is, see About Hosts on page 63. Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see:
Citrix XenServer Installation for CloudStack on page 72
VMware vSphere Installation and Configuration on page 83
KVM Installation and Configuration on page 96
Oracle VM (OVM) Installation and Configuration on page 101
When you deploy CloudStack, the hypervisor host must not have any VMs already running.
To configure the first host, enter the following, then click Next:
Host Name. The DNS name or IP address of the host.
Username. Usually root.
Password. This is the password for the user named above (from your XenServer or KVM install).
Host Tags (Optional). Any labels that you use to categorize hosts for ease of maintenance.
10. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers later. For
an overview of what primary storage is, see About Primary Storage on page 67. To configure the first primary storage server, enter the following, then click Next:
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here.
NFS
Server. The IP address or DNS name of the storage device.
Path. The exported path from the server.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
iSCSI
Server. The IP address or DNS name of the storage device.
Target IQN. The IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984
Lun #. The LUN number. For example, 3.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
PreSetup
Server. The IP address or DNS name of the storage device.
SR Name-Label. Enter the name-label of the SR that has been set up outside CloudStack.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
SharedMountPoint
Path. The path on each host where this primary storage is mounted. For example, "/mnt/primary".
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
VMFS
Server. The IP address or DNS name of the vCenter server.
Path. A combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
For every protocol, the tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
11. In a new zone, CloudStack adds the first secondary storage server for you. You can always add more servers later. For
an overview of what secondary storage is, see About Secondary Storage on page 69. Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudStack System VM template. See Adding Secondary Storage on page 69. To configure the first secondary storage server, enter the following, then click Next:
NFS Server. The IP address of the server.
Path. The exported path from the server.
About Pods
A pod often represents a single rack. Hosts in the same pod are in the same subnet. A pod is the second-largest organizational unit within a CloudStack deployment. Pods are contained within zones. Each zone can contain one or more pods. A pod consists of one or more clusters of hosts and one or more primary storage servers.
(Diagram: a simple pod, containing a cluster of hosts and primary storage.)
Adding a Pod
These steps assume you have already logged in to the CloudStack UI (see page 40).
1. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone to which you want to add
a pod.
2. Click the Compute and Storage tab. In the Pods node of the diagram, click View All.
3. Click Add Pod.
4. Enter the details for the new pod: a name, the reserved system gateway and netmask, and the start/end reserved system IP range (the same fields described for the first pod in the Add Zone wizard).
5. Click OK.
About Clusters
A cluster provides a way to group hosts. To be precise, a cluster is a XenServer server pool, a set of KVM servers, a set of OVM hosts, a VMware cluster preconfigured in vCenter, or a set of bare metal hosts (Beta feature; untested in CloudStack 3.0). The hosts in a cluster all have identical hardware, run the same hypervisor, are on the same subnet, and access the same shared primary storage. Virtual machine instances (VMs) can be live-migrated from one host to another within the same cluster, without interrupting service to the user. A cluster is the third-largest organizational unit within a CloudStack deployment. Clusters are contained within pods, and pods are contained within zones. The maximum size of a cluster is limited by the underlying hypervisor, although CloudStack recommends fewer hosts per cluster in most cases; see Best Practices on page 143. A cluster consists of one or more hosts and one or more primary storage servers.
(Diagram: a simple cluster, consisting of hosts and primary storage.)
CloudStack allows multiple clusters in a cloud deployment. Every VMware cluster is managed by a vCenter server. The administrator must register the vCenter server with CloudStack. There may be multiple vCenter servers per zone. Each vCenter server may manage multiple VMware clusters. Even when local storage is used, clusters are still required; in that case there is just one host per cluster.
1. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add
the cluster.
2. Click the Compute tab.
3. In the Clusters node of the diagram, click View All.
4. Click Add Cluster.
5. Choose the hypervisor type for this cluster.
6. Choose the pod in which you want to create the cluster.
7. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
8. Click OK.
1. Add a companion non-OVM cluster to the Pod. This cluster provides an environment where the CloudStack System
VMs can run. You should have already installed a non-OVM hypervisor on at least one Host to prepare for this step. Depending on which hypervisor you used: For VMWare, follow the steps in Add Cluster: vSphere on page 60. When finished, return here and continue with the next step. For KVM or XenServer, follow the steps in Add Cluster: KVM or XenServer on page 59. When finished, return here and continue with the next step.
2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add
the cluster.
3. Click the Compute tab. In the Pods node, click View All. Select the same pod you used in step 1.
4. Click View Clusters, then click Add Cluster.
5. The Add Cluster dialog will appear.
6. In Hypervisor, choose OVM.
7. In Cluster, enter a name for the cluster.
8. Click Add.
For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. Follow these requirements:
Do not put more than 8 hosts in a vSphere cluster.
Make sure the hypervisor hosts do not have any VMs already running before you add them to CloudStack.
1. Create the cluster of hosts in vCenter. Follow the vCenter instructions to do this. You will create a cluster that looks
something like this in vCenter.
2. Log in to the CloudStack UI (see page 40). 3. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add
the cluster.
4. Click the Compute tab, and click View All on Pods. Choose the pod to which you want to add the cluster.
5. Click View Clusters.
6. Click Add Cluster.
7. In Hypervisor, choose VMware.
8. Provide the following information in the dialog. The fields below make reference to values from vCenter.
Cluster Name. Enter the name of the cluster you created in vCenter. For example, "cloud.cluster.2.2.1"
vCenter Host. Enter the hostname or IP address of the vCenter server.
vCenter Username. Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.
vCenter Password. Enter the password for the user named above.
vCenter Datacenter. Enter the vCenter datacenter that the cluster is in. For example, "cloud.dc.VM".
There might be a slight delay while the cluster is provisioned. It will automatically display in the UI.
1. Before you can add a bare metal Cluster, you must have performed several other installation and setup steps to
create a bare metal environment. See Bare Metal Installation on page 101.
2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add
the cluster.
3. Click the Compute tab. In the Pods node, click View All. Select the pod where you want to add the cluster.
4. Click View Clusters, then click Add Cluster.
5. The Add Cluster dialog will appear.
6. In Hypervisor, choose BareMetal.
7. In Cluster, enter a name for the cluster. This can be any text you like.
8. Click Add.
About Hosts
A host is a single computer. Hosts provide the computing resources that run the guest virtual machines. Each host has hypervisor software installed on it to manage the guest VMs. For example, a Linux KVM-enabled server, a Citrix XenServer server, and an ESXi server are hosts. The host is the smallest organizational unit within a CloudStack deployment. Hosts are contained within clusters, clusters are contained within pods, and pods are contained within zones.
Hosts in a CloudStack deployment:
Provide the CPU, memory, storage, and networking resources needed to host the virtual machines
Interconnect using a high bandwidth TCP/IP network and connect to the Internet
May reside in multiple data centers across different geographic locations
May have different capacities (different CPU speeds, different amounts of RAM, etc.), although the hosts within a cluster must all be homogeneous
Additional hosts can be added at any time to provide more capacity for guest VMs. CloudStack automatically detects the amount of CPU and memory resources provided by the Hosts. Hosts are not visible to the end user. An end user cannot determine which host their guest has been assigned to.
For a host to function in CloudStack, you must do the following:
Install hypervisor software on the host
Assign an IP address to the host
Ensure the host is connected to the CloudStack Management Server
Host Allocation
At runtime, when a user creates a new guest VM, the CloudStack platform chooses an available Host to run the new guest VM. The chosen Host will always be close to where the guest's virtual disk image is stored. Both vertical and horizontal allocation is allowed. Vertical allocation consumes all the resources of a given Host before allocating any guests on a second Host. This reduces power consumption in the cloud. Horizontal allocation places a guest on each Host in a round-robin fashion. This may yield better performance to the guests in some cases.
The CloudStack platform also allows an element of CPU over-provisioning as configured by the administrator. Over-provisioning allows the administrator to commit more CPU cycles to the allocated guests than are actually available from the hardware.
The CloudStack platform also provides a pluggable interface for adding new allocators. These custom allocators can provide any policy the administrator desires.
For hardware requirements, see the appropriate section:
Citrix XenServer Installation for CloudStack on page 72
KVM Installation and Configuration on page 96
Oracle VM (OVM) Installation and Configuration on page 101
With all hosts added to the XenServer pool, run the cloud-setup-bond script. This script will complete the configuration and setup of the bonds on the new hosts in the cluster.
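The invocation itself is not shown in this extract. As an illustrative sketch only (the exact file name and location of the script are assumptions, not taken from this guide), you would copy the script from the Management Server to the XenServer pool master and run it there:

# ./cloud-setup-bond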
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
1. If you have not already done so, install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see:
Citrix XenServer Installation for CloudStack on page 72
KVM Installation and Configuration on page 96
2. Log in to the CloudStack UI (see page 40). 3. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add
the host.
4. Click the Compute tab. In the Clusters node, click View All.
5. Click the cluster where you want to add the host.
6. Click View Hosts.
7. Click Add Host.
8. Provide the following information.
Host Name. The DNS name or IP address of the host.
Username. Usually root.
Password. This is the password for the user named above (from your XenServer or KVM install).
Host Tags (Optional). Any labels that you use to categorize hosts for ease of maintenance.
There may be a slight delay while the host is provisioned. It should automatically display in the UI.
1. Before you can add a bare metal Host, you must have performed several other installation and setup steps to create
a bare metal cluster and environment. See Bare Metal Installation on page 101.
2. Go to Infrastructure -> Physical Resources -> Zone -> Pod -> Add Host.
3. Provide the following information in the Add Host dialog.
Hypervisor. Choose BareMetal.
Cluster. The Cluster to which this host will be added. Give the name of a bare metal cluster that you created earlier (see Add Cluster: Bare Metal on page 62).
Host Name. The IPMI IP address of the machine.
Username. User name you set for IPMI.
Password. Password you set for IPMI.
# of CPU Cores. Number of CPUs on the machine.
CPU (in MHZ). Frequency of CPU.
Memory (in MB). Memory capacity of the new host.
Host MAC. MAC address of the PXE NIC.
Tags. Set to large. You will use this tag later when you create the service offering.
It may take a minute for the host to be provisioned. It should automatically display in the UI. Repeat for additional bare metal hosts.
If you intend to use only local disk for your installation, you can skip to Add Secondary Storage on page 68.
When setting up primary storage, follow these restrictions:
Primary storage cannot be added until a host has been added to the cluster.
If you do not provision shared storage for primary storage, you will not be able to create additional volumes.
If you do not provision shared primary storage, you must set the global configuration parameter system.vm.local.storage.required to true, or else you will not be able to start VMs.
2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add
the primary storage.
3. Click the Compute tab.
4. In the Primary Storage node of the diagram, click View All.
5. Click Add Primary Storage.
6. Provide the following information in the dialog. The information required varies depending on your choice in Protocol.
Pod. The pod for the storage device.
Cluster. The cluster for the storage device.
Name. The name of the storage device.
Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS.
Server (for NFS, iSCSI, or PreSetup). The IP address or DNS name of the storage device.
Server (for VMFS). The IP address or DNS name of the vCenter server.
Path (for NFS). In NFS this is the exported path from the server.
Path (for VMFS). In vSphere this is a combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".
Path (for SharedMountPoint). With KVM this is the path on each host where this primary storage is mounted. For example, "/mnt/primary".
SR Name-Label (for PreSetup). Enter the name-label of the SR that has been set up outside CloudStack.
Target IQN (for iSCSI). In iSCSI this is the IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984
Lun # (for iSCSI). In iSCSI this is the LUN number. For example, 3.
Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
7. Click OK.
The items in zone-based NFS secondary storage are available to all hosts in the zone. CloudStack manages the allocation of guest virtual disks to particular primary storage devices. To make items in secondary storage available to all hosts throughout the cloud, you can add OpenStack Object Storage (Swift, http://swift.openstack.org) in addition to the zone-based NFS secondary storage. When using Swift, you configure Swift storage for the entire CloudStack, then set up NFS secondary storage for each zone as usual. The NFS storage in each zone acts as a staging area through which all templates and other secondary storage data pass before being forwarded to Swift. The Swift storage acts as a cloud-wide resource, making templates and other data available to any zone in the cloud. There is no hierarchy in the Swift storage, just one Swift container per storage object. Any secondary storage in the whole cloud can pull a container from Swift at need. It is not necessary to copy templates and snapshots from one zone to another, as would be required when using zone NFS alone. Everything is available everywhere.
1. If you are going to use Swift for cloud-wide secondary storage, you
must add the Swift storage to CloudStack before you add the local zone secondary storage servers. See Adding a Zone on page 48.
2. To prepare for local zone secondary storage, you should have created and mounted an NFS share during
Management Server installation. See Prepare NFS Shares on page 24.
3. Make sure you prepared the system VM template during Management Server installation. See Prepare the System
VM Template on page 27.
4. Now that the secondary storage server for per-zone storage is prepared, add it to CloudStack. Secondary storage is
added as part of the procedure for adding a new zone. See Add a Zone on page 43.
1. Verify that the system is ready. In the left navigation bar, select Templates. Click on the CentOS 5.5 (64bit) no Gui
(KVM) template. Check to be sure that the status is Download Complete. Do not proceed to the next step until this status is displayed.
2. Go to the Instances tab, and filter by My Instances.
3. Click Add Instance and follow the steps in the wizard.
a. Choose the zone you just added.
b. In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available.
c. Select a service offering. Be sure that the hardware you have allows starting the selected service offering.
d. In data disk offering, if desired, add another data disk. This is a second volume that will be available to but not mounted in the guest. For example, in Linux on XenServer you will see /dev/xvdb in the guest after rebooting the VM. A reboot is not required if you have a PV-enabled OS kernel in use.
e. In default network, choose the primary network for the guest. In the Basic Installation, you should have only one option here.
f. Optionally give your VM a name and a group. Use any descriptive text you would like.
g. Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup. You can watch the VM's progress in the Instances screen.
2. After installation, perform the following configuration steps, which are described in the next few sections:
Required:
Configure XenServer dom0 Memory (p. 73)
Username and password (p. 73)
Optional:
Install CSP package (p. 74)
Set up SR if not using NFS, iSCSI, or local disk for primary storage (p. 75)
iSCSI multipath setup (p. 76)
Physical networking setup, including NIC bonding (p. 76)
Time Synchronization
The host must be set to use NTP. All hosts in a pod must have the same time.
1. Install NTP.
# yum install ntp
Licensing
Citrix XenServer Free version provides 30 days of usage without a license. After the 30-day trial, XenServer requires a free activation and license. You can choose to install a license now or skip this step. If you skip this step, you will need to activate and license XenServer later.
1. In XenCenter, click Tools > License manager.
2. Select your XenServer and select Activate Free XenServer.
3. Request a license.
You can install the license with XenCenter or using the xe command line tool.
1. Download the CSP software onto the XenServer host from one of the following links:
For XenServer 6.0.2 (used with CloudStack 3.0.1): http://download.cloud.com/releases/3.0.1/XS-6.0.2/xenserver-cloud-supp.tgz For XenServer 6.0 (used with CloudStack 3.0.0): http://download.cloud.com/releases/3.0/xenserver-cloud-supp.tgz
4. If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):
# xe-switch-network-backend bridge
Accept the restart of the host when prompted. The XenServer host is now ready to be added to CloudStack.
1. Connect FiberChannel cable to all hosts in the cluster and to the FiberChannel storage host.
2. Rescan the SCSI bus. Either use the following command or use XenCenter to perform an HBA rescan.
# scsi-rescan
3. Repeat step 2 on every host.
4. Check to be sure you see the new SCSI disk.
# ls /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -l
The output should look like this, although the specific file name will be different (scsi-<scsiID>):
lrwxrwxrwx 1 root root 9 Mar 16 13:47 /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -> ../../sdc
5. Repeat step 4 on every host.
6. On the storage server, run this command to get a unique ID for the new SR.
# uuidgen
The output should look like this, although the specific ID will be different:
e6849e96-86c3-4f2c-8fcc-350cc711be3d
7. Create the FiberChannel SR. In name-label, use the unique ID you just generated.
# xe sr-create type=lvmohba shared=true device-config:SCSIid=360a98000503365344e6f6177615a516b name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
This command returns a unique ID for the SR, like the following example (your ID will be different):
7a143820-e893-6c6a-236e-472da6ee66bf
8. To create a human-readable description for the SR, use the following command. In uuid, use the SR ID returned by
the previous command. In name-description, set whatever friendly text you prefer.
# xe sr-param-set uuid=7a143820-e893-6c6a-236e-472da6ee66bf name-description="Fiber Channel storage repository"
Make note of the values you will need when you add this storage to CloudStack later (see Add Primary Storage on page 67). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will enter the name-label you set earlier (in this example, e6849e96-86c3-4f2c-8fcc-350cc711be3d).
9. (Optional) If you want to enable multipath I/O on a FiberChannel SAN, refer to the documentation provided by
the SAN vendor.
You can also ask your SAN vendor for advice about setting up your Citrix repository for multipathing. Make note of the values you will need when you add this storage to CloudStack later (see Add Primary Storage on page 66). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will enter the same name used to create the SR. If you encounter difficulty, contact your SAN vendor's support team. If they are not able to solve your issue, see Contacting Support on page 151.
Name labels are placed on physical interfaces or bonds and configured in CloudStack. In some simple cases, name labels are not required.
1. Run xe network-list and find the public network. This is usually attached to the NIC that is public. Once you
find the network make note of its UUID. Call this <UUID-Public>.
1. Run xe network-list and find one of the guest networks. Once you find the network make note of its UUID.
Call this <UUID-Guest>.
2. Run the following command, substituting your own name-label and uuid values.
# xe network-param-set name-label=<cloud-guestN> uuid=<UUID-Guest>
3. Repeat these steps for each additional guest network, using a different name-label and uuid each time.
Give the storage network a different name-label than what will be given for other networks. For the separate storage network to work correctly, it must be the only interface that can ping the primary storage device's IP address. For example, if eth0 is the management network NIC, ping -I eth0 <primary storage device IP> must fail.
In all deployments, secondary storage devices must be pingable from the management network NIC or bond. If a secondary storage device has been placed on the storage network, it must also be pingable via the storage network NIC or bond on the hosts as well.
You can set up two separate storage networks as well. For example, if you intend to implement iSCSI multipath, dedicate two non-bonded NICs to multipath. Each of the two networks needs a unique name-label.
If no bonding is done, the administrator must set up and name-label the separate storage network on all hosts (masters and slaves). Here is an example to set up eth5 to access a storage network on 172.16.0.0/24.
# xe pif-list host-name-label=`hostname` device=eth5
uuid ( RO)      : ab0d3dd4-5744-8fae-9693-a022c7a3471d
     device ( RO): eth5
# xe pif-reconfigure-ip DNS=172.16.3.3 gateway=172.16.0.1 IP=172.16.0.55 mode=static netmask=255.255.255.0 uuid=ab0d3dd4-5744-8fae-9693-a022c7a3471d
All NIC bonding is optional. XenServer expects that all nodes in a cluster have the same network cabling and the same bonds implemented. In an installation, the master will be the first host that was added to the cluster, and the slave hosts will be all subsequent hosts added to the cluster. The bonds present on the master set the expectation for hosts added to the cluster later. The procedure to set up bonds on the master and slaves is different, and is described below. There are several important implications of this:
- You must set bonds on the first host added to a cluster. Then you must use xe commands as below to establish the same bonds in the second and subsequent hosts added to a cluster.
- Slave hosts in a cluster must be cabled exactly the same as the master. For example, if eth0 is in the private bond on the master, it must be in the management network for added slave hosts.
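For the management network bond, first list the PIFs for the two NICs you intend to bond. A sketch, assuming eth0 and eth1 are those NICs (substitute your own devices):

# xe pif-list host-name-label=`hostname` device=eth0
# xe pif-list host-name-label=`hostname` device=eth1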
These commands show the eth0 and eth1 NICs and their UUIDs. Substitute the ethX devices of your choice. Call the UUIDs returned by the above commands slave1-UUID and slave2-UUID.
2. Create a new network for the bond. For example, a new network with name cloud-private.
This label is important. CloudStack looks for a network by a name you configure. You must use the same name-label for all hosts in the cloud for the management network.
# xe network-create name-label=cloud-private # xe bond-create network-uuid=[uuid of cloud-private created above] pif-uuids=[slave1-uuid],[slave2-uuid]
Now you have a bonded pair that can be recognized by CloudStack as the management network.
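For the public network bond, similarly list the PIFs for the NICs you intend to bond. A sketch, assuming eth2 and eth3 are those NICs:

# xe pif-list host-name-label=`hostname` device=eth2
# xe pif-list host-name-label=`hostname` device=eth3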
These commands show the eth2 and eth3 NICs and their UUIDs. Substitute the ethX devices of your choice. Call the UUIDs returned by the above commands slave1-UUID and slave2-UUID.
2. Create a new network for the bond. For example, a new network with name cloud-public.
This label is important. CloudStack looks for a network by a name you configure. You must use the same name-label for all hosts in the cloud for the public network.
# xe network-create name-label=cloud-public # xe bond-create network-uuid=[uuid of cloud-public created above] pif-uuids=[slave1-uuid],[slave2-uuid]
Now you have a bonded pair that can be recognized by CloudStack as the public network.
1. Copy the script from the Management Server in /usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
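Once copied, the script is typically run on the master to propagate the bonding setup. A sketch, assuming it was copied to /root (the destination path is an assumption):

# chmod +x /root/cloud-setup-bonding.sh
# /root/cloud-setup-bonding.sh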
Now the bonds are set up and configured properly across the cluster.
To upgrade XenServer:
b. Restart the Management Server and Usage Server. You only need to do this once for all clusters.
# service cloud-management start # service cloud-usage start
b. Navigate to the XenServer cluster, and click Actions > Unmanage.
c. Watch the cluster status until it shows Unmanaged.
3. Log in to one of the hosts in the cluster, and run this command to clean up the VLAN:
# . /opt/xensource/bin/cloud-clean-vlan.sh
Troubleshooting: If you see the error "can't eject CD," log in to the VM and umount the CD, then run the script again.
5. Upgrade the XenServer software on all hosts in the cluster. Upgrade the master first.
a. Live migrate all VMs on this host to other hosts. See the instructions for live migration in the Administrator's Guide.
Troubleshooting: You might see the following error when you migrate a VM:
[root@xenserver-qa-2-49-4 ~]# xe vm-migrate live=true host=xenserver-qa-2-49-5 vm=i-2-8-VM
You attempted an operation on a VM which requires PV drivers to be installed but the drivers were not detected.
vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
b. Reboot the host. c. Upgrade to the newer version of XenServer. Use the steps in XenServer documentation.
d. After the upgrade is complete, copy the following files from the Management Server to this host, in the directory locations shown below:
/usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py -> /opt/xensource/sm/NFSSR.py
/usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/setupxenserver.sh -> /opt/xensource/bin/setupxenserver.sh
/usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/make_migratable.sh -> /opt/xensource/bin/make_migratable.sh
/usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh -> /opt/xensource/bin/cloud-clean-vlan.sh
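One way to copy these files is with scp from the Management Server. A sketch, assuming the XenServer host is reachable at 192.168.10.20 (substitute your host's address):

# scp /usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py root@192.168.10.20:/opt/xensource/sm/NFSSR.py
# scp /usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/setupxenserver.sh root@192.168.10.20:/opt/xensource/bin/setupxenserver.sh
# scp /usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/make_migratable.sh root@192.168.10.20:/opt/xensource/bin/make_migratable.sh
# scp /usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh root@192.168.10.20:/opt/xensource/bin/cloud-clean-vlan.sh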
e. Run the setup script on the XenServer host:
# /opt/xensource/bin/setupxenserver.sh
Troubleshooting: If you see the following error message, you can safely ignore it.
mv: cannot stat `/etc/cron.daily/logrotate': No such file or directory
f. Plug in the storage repositories (physical block devices) to the XenServer host:
# for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
Note: If you add a host to this XenServer pool, you need to migrate all VMs on this host to other hosts, and eject this host from the XenServer pool.
6. Repeat these steps to upgrade every host in the cluster to the same version of XenServer. 7. Run the following command on one host in the XenServer cluster to clean up the host tags:
# for host in $(xe host-list | grep ^uuid | awk '{print $NF}') ; do xe host-param-clear uuid=$host param-name=tags; done;
9. After all hosts are up, run the following on one host in the cluster:
# /opt/xensource/bin/cloud-clean-vlan.sh
WARNING: The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
Hardware requirements:
- The host must be certified as compatible with vSphere. See the VMware Hardware Compatibility Guide at http://www.vmware.com/resources/compatibility/search.php.
- All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled).
- All hosts within a cluster must be homogenous. That means the CPUs must be of the same type, count, and feature flags.
- 64-bit x86 CPU (more cores results in better performance)
- Hardware virtualization support required
- 4 GB of memory
- 36 GB of local disk
- At least 1 NIC
- Statically allocated IP address
vCenter Server requirements:
- Processor: 2 CPUs, 2.0 GHz or higher Intel or AMD x86 processors. Processor requirements may be higher if the database runs on the same machine.
- Memory: 3 GB RAM. RAM requirements may be higher if your database runs on the same machine.
- Disk storage: 2 GB. Disk requirements may be higher if your database runs on the same machine.
- Microsoft SQL Server 2005 Express disk requirements: the bundled database requires up to 2 GB free disk space to decompress the installation archive.
- Networking: 1 Gbit or 10 Gbit.
For more information, see "vCenter Server and the vSphere Client Hardware Requirements" at http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=install/c_vc_hw.html.
Other requirements:
- VMware vCenter Standard Edition 4.1 or 5.0 must be installed and available to manage the vSphere hosts.
- vCenter must be configured to use the standard port 443 so that it can communicate with the CloudStack Management Server.
- You must re-install VMware ESXi if you are going to re-use a host from a previous install.
- CloudStack requires VMware vSphere 4.1 or 5.0. VMware vSphere 4.0 is not supported.
- All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled). All hosts within a cluster must be homogenous. That means the CPUs must be of the same type, count, and feature flags.
- The CloudStack management network must not be configured as a separate virtual network. The CloudStack management network is the same as the vCenter management network, and will inherit its configuration. See Configure vCenter Management Network on page 89.
- CloudStack requires ESXi. ESX is not supported.
- All resources used for CloudStack must be used for CloudStack only. CloudStack cannot share an instance of ESXi or storage with other management consoles. Do not share the same storage volumes that will be used by CloudStack with a different set of ESXi servers that are not managed by CloudStack.
- Put all target ESXi hypervisors in a cluster in a separate Datacenter in vCenter.
- The cluster that will be managed by CloudStack should not contain any VMs. Do not run the Management Server, vCenter, or any other VMs on the cluster that is designated for CloudStack use. Create a separate cluster for CloudStack use and make sure there are no VMs in this cluster.
- All the required VLANs must be trunked into all the ESXi hypervisor hosts. These would include the VLANs for Management, Storage, vMotion, and guest traffic. The guest VLAN (used in Advanced Networking; see Network Setup on page 19) is a contiguous range of VLANs that will be managed by CloudStack.
- CloudStack does not support Distributed vSwitches in VMware.
vCenter Checklist
You will need the following information about vCenter:
- vCenter User: This user must have admin privileges.
- vCenter User Password: Password for the above user.
- vCenter Datacenter Name: Name of the datacenter.
- vCenter Cluster Name: Name of the cluster.
- ESXi VLAN IP Gateway
- ESXi VLAN Netmask
- Management Server VLAN: VLAN on which the CloudStack Management Server is installed.
- Public VLAN: VLAN for the Public Network.
- Public VLAN Netmask
- Public VLAN IP Address Range: Range of public IP addresses available for CloudStack use. These addresses will be used by the virtual router in CloudStack to route private traffic to external networks.
- VLAN range for guest use: A contiguous range of non-routable VLANs. One VLAN will be assigned for each customer.
2. Following installation, perform the following configuration tasks, which are described in the next few sections:
Required:
- ESXi host setup
- Configure host physical networking, virtual switch, vCenter Management Network, and extended port range
- Prepare storage for iSCSI
- Configure clusters in vCenter and add hosts to them, or add hosts without clusters to vCenter
Optional:
- NIC bonding
- Multipath storage
In the host configuration tab, click the Hardware/Networking link to bring up the networking configuration page.
Separating Traffic
CloudStack allows you to use vCenter to configure three separate networks per ESXi host. These networks are identified by the name of the vSwitch they are connected to. The allowed networks for configuration are public (for traffic to/from the public internet), guest (for guest-guest traffic), and private (for management and usually storage traffic). You can use the default virtual switch for all three, or create one or two other vSwitches for those traffic types. If you want to separate traffic in this way you should first create and configure vSwitches in vCenter according to the vCenter instructions. Take note of the vSwitch names you have used for each traffic type. You will configure CloudStack to use these vSwitches.
Increasing Ports
By default a virtual switch on ESXi hosts is created with 56 ports. We recommend setting it to 4096, the maximum number of ports allowed. To do that, click the Properties link for virtual switch (note this is not the Properties link for Networking).
In the vSwitch properties dialog, select the vSwitch and click Edit.
In this dialog, you can change the number of switch ports. After you've done that, the ESXi host must be rebooted for the setting to take effect.
Make sure the following values are set:
- VLAN ID set to the desired ID
- vMotion enabled
- Management traffic enabled
If the ESXi hosts have multiple VMKernel ports, and ESXi is not using the default value "Management Network" as the management network name, you must follow these guidelines to configure the management network port group so that CloudStack can find it:
- Use one label for the management network port across all ESXi hosts.
- In the CloudStack UI, go to Configuration > Global Settings and set vmware.management.portgroup to the management network label from the ESXi hosts.
1. Select Home/Inventory/Datastores.
2. Right-click on the datacenter node.
3. Choose the Add Datastore command.
4. Follow the wizard to create an iSCSI datastore.
This procedure should be done on one host in the cluster. It is not necessary to do this on all hosts.
The following are also available for community use. We do not guarantee access to CloudStack support personnel for users of these versions:
- RHEL versions 5.5 - 5.x: https://access.redhat.com/downloads
- CentOS versions 5.5 - 5.x: http://www.centos.org/modules/tinycontent/index.php?id=15
- CentOS 6.0: http://www.centos.org/modules/tinycontent/index.php?id=15
- Ubuntu 10.04: http://releases.ubuntu.com/lucid/
- Fedora 16: https://mirrors.fedoraproject.org/publiclist/Fedora/14/
Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor's support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
2. After installation, perform the following configuration tasks, which are described in the next few sections:
Required:
- Install the CloudStack agent on the host (p. 97)
- Physical network configuration (p. 98)
- Time synchronization (p. 99)
Optional:
- Primary storage setup (p. 100)
This should return a fully qualified hostname such as "kvm1.lab.example.org". If it does not, edit /etc/hosts so that it does.
On Ubuntu:
# apt-get remove qemu-kvm
3. (RHEL 6.2) If you do not have a Red Hat Network account, you need to prepare a local Yum repository.
a. If you are working with a physical host, insert the RHEL 6.2 installation CD. If you are using a VM, attach the RHEL6 ISO.
c. Create a repo file at /etc/yum.repos.d/rhel6.repo. In the file, insert the following lines:
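A minimal example of such a repo file, assuming the installation media is mounted at /media (adjust the baseurl to your actual mount point; this sketch is not quoted from the original guide):

[rhel6-local]
name=RHEL 6.2 local media
baseurl=file:///media
enabled=1
gpgcheck=0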
4. Install the CloudStack packages. You should have a file in the form of CloudStack-VERSION-N-OSVERSION.tar.gz.
Untar the file and then run the install.sh script inside it. Replace the file and directory names below with those you are using:
# tar xzf CloudStack-VERSION-N-OSVERSION.tar.gz # cd CloudStack-VERSION-N-OSVERSION # ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
6. (Not applicable to Ubuntu) When the agent installation is finished, log in to the host as root and run the following
commands to start essential services (the commands might be different depending on your OS):
# service rpcbind start
# service nfs start
# chkconfig nfs on
# chkconfig rpcbind on
7. On the KVM host, edit /etc/libvirt/qemu.conf and make sure the line "vnc_listen = 0.0.0.0" is uncommented. If necessary, uncomment the line and restart libvirtd. On RHEL or CentOS:
# vi /etc/libvirt/qemu.conf # /etc/init.d/libvirtd restart
On Ubuntu:
# vi /etc/libvirt/qemu.conf # /etc/init.d/libvirt-bin restart
If a system has multiple NICs or bonding is desired, the admin may configure the networking on the host. The admin must create a bridge and place the desired device into the bridge. This may be done for each of the public network and the management network. Then edit /etc/cloud/agent/agent.properties and add values for the following:
- public.network.device
- private.network.device
These should be set to the name of the bridge that the user created for the respective traffic type. For example: public.network.device=publicbondbr0
This should be done after the install of the software as described previously.
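For reference, a minimal sketch of a bridge definition using RHEL/CentOS network scripts; the device name eth2, the bridge name publicbr0, and the addresses are illustrative only, not prescribed by CloudStack:

# /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
ONBOOT=yes
BRIDGE=publicbr0

# /etc/sysconfig/network-scripts/ifcfg-publicbr0
DEVICE=publicbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0

With a bridge like this in place, public.network.device=publicbr0 would be the corresponding entry in /etc/cloud/agent/agent.properties.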
Time Synchronization
The host must be set to use NTP. All hosts in a pod must have the same time.
1. Install NTP.
On RHEL or CentOS:
# yum install ntp
On Ubuntu:
# apt-get install ntp
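Restart the NTP service so it picks up any configuration changes. On RHEL or CentOS the service is typically named ntpd (a likely equivalent of the Ubuntu command below):

# service ntpd restart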
On Ubuntu:
# service ntp restart
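Configure NTP to start at boot. On RHEL or CentOS (again, the ntpd service name is assumed):

# chkconfig ntpd on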
On Ubuntu:
# chkconfig ntp on
The OVM hosts must follow these restrictions:
- All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled).
- All hosts within a cluster must be homogenous. That means the CPUs must be of the same type, count, and feature flags.
- Within a single cluster, the hosts must be of the same kernel version. For example, if one host is OVM 2.3 64-bit, they must all be OVM 2.3 64-bit.
- Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor's support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
WARNING: The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
2. Unzip the file and copy the .img file to your HTTP server. 3. Follow the instructions in the OVM Installation Guide to install OVM on each host. During installation, you will be
prompted to set an agent password and a root password. You can specify any desired text or accept the default.
4. Repeat for any additional hosts that will be part of the OVM cluster.
NOTE: After ISO installation, the installer reboots into the operating system. Due to a known issue in OVM Server, the reboot will place the VM in the Stopped state. In the CloudStack UI, detach the ISO from the VM (so that the VM will not boot from the ISO again), then click the Start button to restart the VM.
1. Map your iSCSI device to the OVM host's local device. The exact steps to use depend on your system's peculiarities. 2. On every host in the cluster, create the same softlink name so CloudStack can use a consistent path to refer to the
iSCSI LUN from any host. For example, if the softlink name is /dev/ovm-iscsi0:
ln -s /dev/disk/by-path/<output of previous command> /dev/ovm-iscsi0
3. Exactly once on any ONE host in the OVM cluster, format the OCFS2 file system on the iSCSI device.
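A sketch of the formatting command, assuming the softlink /dev/ovm-iscsi0 created in the previous step (the volume label is illustrative):

# mkfs.ocfs2 -L ovm-iscsi0 /dev/ovm-iscsi0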
1. Install the non-OVM hypervisor on at least one host by following one of the instructions below, depending on which
hypervisor you want to use: Citrix XenServer on page 72 VMware vSphere Installation and Configuration on page 80 KVM Installation and Configuration on page 96
2. When you set up the pod that will contain the OVM cluster, remember to include this non-OVM host in its own
cluster along with the OVM cluster in the same pod.
1. CloudStack programs the PXE server with the host MAC address, host IP address, and boot image file based on the
bare metal template the user has chosen.
2. CloudStack programs the DHCP server with the MAC address and IP. 3. CloudStack enables PXE boot on the host and powers it on using IPMI interface. 4. The host broadcasts a DHCP request and receives a reply from the DHCP server. This reply consists of an IP address
for the host, PXE boot server address, and a pointer to the boot image file.
5. The host then contacts the PXE boot server and downloads the image file using TFTP protocol. The image file is a live
kernel and initrd with PING software installed.
6. The host starts the boot process using the downloaded file from the TFTP server. 7. After these steps complete successfully, the host is ready for the workload.
1. Install the Management Server (either single-node, p. 19, or multi-node, p. 29)
2. Set Up the Firewall (p. 106)
3. Set Up IPMI (p. 109)
4. Enable PXE on the Bare Metal Host (p. 110)
5. Install the PXE and DHCP Servers (p. 110)
6. Set Up a CIFS File Server (p. 110)
7. Create a Bare Metal Image (p. 111)
8. Add the PXE Server and DHCP Server to Your Deployment (p. 111)
9. Add a Cluster, Host, and Firewall (p. 112)
10. Add a Service Offering and Template (p. 112)
a.
b. Delete VLANs.
delete vlans
c. Delete the firewall filters.
delete firewall
f. If you do not see the above output, follow these steps to re-create the base security policy.
# delete security policies
# set security policies from-zone trust to-zone untrust policy trust-to-untrust match source-address any destination-address any application any
# set security policies from-zone trust to-zone untrust policy trust-to-untrust then permit
b. Program the interfaces with correct IP addresses. Run the following command once for each interface, substituting your own values.
# set interfaces <name> unit 0 family inet address <address>/<subnet size>
For example:
# set interfaces ge-0/0/1 unit 0 family inet address 192.168.240.1/24 # set interfaces ge-0/0/2 unit 0 family inet address 192.168.1.91/24
c.
Create two security zones: one called "trust" (private zone) and the other called "untrust" (public zone).
# set security zones security-zone trust # set security zones security-zone untrust
d. Add the private interface to the trust zone and the public interface to the untrust zone. Run the following command once for each interface, substituting your own values.
# set security zones security-zone <zone> interfaces <name>
For example:
# set security zones security-zone trust interfaces ge-0/0/1 # set security zones security-zone untrust interfaces ge-0/0/2
e. Allow inbound system services and protocols on both zones so the interfaces can be managed and pass traffic:
# set security zones security-zone trust host-inbound-traffic system-services all
# set security zones security-zone trust host-inbound-traffic protocols all
# set security zones security-zone untrust host-inbound-traffic system-services all
# set security zones security-zone untrust host-inbound-traffic protocols all
f. Create a default route. This can be through the public interface. Substitute your own values for the IP addresses in this example.
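For example, a static default route through the public gateway might look like this (192.168.1.1 is illustrative; substitute your own gateway address):

# set routing-options static route 0.0.0.0/0 next-hop 192.168.1.1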
g. Verify your interface setup. Your IP addresses should appear instead of our examples.
# show interfaces
ge-0/0/1 {
    unit 0 {
        family inet {
            address 192.168.240.1/24;
        }
    }
}
ge-0/0/2 {
    unit 0 {
        family inet {
            address 192.168.1.91/24;
        }
    }
}
h. Verify your zone setup. Your interface names should appear instead of our examples.
# show security zones
security-zone trust {
    host-inbound-traffic {
        system-services {
            all;
        }
        protocols {
            all;
        }
    }
    interfaces {
        ge-0/0/1;
    }
}
security-zone untrust {
    host-inbound-traffic {
        system-services {
            all;
i. Verify your route. Your IP addresses should appear instead of our examples.
b. Set a root password for the system. You will be prompted to enter and re-enter a password.
# set system root-authentication plain-text-password
Set Up IPMI
The procedure to access IPMI settings varies depending on the type of hardware. Consult your manufacturer's documentation if you do not already know how to display the IPMI settings screen. Once you are there, set the following:
- IP address of IPMI NIC
- Netmask
- Gateway
- Username and password for IPMI NIC
1. Log in as root to a host or virtual machine running CentOS 6.2.
2. You should have access to a file in the form of CloudStack-VERSION-N-rhel6.2.tar.gz. Copy that file to the machine.
3. Untar the file and then run the install.sh script inside it. Replace the file and directory names below with those you
are using:
# tar xzf CloudStack-VERSION-N-rhel6.2.tar.gz # cd CloudStack-VERSION-N-rhel6.2 # ./install.sh
You should see a few messages as the installer prepares, followed by a list of choices.
6. Make note of the TFTP root directory that is displayed by this script. You will need it later.
1. On the file server, set up the following directory structure: Share\Ping_Backup. Share is the folder that will be the
CIFS root directory, and Ping_Backup will store images created with the Partimage Is Not Ghost (PING) tool.
2. Share the root directory. On Windows, right-click the Share folder and choose Share with .
1. Install the desired OS on a machine with hardware identical to what you intend to use for the bare metal machines.
Be sure the hardware is identical.
2. Use PING to create an image and store it in the Share\Ping_Backup directory on the CIFS file server you have set up.
a. PING will prompt you for the storage location. Use <IP of CIFS server>\Share\Ping_Backup. b. PING will prompt you for a name for the image. Give any name you find helpful, such as win7_64bit.
1. Add a zone and pod using the Management Server UI. When creating the zone, in network type, choose Basic.
Follow the steps in Adding a Zone on page 48.
2. In the left navigation tree, choose Infrastructure, then Physical Resources.
3. Select your Pod.
4. Click Add Network Device. Input the following and click Save:
- Type: PxeServer
- URL: IP of PXE server
- Username: username of PXE server
- Password: password of PXE server
- Pxe Server Type: PING
- PING Storage IP: IP of CIFS server
- PING Directory: Share/Ping_Backup
- TFTP Directory: The directory displayed by cloud-setup-baremetal earlier. For example, /tftpboot.
- PING CIFS Username: username of CIFS server (optional)
- PING CIFS Password: password of CIFS server (optional)
5. Click Add Network Device again. Input the following and click Save:
Type: ExternalDhcp.
- URL: IP of DHCP server. This is the same value you used for the PXE server in the previous step. The DHCP and PXE servers must both be installed on one machine, and the install.sh script does this (see Install the PXE and DHCP Servers on page 110).
- Username: username of DHCP server
- Password: password of DHCP server
- DHCP Server Type: Dhcpd
1. Add a bare metal cluster as described in Add Cluster: Bare Metal on page 62, then return here for the next step. 2. Add one or more bare metal hosts as described in Add Hosts (Bare Metal) on page 66, then return here for the next
step.
3. Add the firewall as described in Setting Zone VLAN and Running VM Maximums on page 133. Then continue to the
next section, Add a Service Offering and Template.
1. Create a bare metal service offering. In the Management Server UI, click Configuration > Service Offering > Add Service Offering. In the dialog box, fill in these values:
- Name. Any desired name for the service offering.
- Display. Any desired display text.
- Storage Type. Shared.
- # of CPU Cores. Use the same value as when you added the host.
- CPU (in MHZ). Use the same value as when you added the host.
- Memory (in MB). Use the same value as when you added the host.
- Offer HA? No.
- Tags. large
- Public? Yes.
2. Add a bare metal template as described in Creating a Bare Metal Template in the Administrator's Guide.
Your bare metal installation is complete! Now you can create a bare metal instance from the Instances screen of the UI. If you want to allow inbound network traffic to the bare metal instances through public IPs, set up public IPs and port forwarding rules. Follow the steps in How to Set Up Port Forwarding in the Administrator's Guide.
Small-Scale Deployment
[Figure: Small-Scale Deployment — Internet, firewall with public IP 62.43.51.125, Management Server, NFS server, vCenter Server (for VMware only), and computing node on the 192.168.10.x management network]
This diagram illustrates the network architecture of a small-scale CloudStack deployment. A firewall provides a connection to the Internet. The firewall is configured in NAT mode. The firewall forwards HTTP requests and API calls from the Internet to the Management Server. The Management Server resides on the management network. A layer-2 switch connects all physical servers and storage.
A single NFS server functions as both the primary and secondary storage. The Management Server is connected to the management network.
[Figure: Large-Scale Deployment — layer-2 switches, computing nodes, vCenter Server, and storage servers in Pod 1 and Pod 2]
This diagram illustrates the network architecture of a large-scale CloudStack deployment.
- A layer-3 switching layer is at the core of the data center. A router redundancy protocol like VRRP should be deployed. Typically high-end core switches also include firewall modules. Separate firewall appliances may also be used if the layer-3 switch does not have integrated firewall capabilities. The firewalls are configured in NAT mode. The firewalls provide the following functions:
  - Forwards HTTP requests and API calls from the Internet to the Management Server. The Management Server resides on the management network.
  - When the cloud spans multiple zones, the firewalls should enable site-to-site VPN such that servers in different zones can directly reach each other.
- A layer-2 access switch layer is established for each pod. Multiple switches can be stacked to increase port count. In either case, redundant pairs of layer-2 switches should be deployed.
- The Management Server cluster (including front-end load balancers, Management Server nodes, and the MySQL database) is connected to the management network through a pair of load balancers.
- Secondary storage servers are connected to the management network.
- Each pod contains storage and computing servers. Each storage and computing server should have redundant NICs connected to separate layer-2 access switches.
[Figure: Management Server cluster — hardware load balancers in front of multiple Management Servers, with primary and backup MySQL databases]
The administrator must decide the following:
- Whether or not load balancers will be used
- How many Management Servers will be deployed
- Whether MySQL replication will be deployed to enable disaster recovery
Multi-Site Deployment
The CloudStack platform scales well into multiple sites through the use of zones. The following diagram shows an example of a multi-site deployment.
[Figure: Multi-Site Deployment — Availability Zone 1 in Data Center 1 and Availability Zone 2 in Data Center 2]
Data Center 1 houses the primary Management Server as well as zone 1. The MySQL database is replicated in real time to the secondary Management Server installation in Data Center 2.
Separate Storage Network
This diagram illustrates a setup with a separate storage network. Each server has four NICs, two connected to pod-level network switches and two connected to storage network switches.
There are two ways to configure the storage network:
- Bonded NIC and redundant switches can be deployed for NFS. In NFS deployments, redundant switches and bonded NICs still result in one network (one CIDR block + default gateway address).
- iSCSI can take advantage of two separate storage networks (two CIDR blocks, each with its own default gateway). A multipath iSCSI client can fail over and load balance between separate storage networks.
[Figure: NIC Bonding — two NICs on the NFS server bond to the same IP address, 192.168.10.14]
NIC Bonding and Multipath I/O
This diagram illustrates the differences between NIC bonding and Multipath I/O (MPIO). NIC bonding configuration involves only one network. MPIO involves two separate networks.
Feature
XenServer 6.0.2
vSphere 4.1/5.0
OVM 2.3
Bare Metal
Network throttling Security groups in zones that use basic networking iSCSI FibreChannel Local disk HA Snapshots of local disk Local disk as data disk Work load balancing Manual live migration of VMs from host to host Conserve management traffic IP addresses by using link local network to communicate with virtual router
Yes Yes
Yes No
No Yes
No No
N/A No
Yes
No
Yes
Yes
N/A
Network Setup
Achieving the correct networking setup is crucial to a successful CloudStack installation. This section contains information to help you make decisions and follow the right procedures to get your network set up correctly.
Each zone has either basic or advanced networking. Once the choice of networking model for a zone has been made and configured in CloudStack, it can not be changed. A zone is either basic or advanced for its entire lifetime.
The following table compares the networking features in the two networking models.
Networking Feature    Basic Network     Advanced Network
Number of networks    Single network    Multiple networks
Firewall type         Physical          Physical & virtual
Load balancer         Physical          Physical & virtual
Isolation type        Layer 3           Layer 2 & Layer 3
VPN support           No                Yes
Port forwarding       Physical          Physical & virtual
1:1 NAT               Physical          Physical & virtual
Source NAT            No                Physical & virtual
Userdata              Yes               Yes
The two types of networking may be in use in the same cloud. However, a given zone must use either Basic Networking or Advanced Networking. Different types of network traffic can be segmented on the same physical network. Guest traffic can also be segmented by account. To isolate traffic, you can use separate VLANs. If you are using separate VLANs on a single physical network, make sure the VLAN tags are in separate numerical ranges.
500-599 600-799
800-899
900-999
> 1000
Dell 62xx
The following steps show how a Dell 62xx is configured for zone-level layer-3 switching. These steps assume VLAN 201 is used to route untagged private IPs for pod 1, and pod 1's layer-2 switch is connected to Ethernet port 1/g1. The Dell 62xx Series switch supports up to 1024 VLANs.
The statements configure Ethernet port 1/g1 as follows: VLAN 201 is the native untagged VLAN for port 1/g1. All VLANs (300-999) are passed to all the pod-level layer-2 switches.
Cisco 3750
The following steps show how a Cisco 3750 is configured for zone-level layer-3 switching. These steps assume VLAN 201 is used to route untagged private IPs for pod 1, and pod 1's layer-2 switch is connected to GigabitEthernet1/0/1.
1. Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only use VLANs up to 999, vtp
transparent mode is not strictly required.
vtp mode transparent vlan 200-999 exit
2. Configure GigabitEthernet1/0/1.
interface GigabitEthernet1/0/1 switchport trunk encapsulation dot1q switchport mode trunk switchport trunk native vlan 201 exit
The statements configure GigabitEthernet1/0/1 as follows: VLAN 201 is the native untagged VLAN for port GigabitEthernet1/0/1. Cisco passes all VLANs by default. As a result, all VLANs (300-999) are passed to all the pod-level layer-2 switches.
Layer-2 Switch
The layer-2 switch is the access switching layer inside the pod. It should trunk all VLANs into every computing host. It should switch traffic for the management network containing computing and storage hosts. The layer-3 switch will serve as the gateway for the management network.
Example Configurations
This section contains example configurations for specific switch models for pod-level layer-2 switching. It assumes VLAN management protocols such as VTP or GVRP have been disabled. The scripts must be changed appropriately if you choose to use VTP or GVRP.
Dell 62xx
The following steps show how a Dell 62xx is configured for pod-level layer-2 switching.
2. VLAN 201 is used to route untagged private IP addresses for pod 1, and pod 1 is connected to this layer-2 switch.
interface range ethernet all switchport mode general switchport general allowed vlan add 300-999 tagged exit
The statements configure all Ethernet ports to function as follows: All ports are configured the same way. All VLANs (300-999) are passed through all the ports of the layer-2 switch.
Cisco 3750 The following steps show how a Cisco 3750 is configured for pod-level layer-2 switching.
1. Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only use VLANs up to 999, vtp
transparent mode is not strictly required.
vtp mode transparent vlan 300-999 exit
2. Configure all ports to dot1q and set 201 as the native VLAN.
interface range GigabitEthernet 1/0/1-24 switchport trunk encapsulation dot1q switchport mode trunk switchport trunk native vlan 201 exit
By default, Cisco passes all VLANs. Cisco switches complain if the native VLAN IDs are different when two ports are connected together. That's why we specify VLAN 201 as the native VLAN on the layer-2 switch.
Hardware Firewall
All deployments should have a firewall protecting the management server; see Generic Firewall Provisions. Optionally, some deployments may also have a Juniper SRX firewall that will be the default gateway for the guest networks; see External Guest Firewall Integration for Juniper SRX (Optional).
To achieve the above purposes you must set up fixed configurations for the firewall. Firewall rules and policies need not change as users are provisioned into the cloud. Any brand of hardware firewall that supports NAT and site-to-site VPN can be used.
[Figure: Guest network topology — Public Internet, Firewall, Load Balancer, Zone-level Switch, Pods 1 through N]
1. Install your SRX appliance according to the vendor's instructions. 2. Connect one interface to the management network and one
interface to the public network. Alternatively, you can connect the same interface to both networks and use a VLAN for the public network.
3. Make sure "vlan-tagging" is enabled on the private interface. 4. Record the public and private interface names. If you used a VLAN for the public interface, add a ".[VLAN TAG]"
after the interface name. For example, if you are using ge-0/0/3 for your public interface and VLAN tag 301, your public interface name would be "ge-0/0/3.301". Your private interface name should always be untagged because the CloudStack software automatically creates tagged logical interfaces.
5. Create a public security zone and a private security zone. By default, these will already exist and will be called
"untrust" and "trust". Add the public interface to the public zone and the private interface to the private zone. Note down the security zone names.
6. Make sure there is a security policy from the private zone to the public zone that allows all traffic.
7. Note the username and password of the account you want the CloudStack software to log in to when it is
programming rules.
8. Make sure the "ssh" and "xnm-clear-text" system services are enabled. 9. If traffic metering is desired:
a. Create an incoming firewall filter and an outgoing firewall filter. These filters should have the same names as your public security zone name and private security zone name respectively. The filters should be set to be "interface-specific". For example, here is the configuration where the public zone is "untrust" and the private zone is "trust":
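A sketch of creating such interface-specific filters in Junos, assuming the zone names untrust and trust as in the example:

# set firewall filter untrust interface-specific
# set firewall filter trust interface-specific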
b. Add the firewall filters to your public interface. For example, a sample configuration output (for public interface ge-0/0/3.0, public security zone untrust, and private security zone trust) is:
ge-0/0/3 {
    unit 0 {
        family inet {
            filter {
                input untrust;
                output trust;
            }
            address 172.25.0.252/16;
        }
    }
}
10. Make sure all VLANs are brought to the private interface of the SRX.
11. After the CloudStack Management Server is installed, log in to the CloudStack UI as administrator.
12. In the left navigation bar, click Infrastructure.
13. In Zones, click View More.
14. Choose the zone you want to work with.
15. Click the Network tab.
16. In the Network Service Providers node of the diagram, click Configure. (You might have to scroll down to see this.)
17. Click SRX.
18. Click the Add New SRX button (+) and provide the following:
- IP Address. The IP address of the SRX.
- Username. The user name of the account on the SRX that CloudStack should use.
- Password. The password of the account.
- Public Interface. The name of the public interface on the SRX. For example, ge-0/0/2. A ".x" at the end of the interface indicates the VLAN that is in use.
- Private Interface. The name of the private interface on the SRX. For example, ge-0/0/1.
- Usage Interface. (Optional) Typically, the public interface is used to meter traffic. If you want to use a different interface, specify its name here.
- Number of Retries. The number of times to attempt a command on the SRX before failing. The default value is 2.
- Timeout (seconds). The time to wait for a command on the SRX before considering it failed. Default is 300 seconds.
- Public Network. The name of the public network on the SRX. For example, trust.
- Private Network. The name of the private network on the SRX. For example, untrust.
- Capacity. The number of networks the device can handle.
- Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
19. Click OK. 20. Click Global Settings. Set the parameter external.network.stats.interval to indicate how often you want CloudStack to
fetch network usage statistics from the Juniper SRX. If you are not using the SRX to gather network usage statistics, set to 0.
8250    TCP     Yes
8096    HTTP    No
Topology Requirements
Security Requirements
The public Internet must not be able to access port 8096 or port 8250 on the Management Server.
1. Set up the appliance according to the vendor's directions. 2. Connect it to the networks carrying public traffic and management traffic (these could be the same network). 3. Record the IP address, username, password, public interface name, and private interface name. The interface names
will be something like "1.1" or "1.2".
4. Make sure that the VLANs are trunked to the management network interface.
5. After the CloudStack Management Server is installed, log in as administrator to the CloudStack UI.
6. In the left navigation bar, click Infrastructure.
7. In Zones, click View More.
8. Choose the zone you want to work with.
9. Click the Network tab.
10. In the Network Service Providers node of the diagram, click Configure. (You might have to scroll down to see this.)
11. Click NetScaler or F5.
12. Click the Add button (+) and provide the following:
For the NetScaler:
- IP address. The IP address of the device.
- Username/Password. The authentication credentials to access the device. CloudStack uses these credentials to access the device.
- Type. The type of device that is being added. It could be F5 Big Ip Load Balancer, NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the NetScaler types, see the CloudStack Administration Guide.
- Public interface. Interface of device that is configured to be part of the public network.
- Private interface. Interface of device that is configured to be part of the private network.
- Number of retries. Number of times to attempt a command on the device before considering the operation failed. Default is 2.
- Capacity. Number of guest networks/accounts that will share this device.
- Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
The installation and provisioning of the external load balancer is finished. You can proceed to add VMs and NAT/load balancing rules.
1. On your network infrastructure, install Traffic Sentinel and configure it to gather traffic data. For installation and
configuration steps, see inMon documentation at http://inmon.com.
2. In the Traffic Sentinel UI, configure Traffic Sentinel to accept script querying from guest users. CloudStack will be the
guest user performing the remote queries to gather network usage for one or more IP addresses.
a. Click File > Users > Access Control > Reports Query, then select Guest from the dropdown list.
b. Click File > Users > Access Control > Reports Script, then select Guest from the dropdown list.
3. On CloudStack, add the Traffic Sentinel host by calling the CloudStack API command addTrafficMonitor. Pass in the
URL of the Traffic Sentinel as protocol + host + port (optional); for example, http://10.147.28.100:8080. For the addTrafficMonitor command syntax, see the API Reference at http://download.cloud.com/releases/3.0.0/api_3.0.0/root_admin/addTrafficMonitor.html. For information about how to call the CloudStack API, see the Developers Guide at http://docs.cloud.com/CloudStack_Documentation/Developer's_Guide%3A_CloudStack.
4. Log in to the CloudStack UI as administrator.
5. Click Configuration > Global Settings. Set the following:
- direct.network.stats.interval: how often you want CloudStack to query Traffic Sentinel.
Based on your deployment's needs, choose the appropriate value of guest.vlan.bits. Set it as described in Edit the Global Configuration Settings (Optional) on page 138 and restart the Management Server.
Storage Setup
CloudStack is designed to work with a wide variety of commodity and enterprise-grade storage. Local disk may be used as well, if supported by the selected hypervisor. Storage type support for guest virtual disks differs based on hypervisor selection. XenServer NFS iSCSI Supported Supported vSphere Supported Supported via VMFS KVM Supported Supported via Clustered Filesystems Supported via Clustered Filesystems Not Supported
Fiber Channel
Supported
Local Disk
Supported
The use of the Cluster Logical Volume Manager (CLVM) for KVM is not officially supported with CloudStack 3.0.x.
Small-Scale Setup
In a small-scale setup, a single NFS server can function as both primary and secondary storage. The NFS server just needs to export two separate shares, one for primary storage and the other for secondary storage.
Secondary Storage
CloudStack is designed to work with any scalable secondary storage system. The only requirement is the secondary storage system supports the NFS protocol.
Example Configurations
The storage server should be a machine with a large number of disks. The disks should ideally be managed by a hardware RAID controller. Modern hardware RAID controllers support hot plug functionality independent of the operating system so you can replace faulty disks without impacting the running operating system.
In this section we go through a few examples of how to set up storage to work properly with CloudStack on a few types of NFS and iSCSI storage systems.
1. Install the RHEL/CentOS distribution on the storage server. 2. If the root volume is more than 2 TB in size, create a smaller boot volume to install RHEL/CentOS. A root volume of
20 GB should be sufficient.
3. After the system is installed, create a directory called /export. This can be a directory in the root partition itself
or a mount point for a large disk volume.
4. If you have more than 16TB of storage on one host, create multiple EXT3 file systems and multiple NFS exports.
Individual EXT3 file systems cannot exceed 16TB.
5. After the /export directory is created, run the following command to configure it as an NFS export.
# echo "/export <CIDR>(rw,async,no_root_squash)" >> /etc/exports
Adjust the above command to suit your deployment needs.
Limiting the NFS export. It is highly recommended that you limit the NFS export to a particular subnet by specifying a subnet mask (e.g., 192.168.1.0/24). By allowing access only from within the expected cluster, you avoid having non-pool members mount the storage. The limit you place must include the management network(s) and the storage network(s). If the two are the same network, then one CIDR is sufficient. If you have a separate storage network, you must provide separate CIDRs for both, or one CIDR that is broad enough to span both. The following is an example with separate CIDRs:
/export 192.168.1.0/24(rw,async,no_root_squash) 10.50.1.0/24(rw,async,no_root_squash)
Removing the async flag. The async flag improves performance by allowing the NFS server to respond before writes are committed to the disk. Remove the async flag in your mission critical production deployment.
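Before adjusting the firewall in the next step, the export typically needs to be activated and the NFS service enabled. A sketch of a standard RHEL/CentOS setup (these commands and file contents are illustrative, not quoted from the original guide):

# exportfs -a
# service nfs start
# chkconfig nfs on

The iptables rules below assume the NFS helper daemons are pinned to fixed ports in /etc/sysconfig/nfs, for example:

LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662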
8. Edit the /etc/sysconfig/iptables file and add the following lines at the beginning of the INPUT chain.
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
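After adding the rules, reload the firewall so they take effect; a typical sequence on RHEL/CentOS:

# service iptables restart
# service iptables save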
1. Install iscsiadm.
# yum install iscsi-initiator-utils
# service iscsi start
# chkconfig --add iscsi
# chkconfig iscsi on
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
2. Discover the iSCSI target. For example:
# iscsiadm -m discovery -t st -p 172.23.10.240:3260 172.23.10.240:3260,1 iqn.2001-05.com.equallogic:0-8a0906-83bcb3401-16e0002fd0a46f3drhel5-test
3. Log in.
# iscsiadm -m node -T <Complete Target Name> -l -p <Group IP>:3260
For example:
# iscsiadm -m node -l -T iqn.2001-05.com.equallogic:83bcb3401-16e0002fd0a46f3d-rhel5test -p 172.23.10.240:3260
xen.setup.multipath
use.local.storage
host
default.page.size
Maximum number of items per page that can be returned by a CloudStack API command. The limit applies at the cloud level and can vary from cloud to cloud. You can override this with a lower value on a particular API call by using the page and pagesize API command parameters. For more information, see the Developer's Guide. Default: 500.
1. Log in as administrator to the CloudStack UI. Substitute your own management server IP address.
http://management-server-ip-address:8080/client
The default credentials are admin for user and password for password. The domain field should be left blank. A blank domain field is defaulted to the ROOT domain.
2. In the left navigation bar, click Global Settings. 3. Use the Search box to find the setting you need. 4. Click the Edit button next to the parameter, type a new value, then click the Apply icon. 5. After you change any global configuration parameter, restart the Management Server. You might also need to restart
other services as directed in the confirmation popup dialog that appears when you click Apply.
# service cloud-management restart
You should see a few messages as the installer prepares, followed by a list of choices.
3. Once installed, start the Usage Server with the following command.
# service cloud-usage start
SSL (Optional)
CloudStack provides HTTP access in its default installation. There are a number of technologies and sites that choose to implement SSL. As a result, we have left CloudStack to expose HTTP under the assumption that a site will implement its typical practice. CloudStack uses Tomcat as its servlet container. For sites that would like CloudStack to terminate the SSL session, Tomcat's SSL access may be enabled. Tomcat SSL configuration is described at http://tomcat.apache.org/tomcat-6.0-doc/sslhowto.html.
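For reference, enabling HTTPS in Tomcat 6 amounts to adding an SSL connector to Tomcat's server.xml. The sketch below is illustrative only; the keystore path and password are assumptions, not CloudStack defaults:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/etc/cloud/management/cloudstack.keystore" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />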
1. Ensure that this is a fresh install with no data in the master. 2. Edit my.cnf on the master and add the following in the [mysqld] section below datadir.
log_bin=mysql-bin server_id=1
The server_id must be unique with respect to other servers. The recommended way to achieve this is to give the master an ID of 1 and each slave a sequential number greater than 1, so that the servers are numbered 1, 2, 3, etc. Restart the MySQL service: On RHEL or CentOS:
# service mysqld restart
On Ubuntu:
# service mysql restart
3. Create a replication account on the master and give it privileges. We will use the cloud-repl user with the password
password. This assumes that master and slave run on the 172.16.1.0/24 network.
# mysql -u root
mysql> create user 'cloud-repl'@'172.16.1.%' identified by 'password';
mysql> grant replication slave on *.* TO 'cloud-repl'@'172.16.1.%';
mysql> flush privileges;
mysql> flush tables with read lock;
4. Leave the current MySQL session running. 5. In a new shell start a second MySQL session. 6. Retrieve the current position of the database.
# mysql -u root
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      412 |              |                  |
+------------------+----------+--------------+------------------+
7. Note the file and the position that are returned by your instance. 8. Exit from this session. 9. Complete the master setup. Returning to your first session on the master, release the locks and exit MySQL.
mysql> unlock tables;
10. Install and configure the slave. On the slave server, run the following commands.
# yum install mysql-server
# chkconfig mysqld on
11. Edit my.cnf and add the following lines in the [mysqld] section below datadir.
server_id=2
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
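Restart MySQL on the slave so the new settings take effect. On RHEL or CentOS the service is typically mysqld:

# service mysqld restart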
On Ubuntu:
# service mysql restart
13. Instruct the slave to connect to and replicate from the master. Replace the IP address, password, log file, and
position with the values you have used in the previous steps.
mysql> change master to
    -> master_host='172.16.1.217',
    -> master_user='cloud-repl',
    -> master_password='password',
    -> master_log_file='mysql-bin.000001',
    -> master_log_pos=412;
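After the change master statement completes, replication is typically started and verified from the same session; a sketch:

mysql> start slave;
mysql> show slave status\G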
15. Optionally, open port 3306 on the slave as was done on the master earlier.
This is not required for replication to work. But if you choose not to do this, you will need to do it when failover to the replica occurs.
Failover
This procedure provides a replicated database that can be used for manual failover of the Management Servers. CloudStack failover from one MySQL instance to another is performed by the administrator. In the event of a database failure you should:
1. Stop the Management Servers (via service cloud-management stop).
2. Change the replica's configuration to be a master and restart it.
3. Ensure that the replica's port 3306 is open to the Management Servers.
4. Make a change so that the Management Server uses the new database. The simplest approach is to put the IP address of the new database server into each Management Server's /etc/cloud/management/db.properties.
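As an illustration of step 4, the edit might look like the following. The property name db.cloud.host is an assumption based on the typical layout of db.properties, and 172.16.1.218 is an example address; confirm the key in your own file and substitute the replica's real IP.
# vi /etc/cloud/management/db.properties
db.cloud.host=172.16.1.218
# service cloud-management start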
Best Practices
Deploying a cloud is challenging. There are many different technology choices to make, and CloudStack is flexible enough in its configuration that there are many possible ways to combine and configure the chosen technologies. This section contains suggestions and requirements for cloud deployments. Treat the suggestions as guidance rather than absolutes; however, we do encourage anyone planning to build a cloud outside of these guidelines to discuss their needs with us.
Monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use CloudStack global configuration settings to set this as the default limit. Monitor the VM activity in each cluster and keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). With 8 hosts and a per-host limit of 50 VMs, for instance, that works out to at most 7 * 50 = 350 VMs in the cluster. Once a cluster reaches this number of VMs, use the CloudStack UI to disable allocation to the cluster.
WARNING: The lack of up-to-date hotfixes can lead to data corruption and lost VMs. Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor's support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches. XenServer users can find more information at Highly Recommended Hotfixes for XenServer in the CloudStack Knowledge Base.
Troubleshooting
Checking the Management Server Log
The command below shows a quick way to look for errors in the Management Server log. When copying and pasting this command, be sure it has pasted as a single line before executing; some document viewers may introduce unwanted line breaks in copied text.
# grep -i -E 'exception|unable|fail|invalid|leak|warn' /var/log/cloud/management/management-server.log
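To watch for the same patterns as new entries are written, rather than scanning the whole file, a variation such as the following can be used. This is a generic shell sketch, not a command taken from the guide.
# tail -f /var/log/cloud/management/management-server.log | grep -i -E 'exception|unable|fail|invalid|leak|warn'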
The configured DNS server cannot resolve your internal hostnames. E.g., you entered private-nfs.lab.example.org for secondary storage NFS, but gave a DNS server that your customers use, and that server cannot resolve private-nfs.lab.example.org.
You can troubleshoot the secondary storage VM either by running a diagnostic script or by checking the log file. The following sections detail each of these methods. If you have corrected the problem but the template hasn't started to download, restart the cloud service with service cloud restart. This will restart the default CentOS template download.
1. In the admin UI, go to Infrastructure -> Virtual Resources -> System VMs. Select the target VM.
2. Note the name of the host hosting the SSVM as shown in the Host row. Also note the private IP of the SSVM as shown in the Private IP row.
For VMware: ssh into the CloudStack Management Server using your known user and password. Run this command:
This script will test various aspects of the SSVM and report warnings and errors.
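The diagnostic command itself is not reproduced above. On installations of this vintage it is typically an ssh invocation from the Management Server to the SSVM's private IP on port 3922, using the system VM keypair, that runs the ssvm-check.sh script. The key path and script path below are assumptions; verify them against your own installation before use.
# ssh -i /var/lib/cloud/management/.ssh/id_rsa -p 3922 root@<ssvm-private-ip> /usr/local/cloud/systemvm/ssvm-check.sh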
VLAN Issues
A common installation issue is that your VLANs are not set up correctly. VLANs must be trunked into every host in the zone.
Cause
This most likely means that the Console Proxy VM cannot connect from its private interface to port 8250 on the Management Server (or load-balanced Management Server pool).
Solution
Check the following:
The load balancer has port 8250 open.
All Management Servers have port 8250 open.
There is a network path from the CIDR in the pod hosting the Console Proxy VM to the load balancer or Management Server.
The "host" global configuration parameter is set to the load balancer if one is in use.
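A quick way to test the network path described above is to check whether port 8250 on the Management Server (or load balancer) is reachable from a machine in the pod's CIDR. The nc invocation below is a generic sketch, not a command from this guide; substitute your own address.
# nc -zv <management-server-or-load-balancer-ip> 8250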
1. Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS) and set the log-bin and binlog-format variables in the [mysqld] section. For example:
log-bin=mysql-bin
binlog-format = 'ROW'
Restart the MySQL service.
On RHEL or CentOS:
# service mysqld restart
On Ubuntu:
# service mysql restart
NOTE: The binlog-format variable is supported in MySQL versions 5.1 and greater. It is not supported in MySQL 5.0. In some versions of MySQL, an underscore character is used in place of the hyphen in the variable name. For the exact syntax and spelling of each variable, consult the documentation for your version of MySQL.
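After restarting MySQL, the new values can be confirmed from a MySQL session. These are standard MySQL statements shown for convenience.
# mysql -u root
mysql> show variables like 'log_bin';
mysql> show variables like 'binlog_format';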
Preparation Checklists
Start by gathering the information in the following checklists. This will make installation go more smoothly.
Database Checklist
For database setup, you will need the following information. Record a value for each item; notes are given where applicable.
IP Address (do not use IPv6 addresses)
Netmask
Gateway
FQDN (DNS should resolve the FQDN of the Database Server)
Login id of the root user
Password for the root user
OS: RHEL 6.2 (or later) or CentOS 6.2 (or later); choose one of the supported OS platforms
ISO Available (CloudStack requires the ISO used for installing the OS in order to install dependent RPMs)
Username for Cloud User in MySQL (default is cloud)
Password for Cloud user in MySQL (default is password)
Storage Checklist
CloudStack requires two types of storage: Primary (in a Basic Installation, this uses local disk) and Secondary Storage (NFS). The volumes used for Primary and Secondary Storage should be accessible from the Management Server and the hypervisors. These volumes should allow root users to read and write data, and they must be for the exclusive use of CloudStack and should not contain any data. You will need the following information when setting up storage.
Type of Storage (choose NFS, iSCSI, or local)
Storage Server IP Address
Storage Server Path
Storage Size
Secondary Storage Type (NFS; only NFS is supported)
Secondary Storage IP Address(es)
Secondary Storage Path
Secondary Storage Size
Existing data backed up? Please back up any data on Primary and Secondary Storage volumes, as it may be overwritten by CloudStack.
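Before installation, it can help to confirm that the NFS volumes listed above are exported and writable from the Management Server and hypervisor hosts. The commands below are a generic sketch; substitute your own storage server IP address and export path.
# showmount -e <storage-server-ip>
# mount -t nfs <storage-server-ip>:<export-path> /mnt
# touch /mnt/test && rm /mnt/test
# umount /mnt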
Contacting Support
Open-source community
A variety of channels are available for getting help with CloudStack, from forums to IRC chat and more. For details, see http://cloudstack.org/discuss/.
Commercial customers
The CloudStack support team is available to help commercial customers plan and execute their installations. To contact the support team, log in to the support portal at https://na6.salesforce.com/sserv/login.jsp?orgId=00D80000000LWom using the account credentials you received when you purchased your support contract.