8/20/2019 Red_Hat_Enterprise_Linux_OpenStack_Platform-6-Administration_Guide-en-US.pdf
Red Hat Enterprise Linux OpenStack Platform 6 Administration Guide
Managing a Red Hat Enterprise Linux OpenStack Platform environment
OpenStack Documentation Team, Red Hat
Red Hat Enterprise Linux OpenStack Platform 6 Administration Guide
Managing a Red Hat Enterprise Linux OpenStack Platform environment
OpenStack Documentation Team, Red Hat Customer Content Services, [email protected]
Legal Notice
Copyright © 2015 Red Hat Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
Java ® is a registered trademark of Oracle and/or its affiliates.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
Abstract
This Administration Guide provides procedures for the management of a Red Hat Enterprise Linux OpenStack Platform environment. Procedures to manage both user projects and the cloud configuration are provided.
Table of Contents
CHAPTER 1. INTRODUCTION
1.1. OPENSTACK DASHBOARD
1.2. COMMAND-LINE CLIENTS
CHAPTER 2. PROJECTS AND USERS
2.1. MANAGE PROJECTS
2.2. MANAGE USERS
CHAPTER 3. VIRTUAL MACHINE INSTANCES
3.1. MANAGE INSTANCES
3.2. MANAGE INSTANCE SECURITY
3.3. MANAGE FLAVORS
3.4. MANAGE HOST AGGREGATES
3.5. SCHEDULE HOSTS AND CELLS
3.6. EVACUATE INSTANCES
CHAPTER 4. IMAGES AND STORAGE
4.1. MANAGE IMAGES
4.2. MANAGE VOLUMES
4.3. MANAGE CONTAINERS
CHAPTER 5. NETWORKING
5.1. MANAGE NETWORK RESOURCES
5.2. CONFIGURE IP ADDRESSING
5.3. BRIDGE THE PHYSICAL NETWORK
CHAPTER 6. CLOUD RESOURCES
6.1. MANAGE STACKS
6.2. USING THE TELEMETRY SERVICE
CHAPTER 7. TROUBLESHOOTING
7.1. LOGGING
7.2. SUPPORT
APPENDIX A. IMAGE CONFIGURATION PARAMETERS
APPENDIX B. REVISION HISTORY
CHAPTER 1. INTRODUCTION
Red Hat Enterprise Linux OpenStack Platform (RHEL OpenStack Platform) provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads.
This guide provides cloud-management procedures for the following OpenStack services: Block Storage, Compute, Dashboard, Identity, Image, Object Storage, OpenStack Networking, Orchestration, and Telemetry.
Procedures for both administrators and project users (end users) are provided; administrator-only procedures are marked as such.
You can manage the cloud using either the OpenStack dashboard or the command-line clients. Most procedures can be carried out using either method; some of the more advanced procedures can only be executed on the command line. This guide provides procedures for the dashboard where possible.
Note
For the complete suite of documentation for RHEL OpenStack Platform, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/
1.1. OPENSTACK DASHBOARD
The OpenStack dashboard is a web-based graphical user interface for managing OpenStack services.
To access the dashboard in a browser, the dashboard service must be installed, and you must know the dashboard host name (or IP address) and login password. The dashboard URL will be:
http://HOSTNAME/dashboard/
Figure 1.1. Log In Screen
1.2. COMMAND-LINE CLIENTS
Each RHEL OpenStack Platform component typically has its own management client. For example, the Compute service has the nova client. For a complete listing of client commands and parameters, see the "Command-Line Interface Reference" in https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/
To use a command-line client, the client must be installed and you must first load the environment variables used for authenticating with the Identity service. You can do this by creating an RC (run control) environment file, and placing it in a secure location to run as needed.
Run the file using:
$ source RC_FileName
Example 1.1.
$ source ~/keystonerc_admin
Note
By default, the Packstack utility creates the admin and demo users, and their keystonerc_admin and keystonerc_demo RC files.
1.2.1. Automatically Create an RC File
Using the dashboard, you can automatically generate and download an RC file for the current project user, which enables the use of the OpenStack command-line clients (see Section 1.2, “Command-line Clients”). The file's environment variables map to the project and the current project's user.
1. In the dashboard, select the Project tab, and click Compute > Access & Security.
2. Select the API Access tab, which lists all services that are visible to the project's logged-in user.
3. Click Download OpenStack RC file to generate the file. The file name maps to the current user. For example, if you are an 'admin' user, an admin-openrc.sh file is generated and downloaded through the browser.
1.2.2. Manually Create an RC File
If you create an RC file manually, you must set the following environment variables:
OS_USERNAME=userName
OS_TENANT_NAME=tenantName
OS_PASSWORD=userPassword
OS_AUTH_URL=http://IP:35357/v2.0/
PS1='[\u@\h \W(keystone_userName)]\$ '
Example 1.2.
The following example file sets the necessary variables for the admin user:
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=secretPass
export OS_AUTH_URL=http://192.0.2.24:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
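As a quick sanity check, you can source a file like the one above and confirm that the variables are set in your shell. This is a minimal sketch; the file path and password are placeholder values, not values from this guide:

```shell
# Write a sample RC file (placeholder values; adjust for your environment).
cat > /tmp/keystonerc_admin <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=secretPass
export OS_AUTH_URL=http://192.0.2.24:35357/v2.0/
EOF

# Load the variables into the current shell, then confirm they are set.
source /tmp/keystonerc_admin
echo "$OS_USERNAME"    # admin
echo "$OS_AUTH_URL"    # http://192.0.2.24:35357/v2.0/
```

Once the variables are loaded, any client run from the same shell (nova, keystone, and so on) authenticates with them.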
CHAPTER 2. PROJECTS AND USERS
As a cloud administrator, you can manage both projects and users. Projects are organizational units in the cloud to which you can assign users. Projects are also known as tenants or accounts. You can manage projects and users independently from each other. Users can be members of one or more projects.
During cloud setup, the operator defines at least one project, user, and role. The operator links the role to the user and the user to the project. Roles define the actions that users can perform. As a cloud administrator, you can create additional projects and users as needed. Additionally, you can add, update, and delete projects and users, assign users to one or more projects, and change or remove these assignments. To enable or temporarily disable a project or user, update that project or user.
After you create a user account, you must assign the account to a primary project. Optionally, you can assign the account to additional projects. Before you can delete a user account, you must remove the user account from its primary project.
2.1. MANAGE PROJECTS
2.1.1. Create a Project
1. As an admin user in the dashboard, select Identity > Projects.
2. Click Create Project.
3. On the Project Information tab, enter a name and description for the project (the Enabled check box is selected by default).
4. On the Project Members tab, add members to the project from the All Users list.
5. On the Quotas tab, specify resource limits for the project.
6. Click Create Project.
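The keystone client used elsewhere in this guide offers a command-line equivalent to these steps. A sketch, assuming a sourced admin RC file; the project name and description are example values:

```shell
# Create a project (tenant), then verify it appears in the listing.
$ keystone tenant-create --name newProject --description "sample project" --enabled true
$ keystone tenant-list
```

Members and quotas can then be managed separately (for example, with keystone user-role-add and nova quota-update).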
2.1.2. Update a Project
You can update a project to change its name or description, enable or temporarily disable it, or update its members.
1. As an admin user in the dashboard, select Identity > Projects.
2. In the project's Actions column, click the arrow, and click Edit Project.
3. In the Edit Project window, you can update a project to change its name or description, and enable or temporarily disable the project.
2.1.5. Manage Project Security
All projects have a default security group that is applied to any instance that has no other defined security group. Unless you change the default values, this security group denies all incoming traffic and allows only outgoing traffic to your instance.
2.1.5.1. Create a Security Group
1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, click Create Security Group.
3. Provide a name and description for the group, and click Create Security Group.
2.1.5.2. Add a Security Group Rule
By default, rules for a new group only provide outgoing access. You must add new rules to provide additional access.
1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, click Manage Rules for the security group.
3. Click Add Rule to add a new rule.
4. Specify the rule values, and click Add.
Table 2.1. Required Rule Fields
Rule: The rule type. If you specify a rule template (for example, 'SSH'), its fields are automatically filled in:
  TCP: Typically used to exchange data between systems, and for end-user communication.
  UDP: Typically used to exchange data between systems, particularly at the application level.
  ICMP: Typically used by network devices, such as routers, to send error or monitoring messages.
Direction: Ingress (inbound) or Egress (outbound).
Open Port: For TCP or UDP rules, the Port or Port Range to open for the rule (single port or range of ports):
  For a range of ports, enter port values in the From Port and To Port fields.
  For a single port, enter the port value in the Port field.
Type: The type for ICMP rules; must be in the range '-1:255'.
Code: The code for ICMP rules; must be in the range '-1:255'.
Remote: The traffic source for this rule:
  CIDR (Classless Inter-Domain Routing): IP address block, which limits access to IPs within the block. Enter the CIDR in the Source field.
  Security Group: Source group that enables any instance in the group to access any other group instance.
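Security group rules can also be managed with the nova client used elsewhere in this guide. A sketch, assuming a sourced RC file; the group name, ports, and CIDR are example values:

```shell
# Create a security group, then open SSH (TCP port 22) from one subnet.
$ nova secgroup-create web "web servers"
$ nova secgroup-add-rule web tcp 22 22 192.0.2.0/24

# Allow all ICMP (type and code -1 match any) from anywhere.
$ nova secgroup-add-rule web icmp -1 -1 0.0.0.0/0
```

Rules added this way appear on the Security Groups tab in the dashboard.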
2.1.5.3. Delete a Security Group Rule
1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, click Manage Rules for the security group.
3. Select the security group rule, and click Delete Rule.
4. Click Delete Rule again to confirm.
Note
You cannot undo the delete action.
2.1.5.4. Delete a Security Group
1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, select the group, and click Delete Security Groups.
3. Click Delete Security Groups again to confirm.
Note
You cannot undo the delete action.
2.2. MANAGE USERS
2.2.1. Create a User
1. As an admin user in the dashboard, select Identity > Users.
2. Click Create User.
3. Enter a user name, email, and preliminary password for the user.
4. Select a project from the Primary Project list.
5. Select a role for the user from the Role list (the default role is _member_).
6. Click Create User.
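The keystone client offers an equivalent to these steps. A sketch, assuming a sourced admin RC file; the user name, password, email, and project are placeholder values (--tenant sets the primary project):

```shell
# Create the user under a primary project, then grant the default role.
$ keystone user-create --name newUser --tenant demo --pass PASSWORD --email user@example.com
$ keystone user-role-add --user newUser --role _member_ --tenant demo
```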
2.2.2. Enable or Disable a User
You can disable or enable only one user at a time.
1. As an admin user in the dashboard, select Identity > Users.
2. In the Actions column, click the arrow, and select Enable User or Disable User. In the Enabled column, the value then updates to either True or False.
2.2.3. Delete a User
1. As an admin user in the dashboard, select Identity > Users.
2. Select the users to delete.
3. Click Delete Users.
4. Click Delete Users again to confirm.
Note
You cannot undo the delete action.
2.2.4. Manage Roles
2.2.4.1. View Roles
To list the available roles:
$ keystone role-list
+----------------------------------+---------------+
| id | name |
+----------------------------------+---------------+
| 71ccc37d41c8491c975ae72676db687f | Member |
| 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| 6ecf391421604da985db2f141e46a7c8 | admin |
+----------------------------------+---------------+
To get details for a specified role:
$ keystone role-get ROLE
Example 2.1.
$ keystone role-get admin
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 6ecf391421604da985db2f141e46a7c8 |
| name | admin |
+----------+----------------------------------+
2.2.4.2. Create and Assign a Role
Users can be members of multiple projects. To assign users to multiple projects, create a role and assign that role to a user-project pair.
Note
Either the name or ID can be used to specify users, roles, or projects.
1. Create the new-role role:
$ keystone role-create --name ROLE_NAME
Example 2.2.
$ keystone role-create --name new-role
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 61013e7aa4ba4e00a0a1ab4b14bc6b2a |
| name | new-role |
+----------+----------------------------------+
2. To assign a user to a project, you must assign the role to a user-project pair. To do this, you need the user, role, and project names or IDs.
a. List users:
$ keystone user-list
b. List roles:
$ keystone role-list
c. List projects:
$ keystone tenant-list
3. Assign a role to a user-project pair.
$ keystone user-role-add --user USER_NAME --role ROLE_NAME --tenant TENANT_NAME
Example 2.3.
In this example, you assign the new-role role to the demo-demo pair:
$ keystone user-role-add --user demo --role new-role --tenant demo
4. Verify the role assignment for the user demo:
$ keystone user-role-list --user USER_NAME --tenant TENANT_NAME
Example 2.4.
$ keystone user-role-list --user demo --tenant demo
2.2.4.3. Delete a Role
1. Remove a role from a user-project pair:
$ keystone user-role-remove --user USER_NAME --role ROLE --tenant TENANT_NAME
2. Verify the role removal:
$ keystone user-role-list --user USER_NAME --tenant TENANT_NAME
If the role was removed, the command output omits the removed role.
2.2.5. View Compute Quotas for a Project User
To list the currently set quota values for a project user (tenant user), run:
$ nova quota-show --user USER --tenant TENANT
Example 2.5.
$ nova quota-show --user demoUser --tenant demo
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 5 |
| fixed_ips | -1 |
| metadata_items | 128 |
| injected_files | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes | 255 |
| key_pairs | 100 |
| security_groups | 10 |
| security_group_rules | 20 |
| server_groups | 10 |
| server_group_members | 10 |
+-----------------------------+-------+
2.2.6. Update Compute Quotas for a Project User
Procedure 2.1. Update Compute Quotas for a User
To update a particular quota value, run:
$ nova quota-update --user USER --QUOTA_NAME QUOTA_VALUE TENANT
Example 2.6.
$ nova quota-update --user demoUser --floating-ips 10 demo
$ nova quota-show --user demoUser --tenant demo
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 10 |
| ... | |
+-----------------------------+-------+
Note
To view a list of options for the quota-update command, run:
$ nova help quota-update
2.2.7. Configure Role Access Control
A user can have different roles in different tenants. A user can also have multiple roles in the same tenant.
The /etc/[SERVICE_CODENAME]/policy.json file controls the tasks that users can perform for a given service. For example:
/etc/nova/policy.json specifies the access policy for the Compute service.
/etc/glance/policy.json specifies the access policy for the Image service.
/etc/keystone/policy.json specifies the access policy for the Identity service.
The default policy.json files for the Compute, Identity, and Image services recognize only the admin role; all operations that do not require the admin role are accessible by any user that has any role in a tenant.
For example, if you wish to restrict users from performing operations in the Compute service, you must create a role in the Identity service, give users that role, and then modify /etc/nova/policy.json so that the role is required for Compute operations.
Example 2.7.
The following line in /etc/nova/policy.json specifies that there are no restrictions on which users can create volumes; if the user has any role in a tenant, they can create volumes in that tenant.
"volume:create": [],
Example 2.8.
To restrict creation of volumes to users who have the compute-user role in a particular tenant, you would add "role:compute-user" to the Compute policy:
"volume:create": ["role:compute-user"],
Example 2.9.
To restrict all Compute service requests to require this role, values in the file might look like the following (not a complete example):
{"admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],
"compute:create": ["role:compute-user"],
"compute:create:attach_network": ["role:compute-user"],
"compute:create:attach_volume": ["role:compute-user"],
"compute:get_all": ["role:compute-user"],
"compute:unlock_override": ["rule:admin_api"],
"admin_api": [["role:admin"]],
"compute_extension:accounts": [["rule:admin_api"]],
"compute_extension:admin_actions": [["rule:admin_api"]],
"compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:lock": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:unlock": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
"compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
"compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
"compute_extension:admin_actions:migrate": [["rule:admin_api"]],
"compute_extension:aggregates": [["rule:admin_api"]],
"compute_extension:certificates": ["role:compute-user"],
"compute_extension:cloudpipe": [["rule:admin_api"]],
"compute_extension:console_output": ["role:compute-user"],
"compute_extension:consoles": ["role:compute-user"],
"compute_extension:createserverext": ["role:compute-user"],
"compute_extension:deferred_delete": ["role:compute-user"],
"compute_extension:disk_config": ["role:compute-user"],
"compute_extension:evacuate": [["rule:admin_api"]],
"compute_extension:extended_server_attributes": [["rule:admin_api"]],
...
CHAPTER 3. VIRTUAL MACHINE INSTANCES
The RHEL OpenStack Platform allows you to easily manage virtual machine instances in the cloud. OpenStack Compute is the central component that creates, schedules, and manages instances, and exposes this functionality to other OpenStack components.
Note
The term 'instance' is used by OpenStack to mean a virtual machine instance.
3.1. MANAGE INSTANCES
3.1.1. Create an Instance
Prerequisites: Ensure that a network, key pair, and a boot source are available:
1. In the dashboard, select Project.
2. Select Network > Networks, and ensure there is a private network to which you can attach the new instance (to create a network, see Section 5.1.1, “Add a Network”).
3. Select Compute > Access & Security > Key Pairs, and ensure there is a key pair (to create a key pair, see Section 3.2.1, “Manage Key Pairs”).
4. Ensure that you have either an image or a volume that can be used as a boot source:
   To view boot-source images, select the Images tab (to create an image, see Section 4.1.1, “Create an Image”).
   To view boot-source volumes, select the Volumes tab (to create a volume, see Section 4.2.1.1, “Create a Volume”).
Procedure 3.1. Create an Instance
1. In the dashboard, select Project > Compute > Instances.
2. Click Launch Instance.
3. Fill out the instance fields (those marked with '*' are required), and click Launch when finished.
Table: Launch Instance Fields (Tab / Field / Notes)
Details tab
  Availability Zone: Zones are logical groupings of cloud resources in which your instance can be placed. If you are unsure, use the default zone (for more information, see Section 3.4, “Manage Host Aggregates”).
  Instance Name: The name must be unique within the project.
  Flavor: The flavor determines what resources the instance is given (for example, memory). For default flavor allocations and information on creating new flavors, see Section 3.3, “Manage Flavors”.
  Instance Boot Source: Depending on the item selected, new fields are displayed allowing you to select the source:
    Image sources must be compatible with OpenStack (see Section 4.1, “Manage Images”).
    If a volume or volume source is selected, the source must be formatted using an image (see Section 4.2, “Manage Volumes”).
Access and Security tab
  Key Pair: The specified key pair is injected into the instance and is used to remotely access the instance using SSH (if neither direct login information nor a static key pair is provided). Usually one key pair per project is created.
  Security Groups: Security groups contain firewall rules which filter the type and direction of the instance's network traffic (for more information on configuring groups, see Section 2.1.5, “Manage Project Security”).
Networking tab
  Selected Networks: You must select at least one network. Instances are typically assigned to a private network, and then later given a floating IP address to enable external access.
Post-Creation tab
  Customization Script Source: You can provide either a set of commands or a script file, which will run after the instance is booted (for example, to set the instance hostname or a user password). If 'Direct Input' is selected, write your commands in the Script Data field; otherwise, specify your script file. Note: Any script that starts with '#cloud-config' is interpreted as using the cloud-config syntax (for information on the syntax, see http://cloudinit.readthedocs.org/en/latest/topics/examples.html).
Advanced Options tab
  Disk Partition: By default, the instance is built as a single partition and dynamically resized as needed. However, you can choose to manually configure the partitions yourself.
  Configuration Drive: If selected, OpenStack writes metadata to a read-only configuration drive that is attached to the instance when it boots (instead of to Compute's metadata service). After the instance has booted, you can mount this drive to view its contents (enables you to provide files to the instance).
3.1.2. Update an Instance (Actions menu)
You can update an instance by selecting Project > Compute > Instances, and selecting an action for that instance in the Actions column. Actions allow you to manipulate the instance in a number of ways:
Table: Instance Actions (Action / Description)
Create Snapshot: Snapshots preserve the disk state of a running instance. You can create a snapshot to migrate the instance, as well as to preserve backup copies.
Associate/Disassociate Floating IP: You must associate an instance with a floating IP (external) address before it can communicate with external networks, or be reached by external users. Because there are a limited number of external addresses in your external subnets, it is recommended that you disassociate any unused addresses.
Edit Instance: Update the instance's name and associated security groups.
Edit Security Groups: Add and remove security groups to or from this instance using the list of available security groups (for more information on configuring groups, see Section 2.1.5, “Manage Project Security”).
Console: View the instance's console in the browser (allows easy access to the instance).
View Log: View the most recent section of the instance's console log. Once opened, you can view the full log by clicking View Full Log.
Pause/Resume Instance: Immediately pause the instance (you are not asked for confirmation); the state of the instance is stored in memory (RAM).
Suspend/Resume Instance: Immediately suspend the instance (you are not asked for confirmation); like hibernation, the state of the instance is kept on disk.
Resize Instance: Bring up the Resize Instance window (see Section 3.1.3, “Resize an instance”).
Soft Reboot: Gracefully stop and restart the instance. A soft reboot attempts to gracefully shut down all processes before restarting the instance.
Hard Reboot: Stop and restart the instance. A hard reboot effectively just shuts down the instance's 'power' and then turns it back on.
Shut Off Instance: Gracefully stop the instance.
Rebuild Instance: Use new image and disk-partition options to rebuild the image (shut down, re-image, and reboot the instance). If encountering operating system issues, this option is easier to try than terminating the instance and starting over.
Terminate Instance: Permanently destroy the instance (you are asked for confirmation).
For example, you can create and allocate an external address by using the 'Associate Floating IP' action.
Procedure 3.2. Update Example - Assign a Floating IP
1. In the dashboard, select Project > Compute > Instances.
2. Select the Associate Floating IP action for the instance.
Note
A floating IP address can only be selected from an already created floating IP pool (see Section 5.2.1, “Create Floating IP Pools”).
3. Click '+' and select Allocate IP > Associate.
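The same assignment can be made from the command line with the nova client. A sketch, assuming a sourced RC file; the pool name, instance name, and address are example values:

```shell
# Allocate an address from a floating IP pool, then attach it to an instance.
$ nova floating-ip-create public
$ nova floating-ip-list
$ nova add-floating-ip myInstance 203.0.113.17
```

Use nova remove-floating-ip to disassociate the address when it is no longer needed.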
Note
If you do not know the name of the instance, just its IP address (and you do not want to flip through the details of all your instances), you can run the following on the command line:
$ nova list --ip IPAddress
Where IPAddress is the IP address you are looking up. For example:
$ nova list --ip 192.0.2.0
3.1.3. Resize an instance

To resize an instance (memory or CPU count), you must select a new flavor for the instance that has the right capacity. If you are increasing the size, remember to first ensure that the host has enough space.
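The capacity check described above can be sketched in Python. This is a minimal, illustrative helper (the flavor and host numbers are made up for the example); it is not part of OpenStack:

```python
# Sketch: check whether a host's free resources cover a new flavor before a
# resize. Flavor and host values below are illustrative, not from the guide.

def host_can_fit(flavor, host_free):
    """Return True if the host's free resources cover the new flavor."""
    return (flavor["vcpus"] <= host_free["vcpus"]
            and flavor["ram_mb"] <= host_free["ram_mb"]
            and flavor["disk_gb"] <= host_free["disk_gb"])

new_flavor = {"vcpus": 4, "ram_mb": 8192, "disk_gb": 80}   # e.g. m1.large
host_free = {"vcpus": 8, "ram_mb": 16384, "disk_gb": 100}

print(host_can_fit(new_flavor, host_free))  # True
```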
1. If you are resizing an instance in a distributed deployment, you must ensure communication between hosts. Set up each host with SSH key authentication so that Compute can use SSH to move disks to other hosts (for example, compute nodes can share the same SSH key). For more information about setting up SSH key authentication, see Section 3.1.4, "Configure SSH Tunneling between Nodes".

2. Enable resizing on the original host by setting the following parameter in the /etc/nova/nova.conf file:

[DEFAULT]
allow_resize_to_same_host = True

3. In the dashboard, select Project > Compute > Instances.

4. Click the instance's Actions arrow, and select Resize Instance.

5. Select a new flavor in the New Flavor field.

6. If you want to manually partition the instance when it launches (results in a faster build time):

a. Select Advanced Options.

b. In the Disk Partition field, select 'Manual'.

7. Click Resize.

3.1.4. Configure SSH Tunneling between Nodes
To migrate instances between nodes using SSH tunneling or to resize instances in a distributed environment, each node must be set up with SSH key authentication so that the Compute service can use SSH to move disks to other nodes. For example, compute nodes could use the same SSH key to ensure communication.

Note
If the Compute service cannot migrate the instance to a different node, it will attempt to migrate the instance back to its original host. To avoid migration failure in this case, ensure that 'allow_migrate_to_same_host=True' is set in the /etc/nova/nova.conf file.
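Since nova.conf uses INI syntax, a check for these fallback options can be sketched with the standard-library configparser. The helper and the trimmed file excerpt are illustrative; only the file path and option names come from the guide:

```python
# Sketch: verify that the resize/migrate fallback options are set in nova.conf.
# In practice you would read /etc/nova/nova.conf; a trimmed excerpt is inlined
# here for illustration.
import configparser

NOVA_CONF = """
[DEFAULT]
allow_resize_to_same_host = True
allow_migrate_to_same_host = True
"""

def option_enabled(conf_text, option):
    """Return True if the [DEFAULT] option is present and truthy."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    return cfg.getboolean("DEFAULT", option, fallback=False)

print(option_enabled(NOVA_CONF, "allow_migrate_to_same_host"))  # True
```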
To share a key pair between compute nodes:

1. As root on both nodes, make nova a login user:

# usermod -s /bin/bash nova

2. On the first compute node, generate a key pair for the nova user:

# su nova
# ssh-keygen
# echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
# cat /var/lib/nova/.ssh/id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys

The key pair, id_rsa and id_rsa.pub, is generated in /var/lib/nova/.ssh.

3. As root, copy the created key pair to the second compute node:

# scp /var/lib/nova/.ssh/id_rsa root@computeNodeAddress:~/
# scp /var/lib/nova/.ssh/id_rsa.pub root@computeNodeAddress:~/
4. As root on the second compute node, change the copied key pair's ownership back to 'nova', and then add the key pair into SSH:

# chown nova:nova id_rsa
# chown nova:nova id_rsa.pub
# su nova
# mkdir -p /var/lib/nova/.ssh
# cp id_rsa /var/lib/nova/.ssh/
# cat id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys
# echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config

Warning
Red Hat does not recommend any particular libvirt security strategy; SSH-tunneling steps are provided for user reference only. Only users with root access can set up SSH tunneling.
5. Ensure that the nova user can now log into each node without using a password:

# su nova
# ssh nova@computeNodeAddress

6. As root on both nodes, restart both libvirt and the Compute services:

# systemctl restart libvirtd.service
# systemctl restart openstack-nova-compute.service
3.1.5. Connect to an Instance

3.1.5.1. Access using the Dashboard Console

The console allows you to directly access your instance within the dashboard.

1. In the dashboard, select Compute > Instances.

2. Click the instance's More button and select Console.

Figure 3.1. Console Access
3. Log in using the image's user name and password (for example, a CirrOS image uses 'cirros'/'cubswin:)').

Note
Red Hat Enterprise Linux guest images typically do not allow direct console access; you must SSH into the instance (see Section 3.1.5.4, "SSH into an Instance").
3.1.5.2. Directly Connect to a VNC Console

You can directly access an instance's VNC console using a URL returned by the nova get-vnc-console command.

Browser
To obtain a browser URL, use:

$ nova get-vnc-console INSTANCE_ID novnc

Java Client
To obtain a Java-client URL, use:

$ nova get-vnc-console INSTANCE_ID xvpvnc
Note
nova-xvpvncviewer provides a simple example of a Java client. To download the client, use:

# git clone https://github.com/cloudbuilders/nova-xvpvncviewer
# cd nova-xvpvncviewer/viewer
# make

Run the viewer with the instance's Java-client URL:

# java -jar VncViewer.jar URL

This tool is provided only for customer convenience, and is not officially supported by Red Hat.
3.1.5.3. Directly Connect to a Serial Console

You can directly access an instance's serial port using a websocket client. Serial connections are typically used as a debugging tool (for example, instances can be accessed even if the network configuration fails). To obtain a serial URL for a running instance, use:

$ nova get-serial-console INSTANCE_ID

Note
novaconsole provides a simple example of a websocket client. To download the client, use:

# git clone https://github.com/larsks/novaconsole/
# cd novaconsole

Run the client with the instance's serial URL:

# python console-client-poll.py URL

This tool is provided only for customer convenience, and is not officially supported by Red Hat.

However, depending on your installation, the administrator may need to first set up the nova-serialproxy service. The proxy service is a websocket proxy that allows connections to OpenStack Compute serial ports.
Procedure 3.3. Install and Configure nova-serialproxy

1. Install the nova-serialproxy service:

# yum install openstack-nova-serialproxy

2. Update the serial_console section in /etc/nova/nova.conf:

a. Enable the nova-serialproxy service:

$ openstack-config --set /etc/nova/nova.conf serial_console enabled true

b. Specify the string used to generate URLs provided by the nova get-serial-console command:

$ openstack-config --set /etc/nova/nova.conf serial_console base_url ws://PUBLIC_IP:6083/

Where PUBLIC_IP is the public IP address of the host running the nova-serialproxy service.

c. Specify the IP address on which the instance serial console should listen (string):

$ openstack-config --set /etc/nova/nova.conf serial_console listen 0.0.0.0

d. Specify the address to which proxy clients should connect (string):

$ openstack-config --set /etc/nova/nova.conf serial_console proxyclient_address HOST_IP

Where HOST_IP is the IP address of your Compute host.
Example 3.1. Enabled nova-serialproxy

[serial_console]
enabled=true
base_url=ws://192.0.2.0:6083/
listen=0.0.0.0
proxyclient_address=192.0.2.3

3. Restart Compute services:

# openstack-service restart nova
4. Start the nova-serialproxy service:

# systemctl enable openstack-nova-serialproxy
# systemctl start openstack-nova-serialproxy

5. Restart any running instances, to ensure that they are now listening on the right sockets.

6. Open the firewall for serial-console port connections. Serial ports are set using [serial_console] port_range in /etc/nova/nova.conf; by default, the range is 10000:20000. Update iptables with:

# iptables -I INPUT 1 -p tcp --dport 10000:20000 -j ACCEPT
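Before opening the firewall, the serial-console settings can be sanity-checked. A minimal sketch, using the values from Example 3.1 plus the default port_range; the helper itself is illustrative, not part of OpenStack:

```python
# Sketch: validate [serial_console] settings before updating iptables.
# Values mirror Example 3.1 and the default port_range described above.
from urllib.parse import urlparse

settings = {
    "enabled": "true",
    "base_url": "ws://192.0.2.0:6083/",
    "listen": "0.0.0.0",
    "port_range": "10000:20000",
}

def parse_port_range(value):
    """Split 'low:high' into an inclusive integer range (the --dport format)."""
    low, high = (int(p) for p in value.split(":"))
    if low > high:
        raise ValueError("port range is inverted")
    return low, high

low, high = parse_port_range(settings["port_range"])
scheme = urlparse(settings["base_url"]).scheme  # base_url must be a ws:// URL

print(scheme, low, high)  # ws 10000 20000
```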
3.1.5.4. SSH into an Instance

1. Ensure that the instance's security group has an SSH rule (see Section 2.1.5, "Manage Project Security").

2. Ensure the instance has a floating IP address (external address) assigned to it (see Section 3.2.2, "Create, Assign, and Release Floating IP Addresses").

3. Obtain the instance's key-pair certificate. The certificate is downloaded when the key pair is created; if you did not create the key pair yourself, ask your administrator (see Section 3.2.1, "Manage Key Pairs").

4. On your local machine, load the key-pair certificate into SSH. For example:

$ ssh-add ~/.ssh/os-key.pem

5. You can now SSH into the instance with the user supplied by the image.

The following example command shows how to SSH into the Red Hat Enterprise Linux guest image with the user 'cloud-user':

$ ssh [email protected]

Note
You can also use the certificate directly. For example:

$ ssh -i /myDir/os-key.pem [email protected]
3.1.6. View Instance Usage

The following usage statistics are available:

Per Project
To view instance usage per project, select Project > Compute > Overview. A usage summary is immediately displayed for all project instances.

You can also view statistics for a specific period of time by specifying the date range and clicking Submit.

Per Hypervisor
If logged in as an administrator, you can also view information for all projects. Click Admin > System and select one of the tabs. For example, the Resource Usage tab offers a way to view reports for a distinct time period. You might also click Hypervisors to view your current vCPU, memory, or disk statistics.

Note
The 'vCPU Usage' value ('x of y') reflects the number of total vCPUs of all virtual machines (x) and the total number of hypervisor cores (y).
3.1.7. Delete an Instance

1. In the dashboard, select Project > Compute > Instances, and select your instance.

2. Click Terminate Instance.

Note
Deleting an instance does not delete its attached volumes; you must do this separately (see Section 4.2.1.4, "Delete a Volume").
3.2. MANAGE INSTANCE SECURITY

You can manage access to an instance by assigning it the correct security group (set of firewall rules) and key pair (enables SSH user access). Further, you can assign a floating IP address to an instance to enable external network access. The sections below outline how to create and manage key pairs and floating IP addresses. For information on managing security groups, see Section 2.1.5, "Manage Project Security".

3.2.1. Manage Key Pairs
Key pairs provide SSH access to the instances. Each time a key pair is generated, its certificate is downloaded to the local machine and can be distributed to users. Typically, one key pair is created for each project (and used for multiple instances).

You can also import an existing key pair into OpenStack.

3.2.1.1. Create a Key Pair

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Key Pairs tab, click Create Key Pair.

3. Specify a name in the Key Pair Name field, and click Create Key Pair.

When the key pair is created, a key pair file is automatically downloaded through the browser. Save this file for later connections from external machines. For command-line SSH connections, you can load this file into SSH by executing:

# ssh-add ~/.ssh/OS-Key.pem
3.2.1.2. Import a Key Pair

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Key Pairs tab, click Import Key Pair.

3. Specify a name in the Key Pair Name field, and copy and paste the contents of your public key into the Public Key field.

4. Click Import Key Pair.

3.2.1.3. Delete a Key Pair

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Key Pairs tab, click the key's Delete Key Pair button.
3.2.2. Create, Assign, and Release Floating IP Addresses

By default, an instance is given an internal IP address when it is first created. However, you can enable access through the public network by creating and assigning a floating IP address (external address). You can change an instance's associated IP address regardless of the instance's state.
Projects have a limited range of floating IP addresses that can be used (by default, the limit is 50), so you should release these addresses for reuse when they are no longer needed. Floating IP addresses can only be allocated from an existing floating IP pool (see Section 5.2.1, "Create Floating IP Pools").

Procedure 3.4. Allocate a Floating IP to the Project

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Floating IPs tab, click Allocate IP to Project.

3. Select a network from which to allocate the IP address in the Pool field.

4. Click Allocate IP.

Procedure 3.5. Assign a Floating IP

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Floating IPs tab, click the address' Associate button.

3. Select the address to be assigned in the IP address field.

Note
If no addresses are available, you can click the + button to create a new address.

4. Select the instance to be associated in the Port to be Associated field. An instance can only be associated with one floating IP address.

5. Click Associate.

Procedure 3.6. Release a Floating IP

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Floating IPs tab, click the address' menu arrow (next to the Associate/Disassociate button).

3. Select Release Floating IP.
3.3. MANAGE FLAVORS
Each created instance is given a flavor (resource template), which determines the instance's size and capacity. Flavors can also specify secondary ephemeral storage, swap disk, metadata to restrict usage, or special project access (none of the default flavors have these additional attributes defined).

Table 3.1. Default Flavors

Name        vCPUs   RAM        Root Disk Size
m1.tiny     1       512 MB     1 GB
m1.small    1       2048 MB    20 GB
m1.medium   2       4096 MB    40 GB
m1.large    4       8192 MB    80 GB
m1.xlarge   8       16384 MB   160 GB
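Choosing among the defaults in Table 3.1 can be sketched as a smallest-fit lookup. The helper below is illustrative (it is not an OpenStack API); only the flavor names and sizes come from the table:

```python
# Sketch: pick the smallest default flavor (Table 3.1) that satisfies a
# requested capacity. The selection helper is illustrative.

DEFAULT_FLAVORS = [
    # (name, vcpus, ram_mb, root_disk_gb) - ordered smallest to largest
    ("m1.tiny",   1,   512,   1),
    ("m1.small",  1,  2048,  20),
    ("m1.medium", 2,  4096,  40),
    ("m1.large",  4,  8192,  80),
    ("m1.xlarge", 8, 16384, 160),
]

def smallest_fit(vcpus, ram_mb, disk_gb):
    for name, f_vcpus, f_ram, f_disk in DEFAULT_FLAVORS:
        if f_vcpus >= vcpus and f_ram >= ram_mb and f_disk >= disk_gb:
            return name
    return None  # no default flavor is large enough; create a custom one

print(smallest_fit(2, 3000, 10))  # m1.medium
```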
The majority of end users will be able to use the default flavors. However, you might need to create and manage specialized flavors. For example, you might:

Change default memory and capacity to suit the underlying hardware needs.

Add metadata to force a specific I/O rate for the instance or to match a host aggregate.

Note
Behavior set using image properties overrides behavior set using flavors (for more information, see Section 4.1, "Manage Images").
3.3.1. Update Configuration Permissions

By default, only administrators can create flavors or view the complete flavor list (select Admin > System > Flavors). To allow all users to configure flavors, specify the following in the /etc/nova/policy.json file (nova-api server):

"compute_extension:flavormanage": "",
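Since policy.json is a plain JSON map of rule names to rule strings, the change above can be applied programmatically with the stdlib json module. A minimal sketch; the inlined file content is a trimmed, illustrative excerpt of a real policy file:

```python
# Sketch: set the flavormanage rule to the empty string (allow all users).
# In practice you would read and write /etc/nova/policy.json; a trimmed
# excerpt is inlined here for illustration.
import json

policy_text = '{"compute_extension:flavormanage": "rule:admin_api"}'

policy = json.loads(policy_text)
policy["compute_extension:flavormanage"] = ""  # empty rule: any user may manage flavors

print(json.dumps(policy))  # {"compute_extension:flavormanage": ""}
```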
3.3.2. Create a Flavor

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click Create Flavor, and specify the following fields:
Flavor Information tab:

Name: Unique name.
ID: Unique ID. The default value, 'auto', generates a UUID4 value, but you can also manually specify an integer or UUID4 value.
VCPUs: Number of virtual CPUs.
RAM (MB): Memory (in megabytes).
Root Disk (GB): Ephemeral disk size (in gigabytes); to use the native image size, specify '0'. This disk is not used if 'Instance Boot Source=Boot from Volume'.
Ephemeral Disk (GB): Secondary ephemeral disk size (in gigabytes).
Swap Disk (MB): Swap disk size (in megabytes).

Flavor Access tab:

Selected Projects: Projects which can use the flavor. If no projects are selected, all projects have access ('Public=Yes').
3. Click Create Flavor.

3.3.3. Update General Attributes

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click the flavor's Edit Flavor button.

3. Update the values, and click Save.
3.3.4. Update Flavor Metadata

In addition to editing general attributes, you can add metadata to a flavor ('extra_specs'), which can help fine-tune instance usage. For example, you might want to set the maximum-allowed bandwidth or disk writes.

Pre-defined keys determine hardware support or quotas. Pre-defined keys are limited by the hypervisor you are using (for libvirt, see Table 3.2, "Libvirt Metadata").
Both pre-defined and user-defined keys can determine instance scheduling. For example, you might specify 'SpecialComp=True'; any instance with this flavor can then only run in a host aggregate with the same key-value combination in its metadata (see Section 3.4, "Manage Host Aggregates").

3.3.4.1. View Metadata

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click the flavor's Metadata link ('Yes' or 'No'). All current values are listed on the right-hand side under Existing Metadata.

3.3.4.2. Add Metadata

You specify a flavor's metadata using a key/value pair.

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click the flavor's Metadata link ('Yes' or 'No'). All current values are listed on the right-hand side under Existing Metadata.

3. Under Available Metadata, click on the Other field, and specify the key you want to add (see Table 3.2, "Libvirt Metadata").

4. Click the + button; you can now view the new key under Existing Metadata.

5. Fill in the key's value in its right-hand field.

Figure 3.2. Flavor Metadata

6. When finished with adding key-value pairs, click Save.
Table 3.2. Libvirt Metadata
Key Description

hw:action
Action that configures support limits per instance. Valid actions are:

cpu_max_sockets - Maximum supported CPU sockets.
cpu_max_cores - Maximum supported CPU cores.
cpu_max_threads - Maximum supported CPU threads.
cpu_sockets - Preferred number of CPU sockets.
cpu_cores - Preferred number of CPU cores.
cpu_threads - Preferred number of CPU threads.
serial_port_count - Maximum serial ports per instance.

Example: 'hw:cpu_max_sockets=2'
hw:NUMA_def
Definition of NUMA topology for the instance. For flavors whose RAM and vCPU allocations are larger than the size of NUMA nodes in the compute hosts, defining NUMA topology enables hosts to better utilize NUMA and improve performance of the guest OS.

NUMA definitions defined through the flavor override image definitions. Valid definitions are:

numa_nodes - Number of NUMA nodes to expose to the instance. Specify '1' to ensure image NUMA settings are overridden.
numa_mempolicy - Memory allocation policy. Valid policies are:
  strict - Mandatory for the instance's RAM allocations to come from the NUMA nodes to which it is bound (default if numa_nodes is specified).
  preferred - The kernel can fall back to using an alternative node. Useful when numa_nodes is set to '1'.
numa_cpus.0 - Mapping of vCPUs N-M to NUMA node 0 (comma-separated list).
numa_cpus.1 - Mapping of vCPUs N-M to NUMA node 1 (comma-separated list).
numa_mem.0 - Mapping N GB of RAM to NUMA node 0.
numa_mem.1 - Mapping N GB of RAM to NUMA node 1.

numa_cpu.N and numa_mem.N are only valid if numa_nodes is set. Additionally, they are only required if the instance's NUMA nodes have an asymmetrical allocation of CPUs and RAM (important for some NFV workloads). Note: If the values of numa_cpu.N or numa_mem.N specify more than that available, an exception is raised.

Example when the instance has 8 vCPUs and 4 GB RAM:

hw:numa_nodes=2
hw:numa_cpus.0=0,1,2,3,4,5
hw:numa_cpus.1=6,7
hw:numa_mem.0=3
hw:numa_mem.1=1

The scheduler looks for a host with 2 NUMA nodes with the ability to run 6 CPUs + 3 GB of RAM on one node, and 2 CPUs + 1 GB of RAM on another node. If a host has a single NUMA node with capability to run 8 CPUs and 4 GB of RAM, it will not be considered a valid match. The same logic is applied in the scheduler regardless of the numa_mempolicy setting.
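The consistency requirement in this example (every vCPU mapped exactly once, per-node RAM summing to the flavor RAM) can be sketched as a small validation helper. This is illustrative logic, not the scheduler's actual code:

```python
# Sketch: sanity-check the hw:numa_* example above (8 vCPUs, 4 GB RAM split
# across 2 NUMA nodes). Illustrative helper, not part of OpenStack.

extra_specs = {
    "hw:numa_nodes": "2",
    "hw:numa_cpus.0": "0,1,2,3,4,5",
    "hw:numa_cpus.1": "6,7",
    "hw:numa_mem.0": "3",
    "hw:numa_mem.1": "1",
}

def check_numa(specs, flavor_vcpus, flavor_ram_gb):
    nodes = int(specs["hw:numa_nodes"])
    cpus, mem_gb = set(), 0
    for n in range(nodes):
        cpus.update(int(c) for c in specs[f"hw:numa_cpus.{n}"].split(","))
        mem_gb += int(specs[f"hw:numa_mem.{n}"])
    # every vCPU mapped exactly once, and per-node RAM sums to the flavor RAM
    return cpus == set(range(flavor_vcpus)) and mem_gb == flavor_ram_gb

print(check_numa(extra_specs, flavor_vcpus=8, flavor_ram_gb=4))  # True
```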
hw:watchdog_action
An instance watchdog device can be used to trigger an action if the instance somehow fails (or hangs). Valid actions are:

disabled - The device is not attached (default value).
pause - Pause the instance.
poweroff - Forcefully shut down the instance.
reset - Forcefully reset the instance.
none - Enable the watchdog, but do nothing if the instance fails.

Example: 'hw:watchdog_action=poweroff'
hw_rng:action
A random-number generator device can be added to an instance using its image properties (see hw_rng_model in the "Command-Line Interface Reference" in RHEL OpenStack Platform documentation).

If the device has been added, valid actions are:

allowed - If 'True', the device is enabled; if 'False', disabled. By default, the device is disabled.
rate_bytes - Maximum number of bytes the instance's kernel can read from the host to fill its entropy pool every rate_period (integer).
rate_period - Duration of the read period in seconds (integer).

Example: 'hw_rng:allowed=True'.
hw_video:ram_max_mb
Maximum permitted RAM to be allowed for video devices (in MB).

Example: 'hw_video:ram_max_mb=64'
quota:option
Enforcing limit for the instance. Valid options are:

cpu_period - Time period for enforcing cpu_quota (in microseconds). Within the specified cpu_period, each vCPU cannot consume more than cpu_quota of runtime. The value must be in range [1000, 1000000]; '0' means 'no value'.
cpu_quota - Maximum allowed bandwidth (in microseconds) for the vCPU in each cpu_period. The value must be in range [1000, 18446744073709551]. '0' means 'no value'; a negative value means that the vCPU is not controlled. cpu_quota and cpu_period can be used to ensure that all vCPUs run at the same speed.
cpu_shares - Share of CPU time for the domain. The value only has meaning when weighted against other machine values in the same domain. That is, an instance with a flavor with '200' will get twice as much machine time as an instance with '100'.
disk_read_bytes_sec - Maximum disk reads in bytes per second.
disk_read_iops_sec - Maximum read I/O operations per second.
disk_write_bytes_sec - Maximum disk writes in bytes per second.
disk_write_iops_sec - Maximum write I/O operations per second.
disk_total_bytes_sec - Maximum total throughput limit in bytes per second.
disk_total_iops_sec - Maximum total I/O operations per second.
vif_inbound_average - Desired average of incoming traffic.
vif_inbound_burst - Maximum amount of traffic that can be received at vif_inbound_peak speed.
vif_inbound_peak - Maximum rate at which incoming traffic can be received.
vif_outbound_average - Desired average of outgoing traffic.
vif_outbound_burst - Maximum amount of traffic that can be sent at vif_outbound_peak speed.
vif_outbound_peak - Maximum rate at which outgoing traffic can be sent.

Example: 'quota:vif_inbound_average=10240'
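The cpu_period/cpu_quota pairing described in Table 3.2 amounts to a per-vCPU cap: within each period, a vCPU may run for at most cpu_quota microseconds, so the cap as a fraction of one physical CPU is cpu_quota / cpu_period. A minimal illustrative sketch:

```python
# Sketch: how quota:cpu_period and quota:cpu_quota cap a vCPU.
# Illustrative helper; '0' (or a negative quota) means the vCPU is not capped.

def vcpu_cap(cpu_quota_us, cpu_period_us):
    """Fraction of one physical CPU each vCPU may consume, or None if uncapped."""
    if cpu_quota_us <= 0 or cpu_period_us <= 0:
        return None  # unset or uncontrolled
    return cpu_quota_us / cpu_period_us

# e.g. quota:cpu_period=100000, quota:cpu_quota=50000 -> half a CPU per vCPU
print(vcpu_cap(50000, 100000))  # 0.5
```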
3.4. MANAGE HOST AGGREGATES

A single Compute deployment can be partitioned into logical groups for performance or administrative purposes. OpenStack uses the following terms:

Host aggregates - A host aggregate creates logical units in an OpenStack deployment by grouping together hosts. Aggregates are assigned Compute hosts and associated metadata; a host can be in more than one host aggregate. Only administrators can see or create host aggregates.

An aggregate's metadata is commonly used to provide information for use with the Compute scheduler (for example, limiting specific flavors or images to a subset of hosts). Metadata specified in a host aggregate will limit the use of that host to any instance that has the same metadata specified in its flavor.
Administrators can use host aggregates to handle load balancing, enforce physical isolation (or redundancy), group servers with common attributes, or separate out classes of hardware. When you create an aggregate, a zone name must be specified, and it is this name which is presented to the end user.

Availability zones - An availability zone is the end-user view of a host aggregate. An end user cannot view which hosts make up the zone, nor see the zone's metadata; the user can only see the zone's name.

End users can be directed to use specific zones which have been configured with certain capabilities or within certain areas.
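The matching idea behind aggregate metadata can be sketched as follows: a host whose aggregate sets a key is only used by instances whose flavor sets the same key/value. This is illustrative logic, not the scheduler's actual implementation; the 'SpecialComp' key mirrors the example in Section 3.3.4:

```python
# Sketch: host-aggregate metadata vs. flavor extra_specs matching.
# Illustrative stand-in for AggregateInstanceExtraSpecsFilter-style behavior.

aggregate_metadata = {"SpecialComp": "True"}

def flavor_matches_aggregate(flavor_extra_specs, agg_metadata):
    """True only if the flavor sets every key/value the aggregate requires."""
    return all(flavor_extra_specs.get(k) == v for k, v in agg_metadata.items())

print(flavor_matches_aggregate({"SpecialComp": "True"}, aggregate_metadata))  # True
print(flavor_matches_aggregate({}, aggregate_metadata))                       # False
```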
3.4.1. Enable Host Aggregate Scheduling

By default, host-aggregate metadata is not used to filter instance usage; you must update the Compute scheduler's configuration to enable metadata usage:

1. Edit the /etc/nova/nova.conf file (you must have either root or nova user permissions).
2. Ensure that the scheduler_default_filters parameter contains:

'AggregateInstanceExtraSpecsFilter' for host aggregate metadata. For example:

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

'AvailabilityZoneFilter' for availability host specification when launching an instance. For example:

scheduler_default_filters=AvailabilityZoneFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

3. Save the configuration file.

3.4.2. View Availability Zones or Host Aggregates

As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section; all zones are in the Availability Zones section.

3.4.3. Add a Host Aggregate
1. As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.

2. Click Create Host Aggregate.

3. Add a name for the aggregate in the Name field, and a name by which the end user should see it in the Availability Zone field.

4. Click Manage Hosts within Aggregate.

5. Select a host for use by clicking its + icon.

6. Click Create Host Aggregate.
3.4.4. Update a Host Aggregate

1. As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.

2. To update the aggregate's:

Name or availability zone:
Click the aggregate's Edit Host Aggregate button.
Update the Name or Availability Zone field, and click Save.

Assigned hosts:
Click the aggregate's arrow icon under Actions.
Click Manage Hosts.
Change a host's assignment by clicking its + or - icon.
When finished, click Save.

Metadata:
Click the aggregate's arrow icon under Actions.
Click the Update Metadata button. All current values are listed on the right-hand side under Existing Metadata.
Under Available Metadata, click on the Other field, and specify the key you want to add. Use predefined keys (see Table 3.3, "Host Aggregate Metadata") or add your own (which will only be valid if exactly the same key is set in an instance's flavor).
Click the + button; you can now view the new key under Existing Metadata.
Note: Remove a key by clicking its - icon.
Click Save.
Table 3.3. Host Aggregate Metadata

Key Description

cpu_allocation_ratio
Sets allocation ratio of virtual CPU to physical CPU. Depends on the AggregateCoreFilter filter being set for the Compute scheduler.

disk_allocation_ratio
Sets allocation ratio of virtual disk to physical disk. Depends on the AggregateDiskFilter filter being set for the Compute scheduler.

filter_tenant_id
If specified, the aggregate only hosts this tenant (project). Depends on the AggregateMultiTenancyIsolation filter being set for the Compute scheduler.

ram_allocation_ratio
Sets allocation ratio of virtual RAM to physical RAM. Depends on the AggregateRamFilter filter being set for the Compute scheduler.
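The effect of these allocation-ratio keys can be sketched as a capacity calculation: a ratio multiplies physical capacity to give the virtual capacity the scheduler may hand out. The host numbers below are illustrative; the default ratios are the ones quoted for the filters in Table 3.4:

```python
# Sketch: over-commit capacity implied by an allocation ratio.
# Illustrative helper; host core count is made up for the example.

def virtual_capacity(physical, ratio):
    """Schedulable virtual units given physical units and an over-commit ratio."""
    return int(physical * ratio)

physical_cores = 16
print(virtual_capacity(physical_cores, 16.0))  # 256 vCPUs with the default cpu ratio
print(virtual_capacity(physical_cores, 1.0))   # 16 vCPUs with no over-commit
```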
3.4.5. Delete a Host Aggregate

1. As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.

2. Remove all assigned hosts from the aggregate:

a. Click the aggregate's arrow icon under Actions.

b. Click Manage Hosts.

c. Remove all hosts by clicking their - icon.
You define which filters you would like the scheduler to use in the
scheduler_default_filters option (/etc/nova/nova.conf file; you must have
either root or nova user permission s). Filters can be add ed or removed.
By defau lt, the follo wing filters are configu red to run in the scheduler:
Some filters use information in p arameters passed to the instance in:
The nova boot command, see the "Command-Line Interface Reference" in
https://access.redhat.com/documentation /en-
US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
The instance's flavor (see Section 3.3.4, “Update Flavo r Metada ta” )
The instance's image (see Appendix A, Image Configuration Parameters).
All availab le filters are listed in the follo wing tab le.
Table 3.4 . Scheduling Filters
Filt er Descript ion
AggregateCoreFilter
Uses the host-agg regate metadata keycpu_allocation_ratio to filter out hosts exceeding theover-commit ratio (virtual CPU to physical CPU allocationratio ); only valid if a host agg regate is sp ecified for theinstance.
If this ratio is not set, the filter uses thecpu_allocation_ratio value in the
/etc/nova/nova.conf file. The d efault value is '16.0' (16virtual CPU can b e alloc ated p er physical CPU).
AggregateDiskFilter
Uses the host-aggregate metadata key disk_allocation_ratio to filter out hosts exceeding the over-commit ratio (virtual disk to physical disk allocation ratio); only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the disk_allocation_ratio value in the /etc/nova/nova.conf file. The default value is '1.0' (one virtual disk can be allocated for each physical disk).
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
AggregateImagePropertiesIsolation
Only passes hosts in host aggregates whose metadata matches the instance's image metadata; only valid if a host aggregate is specified for the instance. For more information, see Section 4.1.1, "Create an Image".
AggregateInstanceExtraSpecsFilter
Metadata in the host aggregate must match the instance's flavor metadata. For more information, see Section 3.3.4, "Update Flavor Metadata".
AggregateMultiTenancyIsolation
A host with the specified filter_tenant_id can only contain instances from that tenant (project). Note: The tenant can still place instances on other hosts.
AggregateRamFilter
Uses the host-aggregate metadata key ram_allocation_ratio to filter out hosts exceeding the over-commit ratio (virtual RAM to physical RAM allocation ratio); only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the ram_allocation_ratio value in the /etc/nova/nova.conf file. The default value is '1.5' (1.5 virtual RAM can be allocated for each physical RAM).
AllHostsFilter
Passes all available hosts (however, does not disable other filters).

AvailabilityZoneFilter
Filters using the instance's specified availability zone.
ComputeCapabilitiesFilter
Ensures Compute metadata is read correctly. Anything before the ':' is read as a namespace. For example, 'quota:cpu_period' uses 'quota' as the namespace and 'cpu_period' as the key.

ComputeFilter
Passes only hosts that are operational and enabled.
CoreFilter
Uses the cpu_allocation_ratio in the /etc/nova/nova.conf file to filter out hosts exceeding the over-commit ratio (virtual CPU to physical CPU allocation ratio). The default value is '16.0' (16 virtual CPUs can be allocated per physical CPU).

DifferentHostFilter
Enables an instance to build on a host that is different from one or more specified hosts. Specify 'different' hosts using the nova boot --hint different_host option.
CHAPTER 3. VIRTUAL MACHINE INSTANCES
DiskFilter
Uses disk_allocation_ratio in the /etc/nova/nova.conf file to filter out hosts exceeding the over-commit ratio (virtual disk to physical disk allocation ratio). The default value is '1.0' (one virtual disk can be allocated for each physical disk).

ImagePropertiesFilter
Only passes hosts that match the instance's image properties. For more information, see Section 4.1.1, "Create an Image".
IsolatedHostsFilter
Passes only isolated hosts running isolated images that are specified in the /etc/nova/nova.conf file using isolated_hosts and isolated_images (comma-separated values).

JsonFilter
Recognises and uses an instance's custom JSON filters:

Valid operators are: =, <, >, in, <=, >=, not, or, and

Recognised variables are: $free_ram_mb, $free_disk_mb, $total_usable_ram_mb, $vcpus_total, $vcpus_used

The filter is specified as a query hint in the nova boot command. For example:

--hint query='[">=", "$free_disk_mb", 200 * 1024]'
MetricFilter
Filters out hosts with unavailable metrics.

NUMATopologyFilter
Filters out hosts based on the instance's NUMA topology; if the instance has no topology defined, any host can be used. The filter tries to match the exact NUMA topology of the instance to that of the host (it does not attempt to pack the instance onto the host). The filter also looks at the standard over-subscription limits for each NUMA node, and provides limits to the compute host accordingly.
RamFilter
Uses ram_allocation_ratio in the /etc/nova/nova.conf file to filter out hosts exceeding the over-commit ratio (virtual RAM to physical RAM allocation ratio). The default value is '1.5' (1.5 virtual RAM can be allocated for each physical RAM).

RetryFilter
Filters out hosts that have failed a scheduling attempt; valid if scheduler_max_attempts is greater than zero (by default, scheduler_max_attempts=3).
SameHostFilter
Passes one or more specified hosts; specify hosts for the instance using the --hint same_host option for nova boot.

ServerGroupAffinityFilter
Only passes hosts for a specific server group:

Give the server group the affinity policy (nova server-group-create --policy affinity groupName).

Build the instance with that group (nova boot option --hint group=UUID).
ServerGroupAntiAffinityFilter
Only passes hosts in a server group that do not already host an instance:

Give the server group the anti-affinity policy (nova server-group-create --policy anti-affinity groupName).

Build the instance with that group (nova boot option --hint group=UUID).

SimpleCIDRAffinityFilter
Only passes hosts on the IP subnet range specified by the instance's cidr and build_new_host_ip hints. Example:

--hint build_near_host_ip=192.0.2.0 --hint cidr=/24
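As a combined illustration of the hint-based filters above, a boot request might pass several hints at once. The image, flavor, instance name, and INSTANCE_UUID below are placeholders for values from your own environment:

```shell
# Boot an instance on a different host from an existing instance, while
# staying on the 192.0.2.0/24 subnet (DifferentHostFilter and
# SimpleCIDRAffinityFilter must be in scheduler_default_filters).
nova boot --image rhel7 --flavor m1.small \
  --hint different_host=INSTANCE_UUID \
  --hint build_near_host_ip=192.0.2.0 --hint cidr=/24 \
  myNewInstance
```

A hint is only acted on when its corresponding filter is enabled; otherwise it is silently ignored.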
3.5.2. Configure Scheduling Weights
Both cells and hosts can be weighted for scheduling; the host or cell with the largest weight (after filtering) is selected. All weighers are given a multiplier that is applied after normalising the node's weight. A node's weight is calculated as:

w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...

You can configure weight options in the scheduler host's /etc/nova/nova.conf file (must have either root or nova user permissions).
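As a toy illustration of this formula, the following sketch normalizes one weigher's raw values across three hosts and applies a multiplier of 1.0, mirroring what the RAM weigher does by default. The free-RAM figures are invented for the example:

```shell
# norm(w) = (w - min) / (max - min), computed across the candidate hosts.
# Raw free-RAM values: hostA=2048, hostB=4096, hostC=8192 (hypothetical).
awk 'BEGIN {
  mult = 1.0
  min = 2048; max = 8192
  printf "hostA=%.1f hostB=%.1f hostC=%.1f\n", \
    mult * (2048 - min) / (max - min), \
    mult * (4096 - min) / (max - min), \
    mult * (8192 - min) / (max - min)
}'
```

This prints `hostA=0.0 hostB=0.3 hostC=1.0`, so hostC (the most free RAM) wins; a negative multiplier would invert the ordering and stack instances instead.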
3.5.2.1. Configure Weight Options for Hosts
You can define the host weighers you would like the scheduler to use in the [DEFAULT] scheduler_weight_classes option. Valid weighers are:

nova.scheduler.weights.ram - Weighs the host's available RAM.

nova.scheduler.weights.metrics - Weighs the host's metrics.

nova.scheduler.weights.all_weighers - Uses all host weighers (default).
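For example, to weigh hosts by available RAM only and bias toward stacking rather than spreading, the relevant nova.conf lines might look like the following sketch (the values are illustrative, not recommendations):

```ini
[DEFAULT]
# Use only the RAM weigher instead of the default all_weighers.
scheduler_weight_classes = nova.scheduler.weights.ram
# A negative multiplier prioritizes hosts with less free RAM (stacking).
ram_weight_multiplier = -1.0
```

Restart the Compute scheduler service after editing the file so the change takes effect.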
Table 3.5. Host Weight Options

Weigher / Option / Description

All
[DEFAULT] scheduler_host_subset_size
Defines the subset size from which a host is selected (integer); must be at least 1. A value of 1 selects the first host returned by the weighing functions. Any value less than 1 is ignored and 1 is used instead.
metrics
[metrics] required
Specifies how to handle metrics in [metrics] weight_setting that are unavailable:

True - Metrics are required; if unavailable, an exception is raised. To avoid the exception, use the MetricFilter filter in the [DEFAULT] scheduler_default_filters option.

False - The unavailable metric is treated as a negative factor in the weighing process; the returned value is set by weight_of_unavailable.
metrics
[metrics] weight_of_unavailable
Used as the weight if any metric in [metrics] weight_setting is unavailable; valid if [metrics] required=False.

metrics
[metrics] weight_multiplier
Multiplier used for weighing metrics. By default, weight_multiplier=1.0 and spreads instances across possible hosts. If this value is negative, the host with lower metrics is prioritized, and instances are stacked in hosts.
metrics
[metrics] weight_setting
Specifies metrics and the ratio with which they are weighed; use a comma-separated list of 'metric=ratio' pairs. Valid metric names are:

cpu.frequency - Current CPU frequency
cpu.user.time - CPU user mode time
cpu.kernel.time - CPU kernel time
cpu.idle.time - CPU idle time
cpu.iowait.time - CPU I/O wait time
cpu.user.percent - CPU user mode percentage
cpu.kernel.percent - CPU kernel percentage
cpu.idle.percent - CPU idle percentage
cpu.iowait.percent - CPU I/O wait percentage
cpu.percent - Generic CPU utilization

Example: weight_setting=cpu.user.time=1.0
ram
[DEFAULT] ram_weight_multiplier
Multiplier for RAM (floating point). By default, ram_weight_multiplier=1.0 and spreads instances across possible hosts. If this value is negative, the host with less RAM is prioritized, and instances are stacked in hosts.
3.5.2.2. Configure Weight Options for Cells

You define which cell weighers you would like the scheduler to use in the [cells] scheduler_weight_classes option (/etc/nova/nova.conf file; you must have either root or nova user permissions).

Valid weighers are:

nova.cells.weights.all_weighers - Uses all cell weighers (default).
nova.cells.weights.mute_child - Weighs whether a child cell has not sent capacity or capability updates for some time.

nova.cells.weights.ram_by_instance_type - Weighs the cell's available RAM.

nova.cells.weights.weight_offset - Evaluates a cell's weight offset. Note: A cell's weight offset is specified using --woffset in the nova-manage cell create command.
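For example, to penalize silent child cells more aggressively than the defaults in Table 3.6, the [cells] section might be adjusted as in this sketch (the multiplier value is illustrative):

```ini
[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
# Push silent cells further down the list than the default -10.0.
mute_weight_multiplier = -20.0
mute_weight_value = 1000.0
```

As with host weighers, the scheduler must be restarted for the new values to be read.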
Table 3.6. Cell Weight Options

Weigher / Option / Description

mute_child
[cells] mute_weight_multiplier
Multiplier for hosts which have been silent for some time (negative floating point). By default, this value is '-10.0'.

mute_child
[cells] mute_weight_value
Weight value given to silent hosts (positive floating point). By default, this value is '1000.0'.
ram_by_instance_type
[cells] ram_weight_multiplier
Multiplier for weighing RAM (floating point). By default, this value is '1.0', and spreads instances across possible cells. If this value is negative, the cell with less RAM is prioritized, and instances are stacked in cells.

weight_offset
[cells] offset_weight_multiplier
Multiplier for weighing cells (floating point). Enables the instance to specify a preferred cell by setting its weight offset to 999999999999999 (highest weight is prioritized). By default, this value is '1.0'.
3.6. EVACUATE INSTANCES

If you want to move an instance from a dead or shut-down compute node to a new host server in the same environment (for example, because the server needs to be swapped out), you can evacuate it using nova evacuate.

An evacuation is only useful if the instance disks are on shared storage or if the instance disks are Block Storage volumes. Otherwise, the disks will not be accessible from the new compute node.

An instance can only be evacuated from a server if the server is shut down; if the server is not shut down, the evacuate command will fail.
Note

If you have a functioning compute node, and you want to:

Make a static copy (not running) of an instance for backup purposes or to copy the instance to a different environment, make a snapshot using nova image-create (see Migrate a Static Instance).

Move an instance in a static state (not running) to a host in the same environment (shared storage not needed), migrate it using nova migrate (see Migrate a Static Instance).

Move an instance in a live state (running) to a host in the same environment, migrate it using nova live-migration (see Migrate a Live (running) Instance).
3.6.1. Evacuate One Instance

Evacuate an instance using:

# nova evacuate [--password pass] [--on-shared-storage] instance_name [target_host]

Where:

--password pass - Admin password to set for the evacuated instance (cannot be used if --on-shared-storage is specified). If a password is not specified, a random password is generated and output when evacuation is complete.

--on-shared-storage - Indicates that all instance files are on shared storage.

instance_name - Name of the instance to be evacuated.

target_host - Host to which the instance is evacuated; if you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using:

# nova host-list | grep compute

For example:

# nova evacuate myDemoInstance Compute2_OnEL7.myDomain
3.6.2. Evacuate All Instances

Evacuate all instances on a specified host using:
# nova host-evacuate instance_name [--target target_host] [--on-shared-storage] source_host

Where:

--target target_host - Host to which the instance is evacuated; if you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using:

# nova host-list | grep compute

--on-shared-storage - Indicates that all instance files are on shared storage.

source_host - Name of the host to be evacuated.

For example:

# nova host-evacuate --target Compute2_OnEL7.localdomain myDemoHost.localdomain
3.6.3. Configure Shared Storage

If you are using shared storage, this procedure exports the instances directory for the Compute service to the two nodes, and ensures the nodes have access. The directory path is set in the state_path and instances_path parameters in the /etc/nova/nova.conf file. This procedure uses the default value, which is /var/lib/nova/instances. Only users with root access can set up shared storage.
1. On the controller host:

a. Ensure the /var/lib/nova/instances directory has read-write access by the Compute service user (this user must be the same across controller and nodes). For example:

drwxr-xr-x. 9 nova nova 4096 Nov 5 20:37 instances

b. Add the following lines to the /etc/exports file; switch out node1_IP and node2_IP for the IP addresses of the two compute nodes:
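These export entries typically take the following form for the default instances path. The NFS options shown are common choices rather than values mandated by this guide, and node1_IP and node2_IP stand for your compute nodes' addresses:

```
/var/lib/nova/instances node1_IP(rw,sync,fsid=0,no_root_squash)
/var/lib/nova/instances node2_IP(rw,sync,fsid=0,no_root_squash)
```

Adjust the options (for example, dropping no_root_squash) to match your site's security policy.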
c. Export the /var/lib/nova/instances directory to the compute nodes.
# exportfs -avr
/var/