Red Hat OpenStack Platform-17.1-Configuring Red Hat OpenStack Platform networking-en-US
OpenStack Team
rhos-docs@redhat.com
Legal Notice
Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
A cookbook for common OpenStack Networking tasks.
Table of Contents
PREFACE
MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
CHAPTER 1. INTRODUCTION TO OPENSTACK NETWORKING
    1.1. MANAGING YOUR RHOSP NETWORKS
    1.2. NETWORKING SERVICE COMPONENTS
    1.3. MODULAR LAYER 2 (ML2) NETWORKING
    1.4. ML2 NETWORK TYPES
    1.5. MODULAR LAYER 2 (ML2) MECHANISM DRIVERS
    1.6. OPEN VSWITCH
    1.7. OPEN VIRTUAL NETWORK (OVN)
    1.8. MODULAR LAYER 2 (ML2) TYPE AND MECHANISM DRIVER COMPATIBILITY
    1.9. EXTENSION DRIVERS FOR THE RHOSP NETWORKING SERVICE
CHAPTER 2. WORKING WITH ML2/OVN
    2.1. LIST OF COMPONENTS IN THE RHOSP OVN ARCHITECTURE
    2.2. ML2/OVN DATABASES
    2.3. THE OVN-CONTROLLER SERVICE ON COMPUTE NODES
    2.4. OVN METADATA AGENT ON COMPUTE NODES
    2.5. THE OVN COMPOSABLE SERVICE
    2.6. LAYER 3 HIGH AVAILABILITY WITH OVN
    2.7. ACTIVE-ACTIVE CLUSTERED DATABASE SERVICE MODEL
    2.8. DEPLOYING A CUSTOM ROLE WITH ML2/OVN
    2.9. SR-IOV WITH ML2/OVN AND NATIVE OVN DHCP
CHAPTER 3. MANAGING PROJECT NETWORKS
    3.1. VLAN PLANNING
    3.2. TYPES OF NETWORK TRAFFIC
    3.3. IP ADDRESS CONSUMPTION
    3.4. VIRTUAL NETWORKING
    3.5. ADDING NETWORK ROUTING
    3.6. EXAMPLE NETWORK PLAN
    3.7. CREATING A NETWORK
    3.8. WORKING WITH SUBNETS
    3.9. CREATING A SUBNET
    3.10. ADDING A ROUTER
    3.11. PURGING ALL RESOURCES AND DELETING A PROJECT
    3.12. DELETING A ROUTER
    3.13. DELETING A SUBNET
    3.14. DELETING A NETWORK
CHAPTER 4. CONNECTING VM INSTANCES TO PHYSICAL NETWORKS
    4.1. OVERVIEW OF THE OPENSTACK NETWORKING TOPOLOGY
    4.2. PLACEMENT OF OPENSTACK NETWORKING SERVICES
    4.3. CONFIGURING FLAT PROVIDER NETWORKS
    4.4. HOW DOES THE FLAT PROVIDER NETWORK PACKET FLOW WORK?
    4.5. TROUBLESHOOTING INSTANCE-PHYSICAL NETWORK CONNECTIONS ON FLAT PROVIDER NETWORKS
    4.6. CONFIGURING VLAN PROVIDER NETWORKS
    4.7. HOW DOES THE VLAN PROVIDER NETWORK PACKET FLOW WORK?
CHAPTER 5. MANAGING FLOATING IP ADDRESSES
    5.1. CREATING FLOATING IP POOLS
    5.2. ASSIGNING A SPECIFIC FLOATING IP
    5.3. CREATING AN ADVANCED NETWORK
    5.4. ASSIGNING A RANDOM FLOATING IP
    5.5. CREATING MULTIPLE FLOATING IP POOLS
    5.6. CONFIGURING FLOATING IP PORT FORWARDING
    5.7. CREATING PORT FORWARDING FOR A FLOATING IP
    5.8. BRIDGING THE PHYSICAL NETWORK
    5.9. ADDING AN INTERFACE
    5.10. DELETING AN INTERFACE
CHAPTER 6. MONITORING AND TROUBLESHOOTING NETWORKS
    6.1. BASIC PING TESTING
    6.2. VIEWING CURRENT PORT STATUS
    6.3. TROUBLESHOOTING CONNECTIVITY TO VLAN PROVIDER NETWORKS
    6.4. REVIEWING THE VLAN CONFIGURATION AND LOG FILES
    6.5. PERFORMING BASIC ICMP TESTING WITHIN THE ML2/OVN NAMESPACE
    6.6. TROUBLESHOOTING FROM WITHIN PROJECT NETWORKS (ML2/OVS)
    6.7. PERFORMING ADVANCED ICMP TESTING WITHIN THE NAMESPACE (ML2/OVS)
    6.8. CREATING ALIASES FOR OVN TROUBLESHOOTING COMMANDS
    6.9. MONITORING OVN LOGICAL FLOWS
    6.10. MONITORING OPENFLOWS
    6.11. MONITORING OVN DATABASE STATUS
    6.12. VALIDATING YOUR ML2/OVN DEPLOYMENT
    6.13. SETTING THE LOGGING MODE FOR ML2/OVN
    6.14. FIXING OVN CONTROLLERS THAT FAIL TO REGISTER ON EDGE SITES
    6.15. ML2/OVN LOG FILES
CHAPTER 7. CONFIGURING PHYSICAL SWITCHES FOR OPENSTACK NETWORKING
    7.1. PLANNING YOUR PHYSICAL NETWORK ENVIRONMENT
    7.2. CONFIGURING A CISCO CATALYST SWITCH
        7.2.1. About trunk ports
        7.2.2. Configuring trunk ports for a Cisco Catalyst switch
        7.2.3. About access ports
        7.2.4. Configuring access ports for a Cisco Catalyst switch
        7.2.5. About LACP port aggregation
        7.2.6. Configuring LACP on the physical NIC
        7.2.7. Configuring LACP for a Cisco Catalyst switch
        7.2.8. About MTU settings
        7.2.9. Configuring MTU settings for a Cisco Catalyst switch
        7.2.10. About LLDP discovery
        7.2.11. Configuring LLDP for a Cisco Catalyst switch
    7.3. CONFIGURING A CISCO NEXUS SWITCH
        7.3.1. About trunk ports
        7.3.2. Configuring trunk ports for a Cisco Nexus switch
        7.3.3. About access ports
CHAPTER 8. CONFIGURING MAXIMUM TRANSMISSION UNIT (MTU) SETTINGS
    8.1. MTU OVERVIEW
    8.2. CONFIGURING MTU SETTINGS IN DIRECTOR
    8.3. REVIEWING THE RESULTING MTU CALCULATION
CHAPTER 9. USING QUALITY OF SERVICE (QOS) POLICIES TO MANAGE DATA TRAFFIC
    9.1. QOS RULES
    9.2. CONFIGURING THE NETWORKING SERVICE FOR QOS POLICIES
    9.3. CONTROLLING MINIMUM BANDWIDTH BY USING QOS POLICIES
        9.3.1. Using Networking service back-end enforcement to enforce minimum bandwidth
CHAPTER 10. CONFIGURING BRIDGE MAPPINGS
    10.1. OVERVIEW OF BRIDGE MAPPINGS
    10.2. TRAFFIC FLOW
    10.3. CONFIGURING BRIDGE MAPPINGS
    10.4. MAINTAINING BRIDGE MAPPINGS FOR OVS
        10.4.1. Cleaning up OVS patch ports manually
        10.4.2. Cleaning up OVS patch ports automatically
CHAPTER 11. VLAN-AWARE INSTANCES
    11.1. VLAN TRUNKS AND VLAN TRANSPARENT NETWORKS
    11.2. ENABLING VLAN TRANSPARENCY IN ML2/OVN DEPLOYMENTS
    11.3. REVIEWING THE TRUNK PLUG-IN
    11.4. CREATING A TRUNK CONNECTION
    11.5. ADDING SUBPORTS TO THE TRUNK
    11.6. CONFIGURING AN INSTANCE TO USE A TRUNK
    11.7. CONFIGURING NETWORKING SERVICE RPC TIMEOUT
    11.8. UNDERSTANDING TRUNK STATES
CHAPTER 12. CONFIGURING RBAC POLICIES
    12.1. OVERVIEW OF RBAC POLICIES
    12.2. CREATING RBAC POLICIES
    12.3. REVIEWING RBAC POLICIES
    12.4. DELETING RBAC POLICIES
    12.5. GRANTING RBAC POLICY ACCESS FOR EXTERNAL NETWORKS
CHAPTER 13. CONFIGURING DISTRIBUTED VIRTUAL ROUTING (DVR)
    13.1. UNDERSTANDING DISTRIBUTED VIRTUAL ROUTING (DVR)
        13.1.1. Overview of Layer 3 routing
        13.1.2. Routing flows
        13.1.3. Centralized routing
    13.2. DVR OVERVIEW
    13.3. DVR KNOWN ISSUES AND CAVEATS
    13.4. SUPPORTED ROUTING ARCHITECTURES
    13.5. MIGRATING CENTRALIZED ROUTERS TO DISTRIBUTED ROUTING
    13.6. DEPLOYING ML2/OVN OPENSTACK WITH DISTRIBUTED VIRTUAL ROUTING (DVR) DISABLED
        13.6.1. Additional resources
CHAPTER 14. PROJECT NETWORKING WITH IPV6
    14.1. IPV6 SUBNET OPTIONS
    14.2. CREATE AN IPV6 SUBNET USING STATEFUL DHCPV6
CHAPTER 15. MANAGING PROJECT QUOTAS
    15.1. CONFIGURING PROJECT QUOTAS
    15.2. L3 QUOTA OPTIONS
    15.3. FIREWALL QUOTA OPTIONS
    15.4. SECURITY GROUP QUOTA OPTIONS
    15.5. MANAGEMENT QUOTA OPTIONS
CHAPTER 16. DEPLOYING ROUTED PROVIDER NETWORKS
    16.1. ADVANTAGES OF ROUTED PROVIDER NETWORKS
CHAPTER 17. CREATING CUSTOM VIRTUAL ROUTERS WITH ROUTER FLAVORS
    17.1. ENABLING ROUTER FLAVORS AND CREATING SERVICE PROVIDERS
    17.2. CREATING A ROUTER FLAVOR
    17.3. CREATING A CUSTOM VIRTUAL ROUTER
CHAPTER 18. CONFIGURING ALLOWED ADDRESS PAIRS
    18.1. OVERVIEW OF ALLOWED ADDRESS PAIRS
    18.2. CREATING A PORT AND ALLOWING ONE ADDRESS PAIR
    18.3. ADDING ALLOWED ADDRESS PAIRS
CHAPTER 19. CONFIGURING SECURITY GROUPS
    19.1. CREATING A SECURITY GROUP
    19.2. UPDATING SECURITY GROUP RULES
    19.3. DELETING SECURITY GROUP RULES
    19.4. DELETING A SECURITY GROUP
    19.5. CONFIGURING SHARED SECURITY GROUPS
CHAPTER 20. LOGGING SECURITY GROUP ACTIONS
    20.1. VERIFYING THAT SECURITY GROUP LOGGING IS ENABLED
    20.2. CREATING LOG OBJECTS FOR SECURITY GROUPS
    20.3. LISTING AND VIEWING LOG OBJECTS FOR SECURITY GROUPS
    20.4. ENABLING AND DISABLING LOG OBJECTS FOR SECURITY GROUPS
    20.5. RENAMING A LOG OBJECT FOR SECURITY GROUPS
    20.6. DELETING A LOG OBJECT FOR SECURITY GROUPS
    20.7. ACCESSING SECURITY GROUP LOG CONTENT
    20.8. SAMPLE SECURITY GROUP LOG CONTENT
    20.9. ADJUSTING RATE AND BURST LIMITS FOR SECURITY GROUP LOGGING
CHAPTER 21. COMMON ADMINISTRATIVE NETWORKING TASKS
    21.1. CONFIGURING THE L2 POPULATION DRIVER
    21.2. TUNING KEEPALIVED TO AVOID VRRP PACKET LOSS
    21.3. SPECIFYING THE NAME THAT DNS ASSIGNS TO PORTS
    21.4. ASSIGNING DHCP ATTRIBUTES TO PORTS
    21.5. ENABLING NUMA AFFINITY ON PORTS
    21.6. LOADING KERNEL MODULES
    21.7. LIMITING QUERIES TO THE METADATA SERVICE
CHAPTER 22. CONFIGURING LAYER 3 HIGH AVAILABILITY (HA)
    22.1. RHOSP NETWORKING SERVICE WITHOUT HIGH AVAILABILITY (HA)
    22.2. OVERVIEW OF LAYER 3 HIGH AVAILABILITY (HA)
    22.3. LAYER 3 HIGH AVAILABILITY (HA) FAILOVER CONDITIONS
    22.4. PROJECT CONSIDERATIONS FOR LAYER 3 HIGH AVAILABILITY (HA)
    22.5. HIGH AVAILABILITY (HA) CHANGES TO THE RHOSP NETWORKING SERVICE
    22.6. ENABLING LAYER 3 HIGH AVAILABILITY (HA) ON RHOSP NETWORKING SERVICE NODES
    22.7. REVIEWING HIGH AVAILABILITY (HA) RHOSP NETWORKING SERVICE NODE CONFIGURATIONS
CHAPTER 23. USING AVAILABILITY ZONES TO MAKE NETWORK RESOURCES HIGHLY AVAILABLE
    23.1. ABOUT NETWORKING SERVICE AVAILABILITY ZONES
    23.2. CONFIGURING NETWORK SERVICE AVAILABILITY ZONES FOR ML2/OVS
PREFACE
NOTE
You cannot apply a role-based access control (RBAC)-shared security group directly to
an instance during instance creation. To apply an RBAC-shared security group to an
instance you must first create the port, apply the shared security group to that port, and
then assign that port to the instance. See Adding a security group to a port.
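The command sequence is not shown in this preface; a minimal sketch with the standard OpenStack client follows. The network, security group, flavor, port, and instance names are hypothetical.

# my-network, shared-sg, my_flavor, my-port, and my-instance are hypothetical names
$ openstack port create --network my-network --security-group shared-sg my-port
$ openstack server create --image rhel --flavor my_flavor --nic port-id=my-port my-instance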
MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
1. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to
submit feedback.
2. Click the following link to open the Create Issue page: Create Issue
3. Complete the Summary and Description fields. In the Description field, include the
documentation URL, chapter or section number, and a detailed description of the issue. Do not
modify any other fields in the form.
4. Click Create.
CHAPTER 1. INTRODUCTION TO OPENSTACK NETWORKING
Inside project networks, you can use pools of floating IP addresses or a single floating IP
address to direct ingress traffic to your VM instances. Using bridge mappings, you can associate
a physical network name (an interface label) to a bridge created with OVS or OVN to allow
provider network traffic to reach the physical network.
Routed provider networks simplify the cloud for end users because they see only one network.
For cloud operators, routed provider networks deliver scalability and fault tolerance. For
example, if a major error occurs, only one segment is impacted instead of the entire network
failing.
RHOSP uses VRRP to make project routers and floating IP addresses highly available. An
alternative to centralized routing, Distributed Virtual Routing (DVR) is a routing design based on
VRRP that deploys the L3 agent and schedules routers on every Compute node.
For more information, see Using availability zones to make network resources highly available.
By default, security groups are stateful. In ML2/OVN deployments, you can also create stateless
security groups. A stateless security group can provide significant performance benefits. Unlike
stateful security groups, stateless security groups do not automatically allow returning traffic, so
you must create a complementary security group rule to allow the return of related traffic.
In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags
are transferred over the network and consumed by the VM instances on the same VLAN, and
ignored by other instances and devices. VLAN trunks support VLAN-aware instances by
combining VLANs into a single trunked port.
API server
The RHOSP networking API includes support for Layer 2 networking and IP Address
Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing
between Layer 2 networks and gateways to external networks. RHOSP networking includes a
growing list of plug-ins that enable interoperability with various commercial and open source
network technologies, including routers, switches, virtual switches and software-defined
networking (SDN) controllers.
Messaging queue
Accepts and routes RPC requests between RHOSP services to complete API operations.
The ML2 framework distinguishes between the two kinds of drivers that can be configured:
Type drivers
Define how an RHOSP network is technically realized.
Each available network type is managed by an ML2 type driver, which maintains any required
type-specific network state. Type drivers validate the type-specific information for provider
networks and are responsible for allocating a free segment in project networks. Examples of type
drivers are GENEVE, GRE, VXLAN, and so on.
Mechanism drivers
Define the mechanism to access an RHOSP network of a certain type.
The mechanism driver takes the information established by the type driver and applies it to the
networking mechanisms that have been enabled. Examples of mechanism drivers are Open Virtual
Networking (OVN) and Open vSwitch (OVS).
Mechanism drivers can employ L2 agents and, by using RPC, interact directly with external devices
or controllers. You can use multiple mechanism and type drivers simultaneously to access different
ports of the same virtual network.
Additional resources
Section 1.8, “Modular Layer 2 (ML2) type and mechanism driver compatibility”
Flat
VLAN
GENEVE tunnels
Flat
All virtual machine (VM) instances reside on the same network, which can also be shared with the
hosts. No VLAN tagging or other network segregation occurs.
VLAN
With RHOSP networking, users can create multiple provider or project networks using VLAN IDs
(802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to
communicate with each other across the environment. They can also communicate with dedicated
servers, firewalls, load balancers and other network infrastructure on the same Layer 2 VLAN.
You can use VLANs to segment network traffic for computers running on the same switch. This
means that you can logically divide your switch by configuring the ports to be members of different
networks — they are basically mini-LANs that you can use to separate traffic for security reasons.
For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18
to VLAN201. As a result, computers connected to VLAN200 are completely separate from those on
VLAN201; they cannot communicate directly. For traffic to pass between them, it must go through a
router, as if they were two separate physical switches. Firewalls can also be useful for governing which
VLANs can communicate with each other.
GENEVE tunnels
GENEVE recognizes and accommodates the changing capabilities and needs of different devices in
network virtualization. It provides a framework for tunneling rather than being prescriptive about the
entire system. GENEVE flexibly defines the content of the metadata that is added during
encapsulation and tries to adapt to various virtualization scenarios. It uses UDP as its transport
protocol and is dynamic in size using extensible option headers. GENEVE supports unicast, multicast,
and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver.
VXLAN and GRE tunnels
VXLAN and GRE use network overlays to support private communication between instances. An
RHOSP networking router is required to enable traffic to traverse outside of the GRE or VXLAN
project network. A router is also required to connect directly-connected project networks with
external networks, including the internet; the router provides the ability to connect to instances
directly from an external network using floating IP addresses. VXLAN and GRE type drivers are
compatible with the ML2/OVS mechanism driver.
Additional resources
Section 1.8, “Modular Layer 2 (ML2) type and mechanism driver compatibility”
You enable mechanism drivers using the Orchestration service (heat) parameter,
NeutronMechanismDrivers. Here is an example from a heat custom environment file:
parameter_defaults:
  ...
  NeutronMechanismDrivers: ansible,ovn,baremetal
  ...
The order in which you specify the mechanism drivers matters. In the earlier example, if you want to bind
a port using the baremetal mechanism driver, then you must specify baremetal before ansible.
Otherwise, the ansible driver will bind the port, because it precedes baremetal in the list of values for
NeutronMechanismDrivers.
Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP
15 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers
today. Those advantages multiply with each release while we continue to enhance and improve the
ML2/OVN feature set.
Support is available for the deprecated ML2/OVS mechanism driver through the RHOSP 17 releases.
During this time, the ML2/OVS driver remains in maintenance mode, receiving bug fixes and normal
support, and most new feature development happens in the ML2/OVN mechanism driver.
In RHOSP 18.0, Red Hat plans to completely remove the ML2/OVS mechanism driver and stop
supporting it.
If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism
driver, start now to evaluate a plan to migrate to the ML2/OVN mechanism driver. Migration is
supported in RHOSP 16.2 and in RHOSP 17.1. Migration tools are available in RHOSP 17.0 for test
purposes only.
Red Hat requires that you file a proactive support case before attempting a migration from ML2/OVS
to ML2/OVN. Red Hat does not support migrations without the proactive support case. See How to
open a proactive case for a planned activity on Red Hat OpenStack Platform?
Additional resources
Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
NOTE
To mitigate the risk of network loops in OVS, only a single interface or a single bond can
be a member of a given bridge. If you require multiple bonds or interfaces, you can
configure multiple bridges.
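For illustration, a sketch of a NIC configuration fragment that satisfies this rule by using two bridges, each with a single bond, might look like the following. The bridge, bond, and NIC names are assumptions, not values taken from this guide.

# br-ex1, br-ex2, bond1, bond2, and nic2-nic5 are hypothetical names
- type: ovs_bridge
  name: br-ex1
  use_dhcp: false
  members:
  - type: ovs_bond
    name: bond1
    ovs_options: "bond_mode=active-backup"
    members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3
- type: ovs_bridge
  name: br-ex2
  use_dhcp: false
  members:
  - type: ovs_bond
    name: bond2
    ovs_options: "bond_mode=active-backup"
    members:
    - type: interface
      name: nic4
    - type: interface
      name: nic5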
Additional resources
Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment
guide.
A physical network comprises physical wires, switches, and routers. A virtual network extends a physical
network into a hypervisor or container platform, bridging VMs or containers into the physical network.
An OVN logical network is a network implemented in software that is insulated from physical networks by
tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to
overlap with those used on physical networks without causing conflicts. Logical network topologies can
be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs
that are part of a logical network can migrate from one physical machine to another without network
disruption.
The encapsulation layer prevents VMs and containers connected to a logical network from
communicating with nodes on physical networks. For clustering VMs and containers, this can be
acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical
networks. OVN provides multiple forms of gateways for this purpose. An OVN deployment consists of
several components:
OVN databases
Store data representing the OVN logical and physical networks.
Hypervisors
Run Open vSwitch and translate the OVN logical network into OpenFlow on a physical or virtual
machine.
Gateways
Extend a tunnel-based OVN logical network into a physical network by forwarding packets between
tunnels and the physical network infrastructure.
[1] ML2/OVN VXLAN support is limited to 4096 networks and 4096 ports per network. Also, ACLs that
rely on the ingress port do not work with ML2/OVN and VXLAN, because the ingress port is not passed.
The ML2 plug-in also supports extension drivers that allow other pluggable drivers to extend the core
resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include
support for QoS, port security, and so on.
Earlier RHOSP versions used the Open vSwitch (OVS) mechanism driver by default, but Red Hat
recommends the ML2/OVN mechanism driver for most deployments.
As illustrated in Figure 2.1, the OVN architecture consists of the following components and services:
NOTE
To communicate with ovs-vswitchd and install the OpenFlow flows, the ovn-controller connects to
one of the active ovsdb-server servers (which host conf.db) using the UNIX socket path that was
passed when ovn-controller was started (for example unix:/var/run/openvswitch/db.sock).
The ovn-controller service expects certain key-value pairs in the external_ids column of the
Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following
example shows the key-value pairs that puppet-vswitch configures in the external_ids column:
hostname=<HOST NAME>
ovn-encap-ip=<IP OF THE NODE>
ovn-encap-type=geneve
ovn-remote=tcp:OVN_DBS_VIP:6642
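To review these values on a node, you can query the external_ids column directly. This is a generic Open vSwitch command, not a step taken from this guide:

# generic Open vSwitch query; run on the Compute node
$ sudo ovs-vsctl get Open_vSwitch . external_ids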
OpenStack guest instances access the Networking metadata service available at the link-local IP
address: 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the
Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the
appropriate host network. HAProxy adds the necessary headers to the metadata API request and then
forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket.
The OVN Networking service creates a unique network namespace for each virtual network that enables
the metadata service. Each network accessed by the instances on the Compute node has a
corresponding metadata namespace (ovnmeta-<network_uuid>).
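As a quick check, assuming at least one instance on the Compute node is attached to a network with the metadata service, you can list these namespaces:

# run on the Compute node; lists ovnmeta-<network_uuid> namespaces
$ sudo ip netns list | grep ovnmeta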
Red Hat OpenStack Platform usually consists of nodes in pre-defined roles, such as nodes in Controller
roles, Compute roles, and different storage role types. Each of these default roles contains a set of
services that are defined in the core heat template collection.
In a default Red Hat OpenStack (RHOSP) deployment, the ML2/OVN composable service, ovn-dbs,
runs on Controller nodes. Because the service is composable, you can assign it to another role, such as a
Networker role. By choosing to assign the ML2/OVN service to another role you can reduce the load on
the Controller node, and implement a high-availability strategy by isolating the Networking service on
Networker nodes.
Related information
NOTE
When you create a router, do not use the --ha option because OVN routers are highly
available by default. openstack router create commands that include the --ha option
fail.
OVN automatically schedules the router port to all available gateway nodes that can act as an L3
gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN
Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled
active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder
and router enablement and disablement. Gratuitous ARPs for FIPs and router external addresses are
also periodically sent by the ovn-controller.
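If you want to inspect how a router port is scheduled across gateway chassis, a hedged example using the OVN northbound client follows. In RHOSP, ovn-nbctl typically runs inside the OVN database container, and the logical router port name shown here is a placeholder:

# lrp-<port-uuid> is a placeholder for a real logical router port name
$ ovn-nbctl lrp-get-gateway-chassis lrp-<port-uuid>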
NOTE
L3HA uses OVN to balance the routers back to the original gateway nodes to avoid any
nodes becoming a bottleneck.
BFD monitoring
OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the
gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to
node.
Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway
nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and
ARP responses and announcements.
Each compute node uses BFD to monitor each gateway node and automatically steers external traffic,
such as source and destination Network Address Translation (SNAT and DNAT), through the active
gateway node for a given router. Compute nodes do not need to monitor other compute nodes.
NOTE
External network failures are not detected as would happen with an ML2-OVS
configuration.
The gateway node becomes disconnected from the network (tunneling interface).
NOTE
This BFD monitoring mechanism only works for link failures, not for routing failures.
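To see the current BFD session state on a gateway or Compute node, you can use the standard Open vSwitch appctl command. This is a general troubleshooting aid, not a step defined in this section:

# shows BFD session state for tunnel interfaces on this node
$ sudo ovs-appctl bfd/show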
A clustered database operates on a cluster of at least three database servers on different hosts. Servers
use the Raft consensus algorithm to synchronize writes and share network traffic continuously across
the cluster. The cluster elects one server as the leader. All servers in the cluster can handle database
read operations, which mitigates potential bottlenecks on the control plane. Write operations are
handled by the cluster leader.
If a server fails, a new cluster leader is elected and the traffic is redistributed among the remaining
operational servers. The clustered database service model handles failovers more efficiently than the
pacemaker-based model did. This mitigates related downtime and complications that can occur with
longer failover times.
The leader election process requires a majority, so the fault tolerance capacity is limited by the highest
odd number in the cluster. For example, a three-server cluster continues to operate if one server fails. A
five-server cluster tolerates up to two failures. Increasing the number of servers to an even number does
not increase fault tolerance. For example, a four-server cluster cannot tolerate more failures than a
three-server cluster.
Clusters larger than five servers also work, with every two added servers allowing the cluster to tolerate
an additional failure, but write performance decreases.
For information on monitoring the status of the database servers, see Monitoring OVN database status.
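As an illustration, one common way to query the Raft status of the northbound database is to run ovs-appctl against the database control socket. The container name and socket path shown here are assumptions and can vary by deployment:

# container name and socket path are deployment-specific assumptions
$ sudo podman exec ovn_cluster_north_db_server \
    ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound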
Networker
Limitations
The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this
release.
All external ports are scheduled on a single gateway node because there is only one HA Chassis
Group for all of the ports.
North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV
because the external ports are not colocated with the logical router’s gateway ports. See
https://bugs.launchpad.net/neutron/+bug/1875852.
Prerequisites
Procedure
1. Log in to the undercloud host as the stack user and source the stackrc file.
$ source stackrc
2. Choose the custom roles file that is appropriate for your deployment. Use it directly in the
deploy command if it suits your needs as-is. Or you can generate your own custom roles file that
combines other custom roles files.
3. (Optional) Generate a new custom roles data file that combines one of the custom roles files
listed earlier with other custom roles files.
Follow the instructions in Creating a roles_data file in the Customizing your Red Hat OpenStack
Platform deployment guide. Include the appropriate source role files depending on your
deployment.
4. (Optional) To identify specific nodes for the role, you can create a specific hardware flavor and
assign the flavor to specific nodes. Then use an environment file to define the flavor for the
role, and to specify a node count.
For more information, see the example in Creating a new role in the Customizing your Red Hat
OpenStack Platform deployment guide.
Networker role - Deployment settings:

ControllerParameters:
  OVNCMSOptions: ""
ControllerSriovParameters:
  OVNCMSOptions: ""
NetworkerParameters:
  OVNCMSOptions: "enable-chassis-as-gw"
NetworkerSriovParameters:
  OVNCMSOptions: ""

Networker with SR-IOV role - Deployment settings:

ControllerParameters:
  OVNCMSOptions: ""
ControllerSriovParameters:
  OVNCMSOptions: ""
NetworkerParameters:
  OVNCMSOptions: ""
NetworkerSriovParameters:
  OVNCMSOptions: "enable-chassis-as-gw"

Co-located control and networker with SR-IOV - Deployment settings:

OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None
ControllerParameters:
  OVNCMSOptions: ""
ControllerSriovParameters:
  OVNCMSOptions: "enable-chassis-as-gw"
NetworkerParameters:
  OVNCMSOptions: ""
NetworkerSriovParameters:
  OVNCMSOptions: ""
7. Run the deployment command and include the core heat templates, other environment files,
and the custom roles data file in your deployment command with the -r option.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
Verification steps
Example
ssh tripleo-admin@controller-0
Sample output
3. Ensure that Controller nodes with OVN services or dedicated Networker nodes have been
configured as gateways for OVS.
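The verification command itself is not preserved in this excerpt. A check along the following lines, run on the node with the standard Open vSwitch client, returns the chassis gateway option:

# assumed verification command; run on the Controller or Networker node
$ sudo ovs-vsctl get open . external-ids:ovn-cms-options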
Sample output
enable-chassis-as-gw
Example
ssh tripleo-admin@compute-0
Sample output
Sample output
Additional resources
Composable services and custom roles in the Customizing your Red Hat OpenStack Platform
deployment guide
Limitations
The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this
release.
All external ports are scheduled on a single gateway node because there is only one HA Chassis
Group for all of the ports.
North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV
because the external ports are not colocated with the logical router’s gateway ports. See
https://bugs.launchpad.net/neutron/+bug/1875852.
Additional resources
Composable services and custom roles in the Customizing your Red Hat OpenStack Platform
deployment guide.
CHAPTER 3. MANAGING PROJECT NETWORKS
For example, it is ideal that your management or API traffic is not on the same network as systems that
serve web traffic. Traffic between VLANs travels through a router where you can implement firewalls to
govern traffic flow.
You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and
IP address utilization for the various types of virtual networking resources in your deployment.
NOTE
The maximum number of VLANs in a single network, or in one OVS agent for a network
node, is 4094. In situations where you require more than the maximum number of VLANs,
you can create several provider networks (VXLAN networks) and several network nodes,
one per network. Each node can contain up to 4094 private networks.
NOTE
You do not require all of the isolated VLANs in this section for every OpenStack
deployment. For example, if your cloud users do not create ad hoc virtual networks on
demand, then you may not require a project network. If you want each VM to connect
directly to the same switch as any other physical system, connect your Compute nodes
directly to a provider network and configure your instances to use that provider network
directly.
Provisioning network - This VLAN is dedicated to deploying new nodes using director over
PXE boot. OpenStack Orchestration (heat) installs OpenStack onto the overcloud bare metal
servers. These servers attach to the physical network to receive the platform installation image
from the undercloud infrastructure.
Internal API network - The OpenStack services use the Internal API network for
communication, including API communication, RPC messages, and database communication. In
addition, this network is used for operational messages between controller nodes. When
planning your IP address allocation, note that each API service requires its own IP address.
Specifically, you must plan IP addresses for each of the following services:
vip-msg (amqp)
vip-keystone-int
vip-glance-int
vip-cinder-int
vip-nova-int
vip-neutron-int
vip-horizon-int
vip-heat-int
vip-ceilometer-int
vip-swift-int
vip-keystone-pub
vip-glance-pub
vip-cinder-pub
vip-nova-pub
vip-neutron-pub
vip-horizon-pub
vip-heat-pub
vip-ceilometer-pub
vip-swift-pub
Storage - Block Storage, NFS, iSCSI, and other storage services. Isolate this network to
separate physical Ethernet links for performance reasons.
Storage Management - OpenStack Object Storage (swift) uses this network to synchronize
data objects between participating replica nodes. The proxy service acts as the intermediary
interface between user requests and the underlying storage layer. The proxy receives incoming
requests and locates the necessary replica to retrieve the requested data. Services that use a
Ceph back end connect over the Storage Management network, since they do not interact with
Ceph directly but rather use the front end service. Note that the RBD driver is an exception; this
traffic connects directly to Ceph.
Project networks - Neutron provides each project with their own networks using either VLAN
segregation (where each project network is a network VLAN), or tunneling using VXLAN or
GRE. Network traffic is isolated within each project network. Each project network has an IP
subnet associated with it, and multiple project networks may use the same addresses.
External - The External network hosts the public API endpoints and connections to the
Dashboard (horizon). You can also use this network for SNAT. In a production deployment, it is
common to use a separate network for floating IP addresses and NAT.
Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate
physical NICs to specific functions. For example, allocate management and NFS traffic to
distinct physical NICs, sometimes with multiple NICs connecting across to different switches for
redundancy purposes.
Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each
network that controller nodes share.
Project networks - Each project network requires a subnet that it can use to allocate IP
addresses to instances.
Virtual routers - Each router interface plugging into a subnet requires one IP address. If you
want to use DHCP, each router interface requires two IP addresses.
Instances - Each instance requires an address from the project subnet that hosts the instance.
If you require ingress traffic, you must allocate a floating IP address to the instance from the
designated external network.
Management traffic - Includes OpenStack Services and API traffic. All services share a small
number of VIPs. API, RPC and database services communicate on the internal API VIP.
2. Select your virtual router name in the Routers list, and click Add Interface.
In the Subnet list, select the name of your new subnet. You can optionally specify an IP address
for the interface in this field.
When creating networks, it is important to know that networks can host multiple subnets. This is useful if
you intend to host distinctly different systems in the same network, and prefer a measure of isolation
between them. For example, you can designate that only webserver traffic is present on one subnet,
while database traffic traverses another. Subnets are isolated from each other, and any instance that
wants to communicate with another subnet must have its traffic directed by a router. Consider placing
systems that require a high volume of traffic amongst themselves in the same subnet, so that they do
not require routing, and can avoid the subsequent latency and load.
Field Description
3. Click the Next button, and specify the following values in the Subnet tab:
Field Description
Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the
distribution of IP settings to your instances.
IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how
to allocate IPv6 addresses and additional information:
No Options Specified - Select this option if you want to set IP addresses manually, or if
you use a non OpenStack-aware method for address allocation.
DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for
example, DNS) from the OpenStack Networking DHCPv6 service. Use this
configuration to create a subnet with ra_mode set to dhcpv6-stateful and
address_mode set to dhcpv6-stateful.
Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the
value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for
allocation.
DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP
distributes these addresses to the instances for name resolution.
IMPORTANT
For strategic services such as DNS, it is a best practice not to host them on
your cloud. For example, if your cloud hosts DNS and your cloud becomes
inoperable, DNS is unavailable and the cloud components cannot do lookups
on each other.
Host Routes - Static host routes. First, specify the destination network in CIDR format,
followed by the next hop that you want to use for routing (for example, 192.168.23.0/24,
10.1.31.1). Provide this value if you need to distribute static routes to instances.
5. Click Create.
You can view the complete network in the Networks tab. You can also click Edit to change any
options as needed. When you create instances, you can configure them to use this subnet,
and they receive any specified DHCP options.
You can create subnets only in pre-existing networks. Remember that project networks in OpenStack
Networking can host multiple subnets. This is useful if you intend to host distinctly different systems in
the same network, and prefer a measure of isolation between them.
For example, you can designate that only webserver traffic is present on one subnet, while database
traffic traverses another.
Subnets are isolated from each other, and any instance that wants to communicate with another subnet
must have its traffic directed by a router. Therefore, you can lessen network latency and load by
grouping systems in the same subnet that require a high volume of traffic between each other.
1. In the dashboard, select Project > Network > Networks, and click the name of your network in
the Networks view.
Field Description
Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the
distribution of IP settings to your instances.
IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how
to allocate IPv6 addresses and additional information:
No Options Specified - Select this option if you want to set IP addresses manually, or if
you use a non OpenStack-aware method for address allocation.
DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for
example, DNS) from the OpenStack Networking DHCPv6 service. Use this
configuration to create a subnet with ra_mode set to dhcpv6-stateful and
address_mode set to dhcpv6-stateful.
Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the
value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for
allocation.
DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP
distributes these addresses to the instances for name resolution.
Host Routes - Static host routes. First, specify the destination network in CIDR format,
followed by the next hop that you want to use for routing (for example, 192.168.23.0/24,
10.1.31.1). Provide this value if you need to distribute static routes to instances.
4. Click Create.
You can view the subnet in the Subnets list. You can also click Edit to change any options as
needed. When you create instances, you can configure them to use this subnet, and they
receive any specified DHCP options.
The default gateway of a router defines the next hop for any traffic received by the router. Its network is
typically configured to route traffic to the external physical network using a virtual bridge.
1. In the dashboard, select Project > Network > Routers, and click Create Router.
2. Enter a descriptive name for the new router, and click Create router.
3. Click Set Gateway next to the entry for the new router in the Routers list.
4. In the External Network list, specify the network that you want to receive traffic destined for an
external location.
IMPORTANT
The default routes for subnets must not be overwritten. When the default route for a
subnet is removed, the L3 agent automatically removes the corresponding route in the
router namespace too, and network traffic cannot flow to and from the associated
subnet. If the existing router namespace route has been removed, to fix this problem,
perform these steps:
For example, to purge the resources of the test-project project, and then delete the project, run the
following commands:
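The commands themselves are not preserved in this excerpt. A typical sequence with the standard OpenStack client, run with administrative credentials, resembles the following; by default, openstack project purge removes the supported resources that the project owns and then deletes the project:

# assumed command; use --keep-project if you only want to purge resources
$ openstack project purge --project test-project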
To remove its interfaces and delete a router, complete the following steps:
1. In the dashboard, select Project > Network > Routers, and click the name of the router that you
want to delete.
2. Select the interfaces of type Internal Interface, and click Delete Interfaces.
3. From the Routers list, select the target router and click Delete Routers.
To delete a network in your project, together with any dependent interfaces, complete the following
steps:
1. To remove an interface, find the ID number of the network that you want to delete by clicking on
your target network in the Networks list, and looking at the ID field. All the subnets associated
with the network share this value in the Network ID field.
2. Navigate to Project > Network > Routers, click the name of your virtual router in the Routers
list, and locate the interface attached to the subnet that you want to delete.
You can distinguish this subnet from the other subnets by the IP address that served as the
gateway IP. You can further validate the distinction by ensuring that the network ID of the
interface matches the ID that you noted in the previous step.
3. Click the Delete Interface button for the interface that you want to delete.
4. Select Project > Network > Networks, and click the name of your network.
5. Click the Delete Subnet button for the subnet that you want to delete.
NOTE
If you are still unable to remove the subnet at this point, ensure it is not already
being used by any instances.
6. Select Project > Network > Networks, and select the network you would like to delete.
CHAPTER 4. CONNECTING VM INSTANCES TO PHYSICAL NETWORKS
Neutron server - This service runs the OpenStack Networking API server, which provides the
API for end-users and services to interact with OpenStack Networking. This server also
integrates with the underlying database to store and retrieve project network, router, and
loadbalancer details, among others.
Neutron agents - These are the services that perform the network functions for OpenStack
Networking:
neutron-l3-agent - performs layer 3 routing between project private networks, the external
network, and others.
Compute node - This node hosts the hypervisor that runs the virtual machines, also known as
instances. A Compute node must be wired directly to the network in order to provide external
connectivity for instances. This node is typically where the L2 agents run, such as neutron-
openvswitch-agent.
Additional resources
Network node - The server that runs the OpenStack Networking agents.
The steps in this chapter apply to an environment that contains these three node types. If your
deployment has both the Controller and Network node roles on the same physical node, then you must
perform the steps from both sections on that server. This also applies for a High Availability (HA)
environment, where all three nodes might be running the Controller node and Network node services
with HA. As a result, you must complete the steps in sections applicable to Controller and Network nodes
on all three nodes.
Additional resources
Prerequisites
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file, which is a special type of template that provides
customization for your orchestration templates.
Example
parameter_defaults:
  NeutronBridgeMappings: 'physnet1:br-net1,physnet2:br-net2'
3. In the custom NIC configuration template for the Controller and Compute nodes, configure the
bridges with interfaces attached.
Example
...
- type: ovs_bridge
  name: br-net1
  mtu: 1500
  use_dhcp: false
  members:
  - type: interface
    name: eth0
    mtu: 1500
    use_dhcp: false
    primary: true
- type: ovs_bridge
  name: br-net2
  mtu: 1500
  use_dhcp: false
  members:
  - type: interface
    name: eth1
    mtu: 1500
    use_dhcp: false
    primary: true
...
4. Run the openstack overcloud deploy command and include the templates and the
environment files, including this modified custom NIC template and the new environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
Verification
1. Create an external network (public01) as a flat network and associate it with the configured
physical network (physnet1).
Configure it as a shared network (using --share) to let other users create VM instances that
connect to the external network directly.
Example
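A representative command follows. It is a sketch that assumes the physnet1 physical network configured earlier in this procedure, not a verbatim copy of the original example:

# creates a shared, external flat network mapped to physnet1
$ openstack network create --share --external \
  --provider-physical-network physnet1 \
  --provider-network-type flat public01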
Example
Example
$ openstack server create --image rhel --flavor my_flavor --network public01 my_instance
Additional resources
Environment files in the Installing and managing Red Hat OpenStack Platform with director
guide
Including environment files in overcloud creation in the Installing and managing Red Hat
OpenStack Platform with director guide
1. Packets leave the eth0 interface of the instance and arrive at the linux bridge qbr-xx.
2. Bridge qbr-xx is connected to br-int using veth pair qvb-xx <-> qvo-xxx. This is because the
bridge is used to apply the inbound/outbound firewall rules defined by the security group.
3. Interface qvb-xx is connected to the qbr-xx linux bridge, and qvo-xx is connected to the br-int
Open vSwitch (OVS) bridge.
# brctl show
qbr269d4d73-e7 8000.061943266ebb no qvb269d4d73-e7
tap269d4d73-e7
# ovs-vsctl show
Bridge br-int
    fail_mode: secure
        Interface "qvof63599ba-8f"
    Port "qvo269d4d73-e7"
        tag: 5
        Interface "qvo269d4d73-e7"
NOTE
Port qvo-xx is tagged with the internal VLAN tag associated with the flat provider
network. In this example, the VLAN tag is 5. When the packet reaches qvo-xx, the VLAN
tag is appended to the packet header.
The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <-> phy-br-ex.
# ovs-vsctl show
Bridge br-int
    fail_mode: secure
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
Bridge br-ex
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=int-br-ex}
    Port br-ex
        Interface br-ex
            type: internal
When this packet reaches phy-br-ex on br-ex, an OVS flow inside br-ex strips the VLAN tag (5) and
forwards it to the physical interface.
In the following example, the output shows the port number of phy-br-ex as 2.
2(phy-br-ex): addr:ba:b5:7b:ae:5c:a2
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
The following output shows a packet that arrives on phy-br-ex (in_port=2) with VLAN tag 5 (dl_vlan=5). An OVS flow in br-ex strips the VLAN tag and forwards the packet to the physical interface.
If the physical interface is another VLAN-tagged interface, then the physical interface adds the tag to
the packet.
3. The packet moves to br-int via the patch-peer phy-br-ex <--> int-br-ex.
In the following example, int-br-ex uses port number 15. See the entry containing 15(int-br-ex):
1. When the packet arrives at int-br-ex, an OVS flow rule within the br-int bridge amends the
packet to add the internal VLAN tag 5. See the entry for actions=mod_vlan_vid:5:
2. The second rule manages packets that arrive on int-br-ex (in_port=15) with no VLAN tag (vlan_tci=0x0000). This rule adds VLAN tag 5 to the packet (actions=mod_vlan_vid:5,NORMAL) and forwards it to qvo-xx.
3. qvo-xx accepts the packet and forwards it to qvb-xx after stripping away the VLAN tag.
NOTE
VLAN tag 5 is an example VLAN that was used on a test Compute node with a flat
provider network; this value was assigned automatically by neutron-openvswitch-agent.
This value may be different for your own flat provider network, and can differ for the
same network on two separate Compute nodes.
Additional resources
Procedure
1. Review bridge_mappings.
Verify that the physical network name you use is consistent with the contents of the
bridge_mapping configuration.
Example
In this example, the physical network name is physnet1.
Sample output
...
| provider:physical_network | physnet1
...
Example
In this example, the bridge_mapping configuration also uses physnet1:
Sample output
bridge_mappings = physnet1:br-ex
Example
In this example, details about the network provider-flat are queried:
Sample output
...
| provider:network_type | flat |
| router:external | True |
...
$ ovs-vsctl show
Sample output
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Configuration of the patch-peer on br-ex:
Sample output
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Re-check the bridge_mapping setting if the connection is not created after you restart the
service.
a. If this flow is not created after spawning the instance, verify that the network is created as
flat, is external, and that the physical_network name is correct. In addition, review the
bridge_mapping settings.
b. Finally, review the ifcfg-br-ex and ifcfg-ethX configuration. Ensure that ethX is added as a
port within br-ex, and that ifcfg-br-ex and ifcfg-ethX have an UP flag in the output of ip a.
Sample output
The following output shows eth1 is a port in br-ex:
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"
Example
The following example demonstrates that eth1 is configured as an OVS port, and that the kernel moves all packets from the interface to the OVS bridge br-ex. This is shown by the entry master ovs-system.
$ ip a
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
Additional resources
Section 4.4, “How does the flat provider network packet flow work?”
Prerequisites
Your Network nodes and Compute nodes are connected to a physical network using a physical
interface.
This example uses Network nodes and Compute nodes that are connected to a physical
network, physnet1, using a physical interface, eth1.
The switch ports that these interfaces connect to must be configured to trunk the required
VLAN ranges.
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file , which is a special type of template that provides
customization for your orchestration templates.
Example
parameter_defaults:
NeutronTypeDrivers: vxlan,flat,vlan
3. Configure the NeutronNetworkVLANRanges setting to reflect the physical network and VLAN
ranges in use:
Example
parameter_defaults:
NeutronTypeDrivers: 'vxlan,flat,vlan'
NeutronNetworkVLANRanges: 'physnet1:171:172'
4. Create an external network bridge (br-ex), and associate a port (eth1) with it.
This example configures eth1 to use br-ex:
Example
parameter_defaults:
NeutronTypeDrivers: 'vxlan,flat,vlan'
NeutronNetworkVLANRanges: 'physnet1:171:172'
NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-int'
5. Run the openstack overcloud deploy command and include the core templates and the
environment files, including this new environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
Verification
1. Create the external networks as type vlan, and associate them with the configured
physical_network.
Run the following example command to create two networks: one for VLAN 171, and another for
VLAN 172:
Example
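The original commands are not reproduced here. A minimal sketch, using the network names provider-171 and provider-172 that are referenced later in this chapter, might look like this:
$ openstack network create --external \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 171 provider-171
$ openstack network create --external \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 172 provider-172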
2. Create a number of subnets and configure them to use the external network.
You can use either openstack subnet create or the dashboard to create these subnets. Ensure
that the external subnet details you have received from your network administrator are correctly
associated with each VLAN.
In this example, VLAN 171 uses subnet 10.65.217.0/24 and VLAN 172 uses 10.65.218.0/24:
Example
--dhcp \
--gateway 10.65.218.254 \
subnet-provider-172
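Only the tail of the second subnet command survives above. For reference, a complete pair of commands might look like the following sketch; the gateway for the first subnet and the exact option layout are assumptions:
$ openstack subnet create \
  --network provider-171 \
  --subnet-range 10.65.217.0/24 \
  --dhcp \
  --gateway 10.65.217.254 \
  subnet-provider-171
$ openstack subnet create \
  --network provider-172 \
  --subnet-range 10.65.218.0/24 \
  --dhcp \
  --gateway 10.65.218.254 \
  subnet-provider-172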
Additional resources
Custom network interface templates in the Installing and managing Red Hat OpenStack
Platform with director guide
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
1. Packets leaving the eth0 interface of the instance arrive at the linux bridge qbr-xx connected to
the instance.
3. qvbxx is connected to the linux bridge qbr-xx and qvoxx is connected to the Open vSwitch
bridge br-int.
# brctl show
bridge name bridge id STP enabled interfaces
qbr84878b78-63 8000.e6b3df9451e0 no qvb84878b78-63
tap84878b78-63
options: {peer=phy-br-ex}
Port "qvo86257b61-5d"
tag: 3
Interface "qvo86257b61-5d"
Port "qvo84878b78-63"
tag: 2
Interface "qvo84878b78-63"
qvo-xx is tagged with the internal VLAN tag associated with the VLAN provider network. In this
example, the internal VLAN tag 2 is associated with the VLAN provider network provider-171,
and VLAN tag 3 is associated with the VLAN provider network provider-172. When the packet
reaches qvo-xx, this VLAN tag is added to the packet header.
The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <-> phy-br-ex.
Example patch-peer on br-int:
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
When this packet reaches phy-br-ex on br-ex, an OVS flow inside br-ex replaces the internal
VLAN tag with the actual VLAN tag associated with the VLAN provider network.
The output of the following command shows that the port number of phy-br-ex is 4:
The following output shows a packet that arrives on phy-br-ex (in_port=4) with VLAN tag 2 (dl_vlan=2). Open vSwitch replaces the VLAN tag with 171 (actions=mod_vlan_vid:171,NORMAL) and forwards the packet to the physical interface. The output also shows a packet that arrives on phy-br-ex (in_port=4) with VLAN tag 3 (dl_vlan=3). Open vSwitch replaces the VLAN tag with 172 (actions=mod_vlan_vid:172,NORMAL) and forwards the packet to the physical interface. The neutron-openvswitch-agent adds these rules.
Your VLAN provider network may require a different configuration. Also, the configuration requirement
for a network may differ between two different Compute nodes.
The output of the following command shows int-br-ex with port number 18:
The output of the following command shows the flow rules on br-int.
A packet with VLAN tag 172 from the external network reaches the br-ex bridge via eth1 on the
physical node.
The packet moves to br-int via the patch-peer phy-br-ex <-> int-br-ex.
The flow actions (actions=mod_vlan_vid:3,NORMAL) replace VLAN tag 172 with internal VLAN tag 3 and forward the packet to the instance with normal Layer 2 processing.
Additional resources
Section 4.4, “How does the flat provider network packet flow work?”
Procedure
1. Verify that the physical network name used in the bridge_mapping configuration matches the physical network name of the provider network.
Example
Sample output
...
| provider:physical_network | physnet1
...
Example
Sample output
In this sample output, the physical network name, physnet1, matches the name used in the
bridge_mapping configuration:
bridge_mappings = physnet1:br-ex
2. Confirm that the network was created as external, is type vlan, and uses the correct
segmentation_id value:
Example
Sample output
...
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id | 171 |
...
$ ovs-vsctl show
Recheck the bridge_mapping setting if the patch-peer connection is not created even after you restart the service.
a. To review the flow of outgoing packets, run ovs-ofctl dump-flows br-ex and ovs-ofctl
dump-flows br-int, and verify that the flows map the internal VLAN IDs to the external
VLAN ID (segmentation_id).
b. For incoming packets, map the external VLAN ID to the internal VLAN ID.
This flow is added by the neutron OVS agent when you spawn an instance to this network for
the first time.
c. If this flow is not created after spawning the instance, ensure that the network is created as
vlan, is external, and that the physical_network name is correct. In addition, re-check the
bridge_mapping settings.
Example
$ ovs-vsctl show
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"
Example
$ ip a
Sample output
In this sample output, eth1 has been added as a port, and the kernel is configured to
move all packets from the interface to the OVS bridge br-ex. This is demonstrated by the
entry, master ovs-system.
Additional resources
Section 4.7, “How does the VLAN provider network packet flow work?”
IMPORTANT
You should thoroughly test and understand any multicast snooping configuration before
applying it to a production environment. Misconfiguration can break multicasting or cause
erratic network behavior.
Prerequisites
An RHOSP Networking service security group rule must be in place to allow inbound IGMP traffic to the VM instances (or port security must be disabled on the ports).
In this example, a rule is created for the ping_ssh security group:
Example
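The original command is not shown here. A minimal sketch of a rule that allows inbound IGMP for the ping_ssh security group might look like this (the exact options used in the original example are an assumption):
$ openstack security group rule create --protocol igmp --ingress ping_ssh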
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-ovs-environment.yaml
TIP
The Orchestration service (heat) uses a set of plans called templates to install and configure
your environment. You can customize aspects of the overcloud with a custom environment file,
which is a special type of template that provides customization for your heat templates.
parameter_defaults:
NeutronEnableIgmpSnooping: true
...
IMPORTANT
Ensure that you add a whitespace character between the colon (:) and true.
3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.
Example
Verification
Example
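The original verification command is not reproduced here. One way to check, as a sketch run on a node where the br-int bridge exists, is to list the bridge record in OVS:
$ sudo ovs-vsctl list bridge br-int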
Sample output
...
mcast_snooping_enable: true
...
other_config: {mac-table-size="50000", mcast-snooping-disable-flood-unregistered=True}
...
Additional resources
Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
IMPORTANT
Prerequisites
Procedure
1. Configure security to allow multicast traffic to the appropriate VM instances. For instance,
create a pair of security group rules to allow IGMP traffic from the IGMP querier to enter and
exit the VM instances, and a third rule to allow multicast traffic.
Example
A security group mySG allows IGMP traffic to enter and exit the VM instances.
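The original commands are not shown here. A minimal sketch of rules for the mySG group, with the UDP rule for the multicast traffic itself as an assumption, might look like this:
$ openstack security group rule create --protocol igmp --ingress mySG
$ openstack security group rule create --protocol igmp --egress mySG
$ openstack security group rule create --protocol udp --ingress mySG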
As an alternative to setting security group rules, some operators choose to selectively disable
port security on the network. If you choose to disable port security, consider and plan for any
related security risks.
Example
parameter_defaults:
NeutronEnableIgmpSnooping: True
3. Include the environment file in the openstack overcloud deploy command with any other
environment files that are relevant to your environment and deploy the overcloud.
-e ovn-extras.yaml \
…
Replace <other_overcloud_environment_files> with the list of environment files that are part
of your existing deployment.
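Only a fragment of the deploy command is shown above. The full command typically has this shape (a sketch; file names other than ovn-extras.yaml are placeholders):
$ openstack overcloud deploy --templates \
  -e <other_overcloud_environment_files> \
  -e ovn-extras.yaml \
  ...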
Verification
1. Verify that the multicast snooping is enabled. List the northbound database Logical_Switch
table.
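A minimal way to list the table, assuming you have sourced the OVN command aliases described elsewhere in this guide, is:
$ ovn-nbctl list Logical_Switch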
Sample output
_uuid : d6a2fbcd-aaa4-4b9e-8274-184238d66a15
other_config : {mcast_flood_unregistered="false", mcast_snoop="true"}
...
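The command that produced the following output is not shown; the fields match the southbound IGMP_Group table, which you can list with a command like this (an assumption):
$ ovn-sbctl list IGMP_Group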
Sample output
_uuid : 2d6cae4c-bd82-4b31-9c63-2d17cbeadc4e
address : "225.0.0.120"
chassis : 34e25681-f73f-43ac-a3a4-7da2a710ecd3
datapath : eaf0f5cc-a2c8-4c30-8def-2bc1ec9dcabc
ports : [5eaf9dd5-eae5-4749-ac60-4c1451901c56, 8a69efc5-38c5-48fb-bbab-
30f2bf9b8d45]
...
Additional resources
Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
enable_isolated_metadata = True
NOTE
OpenStack Networking allocates floating IP addresses to all projects (tenants) from the
same IP ranges in CIDR format. As a result, all projects can consume floating IPs from
every floating IP subnet. You can manage this behavior using quotas for specific projects.
For example, you can set the default to 10 for ProjectA and ProjectB, while setting the
quota for ProjectC to 0.
Procedure
When you create an external subnet, you can also define the floating IP allocation pool.
If the subnet hosts only floating IP addresses, consider disabling DHCP allocation with the --no-
dhcp option in the openstack subnet create command.
Example
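The original command is not reproduced here. A sketch of creating a floating IP subnet with DHCP disabled and an explicit allocation pool, with all addresses chosen for illustration, might look like this:
$ openstack subnet create --no-dhcp \
  --allocation-pool start=192.0.2.190,end=192.0.2.199 \
  --network public --subnet-range 192.0.2.0/24 public_subnet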
Verification
You can verify that the pool is configured properly by assigning a random floating IP to an instance. See the link that follows.
Additional resources
Procedure
Allocate a floating IP address to an instance by using the openstack server add floating ip
command.
Example
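The original command is not shown here. Using the instance and address that appear in the sample output below, it would look like this:
$ openstack server add floating ip prod-serv1 192.0.2.200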
Validation steps
Confirm that your floating IP is associated with your instance by using the openstack server
show command.
Example
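Assuming the instance name from the sample output, the command is likely:
$ openstack server show prod-serv1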
Sample output
+-----------------------------+------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2021-08-11T14:45:37.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=198.51.100.56,192.0.2.200 |
| | |
| config_drive | |
| created | 2021-08-11T14:44:54Z |
| flavor | review-ephemeral |
| | (8130dd45-78f6-44dc-8173-4d6426b8e520) |
| hostId | 2308c8d8f60ed5394b1525122fb5bf8ea55c78b8 |
| | 0ec6157eca4488c9 |
| id | aef3ca09-887d-4d20-872d-1d1b49081958 |
| image | rhel8 |
| | (20724bfe-93a9-4341-a5a3-78b37b3a5dfb) |
| key_name | example-keypair |
| name | prod-serv1 |
| progress |0 |
| project_id | bd7a8c4a19424cf09a82627566b434fa |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2021-08-11T14:45:37Z |
| user_id | 4b7e19a0d723310fd92911eb2fe59743a3a5cd32 |
| | 45f76ffced91096196f646b5 |
| volumes_attached | |
+-----------------------------+------------------------------------------+
Additional resources
Procedure
1. In the dashboard, select Admin > Networks > Create Network > Project.
2. Select the project that you want to host the new network from the Project drop-down list.
Local - Traffic remains on the local Compute host and is effectively isolated from any
external networks.
Flat - Traffic remains on a single network and can also be shared with the host. No VLAN
tagging or other network segregation takes place.
VLAN - Create a network using a VLAN ID that corresponds to a VLAN present in the
physical network. This option allows instances to communicate with systems on the same
layer 2 VLAN.
GRE - Use a network overlay that spans multiple nodes for private communication between
instances. Traffic egressing the overlay must be routed.
VXLAN - Similar to GRE, and uses a network overlay to span multiple nodes for private
communication between instances. Traffic egressing the overlay must be routed.
Additional resources
Prerequisites
Procedure
1. Enter the following command to allocate a floating IP address from the pool. In this example, the
network is named public.
Example
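The original command is not shown here. For a pool network named public, it would likely be:
$ openstack floating ip create public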
Sample output
In the following example, the newly allocated floating IP is 192.0.2.200. You can assign it to an
instance.
+---------------------+--------------------------------------------------+
| Field | Value |
+---------------------+--------------------------------------------------+
| fixed_ip_address | None |
| floating_ip_address | 192.0.2.200 |
| floating_network_id | f0dcc603-f693-4258-a940-0a31fd4b80d9 |
| id | 6352284c-c5df-4792-b168-e6f6348e2620 |
| port_id | None |
| router_id | None |
| status | ACTIVE |
+---------------------+--------------------------------------------------+
Sample output
+-------------+-------------+--------+-------------+-------+-------------+
| ID | Name | Status | Networks | Image | Flavor |
+-------------+-------------+--------+-------------+-------+-------------+
| aef3ca09-88 | prod-serv1 | ACTIVE | public=198. | rhel8 | review- |
| 7d-4d20-872 | | | 51.100.56 | | ephemeral |
| d-1d1b49081 | | | | | |
| 958 | | | | | |
| | | | | | |
+-------------+-------------+--------+-------------+-------+-------------+
Example
Validation steps
Enter the following command to confirm that your floating IP is associated with your instance.
Example
Sample output
+-----------------------------+------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2021-08-11T14:45:37.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=198.51.100.56,192.0.2.200 |
| | |
| config_drive | |
| created | 2021-08-11T14:44:54Z |
| flavor | review-ephemeral |
| | (8130dd45-78f6-44dc-8173-4d6426b8e520) |
| hostId | 2308c8d8f60ed5394b1525122fb5bf8ea55c78b8 |
| | 0ec6157eca4488c9 |
| id | aef3ca09-887d-4d20-872d-1d1b49081958 |
| image | rhel8 |
| | (20724bfe-93a9-4341-a5a3-78b37b3a5dfb) |
| key_name | example-keypair |
| name | prod-serv1 |
| progress |0 |
| project_id | bd7a8c4a19424cf09a82627566b434fa |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2021-08-11T14:45:37Z |
| user_id | 4b7e19a0d723310fd92911eb2fe59743a3a5cd32 |
| | 45f76ffced91096196f646b5 |
| volumes_attached | |
+-----------------------------+------------------------------------------+
Additional resources
Procedure
Additional resources
Prerequisites
The port_forwarding service plug-in requires that you also set the router service plug-in.
Procedure
$ source ~/stackrc
parameter_defaults:
NeutronPluginExtensions: "router,port_forwarding"
NOTE
The port_forwarding service plug-in requires that you also set the router service
plug-in.
4. If you use the ML2/OVS mechanism driver with the Networking service, you must also set the
port_forwarding extension for the OVS L3 agent:
parameter_defaults:
NeutronPluginExtensions: "router,port_forwarding"
NeutronL3AgentExtensions: "port_forwarding"
5. Deploy your overcloud and include the core heat templates, environment files, and this new
custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
RHOSP users can now set up port forwarding for floating IPs. For more information, see
Section 5.7, “Creating port forwarding for a floating IP” .
Verification
Example
$ source ~/overcloudrc
2. Ensure that the Networking service has successfully loaded the port_forwarding and router
service plug-ins:
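One way to check, as a sketch, is to list the Networking API extensions and look for the floating IP port forwarding and router entries:
$ openstack extension list --network -c Name -c Alias | grep -i forwarding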
Sample output
A successful verification produces output similar to the following:
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
Prerequisites
The Networking service must be running with the port_forwarding service plug-in loaded.
For information, see Section 5.6, “Configuring floating IP port forwarding” .
Procedure
Example
$ source ~/overcloudrc
2. Use the following command to create port forwarding for a floating IP:
Replace <port> with the name or ID of the Networking service port to which the instance is
attached.
Replace <protocol> with the protocol, such as TCP or UDP, used by the application that
receives the port-forwarded traffic.
Replace <floating-ip> with the floating IP whose specified port traffic you want to forward.
Example
This example creates port forwarding for an instance that is attached to the floating IP
198.51.100.47. The floating IP uses the Networking service port 1adfdb09-e8c6-4708-
b5aa-11f50fc22d62. When the Networking service detects incoming, external traffic
addressed to 198.51.100.47:80, it forwards the traffic to the internal IP address,
203.0.113.107, on TCP port 8080:
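A command matching that description, as a sketch, would be:
$ openstack floating ip port forwarding create \
  --internal-ip-address 203.0.113.107 \
  --port 1adfdb09-e8c6-4708-b5aa-11f50fc22d62 \
  --internal-protocol-port 8080 \
  --external-protocol-port 80 \
  --protocol tcp \
  198.51.100.47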
Verification
Confirm that the Networking service has established forwarding for the floating IP port.
Example
The following example verifies successful port forwarding for the floating IP 198.51.100.47:
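A command for this check, as a sketch, is:
$ openstack floating ip port forwarding list 198.51.100.47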
Sample output
The output shows that traffic sent to the floating IP 198.51.100.47 on TCP port 80 is forwarded
to port 8080 on the instance with the internal address 203.0.113.107:
+----------+------------------+---------------------+---------------+---------------+----------+-------------
+
| ID | Internal Port ID | Internal IP Address | Internal Port | External Port | Protocol |
Description |
+----------+------------------+---------------------+---------------+---------------+----------+-------------
+
| 5cf204c7 | 1adfdb09-e8c6-47 | 203.0.113.107 | 8080 | 80 | tcp | |
| -6825-45 | 08-b5aa-11f50fc2 | | | | | |
| de-84ec- | 2d62 | | | | | |
| 2eb507be | | | | | | |
| 543e | | | | | | |
+----------+------------------+---------------------+---------------+---------------+----------+-------------
+
Additional resources
In this procedure, the example physical interface, eth0, is mapped to the bridge, br-ex; the virtual bridge
acts as the intermediary between the physical network and any virtual networks.
As a result, all traffic traversing eth0 uses the configured Open vSwitch to reach instances.
To map a physical NIC to the virtual Open vSwitch bridge, complete the following steps:
Procedure
IPADDR
NETMASK GATEWAY
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.120.10
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.120.1
ONBOOT=yes
You can now assign floating IP addresses to instances and make them available to the physical
network.
Additional resources
To add a router interface and connect the new interface to a subnet, complete these steps:
NOTE
This procedure uses the Network Topology feature. Using this feature, you can see a
graphical representation of all your virtual routers and networks while you perform
network management tasks.
2. Locate the router that you want to manage, hover your mouse over it, and click Add Interface.
You can remove an interface to a subnet if you no longer require the router to direct traffic for the
subnet.
2. Click the name of the router that hosts the interface that you want to delete.
3. Select the interface type (Internal Interface), and click Delete Interfaces.
NOTE
The ping command is an ICMP operation. To use ping, you must allow ICMP traffic to
traverse any intermediary firewalls.
Ping tests are most useful when run from the machine experiencing network issues, so it may be
necessary to connect to the command line via the VNC management console if the machine seems to
be completely offline.
For example, the following ping test command validates multiple layers of network infrastructure in
order to succeed; name resolution, IP routing, and network switching must all function correctly:
$ ping www.example.com
You can terminate the ping command with Ctrl-c, after which a summary of the results is presented.
Zero percent packet loss indicates that the connection was stable and did not time out.
The results of a ping test can be very revealing, depending on which destination you test. For example, in
the following diagram VM1 is experiencing some form of connectivity issue. The possible destinations
are numbered in blue, and the conclusions drawn from a successful or failed result are presented:
1. The internet - a common first step is to send a ping test to an internet location, such as
www.example.com.
Success: This test indicates that all the various network points in between the machine and
the Internet are functioning correctly. This includes the virtual and physical network
infrastructure.
Failure: There are various ways in which a ping test to a distant internet location can fail. If
other machines on your network are able to successfully ping the internet, that proves the
internet connection is working, and the issue is likely within the configuration of the local
machine.
2. Physical router - This is the router interface that the network administrator designates to direct
traffic onward to external destinations.
Success: Ping tests to the physical router can determine whether the local network and
underlying switches are functioning. These packets do not traverse the router, so they do
not prove whether there is a routing issue present on the default gateway.
Failure: This indicates that the problem lies between VM1 and the default gateway. The
router/switches might be down, or you may be using an incorrect default gateway. Compare
the configuration with that on another server that you know is functioning correctly. Try
pinging another server on the local network.
3. Neutron router - This is the virtual SDN (Software-defined Networking) router that Red Hat
OpenStack Platform uses to direct the traffic of virtual machines.
Failure: Confirm whether ICMP traffic is permitted in the security group of the instance.
Check that the Networking node is online, confirm that all the required services are running,
and review the L3 agent log (/var/log/neutron/l3-agent.log).
4. Physical switch - The physical switch manages traffic between nodes on the same physical
network.
Success: Traffic sent by a VM to the physical switch must pass through the virtual network
infrastructure, indicating that this segment is functioning correctly.
Failure: Check that the physical switch port is configured to trunk the required VLANs.
5. VM2 - Attempt to ping a VM on the same subnet, on the same Compute node.
Success: The NIC driver and basic IP configuration on VM1 are functional.
Failure: Validate the network configuration on VM1. Alternatively, the firewall on VM2 might be
blocking ping traffic. In addition, verify the virtual switching configuration and review the
Open vSwitch log files.
Procedure
1. To view all the ports that attach to the router named r1, run the following command:
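The original command is not shown here. One way to list the ports, as a sketch, is:
$ openstack port list --router r1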
Sample output
+--------------------------------------+------+-------------------+--------------------------------------------
------------------------------------------+
| id | name | mac_address | fixed_ips
|
+--------------------------------------+------+-------------------+--------------------------------------------
------------------------------------------+
| b58d26f0-cc03-43c1-ab23-ccdb1018252a | | fa:16:3e:94:a7:df | {"subnet_id": "a592fdba-
babd-48e0-96e8-2dd9117614d3", "ip_address": "192.168.200.1"} |
| c45e998d-98a1-4b23-bb41-5d24797a12a4 | | fa:16:3e:ee:6a:f7 | {"subnet_id": "43f8f625-
c773-4f18-a691-fd4ebfb3be54", "ip_address": "172.24.4.225"} |
+--------------------------------------+------+-------------------+--------------------------------------------
------------------------------------------+
2. To view the details of each port, run the following command. Include the port ID of the port that
you want to view. The result includes the port status, indicated in the following example as
having an ACTIVE state:
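A command for this step, as a sketch using the first port ID from the previous output, is:
$ openstack port show b58d26f0-cc03-43c1-ab23-ccdb1018252a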
Sample output
+-----------------------+--------------------------------------------------------------------------------------
+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------------------
+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | node.example.com |
| binding:profile | {} |
| binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} |
| binding:vif_type | ovs |
| binding:vnic_type | normal |
| device_id | 49c6ebdc-0e62-49ad-a9ca-58cea464472f |
| device_owner | network:router_interface |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "a592fdba-babd-48e0-96e8-2dd9117614d3", "ip_address":
"192.168.200.1"} |
| id | b58d26f0-cc03-43c1-ab23-ccdb1018252a |
| mac_address | fa:16:3e:94:a7:df |
| name | |
| network_id | 63c24160-47ac-4140-903d-8f9a670b0ca4
|
| security_groups | |
| status | ACTIVE |
| tenant_id | d588d1112e0f496fb6cac22f9be45d49 |
+-----------------------+--------------------------------------------------------------------------------------
+
Procedure
$ ping 192.168.120.254
a. Confirm that you have network flow for the associated VLAN.
It is possible that the VLAN ID has not been set. In this example, OpenStack Networking is
configured to trunk VLAN 120 to the provider network. (See --
provider:segmentation_id=120 in the example in step 1.)
b. Confirm the VLAN flow on the bridge interface using the command, ovs-ofctl dump-flows
<bridge-name>.
In this example the bridge is named br-ex:
Verify the registration and status of Red Hat OpenStack Platform (RHOSP) Networking service (neutron) agents.
Procedure
1. Use the openstack network agent list command to verify that the RHOSP Networking service
agents are up and registered with the correct host names.
5. Validate the OVS agent configuration file bridge mappings, to confirm that the bridge
mapped to phy-eno1 exists and is properly connected to eno1.
Prerequisites
RHOSP deployment, with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
1. Log in to the overcloud using your Red Hat OpenStack Platform credentials.
2. Run the openstack server list command to obtain the name of a VM instance.
3. Run the openstack server show command to determine the Compute node on which the
instance is running.
Example
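The original command is not shown here. A sketch that returns the fields in the sample output, assuming an instance named my_instance, is:
$ openstack server show my_instance -c OS-EXT-SRV-ATTR:host -c addresses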
Sample output
+----------------------+-------------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------------+
| OS-EXT-SRV-ATTR:host | compute0.ctlplane.example.com |
| addresses | finance-network1=192.0.2.2; provider- |
| | storage=198.51.100.13 |
+----------------------+-------------------------------------------------+
Example
$ ssh tripleo-admin@compute0.ctlplane
5. Run the ip netns list command to see the OVN metadata namespaces.
Sample output
ovnmeta-07384836-6ab1-4539-b23a-c581cf072011 (id: 1)
ovnmeta-df9c28ea-c93a-4a60-b913-1e611d6f15aa (id: 0)
6. Using the metadata namespace, run an ip netns exec command to ping the associated network.
Example
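The original command is not shown here. A sketch, assuming the first metadata namespace from the previous step corresponds to finance-network1, is:
# ip netns exec ovnmeta-07384836-6ab1-4539-b23a-c581cf072011 ping -c 4 192.0.2.2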
Sample output
Additional resources
Prerequisites
RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism
driver.
Procedure
1. Determine which network namespace contains the network, by listing all of the project networks
using the openstack network list command:
In this output, note the ID of the web-servers network (9cb32fe0-d7fb-432c-b116-f483c6497b08). The command appends the network ID to the network namespace, which enables you to identify the namespace in the next step.
Sample output
+--------------------------------------+-------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-------------+-------------------------------------------------------+
| 9cb32fe0-d7fb-432c-b116-f483c6497b08 | web-servers | 453d6769-fcde-4796-a205-
66ee01680bba 192.168.212.0/24 |
| a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81 | private | c1e58160-707f-44a7-bf94-
8694f29e74d3 10.0.0.0/24 |
2. List all the network namespaces using the ip netns list command:
# ip netns list
The output contains a namespace that matches the web-servers network ID.
Sample output
qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08
qrouter-31680a1c-9b3e-4906-bd69-cb39ed5faa01
qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b
qdhcp-a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81
qrouter-e9281608-52a6-4576-86a6-92955df46f56
3. Examine the configuration of the web-servers network by running commands within the
namespace, prefixing the troubleshooting commands with ip netns exec <namespace>.
In this example, the route -n command is used.
Example
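A sketch of the command, using the web-servers DHCP namespace from the previous step, is:
# ip netns exec qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08 route -n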
Sample output
Prerequisites
RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism
driver.
Procedure
Example
Example
Example
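The step text and commands are not reproduced here. A sketch of the kind of commands involved, with the qrouter namespace taken from the earlier namespace listing and the target address chosen for illustration, is:
# ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b tcpdump -n -v -i any
$ ping 192.168.200.20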
3. In the terminal running the tcpdump session, observe detailed results of the ping test.
Sample output
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
IP (tos 0xc0, ttl 64, id 55447, offset 0, flags [none], proto ICMP (1), length 88)
172.24.4.228 > 172.24.4.228: ICMP host 192.168.200.20 unreachable, length 68
IP (tos 0x0, ttl 64, id 22976, offset 0, flags [DF], proto UDP (17), length 60)
172.24.4.228.40278 > 192.168.200.21: [bad udp cksum 0xfa7b -> 0xe235!] UDP, length 32
NOTE
When you perform a tcpdump analysis of traffic, you see the responding packets heading
to the router interface rather than to the VM instance. This is expected behavior, as the
qrouter performs Destination Network Address Translation (DNAT) on the return
packets.
Prerequisites
Red Hat OpenStack Platform deployment with ML2/OVN as the default mechanism driver.
Procedure
1. Log in to the Controller host as a user that has the necessary privileges to access the OVN
containers.
Example
$ ssh tripleo-admin@controller-0.ctlplane
2. Create a shell script file that contains the ovn commands that you want to run.
Example
vi ~/bin/ovn-alias.sh
Example
In this example, the ovn-sbctl, ovn-nbctl, and ovn-trace commands have been added to an alias
file:
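The contents of the alias file are not reproduced here. A minimal sketch, assuming the OVN client tools are available in the ovn_controller container and that the database address is stored in the ovn-remote external ID, might look like this:
export SBDB=$(sudo ovs-vsctl get open . external_ids:ovn-remote | sed 's/"//g')
export NBDB=$(echo $SBDB | sed 's/6642/6641/')
alias ovn-sbctl="sudo podman exec ovn_controller ovn-sbctl --db=$SBDB"
alias ovn-nbctl="sudo podman exec ovn_controller ovn-nbctl --db=$NBDB"
alias ovn-trace="sudo podman exec ovn_controller ovn-trace --db=$SBDB"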
Validation
Example
# source ovn-alias.sh
Example
# ovn-nbctl show
Sample output
Additional resources
OVN uses logical flows that are tables of flows with a priority, match, and actions. These logical flows are
distributed to the ovn-controller running on each Red Hat Openstack Platform (RHOSP) Compute
node. Use the ovn-sbctl lflow-list command on the Controller node to view the full set of logical flows.
Prerequisites
RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
1. Log in to the Controller host as a user that has the necessary privileges to access the OVN
containers.
Example
$ ssh tripleo-admin@controller-0.ctlplane
Example
source ~/ovn-alias.sh
$ ovn-sbctl lflow-list
Sample output
OVN ports are logical entities that reside somewhere on a network, not physical ports on a
single switch.
OVN gives each table in the pipeline a name in addition to its number. The name describes
the purpose of that stage in the pipeline.
The actions supported in OVN logical flows extend beyond those of OpenFlow. You can
implement higher level features, such as DHCP, in the OVN logical flow syntax.
DATAPATH
The logical switch or logical router where the simulated packet starts.
MICROFLOW
The simulated packet, in the syntax used by the ovn-sb database.
Example
This example uses the --minimal output option on a simulated packet and shows that the packet reaches its destination:
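The original command is not shown here. A sketch that matches the sample output, with the switch and port names taken from that output, is:
$ ovn-trace --minimal sw0 'inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && eth.dst == 00:00:00:00:00:02'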
Sample output
#
reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type=0x
0000
output("sw0-port2");
Example
In more detail, the --summary output for this same simulated packet shows the full
execution pipeline:
Sample output
The sample output shows:
The packet enters the sw0 network from the sw0-port1 port and runs the ingress
pipeline.
The outport variable is set to sw0-port2 indicating that the intended destination for this
packet is sw0-port2.
The packet is output from the ingress pipeline, which brings it to the egress pipeline for
sw0 with the outport variable set to sw0-port2.
The output action is executed in the egress pipeline, which outputs the packet to the
current value of the outport variable, which is sw0-port2.
#
reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type
=0x0000
ingress(dp="sw0", inport="sw0-port1") {
outport = "sw0-port2";
output;
egress(dp="sw0", inport="sw0-port1", outport="sw0-port2") {
output;
/* output to "sw0-port2", type "" */;
};
};
Additional resources
Prerequisites
RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
1. Log in to the Controller host as a user that has the necessary privileges to access the OVN
containers.
Example
$ ssh tripleo-admin@controller-0.ctlplane
Example
Sample output
Additional resources
Prerequisites
RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
1. Log in to a Controller host as a user that has the necessary privileges to access the OVN
containers.
Monitoring from a server on a single Controller host provides the information you need to
verify basic cluster health and to diagnose many types of problems. For a very thorough
analysis, perform this procedure on all Controllers.
Example
$ ssh tripleo-admin@controller-0.ctlplane
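The command that produced the following cluster status output is not shown. A sketch, assuming the southbound database server container name listed later in this chapter and the control socket path that appears in the error output below, is:
$ sudo podman exec ovn_cluster_south_db_server ovs-appctl -t /var/lib/openvswitch/ovn/ovnsb_db.ctl cluster/status OVN_Southbound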
1114
Name: OVN_Southbound
Cluster ID: 017a (017add73-58f1-4fcd-ae35-bacc0f07ce57)
Server ID: 1114 (1114865d-4f42-443a-b758-d4431fc35748)
Address: tcp:[fd00:fd00:fd00:2000::4a]:6644
Role: candidate
...
Leader: unknown
2024-03-27T22:10:28Z|00001|unixctl|WARN|failed to connect to
/var/lib/openvswitch/ovn/ovnsb_db.ctl
ovs-appctl: cannot connect to "/var/lib/openvswitch/ovn/ovnsb_db.ctl" (Connection
refused)
In this case, you cannot get all the information you need from a single server. For example,
you cannot determine whether the other servers are running. If the server is down, run ovs-
appctl on another server.
Time since last message to leader from each follower (only updated on leader)
Servers:
1114 (1114 at tcp:[fd00:fd00:fd00:2000::4a]:6644) next_index=51737
match_index=51736 last msg 224 ms ago
ca6e (ca6e at tcp:[fd00:fd00:fd00:2000::18f]:6644) (self) next_index=51470
match_index=51736
0f90 (0f90 at tcp:[fd00:fd00:fd00:2000::2e0]:6644) next_index=51737
match_index=51736 last msg 224 ms ago
Log on to the cluster leader host and run ovs-appctl. Note that a new leader can be elected
at any time.
Additional resources
Prerequisites
New deployment of RHOSP, with ML2/OVN as the Networking service (neutron) default
mechanism driver.
Procedure
NETWORK_ID=\
$(openstack network create internal_network | awk '/\| id/ {print $4}')
2. Verify that the relevant containers are running on the Controller host:
a. Log in to the Controller host as a user that has the necessary privileges to access the OVN
containers.
Example
$ ssh tripleo-admin@controller-0.ctlplane
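The listing command is not shown here; a sketch is:
$ sudo podman ps --format "{{.Names}}" | grep -i ovn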
As shown in the following sample, the output should list the OVN containers:
Sample output
container-puppet-ovn_controller
ovn_cluster_north_db_server
ovn_cluster_south_db_server
ovn_cluster_northd
ovn_controller
3. Verify that the relevant containers are running on the Compute host:
a. Log in to the Compute host as a user that has the necessary privileges to access the OVN
containers.
Example
$ ssh tripleo-admin@compute-0.ctlplane
As shown in the following sample, the output should list the OVN containers:
Sample output
container-puppet-ovn_controller
ovn_metadata_agent
ovn_controller
Example
$ source ~/ovn-alias.sh
# ovn-nbctl show
# ovn-sbctl show
7. Attempt to ping an instance from an OVN metadata interface that is on the same layer 2
network.
For more information, see Section 6.5, “Performing basic ICMP testing within the ML2/OVN
namespace”.
8. If you need to contact Red Hat for support, perform the steps described in this Red Hat
Solution, How to collect all required logs for Red Hat Support to investigate an OpenStack
issue.
Additional resources
Prerequisites
Red Hat OpenStack Platform deployment with ML2/OVN as the default mechanism driver.
Procedure
1. Log in to the Controller or Compute node where you want to set the logging mode as a user
that has the necessary privileges to access the OVN containers.
Example
$ ssh tripleo-admin@controller-0.ctlplane
Verification
Confirm that the ovn-controller container log now contains debug messages:
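One way to check, as a sketch, is:
$ sudo podman logs ovn_controller 2>&1 | grep '|DBG|' | tail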
Sample output
You should see recent log messages that contain the string |DBG|:
2022-09-29T20:52:54.638Z|00170|vconn(ovn_pinctrl0)|DBG|unix:/var/run/openvswitch/br-
int.mgmt: received: OFPT_ECHO_REQUEST (OF1.5) (xid=0x0): 0 bytes of payload
2022-09-29T20:52:54.638Z|00171|vconn(ovn_pinctrl0)|DBG|unix:/var/run/openvswitch/br-
int.mgmt: sent (Success): OFPT_ECHO_REPLY (OF1.5) (xid=0x0): 0 bytes of payload
Confirm that the ovn-controller container log contains a string similar to the following:
Additional resources
NOTE
This error can occur on RHOSP 17.1 ML2/OVN deployments that were updated from
an earlier RHOSP version—RHOSP 16.1.7 and earlier or RHOSP 16.2.0.
Sample error
The error encountered is similar to the following:
Cause
If the ovn-controller process replaces the hostname, it registers another chassis entry which
includes another encap entry. For more information, see BZ#1948472.
Resolution
Follow these steps to resolve the problem:
1. If you have not already, create aliases for the necessary OVN database commands that you
will use later in this procedure.
For more information, see Creating aliases for OVN troubleshooting commands .
2. Log in to the Controller host as a user that has the necessary privileges to access the OVN
containers.
Example
$ ssh tripleo-admin@controller-0.ctlplane
6. Check the Chassis_Private table to confirm that chassis has been removed:
7. If any entries are reported, remove them with the following command:
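The commands for steps 6 and 7 are not shown. A sketch using generic OVN southbound database commands, with the record identifier as a placeholder, is:
# ovn-sbctl list Chassis_Private
# ovn-sbctl destroy Chassis_Private <record-uuid>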
tripleo_ovn_controller
tripleo_ovn_metadata_agent
Verification
Sample output
+------------------------------+-------+----------------------------+
| Agent Type | State | Binary |
+------------------------------+-------+----------------------------+
| OVN Controller Gateway agent | UP | ovn-controller |
| OVN Controller Gateway agent | UP | ovn-controller |
| OVN Controller agent | UP | ovn-controller |
First, you must decide which physical NICs on your Compute node you want to carry which types of
traffic. Then, when the NIC is cabled to a physical switch port, you must configure the switch port to allow
trunked or general traffic.
For example, the following diagram depicts a Compute node with two NICs, eth0 and eth1. Each NIC is
cabled to a Gigabit Ethernet port on a physical switch, with eth0 carrying instance traffic, and eth1
providing connectivity for OpenStack services:
NOTE
This diagram does not include any additional redundant NICs required for fault tolerance.
Additional resources
Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment
guide.
With OpenStack Networking you can connect instances to the VLANs that already exist on your physical
network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the
same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For
example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the
8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface GigabitEthernet1/0/12
description Trunk to Compute Node
spanning-tree portfast trunk
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 2,110,111
Field Description
interface GigabitEthernet1/0/12 The switch port that the NIC of the X node
connects to. Ensure that you replace the
GigabitEthernet1/0/12 value with the correct
port value for your environment. Use the show
interface command to view a list of ports.
description Trunk to Compute Node A unique and descriptive value that you can use
to identify this interface.
spanning-tree portfast trunk If your environment uses STP, set this value to
instruct Port Fast that this port is used to trunk
traffic.
switchport trunk encapsulation dot1q Enables the 802.1q trunking standard (rather
than ISL). This value varies depending on the
configuration that your switch supports.
switchport mode trunk Configures this port as a trunk port, rather than
an access port, meaning that it allows VLAN
traffic to pass through to the virtual switches.
Field Description
switchport trunk native vlan 2 Set a native VLAN to instruct the switch where
to send untagged (non-VLAN) traffic.
switchport trunk allowed vlan 2,110,111 Defines which VLANs are allowed through the
trunk.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface GigabitEthernet1/0/13
description Access port for Compute Node
switchport mode access
switchport access vlan 200
spanning-tree portfast
Field Description
interface GigabitEthernet1/0/13 The switch port that the NIC of the X node
connects to. Ensure that you replace the
GigabitEthernet1/0/13 value with the correct
port value for your environment. Use the show
interface command to view a list of ports.
description Access port for Compute A unique and descriptive value that you can use
Node to identify this interface.
Field Description
switchport access vlan 200 Configures the port to allow traffic on VLAN
200. You must configure your Compute node
with an IP address from this VLAN.
spanning-tree portfast If using STP, set this value to instruct STP not to
attempt to initialize this as a trunk, allowing for
quicker port handshakes during initial
connections (such as server reboot).
Additional resources
Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with
director guide.
Procedure
- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000
BondInterfaceOvsOptions:
"mode=802.3ad"
Additional resources
Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment
guide.
Procedure
1. Physically connect both NICs on the Compute node to the switch (for example, ports 12 and 13).
interface port-channel1
switchport access vlan 100
switchport mode access
spanning-tree guard root
sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.
interface GigabitEthernet1/0/13
switchport access vlan 100
switchport mode access
speed 1000
duplex full
channel-group 10 mode active
channel-protocol lacp
4. Review your new port channel. The resulting output lists the new port-channel Po1, with
member ports Gi1/0/12 and Gi1/0/13:
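The review command is not shown here. On a Catalyst switch, a typical way to review the channel is:
sw01# show etherchannel summary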
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
2. MTU settings are changed switch-wide on 3750 switches, and not for individual interfaces. Run
the following commands to configure the switch to use jumbo frames of 9000 bytes. You might
prefer to configure the MTU settings for individual interfaces, if your switch supports this
feature.
sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.
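The MTU command itself is not shown above. On a Catalyst 3750, the switch-wide jumbo setting is typically:
sw01(config)# system mtu jumbo 9000
sw01(config)# exit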
IMPORTANT
Reloading the switch causes a network outage for any devices that are
dependent on the switch. Therefore, reload the switch only during a scheduled
maintenance period.
sw01# reload
Proceed with reload? [confirm]
4. After the switch reloads, confirm the new jumbo MTU size.
The exact output may differ depending on your switch model. For example, System MTU might
apply to non-Gigabit interfaces, and Jumbo MTU might describe all Gigabit interfaces.
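A command to confirm the setting, as a sketch, is:
sw01# show system mtu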
Procedure
1. Run the lldp run command to enable LLDP globally on your Cisco Catalyst switch:
sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.
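The configuration-mode command that this step refers to is:
sw01(config)# lldp run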
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface Ethernet1/12
description Trunk to Compute Node
switchport mode trunk
switchport trunk allowed vlan 2,110,111
switchport trunk native vlan 2
end
Procedure
Using the example from the Figure 7.1, “Sample network layout” diagram, Ethernet1/13 (on a
Cisco Nexus switch) is configured as an access port for eth1. This configuration assumes that
your physical node has an ethernet cable connected to interface Ethernet1/13 on the physical
switch.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface Ethernet1/13
description Access port for Compute Node
switchport mode access
switchport access vlan 200
Additional resources
Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with
director guide.
Procedure
- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000
BondInterfaceOvsOptions:
"mode=802.3ad"
Additional resources
Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment
guide.
Procedure
1. Physically connect the Compute node NICs to the switch (for example, ports 12 and 13).
3. Configure ports 1/12 and 1/13 as access ports, and as members of a channel group.
Depending on your deployment, you can deploy trunk interfaces rather than access interfaces.
For example, for Cisco UCI the NICs are virtual interfaces, so you might prefer to configure
access ports exclusively. Often these interfaces contain VLAN tagging configurations.
interface Ethernet1/12
description Access port for Compute Node
switchport mode access
switchport access vlan 200
channel-group 10 mode active
interface Ethernet1/13
description Access port for Compute Node
switchport mode access
switchport access vlan 200
channel-group 10 mode active
NOTE
When you use PXE to provision nodes on Cisco switches, you might need to set the
options no lacp graceful-convergence and no lacp suspend-individual to bring up the
ports and boot the server. For more information, see your Cisco switch documentation.
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
Procedure
Run the following commands to configure interface 1/12 to use jumbo frames of 9000 bytes:
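For example, commands similar to the following set the interface MTU; adjust the interface name and value for your environment:
interface ethernet 1/12
  mtu 9000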
Procedure
You can enable LLDP for individual interfaces on Cisco Nexus 7000-series switches:
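For example, commands similar to the following enable LLDP transmit and receive on a single interface; verify the syntax for your NX-OS version:
interface ethernet 1/12
  lldp transmit
  lldp receive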
This configuration assumes that your physical node has transceivers connected to switch ports swp1 and
swp2 on the physical switch.
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
Use the following configuration syntax to allow traffic for VLANs 100 and 200 to pass through
to your instances.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports glob swp1-2
bridge-vids 100 200
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
Using the example from the Figure 7.1, “Sample network layout” diagram, swp1 (on a Cumulus
Linux switch) is configured as an access port.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports glob swp1-2
bridge-vids 100 200
auto swp1
iface swp1
bridge-access 100
auto swp2
iface swp2
bridge-access 200
Additional resources
Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with
director guide.
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
Procedure
auto swp1
iface swp1
mtu 9000
Procedure
To view all LLDP neighbors on all ports/interfaces, run the following command:
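For example, depending on your Cumulus Linux version, a command such as the following lists LLDP neighbors:
cumulus@switch$ sudo lldpcli show neighbors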
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
This configuration assumes that your physical node has an ethernet cable connected to
interface 24 on the physical switch. In this example, DATA and MNGT are the VLAN names.
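For example, commands similar to the following create the VLANs and add port 24 as a tagged (trunk) member; the VLAN tag values 110 and 111 are illustrative:
create vlan DATA
configure vlan DATA tag 110
configure vlan DATA add ports 24 tagged
create vlan MNGT
configure vlan MNGT tag 111
configure vlan MNGT add ports 24 tagged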
Some ports require access to only one VLAN to fulfill other operational requirements, such as transporting management traffic or Block Storage data. These ports are commonly known as access ports and usually require a simpler configuration than trunk ports.
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
For example:
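A command similar to the following adds a port as an untagged (access) member of the DATA VLAN; the port number 10 is illustrative:
configure vlan DATA add ports 10 untagged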
Additional resources
Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with
director guide.
Procedure
- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000
BondInterfaceOvsOptions:
"mode=802.3ad"
Additional resources
Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment
guide.
Procedure
In this example, the Compute node has two NICs using VLAN 100:
For example:
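A command similar to the following creates a LACP port group; the port numbers 11 and 12 are illustrative:
enable sharing 11 grouping 11,12 algorithm address-based L2 lacp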
NOTE
You might need to adjust the timeout period in the LACP negotiation script. For
more information, see
https://gtacknowledge.extremenetworks.com/articles/How_To/LACP-
configured-ports-interfere-with-PXE-DHCP-on-servers
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
Procedure
Run the commands in this example to enable jumbo frames on an Extreme Networks EXOS
switch and configure support for forwarding IP packets with 9000 bytes:
Example
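Commands similar to the following enable jumbo frames and set the IP MTU; the VLAN name DATA is illustrative:
enable jumbo-frame ports all
configure ip-mtu 9000 vlan DATA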
Procedure
In this example, LLDP is enabled on an Extreme Networks EXOS switch. 11 represents the port
string:
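enable lldp ports 11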
Procedure
If using a Juniper EX series switch running Juniper JunOS, use the following configuration
syntax to allow traffic for VLANs 110 and 111 to pass through to your instances.
This configuration assumes that your physical node has an ethernet cable connected to
interface ge-1/0/12 on the physical switch.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
ge-1/0/12 {
description Trunk to Compute Node;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members [110 111];
}
native-vlan-id 2;
}
}
}
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
This configuration assumes that your physical node has an ethernet cable connected to interface ge-
1/0/13 on the physical switch.
ge-1/0/13 {
description Access port for Compute Node
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members 200;
}
native-vlan-id 2;
}
}
}
Additional resources
Network Interface Bonding in the Installing and managing Red Hat OpenStack Platform with
director guide.
Procedure
- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000
BondInterfaceOvsOptions:
"mode=802.3ad"
Additional resources
Network Interface Bonding in the Customizing your Red Hat OpenStack Platform deployment
guide.
In this example, the Compute node has two NICs using VLAN 100.
Procedure
1. Physically connect the Compute node’s two NICs to the switch (for example, ports 12 and 13).
chassis {
aggregated-devices {
ethernet {
device-count 1;
}
}
}
3. Configure switch ports 12 (ge-1/0/12) and 13 (ge-1/0/13) to join the port aggregate ae1:
interfaces {
ge-1/0/12 {
gigether-options {
802.3ad ae1;
}
}
ge-1/0/13 {
gigether-options {
802.3ad ae1;
}
}
}
NOTE
For Red Hat OpenStack Platform director deployments, in order to PXE boot
from the bond, you must configure one of the bond members as lacp force-up
to ensure that only one bond member comes up during introspection and first
boot. The bond member that you configure with lacp force-up must be the same
bond member that has the MAC address in instackenv.json (the MAC address
known to ironic must be the same MAC address configured with force-up).
interfaces {
ae1 {
aggregated-ether-options {
lacp {
active;
}
}
}
}
interfaces {
ae1 {
vlan-tagging;
native-vlan-id 2;
unit 100 {
vlan-id 100;
}
}
}
6. Review your new port channel. The resulting output lists the new port aggregate ae1 with
member ports ge-1/0/12 and ge-1/0/13:
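For example, a command such as the following displays the aggregate and its member interfaces:
show lacp interfaces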
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
NOTE
The MTU value is calculated differently depending on whether you are using Juniper or
Cisco devices. For example, an MTU of 9216 on Juniper is equivalent to 9202 on Cisco. The
extra bytes are used for L2 headers: Cisco adds them automatically to the specified MTU
value, whereas on Juniper the usable MTU is 14 bytes smaller than the value that you specify.
Therefore, to support an MTU of 9000 on the VLANs, you must configure an MTU of 9014 on
Juniper.
Procedure
1. For Juniper EX series switches, MTU settings are set for individual interfaces. These commands
configure jumbo frames on the ge-1/0/14 and ge-1/0/15 ports:
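For example (the MTU value 9216 is illustrative; see the preceding note about Juniper MTU values):
set interfaces ge-1/0/14 mtu 9216
set interfaces ge-1/0/15 mtu 9216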
2. If using a LACP aggregate, you will need to set the MTU size there, and not on the member
NICs. For example, this setting configures the MTU size for the ae1 aggregate:
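For example (the MTU value 9216 is illustrative):
set interfaces ae1 mtu 9216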
Procedure
Use the following to enable LLDP globally on your Juniper EX 4200 switch:
lldp {
    interface all {
        enable;
    }
}
Use the following to enable LLDP for the single interface ge-1/0/14:
lldp {
    interface ge-1/0/14 {
        enable;
    }
}
CHAPTER 8. CONFIGURING MAXIMUM TRANSMISSION UNIT (MTU) SETTINGS
NOTE
You can use the openstack network show <network_name> command to view the
largest possible MTU values that OpenStack Networking calculates. net-mtu is a neutron
API extension that is not present in some implementations. The MTU value that you
require can be advertised to DHCPv4 clients for automatic configuration, if supported by
the instance, as well as to IPv6 clients through Router Advertisement (RA) packets. To
send Router Advertisements, the network must be attached to a router.
You must configure MTU settings consistently from end-to-end. This means that the MTU setting must
be the same at every point the packet passes through, including the VM, the virtual network
infrastructure, the physical network, and the destination server.
For example, the circles in the following diagram indicate the various points where an MTU value must
be adjusted for traffic between an instance and a physical server. You must change the MTU value for
every interface that handles network traffic to accommodate packets of a particular MTU size. This is
necessary if traffic travels from the instance 192.168.200.15 through to the physical server 10.20.15.25:
Inconsistent MTU values can result in several network issues, the most common being random packet
loss that results in connection drops and slow network performance. Such issues are problematic to
troubleshoot because you must identify and examine every possible network point to ensure it has the
correct MTU value.
-
type: ovs_bridge
name: br-isolated
use_dhcp: false
mtu: 9000 # <--- Set MTU
members:
-
type: ovs_bond
name: bond1
mtu: 9000 # <--- Set MTU
ovs_options: {get_param: BondInterfaceOvsOptions}
members:
-
type: interface
name: ens15f0
mtu: 9000 # <--- Set MTU
primary: true
-
type: interface
name: enp131s0f0
mtu: 9000 # <--- Set MTU
-
type: vlan
device: bond1
vlan_id: {get_param: InternalApiNetworkVlanID}
mtu: 9000 # <--- Set MTU
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
type: vlan
device: bond1
mtu: 9000 # <--- Set MTU
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
CHAPTER 9. USING QUALITY OF SERVICE (QOS) POLICIES TO MANAGE DATA TRAFFIC
You can apply QoS policies to individual ports, or apply QoS policies to a project network, where ports
with no specific policy attached inherit the policy.
NOTE
Internal network owned ports, such as DHCP and internal router ports, are excluded from
network policy application.
You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum
bandwidth QoS policies, you can only apply modifications when there are no instances that use any of
the ports the policy is assigned to.
QoS policies can be enforced in various contexts, including virtual machine instance placements, floating
IP assignments, and gateway IP assignments.
Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress
traffic (upload from instance), ingress traffic (download to instance), or both.
NOTE
Starting with Red Hat OpenStack Platform (RHOSP) 17.1, in ML2/OVN deployments, you
can enable minimum bandwidth and bandwidth limit egress policies for hardware
offloaded ports. You cannot enable ingress policies for hardware offloaded ports. For
more information, see Section 9.2, “Configuring the Networking service for QoS policies” .
Table 9.1. Supported traffic direction by driver (all QoS rule types)
Rule                ML2/OVS                      ML2/SR-IOV         ML2/OVN
Bandwidth limit     Egress [1][2] and ingress    Egress only [3]    Egress and ingress
[1] The OVS egress bandwidth limit is performed in the TAP interface and is traffic policing, not traffic
shaping.
[2] In RHOSP 16.2.2 and later, the OVS egress bandwidth limit is supported in hardware offloaded ports
by applying the QoS policy in the network interface using ip link commands.
[3] The mechanism drivers ignore the max-burst-kbits parameter because they do not support it.
[5] The OVS egress minimum bandwidth is supported in hardware offloaded ports by applying the QoS
policy in the network interface using ip link commands.
[6] https://bugzilla.redhat.com/show_bug.cgi?id=2060310
Table 9.2. Supported traffic direction by driver for placement reporting and scheduling (minimum
bandwidth only)
Table 9.3. Supported traffic direction by driver for enforcement types (bandwidth limit only)
ML2/OVS ML2/OVN
Additional resources
Creating and applying a guaranteed minimum bandwidth QoS policy and rule
Creating and applying a DSCP marking QoS policy and rule for egress traffic
When using the qos service plug-in with the ML2/OVS and ML2/SR-IOV mechanism drivers, you must
also load the qos extension on their respective agents.
The following list summarizes the tasks that you must perform to configure the Networking service for
QoS. The task details follow this list:
Add qos extension for the agents (OVS and SR-IOV only).
In ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress
policies for hardware offloaded ports. You cannot enable ingress policies for hardware
offloaded ports.
Additional tasks for scheduling VM instances using minimum bandwidth policies only:
Specify the hypervisor name if it differs from the name that the Compute service (nova)
uses.
Configure the resource provider ingress and egress bandwidths for the relevant agents on
each Compute node.
Additional task for DSCP marking policies on systems that use ML2/OVS with tunneling only:
Prerequisites
Access to the undercloud host and credentials for the stack user.
Procedure
$ source ~/stackrc
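For example, you can check whether the qos service plug-in is already loaded by listing the existing QoS policies:
$ openstack network qos policy list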
If the qos service plug-in is not loaded, then you receive a ResourceNotFound error. If you do
not receive the error, then the plug-in is loaded and you do not need to perform the steps in this
topic.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
5. Your environment file must contain the keywords parameter_defaults. On a new line below
parameter_defaults add qos to the NeutronServicePlugins parameter:
parameter_defaults:
NeutronServicePlugins: "qos"
6. If you use ML2/OVS and ML2/SR-IOV mechanism drivers, then you must also load the qos
extension on the agent, by using either the NeutronAgentExtensions or the
NeutronSriovAgentExtensions variable, respectively:
ML2/OVS
parameter_defaults:
NeutronServicePlugins: "qos"
NeutronAgentExtensions: "qos"
ML2/SR-IOV
parameter_defaults:
NeutronServicePlugins: "qos"
NeutronSriovAgentExtensions: "qos"
7. In ML2/OVN deployments, you can enable egress minimum and maximum bandwidth policies
for hardware offloaded ports. To do this, set the OvnHardwareOffloadedQos parameter to
true:
parameter_defaults:
NeutronServicePlugins: "qos"
OvnHardwareOffloadedQos: true
8. If you want to schedule VM instances by using minimum bandwidth QoS policies, then you must
also do the following:
a. Add placement to the list of plug-ins and ensure the list also includes qos:
parameter_defaults:
NeutronServicePlugins: "qos,placement"
b. If the hypervisor name matches the canonical hypervisor name used by the Compute
service (nova), skip to step 8.c.
If the hypervisor name does not match the canonical hypervisor name used by the Compute
service, specify the alternative hypervisor name, using
resource_provider_default_hypervisor:
ML2/OVS
parameter_defaults:
NeutronServicePlugins: "qos,placement"
ExtraConfig:
Neutron::agents::ml2::ovs::resource_provider_default_hypervisor: %{hiera('fqdn_canonical')}
ML2/SR-IOV
parameter_defaults:
NeutronServicePlugins: "qos,placement"
ExtraConfig:
Neutron::agents::ml2::sriov::resource_provider_default_hypervisor: %{hiera('fqdn_canonical')}
IMPORTANT
ML2/OVS
parameter_defaults:
ExtraConfig:
Neutron::agents::ml2::ovs::resource_provider_hypervisors: "ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}"
ML2/SR-IOV
parameter_defaults:
ExtraConfig:
Neutron::agents::ml2::sriov::resource_provider_hypervisors: "ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}"
c. Configure the resource provider ingress and egress bandwidths for the relevant agents on
each Compute node that needs to provide a minimum bandwidth.
You can configure egress, ingress, or both, using the following formats:
NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:,<bridge1>:<egress_kbps>:,...,<bridgeN>:<egress_kbps>:
NeutronOvsResourceProviderBandwidths: <bridge0>::<ingress_kbps>,<bridge1>::<ingress_kbps>,...,<bridgeN>::<ingress_kbps>
NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:<ingress_kbps>,<bridge1>:<egress_kbps>:<ingress_kbps>,...,<bridgeN>:<egress_kbps>:<ingress_kbps>
parameter_defaults:
...
NeutronBridgeMappings: physnet0:br-physnet0
NeutronOvsResourceProviderBandwidths: br-physnet0:10000000:10000000
parameter_defaults:
...
NeutronML2PhysicalNetworkMtus: physnet0:1500,physnet1:1500
NeutronSriovResourceProviderBandwidths:
ens5:40000000:40000000,ens6:40000000:40000000
d. Optional: To mark vnic_types as not supported when multiple ML2 mechanism drivers
support them by default and multiple agents are being tracked in the Placement service,
also add the following configuration to your environment file:
parameter_defaults:
...
NeutronOvsVnicTypeBlacklist: direct
parameter_defaults:
...
NeutronSriovVnicTypeBlacklist: direct
9. If you want to create DSCP marking policies and use ML2/OVS with a tunneling protocol
(VXLAN or GRE), then under NeutronAgentExtensions, add the following lines:
parameter_defaults:
...
ControllerExtraConfig:
neutron::config::server_config:
agent/dscp_inherit:
value: true
When dscp_inherit is true, the Networking service copies the DSCP value of the inner header
to the outer header.
10. Run the deployment command and include the core heat templates, other environment files,
and this new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
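A deployment command might look like the following; the list of environment files is illustrative and must match your deployment:
$ openstack overcloud deploy --templates \
  -e <other_environment_files> \
  -e /home/stack/templates/my-neutron-environment.yaml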
Verification
If the qos service plug-in is loaded, then you do not receive a ResourceNotFound error.
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
Section 9.3.1, “Using Networking service back-end enforcement to enforce minimum bandwidth”
Section 9.5, “Prioritizing network traffic by using DSCP marking QoS policies”
The network back end, ML2/OVS or ML2/SR-IOV, attempts to guarantee that each port on which the
rule is applied has no less than the specified network bandwidth.
When you use resource allocation scheduling bandwidth enforcement, the Compute service (nova) only
places VM instances on hosts that support the minimum bandwidth.
You can apply QoS minimum bandwidth rules using Networking service back-end enforcement,
resource allocation scheduling enforcement, or both.
The following table identifies the Modular Layer 2 (ML2) mechanism drivers that support minimum
bandwidth QoS policies.
Table 9.4. ML2 mechanism drivers that support minimum bandwidth QoS
Additional resources
Section 9.3.1, “Using Networking service back-end enforcement to enforce minimum bandwidth”
NOTE
Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver
(ML2/OVN) does not support minimum bandwidth QoS rules.
Prerequisites
The RHOSP Networking service (neutron) must have the qos service plug-in loaded. (This is
the default.)
Do not mix ports with and without bandwidth guarantees on the same physical interface,
because this might cause denial of necessary resources (starvation) to the ports without a
guarantee.
TIP
Create host aggregates to separate ports with bandwidth guarantees from those ports without
bandwidth guarantees.
Procedure
Example
$ source ~/overcloudrc
2. Confirm that the qos service plug-in is loaded in the Networking service:
If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you
must load the qos services plug-in before you can continue. For more information, see
Section 9.2, “Configuring the Networking service for QoS policies” .
3. Identify the ID of the project you want to create the QoS policy for:
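For example:
$ openstack project list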
Sample output
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+
4. Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:
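For example, using the admin project ID from the sample output:
$ openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw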
Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps
are created for the policy named guaranteed_min_bw:
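$ openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --egress guaranteed_min_bw
$ openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --ingress guaranteed_min_bw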
Example
Verification
ML2/SR-IOV
Using root access, log in to the Compute node, and show the details of the virtual functions that
are held in the physical function.
Example
Sample output
ML2/OVS
Using root access, log in to the compute node, show the tc rules and classes on the physical
bridge interface.
Example
Sample output
class htb 1:11 parent 1:fffe prio 0 rate 4Gbit ceil 34359Mbit burst 9000b cburst 8589b
class htb 1:1 parent 1:fffe prio 0 rate 72Kbit ceil 34359Mbit burst 9063b cburst 8589b
class htb 1:fffe root rate 34359Mbit ceil 34359Mbit burst 8589b cburst 8589b
Additional resources
Prerequisites
The RHOSP Networking service (neutron) must have the qos and placement service plug-ins
loaded. The qos service plug-in is loaded by default.
agent-resources-synced
port-resource-request
qos-bw-minimum-ingress
You can only modify a minimum bandwidth QoS policy when there are no instances using any of
the ports the policy is assigned to. The Networking service cannot update the Placement API
usage information if a port is bound.
Procedure
Example
$ source ~/overcloudrc
2. Confirm that the qos service plug-in is loaded in the Networking service:
If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you
must load the qos services plug-in before you can continue. For more information, see
Section 9.2, “Configuring the Networking service for QoS policies” .
3. Identify the ID of the project you want to create the QoS policy for:
Sample output
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+
4. Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:
Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps
are created for the policy named guaranteed_min_bw:
Example
In this example, the guaranteed_min_bw policy is applied to port ID, 56x9aiw1-2v74-144x-
c2q8-ed8w423a6s12:
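$ openstack port set --qos-policy guaranteed_min_bw 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12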
Verification
$ source ~/stackrc
Sample output
+--------------------------------------+-----------------------------------------------------+------------+----
----------------------------------+--------------------------------------+
| uuid | name | generation |
root_provider_uuid | parent_provider_uuid |
+--------------------------------------+-----------------------------------------------------+------------+----
----------------------------------+--------------------------------------+
| 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | dell-r730-014.localdomain |
28 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | None |
| 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | dell-r730-063.localdomain |
18 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | None |
| e2f5082a-c965-55db-acb3-8daf9857c721 | dell-r730-063.localdomain:NIC Switch agent
| 0 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | 6b15ddce-13cf-4c85-a58f-
baec5b57ab52 |
| d2fb0ef4-2f45-53a8-88be-113b3e64ba1b | dell-r730-014.localdomain:NIC Switch agent
| 0 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | 31d3d88b-bc3a-41cd-9dc0-
fda54028a882 |
| f1ca35e2-47ad-53a0-9058-390ade93b73e | dell-r730-063.localdomain:NIC Switch
agent:enp6s0f1 | 13 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | e2f5082a-c965-55db-
acb3-8daf9857c721 |
| e518d381-d590-5767-8f34-c20def34b252 | dell-r730-014.localdomain:NIC Switch
agent:enp6s0f1 | 19 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | d2fb0ef4-2f45-53a8-
88be-113b3e64ba1b |
+--------------------------------------+-----------------------------------------------------+------------+----
----------------------------------+--------------------------------------+
Example
In this example, the bandwidth provided by interface enp6s0f1 on the host dell-r730-014 is
checked, using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252:
Sample output
+----------------------------+------------------+----------+------------+----------+-----------+----------+
| resource_class | allocation_ratio | min_unit | max_unit | reserved | step_size |
total |
+----------------------------+------------------+----------+------------+----------+-----------+----------+
5. To check claims against the resource provider when instances are running, run the following
command:
Example
In this example, claims against the resource provider are checked on the host, dell-r730-014,
using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252:
Sample output
{3cbb9e07-90a8-4154-8acd-b6ec2f894a83: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, 8848b88b-4464-443f-bf33-5d4e49fd6204: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, 9a29e946-698b-4731-bc28-89368073be1a: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, a6c83b86-9139-4e98-9341-dc76065136cc: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC:
3000000}}, da60e33f-156e-47be-a632-870172ec5483: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, eb582a0e-8274-4f21-9890-9a0d55114663: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC:
3000000}}}
Additional resources
Prerequisites
The Networking service must have the qos service plug-in loaded. (The plug-in is loaded by
default.)
Procedure
Example
$ source ~/overcloudrc
2. Confirm that the qos service plug-in is loaded in the Networking service:
If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you
must load the qos services plug-in before you can continue. For more information, see
Section 9.2, “Configuring the Networking service for QoS policies” .
3. Identify the ID of the project you want to create the QoS policy for:
Sample output
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+
4. Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named bw-limiter is created for the admin project:
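For example, using the admin project ID from the sample output:
$ openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 bw-limiter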
NOTE
You can add more than one rule to a policy, as long as the type or direction of
each rule is different. For example, you can specify two bandwidth-limit rules,
one with egress and one with ingress direction.
Example
In this example, QoS ingress and egress rules are created for the policy named bw-limiter with a
bandwidth limit of 50000 kbps and a maximum burst size of 50000 kbps:
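$ openstack network qos rule create --type bandwidth-limit --max-kbps 50000 --max-burst-kbits 50000 --egress bw-limiter
$ openstack network qos rule create --type bandwidth-limit --max-kbps 50000 --max-burst-kbits 50000 --ingress bw-limiter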
6. You can create a port with a policy attached to it, or attach a policy to a pre-existing port.
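For example, the following command creates a port named port2 on the network shown in the sample output, with the bw-limiter policy attached:
$ openstack port create --qos-policy bw-limiter --network 55dc2f70-0f92-4002-b343-ca34277b0234 port2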
Sample output
+-----------------------+--------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | |
| binding_profile | |
| binding_vif_details | |
| binding_vif_type | unbound |
| binding_vnic_type | normal |
| created_at | 2022-07-04T19:20:24Z |
| data_plane_status | None |
| description | |
| device_id | |
| device_owner | |
| dns_assignment | None |
| dns_name | None |
| extra_dhcp_opts | |
| fixed_ips | ip_address='192.0.2.210', subnet_id='292f8c-...' |
| id | f51562ee-da8d-42de-9578-f6f5cb248226 |
| ip_address | None |
| mac_address | fa:16:3e:d9:f2:ba |
| name | port2 |
| network_id | 55dc2f70-0f92-4002-b343-ca34277b0234 |
| option_name | None |
| option_value | None |
| port_security_enabled | False |
| project_id | 98a2f53c20ce4d50a40dac4a38016c69 |
| qos_policy_id | 8491547e-add1-4c6c-a50e-42121237256c |
| revision_number |6 |
| security_group_ids | 0531cc1a-19d1-4cc7-ada5-49f8b08245be |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| trunk_details | None |
| updated_at | 2022-07-04T19:23:00Z |
+-----------------------+--------------------------------------------------+
Verification
Example
In this example, the QoS policy, bw-limiter is queried:
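$ openstack network qos policy show bw-limiter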
Sample output
+-------------------+-------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------+
| description | |
| id | 8491547e-add1-4c6c-a50e-42121237256c |
| is_default | False |
| name | bw-limiter |
| project_id | 98a2f53c20ce4d50a40dac4a38016c69 |
| revision_number | 4 |
| rules | [{u'max_kbps': 50000, u'direction': u'egress', |
| | u'type': u'bandwidth_limit', |
| | u'id': u'0db48906-a762-4d32-8694-3f65214c34a6', |
| | u'max_burst_kbps': 50000, |
| | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'}, |
| | [{u'max_kbps': 50000, u'direction': u'ingress', |
| | u'type': u'bandwidth_limit', |
| | u'id': u'faabef24-e23a-4fdf-8e92-f8cb66998834', |
| | u'max_burst_kbps': 50000, |
| | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'}] |
| shared | False |
+-------------------+-------------------------------------------------------------------+
Query the port, and confirm that its policy ID matches the one obtained in the previous step.
Example
In this example, port1 is queried:
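$ openstack port show port1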
Sample output
+-------------------------+--------------------------------------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------------------------------------+
| admin_state_up | UP |
Additional resources
Prerequisites
The Networking service must have the qos service plug-in loaded. (This is the default.)
Procedure
Example
$ source ~/overcloudrc
2. Confirm that the qos service plug-in is loaded in the Networking service:
If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you
must configure the Networking service before you can continue. For more information, see
Section 9.2, “Configuring the Networking service for QoS policies” .
3. Identify the ID of the project you want to create the QoS policy for:
Sample output
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+
4. Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named qos-web-servers is created for the admin project:
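For example, using the admin project ID from the sample output:
$ openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 qos-web-servers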
Example
In this example, a DSCP rule is created using DSCP mark 18 and is applied to the qos-web-
servers policy:
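$ openstack network qos rule create --type dscp-marking --dscp-mark 18 qos-web-servers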
Sample output
Example
In this example, the DSCP mark value is changed to 22 for the rule, d7f976ec-7fab-4e60-af70-
f59bf88198e6, in the qos-web-servers policy:
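$ openstack network qos rule set --dscp-mark 22 qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6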
Example
In this example, the DSCP rule, d7f976ec-7fab-4e60-af70-f59bf88198e6, in the qos-web-
servers policy is deleted:
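$ openstack network qos rule delete qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6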
Verification
Example
In this example, the DSCP rule, d7f976ec-7fab-4e60-af70-f59bf88198e6 is applied to the QoS
policy, qos-web-servers:
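$ openstack network qos rule list qos-web-servers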
Sample output
+-----------+--------------------------------------+
| dscp_mark | id |
+-----------+--------------------------------------+
| 18 | d7f976ec-7fab-4e60-af70-f59bf88198e6 |
+-----------+--------------------------------------+
Additional resources
Prerequisites
Procedure
Create an RHOSP Networking service RBAC policy associated with a specific QoS policy, and
assign it to a specific project:
Example
For example, you might have a QoS policy that allows for lower-priority network traffic, named
bw-limiter. Using a RHOSP Networking service RBAC policy, you can apply the QoS policy to a
specific project:
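For example, a command of the following form shares the bw-limiter policy with a target project; replace <target_project_id> with the ID of that project:
$ openstack network rbac create --type qos_policy --target-project <target_project_id> --action access_as_shared bw-limiter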
Additional resources
Section 9.3.1, “Using Networking service back-end enforcement to enforce minimum bandwidth”
Section 9.5, “Prioritizing network traffic by using DSCP marking QoS policies”
CHAPTER 10. CONFIGURING BRIDGE MAPPINGS
The next part of the data path varies depending on which mechanism driver your deployment uses:
ML2/OVS: a patch port between br-int and br-ex allows the traffic to pass through the bridge
of the provider network and out to the physical network.
ML2/OVN: the Networking service creates a patch port on a hypervisor only when there is a VM
bound to the hypervisor and the VM requires the port.
You configure the bridge mapping on the network node on which the router is scheduled. Router traffic
can egress using the correct physical network, as represented by the provider network.
NOTE
The Networking service supports only one bridge for each physical network. Do not map
more than one physical network to the same bridge.
The return packet from the external network arrives on br-ex and moves to br-int using phy-br-ex <->
int-br-ex. When the packet is going through br-ex to br-int, the packet’s external VLAN ID is replaced by
an internal VLAN tag in br-int, and this allows qg-xxx to accept the packet.
In the case of egress packets, the packet’s internal VLAN tag is replaced with an external VLAN tag in
br-ex (or in the external bridge that is defined in the NeutronNetworkVLANRanges parameter).
Prerequisites
You must be able to access the undercloud host as the stack user.
You must configure bridge mappings on the network node on which the router is scheduled.
You must also configure bridge mappings for your Compute nodes.
Procedure
$ source ~/stackrc
Example
$ vi /home/stack/templates/my_bridge_mappings.yaml
4. Your environment file must contain the keywords parameter_defaults. Add the
NeutronBridgeMappings heat parameter with values that are appropriate for your site after
the parameter_defaults keyword.
Example
In this example, the NeutronBridgeMappings parameter associates the physical names
datacentre and tenant with the bridges br-ex and br-tenant, respectively.
parameter_defaults:
NeutronBridgeMappings: "datacentre:br-ex,tenant:br-tenant"
NOTE
When the NeutronBridgeMappings parameter is not used, the default maps the
external bridge on hosts (br-ex) to a physical name (datacentre).
5. If you are using a flat network, add its name using the NeutronFlatNetworks parameter.
Example
In this example, the NeutronFlatNetworks parameter adds the flat network my_flat_network to
the existing bridge mappings for datacentre and tenant.
parameter_defaults:
NeutronBridgeMappings: "datacentre:br-ex,tenant:br-tenant"
NeutronFlatNetworks: "my_flat_network"
6. If you are using a VLAN network, specify the network name along with the range of VLANs it
accesses by using the NeutronNetworkVLANRanges parameter.
Example
In this example, the NeutronNetworkVLANRanges parameter specifies the VLAN range of 1 -
1000 for the tenant network:
parameter_defaults:
NeutronBridgeMappings: "datacentre:br-ex,tenant:br-tenant"
NeutronNetworkVLANRanges: "tenant:1:1000"
7. Run the deployment command and include the core heat templates, environment files, and this
new custom environment file.
a. Using the network VLAN ranges, create the provider networks that represent the
corresponding external networks. (You use the physical name when creating neutron
provider networks or floating IP networks.)
b. Connect the external networks to your project networks with router interfaces.
Additional resources
Updating the format of your network configuration files in the Installing and managing Red Hat
OpenStack Platform with director guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
Manual port cleanup - requires careful removal of the superfluous patch ports. No outages of
network connectivity are required.
Automated port cleanup - performs an automated cleanup, but requires an outage, and requires
that the necessary bridge mappings be re-added. Choose this option during scheduled
maintenance windows when network connectivity outages can be tolerated.
NOTE
When OVN bridge mappings are removed, the OVN controller automatically cleans up
any associated patch ports.
Prerequisites
The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports.
You can identify the patch ports to clean up by their naming convention:
In br-$external_bridge patch ports are named phy-<external bridge name> (for example,
phy-br-ex2).
In br-int patch ports are named int-<external bridge name> (for example, int-br-ex2).
Procedure
1. Use ovs-vsctl to remove the OVS patch ports associated with the removed bridge mapping
entry:
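For example, for a removed mapping to the br-ex2 bridge:
# ovs-vsctl del-port br-int int-br-ex2
# ovs-vsctl del-port br-ex2 phy-br-ex2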
2. Restart neutron-openvswitch-agent:
NOTE
When OVN bridge mappings are removed, the OVN controller automatically cleans up
any associated patch ports.
Prerequisites
The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports.
Use the flag --ovs_all_ports to remove all patch ports from br-int, cleaning up tunnel ends from
br-tun, and patch ports from bridge to bridge.
The neutron-ovs-cleanup command unplugs all patch ports (instances, qdhcp/qrouter, among
others) from all OVS bridges.
Procedure
IMPORTANT
# /usr/bin/neutron-ovs-cleanup
--config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini
--log-file /var/log/neutron/ovs-cleanup.log --ovs_all_ports
NOTE
After a restart, the OVS agent does not interfere with any connections that are
not present in bridge_mappings. So, if you have br-int connected to br-ex2, and
br-ex2 has some flows on it, removing br-int from the bridge_mappings
configuration does not disconnect the two bridges when you restart the OVS
agent or the node.
Additional resources
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
CHAPTER 11. VLAN-AWARE INSTANCES
In ML2/OVN deployments you can support VLAN-aware instances using VLAN transparent networks.
As an alternative in ML2/OVN or ML2/OVS deployments, you can support VLAN-aware instances using
trunks.
In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are
transferred over the network and consumed by the instances on the same VLAN, and ignored by other
instances and devices. In a VLAN transparent network, the VLANs are managed in the instance. You do
not need to set up the VLAN in the OpenStack Networking Service (neutron).
VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port. For
example, a project data network can use VLANs or tunneling (VXLAN, GRE, or Geneve) segmentation,
while the instances see the traffic tagged with VLAN IDs. Network packets are tagged immediately
before they are injected to the instance and do not need to be tagged throughout the entire network.
The following table compares certain features of VLAN transparent networks and VLAN trunks.
             Transparent                                    Trunk
VLAN ID      Flexible. You can set the VLAN ID in the       Fixed. Instances must use the VLAN ID
             instance.                                      configured in the trunk.
Prerequisites
Deployment of Red Hat OpenStack Platform 16.1 or higher, with ML2/OVN as the mechanism
driver.
Provider network of type VLAN or Geneve. Do not use VLAN transparency in deployments with
flat type provider networks.
Ensure that the external switch supports 802.1q VLAN stacking using ethertype 0x8100 on both
VLANs. OVN VLAN transparency does not support 802.1ad QinQ with outer provider VLAN
ethertype set to 0x88A8 or 0x9100.
Procedure
$ source ~/stackrc
parameter_defaults:
EnableVLANTransparency: true
4. Include the environment file in the openstack overcloud deploy command with any other
environment files that are relevant to your environment and deploy the overcloud:
-e ovn-extras.yaml \
…
Replace <other_overcloud_environment_files> with the list of environment files that are part
of your existing deployment.
Example
The following example command adds a VLAN interface on eth0 with an MTU of 1496. The
VLAN is 50 and the interface name is vlan50:
Example
$ ip link add link eth0 name vlan50 type vlan id 50 mtu 1496
$ ip link set vlan50 up
$ ip addr add 192.128.111.3/24 dev vlan50
7. Choose one of these alternatives for the IP address you created on the VLAN interface inside
the VM in step 4:
Example
The following example sets an allowed address pair on port, fv82gwk3-qq2e-yu93-go31-
56w7sf476mm0, by using 192.128.111.3 and optionally adding a MAC address,
00:40:96:a8:45:c4:
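$ openstack port set --allowed-address ip-address=192.128.111.3,mac-address=00:40:96:a8:45:c4 fv82gwk3-qq2e-yu93-go31-56w7sf476mm0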
Example
The following example disables port security on port fv82gwk3-qq2e-yu93-go31-
56w7sf476mm0:
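For example (the security groups are also removed, because port security cannot be disabled while security groups are attached):
$ openstack port set --no-security-group --disable-port-security fv82gwk3-qq2e-yu93-go31-56w7sf476mm0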
Verification
1. Ping between two VMs on the VLAN using the vlan50 IP address.
2. Use tcpdump on eth0 to see if the packets arrive with the VLAN tag intact.
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
On the controller node, confirm that the trunk plug-in is enabled in the /var/lib/config-
data/puppet-generated/neutron/etc/neutron/neutron.conf file:
service_plugins=router,qos,trunk
To implement trunks for VLAN-tagged traffic, create a parent port and attach the new port to an
existing neutron network. When you attach the new port, OpenStack Networking adds a trunk
connection to the parent port you created. Next, create subports. These subports connect VLANs to
instances, which allow connectivity to the trunk. Within the instance operating system, you must also
create a sub-interface that tags traffic for the VLAN associated with the subport.
1. Identify the network that contains the instances that require access to the trunked VLANs. In
this example, this is the public network:
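$ openstack network list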
2. Create the parent trunk port, and attach it to the network that the instance connects to. In this
example, create a neutron port named parent-trunk-port on the public network. This trunk is the
parent port, as you can use it to create subports.
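For example:
$ openstack port create --network public parent-trunk-port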
3. Create a trunk using the port that you created in step 2. In this example the trunk is named
parent-trunk.
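$ openstack network trunk create --parent-port parent-trunk-port parent-trunk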
NOTE
If you receive the error HttpException: Conflict, confirm that you are creating
the subport on a different network to the one that has the parent trunk port. This
example uses the public network for the parent trunk port, and private for the
subport.
2. Associate the port with the trunk (parent-trunk), and specify the VLAN ID ( 55):
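For example, assuming a subport named subport-trunk-port was created on the private network (the subport name is illustrative):
$ openstack network trunk set --subport port=subport-trunk-port,segmentation-type=vlan,segmentation-id=55 parent-trunk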
Prerequisites
If you are performing live migrations of your Compute nodes, ensure that the RHOSP
Networking service RPC response timeout is appropriately set for your RHOSP deployment.
The RPC response timeout value can vary between sites and is dependent on the system speed.
The general recommendation is to set the value to at least 120 seconds per 100 trunk ports.
The best practice is to measure the trunk port bind process time for your RHOSP deployment,
and then set the RHOSP Networking service RPC response timeout appropriately. Try to keep
the RPC response timeout value low, but also provide enough time for the RHOSP Networking
service to receive an RPC response. For more information, see Section 11.7, “Configuring
Networking service RPC timeout”.
Procedure
1. Review the configuration of your network trunk, using the network trunk command.
Example
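$ openstack network trunk list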
Sample output
+---------------------+--------------+---------------------+-------------+
| ID | Name | Parent Port | Description |
+---------------------+--------------+---------------------+-------------+
| 0e4263e2-5761-4cf6- | parent-trunk | 20b6fdf8-0d43-475a- | |
| ab6d-b22884a0fa88 | | a0f1-ec8f757a4a39 | |
+---------------------+--------------+---------------------+-------------+
Example
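$ openstack network trunk show parent-trunk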
Sample output
+-----------------+------------------------------------------------------+
| Field | Value |
+-----------------+------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2021-10-20T02:05:17Z |
| description | |
| id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 |
| name | parent-trunk |
| port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 |
| revision_number | 2 |
| status | DOWN |
| sub_ports | port_id='479d742e-dd00-4c24-8dd6-b7297fab3ee9', segm |
| | entation_id='55', segmentation_type='vlan' |
| tenant_id | 745d33000ac74d30a77539f8920555e7 |
| updated_at | 2021-08-20T02:10:06Z |
+-----------------+------------------------------------------------------+
Example
openstack server create --image cirros --flavor m1.tiny --security-group default --key-name
sshaccess --nic port-id=20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 testInstance
Sample output
+--------------------------------------+---------------------------------+
| Property | Value |
+--------------------------------------+---------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host |- |
| OS-EXT-SRV-ATTR:hostname | testinstance |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index |0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-juqco0el |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data |- |
| OS-EXT-STS:power_state |0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at |- |
| OS-SRV-USG:terminated_at |- |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | uMyL8PnZRBwQ |
| config_drive | |
| created | 2021-08-20T03:02:51Z |
| description |- |
| flavor | m1.tiny (1) |
| hostId | |
| host_status | |
| id | 88b7aede-1305-4d91-a180-67e7eac |
| | 8b70d |
| image | cirros (568372f7-15df-4e61-a05f |
| | -10954f79a3c4) |
| key_name | sshaccess |
| locked | False |
| metadata | {} |
| name | testInstance |
| os-extended-volumes:volumes_attached | [] |
| progress |0 |
| security_groups | default |
| status | BUILD |
| tags | [] |
| tenant_id | 745d33000ac74d30a77539f8920555e |
| |7 |
| updated | 2021-08-20T03:02:51Z |
| user_id | 8c4aea738d774967b4ef388eb41fef5 |
| |e |
+--------------------------------------+---------------------------------+
Additional resources
The RPC response timeout value can vary between sites and is dependent on the system speed. The
general recommendation is to set the value to at least 120 seconds per 100 trunk ports.
If your site uses trunk ports, the best practice is to measure the trunk port bind process time for your
RHOSP deployment, and then set the RHOSP Networking service RPC response timeout appropriately.
Try to keep the RPC response timeout value low, but also provide enough time for the RHOSP
Networking service to receive an RPC response.
By using a manual hieradata override, rpc_response_timeout, you can set the RPC response timeout
value for the RHOSP Networking service.
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
The RHOSP Orchestration service (heat) uses a set of plans called templates to install and
configure your environment. You can customize aspects of the overcloud with a custom
environment file, which is a special type of template that provides customization for your heat
templates.
2. In the YAML environment file under ExtraConfig, set the appropriate value (in seconds) for
rpc_response_timeout. (The default value is 60 seconds.)
Example
parameter_defaults:
ExtraConfig:
neutron::rpc_response_timeout: 120
NOTE
The RHOSP Orchestration service (heat) updates all RHOSP nodes with the
value you set in the custom environment file, however this value only impacts the
RHOSP Networking components.
3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.
Example
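A deployment command might look like the following; the list of environment files is illustrative and must match your deployment:
$ openstack overcloud deploy --templates \
  -e <other_environment_files> \
  -e /home/stack/templates/my-modules-environment.yaml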
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
DOWN: The virtual and physical resources for the trunk are not in sync. This can be a temporary
state during negotiation.
BUILD: There has been a request and the resources are being provisioned. After successful
completion the trunk returns to ACTIVE.
DEGRADED: The provisioning request did not complete, so the trunk has only been partially
provisioned. It is recommended to remove the subports and try again.
ERROR: The provisioning request was unsuccessful. Remove the resource that caused the error
to return the trunk to a healthier state. Do not add more subports while in the ERROR state, as
this can cause more issues.
CHAPTER 12. CONFIGURING RBAC POLICIES
As a result, cloud administrators can remove the ability for some projects to create networks and can
instead allow them to attach to pre-existing networks that correspond to their project.
3. Create a RBAC entry for the web-servers network that grants access to the auditors project
(4b0b98f8c6c040f38ba4f7146e8680f5):
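$ openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers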
| object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
| object_type | network |
| target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 |
| project_id | 98a2f53c20ce4d50a40dac4a38016c69 |
+----------------+--------------------------------------+
As a result, users in the auditors project can connect instances to the web-servers network.
2. Run the openstack network rbac show command to view the details of a specific RBAC entry:
2. Run the openstack network rbac delete command to delete the RBAC, using the ID of the
RBAC that you want to delete:
Complete the steps in the following example procedure to create a RBAC for the web-servers network
and grant access to the engineering project (c717f263785d4679b16a122516247deb):
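$ openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_shared web-servers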
As a result, users in the engineering project are able to view the network or connect instances to
it:
CHAPTER 13. CONFIGURING DISTRIBUTED VIRTUAL ROUTING (DVR)
Each model has advantages and disadvantages. Use this document to carefully plan whether centralized
routing or DVR better suits your needs.
New default RHOSP deployments use DVR and the Modular Layer 2 plug-in with the Open Virtual
Network mechanism driver (ML2/OVN).
East-West routing - routing of traffic between different networks in the same project. This
traffic does not leave the RHOSP deployment. This definition applies to both IPv4 and IPv6
subnets.
North-South routing without floating IPs (also known as SNAT) - The Networking service
offers a default port address translation (PAT) service for instances that do not have allocated
floating IPs. With this service, instances can communicate with external endpoints through the
router, but not the other way around. For example, an instance can browse a website on the
internet, but a web browser outside cannot browse a website hosted within the instance. SNAT
is applied for IPv4 traffic only. In addition, Networking service networks that are assigned global
unicast address (GUA) prefixes do not require NAT on the Networking service router external
gateway port to access the outside world.
Originally, the Networking service (neutron) was designed with a centralized routing model where a
project’s virtual routers, managed by the neutron L3 agent, are all deployed in a dedicated node or
cluster of nodes (referred to as the Network node, or Controller node). This means that each time a
routing function is required (east/west, floating IPs or SNAT), traffic would traverse through a dedicated
node in the topology. This introduced multiple challenges and resulted in sub-optimal traffic flows. For
example:
Traffic between instances flows through a Controller node - when two instances need to
communicate with each other using L3, traffic has to hit the Controller node. Even if the
instances are scheduled on the same Compute node, traffic still has to leave the Compute
node, flow through the Controller, and route back to the Compute node. This negatively
impacts performance.
Instances with floating IPs receive and send packets through the Controller node - the external
network gateway interface is available only at the Controller node, so whether the traffic is
originating from an instance, or destined to an instance from the external network, it has to flow
through the Controller node. Consequently, in large environments the Controller node is subject
to heavy traffic load. This would affect performance and scalability, and also requires careful
planning to accommodate enough bandwidth in the external network gateway interface. The
same requirement applies for SNAT traffic.
To better scale the L3 agent, the Networking service can use the L3 HA feature, which distributes the
virtual routers across multiple nodes. If a Controller node is lost, the HA router fails over to a standby
on another node, and there is packet loss until the failover completes.
North-South traffic with floating IP is distributed and routed on the Compute nodes. This
requires the external network to be connected to every Compute node.
North-South traffic without floating IP is not distributed and still requires a dedicated Controller
node.
The L3 agent on the Controller node uses the dvr_snat mode so that the node serves only
SNAT traffic.
The neutron metadata agent is distributed and deployed on all Compute nodes. The metadata
proxy service is hosted on all the distributed routers.
On ML2/OVS DVR deployments, network traffic for the Red Hat OpenStack Platform Load-
balancing service (octavia) goes through the Controller and network nodes, instead of the
compute nodes.
With an ML2/OVS mechanism driver network back end and DVR, it is possible to create VIPs.
However, the IP address assigned to a bound port using allowed_address_pairs, should match
the virtual port IP address (/32).
If you use a CIDR format IP address for the bound port allowed_address_pairs instead, port
forwarding is not configured in the back end, and traffic fails for any IP in the CIDR expecting to
reach the bound IP port.
SNAT (source network address translation) traffic is not distributed, even when DVR is enabled.
SNAT does work, but all ingress/egress traffic must traverse through the centralized Controller
node.
In ML2/OVS deployments, IPv6 traffic is not distributed, even when DVR is enabled. All
ingress/egress traffic goes through the centralized Controller node. If you use IPv6 routing
extensively with ML2/OVS, do not use DVR.
Note that in ML2/OVN deployments, all east/west traffic is always distributed, and north/south
traffic is distributed when DVR is configured.
In ML2/OVS deployments, DVR is not supported in conjunction with L3 HA. If you use DVR with
Red Hat OpenStack Platform 17.1 director, L3 HA is disabled. This means that routers are still
scheduled on the Network nodes (and load-shared between the L3 agents), but if one agent
fails, all routers hosted by this agent fail as well. This affects only SNAT traffic. The
allow_automatic_l3agent_failover feature is recommended in such cases, so that if one
network node fails, the routers are rescheduled to a different node.
For ML2/OVS environments, the DHCP server is not distributed and is deployed on a Controller
node. The ML2/OVS neutron DHCP agent, which manages the DHCP server, is deployed in a
highly available configuration on the Controller nodes, regardless of the routing design
(centralized or DVR).
Compute nodes require an interface on the external network attached to an external bridge.
They use this interface to attach to a VLAN or flat network for an external router gateway, to
host floating IPs, and to perform SNAT for VMs that use floating IPs.
In ML2/OVS deployments, each Compute node requires one additional IP address. This is due
to the implementation of the external gateway port and the floating IP network namespace.
VLAN, GRE, and VXLAN are all supported for project data separation. When you use GRE or
VXLAN, you must enable the L2 Population feature. The Red Hat OpenStack Platform director
enforces L2 Population during installation.
Procedure
4. You cannot transition an L3 HA router to distributed directly. Instead, for each router, disable
the L3 HA option, and then enable the distributed option:
Example
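The exact commands depend on your environment; a typical sequence, assuming a router named router1, is:
$ openstack router set --disable router1
$ openstack router set --no-ha router1
$ openstack router set --distributed router1
$ openstack router set --enable router1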
Additional resources
In a DVR topology, compute nodes with floating IP addresses route traffic between virtual machine
instances and the network that provides the router with external connectivity (north-south traffic).
Traffic between instances (east-west traffic) is also distributed.
You can optionally deploy with DVR disabled. This disables north-south DVR, requiring north-south
traffic to traverse a controller or networker node. East-west routing is always distributed in an
ML2/OVN deployment, even when DVR is disabled.
Prerequisites
Procedure
parameter_defaults:
NeutronEnableDVR: false
2. To apply this configuration, deploy the overcloud, adding your custom environment file to the
stack along with your other environment files. For example:
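For example (the environment file name is illustrative):
$ openstack overcloud deploy --templates -e <other-environment-files> -e /home/stack/templates/disable-dvr.yaml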
CHAPTER 14. PROJECT NETWORKING WITH IPV6
NOTE
RHOSP does not support IPv6 prefix delegation from an external entity in ML2/OVN
deployments. You must obtain the Global Unicast Address prefix from your external
prefix delegation router and set it by using the subnet-range argument during creation
of an IPv6 subnet.
For example:
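A sketch of such a command, with an illustrative prefix, network name, and subnet name, is:
$ openstack subnet create --network provider --ip-version 6 --subnet-range 2001:db8:1234::/64 --ipv6-address-mode slaac --ipv6-ra-mode slaac provider-subnet-v6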
NOTE
OpenStack Networking supports only EUI-64 IPv6 address assignment for SLAAC. This allows for
simplified IPv6 networking, because hosts self-assign addresses based on the base 64 bits plus the
MAC address. You cannot create subnets with a different netmask and address_assign_type of
SLAAC.
For example, you can create an IPv6 subnet using Stateful DHCPv6 in a network named database-servers
in a project named QA.
Procedure
1. Retrieve the project ID of the Project where you want to create the IPv6 subnet. These values
are unique between OpenStack deployments, so your values differ from the values in this
example.
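For example:
$ openstack project list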
2. Retrieve a list of all networks present in OpenStack Networking (neutron), and note the name of
the network where you want to host the IPv6 subnet:
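For example:
$ openstack network list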
3. Include the project ID, network name, and ipv6 address mode in the openstack subnet create
command:
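A command similar to the following, using the project ID and network name from the previous steps, produces output like the sample below (the subnet range and subnet name are illustrative):
$ openstack subnet create --ip-version 6 --ipv6-address-mode dhcpv6-stateful --project 25837c567ed5458fbb441d39862e1399 --network database-servers --subnet-range fdf8:f53b:82e4::53/125 subnet_name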
| enable_dhcp | True |
| gateway_ip | fdf8:f53b:82e4::51 |
| host_routes | |
| id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 |
| ip_version |6 |
| ipv6_address_mode | dhcpv6-stateful |
| ipv6_ra_mode | |
| name | |
| network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 |
| tenant_id | 25837c567ed5458fbb441d39862e1399 |
+-------------------+--------------------------------------------------------------+
Validation steps
1. Validate this configuration by reviewing the network list. Note that the entry for database-
servers now reflects the newly created IPv6 subnet:
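For example:
$ openstack network list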
Result
As a result of this configuration, instances that the QA project creates can receive a DHCP IPv6
address when added to the database-servers subnet:
Additional resources
To find the Router Advertisement mode and address mode combinations to achieve a particular result in
an IPv6 subnet, see IPv6 subnet options in the Configuring Red Hat OpenStack Platform networking .
CHAPTER 15. MANAGING PROJECT QUOTAS
Procedure
You can set project quotas for various network components in the /var/lib/config-
data/puppet-generated/neutron/etc/neutron/neutron.conf file.
For example, to limit the number of routers that a project can create, change the quota_router
value:
quota_router = 10
For a listing of the quota settings, see sections that immediately follow.
Here are quota options available for managing the number of security groups that projects can create:
CHAPTER 16. DEPLOYING ROUTED PROVIDER NETWORKS
Routed provider networks simplify the cloud for end users because they see only one network. For
administrators, routed provider networks deliver scalability and fault tolerance. For example, if a major
error occurs, only one segment is impacted instead of the entire network failing.
Before routed provider networks, administrators typically had to choose from one of the following
architectures:
Single, large layer 2 networks become complex when scaling and reduce fault tolerance (increase failure
domains).
Multiple, smaller layer 2 networks scale better and shrink failure domains, but can introduce complexity
for end users.
In RHOSP 16.2 and later, you can deploy routed provider networks using the Modular Layer 2
plug-in with the Open Virtual Network mechanism driver (ML2/OVN). (Routed provider network
support for the ML2/Open vSwitch (OVS) and SR-IOV mechanism drivers was introduced in RHOSP
16.1.1.)
Additional resources
With routed provider networks, the IP addresses available to virtual machine (VM) instances depend on
the segment of the network available on the particular compute node. The Networking service port can
be associated with only one network segment.
Similar to conventional networking, layer 2 (switching) handles transit of traffic between ports on the
same network segment and layer 3 (routing) handles transit of traffic between segments.
The Networking service does not provide layer 3 services between segments. Instead, it relies on
physical network infrastructure to route subnets. Thus, both the Networking service and physical
network infrastructure must contain configuration for routed provider networks, similar to conventional
provider networks.
You can configure the Compute scheduler to filter Compute nodes that have affinity with routed
network segments, so that the scheduler places instances only on Compute nodes that are in the
required routed provider network segment.
If you require a DHCP-metadata service, you must define an availability zone for each edge site or
network segment, to ensure that the local DHCP agent is deployed.
Additional resources
When using SR-IOV or PCI pass-through, physical network (physnet) names must be the same
in central and remote sites or segments. You cannot reuse segment IDs.
Prerequisites
Procedure
1. Gather the VLAN IDs from the tripleo-heat-templates/network_data.yaml file for the network
you want to create the routed provider network on, and assign unique physical network names
for each segment that you will create on the routed provider network. This enables reuse of the
same segmentation details between subnets.
Create a reference table to visualize the relationships between the VLAN IDs, segments, and
physical network names:
Each subnet on a segment must contain the gateway address of the router interface on that
particular subnet. You need the subnet address in both IPv4 and IPv6 formats.
Table 16.2. Example - routing plan for routed provider network segments
3. Routed provider networks require that Compute nodes reside on different segments. Check the
templates/overcloud-baremetal-deployed.yaml file to ensure that every Compute host in a
routed provider network has direct connectivity to one of its segments.
For more information, see Provisioning bare metal nodes for the overcloud in the Installing and
managing Red Hat OpenStack Platform with director guide.
...
- name: Compute
...
ServicesDefault:
- OS::TripleO::Services::NeutronMetadataAgent
...
For more information, see Composable services and custom roles in the Customizing your Red
Hat OpenStack Platform deployment guide.
5. When using the ML2/OVS mechanism driver, in addition to the NeutronMetadataAgent service,
also ensure that the NeutronDhcpAgent service is included in templates/roles_data-
custom.yaml for the Compute nodes containing the segments:
...
- name: Compute
...
ServicesDefault:
- OS::TripleO::Services::NeutronDhcpAgent
- OS::TripleO::Services::NeutronMetadataAgent
...
TIP
Unlike conventional provider networks, a DHCP agent cannot support more than one segment
within a network. Deploy DHCP agents on the Compute nodes containing the segments rather
than on the network nodes to reduce the node count.
parameter_defaults:
NeutronEnableIsolatedMetadata: true
8. Ensure that the segments service plug-in is loaded into the Networking service:
Example
parameter_defaults:
NeutronEnableIsolatedMetadata: true
NeutronServicePlugins: 'router,qos,segments,trunk,placement'
IMPORTANT
9. To verify the network with the Placement service before scheduling an instance on a host,
enable scheduling support for routed provider networks on the Controllers that are running the
Compute scheduler services.
Example
parameter_defaults:
NeutronEnableIsolatedMetadata: true
NeutronServicePlugins: 'router,qos,segments,trunk,placement'
NovaSchedulerQueryPlacementForRoutedNetworkAggregates: true
10. Add your routed provider network environment file to the stack with your other environment
files and deploy the overcloud:
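For example (file names are illustrative):
$ openstack overcloud deploy --templates -e <other-environment-files> -e /home/stack/templates/routed-provider-network.yaml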
Next steps
Additional resources
Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat
OpenStack Platform with director guide
Composable services and custom roles in the Customizing your Red Hat OpenStack Platform
deployment guide
When you perform this procedure, you create a routed provider network with two network segments.
Each segment contains one IPv4 subnet and one IPv6 subnet.
Prerequisites
Complete the steps in Section 16.4, “Preparing for a routed provider network” .
Procedure
Example
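A command similar to the following, with values matching the sample output, creates the network:
$ openstack network create --share --provider-physical-network provider1 --provider-network-type vlan --provider-segment 128 multisegment1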
Sample output
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| l2_adjacency | True |
| mtu | 1500 |
| name | multisegment1 |
| port_security_enabled | True |
| provider:network_type | vlan |
| provider:physical_network | provider1 |
| provider:segmentation_id | 128 |
| revision_number |1 |
| router:external | Internal |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | [] |
+---------------------------+--------------------------------------+
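The segment that is created automatically with the network can be listed with a command such as:
$ openstack network segment list --network multisegment1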
Sample output
+--------------------------------------+------+--------------------------------------+--------------+---------+
| ID                                   | Name | Network                              | Network Type | Segment |
+--------------------------------------+------+--------------------------------------+--------------+---------+
| 43e16869-ad31-48e4-87ce-acf756709e18 | None | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan         | 128     |
+--------------------------------------+------+--------------------------------------+--------------+---------+
Example
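A command similar to the following creates the second segment shown in the sample output:
$ openstack network segment create --physical-network provider2 --network-type vlan --segment 129 --network multisegment1 segment2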
Sample output
+------------------+--------------------------------------+
| Field | Value |
+------------------+--------------------------------------+
| description | None |
| headers | |
| id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| name | segment2 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| network_type | vlan |
| physical_network | provider2 |
| revision_number | 1 |
| segmentation_id | 129 |
| tags | [] |
+------------------+--------------------------------------+
4. Verify that the network contains the segment1 and segment2 segments:
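For example:
$ openstack network segment list --network multisegment1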
Sample output
+--------------------------------------+----------+--------------------------------------+--------------+---------+
| ID                                   | Name     | Network                              | Network Type | Segment |
+--------------------------------------+----------+--------------------------------------+--------------+---------+
| 053b7925-9a89-4489-9992-e164c8cc8763 | segment2 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan         | 129     |
| 43e16869-ad31-48e4-87ce-acf756709e18 | segment1 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan         | 128     |
+--------------------------------------+----------+--------------------------------------+--------------+---------+
5. Create one IPv4 subnet and one IPv6 subnet on the segment1 segment.
In this example, the IPv4 subnet uses 203.0.113.0/24:
Example
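A command similar to the following, with the subnet name matching the sample output, creates the IPv4 subnet:
$ openstack subnet create --network multisegment1 --network-segment segment1 --ip-version 4 --subnet-range 203.0.113.0/24 multisegment1-segment1-v4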
Sample output
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 203.0.113.2-203.0.113.254 |
| cidr | 203.0.113.0/24 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| id | c428797a-6f8e-4cb1-b394-c404318a2762 |
| ip_version |4 |
| name | multisegment1-segment1-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
| tags | [] |
+-------------------+--------------------------------------+
Example
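The IPv6 subnet on segment1 can be created with a command such as the following (values match the sample output):
$ openstack subnet create --network multisegment1 --network-segment segment1 --ip-version 6 --subnet-range fd00:203:0:113::/64 --ipv6-address-mode slaac multisegment1-segment1-v6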
Sample output
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | fd00:203:0:113::2-fd00:203:0:113:ffff:ffff:ffff:ffff |
| cidr | fd00:203:0:113::/64 |
| enable_dhcp | True |
| gateway_ip | fd00:203:0:113::1 |
| id | e41cb069-9902-4c01-9e1c-268c8252256a |
| ip_version |6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | None |
| name | multisegment1-segment1-v6 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
| tags | [] |
+-------------------+------------------------------------------------------+
NOTE
6. Create one IPv4 subnet and one IPv6 subnet on the segment2 segment.
In this example, the IPv4 subnet uses 198.51.100.0/24:
Example
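A command similar to the following matches the sample output:
$ openstack subnet create --network multisegment1 --network-segment segment2 --ip-version 4 --subnet-range 198.51.100.0/24 multisegment1-segment2-v4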
Sample output
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 198.51.100.2-198.51.100.254 |
| cidr | 198.51.100.0/24 |
| enable_dhcp | True |
| gateway_ip | 198.51.100.1 |
| id | 242755c2-f5fd-4e7d-bd7a-342ca95e50b2 |
| ip_version |4 |
| name | multisegment1-segment2-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| tags | [] |
+-------------------+--------------------------------------+
Example
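A command similar to the following matches the sample output:
$ openstack subnet create --network multisegment1 --network-segment segment2 --ip-version 6 --subnet-range fd00:198:51:100::/64 --ipv6-address-mode slaac multisegment1-segment2-v6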
Sample output
+-------------------+--------------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------------+
| allocation_pools | fd00:198:51:100::2-fd00:198:51:100:ffff:ffff:ffff:ffff |
| cidr | fd00:198:51:100::/64 |
| enable_dhcp | True |
| gateway_ip | fd00:198:51:100::1 |
| id | b884c40e-9cfe-4d1b-a085-0a15488e9441 |
| ip_version |6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | None |
| name | multisegment1-segment2-v6 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| tags | [] |
+-------------------+--------------------------------------------------------+
Verification
1. Verify that each IPv4 subnet associates with at least one DHCP agent:
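For example (the network name is the one used earlier in this procedure):
$ openstack network agent list --agent-type dhcp --network multisegment1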
Sample output
+--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+
| ID                                   | Agent Type | Host        | Availability Zone | Alive | State | Binary             |
+--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+
| c904ed10-922c-4c1a-84fd-d928abaf8f55 | DHCP agent | compute0001 | nova              | :-)   | UP    | neutron-dhcp-agent |
| e0b22cc0-d2a6-4f1c-b17c-27558e20b454 | DHCP agent | compute0101 | nova              | :-)   | UP    | neutron-dhcp-agent |
+--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+
2. Verify that inventories were created for each segment IPv4 subnet in the Compute service
placement API.
Run this command for all segment IDs:
$ SEGMENT_ID=053b7925-9a89-4489-9992-e164c8cc8763
$ openstack resource provider inventory list $SEGMENT_ID
Sample output
In this sample output, only one of the segments is shown:
+----------------+------------------+----------+----------+-----------+----------+-------+
| resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total |
+----------------+------------------+----------+----------+-----------+----------+-------+
| IPV4_ADDRESS   | 1.0              | 1        | 2        | 1         | 1        | 30    |
+----------------+------------------+----------+----------+-----------+----------+-------+
3. Verify that host aggregates were created for each segment in the Compute service:
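For example:
$ openstack aggregate list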
Sample output
In this example, only one of the segments is shown:
+----+---------------------------------------------------------+-------------------+
| Id | Name | Availability Zone |
+----+---------------------------------------------------------+-------------------+
| 10 | Neutron segment id 053b7925-9a89-4489-9992-e164c8cc8763 | None |
+----+---------------------------------------------------------+-------------------+
4. Launch one or more instances. Each instance obtains IP addresses according to the segment it
uses on the particular compute node.
NOTE
If a fixed IP is specified by the user in the port create request, that particular IP is
allocated immediately to the port. However, creating a port and passing it to an
instance yields a different behavior than conventional networks. If the fixed IP is
not specified on the port create request, the Networking service defers
assignment of IP addresses to the port until the particular compute node
becomes apparent. For example, when you run this command:
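A port-create command of the following form, with an illustrative port name, yields the deferred allocation shown in the sample output:
$ openstack port create --network multisegment1 port1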
Sample output
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | UP |
| binding_vnic_type | normal |
| id | 6181fb47-7a74-4add-9b6b-f9837c1c90c4 |
| ip_allocation | deferred |
| mac_address | fa:16:3e:34:de:9b |
| name | port1 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| port_security_enabled | True |
| revision_number |1 |
| security_groups | e4fcef0d-e2c5-40c3-a385-9c33ac9289c5 |
| status | DOWN |
| tags | [] |
+-----------------------+--------------------------------------+
Additional resources
Prerequisites
The non-routed network you are migrating must contain only one segment and only one subnet.
IMPORTANT
Procedure
1. For the network that is being migrated, obtain the ID of the current network segment.
Example
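A command similar to the following, where the network name is illustrative, lists the segment:
$ openstack network segment list --network my_network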
Sample output
+--------------------------------------+------+--------------------------------------+--------------+---------+
| ID                                   | Name | Network                              | Network Type | Segment |
+--------------------------------------+------+--------------------------------------+--------------+---------+
| 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 | None | 45e84575-2918-471c-95c0-018b961a2984 | flat         | None    |
+--------------------------------------+------+--------------------------------------+--------------+---------+
2. For the network that is being migrated, obtain the ID of the current subnet.
Example
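For example (the network name is illustrative):
$ openstack subnet list --network my_network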
Sample output
+--------------------------------------+-----------+--------------------------------------+---------------+
| ID                                   | Name      | Network                              | Subnet        |
+--------------------------------------+-----------+--------------------------------------+---------------+
| 71d931d2-0328-46ae-93bc-126caf794307 | my_subnet | 45e84575-2918-471c-95c0-018b961a2984 | 172.24.4.0/24 |
+--------------------------------------+-----------+--------------------------------------+---------------+
3. Verify that the current segment_id of the subnet has a value of None.
Example
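For example, requesting only the segment_id field to match the sample output:
$ openstack subnet show my_subnet -c segment_id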
Sample output
+------------+-------+
| Field | Value |
+------------+-------+
| segment_id | None |
+------------+-------+
4. Change the value of the subnet segment_id to the network segment ID.
Here is an example:
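A command of the following form, using the segment ID obtained in step 1, makes the change:
$ openstack subnet set --network-segment 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 my_subnet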
Verification
Verify that the subnet is now associated with the desired network segment.
Example
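For example:
$ openstack subnet show my_subnet -c segment_id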
Sample output
+------------+--------------------------------------+
| Field | Value |
+------------+--------------------------------------+
| segment_id | 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 |
+------------+--------------------------------------+
Additional resources
CHAPTER 17. CREATING CUSTOM VIRTUAL ROUTERS WITH ROUTER FLAVORS
IMPORTANT
The content in this section is available in this release as a Technology Preview, and
therefore is not fully supported by Red Hat. It should only be used for testing, and
should not be deployed in a production environment. For more information, see
Technology Preview.
You can use router flavors to deploy custom virtual routers in your Red Hat OpenStack Platform
(RHOSP) ML2/OVN environments. After the RHOSP administrator enables the router flavor feature
and creates the router flavor, users can create custom routers by using the router flavor.
Within a RHOSP deployment you can combine virtual custom routers that are based on router flavors
with routers of the default OVN type.
Using router flavors does not affect the operation of the default OVN router. When router flavors are
used, the default OVN router is treated as the default router flavor, with no impact on its configuration
or operation.
To set up router flavors and create custom routers, perform the following general steps:
1. The administrator loads the necessary RHOSP Networking service (neutron) plug-in and
specifies the service provider.
See Section 17.1, “Enabling router flavors and creating service providers” .
3. The user creates a custom router by using one of the router flavors.
See Section 17.3, “Creating a custom virtual router” .
The administrator must deploy the service provider code in a module in the Networking service
directories. Red Hat recommends the neutron.services.ovn_l3.service_providers.user_defined
module.
NOTE
The following procedure involves direct editing of .conf files on the Controller nodes. Red
Hat is developing heat template methods and OpenStack commands to replace this
direct editing method.
Prerequisites
You have a router flavor service provider created for your deployment.
You have access to the RHOSP Controller nodes and permission to update configuration files.
Procedure
[DEFAULT]
service_plugins = qos,ovn-router-flavors-ha,trunk,segments,port_forwarding,log
3. Create a service_providers section and add a service provider definition for each router flavor
that you plan to use.
Example
In this example, a service provider, user_defined_1, is added:
...
[service_providers]
service_provider = L3_ROUTER_NAT:user_defined_1:neutron.services.ovn_l3.service_providers.user_defined.UserDefined1
Path
Red Hat recommends using this path:
neutron.services.ovn_l3.service_providers.user_defined
Class
A python class name for the service provider.
Each provider has its own class. For example, UserDefined1.
NOTE
Retain this class name and its path. You need it later when you create the
router flavor.
Verification
Verify that the Networking service has loaded your user defined service provider:
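For example:
$ openstack network service provider list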
If the procedure was successful the new service appears in the list.
Sample output
+---------------+----------------+---------+
| Service Type  | Name           | Default |
+---------------+----------------+---------+
| L3_ROUTER_NAT | user_defined_1 | False   |
| L3_ROUTER_NAT | ovn            | True    |
+---------------+----------------+---------+
Prerequisites
The router flavor service provider has been created and you know the name and path of its
class.
For more information, see Section 17.1, “Enabling router flavors and creating service providers” .
Procedure
1. Source your overcloud credentials file that assigns you the admin role.
2. Using the service provider class and its path, create a service profile for the router flavor.
Retain the profile ID, as you need it in a later step.
Example
In this example, the driver class name is UserDefined1, and its path is
neutron.services.ovn_l3.service_providers.user_defined:
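A command similar to the following, with the description matching the sample output, creates the service profile:
$ openstack network flavor profile create --description "User-defined router flavor profile" --enable --driver neutron.services.ovn_l3.service_providers.user_defined.UserDefined1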
Sample output
+-------------+----------------------------------------------------------------------+
| Field | Value |
+-------------+----------------------------------------------------------------------+
| description | User-defined router flavor profile |
| driver | neutron.services.ovn_l3.service_providers.user_defined.UserDefined1 |
| enabled | True |
| id | a717c92c-63f7-47e8-9efb-6ad0d61c4875 |
| meta_info | |
| project_id | None |
+-------------+----------------------------------------------------------------------+
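The router flavor shown in the next sample output can then be created with a command such as the following (name and description match the output):
$ openstack network flavor create --service-type L3_ROUTER_NAT --description "User-defined flavor for routers" user-defined-router-flavor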
Sample output
+---------------------+---------------------------------------------------------+
| Field | Value |
+---------------------+---------------------------------------------------------+
| description | User-defined flavor for routers |
| enabled | True |
| id | e47c1c5c-629b-4c48-b49a-78abe6ac7696 |
| name | user-defined-router-flavor |
| service_profile_ids | [] |
| service_type | L3_ROUTER_NAT |
+---------------------+---------------------------------------------------------+
4. Add the service profile to the router flavor, using the profile ID from an earlier step.
Example
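For example, using the profile ID from the sample output in step 2:
$ openstack network flavor add profile user-defined-router-flavor a717c92c-63f7-47e8-9efb-6ad0d61c4875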
Additional resources
Prerequisites
Procedure
2. Get the ID for the router flavor to use to create your custom router:
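For example (the column selection is illustrative):
$ openstack network flavor list -c ID -c Name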
Sample output
+--------------------------------------+-------------------------------+
| ID | Name |
+--------------------------------------+-------------------------------+
| 4b37f895-e78e-49df-a96b-1916550f9116 | user-defined-router-flavor |
+--------------------------------------+-------------------------------+
Example
In this example, a custom router, user-defined-router is created using the flavor ID for user-
defined-router-flavor:
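A command similar to the following, using the flavor ID from the previous step, creates the router:
$ openstack router create --flavor-id 4b37f895-e78e-49df-a96b-1916550f9116 user-defined-router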
If you do not use the --flavor-id argument, the openstack router create command creates a
default OVN router.
Sample output
+--------------------------------------+------------------------+--------+------+
| ID | Name | Status | HA |
+--------------------------------------+------------------------+--------+------+
| 9f5fec56-1829-4bad-abe5-7b4221649c8e | router1 | ACTIVE | True |
| e9f25566-ff73-4a76-aeb4-969c819f9c47 | user-defined-router | ACTIVE | True |
+--------------------------------------+------------------------+--------+------+
Additional resources
CHAPTER 18. CONFIGURING ALLOWED ADDRESS PAIRS
IMPORTANT
Binding a vport to an instance prevents the instance from spawning and produces an
error message similar to the following:
You define allowed address pairs using the Red Hat OpenStack Platform command-line client
openstack port command.
IMPORTANT
Do not use the default security group with a wider IP address range
in an allowed address pair. Doing so can allow a single port to bypass security groups for
all other ports within the same network.
For example, this command impacts all ports in the network and bypasses all security
groups:
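A sketch of the kind of command the warning refers to, with a placeholder port ID, is:
$ openstack port set --allowed-address ip-address=0.0.0.0/0 <port-id>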
NOTE
With an ML2/OVN mechanism driver network back end, it is possible to create VIPs.
However, the IP address assigned to a bound port using allowed_address_pairs, should
match the virtual port IP address (/32).
If you use a CIDR format IP address for the bound port allowed_address_pairs instead,
port forwarding is not configured in the back end, and traffic fails for any IP in the CIDR
expecting to reach the bound IP port.
Additional resources
IMPORTANT
Do not use the default security group with a wider IP address range in an allowed address
pair. Doing so can allow a single port to bypass security groups for all other ports within
the same network.
Procedure
Use the following command to create a port and allow one address pair:
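For example (all values are placeholders):
$ openstack port create --network <network> --allowed-address ip-address=<ip_address>,mac-address=<mac_address> <port_name>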
Additional resources
IMPORTANT
Do not use the default security group with a wider IP address range in an allowed address
pair. Doing so can allow a single port to bypass security groups for all other ports within
the same network.
Procedure
NOTE
You cannot set an allowed-address pair that matches the mac_address and
ip_address of a port. This is because such a setting has no effect since traffic
matching the mac_address and ip_address is already allowed to pass through
the port.
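To add an allowed address pair to an existing port, a command of the following general form can be used (all values are placeholders):
$ openstack port set --allowed-address ip-address=<ip_address>,mac-address=<mac_address> <port>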
Additional resources
CHAPTER 19. CONFIGURING SECURITY GROUPS
All projects have a default security group called default, which is used when you do not specify a security
group for your instances. By default, the default security group allows all outgoing traffic and denies all
incoming traffic from any source other than instances in the same security group. You can either add
rules to the default security group or create a new security group for your project. You can apply one or
more security groups to an instance during instance creation. To apply a security group to a running
instance, apply the security group to a port attached to the instance.
When you create a security group, you can choose stateful or stateless in ML2/OVN deployments.
NOTE
Security groups are stateful by default and in most cases stateful security groups provide better control
with less administrative overhead.
A stateless security group can provide significant performance benefits, because it bypasses connection
tracking in the underlying firewall. But stateless security groups require more security group rules than
stateful security groups. Stateless security groups also offer less granularity in some cases.
Stateless security groups are the only viable security group option in applications that
offload OpenFlow actions to hardware.
Stateless security group rules do not automatically allow returning traffic. For example, if you
create a rule to allow outgoing TCP traffic from a port that is in a stateless security group,
you must also create a rule that allows incoming replies. Stateful security groups
automatically allow the incoming replies.
Control over those incoming replies may not be as granular as the control provided by
stateful security groups.
In general, use the default stateful security group type unless your application is highly sensitive to
performance or uses hardware offloading of OpenFlow actions.
NOTE
You cannot apply a role-based access control (RBAC)-shared security group directly to
an instance during instance creation. To apply an RBAC-shared security group to an
instance you must first create the port, apply the shared security group to that port, and
then assign that port to the instance. See Adding a security group to a port .
You can create a new security group to apply to instances and ports within a project.
Procedure
1. Optional: To ensure the security group you need does not already exist, review the available
security groups and their rules:
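For example:
$ openstack security group list
$ openstack security group rule list <sec_group>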
Replace <sec_group> with the name or ID of the security group that you retrieved from the
list of available security groups.
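The security group can then be created with a command such as the following; the name mySecGroup matches later examples in this procedure:
$ openstack security group create mySecGroup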
Optional: Include the --stateless option to create a stateless security group. Security
groups are stateful by default.
NOTE
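A rule-creation command of the following general form can be used; the placeholders correspond to the descriptions that follow:
$ openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] mySecGroup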
Replace <protocol> with the name of the protocol you want to allow to communicate with
your instances.
Optional: Replace <port-range> with the destination port or port range to open for the
protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the
specified protocol. Separate port range values with a colon.
Optional: You can allow access only from specified IP addresses by using --remote-ip to
specify the remote IP address block, or --remote-group to specify that the rule only applies
to packets from interfaces that are a member of the remote group. If using --remote-ip,
replace <ip-address> with the remote IP address block. You can use CIDR notation. If using
--remote-group, replace <group> with the name or ID of the existing security group. If
neither option is specified, then access is allowed to all addresses, as the remote IP access
range defaults (IPv4 default: 0.0.0.0/0; IPv6 default: ::/0).
Specify the direction of network traffic the protocol rule applies to, either incoming
(ingress) or outgoing (egress). If not specified, defaults to ingress.
NOTE
If you created a stateless security group, and you created a rule to allow
outgoing TCP traffic from a port that is in the stateless security group, you
must also create a rule that allows incoming replies.
4. Repeat step 3 until you have created rules for all the protocols that you want to allow to access
your instances. The following example creates a rule to allow SSH connections to instances in
the security group mySecGroup:
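For example (the remote IP range is illustrative; narrow it for your site):
$ openstack security group rule create --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0 mySecGroup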
Procedure
1. Retrieve the name or ID of the security group that you want to update the rules for:
2. Determine the rules that you need to apply to the security group.
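The rule can then be added with a command of the following general form; the placeholders correspond to the descriptions that follow:
$ openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] <group_name>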
Replace <protocol> with the name of the protocol you want to allow to communicate with
your instances.
Optional: Replace <port-range> with the destination port or port range to open for the
protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the
specified protocol. Separate port range values with a colon.
Optional: You can allow access only from specified IP addresses by using --remote-ip to
specify the remote IP address block, or --remote-group to specify that the rule only applies
to packets from interfaces that are a member of the remote group. If using --remote-ip,
replace <ip-address> with the remote IP address block. You can use CIDR notation. If using
--remote-group, replace <group> with the name or ID of the existing security group. If
neither option is specified, then access is allowed to all addresses, as the remote IP access
range defaults (IPv4 default: 0.0.0.0/0; IPv6 default: ::/0).
Specify the direction of network traffic the protocol rule applies to, either incoming
(ingress) or outgoing (egress). If not specified, defaults to ingress.
Replace <group_name> with the name or ID of the security group that you want to apply
the rule to.
4. Repeat step 3 until you have created rules for all the protocols that you want to allow to access
your instances. The following example creates a rule to allow SSH connections to instances in
the security group mySecGroup:
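For example (the remote IP range is illustrative):
$ openstack security group rule create --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0 mySecGroup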
Procedure
1. Identify the security group that the rules are applied to:
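For example, list the rules of the group and then delete the unwanted rule or rules:
$ openstack security group rule list <sec_group>
$ openstack security group rule delete <rule> [<rule> ...]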
Replace <rule> with the ID of the rule to delete. You can delete more than one rule at a time by
specifying a space-delimited list of the IDs of the rules to delete.
Procedure
1. Retrieve the name or ID of the security group that you want to delete:
If the security group you want to delete is associated with any of the ports, then you must first
remove the security group from the port. For more information, see Removing a security group
from a port.
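The group can then be deleted with:
$ openstack security group delete <group>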
Replace <group> with the ID of the group that you want to delete. You can delete more than
one group at a time by specifying a space-delimited list of the IDs of the groups to delete.
When you want one or more Red Hat OpenStack Platform (RHOSP) projects to be able to share data,
you can use the RHOSP Networking service (neutron) RBAC policy feature to share a security group.
You create security groups and Networking service role-based access control (RBAC) policies using the
OpenStack Client.
You can apply a security group directly to an instance during instance creation, or to a port on the
running instance.
NOTE
You cannot apply a role-based access control (RBAC)-shared security group directly to
an instance during instance creation. To apply an RBAC-shared security group to an
instance you must first create the port, apply the shared security group to that port, and
then assign that port to the instance. See Adding a security group to a port .
Prerequisites
You have at least two RHOSP projects that you want to share.
In one of the projects, the current project, you have created a security group that you want to
share with another project, the target project.
In this example, the ping_ssh security group is created:
Example
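For example:
$ openstack security group create ping_ssh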
Procedure
1. Log in to the overcloud for the current project that contains the security group.
3. Obtain the name or ID of the security group that you want to share between RHOSP projects.
4. Using the identifiers from the previous steps, create an RBAC policy using the openstack
network rbac create command.
In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e. The ID of
the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24:
Example
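A command of the following form, using the identifiers listed above, creates the policy:
$ openstack network rbac create --target-project 32016615de5d43bb88de99e7f2e26a1e --action access_as_shared --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24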
--target-project
specifies the project that requires access to the security group.
TIP
You can share data between all projects by using the --target-all-projects argument instead
of --target-project <target-project>. By default, only the admin user has this privilege.
--action access_as_shared
specifies what the project is allowed to do.
--type
indicates that the target object is a security group.
5ba835b7-22b0-4be6-bdbe-e0722d1b5f24
is the ID of the particular security group which is being granted access to.
The target project is able to access the security group when running the OpenStack Client security
group commands, in addition to being able to bind to its ports. No other users (other than
administrators and the owner) are able to access the security group.
TIP
To remove access for the target project, delete the RBAC policy that allows it using the openstack
network rbac delete command.
Additional resources
CHAPTER 20. LOGGING SECURITY GROUP ACTIONS
You can associate any instance port with one or more security groups and define one or more rules for
each security group. For example, you can create a rule to allow inbound SSH traffic to any virtual
machine in a security group. You can create another rule in the same security group to allow virtual
machines in that group to initiate and respond to ICMP (ping) messages.
Then you can create logs to record combinations of packet flow events. For example, the following
command creates a log to capture all ACCEPT events in the security group security-group1.
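A sketch of such a command, with an illustrative log name, is:
$ openstack network log create --resource-type security_group --resource security-group1 --event ACCEPT accept-log1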
You can create multiple logs to capture data about specific combinations of security groups and packet
flow events.
resource-type
You must set this required parameter to security_group.
resource (security group names)
You can optionally limit a log to a specific security group with the --resource argument. For example: --
resource security-group1. If you do not specify a resource, the log captures events from all
security groups on the specified ports in the project.
event (types of events to log)
You can choose to log the following packet flow events:
DROP: Log one DROP log entry for each incoming or outgoing session that is dropped.
NOTE
If you log dropped traffic on one or more security groups, the Networking
service logs dropped traffic on all security groups.
ACCEPT: Log one ACCEPT log entry for each new session that is allowed by the security
group.
ALL (drop and accept): Log all DROP and ACCEPT events. If you do not set --event
ACCEPT or --event DROP, the Networking service defaults to ALL.
NOTE
The Networking service writes all log data to the same file on every Compute node:
/var/log/containers/openvswitch/ovn-controller.log.
Procedure
1. Source a credentials file that gives you access to the overcloud with the RHOSP admin role.
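For example, check the loaded Networking service extensions:
$ openstack extension list --network | grep -i logging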
If the logging service plug-in and extension are configured properly, the output includes the
following:
3. If the openstack extension list output does not include the Logging API Extension:
Example
parameter_defaults:
NeutronPluginExtensions: "qos,port_security,log"
b. Run the openstack overcloud deploy command and include the core Orchestration
templates, environment files, and this environment file.
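For example (environment file names are illustrative):
$ openstack overcloud deploy --templates -e <other-environment-files> -e /home/stack/templates/logging-environment.yaml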
Additional resources
Prerequisites
You have created security group rules for the security groups.
Procedure
1. Source a credentials file that gives you access to the overcloud with the RHOSP admin role.
2. Create a log by using the openstack network log create command with the appropriate set of
arguments.
Example 1: Log ACCEPT events from the security group sg1 on all ports
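A command similar to the following implements Example 1 (the log name is illustrative):
$ openstack network log create --resource-type security_group --resource sg1 --event ACCEPT log-sg1-accept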
Example 2: Log ACCEPT events from all security groups on all ports
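A command similar to the following implements Example 2 (the log name is illustrative):
$ openstack network log create --resource-type security_group --event ACCEPT log-all-accept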
Procedure
1. Source a credentials file that gives you access to the overcloud with the RHOSP admin role.
Procedure
1. Source a credentials file that gives you access to the overcloud with the RHOSP admin role.
Procedure
1. Source a credentials file that gives you access to the overcloud with the RHOSP admin role.
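A command of the following general form renames the log object (the --name option is an assumption based on the description below):
$ openstack network log set --name <new_log_object_name> <object>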
Replace <new_log_object_name> with the new name of the log object. Replace <object> with the
old name or ID of the log object.
Procedure
1. Source a credentials file that gives you access to the overcloud with the RHOSP admin role.
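For example:
$ openstack network log delete <log_object_name> [<log_object_name> ...]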
Replace <log_object_name> with the name of the log object to delete. To delete multiple log
objects, enter a list of log object names, separated by spaces.
The log file contains other log objects. Security group log entries include the string acl_log.
An indication of the originator of the flow. For example, which project or log resource generated
the events.
2022-11-30T03:29:12.868Z|00111|acl_log(ovn_pinctrl1)|INFO|name="neutron-bc53f8df-2318-4d08-
8e12-89e92b08deec", verdict=allow, severity=info, direction=from-lport:
udp,vlan_tci=0x0000,dl_src=fa:16:3e:70:c4:45,dl_dst=fa:16:3e:66:8b:18,nw_src=192.168.100.59,nw_dst
=192.168.100.1,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=68,tp_dst=67
When logging packet transmission reaches the rate limit, the Networking service queues the excess
packets to be logged. You can change the maximum number of queued packets using the
NeutronOVNLoggingBurstLimit parameter.
Logging rate and burst limits do not limit or control data traffic; they limit only the transmission of
logging data.
Procedure
$ source ~/stackrc
Example
parameter_defaults:
...
NeutronOVNLoggingRateLimit: 450
NeutronOVNLoggingBurstLimit: 50
4. Run the deployment command and include the core Heat templates, other environment files,
and the custom roles data file in your deployment command with the -r option.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
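For example (file names are illustrative):
$ openstack overcloud deploy --templates -r /home/stack/templates/roles_data.yaml -e <other-environment-files> -e /home/stack/templates/my-environment.yaml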
CHAPTER 21. COMMON ADMINISTRATIVE NETWORKING TASKS
Prerequisites
Procedure
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-environment.yaml
4. Your environment file must contain the keywords parameter_defaults. Under these keywords,
add the following lines:
parameter_defaults:
NeutronMechanismDrivers: ['openvswitch', 'l2population']
NeutronEnableL2Pop: 'True'
NeutronEnableARPResponder: true
5. Run the deployment command and include the core heat templates, environment files, and this
new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
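For example:
$ openstack overcloud deploy --templates -e <other-environment-files> -e /home/stack/templates/my-environment.yaml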
Verification
Sample output
+--------------------------------------+---------------------------+
| ID | Binary |
+--------------------------------------+---------------------------+
| 003a8750-a6f9-468b-9321-a6c03c77aec7 | neutron-openvswitch-agent |
| 02bbbb8c-4b6b-4ce7-8335-d1132df31437 | neutron-l3-agent |
| 0950e233-60b2-48de-94f6-483fd0af16ea | neutron-openvswitch-agent |
| 115c2b73-47f5-4262-bc66-8538d175029f | neutron-openvswitch-agent |
| 2a9b2a15-e96d-468c-8dc9-18d7c2d3f4bb | neutron-metadata-agent |
| 3e29d033-c80b-4253-aaa4-22520599d62e | neutron-dhcp-agent |
| 3ede0b64-213d-4a0d-9ab3-04b5dfd16baa | neutron-dhcp-agent |
| 462199be-0d0f-4bba-94da-603f1c9e0ec4 | neutron-sriov-nic-agent |
| 54f7c535-78cc-464c-bdaa-6044608a08d7 | neutron-l3-agent |
| 6657d8cf-566f-47f4-856c-75600bf04828 | neutron-metadata-agent |
| 733c66f1-a032-4948-ba18-7d1188a58483 | neutron-l3-agent |
| 7e0a0ce3-7ebb-4bb3-9b89-8cccf8cb716e | neutron-openvswitch-agent |
| dfc36468-3a21-4a2d-84c3-2bc40f224235 | neutron-metadata-agent |
| eb7d7c10-69a2-421e-bd9e-aec3edfe1b7c | neutron-openvswitch-agent |
| ef5219b4-ee49-4635-ad04-048291209373 | neutron-sriov-nic-agent |
| f36c7af0-e20c-400b-8a37-4ffc5d4da7bd | neutron-dhcp-agent |
+--------------------------------------+---------------------------+
2. Using an ID from one of the OVS agents, confirm that the L2 Population driver is set on the
OVS agent.
Example
This example verifies the configuration of the L2 Population driver on the neutron-
openvswitch-agent with ID 003a8750-a6f9-468b-9321-a6c03c77aec7:
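A sketch of such a check, filtering the agent configuration for the relevant key, is:
$ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -f json | grep l2_population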
Sample output
"l2_population": true,
3. Ensure that the ARP responder feature is enabled for the OVS agent.
Example
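A similar, assumed check for the ARP responder setting:
$ openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -f json | grep arp_responder_enabled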
Sample output
"arp_responder_enabled": true,
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the
ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role.
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file , which is a special type of template that provides
customization for your heat templates.
3. In the YAML environment file, increase the VRRP advertisement interval using the
ha_vrrp_advert_int argument with a value specific for your site. (The default is 2 seconds.)
You can also set values for gratuitous ARP messages:
ha_vrrp_garp_master_repeat
The number of gratuitous ARP messages to send at one time after the transition to the
master state. (The default is 5 messages.)
ha_vrrp_garp_master_delay
The delay before the second set of gratuitous ARP messages is sent after a lower-priority
advertisement is received in the master state. (The default is 5 seconds.)
Example
parameter_defaults:
ControllerExtraConfig:
neutron::agents::l3::ha_vrrp_advert_int: 7
neutron::config::l3_agent_config:
DEFAULT/ha_vrrp_garp_master_repeat:
value: 5
DEFAULT/ha_vrrp_garp_master_delay:
value: 5
4. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
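For example (environment file names are illustrative):
$ openstack overcloud deploy --templates -e <other-environment-files> -e /home/stack/templates/my-neutron-environment.yaml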
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
You enable the DNS domain for ports extension by declaring the RHOSP Orchestration (heat)
NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding
parameter, NeutronDnsDomain, you specify your domain name, which overrides the default value,
openstacklocal. After redeploying your overcloud, you can use the OpenStack Client port commands,
port set or port create, with --dns-name to assign a port name.
IMPORTANT
You must enable the DNS domain for ports extension (dns_domain_ports) for DNS to
internally resolve names for ports in your RHOSP environment. Using the
NeutronDnsDomain default value, openstacklocal, means that the Networking service
does not internally resolve port names for DNS.
Also, when the DNS domain for ports extension is enabled, the Compute service automatically populates
the dns_name attribute with the hostname attribute of the instance during the boot of VM instances.
At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
NOTE
Values inside parentheses are sample values that are used in the example
commands in this procedure. Substitute these sample values with values that are
appropriate for your site.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The undercloud includes a set of Orchestration service templates that form the plan for your
overcloud creation. You can customize aspects of the overcloud with environment files, which
are YAML-formatted files that override parameters and resources in the core Orchestration
service template collection. You can include as many environment files as necessary.
3. In the environment file, add a parameter_defaults section. Under this section, add the DNS
domain for ports extension, dns_domain_ports.
Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
NOTE
If you set dns_domain_ports, ensure that the deployment does not also use
dns_domain, the DNS Integration extension. These extensions are incompatible,
and both extensions cannot be defined simultaneously.
4. Also in the parameter_defaults section, add your domain name (example.com) using the
NeutronDnsDomain parameter.
Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
  NeutronDnsDomain: "example.com"
5. Run the openstack overcloud deploy command and include the core Orchestration templates,
environment files, and this new environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
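For example, assuming that the custom environment file is the my-neutron-environment.yaml file created earlier in this procedure, the deployment command might resemble the following sketch:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-neutron-environment.yaml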
Verification
1. Log in to the overcloud, and create a new port (new_port) on a network (public). Assign a DNS
name (my_port) to the port.
Example
$ source ~/overcloudrc
$ openstack port create --network public --dns-name my_port new_port
Sample output
+-------------------------+----------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------+
| dns_assignment | fqdn='my_port.example.com', |
| | hostname='my_port', |
| | ip_address='10.65.176.113' |
| dns_domain | example.com |
| dns_name | my_port |
| name | new_port |
+-------------------------+----------------------------------------------+
Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a
concatenation of the DNS name (my_port) and the domain name (example.com) that you set
earlier with NeutronDnsDomain.
3. Create a new VM instance (my_vm) using the port (new_port) that you just created.
Example
$ openstack server create --image rhel --flavor m1.small --port new_port my_vm
Sample output
+-------------------------+----------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------+
| dns_assignment | fqdn='my_vm.example.com', |
| | hostname='my_vm', |
| | ip_address='10.65.176.113' |
| dns_domain | example.com |
| dns_name | my_vm |
| name | new_port |
+-------------------------+----------------------------------------------+
Note that the Compute service changes the dns_name attribute from its original value
(my_port) to the name of the instance with which the port is associated (my_vm).
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object
contains an opt_name and an opt_value. IPv4 is the default version, but you can change this to IPv6 by
including a third option, ip-version=6.
When a VM instance starts, the RHOSP Networking service supplies port information to the instance
using the DHCP protocol. If you add DHCP information to a port that is already connected to a running
instance, the instance uses the new DHCP port information only after the instance is restarted.
Some of the more common DHCP port attributes are: bootfile-name, dns-server, domain-name, mtu,
server-ip-address, and tftp-server. For the complete set of acceptable values for opt_name, refer to
the DHCP specification.
Prerequisites
Procedure
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-environment.yaml
4. Your environment file must contain the keyword parameter_defaults. Under this keyword,
add the extra DHCP option extension, extra_dhcp_opt.
Example
parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,extra_dhcp_opt"
5. Run the deployment command and include the core heat templates, environment files, and this
new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
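For example, assuming that the custom environment file is the my-environment.yaml file created earlier in this procedure, the deployment command might resemble the following sketch:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-environment.yaml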
Verification
Example
$ source ~/overcloudrc
2. Create a new port (new_port) on a network (public). Assign a valid attribute from the DHCP
specification to the new port.
Example
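For example, the following commands are a sketch that matches the sample output below: the first command creates the port with a domain-name and an ntp-server DHCP option, and the second displays only the extra_dhcp_opts field:
$ openstack port create --network public \
  --extra-dhcp-option name=domain-name,value=test.domain \
  --extra-dhcp-option name=ntp-server,value=192.0.2.123 new_port
$ openstack port show new_port -c extra_dhcp_opts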
Sample output
+-----------------+-----------------------------------------------------------------+
| Field | Value |
+-----------------+-----------------------------------------------------------------+
| extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' |
| | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123' |
+-----------------+-----------------------------------------------------------------+
Additional resources
Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP) Parameters
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
Prerequisites
Access to the undercloud host and credentials for the stack user.
Procedure
$ source ~/stackrc
3. To enable the port_numa_affinity_policy extension, open the environment file where the
NeutronPluginExtensions parameter is defined, and add port_numa_affinity_policy to the
list:
parameter_defaults:
  NeutronPluginExtensions: "qos,port_numa_affinity_policy"
4. Add the environment file that you modified to the stack with your other environment files, and
redeploy the overcloud:
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
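For example, the redeployment command might resemble the following sketch; the file path is a placeholder for the environment file that you modified:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/<your-modified-environment-file>.yaml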
Verification
Example
$ source ~/overcloudrc
--numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port.
Example
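For example, the following sketch creates a port with the legacy NUMA affinity policy and then displays the policy. The port name (myNUMAAffinityPort) and network (public) are sample values, and you can use --numa-policy-required or --numa-policy-preferred in place of --numa-policy-legacy:
$ openstack port create --network public --numa-policy-legacy myNUMAAffinityPort
$ openstack port show myNUMAAffinityPort -c numa_affinity_policy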
Sample output
When the extension is loaded, the Value column should read legacy, preferred, or required. If
the extension failed to load, the Value column reads None:
+----------------------+--------+
| Field | Value |
+----------------------+--------+
| numa_affinity_policy | legacy |
+----------------------+--------+
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
Creating an instance with NUMA affinity on the port in the Creating and managing instances
guide
By using a special Orchestration service (heat) parameter, ExtraKernelModules, you can ensure that
heat stores configuration information about the kernel modules that are required for features such as
GRE tunneling. Later, during normal module management, these kernel modules are loaded.
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
Heat uses a set of plans called templates to install and configure your environment. You can
customize aspects of the overcloud with a custom environment file, which is a special type of
template that provides customization for your heat templates.
Example
ComputeParameters:
  ExtraKernelModules:
    nf_conntrack_proto_gre: {}
ControllerParameters:
  ExtraKernelModules:
    nf_conntrack_proto_gre: {}
3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.
Example
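For example, assuming that the custom environment file is the my-modules-environment.yaml file created earlier in this procedure, the deployment command might resemble the following sketch:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-modules-environment.yaml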
Verification
If heat has properly loaded the module, you should see output when you run the lsmod
command on the Compute node:
Example
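For example, a check on a Compute node for the nf_conntrack_proto_gre module used in the earlier example might look like this:
$ sudo lsmod | grep nf_conntrack_proto_gre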
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
Prerequisites
You have access to the RHOSP Compute nodes and permission to update configuration files.
Your RHOSP environment uses IPv4 networking. Currently, the Networking service does not
support metadata rate limiting on IPv6 networks.
This procedure requires you to restart the OVN metadata service or the OVS metadata agent.
Schedule this activity for a maintenance window to minimize the operational impact of any
potential disruption.
Procedure
rate_limit_enabled
enables you to limit the rate of metadata requests. The default value is false. Set the value
to true to enable metadata rate limiting.
ip_versions
the IP version, 4, used for metadata IP addresses on which you want to control query rates.
RHOSP does not yet support metadata rate limiting for IPv6 networks.
base_window_duration
the time span, in seconds, during which query requests are limited. The default value is 10
seconds.
base_query_rate_limit
the maximum number of requests allowed during the base_window_duration. The default
value is 10 requests.
burst_window_duration
the time span, in seconds, during which a query rate higher than the base rate is allowed. The
default value is 10 seconds.
burst_query_rate_limit
the maximum number of requests allowed during the burst_window_duration. The default
value is 10 requests.
Example
In this example, the Networking service is configured for a base time and rate that allows
instances to query the IPv4 metadata service IP address 6 times over a 60 second period.
The Networking service is also configured for a burst time and rate that allows a higher rate
of 2 queries during shorter periods of 10 seconds each:
[metadata_rate_limiting]
rate_limit_enabled = True
ip_versions = 4
base_window_duration = 60
base_query_rate_limit = 6
burst_window_duration = 10
burst_query_rate_limit = 2
ML2/OVN
On the Compute nodes, restart tripleo_ovn_metadata_agent.service.
ML2/OVS
On the Compute nodes, restart tripleo_neutron_metadata_agent.service.
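For example, assuming that the agents run as systemd-managed services on the Compute nodes, the restart commands might look like this:
$ sudo systemctl restart tripleo_ovn_metadata_agent.service      # ML2/OVN
$ sudo systemctl restart tripleo_neutron_metadata_agent.service  # ML2/OVS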
CHAPTER 22. CONFIGURING LAYER 3 HIGH AVAILABILITY (HA)
In a typical deployment, projects create virtual routers, which are scheduled to run on physical
Networking service Layer 3 (L3) agent nodes. This becomes an issue when you lose an L3 agent node
and the dependent virtual machines subsequently lose connectivity to external networks. Any floating IP
addresses are also unavailable. In addition, connectivity is lost between any networks that the router
hosts.
NOTE
To deploy Layer 3 (L3) HA, you must maintain similar configuration on the redundant
Networking service nodes, including floating IP ranges and access to external networks.
In the following diagram, the active Router1 and Router2 routers are running on separate physical L3
Networking service agent nodes. L3 HA has scheduled backup virtual routers on the corresponding
nodes, ready to resume service in the case of a physical node failure. When the L3 agent node fails, L3
HA reschedules the affected virtual router and floating IP addresses to a working node:
During a failover event, instance TCP sessions through floating IPs remain unaffected, and migrate to
the new L3 node without disruption. Only SNAT traffic is affected by failover events.
Additional resources
The Networking service L3 agent node shuts down or otherwise loses power because of a
hardware failure.
The L3 agent node becomes isolated from the physical network and loses connectivity.
NOTE
Manually stopping the L3 agent service does not induce a failover event.
Internal VRRP messages are transported within a separate internal network, created
automatically for each project. This process occurs transparently to the user.
When implementing high availability (HA) routers on ML2/OVS, each L3 agent spawns haproxy
and neutron-keepalived-state-change-monitor processes for each router. Each process
consumes approximately 20MB of memory. By default, each HA router resides on three L3
agents and consumes resources on each of the nodes. Therefore, when sizing your RHOSP
networks, ensure that you have allocated enough memory to support the number of HA routers
that you plan to implement.
Layer 3 (L3) HA assigns the active role randomly, regardless of the scheduler used by the
Networking service (whether random or leastrouter).
The database schema has been modified to handle allocation of virtual IP addresses (VIPs)
to virtual routers.
A new keepalived manager has been added, providing load-balancing and HA capabilities.
Prerequisites
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
2. Create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The Orchestration service (heat) uses a set of plans called templates to install and configure
your environment. You can customize aspects of the overcloud with a custom environment file,
which is a special type of template that provides customization for your heat templates.
3. Set the NeutronL3HA parameter to true in the YAML environment file. This ensures HA is
enabled even if director did not set it by default.
parameter_defaults:
  NeutronL3HA: 'true'
Example
parameter_defaults:
  NeutronL3HA: 'true'
  ControllerExtraConfig:
    neutron::server::max_l3_agents_per_router: 2
In this example, if you deploy four Networking service nodes, only two L3 agents protect each
HA virtual router: one active, and one standby.
If you set the value of max_l3_agents_per_router to be greater than the number of available
network nodes, you can scale out the number of standby routers by adding new L3 agents. For
every new L3 agent node that you deploy, the Networking service schedules additional standby
versions of the virtual routers until the max_l3_agents_per_router limit is reached.
5. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
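For example, assuming that the custom environment file is the my-neutron-environment.yaml file created earlier in this procedure, the deployment command might resemble the following sketch:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-neutron-environment.yaml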
NOTE
When NeutronL3HA is set to true, all virtual routers that are created default to
HA routers. When you create a router, you can override the HA option by including
the --no-ha option in the openstack router create command:
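For example, a non-HA router might be created as follows, where router1 is a sample router name:
$ openstack router create --no-ha router1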
Additional resources
Environment files in the Customizing your Red Hat OpenStack Platform deployment guide
Including environment files in overcloud creation in the Customizing your Red Hat OpenStack
Platform deployment guide
Procedure
Run the ip address command within the virtual router namespace to confirm that a high
availability (HA) device, prefixed with ha-, appears in the result.
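For example, the command might look like the following, where <router_id> is a placeholder for the ID of the virtual router:
$ sudo ip netns exec qrouter-<router_id> ip address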
<snip>
2794: ha-45249562-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state DOWN group default
link/ether 12:34:56:78:2b:5d brd ff:ff:ff:ff:ff:ff
inet 169.254.0.2/24 brd 169.254.0.255 scope global ha-54b92d86-4f
With Layer 3 HA enabled, virtual routers and floating IP addresses are protected against individual node
failure.
CHAPTER 23. USING AVAILABILITY ZONES TO MAKE NETWORK RESOURCES HIGHLY AVAILABLE
AZs enable you to make your RHOSP network resources highly available. You can group network nodes
that are attached to different power sources on different AZs, and then schedule crucial services to be
on separate AZs.
Often Networking service AZs are used in conjunction with Compute service (nova) AZs to ensure that
customers use specific virtual networks that are local to the physical site where their workloads run. For
more information on Distributed Compute Node architecture, see the Deploying a Distributed Compute
Node architecture guide.
NOTE
The Modular Layer 2 plug-in with the Open Virtual Network (ML2/OVN) mechanism
driver supports only router availability zones. ML2/OVN has a distributed DHCP server,
so supporting network AZs is unnecessary.
When you create your network resource, you can specify an AZ by using the OpenStack client command
line option, --availability-zone-hint. The AZ that you specify is added to the AZ hint list. However, the
AZ attribute is not actually set until the resource is scheduled. The actual AZ that is assigned to your
network resource can vary from the AZ that you specified with the hint option. The reasons for this
mismatch can be that there is a zone failure, or that the zone specified has no remaining capacity.
You can configure the Networking service for a default AZ, in case users fail to specify an AZ when they
create a network resource. In addition to setting a default AZ, you can also configure specific drivers to
schedule networks and routers on AZs.
Additional resources
The information contained in this topic is for deployments that run the RHOSP Networking service that
uses the Modular Layer 2 plug-in with the Open vSwitch mechanism driver (ML2/OVS).
Prerequisites
Running the RHOSP Networking service that uses the ML2/OVS mechanism driver.
When using Networking service AZs in distributed compute node (DCN) environments, you must
match the Networking service AZ names to the Compute service (nova) AZ names.
For more information, see the Deploying a Distributed Compute Node architecture guide.
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file, which is a special type of template that provides
customization for your heat templates.
IMPORTANT
In DCN environments, you must match the Networking service AZ names with
Compute service AZ names.
Example
parameter_defaults:
  NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'
4. Determine the AZs for the DHCP and the L3 agents by entering values for the parameters
NeutronDhcpAgentAvailabilityZone and NeutronL3AgentAvailabilityZone, respectively.
Example
parameter_defaults:
  NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'
  NeutronL3AgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1'
  NeutronDhcpAgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1'
IMPORTANT
Example
parameter_defaults:
  NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'
  NeutronL3AgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1'
  NeutronDhcpAgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1'
  NeutronNetworkSchedulerDriver: 'neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler'
  NeutronRouterSchedulerDriver: 'neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler'
6. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
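For example, assuming that the custom environment file is the my-neutron-environment.yaml file created earlier in this procedure, the deployment command might resemble the following sketch:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-neutron-environment.yaml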
Verification
Confirm that availability zones are properly defined by running the availability zone list
command.
Example
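For example, the following command can produce output like the sample below; the --network option, which limits the list to Networking service zones, is optional:
$ openstack availability zone list --network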
Sample output
+----------------+-------------+
| Zone Name | Zone Status |
+----------------+-------------+
| az-central | available |
| az-datacenter1 | available |
| az-datacenter2 | available |
+----------------+-------------+
Additional resources
The information contained in this topic is for deployments that run the RHOSP Networking service that
uses the Modular Layer 2 plug-in with the Open Virtual Network (ML2/OVN) mechanism driver.
NOTE
The ML2/OVN mechanism driver supports only router availability zones. ML2/OVN has a
distributed DHCP server, so supporting network AZs is unnecessary.
Prerequisites
Running the RHOSP Networking service that uses the ML2/OVN mechanism driver.
When using Networking service AZs in distributed compute node (DCN) environments, you must
match the Networking service AZ names to the Compute service (nova) AZ names.
For more information, see the Deploying a Distributed Compute Node architecture guide.
IMPORTANT
Ensure that all router gateway ports reside on the OpenStack Controller nodes
by setting OVNCMSOptions: 'enable-chassis-as-gw' and by providing one or
more AZ values for the OVNAvailabilityZone parameter. Performing these
actions prevents the Networking service from scheduling all chassis as potential hosts
for the router gateway ports.
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file, which is a special type of template that provides
customization for your heat templates.
IMPORTANT
In DCN environments, you must match the Networking service AZ names with
Compute service AZ names.
The Networking service assigns these AZs if a user fails to specify an AZ with the --availability-
zone-hint option when creating a network or a router.
Example
parameter_defaults:
  NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'
4. Determine the AZs for the gateway nodes (Controllers and Network nodes) by entering values
for the OVNAvailabilityZone parameter.
IMPORTANT
Example
In this example, roles have been predefined for Controllers for the az-central AZ, and
Networkers for the datacenter1 and datacenter2 AZs:
parameter_defaults:
  NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'
  ControllerCentralParameters:
    OVNCMSOptions: 'enable-chassis-as-gw'
    OVNAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1'
  NetworkerDatacenter1Parameters:
    OVNCMSOptions: 'enable-chassis-as-gw'
    OVNAvailabilityZone: 'az-datacenter1'
  NetworkerDatacenter2Parameters:
    OVNCMSOptions: 'enable-chassis-as-gw'
    OVNAvailabilityZone: 'az-datacenter2'
IMPORTANT
5. By default, the router scheduler is AZLeastRoutersScheduler. If you want to change this, enter
the new scheduler with the NeutronRouterSchedulerDriver parameter.
Example
parameter_defaults:
  NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'
  ControllerCentralParameters:
    OVNCMSOptions: 'enable-chassis-as-gw'
    OVNAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1'
  NetworkerDCN1Parameters:
    OVNCMSOptions: 'enable-chassis-as-gw'
    OVNAvailabilityZone: 'az-datacenter1'
  NetworkerDCN2Parameters:
    OVNCMSOptions: 'enable-chassis-as-gw'
    OVNAvailabilityZone: 'az-datacenter2'
  NeutronRouterSchedulerDriver: 'neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler'
6. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
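For example, assuming that the custom environment file is the my-neutron-environment.yaml file created earlier in this procedure, the deployment command might resemble the following sketch:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-neutron-environment.yaml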
Verification
Confirm that availability zones are properly defined by running the availability zone list
command.
Example
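For example, a command like the following can produce output like the sample below; the --network option, which limits the list to Networking service zones, is optional:
$ openstack availability zone list --network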
Sample output
+----------------+-------------+
| Zone Name | Zone Status |
+----------------+-------------+
| az-central | available |
| az-datacenter1 | available |
| az-datacenter2 | available |
+----------------+-------------+
Additional resources
NOTE
If you fail to assign an AZ when creating a network or a router, the RHOSP Networking
service automatically assigns to the resource the value that was specified for the RHOSP
Orchestration service (heat) parameter, NeutronDefaultAvailabilityZones. If no value is
defined for NeutronDefaultAvailabilityZones, the resources are scheduled without any AZ
attributes.
For RHOSP Networking service agents that use the Modular Layer 2 plug-in with the
Open vSwitch (ML2/OVS) mechanism driver, if no AZ hint is supplied and no value
specified for NeutronDefaultAvailabilityZones, then the Compute service (nova) AZ
value is used to schedule the agent.
Prerequisites
Running the RHOSP Networking service that uses either the ML2/OVS or ML2/OVN (Open
Virtual Network) mechanism drivers.
Procedure
When you create a network on the overcloud using the OpenStack client, use the
--availability-zone-hint option.
NOTE
In the following example, a network (net1) is created and assigned to either AZ zone-1 or zone-
2:
Network example
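For example, a command like the following sketch creates the network with two AZ hints; net1 and the zone names are sample values:
$ openstack network create --availability-zone-hint zone-1 \
  --availability-zone-hint zone-2 net1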
Sample output
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | zone-1 |
| | zone-2 |
| availability_zones | |
| created_at | 2021-07-31T22:14:12Z |
| description | |
| headers | |
| id | ad88e059-e7fa-4cf7-8857-6731a2a3a554 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1450 |
| name | net1 |
| port_security_enabled | True |
| project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 77 |
| revision_number |3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2021-07-31T22:14:13Z |
+---------------------------+--------------------------------------+
When you create a router on the overcloud using the OpenStack client, use the --ha and --
availability-zone-hint options.
In the following example, a router (router1) is created and assigned to either AZ zone-1 or
zone-2:
Router example
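For example, a command like the following sketch creates the HA router with two AZ hints; router1 and the zone names are sample values:
$ openstack router create --ha --availability-zone-hint zone-1 \
  --availability-zone-hint zone-2 router1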
Sample output
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | zone-1 |
| | zone-2 |
| availability_zones | |
| created_at | 2021-07-31T22:16:54Z |
| description | |
| distributed | False |
| external_gateway_info | null |
| flavor_id | None |
| ha | False |
| headers | |
| id | ced10262-6cfe-47c1-8847-cd64276a868c |
| name | router1 |
| project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
| revision_number |3 |
| routes | |
| status | ACTIVE |
| tags | [] |
| updated_at | 2021-07-31T22:16:56Z |
+-------------------------+--------------------------------------+
Notice that the actual AZ is not assigned at the time that you create the network resource. The
RHOSP Networking service assigns the AZ when it schedules the resource.
Verification
Enter the appropriate OpenStack client show command to confirm in which zone the resource
is hosted.
Example
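For example, to check the network created earlier in this procedure, the command might be:
$ openstack network show net1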
Sample output
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | zone-1 |
| | zone-2 |
| availability_zones | zone-1 |
| | zone-2 |
| created_at | 2021-07-31T22:14:12Z |
| description | |
| headers | |
| id | ad88e059-e7fa-4cf7-8857-6731a2a3a554 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1450 |
| name | net1 |
| port_security_enabled | True |
| project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 77 |
| revision_number |3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2021-07-31T22:14:13Z |
+---------------------------+--------------------------------------+
Additional resources