Juniper Networks
Contrail Networking Installation and Upgrade Guide
Release 21.3
Published 2021-12-29
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. in
the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks
are the property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right
to change, modify, transfer, or otherwise revise this publication without notice.
Contrail Networking Installation and Upgrade Guide
Release 21.3
Copyright © 2021 Juniper Networks, Inc. All rights reserved.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related
limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with)
Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License Agreement
(“EULA”) posted at https://support.juniper.net/support/eula/. By downloading, installing or using such software, you
agree to the terms and conditions of that EULA.
Table of Contents
About the Documentation | xv
Contrail Containers | 6
playbooks/provision_instances.yml | 8
playbooks/configure_instances.yml | 8
playbooks/install_contrail.yml | 9
Prerequisites | 9
Supported Providers | 9
Provider Configuration | 10
Instances Configuration | 14
Contrail Command | 20
Server Requirements | 21
Software Requirements | 21
Search functionality | 64
Adding a New Compute Node to Existing Contrail Cluster Using Contrail Command | 85
Example: Config.YML File for Deploying Contrail Command with a Cluster Using Juju | 97
Install Contrail Insights on the Juju Cluster after Contrail Command is Installed | 100
Install Contrail Insights Flows on the Juju Cluster after Contrail Insights is Installed | 101
How to Perform a Zero Impact Contrail Networking Upgrade using the Ansible Deployer | 114
Updating Contrail Networking using the Zero Impact Upgrade Process in an Environment using
Red Hat Openstack 16.1 | 121
Prerequisites | 121
Updating Contrail Networking in an Environment using Red Hat Openstack 16.1 | 122
Updating Contrail Networking using the Zero Impact Upgrade Process in an Environment using
Red Hat Openstack 13 | 129
Prerequisites | 130
Updating Contrail Networking using the Zero Impact Upgrade Procedure in a Canonical Openstack
Deployment with Juju Charms | 139
Prerequisites | 139
Recommendations | 140
Updating Contrail Networking in a Canonical Openstack Deployment Using Juju Charms | 141
Upgrading Contrail Networking Release 19xx with RHOSP13 to Contrail Networking Release 2011
with RHOSP16.1 | 145
Upgrading Contrail Networking Release 1912.L2 with RHOSP13 to Contrail Networking Release
2011.L3 with RHOSP16.1 | 147
How to Upgrade Contrail Networking Through Kubernetes and/or Red Hat OpenShift | 150
Deploying Red Hat Openstack with Contrail Control Plane Managed by Tungsten Fabric
Operator | 154
How to Backup and Restore Contrail Databases in JSON Format in Openstack Environments Using
the Openstack 16.1 Director Deployment | 159
How to Backup and Restore Contrail Databases in JSON Format in Openstack Environments Using
the Openstack 13 or Ansible Deployers | 170
Example: How to Restore a Database Using the JSON Backup (Ansible Deployer
Environment) | 184
Example: How to Restore a Database Using the JSON Backup (Red Hat Openstack Deployer
Environment) | 188
Heat Version 2 with Service Templates and Port Tuple Sample Workflow | 338
Queuing | 346
Overview | 348
Limitations | 350
Overview | 384
Incompatibilities | 394
Installing Contrail with Kubernetes in Nested Mode by Using Juju Charms | 463
Installing OpenStack Octavia LBaaS with Juju Charms in Contrail Networking | 467
Using Netronome SmartNIC vRouter with Contrail Networking and Juju Charms | 476
Example Instances.yml for Contrail and Contrail Insights OpenStack Deployment | 493
Requirements | 499
Requirements | 511
Connectivity | 512
Configuration | 519
Configuring the Control Node with BGP from Contrail Command | 533
Use this guide to install and upgrade the Contrail Networking solution for your environment. This guide
covers various installation scenarios including:
• Contrail Command
• Contrail with Openstack, including Openstack, Red Hat Openstack, and Canonical Openstack
• Contrail Networking Cloud-Native User Guide: Provides information about installing and using Contrail Networking in cloud-native environments using Kubernetes orchestration.
• Contrail Networking Fabric Lifecycle Management Guide: Provides information about Contrail underlay management and data center automation.
• Contrail Networking and Security User Guide: Provides information about creating and orchestrating highly secure virtual networks.
• Contrail Networking Service Provider Focused Features Guide: Provides information about the features that are used by service providers.
• Contrail Networking Monitoring and Troubleshooting Guide: Provides information about Contrail Insights and Contrail analytics.
To obtain the most current version of all Juniper Networks technical documentation, see the product
documentation page on the Juniper Networks website at https://www.juniper.net/documentation/.
If the information in the latest release notes differs from the information in the documentation, follow the
product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject matter experts.
These books go beyond the technical documentation to explore the nuances of network architecture,
deployment, and administration. The current list can be viewed at https://www.juniper.net/books.
Documentation Conventions
• Laser warning: Alerts you to the risk of personal injury from a laser.
Table 3 on page xvii defines the text and syntax conventions used in this guide.
• Bold text like this: Represents text that you type. Example: To enter configuration mode, type the
configure command: user@host> configure
• Fixed-width text like this: Represents output that appears on the terminal screen. Example:
user@host> show chassis alarms
No alarms currently active
• Italic text like this: Introduces or emphasizes important new terms; identifies guide names. Example:
A policy term is a named structure that defines match conditions and actions.
• Italic text like this: Represents variables (options for which you substitute a value) in commands or
configuration statements. Example: Configure the machine’s domain name:
[edit]
root@# set system domain-name domain-name
• Text like this: Represents names of configuration statements, commands, files, and directories;
configuration hierarchy levels; or labels on routing platform components. Examples: To configure a stub
area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level. The console
port is labeled CONSOLE.
• < > (angle brackets): Encloses optional keywords or variables. Example: stub <default-metric metric>;
• # (pound sign): Indicates a comment specified on the same line as the configuration statement to which
it applies. Example: rsvp { # Required for dynamic MPLS only
• [ ] (square brackets): Encloses a variable for which you can substitute one or more values. Example:
community name members [ community-ids ]
GUI Conventions
• Bold text like this: Represents graphical user interface (GUI) items you click or select. Examples: In the
Logical Interfaces box, select All Interfaces. To cancel the configuration, click Cancel.
• > (bold right angle bracket): Separates levels in a hierarchy of menu selections. Example: In the
configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
We encourage you to provide feedback so that we can improve our documentation. You can use either
of the following methods:
• Online feedback system—Click TechLibrary Feedback, on the lower right of any page on the Juniper
Networks TechLibrary site, and do one of the following:
• Click the thumbs-up icon if the information on the page was helpful to you.
• Click the thumbs-down icon if the information on the page was not helpful to you or if you have
suggestions for improvement, and use the pop-up form to provide feedback.
Technical product support is available through the Juniper Networks Technical Assistance Center (JTAC).
If you are a customer with an active Juniper Care or Partner Support Services support contract, or are
covered under warranty, and need post-sales technical support, you can access our tools and resources
online or open a case with JTAC.
• JTAC policies—For a complete understanding of our JTAC procedures and policies, review the JTAC User
Guide located at https://www.juniper.net/us/en/local/pdf/resource-guides/7100059-en.pdf.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day, 7 days a week,
365 days a year.
For quick and easy problem resolution, Juniper Networks has designed an online self-service portal called
the Customer Support Center (CSC) that provides you with the following features:
• Find solutions and answer questions using our Knowledge Base: https://kb.juniper.net/
• To verify service entitlement by product serial number, use our Serial Number Entitlement (SNE) Tool:
https://entitlementsearch.juniper.net/entitlementsearch/
You can create a service request with JTAC on the Web or by telephone.
• Visit https://myjuniper.juniper.net.
Understanding Contrail | 2
Contrail Command | 20
CHAPTER 1
Understanding Contrail
Contrail Networking provides dynamic end-to-end networking policy and control for any cloud, any
workload, and any deployment, from a single user interface. It translates abstract workflows into specific
policies, simplifying the orchestration of virtual overlay connectivity across all environments.
It unifies policy for network automation with seamless integrations for systems such as Kubernetes,
OpenShift, Mesos, OpenStack, and VMware, popular DevOps tools like Ansible, and a variety of Linux
operating systems with or without virtualization, such as KVM and Docker containers.
Contrail Networking is a fundamental building block of Contrail Enterprise Multicloud for enterprises. It
manages your data center networking devices, such as QFX Series Switches, Data Center Interconnect
(DCI) infrastructures, and public cloud gateways, extending continuous connectivity from your
on-premises environment to private and public clouds.
Contrail Networking reduces the friction of migrating to cloud by providing a virtual networking overlay
layer that delivers virtual routing, bridging, and networking services (IPAM, NAT, security, load balancing,
VPNs, etc.) over any existing physical or cloud IP network. It also provides multitenant structure and API
compatibility with multitenant public clouds like Amazon Web Services (AWS) virtual private clouds (VPCs)
for truly unifying policy semantics for hybrid cloud environments.
For service providers, Contrail Networking automates network resource provisioning and orchestration
to dynamically create highly scalable virtual networks and to chain a rich set of Juniper Networks or
third-party virtualized network functions (VNFs) and physical network functions (PNFs) to form differentiated
service chains on demand.
Contrail Networking is also integrated with Contrail Cloud for service providers. It enables you to run
high-performance Network Functions Virtualization (NFV) with always-on reliability so that you can deliver
innovative services with greater agility.
Contrail Networking is equipped with always-on advanced analytics capabilities to provide deep insights
into application and infrastructure performance for better visualization, easier diagnostics, rich reporting,
custom application development, and machine automation. It also supports integration with other analytics
platforms like Juniper Networks Contrail Insights and streaming analytics through technologies like Apache
Kafka and its API.
Contrail Networking also provides a Graphical User Interface (GUI). This GUI is built entirely using the
REST APIs.
• Contrail Networking management Web GUI and plug-ins integrate with orchestration platforms such as
Kubernetes, OpenShift, Mesos, OpenStack, VMware vSphere, and with service provider operations
support systems/business support systems (OSS/BSS). Many of these integrations are built, certified,
and tested with technology alliances like Red Hat, Mirantis, Canonical, NEC, and more. Contrail Networking
sits under such orchestration systems and integrates northbound via published REST APIs. It can be
automatically driven through the APIs and integrations, or managed directly using the Web GUI, called
Contrail Command GUI.
• Contrail Networking control and management systems, commonly called the controller, have several
functions. A few of the major functions are:
• Configuration Nodes—This function accepts requests from the API to provision workflows like adding
new virtual networks, new endpoints, and much more. It converts these abstract high-level requests,
with optional detail, into low-level directions that map to the internal data model.
• Control Nodes—This function maintains a scalable, highly available network model and state by federating
with other peer instances of itself. It directs network provisioning for the Contrail Networking vRouters
using Extensible Messaging and Presence Protocol (XMPP). It can also exchange network connectivity
and state with peer physical routers using open industry-standard MP-BGP, which is useful for routing
the overlay networks and north-south traffic through a high-performance cloud gateway router.
• Analytics Nodes—This function collects, stores, correlates, and analyzes data across network elements.
This information, which includes statistics, logs, events, and errors, can be consumed by end-user or
network applications through the northbound REST API or Apache Kafka. Through the Web GUI, the
data can be analyzed with SQL style queries.
• Contrail Networking vRouter runs on the compute nodes of the cloud or NFV infrastructure. It gets network
tenancy, VPN, and reachability information from the control function nodes and ensures native Layer 3
services for the Linux host on which it runs or for the containers or virtual machines of that host. Each
vRouter is connected to at least two control nodes to optimize system resiliency. The vRouters run in
one of two high performance implementations: as a Linux kernel module or as an Intel Data Plane
Development Kit (DPDK)-based process.
IN THIS SECTION
Contrail Containers | 6
Contrail Containers
The following are key features of the new architecture of Contrail containers:
• Each container has an INI-based configuration file that has the configurations for all of the applications
running in that container.
• A single tool, Ansible, is used for all levels of building, deploying, and provisioning the containers. The
Ansible code for the Contrail system is named contrail-ansible and kept in a separate repository. The
Contrail Ansible code is responsible for all aspects of Contrail container build, deployment, and basic
container orchestration.
The containers and their processes are grouped as services and microservices, and are similar to pods in
the Kubernetes open-source software used to manage containers on a server cluster.
Figure 3 on page 7 shows how the Contrail containers and microservices are grouped into a pod structure
upon installation.
These procedures help you to install and manage Contrail with microservices architecture. Refer to the
following topics for installation instructions for the operating system appropriate for your system:
IN THIS SECTION
Supported Providers | 9
This topic provides an overview of the contrail-ansible-deployer used by the Contrail Command tool to
install Contrail Networking with microservices architecture.
IN THIS SECTION
playbooks/provision_instances.yml | 8
playbooks/configure_instances.yml | 8
playbooks/install_contrail.yml | 9
The contrail-ansible-deployer is a set of Ansible playbooks designed to deploy Contrail Networking with
microservices architecture.
playbooks/provision_instances.yml
This play provisions the operating system instances for hosting the containers. It supports the following
infrastructure providers:
• kvm.
• gce.
• aws.
playbooks/configure_instances.yml
This play configures the provisioned instances. The playbook installs software and configures the operating
system to meet the required prerequisite standards. This is applicable to all providers.
playbooks/install_contrail.yml
This play pulls, configures, and starts the Contrail containers.
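The plays are typically run in that order with ansible-playbook. The following is a sketch of the sequence,
assuming the deployer reads its configuration from config/instances.yaml in the contrail-ansible-deployer
repository and that OpenStack is the orchestrator; adjust the flags for your environment:

cd contrail-ansible-deployer
ansible-playbook -i inventory/ playbooks/provision_instances.yml
ansible-playbook -i inventory/ playbooks/configure_instances.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml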
This section helps you prepare your system before installing Contrail Networking using
contrail-command-deployer.
Prerequisites
Make sure your system meets the following requirements before running contrail-command-deployer.
• Confirm that you are running compatible versions of CentOS, Ansible, Docker, and any other software
component for your system in your environment. See Contrail Networking Supported Platforms List.
• Name resolution is operational for both long and short host names of the cluster nodes, through either
DNS or the hosts file.
• For high availability (HA), confirm that the time is in sync between the cluster nodes.
• The time must be synchronized between the cluster nodes using Network Time Protocol (NTP). A quick
verification is sketched after this list.
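A simple way to verify both prerequisites, using standard Linux tools (the host names here are placeholders;
use chronyc instead of ntpq if your nodes run chrony):

# Verify long and short name resolution for each cluster node.
getent hosts node1 node1.example.local

# Verify that the clock is synchronized with an NTP source.
ntpq -p          # or: chronyc tracking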
Supported Providers
IN THIS SECTION
Provider Configuration | 10
Instances Configuration | 14
The configuration for all three plays is contained in a single file, config/instances.yaml.
The main sections of the config/instances.yaml file are described in this section. Using the sections that
are appropriate for your system, configure each with parameters specific to your environment.
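The file is organized into a handful of top-level sections. A minimal skeleton, using only the section names
described in this topic and placeholder values, looks like this:

# config/instances.yaml (skeleton only; all values are placeholders)
provider_config:                  # Provider-specific settings (bms, kvm, aws, or gce)
  bms:
    ssh_user: centos
instances:                        # The hosts and the roles assigned to each
  server1:
    provider: bms
    ip: 10.1.1.1
    roles:
      config:
global_configuration:             # Registry and other global settings
  CONTAINER_REGISTRY: hub.juniper.net/contrail
contrail_configuration:           # Contrail-wide parameters
kolla_config:                     # OpenStack Kolla settings (OpenStack deployments only)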
Provider Configuration
The section provider_config configures provider-specific settings.
NOTE: Passwords are provided in this output for illustrative purposes only. We suggest using
unique passwords in accordance with your organization’s security guidelines in your environment.
    nameserver: dns-ip-address               # Mandatory for provision play.
    ntpserver: ntp-server-ip-address         # Mandatory for provision/configuration play.
    domainsuffix: local                      # Mandatory for provision play.
NOTE: Passwords are provided in this output for illustrative purposes only. We suggest using
unique passwords in accordance with your organization’s security guidelines in your environment.
provider_config:
  bms:                                            # Mandatory.
    ssh_pwd: contrail123                          # Optional. Not needed if ssh keys are used.
    ssh_user: centos                              # Mandatory.
    ssh_public_key: /home/centos/.ssh/id_rsa.pub  # Optional. Not needed if ssh password is used.
    ssh_private_key: /home/centos/.ssh/id_rsa     # Optional. Not needed if ssh password is used.
    ntpserver: ntp-server-ip-address              # Optional. Needed if ntp server should be configured.
    domainsuffix: local                           # Optional. Needed if configuration play should configure /etc/hosts
CAUTION: SSH Host Identity Keys must be accepted or installed on the Deployer node
before proceeding with Contrail installation.
To do so, use one of the following methods:
• Make an SSH connection to each target machine from the Deployer VM using the deployer user
credentials and accept the SSH host key when prompted.
• Set the environment variable ANSIBLE_HOST_KEY_CHECKING=false.
• Set host_key_checking=false in the [defaults] section of the Ansible configuration file.
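One way to pre-accept the host keys without logging in to each machine interactively, sketched with the
standard ssh-keyscan utility (the host names are placeholders for your target machines):

# Run on the Deployer node.
for host in server1 server2 server3; do
  ssh-keyscan -H "$host" >> ~/.ssh/known_hosts
done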
provider_config:
aws: # Mandatory.
ec2_access_key: THIS_IS_YOUR_ACCESS_KEY # Mandatory.
ec2_secret_key: THIS_IS_YOUR_SECRET_KEY # Mandatory.
ssh_public_key: /home/centos/.ssh/id_rsa.pub # Optional.
ssh_private_key: /home/centos/.ssh/id_rsa # Optional.
ssh_user: centos # Mandatory.
instance_type: t2.xlarge # Mandatory.
image: ami-337be65c # Mandatory.
region: eu-central-1 # Mandatory.
security_group: SECURITY_GROUP_ID # Mandatory.
vpc_subnet_id: VPC_SUBNET_ID # Mandatory.
assign_public_ip: yes # Mandatory.
volume_size: 50 # Mandatory.
key_pair: KEYPAIR_NAME # Mandatory.
provider_config:
  gce:                           # Mandatory.
    service_account_email:       # Mandatory. GCE service account email address.
    credentials_file:            # Mandatory. Path to GCE account json file.
    project_id:                  # Mandatory. GCE project name.
    ssh_user:                    # Mandatory. SSH user for GCE instances.
    ssh_pwd:                     # Optional. SSH password used by the SSH user; not needed when a public key is used.
    ssh_private_key:             # Optional. Path to the private SSH key used by the SSH user; not needed when ssh-agent has loaded the private key.
    machine_type: n1-standard-4  # Mandatory. Default is too small.
    image: centos-7              # Mandatory. For provisioning and configuration, only centos-7 is currently supported.
    network: microservice-vn     # Optional. Defaults to default.
    subnetwork: microservice-sn  # Optional. Defaults to default.
    zone: us-west1-a             # Optional.
    disk_size: 50                # Mandatory. Default is too small.
global_configuration:
CONTAINER_REGISTRY: hub.juniper.net/contrail
REGISTRY_PRIVATE_INSECURE: True
CONTAINER_REGISTRY_USERNAME: YourRegistryUser
CONTAINER_REGISTRY_PASSWORD: YourRegistryPassword
For a complete list of the parameters available in contrail_configuration, see Contrail Configuration
Parameters for Ansible Deployer.
The kolla_config section contains the Kolla (OpenStack container) settings for OpenStack-based deployments.
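A sketch of these two sections with commonly used keys; every value here is a placeholder and must be
adjusted for your environment:

contrail_configuration:
  CONTRAIL_VERSION: <container_tag>        # Release tag from README Access to Contrail Registry 21XX
  CLOUD_ORCHESTRATOR: openstack            # Assumed orchestrator for this sketch
kolla_config:
  kolla_globals:
    enable_haproxy: no                     # Same Kolla globals shown later in the Contrail Command workflow
    enable_swift: yes
  kolla_passwords:
    keystone_admin_password: <admin-password>   # Placeholder; follow your password policy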
Instances Configuration
Instances are the operating systems on which the containers will be launched. The instance configuration
has a few provider-specific knobs. The instance configuration specifies which roles are installed on which
instance. Additionally, instance-wide and role-specific Contrail and Kolla configurations can be specified,
overwriting the parameters from the global Contrail and Kolla configuration settings.
instances:
kvm1:
provider: kvm
roles:
config_database:
config:
control:
analytics_database:
analytics:
webui:
kubemanager:
k8s_master:
instances:
gce1: # Mandatory. Instance name
provider: gce # Mandatory. Instance runs on GCE
instances:
aws1:
provider: aws
aws2:
provider: aws
aws3:
provider: aws
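Role assignments can also carry per-instance parameters that override the global Contrail configuration.
A sketch, in which the addresses and the VROUTER_GATEWAY key are illustrative placeholders:

instances:
  compute1:
    provider: bms
    ip: 10.1.1.11
    roles:
      vrouter:
        VROUTER_GATEWAY: 10.1.1.1    # Role-specific override of the global setting
      openstack_compute: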
More Examples
Refer to the following for more configuration examples for instances.
• GCE Kubernetes (k8s) HA with separate control and data plane instances
To perform a full installation of a Contrail system, refer to the installation instructions in: “Installing a
Contrail Cluster using Contrail Command and instances.yml” on page 73.
CHAPTER 2
Server Requirements and Supported Platforms
• 64 GB memory.
• 4 CPU cores.
A server can be either a physical device or a virtual machine. For scalability and availability reasons, we
highly recommend using physical servers whenever possible.
Server role assignments vary by environment. All non-compute roles can be configured in each controller
node if desired in your topology.
All installation images are available in repositories and can also be downloaded from the Contrail Downloads
page.
• All dependent software packages needed to support installation and operation of OpenStack and Contrail.
All components required for installing the Contrail Controller are available for each Contrail release, for
the supported Linux operating systems and versions, and for the supported versions of OpenStack.
For a list of supported platforms for all Contrail Networking releases, see Contrail Networking Supported
Platforms List.
Container tags are listed in README Access to Contrail Registry 21XX.
If you need access to the Contrail docker private secure registry, e-mail [email protected] for
Contrail container registry credentials.
The following tables list the minimum and total memory and disk requirements per x86 server or per virtual
machine for installing Contrail Networking.
The following tables list the minimum and total memory and disk requirements for installing Contrail
Networking Release 2011.
Number of Devices | Number of VMIs (2500 VPGs each with 102 VLANs) | vCPUs (hyperthreaded) | RAM (GB) | Disk (GB)

Number of Contrail Insights Nodes | Number of VMs | Number of Compute Nodes | vCPUs | RAM (GB) | Disk (TB)
3 | 10K | 300 | 64 | 64 | 2
The following tables list the minimum and total memory and disk requirements for installing Contrail
Networking Release 2008.
Number of Devices | Number of VMIs (2500 VPGs each with 102 VLANs) | vCPUs (hyperthreaded) | RAM (GB) | Disk (GB)

Number of Contrail Insights Nodes | Number of VMs | Number of Compute Nodes | vCPUs | RAM (GB) | Disk (TB)
3 | 10K | 300 | 64 | 64 | 2
CHAPTER 3
Contrail Command
IN THIS CHAPTER
Adding a New Compute Node to Existing Contrail Cluster Using Contrail Command | 85
IN THIS SECTION
Server Requirements | 21
Software Requirements | 21
Use this document to install Contrail Command—the graphical user interface for Contrail Networking—and
provision your servers or VMs as nodes in a Contrail cluster. Servers or VMs are provisioned into compute
nodes, control nodes, orchestrator nodes, Contrail Insights nodes, Contrail Insights Flows nodes, or service
nodes to create your Contrail cluster using this procedure.
NOTE: Contrail Insights and Contrail Insights Flows were previously named Appformix and
Appformix Flows.
We strongly recommend Contrail Command as the primary interface for configuring and maintaining
Contrail Networking.
You should, therefore, complete the procedures in this document as an initial configuration task in your
Contrail Networking environment.
Server Requirements
A Contrail Networking environment can include physical servers or VMs providing server functions,
although we highly recommend using physical servers for scalability and availability reasons whenever
possible.
• 64 GB memory.
• 4 CPU cores.
For additional information on server requirements for Contrail Networking, see “Server Requirements and
Supported Platforms” on page 16.
Software Requirements
Contrail Command and Contrail Networking are updated simultaneously and always run the same version
of Contrail Networking software.
Each Contrail Networking release has software compatibility requirements based on the orchestration
platform version, the deployer used to deploy the orchestration platform, the supported server operating
system version, and other software requirements.
For a list of supported platforms for all Contrail Networking releases and additional environment-specific
software requirements, see Contrail Networking Supported Platforms List.
Starting in Contrail Release 2005, the Contrail Insights and Contrail Insights Flows images that support
a Contrail Networking release are automatically provisioned within Contrail Command. When you
download your version of Contrail Command, Contrail Command pulls the Contrail Insights and Contrail
Insights Flows images for your Contrail Networking version automatically from within the Juniper Contrail
registry. You do not, therefore, need to separately download any individual Contrail Insights software
or have awareness of Contrail Insights or Contrail Insights Flows version numbers for your installation.
The procedures used in this document download the Contrail Command, Contrail Insights, and Contrail
Insights Flows software from the Juniper Networks Contrail docker private secure registry at hub.juniper.net.
Email [email protected] to obtain access credentials to this registry.
You will need to know the Container Tags for your Contrail image to retrieve Contrail images from the
Contrail registry. See README Access to Contrail Registry 21XX.
Contrail Networking images are also available at the Contrail Downloads page. Enter Contrail Networking
as the product name.
Contrail Insights and Contrail Insights Flows images are also available at the Contrail Insights Download
page. Enter Contrail Insights as the product name.
Contrail Command is a single pane-of-glass GUI for Contrail Networking. For an optimized Contrail
Networking experience, we strongly recommend installing Contrail Command before creating your Contrail
clusters. Contrail Command is installed using these instructions.
For additional information on Contrail Command, see “Understanding Contrail Networking Components”
on page 4.
• 4 vCPUs
• 32 GB RAM
• 100 GB disk storage with all user storage in the “/” partition.
If the “/home” partition exists, remove it and increase the “/” partition by the amount of freed storage.
For a list of CentOS versions that are supported with Contrail Networking and orchestration platform
combinations, see Contrail Networking Supported Platforms List.
You can install CentOS with updated packages using the yum update command.
• Has access to the Contrail Container registry at hub.juniper.net. This access is needed because the Contrail
Command deployer, which includes the Contrail Command docker images, is retrieved from this registry
during this installation procedure.
If you do not have access to the Contrail Container registry, email [email protected] to obtain
access credentials. See README Access to Contrail Registry 21XX for additional information about
accessing this registry.
• Includes at least one active IP interface attached to the management network. Contrail Command
manages Contrail and orchestrator clusters over a management IP interface.
Obtain the container tag for the release that you are installing. A container tag is necessary to identify the
Contrail Command container files in the hub.juniper.net repository that are installed during this procedure.
The container tag for any Contrail Release 21-based image can be found in README Access to Contrail
Registry 21XX.
1. Log onto the server that will host Contrail Command and all servers in your Contrail cluster. The servers
in your Contrail cluster are the devices that will be provisioned into compute, control, orchestrator,
Contrail Insights, Contrail Insights Flows, or service node roles.
2. Verify the hosts in the hosts file, and add the name and IP address of each host that you are adding to
the file.
In this example, the hosts file is edited using VI to include the name and IP address of the three other
servers—contrail-cluster, insights, and insights-flows—that will be provisioned into the contrail cluster
during this procedure.
NOTE: The hosts file is typically overwritten during the provisioning process. This step can
be skipped in most Contrail cluster provisioning scenarios, but is recommended as a precaution.
If needed, update the Contrail Command hostname accordingly to match the hostname that you will
use in the Contrail Command cluster.
NOTE: The hostname file is typically overwritten during the provisioning process. This step
can be skipped in most Contrail cluster provisioning scenarios, but is recommended as a
precaution.
4. If you haven’t already generated a shared RSA key for the servers in the cluster, generate and share
the RSA key.
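If you need to generate and distribute the key, here is a sketch using the standard OpenSSH tools (the user
name is a placeholder, and 10.1.1.2 is the cluster node address used later in this procedure):

# On the Contrail Command server: generate a key pair, accepting the defaults.
ssh-keygen -t rsa

# Copy the public key to every server in the cluster, then confirm password-less access.
ssh-copy-id <user>@10.1.1.2
ssh <user>@10.1.1.2 hostname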
5. SSH into each server that will be provisioned into the Contrail cluster to confirm reachability and
accessibility:
NOTE: The routes connecting the servers are created outside the Contrail Networking
environment and the process to create the routes varies by environment. This procedure,
therefore, does not provide the instructions for creating these routes.
In this example, the routes are verified on the Contrail Command server.
Perform this step on the Contrail Command server and all servers in your Contrail cluster.
In this example, each node in the Contrail cluster is pinged from the Contrail Command server.
^C
--- 10.1.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.220/0.411/0.602/0.191 ms
Perform this step on the Contrail Command server and all servers in your Contrail cluster.
8. Check the Linux kernel version and, if needed, update the Linux kernel version. If a kernel version
update is performed, reboot the server to complete the update.
In this example, the Linux kernel is verified on the Contrail Command server.
[root@ix-cn-ccmd-01 ~]# ls
anaconda-ks.cfg kernel-3.10.0-1062.12.1.el7.x86_64.rpm
Perform this step on the Contrail Command server and all servers in your Contrail cluster.
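A sketch of the kernel check and update, assuming a yum-based host as used elsewhere in this procedure:

# Check the running kernel version.
uname -r

# Update the kernel if required, then reboot to load it.
yum -y update kernel          # or: yum -y localinstall kernel-<version>.rpm for a downloaded package
reboot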
1. Log into the server that will host the Contrail Command containers. This server will be called the Contrail
Command server for the remainder of this procedure.
$ ssh [email protected]
[email protected]'s password: password
2. Remove all installed Python Docker libraries—docker and docker-py—from the Contrail Command
server:
The Python Docker libraries will not exist on the server if a new version of CentOS 7-based software
was recently installed. Entering this command when no Python Docker libraries are installed does not
harm any system functionality.
The Contrail Command Deployer, which is deployed later in this procedure, installs all necessary libraries,
including the Python Docker libraries.
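One way to remove the libraries, depending on how they were originally installed (both forms are shown
as a sketch; run only the one that applies to your server):

# If the libraries were installed with pip:
pip uninstall -y docker docker-py

# If the libraries were installed as distribution packages:
yum remove -y python-docker python-docker-py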
There are multiple ways to perform this step. In this example, Docker Community Edition version 18.03
is installed using yum install and yum-config-manager commands and started using the systemctl start
docker command.
NOTE: The Docker version supported with Contrail Networking changes between Contrail
releases and orchestration platforms. See Contrail Networking Supported Platforms List. The
yum install -y docker-ce-18.03.1.ce command is used to illustrate the command for one
version of Docker.
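A sketch of the installation described above; confirm the Docker version against the supported platforms
list before running it:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-18.03.1.ce
systemctl start docker
systemctl enable docker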
4. Retrieve the contrail-command-deployer Docker image by logging into hub.juniper.net and entering
the docker pull command.
Variables:
• <container_tag>—container tag for the Contrail Command (UI) container deployment for the release
that you are installing. The <container_tag> for any Contrail Release 21xx image can be found in
README Access to Contrail Registry 21XX.
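A sketch of the login and pull, assuming the deployer image is published under the hub.juniper.net/contrail
registry path used elsewhere in this guide:

docker login hub.juniper.net -u <registry username> -p <registry password>
docker pull hub.juniper.net/contrail/contrail-command-deployer:<container_tag>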
5. Create and save the command_servers.yml configuration file on the Contrail Command server.
The configuration of the command_servers.yml file is unique to your environment and complete
documentation describing all command_servers.yml configuration options is beyond the scope of this
document. Two sample command_servers.yml files for a Contrail environment are provided with this
document in “Sample command_servers.yml Files for Installing Contrail Command” on page 50 to
provide configuration assistance.
Be aware of the following key configuration parameters when configuring the command_servers.yml
file for Contrail Command:
contrail_config:
  database:
    type: postgres
    dialect: postgres
    password: contrail123
  keystone:
    assignment:
      data:
        users:
          admin:
            password: contrail123
• (Contrail Networking Release 2003 or earlier) The following configuration lines must be entered if
you want to deploy Contrail Insights and Contrail Insights Flows:
NOTE: Appformix and Appformix Flows were renamed Contrail Insights and Contrail
Insights Flows. The Appformix naming conventions still appear during product usage,
including within these directory names.
---
user_command_volumes:
- /opt/software/appformix:/opt/software/appformix
- /opt/software/xflow:/opt/software/xflow
The configuration lines must be entered outside of the “command_servers” hierarchy, either
immediately after the "---" at the very top of the file or as the last two lines at the very bottom of
the file. See “Complete command_servers.yml File” on page 52 for an example of these lines added
at the beginning of the command_servers.yml file.
This step is not required to install Contrail Insights and Contrail Insights Flows starting in Contrail
Networking Release 2005.
The contrail_command container is the GUI and the contrail_psql container is the database. Both
containers should have a STATUS of Up.
The contrail-command-deployer container should have a STATUS of Exited because it exits when the
installation is complete.
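You can confirm the expected container states with docker ps; a sketch:

# -a includes the exited contrail-command-deployer container.
docker ps -a | grep -E "contrail_command|contrail_psql|contrail-command-deployer"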
Enter the username and password combination specified in the command_servers.yml file in step 5. If
you use the sample command_servers.yml files in “Sample command_servers.yml Files for Installing
Contrail Command” on page 50, the username is admin and the password is contrail123.
For additional information on logging into Contrail Command, see “How to Login to Contrail Command”
on page 58.
Use this procedure to provision servers into your Contrail cluster. A Contrail cluster is a collection of
interconnected servers that have been provisioned as compute nodes, control nodes, orchestrator nodes,
Contrail Insights nodes, Contrail Insights Flows nodes, or service nodes in a cloud networking environment.
• Ensure Contrail Command is installed. See “How to Install Contrail Command” on page 22.
• Ensure all servers hosting Contrail cluster functions meet the specifications listed in “Server Requirements”
on page 21.
1. (Contrail Networking Release 2003 target release installations using Appformix only) Download the
Appformix and, if you're also using Appformix Flows, the Appformix Flows images from the Contrail
Insights Download page.
NOTE: Appformix and Appformix Flows were renamed Contrail Insights and Contrail Insights
Flows. The Appformix filename conventions are used to name these files for use with Contrail
Networking Release 2003.
For Contrail Release 2003, the supported AppFormix version is 3.1.15 and the supported
AppFormix Flows version is 1.0.7.
appformix-<version>.tar.gz
appformix-platform-images-<version>.tar.gz
appformix-dependencies-images-<version>.tar.gz
appformix-network_device-images-<version>.tar.gz
appformix-openstack-images-<version>.tar.gz
appformix-flows-<version>.tar.gz
appformix-flows-ansible-<version>.tar.gz
• Copy the tar.gz files to the /opt/software/appformix/ directory on the Contrail Command server.
• (Appformix Flows environments only) Copy the two appformix-flows files to the /opt/software/xflow
directory.
You can ignore this step if you are not using Appformix Flows.
You can skip this step if you are using Contrail Networking Release 2005 or later or are not using
Appformix or Appformix Flows in your environment.
Leave the Select Cluster field blank to enter Contrail Command in a wizard that guides you through
the cluster provisioning process. If Contrail Command is not currently managing a cluster, this is your
only Contrail Command login option.
Your Contrail Command access credentials were specified in the command_servers.yml files in step 5
when you installed Contrail Command. If you used the sample command_servers.yml file to enable
Contrail Command, your username is admin and your password is contrail123.
NOTE: Username and password combinations are provided in this document for illustrative
purposes only. We suggest using unique username and password combinations to maximize
security in accordance with your organization’s security guidelines.
3. You are placed into the Infrastructure > Clusters menu upon login. Click the Add Cluster button to
start the cluster provisioning process.
4. Click the Credentials tab to move to the Credentials box, then the Add button to add access credentials
for a device that will be added to the cluster.
5. In the Add box, add the access credentials for a device in your cluster. Click the Add button to complete
the process and add the access credentials.
Repeat steps 4 and 5 to add the access credentials for each server or VM in your cluster.
6. After clicking the Add button to add the credentials of your last server or VM, click the Server tab to
return to the Available servers box.
8. Complete the fields in the Create Server dialog box for each physical server or VM in your Contrail
cluster. Each physical server or VM that will function as a compute node, control node, orchestrator
node, Contrail Insights node, Contrail Insights Flows node, or service node in your cluster must be
added as a server at this stage of the provisioning process.
Field descriptions:
• Choose Mode—Options include: Express, Detailed, or Bulk Import (CSV). We recommend using the
Detailed or Bulk Import (CSV) modes in most environments to ensure all server field data is entered
and to avoid performing manual configuration tasks later in the procedure.
• Express—includes a limited number of required fields to enter for each server or VM.
• Bulk Import (CSV)—Import the physical server or VM fields from a CSV file.
• Physical/Virtual Node—A virtualized physical server or a VM. This is the option used for most servers
or VMs in Contrail Networking environments.
• Management Interface—the name of the management-network facing interface on the physical server
or VM.
• Disk Partition(s)—(Optional) Specify the disk partitions that you want to use.
• Name (Network interfaces)—the name of a network-facing interface on the physical server or VM.
• IP Address (Network interfaces)—the IP address of the network-facing interface on the physical server
or VM.
Click Add in the Network Interfaces box to add additional network interfaces for the server or VM.
Click the Create button after completing all fields to add the server or VM.
Repeat this step for each physical server or VM that will function as a compute node, control node,
orchestrator node, Contrail Insights node, Contrail Insights Flows node, or service node in the Contrail
cluster.
9. You are returned to the Infrastructure > Clusters > Servers menu after adding the final server. Click
the Next button to proceed to the Provisioning Options page.
Field Descriptions:
• Contrail Cloud—Contrail Cloud Provisioning Manager. Do not use this provisioning manager option.
The remaining steps of this procedure assume Contrail Enterprise Multicloud is selected as the
provisioning manager.
• Container Registry—Path to the container registry to obtain the Contrail Networking image. The path
to the Juniper container registry is hub.juniper.net/contrail and is set as the default container registry
path. Enter this path or the path to the repository used by your organization.
• Insecure checkbox—This option should only be selected if you want to connect to an insecure registry
using a non-secure protocol like HTTP.
This box is unchecked by default. Leave this box unchecked to connect to the Juniper container
registry at hub.juniper.net/contrail or to access any other securely-accessible registry.
• Container Registry Username and Container Registry Password—The credentials used to access the
container registry. The Juniper container registry is often used in this field to obtain the Contrail
Networking image. Email [email protected] to receive a registry username and password
combination to access the Juniper container registry.
• Contrail Version—Specify the version of the Contrail Networking image to use for the upgrade that
is in the repository.
You can use the latest tag to retrieve the most recent image in the repository, which is the default
setting. You can also specify a specific release in this field using the version’s release tag.
See README Access to Contrail Registry 21XX to obtain the release tag for any Contrail Networking
Release 21XX release tag.
This address is typically the IP address of the interface on the leaf device in the fabric that connects
to the server’s network-facing interface.
• Encapsulation Priority—Select the Encapsulation priority order from the drop down menu.
• Fabric Management checkbox—Select this option if you're deploying in an environment using Openstack
for orchestration.
Key: CONTRAIL_CONTAINER_TAG
Value: The container tag for the desired Contrail and OpenStack release combination as specified in
README Access to Contrail Registry 21XX.
Click the Next button to proceed to the Control Nodes provisioning page.
11. From the Control Nodes provisioning page, assign any server that you created in step 8 as a control
node by clicking the > icon next to the server to move it into the Assigned Control Nodes box.
You have the option to remove roles from a control node within the Assigned Control Nodes box. There
is no need to remove control node roles in most deployments; you should only remove roles if you are
an expert user familiar with the consequences.
(Installations using VMWare vCenter only) Complete the following steps to install a control node that
is integrated with VMware vCenter. For additional information on vCenter integration with Contrail
Networking, see Understanding VMware-Contrail Networking Fabric Integration.
Prerequisites:
• In the Data Center Name field, enter the name of the data center under vCenter that CVFM will
work on.
c. Click >, next to the name of the server, to assign a server from the Available Servers table as a
control node. The server is then added to the Assigned Control Nodes table.
d. Click Next.
After assigning all control nodes, click the Next button to move to the Orchestrator Nodes provisioning
page.
12. Select your orchestration platform from the Orchestrator Type drop-down menu.
Assign any one of the servers that you created in step 8 as an orchestrator node by clicking the > icon
next to the server to move it into the Assigned nodes box.
The remaining processes for this step depend on your orchestration platform:
• Openstack
Click the Show Advanced box then scroll to Kolla Globals and click +Add.
Add the following Kolla global Key and Value pairs in most environments:
Key | Value
enable_haproxy | no
enable_ironic | no
enable_swift | yes
swift_disk_partition_size | 20GB
After assigning all orchestrator nodes and Kolla global keys and values, click the Next button to
progress to the Compute Nodes provisioning page.
• Kubernetes
Select the Kubernetes nodes from the list of available servers and assign corresponding roles to the
servers.
After assigning roles to all nodes, click the Next button to progress to the Compute Nodes provisioning
page.
13. Assign any server that you created in step 8 as a compute node by clicking the > icon next to the server
to move it into the Assigned Compute nodes box.
Enter the default vRouter gateway IP Address in the Default Vrouter Gateway box after moving the
server into the Assigned Compute nodes box.
After assigning all compute nodes, click the Next button to progress to the Contrail Service Nodes
provisioning page.
14. Assign any server that you created in step 8 as a Contrail Services node by clicking the > icon next to
the server to move it into the Assigned Service Nodes box.
Contrail service nodes are only used in environments with bare metal servers. If you are not using
Contrail Service nodes in your environment, click the Next button without assigning any servers into
the Assigned Service Nodes box.
The default vRouter gateway IP Address might be autocompleted in the Default Vrouter Gateway box.
This default vRouter gateway is typically the IP address of a leaf device in the fabric that is directly
connected to the server fulfilling the service node role.
After assigning all Contrail Service nodes, click the Next button to progress to the Insights Nodes
provisioning page.
NOTE: The Insights Nodes provisioning workflow is called the Appformix Nodes workflow
in Contrail Networking Release 2005 and earlier releases.
15. Contrail Insights is an optional product that isn’t used in all environments. If you are not using Contrail
Insights in your environment, simply click the Next button without assigning a server as an Appformix
node in this step.
NOTE: Appformix was renamed Contrail Insights. The Appformix naming is still used in some
Contrail Command screens.
• Contrail Insights
If you are using Contrail Insights in your environment, click the > icon next to the server or VM in
the Available servers box to move it into the Assigned Insights Nodes box.
NOTE: The Assigned Insights Nodes box is called Assigned Appformix Nodes in Contrail
Networking Release 2005 and earlier releases.
By default, the server is assigned the appformix_platform_node role. You can maintain this default
setting in most environments. If the role needs to be changed, click within the Roles drop-down menu
and select from the available roles.
If you are also using Contrail Insights Flows in your environment, click the > icon next to the server
or VM in the Available servers box to move it into the Assigned Insights Nodes box.
NOTE: The Assigned Insights Nodes box is called Assigned Appformix Nodes in Contrail
Networking Release 2005 and earlier releases.
Click within the Roles drop-down menu and uncheck the default appformix_platform_node role
selection. Select appformix_bare_host_node from within the Roles drop-down menu to set it as the
role.
Click the Next button to progress to the Appformix Flows provisioning page.
16. Contrail Insights Flows is an optional product that isn’t used in all environments. If you are not using
Contrail Insights Flows in your environment, simply click the Next button without assigning a server
as an Appformix Flows node in this step.
NOTE: Appformix Flows was renamed Contrail Insights Flows. The Appformix Flows naming
is still used on this Contrail Command page.
If you are using Contrail Insights Flows in your environment, make the following configuration selections:
• Virtual IP Address—The virtual IP management address on the Appformix Flows node that connects
the node to the management network.
(Contrail Insights and Contrail Insights Flows on same server only) Starting in Contrail Networking
Release 2008, you can enable Contrail Insights and Contrail Insights Flows on the same server node.
Perform these steps if you are enabling Contrail Insights and Contrail Insights Flows on the same node:
a. Click the Show Advanced box. The advanced configuration options appear.
b. From the AppFormix Flows Configuration Parameters box, click the +Add option to open the Key
and Value configuration options.
• Key: health_port
Value: 8205
• Key: kafka_broker_port
Value: 9195
• Key: zookeeper_client_port
Value: 3281
• Key: redis_port
Value: 6479
Click the > icon next to the server or VM in the Available servers box to move it into the Assigned
AppFormix Flows Nodes box.
Click any tab in the Nodes Overview box to review any configuration.
Click the Provision button after verifying your settings to provision the cluster.
The cluster provisioning process begins. This provisioning process time varies by environment and
deployment. It has routinely taken 90 minutes or more in our testing environments.
18. (Optional) Monitor the provisioning process by logging onto the Contrail Command server and entering
the docker exec contrail-command tail /var/log/contrail/deploy.log command.
19. When the provisioning process completes, click the Proceed to Login option.
• Select Cluster: Select a Contrail Cluster from the dropdown menu. The cluster is presented in the
<cluster-name>-<string> format. The <cluster-name> options should include the cluster that you just
created and should match the cluster name assigned in step 10 of this procedure.
• Username: Enter the username credential to access Contrail Command. This username was set in
the command_servers.yml file configured in step 5 of the “How to Install Contrail Command” on
page 22 procedure.
• Password: Enter the password credential to access Contrail Command. This password was set in the
command_servers.yml file configured in step 5 of the “How to Install Contrail Command” on page 22
procedure.
• Domain: You can often leave this field blank. Contrail Command logs into the default_domain—the
default domain for all orchestration platforms supported by Contrail Command except Canonical
Openstack—when the Domain field is empty.
If you are logging into a cluster that includes Canonical Openstack as its orchestration platform, you
can enter admin_domain—the default domain name for Canonical Openstack—in the Domain field if
your default domain name was not manually changed.
You can enter the personalized domain name of your cloud network’s orchestration platform in the
Domain field if you’ve changed the default domain name.
See “How to Login to Contrail Command” on page 58 for additional information on logging into Contrail
Command.
21. (Optional. Contrail Insights only) Click the Contrail Insights icon on the bottom-left hand corner of the
Contrail Command page to open Contrail Insights.
NOTE: This is an Appformix icon in Contrail Networking Release 2005 and earlier releases.
If you are not accessing Contrail Command through the fabric network, you might also have to configure
an External IP address to access Contrail Insights externally. Navigate to Infrastructure > Advanced
Options > Endpoints and locate insights in the Prefixes list. Click the Edit button—the pencil icon—and
change the Public URL field to a usable external IP address.
Contrail Insights Flows is integrated into Contrail Command. See Contrail Insights Flows in Contrail
Command.
---
# Required for Appformix and Appformix Flows installations in Release 2003 and earlier
user_command_volumes:
- /opt/software/appformix:/opt/software/appformix
- /opt/software/xflow:/opt/software/xflow
command_servers:
server1:
ip: <IP Address>    # IP address of server where you want to install Contrail Command
connection: ssh
ssh_user: root
ssh_pass: <contrail command server password>
sudo_pass: <contrail command server root password>
ntpserver: <NTP Server address>
registry_insecure: false
container_registry: hub.juniper.net/contrail
container_tag: <container_tag>
container_registry_username: <registry username>
container_registry_password: <registry password>
config_dir: /etc/contrail
contrail_config:
database:
type: postgres
dialect: postgres
password: contrail123
keystone:
assignment:
data:
users:
admin:
password: contrail123
insecure: true
client:
password: contrail123
---
# Required for Appformix and Appformix Flows installations in Release 2003 and earlier
user_command_volumes:
- /opt/software/appformix:/opt/software/appformix
- /opt/software/xflow:/opt/software/xflow
command_servers:
server1:
ip: <IP Address>
connection: ssh
ssh_user: root
ssh_pass: <contrail command server password>
sudo_pass: <contrail command server root password>
ntpserver: <NTP Server address>
# Log Level
log_level: debug
# Cache configuration
cache:
enabled: true
timeout: 10s
max_history: 100000
rdbms:
enabled: true
# Server configuration
server:
enabled: true
read_timeout: 10
write_timeout: 5
log_api: true
address: ":9091"
# TLS Configuration
tls:
enabled: true
key_file: /usr/share/contrail/ssl/cs-key.pem
cert_file: /usr/share/contrail/ssl/cs-cert.pem
notify_etcd: false
# VNC Replication
enable_vnc_replication: true
# Keystone configuration
keystone:
local: true
assignment:
type: static
data:
domains:
default: &default
id: default
name: default
projects:
admin: &admin
id: admin
name: admin
domain: *default
demo: &demo
id: demo
name: demo
domain: *default
users:
admin:
id: admin
name: Admin
domain: *default
password: contrail123
email: [email protected]
roles:
- id: admin
name: admin
project: *admin
bob:
id: bob
name: Bob
domain: *default
password: bob_password
email: [email protected]
roles:
- id: Member
name: Member
project: *demo
store:
type: memory
expire: 36000
insecure: true
authurl: https://localhost:9091/keystone/v3
etcd:
endpoints:
- localhost:2379
username: ""
password: ""
path: contrail
watcher:
enabled: false
storage: json
client:
id: admin
password: contrail123
project_name: admin
domain_id: default
schema_root: /
endpoint: https://localhost:9091
compilation:
enabled: false
# Global configuration
plugin_directory: 'etc/plugins/'
number_of_workers: 4
max_job_queue_len: 5
msg_queue_lock_time: 30
msg_index_string: 'MsgIndex'
read_lock_string: "MsgReadLock"
master_election: true
# Plugin configuration
plugin:
handlers:
create_handler: 'HandleCreate'
update_handler: 'HandleUpdate'
delete_handler: 'HandleDelete'
agent:
enabled: true
backend: file
watcher: polling
log_level: debug
This section lists commonly seen errors and failure scenarios and procedures to fix them.
Problem
Description: Recovering the Galera Cluster Upon Server Shutdown—In an OpenStack HA setup provisioned
using Kolla and OpenStack Rocky, if you shut down all the servers at the same time and bring them up
later, the Galera cluster fails.
Solution
To recover the Galera cluster, follow these steps:
1. Edit the /etc/kolla/mariadb/galera.cnf file to remove the wsrep address on one of the controllers as
shown here.
wsrep_cluster_address = gcomm://
#wsrep_cluster_address = gcomm://10.x.x.8:4567,10.x.x.10:4567,10.x.x.11:4567
NOTE: If all the controllers are shut down at the same time in this scenario, you
must select the controller that was shut down last.
2. Run docker start mariadb on the controller on which you edited the file.
3. Wait a couple of minutes, confirm that the mariadb container is not restarting, and then run docker start
mariadb on the remaining controllers.
4. Restore the /etc/kolla/mariadb/galera.cnf file changes and restart the mariadb container on the
previously selected controller.
Problem
Description: Containers from Private Registry Not Accessible—You might have a situation in which
containers that are pulled from a private registry named CONTAINER_REGISTRY are not accessible.
Solution
To resolve this issue, ensure that REGISTRY_PRIVATE_INSECURE is set to True.
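In contrail-ansible-deployer-based installations, this parameter is typically defined in the instances.yml file.
The placement shown below is an assumption and can vary by release:
contrail_configuration:
  REGISTRY_PRIVATE_INSECURE: True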
RELATED DOCUMENTATION
Video: Using Contrail Command to Install Contrail Networking 2005 and Contrail Insights
Server Requirements and Supported Platforms | 16
Contrail Networking Installation and Upgrade Guide
Contrail Command is a single pane-of-glass GUI that configures and monitors Contrail Networking. You
can log in to Contrail Command using these instructions.
• Install Contrail Command. See “How to Install Contrail Command and Provision Your Contrail Cluster”
on page 20.
The default port number is 9091 in most environments. The port number can be reset using the address:
field in the server: hierarchy within the command_servers.yml file.
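For example, a minimal sketch of the server: hierarchy in command_servers.yml that moves Contrail Command
to the alternate port 8079 (the port value here is illustrative) might look like the following:
server:
  enabled: true
  address: ":8079"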
2. Select your Contrail cluster from the Select Cluster drop-down menu. Leave this field blank if you are
logging into Contrail Command to create a cluster.
Your cluster is presented within the drop-down menu in the <cluster-name>-<string> format. Your
cluster-name was defined during the cluster creation process and the string is a randomly-generated
character string.
3. Enter the username and password credentials in the Username and Password fields. The username
and password credentials are set in the command_servers.yml file during the Contrail Command
installation. See “How to Install Contrail Command and Provision Your Contrail Cluster” on page 20.
If you didn’t change the credentials in the command_servers.yml file, the default username to access
Contrail Command in most deployments is admin and the password is contrail123. For security purposes,
we strongly recommend creating unique username and password combinations in your environment.
You can often leave the Domain field blank. Contrail Command logs into the default_domain—the default
domain for all orchestration platforms supported by Contrail Command except Canonical
Openstack—when the Domain field is empty.
• Canonical Openstack orchestration. If you are logging into a cluster that includes Canonical Openstack
as its orchestration platform, you can enter admin_domain—the default domain name for Canonical
Openstack—in the Domain field if your default domain name was not manually changed.
• You have manually changed the domain name in your cloud network’s orchestration platform. You
can enter the personalized domain name of your cloud network’s orchestration platform in the Domain
field if you’ve changed the default domain name.
RELATED DOCUMENTATION
IN THIS SECTION
Search functionality | 64
Contrail Networking Release 2003 introduces a redesigned Contrail Command UI. This topic describes
how to navigate the UI, some of the new and improved features that the UI offers, and the supported
browsers for installing Contrail Command.
Starting with Contrail Networking Release 2003, you can use the Get Started with Contrail Enterprise
Multicloud panel in Contrail Command. This Get Started panel provides a user-friendly walkthrough of initial
Contrail Command configuration tasks. The panel includes Begin buttons that allow for quick task initiation
and a dynamic tracking mechanism that tracks task progress.
The Get Started panel appears automatically when Contrail Command is initially accessed and can always
be opened by selecting the Get Started with Contrail option in the ? help menu. If you choose to close
the panel, it remains closed within Contrail Command—including across login sessions—unless you choose
to open the panel by selecting the Get Started with Contrail option.
Figure 6 on page 62 shows the Get Started with Contrail Enterprise Multicloud panel home screen.
Figure 7 on page 62 shows how to open the Get Started with Contrail Enterprise Multicloud panel within
Contrail Command.
• If you switch between web browsers or log in using incognito mode, the task status within the panel is
lost.
• If you clear your web browser cache or delete your web browser’s cookies, the task status within the
panel is lost.
• If you access Contrail Command from a different web browser or log in using incognito mode, the panel
opens by default even if it had previously been closed.
• If you clear your web browser cache or delete your web browser’s cookies, the panel opens by default
even if it had previously been closed.
All menu options are available in a side panel on the left of the UI. The side panel displays the main menu
categories available in Contrail Command. Mouse over each category to view the corresponding second
tier menu options. These second tier menu options correspond to pages and you can click the menu option
to open the page.
The category as well as the page name can be viewed in the breadcrumbs visible on the top banner. You
can click page names in the breadcrumbs to navigate to the page, but the category name is not clickable.
You can also hide the side panel from view. To hide the side panel, click the toggle button (≡) on the top
left of the banner on any page in Contrail Command. To view the side panel, click the toggle button (≡)
again.
Search functionality
If you are unaware of the navigation path to any page, you can use the search functionality to search for
the page. The updated Contrail Command UI has a search text box at the top of the left side panel. When
you enter a string in the search text box, all the categories containing matching terms are highlighted in a
bold font. All other categories are greyed out.
Mouse over each highlighted category to view the corresponding second tier menu options. While all the
menu options of the category are displayed, the pages matching the search input are highlighted in yellow.
You can pin frequently visited pages to the favorites category. The left side panel has a Favorites category
under the search text box and you can add second tier menu options to this category. Initially this category
is empty and is grayed out.
• Adding to Favorites – To add a page to favorites, mouse over a category in the side panel to display the
second tier menu options. Mouse over the menu option and click the pin icon in-line with the page name
to add to favorites.
Once pages are added, the Favorites category is enabled and the pinned page is displayed underneath
it.
NOTE: Pinned favorite pages are stored in the Web browser's local storage. The existing
favorite-page list disappears if you switch between Web browsers, log in using incognito mode,
or clear the Web browser cache and cookies.
• Deleting from Favorites - You can delete pages from the Favorites category in either of two ways. You
can click the enabled pin icon available in-line with the pinned page in the Favorites category to remove
the page.
Alternatively, mouse over the corresponding category of the pinned page to display all the second tier
menu options. Mouse over the pinned page and click the enabled pin icon in-line with the page name
to remove it from the favorites category.
You can also collapse and expand the Favorites category by clicking the arrow icon (∧ or ∨) next to the
category name.
To open integrated external applications such as Contrail Insights, click the application name in the footer
of the side panel on the left.
NOTE: If no external applications are available, the footer in the side panel is not visible.
Starting with Contrail Networking Release 2005, you can use the What’s New panel within Contrail Command
to gather a summary list of the new Contrail Networking features in your Contrail Networking release.
The What’s New panel provides a high-level description of each new feature and a See Release Notes option
that takes you to the Contrail Networking Release Notes for additional feature information.
You can access the What’s New panel by selecting the What’s New option in the ? help menu.
Release Description
2003 Contrail Networking Release 2003 introduces a redesigned Contrail Command UI.
Contrail Networking supports deploying a Contrail cluster using Contrail Command and the instances.yml
file. A YAML file provides a concise format for specifying the instance settings.
We recommend installing Contrail Command and deploying your Contrail cluster from Contrail Command
in most Contrail Networking deployments. See “How to Install Contrail Command and Provision Your
Contrail Cluster” on page 20. You should only use the procedure in this document if you have a strong
reason to not use the recommended procedure.
System Requirements
• 4 vCPUs
• 32 GB RAM
• 100 GB disk
• Internet access to and from the physical server, hereafter referred to as the Contrail Command server
• (Recommended) x86 server with CentOS 7.6 as the base OS to install Contrail Command
For a list of supported platforms for all Contrail Networking releases, see Contrail Networking Supported
Platforms List.
NOTE: Contrail Release 5.1 does not support Contrail Insights deployment from the command line
using the Contrail Cluster instances.yml file.
The docker-py Python module is superseded by the docker Python module. You must remove the docker-py
and docker Python packages from all the nodes where you want to install the Contrail Command UI.
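For example, assuming the modules were installed with pip, you can remove them as follows:
pip uninstall -y docker-py docker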
Configuration
Perform the following steps to deploy a Contrail Cluster using Contrail Command and the instances.yml
file.
1. Install Docker to pull the contrail-command-deployer container. This package is necessary to automate
the deployment of the Contrail Command software.
3. Edit the input configuration instances.yml file. See Sample instances.yml File on page 76 for a sample
instances.yml file.
4. Start the contrail_command_deployer container to deploy the Contrail Command (UI) server and
provision the Contrail Cluster using the instances.yml file provided.
The contrail_command and contrail_psql Contrail Command containers are deployed, and the Contrail
Cluster is provisioned using the given instances.yml file.
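A representative invocation is sketched below. The docker run flags and the orchestrator and action
environment variables are illustrative assumptions; the container and image names match those used elsewhere
in this document:
docker run -td --net host --privileged --name contrail_command_deployer \
  -v <absolute_path_to_instances.yml>:/instances.yml \
  -e orchestrator=openstack -e action=provision_cluster \
  hub.juniper.net/contrail/contrail-command-deployer:<container_tag>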
NOTE: We strongly recommend creating a unique username and password for Contrail
Command. See Installing Contrail Command for additional information on creating username
and password combinations.
global_configuration:
CONTAINER_REGISTRY: hub.juniper.net/contrail
CONTAINER_REGISTRY_USERNAME: < container_registry_username >
CONTAINER_REGISTRY_PASSWORD: < container_registry_password >
provider_config:
bms:
ssh_pwd: <Pwd>
ssh_user: root
ntpserver: <NTP Server>
domainsuffix: local
instances:
bms1:
provider: bms
ip: <BMS IP>
roles:
config_database:
config:
control:
analytics_database:
analytics:
webui:
vrouter:
openstack:
openstack_compute:
bms2:
provider: bms
ip: <BMS2 IP>
roles:
openstack:
bms3:
provider: bms
ip: <BMS3 IP>
roles:
openstack:
bms4:
provider: bms
ip: <BMS4 IP>
roles:
config_database:
config:
control:
analytics_database:
analytics:
webui:
bms5:
provider: bms
ip: <BMS5 IP>
roles:
config_database:
config:
control:
analytics_database:
analytics:
webui:
bms6:
provider: bms
ip: <BMS6 IP>
roles:
config_database:
config:
control:
analytics_database:
analytics:
webui:
bms7:
provider: bms
ip: <BMS7 IP>
roles:
vrouter:
PHYSICAL_INTERFACE: <Interface name>
VROUTER_GATEWAY: <Gateway IP>
openstack_compute:
bms8:
provider: bms
ip: <BMS8 IP>
roles:
vrouter:
# Add following line for TSN Compute Node
TSN_EVPN_MODE: True
openstack_compute:
contrail_configuration:
CLOUD_ORCHESTRATOR: openstack
CONTRAIL_VERSION: latest or <contrail_container_tag>
RABBITMQ_NODE_PORT: 5673
KEYSTONE_AUTH_PUBLIC_PORT: 5005
VROUTER_GATEWAY: <Gateway IP>
ENCAP_PRIORITY: VXLAN,MPLSoUDP,MPLSoGRE
AUTH_MODE: keystone
KEYSTONE_AUTH_HOST: <Internal VIP>
KEYSTONE_AUTH_URL_VERSION: /v3
CONTROLLER_NODES: < list of mgmt. ip of control nodes >
CONTROL_NODES: <list of control-data ip of control nodes>
OPENSTACK_VERSION: queens
kolla_config:
kolla_globals:
openstack_release: queens
kolla_internal_vip_address: <Internal VIP>
kolla_external_vip_address: <External VIP>
enable_haproxy: "no" ("no" by default, set "yes" to enable)
enable_ironic: "no" ("no" by default, set "yes" to enable)
enable_swift: "no" ("no" by default, set "yes" to enable)
keystone_public_port: 5005
swift_disk_partition_size: 10GB
keepalived_virtual_router_id: <Value between 0-255>
kolla_passwords:
keystone_admin_password: <Keystone Admin Password>
NOTE: This representative instances.yaml file configures non-default Keystone ports by setting
the keystone_public_port: and KEYSTONE_AUTH_PUBLIC_PORT: parameters.
RELATED DOCUMENTATION
Contrail Networking supports importing Contrail Cluster data into Contrail Command for clusters provisioned
using one of the following platforms: OpenStack, Kubernetes, VMware vCenter, or TripleO.
System Requirements
• 4 vCPUs
• 32 GB RAM
• 100 GB storage
• Internet access to and from the physical server, which is the Contrail Command server.
• (Recommended) x86 server with CentOS 7.6 as the base OS to install Contrail Command.
For a list of supported platforms for all Contrail Networking releases, see Contrail Networking Supported
Platforms List.
Container tags are located at README Access to Contrail Registry 21XX.
If you need access to the private secure Contrail Docker registry, email [email protected] for
Contrail container registry credentials.
The docker-py Python module is superseded by the docker Python module. You must remove the docker-py
and docker Python packages from all the nodes where you want to install the Contrail Command UI.
Configuration
1. Install Docker to pull the contrail-command-deployer container. This package is necessary to automate
the deployment of the Contrail Command software.
3. Get the command_servers.yml file that was used to bring the Contrail Command server up and the
configuration file that was used to provision the Contrail Cluster.
NOTE: "For OpenShift orchestrator use the ose-install file instead of instances.yml file.
4. Start the contrail-command-deployer container to deploy the Contrail Command (UI) server and import
Contrail Cluster data to Contrail Command (UI) server using the Cluster configuration file provided.
• Import a Contrail Cluster provisioned using OSP Director/TripleO Lifecycle Manager for Red Hat
OpenStack orchestration.
Prerequisites:
• External VIP is an Overcloud VIP where the OpenStack and Contrail public endpoints are available.
The external VIP must be reachable from the Contrail Command node.
• The DNS host name for the Overcloud external VIP must be resolvable on the Contrail Command node.
Add the entry in the /etc/hosts file.
• The Contrail Command server must have access to the external VIP network to communicate with the
configured endpoints.
• If you have used a domain name for the external VIP, add the entry in the /etc/hosts file.
global_configuration:
CONTAINER_REGISTRY: hub.juniper.net/contrail
CONTAINER_REGISTRY_USERNAME: < container_registry_username >
CONTAINER_REGISTRY_PASSWORD: < container_registry_password >
provider_config:
bms:
ssh_pwd: <Pwd>
ssh_user: root
ntpserver: <NTP Server>
domainsuffix: local
instances:
bms1:
provider: bms
ip: <BMS1 IP>
roles:
openstack:
bms2:
provider: bms
ip: <BMS2 IP>
roles:
openstack:
bms3:
provider: bms
ip: <BMS3 IP>
roles:
openstack:
bms4:
provider: bms
contrail_configuration:
CLOUD_ORCHESTRATOR: openstack
CONTRAIL_VERSION: latest or <contrail_container_tag>
CONTRAIL_CONTAINER_TAG: <contrail_container_tag>-queens
RABBITMQ_NODE_PORT: 5673
VROUTER_GATEWAY: <Gateway IP>
ENCAP_PRIORITY: VXLAN,MPLSoUDP,MPLSoGRE
AUTH_MODE: keystone
KEYSTONE_AUTH_HOST: <Internal VIP>
KEYSTONE_AUTH_URL_VERSION: /v3
CONTROLLER_NODES: < list of mgmt. ip of control nodes >
CONTROL_NODES: <list of control-data ip of control nodes>
OPENSTACK_VERSION: queens
kolla_config:
kolla_globals:
openstack_release: queens
kolla_internal_vip_address: <Internal VIP>
kolla_external_vip_address: <External VIP>
enable_haproxy: "no" ("no" by default, set "yes" to enable)
enable_ironic: "no" ("no" by default, set "yes" to enable)
enable_swift: "no" ("no" by default, set "yes" to enable)
keepalived_virtual_router_id: <Value between 0-255>
kolla_passwords:
keystone_admin_password: <Keystone Admin Password>
RELATED DOCUMENTATION
You can add a new node to, or remove a node from, an existing containerized Contrail cluster.
1. Log in to the Contrail Command UI as a superuser. The default credentials for Contrail Command are
admin for the username and contrail123 for the password.
We strongly recommend creating a unique username and password combination. See Installing Contrail
Command.
2. Click Servers.
a. Click Create.
c. Click Create.
3. Click Cluster.
Perform the following steps to remove a compute node from an existing Contrail OpenStack cluster.
NOTE: Workloads on the compute nodes to be deleted must be removed before you remove the
compute nodes from the cluster.
1. Log in to the Contrail Command UI as a superuser. The default credentials for Contrail Command are
admin for the username and contrail123 for the password.
We strongly recommend creating a unique username and password combination for security purposes.
See Installing Contrail Command.
2. Click Cluster.
You can also add a compute node to an existing Contrail cluster using the instances.yaml file. For details,
refer to How to Add a New Compute Node to an Existing Contrail Cluster Using the instances.yaml File.
IN THIS SECTION
Example: Config.YML File for Deploying Contrail Command with a Cluster Using Juju | 97
Install Contrail Insights on the Juju Cluster after Contrail Command is Installed | 100
Install Contrail Insights Flows on the Juju Cluster after Contrail Insights is Installed | 101
You can use this document to deploy Contrail Command and import an existing cluster into Contrail
Command using Juju with a single procedure. This procedure can be applied in environments using Canonical
Openstack or environments that are running Juju and using Kubernetes for orchestration.
If you are already running Contrail Command in a Canonical Openstack environment and want to import
a cluster, see “Importing a Canonical Openstack Deployment Into Contrail Command” on page 104.
Starting in Contrail Release 2005, you can deploy Contrail Command and import a cluster using Juju in a
Canonical Openstack environment.
Starting in Contrail Release 2008, you can deploy Contrail Command and import a cluster using Juju in an
environment using Kubernetes orchestration.
This document makes the following assumptions about your initial environment:
• Juju is already running in your environment, and your environment is either a Canonical Openstack
deployment or a deployment using Kubernetes orchestration.
• Contrail Networking Release 2005 or later is running if you are operating a Canonical Openstack
deployment.
Contrail Networking Release 2008 or later is running if you are operating an environment using Kubernetes
orchestration.
See Contrail Networking Supported Platforms for information on the supported software components
for any Contrail Networking release.
A base64-encoded SSL Certificate Authority (CA) for the Juju controller is required to deploy Contrail
Command with an existing cluster in a Canonical Openstack or Kubernetes environment.
There are multiple ways to generate a base64-encoded SSL CA. You can use this procedure or a more
familiar procedure to generate your base64-encoded SSL CA.
1. From the Juju jumphost, enter the juju show-controller command and locate the certificate output in
the ca-cert: hierarchy.
$ juju show-controller
jc5-cloud:
details:
...<output removed for readability>...
ca-cert: |
-----BEGIN CERTIFICATE-----
MIIErTCCAxWgAwIBAgIVAKRPIub8Q7imJ2+T2U8AK4thOss7MA0GCSqGSIb3DQEB
CwUAMG4xDTALBgNVBAoTBGp1anUxLjAsBgNVBAMMJWp1anUtZ2VuZXJhdGVkIENB
IGZvciBtb2RlbCAianVqdS1jYSIxLTArBgNVBAUTJDI0ZDJjODg0LTllYWYtNDU2
Ni04NTA0LWJkZGYxZWJiYTgzYjAeFw0yMDA0MTUwMzE2MzdaFw0zMDA0MjIwMzE2
MzVaMG4xDTALBgNVBAoTBGp1anUxLjAsBgNVBAMMJWp1anUtZ2VuZXJhdGVkIENB
IGZvciBtb2RlbCAianVqdS1jYSIxLTArBgNVBAUTJDI0ZDJjODg0LTllYWYtNDU2
Ni04NTA0LWJkZGYxZWJiYTgzYjCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoC
ggGBAL/7d3JtNcHW6ue6yOeKvOlSDhxgGs4vYLDO0QzlIMyW39+BytB4XY+05EBg
A5JKfYV+u8xXL0meLvh+4yE87cwRObsT1WYFCDFVTiGSeSN3w+2UJxHWwuAubDl7
zfAKnGgIzq/KZJJimxa6Yuqw5isCxffu3fQz+H5UlSpLCpFxvAq38VjrW7FnjEm1
c4fFlBf07LUOqBxSIS0gxarO1DQE2IQv4mfIAFvJgT/5UKJYuGEX3NH9DerYqjJa
NchyGMkXgyBj3YVec8bFE4+erDMISBvJHBMwyx74PTDQys+KlfNXptup5FH/FwBb
9ZRBAD99c0f0VW6moNxoAkKhrGVZt1w7CxwvgRZnWUezthwoHI8yFqBvkT+lq6Nd
jvLEv1DQ+3zmMfhz/emRD1DOQQfn3mQhSk40NdO3kw/B8bHOIXmgIgNbv48g0Ac7
/hQO02moDxrLkCZNN0fVgOKvonDjbSo5YNCH/7fleacmQN3Mug3wXp9kYh7rKDHw
6pkQQwIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAd
BgNVHQ4EFgQUcGE6bMiGsQQyiDYKBl+txAfeFAkwDQYJKoZIhvcNAQELBQADggGB
AG7pivQhJVNSCbG+9jVy3owhg/POnp2sewD1t8BMOkPTPgAa/37vrp4KSPdNXKZz
hnFzBXkL8jBUP0qg2Vfy9iqlgXNdVAdb4Ijk44OhlwWNGUiZwl2nNbvnUL7NnTeh
jqZaIb6Oe2y1ByNrQweVMO85qdrYJCelf9Wh9fYdtofx4TyOMg+ZqPqmvTRO8yTx
KOupywxmezbjhEaaILXo9kouU4UV2gAIdYiHfvsbTaLkWbYeNgvvE5WAan8HuQqb
YVnvxggIN45UgEgqGUHEgcj9tHgssfbnX3f2sCbOJkXL2cv7D+wK7hvUCS5tKS6H
6O7OoXxfimFBdSZQuuqhqyiMYafnRo48Q2oCyQn1Q+g/qG+GYxmujIigoiYS1srV
mIUaJQUGHtgXvyZGJFIvQiAzImQCylq1iyz77Da3myDRX0i0dauu5MACn5i9cgu9
W7/MD2xR3kKMAY3b4y+pP7CKbEJ6UDswLyAQUkwPyeLi1r82vGh6CasinnGaUhk+
zg==
-----END CERTIFICATE-----
...<additional output removed for readability>...
Copy and paste options vary by user interface. The SSL CA content—all highlighted text from step 1
starting at the beginning of the -----BEGIN CERTIFICATE----- line and ending at the end of the -----END
CERTIFICATE----- line—should be the only content in the cert.pem file.
Confirm that leading white spaces are not added to the SSL CA after copying the SSL CA into the
cert.pem file. These leading white spaces are introduced by some user interfaces—often at the start of
new lines—and will cause the SSL CA certification to be unusable. If leading whitespaces are added to
the SSL CA after it is copied into the cert.pem file, manually delete the whitespaces before proceeding
to the next step.
You can generate the cert.pem file into base64-encoded output without saving the file contents by
entering the following command:
You can also generate the base-64 encoded output and save the SSL CA contents into a separate file.
In this example, the base64-encoded output is generated and a new file containing the
output—cert.pem.b64—is saved.
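For illustration, one common way to produce this base64-encoded output with GNU coreutils (your
environment may use a different command) is:
base64 -w 0 cert.pem
base64 -w 0 cert.pem > cert.pem.b64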
The base64-encoded SSL CA will be entered as the juju-CA-certificate variable in “Deploy Contrail
Command and Import a Contrail Cluster Using Juju” on page 92.
To deploy Contrail Command and import a Contrail cluster into Contrail Command:
1. From the Juju jumphost, deploy Contrail Command using one of the following command strings:
where:
• machine-name—the name of the machine instance in Juju that will host Contrail Command.
The IP address of this machine—which can be obtained by entering the juju status command—is used
to access Contrail Command from a web browser after the installation is complete.
The image tag is used to identify your Contrail Networking image within the registry. You can retrieve
the image tag for any Contrail Release 21xx image from README Access to Contrail Registry 21XX.
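A hedged sketch of one such command string is shown below. The charm source and the configuration
option names are assumptions and may differ in your charm version:
juju deploy <contrail-command-charm> contrail-command --to <machine-name> \
  --config docker-registry=hub.juniper.net/contrail \
  --config image-tag=<image-tag>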
2. Create a juju relation between the Contrail Command charm and the Contrail Controller charm:
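A minimal sketch, assuming the controller application in your Juju model is named contrail-controller:
juju add-relation contrail-command contrail-controller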
$ cat config.yaml
juju-controller: juju-controller-ip
juju-controller-password: password
juju-ca-cert: |
juju-CA-certificate
juju-model-id: juju-model-id
juju-controller-user: juju-controller-user
You can retrieve the juju-controller-ip from the juju show-controller command output:
You can set the password for Juju controller access using the juju change-user-password command.
• juju-CA-certificate—The base64-encoded SSL Certificate Authority (CA) for the Juju controller.
The juju-CA-certificate is the base64-encoded SSL CA created in “Preparing the SSL Certificate
Authority (CA) for the Deployment” on page 89.
See “Example: Config.YML File for Deploying Contrail Command with a Cluster Using Juju” on
page 97 for a sample juju-CA-certificate entry.
• juju-model-id—The universally unique identifier (UUID) assigned to the model environment that
includes the Contrail Networking cluster.
You can retrieve the juju-model-id from the juju show-controller command output:
$ juju show-controller
jc5-cloud:
...<output removed for readability>...
models:
default:
model-uuid: 4a62e0b0-bcfe-4b35-8db7-48e55f439217
...<output removed for readability>...
The admin username is used by default if no user with Juju controller access is configured.
See “Example: Config.YML File for Deploying Contrail Command with a Cluster Using Juju” on
page 97 for a sample config.yaml configuration for this deployment.
c. Import the Contrail cluster with the parameters defined in the config.yaml file:
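One plausible form of the import command is shown below; whether the config.yaml parameters are passed
with --params or set beforehand as application configuration depends on your charm version, and the unit
number is illustrative:
juju run-action contrail-command/0 import-cluster --params ./config.yaml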
You can check the import status by entering the juju show-action-status action-ID and juju
show-action-output action-ID | grep result commands.
The action-ID is assigned immediately after entering the juju run-action command in the previous
step.
The cluster import is complete when the status field output in the juju show-action-status action-ID
command shows completed, or when the result field in the juju show-action-output action-ID |
grep result indicates Success.
Examples:
juju show-action-status 1
actions:
- action: import-cluster
completed at: "2020-04-03 12:49:55"
id: "60"
status: completed
unit: contrail-command/19
The <juju-machine-ip-address> is the IP address of the machine hosting Contrail command that was
specified in 1. You can retrieve the IP address using the juju status command:
juju status
Unit Workload Agent Machine Public address
contrail-command/0* active idle 3 10.0.12.40
The port-number typically defaults to 9091 or 8079. You can, however, configure a unique port number
for your environment using the command_servers.yml file.
Enter the following values after the Contrail Command homescreen appears:
• Select Cluster: Select a Contrail Cluster from the dropdown menu. The cluster is presented in the
<cluster-name>-<string> format.
• Domain: If you are running Juju in a Canonical Openstack environment, enter admin_domain—the
default domain name for Canonical Openstack— if you haven’t established a unique domain in
Canonical Openstack. Enter the name of your domain if you have created a unique domain.
If you are running Juju in a Kubernetes environment, you can leave this field blank unless you’ve
established a unique domain name in Kubernetes. Enter the name of your domain if you have created
a unique domain.
Figure 17 on page 97 illustrates an example Contrail Command login to complete this procedure.
Figure 17: Contrail Command Login Example—Cluster in Environment using Canonical Openstack
See “How to Login to Contrail Command” on page 58 for additional information on logging into Contrail
Command.
Example: Config.YML File for Deploying Contrail Command with a Cluster Using Juju
This sample config.yml file provides a representative example of a configuration that could be used to
deploy Contrail Command with Contrail clusters in an environment running Juju.
See “Deploy Contrail Command and Import a Contrail Cluster Using Juju” on page 92 for step-by-step
procedures to create this config.yml file and “Preparing the SSL Certificate Authority (CA) for the
Deployment” on page 89 for instructions on generating the juju-ca-cert in the required base64-encoded
format.
This sample config.yml file does not contain the juju-controller-user: field to specify a user with Juju
controller access, so the default admin username is used.
CAUTION: The password password is used in this example for illustrative purposes
only.
$ cat config.yaml
juju-controller: 10.102.72.40
juju-ca-cert: |
LS0tLS9CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVyVENDQXhXZ0F3SUJBZ0lWQUtSUEl1YjhR
N2ltSjIrVDJVOEFLNHRoT3NzN01BMEdDU3FHU0liM0RRRUIKQ3dVQU1HNHhEVEFMQmdOVkJBb1RC
R3AxYW5VeExqQXNCZ05WQkFNTUpXcDFhblV0WjJWdVpYSmhkR1ZrSUVOQgpJR1p2Y2lCdGIyUmxi
Q0FpYW5WcWRTMWpZU0l7TFRBckJnTlZCQVVUSkRJMFpESmpPRGcwTFRsbFlXWXRORFUyCk5pMDRO
VEEwTFdKa1pHWXhaV0ppWVRnellqQWVGdzB5TURBME1UVXdNekUyTXpkYUZ3MHpNREEwTWpJd016
RTIKTXpWYU1HNHhEVEFMQmdOVkJBb1RCR3AxYW5VeExqQXNCZ05WQkFNTUpXcDFhblV0WjJWdVpY
SmhkR1ZrSUVOQgrJR1p2Y2lCdGIyUmxiQ0FpYW5WcWRTMWpZU0l4TFRBckJnTlZCQVVUSkRJMFpE
SmpPRGcwTFRsaFlXWXRORFUyCk5pMDROVEEwTFdKa1pHWXhaV0ppWVRnellqQ0NBYUl3RFFZSktv
WklodmNOQVFFQkJRQURnZ0dQQURDQ0FZb0MKZ2dHQkFMLzdkM0p0TmNIVzZ1ZTZ5T2VLdk9sU0Ro
eGdHczR2WUxETzBRemxJTXlXMzkrQnl0QjRYWSswNUVCZwpBNUpLZllWK3U4eFhMMG1lTHZoKzR5
RTg3Y3dST2JwVDFXWUZDREZWVGlHU2VTTjN3KzJVSnhIV3d1QXViRGw3CnpmQUtuR2dJenEvS1pK
SmlteGE2WXVxdzVpc0N4ZmZ1M2ZReitINVVsU3BMQ3BGeHZBcTM4VmpyVzdGbmpFbTEKYzRmRmxC
ZjA3TFVPcUJ4U0lTMGd4YXJPMURRRTJJUXY0bWZJQUZ2SmdULzVVS0pZdUdFWDNOSDlEZXJZcWpK
YQpOY2h5R01rWGd5QmozWVZlYzhiRkU0K2VyRE1JU0J2SkhCTXd5eDc0UFREUXlzK0tsZk5YcHR1
cDVGSC9Gd0JiCjlaUkJBRDk5YzBmMFZXNm1vTnhvQWtLaHJHVlp0MXc3Q3h3dmdSWm5XVWV6dGh3
b0hJOHlGcUJ2a1QrbHE2TmQKanZMRXYxRFErM3ptTWZoei9lbVJEMURPUVFmbjNtUWhTazQwTmRA
M2t3L0I4YkhPSVhtZ0lnTmJ2NDhnMEFjNwovaFFPMDJtb0R4ckxrQ1pOTjBmVmdPS3ZvbkRqYlNv
NVlOQ0gvN2ZsZWFjbVFOM011ZzN3WHA5a1loN3JLREh3CjZwa1FRd0lEQVFBQm8wSXdRREFPQmdO
VkhROEJBZjhFQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWQKQmdOVkhRNEVGZ1FVY0dF
NmJNaUdzUVF5aURZS0JpK3R4QWZlRkFrd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dHQgpBRzdwaXZR
aEpWTlNDYkcrOWpWeTNad2hnL1BPbnAyc2V3RDF0OEJNT2tQVFBnQWEvMzd2cnA0S1NQZE5YS1p6
CmhuRnpCWGtMOGpCVVAwcWcyVmZ5OWlxbGdYTmRWQWRiNElqazQ0T2hsd1dOR1VpWndsMm5OYnZu
VUw3Tm5UZWgKanFaYUliNk9lMnkxQnlOclF3ZVZNTzg1cWRyWUpDZWxmOVdoOWZZZHRvZng0VHlP
TWcrWnFQcW12VFJPOHlUeApLT3VweXd4bWV6YmpoRWFhSUxYbzlrb3VVNFVWMmdBSWRZaUhmdnNi
VGFMa1diWWVOZ3Z2RTVXQWFuOEh1UXFiCllWbnZ4Z2dJTjQ1VWdFZ3FHVUhFZ2NqOXRIZ3NzZmJu
WDNmMnNDYk9Ka1hMMmN2N0Qrd0s3aHZVQ1M1dEtTNkgKNk83T29YeGZpbUZCZFNaUXV1cWhxeWlN
WWFmblJvNDhRMm9DeRFuMVErZy9xRytHWXhtdWpJaWdvaVlTMXNyVgptSVVhSlFVR0h0Z1h2eVpH
SkZJdlFpQXpJbVFDeWxxMWl5ejc3RGEzbXlEUlgwaTBkYXV1NU1BQ241aTljZ3U5Clc3L01EMnhS
M2tLTUFZM2I0eStwUDdDS2JFSjZVRHN3THlBUVVrd1B5ZUxpMXI4MnZHaDZDYXNpbm5HYVVoaysK
eac9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
juju-model-id: 4a62e0b0-bcfe-4b35-8db7-48e55f439217
juju-controller-password: password
Contrail Networking Release 2011 supports installing Contrail Insights and Contrail Insights Flows on a
Juju cluster after Contrail Networking and Contrail Command are installed. The following prerequisites
apply.
The docker, python2.7, and python-pip packages must be installed on the Contrail Insights node and the
Contrail Insights Flows node.
To install the Docker engine, you need the 64-bit version of one of these Ubuntu versions:
Docker Engine is supported on x86_64 (or amd64), armhf, and arm64 architectures. For more information,
see https://docs.docker.com/engine/install/ubuntu/.
If you are running the playbooks as the root user, you can skip this step. As a non-root user (for example,
“ubuntu”), the user needs access to the docker user group. The following command adds the user
to the docker group:
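A minimal sketch, assuming the non-root user is named ubuntu:
sudo usermod -aG docker ubuntu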
For more information, see “Contrail Insights Installation for OpenStack in HA” on page 511.
Software Requirements
• docker-ce : 5:19.03.9~3-0~ubuntu-focal
1. Install python and python-pip on the Contrail Insights Controller nodes, and on the host(s) that the
Contrail Insights Agent runs on.
appformix_ansible_python3_interpreter_enabled: true
NOTE: Ignore any errors that may arise if IN_public_allow does not exist.
After you have completed these steps, you can install Contrail Insights.
Install Contrail Insights on the Juju Cluster after Contrail Command is Installed
NOTE: Appformix and Appformix Flows were renamed Contrail Insights and Contrail Insights
Flows. The Appformix naming conventions still appear during product usage, including within
these directory names.
1. Copy the Contrail Insights and Contrail Insights Flows installation directories to the
/opt/software/appformix/ and /opt/software/xflow directories inside the Contrail Command container,
if not already present.
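A hedged example, assuming the installation directories are present on the host at the same paths:
docker cp /opt/software/appformix/. contrail_command:/opt/software/appformix/
docker cp /opt/software/xflow/. contrail_command:/opt/software/xflow/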
cd /usr/share/contrail/appformix-ansible-deployer/appformix/
. venv/bin/activate
cd /opt/software/appformix/
ansible-playbook -i inventory --skip-tags=install_docker contrail-insights-ansible/appformix_openstack_ha.yml -v
Install Contrail Insights Flows on the Juju Cluster after Contrail Insights is Installed
Disclaimer: The official installation method is through the Contrail Command UI.
contrail-ansible-deployer installs all packages needed for Contrail Insights and Contrail Insights Flows,
and appformix-ansible-deployer creates the inventory files for the installation. Many release-specific
variables are set in the inventory files, so setting them manually is prone to errors.
cd /usr/share/contrail/appformix-ansible-deployer/xflow
source venv/bin/activate
3. Run one of the following commands, depending on your Contrail Networking release version.
If you are running a Contrail Networking release earlier than 2005, add the following snippet to the
end of the existing instances.yml before running the deploy_contrail_insights_flows.sh or
deploy_xflow.sh script.
global_configuration:
CONTAINER_REGISTRY: hub.juniper.net/contrail
CONTAINER_REGISTRY_USERNAME: < container_registry_username >
CONTAINER_REGISTRY_PASSWORD: < container_registry_password >
provider_config:
bms:
ssh_pwd: <Root Pwd>
ssh_user: root
ntpserver: <NTP Server>
domainsuffix: local
instances: < under existing hierarchy >
a7s33:
ip: 10.84.30.201
provider: bms
roles:
appformix_flows:
telemetry_in_band_interface_name: enp4s0f0
xflow_configuration:
clickhouse_retention_period_secs: 7200
loadbalancer_collector_vip: 30.1.1.3
telemetry_in_band_cidr: 30.1.1.0/24
loadbalancer_management_vip: 10.84.30.195
telemetry_in_band_vlan_id: 11
global_configuration:
CONTAINER_REGISTRY: hub.juniper.net/contrail
CONTAINER_REGISTRY_USERNAME: < container_registry_username >
CONTAINER_REGISTRY_PASSWORD: < container_registry_password >
provider_config:
bms:
ssh_pwd: <Root Pwd>
ssh_user: root
ntpserver: <NTP Server>
domainsuffix: local
instances: < under existing hierarchy >
a7s33:
ip: 10.84.30.201
provider: bms
roles:
appformix_flows:
xflow_configuration:
clickhouse_retention_period_secs: 7200
loadbalancer_collector_vip: 10.84.30.195
Release Description
2011 Contrail Networking Release 2011 supports installing Contrail Insights and Contrail Insights
Flows on a Juju cluster after Contrail Networking and Contrail Command are installed.
2008 Starting in Contrail Release 2008, you can deploy Contrail Command and import a cluster using
Juju in an environment using Kubernetes orchestration.
2005 Starting in Contrail Release 2005, you can deploy Contrail Command and import a cluster using
Juju in a Canonical Openstack environment.
RELATED DOCUMENTATION
IN THIS SECTION
This document provides the steps needed to import a Canonical Openstack deployment into Contrail
Command.
This procedure assumes that Contrail Command is already running in your Contrail Networking environment
that is using Canonical Openstack as its orchestration platform. See “How to Deploy Contrail Command
and Import a Cluster Using Juju” on page 88 if you’d like to deploy Contrail Command and import the
Contrail cluster into Contrail Command in an environment using Contrail Networking and Canonical
Openstack.
Starting in Contrail Networking Release 2003, Canonical Openstack deployments can be managed using
Contrail Command.
This document provides the steps needed to import a Canonical Openstack deployment into Contrail
Command. Contrail Command can be used to manage the Canonical Openstack deployment after this
procedure is complete.
• Contrail Command has access to the Juju jumphost and the Juju cluster.
Canonical Openstack is imported into Contrail Command using Juju in this procedure.
There are multiple ways to perform this step. In this example, Docker Community Edition version 18.03
is installed using yum install and yum-config-manager commands and started using the systemctl start
docker command.
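A representative sequence on CentOS is shown below. The package version string is an assumption; adjust
it to the Docker version you intend to run:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce-18.03.1.ce
sudo systemctl start docker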
2. Retrieve the contrail-command-deployer Docker image by logging into hub.juniper.net and entering
the docker pull command.
where <container_tag> is the container tag for the Contrail Command (UI) container deployment for
the release that you are installing.
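For example, assuming you have registry credentials:
docker login hub.juniper.net -u <container_registry_username> -p <container_registry_password>
docker pull hub.juniper.net/contrail/contrail-command-deployer:<container_tag>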
The <container_tag> for any Contrail Release 21xx image can be found in README Access to Contrail
Registry 21XX.
The configuration of the config.yml file is unique to your environment and complete documentation
describing all config.yml configuration options is beyond the scope of this document.
The following configuration parameters must be present in the config.yml file to support Canonical
Openstack in Contrail Command:
• ntpserver: <NTP_IP>.
• vrouter_gateway: <VROUTER_GATEWAY_IP>
The VROUTER_GATEWAY_IP variable is the IP address of the vRouter gateway. The vrouter_gateway:
parameter can be left empty, but it must be present.
• container_registry: <CONTAINER_REGISTRY>
The CONTAINER_REGISTRY variable is the path to the container registry. The CONTAINER_REGISTRY
is hub.juniper.net/contrail in most deployments.
• container_tag: <COMMAND_BUILD_TAG>
The COMMAND_BUILD_TAG variable is the Contrail Command (UI) container deployment for the
release that you are installing. For any Contrail Release 21xx image, you can retrieve this value from
README Access to Contrail Registry 21XX.
• contrail_container_tag: <CONTRAIL_BUILD_TAG>
The CONTRAIL_BUILD_TAG variable is the Contrail build container for the release that you are installing.
For any Contrail Release 21xx image, you can retrieve this value from README Access to Contrail
Registry 21XX.
In the following example, Contrail Command is deployed from the Juju jump host at 172.31.40.101.
• juju_controller—(Required) The IP address of the Juju jump host. We define the Juju jump host in this
context as the device that has installed the Juju CLI and is being used to run Juju commands. The
Contrail Command server must have access to the Juju jump host at this IP address.
• config_file—(Required) The path to the configuration file. This configuration file was created in the
previous step of this procedure.
• delete_db—(Optional) Specifies whether the PostgreSQL database is deleted during the process. The
PostgreSQL database is deleted by default. Enter no in this field if you do not want the PostgreSQL
database deleted.
• juju_model_name—(Optional) The name of the Juju model. The name can be retrieved by entering the
juju show-models command.
• juju_controller_user—(Optional) The username of the Juju user on the Juju jump host.
• juju_controller_password—(Optional) The password for the Juju user on the Juju jump host. This password
is used if no SSH keys have been installed.
The contrail_command container is the GUI and the contrail_psql container is the database. Both
containers should have a STATUS of Up.
The contrail-command-deployer container should have a STATUS of Exited because it exits when the
installation is complete.
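You can verify the container states with a standard Docker listing, for example:
docker ps -a | grep contrail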
Choose token from the drop-down menu. Enter the Juju username and password combination as
the credentials, and use admin_domain as the domain.
Release Description
2003 Starting in Contrail Networking Release 2003, Canonical Openstack deployments can
be managed using Contrail Command.
RELATED DOCUMENTATION
CHAPTER 4
IN THIS CHAPTER
How to Perform a Zero Impact Contrail Networking Upgrade using the Ansible Deployer | 114
Updating Contrail Networking using the Zero Impact Upgrade Process in an Environment using Red Hat
Openstack 16.1 | 121
Updating Contrail Networking using the Zero Impact Upgrade Process in an Environment using Red Hat
Openstack 13 | 129
Updating Contrail Networking using the Zero Impact Upgrade Procedure in a Canonical Openstack
Deployment with Juju Charms | 139
Upgrading Contrail Networking Release 19xx with RHOSP13 to Contrail Networking Release 2011 with
RHOSP16.1 | 145
Upgrading Contrail Networking Release 1912.L2 with RHOSP13 to Contrail Networking Release 2011.L3
with RHOSP16.1 | 147
How to Upgrade Contrail Networking Through Kubernetes and/or Red Hat OpenShift | 150
Deploying Red Hat Openstack with Contrail Control Plane Managed by Tungsten Fabric Operator | 154
Use the following procedure to upgrade Contrail Networking using Contrail Command.
The procedure supports an incremental model; you can use it to upgrade from Contrail Networking Release
N-1 to N.
1. Create snapshots of your current configurations and upgrade Contrail Command. See “Upgrading
Contrail Command using Backup Restore Procedure” on page 112.
Leave the Select Cluster field blank. If the field is populated, use the drop-down menu to delete the
pre-populated cluster selection.
If you have any issues logging into Contrail Command, see “How to Login to Contrail Command” on
page 58.
3. Click on Clusters.
You will see the list of all the available clusters with their status.
Hover your mouse over the ellipsis next to the cluster and click Upgrade.
5. Enter the Contrail Version, Container Registry, Container Registry Username, and Container Registry Password.
Contrail Version displays the currently installed Contrail version. You must update the value to the desired
version number.
The values for Container Registry, Container Registry Username, and Container Registry Password
are pre-populated based on the values used during initial Contrail deployment.
Add CONTRAIL_CONTAINER_TAG.
Access CONTRAIL_CONTAINER_TAG values for your targeted upgrade from README Access to Contrail
Registry 21XX.
6. If you have Contrail Insights and Contrail Insights Flows installed in the cluster:
NOTE: Appformix and Appformix Flows were renamed Contrail Insights and Contrail Insights
Flows. The Appformix naming conventions still appear during product usage, including within
these directory names.
• Release 2005 or later: You do not need to provide the Contrail Insights and Contrail Insights Flows
packages in the /opt/software/appformix and /opt/software/xflow directories. This step is no longer
required starting in Release 2005.
• Earlier releases: Provide the appropriate versions of the Contrail Insights and Contrail Insights Flows
packages in the /opt/software/appformix and /opt/software/xflow directories on the Contrail Command server.
For more details, refer to Installing Contrail Insights and Contrail Insights Flows using Contrail Command.
Skip this step if you are not using Contrail Insights or Contrail Insights Flows.
7. Click on Upgrade.
Figure 18 on page 112 provides a representative illustration of a user completing the Upgrade Cluster
workflow to upgrade to Contrail Networking Release 2005.
RELATED DOCUMENTATION
You cannot use the existing SQL data with the new version of the Contrail Command container if the
database schema changes during the Contrail Command container upgrade.
Run the applicable docker exec contrail_command command on the Contrail Command node to back up
the database (the utility name and configuration file differ by Contrail Command release):
docker exec contrail_command commandutil convert --intype rdbms --outtype yaml --out
/etc/contrail/db.yml -c /etc/contrail/command-app-server.yml; mkdir ~/backups; mv
/etc/contrail/db.yml ~/backups/
docker exec contrail_command contrailutil convert --intype rdbms --outtype yaml --out
/etc/contrail/db.yml -c /etc/contrail/contrail.yml; mkdir ~/backups; mv /etc/contrail/db.yml
~/backups/
Specify the desired version of the Contrail Command container (container_tag) in the deployer input file
(command_servers.yml) and run the deployment playbook.
This step depends on how you deployed Contrail Command.
The contrail_container_tag for any Contrail Release 21 software can be obtained from README Access
to Contrail Registry 21XX.
Modify the yaml-formatted db dump by adding or removing the fields per the new database schema.
NOTE: If the restore procedure fails because of schema mismatch, repeat Step 3 and Step 4
with incremental db dump changes.
Starting in Contrail Networking Release 2005, you can perform a Zero Impact Upgrade (ZIU) of Contrail
Networking using the Contrail Ansible Deployer container. The Contrail Ansible Deployer container image
can be loaded from the Juniper Networks Contrail Container Registry hosted at hub.juniper.net/contrail.
Use the procedure in this document to perform a Zero Impact Upgrade (ZIU) of Contrail Networking using
the Contrail Ansible Deployer container. This ZIU allows Contrail Networking to upgrade while sustaining
minimal network downtime.
• The target release for this upgrade must be Contrail Release 2005 or later.
• You can use this procedure to incrementally upgrade to the next Contrail Networking release only. For
instance, if you are running Contrail Networking Release 2003 and want to upgrade to the next Contrail
Release—which is Contrail Networking Release 2005—you can use this procedure to perform the upgrade.
This procedure is not validated for upgrades between releases that are two or more releases apart. For
instance, it could not be used to upgrade from Contrail Networking Release 2002 to Contrail Networking
Release 2005.
For a list of Contrail Networking releases in a table that illustrates Contrail Networking release order,
see Contrail Networking Supported Platforms.
• The Contrail Ansible Deployer container can only be used in CentOS environments.
• Take snapshots of your current configurations before you proceed with the upgrade process. For details,
refer to “How to Backup and Restore Contrail Databases in JSON Format in Openstack Environments
Using the Openstack 13 or Ansible Deployers” on page 170.
This procedure illustrates how to perform a ZIU using the Ansible deployer container. It includes a
representative example of the steps being performed to upgrade from Contrail Networking Release 2005
to Release 2008.
1. Pull the contrail-ansible-deployer file for the target upgrade release. This procedure is typically performed
from a Contrail controller running in your environment, but it can also be performed from a separate
server which has network connectivity to the deployment that is being upgraded.
This procedure shows you how to load a 2008 image from the Juniper Networks Contrail Container
Registry. You can, however, also change the values to load the file from a private registry.
The Juniper Networks Contrail Container Registry is hosted at hub.juniper.net/contrail. If you need the
credentials to access the registry, email [email protected].
Enter the following commands to pull the contrail-ansible-deployer file from the registry:
where:
• contrail_container_tag—the container tag ID for your target Contrail Networking release. The
contrail_container_tag for any Contrail Release 21 software can be obtained from README Access
to Contrail Registry 21XX.
hub.juniper.net/contrail/contrail-kolla-ansible-deployer:2008.<contrail_container_tag>
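A hedged example of pulling the image and starting the deployer container (the docker run flags are
illustrative; the image path and container name match those used in the steps that follow):
docker pull hub.juniper.net/contrail/contrail-kolla-ansible-deployer:2008.<contrail_container_tag>
docker run -td --net host --name contrail-kolla-ansible-deployer \
  hub.juniper.net/contrail/contrail-kolla-ansible-deployer:2008.<contrail_container_tag>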
The instances.yaml file was used to initially deploy the setup. The instances.yaml can be loaded into the
Contrail Ansible Deployer and edited to support the target upgrade version.
docker cp instances.yaml contrail-kolla-ansible-deployer:/root/contrail-ansible-deployer/config/instances.yaml
docker exec -it contrail-kolla-ansible-deployer bash
cd /root/contrail-ansible-deployer/config/
vi instances.yaml
4. Update the CONTRAIL_CONTAINER_TAG to the desired version tag in the instances.yaml file from the
existing deployment. The CONTRAIL_CONTAINER_TAG variable is in the contrail_configuration: hierarchy
within the instances.yaml file.
The CONTRAIL_CONTAINER_TAG for any Contrail Release 21 software can be obtained from README
Access to Contrail Registry 21XX.
contrail_configuration:
CONTRAIL_CONTAINER_TAG: "2008.121"
CONFIG_DATABASE_NODEMGR__DEFAULTS__minimum_diskGB: "2"
DATABASE_NODEMGR__DEFAULTS__minimum_diskGB: "2"
JVM_EXTRA_OPTS: "-Xms1g -Xmx2g"
VROUTER_ENCRYPTION: FALSE
LOG_LEVEL: SYS_DEBUG
CLOUD_ORCHESTRATOR: kubernetes
5. Upgrade the control plane by running the controller stage of the ziu.yml playbook file from inside the
contrail-ansible-deployer container.
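A representative invocation from inside the deployer container is sketched below; the playbook path, the
orchestrator value, and any stage-selection variable are assumptions that may vary by release:
cd /root/contrail-ansible-deployer
ansible-playbook -v -e orchestrator=<orchestrator> \
  -e config_file=/root/contrail-ansible-deployer/config/instances.yaml \
  playbooks/ziu.yml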
7. Enter the contrail-status command to monitor upgrade status. Ensure all pods reach the running state
and all services reach the active state.
NOTE: Some output fields and data have been removed for readability.
Original
Pod Service Name State
redis contrail-external-redis running
rsyslogd running
analytics api contrail-analytics-api running
analytics collector contrail-analytics-collector running
analytics nodemgr contrail-nodemgr running
analytics provisioner contrail-provisioner running
analytics-alarm alarm-gen contrail-analytics-alarm-gen running
analytics-alarm kafka contrail-external-kafka running
analytics-alarm nodemgr contrail-nodemgr running
analytics-alarm provisioner contrail-provisioner running
analytics-snmp nodemgr contrail-nodemgr running
analytics-snmp provisioner contrail-provisioner running
analytics-snmp snmp-collector contrail-analytics-snmp-collector running
analytics-snmp topology contrail-analytics-snmp-topology running
config api contrail-controller-config-api running
config device-manager contrail-controller-config-devicemgr running
config dnsmasq contrail-controller-config-dnsmasq running
config nodemgr contrail-nodemgr running
config provisioner contrail-provisioner running
config schema contrail-controller-config-schema running
config stats contrail-controller-config-stats running
config svc-monitor contrail-controller-config-svcmonitor running
config-database cassandra contrail-external-cassandra running
<trimmed>
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail kubernetes ==
kube-manager: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail vrouter ==
nodemgr: active
agent: active
== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
8. Migrate workload VMs off one group of compute nodes. Leave those compute nodes uncommented in
the instances.yaml file, and comment out the compute nodes that are not ready to upgrade.
9. Run the install_contrail.yml playbook file, or the compute stage of the ziu.yml playbook file, to upgrade
the compute nodes that were left uncommented in the instances.yaml file. Only the compute nodes
that were left uncommented in Step 8 are upgraded to the target release in this step.
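The invocation follows the same pattern as the ziu.yml sketch shown earlier, substituting the playbook name
(the playbook path and orchestrator value remain assumptions):
ansible-playbook -v -e orchestrator=<orchestrator> \
  -e config_file=/root/contrail-ansible-deployer/config/instances.yaml \
  playbooks/install_contrail.yml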
10. Repeat Steps 8 and 9 until all compute nodes are upgraded.
You can access the Ansible playbook logs of the upgrade at /var/log/ansible.log.
Release Description
2005 Starting in Contrail Networking Release 2005, you can perform a Zero Impact Upgrade
(ZIU) of Contrail Networking using the Contrail Ansible Deployer container.
RELATED DOCUMENTATION
IN THIS SECTION
Prerequisites | 121
Updating Contrail Networking in an Environment using Red Hat Openstack 16.1 | 122
This document provides the steps needed to update a Contrail Networking deployment that is using Red
Hat Openstack 16.1 as its orchestration platform. The procedure provides a zero impact upgrade (ZIU)
with minimal disruption to network operations.
If you are using Contrail Networking in an environment that is using a Red Hat Openstack 13-based release,
see Updating Contrail Networking using the Zero Impact Upgrade Process in an Environment using Red
Hat Openstack 13.
This procedure is used to upgrade Contrail Networking when it is running in environments using RHOSP
version 16.1 (RHOSP16.1).
The procedure in this document has been validated for the following Contrail Networking upgrade scenario:
from Release 2011.L1 to Release 2011.L2.
A different procedure is followed for upgrading Contrail Networking in environments using Red Hat
Openstack 13. See Updating Contrail Networking using the Zero Impact Upgrade Process in an Environment
using Red Hat Openstack 13.
Prerequisites
• A Contrail Networking deployment using Red Hat Openstack version 16.1 (RHOSP16.1) as the
orchestration platform is already operational.
• The overcloud nodes in the RHOSP16.1 environment have an enabled Red Hat Enterprise Linux (RHEL)
subscription.
• Your environment is running Contrail Release 2011.L1 and upgrading to Contrail Release 2011.L2 or
later.
• If you are updating Red Hat Openstack simultaneously with Contrail Networking, we assume that the
undercloud node is updated to the latest minor version and that new overcloud images are prepared for
an upgrade. See the Upgrading the Undercloud section of the Keeping Red Hat OpenStack Platform
Updated guide from Red Hat.
If the undercloud has been updated and a copy of the heat templates are used for the deployment,
update the copy of the heat template from the Red Hat’s core heat template collection at
/usr/share/openstack-tripleo-heat-templates. See the Understanding Heat Templates document from
Red Hat for information on this process.
• Backup your Contrail configuration database before starting this procedure. See How to Backup and
Restore Contrail Databases in JSON Format in Openstack Environments Using the Openstack 16.1
Director Deployment.
• Each compute node agent will go down during this procedure, causing some compute node downtime.
The estimated downtime for a compute node varies by environment, but typically took between 12 and
15 minutes in our testing environments.
If you have compute nodes with workloads that cannot tolerate this downtime, consider migrating
workloads or taking other steps to accommodate this downtime in your environment.
To update Contrail Networking in an environment that is using Red Hat Openstack 16.1 as the orchestration
platform:
1. Prepare your container registry. The registry is often included in the undercloud, but it can also be a
separate node.
2. Back up the Contrail TripleO Heat Templates. See Using the Contrail Heat Template.
Prepare the new tripleo-heat-templates with the latest available software from Openstack and Contrail
Networking.
The location of the ContrailImageTag variable varies by environment. In the most commonly-used
environments, this variable is set in the contrail-services.yaml file.
You can obtain the ContrailImageTag parameter from the README Access to Contrail Registry 21XX.
NOTE: If you are using the undercloud as a registry, ensure the new contrail image is updated
in undercloud before proceeding further.
5. Update the overcloud by entering the openstack overcloud update prepare command and include the
files that were updated during the previous steps with the overcloud update.
Example:
-e tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
-e containers-prepare-parameter.yaml
6. Prepare the overcloud nodes that include Contrail containers for the update.
There are multiple methods for performing this step. Commonly used methods for performing this
operation include using the podman pull command for Docker containers and the openstack overcloud
container image upload command for Openstack containers, or running the
tripleo-heat-templates/upload.containers.sh and tools/contrail/update_contrail_preparation.sh
scripts.
• (Not required in all setups) Provide export variables for the script if the predefined values aren’t
appropriate for your environment. The script location:
~/tripleo-heat-templates/tools/contrail/update_contrail_preparation.sh
The following variables within the script are particularly significant for this upgrade:
If needed, you can obtain this parameter for a specific image from the README Access to Contrail
Registry 21XX.
• SSH_USER—The SSH username for logging into overcloud nodes. The default value is heat-admin.
The default SSH options for your environment are typically pre-defined. You are typically only
changing this value if you want to customize your update.
• STOP_CONTAINERS—The list of containers that must be stopped before the upgrade can proceed.
The default value is contrail_config_api contrail_analytics_api.
CAUTION: Contrail services stop working when the script starts running.
~/tripleo-heat-templates/tools/contrail/update_contrail_preparation.sh
• Run the openstack overcloud update run command on the first Contrail controller and, if needed,
on a Contrail Analytics node. The purpose of this step is to update one Contrail Controller and one
Contrail Analytics node to support the environment so the other Contrail Controllers and analytics
nodes can be updated without incurring additional downtime.
Example:
If the analytics and analyticsdb services are hosted on separate nodes, you may have to update the nodes
individually:
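The following sketch shows these commands; the node names are placeholders, so substitute the names
reported by openstack server list in your environment:
openstack overcloud update run --limit overcloud-contrailcontroller-0
openstack overcloud update run --limit overcloud-contrailanalytics-0
Repeat the command with the analyticsdb node name if the analyticsdb services run on a separate node.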
• After the upgrade, check the container status and versions for the Contrail Controllers and
the Contrail Analytics and AnalyticsDB nodes.
podman ps -a
Example:
8. Update the Openstack Controllers using the openstack overcloud update run commands:
Example:
NOTE: The compute node agent will be down during this step. The estimated downtime
varies by environment, but is typically between 1 and 5 minutes.
Consider migrating workloads that can't tolerate this downtime before performing this step.
NOTE: A reboot is required to complete this procedure only if a kernel update is also needed.
If you would like to avoid rebooting your compute node, check the /var/log/yum.log file
to see whether kernel packages were updated during the compute node update. A reboot is
required only if kernel updates occurred as part of the compute node update procedure.
sudo reboot
Use the contrail-status command to monitor upgrade status. Ensure all pods reach the running state
and all services reach the active state.
NOTE: Some output fields and data have been removed from this contrail-status command
sample for readability.
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
10. Enter the openstack overcloud update converge command to finalize the update.
NOTE: The options used with the openstack overcloud update converge command in this step must match
the options used with the openstack overcloud update prepare command entered in Step 5.
-e tripleo-heat-templates/environments/contrail/contrail-net.yaml \
-e tripleo-heat-templates/environments/contrail/contrail-plugins.yaml \
-e tripleo-heat-templates/environments/contrail/contrail-tls.yaml \
-e tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml \
-e tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
-e tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
-e containers-prepare-parameter.yaml
Monitor screen messages indicating SUCCESS to confirm that the updates made in this step are
successful.
RELATED DOCUMENTATION
IN THIS SECTION
Prerequisites | 130
This document provides the steps needed to update a Contrail Networking deployment that is using Red
Hat Openstack 13 as its orchestration platform. The procedure provides a zero impact upgrade (ZIU) with
minimal disruption to network operations.
If you are using Contrail Networking in an environment that is using a Red Hat Openstack 16-based release,
see Updating Contrail Networking using the Zero Impact Upgrade Process in an Environment using Red
Hat Openstack 16.1.
This procedure is used to upgrade Contrail Networking when it is running in environments using RHOSP13.
The procedure in this document has been validated for the following Contrail Networking upgrade scenarios:
NOTE: You can upgrade from any 1912.L release to any other
1912.L release.
• 1912.L0 to 2003
• 2003 to 2005
• 2005 to 2008
• 2008 to 2011
• 2011 to 2011.L1
A different procedure is followed for upgrading to earlier target Contrail Networking releases in
environments using RHOSP13 orchestration. See Upgrading Contrail Networking with Red Hat Openstack
13 using ISSU.
If you want to use this procedure to upgrade your Contrail Networking release to other releases, you must
engage Juniper Networks professional services. Contact your Juniper representative for additional
information.
Prerequisites
• A Contrail Networking deployment using Red Hat Openstack version 13 (RHOSP13) as the orchestration
platform is already operational.
• The overcloud nodes in the RHOSP13 environment have an enabled Red Hat Enterprise Linux (RHEL)
subscription.
• Your environment is running Contrail Release 1912 and upgrading to Contrail Release 1912.L1 or to
Contrail Release 2003 or later.
• If you are updating Red Hat Openstack simultaneously with Contrail Networking, we assume that the
undercloud node is updated to the latest minor version and that new overcloud images are prepared
for the upgrade, if needed. See the Upgrading the Undercloud section of the Keeping Red
Hat OpenStack Platform Updated guide from Red Hat.
If the undercloud has been updated and a copy of the heat templates is used for the deployment,
update the copy of the heat templates from Red Hat's core heat template collection at
/usr/share/openstack-tripleo-heat-templates. See the Understanding Heat Templates document from
Red Hat for information on this process.
• Backup your Contrail configuration database before starting this procedure. See “How to Backup and
Restore Contrail Databases in JSON Format in Openstack Environments Using the Openstack 13 or
Ansible Deployers” on page 170.
• Each compute node agent will go down during this procedure, causing some compute node downtime.
The estimated downtime for a compute node varies by environment, but typically took between 12 and
15 minutes in our testing environments.
If you have compute nodes with workloads that cannot tolerate this downtime, consider migrating
workloads or taking other steps to accommodate this downtime in your environment.
• If you are updating Red Hat Openstack simultaneously with Contrail Networking, update Red Hat
Openstack to the latest minor release version and ensure that the new overcloud images are prepared
for the upgrade. See the Upgrading the Undercloud section of the Keeping Red Hat OpenStack Platform
Updated guide from Red Hat for additional information.
If the undercloud has been updated and a copy of the heat templates is used for the deployment,
update the Heat templates from Red Hat's core Heat template collection at
/usr/share/openstack-tripleo-heat-templates. See the Understanding Heat Templates document from
Red Hat for additional information.
To update Contrail Networking in an environment that is using Red Hat Openstack as the orchestration
platform:
1. Prepare your docker registry. The registry is often included in the undercloud, but it can also be a
separate node.
Docker registry setup is environment independent. See Docker Registry from Docker for additional
information on Docker registry setup.
2. Backup the Contrail TripleO Heat Templates. See Using the Contrail Heat Template.
4. (Optional) Update the Contrail TripleO Puppet module to the latest version and prepare Swift Artifacts,
as applicable.
The location of the ContrailImageTag variable varies by environment. In the most commonly-used
environments, this variable is set in the contrail-services.yaml file.
You can obtain the ContrailImageTag parameter from the README Access to Contrail Registry 21XX.
6. (Recommended) If you are upgrading to Contrail Networking Release 2005 or later, check and, if needed,
enable kernel-mode vRouter huge page support so that future compute node upgrades can be performed
without rebooting.
You can enable or verify kernel-mode vRouter huge page support in the contrail-services.yaml file
using the ContrailVrouterHugepages1GB: or ContrailVrouterHugepages2MB: parameter:
parameter_defaults:
  ...
  ContrailVrouterHugepages1GB: '2'

parameter_defaults:
  ...
  # ContrailVrouterHugepages2MB: '1024'
Notes about kernel-mode vRouter huge page support in Red Hat Openstack environments:
• Kernel-mode vRouter huge page support was introduced in Contrail Networking Release 2005, and
is configured to support 2 1GB huge pages by default in Contrail Networking Release 2005 or later.
A compute node has to be rebooted once for a huge page configuration to finalize. After this initial
reboot, the compute node can perform future Contrail Networking software upgrades without
rebooting.
Note that a compute node in an environment running Contrail Networking 2005 or later does not have
huge page support enabled for kernel-mode vRouters until it is rebooted. The 2x1GB huge page
configuration is present by default, but it isn't enabled until the compute node is rebooted.
• We recommend using either 1GB or 2MB kernel-mode vRouter huge pages in most environments.
You can, however, enable 1GB and 2MB kernel-mode vRouter huge pages simultaneously in Red Hat
Openstack environments if your environment requires both huge page options.
• Changing vRouter huge page configuration settings in a Red Hat Openstack environment typically
requires a compute node reboot.
• 2 MB: Reboot usually required. The reboot is sometimes avoided in environments where memory
isn’t highly fragmented or the required number of pages can be easily allocated.
• We recommend allotting 2GB of memory—either using the default 1024x2MB huge page size setting
or the 2x1GB size setting—for huge pages in most environments. Some larger environments might
require additional huge page memory settings for scale. Other huge page size settings should only
be set by expert users in specialized circumstances.
7. Update the overcloud by entering the openstack overcloud update prepare command and include the
files that were updated during the previous steps with the overcloud update.
Example:
8. Prepare the overcloud nodes that include Contrail containers for the update.
There are multiple methods for performing this step. Commonly used methods for performing this
operation include using the docker pull command for Docker containers and the openstack overcloud
container image upload command for Openstack containers, or running the
tripleo-heat-templates/upload.containers.sh and tools/contrail/update_contrail_preparation.sh
scripts.
• (Not required in all setups) Provide export variables for the script if the predefined values aren’t
appropriate for your environment. The script location:
~/tripleo-heat-templates/tools/contrail/update_contrail_preparation.sh
The following variables within the script are particularly significant for this upgrade:
If needed, you can obtain this parameter for a specific image from the README Access to Contrail
Registry 21XX.
• SSH_USER—The SSH username for logging into overcloud nodes. The default value is heat-admin.
The default SSH options are typically pre-defined for your environment; change this value only if
you need to customize your update.
• STOP_CONTAINERS—The list of containers that must be stopped before the upgrade can proceed.
The default value is contrail_config_api contrail_analytics_api.
CAUTION: Contrail services stop working when the script starts running.
Run the script:
~/tripleo-heat-templates/tools/contrail/update_contrail_preparation.sh
• Run the openstack overcloud update run command on the first Contrail controller and, if needed,
on a Contrail Analytics node. The purpose of this step is to update one Contrail Controller and one
Contrail Analytics node to support the environment so the other Contrail Controllers and analytics
nodes can be updated without incurring additional downtime.
Example:
If the analytics and analyticsdb services are hosted on separate nodes, you may have to update the nodes
individually:
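The following sketch shows these commands for RHOSP13, where openstack overcloud update run accepts
the --nodes option; the node names are placeholders, so substitute the names reported by openstack
server list in your environment:
openstack overcloud update run --nodes overcloud-contrailcontroller-0
openstack overcloud update run --nodes overcloud-contrailanalytics-0
Repeat the command with the analyticsdb node name if the analyticsdb services run on a separate node.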
• After the upgrade, check the docker container status and versions for the Contrail Controllers and
the Contrail Analytics and AnalyticsDB nodes.
docker ps -a
Example:
10. Update the Openstack Controllers using the openstack overcloud update run commands:
Example:
NOTE: The compute node agent will be down during this step. The estimated downtime
varies by environment, but is typically between 1 and 5 minutes.
Consider migrating workloads that can't tolerate this downtime before performing this step.
NOTE: A reboot is required to complete this procedure only if a kernel update is also needed.
If you would like to avoid rebooting your compute node, check the /var/log/yum.log file
to see whether kernel packages were updated during the compute node update. A reboot is
required only if kernel updates occurred as part of the compute node update procedure.
sudo reboot
Use the contrail-status command to monitor upgrade status. Ensure all pods reach the running state
and all services reach the active state.
NOTE: Some output fields and data have been removed from this contrail-status command
sample for readability.
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
12. Enter the openstack overcloud update converge command to finalize the update.
NOTE: The options used with the openstack overcloud update converge command in this step must match
the options used with the openstack overcloud update prepare command entered in Step 7.
-e tripleo-heat-templates/environments/contrail/contrail-plugins.yaml \
-e misc_opts.yaml \
-e contrail-parameters.yaml \
-e docker_registry.yaml
Monitor screen messages indicating SUCCESS to confirm that the updates made in this step are
successful.
RELATED DOCUMENTATION
IN THIS SECTION
Prerequisites | 139
Recommendations | 140
Updating Contrail Networking in a Canonical Openstack Deployment Using Juju Charms | 141
This document provides the steps needed to update a Contrail Networking deployment that is using
Canonical Openstack as its orchestration platform. The procedure utilizes Juju charms and provides a zero
impact upgrade (ZIU) with minimal disruption to network operations.
Prerequisites
• A Contrail Networking deployment using Canonical Openstack as the orchestration platform is already
operational.
• Juju charms for Contrail services are active in your environment, and the Contrail Networking controller
has access to the Juju jumphost and the Juju cluster.
This procedure is used to upgrade Contrail Networking when it is running in environments using Canonical
Openstack.
You can use this procedure to incrementally upgrade to the next main Contrail Networking release only.
This procedure is not validated for upgrades between releases that are two or more releases apart.
The procedure in this document has been validated for the following Contrail Networking upgrade scenarios:
• 2003 to 2005
• 2005 to 2008
• 2008 to 2011
Recommendations
We strongly recommend performing the following tasks before starting this procedure:
Starting in Contrail Networking Release 2005, you can enable huge pages in the kernel-mode vRouter
to avoid future compute node reboots during upgrades. Huge pages in Contrail Networking are used
primarily to allocate flow and bridge table memory within the vRouter. Huge pages for kernel-mode
vRouters provide enough flow and bridge table memory to avoid compute node reboots to complete
future Contrail Networking software upgrades.
We recommend allotting 2GB of memory—either using the default 1024x2MB huge page size setting
or the 2x1GB size setting—for huge pages. Other huge page size settings should only be set by expert
users in specialized circumstances.
A compute node reboot is required to initially enable huge pages. Future compute node upgrades can
happen without reboots after huge pages are enabled. The 1024x2MB huge page setting is configured
by default starting in Contrail Networking Release 2005, but is not active in any compute node until the
compute node is rebooted to enable the setting.
2MB and 1GB huge page size settings cannot be enabled simultaneously in environments using Juju.
The huge page configuration can be changed by entering one of the following commands:
• Enable 1024 2MB huge pages (default): juju config contrail-agent kernel-hugepages-2m=1024
• Disable 2MB huge pages (empty value): juju config contrail-agent kernel-hugepages-2m=""
• Disable 1GB huge pages (default; empty value): juju config contrail-agent kernel-hugepages-1g=""
NOTE: 1GB huge page settings can only be specified at initial deployment; you cannot
modify the setting in active deployments. The 1GB huge page setting also cannot be
completely disabled after being activated on a compute node. Be sure that you want to use
1GB huge page settings on your compute node before enabling the setting.
To update Contrail Networking in an environment that is using Canonical Openstack as the orchestration
platform:
1. From the Juju jumphost, enter the run-action command to place all control plane services—Contrail
Controller, Contrail Analytics, and Contrail AnalyticsDB—into maintenance mode in preparation for the
upgrade.
NOTE: The --wait option is not required to complete this step, but is recommended to ensure
this procedure completes without interfering with the procedures in the next step.
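The following is a minimal sketch of this step. It assumes the control plane charms expose an upgrade-ziu
action that drives the maintenance workflow; the action name and target unit may differ in your charm
revision, so check juju actions contrail-controller before running it:
juju run-action --wait contrail-controller/0 upgrade-ziu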
Wait for all charms to move to the maintenance status. You can check the status of all charms by entering
the juju status command.
2. Upgrade all charms. See the Upgrading applications document from Juju.
3. Update the image tags in Juju for the Contrail Analytics, Contrail AnalyticsDB, Contrail Agent, and
Contrail Openstack services.
If a Contrail Service node (CSN) is part of the cluster, also update the image tags in Juju for the Contrail
Service node.
4. Update the image tag in Juju for the Contrail Controller service:
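The following sketch shows steps 3 and 4 using the image-tag charm option. The option name and the
tag value are assumptions for illustration; use the option and tag that apply to your charms and target
release, and update the Contrail Controller service last:
juju config contrail-analytics image-tag=<new-tag>
juju config contrail-analyticsdb image-tag=<new-tag>
juju config contrail-agent image-tag=<new-tag>
juju config contrail-openstack image-tag=<new-tag>
juju config contrail-controller image-tag=<new-tag>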
5. After updating the image tags, wait for all services to complete stage 5 of the ZIU upgrade process
workflow. The wait time for this step varies by environment, but often takes 30 to 90 minutes.
Enter the juju status command and review the Workload and Message field outputs to monitor progress.
The update is complete when all services are in the maintenance state—the Workload field output is
maintenance—and each individual service has completed stage 5 of the ZIU upgrade—illustrated by
the ziu is in progress - stage/done = 5/5 output in the Message field.
The following sample output shows an in-progress update that has not completed the image tag update
process. The Message field shows that the ZIU process has not completed stage 5 of the upgrade.
juju status
Unit Workload Agent Message
contrail-analytics/0* maintenance idle ziu is in progress - stage/done
= 4/4
contrail-analytics/1 maintenance idle ziu is in progress - stage/done
= 4/4
contrail-analytics/2 maintenance idle ziu is in progress - stage/done
= 4/4
contrail-analyticsdb/0* maintenance idle ziu is in progress - stage/done
= 4/4
contrail-analyticsdb/1 maintenance idle ziu is in progress - stage/done
= 4/3
contrail-analyticsdb/2 maintenance idle ziu is in progress - stage/done
= 4/3
contrail-controller/0* maintenance idle ziu is in progress - stage/done
= 4/4
ntp/3 active idle chrony: Ready
contrail-controller/1 maintenance executing ziu is in progress - stage/done
= 4/3
ntp/2 active idle chrony: Ready
contrail-controller/2 maintenance idle ziu is in progress - stage/done
= 4/3
ntp/4 active idle chrony: Ready
contrail-keystone-auth/0* active idle Unit is ready
The following sample output shows an update that has completed the image tag update process on all
services. The Workload field is maintenance for all services, and the Message field shows that stage 5 of
the ZIU process is done.
juju status
Unit Workload Agent Message
contrail-analytics/0* maintenance idle ziu is in progress - stage/done =
5/5
contrail-analytics/1 maintenance idle ziu is in progress - stage/done =
5/5
contrail-analytics/2 maintenance idle ziu is in progress - stage/done =
5/5
contrail-analyticsdb/0* maintenance idle ziu is in progress - stage/done =
5/5
contrail-analyticsdb/1 maintenance idle ziu is in progress - stage/done =
5/5
contrail-analyticsdb/2 maintenance idle ziu is in progress - stage/done =
5/5
contrail-controller/0* maintenance idle ziu is in progress - stage/done =
5/5
ntp/3 active idle chrony: Ready
contrail-controller/1 maintenance idle ziu is in progress - stage/done =
5/5
ntp/2 active idle chrony: Ready
contrail-controller/2 maintenance idle ziu is in progress - stage/done =
5/5
ntp/4 active idle chrony: Ready
contrail-keystone-auth/0* active idle Unit is ready
glance/0* active idle Unit is ready
haproxy/0* active idle Unit is ready
keepalived/2 active idle VIP ready
haproxy/1 active idle Unit is ready
keepalived/0* active idle VIP ready
haproxy/2 active idle Unit is ready
keepalived/1 active idle VIP ready
heat/0* active idle Unit is ready
contrail-openstack/3 active idle Unit is ready
keystone/0* active idle Unit is ready
mysql/0* active idle Unit is ready
neutron-api/0* active idle Unit is ready
contrail-openstack/2 active idle Unit is ready
nova-cloud-controller/0* active idle Unit is ready
nova-compute/0* active idle Unit is ready
If Contrail Service nodes (CSNs) are part of the cluster, also upgrade every Contrail CSN agent:
Wait for each compute node and CSN node upgrade to finish. The wait time for this step varies by
environment, but typically takes around 10 minutes to complete per node.
7. If huge pages are not enabled for your vRouter, log into each individual compute node and reboot to
complete the procedure.
NOTE: A compute node reboot is required to initially enable huge pages. If huge pages have
been configured in Juju without a compute node reboot, you can also use this reboot to
enable huge pages. You can avoid rebooting the compute node during future software
upgrades after this initial reboot.
1024x2MB huge page support is configured by default starting in Contrail Networking Release
2005, which is also the first Contrail Networking release that supports huge pages. If you are
upgrading to Release 2005 for the first time, a compute node reboot is always required
because huge pages could not have been previously enabled.
This reboot also enables the default 1024x2MB huge page configuration unless you change
the huge page configuration in Release 2005 or later.
sudo reboot
RELATED DOCUMENTATION
The goal of this topic is to provide a combined procedure to upgrade Red Hat OpenStack Platform (RHOSP)
from RHOSP 13 to RHOSP 16.1 by leveraging Red Hat Fast Forward Upgrade (FFU) procedure while
simultaneously upgrading Contrail Networking from Release 19xx to Release 2011. The procedure leverages
the Zero Impact Upgrade (ZIU) procedure from Contrail to minimize the downtime.
Downtime is reduced because no extra server reboots are required beyond the ones that the RHOSP
FFU procedure already requires for kernel/RHEL upgrades.
Refer to the Red Hat OpenStack Framework for Upgrades (13 to 16.1) documentation for details on the
RHOSP 13 to RHOSP 16.1 Fast Forward Upgrade (FFU) procedure for upgrading an OpenStack Platform
environment from one long life version to the next long life version.
1. Follow chapter 2—Planning and preparation for an in-place upgrade through chapter 8.3—Copying the
Leapp data to the overcloud nodes of the Red Hat OpenStack Framework for Upgrades (13 to 16.1) procedure.
The playbook sets the new NIC prefix to em. To set a different NIC prefix, set the prefix variable
when running the playbook:
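A sketch of such an invocation is shown below; the playbook name, inventory file, and prefix value are
placeholders for illustration only:
cd ~/tf-heat-templates/tools/contrail
ansible-playbook -i inventory.yaml -e prefix=ens playbook-nics.yaml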
d. Reboot overcloud nodes using the standard reboot procedures. For more information, see Red Hat
rebooting nodes documentation.
3. Follow chapter 8.5—Copying the Leapp data to the overcloud nodes through chapter 19.2—Upgrading
Controller nodes with external Ceph deployments of the Red Hat OpenStack Framework for Upgrades (13
to 16.1) procedure.
NOTE: If you are not using the default stack name (overcloud), set your stack name with the
--stack STACK_NAME option, replacing STACK_NAME with the name of your stack.
$ source ~/stackrc
b. Migrate your instances. For details, see the Red Hat Migrating virtual machine instances between
Compute nodes documentation.
c. Run the upgrade command with the system_upgrade_prepare and system_upgrade_run tags.
$ openstack overcloud upgrade run --stack STACK_NAME --tags system_upgrade_prepare --limit overcloud-compute-0
$ openstack overcloud upgrade run --stack STACK_NAME --tags system_upgrade_run --limit overcloud-compute-0
To upgrade multiple Compute nodes in parallel, set the --limit option to a comma-separated list of
nodes that you want to upgrade.
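For example, the following sketch upgrades two Compute nodes in parallel (the node names are
placeholders):
$ openstack overcloud upgrade run --stack STACK_NAME --tags system_upgrade_run --limit overcloud-compute-0,overcloud-compute-1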
d. Run the upgrade command with no tags to perform the Red Hat OpenStack Platform upgrade.
5. Follow chapter 19.4—Synchronizing the overcloud stack through chapter 21.4—Synchronizing the overcloud
stack of the Red Hat OpenStack Framework for Upgrades (13 to 16.1) procedure to complete the upgrade.
RELATED DOCUMENTATION
The goal of this topic is to provide a combined procedure to upgrade Red Hat OpenStack Platform (RHOSP)
from RHOSP 13 to RHOSP 16.1 by leveraging the Red Hat Fast Forward Upgrade (FFU) procedure while
simultaneously upgrading Contrail Networking from Release 1912.L2 to Release 2011.L3. This procedure
leverages the RHOSP Speeding Up an Overcloud Upgrade process.
Follow chapter 2—Planning and preparation for an in-place upgrade through chapter 8.3—Copying the Leapp
data to the overcloud nodes of the Framework for Upgrades (13 to 16.1) procedure.
To upgrade an overcloud:
1. Create a Contrail container file to upload the Contrail container image to the undercloud, if the undercloud
is used as a registry.
cd ~/tf-heat-templates/tools/contrail
./import_contrail_container.sh -f container_outputfile -r registry -t tag [-i
insecure] [-u username] [-p password] [-c certificate path]
For example:
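A hypothetical invocation might look like the following; the output file, registry, tag, and credentials are
placeholders:
cd ~/tf-heat-templates/tools/contrail
./import_contrail_container.sh -f /tmp/contrail_container_file -r hub.juniper.net/contrail -t 2011.L3 -u <username> -p <password>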
Run the following command from the undercloud to upload the Contrail container image.
3. Follow chapter 2—Planning and preparation for an in-place upgrade through chapter 8.3—Copying the
Leapp data to the overcloud nodes of the Framework for Upgrades (13 to 16.1) procedure to upgrade
the undercloud and prepare the environment.
Follow Chapter 20.1—Running the overcloud upgrade preparation and Chapter 20.2—Upgrading the control
plane nodes of the Red Hat OpenStack Speeding Up an Overcloud Upgrade procedure.
5. Run the playbook-nics-vhost0.yaml playbook to set up the vhost interface on Contrail compute nodes.
NOTE: Stop or migrate the workloads running on the compute batch that you are going to
upgrade.
6. Run the upgrade command with the system_upgrade_prepare and system_upgrade_run tags.
To upgrade multiple Compute nodes in parallel, set the --limit option to a comma-separated list of
nodes that you want to upgrade.
For an example, see
https://github.com/tungstenfabric/tf-deployment-test/blob/master/rhosp/ffu_ziu_13_16/tf_specific/run_overcloud_system_upgrade_run.sh.
7. Follow Chapter 20.3—Upgrading Compute nodes in parallel and Chapter 20.4—Synchronizing the overcloud
stack of the Red Hat OpenStack Speeding Up an Overcloud Upgrade procedure to complete the overcloud
upgrade.
RELATED DOCUMENTATION
Starting in Contrail Networking Release 21.3, you can update Contrail Networking through Kubernetes
and/or Red Hat OpenShift.
You can use this procedure to update Contrail Networking deployed by the Tungsten Fabric (TF) Operator.
NOTE: Only CONTRAIL_CONTAINER_TAG must have a new tag. The manifests must be rendered
with the same exported environment variables that were used during the initial deployment.
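For example, a minimal sketch of re-exporting the tag before re-rendering the manifests; the tag value is
a placeholder:
export CONTRAIL_CONTAINER_TAG=<new-tag>
# Re-export all other environment variables exactly as they were set for the initial deployment,
# then re-render and apply the manifests.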
5. Wait and ensure that the Contrail Control plane pods are updated.
6. Use the contrail-status tool to check the Contrail Networking status on all the master nodes.
$ contrail-status
The output must show that all services are active:
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail kubernetes ==
kube-manager: backup
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail vrouter ==
nodemgr: active
agent: active
== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active
== Contrail config ==
svc-monitor: backup
nodemgr: active
device-manager: backup
api: active
schema: backup
• Choose a node to upgrade and obtain the vRouter daemon set name for the node.
• Delete the vRouter pod resource by specifying the name of the pod you want to delete. (See the kubectl sketch after this list.)
Verify that the status shows Running for all the vRouter daemon sets. The number of daemon set entries
depends on the cluster size (that is, the number of master nodes and worker nodes).
You can also verify the status of a particular daemon set. Obtain the new vrouter-daemonset from
the kubectl describe node <node name> command. Check the status of that particular daemon set
using the kubectl get pods -n tf | grep "vrouter1-vrouter-daemonset-XXX" command.
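The following is a minimal kubectl sketch of these steps; the node and pod names are placeholders, and
the tf namespace follows the commands shown above:
kubectl describe node <node name> | grep vrouter
kubectl get pods -n tf -o wide | grep <node name>
kubectl delete pod -n tf vrouter1-vrouter-daemonset-XXX
kubectl get pods -n tf | grep vrouter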
8. Verify the vRouter agent status by using the contrail-status command on the node.
$ contrail-status
The output must show that all services are active:
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail kubernetes ==
kube-manager: backup
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail vrouter ==
nodemgr: active
agent: active
== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active
== Contrail config ==
svc-monitor: backup
nodemgr: active
device-manager: backup
api: active
schema: backup
RELATED DOCUMENTATION
This document provides the steps needed to use the Operator Framework for a Contrail Networking
deployment that is using Red Hat Openstack version 16.1 (RHOSP16.1) as its orchestration platform.
Figure 19 on page 155 shows the difference between the classic deployment of RHOSP16.1 and the
Tungsten Fabric (TF) Operator based deployment.
• Contrail Control plane nodes require access to RHOSP networks such as Internal API and Tenant.
The administrator must configure access to the Internal API and Tenant networks if the Kubernetes or
OpenShift cluster is deployed outside of the RHOSP networks.
1. Deploy Kubernetes or OpenShift cluster with Contrail Control plane without the keystone options.
a. Generate a self-signed CA certificate and key and provide their content in the environment
variables.
If Red Hat IdM is used for RHOSP, the RHOSP must bundle its own CA and the IPA CA.
a. Do not create virtual machines (VMs) for Control Plane and skip any related steps.
b. For Transport Layer Security (TLS), use the self-signed certificates. The Kubernetes or OpenShift
is not integrated with Red Hat IdM.
c. Set the counters to zero for ContrailController, Analytics, and database roles.
d. Provide the addresses of the Contrail Control plane deployed by Kubernetes or OpenShift in the
heat parameters.
ExtraHostFileEntries:
  - 'ip1 <FQDN K8S master1> <Short name master1>'
  - 'ip2 <FQDN K8S master2> <Short name master2>'
  - 'ip3 <FQDN K8S master3> <Short name master3>'
ExternalContrailConfigIPs: <ip1>,<ip2>,<ip3>
ExternalContrailControlIPs: <ip1>,<ip2>,<ip3>
ExternalContrailAnalyticsIPs: <ip1>,<ip2>,<ip3>
ControllerExtraConfig:
  contrail_internal_api_ssl: True
ComputeExtraConfig:
  contrail_internal_api_ssl: True
# add contrail_internal_api_ssl for all other roles, if any
3. Ensure that Contrail Control plane deployed by Kubernetes or OpenShift has connectivity to RHOSP
InternalAPI and tenant networks.
4. Ensure that Contrail Control plane nodes deployed by Kubernetes or OpenShift are able to resolve
RHOSP FQDNs for Internal API and Control Plane networks. For example, add names to /etc/hosts
on Contrail Control plane nodes.
6. Wait until contrail-status shows active for Control plane and for RHOSP computes.
RELATED DOCUMENTATION
CHAPTER 5
IN THIS CHAPTER
How to Backup and Restore Contrail Databases in JSON Format in Openstack Environments Using the
Openstack 16.1 Director Deployment | 159
How to Backup and Restore Contrail Databases in JSON Format in Openstack Environments Using the
Openstack 13 or Ansible Deployers | 170
IN THIS SECTION
This document shows how to backup and restore the Contrail databases—Cassandra and Zookeeper—in
JSON format when Contrail Networking is running in Openstack-orchestrated environments that were
deployed using the Red Hat Openstack 16.1 director deployment.
If you are deploying Contrail Networking in an Openstack-orchestrated environment that was deployed
using an Openstack 13-based or Ansible deployer, see How to Backup and Restore Contrail Databases in
JSON Format in Openstack Environments Using the Openstack 13 or Ansible Deployer.
Contrail Networking is initially supported in Openstack environments using the Openstack 16.1 director
deployment in Contrail Networking Release 2008. See Contrail Networking Supported Platforms for a
matrix of Contrail Networking release support within orchestration platforms and deployers.
The backup and restore procedure must be completed for nodes running the same Contrail Networking
release. The procedure is used to backup the Contrail Networking databases only; it does not include
instructions for backing up orchestration system databases.
CAUTION: Database backups must be consistent across all systems because the state
of the Contrail database is associated with other system databases, such as OpenStack
databases. Database changes associated with northbound APIs must be stopped on
all the systems before performing any backup operation. For example, you might block
the external VIP for northbound APIs at the load balancer level, such as HAproxy.
This procedure provides a simple database backup in JSON format. This procedure is performed using the
db_json_exim.py script located inside the config-api container in
/usr/lib/python2.7/site-packages/cfgm_common on the controller node.
2. Log into one of the config nodes. Create the /tmp/db-dump directory on any of the config node hosts.
mkdir /tmp/db-dump
3. On the same config node, copy the contrail-api.conf file from the container to the host.
podman cp contrail_config_api:/etc/contrail/contrail-api-0.conf
/tmp/db-dump/contrail-api.conf
The Cassandra database instance on any configuration node includes the complete Cassandra database
for all configuration nodes in the cluster. Steps 2 and 3, therefore, only need to be performed on one
configuration node.
4. On all Contrail controller nodes, stop the following Contrail configuration services:
This step must be performed on each individual controller node in the cluster.
5. On all nodes hosting Contrail Analytics containers, stop the following analytics services:
This step must be performed on each individual analytics node in the cluster.
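A sketch of steps 4 and 5 is shown below. The container names are taken from the Red Hat Openstack
examples later in this chapter and may differ in your deployment; list the running containers with podman
ps to confirm the names before stopping them.
On each Contrail controller node:
podman stop contrail_config_svc_monitor
podman stop contrail_config_device_manager
podman stop contrail_config_schema
podman stop contrail_config_api
On each node hosting Contrail Analytics containers:
podman stop contrail_analytics_api
(Stop any additional analytics services required in your environment.)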
Use the podman images command to list the name or ID of the config api image.
Example:
7. From the same config node, start the config api container with the entrypoint set to /bin/bash, and
map the /tmp/db-dump directory from the host to the /tmp directory inside the container. You perform
this step to ensure that the API services are not started on the config node.
Example:
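The following sketch mirrors the docker run examples later in this chapter; the image name and tag are
placeholders taken from the podman images output:
podman run --rm -it -v /tmp/db-dump/:/tmp/ -v /etc/contrail/ssl:/etc/contrail/ssl:ro \
  --network host --entrypoint=/bin/bash <registry>/contrail-controller-config-api:<tag>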
8. From the container created on the config node in Step 7, use the db_json_exim.py script to backup
data in JSON format. The db dump file will be saved in the /tmp/db-dump/ directory on this config
node.
Example:
(config-api)[user@overcloud-contrailcontroller-0 /]$ cd
/usr/lib/python2.7/site-packages/cfgm_common
(config-api)[user@overcloud-contrailcontroller-0
/usr/lib/python2.7/site-packages/cfgm_common]$ python db_json_exim.py --export-to
/tmp/db-dump.json --api-conf /tmp/contrail-api.conf
2021-06-30 19:47:27,120 INFO: Cassandra DB dumped
2021-06-30 19:47:28,878 INFO: Zookeeper DB dumped
2021-06-30 19:47:28,895 INFO: DB dump wrote to file /tmp/db-dump.json
The Cassandra database instance on any configuration node includes the complete Cassandra database
for all configuration nodes in the cluster. You therefore only need to perform steps 6 through 8 from
one of the configuration nodes.
9. (Optional. Recommended) From the same config node, enter the cat /tmp/db-dump.json | python -m
json.tool | less command to view a more readable version of the dump file.
10. From the same config node, exit out of the config api container. This will stop the container.
exit
13. On each config node, enter the contrail-status command to confirm that services are in the active or
running states.
NOTE: Some command output and output fields are removed for readability. Output shown
is from a single node hosting config and analytics services.
contrail-status
Pod Service Original Name State
analytics api contrail-analytics-api running
analytics collector contrail-analytics-collector running
analytics nodemgr contrail-nodemgr running
analytics provisioner contrail-provisioner running
analytics redis contrail-external-redis running
analytics-alarm alarm-gen contrail-analytics-alarm-gen running
analytics-alarm kafka contrail-external-kafka running
<some output removed for readability>
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
This procedure provides the steps to restore a system using the simple database backup JSON file that
was created in “Simple Database Backup in JSON Format” on page 160.
1. Copy the contrail-api.conf file from the container to the host on any one of the config nodes.
podman cp contrail_config_api:/etc/contrail/contrail-api-0.conf
/tmp/db-dump/contrail-api.conf
3. On all nodes hosting Contrail Analytics containers, stop the following services:
cd /var/lib/contrail/config_zookeeper
cp -aR version-2/ zookper-bkp.save
rm -rf version-2
cd /var/lib/contrail/config_cassandra
cp -aR data/ Cassandra_data-save
rm -rf data/
12. Use the podman images command to list the name or ID of the config api image.
Example:
13. Run a new podman container using the name or ID of the config_api image on the same config node.
Use the registry_name and container_tag from the output of step 12.
Example:
14. Restore the data in the new running container on the same config node.
cd /usr/lib/python2.7/site-packages/cfgm_common
python db_json_exim.py --import-from /tmp/db-dump.json --api-conf
/tmp/contrail-api.conf
Example:
cd /usr/lib/python2.7/site-packages/cfgm_common
python db_json_exim.py --import-from /tmp/db-dump.json --api-conf /tmp/contrail-api.conf
2021-07-06 17:22:17,157 INFO: DB dump file loaded
2021-07-06 17:23:12,227 INFO: Cassandra DB restored
2021-07-06 17:23:14,236 INFO: Zookeeper DB restored
15. Exit out of the config api container. This will stop the container.
exit
18. Enter the contrail-status command on each configuration node and, when applicable, on each analytics
node to confirm that services are in the active or running states.
NOTE: Output shown for a config node. Some command output and output fields are removed
for readability.
contrail-status
Pod Service Original Name State
config api contrail-controller-config-api running
config device-manager contrail-controller-config-devicemgr running
config dnsmasq contrail-controller-config-dnsmasq running
config nodemgr contrail-nodemgr running
config provisioner contrail-provisioner running
config schema contrail-controller-config-schema running
config stats contrail-controller-config-stats running
<some output removed for readability>
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
IN THIS SECTION
Example: How to Restore a Database Using the JSON Backup (Ansible Deployer Environment) | 184
Example: How to Restore a Database Using the JSON Backup (Red Hat Openstack Deployer
Environment) | 188
This document shows how to backup and restore the Contrail databases—Cassandra and Zookeeper—in
JSON format when Contrail Networking is running in Openstack-orchestrated environments that were
deployed using an Openstack 13-based or Ansible deployer. For a list of Openstack-orchestrated
environments that are deployed using Openstack 13-based or Ansible deployers, see the Contrail
Networking Supported Platforms matrix.
If you are deploying Contrail Networking in an Openstack-orchestrated environment that was deployed
using an Openstack 16-based deployer, see How to Backup and Restore Contrail Databases in JSON
Format in Openstack Environments Using the Openstack 16.1 Deployer.
The backup and restore procedure must be completed for nodes running the same Contrail Networking
release. The procedure is used to backup the Contrail Networking databases only; it does not include
instructions for backing up orchestration system databases.
CAUTION: Database backups must be consistent across all systems because the state
of the Contrail database is associated with other system databases, such as OpenStack
databases. Database changes associated with northbound APIs must be stopped on
all the systems before performing any backup operation. For example, you might block
the external VIP for northbound APIs at the load balancer level, such as HAproxy.
This procedure provides a simple database backup in JSON format. This procedure is performed using the
db_json_exim.py script located in the /usr/lib/python2.7/site-packages/cfgm_common directory on the
controller node.
1. Log into one of the config nodes. Create the /tmp/db-dump directory on any of the config node hosts.
mkdir /tmp/db-dump
2. On the same config node, copy the contrail-api.conf file from the container to the host.
Ansible Deployer:
The Cassandra database instance on any configuration node includes the complete Cassandra database
for all configuration nodes in the cluster. Steps 1 and 2, therefore, only need to be performed on one
configuration node.
3. Stop the following docker configuration services on all of the Contrail configuration nodes.
Ansible Deployer:
This step must be performed on each individual config node in the cluster.
List the docker images to find the name or ID of the config api image.
Example:
5. From the same config node, start the config api container with the entrypoint set to /bin/bash,
mapping /tmp/db-dump from the host to the /tmp directory inside the container. You perform
this step to ensure that the API services are not started on the config node.
Example:
6. From the docker container created on the config node in Step 5, use the db_json_exim.py script to
back up data in JSON format. The db dump file will be saved in the /tmp/db-dump/ directory on this
config node.
cd /usr/lib/python2.7/site-packages/cfgm_common
python db_json_exim.py --export-to /tmp/db-dump.json --api-conf
/tmp/contrail-api.conf
The Cassandra database instance on any configuration node includes the complete Cassandra database
for all configuration nodes in the cluster. You, therefore, only need to perform step 4 through 6 from
one of the configuration nodes.
7. (Optional. Recommended) From the same config node, enter the cat /tmp/db-dump.json | python -m
json.tool | less command to view a more readable version of the dump file.
8. From the same config node, exit out of the config api container. This will stop the container.
exit
9. Start the following configuration services on all of the Contrail configuration nodes.
Ansible Deployer:
10. On each config node, enter the contrail-status command to confirm that services are in the active or
running states.
NOTE: Some command output and output fields are removed for readability. Output shown
is from a node hosting config and analytics services.
contrail-status
Pod Service Original Name State
analytics api contrail-analytics-api running
analytics collector contrail-analytics-collector running
analytics nodemgr contrail-nodemgr running
analytics provisioner contrail-provisioner running
analytics redis contrail-external-redis running
analytics-alarm alarm-gen contrail-analytics-alarm-gen running
analytics-alarm kafka contrail-external-kafka running
<some output removed for readability>
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
These examples illustrate the process for creating a simple database backup in JSON format in both an
Ansible deployer environment and a Red Hat Openstack deployer environment.
## control_config1 ##
mkdir /tmp/db-dump
docker cp config_api_1:/etc/contrail/contrail-api.conf /tmp/db-dump/
docker stop config_svcmonitor_1
docker stop config_devicemgr_1
docker stop config_schema_1
docker stop config_api_1
## control_config2 ##
docker stop config_svcmonitor_1
docker stop config_devicemgr_1
docker stop config_schema_1
docker stop config_api_1
## control_config3 ##
docker stop config_svcmonitor_1
docker stop config_devicemgr_1
docker stop config_schema_1
docker stop config_api_1
## control_config1 ##
docker run --rm -it -v /tmp/db-dump/:/tmp/ -v /etc/contrail/ssl:/etc/contrail/ssl:ro
--network host --entrypoint=/bin/bash
hub.juniper.net/contrail/contrail-controller-config-api:1909.30-ocata
cd /usr/lib/python2.7/site-packages/cfgm_common
python db_json_exim.py --export-to /tmp/db-dump.json --api-conf
/tmp/contrail-api.conf
cat /tmp/db-dump.json | python -m json.tool | less
exit
docker start config_api_1
docker start config_schema_1
docker start config_svcmonitor_1
docker start config_devicemgr_1
contrail-status
## control_config2 ##
docker start config_api_1
docker start config_schema_1
docker start config_svcmonitor_1
docker start config_devicemgr_1
contrail-status
## control_config3 ##
docker start config_api_1
docker start config_schema_1
docker start config_svcmonitor_1
docker start config_devicemgr_1
contrail-status
## control_config1 ##
mkdir /tmp/db-dump
docker cp contrail_config_api:/etc/contrail/contrail-api.conf /tmp/db-dump/
docker stop contrail_config_svc_monitor
docker stop contrail_config_device_manager
docker stop contrail_config_schema
docker stop contrail_config_api
## control_config2 ##
docker stop contrail_config_svc_monitor
docker stop contrail_config_device_manager
docker stop contrail_config_schema
docker stop contrail_config_api
## control_config3 ##
docker stop contrail_config_svc_monitor
docker stop contrail_config_device_manager
docker stop contrail_config_schema
docker stop contrail_config_api
## control_config1 ##
docker run --rm -it -v /tmp/db-dump/:/tmp/ -v /etc/contrail/ssl:/etc/contrail/ssl:ro
--network host --entrypoint=/bin/bash
hub.juniper.net/contrail/contrail-controller-config-api:1909.30-ocata
cd /usr/lib/python2.7/site-packages/cfgm_common
python db_json_exim.py --export-to /tmp/db-dump.json --api-conf
/tmp/contrail-api.conf
cat /tmp/db-dump.json | python -m json.tool | less
exit
docker start contrail_config_api
docker start contrail_config_schema
docker start contrail_config_svc_monitor
docker start contrail_config_device_manager
contrail-status
## control_config2 ##
docker start contrail_config_api
docker start contrail_config_schema
docker start contrail_config_svc_monitor
docker start contrail_config_device_manager
contrail-status
## control_config3 ##
docker start contrail_config_api
docker start contrail_config_schema
docker start contrail_config_svc_monitor
docker start contrail_config_device_manager
contrail-status
This procedure provides the steps to restore a system using the simple database backup JSON file that
was created in “Simple Database Backup in JSON Format” on page 160.
1. Copy the contrail-api.conf file from the container to the host on any one of the config nodes.
Ansible Deployer:
Ansible Deployer:
Ansible Deployer:
Ansible Deployer:
Ansible Deployer:
cd /var/lib/docker/volumes/config_database_config_zookeeper/
cp -R _data/version-2/ version-2-save
cd /var/lib/docker/volumes/config_zookeeper/
cp -R _data/version-2/ version-2-save
rm -rf _data/version-2/*
Ansible Deployer:
cd /var/lib/docker/volumes/config_database_config_cassandra/
cp -R _data/ Cassandra_data-save
cd /var/lib/docker/volumes/config_cassandra/
cp -R _data/ Cassandra_data-save
rm -rf _data/*
Ansible Deployer:
Ansible Deployer:
11. List the docker images to find the name or ID of the config-api image on the config node.
Example:
12. Run a new docker container using the name or ID of the config_api image on the same config node.
Use the registry_name and container_tag from the output of step 11.
Example:
13. Restore the data in the new running docker container on the same config node.
cd /usr/lib/python2.7/site-packages/cfgm_common
python db_json_exim.py --import-from /tmp/db-dump.json --api-conf
/tmp/contrail-api.conf
14. Exit out of the config api container. This will stop the container.
exit
Ansible Deployer:
16. Enter the contrail-status command on each configuration node and, when applicable, on each analytics
node to confirm that services are in the active or running states.
NOTE: Output shown for a config node. Some command output and output fields are removed
for readability.
contrail-status
Pod Service Original Name State
config api contrail-controller-config-api running
config device-manager contrail-controller-config-devicemgr running
config dnsmasq contrail-controller-config-dnsmasq running
config nodemgr contrail-nodemgr running
config provisioner contrail-provisioner running
config schema contrail-controller-config-schema running
config stats contrail-controller-config-stats running
<some output removed for readability>
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active
== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active
== Contrail webui ==
web: active
job: active
== Contrail config ==
svc-monitor: active
nodemgr: active
device-manager: active
api: active
schema: active
Example: How to Restore a Database Using the JSON Backup (Ansible Deployer Environment)
This example shows how to restore the databases for three controllers connected to the Contrail
Configuration database (config-db). This example assumes a JSON backup file of the databases was
previously created using the instructions provided in “Simple Database Backup in JSON Format” on
page 160. The network was deployed using Ansible and the three controllers—nodec53, nodec54, and
nodec55—have separate IP addresses.
## Stop Cassandra ##
[root@nodec53 ~]# docker stop config_database_cassandra_1
[root@nodec54 ~]# docker stop config_database_cassandra_1
[root@nodec55 ~]# docker stop config_database_cassandra_1
## Stop Zookeeper ##
[root@nodec53 ~]# docker stop config_database_zookeeper_1
[root@nodec54 ~]# docker stop config_database_zookeeper_1
[root@nodec55 ~]# docker stop config_database_zookeeper_1
## Start Zookeeper ##
[root@nodec53 ~]# docker start config_database_zookeeper_1
[root@nodec54 ~]# docker start config_database_zookeeper_1
[root@nodec55 ~]# docker start config_database_zookeeper_1
## Start Cassandra ##
[root@nodec53 ~]# docker start config_database_cassandra_1
[root@nodec54 ~]# docker start config_database_cassandra_1
[root@nodec55 ~]# docker start config_database_cassandra_1
## Run Docker Image & Mount Contrail TLS Certificates to Contrail SSL Directory
##
[root@nodec54 ~]# docker image ls | grep config-api
hub.juniper.net/contrail/contrail-controller-config-api 1909.30-ocata c9d757252a0c
4 months ago 583MB
[root@nodec54 ~]# docker run --rm -it -v /tmp/db-dump/:/tmp/ -v
/etc/contrail/ssl:/etc/contrail/ssl:ro --network host --entrypoint=/bin/bash
hub.juniper.net/contrail/contrail-controller-config-api:1909.30-ocata
Example: How to Restore a Database Using the JSON Backup (Red Hat Openstack Deployer
Environment)
This example shows how to restore the databases from an environment that was deployed using Red Hat
Openstack and includes three config nodes—config1, config2, and config3—connected to the Contrail
Configuration database (config-db). All steps that need to be done from a single config node are performed
from config1.
The environment also contains three analytics nodes—analytics1, analytics2, and analytics3—to provide
analytics services.
This example assumes a JSON backup file of the databases was previously created using the instructions
provided in “Simple Database Backup in JSON Format” on page 160.
## Stop Cassandra ##
[root@config1 ~]# docker stop contrail_config_database
[root@config2 ~]# docker stop contrail_config_database
[root@config3 ~]# docker stop contrail_config_database
## Stop Zookeeper ##
[root@config1 ~]# docker stop contrail_config_zookeeper
[root@config2 ~]# docker stop contrail_config_zookeeper
[root@config3 ~]# docker stop contrail_config_zookeeper
## Start Zookeeper ##
[root@config1 ~]# docker start contrail_config_zookeeper
[root@config2 ~]# docker start contrail_config_zookeeper
[root@config3 ~]# docker start contrail_config_zookeeper
## Start Cassandra ##
[root@config1 ~]# docker start contrail_config_database
[root@config2 ~]# docker start contrail_config_database
[root@config3 ~]# docker start contrail_config_database
## Run Docker Image & Mount Contrail TLS Certificates to Contrail SSL Directory
##
[root@config1 ~]# docker image ls | grep config-api
hub.juniper.net/contrail/contrail-controller-config-api 1909.30-ocata c9d757252a0c
4 months ago 583MB
[root@config1 ~]# docker run --rm -it -v /tmp/db-dump/:/tmp/ -v
/etc/contrail/ssl:/etc/contrail/ssl:ro --network host --entrypoint=/bin/bash
hub.juniper.net/contrail/contrail-controller-config-api:1909.30-ocata
CHAPTER 6
IN THIS CHAPTER
IN THIS SECTION
IN THIS SECTION
Red Hat OpenStack Platform provides an installer called the Red Hat OpenStack Platform director (RHOSPd
or OSPd), which is a toolset based on the OpenStack project TripleO (OOO, OpenStack on OpenStack).
TripleO is an open source project that uses features of OpenStack to deploy a fully functional, tenant-facing
OpenStack environment.
TripleO can be used to deploy an RDO-based OpenStack environment integrated with Tungsten Fabric.
Red Hat OpenStack Platform director can be used to deploy an RHOSP-based OpenStack environment
integrated with Contrail Networking.
OSPd uses the concepts of undercloud and overcloud. OSPd sets up an undercloud, a single server running
an operator-facing deployment that contains the OpenStack components needed to deploy and manage
an overcloud, a tenant-facing deployment that hosts user workloads.
The overcloud is the deployed solution that can represent a cloud for any purpose, such as production,
staging, test, and so on. The operator can select to deploy to their environment any of the available
overcloud roles, such as controller, compute, and the like.
OSPd leverages existing core components of OpenStack including Nova, Ironic, Neutron, Heat, Glance,
and Ceilometer to deploy OpenStack on bare metal hardware.
• Nova and Ironic are used in the undercloud to manage the bare metal instances that comprise the
infrastructure for the overcloud.
The following are the Contrail Networking roles used for integrating into the overcloud:
• Contrail Controller
• Contrail Analytics
• Contrail-TSN
• Contrail-DPDK
Figure 20 on page 196 shows the relationship and components of an undercloud and overcloud architecture
for Contrail Networking.
Undercloud Requirements
The undercloud is a single server or VM that hosts the OpenStack Platform director, which is an OpenStack
installation used to provision OpenStack on the overcloud.
Overcloud Requirements
The overcloud roles can be deployed to bare metal servers or to virtual machines (VMs), but the compute
nodes must be deployed to bare metal systems. Every overcloud node must support IPMI for booting up
from the undercloud using PXE.
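As a quick sanity check before enrollment, you can verify that a node's IPMI interface is reachable from the undercloud. The BMC address and credentials below are placeholders for your environment:
ipmitool -I lanplus -H <bmc-ip-address> -U <ipmi-user> -P <ipmi-password> power status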
Ensure the following requirements are met for the Contrail Networking nodes per role.
• Non-high availability: A minimum of 4 overcloud nodes are needed for control plane roles for a non-high
availability deployment:
• 1x contrail-controller
• 1x contrail-analytics
• 1x contrail-analytics-database
• 1x OpenStack controller
• High availability: A minimum of 12 overcloud nodes are needed for control plane roles for a high availability
deployment:
• 3x contrail-controller
• 3x contrail-analytics
• 3x contrail-analytics-database
• 3x OpenStack controller
If the control plane roles are deployed to VMs, use 3 separate physical servers and deploy one role of
each kind to each physical server.
Networking Requirements
As a minimum, the installation requires two networks:
• provisioning network - This is the private network that the undercloud uses to provision the overcloud.
• external network - This is the externally-routable network you use to access the undercloud and overcloud
nodes.
Ensure the following requirements are met for the provisioning network:
• One NIC from every machine must be in the same broadcast domain as the provisioning network, and
it should be the same NIC on each of the overcloud machines. For example, if you use the second NIC
on the first overcloud machine, you should use the second NIC on each additional overcloud machine.
During installation, these NICs will be referenced by a single name across all overcloud machines.
• The provisioning network NIC should not be the same NIC that you are using for remote connectivity
to the undercloud machine. During the undercloud installation, an Open vSwitch bridge will be created
for Neutron, and the provisioning NIC will be bridged to the Open vSwitch bridge. Consequently,
connectivity would be lost if the provisioning NIC was also used for remote connectivity to the undercloud
machine.
• You must have the MAC address of the NIC that will PXE boot on the provisioning network, as well as
the IPMI information for the machine. The IPMI information will include such things as the IP address of
the IPMI NIC and the IPMI username and password.
• All of the networks must be available to all of the Contrail Networking roles and computes.
While the provisioning and external networks are sufficient for basic applications, you should create
additional networks in most overcloud environments to provide isolation for the different traffic types by
assigning network traffic to specific network interfaces or bonds.
When isolated networks are configured, the OpenStack services are configured to use the isolated networks.
If no isolated networks are configured, all services run on the provisioning network. If only some isolated
networks are configured, traffic belonging to a network not configured runs on the provisioning network.
The following networks are typically deployed when using network isolation topology:
• Tenant - used for tenant overlay data plane traffic (one network per tenant)
• External - provides external access to the undercloud and overcloud, including external access to the
web UIs and public APIs
• Floating IP - provides floating IP access to the tenant network (can either be merged with external or
can be a separate network)
Compatibility Matrix
The following combinations of Operating System/OpenStack/Deployer/Contrail Networking are supported:
Installation Summary
The general installation procedure is as follows:
• Set up the infrastructure, which is the set of servers or VMs that host the undercloud and overcloud,
including the provisioning network that connects them together.
• Set up the overcloud, which is the set of services in the tenant-facing network. Contrail Networking is
part of the overcloud.
For more information on installing and using the RHOSPd, see Red Hat documentation.
Release Description
2008 Starting with Contrail Networking Release 2008, Contrail Networking supports using
Contrail with Red Hat OpenStack Platform Director 16.1.
The following example illustrates all control plane functions as Virtual Machines hosted on KVM hosts.
There are different ways to create the infrastructure providing the control plane elements. To illustrate
the installation procedure, we will use four host machines for the infrastructure, each running KVM. KVM1
contains a VM running the undercloud while KVM2 through KVM4 each contains a VM running an
OpenStack controller and a Contrail controller (Table 8 on page 199).
Figure 21 on page 200 shows the physical connectivity where each KVM host and each compute node has
two interfaces that connect to an external switch. These interfaces attach to separate virtual bridges within
the VM, allowing for two physically separate networks (external and provisioning networks).
Figure 22 on page 201 shows the logical view of the connectivity where VLANs are used to provide further
network separation for the different OpenStack network types.
The following sections describe how to configure the infrastructure, the undercloud, and finally the
overcloud.
4. Additionally, on the overcloud nodes only, create and start the virtual switches br0 and br1.
# Create the virtual switches and bind them to the respective interfaces.
ovs-vsctl add-br br0
ovs-vsctl add-br br1
ovs-vsctl add-port br0 NIC1
ovs-vsctl add-port br1 NIC2
• create and start a virtual baseboard management controller for that overcloud KVM host so that the
VM can be managed using IPMI
This example procedure creates a VM definition consisting of 2 compute nodes, 1 Contrail controller node,
and 1 OpenStack controller node on each overcloud KVM host.
ROLES=compute:2,contrail-controller:1,control:1
# Initialize and specify the IPMI user and password you want to use.
num=0
ipmi_user=<user>
ipmi_password=<password>
libvirt_path=/var/lib/libvirt/images
port_group=overcloud
prov_switch=br0
/bin/rm ironic_list
CAUTION: This procedure creates one ironic_list file per overcloud KVM host. Combine
the contents of each file into a single ironic_list file on the undercloud.
The following shows the resulting ironic_list file after you combine the contents from
each separate file:
mkdir ~/images
cd images
RHEL
cloud_image=~/images/rhel-server-8.2-update-1-x86_64-kvm.qcow2
undercloud_name=queensa
undercloud_suffix=local
root_password=<password>
stack_password=<password>
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 /var/lib/libvirt/images/${undercloud_name}.qcow2 100G
virt-resize --expand /dev/sda1 ${cloud_image} /var/lib/libvirt/images/${undercloud_name}.qcow2
virt-customize -a /var/lib/libvirt/images/${undercloud_name}.qcow2 \
--run-command 'xfs_growfs /' \
--root-password password:${root_password} \
--hostname ${undercloud_name}.${undercloud_suffix} \
--run-command 'useradd stack' \
--password stack:password:${stack_password} \
--run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
--chmod 0440:/etc/sudoers.d/stack \
--run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
--run-command 'systemctl enable sshd' \
--run-command 'yum remove -y cloud-init' \
--selinux-relabel
NOTE: As part of the undercloud definition, a user called stack is created. This user will be
used later to install the undercloud.
vcpus=8
vram=32000
virt-install --name ${undercloud_name} \
--disk /var/lib/libvirt/images/${undercloud_name}.qcow2 \
--vcpus=${vcpus} \
--ram=${vram} \
--network network=default,model=virtio \
--network network=br0,model=virtio,portgroup=overcloud \
--virt-type kvm \
--import \
--os-variant rhel8.2 \
--graphics vnc \
--serial pty \
--noautoconsole \
--console pty,target_type=virtio
6. Retrieve the undercloud IP address. It might take several seconds before the IP address is available.
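One possible way to look up the address from the undercloud KVM host, assuming the VM obtained its lease on the libvirt default network (this helper command is not part of the official procedure):
undercloud_ip=`virsh domifaddr ${undercloud_name} | grep ipv4 | awk '{print $4}' | awk -F"/" '{print $1}'`
echo ${undercloud_ip}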
ssh ${undercloud_ip}
undercloud_name=`hostname -s`
undercloud_suffix=`hostname -d`
hostnamectl set-hostname ${undercloud_name}.${undercloud_suffix}
hostnamectl set-hostname --transient ${undercloud_name}.${undercloud_suffix}
3. Add the hostname to the /etc/hosts file. The following example assumes the management interface
is eth0.
undercloud_ip=`ip addr sh dev eth0 | grep "inet " | awk '{print $2}' | awk -F"/" '{print $1}'`
echo ${undercloud_ip} ${undercloud_name}.${undercloud_suffix} ${undercloud_name} >> /etc/hosts
RHEL
6. Copy the undercloud configuration file sample and modify the configuration as required. See Red Hat
documentation for information on how to modify that file.
su - stack
cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf
vi ~/undercloud.conf
8. If you are using a satellite for deployment, manually update the hostname and satellite IP addresses in
your /etc/hosts file. Open the file in a text editor and manually enter your hostname and satellite IP address.
This step ensures that the overcloud deployment is successful later in the procedure.
You should also perform this step if the overcloud deployment fails later in the procedure and a failed
lookup URL message appears on the console as the reason.
A sample failed lookup URL error message when you experience this issue:
========================
TASK [redhat-subscription : SATELLITE | Run Satellite 6 tasks] *****************
Tuesday 30 March 2021 12:11:25 -0400 (0:00:00.490) 0:13:39.737 *********
included: /usr/share/ansible/roles/redhat-subscription/tasks/satellite-6.yml
for overcloud-controller-0, overcloud-controller-1, overcloud-controller-2
TASK [redhat-subscription : SATELLITE 6 | Set Satellite server CA as a fact] ***
Tuesday 30 March 2021 12:11:26 -0400 (0:00:00.730) 0:13:40.467 *********
fatal: [overcloud-controller-0]: FAILED! => {"msg": "An unhandled exception
occurred while running the lookup plugin 'url'. Error was a <class
'ansible.errors.AnsibleError'>, original message: Failed lookup url for :
<urlopen error [Errno -2] Name or service not known>"}
fatal: [overcloud-controller-1]: FAILED! => {"msg": "An unhandled exception occurred
while running the lookup plugin 'url'. Error was a <class
'ansible.errors.AnsibleError'>, original message: Failed lookup url for :
<urlopen error [Errno -2] Name or service not known>"}
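A hypothetical /etc/hosts entry of the kind this step adds; the IP address and hostnames are placeholders for your satellite server:
10.0.0.50    satellite.example.com    satellite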
1. Configure a forwarding path between the provisioning network and the external network:
sudo ip link add name vlan720 link br-ctlplane type vlan id 720
sudo ip addr add 10.2.0.254/24 dev vlan720
sudo ip link set dev vlan720 up
newgrp docker
exit
su - stack
source stackrc
4. Manually add the satellite IP address and hostname into the /etc/hosts file.
undercloud_nameserver=8.8.8.8
openstack subnet set `openstack subnet show ctlplane-subnet -c id -f value` --dns-nameserver ${undercloud_nameserver}
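To confirm that the nameserver was applied, you can display the subnet; this check is optional:
openstack subnet show ctlplane-subnet -c dns_nameservers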
mkdir images
cd images
b. Retrieve the overcloud images from either the RDO project or from Red Hat.
OSP16
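For OSP16, one common way to obtain the overcloud images is from the rhosp-director-images packages on the undercloud; the package and file names below are typical for RHOSP 16.1 and may differ in your environment:
sudo dnf install -y rhosp-director-images rhosp-director-images-ipa-x86_64
cd ~/images
for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.1.tar \
         /usr/share/rhosp-director-images/ironic-python-agent-latest-16.1.tar; do
  tar -xvf $i
done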
cd
openstack overcloud image upload --image-path /home/stack/images/
Ironic is an integrated OpenStack program that provisions bare metal machines instead of virtual
machines. It is best thought of as a bare metal hypervisor API and a set of plugins that interact with
the bare metal hypervisors.
NOTE: Make sure to combine the ironic_list files from the three overcloud KVM hosts.
ipmi_password=<password>
ipmi_user=<user>
while IFS= read -r line; do
mac=`echo $line|awk '{print $1}'`
name=`echo $line|awk '{print $2}'`
kvm_ip=`echo $line|awk '{print $3}'`
profile=`echo $line|awk '{print $4}'`
ipmi_port=`echo $line|awk '{print $5}'`
uuid=`openstack baremetal node create --driver ipmi \
--property cpus=4 \
--property memory_mb=16348 \
--property local_gb=100 \
--property cpu_arch=x86_64 \
--driver-info ipmi_username=${ipmi_user} \
--driver-info ipmi_address=${kvm_ip} \
--driver-info ipmi_password=${ipmi_password} \
--driver-info ipmi_port=${ipmi_port} \
--name=${name} \
--property capabilities=profile:${profile},boot_option:local \
-c uuid -f value`
openstack baremetal port create --node ${uuid} ${mac}
done < <(cat ironic_list)
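After the loop completes, you can list the enrolled nodes to confirm that every entry from ironic_list was imported; this is an optional check:
openstack baremetal node list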
4. Create Flavor:
for i in compute-dpdk \
compute-sriov \
contrail-controller \
contrail-analytics \
contrail-database \
contrail-analytics-database; do
openstack flavor create $i --ram 4096 --vcpus 1 --disk 40
openstack flavor set --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="${i}" ${i}
openstack flavor set --property resources:CUSTOM_BAREMETAL=1 \
  --property resources:DISK_GB='0' \
  --property resources:MEMORY_MB='0' ${i}
done
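You can verify the resulting flavors and their profile capabilities with, for example:
openstack flavor list
openstack flavor show contrail-controller -c properties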
cp -r /usr/share/openstack-tripleo-heat-templates/ tripleo-heat-templates
parameter_defaults:
RhsmVars:
rhsm_repos:
- fast-datapath-for-rhel-8-x86_64-rpms
- openstack-16.1-for-rhel-8-x86_64-rpms
- satellite-tools-6.5-for-rhel-8-x86_64-rpms
- ansible-2-for-rhel-8-x86_64-rpms
- rhel-8-for-x86_64-highavailability-rpms
- rhel-8-for-x86_64-appstream-rpms
- rhel-8-for-x86_64-baseos-rpms
rhsm_username: "YOUR_REDHAT_LOGIN"
rhsm_password: "YOUR_REDHAT_PASSWORD"
rhsm_org_id: "YOUR_REDHAT_ID"
rhsm_pool_ids: "YOUR_REDHAT_POOL_ID"
OSP16
NOTE: This step is optional. The Contrail containers can be downloaded from external
registries later.
cd ~/tf-heat-templates/tools/contrail
./import_contrail_container.sh -f container_outputfile -r registry -t tag [-i insecure] [-u username] [-p password] [-c certificate path]
Here are a few examples of importing Contrail containers from different sources:
./import_contrail_container.sh -f /tmp/contrail_container -r hub.juniper.net/contrail -u USERNAME -p PASSWORD -t 1234
./import_contrail_container.sh -f /tmp/contrail_container -r docker.io/opencontrailnightly -t 1234
./import_contrail_container.sh -f /tmp/contrail_container -r device.example.net:5443 -c http://device.example.net/pub/device.example.net.crt -t 1234
vi ~/tripleo-heat-templates/environments/contrail-services.yaml
parameter_defaults:
ContrailSettings:
VROUTER_GATEWAY: 10.0.0.1
# KEY1: value1
# KEY2: value2
VXLAN_VN_ID_MODE: "configured"
ENCAP_PRIORITY: "VXLAN,MPLSoUDP,MPLSoGRE"
ContrailControllerParameters:
AAAMode: rbac
vi ~/tripleo-heat-templates/environments/contrail-services.yaml
parameter_defaults:
ContrailRegistry: hub.juniper.net/contrail
ContrailRegistryUser: <USER>
ContrailRegistryPassword: <PASSWORD>
• Insecure registry
parameter_defaults:
ContrailRegistryInsecure: true
DockerInsecureRegistryAddress: 10.87.64.32:5000,192.168.24.1:8787
ContrailRegistry: 10.87.64.32:5000
parameter_defaults:
ContrailRegistryCertUrl: http://device.example.net/pub/device.example.net.crt
ContrailRegistry: device.example.net:5443
parameter_defaults:
ContrailImageTag: queens-5.0-104-rhel-queens
IN THIS SECTION
Overview | 218
Overview
In order to customize the network, define different networks and configure the overcloud nodes' NIC
layout. TripleO supports a flexible way of customizing the network.
provisioning - All
vi ~/tripleo-heat-templates/roles_data_contrail_aio.yaml
OpenStack Controller
###############################################################################
# Role: Controller #
###############################################################################
- name: Controller
description: |
Controller role that has all the controller services loaded and handles
Database, Messaging and Network functions.
CountDefault: 1
tags:
- primary
- controller
networks:
- External
- InternalApi
- Storage
- StorageMgmt
Compute Node
###############################################################################
# Role: Compute #
###############################################################################
- name: Compute
description: |
Basic Compute Node role
CountDefault: 1
networks:
- InternalApi
- Tenant
- Storage
Contrail Controller
###############################################################################
# Role: ContrailController #
###############################################################################
- name: ContrailController
description: |
ContrailController role that has all the Contrail controller services loaded
and handles config, control and webui functions
CountDefault: 1
tags:
- primary
- contrailcontroller
networks:
- InternalApi
- Tenant
Compute DPDK
###############################################################################
# Role: ContrailDpdk #
###############################################################################
- name: ContrailDpdk
description: |
Contrail Dpdk Node role
CountDefault: 0
tags:
- contraildpdk
networks:
- InternalApi
- Tenant
- Storage
Compute SRIOV
###############################################################################
# Role: ContrailSriov
###############################################################################
- name: ContrailSriov
description: |
Contrail Sriov Node role
CountDefault: 0
tags:
- contrailsriov
networks:
- InternalApi
- Tenant
- Storage
Compute CSN
###############################################################################
# Role: ContrailTsn
###############################################################################
- name: ContrailTsn
description: |
Contrail Tsn Node role
CountDefault: 0
tags:
- contrailtsn
networks:
- InternalApi
- Tenant
- Storage
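The customized roles file is referenced later when the overcloud is deployed. A minimal sketch of how it is typically passed on the command line follows; the exact set of -e environment files depends on your deployment and is shown later in this procedure:
openstack overcloud deploy --templates tripleo-heat-templates/ \
 -r ~/tripleo-heat-templates/roles_data_contrail_aio.yaml \
 -e <environment files>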
cat ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml
resource_registry:
OS::TripleO::Controller::Net::SoftwareConfig:
../../network/config/contrail/controller-nic-config.yaml
OS::TripleO::ContrailController::Net::SoftwareConfig:
../../network/config/contrail/contrail-controller-nic-config.yaml
OS::TripleO::ContrailControlOnly::Net::SoftwareConfig:
../../network/config/contrail/contrail-controller-nic-config.yaml
OS::TripleO::Compute::Net::SoftwareConfig:
../../network/config/contrail/compute-nic-config.yaml
OS::TripleO::ContrailDpdk::Net::SoftwareConfig:
../../network/config/contrail/contrail-dpdk-nic-config.yaml
OS::TripleO::ContrailSriov::Net::SoftwareConfig:
../../network/config/contrail/contrail-sriov-nic-config.yaml
OS::TripleO::ContrailTsn::Net::SoftwareConfig:
../../network/config/contrail/contrail-tsn-nic-config.yaml
parameter_defaults:
# Customize all these values to match the local environment
TenantNetCidr: 10.0.0.0/24
InternalApiNetCidr: 10.1.0.0/24
ExternalNetCidr: 10.2.0.0/24
StorageNetCidr: 10.3.0.0/24
StorageMgmtNetCidr: 10.4.0.0/24
# CIDR subnet mask length for provisioning network
ControlPlaneSubnetCidr: '24'
# Allocation pools
TenantAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.200'}]
InternalApiAllocationPools: [{'start': '10.1.0.10', 'end': '10.1.0.200'}]
ExternalAllocationPools: [{'start': '10.2.0.10', 'end': '10.2.0.200'}]
StorageAllocationPools: [{'start': '10.3.0.10', 'end': '10.3.0.200'}]
StorageMgmtAllocationPools: [{'start': '10.4.0.10', 'end': '10.4.0.200'}]
# Routes
ControlPlaneDefaultRoute: 192.168.24.1
InternalApiDefaultRoute: 10.1.0.1
ExternalInterfaceDefaultRoute: 10.2.0.1
# Vlans
InternalApiNetworkVlanID: 710
ExternalNetworkVlanID: 720
StorageNetworkVlanID: 730
StorageMgmtNetworkVlanID: 740
TenantNetworkVlanID: 3211
# Services
EC2MetadataIp: 192.168.24.1 # Generally the IP of the undercloud
DnsServers: ["172.x.x.x"]
NtpServer: 10.0.0.1
cd ~/tripleo-heat-templates/network/config/contrail
OpenStack Controller
heat_template_version: rocky
description: >
Software Config to drive os-net-config to configure multiple interfaces
for the compute role. This is an example for a Nova compute node using
Contrail vrouter and the vhost0 interface.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal_api network
type: string
InternalApiDefaultRoute: # Not used by default in this template
default: '10.0.0.1'
description: The default route of the internal api network.
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage_mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
resources:
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: ../../scripts/run-os-net-config.sh
params:
$network_config:
network_config:
- type: interface
name: nic1
use_dhcp: false
dns_servers:
get_param: DnsServers
addresses:
- ip_netmask:
list_join:
- '/'
- - get_param: ControlPlaneIp
- get_param: ControlPlaneSubnetCidr
routes:
- ip_netmask: 169.x.x.x/32
next_hop:
get_param: EC2MetadataIp
- default: true
next_hop:
get_param: ControlPlaneDefaultRoute
- type: vlan
vlan_id:
get_param: InternalApiNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: InternalApiIpSubnet
- type: vlan
vlan_id:
get_param: ExternalNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: ExternalIpSubnet
- type: vlan
vlan_id:
get_param: StorageNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: StorageIpSubnet
- type: vlan
vlan_id:
get_param: StorageMgmtNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: StorageMgmtIpSubnet
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value:
get_resource: OsNetConfigImpl
Contrail Controller
heat_template_version: rocky
description: >
Software Config to drive os-net-config to configure multiple interfaces
for the compute role. This is an example for a Nova compute node using
Contrail vrouter and the vhost0 interface.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal_api network
type: string
InternalApiDefaultRoute: # Not used by default in this template
default: '10.0.0.1'
description: The default route of the internal api network.
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage_mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
ManagementIpSubnet: # Only populated when including
environments/network-management.yaml
default: ''
description: IP address/subnet on the management network
type: string
ExternalNetworkVlanID:
default: 10
description: Vlan ID for the external network traffic.
type: number
InternalApiNetworkVlanID:
default: 20
description: Vlan ID for the internal_api network traffic.
type: number
StorageNetworkVlanID:
default: 30
resources:
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: ../../scripts/run-os-net-config.sh
params:
$network_config:
network_config:
- type: interface
name: nic1
use_dhcp: false
dns_servers:
get_param: DnsServers
addresses:
- ip_netmask:
list_join:
- '/'
- - get_param: ControlPlaneIp
- get_param: ControlPlaneSubnetCidr
routes:
- ip_netmask: 169.x.x.x/32
next_hop:
get_param: EC2MetadataIp
- default: true
next_hop:
get_param: ControlPlaneDefaultRoute
- type: vlan
vlan_id:
get_param: InternalApiNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: InternalApiIpSubnet
- type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value:
get_resource: OsNetConfigImpl
Compute Node
heat_template_version: rocky
description: >
Software Config to drive os-net-config to configure multiple interfaces
for the compute role. This is an example for a Nova compute node using
Contrail vrouter and the vhost0 interface.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal_api network
type: string
InternalApiDefaultRoute: # Not used by default in this template
default: '10.0.0.1'
description: The default route of the internal api network.
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage_mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
resources:
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: ../../scripts/run-os-net-config.sh
params:
$network_config:
network_config:
- type: interface
name: nic1
use_dhcp: false
dns_servers:
get_param: DnsServers
addresses:
- ip_netmask:
list_join:
- '/'
- - get_param: ControlPlaneIp
- get_param: ControlPlaneSubnetCidr
routes:
- ip_netmask: 169.x.x.x/32
next_hop:
get_param: EC2MetadataIp
- default: true
next_hop:
get_param: ControlPlaneDefaultRoute
- type: vlan
vlan_id:
get_param: InternalApiNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: InternalApiIpSubnet
- type: vlan
vlan_id:
get_param: StorageNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: StorageIpSubnet
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value:
get_resource: OsNetConfigImpl
IN THIS SECTION
VLAN | 234
Bond | 234
In addition to the standard NIC configuration, the vRouter kernel mode supports VLAN, Bond, and Bond
+ VLAN modes. The configuration snippets below only show the relevant section of the NIC template
configuration for each mode.
VLAN
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: nic2
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name: bond0
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: bond0
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
IN THIS SECTION
Standard | 236
VLAN | 237
Bond | 237
In addition to the standard NIC configuration, the vRouter DPDK mode supports Standard, VLAN, Bond,
and Bond + VLAN modes.
vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml
See the following NIC template configurations for vRouter DPDK mode. The configuration snippets below
only show the relevant section of the NIC configuration for each mode.
Standard
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
IN THIS SECTION
VLAN | 239
Bond | 240
• Standard
• VLAN
• Bond
• Bond + VLAN
vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml
ContrailSriovParameters:
KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G
hugepages=4 hugepagesz=2M hugepages=1024"
ExtraSysctlSettings:
# must be equal to value from 1G kernel args: hugepages=4
vm.nr_hugepages:
value: 4
NovaPCIPassthrough:
- devname: "ens2f1"
physical_network: "sriov1"
ContrailSriovNumVFs: ["ens2f1:7"]
The SRIOV NICs are not configured in the NIC templates. However, vRouter NICs must still be configured.
See the following NIC template configurations for vRouter kernel mode. The configuration snippets below
only show the relevant section of the NIC configuration for each mode.
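After deployment, you can confirm that the requested number of VFs was created on the SR-IOV interface. ens2f1 is the example device used above; adjust the name to your NIC:
cat /sys/class/net/ens2f1/device/sriov_numvfs
ip link show ens2f1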
VLAN
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: nic2
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name: bond0
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: bond0
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
IN THIS SECTION
Standard | 242
VLAN | 243
Bond | 243
• Standard
• VLAN
• Bond
• Bond + VLAN
vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml
ContrailSriovParameters:
KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G
hugepages=4 hugepagesz=2M hugepages=1024"
ExtraSysctlSettings:
# must be equal to value from 1G kernel args: hugepages=4
vm.nr_hugepages:
value: 4
NovaPCIPassthrough:
- devname: "ens2f1"
physical_network: "sriov1"
ContrailSriovNumVFs: ["ens2f1:7"]
The SRIOV NICs are not configured in the NIC templates. However, vRouter NICs must still be configured.
See the following NIC template configurations for vRouter DPDK mode. The configuration snippets below
only show the relevant section of the NIC configuration for each mode.
Standard
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Advanced Scenarios
Remote Compute
Remote Compute extends the data plane to remote locations (POPs) while keeping the control plane
central. Each POP will have its own set of Contrail control services, which are running in the central location.
The difficulty is to ensure that the compute nodes of a given POP connect to the Control nodes assigned
to that POP. The Control nodes must have predictable IP addresses and the compute nodes have to know
these IP addresses. In order to achieve that, the following methods are used:
• Custom Roles
• Static IP assignment
Each overcloud node has a unique DMI UUID. This UUID is known on the undercloud node as well as on
the overcloud node. Hence, this UUID can be used for mapping node-specific information. For each POP,
a Control role and a Compute role have to be created.
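The DMI UUID used for this mapping can be read directly on a node, for example with the following command (run as root); as noted above, the same UUID is also known on the undercloud:
dmidecode -s system-uuid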
Overview
Mapping Table
Nova Name | Ironic Name | UUID | KVM | IP Address | POP
ControlOnly preparation
Two ControlOnly overcloud VM definitions will be created on each of the overcloud KVM hosts.
ROLES=control-only:2
num=4
ipmi_user=<user>
ipmi_password=<password>
libvirt_path=/var/lib/libvirt/images
port_group=overcloud
prov_switch=br0
/bin/rm ironic_list
IFS=',' read -ra role_list <<< "${ROLES}"
for role in ${role_list[@]}; do
role_name=`echo $role|cut -d ":" -f 1`
role_count=`echo $role|cut -d ":" -f 2`
for count in `seq 1 ${role_count}`; do
echo $role_name $count
qemu-img create -f qcow2 ${libvirt_path}/${role_name}_${count}.qcow2 99G
virsh define /dev/stdin <<EOF
$(virt-install --name ${role_name}_${count} \
--disk ${libvirt_path}/${role_name}_${count}.qcow2 \
--vcpus=4 \
--ram=16348 \
--network network=br0,model=virtio,portgroup=${port_group} \
--network network=br1,model=virtio \
--virt-type kvm \
--cpu host \
--import \
--os-variant rhel7 \
--serial pty \
--console pty,target_type=virtio \
--graphics vnc \
--print-xml)
EOF
vbmc add ${role_name}_${count} --port 1623${num} --username ${ipmi_user} --password ${ipmi_password}
vbmc start ${role_name}_${count}
prov_mac=`virsh domiflist ${role_name}_${count}|grep ${prov_switch}|awk '{print $5}'`
vm_name=${role_name}-${count}-`hostname -s`
kvm_ip=`ip route get 1 |grep src |awk '{print $7}'`
echo ${prov_mac} ${vm_name} ${kvm_ip} ${role_name} 1623${num} >> ironic_list
num=$(expr $num + 1)
done
done
NOTE: The generated ironic_list will be needed on the undercloud to import the nodes to Ironic.
Get the ironic_lists from the overcloud KVM hosts and combine them.
cat ironic_list_control_only
52:54:00:3a:2f:ca control-only-1-5b3s30 10.87.64.31 control-only 16234
52:54:00:31:4f:63 control-only-2-5b3s30 10.87.64.31 control-only 16235
52:54:00:0c:11:74 control-only-1-5b3s31 10.87.64.32 control-only 16234
52:54:00:56:ab:55 control-only-2-5b3s31 10.87.64.32 control-only 16235
52:54:00:c1:f0:9a control-only-1-5b3s32 10.87.64.33 control-only 16234
52:54:00:f3:ce:13 control-only-2-5b3s32 10.87.64.33 control-only 16235
Import:
ipmi_password=<password>
ipmi_user=<user>
num=0
while IFS= read -r line; do
mac=`echo $line|awk '{print $1}'`
name=`echo $line|awk '{print $2}'`
kvm_ip=`echo $line|awk '{print $3}'`
profile=`echo $line|awk '{print $4}'`
ipmi_port=`echo $line|awk '{print $5}'`
uuid=`openstack baremetal node create --driver ipmi \
--property cpus=4 \
--property memory_mb=16348 \
--property local_gb=100 \
--property cpu_arch=x86_64 \
--driver-info ipmi_username=${ipmi_user} \
--driver-info ipmi_address=${kvm_ip} \
--driver-info ipmi_password=${ipmi_password} \
--driver-info ipmi_port=${ipmi_port} \
--name=${name} \
--property capabilities=boot_option:local \
-c uuid -f value`
openstack baremetal node set ${uuid} --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK
openstack baremetal port create --node ${uuid} ${mac}
openstack baremetal node manage ${uuid}
num=$(expr $num + 1)
done < <(cat ironic_list_control_only)
The first ControlOnly node on each of the overcloud KVM hosts will be used for POP1, the second for
POP2, and so forth.
| 2d4be83e-6fcc-4761-86f2-c2615dd15074 | compute-4-5b3s31 | None | power off | available | False |
The first two compute nodes belong to POP1; the second two compute nodes belong to POP2.
~/subcluster_input.yaml
---
- subcluster: subcluster1
asn: "65413"
control_nodes:
- uuid: 7d758dce-2784-45fd-be09-5a41eb53e764
ipaddress: 10.0.0.11
- uuid: 91dd9fa9-e8eb-4b51-8b5e-bbaffb6640e4
ipaddress: 10.0.0.12
- uuid: f4766799-24c8-4e3b-af54-353f2b796ca4
ipaddress: 10.0.0.13
compute_nodes:
- uuid: 91d6026c-b9db-49cb-a685-99a63da5d81e
vrouter_gateway: 10.0.0.1
- uuid: 8028eb8c-e1e6-4357-8fcf-0796778bd2f7
vrouter_gateway: 10.0.0.1
- subcluster: subcluster2
asn: "65414"
control_nodes:
- uuid: d26abdeb-d514-4a37-a7fb-2cd2511c351f
ipaddress: 10.0.0.14
- uuid: 09fa57b8-580f-42ec-bf10-a19573521ed4
ipaddress: 10.0.0.15
- uuid: 58a803ae-a785-470e-9789-139abbfa74fb
ipaddress: 10.0.0.16
compute_nodes:
- uuid: b795b3b9-c4e3-4a76-90af-258d9336d9fb
vrouter_gateway: 10.0.0.1
- uuid: 2d4be83e-6fcc-4761-86f2-c2615dd15074
vrouter_gateway: 10.0.0.1
~/tripleo-heat-templates/tools/contrail/create_subcluster_environment.py -i ~/subcluster_input.yaml \
  -o ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml
cat ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml
parameter_defaults:
NodeDataLookup:
041D7B75-6581-41B3-886E-C06847B9C87E:
contrail_settings:
CONTROL_NODES: 10.0.0.14,10.0.0.15,10.0.0.16
SUBCLUSTER: subcluster2
VROUTER_GATEWAY: 10.0.0.1
09BEC8CB-77E9-42A6-AFF4-6D4880FD87D0:
contrail_settings:
BGP_ASN: '65414'
SUBCLUSTER: subcluster2
14639A66-D62C-4408-82EE-FDDC4E509687:
contrail_settings:
BGP_ASN: '65414'
SUBCLUSTER: subcluster2
28AB0B57-D612-431E-B177-1C578AE0FEA4:
contrail_settings:
BGP_ASN: '65413'
SUBCLUSTER: subcluster1
3993957A-ECBF-4520-9F49-0AF6EE1667A7:
contrail_settings:
BGP_ASN: '65413'
SUBCLUSTER: subcluster1
73F8D030-E896-4A95-A9F5-E1A4FEBE322D:
contrail_settings:
BGP_ASN: '65413'
SUBCLUSTER: subcluster1
7933C2D8-E61E-4752-854E-B7B18A424971:
contrail_settings:
CONTROL_NODES: 10.0.0.14,10.0.0.15,10.0.0.16
SUBCLUSTER: subcluster2
VROUTER_GATEWAY: 10.0.0.1
AF92F485-C30C-4D0A-BDC4-C6AE97D06A66:
contrail_settings:
BGP_ASN: '65414'
SUBCLUSTER: subcluster2
BB9E9D00-57D1-410B-8B19-17A0DA581044:
contrail_settings:
CONTROL_NODES: 10.0.0.11,10.0.0.12,10.0.0.13
SUBCLUSTER: subcluster1
VROUTER_GATEWAY: 10.0.0.1
E1A809DE-FDB2-4EB2-A91F-1B3F75B99510:
contrail_settings:
CONTROL_NODES: 10.0.0.11,10.0.0.12,10.0.0.13
SUBCLUSTER: subcluster1
VROUTER_GATEWAY: 10.0.0.1
Deployment
Installing Overcloud
1. Deployment:
openstack overcloud deploy --templates tripleo-heat-templates/ \
 -e tripleo-heat-templates/environments/contrail/contrail-net.yaml \
 -e tripleo-heat-templates/environments/contrail/contrail-plugins.yaml \
 -e containers-prepare-parameter.yaml \
 -e rhsm.yaml
2. Validation Test:
source overcloudrc
curl -O http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create --container-format bare --disk-format qcow2 --file
cirros-0.3.5-x86_64-disk.img cirros
openstack flavor create --public cirros --id auto --ram 64 --disk 0 --vcpus 1
openstack network create net1
openstack subnet create --subnet-range 1.0.0.0/24 --network net1 sn1
nova boot --image cirros --flavor cirros --nic net-id=`openstack network show net1 -c id -f value` \
  --availability-zone nova:overcloud-novacompute-0.localdomain c1
nova list
Release Description
2008 Starting with Contrail Networking Release 2008, Contrail Networking supports using
Contrail with Red Hat OpenStack Platform Director 16.1.
Red Hat OpenStack Platform provides an installer called the Red Hat OpenStack Platform director (RHOSPd
or OSPd), which is a toolset based on the OpenStack project TripleO (OOO, OpenStack on OpenStack).
TripleO is an open source project that uses features of OpenStack to deploy a fully functional, tenant-facing
OpenStack environment.
TripleO can be used to deploy an RDO-based OpenStack environment integrated with Tungsten Fabric.
Red Hat OpenStack Platform director can be used to deploy an RHOSP-based OpenStack environment
integrated with Contrail.
OSPd uses the concepts of undercloud and overcloud. OSPd sets up an undercloud, a single server running
an operator-facing deployment that contains the OpenStack components needed to deploy and manage
an overcloud, a tenant-facing deployment that hosts user workloads.
The overcloud is the deployed solution that can represent a cloud for any purpose, such as production,
staging, test, and so on. The operator can select to deploy to their environment any of the available
overcloud roles, such as controller, compute, and the like.
OSPd leverages existing core components of OpenStack including Nova, Ironic, Neutron, Heat, Glance,
and Ceilometer to deploy OpenStack on bare metal hardware.
• Nova and Ironic are used in the undercloud to manage the bare metal instances that comprise the
infrastructure for the overcloud.
Contrail Roles
OSPd supports composable roles, which are groups of services that you define through Heat templates.
Composable roles allow you to integrate Contrail into the overcloud environment.
The following are the Contrail roles used for integrating into the overcloud:
• Contrail Controller
• Contrail Analytics
• Contrail-TSN
• Contrail-DPDK
Figure 20 on page 196 shows the relationship and components of an undercloud and overcloud architecture
for Contrail.
Undercloud Requirements
The undercloud is a single server or VM that hosts the OpenStack Platform director, which is an OpenStack
installation used to provision OpenStack on the overcloud.
Overcloud Requirements
The overcloud roles can be deployed to bare metal servers or to virtual machines (VMs), but the compute
nodes must be deployed to bare metal systems. Every overcloud node must support IPMI for booting up
from the undercloud using PXE.
Ensure the following requirements are met for the Contrail nodes per role.
• Non-high availability: A minimum of 4 overcloud nodes are needed for control plane roles for a non-high
availability deployment:
• 1x contrail-controller
• 1x contrail-analytics
• 1x contrail-analytics-database
• 1x OpenStack controller
• High availability: A minimum of 12 overcloud nodes are needed for control plane roles for a high availability
deployment:
• 3x contrail-controller
• 3x contrail-analytics
• 3x contrail-analytics-database
• 3x OpenStack controller
If the control plane roles are deployed to VMs, use 3 separate physical servers and deploy one role of
each kind to each physical server.
Networking Requirements
As a minimum, the installation requires two networks:
• provisioning network - This is the private network that the undercloud uses to provision the overcloud.
• external network - This is the externally-routable network you use to access the undercloud and overcloud
nodes.
Ensure the following requirements are met for the provisioning network:
• One NIC from every machine must be in the same broadcast domain as the provisioning network, and
it should be the same NIC on each of the overcloud machines. For example, if you use the second NIC
on the first overcloud machine, you should use the second NIC on each additional overcloud machine.
During installation, these NICs will be referenced by a single name across all overcloud machines.
• The provisioning network NIC should not be the same NIC that you are using for remote connectivity
to the undercloud machine. During the undercloud installation, an Open vSwitch bridge will be created
for Neutron, and the provisioning NIC will be bridged to the Open vSwitch bridge. Consequently,
connectivity would be lost if the provisioning NIC was also used for remote connectivity to the undercloud
machine.
• You must have the MAC address of the NIC that will PXE boot on the provisioning network, as well as
the IPMI information for the machine. The IPMI information will include such things as the IP address of
the IPMI NIC and the IPMI username and password.
• All of the networks must be available to all of the Contrail roles and computes.
While the provisioning and external networks are sufficient for basic applications, you should create
additional networks in most overcloud environments to provide isolation for the different traffic types by
assigning network traffic to specific network interfaces or bonds.
When isolated networks are configured, the OpenStack services are configured to use the isolated networks.
If no isolated networks are configured, all services run on the provisioning network. If only some isolated
networks are configured, traffic belonging to a network not configured runs on the provisioning network.
The following networks are typically deployed when using network isolation topology:
• Tenant - used for tenant overlay data plane traffic (one network per tenant)
• External - provides external access to the undercloud and overcloud, including external access to the
web UIs and public APIs
• Floating IP - provides floating IP access to the tenant network (can either be merged with external or
can be a separate network)
For more information on the different network types, see Planning Networks.
Compatibility Matrix
The following combinations of Operating System/OpenStack/Deployer/Contrail are supported:
Installation Summary
The general installation procedure is as follows:
• Set up the infrastructure, which is the set of servers or VMs that host the undercloud and overcloud,
including the provisioning network that connects them together.
• Set up the overcloud, which is the set of services in the tenant-facing network. Contrail is part of the
overcloud.
For more information on installing and using the RHOSPd, see Red Hat documentation.
IN THIS SECTION
The following example illustrates all control plane functions as Virtual Machines hosted on KVM hosts.
There are different ways to create the infrastructure providing the control plane elements. To illustrate
the installation procedure, we will use four host machines for the infrastructure, each running KVM. KVM1
contains a VM running the undercloud while KVM2 through KVM4 each contains a VM running an
OpenStack controller and a Contrail controller (Table 8 on page 199).
Figure 21 on page 200 shows the physical connectivity where each KVM host and each compute node has
two interfaces that connect to an external switch. These interfaces attach to separate virtual bridges within
the VM, allowing for two physically separate networks (external and provisioning networks).
Figure 22 on page 201 shows the logical view of the connectivity where VLANs are used to provide further
network separation for the different OpenStack network types.
The following sections describe how to configure the infrastructure, the undercloud, and finally the
overcloud.
Table 15: External Physical Switch Port and VLAN Configuration
4. Additionally, on the overcloud nodes only, create and start the virtual switches br0 and br1.
# Create the virtual switches and bind them to the respective interfaces.
ovs-vsctl add-br br0
ovs-vsctl add-br br1
ovs-vsctl add-port br0 NIC1
ovs-vsctl add-port br1 NIC2
• create and start a virtual baseboard management controller for that overcloud KVM host so that the
VM can be managed using IPMI
This example procedure creates a VM definition consisting of 2 compute nodes, 1 Contrail controller node,
and 1 OpenStack controller node on each overcloud KVM host.
ROLES=compute:2,contrail-controller:1,control:1
# Initialize and specify the IPMI user and password you want to use.
num=0
ipmi_user=<user>
ipmi_password=<password>
libvirt_path=/var/lib/libvirt/images
port_group=overcloud
prov_switch=br0
/bin/rm ironic_list
CAUTION: This procedure creates one ironic_list file per overcloud KVM host. Combine
the contents of each file into a single ironic_list file on the undercloud.
The following shows the resulting ironic_list file after you combine the contents from
each separate file:
mkdir ~/images
cd images
• CentOS
curl https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1802.qcow2.xz -o CentOS-7-x86_64-GenericCloud-1802.qcow2.xz
unxz -d images/CentOS-7-x86_64-GenericCloud-1802.qcow2.xz
cloud_image=~/images/CentOS-7-x86_64-GenericCloud-1802.qcow2
• RHEL
undercloud_name=queensa
undercloud_suffix=local
root_password=<password>
stack_password=<password>
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 /var/lib/libvirt/images/${undercloud_name}.qcow2 100G
virt-resize --expand /dev/sda1 ${cloud_image} /var/lib/libvirt/images/${undercloud_name}.qcow2
virt-customize -a /var/lib/libvirt/images/${undercloud_name}.qcow2 \
--run-command 'xfs_growfs /' \
--root-password password:${root_password} \
--hostname ${undercloud_name}.${undercloud_suffix} \
--run-command 'useradd stack' \
--password stack:password:${stack_password} \
--run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
--chmod 0440:/etc/sudoers.d/stack \
--run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
--run-command 'systemctl enable sshd' \
--run-command 'yum remove -y cloud-init' \
--selinux-relabel
NOTE: As part of the undercloud definition, a user called stack is created. This user will be
used later to install the undercloud.
vcpus=8
vram=32000
virt-install --name ${undercloud_name} \
--disk /var/lib/libvirt/images/${undercloud_name}.qcow2 \
--vcpus=${vcpus} \
--ram=${vram} \
--network network=default,model=virtio \
--network network=br0,model=virtio,portgroup=overcloud \
--virt-type kvm \
--import \
--os-variant rhel7 \
--graphics vnc \
--serial pty \
--noautoconsole \
--console pty,target_type=virtio
6. Retrieve the undercloud IP address. It might take several seconds before the IP address is available.
ssh ${undercloud_ip}
undercloud_name=`hostname -s`
undercloud_suffix=`hostname -d`
hostnamectl set-hostname ${undercloud_name}.${undercloud_suffix}
hostnamectl set-hostname --transient ${undercloud_name}.${undercloud_suffix}
3. Add the hostname to the /etc/hosts file. The following example assumes the management interface
is eth0.
undercloud_ip=`ip addr sh dev eth0 | grep "inet " | awk '{print $2}' | awk -F"/" '{print $1}'`
echo ${undercloud_ip} ${undercloud_name}.${undercloud_suffix} ${undercloud_name} >> /etc/hosts
• CentOS
• RHEL
6. Copy the undercloud configuration file sample and modify the configuration as required. See Red Hat
documentation for information on how to modify that file.
su - stack
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
vi ~/undercloud.conf
1. Configure a forwarding path between the provisioning network and the external network:
sudo ip link add name vlan720 link br-ctlplane type vlan id 720
sudo ip addr add 10.2.0.254/24 dev vlan720
sudo ip link set dev vlan720 up
newgrp docker
exit
su - stack
source stackrc
undercloud_nameserver=8.8.8.8
openstack subnet set `openstack subnet show ctlplane-subnet -c id -f value` --dns-nameserver ${undercloud_nameserver}
mkdir images
cd images
b. Retrieve the overcloud images from either the RDO project or from Red Hat.
• TripleO
curl -O https://images.rdoproject.org/queens/rdo_trunk/current-tripleo-rdo/ironic-python-agent.tar
curl -O https://images.rdoproject.org/queens/rdo_trunk/current-tripleo-rdo/overcloud-full.tar
• OSP13
cd
openstack overcloud image upload --image-path /home/stack/images/
Ironic is an integrated OpenStack program that provisions bare metal machines instead of virtual
machines. It is best thought of as a bare metal hypervisor API and a set of plugins that interact with
the bare metal hypervisors.
NOTE: Make sure to combine the ironic_list files from the three overcloud KVM hosts.
ipmi_password=<password>
ipmi_user=<user>
while IFS= read -r line; do
mac=`echo $line|awk '{print $1}'`
name=`echo $line|awk '{print $2}'`
kvm_ip=`echo $line|awk '{print $3}'`
profile=`echo $line|awk '{print $4}'`
ipmi_port=`echo $line|awk '{print $5}'`
uuid=`openstack baremetal node create --driver ipmi \
--property cpus=4 \
--property memory_mb=16348 \
--property local_gb=100 \
--property cpu_arch=x86_64 \
--driver-info ipmi_username=${ipmi_user} \
--driver-info ipmi_address=${kvm_ip} \
--driver-info ipmi_password=${ipmi_password} \
--driver-info ipmi_port=${ipmi_port} \
--name=${name} \
--property capabilities=profile:${profile},boot_option:local \
-c uuid -f value`
openstack baremetal port create --node ${uuid} ${mac}
done < <(cat ironic_list)
Evaluate the attributes of the physical server. The server will automatically be profiled based on
the rules.
The following example shows how to create a rule for a system manufacturer of "Supermicro" and
memory greater than or equal to 128 GB.
"value": "Supermicro"},
{"op": "ge", "field": "memory_mb", "value": 128000}
],
"actions": [
{"action": "set-attribute", "path": "driver_info/ipmi_username",
"value": "<user>"},
{"action": "set-attribute", "path": "driver_info/ipmi_password",
"value": "<password>"},
• Scan the BMC IP range and automatically add new servers matching the above rule by:
ipmi_range=10.87.122.25/32
ipmi_password=<password>
ipmi_user=<user>
openstack overcloud node discover --range ${ipmi_range} \
--credentials ${ipmi_user}:${ipmi_password} \
--introspect --provide
4. Create Flavor:
for i in compute-dpdk \
compute-sriov \
contrail-controller \
contrail-analytics \
contrail-database \
contrail-analytics-database; do
openstack flavor create $i --ram 4096 --vcpus 1 --disk 40
openstack flavor set --property "capabilities:boot_option"="local" \
--property "capabilities:profile"="${i}" ${i}
done
cp -r /usr/share/openstack-tripleo-heat-templates/ tripleo-heat-templates
• TripleO
• OSP13
NOTE: This step is optional. The Contrail containers can be downloaded from external
registries later.
cd ~/tripleo-heat-templates/tools/contrail
./import_contrail_container.sh -f container_outputfile -r registry -t tag [-i insecure] [-u username] [-p password] [-c certificate path]
Here are a few examples of importing Contrail containers from different sources:
./import_contrail_container.sh -f /tmp/contrail_container -r hub.juniper.net/contrail -u USERNAME -p PASSWORD -t 1234
./import_contrail_container.sh -f /tmp/contrail_container -r docker.io/opencontrailnightly -t 1234
./import_contrail_container.sh -f /tmp/contrail_container -r device.example.net:5443 -c http://device.example.net/pub/device.example.net.crt -t 1234
vi ~/tripleo-heat-templates/environments/contrail-services.yaml
parameter_defaults:
ContrailSettings:
VROUTER_GATEWAY: 10.0.0.1
# KEY1: value1
# KEY2: value2
VXLAN_VN_ID_MODE: "configured"
ENCAP_PRIORITY: "VXLAN,MPLSoUDP,MPLSoGRE"
ContrailControllerParameters:
AAAMode: rbac
vi ~/tripleo-heat-templates/environments/contrail-services.yaml
parameter_defaults:
ContrailRegistry: hub.juniper.net/contrail
ContrailRegistryUser: <USER>
ContrailRegistryPassword: <PASSWORD>
• Insecure registry
parameter_defaults:
ContrailRegistryInsecure: true
DockerInsecureRegistryAddress: 10.87.64.32:5000,192.168.24.1:8787
ContrailRegistry: 10.87.64.32:5000
parameter_defaults:
ContrailRegistryCertUrl: http://device.example.net/pub/device.example.net.crt
ContrailRegistry: device.example.net:5443
parameter_defaults:
ContrailImageTag: queens-5.0-104-rhel-queens
IN THIS SECTION
Overview | 277
Overview
To customize the network, define the different networks and configure the overcloud nodes' NIC layout. TripleO supports a flexible way of customizing the network.
provisioning - All
IN THIS SECTION
vi ~/tripleo-heat-templates/roles_data_contrail_aio.yaml
OpenStack Controller
###############################################################################
# Role: Controller #
###############################################################################
- name: Controller
description: |
Controller role that has all the controler services loaded and handles
Compute Node
###############################################################################
# Role: Compute #
###############################################################################
- name: Compute
description: |
Basic Compute Node role
CountDefault: 1
networks:
- InternalApi
- Tenant
- Storage
Contrail Controller
###############################################################################
# Role: ContrailController #
###############################################################################
- name: ContrailController
description: |
ContrailController role that has all the Contrail controler services loaded
and handles config, control and webui functions
CountDefault: 1
tags:
- primary
- contrailcontroller
networks:
- InternalApi
- Tenant
Compute DPDK
###############################################################################
# Role: ContrailDpdk #
###############################################################################
- name: ContrailDpdk
description: |
Contrail Dpdk Node role
CountDefault: 0
tags:
- contraildpdk
networks:
- InternalApi
- Tenant
- Storage
Compute SRIOV
###############################################################################
# Role: ContrailSriov
###############################################################################
- name: ContrailSriov
description: |
Contrail Sriov Node role
CountDefault: 0
tags:
- contrailsriov
networks:
- InternalApi
- Tenant
- Storage
Compute CSN
###############################################################################
# Role: ContrailTsn
###############################################################################
- name: ContrailTsn
description: |
Contrail Tsn Node role
CountDefault: 0
tags:
- contrailtsn
networks:
- InternalApi
- Tenant
- Storage
cat ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml
resource_registry:
OS::TripleO::Controller::Net::SoftwareConfig:
../../network/config/contrail/controller-nic-config.yaml
OS::TripleO::ContrailController::Net::SoftwareConfig:
../../network/config/contrail/contrail-controller-nic-config.yaml
OS::TripleO::ContrailControlOnly::Net::SoftwareConfig:
../../network/config/contrail/contrail-controller-nic-config.yaml
OS::TripleO::Compute::Net::SoftwareConfig:
../../network/config/contrail/compute-nic-config.yaml
OS::TripleO::ContrailDpdk::Net::SoftwareConfig:
../../network/config/contrail/contrail-dpdk-nic-config.yaml
OS::TripleO::ContrailSriov::Net::SoftwareConfig:
../../network/config/contrail/contrail-sriov-nic-config.yaml
OS::TripleO::ContrailTsn::Net::SoftwareConfig:
../../network/config/contrail/contrail-tsn-nic-config.yaml
parameter_defaults:
# Customize all these values to match the local environment
TenantNetCidr: 10.0.0.0/24
InternalApiNetCidr: 10.1.0.0/24
ExternalNetCidr: 10.2.0.0/24
StorageNetCidr: 10.3.0.0/24
StorageMgmtNetCidr: 10.4.0.0/24
# CIDR subnet mask length for provisioning network
ControlPlaneSubnetCidr: '24'
# Allocation pools
TenantAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.200'}]
InternalApiAllocationPools: [{'start': '10.1.0.10', 'end': '10.1.0.200'}]
ExternalAllocationPools: [{'start': '10.2.0.10', 'end': '10.2.0.200'}]
StorageAllocationPools: [{'start': '10.3.0.10', 'end': '10.3.0.200'}]
StorageMgmtAllocationPools: [{'start': '10.4.0.10', 'end': '10.4.0.200'}]
# Routes
ControlPlaneDefaultRoute: 192.168.24.1
InternalApiDefaultRoute: 10.1.0.1
ExternalInterfaceDefaultRoute: 10.2.0.1
# Vlans
InternalApiNetworkVlanID: 710
ExternalNetworkVlanID: 720
StorageNetworkVlanID: 730
StorageMgmtNetworkVlanID: 740
TenantNetworkVlanID: 3211
# Services
EC2MetadataIp: 192.168.24.1 # Generally the IP of the undercloud
DnsServers: ["172.x.x.x"]
NtpServer: 10.0.0.1
IN THIS SECTION
cd ~/tripleo-heat-templates/network/config/contrail
OpenStack Controller
heat_template_version: queens
description: >
Software Config to drive os-net-config to configure multiple interfaces
for the compute role. This is an example for a Nova compute node using
Contrail vrouter and the vhost0 interface.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal_api network
type: string
InternalApiDefaultRoute: # Not used by default in this template
default: '10.0.0.1'
description: The default route of the internal api network.
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage_mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
ManagementIpSubnet: # Only populated when including
environments/network-management.yaml
default: ''
description: IP address/subnet on the management network
type: string
ExternalNetworkVlanID:
default: 10
description: Vlan ID for the external network traffic.
type: number
InternalApiNetworkVlanID:
default: 20
description: Vlan ID for the internal_api network traffic.
type: number
StorageNetworkVlanID:
default: 30
resources:
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: ../../scripts/run-os-net-config.sh
params:
$network_config:
network_config:
- type: interface
name: nic1
use_dhcp: false
dns_servers:
get_param: DnsServers
addresses:
- ip_netmask:
list_join:
- '/'
- - get_param: ControlPlaneIp
- get_param: ControlPlaneSubnetCidr
routes:
- ip_netmask: 169.x.x.x/32
next_hop:
get_param: EC2MetadataIp
- default: true
next_hop:
get_param: ControlPlaneDefaultRoute
- type: vlan
vlan_id:
get_param: InternalApiNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: InternalApiIpSubnet
- type: vlan
vlan_id:
get_param: ExternalNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: ExternalIpSubnet
- type: vlan
vlan_id:
get_param: StorageNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: StorageIpSubnet
- type: vlan
vlan_id:
get_param: StorageMgmtNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: StorageMgmtIpSubnet
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value:
get_resource: OsNetConfigImpl
Contrail Controller
heat_template_version: queens
description: >
Software Config to drive os-net-config to configure multiple interfaces
for the compute role. This is an example for a Nova compute node using
Contrail vrouter and the vhost0 interface.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal_api network
type: string
InternalApiDefaultRoute: # Not used by default in this template
default: '10.0.0.1'
description: The default route of the internal api network.
type: string
ControlPlaneSubnetCidr: # Override this via parameter_defaults
default: '24'
description: The subnet CIDR of the control plane network.
type: string
ControlPlaneDefaultRoute: # Override this via parameter_defaults
description: The default route of the control plane network.
type: string
ExternalInterfaceDefaultRoute: # Not used by default in this template
default: '10.0.0.1'
description: The default route of the external network.
type: string
ManagementInterfaceDefaultRoute: # Commented out by default in this template
default: unset
description: The default route of the management network.
type: string
DnsServers: # Override this via parameter_defaults
default: []
description: A list of DNS servers (2 max for some implementations) that will
be added to resolv.conf.
type: comma_delimited_list
EC2MetadataIp: # Override this via parameter_defaults
description: The IP address of the EC2 metadata server.
type: string
resources:
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: ../../scripts/run-os-net-config.sh
params:
$network_config:
network_config:
- type: interface
name: nic1
use_dhcp: false
dns_servers:
get_param: DnsServers
addresses:
- ip_netmask:
list_join:
- '/'
- - get_param: ControlPlaneIp
- get_param: ControlPlaneSubnetCidr
routes:
- ip_netmask: 169.x.x.x/32
next_hop:
get_param: EC2MetadataIp
- default: true
next_hop:
get_param: ControlPlaneDefaultRoute
- type: vlan
vlan_id:
get_param: InternalApiNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: InternalApiIpSubnet
- type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value:
get_resource: OsNetConfigImpl
Compute Node
heat_template_version: queens
description: >
Software Config to drive os-net-config to configure multiple interfaces
for the compute role. This is an example for a Nova compute node using
Contrail vrouter and the vhost0 interface.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal_api network
type: string
InternalApiDefaultRoute: # Not used by default in this template
default: '10.0.0.1'
description: The default route of the internal api network.
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage_mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
ManagementIpSubnet: # Only populated when including
environments/network-management.yaml
default: ''
description: IP address/subnet on the management network
type: string
ExternalNetworkVlanID:
default: 10
description: Vlan ID for the external network traffic.
type: number
InternalApiNetworkVlanID:
default: 20
description: Vlan ID for the internal_api network traffic.
type: number
StorageNetworkVlanID:
default: 30
resources:
OsNetConfigImpl:
type: OS::Heat::SoftwareConfig
properties:
group: script
config:
str_replace:
template:
get_file: ../../scripts/run-os-net-config.sh
params:
$network_config:
network_config:
- type: interface
name: nic1
use_dhcp: false
dns_servers:
get_param: DnsServers
addresses:
- ip_netmask:
list_join:
- '/'
- - get_param: ControlPlaneIp
- get_param: ControlPlaneSubnetCidr
routes:
- ip_netmask: 169.x.x.x/32
next_hop:
get_param: EC2MetadataIp
- default: true
next_hop:
get_param: ControlPlaneDefaultRoute
- type: vlan
vlan_id:
get_param: InternalApiNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: InternalApiIpSubnet
- type: vlan
vlan_id:
get_param: StorageNetworkVlanID
device: nic1
addresses:
- ip_netmask:
get_param: StorageIpSubnet
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value:
get_resource: OsNetConfigImpl
IN THIS SECTION
VLAN | 293
Bond | 294
In addition to the standard NIC configuration, the vRouter kernel mode supports VLAN, Bond, and Bond
+ VLAN modes. The configuration snippets below only show the relevant section of the NIC template
configuration for each mode.
VLAN
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: nic2
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name: bond0
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: bond0
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
IN THIS SECTION
Standard | 296
VLAN | 296
Bond | 297
In addition to the standard NIC configuration, the vRouter DPDK mode supports Standard, VLAN, Bond,
and Bond + VLAN modes.
vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml
parameter_defaults:
ContrailDpdkHugepages1GB: 10
See the following NIC template configurations for vRouter DPDK mode. The configuration snippets below
only show the relevant section of the NIC configuration for each mode.
Standard
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
IN THIS SECTION
VLAN | 299
Bond | 299
• Standard
• VLAN
• Bond
• Bond + VLAN
vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml
parameter_defaults:
ContrailSriovHugepages1GB: 10
NovaPCIPassthrough:
- devname: "ens2f1"
physical_network: "sriov1"
ContrailSriovNumVFs: ["ens2f1:7"]
The SRIOV NICs are not configured in the NIC templates. However, vRouter NICs must still be configured.
See the following NIC template configurations for vRouter kernel mode. The configuration snippets below
only show the relevant section of the NIC configuration for each mode.
VLAN
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: nic2
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name: bond0
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: linux_bond
name: bond0
bonding_options: "mode=4 xmit_hash_policy=layer2+3"
use_dhcp: false
members:
-
type: interface
name: nic2
-
type: interface
name: nic3
- type: vlan
vlan_id:
get_param: TenantNetworkVlanID
device: bond0
- type: contrail_vrouter
name: vhost0
use_dhcp: false
members:
-
type: interface
name:
str_replace:
template: vlanVLANID
params:
VLANID: {get_param: TenantNetworkVlanID}
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
IN THIS SECTION
Standard | 302
VLAN | 302
Bond | 302
• Standard
• VLAN
• Bond
• Bond + VLAN
vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml
parameter_defaults:
ContrailSriovMode: dpdk
ContrailDpdkHugepages1GB: 10
ContrailSriovHugepages1GB: 10
NovaPCIPassthrough:
- devname: "ens2f1"
physical_network: "sriov1"
ContrailSriovNumVFs: ["ens2f1:7"]
The SRIOV NICs are not configured in the NIC templates. However, vRouter NICs must still be configured.
See the following NIC template configurations for vRouter DPDK mode. The configuration snippets below
only show the relevant section of the NIC configuration for each mode.
Standard
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
members:
-
type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Bond + VLAN
- type: contrail_vrouter_dpdk
name: vhost0
use_dhcp: false
driver: uio_pci_generic
cpu_list: 0x01
vlan_id:
get_param: TenantNetworkVlanID
bond_mode: 4
bond_policy: layer2+3
members:
-
type: interface
name: nic2
use_dhcp: false
-
type: interface
name: nic3
use_dhcp: false
addresses:
- ip_netmask:
get_param: TenantIpSubnet
Advanced Scenarios
Remote Compute
Remote Compute extends the data plane to remote locations (POPs) while keeping the control plane central. Each POP has its own set of Contrail control services, which run in the central location.
The difficulty is to ensure that the compute nodes of a given POP connect to the Control nodes assigned to that POP. The Control nodes must have predictable IP addresses, and the compute nodes have to know these IP addresses. To achieve that, the following methods are used:
• Custom Roles
• Static IP assignment
Each overcloud node has a unique DMI UUID. This UUID is known on the undercloud node as well as on the overcloud node, so it can be used to map node-specific information. For each POP, a Control role and a Compute role have to be created.
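The guide does not show how the DMI UUID is collected. As one possible approach (an assumption, not part of the original procedure), you can read it directly on a deployed overcloud node with dmidecode, and for VM-based nodes the libvirt domain UUID on the overcloud KVM host is typically the same value that later appears as a NodeDataLookup key:
# On the overcloud node itself (requires root)
sudo dmidecode -s system-uuid
# On the overcloud KVM host, for a VM-based overcloud node (domain name is an example)
virsh domuuid control-only_1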
Overview
Mapping Table (columns: Nova Name, Ironic Name, UUID, KVM IP Address, POP)
ControlOnly preparation
Two ControlOnly overcloud VM definitions will be created on each of the overcloud KVM hosts.
ROLES=control-only:2
num=4
ipmi_user=<user>
ipmi_password=<password>
libvirt_path=/var/lib/libvirt/images
port_group=overcloud
prov_switch=br0
/bin/rm ironic_list
IFS=',' read -ra role_list <<< "${ROLES}"
for role in ${role_list[@]}; do
role_name=`echo $role|cut -d ":" -f 1`
role_count=`echo $role|cut -d ":" -f 2`
for count in `seq 1 ${role_count}`; do
echo $role_name $count
qemu-img create -f qcow2 ${libvirt_path}/${role_name}_${count}.qcow2 99G
virsh define /dev/stdin <<EOF
$(virt-install --name ${role_name}_${count} \
--disk ${libvirt_path}/${role_name}_${count}.qcow2 \
--vcpus=4 \
--ram=16348 \
--network network=br0,model=virtio,portgroup=${port_group} \
--network network=br1,model=virtio \
--virt-type kvm \
--cpu host \
--import \
--os-variant rhel7 \
--serial pty \
--console pty,target_type=virtio \
--graphics vnc \
--print-xml)
EOF
vbmc add ${role_name}_${count} --port 1623${num} --username ${ipmi_user} --password ${ipmi_password}
vbmc start ${role_name}_${count}
prov_mac=`virsh domiflist ${role_name}_${count}|grep ${prov_switch}|awk '{print $5}'`
vm_name=${role_name}-${count}-`hostname -s`
kvm_ip=`ip route get 1 |grep src |awk '{print $7}'`
echo ${prov_mac} ${vm_name} ${kvm_ip} ${role_name} 1623${num}>> ironic_list
num=$(expr $num + 1)
done
done
NOTE: The generated ironic_list will be needed on the undercloud to import the nodes to Ironic.
Get the ironic_lists from the overcloud KVM hosts and combine them.
cat ironic_list_control_only
52:54:00:3a:2f:ca control-only-1-5b3s30 10.87.64.31 control-only 16234
52:54:00:31:4f:63 control-only-2-5b3s30 10.87.64.31 control-only 16235
52:54:00:0c:11:74 control-only-1-5b3s31 10.87.64.32 control-only 16234
52:54:00:56:ab:55 control-only-2-5b3s31 10.87.64.32 control-only 16235
52:54:00:c1:f0:9a control-only-1-5b3s32 10.87.64.33 control-only 16234
52:54:00:f3:ce:13 control-only-2-5b3s32 10.87.64.33 control-only 16235
Import:
ipmi_password=<password>
ipmi_user=<user>
num=0
while IFS= read -r line; do
mac=`echo $line|awk '{print $1}'`
name=`echo $line|awk '{print $2}'`
kvm_ip=`echo $line|awk '{print $3}'`
profile=`echo $line|awk '{print $4}'`
ipmi_port=`echo $line|awk '{print $5}'`
uuid=`openstack baremetal node create --driver ipmi \
--property cpus=4 \
--property memory_mb=16348 \
--property local_gb=100 \
--property cpu_arch=x86_64 \
--driver-info ipmi_username=${ipmi_user} \
--driver-info ipmi_address=${kvm_ip} \
--driver-info ipmi_password=${ipmi_password} \
--driver-info ipmi_port=${ipmi_port} \
--name=${name} \
--property capabilities=boot_option:local \
-c uuid -f value`
openstack baremetal node set ${uuid} --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK
openstack baremetal port create --node ${uuid} ${mac}
openstack baremetal node manage ${uuid}
num=$(expr $num + 1)
done < <(cat ironic_list_control_only)
The first ControlOnly node on each of the overcloud KVM hosts will be used for POP1, the second for POP2, and so forth.
available | False |
| 2d4be83e-6fcc-4761-86f2-c2615dd15074 | compute-4-5b3s31 | None | power off |
available | False |
The first two compute nodes belong to POP1 the second two compute nodes belong to POP2.
~/subcluster_input.yaml
---
- subcluster: subcluster1
asn: "65413"
control_nodes:
- uuid: 7d758dce-2784-45fd-be09-5a41eb53e764
ipaddress: 10.0.0.11
- uuid: 91dd9fa9-e8eb-4b51-8b5e-bbaffb6640e4
ipaddress: 10.0.0.12
- uuid: f4766799-24c8-4e3b-af54-353f2b796ca4
ipaddress: 10.0.0.13
compute_nodes:
- uuid: 91d6026c-b9db-49cb-a685-99a63da5d81e
vrouter_gateway: 10.0.0.1
- uuid: 8028eb8c-e1e6-4357-8fcf-0796778bd2f7
vrouter_gateway: 10.0.0.1
- subcluster: subcluster2
asn: "65414"
control_nodes:
- uuid: d26abdeb-d514-4a37-a7fb-2cd2511c351f
ipaddress: 10.0.0.14
- uuid: 09fa57b8-580f-42ec-bf10-a19573521ed4
ipaddress: 10.0.0.15
- uuid: 58a803ae-a785-470e-9789-139abbfa74fb
ipaddress: 10.0.0.16
compute_nodes:
- uuid: b795b3b9-c4e3-4a76-90af-258d9336d9fb
vrouter_gateway: 10.0.0.1
- uuid: 2d4be83e-6fcc-4761-86f2-c2615dd15074
vrouter_gateway: 10.0.0.1
~/tripleo-heat-templates/tools/contrail/create_subcluster_environment.py \
-i ~/subcluster_input.yaml \
-o ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml
cat ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml
parameter_defaults:
NodeDataLookup:
041D7B75-6581-41B3-886E-C06847B9C87E:
contrail_settings:
CONTROL_NODES: 10.0.0.14,10.0.0.15,10.0.0.16
SUBCLUSTER: subcluster2
VROUTER_GATEWAY: 10.0.0.1
09BEC8CB-77E9-42A6-AFF4-6D4880FD87D0:
contrail_settings:
BGP_ASN: '65414'
SUBCLUSTER: subcluster2
14639A66-D62C-4408-82EE-FDDC4E509687:
contrail_settings:
BGP_ASN: '65414'
SUBCLUSTER: subcluster2
28AB0B57-D612-431E-B177-1C578AE0FEA4:
contrail_settings:
BGP_ASN: '65413'
SUBCLUSTER: subcluster1
3993957A-ECBF-4520-9F49-0AF6EE1667A7:
contrail_settings:
BGP_ASN: '65413'
SUBCLUSTER: subcluster1
73F8D030-E896-4A95-A9F5-E1A4FEBE322D:
contrail_settings:
BGP_ASN: '65413'
SUBCLUSTER: subcluster1
7933C2D8-E61E-4752-854E-B7B18A424971:
contrail_settings:
CONTROL_NODES: 10.0.0.14,10.0.0.15,10.0.0.16
SUBCLUSTER: subcluster2
VROUTER_GATEWAY: 10.0.0.1
AF92F485-C30C-4D0A-BDC4-C6AE97D06A66:
contrail_settings:
BGP_ASN: '65414'
SUBCLUSTER: subcluster2
BB9E9D00-57D1-410B-8B19-17A0DA581044:
contrail_settings:
CONTROL_NODES: 10.0.0.11,10.0.0.12,10.0.0.13
SUBCLUSTER: subcluster1
VROUTER_GATEWAY: 10.0.0.1
E1A809DE-FDB2-4EB2-A91F-1B3F75B99510:
contrail_settings:
CONTROL_NODES: 10.0.0.11,10.0.0.12,10.0.0.13
SUBCLUSTER: subcluster1
VROUTER_GATEWAY: 10.0.0.1
Deployment
Installing Overcloud
1. Deployment:
-e ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml \
--roles-file ~/tripleo-heat-templates/roles_data_contrail_aio.yaml
2. Validation Test:
source overcloudrc
curl -O http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create --container-format bare --disk-format qcow2 --file cirros-0.3.5-x86_64-disk.img cirros
openstack flavor create --public cirros --id auto --ram 64 --disk 0 --vcpus 1
openstack network create net1
openstack subnet create --subnet-range 1.0.0.0/24 --network net1 sn1
nova boot --image cirros --flavor cirros --nic net-id=`openstack network show net1 -c id -f value` --availability-zone nova:overcloud-novacompute-0.localdomain c1
nova list
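As an optional additional check (not part of the original validation steps), you can verify that the test instance is active and landed on the intended compute node:
openstack server show c1 -c status -c OS-EXT-SRV-ATTR:host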
RELATED DOCUMENTATION
Installing a Nested Red Hat OpenShift Container Platform 3.11 Cluster Using Contrail Ansible Deployer
NOTE: The Netronome SmartNIC vRouter technology covered in this document is available for
evaluation purposes only. It is not intended for deployment in production networks.
Contrail supports Netronome Agilio CX SmartNICs for Contrail Networking deployment with Red Hat
OpenStack Platform Director (RHOSPd) 13 environment.
This feature enables service providers to improve vRouter forwarding performance, including packets per second (PPS). This optimizes server CPU usage so that more virtual network functions (VNFs) can be deployed per server.
Benefits:
• Increased PPS capacity of Contrail vRouter datapath allowing applications to reach their full processing
capacity.
• Reclaimed CPU cores from Contrail vRouter off-loading allowing more VMs and VNFs to be deployed
per server.
The goal of this topic is to provide a procedure for deploying accelerated vRouter compute nodes.
Register on Netronome support site at https://help.netronome.com and provide Docker Hub credentials.
Netronome will provide the TripleO templates for SmartNIC vRouter deployment on compute nodes.
Also, Netronome will authorize Docker Hub registry access.
AGILIO_TAG="2.38-rhel-queens"
FORWARDER_TAG="2.38-rhel-queens"
Procedure:
NOTE: If you have multiple undercloud nodes deployed, you must perform the following
procedure on the same node.
a. Extract the Agilio plugin archive and copy the agilio-plugin folder into the tripleo-heat-templates
directory.
[tripleo-heat-templates]$ cd agilio-plugin/
resource_registry:
OS::TripleO::NodeExtraConfigPost: agilio-vrouter.yaml
parameter_defaults:
# Hugepages
ContrailVrouterHugepages2MB: "8192"
# IOMMU
ComputeParameters:
KernelArgs: "intel_iommu=on iommu=pt isolcpus=1,2 "
ComputeCount: 3
# Additional config
ControlPlaneDefaultRoute: 10.0.x.1
EC2MetadataIp: 10.0.x.1 # Generally the IP of the Undercloud
DnsServers: ["8.8.8.8","192.168.3.3"]
NtpServer: ntp.is.co.za
ContrailRegistryInsecure: true
DockerInsecureRegistryAddress: 172.x.x.150:6666,10.0.x.1:8787
ContrailRegistry: 172.x.x.150:6666
ContrailImageTag: <container_tag>-rhel-queens
• Add agilio-env.yaml to the overcloud installation step as described in the "Installing Overcloud" on page 252 topic, or pass it to the deployment script:
deploy_rhosp.sh -e ~/tripleo-heat-templates/agilio-plugin/agilio-env.yaml
SEE ALSO
Contrail Networking Release 2005 supports Octavia as LBaaS. The deployment supports RHOSP and Juju
platforms.
With Octavia as LBaaS, Contrail Networking is only maintaining network connectivity and is not involved
in any load balancing functions.
For each OpenStack load balancer created, Octavia launches a VM known as the amphora VM. The VM starts HAPROXY when a listener is created for the load balancer in OpenStack. Whenever the load balancer is updated in OpenStack, the amphora VM updates the running HAPROXY configuration. The amphora VM is deleted when the load balancer is deleted.
Contrail Networking provides connectivity to the amphora VM interfaces. The amphora VM has two interfaces: one for management and the other for data. The management interface is used by the Octavia services for management communication. Because the Octavia services run in the underlay network and the amphora VM runs in the overlay network, an SDN gateway is needed to reach the overlay network. The data interface is used for load balancing.
Follow the procedure to install OpenStack Octavia LBaaS with Contrail Networking:
cp tripleo-heat-templates/docker/services/octavia/octavia-deployment-config.yaml tripleo-heat-templates/docker/services/octavia/octavia-deployment-config.bak
conditions:
generate_certs:
and:
- get_param: OctaviaGenerateCerts
- or:
- equals:
- get_param: StackAction
- CREATE
- equals:
- get_param: StackAction
- UPDATE
cp tripleo-heat-templates/docker/services/octavia/octavia-deployment-config.bak tripleo-heat-templates/docker/services/octavia/octavia-deployment-config.yaml
Prerequisites:
• You must have connectivity between the Octavia controller and the amphora instances.
• You must have separate interfaces for the control plane and the data plane.
3. Check the available flavors and images. You can create them, if needed.
7. Create a simple HTTP server on each cirros instance. Log in to both cirros instances and run the following commands (an example sketch of these commands appears after this procedure):
11. Log in to the load balancer client and verify that round-robin works.
[email protected]'s password:
$ curl 10.10.10.50
Welcome to 10.10.10.52
$ curl 10.10.10.50
Welcome to 10.10.10.53
$ curl 10.10.10.50
Welcome to 10.10.10.52
$ curl 10.10.10.50
Welcome to 10.10.10.53
$ curl 10.10.10.50
Welcome to 10.10.10.52
$ curl 10.10.10.50
Welcome to 10.10.10.53
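The commands for the intermediate steps are not reproduced in the extracted procedure. The following is a hedged sketch of a typical Octavia CLI workflow and of a minimal HTTP responder on the cirros instances; the names lb1, listener1, pool1, and private-subnet are placeholders, the member addresses are taken from the output above, and option details can vary by release.
# Create a load balancer, listener, pool, and members (sketch)
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer member create --subnet-id private-subnet --address 10.10.10.52 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private-subnet --address 10.10.10.53 --protocol-port 80 pool1
# Minimal HTTP responder on each cirros instance (sketch)
while true; do
  echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $(ifconfig eth0 | awk '/inet addr/{print substr($2,6)}')" | sudo nc -l -p 80
done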
SEE ALSO
IN THIS SECTION
In Contrail, a tenant configuration is called a project. A project is created for each set of virtual machines
(VMs) and virtual networks (VNs) that are configured as a discrete entity for the tenant.
Projects are created, managed, and edited at the OpenStack Projects page.
1. Click the Admin tab on the OpenStack dashboard, then click the Projects link to access the Projects
page; see Figure 26 on page 323.
2. In the upper right, click the Create Project button to access the Add Project window; see
Figure 27 on page 324.
3. In the Add Project window, on the Project Info tab, enter a Name and a Description for the new project,
and select the Enabled check box to activate this project.
4. In the Add Project window, select the Project Members tab, and assign users to this project. Designate
each user as admin or as Member.
As a general rule, one person should be a superuser with the admin role for all projects, and a user with the Member role should be used for general configuration purposes.
Refer to OpenStack documentation for more information about creating and managing projects.
SEE ALSO
You can create virtual networks in Contrail Networking from the OpenStack dashboard. The following procedure shows how to create a virtual network when using OpenStack.
1. To create a virtual network when using OpenStack Contrail, select Project > Network > Networks.
The Networks page is displayed. See Figure 28 on page 325.
2. Click Create Network. The Create Network window is displayed. See Figure 29 on page 326 and
Figure 30 on page 326.
3. Click the Network and Subnet tabs to complete the fields in the Create Network window. See field
descriptions in Table 19 on page 327.
Field Description
Gateway IP Optionally, enter an explicit gateway IP address for the IP address block.
Check the Disable Gateway box if no gateway is to be used.
4. Click the Subnet Details tab to specify the Allocation Pool, DNS Name Servers, and Host Routes.
5. To save your network, click Create, or click Cancel to discard your work and start over.
To specify an image to upload to the Image Service for a project in your system by using the OpenStack
dashboard:
1. In OpenStack, select Project > Compute > Images. The Images window is displayed. See
Figure 32 on page 329.
2. Make sure you have selected the correct project to which you are associating an image.
4. Complete the fields to specify your image. Table 20 on page 331 describes each of the fields on the
window.
NOTE: Only images available through an HTTP URL are supported, and the image location
must be accessible to the Image Service. Compressed image binaries are supported (*.zip and
*.tar.gz).
Field Description
If you select Image File, you are prompted to browse to the local location
of the file.
Image Location Enter an external HTTP URL from which to load the image. The URL
must be a valid and direct URL to the image binary. URLs that redirect
or serve error pages result in unusable images.
Format Required field. Select the format of the image from a list:
AKI– Amazon Kernel Image
AMI– Amazon Machine Image
ARI– Amazon Ramdisk Image
ISO– Optical Disk Image
QCOW2– QEMU Emulator
Raw– An unstructured image format
VDI– Virtual Disk Image
VHD– Virtual Hard Disk
VMDK– Virtual Machine Disk
Minimum Disk (GB) Enter the minimum disk size required to boot the image. If you do not
specify a size, the default is 0 (no minimum).
Minimum Ram (MB) Enter the minimum RAM required to boot the image. If you do not
specify a size, the default is 0 (no minimum).
Public Select this check box if this is a public image. Leave unselected for a
private image.
Field Description
IN THIS SECTION
1. From the OpenStack interface, click the Project tab, select Access & Security, and click the Security
Groups tab.
Any existing security groups are listed under the Security Groups tab, including the default security
group; see Figure 34 on page 333.
2. Select the default-security-group and click Edit Rules in the Actions column.
The Edit Security Group Rules window is displayed; see Figure 35 on page 334. Any rules already
associated with the security group are listed.
3. Click Add Rule to add a new rule; see Figure 36 on page 334.
Column Description
IP Protocol Select the IP protocol to apply for this rule: TCP, UDP, ICMP.
From Port Select the port from which traffic originates to apply this rule. For TCP and UDP, enter a
single port or a range of ports. For ICMP rules, enter an ICMP type code.
To Port The port to which traffic is destined that applies to this rule, using the same options as in the
From Port field.
Source Select the source of traffic to be allowed by this rule. Specify subnet—the CIDR IP address
or address block of the inter-domain source of the traffic that applies to this rule, or you can
choose security group as source. Selecting security group as source allows any other instance
in that security group access to any other instance via this rule.
The Create Security Group window is displayed; see Figure 37 on page 335.
Each new security group has a unique 32-bit security group ID and an ACL is associated with the
configured rules.
In the Security Groups list, select the security group name to associate with the instance.
6. You can verify that security groups are attached by viewing the SgListReq and IntfReq associated with
the agent.xml.
IN THIS SECTION
IN THIS SECTION
Heat Version 2 with Service Templates and Port Tuple Sample Workflow | 338
Heat is the orchestration engine of the OpenStack program. Heat enables launching multiple cloud applications based on templates in the form of text files.
Introduction to Heat
A Heat template describes the infrastructure for a cloud application, such as networks, servers, floating
IP addresses, and the like, and can be used to manage the entire life cycle of that application.
When the application infrastructure changes, the Heat templates can be modified to automatically reflect
those changes. Heat can also delete all application resources if the system is finished with an application.
Heat templates can record the relationships between resources, for example, which networks are connected
by means of policy enforcements, and consequently call OpenStack REST APIs that create the necessary
infrastructure, in the correct order, needed to launch the application managed by the Heat template.
Heat Architecture
Heat is implemented by means of Python applications, including the following:
• heat-client—The CLI tool that communicates with the heat-api application to run Heat APIs.
• heat-api—Provides an OpenStack native REST API that processes API requests by sending them to the
Heat engine over remote procedure calls (RPCs).
• heat-engine—Responsible for orchestrating the launch of templates and providing events back to the
API consumer.
The generated resources and templates are part of the Contrail Python package, and are located in the
following directory in the target installation:
/usr/lib/python2.7/dist-packages/vnc_api/gen/heat/
• resources/—Contains all the resources for the contrail-heat plugin, which runs in the context of the Heat
engine service.
• templates/—Contains sample templates for each resource. Each sample template presents every possible
parameter in the schema. Use the sample templates as a reference when you build up more complex
templates for your network design.
The following contains a list of all the generated plug-in resources that are supported by contrail-heat:
https://github.com/tungstenfabric/tf-heat-plugin/tree/master/contrail_heat/new_templates
Heat Version 2 with Service Templates and Port Tuple Sample Workflow
With Contrail service templates Version 2, the user can create ports and bind them to a virtual machine
(VM)-based service instance, by means of a port-tuple object. All objects created with the Version 2 service
template are directly visible to the Contrail Heat engine, and are directly managed by Heat.
The following shows the basic workflow steps for creating a port tuple and service instance that will be
managed by Heat:
5. Label each port as left, right, mgmt, and so on, and add the ports to the port-tuple object.
Use a unique label for each of the ports in a single port tuple. The labels named left and right are used
for forwarding.
service_template.yaml
heat_template_version: 2013-05-23
description: >
HOT template to create a service template
parameters:
name:
type: string
description: Name of service template
mode:
type: string
description: service mode
type:
type: string
description: service type
image:
type: string
description: Name of the image
flavor:
type: string
description: Flavor
service_interface_type_list:
type: string
description: List of interface types
shared_ip_list:
type: string
description: List of shared ip enabled-disabled
static_routes_list:
type: string
description: List of static routes enabled-disabled
resources:
service_template:
type: OS::ContrailV2::ServiceTemplate
properties:
name: { get_param: name }
service_mode: { get_param: mode }
service_type: { get_param: type }
image_name: { get_param: image }
flavor: { get_param: flavor }
service_interface_type_list: { "Fn::Split" : [ ",", Ref: service_interface_type_list ] }
shared_ip_list: { "Fn::Split" : [ ",", Ref: shared_ip_list ] }
static_routes_list: { "Fn::Split" : [ ",", Ref: static_routes_list ] }
outputs:
service_template_fq_name:
description: FQ name of the service template
value: { get_attr: [ service_template, fq_name] }
2. Create an environment file to define the values to put in the variables in the template file.
service_template.env
parameters:
name: contrail_svc_temp
mode: transparent
type: firewall
image: cirros
flavor: m1.tiny
service_interface_type_list: management,left,right,other
shared_ip_list: True,True,False,False
static_routes_list: False,True,False,False
3. Create the Heat stack by launching the template and the environment file, using the following command:
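The command itself is not present in the extracted text. Assuming the file names used above and an example stack name of svc_template_stack, a typical invocation would be:
openstack stack create -t service_template.yaml -e service_template.env svc_template_stack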
SEE ALSO
IN THIS SECTION
IN THIS SECTION
Queuing | 346
• All packet forwarding devices, such as vRouter and the gateway, combine to form a system.
• Interfaces to the system are the ports from which the system sends and receives packets, such as tap
interfaces and physical ports.
• QoS is applied at the ingress to the system, for example, upon traffic from the interfaces to the fabric.
• At egress, packets are stripped of their tunnel headers and sent to interface queues, based on the
forwarding class. No marking from the outer packet to the inner packet is considered at this time.
• Queuing on the fabric interface, including queues, scheduling of queues, and drop policies.
• Forwarding class, a method of marking that controls how packets are sent to the fabric, including marking and identifying which queue to use.
Tenants can define which forwarding class their traffic can use, deciding which packets use which forwarding
class. The Contrail QoS configuration object has a mapping table, mapping the incoming DSCP or 802.1p
value to the forwarding class mapping.
The QoS configuration can also be applied to a virtual network, an interface, or a network policy.
1. Define the hardware queues and priority group in the instances.yaml file under the vrouter role as
shown below.
nodeh5:
ip: 10.xxx.xxx.109
provider: bms
roles:
vrouter:
VROUTER_GATEWAY: 192.168.1.45
PRIORITY_ID: 0,1,2,3,4,5,6,7
PRIORITY_BANDWIDTH: 0,10,0,20,0,30,0,40
PRIORITY_SCHEDULING: strict,rr,strict,rr,strict,rr,strict,rr
QOS_QUEUE_ID: 3,11,18,28,36,43,61,53
QOS_LOGICAL_QUEUES: "[ 1, 6-10, 12-15];[40-46];[70-74, 75, 80-95];[115];[140-143, 145];[175];[245];[215]"
QOS_DEF_HW_QUEUE: True
openstack_compute:
2. In the already provisioned setup, define the QoS configuration in the /etc/contrail/common_vrouter.env
file as shown in the following sample.
PRIORITY_ID=0,1,2,3,4,5,6,7
PRIORITY_BANDWIDTH=0,10,0,20,0,30,0,40
PRIORITY_SCHEDULING=strict,rr,strict,rr,strict,rr,strict,rr
QOS_QUEUE_ID=3,11,18,28,36,43,61,53
QOS_LOGICAL_QUEUES="[ 1, 6-10, 12-15];[40-46];[70-74, 75, 80-95];[115];[140-143, 145];[175];[245];[215]"
QOS_DEF_HW_QUEUE=True
Queuing Implementation
The vRouter provides the infrastructure to use queues supplied by the network interface, a method that
is also called hardware queueing. Network interface cards (NICs) that implement hardware queueing have
their own set of scheduling algorithms associated with the queues. The Contrail implementation is designed
to work with most NICs, however, the method is tested only on an Intel-based 10G NIC, also called Niantic.
• forwarding class
The forwarding class object specifies parameters for marking and queuing, including:
The QoS configuration object specifies a mapping from DSCP, 802.1p, and MPLS EXP values to the
corresponding forwarding class.
The QoS configuration has an option to specify the default forwarding class ID to use to select the
forwarding class for all unspecified DSCP, 802.1p, and MPLS EXP values.
If the default forwarding class ID is not specified by the user, it defaults to the forwarding class with ID 0.
Processing of QoS marked packets to look up the corresponding forwarding class to be applied works as
follows:
• For an MPLS-tunneled packet with MPLS EXP values specified, the EXP bit value is used with the MPLS
EXP map.
• If the QoS configuration is untrusted, only the default forwarding class is specified, and all incoming
values of the DSCP, 802.1p, and EXP bits in the packet are mapped to the same default forwarding class.
A virtual machine interface, virtual network, and network policy can refer to the QoS configuration object.
The QoS configuration object can be specified on the vhost so that underlay traffic can also be subjected
to marking and queuing. See Figure 40 on page 344.
Table 22 on page 345 shows two forwarding class objects defined. FC1 marks the traffic with high priority
values and queues it to Queue 0. FC2 marks the traffic as best effort and queues the traffic to Queue 1.
Name ID DSCP 802.1p MPLS EXP Queue
FC1 1 10 7 7 0
FC2 2 38 0 0 1
In Table 23 on page 345, the QoS configuration object DSCP values of 10, 18, and 26 are mapped to a
forwarding class with ID 1, which is forwarding class FC1. All other IP packets are mapped to the forwarding
class with ID 2, which is FC2. All traffic with an 802.1p value of 6 or 7 is mapped to forwarding class
FC1, and the remaining traffic is mapped to FC2.
DSCP FC ID 802.1p FC ID MPLS EXP FC ID
10 1 6 1 5 1
18 1 7 1 7 1
26 1 * 2 * 1
* 2
IN THIS SECTION
The following sections describe how QoS configuration object marking is handled in various circumstances.
• If a VM sends a Layer 2 non-IP packet with an 802.1p value, the 802.1p value is used to look into the
qos-config table, and the corresponding forwarding class DSCP, 802.1p, and MPLS EXP value is written
to the tunnel header.
• If a VM sends an IP packet to a VM in same compute node, the packet headers are not changed while
forwarding. The original packet remains unchanged.
The levels at which QoS can be configured, in order of priority, are:
1. in policy
2. on virtual-network
3. on virtual-machine-interface
Queuing
IN THIS SECTION
Contrail Networking supports QoS. These sections provide an overview of the queuing features available
in Contrail Networking.
When this mapping is disabled, the kernel will send packets to the specific hardware queue.
To verify:
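The verification commands are not included in the extracted text. As a hedged example, assuming the mapping being referred to is the kernel's transmit packet steering (XPS) map on the physical interface (eth0 is a placeholder), you can confirm that it is cleared by checking that every transmit queue's xps_cpus mask reads zero:
for q in /sys/class/net/eth0/queues/tx-*/xps_cpus; do
  echo -n "$q: "; cat "$q"
done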
The priority group ID and the corresponding scheduling algorithm and bandwidth to be used by the priority
group can be configured.
• strict
• rr (round-robin)
When round-robin scheduling is used, the percentage of total hardware queue bandwidth that can be
used by the priority group is specified in the bandwidth parameter.
The following configuration and provisioning applies only to compute nodes that have Niantic NICs and run the kernel-based vRouter.
qos_niantic = {
'compute1': [
{ 'priority_id': '1', 'scheduling': 'strict', 'bandwidth': '0'},
],
'compute2': [
{ 'priority_id': '1', 'scheduling': 'strict', 'bandwidth': '0'},
{ 'priority_id': '1', 'scheduling': 'rr', 'bandwidth': '30'}
]
}
SEE ALSO
IN THIS SECTION
Overview | 348
Limitations | 350
Overview
You can use the OpenStack Nova command-line interface (CLI) to specify a quality of service (QoS) setting
for a virtual machine’s network interface, by setting the quota of a Nova flavor. Any virtual machine created
with that Nova flavor will inherit all of the specified QoS settings. Additionally, if the virtual machine that
was created with the QoS settings has multiple interfaces in different virtual networks, the same QoS
settings will be applied to all of the network interfaces associated with the virtual machine. The QoS
settings can be specified in unidirectional or bidirectional mode.
The quota driver in Neutron converts QoS parameters into libvirt network settings of the virtual machine.
The QoS parameters available in the quota driver only cover rate limiting the network interface. There are
no specifications available for policy-based QoS at this time.
Example
where:
• vif_inbound_average lets you specify the average rate of inbound (receive) traffic, in kilobytes/sec.
• vif_outbound_average lets you specify the average rate of outbound (transmit) traffic, in kilobytes/sec.
• Optional: vif_inbound_peak and vif_outbound_peak specify the maximum rate of inbound and outbound
traffic, respectively, in kilobytes/sec.
• Optional: vif_inbound_burst and vif_outbound_burst specify the amount of kilobytes that can be received or transmitted, respectively, in a single burst at the peak rate.
Details for various QoS parameters for libvirt can be found at http://libvirt.org/formatnetwork.html.
The following example shows an inbound average of 800 kilobytes/sec, a peak of 1000 kilobytes/sec, and
a burst amount of 30 kilobytes.
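The flavor command for this example is not shown in the extracted text. A hedged sketch, assuming an existing flavor named m1.small and the quota keys described above, would be:
nova flavor-key m1.small set quota:vif_inbound_average=800 \
  quota:vif_inbound_peak=1000 \
  quota:vif_inbound_burst=30 \
  quota:vif_outbound_average=800 \
  quota:vif_outbound_peak=1000 \
  quota:vif_outbound_burst=30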
After the Nova flavor is configured for QoS, a virtual machine instance can be created, using either Horizon
or CLI. The instance will have network settings corresponding to the nova flavor-key, as in the following:
<interface type="ethernet">
<mac address="02:a3:a0:87:7f:61"/>
<model type="virtio"/>
<script path=""/>
<target dev="tapa3a0877f-61"/>
<bandwidth>
<inbound average="800" peak="1000" burst="30"/>
<outbound average="800" peak="1000" burst="30"/>
</bandwidth>
</interface>
Limitations
• The stock libvirt does not support rate limiting of ethernet interface types. Consequently, settings like those in the example for the guest interface will not result in any tc qdisc settings for the corresponding tap device in the host (this can be checked as shown below).
• The nova flavor-key rxtx_factor takes a float as an input and acts as a scaling factor for receive (inbound)
and transmit (outbound) throughputs. This key is only available to Neutron extensions (private extensions).
The Contrail Neutron plugin doesn’t implement this private extension. Consequently, setting the nova
flavor-key rxtx_factor will not have any effect on the QoS setting of the network interface(s) of any
virtual machine created with that nova flavor.
• The outbound rate limits of a virtual machine interface are not strictly achieved. The outbound throughput
of a virtual machine network interface is always less than the average outbound limit specified in the
virtual machine's libvirt configuration file. The same behavior is also seen when using a Linux bridge.
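To check on the host whether any rate limiting was actually programmed for a guest's tap device, you can inspect its traffic-control configuration; the device name below is taken from the libvirt snippet above, so substitute the actual tap device on your compute node:
tc qdisc show dev tapa3a0877f-61
tc class show dev tapa3a0877f-61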
Load Balancers
IN THIS SECTION
IN THIS SECTION
As of Contrail Release 3.0, Load Balancing as a Service (LBaaS) features are available. This topic includes:
• HAProxy
• A10 Networks
• F5 Networks
• Avi Networks
• Configuration objects can be created in multiple ways: from Neutron, from virtual controller APIs, or
from the Contrail UI.
• The load balancer driver can make inline calls, such as REST or SUDS, to configure the external load
balancer device.
• The load balancer driver can use Contrail service monitor infrastructure, such as database, logging, and
API server.
NOTE: The Neutron LBaaS plugin is not supported in OpenStack Train release.
[service_providers]
service_provider = LOADBALANCER:Opencontrail:neutron_plugin_contrail.plugins.opencontrail.loadbalancer.driver.OpencontrailLoadbalancerDriver:default
In Contrail Release 3.0 and greater, the Neutron LBaaS provider is configured by using the object
service-appliance-set. All of the configuration parameters of the LBaaS driver are populated to the
service-appliance-set object and passed to the driver.
During initialization, the service monitor creates a default service appliance set with a default LBaaS
provider, which uses an HAProxy-based load balancer. The service appliance set consists of individual
service appliances for load balancing the traffic. The service appliances can be physical devices or virtual
machines.
{
"service-appliance-set": {
"fq_name": [
"default-global-system-config",
"f5"
],
"service_appliance_driver":
"svc_monitor.services.loadbalancer.drivers.f5.f5_driver.OpencontrailF5LoadbalancerDriver",
"parent_type": "global-system-config",
"service_appliance_set_properties": {
"key_value_pair": [
{
"key": "sync_mode",
"value": "replication"
},
{
"key": "global_routed_mode",
"value": "True"
}
]
},
"name": "f5"
}
}
{
"service-appliance": {
"fq_name": [
"default-global-system-config",
"f5",
"bigip"
],
"parent_type": "service-appliance-set",
"service_appliance_ip_address": "<ip address>",
"service_appliance_user_credentials": {
"username": "admin",
"password": "<password>"
},
"name": "bigip"
}
}
The dependency tracker is informed to notify the pool object whenever the VIP, member, or health monitor
object is modified.
Whenever there is an update to the pool object, either directly due to a pool update or due to a dependency
update, the load balancer agent in the service monitor is notified.
• Providing the abstract driver class for the load balancer driver.
IN THIS SECTION
This section details use of the F5 load balancer driver with Contrail.
Contrail Release 3.0 implements an LBaaS driver that supports a physical or virtual F5 Networks load
balancer, using the abstract load balancer driver class, ContrailLoadBalancerAbstractDriver.
This driver is invoked from the load balancer agent of the contrail-svc-monitor. The driver makes a BIG-IP
interface call to configure the F5 Networks device. All of the configuration parameters used to tune the
driver are configured in the service-appliance-set object and passed to the driver by the load balancer
agent while loading the driver.
The F5 load balancer driver uses the BIG-IP interface version V1.0.6, which is a Python package extracted
from the load balancer plugin provided by F5 Networks. The driver uses either a SOAP API or a REST API.
This section describes the features and requirements of the F5 load balancer driver configured in global
routed mode.
• All virtual IP addresses (VIPs) are assumed to be routable from clients and all members are routable from
the F5 device.
• All access to and from the F5 device is assumed to be globally routed, with no segregation between
tenant services on the F5 device. Consequently, do NOT configure overlapping addresses across tenants
and networks.
The following are requirements to support global routed mode of an F5 device used with LBaaS:
• The entire configuration of the F5 device for Layer 2 and Layer 3 is preprovisioned.
• All tenant networks and all IP fabrics are in the same namespace as the corporate network.
• All VIPs are in the same namespace as the tenant and corporate networks.
The information in this section is based on a model that includes the following network topology:
Corporate Network --- DC Gateway (MX device) --- IP Fabric --- Compute nodes
The Corporate Network, the IP Fabric and all tenant networks use IP addresses from a single namespace,
there is no overlap of the addresses in the networks. The F5 devices can be attached to the Corporate
Network or to the IP Fabric, and are configured to use the global routed mode.
The role of the MX Series device is to route post-proxy traffic, coming from the F5 device in the underlay,
to the pool members in the overlay. In the reverse direction, the MX device takes traffic coming from the
pool members in the overlay and routes it back to the F5 device in the underlay.
The MX routes the traffic from inet.0 to public VRF and sends traffic to the compute node where the pool
member is instantiated.
The F5 device is responsible for attracting traffic destined to all the VIPs, by advertising a subnet route
that covers all VIPs using IGP.
The F5 device load balances among different pool members and sends traffic to the chosen member.
Figure 42 on page 357 shows the traffic flow in global routed mode.
A similar result can also be achieved on the switch to which the F5 is attached, by publishing the VIP subnet
in IGP and using a static route to point the VIP traffic to the F5 device.
The MX should attract the reverse traffic from the pool members going back to the F5.
• In the global routed mode, the F5 uses AutoMap SNAT for all VIP traffic.
• The operator must add a static route for the super-net into inet.0 with a next-hop of public.inet.0.
• The operator must create a public VRF and get its default route imported into the VRF. This is to attract
the return traffic from pool members to the F5 device (VIP destination).
• As new member virtual networks are connected to the public virtual network by policy, corresponding
targets are imported by the public VRF on MX. The Contrail Device Manager generates the configuration
of import, export targets for public VRF on the MX device.
• The operator must ensure that security group rules for the member virtual network ports allow traffic
coming from the F5 device.
1. To configure a service appliance set, use the script in /opt/contrail/utils to create a load balancer
provider. With the script, you specify the driver and name of the selected provider. Additional
configuration can be performed using the key-value pair property configuration.
4. Add members to the load balancer pool. Both bare metal webservers and overlay webservers are allowed as pool members. The F5 device can load balance the traffic among all pool members.
6. Create the health monitor and associate it with the load balancer pool. (Example commands for these steps are sketched below.)
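The commands for the intermediate steps are not reproduced here. A hedged sketch of the corresponding Neutron LBaaS v1 CLI calls, assuming a provider named f5 (matching the service-appliance-set above) and placeholder names, subnets, and addresses, might look like:
neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <member-subnet-id> --provider f5
neutron lb-member-create --address <member-ip> --protocol-port 80 web-pool
neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 --subnet-id <vip-subnet-id> web-pool
neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate <healthmonitor-id> web-pool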
NOTE: In a Contrail environment, you cannot have a mix of Contrail LBaaS and Neutron LBaaS.
You must select a mode that is compatible with the current environment.
$ contrail-version neutron-plugin-contrail
Package Version
------------------------- ------------
neutron-plugin-contrail 3.0.2.0-51
$ vi /etc/neutron/neutron.conf
# if using mysql
connection = mysql+pymysql://neutron:[email protected]/neutron
# to upgrade to head
$ neutron-db-manage upgrade head
# to upgrade to a specific version
$ neutron-db-manage --config-file /etc/neutron/neutron.conf upgrade liberty
$ mysql -u root -p
Enter password: fabe17d9dd5ae798f7ea
5. To install the Avi LBaaS plugin, continue with steps from the readme file that downloads with the Avi
LBaaS software. You can perform either a local installation or a manual installation. The following are
sample installation steps.
# LBaaS v1 driver
$ ./install.sh --aname avi_adc --aip
<controller_ip|controller_vip>
--auser
--apass
# LBaaS v2 driver
$ ./install.sh --aname avi_adc_v2 --aip
<controller_ip|controller_vip>
--auser
--apass
--v2
# LBaaS v1 driver
$ vi /etc/neutron/neutron.conf
#service_plugins =
neutron_plugin_contrail.plugins.opencontrail.loadbalancer.plugin.LoadBalancerPlugin
service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin
[service_providers]
service_provider =
LOADBALANCER:Avi_ADC:neutron_lbaas.services.loadbalancer.drivers.avi.avi_driver.AviLbaaSDriver
[avi_adc]
address=10.1.11.4
user=admin
password=avi123
cloud=jcos
# LBaaS v2 driver
$ vi /etc/neutron/neutron.conf
#service_plugins =
neutron_plugin_contrail.plugins.opencontrail.loadbalancer.plugin.LoadBalancerPlugin
service_plugins =
neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
[service_providers]
service_provider =
LOADBALANCERV2:avi_adc_v2:neutron_lbaas.drivers.avi.driver.AviDriver
[avi_adc_v2]
controller_ip=10.1.11.3
username=admin
password=avi123
$ contrail-version neutron-plugin-contrail
Package Version
------------------------- ------------
neutron-plugin-contrail 3.0.2.0-51
# LBaaS v2 driver
$ ./install.sh --aname ocavi_adc_v2 --aip
<controller_ip|controller_vip>
--auser
--apass
NOTE: If neutron_lbaas doesn’t exist on the api-server node, adjust the driver path to the
correct path location for neutron_lbaas.
2. To configure the Avi controller during cloud configuration, select the “Integration with Contrail” checkbox
and provide the endpoint URL of the Contrail VNC api-server. Use the Keystone credentials from the
OpenStack configuration to authenticate with the api-server service.
| dhcp_enabled | True |
| mtu | 1500 bytes |
| prefer_static_routes | False |
| enable_vip_static_routes | False |
| license_type | LIC_CORES |
| tenant_ref | admin |
+---------------------------+--------------------------------------------+
SEE ALSO
IN THIS SECTION
IN THIS SECTION
Starting with Contrail Networking Release 3.1, Contrail provides support for the OpenStack Load Balancer
as a Service (LBaaS) Version 2.0 APIs in the Liberty release of OpenStack.
Platform Support
Table 24 on page 365 shows which Contrail with OpenStack release combinations support which version
of OpenStack LBaaS APIs.
For LBaaS v2.0, the Contrail controller aggregates the configuration by provider. For example, if haproxy
is the provider, the controller generates the configuration for haproxy and eliminates the need to send all
of the load-balancer resources to the vrouter-agent; only the generated configuration is sent, as part of
the service instance.
For more information about OpenStack v2.0 APIs, refer to the section LBaaS 2.0 (STABLE) (lbaas,
loadbalancers, listeners, health_monitors, pools, members), at
http://developer.openstack.org/api-ref-networking-v2-ext.html.
LBaaS v2.0 also allows users to listen to multiple ports for the same virtual IP, by decoupling the virtual
IP address from the port.
• Pools
• Members
• Health monitors
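For example, a single VIP can serve two ports by attaching two listeners to the same load balancer. The following Neutron LBaaS v2 commands are an illustrative sketch (the names and subnet are not taken from this document):

neutron lbaas-loadbalancer-create --name lb1 vip-subnet
neutron lbaas-listener-create --name listener-http --loadbalancer lb1 --protocol HTTP --protocol-port 80
neutron lbaas-listener-create --name listener-alt --loadbalancer lb1 --protocol HTTP --protocol-port 8080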
NOTE: This procedure is written using the Neutron LBaaS plugin v1.0. Starting with the OpenStack
Train release, neutron-lbaas is replaced by Octavia. Some commands are different due to the
plugin change. See the Red Hat Octavia documentation for the equivalent procedure:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/15/html/networking_guide/sec-octavia
IN THIS SECTION
With Octavia as the LBaaS provider, Contrail Networking only maintains network connectivity and is not involved in any load balancing functions.
For each load balancer created in OpenStack, Octavia launches a VM known as the amphora VM. The VM starts HAProxy when a listener is created for the load balancer in OpenStack. Whenever the load balancer is updated in OpenStack, the amphora VM updates the running HAProxy configuration. The amphora VM is deleted when the load balancer is deleted.
Contrail Networking provides connectivity to the amphora VM interfaces. The amphora VM has two interfaces: one for management and the other for data. The management interface is used by the Octavia services for management communication. Because the Octavia services run in the underlay network and the amphora VM runs in the overlay network, an SDN gateway is needed to reach the overlay network. The data interface is used for load balancing the traffic.
If the load balancer service is exposed to the public, you must create the load balancer VIP in the public subnet. The load balancer members can be in the public or private subnet.
You must create a network policy between the public network and the private network if the load balancer members are in the private network.
--protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private --address 10.10.10.51
--protocol-port 80 pool1
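The member commands above are part of a larger workflow. The following openstack CLI sequence is a hedged sketch of a typical Octavia configuration; the load balancer, listener, pool, and member names, subnets, and addresses are illustrative:

openstack loadbalancer create --name lb1 --vip-subnet-id public
openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer member create --subnet-id private --address 10.10.10.50 --protocol-port 80 pool1
openstack loadbalancer healthmonitor create --delay 5 --max-retries 3 --timeout 3 --type HTTP pool1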
SEE ALSO
IN THIS SECTION
The LBaaS load balancer enables the creation of a pool of virtual machines serving applications, all
front-ended by a virtual-ip. The LBaaS implementation has the following features:
• Load balancing of traffic from clients to a pool of backend servers. The load balancer proxies all
connections to its virtual IP.
• Provides health monitoring capabilities for applications, including HTTP, TCP, and ping.
• Enables floating IP association to virtual-ip for public access to the backend pool.
In Figure 43 on page 369, the load balancer is launched with the virtual IP address 198.51.100.2. The
backend pool of virtual machine applications (App Pool) is on the subnet 203.0.113.0/24. Each of the
application virtual machines gets an IP address (virtual-ip) from the pool subnet. When a client connects
to the virtual-ip for accessing the application, the load balancer proxies the TCP connection on its virtual-ip,
then creates a new TCP connection to one of the virtual machines in the pool.
Additionally, the load balancer monitors the health of each pool member using the following methods:
• Monitors HTTP by creating a TCP connection and issuing an HTTP request at intervals.
A Note on Installation
To use the LBaaS feature, HAProxy version 1.5 or greater and iproute2 version 3.10.0 or greater must
both be installed on the Contrail compute nodes.
If you are using fabric commands for installation, the haproxy and iproute2 packages are installed
automatically with LBaaS if you set the following:
env.enable_lbaas=True
Use the following to check the version of the iproute2 package on your system and verify the installation:
root@nodeh5:/var/log# ip -V
ip utility, iproute2-ss130716
root@nodeh5:/var/log#
You can also view the server YAML file to verify that env.enable_lbaas=True is set.
Limitations
• Multiple VIPs cannot be associated with the same pool. If a pool needs to be reused, create another pool
with the same members and bind it to the second VIP.
• Members cannot be moved from one pool to another. If needed, first delete the members from one
pool, then add to a different pool.
• In case of active-standby failover, namespaces might not get cleaned up when the agent restarts.
• The floating-ip association needs to select the VIP port and not the service ports.
NOTE: The following procedures are written using the Neutron LBaaS plugin v1.0. Starting with
the OpenStack Train release, neutron-lbaas is replaced by Octavia. Some commands are different
due to the plugin change. See the Red Hat Octavia documentation for the equivalent procedure:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/15/html/
networking_guide/sec-octavia
Use the following commands to create a healthmonitor, associate a healthmonitor to a pool, disassociate
a healthmonitor, and delete a healthmonitor.
1. Create a healthmonitor.
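For example, with the Neutron LBaaS v1 CLI (the values and the pool name mypool are illustrative):

neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate <healthmonitor-id> mypool
neutron lb-healthmonitor-disassociate <healthmonitor-id> mypool
neutron lb-healthmonitor-delete <healthmonitor-id>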
Use the following steps to configure an SSL VIP with an HTTP backend pool.
haproxy_ssl_cert_path=<certificate-path>
3. Restart contrail-vrouter-agent.
neutron lb-vip-create --name myvip --protocol-port 443 --protocol HTTP --subnet-id vipsubnet mypool
• Each load balancer consists of one or more listeners, pools, pool members, and health monitors.
• Listener: Port that listens for traffic from a particular load balancer. Multiple listeners can be associated
with a single load balancer.
• Pool: Group of hosts that serves traffic from the load balancer.
• Pool Member: Server specified by the IP address and port that it uses to serve the traffic it receives
from the load balancer.
• Health Monitor: Health monitors are associated with pools and help divert traffic away from pool
members that are temporarily offline.
• Each load balancer can have multiple pools with one or more listeners for each pool.
• The native load balancer has a single pool that is shared among multiple listeners.
Use the following steps to create a load balancer with the load balancer wizard.
• Subnet: Drop-down menu displays all subnets from list of all available networks. The subnet is the
network on which to allocate the IP address of the load balancer.
• Admin State: Check the checkbox for UP or uncheck the checkbox for DOWN. Default is UP.
• Admin State: Check the checkbox for UP or uncheck the checkbox for DOWN. Default is UP.
• Method: Load balancing method used to distribute incoming requests. Dropdown menu includes
LEAST_CONNECTIONS, ROUND_ROBIN, and SOURCE_IP.
• Protocol: The protocol used by the pool and its members for the load balancer traffic. Dropdown
menu includes TCP and HTTP.
• Admin State: Check the checkbox for UP or uncheck the checkbox for DOWN. Default is UP.
5. Click Next. The list of available pool member instances is displayed. To add an external member, click
the Add icon. Each pool member must have a unique IP address and port combination.
• IP Address: The IP address of the member that is used to receive traffic from the load balancer.
• Port: The port to which the member listens to receive traffic from the load balancer.
• Admin State: Check the checkbox for UP or uncheck the checkbox for DOWN. Default is UP.
• HTTP Method: Required if monitor type is HTTP. Dropdown menu includes GET and HEAD. The
default value is GET.
• Expected HTTP Status Code: Required if monitor type is HTTP. The default value is 200.
• URL Path: Required if monitor type is HTTP. The default value is “/.”
• Health check interval (sec): The time interval, in seconds, between each health check. The default
value is 5.
• Retry count before markdown: The maximum number of failed health checks before the state of a
member is changed to OFFLINE. The default value is 3.
• Timeout (sec): The maximum number of seconds allowed for any given health check to complete.
The timeout value should always be less than the health check interval. The default value is 5.
• Admin State: Check the checkbox for UP or uncheck the checkbox for DOWN. Default is UP.
Click Finish.
1. Go to Services > Load Balancers. A summary screen of the Load Balancers is displayed.
2. To view a summary of a load balancer, click the drop-down arrow next to a load balancer listed in the
summary screen. The Load Balancer Info window is displayed.
SEE ALSO
IN THIS SECTION
IN THIS SECTION
Contrail 3.2 adds support for multiqueue for the DPDK-based vRouter.
Contrail 3.1 supports multiqueue virtio interfaces for the Ubuntu kernel-based vRouter only.
• The maximum number of queues in the VM interface is set to the same value as the number of vCPUs
in the guest.
• The VM image metadata property is set to enable multiple queues inside the VM.
source /etc/contrail/openstackrc
nova image-meta <image_name> set hw_vif_multiqueue_enabled="true"
After the VM is spawned, use the following command on the virtio interface in the guest to enable multiple
queues inside the VM:
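The exact command is not reproduced here; a typical invocation, assuming the guest interface is eth0 and the guest has four vCPUs, is:

ethtool -L eth0 combined 4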
Packets will now be forwarded on all queues in the VM to and from the vRouter running on the host.
NOTE: Multiple queues in the VM are only supported with the kernel mode vRouter in Contrail
3.1.
Contrail 3.2 adds support for multiple queues with the DPDK-based vrouter, using OpenStack
Mitaka. The DPDK vrouter has the same setup requirements as the kernel mode vrouter. However,
in the ethtool -L setup command, the number of queues cannot be higher than the number of
CPU cores assigned to vrouter in the testbed file.
IN THIS SECTION
IN THIS SECTION
Overview | 384
Ceilometer is an OpenStack feature that provides an infrastructure for collecting SDN metrics from
OpenStack projects. The metrics can be used by various rating engines to transform events into billable
items. The Ceilometer collection process is sometimes referred to as “metering”. The Ceilometer service
provides data that can be used by platforms that provide metering, tracking, billing, and similar services.
This topic describes how to configure the Ceilometer service for Contrail.
Overview
Contrail Release 2.20 and later supports the OpenStack Ceilometer service, on the OpenStack Juno release
on Ubuntu 14.04.1 LTS.
NOTE: Ceilometer services are only installed on the first OpenStack controller node and do not
support high availability in Contrail Release 2.20.
Ceilometer Details
Ceilometer is used to reliably collect measurements of the utilization of the physical and virtual resources
comprising deployed clouds, persist these data for subsequent retrieval and analysis, and trigger actions
when defined criteria are met.
Polling agent—Agent designed to poll OpenStack services and build meters. The polling agents are also
run on the compute nodes in addition to the OpenStack controller.
Notification agent—Agent designed to listen to notifications on message queue and convert them to events
and samples.
Collector —Gathers and records event and metering data created by the notification and polling agents.
API server—Provides a REST API to query and view data recorded by the collector service.
Database—Stores the metering data, notifications, and alarms. The supported databases are MongoDB,
SQL-based databases compatible with SQLAlchemy, and HBase. The recommended database is
MongoDB, which has been thoroughly tested with Contrail and deployed on a production scale.
Notification agent—ceilometer-agent-notification
Collector —ceilometer-collector
API Server—ceilometer-api
Notification agent—openstack-ceilometer-notification
Collector —openstack-ceilometer-collector
API server—openstack-ceilometer-api
To verify the Ceilometer installation, users can verify that the Ceilometer services are up and running by
using the openstack-status command.
For example, using the openstack-status command on an all-in-one node running Ubuntu 14.04.1 LTS
with release 2.2 of Contrail installed shows the following Ceilometer services as active:
== Ceilometer services ==
ceilometer-api: active
ceilometer-agent-central: active
ceilometer-agent-compute: active
ceilometer-collector: active
ceilometer-alarm-notifier: active
ceilometer-alarm-evaluator: active
ceilometer-agent-notification:active
You can issue the ceilometer meter-list command on the OpenStack controller node to verify that meters
are being collected, stored, and reported via the REST API. The following is an example of the output:
NOTE: The ceilometer meter-list command lists the meters only if images have been created,
instances have been launched, or subnets, ports, or floating IP addresses have been created;
otherwise, the meter list is empty. You also need to source the /etc/contrail/openstackrc file
when executing the command.
ip.floating.receive.bytes
ip.floating.receive.packets
ip.floating.transmit.bytes
ip.floating.transmit.packets
The Contrail Ceilometer plugin configuration is done in the /etc/ceilometer/pipeline.yaml file when Contrail
is installed by the Fabric provisioning scripts.
The following example shows the configuration that is added to the file:
sources:
    - name: contrail_source
      interval: 600
      meters:
        - "ip.floating.receive.packets"
        - "ip.floating.transmit.packets"
        - "ip.floating.receive.bytes"
        - "ip.floating.transmit.bytes"
      resources:
        - contrail://<IP-address-of-Contrail-Analytics-Node>:8081
      sinks:
        - contrail_sink
sinks:
    - name: contrail_sink
      publishers:
        - rpc://
      transformers:
The following example shows the Ceilometer meter list output for the floating IP meters:
+------------------------------+------------+--------+--------------------------------------+---------+------------+
| Name                         | Type       | Unit   | Resource ID                          | User ID | Project ID |
+------------------------------+------------+--------+--------------------------------------+---------+------------+
| ip.floating.receive.bytes    | cumulative | B      | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None    | None       |
| ip.floating.receive.bytes    | cumulative | B      | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None    | None       |
| ip.floating.receive.packets  | cumulative | packet | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None    | None       |
| ip.floating.receive.packets  | cumulative | packet | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None    | None       |
| ip.floating.transmit.bytes   | cumulative | B      | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None    | None       |
| ip.floating.transmit.bytes   | cumulative | B      | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None    | None       |
| ip.floating.transmit.packets | cumulative | packet | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | None    | None       |
| ip.floating.transmit.packets | cumulative | packet | 9cf76844-8f09-4518-a09e-e2b8832bf894 | None    | None       |
+------------------------------+------------+--------+--------------------------------------+---------+------------+
In the meter-list output, the Resource ID refers to the floating IP.
The following example shows the output from the ceilometer resource-show -r
451c93eb-e728-4ba1-8665-6e7c7a8b49e2 command:
+-------------+----------------------------------------------------------------+
| Property    | Value                                                          |
+-------------+----------------------------------------------------------------+
| metadata    | {u'router_id': u'None', u'status': u'ACTIVE', u'tenant_id':    |
|             | u'ceed483222f9453ab1d7bcdd353971bc', u'floating_network_id':   |
|             | u'6d0cca50-4be4-4b49-856a-6848133eb970', u'fixed_ip_address':  |
|             | u'2.2.2.4', u'floating_ip_address': u'3.3.3.4', u'port_id':    |
|             | u'c6ce2abf-ad98-4e56-ae65-ab7c62a67355', u'id':                |
|             | u'451c93eb-e728-4ba1-8665-6e7c7a8b49e2', u'device_id':         |
|             | u'00953f62-df11-4b05-97ca-30c3f6735ffd'}                       |
| project_id  | None                                                           |
| resource_id | 451c93eb-e728-4ba1-8665-6e7c7a8b49e2                           |
| source      | openstack                                                      |
| user_id     | None                                                           |
+-------------+----------------------------------------------------------------+
The following example shows the output from the ceilometer statistics command and the ceilometer
sample-list command for the ip.floating.receive.packets meter:
+--------+----------------------------+----------------------------+-------+-----+-------+--------+----------------+------------+----------------------------+----------------------------+
| Period | Period Start               | Period End                 | Count | Min | Max   | Sum    | Avg            | Duration   | Duration Start             | Duration End               |
+--------+----------------------------+----------------------------+-------+-----+-------+--------+----------------+------------+----------------------------+----------------------------+
| 0      | 2015-02-13T19:50:40.795000 | 2015-02-13T19:50:40.795000 | 2892  | 0.0 | 325.0 | 1066.0 | 0.368603042877 | 439069.674 | 2015-02-13T19:50:40.795000 | 2015-02-18T21:48:30.469000 |
+--------+----------------------------+----------------------------+-------+-----+-------+--------+----------------+------------+----------------------------+----------------------------+
+--------------------------------------+-----------------------------+------------+--------+--------+----------------------------+
| Resource ID                          | Name                        | Type       | Volume | Unit   | Timestamp                  |
+--------------------------------------+-----------------------------+------------+--------+--------+----------------------------+
| 9cf76844-8f09-4518-a09e-e2b8832bf894 | ip.floating.receive.packets | cumulative | 208.0  | packet | 2015-02-18T21:48:30.469000 |
| 451c93eb-e728-4ba1-8665-6e7c7a8b49e2 | ip.floating.receive.packets | cumulative | 325.0  | packet | 2015-02-18T21:48:28.354000 |
| 9cf76844-8f09-4518-a09e-e2b8832bf894 | ip.floating.receive.packets | cumulative | 0.0    | packet | 2015-02-18T21:38:30.350000 |
+--------------------------------------+-----------------------------+------------+--------+--------+----------------------------+
1. If you install your own OpenStack distribution, you can install the Contrail Ceilometer plugin on the
OpenStack controller node.
2. When using Contrail Cloud services, the Ceilometer controller services are installed and provisioned
as part of the OpenStack controller node and the compute agent service is installed as part of the
compute node when enable_ceilometer is set as True in the cluster config or testbed files.
IN THIS SECTION
IN THIS SECTION
Incompatibilities | 394
OpenStack's networking solution, Neutron, has elements that correspond to the Contrail elements Network
(VirtualNetwork), Port (VirtualMachineInterface), Subnet (IpamSubnets), and Security-Group. The Neutron
plugin translates the elements from one representation to the other.
Data Structure
Although the actual data in Neutron and Contrail is similar, the listings of the elements differ
significantly. In the Contrail API, the list of networking elements is a summary containing only the UUID, FQ
name, and an href; in Neutron, however, all details of each resource are included in the list.
The Neutron plugin has an inefficient list retrieval operation, especially at scale, because it:
As a result, the API server spends most of the time in this type of GET operation just waiting for results
from the Cassandra database.
• An optional detail query parameter is added in the GET of collections so that the API server returns
details of all the resources in the list, instead of just a summary. This is accompanied by changes in the
Contrail API library so that a caller gets returned a list of the objects.
• The existing Contrail list API takes in an optional parent_id query parameter to return information about
the resource anchored by the parent.
• The Contrail API server reads objects from the Cassandra obj_uuid_cf column family, where object
contents are stored, using a multiget instead of an xget/get. This reduces the number of round-trips
to and from the Cassandra database (see the example request following this list).
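As a rough illustration of the detail and parent_id parameters (assuming the Contrail config API server listens on its default port 8082 and <project-uuid> is the parent project; both values are environment specific), a detailed, parent-scoped list can be fetched in a single request:

curl 'http://<api-server-ip>:8082/virtual-networks?parent_id=<project-uuid>&detail=true'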
• Set the router:external attribute, when the plugin supports an external_net extension.
When a network has the shared attribute set, users in other tenants or projects, including non-admin users,
can access that network, using:
Users can also launch a virtual machine directly on that network, using:
When a network has the router:external attribute set, users in other tenants or projects, including non-admin
users, can use that network for allocating floating IPs, using:
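The original command examples are not reproduced here; typical equivalents, with illustrative names and placeholders, are:

neutron port-create <shared-net-name>
nova boot --image <image> --flavor <flavor> --nic net-id=<shared-net-uuid> <vm-name>
neutron floatingip-create <external-net-name>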
NOTE: The VN hosting the FIP pool should be marked shared and external.
Action                                                      Command
Create a network that has the shared attribute.             neutron net-create <net-name> --shared
Set the shared attribute on an existing network.            neutron net-update <net-name> --shared
Create a network that has the router:external attribute.    neutron net-create <net-name> --router:external
Set the router:external attribute on an existing network.   neutron net-update <net-name> --router:external
Contrail provides the following features to increase support for OpenStack Neutron:
For more information about using OpenStack Networking API v2.0 (Neutron), refer to:
http://docs.openstack.org/api/openstack-network/2.0/content/ and the OpenStack Neutron Wiki at:
http://wiki.openstack.org/wiki/Neutron.
• Network
• Subnet
• Port
• Security group
• Per-tenant quota
• Network IPAM
• Network policy
• Floating IP pools
The plugin does not implement native bulk, pagination, or sort operations and relies on emulation provided
by the Neutron common code.
DHCP Options
In Neutron commands, DHCP options can be configured using extra-dhcp-options in port-create.
Example
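The original example is not reproduced here; a typical invocation looks like the following, where the network name, option name, and value are illustrative (see the GitHub page below for the opt_name values Contrail accepts):

neutron port-create <network-name> --extra-dhcp-opt opt_name=tftp-server,opt_value=10.0.0.1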
The opt_name and opt_value pairs that can be used are maintained in GitHub:
https://github.com/Juniper/contrail-controller/wiki/Extra-DHCP-Options .
Incompatibilities
In the Contrail architecture, the following are known incompatibilities with the Neutron API.
• Filtering based on any arbitrary key in the resource is not supported. The only supported filtering is by
id, name, and tenant_id.
• To use a floating IP, it is not necessary to connect the public subnet and the private subnet to a Neutron
router. Marking a public network with router:external is sufficient for a floating IP to be created and
associated, and packet forwarding to it will work.
• The default values for quotas are sourced from /etc/contrail/contrail-api.conf and not from
/etc/neutron/neutron.conf.
CHAPTER 7
IN THIS CHAPTER
Installing Contrail with Kubernetes in Nested Mode by Using Juju Charms | 463
Installing OpenStack Octavia LBaaS with Juju Charms in Contrail Networking | 467
Using Netronome SmartNIC vRouter with Contrail Networking and Juju Charms | 476
IN THIS SECTION
You can deploy Contrail by using Juju Charms. Juju helps you deploy, configure, and efficiently manage
applications on private clouds and public clouds. Juju accesses the cloud with the help of a Juju controller.
A Charm is a module containing a collection of scripts and metadata and is used with Juju to deploy Contrail.
Starting in Contrail Networking Release 2011, Contrail Networking supports OpenStack Ussuri with Ubuntu
version 18.04 (Bionic Beaver) and Ubuntu version 20.04 (Focal Fossa).
• contrail-agent
• contrail-analytics
• contrail-analyticsdb
• contrail-controller
• contrail-keystone-auth
• contrail-openstack
1. Install Juju.
2. Configure Juju.
You can add a cloud to Juju, identify clouds supported by Juju, and also manage clouds already added
to Juju.
• Adding a cloud—Juju recognizes a wide range of cloud types. You can use any one of the following
methods to add a cloud to Juju:
juju add-cloud
Cloud Types
maas
manual
openstack
oracle
vsphere
NOTE: Juju 2.x is compatible with MAAS series 1.x and 2.x.
You use a YAML configuration file to add a cloud manually. For example, to add the cloud junmaas,
assuming that the name of the configuration file in the directory is maas-clouds.yaml, you run the
following command:
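A hedged example of the command, using the cloud name and file name assumed above:

juju add-cloud junmaas maas-clouds.yaml

The configuration file uses the following format: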
clouds:
<cloud_name>:
type: <type_of_cloud>
auth-types: [<authenticaton_types>]
regions:
<region-name>:
endpoint: <http://<ip-address>:<node>/MAAS>
Juju recognizes the cloud types given below. You use the juju clouds command to list cloud types
that are supported by Juju.
$ juju clouds
Cloud Regions Default Type Description
NOTE: A Juju controller manages and keeps track of applications in the Juju cloud
environment.
IN THIS SECTION
To deploy Contrail Charms in a bundle, use the juju deploy <bundle_yaml_file> command.
The following example shows you how to use bundle_yaml_file to deploy Contrail on Amazon Web
Services (AWS) Cloud.
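For instance, if the bundle below is saved as contrail-bundle.yaml (an illustrative file name), you deploy it with:

juju deploy ./contrail-bundle.yaml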
series: bionic
variables:
openstack-origin: &openstack-origin distro
#vhost-gateway: &vhost-gateway "192.x.40.254"
data-network: &data-network "192.x.40.0/24"
agilio-image-tag: &agilio-image-tag "latest-ubuntu-queens"
agilio-user: &agilio-user "<agilio-username>"
agilio-password: &agilio-password "<agilio-password>"
agilio-insecure: &agilio-insecure false
agilio-phy: &agilio-phy "nfp_p0"
docker-registry: &docker-registry "<registry-directory>"
#docker-user: &docker-user "<docker_username>"
#docker-password: &docker-password "<docker_password>"
image-tag: &image-tag "2008.121"
docker-registry-insecure: &docker-registry-insecure "true"
dockerhub-registry: &dockerhub-registry "https://index.docker.io/v1/"
machines:
"1":
constraints: tags=controller
series: bionic
"2":
constraints: tags=compute
series: bionic
"3":
constraints: tags=neutron
series: bionic
services:
ubuntu:
charm: cs:ubuntu
num_units: 1
to: [ "1" ]
ntp:
charm: cs:ntp
num_units: 0
options:
#source: ntp.ubuntu.com
source: 10.204.217.158
mysql:
charm: cs:percona-cluster
num_units: 1
options:
dataset-size: 15%
max-connections: 10000
root-password: <password>
sst-password: <password>
min-cluster-size: 1
to: [ "lxd:1" ]
rabbitmq-server:
num_units: 1
options:
min-cluster-size: 1
to: [ "lxd:1" ]
heat:
charm: cs:heat
num_units: 1
expose: true
options:
debug: true
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
keystone:
charm: cs:keystone
expose: true
num_units: 1
options:
admin-password: <password>
admin-role: admin
openstack-origin: *openstack-origin
preferred-api-version: 3
nova-cloud-controller:
charm: cs:nova-cloud-controller
num_units: 1
expose: true
options:
network-manager: Neutron
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
neutron-api:
charm: cs:neutron-api
expose: true
num_units: 1
series: bionic
options:
manage-neutron-plugin-legacy-mode: false
openstack-origin: *openstack-origin
to: [ "3" ]
glance:
charm: cs:glance
expose: true
num_units: 1
options:
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
openstack-dashboard:
charm: cs:openstack-dashboard
expose: true
num_units: 1
options:
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
nova-compute:
charm: cs:nova-compute
num_units: 0
expose: true
options:
openstack-origin: *openstack-origin
nova-compute-dpdk:
charm: cs:nova-compute
num_units: 0
expose: true
options:
openstack-origin: *openstack-origin
nova-compute-accel:
charm: cs:nova-compute
num_units: 2
expose: true
options:
openstack-origin: *openstack-origin
to: [ "2" ]
contrail-openstack:
charm: ./tf-charms/contrail-openstack
series: bionic
expose: true
num_units: 0
options:
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
contrail-agent:
charm: ./tf-charms/contrail-agent
num_units: 0
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
#vhost-gateway: *vhost-gateway
physical-interface: *agilio-phy
contrail-agent-dpdk:
charm: ./tf-charms/contrail-agent
num_units: 0
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
dpdk: true
dpdk-main-mempool-size: "65536"
dpdk-pmd-txd-size: "2048"
dpdk-pmd-rxd-size: "2048"
dpdk-driver: ""
dpdk-coremask: "1-4"
#vhost-gateway: *vhost-gateway
physical-interface: "nfp_p0"
contrail-analytics:
charm: ./tf-charms/contrail-analytics
num_units: 1
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
control-network: *control-network
docker-registry-insecure: *docker-registry-insecure
to: [ "1" ]
contrail-analyticsdb:
charm: ./tf-charms/contrail-analyticsdb
num_units: 1
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
cassandra-minimum-diskgb: "4"
cassandra-jvm-extra-opts: "-Xms8g -Xmx8g"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
control-network: *control-network
docker-registry-insecure: *docker-registry-insecure
to: [ "1" ]
contrail-controller:
charm: ./tf-charms/contrail-controller
series: bionic
expose: true
num_units: 1
options:
log-level: "SYS_DEBUG"
cassandra-minimum-diskgb: "4"
cassandra-jvm-extra-opts: "-Xms8g -Xmx8g"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
control-network: *control-network
data-network: *data-network
auth-mode: no-auth
to: [ "1" ]
contrail-keystone-auth:
charm: ./tf-charms/contrail-keystone-auth
series: bionic
expose: true
num_units: 1
to: [ "lxd:1" ]
agilio-vrouter5:
charm: ./charm-agilio-vrt-5-37
expose: true
options:
virtioforwarder-coremask: *virtioforwarder-coremask
agilio-registry: *agilio-registry
agilio-insecure: *agilio-insecure
agilio-image-tag: *agilio-image-tag
agilio-user: *agilio-user
agilio-password: *agilio-password
relations:
- [ "ubuntu", "ntp" ]
- [ "neutron-api", "ntp" ]
- [ "keystone", "mysql" ]
- [ "glance", "mysql" ]
- [ "glance", "keystone" ]
- [ "nova-cloud-controller:shared-db", "mysql:shared-db" ]
- [ "nova-cloud-controller:amqp", "rabbitmq-server:amqp" ]
- [ "nova-cloud-controller", "keystone" ]
- [ "nova-cloud-controller", "glance" ]
- [ "neutron-api", "mysql" ]
- [ "neutron-api", "rabbitmq-server" ]
- [ "neutron-api", "nova-cloud-controller" ]
- [ "neutron-api", "keystone" ]
- [ "nova-compute:amqp", "rabbitmq-server:amqp" ]
- [ "nova-compute", "glance" ]
- [ "nova-compute", "nova-cloud-controller" ]
- [ "nova-compute", "ntp" ]
- [ "openstack-dashboard:identity-service", "keystone" ]
- [ "contrail-keystone-auth", "keystone" ]
- [ "contrail-controller", "contrail-keystone-auth" ]
- [ "contrail-analytics", "contrail-analyticsdb" ]
- [ "contrail-controller", "contrail-analytics" ]
- [ "contrail-controller", "contrail-analyticsdb" ]
- [ "contrail-openstack", "nova-compute" ]
- [ "contrail-openstack", "neutron-api" ]
- [ "contrail-openstack", "contrail-controller" ]
- [ "contrail-agent:juju-info", "nova-compute:juju-info" ]
- [ "contrail-agent", "contrail-controller"]
- [ "contrail-agent-dpdk:juju-info", "nova-compute-dpdk:juju-info" ]
- [ "contrail-agent-dpdk", "contrail-controller"]
- [ "nova-compute-dpdk:amqp", "rabbitmq-server:amqp" ]
- [ "nova-compute-dpdk", "glance" ]
- [ "nova-compute-dpdk", "nova-cloud-controller" ]
- [ "nova-compute-dpdk", "ntp" ]
- [ "contrail-openstack", "nova-compute-dpdk" ]
- [ "contrail-agent:juju-info", "nova-compute-accel:juju-info" ]
- [ "nova-compute-accel:amqp", "rabbitmq-server:amqp" ]
- [ "nova-compute-accel", "glance" ]
- [ "nova-compute-accel", "nova-cloud-controller" ]
- [ "nova-compute-accel", "ntp" ]
- [ "contrail-openstack", "nova-compute-accel" ]
- [ "agilio-vrouter5:juju-info", "nova-compute-accel:juju-info" ]
You can create or modify the Contrail Charm deployment bundle YAML file to:
Each Contrail Charm has a specific set of options. The options you choose depend on the charms
you select. For more information on the options that are available, see “Options for Juju Charms” on
page 412.
You can check the status of the deployment by using the juju status command.
Based on your deployment requirements, you can enable the following configuration statements:
• contrail-agent
• contrail-analytics
• contrail-analyticsdb
• contrail-controller
• contrail-keystone-auth
• contrail-openstack
You can deploy OpenStack services by using any one of the following methods:
nova-compute:
  openstack-origin: cloud:xenial-ocata
  virt-type: qemu
  enable-resize: True
  enable-live-migration: True
  migration-auth-type: ssh
• By using CLI
NOTE: Use the --to <machine number> command to point to a machine or container where
you want the application to be deployed.
NOTE: You can deploy nova-compute to more than one compute machine.
6. Enable contrail-controller and contrail-analytics services to be available to external traffic if you do not
use HAProxy.
7. Apply SSL.
You can apply SSL if needed. To use SSL with Contrail services, deploy the easyrsa service and use the
add-relation command to create relations to the contrail-controller and contrail-agent services (see the
sketch following these steps).
8. (Optional) HA configuration.
If you use more than one controller, follow the HA solution given below:
HAProxy charm is deployed on machines with Contrail controllers. HAProxy charm must have
peering_mode set to active-active. If peering_mode is set to active-passive, HAProxy creates
additional listeners on the same ports as other Contrail services. This leads to port conflicts.
NOTE: If you enable HAProxy to be available to external traffic, do not follow step 6.
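The following commands are a minimal sketch of steps 7 and 8, assuming the application names easyrsa and contrail-haproxy used elsewhere in this chapter (adjust them to the names in your bundle):

juju deploy cs:~containers/easyrsa
juju add-relation easyrsa contrail-controller
juju add-relation easyrsa contrail-agent
juju config contrail-haproxy peering_mode=active-active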
Each Contrail Charm has a specific set of options. The options you choose depend on the charms you
select. The following tables list the various options you can choose:
vhost-gateway (default: auto)—Specify the gateway for vhost0. You can enter either an IP address or the
keyword auto to automatically set a gateway based on the existing vhost routes.
dpdk-hugepages (default: 70%)—Specify the percentage of huge pages reserved for the DPDK vRouter
and OpenStack instances.
kernel-hugepages-1g (not enabled by default)—Specify the number of 1G huge pages for use with vRouters
in kernel mode. You can enable huge pages to avoid compute node reboots during software upgrades.
This parameter must be specified at initial deployment; it cannot be modified in an active deployment. If
you need to migrate to huge page usage in an active deployment, use 2MB huge pages if suitable for your
environment.
NOTE: 2MB huge pages for kernel-mode vRouters are enabled by default.
kernel-hugepages-2m (default: 1024)—Specify the number of 2MB huge pages for use with vRouters in
kernel mode. Huge pages in Contrail Networking are used primarily to allocate flow and bridge table
memory within the vRouter. Huge pages for kernel-mode vRouters provide enough flow and bridge table
memory to avoid compute node reboots to complete future Contrail Networking software upgrades.
cloud-admin-role (default: admin)—Specify the role name in keystone for users who have admin-level
access.
global-read-only-role (no default)—Specify the role name in keystone for users who have read-only access.
use-external-rabbitmq (default: false)—To enable the Charm to use the internal RabbitMQ server, set
use-external-rabbitmq to false.
Contrail Networking Release 2011.L1 supports new charms for Ironic from OpenStack Train version 15.x.x.
Ironic is an OpenStack project that manages Bare Metal Servers (BMS) as if they are virtual machines
(VM)s. For more information about Contrail and BMS, see Bare Metal Server Management.
Contrail Networking Release 2011.L2 supports OpenStack Ussuri with Ironic deployed on Ubuntu version
20.04 (Focal Fossa).
The updated options are shown in the example bundle_yaml_file. Before deploying the updated yaml file,
you should have Ceph installed. If not, see Installing Ceph.
For information about deploying the bundle_yaml_file, see “Deploying Contrail Charms” on page 398.
Following is an example bundle_yaml_file with the additional options highlighted. ceph-radosgw and its
related options are required to support the new Ironic charms.
series: bionic
applications:
barbican:
charm: cs:barbican-31
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
options:
openstack-origin: cloud:bionic-train
region: RegionOne
use-internal-endpoints: true
vip: 10.92.76.133 192.168.2.11
worker-multiplier: 0.25
bindings:
"": oam-space
admin: oam-space
amqp: oam-space
certificates: oam-space
cluster: oam-space
ha: oam-space
hsm: oam-space
identity-service: oam-space
internal: oam-space
public: public-space
secrets: oam-space
shared-db: oam-space
barbican-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
barbican-vault:
charm: cs:barbican-vault-12
bindings:
"": oam-space
certificates: oam-space
juju-info: oam-space
secrets: oam-space
secrets-storage: oam-space
ceph-mon:
charm: cs:ceph-mon-51
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
constraints: spaces=oam-space
bindings:
"": alpha
admin: alpha
bootstrap-source: alpha
client: alpha
cluster: oam-space
mds: alpha
mon: alpha
nrpe-external-master: alpha
osd: alpha
prometheus: alpha
public: oam-space
radosgw: alpha
rbd-mirror: alpha
ceph-osd:
charm: cs:ceph-osd-306
num_units: 3
to:
- "17"
- "21"
- "19"
options:
osd-devices: /dev/sdb
bindings:
"": alpha
cluster: oam-space
mon: alpha
nrpe-external-master: alpha
public: oam-space
secrets-storage: alpha
ceph-radosgw:
charm: cs:ceph-radosgw-292
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
options:
admin-roles: admin
loglevel: 10
namespace-tenants: true
operator-roles: member
source: cloud:bionic-train/proposed
vhost-gateway: auto
bindings:
"": alpha
agent-cluster: alpha
contrail-controller: alpha
juju-info: alpha
nrpe-external-master: alpha
tls-certificates: alpha
vrouter-plugin: alpha
contrail-analytics:
charm: local:bionic/contrail-analytics-1
num_units: 4
to:
- kvm:0
- kvm:1
- kvm:2
- kvm:13
options:
control-network: 192.168.2.0/24
docker-password: <docker password>
docker-registry: hub.juniper.net/contrail
docker-user: JNPR-FieldUser367
haproxy-http-mode: https
image-tag: "2008.121"
log-level: SYS_DEBUG
min-cluster-size: 3
vip: 10.92.77.18
constraints: cpu-cores=16 mem=32768 root-disk=102400
spaces=oam-space,overlay-space
bindings:
"": oam-space
analytics-cluster: oam-space
contrail-analytics: oam-space
contrail-analyticsdb: oam-space
http-services: oam-space
nrpe-external-master: oam-space
tls-certificates: oam-space
contrail-analyticsdb:
charm: local:bionic/contrail-analyticsdb-1
num_units: 4
to:
- kvm:0
- kvm:1
- kvm:2
- kvm:13
options:
cassandra-jvm-extra-opts: -Xms16g -Xmx24g
cassandra-minimum-diskgb: "4"
control-network: 192.168.2.0/24
docker-password: <docker password>
docker-registry: hub.juniper.net/contrail
docker-user: JNPR-FieldUser367
image-tag: "2008.121"
log-level: SYS_DEBUG
min-cluster-size: 3
constraints: cpu-cores=16 mem=65536 root-disk=512000
spaces=oam-space,overlay-space
bindings:
"": oam-space
analyticsdb-cluster: oam-space
contrail-analyticsdb: oam-space
nrpe-external-master: oam-space
tls-certificates: oam-space
contrail-command:
charm: local:bionic/contrail-command-0
num_units: 1
to:
- "9"
options:
docker-password: <docker password>
docker-registry: hub.juniper.net/contrail
docker-registry-insecure: true
docker-user: JNPR-FieldUser367
image-tag: "2008.121"
constraints: tags=command
bindings:
"": alpha
contrail-controller: alpha
contrail-controller:
charm: local:bionic/contrail-controller-1
num_units: 4
to:
- kvm:0
- kvm:2
- kvm:1
- kvm:13
options:
auth-mode: rbac
bindings:
"": oam-space
local-monitors: oam-space
munin: oam-space
nrpe-external-master: oam-space
peer: oam-space
public: public-space
reverseproxy: oam-space
statistics: oam-space
website: public-space
contrail-keepalived:
charm: cs:~containers/keepalived-28
options:
network_interface: eth0
port: 8143
virtual_ip: 10.92.77.18
bindings:
"": alpha
juju-info: alpha
lb-sink: alpha
loadbalancer: alpha
website: alpha
contrail-keystone-auth:
charm: local:bionic/contrail-keystone-auth-1
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
constraints: spaces=oam-space,overlay-space
bindings:
"": oam-space
contrail-auth: oam-space
identity-admin: oam-space
nrpe-external-master: oam-space
contrail-openstack:
charm: local:bionic/contrail-openstack-3
options:
docker-password: <docker password>
docker-registry: hub.juniper.net/contrail
docker-user: JNPR-FieldUser367
image-tag: "2008.121"
use-internal-endpoints: true
bindings:
"": alpha
cluster: alpha
contrail-controller: alpha
heat-plugin: alpha
juju-info: alpha
neutron-api: alpha
nova-compute: alpha
dashboard-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
easyrsa:
charm: cs:~containers/easyrsa-303
num_units: 1
to:
- lxd:0
bindings:
"": oam-space
client: oam-space
etcd:
charm: cs:etcd-521
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
options:
channel: 3.1/stable
bindings:
"": oam-space
certificates: oam-space
cluster: oam-space
db: oam-space
nrpe-external-master: oam-space
proxy: oam-space
external-policy-routing:
charm: cs:~canonical-bootstack/policy-routing-3
options:
cidr: 10.92.76.0/23
gateway: 10.92.77.254
bindings:
"": alpha
juju-info: alpha
glance:
charm: cs:~openstack-charmers-next/glance-442
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
openstack-origin: cloud:bionic-train
region: RegionOne
restrict-ceph-pools: false
use-internal-endpoints: true
vip: 10.92.77.12 192.168.2.12
worker-multiplier: 0.25
bindings:
"": oam-space
admin: oam-space
amqp: oam-space
ceph: oam-space
certificates: oam-space
cinder-volume-service: oam-space
cluster: oam-space
ha: oam-space
identity-service: oam-space
image-service: oam-space
internal: oam-space
nrpe-external-master: oam-space
object-store: oam-space
public: public-space
shared-db: oam-space
storage-backend: oam-space
glance-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
glance-simplestreams-sync:
charm: cs:glance-simplestreams-sync-33
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
options:
source: ppa:simplestreams-dev/trunk
use_swift: false
bindings:
"": oam-space
amqp: oam-space
certificates: oam-space
identity-service: oam-space
image-modifier: oam-space
nrpe-external-master: oam-space
simplestreams-image-service: oam-space
heat:
charm: cs:heat-271
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
openstack-origin: cloud:bionic-train
region: RegionOne
use-internal-endpoints: true
vip: 10.92.77.13 192.168.2.13
worker-multiplier: 0.25
constraints: cpu-cores=6 mem=32768 root-disk=65536
spaces=oam-space,public-space,overlay-space
bindings:
"": oam-space
admin: oam-space
amqp: oam-space
certificates: oam-space
cluster: oam-space
ha: oam-space
heat-plugin-subordinate: overlay-space
identity-service: oam-space
internal: oam-space
public: public-space
shared-db: oam-space
heat-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
ironic-api:
charm: cs:~openstack-charmers-next/ironic-api-8
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
options:
openstack-origin: cloud:bionic-train/proposed
vip: 10.92.76.130 192.168.2.189
constraints: spaces=oam-space,public-space
bindings:
"": alpha
admin: alpha
amqp: alpha
certificates: alpha
cluster: alpha
ha: alpha
identity-service: alpha
internal: alpha
ironic-api: alpha
public: alpha
shared-db: oam-space
ironic-api-hacluster:
charm: cs:hacluster-72
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
ironic-conductor:
charm: cs:~openstack-charmers-next/ironic-conductor-5
num_units: 1
to:
- "14"
options:
cleaning-network: ironic
default-deploy-interface: direct
default-network-interface: neutron
disable-secure-erase: true
enabled-deploy-interfaces: direct
enabled-network-interfaces: noop,flat,neutron
max-tftp-block-size: 1418
openstack-origin: cloud:bionic-train/proposed
provisioning-network: ironic
use-ipxe: false
bindings:
"": alpha
amqp: alpha
certificates: alpha
cleaning: alpha
deployment: alpha
identity-credentials: alpha
internal: alpha
ironic-api: alpha
shared-db: alpha
keystone:
charm: cs:keystone-309
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
admin-password: c0ntrail123
admin-role: admin
openstack-origin: cloud:bionic-train
preferred-api-version: 3
region: RegionOne
token-provider: fernet
vip: 10.92.77.14 192.168.2.14
worker-multiplier: 0.25
bindings:
"": oam-space
admin: oam-space
certificates: oam-space
cluster: oam-space
domain-backend: oam-space
ha: oam-space
identity-admin: oam-space
identity-credentials: oam-space
identity-notifications: oam-space
identity-service: oam-space
internal: oam-space
keystone-fid-service-provider: oam-space
keystone-middleware: oam-space
nrpe-external-master: oam-space
public: public-space
shared-db: oam-space
websso-trusted-dashboard: oam-space
keystone-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
memcached:
charm: cs:memcached-26
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
allow-ufw-ip6-softfail: true
constraints: spaces=oam-space
bindings:
"": oam-space
cache: oam-space
cluster: oam-space
local-monitors: oam-space
monitors: oam-space
munin: oam-space
nrpe-external-master: oam-space
mysql:
charm: cs:percona-cluster-281
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
enable-binlogs: true
innodb-buffer-pool-size: 512M
max-connections: 2000
min-cluster-size: 3
performance-schema: true
source: cloud:bionic-train
tuning-level: safest
vip: 192.168.2.17
wait-timeout: 3600
wsrep-slave-threads: 48
bindings:
"": oam-space
access: oam-space
cluster: oam-space
db: oam-space
db-admin: oam-space
ha: oam-space
master: oam-space
nrpe-external-master: oam-space
shared-db: oam-space
slave: oam-space
mysql-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
ncc-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
neutron-api:
charm: cs:neutron-api-281
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
default-tenant-network-type: vlan
dhcp-agents-per-network: 2
enable-l3ha: true
enable-ml2-port-security: true
global-physnet-mtu: 9000
l2-population: true
manage-neutron-plugin-legacy-mode: false
neutron-security-groups: true
openstack-origin: cloud:bionic-train
overlay-network-type: ""
region: RegionOne
use-internal-endpoints: true
vip: 10.92.77.15 192.168.2.15
worker-multiplier: 0.25
constraints: cpu-cores=8 mem=32768 root-disk=262144
spaces=oam-space,public-space,overlay-space
bindings:
"": oam-space
admin: oam-space
amqp: oam-space
certificates: oam-space
cluster: oam-space
etcd-proxy: oam-space
external-dns: oam-space
ha: oam-space
identity-service: oam-space
infoblox-neutron: oam-space
internal: oam-space
midonet: oam-space
neutron-api: oam-space
neutron-load-balancer: oam-space
neutron-plugin-api: oam-space
neutron-plugin-api-subordinate: overlay-space
nrpe-external-master: oam-space
public: public-space
shared-db: oam-space
vsd-rest-api: oam-space
neutron-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
nova-cloud-controller:
charm: cs:nova-cloud-controller-339
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
console-access-protocol: novnc
console-proxy-ip: local
cpu-allocation-ratio: 4
network-manager: Neutron
openstack-origin: cloud:bionic-train
ram-allocation-ratio: 0.999999
region: RegionOne
use-internal-endpoints: true
vip: 10.92.77.16 192.168.2.16
worker-multiplier: 0.25
bindings:
"": oam-space
admin: oam-space
amqp: oam-space
amqp-cell: oam-space
certificates: oam-space
cinder-volume-service: oam-space
cloud-compute: oam-space
cloud-controller: oam-space
cluster: oam-space
ha: oam-space
identity-service: oam-space
image-service: oam-space
internal: oam-space
memcache: oam-space
neutron-api: oam-space
nova-cell-api: oam-space
nova-vmware: oam-space
nrpe-external-master: oam-space
placement: oam-space
public: public-space
quantum-network-service: oam-space
shared-db: oam-space
shared-db-cell: oam-space
nova-compute:
charm: cs:nova-compute-309
num_units: 5
to:
- "3"
- "4"
- "5"
- "6"
- "15"
options:
openstack-origin: cloud:bionic-train
os-internal-network: 192.168.2.0/24
bindings:
"": alpha
amqp: alpha
ceph: alpha
ceph-access: alpha
cloud-compute: alpha
cloud-credentials: alpha
compute-peer: alpha
ephemeral-backend: alpha
image-service: alpha
internal: alpha
lxd: alpha
neutron-plugin: alpha
nova-ceilometer: alpha
nrpe-external-master: alpha
secrets-storage: alpha
nova-ironic:
charm: cs:~openstack-charmers-next/nova-compute-524
num_units: 1
to:
- "22"
options:
enable-live-migration: false
enable-resize: false
openstack-origin: cloud:bionic-train/proposed
virt-type: ironic
bindings:
"": alpha
amqp: alpha
ceph: alpha
ceph-access: alpha
cloud-compute: alpha
cloud-credentials: alpha
compute-peer: alpha
ephemeral-backend: alpha
image-service: alpha
internal: alpha
ironic-api: alpha
lxd: alpha
migration: alpha
neutron-plugin: alpha
nova-ceilometer: alpha
nrpe-external-master: alpha
secrets-storage: alpha
ntp:
charm: cs:ntp-36
options:
source: ntp.juniper.net
bindings:
"": alpha
juju-info: alpha
master: alpha
nrpe-external-master: alpha
ntp-peers: alpha
ntpmaster: alpha
octavia:
charm: cs:~apavlov-e/octavia-3
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
options:
amp-ssh-key-name: octavia
amp-ssh-pub-key: <base64-encoded-SSH-public-key>
create-mgmt-network: false
lb-mgmt-controller-cacert: |-
<certificate>
lb-mgmt-controller-cert: |-
<certificate>
lb-mgmt-issuing-ca-key-passphrase: <passphrase>
lb-mgmt-issuing-ca-private-key: |-
<private key>
lb-mgmt-issuing-cacert: |-
<certificate>
loadbalancer-topology: ACTIVE_STANDBY
openstack-origin: cloud:bionic-train
region: RegionOne
use-internal-endpoints: true
vip: 10.92.76.135 192.168.2.18
worker-multiplier: 0.25
bindings:
"": oam-space
admin: oam-space
amqp: oam-space
certificates: oam-space
cluster: oam-space
ha: oam-space
identity-service: oam-space
internal: oam-space
neutron-api: oam-space
neutron-openvswitch: oam-space
ovsdb-cms: oam-space
ovsdb-subordinate: oam-space
public: public-space
shared-db: oam-space
octavia-dashboard:
charm: cs:octavia-dashboard-17
bindings:
"": alpha
certificates: alpha
dashboard: alpha
octavia-diskimage-retrofit:
charm: cs:octavia-diskimage-retrofit-12
options:
amp-image-tag: octavia-amphora
retrofit-uca-pocket: train
bindings:
"": oam-space
certificates: oam-space
identity-credentials: oam-space
juju-info: oam-space
octavia-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
openstack-dashboard:
charm: cs:openstack-dashboard-295
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
cinder-backup: false
endpoint-type: publicURL
neutron-network-firewall: false
neutron-network-l3ha: true
neutron-network-lb: true
openstack-origin: cloud:bionic-train
password-retrieve: true
secret: encryptcookieswithme
vip: 10.92.77.11
webroot: /
constraints: spaces=oam-space
bindings:
"": public-space
certificates: public-space
cluster: public-space
dashboard-plugin: public-space
ha: public-space
identity-service: public-space
nrpe-external-master: public-space
public: public-space
shared-db: oam-space
website: public-space
websso-fid-service-provider: public-space
websso-trusted-dashboard: public-space
placement:
charm: cs:placement-11
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
openstack-origin: cloud:bionic-train
region: RegionOne
use-internal-endpoints: true
vip: 10.92.77.19 192.168.2.19
bindings:
"": oam-space
admin: oam-space
amqp: oam-space
certificates: oam-space
cluster: oam-space
ha: oam-space
identity-service: oam-space
internal: oam-space
placement: oam-space
public: public-space
shared-db: oam-space
placement-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
rabbitmq-server:
charm: cs:rabbitmq-server-97
num_units: 4
to:
- lxd:0
- lxd:1
- lxd:2
- lxd:13
options:
min-cluster-size: 3
source: cloud:bionic-train
bindings:
"": oam-space
amqp: oam-space
ceph: oam-space
certificates: oam-space
cluster: oam-space
ha: oam-space
nrpe-external-master: oam-space
radosgw-hacluster:
charm: cs:hacluster-72
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
ubuntu:
charm: cs:ubuntu-15
num_units: 4
to:
- "0"
- "1"
- "2"
- "13"
bindings:
"": alpha
vault:
charm: cs:vault-39
num_units: 3
to:
- lxd:0
- lxd:1
- lxd:2
options:
vip: 192.168.2.20
bindings:
"": oam-space
access: oam-space
certificates: oam-space
cluster: oam-space
db: oam-space
etcd: oam-space
external: oam-space
ha: oam-space
nrpe-external-master: oam-space
secrets: oam-space
shared-db: oam-space
vault-hacluster:
charm: cs:hacluster-62
options:
cluster_count: 3
bindings:
"": alpha
ha: alpha
hanode: alpha
juju-info: alpha
nrpe-external-master: alpha
pacemaker-remote: alpha
peer-availability: alpha
machines:
"0":
constraints: tags=controller1
"1":
constraints: tags=controller2
"2":
constraints: tags=controller3
"3":
constraints: tags=compute1
"4":
constraints: tags=compute2
"5":
constraints: tags=compute3
"6":
constraints: tags=compute4
"9":
constraints: tags=command
"13":
constraints: tags=controller4
"14":
constraints: tags=controller5
"15":
constraints: tags=compute5
"17":
constraints: tags=CEPH
"19":
constraints: tags=CEPH
"21":
constraints: tags=CEPH
"22":
constraints: tags=CSN
relations:
- - ubuntu:juju-info
- ntp:juju-info
- - mysql:ha
- mysql-hacluster:ha
- - keystone:shared-db
- mysql:shared-db
- - keystone:ha
- keystone-hacluster:ha
- - glance:shared-db
- mysql:shared-db
- - glance:identity-service
- keystone:identity-service
- - nova-cloud-controller:shared-db
- mysql:shared-db
- - nova-cloud-controller:identity-service
- keystone:identity-service
- - nova-cloud-controller:image-service
- glance:image-service
- - nova-cloud-controller:ha
- ncc-hacluster:ha
- - neutron-api:shared-db
- mysql:shared-db
- - neutron-api:neutron-api
- nova-cloud-controller:neutron-api
- - neutron-api:identity-service
- keystone:identity-service
- - neutron-api:ha
- neutron-hacluster:ha
- - nova-compute:image-service
- glance:image-service
- - nova-compute:cloud-compute
- nova-cloud-controller:cloud-compute
- - nova-compute:juju-info
- ntp:juju-info
- - openstack-dashboard:identity-service
- keystone:identity-service
- - openstack-dashboard:ha
- dashboard-hacluster:ha
- - heat:shared-db
- mysql:shared-db
- - heat:identity-service
- keystone:identity-service
- - heat:ha
- heat-hacluster:ha
- - placement:shared-db
- mysql:shared-db
- - placement:identity-service
- keystone:identity-service
- - placement:placement
- nova-cloud-controller:placement
- - contrail-controller:contrail-controller
- contrail-agent:contrail-controller
- - contrail-agent:juju-info
- nova-compute:juju-info
- - contrail-analytics:contrail-analyticsdb
- contrail-analyticsdb:contrail-analyticsdb
- - contrail-analytics:contrail-analytics
- contrail-controller:contrail-analytics
- - contrail-analytics:http-services
- contrail-haproxy:reverseproxy
- - contrail-analyticsdb:contrail-analyticsdb
- contrail-controller:contrail-analyticsdb
- - contrail-controller:contrail-auth
- contrail-keystone-auth:contrail-auth
- - contrail-controller:http-services
- contrail-haproxy:reverseproxy
- - contrail-controller:https-services
- contrail-haproxy:reverseproxy
- - contrail-keystone-auth:identity-admin
- keystone:identity-admin
- - contrail-openstack:nova-compute
- nova-compute:neutron-plugin
- - contrail-openstack:neutron-api
- neutron-api:neutron-plugin-api-subordinate
- - contrail-openstack:heat-plugin
- heat:heat-plugin-subordinate
- - contrail-openstack:contrail-controller
- contrail-controller:contrail-controller
- - contrail-haproxy:juju-info
- contrail-keepalived:juju-info
- - nova-cloud-controller:memcache
- memcached:cache
- - external-policy-routing:juju-info
- openstack-dashboard:juju-info
- - external-policy-routing:juju-info
- glance:juju-info
- - external-policy-routing:juju-info
- heat:juju-info
- - external-policy-routing:juju-info
- keystone:juju-info
- - external-policy-routing:juju-info
- neutron-api:juju-info
- - external-policy-routing:juju-info
- nova-cloud-controller:juju-info
- - external-policy-routing:juju-info
- contrail-haproxy:juju-info
- - ntp:juju-info
- contrail-controller:juju-info
- - ntp:juju-info
- contrail-analytics:juju-info
- - ntp:juju-info
- contrail-analyticsdb:juju-info
- - ntp:juju-info
- neutron-api:juju-info
- - ntp:juju-info
- heat:juju-info
- - contrail-command:contrail-controller
- contrail-controller:contrail-controller
- - glance:ha
- glance-hacluster:ha
- - placement:ha
- placement-hacluster:ha
- - mysql:shared-db
- octavia:shared-db
- - mysql:shared-db
- barbican:shared-db
- - mysql:shared-db
- vault:shared-db
- - keystone:identity-service
- octavia:identity-service
- - keystone:identity-service
- barbican:identity-service
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - openstack-dashboard:dashboard-plugin
- octavia-dashboard:dashboard
- - barbican-vault:secrets
- barbican:secrets
- - vault:secrets
- barbican-vault:secrets-storage
- - glance-simplestreams-sync:juju-info
- octavia-diskimage-retrofit:juju-info
- - keystone:identity-service
- glance-simplestreams-sync:identity-service
- - keystone:identity-credentials
- octavia-diskimage-retrofit:identity-credentials
- - contrail-openstack:nova-compute
- octavia:neutron-openvswitch
- - vault:ha
- vault-hacluster:ha
- - etcd:certificates
- easyrsa:client
- - etcd:db
- vault:etcd
- - barbican:ha
- barbican-hacluster:ha
- - octavia:ha
- octavia-hacluster:ha
- - rabbitmq-server:amqp
- barbican:amqp
- - rabbitmq-server:amqp
- glance-simplestreams-sync:amqp
- - rabbitmq-server:amqp
- heat:amqp
- - rabbitmq-server:amqp
- neutron-api:amqp
- - rabbitmq-server:amqp
- nova-cloud-controller:amqp
- - rabbitmq-server:amqp
- nova-compute:amqp
- - rabbitmq-server:amqp
- octavia:amqp
- - ceph-mon:osd
- ceph-osd:mon
- - ceph-radosgw:juju-info
- external-policy-routing:juju-info
- - ceph-radosgw:ha
- radosgw-hacluster:ha
- - ceph-radosgw:mon
- ceph-mon:radosgw
- - ceph-radosgw:identity-service
- keystone:identity-service
- - vault:certificates
- ceph-radosgw:certificates
- - ceph-radosgw:object-store
- glance:object-store
- - ceph-mon:client
- glance:ceph
- - ironic-conductor:amqp
- rabbitmq-server:amqp
- - ironic-conductor:identity-credentials
- keystone:identity-credentials
- - ironic-conductor:shared-db
- mysql:shared-db
- - vault:certificates
- ironic-conductor:certificates
- - nova-ironic:amqp
- rabbitmq-server:amqp
- - nova-ironic:image-service
- glance:image-service
- - nova-ironic:cloud-credentials
- keystone:identity-credentials
- - nova-ironic:cloud-compute
- nova-cloud-controller:cloud-compute
- - ceph-mon:client
- nova-ironic:ceph
- - nova-ironic:juju-info
- ntp:juju-info
- - contrail-agent-csn:juju-info
- nova-ironic:juju-info
- - contrail-agent-csn:contrail-controller
- contrail-controller:contrail-controller
- - ironic-api:ha
- ironic-api-hacluster:ha
- - ironic-conductor:ironic-api
- ironic-api:ironic-api
- - ironic-api:amqp
- rabbitmq-server:amqp
- - ironic-api:identity-service
- keystone:identity-service
- - ironic-api:shared-db
- mysql:shared-db
- - vault:certificates
- ironic-api:certificates
- - nova-ironic:ironic-api
- ironic-api:ironic-api
Release   Description
2011.L2   Contrail Networking Release 2011.L2 supports OpenStack Ussuri with Ironic deployed on Ubuntu version 20.04 (Focal Fossa).
2011.L1   Contrail Networking Release 2011.L1 supports new charms for Ironic from OpenStack Train version 15.x.x.
2011      Starting in Contrail Networking Release 2011, Contrail Networking supports OpenStack Ussuri with Ubuntu version 18.04 (Bionic Beaver) and Ubuntu version 20.04 (Focal Fossa).
You can deploy Contrail Networking using Juju Charms. Juju helps you deploy, configure, and efficiently
manage applications on private clouds and public clouds. Juju accesses the cloud with the help of a Juju
controller. A Charm is a module containing a collection of scripts and metadata and is used with Juju to
deploy Contrail.
A Juju Charm helps you deploy Docker containers to the cloud. For more information on containerized
Contrail, see “Understanding Contrail Containers” on page 5. Juju Charms simplify Contrail deployment
by providing a straightforward way to deploy, configure, scale, and manage Contrail operations.
• contrail-agent
• contrail-analytics
• contrail-analyticsdb
• contrail-controller
• contrail-kubernetes-master
• contrail-kubernetes-node
1. Install Juju.
2. Configure Juju.
You can add a cloud to Juju, identify clouds supported by Juju, and manage clouds already added to
Juju.
Adding a cloud
Juju already has knowledge of the AWS cloud, which means adding your AWS account to Juju is quick
and easy.
NOTE: In versions prior to Juju 2.6.0, the show-cloud command operates only locally; there
is no --local option.
You must ensure that Juju’s information is up to date (e.g. new region support). Run the following
command to update Juju’s public cloud data:
juju update-public-clouds
Juju recognizes a wide range of cloud types. You can use any one of the following methods to add
cloud credentials to Juju:
You can use a YAML configuration file to add AWS cloud credentials and register them with the juju add-credential command, as sketched below.
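The following is a minimal sketch: a credentials file (here named credentials.yaml) with placeholder credential name and key values, registered with juju add-credential.

# credentials.yaml
credentials:
  aws:
    my-aws-creds:
      auth-type: access-key
      access-key: <AWS access key ID>
      secret-key: <AWS secret access key>

juju add-credential aws -f credentials.yaml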
Use the juju clouds command to list cloud types that are supported by Juju.
$ juju clouds
Cloud Regions Default Type Description
aws 15 us-east-1 ec2 Amazon Web Services
aws-china 1 cn-north-1 ec2 Amazon China
aws-gov 1 us-gov-west-1 ec2 Amazon (USA Government)
azure 26 centralus azure Microsoft Azure
azure-china 2 chinaeast azure Microsoft Azure China
cloudsigma 5 hnl cloudsigma CloudSigma Cloud
google 13 us-east1 gce Google Cloud Platform
joyent 6 eu-ams-1 joyent Joyent Cloud
oracle 5 uscom-central-1 oracle Oracle Cloud
A Juju controller manages and keeps track of applications in the Juju cloud environment.
Juju Charms simplify Contrail deployment by providing a straightforward way to deploy, configure, scale, and
manage Contrail operations.
To deploy Contrail Charms in a bundle, use the juju deploy <bundle_yaml_file> command.
The following example shows you how to use a bundle YAML file to deploy Contrail on Amazon Web
Services (AWS) Cloud.
series: "bionic"
machines:
# kubernetes pods
0:
series: "bionic"
constraints: mem=8G cores=2 root-disk=60G
# kubernetes master
2:
series: "bionic"
constraints: mem=8G cores=2 root-disk=60G
# contrail components
5:
series: "bionic"
constraints: mem=16G cores=4 root-disk=60G
services:
# kubernetes
easyrsa:
series: "bionic"
charm: cs:~containers/easyrsa
num_units: 1
annotations:
gui-x: '1168.1039428710938'
gui-y: '-59.11077045466004'
to:
- lxd:2
etcd:
series: "bionic"
charm: cs:~containers/etcd
annotations:
gui-x: '1157.2041015625'
gui-y: '719.1614406201691'
num_units: 1
options:
channel: 3.2/stable
to: [2]
kubernetes-master:
series: "bionic"
charm: cs:~containers/kubernetes-master-696
annotations:
gui-x: '877.1133422851562'
gui-y: '325.6035540382413'
expose: true
num_units: 1
options:
channel: '1.14/stable'
service-cidr: '10.96.0.0/12'
docker_runtime: 'custom'
docker_runtime_repo: 'deb [arch={ARCH}]
https://download.docker.com/linux/ubuntu {CODE} stable'
docker_runtime_key_url: 'https://download.docker.com/linux/ubuntu/gpg'
docker_runtime_package: 'docker-ce'
to: [2]
kubernetes-worker:
series: "bionic"
charm: cs:~containers/kubernetes-worker-550
annotations:
gui-x: '745.8510131835938'
gui-y: '-57.369691124215706'
num_units: 1
options:
channel: '1.14/stable'
docker_runtime: 'custom'
docker_runtime_repo: 'deb [arch={ARCH}]
https://download.docker.com/linux/ubuntu {CODE} stable'
docker_runtime_key_url: 'https://download.docker.com/linux/ubuntu/gpg'
docker_runtime_package: 'docker-ce'
to: [0]
# contrail-kubernetes
contrail-kubernetes-master:
series: "bionic"
charm: cs:~juniper-os-software/contrail-kubernetes-master
annotations:
gui-x: '586.8027801513672'
gui-y: '753.914497641757'
options:
log-level: 'SYS_DEBUG'
service_subnets: '10.96.0.0/12'
docker-registry: "opencontrailnightly"
image-tag: "master-latest"
contrail-kubernetes-node:
series: "bionic"
charm: cs:~juniper-os-software/contrail-kubernetes-node
annotations:
gui-x: '429.1971130371094'
gui-y: '216.05209087397168'
options:
log-level: 'SYS_DEBUG'
docker-registry: "opencontrailnightly"
image-tag: "master-latest"
# contrail
contrail-agent:
series: "bionic"
charm: cs:~juniper-os-software/contrail-agent
annotations:
gui-x: '307.5467224121094'
gui-y: '-24.150856522753656'
options:
log-level: 'SYS_DEBUG'
docker-registry: "opencontrailnightly"
image-tag: "master-latest"
contrail-analytics:
series: "bionic"
charm: cs:~juniper-os-software/contrail-analytics
annotations:
gui-x: '15.948270797729492'
gui-y: '705.2326686475128'
expose: true
num_units: 1
options:
log-level: 'SYS_DEBUG'
docker-registry: "opencontrailnightly"
image-tag: "master-latest"
to: [5]
contrail-analyticsdb:
series: "bionic"
charm: cs:~juniper-os-software/contrail-analyticsdb
annotations:
gui-x: '24.427139282226562'
gui-y: '283.9550754931123'
num_units: 1
options:
cassandra-minimum-diskgb: '4'
cassandra-jvm-extra-opts: '-Xms1g -Xmx2g'
log-level: 'SYS_DEBUG'
docker-registry: "opencontrailnightly"
image-tag: "master-latest"
to: [5]
contrail-controller:
series: "bionic"
charm: cs:~juniper-os-software/contrail-controller
annotations:
gui-x: '212.01282501220703'
gui-y: '480.69961284662793'
expose: true
num_units: 1
options:
auth-mode: 'no-auth'
cassandra-minimum-diskgb: '4'
cassandra-jvm-extra-opts: '-Xms1g -Xmx2g'
log-level: 'SYS_DEBUG'
docker-registry: "opencontrailnightly"
image-tag: "master-latest"
to: [5]
# misc
ntp:
charm: "cs:bionic/ntp"
annotations:
gui-x: '678.6017761230469'
gui-y: '415.27124759750086'
relations:
- [ kubernetes-master:kube-api-endpoint, kubernetes-worker:kube-api-endpoint ]
- [ kubernetes-master:kube-control, kubernetes-worker:kube-control ]
- [ kubernetes-master:certificates, easyrsa:client ]
- [ kubernetes-master:etcd, etcd:db ]
- [ kubernetes-worker:certificates, easyrsa:client ]
- [ etcd:certificates, easyrsa:client ]
# contrail
- [ kubernetes-master, ntp ]
- [ kubernetes-worker, ntp ]
- [ contrail-controller, ntp ]
- [ contrail-controller, contrail-analytics ]
- [ contrail-controller, contrail-analyticsdb ]
- [ contrail-analytics, contrail-analyticsdb ]
- [ contrail-agent, contrail-controller ]
# contrail-kubernetes
- [ contrail-kubernetes-node:cni, kubernetes-master:cni ]
- [ contrail-kubernetes-node:cni, kubernetes-worker:cni ]
- [ contrail-kubernetes-master:contrail-controller,
contrail-controller:contrail-controller ]
- [ contrail-kubernetes-master:kube-api-endpoint,
kubernetes-master:kube-api-endpoint ]
- [ contrail-agent:juju-info, kubernetes-worker:juju-info ]
- [ contrail-agent:juju-info, kubernetes-master:juju-info ]
- [ contrail-kubernetes-master:contrail-kubernetes-config,
contrail-kubernetes-node:contrail-kubernetes-config ]
You can create or modify the Contrail Charm deployment bundle YAML file to suit your deployment.
Each Contrail Charm has a specific set of options. The options you choose depend on the charms
you select. For more information on the available options, see the config.yaml file for each charm
located at Contrail Charms.
You can check the status of the deployment by using the juju status command.
Based on your deployment requirements, you can enable the following configuration statements:
• contrail-agent
• contrail-analytics
• contrail-analyticsdb
• contrail-controller
• contrail-kubernetes-master
• contrail-kubernetes-node
1. Create machine instances for Kubernetes master, Kubernetes workers, and Contrail.
You can deploy Kubernetes services using any one of the following methods:
• By using CLI
NOTE: You must use the same docker version for Contrail and Kubernetes.
docker_runtime_repo="deb [arch={ARCH}]
https://download.docker.com/linux/ubuntu {CODE} stable" \
docker_runtime_key_url="https://download.docker.com/linux/ubuntu/gpg" \
docker_runtime_package="docker-ce"
NOTE: You must set the auth-mode parameter of the contrail-controller charm to no-auth
if Contrail is deployed without Keystone.
6. Enable the contrail-controller and contrail-analytics services to be available to external traffic if you do not
use HAProxy.
7. Apply SSL.
You can apply SSL if needed. To use SSL with Contrail services, deploy the easyrsa service and use the add-relation
command to create relations to the contrail-controller and contrail-agent services, as sketched below.
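A minimal sketch of steps 6 and 7; the relation endpoint names can vary with the charm revisions you use.

# Step 6: expose the services when HAProxy is not used
juju expose contrail-controller
juju expose contrail-analytics

# Step 7 (optional SSL): deploy easyrsa and relate it to the Contrail services
juju deploy cs:~containers/easyrsa
juju add-relation easyrsa contrail-controller
juju add-relation easyrsa contrail-agent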
RELATED DOCUMENTATION
https://juju.is/docs/installing
Contrail Networking Release 1909 and later support provisioning of a Kubernetes cluster inside an
OpenStack cluster. Contrail Networking offers a nested control and data plane where a single Contrail
control plane and a single network stack can manage and service both the OpenStack and Kubernetes
clusters.
In nested mode, a Kubernetes cluster is provisioned in virtual machines of an OpenStack cluster. The CNI
plugin and the Contrail-Kubernetes manager of the Kubernetes cluster interface directly with Contrail
components that manage the OpenStack cluster.
All Kubernetes features, functions and specifications are supported when used in nested mode.
NOTE: Nested mode deployment is only supported for Contrail with OpenStack cluster.
• Deploy Contrail with OpenStack either on bare metal servers or on virtual machines.
BEST PRACTICE: Public cloud deployment is not recommended because of slow nested
virtualization.
For example:
Follow these steps to deploy Juju Charms with Kubernetes in nested mode using bundle deployment:
You can use OpenStack Cloud provider or manually spun-up VMs. For details, refer to Preparing to
Deploy Contrail with Kubernetes by Using Juju Charms.
2. Deploy the bundle.
If the machines for the setup are already provisioned, deploy the bundle with machine mapping; otherwise,
let Juju provision the machines. For example:
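A sketch of both cases; the bundle file name is a placeholder for your own bundle YAML file.

# Machines already provisioned and added to the Juju model
juju deploy ./contrail-k8s-nested-bundle.yaml --map-machines=existing
# Let Juju provision the machines defined in the bundle
juju deploy ./contrail-k8s-nested-bundle.yaml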
Follow these steps to deploy Juju Charms with Kubernetes in nested mode manually:
You can use OpenStack Cloud provider or manually spun-up VMs. For details, refer to Preparing to
Deploy Contrail with Kubernetes by Using Juju Charms.
2. Create machine instances for the Contrail components, Kubernetes master, and Kubernetes workers,
using either an all-in-one deployment or a multinode deployment; a sketch of both follows.
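The constraints below are illustrative only; size the machines for your environment.

# All-in-one: a single machine hosts the Contrail components and the Kubernetes roles
juju add-machine --constraints "mem=32G cores=8 root-disk=300G"
# Multinode: add one machine per role
juju add-machine -n 3 --constraints "mem=16G cores=4 root-disk=60G"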
You can deploy Kubernetes services using any one of the following methods:
NOTE: You must use the same docker version for Contrail and Kubernetes.
--config docker_runtime_key_url="https://download.docker.com/linux/ubuntu/gpg"
\
--config docker_runtime_package="docker-ce"
contrail-kubernetes-master:
nested_mode: true
cluster_project: "{'domain':'default-domain','project':'admin'}"
cluster_network:
"{'domain':'default-domain','project':'admin','name':'juju-net'}"
service_subnets: '10.96.0.0/12'
nested_mode_config: |
{
"CONTROLLER_NODES": "10.0.12.20",
"AUTH_MODE": "keystone",
"KEYSTONE_AUTH_ADMIN_TENANT": "admin",
"KEYSTONE_AUTH_ADMIN_USER": "admin",
"KEYSTONE_AUTH_ADMIN_PASSWORD": "password",
"KEYSTONE_AUTH_URL_VERSION": "/v2.0",
"KEYSTONE_AUTH_HOST": "10.0.12.122",
"KEYSTONE_AUTH_PROTO": "http",
"KEYSTONE_AUTH_PUBLIC_PORT":"5000",
"KEYSTONE_AUTH_REGION_NAME": "RegionOne",
"KEYSTONE_AUTH_INSECURE": "True",
"KUBERNESTES_NESTED_VROUTER_VIP": "10.10.10.5"
}
You must provide the same certificates to the contrail-kubernetes-master node if Contrail in the underlay
cluster has SSL enabled.
Release   Description
1909      Contrail Networking Release 1909 and later support provisioning of a Kubernetes cluster inside an OpenStack cluster. Contrail Networking offers a nested control and data plane where a single Contrail control plane and a single network stack can manage and service both the OpenStack and Kubernetes clusters.
Contrail Networking Release 2005 supports Octavia as LBaaS. The deployment supports RHOSP and Juju
platforms.
With Octavia as LBaaS, Contrail Networking provides only network connectivity and is not involved
in any load-balancing functions.
For each load balancer created in OpenStack, Octavia launches a VM known as an amphora VM. The VM starts
HAProxy when a listener is created for the load balancer in OpenStack. Whenever the load balancer
is updated in OpenStack, the amphora VM updates the running HAProxy configuration. The amphora VM
is deleted when the load balancer is deleted.
Contrail Networking provides connectivity to the amphora VM interfaces. An amphora VM has two interfaces:
one for management and the other for data. The management interface is used by the Octavia services
for management communication. Because the Octavia services run in the underlay network and the
amphora VM runs in the overlay network, an SDN gateway is needed to reach the overlay network. The
data interface is used for load balancing.
1. Prepare the Juju setup with the OpenStack Train version and the Octavia overlay bundle, or add the
Octavia service after deploying the main bundle on the existing cluster.
2. Prepare an SSH key for the amphora VM and add the options in the octavia-bundle.yaml file, as sketched
below.
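A minimal sketch; the key file name is a placeholder. The base64-encoded public key is the value for the amp-ssh-pub-key option, and the key name is the value for amp-ssh-key-name.

ssh-keygen -t rsa -b 2048 -N '' -f octavia_amphora_key
base64 -w0 octavia_amphora_key.pub    # value for amp-ssh-pub-key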
3. Generate certificates.
rm -rf demoCA/
mkdir -p demoCA/newcerts
touch demoCA/index.txt
touch demoCA/index.txt.attr
openssl genrsa -passout pass:foobar -des3 -out issuing_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes -key issuing_ca_key.pem \
-config /etc/ssl/openssl.cnf \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-days 30 \
-out issuing_ca.pem
openssl genrsa -passout pass:foobar -des3 -out controller_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes \
-key controller_ca_key.pem \
-config /etc/ssl/openssl.cnf \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-days 30 \
-out controller_ca.pem
openssl req \
-newkey rsa:2048 -nodes -keyout controller_key.pem \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-out controller.csr
openssl ca -passin pass:foobar -config /etc/ssl/openssl.cnf \
-cert controller_ca.pem -keyfile controller_ca_key.pem \
-create_serial -batch \
-in controller.csr -days 30 -out controller_cert.pem
cat controller_cert.pem controller_key.pem > controller_cert_bundle.pem
juju config octavia \
lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
lb-mgmt-issuing-ca-key-passphrase=foobar \
lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
export VAULT_ADDR='http://localhost:8200'
/snap/bin/vault operator init -key-shares=5 -key-threshold=3
c. Call the unseal command using any three of the five printed unseal keys.
export VAULT_TOKEN="..."
f. Exit from the vault machine and initialize the vault charm with the user token, as sketched below.
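A sketch of these steps; the unseal keys are placeholders, and the exact action invocation can vary with the vault charm revision.

/snap/bin/vault operator unseal <unseal-key-1>
/snap/bin/vault operator unseal <unseal-key-2>
/snap/bin/vault operator unseal <unseal-key-3>
juju run-action --wait vault/0 authorize-charm token="$VAULT_TOKEN"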
6. Install python-openstackclient and python-octaviaclient, and create the management network for Octavia.
For example:
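In the sketch below, the network and subnet names and the subnet range are placeholders.

pip install python-openstackclient python-octaviaclient
openstack network create lb-mgmt-net
openstack subnet create --network lb-mgmt-net --subnet-range 192.168.0.0/24 lb-mgmt-subnet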
7. The management network created in step 6 is in the overlay network and the Octavia services run in
the underlay network. Verify network connectivity between the overlay and underlay networks via the SDN
gateway.
Make sure the Juju cluster is functional and all units have an active status.
If you want to run amphora instances on DPDK computes, you must create your own flavor with the
required options and set its ID in the Octavia charm configuration via the custom-amp-flavor-id option before
calling configure-resources.
Alternatively, set the required options on the flavor named charm-octavia that the charm creates. A sketch of
the first approach follows.
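The flavor name, sizing, and hugepage property below are illustrative placeholders for a DPDK-capable amphora flavor.

openstack flavor create --vcpus 2 --ram 2048 --disk 10 --property hw:mem_page_size=large amphora-dpdk
juju config octavia custom-amp-flavor-id=<flavor-uuid>
juju run-action --wait octavia/0 configure-resources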
Prerequisites:
• You must have connectivity between the Octavia controller and the amphora instances.
• You must have separate interfaces for the control plane and the data plane.
3. Check available flavors and images. You can create them, if needed.
7. Create a simple HTTP server on every cirros instance. Log in to both cirros instances and run the following
commands:
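A minimal sketch of such a server, replying with the instance's own address so that round-robin can be observed; replace <instance-ip> with the address of each instance.

while true; do
  echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to <instance-ip>" | sudo nc -l -p 80
done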
11. Login to load balancer client and verify if round robin works.
$ curl 10.10.10.50
Welcome to 10.10.10.52
$ curl 10.10.10.50
Welcome to 10.10.10.53
$ curl 10.10.10.50
Welcome to 10.10.10.52
$ curl 10.10.10.50
Welcome to 10.10.10.53
$ curl 10.10.10.50
Welcome to 10.10.10.52
$ curl 10.10.10.50
Welcome to 10.10.10.53
charm: cs:glance-simplestreams-sync
num_units: 1
options:
source: ppa:simplestreams-dev/trunk
use_swift: false
to:
- lxd:4
octavia-diskimage-retrofit:
charm: cs:octavia-diskimage-retrofit
options:
amp-image-tag: 'octavia-amphora'
retrofit-uca-pocket: train
relations:
- - mysql:shared-db
- octavia:shared-db
- - mysql:shared-db
- barbican:shared-db
- - mysql:shared-db
- vault:shared-db
- - keystone:identity-service
- octavia:identity-service
- - keystone:identity-service
- barbican:identity-service
- - rabbitmq-server:amqp
- octavia:amqp
- - rabbitmq-server:amqp
- barbican:amqp
- - neutron-api:neutron-load-balancer
- octavia:neutron-api
- - openstack-dashboard:dashboard-plugin
- octavia-dashboard:dashboard
- - barbican-vault:secrets
- barbican:secrets
- - vault:secrets
- barbican-vault:secrets-storage
- - glance-simplestreams-sync:juju-info
- octavia-diskimage-retrofit:juju-info
- - keystone:identity-service
- glance-simplestreams-sync:identity-service
- - rabbitmq-server:amqp
- glance-simplestreams-sync:amqp
- - keystone:identity-credentials
- octavia-diskimage-retrofit:identity-credentials
- - contrail-openstack
- octavia
Release   Description
2005      Contrail Networking Release 2005 supports Octavia as LBaaS. The deployment supports RHOSP and Juju platforms.
NOTE: The Netronome SmartNIC vRouter technology covered in this document is available for
evaluation purposes only. It is not intended for deployment in production networks.
You can deploy Contrail Networking by using Juju charms. Juju helps you deploy, configure, and efficiently
manage applications on private clouds and public clouds. Juju accesses the cloud with the help of a Juju
controller. A charm is a module containing a collection of scripts and metadata and is used with Juju to
deploy Contrail.
Starting in Contrail Networking Release 2011, Contrail Networking supports Netronome Agilio CX
SmartNICs for Contrail Networking deployment with Juju charms. This feature enables service providers
to improve vRouter forwarding performance, including packets per second (PPS). It also optimizes
server CPU usage so that you can deploy more Virtual Network Functions (VNFs) per server.
• Equip compute nodes with Netronome Agilio CX SmartNIC. For details, see Agilio CX SmartNICs
documentation.
Register on the Netronome support site at https://help.netronome.com and provide Docker Hub credentials.
Netronome will provide the Agilio charm for SmartNIC vRouter deployment on compute nodes. Add
the charm version as the charm variable in the bundle YAML file shown below. Also, Netronome will authorize
Docker Hub registry access.
• Note the container tags for your Contrail image to customize the image-tag variable in the bundle YAML
file shown below. See README Access to Contrail Registry 21XX.
agilio-image-tag: 2.48-ubuntu-queens
• contrail-agent
• contrail-analytics
• contrail-analyticsdb
• contrail-controller
• contrail-keystone-auth
• contrail-openstack
The following topics describe how to use Netronome SmartNIC vRouter with Contrail Networking and
Juju charms.
1. Install Juju.
2. Configure Juju.
You can add a cloud to Juju, and manage clouds already added to Juju. Juju recognizes a wide range
of cloud types for adding a cloud.
juju add-cloud
Cloud Types
maas
manual
openstack
oracle
vsphere
NOTE: Juju 2.x is compatible with MAAS series 1.x and 2.x.
NOTE: A Juju controller manages and keeps track of applications in the Juju cloud
environment.
To deploy Contrail charms in a bundle, use the juju deploy <bundle_yaml_file> command.
The following example shows you how to use bundle_yaml_file to deploy Contrail Networking with
Netronome SmartNIC vRouter on MAAS based deployment.
series: bionic
variables:
openstack-origin: &openstack-origin distro
#vhost-gateway: &vhost-gateway "192.x.40.254"
data-network: &data-network "192.x.40.0/24"
agilio-image-tag: &agilio-image-tag
"2.48-ubuntu-queens"
agilio-user: &agilio-user
"<agilio-username>"
agilio-password: &agilio-password
"<agilio-password>"
agilio-insecure: &agilio-insecure false
agilio-phy: &agilio-phy "nfp_p0"
docker-registry: &docker-registry
"<registry-directory>"
#docker-user: &docker-user "<docker_username>"
#docker-password: &docker-password "<docker_password>"
image-tag: &image-tag "2011.61"
docker-registry-insecure: &docker-registry-insecure "true"
dockerhub-registry: &dockerhub-registry
"https://index.docker.io/v1/"
machines:
"1":
constraints: tags=controller
series: bionic
"2":
constraints: tags=compute
series: bionic
"3":
constraints: tags=neutron
series: bionic
services:
ubuntu:
charm: cs:ubuntu
num_units: 1
to: [ "1" ]
ntp:
charm: cs:ntp
num_units: 0
options:
#source: ntp.ubuntu.com
source: 10.204.217.158
mysql:
charm: cs:percona-cluster
num_units: 1
options:
dataset-size: 15%
max-connections: 10000
root-password: <password>
sst-password: <password>
min-cluster-size: 1
to: [ "lxd:1" ]
rabbitmq-server:
charm: cs:rabbitmq-server
num_units: 1
options:
min-cluster-size: 1
to: [ "lxd:1" ]
heat:
charm: cs:heat
num_units: 1
expose: true
options:
debug: true
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
keystone:
charm: cs:keystone
expose: true
num_units: 1
options:
admin-password: <password>
admin-role: admin
openstack-origin: *openstack-origin
preferred-api-version: 3
nova-cloud-controller:
charm: cs:nova-cloud-controller
num_units: 1
expose: true
options:
network-manager: Neutron
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
neutron-api:
charm: cs:neutron-api
expose: true
num_units: 1
series: bionic
options:
manage-neutron-plugin-legacy-mode: false
openstack-origin: *openstack-origin
to: [ "3" ]
glance:
charm: cs:glance
expose: true
num_units: 1
options:
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
openstack-dashboard:
charm: cs:openstack-dashboard
expose: true
num_units: 1
options:
openstack-origin: *openstack-origin
to: [ "lxd:1" ]
nova-compute:
charm: cs:nova-compute
num_units: 0
expose: true
options:
openstack-origin: *openstack-origin
nova-compute-dpdk:
charm: cs:nova-compute
num_units: 0
expose: true
options:
openstack-origin: *openstack-origin
nova-compute-accel:
charm: cs:nova-compute
num_units: 2
expose: true
options:
openstack-origin: *openstack-origin
to: [ "2" ]
contrail-openstack:
charm: ./tf-charms/contrail-openstack
series: bionic
expose: true
num_units: 0
options:
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
contrail-agent:
charm: ./tf-charms/contrail-agent
num_units: 0
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
#vhost-gateway: *vhost-gateway
physical-interface: *agilio-phy
contrail-agent-dpdk:
charm: ./tf-charms/contrail-agent
num_units: 0
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
dpdk: true
dpdk-main-mempool-size: "65536"
dpdk-pmd-txd-size: "2048"
dpdk-pmd-rxd-size: "2048"
dpdk-driver: ""
dpdk-coremask: "1-4"
#vhost-gateway: *vhost-gateway
physical-interface: "nfp_p0"
contrail-analytics:
charm: ./tf-charms/contrail-analytics
num_units: 1
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
control-network: *control-network
docker-registry-insecure: *docker-registry-insecure
to: [ "1" ]
contrail-analyticsdb:
charm: ./tf-charms/contrail-analyticsdb
num_units: 1
series: bionic
expose: true
options:
log-level: "SYS_DEBUG"
cassandra-minimum-diskgb: "4"
cassandra-jvm-extra-opts: "-Xms8g -Xmx8g"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
control-network: *control-network
docker-registry-insecure: *docker-registry-insecure
to: [ "1" ]
contrail-controller:
charm: ./tf-charms/contrail-controller
series: bionic
expose: true
num_units: 1
options:
log-level: "SYS_DEBUG"
cassandra-minimum-diskgb: "4"
cassandra-jvm-extra-opts: "-Xms8g -Xmx8g"
docker-registry: *docker-registry
#docker-user: *docker-user
#docker-password: *docker-password
image-tag: *image-tag
docker-registry-insecure: *docker-registry-insecure
control-network: *control-network
data-network: *data-network
auth-mode: no-auth
to: [ "1" ]
contrail-keystone-auth:
charm: ./tf-charms/contrail-keystone-auth
series: bionic
expose: true
num_units: 1
to: [ "lxd:1" ]
agilio-vrouter5:
charm: ./charm-agilio-vrt-5-37
expose: true
options:
virtioforwarder-coremask: *virtioforwarder-coremask
agilio-registry: *agilio-registry
agilio-insecure: *agilio-insecure
agilio-image-tag: *agilio-image-tag
agilio-user: *agilio-user
agilio-password: *agilio-password
relations:
- [ "ubuntu", "ntp" ]
- [ "neutron-api", "ntp" ]
- [ "keystone", "mysql" ]
- [ "glance", "mysql" ]
- [ "glance", "keystone" ]
- [ "nova-cloud-controller:shared-db", "mysql:shared-db" ]
- [ "nova-cloud-controller:amqp", "rabbitmq-server:amqp" ]
- [ "nova-cloud-controller", "keystone" ]
- [ "nova-cloud-controller", "glance" ]
- [ "neutron-api", "mysql" ]
- [ "neutron-api", "rabbitmq-server" ]
- [ "neutron-api", "nova-cloud-controller" ]
- [ "neutron-api", "keystone" ]
- [ "nova-compute:amqp", "rabbitmq-server:amqp" ]
- [ "nova-compute", "glance" ]
- [ "nova-compute", "nova-cloud-controller" ]
- [ "nova-compute", "ntp" ]
- [ "openstack-dashboard:identity-service", "keystone" ]
- [ "contrail-keystone-auth", "keystone" ]
- [ "contrail-controller", "contrail-keystone-auth" ]
- [ "contrail-analytics", "contrail-analyticsdb" ]
- [ "contrail-controller", "contrail-analytics" ]
- [ "contrail-controller", "contrail-analyticsdb" ]
- [ "contrail-openstack", "nova-compute" ]
- [ "contrail-openstack", "neutron-api" ]
- [ "contrail-openstack", "contrail-controller" ]
- [ "contrail-agent:juju-info", "nova-compute:juju-info" ]
- [ "contrail-agent", "contrail-controller"]
- [ "contrail-agent-dpdk:juju-info", "nova-compute-dpdk:juju-info" ]
- [ "contrail-agent-dpdk", "contrail-controller"]
- [ "nova-compute-dpdk:amqp", "rabbitmq-server:amqp" ]
- [ "nova-compute-dpdk", "glance" ]
- [ "nova-compute-dpdk", "nova-cloud-controller" ]
- [ "nova-compute-dpdk", "ntp" ]
- [ "contrail-openstack", "nova-compute-dpdk" ]
- [ "contrail-agent:juju-info", "nova-compute-accel:juju-info" ]
- [ "nova-compute-accel:amqp", "rabbitmq-server:amqp" ]
- [ "nova-compute-accel", "glance" ]
- [ "nova-compute-accel", "nova-cloud-controller" ]
- [ "nova-compute-accel", "ntp" ]
- [ "contrail-openstack", "nova-compute-accel" ]
- [ "agilio-vrouter5:juju-info", "nova-compute-accel:juju-info" ]
- [ "heat:shared-db" , "mysql:shared-db" ]
- [ "heat:amqp" , "rabbitmq-server:amqp" ]
- [ "heat:identity-service" , "keystone:identity-service" ]
- [ "contrail-openstack:heat-plugin" , "heat:heat-plugin-subordinate" ]
You can create or modify the Contrail charm deployment bundle YAML file to suit your deployment.
Each Contrail charm has a specific set of options. The options you choose depend on the charms you
select. For more information on the options that are available, see “Options for Juju Charms” on
page 412.
You can check the status of the deployment by using the juju status command.
Based on your deployment requirements, you can enable the following configuration statements:
• contrail-agent
• contrail-analytics
• contrail-analyticsdb
• contrail-controller
• contrail-keystone-auth
• contrail-openstack
Release   Description
2011      Starting in Contrail Networking Release 2011, Contrail Networking supports Netronome Agilio CX SmartNICs for Contrail Networking deployment with Juju charms.
CHAPTER 8
IN THIS CHAPTER
Example Instances.yml for Contrail and Contrail Insights OpenStack Deployment | 493
Starting with Contrail Release 5.0.1, the combined installation of Contrail and Contrail Insights allows
Contrail monitoring by Contrail Insights. The following topics are referenced for the deployment.
The following software and hardware requirements apply to the combined Contrail, Contrail Insights, and
Kolla/Ocata deployment.
Software Requirements
• Contrail Insights Targets: Refer to “Software Requirements” in Contrail Insights General Requirements.
• Targets running both Contrail and Contrail Insights: CentOS 7.5; Ansible 2.4.2 for the installer.
Hardware Requirements
• It is strongly recommended that the Contrail Insights controller and Contrail services be installed on
separate targets.
• See “Installing a Contrail Cluster using Contrail Command and instances.yml” on page 73 and “Contrail
Insights Installation and Configuration for OpenStack” on page 497 for specifics about requirements for
installation.
In Contrail Release 5.1, nodes on which Contrail, Contrail Insights (formerly AppFormix), or both are installed
are referred to as targets. The host from which Ansible is run is referred to as the base host. A base host
can also be a target, meaning you can install either Contrail, Contrail Insights, or both on a base host.
1. Image all the Contrail targets with CentOS 7.5 kernel 3.10.0-862.3.2.el7.x86_64.
2. Install the necessary platform software on the targets on which Contrail Insights Controller or Contrail
Insights Agent is going to be installed. See the instructions in “Contrail Insights Installation and
Configuration for OpenStack” on page 497.
Workflow for preparing the base host consists of the following steps:
1. Install Ansible 2.4.2 on the base host. See “Set Up the Bare Host” in Installing Contrail with OpenStack
and Kolla Ansible.
2. Set up the base host. See “Set Up the Base Host” in Installing Contrail with OpenStack and Kolla Ansible.
This section includes information about creating the Ansible instances.yaml file.
3. On the base host, create a single Ansible instances.yaml file that lists inventory for both Contrail and
Contrail Insights deployments. An example of the single instances.yaml file is provided later in this
section.
• The Contrail inventory section of the instances.yaml file is configured according to guidelines in the
section “Set Up the Base Host” in Installing Contrail with OpenStack and Kolla Ansible .
• The Contrail Insights inventory section of the instances.yaml file is configured according to guidelines
in “Contrail Insights Installation and Configuration for OpenStack” on page 497.
It is strongly recommended that Contrail Insights Controller and Contrail services be installed on separate
target nodes. However, if Contrail Insights Controller and Contrail services are installed on the same target,
the following configuration is required to resolve port conflicts.
The following Contrail Insights ports must be reconfigured in the Contrail Insights group-vars section of
the instances.yaml file.
• appformix_datamanager_port_http
• appformix_datamanager_port_https
• appformix_haproxy_datamanager_port_http
• appformix_haproxy_datamanager_port_https
• appformix_datamanager_port_http:8200
Enable the following plugins by including them in the Contrail Insights group-vars section of the
instances.yaml file.
Connections to Contrail are configured by providing complete URLs by which to access the analytics and
configuration API services.
• contrail_cluster_name: Contrail_Clusterxxx
A name by which the Contrail instance will be displayed in the Dashboard. If not specified, this variable
has a default value of default_contrail_cluster.
• contrail_analytics_url: http://analytics-api-node-ip-address:8081
URL for the Contrail analytics API. The URL should only specify the protocol, address, and optionally
port.
• contrail_config_url: http://contrail-config-api-server-api-address:8082
URL for the Contrail configuration API. The URL should only specify the protocol, address, and optionally
port.
NOTE: The IP address specified for contrail monitoring corresponds to one of the IPs listed in
the Contrail roles for config and analytics. Typically, the first active IP address is selected.
The IP addresses to monitor can be added in the compute section of Contrail Insights in the instances.yaml
file. List the IP addresses of hosts with a vrouter role in that section of the instances.yaml file.
The openstack_controller hosts section must be configured with at least one host. An example section is
shown below.
openstack_controller:
hosts:
<ip-address>:
ansible_connection: ssh
ansible_ssh_user: <root user>
ansible_sudo_pass: <contrail password>
• openstack_platform_enabled: true
• appformix_remote_host_monitoring_enabled: true
(Required for Contrail Insights and Contrail Insights Flows installations in Release 2003 and earlier.) You
must have an appropriate license that supports the combined deployment of Contrail with Contrail Insights
for OpenStack. To obtain a license, send an email to [email protected]. Also, the
following group_vars entry in the Contrail Insights section of the instances.yaml file must point to this license:
• appformix_license: /path/appformix-contrail-license-file.sig
This is the path where the license is placed on the bare host so that the license can be deployed on the
target.
Refer to the section “Install Contrail and Kolla requirements” and the section “Deploying contrail and Kolla
containers” in Installing Contrail with OpenStack and Kolla Ansible and execute the ansible-playbook.
The following are examples of the Contrail playbook invocations from the contrail-ansible-deployer
directory, sketched after this list:
• Install OpenStack:
• Install Contrail:
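A sketch of both invocations, assuming the inventory layout described in Installing Contrail with OpenStack and Kolla Ansible; adjust the inventory path to your setup.

ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_openstack.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml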
Also at this point, obtain a list of IP addresses to include in the compute section of Contrail Insights in the
instances.yaml file. Refer to Compute monitoring: Listing IP addresses to monitor in the compute section
of Contrail Insights in the instances.yaml file.
Refer to Installing AppFormix for OpenStack and validate the target configuration requirements and inventory
parameters for Contrail Insights Controller and Agent. In place of -i inventory/, use -i
/absolute-file-path/instances.yaml.
The following is an example of the Contrail Insights playbook invocation from the appformix-2.18.x directory
where appformix_openstack.yml is located:
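A sketch of the invocation; substitute the absolute path to your combined instances.yaml file.

ansible-playbook -i /absolute-file-path/instances.yaml appformix_openstack.yml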
Contrail Insights service monitoring Dashboard for a Contrail cluster displays the overall state of the cluster
and its components. For more information, see “Dashboard” in “Contrail Monitoring” in the Contrail Insights
User Guide.
http://<controller-IP-address>:9000
• Contrail Insights 2.17 and earlier are not supported with Ansible 2.4.2. The combined Contrail
and Contrail Insights installation is not validated on these earlier releases.
• To view and monitor Contrail in the Contrail Insights Management Infrastructure dashboard, the license
used in the deployment must include support for Contrail.
• For Contrail Insights OpenStack HA installation steps, see “Contrail Insights Installation for OpenStack
in HA” on page 511.
See Installing Contrail with OpenStack and Kolla Ansible and “Contrail Insights Installation and Configuration
for OpenStack” on page 497 for specific inventory file details:
The following items are part of the all section in the instances.yaml file for Contrail Insights:
all:
children:
openstack_controller:
hosts:
<ip-address>:
ansible_connection: ssh
ansible_ssh_user: <ssh-user>
ansible_sudo_pass: <sudo-password>
The following items are part of the vars section in the instances.yaml file for Contrail Insights:
openstack_platform_enabled: true
##License must support Contrail and Openstack
appformix_license: /path/license-file.sig
contrail_cluster_name: 'Contrail_Cluster'
contrail_analytics_url: 'http://<contrail-analytics-api-server-ip-address>:8081'
contrail_config_url: 'http://<contrail-config-api-server-ip-address>:8082'
# Defaults from roles/appformix_defaults/defaults/main.yml are overwritten below
appformix_datamanager_port_http: "{{ (appformix_scale_setup_flag|bool) | ternary(28200, 8200) }}"
appformix_datamanager_port_https: "{{ (appformix_scale_setup_flag|bool) | ternary(28201, 8201) }}"
appformix_haproxy_datamanager_port_http: 8200
appformix_haproxy_datamanager_port_https: 8201
appformix_plugins: '{{ appformix_contrail_factory_plugins }} + {{ appformix_network_device_factory_plugins }}'
There is one instances.yaml file for the Contrail and Contrail Insights combined installation.
vrouter:
openstack:
openstack_compute:
global_configuration:
CONTAINER_REGISTRY: <ci-repository-URL>:5000
REGISTRY_PRIVATE_INSECURE: True
contrail_configuration:
#UPGRADE_KERNEL: true
CONTRAIL_VERSION: <contrail-version>
#CONTRAIL_VERSION: latest
CLOUD_ORCHESTRATOR: openstack
VROUTER_GATEWAY: <gateway-ip-address>
RABBITMQ_NODE_PORT: 5673
PHYSICAL_INTERFACE: <interface-name>
AUTH_MODE: keystone
KEYSTONE_AUTH_HOST: <keystone-ip-address>
KEYSTONE_AUTH_URL_VERSION: /v3
CONFIG_NODEMGR__DEFAULTS__minimum_diskGB: 2
DATABASE_NODEMGR__DEFAULTS__minimum_diskGB: 2
kolla_config:
kolla_globals:
network_interface: <interface-name>
kolla_internal_vip_address: <ip-address>
contrail_api_interface_address: <ip-address>
enable_haproxy: no
enable_swift: no
kolla_passwords:
keystone_admin_password: <password>
compute:
hosts:
#List IP addresses of Contrail roles to be monitored here
<<IP-addresses>>:
ansible_connection: ssh
ansible_ssh_user: <ssh-user>
ansible_sudo_pass: <sudo-password>
bare_host:
hosts:
<ip-address>:
ansible_connection: ssh
ansible_ssh_user: <ssh-user>
ansible_sudo_pass: <sudo-password>
#If host is local
<ip-address>:
ansible_connection: local
vars:
appformix_docker_images:
- /opt/software/appformix/contrail-insights-platform-images-<version>.tar.gz
- /opt/software/appformix/contrail-insights-dependencies-images-<version>.tar.gz
- /opt/software/appformix/contrail-insights-network_device-images-<version>.tar.gz
- /opt/software/appformix/contrail-insights-openstack-images-<version>.tar.gz
openstack_platform_enabled: true
# appformix_license: /opt/software/openstack_appformix/<appformix-contrail-license-file>.sig
appformix_license: /opt/software/configs/contrail.sig
appformix_docker_registry: registry.appformix.com/
appformix_version: <version> #Must be 2.18.x or above
appformix_plugins: '{{ appformix_contrail_factory_plugins }} + {{ appformix_network_device_factory_plugins }} + {{ appformix_openstack_factory_plugins }}'
appformix_kvm_instance_discovery: true
# For enabling pre-requisites for package installation
appformix_network_device_monitoring_enabled: true
# For running the appformix-network-device-adapter
network_device_discovery_enabled: true
appformix_remote_host_monitoring_enabled: true
appformix_jti_network_device_monitoring_enabled: true
contrail_cluster_name: 'Contrail_Cluster'
contrail_analytics_url: 'http://<contrail-analytics-api-server-IP-address>:8081'
contrail_config_url: 'http://<contrail-config-api-server-IP-address>:8082'
# Defaults overwritten below were defined in roles/appformix_defaults/defaults/main.yml
appformix_datamanager_port_http: "{{ (appformix_scale_setup_flag|bool) | ternary(28200, 8200) }}"
appformix_datamanager_port_https: "{{ (appformix_scale_setup_flag|bool) | ternary(28201, 8201) }}"
appformix_haproxy_datamanager_port_http: 8200
appformix_haproxy_datamanager_port_https: 8201
NOTE: Replace <contrail_version> with the correct contrail_container_tag value for your Contrail
release. The respective contrail_container_tag values are listed in README Access to Contrail
Registry 21XX.
IN THIS SECTION
Requirements | 499
Contrail Insights provides resource control and visibility for hosts and virtual machines in an OpenStack
cluster. This topic explains how to install Contrail Insights for an OpenStack cluster. Contrail Insights Agent
runs on a host to monitor resource consumption of the host itself and the virtual machines executing on
that host. See the Contrail Insights General Requirements and Contrail Insights Agent Requirements before
reading this section. Figure 52 on page 498 shows the Contrail Insights architecture with OpenStack.
[Figure 52: Contrail Insights architecture with OpenStack, showing the Dashboard (client) and OpenStack Dashboard (server); the Contrail Insights Platform components (Controller, DataManager, MongoDB, Redis, message bus, dnsmasq, and the OpenStack adapter); Contrail Insights Agents on hosts running VMs; and the OpenStack Keystone and Nova services.]
• An adapter discovers platform-specific resources and configures the Contrail Insights Platform.
Requirements
The following are the requirements for installing Contrail Insights for OpenStack.
• Supported OpenStack versions: Icehouse, Juno, Kilo, Liberty, Mitaka, Newton, Ocata.
• See Contrail Insights General Requirements for software and hardware requirements.
• API access to OpenStack services: Cinder, Glance, Heat, Keystone, Neutron, Nova, and Swift. Contrail
Insights reads information from these services. The administrator account must provide sufficient
permission for read-only API calls. Further, Contrail Insights Platform must be able to open connections
to the host and port on which these services listen. Contrail Insights can be configured to use the admin,
internal, or public service endpoints. See OS_ENDPOINT_TYPE in the OpenStack environment variables in
step 10 of the section Installing Contrail Insights.
NOTE: Upgrade notice: Starting with Contrail Insights 3.2.6, the requirement for a license file
is removed. If you are installing a version earlier than 3.2.6, a license is required prior to
installation.
You can obtain a license key from [email protected]. Provide the following
information in your request:
Group name:
Target customers or use:
Cluster type: OpenStack
Number of hosts:
Number of instances:
1. Configure OpenStack.
Configure OpenStack
To create an administrator account for Contrail Insights, perform the following steps in the OpenStack
Horizon dashboard:
Ansible is used to deploy the software to the compute node(s) and the Platform Host. An Ansible inventory
file describes groups of hosts in your cluster. Define the inventory in a separate location than the release
files, so that the inventory may be reused for an upgrade.
Contrail Insights requires two groups compute and appformix_controller. Each group lists the hosts in that
group. Only the agent is installed on the compute hosts. The agent and the Contrail Insights Platform
services are installed on the appformix_controller host.
Optionally, an openstack_controller group can be defined. The agent is installed on hosts in this group to
monitor the hosts that execute OpenStack controller services. (New in v2.3.0)
Create a directory inventory (or name of your choice) that contains a hosts file and a group_vars/all file.
For example:
inventory/
hosts # inventory file
group_vars/
all # configuration variables
The inventory/hosts file contains the list of hosts in each group. For example:
[appformix_controller]
appformix01
[compute]
compute01
compute02
compute03
[openstack_controller]
openstack_infra01
openstack_infra02
In the inventory/group_vars/all file, configure the following variables for installation of Contrail Insights
for OpenStack.
appformix_docker_images:
- /path/to/contrail-insights-platform-images-<version>.tar.gz
- /path/to/contrail-insights-dependencies-images<version>.tar.gz
- /path/to/contrail-insights-openstack-images-<version>.tar.gz
Refer to Platform Dependencies for steps to install dependencies on a Platform Host that cannot fetch files
from the Internet.
http_proxy_url: 'http://proxy.example.com:1234'
prerequisites_env:
The prerequisites_env is a dictionary that defines environment variables that will be used when invoking
commands to install prerequisites. In the above example, the same proxy URL
(http://proxy.example.com:1234) is used for both the http_proxy and https_proxy environment variables
because the single proxy can be used to access HTTP and HTTPS URLs. As a convenience, the proxy URL
is defined once in the http_proxy_url variable. Adjust prerequisites_env as necessary for the proxy
requirements of your network.
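A sketch of how these two settings fit together in inventory/group_vars/all, following the description above; the proxy URL is a placeholder.

http_proxy_url: 'http://proxy.example.com:1234'
prerequisites_env:
  http_proxy: '{{ http_proxy_url }}'
  https_proxy: '{{ http_proxy_url }}'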
1. Install Ansible on the Contrail Insights Platform node. Ansible will install docker and docker-py on the
platform.
#Ubuntu
apt-get install python-pip python-dev #Installs Pip
pip install ansible==2.3.0 #Installs Ansible 2.3
sudo apt-get install build-essential libssl-dev libffi-dev #Dependencies
pip install markupsafe httplib2 requests #Dependencies
#RHEL/CentOS
yum install epel-release #Enable EPEL repository
In case the above command does not work, manually download and install the epel-release package with
one of the following commands, depending on your system’s version:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
NOTE: For RHEL, the following IPtables rule is needed to access port 9000.
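A representative rule that opens TCP port 9000; adjust the chain and persistence mechanism to your environment.

iptables -I INPUT -p tcp --dport 9000 -j ACCEPT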
This installs the KVM (Kernel-based Virtual Machine) and associated packages for collecting data from
virtual machines running on the compute nodes.
3. On the compute nodes where Contrail Insights Agent runs, verify that python virtualenv is installed.
#Ubuntu
apt-get install -y python-pip
pip install virtualenv
#RHEL/CentOS
yum install -y python-pip
pip install virtualenv
4. Enable passwordless login so that the Contrail Insights Platform node can use Ansible to install agents on
the nodes. Create an SSH public key on the node where the Ansible playbooks are run, and then copy the
key to the appformix_controller node. For example:
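A sketch; the user name and address are placeholders for the values in your inventory.

ssh-keygen -t rsa -b 2048
ssh-copy-id <ssh-user>@<appformix-controller-ip>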
# appformix_controller host
#
# Host variables can be defined to control Contrail Insights configuration
parameters
# for particular host. For example, to specify the directory in which MongoDB
# data is stored on hostname1 (the default is /opt/appformix/mongo/data):
#
# hostname1 appformix_mongo_data_dir=/var/lib/appformix/mongo
#
# For variables with same value for all appformix_controller hosts, set group
# variables below.
#
[appformix_controller]
172.16.70.119
[openstack_controller]
172.16.70.120
6. Verify that all the hosts listed in the inventory file are reachable from the Contrail Insights Platform.
ansible -i inventory -m ping all # Pings all the hosts in the inventory file
mkdir group_vars
8. Download the Contrail Insights installation packages from software downloads to the Contrail Insights
Platform node. Get the following files:
contrail-insights-<version>.tar.gz
contrail-insights-dependencies-images-<version>.tar.gz
contrail-insights-openstack-images-<version>.tar.gz
contrail-insights-platform-images-<version>.tar.gz
contrail-insights-network_device-images-<version>.tar.gz
If you are installing a version earlier than 3.2.6, copy the Contrail Insights license file to the Contrail
Insights Platform node.
openstack_platform_enabled: true
appformix_manager_version: <version>
appformix_docker_images:
- /path/to/contrail-insights-platform-images-<version>.tar.gz
- /path/to/contrail-insights-dependencies-images-<version>.tar.gz
- /path/to/contrail-insights-openstack-images-<version>.tar.gz
If you are installing a version earlier than 3.2.6, include the path to the Contrail Insights license file in
group_vars/all:
appformix_license: path/to/<contrail-insights-license-file>.sig
10. Contrail Insights must be configured to communicate with the OpenStack cluster. The Ansible playbooks
use OpenStack environment variables to configure Contrail Insights with details of the OpenStack
environment.
11. Source the openrc file that contains the 10environment variables and ensure the variables are in the
environment of the shell from which the Ansible-playbooks are going to be executed. Then, install
Contrail Insights by executing the appformix_openstack.yml playbook. Specify the path to the inventory
directory that you created earlier.
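A hedged example of this step, assuming the openrc file is in the current directory and the inventory directory is named inventory, would be:
# Load the OpenStack credentials into the current shell
source ./openrc
# Install Contrail Insights; the playbook path may differ in your distribution
ansible-playbook -i inventory appformix_openstack.yml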
12. Open the Contrail Insights Dashboard in a Web browser. For example:
http://<contrail-insights-platform-node-ip>:9000
Select Skip Installation because the initial configuration was performed by Ansible using the OpenStack
environment variables in step 10. Log in to the Contrail Insights Dashboard using OpenStack Keystone credentials.
1. To install in a Keystone SSL-enabled cluster, include the following variables in the group_vars/all file:
appformix_keystone_ssl_ca: '/path/to/keystone_ca.crt'
Contrail Insights Ansible will distribute this Keystone CA to all of the Contrail Insights Platform nodes
and ask Contrail Insights components to talk to Keystone using this CA file with SSL enabled.
2. To enable network device monitoring in the cluster, include the following in the group_vars/all file:
3. To install Contrail Insights certified plug-ins on the cluster, include the following variables in the
group_vars/all file:
For example:
appformix_plugins:
- { plugin_info: 'certified_plugins/cassandra_node_usage.json' }
- { plugin_info: 'certified_plugins/contrail_vrouter.json' }
- { plugin_info: 'certified_plugins/zookeeper_usage.json' }
- { plugin_info: 'certified_plugins/heavy_hitters.json' }
appformix_openstack_log_plugins:
- { plugin_info: 'certified_plugins/cinder_api_logparser.json',
log_file_path: '/var/log/cinder/cinder-api.log' }
- { plugin_info: 'certified_plugins/glance_logparser.json',
log_file_path: '/var/log/glance/glance-api.log' }
- { plugin_info: 'certified_plugins/keystone_logparser.json',
log_file_path:
'/var/log/apache2/keystone_access.log,/var/log/httpd/keystone_wsgi_admin_access.log,/var/log/keystone/keystone.log'
}
For a list of all Contrail Insights certified plug-ins that can be installed, look for the entries starting with
plugin_info in the file roles/appformix_defaults/defaults/main.yml.
The OpenStack log parser plug-ins parse the API log files of each OpenStack service to collect metrics
about API calls and response status codes. To install these plug-ins, add them to the variable
appformix_openstack_log_plugins in group_vars/all, as shown above. Each plug-in entry in this list
requires a parameter called log_file_path to be specified. This parameter should be set to the complete
path to the service's API log file on the OpenStack Controller node(s). Multiple comma-separated paths
may be specified.
To identify the right log file to be specified in log_file_path, look for entries like the following, containing
a client IP address, REST call type, and response status code:
Default locations for these files are listed in the variable appformix_openstack_log_factory_plugins in
roles/appformix_defaults/defaults/main.yml.
4. In Contrail Insights version 2.19.8, a timeout value can be configured for connecting to OpenStack
services. The default value of this timeout is 10 seconds and can be changed to a value between 5 and
20 seconds (both inclusive). To change the value, add the following variable to group_vars/all:
To modify the timeout value after the Contrail Insights Platform has been installed, add the variable to
the group_vars/all file as described above and re-run the Contrail Insights installation playbook. Restart
the appformix-openstack-adapter Docker container after the playbook has completed:
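The restart can be done with a standard Docker command; the container name below is taken from the text above:
docker restart appformix-openstack-adapter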
Upgrade Notices
NOTE: In Contrail Insights version 3.2.0, support for discovering OpenStack Octavia Load
Balancer services is added. Contrail Insights only collects load balancer state information, such
as provisioning_status and operating_status, as well as flavor information. To enable this service
discovery, provide Octavia service's endpoint as variable appformix_octavia_endpoint_url in the
group_vars/all file. For example:
appformix_octavia_endpoint_url: http://10.1.1.1:9876
Chargeback costs can also be configured for the Octavia Load Balancer services. See Configure
Load Balancer Costs.
When upgrading from version 2.18.x to version 3.0, make the following changes in the group_vars/all file:
In version 2.18.x:
appformix_openstack_factory_plugins:
- { plugin_info: 'certified_plugins/cinder_api_logparser.json', log_file_path:
'/var/log/cinder/cinder-api.log'}
- { plugin_info: 'certified_plugins/glance_logparser.json',log_file_path:
'/var/log/glance/api.log'}
- { plugin_info: 'certified_plugins/heavy_hitters.json' }
- { plugin_info: 'certified_plugins/keystone_logparser.json', log_file_path:
'/var/log/keystone/keystone.log'}
- { plugin_info: 'certified_plugins/neutron_logparser.json', log_file_path:
'/var/log/neutron/server.log'}
- { plugin_info: 'certified_plugins/nova_logparser.json', log_file_path:
'/var/log/nova/nova-api.log'}
In version 3.0.x:
appformix_plugins:
- { plugin_info: 'certified_plugins/heavy_hitters.json' }
appformix_openstack_log_plugins:
- { plugin_info: 'certified_plugins/cinder_api_logparser.json', log_file_path:
'/var/log/cinder/cinder-api.log'}
- { plugin_info: 'certified_plugins/glance_logparser.json',log_file_path:
'/var/log/glance/api.log'}
- { plugin_info: 'certified_plugins/keystone_logparser.json', log_file_path:
'/var/log/keystone/keystone.log'}
- { plugin_info: 'certified_plugins/neutron_logparser.json', log_file_path:
'/var/log/neutron/server.log'}
- { plugin_info: 'certified_plugins/nova_logparser.json', log_file_path:
'/var/log/nova/nova-api.log'}
Copy the openrc file from the OpenStack controller node (/etc/contrail/openstackrc) to the Contrail
Insights Platform node and source it, so that the adapter can authenticate with administrator privileges
over the controller services. The file should look like the following:
export OS_USERNAME=admin
export OS_PASSWORD=<admin-password>
export OS_AUTH_URL=http://172.16.80.2:5000/v2.0/
export OS_NO_CACHE=1
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
1. Edit the inventory file and add appformix_state=absent to each node that you want to remove from
Contrail Insights.
2. Run Ansible with the edited inventory file. This will remove the node and all its resources from Contrail
Insights.
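An illustrative sketch of these two steps (the host address is a placeholder; the playbook name matches the one used for installation):
# inventory: mark the compute node for removal
172.16.70.121 appformix_state=absent

# re-run the installation playbook with the edited inventory
ansible-playbook -i inventory appformix_openstack.yml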
Contrail Insights can be easily upgraded by running the appformix_openstack.yml playbook of the new
release. Follow the same procedure as the installation.
To uninstall Contrail Insights and destroy all data, execute the following command:
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 511
HA Design Overview
Contrail Insights Platform can be deployed to multiple hosts for high availability (HA). Platform services
continue to communicate using an API proxy that listens on a virtual IP address. Only one host will have
the virtual IP at a time, and so only one API proxy will be the “active” API proxy at a time.
The API proxy is implemented by HAProxy. HAProxy is configured to use services in active-standby or
load-balanced active-active mode, depending on the service.
At most, one host will be assigned the virtual IP at any given time. This host is considered the “active”
HAProxy. The virtual IP address is assigned to a host by keepalived, which uses the VRRP protocol for election.
Services are replicated in different modes of operation. In the “active-passive” mode, HAProxy sends all
requests to a single “active” instance of a service. If the service fails, HAProxy selects a new “active” instance
from the other hosts and begins to send requests to the new “active” service. In the “active-active” mode,
HAProxy load balances requests across hosts on which a service is operational.
Contrail Insights Platform can be deployed in a 3-node, 5-node, or 7-node configuration for high availability.
Requirements
• For each host, on which Contrail Insights Platform is installed, see Contrail Insights General Requirements
for hardware and software requirements. For a list of Contrail Insights Agent supported platforms, see
Contrail Insights Agent Requirements.
• NOTE: Upgrade notice: Starting with Contrail Insights 3.2.6, the requirement for a license file
is removed. If you are installing a version earlier than 3.2.6, a license is required prior to
installation.
You can obtain a license key from [email protected]. Provide the following
information in your request:
Group name:
Target customers or use:
Cluster type: OpenStack
Number of hosts:
Number of instances:
Connectivity
• One virtual IP address to be shared among all the Platform Hosts. This IP address should not be used
by any host before installation. It should have reachability from all the Platform Hosts after installation.
• Dashboard client (in browser) must have IP connectivity to the virtual IP.
• IP addresses for each Platform Host for installation and for services running on these hosts to
communicate.
• keepalived_vrrp_interface for each Platform Host, which is used for assigning the virtual IP address.
Details on how to configure this interface are described in the sample_inventory section.
1. Download the Contrail Insights installation packages from software downloads to the Contrail Insights
Platform node. Get the following files:
contrail-insights-<version>.tar.gz
contrail-insights-dependencies-images-<version>.tar.gz
contrail-insights-openstack-images-<version>.tar.gz
contrail-insights-platform-images-<version>.tar.gz
contrail-insights-network_device-images-<version>.tar.gz
If you are installing a version earlier than 3.2.6, copy the Contrail Insights license file to the Contrail
Insights Platform node.
2. Install Ansible on the installer node. Ansible will install docker and the docker Python package on the
appformix_controller.
3. Install Python and python-pip on all the Platform hosts so that Ansible can run between the installer
node and the appformix_controller node.
4. Install python pip package on the hosts where Contrail Insights Agents run.
5. To enable passwordless login to all Platform hosts by Ansible, create an SSH public key on the node
where Ansible playbooks are run and then copy the key to all the Platform hosts.
6. Use the sample_inventory file as a template to create a host file. Add all the Platform hosts and compute
hosts details.
203.0.113.119 keepalived_vrrp_interface=eth0
203.0.113.120 keepalived_vrrp_interface=eth0
203.0.113.121 keepalived_vrrp_interface=eth0
NOTE: In the case of a 5-node or 7-node deployment, list all the nodes under
appformix_controller.
7. At top-level of the distribution, create a directory named group_vars and then create a file named all
inside this directory.
# mkdir group_vars
# touch group_vars/all
appformix_vip: <ip-address>
appformix_docker_images:
- /path/to/contrail-insights-platform-images-<version>.tar.gz
- /path/to/contrail-insights-dependencies-images-<version>.tar.gz
- /path/to/contrail-insights-openstack-images-<version>.tar.gz
If you are installing a version earlier than 3.2.6, include the path to the Contrail Insights license file in
group_vars/all:
appformix_license: path/to/<contrail-insights-license-file>.sig
8. Copy and source the openrc file from the OpenStack controller node (/etc/contrail/openrc) to the
appformix_controller to authenticate the adapter to access admin privileges over the controller services.
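For example, assuming SSH access to the OpenStack controller, the copy and source steps could look like this (the address is a placeholder):
scp root@<openstack-controller-ip>:/etc/contrail/openrc .
source ./openrc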
NOTE: In Contrail Insights version 3.2.0, support for discovering OpenStack Octavia Load
Balancer services is added. Contrail Insights only collects load balancer state information,
such as provisioning_status and operating_status, as well as flavor information. To enable
this service discovery, provide Octavia service's endpoint as variable
appformix_octavia_endpoint_url in the group_vars/all file. For example:
appformix_octavia_endpoint_url: http://10.1.1.1:9876
Chargeback costs can also be configured for the Octavia Load Balancer services. See Configure
Load Balancer Costs.
10. If you are running the playbooks as the root user, this step can be skipped. As a non-root user (for example,
“ubuntu”), the user needs access to the docker user group. The following command adds the
user to the docker group.
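A minimal sketch of this step, assuming the non-root user is “ubuntu”:
# Add the user to the docker group; log out and back in for the change to take effect
sudo usermod -aG docker ubuntu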
RELATED DOCUMENTATION
CHAPTER 9
IN THIS CHAPTER
IN THIS SECTION
Configuration | 519
Contrail Networking supports role and resource-based access control (RBAC) with API operation-level
access control.
The RBAC implementation relies on user credentials obtained from Keystone from a token present in an
API request. Credentials include user, role, tenant, and domain information.
API-level access is controlled by a list of rules. The attachment points for the rules include
global-system-config, domain, and project. Resource-level access is controlled by permissions embedded
in the object.
If the RBAC feature is enabled, the API server requires a valid token to be present in the X-Auth-Token
of any incoming request. The API server trades the token for user credentials (role, domain, project, and
so on) from Keystone.
Where:
field—Any property or reference within the resource. The field option can be multilevel, for example,
network.ipam.host-routes can be used to identify multiple levels. The field is optional, so in its absence,
the create, read, update, and delete (CRUD) operation refers to the entire resource.
Each rule also specifies the list of roles and their corresponding permissions as a subset of the CRUD
operations.
The following is an example access control list (ACL) object for a project in which the admin and any users
with the Development role can perform CRUD operations on the network in a project. However, only the
admin role can perform CRUD operations for policy and IP address management (IPAM) inside a network.
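The ACL object itself is not reproduced here. The following is an illustrative sketch only, using the rule_object/rule_field/rule_perms structure of the api-access-list rule type as we understand it (verify the exact field names against your release), to express the policy described above:
# Illustrative api-access-list rule entries (not an exact object dump)
- rule_object: virtual-network
  rule_field: null
  rule_perms:
    - { role_name: admin, role_crud: CRUD }
    - { role_name: Development, role_crud: CRUD }
- rule_object: virtual-network
  rule_field: network_policy_refs    # policy inside a network (assumed field name)
  rule_perms:
    - { role_name: admin, role_crud: CRUD }
- rule_object: virtual-network
  rule_field: network_ipam_refs      # IPAM inside a network (assumed field name)
  rule_perms:
    - { role_name: admin, role_crud: CRUD }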
• The rule set for validation is the union of rules from the ACL attached to:
• User project
• User domain
• Default domain
• Access is only granted if a rule in the combined rule set allows access.
• An ACL object can be shared within a domain. Therefore, multiple projects can point to the same ACL
object. You can make an ACL object the default.
The perms2 permission property of an object allows fine-grained access control per resource.
owner —This field is populated at the time of creation with the tenant UUID value extracted from the
token.
share list —The share list gets built when the object is selected for sharing with other users. It is a list of
tuples with which the object is shared.
• R—Read object
Configuration
IN THIS SECTION
Parameter: aaa-mode
RBAC is controlled by a parameter named aaa-mode. This parameter is used in place of the multi-tenancy
parameter of previous releases.
If you are using Contrail Ansible Deployer to provision Contrail Networking, set the value for AAA_MODE
to rbac to enable RBAC by default.
contrail_configuration:
.
.
.
AAA_MODE: rbac
If you are installing Contrail Networking from Contrail Command, specify the key and value as AAA_MODE
and rbac, respectively, under the section Contrail Configuration on the Step 2 Provisioning Options
page.
After enabling RBAC, you must restart the neutron server by running the service neutron-server restart
command for the changes to take effect.
NOTE: The multi_tenancy parameter is deprecated, starting with Contrail 3.0. The parameter
should be removed from the configuration. Instead, use the aaa_mode parameter for RBAC to
take effect.
Parameter: cloud_admin_role
A user who is assigned the cloud_admin_role has full access to everything.
This role name is configured with the cloud_admin_role parameter in the API server. The default setting
for the parameter is admin. This role must be configured in Keystone to change the default value.
If a user has the cloud_admin_role in one tenant and also has a role in other tenants, the cloud_admin_role
must be included in those other tenants as well. A user with the cloud_admin_role does not need a role in
every tenant; however, any tenant in which that user does have a role must also include the
cloud_admin_role.
• /etc/contrail/contrail-keystone-auth.conf
• /etc/neutron/plugins/opencontrail/ContrailPlugin.ini
• /etc/contrail/contrail-webui-userauth.js
• API server
• Neutron server
• WebUI
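As a hedged example, on the API server the role name is typically set in the Keystone authentication configuration listed above; the section name and value shown here are assumptions to be verified for your release:
# /etc/contrail/contrail-keystone-auth.conf (illustrative)
[KEYSTONE]
cloud_admin_role = cloud_admin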
A global_read_only_role allows read-only access to all Contrail resources. The global_read_only_role must
be configured in Keystone. The default global_read_only_role is not set to any value.
A global_read_only_role user can use the Contrail Web UI to view the global configuration of Contrail
default settings.
/etc/contrail/contrail-api.conf
global_read_only_role = <new-admin-read-role>
The multi_tenancy parameter is deprecated. Remove the parameter from the configuration and use the
aaa_mode parameter instead for RBAC to take effect.
/etc/contrail/contrail-analytics-api.conf
aaa_mode = no-auth
You can use the Contrail UI to configure RBAC at both the API level and the object level. API level access
control can be configured at the global, domain, and project levels. Object level access is available from
most of the create or edit screens in the Contrail UI.
RBAC Resources
Refer to the OpenStack Administrator Guide for additional information about RBAC:
The analytics API uses role-based access control (RBAC) to provide the ability to access UVE and query
information based on the permissions of the user for the UVE or queried object.
Contrail Networking extends authenticated access so that tenants can view network monitoring information
about the networks for which they have read permissions.
The analytics API can map query and UVE objects to configuration objects on which RBAC rules are applied,
so that read permissions can be verified using the VNC API.
• For statistics queries, annotations are added to the Sandesh file so that indices and tags on statistics
queries can be associated with objects and UVEs. These are used by the contrail-analytics-api to determine
the object level read permissions.
• For flow and log queries, the object read permissions are evaluated for each AND term in the where
query.
• For UVEs list queries (e.g. analytics/uve/virtual-networks/), the contrail-analytics-api gets a list of UVEs
that have read permissions for a given token. For a UVE query for a specific resource (e.g.
analytics/uves/virtual-network/vn1), contrail-analytics-api checks the object level read permissions
using VNC API.
Tenants cannot view system logs and flow logs; those logs are displayed for cloud-admin roles only.
• virtual_network
• virtual_machine
• virtual_machine_interface
• service_instance
• service_chain
• tag
• firewall_policy
• firewall_rule
• address_group
• service_group
• application_policy_set
IN THIS SECTION
Configuring the Control Node with BGP from Contrail Command | 533
An important task after a successful installation is to configure the control node with BGP. This procedure
shows how to configure basic BGP peering between one or more virtual network controller control nodes
and any external BGP speakers. External BGP speakers, such as Juniper Networks MX80 routers, are
needed for connectivity to instances on the virtual network from an external infrastructure or a public
network.
Before you begin, ensure that the following tasks are completed:
• The Contrail Controller base system image has been installed on all servers.
• IP connectivity has been verified between all nodes of the Contrail Controller.
• You have access to Contrail Web User Interface (UI) or Contrail Command User Interface (UI). You can
access the user interface at http://nn.nn.nn.nn:8143, where nn.nn.nn.nn is the IP address of the
configuration node server that is running the contrail service.
These topics provide instructions to configure the Control Node with BGP.
1. From the Contrail Controller module control node (http://nn.nn.nn.nn:8143), select Configure >
Infrastructure > BGP Routers; see Figure 59 on page 528.
A summary screen of the control nodes and BGP routers is displayed; see Figure 60 on page 528.
2. (Optional) The global AS number is 64512 by default. To change the AS number, on the BGP Router
summary screen click the gear wheel and select Edit. In the Edit BGP Router window enter the new
number.
3. To create control nodes and BGP routers, on the BGP Routers summary screen, click the icon. The
Create BGP Router window is displayed; see Figure 61 on page 530.
4. In the Create BGP Router window, click BGP Router to add a new BGP router or click Control Node
to add control nodes.
For each node you want to add, populate the fields with values for your system. See Table 31 on page 530.
Field Description
Vendor ID Required for external peers. Populate with a text identifier, for example,
“MX-0”. (BGP peer only)
Autonomous System Enter the AS number in the range 1-65535 for the node. (BGP peer only)
Hold Time BGP session hold time. The default is 90 seconds; change if needed.
6. To configure an existing node as a peer, select it from the list in the Available Peers box, then click >>
to move it into the Configured Peers box.
7. You can check for peers by selecting Monitor > Infrastructure > Control Nodes; see
Figure 62 on page 532.
In the Control Nodes window, click any hostname in the memory map to view its details; see
Figure 63 on page 532.
8. Click the Peers tab to view the peers of a control node; see Figure 64 on page 533.
1. From the Contrail Command UI, select the Infrastructure > Cluster > Advanced page.
Click the BGP Routers tab. A list of control nodes and BGP routers is displayed. See Figure 65 on page 533.
Figure 65: Infrastructure > Cluster > Advanced > BGP Routers
2. (Optional) The global AS number is 64512 by default. You can change the AS number according to your
requirement on the BGP Router tab by clicking the Edit icon. In the Edit BGP Router tab, enter the AS
number in the range of 1-65,535. You can also enter an AS number in the range of 1-4,294,967,295
when 4 Byte ASN is enabled in Global Config.
3. Click the Create button on the BGP Routers tab. The Create BGP Router window is displayed. See
Figure 66 on page 534.
4. In the Create BGP Router page, populate the fields with values to create your system. See
Table 32 on page 534.
Fields Description
Vendor ID Required for external peers. Populate with a text identifier, for
example, “MX-0”. (BGP peer only)
Autonomous System (AS) Enter autonomous system (AS) number in the range of 1-65,535.
If you enable 4 Byte ASN in Global Config, you can enter 4-byte
AS number in the range of 1-4,294,967,295.
BGP Router ASN Enter the Local-AS number, specific to the associated peers.
Address Families Select the Internet Address Family from the list, for example,
inet-vpn, inet6-vpn, and so on.
Associate Peers
Hold Time Enter the maximum time a BGP session remains active if no
Keepalives are received.
Loop Count Enter the number of times the same ASN can be seen in a
route-update. The route is discarded when the loop count is
exceeded.
State Select the state box when you are associating BGP peers.
Passive Select the passive box to disable the BGP router from advertising
any routes. The BGP router can only receive updates from other
peers in this state.
Advanced Options
BGP Port Enter BGP Port number. The default is 179; change if needed.
Source Port Enter source port number for client side connection.
Hold Time (seconds) BGP session hold time. The default is 90 seconds; change if
needed.
Admin State Select the Admin state box to enable the state as UP and deselect
the box to disable the state to DOWN.
Control Node Zone Select the required control node zone from the list.
Physical Router Select the physical router from the list.
6. You can check for peers and details about the control nodes by selecting Infrastructure > Cluster >
Control Nodes. Click the desired node to check the details on Summary and Detailed Stats page.
RELATED DOCUMENTATION
Contrail supports MD5 authentication for BGP peering based on RFC 2385.
This option allows BGP to protect itself against the introduction of spoofed TCP segments into the
connection stream. Both of the BGP peers must be configured with the same MD5 key. Once configured,
each BGP peer adds a 16-byte MD5 digest to the TCP header of every segment that it sends. This digest
is produced by applying the MD5 algorithm on various parts of the TCP segment. Upon receiving a signed
segment, the receiver validates it by calculating its own digest from the same data (using its own key) and
compares the two digests. For valid segments, the comparison is successful since both sides know the key.
The following are ways to enable BGP MD5 authentication and set the keys on the Contrail node.
1. If the md5 key is not included in the provisioning, and the node is already provisioned, you can run the
following script with an argument for md5:
contrail-controller/src/config/utils/provision_control.py
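A hedged example invocation is shown below; the --md5, --host_name, --host_ip, and --oper options are assumed to be supported by the script in your release and should be verified, while the other options match the provision_control example used elsewhere in this guide:
python contrail-controller/src/config/utils/provision_control.py \
    --api_server_ip <api-server-ip> --api_server_port 8082 \
    --router_asn 64512 --host_name <control-node-name> --host_ip <control-node-ip> \
    --md5 <md5-key> --oper add \
    --admin_user admin --admin_password <password> --admin_tenant_name admin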
2. You can also use the web user interface to configure MD5.
b. For a BGP peer, click on the gear icon on the right hand side of the peer entry. Then click Edit. This
displays the Edit BGP Router dialog box.
d. Configure the MD5 authentication by selecting Authentication Mode>MD5 and entering the
Authentication Key value.
RELATED DOCUMENTATION
Transport Layer Security (TLS)-based XMPP can be used to secure all Extensible Messaging and Presence
Protocol (XMPP)-based communication that occurs in the Contrail environment.
Secure XMPP is based on RFC 6120, Extensible Messaging and Presence Protocol (XMPP): Core.
The RFC 6120 highlights a basic stream message exchange format for TLS negotiation between an XMPP
server and an XMPP client.
NOTE: Simple Authentication and Security Layer (SASL) authentication is not supported in the
Contrail environment.
In the Contrail environment, XMPP based communications are used in client and server exchanges, between
the compute node (as the XMPP client), and:
xmpp_auth_enable=true: Enables SSL-based XMPP. The default is false (SSL-based XMPP is disabled).
On the DNS server control node, enable the parameters in the configuration file:
/etc/contrail/contrail-control.conf
xmpp_dns_auth_enable=true: Enables SSL-based XMPP. The default is false (SSL-based XMPP is disabled).
xmpp_auth_enable=true: Enables SSL-based XMPP. The default is false (SSL-based XMPP is disabled).
xmpp_dns_auth_enable=true
NOTE: The keyword true is case sensitive.
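Putting the documented parameters together, a minimal /etc/contrail/contrail-control.conf sketch might look as follows; the certificate and key parameters shown are assumptions and their option names should be confirmed for your release:
[DEFAULT]
xmpp_auth_enable=true
xmpp_dns_auth_enable=true
# Assumed certificate parameters; confirm the exact option names for your release
xmpp_server_cert=/etc/contrail/ssl/certs/server.pem
xmpp_server_key=/etc/contrail/ssl/private/server-privkey.pem
xmpp_ca_cert=/etc/contrail/ssl/certs/ca-cert.pem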
IN THIS SECTION
Graceful restart and long-lived graceful restart BGP helper modes are supported for the Contrail control
node and XMPP helper mode.
Whenever a BGP peer session is detected as down, all routes learned from the peer are deleted and
immediately withdrawn from advertised peers. This causes instantaneous disruption to traffic flowing
end-to-end, even when routes kept in the vrouter kernel in the data plane remain intact.
Graceful restart and long-lived graceful restart features can be used to alleviate the traffic disruption caused
by sessions going down.
When configured, graceful restart features enable existing network traffic to be unaffected if Contrail
controller processes go down. The Contrail implementation ensures that if a Contrail control module
restarts, it can use graceful restart functionality provided by its BGP peers. Or when the BGP peers restart,
Contrail provides a graceful restart helper mode to minimize the impact to the network. The graceful restart
features can be used to ensure that traffic is not affected by temporary outage of processes.
With graceful restart features enabled, learned routes are not deleted when sessions go down, and the
routes are not withdrawn from the advertised peers. Instead, the routes are kept and marked as 'stale'.
Consequently, if sessions come back up and routes are relearned, the overall impact to the network is
minimized.
After a certain duration, if a downed session does not come back up, all remaining stale routes are deleted
and withdrawn from advertised peers.
The BGP helper mode can be used to minimize routing churn whenever a BGP session flaps. This is especially
helpful if the SDN gateway router goes down gracefully, as in an rpd crash or restart on an MX Series
Junos device. In that case, the contrail-control can act as a graceful restart helper to the gateway, by
retaining the routes learned from the gateway and advertising them to the rest of the network as applicable.
In order for this to work, the restarting router (the SDN gateway in this case) must support and be configured
with graceful restart for all of the address families used.
The graceful restart helper mode is also supported for BGP-as-a-Service (BGPaaS) clients. When configured,
contrail-control can provide a graceful restart or long-lived graceful restart helper mode to a restarting
BGPaaS client.
Feature Highlights
The following are highlights of the graceful restart and long-lived graceful restart features.
• Configuring a non-zero restart time enables the ability to advertise graceful restart and long-lived graceful
restart capabilities in BGP.
• Configuring helper mode enables the ability for graceful restart and long-lived graceful restart helper
modes to retain routes even after sessions go down.
• With graceful restart configured, whenever a session down event is detected and a closing process is
triggered, all routes, across all address families, are marked stale. The stale routes are eligible for best-path
election for the configured graceful restart time duration.
• When long-lived graceful restart is in effect, stale routes can be retained for a much longer time than
that allowed by graceful restart alone. With long-lived graceful restart, route preference is retained and
best paths are recomputed. The community marked LLGR_STALE is tagged for stale paths and
re-advertised. However, if no long-lived graceful restart community is associated with any received stale
route, those routes are not kept, instead, they are deleted.
• After a certain time, if a session comes back up, any remaining stale routes are deleted. If the session
does not come back up, all retained stale routes are permanently deleted and withdrawn from the
advertised peer.
Contrail Networking updated support for long-lived graceful restart (LLGR) with XMPP helper mode in
Contrail Networking Release 2011.L2. Starting in Release 2011.L2, the Contrail vRouter datapath agent
supports route retention with its controller peer when LLGR with XMPP helper mode is enabled. This
route retention allows the datapath agent to retain the last Route Path from the Contrail controller when
an XMPP-based connection is lost. The Route Paths are held by the agent until a new XMPP-based
connection is established to one of the Contrail controllers. Once the XMPP connection is up and is stable
for a predefined duration, the Route Paths from the old XMPP connection are flushed. This support for
route retention allows a controller to go down gracefully but with some forwarding interruption when
connectivity to a controller is restored.
The following notable behaviors are present when LLGR is used with XMPP helper mode:
• When a local vRouter is isolated from a Contrail controller, the Intra-VN EVPN routes on the remote
vRouter are removed.
• During a Contrail vRouter datapath agent restart, forwarding states are not always preserved.
Contrail Networking has limited support for graceful restart and long-lived graceful restart (LLGR) with
XMPP helper mode in all Contrail Release 4, 5, and 19 software as well as all Contrail Release 20 software
through Contrail Networking Release 2011.L1. Graceful restart and LLGR with XMPP should not be used
in most environments and should only be used by expert users in specialized circumstances when running
these Contrail Networking releases for reasons described later in this section.
Graceful restart and LLGR can be enabled with XMPP helper mode using Contrail Command, the Contrail
Web UI, or by using the provision_control script. The helper modes can also be enabled via schema, and
can be disabled selectively in a contrail-control node for BGP or XMPP sessions by configuring
gr_helper_disable in the /etc/contrail/contrail-control.conf configuration file.
You should be aware of the following dependencies when enabling graceful restart and LLGR with XMPP
helper mode:
• You can enable graceful restart and LLGR with XMPP helper mode without enabling the BGP helper.
You still have to enable graceful restart, XMPP, and all appropriate timers when graceful restart and
LLGR are enabled with XMPP helper mode without the BGP helper.
• LLGR and XMPP sub-second timers for fast convergence should not be used simultaneously.
• If a control node fails when LLGR with XMPP helper mode is enabled, vrouters will hold routes for the
length of the GR and LLGR timeout values and continue to pass traffic. Routes are removed from the
vRouter when the timeout interval elapses and traffic is no longer forwarded at that point.
If the control node returns to the up state before the timeout interval elapses, a small amount of traffic
will be lost during the reconnection.
Graceful restart and LLGR with XMPP should only be used by expert users in specialized circumstances
when running Contrail Networking Release 4, 5, and 19 software as well as all Contrail Release 20 software
through Contrail Networking Release 2011.L1 due to the following issues:
• Because graceful restart is not yet supported for the contrail-vrouter-agent, the
graceful_restart_xmpp_helper_enable parameter should not be set. If the vrouter agent restarts, the data plane is reset
and the routes and flows are reprogrammed anew. This reprogramming typically results in traffic loss
for several seconds for new and existing flows and can result in even longer traffic loss periods.
• The vRouter agent restart caused by enabling graceful restart can cause stale routes to be added to the
routing table used by the contrail-vrouter-agent.
This issue occurs after a contrail-vrouter-agent reset. After the reset, previous XMPP control nodes
continue to send stale routes to other control nodes. The stale routes sent by the previous XMPP control
nodes can eventually get passed to the contrail-vrouter-agent and installed into its routing table as
NH1/drop routes, leading to traffic drops. The stale routes are removed from the routing table only after
graceful restart is enabled globally or when the timer—which is user configurable but can be set to long
intervals—expires.
Configuration Parameters
Graceful restart parameters are configured in the global-system-config of the schema. They can be
configured by means of a provisioning script or by using the Contrail Web UI.
Configure a non-zero restart time to advertise for graceful restart and long-lived graceful restart capabilities
from peers.
Configure helper mode for graceful restart and long-lived graceful restart to retain routes even after
sessions go down.
• restart-time
• long-lived-restart-time
• end-of-rib-timeout
• bgp-helper-enable to enable graceful restart helper mode for BGP peers in contrail-control
• xmpp-helper-enable to enable graceful restart helper mode for XMPP peers (agents) in contrail-control
/opt/contrail/utils/provision_control.py
--api_server_ip 10.xx.xx.20
--api_server_port 8082
--router_asn 64512
--admin_user admin
--admin_password <password>
--admin_tenant_name admin
--set_graceful_restart_parameters
--graceful_restart_time 60
--long_lived_graceful_restart_time 300
--end_of_rib_timeout 30
--graceful_restart_enable
--graceful_restart_bgp_helper_enable
--set_graceful_restart_parameters
--graceful_restart_time 300
--long_lived_graceful_restart_time 60000
--end_of_rib_timeout 30
--graceful_restart_enable
--graceful_restart_bgp_helper_enable
When BGP peering with Juniper Networks devices, Junos must also be explicitly configured for graceful
restart/long-lived graceful restart, as shown in the following example:
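The Junos configuration example is not reproduced here. A minimal, hedged sketch that enables graceful restart on the Junos side (the group name and timer value are placeholders; LLGR-specific statements vary by Junos release and are omitted):
set routing-options graceful-restart
set protocols bgp group contrail-controllers graceful-restart restart-time 60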
The graceful restart helper modes can be enabled in the schema. The helper modes can be disabled
selectively in the contrail-control.conf for BGP sessions by configuring gr_helper_disable in the
/etc/contrail/contrail-control.conf file.
Be aware of the following caveats when configuring and using graceful restart.
• Using the graceful restart/long-lived graceful restart feature with a peer is effective either to all negotiated
address families or to none. If a peer signals support for graceful restart/long-lived graceful restart for
only a subset of the negotiated address families, the graceful restart helper mode does not come into
effect for any family in the set of negotiated address families.
• Because graceful restart is not yet supported for contrail-vrouter-agent, the
graceful_restart_xmpp_helper_enable parameter should not be set. If the vrouter agent restarts, the data plane is reset and
the routes and flows are reprogrammed anew. This reprogramming typically results in traffic loss for
several seconds for new and existing flows and can result in even longer traffic loss periods.
Additionally, previous XMPP control nodes might continue to send stale routes to other control nodes
and these stale routers can be passed to the contrail-vrouter-agent. The contrail-vrouter-agent can
install these stale routes into its routing table as NH1/drop routes, causing traffic loss. The stale routes
are removed only after graceful restart is enabled globally or when the timer—which is user configurable
but can be set to multiple days—expires.
• Graceful restart/long-lived graceful restart helper mode may not work correctly for EVPN routes, if the
restarting node does not preserve forwarding state for EVPN routes.
IN THIS SECTION
We recommend configuring Graceful Restart using Contrail Command. You can, however, also configure
Graceful Restart using the Contrail User Interface in environments not using Contrail Command.
The Edit System Configuration window opens. Click the box for Graceful Restart to enable graceful restart,
and enter a non-zero number to define the Restart Time in seconds. You can also specify the times for
the long-lived graceful restart (LLGR) and the end of RIB timers from this window.
Timer Description
Restart Time BGP helper mode—Routes advertised by the BGP peer are kept for the duration
of the restart time.
XMPP helper mode—Routes advertised by XMPP peer are kept for the duration
of the restart time.
LLGR Time BGP helper mode—Routes advertised by BGP peers are kept for the duration
of the LLGR timer when BGP helper mode is enabled.
XMPP helper mode—Routes advertised by XMPP peers are kept for the duration
of the LLGR timer if XMPP helper mode is enabled.
When Graceful Restart (GR) and Long-lived Graceful Restart (LLGR) are both
configured, the duration of the LLGR timer is the sum of both timers.
End of RIB timer The End of RIB (EOR) timer specifies the amount of time a control node waits
to remove stale routes from a vRouter agent’s RIB.
The EOR timer starts when the End of Config message is received by the
vRouter agent. When the EOR timer expires, an EOR message is sent from the
vRouter agent to the control node. The control node receives this EOR message
and then removes the stale routes that were previously advertised by the vRouter
agent from its RIB.
Certain deployment scenarios may require running multiple configuration API server instances for improved
performance. One method to achieve this is to increase the number of configuration API server
instances on a node after deploying Contrail Networking. This is done by modifying the docker-compose.yaml
file to allow multiple configuration API containers on the same host.
The steps described in this topic are valid for Contrail Networking Releases 5.0 through 2008. For Release
2011 and later, refer to the topic “Scaling Up Contrail Networking Configuration API” on page 553.
CAUTION: Any change to the Contrail Networking Configuration API must be made
only with the help of Juniper Networks Professional Services. We strongly recommend
that you contact Juniper Networks Professional Services before you make any change
to the Configuration API.
api-2:
image: "hub.juniper.net/contrail/contrail-controller-config-api:1912.32-rhel"
env_file: /etc/contrail/common_config.env
environment:
- CONFIG_API_PORT=18082
- CONFIG_API_INTROSPECT_PORT=17084
container_name: config_api_2
command: ["/usr/bin/contrail-api", "--conf_file",
"/etc/contrail/contrail-api.conf", "--conf_file",
"/etc/contrail/contrail-keystone-auth.conf", "--worker_id", "2"]
network_mode: "host"
volumes_from:
- node-init
depends_on:
- node-init
restart: always
stdin_open: True
tty: True
logging:
driver: "json-file"
options:
max-size: "50m"
max-file: "10"
Make sure that the API port, Introspect port, Worker ID, and container_name are unique within the
node. The default API section should not be changed, so that other Contrail services (for example,
Schema-transformer, SVC-Monitor, or the contrail-status command) can run without configuration changes.
The default API runs with port 8082, introspect port 8084, worker_id 0, and container name config_api_1.
2. Run the following commands on each configuration node to apply the changes:
cd /etc/contrail/config
docker-compose down
docker-compose up -d
3. Run the following commands on each configuration node to verify the configuration:
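The verification commands are not listed above; hedged examples that confirm the additional API container is running and answering on its port (the port and container name are taken from the example in step 1) are:
docker ps --filter name=config_api       # expect config_api_1 and config_api_2 to be listed
contrail-status                          # confirm the config services report active
curl -s http://localhost:18082/ | head    # the new API instance should respond on its port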
listen contrail_config_internal
bind 192.168.3.90:8082 transparent
mode http
balance leastconn
option httpchk GET /
option httplog
option forwardfor
timeout client 360s
timeout server 360s
server 0contrail-ctl0.local 192.168.3.114:8082 check fall 5 inter 2000 rise 2
server 1contrail-ctl0.local 192.168.3.114:18081 check fall 5 inter 2000 rise 2
server 2contrail-ctl0.local 192.168.3.114:18082 check fall 5 inter 2000 rise 2
server 3contrail-ctl0.local 192.168.3.114:18083 check fall 5 inter 2000 rise 2
server 0contrail-ctl1.local 192.168.3.139:8082 check fall 5 inter 2000 rise 2
server 1contrail-ctl1.local 192.168.3.139:18081 check fall 5 inter 2000 rise 2
server 2contrail-ctl1.local 192.168.3.139:18082 check fall 5 inter 2000 rise 2
server 3contrail-ctl1.local 192.168.3.139:18083 check fall 5 inter 2000 rise 2
server 0contrail-ctl2.local 192.168.3.100:8082 check fall 5 inter 2000 rise 2
server 1contrail-ctl2.local 192.168.3.100:18081 check fall 5 inter 2000 rise 2
server 2contrail-ctl2.local 192.168.3.100:18082 check fall 5 inter 2000 rise 2
server 3contrail-ctl2.local 192.168.3.100:18083 check fall 5 inter 2000 rise 2
5. Run the following commands to set the load-balancer timeouts and balancing method.
You cannot configure a timeout for the Neutron plugin. Neutron relies on the load balancer to terminate
connections. Therefore, it is important that you increase the default timeout value of 30 seconds. This
enables Neutron to respond without error even if the Contrail API takes longer than 30 seconds to respond.
NOTE: It is recommended that you use load balancing methods that distribute the load evenly
across the configuration API server instances. The preferred load balancing methods are leastconn
and round-robin.
CAUTION: Increasing the number of configuration API server instances might result
in higher load on RabbitMQ and Schema-transformer.
Starting from Contrail Networking Release 2011, config-api can be scaled vertically, to run up to 10 workers
in a single container by using uWSGI. However, the recommended maximum number of workers in each
config_api container is 5.
The steps described in this topic are valid for Contrail Networking Release 2011 and later releases. To
see these procedures for earlier releases, see “Scaling Up Contrail Networking Configuration API Server
Instances” on page 549.
If you are deploying Contrail Networking using the Ansible Deployer, you should specify the
CONFIG_API_WORKER_COUNT parameter in the contrail_configuration section of the instances.yml file
as shown below.
contrail_configuration:
CONFIG_API_WORKER_COUNT: 5
If you are deploying Contrail Networking from Contrail Command UI, you should specify a new key / value
pair (CONFIG_API_WORKER_COUNT / value) in the Contrail Configuration section as shown in
Figure 71 on page 554.
If you are deploying Contrail Networking using RHOSP deployer, you should specify the desired value for
the parameter CONFIG_API_WORKER_COUNT in the ContrailSettings section of the contrail-services.yaml
file as shown below.
ContrailSettings:
CONFIG_API_WORKER_COUNT: 3
If you are deploying Contrail Networking using JUJU deployer, you must specify the desired value for
config-api-worker-count in the config.yaml file.
config-api-worker-count:
default: 1
type: int
description: |
Number of workers spawned inside config-api container.