
NSX Installation Guide

Modified on 09 SEPT 2024


VMware NSX 4.0
NSX Installation Guide

You can find the most up-to-date technical documentation on the VMware by Broadcom website at:

[Link]

VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
[Link]

© Copyright 2017-2024 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc.
and/or its subsidiaries. For more information, go to [Link] All trademarks, trade
names, service marks, and logos referenced herein belong to their respective companies.

Contents

NSX Installation Guide 9

1 Overview of NSX 10
Key Concepts 11
NSX Manager 16
Configure the User Interface Settings 18

2 NSX Installation Workflows 20


NSX Workflow for vSphere 20
NSX Configuration Workflow for Bare Metal Server 21

3 Preparing for Installation 22


System Requirements 22
NSX Manager VM and Host Transport Node System Requirements 22
Supported Hypervisor Version 25
NSX Edge VM System Requirements 27
NSX Edge Bare Metal Requirements 29
Bare Metal Server System Requirements 33
Bare Metal Linux Container Requirements 35
Ports and Protocols 35
Configuring a vSphere Distributed Switch 35

4 NSX Manager Installation 38


Modifying the Default Admin Password Expiration 43

5 NSX Manager Cluster Requirements 44


Cluster Requirements for an Individual Site 45
Cluster Requirements for Multiple Sites 46

6 Installing and Configuring NSX Embedded using VMware vCenter Plugin 49


Install NSX Manager from vSphere Client 49
Install Additional NSX Manager Nodes to Form a Cluster from VMware vCenter Plugin 53
Configure NSX for Virtual Networking from vSphere Client 57
Configuring NSX-T for Security from vSphere Client 63
Prepare Clusters for NSX Security 63
Create Groups 64
Define and Publish Communication Strategies for Groups 67
Viewing NSX Alarms in vSphere Web Client UI 70


7 Installing NSX Manager Cluster on vSphere 71


Install NSX Manager and Available Appliances 71
Install NSX Manager on ESXi Using the Command-Line OVF Tool 77
Configure an Appliance to Display the GRUB Menu at Boot Time 84
Log In to the Newly Created NSX Manager 86
Add a Compute Manager 87
Deploy NSX Manager Nodes to Form a Cluster from the UI 91
Configure a Virtual IP Address for a Cluster 98
Configuring an External Load Balancer 100
Disable Snapshots on an NSX Appliance 103

8 Transport Zones and Profiles 104


Create Transport Zones 104
Create an IP Pool for Tunnel Endpoint IP Addresses 106
Enhanced Data Path 108
Enhanced Datapath Supported Features 111
Configuring Profiles 112
Create an Uplink Profile 113
Add an NSX Edge Cluster Profile 118
Add and attach NSX Edge Bridge Profile to a Segment 119
Add a Transport Node Profile 121

9 Host Transport Nodes 126


Manual Installation of NSX Kernel Modules 126
Manually Install NSX Kernel Modules on ESXi Hypervisors 126
Preparing Physical Servers as NSX Transport Nodes 130
Install Third-Party Packages on a Linux Physical Server 130
Configure a Physical Server as a Transport Node from GUI 133
Ansible Server Configuration for Physical Server 140
Create Application Interface for Physical Server Workloads 141
Secure Workloads on Windows Server 2016/2019 Bare Metal Servers 141
Preparing ESXi Hosts as Transport Nodes 143
Prepare a vSphere Distributed Switch for NSX 144
Prepare ESXi Cluster Hosts as Transport Nodes 145
Prepare ESXi Cluster Hosts for Security only or Networking and Security using Quick Start Wizard 150
Configure an ESXi Host Transport Node with Link Aggregation 153
Managing Transport Nodes 154
Switch Visualization 154
NSX Maintenance Mode 154
Health Check VLAN ID Ranges and MTU Settings 155


View Bidirectional Forwarding Detection Status 158


Verify the Transport Node Status 159

10 Installing NSX Edge 162


NSX Edge Installation Requirements 162
NSX Edge Networking Setup 165
NSX Edge Installation Methods 171
Create an NSX Edge Transport Node 172
Configure NSX Edge DPDK Interfaces 179
Manually Deploying NSX Edge Node Outside of NSX 183
Install an NSX Edge on ESXi Using the vSphere GUI 183
Install NSX Edge on ESXi Using the Command-Line OVF Tool 187
Install NSX Edge via ISO File as a Virtual Appliance 192
Install NSX Edge on Bare Metal 195
Join NSX Edge with the Management Plane 209
Edit NSX Edge Transport Node Configuration 210
Create an NSX Edge Cluster 214
Remove NSX Edge Nodes from an Edge Cluster 216
Relocate and Remove an NSX Edge Node from an NSX Edge Cluster 218

11 vSphere Lifecycle Manager with NSX 220


Prepare an NSX Cluster with vSphere Lifecycle Manager 221
Enable vSphere Lifecycle Manager on an NSX Cluster 223
NSX on vSphere Lifecycle Manager with VMware vSphere Distributed Services Engine 226
Configure NSX host transport node on DPU-based vSphere Lifecycle Manager-enabled cluster 228
Displaying DPU-related information on NSX Manager Interface 230
NSX with vSphere Lifecycle Manager Scenarios 230
vSphere Lifecycle Manager Failed to Prepare a Host for NSX Networking 232
vSphere Lifecycle Manager Failed to Prepare NSX Cluster 233
Delete a NSX Depot on vCenter Server 234

12 Host Profile integration with NSX 235


Auto Deploy Stateless Cluster 235
High-Level Tasks to Auto Deploy Stateless Cluster 235
Prerequisites and Supported Versions 236
Create a Custom Image Profile for Stateless Hosts 237
Associate the Custom Image with the Reference and Target Hosts 238
Set Up Network Configuration on the Reference Host 239
Configure the Reference Host as a Transport Node in NSX 240
Extract and Verify the Host Profile 241
Verify the Host Profile Association with Stateless Cluster 243


Update Host Customization 243


Trigger Auto Deployment on Target Hosts 244
Troubleshoot Host Profile and Transport Node Profile 247
Stateful Servers 249
Supported NSX and ESXi versions 250
Prepare a Target Stateful Cluster 251

13 Getting Started with NSX Federation 253


NSX Federation Key Concepts 253
NSX Federation Requirements 254
Configuring the Global Manager and Local Managers 255
Install the Active and Standby Global Manager 256
Make the Global Manager Active and Add Standby Global Manager 257
Add a Location 258
Remove a Location 265

14 Install NSX Advanced Load Balancer Appliance Cluster 267


Troubleshooting NSX Advanced Load Balancer Controller Issues 271
NSX Advanced Load Balancer does not register with NSX Manager 271
The Second NSX Advanced Load Balancer Controller Remains in Queued State 271
NSX Advanced Load Balancer Controller Password Change Caused Cluster Failure 272
Unable to Delete NSX Advanced Load Balancer Controller 273
NSX Advanced Load Balancer Cluster HA Status is Compromised 273
Credential Mismatch After Changing NSX Advanced Load Balancer Controller Password 274
Deployment of NSX Advanced Load Balancer Controller Failed 275
Cluster Unstable After Two Controllers Are Down 275

15 Getting Started with NSX Cloud 277


NSX Cloud Architecture and Components 278
Overview of Deploying NSX Cloud 279
Deploy NSX On-Prem Components 279
Install CSM 279
Join CSM with NSX Manager 280
Specify CSM IPs for Access by PCG 281
(Optional) Configure Proxy Servers 281
(Optional) Set Up vIDM for Cloud Service Manager 282
Connect your Public Cloud with On-prem NSX 283
Deploy NSX Cloud Components in Microsoft Azure using the NSX Cloud Marketplace Image 285
Deploy NSX Cloud Components in Microsoft Azure using Terraform scripts 287
Deploy NSX Cloud Components in Microsoft Azure without using Terraform scripts 292


Add your Public Cloud Account 296


Adding your Microsoft Azure Subscription 297
Adding your AWS Account 303
Managing Regions in CSM 307
NSX Public Cloud Gateway: Architecture and Modes of Deployment 308
Deploy PCG or Link to a PCG 312
Deploy PCG in a VNet 312
Deploy PCG in a VPC 315
Link to a Transit VPC or VNet 318
Auto-Configurations after PCG Deployment or Linking 319
Auto-created NSX Logical Entities 319
Auto-created Public Cloud Configurations 323
Integrate Horizon Cloud Service with NSX Cloud 325
Auto-Created Entities After Horizon Cloud Integration 328
(Optional) Install NSX Tools on your Workload VMs 331
Un-deploy NSX Cloud 331
Undeploy or Unlink PCG 332

16 Uninstalling NSX from a Host Transport Node 335


Uninstall NSX from a vSphere Cluster 335
Uninstall NSX from a Managed Host in a vSphere Cluster 338
Uninstall NSX from a Physical Host 340
Triggering Uninstallation from the vSphere Web Client 344
Uninstall NSX from a vSphere Lifecycle Manager cluster through NSX Manager 346

17 Troubleshooting Installation Issues 348


Troubleshooting Installation to check for Basic Infrastructure Services 348
Troubleshoot OVA Deployment and Appliance Bringup 350
Troubleshoot NSX Manager Cluster 351
Manually deployed NSX Manager failed to join the NSX Manager Cluster 351
NSX Manager Cluster status Degraded As Datastore-related Components Are Down 352
Manager and HTTPS Services Are Down Frequently Due to Incorrect NAT Configuration 353
NSX Manager is Slow To Load And Tasks Fail 353
NSX Manager UI is not loading even when NSX Clustering is up 354
NSX Manager cluster is DOWN or UNAVAILABLE if all nodes that are part of the NSX Manager cluster are down or a majority of nodes are down 354
Troubleshoot NSX Appliance or NSX Clustering issues Using APIs 355
Troubleshooting Host Transport Nodes 355
Host Fails to Install as Transport Node 355
Accessing the NSX CLI Terminal 356
Transport Node Installation Failure Due To Pending Reboot 356


Unable to Reach Transport Node Due to Incorrect Credentials As Host is in Orphaned State 356
Transport node profile Fails to Prepare Transport Nodes Due to Stale Objects 357
Transport Node Creation is in Partial Success State Due to Logical Switch or Segment Full-sync Realization Error 358
Transport Node status is degraded when its interface is down 358
Transport Node status is Disconnected or Unknown 359
Transport Node is Down as Agent Service is Down 360
Transport Node Connectivity to Controller is Down 360
Unable to power on VMs or vMotion VMs on Transport Node 361
Transport Node Tunnels Down 362
Installation Fails Due to Insufficient Space in Bootbank on ESXi Host 363
NSX Agent on ESXi Transport Nodes Times Out Communicating with NSX Manager 363
Troubleshooting NSX Edge Nodes 364
Accessing the NSX Edge CLI Terminal 365
NSX Edge MPA Connectivity Down 365
NSX Edge Status DOWN or DEGRADED As BFD Tunnel(s) are Down 365
NSX Edge Router High Availability Status is Down 367
NSX Edge Node Status Down Due to PNIC Bond Status Down 368
NSX Edge Transport Node Connectivity to Controller is Down 368
NSX Edge node status is Down As Controller Is Unavailable 369
Transport Node Failed as Named Teaming Defined with No Active Uplink 370
NSX Edge Transport Node goes into NSX Maintenance Mode On HA Failover 370
VMotion of NSX Edge VM Fails Due To ESXi Running Out Of Resources 371
State Not Consistent of NSX BFD BGP or HA Functionality 372
NSX Edge Node Status Down and BFD Tunnel Data is Missing 372
NSX Edge status DOWN or DEGRADED due to BFD tunnels between Edge and ESXi down 372

NSX Installation Guide

The NSX Installation Guide describes how to install the VMware NSX® product. The information
includes step-by-step configuration instructions and suggested best practices.

Intended Audience
This information is intended for anyone who wants to install or use NSX. This information is
written for experienced system administrators who are familiar with virtual machine technology
and network virtualization concepts.

Technical Publications Glossary


VMware Technical Publications provides a glossary of terms that might be unfamiliar to you.
For definitions of terms as they are used in VMware technical documentation, go to https://[Link]/topics/glossary.

Related Documentation
You can find the VMware NSX® Intelligence™ documentation at [Link]
VMware-NSX-Intelligence/[Link]. The NSX Intelligence 1.0 content was initially included and
released with the NSX 2.5 documentation set.

1 Overview of NSX
In the same way that server virtualization programmatically creates and manages virtual
machines, NSX network virtualization programmatically creates and manages virtual networks.

With network virtualization, the functional equivalent of a network hypervisor reproduces the
complete set of Layer 2 through Layer 7 networking services (for example, switching, routing,
access control, firewalling, QoS) in software. As a result, these services can be programmatically
assembled in any arbitrary combination to produce unique, isolated virtual networks in a matter
of seconds.

NSX works by implementing three separate but integrated planes: management, control, and
data. These planes are implemented as a set of processes, modules, and agents residing on two
types of nodes: NSX Manager and transport nodes.

n Every node hosts a management plane agent.

n NSX Manager nodes host API services and the management plane cluster daemons.

n NSX Controller nodes host the central control plane cluster daemons.

n Transport nodes host local control plane daemons and forwarding engines.

NSX Manager supports a cluster of three nodes, which merges policy manager, management,
and central control services on a cluster of nodes. NSX Manager clustering provides high
availability of the user interface and API. The convergence of management and control plane
nodes reduces the number of virtual appliances that must be deployed and managed by the NSX
administrator.

The NSX Manager appliance is available in three different sizes for different deployment
scenarios:

n A small appliance for lab or proof-of-concept deployments.

n A medium appliance for deployments up to 64 hosts.

n A large appliance for customers who deploy to a large-scale environment.

See NSX Manager VM and Host Transport Node System Requirements and Configuration
maximums tool.

Read the following topics next:

n Key Concepts


n NSX Manager

Key Concepts
The common NSX concepts that are used in the documentation and user interface.

Compute Manager

A compute manager is an application that manages resources such as hosts and VMs. NSX
supports VMware vCenter as a compute manager.

Control Plane

Computes runtime state based on configuration from the management plane. The control plane disseminates topology information reported by the data plane elements, and pushes stateless configuration to forwarding engines (transport nodes). The NSX control plane is split into two components: the Central Control Plane (CCP) and the Local Control Plane (LCP). The CCP is implemented on the NSX Manager cluster, while the LCP runs on all of the NSX transport nodes.

Corfu services

Run on each NSX Manager node to create the highly available distributed datastore Corfu.

Data Plane

Performs stateless forwarding or transformation of packets based on tables populated by


the control plane. Data plane reports topology information to the control plane and maintains
packet level statistics. Data plane is implemented by NSX transport nodes.

Data Processing Unit (DPU)

A DPU device is a SmartNIC device, or a high-performance network interface card, with


added embedded CPU cores, memory, and a hypervisor running on the device independently
from the ESXi hypervisor installed on the server.

Note We will refer to SmartNIC as DPU across our user guides.

External Network
A physical network or VLAN not managed by NSX. You can link your logical network or
overlay network to an external network through a Tier-0 Gateway, Tier-1 Gateway or L2
bridge.

External Interface

Tier-0 Gateway Interface connecting to the physical infrastructure or router. Static routing
and BGP are supported on this interface. This interface was referred to as uplink interface in
previous releases.

Logical Port Egress


Outbound network traffic leaving the VM or logical network is called egress because traffic is leaving the virtual network and entering the data center.

Logical Port Ingress

Inbound network traffic entering the VM is called ingress traffic.

Gateway

NSX routing entity that provides connectivity between different L2 networks. Configuring a
gateway through NSX Manager instantiates a gateway (Tier-0 or Tier-1) on transport nodes
and provides optimized distributed routing as well as centralized routing and services like
NAT, Load balancer, DHCP and other supported services on each hypervisor.

Gateway Port

Logical network port to which you can attach a logical switch port or an uplink port to a
physical network.

Segment Port

Logical switch attachment point to establish a connection to a virtual machine network


interface, container, physical appliances or a gateway interface. The segment port reports
applied switching profile, port state, and link status.

Management Plane

Provides single API entry point to the system, persists user configuration, handles user
queries, and performs operational tasks on all of the management, control, and data plane
nodes in the system.

NSX Edge Cluster

A collection of NSX Edge node appliances that have the same settings and provide high availability if one of the NSX Edge nodes fails.

NSX Edge Node

Edge nodes are service appliances (Bare Metal or VM form factor) with pools of capacity,
dedicated to running network and security services that cannot be distributed to the
hypervisors.

NSX Managed Virtual Distributed Switch (N-VDS, host-switch)

The NSX managed virtual distributed switch forwards traffic between logical and physical
ports of the device. On ESXi hosts, the N-VDS implementation is derived from VMware
vSphere® Distributed Switch™ (VDS) and it shows up as an opaque network in vCenter. With


any other kind of transport node (KVM hypervisors, Edges, Bare Metal servers, cloud VMs
and so on) the N-VDS implementation is derived from the Open vSwitch (OVS).

Note VMware has removed support for the NSX N-VDS virtual switch on ESXi hosts starting with release [Link], because it is recommended to deploy NSX on top of vCenter VDS. N-VDS remains the supported virtual switch on NSX Edge nodes, native public cloud NSX agents, and Bare Metal workloads.

An N-VDS has two modes: Standard and Enhanced Datapath. The Enhanced Data Path
N-VDS is optimized for the Network Function Virtualization, where the workloads perform
networking functions under demanding latency and packet rate requirements.

vSphere Distributed Switch (VDS)

Starting with NSX 3.0, NSX can run directly on top of a vSphere Distributed Switch version
7 or later. It is recommended that you use the VDS switch for deployment of NSX on ESXi
hosts. You can create Overlay or VLAN backed segments on VDS switches. Similar to the N-VDS, VDS switches can be configured in Standard or Enhanced Datapath mode.

vSphere Distributed Services Engine

vSphere 8.0 introduces VMware vSphere Distributed Services Engine, which leverages data
processing units (DPUs) as the new hardware technology to overcome the limits of core
CPU performance while delivering zero-trust security and simplified operations to vSphere
environments. With NSX [Link], vSphere Distributed Services Engine provides the ability to
offload some of the network operations from your server CPU to a DPU.

NSX Manager

Node that hosts the API services, the management plane, the control plane and the agent
services. It is accessible through CLI, Web UI, or API. NSX Manager is an appliance included
in the NSX installation package. You can deploy the appliance in the role of NSX Manager or
nsx-cloud-service-manager. Currently, the appliance only supports one role at a time.

NSX Manager Cluster

A cluster of NSX Manager virtual machine appliances providing high availability of the user
interface and the API.

Open vSwitch (OVS)

Open source software switch that acts as a virtual switch within XenServer, Xen, and other
Linux-based hypervisors.

Opaque Network


An opaque network is a network created and managed by a separate entity outside


of vSphere. For example, logical networks that are created and managed by N-VDS
switch running on NSX appear in vCenter Server as opaque networks of the type
[Link]. You can choose an opaque network as the backing for a VM network
adapter. To manage an opaque network, use the management tools associated with the
opaque network, such as NSX Manager or the NSX API management tools.

Overlay Logical Network

Logical network implemented using the GENEVE encapsulation protocol as mentioned in https://[Link]/rfc/[Link]. The topology seen by VMs is decoupled from that of the physical network.

Physical Interface (pNIC)

Network interface on a physical server that a hypervisor is installed on.

Segment

Previously known as logical switch. It is an entity that provides virtual Layer 2 switching
for VM interfaces and Gateway interfaces. A segment gives tenant network administrators
the logical equivalent of a physical Layer 2 switch, allowing them to connect a set of VMs
to a common broadcast domain. A segment is a logical entity independent of the physical
hypervisor infrastructure and spans many hypervisors, connecting VMs regardless of their
physical location.

In a multi-tenant cloud, many segments might exist side-by-side on the same hypervisor
hardware, with each Layer 2 segment isolated from the others. Segments can be connected
using gateways, which can provide connectivity to the external physical network.

Service Interface

Tier-0 Interface connecting VLAN segments to provide connectivity and services to VLAN
backed physical or virtual workloads. A service interface can also be connected to overlay
segments for Tier-1 standalone load balancer use cases. Starting with NSX release version
3.0, service interface supports static and dynamic routing.

Tier-0 Gateway

A Tier-0 Gateway provides north-south connectivity and connects to the physical routers. It
can be configured as an active-active or active-standby cluster. The Tier-0 Gateway runs BGP
and peers with physical routers.

A Tier-0 Gateway consists of two components:

n distributed routing component (DR) that runs on all transport nodes. DR of the Tier-0
gateway is instantiated on the hypervisors and Edge transport nodes upon its creation.


n centralized services routing component (SR) runs on edge cluster nodes. SR gets
instantiated on the edge nodes upon association of gateway with edge cluster and
creation of external interfaces.

Tier-1 Gateway

A Tier-1 Gateway connects to one Tier-0 Gateway for northbound connectivity of the
subnetworks attached to it (multi-tier routing model). It connects to one or more overlay
networks for southbound connectivity to its subnetworks. A Tier-1 Gateway can be
configured as an active-standby cluster. Like the Tier-0 gateway, when a Tier-1 gateway is created, a distributed component (DR) of the Tier-1 gateway is instantiated on the hypervisors and Edge transport nodes, but the service component (SR) is only created if the gateway is associated with an edge cluster and external interfaces are created.

Transport Zone

Collection of transport nodes that defines the maximum span for logical switches. A transport
zone represents a set of similarly provisioned hypervisors and the logical switches that
connect VMs on those hypervisors. It also has been registered with the NSX management
plane and has NSX modules installed. For a hypervisor host or NSX Edge to be part of the
NSX overlay, it must be added to the NSX transport zone.

Transport Node

A fabric node is prepared as a transport node so that it becomes capable of participating


in an NSX overlay or NSX VLAN networking. For an ESXi host, you must configure a VDS
switch.

Uplink Profile (host-switch-profile)

Defines policies for the links from transport nodes to NSX segments or from NSX Edge nodes
to top-of-rack switches. The settings defined by uplink profiles might include teaming policies,
the transport VLAN ID, and the MTU setting. The transport VLAN set in the uplink profile tags
overlay traffic only and the VLAN ID is used by the TEP endpoint.

VM Interface (vNIC)

Network interface on a virtual machine that provides connectivity between the virtual guest
operating system and the standard vSwitch or NSX segment. The vNIC can be attached to a
logical port. You can identify a vNIC based on its Unique ID (UUID).

Tunnel Endpoint (TEP)

Each transport node has a Tunnel Endpoint (TEP) responsible for encapsulating the overlay VM traffic inside a GENEVE header and routing the packet to a destination TEP for further
processing. TEPs are the source and destination IP addresses used in the external IP header
to identify the ESXi hosts that originate and end the NSX encapsulation of overlay frames.
Traffic can be routed to another TEP on a different host or the NSX Edge gateway to access
the physical network. TEPs create a GENEVE tunnel between the source and destination
endpoints.


NSX Manager
The NSX Manager provides a web-based user interface where you can manage your NSX
environment. It also hosts the API server that processes API calls.

The NSX Manager interface provides two modes for configuring resources:

n Policy mode

n Manager mode

Accessing Policy Mode and Manager Mode


If present, you can use the Policy and Manager buttons to switch between the Policy and
Manager modes. Switching modes controls which menu items are available to you.

n By default, if your environment contains only objects created through Policy mode, your user
interface is in Policy mode and you do not see the Policy and Manager buttons.

n By default, if your environment contains any objects created through Manager mode, you see
the Policy and Manager buttons in the top-right corner.

These defaults can be changed by modifying the user interface settings. See Configure the User
Interface Settings for more information.

The same System tab is used in the Policy and Manager interfaces. If you modify Edge nodes, Edge clusters, or transport zones, it can take up to 5 minutes for those changes to be visible in Policy mode. You can synchronize immediately using POST /policy/api/v1/infra/sites/default/enforcement-points/default?action=reload.
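
For example, a minimal sketch of that call with curl (the manager hostname and admin credentials are placeholders; -k only skips certificate verification for a self-signed certificate):

curl -k -u 'admin:<password>' -X POST \
    "https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/default?action=reload"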

When to Use Policy Mode or Manager Mode


Be consistent about which mode you use. There are a few reasons to use one mode over the
other.

n If you are deploying a new NSX environment, using Policy mode to create and manage your
environment is the best choice in most situations.

n Some features are not available in Policy mode. If you need these features, use Manager
mode for all configurations.

n If you plan to use NSX Federation, use Policy mode to create all objects. Global Manager
supports only Policy mode.

n If you are upgrading from an earlier version of NSX and your configurations were created
using the Advanced Networking & Security tab, use Manager mode.


The menu items and configurations that were found under the Advanced Networking &
Security tab are available in NSX 3.0 in Manager mode.

Important If you decide to use Policy mode, use it to create all objects. Do not use Manager
mode to create objects.

Similarly, if you need to use Manager mode, use it to create all objects. Do not use Policy mode
to create objects.

Table 1-1. When to Use Policy Mode or Manager Mode

Policy Mode
n Most new deployments should use Policy mode. NSX Federation supports only Policy mode. If you want to use NSX Federation, or might use it in the future, use Policy mode.
n NSX Cloud deployments.
n Networking features available in Policy mode only:
  n DNS Services and DNS Zones
  n VPN
  n Forwarding policies for NSX Cloud
n Security features available in Policy mode only:
  n Endpoint Protection
  n Network Introspection (East-West Service Insertion)
  n Context Profiles
  n L7 applications
  n FQDN
  n New Distributed Firewall and Gateway Firewall Layout
  n Categories
  n Auto service rules
  n Drafts

Manager Mode
n Deployments which were created using the advanced interface, for example, upgrades from versions before Policy mode was available.
n Deployments which integrate with other plugins. For example, NSX Container Plug-in, OpenStack, and other cloud management platforms.
n Security features available in Manager mode only:
  n Bridge Firewall

Names for Objects Created in Policy Mode and Manager Mode


The objects you create have different names depending on which interface was used to create
them.

Table 1-2. Object Names

Objects Created Using Policy Mode Objects Created Using Manager Mode

Segment Logical switch

Tier-1 gateway Tier-1 logical router

Tier-0 gateway Tier-0 logical router

Group NSGroup, IP Sets, MAC Sets


Table 1-2. Object Names (continued)

Objects Created Using Policy Mode Objects Created Using Manager Mode

Security Policy Firewall section

Gateway firewall Edge firewall

Policy and Manager APIs


The NSX Manager provides two APIs: Policy and Manager.

n The Policy API contains URIs that begin with /policy/api.

n The Manager API contains URIs that begin with /api.

For more information about using the Policy API, see the NSX Policy API: Getting Started Guide.
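
As a small, hedged illustration of the two URI prefixes with curl (hostname and credentials are placeholders; the endpoints shown, /policy/api/v1/infra and /api/v1/cluster/status, are read-only queries):

# Policy API: read the declarative infra tree
curl -k -u 'admin:<password>' "https://<nsx-manager>/policy/api/v1/infra"

# Manager API: read the management cluster status
curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/cluster/status"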

Security
NSX Manager has the following security features:

n NSX Manager has a built-in user account called admin, which has access rights to all
resources, but does not have rights to the operating system to install software. NSX upgrade
files are the only files allowed for installation.

n NSX Manager supports session timeout and automatic user logout. NSX Manager does not
support session lock. Initiating a session lock can be a function of the workstation operating
system being used to access NSX Manager. Upon session termination or user logout, users
are redirected to the login page.

n Authentication mechanisms implemented on NSX follow security best practices and are
resistant to replay attacks. The secure practices are deployed systematically. For example,
session IDs and tokens on NSX Manager for each session are unique and expire after the
user logs out or after a period of inactivity. Also, every session has a time record and the
session communications are encrypted to prevent session hijacking.

You can view and change the session timeout value with the following CLI commands:

n The command get service http displays a list of values including session timeout.

n To change the session timeout value, run the following commands:

set service http session-timeout <timeout-value-in-seconds>


restart service ui-service
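
For example, to check the current value and then set a 30-minute timeout (1800 seconds is only an illustrative value), run the following from the NSX Manager CLI:

get service http
set service http session-timeout 1800
restart service ui-service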

Configure the User Interface Settings


You can configure how your users view the NSX user interface. These settings are valid for NSX
Manager, and in NSX Federation for Global Managers and Local Managers.


Prior to NSX 3.2, users could access two possible modes in the user interface: Policy and
Manager. You can control which mode is default, and whether users can switch between them
using the user interface mode buttons. The Policy mode is the default. New users of release 4.0
will not see the Manager button.

If present, you can use the Policy and Manager buttons to switch between the Policy and
Manager modes. Switching modes controls which menu items are available to you.

n By default, if your environment contains only objects created through Policy mode, your user
interface is in Policy mode and you do not see the Policy and Manager buttons.

n By default, if your environment contains any objects created through Manager mode, you see
the Policy and Manager buttons in the top-right corner.

You can use the User Interface settings to modify these defaults.

See NSX Manager for more information about the modes.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to System > General System Settings.

3 Select User Interface and click Edit in the User Interface Mode Toggle pane.

4 Modify the user interface settings: Toggle Visibility and Default Mode.

Toggle Visibility options:
n Visible to All Users: If Manager mode objects are present, the mode buttons are visible to all users.
n Visible to Users with the Enterprise Admin Role: If Manager mode objects are present, the mode buttons are visible to users with the Enterprise Admin role.
n Hidden from All Users: Even if Manager mode objects are present, the mode buttons are hidden from all users. This displays the Policy mode UI only, even if Manager mode objects are present.

Default Mode: Can be set to Policy or Manager, if available.

2 NSX Installation Workflows
You can install NSX on vSphere hosts. You can also configure a bare metal server to use NSX.

To install or configure any of the hypervisors or bare metal, follow the recommended tasks in the
workflows.

Read the following topics next:

n NSX Workflow for vSphere

n NSX Configuration Workflow for Bare Metal Server

NSX Workflow for vSphere


Use the checklist to track your installation progress on a vSphere host.

Follow the recommended order of procedures.

1 Review the NSX Manager installation requirements. See Chapter 4 NSX Manager Installation.

2 Configure the necessary ports and protocols. See Ports and Protocols.

3 Install the NSX Manager. See Install NSX Manager and Available Appliances.

4 Log in to the newly created NSX Manager. See Log In to the Newly Created NSX Manager .

5 Configure a compute manager. See Add a Compute Manager.

6 Deploy additional NSX Manager nodes to form a cluster. See Deploy NSX Manager Nodes to
Form a Cluster from the UI.

7 Review the NSX Edge installation requirements. See NSX Edge Installation Requirements.

8 Install NSX Edges. See Install an NSX Edge on ESXi Using the vSphere GUI.

9 Create an NSX Edge cluster. See Create an NSX Edge Cluster.

10 Create transport zones. See Create Transport Zones.

11 Create host transport nodes. See Prepare ESXi Cluster Hosts as Transport Nodes.

12 Create edge transport nodes. See Create an NSX Edge Transport Node.

13 Create an edge cluster. See Create an NSX Edge Cluster.


14 Create a Tier-0 gateway (if deploying single-tier or multi-tier routing topology in NSX). See
NSX Administration Guide.

NSX Configuration Workflow for Bare Metal Server


Use the checklist to track your progress when configuring bare metal server to use NSX.

Follow the recommended order of procedures.

1 Review the bare metal requirements. See Bare Metal Server System Requirements.

2 Configure the necessary ports and protocols. See Ports and Protocols.

3 Install the NSX Manager. See Install NSX Manager and Available Appliances.

4 Configure third-party packages on the bare metal server. See Install Third-Party Packages on
a Linux Physical Server.

5 Create host transport nodes.

A virtual switch is created on each host. The management plane sends the host certificates to
the control plane, and the management plane pushes control plane information to the hosts.
Each host connects to the control plane over SSL presenting its certificate. The control plane
validates the certificate against the host certificate provided by the management plane. The
controllers accept the connection upon successful validation.

6 Create an application interface for bare metal server workload. See Create Application
Interface for Physical Server Workloads.

3 Preparing for Installation
Before installing NSX, make sure your environment is prepared.

Read the following topics next:

n System Requirements

n Ports and Protocols

n Configuring a vSphere Distributed Switch

System Requirements
Before you install NSX, your environment must meet specific hardware and resource
requirements.

Before you configure Gateway Firewall features, make sure that the NSX Edge form factor
supports the features. See Supported Gateway Firewall Features on NSX Edge topic in the NSX
Administration Guide.

NSX Manager VM and Host Transport Node System Requirements


Before you install an NSX Manager or other NSX appliances, make sure that your environment
meets the supported requirements.

Hypervisor Host Network Requirements


The NIC card used must be compatible with the ESXi version that is running NSX. For supported
NIC card, see the VMware Compatibility Guide. To find the supported driver for the NIC card
vendor, see Firmware Versions of Supported Drivers.

Tip To quickly identify compatible cards in the Compatibility Guide, apply the following criteria:

n Under I/O Devices, select Network.

n Optionally, to use supported GENEVE encapsulation, under Features, select the GENEVE
options.

n Optionally, to use Enhanced Data Path, select N-VDS Enhanced Data Path.


Enhanced Data Path NIC Drivers


Download the supported NIC drivers from the My VMware page.

NIC Card  NIC Driver

Intel 82599  ixgben [Link]-1OEM.670.0.0.7535516

Intel(R) Ethernet Controller X710 for 10GbE SFP+ / Intel(R) Ethernet Controller XL710 for 40GbE QSFP+  i40en [Link]-1OEM.670.0.0.8169922

Cisco VIC 1400 series  nenic_ens

NSX Manager VM Resource Requirements


Thin virtual disk size is 3.8 GB and thick virtual disk size is 300 GB.

Appliance Size  Memory  vCPU  Shares  Reservations  Disk Space  VM Hardware Version

NSX Manager Extra Small (NSX 3.0 onwards)  8 GB  2  81920, Normal  8192 MB  300 GB  10 or later

NSX Manager Small VM (NSX 2.5.1 onwards)  16 GB  4  163840, Normal  16384 MB  300 GB  10 or later

NSX Manager Medium VM  24 GB  6  245760, Normal  24576 MB  300 GB  10 or later

NSX Manager Large VM  48 GB  12  491520, Normal  49152 MB  300 GB  10 or later

Note NSX Manager provides multiple roles which previously required separate appliances. This
includes the policy role, the management plane role and the central control plane role. The
central control plane role was previously provided by the NSX Controller appliance.

n You can use the Extra Small VM resource size only for the Cloud Service Manager appliance
(CSM). Deploy CSM in the Extra Small VM size or higher, as required. See Overview of
Deploying NSX Cloud for more information.

n The NSX Manager Small VM appliance size is suitable for lab and proof-of-concept
deployments, and must not be used in production.

n The NSX Manager Medium VM appliance size is the autoselected appliance size during
deployment and is suitable for typical production environments. An NSX management cluster
formed using this appliance size can support up to 128 hypervisors. Starting with NSX 3.1, a
single NSX Manager cluster is supported.

n The NSX Manager Large VM appliance size is suitable for large-scale deployments. An NSX
management cluster formed using this appliance size can support more than 128 hypervisors.

For maximum scale using the NSX Manager Large VM appliance size, go to the VMware
Configuration Maximums tool at [Link] and select NSX from the
product list.

Language Support
NSX Manager has been localized into multiple languages: English, German, French, Japanese,
Simplified Chinese, Korean, Traditional Chinese, and Spanish.

NSX Manager Browser Support


The following browsers are recommended for working with NSX Manager.

Browser  Windows 10  Mac OS X 10.13, 10.14  Ubuntu 18.04

Google Chrome 80  Yes  Yes  Yes

Mozilla Firefox 72  Yes  Yes  Yes

Microsoft Edge 80  Yes (Windows 10 only)

Apple Safari 13  Yes (Mac OS X 10.13, 10.14 only)

Note
n Internet Explorer is not supported.

n Supported Browser minimum resolution is 1280 x 800 px.

n Language support: NSX Manager has been localized into multiple languages: English, German,
French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. However,
because NSX Manager localization utilizes the browser language settings, ensure that your
settings match the desired language. There is no language preference setting within the NSX
Manager interface itself.

Network Latency Requirements


The maximum network latency between NSX Managers in a NSX Manager cluster is 10ms.

The maximum network latency between NSX Managers and Transport Nodes is 150ms.
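
A quick, informal way to spot-check round-trip latency is an ICMP test from one appliance (or any host with reachability to both) toward its peer; the address below is a placeholder, and a sustained measurement tool is preferable for sizing decisions:

ping -c 5 <peer-nsx-manager-ip>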

Important NSX Appliance VMs backed by vSAN clusters may see intermittent disk write latency spikes of 10+ ms. This is expected due to the way vSAN handles data (bursts of incoming IOs resulting in queuing of data and delay). As long as the average disk access latency continues to be less than 10 ms, intermittent latency spikes should not have an impact on NSX Appliance VMs.

Storage Requirements
n NSX appliance VMs that are backed by vSAN clusters may see intermittent disk write latency spikes of 10+ ms. This is expected due to the way vSAN handles data (bursts of incoming IOs resulting in queuing of data and delay). As long as the average disk access latency continues to be less than 10 ms, intermittent latency spikes should not have an impact on NSX Appliance VMs.

n It is recommended that NSX Managers be placed on shared storage.

n Storage must be highly available, so that a storage outage does not cause all NSX Manager file systems to be placed into read-only mode.

Refer to the product documentation of the storage technology you are using to know how to
optimally design a highly available storage solution.

Supported Hypervisor Version


Supported Hypervisor for Host Transport Nodes


To find interoperability between NSX and vCenter Server/ESXi hosts, do the following in the VMware Product Interoperability Matrix:

1 In the Solution 1 section, from the Select a Solution drop-down menu, select VMware NSX and from the Select a Version drop-down menu, select a version. For example, 4.x.x.

2 In the Platform/Solution 1 section, from the Select a Solution drop-down menu, select VMware vCenter Server and/or VMware vSphere Hypervisor (ESXi) and select a version. For example, 7.x.

3 Click Check Interoperability.

Hypervisor  Version  CPU Cores  Memory

vSphere  Supported vSphere version  4  16 GB

Note To avoid memory errors on a hypervisor host running vSphere ESXi version 7.x.x, ensure
that at least 16 GB is available before deploying NSX Manager.

Table 3-1. Supported Hosts for NSX Managers

Hypervisor  Support Description

ESXi  For supported hosts, see the VMware Product Interoperability Matrices.

For ESXi hosts, NSX supports the Host Profiles and Auto Deploy features on vSphere 6.7 EP6
or higher. See Understanding vSphere Auto Deploy in the VMware ESXi Installation and Setup
documentation for more information.

Caution On RHEL and Ubuntu, the yum update command might update the kernel version,
which must not be greater than 4.19.x, and break the compatibility with NSX. Disable the
automatic kernel update when you run yum update. Also, after running yum install, verify
that NSX supports the kernel version.
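
One way to honor this caution is to exclude kernel packages when updating; a sketch with yum is shown below (the same idea applies to dnf-based releases):

# Skip kernel packages for a single update run
yum update --exclude=kernel*

# Or pin them permanently by adding this line to the [main] section of /etc/yum.conf
# exclude=kernel*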


NSX Edge VM System Requirements


Before you install NSX Edge, make sure that your environment meets the supported
requirements.

Note The following conditions apply to the hosts for the NSX Edge nodes:

n NSX Edge nodes are supported only on ESXi-based hosts with Intel-based and AMD-based
chipsets.

Otherwise, vSphere EVC mode may prevent NSX Edge nodes from starting, showing an error
message in the console.

n If vSphere EVC mode is enabled for the host for the NSX Edge VM, the CPU must be Haswell
or later generation.

n Only VMXNET3 vNIC is supported for the NSX Edge VM.

NSX Cloud Note If using NSX Cloud, note that the NSX Public Cloud Gateway (PCG) is deployed in a single default size for each supported public cloud. See NSX Public Cloud Gateway: Architecture and Modes of Deployment for details.

NSX Edge VM Resource Requirements


Appliance Size  Memory  vCPU  Disk Space  VM Hardware Version  Notes

NSX Edge Small (vSphere 6.0 or later)  4 GB  2  200 GB  11 or later
Proof-of-concept deployments only.
Note L7 rules for firewall, load balancing and so on are not realized on a Tier-1 gateway if you deploy a small sized NSX Edge VM.

NSX Edge Medium (vSphere 6.0 or later)  8 GB  4  200 GB  11 or later
Suggested for L2 through L4 features and when the total throughput requirement is less than 2 Gbps:
n NAT
n Routing
n L4 firewall
n L4 load balancer

NSX Edge Large (vSphere 6.0 or later)  32 GB  8  200 GB  11 or later
Suggested for L2 through L4 features and when the total throughput requirement is between 2 and 10 Gbps:
n NAT
n Routing
n L4 Firewall
n L4 Load Balancer
n L7 Load Balancer (for example, when SSL offload is required)
n TLS Inspection
See Scaling Load Balancer Resources in the NSX Administration Guide. For more information about what the different load balancer sizes and NSX Edge form factors can support, see [Link].

NSX Edge Extra Large (vSphere 6.0 or later)  64 GB  16  200 GB  11 or later
Suggested for features that have a higher total throughput requirement than the NSX Edge Large form factor:
n L7 load balancer
n VPN
n TLS Inspection
n L7 Access Profile (URL filtering)
n IDS/IPS
n Malware prevention (Advanced Threat Protection)
See Scaling Load Balancer Resources in the NSX Administration Guide. For more information about what the different load balancer sizes and NSX Edge form factors can support, see [Link].

NSX Edge VM CPU Requirements


For DPDK support, the underlying platform needs to meet the following requirements:

n CPU must have AES-NI capability.


n CPU must have 1 GB Huge Page support.

Hardware Type

CPU n Intel Xeon E7-xxxx (Westmere-EX and later CPU generation)


n Intel Xeon 56xx (Westmere-EP)
n Intel Xeon E5-xxxx (Sandy Bridge and later CPU generation)
n Intel Xeon Platinum (all generations)
n Intel Xeon Gold (all generations)
n Intel Xeon Silver (all generations)
n Intel Xeon Bronze (all generations)

n AMD EPYC Series processors

NSX Edge Bare Metal Requirements


Before you configure the NSX Edge bare metal, make sure that your environment meets the
supported requirements.

NSX Edge Bare Metal Memory, CPU, and Disk Requirements


Minimum Requirements

Memory CPU Cores Disk Space

32 GB 8 200 GB

Recommended Requirements

Memory CPU Cores Disk Space

256 GB 24 200 GB

NSX Edge Bare Metal DPDK CPU Requirements


For DPDK support, the underlying platform needs to meet the following requirements:

n CPU must have AES-NI capability.

n CPU must have 1 GB Huge Page support.

n NSX Edge Bare Metal supports up to 64 cores for the entire system. This means that on a server with a single socket, the CPU can have up to 64 cores. On a server with two sockets, each socket cannot have more than 32 cores.


Hardware Type

CPU n Intel Xeon E7-xxxx (Westmere-EX and later CPU generation)


n Intel Xeon 56xx (Westmere-EP)
n Intel Xeon E5-xxxx (Sandy Bridge and later CPU generation)
n Intel Xeon Platinum (all generations)
n Intel Xeon Gold (all generations)
n Intel Xeon Silver (all generations)
n Intel Xeon Bronze (all generations)

n AMD EPYC Series processors

NSX Edge Bare Metal Hardware Requirements


Verify that the bare metal NSX Edge hardware is listed at [Link]. If the hardware is not listed, the storage, video adapter, or motherboard components might not work on the NSX Edge appliance.

Note Starting with NSX 3.2, NSX Edge Bare Metal supports both UEFI and legacy BIOS modes.
However, in NSX 3.1 and previous releases, NSX Edge Bare Metal only supports legacy BIOS
mode.

NSX Edge Bare Metal NIC Requirements

NIC Type  Description  Vendor ID  PCI Device ID  Firmware Version

Mellanox ConnectX-4 EN  PCI_DEVICE_ID_MELLANOX_CONNECTX4  15b3  0x1013  12.21.1000 and above
Mellanox ConnectX-4 Lx  PCI_DEVICE_ID_MELLANOX_CONNECTX4LX  15b3  0x1015  14.21.1000 and above
Mellanox ConnectX-5  PCI_DEVICE_ID_MELLANOX_CONNECTX5  15b3  0x1017  16.21.1000 and above
Mellanox ConnectX-5 EX  PCI_DEVICE_ID_MELLANOX_CONNECTX5EX  15b3  0x1019  16.21.1000 and above
Mellanox ConnectX-6  PCI_DEVICE_ID_MELLANOX_CONNECTX6  15b3  0x101B  22.27.6008 and above
Mellanox ConnectX-6 Dx  PCI_DEVICE_ID_MELLANOX_CONNECTX6DX  15b3  0x101D  22.27.6008 and above
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_KX4  8086  0x10F7  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_KX4_MEZZ  8086  0x1514  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_KR  8086  0x1517  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_COMBO_BACKPLANE  8086  0x10F8  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_CX4  8086  0x10F9  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_SFP  8086  0x10FB  n/a
Intel X520/Intel 82599  IXGBE_SUBDEV_ID_82599_SFP  8086  0x11A9  n/a
Intel X520/Intel 82599  IXGBE_SUBDEV_ID_82599_RNDC  8086  0x1F72  n/a
Intel X520/Intel 82599  IXGBE_SUBDEV_ID_82599_560FLR  8086  0x17D0  n/a
Intel X520/Intel 82599  IXGBE_SUBDEV_ID_82599_ECNA_DP  8086  0x0470  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_SFP_EM  8086  0x1507  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_SFP_SF2  8086  0x154D  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_SFP_SF_QP  8086  0x154A  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_QSFP_SF_QP  8086  0x1558  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599EN_SFP  8086  0x1557  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_XAUI_LOM  8086  0x10FC  n/a
Intel X520/Intel 82599  IXGBE_DEV_ID_82599_T3_LOM  8086  0x151C  n/a
Intel X540  IXGBE_DEV_ID_X540T  8086  0x1528  n/a
Intel X540  IXGBE_DEV_ID_X540T1  8086  0x1560  n/a
Intel X550  IXGBE_DEV_ID_X550T  8086  0x1563  n/a
Intel X550  IXGBE_DEV_ID_X550T1  8086  0x15D1  n/a
Intel X550  IXGBE_DEV_ID_X550EM_A_10G_T  8086  0x15C8  21.5.9 and later versions
Intel X550  IXGBE_DEV_ID_X550EM_A_QSFP  8086  0x15CA  21.5.9 and later versions
Intel X550  IXGBE_DEV_ID_X550EM_A_QSFP_N  8086  0x15CC  21.5.9 and later versions
Intel X550  IXGBE_DEV_ID_X550EM_X_10G_T  8086  0x15AD  21.5.9 and later versions
Intel X550  IXGBE_DEV_ID_X550EM_X_XFI  8086  0x15B0  21.5.9 and later versions
Intel X710  I40E_DEV_ID_SFP_X710  8086  0x1572  6.80 and later versions (8.x versions are not supported)
Intel X710  I40E_DEV_ID_KX_C  8086  0x1581  6.80 and later versions (8.x versions are not supported)
Intel X710  I40E_DEV_ID_10G_BASE_T  8086  0x1586  6.80 and later versions (8.x versions are not supported)
Intel X710  I40E_DEV_ID_10G_BASE_T4  8086  0x1589  6.80 and later versions (8.x versions are not supported)
Intel X710  I40E_DEV_ID_10G_SFP  8086  0x104E  n/a
Intel X710  I40E_DEV_ID_10G_B  8086  0x104F  n/a
Intel XL710  I40E_DEV_ID_KX_B  8086  0x1580  6.80 and later versions (8.x versions are not supported)
Intel XL710  I40E_DEV_ID_QSFP_A  8086  0x1583  6.80 and later versions (8.x versions are not supported)
Intel XL710  I40E_DEV_ID_QSFP_B  8086  0x1584  6.80 and later versions (8.x versions are not supported)
Intel XL710  I40E_DEV_ID_QSFP_C  8086  0x1585  6.80 and later versions (8.x versions are not supported)
Intel XL710  I40E_DEV_ID_20G_KR2  8086  0x1587  6.80 and later versions (8.x versions are not supported)
Intel XL710  I40E_DEV_ID_20G_KR2_A  8086  0x1588  6.80 and later versions (8.x versions are not supported)
Intel XL710  I40E_DEV_ID_10G_BASE_T_BC  8086  0x15FF  6.80 and later versions (8.x versions are not supported)
Intel XXV710  I40E_DEV_ID_25G_B  8086  0x158A  6.80 and later versions (8.x versions are not supported)
Intel XXV710  I40E_DEV_ID_25G_SFP28  8086  0x158B  6.80 and later versions (8.x versions are not supported)
Cisco VIC 1300 series  Cisco UCS Virtual Interface Card 1300  1137  0x0043  n/a
Cisco VIC 1400 series  Cisco UCS Virtual Interface Card 1400  1137  0x0043  n/a

Note For all the supported NICs listed above, verify that the media adapters and cables you
use follow the vendor's supported media types. Any media adapter or cables not supported
by the vendor can result in unpredictable behavior, including the inability to boot up due
to an unrecognized media adapter. See the NIC vendor documentation for information about
supported media adapters and cables.

Important Cisco VICs: To successfully claim Cisco VICs for the NSX Edge datapath, configure
multiple RX and TX queues from the Cisco UCS Manager. The number of queues configured must
be sufficient for datapath to have one queue per core. For configuration details, refer to the
Cisco documentation.

Bare Metal Server System Requirements


Before you configure the bare metal server, make sure that your server meets the supported
requirements.

Important The user performing the installation may require sudo command permissions for
some of the procedures. See Install Third-Party Packages on a Linux Physical Server.

Bare Metal Server Requirements

Operating System  Version  CPU Cores  Memory

CentOS Linux  7.6 (kernel: 3.10.0-957), 7.7, 7.9, 8.2, 8.4  4  16 GB

Red Hat Enterprise Linux (RHEL)  7.6 (kernel: 3.10.0-957), 7.7, 7.9, 8.2, 8.4, 8.6 (starting with NSX 4.0.1)  4  16 GB

Oracle Linux  7.6 (kernel: 3.10.0-957), 7.7, 7.8, 7.9, 8.6 (starting with NSX 4.0.1)  4  16 GB

SUSE Linux Enterprise Server  12 SP3, 12 SP4, 12 SP5 (starting with NSX 3.2.1)  4  16 GB

Ubuntu  16.04.2 LTS (kernel: 4.4.0-*), 18.04, 20.04  4  16 GB

Windows Server  2016 (minor version 14393.2248 and later), 2019  4  16 GB

n Ensure MTU is set to 1600 for Jumbo frame support by NIC or OS drivers.

n Hosts running Ubuntu 18.04.2 LTS must be upgraded from 16.04 or freshly installed.

Supported Topologies
To find the complete list of supported topologies, see the NSX Physical Server Encyclopedia
slide deck.

Physical NICS
For physical servers running Linux: There is no restriction on the physical NIC other than being
supported by the operating system.

For physical servers running Windows on Segment-VLAN with MTU at 1500, there are also no restrictions on the physical NIC other than being supported by the operating system.

For physical servers running Windows on Segment-Overlay or on Segment-VLAN with a large MTU (> 1500), validate that the associated driver supports jumbo packets. To verify whether jumbo packets are supported, run the following command:

(Get-NetAdapterAdvancedProperty -Name "<Ethernet>").DisplayName -Contains "Jumbo Packet"

Where <Ethernet> must be replaced with the real adapter name of each physical NIC.

Table 3-2. Virtual NICs

NIC Type  Description  PCI BUS ID  Firmware Version

e1000e  Intel(R) 82574L Gigabit Network  [Link]  [Link] and later version

vmxnet3  vmxnet3 Ethernet Adapter  [Link]  [Link] and later version

Bare Metal Linux Container Requirements


For bare metal Linux container requirements, see the NSX Container Plug-in for OpenShift -
Installation and Administration Guide.

Ports and Protocols


Ports and protocols allow node-to-node communication paths in NSX. The paths are secured and authenticated, and a storage location for the credentials is used to establish mutual authentication.

Configure the ports and protocols required to be open on both the physical and the host
hypervisor firewalls in NSX. Refer to [Link] for
more details.
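
As a quick, hedged spot-check that a required port is reachable from a host you are about to prepare (TCP 443 to NSX Manager is shown only as an example; the ports reference above remains the authoritative list):

nc -zv <nsx-manager> 443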

By default, all certificates are self-signed certificates. The northbound GUI and API certificates
and private keys can be replaced by CA signed certificates.

There are internal daemons that communicate over the loopback or UNIX domain sockets:

n ESXi: nsx-cfgagent, ESX-DP (in the kernel)

Note To get access to NSX nodes, you must enable SSH on these nodes.

Configuring a vSphere Distributed Switch


When a transport node is configured on a VDS host switch, some network parameters can only
be configured in VMware vCenter.

The following requirements must be met to install NSX on a VDS host switch:

n VMware vCenter 7.0 or a later version

n ESXi 7.0 or a later version

The created VDS switch can be configured to centrally manage networking for NSX hosts.

Configuring a VDS switch for NSX networking requires objects to be configured on NSX and in
vCenter Server.

n In vSphere:

n Create a VDS switch.

n Set MTU to at least 1600 (see the spot-check example after this list).

n Add ESXi hosts to the switch. These hosts are later prepared as NSX transport nodes.

n Assign uplinks to physical NICs.

n In NSX:

n When configuring a transport node, map the uplinks created in the NSX uplink profile to the uplinks defined on the VDS.


For more details on preparing a host transport node on a VDS switch, see the NSX Installation
Guide.
The following parameters can only be configured in VMware vCenter on a VDS-backed host switch:

Configuration: MTU
n In VDS: In VMware vCenter, set an MTU value on the switch (select the VDS, then click Actions → Settings → Edit Settings). Note: A VDS switch must have an MTU of 1600 or higher.
n In NSX: Any MTU value set in an NSX uplink profile is overridden.
n Description: For a host transport node that is prepared using VDS as the host switch, the MTU value needs to be set on the VDS switch in vCenter Server.

Configuration: Uplinks/LAGs
n In VDS: In VMware vCenter, configure uplinks/LAGs on the VDS switch (select the VDS, then click Actions → Settings → Edit Settings).
n In NSX: When a transport node is prepared, the teaming policy in NSX is mapped to the uplinks/LAGs configured on the VDS switch.
n Description: For a host transport node that is prepared using VDS as the host switch, the uplinks or LAGs are configured on the VDS switch. During configuration, NSX requires a teaming policy to be configured for the transport node. This teaming policy is mapped to the uplinks/LAGs configured on the VDS switch.

Configuration: NIOC
n In VDS: Configure in VMware vCenter (select the VDS, then click Actions → Settings → Edit Settings).
n In NSX: NIOC configuration is not available when a host transport node is prepared using a VDS switch.
n Description: For a host transport node that is prepared using VDS as the host switch, the NIOC profile can only be configured in vCenter Server.

Configuration: Link Layer Discovery Protocol (LLDP)
n In VDS: Configure in VMware vCenter (select the VDS, then click Actions → Settings → Edit Settings).
n In NSX: LLDP configuration is not available when a host transport node is prepared using a VDS switch.
n Description: For a host transport node that is prepared using VDS as the host switch, the LLDP profile can only be configured in vCenter Server.

Configuration: Add or Manage Hosts
n In VDS: Manage in VMware vCenter (go to Networking → VDS Switch → Add and Manage Host).
n In NSX: Hosts are prepared as transport nodes in NSX.
n Description: Before preparing a transport node using a VDS switch, that node must be added to the VDS switch in vCenter Server.

Note NIOC profiles, Link Layer Discovery Protocol (LLDP) profile, and Link Aggregation Group
(LAG) for these virtual machines are managed by VDS switches and not by NSX. As a vSphere
administrator, configure these parameters from VMware vCenter UI or by calling VDS API
commands.

After preparing a host transport node with VDS as a host switch, the host switch type displays
VDS as the host switch. It displays the configured uplink profile in NSX and the associated
transport zones.


In VMware vCenter, the VDS switch used to prepare NSX hosts is created as an NSX Switch.

4 NSX Manager Installation
NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring,
and monitoring NSX components such as logical switches, logical routers, and firewalls.

NSX Manager provides a system view and is the management component of NSX.

For high availability, NSX supports a management cluster of three NSX Managers. Deploying a management cluster is recommended for a production environment. Starting with NSX 3.1, a single-node NSX Manager cluster deployment is also supported.

In a vSphere environment, the following functions are supported by NSX Manager:

n vCenter Server can use the vMotion function to live migrate NSX Manager across hosts and
clusters.

n vCenter Server can use the Storage vMotion function to live migrate file system of an NSX
Manager across hosts and clusters.

n vCenter Server can use the Distributed Resource Scheduler function to rebalance NSX
Manager across hosts and clusters.

n vCenter Server can use the Anti-affinity function to manage NSX Manager across hosts and
clusters.

NSX Manager Deployment, Platform, and Installation Requirements

The following table details the NSX Manager deployment, platform, and installation requirements.

Requirements Description

Supported deployment methods n OVA/OVF

Supported platforms See NSX Manager VM and Host Transport Node System Requirements.
On ESXi, it is recommended that the NSX Manager appliance be installed on
shared storage.

IP address An NSX Manager must have a static IP address. You can change the IP address
after installation. Both IPv4 and IPv6 are supported. You can choose IPv4 only or
use dual stack (both IPv4 and IPv6).

Note If you choose to use IPv4 only, then the NSX Manager services (for example, SNMP, NTP, vIDM, and so on) must have IPv4 addresses.


NSX appliance password n At least 12 characters


n At least one lower-case letter
n At least one upper-case letter
n At least one digit
n At least one special character
n At least five different characters
n Default password complexity rules are enforced by the following Linux PAM
module arguments:
n retry=3: The maximum number of times a new password can be entered,
for this argument at the most 3 times, before returning with an error.
n minlen=12: The minimum acceptable size for the new password. In
addition to the number of characters in the new password, credit (of +1
in length) is given for each different kind of character (other, upper, lower
and digit).
n difok=0: The minimum number of bytes that must be different in the new
password. Indicates similarity between the old and new password. With a
value 0 assigned to difok, there is no requirement for any byte of the old
and new password to be different. An exact match is allowed.
n lcredit=1: The maximum credit for having lower case letters in the new
password. If you have less than or 1 lower case letter, each letter will
count +1 towards meeting the current minlen value.
n ucredit=1: The maximum credit for having upper case letters in the new
password. If you have less than or 1 upper case letter each letter will
count +1 towards meeting the current minlen value.
n dcredit=1: The maximum credit for having digits in the new password. If
you have less than or 1 digit, each digit will count +1 towards meeting the
current minlen value.
n ocredit=1: The maximum credit for having other characters in the new
password. If you have less than or 1 other characters, each character will
count +1 towards meeting the current minlen value.
n enforce_for_root: The password is set for the root user.

Note For more details on Linux PAM module to check the password against
dictionary words, refer to the man page.

For example, avoid simple and systematic passwords such as VMware123!123 or VMware12345. Passwords that meet complexity standards are not simple and systematic but are a combination of letters, special characters, and numbers, such as VMware123!45, VMware 1!2345, or VMware@1az23x. (A minimal validation sketch illustrating these defaults follows this table.)

Hostname When installing NSX Manager, specify a hostname that does not contain invalid
characters such as an underscore or special characters such as dot ".". If the
hostname contains any invalid character or special characters, after deployment
the hostname will be set to nsx-manager.
For more information about hostname restrictions, see RFC 952 ([Link]) and [Link].

VMware Tools The NSX Manager VM running on ESXi has VMTools installed. Do not remove or
upgrade VMTools.


System n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n Verify that a datastore is configured and accessible on the ESXi host.
n Verify that you have the IP address and gateway, DNS server IP addresses,
domain search list, and the NTP Server IP or FQDN for the NSX Manager to
use.
n Create a management VDS and target VM port group in vCenter. Place the
NSX appliances onto this management VDS port group network. See Prepare
a vSphere Distributed Switch for NSX.

Multiple management networks can be used as long as the NSX Manager nodes have consistent connectivity and the recommended latency between them.

Note If you plan to use a Cluster VIP, all NSX Manager appliances should belong to the same subnet.
n Plan your NSX Manager IP and NSX Manager Cluster VIP addressing scheme.

Note Verify that you have the hostname for NSX Manager to use. The
Hostname format must be nsx-manager-fqdn@[Link]. This format
is required if NSX installation is dual stack (IPv4, IPv6) and/or if planning to
configure CA-signed certificates.

OVF Privileges Verify that you have adequate privileges to deploy an OVF template on the ESXi
host.
A management tool that can deploy OVF templates, such as VMware vCenter or
the vSphere Client. The OVF deployment tool must support configuration options
to allow for manual configuration.
OVF tool version must be 4.0 or later.

Client Plug-in The Client Integration Plug-in must be installed.

Certificates If you plan to configure internal VIP on a NSX Manager cluster, you can apply a
different certificate to each NSX Manager node of the cluster. See Configure a
Virtual IP Address for a Cluster.
If you plan to configure an external load balancer, ensure only a single certificate
is applied to all NSX Manager cluster nodes. See Configuring an External Load
Balancer.

Note On an NSX Manager fresh install, reboot, or after an admin password change when
prompted on first login, it might take several minutes for the NSX Manager to start.
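
As a convenience, the default complexity rules in the table above can be checked locally before you deploy. The following is a minimal sketch in Python 3, not the PAM implementation itself; it only approximates the defaults listed above (minimum length, character classes, and at least five different characters) and should be adjusted if you change those defaults.

# Minimal sketch approximating the documented default NSX appliance password rules.
# This is not the PAM implementation; it only mirrors the defaults listed above.
import string

def meets_default_nsx_password_rules(password):
    checks = [
        len(password) >= 12,                             # minlen=12
        any(c.islower() for c in password),              # at least one lower-case letter
        any(c.isupper() for c in password),              # at least one upper-case letter
        any(c.isdigit() for c in password),              # at least one digit
        any(c in string.punctuation for c in password),  # at least one special character
        len(set(password)) >= 5,                         # at least five different characters
    ]
    return all(checks)

print(meets_default_nsx_password_rules("VMware12345"))   # False: too short and no special character
print(meets_default_nsx_password_rules("VMware123!45"))  # True: satisfies the listed defaults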


NSX Manager Installation Scenarios


Important When you install NSX Manager from an OVA or OVF file, either from vSphere Client
or the command line as a standalone host, OVA/OVF property values such as user names and
passwords are not validated before the VM is powered on. However, the static IP address field
is a mandatory field to install NSX Manager. When you install NSX Manager as a managed host
in VMware vCenter, OVA/OVF property values such as user names and passwords are validated
before the VM is powered on.

n If you specify a user name for any local user, the name must be unique. If you specify the
same name, it is ignored and the default names (for example, admin and audit) are used.

n If the password for the root or admin user does not meet the complexity requirements, you
must log in to NSX Manager through SSH or at the console as root with password vmware
and admin with password default. You are prompted to change the password.

n If the password for other local users (for example, audit) does not meet the complexity
requirements, the user account is disabled. To enable the account, log in to NSX Manager
through SSH or at the console as the admin user and run the command set user
local_user_name to set the local user's password (the current password is an empty string).
You can also reset passwords in the UI using System > User Management > Local Users.

Caution Changes made to NSX while logged in with the root user credentials might cause system failure and potentially impact your network. Make changes using the root user credentials only with the guidance of the VMware Support team.

Note The core services on the appliance do not start until a password with sufficient complexity
is set.

After you deploy NSX Manager from an OVA file, you cannot change the VM's IP settings by
powering off the VM and modifying the OVA settings from VMware vCenter.

Configuring NSX Manager for Access by the DNS Server


By default, transport nodes access NSX Managers based on their IP addresses. However, access can also be based on the DNS names of the NSX Managers.

You enable FQDN usage by publishing the FQDNs of the NSX Managers.

Note Enabling FQDN usage (DNS) on NSX Managers is required for multisite deployments.
(It is optional for all other deployment types.) See Multisite Deployment of NSX in the NSX
Administration Guide.


Publishing the FQDNs of the NSX Managers


n Go to DNS server and configure forward and reverse lookup entries for NSX Manager nodes.
In lookup entries, configure short TTL for FQDNs, for example, set short TTL to 600 seconds.

n Use NSX Manager API to enable DNS server to access the NSX Manager node.

Example request: PUT [Link]

{
"publish_fqdns": true,
"_revision": 0
}

Example response:

{
"publish_fqdns": true,
"_revision": 1
}

See the NSX API Guide for details.
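
For illustration, the same request can be scripted. The following is a minimal sketch using Python 3 and the requests library; the manager FQDN, credentials, and the endpoint path are placeholders or assumptions, so confirm the exact URL and payload against the NSX API Guide before use.

# Hedged sketch: enable publish_fqdns through the NSX Manager API.
# The endpoint path, FQDN, and credentials below are assumptions/placeholders;
# verify them against the NSX API Guide for your release.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder FQDN
ENDPOINT = "/api/v1/configs/management"           # assumed path for the publish_fqdns setting
AUTH = ("admin", "REPLACE_WITH_ADMIN_PASSWORD")

session = requests.Session()
session.auth = AUTH
session.verify = False   # lab use only with self-signed certificates; prefer CA-signed certificates

# Read the current configuration to obtain the current _revision value.
current = session.get(NSX_MANAGER + ENDPOINT)
current.raise_for_status()
config = current.json()

# Enable FQDN publishing, preserving the revision for optimistic locking.
payload = {"publish_fqdns": True, "_revision": config["_revision"]}
response = session.put(NSX_MANAGER + ENDPOINT, json=payload)
response.raise_for_status()
print(response.json())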

Note After publishing the FQDNs, validate access by the transport nodes as described in the
next section.

Validating Access via FQDN by Transport Nodes


After publishing the FQDNs of the NSX Managers, verify that the transport nodes are
successfully accessing the NSX Managers.

Using SSH, log into a transport node such as a hypervisor or Edge node, and run the get
controllers CLI command.

Example response:

Controller IP    Port   SSL       Status      Is Physical Master   Session State   Controller FQDN
[Link]           1235   enabled   connected   true                 up              [Link]

Read the following topics next:

n Modifying the Default Admin Password Expiration


Modifying the Default Admin Password Expiration


By default, the administrative password for the NSX Manager and NSX Edge appliances
expires after 90 days. However, you can reset the expiration period after initial installation and
configuration.

If the password expires, you will be unable to log in and manage components. Additionally, any
task or API call that requires the administrative password to execute will fail. If your password
expires, see Knowledge Base article 70691 NSX-T admin password expired.

Procedure

1 Use a secure program to connect to the NSX CLI console.

2 Reset the expiration period.

You can set the expiration period for between 1 and 9999 days.

nsxcli> set user admin password-expiration <1 - 9999>

Note Alternatively, you can use API commands to set the admin password expiration period (a sketch follows this procedure).

3 (Optional) You can disable password expiry so the password never expires.

nsxcli> clear user admin password-expiration
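
The following is a minimal sketch of the API alternative mentioned in the note under step 2, using Python 3 and the requests library. The node-user endpoint path, the user ID, and the password_change_frequency field are assumptions to verify against the NSX API Guide for your release.

# Hedged sketch: set the admin password expiration period through the node users API.
# The endpoint path, user ID, and field name are assumptions; confirm them in the
# NSX API Guide before relying on this.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder FQDN
USER_ENDPOINT = "/api/v1/node/users/10000"        # assumed node-user endpoint for the admin user
AUTH = ("admin", "REPLACE_WITH_ADMIN_PASSWORD")

session = requests.Session()
session.auth = AUTH
session.verify = False   # lab use only with self-signed certificates

# Fetch the current node-user object, update the expiration period, and write it back.
user = session.get(NSX_MANAGER + USER_ENDPOINT)
user.raise_for_status()
body = user.json()
body["password_change_frequency"] = 9999          # days; the documented range is 1 - 9999
response = session.put(NSX_MANAGER + USER_ENDPOINT, json=body)
response.raise_for_status()
print(response.json())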

5 NSX Manager Cluster Requirements
The following subsections describe cluster requirements for NSX appliances and provide recommendations for specific site deployments.

Cluster Requirements
n In a production environment, the NSX Manager (Local Manager in an NSX Federation
environment) or Global Manager cluster must have three members for scaling out and for
redundancy of its management and control planes.

Each cluster member should be placed on a unique hypervisor host with three physical
hypervisor hosts in total. This is required to avoid a single physical hypervisor host failure
impacting the NSX control plane. It is recommended you apply anti-affinity rules to ensure
that all three cluster members are running on different hosts.

The normal production operating state is a three-node cluster of the NSX Manager (Local
Manager in an NSX Federation environment) or Global Manager. However, you can add
additional, temporary nodes to allow for IP address changes.

Important An NSX Manager cluster requires a quorum to be operational. This means at least two out of three of its members must be up and running. If a quorum is lost, the NSX management and control planes do not work. This results in no access to the NSX Manager's UI and API for configuration updates using the management plane, as well as no ability to add or vMotion VMs on NSX segments. The dynamic routing on Tier-0 remains operational.

n Starting with NSX 3.1.1, NSX supports the deployment of a single NSX Manager or Global
Manager in production deployments. To recover from a failed NSX Manager, configure
vSphere HA for the single NSX Manager. The time required to recover a single NSX Manager
that uses backup/restore or vSphere HA might be much longer than the availability provided
by a cluster of NSX Manager nodes. Similarly, for lab and proof-of-concept deployments
where there are no production workloads, you can run a single NSX Manager or Global
Manager to save resources. NSX Manager or Global Manager nodes can be deployed on
either ESXi or KVM. However, mixed deployments of managers on both ESXi and KVM are
not supported.


n If you deploy a single NSX Manager or Global Manager in a production environment, you can
also enable vSphere HA on the host cluster for the manager nodes to get automatic recovery
of the manager VM in case of ESXi failure. For more information on vSphere HA, see the
vSphere Availability guide in the vSphere Documentation Center.
n If one of the NSX Manager nodes goes down, the three-node NSX Manager cluster provides redundancy. However, if an NSX Manager node loses its disk, it goes into a read-only state. It might still continue to participate in the cluster if its network connectivity is up. This can result in the management plane and control plane becoming unavailable. To resolve the issue, power off the impacted NSX Manager.

Configure Anti-affinity rules in VMware vCenter


Set up DRS anti-affinity rules to prevent, whenever possible, two NSX Manager VMs from running on the same host. Follow the vSphere guide to configure VM-to-VM affinity rules (see Create a VM-VM Affinity Rule) or VM-to-Host affinity rules (see Create a VM-Host Affinity Rule).

Recovery with vSphere HA


You can use vSphere HA (High Availability) with NSX to enable quick recovery if the host running
the NSX Manager node fails. See Creating and Using vSphere HA Clusters in the vSphere product
documentation.

Read the following topics next:

n Cluster Requirements for an Individual Site

n Cluster Requirements for Multiple Sites

Cluster Requirements for an Individual Site


Each NSX appliance cluster – Global Manager, Local Manager or NSX Manager – must contain
three VMs.

Those three VMs can all be physically deployed in the same data center or in different data centers, as long as the latency between VMs in the cluster is below 10 ms. In a three-node Manager cluster, only one of the nodes can fail at any given time for the cluster to self-heal.

Single Site Requirements and Recommendations


The following recommendations apply to single site NSX deployments and cover cluster
recommendations for Global Manager, Local Manager and NSX Manager appliances:

n Place your NSX appliances on different hosts (from anti-affinity rules in VMware vCenter) to
avoid a single host failure impacting multiple managers.

n Maximum latency between NSX appliances is 10ms.


n You can place NSX appliances in different vSphere clusters or in a common vSphere cluster.

n It is recommended that you place NSX appliances in different management subnets or a shared management subnet. When using vSphere HA, it is recommended to use a shared management subnet so NSX appliances that are recovered by vSphere can preserve their IP address.

n It is recommended that you place NSX appliances on shared storage also. For vSphere HA,
please review the requirements for that solution.

You can also use vSphere HA with NSX to provide recovery of a lost NSX appliance when the
host where the NSX appliance is running fails.

Scenario example:

n A vSphere cluster in which all three NSX Managers are deployed.

n The vSphere cluster consists of four or more hosts:

n Host-01 with nsxmgr-01 deployed

n Host-02 with nsxmgr-02 deployed

n Host-03 with nsxmgr-03 deployed

n Host-04 with no NSX Manager deployed

n vSphere HA is configured to recover any lost NSX Manager (e.g., nsxmgr-01) from any host
(e.g., Host-01) to Host-04.

Thus, upon the loss of any hosts where a NSX Manager is running, vSphere recovers the lost NSX
Manager on Host-04.

Cluster Requirements for Multiple Sites


The following recommendations apply to dual site (Site A/Site B) and multiple-site (Site A/Site
B/Site C) NSX deployments.

Dual Site Requirements and Recommendations


Note Starting from NSX v3.0.2 onwards, VMware Site Recovery Manager (SRM) is supported.

n Do not deploy NSX Managers in a dual-site scenario without vSphere HA or VMware SRM. In
this scenario, one site requires the deployment of two NSX Managers and the loss of that site
will impact the operation of NSX.

n Deployment of NSX Managers in a dual site scenario with vSphere HA or VMware SRM can be
done with the following considerations:

n A single stretched vSphere cluster contains all the hosts for NSX Managers.

n All three NSX Managers are deployed to a common management subnet/VLAN to allow IP address preservation upon recovery of a lost NSX Manager.


n For latency between sites, see the storage product requirements.

Scenario example:

n A vSphere cluster in which all three NSX Managers are deployed.

n The vSphere cluster consists of six or more hosts, with three hosts in Site A and three hosts in
Site B.

n The three NSX Managers are deployed to distinct hosts with additional hosts for placement
of recovered NSX Managers:

Site A:

n Host-01 with nsxmgr-01 deployed

n Host-02 with nsxmgr-02 deployed

n Host-03 with nsxmgr-03 deployed

Site B:

n Host-04 with no NSX Manager deployed

n Host-05 with no NSX Manager deployed

n Host-06 with no NSX Manager deployed

n vSphere HA or VMware SRM is configured to recover any lost NSX Manager (e.g., nsxmgr-01)
from any host (e.g., Host-01) in Site A to one of the hosts in Site B.

Thus, upon failure of Site A, vSphere HA or VMware SRM will recover all NSX Managers to hosts
in site B.

Important You must properly configure anti-affinity rules to prevent NSX Managers from being
recovered to the same common host.

Multiple (Three or More) Site Requirements and Recommendations


In a scenario with three or more sites, you can deploy NSX Managers with or without vSphere HA
or VMware SRM.

If you deploy without vSphere HA or VMware SRM:

n It is recommended that you use separate management subnets or VLANs per site.

n Maximum latency between NSX Managers is 10ms.

Scenario example (three sites):

n Three separate vSphere clusters, one per site.

n At least one host per site running NSX Manager:

n Host-01 with nsxmgr-01 deployed

n Host-02 with nsxmgr-02 deployed


n Host-03 with nsxmgr-03 deployed

Failure scenarios:

n Single site failure: Two remaining NSX Managers in other sites continue to operate. NSX is in
a degraded state but still operational. It is recommended you manually deploy a third NSX
Manager to replace the lost cluster member.

n Two site failure: Loss of quorum and therefore impact to NSX operations.

Recovery of NSX Managers may take as long as 20 minutes depending on environmental conditions such as CPU speed, disk performance, and other deployment factors.

6 Installing and Configuring NSX Embedded using VMware vCenter Plugin
NSX embedded (NSXe) integrates NSX with VMware vCenter to enable the consumption of
NSX from VMware vCenter. As a VI admin, you can install NSX Manager and NSX for virtual
networking or security-only use case by installing and configuring the NSX plugin in VMware
vCenter.

There are two workflows - Virtual Networking and Security Only - allowed from the NSX page on
the vSphere Client. The Virtual Networking deployment workflow includes both networking and
security use cases. In contrast, if you choose to configure Security Only type of deployment, then
you cannot configure virtual networking on the selected cluster hosts.

Note You must not enable Multi NSX on the VMware vCenter on which you plan to install and configure the NSX plugin. Doing so might cause unexpected behavior when using the NSX plugin from VMware vCenter.

Supported Browser Resolution


The minimum supported browser resolution is 1280 x 800 px.

Read the following topics next:

n Install NSX Manager from vSphere Client

n Install Additional NSX Manager Nodes to Form a Cluster from VMware vCenter Plugin

n Configure NSX for Virtual Networking from vSphere Client

n Configuring NSX-T for Security from vSphere Client

n Viewing NSX Alarms in vSphere Web Client UI

Install NSX Manager from vSphere Client


As a VI admin working in the vSphere environment, you can install the NSX Manager appliance entirely from the vSphere Client. You do not need to perform any installation operations from the NSX Manager UI. After NSX Manager is installed, NSX appears as a plug-in in VMware vCenter that is ready to install NSX for Virtual Networking or Security-only use cases.

Important In NSX 3.2, only a single NSX Manager cluster is supported.


Prerequisites

n Ensure that ESXi host version is compatible with VMware vCenter version v7.0.3.

n Ensure that VMware vCenter version is v7.0.3 or later.

n To provision a thick disk, ensure the datastore on the host has at least 300 GB of free space.

n Configure a vSphere Distributed Switch (VDS) switch on hosts. Only VDS 6.6 or later is
supported.

n Ensure VMware vCenter points to an FQDN address and the DNS server must be able to
resolve the address.

n To ensure time is synchronized, configure NTP server on NSX Manager and ESXi hosts. See
the Time Synchronization between NSX Manager, vIDM, and Related Components topic in the
NSX Administration Guide.

Procedure

1 From a browser, log in with admin privileges to VMware vCenter at https://<vcenter-server-ip-address>.

2 On the vSphere Client UI, select vSphere Client menu and click NSX.

3 On the screen, click Install NSX.

4 Enter the download OVF URL or navigate to the OVF file, and click Next.

Important If you enter a URL to download the OVF file, ensure the URL points to a secure HTTPS server. For example, [Link] There is a separate OVF file available for NSX Manager deployed from the vSphere Client. You must select an OVF file name using the following convention: nsx-embedded-unified-appliance-<[Link]>.ova. Do not use the nsx-unified-appliance-<[Link]>.ova file.

5 To verify the thumbprint of the SSL certificate of the HTTPS server, click Yes.

6 Enter a name and a location for the NSX Manager VM, and click Next.

The selected location also indicates the VMware vCenter where the NSX Manager is
deployed and which VMware vCenter is managed by the NSX instance.

The name you enter appears in the vSphere and VMware vCenter inventory.

7 Select a compute resource for the NSX Manager appliance, and click Next.

8 Review and verify the OVF template details, and click Next.

9 Select a form factor to deploy the NSX appliance. You must deploy the NSX Manager in
either Medium or Large form factor. If you select any other form factor, then installation fails
and NSX appliance is not registered to VMware vCenter.


10 Specify storage for the configuration and disk files.

a Select the virtual disk format.

b Select the VM storage policy.

c Specify the datastore to store the NSX Manager appliance files.

d Click Next.

11 Select a destination network for each source network.

12 Select the port group or destination network for the NSX Manager.

13 Configure IP Allocation settings.

a For IP allocation, specify Static - Manual.

b For IP protocol, select IPv4 or IPv6.

Note You can ignore the IP Allocation settings. You can select either IPv4 or IPv6. It
would not impact ingress or egress network traffic of NSX Manager.

14 Click Next.

15 In the Application section, enter the System GRUB Root User Password, System GRUB menu
timeout, System Root Password, CLI 'admin' User Password, CLI 'audit' User Password, CLI
'admin' username, and CLI 'audit' username. Only the root password and admin password
fields are mandatory.

n At least 12 characters

n At least one lower-case letter

n At least one upper-case letter

n At least one digit

n At least one special character

n At least five different characters

n Default password complexity rules are enforced by the following Linux PAM module
arguments:

n retry=3: The maximum number of times a new password can be entered, for this
argument at the most 3 times, before returning with an error.

n minlen=12: The minimum acceptable size for the new password. In addition to the
number of characters in the new password, credit (of +1 in length) is given for each
different kind of character (other, upper, lower and digit).

n difok=0: The minimum number of bytes that must be different in the new password.
Indicates similarity between the old and new password. With a value 0 assigned to
difok, there is no requirement for any byte of the old and new password to be
different. An exact match is allowed.


n lcredit=1: The maximum credit for having lower case letters in the new password. If
you have less than or 1 lower case letter, each letter will count +1 towards meeting the
current minlen value.

n ucredit=1: The maximum credit for having upper case letters in the new password. If
you have less than or 1 upper case letter each letter will count +1 towards meeting the
current minlen value.

n dcredit=1: The maximum credit for having digits in the new password. If you have less
than or 1 digit, each digit will count +1 towards meeting the current minlen value.

n ocredit=1: The maximum credit for having other characters in the new password. If
you have less than or 1 other characters, each character will count +1 towards meeting
the current minlen value.

n enforce_for_root: The password is set for the root user.

Note For more details on Linux PAM module to check the password against dictionary
words, refer to the man page.

For example, avoid simple and systematic passwords such as VMware123!123 or


VMware12345. Passwords that meet complexity standards are not simple and systematic
but are a combination of letters, alphabets, special characters, and numbers, such as
VMware123!45, VMware 1!2345 or VMware@1az23x.

Important If the password you set does not meet the password complexity requirements, installation of the NSX Manager fails. If installation fails, you must redeploy the NSX Manager.

16 In the Network Properties section, enter the hostname of the NSX Manager.

Note The host name must be a valid domain name. Ensure that each part of the host name (domain/subdomain) that is separated by a dot starts with an alphabetic character. Also, NSX accepts only Latin letters that do not have an accent mark (for example, í, ó, ú, ý are not accepted).

Important If you plan to install NSX in dual stack (IPv4 and IPv6) and/or if you plan to
configure CA-signed certificates, then enter a Hostname with valid domain name to NSX
Manager VMs and Cluster VIP (if configured).

17 Enter a default gateway, management network IP address (required), and management network netmask (required).

18 In the DNS section, enter DNS Server list and Domain Search list.

19 In the Services Configuration section, enter NTP Server IP or FQDN.

Optionally, you can enable SSH service and allow root SSH login. But, it is not recommended
to allow root access to SSH service.


20 Verify that all your custom OVF template specification is accurate and click Finish to begin
installation.

See the installation progress in the Recent Tasks tab.

21 On the NSX page, you can either click Start NSX Onboarding to load the plugin or skip the
onboarding workflow and access the NSX Manager UI from the vSphere Client.

What to do next

Apply NSX license.

1 Click Go To NSX Getting Started.

2 In the NSX License Key section, enter the NSX license key and click Apply.

After you successfully apply the NSX license, configure NSX for Virtual Networking or Security
use case on the vSphere platform. See Configure NSX for Virtual Networking from vSphere
Client.

Install Additional NSX Manager Nodes to Form a Cluster from VMware vCenter Plugin
Forming an NSX Manager cluster provides high availability and reliability of the NSX management function if one of the NSX Managers goes down.

For other environments, see Form an NSX Manager Cluster Using the CLI.

To create an NSX Manager cluster, deploy two additional nodes to form a cluster of three nodes
total.

Note Data is replicated to all the active NSX Manager nodes of the cluster. So, when the NSX
Manager cluster is stable, every NSX Manager node contains the same data.

Prerequisites

n Verify that an NSX Manager node is installed. See Install NSX Manager from vSphere Client.

n Verify that compute manager is configured. See Add a Compute Manager.

n Verify that the system requirements are met. See System Requirements.

n Verify that the required ports are open. See Ports and Protocols.

n Verify that a datastore is configured and accessible on the ESXi host.

n Verify that you have the IP address and gateway, DNS server IP addresses, domain search
list, and the NTP Server IP or FQDN for the NSX Manager to use.

n Create a management VDS and target VM port group in vCenter. Place the NSX appliances
onto this management VDS port group network. See Prepare a vSphere Distributed Switch
for NSX.


Multiple management networks can be used as long as the NSX Manager nodes have consistent connectivity and the recommended latency between them.

Note If you plan to use a Cluster VIP, all NSX Manager appliances should belong to the same subnet.

Procedure

1 From a browser, log in with admin privileges to vCenter Server at https://<vcenter-ip-address>.

2 On the vSphere Web Client UI, select vSphere Web Client menu and click NSX.

3 Deploy an appliance. Go to System > Appliances > Add NSX Appliance.

4 Enter the appliance information details.

Option Description

Host Name or FQDN Enter a name for the node.

IP Type Select the IP type. The appliance can have IPv4 address only or both IPv4
and IPv6 addresses.

Management IPv4/Netmask Enter an IPv4 address to be assigned to the node.

Management Gateway IPv4 Enter a gateway IPv4 address to be used by the node.

Management IPv6/Netmask Enter an IPv6 address to be assigned to the node. This option appears when
IP Type is set to Both IPv4 and IPv6.

Management Gateway IPv6 Enter a gateway IPv6 address to be used by the node. This option appears when IP Type is set to Both IPv4 and IPv6.

DNS Servers Enter DNS server IP addresses to be used by the node.

NTP Server Enter an NTP server IP address to be used by the node.

Node Size Select the form factor to deploy the node from the following options:
n Small (4 vCPU, 16 GB RAM, 300 GB storage)
n Medium (6 vCPU, 24 GB RAM, 300 GB storage)
n Large (12 vCPU, 48 GB RAM, 300 GB storage)

5 Enter the configuration details.

Option Description

Compute Manager Select the VMware vCenter to provision compute resources for deploying
the node.

Compute Cluster Select the cluster the node is going to join.

Resource Pool Select either a resource pool or a host for the node from the drop-down
menu.

Host If you did not select a resource pool, select a host for the node.

Datastore Select a datastore for the node files from the drop-down menu.


Virtual Disk Format n For NFS datastores, select a virtual disk format from the available
provisioned policies on the underlying datastore.
n With hardware acceleration, Thin Provision, Thick Provision Lazy
Zeroed, and Thick Provision Eager Zeroed formats are supported.
n Without hardware acceleration, only Thin Provision format is
supported.
n For VMFS datastores, Thin Provision, Thick Provision Lazy Zeroed, and
Thick Provision Eager Zeroed formats are supported.

n For vSAN datastores, you cannot select a virtual disk format because the
VM storage policy defines the format.
n The vSAN storage policies determine the disk format. The default
virtual disk format for vSAN is Thin Provision. You can change the
vSAN storage policies to set a percentage of the virtual disk that
must be thick-provisioned.
By default, the virtual disk for an NSX Manager node is prepared in the Thin
Provision format.

Note You can provision each node with a different disk format based on
which policies are provisioned on the datastore.

Network Click Select Network to select the management network for the node.

6 Enter the access and credentials details.

Option Description

Enable SSH Toggle the button to allow an SSH login to the new node.

Enable Root Access Toggle the button to allow root access to the new node.


System Root Credentials Set the root password and confirm the password for the new node.
Your password must comply with the password strength restrictions.
n At least 12 characters
n At least one lower-case letter
n At least one upper-case letter
n At least one digit
n At least one special character
n At least five different characters
n Default password complexity rules are enforced by the following Linux
PAM module arguments:
n retry=3: The maximum number of times a new password can be
entered, for this argument at the most 3 times, before returning with
an error.
n minlen=12: The minimum acceptable size for the new password. In
addition to the number of characters in the new password, credit
(of +1 in length) is given for each different kind of character (other,
upper, lower and digit).
n difok=0: The minimum number of bytes that must be different in
the new password. Indicates similarity between the old and new
password. With a value 0 assigned to difok, there is no requirement
for any byte of the old and new password to be different. An exact
match is allowed.
n lcredit=1: The maximum credit for having lower case letters in the
new password. If you have less than or 1 lower case letter, each
letter will count +1 towards meeting the current minlen value.
n ucredit=1: The maximum credit for having upper case letters in the
new password. If you have less than or 1 upper case letter each letter
will count +1 towards meeting the current minlen value.
n dcredit=1: The maximum credit for having digits in the new
password. If you have less than or 1 digit, each digit will count +1
towards meeting the current minlen value.
n ocredit=1: The maximum credit for having other characters in the
new password. If you have less than or 1 other characters, each
character will count +1 towards meeting the current minlen value.
n enforce_for_root: The password is set for the root user.

Note For more details on Linux PAM module to check the password
against dictionary words, refer to the man page.

For example, avoid simple and systematic passwords such as


VMware123!123 or VMware12345. Passwords that meet complexity
standards are not simple and systematic but are a combination of
letters, alphabets, special characters, and numbers, such as VMware123!
45, VMware 1!2345 or VMware@1az23x.

Admin CLI Credentials and Audit CLI Select the Same as root password check box to use the same password
Credentials that you configured for root, or deselect the check box and set a different
password.


7 Click Install Appliance.

The new node is deployed. You can track the NSX Manager deployment progress in the
System > Appliances page (NSX UI) in VMware vCenter. Do not add additional nodes until
the installation is finished and the cluster is stable.

8 Wait for the deployment, cluster formation, and repository synchronization to finish.

9 Verify that installed NSX Manager node has the required connectivity.

Make sure that you can perform the following tasks.

n Ping your node from another machine.

n The node can ping its default gateway.

n The node can ping the hypervisor hosts that are in the same network using the
management interface.

n The node can ping its DNS server and its NTP Server IP or FQDN list.

n If you enabled SSH, make sure that you can SSH to your node.

If connectivity is not established, make sure that the network adapter of the virtual appliance
is in the proper network or VLAN.

10 If your cluster has only two nodes, add another appliance.

n From NSX Manager, select System > Appliances > Add NSX Appliance and repeat the
configuration steps.

Results

After the cluster is formed, VMware vCenter displays the IP addresses of all three nodes on the NSX UI page.

What to do next

After the cluster is formed, you can choose to set a virtual IP address (VIP) for the cluster or you
can choose to not have a VIP for the cluster. See Configure a Virtual IP Address for a Cluster.
Even after you configure a VIP for the cluster, the NSX plugin in VMware vCenter continues to
access NSX UI using the primary IP address of the current HTTPS leader node. During failover,
the NSX plugin in VMware vCenter automatically starts using the primary IP address of the new
leader node.

Configure NSX for Virtual Networking from vSphere Client


As a VI administrator working in the vSphere environment, you can configure NSX for
virtual networking. The workflow involves configuring logical segments to establish connectivity
between hosts even in different subnets, configuring NSX Edge nodes, Tier-0 gateways, Tier-1
gateways and segments. Finally, workload VMs connected to these segments can pass north-
south and east-west traffic.


Prerequisites

n Ensure that ESXi hosts are compatible with VMware vCenter version v7.0.3 or later.

n Ensure that VMware vCenter version is v7.0.3 or later.

n Configure a vSphere Distributed Switch (VDS) switch on hosts. Only VDS 6.6 or later is
supported.

n On a vSphere Lifecycle Manager enabled cluster, edit the VMware vCenter from the NSX
Manager UI to:

n Create a service account and enable trust between NSX and VMware vCenter. See Add a
Compute Manager.

Procedure

1 From a browser, log in with admin privileges to VMware vCenter at https://<vcenter-server-ip-address>.

2 On the vSphere Client UI, select vSphere Client menu and click NSX.

3 On the Welcome to NSX screen, on the Virtual Networking card, click Getting Started.

4 In the Host Cluster Preparation tab, perform the following tasks.

5 Expand the Host Cluster section, select the clusters that you want to prepare for virtual
networking and click Next.

Note Any cluster with an incompatible ESXi host is not allowed for host preparation.

6 Expand the Transport VLAN section, enter a VLAN ID that will tag the overlay traffic and click
Next.

7 Expand the NSX IP Assignment (TEP) section, and enter IP details:

Field Description

IP Assignment Select the mode of IP assignment, either static or DHCP.
If you select IP Pool, enter a name for the pool, the IP range, the subnet along with
its prefix (subnet/prefix), and the default gateway.

8 Click Prepare Cluster to begin installation of NSX.

Cluster preparation begins. View installation progress at each host.

Alternatively, you can also verify the progress in NSX Manager UI. NSX creates a new
transport node profile using the configuration that you defined in the installation section.
The switch is set to VDS. The transport node profile is applied to the cluster to prepare hosts
of the cluster as transport nodes.


9 In the Edge Deployment tab, expand the Management Network for Edge Connectivity and
enter the following details:

Field Description

Management VDS Select a vSphere Distributed Switch for management traffic on NSX Edge
nodes.

Management Network Select a network for management traffic of NSX Edge nodes.

Management Gateway Select the gateway to route management traffic. Enter a static IP address.

10 Click Next.

11 In the Edge Deployment tab, expand the Edge Node 1 section and enter the following details:

Field Description

Name Enter a name for the Edge node.

Fully Qualified Domain Name Enter a Fully Qualified Domain Name that resolves to the IP address of the
NSX Edge node.

Management IP address Enter an IP address for management traffic of the NSX Edge node.

External Network Connectivity Select a distributed port group to be used as a data path interface. This
distributed port group manages the ingress and egress traffic of workload
VMs that are processed by the NSX Edge node.

Note Even though there are three data path interfaces on an NSX Edge
node, this workflow uses only one interface for a distributed port group.

Edge Node Settings Select Apply same settings for all Edges if you want to replicate the settings
across all NSX Edge nodes.

Password Enter a password that conforms to the required password complexity.
Confirm the same password on the next field.

Virtual Machine Size Select a form factor to deploy the Edge node.

Storage Location Select the datastore as storage location for installation and configuration
files and data generated by the Edge node.

12 On the Edge Deployment tab, verify that the visualization is updated with management
network, external network and other details related to NSX Edge node.


13 Enter details for Edge Node 2.

14 Click Deploy Edge.

15 On the confirmation window, click Deploy.

16 Observe the topology created based on the configuration details entered on the Edge
Deployment tab. After NSX Edge nodes are realized, the dotted line turns to a solid line,
indicating NSX Edge node is realized.

17 Click Next to configure network connectivity.

18 In the Network Connectivity tab, expand the Physical Router section and enter the following
details:

Field Description

Do you want to peer with a physical After deploying the NSX Edge node, it can establish a peer connection with
router now? a physical router.
n Select Yes if you want to set up Border Gateway Protocol (BGP) or static
routing to your physical router.
n BGP Local AS: Enter the local autonomous system number for use in
BGP.
n Select No if you do not want to set up BGP or static routing to your
router. However, you will need to set up NAT to connect to workloads to
external networks.
n In the Physical Routing IP address field, enter a static IP address.

How many physical routers do you Based on your selection, enter the following details for one or two physical
want to peer with? routers:
If you want to allow other routers to peer with your router, then enter the
following details:
For each peer router, enter these details:
n BGP Local AS: Enter the local autonomous system number used by the
BGP neighbor.
n BGP Neighbors:
n IP Address: Enter the IP address of the physical router, which is the
BGP neighbor.
n Remote AS: Remote autonomous system number used by BGP
neighbors.


19 Click Next.

20 In the Network Connectivity tab, expand the NSX Gateway section and enter the following
details:

Field Description

Gateway Name Prefix Enter a prefix for the gateway. Every object, such as Tier-0, Tier-1 gateways,
that are created for the gateway is prefixed with this value. You can search
for objects with a specific prefix to get a list of objects related to a particular
gateway.

Uplink VLAN for Router 1 Enter the VLAN ID to tag VLAN traffic going from NSX Edge node to
physical router 1.

Uplink VLAN for Router 2 Enter the VLAN ID to tag VLAN traffic going from NSX Edge node to
physical router 2.

21 Click Next.

22 In the Network Connectivity tab, expand the Uplinks section and enter the following details:

Field Description

IP Address for Uplink 1 Enter the IP address for uplink from NSX gateway or Tier-0 gateway to
physical router 1.

IP Address for Uplink 2 Enter the IP address for uplink from NSX gateway or Tier-0 gateway to
physical router 2.

Physical Router 1 Enter subnet mask and default gateway for physical router 1.

Physical Router 2 Enter subnet mask and default gateway for physical router 2.

23 Verify the visualization created based on the network details you entered.

24 Click Create Gateways.

25 On the confirmation window, click Create Gateways.

The NSX Gateway is successfully created.


26 In the New Segments tab, create a segment where workload VMs will be running. For example, create a segment for a web group. Enter the following details:

Field Description

Name Enter the name of the segment.

Subnet/Prefix Length Enter the subnet network for the segment.

Default Gateway Enter the default gateway that segments should forward traffic to.

27 To create additional segments, click Add Segment and enter the required details.

28 Click Create Segment.

29 After segments are created, add them to NSX distributed virtual port group.

30 Click Next.

31 (Optional) In the Add-on Services tab, enter Network Address Translation (NAT) details. On
the NAT Only window, enter the following details:

Field Description

Name Enter a name for NAT service.

Source Select a segment so that IP addresses of local hosts connected to this segment are translated and protected and a single translated IP address is presented to an external network.

Translated The IP address that is presented to external network thus protecting local
hosts from exposing their IP addresses to an external network.

32 Click Next.

33 View the topology created in NSX.

Results

NSX is configured for virtual networking.

Example:


What to do next

1 From a browser, log in to NSX Manager at https://<NSX-Manager-IP-Address>/.

2 After logging in to NSX Manager, verify that the networking configuration is successfully created in NSX.

Configuring NSX-T for Security from vSphere Client


As a VI admin working in the vSphere environment, you can use the simplified workflow to
prepare ESXi clusters for NSX security.

Use the vSphere Client to prepare ESXi clusters for NSX security. On such clusters, you can
enable micro-segmentation, URL filtering and distributed IDS on application workloads. These
clusters are not prepared for NSX virtual networking.

High-level tasks:

n Prepare Host Cluster.

n Create Firewall Rules

n Create Groups for infrastructure services (Active Directory, DNS, and so on), environment
groups (production or testing), and application groups (web, database, application).

n Define communication strategy. Some of the actions you can take are:

n Define communication between any workload and infrastructure services.

n Define communication so that no environment can talk to each other.

n Limit communication to a specific port or protocol.

n Specify source workloads.

n Set up exceptions after setting up communication strategies for workloads.

n Define Action for Default Firewall Rule (to process traffic that does not match firewall
rules defined in Communication section).

n Review and publish firewall rules.

Prepare Clusters for NSX Security


Select a host cluster to prepare it for NSX security.


The Getting Started section gives you the option to select between Security Only or Virtual
Networking. When you choose to enable clusters only for security, the wizard asks you to define
security rules and uses those rules to automatically configure NSX security on the distributed
virtual port groups of the selected clusters.

Prerequisites

n Ensure that ESXi hosts are compatible with VMware vCenter version v7.0.3 or later.

n Ensure that VMware vCenter version is v7.0.3 or later.

n Configure a vSphere Distributed Switch (VDS) switch on hosts. Only VDS 6.6 or later is
supported.

n On a vSphere Lifecycle Manager enabled cluster, edit the VMware vCenter from the NSX
Manager UI to:

n Create a service account and enable trust between NSX and VMware vCenter. See Add a
Compute Manager.

Procedure

1 From a browser, log in with admin privileges to VMware vCenter at https://<vcenter-ip-address>.

2 On the vSphere Client UI, select the vSphere Client menu and click NSX.

3 On the Welcome to NSX screen, on the Security Only card, click Getting Started.

4 On the Host Cluster Preparation section, select the clusters that you want to prepare for
security only and click Install NSX.

5 On the Install Security pop-up window, confirm that you want to proceed by clicking Install.

Note Any cluster with an incompatible ESXi host is not allowed for host preparation.

6 Click Next to define firewall rules.

Results

NSX is installed on the host cluster.

What to do next

To avoid any loss of connectivity, add VMware vCenter and NSX Manager to the DFW Exclusion list.

Create Groups
As part of firewall rule creation, define infrastructure groups that run selected services (such as DHCP), environment groups (such as production or testing) comprising selected group members, and application groups with selected group members.


Prerequisites

n Install NSX on the host cluster.

Procedure

1 In the Create Firewalls Rules tab, select Create Groups.

2 In the Create Groups page, expand Create Infrastructure Groups.

3 Click Add Group.

4 From the Infrastructure Service drop-down menu, select a service, such as Active Directory. In the next step, you assign this service to a group comprising the members that form the infrastructure group. You can create an infrastructure service only once in a workflow. It cannot be edited after you create it.

5 To define an infrastructure group, click [Define Group].

An infrastructure group can be a combination of VMs, IP address ranges, or distributed virtual port groups.
a (Optional) In the Group Name field, modify the default group name.

b (Optional) In the NSX Tag field, modify the default tag name. The defined tag is applied
to all VMs and distributed virtual port groups selected for the group. You can edit the
default tag name.

c Expand the Select VMs to add NSX Tag section and select VMs that must be part of the
infrastructure group.

d Expand the IP Address section and enter an IP address, IP addresses in CIDR format, or
an IP range. Both IPv4 and IPv6 formats are supported.

e Expand the Select DVPGs to add NSX Tag section and select the distributed virtual port
groups that must be part of the infrastructure group.

f Click Save.

The wizard automatically creates the group and applies the NSX tag to all the selected members of the group. For example, if the defined group includes one VM, one distributed virtual port group, and one IP address, and DHCP is the selected infrastructure service, then the wizard tags all group members with the defined tag.

6 Click Next.

7 In the Create Groups page, expand Create Environment Group.

8 Click Add Group.

9 From the Environment drop-down menu, select the environment for the group. For example,
an environment can be a production, testing, partner or a custom environment that you want
to define in your topology.


10 To define an environment group, click [Define Group].

a (Optional) In the Group Name field, modify the default group name.

b (Optional) In the NSX Tag field, modify the default NSX tag name. This tag name is
applied to all VMs and distributed virtual port group selected for the environment group.

c Expand the Select VMs to add NSX Tag section and select VMs that must be part of the
environment group.

d Expand the IP Address section and enter an IP address, IP addresses in CIDR format, or
an IP range. Both IPv4 and IPv6 formats are supported.

e Expand the Select DVPGs to add NSX Tag section and select the distributed virtual port
groups that must be part of the environment group.

f Click Save.

11 Click Next.

12 In the Create Groups page, expand Create Application Group.

13 Click Add Group.

14 From the Application Group Name drop-down menu, select the type of application group
you want to create.

15 To define an application group, click [Define Group].

a (Optional) In the Group Name field, modify the default group name for the application
group.

b (Optional) In the NSX Tag field, modify the default tag name. This tag name is applied to all VMs and distributed virtual port groups selected for the application group.

c Expand the Select VMs to add NSX Tag section and select VMs that must be part of the
application group.

d Expand the IP Address section and enter an IP address, IP addresses in CIDR format, or
an IP range. Both IPv4 and IPv6 formats are supported.

e Expand the Select DVPGs to add NSX Tag section and select the distributed virtual port
groups that must be part of the application group.

f Click Save.

16 Click Next.

Results

You created infrastructure groups, environment groups and application groups.

What to do next

After creating groups, define firewall rules that govern communication among workloads and
these different groups.


Define and Publish Communication Strategies for Groups


After creating groups, define firewall rules to govern communication between groups, and
specify exceptions and the ports or protocols used for communication.

Prerequisites

n Install NSX on the host cluster.

n Create Infrastructure groups, Environment groups, and Application groups.

Procedure

1 Expand the Access to infrastructure services section and define specific workloads that can
access shared infrastructure services.

Field Description

Source In the Source column, select the workloads that can access the target
infrastructure service.

Target    The defined infrastructure service that is accessed by the source workloads.

(NSX 3.2.2) Service Entry    Click the Edit icon to add or edit service entries.
In the Service Entry window, select a service type and properties for the
service type.

Note In NSX 3.2.1 and previous versions, the field name was L4.

2 Click Next.


3 Expand the Define communication between environments (Optional) section and define
communication between groups.

Field Description

Source Expand the section to define which source environment must communicate
with a target environment.
(NSX 3.2.2): For each source group listed, select a communication method:
Unprotected, Allowed or Blocked.

Note To allow all communication between all source groups and the target
group, select Allow All Communication.

(NSX 3.2.1 and previous versions): To allow communication between a
Development environment and a Production environment, click the red
dotted line between Development and Production. The enabled state is
displayed when a green line is established between groups.

Environment    The target environment selected by the system.

(NSX 3.2.2) Service Entry    Select the service type, ports, and properties over which the workloads in the
source and target environments communicate with each other.
Click Apply.

Note In NSX 3.2.1 and previous versions, the field name was L4.

4 Click Next.

5 Expand the Define communication strategies for applications (Optional) section and define
communication for application groups.

Field Description

Source Select an application group for which you can select communication rules to
manage incoming or outgoing traffic.

Strategy Select a firewall strategy to apply to an application group.


Supported firewall rules are:
n Allow all external traffic.
n Deny incoming and allow outgoing traffic.
n Allow incoming and deny outgoing traffic.
n Deny all external traffic.

Note If you want to apply one firewall rule to all application groups, click
Select Strategy, select the rule and click Apply.

Exception    Based on how you want to configure the firewall rule, you might want to add
exceptions.
By default, no exceptions are added. To add an exception, click the No
Exceptions link. Edit these fields to add exceptions:
n Source: Select the source.
n Service Entry: Select the service, port, and properties.
n L7 App ID: Select the App ID.
n FQDN: Select the FQDN of the application.
Click Apply.


6 Click Next.

7 Expand the Define Action for Default Firewall Rules (Optional) section and define an action
that is applied to traffic that does not match the defined criteria.

8 In the Default rule action, select one of the following:

n Allow: The default action. Allows all traffic that does not match the defined criteria.

n Drop or Reject: To enforce firewall rules inside your network, you might choose to drop or
reject traffic that does not match the defined criteria.

9 Click Next.

10 In the Review and Publish page, review the communication strategies and firewall rules that
you applied to the groups.

For example, Production Rule 1 is a user-defined rule and Production Rule 2 is the system-
defined default rule, where the default action is set to Drop.

11 Click Publish Policies.

Results

The wizard ends and the firewall policies you defined are applied to the groups. The NSX UI is
available in VMware vCenter.

What to do next

To verify that the firewall rules published from the vSphere Client are realized in the NSX Manager UI, perform the following steps.

1 In the NSX Manager UI, go to Inventory → Groups.

2 On the Groups page, verify whether the workload groups you defined in vSphere Client are
realized in NSX Manager.

3 Go to Security → Distributed Firewall page.

4 On the Distributed Firewall page, verify whether the firewall rules you applied in vSphere
Client are realized in NSX Manager.
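
If you prefer to verify from the command line, the same objects can also be read through the NSX
Policy API. The following is a minimal sketch with a placeholder manager address and credentials;
verify the endpoints against the NSX API guide for your version.

# List the groups in the default domain
curl -k -u 'admin:<password>' https://<nsx-manager-ip>/policy/api/v1/infra/domains/default/groups

# List the distributed firewall security policies in the default domain
curl -k -u 'admin:<password>' https://<nsx-manager-ip>/policy/api/v1/infra/domains/default/security-policies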


Viewing NSX Alarms in vSphere Web Client UI


As a VI admin, you can view NSX-generated alarms in VMware vCenter.

Note The NSX alarm forwarding functionality is supported starting with vSphere 8.0.

After you install and configure NSX from the NSX page in the VMware vCenter environment, NSX
automatically registers its alarm definitions with VMware vCenter. Any NSX alarms generated are
forwarded and displayed in the vSphere Web Client UI. Alarm forwarding is only applicable to
NSX deployments in a VMware vCenter environment.

View NSX Alarms on vSphere Web Client UI at:

n Monitor page → Issues and Alarms → Triggered Alarms

n Alarms section

To identify a NSX alarm, go to the Triggered Alarms page and check the user field that triggered
the alarm. The user [Link] identifies that the event was generated
by NSX.

Note If there are multiple alarms generated for a certain NSX event, in vSphere Web Client a
summary event is displayed. The summary event takes you to NSX Alarms UI page where all
alarms of the same type are displayed. For example, if NSX generates five alarms related to
certificate expiry, the Alarms tab in vSphere Web Client displays only a single summary event for
all the five certificate expiry alarms.

Caution Do not delete any NSX alarms from VMware vCenter or reset any alarms (the alarm turns
green, indicating it is resolved). If you delete an Alarm Definition in VMware vCenter, any alarm
generated for that event may not be forwarded to VMware vCenter.

If one of the NSX Manager nodes goes down, another NSX Manager in the cluster takes over as the
active manager. VMware vCenter synchronizes existing alarm definitions and triggered alarms
with the current state of alarms on the new NSX Manager.
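
To cross-check the alarms on the NSX side, you can also query the alarm framework directly
through the NSX API. The following is a sketch with a placeholder manager address and
credentials; verify the endpoint against the NSX API guide for your version.

# List the alarms known to NSX Manager
curl -k -u 'admin:<password>' https://<nsx-manager-ip>/api/v1/alarms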

Installing NSX Manager Cluster on vSphere

As an administrator, your first task in setting up your NSX environment is to install NSX.

NSX Manager is the application that you use to administer your NSX environment. You can use
the NSX Manager UI, API or CLI to manage workloads and NSX Edge nodes. In a production
environment, for fault tolerance, deploy a cluster of three NSX Manager nodes, each running on a
separate ESXi host.

NSX Manager can be deployed on an ESXi host or a KVM host. This section covers using the
vSphere Client to deploy the NSX Manager virtual appliance OVA/OVF on an ESXi host.

Make sure that you have the supported vSphere version. See vSphere support.

Read the following topics next:

n Install NSX Manager and Available Appliances

n Configure a Virtual IP Address for a Cluster

n Configuring an External Load Balancer

n Disable Snapshots on an NSX Appliance

Install NSX Manager and Available Appliances


You can use the vSphere Client to deploy NSX Manager virtual appliances. The same OVF file can
be used to deploy three different types of appliances: NSX Manager, NSX Cloud Service Manager
for NSX Cloud, and Global Manager for NSX Federation.

Cloud Service Manager is a virtual appliance that uses NSX components and integrates them with
your public cloud.

Prerequisites

n Verify that the system requirements are met. See System Requirements.

n Verify that the required ports are open. See Ports and Protocols.

n Verify that a datastore is configured and accessible on the ESXi host.

n Verify that you have the IP address and gateway, DNS server IP addresses, domain search
list, and the NTP Server IP or FQDN for the NSX Manager to use.


n Create a management VDS and target VM port group in vCenter. Place the NSX appliances
onto this management VDS port group network. See Prepare a vSphere Distributed Switch
for NSX.

Multiple management networks can be used as long as the NSX Manager nodes have
consistent connectivity and recommended latency between them.

Note If you plan to use a Cluster VIP, all NSX Manager appliances should belong to the same
subnet.

n Plan your NSX Manager IP and NSX Manager Cluster VIP addressing scheme.

Note Verify that you have the hostname for NSX Manager to use. The Hostname format
must be nsx-manager-fqdn@[Link]. This format is required if NSX installation is dual
stack (IPv4, IPv6) and/or if planning to configure CA-signed certificates.

Procedure

1 Locate and download the NSX OVA file on the VMware download portal. You can either copy
the download URL or download the OVA file.

2 In the vSphere Client, select the ESXi host or ESXi host cluster on which to install NSX.

3 Right-click Host and select Deploy OVF template to start the installation wizard.

4 Enter the OVA download URL or navigate to the OVA file, and click Next.

5 Enter a name and a location for the NSX Manager VM, and click Next.

The name you enter appears in the vSphere and VMware vCenter inventory.

6 Select a compute resource for the NSX Manager appliance, and click Next.

u To install on an ESXi host managed by vCenter, select a host on which to deploy the NSX
Manager appliance.
u To install on a standalone ESXi host, select the host on which to deploy the NSX Manager
appliance.

7 Review and verify the OVF template details, and click Next.

8 Specify the deployment configuration size, and click Next.

The Description panel on the right side of the wizard shows details of the selected
configuration.

9 Specify storage for the configuration and disk files.

a Select the virtual disk format.

b Select the VM storage policy.

c Specify the datastore to store the NSX Manager appliance files.

d Click Next.

10 Select a destination network for each source network.


11 Select the port group or destination network for the NSX Manager.

12 Configure IP Allocation settings.

a For IP allocation, specify Static - Manual.

b For IP protocol, select IPv4 or IPv6.

Note You can ignore the IP Allocation settings. You can select either IPv4 or IPv6; it does
not impact ingress or egress network traffic of NSX Manager.

13 Click Next.

The following steps are all located in the Customize Template section of the Deploy OVF
Template wizard.

14 In the Application section, enter the system root, CLI admin, and audit passwords for the NSX
Manager. The root and admin credentials are mandatory fields.

Your passwords must comply with the password strength restrictions.

n At least 12 characters

n At least one lower-case letter

n At least one upper-case letter

n At least one digit

n At least one special character

n At least five different characters

n Default password complexity rules are enforced by the following Linux PAM module
arguments:

n retry=3: The maximum number of times a new password can be entered, for this
argument at the most 3 times, before returning with an error.

n minlen=12: The minimum acceptable size for the new password. In addition to the
number of characters in the new password, credit (of +1 in length) is given for each
different kind of character (other, upper, lower and digit).

n difok=0: The minimum number of bytes that must be different in the new password.
Indicates similarity between the old and new password. With a value 0 assigned to
difok, there is no requirement for any byte of the old and new password to be
different. An exact match is allowed.

n lcredit=1: The maximum credit for having lower-case letters in the new password. At
most one lower-case letter counts +1 towards meeting the current minlen value.

n ucredit=1: The maximum credit for having upper-case letters in the new password. At
most one upper-case letter counts +1 towards meeting the current minlen value.

n dcredit=1: The maximum credit for having digits in the new password. At most one
digit counts +1 towards meeting the current minlen value.

n ocredit=1: The maximum credit for having other characters in the new password. At
most one other character counts +1 towards meeting the current minlen value.

n enforce_for_root: The password complexity rules are also enforced for the root user.

Note For more details on Linux PAM module to check the password against dictionary
words, refer to the man page.

For example, avoid simple and systematic passwords such as VMware123!123 or
VMware12345. Passwords that meet complexity standards are not simple and systematic
but are a combination of upper-case and lower-case letters, digits, and special characters,
such as VMware123!45, VMware 1!2345, or VMware@1az23x.
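
For illustration only, PAM arguments such as these are typically expressed on a single
configuration line similar to the following. The module name and file location vary by platform,
and this line is not taken from the NSX appliance itself.

password requisite pam_cracklib.so retry=3 minlen=12 difok=0 lcredit=1 ucredit=1 dcredit=1 ocredit=1 enforce_for_root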

15 In the Optional parameters section, leave the password fields blank. This avoids the risk of
compromising passwords set for VMC roles by a user who has access to VMware
vCenter. When deploying VMC for NSX, this field is used internally to set passwords for the
Cloud Admin and Cloud Operator roles.

16 In the Network Properties section, enter the hostname of the NSX Manager.

Note The host name must be a valid domain name. Ensure that each part of the host name
(domain/subdomain) that is separated by a dot starts with an alphabetic character. Also, NSX
accepts only Latin letters that do not have an accent mark (for example, í, ó, ú, and ý are not
accepted).

Important If you plan to install NSX in dual stack (IPv4 and IPv6) and/or if you plan to
configure CA-signed certificates, then enter a Hostname with valid domain name to NSX
Manager VMs and Cluster VIP (if configured).

17 Select a Rolename for the appliance. The default role is NSX Manager.

n To install an NSX Manager appliance, select the NSX Manager role.

n To install a Global Manager appliance for a NSX Federation deployment, select the NSX
Global Manager role.

See Chapter 13 Getting Started with NSX Federation for details.

n To install a Cloud Service Manager (CSM) appliance for an NSX Cloud deployment, select
the nsx-cloud-service-manager role.

See Overview of Deploying NSX Cloud for details.

18 Enter a default gateway, management network IP address (required), and management
network netmask (required).

Note Entering a default gateway is optional. However, you cannot configure it after
deploying NSX Manager.


19 In the DNS section, enter DNS Server list and Domain Search list.

20 In the Services Configuration section, enter NTP Server IP or FQDN.

Optionally, you can enable the SSH service and allow root SSH login. However, allowing root
access over SSH is not recommended.

21 Verify that all your custom OVF template specification is accurate and click Finish to begin
installation.

The installation might take 7-8 minutes.

22 From the vSphere Client, verify that the VM is powered on. Open the VM console to track the
boot process of the node.

23 After the VM node boots a second time, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.

24 Enter the get services command after waiting for about 5 minutes to verify that all default
services are running.

The following services are not required by default and do not start automatically.

n liagent

n migration-coordinator: This service is used only when running migration coordinator.


See the NSX Migration Guide before starting this service.

n snmp: For information on starting SNMP see Simple Network Management Protocol in the
NSX Administration Guide.
n nsx-message-bus: This service is not used in NSX 3.0 and later releases.

25 After deployment, verify that the NSX Manager UI comes up by accessing the following URL,
[Link] or [Link]

26 Verify that your NSX Manager, or Global Manager node has the required connectivity.

Perform the following tasks:

n Ping your node from another machine.

n The node can ping its default gateway.

n The node can ping the hypervisor hosts that are in the same network using the
management interface.

n The node can ping its DNS server and its NTP Server or FQDN.

n If you enabled SSH, make sure that you can SSH to your node.

If connectivity is not established, make sure that the network adapter of the virtual appliance
is in the proper network or VLAN.


27 Troubleshooting OVA failures:

Note During deployment, if you entered incorrect configuration details, delete the appliance
and redeploy with correct configuration.

a Verify that the datastore chosen for deployment is mounted on all the hosts that are
members of a cluster. Redeploy and choose ESXi host instead of VMware vCenter to
bypass VMware vCenter cluster related checks.

b If a proxy is enabled on VMware vCenter, edit the file /etc/sysconfig/proxy and add the
line .*.domainname to bypass the proxy for ESXi hosts. See [Link]
81565.

c If deployment of the appliance through the OVF tool gives the error ovf descriptor not found,
view the file contents in a terminal with cat -A <filepath/filename> and remove hidden
formatting characters. Then try again.

28 Troubleshooting issues related to bringing up the appliance. SSH to the NSX Manager CLI as
admin and run the following commands to troubleshoot:

a Run get configuration and verify that the hostname/name-server/search-domain/ntp
settings are correct.

b Run get services and verify all required services are running (other than nsx-message-
bus, snmp, migration-coordinator). If these services are not running, try restarting the
service by running restart service <service-name>.

c Run get cluster status and verify all manager cluster components are up. If any
component is down, try restarting the service associated to the component by running
restart service <associated-component-service-name>.

d Run get core-dumps to verify that no core dumps are generated in /var/log/core or
/image/core. If you find any core dumps, contact VMware Support.

e Run get filesystem-status to verify that no disk partition is full, especially those
partitions that are consumed by NSX.

f Alternatively, you can run the following API commands to check the node and service status.

GET api/v1/node/status

GET api/v1/node/services
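
For example, a minimal way to call these APIs from another machine is with curl and basic
authentication. The manager address and admin password below are placeholders.

curl -k -u 'admin:<password>' https://<nsx-manager-ip>/api/v1/node/status
curl -k -u 'admin:<password>' https://<nsx-manager-ip>/api/v1/node/services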

What to do next

n Log in to the NSX Manager from a supported web browser. See Log In to the Newly Created
NSX Manager .

n Deploy NSX Manager Nodes to Form a Cluster from the UI.


Install NSX Manager on ESXi Using the Command-Line OVF Tool


If you prefer to automate or use CLI for the NSX Manager installation, you can use the VMware
OVF Tool, which is a command-line utility.

By default, nsx_isSSHEnabled and nsx_allowSSHRootLogin are both disabled for security reasons.
When they are disabled, you cannot SSH or log in to the NSX Manager command line. If you
enable nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Manager but you
cannot log in as root.

Prerequisites

n You can download the latest OVF tool from the Customer Connect Portal.

n Verify that the system requirements are met. See System Requirements.

n Verify that the required ports are open. See Ports and Protocols.

n Verify that a datastore is configured and accessible on the ESXi host.

n Verify that you have the IP address and gateway, DNS server IP addresses, domain search
list, and the NTP Server IP or FQDN for the NSX Manager to use.

n Create a management VDS and target VM port group in vCenter. Place the NSX appliances
onto this management VDS port group network. See Prepare a vSphere Distributed Switch
for NSX.

Multiple management networks can be used as long as the NSX Manager nodes have
consistent connectivity and recommended latency between them.

Note If you plan to use a Cluster VIP, all NSX Manager appliances should belong to the same
subnet.

n Plan your NSX Manager IP and NSX Manager Cluster VIP addressing scheme.

Note Verify that you have the hostname for NSX Manager to use. The Hostname format
must be nsx-manager-fqdn@[Link]. This format is required if NSX installation is dual
stack (IPv4, IPv6) and/or if planning to configure CA-signed certificates.


Procedure

1 Run the ovftool command with the appropriate parameters.

The process depends on whether the host is standalone or managed by VMware vCenter.

n For a standalone host:

Note On a standalone host, if you enter an incorrect role in the nsx_role property, then
the appliance is deployed in the NSX Manager role.

n Windows example:

C:\Program Files\VMware\VMware OVF Tool>ovftool \


--name=<nsxmanager>
--X:injectOvfEnv
--X:logFile=[Link]
--sourceType=OVA
--vmFolder='Folder-in-VC'
--allowExtraConfig
--datastore=<datastore>
--net:"<network-name-of-OVF>=<network-name>"
--acceptAllEulas
--skipManifestCheck
--noSSLVerify
--diskMode=thin
--quiet
--hideEula
--powerOn
--prop:nsx_ip_0=[Link]
--prop:nsx_netmask_0=[Link]
--prop:nsx_gateway_0=[Link]
--prop:nsx_dns1_0=[Link]
--prop:nsx_domain_0=[Link]
--prop:nsx_ntp_0=[Link]
--prop:nsx_isSSHEnabled=True
--prop:"nsx_passwd_0=<password>"
--prop:"nsx_cli_passwd_0=<password-cli>"
--prop:"nsx_cli_audit_passwd_0=<password-cli-audit>"
--prop:nsx_hostname=<hostname>
--prop:mgrhostname01="nsx-manager-02@[Link]"
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_role="NSX Manager"
--X:logFile=/root/ovftool/<ovf-folder>.log
--X:logLevel=trivia
--ipProtocol=IPv4


--ipAllocationPolicy="fixedPolicy"
<nsx-unified-applicance>.ova
'vi://admin@[Link]:password@[Link]/<datacenter>/host/
Install/[Link]/'

Note The above Windows code block uses the backslash (\) to indicate the
continuation of the command line. In actual use, omit the backslash and put the entire
command in a single line.

Note In the above example, [Link] is the IP address of the host machine where
NSX Manager is to be deployed.

Note In the above example, --deploymentOption is set to the default size Medium.
To know the other supported sizes, see NSX Manager VM and Host Transport Node
System Requirements.

n Linux example:

mgrformfactor="small"
ipAllocationPolicy="fixedPolicy"
mgrdatastore="QNAP-Share-VMs"
mgrnetwork="Management-VLAN-210"

mgrname01="nsx-manager-01"
mgrhostname01="nsx-manager-01"
mgrip01="[Link]"

mgrnetmask="[Link]"
mgrgw="[Link]"
mgrdns="[Link]"
mgrntp="[Link]"
mgrpasswd="<password>"
mgrssh="<True|False>"
mgrroot="<True|False>"
logLevel="trivia"

mgresxhost01="[Link]"

ovftool
--name=<nsxmanager>
--X:injectOvfEnv
--X:logFile=[Link]
--sourceType=OVA
--vmFolder='Folder-in-VC'
--allowExtraConfig
--datastore=<datastore>
--net:"<network-name-of-OVF>=<network-name>"
--acceptAllEulas
--skipManifestCheck
--noSSLVerify
--diskMode=thin
--quiet


--hideEula
--powerOn
--prop:nsx_ip_0=[Link]
--prop:nsx_netmask_0=[Link]
--prop:nsx_gateway_0=[Link]
--prop:nsx_dns1_0=[Link]
--prop:nsx_domain_0=[Link]
--prop:nsx_ntp_0=[Link]
--prop:nsx_isSSHEnabled=True
--prop:"nsx_passwd_0=<password>"
--prop:"nsx_cli_passwd_0=<password-cli>"
--prop:"nsx_cli_audit_passwd_0=<password-cli-audit>"
--prop:nsx_hostname=<hostname>
--prop:mgrhostname01="nsx-manager-02@[Link]"
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_role="NSX Manager"
--X:logFile=/root/ovftool/<ovf-folder>.log
--X:logLevel=trivia
--ipProtocol=IPv4
--ipAllocationPolicy="fixedPolicy"
<nsx-unified-applicance>.ova
'vi://admin@[Link]:password@[Link]/<datacenter>/host/
Install/[Link]/'

The result should look something like this:

Opening OVA source: nsx-<component>.ova


The manifest validates
Source is signed and the certificate validates
Opening VI target: vi://root:<password>@<esxi-IP-address>
Deploying to VI: vi://root:<password>@<esxi-IP-address>
Transfer Completed
Powering on VM: NSX Manager
Task Completed
Completed successfully

n For a host managed by VMware vCenter:

n Windows example:

C:\Users\Administrator\Downloads>ovftool
--name=<nsxmanager>
--X:injectOvfEnv
--X:logFile=[Link]
--sourceType=OVA
--vmFolder='Folder-in-VC'
--allowExtraConfig
--datastore=<datastore>
--net:"<network-name-of-OVF>=<network-name>"
--acceptAllEulas
--skipManifestCheck
--noSSLVerify
--diskMode=thin


--quiet
--hideEula
--powerOn
--prop:nsx_ip_0=[Link]
--prop:nsx_netmask_0=[Link]
--prop:nsx_gateway_0=[Link]
--prop:nsx_dns1_0=[Link]
--prop:nsx_domain_0=[Link]
--prop:nsx_ntp_0=[Link]
--prop:nsx_isSSHEnabled=True
--prop:"nsx_passwd_0=<password>"
--prop:"nsx_cli_passwd_0=<password-cli>"
--prop:"nsx_cli_audit_passwd_0=<password-cli-audit>"
--prop:nsx_hostname=<hostname>
--prop:mgrhostname01="nsx-manager-02@[Link]"
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_role="NSX Manager"
--X:logFile=/root/ovftool/<ovf-folder>.log
--X:logLevel=trivia
--ipProtocol=IPv4
--ipAllocationPolicy="fixedPolicy" <nsx-unified-applicance>.ova
'vi://admin@[Link]:password@[Link]/<datacenter>/host/
Install/[Link]/'

Note The above Windows code block uses the backslash (\) to indicate the
continuation of the command line. In actual use, omit the backslash and put the entire
command in a single line.

Note In the above example, --deploymentOption is set to the default size Medium.
To know the other supported sizes, see NSX Manager VM and Host Transport Node
System Requirements.

n Linux example:

mgrformfactor="small"
ipAllocationPolicy="fixedPolicy"
mgrdatastore="QNAP-Share-VMs"
mgrnetwork="Management-VLAN-210"

mgrname01="nsx-manager-01"
mgrhostname01="nsx-manager-01"
mgrip01="[Link]"

mgrnetmask="[Link]"
mgrgw="[Link]"
mgrdns="[Link]"
mgrntp="[Link]"
mgrpasswd="<password>"
mgrssh="<True|False>"
mgrroot="<True|False>"
logLevel="trivia"

vcadmin="administrator@[Link]"


vcpass="<password>"
vcip="[Link]"
mgresxhost01="[Link]"

ovftool
--name=<nsxmanager>
--X:injectOvfEnv
--X:logFile=[Link]
--sourceType=OVA
--vmFolder='Folder-in-VC'
--allowExtraConfig
--datastore=<datastore>
--net:"<network-name-of-OVF>=<network-name>"
--acceptAllEulas
--skipManifestCheck
--noSSLVerify
--diskMode=thin
--quiet
--hideEula
--powerOn
--prop:nsx_ip_0=[Link]
--prop:nsx_netmask_0=[Link]
--prop:nsx_gateway_0=[Link]
--prop:nsx_dns1_0=[Link]
--prop:nsx_domain_0=[Link]
--prop:nsx_ntp_0=[Link]
--prop:nsx_isSSHEnabled=True
--prop:"nsx_passwd_0=<password>"
--prop:"nsx_cli_passwd_0=<password-cli>"
--prop:"nsx_cli_audit_passwd_0=<password-cli-audit>"
--prop:nsx_hostname=<hostname>
--prop:mgrhostname01="nsx-manager-02@[Link]"
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_role="NSX Manager"
--X:logFile=/root/ovftool/<ovf-folder>.log
--X:logLevel=trivia
--ipProtocol=IPv4
--ipAllocationPolicy="fixedPolicy" <nsx-unified-applicance>.ova
'vi://admin@[Link]:password@[Link]/<datacenter>/host/
Install/[Link]/'

The result should look something like this:

Opening OVA source: nsx-<component>.ova


The manifest validates
Source is signed and the certificate validates
Opening VI target: vi://administrator@[Link]@<esxi-IP-address:port>/
Deploying to VI: vi://administrator@[Link]@<esxi-IP-address:port>/
Transfer Completed
Powering on VM: NSX Manager
Task Completed
Completed successfully


2 You can also run the OVF tool in Probe mode to view contents of a source. OVA and OVF
packages can be probed among a list of other supported source types. You can use the
information returned by the Probe mode to configure deployments.

$> ovftool --allowExtraConfig <OVA path or URL>

Where --allowExtraConfig is required for the Cloud Service Manager (CSM) appliance type.

3 For an optimal performance, reserve memory for the appliance.

Set the reservation to ensure that NSX Manager has sufficient memory to run efficiently. See
NSX Manager VM and Host Transport Node System Requirements.

4 From the vSphere Client, verify that the VM is powered on. Open the VM console to track the
boot process of the node.

5 After the VM node boots a second time, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.

6 Enter the get services command after waiting for about 5 minutes to verify that all default
services are running.

The following services are not required by default and do not start automatically.

n liagent

n migration-coordinator: This service is used only when running migration coordinator.


See the NSX Migration Guide before starting this service.

n snmp: For information on starting SNMP see Simple Network Management Protocol in the
NSX Administration Guide.
n nsx-message-bus: This service is not used in NSX 3.0 and later releases.

7 After deployment, verify that the NSX Manager UI comes up by accessing the following URL,
[Link] or [Link]

8 Verify that your NSX Manager, or Global Manager node has the required connectivity.

Perform the following tasks:

n Ping your node from another machine.

n The node can ping its default gateway.

n The node can ping the hypervisor hosts that are in the same network using the
management interface.

n The node can ping its DNS server and its NTP Server or FQDN.

n If you enabled SSH, make sure that you can SSH to your node.

If connectivity is not established, make sure that the network adapter of the virtual appliance
is in the proper network or VLAN.


What to do next

n Log in to the NSX Manager from a supported web browser. See Log In to the Newly Created
NSX Manager .

Note NSX Manager nodes that are removed from the cluster should be powered off or
deleted. Do not reuse the same NSX Manager again in your environment.

n If deploying the second and third NSX Manager nodes as OVA/OVF, join the manager nodes
to the first deployed manager node to create the NSX Manager cluster. See Form an NSX Manager
Cluster Using the CLI.

Configure an Appliance to Display the GRUB Menu at Boot Time


You must configure an NSX appliance to display the GRUB menu at boot time if you want to
reset the root password of the appliance.

Important If the configuration is not performed after deploying the appliance and you forget the
root password, resetting it is not possible.

Procedure

1 Log in to the VM as root.

2 In the /etc/default/grub file, set the GRUB_TIMEOUT_STYLE to menu or countdown.

n If this option set to menu, then GRUB will display the menu and then wait for the
timeout set by GRUB_TIMEOUT to expire before booting the default entry. Pressing a key
interrupts the timeout.

n If this option is set to countdown, then before displaying the menu, GRUB will wait for
the timeout set by GRUB_TIMEOUT to expire. If ESC or F4 are pressed, or SHIFT is held
down during that time, it will display the menu and wait for input. It will show a one-line
indication of the remaining time.

3 In the /etc/default/grub file, change the value for the parameter GRUB_TIMEOUT.

GRUB_TIMEOUT=4

4 (Optional) Generate a new password by running the following command:

grub-mkpasswd-pbkdf2

5 (Optional) In the /etc/grub.d/40_custom file, replace the existing GRUB password.

The default password is NSX@VM!WaR10.

6 Update the GRUB configuration.

update-grub
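
To summarize, a minimal sketch of the relevant settings and the commands that apply them is
shown below. The timeout value is illustrative.

# Relevant lines in /etc/default/grub
GRUB_TIMEOUT_STYLE=menu
GRUB_TIMEOUT=4

# Optionally generate a new PBKDF2 hash to place in /etc/grub.d/40_custom
grub-mkpasswd-pbkdf2

# Apply the changes
update-grub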


Configure GRUB Menu Using CLI or API


You must configure an NSX appliance to display the GRUB menu at boot time if you want to
reset the root password of the appliance.

Starting with NSX [Link], you can use CLI or API commands to set the GRUB timeout value and
password. You can follow these commands post deployment of NSX.

Important If the configuration is not performed after deploying the appliance and you forget the
root password, resetting it is not possible.

Procedure

1 Using CLI to set GRUB menu:

a Log in to the NSX command line interface.

b Run set grub menu timeout <value>.

Where <value> is time in seconds. The default timeout value is 4.

c Run set grub user root password <newpassword>.

OR

d Run set grub user root password

Enter password:<newpassword>

Confirm password:<newpassword>


2 Using API to set GRUB menu:

a Use the GET API to retrieve GRUB menu values.

GET [Link]

Example Response:
{
"timeout": 4,
"users": [
{
"username": "root"
}
]
}

b Set the GRUB timeout value.

PUT [Link] { "timeout": 4 }


Example Response:
{
"timeout": 4
}

c Set the GRUB menu password.

PUT [Link] { "password": "Str0ng_Pwd!Wins$" }


Example Response:
{
"username": "root"
}

3 Get GRUB timeout value.

get grub menu timeout

GRUB Menu Timeout = 4

Log In to the Newly Created NSX Manager


After you install NSX Manager, log in to the NSX Manager UI to perform additional tasks.

After you install NSX Manager, you can join the Customer Experience Improvement Program
(CEIP) for NSX. See Customer Experience Improvement Program in the NSX Administration Guide
for more information about the program, including how to join or leave the program later.

Prerequisites

Verify that NSX Manager is installed. See Install NSX Manager and Available Appliances.


Procedure

1 From a browser, log in with admin privileges to an NSX Manager at [Link] or [Link]

The EULA appears.

2 Read and accept the EULA terms.

3 Select whether to join the VMware's Customer Experience Improvement Program (CEIP).

4 Click Save.

5 Go to System → Licenses and click Add to add your NSX license.

Add a Compute Manager


A compute manager, for example, VMware vCenter, is an application that manages resources
such as hosts and VMs.

NSX polls compute managers to collect cluster information from VMware vCenter.

For more information about VMware vCenter roles and privileges, see the vSphere Security
document.

Prerequisites

n Verify that you use the supported vSphere version. See Supported vSphere version.

n IPv4 communication with VMware vCenter.

n Verify that you use the recommended number of compute managers. See https://
[Link]/home.

n Provide credentials of a VMware vCenter user. You can provide the credentials of a VMware
vCenter administrator, or create a role and a user specifically for NSX and provide this user's
credentials. Add global permissions to the newly created user and role and select Propagate
to Children.

Create an admin role with the following VMware vCenter privileges:

Role Privilege

Admin [Link] extension

Admin [Link] extension

Admin [Link] extension

Admin [Link]

Admin [Link] session

Admin [Link] and stop sessions

Admin [Link]

Admin [Link]

Admin [Link] [Link] virtual machine


Role Privilege

Admin [Link] [Link] virtual machine

Admin [Link] [Link] virtual machine

Admin Tasks

Admin Scheduled task

Admin [Link] task

Admin [Link] role permissions

Admin [Link] vApp to resource pool

Admin [Link] virtual machine to resource pool

Admin Virtual [Link]

Admin Virtual [Link] Operations

Admin Virtual [Link]

Admin Virtual [Link]

Admin [Link] network

Admin vApp

To use the NSX license for the vSphere Distributed Switch 7.0 feature, the VMware vCenter
user must either be an administrator, or the user must have [Link] privileges and be
a member of the [Link] group.

n Before you create a service account for the compute manager, add these additional VMware
vCenter privileges to the admin user role:

Role Privilege

Admin Service Account [Link]

Admin [Link] permission

Admin [Link] role

Admin VMware vSphere Lifecycle [Link] Health [Link]

Admin VMware vSphere Lifecycle [Link] Manager: Image [Link]

Admin VMware vSphere Lifecycle [Link] Manager: Image [Link]

Admin VMware vSphere Lifecycle [Link] Manager: Image Remediation


[Link]

Admin VMware vSphere Lifecycle [Link] Manager: Settings [Link]

Admin VMware vSphere Lifecycle [Link] Manager: Settings [Link]

Admin VMware vSphere Lifecycle [Link] Manager: General [Link]

Admin VMware vSphere Lifecycle [Link] Manager: General [Link]


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at [Link]
ip-address>.

2 Select System > Fabric > Compute Managers > Add Compute Manager.

3 Complete the compute manager details.

Option Description

Name and Description Type the name to identify the VMware vCenter.
You can optionally describe any special details such as the number of
clusters in the VMware vCenter.

Type The default compute manager type is set to VMware vCenter.

FQDN or IP Address Type the FQDN or IP address of the VMware vCenter.

HTTPS Port of Reverse Proxy The default port is 443. If you use another port, verify that the port is open
on all the NSX Manager appliances.
Set the reverse proxy port to register the compute manager in NSX.

Username and Password Type the VMware vCenter login credentials.

SHA-256 Thumbprint Type the VMware vCenter SHA-256 thumbprint algorithm value.

Create Service Account Enable this field for features such as vSphere Lifecycle Manager that need
to authenticate with NSX APIs. Log in with the administrator@[Link]
credential to register a compute manager. After registration, the compute
manager creates a service account.

Note Service account creation is not supported on a global NSX Manager.

If service account creation fails, the compute manager's registration status
is set to Registered with errors. The compute manager is successfully
registered. However, vSphere Lifecycle Manager cannot be enabled on NSX
clusters.
If a VMware vCenter admin deletes the service account after it was
successfully created, vSphere Lifecycle Manager tries to authenticate
the NSX APIs and the compute manager's registration status is set to
Registered with errors.


Option Description

Enable Trust Enable this field to establish trust between NSX and compute manager, so
that services running in vCenter Server can establish trusted communication
with NSX. For example, for vSphere Lifecycle Manager to be enabled on
NSX clusters, you must enable this field.
Supported only on VMware vCenter 7.0 and later versions.

Important You can only enable trust on a maximum of 10 compute
managers.

Access Level    Enable one of the options based on your requirement:
n Full Access to NSX: Selected by default. This access level gives the
compute manager complete access to NSX. Full access ensures that vSphere
for Kubernetes and vSphere Lifecycle Manager can communicate with
NSX. The VMware vCenter user's role must be set to Enterprise Admin.
n Limited Access to NSX: This access level ensures that vSphere Lifecycle
Manager can communicate with NSX. The VMware vCenter user's role
must be set to Limited vSphere Admin.

If you left the thumbprint value blank, you are prompted to accept the server provided
thumbprint.

After you accept the thumbprint, it takes a few seconds for NSX to discover and register the
VMware vCenter resources.

Note If the FQDN, IP, or thumbprint of the compute manager changes after registration, edit
the compute manager and enter the new values.
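
If you want to obtain the vCenter SHA-256 thumbprint yourself instead of accepting the
server-provided value, a command along the following lines can be used. This is a sketch; the
vCenter FQDN is a placeholder.

echo -n | openssl s_client -connect <vcenter-fqdn>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256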

4 If the progress icon changes from In progress to Not registered, perform the following steps
to resolve the error.

a Select the error message and click Resolve. One possible error message is the following:

Extension already registered at CM <vCenter Server name> with id <extension ID>

b Enter the VMware vCenter credentials and click Resolve.

If an existing registration exists, it will be replaced.

Results

It takes some time to register the compute manager with VMware vCenter and for the
connection status to appear as UP.

You can click the compute manager's name to view the details, edit the compute manager, or to
manage tags that apply to the compute manager.


After the VMware vCenter is successfully registered, do not power off and delete the NSX
Manager VM without deleting the compute manager first. Otherwise, when you deploy a new
NSX Manager, you will not be able to register the same VMware vCenter again. You will get the
error that the VMware vCenter is already registered with another NSX Manager.

Note After a vCenter Server (VC) compute manager is successfully added, it cannot be
removed if you successfully performed any of the following actions:

n Transport nodes are prepared using VDS that is dependent on the VC.

n Service VMs deployed on a host or a cluster in the VC using NSX service insertion.

n You use the NSX Manager UI to deploy Edge VMs or NSX Manager nodes on a host or a
cluster in the VC.

If you try to perform any of these actions and you encounter an error (for example, installation
failed), you can remove the VC if you have not successfully performed any of the actions listed
above.

If you have successfully prepared any transport node using VDS that is dependent on the VC or
deployed any VM, you can remove the VC after you have done the following:

n Unprepare all transport nodes. If uninstalling a transport node fails, you must force delete the
transport node.

n Undeploy all service VMs, all NSX Edge VMs, and all NSX Manager nodes. The undeployment
must be successful or in a failed state.

n If an NSX Manager cluster consists of nodes deployed from the VC (manual method) and
nodes deployed from the NSX Manager UI, and you had to undeploy the manually deployed
nodes, then you cannot remove the VC. To successfully remove the VC, ensure that you
re-deploy an NSX Manager node from the VC.

This restriction applies to a fresh installation of NSX as well as an upgrade.

Deploy NSX Manager Nodes to Form a Cluster from the UI


Forming an NSX Manager or Global Manager cluster provides high availability and reliability.
Deploying nodes using the UI is supported only on ESXi hosts managed by VMware vCenter that
is added as a compute manager.

For deploying additional NSX Manager nodes on a VMware vCenter that is not added as a
compute manager, see Install NSX Manager on ESXi Using the Command-Line OVF Tool and
Form an NSX Manager Cluster Using the CLI.


When you deploy a new node from the UI, the node connects to the first deployed node to form
a cluster. All the repository details and the password of the first deployed node are synchronized
with the newly deployed node. The first node is known as the orchestrator node because it
contains the original copy of the VIBs and installation files required to prepare hosts of the
cluster. The orchestrator node also helps identify the node on which the Upgrade-Coordinator is
running. When new nodes are added to the cluster, NSX uses the repository IP to synchronize
the repository of VIBs and installation files on the new nodes of the cluster.

To create an NSX Manager cluster, deploy two additional nodes to form a cluster of three nodes
total.

Note Data is replicated to all the active NSX Manager nodes of the cluster. So, when the NSX
Manager cluster is stable, every NSX Manager node contains the same data.

To create a Global Manager cluster, deploy two additional nodes to form a cluster of three nodes
total. However, if your Global Manager has NSX 3.0.0 installed, deploy only one node, and do not
form a cluster. See Install the Active and Standby Global Manager .

Prerequisites

n Verify that an NSX Manager or Global Manager node is installed. See Install NSX Manager and
Available Appliances.

n Verify that compute manager is configured. See Add a Compute Manager.

n Verify that the system requirements are met. See System Requirements.

n Verify that the required ports are open. See Ports and Protocols.

n Verify that a datastore is configured and accessible on the ESXi host.

n Verify that you have the IP address and gateway, DNS server IP addresses, domain search
list, and the NTP Server IP or FQDN for the NSX Manager to use.

n Create a management VDS and target VM port group in vCenter. Place the NSX appliances
onto this management VDS port group network. See Prepare a vSphere Distributed Switch
for NSX.

Multiple management networks can be used as long as the NSX Manager nodes have
consistent connectivity and recommended latency between them.

Note If you plan to use a Cluster VIP, all NSX Manager appliances should belong to the same
subnet.

Procedure

1 From a browser, log in with admin privileges to the NSX Manager or Global Manager at
[Link]

2 Deploy an appliance.

n From NSX Manager, select System > Appliances > Add NSX Appliance.


n From Global Manager, select System > Global Manager Appliances > Add NSX
Appliance.

3 Enter the appliance information details.

Option Description

Host Name or FQDN Enter a name for the node.

Note If you plan to deploy NSX Manager in dual stack mode (IPv4 and IPv6)
and if you plan to configure NSX Manager with CA signed certificates, you
must set a FQDN with valid domain name.

IP Type Select the IP type. The appliance can have IPv4 address only or both IPv4
and IPv6 addresses.

Management IPv4/Netmask Enter an IPv4 address to be assigned to the node.

Management Gateway IPv4 Enter a gateway IPv4 address to be used by the node.

Management IPv6/Netmask Enter an IPv6 address to be assigned to the node. This option appears when
IP Type is set to Both IPv4 and IPv6.

Management Gateway IPv6 Enter a gateway IPv6 address to be used by the node. This option appears
when IP Type is set to Both IPv4 and IPv6.

DNS Servers Enter DNS server IP addresses to be used by the node.

NTP Server Enter an NTP server IP address to be used by the node.

Node Size Select the form factor to deploy the node from the following options:
n Small (4 vCPU, 16 GB RAM, 300 GB storage)
n Medium (6 vCPU, 24 GB RAM, 300 GB storage)
n Large (12 vCPU, 48 GB RAM, 300 GB storage)
For Global Manager, select a size:
n Medium GM appliance for deployments up to four locations and 128
hypervisors across all locations
n Large GM appliance for deployments with higher scale
Do not use the Small GM appliance for scale deployments.

4 Enter the configuration details.

Option Description

Compute Manager Select the VMware vCenter to provision compute resources for deploying
the node.

Compute Cluster Select the cluster the node is going to join.

(Optional) Resource Pool Select a resource pool for the node from the drop-down menu.

(Optional) Host Select a host for the node from the drop-down menu.

Datastore Select a datastore for the node files from the drop-down menu.


Option Description

Virtual Disk Format n For NFS datastores, select a virtual disk format from the available
provisioned policies on the underlying datastore.
n With hardware acceleration, Thin Provision, Thick Provision Lazy
Zeroed, and Thick Provision Eager Zeroed formats are supported.
n Without hardware acceleration, only Thin Provision format is
supported.
n For VMFS datastores, Thin Provision, Thick Provision Lazy Zeroed, and
Thick Provision Eager Zeroed formats are supported.
n For vSAN datastores, you cannot select a virtual disk format because the
VM storage policy defines the format.
n The vSAN storage policies determine the disk format. The default
virtual disk format for vSAN is Thin Provision. You can change the
vSAN storage policies to set a percentage of the virtual disk that
must be thick-provisioned.
By default, the virtual disk for an NSX Manager or Global Manager node is
prepared in the Thin Provision format.

Note You can provision each node with a different disk format based on
which policies are provisioned on the datastore.

Network Click Select Network to select the management network for the node.

5 Enter the access and credentials details.

Option Description

Enable SSH Toggle the button to allow an SSH login to the new node.

Enable Root Access Toggle the button to allow root access to the new node.


Option Description

System Root Credentials Set the root password and confirm the password for the new node.
Your password must comply with the password strength restrictions.
n At least 12 characters
n At least one lower-case letter
n At least one upper-case letter
n At least one digit
n At least one special character
n At least five different characters
n Default password complexity rules are enforced by the following Linux
PAM module arguments:
n retry=3: The maximum number of times a new password can be
entered, for this argument at the most 3 times, before returning with
an error.
n minlen=12: The minimum acceptable size for the new password. In
addition to the number of characters in the new password, credit
(of +1 in length) is given for each different kind of character (other,
upper, lower and digit).
n difok=0: The minimum number of bytes that must be different in
the new password. Indicates similarity between the old and new
password. With a value 0 assigned to difok, there is no requirement
for any byte of the old and new password to be different. An exact
match is allowed.
n lcredit=1: The maximum credit for having lower-case letters in the
new password. At most one lower-case letter counts +1 towards
meeting the current minlen value.
n ucredit=1: The maximum credit for having upper-case letters in the
new password. At most one upper-case letter counts +1 towards
meeting the current minlen value.
n dcredit=1: The maximum credit for having digits in the new
password. At most one digit counts +1 towards meeting the current
minlen value.
n ocredit=1: The maximum credit for having other characters in the
new password. At most one other character counts +1 towards
meeting the current minlen value.
n enforce_for_root: The password complexity rules are also enforced
for the root user.

Note For more details on Linux PAM module to check the password
against dictionary words, refer to the man page.

For example, avoid simple and systematic passwords such as
VMware123!123 or VMware12345. Passwords that meet complexity
standards are not simple and systematic but are a combination of
upper-case and lower-case letters, digits, and special characters, such as
VMware123!45, VMware 1!2345, or VMware@1az23x.

Admin CLI Credentials and Audit CLI Credentials    Select the Same as root password check box to use the same password
that you configured for root, or deselect the check box and set a different
password.


6 Click Install Appliance.

The new node is deployed. You can track the deployment process on the System >
Appliances page for NSX Manager, the System > Global Manager Appliances page for Global
Manager, or in VMware vCenter for either. Do not add additional nodes until the installation
is finished and the cluster is stable.

7 Wait for the deployment, cluster formation, and repository synchronization to finish.

The joining and cluster stabilizing process might take from 10 to 15 minutes. After the node
boots (as part of appliance bring-up) and is back up, log in to the first deployed NSX Manager
CLI as an admin. Run get cluster status to view the status. Verify that the first node and
second node are members of the cluster, that the status for every cluster service group is UP,
and that the cluster status is STABLE before making any other cluster changes.

Note
n If you reboot the first deployed NSX Manager node while the deployment of a new node
is in progress, the new node might fail to register with the cluster. It displays the Failed
to Register message on the new node's thumbnail. To redeploy the node manually on
the cluster, delete the failed node and redeploy it.

n If a node deployment fails, you cannot reuse the same IP address to deploy another node
until the failed node is deleted.

n NSX Manager nodes that are removed from the cluster should be powered off or deleted.
Do not reuse them in your deployments.

8 After the VM node boots a second time, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.

9 Verify that your NSX Manager, or Global Manager node has the required connectivity.

Perform the following tasks:

n Ping your node from another machine.

n The node can ping its default gateway.

n The node can ping the hypervisor hosts that are in the same network using the
management interface.

n The node can ping its DNS server and its NTP Server or FQDN.

n If you enabled SSH, make sure that you can SSH to your node.

If connectivity is not established, make sure that the network adapter of the virtual appliance
is in the proper network or VLAN.

10 If your cluster has only two nodes, add another appliance.

n From NSX Manager, select System > Appliances > Add NSX Appliance and repeat the
configuration steps.


n From Global Manager, select System > Global Manager Appliances > Add NSX Appliance
and repeat the configuration steps.

Important If the orchestrator node (the first NSX Manager node deployed) goes down or
becomes unreachable while additional nodes in the cluster have not finished replicating with the
first node, further operations on the cluster will fail. To avoid this, bring the first node back
up successfully and then delete and redeploy the additional nodes that had failed to finish
replicating data with the first node.

Note NSX Manager nodes that are removed from the cluster should be powered off or
deleted. Do not reuse them in your deployments.

What to do next

Configure NSX Edge. See Install an NSX Edge on ESXi Using the vSphere GUI.

Form an NSX Manager Cluster Using the CLI


Forming an NSX Manager or Global Manager cluster provides high availability and reliability. If all
or at least one of the three NSX Manager appliances is deployed as an OVA/OVF using vCenter,
use the join command to join the NSX Manager nodes and create a cluster.

Prerequisites

n To create an NSX Manager cluster, deploy three NSX Manager nodes from the OVF tool CLI
or VMware vCenter UI.

n To create a Global Manager cluster, deploy three nodes to create the cluster. However, if
your Global Manager has NSX 3.0.0 installed, deploy only one node, and do not form a
cluster. See Install the Active and Standby Global Manager .

Procedure

1 Open an SSH or console session to the first deployed NSX Manager or Global Manager node
and log in with the administrator credentials.

2 On the first deployed node, run the following commands.

a Run the get certificate api thumbprint command.

The command output is a string that is unique to this node.

b Run the get cluster config command to get the cluster ID of the first deployed node.

mgr-first> get cluster config


Cluster Id: 7b50abb9-0402-4ed5-afec-363587c3c705
Cluster Configuration Version: 0
Number of nodes in the cluster: 1

...


3 Open an SSH or console session to the new node and log in with the administrator
credentials.

4 On the new node that is joining the cluster, run the join command.

Provide the following information about the first deployed node in the join command:

n IP address

n Cluster ID

n User name

n Password

n Certificate thumbprint

mgr-new> join <Manager-IP> cluster-id <cluster-id> username <Manager-username> password
<Manager-password> thumbprint <Manager-thumbprint>
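
For example, using the cluster ID from step 2b and illustrative values for the other parameters
(all values shown are placeholders), the command might look like this:

mgr-new> join 10.10.10.11 cluster-id 7b50abb9-0402-4ed5-afec-363587c3c705 username admin password <Manager-password> thumbprint <Manager-thumbprint>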

The joining and cluster stabilizing process might take from 10 to 15 minutes. Run get
cluster status to view the status. Verify that the status for every cluster service group
is UP before making any other cluster changes.

5 Add the third node to the cluster.

Repeat step 4 on the third node.

6 Verify the cluster status on the web interface.

n On NSX Manager, log in to the NSX Manager web interface and select System >
Appliances.

n On Global Manager, log in to the Global Manager web interface and select System >
Global Manager Appliances.

What to do next

Create a transport zone. See Create Transport Zones.

Configure a Virtual IP Address for a Cluster


To provide fault tolerance and high availability to NSX Manager nodes, assign a virtual IP address
(VIP) to the NSX cluster.

NSX Manager nodes of a cluster become part of an HTTPS group to service API and UI requests.
The leader node of the cluster assumes ownership of the configured VIP of the cluster to service API
and UI requests. Any API and UI request coming in from clients is directed to the leader node.

Note When assigning a virtual IP, all the NSX Manager VMs in the cluster must be configured
in the same subnet. However, if an external load balancer is used to provide the cluster VIP, the
NSX Managers and the VIP can belong to different subnets.


If the leader node that owns the VIP becomes unavailable, NSX elects a new leader. The new
leader owns the VIP and sends out a gratuitous ARP packet advertising the new VIP-to-MAC-address
mapping. After a new leader node is elected, new API and UI requests are sent to the new leader
node.

Failover of VIP to a new leader node of the cluster might take a few minutes to become
functional. If the VIP fails over to a new leader node because the previous leader node became
unavailable, reauthenticate the NSX Manager credentials so that API requests are directed to the
new leader node.

Note VIP is not designed to serve as a load-balancer and you cannot use it if you enable the
vIDM External Load Balancer Integration from System > Users > Configuration. Do not set up
a VIP if you want to use the External Load Balancer from vIDM. See Configure VMware Identity
Manager Integration in the NSX Administration Guide for more details.

Important If you reset the cluster VIP, any vIDM configuration that uses the VIP is cleared.
You must reconfigure vIDM with the new VIP.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Go to System > Appliances.

3 In the Virtual IP field, click Set Virtual IP.

4 Enter the IPv4 and/or IPv6 address to use as VIP for the cluster.

Ensure that the VIP is part of the same subnet as the other management nodes. When you
deploy an NSX Manager in a dual-stack environment (IPv4 and IPv6), use a valid FQDN and
use the same FQDN for both the IPv4 and IPv6 addresses.

5 Click Save.

6 To verify the cluster status and the API leader of the HTTPS group, enter the NSX Manager
CLI command get cluster status verbose in the NSX Manager console or over SSH.

The following is an example output:

Group Type: HTTPS
Group Status: STABLE
Members:
    UUID                                   FQDN          IP        STATUS
    cdb93642-ccba-fdf4-8819-90bf018cd727   nsx-manager   [Link]    UP
    51a13642-929b-8dfc-3455-109e6cc2a7ae   nsx-manager   [Link]    UP
    d0de3642-d03f-c909-9cca-312fd22e486b   nsx-manager   [Link]    UP
Leaders:
    SERVICE   LEADER                                 LEASE VERSION
    api       cdb93642-ccba-fdf4-8819-90bf018cd727   8

7 Verify that the VIP is working correctly.

From a browser, log in to the NSX Manager using the virtual IP address assigned to the cluster at https://<cluster-vip-address>.

Results

Any API requests to NSX are redirected to the virtual IP address of the cluster, which is owned
by the leader node. The leader node then routes the request forward to the other components of
the appliance.

Note If you deploy NSX Manager in dual-stack mode (IPv4 and IPv6), or if you plan to configure
the NSX Manager deployment with CA-signed certificates, associate the VIP address with a DNS
name. Also, configure forward and reverse lookup entries for the VIP address on the DNS server.
Then access the VIP at https://<vip-fqdn>.
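Once the records are in place, you can confirm forward and reverse resolution from any client. This is a minimal sketch; the FQDN and address below are hypothetical placeholders, not values from this guide.

nslookup nsx-vip.corp.example   # forward lookup should return the cluster VIP address
nslookup 192.0.2.50             # reverse lookup should return the VIP FQDN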

Configuring an External Load Balancer


You can configure an external load balancer to distribute traffic to the NSX Managers in a
manager cluster.

An NSX Manager cluster does not require an external load balancer. The NSX Manager virtual IP
(VIP) provides resiliency in the event of a Manager node failure but has the following limitations:

n VIP does not perform load balancing across the NSX Managers.

n VIP requires all the NSX Managers to be in the same subnet.

n VIP recovery takes about 1 - 3 minutes in the event of a Manager node failure.

An external load balancer can provide the following benefits:

n Load balance across the NSX Managers.

n The NSX Managers can be in different subnets.

n Fast recovery time in the event of a Manager node failure.

An external load balancer will not work with the NSX Manager VIP. Do not configure an NSX
Manager VIP if you use an external load balancer.

Authentication Methods When Accessing NSX Manager


The following authentication methods are supported by NSX Manager. For more information
about the authentication methods, see the NSX API Guide.

n HTTP Basic Authentication


n Session-Based Authentication

n Authentication using an X.509 certificate and a Principal Identity

n Authentication in VMware Cloud on AWS

The session-based authentication method (used when you access NSX Manager from a browser)
requires source-IP persistence (all requests from the client must go to the same NSX Manager).
The other methods do not require source-IP persistence (requests from the client can go to
different NSX Managers).

Recommendations
n Create a single VIP on the load balancer with source-IP persistence configured to handle all
the authentication methods.

n If you have applications or scripts that might generate a lot of requests to NSX Manager,
create a second VIP without source-IP persistence for these applications or scripts. Use the
first VIP for browser access to NSX Manager only.

The VIP must have the following configurations:

n Type: Layer4-TCP

n Port: 443

n Pool: NSX Manager Pool

n Persistence: Source-IP persistence for the first VIP. None for the second VIP (if present).
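As an illustration only, the following sketch shows how these requirements might map onto HAProxy, one of many possible external load balancers. All names and addresses are hypothetical, and this is not a configuration taken from this guide; consult your load balancer's documentation for the equivalent settings.

frontend nsx_vip
    bind 192.0.2.50:443          # Layer 4 VIP on port 443
    mode tcp
    default_backend nsx_managers

backend nsx_managers
    mode tcp
    balance source               # source-IP persistence for the first VIP
    server mgr1 192.0.2.11:443 check
    server mgr2 192.0.2.12:443 check
    server mgr3 192.0.2.13:443 check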

Example of an external load balancer configuration (figure): clients (browsers and API) send requests to the external load balancer VIP (x.x.x.x:443), and the load balancer monitors and forwards the traffic to the three NSX Manager pool members on port 443.


NSX Manager's Certificate


Clients access NSX Manager using an FQDN, which resolves to the load balancer's VIP. To avoid
certificate mismatch errors, each NSX Manager must have a certificate that is valid for the VIP's
FQDN. Therefore, you must configure each NSX Manager with a SAN certificate that is valid for
both its own FQDN and the VIP's FQDN.

Monitoring the Health of NSX Managers


The load balancer can check that each NSX Manager is running with the following API:

GET /api/v1/reverse-proxy/node/health

The request headers are:

n Header1

n Name: Authorization

n Value: Basic <Base64 Value>

Note: <Base64 Value> is Username:Password encoded in Base64. You can use any Base64
encoding tool to generate it. For example, Header1 might be Authorization:
Basic YWRtaW46Vk13YXJlMSFWTXdhcmUxIQ== for admin:VMware1!VMware1!.

n Header2

n Name: Content-Type

n Value: application/json

n Header3

n Name: Accept

n Value: application/json

A response indicating that the NSX Manager is running will be:

"healthy" : true

Note that the format of the response is “healthy”<space>:<space>true.

If you change the password of the user that you specify in Header1, you must update Header1
accordingly.
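For reference, this is roughly what the health probe looks like when issued manually with curl. This is a hedged sketch; the manager address and credentials are placeholders, and the request path and headers are the ones documented above.

curl -k -u 'admin:<password>' \
     -H 'Content-Type: application/json' -H 'Accept: application/json' \
     "https://<nsx-manager-ip>/api/v1/reverse-proxy/node/health"
# A healthy node returns a response containing:  "healthy" : true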


Disable Snapshots on an NSX Appliance


Clones and snapshots of NSX appliances, such as NSX Manager, NSX Edge and Global Manager,
are not supported. Follow the procedure below to disable snapshots.

Note Starting with NSX 3.0.1, snapshots are automatically disabled when an NSX appliance is
deployed from the NSX Manager GUI. However, if you deploy an NSX appliance manually, for
example, using an OVA file, follow the procedure below to disable snapshots.

Procedure

1 Locate the appliance VM in the vSphere Client.

2 Power down the VM.

3 Right-click the VM and select Edit Settings.

4 Click the VM Options tab, then expand Advanced.

5 In the Configuration Parameters field, click Edit Configuration....

6 In the Configuration Parameters window, click Add Configuration Params.

7 Enter the following:

n For Name, enter snapshot.maxSnapshots.

n For Value, enter 0.

Note NSX does not support taking snapshots of NSX Manager. However, if you still want
to allow snapshots, enter a value other than zero for snapshot.maxSnapshots to permit
that many snapshots.

8 Click OK to save the changes.

9 Power the VM back on.
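Setting this parameter through Edit Settings is equivalent to the following line appearing in the appliance VM's .vmx configuration file. This is shown for reference only, under the assumption that the parameter name above is correct; use the vSphere Client procedure above where possible.

snapshot.maxSnapshots = "0"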



Transport Zones and Profiles
8
Transport zones and profiles are building blocks to prepare hosts for NSX networking.

Read the following topics next:

n Create Transport Zones

n Create an IP Pool for Tunnel Endpoint IP Addresses

n Enhanced Data Path

n Configuring Profiles

Create Transport Zones


Transport zones dictate which host transport nodes and, therefore, which VMs can participate
in the use of a particular network. A transport zone does this by limiting the hosts that can see a
segment and, therefore, which VMs can be attached to the segment. A transport zone can span
one or more host clusters, and a host transport node can be associated with multiple transport
zones.

Overlay traffic: Is a type of network traffic that is created by virtual networks using a protocol
such as VXLAN or GENEVE. Overlay traffic is encapsulated with a header that contains
information about the virtual network, such as the virtual network identifier (VNI) or the logical
segment ID (LSID). Overlay traffic is transmitted over the physical network, which is called the
underlay, and is decapsulated at the destination host or edge node. Overlay traffic allows virtual
machines on different hosts or availability zones to communicate with each other as if they
were on the same Layer 2 network, without requiring any changes to the physical network
configuration. Overlay traffic also enables features such as distributed routing, logical switching,
and distributed firewall.

VLAN traffic: Is a type of network traffic that is segmented by using virtual LANs (VLANs).
VLANs allow you to create logical groups of ports on a physical switch, so that only the ports
that belong to the same VLAN can communicate with each other. VLANs can improve network
security, performance, and manageability by isolating traffic. VLAN traffic can be configured in
different ways on VMware ESXi hosts, depending on how the VLAN tags are added or removed
from the packets.

An NSX environment can contain one or more transport zones based on your requirements. A
host can belong to multiple transport zones. A segment can belong to only one transport zone.


NSX does not allow connection of VMs that are in different transport zones in the Layer 2
network. The span of a segment is limited to a transport zone.

Both host transport nodes and NSX Edge nodes use overlay and VLAN transport zones. Host
transport nodes connect to VDS switches, while an N-VDS switch is configured on NSX Edge
transport nodes.

The VLAN transport zone is used by NSX Edge and host transport nodes for their VLAN
uplinks. When an NSX Edge is added to a VLAN transport zone, a VLAN N-VDS is installed on the
NSX Edge.

Note vMotion is not supported between two segments or logical switches on different VLAN
transport zones.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Transport Zones > Add Zone.

3 Enter a name for the transport zone and optionally a description.

4 Select a traffic type between Overlay and VLAN.

5 (Optional) For a VLAN transport zone, enter the names of named teaming policies. Segments
attached to the VLAN transport zone can use these named teaming policies, which are
specified in uplink profiles, to direct traffic to particular uplinks.

Note If you define named teaming policies, ensure that you enter the exact named teaming
policy name in associated VLAN segments and uplink profiles as well. If segments do not find
a matching named teaming policy, then NSX uses the default uplink teaming policy.

6 After you add the transport zone, go to the Transport Zones page and view the newly added
transport zone either from the UI or by running the following API command.

GET /policy/api/v1/global-infra/sites/<site-id>/enforcement-points/
<enforcementpoint-id>/transport-zones

{
"sort_ascending": true,
"sort_by": "display_name",
"result_count": 1,
"results": [
{
"tz_type": "OVERLAY_BACKED",
"is_default": true,
"transport_zone_profile_paths": [
"/infra/transport-zone-profiles/tzp"
],
"nested_nsx": false,
"resource_type": "PolicyTransportZone",
"id": "tz",


"display_name": "tz",
"path": "/infra/sites/default/enforcement-points/default/transport-zones/tz",
"relative_path": "tz",
"parent_path": "/infra/sites/default/enforcement-points/default",
"unique_id": "8f4a026d-e3f5-4f23-a3ef-46309d573dc1",
"marked_for_delete": false,
"overridden": false,
"_create_user": "admin",
"_create_time": 1607501697823,
"_last_modified_user": "admin",
"_last_modified_time": 1607582307987,
"_system_owned": false,
"_protection": "NOT_PROTECTED",
"_revision": 5
}
]
}

What to do next

Optionally, create a custom transport-zone profile and bind it to the transport zone. You can
create custom transport-zone profiles using the POST /api/v1/infra/transport-zone-profiles
API. There is no UI workflow for creating a transport-zone profile. After the transport-zone profile
is created, you can bind it to the transport zone with the PATCH /api/v1/infra/sites/default/
enforcement-points/nsxt-ep/transport-zones/<transport-zone-id> API.

{
"tz_type": "OVERLAY_BACKED",
"is_default": true,
"nested_nsx": false,
"transport_zone_profile_paths": [
"/infra/transport-zone-profiles/tzp"
]
}
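For example, the bind step could be issued with curl as follows. This is a hedged sketch that reuses the request body and API path referenced above; the manager address, credentials, profile path, and transport zone ID are placeholders.

curl -k -u admin -X PATCH -H 'Content-Type: application/json' \
  -d '{
        "tz_type": "OVERLAY_BACKED",
        "is_default": true,
        "nested_nsx": false,
        "transport_zone_profile_paths": ["/infra/transport-zone-profiles/tzp"]
      }' \
  "https://<nsx-manager>/api/v1/infra/sites/default/enforcement-points/nsxt-ep/transport-zones/<transport-zone-id>"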

Create an IP Pool for Tunnel Endpoint IP Addresses


NSX transport nodes are configured with tunnel endpoint (TEP) IP addresses. You can use
an IP pool for the tunnel endpoints. Tunnel endpoints are the source and destination IP addresses
used in the external IP header to identify the hypervisor hosts that originate and terminate the NSX
encapsulation of overlay frames. You can use either DHCP or manually configured IP pools
for tunnel endpoint IP addresses.

An example of the resulting routing table on an ESXi host where sub_a = [Link] and sub_b
= [Link]. (The management subnet, for example, might be [Link]).


Kernel IP routing table:

Destination Gateway Genmask Iface


[Link] [Link] [Link] eth0
[Link] [Link] [Link] virbr0
[Link] [Link] [Link] eth0
[Link] [Link] [Link] nsx-vtep0.0
[Link] [Link] [Link] nsx-vtep0.0

The route can be added in at least two different ways. Of these two methods, the route persists
after host reboot only if you add the route by editing the interface. Adding a route using the
route add command does not persist after a host reboot.

route add -net [Link] netmask [Link] gw [Link] dev nsx-vtep0.0

In /etc/network/interfaces before "up ifconfig nsx-vtep0.0 up" add this static route:

post-up route add -net [Link] netmask [Link] gw [Link]

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select Networking → IP Address Pools → Add IP Address Pool.

3 Enter the IP pool details.

Option                  Parameter Example

Name and Description    Enter the IP pool name and an optional description.
IP Ranges               IP allocation ranges, for example [Link] - [Link]
Gateway                 For example, [Link]
CIDR                    Network address in CIDR notation, for example [Link]/24
DNS Servers             Comma-separated list of DNS servers, for example [Link]
DNS Suffix              For example, [Link]

Note Overlay TEP tunnels are configured only for hosts that participate in overlay traffic,
that is, hosts that become transport nodes. TEP tunnels are not required for host clusters
that are members of VLAN transport zones only, or for a management host cluster that hosts NSX
appliances and/or NSX Edge nodes when the management appliances are connected to
VLAN-backed VDS port groups.


Results

The IPv4 or IPv6 address pool is listed on the IP pool page.

You can also use the GET /policy/api/v1/infra/ip-pools API call to view the IP pool list.
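For example, from any API client (a hedged sketch; the manager address and credentials are placeholders, and the path assumes the policy API call mentioned above):

curl -k -u admin "https://<nsx-manager>/policy/api/v1/infra/ip-pools"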

What to do next

Create an uplink profile. See Create an Uplink Profile.

Enhanced Data Path


Enhanced Data Path (EDP) is a networking stack mode that, when configured, provides superior
network performance. It is primarily targeted at NFV workloads, which gain performance
benefits by leveraging DPDK capabilities.

The VDS switch can be configured in the enhanced data path mode only on an ESXi host.
Enhanced Data Path also supports traffic flowing through Edge VMs.

In the enhanced data path mode, both traffic modes are supported:

n Overlay traffic

n VLAN traffic

Supported VMkernel NICs


With NSX supporting multiple Enhanced Data Path host switches, the maximum number of
VMkernel NICs supported per host is 32.

High-Level Process to Configure Enhanced Data Path


As a network administrator, before creating transport zones supporting VDS in the enhanced
data path mode, you must prepare the network with the supported NIC cards and drivers. To
improve network performance, you can enable the Load Balanced Source teaming policy to
become NUMA node aware.

The high-level steps are as follows:

1 Use NIC cards that support the enhanced data path.

See VMware Compatibility Guide to know NIC cards that support enhanced data path.

2 On the VMware Compatibility Guide page, from the Systems/Servers drop-down menu,
select IO Devices.

3 On the IO devices page, in the Product Release Version section, select ESXi <version>.

4 In the IO device Type section, select Network.

5 In the Features section, select Enhanced Datapath - Interrupt Mode or Enhanced Datapath -
Poll Mode.


6 Click Update and View Results.

7 In the search results, you will find the supported NIC cards compatible with the ESXi version
you selected.

8 Identify the brand for which you want to download the driver and click the Model URL to
view and download the driver.

9 Download and install the latest NIC drivers from the My VMware page.

a Select the VMware vSphere version.

b Go to Drivers & Tools > Driver CDs.

c Download the NIC drivers.

d To use the host as an Enhanced Data Path host, at least one Enhanced Data Path capable
NIC must be available on the system. If there are no Enhanced Data Path capable NICs,
the management plane will not allow hosts to be added to Enhanced Data Path transport
zones.

e List the Enhanced Data Path driver.

esxcli software vib list | grep -E "i40|ixgben"

f Verify whether the NIC is capable to process Enhanced Data Path traffic.

esxcfg-nics -e

Name    Driver  ENS Capable  ENS Driven  MAC Address  Description
vmnic0  ixgben  True         False       [Link]       Intel(R) Ethernet Controller X550
vmnic1  ixgben  True         False       [Link]       Intel(R) Ethernet Controller X550
vmnic2  ixgben  True         False       [Link]       Intel(R) Ethernet Controller X550
vmnic3  ixgben  True         False       [Link]       Intel(R) Ethernet Controller X550
vmnic4  i40en   True         False       [Link]       Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic5  i40en   True         False       [Link]       Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic6  i40en   True         False       [Link]       Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic7  i40en   True         False       [Link]       Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T

g Install the Enhanced Data Path driver.

esxcli software vib install -v <path-or-URL-to-driver-VIB> --no-sig-check

h Alternately, download the driver to the system and install it.

wget <DriverInstallerURL>
esxcli software vib install -v <path-to-downloaded-driver-VIB> --no-sig-check

i Reboot the host to load the driver. Proceed to the next step.

j To unload the driver, follow these steps:

vmkload_mod -u i40en
ps | grep vmkdevmgr
kill -HUP <vmkdevmgrProcessID>

Alternatively, combine the last two steps into a single command:

kill -HUP "$(ps | grep vmkdevmgr | awk '{print $1}')"

k To uninstall the Enhanced Data Path driver, run esxcli software vib remove --vibname=i40en-ens --force --no-live-install.

Note Enhanced Data Path transport zones configured for overlay traffic: For a Microsoft
Windows virtual machine running VMware tools version prior to version 11.0.0 and vNIC
type is VMXNET3, ensure MTU is set to 1500. For a Microsoft Windows virtual machine
running vSphere 6.7 U1 and VMware tools version 11.0.0 and later, ensure MTU is set to a
value less than 8900. For virtual machines running other supported OSes, ensure that the
virtual machine MTU is set to a value less than 8900.

10 Create a host transport node. Configure the Enhanced Datapath mode on a VDS switch,
specifying logical cores and NUMA nodes.

Load Balanced Source Teaming Policy Mode Aware of NUMA


The Load Balanced Source teaming policy mode defined for an enhanced datapath VDS
becomes aware of NUMA when the following conditions are met:

n The Latency Sensitivity on VMs is High.

n The network adapter type used is VMXNET3.

If the NUMA node location of either the VM or the physical NIC is not available, then the Load
Balanced Source teaming policy does not consider NUMA awareness to align VMs and NICs.

The teaming policy functions without NUMA awareness in the following conditions:

n The LAG uplink is configured with physical links from multiple NUMA nodes.

n The VM has affinity to multiple NUMA nodes.

n The ESXi host failed to define NUMA information for either VM or physical links.


Enhanced Data Path Support for Applications Requiring Traffic Reliability

NFV workloads might use multi-homing and redundancy features provided by Stream Control
Transmission Protocol (SCTP) to increase resiliency and reliability to the traffic running on
applications. Multi-homing is the ability to support redundant paths from a source VM to a
destination VM.

The number of redundant network paths available for a VM to send traffic to the target VM
depends on the number of physical NICs available as uplinks for the overlay or VLAN network.
The redundant paths are used when the pNIC pinned to a logical switch fails. The enhanced
data path switch provides redundant network paths between the hosts.

Figure 8-1. Multi-homing and Redundancy of Traffic over Enhanced Data Path

The high-level tasks are:

1 Prepare host as an NSX transport node.

2 Prepare VLAN or Overlay Transport Zone with two VDS switches in Enhanced Data Path
mode.

3 On VDS 1, pin the first physical NIC to the switch.

4 On VDS 2, pin the second physical NIC to the switch.

The VDS in enhanced data path mode ensures that if pNIC1 becomes unavailable, then traffic
from VM 1 is routed through the redundant path - vNIC 1 → tunnel endpoint 2 → pNIC 2 → VM 2.

Enhanced Datapath Supported Features

Note In NSX 3.2.X version and earlier, the term used is ENS (Enhanced Network Stack). In NSX
4.0.X version and later, the term used is Enhanced Datapath.

Supported Features

Name                                                         Notes
Segments (Logical Switches) – VLAN based
Segments (Logical Switches) – Overlay based
Differentiated Services Code Point (DSCP) trust/re-marking
Ingress and Egress rate limiting
Distributed Firewall – L2
Distributed Firewall – L3 / L4
IDFW                                                         Traffic flows, but performance is equal to regular non-Enhanced Datapath performance.
IDS/IPS                                                      Traffic flows, but performance is equal to regular non-Enhanced Datapath performance.
East/West Service Chaining                                   Traffic flows, but performance is equal to regular non-Enhanced Datapath performance.
Distributed Load Balancing (DLB)                             DLB on Enhanced Datapath works at full performance.
IP Discovery (DHCP/DHCPv6 snooping, ARP snooping, ND snooping, and so on)
Spoofguard (IP, MAC, and so on)
Segment Security Filter (Bridge Protocol Data Units (BPDU), DHCP, and so on)
DHCP Relay & DHCP Local and Gateway Server
L2 Bridging through Edge
L2 Bridging through L2VPN
Distributed Routing (Overlay)
EVPN in route server mode
EVPN through Edge
L2 Multicast                                                 Traffic flows, but performance is equal to regular non-Enhanced Datapath performance.
IPFIX
Port Mirroring                                               ERSPAN is supported at Enhanced Datapath performance; other port mirroring methods perform at regular non-Enhanced Datapath levels.

Unsupported Features

Name                                Notes
Network I/O Control (NIOC)          Disable NIOC on VDS switches used while configuring NSX.

Configuring Profiles
Profiles allow you to consistently configure identical capabilities for network adapters across
multiple hosts or nodes.

Profiles are containers for the properties or capabilities that you want your network adapters to
have. Instead of configuring individual properties or capabilities for each network adapter, you
can specify the capabilities in the profiles, which you can then apply across multiple hosts or
nodes.


Create an Uplink Profile


An uplink is a link from an NSX Edge node or hypervisor node to a top-of-rack switch
or NSX logical switch. That is, a link connects a physical network interface on an NSX Edge node or
hypervisor node to a switch.

An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can
include teaming policies, active and standby links, transport VLAN ID, and MTU setting.

Consider the following points when configuring Failover Teaming Policy for VM appliance-based
NSX Edge nodes and Bare Metal NSX Edge:

n For uplinks used by a teaming policy, you cannot use the same uplinks in a different uplink
profile for a given NSX Edge transport node. Standby uplinks are not supported and must not
be configured in the failover teaming policy. If the teaming policy uses more than one uplink
(active/standby list), you cannot use the same uplinks in the same or a different uplink profile
for a given NSX Edge transport node.

n Supported scenarios:

n Bare Metal NSX Edge supports a single active uplink and a standby uplink. They do not
support multiple standby uplinks.

n NSX Edge VMs do not support any standby uplinks - single or multiple standby uplinks.

Consider the following points when configuring Load Balance Source for VM appliance-based
NSX Edge nodes:

n Supports multiple active uplinks.

n You cannot use LAG to configure the teaming policy.

n In the Active Uplinks field, enter uplink labels that will be associated to physical NICs when
you prepare transport nodes. For example, uplink1, uplink2. When you prepare transport
nodes, you will associate uplink1 to pnic1 and uplink2 to pnic2.

n You must use the Load Balanced Source teaming policy for traffic load balancing.

Consider the following points when configuring Load Balance Source for Bare Metal NSX Edge:

n Supports multiple active uplinks.

n In the Active Uplinks field, you can use LAGs or enter individual uplink labels. For example,
LAG1 or uplink1, uplink2.

n A LAG must have two physical NICs on the same N-VDS.

n The number of LAGs that you can actually use depends on the capabilities of the underlying
physical environment and the topology of the virtual network. For example, if the physical
switch supports up to four ports in an LACP port channel, you can connect up to four
physical NICs per host to a LAG.

n In the LACP section, Bare Metal NSX Edge only supports Source and destination MAC
address, IP address and TCP/UDP port.


n If multiple LAG uplinks are configured on a Bare Metal NSX Edge, enter a unique LAG name
for each LAG uplink profile.

n If a multi-VTEP uplink profile is used for Bare Metal NSX Edge or Edge VMs, NSX supports only
the Load Balance Source teaming policy.

n You must use the Load Balanced Source teaming policy for traffic load balancing.

Prerequisites

n See NSX Edge network requirements in NSX Edge Installation Requirements.

n Each uplink in the uplink profile must correspond to an up and available physical link on your
hypervisor host or on the NSX Edge node.

For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1.
Suppose vmnic0 is used for management and storage networks, while vmnic1 is unused. This
might mean that vmnic1 can be used as an NSX uplink, but vmnic0 cannot. To do link teaming,
you must have two unused physical links available, such as vmnic1 and vmnic2.

For an NSX Edge, tunnel endpoint and VLAN uplinks can use the same physical link.
For example, vmnic0/eth0/em0 might be used for your management network and vmnic1/
eth1/em1 might be used for your fp-ethX links.
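To check which physical links are up and potentially unused on an ESXi host before choosing uplinks, you can list the host's physical NICs and their link state. This is a hedged example added here for convenience, not a step from this guide.

esxcli network nic list    # shows each vmnic, its driver, link status, and speed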

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Profiles > Uplink Profiles > Add Profile.


3 Complete the uplink profile details.

Option Description

Name and Description Enter an uplink profile name.


Add an optional uplink profile description.

LAGs (Optional) In the LAGs section, click Add for Link aggregation groups (LAGs)
using Link Aggregation Control Protocol (LACP) for the transport network.
The active and standby uplink names you create can be any text to
represent physical links. These uplink names are referenced later when you
create transport nodes. The transport node UI/API allows you to specify
which physical link corresponds to each named uplink.
Possible LAG hashing mechanism options:
n Source MAC address
n Destination MAC address
n Source and destination MAC address
n Source and destination IP address and VLAN
n Source and destination MAC address, IP address, and TCP/UDP port
Supported LAG hashing mechanisms on hosts types:
n NSX Edge nodes: Source and destination MAC address, IP address, and
TCP/UDP port.
n ESXi hosts with VDS in Enhanced Networking Stack (ENS) mode: Source
MAC address, Destination MAC address, and Source and destination
MAC address.
n ESXi hosts with VDS in Standard mode: Source MAC address,
Destination MAC address, Source and destination MAC address, and
Source and destination IP address and VLAN.
n ESXi hosts with vSphere Distributed Switch (v 7.0 and later that supports
NSX): LACP is not configured in NSX. You need to configure it in VMware
vCenter.
n Physical server hosts: Source MAC address.

Teamings In the Teaming section, you can either enter a default teaming policy or
choose to enter a named teaming policy that is applicable only to
VLAN networks. Click Add to add a named teaming policy. A teaming policy
defines how VDS uses its uplinks for redundancy and traffic load balancing.
You can configure a teaming policy in the following modes:
n Failover Order: Specify an active uplink along with an optional list of
standby uplinks. If the active uplink fails, the next uplink in the standby
list replaces the active uplink. No actual load balancing is performed
with this option. Standby uplinks and multiple active uplinks are not
supported for NSX Edge transport nodes. Also, for an NSX Edge
transport node, an active uplink used in one profile must not be used
in another profile.


Option Description

n Load Balance Source: Maps a virtual interface of a VM to an uplink.


Traffic sent by this virtual interface will leave the host through this uplink
only, and traffic destined to this virtual interface will necessarily enter the
virtual switch through this uplink. Select a list of active uplinks. When you
configure a transport node, you can pin each interface of the transport
node to one active uplink. This configuration allows use of several active
uplinks at the same time. No standby uplink is configured in this case.

Important To manage VLAN traffic, if you configure a default teaming


policy in Load Balance Source mode, then on failure of the first uplink,
traffic will not fail over to the second uplink interface.
n Load Balance Source MAC Address: Select an uplink based on a hash
of the source Ethernet. NSX Edge transport nodes do not support this
teaming policy.

Note
n On hypervisor hosts:
n ESXi hosts: Default teaming policies - Load Balance Source MAC,
Load Balance Source, and Failover Order teaming policies are
supported.
n Physical server hosts (Linux): Only Failover Order teaming policy
is supported. Load Balance Source and Load Balance Source MAC
teaming policies are not supported.
n Physical server hosts (Windows): Supports Load Balance Source and
Load Balance Source Mac teaming policies. Load Balance Source
teaming policy on NSX is mapped to Address Hash on Windows.
Load Balance Source Mac Address on NSX is mapped to Mac
Addresses on Windows.
n On NSX Edge: For default teaming policy, Load Balance Source and
Failover Order teaming policies are supported. For named teaming
policy, only Failover Order policy is supported.

(ESXi hosts and NSX Edge) You can define the following policies for a
transport zone:
n A Named teaming policy for every VLAN-based logical switch or
segment.
n A Default teaming policy for the entire VDS.
Named teaming policy: A named teaming policy means that for every VLAN-
based logical switch or segment, you can define a specific teaming policy
mode and uplinks names. This policy type gives you the flexibility to select
specific uplinks depending on the traffic steering policy, for example, based
on bandwidth requirement.
n If you define a named teaming policy, VDS uses that named teaming
policy if it is attached to the VLAN-based transport zone and finally
selected for specific VLAN-based logical switch or segment in the host.
n If you do not define any named teaming policies, VDS uses the default
teaming policy.

4 Enter a Transport VLAN value. The transport VLAN set in the uplink profile tags overlay
traffic only and the VLAN ID is used by the Tunnel Endpoint Pools (TEP IP Pools).


5 Enter the MTU value.

For hosts that use vSphere VDS, configure MTU on the VDS from VMware vCenter. The
uplink profile MTU default value is 1700 bytes and is applicable to transport-nodes that use
N-VDS.

Note The MTU field is optional. If you do not configure it, NSX takes the value set in the
Tunnel Endpoint MTU field. If both MTU fields are set, the uplink profile MTU value overrides the
tunnel endpoint MTU value.

An example of Active/Active uplink profile for ESXi host with named-teaming (optional)


An example of uplink profile using LAG for ESXi host

An example of uplink profile for NSX Edge with named-teaming policy

6 Configure global tunnel endpoint MTU.

Results

In addition to the UI, you can also view the uplink profiles with the API call GET /policy/api/v1/
infra/host-switch-profiles.
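For example, from any API client (a hedged sketch that uses the API path mentioned above; the manager address and credentials are placeholders):

curl -k -u admin "https://<nsx-manager>/policy/api/v1/infra/host-switch-profiles"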

Add an NSX Edge Cluster Profile


The NSX Edge cluster profile defines policies for NSX Edge transport nodes that are part of an
NSX Edge cluster.

Prerequisites

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.


2 Select System > Fabric > Profiles > Edge Cluster Profiles > Add Profile.

3 Enter the NSX Edge cluster profile details.

Option Description

Name and Description Enter an NSX Edge cluster profile name.


Optionally, describe the profile you want to create. For example,
Bidirectional Forwarding Detection (BFD) setting for the
profile.

BFD Probe Interval Accept the default setting.

BFD is a detection protocol used to identify forwarding path failures. To
detect a forwarding path failure, you can set the interval timing for BFD.

BFD Allowed Hops Accept the default setting. You can set the maximum number of hops multi-
hop BFD sessions can transit.

BFD Declare Dead Multiple Accept the default setting. You can set the number of times the BFD packet
is not received before the session is flagged as down.

Stand By Relocation Threshold Accept the default setting.

Add and attach NSX Edge Bridge Profile to a Segment


The NSX Edge bridge profile specifies the primary NSX Edge node that will be the preferred
node for the active bridge and backup node that will be preferred for the backup bridge.

NSX Edge Bridge connects an NSX overlay logical segment with a traditional VLAN at Layer 2. The
Edge bridge leverages DPDK for high-performance forwarding. Traffic bridged into or out of
the NSX domain is subject to an Edge bridge firewall instance. The NSX Edge bridge functionality
is mainly for migration scenarios (physical to virtual or virtual to virtual) or for integrating
physical, non-virtualized appliances into the virtualized environment.

Note Starting with NSX 2.5, the same segment can be attached to several bridges on different
Edge Clusters and VLANs.

At the time of the creation of the Bridge Profile, no Bridge is instantiated yet. The Bridge Profile
is just a template for the creation of one or several Bridge pairs. Once a Bridge Profile is created,
you can attach a segment to it. By doing so, an active Bridge instance is created on the primary
Edge, while a standby Bridge is provisioned on the backup Edge. NSX creates a Bridge Endpoint
object, which represents this pair of Bridges. The attachment of the segment to the Bridge
Endpoint is represented by a dedicated logical port.

Prerequisites

n Verify that the NSX Edge cluster is available with minimum of two edge nodes (BareMetal or
VM form factor).

n For the VM form factor NSX Edge, verify NSX Edge uplink VDS Trunk port group in VMware
vCenter has the following configuration:

n Forged transmit


n MAC learning (recommended) or promiscuous mode/sink port configured

n Active/Standby teaming policy.

Note Consider dedicating an NSX Edge uplink (vNIC) to bridged traffic so that other kinds
of traffic to and from the NSX Edge do not suffer from the performance impact related to
promiscuous mode.

n Verify that the NSX Edge transport nodes that are members of the NSX Edge cluster are
attached to both the overlay and the VLAN transport zones.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select Networking → Segments → Profiles → Edge Bridge Profiles → Add Edge Bridge
Profile.

3 Enter the NSX Edge bridge profile details.

Option Description

Name and Description Enter an NSX Edge bridge profile name.
You can optionally enter profile details such as the primary and backup
node details.

Edge Cluster Select the NSX Edge cluster that you want to use.

Primary Node Designate the preferred NSX Edge node from the cluster.

Backup Node Designate the backup NSX Edge node to use if the primary node fails.

Failover Mode Select either Preemptive or Non-Preemptive mode.

The default HA mode is preemptive, which can slow down traffic when the
preferred NSX Edge node comes back online. The non-preemptive mode
does not cause any traffic slowdown.
In the preemptive mode, the bridge on the primary Edge always becomes
the active bridge forwarding traffic between overlay and VLAN as soon as
it is available. In the non-preemptive mode, the bridge on the primary Edge
remains standby if it becomes available when the bridge on the backup
Edge is already active.

4 After you create a Bridge Profile, associate it to a segment.

5 Select Networking > Segments > NSX > Add Segment.

6 Enter the required details, connect to overlay transport zone and click Save.

7 Edit the segment to which you want to add the Bridge Profile.

8 In the Additional Settings section, in the Edge Bridges field, select Set.

9 Click Add Edge Bridge.

10 Select the Edge Bridge Profile.


11 Select the transport zone. The bridged traffic is sent to the N-VDS selected by the
transport zone.

12 Select the VLAN ID for the VLAN traffic, as well as the physical port on the NSX
Edge used for sending or receiving this VLAN traffic.

13 (Optional) Select the teaming policy to decide how N-VDS balances traffic across its uplinks.

14 Click Add.

15 Click Save.

Results

The newly created NSX Edge bridge profile is associated with a segment to bridge overlay and VLAN traffic.

What to do next

n Verify the configuration and state of L2 bridges on the NSX Edge.

a SSH to the NSX Edge as an admin.

b Run get bridge.

c Verify that the Device State is Up.

Add a Transport Node Profile


A transport node profile is a template that defines the configuration applied to a cluster. It is not
used to prepare standalone hosts. Prepare VMware vCenter cluster hosts as transport nodes
by applying a transport node profile. Transport node profiles define transport zones, member
hosts, and switch configuration, including the uplink profile, IP assignment, mapping of physical
NICs to uplink virtual interfaces, and so on.

Note Transport node profiles are applicable only to hosts. They cannot be applied to NSX Edge
transport nodes.

Transport node creation begins when a transport node profile is applied to a VMware vCenter
cluster. NSX Manager prepares the hosts in the cluster and installs the NSX components on all
the hosts. Transport nodes for the hosts are created based on the configuration specified in the
transport node profile.

On a cluster prepared with a transport node profile, these outcomes are true:

n When you move an unprepared host into a cluster applied with a transport node profile, NSX
automatically prepares the host as a transport node using the transport node profile.

n When you move a transport node from the cluster to an unprepared cluster or directly as a
standalone host under the data center, first the transport node configuration applied to the
node is removed and then NSX VIBs are removed from the host. See Triggering Uninstallation
from the vSphere Web Client.


To delete a transport node profile, you must first detach the profile from the associated cluster.
The existing transport nodes are not affected. New hosts added to the cluster are no longer
automatically converted into transport nodes.

Points to note when you create a Transport Node Profile:

n You can add a maximum of four VDS switches for each configuration: enhanced VDS created
for VLAN transport zone, standard VDS created for overlay transport zone, enhanced VDS
created for overlay transport zone.

n There is no limit on the number of standard VDS switches created for VLAN transport zone.

n In a single host cluster topology running multiple standard overlay VDS switches and edge
VM on the same host, NSX provides traffic isolation such that traffic going through the first
VDS is isolated from traffic going through the second VDS and so on. The physical NICs
on each VDS must be mapped to the edge VM on the host to allow the north-south traffic
connectivity with the external world. Packets moving out of a VM on the first transport zone
must be routed through an external router or an external VM to the VM on the second
transport zone.

n Each VDS switch name must be unique. NSX does not allow the use of duplicate switch names.

n Each transport zone ID associated with each VDS host switch in a transport node configuration or
transport node profile configuration must be unique.

Prerequisites

n Verify that the hosts are part of a VMware vCenter cluster.

n VMware vCenter must have at least one cluster.

n Verify that a transport zone is configured. See Create Transport Zones.

n Verify that a cluster is available. See Deploy NSX Manager Nodes to Form a Cluster from the
UI.

n Verify that an IP pool is configured, or DHCP must be available in the network deployment.
See Create an IP Pool for Tunnel Endpoint IP Addresses.

n Verify that a compute manager is configured. See Add a Compute Manager.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Hosts > Transport Node Profile > Add Transport Node Profile.

3 Enter a name to identify the transport node profile.

You can optionally add the description about the transport node profile.

4 In the + Add Switch section, add details of the new switch.


5 Before you proceed, decide which type of host switch you want to configure on nodes of a
cluster.

6 In the New Node Switch section, configure the following fields.

Option Description

Mode (NSX [Link] only) Choose between the following mode options:
n Standard: Is the standard mode that is available to all supported
hypervisors by NSX.
n ENS Interrupt: Is a variant of the Enhanced Datapath mode.
n Enhanced Datapath: Is the mode that provides accelerated networking
performance. This mode requires nodes to use VMXNET3 vNIC enabled
network cards. It is not supported on NSX Edge nodes and Public
Gateways. The supported hypervisor is ESXi. It is recommended to run
ESXi v6.7 U2 and later versions.

Mode (Starting with NSX [Link]) Choose between the following mode options:
n Standard: This mode applies to all transport nodes. The data-plane in the
transport node automatically selects the host switch mode as per the
uplink capabilities.
n Enhanced Datapath - Standard: This mode is a variant of the Enhanced
Data Path mode. It is available only on ESXi hypervisor 7.0 and later
versions. Please consult your account representative for applicability.
n Enhanced Datapath - Performance: This is the Enhanced Data Path
switch mode for ESXi host transport node. This mode provides
accelerated networking performance. It requires nodes to use VMXNET3
vNIC enabled network cards. It is not supported on NSX Edge nodes and
Public Gateways. The supported hypervisor is ESXi. It is recommended
to run ESXi v6.7 U2 and later versions.
n Legacy: This mode was formerly called Standard. It applies to all
transport nodes. When the host switch mode is set to Legacy, the
packet handler stack is enabled. On NSX Manager UI, you will see this
mode set to Standard and the Legacy field to 'Yes'. You can select
this mode only through API, since the Legacy field is read-only in NSX
Manager UI.

You can run the following Host Transport Node or Transport Node
Profile policy API to set the host switch mode to Legacy:
n Create or update Host Transport Node:

PUT https://<nsx-manager-ip-address>/policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>

n Create or update policy Host Transport Node Profile:

PUT https://<nsx-manager-ip-address>/policy/api/v1/infra/host-transport-node-profiles/<transport-node-profile-id>

Name (Hosts managed by a vSphere cluster) Select the VMware vCenter that
manages the host switch.
Select the VDS that is created in VMware vCenter.


Option Description

Transport Zones Shows the transport zones that are realized by the associated host switches.
You cannot add a transport zone if it is not realized by any host switch.
Supported transport zone configurations:
n You can add multiple VLAN transport zones per host switch.
n You must add only one overlay transport zone per host switch. NSX
Manager UI does not allow adding multiple overlay transport zones.

Uplink Profile Select an existing uplink profile from the drop-down menu or create a
custom uplink profile. You can also use the default uplink profile.
If you leave the MTU value empty, NSX uses the global default MTU value
of 1700. If you enter an MTU value in the NSX uplink profile, that MTU value
overrides the global default MTU value.

Note Link Aggregation Groups defined in an uplink profile cannot be


mapped to VDS uplinks.

IP Assignment (TEP) Select between Use DHCP and Use IP Pool to assign an IP address to tunnel
endpoints (TEPs) of the transport node.
If you selected Use IP Pool for an IP assignment, specify the IP pool name
and the range of IP addresses that can be used for tunnel endpoints.

CPU Config You can configure the CPU Config field only when the Mode is set to
Enhanced Datapath.
a Click Set.
b In the CPU Config window, click Add.
c Enter values for the NUMA Node Index and LCores per NUMA Node
fields.
d To save the values, click Add and Save.

Teaming Policy Switch Mapping Before you map uplinks profiles in NSX with uplinks in VDS, ensure uplinks
are configured on the VDS switch. To configure or view the VDS switch
uplinks, go to VMware vCenter → vSphere Distributed Switch. Click Actions
→ Settings → Edit Settings.
Map uplinks defined in the selected NSX uplink profile with VDS uplinks.
The number of NSX uplinks that are presented for mapping depends on the
uplink profile configuration.
For example, in the uplink-1 (active) row, go to the Physical NICs column, click
the edit icon, and type in the name of the VDS uplink to complete mapping it
with uplink-1 (active). Likewise, complete the mapping for the other uplinks.

Note Uplinks/LAGs, NIOC profile, LLDP profile are defined in VMware vCenter. These
configurations are not available in NSX Manager. To manage VMkernel adapters on a VDS
switch, go to VMware vCenter to attach VMkernel adapters to Distributed Virtual port groups
or NSX port groups.

7 If you have selected multiple transport zones, click + Add Switch again to configure the
switch for the other transport zones.

8 Click Add to complete the configuration.


What to do next

Apply the transport node profile to an existing vSphere cluster. See Prepare ESXi Cluster Hosts
as Transport Nodes.



Host Transport Nodes
9
You can prepare ESXi hosts and physical servers as NSX transport nodes. Before you prepare
physical servers for NSX networking, ensure the required third-party packages are installed on
hosts.

Read the following topics next:

n Manual Installation of NSX Kernel Modules

n Preparing Physical Servers as NSX Transport Nodes

n Preparing ESXi Hosts as Transport Nodes

n Managing Transport Nodes

Manual Installation of NSX Kernel Modules


As an alternative to using the NSX UI, the POST /api/v1/transport-nodes API
(deprecated), or the PUT /api/v1/infra/sites/<site-id>/enforcement-points/<enforcement-
point-id>/host-transport-nodes/ API, you can install NSX kernel modules manually from the
hypervisor command line.

Note You cannot manually install NSX kernel modules on a bare metal server.

Manually Install NSX Kernel Modules on ESXi Hypervisors


To prepare hosts to participate in NSX, you must install NSX kernel modules on ESXi hosts. This
allows you to build the NSX control-plane and management-plane fabric. NSX kernel modules
packaged in VIB files run within the hypervisor kernel and provide services such as distributed
routing, distributed firewall, and bridging capabilities.

You can download the NSX VIBs manually and make them part of the host image. The download
paths can change for each release of NSX. Always check the NSX downloads page to get the
appropriate VIBs.

Procedure

1 Log in to the host as root or as a user with administrative privileges


2 Navigate to the /tmp directory.

[root@host:~]: cd /tmp

3 Download and copy the nsx-lcp file into the /tmp directory.

4 Run the install command.

[root@host:/tmp]: esxcli software vib install -d /tmp/nsx-lcp-<release>.zip


Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the
changes to be effective.
Reboot Required: true
VIBs Installed: VMware_bootbank_nsx-adf_<release>, VMware_bootbank_nsx-
aggservice_<release>, VMware_bootbank_nsx-cli-libs_<release>, VMware_bootbank_nsx-
common-libs_<release>, VMware_bootbank_nsx-context-mux_<release>, VMware_bootbank_nsx-
esx-datapath_<release>, VMware_bootbank_nsx-exporter_<release>, VMware_bootbank_nsx-
host_<release>, VMware_bootbank_nsx-metrics-libs_<release>, VMware_bootbank_nsx-
mpa_<release>, VMware_bootbank_nsx-nestdb-libs_<release>, VMware_bootbank_nsx-
nestdb_<release>, VMware_bootbank_nsx-netcpa_<release>, VMware_bootbank_nsx-
netopa_<release>, VMware_bootbank_nsx-opsagent_<release>, VMware_bootbank_nsx-platform-
client_<release>, VMware_bootbank_nsx-profiling-libs_<release>, VMware_bootbank_nsx-
proxy_<release>, VMware_bootbank_nsx-python-gevent_<release>, VMware_bootbank_nsx-python-
greenlet_<release>, VMware_bootbank_nsx-python-logging_<release>, VMware_bootbank_nsx-
python-protobuf_<release>, VMware_bootbank_nsx-rpc-libs_<release>, VMware_bootbank_nsx-
sfhc_<release>, VMware_bootbank_nsx-shared-libs_<release>, VMware_bootbank_nsx-upm-
libs_<release>, VMware_bootbank_nsx-vdpi_<release>, VMware_bootbank_nsxcli_<release>,
VMware_bootbank_vsipfwlib_<release>
VIBs Removed:
VIBs Skipped:

Depending on what was already installed on the host, some VIBs might be installed, some
might be removed, and some might be skipped. A reboot is not required unless the command
output says Reboot Required: true.

Results

As a result of adding an ESXi host to the NSX fabric, the following VIBs get installed on the host.

nsx-adf

(Automated Diagnostics Framework) Collects and analyzes performance data to produce


both local (at host) and central (across datacenter) diagnoses of performance issues.

nsx-aggservice

Provides host-side libraries for NSX aggregation service. NSX aggregation service is a service
that runs in the management-plane nodes and fetches runtime state from NSX components.

nsx-cfgagent

Provides communication between the central control plane and hypervisors. Receives logical
networking state from the central control plane and programs this state in the data plane.


nsx-cli-libs

Provides the NSX CLI on hypervisor hosts.

nsx-common-libs

Provides some utility classes such as AES, SHA-1, UUID, bitmap, and others.

nsx-context-mux

Provides NSX Guest Introspection relay functionality. Allows VMware Tools guest agents to
relay guest context to inhouse and registered third-party partner appliances.

nsx-esx-datapath

Provides NSX data plane packet processing functionality.

nsx-exporter

Provides host agents that report runtime state to the aggregation service running in the
management plane.

nsx-host

Provides metadata for the VIB bundle that is installed on the host.

nsx-metrics-libs

Provides metric utility classes for collecting daemon metrics.

nsx-mpa

Provides communication between NSX Manager and hypervisor hosts.

nsx-nestdb

NestDB is a database that stores NSX configurations related to the host (desired/runtime
state, etc).

nsx-opsagent

Communicates operations agent executions (transport node realization, Link Layer Discovery
Protocol - LLDP, traceflow, packet capture, and so on) with the management plane.

nsx-netcpa

Provides communication required by the different components.

nsx-platform-client

Provides a common CLI execution agent, for centralized CLI and audit log collecting.

nsx-profiling-libs

Provides profiling functionality based on gpeftool, which is used for daemon process
profiling.


nsx-proxy

Provides the only northbound contact point agent, which talks to the central control plane
and management plane.

nsx-python-gevent

Contains Python Gevent.

nsx-python-greenlet

Contains Python Greenlet library (third party libraries).

nsx-python-logging

Contains the Python logs.

nsx-python-protobuf

Provides Python bindings for protocol buffers.

nsx-rpc-libs

This library provides nsx-rpc functionality.

nsx-sfhc

Service fabric host component (SFHC). Provides a host agent for managing the lifecycle
of the hypervisor as a fabric host in the management plane's inventory. This provides a
channel for operations such as NSX upgrade and uninstall and monitoring of NSX modules on
hypervisors.

nsx-shared-libs

Contains the shared NSX libraries.

nsx-upm-libs

Provides unified profile management functionality for flattening client-side configuration and
avoiding duplicate data transmission.

nsx-vdpi

Provides Deep Packet Inspection capabilities for NSX Distributed Firewall.

vsipfwlib

Provides distributed firewall functionality.

nsxcli

Provides the NSX CLI on hypervisor hosts.

To verify, you can run the esxcli software vib list | grep -E 'nsx|vsip' or esxcli software vib list |
grep <yyyy-mm-dd> command on the ESXi host, where the date is the day that you performed
the installation.


What to do next

Add the host to the NSX management plane. See Form an NSX Manager Cluster Using the CLI.

Preparing Physical Servers as NSX Transport Nodes


To use NSX on a physical server (also known as Bare Metal server), you must install supported
third-party packages.

Physical Server Concepts:

n Application - represents the actual application running on the physical server, such as
a web server or a database server.

n Application Interface - represents the network interface card (NIC) which the application
uses for sending and receiving traffic. One application interface per physical server is
supported.

n Management Interface - represents the NIC which manages the physical server.

n VIF - the peer of the application interface which is attached to the logical switch. This is
similar to a VM vNIC.

NSX supports the physical server in two ways: as a host transport node and as a host for
NSX Manager.

Make sure that you have the supported physical server versions. See Bare Metal Server
System Requirements.

Note If your NSX Edges are in VM form factor and you intend to use the NSX DHCP service
(deployed on a VLAN-based logical switch), you must set the forged transmits option to Accept
on the physical server hosts on which the NSX Edges are deployed. See the section on Forged
Transmits in the vSphere product documentation.

Install Third-Party Packages on a Linux Physical Server


To prepare a bare metal server to be a fabric node, you must install some third-party packages.

Prerequisites

n Disable SELinux on all Linux physical hosts.

n Disable secure boot on all Linux physical hosts.

n Verify that the user performing the installation has administrative permission to do the
following actions, some of which may require sudo permissions:

n Download and untar the bundle.

n Run dpkg or rpm commands for installing/uninstalling NSX components.

n Run the nsxcli command to execute join management plane commands.


n Verify that the virtualization packages are installed.

n Red Hat, CentOS, or Oracle Linux - yum install libvirt-libs

n Ubuntu - apt-get install libvirt0

n Oracle Linux - rpm -qa | grep xxx

n SUSE - zypper install libvirt-libs

Procedure

u On Ubuntu, run apt-get install <package_name> to install the third-party packages.

Ubuntu 20.04:
traceroute, python3-mako, python3-netaddr, python3-simplejson, python3-unittest2,
python3-yaml, python3-openssl, dkms, libvirt0, libelf-dev, python3-netifaces, ifupdown

Ubuntu 18.04.2:
traceroute, python-mako, python-netaddr, python-simplejson, python-unittest2, python-yaml,
python-openssl, dkms, libvirt0, libelf-dev, python-netifaces, ifupdown

Ubuntu 16.04:
libunwind8, libgflags2v5, libgoogle-perftools4, traceroute, python-mako, python-simplejson,
python-unittest2, python-yaml, python-netaddr, python-openssl, libboost-filesystem1.58.0,
libboost-chrono1.58.0, libgoogle-glog0v5, dkms, libboost-date-time1.58.0, python-protobuf,
python-gevent, libsnappy1v5, libleveldb1v5, libboost-program-options1.58.0,
libboost-thread1.58.0, libboost-iostreams1.58.0, libvirt0, libelf-dev, python-netifaces
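
For example, on Ubuntu 20.04 you can install all of the listed packages with a single command;
this is only a sketch built from the list above, so adjust the package names to your Ubuntu
version:

apt-get update
apt-get install traceroute python3-mako python3-netaddr python3-simplejson python3-unittest2 python3-yaml python3-openssl dkms libvirt0 libelf-dev python3-netifaces ifupdown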


u On RHEL 8.6 (starting with NSX 4.0.1), 8.4, and 8.2, and CentOS 8.4 and 8.2, run yum install
to install the third-party packages.

RHEL 8.6, 8.4, and 8.2 and CentOS 8.4 and 8.2 (the same packages apply to both):
tcpdump, boost-filesystem, python3-pyyaml, boost-iostreams, boost-chrono, python3-mako,
python3-netaddr, python3-six, snappy, boost-date-time, c-ares, redhat-lsb-core, wget,
net-tools, yum-utils, lsof, libvirt-libs, python3-gevent, libev, python3-greenlet, python3,
libbpf, python3-netifaces, python3-pyOpenSSL, network-scripts

u On Oracle Linux 8.6 (starting with NSX 4.0.1), run yum install to install the third-party
packages.

Oracle 8.6

tcpdump
boost-filesystem
python3-pyyaml
boost-iostreams
boost-chrono
python3-mako
python3-netaddr
python3-six
snappy
boost-date-time
c-ares
redhat-lsb-core
wget
net-tools
yum-utils
lsof
libvirt-libs
python3-gevent
libev
python3-greenlet
python3
libbpf
python3-netifaces
python3-pyOpenSSL
network-scripts


u On RHEL, CentOS, and Oracle Linux, run yum install to install the third-party packages.

RHEL 7.9, 7.7, and 7.6 and CentOS 7.9, 7.7, and 7.6 (the same packages apply to both):
tcpdump, boost-filesystem, PyYAML, boost-iostreams, boost-chrono, python-mako,
python-netaddr, python-six, gperftools-libs, libunwind, elfutils-libelf-devel, snappy,
boost-date-time, c-ares, redhat-lsb-core, wget, net-tools, yum-utils, lsof, python-gevent,
libev, python-greenlet, libvirt-libs, python-netifaces, python3

Oracle Linux 7.9, 7.8, 7.7, and 7.6:
tcpdump, boost-filesystem, PyYAML, boost-iostreams, boost-chrono, python-mako,
python-netaddr, python-six, gperftools-libs, libunwind, snappy, boost-date-time, c-ares,
redhat-lsb-core, wget, net-tools, yum-utils, lsof, libvirt-libs, python-netifaces,
python-greenlet, libev, python-gevent, python3

u On SUSE 12 SP3, 12 SP4, and 12 SP5 (starting in NSX 3.2.1), run zypper install
<package_name> to install the third-party packages manually.

net-tools
tcpdump
python-simplejson
python-netaddr
python-PyYAML
python-six
libunwind
wget
lsof
libcap-progs
libvirt-libs
python-netifaces

Configure a Physical Server as a Transport Node from GUI


As an admin, you can configure a physical server for NSX networking through the NSX Manager
GUI.

Alternatively, you can run the Ansible script to achieve the same goal. See Secure Workloads on
Windows Server 2016/2019 Bare Metal Servers for configuring Windows physical servers using
Ansible. However, it is recommended to use the NSX Manager UI to prepare physical servers for
NSX networking.


Physical servers support overlay and VLAN transport zones. You can use the management
interface to manage the physical server. The application interface allows you to access the
applications on the physical server. These NIC configurations are supported on a physical server:

n A single physical NIC provides an IP address for both the management and the application IP
interfaces.

n Dual physical NICs provide one physical NIC and a unique IP address for the management
interface, and another physical NIC and a unique IP address for the application interface.

n Windows servers: Multiple physical NICs in a bonded configuration provide a unique IP
address for both the management interface and the application interface. Such physical NIC
bonds are supported through bonds created in the OS. The bond must be configured in
Switch Independent mode. Traffic running on the management network is not supported on
a bonded teaming interface.

n Linux servers: A bond interface supports only underlay mode (VLAN 0). CentOS 7.9 and
RHEL 7.9 are supported. Physical NIC bonds are supported in Active/Active and
Active/Standby mode through the OVS switch.

Unlike the preparation of a standalone or managed ESXi host, which ends when the host
becomes a transport node, the preparation of a physical server extends to attaching the
application interface of the physical server to an NSX segment.

After preparing the host as a transport node, you must complete the following tasks to finish
configuring a physical server.

1 Create a segment port on an NSX segment.

2 Attach application interface of the physical server to the segment port.

Prerequisites

n A transport zone must be configured.


n An uplink profile must be configured, or you can use the default uplink profile.

n An IP pool must be configured, or DHCP must be available in the network deployment.

n At least one physical NIC must be available on the host node.

n Hostname

n Management IP address

n User name

n Password

n A segment (VLAN or overlay), depending upon your requirement, must be available to attach
to the application interface of the physical server.

n Verify that the required third-party packages are installed. Third party packages must be
installed on the physical server so that its physical NICs are available during transport node
configuration. See Install Third-Party Packages on a Linux Physical Server.

n On Linux physical servers, you can update the sudoers file to add custom users with minimal
privileges. Custom users allow you to install NSX without root permissions.

To add the required entries, edit the /etc/sudoers file by running the following command:

$ sudo visudo

RHEL/CentOS/OEL/SLES:

tester ALL=(ALL) /usr/bin/rpm, /usr/bin/nsxcli, /usr/bin/systemctl restart openvswitch

Ubuntu:

tester ALL=(ALL) /bin/ls, /usr/bin/sudo, /usr/bin/dpkg, /bin/nsxcli
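
To confirm that the sudoers entry grants the expected commands to the custom user (tester in
the examples above), you can list that user's sudo privileges:

$ sudo -l -U tester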

Procedure

1 Retrieve the hypervisor thumbprint so that you can provide it when adding the host to the
fabric.

a Gather the hypervisor thumbprint information.

Use a Linux shell.

# echo -n | openssl s_client -connect <esxi-ip-address>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256

Use the ESXi CLI in the host.

[root@host:~] openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout

SHA256 Fingerprint=<SHA-256 fingerprint of the host certificate>


2 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

3 Go to System → Fabric → Nodes → Hosts.

4 Select Standalone and + Add Host Node.

5 On the Host Details window, enter the following details.

Option Description

Name and Description Enter the name to identify the physical server.
You can optionally add a description of the operating system used for the
host or physical server.

IP Addresses Enter the host or physical server IP address.

Operating System Select an operating system that mentions physical server. For example,
if the operating system on the physical server is CentOS, select CentOS
Physical Server. NSX identifies bare metal servers as physical servers.
Depending on your physical server, you can select any of the supported
operating systems. See System Requirements.

Important Among the different flavors of Linux supported, you must know
the distinction between a physical server running a Linux distribution
and a Linux distribution used as a hypervisor host. For example, selecting
Ubuntu Server as the operating system means setting up a physical server
running a Linux server distribution.

Username and Password Enter the host user name and password.

SHA-256 Thumbprint This is an optional step.


Enter the host thumbprint value for authentication.
If you leave the thumbprint value empty, you are prompted to accept the
server provided value. It takes a few seconds for NSX to discover and
authenticate the host.

6 Click Next.

7 On the Prepare Host window, enter the following details. You can only configure a single
N-VDS switch for a single physical server.

Option Description

Name Enter a name for the N-VDS host switch.

Transport Zone From the drop-down menu, select a transport zone for this transport node.

Uplink Profile Select an existing uplink profile from the drop-down menu or create a
custom uplink profile. You can also use the default uplink profile.


Option Description

LLDP Profile By default, NSX only receives LLDP packets from an LLDP neighbor.
However, NSX can be set to send LLDP packets to and receive LLDP
packets from an LLDP neighbor.

Uplinks-Physical NICs Mapping To map an uplink in NSX with a physical NIC or a bonded interface, enter the
name of the physical NIC or bonded interface as configured on the physical
server. For example, if teaming1 is the name of the interface you configured
on the Windows server, then enter teaming1 in the Physical NICs field.

Important
n You cannot map one uplink to a physical NIC and another uplink to a
bonded interface.
n If you are using a bonded interface, both NICs must be configured to
function at the same packet transfer speed.

On Windows servers, you can configure teaming interfaces (bonded


interfaces). The supported load balancing algorithms for teaming interfaces
on Windows servers are:
n TransportNodes load balancing algorithm
n MacAddresses load balancing algorithm
n IPAddresses load balancing algorithm

In the teaming interface configuration, set Teaming Mode to Switch


Independent mode. For more details, see Windows documentation.
On Linux servers, you can configure a bonded interface by updating the
network-scripts files. For more information, see Linux documentation.

8 Click Next.

9 As the host is configured, the preparation progress of the physical server is displayed.

10 On the Configure NSX window, verify status of host preparation. Based on whether you want
to proceed with further configuration, these choices are available:

Description

Click Select Segment If the physical server preparation was successful, click Select Segment. In
the next part of the procedure, you select a segment to attach the physical
server's application interface through the NSX agent. Proceed to the next
step.

Click Continue Later If you click the Continue Later button, preparation ends without the
application interface configured. You can attach the segment port to
the application interface later. Go to Networking → Segments and
configure the application interface for the physical server.

Preparation Failed If preparation failed, go to the Host Transport Node page (System → Fabric
→ Nodes → Host Transport Node). Identify the physical server, check if the
Configuration State is in Failed state. Click Resolve to retry host preparation.


11 If you proceed to select a segment for the physical server, perform the following steps:

a From the list of segments connected to the transport zone you configured for the
physical server, select the one to configure for the server.

b Click the vertical ellipses and click Edit to customize segment properties.

Note Only properties related to a segment can be edited. Admin can modify: Segment
Name, Connected Gateway, Subnet, Uplink Teaming Policy, IP Address Pool.

12 To add a new segment port on an NSX segment, go to the Select Segment window and click
Add Segment Port. The segment port page is auto-populated.

Option Description

Name Enter the Segment Port name.

ID The virtual interface UUID is auto-populated.

Type Static is auto-populated as the node is of the type, physical server.

Context ID Transport node UUID is auto-populated.

Note Alternatively, you can run the API command PATCH https://<nsx-manager-ip-address>/policy/api/v1/infra/segments/<segment-id>/ports/<port-id>.

Where <port-id> is the virtual interface UUID, which is displayed on NSX Manager.
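
For illustration, a minimal body for this PATCH call, built from the fields shown in the table
above (the attachment ID is the virtual interface UUID and the context ID is the transport node
UUID), might look like the following; verify the exact schema against the NSX API reference for
your version:

{
    "display_name": "bms-app-port",
    "attachment": {
        "type": "STATIC",
        "id": "<virtual-interface-uuid>",
        "context_id": "<transport-node-uuid>"
    }
}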

13 To attach application interface of physical server to a segment port, go to the Set Segment
Port window, expand the Attach Application Interface section and enter these details:

Note The Attach Application Interface section is only applicable for physical servers.

Option Description

Name You can change the system-generated application interface name.


On a Linux physical server, run ovs-vsctl show to verify the application
interface name.

Context ID To enable the application interface configuration, enter the host node ID.

Assign Existing IP Use an existing IP so that it can be used for migration of the application
interface.

Assign New IP Used when configuring an overlay network. Select an IP assignment method
on the segment - IP pool, DHCP, or Static. When you assign a new
IP address for the application interface, complete the configuration by
providing the IP address, Routing Table and Default Gateway details.

14 Click Save.

15 View the summary of the network configuration represented by topology diagram.

16 On the Host Transport Node page, select the physical server, and click Switch Visualization
for the server. It must represent the network you configured on the physical server.


17 Verify that the NSX modules are installed on your host.

As a result of adding a host to the NSX fabric, a collection of NSX modules are installed on
the host.

The modules on different hosts are packaged as follows:

n RHEL, CentOS, Oracle Linux, or SUSE - RPMs.

n Ubuntu - DEBs
n On RHEL, CentOS, or Oracle Linux, enter the command yum list installed or rpm
-qa.

n On Ubuntu, enter the command dpkg --get-selections.

n On SUSE, enter the command rpm -qa | grep nsx.

n On Windows, open Task Manager. Or, from the command line, enter tasklist /V |
findstr "nsx ovs".

Results

The physical server is configured for NSX networking.

Attach Segment Port to Application Interface of Physical Server


When configuring a physical server as a transport node, if you did not attach a segment port to
the application interface of the server, complete this task to ensure NSX is configured on the
physical server.

Make changes to the segment or segment port properties in the following scenarios:

n If the application interface of the physical server was not attached to an NSX segment, revisit
the physical server transport node to complete the configuration.

n If you want to change segment port parameters, such as assigning a different IP address to
the application interface of the physical server.

Prerequisites

n Ensure the physical server is prepared as an NSX transport node. See Configure a Physical
Server as a Transport Node from GUI.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 If you did not configure the application interface of the physical server, the node status on the
Host Transport Node page is Success, but the node shows a warning icon to indicate that a
segment port is not attached to the application interface of the physical server.


3 To configure the segment with the application interface, select the physical server, click
Actions → Manage Segment.

The Manage Segment window displays the segment that is already attached to the
application interface of the physical server.

4 For the selected segment, click Edit Segment Port, configure the Application Interface
section and click Save. The segment port status displays as Success, if everything is
functional.

5 To add a new port on the segment, on the Manage Segment window, click Add Segment
Port and add the required details.

Alternatively, if a port already exists, click Edit Segment to proceed with configuration.

Option Description

Context ID To enable the application interface configuration, enter the host node ID.

Assign Existing IP Use an existing IP so that it can be used for migration of the application
interface.

Assign New IP Used when configuring an overlay network. Select an IP assignment method
on the segment - IP pool, DHCP, or Static. When you assign a new
IP address for the application interface, complete the configuration by
providing the IP address, Routing Table and Default Gateway details.

IP Address Enter IP address for the application interface of the physical server.

Routing table Enter routing table details.

Default Gateway Enter the IP address of the gateway.

6 On the Host Transport Node page, select the physical server, and click Switch Visualization
for the server. It must represent the network you configured on the physical server.

Results

A new segment port is attached to application interface of the physical server.

Ansible Server Configuration for Physical Server


When virtual interfaces (VIFs) are configured, the unique IDs of the VIFs must be configured to
be used as the segment port.

It is recommended that you configure segment ports for the application interface of a physical
server through the UI. See Configure a Physical Server as a Transport Node from GUI.

Ansible support modes are a set of automated scripts that set up the application interface for
physical servers.

n Static Mode - Application interface IP Address is configured manually.

n DHCP Mode - Application interface IP Address is configured dynamically.

n Migrate Mode - This mode supports management and application sharing the same IP
address. Also called underlay mode or VLAN-0.


For all Linux or Windows VM and physical servers:

1 Install Ansible based on the operating system. See
https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html.

2 Run the ansible --version command and check that the Ansible version is 2.4.3 or later.

3 Download and extract the physical server integration with NSX from GitHub:
https://github.com/vmware/bare-metal-server-integration-with-nsxt.

For Windows physical servers only:

1 Install pip for pywinrm.

2 Install pywinrm by running pip install pywinrm.

Create Application Interface for Physical Server Workloads


Before you create or migrate an application interface for physical server workloads, you must
configure NSX and install Linux third-party packages.

NSX does not support Linux OS interface bonding. You must use Open vSwitch (OVS) bonding
for Physical Server Transport Nodes. See Knowledge Base article 67835 Bare Metal Server
supports OVS bonding for Transport Node configuration in NSX-T. The supported bond
configuration for Linux is Active/Standby.
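
For reference, an OVS active-standby bond is typically created with ovs-vsctl; the bridge and
NIC names below are placeholders, so adapt them to your environment and follow the Knowledge
Base article for the supported procedure:

ovs-vsctl add-bond <ovs-bridge> bond0 eth1 eth2 bond_mode=active-backup
ovs-vsctl list-ports <ovs-bridge>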

Procedure

1 Install the required third-party packages.

See Install Third-Party Packages on a Linux Physical Server.

2 Configure the TCP and UDP ports.

See Ports and Protocols.

3 Add a physical server to the NSX fabric and create a transport node.

See Configure a Physical Server as a Transport Node from GUI.

4 Use the Ansible playbook to create an application interface.

See Ansible Server Configuration for Physical Server.

Secure Workloads on Windows Server 2016/2019 Bare Metal Servers


The NSX agent installed on the servers provides connectivity and security to the bare metal
workloads.


In this procedure, establish connectivity between the workloads and NSX Manager. Then,
configure DFW rules to secure ingress and egress traffic running between virtual or physical
workloads and Windows Server 2016 or 2019 bare metal workloads.

Prerequisites

n Configure your own proxy settings on the physical server.

Procedure

1 Enable Windows Remote Management (WinRM) on Windows Server 2016 to allow the
Windows server to interoperate with third-party software and hardware. To enable the
WinRM service with a self-signed certificate:

a Run PS$ wget -o ConfigureWinRMService.ps1
https://raw.githubusercontent.com/vmware/bare-metal-server-integration-with-nsxt/master/
bms-ansible-nsx/windows/ConfigureWinRMService.ps1.

b Run PS$ powershell.exe -ExecutionPolicy ByPass -File ConfigureWinRMService.ps1.

2 Configure WinRM to use HTTPS. The default port used for HTTPS is 5986.

a Run PowerShell as an administrator.

b Run winrm quickconfig.

c Run winrm set winrm/config/service/auth '@{Basic="true"}'.

d Run winrm set winrm/config/service '@{AllowUnencrypted="true"}'.

e Run winrm create winrm/config/Listener?Address=*+Transport=HTTPS
'@{Hostname="win16-colib-001";CertificateThumbprint="[output of the 2nd command]"}'.

f Verify configuration of WinRM. Run winrm e winrm/config/listener.

3 Add the bare metal server as a standalone transport node. See Configure a Physical Server
as a Transport Node from GUI.


4 Verify whether OVS bridges are created on the Windows server. The OVS bridge connects
the application virtual interface to the NSX switch on the transport node.

ovs-vsctl show

The output must show the bridges created from nsxswitch and nsx managed host
component. The nsxswitch bridge is for the transport node that was created. The nsx
managed bridge is created for the application virtual interface on the Windows host. These
bridge entries indicate that communication channel is established between the NSX switch
and Windows remote listener.

5 On the overlay-backed transport node, verify:

n The static IP address is reflected as the IP address of the overlay segment to which the
Windows Server workload is connected.

n The GENEVE tunnels are created between the NSX switch and the NSX managed host
component on the Windows host.

Note Likewise, on a VLAN-backed transport node, verify that the static IP address is
reflected as the IP address of the VLAN segment to which the Windows Server workload
is connected.

6 In Windows, customize OVSIM driver for the Windows server to create two new network
adapters - application virtual interfaces and virtual tunnel endpoint (VTEP) for overlay-backed
workload.

$:> Get-NetAdapter
vEthernet1-VTEP: Used for overlay-backed VTEP interface. Not needed for a VLAN-backed
workload.
vEthernet1-VIF1: Used for virtual interface or application interface of the bare metal
Windows server.

7 To verify network adapters, go to the Windows server and run Get-NetAdapter.

8 Verify connectivity between the application, the Windows bare metal server, and NSX Manager.

9 Add and publish L2 or L3 DFW rules for the overlay or VLAN-backed bare metal workload.

10 Verify ingress and egress traffic between virtual or physical workloads and bare metal
workloads is flowing as per the DFW rules published.

Preparing ESXi Hosts as Transport Nodes


After you create transport zones, IP pools, and uplink profiles for transport nodes, prepare a host
as a transport node. An ESXi host or a bare metal (physical) server can be prepared as a
transport node. ESXi hosts support only the vSphere Distributed Switch (VDS), while physical
servers support only the N-VDS host switch type.


Prepare a vSphere Distributed Switch for NSX


Before you configure an NSX transport node using vSphere Distributed Switch (VDS) as a host
switch, ensure that the VDS created on a VMware vCenter 7.0 or a later version is configured to
manage NSX traffic.

The following are the high-level tasks to configure a cluster or a standalone managed host using a VDS switch:

Important To create a VDS switch supporting NSX networking, the following conditions must be
met:

n VMware vCenter 7.0 or a later version

n ESXi 7.0 or a later version

Prerequisites

n Verify that ESXi hosts have the required number of physical NICs to meet networking
requirements. For example, if you plan to configure teaming policies and remote span port
mirroring, ensure that a free physical NIC is available to avoid uplink conflicts.

n Ensure that the MTU value of the physical switch port or LACP port is set to 1700 bytes.

Procedure

1 In a VMware vCenter, create a VDS. For more information about creating a VDS, see the
vSphere product documentation.

n Set the MTU value for the VDS to at least 1600.

Note On VDS 7.0 or later, the default MTU size is 1500. To prepare a VDS for NSX overlay
networking, the MTU size of the VDS must be at least 1600. Starting in NSX 3.2.1, if the MTU
size of the VDS is below 1600, NSX Manager notifies you that the MTU size will be
automatically increased to 1600.

n Add the hosts that you want to prepare for NSX networking to the VDS created in VMware
vCenter.

n Assign the physical NICs of each host as uplinks on the VDS.

2 In NSX, add an uplink profile that defines a teaming policy mapping NSX uplinks with VDS
uplinks.

3 In NSX, prepare an ESXi host using VDS as the host switch.

At the end of the configuration, the host is prepared as NSX transport node with VDS as the
host switch.

What to do next

Configure the host as a transport node. See Prepare ESXi Cluster Hosts as Transport Nodes.


Prepare ESXi Cluster Hosts as Transport Nodes


If a cluster of ESXi hosts is registered to a VMware vCenter, you can apply transport node
profiles on the ESXi cluster to automatically prepare all hosts as NSX transport nodes.

Prerequisites

n Verify that all hosts in the VMware vCenter are powered on.

n Verify that the system requirements are met. See System Requirements.

n The reverse proxy service on all nodes of the NSX Manager cluster must be Up and running.

To verify, run get service http. If the service is down, restart the service by running
restart service http on each NSX Manager node. If the service is still down, contact
VMware support.

n Verify that a transport node profile is configured. See Add a Transport Node Profile.

n (Host in lockdown mode) If your exception list for vSphere lockdown mode includes expired
user accounts, NSX installation on vSphere fails. Ensure that you delete all expired user
accounts before you begin installation. For more information on accounts with access
privileges in lockdown mode, see Specifying Accounts with Access Privileges in Lockdown
Mode in the vSphere Security Guide.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Hosts.

3 From the Managed By drop-down menu, select an existing VMware vCenter.

The page lists the available vSphere clusters and/or ESXi hosts from the selected VMware
vCenter. You may need to expand a cluster to view the ESXi hosts.

4 Select a cluster and click Configure NSX.

5 In the NSX Installation pop-up window, from the Transport Node Profile drop-down menu,
select the transport node profile to apply to the cluster. If a transport node profile does not
exist, click Create New Transport Node Profile to create one.

6 Click Apply to begin the process of transport node creation of all hosts in the cluster.

See Add a Transport Node Profile.


7 If you only want to prepare individual hosts as transport nodes, select that host, click
Configure NSX.

The Configure NSX dialog box opens.


a Verify the host name in the Host Details panel. Optionally, you can add a description.

b Click Next to move to the Configure NSX panel.

c Select the available transport zones and click the > button to include the transport zones
in the transport node profile.

8 Verify the host name in the Host Details panel, and click Next.

Optionally, you can add a description.

9 In the Configure NSX panel, expand New Node Switch.


10 In the New Node Switch section, configure the following fields.

Option Description

Mode (NSX [Link] only) Choose between the following mode options:
n Standard: Is the standard mode that is available to all supported
hypervisors by NSX.
n ENS Interrupt: Is a variant of the Enhanced Datapath mode.
n Enhanced Datapath: Is the mode that provides accelerated networking
performance. This mode requires nodes to use VMXNET3 vNIC enabled
network cards. It is not supported on NSX Edge nodes and Public
Gateways. The supported hypervisor is ESXi. It is recommended to run
ESXi v6.7 U2 and later versions.

Mode (Starting with NSX [Link]) Choose between the following mode options:
n Standard: This mode applies to all transport nodes. The data-plane in the
transport node automatically selects the host switch mode as per the
uplink capabilities.
n Enhanced Datapath - Standard: This mode is a variant of the Enhanced
Data Path mode. It is available only on ESXi hypervisor 7.0 and later
versions. Please consult your account representative for applicability.
n Enhanced Datapath - Performance: This is the Enhanced Data Path
switch mode for ESXi host transport node. This mode provides
accelerated networking performance. It requires nodes to use VMXNET3
vNIC enabled network cards. It is not supported on NSX Edge nodes and
Public Gateways. The supported hypervisor is ESXi. It is recommended
to run ESXi v6.7 U2 and later versions.
n Legacy: This mode was formerly called Standard. It applies to all
transport nodes. When the host switch mode is set to Legacy, the
packet handler stack is enabled. On NSX Manager UI, you will see this
mode set to Standard and the Legacy field to 'Yes'. You can select
this mode only through API, since the Legacy field is read-only in NSX
Manager UI.

You can run the following Host Transport Node or Transport Node
Profile policy API to set the host switch mode to Legacy:
n Create or update Host Transport Node:

PUT https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/sites/
<site-id>/enforcement-points/<enforcementpoint-id>/
host-transport-nodes/<host-transport-node-id>

n Create or update policy Host Transport Node Profile:

PUT https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/
host-transport-node-profiles/<transport-node-profile-id>

Name (Hosts managed by a vSphere cluster) Select the VMware vCenter that
manages the host switch.
Select the VDS that is created in VMware vCenter.


Option Description

Transport Zones Shows the transport zones that are realized by the associated host switches.
You cannot add a transport zone if it is not realized by any host switch.
Supported transport zone configurations:
n You can add multiple VLAN transport zones per host switch.
n You must add only one overlay transport zone per host switch. NSX
Manager UI does not allow adding multiple overlay transport zones.

Uplink Profile Select an existing uplink profile from the drop-down menu or create a
custom uplink profile. You can also use the default uplink profile.
If you keep the MTU value empty, NSX uses the global default MTU value of 1700. If you
enter an MTU value in the NSX uplink profile, that MTU value overrides the global default
MTU value.

Note Link Aggregation Groups defined in an uplink profile cannot be


mapped to VDS uplinks.

IP Assignment (TEP) Select between Use DHCP and Use IP Pool to assign an IP address to tunnel
endpoints (TEPs) of the transport node.
If you selected Use IP Pool for an IP assignment, specify the IP pool name
and the range of IP addresses that can be used for tunnel endpoints.

CPU Config You can configure the CPU Config field only when the Mode is set to
Enhanced Datapath.
a Click Set.
b In the CPU Config window, click Add.
c Enter values for the NUMA Node Index and LCores per NUMA Node
fields.
d To save the values, click Add and Save.

Teaming Policy Switch Mapping Before you map uplinks profiles in NSX with uplinks in VDS, ensure uplinks
are configured on the VDS switch. To configure or view the VDS switch
uplinks, go to VMware vCenter → vSphere Distributed Switch. Click Actions
→ Settings → Edit Settings.
Map uplinks defined in the selected NSX uplink profile with VDS uplinks.
The number of NSX uplinks that are presented for mapping depends on the
uplink profile configuration.
For example, in the uplink-1 (active) row, go to the Physical NICs column, click
the edit icon, and type in the name of the VDS uplink to complete mapping it
with uplink-1 (active). Likewise, complete the mapping for the other uplinks.

Note Uplinks/LAGs, NIOC profile, LLDP profile are defined in VMware vCenter. These
configurations are not available in NSX Manager. To manage VMkernel adapters on a VDS
switch, go to VMware vCenter to attach VMkernel adapters to Distributed Virtual port groups
or NSX port groups.

11 If you have selected multiple transport zones, click + Add Switch again to configure the
switch for the other transport zones.

12 Click Add to complete the configuration.


13 (Optional) View the ESXi connection status.

# esxcli network ip connection list | grep 1235


tcp 0 0 [Link]:20514 [Link]:1234 ESTABLISHED 1000144459 newreno
nsx-proxy

14 From the Host Transport Node page, verify that the NSX Manager connectivity status of
hosts in the cluster is Up and NSX configuration state is Success. During the configuration
process, each transport node displays the percentage of progress of the installation process.
If installation fails at any stage, you can restart the process by clicking the Resolve link that is
available against the failed stage of the process.

You can also see that the transport zone is applied to the hosts in the cluster.

Note If you reconfigure a host that is part of a cluster already prepared by a transport node
profile, the configuration state of the node is Configuration Mismatch.

Note The Host Transport Node page displays TEP addresses of the host in addition to IP
addresses. TEP address is the address assigned to the VMkernel NIC of the host, whereas IP
address is the management IP address.

15 (Optional) Remove the NSX VIBs on the host.

a Select one or more hosts and click Actions > Remove NSX.

The uninstallation takes up to three minutes. Uninstallation of NSX removes the transport
node configuration on hosts and the host is detached from the transport zone(s) and switch.
Similar to the installation process, you can follow the percentage of the uninstallation process
completed on each transport node. If uninstallation fails at any stage, you can restart the
process by clicking the Resolve link that is available against the failed stage of the process.

16 (Optional) Remove a transport node from the transport zone.

a Select a single transport node and click Actions > Remove from Transport Zone.

What to do next

You can create transport zones, logical switches, logical routers, and other network components
through the NSX Manager UI or API at any time. When NSX Edge nodes and hosts join the
management plane, the NSX logical entities and configuration state are pushed to the NSX Edge
nodes and hosts automatically. When the hosts are transport nodes, these entities get realized on
the hosts.

Create a logical switch and assign logical ports. See the Advanced Switching section in the NSX
Administration Guide.


Prepare ESXi Cluster Hosts for Security Only or Networking and Security Using the Quick Start Wizard
Using the Quick Start wizard, you can configure ESXi clusters for either Networking and Security
or Security Only.

Note If you configure a cluster for Security Only then you cannot configure networking for that
cluster.

Prepare ESXi cluster Hosts for Networking and Security


Use the Quick Start wizard to prepare ESXi clusters for networking and security using NSX
recommended host configurations.

The Quick Start wizard gives you two options to prepare ESXi clusters: Networking and Security
or Security Only. With either of these options, the wizard helps you finish installation with
minimal user input, thus simplifying the installation process. VLAN networking is the default
selection in the wizard.

Based on the type of host, the quick start wizard considers the following default configurations:

n ESXi hosts running 7.0 and later are prepared on the VDS switch. Configure the desired
number of uplinks on the VDS switch in vCenter Server and set the MTU to 1700.

Each host switch is assigned an auto-created transport zone, uplink profile and transport node
profile.

Prerequisites

n Register VMware vCenter as a compute manager in NSX.

n ESXi cluster hosts must be members of a VDS in VMware vCenter with the correct uplinks
configured.

n (Optional) If you are using a VLAN for the VTEP pool, configure an IP pool for host VTEP
assignment and an uplink profile using the TEP VLAN.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Navigate to System > Quick Start.

3 On the Prepare Clusters for Networking and Security card, click Get Started.

4 Select the clusters you want to prepare for NSX networking.

5 Click Install NSX and then select Networking and Security.


6 Depending on your requirement, you can prepare the same cluster for both VLAN and
Overlay networking or for one type of networking. With Overlay networking, each host
switch is added with a TEP IP address, which is required for overlay networking.

7 View the Host Switch Configuration to know the target switches to which the physical NICs
and VMkernel adapters (if any) will be migrated.

This is the NSX recommended configuration. Optionally, you can customize the settings for the
cluster.

Note A dotted line originating from a switch to a physical NIC indicates that it is an existing
configuration on the host switch, which will be replaced by a firm line going to the same
physical NIC.

8 Even though NSX provides recommendations, you can still customize the configuration. To
customize the host switch, select the switch and change the recommended configuration.

a VDS: Select the VDS switch as the host switch.

b Transport Zone: Select a different transport zone that you want the host to be associated
with.

c Uplink Profile: If needed, select a different uplink profile in place of the recommended
uplink profile.

Note If you configure two VDS switches with the same configuration, the wizard
recommends the same uplink profile for both the switches.

d Uplink to Physical NIC mapping: On a VDS switch, all uplinks configured on the VDS
switch are mapped to the uplinks in NSX.

A change to host switch type or uplink to vmnic mapping is reflected in the Host Switch
Configuration network representation.

9 Click Install.

View the progress of installation on the Prepare Clusters for Networking and Security card. If
installation on any of the host fails, retry installation by resolving the error.


10 To view successfully prepared hosts, go to System → Fabric → Nodes → Host Transport


Node.

Results

The transport nodes are ready for VLAN and Overlay networking.

Install Distributed Security for vSphere Distributed Switch


NSX allows you to install Distributed Security for vSphere Distributed Switch (VDS). As the host
switch is of the type VDS, DFW capabilities can be enabled on workload VMs.

Distributed Security provides security-related functionality to your VDS such as:

n Distributed Firewall (DFW)

n Distributed IDS/IPS

n Identity Firewall

n L7 App ID

n Fully Qualified Domain Name (FQDN) Filtering

n NSX Intelligence

n NSX Malware Prevention

n NSX Guest Introspection

Prerequisites

The following are the requirements for installing Distributed Security for VDS:

n vSphere 7.0 or later.

n The vSphere cluster should have at least one VDS with distributed switch version 6.6 or later
configured and ESXi cluster hosts must be members of a VDS with uplinks configured.

n A compute manager must be registered in NSX. See Add a Compute Manager.

n Before you deploy and configure Distributed Security on hosts, ensure that NSX is not
deployed on such hosts.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address>.

2 Navigate to System > Quick Start.

3 On the Prepare Clusters for Networking and Security card, click Get Started.

4 Select the clusters on which you want to install Distributed Security.

5 Click Install NSX and then select Security Only.


6 In the dialog box, click Install.

Note If the VDS spans multiple clusters, Distributed Security is installed only on the clusters
that you selected.

The installation process for Distributed Security starts.

7 To view VDS with Distributed Security installed, do the following:

a Navigate to System > Fabric > Nodes.

b Select the Host Transport Nodes tab.

Note vSphere clusters prepared for Distributed Security are identified by the Security
label.

Results

Distributed Security is installed and you can begin using security capabilities such as creating
DFW policies and rules for the VDS.

Configure an ESXi Host Transport Node with Link Aggregation


This procedure describes how to create an uplink profile that has a link aggregation group
configured, and how to configure an ESXi host transport node to use that uplink profile.

Prerequisites

n Familiarize yourself with the steps to create an uplink profile. See Create an Uplink Profile.

n Familiarize yourself with the steps to create a host transport node.

n If you plan to configure a VDS switch on the host, configure LAG on the VDS switch.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Profiles > Uplink Profiles > Add Profile.

3 Enter a name and optionally a description.

For example, you enter the name uplink-profile1.

4 Under Teamings, select Default Teaming.

5 In the Active Uplinks field, enter the name of the LAG that you configured on the VDS switch
in VMware vCenter.

6 Enter the VLAN ID for Transport VLAN.

7 Click Add at the bottom of the dialog box.

8 Select Fabric > Nodes > Host Transport Nodes > Managed by.


9 Select the VMware vCenter that hosts the ESXi host.

10 In the Host Details tab, enter IP address, OS name, admin credentials, and SHA-256
thumbprint of the host.

11 During the switch configuration, depending on the VDS switch, select the uplink profile
uplink-profile1 that was created in step 3.

12 In the Physical NICs field, the physical NICs and uplinks dropdown list reflects the new
NICs and uplink profile. Specifically, the uplink LAG that is configured on the VDS switch is
displayed in the uplink profile drop-down list. Select the VDS uplink for LAG.

13 Enter information for the other fields and complete host preparation.

The ESXi host is prepared as a transport node using the LAG profile.

Managing Transport Nodes


After preparing hosts as transport nodes, you can view the status of hosts, switch visualization,
and other configuration settings related to transport nodes. You can use this information to debug
issues if the host transport node is in a degraded or failed state.

Switch Visualization
You get a granular view of a vSphere Distributed Switch (VDS) at an individual host level.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Hosts.

3 From the Managed by drop-down menu, select Standalone Hosts or a compute manager.

4 Select the host.

5 Click the Switch Visualization tab to view the uplinks configured on the switch and physical
NICs that are connected to uplinks.

NSX Maintenance Mode


If you want to avoid vMotion of VMs to a transport node that is not functional, place that
transport node in NSX Maintenance Mode.

To put a transport node in NSX Maintenance Mode, select the node, click Actions → NSX
Maintenance Mode.


When you put a host in NSX Maintenance Mode, the transport node cannot participate
in networking. Therefore, you must vMotion all VMs to another host before initiating NSX
Maintenance Mode. Also, VMs running on other transport nodes that have VDS as the host
switch cannot be vMotioned to this transport node. In addition, logical networks cannot be
configured on the ESXi host because the status of the host switch on that host is shown as Down.

Scenarios to put the transport node in NSX Maintenance Mode:

n A transport node is not functional.

n If a host has hardware or software issues that are unrelated to NSX, but you want to retain
the node and its configurations in NSX, place the host in NSX Maintenance Mode.

n A transport node is automatically put in NSX Maintenance Mode when an upgrade on that
transport node fails.

Any transport node put in the NSX Maintenance Mode is not upgraded.

Health Check VLAN ID Ranges and MTU Settings


Run health check APIs to verify that the VLAN ID ranges you specified and the MTU settings on a
transport node are compatible with the corresponding settings on the physical switch.

VLAN or MTU configuration mismatch is a common configuration error that can lead to
connectivity outage.

Note
n Health check results are only indicators of possible network configuration errors. For
example, a health check run on hosts from different L2 domains results in untrunked VLAN
IDs. This result cannot be considered a configuration error because hosts must be in the same
L2 domain for the health check tool to give correct results.

n Only 50 health check operations can be in progress at any given time.

n After a health check finishes, NSX preserves that result on the system only for 24 hours.

In a health check operation, the NSX opsAgent sends probe packets from a transport node to
another node to verify that the VLAN ID ranges you specified and the MTU value on the transport
node are compatible with the corresponding settings on the physical switch.

As the number of VLAN ID ranges to be verified increases, the waiting time increases.

Number of VLANs Waiting Time (secs)

[3073,4095] 150

[1025, 3072] 120

[513, 1024] 80

[128, 512] 60


[64, 127] 30

[1, 63] 20

Prerequisites

n At least two uplinks must be configured on the N-VDS for the VLAN and MTU check to work.

n Transport nodes must be in the same L2 domain.

n Health check is supported on ESXi hosts running v6.7 U2 or later.

Procedure

1 Create a manual health check.

POST https://<nsx-manager>/api/v1/manual-health-checks

Example Request:
POST https://<nsx-manager>/api/v1/manual-health-checks
{
    "resource_type": "ManualHealthCheck",
    "display_name": "Manual HealthCheck 002",
    "transport_zone_id": "7754341c-8f3c-443f-9c1a-2d635d5b0d1c",
    "vlans": {
        "vlan_ranges": [
            {
                "start": 0,
                "end": 6
            }
        ]
    }
}
Example Response:
{
"operation_status": "FINISHED",
"transport_zone_id": "7754341c-8f3c-443f-9c1a-2d635d5b0d1c",
"vlans": {
"vlan_ranges": [
{
"start": 0,
"end": 6
}
]
},
"result": {
"vlan_mtu_status": "UNTRUNKED",
"results_per_transport_node": [
{
"transport_node_id": "dfcabffa-8839-11e9-b30e-6f45344d8a04",
"result_on_host_switch": {
"host_switch_name": "nsxvswitch",
"results_per_uplink": [
{
"uplink_name": "uplink1",
"vlan_and_mtu_allowed": [
{


"start": 0,
"end": 0
}
],
"mtu_disallowed": [],
"vlan_disallowed": [
{
"start": 1,
"end": 6
}
]
}
]
}
},
{
"transport_node_id": "a300ea62-8839-11e9-a94e-31732bb71949",
"result_on_host_switch": {
"host_switch_name": "nsxvswitch",
"results_per_uplink": [
{
"uplink_name": "uplink1",
"vlan_and_mtu_allowed": [
{
"start": 0,
"end": 0
}
],
"mtu_disallowed": [],
"vlan_disallowed": [
{
"start": 1,
"end": 6
}
]
}
]
}
}
]
},
"resource_type": "ManualHealthCheck",
"id": "8a56ed9e-a31b-479e-987b-2dbfbde07c38",
"display_name": "mc1",
"_create_user": "admin",
"_create_time": 1560149933059,
"_last_modified_user": "system",
"_last_modified_time": 1560149971220,
"_system_owned": false,
"_protection": "NOT_PROTECTED",
"_revision": 0
}

A new health check object is created with id 8a56ed9e-a31b-479e-987b-2dbfbde07c38.


2 To get a list of all manual health check operations initiated, make the API call.

GET https://<nsx-manager>/api/v1/manual-health-checks

3 To delete a manual health check, make the API call.

DELETE https://<nsx-manager>/api/v1/manual-health-checks/<health-check-ID>

4 To get a single health check initiated manually, make the API call.

GET https://<nsx-manager>/api/v1/manual-health-checks/<health-check-ID>
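
For example, the create request in step 1 can be issued with curl; the admin credentials and
manager address are placeholders:

curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
-d '{"resource_type": "ManualHealthCheck", "display_name": "Manual HealthCheck 002", "transport_zone_id": "<transport-zone-id>", "vlans": {"vlan_ranges": [{"start": 0, "end": 6}]}}' \
https://<nsx-manager>/api/v1/manual-health-checks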

Results

The API response section contains the health check results. The NSX Ops agent waits for
an acknowledgement packet from the destination transport node to retrieve VLAN ID ranges
supported on the physical switch.

n Untrunked: Lists the VLAN ID ranges that are not compatible with a physical switch. The
VLAN ID ranges that are compatible with the physical switch are also listed.

n Trunked: Lists the VLAN ID ranges that are compatible with a physical switch.

n Unknown: There is no valid result for some or all uplinks because of infrastructure issues or
unsupported platform types such as Edge.

Parameters in the API response section:

n vlan_and_mtu_allowed: Lists the VLAN ID ranges that are compatible.

n mtu_disallowed: Lists the VLAN ID ranges for which the MTU value is not compatible with a
physical switch.

n vlan_disallowed: Lists the VLAN ID ranges that are not compatible with a physical switch.

What to do next

n In an overlay-based transport zone, update both VLAN ID and MTU config in the uplink profile
on N-VDS. Likewise, update VLAN or MTU on the physical switch.

n In a vlan-based transport zone, update MTU config in the uplink profile. And, update VLAN
config on logical switches of that transport zone. Likewise update VLAN or MTU on the
physical switch.

View Bidirectional Forwarding Detection Status


View BFD status between transport nodes. Each transport node detects connectivity status with
another remote transport node through a tunnel status that displays the BFD status among other
details related to the node.

Both Host Transport nodes (standalone and hosts registered to a vCenter) and Edge nodes
display the tunnel status. BFD packets support both GENEVE and STT encapsulation. GENEVE is
the default encapsulation.


Procedure

1 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Navigate the UI based on the NSX version and select a host:

n (NSX 3.2.2 or later) System > Fabric > Hosts and select the Cluster tab.

n (NSX 3.2.1 or earlier) System > Fabric > Nodes > Host Transport Nodes

and from the Managed by dropdown menu, select a VMware vCenter.

3 On the Tunnel column, click the tunnel number that is displayed.

The Monitor page displays the status of tunnel, BFD diagnostic code, remote node UUID,
encapsulation on BFD packets, and tunnel name.

The tunnel BFD diagnostic code indicates the reason for the change in the session state.

Code Description

0 No Diagnostic

1 Control Detection Time Expired

2 Echo Function Failed

3 Neighbor Signaled Session Down

4 Forwarding Plane Reset

5 Path Down

6 Concatenated Path Down

7 Administratively Down

8 Reverse Concatenated Path Down

Results

If the BFD status is down, use the diagnostic code to troubleshoot the issue.

Verify the Transport Node Status


Verify that the transport node is created and the VDS is configured successfully.

Procedure

1 Log in to the NSX Manager UI.

2 Navigate to System → Fabric → Hosts page.

3 To view the host and VDS status, navigate to Host Details → Overview tab.

4 Verify host controller connectivity and manager connectivity is UP.


5 Navigate to the Transport Node page and view the VDS status.

6 To view active manager node for this transport node, run nsxcli -c get managers *.

7 To view the master controller with 'connected' status and session state up for this transport
node, run nsxcli -c get controllers.

8 Alternatively, view the VDS on ESXi with the esxcli network ip interface list
command.

On ESXi, verify that the correct number of vmk interfaces is configured with a VDS name that
matches the name you used when you configured the transport zone and the transport node.

# esxcli network ip interface list


...

vmk10
Name: vmk10
MAC Address: [Link]
Enabled: true
Portset: DvsPortset-1
Portgroup: N/A
Netstack Instance: vxlan
VDS Name: overlay-hostswitch
VDS UUID: 18 ae 54 04 2c 6f 46 21-b8 ae ef ff 01 0c aa c2
VDS Port: 10
VDS Connection: 10
Opaque Network ID: N/A
Opaque Network Type: N/A
External ID: N/A
MTU: 1600
TSO MSS: 65535
Port ID: 67108895

...

If you are using the vSphere Client, you can view the installed VDS in the UI by selecting host
Configuration > Network Adapters.

9 Check the transport node's assigned tunnel endpoint address.

The vmk10 interface receives an IP address from the NSX IP pool or DHCP, as shown here:

# esxcli network ip interface ipv4 get


Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type DHCP DNS
----- -------------- ------------- --------------- ------------ --------
vmk0 [Link] [Link] [Link] STATIC false
vmk1 [Link] [Link] [Link] STATIC false
vmk10 [Link] [Link] [Link] STATIC false


10 Check the API for transport node state information.

Call the deprecated API GET https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>/state.

Or call the GET /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcement-point-id>/
transport-node-status-report API, where the default value for <site-id> and
<enforcement-point-id> is default.

For example:

{ "transport_node_id": "55120a1a-51c6-4c20-b4a3-6f59662c9f6a", "host_switch_states":


[ { "host_switch_id": "50 21 0c 52 94 22 aa 20-b7 f0 0b da 1c 7c 29 ea",
"host_switch_name": "dvs1",
"endpoints": [ { "device_name": "vmk10", "ip": "[Link]", "default_gateway": "",
"subnet_mask": "[Link]", "label": 53249 } ],
"transport_zone_ids": [ "1b3a2f36-bfd1-443e-a0f6-4de01abc963e" ], "host_switch_type":
"VDS" } ],
"maintenance_mode_state": "DISABLED",
"node_deployment_state": { "state": "success", "details": [] },
"deployment_progress_state": { "progress": 100, "current_step_title": "Configuration
complete" },
"state": "success",
"details": [ { "sub_system_id": "55120a1a-51c6-4c20-b4a3-6f59662c9f6a",
"sub_system_type": "HostConfig", "state": "success" },
{ "sub_system_id": "55120a1a-51c6-4c20-b4a3-6f59662c9f6a", "sub_system_type": "AppInit",
"state": "success" },
{ "sub_system_id": "55120a1a-51c6-4c20-b4a3-6f59662c9f6a", "sub_system_type":
"LogicalSwitchFullSync",
"state": "success" } ] }



Installing NSX Edge
10
Install NSX Edge on ESXi using the NSX UI, the vSphere web client, or the command-line OVF
tool.

Read the following topics next:

n NSX Edge Installation Requirements

n NSX Edge Networking Setup

n NSX Edge Installation Methods

n Create an NSX Edge Transport Node

n Configure NSX Edge DPDK Interfaces

n Manually Deploying NSX Edge Node Outside of NSX

n Create an NSX Edge Cluster

n Remove NSX Edge Nodes from an Edge Cluster

n Relocate and Remove an NSX Edge Node from an NSX Edge Cluster

NSX Edge Installation Requirements


The NSX Edge provides routing services and connectivity to networks that are external to the
NSX deployment. NSX Edge nodes provide a pool of capacity for running centralized services
and are required if you want to deploy a tier-0 router or a tier-1 router with stateful services such
as network address translation (NAT), VPN, and so on.

Note There can be only one tier-0 router per NSX Edge node. However, multiple tier-1 logical
routers can be hosted on one NSX Edge node. NSX Edge VMs of different sizes can be combined
in the same cluster; however, this is not recommended.


Table 10-1. NSX Edge Deployment, Platforms, and Installation Requirements

Requirements Description

Supported deployment methods n OVA/OVF
n ISO with PXE
n ISO without PXE

Supported platforms NSX Edge is supported as a VM (only on ESXi) or as a
physical server (bare metal). Both leverage the Data Plane
Development Kit (DPDK) for faster packet processing and
high performance.

PXE installation The password string must be hashed with the SHA-512
algorithm for the root and admin user passwords. See the
hashing example after this table.

NSX appliance password n At least 12 characters
n At least one lower-case letter
n At least one upper-case letter
n At least one digit
n At least one special character
n At least five different characters
n No dictionary words
n No palindromes
n Monotonic character sequences of more than four
characters are not allowed

Hostname When installing NSX Edge, specify a hostname that
does not contain invalid characters such as an
underscore. If the hostname contains any invalid
character, after deployment the hostname will be set
to localhost. For more information about hostname
restrictions, see [Link] and
[Link]

VMware Tools The NSX Edge VM running on ESXi has VMware Tools installed.
Do not remove or upgrade VMware Tools.

System Verify that the system requirements are met. See NSX
Edge VM System Requirements.

Ports Verify that the required ports are open. See Ports and
Protocols.

IP Addresses Plan your NSX Edge IPv4 and IPv6 addressing
scheme. However, an NSX Edge configured for a dual IPv4 and
IPv6 stack is not supported on an ESXi node that is
configured only for an IPv6 addressing scheme. Also, NSX
Edge tunnel endpoints on a dual IPv4 and IPv6 scheme
cannot communicate with an ESXi node using only an IPv6
addressing scheme.


OVF Template n Verify that you have adequate privileges to deploy an
OVF template on the ESXi host.
n Verify that hostnames do not include underscores.
Otherwise, the hostname is set to localhost.
n A management tool that can deploy OVF templates,
such as vCenter Server or the vSphere Client.
The OVF deployment tool must support configuration
options to allow for a manual configuration.
n The Client Integration Plug-in must be installed.

NTP Server The same NTP server must be configured on all NSX Edge
VMs or Bare Metal Edges in an Edge cluster.
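To generate the SHA-512 password hash referenced in the PXE installation row above, one option
is the OpenSSL passwd command (a minimal sketch; it assumes OpenSSL 1.1.1 or later is available
on the build host, and the salt and password shown are placeholders):

# Produce a SHA-512 crypt hash ($6$...) for use in the PXE preseed/kickstart file
openssl passwd -6 -salt My5alt 'MyEdgePassw0rd!2024'

The resulting $6$... string is what you place in the configuration file instead of the clear-text root
or admin password.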

Intel-based Chipsets
NSX Edge nodes are supported on ESXi-based hosts with Intel chipsets. If an unsupported
chipset type is used, vSphere EVC mode may prevent Edge nodes from starting, showing an
error message in the console. See NSX Edge VM System Requirements.

AMD EPYC
NSX Edge nodes are also supported on AMD-based chipsets. NSX Edge nodes can now be
deployed on AMD EPYC series chipsets. See NSX Edge VM System Requirements.

NSX Edge Support of vSphere Business Continuity Features


Starting in NSX 2.5.1, vMotion, DRS, and vSphere HA are supported for NSX Edge nodes. Use
vMotion and DRS with caution to avoid traffic disruption, especially when the BFD timers are set
to less than one second.

NSX Edge Installation Scenarios


Important When you install NSX Edge from an OVA or OVF file, either from vSphere Web Client
or the command line, OVA/OVF property values such as user names, passwords, or IP addresses
are not validated before the VM is powered on.

n If you specify a user name for any of the local users, the name must be unique. If you specify
the same name, it is ignored and the default names (for example, admin or audit) are used.

n If the password for the root or admin user does not meet the complexity requirements, you
must log in to NSX Edge through SSH or at the console as root with password vmware and
admin with password default. You are prompted to change the password.


n If the password for other local users (for example, audit) does not meet the complexity
requirements, the user account is disabled. To enable the account, log in to NSX Edge
through SSH or at the console as the admin user and run the command set user
local_user_name to set the local user's password (the current password is an empty string).
You can also reset passwords in the UI using System > User Management > Local Users.

Caution Changes made to NSX while logged in with the root user credentials might cause
system failure and potentially impact your network. Make changes using the root user
credentials only with the guidance of the VMware Support team.

Note The core services on the appliance do not start until a password with sufficient complexity
has been set.

After you deploy NSX Edge from an OVA file, you cannot change the VM's IP settings by
powering off the VM and modifying the OVA settings from VMware vCenter.

NSX Edge Networking Setup


NSX Edge can be installed using ISO, OVA/OVF, or PXE start. Regardless of the installation
method, make sure that the host networking is prepared before you install NSX Edge.

High-Level View of NSX Edge Within a Transport Zone


The high-level view of NSX shows two transport nodes in a transport zone. One transport node is
a host. The other is an NSX Edge.

Figure 10-1. High-Level Overview of NSX Edge

(The figure shows a transport zone that contains a host transport node and an NSX Edge
transport node. The NSX Edge hosts the tier-0 and tier-1 service routers (SR), the host carries the
corresponding distributed routers (DR), intra-tier-0 and intra-tier-1 transit subnets connect each
SR and DR pair, a tier-0 to tier-1 router link connects the two tiers, the MP/CCP/LCP planes
connect to both nodes, and the NSX Edge's bonded physical NICs (pnic1 and pnic2, exposed as
fp-ethX ports) are managed by the datapath.)


When you first deploy an NSX Edge, you can think of it as an empty container. The NSX Edge
does not do anything until you create logical routers. The NSX Edge provides the compute
backing for tier-0 and tier-1 logical routers. Each logical router contains a services router (SR) and
a distributed router (DR). When we say that a router is distributed, we mean that it is replicated
on all transport nodes that belong to the same transport zone. In the figure, the host transport
node contains the same DRs contained on the tier-0 and tier-1 routers. A services router is
required if the logical router is going to be configured to perform services, such as NAT. All tier-0
logical routers have a services router. A tier-1 router can have a services router if needed based
on your design considerations.

By default, the links between the SR and the DR use the [Link]/28 subnet. These intra-
router transit links are created automatically when you deploy a tier-0 or tier-1 logical router. You
do not need to configure or modify the link configuration unless the [Link]/28 subnet is
already in use in your deployment. On a tier-1 logical router, the SR is present only if you select
an NSX Edge cluster when creating the tier-1 logical router.

The default address space assigned for the tier-0-to-tier-1 connections is [Link]/16. Each
tier-0-to-tier-1 peer connection is provided a /31 subnet within the [Link]/16 address space.
This link is created automatically when you create a tier-1 router and connect it to a tier-0 router.
You do not need to configure or modify the interfaces on this link unless the [Link]/16
subnet is already in use in your deployment.

Each NSX deployment has a management plane cluster (MP) and a control plane cluster (CCP).
The MP and the CCP push configurations to each transport zone's local control plane (LCP).
When a host or NSX Edge joins the management plane, the management plane agent (MPA)
establishes connectivity with the host or NSX Edge, and the host or NSX Edge becomes an
NSX fabric node. When the fabric node is then added as a transport node, LCP connectivity is
established with the host or NSX Edge.

Lastly, the figure shows an example of two physical NICs (pNIC1 and pNIC2) that are bonded
to provide high availability. The datapath manages the physical NICs. They can serve as either
VLAN uplinks to an external network or as tunnel endpoint links to internal NSX-managed VM
networks.

It is a best practice to allocate at least two physical links to each NSX Edge that is deployed as
a VM. Optionally, you can overlap the port groups on the same pNIC using different VLAN IDs.
The first network link found is used for management. For example, on an NSX Edge VM, the first
link found might be vnic1. On a bare-metal installation, the first link found might be eth0 or em0.
The remaining links are used for the uplinks and tunnels. For example, one might be for a tunnel
endpoint used by NSX-managed VMs. The other might be used for an NSX Edge-to-external TOR
uplink.

You can view the physical link information of the NSX Edge by logging in to the CLI as
an administrator and running the get interfaces and get physical-ports commands.
In the API, you can use the GET /api/v1/transport-nodes/{transport-node-id}/node/network/
interfaces API call. Physical links are discussed in more detail in the next section.
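For instance, a hedged curl sketch of that interfaces API call (manager address, credentials, and
the transport node ID are placeholders):

# List the network interfaces reported for an NSX Edge transport node
curl -k -u 'admin:<password>' \
  "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>/node/network/interfaces"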


Whether you install NSX Edge as a VM appliance or on bare metal, you have multiple options for
the network configuration, depending on your deployment.

Figure 10-2. NSX Edge Transport Node in VM Form Factor within ESXi

Figure 10-3. Bare Metal Edge Transport Node

Transport Zones and N-VDS


To understand NSX Edge networking, you must know something about transport zones and
N-VDS. Transport zones control the reach of Layer 2 networks in NSX. N-VDS is a software
switch that gets created on a transport node. The purpose of N-VDS is to bind logical router
uplinks and downlinks to physical NICs. For each transport zone that an NSX Edge belongs to, a
single N-VDS gets installed on the NSX Edge.

There are two types of transport zones:

n Overlay for internal NSX tunneling between transport nodes.

n VLAN for uplinks external to NSX.

An NSX Edge can belong to zero VLAN transport zones or many. For zero VLAN transport
zones, the NSX Edge can still have uplinks because the NSX Edge uplinks can use the same
N-VDS installed for the overlay transport zone. You might do this if you want each NSX Edge to
have only one N-VDS. Another design option is for the NSX Edge to belong to multiple VLAN
transport zones, one for each uplink.

The most common design choice is three transport zones: One overlay and two VLAN transport
zones for redundant uplinks.


To use the same VLAN ID both for a transport network carrying overlay traffic and for VLAN
traffic, such as a VLAN uplink, configure the ID on two different N-VDS switches, one for VLAN
and the other for overlay.

Virtual-Appliance/VM NSX Edge Networking


When you install NSX Edge as a virtual appliance or VM, internal interfaces are created, called
fp-ethX, where X is 0, 1, 2, or 3. These interfaces are allocated for uplinks to top-of-rack (ToR)
switches and for NSX overlay tunneling.

When you create the NSX Edge transport node, you can select fp-ethX interfaces to associate
with the uplinks and the overlay tunnel. You can decide how to use the fp-ethX interfaces.

On the vSphere distributed switch or vSphere Standard switch, you must allocate at least two
vmnics to the NSX Edge: One for NSX Edge management and one for uplinks and tunnels.

In the following sample physical topology, fp-eth0 is used for the NSX overlay tunnel. fp-eth1 is
used for the VLAN uplink. fp-eth2 and fp-eth3 are not used. vNIC1 is assigned to the management
network.

Figure 10-4. One Suggested Link Setup for NSX Edge VM Networking

(The figure shows an NSX Edge VM on an ESXi host with a VDS/VSS: the management interface
eth0 maps to vNIC1 on a management port group, fp-eth0 on the overlay N-VDS maps to vNIC2
on an overlay/tunnel port group, fp-eth1 on the VLAN uplink N-VDS maps to vNIC3 on an uplink
port group, and fp-eth2 (unused) maps to vNIC4; the port groups are backed by the host's
vmnics.)

The NSX Edge shown in this example belongs to two transport zones (one overlay and one
VLAN) and therefore has two N-VDS, one for tunnel and one for uplink traffic.

This screenshot shows the virtual machine port groups, nsx-tunnel, and vlan-uplink.


Figure 10-5. Edge Networking in vSphere

During deployment, you must specify the network names that match the names configured on
your VM port groups. For example, to match the VM port groups in the example, your network
ovftool settings can be as follows if you were using the ovftool to deploy NSX Edge:

--net:"Network 0=Mgmt" --net:"Network 1=nsx-tunnel" --net:"Network 2=vlan-uplink"

The example shown here uses the VM port group names Mgmt, nsx-tunnel, and vlan-uplink. You
can use any names for your VM port groups.

The tunnel and uplink VM port groups configured for the NSX Edge do not need to be associated
with VMkernel ports or given IP addresses. This is because they are used at Layer 2 only. If your
deployment uses DHCP to provide an address to the management interface, make sure that only
one NIC is assigned to the management network.

Notice that the VLAN and tunnel port groups are configured as trunk ports. This is required. For
example, on a standard vSwitch, you configure trunk ports as follows: Host > Configuration >
Networking > Add Networking > Virtual Machine > VLAN ID All (4095).
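On a standalone ESXi host with a standard vSwitch, the same trunk setting can also be applied
from the ESXi command line; a minimal sketch, assuming the port group names used in this
example:

# Set the tunnel and uplink port groups to VLAN 4095 (trunk all VLANs)
esxcli network vswitch standard portgroup set -p "nsx-tunnel" -v 4095
esxcli network vswitch standard portgroup set -p "vlan-uplink" -v 4095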

If you are using an appliance-based or VM NSX Edge, you can use standard vSwitches or
vSphere distributed switches.

NSX Edge VM can be installed on an NSX prepared host and configured as a transport node.
There are two types of deployment:

n NSX Edge VM can be deployed using VSS/VDS port groups where VSS/VDS consume
separate pNIC(s) on the host. Host transport node consumes separate pNIC(s) for N-VDS
installed on the host. N-VDS of the host transport node co-exists with a VSS or VDS, both
consuming separate pNICs. Host TEP (Tunnel End Point) and NSX Edge TEP can be in the
same or different subnets.

n NSX Edge VM can be deployed using VLAN-backed logical switches on the N-VDS of the
host transport node. Host TEP and NSX Edge TEP can be in the same VLAN or subnet.

Optionally, you can install multiple NSX Edge appliances/VMs on a single host, and the same
management, VLAN, and tunnel endpoint port groups can be used by all installed NSX Edges.


With the underlying physical links up and the VM port groups configured, you can install the NSX
Edge.

Note If the NSX Edge VM uplinks (management and fast-path interfaces) connect to VLAN-based
portgroups, the underlying ESXi host does not need to be an NSX transport node. However, if the
NSX Edge VM uplinks connect to NSX VLAN segments (logical switches), you must configure the
ESXi hosts hosting the edges as transport nodes.

Bare-Metal NSX Edge Networking


The bare-metal NSX Edge contains internal interfaces called fp-ethX, with up to 16 such
interfaces. The number of fp-ethX interfaces created depends on how many physical NICs your
bare-metal NSX Edge has. Up to four of these interfaces can be allocated for uplinks to top-of-
rack (ToR) switches and NSX overlay tunneling.

When you create the NSX Edge transport node, you can select fp-ethX interfaces to associate
with the uplinks and the overlay tunnel.

You can decide how to use the fp-ethX interfaces. In the following sample physical topology,
fp-eth0 and fp-eth1 are bonded and used for the NSX overlay tunnel. fp-eth2 and fp-eth3 are
used as redundant VLAN uplinks to TORs.

Figure 10-6. One Suggested Link Setup for Bare-Metal NSX Edge Networking

(The figure shows a bare metal NSX Edge transport node with six physical NICs: NICs 1 and 2
form a management bond for eth0 and eth1, NICs 3 and 4 form an overlay bond carrying fp-eth0
and fp-eth1 on the overlay N-VDS, and NICs 5 and 6 carry fp-eth2 and fp-eth3 on two separate
VLAN uplink N-VDS switches.)

NSX Edge Uplink Redundancy


NSX Edge uplink redundancy allows two VLAN equal-cost multipath (ECMP) uplinks to be used
on the NSX Edge-to-external TOR network connection.

When you have two ECMP VLAN uplinks, you must also have two TOR switches for high
availability and fully meshed connectivity. Each VLAN logical switch has an associated VLAN
ID.


When you add an NSX Edge to a VLAN transport zone, a new N-VDS is installed. For example,
if you add an NSX Edge node to four VLAN transport zones, as shown in the figure, four N-VDS
get installed on the NSX Edge.

Figure 10-7. One Suggested ECMP VLAN Setup for NSX Edges to TORs
(The figure shows two NSX Edge nodes, each running a tier-0 logical router, connected to two
ToR switches through four VLAN transport zones (VLAN 100, 200, 300, and 400) and their
corresponding VLAN logical switches, providing redundant ECMP uplinks.)

Note For an Edge VM deployed on an ESXi host that has the vSphere Distributed Switch (vDS)
and not N-VDS, you must do the following:

n On a vDS switch running a version prior to (<) 6.6, enable promiscuous mode for the port
connected to the NSX Edge VM virtual NIC that provides VLAN connectivity. These settings are
needed to support bridging or L2VPN and DHCP functionality for VLAN networks.

n On a vDS switch running a version equal to or greater than (>=) 6.6, enable MAC learning
and disable promiscuous mode. These settings ensure that packets are received at the
destination when the destination MAC address does not match the virtual NIC effective MAC
address. These settings also ensure that packets are received at destinations that are on an NSX
segment. These settings are needed to support bridging or L2VPN and DHCP functionality
for VLAN networks.

n Enable forged transmits on the vDS switch. Forged transmits allow sending packets with a
source MAC address that does not match the virtual NIC effective MAC address. These settings are
needed to support bridging or L2VPN and DHCP functionality for VLAN networks.

NSX Edge Installation Methods


Install NSX Edge on an ESXi host using the NSX Manager UI (recommended method), the vSphere
Web Client (from the UI or with the vSphere command-line OVF tool), or on physical servers.


NSX Edge Installation Methods


Installation Method Instructions

NSX Manager (recommended method to install an NSX Edge VM appliance only)
n Ensure NSX Edge network requirements are met. See NSX Edge Installation Requirements.
n Create an NSX Edge transport node. See Create an NSX Edge Transport Node.
n Create an NSX Edge cluster. See Create an NSX Edge Cluster.

vSphere web client or vSphere command-line OVF tool
n Ensure NSX Edge network requirements are met. See NSX Edge Installation Requirements.
n Choose the vSphere web client or the vSphere command-line OVF tool to install NSX Edge.
n (Web Client) Install NSX Edge on ESXi. See Install an NSX Edge on ESXi Using the vSphere GUI.
n (Command-line OVF tool) Install NSX Edge on ESXi. See Install NSX Edge on ESXi Using the Command-Line OVF Tool.
n Join NSX Edge with the Management Plane. See Join NSX Edge with the Management Plane.
n Configure an NSX Edge as a transport node. See Edit NSX Edge Transport Node Configuration.
n Create an NSX Edge cluster. See Create an NSX Edge Cluster.

Physical server (Automated or Interactive mode using ISO file) or NSX Edge VM appliance
Install NSX Edge using an ISO file with PXE on physical servers. Note that the PXE boot installation procedure is not supported from NSX Manager.
n Ensure NSX Edge network requirements are met. See NSX Edge Installation Requirements.
n Prepare the PXE server. See Prepare a PXE Server for Bare Metal NSX Edge Installation. Choose one of the supported installation methods:
n (Automated installation) Install NSX Edge using an ISO file on physical servers. See Install Bare Metal NSX Edge Automatically using ISO File.
n (Automated installation) Install NSX Edge using an ISO file as a virtual appliance. See Install NSX Edge via ISO File as a Virtual Appliance.
n (Manual installation) Manually install NSX Edge using an ISO file. See Install Bare Metal NSX Edge Interactively using ISO File.
n Join NSX Edge with the Management Plane. See Join NSX Edge with the Management Plane.
n Configure an NSX Edge as a transport node. See Edit NSX Edge Transport Node Configuration.
n Create an NSX Edge cluster. See Create an NSX Edge Cluster.

Create an NSX Edge Transport Node


NSX Edge nodes are service appliances with pools of capacity, dedicated to running network and
security services.

NSX Edge nodes, when configured as transport nodes, host Tier-0 and Tier-1 gateways. They can
be instantiated as a bare metal appliance or in virtual machine form factor. They are grouped in
one or several clusters, each cluster representing a pool of capacity.


An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. An
NSX Edge belongs to at least one VLAN transport zone to provide the uplink access.

Note If you plan to create transport nodes from a template VM, make sure that there are
no certificates on the host in /etc/vmware/nsx/. nsx-proxy does not create a certificate if a
certificate already exists.

Important When you deploy an Edge Node through NSX Manager, the system records
the node's MO-REF. This MO-REF is required to make requests to VMware vCenter for any
subsequent operations that need to be performed on the node, such as redeploy and delete.
However, the MO-REF could change as a result of customer inventory operations in VMware
vCenter. If the MO-REF changes, the NSX operations for that edge node will fail. For example, an
edge node redeploy will fail to remove the old node, and the new node will be created with the
same IP as the old one. To help you mitigate this issue, the system generates some alarms. For
more information about these alarms, see the NSX Administration Guide.

Prerequisites

n Transport zones must be configured. See Create Transport Zones.

n Verify that compute manager is configured. See Add a Compute Manager.

n An uplink profile must be configured or you can use the default uplink profile for NSX Edge
nodes. See Create an Uplink Profile.

n An IP pool must be configured or must be available in the network deployment. See Create
an IP Pool for Tunnel Endpoint IP Addresses.

n Prepare uplinks. For example, distributed port groups as trunk in vCenter Server or NSX
Segments in NSX.

n Create distributed trunk port groups in VMware vCenter for management, TEP and
overlay networks if you plan to connect NSX Edge network interfaces to a VDS in
VMware vCenter.

n Create VLAN trunk segments in NSX if you plan to connect NSX Edge network interfaces
to NSX VLAN segments or logical switches.

n Before you can use NSX Edge VM datapath interfaces in Uniform Passthrough (UPT) mode,
meet the following conditions:

Note UPT mode is not supported on NSX Edge Bare Metal hosts.

n NSX Edge hardware version is 20 (vmx-20) or later. Previous NSX Edge hardware
versions do not support UPT mode.

n Verify that the memory reservation on the configured NSX Edge is set to 100%.

n From the vSphere Web Client, enable UPT on the NSX Edge VM network adapter. See
the Change the Virtual Machine Network Adapter Configuration topic in vSphere Virtual
Machine Administration guide.


n At least one of the NSX Edge VM datapath interfaces must be backed by an ESXi
host that hosts a Data Processing Unit-based SmartNIC. A SmartNIC is a NIC card that
provides network traffic processing using a Data Processing Unit (DPU), a programmable
processor on the NIC card, in addition to the traditional functions of a NIC card. For more
information related to DPU, see NSX on vSphere Lifecycle Manager with VMware vSphere
Distributed Services Engine.

n Starting with NSX [Link], NSX Edge VM hardware version will no longer default to
[Link] 13. NSX Edge VM hardware will depend on the underlying version
of the ESXi host. VM hardware versions compatible with ESXi hosts are listed in KB article
2007240.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Nodes > Edge Transport Nodes > Add Edge Node.

3 Type a name for the NSX Edge.

4 Type the Host name or FQDN from VMware vCenter.

5 Select the form factor for the NSX Edge VM appliance.


6 To customize CPU and memory allocated to an NSX Edge VM appliance, tune the following
parameters. However, for maximum performance NSX Edge VM appliance must be assigned
100% of the available resources.

Caution If you customize the resources allocated to the NSX Edge VM, set the reservation
back to 100% later to get maximum performance.

Option Description

Memory Reservation (%) Reservation percentage is relative to the pre-defined value in the form
factor.
100 indicates 100% of memory is reserved for the NSX Edge VM.
If you enter 50, it indicates that 50% of the allocated memory is reserved for
the Edge transport node.

Note If you want to use NSX Edge VM datapath interfaces in UPT mode,
reserve 100% of the allocated memory for the NSX Edge transport node.

CPU Reservation Priority Select the number of shares to be allocated to an NSX Edge VM relative to
other VMs that are contending for shared resources.
The following shares are for an NSX Edge VM in Medium form factor:
n Low - 2000 shares
n Normal - 4000 shares
n High - 8000 shares
n Extra High - 10000 shares

CPU Reservation (MHz) Caution Unless you need fine grained control over CPU reservations,
do not use this field. Instead, change CPU reservations from the CPU
Reservation Priority field.

The maximum CPU reservation value must not exceed the number of vCPUs
multiplied by the normal CPU operation rate of the physical CPU core.
If the MHz value entered exceeds the maximum CPU capacity of the physical
CPU cores, the NSX Edge VM might fail to start even though the allocation
was accepted.
For example, consider a system with two Intel Xeon E5-2630 CPUs. Each
CPU contains ten cores running at 2.20 GHz. The maximum CPU allocation
for a VM configured with two vCPUs is 2 x 2200 MHz = 4400 MHz. If
CPU reservation is specified as 8000 MHz, the reconfiguration of the VM
completes successfully. However, the VM fails to power on.

7 In the Credentials window, enter the following details.

n Specify the CLI and the root passwords for the NSX Edge. Your passwords must comply
with the password strength restrictions.

n At least 12 characters

n At least one lower-case letter

n At least one upper-case letter

n At least one digit


n At least one special character

n At least five different characters

n No dictionary words

n No palindromes

n Monotonic character sequences of more than four characters are not allowed

n To enable SSH for an administrator, toggle the Allow SSH Login button.

n To enable SSH for a root user, toggle the Allow Root SSH Login button.

n Enter credentials for the Audit role. If you do not enter credentials in the Audit
Credentials section, the audit role remains disabled.

Note After deploying the NSX Edge node, you cannot change the SSH setting for a root
user that you set during deployment. For example, you cannot enable SSH for a root user
if you disabled it during deployment.

8 Enter the NSX Edge details.

Option Description

Compute Manager Select the compute manager from the drop-down menu.
The compute manager is the VMware vCenter registered in the Management
Plane.

Cluster Designate the cluster the NSX Edge is going to join from the drop-down
menu.

Resource Pool or Host Assign either a resource pool or a specific host for the NSX Edge from the
drop-down menu.

Datastore Select a datastore for the NSX Edge files from the drop-down menu.

9 Enter the NSX Edge interface details.

Option Description

IP Assignment The IP address assigned to the NSX Edge node, which is required to
communicate with NSX Manager and NSX Controller.
Select DHCP or Static IP.
If you select Static, enter the values for:
n Management IP: Enter the IP address of the NSX Edge in CIDR notation.
n Default gateway: Enter the gateway IP address of the NSX Edge.

Management Interface From the drop-down menu, select the interface that connects to the NSX
Edge management network. This interface must either be reachable from
NSX Manager or be in the same management network as NSX
Manager and NSX Controller.
The NSX Edge management interface establishes communication with the
NSX Manager management interface.
The NSX Edge management interface is connected to distributed port
groups or segments.


Search Domain Names Enter domain names in the format '[Link]' or enter an IP address.

DNS Servers Enter the IP address of the DNS server.

NTP Servers Enter the IP address of the NTP server.

Enable UPT mode for datapath interface Enable Uniform Passthrough (UPT) mode on NSX Edge datapath interfaces
to have direct I/O access or passthrough to the virtual network adapter. It
improves overall performance of the NSX Edge node.
Before you enable this field, ensure:
n NSX Edge hardware version is 20 (vmx-20) or later. Earlier hardware
versions do not support UPT mode.
n ESXi host version is 8.0 or later.

Caution To make UPT settings effective on NSX Edge VM virtual network
adapters, NSX Manager puts the NSX Edge VM into maintenance mode, powers
it off, and powers it back on again.

10 Enter the N-VDS information.

Option Description

Edge Switch Name Enter a name for the switch.

Transport Zone Select the transport zones that this transport node belongs to. An NSX Edge
transport node belongs to at least two transport zones, an overlay for NSX
connectivity and a VLAN for uplink connectivity.

Note NSX Edge Nodes support multiple overlay tunnels (multi-TEP) when
the following prerequisites are met:
n TEP configuration must be done on one N-VDS only.
n All TEPs must use the same transport VLAN for overlay traffic.
n All TEP IPs must be in the same subnet and use the same default
gateway.

Uplink Profile Select the uplink profile from the drop-down menu.
The available uplinks depend on the configuration in the selected uplink
profile.


IP Assignment (TEP) IP address is assigned to the NSX Edge switch that is configured. It is used
as the tunnel endpoint of the NSX Edge.
Select Use IP Pool or Use Static IP List for the overlay N-VDS.
n If you select Use Static IP List, specify:
n Static IP List: Enter a list of comma-separated IP addresses to be
used by the NSX Edge.
n Gateway: Enter the default gateway of the TEP, which is used to
route packets to another TEP in another network. For example, if the ESXi TEP
is in [Link]/24 and the NSX Edge TEPs are in [Link]/24, the
default gateway is used to route packets between these networks.
n Subnet mask: Enter the subnet mask of the TEP network used on the
NSX Edge.
n If you selected Use IP Pool for IP assignment, specify the IP pool name.

DPDK Fastpath Interfaces / Virtual NICs Map uplinks to DPDK fastpath interfaces.
Starting with NSX 4.0.1, you can map uplinks to DPDK fastpath interfaces
that are backed by smartNIC-enabled DVPGs, VLAN logical switches or
segments. The prerequisite is to enable UPT mode on NSX Edge VM virtual
network adapters. The UPT mode requires at least one DPDK interface to be
backed by smartNIC-enabled hardware also known as Data Processing Unit
(DPU)-backed networks.

Note If the uplink profile applied to the NSX Edge node is using a Named
Teaming policy, ensure the following condition is met:
n All uplinks in the Default Teaming policy must be mapped to the
corresponding physical network interfaces on the Edge VM for traffic
to flow through a logical switch that uses the Named Teaming policies.

You can configure a maximum of four unique data path interfaces as uplinks
on an NSX Edge VM.
When mapping uplinks to DPDK Fastpath Interfaces, if NSX Edge does not
display all the available interfaces (four in total), it means that either the
additional interface is not yet added to the NSX Edge VM or the uplink
profile has fewer number of uplinks.
For NSX Edge VMs upgraded from an earlier version of NSX to 3.2.1 or
later, invoke the redeploy API call to redeploy the NSX Edge VM. Invoking
the redeploy API ensures the NSX Edge VM deployed recognizes all the
available datapath interfaces in NSX Manager UI. Make sure the Uplink
profile is correctly configured to use additional datapath NIC.
n For autodeployed NSX Edges, call the redeploy API.

POST api/v1/transport-nodes/<transport-node-id>?
action=redeploy

n For manually deployed edges, deploy a new NSX Edge VM. Ensure all
the vmx customizations of the old NSX Edge VM are also done for the
new NSX Edge VM.
Performing vMotion on an NSX Edge VM might result in the NSX Edge
VM going into a failed state, or the additional network adapter might not be
enabled because of memory buffer issues. For troubleshooting memory-
related issues when performing a vMotion on an NSX Edge VM, see https://
[Link]/s/article/76387.


Note
n LLDP profile is not supported on an NSX Edge VM appliance.

n Uplink interfaces are displayed as DPDK Fastpath Interfaces if the NSX Edge is installed
using NSX Manager or on a Bare Metal server.

n Uplink interfaces are displayed as Virtual NICs if the NSX Edge is installed manually using
vCenter Server.

11 View the connection status on the Transport Nodes page.

After adding the NSX Edge as a transport node, the connection status changes to Up in 10-12
minutes.

12 Verify the transport node status by running the get edge-cluster-status, get
managers, get controllers, or get host-switch CLI commands.

13 (Optional) View the transport node by calling the GET /api/v1/transport-nodes/{transport-
node-id}/status | state (deprecated) API call, or:

GET api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/
host-transport-nodes/<host-transport-node-id>/state | status

The default value for both enforcementpoint-id and site-id is default.

Note After an NSX Edge node is migrated to a new host using
vCenter Server, you might find the NSX Manager UI reporting stale configuration
details (Compute, Datastore, Network, SSH, NTP, DNS, Search Domains) of
the NSX Edge. To refresh the latest NSX Edge configuration details on NSX
Manager, run the API command:

POST api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode
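For example, a sketch of that refresh call with curl (manager address, credentials, and node ID
are placeholders):

curl -k -u 'admin:<password>' -X POST \
  "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode"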

Important You can change the IP address of the NSX Edge node from the command
line interface. At the CLI terminal, run set interface eth0 ip <NSXEdge_IPaddress/cidr>
gateway <Gateway_IPaddress> plane mgmt. For example, set interface eth0 ip
<edge-new-ip-address/cidr> gateway <gateway-ip-address> plane mgmt.

What to do next

Add the NSX Edge node to an NSX Edge cluster. See Create an NSX Edge Cluster.

Configure NSX Edge DPDK Interfaces


Configure an NSX Edge VM with an N-VDS switch to carry both overlay and external traffic.

In this topology, NSX Edge VM is configured with an N-VDS switch to carry overlay and external
traffic.


In this topology, the attached uplink profile is configured to use multi-TEP to provide load
balancing for overlay traffic: the default teaming is the Load Balance Source teaming
policy with 'uplink1' and 'uplink2' active, on transport VLAN 200.

(Optional) The attached uplink profile is also configured to use named teaming policies, where
'vlan300-policy' is mapped to uplink1 and 'vlan400-policy' is mapped to uplink2.
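As an illustration, such an uplink profile could be created through the NSX manager API with a
payload along these lines (a sketch only; the profile name, uplink names, transport VLAN, and
exact field set are assumptions to verify against the NSX API reference for your version):

curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  "https://<nsx-manager>/api/v1/host-switch-profiles" -d '{
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "transport_vlan": 200,
    "teaming": {
      "policy": "LOADBALANCE_SRCID",
      "active_list": [
        { "uplink_name": "uplink1", "uplink_type": "PNIC" },
        { "uplink_name": "uplink2", "uplink_type": "PNIC" }
      ]
    },
    "named_teamings": [
      { "name": "vlan300-policy", "policy": "FAILOVER_ORDER",
        "active_list": [ { "uplink_name": "uplink1", "uplink_type": "PNIC" } ] },
      { "name": "vlan400-policy", "policy": "FAILOVER_ORDER",
        "active_list": [ { "uplink_name": "uplink2", "uplink_type": "PNIC" } ] }
    ]
  }'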

To create the topology, follow these steps.

Prerequisites

Procedure

1 Create an uplink profile with a named teaming policy for VLAN networks and default teaming for
overlay networks. If no named teaming policy exists in the uplink profile, the default teaming
policy is used for all networks.


2 Create or modify the VLAN transport zone to use the named teaming policies VLAN300-Policy
and VLAN400-Policy (if using named teaming policies).

3 In the NSX Edge configuration, “Uplink1” (fp-eth0) is vNIC2 on Edge VM and is mapped to use
VLAN 300 Trunk PG and “Uplink2”(fp-eth1) is vNIC3 on Edge VM and is mapped to use VLAN
400 Trunk portgroup.

4 NSX Edge VM interfaces can connect to VDS Portgroups in vCenter or NSX VLAN Segments.

Note If Edge interfaces will be connecting to NSX VLAN Segments, ESXi Hosts (hosting the
Edge VMs) should be configured as Host Transport Nodes and member of VLAN transport
zone.

An example of connecting VM interfaces to NSX VLAN segments.

An example of an NSX Edge VM with interfaces on VMware vCenter VDS Edge Trunk
Portgroups


a If VM interfaces connect to NSX VLAN segments, named-teaming is enabled on the


segments. The diagram shows that External VLAN segment 300 is configured to use a
named teaming policy “Vlan300-Policy” that sends traffic from this VLAN on “Uplink1”
(vNIC2 of Edge VM). "External VLAN segment 400" is configured to use a named teaming
policy “Vlan400-Policy” that sends traffic from this VLAN on “Uplink2” (vNIC3 of Edge
VM).

b If VM interfaces connect to VDS Portgroups in VMware vCenter, then “Trunk1 PG” is


configured to use active uplink as “VDS-Uplink1” and standby uplink as “VDS-Uplink2”.
“Trunk2 PG” is configured to use active uplink as “VDS-Uplink2” and standby uplink
as “VDS-Uplink1”. This configuration ensures that the traffic sent on “External VLAN
Segment 300”uses vNIC2 of Edge VM to exit the Edge VM and then “VDS-Uplink1” and is
sent to the left TOR switch. Similarly, traffic sent on VLAN 400 uses “VDS-Uplink2” and is
sent to the TOR switch on the right.

Important
n ESXi TEP and Edge TEP share the same VLAN: Use this configuration only if the NSX Edge
Trunk 1 portgroup and NSX Edge Trunk 2 portgroup are created from NSX as VLAN
segments, because any traffic between the Edge TEP and its own hypervisor's TEP does
not need to exit the hypervisor.

n ESXi TEP and Edge TEP use different VLANs: Use this configuration if the NSX Edge Trunk 1
portgroup and NSX Edge Trunk 2 portgroup are created as VDS portgroups from VMware
vCenter. Any traffic between the NSX Edge TEP and its hypervisor TEP must exit the
ESXi host. The top of rack switch must then route it back toward the ESXi host.


Manually Deploying NSX Edge Node Outside of NSX


In addition to deploying and configuring NSX Edge from the NSX Manager, you can manually
deploy NSX Edge either as a VM in VMware vCenter or a Bare Metal server.

Install an NSX Edge on ESXi Using the vSphere GUI


You can use the vSphere Web Client or vSphere Client to interactively install an NSX Edge on
ESXi.

Note Starting in NSX 2.5.1, the NSX Edge VM supports vMotion.

Prerequisites

See NSX Edge network requirements in NSX Edge Installation Requirements.

Procedure

1 Locate the NSX Edge node appliance OVA file on the VMware download portal.

Either copy the download URL or download the OVA file onto your computer.

2 In the vSphere Client, select the host on which to install NSX Edge node appliance.

3 Right-click and select Deploy OVF template to start the installation wizard.

4 Enter the download OVA URL or navigate to the saved OVA file, and click Next.

5 Enter a name and location for the NSX Edge node, and click Next.

The name you type appears in the VMware vCenter and vSphere inventory.

6 Select a compute resource for the NSX Edge node appliance, and click Next.

7 For optimal performance, reserve memory for the NSX Edge appliance.

Set the reservation to ensure that NSX Edge has sufficient memory to run efficiently. See NSX
Manager VM and Host Transport Node System Requirements.

8 Review and verify the OVF template details, and click Next.

9 Select a deployment configuration, Small, Medium, Large, or XLarge and click Next.

The Description panel on the right side of the wizard shows the details of selected
configuration.

10 Select storage for the configuration and disk files, and click Next.

a Select the virtual disk format.

b Select the VM storage policy.

c Specify the datastore to store the NSX Edge node appliance files.


11 Select a destination network for each source network.

a For network 0, select the VDS management portgroup.

b For networks 1, 2, and 3, select the previously configured VDS trunk portgroups.

12 Configure IP Allocation settings.

a For IP allocation, specify Static - Manual.

b For IP protocol, select IPv4.

13 Click Next.

The following steps are all located in the Customize Template section of the Deploy OVF
Template wizard.

14 Enter the NSX Edge node system root, CLI admin, and audit passwords.

Note In the Customize Template window, ignore the message All properties have
valid values that is displayed even before you have entered values in any of the fields.
This message is displayed because all parameters are optional, so validation passes even
though you have not entered values in any of the fields.

When you log in for the first time, you are prompted to change the password. This password
change method has strict complexity rules, including the following:

n At least 12 characters

n At least one lower-case letter

n At least one upper-case letter

n At least one digit

n At least one special character

n At least five different characters

n No dictionary words

n No palindromes

n Monotonic character sequences of more than four characters are not allowed

Important The core services on the appliance do not start until a password with sufficient
complexity has been set.

15 (Optional) If you have an available NSX Manager and want to register the NSX Edge
node with the management plane during the OVA deployment, complete the Manager IP,
Username, Password, and Thumbprint.

n Manager IP: Enter the NSX Manager node IP address.

Note Do not register the NSX Edge node with the virtual IP (VIP) address of the
management plane during the OVA deployment.


n Manager Username: Enter the NSX Manager username.

n Manager Password: Enter the NSX Manager password.

n Manager Thumbprint: Enter the NSX Manager thumbprint.

Note An NSX Manager thumbprint is required to join an NSX Edge node to an NSX
Manager. To retrieve thumbprint on an NSX Manager node, run get certificate api
thumbprint.

n Node ID: Leave the field blank. The Node UUID field is only for internal use.

16 (Optional) If you want to deploy the NSX Edge node as an autonomous edge in a L2 VPN
topology, enable the option. An autonomous edge is not managed by NSX. Do not enable
the option if you want to deploy an NSX Edge node that provides centralized edge services
to host transport nodes in an NSX topology.

Note The fields in the External and HA sections are required only when you configure an
autonomous NSX Edge node.

17 Enter the hostname of the NSX Edge.

18 Enter the default gateway, management network IPv4, and management network netmask
address.

Skip any VMC network settings.

19 Enter the DNS Server list, the Domain Search list, and the NTP Server IP or FQDN list.

20 (Optional) Do not enable SSH if you prefer to access NSX Edge using the console. However, if
you want root SSH login and CLI login to the NSX Edge command line, enable the SSH option.

By default, SSH access is disabled for security reasons.

21 (Optional) In the Internal Use Only section, if you want to enable NSX Edge in uniform
passthrough (UPT) or direct access mode to IO devices for improved performance, enable
Datapath UPT Mode Enabled field.

Note Meet these prerequisites before you enable UPT on NSX Edge:

n NSX Edge hardware version is 20 (vmx-20) or later. Earlier hardware versions do not
support UPT mode.

n ESXi host version must be 8.0 or later.

Caution Enabling UPT requires restart of the NSX Edge node.

22 Verify that all your custom OVA template specifications are accurate and click Finish to initiate
the installation.

The installation might take 7-8 minutes.


23 Open the console of the NSX Edge node to track the boot process.

If the console window does not open, make sure that pop-ups are allowed.

24 After the NSX Edge node starts, log in to the CLI with admin credentials.

Note After NSX Edge node starts, if you do not log in with admin credentials for the first
time, the data plane service does not automatically start on the NSX Edge node.

25 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a
VLAN) command to verify that the IP address was applied as expected.

nsx-edge-1> get interface eth0.100

Interface: eth0.100
Address: [Link]/24
MAC address: [Link]
MTU: 1500
Default gateway: [Link]
Broadcast address: [Link]
...

Note When bringing up NSX Edge nodes on a non-NSX managed host, verify that the MTU
setting is set to 1600 (instead of 1500) on the physical host switch for the data NIC.
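If the data NIC is attached to a standard vSwitch on that host, one way to check and raise the
MTU is from the ESXi command line (a minimal sketch; the vSwitch name is a placeholder, and
on a vSphere Distributed Switch the MTU is changed from vCenter instead):

# Show the current vSwitch configuration, including its MTU
esxcli network vswitch standard list
# Raise the MTU on the vSwitch that carries the NSX Edge data NIC
esxcli network vswitch standard set -v vSwitch1 -m 1600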

26 Run the get managers command to verify that the NSX Edge node is registered.

- [Link] Connected (NSX-RPC)


- [Link] Connected (NSX-RPC)
- [Link] Connected (NSX-RPC)

27 If NSX Edge is not registered with the management plane, see Join NSX Edge with the
Management Plane.

28 Verify that the NSX Edge node has the required connectivity.

If you enabled SSH, make sure that you can SSH to your NSX Edge node and verify the
following:

n You can ping your NSX Edge node management interface.

n From the NSX Edge node, you can ping the node's default gateway.

n From the NSX Edge node, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.

n From the NSX Edge node, you can ping the DNS server and NTP Server IP or FQDN list.

29 Troubleshoot connectivity problems.

Note If connectivity is not established, make sure the VM network adapter is in the proper
network or VLAN.


By default, the NSX Edge node datapath claims all virtual machine NICs except the
management NIC (the one that has an IP address and a default route). If you incorrectly
assigned a NIC as the management interface, follow these steps to use DHCP to assign
a management IP address to the correct NIC.
a Log in to the NSX Edge CLI and type the stop service dataplane command.

b Type the set interface interface dhcp plane mgmt command.

c Place interface into the DHCP network and wait for an IP address to be assigned to that
interface.
d Type the start service dataplane command.

The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in
the get interfaces and get physical-port commands on the NSX Edge node.

What to do next

Configure NSX Edge as a transport node. See Edit NSX Edge Transport Node Configuration.

Install NSX Edge on ESXi Using the Command-Line OVF Tool


If you prefer to automate NSX Edge installation, you can use the VMware OVF Tool, which is a
command-line utility.
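Before assembling the full deployment command, you can confirm that the tool is installed and
meets the version prerequisite listed below (4.3 or later):

# Verify that ovftool is on the path and check its version
ovftool --version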

Prerequisites

n Verify that the system requirements are met. See System Requirements.

n Verify that the required ports are open. See Ports and Protocols.

n Verify that a datastore is configured and accessible on the ESXi host.

n Verify that you have the IP address and gateway, DNS server IP addresses, domain search
list, and the NTP Server IP or FQDN for the NSX Manager to use.

n Create a management VDS and target VM port group in vCenter. Place the NSX appliances
onto this management VDS port group network. See Prepare a vSphere Distributed Switch
for NSX.

Multiple management networks can be used as long as the NSX Manager nodes have
consistent connectivity and the recommended latency between them.

Note If you plan to use Cluster VIP, all NSX Manager appliances should belong to same
subnet.


n Plan your NSX Manager IP and NSX Manager Cluster VIP addressing scheme.

Note Verify that you have the hostname for NSX Manager to use. The Hostname format
must be nsx-manager-fqdn@[Link]. This format is required if NSX installation is dual
stack (IPv4, IPv6) and/or if planning to configure CA-signed certificates.

n See NSX Edge network requirements in NSX Edge Installation Requirements.

n Verify that you have adequate privileges to deploy an OVF template on the ESXi host.

n Verify that hostnames do not include underscores. Otherwise, the hostname is set to
localhost.
n OVF Tool version 4.3 or later.

n Know parameters that you can use to deploy a NSX Edge VM and join it to the management
plane.

Field Name OVF Parameter Field Type

System root password nsx_passwd_0 Required to install NSX Edge.

CLI admin password nsx_cli_passwd_0 Required to install NSX Edge.

CLI audit password nsx_cli_audit_passwd_0 Optional

CLI admin username nsx_cli_username Optional

CLI audit username nsx_cli_audit_username Optional

NSX Manager IP mpIp Required to join NSX Edge VM to NSX Manager.

NSX Manager token mpToken Required to join NSX Edge VM to NSX Manager.
To retrieve the token, on the NSX Manager, run POST [Link] registration-token.

NSX Manager thumbprint mpThumbprint Required to join NSX Edge VM to NSX Manager.
To retrieve the thumbprint, on the NSX Manager node, run get certificate api thumbprint.

Node Id mpNodeId Only for internal use.

Hostname nsx_hostname Optional

Default IPv4 gateway nsx_gateway_0 Optional

Management network IP address nsx_ip_0 Optional

Management network netmask nsx_netmask_0 Optional

DNS servers nsx_dns1_0 Optional

Domain Search suffixes nsx_domain_0 Optional

NTP Servers nsx_ntp_0 Optional


Is SSH service enabled nsx_isSSHEnabled Optional

Is SSH enabled for root login nsx_allowSSHRootLogin Optional

Is autonomous Edge is_autonomous_edge Optional. Valid values: True, False


(default)
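As a sketch of gathering the join parameters before running ovftool: the thumbprint comes from
the NSX Manager CLI command noted in the table above, and the registration token from a
manager API call (the token endpoint path shown here is an assumption based on the typical NSX
API layout; verify it against the NSX API reference):

# On the NSX Manager CLI, retrieve the API certificate thumbprint (value for mpThumbprint)
nsx-manager> get certificate api thumbprint

# Request a registration token (value for mpToken); the endpoint path is an assumption
curl -k -u 'admin:<password>' -X POST "https://<nsx-manager>/api/v1/aaa/registration-token"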

Procedure

u For a standalone host, run the ovftool command with the appropriate parameters.

C:\Users\Administrator\Downloads>ovftool
--name=nsx-edge-1
--deploymentOption=medium
--X:injectOvfEnv
--X:logFile=[Link]
--allowExtraConfig
--datastore=ds1
--net:"Network 0=Mgmt"
--net:"Network 1=nsx-tunnel"
--net:"Network 2=vlan-uplink"
--net:"Network 3=vlan-uplink"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=[Link]
--prop:nsx_netmask_0=[Link]
--prop:nsx_gateway_0=[Link]
--prop:nsx_dns1_0=[Link]
--prop:nsx_domain_0=[Link]
--prop:nsx_ntp_0=[Link]
--prop:nsx_isSSHEnabled=True
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-edge
--prop:mpIp=<NSXManager-IP>
--prop:mpToken=<NSXManager-Token>
--prop:mpThumbprint=<NSXManager-Thumbprint>
--prop:is_autonomous_edge=False
<path/url to nsx component ova>
vi://root:<password>@[Link]

Opening OVA source: nsx-<component>.ova


The manifest validates
Source is signed and the certificate validates
Opening VI target: vi://root@[Link]
Deploying to VI: vi://root@[Link]
Transfer Completed


Powering on VM: nsx-edge-1


Task Completed
Completed successfully

u For a host managed by VMware vCenter, run the ovftool command with the appropriate
parameters.

C:\Users\Administrator\Downloads>ovftool
--name=nsx-edge-1
--deploymentOption=medium
--X:injectOvfEnv
--X:logFile=[Link]
--allowExtraConfig
--datastore=ds1
--net:"Network 0=Mgmt"
--net:"Network 1=nsx-tunnel"
--net:"Network 2=vlan-uplink"
--net:"Network 3=vlan-uplink"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=[Link]
--prop:nsx_netmask_0=[Link]
--prop:nsx_gateway_0=[Link]
--prop:nsx_dns1_0=[Link]
--prop:nsx_domain_0=[Link]
--prop:nsx_ntp_0=[Link]
--prop:nsx_isSSHEnabled=True
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-edge
--prop:mpIp=<NSXManager-IP>
--prop:mpToken=<NSXManager-Token>
--prop:mpThumbprint=<NSXManager-Thumbprint>
--prop:is_autonomous_edge=False
<path/url to nsx component ova>
vi://administrator@[Link]:<password>@[Link]/?ip=[Link]

Opening OVA source: nsx-<component>.ova


The manifest validates
Source is signed and the certificate validates
Opening VI target: vi://administrator@[Link]@[Link]:443/
Deploying to VI: vi://administrator@[Link]@[Link]:443/
Transfer Completed


Powering on VM: nsx-edge-1


Task Completed
Completed successfully

u For optimal performance, reserve memory for the appliance.

Set the reservation to ensure that the NSX Edge has sufficient memory to run efficiently. See
NSX Manager VM and Host Transport Node System Requirements.

u Open the console of the NSX Edge node to track the boot process.

u After the NSX Edge node starts, log in to the CLI with admin credentials.

u Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a
VLAN) command to verify that the IP address was applied as expected.

nsx-edge-1> get interface eth0.100

Interface: eth0.100
Address: [Link]/24
MAC address: [Link]
MTU: 1500
Default gateway: [Link]
Broadcast address: [Link]
...

Note When bringing up NSX Edge nodes on a non-NSX managed host, verify that the MTU
setting is set to 1600 (instead of 1500) on the physical host switch for the data NIC.

u Verify that the NSX Edge node has the required connectivity.

If you enabled SSH, make sure that you can SSH to your NSX Edge node and verify the
following:

n You can ping your NSX Edge node management interface.

n From the NSX Edge node, you can ping the node's default gateway.

n From the NSX Edge node, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.

n From the NSX Edge node, you can ping the DNS server and NTP Server IP or FQDN list.

u Troubleshoot connectivity problems.

Note If connectivity is not established, make sure the VM network adapter is in the proper
network or VLAN.


By default, the NSX Edge node datapath claims all virtual machine NICs except the
management NIC (the one that has an IP address and a default route). If you incorrectly
assigned a NIC as the management interface, follow these steps to use DHCP to assign
a management IP address to the correct NIC.
a Log in to the NSX Edge CLI and type the stop service dataplane command.

b Type the set interface interface dhcp plane mgmt command.

c Place interface into the DHCP network and wait for an IP address to be assigned to that
interface.
d Type the start service dataplane command.

The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in
the get interfaces and get physical-port commands on the NSX Edge node.

What to do next

If you did not join the NSX Edge with the management plane, see Join NSX Edge with the
Management Plane.

Install NSX Edge via ISO File as a Virtual Appliance


You can manually install NSX Edge using an ISO file.

Important The NSX component virtual machine installations include VMware Tools. Removal or
upgrade of VMware Tools is not supported for NSX appliances.

Prerequisites

n See NSX Edge network requirements in NSX Edge Installation Requirements.

Procedure

1 Go to [Link] and navigate to Products and Accounts →


Products → All Products.

2 Search VMware NSX and select the appropriate product version.

3 Locate and download the ISO file for NSX Edge.

4 In the vSphere Client, select the host datastore.

5 Select Files > Upload Files > Upload a File to a Datastore, browse to the ISO file, and upload.

If you are using a self-signed certificate, open the IP address in a browser, accept the
certificate, and then upload the ISO file again.

6 In the vSphere Client inventory, select the host to which you uploaded the ISO file.

7 Right-click and select New Virtual Machine .

8 Select a compute resource for the NSX Edge appliance.


9 Select a datastore to store the NSX Edge appliance files.

10 Accept the default compatibility for your NSX Edge VM.

11 Select the supported ESXi operating systems for your NSX Edge VM.

12 Configure the virtual hardware.

n New Hard Disk - 200 GB

n New Network - VM Network

n New CD/DVD Drive - Datastore ISO File

You must click Connect to bind the NSX Edge ISO file to the VM.

13 Power on the new NSX Edge VM.

14 During ISO boot, open the VM console and choose Automated installation.

There might be a pause of 10 seconds after you press Enter.

During installation, the installer prompts you to enter a VLAN ID for the management
interface. Select Yes and enter a VLAN ID to create a VLAN subinterface for the network
interface. Select No if you do not want to configure VLAN tagging.

During power-on, the VM requests a network configuration via DHCP. If DHCP is not available
in your environment, the installer prompts you for IP settings.

By default, the root login password is vmware, and the admin login password is default.

When you log in for the first time, you are prompted to change the password. This password
change method has strict complexity rules, including the following:

n At least 12 characters

n At least one lower-case letter

n At least one upper-case letter

n At least one digit

n At least one special character

n At least five different characters

n No dictionary words

n No palindromes

n A monotonic character sequence of more than four characters is not allowed

Important The core services on the appliance do not start until a password with sufficient
complexity has been set.

15 For optimal performance, reserve memory for the NSX Edge appliance.

Set the reservation to ensure that NSX Edge has sufficient memory to run efficiently. See NSX
Manager VM and Host Transport Node System Requirements.


16 After the NSX Edge node starts, log in to the CLI with admin credentials.

Note After the NSX Edge node starts, if you do not log in with admin credentials for the first
time, the data plane service does not start automatically on the NSX Edge node.

17 There are three ways to configure a management interface.

Note If the server uses Mellanox NIC cards, do not configure the NSX Edge with an in-band
management interface.

n Untagged interface. This interface type creates an out-of-band management interface.

(DHCP) set interface eth0 dhcp plane mgmt

(Static) set interface eth0 ip <CIDR> gateway <gateway-ip> plane mgmt

n Tagged interface.

set interface eth0 vlan <vlan_ID> plane mgmt

(DHCP) set interface eth0.<vlan_ID> dhcp plane mgmt

(Static) set interface eth0.<vlan_ID> ip <CIDR> gateway <gateway-ip> plane mgmt

n In-band interface.

set interface mac <mac_address> vlan <vlan_ID> in-band plane mgmt

(DHCP) set interface eth0.<vlan_ID> dhcp plane mgmt

(Static) set interface eth0.<vlan_ID> ip <CIDR> gateway <gateway-ip> plane mgmt

18 (Optional) Start the SSH service. Run start service ssh.

19 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a
VLAN) command to verify that the IP address was applied as expected.

nsx-edge-1> get interface eth0.100

Interface: eth0.100
Address: [Link]/24
MAC address: [Link]
MTU: 1500
Default gateway: [Link]
Broadcast address: [Link]
...

Note When bringing up NSX Edge nodes on a non-NSX managed host, verify that the MTU is set
to 1600 (instead of 1500) on the physical host switch for the data NIC.


20 (Tagged interface and In-band interface) Any existing VLAN management interface must be
cleared before creating a new one.

clear interface eth0.<vlan_ID>

To set a new interface, refer to step 17.

21 Verify that the NSX Edge node has the required connectivity.

If you enabled SSH, make sure that you can SSH to your NSX Edge node and verify the
following:

n You can ping your NSX Edge node management interface.

n From the NSX Edge node, you can ping the node's default gateway.

n From the NSX Edge node, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.

n From the NSX Edge node, you can ping the DNS server and NTP Server IP or FQDN list.

22 Troubleshoot connectivity problems.

Note If connectivity is not established, make sure the VM network adapter is in the proper
network or VLAN.

By default, the NSX Edge node datapath claims all virtual machine NICs except the
management NIC (the one that has an IP address and a default route). If you incorrectly
assigned a NIC as the management interface, follow these steps to use DHCP to assign a
management IP address to the correct NIC.
a Log in to the NSX Edge CLI and type the stop service dataplane command.

b Type the set interface <interface-name> dhcp plane mgmt command.

c Place the interface into the DHCP network and wait for an IP address to be assigned to that
interface.
d Type the start service dataplane command.

The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in
the get interfaces and get physical-port commands on the NSX Edge node.

What to do next

If you did not join the NSX Edge with the management plane, see Join NSX Edge with the
Management Plane.

Install NSX Edge on Bare Metal


Use a PXE server to automate the installation of NSX Edge on a bare metal server, or use an ISO
file to install NSX Edge on a bare metal server.


The NSX Edge bare metal node is a dedicated physical server that runs a special version of the
NSX Edge software. The bare metal NSX Edge node requires a NIC supporting the Data Plane
Development Kit (DPDK). VMware maintains a list of compatible NICs from various vendors.
See the Bare Metal Server System Requirements.

The NSX Edge nodes in the diagram are illustrated with a single N-VDS switch, configured with
two datapath interfaces and two dedicated NICs for high availability of the management plane.

Bare Metal NSX Edge nodes can be configured with more than two datapath interfaces depending
on the number of NICs available on the server. Bare Metal NSX Edge nodes use pNICs as uplinks
that directly connect to the top of rack switches. To provide high availability for bare metal edge
management, configure two pNICs on the server as an active/standby Linux bond.

The CPUs on the edge are assigned as either datapath (DP) CPUs that provide routing and
stateful services or services CPUs that provide load balancing and VPN services.


When a bare metal Edge node is installed, a dedicated interface is retained for management. This
configuration is called out-of-band management. If redundancy is desired, two NICs can be used
for management plane high availability. Bare metal Edge also supports in-band management
where management traffic can leverage an interface being used for overlay or external (N-S)
traffic as in the diagram.

For bare metal edge nodes, all the cores from the first NUMA node of a multi-NUMA node server are
assigned to the NSX datapath. If the bare metal server has only one NUMA node, then 50% of the
cores are assigned to the NSX datapath.

Bare Metal NSX Edge nodes support a maximum of two NUMA nodes.

Important VMware does not support sub-NUMA clustering, a server hardware feature, on a
Bare Metal NSX Edge because it may result in a shortage of heap memory.

Sub-NUMA clustering changes the heap memory layout from two NUMA domains to four NUMA
domains. This change limits the size of heap memory allocated to each socket and causes a
shortage of the heap memory that the datapath requires on socket 0. You must
disable sub-NUMA functionality in the BIOS. Any changes made to the BIOS require a reboot.


To check if the sub-NUMA functionality is enabled, log in to the bare metal NSX Edge as root and
run lscpu. The output is also captured in the support bundle. If there are more than two NUMA
nodes, sub-NUMA functionality is enabled and must be disabled in the BIOS.
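For example, a quick way to check the number of NUMA nodes from the root shell is the following sketch; the exact lscpu output varies by server.

lscpu | grep "NUMA node(s)"
NUMA node(s):          2

A value greater than two indicates that sub-NUMA clustering is enabled and must be disabled in the BIOS.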

Note
1 When configuring LACP LAG bonds on Bare Metal NSX Edge nodes, the datapath cores (backing
NICs) should belong to the same NUMA node for load balancing to occur on both devices. If
the devices forming the bond span multiple NUMA nodes, the bond only uses the network device
whose CPU is on the local NUMA node (0) to transmit packets. As a result, not all devices are
used for balancing traffic that is sent out of the bond device.

In this case, failover still works because failover is independent of load balancing. If the Ethernet
device attached to the local NUMA node is down, then the bond sends traffic to the other device
even though it is not NUMA local. The load-balancing optimization does not impact failover
functionality.

Run the get dataplane command to view the NUMA node associated with each datapath
interface. To move the NICs associated with the datapath to a single NUMA node, physical
reconfiguration of the server via the BIOS is required.

Prerequisites

n Disable sub-NUMA clustering by editing the BIOS settings. NSX does not support sub-NUMA
clustering. For more details, refer to the KB article, [Link]

n Starting with NSX v3.1.3, on Bare Metal NSX Edge nodes, you do not need to disable
hyperthreading. Hyperthreading is automatically disabled.

Prepare a PXE Server for Bare Metal NSX Edge Installation


PXE is made up of several components: DHCP, HTTP, and TFTP. This procedure demonstrates
how to set up a PXE server on Ubuntu.

DHCP dynamically distributes IP settings to NSX components, such as NSX Edge. In a PXE
environment, the DHCP server allows NSX Edge to request and receive an IP address
automatically.

TFTP is a file-transfer protocol. The TFTP server is always listening for PXE clients on the
network. When it detects any network PXE client asking for PXE services, it provides the NSX
component ISO file and the installation settings contained in a preseed file.

Prerequisites

n A PXE server must be available in your deployment environment. The PXE server can be set
up on any Linux distribution.

n Verify that the preseeded configuration file has the parameters [Link]=0 and
biosdevname=0 set after -- to persist after reboot.


n See Bare Metal Server System Requirements.

n See NSX Edge network requirements in NSX Edge Installation Requirements.

Procedure

1 (Optional) Use a kickstart file to set up new TFTP or DHCP services on an Ubuntu server.

A kickstart file is a text file that contains CLI commands that you run on the appliance after
the first boot.

Name the kickstart file based on the PXE server it is pointing to. For example:

[Link]

The file must be copied to your Web server, for example at /var/www/html/nsx-edge/
[Link].

In the kickstart file, you can add CLI commands. For example, to configure the IP address of
the management interface:

stop service dataplane


set interface eth0 <ip-cidr-format> plane mgmt
start service dataplane

To change the admin user password:

set user admin password <new_password> old-password <old-password>

If you specify a password in the [Link] file, use the same password in the kickstart file.
Otherwise, use the default password, which is "default".

To join the NSX Edge with the management plane:

join management-plane <manager-ip> thumbprint <manager-thumbprint> username <manager-username> password <manager-password>

2 Create two interfaces, one for management and another for DHCP and TFTP services.

Make sure that the DHCP/TFTP interface is in the same subnet that the NSX Edge resides in.

For example, if the NSX Edge management interfaces are going to be in the [Link]/24
subnet, place eth1 in that same subnet.

# The loopback network interface


auto lo
iface lo inet loopback

# PXE server's management interface


auto eth0
iface eth0 inet static
address [Link]
gateway [Link]
netmask [Link]


dns-nameservers [Link]

# PXE server's DHCP/TFTP interface


auto eth1
iface eth1 inet static
address [Link]
gateway [Link]
netmask [Link]
dns-nameservers [Link]

3 Install DHCP server software and configure required settings to set up the PXE server. For
more details, see Linux documentation.
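For example, on Ubuntu the DHCP server can be installed and given a minimal configuration along the following lines. This is a sketch only; the isc-dhcp-server package name, the 192.0.2.0/24 subnet, and the address range are assumptions that you must adapt to the DHCP/TFTP interface configured in step 2.

sudo apt-get install isc-dhcp-server

# /etc/dhcp/dhcpd.conf (minimal example)
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.200;
  option routers 192.0.2.1;
  option domain-name-servers 192.0.2.10;
}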

4 Install the Apache server and TFTP and other components required to configure the PXE
server.
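For example, on Ubuntu one possible set of packages for the web, TFTP, and PXE boot loader components is shown below. The package list is an assumption and can vary by distribution and release; the TFTP root directory may also need to be set to /var/lib/tftpboot to match the paths used later in this procedure.

sudo apt-get install apache2 tftpd-hpa pxelinux syslinux-common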

5 Copy or download the NSX Edge installer ISO file to a temporary folder.

6 Mount the ISO file and copy the install components to the TFTP server and the Apache
server.

sudo mount -o loop ~/nsx-edge.<build>.iso /mnt


cd /mnt
sudo cp -fr install/netboot/* /var/lib/tftpboot/
sudo mkdir /var/www/html/nsx-edge
sudo cp -fr /mnt/* /var/www/html/nsx-edge/


7 (Optional) Edit the /var/www/html/nsx-edge/[Link] file to modify the encrypted passwords.

You can use a Linux tool such as mkpasswd to create a password hash.

sudo apt-get install whois


sudo mkpasswd -m sha-512

Password:
$6$SUFGqs[...]FcoHLijOuFD

a To modify the root password, edit /var/www/html/nsx-edge/[Link] and search for the following line:

d-i passwd/root-password-crypted password $6$tgmLNLMp$9BuAHhN...

b Replace the hash string.

You do not need to escape any special character such as $, ', ", or \.

c Add the usermod command to [Link] to set the password for root, admin, or
both.

For example, add the following command.

usermod --password '\$6\$VS3exId0aKmzW\$U3g0V7BF0DXlmRI.LR0v/VgloxVotEDp00bO2hUF8u/' root; \
usermod --password '\$6\$VS3exId0aKmzW\$U3g0V7BF0DXlmRI.LR0v/VgloxVotEDp00bO2hUF8u/' admin; \

The hash string is an example. You must escape all special characters. The root password
in the first usermod command replaces the password that is set in d-i passwd/root-
password-crypted password $6$tgm....

If you use the usermod command to set the password, the user is not prompted to
change the password at the first login. Otherwise, the user must change the password at
the first login.

8 Add the following lines to the /var/lib/tftpboot/[Link]/default file.

Replace [Link] with the IP address of your TFTP server.

label nsxedge
kernel ubuntu-installer/amd64/linux
ipappend 2
append netcfg/dhcp_timeout=60 auto=true priority=critical vga=normal
partman-lvm/device_remove_lvm=true netcfg/choose_interface=auto debian-installer/
allow_unauthenticated=true preseed/url=[Link] mirror/
country=manual mirror/http/hostname=[Link] nsx-kickstart/url=[Link]
nsx-edge/[Link] mirror/http/directory=/nsx-edge initrd=ubuntu-installer/amd64/
[Link] mirror/suite=bionic netcfg/do_not_use_netplan=true --


9 Add the following lines to the /etc/dhcp/[Link] file.

Replace [Link] with the IP address of your DHCP server.

allow booting;
allow bootp;

next-server [Link]; #Replace this IP address


filename "pxelinux.0";

10 Restart the DHCP service.

sudo service isc-dhcp-server restart

Note If an error is returned, for example: "stop: Unknown instance: start: Job failed to
start", run sudo /etc/init.d/isc-dhcp-server stop and then sudo /etc/init.d/isc-
dhcp-server start. The sudo /etc/init.d/isc-dhcp-server start command returns
information about the source of the error.

What to do next

Install NSX Edge on bare metal using an ISO file. See Install Bare Metal NSX Edge Automatically
using ISO File.

Install Bare Metal NSX Edge Automatically using ISO File


Use the automated installation of NSX Edge on the bare metal servers because it automatically
configures the NSX Edge node according to the hardware configuration and the most frequently
used NSX Edge configuration. After the installation is complete, configure the management
interface of the nodes as out-of-band or as in-band and provide the final IP configuration of
the management interface statically or over DHCP.

Prerequisites

n Both BIOS and UEFI boot modes are supported.

n See NSX Edge network requirements in NSX Edge Installation Requirements.

n See Bare Metal Server System Requirements.

n If the bare metal servers for NSX Edge nodes have integrated SD card devices, disable those
SD cards. For more information, see [Link]

n To prepare a bare metal server as an NSX Edge node, ensure there is a minimum of 200 GB of
disk space.

For more details about the supported NICs, see VMware Compatibility Guide.

Procedure

1 Go to [Link] and navigate to Products and Accounts → Products → All Products.


2 Search VMware NSX and select the appropriate product version.

3 Locate and download the ISO file for NSX Edge for Bare Metal.

4 Log in to the Integrated Lights-Out (ILO) interface of the bare metal server.

5 Click Launch in the virtual console preview.

6 Select Virtual Media > Connect Virtual Media.

Wait a few seconds for the virtual media to connect.

7 Select Virtual Media > Map CD/DVD and browse to the ISO file.

8 Select Next Boot > Virtual CD/DVD/ISO.

9 Select Power > Reset System (warm boot).

The installation duration depends on the bare metal environment.

10 Choose Automated installation.

There might be a pause of 10 seconds after you press Enter.

11 Select the applicable primary network interface. This is for the management network
interface.

During power-on, the installer requests a network configuration. Select static IP settings. If
a static IP is not available, use DHCP.

By default, the root login password is vmware, and the admin login password is default.

Note During deployment, the installer automatically selects the largest disk to install NSX on
the NSX Edge node.

12 Open the console of the NSX Edge node to track the boot process.

If the console window does not open, make sure that pop-ups are allowed.

13 After the NSX Edge node starts, log in to the CLI with admin credentials.

Note After the NSX Edge node starts, if you do not log in with admin credentials for the first
time, the data plane service does not start automatically on the NSX Edge node.

14 After the reboot, you can log in with either admin or root credentials. The default root
password is vmware.

15 There are three ways to configure a management interface.

n Untagged interface. This interface type creates an out-of-band management interface.

(Static) set interface eth0 ip <CIDR> gateway <gateway-ip> plane mgmt

(DHCP) set interface eth0 dhcp plane mgmt

n Tagged interface.

set interface eth0 vlan <vlan_ID> plane mgmt


(Static) set interface eth0.<vlan_ID> ip <CIDR> gateway <gateway-ip> plane mgmt

(DHCP) set interface eth0.<vlan_ID> dhcp plane mgmt

n In-band interface.

set interface mac <mac_address> vlan <vlan_ID> in-band plane mgmt

(Static) set interface eth0.<vlan_ID> ip <CIDR> gateway <gateway-ip> plane mgmt

(DHCP) set interface eth0.<vlan_ID> dhcp plane mgmt

n Tagged interface and In-band interface.

Any existing VLAN management interface must be cleared before creating a new one.

clear interface eth0.<vlan_ID>

n (Optional) Create a bond0 interface to provide a management HA interface with multiple interfaces.

You can configure a bond management interface on an NSX Edge using the following CLI
commands. Use the console to clear the existing management IP before you create a bond and
add an interface to it.

Note Only active-backup mode is allowed on a bond interface. You can configure VLANs
on a bond0 interface.

Create a bond interface

set interface bond0 ip x.x.x.x/mask gateway x.x.x.x plane mgmt mode active-backup members eth0,eth1 primary eth0

Create a VLAN interface on bond0

set interface bond0 vlan Y plane mgmt

Assign an IP address to [Link]

set interface [Link] ip x.x.x.x/24 gateway z.z.z.z plane mgmt

16 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a
VLAN) command to verify that the IP address was applied as expected.

nsx-edge-1> get interface eth0.100

Interface: eth0.100
Address: [Link]/24
MAC address: [Link]


MTU: 1500
Default gateway: [Link]
Broadcast address: [Link]
...

Note When bringing up NSX Edge nodes on a non-NSX managed host, verify that the MTU is set
to 1600 (instead of 1500) on the physical host switch for the data NIC.

17 Set physical NICs to be used by NSX dataplane from the list of available PCI devices.

a get dataplane device list

b reset dataplane device list

c restart service dataplane

d get physical-port
After selecting physical NICs, restart NSX dataplane services for changes to take effect.

Note Starting in NSX 3.1.2, you can claim up to 16 physical NICs.

Note To configure custom NICs for the dataplane, run the set dataplane device list
<NIC1>, <NIC2>, <NIC3> command.

18 To avoid network configuration errors, verify that the physical NICs selected match the NICs
configured in the uplink profiles.

19 Verify that the NSX Edge node has the required connectivity.

If you enabled SSH, make sure that you can SSH to your NSX Edge node and verify the
following:

n You can ping your NSX Edge node management interface.

n From the NSX Edge node, you can ping the node's default gateway.

n From the NSX Edge node, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.

n From the NSX Edge node, you can ping the DNS server and NTP Server IP or FQDN list.

20 Troubleshoot connectivity problems.

Note If connectivity is not established, make sure the VM network adapter is in the proper
network or VLAN.

By default, the NSX Edge node datapath claims all virtual machine NICs except the
management NIC (the one that has an IP address and a default route). If you incorrectly
assigned a NIC as the management interface, follow these steps to use DHCP to assign a
management IP address to the correct NIC.
a Log in to the NSX Edge CLI and type the stop service dataplane command.

b Type the set interface <interface-name> dhcp plane mgmt command.


c Place the interface into the DHCP network and wait for an IP address to be assigned to that
interface.
d Type the start service dataplane command.

The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in
the get interfaces and get physical-port commands on the NSX Edge node.

Install Bare Metal NSX Edge Interactively using ISO File


Install NSX Edge devices on bare metal using an ISO file in the interactive mode.

Prerequisites

n Starting in NSX 3.2, both UEFI and Legacy BIOS modes are supported.

n See NSX Edge network requirements in NSX Edge Installation Requirements.

Procedure

1 Go to [Link] and navigate to Products and Accounts → Products → All Products.

2 Search VMware NSX and select the appropriate product version.

3 Locate and download the ISO file for NSX Edge for Bare Metal.

4 Log in to the ILO of the bare metal server.

5 Click Launch in the virtual console preview.

6 Select Virtual Media > Connect Virtual Media.

Wait a few seconds for the virtual media to connect.

7 Select Virtual Media > Map CD/DVD and browse to the ISO file.

8 Select Next Boot > Virtual CD/DVD/ISO.

9 Select Power > Reset System (warm boot).

The installation duration depends on the bare metal environment.

10 Choose Interactive Install.

This operation takes several seconds to start.

11 In the Configure the keyboard window, select Yes if you want the installer to auto-detect the
keyboard, or select No if you do not want the console to auto-detect the keyboard.

12 Select English US as the language.

13 In the Configure the network window, select the applicable primary network interface.

14 Enter the host name that connects to the selected primary interface and click Ok.

During power-on, the installer requests a network configuration mode. Select static IP
address and provide one. If a static IP address is not available, select DHCP.


By default, the root login password is vmware, and the admin login password is default.

15 In the Configure NSX appliance using kickstart window:

n Enter the URL of the NSX kickstart config file if you want to automate NSX configuration
on the bare metal server.

n Leave the field blank if you want to manually configure NSX on the bare metal server.

16 Select a disk size that meets the system requirements.

17 In the Partition disks window, choose one of the following options:

n Select Yes if you want to unmount existing partitions so that new partitions can be
created on disks.

n Select No if you want to use existing partitions.

18 After the NSX Edge node starts, log in to the CLI with admin credentials.

Note After the NSX Edge node starts, if you do not log in with admin credentials for the first
time, the data plane service does not start automatically on the NSX Edge node.

19 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a
VLAN) command to verify that the IP address was applied as expected.

nsx-edge-1> get interface eth0.100

Interface: eth0.100
Address: [Link]/24
MAC address: [Link]
MTU: 1500
Default gateway: [Link]
Broadcast address: [Link]
...

Note When bringing up NSX Edge nodes on a non-NSX managed host, verify that the MTU is set
to 1600 (instead of 1500) on the physical host switch for the data NIC.

20 Troubleshoot connectivity problems.

Note If connectivity is not established, make sure the VM network adapter is in the proper
network or VLAN.

By default, the NSX Edge node datapath claims all virtual machine NICs except the
management NIC (the one that has an IP address and a default route). If you incorrectly
assigned a NIC as the management interface, follow these steps to use DHCP to assign a
management IP address to the correct NIC.
a Log in to the NSX Edge CLI and type the stop service dataplane command.

b Type the set interface <interface-name> dhcp plane mgmt command.


c Place the interface into the DHCP network and wait for an IP address to be assigned to that
interface.
d Type the start service dataplane command.

The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in
the get interfaces and get physical-port commands on the NSX Edge node.

Intel QAT Support for IPSec VPN Bulk Cryptography


Beginning with the NSX 3.0 release, support for the Intel QuickAssist Technology (QAT) is
provided on bare metal servers. Intel QAT provides the hardware acceleration for various
cryptography operations.

The QAT feature is enabled by default if the NSX Edge is deployed on a bare metal server
with an Intel QuickAssist PCIe card that is based on the C62x chipset (Intel QuickAssist
Adapter 8960 or 8970). The single root I/O virtualization (SR-IOV) interface must be enabled in
the BIOS firmware.

To check the status of the QAT feature, enter the following command on the NSX Edge bare
metal server CLI.

get dataplane qat

The possible responses you might receive are listed in the following table.

Status of QAT Feature Definition

QAT present, enabled, running The QAT feature is enabled and running.

QAT present, enabled, not running The QAT feature has been enabled, but the dataplane
service must be restarted for the status change to take
effect.

QAT present, disabled, not running The QAT feature is disabled.

QAT present, disabled, running The QAT feature has been disabled, but the dataplane
service must be restarted for the status change to take
effect.

QAT not present The bare metal server on which you ran the CLI command
does not have a QAT device installed.

QAT not supported in VM You ran the CLI command on a VM edge.


To disable or enable the use of an installed QAT device, use the following CLI commands. The
expected responses are also shown.

set dataplane qat disabled


QAT disabled. Please restart service dataplane to take effect.

set dataplane qat enabled


QAT enabled. Please restart service dataplane to take effect.

Important You must enter the restart service dataplane command at the CLI prompt for
the QAT feature status change to take effect.
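For example, a typical sequence to enable the feature and confirm the change, using the commands and responses shown above, might look like the following (the CLI prompt is illustrative):

nsx-edge-1> set dataplane qat enabled
QAT enabled. Please restart service dataplane to take effect.
nsx-edge-1> restart service dataplane
nsx-edge-1> get dataplane qat
QAT present, enabled, running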

Join NSX Edge with the Management Plane


To establish communication between NSX Edges and NSX Manager or NSX Manager cluster, join
NSX Edges with NSX Manager. You only need to register NSX Edges with one NSX Manager to
ensure communication with the management plane.

Prerequisites

Verify that you have admin privileges to log in to the NSX Edges and NSX Manager appliance.

Procedure

1 Open an SSH session or console session to one of the NSX Manager appliances.

2 Open an SSH session or console session to the NSX Edge node VM.

3 To retrieve the thumbprint of the NSX Manager appliance, at the NSX Manager appliance
console, run the get certificate api thumbprint command.

The command output is a string of alphanumeric characters that is unique to this NSX Manager.
For example:

NSX-Manager1> get certificate api thumbprint


659442c1435350edbbc0e87ed5a6980d892b9118f851c17a13ec76a8b985f57

4 Alternatively, to retrieve the thumbprint of the cluster, at the NSX Manager appliance
console, run get certificate cluster thumbprint.

The output is a string of alphanumeric characters that is unique to the cluster.

5 To join the NSX Edge node (VM or Bare Metal) to the NSX Manager appliance, run the join
management-plane command.

Provide the following information:

n Hostname or IP address of the NSX Manager with an optional port number

n User name of the NSX Manager


n Certificate thumbprint of the NSX Manager

n Password of the NSX Manager

NSX-Edge1> join management-plane <Manager-IP> thumbprint <Manager-thumbprint> username admin

Repeat this command on each NSX Edge node VM.

6 If cluster VIP is configured for the NSX Manager, then join NSX Edge Node using the cluster
thumbprint. Run the join management-plane command.

Provide the following information:

n Virtual IP address of the NSX Manager cluster with an optional port number

n User name of the NSX Manager

n Certificate thumbprint of the NSX Manager cluster

n Password of the NSX Manager

NSX-Edge1> join management-plane <Cluster-VIP> username <Manager-username> password <Manager-password> thumbprint <Cluster-thumbprint>

7 Verify the result by running the get managers command on your NSX Edge node VMs.

nsx-edge-1> get managers


- [Link] Connected (NSX-RPC)
- [Link] Connected (NSX-RPC)
- [Link] Connected (NSX-RPC)

8 In the NSX Manager UI, navigate to System > Fabric > Nodes > Edge Transport Nodes.

On the NSX Edge Transport Node page:

n The Configuration State column displays Configure NSX. Click Configure NSX to begin
configuration on the node. If the NSX Version column does not display the version
number installed on the node, try refreshing the browser window.

n Before you configure NSX on the NSX Edge node, the Node Status and Tunnel Status
columns display state Not Available. The Transport Zones and N-VDS switches
columns display value 0, indicating there are no transport zones attached or N-VDS
switches configured on the NSX Edge node.

What to do next

When installing NSX Edge using NSX Manager see Create an NSX Edge Transport Node.

When installing NSX Edge manually, see Edit NSX Edge Transport Node Configuration.

Edit NSX Edge Transport Node Configuration


After manually installing NSX Edge as a VM on an ESXi host or on a bare metal server, you can edit the
NSX Edge configuration.


A transport node is a node that is capable of participating in an NSX overlay or NSX VLAN
networking. Any node can serve as a transport node if it contains an N-VDS. Such nodes include
but are not limited to NSX Edges.

An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. If a
VM requires access to the outside world, the NSX Edge must belong to the same transport zone
that the VM's logical switch belongs to. Generally, the NSX Edge belongs to at least one VLAN
transport zone to provide the uplink access.

Prerequisites

n VLAN and Overlay transport zones must be configured.

n Verify that compute manager is configured. See Add a Compute Manager.

n An IP pool (to be used as NSX Edge TEP pool) must be configured or must be available in the
network deployment.

n (NSX [Link]) Before you can use NSX Edge VM datapath interfaces in Uniform Passthrough
(UPT) mode, meet the following conditions:

Note UPT mode is not supported on NSX Edge Bare Metal hosts.

n NSX Edge hardware version is 20 (vmx-20) or later. Previous NSX Edge hardware
versions do not support UPT mode.

n Verify that the memory reservation on the configured NSX Edge is set to 100%.

n From the vSphere Web Client, enable UPT on the NSX Edge VM network adapter. See
the Change the Virtual Machine Network Adapter Configuration topic in vSphere Virtual
Machine Administration guide.
n At least one of the NSX Edge VM datapath interface must be backed by an ESXi
host that hosts a Data Processing Unit-based SmartNIC. A SmartNIC is a NIC card that
provides network traffic processing using a Data Processing Unit (DPU), a programmable
processor on the NIC card, in addition to the traditional functions of a NIC card. For more
information related to DPU, see NSX on vSphere Lifecycle Manager with VMware vSphere
Distributed Services Engine.

n Starting with NSX [Link], NSX Edge VM hardware version will no longer default to
[Link] 13. NSX Edge VM hardware will depend on the underlying version
of the ESXi host. VM hardware versions compatible with ESXi hosts are listed in KB article
2007240.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.


2 (NSX [Link]) To enable UPT mode on the NSX Edge node:

a Select System → Fabric → Nodes → Edge Transport Nodes.

b Select the NSX Edge node to enable UPT, click Actions and Change Node Settings.

c In the Change Node Settings window, ensure the Enable UPT mode for datapath
interface field is enabled. This setting enables UPT on all datapath interfaces that support
UPT mode or support network offloads.

d Click Save.

3 To prepare the NSX Edge node as a transport node, select System > Fabric > Nodes > Edge
Transport Nodes > Edit Edge. Configure the following fields to complete preparation of a
NSX Edge node as a transport node.

4 Enter the N-VDS information.

Option Description

Edge Switch Name Enter a name for the switch.

Transport Zone Select the transport zones that this transport node belongs to. An NSX Edge
transport node belongs to at least two transport zones, an overlay for NSX
connectivity and a VLAN for uplink connectivity.

Note NSX Edge Nodes support multiple overlay tunnels (multi-TEP) when
the following prerequisites are met:
n TEP configuration must be done on one N-VDS only.
n All TEPs must use the same transport VLAN for overlay traffic.
n All TEP IPs must be in the same subnet and use the same default
gateway.

Uplink Profile Select the uplink profile from the drop-down menu.
The available uplinks depend on the configuration in the selected uplink
profile.


Option Description

IP Assignment (TEP) IP address is assigned to the NSX Edge switch that is configured. It is used
as the tunnel endpoint of the NSX Edge.
Select Use IP Pool or Use Static IP List for the overlay N-VDS.
n If you select Use Static IP List, specify:
n Static IP List: Enter a list of comma-separated IP addresses to be
used by the NSX Edge.
n Gateway: Enter the default gateway of the TEP, which is used to
route packets to another TEP in another network. For example, if the ESXi TEP
is in [Link]/24 and the NSX Edge TEPs are in [Link]/24, the
default gateway routes packets between these networks.
n Subnet mask: Enter the subnet mask of the TEP network used on the
NSX Edge.
n If you selected Use IP Pool for IP assignment, specify the IP pool name.

DPDK Fastpath Interfaces / Virtual NICs    Map uplinks to DPDK fastpath interfaces.
Starting with NSX 4.0.1, you can map uplinks to DPDK fastpath interfaces
that are backed by smartNIC-enabled DVPGs, VLAN logical switches or
segments. The prerequisite is to enable UPT mode on NSX Edge VM virtual
network adapters. The UPT mode requires at least one DPDK interface to be
backed by smartNIC-enabled hardware, also known as Data Processing Unit
(DPU)-backed networks.

Note If the uplink profile applied to the NSX Edge node is using a Named
Teaming policy, ensure the following condition is met:
n All uplinks in the Default Teaming policy must be mapped to the
corresponding physical network interfaces on the Edge VM for traffic
to flow through a logical switch that uses the Named Teaming policies.

You can configure a maximum of four unique data path interfaces as uplinks
on a NSX Edge VM.
When mapping uplinks to DPDK Fastpath Interfaces, if NSX Edge does not
display all the available interfaces (four in total), it means that either the
additional interface is not yet added to the NSX Edge VM or the uplink
profile has fewer number of uplinks.
For NSX Edge VMs upgraded from an earlier version of NSX to 3.2.1 or
later, invoke the redeploy API call to redeploy the NSX Edge VM. Invoking
the redeploy API ensures the NSX Edge VM deployed recognizes all the
available datapath interfaces in NSX Manager UI. Make sure the Uplink
profile is correctly configured to use additional datapath NIC.
n For autodeployed NSX Edges, call the redeploy API.

POST api/v1/transport-nodes/<transport-node-id>?
action=redeploy

n For manually deployed edges, deploy a new NSX Edge VM. Ensure all
the vmx customizations of the old NSX Edge VM are also done for the
new NSX Edge VM.
Performing vMotion on a NSX Edge VM might result in the NSX Edge
VM going into failed state or the additional network adapter cannot be
enabled because of memory buffer issues. For troubleshooting memory-
related issues when performing a vMotion on a NSX Edge VM, see https://
[Link]/s/article/76387.


Note
n LLDP profile is not supported on an NSX Edge VM appliance.

n Uplink interfaces are displayed as DPDK Fastpath Interfaces if the NSX Edge is installed
using NSX Manager or on a Bare Metal server.

n Uplink interfaces are displayed as Virtual NICs if the NSX Edge is installed manually using
vCenter Server.

5 Click Save.

6 View the connection status on the Transport Nodes page.

After adding the NSX Edge as a transport node, the connection status changes to Up in 10-12
minutes.

Note (NSX [Link]) When you enable the Actions > Change Node Settings > Enable UPT
mode for datapath interface field, the NSX Manager puts the NSX Edge VM into maintenance
mode, applies configuration, and removes NSX Edge from maintenance mode which makes
the UPT configuration effective on the NSX Edge transport node.

7 To successfully configure firewall rules on the NSX Edge node, enable service core on the
transport node.

set debug

set dataplane service-core enabled

restart service dataplane

8 (Optional) View the transport node with the GET api/v1/transport-nodes/<transport-node-id> API call.

9 (Optional) For status information, use the GET api/v1/transport-nodes/<transport-node-id>/status API call.
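For example, the status call can be exercised with curl, which is not part of the procedure above but is a common way to call the NSX REST API. The manager address, credentials, and transport node ID are placeholders, and -k is used only because this sketch assumes a self-signed certificate.

curl -k -u 'admin:<password>' https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>/status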

10 After an NSX Edge node is migrated to a new host using vCenter Server, you might find NSX
Manager UI reporting stale configuration details (Compute, Datastore, Network, SSH, NTP,
DNS, Search Domains) of the NSX Edge. To get the latest configuration details of NSX Edge
on the new host, run the API command.

POST api/v1/transport-nodes/<transport-node-id>?
action=refresh_node_configuration&resource_type=EdgeNode

What to do next

Add the NSX Edge node to an NSX Edge cluster. See Create an NSX Edge Cluster.

Create an NSX Edge Cluster


Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always
available.


In order to create a tier-0 logical router or a tier-1 router with stateful services such as NAT, load
balancer, and so on, you must associate it with an NSX Edge cluster. Therefore, even if you have
only one NSX Edge, it must still belong to an NSX Edge cluster to be useful.

An NSX Edge transport node can be added to only one NSX Edge cluster.

An NSX Edge cluster can be used to back multiple logical routers.

After creating the NSX Edge cluster, you can later edit it to add additional NSX Edges.

Note Multiple NSX Edge clusters can be deployed within a single NSX Manager, allowing for
the creation of a pool of capacity that can be dedicated to specific services (for example, NAT
at Tier-0 gateways or NAT at Tier-1 gateways). Within a single NSX Edge cluster, all NSX Edge
nodes must be the same type – either physical servers (bare metal) or VMs. However, you can
have NSX Edge node VMs of different sizes within the same NSX Edge cluster.

Prerequisites

n Ensure there is at least one NSX Edge node with Node status Up that is not part of
any existing cluster.

n Optionally, create an NSX Edge cluster profile for high availability (HA). You can also use the
default NSX Edge cluster profile.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Nodes > Edge Clusters > Add Edge Clusters.

3 Enter the NSX Edge cluster name.

4 Select an NSX Edge cluster profile from the drop-down menu.

5 In Member Type drop-down menu, select either Edge Node if the virtual machine is deployed
on-premises or Public Cloud Gateway if the virtual machine is deployed in a public cloud.

6 From the Available column, select NSX Edges and click the right-arrow to move them to the
Selected column.

7 Click Add.

What to do next

To know the current status of the NSX Edge nodes within a cluster, run the get edge-cluster
status CLI command.

Understand these states of NSX Edge nodes:

n Up (routing down): The NSX Edge node state is Up but the Tier-0 SR routing daemon is not
running on this NSX Edge node because no services are enabled on it.


n Admin Down: The NSX Edge is in NSX maintenance mode and all services or traffic
forwarding is disabled on this edge.

n Down: The datapath process is not running, link is down or VTEP tunnels are down.

n Unreachable: Between two NSX Edge nodes, one BFD session is run on the management
interface, and at least one BFD session is run on each of the VTEP interfaces. An NSX Edge
considers its peer as unreachable only when all BFD sessions to that NSX Edge (management
one and all VTEP ones) are down.

You can now build logical network topologies and configure services. See the NSX Administration
Guide.

Remove NSX Edge Nodes from an Edge Cluster


Remove NSX Edge nodes with Tier-1 Gateways or Tier-0 Gateway configured with service router
(SR), DHCP and metadata proxy.

Before you remove NSX Edge nodes, relocate the gateway configurations to a new standby
node:

n To remove Tier-0 gateway configurations on an NSX Edge node, you must manually relocate
Tier-0 configurations such as Tier-0 SR, DHCP and metadata proxy configurations to a
standby NSX Edge node.

n To remove Tier-1 gateway configurations on an NSX Edge node, do the following in these
scenarios:

n If Tier-1 SR, DHCP and metadata proxy configurations are auto allocated to the NSX Edge
node, you can enable the standby relocation functionality to relocate Tier-1 configurations
to a new standby NSX Edge node. The procedure describes how to use the standby
relocation functionality to relocate the configurations to a new standby node.

n If Tier-1 SR, DHCP and metadata proxy configurations are manually allocated to the NSX
Edge node, you need to manually relocate Tier-1 configurations to a new standby NSX
Edge node.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 On NSX Edge nodes configured with Tier-0 configurations, manually move the configurations to
another NSX Edge node.

Note The standby relocation functionality does not relocate Tier-0 configurations such as
Tier-0 SRs, DHCP and metadata proxy configurations to a standby NSX Edge node.

a Select Networking → Tier-0 Gateways.

b To edit a Tier-0 Gateway, select the gateway, click vertical ellipses and click Edit.


c Navigate to Interfaces section and click the External and Service Interfaces.

d In the Set Interfaces window, edit the interface configured for NSX Edge node.

e Remove the existing NSX Edge node associated with the interface and select a new NSX
Edge node that is configured with the same VLAN connectivity required for the interface
and click Save.

The above procedure deletes Tier-0 SRs, DHCP and metadata proxy on Tier-0 Gateways
of the existing NSX Edge and moves them to the new NSX Edge node.

f Delete the NSX Edge node from the edge cluster.

3 On NSX Edge nodes that are auto allocated with Tier-1 SR, DHCP and metadata proxy
configurations, follow these steps to trigger the standby relocation functionality to relocate
those configurations and remove the NSX Edge node from the NSX Edge cluster:

a (Optional) For faster failover, change the BFD timers for NSX Edge cluster by setting it to
500 ms.

b Apply the new NSX Edge cluster profile to the transport node profile. It ensures faster
failover when the NSX Edge node is powered off.

c Select Networking → Tier-1 Gateways.

d To edit a Tier-1 Gateway, select the gateway, click vertical ellipses and click Edit.

e In the Edit view, select the NSX Edge cluster and enable the Enable Standby Relocation
field.

Important For standby relocation to function successfully, there must be an additional
healthy NSX Edge node in the edge cluster. During the process of removing an NSX Edge
node, Tier-1, DHCP or metadata proxy configurations are relocated from an existing NSX
Edge to a new standby node.

f Select System → Fabric → Profiles → Edge Cluster Profiles.

g Select the edge cluster profile and click Edit.

h Set Standby Relocation Threshold (mins) that is applied to the edge cluster. The default
recommended value is 30 mins and the minimum value is 10 mins.

Note Only auto allocated Tier-1 SR, DHCP and metadata proxy configurations are
relocated to a standby NSX Edge. If the NSX Edge node to be removed contains any
manually allocated configurations, such configurations are not relocated from the existing
NSX Edge node to a standby node. You need to manually change the allocation
for those Tier-1 configurations.

i Power off the NSX Edge without taking down the node into maintenance mode. If the
NSX Edge is running any active service, then all such active service configurations will
failover to another NSX Edge because of the HA failover trigger when the node is
powered off.


j Wait for the duration of standby relocation threshold timeout. After the threshold limit
is reached, all Tier-1 service configurations that have standby relocation enabled will be
removed from the edge node being powered-off and relocated to some other NSX Edge
in the cluster. There can be minor delays in standby relocation to perform the relocation
task.

k After Tier-1 configurations are relocated to a standby node, remove the NSX Edge that
was powered off from the cluster. Select the Edge Cluster, click Edit, remove the NSX
Edge node from the cluster, and click Save.

Relocate and Remove an NSX Edge Node from an NSX Edge Cluster
Starting with NSX [Link], you can use the NSX relocate and remove API to relocate the service
configurations of an NSX Edge node to another standby NSX Edge node in the same NSX Edge
cluster and then remove the Edge node from the Edge cluster.

The relocate and remove API relocates the following service configurations:

n Logical routers

n DHCP server

n Metadata proxy

n L2 forwarder

Prerequisites

To relocate and remove an Edge node from an Edge cluster, the following conditions are
required:

n The Edge node must not have any manually allocated service configurations. Only auto
allocated service configurations can be relocated.

n To be available for relocation, standby Edge nodes must not be configured with Layer 2
bridging.

n The Edge cluster must have at least two healthy Edge nodes where the auto allocated
service configurations can be relocated to.

n For HA (high availability), the Edge cluster must have more than two Edge nodes that are
possible for relocation.

Procedure

1 Run the API command to get the member_index value of the Edge node that you want to
relocate and remove from an Edge cluster:

GET [Link]


"deployment_type": "VIRTUAL_MACHINE",
"members": [
{
"member_index": 11,
"transport_node_id": "21a19cbf-eaba-4a59-b18d-ff71fe5d76aa",
"display_name": "edgeVm1New"
},
{
"member_index": 13,
"transport_node_id": "740cf97d-892b-47bb-97e7-889d92252e80",
"display_name": "edgeVm2New"
},
{
"member_index": 14,
"transport_node_id": "cd5ab447-a36a-4bc3-94ff-0a4eea9fb2ad",
"display_name": "edgeVm3New"
}
],

The member_index value is used to specify the Edge node to relocate and remove. Assume
that you want to relocate the service configurations for the Edge node named edgeVm1New,
then its member_index value is 11.

2 Enter the relocate and remove API command and the member_index value of the Edge node
to relocate and remove:

POST [Link]

{
"member_index": 11
}

3 Run the API command.

The Edge node enters maintenance mode and its service configurations are transferred
to one of the standby Edge nodes in the cluster. After the service configurations are
transferred, the Edge node is removed from the Edge cluster and exits maintenance mode.

Note The API command will not work if:

n The Edge node has any manually allocated service configurations.

n The Edge cluster does not have at least two healthy standby Edge nodes.

Caution It is possible for the API command to give a success response, but in the
background, the relocation operation fails. If this scenario occurs, then an alarm with the
Event Type of Edge Cluster Member Relocate Failure is raised.

The recommended action for this scenario is to review the available capacity of the Edge
cluster. If more capacity is required, scale your Edge cluster and then retry the API command.



vSphere Lifecycle Manager with NSX
By enabling VMware vSphere® Lifecycle Manager on a cluster, you can ensure that all
ESXi hosts participating in a cluster are prepared using a single ESXi image and a single NSX
image. The vSphere Lifecycle Manager functionality minimizes errors and lowers cluster and host
maintenance cycles.

Starting from vCenter Server 7.0 U1, ESXi 7.0 U1, and NSX 3.1.0 onwards, a vSphere Lifecycle
Manager-enabled cluster can manage installation of ESXi and NSX VIBs.

vSphere Lifecycle Manager requires two images: one for ESXi and another one for NSX. It
retrieves the ESXi image from the image directory in VMware vCenter. Ensure the ESXi image
is uploaded to VMware vCenter. vSphere Lifecycle Manager gets the NSX image only when a
cluster is prepared for NSX networking, which is possible from the NSX Manager user interface.
The NSX image is automatically uploaded to VMware vCenter when NSX cluster preparation
begins. For other clusters in the VMware vCenter, vSphere Lifecycle Manager references the
already uploaded NSX image. vSphere Lifecycle Manager refers to NSX as a solution, as it does
with other solutions such as HA, DRS and so on.

For more information on the usage of the terminology, such as images and solutions in vSphere
Lifecycle Manager, refer to the Managing Host and Cluster Lifecycle guide in the VMware
vSphere® Documentation center.

The following clusters can be enabled as vSphere Lifecycle Manager clusters:

n Clusters with ESXi hosts that are prepared for NSX networking using a transport node profile.


n Clusters with ESXi hosts that are not prepared for NSX networking.

Read the following topics next:

n Prepare an NSX Cluster with vSphere Lifecycle Manager

n Enable vSphere Lifecycle Manager on an NSX Cluster

n NSX on vSphere Lifecycle Manager with VMware vSphere Distributed Services Engine

n NSX with vSphere Lifecycle Manager Scenarios

n vSphere Lifecycle Manager Failed to Prepare a Host for NSX Networking

n vSphere Lifecycle Manager Failed to Prepare NSX Cluster

n Delete a NSX Depot on vCenter Server

Prepare an NSX Cluster with vSphere Lifecycle Manager


You can prepare NSX on vSphere Lifecycle Manager-enabled clusters.

For vSphere Lifecycle Manager to get access to the NSX image, you must configure the cluster
with a transport node profile. When you begin configuring the cluster, NSX local control plane
bundle (in the format - nsx-lcp-bundle-<release_version.build_version>) is uploaded to the image
repository in VMware vCenter.

During host preparation, vSphere Lifecycle Manager accesses the depot and sets NSX as a
solution for that cluster. It applies the NSX solution to the cluster, which begins with the process
of remediating hosts. Every host is remediated by vSphere Lifecycle Manager before the NSX
switch is configured on the host. vSphere Lifecycle Manager remediation happens when a new
ESXi host is added to a vSphere Lifecycle Manager cluster.

vSphere Lifecycle Manager remediates hosts so that the image on each host is the same as the
ESXi version set for the cluster. Any drift must be resolved before host preparation can progress
in NSX. During cluster preparation, if the cluster fails, NSX sets the cluster state to Failed. As an
admin, you must retrigger host remediation by taking appropriate actions either from the NSX
Manager user interface or from the vSphere Client.


Prerequisites

n Ensure all hosts in a cluster are running at least ESXi 7.0 U1 version or higher.

n Ensure Lockdown mode is not enabled on any of the hosts. vSphere Lifecycle Manager might
fail to prepare hosts that are enabled to function in Lockdown mode.

n Ensure there is no drift in images between the hosts and the cluster. Otherwise, you cannot enable
vSphere Lifecycle Manager on the cluster. Remediate hosts in VMware vCenter to ensure that the
base image matches on the host and the cluster.

n Ensure vSphere Lifecycle Manager is enabled on the cluster. See VMware vSphere®
documentation.

n Register Compute Manager with the following settings:

n Enable Trust and set access level to vSphere Lifecycle Manager. Trust is mandatory to
establish communication between NSX and vSphere Lifecycle Manager.

n Enable Create Service Account.

n Create a transport node profile using a vSphere Distributed Switch host switch. The N-VDS switch
is not supported on a vSphere Lifecycle Manager-enabled cluster.

n If you configure a Web Proxy on a VMware vCenter Appliance, add all NSX Manager IP
addresses to the NO_PROXY list, otherwise vLCM cannot connect to NSX Managers.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Go to System → Fabric → Nodes → Host Transport Nodes.

3 In the Managed by drop-down menu, select the Compute Manager.

4 Select the cluster and click Configure NSX.

Note You can identify a vSphere Lifecycle Manager-enabled cluster by the vLCM text that accompanies the cluster name.

5 Select a transport node profile that uses vSphere Distributed Switch as the host switch.

6 Click Apply TNP.

If this is the first cluster that is enabled for vSphere Lifecycle Manager, NSX uploads the
NSX LCP bundle to the image repository in VMware vCenter. vSphere Lifecycle Manager sets
NSX as a solution on the cluster. It sets the desired state to the NSX image uploaded to
VMware vCenter. Then, vSphere Lifecycle Manager begins installation of NSX VIBs on each
host, followed by configuration of NSX switch on each transport node.


As part of host preparation, vSphere Lifecycle Manager remediates the host, registers the
host with NSX Manager, configures NSX switch on the host and completes the configuration.

Note Installing NSX on a vSphere Lifecycle Manager-enabled cluster might take a little
more time than installing on a non-vSphere Lifecycle Manager-enabled cluster. This
difference is due to the additional health checks that are included in this combination of
products.

7 Troubleshooting issues:

If vSphere Lifecycle Manager could not apply NSX as a solution to the cluster, the NSX cluster
in NSX Manager goes into Failed state. To remediate the hosts in the cluster, do one of the
following:
a Go to the VMware vCenter, verify the following conditions are met:

n Hosts are compliant.

n Hosts are not powered off or in maintenance mode.

b Verify cluster status through UI or API. Even if a host in the cluster is in Failed state, the
cluster status remains in unrealized state.

Run the following API to verify the cluster state, GET /<NSX-Manager-IP>/api/v1/
transport-node-collections/<transport-node-collection-id>.

c If any one of the hosts fails, the remaining hosts in the cluster go into Install Skipped
state. To remediate, read the error message and take any necessary action. Then, click
Resolve to retry remediation of the host and NSX preparation. Note that remediation
happens serially, one host at a time.

d If the cluster is still in Install Failed state, click Resolve for the cluster in UI or run the
API to realize the transport node profile on the cluster. Along with remediating the cluster,
the following API also tries to prepare those hosts that are in the Install Skipped state.
It retries remediation on the entire cluster. It tries to prepare the hosts where installation is
skipped.

POST /api/v1/transport-node-collections/<transport-node-collection-id>?
action=retry_profile_realization
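
The two API calls above can be combined into a small script. The following is a minimal Python sketch, assuming the GET and POST endpoints shown in this step and placeholder values for the NSX Manager address, credentials, and transport node collection ID.

import json
import requests

NSX = "https://nsx-manager.example.com"          # placeholder NSX Manager address
AUTH = ("admin", "nsx-admin-password")           # placeholder credentials
TNC_ID = "transport-node-collection-uuid"        # placeholder transport node collection ID

# Verify the cluster (transport node collection) state.
state = requests.get(f"{NSX}/api/v1/transport-node-collections/{TNC_ID}",
                     auth=AUTH, verify=False)
state.raise_for_status()
print(json.dumps(state.json(), indent=2))

# Re-trigger transport node profile realization on the cluster.
retry = requests.post(f"{NSX}/api/v1/transport-node-collections/{TNC_ID}",
                      params={"action": "retry_profile_realization"},
                      auth=AUTH, verify=False)
retry.raise_for_status()
print("Retry of transport node profile realization requested.")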

Results

vSphere Lifecycle Manager prepares all hosts in the cluster as NSX transport nodes.

Enable vSphere Lifecycle Manager on an NSX Cluster


Just as you can prepare NSX on a cluster that is already enabled with vSphere Lifecycle
Manager, you can also enable vSphere Lifecycle Manager on an existing NSX-prepared
cluster.


Prerequisites

n Ensure all hosts in the cluster are running ESXi 7.0 U1 or later.

n Register Compute Manager with the following settings:

n Enable Trust and set access level to vSphere Lifecycle Manager. Trust is mandatory to
establish communication between NSX and vSphere Lifecycle Manager.

n Enable Create Service Account.

n Prepare the cluster by applying a Transport Node Profile (using VDS as the host switch type)
to the cluster.

Note N-VDS host switch is not supported on a vSphere Lifecycle Manager-enabled cluster.

Procedure

1 From a browser, log in with admin privileges to VMware vCenter at https://<vcenter-server-ip-address>.

2 Select the cluster on which the vSphere Lifecycle Manager functionality must be enabled.

3 On the Images page, confirm that all hosts are compliant. If any host is in a non-compliant
state, remediate the host to be compliant with the ESXi image set for the cluster.


4 Verify that vSphere Lifecycle Manager sets the solution for the cluster to NSX. To verify that
the NSX solution is set on the vSphere Lifecycle Manager cluster, you can do one of the
following:

a In VMware vCenter, on the Images page, click Check Compliance and check the
Components section for an NSX entry.

b Alternatively, run the API command and verify that the component and version are correctly
set to NSX (a scripted check is sketched at the end of this procedure).

GET [Link]?vmw-task=true

"components" : [
  {
    "component" : "nsx-lcp-bundle"
  }
],
"version" : "3.1-0"

5 When a new host is added to the vSphere Lifecycle Manager-enabled cluster, NSX calls
vSphere Lifecycle Manager to check host compliance with the ESXi image set for the cluster.
If there is no drift between the host and cluster images, the transport node profile is applied to
the host and NSX VIBs are installed on the host. The installation finishes with registration of the
host with NSX Manager and configuration of the NSX switch.

6 Troubleshooting issues:

If vSphere Lifecycle Manager could not apply NSX as a solution to the cluster, the NSX cluster
in NSX Manager goes into Failed state. To remediate the hosts in the cluster, do one of the
following:
a Go to the vCenter Server, verify the following conditions are met:

n Hosts are compliant.

n Hosts are not powered off or in maintenance mode.

b Verify cluster status through UI or API. Even if a host in the cluster is in Failed state, the
cluster status remains in unrealized state.

Run the following API to verify the cluster state, GET /<NSX-Manager-IP>/api/v1/
transport-node-collections/<transport-node-collection-id>.


c If any one of the hosts fails, the remaining hosts in the cluster go into Install Skipped
state. To remediate, read the error message and take any necessary action. Click Resolve
to retry remediation of the host and NSX preparation. Note that remediation happens
serially, one host at a time.

d If the cluster is still in Failed state, click Resolve for the cluster in UI or run the API to
realize the transport node profile on the cluster. Along with remediating the cluster, the
following API also tries to prepare those hosts that are in the Install Skipped state. It
retries remediation on the entire cluster. It tries to prepare the hosts where installation is
skipped.

POST /api/v1/transport-node-collections/<transport-node-collection-id>?
action=retry_profile_realization
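
The check in step 4b can also be scripted against the vSphere Automation REST API. The following Python sketch assumes the /api/session and /api/esx/settings/clusters/<cluster>/software/solutions endpoints exposed by vSphere; the vCenter address, credentials, and cluster ID are placeholders.

import requests

VCENTER = "https://vcenter.example.com"                    # placeholder vCenter address
AUTH = ("administrator@vsphere.local", "vcenter-password") # placeholder credentials
CLUSTER = "domain-c8"                                      # placeholder cluster managed object ID

# Create a vSphere Automation API session and keep the returned token.
session = requests.post(f"{VCENTER}/api/session", auth=AUTH, verify=False)
session.raise_for_status()
headers = {"vmware-api-session-id": session.json()}

# List the solutions in the cluster's desired image and look for the NSX component.
resp = requests.get(f"{VCENTER}/api/esx/settings/clusters/{CLUSTER}/software/solutions",
                    headers=headers, verify=False)
resp.raise_for_status()
print(resp.json())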

Results

vSphere Lifecycle Manager is enabled on an NSX cluster.

What to do next

After enabling vSphere Lifecycle Manager on the cluster, you can remediate any drift between the
hosts and the image set for the cluster in VMware vCenter.

NSX on vSphere Lifecycle Manager with VMware vSphere


Distributed Services Engine
Starting with NSX [Link], vSphere Distributed Services Engine provides the ability to offload
some network operations from your server CPU to a data processing unit (DPU, also
known as a SmartNIC).

Using DPU devices for network acceleration frees up the CPU capacity for business-critical
workloads. Besides accelerating networking performance, DPU devices improve network visibility
and security acceleration.

Note DPUs are supported on hosts in a vSphere Lifecycle Manager-enabled cluster that are
running ESXi 8.0 or later.

vSphere 8.0 supports NVIDIA BlueField-2 (25G Only), and AMD Pensando (25G and 100G) DPU
devices only.


License
To utilize the NSX DPU-based acceleration or NSX offload capabilities, you need to purchase
an NSX Enterprise Plus Term license (Per Core basis) or an NSX Enterprise Plus with Threat
Prevention Term license (Per Core basis).

Note For vSphere offload capabilities, you do not need to purchase a separate NSX license.
You just need to have the vSphere ENT + Term license and be on vSphere 8. NSX Manager is
available as a part of vSphere ENT+.

Once the license key is applied, you should be able to offload the routing and DFW capabilities to
the DPU.

Enable Network Offloads


Workflow to Enable Network Offloads:

Reference Topics
For more information, refer to the following topics:

n VMware vSphere Distributed Services Engine: See the topic Introducing VMware vSphere® Distributed Services Engine™ and Networking Acceleration by Using DPUs in the VMware ESXi Installation and Setup guide under VMware vSphere Product Documentation.

n vSphere Lifecycle Manager with VMware vSphere Distributed Services Engine: See the topic Using vSphere Lifecycle Manager with VMware vSphere Distributed Services Engine in the Managing Host and Cluster Lifecycle guide under VMware vSphere Product Documentation.

n Network Offloads and vCenter Server specific steps: See the topic What is Network Offloads Compatibility in the vSphere Networking guide under VMware vSphere Product Documentation.

n ESXi on DPU: See the topic Introducing ESXi on Data Processing Units (DPUs) in the VMware ESXi Installation and Setup guide under VMware vSphere Product Documentation.

n Prepare an NSX cluster with vSphere Lifecycle Manager and troubleshooting issues: See Prepare an NSX Cluster with vSphere Lifecycle Manager.


Configure NSX host transport node on DPU-based vSphere Lifecycle


Manager-enabled cluster
You need to configure NSX and enable Enhanced Datapath for vSphere Distributed Services
Engine to offload some of the network operations from your server CPU to the DPU.

Configuring NSX host transport node on a DPU-based vSphere Lifecycle Manager-enabled


cluster is similar to Prepare an NSX Cluster with vSphere Lifecycle Manager.

vSphere Distributed Switch (VDS) backed by the DPU on ESXi supports the offloading mode
after NSX is enabled. Traffic forwarding logic is offloaded from the ESXi host to the VDS backed
by DPU.

Note A DPU is supported only on vSphere Lifecycle Managed clusters. DPU requires a minimum
version combination of vSphere 8.0, NSX [Link], and Edge hardware version 20.

To learn more about Network Offloads Capability, see What is Network Offloads Capability in the
VMware vSphere Documentation.

Prerequisites

n vSphere offloads: DPUs are supported on hosts in a vSphere Lifecycle Manager-enabled
cluster that are running ESXi 8.0 or later.

Note For vSphere offload capabilities, you do not need to purchase a separate NSX license.
You just need to have the vSphere ENT + Term license and be on vSphere 8. NSX Manager is
available as a part of vSphere ENT+.

n NSX offloads: To utilize the NSX DPU-based acceleration capabilities, you need to purchase
an NSX Enterprise Plus Term license (Per Core basis) or an NSX Enterprise Plus with Threat
Prevention Term license (Per Core basis).

Note You do not need any additional NSX licenses if you have an NSX Enterprise Plus Term
license (Per Core) or an NSX Enterprise Plus with Threat Prevention Term license (Per Core).

Note vSphere ENT+ license holders on vSphere 8.0 are entitled to 'Enhanced Datapath -
Standard for DPUs'. This does not require you to purchase a separate NSX license.

For more information, see NSX Feature and Edition Guide.

n Lockdown mode is not enabled on any of the hosts. vSphere Lifecycle Manager might fail to
prepare hosts that are enabled to function in Lockdown mode.

n There is no drift in images between the hosts and the cluster. Otherwise, you cannot enable vSphere
Lifecycle Manager on the cluster. Remediate hosts in vCenter Server to ensure the base image
matches on the host and the cluster.

n vSphere Lifecycle Manager is enabled on the cluster. See VMware vSphere® documentation.


n Register Compute Manager with the following settings:

n Enable Trust and set access level to vSphere Lifecycle Manager. Trust is mandatory to
establish communication between NSX and vSphere Lifecycle Manager.

n Enable Create Service Account.

n Create a transport node profile using a vSphere Distributed Switch host switch. NSX Virtual
Distributed Switch (N-VDS) is not supported on vSphere Lifecycle Managed clusters.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Go to System → Fabric → Nodes → Host Transport Nodes.

3 In the Managed by drop-down menu, select the Compute Manager.

4 Select the cluster and click Configure NSX.

Note A vSphere Lifecycle Manager-enabled cluster is identified by the vLCM label displayed next to the cluster name.

5 To create a transport node profile, click Add Profile.

6 Enter Transport Node Profile (TNP) details as required.

Note
n For VDS backed by the DPU, select the Host TN and TNP Host Switch mode as Enhanced
Datapath - Standard (Recommended) or Enhanced Datapath - Performance.

n In Add Switch, select VDS (Distributed Switch version 8.0 created in vSphere Client) with
offload compatibility. If there is a mismatch, the host will not be compatible.

n One cluster allows only one type of host. For example, when you create two distributed
switches on vSphere client: one with NVIDIA network offload compatibility and one with
AMD Pensando network offload compatibility, both options will appear on the UI. Select
the one specific to your requirement. Clusters of different types cannot use the same
TNP. Therefore, you will need two separate TNPs: one for NVIDIA DPU and one for AMD
Pensando DPU.

7 If TNP is already created, select a transport node profile that uses vSphere Distributed Switch
as the host switch and datapath mode as Enhanced Datapath.

8 Click Apply TNP.

If this is the first cluster that is enabled for vSphere Lifecycle Manager, NSX uploads the
NSX LCP bundle to the image repository in vCenter Server. vSphere Lifecycle Manager sets
NSX as a solution on the cluster. It sets the desired state to the NSX image uploaded to
vCenter Server. Then, vSphere Lifecycle Manager begins installation of NSX VIBs on each
host, followed by configuration of NSX switch on each transport node.


As part of host preparation, vSphere Lifecycle Manager remediates the host, registers the
host with NSX Manager, configures NSX switch on the host and completes the configuration.

Note
n vSphere Lifecycle Manager puts the ESXi host backed by AMD Pensando DPU in
maintenance mode and reboots it as part of host remediation. If vSphere Lifecycle
Manager fails to place the host in maintenance mode, you need to manually power off
all VMs and then retry the NSX installation.

n Installing NSX on a vSphere Lifecycle Manager-enabled cluster might take a little more
time than when installing on a non-vSphere Lifecycle Manager-enabled cluster. This
difference is due to the additional health checks that are included in this combination
of products.

Displaying DPU-related information on NSX Manager Interface


After you install and configure NSX on a host, you can monitor inventory objects of the Host
Transport Nodes and view the DPU-related information on the NSX Manager interface:

The following fields display the DPU-related information of Host Transport Node on the NSX
Manager interface:

n DPU: On the Monitor tab of the Host Transport Node, the DPU chart displays the CPU cores

allocated and the system memory used by a host on DPU. Click on the info icon beside
the DPU field to view the DPU-related information, such as the device number, vendor or
model name, firmware version, and OS version.

n DPU Backed: On the Physical Adapters tab of Host Transport Node, the DPU Backed field
displays if the hypervisor host is backed by the DPU or not. This is to know the DPU presence
on the ESXi host:

n If DPU Backed is 'Yes', the interface is backed by the DPU and is in the 'MANAGED' state.

n If DPU Backed is 'No', it is the standard hypervisor host.

Note With ESXi on DPU, the DPU is used as a traditional NIC until the NSX transport node is configured.
The VDS on vCenter Server indicates whether network offloading is permitted when NSX is enabled.

Hosts backed by a DPU are associated with a VDS. You get a granular view of the VDS backed by
the DPU at the individual host level. Click the Switch Visualization tab to view the uplinks
configured on the VDS backed by the DPU.

NSX with vSphere Lifecycle Manager Scenarios


Installation and uninstallation scenarios to consider when you work with vSphere Lifecycle
Manager (vLCM) for NSX clusters.


Scenario: You try to enable vLCM on a cluster where a transport node profile is not applied, but some hosts are individually prepared as host transport nodes.
Result: vLCM cannot be enabled on the cluster because a transport node profile was not applied to the cluster.

Scenario: You try to enable vLCM on a cluster using a transport node profile configured to apply an N-VDS host switch.
Result: vCenter Server checks cluster eligibility before the cluster can be converted to a vLCM cluster. Because the N-VDS host switch type is not supported, apply a transport node profile that is configured to use a VDS host switch.

Scenario: You move an unprepared host from a non-vLCM cluster to a vLCM cluster.
Result: If the vLCM cluster is prepared with a transport node profile, the unprepared host is prepared as an NSX transport node by vLCM. If the vLCM cluster is not prepared with a transport node profile, the host remains in the unprepared state.

Scenario: You move a transport node from a vLCM cluster to a non-vLCM cluster that is not prepared for NSX.
Result: The NSX VIBs are deleted from the host, but the NSX Solution-related data (set by vLCM) is not deleted. Now, if you try to enable vLCM on the cluster, NSX Manager notifies you that the NSX Solution will be removed from the host. This notification is misleading because the NSX VIBs were already removed from the host.

Scenario: After you perform the Remove NSX operation on a vSphere Lifecycle Manager cluster, if vLCM is unable to delete NSX from the desired state, all nodes go into Uninstall Failed state. Now, you try to remove NSX on individual transport nodes.
Result: If you remove NSX on each individual transport node, then even though the NSX VIBs are removed from the host, the cluster continues to have NSX as the desired state in vLCM. This state shows up as drift in host compliance in vCenter Server. So, you must perform Remove NSX on the cluster to remove NSX from the vLCM configuration.

Scenario: You prepare a vLCM cluster consisting of a host by applying a TNP with the VDS switch type configured. You move the host into maintenance mode and out of the vLCM cluster into the datacenter, and finally move the host back to the vLCM cluster.
Result: NSX installation fails with the following message:
Failed to install software on host. Solution apply failed on host: '[Link]' Deployment status of host bfedeb69-48d3-4f3b-9ebc-ce4eb177a968 is INSTALL_IN_PROGRESS with 0 errors. Expected status is INSTALL_SUCCESSFUL with no errors. Deployment status of host bfedeb69-48d3-4f3b-9ebc-ce4eb177a968 is INSTALL_IN_PROGRESS with 0 errors. Expected status is INSTALL_SUCCESSFUL with no errors. Solution apply failed on host: '[Link]'
Workaround: On the host, click Resolve and reapply the TNP.

Scenario: You move a host that failed installation to a non-vLCM cluster with or without a TNP applied.
Result: NSX does not perform any operation.

Scenario: You move a host that failed installation to a vLCM cluster with a TNP applied.
Result: NSX installation begins automatically.

Scenario: You move a host that failed installation to a vLCM cluster without a TNP applied.
Result: NSX does not perform any operation.

Scenario: You move a host that failed installation to a datacenter.
Result: NSX does not perform any operation.

Scenario: VMware vCenter is added as a compute manager with the Multi NSX flag enabled. You apply a TNP on another existing vLCM cluster.
Result: NSX allows preparation of the existing vLCM cluster using the TNP.

Scenario: VMware vCenter is added as a compute manager with the Multi NSX flag enabled. You then try to change an already prepared cluster to a vLCM cluster.
Result: NSX does not allow preparation of the existing vLCM cluster.

Scenario: VMware vCenter is added as a compute manager with the Multi NSX flag enabled. You then try creating a new vLCM cluster.
Result: NSX allows preparation of the existing vLCM cluster.

Scenario: VMware vCenter already contains a vLCM cluster and you try to add the VMware vCenter as a compute manager with the Multi NSX flag enabled.
Result: NSX fails this operation because the VMware vCenter already contains a vLCM cluster.

Scenario: You move a DPU-enabled host from a TNP-applied cluster to a non-TNP-applied cluster.
Result: VIBs are not deleted from the ESXi host and the DPU. You need to remediate the host from vSphere Lifecycle Manager. vSphere Lifecycle Manager deletes the NSX VIBs from the ESXi host and the DPU, and reboots the host.

Scenario: You remove NSX VIBs from a DPU-enabled host by using the 'del nsx' nsxcli command.
Result: After running the 'del nsx' command, you need to reboot the ESXi host to complete the NSX VIBs removal process (VIBs are removed from ESXi and the DPU).

Scenario: NSX VIBs are not deleted from a DPU-enabled host even if it is shown as Not Configured on the NSX UI.
Result: When a DPU-enabled host is disconnected during the Remove NSX operation, the operation fails. After a while, the TN deletion continues and the host is displayed as Not Configured, but the VIBs are not removed.
Workaround: Go to the vLCM UI and remediate the cluster.
Note: The DPU-enabled host reboots as part of the remediation.

vSphere Lifecycle Manager Failed to Prepare a Host for NSX


Networking
vSphere Lifecycle Manager failed to prepare some hosts in the cluster for NSX Networking.

Problem

In a cluster containing many hosts, vSphere Lifecycle Manager successfully prepared some hosts
but failed to realize NSX on one of the hosts.

Cause

Hosts can take different states when vSphere Lifecycle Manager triggers installation of NSX.

n Cluster goes into Install Failed if vSphere Lifecycle Manager fails to remediate the entire
cluster.


n If one or more individual hosts fail, failed hosts go into Install Failed state. If there are
other hosts in the cluster yet to be prepared, those hosts go into Install Skipped state.
Both cluster and individual hosts display failure states.

Solution

1 On the NSX Manager, go to System → Fabric → Nodes → Host Transport Node.

2 From the Managed by drop-down menu, select the VMware vCenter.

3 Identify the failed cluster to view the error state. Click the error link to open a popup window.

4 If the cluster is in Install Failed state, click Resolve at the cluster level to initiate transport node profile
realization on the cluster.

Important With the cluster in Install Failed state, first try to resolve the remediation
issues at the cluster and then try to remediate individual hosts. If you overlook cluster-level
errors and directly try to remediate host-level errors, the UI does not allow you to perform
any remediation action at the host-level.

5 If one or more hosts failed but the cluster remediation status is Success, then navigate to the
failed host and click Resolve to remediate hosts.

6 You can also try to realize the transport node profile on the cluster by executing
the following API command, POST /api/v1/transport-node-collections/{tnc id}?
action=retry_profile_realization.

This API command re-triggers the transport node profile on the cluster.
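
To see which hosts in the cluster failed before clicking Resolve, you can also list the configuration state of each transport node. A minimal Python sketch, assuming the /api/v1/transport-nodes and /api/v1/transport-nodes/<node-id>/state endpoints and placeholder connection details:

import requests

NSX = "https://nsx-manager.example.com"      # placeholder NSX Manager address
AUTH = ("admin", "nsx-admin-password")       # placeholder credentials

# List all transport nodes and print the realization state of each one.
nodes = requests.get(f"{NSX}/api/v1/transport-nodes", auth=AUTH, verify=False)
nodes.raise_for_status()
for node in nodes.json()["results"]:
    state = requests.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state",
                         auth=AUTH, verify=False)
    state.raise_for_status()
    print(node.get("display_name", node["id"]), "->", state.json().get("state"))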

vSphere Lifecycle Manager Failed to Prepare NSX Cluster


vSphere Lifecycle Manager failed to prepare an NSX cluster.

Problem

Transport node profile could not be applied to a vSphere Lifecycle Manager cluster, which
caused the cluster to go into Failed state.

Cause

As vSphere Lifecycle Manager failed to set and apply NSX as a solution on the cluster, none of
the hosts in the cluster were prepared as transport nodes. View the error state on the cluster in
NSX Manager UI.

Solution

1 On the NSX Manager, go to System → Fabric → Nodes → Host Transport Node.

2 From the Managed by drop-down menu, select the VMware vCenter.

3 Identify the failed cluster to view the error state. Click the error link to open a popup window.
Click Resolve to initiate transport node profile realization on the cluster.


4 Alternatively, run the API command, POST /api/v1/transport-node-collections/<tnc


id>?action=retry_profile_realization. This command initiates vLCM to realize NSX on
the cluster.

Delete an NSX Depot on vCenter Server


Delete an NSX depot on vCenter Server.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Unregister the compute manager from NSX.

When the compute manager is unregistered, NSX invokes an API to delete the depot in VMware
vCenter. The depot is deleted from the image depot of vSphere Lifecycle Manager in VMware
vCenter.

3 Troubleshooting:

a If NSX is unable to delete the depot after unregistering the compute manager, run the
following API.

DELETE [Link]

b Remove the depot entry from the payload.

c To verify that the depot is successfully deleted, run the API command.

GET [Link]



Host Profile integration with NSX
12
Integrate host profiles extracted from an ESXi host with NSX to deploy ESXi and NSX VIBs on
stateful and stateless servers.

Read the following topics next:

n Auto Deploy Stateless Cluster

n Stateful Servers

Auto Deploy Stateless Cluster


Stateless hosts do not persist configuration, so they need an auto-deploy server to provide the
required boot files when the hosts power on.

This section helps you to set up a stateless cluster using vSphere Auto Deploy and NSX
Transport Node Profile to reprovision a host with a new image profile that contains a different
version of ESXi and NSX. Hosts that are set up for vSphere Auto Deploy use an auto-deploy
server and vSphere host profiles to customize hosts. These hosts can also be set up for NSX
Transport Node Profile to configure NSX on the hosts.

So, a stateless host can be set up for vSphere Auto Deploy and NSX Transport Node Profile to
reprovision a host with a custom ESXi and NSX version.

High-Level Tasks to Auto Deploy Stateless Cluster


High-level tasks to auto deploy a stateless cluster.

The high-level tasks to set up an auto deploy stateless cluster are:

1 Prerequisites and Supported Versions. See Prerequisites and Supported Versions.

2 (Reference host) Create a Custom Image Profile. See Create a Custom Image Profile for
Stateless Hosts.

3 (Reference and Target hosts) Associate the Custom Image Profile. See Associate the Custom
Image with the Reference and Target Hosts.

4 (Reference host) Set up Network Configuration in ESXi. See Set Up Network Configuration on
the Reference Host.


5 (Reference host) Configure as a Transport Node in NSX. See Configure the Reference Host as
a Transport Node in NSX.

6 (Reference host) Extract and Verify Host Profile. See Extract and Verify the Host Profile.

7 (Reference and Target hosts) Verify the Host Profile Association with Stateless Cluster. See
Verify the Host Profile Association with Stateless Cluster.

8 (Reference host) Update Host Customization. See Update Host Customization.

9 (Target hosts) Trigger Auto Deployment. See Trigger Auto Deployment on Target Hosts.

a Before applying Transport Node Profile. See Reboot Hosts Before Applying TNP.

b Apply Transport Node Profile. See Apply TNP on Stateless Cluster.

c After applying Transport Node Profile. See Reboot Hosts After Applying TNP.

10 Troubleshoot Host Profile and Transport Node Profile. See Troubleshoot Host Profile and
Transport Node Profile.

Prerequisites and Supported Versions


Prerequisites and supported ESXi and NSX versions.

Supported Workflows
n With Image Profile and HostProfile

Prerequisites
n Only homogeneous clusters (all hosts within a cluster must be either stateless or stateful) are
supported.

n Image builder service must be enabled.

n Auto deploy service must be enabled.

Supported NSX and ESXi Versions


n NSX 2.4, 2.4.1, 2.4.2, and 2.4.3: ESXi 6.7 EP6 and ESXi 6.7 U2.

n NSX 2.5: ESXi 6.7 EP6, 6.7 U2, 6.7 U3, and 6.7 EP7.

n NSX 2.5.1: ESXi 6.7 EP6, 6.7 U2, 6.7 U3, 6.7 EP7, and 6.7 EP15.

n NSX 3.0, and NSX 3.1 and later: ESXi 6.7 EP6, 6.7 U2, 6.7 U3, 6.7 EP7, 6.7 EP15, and ESXi 7.0.


Create a Custom Image Profile for Stateless Hosts


In your data center, identify a host to be prepared as the reference host.

The first time the reference host starts up, ESXi associates the default rule with the reference
host. In this procedure, you add a custom image profile (ESXi and NSX VIBs) and associate
the reference host with the new custom image. An image profile with the NSX image significantly
reduces the installation time. The same custom image is associated with the target hosts in the
stateless cluster.

Note Alternatively, you can add only an ESXi image profile to the reference and target stateless
cluster. The NSX VIBs are downloaded when you apply the transport node profile on the
stateless cluster. See Add a Software Depot.

Prerequisites

Ensure that the auto-deploy service and image builder service are enabled. See Using vSphere
Auto Deploy to Reprovision Hosts.

Procedure

1 To import NSX packages, create a software depot.

2 Download the nsx-lcp packages.

a Go to [Link] and navigate to Products and Accounts →


Products → All Products.

b In the Product Downloads page, search NSX Kernel Modules for a specific VMware ESXi
version.

c Click Download Now to begin downloading the nsx-lcp package.

d Import nsx-lcp packages into the software depot.

3 Create another software depot to import ESXi packages.

The vSphere Web Client displays two depots created on the reference host.

4 Create a custom software depot to clone previously imported ESXi image and nsx-lcp
packages.

a Select the ESXi Image profile from the ESXi software depot created in the earlier step.


b Click Clone.

c In the Clone Image Profile wizard, enter a name for the custom image to be created.

d Select the custom software depot where the cloned image (ESXi) must be available.

e In the Select software packages window, set the Acceptance level to VMware
Certified. The ESXi VIBs are preselected.

f Identify and select the NSX packages manually from the list of packages and click Next.

g In the Ready to complete screen, verify the details and click Finish to create the cloned
image containing ESXi and NSX packages into the custom software depot.

What to do next

Associate the custom image with the reference and target hosts. See Associate the Custom
Image with the Reference and Target Hosts.

Associate the Custom Image with the Reference and Target Hosts
To start the reference host and target hosts with the new custom image containing ESXi and NSX
packages, associate the custom image profile.

At this point in the procedure, the custom image is only being associated to the reference and
target hosts but NSX installation does not happen.

Important Perform this custom image association procedure on both reference and target
hosts.

Prerequisites

Procedure

1 On the ESXi host, navigate to Menu > Auto Deploy > Deployed Hosts.


2 To associate the custom image profile with a host, select the custom image.

3 Click Edit Image Profile Association.

4 In the Edit Image Profile Association wizard, click Browse and select the custom depot and
select the custom image profile.

5 Enable Skip image profile signature check.

6 Click Ok.

Results

What to do next

Set up Network Configuration on the Reference Host. See Set Up Network Configuration on the
Reference Host.

Set Up Network Configuration on the Reference Host


On the reference host, a standard switch with a VMkernel adapter is created to set up the
network configuration on ESXi.

This network configuration is captured in the host profile which is extracted from the reference
host. During a stateless deployment, the host profile replicates this network configuration setting
on each target host.

Procedure

1 On the ESXi host, configure a vSphere Standard Switch (VSS) or Distributed Virtual switch
(DVS) by adding a VMkernel adapter.


2 Verify that the newly added VSS/DVS switch is displayed in the VMkernel adapters page.

What to do next

Configure the Reference Host as a Transport Node in NSX. See Configure the Reference Host as
a Transport Node in NSX.

Configure the Reference Host as a Transport Node in NSX


After the reference host is associated with the custom image profile and configured with a VSS
or DVS switch, deploy the reference host as an NSX transport node with an NSX-enabled DVS switch.

Procedure

1 From a browser, log in to NSX at [Link]

2 To locate the reference host, navigate to System -> Nodes -> Host Transport Node.

3 Create a VLAN transport zone to define the span of the virtual network. The span is defined
by attaching VDS switches to the transport zone. Based on this attachment, VDS can access
segments defined within the transport zone. See Create a Transport Zone.

4 Create a VLAN segment on the transport zone. The created segment is displayed as a logical
switch. (An API sketch for this step follows at the end of this procedure.)

a Navigate to Networking -> Segments.

b Select the transport zone to attach the segment.

c Enter VLAN ID.

d Click Save.


5 Create an uplink profile for the reference host that defines how a VDS switch connects to
the physical network. See, Create an Uplink Profile.

6 Configure the reference host as a transport node. See Configure a Managed Host Transport
Node.

a In the Host Transport Node page, select the reference host.

b (On a VDS switch) Click Configure NSX and select the previously created transport zone,
VDS, uplink profile.

7 Click Finish to begin installation of NSX on the reference host.

(On a VDS switch) After installation, configuration status of the reference host is displayed as
Success. In the vCenter Server, the VDS switch is displayed as NSX switch.

Note The reference host is listed under Other Hosts.
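
The VLAN segment from step 4 can also be created through the NSX Policy API once the transport zone exists. The following Python sketch assumes the /policy/api/v1/infra/segments endpoint and the default site and enforcement point in the transport zone path; the manager address, credentials, transport zone ID, segment name, and VLAN ID are placeholders.

import requests

NSX = "https://nsx-manager.example.com"      # placeholder NSX Manager address
AUTH = ("admin", "nsx-admin-password")       # placeholder credentials
TZ_ID = "vlan-transport-zone-uuid"           # placeholder VLAN transport zone ID
SEGMENT_ID = "ref-host-vlan-segment"         # placeholder segment identifier

# Create or update a VLAN-backed segment attached to the transport zone.
body = {
    "display_name": SEGMENT_ID,
    "vlan_ids": ["100"],                     # placeholder VLAN ID
    "transport_zone_path":
        f"/infra/sites/default/enforcement-points/default/transport-zones/{TZ_ID}",
}
resp = requests.patch(f"{NSX}/policy/api/v1/infra/segments/{SEGMENT_ID}",
                      json=body, auth=AUTH, verify=False)
resp.raise_for_status()

# Read the segment back to confirm it is realized.
segment = requests.get(f"{NSX}/policy/api/v1/infra/segments/{SEGMENT_ID}",
                       auth=AUTH, verify=False)
segment.raise_for_status()
print("Segment path:", segment.json()["path"])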

What to do next

Extract and Verify the Host Profile. See Extract and Verify the Host Profile.

Extract and Verify the Host Profile


After you extract the host profile from the reference host, verify the NSX configuration extracted
in the host profile. It consists of ESXi and NSX configuration that is applied to target hosts.

Procedure

1 To extract the host profile, see Extract and Configure Host Profile from the Reference Host.


2 In the extracted host profile verify NSX configuration.

3 To verify DVS switch is enabled on NSX, select Policies and Profiles → Host Profiles →
Configure → vSphere Distributed Switch.

4 Select the DVS switch and determine whether NSX is enabled on DVS.

What to do next

Verify the host profile association with stateless cluster. See Verify the Host Profile Association
with Stateless Cluster.


Verify the Host Profile Association with Stateless Cluster


To prepare the target stateless cluster with ESXi and NSX configuration, associate the host
profile extracted from the reference host to the target stateless cluster.

Without the host profile associated to the stateless cluster, new nodes joining the cluster cannot
be auto deployed with ESXi and NSX VIBs.

Procedure

1 Attach or Detach Host Profile to Stateless Cluster. See Attach or Detach Entities from a Host
Profile.

2 In the Deployed Hosts tab, verify that the existing stateless host is associated with the
correct image and associated with the host profile.

3 If the host profile association is missing, select the target host and click Remediate Host
Associations to force update the image and host profile to the target host.

What to do next

Update Host Customization. See Update Host Customization.

Update Host Customization


After attaching the host profile to the target cluster, additional custom entries might be
required on the host to successfully auto deploy the ESXi and NSX packages on it.


Procedure

1 After attaching the host profile to the target cluster, if the hosts are not updated with custom
values, the system displays a message prompting you to update the host customizations.

2 To update host customizations, navigate to the host profile, click Actions -> Edit Host
Customizations.

3 For ESXi versions 6.7 EP6, 6.7 EP7, and 6.7 U2, enter the MUX user password.

4 Verify that all the required fields are updated with appropriate values.

What to do next

Trigger Auto Deployment on Target Hosts. See Trigger Auto Deployment on Target Hosts.

Trigger Auto Deployment on Target Hosts


When a new node is added to the cluster, it needs to be manually rebooted for the ESXi and NSX
VIBs to be configured.

Note Only applies to stateless hosts.


There are two ways to prepare hosts to trigger auto deployment of the ESXi and NSX VIBs:

n Reboot hosts before applying TNP to the stateless cluster.

n Reboot hosts after applying TNP to the stateless cluster.

What to do next

Reboot hosts before applying TNP to the stateless cluster. See Reboot Hosts Before Applying
TNP.

Reboot Hosts Before Applying TNP


Only applies to stateless hosts. In this scenario, the transport node profile is not applied to the
stateless cluster, which means that NSX is not installed and configured on the target host.

Procedure

u Reboot hosts.

The target host starts with the ESXi image. After starting, the target host remains in
maintenance mode until the TNP profile is applied to the target host and NSX installation
is complete. Profiles are applied on hosts in the following order:


n Image profile is applied to the host.

n Host profile configuration is applied to the host.

n NSX configuration is applied to the host.

ESXi VIBs are applied to all the rebooted hosts, and a temporary NSX switch is created on each ESXi host.
When the TNP is applied to the hosts, the temporary switch is replaced by the actual NSX switch.

What to do next

Apply TNP to the stateless cluster. See Apply TNP on Stateless Cluster.

Apply TNP on Stateless Cluster


NSX configuration and installation only happens on the target hosts when TNP is applied to the
cluster.

Procedure

1 Note down the settings extracted in the Host Profile from the reference host. The
corresponding entities in the TNP profile must have the same value. For example, the VDS
name used in the Host Profile and TNP must be the same.

For more information on extracted host profile settings, see Extract and Verify the Host
Profile.

2 Add a TNP.


3 Add a TNP by entering all required fields.

Ensure that values of the following parameters are the same on both the new TNP profile and
the existing Host Profile.

Note On a VDS switch, migration of VMkernel adapters and physical NIC migration is not
supported.

n Transport Zone: Ensure transport zone referenced in Host Profile and TNP is the same.

n VDS Name: Ensure VDS name referenced in Host Profile and TNP is the same.

n Uplink Profile: Ensure uplink profile referenced in Host Profile and TNP is the same.

n Teaming Policy:

n (On a VDS switch) In vCenter Server, when creating VDS uplinks, verify the NIC used
in the Host Profile and map that physical NIC to the VDS uplink. In NSX, you map
NSX uplinks to VDS uplinks. So, verify the configuration on the VDS switch in vCenter
Server.

After applying TNP on target nodes, if the TNP configuration does not match Host Profile
configuration, the node might not come up because of compliance errors.

4 Verify that the TNP profile is successfully created.

5 Apply TNP profile to the target cluster and click Save.

6 Verify that the TNP profile is successfully applied to the target cluster. It means that NSX is
successfully configured on all nodes of the cluster.

7 In NSX, verify that the ESXi host is configured successfully as a transport node.

What to do next

Alternatively, you can reboot a target host after applying TNP to the cluster. See Reboot Hosts
After Applying TNP.

Reboot Hosts After Applying TNP


Only applies to stateless hosts. When a new node is added to the cluster, manually reboot the
node for the ESXi and NSX packages to be configured on it.

Procedure

1 Apply TNP to the stateless cluster that is already prepared with host profile. See Create and
Apply TNP on Stateless Cluster.

2 Reboot hosts.

After applying TNP profile to the stateless cluster, when you reboot any new node joining the
cluster that node is automatically configured with NSX on the host.


What to do next

Ensure that you reboot any new node joining the cluster to automatically deploy and configure
ESXi and NSX on the rebooted node.

To troubleshoot issues related to host profile and transport node profile when configuring auto-
deployment, see Troubleshoot Host Profile and Transport Node Profile.

Troubleshoot Host Profile and Transport Node Profile


Troubleshoot issues with host profiles and TNPs when they are used to auto deploy stateless
clusters.

Scenario: Multiple VMkernel adapters enabled to support Management, vMotion, and other traffic are migrated to the same logical switch. The VMkernel adapters are migrated to the logical switch after reboot, but the service on one VMkernel adapter is enabled on a different adapter.
Description: For example, before migration, vmk0 is enabled to support Management traffic and vmk1 is enabled for vMotion traffic. After host reboot, vmk0 supports vMotion traffic and vmk1 supports Management traffic. This results in a non-compliant error after reboot.
Workaround: None. There is no impact as both VMkernel adapters are on the same logical switch.

Scenario: Host preparation progress is stuck at 60% while the node status displays UP.
Description: Issue: When a TNP is applied on a cluster, NSX is successfully installed on the host and the node status displays UP, but the GUI still shows 60% progress.
Workaround: Reapply the TNP or TN configuration without any change in the config. This fixes the status to 100% on the GUI.

Scenario: Even though VMkernel migration is successful, there was a validation error on the TN before host switches are removed.
Description: Issue: When you migrate vmk0, the management interface, from a vSwitch to a logical switch, NSX is successfully installed on the host. The VMkernel migration is successful, but the TN status shows Partial Success with the error:
Validation before host switches removal failed: [error: No management vmk will have PNIC after ['vmk1'] in ['9a bb eb c1 04 81 40 e2-bc 3f 3e aa bd 14 62 1e'] lose all PNICs.]; LogicalSwitch full-sync: LogicalSwitch full-sync realization query skipped.
Workaround: None. Ignore the error message as the VMkernel migration is successful.

Scenario: Reapplying a TNP where the Network Mapping for Install lists vmk0 results in the host losing connectivity.
Description: Issue: When a TNP configuration includes vmk0 in the Network Mapping for Install, the host loses connectivity.
Workaround: Instead of reapplying the TNP, reboot the host with the necessary configurations in the TNP.

Scenario: Cannot apply the host profile because the MUX user password policy and password were not reset.
Description: Issue: Only on hosts running versions earlier than vSphere 6.7 U3. Host remediation and host profile application on hosts might fail unless the mux_user password is reset.
Workaround: Under Policies & Profiles, edit the host profile to modify the mux_user password policy and reset the mux_user password.

Scenario: The host profile is not portable.
Description: Issue: None of the vCenter servers can use the host profile containing NSX configuration.
Workaround: None.

Scenario: Auto Deploy rule engine.
Description: Issue: The host profile cannot be used in auto deploy rules to deploy new clusters. If new clusters are deployed, the hosts get deployed with basic networking and remain in maintenance mode.
Workaround: Prepare each cluster from the NSX GUI. See Apply TNP on Stateless Cluster.

Scenario: Check compliance errors.
Description: Issue: Host profile remediation cannot fix compliance errors related to the NSX configuration:
n Physical NICs configured on the Host Profile and TNP are different.
n The Host Profile finds a mismatch in the logical switch to vNIC mapping with the TNP.
n VMkernel connected to N-VDS mismatch on the Host Profile and TNP.
n Opaque switch mismatch on the Host Profile and TNP.
Workaround: Ensure the NSX configuration matches on the Host Profile and TNP. Reboot the host to realize the configuration changes. The host comes up.

Scenario: Remediation.
Description: Issue: If there are any NSX-specific compliance errors, host profile remediation on that cluster is blocked. Incorrect configuration:
n Mapping between vNIC and logical switch.
n Mapping of physical NICs.
Workaround: Ensure that the NSX configuration matches on the Host Profile and TNP. Reboot the host to realize the configuration changes. The host comes up.

Scenario: Attach.
Description: Issue: In a cluster configured with NSX, a host profile cannot be attached at the host level.
Workaround: None.

Scenario: Detach.
Description: Issue: Detaching and attaching a new host profile in a cluster configured with NSX does not remove the NSX configuration. Even though the cluster is compliant with the newly attached host profile, it still has the NSX configuration from a previous profile.
Workaround: None.

Scenario: Update.
Description: Issue: If you have changed the NSX configuration in the cluster, extract a new host profile. Update the host profile manually for all the settings that were lost.
Workaround: None.

Scenario: Host-level transport node configuration.
Description: Issue: After a transport node was auto-deployed, it acts as an individual entity. Any update to that transport node might not match the TNP.
Workaround: Update the cluster. Any update in a standalone transport node cannot persist its migration specification. The migration might fail after the reboot.

Scenario: PeerDNS configuration is not supported on the VMkernel adapter selected for migration to the N-VDS switch.
Description: Issue: If a VMkernel adapter selected for migration to N-VDS is peer-DNS enabled, then host profile application fails.
Workaround: Edit the extracted host profile by disabling the peer-DNS setting on the VMkernel adapter that must be migrated to an N-VDS switch. Alternatively, ensure that you do not migrate peer-DNS enabled VMkernel adapters to an N-VDS switch.

Scenario: The DHCP address of the VMkernel NIC is not retained.
Description: Issue: If the reference host is stateful, then any stateless hosts using a profile extracted from the stateful reference host cannot retain their VMkernel management MAC address derived from the PXE-started MAC. This results in DHCP addressing issues.
Workaround: Edit the extracted host profile of the stateful host and modify 'Determine how MAC address for vmknic should be decided' to 'Use the MAC address from which the system was PXE started'.

Scenario: Host profile application failure in vCenter can lead to NSX configuration errors on the host.
Description: Issue: If host profile application fails in vCenter, NSX configuration might also fail.
Workaround: In vCenter, verify that the host profile was successfully applied. Fix the errors and try again.

Scenario: LAGs are not supported on stateless ESXi hosts.
Description: Issue: An uplink profile configured with LAGs in NSX is not supported on a stateless ESXi host managed by a vCenter Server or in NSX.
Workaround: None.

Scenario: A stateless host does not boot up with the MAC address of the PXE NIC when it is applied with a host profile extracted from a stateful host.
Description: Issue: If a stateless host is attached with a host profile extracted from a stateful host, then the VMkernel adapter (vmknic) of the stateless host does not boot up with the MAC address of the PXE NIC of the host, because a stateful host does not boot up as a PXE-enabled system.
Workaround: When you are setting up auto deployment of stateless hosts, ensure that the host profile is extracted from a host that boots up as a PXE-enabled system.

Stateful Servers
Integrate host profiles of an ESXi host with NSX on stateful servers.


A stateful host is a host that retains all configurations and the installed VIBs even after it is
rebooted. An auto-deploy server is needed for stateless hosts because the boot files required to
bring up a stateless host are stored on the auto-deploy server. A stateful host does not need a
similar infrastructure because the boot files required to bring it up are stored on its hard drive.

In this procedure, the reference host is outside of the stateful cluster and the target hosts are in the
cluster. A target host can be within a cluster or a standalone host outside of the cluster. Prepare
a cluster by applying a host profile and a transport node profile (TN profile), so that any new target
host joining the cluster is automatically prepared with NSX VIBs, and then configure the target host
as a transport node. Similarly, for a standalone host, apply the host profile and configure NSX to
install the NSX VIBs; when the NSX configuration is complete, the host becomes a transport node.

Note NSX VIBs are installed by the TN profile, and ESXi host configurations are applied by the
host profile.

Supported NSX and ESXi versions


Supported NSX and ESXi versions on stateful servers.

7.
0
7 6. 6. 7 U
7. 0 7 5 5 6. 6. 0 3 7
67 67 67 6.5 0. P 0 7. 6. P E 7P 7E P E 0
Version ep U 67 67e U2 6.5 p0 0. 0 U 0. 7 0 P 0 P 0 P U 8.0
Name 6 2 U3 p7 C U3 3 1 3 2 1 U1 9 27 8 24 6 8 3c .0.1

NSX 2.4 Ye N No No No No Yes N N N N N N N N N N N N No


s o o o o o o o o o o o o o

NSX Ye Y No No No No Yes N N N N N N N N N N N N No
2.4.1 s es o o o o o o o o o o o o

NSX Ye Y No No No No Yes N N N N N N N N N N N N No
2.4.2 s es o o o o o o o o o o o o

NSX Ye Y No No No No Yes N N N N N N N N N N N N No
2.4.3 s es o o o o o o o o o o o o

NSX 2.5 Ye Y Yes Yes Yes Yes Yes N N N N N N N N N N N N No


s es o o o o o o o o o o o o

NSX Ye Y Yes Yes Yes Yes Yes N N N N N N N N N N N N No


2.5.1 s es o o o o o o o o o o o o

NSX 3.0 Ye Y Yes Yes Yes Yes Yes Y Y Y Y N N N N N N N N No


s es es es es es o o o o o o o o

NSX 3.1 Ye Y Yes Yes Yes Yes Yes Y Y Y Y N N N N N N N N No


s es es es es es o o o o o o o o

NSX 3.2 Ye Y Yes Yes Yes Yes Yes Y Y Y Y Y Y Y Y Y Y Y Y No


s es es es es es es es es es es es es es


7.
0
7 6. 6. 7 U
7. 0 7 5 5 6. 6. 0 3 7
67 67 67 6.5 0. P 0 7. 6. P E 7P 7E P E 0
Version ep U 67 67e U2 6.5 p0 0. 0 U 0. 7 0 P 0 P 0 P U 8.0
Name 6 2 U3 p7 C U3 3 1 3 2 1 U1 9 27 8 24 6 8 3c .0.1

NSX 4.0 N N No No No No No Y Y Y Y N N N N N N N Y Yes


o o es es es es o o o o o o o es

NSX 4.1 N N No No No No No Y Y Y Y N N N N N N N Y Yes


o o es es es es o o o o o o o es

Prepare a Target Stateful Cluster


Prepare a target stateful cluster so that any new host joining the cluster is automatically
deployed with ESXi and NSX VIBs.

You can select a host either within the cluster or outside of the cluster to be the reference
host. You need to create a reference host because the host profile from the reference host
is extracted and applied to a target host. VDS switch type supports migration of VMkernel
adapters.

In this procedure, as an example, the instructions migrate vmk0 (management traffic) and
vmk1 (vMotion traffic) to a VDS switch.

Prerequisites

Procedure

1 On the Reference host, deploy a supported ESXi build.

a In vSphere, add vmk1 adapter. vmk0 is already present to serve management traffic.

2 Configure the reference node as a transport node.

a Using vSphere Web Client, ensure a logical switch is created in NSX.

b Using vSphere Web Client, ensure that vmk0 and vmk1 are connected to a logical switch
on VDS switch.

3 Extract the host profile from the reference host.

4 On a target host that is a standalone host:

a Attach the host profile to the target host.

b Manually configure NSX on the host. Because the host profile is already applied on the ESXi
host, ensure the following conditions are met when configuring the host as a transport node.

c Host must belong to the same transport zone.


d Target host must use the same IP pool that is used by the reference host.

e Uplink profile, LLDP, Network mapping for install, and VDS configured on the target host
must be the same as configured on the reference host.

5 On a target host that is part of a cluster:

a Attach the host profile to the stateful target cluster.

b Create and apply the TN profile on the cluster.

c The TN profile is applied on the cluster, and NSX is installed and configured on the target hosts.

What to do next

Review the scenarios in which VMkernel adapters are migrated with and without host profiles applied to NSX.



Getting Started with NSX
Federation 13
To get started with NSX Federation, you install the Global Manager, configure the Global
Manager as active, and add locations.

n Check the requirements for Federation: see NSX Federation Requirements.

n Install the Global Manager: see Install the Active and Standby Global Manager.

n Make the Global Manager cluster active: see Make the Global Manager Active and Add Standby Global Manager.

n Add locations to the active Global Manager: see Add a Location.

For further configuration tasks, such as preparing Edge clusters for stretched networking, and
creating objects from the Global Manager, see Federation in the NSX Administration Guide.

Procedure

1 NSX Federation Key Concepts


NSX Federation introduces some new terms and concepts, such as remote tunnel endpoint
(RTEP), span, and region.

2 NSX Federation Requirements


To support NSX Federation, your environment must meet various requirements, including
round-trip time, software versions, and ports.

3 Configuring the Global Manager and Local Managers


An NSX Federation environment contains an active and a standby Global Manager cluster
and one or more Local Manager clusters.

NSX Federation Key Concepts


NSX Federation introduces some new terms and concepts, such as remote tunnel endpoint
(RTEP), span, and region.


NSX Federation Systems: Global Manager and Local Manager


An NSX Federation environment includes two types of management systems:

n Global Manager: A system similar to NSX Manager that federates multiple Local Managers.

n Local Manager: An NSX Manager system in charge of network and security services for a
location.

NSX Federation Span: Local and Stretched


When you create a networking object from Global Manager, it can span one or more locations.

n Local: The object spans only one location.

n Stretched: The object spans more than one location.

You do not directly configure the span of a segment. A segment has the same span as the
gateway it is attached to.

NSX Federation Regions


Security objects have a region. The region can be one of the following:

n Location: Each location automatically creates a region. This region has the span of that
location.

n Global: A region that has the span of all available locations.

n Custom Region: You can create regions that include a subset of the available locations.

NSX Federation Tunnel Endpoints


In an NSX Federation environment, there are two types of tunnel endpoints.

n Tunnel End Point (TEP): The IP address of a transport node (Edge node or Host) used for
Geneve encapsulation within a location.

n Remote Tunnel End Points (RTEP): The IP address of a transport node (Edge node only) used
for Geneve encapsulation across locations.

NSX Federation Requirements


To support NSX Federation, your environment must meet various requirements, including round-
trip time, software versions, and ports.

n There must be a maximum round-trip time of 500 ms between the following nodes:

n Active Global Manager and standby Global Manager.

n Global Manager and Local Manager.

n Local Manager and remote Local Manager, if you have only cross-location security configuration.

n VMware NSX® Edge™ node RTEP and remote Edge node RTEP, if you have cross-location network configurations.


n The Global Manager and Local Manager appliances must all have NSX 3.1.0 or later installed.
All appliances in an NSX Federation environment must have the same version installed.

n The required ports must be open to allow communication between the Global Manager and
Local Manager. See VMware Ports and Protocols at [Link]

n There must be connectivity without NAT between the following nodes:

n Global Manager and Local Manager.

n Local Manager and remote Local Manager.

n Edge node RTEP and remote Edge node RTEP.

n Verify that each location has a default overlay transport zone configured. From each Local
Manager, select System > Fabric > Transport Zones. Select an overlay transport zone, and
click Actions > Set as Default Transport Zone. (A scripted alternative is sketched after this list.)

n Global Manager supports only Policy Mode. NSX Federation does not support Manager
Mode. See NSX Manager for more information.

An NSX Federation environment has the following configuration maximums:

n For most configurations, the Local Manager cluster has the same configuration maximums as
an NSX Manager cluster. Go to VMware Configuration Maximums tool and select NSX.

Select the NSX Federation category for NSX in the VMware Configuration Maximums tool for
exceptions and other NSX Federation-specific values.

n For a given location, the following configurations contribute to the configuration maximum:

n Objects that were created on the Local Manager.

n Objects that were created on the Global Manager and include the location in its span.

You can view the capacity and usage on each Local Manager. See View the Usage and
Capacity of Categories of Objects in the NSX Administration Guide.
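
Setting the default overlay transport zone on each Local Manager, as required above, can also be done with the NSX Manager API. A minimal Python sketch, assuming the /api/v1/transport-zones endpoint and its is_default flag, with placeholder connection details and transport zone ID:

import requests

LOCAL_MANAGER = "https://local-manager.example.com"   # placeholder Local Manager address
AUTH = ("admin", "nsx-admin-password")                # placeholder credentials
TZ_ID = "overlay-transport-zone-uuid"                 # placeholder overlay transport zone ID

# Read the transport zone, mark it as the default, and write it back.
tz = requests.get(f"{LOCAL_MANAGER}/api/v1/transport-zones/{TZ_ID}",
                  auth=AUTH, verify=False)
tz.raise_for_status()
body = tz.json()
body["is_default"] = True    # equivalent to Actions > Set as Default Transport Zone
resp = requests.put(f"{LOCAL_MANAGER}/api/v1/transport-zones/{TZ_ID}",
                    json=body, auth=AUTH, verify=False)
resp.raise_for_status()
print("Default overlay transport zone:", body.get("display_name", TZ_ID))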

Configuring the Global Manager and Local Managers


An NSX Federation environment contains an active and a standby Global Manager cluster and
one or more Local Manager clusters.


Figure: An active Global Manager cluster, a standby Global Manager cluster, and Local Manager clusters in Location 1, Location 2, and Location 3.

Install the Active and Standby Global Manager


To use NSX Federation, you must install the Global Manager.

Installing a Global Manager appliance is similar to installing an NSX Manager appliance. The only
difference is that when you deploy the appliance, you select NSX Global Manager for the role.

Install a standby Global Manager appliance for high availability and disaster recovery. The
standby Global Manager appliance must be installed in a different location with a latency of
500ms or less.

Prerequisites

n Verify that your environment meets the requirements for NSX Manager. See NSX Manager
VM and Host Transport Node System Requirements.


n Decide which locations will contain the active and standby Global Manager appliances.

n Verify that you are installing the Global Manager appliance with NSX 3.1.0 or later.

Important All Global Manager and Local Manager appliances in an NSX Federation
environment must have the same version of NSX installed.

Procedure

1 To install the first Global Manager appliance on vSphere: Install NSX Manager and Available
Appliances.

n Select Medium or Large for the deployment configuration size. Do not use Small.

n Select NSX Global Manager for the Role name.

2 Log in to the NSX Manager appliance.

See Log In to the Newly Created NSX Manager .

3 Configure a compute manager.

See Add a Compute Manager.

Note If you are at this step while installing the standby Global Manager, you must configure
a separate compute manager. Do not use the same compute manager that you configured
for the active Global Manager.

4 Create a Global Manager cluster. See Cluster Requirements for an Individual Site for design
recommendations.

n On vSphere with a compute manager configured: See Deploy NSX Manager Nodes to
Form a Cluster from the UI.

n On vSphere without a compute manager configured: Repeat the NSX Manager install
on vSphere steps to install three appliances, then form the cluster. See Form an NSX
Manager Cluster Using the CLI.

5 Configure a VIP for the Global Manager cluster.

See Configure a Virtual IP Address for a Cluster.

6 In a different location, install a standby Global Manager appliance and form a cluster by
repeating these steps.

What to do next

Select the designated Global Manager appliance as active and connect it with the standby Global
Manager.

Make the Global Manager Active and Add Standby Global Manager
After you have deployed a Global Manager appliance, you can make the Global Manager active.


Adding a standby Global Manager is optional but recommended for high availability of the Global
Manager.

Procedure

1 Log in to the appliance at [Link]

2 Select System > Location Manager. In the Global Manager tile, click Make Active. Provide a
descriptive name for the active Global Manager and click Save.

3 (Optional) Add a standby Global Manager cluster.

a Install a new Global Manager appliance in a secondary location and start it. Follow the
same instructions as for installing the primary Global Manager, see Install the Active and
Standby Global Manager .

b From the active Global Manager, add this newly installed Global Manager appliance as
standby.

Navigate back to your active Global Manager and click Add Standby and provide the
following information:

Option Description

Global Manager Name Provide a name for the standby Global Manager.

FQDN/IP Enter the FQDN or IP address of the Global Manager cluster VIP at the
secondary location. Do not enter an individual Global Manager FQDN or
IP.

Username and Password Provide the admin user's credentials for the Global Manager at the
secondary location.

SHA-256 Thumbprint Log in to any Global Manager node at the secondary location and run this
command:

get certificate cluster thumbprint

The result is the thumbprint of the cluster VIP certificate, for example:

bfae1a0a...

If the standby Global Manager is a single Manager VM, use the same command.

Check Compatibility Click Check Compatibility to ensure that the Global Manager can be
added as standby. This checks that the NSX version is compatible.

c Click Save.

d Click Make Standby.

Add a Location
After you add a location to Global Manager, you can create objects from Global Manager that
span that location.


You can find the number of supported locations in the VMware Configuration Maximums tool.
Select the appropriate version of NSX, select the NSX Federation category, and click View
Limits.

Only use the admin account credentials to register the Local Manager with the Global Manager.

After you add a location to the Global Manager, the NSX Manager is called a Local Manager (LM).

Prerequisites

n Verify that the NSX environment you are adding has the latest version of NSX installed.

This new NSX location can be a new NSX environment or an NSX environment with an
existing Network and Security configuration.

n The NSX environment in the new location must have three NSX Manager nodes deployed and
a cluster VIP configured. See Configure a Virtual IP Address for a Cluster.

For a proof-of-concept environment, you can add a location that has only one NSX Manager
node, but you must still configure a cluster VIP.

n Verify that the latency between the Global Manager and the location is 500 ms or less for
non-stretched networks or 150 ms or less for stretched networks.

Procedure

1 Log in to the Global Manager at [Link]

2 Select System > Location Manager and click Add On-Prem Location.

3 In the Add New Location dialog box, enter the Location details.

Option Description

Location Name Provide a name for the location.

FQDN/IP Enter the FQDN or IP address of the NSX Manager cluster VIP. Do not enter
an individual NSX Manager FQDN or IP.

Username and Password Provide the admin user's credentials for the NSX Manager at the location.
Do not use any other account to register the Local Manager with the Global
Manager.

SHA-256 Thumbprint Log in to any NSX Manager node in the cluster and run this command:

get certificate cluster thumbprint

The result is the thumbprint of the cluster VIP certificate, for example:

bfae1a0a...

Check Compatibility Click Check Compatibility to ensure that the location can be added. This
checks that the NSX version is compatible.

What to do next

If you want to create gateways and segments that span more than one location, you must configure a remote tunnel endpoint (RTEP) on Edge nodes in each location to handle the cross-location traffic. See Configure Edge Nodes for Stretched Networking. After you add a location to your Global Manager, you can import your configurations from that location's Local Manager appliance into the Global Manager. See Importing Configurations from Local Manager.

Importing Configurations from Local Manager


After you successfully add a Local Manager location to the Global Manager, you can import all
network and security Local Manager configurations to the Global Manager.

You can only import the entire Local Manager configuration into the Global Manager. There is no
support for partial configuration import. You can only import the configurations once.

Local Manager Configurations Supported for Importing into Global Manager

n Context Profiles

n DHCP

n DNS

n Firewall Security Policies

n Gateway Profiles

n Groups

n NAT

n Security Profiles

n Services

n T0 Gateway

n T1 Gateway

n Time-based firewall (import/onboard now supported)

Local Manager Configurations Not Supported for Importing into Global Manager

The following features are not supported with NSX Federation. Import of configurations into
the Global Manager is blocked if you have any of these configurations in your Local Manager.
You must remove unsupported configurations to proceed with importing. After your supported
Local Manager configurations are successfully imported into Global Manager, you can add the
configurations for any of the unsupported features back into your Local Manager.

n DHCP dynamic binding

n Distributed IDS

n Distributed security for vCenter VDS Port Group only (Global Manager does not see the
vCenter VDS port groups to assign them in security groups. However, Global Manager can
use dynamic membership in groups based on vCenter VDS port groups tags added by Local
Managers.)

n Endpoint protection


n Forwarding policies

n Guest introspection

n Identity firewall

n IDS/IPS

n L2 Bridge

n Load balancer

n Malware prevention

n Metadata proxy

n Multicast

n Network detection and response

n Network introspection

n Routing protocols (OSPF)

n Routing VPN and EVPN

n Service insertion

n T0 VRF

n TLS inspection

n URL filtering

Note If you use a load balancer in the Local Manager, you cannot import the load balancer, but
you can still import other configurations if you meet certain conditions as described in the section
"Importing Configurations if you have a Load Balancer service on the Local Manager".

Importing Configurations if you have a Load Balancer service on the Local Manager

If your Local Manager has a load balancer service, you can import configurations except the load
balancer, if you meet the following conditions:

n The load balancer service must be in one-arm mode on a standalone tier-1 gateway.

n The standalone tier-1 gateway that the one-arm load balancer is attached to:

n must have only the load balancer service and no other services

n must not have any downlink segments

n must not share Gateway Firewall rules with any other tier-0 or tier-1 gateways.

n Groups used in load balancer service must not be used in any firewall rules. If you have
groups common to both load balancer and firewall rules, you must either remove the group
from the firewall rule or create an identical group to use with the load balancer.

Configurations Created by a Principal Identity User in Local Manager


If you have configurations in the Local Manager that are created by the Principal Identity user
and the same Principal Identity user is not present in the Global Manager, import is blocked.

You have the following options for importing these entities:

n The system displays a list of Principal Identity usernames that are being used on the Local Manager to create configurations. Create each of these Principal Identity users in the Global Manager before proceeding to import.

n If you do not want to create Principal Identity usernames in Global Manager, remove all configurations in the Local Manager that are created using the Principal Identity username. You can then proceed with importing other configurations from the Local Manager.

Prerequisites

n The Local Manager appliance must register with the Global Manager.

n The Local Manager appliance must have a backup that you can restore in case the importing
procedure fails.

n You must remove configurations for unsupported features from your Local Manager
appliance. You are provided guidance in the NSX UI on how to resolve any importing
conflicts.

Procedure

1 Log in to the Global Manager and navigate to System > Location Manager.

2 A system message appears for each location that has been successfully added into the
Global Manager and has objects that can be imported.

3 Click Import Now from the system message. You can also import objects by clicking Actions
> Import from the location tile.


4 You see a list of objects that can be imported into the Global Manager.

a If there are naming conflicts, you can provide a prefix or suffix for configurations. The
total length of the object including the prefix and suffix must not exceed 256 characters.

The prefix or suffix gets applied to the following objects being imported:

n Tier-0 gateway

n Tier-1 gateway

n Segments

n DNS zones

n DHCP profiles

n Switching profiles: IPv6, VNI Pool, Gateway QoS, BFD, IPFIX

n Security profiles: IPFIX, Flood-Protection, DNS Security, Session Timer, Context Profiles

n L4-L7 services (all services listed under Inventory > Services).

The prefix or suffix is not applied to inventory and firewall objects (groups, firewall policies, and firewall rules), or to system-created profiles and services.

b For other conflicts, follow the guidance provided in the UI.

Results

Global Manager owns any Local Manager objects imported into the Global Manager. These objects appear in the Local Manager with a Global Manager icon. You can now modify these objects only from the Global Manager.

Configure Edge Nodes for Stretched Networking


If you want to create gateways and segments that span more than one location, you must
configure a remote tunnel endpoint (RTEP) on Edge nodes in each location.

When you configure an RTEP, do it on an Edge cluster basis. All Edge nodes in the cluster must
have an RTEP configured. You do not need to configure all Edge clusters with RTEP. RTEPs
are required only if the Edge cluster is used to configure a gateway that spans more than one
location.

You can configure the TEP and RTEP to use the same physical NIC on the Edge node or use
separate physical NICs.

You can also configure RTEPs from each Local Manager. Select System > Get Started >
Configure Remote Tunnel Endpoint.


You can edit RTEPs on an Edge node. Log in to the Local Manager and select System > Fabric
> Nodes > Edge Transport Nodes. Select an Edge node, and click Tunnels. If an RTEP is
configured, it is displayed in the Remote Tunnel Endpoint section. Click Edit to modify the RTEP
configuration.

Prerequisites

n Verify that each location participating in the stretched network has at least one Edge cluster.

n Determine which layer 3 networks and VLANs to use for RTEP networks.

n Intra-location tunnel endpoints (TEP) and inter-location tunnel endpoints (RTEP) must use
separate VLANs and layer 3 subnets.

n Verify that all RTEP networks used in a given NSX Federation environment have IP
connectivity to each other.

n Verify that external firewalls allow cross-location RTEP tunnels, and BGP sessions between
Edges. See VMware Ports and Protocols at [Link]

n Configure the MTU for RTEP on each Local Manager. The default is 1500. Set the RTEP MTU
to be as high as your physical network supports. On each Local Manager, select System >
Fabric > Settings. Click Edit next to Remote Tunnel Endpoint.

Procedure

1 From your browser, log in with admin privileges to the active Global Manager at https://<global-manager-ip-address>.

2 Go to System > Location Manager and click Networking from the location you want to configure for stretched networking.

3 Click Configure next to the Edge cluster for which you want to set up the RTEP.

The Configure Edge Nodes for Stretched Networking screen opens in the Local Manager
with that Edge cluster selected.

4 You can select all Edge Nodes in this cluster or one node at a time. Provide the following
details for the RTEP configuration:

Option Description

Host Switch Select a host switch from the drop-down menu.

Teaming Policy Select a teaming policy if you have one configured.

RTEP VLAN Enter the VLAN ID for the RTEP network. Valid values are between 1 and
4094.

IP Pool for all nodes Select an IP pool for all nodes in this Edge Cluster. If you want to assign an IP
address to an individual node, you can edit the RTEP configuration later.

Inter Location MTU The default is 1500.


5 Click Save.

You can click each of the Edge nodes that are marked as Configured to see the Edge node
configuration details. Select the Tunnels tab to view and edit the RTEP configuration.

Remove a Location
Removing a location from the Global Manager removes all objects created from the Global
Manager that have a Global scope and are not specific to this location.

Removing a location is disruptive.

All configurations created by the Global Manager that are not specific to this location, such as
groups and firewall sections with a Global scope, will be removed from the NSX Manager at this
location.

Prerequisites

Before you remove a location, you must delete all objects created from the Global Manager that are specific to this location, such as tier-0 gateways or firewall policies.

Procedure

1 Log in to the Global Manager at [Link]

2 Navigate to System > Location Manager.

3 From the location that you want to remove, click Actions > Remove.

You see a message describing the effect of removing a location. If you have not previously removed objects created from the Global Manager that are specific to this location, such as tier-0 gateways or firewall policies, you cannot remove the location. The system automatically removes all other configurations from the NSX Manager at this location that have a Global scope in your NSX Federation deployment.

Remove a Location When the Global Manager is Unavailable


This procedure is used only if the Global Manager has been deleted BEFORE the Local Managers
(LM) are aware of its deletion.

This is a possible use case at the completion of a Federation test. In this case, the Local Manager constantly tries to connect to its configured Global Manager (which has been deleted) and continues to share its stretched Global Manager group membership with other Local Managers. The API calls remove the Global Manager constructs on each Local Manager, even though the Global Manager has been deleted.

The following API calls, run on the Local Manager, remove its registration with the Global Manager and its registration with other Local Managers.

Note The following API calls done on the Local Manager are disruptive.

All configurations created by the Global Manager that are not specific to this location, such as
groups and firewall sections with a Global scope, will be removed from the NSX Manager at this
location.


Procedure

1 To remove the registrations of the active Global Manager, the standby Global Manager, and other Local Managers from the Local Manager, use the site manager API (at the Local Manager): POST [Link]

2 To remove Global Manager objects from a Local Manager, run the Local Manager API (at the Local Manager): POST [Link]

3 (Optional) Monitor the offboarding progress by using the API call GET https://<LM>/policy/api/v1/infra/site/offboarding-status.
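
For reference, the status check in step 3 can be run from a shell. The following is a minimal sketch that polls the documented offboarding-status endpoint with curl; the host placeholder, the admin credentials, and the use of basic authentication are assumptions that you must adapt to your environment.

# Poll the offboarding status on the Local Manager (sketch; adjust host and credentials).
# Certificate verification is skipped (-k) only for illustration.
curl -k -u 'admin:<password>' "https://<LM>/policy/api/v1/infra/site/offboarding-status"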

Results

14 Install NSX Advanced Load Balancer Appliance Cluster
Install three NSX Advanced Load Balancer appliances to form a controller cluster. This controller cluster provides the management function for the virtual services, profiles, pools, and pool groups that you configure for NSX Advanced Load Balancer.

The VMware NSX Advanced Load Balancer is a distributed and highly scalable cloud-native application delivery solution. Starting with NSX version 3.2, you can deploy and configure the NSX Advanced Load Balancer (AVI) using NSX Manager. The existing NSX load balancer will be deprecated. The NSX Manager UI provides a single interface to install and manage all NSX components.

Prerequisites

n Supported Avi controller versions: 20.1.7, 21.1.2, or later

n Reserve four IP addresses (in the same subnet) in the management network: three to be assigned to the controller appliances and one to the virtual IP of the NSX Advanced Load Balancer appliance cluster.

n The cluster VIP and the management network of all controllers must be in the same subnet.

n Download the controller OVA from the Broadcom Support page. For more information about downloading the controller OVA image, see [Link]

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or [Link]

Note You must log in with enterprise privileges. You cannot install NSX Advanced Load
Balancer controller nodes with only load balancer privileges.

2 Select System > Appliances > NSX Advanced Load Balancer.

3 Click Set Virtual IP and enter the VIP for the cluster. It is mandatory to set a VIP for the
cluster.

Note Verify that the virtual IP address you set is correct. If you set an incorrect cluster virtual
IP address, then NSX Manager, API clients and end users cannot access the NSX Advanced
Load Balancer controller. The only workaround is to delete all appliances and reconfigure the
cluster with correct virtual IP address before proceeding with deployment.


4 Click Save.

5 Select the Add NSX Advanced Load Balancer card.

6 Choose the Upload OVA File or Remote OVA Link option.

7 Enter the URL and click Upload. Wait for the upload to finish.

Note
n If, instead of the Upload OVA File option, the active bundle version is shown with a message to log in to the appliance, log in to the appliance directly to upload the load balancer.

n OVA upload can fail if the OVA file version being uploaded is different from the already
deployed OVA files. For example, the second or third OVA deployment version is
different from the first OVA deployment.

8 In the Add Appliance wizard, enter the deployment parameters for the first NSX Advanced Load Balancer appliance.

9 Click Upload.

10 On the Add Appliance window, configure these fields:

Field Description

Hostname Enter a valid hostname or FQDN (preferred) for the appliance. To enter a
hostname that resolves to a FQDN, contact the DNS owner.

Management IP/Netmask Enter a static IP address for the management IP address and netmask.
For example, [Link]/22

Management Gateway Enter a static IP address for the management gateway. The management
gateway is used by NSX Advanced Load Balancer controller to
communicate with NSX Manager and other NSX objects.

DNS Server Enter the IP address of the DNS server.

NTP Server Enter the IP address of the NTP server.

Node Size Select the node size you want to deploy based on the requirements of your
network.
Supported node sizes are:
n Small: 8 vCPU, 24 GB RAM, 128 GB storage
n Medium: 16 vCPU, 32 GB RAM, 256 GB storage
n Large: 24 vCPU, 48 GB RAM, 512 GB storage

11 Click Next.

12 On the Configuration window, configure these fields:

Field Description

Compute Manager Select a compute manager that registers the appliance.

Compute Cluster Select a compute cluster where appliance will be deployed.


Resource Pool (Optional) Select a resource pool that will be used during appliance deployment.

Host Select a host where the appliance will be deployed.

Note Select either a host or a resource pool as the location for deployment.

Datastore Select a datastore that will provide storage capacity for the appliance.

Virtual Disk Format By default, the Thin Provision format is selected. However, you can select any format that is suitable for your environment.

Network Click Select Network to select the port group that will provide network connectivity to the appliance.

Note If incorrect compute manager details are provided, deployment fails. As a workaround,
you must force delete the deployment and redeploy the appliance by providing the correct
compute manager details.

13 Click Next.

14 On the Access & Credentials window, enter an admin password that complies with the
required complexity.

Important Enter the same password when deploying all the controllers.

15 (Optional) In the SSH Key field, enter the private key of the SSH key pair to access controller
using SSH key.

16 Click Install Appliance.

Do not try to delete the controller when NSX is registering the controller.

17 Follow steps 1-14 to deploy the second and third appliance.

Note Cluster formation only happens after the third appliance is deployed.

18 If clustering fails on the deployed controller nodes, the NSX Advanced Load Balancer displays an error message. Click Start Clustering to retrigger clustering of the deployed controller nodes. If clustering still fails, force delete the controller and reinstall it.

NSX forms a cluster of the deployed controller nodes.

Results

The NSX Advanced Load Balancer appliance controller cluster is deployed successfully and the UI shows the cluster status as Stable. Verify that the NSX Advanced Load Balancer controller cluster UI is accessible using its VIP, [Link]


What to do next

(Optional) Install portal certificate for ALB controllers.

n Run the following API to create a portal Certificate Signing Request (CSR) for ALB controller.

POST /alb/controller-nodes/certificate/csr

Payload:
{
"common_name": "avi",
"email": "xyz@[Link]",
"organization": "vm",
"organization_unit": "VM",
"locality": "BLR",
"country": "IN",
"state_name": "KA",
"subject_alt_names": [
"[Link]"
],
"algorithm": "SSL_KEY_ALGORITHM_RSA",
"key_size": "SSL_KEY_2048_BITS"
}

Response:
{
"name": "System-Portal-Cert-e8abab64",
"csr": "-----BEGIN CERTIFICATE REQUEST -------- END CERTIFICATE REQUEST-----"
}

n (Optional) Run the following API to install and update portal certificate in ALB Controller.

POST /alb/controller-nodes/certificate/install

Payload:
{
"name": "System-Portal-Cert-e8abab64",
"cert": "-----BEGIN CERTIFICATE -------- END CERTIFICATE-----"
}
Response:
{
"name": "System-Portal-Cert-[Link]",
"cert": "-----BEGIN CERTIFICATE -------- END CERTIFICATE-----"
}

After successfully deploying the NSX Advanced Load Balancer appliance cluster, configure an NSX Cloud Connector in the AVI UI and then configure virtual services that will load balance traffic across servers.

For troubleshooting installation issues related to NSX Advanced Load Balancer appliance cluster,
see Troubleshooting NSX Advanced Load Balancer Controller Issues.


To know the best practices to install and run NSX Advanced Load Balancer, see the
following link, [Link]
Load-Balancer-by-Avi-Networks-NSX-T-Integration/ta-p/2890567.

Note Starting with NSX 3.2.2, you can deploy NSX Advanced Load Balancer controller from the
NSX Manager. However, you need to log in to the AVI controller to configure and consume load
balancer services.

In environments upgraded from 3.2.0 or 3.2.1 to 3.2.2 or higher with the advanced load balancer activated, deactivate the NSX Advanced Load Balancer by clicking Deactivate NSX-T ALB in the banner message on the UI. For more details, see the NSX Administration Guide.

If the environment is running NSX Load Balancer, use NSX to Avi Migration Tool to migrate from
NSX LB to NSX Advanced Load Balancer. See the AVI documentation.

Troubleshooting NSX Advanced Load Balancer Controller Issues

Troubleshoot issues when installing NSX Advanced Load Balancer controller.

NSX Advanced Load Balancer does not register with NSX Manager

Problem

Registration with compute manager failed.

Cause

If the status of deployment is 'VM registration in progress' and the appliance does not
register even after 45 min, registration has failed.

Solution

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or [Link]

2 To delete the controller, go to

System > Appliances > NSX Advanced Load Balancer

3 Click Actions and click Delete.

4 Re-deploy the appliance.

The Second NSX Advanced Load Balancer Controller Remains in Queued State


Problem

After initiating deployment of the second controller, its status shows as Controller VM Deployment Queued.

Cause

The second NSX Advanced Load Balancer controller remains in the queued state until the third controller is deployed.

Solution

1 To retrieve controller deployments that are in pending state, run the following API command:

[Link]
state=PENDING

Deployment of the second appliance begins only when you initiate deployment of the third appliance. Until then, the deployment status of the second appliance remains in the queued state.

2 To retrieve status of deployments, run the following API command:

[Link]

3 To retrieve controller deployments that are in deployed state, run the following API
command:

[Link]
state=DEPLOYED

NSX Advanced Load Balancer Controller Password Change Caused Cluster Failure

Problem

NSX Advanced Load Balancer cluster failed because controller password was changed.

Cause

If you changed the password of the leader controller outside of NSX, the NSX Advanced Load Balancer cluster goes into a failed state. If another node was made the leader during the password change, the original leader node loses all its configured objects.

Solution

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or [Link]

2 Update the password in NSX.

3 On the failed controller nodes, click Start Clustering.


4 If clustering still fails, delete the controller. Go to

System > Appliances > NSX Advanced Load Balancer

5 Click Actions and click Delete.

6 Re-deploy the appliance.

Unable to Delete NSX Advanced Load Balancer Controller

Problem

Unable to delete NSX Advanced Load Balancer controller.

Cause

If NSX Advanced Load Balancer objects exist and there is only one node left, NSX does not allow you to delete all the deployed NSX Advanced Load Balancer controller nodes. This issue occurs because NSX cannot access the node or because you manually deleted the node from the compute manager.

Solution

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or [Link]

2 Delete all existing NSX Advanced Load Balancer objects.

3 To delete the controller, go to

System > Appliances > NSX Advanced Load Balancer

4 Click Actions and click Delete or Force Delete.

5 If load balancer objects are present, you cannot delete the controller node remaining in the cluster. To delete the last controller node, run the following API command.

Note Pass the inaccessible flag only if NSX cannot access the node and it is the last node
in the cluster.

/policy/api/v1/alb/controller-nodes/deployments/{{node_id}}?action=delete&force_delete=true&inaccessible

The API command deletes the controller node but the load balancer objects still exist in the
system.

NSX Advanced Load Balancer Cluster HA Status is Compromised

Problem

NSX Advanced Load Balancer cluster status shows Cluster Up HA Compromised.


Cause

If you deleted one or two of the three NSX Advanced Load Balancer controllers from a stable
cluster or if a controller is down, then the cluster status changes from Cluster UP HA Active
(Stable) to Cluster Up HA Compromised (Degraded).

Solution

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or [Link]

2 Verify all controllers are Up.

3 If you deleted two of the three nodes, SSH to the remaining controller node with admin privileges and run /opt/avi/scripts/recover_cluster.py.

4 If the cluster still remains unstable, delete the controller and reinstall again.

Credential Mismatch After Changing NSX Advanced Load Balancer Controller Password

Problem

Credential mismatch after changing NSX Advanced Load Balancer controller password.

Cause

If you change the controller password for the admin user outside of NSX, from the Avi Vantage Platform UI, the new password is not refreshed in NSX. Any password change made outside of NSX is not reflected in NSX Manager, and NSX Manager does not reflect the state of the cluster.

Solution

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or [Link]

2 If you changed the password outside of NSX, run the following API to change password.

PUT [Link]
deployments/<node_id>

{
"form_factor": "SMALL",
"user_settings": {
"admin_password": "Tilak@123456"
},
"deployment_config": {
"vc_id": "755bd5cb-3700-456c-b74e-25f5140f4a50",
"compute_id": "domain-c201",
"host_id": null,
"storage_id": "datastore-206",
"management_network_id": "network-207",
"hostname": "controller-AA",


"placement_type": "AlbControllerVsphereClusterNodeVmDeploymentConfig",
"disk_provisioning": "THIN",
"dns_servers": [
"[Link]"
]
}
}

3 If you changed the password from NSX, run the following API to change password, DNS and
NTP servers in NSX and on the NSX Advanced Load Balancer controller.

PUT [Link]
running_config=true

Deployment of NSX Advanced Load Balancer Controller Failed

Problem

Deployment of NSX Advanced Load Balancer controller failed.

Cause

Deployment might fail because of a number of reasons. Some of the reasons are:

n Deployment of controller failed

n Controller failed to power on

n Controller failed to register with compute manager

n Controller failed to power off

n Controller could not be undeployed

n Connection between NSX and compute manager is broken

n Compute manager registered to controller is deleted

n Controller is deployed with a hostname that already exists in the cluster

Solution

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or [Link]

2 To delete the controller, go to

System > Appliances > NSX Advanced Load Balancer

3 Click Actions and click Delete.

Cluster Unstable After Two Controllers Are Down

Problem

A cluster becomes unstable after two NSX Advanced Load Balancer controllers go down.


Cause

When a three-node cluster loses two controllers, the cluster quorum is lost and the cluster becomes unstable.

Solution

1 Open an SSH session and log in to the controller that is up and running.

2 At the terminal, run the /opt/avi/scripts/recover_cluster.py script.

3 Verify that the cluster VIP is back up and the cluster status is stable.

15 Getting Started with NSX Cloud
NSX Cloud provides a single pane of glass for managing your public cloud networks.

NSX Cloud is agnostic of provider-specific networking and does not require hypervisor access in the public cloud.

It offers several benefits:

n You can develop and test applications using the same network and security profiles used in
the production environment.

n Developers can manage their applications until they are ready for deployment.

n With disaster recovery, you can recover from an unplanned outage or a security threat to
your public cloud.

n If you migrate your workloads between public clouds, NSX Cloud ensures that similar security
policies are applied to workload VMs regardless of their new location.

Read the following topics next:

n NSX Cloud Architecture and Components

n Overview of Deploying NSX Cloud

n Deploy NSX On-Prem Components

n Deploy NSX Cloud Components in Microsoft Azure using the NSX Cloud Marketplace Image

n Add your Public Cloud Account

n NSX Public Cloud Gateway: Architecture and Modes of Deployment

n Deploy PCG or Link to a PCG

n Auto-Configurations after PCG Deployment or Linking

n Integrate Horizon Cloud Service with NSX Cloud

n (Optional) Install NSX Tools on your Workload VMs

n Un-deploy NSX Cloud


NSX Cloud Architecture and Components


NSX Cloud integrates the NSX core components with your public cloud to provide network and
security across your implementations.

Figure 15-1. NSX Cloud Architecture

[Figure: The on-prem private cloud hosts the NSX Manager cluster and NSX CSM in an infra management subnet, optionally behind a firewall, alongside on-prem hypervisors. It connects to public cloud regions over Express Route/Direct Connect, or over a site-to-site VPN over the internet or an Internet Gateway (IGW), with less than 150 ms round-trip latency. In each Azure and AWS region, a Transit VNet/VPC (using an availability set or availability zones) peers with Compute VNets/VPCs that contain the workload VM subnets.]

Core Components
The core NSX Cloud components are:

n NSX Manager for the management plane with policy-based routing, role-based access
control (RBAC), control plane and runtime states defined.

n Cloud Service Manager (CSM) for integration with NSX Manager to provide public cloud-
specific information to the management plane.

n Public Cloud Gateway (PCG) for connectivity to the NSX management and control planes,
NSX Edge gateway services, and for API-based communications with the public cloud
entities.

n NSX Tools functionality that provides NSX-managed datapath for workload VMs.

Overview of Deploying NSX Cloud


Refer to this overview to understand the overall process of installing and configuring NSX Cloud
components to enable NSX to manage your public cloud workload VMs.

Note While planning your deployment, ensure that NSX appliances have good connectivity with
the PCG deployed in the public cloud and Transit VPCs/VNets are in the same region as the
Compute VPCs/VNets.

Table 15-1. Workflow for deploying NSX Cloud

Task: Install CSM and connect with NSX Manager.
Instructions: See Deploy NSX On-Prem Components.
Note Starting in NSX 3.1.1, if you are using Microsoft Azure, you can deploy NSX Cloud components in your Microsoft Azure subscription. See Deploy NSX Cloud Components in Microsoft Azure using the NSX Cloud Marketplace Image.

Task: Add one or more of your public cloud accounts in CSM.
Instructions: See Add your Public Cloud Account.

Task: Deploy PCG in your Transit VPCs or VNets and link to your Compute VPCs or VNets.
Instructions: See NSX Public Cloud Gateway: Architecture and Modes of Deployment.

Task: What to do next?
Instructions: Follow instructions at "Using NSX Cloud" in the NSX Administration Guide.

Deploy NSX On-Prem Components


You must have already installed NSX Manager to proceed with installing CSM.

Install CSM
The Cloud Service Manager (CSM) is a core component of NSX Cloud.


After installing NSX Manager, install CSM by following the same steps as for installing NSX
Manager and selecting nsx-cloud-service-manager as the VM role. See Install NSX Manager and
Available Appliances for instructions.

You can deploy CSM in the Extra Small VM size or higher, as required. See NSX Manager VM and
Host Transport Node System Requirements for details.

Ports and Protocols


For a list of ports and protocols required for inbound and outbound access for CSM, see https://
[Link]/home/NSX.

NTP Server Configuration


Many features require that CSM has a valid NTP server entry. You can configure the NTP server
at the time of installing CSM or later. Also see Configuring NTP on Appliances and Transport
Nodes in the NSX Administration Guide for other options for configuring NTP.
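
For example, one way to set the NTP server after installation is from the CSM appliance command line. The following is a minimal sketch that assumes the standard NSX CLI set ntp-server command and a placeholder server address; the Configuring NTP topic referenced above remains the authoritative procedure.

# On the CSM appliance NSX CLI (sketch; <ntp-server> is a placeholder you supply).
set ntp-server <ntp-server>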

Join CSM with NSX Manager


You must connect the CSM appliance with NSX Manager to allow these components to
communicate with each other.

Prerequisites

n NSX Manager must be installed and you must have the username and password for the
admin account to log in to NSX Manager.

n CSM must be installed and you must have the Enterprise Administrator role assigned in CSM.

Procedure

1 From a browser, log in to CSM.

2 When prompted in the setup wizard, click Begin Setup.

3 Enter the following details in the NSX Manager Credentials screen:

Option Description

NSX Manager Host Name Enter the fully qualified domain name (FQDN) of the NSX Manager, if
available. You may also enter the IP address of NSX Manager.

Admin Credentials Enter an Enterprise Administrator username and password for NSX
Manager.

Manager Thumbprint Optionally, enter the NSX Manager's thumbprint value. If you leave this field blank, the system identifies the thumbprint and displays it in the next screen.

4 (Optional) If you did not provide a thumbprint value for NSX Manager, or if the value
was incorrect, the Verify Thumbprint screen appears. Select the checkbox to accept the
thumbprint discovered by the system.


5 Click Connect.

Note If you missed this setting in the setup wizard or if you want to change the associated
NSX Manager, log in to CSM, click System > Settings, and click Configure on the panel titled
Associated NSX Node.

CSM verifies the NSX Manager thumbprint and establishes connection.

6 (Optional) Set up the Proxy server. See instructions in (Optional) Configure Proxy Servers.

Specify CSM IPs for Access by PCG


After CSM is deployed, run the following API to specify the CSM IP addresses or subnets that are visible to PCG.

Whenever you run this API, CSM updates the gw-mgmt-sg security group associated with the PCG in your public cloud, appending these IP addresses to allow inbound communication to PCG from CSM over these IP addresses or IP address ranges. See Auto-created Public Cloud Configurations for a list of constructs created in the public cloud after PCG is deployed.

PUT [Link]

Example Request Body where [Link]/24 is the IP address of CSM as seen by PCG.

{
"mgmt_ip_config": [
"[Link]/24",
"[Link]/24"
]
}

(Optional) Configure Proxy Servers


If you want to route and monitor all internet-bound HTTP/HTTPS traffic through a reliable HTTP
Proxy, you can configure up to five proxy servers in CSM.

All public cloud communication from PCG and CSM is routed through the selected proxy server.

Proxy settings for PCG are independent of proxy settings for CSM. You can choose to have none
or a different proxy server for PCG.

You can choose the following levels of authentication:

n Credentials-based authentication.

n Certificate-based authentication for HTTPS interception.

n No authentication.


Procedure

1 Click System > Settings. Then click Configure on the panel titled Proxy Servers.

Note You can also provide these details when using the CSM Setup Wizard that is available
when you first install CSM.

2 In the Configure Proxy Servers screen, enter the following details:

Option Description

Default Use this radio button to indicate the default proxy server.

Profile Name Provide a proxy server profile name. This is mandatory.

Proxy Server Enter the proxy server’s IP address. This is mandatory.

Port Enter the proxy server’s port. This is mandatory.

Authentication Optional. If you want to set up additional authentication, select this check
box and provide valid username and password.

Username This is required if you select the Authentication checkbox.

Password This is required if you select the Authentication checkbox.

Certificate Optional. If you want to provide an authentication certificate for HTTPS interception, select this checkbox and copy-paste the certificate in the text box that appears.

No Proxy Select this option if you do not want to use any of the proxy servers
configured.

(Optional) Set Up vIDM for Cloud Service Manager


If you use VMware Identity Manager™, you can set it up to access CSM from within NSX
Manager.

Procedure

1 Configure vIDM for NSX Manager and CSM. See instructions at Configure VMware Identity
Manager Integration in the NSX Administration Guide.

2 Assign the same role to the vIDM user for NSX Manager and CSM, for example, Enterprise
Admin role assigned to the user named vIDM_admin. You must log in to NSX Manager and
CSM each and assign the same role to the same username. See Add a Role Assignment or
Principal Identity in the NSX Administration Guide for detailed instructions.

3 Log in to NSX Manager. You are redirected to the vIDM login.

4 Enter the vIDM user's credentials. Once you log in, you can switch between NSX Manager and CSM by clicking the Applications icon.


Connect your Public Cloud with On-prem NSX


Connect your public cloud account with the on-prem deployment of NSX.

Connect VPCs/VNets with on-prem using suitable methods, such as Direct Connect for AWS or
Express Route for Microsoft Azure. You can also use site-to-site VPN with any VPN endpoint
on-prem and PCG acting as the VPN endpoint in your public cloud.

To use the Transit/Compute topology, you must have peering connections established between
the Transit and Compute VPCs/VNets. You can have a single PCG manage multiple compute
VPCs/VNets. You can also have a flat compute VPC/VNet architecture with a PCG pair installed
in each VPC/VNet. See NSX Public Cloud Gateway: Architecture and Modes of Deployment for
details on PCG deployment options.

Connect Microsoft Azure with On-prem NSX


A connection must be established between your Microsoft Azure network and your on-prem NSX
appliances.

Note You must have already installed and connected NSX Manager with CSM in your on-prem
deployment.

Overview
n Connect your Microsoft Azure subscription with on-prem NSX.

n Configure your VNets with the necessary CIDR blocks and subnets required by NSX Cloud.

n Synchronize time on the CSM appliance with the Microsoft Azure Storage server or NTP.

Connect your Microsoft Azure subscription with on-prem NSX

Every public cloud provides options to connect with an on-premises deployment. You can
choose any of the available connectivity options that suit your requirements. Refer to Microsoft
Azure Reference documentation for details.

Note You must review and implement the applicable security considerations and best practices
by Microsoft Azure, for example, all privileged user accounts accessing the Microsoft Azure
portal or API should have Multi Factor Authentication (MFA) enabled. MFA ensures only a
legitimate user can access the portal and reduces the likelihood of access even if credentials are
stolen or leaked. For more information and recommendations, refer to Microsoft Azure Security
Center Documentation.

Configure your VNet


In Microsoft Azure, create routable CIDR blocks and set up the required subnets.

n One management subnet with a recommended range of at least /28, to handle:

n control traffic to on-prem appliances

n API traffic to cloud-provider API endpoints


n One downlink subnet with a recommended range of /24, for the workload VMs.

n One, or two for HA, uplink subnets with a recommended range of /24, for routing of north-
south traffic leaving from or entering the VNet.

See NSX Public Cloud Gateway: Architecture and Modes of Deployment for details on how these
subnets are used.
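
If you script the VNet setup, the subnet layout described above could be created with commands like the following. This is a minimal sketch, not the required procedure: the resource group, VNet name, and CIDR values are placeholders, and you can create the same layout from the Azure portal instead.

# Create the VNet and the NSX Cloud subnets (placeholder names and CIDRs).
az network vnet create --resource-group my-rg --name transit-vnet \
    --address-prefixes 172.16.0.0/16
az network vnet subnet create --resource-group my-rg --vnet-name transit-vnet \
    --name management-subnet --address-prefixes 172.16.1.0/28
az network vnet subnet create --resource-group my-rg --vnet-name transit-vnet \
    --name downlink-subnet --address-prefixes 172.16.2.0/24
az network vnet subnet create --resource-group my-rg --vnet-name transit-vnet \
    --name uplink-subnet --address-prefixes 172.16.3.0/24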

Connect AWS with On-prem NSX


A connection must be established between your Amazon Web Services (AWS) network and your
on-prem NSX appliances.

Note You must have already installed and connected NSX Manager with CSM in your on-prem
deployment.

Overview

n Connect your AWS account with on-prem NSX Manager appliances using any of the available
options that best suit your requirements.

n Configure your VPC with subnets and other requirements for NSX Cloud.

Connect your AWS account with your on-prem NSX deployment

Every public cloud provides options to connect with an on-premises deployment. You can
choose any of the available connectivity options that suit your requirements. Refer to AWS
Reference Documentation for details.

Note You must review and implement the applicable security considerations and best practices
by AWS; refer to AWS Security Best Practices for details.

Configure your VPC

You need the following configurations:

n six subnets for supporting PCG with High Availability

n an Internet gateway (IGW)

n a private and a public route table

n subnet association with route tables

n DNS resolution and DNS hostnames enabled


Follow these guidelines to configure your VPC:

1 Assuming your VPC uses a /16 network, for each gateway that needs to be deployed, set up
three subnets.

Important If using High Availability, set up three additional subnets in a different Availability
Zone.

n Management subnet: This subnet is used for management traffic between on-prem NSX
and PCG. The recommended range is /28.

n Uplink subnet: This subnet is used for north-south internet traffic. The recommended
range is /24.

n Downlink subnet: This subnet encompasses the workload VM's IP address range, and
should be sized accordingly. Bear in mind that you may need to incorporate additional
interfaces on the workload VMs for debugging purposes.

Note Label the subnets appropriately, for example, management-subnet, uplink-subnet, downlink-subnet, because you will need to select the subnets when deploying PCG on this VPC. An example AWS CLI sketch follows this list.

See NSX Public Cloud Gateway: Architecture and Modes of Deployment for details.

2 Ensure you have an Internet gateway (IGW) that is attached to this VPC.

3 Ensure the routing table for the VPC has the Destination set to [Link]/0 and the Target is
the IGW attached to the VPC.

4 Ensure you have DNS resolution and DNS hostnames enabled for this VPC.
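
As referenced in the note above, the per-gateway subnets could be created with AWS CLI commands like the following. This is a minimal sketch under assumed values: the VPC ID, CIDR blocks, and Availability Zone are placeholders, and you can create the same layout from the AWS console instead. Repeat the commands in a second Availability Zone if you use High Availability.

# Create the three subnets for one gateway (placeholder VPC ID, AZ, and CIDRs).
aws ec2 create-subnet --vpc-id vpc-0abc1234 --availability-zone us-west-2a \
    --cidr-block 10.0.0.0/28 \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=management-subnet}]'
aws ec2 create-subnet --vpc-id vpc-0abc1234 --availability-zone us-west-2a \
    --cidr-block 10.0.1.0/24 \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=uplink-subnet}]'
aws ec2 create-subnet --vpc-id vpc-0abc1234 --availability-zone us-west-2a \
    --cidr-block 10.0.2.0/24 \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=downlink-subnet}]'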

Deploy NSX Cloud Components in Microsoft Azure using the NSX Cloud Marketplace Image
Starting in version 3.1.1, you can use Terraform scripts to deploy the management components, NSX Manager and CSM, within your Microsoft Azure subscription.

NSX Cloud provides Terraform scripts that deploy these components in your Microsoft Azure subscription. See Deploy NSX Cloud Components in Microsoft Azure using Terraform scripts.

If you cannot use Terraform scripts to deploy components, you can also manually deploy them
in your Microsoft Azure subscription. See Deploy NSX Cloud Components in Microsoft Azure
without using Terraform scripts.


Figure 15-2. NSX Cloud Components Deployed in Microsoft Azure

[Figure: Within your Microsoft Azure subscription, the management and control plane runs in an NSX Cloud Management VNet that contains an availability set with the NSX Manager cluster and the Cloud Service Manager. Through VNet peering or an Internet Gateway (IGW), it connects to a Transit/self-managed VNet where the Public Cloud Gateway runs in an availability set. The data plane consists of the Transit VNet subnets and, optionally, peered Compute VNets with workload VM subnets on the public cloud infrastructure with hypervisor.]
Accepting Azure Marketplace Terms


As the NSX Cloud images are published at the Azure Marketplace, you must first accept the legal
terms for the images.

To accept the terms, you can use the Set-AzMarketplaceTerms PowerShell command. For details, see [Link]

Use the following values for the PowerShell command. Note that the product value is based on
the marketplace image.

Parameter Description

Publisher vmware-inc

Name byol_release-3-1

Product nsx-public-cloud-gateway
For Public Cloud Gateway (PCG)


Product nsx-policy-manager
For Management Plane (MP)

Product nsx-cloud-service-manager
For Cloud Service Manager (CSM)

Following is an example for accepting terms for CSM using the PowerShell command:

Set-AzMarketplaceTerms -Publisher "vmware-inc" -Product "nsx-cloud-service-manager" -Name "byol_release-3-1" -Terms $agreementTerms -Accept

Deploy NSX Cloud Components in Microsoft Azure using Terraform scripts
Follow these steps to deploy NSX Cloud using the NSX Cloud Marketplace image in Microsoft
Azure using the Terraform scripts provided by NSX Cloud.

Prerequisites

n Verify that you have access to the NSX Cloud Marketplace image in your Microsoft
subscription.

n Verify that you have accepted Microsoft Azure's Marketplace legal terms in the subscription
where you are deploying NSX cloud appliances.

n You must have Microsoft Azure CLI installed and configured on the system. This is required
for authenticating and running Azure APIs that are used in the Terraform scripts.

If possible, run the Terraform scripts on the same system that you use to access your Microsoft Azure subscription. This ensures that your Microsoft Azure credentials can be used from within that system and that you do not have to share this information with a different system.

Also, as a security recommendation, run these scripts on a Linux/Unix or macOS system that
supports the Python crypt module.

n Verify that you have binaries of Terraform 0.13 or higher on the system where you plan to run
the Terraform scripts.

n You must have Python 3.0 or higher installed on this system.

Procedure

1 Download the Terraform scripts by logging in to your My VMware account and navigating to: Products > NSX > Drivers and Tools > VMware NSX Terraform Provider > Go To Downloads > Download Now. For example, after you log in to your My VMware account, this link takes you to the Download page for Drivers and Tools.

2 Extract the contents of the file named


[Link]. The Terraform scripts and
related files are in the folder NSXCloudScripts/cloud-native-deployment/azure/igw.


3 Update the Terraform configuration files.

a In [Link], add the following information:

Parameter Description

subscription_id Provide the subscription ID for your Microsoft Azure account.

location Specify the Microsoft Azure location that the NSX Cloud Management
VNet will be deployed in.

deployment_prefix This is the deployment name that will be prefixed to all auto-created
entities. Ensure that this is unique for each Microsoft subscription_id
and location.

b In credentials_nsx.[Link], add the following information:

Parameter Description

mgr_public_key_path This is the path to the public key to be applied to the NSX Manager
appliance.

csm_public_key_path This is the path to the public key to be applied to the CSM appliance.

license_key This is the license key for NSX Manager. You must have the NSX
Enterprise Plus license.

c Verify advanced configuration information, and update as necessary, in the file


advanced_config.[Link]:

Parameter Description

mgmt_vnet_address_space This is the address space for the newly deployed NSX Cloud
Management VNet.

mgmt_subnet_address_prefix This is the subnet for the NSX Cloud management appliances deployed in
the NSX Cloud Management VNet.

4 Run the following commands in the specified order:

~/terraform init This command collects all the modules required for deployment.

~/terraform plan This command displays the list of steps or a blueprint of the procedure involved in the
deployment.

~/terraform apply This command executes the script.


If something goes wrong during execution, you are shown the corresponding error
messages. After you fix the errors, you can resume the deployment from where it stopped.


5 Follow these steps to change the passwords generated for NSX Manager and CSM by the
Terraform scripts.

a After the scripts run successfully, make a note of the following passwords for NSX
Manager and CSM:

n admin_password

n root_password
These passwords are displayed on the screen at the end of the deployment. You can
also find these passwords in the file NSXCloudScripts/cloud-native-deployment/
azure/igw/[Link], under the section "outputs", for example:

"outputs": {
"csm": {
"value": {
"admin_password": "<pwd>",
"admin_username": "nsxadmin",
"private_ip": "<private IP>",
"public_ip": "<public IP>",
"root_password": "<pwd>"
},
"mgrs": {
"value": [
{
"admin_password": "<pwd>",
"admin_username": "nsxadmin",
"private_ip": "<private IP",
"public_ip": "<public IP>",
"root_password": "<pwd>"
},

b In Microsoft Azure, navigate to the Network Security Groups created for NSX Manager
and CSM, named <deployment_prefix>-nsx-mgr-sg and <deployment_prefix>-nsx-
csm-sg, and add the following temporary inbound "allow" rule for SSH:

Priority Name Port Protocol Source Destination Action

1010 AllowInboundRuleSSH 22 TCP Any Any Allow


c Log in to the NSX Manager appliance using your private key and change the password
generated by the Terraform scripts:

$ ssh -i <nsx_mgr_key> nsxadmin@<NSX Manager public IP address>


WARNING: Your password has expired.
You must change your password now and login again!
Changing password for nsxadmin.
(current) UNIX password: <Enter mgr_admin_pwd from the Terraform scripts>
New password: <Enter new password conforming to NSX password complexity>
Retype new password:
passwd: password updated successfully

d Log in to CSM using your private key and change the password generated by Terraform
scripts:

$ ssh -i <nsx_csm_key> nsxadmin@<CSM public IP address>


WARNING: Your password has expired.
You must change your password now and login again!
Changing password for nsxadmin.
(current) UNIX password: <Enter csm_admin_pwd from the Terraform scripts>
New password: <Enter new password conforming to NSX password complexity>
Retype new password:
passwd: password updated successfully

6 Log in to the CSM appliance using the new password you have set and run the following NSX
CLI command to join CSM with the NSX Manager cluster:

join <nsx-manager-ip-address & port(optional)> cluster-id <nsx-manager-cluster-id> username <username> password <password> thumbprint <nsx-manager-api-thumbprint> csm-username <csm-username> csm-password <csm-password>

You can run the NSX CLI command get cluster status from any NSX Manager node
to get the cluster-id. You can get the NSX Manager thumbprint by running the get
certificate api thumbprint command on the specified NSX Manager. See the NSX
Command-Line Interface Reference for details on CLI commands.

Note If the NSX Manager node that you joined the CSM appliance to is lost, you can
either run this NSX CLI command to join CSM with one of the other healthy NSX Manager
nodes, or you can redeploy the lost NSX Manager node using its image file named
<deployment_prefix>nsx-mgr-image; CSM automatically rejoins this node when it is back
online. See Redeploying NSX Manager from nsx_mgr_image in Microsoft Azure in the NSX
Administration Guide for details.

Results

The scripts deploy the following in your Microsoft Azure subscription:

n A VNet to host the NSX Cloud management appliances. This VNet is named
<deployment_prefix>-nsx-mgmt-vnet.


n An Availability Set in which the three nodes of the NSX Manager cluster are deployed. This
Availability Set is named <deployment_prefix>-nsx-aset.

n Microsoft Azure Resource Group named <deployment_prefix>nsx-mgmt-rg.

n The following resources for each of the NSX Manager nodes and for the CSM appliance:

a VMs named <deployment_prefix>nsx-csm for CSM, and <deployment_prefix>nsx-mgr0,
<deployment_prefix>nsx-mgr1, and <deployment_prefix>nsx-mgr2 for the NSX Manager cluster.

b OS Disk for each VM.

c Network interface (NIC) for each VM.

d Public IP address for each VM.

e Data disk for each VM.

n Network Security Groups for NSX Cloud management components that allow connectivity for
these appliances.

n <deployment_prefix>-nsx-mgr-sg:

Table 15-2. Inbound Rules for NSX Manager deployed using the Terraform scripts

Priority Name Port Protocol Source Destination Action

1000 AllowInboundRuleAPI 443 TCP Any Any Allow

Table 15-3. Outbound Rules for NSX Manager deployed using the Terraform scripts

Priority Name Port Protocol Source Destination Action

100 AllowOutboundRuleAPI Any TCP Any Any Allow


n <deployment_prefix>-nsx-csm-sg:

Table 15-4. Inbound Rules for CSM deployed using the Terraform scripts

Priority Name Port Protocol Source Destination Action

1000 AllowInboundRuleAPI 443 TCP Any Any Allow

Table 15-5. Outbound Rules for CSM deployed using the Terraform scripts

Priority Name Port Protocol Source Destination Action

100 AllowOutboundRuleAPI 80,443 TCP Any Any Allow

Note Consider updating the Source field of these auto-created network security groups to
a restricted set of CIDRs from which you want to access NSX Manager and CSM; the default
source of Any is not safe. An example command follows this list.

n A Microsoft Azure Recovery Service Vault with a vault policy to perform a recurring
backup of all three NSX Manager nodes and the CSM appliance. The vault policy is named
<deployment_prefix>-nsx-vault and the default backup schedule is set to: daily recurring
at 11PM UTC.

See Managing Backup and Restore of NSX Manager and CSM in Microsoft Azure in the NSX
Administration Guide for details on restore options.
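As an illustration of restricting the Source field, the following Azure CLI command is a minimal sketch; the resource group name, security group name, and CIDR are assumptions that you must replace with your own values. Run the equivalent command against the CSM security group as well:

az network nsg rule update --resource-group <deployment_prefix>nsx-mgmt-rg \
    --nsg-name <deployment_prefix>-nsx-mgr-sg --name AllowInboundRuleAPI \
    --source-address-prefixes 203.0.113.0/24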

What to do next

Deploy PCG in a VNet

Deploy NSX Cloud Components in Microsoft Azure without using Terraform scripts

Follow these steps to manually deploy NSX Cloud components in Microsoft Azure using the
Microsoft Azure marketplace image, without using Terraform scripts provided by NSX Cloud.

The following steps are performed in your Microsoft Azure subscription:

1 Create a resource group for NSX Cloud management resources with a descriptive name, for
example, nsx-mgmt-rg. (Steps 1 through 5 and step 9 are illustrated in an example Azure CLI
sketch after this procedure.)

2 In this resource group, create an availability set in which you will deploy three NSX Manager
nodes.

3 In this resource group, create a VNet where you will deploy NSX Cloud management
components.

4 In this VNet, create a subnet for NSX Cloud management components.


5 Create Security groups for NSX Manager and CSM appliances.

n Security groups for NSX Manager named like nsx-mgr-sg:

Table 15-6. Inbound Rules for NSX Manager

Priority Name Port Protocol Source Destination Action

1000 AllowInboundRuleAPI 443 TCP Any Any Allow

Table 15-7. Outbound Rules for NSX Manager

Priority Name Port Protocol Source Destination Action

100 AllowOutboundRuleAPI Any TCP Any Any Allow

n Security groups for CSM named like nsx-csm-sg:

Table 15-8. Inbound Rules for CSM

Priority Name Port Protocol Source Destination Action

1000 AllowInboundRuleAPI 443 TCP Any Any Allow

Table 15-9. Outbound Rules for CSM

Priority Name Port Protocol Source Destination Action

100 AllowOutboundRuleAPI 80,443 TCP Any Any Allow

6 Deploy one CSM VM using the CSM marketplace image URN with a public IP address. Use the
following configurations as specified. For all other configurations you can select the default
values or the best options for your requirements.

Parameter Value

Basic

Virtual machine name Any descriptive name.

Size The minimum requirement is: Standard_D4s_v3-4vcpus, 16 GB memory.

Authentication type SSH

Username Enter the default NSX Manager username: nsxadmin.

SSH Public Key Source Provide the public key of the SSH key-pair you will use to log in to
the appliance over SSH.

Networking

Public IP Click Create new and select Static for the Assignment option.


NIC network security group Select Advanced

Configure network security group Select the network security group created for CSM, for example,
nsx-csm-sg as described in an earlier step.

Advanced

Custom data Copy-paste the following, ensuring that you use your deployment's
username and password:

#cloud-config
hostname: <hostname>
chpasswd:
expire: false
list:
- nsxadmin:<admin_password>
- root:<root_password>

For example:

#cloud-config
hostname: nsx-datacenter1-csm
chpasswd:
expire: false
list:
- nsxadmin:MySecretNsxAdminPassword
- root:MySecretNsxRootPassword

7 Deploy three NSX Manager VMs using the NSX Manager marketplace image URN with a
public IP address. Use the following configurations as specified. For all other configurations
you can select the default values or the best options for your requirements.

Parameter Value

Basic

Virtual machine name Any descriptive name.

Size The minimum requirement is: Standard_D4s_v3-4vcpus, 16 GB memory.

Authentication type SSH

Username Enter the default NSX Manager username: nsxadmin.

SSH Public Key Source Provide the public key of the SSH key-pair you will use to log in to the appliance over
SSH.

Disks

OS Disk type Standard HDD

Data disks Click Create and attach a new disk and select Standard HDD for Disk SKU, with a
custom size of 100 GiB.

Note Ensure that the data disk host caching is set to read/write.

Networking


Public IP Click Create new and select Static for the Assignment option.

NIC network security group Select Advanced

Configure network security group Select the network security group created in a previous step,
from the example in this topic: nsx-mgr-sg

Advanced

Custom data Copy-paste the following, ensuring that you use your deployment's username and
password:

#cloud-config
hostname: <hostname>
bootcmd:
- [cloud-init-per, instance, lvmdiskscan, lvmdiskscan]
- [cloud-init-per, instance, secondary_partition, /opt/vmware/nsx-node-api/bin/set_secondary_partition.sh]
chpasswd:
expire: false
list:
- nsxadmin:<admin_password>
- root:<root_password>

8 Configure a Microsoft Azure Recovery Service Vault with a vault policy to perform a recurring
backup of all three NSX Manager nodes and the CSM appliance. For example, you could use
this policy named nsx-vault and the default backup schedule set to daily recurring at 11PM
UTC.

See Managing Backup and Restore of NSX Manager and CSM in Microsoft Azure in the NSX
Administration Guide for details on restore options.
9 Add a temporary network security group to allow SSH access for NSX Manager and CSM.

Table 15-10. Temporary rule for both NSX Manager and CSM to allow SSH access

Priority Name Port Protocol Source Destination Action

1010 AllowInboundRuleSSH 22 TCP Any Any Allow

10 Log in to the NSX Manager and CSM appliances using your private key and the passwords
you provided in user data when launching the VMs.

11 Create an NSX Manager cluster with the three NSX Manager nodes deployed. See Form an
NSX Manager Cluster Using the CLI.

12 Add an NSX license:

a From your browser, log in with admin privileges to an NSX Manager at
https://<nsx-manager-ip-address>.

b Select System > Licenses > Add License.

c Enter a license key. You must have the NSX Enterprise Plus license.


13 Log in to the CSM appliance and run the following NSX CLI command to join CSM with the
NSX Manager cluster:

join <nsx-manager-ip-address & port(optional)> cluster-id <nsx-manager-cluster-id> username <username> password <password> thumbprint <nsx-manager-api-thumbprint> csm-username <csm-username> csm-password <csm-password>
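The following is a minimal Azure CLI sketch of steps 1 through 5 and step 9 above. All names, the region, the address prefixes, and the rule details are illustrative assumptions; substitute the values from your own deployment plan and repeat the security group commands for the CSM group (nsx-csm-sg):

az group create --name nsx-mgmt-rg --location westus2

az vm availability-set create --resource-group nsx-mgmt-rg --name nsx-aset

az network vnet create --resource-group nsx-mgmt-rg --name nsx-mgmt-vnet \
    --address-prefixes 10.10.0.0/16 \
    --subnet-name nsx-mgmt-subnet --subnet-prefixes 10.10.1.0/24

az network nsg create --resource-group nsx-mgmt-rg --name nsx-mgr-sg
az network nsg rule create --resource-group nsx-mgmt-rg --nsg-name nsx-mgr-sg \
    --name AllowInboundRuleAPI --priority 1000 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 443

# Temporary rule for step 9; remove it after the initial configuration is complete.
az network nsg rule create --resource-group nsx-mgmt-rg --nsg-name nsx-mgr-sg \
    --name AllowInboundRuleSSH --priority 1010 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 22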

Add your Public Cloud Account


Create roles with appropriate permissions in your public cloud account to add the account into
CSM.

Overview
For each public cloud account that you want to bring under the control of NSX, you can create
roles with the appropriate permissions. NSX Cloud provides scripts that you can use to generate
these roles.

If you want to restrict public clouds that can be added into CSM, run the following CSM API:

PUT /api/v1/csm/desired-clouds

Example Request:
PUT [Link]
{
    "cloud_types": [
        {
            "cloud_type": "aws",
            "enabled": true
        },
        {
            "cloud_type": "azure",
            "enabled": true
        },
        {
            "cloud_type": "aws-gov-us-east",
            "enabled": false
        },
        {
            "cloud_type": "aws-gov-us-west",
            "enabled": false
        },
        {
            "cloud_type": "azure-gov-us",
            "enabled": false
        }
    ]
}

See the latest version of the NSX API Guide at [Link] for API details.


Adding your Microsoft Azure Subscription


For NSX Cloud to operate in your subscription, create a Service Principal that grants the required
permissions, and create roles for CSM and PCG based on the Microsoft Azure feature for managing
identities for Azure resources.

Overview:

n NSX Cloud provides a PowerShell script to generate the Service Principal and roles that use
the managed identity feature of Microsoft Azure to manage authentication while keeping
your Microsoft Azure credentials secure. You can also include multiple subscriptions under
one Service Principal using this script.

n You have the option of reusing the Service Principal for all your subscriptions, or of creating
new Service Principals as required. There is an additional script if you want to create separate
Service Principals for additional subscriptions.

n For multiple subscriptions, whether you are using a single Service Principal for all, or multiple
Service Principals, you must update the JSON files for the CSM and PCG roles to add each
additional subscription name under the section AssignableScopes.

n If you already have an NSX Cloud Service Principal in your VNet, you can update it by running
the scripts again and leaving out the Service Principal name from the parameters.

n The Service Principal name must be unique for your Microsoft Azure Active Directory. You
may use the same Service Principal in different subscriptions under the same Active Directory
domain, or different Service Principals per subscription. But you cannot create two Service
Principals with the same name.

n You must either be the owner of or have permissions to create and assign roles in all the
Microsoft Azure subscriptions.

n The following scenarios are supported:

n Scenario 1: You have a single Microsoft Azure Subscription that you want to enable with
NSX Cloud.

n Scenario 2: You have multiple Microsoft Azure Subscriptions under the same Microsoft
Azure Directory, that you want to enable with NSX Cloud, but want to use one NSX Cloud
Service Principal across all your subscriptions.

n Scenario 3: You have multiple Microsoft Azure Subscriptions under the same Microsoft
Azure Directory, that you want to enable with NSX Cloud, but want to use different NSX
Cloud Service Principal names for different subscriptions.

Here is an outline of the process:

1 Use the NSX Cloud PowerShell script to:

n Create a Service Principal account for NSX Cloud.

n Create a role for CSM.

n Create a role for PCG.


2 (Optional) Create Service Principals for other subscriptions you want to link.

3 Add the Microsoft Azure subscription in CSM.

Note If using multiple subscriptions, whether using the same or different Service Principals,
you must add each subscription separately in CSM.

Generate the Service Principal and Roles


NSX Cloud provides PowerShell scripts that help you generate the required service principal and
roles for one or multiple subscriptions.

Prerequisites

n You must have PowerShell 5.0+ with the AzureRM module installed. If you have the newer
Azure PowerShell Az module, you must run the Enable-AzureRmAlias command to ensure
that the AzureRM cmdlets for NSX Cloud run successfully.

n You must either be the owner of or have permissions to create and assign roles in all the
Microsoft Azure subscriptions.

Note The response time from Microsoft Azure can cause the script to fail when you run it the
first time. If the script fails, try running it again.

Procedure

1 On a Windows desktop or server, download the ZIP file named


[Link] from the NSX Download page > Drivers & Tools > NSX
Cloud Scripts > Microsoft Azure.

2 Extract the following contents of the ZIP file in your Windows system:

Script/File Description

CreateNsxRoles.ps1 The PowerShell script to generate the NSX Cloud Service Principal and
managed identity roles for CSM and PCG. This script takes the following
parameters:
n -subscriptionId <the Transit_VNet's_Azure_subscription_ID>
n (optional) -servicePrincipalName <Service_Principal_Name>
n (optional) -useOneServicePrincipal

AddServicePrincipal.ps1 An optional script required if you want to add multiple subscriptions and
assign different Service Principals to each subscription. See Scenario 3 in the
following steps. This script takes the following parameters:
n -computeSubscriptionId
<the_Compute_VNet's_Azure_subscription_ID>
n -transitSubscriptionId <the
Transit_VNet's_Azure_Subscription_ID>
n -csmRoleName <CSM_Role_Name>
n -servicePrincipalName <Service_Principal_Name>


nsx_csm_role.json A JSON template for the CSM role name and permissions. This file is
required as an input to the PowerShell script and must be in the same folder
as the script.

nsx_pcg_role.json A JSON template for the PCG role name and permissions. This file is required
as an input to the PowerShell script and must be in the same folder as the
script.

Note The default PCG (Gateway) Role Name is nsx-pcg-role. You need to
provide this value when adding your subscription in CSM.

3 Scenario 1: You have a single Microsoft Azure Subscription that you want to enable with NSX
Cloud.

a From a PowerShell instance, go to the directory where you downloaded the Microsoft
Azure scripts and JSON files.

b Run the script named CreateNsxRoles.ps1 with the parameter -subscriptionId, as follows:

.\CreateNsxRoles.ps1 -subscriptionId <the_single_Azure_subscription_ID>

Note If you want to override the default Service Principal name of nsx-service-admin,
you can also use the parameter -servicePrincipalName. The Service Principal name
must be unique in your Microsoft Azure Active Directory.


4 Scenario 2: You have multiple Microsoft Azure Subscriptions under the same Microsoft Azure
Directory, that you want to enable with NSX Cloud, but want to use one NSX Cloud Service
Principal across all your subscriptions.

a From a PowerShell instance, go to the directory where you downloaded the Microsoft
Azure scripts and JSON files.

b Edit each of the JSON files to add a list of other subscription IDs under the section titled
"AssignableScopes", for example:

"AssignableScopes": [

"/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",

"/subscriptions/aaaaaaaa-bbbb-cccc-dddd-ffffffffffff",

"/subscriptions/aaaaaaaa-bbbb-cccc-dddd-000000000000"

Note You must use the format shown in the example to add subscription IDs: "/
subscriptions/<Subscription_ID>"

c Run the script named CreateNsxRoles.ps1 with the parameters -subscriptionID and
-useOneServicePrincipal:

.\CreateNsxRoles.ps1 -subscriptionId <the_Transit_VNet's_Azure_subscription_ID> -useOneServicePrincipal

Note Omit the Service Principal name here if you want to use the default name: nsx-
service-admin. If that Service Principal name already exists in your Microsoft Azure
Active Directory, running this script without a Service Principal name updates that Service
Principal.

5 Scenario 3: You have multiple Microsoft Azure Subscriptions under the same Microsoft Azure
Directory, that you want to enable with NSX Cloud, but want to use different NSX Cloud
Service Principal names for different subscriptions.

a From a PowerShell instance, go to the directory where you downloaded the Microsoft
Azure scripts and JSON files.

b Follow steps b and c from the second scenario to add multiple subscriptions to the
AssignableScopes section in each of the JSON files.


c Run the script named CreateNsxRoles.ps1 with the parameter -subscriptionId:

.\CreateNsxRoles.ps1 -subscriptionId <One of the subscription_IDs>

Note Omit the Service Principal name here if you want to use the default name: nsx-
service-admin. If that Service Principal name exists in your Microsoft Azure Active
Directory, running this script without a Service Principal name updates that Service
Principal.

d Run the script named AddServicePrincipal.ps1 with the following parameters:

Parameter Value

-computeSubscriptionId The Compute_VNet's Azure Subscription ID

-transitSubscriptionId The Transit VNet's Azure Subscription ID

-csmRoleName Get this value from the file nsx_csm_role.JSON

-servicePrincipalName New Service Principal name

./AddServicePrincipal.ps1 -computeSubscriptionId <the_Compute_VNet's_Azure_subscription_ID> -transitSubscriptionId <the_Transit_VNet's_Azure_Subscription_ID> -csmRoleName <CSM_Role_Name> -servicePrincipalName <new_Service_Principal_Name>

6 Look for a file in the same directory where you ran the PowerShell script. It is named like:
NSXCloud_ServicePrincipal_<your_subscription_ID>_<NSX_Cloud_Service_Principal_name>.
This file contains the information required to add your Microsoft Azure subscription in CSM.

n Client ID

n Client Key

n Tenant ID

n Subscription ID

Results

The following constructs are created:

n an Azure AD application for NSX Cloud.

n an Azure Resource Manager Service Principal for the NSX Cloud application.

n a role for CSM attached to the Service Principal account.

n a role for PCG to enable it to work on your public cloud inventory.


n a file named like
NSXCloud_ServicePrincipal_<your_subscription_ID>_<NSX_Cloud_Service_Principal_name>
is created in the same directory where you ran the PowerShell script. This file contains the
information required to add your Microsoft Azure subscription in CSM.

Note Refer to the JSON files that are used to create the CSM and PCG roles for a list of
permissions available to them after the roles are created.

What to do next

Add your Microsoft Azure Subscription in CSM

Note When enabling NSX Cloud for multiple subscriptions, you must add each separate
subscription to CSM individually, for example, if you have five total subscriptions you must add
five Microsoft Azure accounts in CSM with all other values the same but different subscription IDs.

Add your Microsoft Azure Subscription in CSM


Once you have the details of the NSX Cloud Service Principal and the CSM and PCG roles, you
are ready to add your Microsoft Azure subscription in CSM.

Prerequisites

n You must have the Enterprise Administrator role in NSX.

n You must have the output of the PowerShell script with details of the NSX Cloud Service
Principal.

n You must have the value of the PCG role you provided when running the PowerShell script to
create the roles and the Service Principal. The default value is nsx-pcg-role.

Procedure

1 Log in to CSM using an account with the Enterprise Administrator role.

2 Go to CSM > Clouds > Azure.

3 Click +Add and enter the following details:

Option Description

Name Provide a suitable name to identify this account in CSM. You may have
multiple Microsoft Azure subscriptions that are associated with the same
Microsoft Azure tenant ID. Name your account in CSM, for example, Azure-
DevOps-Account, Azure-Finance-Account, etc.

Client ID Copy paste this value from the output of the PowerShell script.

Key Copy paste this value from the output of the PowerShell script.

Subscription ID Copy paste this value from the output of the PowerShell script.

Tenant ID Copy paste this value from the output of the PowerShell script.


Gateway Role Name The default value is nsx-pcg-role. This value is available from the
nsx_pcg_role.json file if you changed the default.

Cloud Tags By default this option is enabled and allows your Microsoft Azure tags to be
visible in NSX Manager.

4 Click Save.

CSM adds the account and you can see it in the Accounts section within three minutes.

5 (Optional) If you have a brownfield deployment, mark all the VMs as User Managed in the
VNet where you want VMs managed to prevent automatic security group assignment under
the Quarantine Policy.

6 (Optional) Manage access to regions. See Managing Regions in CSM.

What to do next

Deploy PCG in a VNet

Adding your AWS Account


You might have one or more AWS accounts with VPCs and workload VMs that you want to bring
under NSX management.

Overview:

n NSX Cloud provides a shell script that you can run from the AWS CLI of your AWS account to
create the IAM profile and role, and create a trust relationship for Transit and Compute VPCs.

n The following scenarios are supported:

n Scenario 1: You want to use a single AWS account with NSX Cloud.

n Scenario 2: You want to use multiple sub-accounts in AWS that are managed by a
primary AWS account.

n Scenario 3: You want to use multiple AWS accounts with NSX Cloud, designating one
account where you will install the PCG, that is a Transit VPC, and other accounts that will
link to this PCG, that is, Compute VPCs. See NSX Public Cloud Gateway: Architecture and
Modes of Deployment for details on PCG deployment options.

Here is an outline of the process:

1 Use the NSX Cloud shell script to do the following. This step requires AWS CLI configured
with the account you want to add.

n Create an IAM profile.

n Create a role for PCG.

n (Optional) Create a trust relationship between the AWS account hosting the Transit VPC
and the AWS account hosting the Compute VPC.


2 Add the AWS account in CSM.

Generate the IAM Profile and PCG Role


NSX Cloud provides a shell script to help set up one or more of your AWS accounts by
generating an IAM profile and a role for PCG attached to the profile that provides the necessary
permissions to your AWS account.

If you plan to host a Transit VPC linked to multiple Compute VPCs in two different AWS accounts,
you can use the script to create a trust relationship between these accounts.

Note The PCG (Gateway) role name is nsx_pcg_service by default. If you want a different
value for the Gateway Role Name, you can change it in the script, but make a note of this value
because it is required for adding the AWS account in CSM.

Prerequisites

You must have the following installed and configured on your Linux or compatible system before
you run the script:

n AWS CLI configured for the account and the default region.

n jq (a JSON parser).

n openssl (network security requirement).

Note If using AWS GovCloud (US) accounts, ensure that your AWS CLI is configured for the
GovCloud (US) account and the default region is specified in the AWS CLI configuration file.
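As a quick sanity check before running the script, you can verify the prerequisites from the same shell. These commands are illustrative; the exact output depends on your environment:

aws configure list             # confirms the configured credentials and default region
aws sts get-caller-identity    # confirms which AWS account the CLI is acting on
jq --version
openssl version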

Procedure

u On a Linux or compatible desktop or server, download the shell script named
nsx_csm_iam_script.sh from the NSX Download page > Drivers & Tools > NSX Cloud
Scripts > AWS.

u Scenario 1: You want to use a single AWS account with NSX Cloud.

a Run the script, for example:

bash nsx_csm_iam_script.sh

b Enter yes when prompted with the question Do you want to create an IAM user for
CSM and an IAM role for PCG? [yes/no]

c Enter a name for the IAM user when asked What do you want to name the IAM User?

Note The IAM user name must be unique in your AWS account.

d Enter no when asked Do you want to add trust relationship for any Transit VPC
account? [yes/no]


When the script runs successfully, the IAM profile and a role for PCG are created in your
AWS account. The values are saved in the output file named aws_details.txt in the same
directory where you ran the script. Next, follow instructions at Add your AWS Account
in CSM and then Deploy PCG in a VPC to finish the process of setting up a Transit or
Self-Managed VPC.

u Scenario 2: You want to use multiple sub-accounts in AWS that are managed by one primary
AWS account.

a Run the script from your AWS primary account.

bash nsx_csm_iam_script.sh

b Enter yes when prompted with the question Do you want to create an IAM user for
CSM and an IAM role for PCG? [yes/no]

c Enter a name for the IAM user when asked What do you want to name the IAM User?

Note The IAM user name must be unique in your AWS account.

d Enter no when asked Do you want to add trust relationship for any Transit VPC
account? [yes/no]

Note With a primary AWS account, if your Transit VPC has permission to view Compute
VPCs in the sub-accounts, you do not need to establish a trust relationship with your
sub-accounts. If not, follow the steps for Scenario 3 to set up multiple accounts.

When the script runs successfully, the IAM profile and a role for PCG are created in your AWS
primary account. The values are saved in the output file in the same directory where you
ran the script. The filename is aws_details.txt. Next, follow instructions at Add your AWS
Account in CSM and then Deploy PCG in a VPC to finish the process of setting up a Transit or
Self-Managed VPC.

u Scenario 3: You want to use multiple AWS accounts with NSX Cloud, designating one
account for Transit VPC and other accounts for Compute VPCs. See NSX Public Cloud
Gateway: Architecture and Modes of Deployment for details on PCG deployment options.

a Make a note of the 12-digit AWS account number where you want to host the Transit
VPC.

b Set up the Transit VPC in the AWS account by following steps a through d for Scenario 1
and finish the process of adding the account in CSM.

c Download and run the NSX Cloud script from a Linux or compatible system in your other
AWS account where you want to host the Compute VPCs. Alternatively, you can use
AWS profiles with different account credentials to use the same system to run the script
again for your other AWS account.


d The script poses the question: Do you want to create an IAM user for CSM and an IAM
role for PCG? [yes/no]. Use the following guidance for the appropriate response:

This AWS account was already added to CSM: Enter no in response to Do you want to create
an IAM user for CSM and an IAM role for PCG? [yes/no]

This account has not been added to CSM before: Enter yes in response to Do you want to
create an IAM user for CSM and an IAM role for PCG? [yes/no]

e (Optional) If you answered yes to creating an IAM user for CSM and PCG in the previous
question, enter a name for the IAM user when asked What do you want to name the IAM
User?. The IAM user name must be unique in your AWS account.

f Enter yes when asked Do you want to add trust relationship for any Transit VPC
account? [yes/no]

g Enter or copy-paste the 12-digit AWS account number that you noted in step 1 when
asked What is the Transit VPC account number?

An IAM Trust Relationship is established between the two AWS accounts and an
ExternalID is generated by the script.

When the script runs successfully, the IAM profile and a role for PCG are created in your AWS
primary account. The values are saved in the output file in the same directory where you ran
the script. The filename is aws_details.txt. Next, follow instructions at Add your AWS Account
in CSM and then Link to a Transit VPC or VNet to finish the process of linking to a Transit
VPC.

Add your AWS Account in CSM


Add your AWS account using values generated by the script.

Procedure

1 Log in to CSM using the Enterprise Administrator role.

2 Go to CSM > Clouds > AWS.

3 Click +Add and enter the following details using the output file aws_details.txt generated
from the NSX Cloud script:

Option Description

Name Enter a descriptive name for this AWS Account

Access Key Enter your account's Access Key

Secret Key Enter your account's Secret Key

Discover Cloud Tags By default this option is enabled and allows your AWS tags to be visible in
NSX Manager

Gateway Role Name The default value is nsx_pcg_service. You can find this value in the output
of the script in the file aws_details.txt.


The AWS account gets added in CSM.

In the VPCs tab of CSM, you can view all the VPCs in your AWS account.

In the Instances tab of CSM, you can view the EC2 Instances in this VPC.

4 (Optional) If you have a brownfield deployment, mark all the VMs as User Managed in the
VPC where you want VMs managed to prevent automatic security group assignment under
the Quarantine Policy.

5 (Optional) Manage access to regions. See Managing Regions in CSM.

What to do next

Deploy PCG in a VPC

Managing Regions in CSM


For your public cloud account added in CSM, you can get a list of supported regions and restrict
access to specific regions.

n To get the list of supported regions, run the following API:

GET [Link]

Note If you do not see all the available regions in your AWS account, check whether you
have enabled those regions in your AWS account. See the AWS documentation for details on
enabling regions.

n To restrict regions in Microsoft Azure, run the following API:

PUT [Link]

Example Request:
PUT [Link]
desired-regions
{
    "regions": [
        {
            "id": "westus",
            "display_name": "westus",
            "enabled": true
        },
        {
            "id": "eastus2",
            "display_name": "eastus2",
            "enabled": false
        }
    ]
}


n To restrict regions in AWS, run the following API:

PUT [Link]

Example Request:
PUT [Link]
regions
{
    "default_client_region": "us-east-1",
    "regions": [
        {
            "id": "us-west-2",
            "display_name": "Oregon",
            "enabled": false
        },
        {
            "id": "us-east-1",
            "display_name": "N. Virginia",
            "enabled": true
        }
    ]
}

See the latest version of the NSX API Guide at [Link] for API details.

NSX Public Cloud Gateway: Architecture and Modes of Deployment
The NSX Public Cloud Gateway (PCG) provides north-south connectivity between the public
cloud and the on-prem management components of NSX.

Familiarize yourself with the following terminology explaining the PCG's architecture and
deployment modes for workload VM-management.

Note The PCG is deployed in a single default size for each supported public cloud:

Public Cloud PCG instance type

AWS [Link].
Some regions might not support this instance type. Refer to AWS documentation for
details.

Note If you see high CPU usage alerts with this instance type, resize PCG instances to
c5.2xlarge.

If you have a high availability pair of PCG instances, resize the standby PCG first by
stopping, resizing, and restarting it. Next, stop the currently active PCG and wait until the
standby PCG becomes active. Resize and restart this PCG and it should become active.
See AWS documentation for details on resizing instance types; an example AWS CLI
sequence follows this table.

Microsoft Azure Standard DS3 v.2
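For illustration, resizing a single PCG instance from the AWS CLI might look like the following. The instance ID and target size are assumptions; for a high availability pair, apply the sequence to the standby PCG first as described in the note above:

aws ec2 stop-instances --instance-ids i-0349f21e34dc27b08
aws ec2 wait instance-stopped --instance-ids i-0349f21e34dc27b08
aws ec2 modify-instance-attribute --instance-id i-0349f21e34dc27b08 \
    --instance-type "{\"Value\": \"c5.2xlarge\"}"
aws ec2 start-instances --instance-ids i-0349f21e34dc27b08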


Architecture
The PCG can either be a standalone gateway appliance or shared between your public cloud
VPCs or VNets to achieve a hub and spoke topology.

Modes of Deployment
Self-managed VPC/VNet: When you deploy the PCG in a VPC or VNet, it qualifies the VPC
or VNet as self-managed, that is, you can bring VMs hosted in this VPC or VNet under NSX
management.

Transit VPC/VNet: A self-managed VPC/VNet becomes a Transit VPC/VNet when you link
Compute VPCs/VNets to it.

Compute VPC/VNet: VPCs/VNets that do not have the PCG deployed on them but link to a
Transit VPC/VNet are called Compute VPCs/VNets.

AWS Transit Gateway with PCG: Starting in NSX 3.1.1, you can use AWS Transit Gateway to
connect a Transit VPC with Compute VPCs. See Using PCG with AWS Transit Gateway for details.

Subnets Required in Your VPC/VNet to deploy PCG


The PCG uses the following subnets that you set up in your VPC/VNet. See Connect Microsoft
Azure with On-prem NSX or Connect AWS with On-prem NSX.

n Management subnet: This subnet is used for management traffic between on-prem NSX and
PCG. Example range: /28.

n Uplink subnet: This subnet is used for north-south internet traffic. Example range: /24.

n Downlink subnet: This subnet encompasses the workload VM's IP address range. Size this
subnet bearing in mind that you might need additional interfaces on the workload VMs for
debugging.
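For illustration only, in a VPC or VNet with an address space of 10.20.0.0/16, the subnets might be carved out as follows. These ranges are examples, not requirements:

Management subnet: 10.20.0.0/28
Uplink subnet: 10.20.1.0/24
Downlink subnet: 10.20.2.0/24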

Plan PCG deployment around your network addressing plan, using FQDNs for the NSX
components and a DNS server that can resolve these FQDNs.

Note Using IP addresses instead of FQDNs to connect the public cloud with NSX through PCG is
not recommended; if you do use IP addresses, you must not change them.

Impact of on-prem and public cloud connectivity mode on PCG's discovery of CSM
After PCG is deployed in your public cloud, it must interact with CSM as the management
interface for your public cloud inventory. To ensure that PCG can reach the IP address of CSM,
follow these guidelines:

n For PCG deployed in Private IP mode (VGW): PCG discovers CSM with either the actual CSM
IP address or with one of the IP addresses in the subnet range provided.


Figure 15-3. NSX Public Cloud Gateway Architecture with VGW Connectivity

[The figure shows the on-prem data center, hosting the NSX Manager cluster and CSM behind an
optional firewall, connected over a site-to-site VPN (VGW) to a Transit VPC/VNet in the public
cloud region. The PCG in the Transit VPC/VNet uses management, uplink, and downlink subnets,
and it discovers CSM using an IP address in the advertised subnet range. Round-trip latency
between on-prem and the public cloud region must be less than 150 ms.]

n For PCG deployed in Public IP mode (IGW): PCG can discover CSM using CSM's NAT-
translated IP address that allows access to the real IP address or subnet range for CSM.


Figure 15-4. NSX Public Cloud Gateway Architecture with IGW Connectivity

[The figure shows the on-prem data center, hosting the NSX Manager cluster and CSM behind an
optional firewall, reaching the Transit VPC/VNet through the internet gateway (IGW) over the
internet or optionally over ExpressRoute/Direct Connect, with CSM behind a NAT-translated IP
address. The PCG in the Transit VPC/VNet uses management, uplink, and downlink subnets.
Round-trip latency between on-prem and the public cloud region must be less than 150 ms.]

Modes of VM-management
NSX Enforced Mode: In this mode, workload VMs are brought under NSX management by
installing NSX Tools on each workload VM to which you apply the tag [Link]=default
in your public cloud.

Native Cloud Enforced Mode: In this mode, your workload VMs can be brought under NSX
management without the use of NSX Tools.

Quarantine Policy
Quarantine Policy: NSX Cloud's threat detection feature that works with your public cloud
security groups.

n In the NSX Enforced Mode, you can enable or disable Quarantine Policy. As a best practice,
disable Quarantine Policy and add all your VMs to the User Managed list when onboarding
workload VMs.


n In the Native Cloud Enforced Mode, Quarantine Policy is always enabled and cannot be
disabled.

Design Options
Regardless of the mode you deploy the PCG in, you can link a Compute VPC/VNet to it in either
mode.

Table 15-11. Design Options with PCG Deployment Modes

PCG Deployment Mode in Transit VPC/VNet: NSX Enforced Mode
Supported Modes when linking Compute VPCs/VNets to this Transit VPC/VNet:
n NSX Enforced Mode
n Native Cloud Enforced Mode

PCG Deployment Mode in Transit VPC/VNet: Native Cloud Enforced Mode
Supported Modes when linking Compute VPCs/VNets to this Transit VPC/VNet:
n NSX Enforced Mode
n Native Cloud Enforced Mode

Note Once a mode is selected for a Transit or Compute VPC/VNet, you cannot change the
mode. If you want to switch modes, you must undeploy the PCG and redeploy it in the desired
mode.

Deploy PCG or Link to a PCG


Follow these instructions for deploying PCG or linking to a PCG.

NSX Cloud creates networking and security constructs in NSX Manager and your public cloud
after PCG is deployed. See Auto-Configurations after PCG Deployment or Linking .

If you are using AWS Transit Gateway, see Using PCG with AWS Transit Gateway.

Note For Native Cloud PCG deployment, the default admin and root passwords are the same and
are set to NSX-<instance-ID>, where <instance-ID> is the instance ID of the AWS or Azure VM.
For example, if the instance ID is i-0349f21e34dc27b08, the initial password is set to
NSX-i-0349f21e34dc27b08.

The root password is expired by default. If you want to access the root shell of the PCG appliance,
you must reset the password using the initial password, which is NSX-<instance-ID>.

Deploy PCG in a VNet


Follow these instructions to deploy PCG in your Microsoft Azure VNet.

The VNet in which you deploy a PCG can act as a Transit VNet to which other VNets can connect
(known as Compute VNets). This VNet can also manage VMs and act as a self-managed VNet.

Follow these instructions to deploy a PCG. If you want to link to an existing Transit VNet, see Link
to a Transit VPC or VNet .


Prerequisites

n If you have deployed NSX Cloud components on-prem, ensure the VNet is connected with
your on-prem NSX.

If you have deployed NSX Cloud components in Microsoft Azure, ensure that the VNet is
peered with the NSX Cloud Management VNet (an example peering command follows these
prerequisites). See deployment architecture details at Deploy NSX Cloud Components in
Microsoft Azure using the NSX Cloud Marketplace Image.

n Verify that your Microsoft Azure subscription is added into CSM.

n Verify that you have the required subnets in the VNet where you are deploying PCG: uplink,
downlink, and management. For High Availability, you must have an uplink subnet for the
secondary PCG that is different from the primary PCG.
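As an illustration of the peering prerequisite, the following Azure CLI sketch peers a workload VNet with the NSX Cloud Management VNet. All names and resource groups are assumptions, and peering must also be created in the reverse direction from the management VNet:

az network vnet peering create --resource-group <workload-rg> \
    --vnet-name <workload-vnet> --name to-nsx-mgmt-vnet \
    --remote-vnet <nsx-mgmt-vnet-resource-id> --allow-vnet-access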

Procedure

1 Log in to CSM using an account with the Enterprise Administrator role.

2 Click Clouds > Azure and go to the VNets tab.

3 Click a VNet where you want to deploy the PCG.

4 Click Deploy Gateways. The Deploy Gateway wizard opens.

5 For General Properties, use the following guidelines:

Option Description

SSH Public Key Provide an SSH public key that can be validated while deploying PCG. This is
required for each PCG deployment.

Manage with NSX Tools Leave in the default disabled state to onboard workload VMs in the Native
Cloud Enforced Mode. If you want to install NSX Tools on your workload
VMs to use the NSX Enforced Mode, enable this option.

Quarantine Policy on the Associated You can only change the Quarantine Policy setting if you choose to manage
VNet workload VMs using NSX Tools (NSX Enforced Mode). Quarantine Policy is
always enabled in the Native Cloud Enforced Mode.
Leave this in the default disabled mode when you first deploy PCG. You can
change this value after onboarding VMs. See Manage Quarantine Policy in
the NSX Administration Guide for details.

Auto-install NSX Tools This is only available when you enable Manage with NSX Tools. If selected,
NSX Tools are auto-installed on all workload VMs in the Transit/Self-
managed/linked Compute VNet if the tag [Link]=default is applied
to them.

Gateway Connectivity Mode The PCG can be accessed from CSM using a public IP address or a private
IP address depending on the connectivity mode between your public cloud
and your on-prem NSX installation. If you select Auto Detect, the system
attempts to connect with CSM over VGW first, and if that fails, over IGW. If
the system cannot connect with CSM, the deployment fails.
See Impact of on-prem and public cloud connectivity mode on PCG's
discovery of CSM for details.


Option Description

Use Marketplace Image This option is only available in NSX 3.1.1. It is enabled by default when a
compatible marketplace image is available to deploy in Microsoft Azure. See
Deploy NSX Cloud Components in Microsoft Azure using the NSX Cloud
Marketplace Image for details.

Azure Marketplace Terms If you are using the marketplace image to deploy PCG, you must accept
Microsoft Azure terms of use. NSX Cloud provides the terms for you to
download and read. Select the checkbox to accept the terms to proceed.

Local Storage Account When you add a Microsoft Azure subscription to CSM, a list of your
Microsoft Azure storage accounts is available to CSM. Select the storage
account from the drop-down menu. When proceeding with deploying PCG,
CSM copies the publicly available VHD of the PCG into this storage account
of the selected region.

Note If the VHD image has been copied to this storage account in the
region already for a previous PCG deployment, then the image is used from
this location for subsequent deployments to reduce the overall deployment
time.

VHD URL If you want to use a different PCG image that is not available from the public
VMware repository, you can enter the URL of the PCG’s VHD here. The VHD
must be present in the same account and region where this VNet is created.

Note The VHD must be in the correct URL format. We recommend that you
use the Click to copy option in Microsoft Azure.

Proxy Server Select a proxy server to use for internet-bound traffic from this PCG. The
proxy servers are configured in CSM. You can select the same proxy server
that CSM uses, if one is configured, select a different proxy server, or select
No Proxy Server.
See (Optional) Configure Proxy Servers for details on how to configure
proxy servers in CSM.

6 Click Next.

7 For Subnets, use the following guidelines:

Option Description

Enable HA for NSX Cloud Gateway Select this option to enable High Availability.

Subnets Select the management, uplink, and downlink subnets for the PCG. If you enabled
High Availability, select a separate uplink subnet for the secondary PCG.

Public IP on Mgmt NIC Select Allocate New IP address to provide a public IP address to the
management NIC. You can manually provide the public IP address if you
want to reuse a free public IP address.

Public IP on Uplink NIC Select Allocate New IP address to provide a public IP address to the uplink
NIC. You can manually provide the public IP address if you want to reuse a
free public IP address.

What to do next

Follow instructions at Using NSX Cloud in the NSX Administration Guide.


Deploy PCG in a VPC


Follow these instructions to deploy PCG in your AWS VPC.

The VPC in which you deploy a PCG can act as a Transit VPC to which other VPCs can connect
(known as Compute VPCs). This VPC can also manage VMs and act as a self-managed VPC.

Follow these instructions to deploy a PCG. If you want to link to an existing Transit VPC, see Link
to a Transit VPC or VNet .

If you are using AWS Transit Gateway, see Using PCG with AWS Transit Gateway.

Prerequisites

n Ensure the VPC is connected with your on-prem NSX.

n Verify that your AWS account is already added into CSM.

n Verify that the VPC on which you are deploying PCG has the required subnets appropriately
adjusted for High Availability: uplink, downlink, and management.

n Verify that the configuration for your VPC's network ACL includes an ALLOW inbound rule.

Procedure

1 Log in to CSM using an account with the Enterprise Administrator role.

2 Click Clouds > AWS > <AWS_account_name> and go to the VPCs tab.

3 In the VPCs tab, select an AWS region name, for example, us-west. The AWS region must
be the same one where you created the compute VPC.

4 Select a VPC configured for NSX Cloud.

5 Click Deploy Gateways.

6 Complete the general gateway details:

Option Description

PEM File Select one of your PEM files from the drop-down menu. This file must be
in the same region where NSX Cloud was deployed and where you created
your compute VPC.
This uniquely identifies your AWS account.

Manage with NSX Tools Leave in the default disabled state to onboard workload VMs in the Native
Cloud Enforced Mode. If you want to install NSX Tools on your workload
VMs to use the NSX Enforced Mode, enable this option.

Quarantine Policy on the Associated You can only change the Quarantine Policy setting if you choose to manage
VPC workload VMs using NSX Tools (NSX Enforced Mode). Quarantine Policy is
always enabled in the Native Cloud Enforced Mode
Leave this in the default disabled mode when you first deploy PCG. You can
change this value after onboarding VMs. See Manage Quarantine Policy in
the NSX Administration Guide for details.


Option Description

Gateway Connectivity Mode The PCG can be accessed from CSM using a public IP address or a private IP
address depending on the connectivity mode between your public cloud
and your on-premises NSX installation. If you select Auto Detect, the
system attempts to connect with CSM over VGW first, and if that fails, over
IGW. If the system cannot connect with CSM, the deployment fails.
See Impact of on-prem and public cloud connectivity mode on PCG's
discovery of CSM for details.

InstanceType Select any one of the sizes from the drop-down menu based on your
requirement. There are four instance type sizes available:
n Small
n Medium
n Large
n Extra Large
See NSX Public Cloud Gateway: Architecture and Modes of Deployment for
more information on PCG instance types.

Note You can enable Firewall features like Application ID, IDPS, and URL
Enforcement only on Large and Extra Large size PCG deployment. However,
in NSX 3.2, Firewall features are available in Tech Preview mode. Use these
features only for experimental purposes and VMware does not officially
provide support for these features.

Proxy Server Select a proxy server to use for internet-bound traffic from this PCG. The
proxy servers are configured in CSM. You can select the same proxy server
that CSM uses, if one is configured, select a different proxy server, or select
No Proxy Server.
See (Optional) Configure Proxy Servers for details on how to configure
proxy servers in CSM.

Override AMI ID Use this advanced feature to provide a different AMI ID for the PCG from the
one that is available in your AWS account.

7 Click Next.

8 Complete the Subnet details.

Option Description

Enable HA for Public Cloud Gateway The recommended setting is Enable, which sets up a High Availability Active/
Standby pair to avoid unscheduled downtime.

Primary gateway settings Select an Availability Zone such as us-west-1a, from the drop-down menu
as the primary gateway for HA.
Assign the uplink, downlink, and management subnets from the drop-down
menu.

Secondary gateway settings Select another Availability Zone such as us-west-1b, from the drop-down
menu as the secondary gateway for HA.
The secondary gateway is used when the primary gateway fails.
Assign the uplink, downlink, and management subnets from the drop-down
menu.


Public IP on Mgmt NIC Select Allocate New IP address to provide a public IP address to the
management NIC. You can manually provide the public IP address if you
want to reuse a free public IP address.

Public IP on Uplink NIC Select Allocate New IP address to provide a public IP address to the uplink
NIC. You can manually provide the public IP address if you want to reuse a
free public IP address.

Click Deploy.

9 Monitor the status of the primary (and secondary, if you selected it) PCG deployment. This
process can take 10-12 minutes.

10 Click Finish when PCG is successfully deployed.

What to do next

Follow instructions at "Using NSX Cloud" in the NSX Administration Guide.

Using PCG with AWS Transit Gateway


If you use AWS Transit Gateway, you can deploy the PCG in any VPC and connect this VPC with
the Transit Gateway.

Follow instructions at Deploy PCG in a VPC.

Any other VPCs connected to the Transit Gateway can have their workload VMs managed by
NSX for micro-segmentation.

NSX Cloud does not manage networking between the Transit and Compute VPCs or the
workload VMs. All NSX networking constructs are created upon PCG deployment but only the
Security constructs are valid if you are working with AWS Transit Gateway. See Security Entities
for a list of auto-created security policies after PCG deployment.

n Currently only NSX Enforced Mode is supported. You must install NSX Tools in your workload
VMs. See NSX Enforced Mode in the NSX Administration Guide for instructions.

n The VPC where you deploy PCG – Transit VPC – must have the same subnets as required
by a Transit VPC that is not using the AWS Transit Gateway. See Subnets Required in Your
VPC/VNet to deploy PCG for details.

n You must link compute VPCs to the Transit VPC. See Link to a Transit VPC or VNet for
instructions.

n You must ensure that workload VMs with NSX Tools installed on them have connectivity with
the management subnet of the Transit VPC.

n To utilize micro-segmentation, you must add a Forwarding Policy with the following values:

Option Value

Sources A Group in NSX Manager that contains all NSX-Managed VMs from your Transit and Compute VPCs

Destinations All ([Link]/0)


Services Any

Action Route to Underlay

See Add or Edit Forwarding Policies in the NSX Administration Guide for details about
Forwarding Policies.

Link to a Transit VPC or VNet


You can link one or more compute VPCs or VNets to a Transit VPC or VNet.

Prerequisites

n Verify that you have a Transit VPC or VNet with a PCG.

n Verify that the VPC/VNet you want to link is connected to the Transit VPC or VNet through
VPN or peering.

n Verify that the Compute VPC/VNet is in the same region as the Transit VPC/VNet.

Note In route-based IPSec VPN configuration, you must specify the IP address for the virtual
tunnel interface (VTI) port. This IP must be in a different subnet than workload VMs. This prevents
workload VM inbound traffic from being directed to the VTI port, from which it will be dropped.

Note In the public cloud, a default limit exists for the number of inbound/outbound rules per
security group and NSX Cloud creates default security groups. This affects how many Compute
VPCs/VNets can be linked to a Transit VPC/VNet. Assuming 1 CIDR block per VPC/VNet, NSX
Cloud supports 10 Compute VPCs/VNets per Transit VPC/VNet. If you have more than 1 CIDR in
any Compute VPC/VNet, the number of supported Compute VPCs/VNets per Transit VPC/VNet
reduces. You can adjust the default limits by reaching out to your public cloud provider.

Procedure

1 Log in to CSM using an account with the Enterprise Administrator role.

2 Click Clouds > AWS / Azure > <public cloud_account_name> and go to the VPCs / VNets
tab.

3 In the VPCs or VNets tab, select a region name where you are hosting one or more compute
VPCs or VNets.

4 Select a compute VPC/VNet configured for NSX Cloud.

5 Click LINK TO TRANSIT VPC or LINK TO TRANSIT VNET


6 Complete the options in the Link Transit VPC or VNet window:

Option Description

Transit VPC or VNet Select a Transit VPC or VNet from the dropdown menu. The Transit VPC
or VNet you select must be already linked with this VPC by way of VPN or
peering.

Default Quarantine Policy Leave this in the default disabled mode when you first deploy PCG. You can
change this value after onboarding VMs. See Manage Quarantine Policy in
the NSX Administration Guide for details.

Manage with NSX Tools Leave in the default disabled state to onboard workload VMs in the Native
Cloud Enforced Mode. If you want to install NSX Tools on your workload
VMs to use the NSX Enforced Mode, enable this option.

Auto-install NSX Tools This is only available when you choose to manage with NSX Tools and only
for Microsoft Azure VNets. If selected, NSX Tools are auto-installed on all
workload VMs in the Transit/Self-managed/linked Compute VNet if the tag
[Link]=default is applied to them.

What to do next

Follow instructions at Using NSX Cloud in the NSX Administration Guide.

Auto-Configurations after PCG Deployment or Linking


The deployment of PCG in a Transit VPC/VNet and linking a compute VPC/VNet to it triggers
necessary configurations in NSX and the public cloud.

Auto-created NSX Logical Entities


A set of logical entities are auto-created in NSX Manager.

Log in to NSX Manager to view the auto-created logical entities.

Important Do not delete any of these auto-created entities except if you are manually
undeploying PCG. See Troubleshooting PCG Undeployment for details.

System Entities
You can see the following entities under the System tab:


Table 15-12. Auto-Created System Entities

n Transport Zones: Two Transport Zones are created for each Transit VPC/VNet, named
TZ-<VPC/VNet-ID>-OVERLAY and TZ-<VPC/VNet-ID>-VLAN. Scope: Global.

n Edge Transport Nodes: One Edge Transport Node is created for each deployed PCG, two if
deployed in high availability mode, named PublicCloudGatewayTN-<VPC/VNET-ID> and
PublicCloudGatewayTN-<VPC/VNET-ID>-preferred. Scope: Global.

n Edge Cluster: One Edge Cluster is created per deployed PCG, whether a single PCG or a
high availability pair, named PCG-cluster-<VPC/VNet-ID>. Scope: Global.

Inventory Entities
The following entities are available under the Inventory tab:

Table 15-13. Groups

n Two Groups, named cloud-default-route and cloud-metadata services. Scope: Shared across
all PCGs.

n One Group, cloud-<Transit VPC/VNet ID>-all-segments, created at the Transit VPC/VNet level
as a parent Group for individual segments created at the Compute VPC/VNet level. Scope:
Shared across all Compute VPCs/VNets.


Table 15-13. Groups (continued)

n Two Groups for each Compute VPC/VNet. Scope: Shared across all Compute VPCs/VNets.
n Network CIDR Group for all CIDRs of the Compute VPC/VNet: cloud-<Compute VPC/VNet ID>-cidr
n Local Segment Group for all managed segments within the Compute VPC/VNet: cloud-<Compute VPC/VNet ID>-local-segments

n The following Groups are created for the currently supported public cloud services. Scope:
Shared across all PCGs.
n aws-dynamo-db-service-endpoint
n aws-elb-service-endpoint
n aws-rds-service-endpoint
n aws-s3-service-endpoint
n azure-cosmos-db-service-endpoint
n azure-load-balancer-service-endpoint
n azure-sql-service-endpoint
n azure-storage-service-endpoint

Note For PCGs deployed or linked to in the Native Cloud Enforced Mode, all the workload VMs
in the VPC/VNet become available under Virtual Machines in NSX Manager.

Networking Entities
The following entities are created at different stages of onboarding and can be found under the
Networking tab:


Figure 15-5. Auto-created NSX Networking Entities After PCG is Deployed

[The figure maps NSX entities to public cloud constructs: one Tier-0 Gateway is created when
PCG is deployed in the Transit VPC/VNet; one Tier-1 Gateway is created for a self-managed
Transit VPC/VNet and for each Compute VPC/VNet linked to the Transit; one Segment is created
for the subnet in the VPC/VNet in which a workload VM is tagged with [Link]=default; and
one Segment Port is created each time a workload VM is onboarded in the same subnet.]

Table 15-14. Auto-Created Networking Entities

Onboarding Task: PCG deployed on Transit VPC/VNet
Logical Entities Created in NSX:
n Tier-0 Gateway
n Infra Segment (Default VLAN switch)
n Tier-1 router

Onboarding Task: Compute VPC or VNet linked to the Transit VPC/VNet
Logical Entities Created in NSX:
n Tier-1 router

Onboarding Task: A workload VM with the NSX agent installed on it is tagged with the "nsx.network:default" key:value in a subnet of a compute or self-managed VPC/VNet
Logical Entities Created in NSX:
n A Segment is created for this specific subnet of the compute or self-managed VPC or VNet
n Hybrid ports are created for each tagged workload VM that has the NSX agent installed on it

Onboarding Task: More workload VMs are tagged in the same subnet of the Compute or self-managed VPC/VNet
Logical Entities Created in NSX:
n Hybrid ports are created for each tagged workload VM that has the NSX agent installed on it

Forwarding Policies


The following three forwarding rules are set up for a Compute VPC/VNet, including Self-managed
Transit VPC/VNet:

n Access any CIDR of the same Compute VPC over the public cloud's network (underlay)

n Route traffic pertaining to public cloud metadata services over the public cloud's network
(underlay)

n Route everything not in the Compute VPC/VNet's CIDR block, or a known service, through
the NSX network (overlay)

Security Entities
The following entities are available under the Security tab:

Table 15-15. Auto-Created Security Entities

Distributed Firewall (East-West)
n How many are created: Two per Transit VPC/VNet: Stateless and Stateful
n Nomenclature: cloud-stateless-<VPC/VNet ID> and cloud-stateful-<VPC/VNet ID>
n Scope: Stateful rule to allow traffic within local managed segments; Stateful rule to reject traffic from unmanaged VMs

Gateway Firewall (North-South)
n How many are created: One per Transit VPC/VNet
n Nomenclature: cloud-<Transit VPC/VNet ID>

Auto-created Public Cloud Configurations


In your public clouds, some configurations are set up automatically after you deploy PCG.

Some auto configurations are common to all public clouds and both NSX management modes.
Other configurations are specific to either the public cloud or the NSX management mode.

Specific to AWS
The following are specific to AWS:

n In the AWS VPC, a new Type A Record Set gets added with the name nsx-
[Link] into a private hosted zone in Amazon Route 53. The IP address mapped
to this record matches the Management IP address of the PCG which is assigned by AWS
using DHCP and will differ for each VPC. This DNS entry in the private hosted zone in Amazon
Route 53 is used by NSX Cloud to resolve the PCG's IP address.

Note When you use custom DNS domain names defined in a private hosted zone in Amazon
Route 53, the DNS Resolution and DNS Hostnames attributes must be set to Yes for your
VPC settings in AWS.

n A secondary IP for the uplink interface for PCG is created. An AWS Elastic IP is associated
with this secondary IP address. This configuration is for SNAT.
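If you want to confirm these AWS-side auto-configurations, you can list the record sets of the private hosted zone with the AWS CLI. This is only an optional verification sketch and not part of the product workflow; it assumes the AWS CLI is installed and configured, and <hosted-zone-id> is a placeholder you look up first.

# List the hosted zones in the account to find the private hosted zone used by NSX Cloud.
aws route53 list-hosted-zones

# Show the record sets in that hosted zone, including the Type A record created for the PCG.
aws route53 list-resource-record-sets --hosted-zone-id <hosted-zone-id>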


Specific to Microsoft Azure


The following are specific to Microsoft Azure:

n A common Resource Group is created per region, per subscription. It is named like:
nsx-default-<region-name>-rg, for example: nsx-default-westus-rg. All VNets in this
region share this Resource Group. This Resource Group and all the NSX-created security
groups named like default-<vnet-ID>-sg are not deleted from the Microsoft Azure region after
you off-board a VNet in this region from NSX Cloud.
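If you want to confirm that the common Resource Group exists in your subscription, a quick check with the Azure CLI might look like the following sketch; the region name westus is only the example used above, and the Azure CLI must be installed and logged in to the subscription.

# Show the NSX Cloud common Resource Group for the region (example region: westus).
az group show --name nsx-default-westus-rg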

Common to both modes and all public clouds


The following are created in all public clouds and for both NSX-management modes: NSX
Enforced Mode and Native Cloud Enforced Mode:

n The gw security groups are applied to the respective PCG interfaces in VPCs or VNets.

Table 15-16. Public Cloud Security Groups created by NSX Cloud for PCG interfaces

Security Group name Description

gw-mgmt-sg Gateway Management Security Group

gw-uplink-sg Gateway Uplink Security Group

gw-vtep-sg Gateway Downlink Security Group

Specific to Native Cloud Enforced Mode


The following security groups are created when the PCG is deployed in the Native Cloud
Enforced Mode.

After workload VMs are matched with groups and corresponding security policies in NSX
Manager, security groups named like nsx-<GUID> are created in the public cloud for each
matching security policy.

Note In AWS, Security Groups are created. In Microsoft Azure, Application Security Groups
are created corresponding to Groups in NSX Manager and Network Security Groups are created
corresponding to Security Policies in NSX Manager.


Security Group name: default-vnet-<vnet-id>-sg
n Available in Microsoft Azure: Yes
n Available in AWS: No
n Description: NSX Cloud-created security group in the common Microsoft Azure Resource Group for assigning to VMs that are not matched with a security policy in NSX.

Security Group name: default
n Available in Microsoft Azure: No
n Available in AWS: Yes
n Description: An existing security group in AWS used by NSX Cloud for assigning to VMs that are not matched with a security policy in NSX.

Security Group name: vm-overlay-sg
n Available in Microsoft Azure: Yes
n Available in AWS: Yes
n Description: VM overlay security group (this is not used in the current release)

Specific to NSX Enforced Mode


The following security groups are created for workload VMs when you deploy PCG in the NSX
Enforced Mode.

Table 15-17. Public Cloud Security Groups created by NSX Cloud for Workload VMs in the NSX
Enforced Mode

Security Group name: default-vnet-<vnet-id>-sg
n Available in Microsoft Azure: Yes
n Available in AWS: No
n Description: NSX Cloud-created security group in Microsoft Azure for threat-detection workflows in the NSX Enforced Mode

Security Group name: default
n Available in Microsoft Azure: No
n Available in AWS: Yes
n Description: An existing security group in AWS used by NSX Cloud for threat-detection workflows in the NSX Enforced Mode

Security Group name: vm-underlay-sg
n Available in Microsoft Azure: Yes
n Available in AWS: Yes
n Description: VM underlay security group

Security Group name: vm-overlay-sg
n Available in Microsoft Azure: Yes
n Available in AWS: Yes
n Description: VM overlay security group (this is not used in the current release)

Integrate Horizon Cloud Service with NSX Cloud


Starting in NSX 3.1.1, you can integrate your Horizon Cloud deployment with NSX Cloud with
fewer manual steps than in earlier releases.

Horizon Cloud integration is supported with NSX Cloud management components – NSX
Manager and Cloud Service Manager (CSM) – deployed either on-prem or natively in Microsoft
Azure, starting in NSX 3.1.1.


The diagrams depict the options available to deploy NSX Cloud management components and
shared subnets between PCG and Horizon Cloud Management components as possible scenarios
for this integration.

Figure 15-6. Horizon Cloud Integration with NSX Cloud Components deployed On-prem
(Diagram summary: the NSX Manager cluster and NSX CSM run on-prem and connect to the Microsoft Azure Transit/Self-managed VNet over Express Route/Direct Connect, a site-to-site VPN over the internet, or an Internet Gateway (IGW). The VNet hosts the Horizon Cloud management components (Base VMs, Node VMs, UAGs) and the NSX Cloud components, with PCG in NSX Enforced Mode (with NSX Tools), across the DMZ, Horizon Cloud Management, Tenant, Management, and Uplink subnets.)


Figure 15-7. Horizon Cloud Integration with NSX Cloud with NSX Cloud Components deployed in
Microsoft Azure
(Diagram summary: the NSX Manager cluster and NSX CSM run in an NSX Cloud Management VNet in Microsoft Azure and connect to the Transit/Self-managed VNet over a peered connection or an Internet Gateway (IGW). The Transit/Self-managed VNet hosts the Horizon Cloud management components (Base VMs, Node VMs, UAGs) and the NSX Cloud components, with PCG in NSX Enforced Mode (with NSX Tools), across the DMZ, Horizon Cloud Management, Tenant, Management, and Uplink subnets.)

You must deploy your Horizon Cloud pod in the same Self-managed/Transit VNet where you
have deployed the PCG. After deploying the Horizon Cloud pod in a Self-managed/Transit VNet,
the system auto-creates the necessary NSX policies to enable communication between Horizon
Cloud management components and VDIs deployed in Microsoft Azure. You can create security
policies for VDIs as necessary.

Note Auto-creation of entities is a feature of NSX version 3.1.1 and later only.

Prerequisites

n Verify that NSX Cloud management components are already deployed and active. See
Deploy NSX On-Prem Components or Deploy NSX Cloud Components in Microsoft Azure
using the NSX Cloud Marketplace Image for relevant instructions for your deployment model.

n Verify that a PCG is already deployed in the same VNet where you are deploying the Horizon
Cloud pod. A VNet that has the PCG deployed in it is called a Self-managed or Transit
VNet in NSX Cloud terminology. See NSX Public Cloud Gateway: Architecture and Modes of
Deployment .


n If NSX Cloud management components are deployed on-prem, verify that you have a Self-
managed or Transit VNet connected with the on-prem components using VGW or IGW
connectivity.

n If you are deploying NSX Cloud management components in the public cloud, verify that you
have a peering connection between the VNet where NSX Cloud components are deployed
and the Self-managed/ Transit VNet.

Procedure

1 From your Microsoft Azure subscription, deploy the Horizon Cloud pod in a Self-managed/
Transit VNet. See Horizon Pods on Microsoft Azure - Using the Horizon Cloud Administration
Console Capacity Page to Add More Pods to Your Pod Fleet in the Horizon Cloud Service
Product Documentation for detailed steps. Note the following configurations in the Add
Microsoft Azure Capacity wizard that are specific to integration with NSX Cloud:

a In the Subscription tab, select the Microsoft Azure subscription where PCG is deployed.

b In the Pod Setup tab, under the section Networking for Virtual Network, select the
Microsoft Azure VNet in which PCG is deployed.

Use subnets that allow communication between the PCG and Horizon Cloud management
components.

c Click Validate and Proceed

2 Create a base image with NSX Tools installed.

If you are using pod manifest 2632.0 or later, and your customer record has access to the
new Image Management feature that auto-installs NSX Tools into the base image, enable
Install NSX Tools.
If you are using pod manifests prior to 2632.0, follow the steps described at Install the NSX
Agent in the Horizon Cloud Imported Image VM to manually install NSX Tools in the base
image.

3 Use the base image to create the VDI desktop assignment, enabling NSX Cloud Managed.
See Horizon Cloud Workflows and NSX Cloud for details.

What to do next

Verify the Auto-Created Entities After Horizon Cloud Integration .

Auto-Created Entities After Horizon Cloud Integration


After you deploy your Horizon Cloud pod in a Self-managed/Transit VNet, the following entities
are auto-created in CSM and NSX Manager.

Note Auto-creation of entities is a feature of NSX version 3.1.1 and later only.


Horizon Cloud VMs in CSM


n Horizon Cloud VMs can be management VMs or VDIs for end users.

n CSM distinguishes Horizon Cloud management VMs from end-user consumable VDIs as
follows:

n The three types of Horizon Cloud management VMs – UAG, Base, and Node are labeled
as Horizon Management VMs in CSM > Clouds > Azure > Instances. The Horizon Cloud
administrator has complete control over the security groups assigned to these VMs in
Microsoft Azure.

n All VDIs launched in the Horizon Cloud pod, using any image with NSX Cloud enabled, are
NSX-managed if they enable NSX Cloud when launched. NSX Tools are installed on such
VDIs and they are managed like other managed VMs in the NSX Enforced mode. In CSM >
Clouds > Azure > Instances, you can see these VDIs with the label Horizon VDI.

See "Managing VMs in the NSX Enforced Mode" in the NSX Administration Guide for
details.

Also see "VMware NSX Cloud and Horizon Cloud Pods in Microsoft Azure" in the Horizon
Cloud Service Product Documentation.

Horizon Cloud Entities Created in NSX Manager

NSX Manager Component: Inventory > Services
Auto-created Entity: HorizonUAGPolicyService
Details: This service allows communication between the Horizon Cloud UAG and VDIs. For details, see Table 15-18. DFW Policy Auto-created for Horizon Cloud Integration under the Infrastructure category.

NSX Manager Component: Inventory > Services
Auto-created Entity: HorizonNodeVMPolicyService
Details: This service is used to allow communication from the VDIs to Horizon Cloud Management Node VMs. For details, see Table 15-18. DFW Policy Auto-created for Horizon Cloud Integration under the Infrastructure category.


NSX Manager Component: Inventory > Groups
Auto-created Entities:
n vmw-hcs-<pod-id>-base
n vmw-hcs-<pod-id>-node
n vmw-hcs-<pod-id>-uag
n vmw-hcs-<pod-id>-vdi
Details: The group definition for these groups is as follows:
n The instance-type label that Horizon Cloud applies to these VMs in Microsoft Azure.
n The Microsoft Azure ID of the Self-managed/Transit VNet that also hosts the Horizon Cloud pod.
You manage the VDIs that are included in the vmw-hcs-<id>-vdi group. The other groups are managed by Horizon Cloud. The Horizon Cloud jumpbox VMs are grouped under vmw-hcs-<id>-node.

NSX Manager Component: Inventory > Virtual Machines
Auto-created Entities: Horizon Cloud VDIs with names provided by Horizon Cloud
Details: These are the VDIs in Horizon Cloud that are categorized as Virtual Machines in NSX Manager. All security policies and other configurations in NSX Manager are targeted towards these Virtual Machines.

NSX Manager Component: Inventory > Tags
Auto-created Entities:
n Tag Scope: azure:instance_type
n Tag Values: HORIZON_MGMT, HORIZON_BASE, HORIZON_UAG, HORIZON_VDI
Details: These system tags are used to create groups for security policies.

Security Policy

Under Security > Distributed Firewall > Infrastructure a DFW policy is created with the name:
vmw-hcs-<pod_id>-security-policy. This policy has the following Allow rules.


Table 15-18. DFW Policy Auto-created for Horizon Cloud Integration under the Infrastructure
category

DFW Rule Name: AllowHCSUAGToVDI
n Source: Unified Access Gateway
n Destination: VDI
n Service/Ports: HorizonUAGPolicyService: TCP (Source: Any; Destination: 22443,32111,4172,443,8443,9427), UDP (Source: Any; Destination: 22443,4172)
n Protocols: TCP and UDP

DFW Rule Name: AllowVDIToHCSNode
n Source: VDI
n Destination: Node VMs
n Service/Ports: HorizonNodeVMPolicyService (Source: Any; Destination: 3099,4001,4002)
n Protocols: TCP

Note All networking for NSX-managed VDIs within the VNet is through Microsoft Azure. NSX
only manages traffic going out of the VNet.

See "Group VMs using NSX and Public Cloud Tags" in the NSX Administration Guide for details
on discovered tags: these are tags that you apply in Microsoft Azure to your VDIs and they are
visible in NSX Manager to enable tag-based grouping.

(Optional) Install NSX Tools on your Workload VMs


If you are using the NSX Enforced Mode, proceed to installing NSX Tools in your workload VMs.

See instructions and further details at "Onboarding VMs in the NSX Enforced Mode" in the NSX
Administration Guide.

Un-deploy NSX Cloud


You must un-deploy NSX Cloud configurations before decommissioning the CSM appliance.

To un-deploy NSX Cloud, perform the following steps:

1 Undeploy or Unlink PCG.

For any issues, see Troubleshooting PCG Undeployment.

2 Decommission the CSM appliance.

a Log in to the vSphere Client using your administrator credentials.

b Power off the CSM appliance VM.

c To delete the CSM appliance VM from the datastore, right-click the appliance VM.

d Select Delete from Disk, and then click OK. For details, refer to the vSphere
documentation.

Note In the case of an Azure deployment, you must delete the entire Resource Group that contains
the three NSX Manager nodes (MPs) and the CSM.


Undeploy or Unlink PCG


You can undeploy or unlink PCG after you have removed some NSX Cloud configurations.

In the NSX Enforced Mode


n Remove the nsx.network=default tag from NSX-managed workload VMs.

n Disable the Quarantine Policy if it is enabled.

n Delete all user-created logical entities associated with the PCG.

In the Native Cloud Enforced Mode


n Delete all user-created logical entities associated with the PCG.

Follow the steps relevant to the NSX management mode your PCG is deployed in.

What to read next

Procedure

1 Remove the nsx.network tag in the Public Cloud


Before you can undeploy PCG, all VMs must be unmanaged.

2 Disable Quarantine Policy in the NSX Enforced Mode


If you are using the NSX Enforced Mode, you must disable the Quarantine Policy if it was previously enabled.

3 Delete User-created Logical Entities


All user-created logical entities associated with the PCG must be deleted.

4 Undeploy or Unlink from CSM


Follow these instructions to undeploy or unlink PCG after completing the prerequisites.

5 Troubleshooting PCG Undeployment


If PCG undeployment fails, you have to manually delete all the NSX Cloud-created entities in
NSX Manager as well as in the public cloud.

Remove the nsx.network tag in the Public Cloud


Before you can undeploy PCG, all VMs must be unmanaged.

Note This is only applicable in the NSX Enforced Mode.

Go to the VPC or VNet in your public cloud and remove the nsx.network=default tag from
the managed VMs.
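If you prefer the command line over the public cloud console, the tag can also be removed with the cloud provider's CLI. The following is a sketch for AWS only, assuming the AWS CLI is configured and that i-0123456789abcdef0 is a placeholder instance ID; in Microsoft Azure, remove the tag from the VM in the portal or with your preferred tooling.

# Remove the nsx.network tag from a managed workload VM (the instance ID is a placeholder).
aws ec2 delete-tags --resources i-0123456789abcdef0 --tags Key=nsx.network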

Disable Quarantine Policy in the NSX Enforced Mode


If you are using the NSX Enforced Mode, you must disable the Quarantine Policy if it was previously enabled.

This step is only applicable to the NSX Enforced Mode.


With Quarantine Policy enabled, your VMs are assigned security groups in your public cloud that
are defined by NSX Cloud.

When you undeploy PCG, you must disable Quarantine Policy. Follow these steps:

1 Go to the VPC or VNet in CSM.

2 From Actions > Edit Configurations, turn off the setting for Default Quarantine.

3 All VMs that are unmanaged or quarantined in this VPC or VNet will be assigned to the
default security group in AWS and the default-vnet-<vnet-id>-sg security group in
Microsoft Azure.

4 If there are managed VMs while disabling Quarantine Policy, they retain their NSX Cloud-
assigned security groups. The first time you remove the nsx.network=default tag from
such VMs to take them out of NSX management, they are also assigned to the default
security group in AWS and the default-vnet-<vnet-id>-sg security group in Microsoft
Azure.

Note The common Resource Group created in Microsoft Azure, that is named like: nsx-
default-<region-name>-rg, for example: nsx-default-westus-rg, is not removed when
you undeploy PCG. This Resource Group and all the NSX-created security groups named like
default-<vnet-ID>-sg are not deleted from the Microsoft Azure region. You can remove
the NSX Cloud-specific security group any time after the VNet is off-boarded.

See Auto-Configurations after PCG Deployment or Linking for a list of NSX Cloud security groups.

Delete User-created Logical Entities


All user-created logical entities associated with the PCG must be deleted.

Identify entities which are associated with the PCG and delete them.

Note Do not delete the auto-created logical entities. These are deleted automatically after you
click Undeploy or Unlink from Transit VPC/VNet from CSM. See Auto-Configurations after PCG
Deployment or Linking for details.

Undeploy or Unlink from CSM


Follow these instructions to undeploy or unlink PCG after completing the prerequisites.

1 Log in to CSM and go to your public cloud:

n If using AWS, go to Clouds > AWS > VPCs. Click on the VPC on which one or a pair of
PCGs is deployed and running.

n If using Microsoft Azure, go to Clouds > Azure > VNets. Click on the VNet on which one
or a pair of PCGs is deployed and running.

2 Click Undeploy or Unlink from Transit VPC/VNet.

The default entities created by NSX Cloud are removed automatically when the PCG is
undeployed or unlinked.


Troubleshooting PCG Undeployment


If PCG undeployment fails, you have to manually delete all the NSX Cloud-created entities in NSX
Manager as well as in the public cloud.

n In your public cloud:

n Terminate all PCGs in the Transit VPC/VNet.

n Move all your workload VMs to a security group not created by NSX Cloud.

n For Microsoft Azure, also delete the NSX Cloud-created Resource Group named like nsx-
gw-<vnet ID>-rg.

n Delete the auto-created entities with the VPC/VNet ID in NSX Manager as listed here: Auto-
created NSX Logical Entities.

Note Do not delete the global entities that are auto-created. Only delete the ones that have
the VPC/VNet ID in their name.

n Restart the CSM Service. Log in to the CSM appliance CLI and run the restart service
cloud-service-manager command.

Important If the PCG un-deployment fails even after performing the steps mentioned in this
topic, or if you need to redeploy in the same VPC/VNet, a database cleanup for PCG may be required.
The database cleanup requires engineering assistance. If you want to clean up the PCG database,
contact the VMware support team.



Uninstalling NSX from a Host
Transport Node 16
The steps to uninstall NSX from a host transport node vary depending on the host type and how
it is configured.

n Uninstall NSX from a vSphere Cluster


If you have installed NSX on a vSphere Cluster using transport node profiles, you can follow
these instructions to uninstall NSX from all hosts in the cluster.

n Uninstall NSX from a Managed Host in a vSphere Cluster


You can uninstall NSX from a single host that is managed by VMware vCenter. The other
hosts in the cluster are not affected.

n Uninstall NSX from a Physical Host


You can uninstall NSX from a physical host.

n Triggering Uninstallation from the vSphere Web Client


In the vSphere Web Client, if you move a host from a cluster prepared with a transport node
profile to either another cluster, outside of the cluster as a standalone host, or outside of
the data center, then NSX is uninstalled on the host that is moved. Such an uninstallation is
not triggered when a host that is individually prepared with a transport node configuration is
moved.

n Uninstall NSX from a vSphere Lifecycle Manager cluster through NSX Manager
You can trigger uninstallation of NSX on a host that is part of a vSphere Lifecycle Manager
cluster through NSX Manager.

Uninstall NSX from a vSphere Cluster


If you have installed NSX on a vSphere Cluster using transport node profiles, you can follow
these instructions to uninstall NSX from all hosts in the cluster.

For more information on transport node profiles, see Add a Transport Node Profile.

If you have not used a transport node profile to install NSX, or if you want to remove NSX from a
subset of the hosts in the cluster, see Uninstall NSX from a Managed Host in a vSphere Cluster.

Note Follow these instructions to uninstall NSX from a host in a cluster: Uninstall NSX from a
Managed Host in a vSphere Cluster.


Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Nodes.

3 From the Managed by drop-down menu, select the vCenter Server.

4 Select the cluster you want to uninstall, and select Remove NSX.

Note If NSX Intelligence is also deployed on the host, uninstallation of NSX will fail because
all transport nodes become part of a default network security group. To successfully uninstall
NSX, you also need to select the Force Delete option before proceeding with uninstallation.

5 Verify that the NSX software is removed from the host.

a Log into the host's command-line interface as root.

b Run this command to check for NSX VIBs. If no NSX VIBs are listed, the software has been removed.

esxcli software vib list | grep -E 'nsx|vsipfwlib'

6 (Host on a VDS 7.0 switch) If the host goes into failed state and NSX VIBs cannot be
removed, then run the nsxcli -c del nsx command to remove NSX from the host.

a Before running the del nsx command, do the following:

n If there are VMkernel adapters on NSX port groups on the VDS switch, you must
manually migrate or remove vmks from NSX port group to DV port groups on
the VDS switch. If there are any vmks available on the NSX port groups, del nsx
command execution fails.

n Put the ESXi host in maintenance mode. The VMware vCenter does not allow the host
to be put in maintenance mode unless all running VMs on the host are in powered off
state or moved to a different host.

n Permanently disconnect the ESXi host transport node from NSX Manager by stopping
nsx-proxy service running on the ESX host transport node. Log in to the ESXi CLI
terminal and run /etc/init.d/nsx-proxy stop.

n Refresh the NSX Manager UI.

n Verify that the state of the ESXi host transport node is Disconnected from NSX
Manager.

b Log in to the ESXi CLI terminal.

c Run nsxcli -c del nsx.


d Read the warning message. Enter Yes if you want to go ahead with NSX uninstallation.

Carefully read the requirements and limitations of this command:


1. Read NSX documentation for 'Remove a Host from NSX or Uninstall NSX Completely'.
2. Deletion of this Transport Node from the NSX UI or API failed, and this is the last
resort.
3. If this is an ESXi host:
a. The host must be in maintenance mode.
b. All resources attached to NSXPGs must be moved out.
c. If this is a SmartNIC-enabled host, the host must be rebooted after
vib removal is completed. Verify this by checking /var/run/log/[Link] for the
thread which shows nsx-lcp component removal and confirm it completed without error.
It will look something like:

In(14) esxupdate[2150621]: Starting runnable component remove -n nsx-


lcp-bundle:[Link].0-8.0.21958016 with 6e3446d0-8393-5869-8873-076a95930f56
...
Db(15) esxupdate[2150621]: Finished execution of command =
[Link]
If the above conditions for ESXi hosts are not met, the command WILL fail.
4. If this is a Linux host:
a. If KVM is managing VM tenants then shut them down before running this
command.
b. This command should be run from the host console and may fail if run from
an SSH client
or any other network based shell client.
c. The 'nsxcli -c del nsx' form of this command is not supported
5. If this is a Windows host:
NOTE: This will completely remove all NSX instances (image and config) from
the host.
6. For command progress check /scratch/log/[Link] on ESXi host or /var/log/
[Link] on Linux host or 'c:/Programdata/VMware/NSX/Logs/[Link]' on Windows
host.
Are you sure you want to remove NSX on this host? (yes/no) yes

Important After running the del nsx command, do not use the Resolve functionality in
the NSX Manager UI to re-prepare the host that is in Disconnected state . If you use the
Resolve functionality, the host might go into Degraded state.

e In the NSX Manager UI, if a TNP is attached to the cluster, detach the profile.

f Select each host and click Remove NSX.

g In the pop-up window, select Force Delete and begin uninstallation.

h On the ESXi host, verify that system message displayed is Terminated. This message
indicates that NSX is completely removed from the host.

All existing host switches are removed, transport node is detached from NSX Manager,
and NSX VIBs are removed. If any NSX VIBs remain on the host, contact VMware support.


Uninstall NSX from a Managed Host in a vSphere Cluster


You can uninstall NSX from a single host that is managed by VMware vCenter. The other hosts in
the cluster are not affected.

Prerequisites

n On an ESXi host that is put into Locked state, ensure that the root user is added to the
exception list, so that an SSH session can be established with the host.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Nodes.

3 From the Managed by drop-down menu, select the vCenter Server.

4 If the cluster has a transport node profile applied, select the cluster, and click Actions >
Detach TN Profile.

If the cluster has a transport node profile applied, the NSX Configuration column for the
cluster displays the profile name.

5 Select the host and click Remove NSX.

6 Verify that the NSX software is removed from the host.

a Log into the host's command-line interface as root.

b Run this command to check for NSX VIBs. If no NSX VIBs are listed, the software has been removed.

esxcli software vib list | grep -E 'nsx|vsipfwlib'

7 If a Transport Node Profile is applied to the cluster, and you want to reapply it, select
the cluster, click Configure NSX, and select the profile from the Select Deployment Profile
drop-down menu.


8 (Host on a VDS 7.0 switch) If the host goes into failed state and NSX VIBs cannot be
removed, run the nsxcli -c del nsx command to remove NSX from the host.

a Before running the del nsx command, do the following:

n If there are VMkernel adapters on NSX port groups on the VDS switch, you must
manually migrate or remove vmks from NSX port group to DV port groups on
the VDS switch. If there are any vmks available on the NSX port groups, del nsx
command execution fails.

n Put the ESXi host in maintenance mode. The VMware vCenter does not allow the host
to be put in maintenance mode unless all running VMs on the host are in powered off
state or moved to a different host.

n Permanently disconnect the ESXi host transport node from NSX Manager by stopping
nsx-proxy service running on the ESX host transport node. Log in to the ESXi CLI
terminal and run /etc/init.d/nsx-proxy stop.

n Refresh the NSX Manager UI.

n Verify that the state of the ESXi host transport node is Disconnected from NSX
Manager.

b Disable SNMP on the ESXi host.

esxcli system snmp set --enable false

c Log in to the ESXi CLI terminal.

d Run nsxcli -c del nsx.

e Read the warning message. Enter Yes if you want to go ahead with NSX uninstallation.

Carefully read the requirements and limitations of this command:


1. Read NSX documentation for 'Remove a Host from NSX or Uninstall NSX Completely'.
2. Deletion of this Transport Node from the NSX UI or API failed, and this is the last
resort.
3. If this is an ESXi host:
a. The host must be in maintenance mode.
b. All resources attached to NSXPGs must be moved out.
If the above conditions for ESXi hosts are not met, the command WILL fail.
4. For command progress check /scratch/log/[Link] on ESXi host or /var/log/
[Link] on non-ESXi host.
Are you sure you want to remove NSX on this host? (yes/no)

Important After running the del nsx command, do not use the Resolve functionality in
the NSX Manager UI to reprepare the host that is in Disconnected state . If you use the
Resolve functionality, the host might go into Degraded state.

f Select each host and click Remove NSX.


g In the popup window, select Force Delete and begin uninstallation.

h On the ESXi host, verify that system message displayed is Terminated. This message
indicates that NSX is completely removed from the host.

n All existing host switches are removed, transport node is detached from NSX
Manager, and NSX VIBs are removed. If any NSX VIBs remain on the host, contact
VMware support.

n On a host that is part of a vSphere Lifecycle Manager cluster, after you perform del nsx and
Remove NSX from NSX Manager, the host status in vCenter Server is compliant
with the cluster image. The system displays: All hosts in the cluster are
compliant.

Uninstall NSX from a Physical Host


You can uninstall NSX from a physical host.

You can uninstall NSX from a physical host either from the NSX Manager or from the Windows
Powershell terminal on Windows hosts or the CLI terminal on Linux hosts.

Prerequisites

If you are uninstalling NSX from a standalone physical host, verify the following settings:

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select System > Fabric > Hosts.

3 Select the Standalone tab.

4 Select the physical host and click Delete. In the confirmation dialog box, Uninstall NSX
Components is selected by default. Deselect Force Delete and click Delete.

The NSX software is removed from the host.

5 If the uninstall fails, select the host and click Delete again. In the confirmation dialog box,
check Force Delete and click Delete.

The system deletes the host transport node from the management plane, but the host might
still have NSX software installed.

Go to the next step only if NSX cannot be uninstalled from the NSX Manager. In the following
steps, you will remove NSX from the CLI terminal of physical hosts.


6 Before deleting NSX from the CLI terminal, verify whether NSX packages are removed from
the host.

On Windows Powershell, run Get-ItemProperty


HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* | Select-
Object DisplayName, DisplayVersion, Publisher, InstallDate | Format-Table
-AutoSize | findstr NSX

On the Ubuntu CLI terminal, run apt list --installed | grep nsx

On RHEL/SLES/CentOS CLI terminal, run rpm -qa | grep nsx

7 (Linux physical hosts) If the host goes into a failed state and NSX install bundles cannot be
removed, run the del nsx command to forcefully remove NSX from the host.

a Log into the host's command-line interface as root.

b Run nsxcli -c del nsx.


c Read the warning message. Enter Yes if you want to go ahead with NSX uninstallation.

Carefully read the requirements and limitations of this command:


1. Read NSX documentation for 'Remove a Host from NSX or Uninstall NSX Completely'.
2. Deletion of this Transport Node from the NSX UI or API failed, and this is the last resort.
3. If this is an ESXi host:
a. The host must be in maintenance mode.
b. All resources attached to NSXPGs must be moved out.
If the above conditions for ESXi hosts are not met, the command WILL fail.
4. If this is a Linux host:
a. If KVM is managing VM tenants then shut them down before running this command.
b. This command should be run from the host console and may fail if run from an SSH client or any other network based shell client.
c. The 'nsxcli -c del nsx' form of this command is not supported.
5. If this is a Windows host:
Note: This will completely remove all NSX-T instances (image and config) from the host.
6. For command progress check /scratch/log/[Link] on ESXi host or /var/log/[Link] on non-ESXi host.
Are you sure you want to remove NSX on this host? (yes/no)

Important After running the del nsx command, do not use the Resolve functionality in
the NSX Manager UI to reprepare the host that is in Disconnected state . If you use the
Resolve functionality, the host might go into Degraded state.

d On the physical host, verify that system message displayed is Terminated. This message
indicates that NSX is completely removed from the host including the application
interface that was created for the physical host.

After running del nsx, NSX packages and the application interface are removed from the
host.


8 On Windows physical hosts, if the host goes into a failed state and NSX install bundles cannot
be removed, follow these steps to forcefully remove NSX from the host.

a Log into Windows powershell interface as one of the administrators.

b Go to the NSX directory.

PS C:\program files\VMware\nsx\nsx-cli> .\[Link] -c del nsx

c Read the warning message. Enter Yes if you want to go ahead with NSX uninstallation.

Carefully read the requirements and limitations of this command:


1. Read NSX documentation for 'Remove a Host from NSX or Uninstall NSX Completely'.
2. Deletion of this Transport Node from the NSX UI or API failed, and this is the last resort.
3. If this is an ESXi host:
a. The host must be in maintenance mode.
b. All resources attached to NSXPGs must be moved out.
If the above conditions for ESXi hosts are not met, the command WILL fail.
4. If this is a Linux host:
a. If KVM is managing VM tenants then shut them down before running this command.
b. This command should be run from the host console and may fail if run from an SSH client or any other network based shell client.
c. The 'nsxcli -c del nsx' form of this command is not supported.
5. If this is a Windows host:
Note: This will completely remove all NSX-T instances (image and config) from the host.
6. For command progress check /scratch/log/[Link] on ESXi host or /var/log/[Link] on non-ESXi host.
Are you sure you want to remove NSX on this host? (yes/no)

Important After running the del nsx command, do not use the Resolve functionality in
the NSX Manager UI to reprepare the host that is in Disconnected state . If you use the
Resolve functionality, the host might go into Degraded state.

After running del nsx, the following actions are performed:

n Application Interface on Windows server is uninstalled.

n Transport Node configuration is deleted.

n NSX packages are deleted.


Results

When the NSX software is successfully removed, no packages are listed. If any NSX software
packages remain on the host, contact VMware support.

What to do next

Remove the segment port that was connected to the application interface of the physical host.

Triggering Uninstallation from the vSphere Web Client


In the vSphere Web Client, if you move a host from a cluster prepared with a transport node
profile to either another cluster, outside of the cluster as a standalone host, or outside of the data
center, then NSX is uninstalled on the host that is moved. Such an uninstallation is not triggered
when a host that is individually prepared with a transport node configuration is moved.

NSX Uninstallation Scenarios from the vSphere Web Client


Scenario 1
Action: In the VMware vCenter, move an ESXi host in cluster-1 (prepared by applying a transport node profile) to the data center as a standalone host (not to another cluster).
Steps/Description:
1 From the vSphere Web Client, log in to the VMware vCenter.
2 Move the host in maintenance mode.
3 Move the host from cluster-1 that is prepared with a transport node profile out of the cluster as a standalone managed host. NSX triggers uninstallation of the configuration and NSX VIBs.
4 During uninstallation, the transport node is deleted and NSX VIBs are uninstalled.
5 In the NSX UI, the uninstalled host is displayed under Other Hosts on the same VMware vCenter.
Result: The host is turned into a standalone managed host, which is displayed under 'Other Hosts'. NSX is uninstalled on the host. If the host is under Configuration Mismatch state, then the host remains in that state after it is moved.

Scenario 2
Action: In the VMware vCenter, move a prepared host from cluster-1 with transport node profile-1 to cluster-2 with transport node profile-2.
Steps/Description:
1 From the vSphere Web Client, log in to the VMware vCenter.
2 Move the host in maintenance mode.
3 As cluster-1 is prepared with transport node profile-1, when a host from cluster-1 is moved to cluster-2, then transport node profile-2 is applied to the host. Only the new transport node profile-2 configuration is applied to the host, whereas NSX VIBs are not uninstalled from the host.
4 If the NSX host is in a failed configuration state, then it is not configured after it moves to cluster-2. The host remains in the failed state.
Result: The host is moved from cluster-1 to cluster-2. A successfully configured host is applied with transport node profile-2. If the host is in the failed state, ensure that the host is successfully configured in NSX.

Scenario 3
Action: In the VMware vCenter, move a host that is in the Configuration Mismatch state (NSX state) from cluster-1 with transport node profile-1 to cluster-2 with transport node profile-2.
Steps/Description:
1 From the vSphere Web Client, log in to the VMware vCenter.
2 Move the host in maintenance mode.
3 As the host is in the Configuration Mismatch state, even though it is moved to cluster-2, transport node profile-2 is not applied to it.
Result: The host remains in Configuration Mismatch state.

Scenario 4
Action: In the VMware vCenter, move a host from cluster-1 with transport node profile-1 to cluster-3, which has no transport node profile applied.
Steps/Description:
1 From the vSphere Web Client, log in to the VMware vCenter.
2 Move the host in maintenance mode.
3 Move the host from cluster-1 to cluster-3.
Result: If the NSX host is successfully configured, then NSX uninstallation begins on the host. If the NSX host is in the failed configuration state, then after it moves to cluster-3 the node remains in the failed state.


Scenario 5
Action: In the VMware vCenter, delete a host that is in the Configuration Mismatch state (NSX state) because the host has two different configurations applied to it - transport node configuration and transport node profile configurations.
Steps/Description:
1 From the vSphere Web Client, log in to the VMware vCenter.
2 Move the host in maintenance mode.
3 Remove the host from the vCenter Server inventory.
Result: Uninstallation of NSX does not begin because the node configuration was in the Configuration Mismatch state. To ensure that uninstallation begins, ensure that the transport node is configured with a single configuration, either at the host-level or at the cluster-level. After uninstallation, go to the NSX Manager UI and verify that the managed host is moved out of the cluster to become a standalone unmanaged host.

Scenario 6
Action: In the VMware vCenter, move a prepared host from cluster-1 with transport node profile-1 applied to:
n Another cluster without any transport node profile applied
n Data center
n Outside of the data center
Steps/Description:
1 From the vSphere Web Client, log in to the VMware vCenter.
2 Move the host in maintenance mode.
3 Perform one of the actions:
n Move the host to another cluster without any transport node profile applied.
n Move the host as a standalone host in the data center.
n Move the host to outside of the data center.
Result: NSX is uninstalled from the host.

Uninstall NSX from a vSphere Lifecycle Manager cluster through NSX Manager
You can trigger uninstallation of NSX on a host that is part of a vSphere Lifecycle Manager cluster
through NSX Manager.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.

2 Select the cluster and click Remove NSX.

vSphere Lifecycle Manager removes the NSX solution applied to the cluster in VMware
vCenter.

Important You cannot detach a transport node profile on a vSphere Lifecycle Manager-
enabled cluster. You must remove NSX on the cluster.


3 If vSphere Lifecycle Manager fails to remove the NSX solution applied to the cluster in
vCenter Server, or fails to remove NSX on one or more transport nodes, then vSphere Lifecycle
Manager marks the cluster and/or transport nodes in the Uninstall Failed state.

a Select the cluster and click Remove NSX on the cluster to retry uninstallation.

Note vSphere Lifecycle Manager puts DPU-backed hosts in maintenance mode and
reboots them as part of host remediation. If vSphere Lifecycle Manager fails to place the host in
maintenance mode, you need to manually power off all VMs and then retry NSX installation.



Troubleshooting Installation
Issues 17
A list of issues related to NSX installation and configuration.

Issue: Installation Fails Due to Insufficient Space in Bootbank on ESXi host.
Solution: See [Link]

Read the following topics next:

n Troubleshooting Installation to check for Basic Infrastructure Services

n Troubleshoot OVA Deployment and Appliance Bringup

n Troubleshoot NSX Manager Cluster

n Troubleshooting Host Transport Nodes

n Troubleshooting NSX Edge Nodes

Troubleshooting Installation to check for Basic Infrastructure Services
This section provides information about troubleshooting installation issues.

Basic Infrastructure Services


The following services must be running on the appliances and hypervisors, also on vCenter
Server if it is used as a compute manager.

n NTP

n DNS

Make sure that firewall is not blocking traffic between NSX components and hypervisors. Make
sure that the required ports are open between the components.

To flush the DNS cache on the NSX Manager, SSH as root to the manager and run the following
command:

root@nsx-mgr-01:~# /etc/init.d/resolvconf restart


[ ok ] Restarting resolvconf (via systemctl): [Link].


You can then check the DNS configuration file.

root@nsx-mgr-01:~# cat /etc/[Link]


# Dynamic [Link](5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver [Link]
search [Link]

Log in as the root user and run su admin to launch nsxcli on NSX Manager. For the admin user,
nsxcli is the default prompt.

Check DNS servers using following nsxcli command:

get name-servers

[Link]
[Link]
[Link]

Checking Communication from Host to Controller and Manager


On an ESXi host using NSX CLI commands:

[Link]> get managers


- [Link] Connected

[Link]> get controllers


Controller IP Port SSL Status Is Physical Master Session State
Controller FQDN
[Link] 1235 enabled connected true
up NA

On an ESXi host using host CLI commands:

[root@esxi-01:~] esxcli network ip connection list | grep 1235


tcp 0 0 [Link]:42271 [Link]:1235
ESTABLISHED 67702 newreno nsx-proxy
[root@esxi-01:~]
[root@esxi-01:~] esxcli network ip connection list | grep 5671
tcp 0 0 [Link]:11721 [Link]:5671 ESTABLISHED
2103688 newreno mpa
tcp 0 0 [Link]:30977 [Link]:5671 ESTABLISHED
2103688 newreno mpa

Host Registration Failure


If NSX uses the wrong IP address, host registration will fail. This can happen when a host has
multiple IP addresses. Trying to delete the transport node leaves it in the Orphaned state. To
resolve the issue:

n Go to Fabric > Nodes > Hosts, edit the host and remove all IP addresses except the
management one.


n Click on the errors and select Resolve.

Configuration Error when Deploying an Edge VM


After deploying an Edge VM, NSX Manager shows the VM's status as configuration error. The
manager log has a message similar to the following:

nsx-manager NSX - FABRIC [nsx@6876 comp="nsx-manager" errorCode="MP16027" subcomp="manager"]


Edge 758ad396-0754-11e8-877e-005056abf715 is not ready for configuration error occurred,
error detail is NSX Edge configuration has failed. The host does not support required cpu
features: ['aes'].

Restarting the edge datapath service and then the VM should resolve the issue.

Force Removing a Transport Node


You can remove a transport node that is stuck in the Orphaned state by making the following API
call:

DELETE https://<NSX Manager>/api/v1/transport-nodes/<TN ID>?force=true

NSX Manager will not do any validations as to whether you have any active VMs running on the
host. You are responsible for deleting the N-VDS and VIBs. If you have the node added through
Compute Manager, delete the Compute Manager first and then delete the node. The transport
node will be deleted as well.
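A minimal way to issue this call, assuming HTTP basic authentication with the admin account and that <nsx-manager>, <password>, and <TN-ID> are placeholders for your environment, is the following curl sketch.

# Force-delete a transport node that is stuck in the Orphaned state (-k skips certificate validation for a self-signed certificate).
curl -k -u 'admin:<password>' -X DELETE \
  "https://<nsx-manager>/api/v1/transport-nodes/<TN-ID>?force=true"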

Troubleshoot OVA Deployment and Appliance Bringup

Troubleshooting OVA Failures


Note During deployment, if you entered incorrect configuration details, delete the appliance and
redeploy with correct configuration.

n Verify that the datastore chosen for deployment is mounted on all the hosts that are
members of a cluster. Redeploy and choose ESXi host instead of VMware vCenter to bypass
VMware vCenter cluster related checks.

n If a proxy is enabled on VMware vCenter, edit the file /etc/sysconfig/proxy and add the
line .*.domainname to bypass the proxy for ESXi hosts. See VMware KB 81565 ([Link]).

n If deployment of the appliance through the OVF tool gives the error ovf descriptor not found, view
the file contents in a terminal with cat -A <filepath/filename> and remove hidden formatting
characters. Then try again.


Troubleshooting Issues Related to Bringing Up the Appliance


SSH to NSX Manager CLI as admin and run following commands to troubleshoot.

n Run get configuration and verify hostname/name-server/search-domain/ntp settings


are correct.

n Run get services and verify all required services are running (other than nsx-message-
bus, snmp, migration-coordinator). If these services are not running, try restarting the
service by running restart service <service-name>.

n Run get cluster status and verify all manager cluster components are up. If any
component is down, try restarting the service associated to the component by running
restart service <associated-component-service-name>.

n Run get core-dumps to verify no core dumps generated in /var/log/core or /image/


core. If you find any core dumps, contact VMware Support.

n Run get filesystem-stats to verify that no disk partition is full, especially those partitions
that are consumed by NSX.

n Alternatively, you can run API commands to know the node and service status.

GET api/v1/node/status

GET api/v1/node/services
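If you prefer to run these checks from a remote machine instead of the appliance CLI, the same information is available over the REST API. The following curl sketch assumes HTTP basic authentication with the admin account; the manager address and password are placeholders.

# Node status and the list of node services, retrieved over the API.
curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/node/status"
curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/node/services"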

Troubleshoot NSX Manager Cluster


You must configure a three-node NSX Manager cluster and only one of the nodes can fail at any
given time for the cluster to self-heal.

For an NSX Manager cluster to self-recover from a node failure, the majority of nodes must not
have failed (the number of active nodes must be greater than the number of failed nodes); otherwise the
entire cluster becomes unavailable. This means that all write operations are blocked and all clustering-related
API/CLI calls fail. However, local API/CLI commands continue to work.

NSX Manager logs are written to /var/log/syslog.

Admin CLI
To activate the admin CLI, log in to an NSX Manager as admin. If you log in as root, you
can run a single admin CLI command directly from the root shell using the su admin -c <cmd-to-run>
command. You can switch to the interactive mode of the admin CLI using the su admin command and
then run admin commands.

Manually deployed NSX Manager failed to join the NSX Manager Cluster
An NSX Manager that is manually deployed in VMware vCenter using an OVF or OVA file fails to join
the NSX Manager cluster.


Cause

You used an incorrect thumbprint value to join the new NSX Manager node to the existing NSX
Manager cluster. If NSX VIP is configured, then use the cluster thumbprint command. If external
VIP or no VIP is configured, use the API thumbprint command. Try registering the NSX Manager
node with the cluster using correct thumbprint.

Solution

u You can join the manually deployed NSX Manager node to the cluster by running one of the
following command sequences.

n Run the get certificate cluster thumbprint command and then run the join
management-plane <cluster-vip> username <manager-username> password
<manager-pwd> thumbprint <cluster-thumbprint> command. The procedure to join
the manager node with the cluster is complete. Alternatively, you can use the NSX
Manager thumbprint, as shown in the following option, to join the NSX Manager with the
cluster.

n Or run the get certificate api thumbprint command and then run the join
management-plane <manager-ip> username <manager-username> password
<manager-pwd> thumbprint <manager-thumbprint> command.
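As an illustration of the second option above, the sequence run from the NSX CLI might look like the following sketch; the IP address, password, and thumbprint values are placeholders for your environment.

# On an existing manager node, read the API certificate thumbprint.
get certificate api thumbprint

# On the new manager node, join it to the cluster using that thumbprint.
join management-plane <manager-ip> username admin password <manager-pwd> thumbprint <manager-thumbprint>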

NSX Manager Cluster status Degraded As Datastore-related Components Are Down
NSX Manager status is degraded because the DATASTORE component and/or the CORFU_NONCONFIG
component is down on one or more manager nodes that are members of the NSX Manager cluster.

Solution

1 SSH to NSX Manager CLI terminal as an admin.

2 To identify the manager node with components down, run get cluster status.

3 Verify underlying datastore is available with recommended disk access latency. To get and fix
the disk access latency numbers, see [Link]

Note Datastore outage can cause NSX Manager appliance VMs to go into read-only mode.
Linux does not provide a usable utility to recover from this error. Follow the KB, Recover
NSX Manager upon storage outage, if appliance reboot does not fix the issue. Verify backend
datastore has fully recovered before rebooting the affected NSX Manager VM.

4 To verify that no disk partition consumed by NSX is full or close to full, run get filesystem-stats.

Note Datastore-specific logs can be found at /var/log/corfu/[Link] and /var/log/corfu/[Link].

5 Clean up disk space and / or reboot all NSX Manager nodes (after storage issues are
resolved) to remove the read-only mode.


6 If components continue to be down, contact VMware Support.

Important With the datastore down, the NSX Manager continues to participate in the cluster
if its network connectivity is up. This state might cause the management plane and control
plane to become unavailable. If the datastore issue cannot be resolved, replace the impacted
NSX Manager by powering it off (as long as cluster quorum is
maintained by having the majority of nodes up).

7 Remove NSX Manager node from existing NSX Manager Cluster. On the NSX Manager, run
detach node <node-id>.

8 Verify the problematic node is not a member of the cluster anymore by running the get
cluster status.

9 Deploy a new NSX Manager node and join it to the existing Manager Cluster.

Manager and HTTPS Services Are Down Frequently Due to Incorrect NAT Configuration
If you incorrectly configured NAT, the Manager and HTTPS service might go down frequently.

Solution

1 Do not use the 0.0.0.0/0, x.0.0.0/8, or x.x.0.0/16 subnet ranges in the NAT rules for the SNAT
Translated IP section (SNAT rule) and for the Destination Network section (DNAT rule).

2 Fix the incorrect NAT configuration.

3 Configure NAT rule where IP address range and count does not exceed 256 for SNAT
Translated IP section (SNAT rule) and for Destination Network section (DNAT rule).

NSX Manager is Slow To Load And Tasks Fail


NSX Manager is slow to load and tasks fail with message server is overloaded or too many
requests.

Cause

This issue occurs upon hitting the NSX Manager incoming API rate limit. The solution is to either
limit the number of incoming API requests to NSX Manager or modify the default values for the Client
API rate limit, Client API concurrency limit, or Global API concurrency limit from the API or CLI.

Solution

1 To view current default values, run get cluster api-service as admin or GET
API /api/v1/cluster/api-service.

2 To set a new API rate value, run set cluster api-service client-api-rate-limit or
PUT /api/v1/cluster/api-service with updated values.


3 To view logs of incoming API requests, go to the /var/log/proxy directory and review the reverse
proxy log.

Note Try to avoid configuring the API rate limits. Instead, design your API client to gracefully
deal with situations where limits are exceeded.
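To inspect or change these limits over the REST API rather than the CLI, a sketch of the two calls named in the steps above is shown below. It assumes HTTP basic authentication with the admin account; <nsx-manager>, <password>, and the file api-service.json (containing the edited JSON body returned by the GET call) are placeholders.

# Read the current API rate and concurrency limits.
curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/cluster/api-service"

# Write back the edited configuration.
curl -k -u 'admin:<password>' -X PUT -H "Content-Type: application/json" \
  -d @api-service.json "https://<nsx-manager>/api/v1/cluster/api-service"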

NSX Manager UI is not loading even when NSX Clustering is up


NSX Manager is slow to load and tasks fail with message server is overloaded or too many
requests.

Solution

1 SSH to one of the NSX Manager that is a member of the cluster.

2 Run admin CLI get cluster status.

3 If cluster status is stable, run get services or get service <service-name> and verify
that the following services are running: http, manager, search, ui-service.

4 To restart or start the stopped service, run restart | start service <service-name>.

NSX Manager cluster is DOWN or UNAVAILABLE if all nodes in the NSX Manager cluster are down or the majority of nodes are down
NSX Manager is down or unavailable if the majority of the nodes in the cluster are down.

Problem

The NSX Manager UI fails to load with the following error: Some appliance components are
not functioning properly. Component health: POLICY:UNKNOWN, MANAGER:UNKNOWN,
SEARCH:UNKNOWN, NO and clustering-related commands fail using the CLI and API.

Solution

1 SSH to each of the affected NSX Manager nodes and run following commands:

a Run get filesystem-stats and verify that /config and /image are not 100% full.

b Run get core-dumps to verify that no core dumps have been generated on NSX Manager.

c Verify there was no datastore outage. See NSX Manager Cluster status Degraded As
Datastore-related Components Are Down.

d Check logs for out-of-memory errors. See /var/log/proton/proton-tomcat-


[Link]


2 To restore clustering and UI, any two nodes in a three node cluster must be up. If you are not
able to bring any failed node back up, but if there is a healthy node available, then do one of
the following steps to restore clustering:

n Deploy a new manager node (as 4th member node), join the existing cluster and then
detach one of the failed nodes using CLI cmd detach node <node-uuid> or API
POST /api/v1/cluster/<node-uuid>?action=remove_node. The commands should
be executed from one of the healthy nodes. Alternatively, you can follow the next
bulleted point to deactivate the cluster.

n (Optional) Run the deactivate cluster command on active node such that you end
up with single node cluster. Now continue to add the new additional nodes to make a
3-member NSX Manager cluster.

Note NSX Manager nodes that are removed from the cluster should be powered off and
deleted.
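
The following is a minimal sketch of the first option, run from a healthy manager node after the
replacement node has joined the cluster; the UUID is a placeholder, and you should confirm the
detach node syntax against the NSX CLI reference for your release.

nsx-manager> get cluster status                  <--- identify the UUID of the failed node
nsx-manager> detach node <failed-node-uuid>
nsx-manager> get cluster status                  <--- confirm the cluster is stable before proceeding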

Troubleshoot NSX Appliance or NSX Clustering Issues Using APIs


Use APIs to troubleshoot NSX appliance and clustering issues.

Verify Cluster Status


GET [Link]

GET [Link]

GET /api/v1/reverse-proxy/node/health

Verify CPU Usage


GET /api/v1/systemhealth/appliances/process/status

GET [Link]process/status

Verify Network Latency


GET [Link]

GET [Link]latency/status
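
As a sketch, these checks can also be scripted with curl against the NSX Manager API; the
manager FQDN and credentials are placeholders, and only the endpoints spelled out above are shown.

curl -k -u 'admin:<password>' https://<nsx-manager-fqdn>/api/v1/cluster/status
curl -k -u 'admin:<password>' https://<nsx-manager-fqdn>/api/v1/reverse-proxy/node/health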

Troubleshooting Host Transport Nodes

Host Fails to Install as Transport Node


Problem

When many hosts are prepared simultaneously (in a non-vLCM way), some hosts might fail in the
internal step of downloading the NSX LCP bundle to the host. The error seen is Failed to download NSX
components on host.

Solution

1 Check the host connectivity and ensure that the "/tmp" folder has enough space.

2 If both are fine, check whether many hosts are being prepared at the same time. Wait for the
host preparation of the other hosts to complete.

3 Follow the usual error resolution steps from the UI or API to retrigger the operation on the
failed host.

Accessing the NSX CLI Terminal

After logging into the root shell of an ESXi host, you can do one of the following:

n Use a singleton NSX CLI directly from the root shell by running the nsxcli -c <cmd-to-
run> command.

n Go into interactive mode by running the nsxcli command.

The NSX Syslog directory on the ESXi host is /var/log/nsx-syslog.
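
For example, a quick check from the ESXi root shell might look like the following; the prompt
and the commands available depend on the NSX version installed on the host.

[root@esxi:~] nsxcli -c "get managers"     <--- run a single NSX CLI command
[root@esxi:~] nsxcli                       <--- or enter interactive mode
esxi> get managers
esxi> exit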

Transport Node Installation Failure Due To Pending Reboot

Problem

Transport Node install task fails due to pending reboot.

Cause

This issue occurs if the host has a pending reboot from a previous installation or upgrade of VIBs on the
host.

Solution

u Run the following CLI command on the host as root to verify:

vim-cmd hostsvc/hostsummary | grep -i reboot
requireSecureBoot = <unset>,
rebootRequired = true

If rebootRequired is set to true, reboot the host and then try installation of NSX modules
again on the host.

Unable to Reach Transport Node Due to Incorrect Credentials As Host is in Orphaned State


Problem

n Host is not reachable. Cannot complete login due to an incorrect username or password.

n The Transport Node Apply Task fails with error Node already exists.

n Host is in orphaned state.

Cause

This issue occurs due to a race condition under heavy traffic load. Run the API (deprecated)
GET /api/v1/transport-nodes/<TN-UUID>/status
or GET api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/
host-transport-nodes/<host-transport-node-id>/state | status, where the default value
for enforcementpoint-id and site-id is default. The output shows the transport node status as
unknown and the node deployment status as Failed.

All these cases occur for a host transport node that was not cleaned up correctly when the NSX
removal task was initiated, and that is therefore still registered with NSX Manager.

In this case, the GET transport node API and the GET transport node status API fail, but the GET
transport node state API works and shows the failure message Failed to uninstall the
software on host....

Solution

u To fix the stale entry, you must forcefully remove NSX from the host and also
run the following API to delete stale host entries in the setup (a curl sketch follows this list).

a (NSX Manager UI) On the Hosts page, select the Force Delete option and click Remove NSX.

b (API) To forcefully delete NSX, run the API [Link]nodes/<Transport-Node-UUID>?force=true&unprepare_host=false.

c (API) To remove stale entries, run the API [Link]transport-nodes?action=clean_stale_entries.
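
A minimal curl sketch of steps b and c; the manager FQDN, credentials, UUID, and the DELETE and
POST methods shown here are assumptions that you should verify against the NSX API documentation
for your release.

curl -k -u 'admin:<password>' -X DELETE \
  'https://<nsx-manager-fqdn>/api/v1/transport-nodes/<Transport-Node-UUID>?force=true&unprepare_host=false'
curl -k -u 'admin:<password>' -X POST \
  'https://<nsx-manager-fqdn>/api/v1/transport-nodes?action=clean_stale_entries'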

Transport Node Profile Fails to Prepare Transport Nodes Due to Stale Objects

Problem

Transport node profile (TNP) fails to prepare transport nodes due to stale objects in TNP.

Cause

NSX displays the following error when TNP fails to prepare hosts into transport nodes: The
requested object <uuid> could not be found. Object identifiers are case
sensitive. Please make the appropriate changes and reapply the configuration.

This issue can occur if any configuration specified in the TNP does not exist in NSX Manager or
VMware vCenter.


Solution

1 Search the UUID in NSX Manager to find out the object being referred to in the error.

2 Edit the Transport Node Profile to remove the referenced stale object (which could be the TNP
itself, an IP pool, a profile, and so on) and then retrigger the host configuration.

Transport Node Creation is in Partial Success State Due to Logical Switch or Segment Full-sync Realization Error

Problem

Transport Node has partial success status with error LogicalSwitch full-sync
realization query skipped.

Cause

This issue can occur if the uplink profile associated with the host has a teaming policy defined with
no active uplinks, if the host switch uplinks are down, or if VTEP creation failed.

Run the transport node state API to view more information about the failure by using one of the
following API commands:

n (deprecated) GET api/v1/transport-nodes/<uuid>/state

n GET api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/
host-transport-nodes/<host-transport-node-id>/state, where default values for
enforcementpoint-id and site-id is default.

Solution

1 Edit the uplink profile associated with the transport node and ensure active uplinks are
selected for each teaming profile defined.

2 Verify the PNIC link status is up by running a CLI command on the host: esxcli network nic
list or esxcli network nic get -n <vmnic-name>.

3 If the uplink profile is active-active, verify the correct number of VTEPs got created (rather than just
one).

Run net-vdl2 -l | more to verify the VTEP count is correct, each VTEP is assigned a valid IP
address and gateway, and the state of each VTEP interface is UP.

Transport Node status is degraded when its interface is down

Problem

Transport Node status is degraded when its interface is down.


Cause

The NSX Manager declares a transport node to be in a degraded state if any of its interfaces that
are used by the host switch is down.

Solution

1 To check the status of interfaces, run esxcfg-vswitch -l (see the sketch after this list).

2 To check status of uplinks, run esxcli network nic list.

3 Fix the PNIC that is down and in use by the VDS.

4 Alternatively, remove or replace the PNIC in VDS uplink configuration.
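
A minimal sketch of steps 1 and 2 from the ESXi shell; the vmnic name is a placeholder, and the
annotations only describe what to look for.

[root@esxi:~] esxcfg-vswitch -l                  <--- note which vmnics back the VDS uplinks
[root@esxi:~] esxcli network nic list            <--- check the Link Status column for those vmnics
[root@esxi:~] esxcli network nic get -n vmnic1   <--- inspect a specific down uplink in detail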

Transport Node status is Disconnected or Unknown

Problem

An ESXi host prepared as a transport node goes into the Unknown or Disconnected state due
to a lost connection to NSX Manager. NSX displays the following error: Heart beating between
NSX management node and host <uuid> is down.

Cause

Host infrastructure services being down due to full ESXi disks or a memory leak may result in
this condition.

If ESXi disks are full or if there is a memory leak, certain processes can crash
and cause the transport node to go into the Disconnected state. When you run the admin CLI get
managers, NSX might return an active manager node if the crash occurred after successful manager
registration. When you run the admin CLI get controllers, NSX gives the error Failed to get
controller list.

Solution

1 Run the admin CLI get core-dumps to see if any cores got generated (in /var/core or
/image/core) due to a service crash.

2 If a core dump is present, run esxtop to see which NSX process is consuming too much
memory, and df -h to verify that the disk partitions used by NSX are not full or close to full.

3 Run /etc/init.d/nsx-proxy status or /etc/init.d/nsx-nestdb status to get the status of
infrastructure services on the host.

4 Clean up the disk space, then start any stopped infrastructure services on the host by issuing
command /etc/init.d/<service-name> start (as a temporary workaround).

5 Open a support case with VMware if you see any cores.


Transport Node is Down as Agent Service is Down

Problem

The agent service is Down.

Cause

Solution

1 To view the host agent status in UI, go to Host > Transport Node > View Details > Monitor
tab.

2 To view the status of the three host agent services (nsx-cfgagent, nsx-opsagent, nsx-
nestdb) in the CLI, run /etc/init.d/<service-name> status (see the sketch after this list).

3 To view the status of agents in API, call these APIs:

(deprecated) GET api/v1/transport-nodes/<uuid>/status


GET api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/
host-transport-nodes/<host-transport-node-id>/status
The default value for enforcementpoint-id and site-id is default.

4 Restart service that is down by running the following CLI command: /etc/init.d/
<service-name> start.
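
A minimal sketch of steps 2 and 4 from the ESXi shell, using nsx-opsagent as an illustrative
service name:

[root@esxi:~] /etc/init.d/nsx-opsagent status    <--- repeat for nsx-cfgagent and nsx-nestdb
[root@esxi:~] /etc/init.d/nsx-opsagent start     <--- only if the status command reports the service as stopped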

Transport Node Connectivity to Controller is Down

Problem

The issue is seen when a transport node's connectivity to NSX Manager is Up but its connectivity
to the controller is Down. When you run get managers, NSX returns an active manager node,
while get controllers does not return any active controller for this transport node with the
status Connected and the session state Up.

Cause

Solution

1 Verify the transport node is not in NSX Maintenance Mode by running the admin CLI get maintenance-mode.

2 Call one of the following APIs:

a (Deprecated) GET /api/v1/transport-nodes/<uuid>/status

b GET api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-
id>/host-transport-nodes/<host-transport-node-id>/status, where the default
value for <site-id> and <enforcementpoint-id> is default.


3 Verify that the FQDN property (used by transport nodes to talk with NSX Manager or the controller)
is set by running the API GET /api/v1/configs/management and viewing the value of publish_fqdns.

If FQDN is set, verify the controller FQDN is reachable and that the FQDN value is being used by
the transport node to talk to the controller: first run an ICMP ping to the controller FQDN, and then
run the admin CLI get controllers to verify the controller FQDN value is populated correctly.

4 Verify the host agent services are running by following the host agent troubleshooting steps
mentioned above.

5 Verify [Link] file exists and contains Host transport node as its member: /etc/
vmware/nsx/[Link].

6 If host is in NSX Maintenance Mode, run admin cli set maintenance-mode false or API
to take host out of NSX maintenance mode: POST /api/v1/transport-nodes/<node-id>?
action=exit_maintenance_mode.

7 If FQDN is set and ICMP ping works for the controller FQDN, try unsetting and setting the
FQDN property again by running the API PUT /api/v1/configs/management with the value of
publish_fqdns set to false, and then run the API again with the value true (see the sketch after this list).

8 Start the agent services on the host (if any are stopped) by running the CLI command
/etc/init.d/<service-name> start.

9 If [Link] file has incorrect data, restart nsx-proxy service on the host to trigger
re-creation of file.
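
A curl sketch of steps 3 and 7 against the NSX Manager API; the manager FQDN and credentials are
placeholders, and the PUT body must contain the full payload returned by the GET call (including the
current _revision value), which is an assumption to confirm against the NSX API documentation.

curl -k -u 'admin:<password>' https://<nsx-manager-fqdn>/api/v1/configs/management
    <--- note the current publish_fqdns value and the _revision field
curl -k -u 'admin:<password>' -X PUT -H 'Content-Type: application/json' \
  -d '{"publish_fqdns": false, "_revision": <revision-from-GET>}' \
  https://<nsx-manager-fqdn>/api/v1/configs/management
    <--- then repeat the PUT with "publish_fqdns": true to set the property again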

Unable to power on VMs or vMotion VMs on Transport Node

Problem

The Virtual Distributed L2 (VDL2) component is down. This component must be up for NSX to
successfully complete operations on VMs that are attached to a segment on the transport node.

Cause

NSX displays the following error message: Currently connected network interface
'Network adapter 1' uses network 'VM_NETWORK' (vdl2 down).

1 SSH to the host and run the following command to verify the status of the vdl2 component:
net-dvs | grep "component.vdl2"

[Link].vdl2 = down , propType = RUNTIME

2 Run net-vdl2 -l to verify each VTEP interface is assigned a valid IP address and gateway and
the status of each interface is UP.

3 Run esxcfg-vswitch -l to verify that an MTU of at least 1600 bytes is set up on the
VDS switch used by NSX and that the uplinks assigned to VTEP interfaces are UP.


4 To view host switch information, run one of the following transport node state APIs:

a (deprecated) GET api/v1/transport-nodes/<uuid>/state

b GET api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-
id>/host-transport-nodes/<host-transport-node-id>/state, where the default
value for enforcementpoint-id and site-id is default.

Solution

1 Ensure that the configuration details entered in these fields are correct:

n VTEP IP Pool

n VTEP VLAN

n VDS MTU

n Status of assigned PNIC (must be Up)

2 If VTEP pool is configured using DHCP, verify that DHCP server is assigning valid IP addresses
to the VTEP pool.

Transport Node Tunnels Down

Problem

ESXi hosts that are prepared as transport nodes have their tunnel status as Down.

Cause

Solution

1 If all tunnels are down, verify the existence of a valid VTEP on the host by running the net-vdl2
-l | more command.

2 Verify these configuration details are correct: the VTEP state is UP, the VTEP count is correct, and
valid VTEP addresses are assigned.

3 To find VTEP BFD tunnel sessions that are down on the ESXi transport node, run the CLI
command nsxdp-cli bfd sessions list | grep down.

4 For sessions that are down, validate IP connectivity between the TEPs using ICMP ping.

Note The TEP interfaces on ESXi are instantiated on the vxlan netstack and on the tunnel
VRF for NSX Edge nodes. Therefore, initiate the ping from within the vxlan netstack on ESXi
and if you have more than one TEP, be sure to specify the source IP address or interface
used for the ping.

5 Run ping ++netstack=vxlan <remote-address> -I vmk10.

6 If ping fails, verify underlay connectivity of the network.


7 If ping is successful, verify Fabric MTU is configured correctly on the underlay network and
within NSX. Use ICMP with the don’t fragment bit to test proper delivery for large packets.

8 Run ping ++netstack=vxlan <remote-vtep-ip-address> -I vmk10 -d -s 1600.

Installation Fails Due to Insufficient Space in Bootbank on ESXi Host


NSX installation might fail if there is insufficient space in the bootbank or in the alt-bootbank on
an ESXi host.

Problem

On the ESXi host, you might see a similar log ([Link]) message:

20**-**-**T[Link]Z esxupdate: 5557508: [Link]:
ERROR: The pending transaction requires 245 MB free space,
however the maximum supported size is 239 MB.^@

Cause

Unused VIBs on the ESXi host can be relatively large in size. These unused VIBs can result in
insufficient space in the bootbank or in the alt-bootbank when installing the required VIBs.

Solution

n Uninstall the VIBs that are no longer required and free up additional disk space.

For more information on deleting the unused VIBs, see the VMware knowledge base article at
[Link]
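
A minimal sketch of freeing bootbank space from the ESXi shell; the VIB name is a placeholder, and
you should confirm in the knowledge base article referenced above which VIBs are safe to remove
before deleting anything.

[root@esxi:~] esxcli software vib list                          <--- identify VIBs that are no longer required
[root@esxi:~] esxcli software vib remove -n <unused-vib-name>
[root@esxi:~] df -h                                             <--- confirm additional free space before retrying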

NSX Agent on ESXi Transport Nodes Times Out Communicating with NSX Manager

In a large-scale environment with many transport nodes and VMs on ESXi hosts, NSX agents,
which run on ESXi hosts, might time out when communicating with NSX Manager.

Problem

Some operations, such as when a VM vnic tries to attach to a logical switch, fail.
The /var/run/log/[Link] has messages such as:

level="ERROR" errorCode="MPA41542"] [MP_AddVnicAttachment] RPC call [0e316296-13-14] to NSX


management plane timout
2017-05-15T[Link]Z nsxa: [nsx@6876 comp="nsx-esx" subcomp="NSXA[VifHandlerThread:-2282640]"
tid="1000017079" level="ERROR" errorCode="MPA42003"] [DoMpVifAttachRpc]
MP_AddVnicAttachment() failed: RPC call to NSX management plane timout

Cause

In a large-scale environment, some operations might take longer than usual and fail because the
default timeout values are exceeded.


Solution

1 Increase the NSX agent timeout (seconds) value.

a On the ESXi host, stop the NSX ops agent with the following command:

On NSX 2.3 or later releases:

/etc/init.d/nsx-opsagent stop

On NSX 2.1 or previous releases:

/etc/init.d/nsxa stop

b Edit the file /etc/vmware/nsx-opsagent/[Link] and change the
vifOperationTimeout value from 25 seconds to, for example, 55 seconds.

"mp" : {
/* timeout for VIF operation */
"vifOperationTimeout" : 25,

Note This timeout value must be less than the hostd timeout value that you set in step 2.

c Start the NSX ops agent with the following command:

/etc/init.d/nsx-opsagent start

2 Increase the hostd timeout (seconds) value.

a On the ESXi host, stop the hostd agent with the following command:

/etc/init.d/hostd stop

b Edit the file /etc/vmware/hostd/[Link]. Under <opaqueNetwork>, uncomment
the entry for <taskTimeout> and change the value from 30 seconds to, for example, 60
seconds (see the example after this procedure).

<opaqueNetwork>
<!-- maximum message size allowed in opaque network manager IPC, in bytes. -->
<!-- <maxMsgSize> 65536 </maxMsgSize> -->
<!-- maximum wait time for opaque network response -->
<!-- <taskTimeout> 30 </taskTimeout> -->

c Start the hostd agent with the following command:

/etc/init.d/hostd start
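
After the edit in step 2b, the <opaqueNetwork> section might look like the following; this is a
sketch assuming a 60-second timeout, with the surrounding comments left unchanged.

<opaqueNetwork>
  <!-- maximum message size allowed in opaque network manager IPC, in bytes. -->
  <!-- <maxMsgSize> 65536 </maxMsgSize> -->
  <!-- maximum wait time for opaque network response -->
  <taskTimeout> 60 </taskTimeout>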

Troubleshooting NSX Edge Nodes


The admin CLI is activated when you log in to an NSX Edge using the admin credential. As a root user,
you can run a singleton admin CLI command directly from the root shell with su admin -c <cmd-to-run>,
or go into the interactive mode of the admin CLI with su admin and then run admin commands.

Accessing the NSX Edge CLI Terminal

The admin CLI is activated if you log in to NSX Edge as admin.

As a root user, you can:

n Use a singleton NSX Edge CLI directly from the root shell by running the su admin -c
<cmd-to-run> command.

n Go into interactive mode by running the su admin command.

NSX Edge MPA Connectivity Down

Problem

The NSX Edge MPA connectivity is Down due to an infrastructure service crash.

Cause

Full NSX Edge node disks or a memory leak can cause certain processes to crash and lead
to this failure. The admin CLI get managers may return an active manager node (if the crash occurred
after successful manager registration), and the admin CLI get controllers gives the error Failed to get
controller list.

Solution

1 Run the admin CLI get diagnosis config or the API GET /api/v1/transport-nodes/{transport-
node-id}/node/diagnosis to diagnose failures related to the health of NSX Edge nodes that are
caused when services go down.

2 Run the admin CLI get core-dumps to see if any cores got generated (in /var/core or /image/
core) due to a service crash. If a core dump is seen, run top -o %MEM as root to see
which NSX process is consuming too much memory, and the admin CLI get filesystem-status
to verify that the partitions used by NSX are not full or close to full (see the sketch after this list).

3 Run the root CLI /etc/init.d/nsx-proxy status or /etc/init.d/nsx-nestdb status to get the
status of infrastructure services running on the NSX Edge node.

4 Clean up the disk space, then start any stopped infrastructure services on the host by
issuing the command /etc/init.d/<service-name> start (as a temporary workaround).
Open a support case with VMware if any cores are seen.
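
A minimal sketch of steps 2 and 3, using the admin and root CLIs named above; the annotations
describe what to check, and the exact output varies by release.

nsx-edge> get core-dumps                         <--- look for recent cores in /var/core or /image/core
nsx-edge> get filesystem-status                  <--- look for partitions that are full or close to full
root@nsx-edge:~# /etc/init.d/nsx-proxy status
root@nsx-edge:~# /etc/init.d/nsx-nestdb status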

NSX Edge Status DOWN or DEGRADED As BFD Tunnel(s) are Down




Problem

NSX Edge status DOWN or DEGRADED due to BFD tunnel(s) to remote NSX Edge down.

Cause

Between two NSX Edge nodes, one BFD session runs on the management interface, and one or more BFD
sessions run on each VTEP interface. An NSX Edge considers its peer as unreachable only when all
BFD sessions to that edge (the management one and all VTEP ones) are down.

Solution

1 To get information about NSX Edge VTEP devices, run admin cli get host-switch.

2 To verify the physical port status, run get physical-port <vtep-device>. For example, on edge-1,
run get phy fp-eth0.

Physical Port
ADMIN_STATUS : up <----------------- should be "up"
DRIVER : net_vmxnet3
DUPLEX : full
ID : 0
LINK : up <----------------- should be "up"

3 Run the admin CLI get diagnosis topology and get edge-cluster status to verify the Edge is
healthy, with the edge cluster High Availability State Up, the edge node status Up, and the admin
status Up. Then verify the VTEP state is Up and check the status of the BFD healthcheck sessions.

Interface : nsx-edge-vtep
Device : fp-eth0
Session : [Link]:[Link]
Status : Unreachable
Interface : nsx-edge-vtep.1
Device : fp-eth1
Session : [Link]:[Link]
Status : Unreachable

If the status is Unreachable or Neighbor Signal Down, validate IP connectivity using ICMP ping.
For any other status, check the BFD error code explanation in the guide. See View Bidirectional
Forwarding Detection Status.

4 Because TEP interfaces reside on the tunnel VRF for Edges, initiate the ping from the
tunnel VRF 0 on the Edges, and if you have more than one TEP, specify the source IP address or
interface used for the ping.

5 Run the admin CLI get logical-routers to get the tunnel VRF, followed by a ping.

vrf 0
ping [Link] source [Link] repeat 3

6 Run admin cli get neighbor to check if ARP is getting resolved for the BFD session.

7 Run admin cli get interface to check status of interface with BFD tunnel down.


8 If any of the statuses is Unreachable, verify the underlay wiring is correct.

9 If ICMP ping is working yet VTEP status is unreachable, verify the VTEP IP addresses are not
already in use.

NSX Edge Router High Availability Status is Down



Problem

Edge Router High Availability Status is Down. NSX Edge status in UI may also show as DOWN.

n After you run the CLI get edge-cluster status, the Edge Node Status shows as UP but
Routing Status shows as Down.

Edge Node Id : 9e60f8c7-c0ac-42ef-8854-5466cd0cc7eb


Edge Node Status : Up (Routing Down)
Admin State : Up
Service Status :
Datapath Config Channel : Up
Datapath Status Channel : Up
Routing Status Channel : Up
Routing Status : Down

n In the NSX Manager UI, navigate to Networking → Tier-0 Logical Routers → Logical Router
→ Overview; the High Availability State shows as Down.

n In the Edge CLI, run get logical-routers, enter the Tier-0 SR VRF (vrf <number>), and run
get high-availability status; the status of the SR shows as Down.

Cause

This issue occurs if all BFD and/or BGP sessions for the router go down.

Solution

1 To troubleshoot BFD sessions, see the previously described troubleshooting cases related to
BFD sessions: NSX Edge status DOWN or DEGRADED due to BFD tunnels between Edge
and ESXi down and NSX Edge Status DOWN or DEGRADED As BFD Tunnel(s) are Down.

2 To troubleshoot BGP sessions, follow these steps:

a Within the Tier-0 SR VRF (get logical-routers; vrf <x>), run the admin CLI get bgp neighbor
summary (a sketch follows this list).

b If no connection is established, ping the BGP neighbor addresses and source addresses
(Tier-0 uplinks/interfaces) from inside as well as outside the SR VRF to verify the interfaces are
correctly set up as BGP peers on the TOR and that the BGP neighbors are up and accessible.


c If the connection status is Established, run the CLI get bgp neighbor <neighbor-ip>
advertised-routes followed by get route <ip-address> | get route connected
| get route bgp to view whether BGP routes are being advertised.

d Run get logical-routers to view the logical router ID of the Tier-0 Service Router, followed
by get logical-router <logical-router-id> interfaces stats to view whether TX or
RX drops are seen on the Service Router interfaces.

e Run the admin CLI get diagnosis topology to view the status of the edge uplink interfaces and
BGP peers.
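
A minimal sketch of the BGP checks above, run from the NSX Edge admin CLI; the prompt, the VRF
number, and the neighbor IP address are placeholders.

nsx-edge> get logical-routers                            <--- note the VRF number of the Tier-0 SR
nsx-edge> vrf 3
nsx-edge(tier0_sr)> get bgp neighbor summary             <--- look for the Established state
nsx-edge(tier0_sr)> get bgp neighbor 192.168.100.1 advertised-routes
nsx-edge(tier0_sr)> get route bgp
nsx-edge(tier0_sr)> exit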

NSX Edge Node Status Down Due to PNIC Bond Status Down
NSX Edge node status is Down because the PNIC bond status is down.

Problem

Cause

This issue can occur if one or more Edge interfaces used by host-switch are down.

Solution

1 Verify that the uplinks used by the fast-path interfaces are not down on the physical server hosting
the uplinks.

2 If fast-path components are down, verify whether the dataplane service is running by running the
admin CLI get service dataplane.

3 Verify the status of the Edge VTEP device by running the admin CLI get host-switch to find out the
VTEP device name.

4 Run get physical-port <vtep-device>.

edge-1> get phy fp-eth0


Physical Port
ADMIN_STATUS : up <----------------- should be "up"
DRIVER : net_vmxnet3
DUPLEX : full
ID : 0
LINK : up <----------------- should be "up"

5 If VTEP device admin_status or link is down, fix this infrastructure issue.

6 If dataplane service is stopped, start by running start | restart service dataplane.

7 After resolving the issue, run admin cli get logical-routers on the edge transport node to
make sure edge is functional again.

NSX Edge Transport Node Connectivity to Controller is Down


NSX Edge Transport Node connectivity to controller is down.


Problem

Cause

This issue is seen when connectivity to the manager is up but connectivity to the controller is down.
The admin command get managers returns an active manager node, while the command get controllers
does not return any active controller for this transport node with the status Connected and/or the
session state Up.

Solution

1 Verify the transport node is not in NSX Maintenance Mode using the admin CLI get maintenance-
mode, or run the API GET api/v1/transport-nodes/<tn-uuid>/state | status.

2 Verify whether the FQDN property (used by transport nodes to talk with NSX Manager/Controller) is
set by running the API GET /api/v1/configs/management and viewing the value of publish_fqdns.

3 If FQDN is set, verify the controller FQDN is reachable and that the FQDN value is being used by the
transport node to talk to the controller by first running an ICMP ping to the controller FQDN, followed
by the admin CLI get controllers to verify the controller FQDN value is populated correctly.

4 Verify the node agent services are running by following the node agent troubleshooting steps
outlined earlier.

5 If the edge transport node is in NSX Maintenance Mode, run the admin CLI
set maintenance-mode false or the API POST /api/v1/transport-nodes/<node-id>?
action=exit_maintenance_mode to take the node out of NSX Maintenance Mode.

6 If FQDN is set and ICMP ping works for the controller FQDN, try unsetting and setting the
FQDN property again by running the API PUT /api/v1/configs/management with the value of
publish_fqdns set to false, followed by true.

7 Verify that agent services are running on the NSX Edge node by running get edge
diagnosis config. If any service shows as failed, restart it by running the admin CLI start
service <service-name> or the root CLI /etc/init.d/<service-name> start.

NSX Edge node status is Down As Controller Is Unavailable


NSX Edge node status is Down due to controller unavailability.

Problem

NSX Edge node status is Down with the controller connectivity status 'unavailable' because the agent
service is down.

Cause

The get managers and get controllers commands return correct values, but GET api/v1/
transport-nodes/<tn-uuid>/status shows the agent status as Down. The error occurs because
the nsx-opsagent service is not running.


Solution

1 Run admin cli get status nsx-opsagent to verify the service status.

2 Start or restart the service by running the admin CLI restart | start service nsx-opsagent.

Transport Node Failed as Named Teaming Defined with No Active Uplink

Transport Node configuration has failed due to named teaming defined with no active uplink.

Problem

Cause

This issue can occur if the uplink profile associated with the transport node has a named teaming
policy defined with no active uplinks.

Solution

1 Make sure active uplinks are selected for any teaming policies defined.

2 Edit the uplink profile associated with the transport node by either removing the teaming policy with
no active uplink or assigning a valid uplink to the teaming policy defined.

NSX Edge Transport Node goes into NSX Maintenance Mode On HA Failover

NSX Edge Transport Node goes into NSX Maintenance Mode automatically upon HA failover.

Problem

NSX Edge Transport Nodes may go automatically into NSX Maintenance Mode if there are issues
with its datapath or with the usage of heap memory.

Cause

To view the edge node high availability status, status changes, and the reasons for them, run the admin
CLI get edge-cluster status and get edge-cluster history state. If the Edge STATE is DOWN,
it implies that either the datapath process is not running, a physical link is down, or the VTEP tunnels
are down.

Solution

1 Run the admin CLI get diagnosis config and get service dataplane to verify the core services
are up.

2 Run admin cli get diagnosis topology to view detailed edge config state.

3 Run admin cli get host-switch to get vtep-device name and physical-port-name.


4 Run the admin CLI get physical-port <port-name> to view the status of the host switch port,
followed by get physical-port <interface-name> stats, and look for the rx_misses (ingress buffer)
or tx_drops (egress buffer) counters to determine whether packet loss is occurring. Packet loss may
be seen if the edge is flooded with traffic at a rate higher than the datapath CPUs can
process. Packets are first held in the ingress/egress buffers and get dropped if the buffers are full. To
check the current buffer size configuration, use the CLI get dataplane | find ring.

5 If dataplane service is stopped, start by issuing cmd start service dataplane (as a
temporary workaround).

6 If host switch port is down, start by issuing cmd set physical-port fp-eth0 state up
(as a temporary workaround).

7 If packet loss is seen or there is an issue with the host switch status or state, file a ticket with
VMware Support.

You can also try to change the rx/tx buffer configuration (to enhance the edge interface traffic
management capacity) using the CLI set dataplane ring-size <rx/tx> <size>. The
supported buffer size range is 128-4096 bytes, and the dataplane service needs to be restarted
to make the new configuration effective, resulting in a downtime of about 60
seconds.
a For example, set dataplane ring-size rx 2048. Restart dataplane service for the
change to take effect.

b set dataplane ring-size tx 2048. Restart dataplane service for the change to take
effect.

restart service dataplane


get dataplane | find ring
Bfd_ring_size : 512
Lacp_ring_size : 512
Learning_ring_size : 512
Livetrace_ring_size: 512
Rx_ring_size : 2048
Slowpath_ring_size : 512
Tx_ring_size : 2048

VMotion of NSX Edge VM Fails Due To ESXi Running Out Of Resources

VMotion of NSX Edge VM may fail in vCenter due to ESXi running out of resources from a shared
buffer pool.

Cause

This issue can occur for an NSX Edge with the XL VM form factor of 16 vCPU cores, with the error
reservation failed.


Solution

u To fix the issue, increase the P2M buffer slots for all the virtual machines on the host. For
more information, see the KB article, [Link]

State Not Consistent of NSX BFD BGP or HA Functionality

Problem

Pings are not consistent, BFD status changes occur, and Edge HA failovers occur repeatedly.

Cause

Solution

1 Run the admin CLI get edge-cluster history state.

The CLI shows the history of edge cluster HA transitions and the reason for each. The only acceptable
state is Active. Anything else means the edge node is down and no service is able to function.
To identify the events leading to such an HA state, focus on the timestamp and
the "Reason" line.

2 Run admin cli get diagnosis topology to view status of all attached interfaces and config.

NSX Edge Node Status Down and BFD Tunnel Data is Missing

Cause

This issue can occur if the NSX Edge dataplane service goes down due to a memory leak in certain
NSX Edge processes. In this case, certain NSX Edge CLI commands may fail with an error.

Solution

1 Run the admin CLI get service dataplane to view the status of the service.

2 If the status is down, run the CLI start service dataplane.

3 Once the service has started, run the following CLIs to view the CPU and memory consumption
by the NSX Edge dataplane module.

4 Get the dataplane CPU stats from /var/log/vmware/[Link] and [Link], and watch
for values going above 60% usage.

NSX Edge status DOWN or DEGRADED due to BFD tunnels between Edge and ESXi down


Problem

Cause

Solution

1 Run the admin CLI get bfd-sessions to find VTEP BFD tunnel sessions that are down on the
edge transport node. As a guard against VTEP misconfiguration, when the edge loses all its BFD
sessions to the hypervisors, it brings itself down.

2 For down sessions, validate IP connectivity between the TEPs using ICMP ping.

Note The TEP interfaces reside on the tunnel VRF for Edges and the vxlan netstack for ESXi.
Therefore, initiate the ping from the tunnel VRF on the Edges, and if you have more than one TEP, be
sure to specify the source IP address or interface used for the ping.

3 Run the admin CLI get logical-routers to get the tunnel VRF.

vrf 0
ping [Link] source [Link] repeat 3

4 ESXi: ping ++netstack=vxlan <remote-vtep-ip-address> -I vmk10 -d -s 1600

5 As shown in the CLI, initiate the ping by specifying both the remote and local IP
addresses relevant to the BFD session. While not required to test simple connectivity, specify
a payload size that is 100 bytes less than the TEP's configured MTU, and set the don't-fragment bit to
prevent the underlay network from fragmenting the packet. Testing with bigger
payloads validates that your underlay network has been properly set up to support your
NSX Geneve overlay configuration.

6 Validate if ARP is getting resolved for neighbor VTEP address.

edge-1(vrf)> get neighbor


Logical Router
UUID : 736a80e3-23f6-5a2d-81d6-bbefb2786666
VRF : 0
LR-ID : 0
Name :
Type : TUNNEL
Neighbor
Interface : 4d9091fe-b971-5d3c-9201-4cb9c7f455fe
IP : [Link] <------------ peer TN VTEP IP
MAC : [Link] <---- resolved
State : reach <---------- ARP reachable state
Timeout : 37

7 Run the get interface command, followed by get logical-router interface <uuid> status,
to get the VTEP interface status.

Interface : ac80718b-72d3-5028-bb07-8f3c4ea2231a
Ifuid : 258
Name :


Fwd-mode : IPV4_AND_IPV6
Internal name : uplink-258
Mode : lif
Port-type : uplink
IP/Mask : [Link]/24
MAC : [Link]
VLAN : 2046
Access-VLAN : untagged
LS port : d31578e5-bc91-5466-97c1-8e4a6aa1b2e8
Urpf-mode : PORT_CHECK
DAD-mode : LOOSE
RA-mode : RA_INVALID
Admin : up
Op_state : up
Enable-mcast : True
MTU : 8800
arp_proxy

8 Run get bfd-session stats to look for RX drops and TX misses counter values.

9 If ICMP ping fails or ARP is not reachable, verify your underlay connectivity and that the peer host
TEP interface address is valid. If the large-packet MTU ping fails, fix the NSX fabric and/or underlay
infrastructure MTU to the correct values.

10 Validate that the Edge TEP address is not in use by another transport node, and validate that the
Edge TEP VLAN and the host TEP VLAN are not using the same VLAN and uplink.
