NSX Administration Guide
Update 17
Modified on 25 AUGUST 2022
VMware NSX Data Center for vSphere 6.4
You can find the most up-to-date technical documentation on the VMware website at:
[Link]
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
[Link]
© Copyright 2010 - 2022 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
5 Transport Zones 40
Understanding Replication Modes 42
Add a Transport Zone 45
Edit a Transport Zone 46
Expand a Transport Zone 47
Contract a Transport Zone 48
Controller Disconnected Operation (CDO) Mode 48
6 Logical Switches 49
Add a Logical Switch 51
Add a Logical Switch 52
Connect a Logical Switch to an NSX Edge 54
Deploy Services on a Logical Switch 55
Connect Virtual Machines to a Logical Switch 55
Test Logical Switch Connectivity 56
Prevent Spoofing on a Logical Switch 56
Edit a Logical Switch 56
Logical Switch Scenario 57
John Admin Assigns Segment ID Pool to NSX Manager 59
John Admin Configures VXLAN Transport Parameters 60
John Admin Adds a Transport Zone 61
John Admin Creates a Logical Switch 61
8 L2 Bridges 70
Add L2 Bridge 71
Add L2 Bridge to a Logically Routed Environment 72
Improving Bridging Throughput 73
Enable Software Receive Side Scaling 74
9 Routing 75
Add a Distributed Logical Router 75
Add an Edge Services Gateway 88
Specify Global Configuration 96
NSX Edge Configuration 98
Working with Certificates 98
FIPS Mode 103
Managing NSX Edge Appliances 105
Managing NSX Edge Appliance Resource Reservations 107
Working with Interfaces 110
Add a Sub Interface 113
Change Auto Rule Configuration 117
Change CLI Credentials 117
The NSX Administration Guide describes how to configure, monitor, and maintain the VMware NSX Data Center for vSphere system by using the VMware NSX Manager™ user interface, the VMware vSphere Web Client, and the VMware vSphere Client™. The information includes step-by-step configuration instructions and suggested best practices.
Important NSX for vSphere is now known as NSX Data Center for vSphere.
Intended Audience
This manual is intended for anyone who wants to install or use NSX Data Center for vSphere in a VMware vSphere environment. The information in this manual is written for experienced system administrators who are familiar with virtual machine technology and virtual datacenter operations. This manual assumes familiarity with vSphere, including VMware ESXi™, VMware vCenter Server®, and the vSphere Web Client.
Task Instructions
Task instructions in this guide are based on the vSphere Web Client. You can also perform some
of the tasks in this guide by using the new vSphere Client. The new vSphere Client user interface
terminology, topology, and workflow are closely aligned with the same aspects and elements of
the vSphere Web Client.
Note Not all functionality of the NSX plug-in for the vSphere Web Client has been
implemented for the vSphere Client in NSX 6.4. For an up-to-date list of supported and
unsupported functionality, see [Link]
[Link].
1 System Requirements for NSX Data Center for vSphere
Before you install or upgrade NSX Data Center for vSphere, consider your network configuration
and resources. You can install one NSX Manager per vCenter Server, one instance of Guest
Introspection per ESXi host, and multiple NSX Edge instances per datacenter.
Hardware
This table lists the hardware requirements for NSX Data Center for vSphere appliances.
Appliance         Memory    vCPU    Disk Space
NSX Controller    4 GB      4       28 GB
As a general guideline, if your NSX-managed environment contains more than 256 hypervisors,
increase NSX Manager resources to at least 8 vCPU and 24 GB of RAM. Do not exceed 32 vCPU.
For more information on configuration maximums, see the NSX Data Center for vSphere section of
the VMware Configuration Maximums tool. The documented configuration maximums all assume
the large NSX Manager appliance size. For specific sizing details contact VMware support.
For information about increasing the memory and vCPU allocation for your virtual appliances,
see "Allocate Memory Resources", and "Change the Number of Virtual CPUs" in vSphere Virtual
Machine Administration.
The provisioned space for a Guest Introspection appliance shows as 6.26 GB. This is because vSphere ESX Agent Manager creates a snapshot of the service VM to create fast clones when multiple hosts in a cluster share storage. For more information on how to disable this option through ESX Agent Manager, refer to the ESX Agent Manager documentation.
Network Latency
Ensure that the network latency between components is at or below the documented maximum latency.
Software
For the latest interoperability information, see the Product Interoperability Matrixes at http://
[Link]/comp_guide/sim/interop_matrix.php.
For recommended versions of NSX Data Center for vSphere, vCenter Server, and ESXi, see
the release notes for the version of NSX Data Center for vSphere to which you are upgrading.
Release notes are available at the NSX Data Center for vSphere documentation site: https://
[Link]/en/VMware-NSX-for-vSphere/[Link].
An NSX Manager can participate in a cross-vCenter NSX deployment only when the component version requirements are met. See the release notes for the required versions.
To manage all NSX Managers in a cross-vCenter NSX deployment from a single vSphere Web
Client, you must connect your vCenter Server systems in Enhanced Linked Mode. See Using
Enhanced Linked Mode in vCenter Server and Host Management.
To verify the compatibility of partner solutions with NSX, see the VMware Compatibility Guide
for Networking and Security at [Link]
deviceCategory=security.
Client and User Access
The following items are required to manage your NSX Data Center for vSphere environment:
n Forward and reverse name resolution. This is required if you have added ESXi hosts by name to the vSphere inventory; otherwise, NSX Manager cannot resolve the IP addresses.
n Access to the datastore where you store virtual machine files, and the account permissions to
copy files to that datastore.
n Cookies must be enabled on your Web browser to access the NSX Manager user interface.
n Port 443 must be open between the NSX Manager and the ESXi host, the vCenter Server, and
the NSX Data Center for vSphere appliances to be deployed. This port is required to download
the OVF file on the ESXi host for deployment.
n A Web browser that is supported for the version of vSphere Client or vSphere Web Client you
are using. See the list of supported Web browsers at [Link]
NSX-Data-Center-for-vSphere/6.4/rn/[Link].
Note that the Microsoft Internet Explorer browser (Windows 32-bit and 64-bit) is not supported with NSX 6.4.x.
n For information about using the vSphere Client (HTML5) on vSphere 6.5 with NSX Data
Center for vSphere 6.4, see [Link]
[Link].
2 Ports and Protocols Required by NSX Data Center for vSphere
NSX Data Center for vSphere requires multiple ports to be open for it to operate properly.
n If you have a cross-vCenter NSX environment and your vCenter Server systems are in
Enhanced Linked Mode, each NSX Manager appliance must have the required connectivity
to each vCenter Server system in the environment. In this mode, you can manage any NSX
Manager from any vCenter Server system.
n When you are upgrading from an earlier NSX version to version 6.4.x, Guest Introspection and
host clusters must be upgraded for the Remote Desktop Session Host (RDSH) policies to be
created and enforced successfully.
For the list of all supported ports and protocols in NSX 6.4.0 and later, see the VMware Ports and
Protocols portal at [Link]. You can use this portal to download the list in CSV, Excel, or PDF format.
3 Overview of NSX Data Center for vSphere
IT organizations have gained significant benefits as a direct result of server virtualization. Server consolidation reduced physical complexity, increased operational efficiency, and made it possible to dynamically repurpose underlying resources to quickly and optimally meet the needs of increasingly dynamic business applications.
VMware’s Software Defined Data Center (SDDC) architecture is now extending virtualization
technologies across the entire physical data center infrastructure. NSX Data Center for vSphere
is a key product in the SDDC architecture. With NSX Data Center for vSphere, virtualization
delivers for networking what it has already delivered for compute and storage. In much the
same way that server virtualization programmatically creates, snapshots, deletes, and restores
software-based virtual machines (VMs), NSX Data Center for vSphere network virtualization
programmatically creates, snapshots, deletes, and restores software-based virtual networks. The
result is a transformative approach to networking that not only enables data center managers to
achieve orders of magnitude better agility and economics, but also allows for a vastly simplified
operational model for the underlying physical network. With the ability to be deployed on any
IP network, including both existing traditional networking models and next-generation fabric
architectures from any vendor, NSX Data Center for vSphere is a non-disruptive solution. In fact,
with NSX Data Center for vSphere, the physical network infrastructure you already have is all you
need to deploy a software-defined data center.
The figure above draws an analogy between compute and network virtualization. With server
virtualization, a software abstraction layer (server hypervisor) reproduces the familiar attributes
of an x86 physical server (for example, CPU, RAM, Disk, NIC) in software, allowing them to be
programmatically assembled in any arbitrary combination to produce a unique VM in a matter of
seconds.
With network virtualization, the functional equivalent of a network hypervisor reproduces the
complete set of Layer 2 through Layer 7 networking services (for example, switching, routing,
access control, firewalling, QoS, and load balancing) in software. As a result, these services can
be programmatically assembled in any arbitrary combination, to produce unique, isolated virtual
networks in a matter of seconds.
With network virtualization, benefits similar to server virtualization are derived. For example, just
as VMs are independent of the underlying x86 platform and allow IT to treat physical hosts
as a pool of compute capacity, virtual networks are independent of the underlying IP network
hardware and allow IT to treat the physical network as a pool of transport capacity that can
be consumed and repurposed on demand. Unlike legacy architectures, virtual networks can be
provisioned, changed, stored, deleted, and restored programmatically without reconfiguring the
underlying physical hardware or topology. By matching the capabilities and benefits derived from
familiar server and storage virtualization solutions, this transformative approach to networking
unleashes the full potential of the software-defined data center.
NSX Data Center for vSphere can be configured through the vSphere Web Client, a command-line
interface (CLI), and a REST API.
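For example, information that is visible in the vSphere Web Client can also be retrieved through the NSX Manager REST API. The following Python sketch lists the transport zones (network scopes) known to an NSX Manager; it assumes the /api/2.0/vdn/scopes endpoint described in the NSX API Guide, HTTP Basic authentication, and placeholder values for the manager hostname and credentials.

```python
# Minimal sketch: list transport zones (network scopes) from NSX Manager.
# Hostname and credentials are placeholders; verify the endpoint and XML
# layout against the NSX API Guide for your NSX version.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsxmgr.example.com"   # hypothetical NSX Manager hostname
AUTH = ("admin", "password")         # replace with real credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes",
    auth=AUTH,
    headers={"Accept": "application/xml"},
    verify=False,  # NSX Manager commonly uses a self-signed certificate
)
resp.raise_for_status()

for scope in ET.fromstring(resp.content).findall("vdnScope"):
    print(scope.findtext("objectId"),
          scope.findtext("name"),
          "replication mode:", scope.findtext("controlPlaneMode"))
```

The same general pattern, Basic authentication over HTTPS with XML request and response bodies, applies to most NSX Manager API calls.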
n NSX Edge
n NSX Services
(Figure: NSX Data Center for vSphere planes — NSX Manager in the management plane, the NSX Controller holding run-time state in the control plane, and NSX Edge in the data plane.)
Note that a cloud management platform (CMP) is not an NSX Data Center for vSphere component,
but NSX Data Center for vSphere provides integration into virtually any CMP via the REST API and
out-of-the-box integration with VMware CMPs.
Data Plane
The data plane consists of the NSX Virtual Switch, which is based on the vSphere Distributed
Switch (VDS) with additional components to enable services. Kernel modules, userspace agents,
configuration files, and install scripts are packaged in VIBs and run within the hypervisor kernel
to provide services such as distributed routing and logical firewall and to enable VXLAN bridging
capabilities.
The NSX Virtual Switch (vDS-based) abstracts the physical network and provides access-level
switching in the hypervisor. It is central to network virtualization because it enables logical
networks that are independent of physical constructs, such as VLANs. Some of the benefits of
the vSwitch are:
n Support for overlay networking with protocols (such as VXLAN) and centralized network
configuration. Overlay networking enables the following capabilities:
n Creation of a flexible logical Layer 2 (L2) overlay over existing IP networks on existing
physical infrastructure without the need to re-architect any of the data center networks
n Application workloads and virtual machines that are agnostic of the overlay network and
operate as if they were connected to a physical L2 network
The logical routers can provide L2 bridging from the logical networking space (VXLAN) to the
physical network (VLAN).
The gateway device is typically an NSX Edge virtual appliance. NSX Edge offers L2, L3, perimeter
firewall, load balancing, and other services such as SSL VPN and DHCP.
Control Plane
The control plane runs in the NSX Controller cluster. The NSX Controller is an advanced
distributed state management system that provides control plane functions for logical switching
and routing functions. It is the central control point for all logical switches within a network and
maintains information about all hosts, logical switches (VXLANs), and distributed logical routers.
The NSX Controller cluster is responsible for managing the distributed switching and routing
modules in the hypervisors. The controller does not have any dataplane traffic passing through it.
Controller nodes are deployed in a cluster of three members to enable high-availability and scale.
Any failure of the controller nodes does not impact any data-plane traffic.
NSX Controllers work by distributing network information to hosts. To achieve a high level of resiliency, the NSX Controller is clustered for scale-out and high availability. An NSX Controller cluster must contain three nodes. The three virtual appliances provide, maintain, and update the state of all network functioning within the NSX domain. NSX Manager is used to deploy NSX Controller nodes.
The three NSX Controller nodes form a control cluster. The controller cluster requires a quorum
(also called a majority) in order to avoid a "split-brain scenario." In a split-brain scenario, data
inconsistencies originate from the maintenance of two separate data sets that overlap. The
inconsistencies can be caused by failure conditions and data synchronization issues. Having three
controller nodes ensures data redundancy in case of failure of one NSX Controller node.
Each NSX Controller node is assigned a set of roles that define the tasks it can perform:
n API provider
n Persistence server
n Switch manager
n Logical manager
n Directory server
Each role has a master controller node. If a master controller node for a role fails, the cluster
elects a new master for that role from the available NSX Controller nodes. The new master NSX
Controller node for that role reallocates the lost portions of work among the remaining NSX
Controller nodes.
NSX Data Center for vSphere supports three logical switch control plane modes: multicast, unicast
and hybrid. Using a controller cluster to manage VXLAN-based logical switches eliminates the
need for multicast support from the physical network infrastructure. You don’t have to provision
multicast group IP addresses, and you also don’t need to enable PIM routing or IGMP snooping
features on physical switches or routers. Thus, the unicast and hybrid modes decouple NSX from
the physical network. VXLANs in unicast control-plane mode do not require the physical network
to support multicast in order to handle the broadcast, unknown unicast, and multicast (BUM)
traffic within a logical switch. The unicast mode replicates all the BUM traffic locally on the host
and requires no physical network configuration. In the hybrid mode, some of the BUM traffic
replication is offloaded to the first hop physical switch to achieve better performance. Hybrid
mode requires IGMP snooping on the first-hop switch and access to an IGMP querier in each VTEP
subnet.
Management Plane
The management plane is built by the NSX Manager, the centralized network management
component of NSX Data Center for vSphere. It provides the single point of configuration and
REST API entry-points.
The NSX Manager is installed as a virtual appliance on any ESXi host in your vCenter Server
environment. NSX Manager and vCenter have a one-to-one relationship. For every instance of
NSX Manager, there is one vCenter Server. This is true even in a cross-vCenter NSX environment.
In a cross-vCenter NSX environment, there is both a primary NSX Manager and one or more
secondary NSX Manager appliances. The primary NSX Manager allows you to create and manage
universal logical switches, universal logical (distributed) routers and universal firewall rules.
Secondary NSX Managers are used to manage networking services that are local to that specific
NSX Manager. There can be up to seven secondary NSX Managers associated with the primary
NSX Manager in a cross-vCenter NSX environment.
Consumption Platform
The consumption of NSX Data Center for vSphere can be driven directly through the NSX Manager
user interface, which is available in the vSphere Web Client. Typically end users tie network
virtualization to their cloud management platform for deploying applications. NSX Data Center
for vSphere provides rich integration into virtually any CMP through REST APIs. Out-of-the-box
integration is also available through VMware vRealize Automation Center, vCloud Director, and
OpenStack with the Neutron plug-in.
NSX Edge
You can install NSX Edge as an edge services gateway (ESG) or as a distributed logical router
(DLR).
Uplink interfaces of ESGs connect to uplink port groups that have access to a shared corporate
network or a service that provides access layer networking. Multiple external IP addresses can be
configured for load balancer, site-to-site VPN, and NAT services.
A logical router can have eight uplink interfaces and up to a thousand internal interfaces. An uplink
interface on a DLR generally peers with an ESG, with an intervening Layer 2 logical transit switch
between the DLR and the ESG. An internal interface on a DLR peers with a virtual machine hosted
on an ESXi hypervisor with an intervening logical switch between the virtual machine and the DLR.
n The DLR control plane is provided by the DLR virtual appliance (also called a control VM).
This VM supports dynamic routing protocols (BGP and OSPF), exchanges routing updates with
the next Layer 3 hop device (usually the edge services gateway) and communicates with the
NSX Manager and the NSX Controller cluster. High-availability for the DLR virtual appliance
is supported through active-standby configuration: a pair of virtual machines functioning in
active/standby modes are provided when you create the DLR with HA enabled.
n At the data-plane level, there are DLR kernel modules (VIBs) that are installed on the ESXi
hosts that are part of the NSX domain. The kernel modules are similar to the line cards in a
modular chassis supporting Layer 3 routing. The kernel modules have a routing information
base (RIB) (also known as a routing table) that is pushed from the controller cluster. The
data plane functions of route lookup and ARP entry lookup are performed by the kernel
modules. The kernel modules are equipped with logical interfaces (called LIFs) connecting to
the different logical switches and to any VLAN-backed port groups. Each LIF is assigned an IP address, which represents the default IP gateway for the logical L2 segment it connects to, and a vMAC address. The IP address is unique for each LIF, whereas the same vMAC is assigned to all the defined LIFs.
(Figure: logical routing components and workflow — an NSX Edge acting as the next-hop router peers over OSPF/BGP with the logical router control VM; the NSX Manager, the controller cluster, and the logical router kernel modules on the vSphere Distributed Switch handle the control and data paths for the Web VM and App VM segments. The numbered steps below correspond to the numbers in the figure.)
1 A DLR instance is created from the NSX Manager UI (or with API calls), and routing is enabled,
using either OSPF or BGP.
2 The NSX Controller uses the control plane with the ESXi hosts to push the new DLR
configuration including LIFs and their associated IP and vMAC addresses.
3 Assuming a routing protocol is also enabled on the next-hop device (an NSX Edge [ESG] in
this example), OSPF or BGP peering is established between the ESG and the DLR control VM.
The ESG and the DLR can then exchange routing information:
n The DLR control VM can be configured to redistribute into OSPF the IP prefixes for all
the connected logical networks ([Link]/24 and [Link]/24 in this example). As a
consequence, it then pushes those route advertisements to the NSX Edge. Notice that the
next hop for those prefixes is not the IP address assigned to the control VM ([Link])
but the IP address identifying the data-plane component of the DLR ([Link]).
The former is called the DLR "protocol address," whereas the latter is the "forwarding
address".
n The NSX Edge pushes to the control VM the prefixes to reach IP networks in the external
network. In most scenarios, a single default route is likely to be sent by the NSX Edge,
because it represents the single point of exit toward the physical network infrastructure.
4 The DLR control VM pushes the IP routes learned from the NSX Edge to the controller cluster.
5 The controller cluster is responsible for distributing routes learned from the DLR control VM
to the hypervisors. Each controller node in the cluster takes responsibility for distributing
the information for a particular logical router instance. In a deployment where there are
multiple logical router instances deployed, the load is distributed across the controller nodes.
A separate logical router instance is usually associated with each deployed tenant.
6 The DLR routing kernel modules on the hosts handle the data-path traffic for communication
to the external network by way of the NSX Edge.
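The edge services gateways and distributed logical routers deployed in an environment can also be listed programmatically. The following Python sketch assumes the /api/4.0/edges endpoint and the edgeType field of the edge summary XML from the NSX API Guide; the hostname and credentials are placeholders.

```python
# Minimal sketch: list NSX Edge appliances and separate DLRs from ESGs.
# The endpoint and field names follow the NSX API Guide from memory;
# confirm them for your NSX version before relying on this.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsxmgr.example.com"   # hypothetical NSX Manager hostname
AUTH = ("admin", "password")

resp = requests.get(
    f"https://{NSX_MANAGER}/api/4.0/edges",
    auth=AUTH,
    headers={"Accept": "application/xml"},
    verify=False,
)
resp.raise_for_status()

for edge in ET.fromstring(resp.content).iter("edgeSummary"):
    # edgeType is expected to be "distributedRouter" for a DLR and
    # "gatewayServices" for an edge services gateway
    kind = "DLR" if edge.findtext("edgeType") == "distributedRouter" else "ESG"
    print(edge.findtext("objectId"), kind, edge.findtext("name"))
```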
NSX Services
The NSX Data Center for vSphere components work together to provide the following functional VMware NSX Services™.
Logical Switches
A cloud deployment or a virtual data center has a variety of applications across multiple tenants.
These applications and tenants require isolation from each other for security, fault isolation, and
non-overlapping IP addresses. NSX Data Center for vSphere allows the creation of multiple
logical switches, each of which is a single logical broadcast domain. An application or tenant
virtual machine can be logically wired to a logical switch. This allows for flexibility and speed of
deployment while still providing all the characteristics of a physical network's broadcast domains
(VLANs) without physical Layer 2 sprawl or spanning tree issues.
A logical switch is distributed and can span across all hosts in vCenter (or across all hosts in a
cross-vCenter NSX environment). This allows for virtual machine mobility (vMotion) within the data
center without limitations of the physical Layer 2 (VLAN) boundary. The physical infrastructure is
not constrained by MAC/FIB table limits, because the logical switch contains the broadcast domain
in software.
Logical Routers
Routing provides the necessary forwarding information between Layer 2 broadcast domains,
thereby allowing you to decrease the size of Layer 2 broadcast domains and improve network
efficiency and scale. NSX Data Center for vSphere extends this intelligence to where the
workloads reside for East-West routing. This allows for more direct VM-to-VM communication without the cost or delay of additional routing hops. At the same time, logical routers provide North-South connectivity, thereby enabling tenants to access public networks.
Logical Firewall
Logical Firewall provides security mechanisms for dynamic virtual data centers. The Distributed
Firewall component of Logical Firewall allows you to segment virtual datacenter entities like virtual
machines based on VM names and attributes, user identity, vCenter objects like datacenters, and
hosts, as well as traditional networking attributes like IP addresses, VLANs, and so on. The Edge
Firewall component helps you meet key perimeter security requirements, such as building DMZs
based on IP/VLAN constructs, and tenant-to-tenant isolation in multi-tenant virtual data centers.
The Flow Monitoring feature displays network activity between virtual machines at the application
protocol level. You can use this information to audit network traffic, define and refine firewall
policies, and identify threats to your network.
Service Composer
Service Composer helps you provision and assign network and security services to applications in
a virtual infrastructure. You map these services to a security group, and the services are applied to
the virtual machines in the security group using a Security Policy.
4 Overview of Cross-vCenter Networking and Security
NSX Data Center for vSphere allows you to manage multiple environments from a single primary
NSX Manager.
There are many reasons multiple vCenter Server systems may be required, for example:
n To accommodate products that require dedicated or multiple vCenter Server systems, such as
Horizon View or Site Recovery Manager
In NSX Data Center for vSphere 6.2 and later, you can create universal objects on the primary NSX Manager, which are synchronized across all vCenter Server systems in the environment.
A cross-vCenter NSX environment has the following benefits:
n Increased span of logical networks. The same logical networks are available in the cross-vCenter environment, so it's possible for VMs on any cluster on any vCenter Server system to be connected to the same logical network.
n Centralized security policy management. Firewall rules are managed from one centralized
location, and apply to the VM regardless of location or vCenter Server system.
n Support of mobility boundaries in vSphere 6, including cross vCenter and long distance
vMotion across logical switches.
n Enhanced support for multi-site environments, from metro distance to 150ms RTT. This
includes both active-active and active-passive datacenters.
n Increased mobility of workloads - VMs can be migrated using vMotion across vCenter Servers
without having to reconfigure the VM or change firewall rules.
Note Cross-vCenter NSX functionality is supported with vSphere 6.0 and later.
The primary NSX Manager is used to deploy a universal controller cluster that provides the control
plane for the cross-vCenter NSX environment. The secondary NSX Managers do not have their
own controller clusters.
The primary NSX Manager can create universal objects, such as universal logical switches. These
objects are synchronized to the secondary NSX Managers by the NSX Universal Synchronization
Service. You can view these objects from the secondary NSX Managers, but you cannot edit them
there. You must use the primary NSX Manager to manage universal objects.
On both primary and secondary NSX Managers, you can create objects that are local to that
specific environment, such as logical switches, and logical (distributed) routers. They exist only
within the environment in which they were created. They are not visible on the other NSX
Managers in the cross-vCenter NSX environment.
NSX Managers can be assigned the standalone role. A standalone NSX Manager manages an
environment with a single NSX Manager and single vCenter. A standalone NSX Manager cannot
create universal objects.
Note If you change the role of a primary NSX Manager to standalone and any universal objects
exist in the NSX environment, the NSX Manager is assigned the transit role. The universal objects
remain, but they cannot be changed, and no other universal objects can be created. You can
delete universal objects from the transit role. Use the transit role temporarily, for example, when
changing which NSX Manager is the primary.
Table 4-1. Support matrix for NSX Data Center for vSphere Services in cross-vCenter NSX

NSX Data Center for vSphere Service    Supports cross-vCenter NSX synchronization?
L2 bridges                             No
Exclude list                           No
SpoofGuard                             No
Edge firewall                          No
VPN                                    No
Service Composer                       No
Network extensibility                  No
IP pools                               No
Services                               Yes
As the universal controller cluster is the only controller cluster for the cross-vCenter NSX
environment, it maintains information about universal logical switches and universal logical routers
as well as logical switches and logical routers that are local to each NSX Manager.
In order to avoid any overlap in object IDs, separate ID pools are maintained for universal objects
and local objects.
The universal transport zone is created on the primary NSX Manager, and is synchronized to the
secondary NSX Managers. Clusters that need to participate in universal logical networks must be
added to the universal transport zone from their NSX Managers.
When you create a logical switch in a universal transport zone, you create a universal logical
switch. This switch is available on all clusters in the universal transport zone. The universal
transport zone can include clusters in any vCenter in the cross-vCenter NSX environment.
The segment ID pool is used to assign VNIs to logical switches, and the universal segment ID pool
is used to assign VNIs to universal logical switches. These pools must not overlap.
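Because the local and universal VNI ranges must not overlap, it can be useful to review the configured pools programmatically. The following Python sketch assumes the /api/2.0/vdn/config/segments endpoint and the begin and end fields of a segment range, both taken from the NSX API Guide from memory; the universal segment ID pool is configured on the primary NSX Manager and should be compared against this output.

```python
# Minimal sketch: print the locally configured segment ID (VNI) pools so
# their ranges can be checked against the universal segment ID pool.
# Endpoint and element names are assumptions; verify them for your version.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsxmgr.example.com"   # hypothetical NSX Manager hostname
AUTH = ("admin", "password")

resp = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/config/segments",
    auth=AUTH,
    headers={"Accept": "application/xml"},
    verify=False,
)
resp.raise_for_status()

for pool in ET.fromstring(resp.content).iter("segmentRange"):
    print(pool.findtext("name"), "VNIs",
          pool.findtext("begin"), "-", pool.findtext("end"))
```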
You must use a universal logical router to route between universal logical switches. If you need
to route between a universal logical switch and a logical switch, you must use an Edge Services
Gateway.
When you create a universal logical router you must choose whether to enable local egress, as this
cannot be changed after creation. Local egress allows you to control what routes are provided to
ESXi hosts based on an identifier, the locale ID.
Each NSX Manager is assigned a locale ID, which is set to the NSX Manager UUID by default. You
can override the locale ID at the following levels:
n Cluster
n ESXi host
If you do not enable local egress the locale ID is ignored and all ESXi hosts connected to the
universal logical router will receive the same routes. Whether or not to enable local egress in
a cross-vCenter NSX environment is a design consideration, but it is not required for all cross-
vCenter NSX configurations.
As your datacenter needs scale out, the existing vCenter Server may not scale to the same
level. This may require you to move a set of applications to newer hosts that are managed by
a different vCenter Server. Or you may need to move applications from staging to production in an
environment where staging servers are managed by one vCenter Server and production servers
are managed by a different vCenter Server. Distributed Firewall supports these cross-vCenter
vMotion scenarios by replicating firewall policies that you define for the primary NSX Manager on
up to seven secondary NSX Managers.
From the primary NSX Manager you can create distributed firewall rule sections that are marked
for universal synchronization. You can create more than one universal L2 rule section and more
than one universal L3 rule section. Universal sections are always listed at the top of primary
and secondary NSX Managers. These sections and their rules are synchronized to all secondary
NSX Managers in your environment. Rules in other sections remain local to the appropriate NSX
Manager.
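The resulting rule base can be inspected through the distributed firewall API. The following Python sketch assumes the /api/4.0/firewall/globalroot-0/config endpoint; it prints the firewall sections in the order NSX Manager returns them, with universal sections expected at the top as described above. Hostname and credentials are placeholders.

```python
# Minimal sketch: list distributed firewall sections in order.
# Universal sections are expected to appear before local sections.
# Verify the endpoint and XML attributes against the NSX API Guide.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsxmgr.example.com"   # hypothetical NSX Manager hostname
AUTH = ("admin", "password")

resp = requests.get(
    f"https://{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config",
    auth=AUTH,
    headers={"Accept": "application/xml"},
    verify=False,
)
resp.raise_for_status()

for section in ET.fromstring(resp.content).iter("section"):
    # Each <section> element is expected to carry its name and type
    # (LAYER2 or LAYER3) as attributes
    print(section.get("type"), section.get("name"))
```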
The following Distributed Firewall features are not supported in a cross-vCenter NSX environment:
n Exclude list
n SpoofGuard
n Edge Firewall
Service Composer does not support universal synchronization, so you cannot use it to create
distributed firewall rules in the universal section.
n Universal IP Sets
n Dynamic criteria
Universal network and security objects are created, deleted, and updated only on the primary NSX
Manager, but are readable on the secondary NSX Manager. Universal Synchronization Service
synchronizes universal objects across vCenters immediately, as well as on demand using force
synchronization.
Universal security groups are used in two types of deployments: multiple live cross-vCenter NSX environments, and cross-vCenter NSX active-standby deployments, where one site is live at a given time and the rest are on standby. Only active-standby deployments can have universal security groups with dynamic membership based on VM name or static membership based on universal security tag. After a universal security group is created, it cannot be edited to enable or disable the active-standby functionality. Membership is defined by included objects only; you cannot use excluded objects.
Universal security groups cannot be created from Service Composer. Security groups created
from Service Composer will be local to that NSX Manager.
Whether the cross-vCenter NSX environment is contained within a single site or crosses multiple
sites, a similar configuration can be used. These two example topologies consist of the following:
n A universal transport zone that includes all clusters in the site or sites.
n Universal logical switches attached to the universal transport zone. Two universal logical
switches are used to connect VMs and one is used as a transit network for the router uplink.
n A universal logical router with an NSX Edge appliance to enable dynamic routing. The
universal logical router appliance has internal interfaces on the VM universal logical switches
and an uplink interface on the transit network universal logical switch.
n Edge Services Gateways (ESGs) connected to the transit network and the physical egress
router network.
For more information about cross-vCenter NSX topologies, see the Cross-vCenter NSX Design
Guide at [Link]
(Figure: single-site cross-vCenter NSX topology — a vCenter Server with the primary NSX Manager and a vCenter Server with a secondary NSX Manager share a universal controller cluster; up to eight edge services gateways (E1-E8) peer over OSPF/BGP with the universal logical router appliance across a universal logical switch transit network and connect to the physical routers.)
(Figure: multi-site cross-vCenter NSX topology — edge services gateways (E1-E8) peer over OSPF/BGP with the universal logical router appliance across the universal logical switch transit network and connect to the physical routers.)
Local Egress
All sites in a multi-site cross-vCenter NSX environment can use the same physical routers for
egress traffic. However, if egress routes need to be customized, the local egress feature must be
enabled when the universal logical router is created.
Local egress allows you to customize routes at the universal logical router, cluster, or host level.
This example of a cross-vCenter NSX environment in multiple sites has local egress enabled. The
edge services gateways (ESGs) in each site have a default route that sends traffic out through
that site's physical routers. The universal logical router is configured with two appliances, one in
each site. The appliances learn routes from their site's ESGs. The learned routes are sent to the
universal controller cluster. Because local egress is enabled, the locale ID for that site is associated
with those routes. The universal controller cluster sends routes with matching locale IDs to the
hosts. Routes learned on the site A appliance are sent to the hosts in site A, and routes learned on
the site B appliance are sent to the hosts in site B.
For more information about local egress, see the Cross-vCenter NSX Design Guide at https://
[Link]/docs/DOC-32552.
(Figure: cross-vCenter NSX with local egress — each site has its own edge services gateways (E1-E8) peering over OSPF/BGP with a site-local universal logical router appliance (primary appliance on universal logical switch transit network A, secondary appliance on transit network B), all backed by the same universal distributed logical router.)
It is important to understand what happens when you change an NSX Manager's role.
Set as primary
This operation sets the role of an NSX Manager to primary and starts the synchronization
software. This operation fails if the NSX Manager is already the primary or already a
secondary.
This operation sets the role of NSX Manager to standalone or transit mode. This operation
might fail if the NSX Manager already has the standalone role.
This operation resets the primary NSX Manager to standalone or transit mode, stops the
synchronization software, and unregisters all secondary NSX Managers. This operation might
fail if the NSX Manager is already standalone or if any of the secondary NSX Managers are
unreachable.
When you run this operation on a secondary NSX Manager, the secondary NSX Manager is
unilaterally disconnected from the primary NSX Manager. This operation should be used when
the primary NSX Manager has experienced an unrecoverable failure, and you want to register
the secondary NSX Manager to a new primary. If the original primary NSX Manager does come
up again, its database continues to list the secondary NSX Manager as registered. To resolve
this issue, include the force option when you disconnect or unregister the secondary from
the original primary. The force option removes the secondary NSX Manager from the original
primary NSX Manager's database.
5 Transport Zones
A transport zone controls which hosts a logical switch can reach. It can span one or more
vSphere clusters. Transport zones dictate which clusters and, therefore, which VMs can participate
in the use of a particular network. In a cross-vCenter NSX environment you can create a universal
transport zone, which can include clusters from any vCenter in the environment. You can create
only one universal transport zone.
An NSX Data Center for vSphere environment can contain one or more transport zones based
on your requirements. A host cluster can belong to multiple transport zones. A logical switch can
belong to only one transport zone.
NSX Data Center for vSphere does not allow connection of VMs that are in different transport
zones. The span of a logical switch is limited to a transport zone, so virtual machines in different
transport zones cannot be on the same Layer 2 network. A distributed logical router cannot
connect to logical switches that are in different transport zones. After you connect the first logical
switch, the selection of further logical switches is limited to those that are in the same transport
zone.
The following guidelines are meant to help you design your transport zones:
n If a cluster requires Layer 3 connectivity, the cluster must be in a transport zone that also
contains an edge cluster, meaning a cluster that has Layer 3 edge devices (distributed logical
routers and edge services gateways).
n Suppose you have two clusters, one for web services and another for application services. To
have VXLAN connectivity between the VMs in these two clusters, both of the clusters must be
included in the transport zone.
n Keep in mind that all logical switches included in the transport zone will be available and visible
to all VMs within the clusters that are included in the transport zone. If a cluster includes
secured environments, you might not want to make it available to VMs in other clusters.
Instead, you can place your secure cluster in a more isolated transport zone.
n The span of the vSphere distributed switch (VDS or DVS) should match the transport zone
span. When creating transport zones in multi-cluster VDS configurations, make sure all
clusters in the selected VDS are included in the transport zone. This is to ensure that the
DLR is available on all clusters where VDS dvPortgroups are available.
The following diagram shows a transport zone correctly aligned to the VDS boundary.
(Figure: a transport zone aligned to the VDS boundary; logical switches 5001, 5002, and 5003 span all clusters in the VDS.)
If you do not follow this best practice, keep in mind that if a VDS spans more than one host cluster
and the transport zone includes only one (or a subset) of these clusters, any logical switch included
within this transport zone can access VMs within all clusters spanned by the VDS. In other words,
the transport zone will not be able to constrain the logical switch span to a subset of the clusters.
If this logical switch is later connected to a DLR, you must ensure that the router instances are
created only in the cluster included in the transport zone to avoid any Layer 3 issues.
For example, when a transport zone is not aligned to the VDS boundary, the scope of the logical
switches (5001, 5002 and 5003) and the DLR instances that these logical switches are connected
to becomes disjointed, causing VMs in cluster Comp A to have no access to the DLR logical
interfaces (LIFs).
(Figure: a transport zone that is not aligned to the VDS boundary; logical switches 5001, 5002, and 5003 reach clusters where the DLR instance is missing.)
Understanding Replication Modes
Each ESXi host prepared for NSX is configured with a VXLAN tunnel endpoint (VTEP). Each VXLAN tunnel endpoint has an IP address. These IP addresses can be in the same subnet or in different subnets.
When two VMs on different ESXi hosts communicate directly, unicast-encapsulated traffic is
exchanged between the two VTEP IP addresses without any need for flooding. However, as
with any layer 2 network, sometimes traffic from a VM must be flooded, or sent to all other VMs
belonging to the same logical switch. Layer 2 broadcast, unknown unicast, and multicast traffic are
known as BUM traffic. BUM traffic from a VM on a given host must be replicated to all other hosts
that have VMs connected to the same logical switch. NSX Data Center for vSphere supports three replication modes: unicast, hybrid, and multicast. In unicast mode, BUM replication is handled entirely by the hosts:
One subnet scenario: If all host VTEP interfaces belong to a single subnet, the source VTEP
forwards the BUM traffic to all remote VTEPs. This is known as head-end replication. Head-end
replication might result in unwanted host overhead and higher bandwidth usage. The impact depends on the amount of BUM traffic and the number of hosts and VTEPs within the subnet.
Multiple subnet scenario: If the host VTEP interfaces are grouped into multiple IP subnets, the
source host handles the BUM traffic in two parts. The source VTEP forwards the BUM traffic to
each VTEP in the same subnet (the same as the one subnet scenario). For VTEPs in remote
subnets, the source VTEP forwards the BUM traffic to one host in each remote VTEP subnet and
sets the replication bit to mark this packet for local replication. When a host in the remote subnet
receives this packet and finds the replication bit set, it sends the packet to all the other VTEPs in
its subnet where the logical switch exists.
Therefore, unicast replication mode scales well in network architectures with many VTEP IP
subnets as the load is distributed among multiple hosts.
In multicast mode, when hosts replicate BUM traffic to VTEPs in the same IP subnet, they use Layer 2 multicast; when hosts replicate BUM traffic to VTEPs in different IP subnets, they use Layer 3 multicast. In both cases, the replication of BUM traffic to remote VTEPs is handled by the physical infrastructure.
Note In multicast replication mode, the NSX Controller cluster is not used for logical switching.
Layer 2 multicast is more common in customer networks than Layer 3 multicast as it is typically
easy to deploy. The replication to different VTEPs in the same subnet is handled in the physical
network. Hybrid replication can be a significant relief for the source host for BUM traffic if there
are many peer VTEPs in the same subnet. With hybrid replication, you can scale up a dense
environment with little or no segmentation.
Add a Transport Zone
You can have only one universal transport zone in a cross-vCenter NSX environment.
Prerequisites
n In a standalone or single vCenter NSX environment there is only one NSX Manager so you do
not need to select one.
n Objects local to an NSX Manager must be managed from that NSX Manager.
n In a cross-vCenter NSX environment that does not have Enhanced Linked Mode enabled, you
must make configuration changes from the vCenter linked to the NSX Manager that you want
to modify.
n In a cross-vCenter NSX environment in Enhanced Linked Mode, you can make configuration
changes to any NSX Manager from any linked vCenter. Select the appropriate NSX Manager
from the NSX Manager drop-down menu.
Procedure
u In NSX 6.4.1 and later, navigate to Networking & Security > Installation and Upgrade >
Logical Network Settings.
u In NSX 6.4.0, navigate to Networking & Security > Installation and Upgrade > Logical
Network Preparation.
3 (Optional) If you want to configure this transport zone as a universal transport zone, make the
following selection.
n In NSX 6.4.1 and later, click the Universal Synchronization button to turn the setting on.
n Multicast: Multicast IP addresses in the physical network are used for the control
plane. This mode is recommended only when you are upgrading from older VXLAN
deployments. Requires PIM/IGMP in the physical network.
n Unicast: The control plane is handled by an NSX Controller. All unicast traffic
leverages optimized head-end replication. No multicast IP addresses or special network
configuration is required.
n Hybrid: Offloads local traffic replication to the physical network (L2 multicast). This
requires IGMP snooping on the first-hop switch and access to an IGMP querier in each
VTEP subnet, but does not require PIM. The first-hop switch handles traffic replication for
the subnet.
Important If you create a universal transport zone and select hybrid as the replication mode,
you must ensure that the multicast address used does not conflict with any other multicast
addresses assigned on any NSX Manager in the environment.
Results
Transport-Zone is a transport zone local to the NSX Manager on which it was created.
What to do next
If you added a universal transport zone, you can add universal logical switches.
If you added a universal transport zone, you can select the secondary NSX Managers and add their
clusters to the universal transport zone.
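A transport zone can also be created through the REST API instead of the vSphere Web Client. The following Python sketch posts a transport zone definition to the /api/2.0/vdn/scopes endpoint; the XML payload is an approximation of the vdnScope schema, and the cluster object ID (domain-c26), hostname, and credentials are illustrative assumptions, so check the NSX API Guide before using it.

```python
# Minimal sketch: create a transport zone (network scope) through the API.
# The payload below approximates the vdnScope schema from the NSX API
# Guide; the cluster objectId is a hypothetical example value.
import requests

NSX_MANAGER = "nsxmgr.example.com"   # hypothetical NSX Manager hostname
AUTH = ("admin", "password")

payload = """
<vdnScope>
  <name>tz-example</name>
  <description>Transport zone created through the REST API</description>
  <clusters>
    <cluster>
      <cluster>
        <objectId>domain-c26</objectId>
      </cluster>
    </cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>
"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes",
    auth=AUTH,
    data=payload,
    headers={"Content-Type": "application/xml"},
    verify=False,
)
resp.raise_for_status()
print("Created transport zone:", resp.text)  # typically the new vdnscope ID
```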
Edit a Transport Zone
Procedure
NSX 6.4.1 and later:
a Navigate to Networking & Security > Installation and Upgrade > Logical Network Settings > Transport Zones.
b Select the transport zone and click Edit.
c Edit the name, description, or replication mode of the transport zone.
Note If you change the transport zone replication mode, select Migrate existing Logical Switches to the new control plane mode to change the replication mode for existing logical switches linked to this transport zone. If you do not select this check box, only the logical switches linked to this transport zone after the edit is done will have the new replication mode.
d Click SAVE.
NSX 6.4.0:
a Navigate to Networking & Security > Installation and Upgrade > Logical Network Preparation > Transport Zones.
b Select the transport zone, and click Actions > All NSX User Interface Plugin Actions > Edit Settings.
c Edit the name, description, or replication mode of the transport zone.
Note If you change the transport zone replication mode, select Migrate existing Logical Switches to the new control plane mode to change the replication mode for existing logical switches linked to this transport zone. If you do not select this check box, only the logical switches linked to this transport zone after the edit is done will have the new replication mode.
d Click OK.
Expand a Transport Zone
Prerequisites
The clusters that you add to a transport zone must have the network infrastructure installed and must be configured for VXLAN. See the NSX Installation Guide.
Procedure
u In NSX 6.4.1 and later, navigate to Networking & Security > Installation and Upgrade >
Logical Network Settings > Transport Zones.
u In NSX 6.4.0, navigate to Networking & Security > Installation and Upgrade > Logical
Network Preparation > Transport Zones.
4 Select the clusters that you want to add to the transport zone and click OK or Save.
Contract a Transport Zone
Procedure
u In NSX 6.4.1 and later, navigate to Networking & Security > Installation and Upgrade >
Logical Network Settings > Transport Zones.
u In NSX 6.4.0, navigate to Networking & Security > Installation and Upgrade > Logical
Network Preparation > Transport Zones.
5 Click OK or Save.
6 Logical Switches
A cloud deployment or a virtual data center has a variety of applications across multiple tenants.
These applications and tenants require isolation from each other for security, fault isolation,
and avoiding overlapping IP addressing issues. The NSX logical switch creates logical broadcast
domains or segments to which an application or tenant virtual machine can be logically wired.
This allows for flexibility and speed of deployment while still providing all the characteristics of a
physical network's broadcast domains (VLANs) without physical Layer 2 sprawl or spanning tree
issues.
A logical switch is distributed and can span arbitrarily large compute clusters. This allows for
virtual machine mobility (vMotion) within the datacenter without limitations of the physical Layer
2 (VLAN) boundary. The physical infrastructure does not have to deal with MAC/FIB table limits
since the logical switch contains the broadcast domain in software.
A logical switch is mapped to a unique VXLAN, which encapsulates the virtual machine traffic and
carries it over the physical IP network.
The NSX Controller is the central control point for all logical switches within a network and maintains information about all virtual machines, hosts, logical switches, and VXLANs. The controller supports two logical switch control plane modes, Unicast and Hybrid. These modes decouple NSX from the physical network. VXLANs no longer require the physical network to support multicast in order to handle the broadcast, unknown unicast, and multicast (BUM) traffic within a logical switch. The unicast mode replicates all the BUM traffic locally on the host and requires no physical network configuration. In the hybrid mode, some of the BUM traffic replication is offloaded to the first-hop physical switch to achieve better performance. This mode requires IGMP snooping to be enabled on the first-hop physical switch. Virtual machines within a logical switch can use and send any type of traffic, including IPv6 and multicast.
You can extend a logical switch to a physical device by adding an L2 bridge. See Chapter 8 L2
Bridges.
You must have the Super Administrator or Enterprise Administrator role permissions to manage
logical switches.
Prerequisites
n You have the Super Administrator or Enterprise Administrator role permission to configure
and manage logical switches.
n VXLAN UDP port is opened on firewall rules (if applicable). The VXLAN UDP port can be
configured through the API.
n Physical infrastructure MTU is at least 50 bytes more than the MTU of the virtual machine vNIC.
n Managed IP address is set for each vCenter Server in the vCenter Server Runtime Settings. See
vCenter Server and Host Management.
n DHCP is available on VXLAN transport VLANs if you are using DHCP for IP assignment for
VMKNics.
n A consistent distributed virtual switch type (vendor, and so on) and version is being used
across a given transport zone. Inconsistent switch types can lead to undefined behavior in your
logical switch.
n You have configured an appropriate LACP teaming policy and connected physical NICs to the
ports. For more information on teaming modes, refer to the VMware vSphere documentation.
n 5-tuple hash distribution is enabled for Link Aggregation Control Protocol (LACP).
n Verify that for every host where you want to use LACP, a separate LACP port channel exists on
the distributed virtual switch.
n For multicast mode, multicast routing is enabled if VXLAN traffic is traversing routers. You
have acquired a multicast address range from your network administrator.
n Port 1234 (the default controller listening port) is opened on firewall for the ESXi host to
communicate with controllers.
n (Recommended) For multicast and hybrid modes, you have enabled IGMP snooping on the
L2 switches to which VXLAN participating hosts are attached. If IGMP snooping is enabled on
L2, IGMP querier must be enabled on the router or L3 switch with connectivity to multicast
enabled networks.
When you create a logical switch, in addition to selecting a transport zone and replication mode,
you configure two options: IP discovery, and MAC learning.
IP discovery minimizes ARP traffic flooding within individual VXLAN segments, in other words, between VMs connected to the same logical switch. IP discovery is enabled by default.
Note You cannot disable IP discovery when you create a universal logical switch. You can disable
IP discovery via the API after the universal logical switch is created. This setting is managed
separately on each NSX Manager. See the NSX API Guide.
MAC learning builds a VLAN/MAC pair learning table on each vNIC. This table is stored as part
of the dvfilter data. During vMotion, dvfilter saves and restores the table at the new location. The
switch then issues RARPs for all the VLAN/MAC entries in the table. You might want to enable
MAC learning if you are using virtual NICs that are trunking VLANs.
Prerequisites
Table 6-1. Prerequisites for Creating a Logical Switch or Universal Logical Switch

Logical switch:
n vSphere distributed switches must be configured.
n NSX Manager must be installed.
n Controllers must be deployed.
n Host clusters must be prepared for NSX.
n VXLAN must be configured.
n A segment ID pool must be configured.
n A transport zone must be created.

Universal logical switch:
n vSphere distributed switches must be configured.
n NSX Manager must be installed.
n Controllers must be deployed.
n Host clusters must be prepared for NSX.
n VXLAN must be configured.
n A primary NSX Manager must be assigned.
n A universal segment ID pool must be configured.
n A universal transport zone must be created.
n In a standalone or single vCenter NSX environment there is only one NSX Manager so you do
not need to select one.
n Objects local to an NSX Manager must be managed from that NSX Manager.
n In a cross-vCenter NSX environment that does not have Enhanced Linked Mode enabled, you
must make configuration changes from the vCenter linked to the NSX Manager that you want
to modify.
n In a cross-vCenter NSX environment in Enhanced Linked Mode, you can make configuration
changes to any NSX Manager from any linked vCenter. Select the appropriate NSX Manager
from the NSX Manager drop-down menu.
Procedure
2 Select the NSX Manager on which you want to create a logical switch. To create a universal
logical switch, you must select the primary NSX Manager.
5 Select the transport zone in which you want to create the logical switch. If you select a
universal transport zone, a universal logical switch is created.
By default, the logical switch inherits the control plane replication mode from the transport
zone. You can change it to one of the other available modes. The available modes are unicast,
hybrid, and multicast.
If you create a universal logical switch and select hybrid as the replication mode, you must
ensure that the multicast address used does not conflict with other multicast addresses
assigned on any NSX Manager in the cross-vCenter NSX environment.
The logical switch and the universal logical switch have segment IDs from different segment ID
pools.
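A logical switch can also be created programmatically by posting to the transport zone (scope) that it should belong to. The following Python sketch assumes the /api/2.0/vdn/scopes/{scopeId}/virtualwires endpoint and the virtualWireCreateSpec payload from the NSX API Guide; the scope ID, tenant ID, hostname, and credentials are placeholder assumptions.

```python
# Minimal sketch: create a logical switch in an existing transport zone.
# The scope ID and payload fields are illustrative; verify the
# virtualWireCreateSpec schema in the NSX API Guide for your version.
import requests

NSX_MANAGER = "nsxmgr.example.com"   # hypothetical NSX Manager hostname
AUTH = ("admin", "password")
SCOPE_ID = "vdnscope-1"              # transport zone that will own the switch

payload = """
<virtualWireCreateSpec>
  <name>ls-web-tier</name>
  <description>Logical switch created through the REST API</description>
  <tenantId>example-tenant</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    auth=AUTH,
    data=payload,
    headers={"Content-Type": "application/xml"},
    verify=False,
)
resp.raise_for_status()
print("Created logical switch:", resp.text)  # typically the new virtualwire ID
```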
What to do next
Create a logical router and attach it to your logical switches to enable connectivity between VMs
that are connected to different logical switches.
Create a universal logical router and attach it to your universal logical switches to enable
connectivity between VMs that are connected to different universal logical switches.
Procedure
1 In Logical Switches, select the logical switch to which you want to connect an NSX Edge.
3 Select the NSX Edge to which you want to connect the logical switch.
4 Select the interface that you want to connect to the logical switch.
An interface can have multiple non-overlapping subnets. Enter one primary IP address
and a comma-separated list of multiple secondary IP addresses. NSX Edge considers the
primary IP address as the source address for locally generated traffic. You must add an IP
address to an interface before using it on any feature configuration.
If the NSX Edge to which you are connecting the logical switch has Manual HA
Configuration selected, specify two management IP addresses in CIDR format.
e Enter the subnet prefix length or subnet mask for the interface.
f If you are using NSX 6.4.4 or later, click the Advanced tab, and then continue with the
remaining steps in this procedure. If you are using NSX 6.4.3 or earlier, go directly to the
next step.
Option Description
Reverse Path Filter Verifies the reachability of the source address in packets being forwarded.
In enabled mode, the packet must be received on the interface that the
router would use to forward the return packet. In loose mode, the source
address must appear in the routing table.
Configure fence parameters when you want to reuse IP and MAC addresses across different
fenced environments. For example, in a cloud management platform (CMP), fencing allows
you to run several cloud instances simultaneously with the same IP and MAC addresses
isolated or "fenced".
Prerequisites
One or more third party virtual appliances must have been installed in your infrastructure.
Procedure
1 In Logical Switches, select the logical switch on which you want to deploy services.
3 Select the service and service profile that you want to apply.
4 Click OK.
Procedure
1 In Logical Switches, select the logical switch to which you want to add virtual machines.
3 Select one or more virtual machines you want to add to the logical switch.
4 Select a vNIC for each VM that you connected to the logical switch.
5 Review the VMs and vNICs that you selected, and then click Finish.
1 In Logical Switches, double-click the logical switch that you want to test for connectivity.
n VXLAN Standard: The standard size is 1550 bytes. This packet size matches the
physical infrastructure MTU without fragmentation. This packet size allows NSX to check
connectivity and verify that the infrastructure is prepared for VXLAN traffic.
n Minimum: This packet size allows fragmentation. Hence, with the packet size minimized,
NSX can check connectivity, but not whether the infrastructure is ready for a larger frame
size.
SpoofGuard allows you to authorize the IP addresses reported by VMware Tools or IP discovery,
and alter them if necessary to prevent spoofing. SpoofGuard inherently trusts the MAC addresses
of virtual machines collected from the VMX files and vSphere SDK. SpoofGuard operates separately
from firewall rules, and you can use it to block traffic identified as spoofed.
Procedure
1 In Logical Switches, select the logical switch that you want to edit.
In the starting topology, Cluster1 (on vDS1) and Cluster2 (on vDS2) each attach to a physical
switch, and three VLAN-backed networks exist: Engineering (VLAN 10), Finance (VLAN 20), and
Marketing (VLAN 30).
ACME is running out of compute space on Cluster1 while Cluster2 is under-utilized. The ACME
network supervisor asks John Admin (ACME's virtualization administrator) to figure out a way
to extend the Engineering department to Cluster2 in a way that virtual machines belonging to
Engineering on both clusters can communicate with each other. This would enable ACME to utilize
the compute capacity of both clusters by stretching ACME's L2 layer.
If John Admin were to do this the traditional way, he would need to connect the separate VLANs
in a special way so that the two clusters can be in the same L2 domain. This might require ACME
to buy a new physical device to separate traffic, and lead to issues such as VLAN sprawl, network
loops, and administration and management overhead.
John Admin remembers seeing a logical network demo at VMworld, and decides to evaluate
NSX. He concludes that building a logical switch across dvSwitch1 and dvSwitch2 will allow him to
stretch ACME's L2 layer. Since John can leverage the NSX controller, he will not have to touch
ACME's physical infrastructure as NSX works on top of existing IP networks.
With NSX, a logical switch stretches across both clusters and their VLANs/subnets: Engineering
moves to VXLAN 5000, while Finance (VLAN 20) and Marketing (VLAN 30) remain on their existing
VLANs.
Once John Admin builds a logical switch across the two clusters, he can vMotion virtual machines
from one cluster to another while keeping them attached to the same logical switch.
The Engineering network (VXLAN 5000) now spans vDS1 and vDS2, while Finance (VLAN 20) and
Marketing (VLAN 30) remain unchanged on the physical switches.
Let us walk through the steps that John Admin follows to build a logical network at ACME
Enterprise.
Prerequisites
1 John Admin verifies that dvSwitch1 and dvSwitch2 are vSphere Distributed Switches.
2 John Admin sets the Managed IP address for the vCenter Server.
c Click OK.
3 John Admin installs the network virtualization components on Cluster1 and Cluster 2. See NSX
Installation Guide.
4 John Admin gets a segment ID pool (5000 - 5250) from ACME's NSX Manager administrator.
Since he is leveraging the NSX controller, he does not require multicast in his physical
network.
5 John Admin creates an IP pool so that he can assign a static IP address to the VXLAN VTEPs
from this IP pool. See Add an IP Pool.
Procedure
1 In the vSphere Web Client, click Networking & Security > Installation and Upgrade.
2 Click the Logical Network Preparation tab and then click Segment ID.
3 Click Edit.
6 Click OK.
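For scripted setups, the segment ID pool can also be created through the NSX REST API. The following is a sketch only: nsxmgr.example.com is a placeholder for the NSX Manager address, the range matches the 5000-5250 pool used in this scenario, and you should verify the element names in the NSX API Guide.
POST https://nsxmgr.example.com/api/2.0/vdn/config/segments
<!-- placeholder pool name; the range below matches the scenario -->
<segmentRange>
    <name>ACME-segment-pool</name>
    <begin>5000</begin>
    <end>5250</end>
</segmentRange>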
Procedure
3 In the Configuring VXLAN networking dialog box, select dvSwitch1 as the vSphere Distributed
Switch for the cluster.
5 In Specify Transport Attributes, leave 1600 as the Maximum Transmission Units (MTU) for
dvSwitch1.
MTU is the maximum amount of data that can be transmitted in one packet before it is divided
into smaller packets. John Admin knows that VXLAN logical switch traffic frames are slightly
larger in size because of the encapsulation, so the MTU for each switch must be set to 1550 or
higher.
John Admin wants to maintain the quality of service in his network by keeping the
performance of logical switches the same in normal and fault conditions. Hence, he chooses
Failover as the teaming policy.
8 Click Add.
Results
After John Admin maps Cluster1 and Cluster2 to the appropriate switch, the hosts on those
clusters are prepared for logical switches:
1 A VXLAN kernel module and vmknic are added to each host in Cluster1 and Cluster2.
2 A special dvPortGroup is created on the vSwitch associated with the logical switch, and the
vmknic is connected to it.
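To spot-check the result from the ESXi shell of a prepared host, John Admin can run commands similar to the following. This is a verification sketch only; the exact output varies by NSX and ESXi version.
# Show the VXLAN configuration that NSX applied to the distributed switch on this host
esxcli network vswitch dvs vmware vxlan list
# Confirm that a VTEP vmknic was created and received an address from the VTEP IP pool
esxcli network ip interface ipv4 get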
Procedure
7 Click OK.
Procedure
1 Click Logical Switches and then click the New Logical Network icon.
5 Click OK.
NSX creates a logical switch providing L2 connectivity between dvSwitch1 and dvSwitch2.
What to do next
John Admin can now connect ACME's production virtual machines to the logical switch, and
connect the logical switch to an NSX Edge services gateway or Logical Router.
Configuring Hardware Gateway
7
Hardware gateway configuration maps physical networks to virtual networks. The mapping
configuration allows NSX to leverage the Open vSwitch Database (OVSDB).
The OVSDB database contains information about the physical hardware and the virtual network.
The vendor hardware hosts the database server.
The hardware gateway switches in the NSX logical networks terminate VXLAN tunnels. To
the virtual network, the hardware gateway switches are known as hardware VTEPs. For more
information about VTEPs, see the NSX Installation Guide and the NSX Network Virtualization Design
Guide.
n Physical server
n IP network
The sample topology with a hardware gateway shows HV1 and HV2 as the two hypervisors.
The VM1 virtual machine is on HV1. VTEP1 is on HV1, VTEP2 is on HV2, and VTEP3 is on the
hardware gateway. The hardware gateway is located in a different subnet 211 compared to the two
hypervisors that are located in the same subnet 221.
In the diagram, VM1 attaches to the WebService logical switch (VNI 5000), the VLAN-Server
attaches to VLAN 160 through port Ethernet18 on the hardware gateway (VTEP3), and the NSX
Controllers manage the control plane.
The hardware gateway underlying configuration can have any one of the following components:
n Single switch
The NSX Controller communicates with the hardware gateway using its IP address on port 6640.
This connection is used to send and receive OVSDB transactions from hardware gateways.
The sample topology shows that virtual machine VM1 and VLAN-Server are configured with an
IP address in the subnet 10. VM1 is attached to WebService logical switch. The VLAN-Server is
attached to VLAN 160 on the physical server.
Prerequisites
n Verify that you meet the NSX system and hardware requirements for hardware gateway
configuration. See Chapter 1 System Requirements for NSX Data Center for vSphere.
n Verify that the logical networks are set up properly. See the NSX Installation guide.
n Verify that the transport parameter mappings in the VXLAN are accurate. See the NSX
Installation guide.
n Verify that the VXLAN port value is set to 4789. See Change VXLAN Port.
Procedure
Note Hypervisors (including the replication nodes) and the hardware gateway switches must not
be on the same IP subnet. This restriction is due to a limitation of the chipset used in most
hardware gateways. Most hardware gateways, if not all, use the Broadcom Trident II chipset,
which requires a layer 3 underlay network between the hardware gateway and the hypervisors.
Important Through the NSX user interface, you can view and manage a single default replication
cluster, but not multiple replication clusters. Support for multiple replication clusters is available
through the API. See Working With a Specific Hardware Gateway Replication Cluster in the NSX
API Guide.
Prerequisites
Procedure
4 Click Edit in the Replication Cluster section to select hypervisors to serve as replication nodes
in this replication cluster.
6 Click OK.
Results
The replication nodes are added to the replication cluster. At least one host must exist in the
replication cluster.
The Controller passively listens for the connection attempt from the physical switch. Therefore, the
hardware gateway must use the OVSDB manager table to initiate the connection.
Prerequisites
Controllers must be deployed before any hardware gateway instances are configured. If
controllers are not deployed first, the error message "Failed to do the Operation on the
Controller" is shown.
Procedure
1 Use the commands that apply to your environment to connect the hardware gateway to the
NSX Controller.
prmh-nsx-tor-7050sx-3#enable
prmh-nsx-tor-7050sx-3#configure terminal
prmh-nsx-tor-7050sx-3(config)#cvx
prmh-nsx-tor-7050sx-3(config-cvx)#service hsc
prmh-nsx-tor-7050sx-3(config-cvx-hsc)#manager [Link] 6640
prmh-nsx-tor-7050sx-3(config-cvx-hsc)#no shutdown
prmh-nsx-tor-7050sx-3(config-cvx-hsc)#end
4 (Optional) Verify that the hardware gateway is connected to the NSX Controller through the
OVSDB channel.
n Ping between VM1 and the server on VLAN 160 to verify that the connection succeeds.
5 (Optional) Verify that the hardware gateway is connected to the correct NSX Controller.
b Select Networking & Security > Installation and Upgrade > Management > NSX
Controller nodes.
Prerequisites
Verify that the hardware gateway certificate from your environment is available.
Procedure
4 Click the Add icon to create the hardware gateway profile details.
Option Description
Certificate Paste the certificate that you extracted from your environment.
5 Click OK.
6 Refresh the screen to verify that the hardware gateway is available and running.
7 (Optional) Right-click the hardware gateway profile and select View the BFD Tunnel
Status from the drop-down menu.
The dialog box shows diagnostic tunnel status details for troubleshooting.
Note If you bind multiple logical switches to hardware ports, you must apply these steps for each
logical switch.
Prerequisites
n Verify that the WebService logical switch is available. See Add a Logical Switch.
Procedure
3 Locate the WebService logical switch and right-click to select Manage Hardware Bindings from
the drop-down menu.
5 Click the Add icon and select the physical switch from the drop-down menu.
6 Click Select to choose a physical port from the Available Objects list.
7 Click OK.
9 Click OK.
Results
The NSX Controller synchronizes the physical and logical configuration information with the
hardware gateway.
L2 Bridges
8
You can create an L2 bridge between a logical switch and a VLAN, which enables you to migrate
virtual workloads to physical devices with no impact on IP addresses.
A Layer 2 bridge enables connectivity between the virtual and physical network by enabling virtual
machines (VMs) to be connected to a physical server or network. Use cases include:
n Insertion into NSX of an appliance that cannot be virtualized, and that requires L2 connectivity
with its clients. This is common for some physical database servers.
n Service insertion. An L2 bridge allows you to transparently integrate into NSX any physical
appliance, such as a router, load balancer, or firewall.
A logical network can leverage a physical L3 gateway and access existing physical networks
and security resources by bridging the logical switch broadcast domain to the VLAN broadcast
domain. The L2 bridge runs on the host that has the NSX DLR control virtual machine. An L2
bridge instance maps to a single VLAN, but there can be multiple bridge instances. The VLAN
port group and VXLAN logical switch that are bridged must be on the same vSphere distributed
switch (VDS), and both must share the same physical NICs.
VXLAN (VNI) network and VLAN-backed port groups must be on the same distributed virtual
switch (VDS).
The diagram shows an NSX Edge logical router in a compute rack bridging virtual machines to a
physical workload and a physical gateway.
Note that you should not use an L2 bridge to connect a logical switch to another logical switch,
a VLAN network to another VLAN network, or to interconnect datacenters. Also, you cannot use
a universal logical router to configure bridging and you cannot add a bridge to a universal logical
switch.
n Add L2 Bridge
Add L2 Bridge
You can add a bridge from a logical switch to a distributed virtual port group.
Prerequisites
The logical switch and the VLAN-backed distributed virtual port group that are to be bridged
together must exist on the same Virtual Distributed Switch (VDS).
A DLR Control VM must be deployed in your environment, on a hypervisor where the VDS that
contains the logical switch and the VLAN-backed distributed virtual port group is instantiated.
You cannot use a universal distributed logical router to configure bridging, and you cannot add a
bridge to a universal logical switch.
Caution Bridged traffic enters and leaves an ESXi host through the uplink port on the dvSwitch
that is used for the VXLAN traffic. VDS teaming or failover policy for VLAN is not used for the
bridged traffic.
Procedure
5 Click Add.
Caution Bridge name must not exceed 40 characters. Bridge configuration fails when the
name exceeds 40 characters.
7 Select the logical switch that you want to create a bridge for.
8 Select the distributed virtual port group to which you want to bridge the logical switch.
You can use a logical switch to participate in both distributed logical routing and layer 2 bridging.
Therefore, the traffic from the bridged logical switch does not need to flow through the centralized
Edge VM. The traffic from the bridged logical switch can flow to the physical VLAN through the L2
bridge instance. The bridge instance gets enabled on the ESXi host where the DLR control VM is
running.
For more information about L2 bridging in NSX, see the "NSX Distributed Routing and Layer
2 Bridging Integration" section in the NSX Network Virtualization Design Guide at https://
[Link]/docs/DOC-27683.
Tip You can create multiple sets of layer 2 bridging instances and associate them with different
DLRs. By following this practice, you can spread the bridging load across different ESXi hosts.
Prerequisites
n You cannot use a universal logical router to configure bridging, and you cannot add a bridge
to a universal logical switch.
Procedure
3 Double-click the distributed logical router that you want to use for bridging.
Note The bridge instance must be created in the same routing instance to which the VXLAN
is connected. One bridge instance can have one VXLAN and one VLAN, and the VXLAN and
VLAN cannot overlap. The same VXLAN and VLAN cannot connect to more than one bridge
instance.
5 Click Add.
Caution Bridge name must not exceed 40 characters. Bridge configuration fails when the
name exceeds 40 characters.
7 Select the logical switch that you want to create a bridge for.
8 Select the distributed virtual port group to which you want to bridge the logical switch.
9 Click Publish for the changes to the bridging configuration to take effect.
The logical switch that is used for bridging appears with Routing Enabled specified. For more
information, see Add a Logical Switch and Connect Virtual Machines to a Logical Switch.
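The bridging configuration of a DLR can also be read or written through the NSX REST API, which is convenient for automation. The following sketch is illustrative only: nsxmgr.example.com, edge-1, virtualwire-5, and dvportgroup-20 are placeholder values, and the exact element names should be confirmed in the NSX API Guide before use.
PUT https://nsxmgr.example.com/api/4.0/edges/edge-1/bridging/config
<!-- placeholder IDs: use the logical switch (virtualwire) and dvportgroup IDs from your environment -->
<bridges>
    <enabled>true</enabled>
    <bridge>
        <name>engineering-bridge</name>
        <virtualWire>virtualwire-5</virtualWire>
        <dvportGroup>dvportgroup-20</dvportGroup>
    </bridge>
</bridges>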
With Receive Side Scaling (RSS) technology, you can spread incoming traffic across different
receive descriptor queues. If you assign each queue to a different CPU core, the incoming traffic
can be load balanced, improving performance.
However, RSS does not work well with unknown unicast and multicast traffic. These packets end
up in the default queue processed by a single CPU core, which leads to low throughput. Most of
the packets received by the ESXi host performing VLAN-VXLAN bridging belong to this category,
so bridging throughput is low.
Some physical NIC vendors support a feature called Default Queue Receive Side Scaling (DRSS).
Using DRSS, you can configure multiple hardware queues backing up the default RX queue,
spreading VLAN-VXLAN flows across multiple CPU cores.
For physical NICs that do not support DRSS (for example, ixgbe, ixgben), you can use Software
Receive Side Scaling (SoftRSS) to improve bridging network throughput.
SoftRSS offloads the handling of individual flows to one of multiple kernel worlds, so the
thread that pulls packets from the NIC can process more packets. Similar to RSS, the network
throughput improvement when using SoftRSS correlates linearly with CPU utilization.
Prerequisites
n Enable SoftRSS on the ESXi hosts on which the active/standby bridge exists (where the
DLR Control VMs are hosted). If the Control VMs might be migrated using vMotion, enable
SoftRSS on all the hosts in the cluster.
n If Default Queue Receive Side Scaling (DRSS) is supported on the host physical NIC,
enable that and do not enable SoftRSS. Use esxcli system module parameters list
-m [nic_module] to verify if DRSS is supported.
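For example, assuming a hypothetical host whose uplink uses the i40en driver module, the following command lists the module parameters; if a DRSS parameter appears in the output, the driver supports Default Queue Receive Side Scaling. Substitute the driver module that backs your uplink NIC.
# List the parameters exposed by the NIC driver module and look for a DRSS entry (i40en is a placeholder module name)
esxcli system module parameters list -m i40en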
Procedure
You can increase the number of worlds up to a maximum of 16. You might want to increase
the number if the uplink can support a higher rate (using link aggregation) or if you notice an
uneven distribution of flows to the worlds based on your traffic pattern.
Routing
9
You can specify static and dynamic routing for each NSX Edge.
Dynamic routing provides the necessary forwarding information between Layer 2 broadcast
domains, thereby allowing you to decrease Layer 2 broadcast domains and improve network
efficiency and scale. NSX extends this intelligence to where the workloads reside for doing East-
West routing. This allows more direct virtual machine to virtual machine communication without
the added cost or time needed to extend hops. At the same time, NSX also provides North-South
connectivity, thereby enabling tenants to access public networks.
n Configure BGP
You can create a distributed logical router on any of the NSX Managers in a cross-vCenter NSX
environment, but universal distributed logical routers can be created only on the primary NSX Manager.
Note Starting in NSX Data Center 6.4.4, the term "Logical Router" is replaced with "Distributed
Logical Router" in the vSphere Web Client. In the documentation, both terms are used
interchangeably; however, they refer to the same object.
n NSX Data Center for vSphere 6.2 and later allows logical router-routed logical interfaces (LIFs)
to be connected to a VXLAN that is bridged to a VLAN.
n Logical router interfaces and bridging interfaces cannot be connected to a dvPortgroup with
the VLAN ID set to 0.
n A given logical router instance cannot be connected to logical switches that exist in different
transport zones. This is to ensure that all logical switches and logical router instances are
aligned.
n A logical router cannot be connected to VLAN-backed port groups if that logical router is
connected to logical switches spanning more than one vSphere distributed switch (VDS). This
is to ensure correct alignment of logical router instances with logical switch dvPortgroups
across hosts.
n Logical router interfaces must not be created on two different distributed port groups
(dvPortgroups) with the same VLAN ID if the two networks are in the same vSphere
distributed switch.
n Logical router interfaces should not be created on two different dvPortgroups with the same
VLAN ID if two networks are in different vSphere distributed switches, but the two vSphere
distributed switches share identical hosts. In other words, logical router interfaces can be
created on two different networks with the same VLAN ID if the two dvPortgroups are in
two different vSphere distributed switches, as long as the vSphere distributed switches do not
share a host.
n If VXLAN is configured, logical router interfaces must be connected to distributed port groups
on the vSphere Distributed Switch where VXLAN is configured. Do not connect logical router
interfaces to port groups on other vSphere Distributed Switches.
The following list describes feature support by interface type (uplink and internal) on the logical
router:
n Dynamic routing protocols (BGP and OSPF) are supported only on uplink interfaces.
n Firewall rules are applicable only on uplink interfaces and are limited to control and
management traffic that is destined to the Edge virtual appliance.
n For more information about the DLR Management Interface, see the Knowledge Base Article
"Management Interface Guide: DLR Control VM - NSX" [Link]
Important If you enable high availability on an NSX Edge in a cross-vCenter NSX environment,
both the active and standby NSX Edge Appliances must reside in the same vCenter Server. If you
migrate one of the appliances of an NSX Edge HA pair to a different vCenter Server, the two HA
appliances no longer operate as an HA pair, and you might experience traffic disruption.
Attention vSphere Fault Tolerance does not work with the logical router control VM.
Prerequisites
n You must create a local segment ID pool, even if you have no plans to create logical switches.
n Make sure that the controller cluster is up and available before creating or changing a logical
router configuration. A logical router cannot distribute routing information to hosts without
the help of NSX controllers. A logical router relies on NSX controllers to function, while Edge
Services Gateways (ESGs) do not.
n The destination host must be part of the same transport zone as the logical switches
connected to the new logical router's interfaces.
n Avoid placing the logical router appliance on the same host as one or more of its upstream ESGs if you use
ESGs in an ECMP setup. You can use DRS anti-affinity rules to enforce this practice,
reducing the impact of host failure on logical router forwarding. This guideline does not
apply if you have one upstream ESG by itself or in HA mode. For more information, see
the NSX Network Virtualization Design Guide at [Link]
DOC-27683.
n Verify that the host cluster on which you install the logical router appliance is prepared for NSX
Data Center for vSphere. See "Prepare Host Clusters for NSX" in the NSX Installation Guide.
n In a standalone or single vCenter NSX environment there is only one NSX Manager so you
do not need to select one.
n Objects local to an NSX Manager must be managed from that NSX Manager.
n In a cross-vCenter NSX environment that does not have Enhanced Linked Mode enabled,
you must make configuration changes from the vCenter linked to the NSX Manager that
you want to modify.
n If you are adding a universal logical router, determine if you need to enable local egress. Local
egress allows you to selectively send routes to hosts. You may want this ability if your NSX
deployment spans multiple sites. See Cross-vCenter NSX Topologies for more information.
You cannot enable local egress after the universal logical router has been created.
Procedure
1 In the vSphere Web Client, navigate to Home > Networking & Security > NSX Edges.
2 Select the appropriate NSX Manager on which to make your changes. If you are creating a
universal logical router, you must select the primary NSX Manager.
3 Click Add, and then select the type of logical router you want to add:
n Select Logical (Distributed) Router to add a logical router local to the selected NSX
Manager.
n Select Universal Logical (Distributed) Router to add a logical router that can span a cross-
vCenter NSX environment. This option is available only if you have assigned a primary NSX
Manager, and are making changes from the primary NSX Manager. If you select Universal
Logical (Distributed) Router, you can optionally enable local egress.
Option Description
Name Enter a name for the logical router as you want it to appear in the vCenter
inventory.
Make sure that this name is unique across all logical routers within a single
tenant.
Host Name Optional. Enter a host name that you want to display for the logical router in
the CLI.
If you do not enter a host name, the Edge ID that is created automatically is
displayed in the CLI.
Deploy Edge Appliance By default, this option is selected. An Edge appliance (also called a logical
router virtual appliance) is required for dynamic routing and the logical router
appliance's firewall, which applies to logical router pings, SSH access, and
dynamic routing traffic.
If you require only static routes, and do not want to deploy an Edge
appliance, deselect this option. You cannot add an Edge appliance to the
logical router after the logical router is created.
Option Description
High Availability Optional. By default, HA is disabled. Select this option to enable and
configure HA on the logical router.
If you are planning to do dynamic routing, HA is required.
5 Specify the CLI settings and other settings of the logical router.
Option Description
User Name Enter a user name that you want to use for logging in to the Edge CLI.
Password Enter a password that is at least 12 characters long and satisfies these rules:
n Must not exceed 255 characters
n At least one uppercase letter and one lowercase letter
n At least one number
n At least one special character
n Must not contain the user name as a substring
n Must not consecutively repeat a character 3 or more times.
SSH access Optional. By default, SSH access is disabled. If you do not enable SSH, you
can still access the logical router by opening the virtual appliance console.
Enabling SSH causes the SSH process to run on the logical router. You must
adjust the logical router firewall configuration manually to allow SSH access
to the logical router's protocol address. The protocol address is configured
when you configure dynamic routing on the logical router.
Edge control level logging Optional. By default, the log level is info.
u If you did not select Deploy Edge Appliance, you cannot add an appliance. Click Next to
continue with the configuration.
u If you selected Deploy Edge Appliance, enter the settings of the logical router virtual
appliance.
For example:
Option Value
Datastore ds-1
Host [Link]
See "Managing NSX Edge Appliance Resource Reservations" in the NSX Administration Guide
for more information on Resource Reservation.
If you selected Deploy Edge Appliance, you must connect the HA interface to a distributed
port group or a logical switch. If you are using this interface as an HA interface only, use a
logical switch. A /30 subnet is allocated from the link local range [Link]/16 and is used to
provide an IP address for each of the two NSX Edge appliances.
Optionally, if you want to use this interface to connect to the NSX Edge, you can specify an
extra IP address and prefix for the HA interface.
Note Before NSX Data Center for vSphere 6.2, HA interface was called management
interface. You cannot do an SSH connection to a HA interface from anywhere that is not on the
same IP subnet as the HA interface. You cannot configure static routes that point out of the
HA interface, which means that RPF will drop incoming traffic. However, you can, in theory,
disable RPF, but this action is counterproductive to high availability. For SSH access, you can
also use the logical router's protocol address, which is configured later when you configure
dynamic routing.
In NSX Data Center for vSphere 6.2 and later, the HA interface of a logical router is
automatically excluded from route redistribution.
For example, the following table shows a sample HA interface configuration where the HA
interface is connected to a management dvPortgroup.
Option Description
Connected To Mgmt_VDS-Mgmt
IP Address [Link]*
Option Description
Connected To Select the distributed virtual port group or the logical switch to which you
want to connect this interface.
Option Description
c (Optional) Edit the default MTU value, if necessary. The default value for both uplink and
internal interface is 1500.
The following table shows an example of two internal interfaces (app and web) and one
uplink interface (to-ESG).
For example:
Option Value
vNIC Uplink
Gateway IP [Link]
MTU 1500
10 Make sure that the VMs connected to the logical switches have their default gateways set
properly to the logical router interface IP addresses.
Results
In the following example topology, the default gateway of app VM is [Link]. The default
gateway of web VM is [Link]. Make sure the VMs can ping their default gateways and each
other.
In the diagram, the logical router has internal interfaces on the App and Web logical switches, and
the app VM and web VM are connected to those switches.
Connect to the NSX Manager using SSH or the console, and run the following commands:
n List the hosts that have received routing information for the logical router from the controller
cluster.
The output includes all hosts from all host clusters that are configured as members of the
transport zone that owns the logical switch that is connected to the specified logical router
(edge-1 in this example).
n List the routing table information that is communicated to the hosts by the logical router.
Routing table entries should be consistent across all the hosts.
n List additional information about the router from the point of view of one of the hosts. This
output is helpful to learn which controller is communicating with the host.
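As a sketch, the central CLI commands for these three checks typically look like the following. The IDs edge-1 and host-25 are the example identifiers used in this section, and the exact syntax can differ between NSX versions, so verify it in the NSX Command Line Interface Reference.
Hosts that have received routing information for this logical router:
show logical-router list dlr edge-1 host
Routing table pushed to a specific host:
show logical-router host host-25 dlr edge-1 route
Detailed per-host view, including the controller that serves this DLR instance:
show logical-router host host-25 dlr edge-1 verbose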
Check the Controller IP field in the output of the show logical-router host host-25 dlr
edge-1 verbose command.
SSH to a controller, and run the following commands to display the controller's learned VNI,
VTEP, MAC, and ARP table state information.
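A sketch of these controller commands for VNI 5000 follows; confirm the exact subcommand names for your controller version in the NSX Command Line Interface Reference.
Summary for the VNI, including the connection count and the controller that owns it:
show control-cluster logical-switches vni 5000
VTEP, MAC, and ARP tables learned by the controller for the VNI:
show control-cluster logical-switches vtep-table 5000
show control-cluster logical-switches mac-table 5000
show control-cluster logical-switches arp-table 5000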
The output for VNI 5000 shows zero connections and lists controller [Link] as the
owner for VNI 5000. Log in to that controller to gather further information for VNI 5000.
Because [Link] owns all three VNI connections, we expect to see zero connections on
the other controller, [Link].
n Before checking the MAC and ARP tables, ping from one VM to the other VM.
Check the logical router information. Each logical router instance is served by one of the controller
nodes.
The interface-summary subcommand displays the LIFs that the controller learned from the NSX
Manager. This information is sent to the hosts that are in the host clusters managed under the
transport zone.
The routes subcommand shows the routing table that is sent to this controller by the logical
router's virtual appliance (also known as the control VM). Unlike on the ESXi hosts, this routing
table does not include directly connected subnets because this information is provided by the LIF
configuration. Route information on the ESXi hosts includes directly connected subnets because in
that case it is a forwarding table used by ESXi host’s datapath.
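These controller-side checks typically look like the following sketch. The logical router instance identifier passed to the interface-summary and routes subcommands comes from the instance listing; <lr-instance-id> is a placeholder, and you should verify the exact argument format in the NSX Command Line Interface Reference.
List the DLR instances known to this controller:
show control-cluster logical-routers instance all
LIFs that the controller learned from NSX Manager for one instance:
show control-cluster logical-routers interface-summary <lr-instance-id>
Routing table received from the DLR control VM for that instance:
show control-cluster logical-routers routes <lr-instance-id>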
[root@comp02a:~] esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
[Link] [Link] Local Subnet vmk1
[Link] [Link] Local Subnet vmk0
default [Link] [Link] vmk0
These Host-IP addresses are vmk0 interfaces, not VTEPs. Connections between ESXi hosts
and controllers are created on the management network. The port numbers here are
ephemeral TCP ports that are allocated by the ESXi host IP stack when the host establishes a
connection with the controller.
n On the host, you can view the controller network connection matched to the port number.
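One way to do this, as a sketch, is to filter the host's TCP connection table by the controller IP address; <controller-ip> is a placeholder for the address of the controller that serves the logical switch or router.
# List host TCP connections and keep only those to the controller (placeholder IP)
esxcli network ip connection list | grep <controller-ip>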
n Display active VNIs on the host. Observe how the output is different across hosts. Not all VNIs
are active on all hosts. A VNI is active on a host if the host has a VM that is connected to the
logical switch.
[root@[Link]:~] # esxcli network vswitch dvs vmware vxlan network list --vds-name
Compute_VDS
The output columns are VXLAN ID, Multicast IP, Control Plane, Controller Connection, Port Count,
MAC Entry Count, ARP Entry Count, and VTEP Count.
Note To enable the vxlan namespace in vSphere 6.0 and later, run the /etc/init.d/hostd
restart command.
For logical switches in hybrid or unicast mode, the esxcli network vswitch dvs vmware
vxlan network list --vds-name <vds-name> command contains the following output:
n Multicast proxy and ARP proxy are listed. ARP proxy is listed even if you disabled IP
discovery.
n If a logical router is connected to the ESXi host, the Port Count is at least 1, even if there
are no VMs on the host connected to the logical switch. This one port is the vdrPort, which
is a special dvPort connected to the logical router kernel module on the ESXi host.
n First ping from VM to another VM on a different subnet and then display the MAC table. Note
that the Inner MAC is the VM entry while the Outer MAC and Outer IP refer to the VTEP.
~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --
vxlan-id=5000
Inner MAC Outer MAC Outer IP Flags
----------------- ----------------- -------------- --------
[Link] [Link] [Link] 00000111
~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=Compute_VDS --
vxlan-id=5001
Inner MAC Outer MAC Outer IP Flags
----------------- ----------------- -------------- --------
[Link] [Link] [Link] 00000101
[Link] [Link] [Link] 00000111
What to do next
When you install an NSX Edge Appliance, NSX enables automatic VM startup/shutdown on the
host if vSphere HA is disabled on the cluster. If the appliance VMs are later migrated to other
hosts in the cluster, the new hosts might not have automatic VM startup/shutdown enabled. For
this reason, when you install NSX Edge Appliances on clusters that have vSphere HA disabled,
check all hosts in the cluster to make sure that automatic VM startup/shutdown
is enabled. See "Edit Virtual Machine Startup and Shutdown Settings" in vSphere Virtual Machine
Administration.
After the logical router is deployed, double-click the logical router ID to configure additional
settings, such as interfaces, routing, firewall, bridging, and DHCP relay.
Uplink interfaces of an ESG connect to uplink port groups that have access to a shared corporate
network or a service that provides access layer networking.
The following list describes feature support by interface type (internal and uplink) on an ESG.
n DHCP: Not supported on uplink interfaces. See the note after this bulleted list.
n HA: Not supported on uplink interfaces, requires at least one internal interface.
Note By design, DHCP service is supported on the internal interfaces of an NSX Edge. However,
in some situations, you may choose to configure DHCP on an uplink interface of the edge and
configure no internal interfaces. In this situation, the edge can listen to the DHCP client requests
on the uplink interface, and dynamically assign IP addresses to the DHCP clients. Later, if you
configure an internal interface on the same edge, DHCP service stops working because the edge
starts listening to the DHCP client requests on the internal interface.
The following figure shows a sample topology. The Edge Service Gateway uplink interface is
connected to the physical infrastructure through the vSphere distributed switch. The Edge Service
Gateway internal interface is connected to a logical router through a logical transit switch.
In the figure, the logical router's uplink interface connects to the transit logical switch and uses a
separate protocol address for dynamic routing, and its internal interfaces connect to the App and
Web logical switches, where the App VM and Web VM reside.
You can configure multiple external IP addresses for load balancing, site-to-site VPN, and NAT
services.
Important If you enable high availability on an NSX Edge in a cross-vCenter NSX environment,
both the active and standby NSX Edge Appliances must reside in the same vCenter Server. If you
migrate one of the appliances of an NSX Edge HA pair to a different vCenter Server, the two HA
appliances no longer operate as an HA pair, and you might experience traffic disruption.
Prerequisites
n Verify that the resource pool has enough capacity for the Edge Services Gateway (ESG)
virtual appliance to be deployed. See Chapter 1 System Requirements for NSX Data Center
for vSphere for the resources required for each size of appliance.
n Verify that the host clusters on which the NSX Edge Appliance will be installed are prepared
for NSX. See "Prepare Host Clusters for NSX" in the NSX Installation Guide.
n Determine if you want to enable DRS. If you create an Edge Services Gateway with HA,
and DRS is enabled, DRS anti-affinity rules are created to prevent the appliances from being
deployed on the same host. If DRS is not enabled at the time the appliances are created, the
rules are not created and the appliances might be deployed on or moved to the same host.
Procedure
1 Log in to the vSphere Web Client, and navigate to Home > Networking & Security > NSX
Edges.
Option Description
Name Enter a name for the ESG as you want it to appear in the vCenter inventory.
Make sure that this name is unique across all ESGs within a single tenant.
Host Name Optional. Enter a host name that you want to display for this ESG in the CLI.
If you do not enter a host name, the Edge ID that is created automatically is
displayed in the CLI.
Option Description
Deploy NSX Edge Optional. Select this option to create an NSX Edge Appliance virtual machine.
If you do not select this option, the ESG will not operate until a VM is
deployed.
High Availability Optional. Select this option to enable and configure high availability on the
ESG.
n If you need to run stateful services on an ESG, such as load balancer,
NAT, DHCP, and so on, you can enable HA on the edge. HA helps
in minimizing the failover time to a standby edge when an active edge
fails. Enabling HA deploys a standalone edge on a different host in a
cluster. So, you must ensure that you have enough resources in your
environment.
n If you are not running stateful services on the ESG, and your ESG is
used only for north-south routing, then enabling ECMP is recommended.
ECMP uses a dynamic routing protocol to learn the next-hop towards a
final destination and to converge during failures.
You can enable ECMP on the edge while doing the global routing
configuration, and not while deploying the edge in your network.
Option Description
User Name Enter a user name that you want to use for logging in to the Edge CLI.
Password Enter a password that is at least 12 characters long and satisfies these rules:
n Must not exceed 255 characters
n At least one uppercase letter and one lowercase letter
n At least one number
n At least one special character
n Must not contain the user name as a substring
n Must not consecutively repeat a character 3 or more times.
SSH access Optional. Enable SSH access to the Edge. By default, SSH access is disabled.
Usually, SSH access is recommended for troubleshooting purposes.
Option Description
Auto rule generation Optional. By default, this option is enabled. This option allows automatic
creation of firewall rules, NAT, and routing configuration, which control traffic
for certain NSX Edge services, including load balancing and VPN.
If you disable automatic rule generation, you must manually add these
rules and configurations. Auto rule generation does not create rules for data-
channel traffic.
Edge control level logging Optional. By default, the log level is info.
Large Provides more CPU, memory, and disk space than Compact, and supports
a larger number of concurrent SSL VPN-Plus users.
Quad Large Suitable when you need a high throughput and a high connection rate.
X-Large Suitable for environments that have a load balancer with millions of
concurrent sessions.
See Chapter 1 System Requirements for NSX Data Center for vSphere for the resources
required for each size of appliance.
b Add an NSX Edge Appliance, and specify the resource details for the VM deployment.
For example:
Option Value
Datastore ds-1
Host [Link]
See "Managing NSX Edge Appliance Resource Reservations" in the NSX Administration
Guide for more information on Resource Reservation.
If you enabled HA, you can add two appliances. If you add a single appliance, NSX Edge
replicates its configuration for the standby appliance. For HA to work correctly, you must
deploy both appliances on a shared datastore.
Option Description
Type Select either Internal or Uplink. For High Availability to work, an Edge
appliance must have at least one internal interface.
Connected To Select the port group or the logical switch to which you want to connect
this interface.
Option Description
Primary IP Address On an ESG, both IPv4 and IPv6 addresses are supported. An interface
can have one primary IP address, multiple secondary IP addresses, and
multiple non-overlapping subnets.
If you enter more than one IP address for the interface, you can select the
primary IP address.
Only one primary IP address is allowed per interface and the Edge uses
the primary IP address as the source address for locally generated traffic,
for example remote syslog and operator-initiated pings.
Secondary IP Addresses Enter the secondary IP address. To enter multiple IP addresses, use a
comma-separated list.
Option Description
MAC Addresses Optional. You can enter a MAC address for each interface.
If you change the MAC address using an API call later, you must redeploy
the Edge after changing the MAC address.
MTU The default value for uplink and internal interface is 1500. For trunk
interface, the default value is 1600. You can modify the default value, if
necessary. For sub-interfaces on the trunk, the default value is 1500. Make
sure that the MTU for the trunk interface is equal to or more than the MTU
of the sub interface.
Proxy ARP Select this option if you want the ESG to answer ARP requests intended
for other virtual machines.
This option is useful, for example, when you have the same subnet on
both sides of a WAN connection.
Send ICMP Redirect Select this option if you want the ESG to convey routing information to the
hosts.
Reverse Path Filter By default, this option is set to enabled. When enabled, it verifies the
reachability of the source address in packets being forwarded.
In enabled mode, the packet must be received on the interface that the
router would use to forward the return packet.
In loose mode, the source address must appear in the routing table.
Fence Parameters Configure fence parameters if you want to reuse IP and MAC addresses
across different fenced environments.
For example, in a cloud management platform (CMP), fencing allows you
to run several cloud instances simultaneously with the same IP and MAC
addresses isolated or "fenced".
The following table shows an example of two NSX Edge interfaces. The uplink interface
attaches the ESG to the outside world through an uplink port group on a vSphere
distributed switch. The internal interface attaches the ESG to a logical transit switch to
which a distributed logical router is also attached.
Important NSX 6.4.4 and earlier supports multicast on a single uplink interface of
the ESG. Starting with NSX 6.4.5, multicast is supported on a maximum of two uplink
interfaces of the ESG. In a multi-vCenter deployment scenario, if an NSX Edge is at version
6.4.4 or earlier, you can enable multicast only on a single uplink interface. To enable
multicast on two uplink interfaces, you must upgrade the Edge to 6.4.5 or later.
For example:
Option Value
vNIC Uplink
Gateway IP [Link]
MTU 1500
Note You can edit the MTU value, but it cannot be more than the configured MTU on the
interface.
Caution If you do not configure the firewall policy, the default policy is set to deny all traffic.
However, the firewall is enabled on the ESG during deployment, by default.
By default, logs are enabled on all new NSX Edge appliances. The default logging level is
Info. If logs are stored locally on the ESG, logging might generate too many logs and affect
the performance of your NSX Edge. For this reason, you should configure remote
syslog servers, and forward all logs to a centralized collector for analysis and monitoring.
Option Description
vNIC Select the internal interface for which you want to configure HA
parameters. By default, HA automatically selects an internal interface and
automatically assigns link-local IP addresses.
If you select ANY for interface but there are no internal interfaces
configured, the UI displays an error. Two Edge appliances are created but
since there is no internal interface configured, the new NSX Edge remains
in standby and HA is disabled. After an internal interface is configured, HA
is enabled on the NSX Edge appliance.
Declare Dead Time Enter the period in seconds within which, if the backup appliance does
not receive a heartbeat signal from the primary appliance, the primary
appliance is considered inactive and the backup appliance takes over.
The default interval is 15 seconds.
Management IPs Optional: You can enter two management IP addresses in CIDR format to
override the local link IP addresses assigned to the HA virtual machines.
Ensure that the management IP addresses do not overlap with the IP
addresses used for any other interface and do not interfere with traffic
routing. Do not use an IP address that exists somewhere else on your
network, even if that network is not directly attached to the appliance.
The management IP addresses must be in the same L2/subnet and must
be able to communicate with each other.
Results
After the ESG is deployed, go to the Hosts and Clusters view and open the console of the NSX
Edge virtual appliance. From the console, make sure that you can ping the connected interfaces.
What to do next
When you install an NSX Edge Appliance, NSX enables automatic VM startup/shutdown on the
host if vSphere HA is disabled on the cluster. If the appliance VMs are later migrated to other
hosts in the cluster, the new hosts might not have automatic VM startup/shutdown enabled. For
this reason, when you install NSX Edge Appliances on clusters that have vSphere HA disabled,
check all hosts in the cluster to make sure that automatic VM startup/shutdown
is enabled. See "Edit Virtual Machine Startup and Shutdown Settings" in vSphere Virtual Machine
Administration.
Now you can configure routing to allow connectivity from external devices to your VMs.
You must have a working NSX Edge instance before you can configure routing on it. For
information on setting up NSX Edge, see NSX Edge Configuration.
Procedure
ECMP is a routing strategy that allows next-hop packet forwarding to a single destination over
multiple best paths. These best paths can be added as static routes or as a result of metric
calculations by dynamic routing protocols like OSPF or BGP. Multiple paths for static routes
can be added by providing multiple next hops separated by commas in the Static Routes
dialog box. For more information, see Add a Static Route.
The Edge Services Gateway uses the Linux network stack implementation, a round-robin
algorithm with a randomness component. After a next hop is selected for a particular source
and destination IP address pair, the route cache stores the selected next hop. All packets for
that flow go to the selected next hop. The default IPv4 route cache timeout is 300 seconds
(gc_timeout). If an entry is inactive for this time, it is eligible to be removed from the route
cache. The actual removal happens when the garbage collection timer activates (gc_interval = 60
seconds).
The Distributed Logical Router uses an XOR algorithm to determine the next hop from a list
of possible ECMP next hops. This algorithm uses the source and destination IP address on the
outgoing packet as sources of entropy.
Stateful services such as Load Balancing, VPN, NAT, and ESG firewall do not work with ECMP.
However, from NSX 6.1.3 onwards, ECMP and Distributed Firewall can work together.
6 (Only for UDLR): To change the Locale ID on a universal distributed logical router, next to
Routing Configuration, click Edit. Enter a locale ID, and then click Save or OK.
By default, the locale ID is set to the NSX Manager UUID. However, you can override the
locale ID by enabling local egress at the time of creating the universal distributed logical
router. Locale ID is used to selectively configure routes in a cross-vCenter NSX or multi-site
environment. See Cross-vCenter NSX Topologies for more information.
a Select an interface from which the next hop towards the destination network can be
reached.
c (Optional) Type the locale ID. Locale ID is available only on universal logical routers.
(Optional) Type the admin distance, a value between 1 and 255. The admin distance is used to
choose which route to use when there are multiple routes for a given network. The lower the
admin distance, the higher the preference for the route.
The default admin distances are:
Connected 0
Static 1
External BGP 20
OSPF Intra-Area 30
g Click Save.
a Router ID displays the first uplink IP address of the NSX Edge that pushes routes to the
kernel for dynamic routing.
c Select Enable Logging to save logging information and select the log level.
Note If you have IPSec VPN configured in your environment, you should not use dynamic
routing.
What to do next
To delete routing configuration, click Reset. This deletes all routing configurations (default, static,
OSPF, and BGP configurations, as well as route redistribution).
Procedure
7 Click OK.
n Import certificate at the global level in the NSX Manager virtual appliance.
1 Click the Manage Appliance Settings, and then click SSL Certificates.
2 Click Import.
3 In the Import SSL Certificate dialog box, click Choose File, and browse to the signed
certificate file.
4 Click Import.
1 Copy the contents of the signed certificate that you received from the certification
authority.
4 In the Import Certificate dialog box, paste the contents of the signed certificate.
5 Click OK.
Add a CA Certificate
By adding a CA certificate, you can become an interim CA for your company. You then have the
authority to sign your own certificates.
Procedure
6 Copy and paste the certificate contents in the Certificate Contents text box.
Procedure
6 In the Certificates Contents text box, paste the contents of the PEM certificate file.
Text must include "-----BEGIN xxx-----" and "-----END xxx-----". For example:
-----BEGIN CERTIFICATE-----
Server cert
-----END CERTIFICATE-----
7 In the Private Key text box, paste the private key contents of the server.
8 Enter the password of the private key file, and re-enter the password to confirm.
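If your certificate and private key are not already in PEM format, you can usually convert them with standard OpenSSL tooling before pasting the contents. This is a generic sketch, not an NSX-specific requirement; the file names are placeholders.
# Extract the server certificate in PEM format from a PKCS#12 bundle (server.p12 is a placeholder file name)
openssl pkcs12 -in server.p12 -clcerts -nokeys -out server-cert.pem
# Extract the private key in PEM format (you are prompted for the bundle and key passphrases)
openssl pkcs12 -in server.p12 -nocerts -out server-key.pem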
Procedure
6 In the Certificates Contents text box, paste the contents of the server [Link] file, and then
append the content of the intermediary certificates and the root certificate.
n Server certificate
n Root CA certificate
-----BEGIN CERTIFICATE-----
Server cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Intermediate cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Root cert
-----END CERTIFICATE-----
7 In the Private Key text box, paste the private key contents of the server.
8 Enter the password for the private key of the server, and re-enter the password to confirm.
Results
After the certificate is added, the server certificate that is chained with its intermediary certificates
is displayed in the certificate details.
n In NSX 6.4.4 and later, in the Certificates table, click the text in the Issued To column.
Certificate details are displayed in a pop-up window.
n In NSX 6.4.3 and earlier, select a certificate from the grid. The Certificate Details pane below
the grid displays the details of the certificate.
Prerequisites
You must have a CA that can sign your certificate signing request (CSR).
Procedure
5 Generate a certificate signing request (CSR) for an NSX Edge. For detailed information, see
steps 1–7 in Configure a CA Signed Certificate.
7 Click CSR Actions or Actions, and then click Self Sign Certificate.
8 Type the number of days for which you want this self-signed certificate to be valid.
9 Click OK.
The main benefit of implementing client certificates is that the NSX Edge Load Balancer can ask
the client for its client certificate, and validate it before forwarding its web requests to the backend
servers. If a client certificate is revoked, for example because it has been lost or the client no
longer works for the company, NSX Edge validates that the client certificate is not on the
Certificate Revocation List (CRL).
For more information on generating client certificates, refer to Scenario: SSL Client and Server
Authentication.
When a potential user attempts to access a server, the server allows or denies access based on the
CRL entry for that particular user.
Procedure
FIPS Mode
When you enable the FIPS mode, any secure communication to or from the NSX Edge uses
cryptographic algorithms or protocols that are allowed by United States Federal Information
Processing Standards (FIPS). FIPS mode turns on the cipher suites that comply with FIPS.
If you configure components that are not FIPS compliant on a FIPS-enabled edge, or if you
enable FIPS on an edge that has ciphers or an authentication mechanism that is not FIPS compliant,
NSX Manager fails the operation and provides an error message.
Caution Changing FIPS mode reboots the NSX Edge appliance causing temporary traffic
disruption. This applies whether or not high availability is enabled.
Depending on your requirements, you can enable FIPS on some or all of your NSX Edge
appliances. FIPS-enabled NSX Edge appliances can communicate with NSX Edge appliances that
do not have FIPS enabled.
If a logical (distributed) router is deployed without an NSX Edge appliance, you cannot modify
the FIPS mode. The logical router automatically gets the same FIPS mode as the NSX Controller
cluster. If the NSX Controller cluster is NSX 6.3.0 or later, FIPS is enabled.
If you change FIPS mode on an NSX Edge appliance with high availability enabled, FIPS will be
enabled on both appliances, and the appliances will be rebooted one after the other.
If you want to change FIPS mode for a standalone edge, use the fips enable or fips disable
command. For more information, refer to NSX Command Line Interface Reference.
Prerequisites
n Verify that any partner solutions are FIPS mode certified. See the VMware Compatibility Guide
at [Link]
n If you have upgraded from an earlier version of NSX, do not enable FIPS mode until the
upgrade to NSX 6.3.0 is complete. See Understand FIPS Mode and NSX Upgrade in the NSX
Upgrade Guide.
n Verify that the NSX Manager is NSX 6.3.0 or later.
n Verify that all host clusters running NSX workloads are prepared with NSX 6.3.0 or later.
n Verify that all NSX Edge appliances on which you want to enable FIPS are version 6.3.0 or
later.
n Verify that the messaging infrastructure has status GREEN. Use the API method
GET /api/2.0/nwfabric/status?resource={resourceId}, where resourceId is the
vCenter Managed Object ID of a host or cluster. Look for the status corresponding to the
featureId of [Link] in the response body:
<nwFabricFeatureStatus>
<featureId>[Link]</featureId>
<updateAvailable>false</updateAvailable>
<status>GREEN</status>
<installed>true</installed>
<enabled>true</enabled>
<allowConfiguration>false</allowConfiguration>
</nwFabricFeatureStatus>
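For example, assuming admin credentials and a cluster with vCenter MoId domain-c7 (both placeholders), the request is:
curl -k -u admin:password "https://nsx-manager/api/2.0/nwfabric/status?resource=domain-c7"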
Procedure
3 Select the required edge or router, click Actions, and then select Change FIPS mode.
What to do next
Procedure
Version Procedure
NSX 6.4.4 and later a Manage > Settings > Appliance Settings.
b Go to the Edge Appliance VMs section.
NSX 6.4.3 and earlier a Click Manage > Settings > Configuration.
b Go to the NSX Edge Appliances pane.
6 Select the cluster or resource pool and datastore for the appliance.
7 (Optional) Select the host on which you want to add the appliance.
8 (Optional) Select the vCenter folder within which the appliance is to be added.
9 Click Add.
Results
n In NSX 6.4.4 or later, the NSX Edge Appliance details are displayed in a card view in the Edge
Appliance VMs section. One card shows settings of one Edge Appliance VM.
n In NSX 6.4.3 or earlier, the NSX Edge Appliance details are displayed in a grid format in the
NSX Edge Appliances pane.
Procedure
Version Procedure
NSX 6.4.4 and later a Manage > Settings > Appliance Settings.
b Go to the Edge Appliance VMs section.
NSX 6.4.3 and earlier a Click Manage > Settings > Configuration.
b Go to the NSX Edge Appliances pane.
Version Procedure
NSX 6.4.4 and later a In the Edge Appliance VMs section, go to the Edge Appliance VM that
you want to edit.
Procedure
Version Procedure
NSX 6.4.4 and later a Manage > Settings > Appliance Settings.
b Go to the Edge Appliance VMs section.
NSX 6.4.3 and earlier a Click Manage > Settings > Configuration.
b Go to the NSX Edge Appliances pane.
u In NSX 6.4.4 and later, go to the NSX Edge Appliance, click the menu icon, and then click Delete.
u In NSX 6.4.3 and earlier, select an NSX Edge Appliance from the grid, and then click the
Delete icon.
There are three methods of resource reservation: System Managed, Custom, and No Reservation.
Important If you are using NSX 6.4.3 or earlier, and you select Custom or No Reservation
reservations for an NSX Edge appliance, you cannot switch back to System Managed.
When you install, upgrade, or redeploy an NSX Edge instance, the associated NSX Edge
appliances are deployed. If an appliance has System Managed resource reservation, the
reservation is applied on the resource pool after the appliance is powered on. If there are
insufficient resources, the reservation fails and generates a system event, but the appliance
deployment succeeds. The reservation is attempted the next time the appliance is deployed
(during upgrade or redeploy).
With System Managed resource reservations, if you change the appliance size, the system
updates the resource reservation to match the system requirements of the new appliance size.
When you install, upgrade, or redeploy an NSX Edge, the associated NSX Edge appliances are
deployed. If an appliance has Custom resource reservation, the reservation is applied on the
resource pool before the appliance is powered on. If there are insufficient resources, the appliance
fails to power on and the appliance deployment fails.
You can apply Custom reservations to an existing NSX Edge appliance. If the resource pool does
not have sufficient resources, the configuration change fails.
With Custom resource reservations, the system does not manage resource reservations for the
appliance. If you change the appliance size, the appliance system requirements change, but the
system does not update the resource reservation. You should change the resource reservation to
reflect the system requirements of the new appliance size.
No Resource Reservation
If you select No reservation, no resources are reserved for the NSX Edge appliance. You can
deploy NSX Edge appliances on hosts that do not have sufficient resources, but if there is a
resource contention the appliances might not operate correctly.
You can select the resource reservation method in the UI or configure it by using the API:
n Create a new NSX Edge: In the UI, navigate to Networking & Security > NSX Edges and click
Add. The wizard guides you through the steps of creating an NSX Edge. You can add an NSX
Edge appliance in the Configure Deployment step, and select the reservation method from the
Resource Reservation drop-down menu. With the API, use POST /api/4.0/edges.
n Update an existing NSX Edge: In the UI, navigate to Networking & Security > NSX Edges >
NSX Edge Instance > Manage > Settings and edit the appliance VM to select a different value
for Resource Reservation. With the API, use PUT /api/4.0/edges/{edgeId}/appliances.
Use the cpuReservation > reservation and memoryReservation > reservation parameters to
configure the NSX Edge Appliance Resource Reservation using the API.
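For example, assuming admin credentials, an NSX Manager reachable at nsx-manager, and an edge with ID edge-1 (all placeholders), you can retrieve the appliance configuration, edit the reservation values, and apply the change:
curl -k -u admin:password "https://nsx-manager/api/4.0/edges/edge-1/appliances"
Edit the cpuReservation > reservation and memoryReservation > reservation values in the returned XML, save the result as appliances.xml, and then apply it:
curl -k -u admin:password -X PUT -H "Content-Type: application/xml" -d @appliances.xml "https://nsx-manager/api/4.0/edges/edge-1/appliances"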
The system requirements for NSX Edge appliances depend on the appliance size: Compact,
Large, Quad Large, or X-Large. These values are used for the default System Managed resource
reservation.
You can tune these default reservation percentages by using the tuning configuration API
(edgePublish/tuningConfiguration). The default value for both percentage parameters is 100. This change
affects new NSX Edge appliance deployments, but not existing appliances. The percentages
modify the default CPU and memory reserved for the relevant NSX Edge appliance size. To
disable the resource reservation, set the values to 0. See the NSX API Guide for details.
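As a minimal sketch, assuming the tuning configuration is exposed under /api/4.0/edgePublish/tuningConfiguration and that admin credentials are available (both placeholders), you can retrieve the current values, edit the percentage parameters in the returned XML, and PUT the document back to the same URI:
curl -k -u admin:password "https://nsx-manager/api/4.0/edgePublish/tuningConfiguration"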
Starting in NSX 6.4.4 you can use the API to switch back to System Managed reservations
using POST /api/4.0/edges/{edgeId}/appliances?action=applySystemResourceReservation. See
the NSX API Guide for details.
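For example, assuming the same placeholder NSX Manager address, credentials, and edge ID as above, the request is:
curl -k -u admin:password -X POST "https://nsx-manager/api/4.0/edges/edge-1/appliances?action=applySystemResourceReservation"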
Starting in NSX 6.4.6, you can use the vSphere Web Client to edit the NSX Edge appliance VM
and switch back to System Managed reservations.
An NSX Edge must have at least one internal interface before it can be deployed.
Configure an Interface
Internal interfaces are generally for East-West traffic, while uplink interfaces are for North-South
traffic.
An NSX Edge Services Gateway (ESG) can have up to 10 internal, uplink, or trunk interfaces. These
limits are enforced by the NSX Manager. When a logical router (DLR) is connected to an edge
services gateway (ESG), the interface on the router is an uplink interface, while the interface on the
ESG is an internal interface. An NSX trunk interface is for internal networks, not external networks.
The trunk interface allows multiple internal networks (either VLAN or VXLAN) to be trunked.
An NSX Data Center deployment can have up to 1,000 distributed logical router (DLR) instances
on a single ESXi host. On a single logical router, you can configure up to eight uplink interfaces,
and up to 991 internal interfaces. These limits are enforced by the NSX Manager. For more
information about interface scaling in an NSX Data Center deployment, see the NSX Network
Virtualization Design Guide at [Link]
Note IPv6 multicast addresses are not supported on NSX ESG interfaces in NSX Data Center for
vSphere 6.2.x, 6.3.x, and 6.4.x.
Procedure
4 Navigate to NSX Edge interface settings by clicking Manage > Settings > Interfaces.
6 In the Edit Edge Interface dialog box, enter a name for the interface.
7 To indicate whether this interface is an internal or an external (uplink) interface, click Internal
or Uplink.
Select Trunk when creating a sub interface. For more information, see Add a Sub Interface.
8 Select the port group or logical switch to which you want to connect this interface.
b Depending on what you want to connect to the interface, click the Logical Switch,
Standard Port Group, or Distributed Virtual Port Group tab.
c Select the appropriate logical switch or port group, and click OK.
An interface can have multiple non-overlapping subnets. Enter one primary IP address and a
comma-separated list of multiple secondary IP addresses. NSX Edge considers the primary IP
address as the source address for locally generated traffic. You must add an IP address to an
interface before using it on any feature configuration.
11 Enter the subnet prefix length or subnet mask for the interface.
12 If you are using NSX 6.4.4 or later, click the Advanced tab, and then continue with the
remaining steps in this procedure. If you are using NSX 6.4.3 or earlier, go to the next step.
Reverse Path Filter: Verifies the reachability of the source address in packets being forwarded. In
enabled mode, the packet must be received on the interface that the router might use to forward
the return packet. In loose mode, the source address must appear in the routing table.
Configure fence parameters if you want to reuse IP and MAC addresses across different
fenced environments. For example, in a cloud management platform (CMP), fencing allows
you to run several cloud instances simultaneously with the same IP and MAC addresses
isolated or "fenced".
Delete an Interface
You can delete an NSX Edge interface.
Procedure
4 Navigate to NSX Edge interface settings by clicking Manage > Settings > Interfaces.
Enable an Interface
An interface must be enabled or its status must be connected for an NSX Edge to isolate the
virtual machines within that interface (port group or logical switch).
Procedure
4 Navigate to NSX Edge interface settings by clicking Manage > Settings > Interfaces.
Disable an Interface
You can disable or disconnect an interface on an NSX Edge.
Procedure
4 Navigate to NSX Edge interface settings by clicking Manage > Settings > Interfaces.
Note Starting with NSX Data Center 6.4.4, the terminology for some features in the UI has
changed.
Procedure
4 Navigate to NSX Edge interface settings by clicking Manage > Settings > Interfaces.
5 Select an interface for which you want to configure the quality of service.
n In NSX 6.4.3 and earlier, click Actions > Configure Traffic Shaping Policy.
For more information about the traffic policy shaping options, see Traffic Shaping Policy.
n VLAN trunk is standard and works with any version of ESXi. This type of interface is used to
bring tagged VLAN traffic into the Edge.
n VXLAN trunk works with NSX 6.1 and later. This type of interface is used to bring
VXLAN traffic into the Edge.
n DHCP
n Load Balancer
n IPSec VPN: You can configure IPSec VPN only as an uplink interface. Use sub interfaces when
you want private traffic to traverse through the IPSec tunnel. If an IPSec policy is configured
for private traffic, sub interface acts as a gateway for the private local subnet.
n L2 VPN
n NAT
A sub interface cannot be used for HA or Logical Firewall. However, you can use the IP address
of the sub interface in an edge firewall rule.
Procedure
4 Navigate to NSX Edge interface settings by clicking Manage > Settings > Interfaces.
6 In the Edit Edge Interface dialog box, enter a name for the interface.
8 Select the standard port group or distributed port group to which this interface must be
connected.
b Depending on what you want to connect to the interface, click the Standard Port Group or
Distributed Port Group tab.
11 Make sure that the sub interface is enabled, and enter a name for the sub interface.
The tunnel ID is used to connect the networks that are being stretched. This value must be
identical on both the client and server sites.
13 In Backing Type, select one of the following options to indicate the network backing for the
sub interface.
n VLAN: Enter the VLAN ID of the virtual LAN that your sub interface should use.
VLAN IDs can range from 0 to 4094.
n Network: Select the distributed port group or logical switch. NSX Manager extracts the
VLAN ID and uses it for configuring the trunk.
n None: Use this option to create a sub interface without specifying a network or
VLAN ID. This sub interface is internal to an NSX Edge, and is used to
route packets between a stretched network and an unstretched (untagged)
network.
An interface can have multiple non-overlapping subnets. Enter one primary IP address and a
comma-separated list of multiple secondary IP addresses. NSX Edge considers the primary IP
address as the source address for locally generated traffic. You must add an IP address to an
interface before using it on any feature configuration.
17 Edit the default MTU value for the sub interface, if necessary.
The default MTU for a sub interface is 1500. The MTU for the sub interface should be equal to
or less than the lowest MTU among all the trunk interfaces for the NSX Edge.
Reverse Path Filter verifies the reachability of the source address in packets being forwarded.
In enabled mode, the packet must be received on the interface that the router can use to
forward the return packet. In loose mode, the source address must appear in the routing table.
21 If you are using NSX Data Center 6.4.4 or later, click the Advanced tab to continue with the
remaining steps in this procedure.
22 Enter the MAC address for the interface, if needed. Enter two MAC addresses, if HA is enabled
for the ESG.
The default MTU for a trunk interface is 1600, and the default MTU for a sub interface is 1500.
The MTU for the trunk interface must be equal to or more than the MTU of the sub interface.
Results
You can now use the sub interface for the Edge services.
What to do next
Configure a VLAN trunk if the sub interface added to a trunk vNic is backed by a standard port
group. See Configure VLAN Trunk .
Prerequisites
Verify that a sub interface with a trunk vNic backed by a standard port group is available. See Add a
Sub Interface.
Procedure
2 Click Networking.
5 In VLAN Type, select VLAN Trunking and type the VLAN IDs to be trunked.
6 Click OK.
Procedure
For example, NSX Edge HA synchronizes the connection tracker of the stateful firewall, or the
stateful information held by the load balancer. The time required to bring all services back up is
not null. Examples of known service restart impacts include a non-zero downtime with dynamic
routing when an NSX Edge is operating as a router.
Sometimes, the two NSX Edge HA appliances are unable to communicate and unilaterally decide
to become active. This behavior is expected to maintain availability of the active NSX Edge
services if the standby NSX Edge is unavailable. If the other appliance still exists, when the
communication is re-established, the two NSX Edge HA appliances renegotiate active and standby
status. If this negotiation does not finish and if both appliances declare they are active when the
connectivity is re-established, an unexpected behavior is observed. This condition, known as split
brain, is observed due to the following environmental conditions:
n Transient storage problems that might cause at least one NSX Edge HA VM to become
unavailable.
For example, an improvement in NSX Edge HA stability and performance is observed when
the VMs are moved off overprovisioned storage. In particular, during large overnight backups,
large spikes in storage latency can impact NSX Edge HA stability.
n Congestion on the physical or virtual network adapter involved with the exchange of packets.
All NSX Edge services run on the active appliance. The primary appliance maintains a heartbeat
with the standby appliance and sends service updates through an internal interface.
If a heartbeat is not received from the primary appliance within the specified time (default value
is 15 seconds), the primary appliance is declared dead. The standby appliance moves to the
active state, takes over the interface configuration of the primary appliance, and starts the NSX
Edge services that were running on the primary appliance. When the switch over takes place, a
system event is displayed in the System Events tab of Settings & Reports. Load Balancer and
VPN services need to re-establish TCP connection with NSX Edge, so service is disrupted for a
short while. Logical switch connections and firewall sessions are synched between the primary
and standby appliances; however, service is disrupted during the switchover while waiting for the
standby appliance to become active and take over.
If the NSX Edge appliance fails and a bad state is reported, HA force syncs the failed appliance
to revive it. When revived, it takes on the configuration of the now-active appliance and stays in a
standby state. If the NSX Edge appliance is dead, you must delete the appliance and add a new
one.
NSX Edge ensures that the two HA NSX Edge virtual machines are not on the same ESXi host
even after you use DRS and vMotion (unless you manually vMotion them to the same host).
Two virtual machines are deployed on vCenter in the same resource pool and datastore as the
appliance you configured. Local link IPs are assigned to HA virtual machines in the NSX Edge HA
so that they can communicate. You can specify management IP addresses to override the local
links.
If syslog servers are configured, logs in the active appliance are sent to the syslog servers.
If vSphere HA is not enabled, the active-standby NSX Edge HA pair will survive one failover.
However, if another failover happens before the HA pair is restored, NSX Edge
availability can be compromised.
Note In NSX 6.2.3 and later, enabling high availability (HA) on an existing Edge will fail when
sufficient resources cannot be reserved for the second Edge Appliance VM. The configuration will
roll back to the last known good configuration.
Procedure
Version Procedure
NSX 6.4.4 and later a Click Manage > Settings > High Availability.
b Do these steps.
n To edit HA configuration settings, next to High Availability
Configuration, click Edit.
n To edit Management HA interface settings for a DLR appliance, next
to Management/HA Interface, click Edit.
NSX 6.4.3 and earlier a Click Manage > Settings > Configuration.
b Do these steps:
n To edit HA configuration settings, go to the HA Configuration pane,
and click Change.
n To edit Management HA interface settings for a DLR appliance, go to
the HA Interface Configuration pane, and click Change.
5 Change the HA configuration settings. See the following tables for a description of all HA
configuration options.
Declare dead time: Enter the period in seconds within which, if the backup
appliance does not receive a heartbeat signal from the primary appliance, the
primary appliance is considered inactive and the backup appliance takes over. The
default interval is 15 seconds.
Table 9-6. HA Configuration Options For NSX Edge Services Gateway Appliance
IP Address (available in NSX Data Center for vSphere 6.4.3 or earlier): Optional. To use the HA
interface to connect to the NSX Edge, you can specify an additional IP address and
prefix for the HA interface.
Note If you configure L2 VPN on this Edge appliance before HA is enabled, you must have at
least two internal interfaces set up. If there is a single interface configured on this Edge which
is already being used by L2 VPN, HA is disabled on the Edge appliance.
Run force sync when you want to synchronize the edge configuration as known to the NSX
Manager to all the components.
Note For NSX Data Center 6.2 or later, force sync avoids data loss for east-west routing traffic;
however, north-south routing and bridging might experience an interruption.
In NSX 6.4.3 or earlier, NSX Manager takes the following actions during the force sync operation:
n Deletes the configuration of the Edge appliances, starting with Index 0, and then Index 1.
n Reboots the Edge appliances. Both Index 0 and Index 1 are rebooted simultaneously. This
action results in high downtime.
n If the NSX Manager is primary or stand-alone, and the edge is a logical distributed router, the
controller cluster is synced.
n Sends a message to all relevant hosts to sync the distributed router instance.
Starting with NSX 6.4.4, NSX Manager takes the following actions during the force sync operation:
n If the Edge appliances are in a Bad state, then NSX Manager deletes the Edge configuration,
reboots the bad Edge appliances, and publishes the latest configuration to the Edge
appliances.
n If the Edge appliances are not in a Bad state, then NSX Manager does not reboot the
Edge appliances, and directly publishes the latest configuration to the Edge appliances. By
eliminating unnecessary reboot of the Edge appliances, downtime is reduced.
n If the NSX Manager is primary or stand-alone, and the edge is a logical distributed router, the
controller cluster is synced.
n Sends a message to all relevant hosts to sync the distributed router instance.
Important In a cross-vCenter NSX environment, you must first run force sync on an NSX Edge
instance on the primary NSX Manager. When that is complete, force sync the NSX Edge instance
on the secondary NSX Managers.
Procedure
Version Procedure
NSX 6.4.4 and later a Click Manage > Settings > Appliance Settings.
NSX 6.4.3 and earlier a Click Manage > Settings > Configuration.
b In the Details pane, next to Syslog servers, click Change.
Procedure
Version Procedure
NSX 6.4.4 and later Click Manage > Settings > Services.
NSX 6.4.3 and earlier a Click Manage > Settings > Configuration.
b View the Services pane for the status of all Edge services.
Note Redeploying is a disruptive action. First apply a force sync and check whether the problem
is fixed. It is a good practice to download the tech support bundle for the Edge and troubleshoot
the issue. If the problem is still not fixed, then redeploy.
n Edge appliances are deleted and freshly deployed with the latest configuration applied.
n Logical routers are deleted from the controller and then recreated with the latest configuration
applied.
n Distributed logical router instances on hosts are deleted and then recreated with the latest
configuration applied.
OSPF adjacencies are withdrawn during redeploy if graceful restart is not enabled.
The following good practices can help in preventing traffic loss when redeploying edges:
n Enable graceful restart when OSPF or BGP timers are large and high availability (HA) is
enabled on both distributed logical routers (DLR) and edge services gateways (ESG).
n Use aggressive OSPF or BGP timer values and floating static routes when a DLR in HA is
peered with multiple ESGs (ECMP).
Important In a cross-vCenter NSX environment, you must first redeploy the NSX Edge instance
on the primary NSX Manager. After that is complete, redeploy the NSX Edge instance on the
secondary NSX Managers. It is required that the NSX Edge instances on both the primary and the
secondary NSX Managers are redeployed.
Prerequisites
n Verify that the hosts have enough resources to deploy additional NSX Edge Services Gateway
appliances during the redeploy operation. See the Chapter 1 System Requirements for NSX
Data Center for vSphere for the resources required for each NSX Edge size.
n For a single NSX Edge instance, there are two NSX Edge appliances of the appropriate
size in the poweredOn state during redeploy.
n For an NSX Edge instance with high availability enabled, both replacement appliances are
deployed before replacing the old appliances. This means that there are four NSX Edge
appliances of the appropriate size in the poweredOn state during the upgrade of a given
NSX Edge. After the NSX Edge instance is redeployed, either of the HA appliances can
become active.
n Verify that the host clusters listed in the configured location and live location for the NSX Edge
appliances you redeploy are prepared for NSX, and that their messaging infrastructure status
is GREEN.
Verify that the host clusters listed in the configured location and live location for all NSX Edge
appliances are prepared for NSX and that their messaging infrastructure status is GREEN. If the
status is green, the hosts are using the messaging infrastructure to communicate with NSX
Manager instead of VIX.
If the configured location is not available, for example, because the cluster has been removed
since the NSX Edge appliance was created, then verify the live location only.
n Find the ID of the original configured location (configuredResourcePool > id) and
the current live location (resourcePoolId) with the GET [Link]
Address/api/4.0/edges/{edgeId}/appliances API request.
n Find the host preparation status and the messaging infrastructure status for
those clusters with the GET [Link]
resource={resourceId} API request, where resourceId is the ID of the configured and live
location of the NSX Edge appliances found previously.
<nwFabricFeatureStatus>
<featureId>[Link]</featureId>
<featureVersion>6.3.1.5124716</featureVersion>
<updateAvailable>false</updateAvailable>
<status>GREEN</status>
<installed>true</installed>
<enabled>true</enabled>
<allowConfiguration>false</allowConfiguration>
</nwFabricFeatureStatus>
<nwFabricFeatureStatus>
<featureId>[Link]</featureId>
<updateAvailable>false</updateAvailable>
<status>GREEN</status>
<installed>true</installed>
<enabled>true</enabled>
<allowConfiguration>false</allowConfiguration>
</nwFabricFeatureStatus>
n Navigate to Installation and Upgrade > Host Preparation and prepare the hosts for NSX.
Procedure
It is a good practice to download the tech support bundle for the Edge and troubleshoot the
problem. If the problem persists, redeploy the Edge.
Results
The NSX Edge virtual machine is replaced with a new virtual machine and all services are restored.
If redeploy does not work, power off the NSX Edge virtual machine and redeploy NSX Edge again.
n The resource pool on which the NSX Edge was installed is no longer in the vCenter inventory
or its Managed Object ID (MoId) has changed.
n The datastore on which the NSX Edge was installed is corrupted, unmounted, or inaccessible.
n The dvportGroups on which the NSX Edge interfaces were connected are no longer in the
vCenter inventory or their MoId (identifier in vCenter Server) has changed.
If any of these cases is true, you must update the MoId of the resource pool, datastore, or
dvPortGroup using a REST API call. See NSX API Guide.
If FIPS mode is enabled on NSX Edge and something goes wrong, NSX Manager does not allow
you to redeploy the NSX Edge. You must resolve infrastructure issues for communication failures
instead of redeploying the edge.
Procedure
1 In the vSphere Client, navigate to Networking & Security > NSX Edges.
4 Click Add.
The router must be able to reach the next hop directly. If ECMP is enabled, you can enter
multiple next hops as a comma-separated list of IP addresses.
n In NSX 6.4.4 or earlier, next hop is mandatory. Starting in NSX 6.4.5, next hop is optional
for ESG. You can specify either the next hop or the interface. When you specify the next
hop, the interface is unavailable for selection, and vice versa.
n When multicast traffic is sent through a GRE tunnel interface on the ESG, you must specify
the IP address of the remote GRE tunnel endpoint in the next hop when configuring the
static routes.
The Interface drop-down menu does not display the GRE tunnel interfaces.
8 For MTU, edit the maximum transmission value for the data packets if necessary.
The MTU cannot be higher than the MTU set on the NSX Edge interface.
Choose a value between 1 and 255. The admin distance is used to choose which route to use
when there are multiple routes for a given network. The lower the admin distance, the higher
the preference for the route.
Default administrative distances:
n Connected: 0
n Static: 1
n External BGP: 20
n OSPF Intra-Area: 30
An administrative distance of 255 causes the static route to be excluded from the routing table
(RIB) and the data plane, so the route is not used.
By default, routes have the same locale ID as the NSX Manager. The locale ID specified here
associates the route with this locale ID. These routes are sent only to those hosts that have a
matching locale ID. See Cross-vCenter NSX Topologies for more information.
12 Click OK.
OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal
cost.
An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing
tables. An area is a logical collection of OSPF networks, routers, and links that have the same area
identification.
Prerequisites
When you enable a router ID, the text box is populated by default with the IP address of the
logical router's uplink interface.
Procedure
5 Enable OSPF.
a Next to OSPF Configuration, click Edit, and then click Enable OSPF
c In Protocol Address, type a unique IP address within the same subnet as the Forwarding
Address. The protocol address is used by the protocol to form adjacencies with the peers.
c Type an Area ID. NSX Edge supports an area ID in the form of a decimal number. Valid
values are 0–4294967295.
NSSAs prevent the flooding of AS-external link-state advertisements (LSAs) into NSSAs.
They rely on default routing to external destinations. Hence, NSSAs must be placed at the
edge of an OSPF routing domain. NSSA can import external routes into the OSPF routing
domain, thereby providing transit service to small routing domains that are not part of the
OSPF routing domain.
7 (Optional) Select the type of Authentication. OSPF performs authentication at the area level.
All routers within the area must have the same authentication and corresponding password
configured. For MD5 authentication to work, both the receiving and transmitting routers must
have the same MD5 key.
a None: No authentication is required, which is the default value.
c MD5: This authentication method uses MD5 (Message Digest type 5 ) encryption. An MD5
checksum is included in the transmitted packet.
d For Password or MD5 type authentication, type the password or MD5 key.
Important
n If NSX Edge is configured for HA with OSPF graceful restart enabled and MD5 is used
for authentication, OSPF fails to restart gracefully. Adjacencies are formed only after
the grace period expires on the OSPF helper nodes.
n NSX Data Center for vSphere always uses a key ID value of 1. Any device not managed
by NSX Data Center for vSphere that peers with an Edge Services Gateway or
Logical Distributed Router must be configured to use a key ID of value 1 when MD5
authentication is used. Otherwise an OSPF session cannot be established.
a In Area to Interface Mapping, click Add to map the interface that belongs to the OSPF
area.
b Select the interface that you want to map and the OSPF area that you want to map it to.
In most cases, it is recommended to retain the default OSPF settings. If you do change the
settings, make sure that the OSPF peers use the same settings.
a Hello Interval displays the default interval between hello packets that are sent on the
interface.
b Dead Interval displays the default interval during which at least one hello packet must be
received from a neighbor before the router declares that neighbor down.
c Priority displays the default priority of the interface. The interface with the highest priority
is the designated router.
d Cost of an interface displays the default overhead required to send packets across that
interface. The cost of an interface is inversely proportional to the bandwidth of that
interface. The larger the bandwidth, the smaller the cost.
In this example, an Edge Services Gateway is connected through a transit logical switch to a
logical router, which in turn connects the App and Web logical switches and their VMs. The
logical router is configured as follows:
n Gateway IP: [Link]. The logical router's default gateway is the ESG's internal interface IP
address ([Link]).
n Router ID: [Link]. The router ID is the uplink interface of the logical router. In other
words, the IP address that faces the ESG.
n Protocol Address: [Link]. The protocol address can be any IP address that is in the
same subnet and is not used anywhere else. In this case, [Link] is configured.
n Area Definition:
n Area ID: 0
n Type: Normal
n Authentication: None
The uplink interface (the interface facing the ESG) is mapped to the area, as follows:
n Interface: To-ESG
n Area ID: 0
n Priority: 128
n Cost: 1
What to do next
Make sure the route redistribution and firewall configuration allow the correct routes to be
advertised.
In this example, the logical router's connected routes ([Link]/24 and [Link]/24) are
advertised into OSPF. To verify the redistributed routes, on the left navigation panel, click Route
Redistribution, and check the following settings:
n Learner: OSPF
n From: Connected
n Prefix: Any
n Action: Permit
If you enabled SSH when you created the logical router, you must also configure a firewall filter
that allows SSH to the logical router's protocol address. For example, you can create a firewall
filter rule with the following settings:
n Name: ssh
n Type: User
n Source: Any
n Service: SSH
OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal
cost.
An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing
tables. An area is a logical collection of OSPF networks, routers, and links that have the same area
identification.
Prerequisites
A Router ID must be configured, as shown in OSPF Configured on the Edge Services Gateway.
When you enable a router ID, the text box is populated by default with the ESG's uplink interface
IP address.
Procedure
3 Double-click an ESG.
5 Enable OSPF.
a Next to OSPF Configuration, click Edit, and then click Enable OSPF
b (Optional) Click Enable Graceful Restart for packet forwarding to be uninterrupted during
restart of OSPF services.
c (Optional) Click Enable Default Originate to allow the ESG to advertise itself as a default
gateway to its peers.
c Enter an Area ID. NSX Edge supports an area ID in the form of a decimal number. Valid
values are 0–4294967295.
NSSAs prevent the flooding of AS-external link-state advertisements (LSAs) into NSSAs.
They rely on default routing to external destinations. Hence, NSSAs must be placed at the
edge of an OSPF routing domain. NSSA can import external routes into the OSPF routing
domain, providing transit service to small routing domains that are not part of the OSPF
routing domain.
7 (Optional) When you select the NSSA area type, the NSSA Translator Role appears. Select the
Always check box to translate Type-7 LSAs to Type-5 LSAs. All Type-7 LSAs are translated
into Type-5 LSAs by the NSSA.
8 (Optional) Select the type of Authentication. OSPF performs authentication at the area level.
All routers within the area must have the same authentication and corresponding password
configured. For MD5 authentication to work, both the receiving and transmitting routers must
have the same MD5 key.
a None: No authentication is required, which is the default value.
c MD5: This authentication method uses MD5 (Message Digest type 5 ) encryption. An MD5
checksum is included in the transmitted packet.
d For Password or MD5 type authentication, enter the password or MD5 key.
Important
n If NSX Edge is configured for HA with OSPF graceful restart enabled and MD5 is used for
authentication, OSPF fails to restart gracefully. Adjacencies are formed only after the grace
period expires on the OSPF helper nodes.
n NSX Data Center for vSphere always uses a key ID value of 1. Any device not managed
by NSX Data Center for vSphere that peers with an Edge Services Gateway or Logical
Distributed Router must be configured to use a key ID of value 1 when MD5 authentication
is used. Otherwise an OSPF session cannot be established.
a In Area to Interface Mapping, click Add to map the interface that belongs to the OSPF
area.
b Select the interface that you want to map and the OSPF area that you want to map it to.
In most cases, it is preferable to retain the default OSPF settings. If you do change the
settings, make sure that the OSPF peers use the same settings.
a Hello Interval displays the default interval between hello packets that are sent on the
interface.
b Dead Interval displays the default interval during which at least one hello packet must be
received from a neighbor before the router declares that neighbor down.
c Priority displays the default priority of the interface. The interface with the highest priority
is the designated router.
d Cost of an interface displays the default overhead required to send packets across that
interface. The cost of an interface is inversely proportional to the bandwidth of that
interface. The larger the bandwidth, the smaller the cost.
12 Make sure that the route redistribution and firewall configuration allow the correct routes to be
advertised.
The ESG can be connected to the outside world through a bridge, a physical router, or through an
uplink port group on a vSphere distributed switch, as shown in the following figure.
In this example topology, the ESG's uplink interface connects through a vSphere distributed
switch to the physical architecture, and its internal interface connects through a transit logical
switch to the logical router, which connects the App and Web logical switches and their VMs.
The ESG is configured as follows:
n vNIC: uplink
n Gateway IP: [Link]. The ESG's default gateway is the ESG's uplink interface to its
external peer.
n Router ID: [Link]. The router ID is the uplink interface of the ESG. In other words, the
IP address that faces its external peer.
n Area Definition:
n Area ID: 0
n Type: Normal
n Authentication: None
The internal interface (the interface facing the logical router) is mapped to the area, as follows:
n vNIC: internal
n Area ID: 0
n Priority: 128
n Cost: 1
The connected routes are redistributed into OSPF so that the OSPF neighbor (the logical router)
can learn about the ESG's uplink network. To verify the redistributed routes, on the left navigation
panel, click Route Redistribution, and check the following settings:
n Learner: OSPF
n From: Connected
n Prefix: Any
n Action: Permit
Note OSPF can also be configured between the ESG and its external peer router, but more
typically this link uses BGP for route advertisement.
Make sure that the ESG is learning OSPF external routes from the logical router.
To verify connectivity, make sure that an external device in the physical architecture can ping the
VMs.
Configure BGP
Border Gateway Protocol (BGP) makes core routing decisions. It includes a table of IP networks or
prefixes, which designate network reachability among multiple autonomous systems.
An underlying connection between two BGP speakers is established before any routing
information is exchanged. Keepalive messages are sent by the BGP speakers in order to keep
this relationship alive. After the connection is established, the BGP speakers exchange routes and
synchronize their tables.
Procedure
5 Next to BGP Configuration, click Edit, and then click Enable BGP.
6 (Optional) Click Enable Graceful Restart for packet forwarding to be uninterrupted during
restart of BGP services.
7 (Optional) Click Enable Default Originate to allow the ESG to advertise itself as a default
gateway to its peers.
8 In Local AS, enter the local AS number. The local AS is advertised when BGP peers with routers in
other autonomous systems (AS). The path of autonomous systems that a route traverses is
used as one metric when selecting the best path to a destination.
When you configure BGP peering between an edge services gateway (ESG) and a logical
router, use the protocol IP address of the logical router as the BGP neighbor address of
the ESG.
The forwarding address is the IP address that you assigned to the distributed logical
router's interface facing its BGP neighbor (its uplink interface).
The protocol address is the IP address that the logical router uses to form a BGP neighbor
relationship. It can be any IP address in the same subnet as the forwarding address,
but this IP address must not be used anywhere else. When you configure BGP peering
between an edge services gateway (ESG) and a logical router, use the protocol IP address
of the logical router as the BGP neighbor IP address of the ESG.
f Edit the default weight for the neighbor connection, if necessary. The default weight is 60.
g Hold Down Timer displays a default value of 180 seconds, which is three times the value of
keep alive timer. Edit if necessary.
When BGP peering is achieved between two neighbors, the NSX Edge starts a hold down
timer. Each keep alive message it receives from the neighbor resets the hold down timer to
0. When the NSX Edge fails to receive three consecutive keep alive messages so that the
hold down timer reaches 180 seconds, the NSX Edge considers the neighbor as down and
deletes the routes from this neighbor.
Note The default time-to-live (TTL) value for eBGP neighbors is 1 and for iBGP neighbors
is 64. This value cannot be modified.
h Keep Alive Timer displays the default frequency of 60 seconds at which a BGP neighbor
sends keep alive messages to its peer. Edit if necessary.
Each segment sent on the connection between the neighbors is verified. MD5
authentication must be configured with the same password on both BGP neighbors,
otherwise, the connection between them is not made. You cannot enter a password when
FIPS mode is enabled.
a Click Add.
b Select the direction to indicate whether you are filtering traffic to or from the neighbor.
c Select the action to indicate whether you are allowing or denying traffic.
d Type the network in CIDR format that you want to filter to or from the neighbor.
In this topology, the ESG is in AS 64511. The logical router (DLR) is in AS 64512.
The forwarding address of the logical router is [Link]. This address is configured on the
uplink interface of the logical router. The protocol address of the logical router is [Link]. The
ESG uses this address to form a BGP peer relationship with the logical router.
On the BGP Configuration page of the logical router, the configuration settings are as follows:
n Neighbor settings:
n IP address: [Link]
On the BGP Configuration page of the ESG, the configuration settings are as follows:
n Neighbor settings:
n IP address: [Link]. This IP address is the protocol address of the logical router.
On the logical router, run the show ip bgp neighbors command, and make sure that the BGP
state is Established.
On the ESG, run the show ip bgp neighbors command, and make sure that the BGP state is
Established.
You can exclude an interface from route redistribution by adding a deny criterion for its network.
From NSX 6.2, the HA (management) interface of a logical (distributed) router is automatically
excluded from route redistribution.
Procedure
6 Select the protocols for which you want to enable route redistribution and click OK.
7 Add an IP prefix.
The IP prefix entered is exactly matched, except if you include less-than-or-equal-to (LE)
or greater-than-or-equal-to (GE) modifiers.
c LE and GE together specify a range of prefix lengths that the rule must match. You can
add IP prefix GE as a minimum prefix length to be matched and IP prefix LE as a maximum
prefix length to be matched.
You can use these two options individually or in conjunction. Values of LE and GE cannot
be zero or greater than 32. GE value cannot be greater than LE value. For example,
n If you provide a prefix as [Link]/16 and LE = 28, then the redistribution rule matches
all prefixes ranging from [Link]/16 to [Link]/28. It means that the rule matches all
prefix lengths from 16 to 28. Prefix [Link]/24 is matched.
n If you provide a prefix as [Link]/16 and GE = 24, then the redistribution rule matches
all prefixes ranging from [Link]/24 to [Link]/32. Prefix [Link]/28 is matched.
n If you provide GE = 24 and LE = 28, then the redistribution rule matches all prefixes
ranging from [Link]/24 to [Link]/28. Prefix [Link]/27 is matched.
c In Learner Protocol, select the protocol that has to learn routes from other protocols.
d In Allow Learning From, select the protocols from which routes must be learned.
Procedure
n In NSX 6.4.6 and later, click Networking & Security > About NSX.
n In NSX 6.4.5 and earlier, click Networking & Security > NSX Home > Summary.
2 From the NSX Manager drop-down menu, select the IP address of the NSX Manager. Observe
that the Locale ID or ID field shows the UUID of the NSX Manager.
See Cross-vCenter NSX Topologies for information on routing configurations for cross-vCenter
NSX environments.
Prerequisites
The universal logical (distributed) router must have been created with local egress enabled.
Procedure
Prerequisites
The universal logical (distributed) router that performs routing for the hosts or clusters must have
been created with local egress enabled.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Host Preparation.
2 Select the NSX Manager that manages the hosts or clusters you need to configure.
NSX 6.4.1 and later a Click a cluster from the left pane. In the right pane, the hosts in the
selected cluster are displayed in the Hosts table.
b To change the locale ID for the cluster, click Actions > Change Locale ID.
c To change the locale ID for a host, click the three dots menu next to
the host and click Change Locale ID.
d Type a new locale ID and click Save.
NSX 6.4.0 a Select the host or cluster you want to modify, expanding clusters to
display hosts if needed.
b Click Actions > Change Locale ID.
c Type a new locale ID and click OK.
Results
The universal controller cluster will send only routes matching this new locale ID to the hosts.
What to do next
NSX uses two multicast routing protocols: Internet Group Management Protocol (IGMPv2) and
Protocol Independent Multicast (PIM). PIM sparse mode is supported (PIM-SM). PIM is used on
ESGs, but not on the DLR.
n Receiving hosts advertise their group membership to a local multicast router, enabling them to
join and leave multicast groups.
After any routing protocol is first enabled (or disabled and re-enabled), traffic is not forwarded
until the protocol has converged, and the routes corresponding to the traffic have been learned
and installed. In a multicast network, traffic forwarding requires both the unicast and the multicast
routing protocols to converge. PIM Sparse-mode also requires that the Rendezvous Point (RP) for
a multicast group is known before any control or data traffic for the multicast group is processed.
When the PIM Bootstrap mechanism is used to disseminate the RP information, the Candidate
RPs are learned only after a Bootstrap message from a PIM neighbor is received. These messages
have an RFC default periodicity of 60 secs. If a Static RP is configured, the RP information is
available immediately and the delay associated with the Bootstrap mechanism is avoided.
n IPv4 support.
n IGMPv2 support.
n Replication Multicast Range should not overlap with a Transport Zone multicast range.
n The routes of multicast participating nodes must be learned explicitly either by the unicast
routing protocol or through a static route. NSX does not use the default route for multicast
reverse path forwarding (RPF) checks.
n During vMotion of virtual machines that are receivers of multicast, a 1–2 second multicast traffic
loss can occur.
n Starting in NSX 6.4.7, distributed firewall (DFW) is supported for multicast traffic. However,
IPFIX is not supported for multicast.
n Starting in NSX 6.4.7, edge firewall is supported for multicast traffic. Edge firewall supports
filtering of IGMP packets on the basis of protocol in the IP header. The firewall cannot filter the
type of IGMP packets, such as membership report, leave group, and so on.
Topologies:
n In a Cross-VC environment, connecting two Edge Services Gateways with multicast to the
same universal TLS is not supported.
n Single distributed logical router, that is, only one downlink per ESG.
n In NSX 6.4.5 or later, multicast is supported on a maximum of two uplink interfaces and
one downlink interface per ESG. However, if an NSX Edge is at 6.4.4 or earlier, multicast is
supported on a single uplink interface and a single downlink interface per ESG.
n Starting in NSX 6.4.7, PIM is supported on one GRE tunnel per ESG. PIM can be enabled either
on a maximum of two uplink interfaces of the ESG or one GRE tunnel interface, but not on both
simultaneously. To reach the sources, receivers, and RP outside the NSX network, static routes
must be configured with the IP address of the GRE tunnel endpoint as the next hop.
n Hardware VTEP gateways (ToR gateways) are not supported on a logical switch with multicast
routing.
NSX uses two multicast routing protocols: Internet Group Management Protocol (IGMPv2) and
Protocol Independent Multicast (PIM). PIM sparse mode is supported (PIM-SM). PIM is used on
ESGs, but not on the DLR.
For more information about multicast support in NSX, see Multicast Routing Support, Limitations,
and Topology.
Attention During vMotion of virtual machines that are receivers of multicast, a 1–2 second
multicast traffic loss can occur.
Prerequisites
Transport Zones must have a multicast address range configured. See Assign a Segment ID Pool
and Multicast Address Range in the NSX Installation Guide.
IGMP configuration must be the same across the Edge Services Gateway and the Logical
(Distributed) Router.
Enable IGMP snooping on the L2 switches to which VXLAN participating hosts are attached. If
IGMP snooping is enabled on L2, IGMP querier must be enabled on the router or L3 switch with
connectivity to multicast enabled networks. See Add a Logical Switch.
Procedure
1 In the vSphere Client, navigate to Networking & Security > NSX Edges.
4 Enable multicast.
Version Procedure
NSX 6.4.2 to 6.4.4 In Configuration, click the toggle switch to enable multicast.
Version Procedure
NSX 6.4.2 to 6.4.4 In Replication Multicast Range, enter a range of multicast group addresses in
the CIDR format.
Replication Multicast Range is a range of multicast group addresses (VXLAN outer destination
IP) that is used to replicate workload/tenant multicast group addresses (VXLAN inner
destination IP). Replication Multicast Range IP addresses should not overlap with the multicast
address range, configured in Networking & Security > Installation and Upgrade > Logical
Network Settings. For more information, see Assign a Segment ID Pool and Multicast Address
Range in the NSX Installation Guide.
6 Configure IGMP parameters. IGMP messages are used primarily by multicast hosts to signal
their interest in joining a specific multicast group, and to begin receiving group traffic. IGMP
Parameters configured on the DLR must match those configured on the ESG, and have to be
configured globally for the ESG and the DLR.
Query Max Response Time (sec) Optional. Sets the maximum amount of time that can
elapse between when the querier router sends a host-
query message and when it receives a response from
a host. The default is 10 seconds. Maximum value is 25
seconds.
Last Member Query Interval (sec) Optional. Configures the interval at which the router
sends IGMP group-specific query messages. The default
is 1 second. Maximum value is 25 seconds.
7 Under Enabled Interfaces, click Configure Interfaces and enable multicast on the uplink and
internal interfaces.
Note
n Multicast must be enabled on all the DLRs that should receive IPv4 multicast packets.
Results
To verify the multicast routing configuration on a given host and DLR, run the CLI command: show
logical-router host <host ID> dlr <DLR instance> mrouting-domain
For example, if the host is host-19 and the DLR instance is edge-1, run:
show logical-router host host-19 dlr edge-1 mrouting-domain
NSX uses two multicast routing protocols: Internet Group Management Protocol (IGMPv2) and
Protocol Independent Multicast (PIM). PIM sparse mode is supported (PIM-SM). PIM is used on
ESGs, but not on the DLR.
Attention During vMotion of virtual machines that are receivers of multicast, a 1–2 second
multicast traffic loss can occur.
For more information about multicast support in NSX, see Multicast Routing Support, Limitations,
and Topology.
If an ESG is at 6.4.4 or earlier, PIM is supported on a single uplink interface of the Edge. Starting in
NSX Data Center 6.4.5, PIM is supported on two uplink interfaces of the ESG.
Starting in NSX 6.4.7, PIM is also supported on one GRE virtual tunnel interface (VTI) per ESG. PIM
can be enabled on a maximum of two uplink interfaces of the ESG or one GRE tunnel interface.
However, you cannot enable PIM simultaneously on the GRE virtual tunnel interface and edge
uplink interfaces.
To enable PIM on a GRE tunnel interface, you must first configure GRE tunnels on the ESG by
using the NSX APIs. For more information about configuring GRE tunnels, see the NSX API Guide.
After configuring GRE tunnels on the ESG, you can view the list of GRE tunnels in the vSphere
Client UI.
The GRE virtual tunnel interface can be configured with an IPv4 address, an IPv6 address, or
both. However, to enable PIM on the GRE tunnel interface, the tunnel interface must have an IPv4
address. If the GRE virtual tunnel interface is configured with only an IPv6 address, this GRE tunnel
interface cannot be enabled as a PIM interface.
When PIM is enabled on a GRE tunnel interface, static routes must be added with the IP address
of the GRE virtual tunnel endpoint as the next hop IP address. Static routes are required to reach
the multicast sources, receivers, and rendezvous point (RP) outside the NSX network.
Procedure
1 In the vSphere Client, navigate to Networking & Security > NSX Edges.
4 Enable multicast.
Version Procedure
NSX 6.4.2 to 6.4.4 In Configuration, click the toggle switch to enable multicast.
5 Configure IGMP parameters. IGMP messages are used primarily by multicast hosts to signal
their interest in joining a specific multicast group, and to begin receiving group traffic. IGMP
Parameters configured on the DLR must match those configured on the ESG, and have to be
configured globally for the ESG and the DLR.
Query Max Response Time (sec) Optional. Sets the maximum amount of time that can
elapse between when the querier router sends a host-
query message and when it receives a response from
a host. The default is 10 seconds. Maximum value is 25
seconds.
Last Member Query Interval (sec) Optional. Configures the interval at which the router
sends IGMP group-specific query messages. The default
is 1 second. Maximum value is 25 seconds.
6 (Optional) Under PIM Sparse Mode Parameters (PIM-SM), enter the Static Rendezvous Point
Address. The rendezvous point (RP) is the router in a multicast network domain that acts as
the root of the multicast shared tree. Packets from the upstream source and join messages
from the downstream routers “rendezvous” at this core router. The RP can be configured statically
or dynamically with the use of a Bootstrap Router (BR). If a static RP is configured, it applies
to all the multicast groups.
An ESG cannot be the rendezvous point or Bootstrap Candidate Router. PIM-SM configuration
is done at the PIM global configuration level per edge.
7 Under Enabled Interfaces, click Configure Interfaces, and enable PIM on the interfaces.
n PIM can be enabled on a maximum of two uplink interfaces of an ESG or on a single GRE
virtual tunnel interface of an ESG, but not on both at the same time.
What to do next
If you have enabled PIM on a GRE virtual tunnel interface, static routes are required to reach the
multicast sources, receivers, and RP outside the NSX network. You must configure static routes
with the IP address of the GRE virtual tunnel endpoint as the next hop IP address.
For detailed information about configuring static routes, see Add a Static Route.
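For example, assuming an illustrative setup in which the GRE virtual tunnel endpoint address is 10.20.20.2 and a multicast source, receiver, or RP resides on the 172.16.50.0/24 network outside NSX, you would add a static route for 172.16.50.0/24 with 10.20.20.2 as the next hop. These addresses are illustrative only; substitute the tunnel endpoint and external networks from your own deployment.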
Multicast Topology
The following figure shows a sample topology using Multicast.
The Edge Service Gateway uplink interface is connected to the physical infrastructure through
the vSphere distributed switch. The Edge Service Gateway internal interface is connected to a
logical router through a logical transit switch. The logical router's default gateway is the ESG's internal interface IP address ([Link]). The router ID is the IP address of the logical router's uplink interface, that is, the address that faces the ESG ([Link]). In this topology, multicast traffic is
replicated in an optimal way, across subnets, between sources and receivers inside or outside the
NSX domain.
[Figure: sample multicast topology. The ESG uplink connects to the physical network; a transit logical switch connects the ESG internal interface to the logical router's uplink interface; the App and Web logical switches, each with a VM, connect to the logical router's internal interfaces.]
n Distributed Firewall
n Edge Firewall
n Firewall Logs
Distributed Firewall
A Distributed Firewall (DFW) runs in the kernel as a VIB package on all the ESXi host clusters that
are prepared for NSX. Host preparation automatically activates DFW on the ESXi host clusters.
DFW complements and enhances your physical security by removing unnecessary hair-pinning through the physical firewalls and reducing the amount of traffic on the network. Rejected traffic is blocked before it leaves the ESXi host, so there is no need for the traffic to traverse the network only to be stopped at the perimeter by the physical firewall. Traffic destined to another VM on the same host or on another host does not have to traverse the network up to the physical firewall and then go back down to the destination VM. Traffic is inspected at the ESXi level and delivered to the destination VM.
NSX DFW is a stateful firewall, meaning it monitors the state of active connections and uses
this information to determine which network packets to allow through the firewall. DFW is
implemented in the hypervisor and applied to virtual machines on a per-vNIC basis. That is, the
firewall rules are enforced at the vNIC of each virtual machine. Inspection of traffic happens at
the vNIC of a VM just as the traffic is about to exit the VM and enter the virtual switch (egress).
Inspection also happens at the vNIC just as the traffic leaves the switch but before entering the VM
(ingress).
NSX Manager virtual appliance, NSX Controller VMs, and NSX Edge Service Gateways are
automatically excluded from DFW. If a VM does not require DFW service, you can manually add it
to the exclusion list.
As DFW is distributed in the kernel of every ESXi host, firewall capacity scales horizontally
when you add hosts to the clusters. Adding more hosts increases the DFW capacity. As your
infrastructure expands and you buy more servers to manage your ever-growing number of VMs,
the DFW capacity increases.
A distributed firewall instance on an ESXi host contains the following two tables:
n Rule table to store all the firewall policy rules.
n Connection Tracker table to cache flow entries for rules with an "allow" action.
DFW rules are run in a "top-down" order. Traffic that must go through a firewall is first matched
against a firewall rules list. Each packet is checked against the top rule in the rule table before
moving down the subsequent rules in the table. The first rule in the table that matches the traffic
parameters is enforced. The last rule in the table is the DFW default rule. Packets not matching
any rule above the default rule are enforced by the default rule.
Each VM has its own firewall policy rules and context. During vMotion, when VMs move from one
ESXi host to another host, the DFW context (Rules table, Connection Tracker table) moves with
the VM. In addition, all active connections remain intact during vMotion. In other words, DFW
security policy is independent of VM location.
Micro-segmentation is powered by the Distributed Firewall (DFW) component of NSX. The power
of DFW is that the network topology is no longer a barrier to security enforcement. The same
degree of traffic access control can be achieved with any type of network topology.
For a detailed example of micro-segmentation use case, see the "Micro-Segmentation with NSX
DFW and Implementation" section in the NSX Network Virtualization Design Guide at https://
[Link]/docs/DOC-27683.
n Users want to access virtual applications using a laptop or mobile device where Active Directory is used for user authentication.
n Users want to access virtual applications using VDI infrastructure where the virtual machines
are running Microsoft Windows operating system.
For more information about Active-Directory user-based DFW rules, see Chapter 12 Identity
Firewall Overview.
Context-Aware Firewall
Context-aware firewall enhances visibility at the application level and helps to address the problem of application permeability. Visibility at the application layer helps you to monitor the workloads better from a resource, compliance, and security point of view.
Traditional layer 3 and layer 4 firewall rules cannot consume application IDs. Context-aware firewall identifies applications and enforces micro-segmentation for East-West traffic, independent of the port that the application uses. Context-aware or application-based firewall rules can be created by defining Layer 7 service objects. After defining Layer 7 service objects, you can create rules with a specific protocol, ports, and their application definition. Rule definitions can be based on more than 5-tuples. You can also use Application Rule Manager to create context-aware firewall rules.
Context-aware firewall is supported starting in NSX Data Center for vSphere 6.4.
All host clusters in an existing infrastructure managed by NSX Data Center for vSphere must be
upgraded to NSX Data Center for vSphere 6.4.0 or later.
Types of Firewall
Firewall takes action based on one or a combination of different L2, L3, L4, and L7 packet headers
that are added to the data as it moves through each layer of the TCP/IP model.
[Figure: network stack layers (application, presentation, and session) with the Distributed Firewall plus third-party services providing application-level (L7) inspection.]
In a layer 3 or layer 4 firewall, the action is taken solely based on source/destination IP, port, and protocol. The activity of network connections is also tracked. This type of firewall is known as a stateful firewall.
Layer 7 or context-aware firewall can do everything that the layer 3 and layer 4 firewall do. Also,
it can intelligently inspect the content of the packets. For example, a layer 7 firewall rule can be
written to deny all HTTP requests from a specific IP address.
[Figure: rule processing and revalidation. A packet's 5-tuple is looked up in the flow table; if no matching flow entry is found, the packet is matched against the rule table and a flow entry is created.]
When a context-aware firewall is configured for the virtual machine, the Distributed Deep Packet
Inspection (DPI) attributes must also be matched with the 5-tuples. This is where rules are
processed and validated again and the correct rule is found. Depending on the action, a flow
is created or dropped.
1 Upon entering a DFW filter, packets are looked up in the flow table based on 5-tuple.
2 If no flow/state is found, the flow is matched against the rule-table based on 5-tuple and an
entry is created in the flow table.
3 If the flow matches a rule with a Layer 7 service object, the flow table state is marked as “DPI In Progress”.
4 The traffic is then punted to the DPI engine. The DPI Engine determines the APP_ID.
5 Once the APP_ID has been determined, the DPI Engine sends down the attribute which is
inserted into the context table for this flow. The ”DPI In Progress” flag is removed and traffic is
no longer punted to the DPI engine.
6 The flow (now with APP_ID) is reevaluated against all rules that match the APP_ID, starting with the original rule that was matched based on 5-tuple, and ensuring that no matching L4 rules take precedence. The appropriate action is taken (allow/deny), and the flow table entry is updated accordingly.
It is possible to have a context-aware firewall rule exactly like an L3 or L4 rule, without really
defining the context. If that is the case, the validation step might be performed to apply the
context, which might have more attributes.
[Figure: DFW architecture. The management plane communicates with the control-plane user world agent (vsfwd) running in user space on the host, which programs the VSIP datapath module in kernel space.]
Application ID GUIDs
Layer 7 application identification identifies which application a particular packet or flow is
generated by, independent of the port that is being used.
Enforcement based on application identity enables users to allow or deny applications to run on
any port, or to force applications to run on their standard port. Deep Packet Inspection (DPI)
enables matching packet payload against defined patterns, commonly referred to as signatures.
Layer 7 service objects can be used for port-independent enforcement or to create new service objects that leverage a combination of Layer 7 application identity, protocol, and port. Layer 7 based service objects can be used in the firewall rule table and in Service Composer. Application identification information is captured in Distributed Firewall logs, in Flow Monitoring, and in Application Rule Manager (ARM) when profiling an application.
BLAST (Remote Access): A remote access protocol that compresses, encrypts, and encodes a computing experience at a data center and transmits it across any standard IP network for VMware Horizon desktops.
CIFS (File Transfer): CIFS (Common Internet File System) is used to provide shared access to directories, files, printers, serial ports, and miscellaneous communications between nodes on a network.
CLRCASE (Networking): ClearCase is a software tool for revision control of source code and other software development assets. It is developed by the Rational Software division of IBM. ClearCase forms the base of revision control for many large and medium-sized businesses and can handle projects with hundreds or thousands of developers.
EPIC (Client Server): Epic EMR is an electronic medical records application that provides patient care and healthcare information.
FTP (File Transfer): FTP (File Transfer Protocol) is used to transfer files from a file server to a local machine.
GITHUB (Collaboration): A web-based Git or version control repository and Internet hosting service.
HTTP (Web Services): HTTP (HyperText Transfer Protocol) is the principal transport protocol for the World Wide Web.
HTTP2 (Web Services): Traffic generated by browsing websites that support the HTTP 2.0 protocol.
MAXDB (Database): SQL connections and queries made to a MaxDB SQL server.
NFS (File Transfer): Allows a user on a client computer to access files over a network in a manner similar to how local storage is accessed.
NTP (Networking): NTP (Network Time Protocol) is used for synchronizing the clocks of computer systems over the network.
OCSP (Networking): An OCSP responder verifies that a user's certificate has not been compromised or revoked.
PCOIP (Remote Access): A remote access protocol that compresses, encrypts, and encodes a computing experience at a data center and transmits it across any standard IP network.
POP2 (Mail): POP (Post Office Protocol) is a protocol used by local e-mail clients to retrieve e-mail from a remote server.
RDP (Remote Access): RDP (Remote Desktop Protocol) provides users with a graphical interface to another computer.
RTCP (Streaming Media): RTCP (Real-Time Transport Control Protocol) is a sister protocol of the Real-time Transport Protocol (RTP). RTCP provides out-of-band control information for an RTP flow.
RTP (Streaming Media): RTP (Real-Time Transport Protocol) is primarily used to deliver real-time audio and video.
RTSP (Streaming Media): RTSP (Real Time Streaming Protocol) is used for establishing and controlling media sessions between end points.
RTSPS (Streaming Media): A secure network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points.
SIP (Streaming Media): SIP (Session Initiation Protocol) is a common control protocol for setting up and controlling voice and video calls.
SKIP (Networking): Simple Key Management for Internet Protocols (SKIP) is a hybrid key distribution protocol similar to SSL, except that it establishes a long-term key once and then requires no prior communication to establish or exchange keys on a session-by-session basis.
SMTP (Mail): SMTP (Simple Mail Transfer Protocol) is an Internet standard for electronic mail (e-mail) transmission across Internet Protocol (IP) networks.
SSH (Remote Access): SSH (Secure Shell) is a network protocol that allows data to be exchanged using a secure channel between two networked devices.
SSL (Web Services): SSL (Secure Sockets Layer) is a cryptographic protocol that provides security over the Internet.
SYMUPDAT (File Transfer): Symantec LiveUpdate traffic; this includes spyware definitions, firewall rules, antivirus signature files, and software updates.
SYSLOG (Network Monitoring): Syslog is a standard protocol for sending log and event messages from devices over the network to a collector.
TELNET (Remote Access): A network protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communications facility using a virtual terminal connection.
TFTP (File Transfer): TFTP (Trivial File Transfer Protocol) is used to list, download, and upload files to a TFTP server such as SolarWinds TFTP Server, using a client like WinAgents TFTP client.
This example explains the process of creating a Layer 7 firewall rule with the APP_HTTP service object. This firewall rule allows HTTP requests from a virtual machine to any destination. After creating the firewall rule, you initiate some HTTP sessions on the source VM that pass through this firewall rule, and turn on flow monitoring on a specific vNIC of the source VM. The firewall rule detects an HTTP application context and enforces the rule on the source VM.
Prerequisites
You must log in to the vSphere Web Client with an account that has any one of the following NSX
roles:
n Security administrator
n NSX administrator
n Security engineer
n Enterprise administrator
Note Make sure that NSX Data Center for vSphere 6.4 or later is installed.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
a Enter a rule name to identify this rule. For example, enter L7_Rule_HTTP_Service.
d From the Available Objects list, select the virtual machine. Move this object to the
Selected Objects list, and then click Save.
h From the Available Objects list, select App_HTTP service. Move this service to the
Selected Objects list, and then click Save.
i Make sure that the firewall rule is enabled, and the rule action is set to Allow.
The following figure shows the firewall rule that you created.
5 Log in to the console of your source VM and initiate the wget Linux command to download
files from the web using HTTP.
6 On the vNIC of the source VM, turn on live flow monitoring to monitor traffic flows on the
source VM.
b Select a particular vNIC on the source VM. For example, select l2vpn-client-vm-
Network adapter 1.
7 In the following figure, the flow monitoring data shows that the firewall rule has detected the
application (HTTP) context. Rule 1005 is enforced on source VM ([Link]) and traffic
flows to destination IP addresses [Link] and [Link].
8 Return to the Firewall page, and change the rule action to Block.
9 Go to the console of the source VM and run the wget command again.
Observe that the HTTP requests are now blocked on the source VM. You should see an error
in the VM console that says something like this:
HTTP request sent, awaiting response ... Read error (Connection reset by peer) in headers
Retrying.
The following figure shows a flow with the application (HTTP) context detected and blocked on
the vNIC of the source VM ([Link]).
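For reference, the HTTP download in steps 5 and 9 can be started from the source VM console with a command along the following lines. The URL is illustrative only; use any HTTP destination reachable from the VM:
wget http://192.0.2.80/index.html
While the rule action is Allow, the download completes normally. After the action is changed to Block, the same command fails with the connection-reset error shown earlier.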
What to do next
To know about other scenarios where you can use context-aware firewall rules, see Context-
Aware Firewall Scenarios.
Session Timers
Session Timers can be configured for TCP, UDP, and ICMP sessions.
Session Timers define how long a session is maintained on the firewall after inactivity. When the
session timeout for the protocol expires, the session closes.
On the firewall, a number of timeouts for TCP, UDP, and ICMP sessions can be specified to
apply to a user-defined subset of virtual machines or vNICs. By default, any virtual machines or
vNICs not included in the user-defined timer are included in the global session timer. All of these
timeouts are global, meaning they apply to all of the sessions of that type on the host.
Default session values can be modified depending on your network needs. Note that setting a
value too low could cause frequent timeouts, and setting a value too high could delay failure
detection.
On the firewall, you can define timeouts for TCP, UDP, and ICMP sessions for a set of user-defined VMs or vNICs. The default timer is global, meaning that it applies to all virtual machines protected by the firewall.
Procedure
u In NSX 6.4.1 and later, navigate to Networking & Security > Security > Firewall Settings >
Timeout Settings.
u In NSX 6.4.0, navigate to Networking & Security > Security > Firewall > Settings .
2 If there is more than one NSX Manager available, select one from the drop-down list.
4 Enter a name (required) and a description (optional) for the session timer.
5 Select the protocol. Accept the default values or enter your own values.
First Packet The timeout value for the connection after the first packet has been sent. The default is 120 seconds.
Closing The timeout value for the connection after the first FIN has been sent. The default is 120 seconds.
Open The timeout value for the connection after a second packet has been transferred. The default is 30
seconds.
Fin Wait The timeout value for the connection after both FINs have been exchanged and the connection is
closed. The default is 45 seconds.
Established The timeout value for the connection once the connection has become fully established.
Closed The timeout value for the connection after one endpoint sends an RST. The default is 20 seconds.
First Packet The timeout value for the connection after the first packet is sent. This will be the initial timeout for
the new UDP flow. The default is 60 seconds.
Single The timeout value for the connection if the source host sends more than one packet and the
destination host has not sent one back. The default is 30 seconds.
Multiple The timeout value for the connection if both hosts have sent packets. The default is 60 seconds.
First Packet The timeout value for the connection after the first packet is sent. This is the initial timeout for the
new ICMP flow. The default is 20 seconds.
Error reply The timeout value for the connection after an ICMP error is returned in response to an ICMP packet.
The default is 10 seconds.
8 Select one or more objects and click the arrow to move them to the Selected Objects column.
9 Click OK or Finish.
Results
After a session timer has been created it can be changed as needed. The default session timer can
also be edited.
Procedure
u In NSX 6.4.1 and later, navigate to Networking & Security > Security > Firewall Settings >
Timeout Settings.
u In NSX 6.4.0, navigate to Networking & Security > Security > Firewall > Settings .
2 If there is more than one NSX Manager available, select one from the drop-down list.
3 Select the timer you want to edit. Note that the default timer values can also be edited. Click
the pencil icon.
4 Enter a name (required) and a description (optional) for the session timer.
5 Select the protocol. Edit any default values that you want to change.
First Packet The timeout value for the connection after the first packet has been sent. The default is 120 seconds.
Closing The timeout value for the connection after the first FIN has been sent. The default is 120 seconds.
Open The timeout value for the connection after a second packet has been transferred. The default is 30
seconds.
Fin Wait The timeout value for the connection after both FINs have been exchanged and the connection is
closed. The default is 45 seconds.
Established The timeout value for the connection once the connection has become fully established.
Closed The timeout value for the connection after one endpoint sends an RST. The default is 20 seconds.
First Packet The timeout value for the connection after the first packet is sent. This will be the initial timeout for
the new UDP flow. The default is 60 seconds.
Single The timeout value for the connection if the source host sends more than one packet and the
destination host has not sent one back. The default is 30 seconds.
Multiple The timeout value for the connection if both hosts have sent packets. The default is 60 seconds.
First Packet The timeout value for the connection after the first packet is sent. This is the initial timeout for the
new ICMP flow. The default is 20 seconds.
Error reply The timeout value for the connection after an ICMP error is returned in response to an ICMP packet.
The default is 10 seconds.
8 Select one or more objects and click the arrow to move them to the Selected Objects column.
9 Click OK or Finish.
VMware recommends that you install VMware Tools on each virtual machine in your environment.
In addition to providing vCenter with the IP address of VMs, it provides the following functions:
n Collect network, disk, and memory usage from the VM and send it to the host.
Note that having two vNICs for a VM on the same network is not supported and can lead to
unpredictable results around which traffic is blocked or allowed.
For those VMs that do not have VMware Tools installed, NSX learns the IP address through ARP or DHCP snooping, if ARP or DHCP snooping is enabled on the VM's cluster.
IP addresses detected using ARP snooping are not removed automatically. In other words, there
is no timeout for vNIC IP addresses that are detected using ARP snooping.
You can specify the IP detection types either at a global level or at the host cluster level.
Typically, users with security administrator and security engineer roles might prefer to specify
the IP detection type at a global level. They use the detected VM IP addresses to configure the
SpoofGuard policies and the distributed firewall policies.
Users with an enterprise administrator role usually have a much wider view of the complete virtual
network, and might prefer to control the IP detection type by editing the settings at the host
cluster level. The IP detection settings at the host cluster level override the settings that are
specified at the global level.
Procedure
Global IP Detection a Navigate to Networking & Security > Security > SpoofGuard.
Host Cluster IP Detection a Navigate to Networking & Security > Installation and Upgrade > Host
Preparation.
b Click the cluster for which you want to change the IP detection type, and
then click Actions > Change IP Detection Type.
DHCP Snooping NSX detects the IP addresses of the VMs in the network by reading the
DHCP snooping entries.
ARP Snooping NSX detects the IP addresses of the VMs by using the ARP snooping
mechanism.
3 If you selected ARP snooping, enter the maximum number of ARP IP addresses that can be detected per vNIC, per VM. The default value is 1.
ARP snooping can detect a maximum of 128 IP addresses per vNIC, per VM. The valid range of values is 1 through 128. For example, if you specify a value of 5, only the first five IP addresses detected per vNIC, per VM, are retained.
IP addresses detected using ARP snooping are not removed automatically. In other words,
there is no timeout for vNIC IP addresses that are detected using ARP snooping.
What to do next
n If you enabled ARP snooping, consider the option to configure SpoofGuard to defend your
network against ARP poison attacks.
NSX Manager, NSX Controller, and NSX Edge virtual machines are automatically excluded from
distributed firewall protection. In addition, place the following service virtual machines in the
Exclusion List to allow traffic to flow freely.
n vCenter Server. It can be moved into a cluster that is protected by Firewall, but it must already
exist in the exclusion list to avoid connectivity issues.
Note It is important to add the vCenter Server to the exclusion list before changing the "any any" default rule from allow to block. Failure to do so results in access to the vCenter Server being blocked after you create a Deny All rule (or modify the default rule to a block action). If this occurs, use the API to change the default rule from deny to allow. For example, use GET /api/4.0/firewall/globalroot-0/config to retrieve the current configuration, and use PUT /api/4.0/firewall/globalroot-0/config to change the configuration (see the example after this list). See "Working with Distributed Firewall Configuration" in the NSX API Guide for more information.
n Virtual machines that require promiscuous mode. If these virtual machines are protected by
distributed firewall, their performance may be adversely affected.
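As noted above for the vCenter Server exclusion, the default rule can be reverted through the API if access is accidentally blocked. The following calls are a minimal sketch of that interaction using curl, assuming an NSX Manager reachable at nsxmgr.example.local with admin credentials; the hostname, credentials, and file name are illustrative only, and depending on your NSX version the PUT might also require an If-Match header carrying the ETag returned by the GET (see the NSX API Guide).
# Retrieve the current distributed firewall configuration (illustrative host and credentials)
curl -k -u 'admin:VMware1!' -X GET https://nsxmgr.example.local/api/4.0/firewall/globalroot-0/config -o dfw-config.xml
# Edit dfw-config.xml to set the default rule action back to allow, then push the configuration
curl -k -u 'admin:VMware1!' -X PUT -H 'Content-Type: application/xml' -d @dfw-config.xml https://nsxmgr.example.local/api/4.0/firewall/globalroot-0/config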
Procedure
u In NSX 6.4.1 and later, navigate to Networking & Security > Security > Firewall Settings >
Exclusion List.
u In NSX 6.4.0, navigate to Networking & Security > Security > Firewall > Exclusion List.
2 Click Add.
4 Click OK.
Results
If a virtual machine has multiple vNICs, all of them are excluded from protection. If you add
vNICs to a virtual machine after it has been added to the Exclusion List, Firewall is automatically
deployed on the newly added vNICs. To exclude the new vNICs from firewall protection, you must
remove the virtual machine from the Exclusion List and then add it back to the Exclusion List. An
alternative workaround is to power cycle (power off and then power on) the virtual machine, but
the first option is less disruptive.
Knowing the host resource utilization at any given time can help you in better organizing your
server utilization and network designs.
The default CPU threshold is 100, and the memory threshold is 100. You can modify the default
threshold values through REST API calls. The Firewall module generates system events when the
memory and CPU usage crosses the thresholds. For information on configuring default threshold
values, see Working with Memory and CPU Thresholds in the NSX API Guide.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Events.
Each ESXi host is configured with the following threshold parameters for DFW resource utilization:
CPU utilization, heap memory, process memory, connections per second (CPS), and maximum
connections. An alarm is raised if the respective threshold is crossed 20 consecutive times during
a 200-second period. A sample is taken every 10 seconds.
The memory is used by distributed firewall internal data structures, which include filters,
rules, containers, connection states, discovered IPs, and drop flows. These parameters can be
manipulated using the following API call: PUT /api/4.0/firewall/stats/thresholds. See the NSX
API Guide for more information.
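As an illustration, a threshold update might be pushed with a call along these lines, assuming an NSX Manager reachable at nsxmgr.example.local with admin credentials (the hostname, credentials, and file name are illustrative only) and a thresholds.xml file containing the threshold values in the format documented in the NSX API Guide:
curl -k -u 'admin:VMware1!' -X PUT -H 'Content-Type: application/xml' -d @thresholds.xml https://nsxmgr.example.local/api/4.0/firewall/stats/thresholds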
Edge Firewall
Edge Firewall monitors the North-South traffic to provide perimeter security functionality including
firewall, Network Address Translation (NAT), and site-to-site IPSec and SSL VPN functionality.
This solution is available in the virtual machine form factor and can be deployed in a High
Availability mode.
Firewall support is limited on the Logical Router. Only the rules on management or uplink interfaces work; the rules on internal interfaces do not work.
Note The Edge Services Gateway (ESG) is vulnerable to SYN flood attacks, where an attacker
fills the firewall state tracking table by flooding SYN packets. This DOS/DDOS attack creates a
service disruption to genuine users. The NSX Edge can defend itself from SYN flood attacks by
using the SYN cookie mechanism in a smart way to detect bogus TCP connections and stop
them without consuming firewall state tracking resources. As long as the SYN queue is not full, incoming connections pass normally. After the SYN queue is full, the SYN cookie mechanism takes effect.
However, for the servers behind the NSX Edge, the SYN flood protection feature is disabled by default. The NSX Edge uses SYNPROXY to provide the SYN flood protection.
Firewall rules applied to a Logical Router only protect control plane traffic to and from the Logical
Router control virtual machine. They do not enforce any data plane protection. To protect the
data plane traffic, create Logical Firewall rules for East-West protection or rules at the NSX Edge
Services Gateway level for North-South protection.
n These rules are defined on the Firewall user interface (Networking & Security > Security >
Firewall)
n These rules are displayed in read-only mode on the NSX Edge Firewall user interface.
2 Internal rules that enable the control traffic to flow for Edge services. For example, internal
rules include the following auto-plumbed rules:
a SSL VPN auto-plumb rule: The Edge Firewall tab displays the sslvpn auto-plumb rule when
server settings are configured and SSL VPN service is enabled.
b DNAT auto-plumb rule: The Edge NAT tab displays the DNAT auto-plumb rule as part of
the default SSL VPN configuration.
3 User-defined rules that are added on the NSX Edge Firewall user interface.
4 Default rule.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > NSX Edges.
4 Select the Default Rule, which is the last rule in the firewall table.
You can edit rule action or enable or disable logging of all sessions that match the default rule.
Enabling logging can affect performance.
The Edge Firewall interface provides the following methods to add an edge firewall rule:
n Add a rule either above or below an existing rule in the firewall table.
Remember If you have created distributed firewall rules and applied them to the edge, these
firewall rules are displayed in a read-only mode on the Edge Firewall user interface. However, the
edge firewall rules that you create using the Edge Firewall user interface are not displayed on the
Firewall interface that you used to create the distributed firewall rules (Networking & Security >
Security > Firewall).
Procedure
5 Use any of the following three methods to start the process of adding an edge firewall rule.
Method #1: Add a rule either above or below an existing rule in the firewall table.
NSX sets the source, destination, and service columns of the newly added rule as "any". If the
system-generated default rule is the only rule in the firewall table, the new rule is added above
the default rule. The new rule is enabled by default.
b In the No. column, click , and then select Add Above or Add Below.
In NSX 6.4.5 and earlier, you can create a rule by copying one rule at a time. Starting in NSX
6.4.6, you can select multiple rules to copy simultaneously. The copied rules are enabled by
default, and you can edit the rule properties, as necessary.
Note When you copy and paste system-generated "internal" rules and "default" rule, the
newly created rules are automatically assigned the rule type as "user".
6.4.6 and later a Select the check box next to the rules that you want to copy.
b Click More > Copy Selected Rule(s).
c Select the rule where you want the copied rules to be pasted.
d In the No. column, click , and select Paste Above or Paste Below.
A new row is added in the firewall table. NSX sets the source, destination, and service columns
of the newly added rule as "any". If the system-generated default rule is the only rule in the
firewall table, the new rule is added above the default rule. The new rule is enabled by default.
n In NSX 6.4.6 and later, click in the Name column of the new rule, and enter a rule name.
n In NSX 6.4.5 and earlier, point to the Name column of the new rule, and click . Enter a
rule name, and click OK.
You can add IP addresses, vCenter objects, and grouping objects as the source. If no source
is added, the source is set to "any". You can add multiple NSX Edge interfaces and IP address
groups as the source for firewall rules.
You can choose to create a new IP set or a new security group. After the IP set or security
group is created, it is automatically added in the Source column of the rule.
a Select one or more objects to use as sources in the firewall rule.
For example, in the following two situations, you can use the "vNIC Group" object type as
the source:
In this situation, select vNIC Group from the Object Type drop-down menu, and from the
Available Objects list, select vse.
Select all traffic originating from any internal or uplink (external) interface of the
selected NSX Edge
In this situation, select vNIC Group from the Object Type drop-down menu, and from the
Available Objects list, select internal or external.
The rule is automatically updated when you configure additional interfaces on the edge.
Remember Firewall rules defined on the internal interfaces do not work on a distributed
logical router.
n If the Negate Source option is turned on or selected, the rule is applied to traffic
coming from all sources except for the sources defined in this rule.
n If the Negate Source option is turned off or not selected, the rule is applied to traffic
coming from the sources in this rule.
You can add IP addresses, vCenter objects, and grouping objects as the destination. If no
destination is added, the destination is set to "any". You can add multiple NSX Edge interfaces
and IP address groups as the destination for firewall rules.
The procedure to add objects and IP addresses in the rule destination remains the same as
explained in the substeps for adding the rule source.
Tip Starting in NSX 6.4.6, you can drag objects and IP addresses from the Source column
to the Destination column and conversely. In addition, you can drag objects and IP addresses
from one rule to another rule.
You can add either a predefined service or a service group in the rule, or create a new
service or a service group to use in the rule. NSX Edge supports services defined only with
L3 protocols.
6.4.6 and later 1 Point to the Service column of the new rule and click .
2 In the Service/Service Groups tab, select either a service or a service
group from the Object Type drop-down menu.
3 Select the objects from the Available Objects list and move them to
the Selected Objects list.
Tip In NSX 6.4.6 and later, you can drag service and service group objects from one
user-defined rule to another user-defined rule.
6.4.6 and later 1 Point to the Service column of the new rule and click .
2 Click the Raw Port-Protocol tab, and then click Add.
3 Select a protocol.
4 In the Source Port column, enter the port numbers.
n In NSX 6.4.6 and later, select an action from the drop-down menu.
n In NSX 6.4.5 and earlier, point to the Action column of the rule, and click . Select an
action and click OK.
Action Description
Accept or Allow Allows traffic from or to the specified sources, destinations, and services. By
default, action is set to accept traffic.
Deny or Block Blocks traffic from or to the specified sources, destinations, and services.
11 (Optional) Specify whether sessions that match this new firewall rule must be logged.
By default, logging is disabled for the rule. Enabling logging can affect performance.
n In NSX 6.4.6 and later, click the toggle switch in the Log column to enable logging.
n In NSX 6.4.5 and earlier, point to the Action column of the new rule, and click . Select
Log or Do not log.
n In NSX 6.4.5 and earlier, point to the Action column of the new rule, and click . Expand
the Advanced options.
The following table describes the advanced options.
Option Description
Direction Select whether the rule must be applied on incoming traffic or outgoing
traffic or both. The default value is "In/Out", which means that rule is applied
symmetrically across both source and destination.
VMware does not recommend specifying the direction of firewall rules
because "in" or "out" direction can cause the rules to become asymmetric.
For example, consider that you have created a firewall rule to "allow" traffic
from source A to destination B, and the rule direction is set to "out".
n When A sends a packet to B, a state is created based on this rule on A
because the direction of traffic is "out" on A.
n When the packet is received on B, the actual traffic direction is "in".
Because the rule direction is set to accepting only “outgoing traffic”, the
rule does not hit this packet on B.
This example shows that setting the "out" direction in the rule causes the rule
to become asymmetric.
Match on Use this option to specify when the firewall rule must be applied.
n Select Original when you want the rule to be applied on original IP
address and services before network address translation is performed.
n Select Translated when you want the rule to be applied on translated IP
address and services after network address translation is performed.
13 Click Publish Changes to push the new rule to the NSX Edge.
Figure 10-5. Firewall rule for traffic to flow from an NSX Edge interface to an HTTP server
Figure 10-6. Firewall rule for traffic to flow from all internal interfaces (subnets on portgroups
connected to internal interfaces) of a NSX Edge to an HTTP Server
Figure 10-7. Firewall rule for traffic to allow SSH into a machine on internal network
What to do next
While working with edge firewall rules, you can perform several additional tasks in the firewall
table. For example:
n Filter the list of rules in the table by hiding the system-generated default and internal rules, or
by hiding the predefined distributed firewall rules that were applied on the edge.
n Search rules that match a specific string by using the Search text box. For instance, if you want
to search all the rules that contain the string "133", type 133 in the Search text box.
n In NSX 6.4.5 and earlier, make sure that the Stats column is displayed in the firewall table. If the Stats column is not displayed, click and select the Stats column. To view the statistics for a rule, check the value in the Stats column of that rule.
n Change the order of user-defined rules by clicking the Move Up ( or ) or Move Down (
or ) icons. In NSX 6.4.6 and later, you can drag user-defined rules to change the order.
Point to the user-defined rule that you want to drag. A drag handle ( ) icon appears to the left
of the rule. Click and drag this handle to move the rule to a valid location in the firewall table.
Important You cannot change the order of system-generated internal rules and the default
rule.
n Disable a rule.
n In NSX 6.4.6 and later, click the toggle switch to the left of the rule name.
n Undo and redo rule changes until the rule is published. This feature is available in NSX 6.4.6
and later. After the rule is published, the history of rule changes is lost, and you cannot undo
or redo the changes.
Procedure
Note You cannot edit the following types of rules in the NSX Edge Firewall user interface:
n Internal rules (for example, auto-plumbed rules that enable the control traffic to flow for
Edge services.)
n Predefined distributed firewall rules that are applied to the edge. These firewall rules are
defined in the Firewall user interface (Networking & Security > Security > Firewall).
Procedure
5 Select the rule for which you want to change the order.
Important You cannot change the order of system-generated internal rules and the default
rule.
Tip In NSX 6.4.6 and later, you can drag user-defined rules to change the order. Point to the
user-defined rule that you want to drag. A drag handle ( ) icon appears to the left of this rule.
Click and drag this handle to move the rule to a valid location in the firewall table.
Procedure
n Default rule
n Predefined distributed firewall rules that are applied to the edge by using the Firewall user
interface (Networking & Security > Security > Firewall).
Consider that you have created an edge firewall rule that uses an IP set object in the destination of
the rule, as shown in the following figure.
In the following procedure, you will delete the "FW-IPset" object on the edge, and then return to
the firewall table to observe that NSX Edge detects the invalid rule. You will mark the rule as valid
and republish the rule.
Procedure
d Click the IP Sets tab, and then select the FW-IPSet object.
e Click the Delete ( or ) icon, and then select the Proceed to force delete check box.
3 Observe that NSX displays the following error message above the firewall table.
NSX Edge detects that the destination of the firewall rule at position 4 is invalid, and therefore
the rule becomes invalid. The empty object in the destination column of the rule is enclosed in
a red box, as shown in the following figure.
6.4.6 and later Point to the empty sys-gen-empty-ipset-edge-fw object, click , and
then select Remove.
6.4.5 and earlier Point to the empty sys-gen-empty-ipset-edge-fw object and click .
5 (Optional) Edit the rule destination to make the rule configuration valid.
6.4.6 and later a Point to the Destination column of the rule, click and select Edit Rule
Destination.
b Add objects or IP addresses, as necessary.
6 (6.4.6 and later only) Click the Mark the rule as valid link in the error message.
n To confirm that the rule can be marked as valid, click Yes. The error message is removed.
n To close the warning message and return to the firewall table to verify and edit the rule
destination, click No.
Note In NSX 6.4.5 and earlier, the error message above the firewall table does not show
the Mark rule as valid link. After you remove the empty object, and optionally edit the rule
destination, NSX Edge removes the error message when it detects that the rule configuration
has become valid.
Edge Objects
You can create Edge-level grouping objects to limit the scope of objects to an Edge. Edge
grouping objects differ from network grouping objects, which have a global scope and can be
used across Edges and other objects.
For example, if you want to create an IP set (IP address group) for a specific Edge and ensure that
this IP set is not available for reuse in other contexts, you can create an Edge IP set.
At the scope of an NSX Edge, you can create and manage the following grouping objects:
n IP Sets
n Services
n Service Groups
To create Edge objects in the vSphere Web Client, select an Edge, and navigate to Manage >
Grouping Objects.
For more information about working with IP sets (IP address groups), services, and service
groups, see Chapter 22 Network and Security Objects.
The NAT service configuration is separated into source NAT (SNAT) and destination NAT (DNAT)
rules.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > NSX Edges.
7 Type the original source (public) IP address in one of the following formats.
Format Example
IP address [Link]
IP address/subnet [Link]/24
any
Format Example
Port number 80
any
Format Example
IP address [Link]
any
Format Example
Port number 80
any
Format Example
IP address [Link]
IP address/subnet [Link]/24
any
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > NSX Edges.
Format Example
IP address [Link]
any
Format Example
Port number 80
any
Format Example
IP address [Link]
any
Format Example
Port number 80
any
Format Example
IP address [Link]
Format Example
any
Format Example
Port number 80
any
NAT64 supports communications initiated by the IPv6-only node towards an IPv4-only node only.
n TCP
n UDP
n ICMP
The translation of IPv4 options, IPv6 routing headers, hop-by-hop extension headers, destination option headers, and source routing headers is not supported. FTP is not supported. Fragmented packets are not supported.
NSX Edge high availability is not supported with NAT64. NAT64 sessions are not synced between
active and standby appliances, so if a failover occurs, connectivity is interrupted.
If you have dynamic routing protocols configured, IPv4 prefixes are redistributed.
Protocol Timeout
TCP-ESTABLISHED 2 hours
TCP-Trans 4 minutes
UDP 5 minutes
ICMP 1 minute
Prerequisites
n Configure an uplink interface of the Edge Services Gateway with an address on the IPv6
network.
n Configure an internal interface of the Edge Services Gateway with an address on the IPv4
network.
n Ensure that these addresses are not duplicated anywhere else in your environment.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > NSX Edges.
Option Description
Match IPv6 Destination Prefix Enter an IPv6 network prefix (network address) or a specific IPv6 address in
CIDR notation.
As NAT64 provides connectivity from IPv6 subnets to IPv4 subnets, in most
situations, you might want to enter an IPv6 network prefix instead of a
specific IPv6 address.
NAT64 uses the IPv6 network prefix that you specify in this text box to map
the IPv4 destination addresses to IPv6 destination addresses. Prefix length
must be any one of the following: 32, 40, 48, 56, 64, or 96.
For example, if you use the /96 network prefix, NAT64 appends the
hexadecimal equivalent of the IPv4 destination address to the IPv6 network
prefix. See the sample NAT64 rule after this procedure for an example.
Note You can use the well-known [Link]/96 prefix defined in RFC 6052,
or use any other IPv6 prefix that is not already used in your environment.
Translated IPv4 Source Prefix Optional: Enter an IPv4 network prefix (network address) or a specific IPv4
address in CIDR notation.
Ensure that the IPv4 network prefix or the IPv4 address is not already used in
your environment.
As NAT64 provides connectivity from IPv6 subnets to IPv4 subnets, in most
situations, you might want to enter an IPv4 network prefix instead of a
specific IPv4 address.
NAT64 uses an IP address from the IPv4 network prefix to translate the IPv6
source address to an IPv4 source address. See the sample NAT64 rule after
this procedure for an example.
Note
n The [Link]/16 IPv4 shared address space is reserved for NAT64. You
can use this reserved address space.
n If you keep this text box empty, NAT64 rule automatically uses the
reserved address space when you publish the rule.
[Figure: sample NAT64 topology. The Web1 host (2001::20/64) on an external IPv6 subnet connects to the NSX Edge running NAT64; behind the Edge, a DLR connects two IPv4 subnets, Logical switch 1 and Logical switch 2.]
The NAT64 rule in this example uses the following sample values:
The following screen capture shows the published rule. The Rule ID is autogenerated and it might
vary in your environment.
The NAT64 rule takes the hex equivalent of the destination IPv4 address ([Link]) and appends
it to the IPv6 network prefix ([Link]) to form the IPv6 destination address: [Link].
The rule picks up any IP address from the Translated IPv4 Source prefix ([Link]/24). Let
us say, the rule picks up [Link]. NAT64 uses this IPv4 source address to translate the
[Link] destination address to the actual IPv4 destination address ([Link])
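As an illustration with addresses that are not taken from this topology, suppose the rule uses the RFC 6052 well-known prefix 64:ff9b::/96 as the IPv6 destination prefix and the actual IPv4 destination is 198.51.100.10. The hexadecimal equivalent of 198.51.100.10 is c633:640a, so the IPv6 host sends traffic to 64:ff9b::c633:640a. NAT64 extracts the embedded IPv4 address from the last 32 bits of that destination, translates the packet toward 198.51.100.10, and replaces the IPv6 source address with an address drawn from the translated IPv4 source prefix.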
1 Log in to the command prompt of Web1 computer and issue a ping command to the IPv6
destination address [Link]. A nat64 session is established.
2 Log in to the NSX Edge CLI and view the nat64 session by running the show nat64 sessions
command.
You can create multiple firewall rule sections for L2 and L3 rules. Because multiple users can log in to the web client and simultaneously make changes to firewall rules and sections, users can lock the sections that they are working on so that no one else can modify the rules in those sections.
Cross-vCenter NSX environments can have multiple universal rule sections. Multiple universal
sections allow rules to be easily organized per tenant and application. If rules are modified or
edited within a universal section, only the universal distributed firewall rules for that section are
synced to the secondary NSX Managers. You must manage universal rules on the primary NSX
Manager, and you must create the universal section there before you can add universal rules.
Universal sections are always listed above local sections on both primary and secondary NSX
Managers.
Rules outside the universal sections remain local to the primary or secondary NSX Managers on
which they are added.
Prerequisites
n In a standalone or single vCenter NSX environment there is only one NSX Manager so you do
not need to select one.
n Objects local to an NSX Manager must be managed from that NSX Manager.
n In a cross-vCenter NSX environment that does not have Enhanced Linked Mode enabled, you
must make configuration changes from the vCenter linked to the NSX Manager that you want
to modify.
n In a cross-vCenter NSX environment in Enhanced Linked Mode, you can make configuration
changes to any NSX Manager from any linked vCenter. Select the appropriate NSX Manager
from the NSX Manager drop-down menu.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
2 If there is more than one NSX Manager available, select one. You must select the Primary NSX
Manager to add a universal section.
3 Ensure that you are in the Configuration > General tab to add a section for L3, L4, or L7 rules.
Click the Ethernet tab to add a section for L2 rules.
5 Enter a name for the section. Section names must be unique within NSX Manager.
6 (Optional) In a cross-vCenter NSX environment, you can configure the section as a universal
firewall rule section.
7 (Optional) Configure firewall rule properties for the firewall section by selecting the
appropriate check boxes.
Enable User Identity at Source When using Identity Firewall for RDSH, Enable User
Identity at Source must be checked. Note that this
disables the enable stateless firewall option because
the TCP connection state is tracked for identifying the
context.
Enable Stateless Firewall Enable stateless firewall for the firewall section.
What to do next
Merging and consolidating a complex firewall configuration can help with maintenance and
readability.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
n In NSX 6.4.1 and later, on the firewall rule section you want to merge, click the menu ( )
and select Merge Section.
n In NSX 6.4.0, on the firewall rule section you want to merge, click Merge section ( ).
3 Select whether you want to merge this section with the section above or below.
Rules from both sections are merged. The new section keeps the name of the section with
which the other section is merged.
You cannot directly move a section to a different place in the firewall table. To do so, you must delete the section and publish the configuration. Then add the deleted section at the new location in the firewall table and re-publish the configuration.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
2 Ensure that you are in the Configuration > General tab to delete a section for L3 rules. Click
the Ethernet tab to delete a section for L2 rules.
3 Click the Delete section ( ) icon for the section you want to delete.
Results
Firewall rule sections can be locked to prevent multiple users from simultaneously modifying the
same section. The Enterprise Administrator can view and override all locks.
Security Administrator, Security Engineer, and Enterprise Administrator user roles can lock and unlock their sections. Enterprise Administrator user roles have override capability and can unlock a section locked by any user of any role, including sections locked by other Enterprise Administrators. For more information on user roles, see Managing User Rights.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
2 Click the section lock icon, enter the Lock Section name and comments, and click LOCK.
Firewall rule sections can be locked to prevent multiple users from simultaneously modifying the
same section. The Enterprise Administrator can view and override all locks.
Security Administrator, Security Engineer, and Enterprise Administrator user roles can lock and unlock their sections. Enterprise Administrator user roles have override capability and can unlock a section locked by any user of any role, including sections locked by other Enterprise Administrators. For more information on user roles, see Managing User Rights.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
n Click the section lock icon, and then click UNLOCK. The section now displays an unlocked
lock icon to indicate that it is unlocked.
n The number of locked sections is displayed above the firewall rule table. To view all locked
sections, click the hyperlinked number next to Locked. To find the sections locked by you,
filter rules by your name. Select the rule you want to unlock and click UNLOCK.
Each traffic session is checked against the top rule in the Firewall table before moving down the
subsequent rules in the table. The first rule in the table that matches the traffic parameters is
enforced. Rules are displayed in the following order:
1 Rules defined in the Firewall user interface by users have the highest priority, and are enforced
in top-to-bottom ordering with a per-virtual NIC level precedence.
2 Auto-plumbed rules (rules that enable control traffic to flow for Edge services).
4 Service Composer rules - a separate section for each policy. You cannot edit these rules in
the Firewall table, but you can add rules at the top of a security policy firewall rules section. If
you do so, you must re-synchronize the rules in Service Composer. For more information, see
Chapter 18 Service Composer.
Note that firewall rules are enforced only on clusters on which you have enabled firewall. For
information on preparing clusters, see the NSX Installation Guide.
Procedure
Prerequisites
If you are adding a rule based on a VMware vCenter object, ensure that VMware Tools is installed
on the virtual machines. See NSX Installation Guide.
VMs that are migrated from 6.1.5 to 6.2.3 do not have support for TFTP ALG. To enable TFTP ALG
support after migrating, add and remove the VM from the exclusion list or restart the VM. A new
6.2.3 filter is created, with support for TFTP ALG.
n One or more domains have been registered with NSX Manager. NSX Manager gets group
and user information as well as the relationship between them from each domain that it is
registered with. See Register a Windows Domain with NSX Manager.
n A security group based on Active Directory objects has been created which can be used as the
source or destination of the rule. See Create a Security Group.
n The Applied to field is not supported for rules for remote desktop access.
In a cross-vCenter NSX environment, universal rules refer to the distributed firewall rules defined
on the primary NSX Manager in the universal rules section. These rules are replicated on all
secondary NSX Managers in your environment, which enables you to maintain a consistent firewall
policy across vCenter boundaries. The primary NSX Manager can contain multiple universal
sections for universal L2 rules and multiple universal sections for universal L3 rules. Universal
sections are on top of all local and service composer sections. Universal sections and universal
rules can be viewed but not edited on the secondary NSX Managers. The placement of the
universal section with respect to the local section does not interfere with rule precedence.
Edge firewall rules are not supported for vMotion between multiple vCenter Servers.
The following objects are supported for universal rules:
Source and destination:
n universal IP set
n universal MAC set
n universal security group, which can contain a universal security tag, IP set, MAC set, or universal security group
Applied To:
n universal security group, which can contain a universal security tag, IP set, MAC set, or universal security group
n universal logical switch
n Distributed Firewall - applies rules on all clusters on which Distributed Firewall is installed
Service:
n pre-created universal services and service groups
n user-created universal services and service groups
Note that other vCenter objects are not supported for universal rules.
Make sure the state of NSX distributed firewall is not in backward compatibility mode. To check
the current state, use the REST API call GET [Link]
state. If the current state is backward compatibility mode, you can change the state to forward by
using the REST API call PUT [Link] Do not try to
publish a distributed firewall rule while the distributed firewall is in backward compatibility mode.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
2 Ensure that you are in the Configuration > General tab to add an L3, L4, or L7 rule. Click the
Ethernet tab to add an L2 rule.
If creating a universal firewall rule, create the rule in a universal rule section.
The following vCenter objects can be specified as the source or destination for a firewall rule:
Procedure
b Select the object type from the Object Type drop-down menu.
You can create a new security group or IP set. Once you create the new object, it is added
to the source or destination column by default. For information on creating a new security
group or IP set, see Chapter 22 Network and Security Objects.
c Select one or more objects and click the arrow to move them to the Selected Objects
column.
Option Description
NSX 6.4.1 a Click Edit in the source or destination column, select IP addresses,
and click Add.
b Enter one IP address. Both IPv4 and IPv6 addresses are valid.
c Click Add if you need to enter additional IP addresses.
NSX 6.4.0
a Click IP ( ) in the source column.
b Select IPv4 or IPv6.
c Type the IP address.
If Negate Source is selected, the rule is applied to traffic coming from all sources except for
the sources defined for this rule.
If Negate Source is not selected, the rule applies to traffic coming from the sources or
destinations defined for this rule.
You can select Negate Source only if you have at least one source or destination defined.
Option Description
NSX 6.4.0
a Click Edit ( ) in the source column.
b Select the Negate source check box.
Procedure
Option Description
NSX 6.4.1 a Point to the Service cell of the new rule and click .
b Select the object type from the Object Type drop-down menu. You can
create a new security group or IP set. Once you create the new object, it
is added to the source or destination column by default. For information
on creating a new security group or IP set, see Chapter 22 Network and
Security Objects
c Select one or more objects and click the arrow to move them to the
Selected Objects column.
NSX 6.4.0
a Point to the Service cell of the new rule and click .
You can create a new service or service group. Once you create the new
object, it is added to the Selected Objects column by default.
c Click OK.
Option Description
NSX 6.4.1 a Point to the Service cell of the new rule and click .
b Select Raw Port-Protocol, and click Add.
c Select the Protocol from the list and click OK.
NSX 6.4.0
a Point to the Service cell of the new rule and click .
b Select the service protocol.
Note: VMs that are migrated from 6.1.5 to 6.2.3 do not have support for
TFTP ALG. To enable TFTP ALG support after migrating, add and remove
the VM from the exclusion list or restart the VM. A new 6.2.3 filter is
created, with support for TFTP ALG.
c Type the port number and click OK.
In order to protect your network from ACK or SYN floods, you can set Service to TCP-all_ports
or UDP-all_ports and set Action to Block for the default rule. For information on modifying the
default rule, see Edit the Default Distributed Firewall Rule.
Procedure
1 Point to the Action cell of the new rule and make appropriate selections as described in the
table below.
Action Results in
Allow Allows traffic from or to the specified source(s), destination(s), and service(s).
Block Blocks traffic from or to the specified source(s), destination(s), and service(s).
Log Logs all sessions matching this rule. Enabling logging can affect performance.
Option Description
NSX 6.4.1 In the Logging column, click the Log toggle to turn logging on.
NSX 6.4.0
a Point to the Action cell of the new rule and click
b Select Log or Do not Log. Logging logs all sessions that match this rule
and can affect performance.
If the rule contains virtual machines/vNICS in the source and destination fields, you must add both
the source and destination virtual machines/vNICS to Applied To for the rule to work correctly.
Procedure
u In Applied To, define the scope at which this rule is applicable. Make appropriate selections as
described in the table below and click OK. Note that if you are adding a rule for remote desktop
access, the Applied To field is not supported.
All prepared clusters in your environment - Select Apply this rule on all clusters on which
Distributed Firewall is installed. After you click OK, the Applied To column for this rule displays
Distributed Firewall.
All NSX Edge gateways in your environment - Select Apply this rule on all the Edge gateways.
After you click OK or SAVE, the Applied To column for this rule displays All Edges.
If both the above options are selected, the Applied To column displays Any.
One or more clusters, datacenters, distributed virtual port groups, NSX Edges, networks, virtual
machines, vNICs, or logical switches - In the Available list, select one or more objects and click
the arrow to move them to the Selected Objects column.
Procedure
u Click Publish or Publish Changes. A new rule is added at the top of the section. If the
system-defined rule is the only rule in the section, the new rule is added above the default
rule.
After a few moments, a message indicating whether the publish operation was successful
is displayed. In case of any failures, the hosts on which the rule was not applied
are listed. For additional details on a failed publish, navigate to NSX Managers >
NSX_Manager_IP_Address > Monitor > System Events.
If you want to add a rule at a specific place in a section, select a rule. In the No. column, click
When you click Publish Changes, the firewall configuration is automatically saved. For
information on reverting to an earlier configuration, see Load a Saved Firewall Configuration.
What to do next
n Display additional columns in the rule table by clicking and selecting the appropriate
columns.
Stats - Clicking the stats icon shows the traffic related to this rule (traffic packets and size).
n Merge sections by clicking the Merge section icon and selecting Merge with above section or
Merge with below section.
The default Distributed Firewall rule allows all L3 and L2 traffic to pass through all prepared
clusters in your infrastructure. The default rule is always at the bottom of the rules table and
cannot be deleted or added to. However, you can change the Action element of the rule from
Allow to Block or Reject, add comments for the rule, and indicate whether traffic for that rule
should be logged.
In a cross-vCenter NSX environment the default rule is not a universal rule. Any changes to the
default rule must be made on each NSX Manager.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
You can only edit Action and Log, or add comments to the default rule.
Force sync is used when you need to synchronize the firewall rules on an individual host with the
NSX Manager.
Procedure
1 In the vSphere Web client, navigate to Networking & Security > Installation and Upgrade >
Host Preparation.
2 Select the cluster you want to force sync, then click Actions ( ) > Force Sync Services.
A firewall rule with a custom protocol number can be created on the distributed firewall or the
NSX Edge firewall.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
2 Ensure that you are in the Configuration > General tab to add an L3 rule. Click the Add rule
( ) icon.
5 Specify the Source of the new rule. See Add a Firewall Rule Source or Destination for details.
6 Specify the Destination of the new rule. See Add a Firewall Rule Source or Destination for
details.
7 Point to the Service cell of the new rule. Click the Add Service ( ) icon
8 Click New Service on the bottom left of the Specify Service window.
12 Click OK.
Results
Procedure
NSX can save up to 100 configurations. After this limit is exceeded, saved configurations
marked with Preserve Configuration are preserved, while older non-preserved configurations
are deleted to make room for preserved configurations.
5 Click Save
Option Description
n Click Revert Changes to go back to the configuration that existed before you added the
rule. When you want to publish the rule you just added, click the Load Configuration icon,
select the rule that you saved in step 3 and click OK.
Option Description
In NSX 6.4.0 a Click Revert Changes to go back to the configuration that existed before
you added the rule.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Firewall.
2 Ensure that you are in the Configuration > General tab to load an L3 firewall configuration.
Click the Ethernet tab to load an L2 firewall configuration.
Option Description
NSX 6.4.1 and later a Click More, then select Load Saved Configuration.
b Select the configuration to load and click LOAD.
NSX 6.4.0
a Click the Load configuration ( ) icon.
b Select the configuration to load and click OK.
What to do next
If Service Composer rules in your configuration were overridden by the loaded configuration, click
Actions > Synchronize Firewall Rules in the Security Policies tab within Service Composer.
Rules can be filtered by source or destination virtual machines or IP address, rule action, logging,
rule name, comments, and rule ID. You can also filter rules based on a specific service, application,
or a protocol.
Procedure
3 Click Apply.
What to do next
n In NSX 6.4.1 and later, click Clear in the Filter dialog box.
1 User-defined pre rules have the highest priority and are enforced in top-to-bottom ordering
with a per-virtual NIC level precedence.
2 Auto-plumbed rules.
4 Service Composer rules - a separate section for each policy. You cannot edit these rules in
the Firewall table, but you can add rules at the top of a security policy firewall rules section. If
you do so, you must re-synchronize the rules in Service Composer. For more information, see
Chapter 18 Service Composer.
You can move a custom rule up or down in the table. The default rule is always at the bottom of
the table and cannot be moved.
Procedure
1 In the Firewall tab, select the rule that you want to move.
Table 10-6. Firewall Rule Behavior with RDSH and Non-RDSH Sections
Enable User Identity Security Group (RDSH Section) | Identity Security Group (RDSH Section) | Any Security Group (Non-RDSH Section)
Source - SID based rules are preemptively pushed to hypervisor. Rule enforcement is on the first packet. | Source - IP based rules | Source - IP based rules
Applied To with Identity based Security Group - Applied to all hosts | User based Applied To
Applied To with Non-Identity based Security Group - User based Applied To | User based Applied To
Procedure
3 Click RESET.
Firewall Logs
Firewall generates and stores log files, such as audit logs, rules message logs, and system event
logs. You must configure a syslog server for each cluster that has the firewall enabled. The syslog
server is specified in the [Link] attribute.
Recommendation To collect firewall audit logs on a syslog server, ensure that you have
upgraded the syslog server to the most recent version. Preferably, configure a remote syslog-ng
server to collect the firewall audit logs.
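As an illustration only, a remote syslog target can also be configured from the ESXi host shell with esxcli; the server address and port shown are placeholders, and the host firewall must allow outbound syslog traffic.
esxcli system syslog config set --loghost='udp://<syslog-server-ip>:514'
esxcli system syslog reload
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true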
Rules message logs - Include all access decisions, such as permitted or denied traffic for each
rule, if logging was enabled for that rule. Contains DFW packet logs for the rules where logging
has been enabled. Location: /var/log/[Link]
Note The [Link] file can be accessed by running the show log manager command from the
NSX Manager Command Line Interface (CLI) and performing grep for the keyword [Link]. This
file is accessible only to the user or user group having the root privilege.
Rules message logs include all access decisions such as permitted or denied traffic for each
rule, if logging was enabled for that rule. These logs are stored on each host in /var/log/
[Link].
# more /var/log/[Link]
2015-03-10T[Link].671Z INET match DROP domain-c7/1002 IN 242 UDP [Link]/138->[Link]/138
# more /var/log/[Link]
2017-04-11T[Link].877Z ESXi_FQDN dfwpktlogs: 50047 INET TERM domain-c1/1001 IN TCP RST
[Link]/33491->[Link]/10001 22/14 7684/1070
More examples:
n RULE_TAG is an example of the text that you add in the Tag text box while adding or editing
the firewall rule.
The following tables explain the text boxes in the firewall log message.
Timestamp - 2017-04-11T[Link]
Firewall-specific portion - 877Z ESXi_FQDN dfwpktlogs: 50047 INET TERM domain-c1/1001 IN TCP RST [Link]/33491->[Link]/10001 22/14 7684/1070
Filter hash - A number that can be used to get the filter name and other information.
3 Enable logging.
Note If you want customized text to be displayed in the firewall log message, you can enable the
Tag column and add the required text by clicking the pencil icon.
System event logs include Distributed Firewall configuration applied, filter created, deleted, or
failed, and virtual machines added to security groups, and so on. These logs are stored in /home/
secureall/secureall/logs/[Link].
To view the audit and system event logs in the vSphere Web Client, navigate to Networking &
Security > System > Events. In the Monitor tab, select the IP address of the NSX Manager.
n Use Case 1: Don, the IT director of a team, instructs his NSX administrator to restrict ALL HTTP
traffic for a particular VM. Don wants to restrict this traffic irrespective of the port it comes
from.
n Use Case 2: Robert, the IT lead of a team, wants to restrict HTTP traffic to a particular VM
on the condition that the traffic does not come from TCP port 8080.
n Use Case 3: Now that there is a context-aware firewall, it can be extended to identity-based
logins as well, so that an Active Directory user who is logged in to his virtual desktop can
access HTTP only from port 8080. A manager wants his employee John to be able to access
HTTP only from port 8080, and only when John is logged in to Active Directory.
Parameter Option
Layer Layer7
App ID HTTP
Protocol TCP
Destination port 80
With this context-aware firewall rule, the only traffic allowed is web traffic on port 80.
Parameter Option
Layer Layer7
App ID SSH
Protocol TCP
With this context-aware firewall rule, the only traffic allowed is SSH traffic on any port.
For detailed steps on creating a context-aware firewall rule by using the vSphere Web Client, see
Example: Create a Context-Aware Firewall Rule.
Enforcement based on application identity enables users to allow or deny applications to run on
any port, or to force applications to run on their standard port. Deep Packet Inspection (DPI)
enables matching packet payload against defined patterns, commonly referred to as signatures.
Layer 7 service objects can be used for port-independent enforcement or to create new service
objects that leverage a combination of Layer 7 application identity, protocol and port. Layer
7 based service objects can be used in the firewall rule table and Service Composer, and
application identification information is captured in Distributed Firewall logs, and Flow Monitoring
and Application Rule Manager (ARM) when profiling an application.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Create a service and specify Layer 7, App ID, protocol, and port. For port-independent
enforcement, this step can be skipped. See Application ID GUIDs and Create a Service for
more details.
3 Create a new distributed firewall rule. In the service field, select the Layer 7 service you
created in step 2. For port independent enforcement, select an App ID, see Application ID
GUIDs. See Add a Firewall Rule for details.
A high level overview of the IDFW configuration workflow begins with preparing the
infrastructure. This includes the administrator installing the host preparation components on each
protected cluster, and setting up Active Directory synchronization so that NSX can consume AD
users and groups. Next, IDFW must know which desktop an Active Directory (AD) user logs
onto in order to apply DFW rules. There are two methods IDFW uses for logon detection:
Guest Introspection (GI) and/or the Active Directory Event Log Scraper. Guest Introspection is
deployed on ESXi clusters where IDFW virtual machines are running. When network events are
generated by a user, a guest agent installed on the VM forwards the information through the
Guest Introspection framework to the NSX Manager. The second option is the Active Directory
event log scraper. Configure the Active Directory event log scraper in the NSX Manager to point
at an instance of your Active Directory domain controller. NSX Manager will then pull events from
the AD security event log. You can use both in your environment, or one or the other. When both
the AD log scraper and Guest Introspection are used, Guest Introspection will take precedence.
Note that if both the AD event log scraper and Guest Introspection are used, the two are mutually
exclusive: if one of these stops working, the other does not begin to work as a back up.
Once the infrastructure is prepared, the administrator creates NSX Security Groups and adds
the newly available AD Groups (referred to as Directory Groups). The administrator can then
create Security Policies with associated firewall rules and apply those policies to the newly created
Security Groups. Now, when a user logs into a desktop, the system will detect that event along
with the IP address which is being used, look up the firewall policy that is associated with that
user, and push those rules down. This works for both physical and virtual desktops. For physical
desktops, AD event log scraper is also required to detect that a user is logged into a physical
desktop.
Identity firewall can be used for micro-segmentation with remote desktop sessions (RDSH),
enabling simultaneous logins by multiple users, user application access based on requirements,
and the ability to maintain independent user environments. Identity Firewall with remote desktop
sessions requires Active Directory.
For supported Windows operating systems see Identity Firewall Tested and Supported
Configurations. Note that Linux based operating systems are not supported for Identity Firewall.
User-based distributed firewall rules are determined by membership in an Active Directory (AD)
group membership. IDFW monitors where AD users are logged in, and maps the login to
an IP Address, which is used by DFW to apply firewall rules. Identity Firewall requires either
guest introspection framework or active directory event log scraping. You can use both in your
environment, or one or the other. When both the AD log scraper and Guest Introspection are
used, Guest Introspection will take precedence. Note that if both the AD event log scraper and
Guest Introspection are used, the two are mutually exclusive: if one of these stops working, the
other does not begin to work as a back up.
AD group membership changes do not immediately take effect for logged-in users using
RDSH Identity Firewall rules; this includes enabling, disabling, and deleting users.
For changes to take effect, users must log off and then log back on. We recommend AD
administrators force a log off when group membership is modified. This behavior is a limitation
of Active Directory.
3 The NSX management plane looks at the user and receives all of the Active Directory (AD)
groups the user belongs to. The NSX management plane then sends group modify events for
all of the affected AD groups.
4 For each Active Directory group all of the Security Groups (SG) including this AD group are
flagged, and a job is added to the queue to process this change. Because a single SG can
include multiple Active Directory groups, a single user login event will often trigger multiple
processing events for the same SG. To address this, duplicate Security Group processing
requests are removed.
1 A Security Group processing request is received. When a SG is modified, NSX updates all
affected entities and triggers actions per IDFW rules.
3 From Active Directory, NSX receives all of the users belonging to the AD groups.
5 The IP addresses are mapped to vNICs, and then the vNICs are mapped to virtual machines
(VMs). The resulting list of VMs is the result of the Security Group to VM translation.
Note Identity Firewall for RDSH is only supported with Windows Server 2016, Windows 2012 with
VMware Tools 10.2.5 and later, and Windows 2012 R2 with VMware Tools 10.2.5 and later.
Procedure
1 Configure Active Directory Sync in NSX, see Synchronize a Windows Domain with Active
Directory. This is required to use Active Directory groups in Service Composer.
2 Prepare the ESXi cluster for DFW. See Prepare the Host Cluster for NSX in the NSX Installation
Guide.
3 Configure Identity Firewall logon detection options. One or both of these options must be
configured.
Note If you have a multi-domain AD architecture, and the log scraper isn't accessible due to
security constraints, use Guest Introspection to generate login and logout events.
n Configure Active Directory event log access. See Register a Windows Domain with NSX
Manager.
n Windows Guest OS with guest agent installed. This comes with a complete installation
of VMware Tools ™. Deploy Guest Introspection service to protected clusters. See Install
Guest Introspection on Host Clusters.
Note that Identity Firewall with RDSH support requires Guest Introspection network drivers be
installed.
Server/Version Supported?
Domain sync with single subtree of OUs with level hierarchy - 6.4.0 and later
Delete and re-add same domain with selective OU - 6.4.0 and later
n VM IP address change
n The event log queue for incoming login events is limited, and login events are not received if
the log is full.
For more information about domain synchronization see Synchronize a Windows Domain with
Active Directory.
Linux Support - No
n GI framework must be deployed to every cluster where IDFW VMs are running.
n UDP sessions are not supported. Networking events are not generated for UDP sessions on
Guest VMs.
Table 12-6. Single forest, single domain and nesting of Active Directory groups and user
configurations
n A user login event is processed only when a TCP session is initiated from a guest VM.
n User log out events are not sent or processed. Enforced ruleset remains until an 8-hour
time span elapses since a user's last network activity, or a different user generates a TCP
connection from the same VM. The system processes this as a log out from the previous user
and a log on from the new user.
n Multi-user support is available with IDFW with RDSH in NSX 6.4.0 and later.
n RDSH VM logins are primarily handled by the context engine for rule enforcement. RDSH
logins are only matched to firewall rules created with Enable User Identity at Source, and the
rule must be created in a new section of Firewall Rules. If a user belongs to a non user identity
at source security group and logs in to an RDSH VM, the login won't trigger any translation on
the non user identity at source security group. An RDSH VM never belongs to any non user
identity at source security groups.
Once NSX Manager retrieves AD credentials, you can create security groups based on user
identity, create identity-based firewall rules, and run Activity Monitoring reports.
AD group membership changes do not immediately take effect for logged-in users using
RDSH Identity Firewall rules; this includes enabling, disabling, and deleting users.
For changes to take effect, users must log off and then log back on. We recommend AD
administrators force a log off when group membership is modified. This behavior is a limitation
of Active Directory.
Important Any changes made in Active Directory will NOT be seen on NSX Manager until a delta
or full sync has been performed.
Prerequisites
The domain account must have AD read permission for all objects in the domain tree. The event
log reader account must have read permissions for security event logs.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
2 Click the Domains tab, and then click the Add domain ( ) icon.
3 In the Add Domain dialog box, enter the fully qualified domain name (for example,
[Link]) and netBIOS name for the domain.
To retrieve the netBIOS name for your domain, type nbtstat -n in a command window on a
Windows workstation that is part of a domain or on a domain controller. In the NetBIOS Local
Name Table, the entry with a <00> prefix and type Group is the netBIOS name.
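The output looks similar to the following sketch (names and addresses are illustrative); the domain's netBIOS name is the Group entry with the <00> suffix, CORP in this example.
C:\> nbtstat -n
       Name               Type         Status
    ---------------------------------------------
    HOST1          <00>  UNIQUE      Registered
    CORP           <00>  GROUP       Registered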
5 During sync, to filter out users that no longer have active accounts, click Ignore disabled
users.
6 Click Next.
7 In the LDAP Options page, specify the domain controller that the domain is to be synchronized
with and select the protocol. See Identity Firewall Tested and Supported Configurations for
more information about supported domain synchronization options.
9 Enter the user credentials for the domain account. This user must be able to access the
directory tree structure.
10 Click Next.
11 (Optional) In the Security Event Log Access page, select either CIFS or WMI for the connection
method to access security event logs on the specified AD server. Change the port number
if required. This step is used by Active Directory Event Log Scraper. See Identity Firewall
Workflow.
Note The event log reader looks for events with the following IDs from the AD Security event
log: Windows 2008/2012: 4624, Windows 2003: 540. The event log server has a limit of 128
MB. When this limit is reached you may see Event ID 1104 in the Security Log Reader. See
[Link] for more information.
12 Select Use Domain Credentials to use the LDAP server user credentials. To specify an
alternate domain account for log access, un-select Use Domain Credentials and specify the
user name and password.
The specified account must be able to read the security event logs on the Domain Controller
specified in step 10.
13 Click Next.
15 Click Finish.
Attention
n If an error message appears stating that the Adding Domain operation failed for the entity
because of a domain conflict, select Auto Merge. The domains will be created and the
settings displayed below the domain list.
Results
The domain is created and its settings are displayed below the domain list.
What to do next
Verify that login events on the event log server are enabled.
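One way to spot-check from a domain controller that logon events (ID 4624) are being written is shown below; this is an illustrative command only, and the account used must be able to read the Security log.
C:\> wevtutil qe Security /q:"*[System[(EventID=4624)]]" /c:5 /rd:true /f:text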
You can add, edit, delete, enable, or disable LDAP servers by selecting the LDAP Servers tab in
the panel below the domain list. You can perform the same tasks for event log servers by selecting
the Event Log Servers tab in the panel below the domain list. Adding more than one Windows
server (Domain Controllers, Exchange servers, or File Servers) as an event log server improves the
user identity association.
Through the vSphere Web Client UI, you can perform a force sync for Active Directory domains.
A periodic sync is automatically performed once a week, and a delta sync every 3 hours. It is not
possible to selectively sync sub-trees through the UI.
With NSX 6.4 and later it is possible to selectively sync active directory sub trees using API
calls. The root domain cannot have any parent-child relationships and must have a valid directory
distinguished name.
n /api/directory/verifyRootDN. Verify that the list of rootDN doesn't have any parent-child
relationships. Verify each rootDN is a valid active directory distinguished name.
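A hypothetical invocation sketch follows; the HTTP method and request body schema are assumptions, not confirmed by this guide, so consult the NSX API Guide for the exact payload. The distinguished name reuses the search base example shown elsewhere in this chapter.
curl -k -u 'admin:password' -H 'Content-Type: application/xml' -X POST --data '<rootDNList><rootDN>OU=VPN,DC=aslan,DC=local</rootDN></rootDNList>' https://<nsx-manager-ip>/api/directory/verifyRootDN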
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
2 Click the Domains tab, and then select the domain to be synchronized.
Important Any changes made in Active Directory will NOT be seen on NSX Manager until a
delta or full sync has been performed.
Click To
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
After creating a new user account, you must enable read-only security log access on a Windows
2008 server-based domain to grant the user read-only access.
Note You must perform these steps on one Domain Controller of the domain, tree, or forest.
Procedure
1 Navigate to Start > Administrative Tools > Active Directory Users and Computers.
2 In the navigation tree, expand the node that corresponds to the domain for which you want to
enable security log access.
3 Under the node that you just expanded, select the Builtin node.
5 Select the Members tab in the Event Log Readers Properties dialog box.
7 If you previously created a group for the “AD Reader” user, select that group in the Select
Users, Contacts, Computers, or Groups dialog. If you created only the user and you did not
create a group, select that user in the Select Users, Contacts, Computers, or Groups dialog.
What to do next
After you have enabled security log access, verify directory privileges by following the steps in
Verifying Directory Privileges.
After you have created a new account and enabled security log access, you must verify the ability
to read the security logs.
Prerequisites
Enable security log access. See Enable Security Read-Only Log Access on Windows 2008.
Procedure
1 From any workstation that is part of the domain, log on to the domain as an administrator.
3 Select Connect to Another Computer... from the Action menu. The Select Computer dialog
box appears. (Note that you must do this even if you are already logged on to the machine for
which you plan to view the event log.)
5 In the text field adjacent to the Another computer radio button, enter the name of the Domain
Controller. Alternatively, click the Browse... button and then select the Domain Controller.
7 Click the Set User... button. The Event Viewer dialog box appears.
8 In the User name field, enter the user name for the user that you created.
9 In the Password field, enter the password for the user that you created.
10 Click OK.
11 Click OK again.
13 Under the Windows Logs node, select the Security node. If you can see log events then the
account has the required privileges.
After synchronizing with the vCenter Server, NSX Manager collects the IP addresses of all vCenter
guest virtual machines from VMware Tools on each virtual machine. If a virtual machine has been
compromised, the IP address can be spoofed and malicious transmissions can bypass firewall
policies.
SpoofGuard is inactive by default, and you must explicitly enable it on each logical switch or VDS
port-group. When a VM IP address change is detected, the Distributed Firewall (DFW) blocks the
traffic from or to this VM until you approve this new IP address.
You create a SpoofGuard policy for specific networks that allows you to authorize the IP addresses
reported by VMware Tools and alter them if necessary to prevent spoofing. SpoofGuard inherently
trusts the MAC addresses of virtual machines collected from the VMX files and vSphere SDK.
Operating separately from Firewall rules, you can use SpoofGuard to block traffic determined to
be spoofed.
SpoofGuard supports both IPv4 and IPv6 addresses. The SpoofGuard policy supports multiple
IP addresses assigned to a vNIC when using VMware Tools and DHCP snooping. ARP snooping
supports up to 128 addresses discovered per VM, per vNIC. The SpoofGuard policy monitors and
manages the IP addresses reported by your virtual machines in one of the following modes.
This mode blocks all traffic until you approve each vNIC-to-IP address assignment. In this
mode, multiple IPv4 addresses can be approved.
Note SpoofGuard inherently allows DHCP requests regardless of enabled mode. However, if
in manual inspection mode, traffic does not pass until the DHCP-assigned IP address has been
approved.
SpoofGuard includes a system-generated default policy that applies to port groups and logical
networks not covered by the other SpoofGuard policies. A newly added network is automatically
added to the default policy until you add the network to an existing policy or create a new policy
for it.
SpoofGuard is one of the ways that an NSX distributed firewall policy can determine the IP address
of a virtual machine. For information, see IP Discovery for Virtual Machines.
n Approve IP Addresses
n Change an IP Address
n Clear an IP Address
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > SpoofGuard.
2 Click Add.
Option Description
Automatically Trust IP Assignments on Their First Use - Select this option to trust all IP
assignments upon initial registration with the NSX Manager.
Manually Inspect and Approve All IP Assignments Before Use - Select this option to require
manual approval of all IP addresses. All traffic to and from unapproved IP addresses is blocked.
6 Click Allow local address as valid address in this namespace to allow local IP addresses in
your setup.
When you power on a virtual machine and it is unable to connect to the DHCP server, a local
IP address is assigned to it. This local IP address is considered valid only if the SpoofGuard
mode is set to Allow local address as valid address in this namespace. Otherwise, the local IP
address is ignored.
7 Click Next.
8 Select the object type this policy should apply to, then select the objects you want.
u In NSX 6.4.0, click the Add icon. Select the object type this policy should apply to, then
select the objects you want.
A port group or logical switch can belong to only one SpoofGuard policy.
9 Click OK or Finish.
What to do next
You can edit a policy by clicking the Edit icon and delete a policy by clicking the Delete icon.
Approve IP Addresses
If you set SpoofGuard to require manual approval of all IP address assignments, you must approve
IP address assignments to allow traffic from those virtual machines to pass.
Procedure
2 In NSX 6.4.1 and later, select one of the option links from the drop-down menu, or All.
Option Description
Pending Approval vNICs IP address changes that require approval before traffic can flow to or from
these virtual machines
Inactive vNICs List of IP addresses where the current IP address does not match the
published IP address
vNICs with Duplicate IP IP addresses that are duplicates of an existing assigned IP address within the
selected datacenter
3 In NSX 6.4.0, select View, and click one of the option links.
Option Description
Active Virtual NICs Since Last Published - List of IP addresses that have been validated since the
policy was last updated
Option Description
Virtual NICs IP Required Approval IP address changes that require approval before traffic can flow to or from
these virtual machines
Virtual NICs with Duplicate IP IP addresses that are duplicates of an existing assigned IP address within the
selected datacenter
Inactive Virtual NICs List of IP addresses where the current IP address does not match the
published IP address
Unpublished Virtual NICs IP List of virtual machines for which you have edited the IP address assignment
but have not yet published
n To approve multiple IP addresses, select the appropriate vNICs and then click Approve
IPs.
Change an IP Address
You can change the IP address assigned to a MAC address to correct the assigned IP address.
Note SpoofGuard accepts a unique IP address from virtual machines. However, you can assign
an IP address only once. An approved IP address is unique across NSX. Duplicate approved IP
addresses are not allowed.
Procedure
2 In NSX 6.4.1 and later, select one of the option links from the drop-down menu, or All.
Option Description
Pending Approval vNICs IP address changes that require approval before traffic can flow to or from
these virtual machines
Inactive vNICs List of IP addresses where the current IP address does not match the
published IP address
vNICs with Duplicate IP IP addresses that are duplicates of an existing assigned IP address within the
selected datacenter
3 For NSX 6.4.0, select View, and click one of the option links.
Option Description
Active Virtual NICs Since Last Published - List of IP addresses that have been validated since the
policy was last updated
Option Description
Virtual NICs IP Required Approval IP address changes that require approval before traffic can flow to or from
these virtual machines
Virtual NICs with Duplicate IP IP addresses that are duplicates of an existing assigned IP address within the
selected datacenter
Inactive Virtual NICs List of IP addresses where the current IP address does not match the
published IP address
Unpublished Virtual NICs IP List of virtual machines for which you have edited the IP address assignment
but have not yet published
4 Add an IP address.
Option Description
NSX 6.4.0 Click the pencil icon next to an Approved IP address, then click + and add a
new IP address.
6 Click OK.
Clear an IP Address
You can clear an approved IP address assignment from a SpoofGuard policy.
Procedure
2 In NSX 6.4.1 and later, select one of the option links, or All.
Option Description
Pending Approval vNICs IP address changes that require approval before traffic can flow to or from
these virtual machines
Inactive vNICs List of IP addresses where the current IP address does not match the
published IP address
vNICs with Duplicate IP IP addresses that are duplicates of an existing assigned IP address within the
selected datacenter
3 For NSX 6.4.0, select View, and click one of the option links.
Option Description
Active Virtual NICs Since Last Published - List of IP addresses that have been validated since the
policy was last updated
Option Description
Virtual NICs IP Required Approval IP address changes that require approval before traffic can flow to or from
these virtual machines
Virtual NICs with Duplicate IP IP addresses that are duplicates of an existing assigned IP address within the
selected datacenter
Inactive Virtual NICs List of IP addresses where the current IP address does not match the
published IP address
Unpublished Virtual NICs IP List of virtual machines for which you have edited the IP address assignment
but have not yet published
You must have a working NSX Edge instance before you can use VPN. For information on setting
up NSX Edge, see NSX Edge Configuration.
n L2 VPN Overview
[Figure: SSL VPN-Plus overview - remote users connecting through web access mode reach the
SSL VPN service on the NSX Edge external interface over the Internet, and from there access the
corporate LAN and Windows servers; the admin manages the setup through NSX Manager.]
Mac OS Catalina 10.15, 10.15.3, 10.15.4 (supported in NSX 6.4.7 and later)
10.15.6, 10.15.7 (supported starting in NSX 6.4.10)
Mac OS Big Sur 11.2 and later (supported starting in NSX 6.4.10)
Important
n SSL VPN-Plus Client is not supported on computers that use ARM-based processors.
n In SSL VPN-Plus Client on Windows, the "auto-reconnect" feature does not work as expected
when the Npcap loopback adapter is "enabled". This loopback adapter interferes with the
function of the Npcap driver on a Windows computer. Make sure that the latest version of
the Npcap driver (0.9983 or later) is installed on your Windows computer. This version of the
driver does not require the loopback adapter for packet captures.
n Linux TCL, TK, and Network Security Services (NSS) libraries are required for the UI to work.
Prerequisites
The SSL VPN gateway requires port 443 to be accessible from external networks, and the SSL VPN
client requires the NSX Edge gateway IP and port 443 to be reachable from the client system.
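A quick way to confirm reachability from a client machine before installing the client is shown below; the address is illustrative.
nc -zv <edge-external-ip-or-fqdn> 443
openssl s_client -connect <edge-external-ip-or-fqdn>:443 </dev/null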
Procedure
1 In the SSL VPN-Plus tab, select Server Settings from the left panel.
2 Click Change.
4 Edit the port number if required. This port number is required to configure the installation
package.
Note If any of the following GCM ciphers is configured on the SSL VPN server, a backward
compatibility issue can occur in some browsers:
n AES128-GCM-SHA256
n ECDHE-RSA-AES128-GCM-SHA256
n ECDHE-RSA-AES256-GCM-SHA384
6 (Optional) From the Server Certificates table, use the default server certificate, or deselect the
Use Default Certificate check box and click the server certificate that you want to add.
Restriction
n SSL VPN-Plus service supports only RSA certificates.
n SSL VPN-Plus service supports only server certificates that are signed by the Root CA. Server
certificates signed by an Intermediate CA are not supported.
7 Click OK.
Add an IP Pool
The remote user is assigned a virtual IP address from the IP pool that you add.
Procedure
1 In the SSL VPN-Plus tab, select IP Pools from the left panel.
5 Type the IP address that is to be added to the routing interface in the NSX Edge gateway.
10 Type the connection-specific DNS suffix for domain based host name resolution.
12 Click OK.
Procedure
1 In the SSL VPN-Plus tab, select Private Networks from the left panel.
6 Specify whether you want to send private network and internet traffic over the SSL VPN-Plus
enabled NSX Edge or directly to the private server by bypassing the NSX Edge.
7 If you selected Send traffic over the tunnel, select Enable TCP Optimization to optimize the
internet speed.
A conventional full-access SSL VPN tunnel sends TCP/IP data in a second TCP/IP stack for
encryption over the internet. This results in application-layer data being encapsulated twice
in two separate TCP streams. When packet loss occurs (which happens even under optimal
internet conditions), a performance degradation effect called TCP-over-TCP meltdown occurs.
In essence, two TCP instances are correcting a single packet of IP data, undermining
network throughput and causing connection timeouts. TCP Optimization eliminates this TCP-
over-TCP problem, ensuring optimal performance.
8 When optimization is enabled, specify the port numbers for which traffic should be optimized.
Traffic for remaining ports for that specific network will not be optimized.
Note Traffic for all ports is optimized if port numbers are not specified.
When TCP traffic is optimized, the TCP connection is opened by the SSL VPN server on
behalf of the client. Because the TCP connection is opened by the SSL VPN server, the first
automatically generated rule is applied, which allows all connections opened from the Edge to
pass. Traffic that is not optimized is evaluated by the regular Edge firewall rules.
The default rule is allow any any.
10 Click OK.
What to do next
Add Authentication
In addition to local user authentication, you can add an external authentication server (AD, LDAP,
Radius, or RSA) which is bound to the SSL gateway. All users with accounts on the bound
authentication server will be authenticated.
The maximum time to authenticate over SSL VPN is 3 minutes. This is because the non-authentication
timeout is 3 minutes and is not a configurable property. So in scenarios where the AD authentication
timeout is set to more than 3 minutes, or there are multiple authentication servers in chained
authorization and the time taken for user authentication is more than 3 minutes, you will not be
authenticated.
Procedure
1 In the SSL VPN-Plus tab, select Authentication from the left panel.
4 Depending on the type of authentication server you selected, complete the following fields.
u AD authentication server
Option Description
Enable SSL Enabling SSL establishes an encrypted link between a web server and a browser.
Note There might be issues if you do not enable SSL and try to change password using SSL
VPN-Plus tab or from client machine later.
Search base Part of the external directory tree to search. The search base may be something equivalent
to the organizational unit (OU), domain controller (DC), or domain name (AD) of external
directory.
Examples:
n OU=Users,DC=aslan,DC=local
n OU=VPN,DC=aslan,DC=local
Bind DN User on the external AD server permitted to search the AD directory within the defined
search base. Most of the time, the bind DN is permitted to search the entire directory. The
role of the bind DN is to query the directory using the query filter and search base for the
DN (distinguished name) for authenticating AD users. When the DN is returned, the DN and
password are used to authenticate the AD user.
Example: CN=[Link],OU=users,OU=Datacenter Users,DC=aslan,DC=local
Login Attribute Name - Name against which the user ID entered by the remote user is matched.
For Active Directory, the login attribute name is sAMAccountName.
Search Filter Filter values by which the search is to be limited. The search filter format is attribute operator
value.
If you need to limit the search base to a specific group in the AD and not allow searching
across the entire OU, then
n Do not put group name inside the search base, only put OU and DC.
n Do not put both objectClass and memberOf inside the same
search filter string. Example of correct format for the search filter:
memberOf=CN=VPN_Users,OU=Users,DC=aslan,DC=local
Option Description
Use this server for secondary authentication - If selected, this AD server is used as the second
level of authentication.
Option Description
Enable SSL Enabling SSL establishes an encrypted link between a web server and a browser.
Search base Part of the external directory tree to search. The search base may be something
equivalent to the organization, group, or domain name (AD) of external directory.
Bind DN User on the external server permitted to search the AD directory within the defined
search base. Most of the time, the bind DN is permitted to search the entire directory.
The role of the bind DN is to query the directory using the query filter and search base
for the DN (distinguished name) for authenticating AD users. When the DN is returned,
the DN and password are used to authenticate the AD user.
Login Attribute Name - Name against which the user ID entered by the remote user is matched.
For Active Directory, the login attribute name is sAMAccountName.
Search Filter - Filter values by which the search is to be limited. The search filter format is
attribute operator value.
Use this server for secondary authentication - If selected, this server is used as the second level
of authentication.
Option Description
Secret - Shared secret specified while adding the authentication agent in the RSA security console.
Retry Count - Number of times the RADIUS server is to be contacted if it does not respond before
the authentication fails.
Use this server for secondary authentication - If selected, this server is used as the second level
of authentication.
Option Description
Configuration File - Click Browse to select the [Link] file that you downloaded from the RSA
Authentication Manager.
Source IP Address - IP address of the NSX Edge interface through which the RSA server is
accessible.
Use this server for secondary authentication - If selected, this server is used as the second level
of authentication.
Option Description
Enable password policy - If selected, defines an account lockout policy. Specify the required values.
1 In Retry Count, type the number of times a remote user can try to access his or her
account after entering an incorrect password.
2 In Retry Duration, type the time period in which the remote user's account gets locked
on unsuccessful login attempts.
For example, if you specify Retry Count as 5 and Retry Duration as 1 minute, the remote
user's account will be locked if he makes 5 unsuccessful login attempts within 1 minute.
3 In Lockout Duration, type the time period for which the user account remains locked.
After this time, the account is automatically unlocked.
Use this server for secondary authentication - If selected, this server is used as the second level
of authentication.
Restriction
n On the SSL VPN-Plus Web Portal and the SSL VPN-Plus full access client (PHAT client),
only client or user certificates that are signed by the Root CA are supported. Client
certificates signed by an Intermediate CA are not supported.
Procedure
1 In the SSL VPN-Plus tab, select Installation Package from the left panel.
4 In Gateway, type the IP address or FQDN of the public interface of NSX Edge.
This IP address or FQDN is bound to the SSL client. When the client is installed, this IP
address or FQDN is displayed on the SSL client.
5 Type the port number that you specified in the server settings for SSL VPN-Plus. See Add SSL
VPN-Plus Server Settings.
6 (Optional) To bind additional NSX Edge uplink interfaces to the SSL client,
c Click OK.
7 The installation package is created for Windows operating system by default. Select Linux or
Mac to create an installation package for Linux or Mac operating systems as well.
9 Select Enable to display the installation package on the Installation Package page.
Option Description
Start client on logon The SSL VPN client is started when the remote user logs on to his system.
Enable silent mode installation Hides installation commands from remote user.
Hide SSL client network adapter Hides the VMware SSL VPN-Plus Adapter, which is installed on the remote
user's computer along with the SSL VPN installation package.
Hide client system tray icon Hides the SSL VPN tray icon which indicates whether the VPN connection is
active or not.
Create desktop icon Creates an icon to invoke the SSL client on the user's desktop.
Enable silent mode operation Hides the pop-up that indicates that installation is complete.
Server security certificate validation - The SSL VPN client validates the SSL VPN server certificate
before establishing the secure connection.
Block user on certificate validation failure - If the certificate validation fails, then block the SSL
VPN user.
11 Click OK.
Add a User
Add a remote user to the local database.
Procedure
1 In the SSL VPN-Plus tab, select Users from the left panel.
8 In Password Details, select Password never expires to always keep the same password for the
user.
9 Select Allow change password to let the user change the password.
10 Select Change password on next login if you want the user to change the password the next
time he logs in.
12 Click OK.
Procedure
1 In the SSL VPN-Plus tab, select Dashboard from the left panel.
2 Click Start.
The dashboard displays the status of the service, number of active SSL VPN sessions, and
session statistics and data flow details. Click Details next to Number of Active Sessions to
view information about the concurrent connections to private networks behind the NSX Edge
gateway.
What to do next
1 Add an SNAT rule to translate the IP address of the NSX Edge appliance to the VPN Edge IP
address.
2 Using a web browser, navigate to the IP address of the NSX Edge interface by typing
https://NSXEdgeIPAddress.
3 Log in using the user name and password that you created in the Add a User section and
download the installation package.
4 Enable port forwarding on your router for the port number used in Add SSL VPN-Plus Server
Settings.
5 Launch the VPN client, select your VPN server, and login. You can now navigate to the
services on your network. SSL VPN-Plus gateway logs are sent to the syslog server configured
on the NSX Edge appliance. SSL VPN-Plus client logs are stored in the following directory on
the remote user's computer: %PROGRAMFILES%/VMWARE/SSLVPN Client/.
Add a Script
You can add multiple login or logoff scripts. For example, you can bind a login script for starting
Internet Explorer with [Link]. When the remote user logs in to the SSL client, Internet
Explorer opens up [Link].
Procedure
1 In the SSL VPN-Plus tab, select Login/Logoff Scripts from the left panel.
3 In Script, click Browse and select the script you want to bind to the NSX Edge gateway.
Option Description
Login Performs the script action when remote user logs in to SSL VPN.
Logoff Performs the script action when remote user logs out of SSL VPN.
Both Performs the script action both when remote user logs in and logs out of SSL
VPN.
7 Click OK.
The following topics explain the steps to install the SSL VPN-Plus Client on various operating
systems.
Procedure
1 On the remote client site, open a browser window, and type https://
ExternalEdgeInterfaceIP/sslvpn-plus/, where ExternalEdgeInterfaceIP is the IP
address of the Edge external interface where you enabled the SSL VPN-Plus service.
6 Extract the downloaded files and run the [Link] file to install the client.
What to do next
Log in to the SSL client with the credentials specified in the Users section. The SSL VPN-Plus client
validates the SSL VPN server certificate.
The Windows client is authenticated because the Server security certificate validation option is
selected by default when the installation package is created.
For Internet Explorer (IE) browser, add a trusted CA to the trust certificate store. If server
certificate validation fails, you are prompted to contact your system administrator. If server
certificate validation succeeds, a login prompt is displayed.
Adding a trusted CA to the trust store is independent of SSL VPN work flow.
Prerequisites
You must have root privileges to install the SSL VPN-Plus client.
Procedure
1 On the remote client site, open a browser window, and type https://
ExternalEdgeInterfaceIP/sslvpn-plus/, where ExternalEdgeInterfaceIP is the IP
address of the Edge external interface where you enabled the SSL VPN-Plus service.
4 Click the name of the installer package, and save the linux_phat_client.tgz compressed
file on the remote computer.
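A minimal installation sketch follows; the directory and installer script names inside the archive are assumptions and can differ between NSX versions, so verify the extracted contents before running the installer as root.
tar -xzf linux_phat_client.tgz
cd linux_phat_client
sudo ./install_linux_phat_client.sh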
What to do next
Log in to the SSL VPN GUI with the credentials specified in the Users section.
Attention
n Two-factor RSA authentication is not supported for logging in to the SSL VPN client on Linux
operating systems.
n SSL VPN Linux client CLI does not validate server certificates. If server certificate validation is
required, use the SSL VPN GUI for connecting to the gateway.
The SSL VPN Linux client validates the server certificate against the browser's certificate
store by default. If server certificate validation fails, you are prompted to contact your system
administrator. If server certificate validation succeeds, a login prompt is displayed.
Adding a trusted CA to the trust store (for example, Firefox certificate store) is independent of the
SSL VPN work flow.
Prerequisites
You must have root privileges to install the SSL VPN-Plus client.
Procedure
1 On the remote client site, open a browser window, and type https://
ExternalEdgeInterfaceIP/sslvpn-plus/, where ExternalEdgeInterfaceIP is the IP
address of the Edge external interface where you enabled the SSL VPN-Plus service.
4 Click the name of the installer package, and save the mac_phat_client.tgz compressed file
on the remote computer.
If your SSL VPN Client installation fails, check the installation log file at /tmp/
naclient_install.log.
For troubleshooting installation problems on Mac OS High Sierra, see the NSX
Troubleshooting Guide.
What to do next
Log in to the SSL client with the credentials specified in the Users section.
Attention Two-factor authentication is not supported for logging in to the SSL VPN client on Mac
operating systems.
The SSL VPN Mac client validates the server certificate against Keychain, a database that stores
certificates on Mac OS, by default. If server certificate validation fails, you are prompted to contact
your system administrator. If server certificate validation succeeds, a login prompt is displayed.
n SSL VPN-Plus Client on Mac OS provides the facility to configure proxy server settings, but
remote users must not configure the proxy server settings.
n Remote Linux OS users must avoid configuring the proxy server settings on the SSL VPN-Plus
Client through the Linux CLI.
The following procedure explains the steps to configure the proxy server settings in an SSL VPN-
Plus Client.
Prerequisites
Procedure
1 Double-click the desktop icon of the SSL VPN-Plus Client on the remote computer.
n On Windows and Mac computer, click Settings, and then click the Proxy Settings tab.
Option Description
Use IE Settings This option is available only in Windows SSL VPN-Plus Client.
Use the proxy server configuration that is specified in your IE browser.
SOCKS ver 4 This option is available only in Windows SSL VPN-Plus Client.
Specify the following settings for a SOCKS 4.0 proxy server:
n Proxy server name or an IP address of the proxy server.
n Proxy server port. The default port is 1080, which you can edit.
SOCKS ver 5 Specify the following settings for a SOCKS 5.0 proxy server:
n Proxy server name or an IP address of the proxy server.
n Proxy server port. The default port is 1080, which you can edit.
n (Optional) User name and password to access the SOCKS 5.0 server.
The following table lists the locations on the remote user's computer where the SSL VPN-Plus
client logs are stored.
Windows 8 C:\Users\username\AppData\Local\VMware\vpn\svp_client.log
Windows 10 C:\Users\username\AppData\Local\VMware\vpn\svp_client.log
2 Go to the Logging Policy section and expand the section to view the current settings.
3 Click Change.
OR
Note SSL VPN-Plus client logs are enabled by default and log level is set to NOTICE.
6 Click OK.
Procedure
1 In the SSL VPN-Plus tab, select Client Configuration from the left panel.
In split tunnel mode, only the VPN traffic flows through the NSX Edge gateway. In full tunnel
mode, the NSX Edge gateway becomes the remote user's default gateway and all traffic (VPN,
local, and internet) flows through this gateway.
a Select Exclude local subnets to exclude local traffic from flowing through the VPN tunnel.
b Type the IP address for the default gateway of the remote user's system.
4 Select Enable auto reconnect if you would like the remote user to automatically reconnect to
the SSL VPN client after getting disconnected.
5 Select Client upgrade notification for the remote user to get a notification when an upgrade
for the client is available. The remote user can then choose to install the upgrade.
6 Click OK.
Procedure
1 In the SSL VPN-Plus tab, select General Settings from the left panel.
Select To
Prevent multiple logon using same username Allow a remote user to log in only once with a username.
Enable compression Enable TCP-based intelligent data compression and improve data transfer speed.
Enable logging Maintain a log of the traffic passing through the SSL VPN gateway.
Force virtual keyboard Allow remote users to enter web or client login information only by using the virtual keyboard.
Randomize keys of virtual keyboard Make the virtual keyboard keys random.
Enable forced timeout Disconnect the remote user after the specified timeout period is over. Type the timeout period in minutes.
Session idle timeout End the user session if there is no activity on the session for the specified period.
The SSL VPN idle timeout counts all packets, including control packets sent by any application and user data, toward timeout detection. As a result, even if there is no user data, the session does not time out if an application transmits periodic control packets (such as mDNS).
User notification Type a message to be displayed to the remote user after logging in.
3 Click OK.
Procedure
2 Click the Manage tab and then click the SSL VPN-Plus tab.
6 In Logo, click Change and preferably select a JPEG image for the company logo.
7 In Colors, click the color box next to the numbered item whose color you want to change, and select the desired color.
8 Change the client banner, if necessary. Select a BMP image for the banner.
9 Click OK.
For information on adding an IP pool, see Configure Network Access SSL VPN-Plus.
Edit an IP Pool
You can edit an IP pool.
Procedure
5 Click OK.
Delete an IP Pool
You can delete an IP pool.
Procedure
Enable an IP Pool
You can enable an IP pool if you want an IP address from that pool to be assigned to the remote
user.
Procedure
Disable an IP Pool
You can disable an IP pool if you do not want the remote user to be assigned an IP address from
that pool.
Procedure
1 In the SSL VPN-Plus tab, select IP Pool from the left panel.
Procedure
2 Select the IP pool that you want to change the order for.
For information on adding a private network, see Configure Network Access SSL VPN-Plus.
Procedure
1 In the SSL VPN-Plus tab, click Private Networks in the left panel.
2 Select the network that you want to delete and click the Delete ( ) icon.
Procedure
1 In the SSL VPN-Plus tab, click Private Networks in the left panel.
Procedure
1 In the SSL VPN-Plus tab, click Private Networks in the left panel.
If you select Enable TCP Optimization for a private network, some applications such as FTP in
Active mode may not work within that subnet. To add an FTP server configured in Active mode,
you must add another private network for that FTP server with TCP Optimization disabled. Also,
the active TCP private network must be enabled, and must be placed above the subnet private
network.
Procedure
1 In the SSL VPN-Plus tab, click Private Networks in the left panel.
3 Select the network that you want to change the order of.
5 Click OK.
What to do next
To add an FTP server configured in Active mode, refer to Configure Private Network for Active
FTP Server.
Prerequisites
Procedure
1 In the SSL VPN-Plus tab, click Private Networks in the left panel.
2 Add the private network that you want to configure for active FTP. For more information, refer
to Add a Private Network .
4 In the Ports field, add port number for the private network.
6 Place the private network that you want to configure for active FTP above other private
networks that are configured. For more information, refer to Change the Sequence of a Private
Network.
What to do next
For information on creating an installation package, see Configure Network Access SSL VPN-Plus.
Procedure
1 In the SSL VPN-Plus tab, click Installation Package in the left panel.
5 Click OK.
Procedure
1 In the SSL VPN-Plus tab, click Installation Package in the left panel.
For information on adding a user, see Configure Network Access SSL VPN-Plus.
Edit a User
You can edit the details for a user except for the user ID.
Procedure
4 Click OK.
Delete a User
You can delete a user.
Procedure
3 Select the user that you want to delete and click the Delete ( ) icon.
Procedure
4 Click Change password on next login to require the user to change the password the next time the user logs in.
5 Click OK.
Edit a Script
You can edit the type, description, and status of a login or logoff script that is bound to the NSX
Edge gateway.
Procedure
1 In the SSL VPN-Plus tab, click Login/Logoff Scripts in the left panel.
4 Click OK.
Delete a Script
You can delete a login or logoff script.
Procedure
1 In the SSL VPN-Plus tab, click Login/Logoff Scripts in the left panel.
Enable a Script
You must enable a script for it to work.
Procedure
1 In the SSL VPN-Plus tab, click Login/Logoff Scripts in the left panel.
Disable a Script
You can disable a login/logoff script.
Procedure
1 In the SSL VPN-Plus tab, click Login/Logoff Scripts in the left panel.
Procedure
1 In the SSL VPN-Plus tab, click Login/Logoff Scripts in the left panel.
2 Select the script that you want to change the order of and click the Move Up ( ) or Move Down ( ) icon.
3 Click OK.
Starting with NSX Data Center 6.4.2, you can configure both policy-based IPSec VPN service
and route-based IPSec VPN service. However, you can configure, manage, and edit route-based
IPSec VPN parameters only by using REST APIs. You cannot configure or edit route-based IPSec
VPN parameters in the vSphere Web Client. For more information about using APIs to configure
route-based IPSec VPN, see the NSX API Guide.
In NSX 6.4.1 and earlier, you can configure only policy-based IPSec VPN service.
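As a hedged sketch only (the NSX Manager host name, credentials, and the XML body file are placeholders, and the full request schema is described in the NSX API Guide), the route-based IPSec VPN configuration of an Edge can be read and updated with calls such as the following:

# Retrieve the current IPSec VPN configuration of the Edge
curl -k -u 'admin:password' https://nsx-manager/api/4.0/edges/{edgeId}/ipsec/config

# Apply an updated configuration that includes route-based sites
curl -k -u 'admin:password' -X PUT -H 'Content-Type: application/xml' \
  -d @ipsec-config.xml https://nsx-manager/api/4.0/edges/{edgeId}/ipsec/config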
When the local IPSec VPN site originates traffic from unprotected local subnets to the protected
remote subnets on the peer site, the traffic is dropped.
The local subnets behind an NSX Edge must have address ranges that do not overlap with the
IP addresses on the peer VPN site. If the local and remote peer across an IPsec VPN tunnel has
overlapping IP addresses, traffic forwarding across the tunnel might not be consistent.
You can deploy an NSX Edge agent behind a NAT device. In this deployment, the NAT device
translates the VPN address of an NSX Edge instance to a publicly accessible address facing the
Internet. Remote VPN sites use this public address to access the NSX Edge instance.
You can place remote VPN sites behind a NAT device as well. You must provide the remote VPN
site's public IP address and its ID (either FQDN or IP address) to set up the tunnel. On both ends,
static one-to-one NAT is required for the VPN address.
The size of the ESG determines the maximum number of supported tunnels, as shown in the
following table.
ESG Size Maximum Number of Tunnels
Compact 512
Large 1600
Quad-Large 4096
X-Large 6000
Restriction The inherent architecture of policy-based IPSec VPN restricts you from setting up
VPN tunnel redundancy.
For a detailed example of configuring a policy-based IPSec tunnel between an NSX Edge and a
remote VPN Gateway, see Configure Policy-Based IPSec VPN Site Example.
In this VPN tunneling approach, virtual tunnel interfaces (VTI) are created on the ESG appliance.
Each VTI is associated with an IPSec tunnel. The encrypted traffic is routed from one site to
another site through the VTI interfaces. IPSec processing happens only at the VTI interfaces.
Important
n In NSX Data Center 6.4.2 and later, IPSec VPN tunnel redundancy is supported only using
BGP. OSPF dynamic routing is not supported for routing through IPSec VPN tunnels.
n Do not use static routing for route-based IPSec VPN tunnels to achieve VPN tunnel
redundancy.
The following figure shows a logical representation of IPSec VPN tunnel redundancy between two
sites. In this figure, Site A and Site B represent two data centers. For this example, assume that
Site A has Edge VPN Gateways that might not be managed by NSX, and Site B has an Edge
Gateway virtual appliance that is managed by NSX.
(Figure: Two route-based IPSec VPN tunnels between Site A and Site B. Each tunnel terminates on a VTI, BGP peering runs over both tunnels, and traffic reaches the tunnels through each gateway's uplink and router toward the site subnets.)
As shown in the figure, you can configure two independent IPSec VPN tunnels by using VTIs.
Dynamic routing is configured using BGP protocol to achieve tunnel redundancy. Both IPSec VPN
tunnels remain in service if they are available. All the traffic destined from Site A to Site B through
the ESG is routed through the VTI. The data traffic undergoes IPSec processing and goes out of its
associated ESG uplink interface. All the incoming IPSec traffic received from Site B VPN Gateway
on the ESG uplink interface is forwarded to the VTI after decryption, and then usual routing takes
place.
You must configure the BGP HoldDown timer and KeepAlive timer values so that loss of connectivity with the peer is detected within the required failover time.
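As a hedged sketch (the endpoint is the NSX Edge routing configuration API; the exact timer element names in the XML body are assumptions to verify against the NSX API Guide), the BGP timers can be inspected and tightened as follows:

# Retrieve the current BGP configuration of the Edge
curl -k -u 'admin:password' https://nsx-manager/api/4.0/edges/{edgeId}/routing/config/bgp

# Edit the hold-down and keep-alive timer values for each BGP neighbour in the
# returned XML, then apply the modified configuration
curl -k -u 'admin:password' -X PUT -H 'Content-Type: application/xml' \
  -d @bgp-config.xml https://nsx-manager/api/4.0/edges/{edgeId}/routing/config/bgp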
Some key points that you must remember about route-based IPSec VPN service are as follows:
n You can configure policy-based IPSec VPN tunnels and route-based IPSec tunnels on the
same ESG appliance. However, you cannot configure a policy-based tunnel and a route-based
tunnel with the same VPN peer site.
n NSX supports a maximum of 32 VTIs on a single ESG appliance. That is, you can configure a
maximum of 32 route-based VPN peer sites.
n NSX does not support migration of existing policy-based IPSec VPN tunnels to route-based
tunnels or conversely.
For information about configuring a route-based IPSec VPN site, see Configure Route-Based
IPSec VPN Site.
For a detailed example of configuring a route-based IPSec VPN tunnel between a local NSX Edge
and a remote Cisco CSR 1000V VPN Gateway, see Using a Cisco CSR 1000V Appliance.
Note If you connect to a remote site by using an IPSec VPN tunnel, dynamic routing on the edge
uplink cannot learn the IP address of that site.
The task topics in this section explain the steps to configure a policy-based IPSec VPN site.
Prerequisites
You must configure at least one IPSec VPN site on the NSX Edge before enabling the IPSec VPN
service.
Procedure
Prerequisites
Procedure
1 On a Linux or Mac machine where OpenSSL is installed, open the file /opt/local/etc/openssl/[Link] or /System/Library/OpenSSL/[Link].
mkdir newcerts
mkdir certs
mkdir req
mkdir private
echo "01" > serial
touch [Link]
openssl req -new -x509 -newkey rsa:2048 -keyout private/[Link] -out [Link] -days 3650
b Copy the privacy-enhanced mail (PEM) file content, and save it in a file in req/[Link].
7 On NSX Edge2, generate a CSR, copy the PEM file content, and save it in a file in req/
[Link].
9 Upload the PEM certificate at the end of the file certs/[Link] to Edge1.
10 Upload the PEM certificate at the end of the file certs/[Link] to Edge2.
11 Import the signed certificate ([Link]) to Edge1 and Edge2 as CA-signed certificates.
12 In the IPSec global configuration for Edge1 and Edge2, select the uploaded PEM certificate
and the CA certificate and save the configuration.
13 Navigate to Manage > Settings > Certificates. Select the signed certificate that you imported
and record the DN string.
15 Create IPSec VPN sites on Edge1 and Edge2 with Local ID and Peer ID as the distinguished
name (DN) string in the specified format.
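Between generating the CSRs and uploading the signed certificates, each CSR is signed with the CA created earlier. A minimal sketch of that signing step, assuming the default openssl.cnf policy sections and illustrative file names (edge1-csr.pem, edge1-cert.pem, and so on), is:

# Sign the Edge1 CSR with the CA created earlier
openssl ca -policy policy_anything -days 365 -in req/edge1-csr.pem -out certs/edge1-cert.pem
# Sign the Edge2 CSR in the same way
openssl ca -policy policy_anything -days 365 -in req/edge2-csr.pem -out certs/edge2-cert.pem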
Results
Check the status by clicking Show Statistics or Show IPSec Statistics. Click the channel to see the
tunnel status. The channel status should be enabled and the tunnel status should be Up.
Prerequisites
Self-signed certificates cannot be used for IPSec VPN. They can only be used in load balancing
and SSL VPN.
Procedure
6 Enter a global pre-shared key for those sites whose peer endpoint is set to "any".
To view the pre-shared key, click the Show Pre-Shared Key ( ) icon or select the Display
shared key check box.
Extension Description
add_spd Allowed values are on and off. The default value is on, even when you do not
configure this extension.
When add_spd=off:
n Security policies are installed only when the tunnel is up.
n If the tunnel is up, packets are sent encrypted through the tunnel.
n If the tunnel is down, packets are sent unencrypted, if a route is available.
When add_spd=on:
n Security policies are installed regardless of whether the tunnel is
established.
n If the tunnel is up, packets are sent encrypted through the tunnel.
n If the tunnel is down, packets are dropped.
ike_fragment_size If the maximum transmission unit (MTU) is small, you can set the IKE fragment
size by using this extension to avoid failures in the IKE negotiation. For
example, ike_fragment_size=900
8 Enable certificate authentication, and then select the appropriate Service certificate, CA
certificate, and the certificate revocation list (CRL).
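For illustration only, using the values from the extensions table above (whether multiple extensions are entered on separate lines or with a separator depends on the UI field), the extension entries might look like this:

add_spd=off
ike_fragment_size=900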
Procedure
5 Enable logging to log traffic flow between the local subnet and peer subnet.
Procedure
5 Click Add.
a Enter the local Id to identify the local NSX Edge instance. This local Id is the peer Id on the
remote site.
The local Id can be any string. Preferably, use the public IP address of the VPN or a fully
qualified domain name (FQDN) for the VPN service as the local Id.
If you are adding an IP-to-IP tunnel using a pre-shared key, the local Id and local endpoint
IP can be the same.
c Enter the subnets to share between the IPSec VPN sites in the CIDR format. Use a comma
separator to enter multiple subnets.
n For PSK peers, the peer Id can be any string. Preferably, use the public IP address of
the VPN or an FQDN for the VPN service as the peer Id.
Note If the Edge has more than one uplink interface that can reach the remote IPSec
peer, routing should be done in such a way that IPSec traffic goes out of the Edge
interface, which is configured with a local peer IP.
e Enter an IP address or an FQDN of the peer endpoint. The default value is any. If you
retain the default value, you must configure the Global PSK.
f Enter the internal IP address of the peer subnet in the CIDR format. Use a comma
separator to type multiple subnets.
a (Optional) Select a security compliance suite to configure the security profile of the IPSec
VPN site with predefined values defined by that suite.
The default selection is none, which means that you must manually specify the
configuration values for authentication method, IKE profile, and tunnel profile. When you
select a compliance suite, values that are predefined in that standard compliance suite
are automatically assigned, and you cannot edit these values. For more information about
compliance suites, see Supported Compliance Suites.
Note
n Compliance suite is supported in NSX Data Center 6.4.5 or later.
n If FIPS mode is enabled on the Edge, you cannot specify a compliance suite.
b Select one of the following Internet Key Exchange (IKE) protocols to set up a security
association (SA) in the IPSec protocol suite.
Option Description
IKEv1 When you select this option, IPSec VPN initiates and responds to IKEv1
protocol only.
IKEv2 When you select this option, IPSec VPN initiates and responds to IKEv2
protocol only.
IKE-Flex When you select this option, and if the tunnel establishment fails with
IKEv2 protocol, the source site does not fall back and initiate a connection
with the IKEv1 protocol. Instead, if the remote site initiates a connection
with the IKEv1 protocol, then the connection is accepted.
Important If you configure multiple sites with the same local and remote endpoints, make
sure that you select the same IKE version and PSK across all these IPSec VPN sites.
c From the Digest Algorithm drop-down menu, select one of the following secure hashing
algorithms:
n SHA1
n SHA_256
d From the Encryption Algorithm drop-down menu, select one of the following supported
encryption algorithms:
n AES (AES128-CBC)
n AES256 (AES256-CBC)
n AES-GCM (AES128-GCM)
Note
n AES-GCM encryption algorithm is not FIPS-compliant.
n Starting in NSX 6.4.5, Triple DES cypher algorithm is deprecated in IPSec VPN service.
The following table explains the encryption settings that are used on the peer VPN
Gateway for the encryption settings that you select on the local NSX Edge.
Option Description
PSK (Pre Shared Key) Indicates that the secret key shared between NSX Edge and the peer site
is to be used for authentication. The secret key can be a string with a
maximum length of 128 bytes.
PSK authentication is disabled in FIPS mode.
Certificate Indicates that the certificate defined at the global level is to be used for
authentication.
f (Optional) Enter the pre-shared key of the peer IPSec VPN site.
g To display the key on the peer site, click the Show Pre-Shared Key ( ) icon or select the
Display Shared Key check box.
h From the Diffie-Hellman (DH) Group drop-down menu, select one of the following
cryptography schemes that allows the peer site and the NSX Edge to establish a shared
secret over an insecure communications channel.
n DH-2
n DH-5
n DH-14
n DH-15
n DH-16
DH14 is the default selection for both FIPS and non-FIPS modes. DH2 and DH5 are not available when FIPS mode is enabled.
a If the remote IPSec VPN site does not support PFS, disable the Perfect forward secrecy
(PFS) option. By default, PFS is enabled.
b (Optional) To operate IPSec VPN in a responder-only mode, select the Responder only
check box.
What to do next
Tip In the vSphere Web Client, you can follow these steps on the IPSec VPN page to generate
the configuration script for the Peer VPN Gateway.
n In NSX 6.4.6 and later, select the IPSec VPN site, and then click Actions > Generate Peer
Configuration.
n In NSX 6.4.5 and earlier, select the IPSec VPN site, and then click the Generate Peer
Configure icon. In the dialog box that opens, click Generate Peer Configure.
The configuration script is generated. You can use this script as reference to configure the
IPSec VPN parameters on the peer VPN Gateway.
A security compliance suite has a predefined set of values for various security parameters. Think
of a compliance suite as a predefined template to help you automatically configure the security
profile of an IPSec VPN session according to a defined standard. For example, the National
Security Agency in the US government publishes the CNSA suite, and this standard is used for
national security applications. When you select a compliance suite, the security profile of an IPSec
VPN site is automatically configured with predefined values, and you cannot edit these values. By
specifying a compliance suite, you avoid the need to configure each parameter in the security profile individually.
NSX supports seven security compliance suites. The following table lists the predefined values for
various configuration parameters in each supported compliance suite.
Digest Algorithm: SHA 384, SHA 256, SHA 384, SHA 256, SHA 384, SHA 256, SHA 256
Encryption Algorithm: AES 256, AES 128, AES 256, AES 128, AES 256, AES GCM 128, AES 128
Tunnel Encryption: AES 256, AES GCM 128, AES GCM 256, AES GMAC 128, AES GMAC 256, AES GCM 128, AES 128
Tunnel Digest Algorithm: SHA 384, NULL, NULL, NULL, NULL, NULL, SHA 256
(One value is listed per compliance suite.)
Attention When you configure an IPSec VPN site using "Prime" and "Foundation" compliance
suites, you cannot configure ikelifetime and salifetime site extensions. These site extensions
are pre-configured based on the standard.
When you select the "CNSA" compliance suite, both DH15 and ECDH20 DH groups are internally
configured on the NSX Edge. However, the following caveats exist when you select this
compliance suite:
n If the IPSec VPN service on an NSX Edge is configured as an initiator, NSX sends only
ECDH20 to establish an IKE security association with the remote IPSec VPN site. By default,
NSX uses ECDH20 because it is more secure than DH15. If a third-party responder IPSec VPN
site is configured with only DH15, the responder sends an invalid IKE payload error message
and asks the initiator to use the DH15 group. The initiator reinitiates IKE SA with the DH15
group, and a tunnel gets established between both the IPSec VPN sites. However, if the
third-party IPSec VPN solution does not support an invalid IKE payload error, the tunnel is
never established between both sites.
n If the IPSec VPN service on an NSX Edge is configured as a responder, the tunnel is always
established depending on the DH group that is shared by the initiator IPSec VPN site.
n When both the initiator and responder IPSec VPN sites use an NSX Edge, the tunnel is always
established with ECDH20.
Unlike a policy-based IPSec tunnel configuration where you configure local and remote subnets, in
a route-based IPSec tunnel configuration, you do not define the local and peer subnets that want
to communicate with each other. In a route-based IPSec tunnel configuration, you must define
a VTI with a private IP address on both the local and peer sites. Traffic from the local subnets
is routed through the VTI to the peer subnets. Use a dynamic routing protocol, such as BGP, to route traffic through the IPSec tunnel. The dynamic routing protocol decides which local subnet's traffic is routed through the IPSec tunnel to the peer subnet.
The following steps explain the procedure to configure a route-based IPSec tunnel between the two sites:
1 Configure the IPSec VPN parameters on the local NSX Edge. In NSX Data Center 6.4.2 and
later, you can configure route-based IPSec VPN parameters only by using REST APIs. For
more information, see the NSX API Guide.
n Local endpoint IP address and local ID to identify the local NSX Edge Gateway.
n Peer endpoint IP address and peer ID to identify the peer VPN Gateway.
n Digest algorithm.
n Encryption algorithm.
n Virtual tunnel interface (VTI) on the NSX Edge. Provide a static private IP address for the
VTI.
Note The VTI that you configure is a static VTI. Therefore, it cannot have more than one IP address. The best practice is to ensure that the IP addresses of the VTIs on both the local and peer sites are on the same subnet.
2 Use the IPSec Config Download API to fetch the peer configuration for reference purposes
and configure the peer VPN Gateway.
3 Configure BGP peering between the VTIs at both the sites. Peering ensures that BGP at the
local site advertises the local subnets to the peer VPN gateway, and similarly BGP at the
peer site advertises the remote subnets to the local VPN gateway. For more details about
configuring BGP, see the Routing section in the NSX Administration Guide.
Important In NSX 6.4.2 and later, static routing and OSPF dynamic routing through an IPSec
tunnel are not supported.
4 If you want to configure tunnel redundancy through more than one tunnel, configure BGP
Hold Down timer and Keep Alive timer values. The timer values help in detecting loss of
connectivity with the remote VPN gateway within the required failover time.
For a detailed example of configuring a route-based IPSec tunnel between a local NSX Edge and a
remote Cisco CSR 1000V VPN Gateway, see Using a Cisco CSR 1000V Appliance.
IPsec Terminology
IPSec is a framework of open standards. There are many technical terms in the logs of the NSX
Edge and other VPN appliances that you can use to troubleshoot the IPSEC VPN.
The following terms are some of the standards that you might encounter:
n ISAKMP (Internet Security Association and Key Management Protocol) is a protocol defined
by RFC 2408 for establishing Security Associations (SA) and cryptographic keys in an Internet
environment. ISAKMP only provides a framework for authentication and key exchange and this
framework is key exchange independent.
n IKE (Internet Key Exchange) is a combination of ISAKMP framework and Oakley. NSX Edge
provides IKEv1, IKEv2, and IKE-Flex.
n Diffie-Hellman (DH) key exchange is a cryptographic protocol that allows two parties that have
no prior knowledge of each other to establish jointly a shared secret key over an insecure
communications channel. VSE supports DH group 2 (1024 bits) and group 5 (1536 bits).
Phase 1 Parameters
Phase 1 sets up mutual authentication of the peers, negotiates cryptographic parameters, and
creates session keys. The Phase 1 parameters used by NSX Edge are:
n Main mode.
n SHA1, SHA_256.
Important
n IPSec VPN supports only time-based rekeying. You must disable lifebytes rekeying.
n Starting in NSX 6.4.5, Triple DES cypher algorithm is deprecated in IPSec VPN service.
Phase 2 Parameters
IKE Phase 2 negotiates an IPSec tunnel by creating keying material for the IPSec tunnel to use
(either by using the IKE phase 1 keys as a base or by performing a new key exchange). The IKE
Phase 2 parameters supported by NSX Edge are:
n Triple DES, AES-128, AES-256, and AES-GCM [Matches the Phase 1 setting].
n SHA1, SHA_256.
n Selectors for all IP protocols, all ports, between the two networks, using IPv4 subnets
Important
n IPSec VPN supports only time-based rekeying. You must disable lifebytes rekeying.
n Starting with NSX 6.4.5, Triple DES cypher algorithm is deprecated in IPSec VPN service.
The following transactions occur in a sequence between the NSX Edge and a Cisco VPN device in
Main Mode.
n DPD enabled
n If the Cisco device does not accept any of the parameters the NSX Edge sent in step 1,
the Cisco device sends the message with flag NO_PROPOSAL_CHOSEN and ends the
negotiation.
n Include ID (PSK).
n Include ID (PSK).
n If the Cisco device finds that the PSK does not match, the Cisco device sends a message
with flag INVALID_ID_INFORMATION, and Phase 1 fails.
Cisco device sends back NO_PROPOSAL_CHOSEN if it does not find any matching policy for
the proposal. Otherwise, the Cisco device sends the set of parameters chosen.
To facilitate debugging, you can enable IPSec logging on the NSX Edge and enable crypto
debug on Cisco (debug crypto isakmp <level>).
For this scenario, NSX Edge connects the internal network [Link]/24 to the Internet. NSX
Edge interfaces are configured as follows:
The VPN gateway at the remote site connects the [Link]/16 internal network to the Internet.
The remote gateway interfaces are configured as follows:
(Figure: The NSX Edge connects the local [Link]/24 network and the remote gateway connects the [Link]/16 network; the two gateways reach each other across the Internet.)
Note For NSX Edge to NSX Edge IPSec tunnels, you can use the same scenario by setting up the
second NSX Edge as the remote gateway.
Procedure
5 Click Add.
6 In the Name text box, type a name for the IPSec VPN site.
7 In the Local Id text box, type [Link] as the IP address of the NSX Edge instance.
This local Id becomes the peer Id on the remote site.
If you are adding an IP to IP tunnel using a pre-shared key, the local Id and local endpoint IP
can be the same.
10 In the Peer Id, type [Link] to identify the peer site uniquely.
18 To display the pre-shared key on the peer site, click the Show Pre-Shared Key ( ) icon or
select the Display Shared Key check box.
19 Select the Diffie-Hellman (DH) Group cryptography scheme. For example, select DH14.
What to do next
Procedure
interface GigabitEthernet0/0
ip address [Link] [Link]
duplex auto
speed auto
crypto map MYVPN
!
interface GigabitEthernet0/1
ip address [Link] [Link]
duplex auto
speed auto
!
ip route [Link] [Link] [Link]
Example: Configuration
router2821#show running-config output
Building configuration...
!
no aaa new-model
!
resource policy
!
ip subnet-zero
!
ip cef
!
no ip dhcp use vrf connected
!
!
no ip ips deny-action ips-interface
!
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
crypto isakmp key vshield address [Link]
!
crypto ipsec transform-set myset esp-3des
esp-sha-hmac
!
crypto map MYVPN 1 ipsec-isakmp
set peer [Link]
set transform-set myset
set pfs group1
match address 101
!
interface GigabitEthernet0/0
ip address [Link] [Link]
duplex auto
speed auto
crypto map MYVPN
!
interface GigabitEthernet0/1
ip address [Link] [Link]
duplex auto
speed auto
!
ip classless
ip route [Link] [Link] [Link]
!
ip http server
no ip http secure-server
!
access-list 101 permit ip [Link]
[Link] [Link] [Link]
!
control-plane
!
line con 0
line aux 0
line vty 0 4
password cisco
login
line vty 5 15
password cisco
login
!
scheduler allocate 20000 1000
!
end
pre-shared-key *
!
!
prompt hostname context
Cryptochecksum:29c3cc49460831ff6c070671098085a9
: end
"ikeOption" : "ikev2",
"localIp" : "[Link]",
"peerSubnets" : [
"[Link]/0"
],
"responderOnly" : false,
"certificate" : null,
"dhGroup" : "dh2",
"siteId" : "ipsecsite-53",
"localId" : "[Link]",
"tunnelInterfaceLabel" : "vti-1",
"enablePfs" : true
},
{
"peerIp" : "[Link]",
"authenticationMode" : "psk",
"ipsecSessionType" : "routebasedsession",
"tunnelInterfaceId" : 2,
"psk" : "****",
"name" : "VPN to edge-ext tun 1 [Link]",
"encryptionAlgorithm" : "3des",
"description" : "VPN to edge subnet1",
"localSubnets" : [
"[Link]/0"
],
"enabled" : true,
"pskEncryption" : null,
"digestAlgorithm" : "sha1",
"ikeOption" : "ikev2",
"extension" : null,
"peerSubnets" : [
"[Link]/0"
],
"localIp" : "[Link]",
"peerId" : "[Link]",
"mtu" : null,
"siteId" : "ipsecsite-54",
"localId" : "[Link]",
"enablePfs" : true,
"tunnelInterfaceLabel" : "vti-2",
"responderOnly" : false,
"certificate" : null,
"dhGroup" : "dh2"
}
]
}
}
The following CLI output shows the VTI configuration on the NSX Edge:
"mtu" : 1416,
"label" : "vti-1",
"sourceAddress" : "[Link]",
"destinationAddress" : "[Link]",
"tunnelAddresses" : [
"[Link]/24"
],
"mode" : "VTI",
"enabled" : true
},
{
"enabled" : false,
"tunnelAddresses" : [
"[Link]/24"
],
"mode" : "VTI",
"sourceAddress" : "[Link]",
"destinationAddress" : "[Link]",
"label" : "vti-2",
"mtu" : 1416,
"name" : "vti-2"
}
]
}
interface Tunnel1
ip address [Link] [Link]
tunnel source [Link]
tunnel mode ipsec ipv4
tunnel destination [Link]
tunnel protection ipsec profile IPSEC_PROF1
interface Tunnel2
ip address [Link] [Link]
tunnel source [Link]
tunnel mode ipsec ipv4
tunnel destination [Link]
tunnel protection ipsec profile IPSEC_PROF2
interface GigabitEthernet1
ip address dhcp
negotiation auto
interface GigabitEthernet2
no ip address
negotiation auto
interface GigabitEthernet2.2
encapsulation dot1Q 23
ip address [Link] [Link]
interface GigabitEthernet2.3
encapsulation dot1Q 19
ip address [Link] [Link]
interface GigabitEthernet2.4
encapsulation dot1Q 22
ip address [Link] [Link]
Procedure
5 Select Network > Branch Office VPN > Manual IPSec to configure the remote gateway.
6 In the IPSec Configuration dialog box, click Gateways to configure the IPSEC Remote
Gateway.
8 In the IPSec Configuration dialog box, click Add to add a routing policy.
9 Click Close.
L2 VPN Overview
With L2 VPN, you can stretch multiple logical networks (both VLAN and VXLAN) across
geographical sites. In addition, you can configure multiple sites on an L2 VPN server.
Virtual machines remain on the same subnet when they are moved between sites, and their IP addresses do not change. Egress optimization enables the Edge to route any packets sent towards the Egress Optimization IP address locally, and bridge everything else.
With the L2 VPN service, enterprises can therefore seamlessly migrate workloads between different physical sites. The workloads can run on either VXLAN-based or VLAN-based networks. For cloud service providers, L2 VPN provides a mechanism to onboard tenants without modifying existing IP addresses for workloads and applications.
Note
n Starting in NSX Data Center 6.4.2, you can configure the L2 VPN service over both SSL and
IPSec tunnels. However, you can configure the L2 VPN service over IPSec tunnels only by
using REST APIs. For more information about configuring L2 VPN over IPSec, see the NSX
API Guide.
n With NSX 6.4.1 and earlier, you can configure the L2 VPN service only over SSL tunnels.
(Figure: Stretching VXLAN 5010 and VXLAN 5011 between two NSX sites over L2 VPN. Each NSX Edge connects a trunk vNic and an uplink to the Layer 3 network; VM 1 through VM 4 run at one site and VM 5 through VM 8 at the other.)
The L2 VPN client and server learn the MAC addresses on both local and remote sites based on
the traffic flowing through them. Egress optimization maintains local routing because the default
gateway for all virtual machines is always resolved to the local gateway using firewall rules. Virtual
machines that have been moved to Site B can also access L2 segments that are not stretched on
Site A.
If one of the sites does not have NSX deployed, a standalone Edge can be deployed on that site.
In the following graphic, L2 VPN stretches network VLAN 10 to VXLAN 5010 and VLAN 11 to
VXLAN 5011. So VM 1 bridged with VLAN 10 can access VMs 2, 5, and 6.
Figure 15-4. Extending non-NSX Site with VLAN-Based Networks to NSX-Site with VXLAN-
Based Networks Using L2 VPN
(Figure: A standalone Edge at the non-NSX site trunks VLAN 10 and VLAN 11, which are stretched over L2 VPN to VXLAN 5010 and VXLAN 5011 behind the NSX Edge at the NSX site; VM 1 through VM 4 run on the VLAN networks and VM 5 through VM 8 on the VXLAN networks.)
Recommendation For optimal L2 VPN performance, deploy an NSX Edge with an X-Large form factor on both the client and the server for the following reasons:
n CPU pinning is possible on vCPUs 1, 3, and 5. CPU pinning is not possible in the large and quad-large form factors.
Configuring L2 VPN according to best practices can avoid problems such as looping and duplicate
pings and responses.
Option 1: Separate ESXi hosts for the L2VPN Edges and the VMs
(Figure: Option 1 topology. The Edge trunk vNic connects to a vDS distributed port group as a SINK port, with one active and one standby uplink; Active/Active teaming is not supported. The VMs use a separate distributed port group.)
2 Configure the Teaming and Failover Policy for the Distributed Port Group associated with the
Edge’s Trunk vNic as follows:
b Configure only one uplink as Active and the other uplink as Standby.
3 Configure the teaming and failover policy for the distributed port group associated with the
VMs as follows:
4 Configure Edges to use sink port mode and disable promiscuous mode on the trunk vNic.
Note
n Disable promiscuous mode: If you are using vSphere Distributed Switch.
n Enable promiscuous mode: If you are using virtual switch to configure trunk interface.
If a virtual switch has promiscuous mode enabled, some of the packets that come in from uplinks that are not currently used by the promiscuous port are not discarded. To discard these packets explicitly for the promiscuous port, enable and then disable the ReversePathFwdCheckPromisc option, which drops all packets coming in from the currently unused uplinks.
To block the duplicate packets, activate the RPF check for promiscuous mode from the CLI of the ESXi host where the NSX Edge is present.
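A minimal sketch of toggling this setting from the ESXi host CLI follows; the advanced option path matches the ReversePathFwdCheckPromisc option described above, but confirm the values in your environment before applying them.

# Enable the reverse path forwarding check for promiscuous ports
esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1
# Verify the current value of the option
esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc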
In the port group security policy, set Promiscuous Mode from Accept to Reject and back to Accept to activate the configured change.
a Configure the teaming and failover policy for the distributed port group associated with
Edge’s trunk vNic as follows:
b Configure the teaming and failover policy for the distributed port group associated with the
VMs as follows:
3 The order of the active/standby uplinks must be the same for the VMs' distributed port
group and the Edge’s trunk vNic distributed port group.
c Configure the client-side standalone edge to use sink port mode and disable promiscuous
mode on the trunk vNic.
If one of the VPN sites does not have NSX deployed, you can configure an L2 VPN by deploying
a standalone NSX Edge at that site. A standalone Edge is deployed using an OVF file on a host
that is not managed by NSX. This deploys an Edge Services Gateway appliance to function as an
L2 VPN client.
If a standalone edge trunk vNIC is connected to a vSphere Distributed Switch, either promiscuous
mode or a sink port is required for L2 VPN function. Using promiscuous mode can cause duplicate
pings and duplicate responses. For this reason, use sink port mode in the L2 VPN standalone NSX
Edge configuration.
Procedure
1 Retrieve the port number for the trunk vNIC that you want to configure as a sink port.
a Log in to the vSphere Web Client, and navigate to Home > Networking.
b Click the distributed port group to which the NSX Edge trunk interface is connected, and
click Ports to view the ports and connected VMs. Note the port number associated with
the trunk interface.
Use this port number when fetching and updating opaque data.
b Click content.
c Click the link associated with the rootFolder (for example: group-d1 (Datacenters)).
d Click the link associated with the childEntity (for example: datacenter-1).
e Click the link associated with the networkFolder (for example: group-n6).
f Click the DVS name link for the vSphere distributed switch associated with the NSX Edges
(for example: dvs-1 (Mgmt_VDS)).
Use this value for dvsUuid when fetching and updating opaque data.
a Go to [Link]
b Click fetchOpaqueDataEx.
<selectionSet xsi:type="DVPortSelection">
<dvsUuid>c2 1d 11 50 6a 7c 77 68-e6 ba ce 6a 1d 96 2a 15</dvsUuid> <!-- example
dvsUuid -->
<portKey>393</portKey> <!-- example port number -->
</selectionSet>
Use the port number and dvsUuid value that you retrieved for the NSX Edge trunk
interface.
4 Configure the sink port in the vCenter managed object browser (MOB).
a Go to [Link]
b Click updateOpaqueDataEx.
<selectionSet xsi:type="DVPortSelection">
<dvsUuid>c2 1d 11 50 6a 7c 77 68-e6 ba ce 6a 1d 96 2a 15</dvsUuid> <!-- example
dvsUuid -->
<portKey>393</portKey> <!-- example port number -->
</selectionSet>
Use the dvsUuid value that you retrieved from the vCenter MOB.
d On the opaqueDataSpec value box paste one of the following XML inputs:
Use this input to enable a SINK port if opaque data is not set (operation is set to add):
<opaqueDataSpec>
<operation>add</operation>
<opaqueData>
<key>[Link]</key>
<opaqueData
xsi:type="[Link]">AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>
Use this input to enable a SINK port if opaque data is already set (operation is set to edit):
<opaqueDataSpec>
<operation>edit</operation>
<opaqueData>
<key>[Link]</key>
<opaqueData
xsi:type="[Link]">AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>
Use this input to disable a SINK port (operation is set to edit):
<opaqueDataSpec>
<operation>edit</operation>
<opaqueData>
<key>[Link]</key>
<opaqueData
xsi:type="[Link]">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>
The task topics in this section explain the steps to configure the L2 VPN service over SSL tunnels.
Prerequisites
A sub interface must have been added on a trunk interface of the NSX Edge. See Add a Sub
Interface.
Procedure
7 In Listener IP, enter the primary or secondary IP address of an external interface of the NSX
Edge.
8 The default port for the L2 VPN service is 443. Edit the port number, if necessary.
9 Select one or more encryption algorithms to encrypt the communication between the server
and the client.
n In NSX 6.4.6 and later, click the Edit ( ) icon. Select one or more encryption algorithms,
and then click Save.
n In NSX 6.4.5 and earlier, select an algorithm from the list box. To select multiple values,
press Ctrl and click the algorithms in the list.
Note Changing site configuration settings causes the NSX Edge to disconnect and reconnect all
existing connections.
Procedure
b Enter a user name and password with which the peer site is to be authenticated. User
credentials on the peer site must be the same as those on the client side.
c In Stretched Interfaces, click or Select Sub Interfaces to select the sub interfaces to be
stretched with the client.
g If the default gateway for virtual machines is the same across the two sites, enter the
gateway IP addresses in the Egress Optimization Gateway Address text box. These IP
addresses are the addresses for which the traffic is to be locally routed or for which the
traffic is to be blocked over the tunnel.
h (Optional) Enable Unstretched Networks when you want the VMs on the unstretched networks to communicate with the VMs that are behind the L2 VPN client edge on the stretched network, and you want this communication to be routed through the same L2 VPN tunnel. Unstretched subnets can be behind the L2 VPN server edge, the L2 VPN client edge, or both.
For example, imagine that you have created an L2 VPN tunnel to stretch the
[Link]/24 subnetwork between two data center sites using the NSX L2 VPN service.
Behind the L2 VPN server edge, you have two additional subnets (for example,
[Link]/24 and [Link]/24). When unstretched networks are enabled, the VMs
on [Link]/24 and [Link]/24 subnets can communicate with the VMs that
are behind the L2 VPN client edge on the stretched network ([Link]/24). This
communication is routed through the same L2 VPN tunnel.
i If you have enabled unstretched networks, do these steps depending on where the
unstretched subnets are situated:
n When unstretched subnets are behind the L2 VPN client edge, enter the network
address of the unstretched network in the CIDR format while adding the peer (client)
site on the L2 VPN server edge. To enter multiple unstretched networks, separate the
network addresses by commas.
n When unstretched subnets are behind the L2 VPN server edge, keep the Unstretched
Networks text box blank. In other words, do not enter the network address of the
unstretched networks while adding the client (peer) site on the L2 VPN server.
In the earlier example, because the unstretched subnets are behind the L2 VPN server
edge, you must keep the Unstretched Networks text box blank in the Add Peer Site
window.
Procedure
3 Double-click a destination NSX Edge, and navigate to Manage > VPN > L2 VPN.
What to do next
Create a NAT or firewall rule on the Internet-facing firewall to enable the client and server to connect to each other.
You can also configure a standalone Edge as the L2 VPN client. See Configure Standalone Edge
as L2 VPN Client.
Procedure
a Enter the address of the L2 VPN server to which this client is to be connected. The address
can be a host name or an IP address.
b Edit the default port to which the L2 VPN client must connect, if necessary.
d In Stretched Interfaces, click or Select Sub Interfaces to select the sub interfaces to be
stretched to the server.
g In Egress Optimization Gateway Address, enter the gateway IP address of the sub
interfaces or the IP addresses to which traffic should not flow over the tunnel.
h (Optional) Select the Unstretched Networks check box when you want the VMs on the unstretched networks to communicate with the VMs that are behind the L2 VPN server edge on the stretched network, and you want this communication to be routed through the same L2 VPN tunnel. Unstretched subnets can be behind the L2 VPN server edge, the L2 VPN client edge, or both.
For example, imagine that you have created an L2 VPN tunnel to stretch the
[Link]/24 subnetwork between two data center sites using the NSX L2 VPN service.
Behind the L2 VPN server edge, you have two additional subnets (for example,
[Link]/24 and [Link]/24). When unstretched networks are enabled, the VMs
on [Link]/24 and [Link]/24 subnets can communicate with the VMs that
are behind the L2 VPN server edge on the stretched network ([Link]/24). This
communication is routed through the same L2 VPN tunnel.
i If you have enabled unstretched networks, do these steps depending on where the
unstretched subnets are situated:
n When unstretched subnets are behind the L2 VPN server edge, enter the network
address of the unstretched network in the CIDR format while configuring the L2 VPN
client edge. To enter multiple unstretched networks, separate the network addresses
by commas.
n When unstretched subnets are behind the L2 VPN client edge, keep the Unstretched
Networks text box blank. In other words, do not enter the network address of the
unstretched networks on the L2 VPN client edge.
In the earlier example, because the unstretched subnets are behind the L2 VPN
server edge, you must enter the unstretched networks as [Link]/24,
[Link]/24 while configuring the L2 VPN client edge.
j In User Details, type the user credentials to get authenticated at the server.
8 Click the Advanced tab and specify the other client details.
When a client Edge does not have direct access to the Internet and must reach the source
(server) NSX Edge through a proxy server, you must specify proxy server settings.
b Enter the proxy server address, port, user name, and password.
c To enable server certificate validation, select Validate Server Certificate and select the
appropriate CA certificate.
What to do next
Ensure that the Internet facing firewall allows traffic to flow from L2 VPN Edge to the Internet. The
destination port is 443.
Procedure
What to do next
n To enable the client and server to connect to each other, create NAT or firewall rules on the Internet-facing firewall.
n If a trunk vNIC backed by a standard portgroup is being stretched, enable the L2 VPN traffic
manually by doing the following steps:
For more information about promiscuous mode operation and forged transmits, see Securing
vSphere Standard Switches in the VMware vSphere ® Documentation.
Procedure
c Expand the Tunnel Status section, and click the Refresh icon to view the tunnel statistics.
c In the Site Configuration Details section, click the Show Statistics or Show L2VPN
Statistics link.
The statistics of all the peer sites that are configured on the L2 VPN server are displayed.
What to do next
To see the networks configured on a trunk interface, navigate to Manage > Settings > Interfaces
for the Edge and click Trunk in the Type column.
You cannot create or edit a route-based IPSec VPN tunnel by using the vSphere Web Client. You must use the NSX REST APIs. For more information about creating route-based IPSec VPN tunnels, see the NSX API Guide.
The steps in the workflow are supported only with NSX REST APIs. In this documentation, only the API URLs are mentioned. For detailed information about the API parameters, sample requests, and responses, see the NSX API Guide.
First, configure the L2 VPN service in the server (hub) mode on the NSX Edge by using the
following steps. The Edge that you configure in the server mode must be an NSX Edge.
1 Create a route-based IPSec VPN tunnel with the Edge that you want to configure as the L2
VPN server (hub). A site ID is auto-generated when you create the tunnel.
PUT /api/4.0/edges/{edgeId}/ipsec/config
2 Create an L2 VPN tunnel for a client, and bind this L2 VPN tunnel with the site ID that was
generated in step 1.
POST /api/4.0/edges/{edgeId}/l2t/config/l2tunnels
3 Retrieve the peer code for this client. This peer code becomes the input code (shared code) for
configuring the L2 VPN service on the client Edge.
GET /api/4.0/edges/{edgeId}/l2t/config/l2tunnels/{l2tunnelId}/peercodes
POST /api/4.0/edges/{edgeId}/l2t/config
If you want to stretch the L2 network with other sites, repeat the preceding three steps on the
server for the L2 VPN clients at other sites.
Now, configure the L2 VPN service in the client (spoke) mode on another Edge by using the
following steps. This Edge can either be an NSX-managed Edge or a standalone Edge.
1 Create a route-based IPSec VPN tunnel with the same parameters that you used for
configuring the route-based IPSec VPN tunnel on the server Edge.
PUT /api/4.0/edges/{edgeId}/l2t/config/globalconfig
3 Create an L2 VPN tunnel by using the site ID that was generated on the server, and with the
peer code that you retrieved from the server.
Edge appliances deployed on a client site where NSX is not deployed are called standalone
Edges. A standalone Edge is deployed using an OVF file on a host that is not managed by NSX.
If you want to change FIPS mode for a standalone edge, use the fips enable or fips disable
command. For more information, refer to NSX Command Line Interface Reference.
You can deploy a pair of standalone L2 VPN Edge clients and enable HA between them for VPN
redundancy support. The two standalone L2 VPN Edge clients are called node 0 and node 1.
It is not mandatory to specify the HA configuration settings on both standalone L2 VPN Edge appliances at the time of deployment. However, you must enable HA at the time of deployment.
The steps in the following procedure apply when you want to deploy the standalone Edge as a L2
VPN client for routing traffic either through an SSL tunnel or an IPSec VPN tunnel.
Prerequisites
You have created a trunk port group for the trunk interface of the standalone Edge to connect to.
This port group requires some manual configuration:
n If the trunk port group is on a vSphere Standard Switch you must do the following:
n If the trunk port group is on a vSphere Distributed Switch you must do the following:
n Enable sink port for the trunk vNic, or enable promiscuous mode. A good practice is to
enable a sink port.
Sink port configuration must be done after the standalone Edge has been deployed, because
you need to change the configuration of the port connected to the Edge trunk vNIC.
Procedure
1 Using vSphere Web Client, log in to the vCenter Server that manages the non-NSX
environment.
2 Select Hosts and Clusters and expand clusters to show the available hosts.
3 Right-click the host where you want to install the standalone Edge and select Deploy OVF
Template.
4 Enter the URL to download and install the OVF file from the Internet or click Browse to locate
the folder on your computer that contains the standalone Edge OVF file and click Next.
5 On the OVF Template Details page, verify the template details and click Next.
6 On the Select name and folder page, type a name for the standalone Edge and select the
folder or data center where you want to deploy. Then click Next.
7 On the Select storage page, select the location to store the files for the deployed template.
8 On the Select networks page, configure the networks the deployed template must use. Click
Next.
n The Trunk interface is used to create subinterfaces for the networks that will be stretched.
Connect this interface to the trunk port group you created.
n The HA interface is used to set up high availability on the standalone L2 VPN Edge
appliances. Select a distributed port group for the HA interface.
d Type the uplink IP address and prefix length, and optionally default gateway and DNS IP
address.
e Select the cipher to be used for authentication. The selected value must match the cipher
used on the L2 VPN server.
Note Perform this step only when you want to configure L2 VPN over SSL.
f To enable Egress Optimization, type the gateway IP addresses for which traffic should be
locally routed or for which traffic is to be blocked over the tunnel.
g (Optional) Select the Enable TCP Loose Setting check box when you want the existing TCP
connection (for example, an SSH session) to the VM over L2 VPN to remain active after the
VM is migrated.
By default, this setting is not enabled. When this setting is disabled, the existing TCP
connection to the VM over L2 VPN is lost after the VM is migrated. You must open a new
TCP connection to the VM after the migration is done.
h To enable high availability on the standalone L2 VPN Edge appliance, select the Enable
High Availability for this appliance check box.
i (Optional) Type the IP address of the first standalone L2 VPN Edge appliance (node 0).
The IP address must be in the /30 IP subnet.
j (Optional) Type the IP address of the second standalone L2 VPN Edge appliance (node 1).
The IP address must be in the /30 IP subnet.
k (Optional) On node 0 appliance, select 0 to assign the IP address of node 0 for the HA
interface. Similarly, on node 1 appliance, select 1 so that IP address of node 1 is used for the
HA interface.
l (Optional) Specify an integer value for the dead interval time in seconds. For example, type
15.
If you are configuring the L2 VPN client to route traffic through the IPSec VPN tunnel, you
must specify the IP address of the peer site, and the peer code.
n Type the user name and password with which the peer site is to be authenticated.
Note Perform this step only when you want to configure L2 VPN over SSL.
o In Sub Interfaces VLAN (Tunnel ID), type VLAN ID(s) of the networks you want to stretch.
You can list the VLAN IDs as a comma-separated list or range. For example, 2,3,10-20.
If you want to change the VLAN ID of the network before stretching it to the
standalone Edge site, type the VLAN ID of the network, and then type the tunnel ID
in brackets. For example, 2(100),3(200). The Tunnel ID is used to map the networks
that are being stretched. However, you cannot specify the tunnel ID with a range.
Therefore, 10(100)-14(104) is not allowed. Rewrite it as 10(100),11(101),12(102),13(103),14(104).
p If the standalone Edge does not have direct access to the Internet and must reach the
source (server) NSX Edge through a proxy server, type the proxy address, port, user
name, and password.
r Click Next.
10 On the Ready to complete page, review the standalone Edge settings and click Finish.
What to do next
n Note the trunk vNIC port number and configure a sink port. See Configure a Sink Port.
n If you have specified the HA configuration settings, such as HA IP address, HA index value,
and the dead interval time while deploying the standalone L2 VPN Edge appliances, you
can validate the HA configuration on the console of the deployed nodes with the show
configuration command.
n If you have not specified the HA configuration settings during deployment, you can do it later
from the NSX Edge Console by running the ha set-config command on each node.
Make any further configuration changes with the standalone Edge command-line interface. See
the NSX Command Line Interface Reference.
You have deployed two standalone L2 VPN Edge clients called L2VPN-Client-01 and L2VPN-
Client-02 with the same configuration. The /30 IP subnet address of L2VPN-Client-01 is
[Link], and for L2VPN-Client-02 is [Link]. In this topic, Node-1 refers to L2VPN-Client-01
and Node-2 refers to L2VPN-Client-02.
Prerequisites
n Make sure that HA configuration settings, such as HA IP address, HA index value, and dead
interval time are configured on both nodes.
n Make sure that both standalone L2 VPN Edge clients have the same VPN configuration.
Procedure
1 Log in to each node, and run the ha get-localnode command on both nodes individually to retrieve the MAC addresses of the three vNIC interface cards.
nsx-l2vpn-edge(config)# ha get-localnode
[Link] [Link] [Link]
nsx-l2vpn-edge(config)# ha get-localnode
[Link] [Link] [Link]
2 Run the ha set-peernode command on both nodes individually to assign the MAC address of
Node-1 to Node-2, and MAC address of Node-2 to Node-1.
nsx-l2vpn-edge(config)# ha admin-state UP
Note Make sure that you type UP in uppercase, as shown in the example.
4 Run the commit command on both nodes individually to save the HA configuration and
establish HA between Node-1 and Node-2.
nsx-l2vpn-edge(config)# commit
High Availability Feature is enabled on this appliance. Please make sure to make
similar configuration on paired Standalone Edge appliance.
Results
What to do next
Log in to each node, and verify the HA status by running the show service highavailability
command on both nodes.
At any stage, if you want to check the link status of the HA, run the show service
highavailability link command on each node. This command lists the local and peer /30
IP subnet addresses that you configured while deploying the standalone L2 VPN nodes.
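For example (the prompt is shown for illustration; run the commands on each node):

nsx-l2vpn-edge> show service highavailability
nsx-l2vpn-edge> show service highavailability link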
Consider that you have deployed two standalone L2 VPN Edge appliances called L2VPN-Client-1
and L2VPN-Client-2, and you have established HA between both these appliances. L2VPN-
Client-1 is the active appliance, and L2VPN-Client-2 is the standby appliance.
In the first approach, you can do the following steps to disable HA:
2 Log in to the console of the active appliance (L2-VPN-Client-1), and run the ha disable
command.
The advantage of this approach is that the HA failover time is minimum. However, the limitations
of this approach are as follows:
n This approach might lead to a dual active state situation. For example, you might forget to
power off the standby appliance and start up the standby appliance. This might cause both L2
VPN appliances to become active.
In the second approach, you can do the following steps to disable HA:
1 Delete or power off any one of the L2 VPN appliances, either active or standby appliance.
2 Log in to the console of the other appliance, and run the ha disable command to disable the
HA feature on this appliance.
In the second approach too, the HA failover time is minimum. However, the limitations of the
second approach are as follows:
n This approach might also lead to a dual active state situation. For example, if you forget to
power off the appliance, or if you power it on again later, both L2 VPN appliances might
become active.
n Service is disrupted when you delete the active appliance because the HA failover might not
happen immediately to make the standby appliance active.
Consider that you have already deployed two standalone L2 VPN client appliances called L2VPN-
Client-01 and L2VPN-Client-02, and enabled HA on both the appliances. The L2VPN-Client-01
node has failed or crashed. To replace this failed node, you will deploy a new standalone L2
VPN client appliance called L2VPN-Client-Replace, and specify the same VPN configuration as the
active node.
Procedure
1 Log in to the console of the active node (L2VPN-Client-02), and check its HA index.
2 Run the ha get-localnode command on the L2VPN-Client-02 node to retrieve the vNIC
MAC addresses, and copy the CLI output of this command.
3 Deploy a new standalone L2 VPN client appliance and name it L2VPN-Client-Replace. During
deployment, make sure that you specify the following details:
b Type the correct HA IP address for both node 0 and node 1. The IP addresses must be in
the /30 IP subnet.
n If the node with HA index 1 has failed, then select 0 for the HA index of the L2VPN-
Client-Replace appliance.
n If the node with HA index 0 has failed, then select 1 for the HA index of the L2VPN-
Client-Replace appliance.
4 Log in to the console of the new L2VPN-Client-Replace appliance, and do these steps:
a Run the ha set-peernode command and set the MAC address of the peer node (L2VPN-
Client-02).
b Run the ha get-localnode command, and copy the CLI output of this command.
5 Log in to the console of the active node (L2VPN-Client-02), and run the ha set-peernode
command to set the vNIC MAC addresses of the newly deployed appliance (L2VPN-Client-
Replace).
NSX L2 VPN supports egress optimization by using the same gateway IP address on both sites.
This scenario uses the egress optimization feature and ensures that the IP addresses of the
applications do not change after the migration.
The following figure shows the logical topology of extending networks between two sites by using
the L2 VPN service on the NSX Edges.
In the figure, the NSX Edges at site A and site B connect over a Layer 3 network through their uplink interfaces. The site A edge has a trunk vNIC that carries VLAN 10 and VLAN 11 with VM 1 through VM 4, and the site B edge has a trunk vNIC that carries the VXLAN 5010 and VXLAN 5011 logical switches with VM 5 through VM 8.
The L2 VPN service on the NSX Edge at site A is configured in "client" mode, and the L2 VPN
service on NSX Edge at site B is configured in "server" mode. As an administrator, your objective
is to create a L2 VPN tunnel and perform an L2 extension between sites A and B, such that:
n Tunnel ID 200 extends the VLAN 10 network on site A to the VXLAN 5010 network on site B.
n Tunnel ID 201 extends the VLAN 11 network on site A to the VXLAN 5011 network on site B.
The following figure shows the logical representation of the L2 extension between both sites.
In the figure, VLAN 10 on site A is stretched to VXLAN 5010 on site B over tunnel ID 200, and VLAN 11 on site A is stretched to VXLAN 5011 on site B over tunnel ID 201.
Remember In this scenario, both sites have NSX-managed edges. To perform an L2 extension
between two sites, the edge that is configured in "server" mode must be an NSX Edge. However,
the edge that is configured in "client" mode can either be an NSX Edge or a standalone edge,
which is not NSX-managed.
If the client site uses a standalone edge, you can stretch only VLAN networks on the client site
with the VLAN or VXLAN networks on the server site.
You can perform an L2 extension either by configuring L2 VPN service over SSL, or by configuring
L2 VPN over IPSec. The following procedure explains the steps for stretching L2 networks using
L2 VPN over SSL.
Procedure
1 Navigate to the L2 VPN edge on site B and configure a vnic interface of type "trunk". Add two
sub interfaces on this interface.
For detailed instructions about configuring an interface on the edge and adding sub interfaces,
see Configure an Interface.
For example, in this scenario, configure "vnic 1" on the server edge to connect to a distributed
port group. Add sub interfaces that connect to logical switches with VNI 5010 and 5011.
Each sub interface must have a unique tunnel ID. The following table shows the sub interface
configuration on the L2 VPN server edge.
2 Navigate to the L2 VPN edge on site A and configure a vnic interface of type "trunk". Add two
sub interfaces on this interface.
For example, in this scenario, configure "vnic 2" on the client edge to connect to a standard
port group. Add sub interfaces that connect to VLANs 10 and 11. The tunnel IDs on the client
edge must match the tunnel IDs that you specified on the server edge. The following table
shows the sub interface configuration on the L2 VPN client edge.
For detailed instructions about configuring the L2 VPN server, see Configure L2 VPN
Server.
c In Site Configuration Details, click Add and specify the configuration of the L2 VPN client
(peer) site.
For detailed instructions about adding L2 VPN peer sites, see Add Peer Sites.
n Add a peer site with name "site-A". Select the "vnic 1" trunk interface on the server
edge, and include the two sub interfaces "sub_vxlan1" and "sub_vxlan2" as the
stretched networks. Ensure that you enable the peer site. The following table shows
the sub interfaces (stretched interfaces) on peer site-A.
For detailed instructions about configuring the L2 VPN client, see Configure L2 VPN
Client.
For example, in this scenario, do the following configuration on the L2 VPN client:
n Select the "vnic 2" trunk interface on the client edge, and include the two sub
interfaces "sub_vlan1" and "sub_vlan2" as the stretched networks. The following table
shows the sub interfaces (stretched interfaces) on the L2 VPN client edge.
Results
The L2 VPN tunnel is established between site A and site B. You can now migrate workloads between
the two sites by using the stretched L2 networks.
What to do next
Alternatively, you can log in to the CLI console of the L2 VPN server edge and client edge and
verify the tunnel status by running the show service l2vpn command.
For more information about this command, see the NSX Command Line Interface Reference
Guide.
Prerequisites
Make sure that you have configured L2 stretching between VLAN networks (VLAN ID: 10, 11) on
site A to the VXLAN networks (VNI: 5010, 5011) on site B. See Scenario: Add a Stretched VLAN or
VXLAN Network.
Procedure
2 (Required) Navigate to the L2 VPN client edge on site A and stop the L2 VPN service.
3 (Required) Navigate to the L2 VPN server edge on site B and stop the L2 VPN service.
4 On the L2 VPN server edge, delete the "sub_vxlan1" sub interface on the "vnic1" trunk
interface.
a Navigate to the edge interface settings by clicking Manage > Settings > Interfaces.
5 On the L2 VPN client edge, delete the "sub_vlan1" sub interface on the "vnic2" trunk interface.
a Navigate to the edge interface settings by clicking Manage > Settings > Interfaces.
Results
The sub interfaces that extend the VLAN 10 network on site A to the VXLAN 5010 network on site
B are removed.
What to do next
n On site A, navigate to the L2 VPN page of the client edge. Observe that in Stretched
Interfaces, the "sub_vlan1" interface is not shown. The other stretched interface "sub_vlan2"
still exists.
n On site B, navigate to the L2 VPN page of the server edge. Observe that in the Stretched
Interfaces of the peer site (site-A), the "sub_vxlan1" interface is not shown. The other
stretched interface "sub_vxlan2" still exists.
You map an external, or public, IP address to a set of internal servers for load balancing. The load
balancer accepts TCP, UDP, HTTP, or HTTPS requests on the external IP address and decides
which internal server to use. Port 80 is the default port for HTTP, and port 443 is the default port
for HTTPS.
You must have a working NSX Edge instance before you can configure load balancing. For
information on setting up NSX Edge, see NSX Edge Configuration.
For information on configuring an NSX Edge certificate, see Working with Certificates.
n Connection throttling
n One-arm mode
n Inline mode
n IPv6 support
n Available on all flavors of an NSX edge services gateway, with a recommendation of X-Large
or Quad Large for production traffic
Topologies
There are two types of load balancing services to configure in NSX: one-armed mode, also
known as proxy mode, and inline mode, also known as transparent mode.
n The external client sends traffic to the virtual IP address (VIP) exposed by the load balancer.
n The load balancer – a centralized NSX edge – performs only destination NAT (DNAT) to
replace the VIP with the IP address of one of the servers deployed in the server farm.
n The server in the server farm replies to the original client IP address. The traffic is received
again by the load balancer since it is deployed inline, usually as the default gateway for the
server farm.
n The load balancer performs source NAT to send traffic to the external client, leveraging its VIP
as source IP address.
The accompanying figure shows the IP address and TCP port translations at each phase of the packet flow in this inline topology.
n The external client sends traffic to the Virtual IP address (VIP) exposed by the load balancer.
n The load balancer performs two address translations on the original packets received from the
client: destination NAT (DNAT) to replace the VIP with the IP address of one of the servers
deployed in the server farm, and source NAT (SNAT) to replace the client IP address with
the IP address identifying the load balancer itself. SNAT is required to force the return traffic
from the server farm back through the load balancer.
n The server in the server farm replies by sending the traffic to the load balancer because of the
SNAT.
n The load balancer again performs a source and destination NAT service to send traffic to the
external client, leveraging its VIP as source IP address.
The accompanying figure shows the one-armed load balancer (SLB) attached to the logical switch (VXLAN) with the server farm, and the IP address and TCP port translations at each phase of the packet flow.
NSX load balancer supports layer 4 and layer 7 load balancing engines. The layer 4 load balancer
is connection-based, providing fast path processing, and the layer 7 load balancer is HTTP socket-
based, allowing advanced traffic manipulations and DDoS mitigation for back-end services.
Connection-based load balancing is implemented on the TCP and UDP layer. Connection-based
load balancing does not stop the connection or buffer the whole request, it sends the packet
directly to the selected server after manipulating the packet. TCP and UDP sessions are
maintained in the load balancer so that packets for a single session are directed to the same
server. Connection-based load balancing is done through Acceleration Disabled TCP and UDP
virtual IP, or Acceleration Enabled TCP virtual IP.
Socket-based load balancing is implemented on top of the socket interface. Two connections
are established for a single request, a client-facing connection and a server-facing connection.
The server-facing connection is established after server selection. For HTTP socket-based
implementation, the whole request is received before sending to the selected server with
optional L7 manipulation. For HTTPS socket-based implementation, authentication information
is exchanged either on the client-facing connection or on the server-facing connection. Socket-
based load balancing is the default mode for TCP, HTTP, and HTTPS virtual servers.
The load balancer configuration uses these components: virtual server, server pool, service
monitor, and application profile. An application profile represents the TCP, UDP, persistence, and
certificate configuration for a given application.
You begin by setting global options for the load balancer, then create a server pool of backend
server members, and associate a service monitor with the pool to manage and share the backend
servers efficiently.
Next, you create an application profile to define the common application behavior in a load
balancer such as client SSL, server SSL, x-forwarded-for, or persistence. With persistence,
subsequent requests with similar characteristics, such as the source IP or a cookie, are dispatched
to the same pool member without running the load balancing algorithm. Application profiles can
be reused across virtual servers.
You then create an optional application rule to configure application-specific settings for traffic
manipulation such as matching a certain URL or hostname so that different requests can be
handled by different pools. Next, you create a service monitor that is specific to your application,
or use a previously created service monitor.
Optionally, you can create an application rule to support advanced functionality of L7 virtual
servers. Some use cases for application rules include content switching, header manipulation,
security rules, and DOS protection.
Finally, you create a virtual server that connects your server pool, application profile, and any
potential application rules together.
When the virtual server receives a request, the load balancing algorithm considers pool member
configuration and runtime status. The algorithm then selects the appropriate member of the pool
(which comprises one or more members) to distribute the traffic to. The pool member configuration
includes settings such as weight, maximum connection, and condition status. The runtime status
includes current connections, response time, and health check status information. The calculation
methods can be round-robin, weighted round-robin, least connection, source IP hash, weighted
least connections, URL, URI, or HTTP header.
Each pool is monitored by the associated service monitor. When the load balancer detects a
problem with a pool member, the member is marked as DOWN. Only servers with the UP status
are selected when choosing a pool member from the server pool. If the server pool is not
configured with a service monitor, all the pool members are considered UP.
Note For load balancer troubleshooting information, refer to NSX Troubleshooting Guide.
Procedure
Option Description
Load Balancer Allows the NSX Edge load balancer to distribute traffic to internal servers for
load balancing.
Acceleration When disabled, all virtual IP addresses (VIPs) use the L7 LB engine.
When enabled, the virtual IP uses the faster L4 LB engine or L7 LB engine
(based on the VIP configuration).
The L4 VIP ("acceleration enabled" in the VIP configuration and no L7 setting
such as AppProfile with cookie persistence or SSL-Offload) is processed
before the edge firewall, and no edge firewall rule is required to reach the
VIP. However, if the VIP is using a pool in non-transparent mode, the edge
firewall must be enabled (to allow the auto-created SNAT rule).
The L7 HTTP/HTTPS VIPs ("acceleration disabled" or L7 setting such as
AppProfile with cookie persistence or SSL-Offload) are processed after the
edge firewall, and require an edge firewall allow rule to reach the VIP.
Note: To validate which LB engine is used for each VIP by the NSX Load
Balancer, on the NSX Edge CLI (ssh or console), run the following command:
"show service loadbalancer virtual" and look for "LB PROTOCOL [L4|L7]"
Enable Service Insertion Allows the load balancer to work with third-party vendor services.
If you have a third party vendor load balancer service deployed in your
environment, see Using a Partner Load Balancer.
7 Click OK.
The following types of monitors are supported: ICMP, TCP, UDP, HTTP, HTTPS, DNS, MSSQL, and
LDAP.
Procedure
5 Click Add.
Interval, Timeout, and Max Retries are common parameters for all types of health checks.
The interval is the period, in seconds, at which the monitor sends requests to the back-end server.
8 Enter the Timeout value. In each health check, the timeout value is the maximum time in
seconds within which a response from the server must be received.
9 Enter the Max Retries. This value is the number of times the server is tested before it is
declared DOWN.
For example, if Interval is set to 5 seconds, Timeout to 15 seconds, and Max Retries to 3,
the NSX load balancer probes the back-end server every 5 seconds. In each probe, if the
expected response is received from the server within 15 seconds, the health check result is OK.
If not, the result is CRITICAL. If the three most recent health check results are all DOWN, the
server is marked as DOWN.
10 From the Type drop-down menu, select how to send the health check request to the
server. Monitor types that are supported are ICMP, TCP, UDP, HTTP, HTTPS, DNS, MSSQL,
and LDAP. Three predefined monitors are embedded in the system: default_tcp_monitor,
default_http_monitor, and default_https_monitor.
11 If you select ICMP as the monitor type, no other parameters are applicable. Leave other
parameters empty.
12 If you select TCP as the monitor type, three more parameters are available: Send, Receive,
and Extension.
a Send (optional) - The string sent to the back-end server after a connection is established.
The maximum permitted string length is 256 characters.
b Receive (optional) - Enter the string to be matched. This string can be a header or in the
body of the response. When the received string matches this definition, the server is
considered UP.
A sample extension, warning=10, indicates that if a server does not respond within 10
seconds, the status is set as warning.
escape - Allows the use of \n, \r, \t, or \ in the send or quit string. Must come before the
send or quit option. Default: nothing is added to send, and \r\n is added to the end of quit.
13 If you select HTTP or HTTPS as the monitor type, perform the following steps:
a Expected (optional) - Enter the string that the monitor expects to match in the status line
of HTTP response in the Expected section. This is a comma-separated list.
b Method (optional) - Select the method to detect server status from the drop-down menu:
GET, OPTIONS, or POST.
d If you select the POST method, enter the data to be sent in the Send section.
e Enter the string to be matched in the response content in the Receive section. This string
can be a header or in the body of the response.
If the string in the Expected section is not matched, the monitor does not try to match the
Receive content.
A sample extension, warning=10, indicates that if a server does not respond within 10
seconds, the status is set as warning.
Note For eregi, regex, and ereg, if the string contains { } and “, then you must add a
character \ before parsing the string for JSON format. Example of JSON format: Validate
response contains "{"Healthy":true}": eregi="\{\"Healthy\":true\}".
14 If you select UDP as the monitor type, perform the following steps:
a Send (required): Enter the string to be sent to the back-end server after a connection is
established.
b Receive (required): Enter the string expected to be received from the back-end server. Only
when the received string matches this definition is the server considered UP.
15 If you select DNS as the monitor type, perform the following steps:
a Send (required): Enter the string to be sent to back-end server after a connection is
established.
b Receive: Enter the string expected to be received from the back-end server. Only when the
received string matches this definition is the server considered UP.
A sample extension, warning=10, indicates that if a server does not respond within 10
seconds, the status is set as warning. This monitor type supports only TCP protocol.
16 If you select MSSQL as the monitor type, perform the following steps:
a Send: Enter the string to be run on the back-end server after a connection is established.
b Receive: Enter the string expected to be received from the back-end server. Only when the
received string matches this definition is the server considered UP.
c User Name, Password and Confirm password (required): Enter the required user name and
password, and confirm the entered password. Because the monitor is associated with a pool,
you must configure the MSSQL servers in the pool with the same user name and password
that are specified here.
A sample extension, warning=10, indicates that if a server does not respond within 10
seconds, the status is set as warning.
17 If you select LDAP as the monitor type, perform the following steps:
a Password and Confirm password (optional): Enter the required password and confirm the
entered password.
A sample extension, warning=10, indicates that if a server does not respond within 10
seconds, the status is set as warning.
18 Click OK.
What to do next
Procedure
5 Click Add.
Option Description
IP-HASH Selects a server based on a hash of the source IP address and the total
weight of all the running servers.
Algorithm parameters are disabled for this option.
ROUND_ROBIN Each server is used in turn according to the weight assigned to it.
This is the smoothest and fairest algorithm when the server's processing time
remains equally distributed.
Algorithm parameters are disabled for this option.
URI The left part of the URI (before the question mark) is hashed and divided by
the total weight of the running servers.
The result designates which server receives the request. This ensures that a
URI is always directed to the same server if no server goes up or down.
The URI algorithm parameter has two options uriLength=<len> and
uriDepth=<dep>. The length parameter range should be 1<=len<256. The
depth parameter range should be 1<=dep<10.
Length and depth parameters are followed by a positive integer number.
These options can balance servers based on the beginning of the URI only.
The length parameter indicates that the algorithm should only consider the
defined characters at the beginning of the URI to compute the hash.
The depth parameter indicates the maximum directory depth to be used to
compute the hash. One level is counted for each slash in the request. If both
parameters are specified, the evaluation stops when either is reached.
URL URL parameter specified in the argument is looked up in the query string of
each HTTP GET request.
If the parameter is followed by an equal sign = and a value, then the value
is hashed and divided by the total weight of the running servers. The result
designates which server receives the request. This process is used to track
user identifiers in requests and ensure that the same user ID is always sent to
the same server as long as no server goes up or down.
If no value or parameter is found, then a round robin algorithm is applied.
The URL algorithm parameter has one option urlParam=<url>.
8 (Optional) Select an existing default or custom monitor from the Monitors drop-down menu.
9 (Optional) Select the type of IP traffic for the pool. Default is any IP traffic.
10 To make client IP addresses visible to the back-end servers, enable the Transparent option.
For more details, see Chapter 16 Logical Load Balancer.
If Transparent is not selected (default value), back-end servers see the traffic source IP
address as a load balancer internal IP address. If Transparent is selected, the source IP
address is the real client IP address and NSX Edge must be set as the default gateway to
ensure that return packets go through the NSX Edge device.
a Click Add.
b Enter the name and IP address of the server member or click Select to assign grouping
objects.
Note VMware Tools must be installed on each VM, or an enabled IP discovery method
(DHCP snooping or ARP snooping, or both) must be available when using grouping
objects instead of IP addresses. For more details, see IP Discovery for Virtual Machines.
n Drain - Forces the server to shut down gracefully for maintenance. Setting the pool
member as "drain" removes the back-end server from load balancing, while allowing it
to be used for existing connections and new connections from clients with persistence
to that server. The persistence methods that work with a drain state are source IP
persistence, cookie insert, and cookie prefix.
Note Drain state cannot be enabled on an NSX Edge load balancer that has been
configured with Enable Acceleration. See Configure Load Balancer Service for more
information.
Note Enabling and disabling High Availability configuration on the NSX Edge can
break the persistence and drain state with source IP persistence method.
n Enable - Removes the server from maintenance mode and brings it back into
operation. The pool member state should either be Drain or Disabled.
Note You cannot change the pool member state from Disabled to Drain.
d Enter the port where the member is to receive traffic, and the monitor port where the
member is to receive health monitor pings.
Leave the Port value empty if the related virtual server is configured with a port range.
f Enter the maximum number of concurrent connections that the member can handle.
If the incoming requests go higher than the maximum, they are queued and wait for a
connection to be released.
g Enter the minimum number of concurrent connections that a member must always accept.
h Click OK.
You create an application profile to define the behavior of a particular type of network traffic.
After configuring a profile, you associate the profile with a virtual server. The virtual server then
processes traffic according to the values specified in the profile.
The following topics explain the steps to create the various application profile types.
Procedure
5 Click Add.
Persistence tracks and stores session data, such as the specific pool member that serviced
a client request. With persistence, client requests are directed to the same pool member
throughout the life of a session or during subsequent sessions.
Persistence Description
Source IP This persistence type tracks sessions based on the source IP address.
When a client requests a connection to a virtual server that supports a
source IP address persistence, the load balancer checks whether that
client was previously connected. If yes, the load balancer returns the client
to the same pool member.
Procedure
5 Click Add.
c Enter the URL to which you want to redirect the HTTP traffic.
Persistence tracks and stores session data, such as the specific pool member that serviced
a client request. With persistence, client requests are directed to the same pool member
throughout the life of a session or during subsequent sessions.
Persistence Description
Source IP This persistence type tracks sessions based on the source IP address.
When a client requests a connection to a virtual server that supports a
source IP address persistence, the load balancer checks whether that
client was previously connected. If yes, the load balancer returns the client
to the same pool member.
Cookie This persistence type inserts a unique cookie to identify a session the first
time a client accesses the site.
The cookie is referenced in subsequent requests to persist the connection to
the appropriate server.
7 If you selected the Cookie persistence type, enter the cookie name and select the mode of
inserting the cookie; else, proceed to the next step.
Mode Description
Prefix Select this mode when your client does not support more than one cookie.
All browsers accept multiple cookies. If you have a proprietary application
using a proprietary client that supports only one cookie, the Web server
sends its cookie as usual. NSX Edge injects its cookie information as a prefix
in the server cookie value. This added cookie information is removed when
the Edge sends it to the server.
App Session In this mode, the application does not support a new cookie added by the
virtual server (insert), nor does it support a modified cookie (prefix).
The virtual server learns the cookie injected by the back-end server. When
the client presents that cookie, the virtual server forwards the client request
to the same back-end server. It is not possible to see the App Session
persistence table for troubleshooting.
8 (Optional) To identify the originating IP address of a client connecting to a Web server through
the load balancer, enable the Insert X-Forwarded-For HTTP header option.
Note
n Starting in NSX 6.4.5, the Application Profile Type drop-down menu contains separate
options to create a profile for each of the three HTTPS traffic types.
n In NSX 6.4.4 and earlier, the Type drop-down menu contains only a single HTTPS option. To
create a profile for each of the three HTTPS traffic types, you must specify appropriate profile
parameters.
Starting in NSX 6.4.5, the UI terminology for a couple of HTTPS profile parameters has
changed. The following table lists the changes.
SSL Passthrough Application rules related to SSL attributes are allowed without requiring an SSL
termination on the load balancer.
The traffic pattern is: Client -> HTTPS-> LB (SSL passthrough) -> HTTPS -> Server.
HTTPS Offloading HTTP-based load balancing occurs. SSL ends on the load balancer and HTTP is used
between the load balancer and the server pool.
The traffic pattern is: Client -> HTTPS -> LB (end SSL) -> HTTP -> Server.
HTTPS End-to-End HTTP-based load balancing occurs. SSL ends on the load balancer and HTTPS is used
between the load balancer and the server pool.
The traffic pattern is: Client -> HTTPS -> LB (end SSL) -> HTTPS -> Server.
The following table describes the persistence supported in HTTPS traffic types.
Persistence Description
Source IP This persistence type tracks sessions based on the source IP address.
When a client requests a connection to a virtual server that supports a source IP
address persistence, the load balancer checks whether that client was previously
connected. If yes, the load balancer returns the client to the same pool member.
SSL Session ID This persistence type is available when you create a profile for the SSL passthrough
traffic type.
SSL Session ID persistence ensures that repeat connections from the same client
are sent to the same server. Session ID persistence allows the use of SSL session
resumption, which saves processing time for both the client and the server.
Cookie This persistence type inserts a unique cookie to identify a session the first time a client
accesses the site.
The cookie is referenced in subsequent requests to persist the connection to the
appropriate server.
For the Source IP and SSL Session ID persistence types, you can enter the persistence expiration
time in seconds. The default value of persistence is 300 seconds (five minutes).
Remember The persistence table is of a limited size. If the traffic is heavy, a large timeout value
might lead to the persistence table filling up quickly. When the persistence table fills up, the oldest
entry is deleted to accept the newest entry.
The load balancer persistence table maintains entries to record that client requests are directed to
the same pool member.
n If no new connection requests are received from the same client within the timeout period, the
persistence entry expires and is deleted.
n If a new connection request from the same client is received within the timeout period, the
timer is reset, and the client request is sent to a sticky pool member.
n After the timeout period has expired, new connection requests will be sent to a pool member
allocated by the load balancing algorithm.
For the L7 load balancing TCP source IP persistence scenario, the persistence entry times out if no
new TCP connections are made for a period, even if the existing connections are still alive.
The following table lists the approved cipher suites that can be used to negotiate security settings
during an SSL or TLS handshake.
DEFAULT DEFAULT
ECDHE-RSA-AES128-GCM-SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
ECDHE-RSA-AES256-GCM-SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
ECDHE-RSA-AES256-SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
ECDHE-ECDSA-AES256-SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
ECDH-ECDSA-AES256-SHA TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
ECDH-RSA-AES256-SHA TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
AES256-SHA TLS_RSA_WITH_AES_256_CBC_SHA
AES128-SHA TLS_RSA_WITH_AES_128_CBC_SHA
DES-CBC3-SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA
ECDHE-RSA-AES128-SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
ECDHE-RSA-AES128-SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
ECDHE-RSA-AES256-SHA384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
AES128-SHA256 TLS_RSA_WITH_AES_128_CBC_SHA256
AES128-GCM-SHA256 TLS_RSA_WITH_AES_128_GCM_SHA256
AES256-SHA256 TLS_RSA_WITH_AES_256_CBC_SHA256
AES256-GCM-SHA384 TLS_RSA_WITH_AES_256_GCM_SHA384
The following procedure explains the steps to create an application profile for each of the three
HTTPS traffic types.
Procedure
5 Click Add.
6.4.5 and later a In the Application Profile Type drop-down menu, select SSL
Passthrough.
b Enter the name of the profile.
c Select the type of persistence.
d Enter the persistence expiration time.
e Click Add.
6.4.5 and later a In the Application Profile Type drop-down menu, select HTTPS
Offloading.
b Enter the name of the profile.
c Enter the URL to which you want to redirect the HTTP traffic.
d Select the type of persistence.
n For Cookie persistence, enter the cookie name and select the mode
of inserting the cookie. For a description about each cookie mode,
see Create an HTTP Application Profile.
n For Source IP persistence, enter the persistence expiration time.
e Optional: To identify the originating IP address of a client connecting to
a Web server through the load balancer, enable the Insert X-Forwarded-
For HTTP header option.
f Click the Client SSL tab.
g Select one or more cipher algorithms or cipher suites to be used during
the SSL handshake. Make sure that the approved cipher suites use a DH
key length greater than or equal to 1024 bits.
h Specify whether client authentication is to be ignored or required. If set
to required, the client must provide a certificate after the request, or
the handshake is canceled.
i Select the required service certificate, CA certificate, and CRL that the
profile must use to end the HTTPS traffic from the client on the load
balancer.
j Click Add.
In this application profile type, you specify both the Client SSL (Virtual Server Certificates)
parameters and the Server SSL (Pool Side SSL) parameters.
The Server SSL parameters are used to authenticate the load balancer from the server side.
If the Edge load balancer has a CA certificate and a CRL already configured and the load
balancer needs to verify a service certificate from the back-end servers, select the service
certificate. You can also provide the load balancer certificate to the back-end server if the
back-end server needs to verify the load balancer service certificate.
6.4.5 and later a In the Application Profile Type drop-down menu, select HTTPS End-to-
End.
b Enter the name of the profile.
c Follow the steps given in the table for creating an HTTPS offloading
profile to define the following application profile parameters:
n HTTP Redirect URL
n Persistence
n Insert X-Forwarded-For HTTP header
n Client SSL: cipher algorithms, client authentication, service certificate,
CA certificate, and CRL
d Click the Server SSL tab, and select the cipher algorithms, the required
service certificate, CA certificate, and CRL to authenticate the load
balancer from the server side.
e Click Add.
NSX Data Center supports only virtual server-side application rules. NSX load balancer internally
uses HAProxy. It means that application rules that you add on the virtual server are internally
inserted in that virtual server's "frontend" section of the HAProxy configuration file. Pool-side
application rules (HAProxy "backend" section) are not supported.
For information about the application rule syntax, see the HAProxy documentation at http://
[Link]/haproxy-dconv/
For examples of commonly used application rules, see Application Rule Examples.
Procedure
5 Click Add.
# Each user is supposed to get a single active connection at a time, block the second one
tcp-request content reject if { sc1_conn_cur ge 2 }
# if a user tried to get connected at least 10 times over the last minute,
# it could be a brute force
tcp-request content reject if { sc1_conn_rate ge 10 }
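These reject rules assume that per-source-IP connection counters are being tracked. A minimal sketch of the tracking setup, with illustrative table size and expiry values, is:
# Track per-source-IP connection counters (values are illustrative)
stick-table type ip size 200k expire 1m store conn_cur,conn_rate(1m)
tcp-request connection track-sc1 src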
7 Associate the application profile to this virtual server and add the application rule created in
step 4.
The newly applied application rule on the virtual server protects the RDP servers.
Advanced Logging
By default, NSX load balancer supports basic logging. You can create an application rule as
follows to view more detailed logging messages for troubleshooting.
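As a minimal sketch, assuming an HTTP virtual server, the standard HAProxy option for detailed HTTP request logging can be used:
# Log full HTTP request details instead of the basic TCP log format
option httplog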
After you associate the application rule to the virtual server, logs include detailed messages such
as the following example.
To troubleshoot HTTPS traffic, you might need to add more rules. Most web applications use
301/302 responses with a Location header to redirect the client to a page (most of the time after
a login or a POST call), and also require an application cookie. As a result, your application server
might have difficulty determining the client connection information, might not be able to provide
the correct responses, and might even stop the application from working.
To allow the web application to support SSL offloading, add the following rule.
# See clearly in the log if the application is setting up response for HTTP or HTTPS
capture response header Location len 32
capture response header Set-Cookie len 32
# Provide client side connection info to application server over HTTP header
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
The load balancer inserts the following header when the connection is made over SSL.
X-Forwarded-Proto: https
The load balancer inserts the following header when the connection is made over HTTP.
X-Forwarded-Proto: http
# If the request is part of the list of forbidden URLs, reply "Forbidden" (HTTP response code 403)
http-request deny if block_url_list
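The rule assumes that an ACL named block_url_list is defined. A minimal sketch with illustrative path prefixes is:
# Match requests whose path starts with one of the forbidden prefixes (paths are illustrative)
acl block_url_list path_beg -i /blockedpage /forbidden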
no option http-server-close
By default, on the client side, NSX keeps the TCP connection established between requests.
However, with the "X-Forwarded-For" option, the session is closed after each request. The following
option keeps the client connection open between requests even if XFF is configured.
no option httpclose
# Remove the Server header from responses and add a replacement value
rspidel Server
rspadd Server:\ nginx
Rewrite Redirect
You can rewrite the Location header from HTTP to HTTPS. The following sample rule identifies the
Location header and replaces HTTP with HTTPS.
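A minimal sketch of such a rule, based on the HAProxy rspirep directive, is:
# Rewrite Location response headers from http:// to https://
rspirep ^Location:\ http://(.*) Location:\ https://\1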
# Enable the SSLv3 and TLSv1 protocols on the virtual server
sslv3 enable
tlsv1 enable
The following sample rule sets the timeout period to 100 seconds.
Time can be set as an integer with milliseconds, seconds, minutes, hours, or days.
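As a minimal sketch, assuming the client-facing keep-alive idle timeout that is described in the HTTP connection mode discussion in this guide:
# Set the client-facing idle timeout to 100 seconds
timeout http-keep-alive 100s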
# Redirect all HTTP requests to the same URI but over HTTPS
redirect scheme https if !{ ssl_fc }
Sorry Server
If all the servers in the primary pool are down, use the servers in the secondary pool.
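A minimal sketch of such a rule, assuming hypothetical HAProxy backend names primary_pool and secondary_pool for the two server pools:
# Switch to the secondary pool when no server in the primary pool is up
acl primary_pool_down nbsrv(primary_pool) eq 0
use_backend secondary_pool if primary_pool_down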
Prerequisites
n If you want to enable acceleration to use a faster load balancer, ensure that acceleration is
enabled in the Global Configuration settings of the load balancer. See Configure Load Balancer
Service.
Procedure
5 Click Add.
a Enable the virtual server to make this virtual server available for use.
b (Optional) Enable acceleration for the load balancer to use the faster L4 load balancer
engine rather than L7 load balancer engine.
You can use the show service loadbalancer virtual CLI command to confirm the
load balancer engine in use.
You can associate only an application profile with the same protocol as the virtual server
that you are adding. The services supported by the selected pool appear.
e Enter an IP address or click Select IP Address to set the IP address that the load balancer
is listening on.
The Select IP Address window shows only the primary IP address. If you are creating a
VIP using a secondary IP address, enter it manually.
g Enter the port number that the load balancer listens on.
You can also enter a range of ports. For example, to share the virtual server configuration,
such as server pool, application profile, and application rule, enter 80,8001-8004,443.
To use FTP, the TCP protocol must have port 21 assigned to it.
i In the Connection Limit text box, enter the maximum concurrent connections that the
virtual server can process.
j In the Connection Rate Limit text box, enter the maximum number of incoming new connection
requests per second.
k (Optional) Click the Advanced tab and add the application rule to associate it with the
virtual server.
Procedure
6 Make the appropriate changes to traffic, persistence, certificate, or cipher configuration, and
click Save or OK.
Prerequisites
Go to Manage > Settings > Certificates and ensure that a valid certificate is present. You can add
a certificate for the load balancer in any one of the following ways:
n Generate a CSR.
Procedure
6.4.5 and later a In the Application Profile Type drop-down menu, select HTTPS
Offloading.
b In the Persistence drop-down menu, select None.
c Click Client SSL > Service Certificates.
d Select the service certificate that you added for the NSX Edge load
balancer.
Procedure
If you are using a load balancer service monitor with high availability (HA), HA must be enabled on
a dedicated interface.
After you create a service monitor and associate it with a server pool, you can update the existing
service monitor or delete it to save system resources.
For more information about service monitors, see Create a Service Monitor.
Procedure
Procedure
Procedure
When you add a server pool, the transparent mode is disabled by default. When transparent mode
is disabled, back-end servers see the traffic source IP address as a load balancer internal IP
address. When this mode is enabled, the source IP address is the real client IP address, and the
NSX Edge must be in the path of the server response. A typical design is to have the server
default gateway be the NSX Edge. Transparent mode does not require SNAT.
For more information about inline or transparent mode, see Chapter 16 Logical Load Balancer.
Procedure
n Name
n Description
n Algorithm
n Monitor
n IP Filter
For information about the various algorithms, see Add a Server Pool.
6 To enable the transparent mode, click the toggle switch or select the Transparent check box.
Procedure
Procedure
The Pool and Member Status window displays the status of all the server pools.
6 Select a pool from the Pool Status and Statistics table and view the status of all the members in
that pool in the Member Status and Statistics table.
Pool status can be UP or DOWN. A pool is marked as DOWN when all the members in the pool
are DOWN; otherwise, the pool is UP.
n UP: The member is enabled and its health status is UP, or no monitor is defined on the pool.
n DOWN: The member is enabled, and the health status of the member is DOWN.
Tip Starting in NSX 6.4.5, you can click the DOWN status in the Member Status and Statistics
table to determine the cause for the member being down. However, in NSX 6.4.4 and earlier,
you must run the show service loadbalancer pool Edge CLI command to determine the
cause for the member being down.
Procedure
Procedure
For information about the application rule syntax, see HAProxy documentation at http://
[Link]/haproxy-dconv/
For examples of commonly used application rules, see Application Rule Examples.
Procedure
Procedure
By default, NSX Load Balancer closes the server TCP connection after each client request.
However, Windows NT LAN Manager (NTLM) authentication requires the same connection for the
lifetime of the authenticated request, so connections must be kept alive for the duration of the
requests.
To keep the server connection open between requests, add the following application rule on the
Virtual IP load balancing the Web servers using NTLM authentication:
# NTLM authentication: keep the server connection open between requests
no option http-server-close
HTTP Server Close (default) - The server-facing connection is closed after the end of the response
is received, and the client-facing connection remains open. HTTP Server Close provides the lowest
latency on the client side (slow network) and the fastest session reuse on the server side to save
server resources. It also permits non-keepalive capable servers to be served in keep-alive mode
from a client perspective. This mode is suitable for most common use cases, especially for slow
client-facing networks and fast server-facing networks.
HTTP Keep Alive - All requests and responses are processed and connections remain open
but idle between responses and new requests. The advantages are reduced latency between
transactions, and less processing power required on the server side. Memory requirements will
increase to accommodate the number of active sessions, which will be higher because connections
are no longer closed after each request. The client-facing idle timeout can be configured using
the application rule timeout http-keep-alive [time]. By default the idle timeout is 1 second. This
mode is mandatory when an application requires NTLM authentication.
HTTP Tunnel - Only the first request and response are processed, and a tunnel is established
between the client and the server; they can then communicate without further analysis of the
HTTP protocol. After the tunnel is established, the connection is persistent on both the client and
server sides. To enable this mode, none of the following options should be set: passive-close
mode, server-close mode, force-close mode.
HTTP tunnel mode impacts the following features, and applies to only the first request and
response in a session:
n cookie processing
n content switching
HTTP Passive Close - The same as tunnel mode, but with a Connection: close header added
in both the client and server directions. Both ends close after the first request and response
exchange. If option httpclose is set, the Load Balancer works in HTTP tunnel mode and checks
if a Connection: close header is present in each direction. If the header is not present, a
Connection: close header is added. Each end then actively closes the TCP connection after each
transfer, resulting in a switch to the HTTP close mode. Any connection header other than close is
removed. Applications that cannot properly process the second and subsequent requests, for
example when a cookie inserted by the Load Balancer is carried back by the following requests
from the client, can use tunnel mode or passive close mode.
Some HTTP servers do not necessarily close the connections when they receive the Connection:
close set by option httpclose. If the client also does not close, the connection remains open until
the timeout expires. This causes a high number of simultaneous connections on the servers, and
shows high global session times in the logs. For this reason, these configurations are not
compatible with older HTTP 1.0 browsers. If this occurs, use the option forceclose, which actively
closes the request connection once the server responds. Option forceclose also releases the
server connection earlier because it does not have to wait for the client to acknowledge it.
HTTP Force Close - Both the client and the server connections are actively closed by the Load
Balancer after the end of a response. Some HTTP servers do not necessarily close the connections
when they receive the Connection: close set by option httpclose. If the client also does not
close, the connection remains open until the timeout expires. This causes a high number of
simultaneous connections on the servers and shows high global session times in the logs. When
this happens, option forceclose actively closes the outgoing server channel when the server has
finished responding, and releases some resources earlier than with option httpclose.
NSX 6.1.2 - 6.1.4: The default mode is HTTP Server Close. The HTTP Passive Close (option
httpclose) is added automatically to the virtual server. Supported application rules: no option
http-server-close, option httpclose, no option httpclose.
NSX 6.1.5 - 6.1.x and 6.2.0 - 6.2.2: The default mode is HTTP Server Close. The xff header is
added onto each request from the client when dispatching to the backend server. Supported
application rules: no option http-server-close, option httpclose, no option httpclose.
NSX 6.2.3 - 6.2.5: The default mode is HTTP Server Close. The xff header is added onto each
request from the client when dispatching to the backend server. Supported application rules: no
option http-server-close, option httpclose, no option httpclose.
NSX 6.2.5 - 6.2.x: The default mode is HTTP Server Close. The xff header is added onto each
request from the client when dispatching to the backend server. Supported application rules: no
option http-server-close, option http-keep-alive, option http-tunnel, option httpclose, option
forceclose.
The accompanying figure shows the one-armed load balancer (SLB) deployed on the same logical switch (VXLAN) as the server farm, and the IP address and TCP port translations at each phase of the packet flow.
In proxy mode, the load balancer uses its own IP address as the source address to send
requests to a back-end server. The back-end server views all traffic as being sent from the load
balancer and responds to the load balancer directly. This mode is also called SNAT mode or
non-transparent mode. For more information, refer to NSX Administration Guide.
A typical NSX one-armed load balancer is deployed on the same subnet with its back-end servers,
apart from the logical router. The NSX load balancer virtual server listens on a virtual IP for
incoming requests from clients and dispatches the requests to back-end servers. For the return
traffic, reverse NAT is required to change the source IP address from the back-end server to the
virtual IP (VIP) address and then send the traffic to the client. Without this operation,
the connection to the client can break.
After the ESG receives the traffic, it performs the following two operations:
n Destination Network Address Translation (DNAT) to change the VIP address to the IP address
of one of the load balanced machines.
n Source Network Address Translation (SNAT) to exchange the client IP address with the ESG IP
address.
Then the ESG sends the traffic to the load balanced server, and the load balanced server
sends the response back to the ESG, and then back to the client. This option is much easier to
configure than the inline mode, but has two potential caveats. The first is that this mode requires
a dedicated ESG, and the second is that the load balanced servers are not aware of the
original client IP address. One workaround for HTTP or HTTPS applications is to enable the Insert
X-Forwarded-For option in the HTTP application profile so that the client IP address is carried in
the X-Forwarded-For HTTP header in the request that is sent to the back-end server.
If client IP address visibility is required on the back-end server for applications other than HTTP or
HTTPS, you can configure the IP pool to be transparent. If clients are not on the same subnet as
the back-end server, inline mode is recommended. Otherwise, you must use the load balancer IP
address as the default gateway of the back-end server.
n Inline/transparent mode
In DSR mode, the back-end server responds directly to the client. Currently, NSX load balancer
does not support DSR.
The following procedure explains the configuration of a one-armed load balancer with HTTPS
offloading (SSL offloading) application profile type.
Procedure
Version Procedure
NSX 6.4.5 and later 1 In the Application Profile Type drop-down menu, select HTTPS
Offloading.
2 In the Name text box, enter the name of the profile. For example,
enter Web-SSL-Profile.
3 Click Client SSL > Service Certificates.
4 Select the self-signed certificate that you added earlier.
NSX 6.4.4 and earlier 1 In the Type drop-down menu, select HTTPS.
2 In the Name text box, enter the name of the profile. For example,
Web-SSL-Profile.
3 Select the Configure Service Certificate check box.
4 Select the self-signed certificate that you added earlier.
7 (Optional) Click Manage > Load Balancer > Service Monitoring. Edit the default service
monitoring to change it from basic HTTP or HTTPS to specific URL or URIs, as required.
a Click Manage > Load Balancer > Pools, and then click Add.
b In the Name text box, enter a name for the server pool. For example, enter Web-Tier-
Pool-01.
State Name IP Address Weight Monitor Port Port Max Connections Min Connections
f To use the SNAT mode, ensure that the Transparent option is not enabled.
9 Click Show Status or Show Pool Statistics and verify that the status of the Web-Tier-Pool-01
pool is UP.
Select the pool and ensure that the status of both members in this pool is UP.
a Click Manage > Load Balancer > Virtual Servers, and then click Add.
Option Description
Acceleration If you want to use the L4 load balancer for UDP or higher-performance
TCP, enable acceleration. If you enable this option, ensure that the firewall
status is enabled on the NSX Edge load balancer because a firewall is
required for L4 SNAT.
Default Pool Select the Web-Tier-Pool-01 server pool that you created earlier.
c (Optional) Click the Advanced tab, and associate an application rule with the virtual server.
In non-transparent mode, the back-end server cannot see the client IP, but can see the load
balancer internal IP address. As a workaround for HTTP or HTTPS traffic, select the Insert
X-Forwarded-For HTTP header in the application profile. When this option is selected, the
Edge load balancer adds the header "X-Forwarded-For" with the value of the client source IP
address.
The following figure shows the logical topology of a network that uses an inline load balancer. The
NSX Edge at the perimeter of the network does both north-south routing and the load balancing
function.
In the figure, external clients reach the uplink interface of the NSX Edge, and the web servers are connected to the VXLAN 5000 logical switch behind the internal interface of the edge.
For this scenario, consider that you have configured the following interfaces on the NSX Edge:
The load balancer uses the uplink interface on the edge for the virtual IP address (VIP). The
internal interface on the edge acts as the default gateway for the back-end web servers in the
server pool.
You want to load balance the HTTP traffic coming from external clients on the NSX Edge and
distribute the traffic to the Web servers that are connected to the VXLAN 5000 logical switch.
The following procedure explains the steps for configuring an inline load balancer on the NSX
Edge.
Prerequisites
You must have an NSX Edge Service Gateway deployed in your network.
Procedure
For example:
Option Description
a Click Manage > Load Balancer > Pools, and then click Add.
For example:
Option Description
Transparent Enable this option to ensure that the source client IP addresses are visible
to the back-end servers in the pool.
For example, specify the following settings for the pool members.
State Name IP Address Weight Monitor Port Port Max Connections Min Connections
7 Click Show Status or Show Pool Statistics and verify that the status of the Web-Server-Pool is
UP.
Select the pool and ensure that the status of all members in this pool is UP.
a Click Manage > Load Balancer > Virtual Servers, and then click Add.
Option Description
IP Address Enter or select the IP address that you configured on the uplink (external)
interface of the edge.
For this scenario, select [Link].
After configuring the NSX load balancer, provide the NSX Edge device uplink interface IP address
for vCenter Single Sign-On.
Note The following procedure explains the steps for configuring an NSX Edge load balancer
for use with Platform Services Controller 6.0. For configuring the Edge load balancer for
use with Platform Services Controller 6.5, see the VMware knowledge base article at https://
[Link]/s/article/2147046.
Prerequisites
n Perform the PSC High Availability preparation tasks that are mentioned in the VMware
knowledge base article at [Link]
n Save the /ha/[Link] and /ha/lb_rsa.key from the first PSC node to configure certificates.
n Verify that you have at least one uplink for configuring VIP and one interface attached to an
internal logical switch.
Procedure
a Save the PSC [Link] certificate, RSA, and passphrase that you generated with the
OpenSSL command.
b Double-click the Edge and click Manage > Settings > Certificates.
d In the Certificate Contents text box, add the contents of the [Link] file.
Option Description
Version Procedure
NSX 6.4.5 and later 1 In the Application Profile Type drop-down menu, select HTTPS
Offloading.
2 In the Name text box, enter the name of the profile. For example,
enter sso_https_profile.
3 Click Client SSL > Service Certificates.
4 Select the PSC certificate that you added earlier.
NSX 6.4.4 and earlier 1 In the Type drop-down menu, select HTTPS.
2 In the Name text box, enter the name of the profile. For example,
sso_https_profile.
3 Select the Configure Service Certificate check box.
4 Select the PSC certificate that you added earlier.
a Click Manage > Load Balancer > Pools, and then click Add.
For example:
Option Description
Add the following members to the sso_tcp_pool1 pool with monitor port 443.
State Name IP Address Weight Monitor Port Port Max Connections Min Connections
For example:
Option Description
Add the following members to the sso_tcp_pool2 pool with monitor port 389.
State Name IP Address Weight Monitor Port Port Max Connections Min Connections
a Select Manage > Load Balancer > Virtual Servers , and then click Add.
b Create a virtual server for TCP VIP with the following configuration settings.
For example:
Option Description
Default Pool Select the sso_tcp_pool2 server pool that you created earlier.
c Create a virtual server for HTTPS VIP with the following configuration settings.
For example:
Option Description
Default Pool Select the sso_tcp_pool1 server pool that you created earlier.
Procedure
f Copy and paste the certificate contents in the Certificate Contents text box. Text must
include "-----BEGIN xxx-----" and "-----END xxx-----".
For chained certificates (server certificate and an intermediate CA certificate), select the
Certificate option. Following is an example of a chained certificate content:
-----BEGIN CERTIFICATE-----
Server cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Intermediate cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Root cert
-----END CERTIFICATE-----
g In the Private Key text box, copy and paste the private key contents .
Prefix the certificate content (PEM for certificate or private key) with one of the following
strings:
For complete examples of certificate and private key, see the Example: Certificate and
Private Key.
Version Procedure
NSX 6.4.5 and later 1 In the Application Profile Type drop-down menu, select HTTPS
Offloading.
2 Click Client SSL > Service Certificates.
3 Select the web server certificate that you added in step 1.
NSX 6.4.4 and earlier 1 In the Type drop-down menu, select HTTPS.
2 Select the Configure Service Certificates check box.
3 Select the web server certificate that you added in step 1.
1 Enable the virtual server to make this virtual server available for use.
3 Select the default pool that is composed of HTTP servers (not HTTPS servers).
For information about specifying the other parameters in the New Virtual Server window,
see Add Virtual Servers.
-----BEGIN CERTIFICATE-----
MIID0DCCArigAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJGUjET
MBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UEBwwFUGFyaXMxDTALBgNVBAoMBERp
bWkxDTALBgNVBAsMBE5TQlUxEDAOBgNVBAMMB0RpbWkgQ0ExGzAZBgkqhkiG9w0B
CQEWDGRpbWlAZGltaS5mcjAeFw0xNDAxMjgyMDM2NTVaFw0yNDAxMjYyMDM2NTVa
MFsxCzAJBgNVBAYTAkZSMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJ
bnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxFDASBgNVBAMMC3d3dy5kaW1pLmZyMIIB
IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvpnaPKLIKdvx98KW68lz8pGa
RRcYersNGqPjpifMVjjE8LuCoXgPU0HePnNTUjpShBnynKCvrtWhN+haKbSp+QWX
SxiTrW99HBfAl1MDQyWcukoEb9Cw6INctVUN4iRvkn9T8E6q174RbcnwA/7yTc7p
1NCvw+6B/aAN9l1G2pQXgRdYC/+G6o1IZEHtWhqzE97nY5QKNuUVD0V09dc5CDYB
aKjqetwwv6DFk/GRdOSEd/6bW+20z0qSHpa3YNW6qSp+x5pyYmDrzRIR03os6Dau
ZkChSRyc/Whvurx6o85D6qpzywo8xwNaLZHxTQPgcIA5su9ZIytv9LH2E+lSwwID
AQABo3sweTAJBgNVHRMEAjAAMCwGCWCGSAGG+EIBDQQfFh1PcGVuU1NMIEdlbmVy
YXRlZCBDZXJ0aWZpY2F0ZTAdBgNVHQ4EFgQU+tugFtyN+cXe1wxUqeA7X+yS3bgw
HwYDVR0jBBgwFoAUhMwqkbBrGp87HxfvwgPnlGgVR64wDQYJKoZIhvcNAQEFBQAD
ggEBAIEEmqqhEzeXZ4CKhE5UM9vCKzkj5Iv9TFs/a9CcQuepzplt7YVmevBFNOc0
+1ZyR4tXgi4+5MHGzhYCIVvHo4hKqYm+J+o5mwQInf1qoAHuO7CLD3WNa1sKcVUV
vepIxc/1aHZrG+dPeEHt0MdFfOw13YdUc2FH6AqEdcEL4aV5PXq2eYR8hR4zKbc1
fBtuqUsvA8NWSIyzQ16fyGve+ANf6vXvUizyvwDrPRv/kfvLNa3ZPnLMMxU98Mvh
PXy3PkB8++6U4Y3vdk2Ni2WYYlIls8yqbM4327IKmkDc2TimS8u60CT47mKU7aDY
cbTV5RDkrlaYwm5yqlTIglvCv7o=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIID0DCCArigAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJGUjET
MBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UEBwwFUGFyaXMxDTALBgNVBAoMBERp
bWkxDTALBgNVBAsMBE5TQlUxEDAOBgNVBAMMB0RpbWkgQ0ExGzAZBgkqhkiG9w0B
CQEWDGRpbWlAZGltaS5mcjAeFw0xNDAxMjgyMDM2NTVaFw0yNDAxMjYyMDM2NTVa
MFsxCzAJBgNVBAYTAkZSMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJ
bnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxFDASBgNVBAMMC3d3dy5kaW1pLmZyMIIB
IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvpnaPKLIKdvx98KW68lz8pGa
RRcYersNGqPjpifMVjjE8LuCoXgPU0HePnNTUjpShBnynKCvrtWhN+haKbSp+QWX
SxiTrW99HBfAl1MDQyWcukoEb9Cw6INctVUN4iRvkn9T8E6q174RbcnwA/7yTc7p
1NCvw+6B/aAN9l1G2pQXgRdYC/+G6o1IZEHtWhqzE97nY5QKNuUVD0V09dc5CDYB
aKjqetwwv6DFk/GRdOSEd/6bW+20z0qSHpa3YNW6qSp+x5pyYmDrzRIR03os6Dau
ZkChSRyc/Whvurx6o85D6qpzywo8xwNaLZHxTQPgcIA5su9ZIytv9LH2E+lSwwID
AQABo3sweTAJBgNVHRMEAjAAMCwGCWCGSAGG+EIBDQQfFh1PcGVuU1NMIEdlbmVy
YXRlZCBDZXJ0aWZpY2F0ZTAdBgNVHQ4EFgQU+tugFtyN+cXe1wxUqeA7X+yS3bgw
HwYDVR0jBBgwFoAUhMwqkbBrGp87HxfvwgPnlGgVR64wDQYJKoZIhvcNAQEFBQAD
ggEBAIEEmqqhEzeXZ4CKhE5UM9vCKzkj5Iv9TFs/a9CcQuepzplt7YVmevBFNOc0
+1ZyR4tXgi4+5MHGzhYCIVvHo4hKqYm+J+o5mwQInf1qoAHuO7CLD3WNa1sKcVUV
vepIxc/1aHZrG+dPeEHt0MdFfOw13YdUc2FH6AqEdcEL4aV5PXq2eYR8hR4zKbc1
fBtuqUsvA8NWSIyzQ16fyGve+ANf6vXvUizyvwDrPRv/kfvLNa3ZPnLMMxU98Mvh
PXy3PkB8++6U4Y3vdk2Ni2WYYlIls8yqbM4327IKmkDc2TimS8u60CT47mKU7aDY
cbTV5RDkrlaYwm5yqlTIglvCv7o=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDyTCCArGgAwIBAgIBADANBgkqhkiG9w0BAQUFADB/MQswCQYDVQQGEwJGUjET
MBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UEBwwFUGFyaXMxDTALBgNVBAoMBERp
bWkxDTALBgNVBAsMBE5TQlUxEDAOBgNVBAMMB0RpbWkgQ0ExGzAZBgkqhkiG9w0B
CQEWDGRpbWlAZGltaS5mcjAeFw0xNDAxMjgyMDI2NDRaFw0yNDAxMjYyMDI2NDRa
MH8xCzAJBgNVBAYTAkZSMRMwEQYDVQQIDApTb21lLVN0YXRlMQ4wDAYDVQQHDAVQ
YXJpczENMAsGA1UECgwERGltaTENMAsGA1UECwwETlNCVTEQMA4GA1UEAwwHRGlt
aSBDQTEbMBkGCSqGSIb3DQEJARYMZGltaUBkaW1pLmZyMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAuxuG4QeBIGXj/AB/YRLLtpgpTpGnDntVlgsycZrL
3qqyOdBNlwnvcB9etfY5iWzjeq7YZRr6i0dIV4sFNBR2NoK+YvdD9j1TRi7njZg0
d6zth0xlsOhCsDlV/YCL1CTcYDlKA/QiKeIQa7GU3Rhf0t/KnAkr6mwoDbdKBQX1
D5HgQuXJiFdh5XRebxF1ZB3gH+0kCEaEZPrjFDApkOXNxEARZdpBLpbvQljtVXtj
HMsvrIOc7QqUSOU3GcbBMSHjT8cgg8ssf492Go3bDQkIzTROz9QgDHaqDqTC9Hoe
vlIpTS+q/3BCY5AGWKl3CCR6dDyK6honnOR/8srezaN4PwIDAQABo1AwTjAdBgNV
HQ4EFgQUhMwqkbBrGp87HxfvwgPnlGgVR64wHwYDVR0jBBgwFoAUhMwqkbBrGp87
HxfvwgPnlGgVR64wDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAVqYq
vhm5wAEKmvrKXRjeb5kiEIp7oZAFkYp6sKODuZ1VdkjMDD4wv46iqAe1QIIsfGwd
Dmv0oqSl+iPPy24ATMSZQbPLO5K64Hw7Q8KPos0yD8gHSg2d4SOukj+FD2IjAH17
a8auMw7TTHu6976JprQQKtPADRcfodGd5UFiz/6ZgLzUE23cktJMc2Bt18B9OZII
J9ef2PZxZirJg1OqF2KssDlJP5ECo9K3EmovC5M5Aly++s8ayjBnNivtklYL1VOT
ZrpPgcndTHUA5KS/Duf40dXm0snCxLAKNP28pMowDLSYc6IjVrD4+qqw3f1b7yGb
bJcFgxKDeg5YecQOSg==
-----END CERTIFICATE-----
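Before pasting a chained certificate such as the example above, it can help to confirm that the blocks are ordered server certificate, then intermediate CA, then root CA. The following is a minimal Python sketch, assuming the third-party cryptography package is available and that the chain is stored in a hypothetical file named chain.pem:

# Print subject and issuer of each certificate in a chained PEM file so you
# can confirm the order: server certificate, intermediate CA, root CA.
import re
from cryptography import x509

with open("chain.pem", "rb") as f:   # hypothetical file containing the chain
    data = f.read()

blocks = re.findall(
    rb"-----BEGIN CERTIFICATE-----.+?-----END CERTIFICATE-----",
    data,
    re.DOTALL,
)
for i, block in enumerate(blocks, start=1):
    cert = x509.load_pem_x509_certificate(block)
    print(i, "subject:", cert.subject.rfc4514_string())
    print(i, "issuer: ", cert.issuer.rfc4514_string())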
Procedure
f Copy and paste the certificate contents in the Certificate Contents text box. Text must
include "-----BEGIN xxx-----" and "-----END xxx-----".
For chained certificates (server certificate and an intermediate CA certificate), select the
Certificate option. Following is an example of a chained certificate content:
-----BEGIN CERTIFICATE-----
Server cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Intermediate cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Root cert
-----END CERTIFICATE-----
g In the Private Key text box, copy and paste the private key contents.
Prefix the certificate content (PEM for certificate or private key) with one of the following
strings:
For complete examples of certificates and private keys, see the Example: Certificate and
Private Key.
Version Procedure
NSX 6.4.5 and later 1 In the Application Profile Type drop-down menu, select HTTPS End-
to-End.
2 Click Server SSL > Service Certificates.
3 Select the web server certificate that you added in step 1.
NSX 6.4.4 and earlier 1 In the Type drop-down menu, select HTTPS.
2 Select the Enable Pool Side SSL check box.
3 Select the Configure Service Certificates check box.
4 Select the web server certificate that you added in step 1.
1 Enable the virtual server to make this virtual server available for use.
For information about specifying the other parameters in the New Virtual Server window,
see Add Virtual Servers.
Note Certificates are not required for SSL passthrough application profiles.
Procedure
Version Procedure
NSX 6.4.5 and later 1 In the Application Profile Type drop-down menu, select SSL
Passthrough.
2 In the Persistence drop-down menu, select None.
NSX 6.4.4 and earlier 1 In the Type drop-down menu, select HTTPS.
2 Select the Enable SSL Passthrough check box.
3 In the Persistence drop-down menu, select None.
1 Enable the virtual server to make this virtual server available for use.
For information about specifying the other parameters in the New Virtual Server window,
see Add Virtual Servers.
Note
n If Acceleration is enabled and there are no L7 related configurations, the Edge does not terminate the session.
n If Acceleration is disabled, the session might be treated in L7 TCP mode, and the Edge terminates it into two sessions (one on the client side and one on the server side).
Client Authentication
Clients access the Web application through HTTPS. The HTTPS session is terminated on the Edge VIP, and the client is asked for a client certificate during the session.
1 Add a Web server certificate that is signed by a root CA. For more information, see Scenario:
Import SSL Certificate.
Version Procedure
NSX 6.4.5 and later 1 In the Application Profile Type drop-down menu, select HTTPS End-to-End.
2 Click Client SSL > CA Certificates.
3 Select the web server certificate that you added in step 1.
4 In the Client Authentication drop-down menu, select Required.
3 Create a virtual server. For information about specifying virtual server parameters, see
Scenario: Import SSL Certificate.
Note In NSX 6.4.4 and earlier, when the Enable Pool Side SSL option is disabled in the
application profile, the pool selected is composed of HTTP servers. When the Enable Pool
Side SSL option is enabled in the application profile, the pool selected is composed of HTTPS
servers.
b Convert certificate and private key to the pfx file. For complete examples
of certificate and private key, see Example: Certificate and Private Key topic.
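The conversion is typically done with OpenSSL; as a rough equivalent, the following Python sketch builds a .pfx (PKCS#12) bundle with the third-party cryptography package. The file names, friendly name, and export password are placeholders:

# Bundle a PEM certificate and private key into a password-protected .pfx
# (PKCS#12) file. File names and the export password are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    BestAvailableEncryption,
    load_pem_private_key,
    pkcs12,
)

cert = x509.load_pem_x509_certificate(open("server.crt", "rb").read())
key = load_pem_private_key(open("server.key", "rb").read(), password=None)

pfx_bytes = pkcs12.serialize_key_and_certificates(
    name=b"web-server",            # friendly name inside the bundle
    key=key,
    cert=cert,
    cas=None,                      # add intermediate CAs here if needed
    encryption_algorithm=BestAvailableEncryption(b"export-password"),
)
with open("server.pfx", "wb") as f:
    f.write(pfx_bytes)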
Server Authentication
Clients access the Web application through HTTPS. The HTTPS session is terminated on the Edge VIP. The Edge establishes new HTTPS connections to the servers, and it requests and verifies the server certificate.
1 Add the Web server certificate that is chained with the root CA certificate for server certificate
authentication. For more information, see Scenario: Import SSL Certificate.
Version Procedure
NSX 6.4.5 and later 1 In the Application Profile Type drop-down menu, select HTTPS End-to-End.
2 Click Server SSL > CA Certificates.
3 Next to Cipher, click the Edit ( ) icon, and select the ciphers.
4 Enable the Server Authentication option.
5 Select the CA certificate that you added in step 1.
Note If your preferred cipher is not in the approved ciphers list, it resets to Default.
After upgrading from an old NSX version, if the cipher is null or empty, or if the cipher is
not in the approved ciphers list of the old version, it resets to Default.
7 In the Client Authentication drop-down menu, select Required.
3 Create a virtual server. For information about specifying virtual server parameters, see
Scenario: Import SSL Certificate.
Note In NSX 6.4.4 and earlier, when the Enable Pool Side SSL option is disabled in the
application profile, the pool selected is composed of HTTP servers. When the Enable Pool
Side SSL option is enabled in the application profile, the pool selected is composed of HTTPS
servers.
You must have a working NSX Edge instance before you can use any of the above services. For
information on setting up NSX Edge, see NSX Edge Configuration.
n Uses the IP address of the internal interface on NSX Edge as the default gateway address
for all clients (except for non-directly connected pools), and the broadcast and subnet mask
values of the internal interface for the container network.
Note By design, DHCP service is supported on the internal interfaces of an NSX Edge. However,
in some situations, you may choose to configure DHCP on an uplink interface of the edge and
configure no internal interfaces. In this situation, the edge can listen to the DHCP client requests
on the uplink interface, and dynamically assign IP addresses to the DHCP clients. Later, if you
configure an internal interface on the same edge, DHCP service stops working because the edge
starts listening to the DHCP client requests on the internal interface.
You must restart the DHCP service on client virtual machines in the following situations:
An IP pool is a sequential range of IP addresses within the network. Virtual machines protected by
NSX Edge that do not have an address binding are allocated an IP address from this pool. IP pool
ranges cannot overlap one another; as a result, an IP address can belong to only one IP pool.
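Because pool ranges must not overlap, it can help to sanity-check planned ranges before creating them. A minimal Python sketch, assuming each range is given as a start and end IPv4 address:

# Check whether two planned DHCP IP pool ranges overlap.
from ipaddress import ip_address

def overlaps(range_a, range_b):
    a_start, a_end = (ip_address(x) for x in range_a)
    b_start, b_end = (ip_address(x) for x in range_b)
    return a_start <= b_end and b_start <= a_end

print(overlaps(("192.0.2.10", "192.0.2.50"), ("192.0.2.40", "192.0.2.80")))  # True
print(overlaps(("192.0.2.10", "192.0.2.50"), ("192.0.2.60", "192.0.2.80")))  # False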
Procedure
5 Click Add.
6 Configure the following general options for the new DHCP IP pool.
Option Action
Domain Name Enter the domain name of the DNS server. This setting is optional.
Auto Configure DNS Select to use the DNS service configuration for the DHCP binding.
Primary Name Server If you did not select Auto Configure DNS, type the Primary Nameserver
for the DNS service. You must enter the IP address of a DNS server for
hostname-to-IP address resolution. This setting is optional.
Secondary Name Server If you did not select Auto Configure DNS, type the Secondary Nameserver
for the DNS service. You must enter the IP address of a DNS server for
hostname-to-IP address resolution. This setting is optional.
Default Gateway Enter the default gateway address. If you do not specify the default gateway
IP address, the internal interface of the NSX Edge instance is taken as the
default gateway. This setting is optional.
Subnet Mask Specify the subnet mask. The subnet mask must be same as the subnet mask
of the Edge interface or the DHCP Relay, in case of a distributed router.
Lease Never Expires Select to bind the address to the MAC address of the virtual machine forever.
If you select this, Lease Time is disabled.
Lease Time Select whether to lease the address to the client for the default time (one
day), or enter a value in seconds. If you selected Lease never expires, you
cannot specify the lease time. This setting is optional.
Option Action
Next Server Next boot TFTP server, used by the PXE boot or bootp.
TFTP server name (option 66) Enter a unicast IPv4 address or a host name that the device will use to
download the file specified in bootfile name (option 67).
TFTP server address (option 150) Enter one or more TFTP server IPv4 addresses.
Bootfile name (option 67) Enter the bootfile file name that is to be downloaded from the server
specified in TFTP server name (option 66).
Interface MTU (option 26) The Maximum Transmission Unit (MTU) is the maximum frame size that can
be sent between two hosts without fragmentation. This option specifies the
MTU size to be used on the interface. One MTU size (in bytes) can be set for
each pool and static binding. The MTU minimum value is 68 bytes and the
maximum value is 65535 bytes. If the interface MTU is not set on the DHCP
server, DHCP clients will keep the OS default setting of the interface MTU.
Classless static route (option 121) Each classless static route option may have multiple routes with the same
destination. Each route includes a destination subnet, subnet mask, and next
hop router. Note that [Link]/0 is an invalid subnet for a static route. For
information about classless static routes and option 121, refer to RFC 3442.
A minimal encoding sketch follows this table.
a Click Add.
b Enter the destination and the next hop router IP address.
In NSX 6.2.5 and later, if a DHCP pool is configured on an Edge Services
Gateway with both classless static routes and a default gateway, the default
gateway is added as a classless static route.
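As background on the classless static route option (option 121), RFC 3442 encodes each route as the destination prefix length, the significant octets of the destination, and the next-hop router address. A minimal Python encoding sketch with illustrative route values:

# Encode one RFC 3442 classless static route: destination 10.10.0.0/16 via 192.0.2.1.
def encode_route(destination, prefix_len, router):
    dest_octets = [int(o) for o in destination.split(".")]
    significant = dest_octets[: (prefix_len + 7) // 8]   # only the significant octets
    router_octets = [int(o) for o in router.split(".")]
    return bytes([prefix_len] + significant + router_octets)

print(encode_route("10.10.0.0", 16, "192.0.2.1").hex())   # 100a0ac0000201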
Prerequisites
Procedure
5 Click Start.
Results
Important It is a good practice to create a firewall rule to prevent malicious users from
introducing rogue DHCP servers. To do this, add a firewall rule that allows UDP traffic only on
ports 67 and 68 when the traffic is going to or from a valid DHCP server IP address. For details,
see Working with Firewall Rules.
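For example, assuming the valid DHCP server address is 192.0.2.5 (an illustrative value), the rules could look like the following. See Working with Firewall Rules for the exact fields.
n Allow rule: Source any, Destination 192.0.2.5 (or the reverse direction), Service DHCP (UDP ports 67 and 68), Action Allow.
n Block rule, placed below the allow rule: Source any, Destination any, Service DHCP (UDP ports 67 and 68), Action Block.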
What to do next
Procedure
Procedure
6 Click Add.
n Use VM NIC Binding: Select this option when you know the vNIC index of the VM. NSX
determines the MAC address from the vNIC index and binds the IP address of the VM to
the MAC address.
n Use MAC Binding: Select this option when you know the MAC address of the VM and want
to use it for static binding with the IP address.
n If you selected the Use VM NIC Binding option, select the interface to bind, the virtual
machine, and the vNIC index of the VM to bind to the IP address.
n If you selected the Use MAC Binding option, enter the MAC address of the VM that
you want to use for static binding.
Option Action
Host name Type the host name of the DHCP client virtual machine.
IP Address Enter the address to which to bind the MAC address of the selected virtual
machine.
Subnet Mask Specify the subnet mask. The subnet mask should be same as the subnet
mask of the Edge interface or the DHCP Relay, in case of distributed
router.
Default Gateway Enter the default gateway address. If you do not specify the default
gateway IP address, the internal interface of the NSX Edge instance is
taken as the default gateway.
Lease never expires Select to bind the address to the MAC address of the virtual machine
forever.
Lease Time If you did not select Lease never expires, select whether to lease the
address to the client for the default time (one day), or enter a value in
seconds.
Option Action
Auto configure DNS Select to use the DNS service configuration for the DHCP binding.
Primary Name Server If you did not select Auto Configure DNS, enter the Primary Nameserver
for the DNS service. You must enter the IP address of a DNS server for
hostname-to-IP address resolution.
Secondary Name Server If you did not select Auto Configure DNS, enter the Secondary Nameserver
for the DNS service. You must enter the IP address of a DNS server for
hostname-to-IP address resolution.
10 (Optional) Specify the DHCP options. For detailed information about configuring DHCP
options, see step 7 in Add a DHCP IP Pool .
Procedure
5 Select Bindings from the left panel and click the binding to edit.
DHCP configuration is applied on the logical router port and can list several DHCP servers.
Requests are sent to all listed servers. While relaying the DHCP request from the client, the relay
adds a Gateway IP Address to the request. The external DHCP server uses this gateway address
to match a pool and allocate an IP address for the request. The gateway address must belong to a
subnet of the NSX port on which the relay is running.
You can specify a different DHCP server for each logical switch and can configure multiple DHCP
servers on each logical router to provide support for multiple IP domains.
Note If the DHCP Offer contains an IP address that doesn't match a logical interface (LIF), the
DLR does not relay it back to the VM. The packet is dropped.
When configuring a pool or binding on the DHCP server, ensure that the subnet mask of the pool
or binding used for the relayed queries is the same as the subnet mask of the DHCP relay
interface. When a DLR acts as the DHCP relay between VMs and an Edge that provides the DHCP
service, the subnet mask must be provided in the API, and it must match the subnet mask
configured on the DLR gateway interface for the VMs.
(Figure: DHCP relay topology. Two logical switches attach to a distributed router, which relays DHCP requests to an NSX DHCP server X and a DHCP server Y.)
Note
n DHCP relay does not support overlapping IP address space (option 82).
n DHCP Relay and DHCP service cannot run on a port/vNic at the same time. If a relay agent is
configured on a port, a DHCP pool cannot be configured on the subnet(s) of this port.
Prerequisites
n DHCP relay does not support overlapping IP address space (option 82).
n DHCP Relay and DHCP service cannot run on a port/vNic at the same time. If a relay agent is
configured on a port, a DHCP pool cannot be configured on the subnet(s) of this port.
n If the DHCP Offer contains an IP address that doesn't match a logical interface (LIF), the DLR
does not relay it back to the VM. The packet is dropped.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > NSX Edges.
5 Add a DHCP Relay Server by using one of these methods, or by using a combination of these
methods.
Method Description
Specify Domain Names Enter a comma-separated list of domain names. For example,
[Link],[Link].
Ensure that you have manually added the domain IP addresses to the
firewall.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > NSX Edges.
The Gateway IP Address displays the primary IP address of the selected vNIC.
Procedure
Version Procedure
NSX 6.4.3 and earlier a Click Manage > Settings > Configuration.
b In the DNS Configuration pane, click Change.
7 Change the default cache size, if necessary. The default size is 16 MB.
8 To log DNS traffic, click Enable Logging, and select the log level. Default log level is info.
Security Group
You begin by creating a security group to define assets that you want to protect. Security groups
may be static (including specific virtual machines) or dynamic where membership may be defined
in one or more of the following ways:
n Security tags, IPset, MACset, or even other security groups. For example, you may
include a criteria to add all members tagged with the specified security tag (such as
[Link]) to the security group.
Note that security group membership changes constantly. For example, a virtual machine tagged
with the [Link] tag is moved into the Quarantine security group. When the virus is
cleaned and this tag is removed from the virtual machine, it again moves out of the Quarantine
security group.
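Conceptually, tag-driven membership behaves like a continuously re-evaluated set expression. The following Python sketch models the quarantine example above; the tag name and VM names are hypothetical, and this is a conceptual model, not the NSX API:

# Conceptual model of tag-driven dynamic security group membership.
# "ANTI_VIRUS.virusFound" is a hypothetical tag name; VM names are placeholders.
vm_tags = {
    "web-01": {"ANTI_VIRUS.virusFound"},
    "web-02": set(),
}

def quarantine_members(tags_by_vm):
    # The Quarantine group contains every VM carrying the hypothetical tag.
    return {vm for vm, tags in tags_by_vm.items() if "ANTI_VIRUS.virusFound" in tags}

print(quarantine_members(vm_tags))          # {'web-01'}

# When the virus is cleaned and the tag removed, the VM leaves the group.
vm_tags["web-01"].discard("ANTI_VIRUS.virusFound")
print(quarantine_members(vm_tags))          # set()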
Important If a VM’s VM-ID is regenerated due to move or copy, the security tags are not
propagated to the new VM-ID.
Security Policy
A security policy groups the following service configurations:
n Firewall rules: Rules that define the traffic to be allowed to, from, or within the security group. Applied to vNICs.
n Endpoint services: Third-party solution provider services, such as anti-virus or vulnerability management services. Applied to virtual machines.
n Network introspection services: Services that monitor your network, such as IPS. Applied to virtual machines.
During service deployment in NSX, the third party vendor selects the service category for the
service being deployed. A default service profile is created for each vendor template.
When third party vendor services are upgraded to NSX 6.1, default service profiles are created for
the vendor templates being upgraded. Existing service policies that include Guest Introspection
rules are updated to refer to the service profiles created during the upgrade.
You map a security policy (say SP1) to a security group (say SG1). The services configured for SP1
are applied to all virtual machines that are members of SG1.
Note When you have many security groups to which you need to attach the same security policy,
create an umbrella security group that includes all these child security groups, and apply the
common security policy to the umbrella security group. This ensures that the NSX distributed
firewall utilizes ESXi host memory efficiently.
If a virtual machine belongs to more than one security group, the services that are applied to the
virtual machine depends on the precedence of the security policy mapped to the security groups.
Service Composer profiles can be exported and imported as backups or for use in other
environments. This approach to managing network and security services helps you with actionable
and repeatable security policy management.
Let us walk through an example to show how Service Composer helps you protect your network
end-to-end. Say you have the following security policies defined in your environment:
n An initial state security policy that includes a vulnerability scanning service (InitStatePolicy)
n A remediation security policy that includes a network IPS service in addition to firewall rules
and an anti-virus service (RemPolicy)
Ensure that the RemPolicy has higher weight (precedence) than InitStatePolicy.
n An applications assets group that includes the business critical applications in your
environment (AssetGroup)
n A remediation security group defined by a tag that indicates the virtual machine is vulnerable
(VULNERABILITY_MGMT.[Link]=medium) named RemGroup
You now map the InitStatePolicy to AssetGroup to protect all business critical applications in your
environment. You also map RemPolicy to RemGroup to protect vulnerable virtual machines.
When you initiate a vulnerability scan, all virtual machines in AssetGroup are
scanned. If the scan identifies a virtual machine with a vulnerability, it applies the
VULNERABILITY_MGMT.[Link]=medium tag to the virtual machine.
Service Composer instantly adds this tagged virtual machine to RemGroup, where a network IPS
solution is already in place to protect this vulnerable virtual machine.
(Figure: virtual machines in the Business Critical Application security group that are tagged with VULNERABILITY_MGMT.[Link]=medium are added to the remediation security group.)
This topic will now take you through the steps required to consume the security services offered
by Service Composer.
Procedure
2 Global Settings
Prerequisites
If you are creating a security policy for use with RDSH, note that the Applied To field is not supported for rules for remote desktop access.
Procedure
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
Security groups for use with Identity Firewall for RDSH, must use security policies that are
marked Enable User Identity at Source when created. Security groups for use with Identity
Firewall for RDSH can only contain Active Directory (AD) groups, and all nested security
groups must also be AD groups.
4 Type a name and description for the security group and click Next.
5 On the Dynamic Membership page, define the criteria that an object must meet for it to be
added to the security group you are creating.
For example, you may include a criteria to add all members tagged with the specified security
tag (such as [Link]) to the security group.
Or you can add all virtual machines containing the name W2008 AND virtual machines that are
in the logical switch global_wire to the security group.
Note If you define a security group by virtual machines that have a certain security tag
applied to them, you can create a dynamic or conditional workflow. The moment the tag is
applied to a virtual machine, the virtual machine is automatically added to that security group.
6 Click Next.
7 On the Select objects to include page, select the object type from the drop-down.
Note that security groups for use in remote desktop sessions can only contain Directory
groups.
8 Select the object that you want to add to the include list. You can include the following objects
in a security group.
n Other security groups to nest within the security group you are creating.
n Cluster
n Logical switch
n Network
n Virtual App
n Datacenter
n IP sets
n AD groups
Note The AD configuration for NSX security groups is different from the AD configuration
for vSphere SSO. NSX AD group configuration is for end users accessing guest virtual
machines while vSphere SSO is for administrators using vSphere and NSX.
n MAC Sets
Note Service Composer allows the use of security groups that contain MAC Sets in policy
configurations; however, Service Composer fails to enforce rules for that specific MAC Set.
Service Composer works on Layer 3 and does not support Layer 2 constructs.
n Security tag
n vNIC
n Virtual Machine
n Resource Pool
When you add a resource to a security group, all associated resources are automatically
added. For example, when you select a virtual machine, the associated vNIC is automatically
added to the security group.
9 Click Next and double-click the objects that you want to exclude from the security group.
The objects selected here are always excluded from the security group even if they match the
dynamic criteria or are selected in the include list .
10 Click Finish.
Example
{Expression result (derived from step 5) + Inclusions (specified in step 8)} - Exclusions (specified in
step 9). This means that inclusion items are first added to the expression result, and exclusion items
are then subtracted from the combined result.
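A minimal Python sketch of that membership arithmetic, with placeholder VM names:

# Effective membership = (dynamic expression result + inclusions) - exclusions.
expression_result = {"vm-a", "vm-b", "vm-c"}   # from the dynamic criteria (step 5)
inclusions = {"vm-d"}                          # statically included objects (step 8)
exclusions = {"vm-b"}                          # statically excluded objects (step 9)

effective_members = (expression_result | inclusions) - exclusions
print(effective_members)                       # vm-a, vm-c, vm-d (order may vary)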
Global Settings
When Service Composer firewall rules have an applied to setting of distributed firewall, the rules
are applied to all clusters on which distributed firewall is installed. If the firewall rules are set to
apply to the policy's security groups, you have more granular control over the firewall rules, but
may need multiple security policies or firewall rules to get the desired result.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
u In NSX 6.4.1 and later, next to Global Firewall Settings, click the edit ( ) icon.
u In NSX 6.4.0, next to Global Settings: Firewall Rules Applied To, click Edit.
4 Select a default setting for Applied To and click OK. This value determines the vNICs on which
the firewall rule will be applied.
Option Description
Distributed Firewall Firewall rules are applied to all clusters on which Distributed Firewall is
installed.
Policy's Security Groups Firewall rules are applied to security groups on which the security policy is
applied.
The default Applied To setting can also be viewed and changed via the API. See the NSX API
Guide.
Note that when using RDSH firewall rules, the Applied To setting is Distributed Firewall.
Policy's Security Groups is not supported as the Applied To setting for RDSH rules.
For example, suppose a security policy contains the following firewall rule:
n Name: allow-ssh-from-web
n Source: web-servers
n Service: ssh
n Action: allow
If the firewall rule applies to Distributed Firewall, you will be able to ssh from a VM in the security
group web-servers to a VM in the security group app-servers.
If the firewall rule applies to Policy's Security Group, you will not be able to ssh, as the traffic will
be blocked from reaching the app servers. You will need to create an additional security policy to
allow ssh to the app servers, and apply this policy to the security group web-servers.
n Name: allow-ssh-to-app
n Destination: app-servers
n Service: ssh
n Action: allow
Synchronize firewall configuration allows users to re-sync the entire Service Composer
configuration with the firewall configuration. It re-creates the Service Composer related firewall
sections and rules on the firewall side. This re-sync applies only to firewall and network
introspection policy configurations. The operation creates all the policy sections, in order of
precedence, above the firewall default section.
Note This operation may take a long time to complete and should be triggered only when
necessary.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
Prerequisites
Ensure that:
n the required VMware built in services (such as Distributed Firewall, and Guest Introspection)
are installed.
n the required partner services have been registered with NSX Manager.
n the desired default applied to value is set for Service Composer firewall rules. See Edit Service
Composer Firewall Applied To Setting.
If you are creating a security policy framework for Identity Firewall for RDSH, note that the Applied To field is not supported for rules for remote desktop access.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
4 In the Create Security Policy or New Security Policy dialog box, type a name for the security
policy.
5 Type a description for the security policy. The description must not exceed 255 characters.
NSX assigns a default weight (highest weight +1000) to the policy. For example, if the highest
weight amongst the existing policy is 1200, the new policy is assigned a weight of 2200.
Security policies are applied according to their weight - a policy with the higher weight has
precedence over a policy with a lower weight.
6 Select Inherit security policy if you want the policy that you are creating to receive services
from another security policy. Select the parent policy.
All services from the parent policy are inherited by the new policy.
7 Click Next.
8 In the Guest Introspection Services page, click Add or the Add Guest Introspection Service
( ) icon.
a In the Add Guest Introspection Service dialog box, type a name and description for the
service.
When you inherit a security policy, you may choose to block a service from the parent
policy.
If you apply a service, you must select a service and service profile. If you block a service,
you must select the type of service to block.
d If you chose to apply the Guest Introspection service, select the service name.
The default service profile for the selected service is displayed, which includes information
about the service functionality types supported by the associated vendor template.
e In State, specify whether you want to enable the selected Guest Introspection service or
disable it.
You can add Guest Introspection services as placeholders for services to be enabled at a
later time. This is especially useful for cases where services need to be applied on-demand
(for example, new applications).
If you enforce a Guest Introspection service in a security policy, other policies that inherit
this security policy would require that this policy be applied before the other child policies.
If this service is not enforced, an inheritance selection would add the parent policy after the
child policies are applied.
g Click OK.
You can add additional Guest Introspection services by following the above steps. You can
manage the Guest Introspection services through the icons above the service table.
In NSX 6.4.0, you can export or copy the services on this page by clicking the icon on the
bottom right side of the Guest Introspection Services page.
9 Click Next.
10 On the Firewall page, you define firewall rules for the security group(s) to which this security
policy will be applied.
When creating a security policy for Identity Firewall for RDSH, Enable User Identity at Source
must be checked. Note that this disables the Enable Stateless Firewall option because the TCP
connection state is tracked for identifying the context. This flag cannot be changed while the
policy is being updated. Once a security policy is created with Enable User Identity at Source,
inheritance is not supported.
a Click the checkbox to enable the following optional parameters:
Option Description
Enable User Identity at Source When using Identity Firewall for RDSH, Enable User
Identity at Source must be checked. Note that this
disables the enable stateless firewall option because
the TCP connection state is tracked for identifying the
context.
Enable TCP Strict Enables you to set TCP strict for each firewall section.
Enable Stateless Firewall Enables stateless firewall for each firewall section.
c Type a name and description for the firewall rule you are adding.
d Select Allow, Block, or Reject to indicate whether the rule needs to allow, block, or reject
traffic to the selected destination.
e Select the source for the rule. By default, the rule applies to traffic coming from the
security groups to which this policy is applied. To change the default source, click
Select or Change and select the appropriate security groups.
Note Either the Source or Destination (or both) must be security groups to which this
policy is applied.
Say you create a rule with the default Source, specify the Destination as Payroll, and select
Negate Destination. You then apply this security policy to security group Engineering. This
would result in Engineering being able to access everything except for the Payroll server.
g Select the services and/or service groups to which the rule applies.
j Enter the text that you want to add in the Tag text box while adding or editing the firewall
rule.
k Click OK.
You can add additional firewall rules by following the above steps. You can manage the
firewall rules through the icons above the firewall table.
In NSX 6.4.0, you can export or copy the rules on this page by clicking the icon on the
bottom right side of the Firewall page.
The firewall rules you add here are displayed on the Firewall table. VMware recommends
that you do not edit Service Composer rules in the firewall table. If you must do so for an
emergency troubleshooting, you must re-synchronize Service Composer rules with firewall
rules as follows:
n In NSX 6.4.1 and later, select Synchronize on the Security Policies tab.
n In NSX 6.4.0, select the Synchronize Firewall Rules from the Actions menu on the
Security Policies tab.
11 Click Next.
The Network Introspection Services page displays NetX services that you have integrated with
your VMware virtual environment.
Option Description
Enable TCP Strict Enables you to set TCP strict for each firewall section.
Enable Stateless Firewall Enables stateless firewall for each firewall section.
a Enter a name and description for the service you are adding.
You can make additional selections based on the service you selected.
h Enter the text that you want to add in the Tag text box.
i Click OK.
You can add additional network introspection services by following the above steps. You can
manage the network introspection services through the icons above the service table.
In NSX 6.4.0, you can export or copy the services on this page by clicking the icon on the
bottom right side of the Network Introspection Service page.
Note Bindings created manually for the Service Profiles used in Service Composer policies
will be overwritten.
14 Click Finish.
The security policy is added to the policies table. You can click the policy name and select
the appropriate tab to view a summary of the services associated with the policy, view service
errors, or edit a service.
What to do next
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
3 Select a security policy, and click Apply or the Apply Security Policy ( ) icon.
4 Select the security group that you want to apply the policy to.
Security groups for use with Identity Firewall for RDSH, must use security policies that are
marked Enable User Identity at Source when created. Security groups for use with Identity
Firewall for RDSH can only contain Active Directory (AD) groups, and all nested security
groups must also be AD groups.
If you select a security group defined by virtual machines that have a certain security tag
applied to them, you can create a dynamic or conditional workflow. The moment the tag is
applied to a virtual machine, the virtual machine is automatically added to that security group.
Network Introspection rules and Endpoint rules associated with the policy will not take effect
for security groups containing IPSet and/or MacSet members.
5 (Optional) (In NSX 6.4.0 only) Click the Preview Service Status icon to see the services that
cannot be applied to the selected security group and the reason for the failure.
For example, the security group may include a virtual machine that belongs to a cluster on
which one of the policy services has not been installed. You must install that service on the
appropriate cluster for the security policy to work as intended.
6 Click OK.
Note In NSX 6.4.1 and later, the Service Composer > Canvas tab is removed.
This topic introduces Service Composer by walking you through a partially configured system so
that you can visualize the mappings between security groups and security policy objects at a high
level from the canvas view.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
Synchronization Status, displaying errors or warnings, and Firewall Publish Status, displaying
the date and time stamp of the last successful publishing of firewall rules, are shown at the top
of the screen.
All security groups within the selected NSX Manager (that are not contained within another
security group) are displayed along with the policies applied on them. The NSX Manager
drop-down lists all NSX Managers on which the currently logged in user has a role assigned.
Results
Each rectangular box in the canvas represents a security group and the icons within the box
represents security group members and details about the security policy mapped to the security
group.
A number next to each icon indicates the number of instances; for example, a 1 next to the security
policy icon indicates that one security policy is mapped to that security group.
Virtual machines that are currently part of the main security group as well as nested security groups. Click the
Errors tab to see virtual machines with service errors.
n You can create a new security policy by clicking the Create Security Policy ( ) icon. The newly created
security policy object is automatically mapped to the security group.
n Map additional security policies to the security group by clicking the Apply Security Policy ( ) icon.
Effective Endpoint services associated with the security policy mapped to the security group. Suppose you have
two policies applied to a security group and both have the same category Endpoint service configured. The
effective service count in this case will be 1 (since the second lower priority service is overridden).
Endpoint service failures, if any, are indicated by the alert icon. Clicking the icon displays the error.
Effective firewall rules associated with the security policy mapped to the security group.
Service failures, if any, are indicated by the alert icon. Clicking the icon displays the error.
Effective network introspection services associated with the security policy mapped to the security group.
Service failures, if any, are indicated by the alert icon. Clicking the icon displays the error.
Figure 18-4. Details displayed when you click an icon in the security group
You can search for security groups by name. For example, if you type PCI in the search field in the
top right corner of the canvas view, only the security groups with PCI in their names are displayed.
To see the security group hierarchy, click the Top Level ( ) icon at the top left of the window and
select the security group you want to display. If a security group contains nested security groups,
click to display the nested groups. The top bar displays the name of the parent security group
and the icons in the bar display the total number of security policies, endpoint services, firewall
services, and network introspection services applicable to the parent group. You can navigate
back up to the top level by clicking the Go up one level ( ) icon in the top left part of the
window.
You can zoom in and out of the canvas view smoothly by moving the zoom slider on the top right
corner of the window. The Navigator box shows a zoomed out view of the entire canvas. If the
canvas is much bigger than what fits on your screen, it will show a box around the area that is
actually visible and you can move it to change the section of the canvas that is being displayed.
What to do next
Now that we have seen how the mapping between security groups and security policies work,
you can begin creating security policies to define the security services you want to apply to your
security groups.
Procedure
1 Select the security policy that you want to apply to the security group.
3 Click Save.
Security tags are labels which can be associated with a Virtual Machine (VM). Numerous security
tags can be created to identify a specific workload. The matching criteria of a Security Group can
be a security tag, and a workload that is tagged can be automatically placed into a Security Group.
Adding or removing security tags to a VM can be done dynamically in response to various criteria
such as antivirus or vulnerability scans, and intrusion prevention systems. Tags can also be added
and removed manually by an administrator.
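When external systems add or remove tags dynamically, they typically do so through the NSX Manager REST API. The following Python sketch shows the general shape of such a call; the endpoint path, object IDs, and credentials are assumptions and must be verified against the NSX API Guide for your version:

# Illustrative only: attach an existing security tag to a VM through the
# NSX Manager API. The URL path, tag ID, and VM ID below are assumptions;
# confirm the exact endpoint in the NSX API Guide before use.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder address
TAG_ID = "securitytag-10"                         # placeholder tag object ID
VM_ID = "vm-123"                                  # placeholder vCenter VM ID

resp = requests.put(
    f"{NSX_MANAGER}/api/2.0/services/securitytags/tag/{TAG_ID}/vm/{VM_ID}",
    auth=("admin", "password"),                   # placeholder credentials
    verify=False,                                 # example only; validate certificates in production
)
print(resp.status_code)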
Important If a VM’s VM-ID is regenerated due to move or copy, the security tags are not
propagated to the new VM-ID.
In a cross-vCenter NSX environment, universal security tags are created on the primary NSX
manager and are marked for universal synchronization with secondary NSX managers. Universal
security tags can be assigned to VMs statically, based on unique ID selection.
Unique ID Selection
The unique ID selection criteria is used when assigning tags to Virtual Machines on active standby
deployments.
Unique ID is used by the NSX Manager when a Virtual Machine (VM) goes from standby to active
deployment. The unique ID can be based on VM instance UUID, VM BIOS UUID, VM name, or a
combination of these options. If the criteria changes (such as a VM name change) after universal
security tags have been created and attached to VMs, the security tag must be detached and
reattached to the VMs.
Procedure
1 In the vSphere Web Client, navigate to Home > Networking & Security > Installation and
Upgrade, click the Management tab.
2 Click the primary NSX Manager, then select Actions > Unique ID Selection Criteria.
n Use Virtual Machine instance UUID (recommended) - The VM instance UUID is unique
within a VC domain, however there are exceptions such as when deployments are made
through snapshots. If the VM instance UUID is not unique, use the VM BIOS UUID in
combination with the VM name.
n Use Virtual Machine BIOS UUID - The BIOS UUID is not guaranteed to be unique within a
VC domain, but it is always preserved in case of disaster. Use BIOS UUID in combination
with VM name.
n Use Virtual Machine Name - If all of the VM names in an environment are unique, then VM
name can be used to identify a VM across vCenters. Use VM name in combination with VM
BIOS UUID.
4 Click OK.
What to do next
Prerequisites
An antivirus scan must have been run, and a tag applied to the appropriate virtual machine.
Note Refer to the third party solution documentation for details of the tags applied by those
solutions.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
A list of tags applied in your environment is displayed along with details about the virtual
machines to which those tags have been applied. Note down the exact tag name if you plan on
adding a security group to include virtual machines with a specific tag.
3 Click the number in the VM Count column to view the virtual machines to which that tag in that
row has been applied.
Prerequisites
If creating a universal security tag in an active standby deployment scenario, first set the unique ID
selection criteria on the primary NSX Manager.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
4 (Optional) To create a universal security tag for use in cross-vCenter NSX environments:
n In NSX 6.4.1 and later, click the Universal Synchronization toggle button to On.
5 Type a name and description for the tag and click OK.
What to do next
Security tags can be used as the matching criteria in security groups. In a cross-vCenter
environment, security tags are synchronized between primary and secondary NSX managers.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
u In NSX 6.4.1 and later, select the required security tag and click Assign VM.
u In NSX 6.4.0, right-click a security tag and select Assign Security Tag.
The Assign Security Tag to Virtual Machine window appears, populated with available VMs.
4 Select one or more virtual machines to move them to the Selected Objects column.
5 Click OK.
The Security Tags tab appears with an updated VM count for the security tag.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
u In NSX 6.4.1 and later, select the required security tag and click Edit.
u In NSX 6.4.0, right-click a security tag and select Edit Security Tag.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
3 Select a security tag, and click Delete or the Delete Security Tag ( ) icon.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
NSX 6.4.1 and later The left navigation displays Summary, Firewall Rules,
Guest Introspection Services, Network Introspection
Services, and Child Policies .
You can edit, delete, or apply policy to the security
group.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
4 (Optional) (In NSX 6.4.0 only) Ensure that you are in the Monitor > Service Errors tab.
Clicking the link in the Status column takes you to the Service Deployment page where you
can correct service errors.
Procedure
4 Ensure that you are in the Monitor > Service Composer tab.
The following network and security services can be grouped into a security policy:
Multiple security policies may be applied to a virtual machine either because the security group
that contains the virtual machine is associated with multiple policies or because the virtual machine
is part of multiple security groups associated with different policies. If there is a conflict between
services grouped with each policy, the weight of the policy determines the services that will be
applied to the virtual machine. For example, say policy 1 blocks internet access and has a weight
value of 1000 while policy 2 allows internet access and has a weight value of 2000. In this
particular case, policy 2 has a higher weight and hence the virtual machine will be allowed internet
access.
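A minimal Python sketch of that precedence rule, using the weights from the example:

# The policy with the highest weight wins when services conflict.
policies = [
    {"name": "policy-1", "weight": 1000, "internet_access": "block"},
    {"name": "policy-2", "weight": 2000, "internet_access": "allow"},
]

effective = max(policies, key=lambda p: p["weight"])
print(effective["name"], effective["internet_access"])   # policy-2 allow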
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
4 In the Manage Priority dialog box, select the security policy that you want to change the
priority for and click Move Up ( ) or Move Down ( ).
Enter the required rank, and click the green check mark or OK.
Weight is recalculated according to the new rank.
Enter the required weight, and click the green check mark or OK.
7 Click OK.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
3 Select the security policy that you want to edit and click Edit or the Edit Security Policy ( )
icon.
4 In the Edit Security Policy dialog box, make the appropriate changes and click Finish.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
3 Select the security policy that you want to delete and click Delete or the Delete Security Policy
( ) icon.
In the Service Composer, you cannot directly export security groups. You must first ensure
that a security policy is assigned to a security group, and then export that security policy.
All the contents of the security policy, such as DFW rules, guest introspection rules, network
introspection rules, and the security groups that are bound to the security policy are exported.
When a container security group contains nested security groups, the nested security groups are
not exported. While exporting, you can add a prefix to the policy. The prefix gets applied to policy
name, policy actions name, and security group name.
When importing the configuration onto a different NSX Manager, you can specify a suffix. The
suffix gets applied to policy name, policy actions name, and, security group name. If a security
group or security policy with the same name exists on the NSX Manager where the import is
happening, the import of the security policy configuration fails.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
5 Type a name and description for the configuration that you are exporting.
6 If necessary, type a prefix to be added to the security policies and security groups that are
being exported.
If you specify a prefix, it is added to the target security policy names thus ensuring that they
have unique names.
7 Click Next.
8 On the Select Security Policies page, select the security policy that you want to export and
click Next.
9 The Preview Selection or Ready to complete page displays the security policies, endpoint
services, firewall rules, and network introspection services to be exported.
This page also displays the security groups on which the security policies are applied.
10 Click Finish.
11 Select the directory on your computer where you want to export the blueprint file and click
Save.
When importing the configuration, an empty security group is created. All the services, service
profiles, applications, and application groups must exist in the destination environment, otherwise
the import fails.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Security > Service
Composer.
3 Click More or Actions, and then click the Import Configuration icon.
4 Select the security policy configuration file that you want to import.
5 If necessary, type a suffix to be added to the security policies and security groups that are
being imported.
If you specify a suffix, it is added to the security policy names being imported thus ensuring
that they have unique names.
6 Click Next.
Service Composer verifies that all services referred to in the configuration are available in the
destination environment. If not, the Manage Missing Services page is displayed, where you
can map missing services to available target services.
The Ready to complete page displays the security policies, endpoint services, firewall rules,
and the network introspection services to be imported. This page also displays the security
groups on which the security policies are applied.
7 Click Finish.
The imported security policy configuration is added to the top of the security policy table
(above the existing policies) in the target NSX Manager. The original order of the imported
rules and security services in the security policy is preserved.
Our sample scenario shows how you can protect your desktops end to end.
(Figure: scenario workflow. Administrator tasks: create a security policy to scan desktops (DesktopPolicy), create a security policy for infected VMs (QuarantinePolicy), map DesktopPolicy to DesktopSecurityGroup, map QuarantinePolicy to QuarantineSecurityGroup, and run the partner solution scan. Automatic actions by Service Composer: the vulnerable VM is tagged, the tagged VM is instantly added to QuarantineSecurityGroup, and the VM in QuarantineSecurityGroup is protected with IPS.)
Prerequisites
We are aware that Symantec tags infected virtual machines with the [Link] tag.
Procedure
a Click the Security Policies tab and click the Add Security Policy icon.
d Change the weight to 51000. The policy precedence is set very high so as to ensure that it
is enforced above all other policies.
e Click Next.
f On the Add Endpoint Service page, click and fill in the following values.
Option Value
Name Desktop AV
g Click OK.
h Do not add any firewall or network introspection services and click Finish.
a Click the Security Policies tab and click the Add Security Policy icon.
e Click Next.
f On the Add Endpoint Service page, do not do anything and click Next.
g In Firewall, add three rules - one rule to block all outgoing traffic, the next rule to block all
traffic with groups, and the last rule to allow incoming traffic only from remediation tools.
4 Move QuarantinePolicy to the top of the security policy table to ensure that it is enforced
before all other policies.
c Click the Security Groups tab and click the Add Security Group icon.
g Review your selections on the Ready to Complete page and click Finish.
6 Create a Quarantine security group where the infected virtual machines are to be placed.
a Click the Security Groups tab and click the Add Security Group icon.
d On the Define membership Criteria page click and add the following criteria.
e Do not do anything on the Select objects to include or Select objects to exclude pages and
click Next.
f Review your selections on the Ready to Complete page and click Finish.
a On the Security Policies tab, ensure that the DesktopPolicy policy is selected.
b Click the Apply Security Policy ( ) icon and select the SG_Desktops group.
c Click OK.
This mapping ensures that all desktops (part of the DesktopSecurityGroup) are scanned
when an antivirus scan is triggered.
8 Navigate to the canvas view to confirm that QuarantineSecurityGroup does not include any
virtual machines yet.
The scan discovers infected virtual machines and tags them with the security
tag [Link]. The tagged virtual machines are instantly added to
QuarantineSecurityGroup. The QuarantinePolicy allows no traffic to and from the
infected systems.
Procedure
2 Create a security group for the first tier of the Share Point application - web servers.
c Click the Security Groups tab and click the Add Security Group icon.
f Do not do anything on the Define membership Criteria page and click Next.
g On the Select objects to include page, select the web server virtual machines.
h Do not do anything on the Select objects to exclude page and click Next.
i Review your selections on the Ready to Complete page and click Finish.
3 Now create a security group for your database and SharePoint servers, and name them
SG_Database and SG_Server_SharePoint, respectively. Include the appropriate objects in
each group.
4 Create a top level security group for your application tiers and name it SG_App_Group. Add
SG_Web, SG_Database, and SG_Server_SharePoint to this group.
a Click the Security Policies tab and click the Add Security Policy icon.
d Change the weight to 50000. The policy precedence is set very high to ensure that it is enforced above most other policies (with the exception of quarantine).
e Click Next.
f On the Endpoint Services page, click the Add icon and fill in the following values.
Option Value
g Do not add any firewall or network introspection services and click Finish.
7 Navigate to the canvas view to confirm that the SP_App has been mapped to
SG_App_Group.
b Click the number next to the icon to see that the SP_App is mapped.
a Click the Security Policies tab and then click the Export Blueprint ( ) icon.
c Click Next.
f Select the directory on your computer where you want to download the exported file and
click Save.
The security policy as well as all the security groups to which this policy has been applied (in
our case, the Application security group as well as the three security groups nested within it)
are exported.
9 In order to demonstrate how the exported policy works, delete the SP_App policy.
10 Now we will restore the Template_App_DevTest policy that we exported in step 7.
a Click Actions and then click the Import Service Configuration icon.
c Click Next.
d The Ready to complete page displays the security policies along with associated objects
(security groups on which these have been applied, as well as Endpoint services, firewall
rules, and network introspection services) to be imported.
e Click Finish.
The configuration and associated objects are imported to the vCenter inventory and are
visible in the canvas view.
Guest Introspection health status is conveyed by using alarms that show in red on the vCenter
Server console. In addition, more status information can be gathered by looking at the event logs.
Important Your environment must be correctly configured for Guest Introspection security:
n All hosts in a resource pool containing protected virtual machines must be prepared for Guest
Introspection so that virtual machines continue to be protected as they are vMotioned from
one ESXi host to another within the resource pool. In NSX 6.4.1 and later, virtual machine
hardware must be at v9.0 or above for Guest Introspection to support VM protection during
migration (vMotion) of VMs from one host to another.
n Virtual machines must have the Guest Introspection thin agent installed to be protected by the Guest Introspection security solution. Not all guest operating systems are supported. Virtual machines with unsupported operating systems are not protected by the security solution.
The Guest Introspection architecture diagram shows the GI SVM, the Partner SVM, guest VMs, and the GI ESXi module on the ESXi hypervisor. Its legend distinguishes the configuration, health monitoring, partner registration, VM-SVM, and partner configuration/status data flows, and defines VMCI as the VMware Virtual Machine Communication Interface and RMQ as the RabbitMQ message bus.
n The partner management console is responsible for registering the service (e.g. agentless anti-
virus) with NSX Data Center for vSphere, configuring and monitoring the deployed partner
security virtual machines (Partner SVM) and sending VM tagging operations messages to NSX
Manager.
n vCenter manages the ESX Agent Manager (EAM) which is responsible for deploying the
Partner SVM and Guest Introspection security virtual machine (GI-SVM) to hosts on clusters
that have the partner service configured.
n The NSX Manager is the central control for Guest Introspection and provides information to
EAM regarding which hosts require a Partner SVM and GI-SVM to be deployed, sends GI
configuration information to the GI SVM, receives GI health monitoring information from the
host and executes tagging commands received from the Partner Management Console.
On the host, the Partner SVM receives activity events and information from the GI components
through the EPSEC library, and performs security operations and analytics to detect potential
threats or vulnerabilities. The Partner SVM communicates these events to the Partner
Management Console to take NSX Data Center for vSphere actions, such as grouping and tagging.
The GI ESX module in the hypervisor acts like a switch to pass relevant events from the thin agents installed on VMs to the appropriate Partner SVM for analysis. The GI SVM uses configuration information received from NSX Manager to configure the GI ESX module appropriately as VMs are instantiated or moved, to generate Identity Firewall and Endpoint Monitoring context, and to send GI-related health information back to NSX Manager.
Is there a difference between an SVM and a GI USVM? SVMs refer to third-party (partner) service virtual machines, such as those from Trend Micro and McAfee. The USVM is the GI SVM.
What are the key characteristics of SVMs and GI SVMs that make them different from a regular VM? Guest Introspection offloads anti-virus and anti-malware agent processing to a dedicated secure virtual appliance. Because the secure virtual appliance (unlike a guest virtual machine) doesn't go offline, it can continuously update anti-virus signatures, thereby giving uninterrupted protection to the virtual machines on the host.
The Guest Introspection Universal Service Virtual Machine (GI USVM) provides a framework for
third-party anti-virus products to be run on guest virtual machines from the outside, removing the
need for anti-virus agents in every virtual machine. SVMs contain specific binaries and applications
added by the vendor of the SVM. The GI USVM vendor is NSX Data Center for vSphere.
Can any VM be deployed/managed as an SVM? No, SVMs are prebuilt and provided by the vendor.
Is there a public guide on SVM/USVM-related events? No, the SVM logs are for internal troubleshooting purposes.
Note You cannot migrate a Service VM (SVM) using vMotion/SvMotion. SVMs must remain on the host on which they were deployed for correct operation.
Prerequisites
The installation instructions that follow assume that you have the following system:
n A data center with supported versions of vCenter Server and ESXi installed on each host in the
cluster.
n Hosts in the cluster where you want to install Guest Introspection have been prepared for NSX.
See "Prepare Host Clusters for NSX" in the NSX Installation Guide. Guest Introspection cannot
be installed on standalone hosts. If you are deploying and managing Guest Introspection for
anti-virus offload capability only, you do not need to prepare the hosts for NSX, and the NSX
for vShield Endpoint license does not allow it.
n Ensure the NSX Manager and the prepared hosts that run Guest Introspection services are
linked to the same NTP server and that time is synchronized. Failure to do so might cause VMs
to be unprotected by anti-virus services, although the status of the cluster will be shown as
green for Guest Introspection and any third-party services.
If an NTP server is added, VMware recommends that you then redeploy Guest Introspection
and any third-party services.
n If your network contains vSphere 7.0 or later, ensure that the vCenter clusters do not use a
vSphere Lifecycle Manager (vLCM) image to manage ESXi host life-cycle operations. Guest
introspection service cannot be installed on vCenter clusters that use a vLCM image.
To verify whether a vLCM image is used to manage hosts in the cluster, log in to the vSphere
Client and go to Hosts and Clusters. In the navigation pane, click the cluster, and navigate to
Updates > Image. If a vLCM image is not used for the cluster, you must see the SetUp Image
button. If a vLCM image is used for the cluster, you can view the image details, such as ESXi
version, vendor add-ons, image compliance details, and so on.
If you want to assign an IP address to the Guest Introspection service virtual machine from an IP
pool, create the IP pool before installing Guest Introspection. See "Working with IP Pools" in the
NSX Administration Guide.
Caution Guest Introspection uses the 169.254.x.x subnet to assign IP addresses internally for
the GI service. If you assign the [Link] IP address to any VMkernel interface of an ESXi
host, the Guest Introspection installation will fail. The GI service uses this IP address for internal
communication.
Guest Introspection is not supported with vSphere Auto Deploy on stateless ESXi hosts.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Service Deployment.
2 Click Add.
3 In the Deploy Network and Security Services dialog box, select Guest Introspection.
4 In Specify schedule (at the bottom of the dialog box), select Deploy now to deploy Guest
Introspection immediately after it is installed or select a deployment date and time.
5 Click Next.
6 Select the datacenter and clusters where you want to install Guest Introspection, and click
Next.
7 On the Select storage and Management Network Page, select the datastore on which to
add the service virtual machines storage or select Specified on host. It is recommended that
you use shared datastores and networks instead of "specified on host" so that deployment
workflows are automated.
The selected datastore must be available on all hosts in the selected cluster.
If you selected Specified on host, complete the following substeps for each host in the cluster.
c In the left navigation pane, under Virtual Machines click Agent VMs, and then click Edit.
8 If you set datastore as Specified on host, you must set the network also as Specified on host.
If you selected Specified on host, follow the substeps in Step 7 to select a network on the host.
When you add a host (or multiple hosts) to the cluster, the datastore and network must be set
before each host is added to the cluster.
Select Use IP Pool to assign an IP address to the Guest Introspection service virtual machine from the selected IP pool.
10 Click Next and then click Finish on the Ready to complete page.
11 Monitor the deployment until the Installation Status column displays Succeeded.
In NSX 6.4.0 and later, the name of the GI SVM in vCenter Server displays the IP address of
the host that it has been deployed on.
12 If the Installation Status column displays Failed, click the icon next to Failed. All deployment
errors are displayed. Click Resolve to fix the errors. Sometimes, resolving the errors displays
additional errors. Take the required action and click Resolve again.
Caution In a network that contains vSphere 7.0 or later, after the Guest Introspection service
or any other third-party partner service is installed, you cannot use a vLCM image on the
vCenter clusters. If you try to use a vLCM image on the vCenter clusters, warning messages
are displayed in the vSphere Client to inform you that standalone VIBs are present on the
hosts.
n If you are using vSphere 6.0, see these instructions for installing VMware Tools: [Link].
n If you are using vSphere 6.5 or later, see these instructions for installing VMware Tools: [Link].
Windows virtual machines with the Guest Introspection drivers installed are automatically
protected whenever they are started up on an ESXi host that has the security solution installed.
Protected virtual machines retain the security protection through shut downs and restarts, and
even after a vMotion move to another ESXi host with the security solution installed.
For Linux instructions, see Install the Guest Introspection Thin Agent on Linux Virtual Machines.
Prerequisites
Ensure that the guest virtual machine has a supported version of Windows installed. The following
Windows operating systems are supported for NSX Guest Introspection:
n Windows 10
n Win2012 (64)
Procedure
1 Start the VMware Tools installation, following the instructions for your version of vSphere.
Select Custom install.
The options available will vary depending on the version of VMware Tools.
Driver: NSX File Introspection Driver and NSX Network Introspection Driver
Description: Select NSX File Introspection Driver to install vsepflt. Optionally select NSX Network Introspection Driver to install vnetflt (vnetWFP on Windows 10 or later).
3 In the drop-down menu next to the drivers you want to add, select This feature will be
installed on the local hard drive.
What to do next
Check whether the thin agent is running by using the fltmc command with administrative privileges. The Filter Name column in the output lists the thin agent as vsepflt.
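For instance, on a protected Windows guest the check might look like the following sketch (only the vsepflt entry is significant; the remaining columns and values are illustrative placeholders):

C:\> fltmc
Filter Name      Num Instances    Altitude    Frame
-----------      -------------    --------    -----
vsepflt          ...              ...         ...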
The GI thin agent is available as part of the VMware Tools operating system-specific packages
(OSPs). Installing VMware Tools is not required. GI thin agent installation and upgrade are not connected to NSX installation and upgrade. Also, an Enterprise or Security Administrator (non-NSX Administrator) can install the agent on guest VMs outside of NSX.
To install the GI thin agent on RHEL, CentOS, and SLES Linux systems, use the RPM package. To
install the GI thin agent on Ubuntu Linux systems, use the DEB package.
For Windows instructions, see Install the Guest Introspection Thin Agent on Windows Virtual
Machines.
Prerequisites
n Ensure that the guest virtual machine has a supported version of Linux installed:
Note Starting in NSX 6.4.6, support for Ubuntu 14.04 and RHEL 7.0–7.3 is deprecated. NSX
6.4.6 and later supports Ubuntu 16.04.5 and RHEL 7.4.
Procedure
u Based on your Linux operating system, perform the following steps with root privileges:
a Obtain and import the VMware packaging public keys using the following commands:
curl -O [Link]
[Link]
apt-get update
apt-get install vmware-nsx-gi-file
a Obtain and import the VMware packaging public keys using the following commands:
curl -O [Link]
[Link]
[vmware]
name = VMware
baseurl = [Link]
enabled = 1
gpgcheck = 1
metadata_expire = 86400
ui_repoid_vars = basearch
a Obtain and import the VMware packaging public keys using the following commands:
curl -O [Link]
[Link]
zypper ar -f "[Link]" VMware
a Obtain and import the VMware packaging public keys using the following commands:
curl -O [Link]
[Link]
rpm --import [Link]
[vmware]
name = VMware
baseurl = [Link]
enabled = 1
gpgcheck = 1
metadata_expire = 86400
ui_repoid_vars = basearch
What to do next
Check whether the thin agent is running by using the service vsepd status command with administrative privileges. The status should be running.
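A quick check from inside the guest, as a sketch (the exact output message varies by distribution and init system):

service vsepd status
# Expected: the command reports that the vsepd service is running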
Procedure
1 In the vSphere Web Client, click vCenter Inventory Lists, and then click Datacenters.
The Guest Introspection Health and Alarms page displays the health of the objects under the
data center you selected, and the active alarms. Health status changes are reflected within a
minute of the actual occurrence of the event that triggered the change.
n Failure to establish communication with SVM (when first such failure occurs).
Generated log messages have the following substrings near the beginning of each log message:
vf-AUDIT, vf-ERROR, vf-WARN, vf-INFO, vf-DEBUG.
Events can be displayed without a custom vSphere plug-in. See the vCenter Server Administration
Guide on events and alarms.
Events are the basis for alarms that are generated. Upon registering as a vCenter Server
extension, the NSX Manager defines the rules that create and remove alarms.
Common arguments for all events are the event time stamp and the NSX Manager event_id.
Caution Before you uninstall a Guest Introspection module from a cluster, you must uninstall
all third-party products that are using Guest Introspection from the hosts on that cluster. Use the
instructions from the solution provider.
There is a loss of protection for VMs in the host cluster. You must vMotion the VMs out of the
cluster before you uninstall.
1 Navigate to Networking & Security > Installation and Upgrade > Service Deployment.
Prerequisites
Guest Introspection for Linux is installed. You have root privileges on the Linux system.
Procedure
u To uninstall the package from an Ubuntu system, run the apt-get remove vmware-nsx-gi-file command.
u To uninstall the package from a RHEL 7 system, run the yum remove vmware-nsx-gi-file command.
u To uninstall the package from a SLES system, run the zypper remove vmware-nsx-gi-file command.
Results
The platform diagram shows NSX running on vSphere and exposing the NSX API, with guest and network introspection for software partner extensions, overlay transport for hardware partner extensions, and Edge service insertion, protecting any application on physical or virtual workloads connected to virtual networks.
There are various deployment methods for inserting third party services into NSX.
Vendor solutions that make use of this type of service insertion include Intrusion Prevention
Service (IPS)/Intrusion Detection Service (IDS), Firewall, Anti-Virus, File Integrity Monitoring (FIM),
and Vulnerability Management.
Vendor solutions that make use of this type of service insertion include ADC/Load Balancer
devices.
Procedure
1 Register the third-party service with NSX Manager on the vendor's console.
You need NSX login credentials to register the service. For more information, refer to the
vendor documentation.
Once deployed, the third-party service is displayed in the NSX Service Definitions window and
is ready to be used. The procedure for using the service in NSX depends on the type of service
inserted.
For example, you can enable a host-based firewall service by creating a security policy in
Service Composer or creating a firewall rule to redirect traffic to the service. See Consuming
Vendor Services through Service Composer or Redirecting Traffic to a Vendor Solution
through Logical Firewall. For information on using an Edge based service, see Using a Partner
Load Balancer.
Important Guest VMs protected by a partner service temporarily lose protection if migrated to
another cluster using vMotion. To avoid this, vMotion guest VMs only to hosts within the same
cluster.
Prerequisites
Ensure that:
n The vCenter clusters in vSphere 7.0 or later do not use a vSphere Lifecycle Manager (vLCM)
image to manage ESXi host life-cycle operations. Partner services cannot be installed on
vCenter clusters that use a vLCM image.
To verify whether a vLCM image is used to manage hosts in the cluster, log in to the vSphere
Client and go to Hosts and Clusters. In the navigation pane, click the cluster, and navigate to
Updates > Image. If a vLCM image is not used for the cluster, you must see the SetUp Image
button. If a vLCM image is used for the cluster, you can view the image details, such as ESXi
version, vendor add-ons, image compliance details, and so on.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Service Deployment.
2 Click Add.
3 In the Deploy Network and Security Services dialog box, select Guest Introspection.
4 In Specify schedule (at the bottom of the dialog box), select Deploy now to deploy Guest
Introspection immediately after it is installed or select a deployment date and time.
5 Click Next.
6 Select the datacenter and clusters where you want to install Guest Introspection, and click
Next.
7 On the Select storage and Management Network Page, select the datastore on which to
add the service virtual machines storage or select Specified on host. It is recommended that
you use shared datastores and networks instead of "specified on host" so that deployment
workflows are automated.
The selected datastore must be available on all hosts in the selected cluster.
If you selected Specified on host, complete the following substeps for each host in the cluster.
c In the left navigation pane, under Virtual Machines click Agent VMs, and then click Edit.
8 If you set datastore as Specified on host, you must set the network also as Specified on host.
If you selected Specified on host, follow the substeps in Step 7 to select a network on the host.
When you add a host (or multiple hosts) to the cluster, the datastore and network must be set
before each host is added to the cluster.
Select Use IP Pool to assign an IP address to the Guest Introspection service virtual machine from the selected IP pool.
10 Click Next and then click Finish on the Ready to complete page.
11 Monitor the deployment until the Installation Status column displays Succeeded.
In NSX 6.4.0 and later, the name of the GI SVM in vCenter Server displays the IP address of
the host that it has been deployed on.
12 If the Installation Status column displays Failed, click the icon next to Failed. All deployment
errors are displayed. Click Resolve to fix the errors. Sometimes, resolving the errors displays
additional errors. Take the required action and click Resolve again.
Caution In a network that contains vSphere 7.0 or later, after the Guest Introspection service
or any other third-party partner service is installed, you cannot use a vLCM image on the
vCenter clusters. If you try to use a vLCM image on the vCenter clusters, warning messages
are displayed in the vSphere Client to inform you that standalone VIBs are present on the
hosts.
What to do next
You can now consume the partner service through NSX UI or NSX API.
A security group is a set of vCenter objects such as clusters, virtual machines, vNICs, and logical
switches. A security policy is a set of Guest Introspection services, firewall rules, and network
introspection services.
When you map a security policy to a security group, redirection rules are created on the
appropriate third-party vendor service profile. As traffic flows from virtual machines belonging
to that security group, it is redirected to registered third-party vendor services that determine how
to process that traffic. For more information on Service Composer, see Using Service Composer.
Prerequisites
n The third party service must be registered with NSX Manager, and the service must be
deployed in NSX.
n If the default firewall rule action is set to Block, you must add a rule to allow the traffic to be
redirected.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Firewall.
3 In the section to which you want to add a rule, click the Add rule icon.
A new any any allow rule is added at the top of the section.
4 Point to the Name cell of the new rule, click the Edit icon, and type a name for the rule.
5 Specify the Source, Destination, and Service for the rule. For more information, see Add a
Firewall Rule
b In Redirect To, select the service profile and the logical switch or security group to which
you want to bind the service profile.
The service profile is applied to virtual machines connected to or contained in the selected
logical switch or security group.
c Indicate whether the redirected traffic is to be logged and type comments, if any.
d Click OK.
The selected service profile is displayed as a link in the Action column. Clicking the service
profile link displays the service profile bindings.
Prerequisites
The third-party load balancer must be registered with NSX Manager, and it must be deployed in
NSX.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > NSX Edges.
8 Complete the remaining fields and set up the load balancer by adding a service monitor,
server pool, application profile, application rules, and a virtual server. When adding a virtual
server, select the template provided by the vendor. For more information, see Setting Up Load
Balancing.
Results
Traffic for the specified Edge is load balanced by the third party vendor's management console.
There is a correct order of operations for removing any third-party software solution. If this sequence is not followed, and specifically if the third-party solution is uninstalled or deleted before it is unregistered with NSX Manager, the removal operation fails. See https://[Link]/kb/2126678 for instructions on how to resolve this.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Service Composer, and delete the rules (or security policies) that are redirecting traffic to the third-party solution.
2 Navigate to Service Definitions and double-click the name of the third-party solution.
4 Navigate to Installation and Upgrade > Service Deployments and delete the third-party
deployment.
Results
What to do next
Make notes of the configuration settings, and then remove NSX from the third-party solution. For
example, you may need to delete rules that reference other objects and then delete the objects.
NSX also supports Single Sign On (SSO), which enables NSX to authenticate users from other
identity services such as Active Directory, NIS, and LDAP.
User management in the vSphere Web Client is separate from user management in the CLI of any
NSX component.
n Security & Role Administrator role is available in NSX 6.4.5 and later.
Administrator
Update No No No R, W R, W No R, W
Acces Acces Acces Access
s s s
Debug No No No No No No No Access
Acces Acces Acces Access Acces Access
s s s s
Housekeeping R R R R, W R, W R R, W
tasks
Basic auth R R R R R R R, W
disable
Object access No No No R R R R
control Acces Acces Acces
s s s
Feature access No No No R R R R
control Acces Acces Acces
s s s
Edge
Advanced R R, W R, W R R R, W R, W
services
High availability R R R R, W R, W R R, W
vNic Interface R R, W R R, W R, W R R, W
configuration on
NSX Edge
DNS R R, W R R R, W R R, W
SSH SSH R R, W R R, W R, W R R, W
configuration on
NSX Edge
Auto plumbing R R, W R, W R R R, W R, W
Statistics R R R R R R R, W
NAT NAT R R, W R R R, W R R, W
configuration on
NSX Edge
DHCP R R, W R R R, W R R, W
Load balance R R, W R R R, W R R, W
L3 VPN L3 VPN R R, W R R R, W R R, W
Syslog Syslog R R, W R R, W R, W R R, W
configuration on
NSX Edge
Support Bundle R R, W R, W R, W R, W R, W R, W
(Down
load
access
)
Firewall Firewall R R, W R, W R R R, W R, W
configuration on
NSX Edge
Bridging R R, W R R R, W R R, W
Certificate R R, W R, W R R R, W R, W
Distributed
Firewall
IP Discovery IP discovery R R, W R, W R No R, W R, W
(DHCP/ARP when VMware Acces
Snooping) Tools are not s
running on
Guest VMs
[Link] R R No R, W No No R, W
Acces Acces Access
s s
Packet capture R R, W R, W R, W R, W R, W R, W
NameSpace
Config R R R R, W R,W R R, W
SpoofGuard
Config SpoofGuard R R, W R, W No No R, W R, W
publish in TOFU Access Acces
or Manual Mode s
Reports R R R R, W R R R, W
Registration Manage R No No R, W R, W No R, W
[Register, Acces Acces Access
Unregister, s s
Query
registered
solutions,
Activate]
Solutions
Scan R No R, W R, W R R, W R, W
scheduling Acces
s
Library
Host Host No No No R, W R, W No R, W
preparation preparation Acces Acces Acces Access
action on s s s
cluster
Install
App No R R R, W R, W R R, W
Acces
s
EPSEC No R R R, W R, W R R, W
Acces
s
DLP No R R R, W R, W R R, W
Acces
s
VDN
Provision R R R R, W R, W R R, W
ESX Agent
Manager
(EAM)
Service Insertion
Service R R, W R, W R, W R R, W R, W
Service profile R R R, W R, W R R, W R, W
Trust Store
Configuration Configuration of R R, W R R, W R, W R R, W
IP pool
IP allocation IP allocation R R, W R R, W R, W R R, W
and release
Security Fabric
Messaging
Messaging Messaging R R, W R, W R, W R, W R, W R, W
framework used
by NSX Edge
and Guest
Introspection to
communicate
with NSX
Manager
Configuration Select or R R R R, W R, W R R, W
deselect
Primary role for
NSX Manager,
and add or
remove
Secondary NSX
Manager
blueprint_sam
.featurelist
Security Policy
Configuration Configure R R, W R, W No No R, W R, W
security policy Access Acces
to create, s
update, edit, or
delete
Apply policy R R, W R, W No No R, W R, W
Access Acces
s
IP
Repository/IP
Discovery
Dashboard
Widget R R, W R R, W R R R, W
configuration
System R R, W R R, W R R R, W
configuration
Upgrade
Coordinator
Upgrade No No R R, W R R R, W
Acces Acces
s s
Upgrade Plan R R R R, W R R R, W
Tech Support
Bundle
Config Endpoint R, W R, W R, W R, W R, W R, W R, W
Token Based
Authentication
Invalidation No No No No No No R, W
Acces Acces Acces Access Acces Access
s s s s
Ops
Config R R R R, W R R R, W
You can configure lookup service on the NSX Manager and provide the SSO administrator
credentials to register NSX Management Service as an SSO user. Integrating the single sign-on
(SSO) service with NSX Data Center for vSphere improves the security of user authentication
for vCenter users and enables NSX Data Center for vSphere to authenticate users from other
identity services such as AD, NIS, and LDAP. With SSO, NSX Data Center for vSphere supports
authentication using authenticated Security Assertion Markup Language (SAML) tokens from a
trusted source using REST API calls. NSX Manager can also acquire authentication SAML tokens
for use with other VMware solutions.
NSX Data Center for vSphere caches group information for SSO users. Changes to group
memberships take up to 60 minutes to propagate from the identity provider (for example, active
directory) to NSX Data Center for vSphere.
Prerequisites
n To use SSO on NSX Manager, you must have vCenter Server 6.0 or later, and single sign-on
(SSO) authentication service must be installed on the vCenter Server. Note that this is for
embedded SSO. Instead, your deployment might use an external centralized SSO server.
For information about SSO services provided by vSphere, see the Platform Services Controller
Administration documentation.
Important You must configure the NSX Manager appliance to use the same SSO
configuration that is used on the associated vCenter Server system.
n NTP server must be specified so that the SSO server time and NSX Manager time are in sync.
Procedure
3 From the home page, click Manage Appliance Settings > NSX Management Service.
5 Enter the name or IP address of the host that has the lookup service.
The Lookup Service URL is displayed based on the specified host and port.
7 Enter the SSO Administrator user name and password, and click OK.
8 Check that the certificate thumbprint matches the certificate of the SSO server.
If you installed a CA-signed certificate on the CA server, you are presented with the
thumbprint of the CA-signed certificate. Otherwise, you are presented with a self-signed
certificate.
What to do next
A user can have only one role. The following table lists the permissions of each user role.
Role Permissions
Enterprise Administrator: Users in this role can perform all tasks related to deployment and configuration of NSX products and administration of this NSX Manager instance.
NSX Administrator: Users in this role can perform all tasks related to deployment and administration of this NSX Manager instance. For example, install virtual appliances, configure port groups.
Security Administrator: Users in this role can configure security compliance policies in addition to viewing the reporting and auditing information in the system. For example, define distributed firewall rules, configure NAT and load balancer services.
Auditor: Users in this role can only view system settings, auditing, events, and reporting information and cannot make any configuration changes.
Security Engineer (introduced in NSX Data Center for vSphere 6.4.2): Users in this role can perform all security tasks, such as configuring policies and firewall rules. Users have read access to some networking features, but no access to host preparation and user account management.
Network Engineer (introduced in NSX Data Center for vSphere 6.4.2): Users in this role can perform all networking tasks, such as routing, DHCP, and bridging. Users have read access to endpoint security features, but no access to other security features.
Security & Role Administrator (introduced in NSX Data Center for vSphere 6.4.5): Users in this role have all the feature permissions that a Security Engineer has, and they can also perform user management tasks.
When you assign a role to an SSO user, access is granted in the following interfaces:
n The NSX Manager appliance, including the API. This access is available only in NSX 6.4 or
later.
The Enterprise Administrator role gets the same access to the NSX Manager appliance and the
API as the NSX Manager admin user. The other NSX roles get read-only access to the NSX
Manager appliance and the API.
SSO users with any role other than the Enterprise Administrator role can access the NSX Manager
UI and run API requests in read-only mode. Users can access NSX APIs with the GET API request,
but they cannot run the PUT, POST, and DELETE API requests. In addition, these SSO users
cannot perform actions such as stop, configure, edit, and so on, in the NSX Manager UI.
You can manage NSX Manager appliance admin user only through CLI commands.
When you assign a role to an SSO user, access is granted in the following interfaces:
n The NSX Manager appliance, including the API. This access is available only in NSX 6.4 or
later.
The Enterprise Administrator role gets the same access to the NSX Manager appliance and the
API as the NSX Manager admin user. The other NSX roles get read-only access to the NSX
Manager appliance and the API.
Roles can be assigned individually or through a group membership. A user can be assigned an
NSX role individually, and this user can also be a member of a group that is assigned a different
NSX role. In such cases, the role that is assigned individually to the user is used for logging into
the NSX Manager appliance.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
For example:
Alias corp
Note When a group is assigned a role on the NSX Manager, any user from that group can log
in to the NSX Manager UI.
7 Click Next.
8 Select the role for the user and click Next. For more information about available roles, see
Managing User Rights.
9 Click Finish.
n AD domain: [Link]
Prerequisites: vCenter Server must be registered with NSX Manager, and SSO must be
configured. Note that SSO is required only for Groups.
b Navigate to Networking & Security > System > Users and Domains.
f Click Next.
f In Assigned Role, select Read-only, deselect Propagate to children, and click OK.
3 Log out of the vSphere Web Client and log in again as smoore@[Link].
Sally can perform NSX operations only. For example, install virtual appliances, create logical
switches, and other operations tasks.
Name G1
Name John
Belongs to group G1
Name G1
Name G2
Resources Datacenter1
Name Joseph
n Read, write (security administrator role) for Datacenter1 and its child resources
Name G1
Name Bob
Belongs to group G1
Resources Datacenter1
Procedure
1 Create a CLI user account. You can create a CLI user account for each NSX virtual appliance.
To create a CLI user account, perform the following steps:
a Log in to the vSphere Web Client, and select an NSX Manager virtual appliance.
c Log in to the CLI session using the Administrator account and password that you specified while installing NSX Manager. For example,
nsx-mgr> enable
Password:
nsx-mgr>
d Switch to Privileged mode from Basic mode using the enable command as follows:
nsx-mgr> enable
Password:
nsx-mgr#
e Switch to Configuration mode from Privileged mode using the configure terminal
command as follows:
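For example, a sketch of the command and the resulting configuration-mode prompt (the prompt style matches the configuration output shown later in this procedure):

nsx-mgr# configure terminal
nsx-mgr(config)#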
f Add a CLI user account using the user username password (hash | plaintext)
password command. For example,
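A minimal sketch, using the cliuser account that appears in the configuration output later in this procedure and a placeholder plaintext password:

nsx-mgr(config)# user cliuser password plaintext Placeholder-Passw0rd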
2 Now grant the web interface privilege, which enables the user to log in to the NSX Manager virtual appliance and run appliance management REST APIs, as follows:
b Allow the created CLI user to run the REST API calls using the user username
privilege web-interface command. For example:
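Continuing the same sketch for the cliuser account:

nsx-mgr(config)# user cliuser privilege web-interface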
Current configuration:
!
user cliuser
!
ntp server [Link]
!
ip name server [Link]
!
hostname nsxmgr-01a
!
interface mgmt
ip address [Link]/24
!
ip route [Link]/0 [Link]
!
web-manager
nsx-mgr(config)# exit
nsx-mgr# exit
The created user is not listed in the Networking & Security > System > Users and Domains >
Users tab. Also, no role is assigned to the user.
5 Assign the required role to the user using the REST API. You can assign auditor (Auditor),
security_admin (Security Administrator), or super_user (System Administrator) role as follows:
POST - [Link]
<accessControlEntry>
<role>auditor</role> <!-- Enter the required role -->
<resource>
<resourceId>globalroot-0</resourceId>
</resource>
</accessControlEntry>
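As a sketch only, the request body above can be sent with curl, keeping the role-assignment URL placeholder shown above and substituting your own NSX Manager credentials:

curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/xml' \
  -d '<accessControlEntry><role>auditor</role><resource><resourceId>globalroot-0</resourceId></resource></accessControlEntry>' \
  '[Link]'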
Results
What to do next
You can log in to vSphere Web Client using the credentials provided while creating the user.
For more information on CLI, refer to NSX Command Line Interface Reference.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Users and
Domains.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
If you delete a vCenter user account, only the role assignment for NSX Manager is deleted.
The user account on the vCenter Server is not deleted.
Note Duplicate names are allowed when you create a group with a universal scope.
You can provide duplicate names when you select the Mark this object for Universal
Synchronization option for creating the following groups:
n Security Group
You can create an IP address group either by manually entering the IP addresses or by importing
a .csv or .txt file that contains a comma-separated list of IP addresses. In addition, you can
export an existing IP address group to a .txt file that contains a comma-separated list of IP
addresses.
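For example, an imported .txt file might contain a comma-separated list such as the following (the addresses are illustrative only):

192.168.10.5,192.168.20.0/24,172.16.1.10-172.16.1.50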
Prerequisites
n If you plan to use grouping objects instead of IP addresses, enable an IP discovery method,
such as DHCP snooping or ARP snooping, or both. For more information, see IP Discovery for
Virtual Machines.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Navigate to IP Sets:
n In NSX 6.4.1 and later, ensure that you are in the IP Sets tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > IP Sets tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u You must select the primary NSX Manager if you want to manage universal IP address
groups.
Caution While entering IPv6 address ranges in the IP sets, ensure that you break the address
ranges into /64. Otherwise, the publishing of the firewall rules fails.
When inheritance is enabled, grouping objects created at the global scope are accessible from
derived scopes, such as datacenter, Edge, and so on.
u In NSX 6.4.1 and later, click the Universal Synchronization toggle button to On.
u In NSX 6.4.0, select Mark this object for Universal Synchronization .
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Navigate to IP Sets:
n In NSX 6.4.1 and later, ensure that you are in the IP Sets tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > IP Sets tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u You must select the primary NSX Manager if you want to manage universal IP address
groups.
4 Select the group that you want to edit, and click the Edit ( or ) icon.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Navigate to IP Sets:
n In NSX 6.4.1 and later, ensure that you are in the IP Sets tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > IP Sets tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u You must select the primary NSX Manager if you want to manage universal IP address
groups.
4 Select the group that you want to delete, and click the Delete ( or ) icon.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
n In NSX 6.4.1 and later, ensure that you are in the MAC Sets tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > MAC Sets tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u To manage universal MAC address groups, the primary NSX Manager must be selected.
When inheritance is enabled, grouping objects created at the global scope are accessible from
derived scopes, such as datacenter, Edge, and so on.
9 (Optional) Select Universal Synchronization or Mark this object for Universal Synchronization
to create a universal MAC address group.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
n In NSX 6.4.1 and later, ensure that you are in the MAC Sets tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > MAC Sets tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u To manage universal MAC address groups, the primary NSX Manager must be selected.
4 Select the group that you want to edit, and click the Edit ( or ) icon.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
n In NSX 6.4.1 and later, ensure that you are in the MAC Sets tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > MAC Sets tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u To manage universal MAC address groups, the primary NSX Manager must be selected.
4 Select the group that you want to delete, and click the Delete ( or ) icon.
Create an IP Pool
You can add IP address ranges to be included in the IP pool. Make sure that the IP pool does not
include the IP address range or IP addresses that are already used in the network.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Navigate to IP Pools:
n In NSX 6.4.1 and later, ensure that you are in the IP Pools tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > IP Pools tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
5 Type a name for the IP pool and type the default gateway and prefix length.
6 (Optional) Type the primary and secondary DNS and the DNS suffix.
7 Type the IP address ranges to be included in the pool and click Add or OK.
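For example, an IP pool for Guest Introspection service virtual machines might use the following illustrative values: name GI-SVM-Pool, gateway 192.168.110.1, prefix length 24, primary DNS 192.168.110.10, and static IP pool range 192.168.110.51-192.168.110.60.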
Edit an IP Pool
You can edit the name and IP address range of an IP pool. However, you cannot edit the gateway
and prefix length after an IP pool is used.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Navigate to IP Pools:
n In NSX 6.4.1 and later, ensure that you are in the IP Pools tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > IP Pools tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
Delete an IP Pool
You can delete the IP pool.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Navigate to IP Pools:
n In NSX 6.4.1 and later, ensure that you are in the IP Pools tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > IP Pools tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
4 Select the IP pool that you want to delete, and click the Delete ( or ) icon.
Security Groups are containers that can contain multiple object types including logical switch, vNIC, IPset, and Virtual Machine (VM). Security groups can have dynamic membership criteria based on security tags, VM name, or logical switch name. For example, all VMs that have the security tag "web" will be automatically added to a specific security group destined for Web servers. After creating a security group, a security policy is applied to that group.
Important If a VM’s VM-ID is regenerated due to move or copy, the security tags are not
propagated to the new VM-ID.
Security groups for use with Identity Firewall for RDSH, must use security policies that are marked
Enable User Identity at Source when created. Security groups for use with Identity Firewall for
RDSH can only contain Active Directory (AD) groups, and all nested security groups must also be
AD groups.
Security groups used in Identity Firewall can contain only AD directory groups. Nested groups can
be non-AD groups or other logical entities such as virtual machines.
In a cross-vCenter NSX environment, universal security groups are defined on the primary NSX
manager and are marked for universal synchronization with secondary NSX managers. Universal
security groups cannot have dynamic membership criteria defined unless they are marked for use
in an active standby deployment scenario.
In a cross-vCenter NSX environment with an active standby deployment scenario, Site Recovery Manager (SRM) creates a placeholder VM on the recovery site for every protected VM on the active site. The placeholder
VMs are not active, and stay in the standby mode. When the protected VM goes down, the
placeholder VMs on the recovery site are powered on and take over the tasks of the protected
VM. Users create distributed firewall rules with universal security groups containing universal
security tags on the active site. The NSX manager replicates the distributed firewall rule with the
universal security groups containing universal security tags on the placeholder VMs and when the
placeholder VMs are powered on the replicated firewall rules with the universal security groups
and universal security tags are enforced correctly.
Note
n Universal security groups created prior to 6.3 cannot be edited for use in active standby
deployments.
Table 22-1. Firewall Rule Behavior with RDSH and Non-RDSH Sections
Enable User Identity Security Group Identity Security Group (RDSH Any Security Group (Non-RDSH
(RDSH Section) Section) Section)
Source - SID based rules are Source - IP based rules Source - IP based rules
preemptively pushed to hypervisor.
Rule enforcement is on the first
packet.
Applied To with Identity based Security Group - Applied to all hosts User based Applied To
Applied To with Non-Identity based Security Group - User based Applied to User based Applied to
Universal security groups are used in two types of deployments: active cross-vCenter NSX
environments, and active standby cross-vCenter NSX environments, where one site is live at a
given time and the rest are on standby.
n Universal security groups in an active environment can contain the following included objects
only: security groups, IP sets, MAC sets. You cannot configure dynamic membership or
excluded objects.
n Universal security groups in an active standby environment can contain the following included
objects: security groups, IP sets, MAC sets, universal security tags. You can also configure
dynamic membership using VM Name only. You cannot configure excluded objects.
Note Powered-off VMs that are matched by dynamic criteria such as Computer OS Name and Computer Name are not included in security groups. Dynamic criteria are received by NSX only once, when the VM is powered on. After being powered on, the guest details are synced to NSX Manager and remain with the NSX Manager even if the VM is later powered off.
Note Universal security groups created prior to 6.3 cannot be edited for use in active standby
deployments.
Prerequisites
If you are creating a security group based on Active Directory group objects, ensure that one
or more domains have been registered with NSX Manager. NSX Manager gets group and user
information as well as the relationship between them from each domain that it is registered with.
See Register a Windows Domain with NSX Manager.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
n In NSX 6.4.1 and later, ensure that you are in the Security Groups tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > Security Group tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u To manage universal security groups, the primary NSX Manager must be selected.
6 (Optional) If you are creating a universal security group, select Universal Synchronization or
Mark this object for universal synchronization.
7 (Optional) If you are creating a universal security group for use in an active standby
deployment, select both Universal Synchronization / Mark this object for universal
synchronization and Use for active standby deployments. Dynamic membership for universal
security groups with active standby deployment is based on virtual machine name
8 Click Next.
9 On the Dynamic Membership page, define the criteria that an object must meet for it to be
added to the security group you are creating. This gives you the ability to include virtual
machines by defining a filter criteria with a number of parameters supported to match the
search criteria.
Note If you are creating a universal security group, the Define dynamic membership step is
not available in active active deployments. It is available in active standby deployments, based
on virtual machine name only.
For example, you may include a criterion to add all virtual machines tagged with the specified
security tag (such as [Link]) to the security group. Security tags are case
sensitive.
Or you can add all virtual machines containing the name W2008 and virtual machines that are in
the logical switch global_wire to the security group.
10 Click Next.
11 On the Select objects to include page, select the tab for the resource you want to add and
select one or more resources to add to the security group. You can include the following
objects in a security group.
Table 22-2. Objects that can be included in security groups and universal security groups
Security groups can include:
n Other security groups to nest within the security group you are creating
n Cluster
n Logical Switch
n Network
n Virtual App
n Datacenter
n IP sets
n Directory Groups
Universal security groups can include:
n Other universal security groups to nest within the universal security group you are creating
n Universal IP sets
n Universal MAC sets
n Universal Security Tag (active standby deployments only)
The objects selected here are always included in the security group regardless of whether or
not they match the criteria that you defined earlier on the Dynamic Membership page.
When you add a resource to a security group, all associated resources are automatically
added. For example, when you select a virtual machine, the associated vNIC is automatically
added to the security group.
12 Click Next and select the objects that you want to exclude from the security group.
Note If you are creating a universal security group, the Select objects to exclude step is not
available.
The objects selected here are always excluded from the security group regardless of whether
or not they match the dynamic criteria.
13 Click Next.
The Ready to Complete window appears with a summary of the security group.
14 Click Finish.
Example
{Expression result (derived from Define dynamic membership) + Inclusions (specified in Select objects to include)} - Exclusions (specified in Select objects to exclude)
This means that inclusion items are first added to the expression result. Exclusion items are then
subtracted from the combined result.
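For example (with illustrative names): if the dynamic criteria match VM-1 and VM-2, VM-3 is explicitly included, and VM-2 is explicitly excluded, the effective membership is {VM-1, VM-2, VM-3} minus {VM-2}, that is, VM-1 and VM-3.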
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
n In NSX 6.4.1 and later, ensure that you are in the Security Groups tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > Security Group tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u To manage universal security groups, the primary NSX Manager must be selected.
4 Select the group that you want to edit and click the Edit ( or ) icon.
Note Universal security groups created prior to 6.3 cannot be edited for use in active standby
deployments.
5 In the Edit Security Group dialog box, make the appropriate changes.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
n In NSX 6.4.1 and later, ensure that you are in the Security Groups tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > Security Group tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u To manage universal security groups, the primary NSX Manager must be selected.
4 Select the group that you want to delete and click the Delete ( or ) icon.
Create a Service
You can use services in firewall rules. You can use pre-defined services, or create additional
services.
You might need to create a service because your application is not already defined, or because it uses a standard protocol with a non-default port. For example, a web application might accept HTTPS connections on port 8443 instead of the default port 443.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
2 Navigate to Services:
n In NSX 6.4.1 and later, ensure that you are in the Services tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > Service tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
7 Select a Layer.
8 Select a Protocol.
Depending on the protocol selected, you might be prompted to enter further information,
such as the destination port. Expand Advanced Options to enter a source port.
When inheritance is enabled, grouping objects created at the global scope are accessible from
derived scopes, such as datacenter, Edge, and so on.
10 (Optional) Select Universal Synchronization or Mark this object for Universal Synchronization
to create a universal service.
Results
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
n In NSX 6.4.1 and later, ensure that you are in the Service Groups tab.
n In NSX 6.4.0, ensure that you are in the Grouping Objects > Service Groups tab.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
u To manage universal service groups, the primary NSX Manager must be selected.
7 In Members, select the services or service groups that you want to add to the group.
8 (Optional) Select Universal Synchronization or Mark this object for Universal Synchronization
to create a universal service group.
When inheritance is enabled, grouping objects created at the global scope are accessible from
derived scopes, such as datacenter, Edge, and so on.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
3 Select a custom service or service group and click the Edit ( or ) icon.
Procedure
1 In the vSphere Web Client, click Networking & Security > Groups and Tags.
3 Select a custom service or service group and click the Delete ( or ) icon.
n Audit Logs
n System Events
The default license upon install is NSX for vShield Endpoint. This license enables use of NSX
for deploying and managing vShield Endpoint for anti-virus offload capability only, and has hard
enforcement to restrict usage of VXLAN, firewall, and Edge services, by blocking host preparation
and creation of NSX Edges. To use other features, including logical switches, logical routers,
Distributed Firewall, or NSX Edge, you must either purchase a license to use these features, or
request an evaluation license for short-term evaluation of the features.
n NSX for vSphere Standard, Advanced, and Enterprise license keys are effective in NSX 6.2.2
and later.
n NSX Data Center Standard, Professional, Advanced, Enterprise Plus, and Remote Office
Branch Office license keys are effective in NSX 6.4.1 and later.
For more information about the NSX Data Center, NSX, and NSX for vShield Endpoint licensing
editions and associated features, see [Link]
For more information about managing license in vSphere, see the vCenter Server and Host
Management documentation for your version of vSphere.
Prerequisites
Verify that all vCenter users who manage licenses are in the [Link]
group.
If you have multiple vCenter Server systems using the same Platform Services Controller, and
multiple NSX Managers registered with those vCenter Servers, you must combine the licenses
for the NSX Manager appliances into one license. See [Link] for
more information. See [Link] for information about combining
licenses.
Procedure
4 Select NSX for vSphere in the Solutions list. From the All Actions drop-down menu, select
Assign license....
5 Click the Add icon. Enter a license key and click Next. Add a name for the license, and
click Next. Click Finish to add the license.
7 (Optional) Click the View Features icon to view what features are enabled with this license.
2 Click Networking & Security. The Dashboard > Overview page appears as your default
homepage.
You can view existing system-defined widgets and the custom widgets.
Widget: Components
Controller Nodes:
n Controller node status
n Controller peer connectivity status
n Controller VM status (powered off/deleted)
n Controller disk latency alerts
External Components:
n vSphere ESX Agent Manager (EAM) service status
Firewall Publish Status: Number of hosts with a Firewall Publish status of Failed. The status is
red when any host does not successfully apply the published distributed firewall configuration.
Logical Switch Status: Number of logical switches with status Error or Warning. Flags when
the backing distributed virtual port group is deleted from vCenter Server.
Host Notification: Security alerts for hosts. You can see this alert when the hardware
address of a DHCP client is spoofed, which can indicate a DHCP denial-of-service (DoS) attack.
Edge Notifications: Highlights active alarms for certain services. It monitors the list of
critical events and tracks them until the problem is resolved. Alarms are auto-resolved when
the recovery event is reported, or when the edge is force synced, redeployed, or upgraded.
System Scale Dashboard: Shows a summary of warnings and alerts for scale. For a detailed
listing of the parameters and scale numbers, click Details to go to the System Scale Dashboard.
Custom Widget: You can view custom widgets created through the API.
Custom Widget
You can add custom widgets to the dashboard using the REST API. You can create up to five
custom widgets for your personal view of the dashboard.
You can also share custom widgets with other users by setting the shared parameter to true in
the widget configuration. The maximum number of shared widgets is 10; that is, the total number
of widgets shared by all users combined is limited to 10.
n To view information about a specific widget configuration, use the GET /api/2.0/services/
dashboard/ui-views/dashboard/widgetconfigurations/<widgetconfiguration-id> API.
n To delete a custom widget created earlier, use the DELETE /api/2.0/services/dashboard/
ui-views/dashboard/widgetconfigurations/<widgetconfiguration-id> API.
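For example, the widget configuration read might be issued with curl as follows. The NSX Manager
hostname, credentials, and widget configuration ID shown here are placeholders for illustration.

curl -k -u 'admin:password' \
  "https://nsx-manager/api/2.0/services/dashboard/ui-views/dashboard/widgetconfigurations/widgetconfiguration-1"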
If the current value exceeds a specified threshold percentage, a warning indicator is displayed to
alert you that the maximum supported scale is approaching. A red indicator shows that the
configuration maximum has been reached. The listing is sorted in descending order of the current
scale percentage, which ensures that the warning indicators are always displayed at the top.
Data is collected every hour to check whether any threshold value has been exceeded, and an
indicator is created when it has. The information is logged twice a day to the NSX Manager
technical support logs.
n A critical event when a parameter crosses the supported system scale configuration.
For more information on system scales, refer to NSX Recommended Configuration Maximums.
You can also view Communication Channel Health on the dashboard at Networking & Security >
Dashboard in the Fabric Status section.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Host Preparation.
2 Complete the following steps to view the health of the communication channels.
NSX 6.4.1 and later:
a Click a cluster in the left pane. In the right pane, the hosts in the selected cluster are
displayed in the Hosts table.
b In the Communications Channels column of the Hosts table, click the status icon.
NSX 6.4.0:
a Expand the cluster that contains the host for which you want to view the communication
channel health.
b Click the host, and then click Actions > Communication Channel Health.
A pop-up window displays the health status of the following communication channels:
For information about troubleshooting controller cluster problems, including deleting controllers
safely, refer to the NSX Controller section in the NSX Troubleshooting Guide.
Starting in NSX Data Center for vSphere 6.4.0, you can change the controller name using the API.
See the NSX API Guide for more information. Starting in NSX Data Center for vSphere 6.4.2, you
can change the controller name using the vSphere Web Client or vSphere Client.
When you create a controller node, you are prompted to provide a name. The controller node
is also assigned a controller ID, in the format controller-X, for example, controller-5. The
controller name and ID are used to configure identifiers for the controller in a few locations:
When you update the controller name, the following changes are made:
Note The hostname is used in controller log entries. If you change the controller hostname,
the log entries display the new hostname.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Controller Nodes.
2 Select a controller node, then click Actions > Change Controller Name.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Controller Nodes.
2 Select the controller for which you want to change the password.
Configure DNS, NTP, and Syslog for the NSX Controller Cluster
You can configure DNS, NTP, and syslog servers for the NSX Controller cluster. The same settings
apply to all NSX Controller nodes in the cluster.
Starting in NSX Data Center for vSphere 6.4.2, you can make these changes using the vSphere
Web Client or vSphere Client. In earlier 6.4 versions, you can change NTP, and syslog settings
using the API only. See the NSX API Guide for more information.
Important If you have an invalid configuration (for example, unreachable NTP servers), and then
deploy a controller, the controller node deployment fails. Verify and correct the configuration and
deploy the controller node again.
The NSX Controller cluster DNS settings override any DNS settings configured on the controller IP
pool.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Controller Nodes.
2 Select the NSX Manager that manages the NSX Controller nodes you want to modify.
4 (Optional) Enter a comma-separated list of DNS servers, and optionally DNS suffixes.
You can enter the NTP servers as IPv4 addresses or fully qualified domain names (FQDN). If an
FQDN is used, you must configure DNS so that the names can be resolved.
You can enter the syslog servers as IPv4 addresses or fully qualified domain names
(FQDN). If an FQDN is used, you must configure DNS so that the names can be resolved.
Important Selecting TCP or TLS might result in extra consumption of memory for
buffering that could negatively impact the performance of the controller. In extreme cases,
this can stop controller processing until the buffered network log calls are drained.
Note
n If the syslog server is using a self-signed certificate, paste the contents of the syslog
self-signed certificate in the Certificate text box.
n If the syslog server is using a CA-signed certificate, paste the contents of the
intermediary certificates and the root certificate. In the certificate chain, the order of
certificates must be as follows:
n Root CA certificate
-----BEGIN CERTIFICATE-----
Intermediate cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Root cert
-----END CERTIFICATE-----
The default port for TCP and UDP syslog is 514. For TLS syslog, the default port is 6514.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Controller Nodes.
2 Select the controller for which you want to generate technical support logs.
Caution Generate support logs for one controller at a time. An error might occur if you try to
generate support logs for multiple controllers simultaneously.
NSX starts collecting the technical support logs. It takes several minutes for the log files to be
generated. You can click Cancel at any time to cancel the process and generate the support
logs later.
The support logs are saved on your computer in a compressed file with the .tgz file extension.
Results
What to do next
If you want to upload diagnostic information for VMware technical support, refer to the Knowledge
Base article 2070100.
CDO mode avoids the connectivity issues during the following failure scenarios:
n WAN is down.
Note Starting in NSX 6.4.0, CDO is supported at NSX Manager level and not at the transport
zone level.
When CDO mode is enabled and a host detects a control plane failure, the host waits for the
configured time period and then enters CDO mode. You can configure the time period for
which you want the host to wait before entering CDO mode. By default, the wait time is five
minutes.
NSX Manager creates a special CDO logical switch (4999) on the controller. The VXLAN Network
Identifier (VNI) of the special CDO logical switch is unique from all other logical switches. When
CDO mode is enabled, one controller in the cluster is responsible for collecting all the VTEP
information reported from all transport nodes, and for replicating the updated VTEP information to
all other transport nodes. After CDO mode is detected, broadcast packets such as ARP, GARP, and
RARP are sent to the global VTEP list. This allows VMs to be vMotioned across vCenter Servers
without any data plane connectivity issues.
When you disable the CDO mode, NSX Manager removes the CDO logical switch from the
controller.
Prerequisites
n After upgrading to NSX 6.4, NSX Manager disables CDO mode for the existing transport
nodes. Use a pre-defined global VNI to configure vSphere Distributed Switch (VDS), if NSX
Manager had one or more CDO enabled transport nodes before upgrade.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Managers.
4 Click Yes.
Results
The CDO Mode column displays State as Enabled and Status as Successful.
NSX Manager creates a CDO logical switch on the controller. To view the details of the CDO logical
switch, log in to the NSX Manager CLI as an admin user, and run the following command:
For example:
Observe that the CDO logical switch with VNI 4999 is created.
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Managers.
4 Click Yes.
Results
The CDO Mode column displays State as Disabled and Status as Successful.
NSX Manager removes the CDO logical switch from the controller. To verify whether the CDO
logical switch is removed, log in to the NSX Manager CLI as an admin user, and run the following
command:
For example:
Observe that the CDO logical switch with VNI 4999 is not available on the controller.
For example: a vSphere Distributed Switch (VDS) is deleted from the NSX Manager database
when the last cluster associated with that VDS is unconfigured from VXLAN. NSX Manager then
removes the CDO-related opaque property from the VDS. If removal of the opaque property
fails for some reason, the property remains on the VDS. If you then disable CDO mode and
configure VXLAN on a host associated with that VDS, CDO is enabled on that host because the
opaque property is still present and the VNI is still present on the controller. In this case, use the
resync feature to apply the CDO disabled mode again; NSX Manager then attempts to remove the
opaque property from the VDS.
Prerequisites
Procedure
1 Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Managers.
2 Select the secondary NSX Manager, and click Actions > Enable CDO mode to enable the CDO
mode.
3 Now, select the required NSX Manager, and click Actions > Resync CDO configuration mode.
4 Click Yes.
In NSX 6.2.3 and later, the default VXLAN port is 4789, the standard port assigned by IANA.
Before NSX 6.2.3, the default VXLAN UDP port number was 8472.
Any new NSX installations will use UDP port 4789 for VXLAN.
If you upgrade from NSX 6.2.2 or earlier to NSX 6.2.3 or later, and your installation used the old
default (8472), or a custom port number (for example, 8888) before the upgrade, that port will
continue to be used after the upgrade unless you take steps to change it.
If your upgraded installation uses or will use hardware VTEP gateways (ToR gateways), you must
switch to VXLAN port 4789.
Cross-vCenter NSX does not require that you use 4789 for the VXLAN port; however, all hosts
in a cross-vCenter NSX environment must be configured to use the same VXLAN port. If you
switch to port 4789, you ensure that any new NSX installations added to the cross-vCenter
NSX environment use the same port as the existing NSX deployments.
Changing the VXLAN port is a three-phase process and does not interrupt VXLAN traffic.
1 NSX Manager configures all hosts to listen for VXLAN traffic on both the old and new ports.
Hosts continue to send VXLAN traffic on the old port.
2 NSX Manager configures all hosts to send traffic on the new port.
3 NSX Manager configures all hosts to stop listening on the old port; all traffic is then sent and
received on the new port.
In a cross-vCenter NSX environment you must initiate the port change on the primary NSX
Manager. For each stage, the configuration changes are made on all hosts in the cross-vCenter
NSX environment before proceeding to the next stage.
Prerequisites
n Verify that the port you want to use for VXLAN is not blocked by a firewall.
n Verify that host preparation is not running at the same time as the VXLAN port change.
Procedure
u In NSX 6.4.1 and later, navigate to Networking & Security > Installation and Upgrade >
Logical Network Settings > VXLAN Settings.
u In NSX 6.4.0, navigate to Networking & Security > Installation and Upgrade > Logical
Network Preparation > VXLAN Transport.
2 Next to VXLAN Port, click Edit or Change. Enter the port you want to switch to. 4789 is the
port assigned by IANA for VXLAN.
It takes a short time for the port change to propagate to all hosts.
3 (Optional) Check the progress of the port change with the GET /api/2.0/vdn/config/
vxlan/udp/port/taskStatus API request.
GET [Link]
...
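As a sketch, the same status check can be issued with curl. The NSX Manager hostname and
credentials shown here are placeholders.

curl -k -u 'admin:password' \
  "https://nsx-manager/api/2.0/vdn/config/vxlan/udp/port/taskStatus"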
Details regarding the data collected through CEIP and the purposes for which it is used by
VMware are set forth at the Trust & Assurance Center at [Link]
trustvmware/[Link].
To join or leave the CEIP for NSX, or edit program settings, see Edit the Customer Experience
Improvement Program Option.
Prerequisites
n Verify that the NSX Manager is connected and can sync with vCenter Server.
Procedure
1 In the vSphere Web Client, navigate to the Customer Experience Improvement Program
settings.
n In NSX 6.4.6 and later, click Networking & Security > About NSX.
n In NSX 6.4.5 and earlier, click Networking & Security > NSX Home > Summary.
n In NSX 6.4.5 and earlier, click the Join the VMware Customer Experience Improvement
Program check box.
For information on configuring a syslog server for hosts managed by a vCenter Server, see the
appropriate version of vSphere documentation at [Link]
Note Syslog or jump servers used to collect logs and access an NSX Distributed Logical Router
(DLR) Control VM can't be on the logical switch that is directly attached to that DLR's logical
interfaces.
Component: Description
ESXi Logs: These logs are collected as part of the VM support bundle generated from vCenter
Server. For more information on ESXi log files, refer to the vSphere documentation.
NSX Edge Logs: Use the show log [follow | reverse] command in the NSX Edge CLI, or
download the Technical Support Log bundle via the NSX Edge UI.
NSX Manager Logs: Use the show log command in the NSX Manager CLI, or download the
Technical Support Log bundle via the NSX Manager Virtual Appliance UI.
Routing Logs: See the NSX Logging and System Events Guide.
Guest Introspection Logs: See the NSX Logging and System Events Guide.
NSX Manager
To specify a syslog server, see Configure a Syslog Server for NSX Manager.
To download technical support logs, see Download Technical Support Logs for NSX.
NSX Edge
To specify a syslog server, see Configure Syslog Servers for NSX Edge.
To download technical support logs, see Download Tech Support Logs for NSX Edge.
NSX Controller
To specify a syslog server, see Configure DNS, NTP, and Syslog for the NSX Controller Cluster.
To download technical support logs, see Download Technical Support Logs for NSX Controller.
Firewall
For more details, refer to Firewall Logs.
Audit Logs
Audit logs for operations tracked by a ticket include the ticket ID. With the NSX ticket logger
feature, you can track the changes you make with a ticket ID.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Events.
2 Click the Manage tab, and then click NSX Ticket Logger.
The NSX Ticket Logging pane is displayed at the right side of the vSphere Web Client window.
Audit logs for the operations that you perform in the current UI session include the ticket ID in
the Operation Tags column.
If multiple vCenter Servers are being managed by the vSphere Web Client, the ticket ID is used
for logging on all applicable NSX Managers.
What to do next
Ticket logging is session based. If ticket logging is on and you log out, or if the session is lost,
ticket logging is turned off by default when you log in to the UI again. When you complete the
operations for a ticket, turn logging off by repeating steps 2 and 3 and clicking Turn Off.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Events.
3 If multiple IP addresses are available in the NSX Manager drop-down menu, select an IP
address, or keep the default selection.
The audit log details are displayed in the Audit Logs tab.
4 When details are available for an audit log, the text in the Operation column for that log is
clickable. To view details of an audit log, click the text in the Operation column.
5 In the Audit Log Change Details, select Changed Rows to display only those properties whose
values have changed for this audit log operation.
System Events
System events are events that are related to NSX operations. They are raised to detail every
operational event. Events might relate to basic operation (Informational) or to a critical error
(Critical).
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Events.
You can click the arrows in the column headers to sort events, or use the Filter text box to
filter events.
Alarms
Alarms are notifications that are activated in response to an event, a set of conditions, or the
state of an object. Alarms, along with other alerts, are displayed on the NSX Dashboard and other
screens on the vSphere Web Client UI.
You can use the GET api/2.0/services/systemalarms API to view alarms on NSX objects.
n An alarm that corresponds to a system event has an associated resolver that attempts
to resolve the issue that triggered the alarm. This approach is designed for network and
security fabric deployment (for example, EAM, Message Bus, Deployment Plug-In), and is
also supported by Service Composer. These alarms use the event code as the alarm code. For
more details, refer to the NSX Logging and System Events document.
n Edge notifications alarms are structured as a triggering and resolving alarm pair. This method
is supported by several Edge functions, including IPSec VPN, load balancer, high availability,
health check, edge file system, and resource reservation. These alarms use a unique alarm
code which is not the same as the event code. For more details, refer to NSX Logging and
System Events document.
Generally, an alarm is automatically deleted by the system when the error condition is rectified.
Some alarms are not auto-cleared on a configuration update; once the issue is resolved, you have
to clear those alarms manually.
Here is an example of the API that you can use to clear the alarms.
You can get alarms for a specific source, for example, cluster, host, resource pool, security group,
or NSX Edge. View alarms for a source by sourceId:
GET [Link]
POST [Link]
You can view NSX alarms, including Message Bus, Deployment Plug-In, Service Composer, and
Edge alarms:
GET [Link]
GET [Link]
POST [Link]
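For example, a minimal curl request to list all system alarms might look like the following. The NSX
Manager hostname and credentials are placeholders.

curl -k -u 'admin:password' "https://nsx-manager/api/2.0/services/systemalarms"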
Format of an Alarm
You can view the format of an alarm through the API.
SNMP traps must have the SNMPv2c version. The traps must be associated with a management
information base (MIB) so that the SNMP receiver can process the traps with object identifiers
(OID).
By default, the SNMP trap mechanism is disabled. Enabling the SNMP trap only activates the
critical and high severity notifications so that the SNMP manager does not get inundated by a high
volume of notifications. An IP address or a host name defines the trap destination. For the host
name to work for the trap destination, the device must be set up to query a Domain Name System
(DNS) server.
When you enable the SNMP service, a coldStart trap with OID [Link].[Link].5.1 is sent out the
first time. A warmStart trap with OID [Link].[Link].5.2 is sent out later on each stop-start to the
configured SNMP receivers.
If the SNMP service remains enabled, a heartbeat trap vmwHbHeartbeat with OID
[Link].4.1.6876.[Link] is sent out every five minutes. When you disable the service, a
vmwNsxMSnmpDisabled trap with OID [Link].4.1.6876.[Link].0.1 is sent out. This process stops
the vmwHbHeartbeat trap from running and disables the service.
When you add, modify, or delete an SNMP receiver value, a warmStart trap
with OID [Link].[Link].5.2 and a vmwNsxMSnmpManagerConfigUpdated trap with OID
[Link].4.1.6876.[Link].0.2 are sent to the new or updated set of SNMP receivers.
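To confirm that these traps reach the destination, you can run any SNMPv2c-capable receiver on
the trap destination host. As one illustration only, with the Net-SNMP snmptrapd utility you might
accept and log traps for your community string and run the daemon in the foreground; the
community string public and the file path are placeholders.

# /etc/snmp/snmptrapd.conf (example entry): accept and log traps for this community
authCommunity log public

# Run snmptrapd in the foreground and log received traps to stdout
snmptrapd -f -Lo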
Prerequisites
n Familiarize yourself with the SNMP trap mechanism. See Working with SNMP Traps.
n Download and install the MIB module for the NSX Manager so that the SNMP receiver can
process the traps with OID. See [Link]
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Events.
2 Click the Manage tab. If multiple NSX Managers are available, select an IP address of an NSX
Manager from the NSX Manager drop-down menu.
Option Description
Group Notification: Predefined set of groups for some system events that is used to aggregate
the events that are raised. By default, this option is enabled.
For example, if a system event belongs to a group, the traps for these grouped events are
withheld. Every five minutes, a trap is sent out detailing the number of system events that have
been received from the NSX Manager. Sending fewer traps saves SNMP receiver resources.
5 Click OK.
Results
The SNMP service is enabled and traps are sent out to the receivers.
What to do next
Check whether the SNMP configuration works. See Verify SNMP Trap Configuration.
Prerequisites
Verify that you have SNMP configured. See Configure SNMP Settings.
Procedure
c Click OK.
A warmStart trap with OID [Link].[Link].5.2 is sent out to all the SNMP receivers.
a If the SNMP receiver does not receive the traps, verify that the SNMP receiver is running
on a configured port.
b Check the accuracy of the receiver details under the SNMP settings section.
d If the Heartbeat trap stops, check whether the SNMP service is disabled or test whether
the network connectivity between the NSX Manager and the SNMP receiver is working.
When the Module, SNMP OID, or SNMP trap enabled column value appears as --, those events
have not been allocated a trap OID, and no trap is sent out for them.
A system trap has several columns that list different aspects of a system event.
Option: Description
Severity: The level of an event can be informational, low, medium, major, critical, or high. By
default, when the SNMP service is enabled, traps are sent out only for critical and high severity
events to highlight the traps that require immediate attention.
SNMP OID: Represents the individual OID that is sent out when a system event is raised. Group
notification is enabled by default. When group notification is enabled, the events or traps under
a group show the OID of the group that the event or trap belongs to. For example, a group
notification OID categorized under the configuration group has the OID
[Link].4.1.6876.[Link].1.0.1.
SNMP trap enabled: Shows whether sending out the trap for this event is enabled or disabled.
You can toggle the icon to enable or disable an individual event or trap. When group notification
is enabled, you cannot toggle the trap enablement.
Prerequisites
Verify that the SNMP settings are available. See Configure SNMP Settings.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > System > Events.
2 Click the Manage tab, and then select an NSX Manager IP address.
Editing a trap enablement is not allowed when group notification is enabled. You can change
the enablement of traps that do not belong to a group.
5 Change the severity of the system event from the drop-down menu.
6 If you change the severity from Informational to critical, check the Enable as SNMP Trap
checkbox.
7 Click OK.
8 (Optional) Click the Enable icon or Disable icon in the header to enable or disable
sending a system trap.
9 (Optional) Click the Copy icon to copy one or more event rows to your clipboard.
Procedure
1 Open a Web browser window and type the IP address assigned to the NSX Manager. For
example, [Link]
The NSX Manager user interface opens in a web browser window using SSL.
3 Log in to the NSX Manager virtual appliance by using the user name admin and the password
you set during installation.
The following events are specific to the NSX Manager virtual appliance.
Local CLI: Run the show log follow command.
GUI: Not available.
Local CLI: Run the show process monitor, show system memory, or show filesystem command.
GUI: Not available.
Procedure
5 Click OK.
If your partner solution configurations reference the NSX Manager IP address, and you change the
IP address of NSX Manager, you must update your partner solution. See Update Partner Solutions
After NSX Manager IP Change.
If you change the network configuration of a secondary NSX Manager in a cross-vCenter NSX
environment, you must update the secondary NSX Manager configuration on the primary NSX
Manager. See Update Secondary NSX Manager on Primary NSX Manager.
Prerequisites
n Verify that the new IP address is added to DNS, and is resolvable from all related vCenter
Server, Platform Services Controller, and ESXi systems.
Procedure
2 From the home page, click Manage Appliance Settings > Network.
3 Click Edit in the General network settings pane and update the IPv4 or IPv6 configuration
sections.
4 Reboot the NSX Manager appliance. Click Actions and then Reboot Appliance.
5 After the NSX Manager appliance has rebooted, log in to the web interface, and navigate to
Home > View Summary. Verify that the NSX Management Service has status of RUNNING. In
a cross-vCenter NSX environment, also verify that the NSX Universal Synchronization Service
has status of RUNNING.
If you have problems connecting to the NSX Manager appliance web interface once the NSX
Manager appliance VM is running, your computer or browser might have cached the old IP
address. Close all browser windows, start your browser again, and clear the cache. Or, access
the appliance by using its new IP address instead of its hostname.
6 Log out of the vSphere Web Client and log back in.
It can take several minutes for NSX Manager to connect to the vCenter Server system after the
network configuration change and reboot. If you see an error message: "No NSX Managers
found", log out of the vSphere Web Client, wait a few minutes, and log back in again.
8 Click Host Preparation. Each host cluster displays Not Ready. Click Not Ready and then click
Resolve all.
The IP change causes the URL used to access VIBs to change. When you click Resolve All, the
URL is updated.
What to do next
If your partner solution configurations reference the NSX Manager IP address, update your
partner solution. See Update Partner Solutions After NSX Manager IP Change.
If you have changed the network configuration of a secondary NSX Manager, update the
secondary NSX Manager configuration on the primary NSX Manager. See Update Secondary NSX
Manager on Primary NSX Manager.
Prerequisites
n Verify that the NSX Manager IP address has been changed. See Change NSX Manager
Appliance IP Address.
n Consult the partner documentation for instructions on updating the NSX Manager IP address
in the partner solution configuration.
Procedure
1 If the partner solution configuration references the NSX Manager IP address, update the
partner solution with the new NSX Manager IP address.
4 If any of your partner services use Guest Introspection, on the Guest Introspection Installation
status columns, click Not Ready then click Resolve all.
5 On the partner solution Installation status columns, click Not Ready then click Resolve all.
Prerequisites
n Verify that you have changed the IP address on a secondary NSX Manager in a cross-vCenter
NSX environment. See Change NSX Manager Appliance IP Address.
Procedure
2 Click Networking & Security and then click Installation and Upgrade.
3 Click the Management tab. In the NSX Managers pane, click Actions > Update Secondary
NSX Manager. Enter the new IP address assigned to the secondary NSX Manager.
4 Verify that the thumbprint displayed is the correct thumbprint for the secondary NSX Manager.
Click OK.
Syslog data is useful for troubleshooting and reviewing data logged during installation and
configuration.
Procedure
2 From the home page, click Manage Appliance Settings > General.
4 Specify the IP address or hostname, port, and protocol of the syslog server.
For example:
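(The values below are hypothetical and shown only for illustration.)
Syslog Server: 192.0.2.50
Port: 514
Protocol: UDP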
5 Click OK.
Results
NSX Manager remote logging is enabled, and logs are stored in your syslog server. If you have
configured multiple syslog servers, logs are stored in all the configured syslog servers.
What to do next
n In a Cross-vCenter NSX environment, you should enable the FIPS mode on each NSX Manager
separately.
n If one of the NSX Managers is not configured for FIPS, you must still ensure that it uses a
secure communication method which complies with the FIPS standards.
n Both primary and secondary NSX Managers must be on the same TLS version for universal
synchronization to work correctly.
Important Changing FIPS mode reboots the NSX Manager virtual appliance.
Prerequisites
n Verify that any partner solutions are FIPS mode certified. See the VMware Compatibility Guide
at [Link]
n If you have upgraded from an earlier version of NSX, do not enable FIPS mode until the
upgrade to NSX 6.3.0 is complete. See Understand FIPS Mode and NSX Upgrade in the NSX
Upgrade Guide.
n Verify that the NSX Manager is NSX 6.3.0 or later.
n Verify that all host clusters running NSX workloads are prepared with NSX 6.3.0 or later.
n Verify that all NSX Edge appliances are version 6.3.0 or later, and that FIPS mode has been
enabled on the required NSX Edge appliances. See Change FIPS Mode on NSX Edge.
Procedure
5 To enable FIPS mode, select the Enable FIPS Mode check box.
6 For Server and Client, select the check boxes for the required TLS protocol version.
Note
n When FIPS mode is enabled, NSX Manager disables the TLS protocols that are not
compliant with the FIPS standards.
If you upgrade to NSX 6.4.0 or later, the TLS settings from before the upgrade remain unchanged.
7 Click OK.
Procedure
6 Click OK.
Procedure
6 Click OK.
Procedure
7 Click OK.
Procedure
4 Click Download.
5 After the log is ready, click Save to download the log to your desktop.
What to do next
You can open the log using a decompression utility by browsing for All Files in the directory where
you saved the file.
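Alternatively, on a Linux or macOS system you can extract the .tgz bundle from the command line;
for example (the bundle file name is a placeholder):

tar -xzf nsx-manager-support-bundle.tgz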
To obtain the NSX Manager certificate, you can use NSX Manager's built-in CSR generator or you
can use another tool such as OpenSSL.
A CSR generated using NSX Manager's built-in CSR generator cannot contain extended attributes
such as subject alternate name (SAN). If you wish to include extended attributes, you must use
another CSR generation tool. If you are using another tool such as OpenSSL to generate the CSR,
the process is 1) generate the CSR, 2) have it signed, and 3) proceed to the section Convert the
NSX Manager Certificate File to PKCS 12 Format.
This method is limited in that the CSR cannot contain extended attributes such as subject alternative
name (SAN). If you wish to include extended attributes, you must use another CSR generation
tool. If you are using another CSR generation tool, skip this procedure.
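As a sketch of the alternative approach, the following OpenSSL commands generate a private key
and a CSR that includes a SAN. The file names, subject values, and FQDN are placeholders, and the
-addext option assumes OpenSSL 1.1.1 or later; older versions require a configuration file to add the
SAN extension.

openssl genrsa -out nsx-manager.key 2048
openssl req -new -key nsx-manager.key -out nsx-manager.csr \
  -subj "/C=US/ST=CA/L=Palo Alto/O=Example/OU=IT/CN=nsx-manager.example.com" \
  -addext "subjectAltName=DNS:nsx-manager.example.com"

Submit the CSR to your CA for signing, and then continue with the PKCS 12 conversion described
later in this chapter.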
Procedure
Option Action
Key Size Select the key length used in the selected algorithm.
Common Name Type the IP address or fully qualified domain name (FQDN) of the NSX
Manager. VMware recommends that you enter the FQDN.
Organization Unit Enter the department in your company that is ordering the certificate.
City Name Enter the full name of the city in which your company resides.
State Name Enter the full name of the state in which your company resides.
Country Code Enter the two-letter code that represents your country. For example, the
United States is US.
6 Click OK.
Using this method, the private key never leaves the NSX Manager.
c Get the Signed Certificate and Root CA and any intermediary CA certificates in PEM
format.
d To convert CER/DER formatted certificates to PEM, use an OpenSSL conversion command
(see the example commands after these steps).
e Concatenate all the certificates (server, intermediary, and root certificates) in a text file.
f In the NSX Manager UI, click Import and browse to the text file with all of the certificates.
g Once the import is successful, the server certificate and all the CA certificates will be shown
on the SSL Certificates page.
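The following commands illustrate steps d and e; the certificate file names are placeholders.

openssl x509 -inform der -in certificate.cer -out certificate.pem
cat server.pem intermediate.pem root.pem > certificate-chain.txt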
What to do next
Prerequisites
n Verify that OpenSSL is installed on the system. You can download OpenSSL from [Link].
n Generate a public and private key pair. For example, run the following OpenSSL command:
openssl req -x509 -days [number of days] -newkey rsa:2048 -keyout [Link] -out my-
[Link]
Procedure
u After receiving the signed certificate from the authorized signer, run an OpenSSL command to
generate a PKCS 12 (.pfx or .p12) keystore file from the public certificate file and your private
key.
For example:
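One possible form of the command is shown below. The file names are placeholders, and you are
prompted for an export password that protects the keystore.

openssl pkcs12 -export -out nsx-manager.p12 -inkey nsx-manager.key -in nsx-manager-cert.pem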
Where:
n nsx-manager.p12 is the name of the generated output file after the conversion to PKCS 12
format.
What to do next
Prerequisites
When installing a certificate on NSX Manager, only the PKCS#12 keystore format is supported, and
it must contain a single private key and its corresponding signed certificate or certificate chain.
Procedure
6 Click Import.
Results
The NSX Manager backup contains all of the NSX Data Center for vSphere configuration, including
controllers, logical switching and routing entities, security, firewall rules, and everything else that
you configure within the NSX Manager UI or API. The vCenter database and related elements like
the virtual switches need to be backed up separately.
At a minimum, we recommend taking regular backups of NSX Manager and vCenter. Your backup
frequency and schedule might vary based on your business needs and operational procedures.
We recommend taking NSX backups frequently during times of frequent configuration changes.
NSX Manager backups can be taken on demand or on an hourly, daily, or weekly basis. The
backup checksum file is produced with the SHA256 algorithm.
n After Day Zero deployment and initial configuration of NSX Data Center for vSphere
components, such as after the creation of the NSX Controller cluster, logical switches, logical
routers, edge services gateways, security, and firewall policies.
To provide an entire system state at a given time to roll back to, we recommend synchronizing
NSX Data Center for vSphere component backups (such as NSX Manager) with your backup
schedule for other interacting components, such as vCenter, cloud management systems,
operational tools, and so on.
The backup file is saved to a remote FTP or SFTP location that NSX Manager can access.
NSX Manager data includes configuration, events, and audit log tables. Configuration tables are
included in every backup.
Restore is only supported on the same NSX Manager version as the backup version. For this
reason, it is important to create a backup file before and after performing an NSX upgrade, one
backup for the old version and another backup for the new version.
Procedure
3 To specify the backup location, click Change next to FTP Server Settings.
b From the Transfer Protocol drop-down menu, select either SFTP or FTP, based on what
the destination supports.
d Enter the user name and password required to log in to the backup system.
e In the Backup Directory text box, enter the absolute path for storing the backups.
Note If you do not provide a backup directory, backup is stored to the default directory
(home directory) of the FTP server.
To determine the absolute path, log in to the FTP server, navigate to the directory that you
want to use, and run the present working directory command (pwd). For example:
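A hypothetical session might look like this; the directory path is a placeholder.

$ pwd
/home/backupuser/nsx-backups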
This text is prepended to each backup filename to help you easily recognize the file on
the backup system. For example, if you enter ppdb, the resulting backup file is named
ppdbHH_MM_SS_YYYY_Mon_Day.
Note The number of files in the Backup Directory must be limited to 100. If the number of
files in the directory exceeds this limit, a warning message is displayed.
h Click OK.
For example:
Option: Example
Port: 21
Password: *****
a From the Backup Frequency drop-down menu, select Hourly, Daily, or Weekly. The Day
of Week, Hour of Day, and Minute drop-down menus are disabled based on the selected
frequency. For example, if you select Daily, the Day of Week drop-down menu is disabled
because this drop-down menu is not applicable to a daily frequency.
b For a weekly backup, select the day of the week the data should be backed up.
c For a weekly or daily backup, select the hour at which the backup should begin.
Option: Example
Hour of Day: 15
Minute: 45
6 To exclude logs and flow data from being backed up, click Change next to Exclude.
a Select the items that you want to exclude from the backup.
b Click OK.
7 Save your FTP server IP/hostname, credentials, directory details, and pass phrase. This
information is required to restore the backup.
Prerequisites
Before restoring NSX Manager data, we recommend reinstalling the NSX Manager appliance.
Running the restore operation on an existing NSX Manager appliance might work, too, but is not
supported. The assumption is that the existing NSX Manager has failed, and therefore a new NSX
Manager appliance is deployed.
The best practice is to take note of the current settings for the old NSX Manager appliance so that
they can be used to specify IP information and backup location information for the newly deployed
NSX Manager appliance.
Procedure
1 Take note of all settings on the existing NSX Manager appliance. Also, note down FTP server
settings.
The version must be the same as the backed up NSX Manager appliance.
5 In FTP Server Settings, click Change and add the FTP server settings.
The Host IP Address, User Name, Password, Backup Directory, Filename Prefix, and Pass
Phrase fields in the Backup Location screen must identify the location of the backup to be
restored.
Note If the backup folder does not appear in the Backup History section, verify the FTP
server settings. Check if you can connect to FTP server and view the backup folder.
6 In the Backup History section, select the required backup folder to restore, and click Restore.
Results
Caution After restoring an NSX Manager backup, you might need to take additional action to
ensure correct operation of NSX Edge appliances and logical switches. See Restore NSX Edges
and Resolve Out of Sync Errors on Logical Switches.
If you have an intact NSX Manager configuration, you can recreate an inaccessible or failed Edge
appliance VM by redeploying the NSX Edge. To redeploy an NSX Edge, select the NSX Edge, and
click Actions > Redeploy. See "Redeploy NSX Edge" in the NSX Administration Guide.
Caution After restoring an NSX Manager backup, you might need to take additional action to
ensure correct operation of NSX Edge appliances.
n Edge appliances created after last backup are not removed during restore. You must delete
the VM manually.
n Edge appliances deleted after the last backup are not restored unless redeployed.
n If both the configured and current locations of an NSX Edge appliance saved in the backup
no longer exist when the backup is restored, operations such as redeploy, migrate, and enable
or disable HA fail. You must edit the appliance configuration and provide valid location
information. Use PUT /api/4.0/edges/{edgeId}/appliances to edit the appliance
location configuration (resourcePoolId, datastoreId, hostId, and vmFolderId as necessary). See
"Working With NSX Edge Appliance Configuration" in the NSX API Guide.
If any of the following changes have occurred since the last NSX Manager backup, the restored
NSX Manager configuration and the configuration present on the NSX Edge appliance will differ.
You must Force Sync the NSX Edge to revert these changes on the appliance and ensure
correct operation of the NSX Edge. See "Force Sync NSX Edge with NSX Manager" in the NSX
Administration Guide.
n Changes made via Distributed Firewall for preRules for NSX Edge firewall.
If any of the following changes have occurred since the last NSX Manager backup, the restored
NSX Manager configuration and the configuration present on the NSX Edge appliance will differ.
You must Redeploy the NSX Edge to revert these changes on the appliance and ensure correct
operation of the NSX Edge. See "Redeploy NSX Edge" in the NSX Administration Guide.
n HA enabled or disabled
n port groups
n trunk ports
n fence parameters
n shaping policy
Attention After upgrading to NSX 6.4.4 or 6.4.5, the default MTU value for newly added
trunk interfaces on the edge is incorrectly set to 1500. This issue also occurs after you do a
fresh installation of 6.4.4 or 6.4.5. The issue is fixed in 6.4.6. However, to resolve this issue
in 6.4.4 or 6.4.5, you must manually change the default MTU value in all the trunk interfaces
of the edge to 1600. For more information, see the VMware knowledge base article at
[Link]/s/article/74878.
Procedure
3 If present, click the Out of sync link in the Status column to display error details.
4 Click Resolve to recreate missing backing port groups for the logical switch.
Export the vSphere Distributed Switch configuration to create a backup before preparing the
cluster for VXLAN. For detailed instructions about exporting a vSphere Distributed Switch
configuration, see [Link]
Back Up vCenter
To secure your NSX deployment, it is important to back up the vCenter database and take
snapshots of the VMs.
Refer to the vCenter documentation for your vCenter version for vCenter backup and restore
procedures and best practices.
n The vSphere Installation and Setup documentation for your version of vSphere
n [Link]
Flow Monitoring
Flow monitoring is a traffic analysis tool that provides a detailed view of the traffic to and from
protected virtual machines.
When flow monitoring is enabled, its output defines which machines are exchanging data and
over which application. This data includes the number of sessions and packets transmitted per
session. Session details include sources, destinations, applications, and ports being used. Session
details can be used to create firewall allow or block rules. You can view flow data for many
different protocol types, including TCP, UDP, ARP, ICMP, and so on. Flows can be excluded by
specifying filters. You can live monitor TCP and UDP connections to and from a selected vNIC.
Live flow monitoring provides visualization of flows as they traverse a specific vNIC, enabling quick
troubleshooting. Application context is also captured in live flow monitoring for flows that match
an L7 firewall rule.
Caution
n When flow monitoring is enabled, the Dashboard shows a small yellow warning icon to
indicate that the feature is turned on. Flow monitoring impacts performance, so turn it off
after you finish monitoring the flow data.
n A critical alarm is displayed when the flow monitoring count exceeds a predefined maximum
count (threshold value). This critical alarm does not impact your environment, and you can
safely ignore the alarm. In NSX 6.4.4 and earlier, the maximum flow monitoring count is set to
2 million. Starting in NSX 6.4.5, the maximum flow monitoring count is increased to 5 million.
Prerequisites
Flow monitoring data is only available for virtual machines in clusters that have the network
virtualization components installed and firewall enabled. See the NSX Data Center for vSphere
Installation Guide.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Flow Monitoring.
The page might take several seconds to load. The top of the page displays the percentage
of allowed traffic, traffic blocked by firewall rules, and traffic blocked by SpoofGuard.
The multiple line graph displays data flow for each service in your environment. When
you point to a service in the legend area, the plot for that service is highlighted.
n Top Flows displays the total incoming and outgoing traffic per service over the specified
time period based on the total bytes value (not based on sessions/packets). The top five
services are displayed. Blocked flows are not considered when calculating top flows.
n Top Destinations displays incoming traffic per destination over the specified time period.
The top five destinations are displayed.
n Top Sources displays outgoing traffic per source over the specified time period. The top
five sources are displayed.
Details about all traffic for the selected service are displayed. The Allowed Flows tab displays
the allowed traffic sessions and the Blocked Flows tab displays the blocked traffic.
5 Click an item in the table to display the rules that allowed or blocked that traffic flow.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Flow Monitoring.
3 Select the time period or type a new start and end date.
The maximum time span for which you can view traffic flow data is the previous two weeks.
4 Click OK.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Flow Monitoring.
Depending on the selected tab, rules that allowed or denied traffic for this service are
displayed.
n To edit a rule:
3 Click OK.
n To add a rule:
2 Complete the form to add a rule. For information on completing the firewall rule form,
see Add a Firewall Rule.
3 Click OK.
Viewing live flows can affect the performance of NSX Manager and the corresponding virtual
machine.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Flow Monitoring.
The page refreshes every 5 seconds. You can select a different frequency from the Refresh
Rate drop-down.
5 Click Stop when your debugging or troubleshooting is done to avoid affecting the
performance of NSX Manager or the selected virtual machine.
You can filter the data being displayed by specifying exclusion criteria. For example, you might
want to exclude a proxy server to avoid seeing duplicate flows. Or, if you are running a Nessus
scan on the virtual machines in your inventory, you might not want the scan flows to be
collected. You can configure IPFIX so that information for specific flows is exported directly
from a firewall to a flow collector. The flow monitoring graphs do not include the IPFIX flows;
these are displayed on the IPFIX collector's interface.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Flow Monitoring.
All firewall related flows are collected across your inventory except for the objects specified in
Exclusion Settings.
4 To specify filtering criteria, click Flow Exclusion and follow the steps below.
Service Excludes flows for the specified services and service groups.
1 Click the Add icon.
2 Select the appropriate services and/or service groups.
c Click Save.
5 To configure flow collection, click IPFix and follow the steps as described in IPFIX for
Distributed Firewall.
Configure IPFIX
IPFIX (Internet Protocol Flow Information Export) is an IETF protocol that defines the standard
of exporting flow information from an end device to a monitoring system. NSX supports IPFIX to
export IP flow information to a collector.
In the vSphere environment, vSphere Distributed Switch is the exporter and the collector is any
monitoring tool available from any networking vendor.
The IPFIX standard specifies how IP flow information is presented and transferred from an
exporter to a collector.
After you enable IPFIX on the vSphere Distributed Switch, it periodically sends messages to
the collector tool. The contents of these messages are defined using the templates. For more
information on templates, refer to IPFIX Templates.
Figure: IPFIX (DFW) export topology. Hypervisor vSwitches (HV1, HV2) on a logical switch span
the virtual overlay and physical underlay (TOR1 through TOR4), and flow records are exported over
the overlay tunnel to a flow collector.
You can enable flow export for IPFIX on a distributed firewall as follows:
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Flow Monitoring.
n In NSX 6.4.1 and later, navigate to Networking & Security > Tools > IPFIX.
n In NSX 6.4.0, navigate to Networking & Security > Tools > Flow Monitoring >
Configuration > IPFix.
5 Click Edit next to IPFIX Configuration, and then click Enable IPFIX Configuration.
6 In Observation DomainID, enter a 32-bit identifier that identifies the firewall exporter to the
flow collector. Valid range is 0–65535.
7 In Active Flow Export Timeout, type the time (in minutes) after which active flows are to be
exported to the flow collector. The default value is five. For example, if a flow is active for 30
minutes and the export timeout is five minutes, the flow is exported seven times during its
lifetime: once at creation, once at deletion, and five times during the active period.
8 Click Save.
9 In Collector IPs, click Add and enter the IP address and UDP port of the flow collector. Refer
to your NetFlow collector documentation to determine the port number.
10 Click OK.
IPFIX Templates
Distributed firewall implements stateful tracking of flows and the tracked flows go through a set
of state changes. You can use the IPFIX protocol to export data about the status of a flow. The
tracked events include flow creation, flow denial, flow update, and flow teardown.
Because IPFIX is template-based, exporters must declare the format of the data before exporting
any flow, so that the collector knows how to analyze incoming flow records. The format is declared
in templates, which are sets of <type, length> pairs that define the meaning and the length of each
field in a record, one after the other.
The following table describes the information elements that are used in the IPFIX templates of the
distributed firewall.
The following IPFIX templates for a distributed firewall are supported only for UDP payloads.
UDP IPV4 Template
The fields sent for this template are as follows:
IPFIX_TEMPLATE_FIELD(sourceMacAddress,6)
IPFIX_TEMPLATE_FIELD(destinationMacAddress,6)
IPFIX_TEMPLATE_FIELD(sourceIPv4Address,4)
IPFIX_TEMPLATE_FIELD(destinationIPv4Address,4)
IPFIX_TEMPLATE_FIELD(sourceTransportPort,2)
IPFIX_TEMPLATE_FIELD(destinationTransportPort,2)
IPFIX_TEMPLATE_FIELD(protocolIdentifier,1)
IPFIX_TEMPLATE_FIELD(icmpTypeIPv4,1)
IPFIX_TEMPLATE_FIELD(icmpCodeIPv4,1)
IPFIX_TEMPLATE_FIELD(ethernetType,2)
IPFIX_TEMPLATE_FIELD(flowStartSeconds,4)
IPFIX_TEMPLATE_FIELD(flowEndSeconds,4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount,8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount,8)
IPFIX_TEMPLATE_FIELD(firewallEvent,1)
IPFIX_TEMPLATE_FIELD(direction,1)
IPFIX_TEMPLATE_FIELD(ruleId,4)
IPFIX_TEMPLATE_FIELD(vmUUId,16)
IPFIX_TEMPLATE_FIELD(vnicIndex,4)
IPFIX_TEMPLATE_FIELD(sessionFlags,1) /* Introduced in 6.4.2 */
IPFIX_TEMPLATE_FIELD(flowDirection,1) /* Introduced in 6.4.2 */
IPFIX_TEMPLATE_FIELD(flowId,8) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(algControlFlowId,8) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(algType,1) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(algFlowType,1) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(averageLatency,4) /* Introduced in 6.4.4 */
UDP IPV6 Template
The fields sent for this template are as follows:
IPFIX_TEMPLATE_FIELD(sourceMacAddress,6)
IPFIX_TEMPLATE_FIELD(destinationMacAddress,6)
IPFIX_TEMPLATE_FIELD(sourceIPv6Address,16)
IPFIX_TEMPLATE_FIELD(destinationIPv6Address,16)
IPFIX_TEMPLATE_FIELD(sourceTransportPort,2)
IPFIX_TEMPLATE_FIELD(destinationTransportPort,2)
IPFIX_TEMPLATE_FIELD(protocolIdentifier,1)
IPFIX_TEMPLATE_FIELD(icmpTypeIPv6,1)
IPFIX_TEMPLATE_FIELD(icmpCodeIPv6,1)
IPFIX_TEMPLATE_FIELD(ethernetType,2)
IPFIX_TEMPLATE_FIELD(flowStartSeconds,4)
IPFIX_TEMPLATE_FIELD(flowEndSeconds,4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount,8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount,8)
IPFIX_TEMPLATE_FIELD(firewallEvent,1)
IPFIX_TEMPLATE_FIELD(direction,1)
IPFIX_TEMPLATE_FIELD(ruleId,4)
IPFIX_TEMPLATE_FIELD(vmUUId,16)
IPFIX_TEMPLATE_FIELD(vnicIndex,4)
IPFIX_TEMPLATE_FIELD(sessionFlags,1) /* Introduced in 6.4.2 */
IPFIX_TEMPLATE_FIELD(flowDirection,1) /* Introduced in 6.4.2 */
IPFIX_TEMPLATE_FIELD(flowId,8) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(algControlFlowId,8) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(algType,1) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(algFlowType,1) /* Introduced in 6.4.4 */
IPFIX_TEMPLATE_FIELD(averageLatency,4) /* Introduced in 6.4.4 */
Figure: IPFIX export on the vSphere Distributed Switch. Hypervisor vSwitches (HV1, HV2) on a
logical switch export IPFIX (vNic) and IPFIX (pNic) flow records across the virtual overlay and
physical underlay (TOR1 through TOR4), over the overlay tunnel, to a flow collector.
1 Configure the NetFlow collector on the vSphere Distributed Switch backing the NSX transport
zone (Logical Switch). For more information on how to configure NetFlow collector, see
"Configure the NetFlow Settings of a vSphere Distributed Switch" topic in the vSphere
Networking Guide.
2 Enable NetFlow monitoring on the distributed port group corresponding to the
Logical Switch. If the NSX transport zone spans multiple vSphere Distributed Switches (VDS),
repeat these steps for each VDS/distributed port group. For more information on how to
enable NetFlow monitoring, see "Enable or Disable NetFlow Monitoring on a Distributed Port
Group or Distributed Port" in the vSphere documentation.
In an NSX environment, the virtual machine data traffic on a logical switch traversing the NSX
uplink of ESXi is VXLAN encapsulated. When NetFlow is enabled on the host uplink, the IP flow
records are exported using a custom IPFIX flow-record template. The template includes the outer
VXLAN UDP/IP header information and the information of the inner encapsulated IP packet. Such
a flow record therefore provides visibility into the VTEP that encapsulates the packet (outer
header) and the details of the virtual machine that generated the inter-host traffic (inner header)
on an NSX logical switch (VXLAN).
For more details on the IPFIX templates for vSphere Distributed Switch, refer to IPFIX Templates.
IPFIX Templates
IPFIX templates provide visibility into VXLAN and non-VXLAN flows. The templates have
additional parameters that provide more information about the encapsulated traffic.
The templates are supported in vSphere Distributed Switch (the exporter). IPFIX support on vSphere
Distributed Switch provides the required visibility into virtual machine flows and VXLAN flows.
If you are using a third-party collector tool, you can use the additional information available in the
templates to correlate internal and external flows and the port connections.
The following table describes the information elements that are used in the IPFIX templates of the
logical switch.
IPv4 Template
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv4)
IPFIX_TEMPLATE_FIELD(sourceIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(destinationIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(sourceTransportPort, 2)
IPFIX_TEMPLATE_FIELD(destinationTransportPort, 2)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
IPFIX_TEMPLATE_FIELD(tcpFlags, 1)
IPFIX_TEMPLATE_FIELD(IPv4TOS, 1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
// Specify the Interface port- Uplink Port, Access port,N.A
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_PADDING(paddingOctets, 1)
IPFIX_TEMPLATE_END()
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv4_VXLAN)
IPFIX_TEMPLATE_FIELD(sourceIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(destinationIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(sourceTransportPort, 2)
IPFIX_TEMPLATE_FIELD(destinationTransportPort, 2)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
IPFIX_TEMPLATE_FIELD(tcpFlags, 1)
IPFIX_TEMPLATE_FIELD(IPv4TOS, 1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
IPFIX_VMW_TEMPLATE_FIELD(tenantSourceIPv4, 4)
IPFIX_VMW_TEMPLATE_FIELD(tenantDestIPv4, 4)
IPFIX_VMW_TEMPLATE_FIELD(tenantSourcePort, 2)
IPFIX_VMW_TEMPLATE_FIELD(tenantDestPort, 2)
IPFIX_VMW_TEMPLATE_FIELD(tenantProtocol, 1)
// Specify the Interface port - Uplink Port, Access Port, or NA.
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
// TUNNEL-GW or no.
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_END()
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv4_ICMP_VXLAN)
IPFIX_TEMPLATE_FIELD(sourceIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(destinationIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(sourceTransportPort, 2)
IPFIX_TEMPLATE_FIELD(destinationTransportPort, 2)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
IPFIX_TEMPLATE_FIELD(IPv4TOS, 1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
IPFIX_VMW_TEMPLATE_FIELD(tenantSourceIPv4, 4)
IPFIX_VMW_TEMPLATE_FIELD(tenantDestIPv4, 4)
IPFIX_VMW_TEMPLATE_FIELD(tenantProtocol, 1)
// Specify the Interface port - Uplink Port, Access Port, or NA.
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
// TUNNEL-GW or no.
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_PADDING(paddingOctets, 1)
IPFIX_TEMPLATE_END()
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv4_ICMP)
IPFIX_TEMPLATE_FIELD(sourceIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(destinationIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
IPFIX_TEMPLATE_FIELD(IPv4TOS, 1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
// Specify the Interface port - Uplink Port, Access Port, or NA.
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_PADDING(paddingOctets, 2)
IPFIX_TEMPLATE_END()
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv6_ICMP_VXLAN)
IPFIX_TEMPLATE_FIELD(sourceIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(destinationIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_VMW_TEMPLATE_FIELD(sourceTransportPort, 2)
IPFIX_VMW_TEMPLATE_FIELD(destinationTransportPort, 2)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(IPv6TOS, 1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
//VXLAN Specific
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
IPFIX_VMW_TEMPLATE_FIELD(tenantSourceIPv6, 16)
IPFIX_VMW_TEMPLATE_FIELD(tenantDestIPv6, 16)
IPFIX_VMW_TEMPLATE_FIELD(tenantProtocol, 1)
// Specify the Interface port- Uplink Port, Access Port, or NA.
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
// TUNNEL-GW or no.
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_PADDING(paddingOctets, 1)
IPFIX_TEMPLATE_END()
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv6_ICMP)
IPFIX_TEMPLATE_FIELD(sourceIPv6Address, 16)
IPFIX_TEMPLATE_FIELD(destinationIPv6Address, 16)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
IPFIX_TEMPLATE_FIELD(IPv6TOS, 1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
// Specify the Interface port - Uplink Port, Access Port, or NA.
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_PADDING(paddingOctets, 2)
IPFIX_TEMPLATE_END()
IPv6 Template
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv6)
IPFIX_TEMPLATE_FIELD(sourceIPv6Address, 16)
IPFIX_TEMPLATE_FIELD(destinationIPv6Address, 16)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(sourceTransportPort, 2)
IPFIX_TEMPLATE_FIELD(destinationTransportPort, 2)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
IPFIX_TEMPLATE_FIELD(tcpFlags, 1)
IPFIX_TEMPLATE_FIELD(IPv6TOS,1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
// Specify the Interface port- Uplink Port, Access Port, or NA.
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_PADDING(paddingOctets, 1)
IPFIX_TEMPLATE_END()
IPFIX_TEMPLATE_START(IPFIX_FLOW_TYPE_IPv6_VXLAN)
IPFIX_TEMPLATE_FIELD(sourceIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(destinationIPv4Address, 4)
IPFIX_TEMPLATE_FIELD(octetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(packetDeltaCount, 8)
IPFIX_TEMPLATE_FIELD(flowStartSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(flowEndSysUpTime, 8)
IPFIX_TEMPLATE_FIELD(sourceTransportPort, 2)
IPFIX_TEMPLATE_FIELD(destinationTransportPort, 2)
IPFIX_TEMPLATE_FIELD(ingressInterface, 4)
IPFIX_TEMPLATE_FIELD(egressInterface, 4)
IPFIX_TEMPLATE_FIELD(protocolIdentifier, 1)
IPFIX_TEMPLATE_FIELD(flowEndReason, 1)
IPFIX_TEMPLATE_FIELD(tcpFlags, 1)
IPFIX_TEMPLATE_FIELD(IPv6TOS, 1)
IPFIX_TEMPLATE_FIELD(maxTTL, 1)
IPFIX_TEMPLATE_FIELD(flowDir, 1)
//VXLAN specific
IPFIX_TEMPLATE_FIELD(vxlanId, 8)
IPFIX_VMW_TEMPLATE_FIELD(tenantSourceIPv6, 16)
IPFIX_VMW_TEMPLATE_FIELD(tenantDestIPv6, 16)
IPFIX_VMW_TEMPLATE_FIELD(tenantSourcePort, 2)
IPFIX_VMW_TEMPLATE_FIELD(tenantDestPort, 2)
IPFIX_VMW_TEMPLATE_FIELD(tenantProtocol, 1)
// Specify the Interface port - Uplink Port, Access Port, or NA.
IPFIX_VMW_TEMPLATE_FIELD(ingressInterfaceAttr, 2)
IPFIX_VMW_TEMPLATE_FIELD(egressInterfaceAttr, 2)
// TUNNEL-GW or no.
IPFIX_VMW_TEMPLATE_FIELD(vxlanExportRole, 1)
IPFIX_TEMPLATE_END()
[Figure 23-2. Flow on Host 1 — two flow records exported from Host 1: an IPv4 template record with ingressInterfaceAttr 0x02 (access port, port1), egressInterfaceAttr 0x03 (management port), and vxlanExportRole 01; and Flow 2, an IPv4 VXLAN template record with tenantSourceIPv4 IP1, tenantDestIPv4 IP2, tenantSourcePort 10000, tenantDestPort 80, ingressInterfaceAttr 0x03 (management port), egressInterfaceAttr 0x01 (dvuplink), and vxlanExportRole 01.]
Figure 23-2. Flow on Host 1 shows the flows that are collected from Host 1. The IPv4 template has
additional information about the ingress and egress ports along with the standard elements.
The ingressInterfaceAttr value of 0x02 indicates an access port where the virtual machine is
connected. The access port number is assigned to the ingressInterface parameter in the template.
The egressInterfaceAttr value of 0x03 shows that it is a VXLAN tunnel port, and the port
number associated with it is a management VMKNic port. This port number is assigned to the
egressInterface parameter in the template.
The IPv4 VXLAN template, on the other hand, has additional information about the VXLAN ID,
the inner source and destination IP/port, and the protocol. The ingress and egress interfaces are
the VXLAN tunnel port and the dvuplink port, respectively.
[Figure: Flow on Host 2 — two flow records exported from Host 2: an IPv4 template record with ingressInterfaceAttr 0x03 (management port), egressInterfaceAttr 0x02 (access port, port3), and vxlanExportRole 01; and Flow 3, an IPv4 VXLAN template record with tenantSourceIPv4 IP1, tenantDestIPv4 IP2, tenantSourcePort 10000, tenantDestPort 80, ingressInterfaceAttr 0x01 (dvuplink), egressInterfaceAttr 0x03 (management port), and vxlanExportRole 01.]
The templates in this figure differ from those in Figure 23-2. Flow on Host 1 only in the ingress
and egress attributes and port numbers.
The additional information provided through this template helps collector tool vendors
correlate the external VXLAN flows with the internal virtual machine flows.
The following section provides details about how to decode the new parameters that are
added in the VXLAN templates. IANA defines IPFIX information elements and their element IDs.
You can find the list of standard element IDs at [Link]
All the new elements defined as part of the VXLAN templates have their own element IDs.
These custom parameters or elements provide additional information about the VXLAN and
internal flows; the new elements and their IDs are included in the table below.
Note The Enterprise ID is appended to all the custom elements. The enterprise ID
for VMware is 6876.
The following table shows the complete list of element IDs. You can find the data type and
unit for standard element IDs at [Link]
1 octetDeltaCount
2 packetDeltaCount
4 protocolIdentifier
5 IPv4TOS
5 IPv6TOS
6 tcpFlags
7 sourceTransportPort
8 sourceIPv4Address
10 ingressInterface
11 destinationTransportPort
12 destinationIPv4Address
14 egressInterface
15 nextHopIPv4
27 sourceIPv6Address
28 destinationIPv6Address
53 maxTTL
61 flowDir
136 flowEndReason
152 flowStartSysUpTime
153 flowEndSysUpTime
210 paddingOctets
351 vxlanId
880 tenantProtocol
881 tenantSourceIPv4
882 tenantDestIPv4
883 tenantSourceIPv6
884 tenantDestIPv6
886 tenantSourcePort
887 tenantDestPort
888 egressInterfaceAttr
889 vxlanExportRole
890 ingressInterfaceAttr
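As an illustration of how these IDs are used on the wire, the following minimal Python sketch parses one field specifier from an IPFIX template record per RFC 7011 and maps VMware enterprise-specific elements (enterprise ID 6876) to the names above. The parse_field_specifier helper and the element subset are illustrative, not part of any NSX or collector API.

import struct

VMWARE_ENTERPRISE_ID = 6876

# Subset of the VMware-specific element IDs from the table above.
VMWARE_ELEMENTS = {
    880: "tenantProtocol", 881: "tenantSourceIPv4", 882: "tenantDestIPv4",
    883: "tenantSourceIPv6", 884: "tenantDestIPv6", 886: "tenantSourcePort",
    887: "tenantDestPort", 888: "egressInterfaceAttr",
    889: "vxlanExportRole", 890: "ingressInterfaceAttr",
}

def parse_field_specifier(buf, offset):
    # An IPFIX field specifier is element ID (2 bytes) + field length (2 bytes);
    # if the high bit of the element ID is set, a 4-byte enterprise number follows.
    element_id, field_length = struct.unpack_from("!HH", buf, offset)
    offset += 4
    enterprise = None
    if element_id & 0x8000:
        element_id &= 0x7FFF
        (enterprise,) = struct.unpack_from("!I", buf, offset)
        offset += 4
    if enterprise == VMWARE_ENTERPRISE_ID:
        name = VMWARE_ELEMENTS.get(element_id, "vmware-%d" % element_id)
    else:
        name = "iana-%d" % element_id  # resolve against the IANA registry
    return {"name": name, "length": field_length}, offset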
Flow monitoring is used for long-term data collection across the system, while the Application
Rule Manager (ARM) is used for targeted modeling of an application. During a flow monitoring phase,
ARM learns about flows coming in and out of the application being profiled, as well as flows
between application tiers. It also learns about any Layer 7 application identity for the flows being
discovered.
1 Select virtual machines (VM) that form the application and need to be monitored. Once
configured, all incoming and outgoing flows for a defined set of VNICs (Virtualized Network
Interface Cards) on the VMs are monitored. There can be up to five sessions collecting flows at
a time.
2 Stop the monitoring to generate the flow tables. The flows are analyzed to reveal the
interaction between VMs. The flows can be filtered to bring the flow records to a limited
working set. After flow analysis, ARM automatically recommends:
n Security group and IP set recommendations for the workloads, based on the flow pattern and
services used
n Firewall policy based on the analyzed flow for a given ARM session
3 Once a flow is analyzed with security group and policy recommendations, the policy for the given
application can be published as a section in the firewall rule table. The recommended firewall
rule also limits the scope of enforcement (applied to) to the VMs associated with the application.
Users can also modify the rules, especially the naming of the groups and rules, to make them more
intuitive and readable.
Prerequisites
Before starting a monitoring session, you need to define the VMs and vNICs that need to be
monitored.
VMware Tools must be running and current on your Windows desktop VMs.
Selected VMs need to be in a cluster that has firewall enabled (they cannot be on the exclude list).
A default firewall rule of "any allow" that applies to the selected vNICs must be created for the
duration of the monitoring session, so that flows to and from the vNICs are not dropped by any
other firewall rule.
Procedure
1 Log in to the vSphere Web Client, and navigate to Application Rule Manager.
n In NSX 6.4.1 and later, navigate to Networking & Security > Security > Application Rule
Manager.
n In NSX 6.4.0, navigate to Networking & Security > Tools > Flow Monitoring > Application
Rule Manager.
3 In the Start New Session dialogue box, enter a name for the session.
5 Select the vNICs or VMs you want monitored. The selected vNICs or VMs move to the
Selected Objects column.
The status is now Collecting Data. The latest set of flows collected is shown in the flow table.
Results
A flow monitoring session has been created for the selected vNICs and VMs.
What to do next
Analyzed flows can be filtered to limit the number of flows in a working set. The filter option icon is
next to the Processed View drop-down menu on the right.
Prerequisites
Before analysis, a flow monitoring session must have been collected from selected vNICs or VMs.
Procedure
Defined services are resolved, the IP address to VM translation begins, and duplicates are
removed.
Field Options
Direction IN - flow is coming into one of the VMs or vNICs selected as part of the input seed.
OUT - flow is generated from one of the VMs or vNICs selected as part of the input seed.
INTRA - flow is between the VMs or vNICs selected as part of the input seed.
Source VM Name, if the Source IP address of the flow record is resolved to one VM in the NSX inventory.
Note that an IP address can be resolved to a VM only if VMware Tools is enabled on that VM.
Raw IP, if there is no VM found for this source IP address in the NSX inventory. Note that multicast and
broadcast IP addresses will not be resolved to VMs.
Number of VMs (for example, 2 Virtual Machines), if the IP address is an overlapping IP address mapped
to multiple VMs in different networks. The user needs to resolve the correct virtual machine related to
this flow record.
3 Select the Firewall Rules tab to view the automatically recommended workload groupings and
the firewall rules that ARM created based on the selected flows. Users can modify the
recommended rules, especially the naming of the groups and rules, to make them more
intuitive.
n Grouping and IP set recommendations for the workloads based on the flow pattern and
services. For example, with a 3-tier application, the outcome would be four recommended
security groups - one for each of the application tiers and one group for all the VMs in
that application. ARM also recommends IP sets for destinations based on services used by
application VMs, such as DNS/NTP servers, if the destination IPs are outside of the vCenter
domain.
n Identification of the Application Context (Layer 7) for the flows between application tiers. For
example, the L7 application running irrespective of the TCP/UDP ports used, and the TLS version
used for HTTPS.
4 Click Publish to publish the policy for the given application as a section in the firewall rule
table, or modify the rules as needed. Note that the recommended firewall rule limits the
scope of enforcement (applied to) to the VMs associated with the application. Enter the firewall
rule section name and click the checkbox to enable the following optional parameters:
Option Description
Enable TCP Strict Enables you to set TCP strict for each firewall section.
Enable Stateless Firewall Enables stateless firewall for each firewall section.
After system analysis is complete, the analyzed flow table is available in the Processed View.
Users can further consolidate the flows by changing the source, destination, and service fields.
See Customizing Services in Flow Records and Customizing Source and Destination in Flow
Records.
Processed View
Field Options
Direction IN - flow is coming into one of the VMs or vNICs selected as part of the input seed.
OUT - flow is generated from one of the VMs or vNICs selected as part of the input seed.
INTRA - flow is between the VMs or vNICs selected as part of the input seed.
Source VM Name, if the Source IP address of the flow record is resolved to one VM in the NSX inventory.
Raw IP, if there is no VM found for this source IP address in the NSX inventory. Note that multicast and
broadcast IPs will not be resolved to VMs.
Number of VMs if IP address is an overlapping IP address mapped to multiple VMs in different networks.
The user needs to resolve multiple VMs to one VM related to this flow record.
Flow tables can be edited and the flows consolidated for easier rule creation. For example, the
source field can be replaced with ANY. Multiple VMs receiving flows with HTTP and HTTPS can
be replaced with a "WEB-Service" service group, which includes both the HTTP and HTTPS services.
By doing so, multiple flows may look similar and flow patterns may emerge that can be easily
translated to a firewall rule.
Note that while each cell of the flow table can be modified, the cells are not auto-populated. For
instance, if the IP address 196.1.1.1 is added to the DHCP-Server IPSet, the subsequent occurrences
of that IP are not auto-populated to show the DHCP-Server group. There is a prompt asking if you
want to replace all instances of the IP address with the IPSet. This allows the flexibility to make that
IP part of multiple IPSet groups.
Consolidated View
The consolidated view is accessed from the drop-down list in the right-hand corner. The
consolidated view eliminates duplicate flows and displays the minimal number of flows. This view
can be used to create firewall rules.
Clicking the arrow in the left hand corner of the Direction column shows the corresponding related
raw flow information:
n for intra flows the corresponding IN and OUT flows with raw data are shown
n the original source IP, destination IP, port, and protocol information in all of the raw flows that
were consolidated into the record
n for ALG flows, the corresponding data flow for the control flow is shown
After flow analysis, users can associate any undefined protocol/port combinations and create a
service. Service groups can be created for any of the services listed in the flows collected. For
more information on modifying flow records see Flow Consolidation and Customization.
Prerequisites
Flow data must have been collected from a set of vNICs and VMs. See Create a Monitoring
Session.
Procedure
u After the flow state is Analysis Completed, the flow table is populated with data in the
Processed View. To customize cell data, hover the cursor over a cell. A gear icon appears in
the right-hand corner of the cell. Click the gear icon in the Service column and select one of the
following options:
Option Description
Resolve Services If the port and protocol have been translated to multiple services, use this
option to select the correct service.
Create Services Group and Replace You can create a new service group with the service from the flow included
in it. Then, the new service group will replace the service. To add a service
group:
a Enter a name for the service group.
b Optional - enter a description of the Service Group.
c Select the Object type.
d Select the available objects you want to be added to the Service Group
and click the arrow to move the object to the Selected Objects column.
e A new services group is created and populated in the Service column.
Replace Service with Any Replaces the specific service with any service.
Replace Service with Service Group If the selected service is a member of multiple service groups, you select the
specific service group you want to apply.
a Click the desired Service Group from the list of available objects.
b Click OK.
Revert Protocol and Port Reverts any cell modifications back to the original data.
Results
The changed flow record has a pink bar on the side. When the cursor hovers over any cell
that has been modified, a green check mark is shown. Clicking the check mark displays a pop-up
window with the previous and new values for that cell. The modified flow record is easier to
translate into firewall rules.
What to do next
After flows have been modified, they can be further grouped together to get the smallest distinct
working set. The Processed View is used to create Service Groups and IPSets and modify the
flows. The Consolidated view further compresses these modified flows to make it easier to create
firewall rules.
After flow analysis is complete, flow cells can be customized by the user.
Prerequisites
Flow data must have been collected from a set of vNICs and VMs. See Create a Monitoring
Session
Procedure
u After the flow state shows Analysis Completed, the flow table is populated with data. To
customize cell data, hover the cursor over a cell. A gear icon appears in the right-hand corner
of the cell. Click the gear icon in the Source or Destination column and select one of the
following options:
Option Description
Resolve VMs This option is available if multiple VMs have the same IP address. This option
is used to choose the applicable VM name for the flow record.
Replace with any If the source should be accessible to everyone then any source IP address is
the correct option. In all other cases, you should specify the source address.
Configuring a destination value of any for the destination IP address is
discouraged.
Replace with Membership If the VM is part of security groups, they are displayed here and can
replace the VM name.
Option Description
Create Security Group a Enter a Name and (optional) description of the security group.
b Click Next.
c Define the criteria that an object must meet for it to be added to the
security group you are creating. This gives you the ability to include
virtual machines by defining a filter criteria with a number of parameters
supported to match the search criteria.
d Select one or more resources to add to the security group. Note that
when you add a resource to a security group, all associated resources
are automatically added. For example, when you select a virtual machine,
the associated vNIC is automatically added to the security group. You can
include the following objects in a security group:
Cluster
Logical Switch
vApp
Datacenter
e Click Next.
f Select the objects to exclude from the security group. The objects
selected here are always excluded from the security group, regardless
of whether or not they match the dynamic criteria.
g Click Next.
h Review the Security Group details on the Ready to complete window.
Click Finish.
Add to existing Security Group and Replace For VMs, if the selected VM is a member of multiple
security groups, select the specific security group you want to apply. This option is not available
if an IP address is present in the source or destination field. For raw IP
addresses, use the Add to existing IPset and Replace option.
a Click the desired Security Group from the list of available objects.
b Click OK.
Create IPSet and Replace An IPset allows you to apply a firewall rule to an entire set of IP addresses at
once.
a Enter a name for the IPSet.
b Optional - enter a description.
c Enter IP addresses or ranges of addresses in the new IP set.
d Click OK.
Add to existing IPset and Replace An IP address may be part of several IPsets. Use this option to
replace the shown IP address with a selected IPset.
a Select the desired IPset from the Available Objects.
b Click OK.
Revert to initial data Reverts any cell modifications back to the original data.
What to do next
Prerequisites
After the flow records have been analyzed, ARM automatically recommends firewall rules. You can
modify the recommended rules, or create new firewall rules.
Procedure
1 Open a flow session. If you are in the Processed View, right-click a single flow cell, or select
several cells (Shift + click the first and last cells) and then right-click. If you are in the Consolidated
View, select a flow cell and click the Action icon. Select Create Firewall rule.
The New Firewall Rule pop-up window appears with all of the cells populated based on the
selected row data. If several cells were selected, all the source, destination, service objects are
added to the corresponding fields of the rule.
3 (Optional) To select a different source or destination click Select next to the Source or
Destination box. Specify a new source or destination from the available objects and click OK.
4 (Optional) To select a different service, click Select next to the Service box. Distributed Firewall
supports ALG (Application Level Gateway) for the following protocols: FTP, CIFS, ORACLE
TNS, MS-RPC, and SUN-RPC. Edge supports ALG for FTP only. Specify a new service from the
available objects and click OK.
5 (Optional) To apply the rule to a different scope click Select next to the Applied To box. Make
appropriate selections as described in the table below and click OK. By default, the rule is
applied to the VNICs you originally right-clicked on.
All prepared clusters in your environment Select Apply this rule on all clusters on which
Distributed Firewall is enabled. After you click OK, the
Applied To column for this rule displays Distributed
Firewall.
One or more clusters, datacenters, distributed virtual port groups, NSX Edges, networks,
virtual machines, vNICs, or logical switches 1 In Container type, select the appropriate object.
2 In the Available list, select one or more objects and
click the arrow to move them to the Selected list.
If the rule contains virtual machines and vNICs in the source and destination fields, you must
add both the source and destination virtual machines and vNICs to Applied To for the rule to
work correctly.
Action Results in
Allow Allows traffic from or to the specified source(s), destination(s), and service(s).
Block Blocks traffic from or to the specified source(s), destination(s), and service(s).
7 Specify the Direction of the rule by clicking the drop-down arrow.
8 Click OK.
What to do next
Publish the firewall rules. See Publishing and Managing Firewall Rules From Application Rule
Manager.
After firewall rules have been created they can be managed in the Firewall Rules tab of the
Application Rule Manager.
Prerequisites
Analyze a flow session to create automatically recommended firewall rules, or create your own
firewall rules from a flow monitoring session.
Procedure
u Firewall rules appear in the Firewall Rules tab. Select one of the following options:
Option Description
Publish a Click Publish to publish the created firewall rules. The rules are published
as a new section.
b Enter the Section Name for the firewall rules and click the checkbox to enable
the optional parameters (such as Enable TCP Strict and Enable Stateless
Firewall, described earlier).
c Select where the new firewall section will be inserted in the existing
firewall configuration.
d Click OK.
Down Arrow Select the down arrow icon to move the rule down.
Note When firewall rules are published from Application Rule Manager, the section name is
added to the Publish button. Any subsequent publishing from the Application Rule Manager
overrides the existing section in the Firewall Configuration with the rules which are currently
available in the Application Rule Manager.
You can monitor the host health status only by using the NSX APIs. This diagnostic feature is not
available in the vCenter UI.
n pNIC status
n Tunnel status
Substatus Description
pNIC status This status is derived from the physical layer. When the
pNICs belong to a link aggregation group (LAG), the status
is either Up, Down, or Degraded.
n When all the pNICs in the LAG are up, the LAG status is
Up.
n When all the pNICs in the LAG are down, the LAG status
is Down.
n If some, but not all, of the pNICs in the LAG are down, the LAG
status is Degraded.
When the pNIC does not belong to a LAG, the status is
either Up or Down.
Control plane status It is the connection status between the host and the NSX
Controllers.
Management plane status It is the connection status between the host and the NSX
management plane.
The management plane determines the overall status of the host as follows:
n When all the substatuses are up, the overall host status is Up.
n When at least one of the substatuses is degraded, and the other substatuses are up or down,
the overall host status is Degraded.
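As a minimal sketch of the status rules described above, the following derives a LAG status from its member pNIC states and an overall host status from the three substatuses. The function names are illustrative; the actual derivation is performed by the NSX management plane.

def lag_status(pnic_states):
    # pnic_states: list of "UP" or "DOWN" for the pNICs in one LAG.
    if all(s == "UP" for s in pnic_states):
        return "UP"
    if all(s == "DOWN" for s in pnic_states):
        return "DOWN"
    return "DEGRADED"  # some, but not all, member pNICs are down

def overall_host_status(pnic, control_plane, management_plane):
    substatuses = [pnic, control_plane, management_plane]
    if all(s == "UP" for s in substatuses):
        return "UP"
    if any(s == "DEGRADED" for s in substatuses):
        return "DEGRADED"
    return "DOWN"  # assumed for the remaining combinations; see the full status table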
To enable pNIC status monitoring and global BFD (tunnel) monitoring, use the following APIs:
PUT <NSX_Manager_IP>/api/2.0/vdn/pnic-check/configuration/global
PUT <NSX_Manager_IP>/api/2.0/vdn/bfd/configuration/global
In NSX 6.4.6 or earlier, when you enable global BFD, the monitoring of tunnel latency and tunnel
health is enabled simultaneously. You cannot separately turn on or turn off the monitoring of
tunnel latency and tunnel health.
Starting in NSX 6.4.7, global BFD configuration API includes two additional parameters to enable
or disable the monitoring of tunnel health and tunnel latency separately.
When BFD is disabled, tunnel latency and tunnel health monitoring cannot be turned on. When
BFD is enabled, you can individually enable the monitoring of tunnel health and tunnel latency.
This decoupling provides greater flexibility and avoids performance problems as the number of
hosts in the network scales.
For detailed information about configuring the global BFD parameters, see the NSX API Guide.
n GET <NSX_Manager_IP>/api/2.0/vdn/pnic-check/configuration/global
n GET <NSX_Manager_IP>/api/2.0/vdn/host/status
n GET <NSX_Manager_IP>/api/2.0/vdn/host/{hostId}/status
n GET <NSX_Manager_IP>/api/2.0/vdn/host/{hostId}/tunnel
n GET <NSX_Manager_IP>/api/2.0/vdn/host/{hostId}/remote-host-status
For detailed information about each of these APIs, including parameter descriptions and
API response examples, see the NSX API Guide.
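A hedged sketch of calling the GET endpoints listed above with basic authentication follows. The NSX Manager address and credentials are placeholders, and the XML response schema is described in the NSX API Guide.

import requests

NSX_MANAGER = "nsx-manager.example.com"  # placeholder address
AUTH = ("admin", "changeme")             # placeholder credentials

def get_health(path):
    # Issue a GET against one of the health-status endpoints listed above.
    url = "https://%s%s" % (NSX_MANAGER, path)
    resp = requests.get(url, auth=AUTH, verify=False)  # verify=False for lab use only
    resp.raise_for_status()
    return resp.text  # XML; parse according to the NSX API Guide

print(get_health("/api/2.0/vdn/host/status"))
# Per-host detail, using a hypothetical host ID:
# print(get_health("/api/2.0/vdn/host/host-123/status"))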
For example, when NSX is installed and VXLAN-based networks are deployed in the network, the
following types of latency exist:
n vNIC to vNIC
The network operations agent (netopa) on the ESXi host collects the network latency information
from various sources, such as vSphere, NSX, and so on. Administrators can configure external
collector tools, such as vRealize Network Insight (vRNI), to which the latency information is
exported. Finally, they can run analytics on the latency information to troubleshoot network-
specific problems.
Note The netopa agent can export the network latency information only to vRNI. Other collector
tools are not supported currently.
You must use NSX REST APIs to configure NSX to calculate the latency metrics. For NSX
to calculate the latency metrics correctly, ensure that the clocks on the different hosts are
synchronized using the network time protocol (NTP).
Tunnel Latency
To calculate the tunnel latency or VTEP-to-VTEP latency between ESXi hosts, NSX transmits
Bidirectional Forwarding Detection (BFD) packets periodically in each tunnel. You must configure the
BFD global configuration parameters by running the PUT /api/2.0/vdn/bfd/configuration/
global API.
For more information about configuring the BFD global configuration parameters, see the NSX
API Guide.
End-to-End Latency
Starting in NSX 6.4.5, NSX can calculate the end-to-end latency of a data path as traffic moves
between VMs that are either on the same ESXi host or on different ESXi hosts. However, both VMs
must be attached to the same logical switch (subnet).
Note NSX cannot calculate the end-to-end latency information when data traffic is routed
between VMs through a distributed logical router. That is, when VMs are attached to different
logical switches or subnets.
To calculate the end-to-end latency of the data path, NSX uses the timestamp attribute of a data
path packet inside the hypervisor. The end-to-end data path latency is calculated in terms of
latency of the multiple segments in the data path: vNIC to pNIC and pNIC to vNIC.
For example, when traffic moves between VMs on the same host, vNIC to vNIC latency is
calculated. When traffic moves between VMs on different ESXi hosts, vNIC to pNIC latency is
calculated on the source hypervisor and pNIC to vNIC latency is calculated on the destination
hypervisor. For traffic between the ESXi hosts, NSX calculates only the tunnel latency, if BFD
global configuration parameters are configured.
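The following sketch only mirrors the segment arithmetic described above, with hypothetical per-segment values in microseconds; the actual metrics are computed by NSX and exported to the configured collector.

def end_to_end_latency(same_host, segments):
    # Same host: vNIC-to-vNIC on one hypervisor.
    # Different hosts: vNIC-to-pNIC on the source plus pNIC-to-vNIC on the
    # destination; tunnel (VTEP-to-VTEP) latency is reported separately via BFD.
    if same_host:
        return segments["vnic_to_vnic"]
    return segments["src_vnic_to_pnic"] + segments["dst_pnic_to_vnic"]

# Illustrative values in microseconds:
print(end_to_end_latency(False, {"src_vnic_to_pnic": 45.0, "dst_pnic_to_vnic": 50.0}))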
For more information about configuring the latency parameters on a specific vSphere Distributed
Switch and on a specific host, see the following sections in the NSX API Guide:
After data is gathered, it is purged daily at 2:00 a.m. During the data purge the number of flow
records across all sessions combined is checked, and any records above 20 million (or ~4GB) are
deleted. Deletion begins with the oldest session, and continues until the number of flow records
in the database is below 15 million records. If a session is in progress during the data purge, some
records could be lost.
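A minimal sketch of that purge, assuming sessions are tracked as (start time, record count) pairs ordered oldest first; the thresholds come from this section, while the data structure is illustrative.

PURGE_TRIGGER = 20_000_000  # flow records (~4 GB) across all sessions
PURGE_TARGET = 15_000_000   # deletion stops once the count drops below this

def purge_sessions(sessions):
    # sessions: list of (start_time, record_count), oldest first.
    total = sum(count for _, count in sessions)
    if total <= PURGE_TRIGGER:
        return sessions  # nothing to purge
    kept = list(sessions)
    while kept and total >= PURGE_TARGET:
        _, count = kept.pop(0)  # delete the oldest session first
        total -= count
    return kept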
Warning When endpoint monitoring is enabled, the Dashboard shows a small yellow warning
icon to indicate that the feature is turned on. Endpoint monitoring impacts performance, so you
should turn it off after the data is collected.
Prerequisites
n Endpoint Monitoring is supported on Windows Vista, Windows 7, Windows 8, Windows 8.1,
Windows 2008, Windows 2008 R2, Windows 2012, Windows 10, and Windows 2016. It is not
supported on Linux.
n VMware Tools must be running and current on your Windows desktop VMs.
n Security Groups with 20 or fewer VMs are needed for data collection before Endpoint
Monitoring can begin. See Create a Security Group for more information.
n Data collection must be enabled for one or more virtual machines on a vCenter Server before
running an Endpoint Monitoring report. Before running a report, verify that the enabled virtual
machines are active, and are generating network traffic.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Endpoint
Monitoring.
3 On the Start Data Collection for Security Groups pop-up window, select the security groups for
which you want to collect data. Click OK.
5 Click OK.
The main Endpoint Monitoring Screen appears. In the left hand corner the status is Collecting
Data.
The Endpoint Monitoring screen appears with the Summary tab populated with data.
Endpoint Monitoring
Endpoint Monitoring enables visibility into specific application processes and their associated
network connections.
Summary Tab
After data collection is completed, the summary screen displays the details of the NSX manager,
the security group and the time slot of the collected data. The number of running virtual machines
(VMs) and the total number of processes generating traffic is shown in the first box. Clicking the
number of virtual machines running takes you to the VM Flows tab, described below. Clicking the
number of processes generating traffic takes you to the Process Flows tab, described below.
The second box displays a donut with the total number of flows. A flow is any unique stream of
network traffic as identified by its packet type, source and destination IP, and port. Hover the
cursor over each section and the number of flows within the security group or outside the security
group is shown.
VM Flows Tab
This screen displays the details of the flows within the VMs including:
n Flows within security group - Traffic flowing between the VMs where the source or destination
is inside the monitored security group
n Flows outside security group - Traffic flowing between the VMs where the source or
destination is outside the monitored security group
n Shared service flows outside group - Shared service flows such as DHCP, LDAP, DNS, or NTP,
outside the monitored security group
n Shared service flows inside security group - Shared service such as DHCP, LDAP, DNS, or
NTP, inside the monitored security group
Clicking on a specific VM name in the table displays a bubble graph that shows the following:
Click on a bubble to view the details of the VM. The detailed flow view includes the process name,
version, and number of flows being generated by each process. If it contains shared services, a
special icon is visible. Clicking on a line between two VM bubbles displays the process
flow details of the flows between those two VMs, including:
n Source process - Name of application/exe generating traffic and initiating the flow
n Protocol - TCP
n Destination process - Name of the server application/exe of the process that is the destination
of the flow
n VM name
n Flows within security group - Traffic flowing between the VMs where the source or destination
is inside the monitored security group
n Flows outside security group - Traffic flowing between the VMs where the source or
destination is outside the monitored security group
n Shared flows within security group - Shared flows, within the monitored security group
n Shared flows outside security group - Shared flows, outside the monitored security group
The bubble graph depicts the flows that are occurring with the process or application on the
selected VM as the anchor. Click on any of the bubbles for the process name and version. Click on
any line to display the following:
n Protocol - TCP
n AD User Table -Lists all users that have initiated network flows from or to VMs that were part
of the selected security group.
n AD Sessions Table - Lists all the sessions that were created by a user selected from the AD
User Table. There are as many sessions as there are unique pairs of user and source VM IP.
n AD User Flows Table - When a user clicks on a session, this page appears, providing
additional flow details.
Traceflow
Traceflow is a troubleshooting tool that provides the ability to inject a packet and observe where
that packet is seen as it passes through the physical and logical network. The observations allow
you to determine information about the network, such as identifying a node that is down or a
firewall rule that is preventing a packet from being received by its destination.
About Traceflow
Traceflow injects packets into a vSphere distributed switch (VDS) port and provides various
observation points along the packet’s path as it traverses physical and logical entities (such as
ESXi hosts, logical switches, and logical routers) in the overlay and underlay networks. This allows
you to identify the path (or paths) a packet takes to reach its destination or, conversely, where a
packet is dropped along the way. Each entity reports the packet handling on input and output, so
you can determine whether issues occur when receiving a packet or when forwarding the packet.
Keep in mind that traceflow is not the same as a ping request/response that goes from guest-VM
stack to guest-VM stack. What traceflow does is observe a marked packet as it traverses the
overlay network. Each packet is monitored as it crosses the overlay network until it reaches and is
deliverable to the destination guest VM. However, the injected traceflow packet is never actually
delivered to the destination guest VM. This means that a traceflow can be successful even when
the guest VM is powered down.
n Layer 2 unicast
n Layer 3 unicast
n Layer 2 broadcast
n Layer 2 multicast
You can construct packets with custom header fields and packet sizes. The source for the
traceflow is always a virtual machine virtual NIC (vNIC). The destination endpoint can be any
device in the NSX overlay or in the underlay. However, you cannot select a destination that is
north of an NSX edge services gateway (ESG). The destination must be on the same subnet or
must be reachable through NSX distributed logical routers.
The traceflow operation is considered Layer 2 if the source and destination vNICs are in the same
Layer 2 domain. In NSX, this means that they are on the same VXLAN network identifier (VNI or
segment ID). This happens, for example, when two VMs are attached to the same logical switch.
If NSX bridging is configured, unknown Layer 2 packets are always sent to the bridge.
Typically, the bridge forwards these packets to a VLAN and reports the traceflow packet as
delivered. A packet reported as delivered does not necessarily mean that the trace packet was
delivered to the specified destination.
For Layer 3 traceflow unicast traffic, the two end points are on different logical switches and have
different VNIs, connected to a distributed logical router (DLR).
For multicast traffic, the source is a VM vNIC, and the destination is a multicast group address.
Traceflow observations may include observations of broadcasted traceflow packets. The ESXi
host broadcasts a traceflow packet if it does not know the destination host's MAC address. For
broadcast traffic, the source is a VM vNIC. The Layer 2 destination MAC address for broadcast
traffic is FF:FF:FF:FF:FF:FF. To create a valid packet for firewall inspection, the broadcast
traceflow operation requires a subnet prefix length. The subnet mask enables NSX to calculate
an IP network address for the packet.
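For example, the network address that the subnet prefix length makes computable can be derived as follows; the address and prefix values are illustrative.

import ipaddress

source_ip = "192.168.10.25"  # illustrative broadcast traceflow source
prefix_len = 24              # subnet prefix length supplied for the operation

network = ipaddress.ip_network("%s/%d" % (source_ip, prefix_len), strict=False)
print(network.network_address)  # 192.168.10.0, the IP network address for the packet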
Caution Depending on the number of logical ports in your deployment, multicast and broadcast
traceflow operations might generate high traffic volume.
There are two ways to use traceflow: through the API and through the GUI. The API is the same
API that the GUI uses, except the API allows you to specify the exact settings within the packet,
while the GUI has more limited settings.
n TCP and UDP source and destination port numbers. The default values are 0.
n TCP flags.
n An expiry timeout, in milliseconds (ms), for the traceflow operation. The default is 10,000 ms.
n Ethernet frame size. The default is 128 bytes per frame. The maximum frame size is 1000 bytes
per frame.
n Payload value.
n Troubleshooting network failures to see the exact path that traffic takes
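For API-driven use, the shape of such a call might look like the hedged sketch below. The endpoint path, request body, address, and credentials are assumptions for illustration only; the actual traceflow request schema and URI must be taken from the NSX API Guide.

import requests

NSX_MANAGER = "nsx-manager.example.com"  # placeholder address
AUTH = ("admin", "changeme")             # placeholder credentials

# Hypothetical XML body; the real element names are defined in the NSX API Guide.
TRACEFLOW_REQUEST = """<traceflowRequest>
  <!-- source vNIC, destination, timeout (ms), frame size, TCP/UDP ports, flags -->
</traceflowRequest>"""

resp = requests.post(
    "https://%s/api/2.0/vdn/traceflow" % NSX_MANAGER,  # assumed endpoint; verify in the API Guide
    data=TRACEFLOW_REQUEST,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab use only
)
print(resp.status_code, resp.text)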
Prerequisites
n Traceflow operations require communication among vCenter, NSX Manager, the NSX
Controller cluster and the netcpa user world agents on the hosts.
n For Traceflow to work as expected, make sure that the controller cluster is connected and in
healthy state.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Traceflow.
If the VM is managed in the same vCenter Server where you are running the traceflow, you can
select the VM and vNIC from a list.
Note When a logical switch is in multicast replication mode, the VMs connected to this logical
switch will not be listed and cannot be chosen as the traceflow source or destination.
The destination can be a vNIC of any device in the NSX overlay or underlay, such as a host,
a VM, a logical router, or an edge services gateway. If the destination is a VM that is running
VMware Tools and is managed in the same vCenter Server from which you are running the
traceflow, you can select the VM and vNIC from a list.
Otherwise, you must enter the destination IP address (and the MAC address for a unicast
Layer 2 traceflow). You can gather this information from the device itself in the device console
or in an SSH session. For example, if it is a Linux VM, you can get its IP and MAC address by
running the ifconfig command in the Linux terminal. For a logical router or edge services
gateway, you can gather the information from the show interface CLI command.
Both the source and destination IP addresses are required to make the IP packet valid. In the
case of multicast, the MAC address is deduced from the IP address.
The packet is switched based on MAC address only. The destination MAC address is
[Link].
Both the source and destination IP addresses are required to make the IP packet valid for
firewall inspection.
8 Click Trace.
Packet Capture
You can create a packet capture session for required hosts on the NSX Manager using the Packet
Capture tool. After the packets are captured, the file is available to download. If your dashboard is
indicating that a host is not in a healthy state, you can capture packets for that particular host for
further troubleshooting.
For each session on the host, the packet capture file limit is 20 MB and the capture time limit is
10 minutes. A session remains active for 10 minutes or until the capture file reaches either 20 MB
size or 20,000 packets, whichever limit is reached first. When any one of the limits is reached,
the session is stopped. In the UI, you can create a maximum of 16 packet capture sessions. The
NSX management plane limits the total captured file size across all sessions to 400 MB. When
the combined packet file size of 400 MB is reached, a new capture session cannot be created.
However, you can download the files of the previous packet capture sessions. If you want to start
a new session after the limit of 400 MB file size is reached, you must clear the older sessions.
An existing session is removed one hour after the session was created. If you restart NSX, all the
existing sessions are cleared.
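The session limits described above can be summarized in a small check like the sketch below; the function and its inputs are illustrative, not an NSX API.

MAX_UI_SESSIONS = 16
TOTAL_CAPTURE_LIMIT_MB = 400  # combined capture file size across all sessions

def can_create_session(existing_file_sizes_mb):
    # existing_file_sizes_mb: capture file sizes of the current sessions, in MB.
    if len(existing_file_sizes_mb) >= MAX_UI_SESSIONS:
        return False
    return sum(existing_file_sizes_mb) < TOTAL_CAPTURE_LIMIT_MB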
Options Description
NSX Manager Select the NSX Manager for which you want to create a
packet capture session.
CLEAR ALL If you want to clear all sessions, then click CLEAR ALL. A
confirmation dialog box appears. Click YES to clear all the
listed sessions.
You can view the number of active sessions and total file size. The session status can be as follows:
n Finished: After the session is finished, you can download the session file. You can also restart
or clear the session.
n Stopped: The session was force-stopped. You can restart or clear the session.
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Packet Capture.
The Create Session window appears. Enter the required details as explained further.
Parameter Description
General Tab The General tab is the default tab. Enter the required
details as explained further.
Filter Type Select the filter type from the list that is based on
the selected host. The list depends on the configured
firewall rules.
Results
What to do next
n You can download the captured session after it is finished, and can view the file using tools
such as Wireshark.
n You can clear the session after you download the file.
You can also use the API to generate, download, delete, or cancel the bundle collection.
If the size limit in NSX is reached before all the requested logs are generated, the operation skips
generation of the remaining logs. The bundle is generated with partial logs and is made available
for local download or FTP upload. The status of the logs that are skipped is displayed.
When you submit a new log collection request, the old bundle is deleted. The bundle is also
deleted after it has been uploaded to a remote server or when the log generation operation is canceled.
Note In an aggregated support bundle, the maximum number of nodes (including NSX Manager,
NSX Edge, Host, and NSX Controller) should not be more than 200.
n VM Guest logs
Note The support bundle does not contain all the host log files that you might need to
troubleshoot host problems. To collect the complete host logs, use the vCenter Server. For
information about collecting vSphere log files, see the vSphere Monitoring and Performance
documentation.
Prerequisites
Procedure
1 In the vSphere Web Client, navigate to Networking & Security > Tools > Support Bundle.
a To include NSX Manager logs, select the Include NSX Manager logs check box.
b Select the required object type from the list. You can select Hosts, Edges, and Controllers.
c Based on the selected object type, the available components are listed under the Available
Objects column.
n Select the check box next to the component that you want to include in the logs, and
then click the arrow key to move the component to the Selected Objects list.
n To remove the selected objects, select the check box next to the component that you
want to remove, and then click the arrow key to move the component to the
Available Objects list.
d For Hosts, you can select additional log options as Guest Introspection and Firewall.
4 Set the default timeout (in minutes) for the bundle collection process per selected object from
the list.
Note You can either download the support bundle, or upload it to a remote server. If you do
not want to upload the bundle to a remote server, the bundle will be available for download
after it is generated.
a Select the Upload support bundle to remote file server check box.
n Username and Password: Enter the user name and password of the remote server.
n Port: Port number is added by default. You can change the port number, if necessary.
To increase or decrease the number, click the arrow key.
Results
To view the bundle details, click the View Bundle Details link.
You can view overall completion status (in percent) and status for each component. You can view
the completion status of bundle (in percent) being uploaded to a remote server. The status can be
as follows:
n Skipped: This status can appear due to limited disk space. The bundle gets generated with
partial logs and is made available for a local download, or is uploaded to a remote server. The
status of the logs that are skipped is displayed.
n Failed: Log collection failed due to various reasons, such as connectivity issues or a timeout error.
Click START NEW to start the data collection again.
n Completed: You can now download the bundle or view it on the remote server.
What to do next
n You can Download a Support Bundle after it is generated, or you can view the filename that is
uploaded to the configured remote server.
n Click START NEW to start the data collection again. A confirmation box appears. Click Yes to
start a new data generation request.
Prerequisites
Procedure
2 Click DOWNLOAD.
3 The support bundle filename ending with [Link] is downloaded to your default Downloads
folder. Some browsers might alter the file extension.
For example, the name of the support bundle file has a format similar to VMware-NSX-
TechSupport-Bundle-YYYY-MM-DD_HH-[Link].
4 Click the DELETE SUPPORT BUNDLE link.
5 Click Yes. The log files generated in this particular request get deleted.
What to do next
You can provide the downloaded support bundle to VMware technical support.
Prerequisites
Procedure
2 If the log generation is in progress, you can cancel the process. Click ABORT GENERATION.
5 Click Yes. The log files generated in this particular request get deleted.
Results
What to do next
Click START NEW to start data collection again. A confirmation box appears. Click Yes to start a
new data generation request.
Currently, ACME Enterprise achieves this disaster recovery the traditional way by performing the
following tasks manually:
n Remapping IP addresses
n Updating other services that use the application IP addresses, such as DNS, security policies,
and other services.
This traditional approach to disaster recovery consumes significant additional time to complete
a 100% recovery at its site in Austin. To achieve a fast disaster recovery with minimal
downtime, ACME Enterprise decides to deploy NSX Data Center 6.4.5 or later in a Cross-vCenter
environment, as shown in the following logical topology diagram.
Figure 24-1. Multi-Site Cross-vCenter NSX Topology in Active - Passive Mode and Local Egress
Disabled
[Figure: Site 1 (Primary) and Site 2 (Secondary) connected over the WAN, each with a firewall, a physical router, and ESGs (active at site 1, passive at site 2), plus a DLR and shared storage.]
In this topology, site 1 at Palo Alto is the primary (protected) data center, and site 2 at Austin is
the secondary (recovery) data center. Each site has a single vCenter Server, which is paired with
its own NSX Manager. The NSX Manager at site 1 (Palo Alto) is assigned the role of a primary
NSX Manager, and the NSX Manager at site 2 (Austin) is assigned the role of a secondary NSX
Manager.
ACME Enterprise deploys the Cross-vCenter NSX across both sites in an Active - Passive mode.
100% of the applications (workloads) run on site 1 at Palo Alto, and 0% run on site 2 at
Austin. That is, by default, site 2 is in passive or standby mode.
Both sites have their own Compute, Edge, and Management Clusters and ESGs that are local to
that site. As local egress is disabled on the UDLR, only a single UDLR Control VM is deployed on
the primary site. The UDLR Control VM is connected to the universal transit logical switch.
The NSX administrator creates universal objects that span two vCenter domains at site 1 and site
2. The universal logical networks use universal networking and security objects, such as Universal
Logical Switches (ULS), Universal Distributed Logical Routers (UDLR), and Universal Distributed
Firewall (UDFW).
n Adds the local Compute, Edge, and Management clusters to the universal transport zone from
the primary NSX Manager.
n Disables local egress, enables ECMP, and enables Graceful Restart on the UDLR Control VM
(Edge Appliance VM).
n Configures dynamic routing using BGP between the Edge Services Gateways (ESGs) and the
UDLR Control VM.
n Disables firewall on both the ESGs because ECMP is enabled on the UDLR Control VM and to
ensure that all traffic is allowed.
The following diagram shows a sample configuration of the uplink and downlink interfaces on the
ESGs and the UDLRs at site 1.
[Figure: Sample uplink and downlink interface configuration at site 1 — the physical router connects to the active ESGs, and the ESGs connect to the UDLR, over /24 subnets.]
n Adds the local Compute, Edge, and Management clusters to the universal transport zone from
the secondary NSX Manager.
n Powers down the ESGs on the secondary site when site 1 is active.
Now, let us walk through the steps that the NSX administrator can perform to achieve a disaster
recovery in the following scenarios:
n Automatically recover all Edge interface settings and BGP protocol configuration settings at
site 2.
Note
n The administrator can do the failover tasks manually by using either the vSphere Web Client
or by running the NSX REST APIs. In addition, the administrator can automate some failover
tasks by running a script file that contains the APIs to run during the failover. This scenario
explains manual failover steps using the vSphere Web Client. However, if any step requires the
use of either the CLI or the NSX REST APIs, adequate instructions are provided.
n In this scenario, the disaster recovery workflow is specific to the topology explained earlier,
which has a primary NSX Manager and a single secondary NSX Manager. The workflow with
multiple secondary NSX Managers is not in the scope of this scenario.
Important When the failover to the secondary site 2 is in progress or partly completed, avoid
powering on the NSX Manager at site 1 to failback to the primary site 1. Ensure that the failover
process is first completed by using the procedure in this scenario. Only after a clean failover is
done to the secondary site 2, restore or failback all the workloads to the original primary site 1. For
detailed instructions about the failback process, see Scenario 3: Full Failback to Primary Site.
Prerequisites
n vCenter Server at sites 1 and 2 are deployed with Enhanced Linked Mode.
n Firewall is disabled on both the ESGs because ECMP is enabled on the UDLRs and to
ensure that all traffic is allowed.
n Similar downlink interfaces are configured manually on the ESGs as configured at site 1.
n ESGs are in powered down state when the primary site 1 is active or running.
Procedure
a Power off the primary NSX Manager and all the three controller nodes that are associated
with the primary NSX Manager.
b On the Installation and Upgrade page, navigate to Management > NSX Managers.
n If you refresh the NSX Managers page in the current browser session, the role of the
primary NSX Manager changes to Unknown.
n If you log out from the vSphere Web Client and log in again or start a new vSphere
Web Client browser session, the primary NSX Manager is no longer displayed on the
NSX Managers page.
n If you refresh the Dashboard page in the current browser session, the following error
message is displayed: Could not establish connection with NSX Manager.
Please contact administrator. This error means that the primary NSX Manager
is no longer reachable.
n If you log out from the vSphere Web Client and log in again or start a new vSphere
Web Client browser session, the primary NSX Manager is no longer available in the
NSX Manager drop-down menu.
d Navigate to Networking & Security > Installation and Upgrade > Management > NSX
Controller Nodes. Select the secondary NSX Manager, and ensure that the status of all
three controller nodes is Disconnected.
e Power off all the NSX Edges and the universal distributed logical router (UDLR) control
VM.
a On the Installation and Upgrade page, navigate to Management > NSX Managers.
c Click Actions > Disconnect from Primary NSX Manager. When prompted to continue with
the disconnect operation, click Yes.
The secondary NSX Manager is disconnected from the primary NSX Manager and enters
the Transit role.
Caution As local egress is disabled on the UDLR, the UDLR Control VM (Edge Appliance VM)
is deployed only at the original primary site (site 1). Before site 1 fails, the UDLR Control VM
is not available at the secondary site (site 2), which is now promoted to primary. Therefore,
redeploy the UDLR Control VM at the promoted primary site (site 2) before redeploying the
NSX Controller Cluster.
If the controller nodes are deployed before the UDLR Control VM, the forwarding tables on
the UDLR are flushed. This causes downtime immediately after the first controller node is
deployed at site 2 and might result in communication outages. To avoid this situation, deploy
the UDLR Control VM before deploying the NSX Controller nodes.
3 Power on the NSX Edges that are in the powered-down state, and deploy the UDLR Control
VM (Edge Appliance VM) at the secondary site 2 (promoted primary).
For instructions about deploying the UDLR Control VM, see the NSX Cross-vCenter Installation
Guide.
While deploying the UDLR Control VM, configure the following resource settings:
Note After deploying the UDLR Control VM, the following configuration settings are
automatically recovered at site 2:
4 Deploy the three NSX Controller Cluster nodes at site 2 (promoted primary).
For detailed instructions about deploying NSX Controllers, see the NSX Cross-vCenter
Installation Guide.
5 Update the NSX Controller Cluster state.
c Select one cluster at a time, and then click Actions > Force Sync Services.
Note The workload VMs continue to exist at site 1. Therefore, you must manually migrate the
workload VMs to site 2.
Results
The manual recovery of the NSX components and the failover from the primary site (site 1) to the
secondary site (site 2) are complete.
What to do next
Verify whether the failover to site 2 is 100% complete by doing these steps on site 2 (promoted
primary site):
2 Check whether the Control VM (Edge Appliance VM) is deployed on the UDLR.
5 Log in to the CLI console of the UDLR Control VM (Edge Appliance VM), and do these steps:
a Check whether all BGP neighbors are established and the status is UP by running the show
ip bgp neighbors command.
b Check whether all BGP routes are being learned from all BGP neighbors by running the
show ip route bgp command.
After a complete failover to site 2, all workloads run on the secondary site (promoted primary) and
traffic is routed through the UDLR and the NSX Edges at site 2.
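Parts of this verification can also be scripted against the NSX Manager REST API instead of the vSphere Web Client. The following is a minimal Python sketch, not an official procedure: the /api/2.0/vdn/controller endpoint is the documented controller query API, but the host name, credentials, and the exact XML field names shown here are placeholders and assumptions that you should verify against the NSX API Guide for your version.

# Minimal post-failover check: confirm that the promoted primary NSX Manager
# at site 2 is reachable and that its controller nodes report a running status.
# Host, credentials, and the XML field names are assumptions for illustration.
import sys
import xml.etree.ElementTree as ET

import requests

NSX_MANAGER = "nsxmgr-site2.example.com"   # placeholder
AUTH = ("admin", "changeme")               # placeholder credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/controller",
    auth=AUTH,
    verify=False,  # lab only; use a trusted certificate in production
    timeout=30,
)
resp.raise_for_status()

# Parse the controller list; the <status> element name is an assumption.
root = ET.fromstring(resp.text)
failed = []
for ctrl in root.findall(".//controller"):
    ctrl_id = ctrl.findtext("id", default="unknown")
    status = ctrl.findtext("status", default="UNKNOWN")
    print(f"{ctrl_id}: {status}")
    if status.upper() != "RUNNING":
        failed.append(ctrl_id)

if failed:
    sys.exit(f"Controller nodes not running: {', '.join(failed)}")
print("All controller nodes report RUNNING; continue with the BGP checks "
      "on the UDLR Control VM CLI (show ip bgp neighbors, show ip route bgp).")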
After the scheduled maintenance is done, the administrator powers on the NSX Manager and
controller cluster nodes at the primary site 1, and restores all the workloads to the original primary
site 1. For detailed instructions about doing a manual failback to the primary site, see Scenario 3:
Full Failback to Primary Site.
As the primary site has gone down due to unforeseen circumstances, the administrator cannot do
any failover preparation before the actual failure occurs.
n Automatically recover all Edge interface settings and BGP protocol configuration settings at
site 2.
Note
n The administrator can do the failover tasks manually by using either the vSphere Web Client
or by running the NSX REST APIs. In addition, the administrator can automate some failover
tasks by running a script file that contains the APIs to run during the failover. This scenario
explains manual failover steps using the vSphere Web Client. However, if any step requires the
use of either the CLI or the NSX REST APIs, adequate instructions are provided.
n In this scenario, the disaster recovery workflow is specific to the topology explained earlier,
which has a primary NSX Manager and a single secondary NSX Manager. The workflow with
multiple secondary NSX Managers is not in the scope of this scenario.
Important If the primary site 1 powers on while the failover to the secondary site 2 is in progress,
first ensure that the failover process is completed by using the procedure in this scenario. Only
after a clean failover is done to the secondary site 2, restore or failback all the workloads to the
original primary site 1. For detailed instructions about the failback process, see Scenario 3: Full
Failback to Primary Site.
Prerequisites
n vCenter Servers at sites 1 and 2 are deployed with Enhanced Linked Mode.
n The firewall is disabled on both ESGs because ECMP is enabled on the UDLRs and all traffic must be allowed.
n Downlink interfaces are configured manually on the ESGs at site 2, similar to the configuration at site 1.
n The ESGs at site 2 are in a powered-down state while the primary site 1 is active or running.
Procedure
a On the Installation and Upgrade page, navigate to Management > NSX Managers.
n If you refresh the NSX Managers page in the current browser session, the role of the
primary NSX Manager changes to Unknown.
n If you log out from the vSphere Web Client and log in again or start a new vSphere
Web Client browser session, the primary NSX Manager is no longer displayed on the
NSX Managers page.
n If you refresh the Dashboard page in the current browser session, the following error
message is displayed: Could not establish connection with NSX Manager.
Please contact administrator.. This error means that the primary NSX Manager
is no longer reachable.
n If you log out from the vSphere Web Client and log in again or start a new vSphere
Web Client browser session, the primary NSX Manager is no longer available in the
NSX Manager drop-down menu.
a On the Installation and Upgrade page, navigate to Management > NSX Managers.
c Click Actions > Disconnect from Primary NSX Manager. When prompted to continue with
the disconnect operation, click Yes.
The secondary NSX Manager is disconnected from the primary NSX Manager and enters
the Transit role.
Caution As local egress is disabled on the UDLR, the UDLR Control VM (Edge Appliance VM)
is deployed only at the original primary site (site 1). Before site 1 fails, the UDLR Control VM
is not available at the secondary site (site 2), which is now promoted to primary. Therefore,
redeploy the UDLR Control VM at the promoted primary site (site 2) before redeploying the
NSX Controller Cluster.
If the controller nodes are deployed before the UDLR Control VM, the forwarding tables on
the UDLR are flushed. This causes downtime immediately after the first controller node is
deployed at site 2 and might result in communication outages. To avoid this situation, deploy
the UDLR Control VM before deploying the NSX Controller nodes.
3 Power on the NSX Edges that are in the powered-down state, and deploy the UDLR Control
VM (Edge Appliance VM) at the secondary site 2 (promoted primary).
For instructions about deploying the UDLR Control VM, see the NSX Cross-vCenter Installation
Guide.
While deploying the UDLR Control VM, configure the following resource settings:
Note After deploying the UDLR Control VM, the following configuration settings are
automatically recovered at site 2:
4 Deploy the three NSX Controller Cluster nodes at site 2 (promoted primary).
For detailed instructions about deploying NSX Controllers, see the NSX Cross-vCenter
Installation Guide.
5 Update the NSX Controller Cluster state.
c Select one cluster at a time, and then click Actions > Force Sync Services.
Note The workload VMs continue to exist at site 1. Therefore, you must manually migrate the
workload VMs to site 2.
Results
The manual recovery of the NSX components and the failover from the primary site (site 1) to the
secondary site (site 2) are complete.
What to do next
Verify whether the failover to site 2 is 100% complete by doing these steps on site 2 (promoted
primary site):
2 Check whether the Control VM (Edge Appliance VM) is deployed on the UDLR.
5 Log in to the CLI console of the UDLR Control VM (Edge Appliance VM), and do these steps:
a Check whether all BGP neighbors are established and the status is UP by running the show
ip bgp neighbors command.
b Check whether all BGP routes are being learned from all BGP neighbors by running the
show ip route bgp command.
After a complete failover to site 2, all workloads run on the secondary site (promoted primary) and
traffic is routed through the UDLR and the NSX Edges at site 2.
n Achieve a full failback of all workloads from site 2 to original primary site 1 with minimal
downtime.
n Automatically recover all Edge interface settings and BGP protocol configuration settings at
site 1.
Note
n The administrator can do the failback tasks manually by using either the vSphere Web Client
or by running the NSX REST APIs. In addition, the administrator can automate some failback
tasks by running a script file that contains the APIs to run during the failback. This scenario
explains manual failback steps using the vSphere Web Client. However, if any step requires
the use of either the CLI or the NSX REST APIs, adequate instructions are provided.
n In this scenario, the disaster recovery workflow is specific to the topology explained earlier,
which has a primary NSX Manager and a single secondary NSX Manager. The workflow with
multiple secondary NSX Managers is not in the scope of this scenario.
Prerequisites
n vCenter Servers at sites 1 and 2 are deployed with Enhanced Linked Mode.
n The firewall is disabled on both ESGs because ECMP is enabled on the UDLRs and all traffic must be allowed.
n At site 2 (promoted primary), no changes are made in the universal logical components before
initiating the failback process.
Procedure
1 When the primary site 1 is up again, make sure that the NSX Manager and the controller cluster
nodes are powered on and running.
c In the System Overview pane, check the status of the NSX Manager and the controller
cluster nodes.
A filled green dot next to the NSX Manager and the controller nodes means that these NSX
components are powered on and running.
a On the Installation and Upgrade page, navigate to Management > NSX Managers, and
observe that NSX Managers at both sites have a primary role.
b On the NSX Controller Nodes page, ensure that the Universal Controller Cluster (UCC)
nodes exist at both sites.
3 Shut down all three UCC nodes that are associated with site 2 (promoted primary).
4 On the NSX Controller Nodes page, delete all three UCC nodes that are associated with
site 2 (promoted primary).
Tip You can use the NSX REST API to remove one controller node at a time by running
the following API call: [Link]. However, delete the last controller node forcefully by running
the following API call: https://NSX_Manager_IP/api/2.0/vdn/controller/{controllerID}?forceRemoval=true.
A scripted sketch of these calls appears after this procedure.
5 Ensure that there are no changes in the universal components at site 2 (promoted primary)
before proceeding to the next step.
6 Remove the primary role on the NSX Manager at site 2 (promoted primary).
a On the Installation and Upgrade page, navigate to Management > NSX Managers.
b Select the NSX Manager at site 2, and click Actions > Remove Primary Role.
A message prompts you to ensure that the controllers owned by the NSX Manager at site 2
are deleted before removing the primary role.
c Click Yes.
7 On the primary NSX Manager at site 1, remove the associated secondary NSX Manager.
a On the NSX Managers page, select the NSX Manager that is associated with site 1.
c Select the Perform Operation even if NSX Manager is inaccessible check box.
d Click Remove.
8 Register the NSX Manager at site 2, which is in Transit, as the secondary of primary NSX
Manager at site 1.
Caution Because local egress is disabled on the UDLR, the UDLR Control VM (Edge Appliance
VM) at site 2 is automatically deleted. Therefore, before registering the NSX Manager at site 2
(currently in the Transit role) with a secondary role, make sure that the controller cluster nodes at
site 2 are deleted. If the controller cluster nodes are not deleted, network traffic disruption can
occur.
a On the Installation and Upgrade page, navigate to Management > NSX Managers.
e Enter the user name and password of the NSX Manager at site 2, and accept the security
certificate.
f Click Add.
n NSX Manager at site 1 has a primary role, and NSX Manager at site 2 has a secondary role.
n On the NSX Manager at site 2, three shadow controller nodes appear with status as
Disconnected. The following message is displayed: Can read or update controller
cluster properties only on Primary or Standalone Manager.
This message means that the secondary NSX Manager at site 2 is unable to establish
connectivity with the Universal Controller Cluster nodes on the primary NSX Manager at
site 1. However, after a few seconds, the connection gets reestablished and the status
changes to Connected.
9 Power on the Control VM (Edge Appliance VM) on the UDLR and the NSX Edges at site 1.
b Right-click the VM Name (VM ID) of the UDLR Control VM and click Power on.
c Repeat step (b) for the Edge VMs that you want to power on.
d Wait until the UDLR Control VM and Edge VMs are up and running before proceeding to
the next step.
10 Make sure that the UDLR Control VM (Edge Appliance VM) that is associated with the
secondary NSX Manager at site 2 is automatically deleted.
c On the Status page, observe that no Edge Appliance VM is deployed on the UDLR.
11 Update the NSX Controller state on the primary site 1 so that the controller services are synced
with the secondary site 2.
Note The workload VMs continue to exist at site 2. Therefore, you must manually migrate the
workload VMs to site 1.
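The Tip in step 4 notes that the controller nodes at site 2 can also be removed through the NSX REST API. The following Python sketch illustrates that idea under stated assumptions: the forceRemoval call is taken from the Tip, the per-node DELETE path is assumed to follow the same pattern, and the host name, credentials, and controller IDs are placeholders. It is not a substitute for the NSX API Guide.

# Illustrative removal of the UCC nodes at site 2 (promoted primary) by API,
# mirroring GUI steps 3 and 4 above. Controller IDs, host, and credentials are
# placeholders; the plain DELETE path is assumed from the forceRemoval call
# shown in the Tip.
import requests

NSX_MANAGER = "nsxmgr-site2.example.com"        # placeholder
AUTH = ("admin", "changeme")                    # placeholder credentials
CONTROLLERS = ["controller-4", "controller-5", "controller-6"]  # placeholders

for index, controller_id in enumerate(CONTROLLERS):
    url = f"https://{NSX_MANAGER}/api/2.0/vdn/controller/{controller_id}"
    # The last remaining controller node must be removed forcefully.
    if index == len(CONTROLLERS) - 1:
        url += "?forceRemoval=true"
    resp = requests.delete(url, auth=AUTH, verify=False, timeout=60)
    resp.raise_for_status()
    print(f"Requested removal of {controller_id} (HTTP {resp.status_code})")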
Results
The manual failback of all NSX components and workloads from the secondary site (site 2) to the
primary site (site 1) is complete.
What to do next
Verify whether the failback to primary site 1 is 100% complete by doing these steps on site 1:
2 Check whether the Control VM (Edge Appliance VM) is deployed on the UDLR.
4 Perform a Communication Health Check on each host cluster that is prepared for NSX.
c Select one cluster at a time, and check whether the Communication Channel Health status
of the cluster is UP.
d For each host in the cluster, check whether the Communication Channel Health status of
the host is UP.
5 Log in to the CLI console of the UDLR Control VM (Edge Appliance VM), and do these steps:
a Check whether all BGP neighbors are established and the status is UP by running the show
ip bgp neighbors command.
b Check whether all BGP routes are being learned from all BGP neighbors by running the
show ip route bgp command.
After a complete failback to site 1, all workloads run on the primary site 1 and traffic is routed
through the UDLR and the NSX Edges at site 1.
To establish communication between VMs on different logical switches, perform the following steps:
1 Create logical switches and connect each VM to a respective logical switch.
2 Deploy a logical router within the same transport zone as the switches.
3 Attach each logical switch to the logical router so that the router can handle east-west traffic routing.
This configuration allows seamless VM-to-VM communication across logical network segments by using the router to interconnect separate layer 2 networks and manage traffic efficiently between them.
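As an illustration of the first step, a logical switch can also be created programmatically through the NSX Manager REST API. The sketch below is a minimal example, assuming a transport zone (scope) ID such as vdnscope-1 and placeholder NSX Manager credentials; verify the endpoint and payload against the NSX API Guide for your version, and attach the resulting switch to the logical router by configuring the router's interfaces as described in the routing chapters.

# Minimal sketch: create a logical switch (virtual wire) in a transport zone
# through the NSX Manager REST API. Scope ID, host, and credentials are
# placeholders; the payload mirrors the documented virtualWireCreateSpec.
import requests

NSX_MANAGER = "nsxmgr.example.com"   # placeholder
AUTH = ("admin", "changeme")         # placeholder credentials
SCOPE_ID = "vdnscope-1"              # transport zone (local or universal) ID

payload = """
<virtualWireCreateSpec>
  <name>web-tier-ls</name>
  <description>Logical switch for the web tier VMs</description>
  <tenantId>default</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use a trusted certificate in production
    timeout=60,
)
resp.raise_for_status()
# The API returns the new virtual wire ID (for example, virtualwire-10), which
# you then reference when adding an internal interface on the logical router.
print("Created logical switch:", resp.text)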
Cross-vCenter NSX facilitates operations across multiple vCenter environments by centralizing networking and security configuration. This enables consistent management across all associated vCenter Servers by using universal logical networks and security controls. It involves components such as the Universal NSX Controller Cluster and the universal transport zone, which allow logical networks to span multiple data centers. The implications for multi-site network management include simplified administration, global security policy enforcement, and seamless VM mobility across sites. Universal logical switches and routers ensure that network topology changes need to be made only once and are propagated immediately across all connected sites.
Service Composer in NSX is used to create and manage security policies, including firewall rules, with greater flexibility and efficiency. It allows administrators to apply consistent security policies to dynamically created security groups and to monitor rule impact through an intuitive interface. Synchronization of Service Composer rules with firewall rules is crucial to ensure that any policy updates or emergency changes are accurately reflected in the active firewall configuration, mitigating security risks from unsynchronized states and ensuring that policies are enforced uniformly across the network infrastructure.
Enabling ARP suppression on a logical switch reduces broadcast traffic within the network, which leads to more efficient bandwidth utilization and faster virtual machine communication because ARP requests are answered locally. MAC learning allows the logical switch to dynamically learn and record the interface on which a MAC address resides, aiding efficient packet forwarding and reducing unnecessary data replication. Together, these features optimize network performance by minimizing unnecessary network traffic and improving throughput.
NSX implements load balancing using NSX Edge as a virtual appliance that distributes client requests across multiple server instances. For SSL offloading application profiles, configuration steps include: 1) Create an HTTPS application profile specifying the profile name, enabling the 'Insert X-Forwarded-For HTTP header', and selecting cipher algorithms. 2) Define SSL parameters, selecting appropriate client and server certificates. 3) Add a virtual server, associate it with the application profile, and enable necessary settings for the virtual network configuration. This setup efficiently manages SSL encryption overhead, improves server performance, and enhances security by terminating SSL connections at the load balancer.
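The same application profile can be defined through the NSX Edge load balancer REST API. The sketch below is illustrative only: the edge ID, host, and credentials are placeholders, and the XML element names are assumptions based on the load balancer configuration schema, so confirm them (and add the client and server SSL certificate elements) by using the NSX API Guide before relying on this.

# Illustrative creation of an HTTPS application profile on an NSX Edge load
# balancer. Edge ID, host, credentials, and the exact XML element names are
# assumptions for this sketch; the SSL certificate bindings are intentionally
# omitted and must be added per the NSX API Guide.
import requests

NSX_MANAGER = "nsxmgr.example.com"   # placeholder
AUTH = ("admin", "changeme")         # placeholder credentials
EDGE_ID = "edge-1"                   # placeholder ESG ID

profile = """
<applicationProfile>
  <name>https-offload-profile</name>
  <template>HTTPS</template>
  <insertXForwardedFor>true</insertXForwardedFor>
  <sslPassthrough>false</sslPassthrough>
</applicationProfile>
"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config/applicationprofiles",
    data=profile,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use a trusted certificate in production
    timeout=60,
)
resp.raise_for_status()
print("Application profile created, HTTP status:", resp.status_code)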
Before deploying a logical router in NSX, several prerequisites and considerations must be addressed. The controller cluster must be active because the logical router relies on the controllers to propagate routing information. The deployment should ensure connectivity between ESXi hosts on UDP port 6999 for ARP proxy functionality if VLANs are involved. Logical routers should be part of the same transport zone as the logical switches they connect to. It is recommended to avoid placing them on the same host as upstream ESGs when using ECMP setups, using DRS anti-affinity rules instead to mitigate potential forwarding disruptions from host failures. Finally, the destination host for the logical router appliance must be chosen carefully, taking network topology and high availability requirements into account.
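The anti-affinity recommendation can be automated against vCenter Server. The following pyVmomi sketch is one possible way to do it, with the vCenter address, credentials, cluster name, and VM names all as placeholders; it adds a DRS anti-affinity rule that keeps the logical router control VM and the upstream ESG on different hosts.

# Sketch: add a DRS anti-affinity rule so the logical router control VM and the
# upstream ESG never run on the same host (recommended for ECMP topologies).
# vCenter address, credentials, cluster name, and VM names are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()

    def find_by_name(vim_type, name):
        """Return the first inventory object of the given type with a matching name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
        return next(obj for obj in view.view if obj.name == name)

    cluster = find_by_name(vim.ClusterComputeResource, "Edge-Cluster")
    vms = [find_by_name(vim.VirtualMachine, name)
           for name in ("dlr-control-vm-0", "esg-01")]

    rule = vim.cluster.AntiAffinityRuleSpec(name="separate-dlr-and-esg",
                                            enabled=True, vm=vms)
    rule_spec = vim.cluster.RuleSpec(operation="add", info=rule)
    spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    print("Anti-affinity rule submitted to DRS.")
finally:
    Disconnect(si)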
NSX Data Center for vSphere comprises several critical components, each interacting to provide a seamless networking and security service. The Data Plane performs forwarding at the virtual switch layer on the ESXi hosts. The Control Plane manages the distribution of logical network topology information to all hosts in the fabric. The Management Plane provides a centralized interface across the entire NSX environment, and the Consumption Platform allows cloud management platforms to consume the network services provided by NSX. NSX Edge acts as a gateway that provides routing, firewall, load balancing, and VPN services. These components interact through communication protocols and management interfaces to ensure efficient network operations and security.
Moving an NSX Edge HA pair to different vCenter Servers causes the loss of high availability (HA) functionality because HA operation requires both appliances to reside within the same vCenter Server. This migration can result in traffic disruptions as the HA pair does not synchronize their states, potentially resulting in bidirectional traffic interruption or packet loss. Therefore, maintaining HA functionality is critical for ensuring continuous service delivery and preventing downtime due to network failures.
Configuring an L2 VPN over SSL in NSX involves setting up an L2 VPN server and client on NSX Edge devices. For the server: 1) Log in to the vSphere Web Client and select the NSX Edge to configure. 2) Enable L2 VPN, selecting 'Server' for the L2 VPN mode. 3) Enter the listener IP, default port (443), and encryption algorithms for secure communication. 4) Bind the appropriate SSL certificate to the VPN server. On the client side: 1) Specify the server address and port, select encryption algorithms, and configure the stretched interfaces. This configuration ensures secure, encrypted L2 network extensions across geographically dispersed data centers, maintaining consistency in virtual network segmentation.
Universal logical routers are crucial in cross-vCenter NSX environments for providing consistent routing across multiple NSX domains. They facilitate seamless connectivity for VMs hosted in different vCenter domains by sharing routing information globally within a universal transport zone. These routers enable network policy consistency, minimize configuration complexity, and assist in the maintenance of high availability and disaster recovery setups. Universal logical routers function by distributing routing updates through a centralized control plane, efficiently managing traffic flow across geographically dispersed networks.