5.4.1 Server setup
This section shows how to set up management access to the ESXi hosts.
Follow the steps below to configure the management interface on the ESXi hosts:
1. Access the console through the iDRAC on the server.
2. Log in to ESXi.
3. Select Configure Management Network.
4. Select Network Adapters.
5. Choose vmnic0 Mezzanine 1A.
ESXi Network Adapter setting
6. Select <Enter> OK.
7. Select IPv4 Configuration.
8. Select Set static IPv4 address and network configuration.
9. Enter an IPv4 address (example: [Link]).
10. Enter a subnet mask (example: [Link]).
11. Enter a default gateway (example: [Link]).
ESXi IPv4 Configuration
12. Select <Enter> OK.
13. Select DNS Configuration.
14. Select Use the following DNS server addresses and hostname.
15. Enter a Primary DNS Server (example: [Link]).
16. (Optional) Enter an Alternate DNS Server.
17. Enter a Hostname (example: MXvSAN01).
18. Select <Enter> OK.
19. Select <Esc> Exit.
20. Apply changes and restart the management network.
21. Repeat steps 1 through 20 for servers 2, 3, and 4.
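For administrators who prefer scripting, the same settings can be applied with esxcli instead of the DCUI. The following is a minimal sketch that pushes the commands over SSH with Python and paramiko; it assumes SSH has been temporarily enabled on the host, and the host details and addresses shown are hypothetical placeholders, not the values used in this guide.

```python
import paramiko

# Hypothetical host details; substitute your own values.
HOST, USER, PASSWORD = "192.168.0.10", "root", "password"

commands = [
    # Steps 8-11: static IPv4 address, netmask, and gateway on vmk0.
    # Changing the address you are connected through will drop this SSH
    # session, so run this from a connection that remains valid.
    "esxcli network ip interface ipv4 set -i vmk0 -t static "
    "-I 172.16.1.11 -N 255.255.255.0 -g 172.16.1.254",
    # Steps 13-17: primary DNS server and hostname.
    "esxcli network ip dns server add --server=172.16.1.5",
    "esxcli system hostname set --host=MXvSAN01",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)
for cmd in commands:
    _, stdout, _ = client.exec_command(cmd)
    print(cmd, "-> exit", stdout.channel.recv_exit_status())
client.close()
```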
Note: A Windows Server 2016 DNS service is accessible by all hosts in this deployment example. Host (A)
records for forward lookup zones and Pointer (PTR) records for reverse lookup zones are configured for each
ESXi host and the vCenter appliance. DNS server installation and administration are not within the scope of
this document. DNS is a requirement for the vCenter Server Appliance.
5.4.2 Create a vCenter datacenter and vSAN cluster
This section details the creation of a new datacenter and cluster within vCenter.
Note: This example assumes a vCenter Server Appliance is already deployed within the data center where
the chassis is being installed.
Follow the steps below to create a new datacenter and cluster:
1. Right-click on the vCenter Server, select New Datacenter.
2. Enter a datacenter name, click OK.
3. Right-click on the datacenter, select New Cluster.
4. Enter a cluster name, click OK.
Create Datacenter and Cluster
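The same datacenter and cluster can be created programmatically with pyVmomi. This is a minimal sketch, not part of the validated procedure; the vCenter endpoint, credentials, and object names (vcenter.example.local, "vSAN Datacenter", "vSAN Cluster") are assumptions for illustration.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical vCenter endpoint and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Steps 1-2: create the datacenter at the root folder.
dc = content.rootFolder.CreateDatacenter(name="vSAN Datacenter")

# Steps 3-4: create an empty cluster inside the new datacenter.
cluster = dc.hostFolder.CreateClusterEx(name="vSAN Cluster",
                                        spec=vim.cluster.ConfigSpecEx())
print("Created:", dc.name, "/", cluster.name)
```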
5.4.3 Add hosts to vCenter cluster
This section details adding hosts to the vSAN cluster.
Follow the steps below to add hosts to the vSAN cluster created in the last section:
1. Right-click on the cluster, select Add Host.
2. Enter the IP address of sled 1, then click NEXT.
3. Enter username and password, click NEXT.
4. Review summary, click NEXT.
5. Assign license, click NEXT.
6. Select the desired lockdown mode; keep it Disabled for now, click NEXT.
7. Click FINISH.
8. Repeat for sleds 2, 5, and 6.
Added hosts to vSAN cluster
Note: vSAN health issues and warnings are expected at this stage.
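Hosts can also be added with pyVmomi, as in the hypothetical sketch below (connection details as in the previous sketch; the sled IP addresses are placeholders). Depending on certificate configuration, vCenter may additionally require the host SSL thumbprint in the connect spec.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

# Find the cluster created in the previous section.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN Cluster")
view.Destroy()

# Steps 1-8: add each sled to the cluster (placeholder addresses).
for ip in ("172.16.1.11", "172.16.1.12", "172.16.1.13", "172.16.1.14"):
    spec = vim.host.ConnectSpec(hostName=ip, userName="root",
                                password="password", force=True)
    # spec.sslThumbprint may be required if certificates are verified.
    cluster.AddHost_Task(spec=spec, asConnected=True)
```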
5.4.4 Configure virtual networking
This section provides details on configuring the virtual networking using vCenter.
Create vDS
1. On the web client Home screen, select Networking.
2. Right-click on vSAN Datacenter. Select Distributed switch > New Distributed Switch.
3. Provide a name for the distributed switch (example: vds01-vSAN). Click Next.
4. On the Select version page, select 6.6.0 – ESXi 6.7 and later. Click Next.
5. On the Configure settings page:
a. Leave the Number of uplinks set to 4.
b. Leave Network I/O Control set to Enabled.
c. Uncheck the Create a default port group box.
d. Click Next followed by Finish.
Create vDS
Edit the vDS to use jumbo frames
6. Right click on the vDS, select Settings > Edit Settings.
7. Leave the General settings as default.
8. Select the Advanced page.
a. Change the MTU value to 9000.
b. Leave all other settings as default.
9. Click OK.
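For reference, a hypothetical pyVmomi sketch of the same vDS creation (version 6.6.0, four uplinks, MTU 9000) follows; connection details are assumed as in the earlier sketches, and Network I/O Control can be enabled on the resulting switch afterward.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datacenter], True)
dc = next(d for d in view.view if d.name == "vSAN Datacenter")

# Four uplinks and jumbo frames, matching steps 5a and 8a above.
cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    name="vds01-vSAN",
    uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["Uplink 1", "Uplink 2", "Uplink 3", "Uplink 4"]),
    maxMtu=9000)
spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=cfg,
    productInfo=vim.dvs.ProductSpec(version="6.6.0"))
task = dc.networkFolder.CreateDVS_Task(spec)
# After the task completes, Network I/O Control can be enabled with:
#   dvs.EnableNetworkResourceManagement(enable=True)
```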
Add distributed port groups
Table 11: Values for distributed port groups

Purpose            Distributed Port Group Name    VLAN
ESXi management    Management-vds01-vSAN          2030
vMotion            vMotion-vds01-vSAN             2031
vSAN               vSAN-vds01-vSAN                2032
The following steps can be used to create the management port group:
1. On the web client Home screen, select Networking.
2. Right-click on the distributed switch (vds01-vSAN). Select Distributed Port Group >
New Distributed Port Group.
3. On the Select name and location page, provide a Name for the distributed port group
(example: Management-vds01-vSAN). Click Next.
4. On the Configure settings page, keep all values as default, leaving VLAN type as None. Click Next.
5. Click Finish.
The following steps can be used to create the vMotion and vSAN port groups:
1. On the web client Home screen, select Networking.
2. Right-click on the distributed switch (vds01-vSAN). Select Distributed Port Group >
New Distributed Port Group.
3. On the Select name and location page, provide a Name for the distributed port group
(example: vMotion-vds01-vSAN). Click Next.
4. On the Configure settings page, set the VLAN type as VLAN, enter the appropriate VLAN
ID (2031). Click Next.
5. Click Finish.
6. Create the final distributed port group (vSAN) using the values in Table 11.
After creating the distributed port groups (using the example values), your configuration should look like Figure 40.
Distributed Port Groups
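The three port groups from Table 11 can likewise be created in one call, as in this hypothetical sketch (connection details assumed as before; the management group is left untagged, matching the wizard steps above):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "vds01-vSAN")

# Table 11 values; management stays untagged (VLAN type None).
groups = {"Management-vds01-vSAN": None,
          "vMotion-vds01-vSAN": 2031,
          "vSAN-vds01-vSAN": 2032}

specs = []
for name, vlan in groups.items():
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    if vlan is not None:
        port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=vlan, inherited=False)
    specs.append(vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", defaultPortConfig=port_cfg))

task = dvs.AddDVPortgroup_Task(specs)
```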
Configure teaming and failover on management and vMotion port groups:
1. On the web client Home screen, select Networking.
2. Right-click on the distributed switch, then select Distributed Port Group > Manage Distributed
Port Groups.
3. Select only the Teaming and failover checkbox, then click Next.
4. Select the management and vMotion port groups.
5. Click Next.
6. On the Teaming and failover page:
a. For Load balancing, select Route based on physical NIC load.
b. For Failover order, confirm Uplink 1 and Uplink 2 are both under the Active uplinks
section. Move Uplink 3 and Uplink 4 to Unused uplinks. Leave other settings at their
defaults. An example is shown in Figure 41.
7. Click Next, then Finish to apply the settings.
Teaming and failover settings for distributed port groups
Configure teaming and failover on vSAN port group:
1. On the web client Home screen, select Networking.
2. Right-click on the distributed switch, then select Distributed Port Group > Manage Distributed
Port Groups.
3. Select only the Teaming and failover checkbox, then click Next.
4. Select the vSAN port group.
5. Click Next.
6. On the Teaming and failover page:
a. For Load balancing, select Route based on physical NIC load.
b. For Failover order, confirm Uplink 3 and Uplink 4 are both under the Active uplinks
section. Move Uplink 1 and Uplink 2 to Unused uplinks. Leave other settings at their
defaults.
c. Click Next, then Finish to apply the settings.
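Both teaming policies can also be applied programmatically. The hypothetical sketch below sets load-based teaming and the active uplinks per port group; uplinks omitted from the active and standby lists are treated as unused, which reproduces the failover order described above. Connection details are assumed as in the earlier sketches.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "vds01-vSAN")

def set_teaming(pg_name, active_uplinks):
    """Route based on physical NIC load; non-listed uplinks become unused."""
    pg = next(p for p in dvs.portgroup if p.name == pg_name)
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False,
                                value="loadbalance_loadbased"),
        uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
            inherited=False, activeUplinkPort=active_uplinks,
            standbyUplinkPort=[]))
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion, defaultPortConfig=port_cfg)
    pg.ReconfigureDVPortgroup_Task(spec)

set_teaming("Management-vds01-vSAN", ["Uplink 1", "Uplink 2"])
set_teaming("vMotion-vds01-vSAN", ["Uplink 1", "Uplink 2"])
set_teaming("vSAN-vds01-vSAN", ["Uplink 3", "Uplink 4"])
```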
Add and manage hosts to the vDS:
1. On the web client Home screen, select Networking.
2. Right-click on the distributed switch and select Add and Manage Hosts.
3. On the Select task page, make sure Add hosts is selected, then click Next.
4. On the Select hosts page, click New hosts, then select the check box next to each host in the
vSAN cluster.
5. Click OK, then click Next.
6. On the Manage physical network adapters page, each host is listed with its vmnics beneath it.
a. vmnic0 is in use by vSwitch0 and carries the established management network and VMkernel. Do
not change settings for vmnic0 at this time.
b. Select vmnic1 on the first host and click Assign uplink.
c. Select Uplink 2, then click OK.
d. Select vmnic2 on the first host and click Assign uplink.
e. Select Uplink 3, then click OK.
f. Select vmnic3 on the first host and click Assign uplink.
g. Select Uplink 4, then click OK.
h. Repeat substeps b through g to configure the remaining hosts, then click Next.
Manage physical adapters
7. On the Manage VMkernel network adapters page, each host is listed with its VMkernel
adapters beneath it. Only the default ESXi management VMkernel will be present.
a. Leave all settings as default. The management VMkernel will be migrated in another step. Click
NEXT.
8. On the Migrate VM networking page, leave all settings as default. Click NEXT.
9. Review the Ready to complete summary. Click Finish.
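A hypothetical pyVmomi equivalent of this wizard pass is sketched below. For brevity it hands vmnic1 through vmnic3 to the switch and lets vCenter map them to free uplink ports in order; pinning each vmnic to a specific uplink (Uplink 2 through Uplink 4, as above) would additionally require looking up uplink port keys. Connection details and object names are assumptions.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN Cluster")
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "vds01-vSAN")

# Add every cluster host as a vDS member, backed by vmnic1-vmnic3.
members = [
    vim.dvs.HostMember.ConfigSpec(
        operation="add", host=host,
        backing=vim.dvs.HostMember.PnicBacking(pnicSpec=[
            vim.dvs.HostMember.PnicSpec(pnicDevice=n)
            for n in ("vmnic1", "vmnic2", "vmnic3")]))
    for host in cluster.host]

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion, host=members)
dvs.ReconfigureDvs_Task(spec)
```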
Move the management VMkernel to the vDS for each host:
1. On the web client Home screen, select Networking.
2. Right-click on the distributed switch and select Add and Manage Hosts.
3. On the Select task page, make sure Manage host networking is selected, then click Next.
4. On the Select hosts page, click Attached hosts, then select the check box next to each host in
the vSAN cluster.
5. Click OK, then click Next.
6. On the Manage physical adapters page, make no changes, click NEXT.
7. On the Manage VMkernel adapters page, migrate the management VMkernel.
a. Select the ESXi management VMkernel adapter, vmk0, on the first host and click Assign port group.
b. Choose the management port group (example: Management-vds01-vSAN). Click OK.
c. Repeat substeps a and b for each of the remaining hosts in the vSAN cluster.
Manage VMkernel adapters
8. Click NEXT.
9. On the Migrate VM networking page, make no changes, click NEXT.
10. Review the Ready to complete summary. Click Finish.
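The vmk0 migration itself reduces to a single UpdateVirtualNic call per host, as in this hypothetical sketch (object names and connection details assumed as before):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN Cluster")
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "vds01-vSAN")
pg = next(p for p in dvs.portgroup if p.name == "Management-vds01-vSAN")

# Point each host's vmk0 at the management distributed port group.
for host in cluster.host:
    nic_spec = vim.host.VirtualNic.Specification(
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid, portgroupKey=pg.key))
    host.configManager.networkSystem.UpdateVirtualNic("vmk0", nic_spec)
```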
Move vmnic0 from vSwitch0 to vDS Uplink 1:
1. On the web client Home screen, select Networking.
2. Right-click on the distributed switch and select Add and Manage Hosts.
3. On the Select task page, make sure Manage host networking is selected, then click Next.
4. On the Select hosts page, click Attached hosts, then select the check box next to each host in
the vSAN cluster.
5. Click OK, then click Next.
6. On the Manage physical adapters page, each host is listed with its vmnics beneath it.
a. Select vmnic0 on the first host and click Assign uplink.
b. Select Uplink 1, then click OK.
c. vmnic1, vmnic2, and vmnic3 were configured in the previous set of steps and are in their final
configuration state. Do not change their settings.
7. Repeat substeps a through c to configure the remaining hosts, then click Next.
Migrate vmnic0
8. On the Manage VMkernel adapters page, make no changes, click NEXT.
9. On the Migrate VM networking page, make no changes, click NEXT.
10. Review the Ready to complete summary. Click Finish.
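Programmatically, this pass is two operations per host: release vmnic0 from vSwitch0, then edit the host's vDS membership so all four vmnics are in the backing. A hypothetical sketch, assuming vmk0 was already migrated as above and object lookups as in the earlier sketches:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN Cluster")
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "vds01-vSAN")

members = []
for host in cluster.host:
    ns = host.configManager.networkSystem
    # Release vmnic0 from vSwitch0 (the wizard does this implicitly).
    vss = next(s for s in ns.networkInfo.vswitch if s.name == "vSwitch0")
    vss_spec = vss.spec
    vss_spec.bridge = None          # drop the physical uplink
    ns.UpdateVirtualSwitch("vSwitch0", vss_spec)
    # Edit the host's membership so vmnic0 joins the vDS backing; it
    # lands on the remaining free uplink (Uplink 1 in this example).
    members.append(vim.dvs.HostMember.ConfigSpec(
        operation="edit", host=host,
        backing=vim.dvs.HostMember.PnicBacking(pnicSpec=[
            vim.dvs.HostMember.PnicSpec(pnicDevice=n)
            for n in ("vmnic0", "vmnic1", "vmnic2", "vmnic3")])))

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion, host=members)
dvs.ReconfigureDvs_Task(spec)
```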
Add VMkernels to vSAN port group
1. On the web client Home screen, select Networking.
2. Right-click on the vSAN-vds01-vSAN port group.
3. Select Add VMkernel Adapters.
4. On the Select hosts page, click Attached hosts, then select the check box next to each host in
the vSAN cluster.
5. Click OK, then click Next.
6. On the Configure VMkernel adapter page:
a. Set the MTU to Custom and enter a value of 9000.
b. For Available Services, check the vSAN box.
c. Click Next.
7. On the IPv4 settings page:
a. Select Use static IPv4 settings.
b. Enter an IPv4 address for each host (example: [Link], [Link], and so on).
c. For Gateway, choose Configure on VMkernel adapters, and
enter a gateway (example: [Link]).
d. Click NEXT.
Add vSAN VMkernel
8. Review the Ready to complete summary. Click Finish.
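Creating and tagging these adapters can also be scripted. The hypothetical helper below adds a jumbo-frame VMkernel adapter on a distributed port group and tags it for a service; the addresses and netmask are placeholders, and the per-adapter gateway override from step 7c is omitted for brevity.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN Cluster")
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "vds01-vSAN")

def add_vmk(host, pg_name, ip, service):
    """Add an MTU-9000 VMkernel adapter on a dv port group, tag a service."""
    pg = next(p for p in dvs.portgroup if p.name == pg_name)
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip,
                             subnetMask="255.255.255.0"),
        mtu=9000,
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid, portgroupKey=pg.key))
    device = host.configManager.networkSystem.AddVirtualNic("", nic_spec)
    # Tag the new vmk for its service ("vsan" here, "vmotion" later).
    host.configManager.virtualNicManager.SelectVnicForNicType(service, device)

# Placeholder vSAN addresses, one per host.
for host, ip in zip(cluster.host,
                    ("172.16.32.11", "172.16.32.12",
                     "172.16.32.13", "172.16.32.14")):
    add_vmk(host, "vSAN-vds01-vSAN", ip, "vsan")
```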
Note: The network configuration examples in this document are designed for flexibility.
Most vSAN deployments are Layer 2 and therefore do not require use of gateways on
vSAN VMkernel adapters. Use of the gateways and VRRP on the switch configurations
will not interfere with operation of Layer 2 deployments. Administrators that require
Layer 3 and more advanced vSAN stretched cluster designs can use these example
configurations in their deployment.
Add VMkernels to vMotion port group
1. On the web client Home screen, select Networking.
2. Right-click on the vMotion-vds01-vSAN port group.
3. Select Add VMkernel Adapters.
4. On the Select hosts page, click Attached hosts, then select the check box
next to each host in the vSAN cluster.
5. Click OK, then click Next.
6. On the Configure VMkernel adapter page:
a. Set the MTU to Custom and enter a value of 9000.
b. For Available Services, check the vMotion box.
c. Click Next.
7. On the IPv4 settings page:
a. Select Use static IPv4 settings.
b. Enter an IPv4 address for each host (example: [Link], [Link], and so on).
c. For Gateway, choose Configure on VMkernel adapters, and
enter a gateway (example: [Link]).
d. Click NEXT.
Add vMotion VMkernel
8. Review the Ready to complete summary. Click Finish.
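The same add_vmk helper from the previous sketch covers this procedure as well; reusing the lookups and function defined there, only the port group, placeholder addresses, and service tag change:

```python
# Placeholder vMotion addresses, one per host (reuses add_vmk and the
# cluster/dvs lookups from the previous sketch).
for host, ip in zip(cluster.host,
                    ("172.16.31.11", "172.16.31.12",
                     "172.16.31.13", "172.16.31.14")):
    add_vmk(host, "vMotion-vds01-vSAN", ip, "vmotion")
```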
Enable and configure vSAN
1. From Hosts and Clusters, click on the vSAN cluster and select the Configure tab.
a. Navigate to vSAN > Services.
2. Click the Configure button opposite the vSAN is Turned OFF notification.
3. Choose Single Site cluster, click NEXT.
4. For Services, leave all settings as default, click NEXT.
5. For Claim Disks, change Group by: to Host.
a. Under each host, select a disk for the cache tier.
b. Under each host, select the remaining disks for the capacity tier. If any
disks are to be left out of the vSAN, mark them as Do not claim.
c. Ensure an equal number of cache and capacity disks are used for each
host. The configuration notification at the bottom should indicate that the
configuration is valid.
d. Click NEXT.
6. For fault domains, leave all settings as default, click NEXT.
7. Review the Ready to complete page and click FINISH.
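Enabling vSAN on the cluster reduces to a single reconfigure call, shown in the hypothetical sketch below. Manual disk claiming (autoClaimStorage=False) matches the wizard's behavior; creating the per-host disk groups (cache and capacity tiers) is done through the wizard or the vSAN management API and is omitted here. Connection details and the cluster name are assumptions.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # hypothetical
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN Cluster")

# Turn on vSAN; autoClaimStorage=False keeps disk claiming manual.
vsan_cfg = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
        autoClaimStorage=False))
spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_cfg)
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```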
Note: Configuration of fault domains and performance-based settings for vSAN is not
within the scope of this document. For further information on completing the vSAN
configuration, refer to the vSAN administration guide.
6 Alternative HW configurations
This section summarizes additional hardware options that can be used with this
guide with minor modifications.
6.1 MX9116n Fabric Switching Engine
The MX9116n FSE switch shown in Section 2.2 is a high-performance switch
capable of serving as a TOR switch for up to 10 chassis. This switch can be used in
either example in this document to increase the bandwidth to external networks.
Both the MX5108n and MX9116n use OS10 Enterprise Edition.
Example #1 – Data Center-in-a-box
- Replace the two MX5108n switches with two MX9116n switches
- Configuration of global and server-facing ports remains unchanged, apart
from the interface port numbering scheme
- Configure external network uplinks as desired (not shown)
- No change to virtual networking configuration
Example #2 – Chassis in leaf-spine
Option 1
- Replace the two MX5108n switches in fabric A1/A2 with two MX9116n
switches
- Keep the two MX5108n switches in fabric B1/B2 if dedicated vSAN
links are desired
- Configuration of global and server facing ports remains unchanged
- Configure external network uplinks as desired (not shown)
Option 2
- Replace the two MX5108n switches in fabric A1/A2 with two MX9116n
switches
- Remove the two MX5108n switches from B1/B2 (solution loses
dedicated vSAN links)
- Configure the MX9116n as shown in example #1
- Configure the virtual networking as shown in example #1
- Configure external network uplinks as desired (not shown)
6.2 MX5016s Storage Sled
The MX5016s storage sled can be used with either of the two examples in this document. If
there is space in the chassis, additional MX5016s sleds can be utilized.
6.3 MX vSAN Ready Nodes
This document uses an MX740c vSAN Ready Node for validation of all network configurations.
Any MX vSAN Ready Node model can be used in place of the MX740c with no
change to the networking deployment steps.
Choose the model of MX vSAN Ready Node to fit your compute and storage
performance requirements.