We completed the NSX Manager deployment in the previous post, and now we are going to deploy the NSX Controllers, which form the control plane of NSX. NSX Controllers are virtual machines, and there should be at least three of them for redundancy.
It is important to understand that NSX Controllers use a scale-out mechanism called slicing, which divides the workload equally across the controller nodes. All controllers are active at the same time, and if one controller fails, the other nodes take over the workload that was allocated to the failed node.
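The slicing idea can be illustrated with a short Python sketch. Note that the slice count and the round-robin assignment below are purely illustrative, not NSX's actual internal algorithm:

```python
# Illustrative sketch of controller slicing: the workload is divided into
# fixed slices, each owned by one active controller node; when a node
# fails, its slices are redistributed among the surviving nodes.

def assign_slices(controllers, num_slices=16):
    """Round-robin the slices across the active controller nodes."""
    return {s: controllers[s % len(controllers)] for s in range(num_slices)}

nodes = ["controller-1", "controller-2", "controller-3"]
before = assign_slices(nodes)

# Simulate a controller failure: its slices move to the remaining nodes.
after = assign_slices([n for n in nodes if n != "controller-2"])

print(sorted(set(before.values())))  # all three nodes share the work
print(sorted(set(after.values())))   # survivors absorb the failed node's slices
```

The key point the sketch shows is that every slice always has exactly one owner, so no single controller is a standby: all nodes carry work before and after a failure.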
Click on the + sign under the NSX Controller nodes option.
Once the Add Controller window appears, provide the details such as Name, NSX Manager, Datacenter, Cluster/Resource Pool, Datastore, Host, Folder, Connected PortGroup, IP Pool, and Password. If you haven't configured an IP pool for your datacenter yet, click on the Select option.
Click on the New IP Pool…
Provide the details such as the Name of the IP pool, Gateway, Prefix Length, Primary and Secondary DNS, DNS Suffix, and Static IP Pool (starting and ending IPs). Once you are done with the details, click on OK.
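When sizing the static range, it helps to check how many addresses the start/end pair actually covers; a quick sketch using Python's standard `ipaddress` module (the addresses below are made-up examples, not values from this lab):

```python
import ipaddress

# Hypothetical pool range: three controllers plus spare addresses.
start = ipaddress.IPv4Address("192.168.10.50")
end = ipaddress.IPv4Address("192.168.10.59")

pool_size = int(end) - int(start) + 1  # the range is inclusive
print(pool_size)  # 10 addresses

# Three controller nodes each consume one address from the pool.
assert pool_size >= 3
```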
Select the created IP pool and click on OK to start the NSX Controller deployment.
This starts the NSX Controller deployment, and you can monitor the status in the Recent Tasks window.
Prepare ESXi hosts – Deploying the Data Plane
Now we need to prepare the ESXi hosts to be compatible with NSX. Go to the Host Preparation tab and click on Install under Installation Status.
Click on Yes to begin the preparation
Make sure you have the proper licenses installed for NSX; otherwise you will not be able to prepare the ESXi hosts. This step installs the VIBs that provide the logical routing, VXLAN, and distributed firewall features.
What is VXLAN
VXLAN is a Layer 2 over Layer 3 tunneling protocol that allows virtual networks to be extended across routable networks. It encapsulates Ethernet frames with UDP, IP, and VXLAN headers, which add an additional 50 bytes to each frame. That is the main reason to use jumbo frames in an NSX network environment. VMware recommends an MTU of 1600 in NSX networks, including on the associated physical switches.
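The 50-byte figure comes from adding up the encapsulation headers; a quick sketch of the arithmetic:

```python
# VXLAN encapsulation overhead on an untagged IPv4 outer frame:
outer_ethernet = 14  # destination MAC + source MAC + EtherType
outer_ip = 20        # IPv4 header without options
outer_udp = 8        # UDP header
vxlan = 8            # VXLAN flags + VNI + reserved fields

overhead = outer_ethernet + outer_ip + outer_udp + vxlan
print(overhead)  # 50 bytes

# A standard 1500-byte inner frame therefore needs at least 1550 bytes
# on the wire; the recommended MTU of 1600 leaves extra headroom
# (for example, for an outer VLAN tag).
assert 1500 + overhead <= 1600
```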
What is VXLAN Tunnel End Point (VTEP)
In this case, VTEPs are configured as separate VMkernel interfaces on every host that participates in the VXLAN. When a virtual machine sends a packet to another virtual machine in the same VXLAN but on a different ESXi host, the packet is encapsulated by the source hypervisor and sent to the target hypervisor. The target hypervisor decapsulates the header and forwards the packet to the target virtual machine. The outer header of the VXLAN frame contains the source and destination IP addresses of the ESXi hosts.
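For the curious, the 8-byte VXLAN header itself (as defined in RFC 7348: a flags byte with the I bit set, 24 reserved bits, a 24-bit VNI, and 8 more reserved bits) can be built with a few lines of Python; the VNI value below is just an example:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header for a given 24-bit VNI (RFC 7348)."""
    flags = 0x08 << 24                 # I flag set: the VNI field is valid
    return struct.pack("!II", flags, vni << 8)  # VNI sits in the upper 24 bits

hdr = vxlan_header(5001)               # example segment ID
print(len(hdr))    # 8 bytes
print(hdr.hex())   # 0800000000138900
```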
See this quick VMware video How NSX Uses VXLAN
Once we have prepared the ESXi hosts, we need to configure the VTEPs on the hosts. NSX supports multiple vmknics for redundancy and load balancing.
VXLAN is configured on a per-cluster basis, and each cluster in NSX is connected to a vSphere Distributed Switch, after which it can use the logical switching functions. When configuring VXLAN, the following details and requirements should be provided:
- vSphere Distributed Switch (VDS)
- VLAN ID
- MTU size
- IP allocation methods (DHCP or IP pools)
- NIC Teaming policy
- Jumbo frames should be enabled and the MTU size should be 1550 or higher; by default it is set to 1600
To configure VXLAN, go to the Host Preparation tab and click on Not Configured under the VXLAN section.
The Management – Configure VXLAN Networking window appears; provide the details mentioned above. I'm using the same IP pool here, but if you need to, you can create a new IP pool for the VTEP assignment; now you know how to do it.
Note: The number of VTEPs is not editable; it is set to the number of dvUplinks on the switch.
Once you have done that, it will create the VXLAN, and you will see it as Configured.
Now you can see the created dvPortgroups in the Networking view.
We will look at assigning segment ID pools and the rest of the configuration in a different post.