In my last blog post (you can read it here: https://pascalswereld.nl/post/67365305981/nsx) I wrote about the NSX architecture with its out-of-band components, such as the NSX manager and the NSX controller cluster (management and control plane). But is it really true that they don't interfere with data IO?
Time to find out!
I am using the HOL NSX lab to show how this works. This is a preconfigured lab with an NSX manager, an NSX controller HA pair, an edge router and some Linux-based VMs.
First we set up a logical switch, connect it to the perimeter edge router and connect VMs to this switch.
The VM Network switch is being created.
We add it to the perimeter edge with an IP subnet declaration. And yes, don't forget to connect the port.
As you will notice, the subnet 10.1.40.0/24 is connected and the edge port is given the IP address 10.1.40.1.
Next up: adding VMs to this distributed logical network.
I am using two web servers that are currently on another logical network. This action moves the VMs from Web_Logical_Network to the newly created VM Network.
With PuTTY SSH sessions to the VMs we can verify that they have interfaces connected to this network.
We see both VMs in the configured subnet, with web03 at address 10.1.40.13 and web04 at address 10.1.40.14. When we start an ICMP ping we can confirm that data is flowing from one VM to the other, and with that from one logical switch port to the other.
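As a quick sanity check of the addressing, a few lines with Python's standard ipaddress module (just an illustration, not something run in the lab):

```python
import ipaddress

# The logical switch subnet and the addresses seen in the lab.
net = ipaddress.ip_network("10.1.40.0/24")
edge_port = ipaddress.ip_address("10.1.40.1")
web03 = ipaddress.ip_address("10.1.40.13")
web04 = ipaddress.ip_address("10.1.40.14")

# All three addresses sit in the same /24, so web03 and web04 can reach
# each other directly over the logical switch, without routing via the edge.
print(all(ip in net for ip in (edge_port, web03, web04)))  # True
```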
Okay, now let's see how the traffic flows after we shut down the controller HA pair. We go to the VMs in the vCenter inventory.
Here you also notice the edge components.
With the shutdown guest OS operation we shut down both NVP_Controller VMs, which takes the complete HA pair down.
After this we can retry our ICMP data flow.
And lo and behold, data IO is still flowing between web03 and web04. A ping back from web04 to web03 shows that this direction is working as well.
This small example shows that the controller pair does not interfere with already configured components in the data plane. You won't even notice problems when adding new VMs to this logical switch. Let's demonstrate by adding web02.
Network adapter 1 is connected to the VM Network DVS. But why wouldn't it be? The DVS is managed by vCenter, and the host is already part of this DVS (for example, web04 runs on the same host). Looking at the IP address we notice something wrong: the guest is in the .30 subnet instead of .40. Opening /etc/sysconfig/network/ifcfg-eth0 reveals a statically configured IP; again, elementary, my dear Watson. We replace the .30 with .40 and bring the interface down and up again. Now ping is running.
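That fix can be sketched in a few lines of Python (the ifcfg contents and the .12 host address are assumptions for illustration; in the lab the file was simply edited by hand):

```python
import re

# Assumed ifcfg-eth0 contents with the wrong (.30) static address;
# the real file on web02 contains more settings.
ifcfg = """BOOTPROTO='static'
IPADDR='10.1.30.12/24'
STARTMODE='auto'
"""

# Move the static address from 10.1.30.0/24 to 10.1.40.0/24,
# keeping the host part intact.
fixed = re.sub(r"10\.1\.30\.", "10.1.40.", ifcfg)
print(fixed)
```

On the guest this would be followed by bringing eth0 down and up again (ifdown eth0; ifup eth0) to apply the change.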
But what will not work with the controllers down? Creating a new logical switch, for example, will fail with a vCNS server error. Without the controllers there is no interaction from the management plane to the control plane components on the hosts; there you need the controllers as the work horses.
– This concludes this blog post.