EUC Layers: Horizon Connectivity or From NSX Load Balancers with Love

Another layer that will hit your end users is the connectivity from the client device to the EUC solution. No intermittent errors are allowed in this communication; users very rarely appreciate "connection server is not reachable" pop-ups. Getting your users securely and reliably connected to your organization's data, desktops and applications, while guaranteeing connection quality and performance, is key for any EUC solution. For a secure workspace that protects against and reacts to threats as they happen, software-defined networking becomes even more important for EUC. Dynamic software is required. And all that for an any place, any device, any time solution. And if something breaks well….

Rest of the fire

One of the first things we talk about is the need to reliably load balance several components as they scale out. And to avoid getting into all the networking bits in one blog post, I am sticking with load balancing for this part.

As Horizon does not come as a one-package deal with networking or load balancing included, you have to look at an add-on to the Horizon offering or outside the VMware product suite. Options are:

  • interacting with physical components,
  • depending on other infrastructure components such as DNS round robin (which is a poor man's load balancing), preferably with something extra like Infoblox DNS RR with service checks (a small sketch of plain DNS RR follows after this list),
  • using virtual appliances like Kemp or NetScaler VPX. VPX Express is a great free load balancer and more.
  • using specific software-defined networking for desktops, with NSX for Desktop as an add-on. Now instantly the question pops up: why isn't NSX included in, for example, Horizon Enterprise like vSAN is? I have no idea, but it probably has something to do with money (cue Pink Floyd for the ear worm).
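
To illustrate the DNS RR option: two A records with the same name, as in this hypothetical zone file fragment, make the DNS server rotate its answers over the connection servers. Without service checks a failed connection server still gets handed out, which is exactly why this is the poor man's variant.

    vdi.vtest.lab.    IN A    10.0.0.10    ; connection server 1 (example address)
    vdi.vtest.lab.    IN A    10.0.0.11    ; connection server 2 (example address)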

And some people will also bring up the option of doing nothing. Well, nothing isn't really an option once you have two components. At a minimum you will need a manual or scripted way of redirecting your users to the second component when the first hits its load mark, needs maintenance or fails. I doubt that you or your environment will remain loved for long when trying this manually…..

The best fit all depends on what you are trying to achieve with the networking as a whole, or with load balancing specifically. Are you load balancing user connections to two connection servers for availability, doing tunneled desktop sessions, or running a Cloud Pod Architecture over multiple sites and thus load balancing globally? That all has to be taken into account.

In this blog post I want to show you how to use NSX for load balancing connection server resources.

Horizon Architecture and load balancers

Where in the Horizon architecture do we need load balancers? Well, the parts that handle our user sessions and are scaled out for resources or availability. We need them in our local pods, and global load balancers when we have several sites.

Externally:

  • Unified Access Gateway (formerly known as Access Point)
  • Security Server (if you happen to have that one lying around)

Internally:

  • Workspace ONE/vIDM.
  • Connection Servers within a pod, with or without CPA. With CPA, however, we need a bit more than just local traffic.
  • AppVolumes Managers.

And maybe you have other components to load balance, such as multiple vROps analytics nodes so that user interface load does not hit one node. Just make sure the node the vROps for Horizon adapter connects to (or from) is not behind the load balancer.

Load Balancers

To improve the availability of all these kinds of components, a load balancer is used to publish a single virtual service that internal or external clients connect to. For the connection server configuration, for example, the load balancer serves as a central point for the authentication traffic flow between clients and the Horizon infrastructure, sending clients to the best performing and most available connection server instance. I will keep the lab a bit simple by just load balancing two connection server resources.

Want to read up more about load balancing CPA? EUC junkie and Bearded VDI Junkie vHojan (https://twitter.com/vhojan) has an excellent blog post about CPA and the impact of certain load balancing decisions. Read it here: https://vhojan.nl/deploy-cpa-without-f5-gtm-nsx/.

For this one here, on to the Bat-Lab….

Bat-Labbing NSX Edge Load Balancing

Let's make the theory stick and get it up and running in a Horizon lab I have added to Ravello. It is cloned from an application blueprint I use for almost all my Horizon labs, and ready for adding the load balancing option NSX for Desktop. The scenario is load balancing the connection servers. In this particular example we are going one-armed: this means the load balancer node will live on the same network segment as the connection servers. Start your engines!

Deploying NSX Manager

Now how do you get NSX in Ravello? Well, either deploy it on a nested ESXi or use the import method to deploy NSX directly on Ravello Cloud on AWS or GC. I'm doing the latter. As you did not set a password during deployment, you can log in to the manager with user admin and password 'default'.
That is also the password for enable mode: type enable. And if you wish, config t for configuration mode. Flashback to my Cisco days :))…. In configuration mode you can set host names, IP addresses and such via the CLI.
But the easiest way is to type setup in basic/enable mode. Afterwards you should be able to log in via the HTTPS interface. Use that default password and we are in.
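
A minimal sketch of that first CLI session (the prompt and hostname are illustrative):

    login as: admin
    password: default
    nsxmanager> enable
    password: default
    nsxmanager# setup

The setup wizard then asks for the IP address, netmask, default gateway, DNS and hostname, after which the HTTPS interface becomes reachable.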

NSX - vTestlab

Add a vCenter registration to allow NSX components to be deployed, then on to the vSphere Web Client. At this point you must register an NSX license, or the deployment of the NSX Edge Services Gateway appliance will fail.

Next, prepare the cluster's network fabric to receive the Edges. Go to Installation and click the Host Preparation tab. Prepare the hosts in the cluster you want to deploy to (and that are licensed for VDI components, otherwise NSX for Desktop is not an option). Click Actions – Install when you are all set.

NSX - Prepare Host

For this Edge load balancer services deployment you don't need VXLAN or an NSX Controller, so I will skip those for this blog part.

Next up: deploying an NSX Edge. Go to NSX Edges and click the green cross to add one. Fill in the details and configure a minimum of one interface (depending on the deployment type); as I am using a one-arm setup, select the resource pools and networks and fill in the details. In production you would also want some sort of cluster for your load balancers, but I have only deployed one for now. Link the network to a logical switch, distributed vSwitch or standard vSwitch. I have only one, so the same network on a standard vSwitch. Put in the IP addresses, put in the gateway and decide on your firewall settings. And let it deploy the OVA.

If you forgot to allow nesting in /etc/vmware/config, you will get the error "You are running VMware ESX through an incompatible hypervisor". Add vmx.allowNested = "TRUE" to that file on the ESXi host nested on Ravello and run /sbin/auto-backup.sh afterwards. If you then retry the deployment, it will normally work.
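
From an SSH session on the nested ESXi host this comes down to the following (a sketch, assuming SSH is enabled on the host):

    # append the nesting option and persist the configuration change
    echo 'vmx.allowNested = "TRUE"' >> /etc/vmware/config
    /sbin/auto-backup.sh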

Load Balancing

We have two connection servers in vTestLab.

Connection Servers

Go back to the vSphere Web Client and double-click the just created NSX Edge. Go to the Manage tab and then Load Balancer. Enable the load balancer.

Horizon LB - Enable Global
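
If you prefer the API over the UI, the same switch can be flipped through the NSX REST API. A hedged sketch, where edge-1 and the manager hostname are examples; note that a PUT replaces the whole load balancer config, so in practice you would GET it first and modify:

    curl -k -u admin:password -X PUT \
         -H "Content-Type: application/xml" \
         -d '<loadBalancer><enabled>true</enabled></loadBalancer>' \
         https://nsxmanager.vtest.lab/api/4.0/edges/edge-1/loadbalancer/config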

Create an application profile. For this configuration I used SSL passthrough for the HTTPS protocol, with session persistence.

NSX LB - Application Profile
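
Behind the UI this profile boils down to something like the following API object (a sketch; the name is an example and the persistence method spelling may differ per NSX version):

    <applicationProfile>
      <name>horizon-https-passthrough</name>
      <template>HTTPS</template>
      <sslPassthrough>true</sslPassthrough>
      <persistence>
        <method>ssl_sessionid</method>
      </persistence>
    </applicationProfile>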

For this setup you can leave the default HTTPS service monitor. Normally you would also want service checks on, for example, the Blast gateway (8443) or PCoIP (4172) if components use these.
Next, set up your pool to include your members (the connection servers), the service check/monitor, the port and the connection limits to take into account.

NSX Hor Pool Detail
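
The resulting pool definition looks roughly like this in the Edge configuration (a sketch of the API XML; the IDs, names and member addresses are examples):

    <pool>
      <name>pool-connectionservers</name>
      <algorithm>round-robin</algorithm>
      <monitorId>monitor-1</monitorId>
      <member>
        <ipAddress>10.0.0.10</ipAddress>    <!-- connection server 1 -->
        <port>443</port>
      </member>
      <member>
        <ipAddress>10.0.0.11</ipAddress>    <!-- connection server 2 -->
        <port>443</port>
      </member>
    </pool>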

Next up, create the virtual server with the load balancing VIP and match it to the just created pool.

Virtual Server
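
And the virtual server that ties the VIP to that pool (again a sketch; the IDs are illustrative):

    <virtualServer>
      <name>vip-connectionservers</name>
      <ipAddress>10.0.0.12</ipAddress>                          <!-- the VIP -->
      <protocol>https</protocol>
      <port>443</port>
      <defaultPoolId>pool-1</defaultPoolId>                     <!-- pool created above -->
      <applicationProfileId>applicationProfile-1</applicationProfileId>
    </virtualServer>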

After this, look at the status and select the pool.

NSX Pool Status

Both are up.
You can now test whether an HTTPS connection to 10.0.0.12 shows you the connection server login page.
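
A quick check from any client that can reach the VIP (the -k skips certificate validation, as the lab has no trusted certificate yet):

    curl -k -I https://10.0.0.12/
    # a 200 (or a redirect to the Horizon portal page) means a connection server answered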

Connected

Connected. Using HTML Access will fail with an error connecting to the connection server (Horizon 7.1), as I did not change the origin checking. You can deal with this by adding the following entries to the file locked.properties (in C:\Program Files\VMware\VMware View\Server\sslgateway\conf) on each connection server; the first disables the origin check entirely, the second alternatively tells the connection server the load balanced name it sits behind:

checkOrigin=false
balancedHost=load-balancer-name

Restart the VMware Horizon View Connection Server service.
And of course you would add a DNS record for 10.0.0.12 so your users can use a name, like vdi.vtest.lab, to connect to the connection servers. And use an SSL certificate with that name.

Now a last check whether the load balancing is working correctly: I kill off one of the connection servers.
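
To watch what happens during the failover, a simple loop from a client does the trick (prints the HTTP status code of the VIP every two seconds; press Ctrl-C to stop):

    while true; do curl -k -s -o /dev/null -w "%{http_code}\n" https://10.0.0.12/; sleep 2; done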

Man down

And let's see what the URL is doing now:

Admin after man down

Perfect, the load balancer connects to the remaining connection server. This time for the admin page.

This concludes this small demonstration of using NSX for Load Balancing Horizon components.

– Happy load balancing the EUC world!

Sources: vmware.com

VMware NSX Series – Data flow without control plane

In my last blog post (you can read it here: https://pascalswereld.nl/post/67365305981/nsx) I wrote about the NSX architecture with its out-of-band components such as the NSX Manager and the NSX controller cluster (the management and control planes). But do they really not interfere with the data IO?

Time to find out!

I am using the HOL NSX lab to show how this works. This is a preconfigured lab with an NSX Manager, an NSX controller HA pair, an edge router and some Linux-based VMs.

First we set up a logical switch, connect it to the perimeter edge router and connect VMs to this switch.

The VM Network logical switch is being created.

image

Adding it to the perimeter edge with an IP subnet declaration. And yes, don't forget to connect the port.

image

As you will notice the subnet 10.1.40.0/24 is connected. The edge port is given the 10.1.40.1 IP address.

Next up: adding VMs to this distributed logical network.

image

I am using two web servers that are currently on another logical network. This action will move the VMs from Web_Logical_Network to the created VM Network.

With SSH PuTTY sessions to the VMs we can verify that they have interfaces connected to this network.

image

We see both VMs in the configured subnet, with web03 at address 10.1.40.13 and web04 at address 10.1.40.14. When we start an ICMP ping we can confirm data is flowing from one VM to the other, and that traffic is flowing from one logical switch port to the other.
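
For reference, the check from the web03 session (the prompt is illustrative):

    web03:~ # ping -c 3 10.1.40.14
    # replies from web04 confirm traffic flows over the logical switch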

image

Okay, now let's see how the traffic flows after we shut down the controller HA pair. We go to the VMs in the vCenter inventory.

image

Here you also notice the edge components.

With the shutdown guest OS operation we shut down both NVP_Controller VMs, which takes down the complete HA pair.

image

After this we can retry our ICMP data flow.

image

And lo and behold, data IO is still flowing between web03 and web04. A ping back from web04 to web03 shows this direction is also working.

image

This small example shows that the controller pair does not interfere with already configured components in the data plane. You won't even notice problems when adding new VMs to this logical switch. Let's demonstrate by adding web02.

image

image

Network adapter 1 is connected to the VM Network DVS. But why wouldn't it be? The DVS is managed by vCenter and the host is already part of it (for example, web04 is running on the same host). At the IP address we can notice something wrong: the guest is in the .30 subnet instead of .40. When opening /etc/sysconfig/network/ifcfg-eth0 there is a static IP configured; again, elementary my dear Watson. Replace the .30 with .40 and bring the interface down and up. Now ping is running.
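
The fix from the web02 shell, as a sketch (assuming the SUSE-style network config the lab uses and the eth0 interface):

    # swap the .30 subnet for .40 in the static config, then bounce the interface
    sed -i 's/10\.1\.30\./10.1.40./g' /etc/sysconfig/network/ifcfg-eth0
    ifdown eth0 && ifup eth0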

But what will not work with the controllers down? Creating a new logical switch, for example, will fail with a vCNS server error. There is no interaction from the management plane to the hosts' control plane components; there you need the controller as the workhorse.

– This concludes this blog post.

VMware NSX Series – Introduction and components

This year VMware introduced some new solutions for the software-defined data center (SDDC), namely Virtual SAN (or VSAN) for the storage and availability solutions and NSX for the network and security layer. Or software-defined storage resp. software-defined networking.

Virtual SAN will be generally available in H1 2014. The beta has been out for a while now, so there is plenty of opportunity to test this solution. I have done a little blog post about the initial configuration at https://pascalswereld.nl/post/62805854730/vsan-beta-part-what-install.

The other solution is NSX, which I want to go into a bit deeper in this blog post. NSX is GA, but you will have to contact VMware sales if you want to do something with NSX. But first a little SDDC.

Software Defined Data Center (SDDC)

So you have heard this SDDC term before. That is right: if you have been following the keynotes from this and last year's VMworld you will have heard it. And if you are a regular visitor of vmware.com you will have seen even more of it. But what is meant by SDDC?

image

Software-defined data center (SDDC) is an architectural model for IT infrastructures that extends traditional virtualization concepts to all of the data center's resources and services. This started a decade ago with the virtualization of computing resources (CPU and memory) to provide server virtualization (the software server) as the base component of the SDDC.
Software-defined networking, or network virtualization, is the process of merging networking resources and functionality into a software-based virtual network. This creates simplicity by making the virtual network components "free" of the underlying physical network and firewall architecture. Well, free… you will still need some cabling and switching to go from your computing cluster to the edge and beyond, but that part can be simplified to just providing hardware connectivity. Let the virtualization layer handle the connectivity of VMs, tenants, routing and access control (just a few examples).
Software-defined storage, or storage virtualization, is simple shared storage specifically designed for virtual machines. By simple I mean self-tuning, easy to provision, simply managed and dynamically scalable. It presents a single datastore distributed across multiple hosts in a vSphere cluster (which is where VSAN is enabled).

If underlying hardware fails, the virtualization layers automatically redirect workloads to other components in the data center, as long as redundant paths exist.

An important reason for the SDDC is to simplify the provisioning of services for application workloads. Yes, it adds more complexity to the virtualization layer; it is not just computing anymore. But it simplifies provisioning while not having to go from and to different IT service silos to get something done. Your expertise is there in the virtualization layers.

Well, pretty clear isn't it…

Now for a little intro to network virtualization via VMware NSX. I will try to keep it little, as you could write a book about this subject. I don't think I'm going to finish in one blog post, so I conveniently used "series" in my title. That is not a promise but an opening, as I am sure this subject will return.

VMware NSX Architecture

NSX is composed of the following components:

image

These bring components into the network/virtualization layers by means of virtual appliances, plus components close to the hypervisor (on the host). As you will notice (or not), the switching supports Open vSwitch, which allows NSX to be deployed with other hypervisors (and by other I mean other than VMware in this case). For example, KVM and XenServer can be supported/added to provide a true software-defined data center, and not just a VMware software-defined data center. For this there are two flavours of NSX: one optimized for vSphere, and NSX for multi-hypervisor environments.
But the question here is how many organizations use hybrid hypervisors in their environments. Often enough I only see a one-flavor install base. But that is a case outside the scope of this blog post. Back to the NSX components.

An overview of the NSX components:

NSX Manager. A web-based GUI management dashboard for user-friendly interaction with the VMware NSX controller cluster, via the NSX API. Primarily used for system setup, administration and troubleshooting. NSX Manager can take snapshots of the entire state of the virtual network for backup, restore, introspection and archival. The services are provided via NSX APIs. The NSX Manager works together with vCenter for managing cluster and host components.

NSX Controller. The NSX controller cluster is a highly available distributed system of virtual appliances responsible for the programmatic deployment of virtual networks across the entire architecture. The NSX controller cluster accepts API requests from cloud management platforms (e.g. vCloud, OpenStack), calculates the virtual network topology, and proactively programs the hypervisor NSX vSwitches and NSX gateways with the appropriate configuration. While not handling packets directly, the controller cluster is the workhorse of the NSX infrastructure.

The NSX Manager and NSX controller cluster are out of band and never handle data packets. Put another way: the NSX Manager lives in the management plane (together with a vCenter system) and the NSX controllers live in the control plane of the network virtualization.

NSX Gateways/Edge Router. NSX edge services provide secure paths in and out of the software-defined data center. NSX gateway nodes can be deployed in highly available pairs, and offer services such as routing, firewalling, private tunneling and load balancing for securing and controlling traffic at the edge of one or more virtual networks. NSX gateways are managed by the controller cluster.

NSX vSwitch. The NSX vSwitch is a component that is added to the hypervisor and replaces the traditional switches. Well, sort of, as there still is a distributed logical switch layer, but now as the NSX vSwitch or Open vSwitch. It can span multiple clusters and provides, for example, layer 2 and layer 3 logical switching.

Host loadable modules. Most networking components use host-provided kernel modules. For example, to let a host understand the NSX switch and let traffic flow between NSX hosts, they need to talk the same language; with the kernel modules your ESXi host is able to. The installation of the modules can be done using the UI or by bundling the vSphere image with the proper VMware Installation Bundles (VIBs). These modules provide port security, VXLAN, distributed firewall (DFW), distributed switching and distributed router (DR) functions at the host level.
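
Installing such a bundle by hand is a standard esxcli operation; a hedged example (the bundle path and name are made up for illustration):

    esxcli software vib install -d /vmfs/volumes/datastore1/nsx-host-modules.zip
    # depending on the VIBs, a reboot or a restart of the affected services is needed afterwards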

—-

Okay, that is enough theory for this blog post.

Would you like some hands-on? VMware has some hands-on lab (HOL) sessions on the NSX subject. Take these labs at http://labs.hol.vmware.com/ (or www.projectnee.com). You can choose, or do both: the HOL-SDC-1303 – VMware NSX: The Network Virtualization Platform and HOL-SDC-1319 – VMware NSX for Multi-Hypervisor Environments sessions.

– Interesting, this network virtualization. To be continued for sure.