App Volumes configuration: Active Directory bind user and use the short username captain!

Here we are again! Holiday is unfortunately over. Lots seen and done, a lovely travel companion and a great time. But yes, back again to this crazy little thing called work, with some nice projects to be working on. This week started with the deployment of a Workspace ONE environment. As there are several phases in an EUC project, and I was doing assess and design jobs a lot more often than deploy jobs, I wanted to get back to some hands-on experience outside the lab. I think this a) is good for the overall quality of this consultant, b) helps align the assess, design and deployment phases as part of the continual improvement of an EUC solution, and c) there will always be this techie inside who likes to brea…. erm build… I mean build stuff. It is nice putting together some components, and with this blog post some of the current deployment gotchas get recorded. First up: App Volumes.

On with some deploy activities

Within the build of a Workspace ONE infrastructure, one of the tasks is deploying the App Volumes infrastructure. No problem: get a VM, run the installer and do the initial configuration, no iceberg straight ahead. Clicky the click, tappy the tap. Stopped at a credentials error on the AD Domains page. Erm, what happened here?

App Volumes Credentials

Continue reading “App Volumes configuration: Active Directory bind user and use the short username captain!”

Horizon 7.2: With a little helpdesk from my friends

On June 20th the latest version of Horizon was released, namely Horizon 7.2. The highlights of this release include the added Horizon Help Desk Tool, and general availability of the Skype for Business enhancements in the Horizon environment, enabling Horizon users to use Skype in a production environment. You can, for example, find the VMware Virtualization Pack for Skype in the Horizon Agent installer.

Both features are things organizations often asked about, so it is good that they are included in this release. Other noteworthy items are the usual upgrade updates, and scale and product interoperability improvements. As expected and delivered, nothing fancy here.

Helpdesk Login

Continue reading “Horizon 7.2: With a little helpdesk from my friends”

This post makes a hundred – time to take a peek back in

And this is blog post number 100 in almost 4 years of blogging. A great milestone if I say so myself. And since my last recap was in 2013, one is overdue ;) So here goes, crunching some numbers.

TL;DR This post is a little recap of what I have done with my blog, with some statistics from over the years. Mainly for the fun of it. I don’t know how useful these are for you, but I can share them anyway, as they are a bit of fun for me looking back at these couple of years.

Over time I received some questions regarding what it takes to run a blog. Questions like what made you start, or how did you start, what is your traffic, hosting, time spent, sponsoring, and so on. Sometimes I can answer these with some ease, but as I am not really a statistics junkie….well not before this one anyway….I could not always give answers. But apparently I have loads of statistics data, so here goes, looking back without anger.

Continue reading “This post makes a hundred – time to take a peek back in”

vRealize Log Insight broadening the Horizon: Active Directory integration deploy VMware Identity Manager

At a customer I am working on the design of vRealize Log Insight. For the authentication objective we can choose from the sources local, Active Directory or VMware Identity Manager. In the latest release (4.5) it is clearly stated that configuring Active Directory authentication directly from Log Insight is deprecated.

Deprecated vRLI

Edit: Unlike some previous information going around, Active Directory directly from Log Insight is still supported. Quote from the updated VMware Knowledge Base article: Although direct connectivity from VMware vRealize Log Insight to Active Directory is still supported in Log Insight 4.5, it may be removed in a future version.

But I think it will still be very beneficial to move to vIDM sooner rather than later.

Continue reading “vRealize Log Insight broadening the Horizon: Active Directory integration deploy VMware Identity Manager”

VCAP-DTM Deploy Achievement Unlocked with some exam time management tips for you

After my earlier exam was postponed due to some communication problems between Pearson VUE and the VMware labs, I did my VCAP-DTM Deploy last Friday. And it was a pass on the first attempt :) Woohoo.

The exam is a whopping 3.4 hours (or thereabouts, with 205 minutes) of getting through tasks, where time management is the most important piece. Well, next to actually knowing what you need to be doing. I missed some questions in the end, but 30 questions seem to be enough to barely pass. I was a bit slow, as deployment is something I do differently in real life; irritated about the backspace not working (the arrow-del key combination is not my cookie) while my Pavlov response kept hitting that key; and in the last part of the exam I had to keep pushing radio buttons several times before they became active.

VCAP Passed

Continue reading “VCAP-DTM Deploy Achievement Unlocked with some exam time management tips for you”

EUC Layers: Horizon Connectivity or From NSX Load Balancers with Love

Another layer that will hit your end users is the connectivity from the client device to the EUC solution. No intermittent errors allowed in this communication. Users very rarely like “connection server is not reachable” pop-ups. Getting your users securely and reliably connected to your organization’s data, desktops and applications, while guaranteeing connection quality and performance, is key for any EUC solution. For a secure workspace, protecting against and reacting to threats as they happen makes software-defined networking even more important for EUC. Dynamic software is required. And all that for an any place, any device, any time solution. And if something breaks, well….

Rest of the fire

One of the first things we talk about is the need to reliably load balance several components as they scale out. And to avoid getting into all the networking bits in one blog post, I am sticking with load balancing for this part.

As Horizon does not have a one-package deal with networking or load balancing, you have to use an add-on to the Horizon offering or look outside the VMware product suite. Options are:

  • interacting with physical components,
  • depending on other infrastructure components such as DNS RR (that is a poor man’s load balancing), preferably with something extra like Infoblox DNS RR with service checks,
  • using virtual appliances like Kemp or NetScaler VPX (VPX Express is a great free load balancer and more),
  • specific software-defined networking for desktops, using NSX for Desktop as an add-on. Now instantly that question pops up: why isn’t NSX included in, for example, Horizon Enterprise like vSAN is? I have no idea, but it probably has something to do with money (and cue Pink Floyd for the earworm).

And some people will also mention the option of doing nothing. Well, nothing isn’t an option if you have two components. At a minimum you will need a manual or scripted way of redirecting your users to the second component when the first hits its load mark, needs maintenance or fails. I doubt that you or your environment will stay loved for long when trying this manually…..

The best fit all depends on what you are trying to achieve, with networking as the larger picture or, for example, load balancing specifically. Are you load balancing the user connections to two connection servers for availability, doing tunneled desktop sessions, or doing a Cloud Pod Architecture over multiple sites and thus globally? That all has to be taken into account.

In this blog post I want to show you how to use NSX for load balancing connection server resources.

Horizon Architecture and load balancers

Where in the Horizon architecture do we need load balancers? Well, the parts that user sessions connect to and that are scaled out for resources or availability. We need them in our local pods, and global load balancers when we have several sites.


  • Unified Access Gateway (formerly known as Access Point)
  • Security Server (if you happen to have that one lying around)
  • Workspace ONE/vIDM
  • Connection Servers within a pod, with or without CPA. With CPA, however, we need to handle more than just local traffic.
  • App Volumes Managers

And maybe you have other components to load balance, such as multiple vROps analytics nodes so that user interface load does not hit just one node. As long as the node the Horizon adapter connects to is not load balanced.

Load Balancers

To improve the availability of all these kinds of components, a load balancer is used to publish a single virtual service that internal or external clients connect to. For the load balanced connection server configuration, for example, the load balancer serves as a central point for the authentication traffic flow between clients and the Horizon infrastructure, sending clients to the best performing and most available connection server instance. I will keep the lab a bit simple by just load balancing two connection server resources.
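To make the “single virtual service” idea concrete, here is a minimal sketch (illustrative Python, not NSX code) of what the load balancer does conceptually: all clients hit one VIP, and each connection is handed to a healthy pool member, here with a least-connections pick. The member names are made up.

```python
# Conceptual sketch of a load-balanced virtual service: clients hit one VIP,
# and the balancer forwards each connection to a healthy pool member.

class PoolMember:
    def __init__(self, name):
        self.name = name
        self.healthy = True      # updated by a service monitor
        self.connections = 0     # currently active sessions

def pick_member(pool):
    """Return the healthy member with the fewest active connections."""
    healthy = [m for m in pool if m.healthy]
    if not healthy:
        raise RuntimeError("no healthy pool members behind the VIP")
    member = min(healthy, key=lambda m: m.connections)
    member.connections += 1
    return member

pool = [PoolMember("connsrv01"), PoolMember("connsrv02")]
print(pick_member(pool).name)   # connsrv01
print(pick_member(pool).name)   # connsrv02 (now has the fewest connections)

# When a monitor marks a member down, traffic only goes to the survivor:
pool[0].healthy = False
print(pick_member(pool).name)   # connsrv02
```

NSX Edge offers round robin, least connections and other algorithms; the point here is only that the VIP hides the member choice from the client.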

Want to read up more about load balancing CPA? Bearded VDI Junkie vHojan has an excellent blog post about CPA and the impact of certain load balancing decisions. Read it here.

For this one here, on to the Bat-Lab….

Bat-Labbing NSX Edge Load Balancing

Let’s make the theory stick and get it up and running in a Horizon lab I have added to Ravello, cloned from an application blueprint I use for almost all my Horizon labs and ready for adding a load balancing option: NSX for Desktop. The scenario is load balancing the connection servers. In this particular example we are going one-armed; this means the load balancer node will live on the same network segment as the connection servers. Start your engines!

Deploying NSX Manager

Now how do you get NSX in Ravello? Well, either deploy it on a nested ESXi or use the import method to deploy NSX directly on Ravello Cloud on AWS or GC. I’m doing the latter. As you did not set a password, you can log in to the manager with user admin and password ‘default’.
That is the same password you can use to go to enable mode: type enable. And, if you wish, config t for configuration mode. Flashback to my Cisco days :))…. In configuration mode you can set host names, IP addresses and such via the CLI.
But the easiest way is to type setup in basic/enable mode. Afterwards you should be able to log in via the HTTPS interface. Use that default password and we are in.
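Condensed, the console session from the steps above looks roughly like this (prompts abbreviated; only the commands mentioned above are shown):

```
login: admin
Password: default        <- the default password
manager> enable          <- same 'default' password again
manager# setup           <- wizard asks for hostname, IP address and such
```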

NSX - vTestlab

Add a vCenter registration to allow NSX components to be deployed. On to the vSphere Web Client. At this point you must register an NSX license, or you will fail to deploy the NSX Edge Services Gateway appliance.

Next, prepare the cluster with a network fabric to receive the Edges. Go to Installation and click the Host Preparation tab. Prepare the hosts in the cluster you want to deploy to (and have licensed for VDI components, or NSX for Desktop is not an option). Click Actions – Install when you are all set.

NSX - Prepare Host

For this Edge load balancer services deployment you don’t need VXLAN or an NSX Controller, so I will skip those for this blog part.

Next up: deploying an NSX Edge. Go to NSX Edges and click the green cross to add one. Fill in the details and configure a minimum of one interface (depending on the deployment type); as I am using a one-arm setup, select the pools and networks and fill in the details. In production you would also want some sort of cluster for your load balancers, but I have deployed only one for now. Link the network to a logical switch, distributed vSwitch or standard vSwitch. I have only one network, so the same standard vSwitch it is. Put in the IP addresses, put in the gateway and decide on your firewall settings. And let it deploy the OVA.

If you forgot to allow nesting in /etc/vmware/config and get the “You are running VMware ESX through an incompatible hypervisor” error, add vmx.allowNested = “TRUE” to that file on the ESXi host nested on Ravello. Run /sbin/ after that. If you retry the deployment, it will normally work.

Load Balancing

We have two connection servers in vTestLab

Connection Servers

Go back to the vSphere Web Client and double-click the just created NSX Edge. Go to Manage and the Load Balancer tab. Enable the load balancer.

Horizon LB - Enable Global

Create an application profile. For this configuration I used SSL passthrough for the HTTPS protocol, with session persistence.

NSX LB - Application Profile

For this setup you can leave the default HTTPS service monitor. Normally you would also want service checks on, for example, the Blast gateway (8443) or PCoIP (4172) if components use these.
Next, set up your pool to include your members (the connection servers) and the service check, monitor port and connection limits to take into account.

NSX Hor Pool Detail

Next up, create the virtual server with the load balancing VIP and match it to the just created pool.

Virtual Server

After this, look at the status and select the pool.

NSX Pool Status

Both are up.
You can now test whether an HTTPS connection to the VIP shows you the connection server login page.


Connected. Using HTML Access will fail with an error connecting to the connection server (Horizon 7.1), as I did not change the origin checking. You can disable this protection by adding the following entry to the file (in C:\Program Files\VMware\VMware View\Server\sslgateway\conf) on each connection server:
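The entry in question is the origin-check flag; per the Horizon 7 documentation it looks like this (verify against the docs for your exact version, and create the file if it does not exist yet):

```
checkOrigin=false
```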


Restart the VMware Horizon View Connection Server service.
And of course you would add a DNS record, like vdi.vtest.lab, pointing to the VIP so your users can use that name to connect to the connection servers. And use an SSL certificate with that name.

Now a last check to see if the load balancing is working correctly. I kill off one of the connection servers.

Man down

And let’s see what the URL is doing now:

Admin after man down

Perfect: the load balancer connects to the remaining connection server. This time for the admin page.

This concludes this small demonstration of using NSX for Load Balancing Horizon components.

– Happy load balancing the EUC world!


WebCommander Walnut Installation Walk-through

In a previous blog post from far away history, I wrote about the WebCommander Fling. Man, that one is from 2013; I have been putting blog posts out there for a while now. Hope you did find something useful on the blog…..
Anyhow, back to this one. The WebCommander developer reached out in a comment on that previous post with a request to write up a guide for WebCommander Walnut. I am writing it up as a walk-through to get it started and show some output. If you would like some additions to the post, add the information you would like to see, or post questions/remarks, and I will see if I can make some additions. But first….a little reminder about that commander out on the web…

What is WebCommander

WebCommander is a collection of web services around PowerShell and PowerCLI scripts. The interface can be used to provide users with scripts without them learning or knowing the PowerCLI commands, or to give users access only to specific prepared tasks without giving them access to the Web Client (they still need to have permissions in the environment to do their operations). A great way of delegating specific tasks!

WebCommander was initially released and maintained as a VMware Fling. WebCommander was received very well by the community, and the Fling was released as, and in turn moved to, an open source project on GitHub in 2014.

The WebCommander project page can be found on GitHub. This WebCommander version mainly uses XML with browser-side transformation (XSLT). And when you hear version, you know there might be another one, and yes there is: WebCommander Walnut, in a different branch.

WebCommander Walnut is to be used when:

  • you prefer JSON over XML,
  • you want to combine commands in workflows for more or more complex automation,
  • you want to run local or cloud scripts (WebCommander Hybrid),
  • you want a history,
  • you want 64-bit PowerShell,
  • you want more new features,
  • and a new user interface.


Take a look at WebCommander Walnut for yourself, go to GitHub:

Installation Guide

Prepare the system:

Create a VM

Use Windows 2012R2 or Windows 2008R2 as the OS.

When using Windows 2008R2 there are the following specifics:

  • Install .NET Framework 4.5.2, needed for the installation of PowerShell v5 on 2008R2
  • Install PowerShell version 5

When using a fresh installation of Windows 2012R2, install PowerShell version 5.

For the installation of PowerShell version 5, install the Windows Management Framework 5.0, which can be downloaded as an update or directly from

For WebCommander and PowerShell: Set-ExecutionPolicy Unrestricted -Force.

IIS Web Server (including sub-features and management tools). Either use the Add Roles and Features GUI to install the Web Server role, or use PowerShell:

Install-WindowsFeature Web-Server -IncludeManagementTools -IncludeAllSubFeature

PHP: click ‘Install PHP now’ on the web site to download the latest version. Execute the downloaded exe to start the Web Platform Installer. Continue the installer with all the default options (you can change them by clicking the options link) and accept to do the installation. The installer will download and install the prerequisites.

PHP IIS Installation

And click Finish when done.

Install MongoDB for the command history.

In short the procedure for MongoDB is:

Download MongoDB Community Edition; the download page should offer you the correct release and OS.

  • Install via the downloaded msi. Select complete or customize if you want. Complete will install in the default locations.
  • Add the installation location as a system path environment. The default installation location is C:\Program Files\MongoDB\Server\3.4\bin.
  • Use the PowerShell window you used to install IIS, or open a command prompt
  • MongoDB requires a data directory to store all data. MongoDB’s default data directory path is \data\db. Create this folder using the following command line:
md \data\db
  • Or use another location to suit your needs.
  • MongoDB also requires a location to store logs. Create the log folder using the command line:
md \data\log
  • Create a config file location with
md \data\conf
  • And add a text file mongodb.cfg there (watch out for hidden file extensions in Explorer!)
  • Add the following to the cfg file and save:
                  systemLog:
                      destination: file
                      path: c:\data\log\mongod.log
                  storage:
                      dbPath: c:\data\db


  • Install MongoDB as a Windows service by running mongod.exe with the --install parameter (as administrator!).
mongod.exe --config "C:\data\conf\mongodb.cfg" --install

If you get “api-ms-win-crt-runtime-l1-1-0.dll is missing from your computer” like this:

System Error - Mongod

then your Windows updates either screwed up, or you have to install the Visual C++ Redistributable. (Re)installing Visual C++ will mostly do the trick.

  • And now we have a MongoDB service (use --serviceName and --serviceDisplayName to change to another name if you wish).
  • Start the MongoDB service with net start MongoDB.
  • Create the database and collection in MongoDB for WebCommander by running the commands below:
    • mongo.exe
    • use webcmd
    • db.createCollection("history")
    • Mongo should respond with "ok" : 1
  • Install the MongoDB powershell module:
    • In PowerShell v5:
Install-Module Mdbc
    • Accept the installation of required components.

Install latest version of VMware PowerCLI (version 6.5.1 at time of writing):

  • The good thing is that version 6.5.1 does not require an MSI installer anymore. You can install it from the PowerShell Gallery via PowerShellGet (with the correct version of PowerShell, but we covered that one already):
Install-Module VMware.PowerCLI
    • Use -Scope CurrentUser to install for the current user only; no admin permissions required

Install WebCommander:

Download the files from GitHub, for example for the zip file:

Extract the zip and copy it to c:\WebCommander, or use your own location.

The zip is composed of the following files and directories as subdirectories of the master directory:
www/ – the files that need to be set up as the web service in IIS. _def.json is the file that adds the locations of the local scripts, as defined in sources.json.
powershell/ – the local command PowerShell scripts – the readme file of the project
sources.json – the locations of local and remote scripts, for when you want to use the remote script capability
Note: that is what it is currently composed of… you never know what the future brings.

Note: Depending on your security policy, Windows will normally block the files because they were downloaded from an external location, so you will have to unblock these files. Select the file – Properties – and press Unblock in the security part at the bottom.

Open IIS Manager to configure the WebCommander site:

  • Remove the default site
  • Add a new site (in this case I connected as the administrator account so I know which user is running; don’t just copy this, do what is appropriate for your environment)

Add Webcommandersite

  • Select the WebCommander site and open the Authentication feature
  • Enable Windows Authentication, and disable Anonymous

Site Authentication

  • If we now open a browser we will see the initial page

Initial Localhost

When clicking on select a command, we can only select the remote commands; use sources.json to define the local locations. For me it was fixed by removing http://localhost/ from the local configuration, so that it reads: “local” : “_def.json”,
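For reference, the sources.json then ends up along these lines; the remote entry is a placeholder (keep whatever your copy shipped with), and only the local entry is the fix discussed here:

```json
{
    "local" : "_def.json",
    "remote" : ""
}
```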

This could also help, as the _def.json was a bit empty too: go to c:\WebCommander\powershell\ and execute .\genDefJson.ps1 to recreate the definition JSON. Run genDefJson whenever you update any ps1 scripts.

And voilà, local also shows up.

WebCommander Local also


Test drive WebCommander

There are scripts for vSphere actions and Horizon View actions distributed with the Git repository.

I have seen the following error message pop up: AuthorizationManager check failed. The following was witnessed, and changed:

  • For some reason the execution policy was back to Restricted; Set-ExecutionPolicy RemoteSigned or Unrestricted again.
  • With the execution policy set to RemoteSigned or Unrestricted, this error may occur if the script or some of the other included scripts are still blocked. From Explorer, right-click the file, select Properties and click Unblock. Go through all the files!

Let’s see if we can get some vSphere information:

  • Add the command vSphere (local)
  • Add the required parameters, and go to method to select what you want to do. I just want to look, so listDatastore is my option.
  • And press play
  • Go to the output if there is a Pass
  • And ….

Pass vSphere (local)

If you want to get rid of the PowerCLI Customer Experience Improvement Program (CEIP) warning in the output, run the following in PowerShell:

Set-PowerCLIConfiguration -ParticipateInCEIP $false

(optionally with -Scope User / AllUsers)

And that’s it for now

– Enjoy WebCommanding throughout the universe!


EUC Toolbox: Regshotting across the end user universe

For managing applications and user environments it is very useful to know how the application and the user behave. And for application provisioning and user environment management it is necessary to know where the application and the system store the settings and personalization options. We need some form of application for capturing or monitoring the changes that the application or its settings make to the system. For UEM, for example, we have the Application Profiler to create application configurations or predefined settings. But if you would like to see where our Windows friend stores its changes, Application Profiler is not enough. We need other tools for the job. We can use Process Monitor or SpyStudio, to name a few. Or Regshot.

The main difference between Regshot and, for example, the mentioned Process Monitor or SpyStudio, is that this tool does not require admin permissions (like Process Monitor does) or installation on the system (like SpyStudio does). You can just download it and run it in the user context. This is the strong point of Regshot: a low footprint and no changes to the system that could influence your capture. As long as the changes you want to monitor are within the user context, but wasn’t this the point in the first place….

What does regshot do?

In short, Regshot takes a first and a second shot of the registry and shows you the differences between them. Next to this, Regshot also allows you to scan dirs; for example, save the registry and APPDATA after you have changed that minor customization. Isn’t that what you would want to see?

In short: take a first shot before your change. Change the system and take a second shot. Press compare and see what has been changed. And use that output in, for example, UEM configurations.
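Conceptually the compare step is nothing more than a diff of two key/value snapshots. A small Python sketch of the idea (illustrative, not Regshot’s actual code; the registry paths are made up):

```python
# Sketch of Regshot's compare step: take two snapshots of key -> value
# mappings and report what was added, removed and modified between them.

def diff_snapshots(first, second):
    """Return added, removed and modified entries between two snapshots."""
    added = {k: v for k, v in second.items() if k not in first}
    removed = {k: v for k, v in first.items() if k not in second}
    modified = {k: (first[k], second[k])
                for k in first.keys() & second.keys()
                if first[k] != second[k]}
    return added, removed, modified

# First shot before the change, second shot after:
shot1 = {r"HKCU\Software\Example\ShowHomeButton": "0"}
shot2 = {r"HKCU\Software\Example\ShowHomeButton": "1",
         r"HKCU\Software\Example\BookmarkBar": "1"}

added, removed, modified = diff_snapshots(shot1, shot2)
print(added)     # the new BookmarkBar value
print(modified)  # ShowHomeButton changed from "0" to "1"
```

The real tool of course walks the live registry (and optionally the file system) for its snapshots; the diff logic is the same idea.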


First up, the application is available in 32-bit and 64-bit, and in ANSI and Unicode encodings.

Regshot Files

The difference here is the program architecture and how the character encoding is handled. If, for example, your language settings include non-Latin characters, you may want to use the Unicode version of Regshot. Otherwise it will not matter which one you take, as long as the processor architecture is right.

Secondly, when taking shots you can just take your shot, or take and save your shot. When saved, you can later use it with the load option.

Capture and shot

Third, do you want your output in HTML or text? HTML is friendlier on the eyes, but it will take some more time to output. Sometimes the handover to the external HTML program is screwed up.

Fourth is including a scan dir. By default Regshot does the registry, but a lot of applications save something in, for example, AppData\Local, ProgramData or other locations. I would recommend including the scan dirs where possible. The only downside is that you need to know where an application stores its values, or put in the most likely suspects. Just going all out on C:\Users gets you a lot of background noise from other applications using the same space.

Fifth is setting an output path. By default it is set to the administrator’s AppData profile path. If I am scanning dirs in that location, it is a better idea to redirect the output to another location so as not to mess up the results.

Do keep in mind not to let a lot of cycles pass between the first and second shot. The system will continue to run, and changes add up between the shots. Do your required change and shoot again.

Where can I get Regshot?

Regshot is available on its SourceForge project page. You can download Regshot as a compressed .7z file, which you can open with 7-Zip or WinZip. The downside of 7z is that if you haven’t brought an additional zip application, native Windows can’t handle it. There goes my no changes to the system with using Regshot…..or just unzip it on another system ;)

Show me

Don’t mind if I do. First we are going to take our first shot. Just let the program count the keys and values, and the dirs and files, until the second shot button appears.

Regshot Shooting

I don’t mind the time it takes; my testlab is a bit on the slow side. And including the scan dir takes even longer than just going through the registry. But I’m there for the results, not the speed.

Next up, make a change to the system. For this example I changed the Chrome browser settings to show the home button and always show the bookmarks bar. Done with the change? Take the 2nd shot and wait until the compare button is available. Then press it. In the output is, for example:

Keys Home

Now it is up to you to analyse what is needed.

We see that Chrome wrote to \Software\Google\Chrome\PreferenceMACs under the user’s SID key. However, SIDs we cannot capture with, for example, UEM. We do know that this is the same as HKCU, so it can be captured as HKCU\Software\Google\Chrome\PreferenceMACs. Just add HKCU\Software\Google\Chrome\PreferenceMACs, or HKCU\Software\Google\Chrome, to be included in the UEM configuration.
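From memory, in a UEM Flex configuration file the registry part of such a capture would be along these lines; do verify the section name against a config generated by your own Application Profiler:

```
[IncludeRegistryTrees]
HKCU\Software\Google\Chrome
```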


– Happy shooting at your users…ermmm user environment I mean!


Product Evaluation: Inuvika Open Virtual Desktop (OVD)

Occasionally I get a request, or some urge bubbles up in me, to look at vendor X with its product Y. And there is nothing wrong with that, as I like to keep a broader view on things and not bet on just one horse.

And so a request from Inuvika found me, asking to look at their evolution of the Open Virtual Desktop (OVD) solution. Okay, using the virtual desktop and application delivery triggers will get my attention for sure. Kudos for that. On top of that, the name Inuvika got my curiosity running in a somewhat higher gear again. No problem, I will take a peek and see if I can brew up a blog article at the same time. “At the same time” was almost a year ago….. but I still wanted to take that peek. You will probably figure out that letting you read about OVD is a little bit overdue. Sorry for the delay….

A little notice up front: this blog post is my view only and not paid for, pre-published or otherwise influenced by the vendor. Their opinion might differ. Or not.

Wait what… Inuvika you say?

Yes, Inuvika (ĭ-noo′vĭk-ă). If you open up your browser you could learn that the company name is based on the Canadian town of Inuvik, where it can be very cold. And for 30 days in the year the sun doesn’t rise above the horizon (*wink* *wink*). In such a place you need a strong community and a collaborative approach to be able to live in so harsh an environment. Their product strategy is the same: offering an open source solution, collaborating with the community out there (however, the separate community version and site is dead).
The Inuvika mothership is based in Toronto, so hopefully that doesn’t lose a bit of the magic just introduced ;). But wherever they are based, it does not change the approach of Inuvika.

Main thing: the guys and gals from Inuvika are where you can get Open Virtual Desktop from. Go to their site to download your version, or take a peek around while you are there.

Open Virtual Desktop sounds interesting enough, show me

Glad you asked. Let’s find out. We have the option of a trial version for evaluation purposes, an enterprise license, or the cloud version. I like it when we can find out a little about the bits and bytes ourselves, so I will be downloading OVD. But first up some architecture, to know which screws and bolts we need, or can opt out from.


The following diagram is taken from the architecture and system requirements document and shows the components and the network flows of the system.

OVD-Architecture Overview

The OVD Roles:

  • The OVD Session Manager is the first required component. The OSM will be installed prior to the other components. As the master of puppets it is the session broker, administration console and centralized management of the other OVD components.
  • The OVD Application Server is one of the slave servers that communicate with the OSM. The OAS is the component that serves the application and desktop pools to the users, accessed from either the web portal or the OVD Enterprise client. OAS is available in a Linux or Windows flavor. OAS servers can be pooled together and load balanced from the OSM. However, you will need Enterprise for that, as Foundation is limited to one application server (seriously, just one?).
  • The OVD Web Access. OWA is responsible for managing and brokering web sessions. Now where did we see that abbreviation before… It uses either Java (going away in a next release) or HTML5, SSL tunneled if required. If you use only the OVD clients, this component is not needed. OWA also offers a JavaScript API to integrate OVD with other web-based applications.
  • The OVD File Server. The OFS component offers a centralized network file system to the users of the OAS’es, keeping access to the same data independent of which OAS the user is on. Data can be user profiles, application data or other company data. The data is only accessible from the OAS sessions and is not published in another way like a Content Locker or Dropbox.
  • ESG (hey, wait, no O-something-something). The Enterprise Secure Gateway is used as a unified access layer for external, but optionally also internal, connections. ESG tunnels all the OVD connections between the client and itself over an HTTPS session. So from any location, users that have access to HTTPS (443) will also be able to start an OVD session. When not using ESG tunnels, the OVD client needs HTTPS and RDP open to the OAS. Requires the Enterprise license.
  • Furthermore, 2.3.0 brings a tech preview of OWAC, the Web Application Connector, to offer SSO integration as an identity appliance.

All components run on a Linux distribution; supported flavors are RHEL, CentOS and Ubuntu LTS. The only component where Windows is used is when OAS offers Windows desktops or Windows-based applications on RDS services. Supported RDS OS versions are Windows Server 2008 R2, 2012 and 2012 R2. Isn’t it time for Windows Server 2016 by now?

In the OVD architecture we see familiar sorts of components that we also see in similar virtual desktop solutions, only with somewhat different naming. At first glance the OVD architecture looks like what we are used to, no barriers to cross here.

In a production environment an Inuvika OVD installation will use several servers, each with its specific role. Some roles you will always see in an OVD deployment. Others are optional, or can be configured to run together with other roles. And external dependencies enter the mix too, with load balancers in front of OWA for example. Small shops will have some roles combined, while running a smaller number of OAS instances.

It all depends on the environment size and requirements you have for availability, scalability, resilience, security and so on.

Into the Bat-lab

Come on Robin, to the Bat Cave! I mean the test lab. Time to see OVD in action and take it for a spin. Lab action that is; however, Inuvika also offers access to a hosted demo platform if you don’t have a lab or test environment lying around. From the download page you can download the Demo Appliance or register for the OVD full installation. I will use the demo appliance for this blog post, as I would probably be installing multiple roles on the same virtual machine anyway. The Demo Appliance is a virtual machine with the following OVD roles installed:

  • OVD Session Manager (OSM)
  • OVD Web Access (OWA)
  • OVD Application Server for Linux (OAS)
  • OVD File Server (OFS)

I will be using my Ravello Cloud vTestlab to host OVD. So first I have to upload the OVA into the Ravello library. Once available in Ravello I can create a lab environment. I could just import OVD, but I also want to see some client and AD integration if possible, so I added my vTestlab domain controller and Windows 10 clients into the mix.

Inuvika Demo Lab

Let’s see if I can use them both, or whether I am wasting CPU cycles in Ravello. Good thing April is halfway through and I still have 720 CPU hours remaining this month, so not much of a problem in my book.

When starting the OVD demo appliance it starts with the Inuvika Configuration Tools. Choose your keyboard settings (US). And presto, the appliance starts up with the IP I configured while deploying the application.

OVD - Demo Console after start

Here you can also capture the login details for the appliance: inuvika/inuvika. The default user for the administration console is admin/admin. Open up a browser and point it to the FQDN or IP for web access: http://<your appliance>/. Here we are greeted by a page from which we can start a user session, open the administration console, the documentation, and the installer bits for the Windows AS and the clients.

The user sessions offered in the demo appliance are based on the internal users and the internal Ubuntu desktop and applications. The client can be set to desktop mode, which is a virtual desktop with the applications published to the user. Or it can be set to portal mode, where the user is presented with a portal (so it’s not just a clever name) with all their application entitlements. The client starts with Java to allow for redirecting drives; using HTML5 will not allow drives to be redirected. The demo appliance is populated with demo users where the password is the same as the user name. Just enter cholland with password cholland in the client screen and you will be presented with a user session.

OVD Web login.png

And see the portal with the user’s entitlements and the file browser for data exchange between sessions.

OVD Demo - Client Portal

Start up a Firefox browser session and open my blog. Yup, it all works.

OVD - Client Firefox Blog

For using the Enterprise client the demo appliance needs to be switched to Enterprise. And you need a license for that! Via the admin console you set the system in maintenance mode. Via the appliance console, after logging in you get the menu where you can choose option 3, Install OVD Enterprise. After this you can set the system back to production, where you are greeted by a subscription error, and via Configuration – Subscription Keys you can upload the license file. When a valid license is installed you can run the Enterprise client for your evaluation. The client options are somewhat similar to the web client, except that you add the site name in the client instead of a browser URL.

OVD Ent Client Login

We also have the administration console. While this one has a bit more options, I am not trying to rewrite the documentation, so I will show just some of the parts. Basically, try out the options yourself to see what the differences are.

We are greeted by an index page with an environment overview and user/application publications. These will be the main actions when using the product. Of course we also have some menu options for reporting and configuration.

OVD - Admin Index

Let’s see if we can get some AD users in and entitle them to the demo. It seems like a lot of organizations have their identity source already in place, and Active Directory is commonly what is used there. The Configuration option seems like a logical place to start, and here we have the domain integration settings. Currently it is set to the internal database. Let’s enter some information in the Microsoft option to see if we can get the AD part in.

OVD - Configuration

I am using the internal users to keep it simple and leave in the support for Linux. This is a demo, not production.

When the information is added, push the test button to see if the LDAP connect and bind work. Save when all is green. Problems here? Go to Status – Logs to see what is happening. The main issues can be DNS, time offset, or the service account not having the correct information or UPN in the domain. The OVD Linux bind command tries Login@Domain, hardcoded.
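To illustrate those troubleshooting steps, here is a minimal, stdlib-only Python sketch that checks the usual suspects before blaming the bind: DNS resolution, LDAP port reachability, and the Login@Domain (UPN) string OVD tries. The host and account names in the comments are hypothetical examples, not values from the appliance.

```python
import socket

def make_bind_upn(username: str, domain: str) -> str:
    """OVD's Linux bind tries Login@Domain (UPN) hardcoded -- build that string."""
    return f"{username}@{domain}"

def resolves(hostname: str) -> bool:
    """DNS check: can this machine resolve the domain controller?"""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP reachability check for LDAP (389) or LDAPS (636)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical names):
# resolves("dc01.vtestlab.local")
# port_open("dc01.vtestlab.local", 389)
# make_bind_upn("svc-ovd", "vtestlab.local")  # -> "svc-ovd@vtestlab.local"
```

If DNS and the port check out but the bind still fails, verify the account actually has a UPN set in the domain, since OVD will not fall back to DOMAIN\user format.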

And voilà, Administrator from the vTestlab domain has a session connected:

OVD - Administrator Session

My opinion about OVD

It works out of the box with any HTML5 browser. Or you can of course use the Enterprise client, but this will require an Enterprise license and RDP or i-RDP to the client desktops (or ESG to be SSL tunneled).

[Edit] I must correct my previous statement that Inuvika is using RDP as the enterprise display protocol. That is not entirely true. OVD uses RemoteFX with the Enterprise Desktop Client and Windows Application Servers. RemoteFX is a set of technologies on top of RDP that enhances the visual experience significantly in comparison with older, non-RemoteFX RDP. Indeed better for the user experience; how much better we will leave up to the users. For Linux Application Servers there is no RemoteFX support yet; this is forthcoming.
[Close Edit]

For HTML browser user connections, or when using the Enterprise client in combination with the ESG, OVD utilizes HTTPS (tcp/443) and is thus roadwarrior friendly. By roadwarrior friendly I mean a service that is firewall friendly and makes hotel, Starbucks or airport WiFi a place to use the environment without blockages, changed ports, VPN tunnels, or not being able to use the service remotely from that location.
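As a quick sanity check of that road-warrior claim, the following stdlib-only Python sketch attempts a TLS handshake on 443, which is all a client behind a restrictive hotel firewall needs to succeed at. The gateway hostname shown is a placeholder, not a real Inuvika endpoint.

```python
import socket
import ssl

def https_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TLS handshake on the given port completes.

    Certificate verification is disabled here because a lab ESG may
    present a self-signed certificate; re-enable it for production.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

# Example usage (placeholder hostname):
# https_reachable("esg.example.com")  # True when 443 is open end-to-end
```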

For IT operations the administration is in a single console. No scattered consoles or admin tools all over the place. And no dependencies, like the disliked Flash plugin of some other solution out there ;). Further, the expected components are in logical locations.

Cross-publishing apps between distributions is a very nice feature. Windows with Linux apps or Linux with Windows apps, great. Or add web applications to the mix. Furthermore, Inuvika is not bound to a stack choice or hypervisor. VMware vSphere yes, Nutanix (Nutanix Ready AHV) yes, KVM, etc. yes.

The use cases, applications and desktops still have to be assessed and designed accordingly. And these will be the most important bits for the users; this is what makes or breaks an EUC environment. I don’t see a lot of users currently on Windows-based desktops and applications going to Linux desktops and apps without more or less resistance and opposition. That Windows will be in there for now. But this is the same for the other vendors, so not much difference here.

I personally don’t know what the user experience is like when doing your day-to-day work throughout the business cycle; I haven’t come across Inuvika OVD in the wild.

One of the strong points of going open source is that the product will be improved by the contributions of the community (if there still is a community version…). That will mitigate some of the above, but it also requires the OVD community to have a footprint of some sort for the required input and change. If the community is too small it will not be able to help Inuvika and the OVD user base.

I think cost-wise it will be interesting for some shops out there looking to replace their EUC solutions and in the meantime looking for ways to cut costs. These shops probably already have some issues and bad experiences with their current solution along the way. I do not think organizations happy with VMware Horizon or Citrix will be lining up to replace their EUC with Inuvika. Yet, that is.
This is a fast world, and it is interesting to see that there are vendors thinking outside of the paved roads. It makes their solutions, but also others’, a better place for the users. It’s the community and open source that is really interesting here. So just give it a go and see for yourself. Don’t forget to share your experience with the community.

– Happy using your OVD from Inuvika!


EUC Layers: Dude, where’s my settings?

With this blog post I am continuing my EUC Layers series. As I didn’t know that I had started one, there is no real order to follow, other than that it seems to be somewhat from the user perspective, as that is a big part of End User Computing. But I cannot guarantee that will turn out to be the right order in the end.

If you would like to read back the other parts you can find them here:

For this part I would like to ramble on and sing my song about an important part of the user experience: User Environment Management.

User Environment

Organisations grant their users access to certain workspaces: an application, a desktop and/or parts of the data required for, or supporting, the user’s role within the business processes. With that, these users are granted access to one or more operating systems below that workspace or application. The organization would also like to apply some kind of corporate policy to ensure the user works with the appropriate level(s) of access for doing their job, keeping the organization’s data secure. Or, in some cases, to comply with rules and regulations, making the user’s job a bit more difficult at the same time.

On the other side of the force, each user has a preferred way of using the workspace and will tend to make all sorts of changes that enable them to work as efficiently as humanly possible. Examples of these changes are look-and-feel options and e-mail signatures.

The combination of the organization policy and the user preferences is the User Environment layer, also called persona or user personality.

Whether a user is accessing a virtual desktop or a published application, a consistent experience across all resources is one of the essential objectives and requirements for End User Computing solutions. If you don’t have a way of managing the user environment, you will have disgruntled users and not much of a productive solution.


Managing the User Environment

Managing the user environment is complicated, as there are a lot of factors and variables in the end user environment. Further complexity is added by what needs to be managed from the organization’s perspective and what your users expect.

Next to this, yet another layer is added to the complexity: the workspaces are often not one dominating technology, but a combination of several pooled technologies. Physical desktop pools, virtual desktop pools, 3D engineering pools, application pools and so on.

That means that a user does not always log on to the same virtual desktop each time, or logs on to a published application on another device, still wanting the same settings in that application as in the application on the virtual desktop. A common factor is that the operating system layer is a Windows-based OS. The downside: several versions and a lot of application options. We should make sure that user profiles are portable in one way or another from one session to the next.

When using differently versioned pooled workspaces, it is absolutely necessary that the method of deploying applications and settings to users is fast, robust and automated, from both the user context and operational management.

Sync Personality

User Environment Managers

And cue the software solutions that abstract the user data and the corporate policies from the delivered operating system and applications, and manage them centrally.

There are a lot of solutions that provide a part of the puzzle, with profile management and such. And some provide a more complete UEM solution, like:

  • RES ONE Workspace (previously known as RES Workspace Manager),
  • Ivanti Environment Manager (previously known as AppSense Environment Manager),
  • Liquidware Labs ProfileUnity,
  • VMware User Environment Manager (previously known as Immidio).

And probably some more…

Which one works best is up to your requirements and the fit with the rest of the used solution components. Use the one that fits the bill for your organisation now and in a future iteration. And look for some guidance and experience from the field via the community or the Intarweb.

User Profile Strategy

All the UEM solutions offer an abstraction of the Windows user profile. The data and settings normally in the Windows user profile are captured and saved to a central location. When the user session starts on the desktop, the context changes, applications start or stop, or the session stops, interaction between (parts of) the central location and the Windows profile takes place to maintain a consistent user experience across any desktop. Just in time when they are needed, and not bulk loaded at startup.

The Windows Profile itself comes in following flavours:

  • Roaming. Settings and data are saved to a network location. By default the complete profile is copied at log in and log out on any computer where the user starts a session. Which bits are copied or not can be tweaked with policies.
  • Local. Settings and data are saved locally on the desktop and remain there. When the user roams to another desktop, settings and data are not copied and a new profile is created with the new session.
  • Mandatory. All user sessions use a prepared user profile. All changes the user makes to the profile are deleted when the user session is logged off.
  • Temporary. Something fubarred. This profile only comes into play when an error condition prevents the user’s profile from loading. Temporary profiles are deleted at the end of each session, and changes made by the user to desktop settings and files are lost when the user logs off. Not used with UEM.

The choice of Windows profile used with(in) the UEM solution often depends on the to-be architecture and the phase you are in: the starting point and where you are going. For example, starting with the bloated and error-prone roaming profiles, running UEM side by side to capture the current settings, and then moving to clean mandatory profiles. Folder Redirection in the mix for centralized user data, and presto.

Use mandatory profiles as the de facto standard wherever possible; they are a great fit for virtual desktops, published applications and host/terminal servers in combination with a UEM solution.

The user profile strategy should also include something to handle Windows profile versions. Different OS versions come with different profile versions, and without a UEM solution you cannot roam settings between, say, a V2 and a V3 profile. So migrating or moving between different versions is not possible without tooling. The following overview is created with the information from TechNet about user profiles.

Windows OS and its user profile version:

  • Windows XP and Windows Server 2003: first version, no suffix
  • Windows Vista and Windows Server 2008: .V2
  • Windows 7 and Windows Server 2008 R2: .V2
  • Windows 8 and Windows Server 2012: .V3 after the software update and registry key are applied, .V2 before
  • Windows 8.1 and Windows Server 2012 R2: .V4 after the software update and registry key are applied, .V2 before
  • Windows 10: .V5
  • Windows 10, versions 1607 and 1703: .V6
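The version table above is easy to get wrong in scripts that build profile share paths, so here is a small Python sketch encoding it. The share path and user name in the examples are made up for illustration.

```python
# Profile folder suffix per Windows version, as (with update, without update).
PROFILE_SUFFIX = {
    "XP/2003":      ("", ""),
    "Vista/2008":   (".V2", ".V2"),
    "7/2008R2":     (".V2", ".V2"),
    "8/2012":       (".V3", ".V2"),   # .V3 only after the update + registry key
    "8.1/2012R2":   (".V4", ".V2"),   # .V4 only after the update + registry key
    "10":           (".V5", ".V5"),
    "10-1607/1703": (".V6", ".V6"),
}

def roaming_profile_path(share: str, user: str, os_key: str,
                         update_applied: bool = True) -> str:
    """Build the versioned roaming profile folder path for a given OS."""
    with_update, without_update = PROFILE_SUFFIX[os_key]
    suffix = with_update if update_applied else without_update
    return f"{share}\\{user}{suffix}"

# Example (hypothetical share and user):
# roaming_profile_path(r"\\fs01\profiles", "cholland", "8/2012")
#   -> \\fs01\profiles\cholland.V3
```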

Next to that, UEM offers moving user-context settings out of Group Policies and login/logoff scripts, again lowering the number of policies and scripts at login and logoff, and improving the user experience by lowering those waiting times: actually having what you need just in time when you need it.

And decide what your organization’s user environment strategy is: what do you want to manage and control, what do you capture for users and applications, and what not.

VMware User Environment Manager

With VMware Horizon, VMware UEM is often used. So what do we need for VMware UEM?

In short VMware UEM is a Windows-based application, which consists of the following main components:

  • Active Directory Group Policy for the configuration of VMware User Environment Manager.
  • A UEM configuration share on a file repository.
  • A UEM user profile archives share on a file repository.
  • The UEM agent, or FlexEngine, in the Windows guest OS where the settings are to be applied or captured.
  • The UEM SyncTool, for using UEM in offline conditions and synchronizing when the device connects to the network again.
  • The UEM Management Console for centralized management of settings, policies, profiles and config files.
  • The Self-Support and Helpdesk tools, for resetting to a previous settings state or troubleshooting by level 1 support.
  • The Application Profiler for creating application profile templates. Just run your application with Application Profiler and it automatically analyzes where the application stores its file and registry configuration. The analysis results in an optimized Flex config file, which can then be edited in the Application Profiler or used as-is in the UEM environment.

UEM works with the UEM shares and the engine components available in the environment. With the latest release, Active Directory isn’t a required dependency anymore thanks to the alternative NoAD mode. The last three components are for management purposes.
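Since UEM leans on plain file shares, a pre-flight check before rollout can be as simple as this Python sketch. The UNC paths in the example are hypothetical; point them at your own repositories.

```python
from pathlib import Path

def check_uem_shares(config_share: str, archive_share: str) -> dict:
    """Verify the two UEM file repositories are reachable before rollout.

    FlexEngine reads its Flex config files from the configuration share
    and writes user profile archives to the archives share; if either is
    unreachable, logons will silently lose settings.
    """
    return {
        "config":   Path(config_share).is_dir(),
        "archives": Path(archive_share).is_dir(),
    }

# Example (hypothetical UNC paths):
# check_uem_shares(r"\\fs01\UEMConfig", r"\\fs01\UEMProfiles")
```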

All coming together in the following architecture diagram:

UEM Architecture

That’s it, no need for additional application managers and database requirements. In fact, UEM utilizes components that organizations already have in place. Pretty awesomesauce.

I am not going to cover the installation and configuration of UEM; there are already a lot of excellent resources available on the big bad web, and of course the VMware blogs and documentation center.

Important for the correct usage of UEM is to keep in mind that the solution works in the user context. Pre-Windows-session settings or computer settings will not be in UEM. And it will not solve application architecture misbehaviour. It can help with some duct tape, but it won’t solve application architecture changes from version 1 to version 4.

VMware UEM continually evolves, with ever tighter EUC integration using VMware Horizon Smart Policies, application provisioning integrations, application authorizations, new templates and so on.

Happy Managing the User Environment!