EUC Toolbox: O Sweet data of mine… Mining data with Lakeside Software’s SysTrack

I have already covered the importance of insights for EUC environments in some of my blog posts. The TL;DR of those is: without some kind of insight you're screwed. As I find this a very important part of EUC and EUC projects, and I see that insights are often lacking when I enter the ring…. I would like to repeat myself: try to focus on the insights a bit more, pretty please.

Main message: to successfully move to and design an EUC solution, assessment is key, as designing, building and running isn't possible without visibility.

Assessment Phase

The assessment phase is made up of gathering information from the business, such as objectives, strategy, business non-functional and functional requirements, security requirements, issues and so on. And mostly getting questions from the business as well. This gathering part consists of getting the information in workshops with all kinds of business and user roles, sending out questionnaires, and getting your hands on documentation regarding the strategy and objectives, plus current state architecture and operational procedures explanation and documentation. Getting and creating a documentation kit.

The other fun part is getting insights from the current infrastructure: data about the devices, images, application usage, logon details, profiles, faults etc. etc. etc.

Important when getting this data is the correlation of user actions with these subjects. It is good to know, when the strategy is to move to cloud-only workspaces, that there will be several thousand steps between how a user currently uses his tool set to support the business process and that business objective. An intermediate step of introducing an any-device desktop and hosted application solution is likely to have a higher success rate. Or wanting to use user environment management while the current roaming profiles have bloated to 300GB. But I will try not to get ahead of the theories; first get some assessment data.

Data mining dwarfs

Mining data takes time

Okay, out with this one. Mining, or gathering, data takes time and therefore a chunk of project time and budget. Unfortunately, with a lot of organisations it is either unclear what this assessment will bring, there are costs involved, or a permanent solution is not in place. Yes, there are sometimes point-in-time software and application reports that can be pulled from a centralized provisioning solution, but these often miss the correlation of that data with the systems and with what the user is actually doing. And there is the fact that Shadow IT is around.

Secondly, there is knowing what to look for in the mined data to answer the business questions.

But we can help, and the time and cost are more of a planning issue, for example caused by not being clear on the effort involved.

The process: day 1 is installation. After a week a health check is done to see if data is flowing into the system. At day 14 the initial reports and modeling can be started. At day 30+ a business cycle has been mined. This means enough data points have been captured, desktops that don't connect often have had their connection (and thus their agents reporting in), and there is enough variation for good analytics. Month start and month closing procedures have been captured. What about half-year procedures? No, they're not in a 30-day assessment when that period doesn't include that specific procedure. Check with the business whether those are critical.

Assess the assessment

What will your information need be, and are there any specific objectives the business would like to see covered? If you don't know what you are looking for, the amount of data will be overwhelming and it will be hard to get reports out. Secondly, try to focus on what an assessment tool can do for you and how. Grouping objects in reports that don't exist in the current infrastructure or the organization structure will need some additional technical skills, or needs to be placed in the "can't solve the organization with one tool" category.

Secondly, check the architecture of the chosen tool and how it fits in the current infrastructure. You probably need to deploy a server, have a place where its data is stored, and have some client components identified and deployed. Check whether the users are informed; if not, do it. Are there desktops that don't always connect to the network, and how are these captured? Agents connect to the Master once per day to have their data mined.

Thirdly, check whether data needs to remain within the organization's boundaries, or whether it can be saved or exported to a secure container outside the organization. For analyzing and reporting it will be beneficial to the timelines if you can work with the data offsite; it saves a lot of traveling time throughout the project.

Fourthly, what kind of assessment is needed? Do we need a desktop assessment, server assessment, physical-to-virtual assessment or something else? What kind of options do we have for gathering data: do we need agents, something in the network flow, etc. etc.? This kinda defines the toolbox to use. Check whether the vendor and/or community is involved in the product; this can prove very valuable for getting the right data and the interpretation of data in reports. Fortunately for me, the tool of this blog post, SysTrack, can be used for all kinds of assessments. But for this EUC toolbox I will focus on the desktop assessment part.

SysTrack via Cloud

VMware teamed up with Lakeside Software to provide a desktop assessment tool, free for 90 days, called the SysTrack Desktop Assessment. It will collect data for 60 days and keep that data in the cloud for an additional 30 days. After 90 days access to the data is gone. The free part is that you pay with your data. VMware does the vCloud Air hosting and adds the reports, Lakeside adds the software to the mix and voilà, magic happens. The assessment can be found at: https://assessment.vmware.com/. Sign up with an account and you're good to go. If you work together with a partner, be sure to link your registration to that partner so they have access to your information.

When registration is finished your bits will be prepared. The agent software will be linked to your assessment. Use your deployment method of choice to deploy the agents to the client devices, physical or virtual, as long as it's a Windows OS. Agents need to connect to the public cloud service to upload their data to the SysTrack system. Don't like all your agents connecting to the cloud? You can use a proxy: your clients connect to the proxy and the proxy connects to the cloud service. Check the collection state after deploying, and again a week after deploying. After that, data will show up in the different visualizers and overviews.

SDA - Dashboard

If you have greyed-out options, be patient, there is nothing wrong (well, not yet). These won't become active until a few days of data have been collected, to make sure representative information is in there before most of the Analyze, Investigate and Report options are shown.

Have a business cycle in, and you can use the reports for your design phase. The Horizon sizing tool is an XML export that you can use in the Digital Workspace Designer (formerly known as Horizon Sizing Estimator); find it at https://code.vmware.com/group/dp/dwdesigner. Use the XML as a custom workload.

SDA- User Visualizer

SysTrack On site

Okay, now for the on-site part. You've got a customer that doesn't like its data on somebody else's computer, needs more time, needs customizations to reports, dashboards or further drill-down options -> tick on-site deployment. It needs more preparation and planning between you and the customer. If the cloud data isn't a problem, let your customer start the SDA to have some information before the on-site deployment is running; it mostly takes calls and operational procedures before a system is ready to install.

Architecture

Okay, so what do we need? First get a license from Lakeside or your partner for the number of desktops you want to manage. You will get the install bits, or the consultant doing the install will bring them.

Next, the SysTrack Master server. Virtual or physical. 2 vCPU and 8GB (with Express use 12GB) to start with; this grows with the number of endpoints. Use the calculator (Requirements generator) available on the Lakeside portal. Windows Server 2008 R2 SP1 minimum. IIS web roles, .NET Framework (all), AppFabric and Silverlight (brr). If you did not set up the prerequisites, they will be installed by the installer (but it will take time). That is, all except .NET Framework 3.5, as this is a feature on servers for which you need an additional source files location. Add this feature to the system prior to installation. And while you are at it, install the rest.
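If you want to stage those prerequisites up front, something along these lines does the trick on Windows Server 2012 R2 (a minimal sketch; the exact feature list your SysTrack version expects is an assumption on my part, so check the requirements generator output):

```powershell
# .NET 3.5 is a feature on demand: point -Source at the SXS folder of the
# install media (here assumed to be mounted on D:).
Import-Module ServerManager
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs

# IIS web roles and ASP.NET 4.5; AppFabric and Silverlight are separate
# Microsoft downloads, not Windows features, so install those by hand.
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 -IncludeManagementTools
```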
For a small environment, or one without non-persistent desktops, SQL Express (2014) can be included in the deployment. Otherwise use an external database server with SQL Server Reporting Services (SSRS) set up. With Express, SSRS is set up for you, by the way.

Lakeside Launch

You need a SQL user (or the local system) with DBO on the newly created SysTrack database, and a domain user with admin rights on the reporting service and local admin on the Windows server. If you are not using an application provisioning mechanism or a desktop pool template, you can push or pull from the SysTrack Master. For this you need an AD user with local admin rights on the desktops (to install the packages) and File and Print Services, Remote Management and Remote Registry enabled. If SCCM or an MSI installation in the template is used, you won't require local admin rights, Remote Registry and such.

If there is a firewall between the clients (or agents, or children) and the Master server, be sure to open the port you used in the installation; by default you need 57632 TCP/UDP. And if there is something between the Master server and the Internet blocking the registration, you will need to activate by phone. The Internet is only used for license activation, though.
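On the Windows Firewall side of the Master that boils down to something like this (a sketch; the rule names are mine and the port is the installation default, so adjust if you picked another one):

```powershell
# Allow the SysTrack agent-to-master traffic on the default port.
New-NetFirewallRule -DisplayName "SysTrack Master (TCP)" -Direction Inbound `
    -Protocol TCP -LocalPort 57632 -Action Allow
New-NetFirewallRule -DisplayName "SysTrack Master (UDP)" -Direction Inbound `
    -Protocol UDP -LocalPort 57632 -Action Allow
```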

And get a thermos of coffee, it can take some time.

To visualise the SysTrack architecture we can use the diagram from the documentation (without the coffee that is).

SysTrack Architecture

Installation is done in four parts: first the SysTrack Master Server (with or without SQL), second the SysTrack Web Services, third the SysTrack Administrative Tools, and when these three are installed and SysTrack is configured, you can deploy the agents.

  • SysTrack Master Server is the master of the application intelligence: it stores the data from the child systems (or connects to a data repository) and holds the configuration, roles and so on.
  • SysTrack Web Services is for the front-end visualizers and reporting (SSRS on the SQL server).
  • SysTrack Administrative Tools contains, for example, the deployment tool for configuration.

You gotta catch them all.

SysTrack Install menu

And click on Start install.

The installers are straightforward. Typical choices are the deployment type: full or passive. Add the reporting service user that was prepared (you can also do this later). Database type: pre-existing (a new window will open for connection details) for an external database, or the Express version. Every component will need its restart. After the restart of the Master setup, the Web Services installer will start. After this restart, the Administrative Tools installer doesn't start automatically. Just open the setup, tick the third option and start the install.

Open the deployment tool. Connect to the Master server. Add your license details if this is a new installation. Create a new configuration (Configuration – Alarming and Configuration). Selecting Base Roles\Windows Desktop and VMP works as a good start for desktop assessments. Set your newly created configuration as the default, or change it manually in the tree once clients have been added to the tree. And push the play button when you are ready to start or receive clients. Else nothing will come in.

vTestlab Master Deployment Tool

Now deploy the agent via MSI. The installation files are on the Master Server in the installation location: SysTrack\InstallationPackages. You have the SysTrack agent (System Management Agent 32-bit) and the prerequisite Visual C++ 2010 Redistributable.
With MSI deployments you add the Master server and port to the installer options (see the sketch below). If the Master allows clients to auto-add themselves to the tree, which is the default with version 8.2, they will show up.
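An unattended push would look roughly like this. Note that MASTERSYSTEM and MASTERPORT are placeholder property names of mine; check the actual public properties of your agent MSI (for example with an MSI editor) before rolling this out:

```powershell
# Hypothetical silent install of the 32-bit agent, pointing it at the Master.
msiexec /i "SysTrackAgent.msi" MASTERSYSTEM=systrack.vtestlab.local MASTERPORT=57632 /qn
```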

“Normally” the clients won’t notice the SysTrack agent being deployed. There is no restart required for the agent installation.
In strict environments you can get a pop-up in Internet Explorer about the LSI Hook browser snap-in. You can suppress this by adding the CLSID of LSI Hook to the add-on list with a value of 1. Or you can edit your configuration and set Web browser plugins to false. This in turn means that web data from any browser is not collected by SysTrack.
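Scripted, the add-on list approach could look like the sketch below. The registry key matches the Internet Explorer "Add-on List" policy, and the CLSID is a placeholder; look up the real LSI Hook CLSID on a machine with the agent installed before using this:

```powershell
# Pre-approve the LSI Hook browser add-on via the IE Add-on List policy key.
$key   = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Ext\CLSID'
$clsid = '{00000000-0000-0000-0000-000000000000}'  # placeholder, not the real CLSID

New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name $clsid -Value '1' -PropertyType String -Force
```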

Configuration Web Browser Plugin.png

In any case, be sure to test the behaviour in your environment before rolling out to a large group of clients.

Conclusion

While the cloud version is deployed in a snap, data is easily accessed within the provided tools, and the reports fit the why of the assessment, there is a big but: a big chunk of organisations don't like this kind of data going into the cloud, even when the user names are anonymized. A pro of the on-site version is that it gives you more customization and reporting possibilities. The downside is that SysTrack on-site is Windows based and the architecture will require Windows licenses next to the Lakeside license. All the visualizers and tools can be clicked and drilled down from the interface, but it feels a little like several tools have been duct-taped together. You can customize whatever you want: dashboards, reports and grouping. You would need a pretty broad skill set though, including how to build SQL queries, SSRS reports and the SysTrack products themselves. And what about the requirement for Microsoft Silverlight, a deprecated framework? Tsk tsk. Come on, this is 2017 calling….

But in the end it does not matter whether SysTrack from Lakeside Software or, for example, Stratusphere FIT from Liquidware Labs is used as your tool set. The most important part is to know what information is needed from what places, and to know thy ways to present it. Assess the assessment, plan some time, and get mining for diamonds in your environment.

– Happy Mining!

Sources: vmware.com, lakesidesoftware.com

EUC: Can I kick it – upgrading to Horizon 7.1

The 16th of March was a good day. The NLVMUG was going on in the Netherlands (great event!), there was great weather, and Horizon 7.1 went GA. And I wanted to get my TestLab up and running with that version, and take a little peek at whether there are any "oh my"s in the upgrade. See what has changed, and where. So why not write up this pirate's adventure….

Upgrade Procedure and Interoperability

Before the upgrade it is important to know in which order the bits are to be upgraded, whether we are doing an in-place or new-VM deployment, and whether the new versions still work with other components in the environment, or whether those also need to be upgraded or will break the upgrade.

The upgrade procedure is more or less the same as with the previous ones:

  • Check the status of the components. If there currently are health issues, fix them before the upgrade. Or use the upgrade to try to fix your issue if it is named as a fix in the release notes.
  • Get out your password manager for database passwords and so on.
  • Complete backups and snapshots. Don’t forget databases and such!
  • Disable provisioning and upgrade the Composers. Provisioning can only be re-enabled when all components are upgraded.
  • Disable a Connection Server and upgrade it. If you have more than one, you can do them one at a time to leave your users the option to connect. Disable the Connection Server in Horizon Administrator and in the load balancer.
  • Optional: upgrade the paired Connection Server and Security Server. Disable the connection and prepare the Security Server for upgrade in Horizon Administrator, and in the load balancer. First upgrade the paired Connection Server and then the Security Server.
  • Upgrade the Horizon Agent.
  • Upgrade the Horizon Clients.
  • Upgrade the GPOs to the ADMX templates.

Note: during an upgrade it is allowed, or supported, that some older versions interact with the new versions. For example, first upgrade the Composer in one maintenance window and the Connection Servers in the following one. Just don't let that upgrade window drag on for ages.

Your environment will probably have some other upgrades as well, like other Horizon suite components, vSphere, Tools, Windows versions and so on. Be sure to have the steps broken down before doing any upgrades.

Check if the component versions can work together by checking the VMware Product Interoperability Matrices at http://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop. Be sure to put in all the VMware solutions you are using. And check with vendors of components outside of the VMware scope. Don’t forget your Zero or Thin Client vendors!

Find a red mark in there? Well, stop right there before upgrading.

Treasure map

I have my testlab in the cloud. So, to not break all the bits, I am cloning my lab into a new lab that I will use for the upgrade. Pretty nice functionality!

Announcement and location

While preparing for the upgrade bits to download, we have some time to browse through the 7.1 announcements. Surely you have seen the VMware announcement or one of the blog write-ups you can choose from. If not, ITQ Master of Drones and EUC Laurens has a post on the announcement that you can find over here: https://www.vdrone.nl/whats-new-vmware-horizon-7-1/.

Downloads, well, easy peasy, they are in the usual my.vmware.com spot (linkie to the VMware spot: https://my.vmware.com/group/vmware/info?slug=desktop_end_user_computing/vmware_horizon/7_1). Have an active SnS and you're entitled to the upgrade bits, or else go for an evaluation.

Grab - Download Horizon 7.1

And while you're at it, get the ADMX files for all of the Horizon GPOs. Thumbs up, they are finally there, VMware. Better late than never.

Upgrade Procedure

I have the following components in my vTestlab that need upgrading: Horizon Composer because of the current desktop pools, the Horizon Connection Server, and the databases that are running because of these services. And the Horizon Agent in the desktop pools.

For my testlab I used a saved blueprint of my VCAP-DTM lab and used that blueprint to publish a new testlab in Ravello.

After the upgrade I have to check the following components that interact with Horizon: vIDM and vROPS for Horizon. And client connections, of course.

Composer

After disabling the provisioning of the desktop pools, log on to your composer server.

Capture - Disable Provisioning Desktop Pool

On the composer server start the installer. After the startup it detects that an upgrade should take place.

Capture - Composer Upgrade

  1. Click next,
  2. Accept the EULA,
  3. Check your destination folder,
  4. Check database settings and input password,
  5. Check port and certificate settings. Note: if you create a new SSL certificate you will have to retrust that one in Horizon. I am reusing the SSL certificate so I select the one installed,
  6. Check and push the install button,
  7. Grab a coffee and check status,
  8. Finish,
  9. Restart server,
  10. Rinse and repeat for other composers in your environment,
  11. If you are done with all components in your desktop block, don’t forget to enable provisioning of the desktop pool!

Connection Server

After disabling the connection server you are going to work on, log on to the connection server.

Capture - Disable connection server

Select the connection server and click the disable button.

On the connection server start the installer. Like the Composer upgrade, the installer will detect it is in an upgrade scenario.

Capture - Horizon Connection Upgrade

  1. Click next,
  2. Accept the EULA,
  3. Check and push the install button,
  4. Grab another coffee and check status,
  5. Finish and read the readme. Yes, really; depending on where you're coming from there are some pointers in there to check or change to make your life simpler,
  6. Open a browser to your upgraded host and look at that spiffy portal,
  7. Open the admin console and check connection to other components,
  8. Enable your connection server,
  9. Rinse and repeat for others,
  10. (don’t forget your load balancers….)

Look at that pretty new portal

Capture - Horizon Portal

Unfortunately the administration console GUI hasn't changed, and Flash (ahaaaa) is still around. Sad panda…..

Don't forget to check that vIDM and vROPS for Horizon aren't broken. I had to repair/restart the broker agent for vROPS. And have a little patience for the metrics to flow back in.

Agent

I have got an RDSH hosted application farm server, so I will be updating that agent. And some desktop pools, but the procedure is the same. First off, disabling access to the RDSH. Well, that depends on the number of servers you have in the farm and what you're hosting from it. Disable the hosted desktop pool, for example. With my test lab it's one server, so disabling the farm is sufficient. Heck, I am the only user, so leaving everything running would only bug my multiple personalities (who said that?!?).

With several servers you could do maintenance on one by removing it from the farm. Be sure to have your farm running the same versions. Or, if you have a cloned pool, just update the template.

On the RDSH host start the installer. Again the installer will notice it is an upgrade.

  1. Click next,
  2. Accept the EULA,
  3. Check your IP version,
  4. Custom setup components; we are not adding anything, just upgrading, so click next,
  5. (manual only) Check the RDSH registration settings with the connection server,
  6. Next and Install,
  7. Finish and reboot,
  8. Enable hosts or pools when the farm is done.

What’s new in the admin?

Instant Clone pools have the option to select specific VLANs for that pool, or to use the VM network of the template snapshot.

Capture - IC Select Networks

In Global Settings you have two new client settings:

Capture - Global Settings client

  • Hide server information in the client interface. You will only see the lock icon if the certificate is trusted, but not the https://connectiontoserver.fq.dn URL.
  • Hide domain list in the client interface. Only the username and password boxes are shown; the drop-down with the domains is gone. Great for use cases where you want to hide the domain, or where there is a sh*t load of domains in there. Users have to remember their UPN.

Client user interface here means the Horizon Client and the HTML client (for the domain list: the URL is still in your browser if you haven't hidden that in another way).

Capture - HTML client no domain

Mind that this is currently not working if the Horizon client is pushed from AirWatch to iOS.

In Global Settings you can also add an automatic refresh of the admin interface (can't remember if this was already in) or display some MOTD or legal pre-login notice to all your users. This must be accepted by your users before they are able to log on.

What is missing from the admin?

As @jketels already mentioned on twitter:

Still no VLAN selection support for Dedicated and Floating pools. Only Instant-Clones have this new option available. #Horizon #View 7.1 pic.twitter.com/ehYCnZa4nB

— Joey Ketels (@jketels) March 17, 2017

The network selection you can only do from the GUI with instant clone desktop pools. The network selection (step 7 in the vCenter settings) is not available for, for example, linked clone pools. And think of CPA multi-pod deployments where the same networks are not used, or all the other reasons why a lot of customers are using multiple VLANs for their desktop pools. Again a missed opportunity. And no, linked clones are not yet deprecated or planned to be, so support this from the GUI. Well, if needed, with PowerShell you can still get this in for your linked clones; see the sketch below.
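A rough sketch with the View PowerCLI snap-in on a Connection Server (cmdlet and parameter names as I recall them from the View PowerCLI documentation; the inventory paths and pool id are made up, so verify against your own environment): export the network label specification for the linked clone parent, edit the generated file to enable the labels you want, and apply it to the pool.

```powershell
# Generate a network label spec file for the linked clone parent/snapshot.
Export-NetworkLabelSpecForLinkedClone -ClusterPath "/vTestlab/host/Desktops" `
    -ParentVmPath "/vTestlab/vm/W7-Template" -ParentSnapshotPath "/Base" `
    -MaxVMsPerNetworkLabel 200 -NetworkLabelConfigFile "C:\netlabelspec.txt"

# Edit C:\netlabelspec.txt (enable the labels you want), then apply it
# to the existing linked clone pool.
Update-AutomaticLinkedClonePool -Pool_id "LC-Pool-01" `
    -NetworkLabelConfigFile "C:\netlabelspec.txt"
```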

That’s it

That's it, the core components are upgraded and running happily. I probably still have to find out a bit more about what has changed within this release, but for a start it looks pretty slick and without too much hassle.

– Happy getting your Horizon going the distance!

Sources: vmware.com, vdrone.nl

 

VCAP-DTM Deploy Prep: Horizon Lab on Ravello Cloud importing OVA

In my last post I wrote about creating a lab for your VCAP-DTM prep. Read it here: VCAP-DTM Deploy Prep: La La Land Lab and Horizon software versions. In that post I mentioned the cloud lab option with Ravello Cloud that I'm using myself. With appliances there are some "oh, did you look at this" moments while deploying them on Ravello Cloud. There are two or three appliances to take care of, depending on your chosen architecture: vROPS, vIDM and VCSA. Two of those you can also do on a VM: vCenter on Windows, and vROPS on Windows or Linux. For vROPS, 6.4 is the last version with a Windows installer.

I personally went with vCenter on Windows combined with Composer (Windows only), so I will skip that one. For vIDM you will have to use the OVA.

Okay, options for OVAs and getting them deployed: 1) directly on Ravello, 2) use a nested hypervisor to deploy to, or 3) use a frog-leap with a deployment on vSphere and upload those to Ravello. The first is what we are going to do, as the second creates a dependency on a nested hypervisor: wasted resources on that layer, getting the data there, the traffic data flow, and for this lab I don't want the hypervisor to be used for anything other than the Composer actions required in the objectives. The third, well, wasn't there a point to putting labs in Ravello Cloud?

Now how do I get my OVA deployed on Ravello?

For this we have the Ravello import tool, with which we can upload several VMs, disks and installers to the environment. We first need to have the install bits for Identity Manager and vROPS downloaded from my.vmware.com.

In Ravello Cloud go to Library – VM – +Import VM. This will either prompt you to install the Ravello Import Tool (available for Windows and Mac) or start the import tool.
In the Ravello Import Tool click on Upload (or Upload a new item). This will open the upload wizard. Select the Upload a VM from an OVF, OVA or Ravello Export File source. And click start to select your OVA location.

Grab Ravello Import Wizard - VM from OVA

Select the vIDM OVA and upload.

Grab - Ravello Upload There she goes

But are we done?
No, grab vROPS as well.

Grab - Ravello Upload vROPS as well.png

When the upload is finished we will need to verify the VM. As part of the VM import process, the Ravello Import Tool automatically gets the settings from the OVF extracted out of the OVA. Verify that the settings for this imported VM match its original configuration, or the one you want to use. You can verify at Library – VM. You will see your imported VMs with a configuration icon. Click your VM and select the configuration, go through the tabs to check. Finish.

It normally imports the values from the OVF, but it will sometimes screw up some values. When you have multiple deployment options, like vROPS has, you will have to choose the default size. The vROPS import will be set either to the extra small deployment (2 vCPU, 8GB) or to very large. Or use the one you like yourself. The same goes for the External Services; I won't put them in (yet). Checking the settings from the OVA yourself is up in the next paragraph.

Now how do I get the information to verify against?

You can use the sizing calculations done when designing the solution ;). But another way is to look in the OVA. An OVA is just an archive format for the OVF and VMDKs that make up the appliance.

We need something to extract the OVAs. Use tar on any Linux/Mac, or 7-Zip on Windows. I am using tar for this example on my Mac. First up: getting vIDM running in my test lab.

Open a terminal and go to the download location. Extract the OVA with tar xvf: x for extract, v for verbose, f for file, followed by the filename. Well, not exactly in that word order, but that's the way I learned to type it ;).

That gives us this:

Capture - tar - ova

Here we see the appliance has four disks: the system, db, tomcat and var VMDKs.

If we look in the OVF file (use vi), at the DiskSection we will see that system needs to be in front and bootable, followed by db, tomcat and last var.

Still in the OVF file, next up: note the resource requirements for the vIDM VM. We need those figures later on to configure the VM with the right resources. In the VirtualHardwareSection you will find the Number of virtual CPUs and Memory Size items. We will need 2 vCPUs and 6 GB of vRAM (6144MB). And one network interface, so reserve one IP from your lab IP scheme. Okay, ready and set, prepping done.
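If you would rather script that check than eyeball the XML, a quick sketch in PowerShell (the OVF file name is an example; use the one tar just gave you):

```powershell
# Read the extracted OVF and list the virtual hardware items, so the vCPU
# and memory figures can be verified against the Ravello VM settings.
[xml]$ovf = Get-Content .\identity-manager.ovf
$ovf.Envelope.VirtualSystem.VirtualHardwareSection.Item |
    Select-Object ElementName, VirtualQuantity, AllocationUnits
```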

Deploying a VM from the Library

Go to the application you want to add the VM to. Click the plus sign and select the imported VM from the list. In the right pane customize the name, network, external settings and all the things you like to have set.

GRab - Ravello Add imported VM to App

Save and update the Application.

Wait for all the background processes to finish, and the VM is deployed and starts. Open a console to check if the start-up goes accordingly. And it will not ;) When you have opened a console you will notice a press-any-key message saying that the appliance failed to detect VMware's hypervisor and that you are not supposed to run the product on this system. When you continue, the appliance will run in an unsupported state. But we are running a lab, not production.

IF YOU ARE READING THIS BLOG AND (MERELY) THINK ABOUT RUNNING PRODUCTION ON RAVELLO OR RUNNING PRODUCTION WITH THE IMPORTED VIDM LATER ON, GO QUIT YOUR JOB AND GO WALK THE WALK OF SHAME FOREVER.

Grab - Ravello Press Key

Press any key, if you can find the any key on your keyboard. And yes, you will have to do this every time you start up. Or use the procedure highlighted in this blog post https://www.ravellosystems.com/blog/install-vcenter-server-on-cloud/ to change /etc/init.d/boot.compliance (scroll to 4 action 2 in the post, or to MSG in the file). Do it after you have configured the VM and the required passwords. But sssst, you didn't hear that from me…..

Back to the deployment: configure the VM with hostname, DNS and IPv4. Save and restart the network. After this the deployment will continue with the startup.

And now you have a started appliance. Next we need the install wizard for vIDM. Go to the vIDM URL that is shown on the blue screen in the console, for example https://hostname.example.com. If this is the first time, it will start the install wizard. Put in the passwords you want, select your database and finish.

After that you are redirected to the login screen. Log on with your login details and voilà, vIDM is deployed.

Grab - Ravello vIDM

Bloody Dutch in the interface; everything on my client is English except for the region settings. Have the "wrong" language order in Chrome and boom, vIDM is in Dutch. For the preparation, and for the simple fact that I cannot find anything in the user interface when it's in Dutch, I want to change this. Change the order in chrome://settings – advanced settings – Languages – Language and input settings button – drag English in front of Dutch to change the order. Refresh or click on a different tab and voilà, vIDM talks the language required for the VCAP-DTM, or to find stuff…

Grab - Ravello vIDM English

Aaand the same goes for vROPS?

You can do the same with the vROPS deployment. Ravello doesn't support the OVF properties normally used for setting the vROPS appliance configuration, so you miss that nifty IP address setting for the vROPS appliance. At the same time you have the issue that vROPS doesn't like changes too much; it breaks easily. But follow more or less the same procedure as with vIDM. For vROPS set the Ravello network to DHCP. Put in a reservation so the IP is not shared within your lab and is shown with the remote console. The IP reservation is used in the appliance itself. It is very important that the IP is set correctly on first boot, else it will break 11 out of 10 times. I have also noticed that setting a static IP in Ravello is not copied to the appliance; using DHCP for vROPS works more often.

And now for vROPS:

  • Press any key to continue the boot sequence.
  • The initial screen needs you to press ALT+F1 to go to the prompt.
  • The vROPS console password of root is blank the first time you log on to the console. You will have to set the password immediately, and it's a little strict compared to, for example, the vIDM appliance.
  • The appliance (hopefully) starts with DHCP configured, and you can open a session to the hostname.
  • [Optional, if you don't trust the DHCP reservation] Within the vROPS appliance, change the IP to manual so it stays fixed within vROPS and will not break when IPs change. Use the IP it received from DHCP; do not change it, or you will have to follow the change-IP-configuration procedure for the master IP (see a how-to blog post here: http://imallvirtual.com/change-vrops-master-node-ip-address/):

Changing vROPS from DHCP to static:
Run /opt/vmware/share/vami/vami_config_net. Choose option 6 and put in your values, choose option 4 and put yours in, change the hostname and so on……

Next, reboot the appliance and verify that the boot-up and IP address are correct. If you get to the initial cluster configuration, you're ready and set.

Other issues failing the deployment are resolved by redeploying the VM, sometimes by first re-downloading and re-importing the OVA in Ravello.

Grab - vROPS First Start

Do choose New installation and get it up for the VCAP-DTM objectives.

If you happen to have enough patience and your application is not set to stop during the initial configuration, you will have a vROPS appliance to use in your Horizon preparations.

So appliances are no issue for Ravello?

Well, I do not know about all appliances, but for Horizon the appliance-only components that are needed for a VCAP-DTM lab can be deployed on Ravello.

 

-Happy Labbing in Ravello Cloud!

 

Sources: ravellosystems.com, vmware.com

VCAP-DTM Deploy Prep: La La Land Lab and Horizon software versions

VCAP-DTMmmmmm. After securing the VCP-DTM for version 6 and getting the pass results in for the version 7 DTM Beta, my sniper target is set for the VCAP-DTM’s. Maybe I should cut down on Battlefield 1 a bit ;). Anyhow…..

As the title of this post suggests, first up is the Deploy exam. Version 6, as the version 7 VCAPs are not out yet. Deploy is possibly the one that fits me a bit less than the Design part, but it is always good to get the "weakest" out of the way first. But there is no requirement that you do Deploy first; if you want Design out of the way first, go with that one.

Sniper Rifle target

With the VCAPs I have attempted, and from hearing the experiences of those that have tried, next to actually knowing what you're doing, time management is (still) the key to securing the VCAPs. I think the actually-knowing bit is pretty okay for most people that will attempt this exam. Maybe some practice in the Mirage parts for myself. And that is exactly what is needed for time management: know your weak(est) and strong(est) points in the list of exam objectives. And next to that, with time management comes drill, drill, drill. And where better to drill than in a lab? Or to put it in other words: you will need a lab for the Deploy!

VCAP-DTM Deploy

Now where are we with DTM?

Exam Topics aka Objectives

You will find a lot of blog posts explaining how to prepare, going through all the exam objectives. And I do mean a lot. I am not putting a how-to-study for each objective in this blog post. Use your google-fu for that.

The exam objectives are important for this post because they determine what components you need to have in your lab.

On the mylearn page of the exam, the exam topics are in expandable sections, with clickable white papers, documents and such to prepare. Just go to: https://mylearn.vmware.com/mgrReg/plan.cfm?plan=88780&ui=www_cert. I haven't seen another PDF exam blueprint document for this exam on the VMware site.

Some bloggers offer their packages of collected preparation documents. One, for example, is offering theirs at: http://www.virtuallyvirtuoso.com/vcap6-dtm/.

VCAP6-DTM Component Versions

When going through the VCAP6 objectives, we will need the following components of the Horizon Suite, in these versions:

  • Horizon 6.2 Components: CPA, Connection Server, Security Server and Composer.
  • Pools: Linked clone PCoIP pool (Windows 7), RDSH Farm (W2K8R2/W2K12R2), Application Pools (Evernote). Reference machine Windows 7 and RDS version for ThinApp and App Volumes.
  • vSphere and vSAN 6.0: vSphere HA/DRS Cluster resources for management and pools. VSAN Storage.
  • Identity Management: vIDM 2.4.1
  • Application Layer Management: App Volumes 2.9, ThinApp 5 (version 5.1.1).
  • Image Management: Mirage 5.4
  • Endpoints: Web-based, Horizon Clients, Kiosk.
  • Operations Management: vROPS for Horizon version 6.1.0.
  • Supporting Infrastructure/Tools: Active Directory (DNS,DHCP), GPO, MSSQL Database server, VMware OS Optimization Tool (OSOT) with support for Windows 7/8, File Services ThinApps Repository, syslog and Windows 2012R2 Jump Host.

The easiest way to get the VMware bits is to go to the Horizon Enterprise edition download on my.vmware.com and select version 6.2. You need an evaluation or an entitled My VMware user to access those. You can use this link for your bits: https://my.vmware.com/group/vmware/info?slug=desktop_end_user_computing/vmware_horizon/6_2.

VCAP Lab Download bits

Download OSOT here: https://labs.vmware.com/flings/vmware-os-optimization-tool.

Strange; I'm wondering why they did not put Access Point or UEM in the exam objectives. Access Point, for example, is designed to be deployed with Horizon version 6.2. Ah well, fewer bits to put in the lab.

For the supporting infrastructure and tools, and the client versions, it is up to you; at least put in versions that are supported.

Study Lab options

The Deploy part is a lab-based exam. Hands-on experience with the Horizon suite is crucial for success. Not everyone has a home lab, cloud lab credits or enough resources on their notebook to put in all the resource-hungry Horizon suite components, so you can use a combination of lab options in your exam preparations. Don't forget to match the Horizon suite versions and components used in the VCAP version in your study lab. Practice with the right version, or know what has changed between versions, which takes a little more preparation time.

Get command-line experience by practicing with vdmadmin, lmvutil, client and DCT command-line options, web interface locations, RDP to servers, SSH to the appliances, and log/config file locations. An example follows below.
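For instance, pulling a Data Collection Tool (DCT) bundle from a desktop through the Connection Server goes something like this (syntax as I recall it from the vdmadmin documentation; the pool and machine names are made up, so verify the switches for your Horizon version):

```powershell
# Collect a DCT log bundle from machine W7-DESKTOP-01 in pool LC-Pool-01.
vdmadmin -A -d LC-Pool-01 -m W7-DESKTOP-01 -getDCT -outfile C:\Temp\W7-DESKTOP-01-dct.zip
```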

Home

This can be a lab on a notebook, and for some people a home lab that offers more services and resources than a small country uses in a decade. Home labs are excellent for build-and-break-your-own. You will not have any permission issues. The downside mostly is the resources required.

Cloud

Again, this provides good experience in build-and-break-your-own. Accessible from anywhere. Downsides mostly are the resources required and the costs that are involved.

If you are a 2017 vExpert like me, Ravello (https://www.ravellosystems.com/go/vexpert/lab-service-description) still offers 1000 CPU hours per month to vExperts. Build your lab, configure an application start-up and stop procedure, and set your lab to stop after practicing. For example, put in 2:00 hours of studying and after that your lab will shut down and no CPU cycles are wasted.

You can even simulate the exam lab speed and put your lab in a cost-optimized, far-away cloud provider location. Pretty good for the time management preparations.
The downside of Ravello is the support for VMware OVA appliance deployment; some tips and tricks are needed to get appliances uploaded to Ravello. Or optionally go for Windows components or nested deployments.

I'm currently building my lab in there (yes, status stopped in the screenshot, and Windows 10 is my client):

Ravello vExpert VCAP-DTM Prep

Hands on Labs.

VMware Hands on Labs are an excellent place to practice with a whole scale of VMware products. Use the manual to be guided through the labs, or just click it away and go on your own. Choose from the mobility labs for example: http://labs.hol.vmware.com/HOL/catalogs/catalog/125.

I personally use HOL-1751-MBL-1-HOL a lot. Downsides: no Composer, as Horizon 7 instant clones are used, a version mismatch with the exam lab, and no vROPS for Horizon. For vROPS for Horizon I use TestDrive. You also aren't administrator on the Windows hosts and there is no Internet connection to get some missing piece in.

VCAP-HOL1751

You start with 1hr:30min, and you can extend the lab time up to 8 times with one hour each. That tops up to 9hr:30min of lab time per enrollment. Amazing discovery, Mike!

Testdrive

VMware TestDrive is the EUC demo environment. Need to show the customer some part they are missing, or need some extras to make your point? Open up a TestDrive for the customer and let them see it. As a superuser I also misuse it to work on some vROPS for Horizon parts. You are admin in vROPS, so you can test a metric set for a dashboard or show policies without breaking the customer's vROPS environment. The rest of the components are limited in what you can do and practice over there. But that wasn't the use case of TestDrive in the first place.

Time management studying for the exam

Time management starts with studying. Plan your exam date and schedule your exam up front. Take enough time to prepare and work through the objectives; how much depends on your own strong and weak points. But do schedule the exam, else you will have no target to work towards and that VCAP-DTM will become a never-ending story.

Time management throughout the Exam Lab

You can navigate through the lab exercise scenarios. Go through the objectives. Use your notepad to put them in order of easy and tough ones. Get the easy ones done and out of the way. For labs that require deployments, captures, synchronisation or that otherwise take time to finish, start up those actions and go to the next. Don't waste time watching progress bars……

There are dependencies between questions, and skipping a part of a question because you are waiting for a deployment can be tricky for your mind if you're also working through the scenario. You have to make sure you come back to that incomplete task and finish it.

ticktock

Test Center Check

If you have the opportunity and have multiple options for test centers in your friendly neighborhood, be sure to check out what lab setup they have. I know where I would go if I had to choose between test centers with 21″ or 17″ screens. Or ask on Twitter or Reddit whether someone has experience with the test center.

– Happy prepping your exam!

Sources: vmware.com, ravellosystems.com

EUC Toolbox: Don’t wanna be your monkey wrench, use Flings

To remind those of you who have previous experience with flings, or to explain flings to newbies if there still are any: in a few words, Flings are apps and tools built by VMware engineers that are intended to be played with and explored. Even more, they are cool ideas worked out into cool apps and tools. Which are not only there to play with, but are very useful.
And they come with no official production support from VMware.
This doesn't mean a fling will tear a hole in the space-time continuum or that your environment will randomly blow up in places; just be a little cautious when using a fling untested in production. Like with everything in production. Not officially supported doesn't mean the engineers stop working on the products as soon as they are published on the Flings page. They often respond to comments and with updates, to make their cool ideas even better. And at times a fling makes it into the product, like the vSphere HTML5 Web Client or ViewDBChk in Horizon.

Tools?

home_improvement

Anyway. Below is a list of my five most used EUC flings. Because, well… it is an often heard question: what do you or other customers use? And a listing disclaimer: don't stop at number five, there are other very cool flings out there and new ones emerging. So keep an eye out. Hey, I won't stop at 5 either…..

VMware OS Optimization Tool aka OSOT

Guest OSes are often designed for other form factors than virtual machines, and are thus very bloated, to include every imaginable choice and eeny-meeny little device supported. When running these in virtual machines we have to optimize the OS so it won't waste resources on unneeded options, features or services. Optimize to improve performance. One of these use cases is Horizon VDI or published applications. But personally I would like to see server components a bit more optimized as well.

With the VMware OS Optimization Tool you can use templates to analyze and optimize Windows templates. Use the provided templates, make your own, or use the public templates to share knowledge with the community. Made an oops? There is a rollback option.

OSOT.png

Get the VMware OS Optimization Tool here: https://labs.vmware.com/flings/vmware-os-optimization-tool.

Horizon Toolbox

The Horizon Toolbox is a set of helpful extensions to the Horizon Administrator page. The tools are provided in a Tomcat web portal that is installed next to Horizon Administrator. There the downside is visible straight away: yet another portal/console in the spaghetti western of Horizon suite consoles. But the extensions for operations, and no Flash, are worth it.

The Horizon Toolbox adds:

  • Auditing of user sessions, VM snapshots and used client versions.
  • Remote assistance to user sessions.
  • Access to the desktop VM's remote console.
  • Power policies for Horizon pools.

Get the Horizon Toolbox here: https://labs.vmware.com/flings/horizon-toolbox-2.

VMware Access Point Deployment Utility

When we have use cases that need external access, we have a design decision to use the Access Point in the DMZ to tunnel those external access sessions. The Horizon Access Point is an appliance that is deployed via an OVF. With the deployment you can use several methods to add the configuration options to the appliance: the web client, ovftool and PowerShell, for example. Another option is to use the Access Point Deployment Utility fling. Especially since redeploying the appliance is often faster than debugging or reconfiguring it.

The VMware Access Point Deployment Utility is a wrapper around ovftool. The utility lets you input configuration values in a human-friendly interface and in PEM certificate format. It will create the ovftool command string, execute that string, and deploy and configure the Access Point. It will convert the certificate and keys to the required JSON format. And it allows your input to be saved to XML and imported at a later time. This minimizes the amount of re-input required and, as a result, the number of failures with reconfiguration or redeployment.
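To give an idea of what the utility wraps, an ovftool call for an appliance like Access Point has roughly this shape (treat the property names and values as placeholders; the exact properties differ per Access Point version, so check the deployment documentation for yours):

```powershell
# Illustrative ovftool invocation of the kind the utility generates for you.
& 'C:\Program Files\VMware\VMware OVF Tool\ovftool.exe' `
    --acceptAllEulas --powerOn --X:enableHiddenProperties `
    --name=AP01 --deploymentOption=onenic `
    --net:Internet="DMZ-Network" `
    --prop:ip0=192.168.110.70 `
    .\euc-access-point.ova `
    'vi://administrator@vsphere.local@vcenter.vtestlab.local/Datacenter/host/Cluster'
```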

Get the VMware Access Point Deployment Utility here: https://labs.vmware.com/flings/vmware-access-point-deployment-utility.

App Volumes Backup Utility

App Volumes AppStacks are read-only VMDKs that are stored on a datastore and attached to a user session or desktop VM that has the App Volumes agent running. When we need to back up the AppStacks, we have the option to use a backup solution that backs up the datastore. But not all backup solutions have this option: a lot of VADP-compatible backup solutions look at the vCenter inventory to do their backup. AppStacks, and writable volumes for that matter, are not available as directly selectable objects in the vCenter inventory. The AppStacks are only attached when a session or desktop is active, and non-persistent desktops are not in the backup in the first place.

App Volumes Backup Utility to the rescue. In short, what this tool does is connect to App Volumes and vCenter, create a dummy VM object and attach the AppStack and writable volume VMDKs to that VM. And presto, the backup tool can do its magic. A little heads-up for writable volumes: be sure to include pre and post actions to automatically detach, and re-attach, any writable volumes which are in use while the backup is running. A utility for that is included in the fling.

Get the App Volumes Backup Utility here: https://labs.vmware.com/flings/app-volumes-backup-utility.

VMware Logon Monitor

The VMware Logon Monitor fling monitors Windows 7 and 10 user logons and reports a wide variety of performance metrics. It is primarily intended to help troubleshoot slow logon performance. But it can also be used for insights if you happen to miss vROPS for Horizon, for example. Or when you want to find out how your physical desktops are doing in this same process when assessing the environment.

Some of the metric categories include logon time, shell load, profile and policy load times, redirection load times, resource usage, and the list goes on and on and on. VMware Logon Monitor also collects metrics from other VMware components used in the desktop. This provides even more insight into what is happening during the logon process. For example, what are those App Volumes AppStacks adding to the logon process……

Install Logon Monitor in your desktop pool and let the collection of metrics commence. Note that the logs are stored locally and not in a central location. The installer will create and start the VMware Logon Monitor service.

logonmonitor

VMware Logon Monitor will log to C:\ProgramData\VMware\VMware Logon Monitor\Logs.
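For a quick look at the latest results on a test desktop, you can just grab the tail of the newest log (a small sketch, using the log path above):

```powershell
# Show the last lines of the most recently written Logon Monitor log file.
Get-ChildItem 'C:\ProgramData\VMware\VMware Logon Monitor\Logs' -File |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 40
```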

Get the VMware Logon Monitor here: https://labs.vmware.com/flings/vmware-logon-monitor.

And there’s more where that came from…..

And probably some that would make your order of appearance a little different. Just take a look at https://labs.vmware.com/flings/?product=Horizon+View for the Horizon View tagged flings. And be sure to also check without this tag, as for example the App Volumes related flings are not in this tag listing.

– Enjoy the flings!

Sources: labs.vmware.com/flings

EUC Layers: Display protocols and graphics – the stars look very different today

In my previous EUC Layer post I discussed the importance of putting insights on screens; in this post I want to discuss the EUC layer of putting something on the screen of the end user.

Display Protocols

In short, a display protocol transfers the mouse, keyboard and screen (ever wondered about the vSphere MKS error, if that ever popped up?) input and output between a (virtual) desktop and the physical client endpoint device, and vice versa. Display protocols usually optimize this transfer by encoding, compressing, deduplicating and performing other magical operations to minimize the amount of data transferred between the client endpoint device and the desktop. Minimized data equals less chance of interference equals better user experience on the client device. Yes, the one the end user is using.

For this blog post I will stick to the display protocols VMware Horizon has under its hood. VMware Horizon supports four ways of using a display protocol: PCoIP via the Horizon Client, Blast Extreme/BEAT via the Horizon Client, RDP via the Horizon Client or the MS Terminal Services client, and any HTML5-compatible browser for HTML Blast connections.

The performance and experience of all the display protocols are influenced by the client endpoint device, everything in between, the desktop agent (for example the virtual desktop's Horizon Agent) and the road back to the client. USB-redirect a mass storage device to your application: good-bye performance. Network filtering: poof, black screen. Bad WiFi coverage: good-bye session when moving from an office cubicle to a meeting room.

poof-its-gone

RDP

Who? What? Skip this one when you are serious about display protocols. The only reason it is in this list is for troubleshooting when every other method fails. And yes, the Horizon Agent by default uses RDP as an installation dependency.

Blast Extreme

Just beat it, PCoIP. Not the official statement of VMware. VMware assures its customers that Blast Extreme is not a replacement but an additional display protocol. But yeah…..sure…

With Horizon 7.1 VMware introduced BEAT in the Blast Extreme protocol. BEAT stands for Blast Extreme Adaptive Transport: a UDP-based adaptive transport that is part of the Blast Extreme protocol. BEAT is designed to ensure the user experience stays crisp across networks of varying quality. You know them, those with low bandwidth, high latency, high packet loss, jitter and so on. Great news for mobile and remote workers. And for spaghetti-incident local networks……..

Blast uses standardized encoding schemes, such as H.264 by default for graphical encoding and Opus as the audio codec. If it can't do H.264 it will fall back to JPG/PNG, so always use H.264 and check the conditions you have that might cause a fallback. JPG/PNG is more a codec for static graphics, or at least for nothing larger than an animated GIF. H.264, the other way around, is more a video codec, but it is also very good at encoding static images and will compress them better than JPG/PNG. Plus 90% of the client devices are already equipped with the capability to decode H.264. Blast Extreme is network friendlier by using TCP by default, which is easier for configuration and for performance under congestion and drops. It is efficient in not using up all the client resources, so that for example mobile device batteries are not drained by the device using a lot of power to feed those resources.
Default protocol Blast Extreme selected.

PCoIP

PC-over-IP, or PCoIP, is a display protocol developed by Teradici. PCoIP is available in hardware, like zero clients, and in software. VMware and Amazon are licensed to use the PCoIP protocol, in VMware Horizon and AWS Amazon WorkSpaces. For VMware Horizon, PCoIP is an option with the Horizon Client or with PCoIP-optimized zero clients.
PCoIP is mainly a UDP-based protocol; it does use TCP, but only in the initial phase (TCP/UDP 4172). PCoIP is host-rendered, multi-codec, and can dynamically adapt itself based on the available bandwidth. In low-bandwidth environments it utilizes a lossy compression technique where a highly compressed image is quickly delivered, followed by additional data to refine that image. This process is termed "build to perceptually lossless". The default protocol behaviour is to use lossless compression when minimal network congestion is expected. Or lossy compression can be explicitly disabled, as might be required for use cases where image quality is more important than bandwidth, for example medical imaging.
Images rendered on the server are captured as pixels, compressed, encoded and then sent to the client, where decryption and decompression happen. Depending on the display content, different codecs are used to encode the pixels, since techniques to compress video images differ in effectiveness from those that work best for text.

 

HTML

Blast Extreme without the Horizon Client dependency. The client is an HTML5-compatible browser. HTML Access needs to be installed and enabled on the datacenter side.
HTML Access uses the Blast Extreme display protocol with the JPG/PNG codec. HTML Access does not have feature parity with the Horizon Client; that's why I am putting it up as a separate display protocol option. As not all features can be used it is not the best fit in most production environments, but it will be sufficient for enough remote or external use cases.

Protocol Selection

Depending on how the pool is configured in Horizon, the end user either has the option to change the display protocol from the Horizon Client, or the protocol is set on the pool with the setting that a user cannot change it. The latter has to be selected when using GPU, but it depends a bit on the workforce and use case whether you would like to leave all the options available to the user.

horizon-client-protocol

Display Protocol Optimizations

Unlike what some might think, display protocol optimization will benefit the user experience in all situations. Either from an end-user point of view, or from IT having some control over what can and will be sent over the network. Network optimizations in the form of QoS, for example. PCoIP and Blast Extreme can also be optimized via policy. You can add the policy items to your template, use Smart Policies and User Environment Manager (highly recommended) to apply them under specific conditions, or use GPOs. IMHO, UEM first, and then template or GPO, is the order to work from.

uem-smart-policy-example

For both protocols you can configure the image quality level and frame rate used during periods of network congestion. This works well for static screen content that does not need to be updated or in situations where only a portion of the display needs to be refreshed.

With regard to the amount of bandwidth a session eats up, you can configure the maximum bandwidth in kilobits per second. Try to match these settings to the type of network connection, such as an interconnect or an Internet connection, available in your environment. For example, a higher FPS means fluent motion, but more network bandwidth used; lower is less fluent, but costs less network bandwidth. Keep in mind that the network bandwidth includes all the imaging, audio, virtual channel, USB, and PCoIP or Blast control traffic.

You can also configure a lower limit for the bandwidth that is always reserved for the session. With this option set, a user does not have to wait for bandwidth to become available.
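As a sketch of what such tuning can look like through the registry: the variable names below follow the Teradici PCoIP session variable conventions, and the values are illustrative examples, not recommendations; verify both against the PCoIP GPO template for your Horizon version.

    rem Example PCoIP tuning on a test desktop - values are examples only
    set PCOIPKEY=HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin
    rem Cap the frame rate (frames per second)
    reg add %PCOIPKEY% /v pcoip.maximum_frame_rate /t REG_DWORD /d 24 /f
    rem Session bandwidth ceiling, in kilobits per second
    reg add %PCOIPKEY% /v pcoip.max_link_rate /t REG_DWORD /d 20000 /f
    rem Bandwidth floor reserved for the session, in kilobits per second
    reg add %PCOIPKEY% /v pcoip.device_bandwidth_floor /t REG_DWORD /d 1000 /f

Change one value, reconnect the session, measure, take notes, repeat.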

For more information, see the “PCoIP General Settings” and the “VMware Blast Policy Settings” sections in Setting Up Desktop and Application Pools in View on documentation center (https://pubs.vmware.com/horizon-7-view/index.jsp#com.vmware.horizon-view.desktops.doc/GUID-34EA8D54-2E41-4B71-8B7D-F7A03613FB5A.html).

If you are changing these values, do it one setting at a time. Check what the result of your change is and whether it fits your end users' needs. Yes, again, use real users. Make a note of the setting and the result, and move on to the next. Some values have to be revisited to find the sweet spot that works best. Most values are applied when disconnecting and reconnecting to the session in which you are changing the values.

Another optimization can be done by optimizing the virtual desktops themselves, so that less is transferred, or so resources can be dedicated to encoding instead of, for example, defragmenting non-persistent desktops during working hours. VMware OS Optimization Tool (OSOT) Fling to the rescue; get it from the VMware Flings site.

Monitoring of the display protocols is essential. Use vROPS for Horizon to get insight into your display protocol performance. Blast Extreme and PCoIP are included in vROPS. The only downside is that these session details are only available while the session is active; there is no history or trending for session information.

Graphic Acceleration

There are other options to help the display protocols on the server side, by offloading some of the graphics rendering and encoding to specialized components. Software acceleration uses a lot of vCPU resources and just doesn't cut it for playing 1080p full-screen videos. Not even 720p full screen, for that matter. A higher processor clock speed will help graphical applications a lot, but at the cost that those processor types have lower core counts. A lower core count, combined with a low overcommitment and physical-to-virtual ratio, will lower the number of desktops on your desktop hosts. Specialized engineering, medical or map layering software requires graphics capabilities that are not offered by software acceleration, or simply requires hardware acceleration as a de facto standard. Here we need offloading to specialized hardware for VDI and/or published applications and desktops. NVIDIA, for example.

[Image: GPU Oprah meme]

What will those applications be using? How many frame buffers? Will the engineers be using these applications most of the time, or just for a few moments, afterwards doing their work in Office to write their reports? For this, NVIDIA supports all kinds of GPU profiles. Need more screens and frame buffer? Choose a profile for that use case. A board can support multiple profiles if it has multiple GPU cores, but per core only one profile type can be used, multiple times, as long as you are not out of memory (frame buffers) yet. How to find the right profile for your workforce? Assessment and PoC testing. GPU monitoring can be a little hard, as not all monitoring applications have the metrics on board.

And don't forget that some applications need to be set to use hardware acceleration before they will use the GPU, and that some applications don't support it, or run worse with hardware acceleration because their main resource demand is CPU (Apex maybe).

Engineers only? What about Office Workers?

Windows 10, Office 2016, browsers, and streaming video are used all over the offices. These applications can benefit from graphics acceleration. The number of applications that support and use hardware graphics acceleration has doubled over the past years. That's why you see that the hardware vendors have also changed their focus. NVIDIA's M10 is targeted at consolidation, while its brother the M60 is targeted at performance, while still reaching higher consolidation ratios than the older K generation. But they cost a little bit more.

With vGPU and one of the 0B/1B profiles, there is a vGPU for everyone. The Q profiles can be saved for engineering. Set the profiles on the VMs and enable them for use on the desktop pools.
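Under the hood, the assigned profile ends up as a shared PCI device entry in the VM's vmx file, something like the line below. The profile string is an assumed example; it depends on your board and NVIDIA release, and normally you select it in the vSphere Web Client rather than editing the vmx by hand:

    pciPassthru0.vgpu = "grid_m10-1b"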

And what can possibly go wrong?

Fast Provisioning – vGPU for instant clones

Yeah. Smashing graphics and deploying those desktops like crazy… me likes! The first iteration of instant clones did not support any GPU hardware acceleration. With the latest Horizon release, instant clones can be used with vGPU. Awesomesauce.

– Enjoy looking at the stars!

Sources: vmware.com, wikipedia.org, teradici.com, nvidia.com

EUC Toolbox: Helpful tool Desktop Info

As somebody who works with all different kinds of systems, preferably from one client device, at first glance all those connected desktops look a bit the same. I want a) to see on which specific template I am doing the magic, b) to directly see what that system is doing, and c) to not break the wrong component. And trust me, the latter will happen sooner rather than later to us all.

[Image: dammit Jim meme]

Don't like having to open even more windows, or searching for metrics in some monitoring application when it does not make sense at that time? Want to see some background information on what the system you are using is doing, right next to the look and feel of the desktop itself? Or keep an eye on the workload of your synthetic load testing? See, for example, what the CPU of your Windows 7 VDI does at the moment an assigned AppStack is direct attached? And want to easily keep test and production apart in all those clients you are running from your device?

Desktop Info can help you there.

Desktop Info you say?

Desktop Info displays system information on your desktop, in a similar way to, for example, BGInfo. But unlike BGInfo, the application stays resident in memory and continually updates the display in real time with the information that is interesting to you. It looks like a wallpaper, and has a very small footprint of its own. It fits perfectly for quick identification of test desktop templates with some real-time information. Or for keeping production infrastructure servers apart, or…

And remember, it's for information. Desktop Info does not replace your monitoring toolset; it gives the user information on the desktop. So it's not just a clever name…

How does it work?

Easy, just download, extract and configure how you want Desktop Info to show you the… well… info. For example, put it in your desktop template for a test with the latest application release.

It can be downloaded at http://www.glenn.delahoy.com/software/files/DesktopInfo151.zip. There is no configuration program for Desktop Info. Options are set by editing the ini file in a text editor such as Notepad, or whatever you have lying around. The ini file included in the downloaded zip shows all the available options you can set. Think about the layout, top/bottom placement, colors, items to monitor, and WMI counters for the specific stuff. Using NVIDIA WMI counters here to see what the GPU is doing would be an excellent option. Just don't overdo it.

In the readme.txt that is also included in the zip there is some more explanation and examples. Keep that one close by.

capture-basicinformation

Test and save your configuration. Put Desktop Info in a place or tool so that it is started with the user session that needs this information. For example in a startup, shortcut or as a response to an action.
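One way to do the startup route, as a minimal sketch; the path and value name are made up for the example, and a per-user Run key is just one of the options:

    rem Start Desktop Info at logon for this user (example path)
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" ^
        /v DesktopInfo /t REG_SZ /d "C:\Tools\DesktopInfo\DesktopInfo.exe" /f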

Capturing data

You have the option to use Desktop Info with data logging, for reference. Adding csv:filename to items will output the data to a csv-formatted file. Just keep in mind that the output data is the display-formatted data.
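To sketch it (the item name and its options below are placeholders, not real syntax; lift an actual item line from the sample ini and only append the csv part):

    CPU=<options copied from the sample ini>,csv:C:\Temp\desktopinfo-cpu.csv

The csv:filename suffix is the documented hook; everything before it is whatever the item already was.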

– Enjoy!

vROPS – survive the Kraken – Endpoint Operations Example

Guess who’s back, back again…

Next to doing End User Computing engagements, where user experience, performance and capacity management are also an integral part, I am occasionally involved in separate operations management engagements. And with VMware, often vRealize Operations Manager, or vROPS, shows its face. As some will have noticed, I have mentioned vROPS in articles on this blog before. This time I am going to dive a little into growing some more tentacles and getting some more insights besides our old vSphere friend.

So how does this getting more insights work again?

Great you asked! First get your vROPS up and running, configured, customized and showing the vSphere insights you wanted to see in the right places. Not there yet? Well, stop reading here and go directly to jail. Do not pass Go. Stop it and slowly turn away from the keyboard. As mentioned, it can be very helpful to have more insights before you create something that jumps back at you and eats you…

Still reading? Okay, I guess you're ready, just curious, or thinking ahead. Like the vSphere adapter that is included in vROPS as standard, you can add solutions (or adapters) from management packs, or are they called extensions (still following?), to collect information from other sources. Most of the time 'the other' data sources are the management components for those specific components or layers. For example, to get EUC information from Horizon into vROPS, use vROPS for Horizon and connect to a broker agent on the connection server (management layer) and an agent in the desktop or published application. And what that name does not show at first glance: vROPS for Horizon can also bring in insights from XenApp and XenDesktop.

Anyhow, why would I need this, isn't the vSphere adapter showing everything from my virtual infrastructure, you ask? Well no, not everything. The vSphere adapter creates visibility for the vSphere layer, that is the hypervisor and its management. And information from the hypervisor and management about storage, networking and virtual machines, BUT only from the view of vSphere. Storage: yes datastores, but not how your storage infrastructure or vSAN is behaving. Networking: yes vSwitches, but not how your network devices or NSX are behaving. And VMs: yes virtual machines, but not what is happening in guest. And so on. You can get there, but you need solutions for that. And you need to size accordingly. And customized dashboards or reports that actually show something of interest. And oh yes, the correct vROPS edition license.

Getting in guest insight via Endpoint Operations Management

In the old days, or before vROPS 6.1, when you wanted in-guest metrics for applications, middleware and databases, you would get the Hyperic beast out. With the 6.1 release of vROPS, VMware merged some of the Hyperic solution into vROPS. This makes it a lot easier to get a view through the vROPS management interface all the way up, or down, to services, processes and the application layer. However, you still have to do a lot of customizing to show something interesting.

[Image: services dashboard]

Fortunately the solution exchange shows more and more application services being integrated with vROPS via the Endpoint agent, for example:

  • Active Directory
  • Exchange
  • MSSQL Server
  • IIS
  • Apache Tomcat
  • PostgreSQL
  • vCenter

Visit the VMware Solution Exchange for the latest versions. Note that the vCenter Endpoint Operations solution shows up as a standard management pack, but vROPS needs an Advanced edition license to get the Endpoint integration shown; the documentation is not quite open about that.

Yeah yeah, enough. Show an example please, and get me some in-guest metrics recipe

What ingredients do we need?

1 tablespoon of vROPS evaluation, or a minimum of Advanced edition
1 teaspoon of Endpoint Operations Management solution
2 drops of Endpoint agent, deployed on a virtual machine
1 gram of user with permission to register agents, configured on vROPS
100 ml of Solution Exchange application layer something specific (or your own home-built something specific)

Stir and let it rest for a while.

vROPS you probably already have in a test setup, or you can deploy it as an OVA in a PoC. Just a little warning upfront if you are not in a test or PoC setup: solutions/management packs are added to vROPS easily; removing them is not an easy task.

You will need a minimum of one node; pointing the agents at a remote collector or a data node is preferable. The Endpoint Operations Management solution is installed with vROPS and needs no specific configuration of the solution itself. The agents are downloaded from my.vmware.com. There are Linux and Windows platform versions, with or without JRE, as installation packages or just the data bundles. Use what you like or what fits with your application provisioning. I go for the JRE bundles.

And yes, I hear you: another agent?!? Yes, unfortunately you currently still need the Endpoint agent. A big-ass agent/VMware Tools integration is not there yet; we need a little patience for that.

For the user, create an Endpoint Management role with permissions on Administration – Manage Agents and Environment – Inventory Trees. Add this role to the user you are planning to use. This user is added to every agent.

If you have a firewall or other ACLs in between your Endpoint agents and the vROPS remote collector or data node(s), open up HTTPS (443) from the endpoint group range to the remote collector or data node(s).
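If that hop happens to be the Windows Firewall (say, in a quick lab), the same rule expressed on the endpoint looks something like this; the address is a made-up example:

    rem Allow the EP Ops agent outbound HTTPS to the vROPS collector/data node
    netsh advfirewall firewall add rule name="EP Ops agent to vROPS" ^
        dir=out action=allow protocol=TCP remoteport=443 remoteip=10.0.0.50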

Manually Installing vRealize End Point Operations Agent

Manually installing and updating the vRealize End Point Operations agent is only needed for VMs that are not deployed via automation, where there is no application provisioning like SCCM, or that have an issue where a reinstall is needed. Yes, you can also use the MSI or RPM, but with the files you will get a little insight (you see what I'm doing?) into how the agent works.

Note: preferably the agent is not installed in a template. When a need arises to install the EP Ops agent in a system that will be cloned, do not start EP Ops, or remove the EP Ops token and the data/ directory prior to cloning. Once started, a client token is created and all clones will show up as the same object in vROPS.

Windows 64-bit Agent

You will need an installation user with permissions to put files, change owner/permissions on the server, install a service and start the service.

Copy the following files from a central file repository:

  • Copy and extract the softwarepackages/vRealize-Endpoint-Operations-Management-Agent-x86-64-win-.zip. Place the files in, for example, D:\Program Files\epopsagent
  • Edit the agent.properties file in the conf/ directory and put in the following as a minimum:
    • setup.serverIP=data node or LB VIP to connect to
    • setup.serverLogin=user with the role to register agents on vROPS
    • setup.serverPword=password
    • setup.serverCertificateThumbprint=SSL certificate thumbprint of the node to connect to (the one you entered above)

Note on the password: this password can be added in plaintext. When the agent is installed and started for the first time, the password is encrypted. The key is stored in the agent.scu file in the conf/ directory. You can distribute the agent.properties and the scu file from a central location and copy them into the conf/ directory. (Linux uses a different scu file, but the agent.properties can be the same.)
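Pulled together, a minimal agent.properties sketch; the server name, user and thumbprint are made-up example values:

    # Minimal agent.properties - replace every value with your own
    setup.serverIP=vrops-collector01.corp.local
    setup.serverLogin=svc-epops
    setup.serverPword=PlainTextUntilFirstStart
    setup.serverCertificateThumbprint=11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44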

  • Open a Command Prompt
  • Go to the bin\ directory
  • Run epops-agent.bat install
  • Run epops-agent.bat start

[Image: epops-agent.bat install and start]

Linux Agent

For the Linux agent, use the same flow as the Windows agent. Just a few differences:

  • Copy and extract the tarball to the extraction location, for example /opt/vmware/epops-agent
  • Copy the agent.properties (and optionally the Linux scu) file to conf/
  • Go to bin/
  • Run ep-agent.sh start (no need for install); see the condensed sketch below
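Condensed into a copy-paste flow (paths and bundle name are illustrative; verify the bundle name and extraction layout first):

    # Example flow only - adjust paths and names to your own setup
    mkdir -p /opt/vmware/epops-agent
    tar -xzf vRealize-Endpoint-Operations-Management-Agent-x86-64-linux-*.tar.gz \
        -C /opt/vmware/epops-agent --strip-components=1
    cp agent.properties /opt/vmware/epops-agent/conf/
    cd /opt/vmware/epops-agent/bin && ./ep-agent.sh start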

Monitoring specific Windows Service or Linux Process

The current configuration of the agent does not include autodiscovery of Windows services or Linux processes.

The reason this is not done is that monitoring all services is certainly not a sensible option from a monitoring standpoint. It is far more useful to monitor specific groups of Windows services or processes that actually contribute to, or have a direct relation with, a hosted service that needs to be monitored.

[Image: monitor Windows service action]

Follow these steps to monitor a specific Windows service/Linux multiprocess:

  • Go to Environment – Operating Systems – Operating System World – Windows / Linux
  • Select the VM hostname
  • Actions – Monitor OS Object – Monitor Windows Service
  • Fill in the details; the service_name must match the Windows service name.

[Image: service details]

Note: for autodiscovering services, when the agent.properties value autodiscovery is true, services are discovered by their Windows service name. As the service name does not contain a server name, all identical services get the same name, although each will have a different node in the inventory hierarchy. In the services view all services are shown without the node hierarchy: monitoring the Windows Time Service from three hosts, for example, will show Windows Time Service three times in this view. You can change the service display name before services are discovered, so that the server name is included. Please see the Microsoft documentation on changing service names.
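If you go the rename route, the display name can be changed with plain sc.exe. A sketch for the Windows Time service; whether renaming system services is desirable in your environment is your call:

    rem Put the host name in the display name so discovered services stay distinguishable
    rem Note: the space after DisplayName= is mandatory sc.exe syntax
    sc config w32time DisplayName= "Windows Time (%COMPUTERNAME%)"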

Adding a monitoring Solution

Installing a solution that monitors a service via the Endpoint agent will show you some nice metric additions. Or at least some additional pointers on how to get, and in some cases display, application insights that you can use for your own solutions.

All these packs can be downloaded from the Solution Exchange and are *.pak files. They are installed via vROPS Administration – Solutions – Add Solution; follow the details there.

Be sure you download the 'for Operations' packs; there are also 'for Hyperic' versions still around. The latter you don't need.

– Happy fishing

Sources: pubs.vmware.com, blogs.vmware.com, solutionexchange.vmware.com

Digital Workspace Transformation: information security

Yes…. it has been a while since I posted on this blog, but I’m still alive ;-)

For a 2016 starter (what?!? is it June already), I want to ramble on about information security in the digital workspace. With a growing number of digital workspace transformations going on, information security is more important than ever. With the growing variety of client endpoints and methods of access in personal and corporate environments, users are becoming increasingly independent of physical company locations. That makes it interesting how to centrally manage storage of data, passwords, access policies, application settings and network access (just examples, not the complete list). For any place, any device, any information and any application environments for your users (or do we want any user in there), it is not just a couple of clicks on this super-duper secure solution and we're done.

[Image: encryption (source: blogs.vmware.com)]

Storing data on, for example, virtual desktop servers (hello VMware Horizon!) in the data center is (hopefully) a bit more secure than storing it locally on the user's endpoint. At the same time, allowing users to access virtual desktops remotely puts your network at a higher risk than local-only access. But it's not all virtual desktops. We have mobile users who would like to have the presentations or the applications directly on the tablet or handheld. I, for instance, don't want to have to open a whole virtual desktop for just one application. Ever tried a virtual desktop on an iPhone? It is technically possible, yes, but it works crappy. Erm, forgot my MacBook HDMI USB-C converter for this presentation; well, I'll send it to your Gmail or Dropbox for access with the native mobile apps in your conference room. And the information is gone, out of the company sphere… (a hypothetical situation of course..)

Data Leak

Great ideas, all those ways to get company information in and out. But but but… these also pose some challenges that a lot of companies have not started thinking about. Sounds a bit foolish, as information is probably the biggest asset of a company. But unfortunately it's a fact (or maybe it's just the companies I visit). Sure, these companies have IT departments or IT vendors who think a bit about security. And in effect they mostly make their users' lives miserable with all sorts of technical barriers installed in the infrastructure. In which case the users, business and IT (!) users alike, will find all sorts of ways to get past these barriers. Why? First of all to increase their productivity, while effectively decreasing security, and secondly because they are not informed about the important why. And then those barriers are just a nuisance.

Break down the wall

IT’s Business

I have covered this earlier in my post (https://pascalswereld.nl/2015/03/31/design-for-failure-but-what-about-the-failure-in-designs-in-the-big-bad-world). The business needs to have full knowledge of its required processes and information flows, the ones that support, or process information in and out of, the services supporting the business strategy. And of the persons that are part of the business and operate the services. And what to do with this information in which different ways: is it allowed for certain users to access the information outside of the data center, and such. Compliance with, for example, certain local privacy laws. Governance with policies and choices, and risk management: do we do this part or not, how do we mitigate some risk if we take approach y, and what are the consequences if we do (or don't).

Commitment from the business and the people in the business is of the utmost importance for information security. Start explaining, start educating and start listening.
If scratch is the starting point, start writing on a global level first. What does the business mean by working from everywhere and every place, what is this digital workspace, and such. What are the risks, how do we approach IAM, what do we have for data loss prevention (DLP), is IT allowed to inspect SSL traffic (decrypt, inspect and encrypt), etc. etc.
Not too detailed at first; it is not necessary, as it can take a long time to reach a version 1.0. We can work on it. And to be fair, information security and the digital workspace are, as a fact, continually evolving and moving. A continual improvement of these processes must be in place. Be sure to check with legal that there are no loopholes in what has been written in the first iteration.
Then map to logical components (think from the information: why is it there, where does it come from and where does it go, and think of the apps and the users). When you have defined the logical components, IT can add the physical components (insert the providers, vendors, building blocks). Evaluate together: what works, what doesn't, what's needed and what is not. And rinse and repeat…

Furthermore, a target of a 100% safe environment, all the time, will just not cut it. Mission impossible. Think about and define how to react to information leaks, and minimize the surface of a compromise.

Design Considerations

With the above we should have a good starting point for the business requirements phase of a design and deployment of the digital workspace. And there will also be information from IT flowing back to the business for continual improvement.

Within the design of an EUC environment we have several software components where we can take action to increase (or decrease, but I will leave that part out ;-)) security in the layers of the digital workspace environment. And yes, when software-defined is not an option, there is always hardware…
And from the previous phase we have some idea which choices can be made in technical ways to conform to the business strategy and policies.

If we think of the VMware portfolio and the technical software layers where we need to think about security, we can go from AirWatch/Workspace ONE, Access Point, Identity Manager, Security Server, Horizon and App Volumes to User Environment Management. And, and… two-factor authentication, One-Time Passwords (OTP), Microsoft Security Compliance Manager (SCM) for Windows-based components, anti-virus and anti-malware, network segmentation and access policies with SDDC NSX for Horizon. And what about business continuity and disaster recovery plans, and SRM, vDP.
Enterprise management with vROPS and Log Insight integration into, for example, a SIEM. vRealize Automation and Orchestrator for automating and orchestrating, to mitigate workarounds or faults in manual steps. And so on, and so on. We have all sorts of layers where we can implement, or help with implementing, security and access policies. And how will all these interact? A lot to think about. (It could be that a new blog post series subject is born…)

But the justification should start with the business… Start explaining and start acting! That is probably 80% of the success rate of implementing information security. The technical components can be made to fit, but… after the strategy, policies and information architecture are somewhat clear…

And then the people in the business will be supporting the need for information security in the workspace. (Am I repeating myself a bit? ;-)

Ideas, suggestions, conversation, opinions. Love to hear them.