EUC Layers: Dude, where’s my settings?

With this blog post I am continuing my EUC Layers series. As I didn't know I was starting a series, there is no real order to follow, other than that it is written somewhat from the user perspective, as that is a big part of End User Computing. But I cannot guarantee that will turn out to be the right order at the end of things.

If you would like to read the other parts, you can find them here:

For this part I would like to ramble on and sing my song about an important part of the user experience: User Environment Management.

User Environment

Organisations grant their users access to certain workspaces: an application, a desktop and/or parts of the data required for, or supporting, the user's role within the business processes. With that, these users are granted access to one or more operating systems below that workspace or application. The organisation would also like to apply some kind of corporate policy to ensure the user works with the appropriate level(s) of access for doing their job, keeping the organisation's data secure. Or, in some cases, to comply with rules and regulations, thus making the user's job a bit more difficult at the same time.

On the other side of the force, each user has a preferred way of using the workspace and will tend to make all sorts of changes that enable them to work as efficiently as humanly possible. Examples of these changes are look-and-feel options and e-mail signatures.

The combination of the organisation policy and the user preferences is the User Environment layer, also called the persona or user personality.

Whether a user is accessing a virtual desktop or a published application, a consistent experience across all resources is one of the essential objectives and requirements for End User Computing solutions. If you don't have a way of managing the user environment, you will have disgruntled users and not much of a productive solution.

Dude

Managing the User Environment

Managing the user environment is complicated, as there are a lot of factors and variables in the end-user environment. Further complexity is added by what needs to be managed from the organisation's perspective and what your users expect.

On top of this, yet another layer of complexity is added: workspaces are often not one dominating technology, but a combination of several pooled technologies. Physical desktop pools, virtual desktop pools, 3D engineering pools, application pools and so on.

That means that a user does not always log on to the same virtual desktop each time, or logs on to a published application on another device, while still wanting the same settings in that application as in the application on the virtual desktop. A common factor is that the operating system layer is a Windows-based OS. The downside: several versions and a lot of application options. We should make sure that user profiles are portable in one way or another from one session to the next.

With different versions of pooled workspaces, it is absolutely necessary that the method of deploying applications and settings to users is fast, robust and automated, both from the user context and from operational management.

Sync Personality

User Environment Managers

And cue the software solutions that abstract the user data and the corporate policies from the delivered operating system and applications, and manage them centrally.

There are a lot of solutions that provide a part of the puzzle, with profile management and such. And some provide a more complete UEM solution, like:

  • RES ONE Workspace (previously known as RES Workspace Manager),
  • Ivanti Environment Manager (previously known as AppSense Environment Manager),
  • Liquidware Labs ProfileUnity,
  • VMware User Environment Manager (previously known as Immidio).

And probably some more…

Which one works best depends on your requirements and the fit with the rest of the solution components used. Use the one that fits the bill for your organisation now and in a future iteration. And look for some guidance and experience from the field via the community or the Intarweb.

User Profile Strategy

All the UEM solutions offer an abstraction of the Windows user profile. The data and settings normally in the Windows user profile are captured and saved to a central location. When the user session is started on the desktop, the context changes, an application starts or stops, or the session is stopped, there is interaction between (parts of) the central location and the Windows profile to maintain a consistent user experience across any desktop. Just in time when they are needed, not bulk-loaded at startup.

The Windows profile itself comes in the following flavours:

  • Roaming. Settings and data are saved to a network location. By default the complete profile is copied at logon and logoff to any computer where the user starts a session. Which bits are copied or not can be tweaked with policies.
  • Local. Settings and data are saved locally on the desktop and remain there. When roaming, settings and data are not copied, and a new profile is created with each new session.
  • Mandatory. All user sessions use a prepared user profile; see the sketch below this list. All user changes made to the profile are deleted when the user session is logged off.
  • Temporary. Something fubarred. This profile only comes into play when an error condition prevents the user's profile from loading. Temporary profiles are deleted at the end of each session, and changes made by the user to desktop settings and files are lost when the user logs off. Not something to use with UEM.
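To illustrate the mandatory flavour: a mandatory profile is essentially a prepared profile whose registry hive is made read-only by renaming it. A minimal sketch, with a hypothetical profile share:

ren \\fileserver\profiles$\mandatory.V2\NTUSER.DAT NTUSER.MAN

Point the user accounts (or the UEM tooling) at that path and every session starts from the same clean state.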

The choice of Windows profile used with(in) the UEM solution often depends on the to-be architecture and the phase you are in: the starting point and where you are going. For example: start with the bloated and error-prone roaming profiles, run UEM side-by-side to capture the current settings, and move to clean mandatory profiles. Add Folder Redirection to the mix for centralised user data, and presto.

Use mandatory profiles as the de facto standard wherever possible; they are a great fit for virtual desktops, published applications and hosted/terminal servers in combination with a UEM solution.

The user profile strategy should also include something to mitigate the Windows profile version issue. OS versions come with different profile versions. Without a UEM solution you cannot roam settings between, for example, a .V2 and a .V3 profile. So migrating or moving between different versions is not possible without tooling. The following table is created with the information from TechNet about user profiles.

Per Windows OS, the user profile version suffix:

  • Windows XP and Windows Server 2003: no suffix (the first profile version)
  • Windows Vista and Windows Server 2008: .V2
  • Windows 7 and Windows Server 2008 R2: .V2
  • Windows 8 and Windows Server 2012: .V3 (after the software update and registry key are applied), .V2 (before)
  • Windows 8.1 and Windows Server 2012 R2: .V4 (after the software update and registry key are applied), .V2 (before)
  • Windows 10: .V5
  • Windows 10, versions 1607 and 1703: .V6
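As an illustration, the suffix ends up in the roaming profile folder name on the profile share; the paths below are hypothetical examples:

\\fileserver\profiles$\jdoe      (Windows XP, no suffix)
\\fileserver\profiles$\jdoe.V2   (Windows 7)
\\fileserver\profiles$\jdoe.V6   (Windows 10 1607)

The same user thus gets a separate, disconnected profile per OS version, which is exactly the problem that UEM tooling abstracts away.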

Next to that, UEM offers to move user-context settings out of Group Policies and logon/logoff scripts, again lowering the number of policies and scripts at logon and logoff. And it improves the user experience by lowering those waiting times: actually having what you need just in time when you need it.

The strategy should also cover what your organisation's user environment strategy is: what do you want to manage and control, what to capture for users and applications, and what not.

VMware User Environment Manager

With VMware Horizon, VMware UEM will often be used. And what do we need for VMware UEM?

In short VMware UEM is a Windows-based application, which consists of the following main components:

  • Active Directory Group Policy for configuration of the VMware User Environment Manager.
  • UEM configuration share on a file repository.
  • UEM User Profile Archives share on a file repository.
  • The UEM agent or FlexEngine in the Windows Guest OS where the settings are to be applied or captured.
  • The UEM SyncTool, for using UEM in offline conditions and synchronizing when the device connects to the network again.
  • UEM Management Console for centralized management of settings, policies, profiles and config files.
  • The Self-Support tool and the Helpdesk Support Tool, for resetting settings to a previous state and troubleshooting by level 1 support.
  • The Application Profiler for creating application profile templates. Just run your application with Application Profiler and it automatically analyzes where the application stores its file and registry configuration. The analysis results in an optimized Flex config file, which can then be edited in Application Profiler or used as-is in the UEM environment.

UEM works with the UEM shares and engine components available in the environment. With the latest release, Active Directory isn't a required dependency anymore, thanks to the alternative NoAD mode. The last three components are for management purposes.
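To give an idea of how the FlexEngine is wired in with the classic Active Directory Group Policy setup: it reads settings at logon and stores them at logoff. A hedged sketch, with the switches as I remember them from the UEM admin guide (verify for your version; the install path is the default one):

"C:\Program Files\Immidio\Flex Profiles\FlexEngine.exe" -r   (logon command, imports the user settings)
"C:\Program Files\Immidio\Flex Profiles\FlexEngine.exe" -s   (logoff script, stores the changed settings)

The configuration and profile archive shares themselves are set through the UEM ADMX-based Group Policy settings.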

All coming together in the following architecture diagram:

UEM Architecture

That’s it, no need for further redundant application managers and database requirements. In fact UEM will utilize components that organization already have in place. Pretty awesomesauce.

I am not going to cover installation and configuration of UEM; there are already a lot of resources available on the big bad web. Two excellent resources are http://www.carlstalhood.com/vmware-user-environment-manager/ and https://chrisdhalstead.net/2015/04/23/vmware-user-environment-manager-uem-part-1-overview-installation/. And of course the VMware blogs and documentation center.

Important for the correct usage of UEM is to keep in mind that the solution works in the user context. Pre-session or computer settings will not be in UEM. And it will not solve application architecture misbehaviour. It can help with some duct tape, but it won't solve application architecture changes from version 1 to version 4.

VMware UEM continually evolves, with ever tighter EUC integration: VMware Horizon Smart Policies, application provisioning integrations, application authorizations, new templates and so on.

Happy Managing the User Environment!

Sources: vmware.com, microsoft.com, res.com, ivanti.com, liquidwarelabs.com

EUC Toolbox: O Sweet data of mine… Mining data with Lakeside Software’s SysTrack

I have already covered the importance of insights for EUC environments in some of my blog posts. TL;DR of those: if you don't have some kind of insight, you're screwed. As I find this a very important part of EUC and EUC projects, and see that insights are often lacking when I enter the ring… I would like to repeat myself: try to focus on the insights a bit more, pretty please.

Main message: to successfully move to and design an EUC solution, assessment is key, as designing, building and running isn't possible without visibility.

Assessment Phase

The assessment phase is made up of gathering information from the business, such as objectives, strategy, business functional and non-functional requirements, security requirements, issues and so on. And mostly getting questions from the business as well. This gathering part consists of getting the information in workshops with all kinds of business and user roles, sending out questionnaires, and getting your hands on documentation regarding the strategy and objectives, plus current-state architecture and operational procedure explanations and documentation. Getting and creating a documentation kit.

The other fun part is getting some insights from the current infrastructure. Getting data about the devices, images, application usage, logon details, profiles, faults etc. etc. etc.

Important in getting this data is the correlation of user actions with these subjects. It is good to know, when the strategy is to move to cloud-only workspaces, that there will be several thousand steps between how a user is currently using his tool set to support the business process and that business objective. An intermediate step of introducing an any-device desktop and hosted application solution is likely to have a higher success rate. Or wanting to use user environment management while the current roaming profiles have bloated to 300GB. But I will try not to get ahead of the theories: get some assessment data.

Data mining dwarfs

Mining data takes time

Okay, out with this one. Mining, or gathering, data takes time and therefore a chunk of project time and budget. Unfortunately, with a lot of organisations it is either unclear what this assessment will bring, there are costs involved, or a permanent solution is not in place. Yes, there are sometimes point-in-time software and application reports that can be pulled from a centralized provisioning solution, but these often miss the correlation of that data to the systems and to what the user is actually doing. And then there is the fact that Shadow IT is around.

Secondly, you need to know what to look for in the mined data to answer the business questions.

But we can help; time and costs are more of a planning issue, for example when the required effort is not clear up front.

The process: Day 1 is installation. After a week a health check is done to see if data is flowing into the system. From day 14, initial reports and modeling can be started. From day 30, a business cycle has been mined. This means enough data points have been captured, desktops that are not often connected have had their connection (and thus their agents reporting), and there is enough variation for good analytics. Month-start and month-closing procedures have been captured. What about half-year procedures? No, they are not in a 30-day assessment when that period doesn't include the specific procedure. Check with the business whether those are critical.

Assess the assessment

What will be your information need, and are there any specific objectives the business would like to see covered? If you don't know what you are looking for, the amount of data will be overwhelming and it will be hard to get reports out. Also try to focus on what an assessment tool can do for you, and how. Grouping objects in reports that don't exist in the current infrastructure or the organization structure will need some additional technical skills, or needs to be placed in the "can't solve the organization with one tool" category.

Secondly, check the architecture of the chosen tool and how it fits in the current infrastructure. You probably need to deploy a server, have a place where its data is stored, and identify and deploy some client components. Check whether the users are informed; if not, do it. Are there desktops that do not always connect to the network, and how are these captured? Agents connect to the master once per day to have their data mined.

Thirdly, check if data needs to remain within the organization's boundaries, or whether it can be saved or exported to a secure container outside the organization. For analyzing and reporting it is beneficial for the timelines if you can work with the data offsite; it saves a lot of traveling time throughout the project.

Fourthly, what kind of assessment is needed? Do we need a desktop assessment, server assessment, physical-to-virtual assessment or something else? What kind of options do we have for gathering data: do we need agents, something in the network flow, etc.? This kind of defines the toolbox to use. Check if the vendor and/or community is involved in the product; this can prove very valuable for getting the right data and interpreting the data in reports. Fortunately for me, the tool for this blog post, SysTrack, can be used for all kinds of assessments. But for this EUC toolbox I will focus on the desktop assessment part.

SysTrack via Cloud

VMware teamed up with Lakeside Software to provide a desktop assessment tool, free for 90 days, called the SysTrack Desktop Assessment. It will collect data for 60 days and keep that data in the cloud for an additional 30 days. After 90 days, access to the data is gone. The free part is that you pay with your data: VMware does the vCloud Air hosting and adds the reports, Lakeside adds the software to the mix, and voilà, magic happens. The assessment can be found at https://assessment.vmware.com/. Sign up with an account and you're good to go. If you work together with a partner, be sure to link your registration with that partner so they have access to your information.

When registration is finished, your bits will be prepared. The agent software will be linked to your assessment. Use your deployment method of choice to deploy the agents to the client devices, physical or virtual, as long as it's a Windows OS. Agents need to connect to the public cloud service to upload the data to the SysTrack system. Don't like all your agents connecting to the cloud? You can use a proxy: your clients connect to the proxy, and the proxy connects to the cloud service. Check the collection state after deploying, and again a week after deploying. After that, data will show up in the different visualizers and overviews.

SDA - Dashboard

If you have greyed-out options, be patient, there is nothing wrong (well, yet). These won't become active until a few days of data have been collected, to make sure representative information is in there before most of the Analyze, Investigate and Report options are shown.

Have a business cycle in, and you can use the reports for your design phase. The Horizon sizing export is an XML that you can use in the Digital Workspace Designer (formerly known as the Horizon Sizing Estimator); find it at https://code.vmware.com/group/dp/dwdesigner. Use the XML as a custom workload.

SDA- User Visualizer

SysTrack On site

Okay, now for the on-site part. You got a customer that doesn't like its data on somebody else's computer, needs more time, or needs customizations to reports, dashboards or further drill-down options -> tick on-site deployment. It needs more preparation and planning between you and the customer. If the cloud data isn't a problem, let your customer start the SDA to have some information before the on-site deployment is running; mostly it will take calls and operational procedures before a system is ready to install.

Architecture

Okay, so what do we need? First, get a license from Lakeside or your partner for the number of desktops you want to manage. You will get the install bits, or the consultant doing the install will bring them.

Next, the SysTrack Master server. Virtual or physical, 2 vCPU and 8GB RAM (with Express use 12GB) to start with; it grows when you have more endpoints. Use the calculator (requirements generator) available on the Lakeside portal. Windows Server, minimum 2008 R2 SP1. IIS web roles, .NET Framework (all versions), AppFabric and Silverlight (brr). If you did not set up the prerequisites, the installer will install them (but it will take time). That is… except .NET Framework 3.5, as this is a feature for which the server needs an additional source file location. Add this feature to the system prior to installation. And while you are at it, install the rest.
For a small environment, or one without non-persistent desktops, a SQL Express (2014) instance can be included in the deployment. Otherwise, use an external database server with SQL Server Reporting Services (SSRS) set up. With Express, SSRS is set up for you by the way.

Lakeside Launch

You need a SQL user (or the local system account) with DBO rights on the newly created SysTrack database, and a domain user with admin rights on Reporting Services and local admin on the Windows server. If you are not using an application provisioning mechanism or a desktop pool template, you can push or pull the agents from the SysTrack Master. For this you need an AD user with local admin rights on the desktops (to install the packages) and File and Print Services, Remote Management and Remote Registry enabled. If SCCM or an MSI installation in the template is used, you won't require local admin rights, Remote Registry and such.

If there is a firewall between the clients (or agents, or children) and the master server, be sure to open the port you used in the installation; by default you need 57632 TCP/UDP. And if there is something between the master server and the Internet during registration, you will need to activate by phone. The Internet is only used for license activation though.
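A minimal sketch for opening that default port in the Windows firewall on the master server (standard netsh syntax, port number as above):

netsh advfirewall firewall add rule name="SysTrack children TCP" dir=in action=allow protocol=TCP localport=57632
netsh advfirewall firewall add rule name="SysTrack children UDP" dir=in action=allow protocol=UDP localport=57632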

And get a thermos of coffee, it can take some time.

To visualise the SysTrack architecture we can use the diagram from the documentation (without the coffee that is).

SysTrack Architecture

Installation is done in four parts: first the SysTrack Master Server (with or without SQL), secondly the SysTrack Web Services, thirdly the SysTrack Administrative Tools, and when 1-3 are installed and SysTrack is configured, you can deploy the agents.

  • SysTrack Master Server: the master for the application intelligence, storing the data from the children (or connecting to a data repository), configuration, roles and so on.
  • SysTrack Web Services: the front-end visualizers and reporting (SSRS on the SQL server).
  • SysTrack Administrative Tools: for example the Deployment Tool for configuration.

You gotta catch them all.

SysTrack Install menu

And click on Start install.

The installers are straightforward. Typical choices are the deployment type (full or passive), adding the reporting service user that was prepared (you can do this later as well), and the database type: pre-existing (a new window will open for connection details) for an external database, or the Express version. Every component needs its restart. After the restart of the Master setup, the Web Services installer will start. After this restart, the Administrative Tools don't start automatically; just open the setup, tick the third option and start the install.

Open the Deployment Tool. Connect to the master server. Add your license details if this is a new installation. Create a new configuration (Configuration – Alarming and Configuration). Selecting Base Roles\Windows Desktop and VMP will work as a good start for desktop assessments. Set your newly created configuration as the default, or change it manually in the tree once clients have been added. And push the play button when you are ready to start receiving clients; otherwise nothing will come in.

vTestlab Master Deployment Tool

Now deploy the agent via MSI. The installation files are on the master server in the installation location: SysTrack\InstallationPackages. You have the SysTrack agent (System Management Agent 32-bit) and the prerequisite Visual C++ Redistributable VC2010.
With MSI deployments you add the master server and port to the installer options, as sketched below. If the master allows clients to auto-add themselves to the tree, which is the default with version 8.2, they will show up.
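A hedged sketch of such an MSI push; the property names here are illustrative (check the public properties in your version of the MSI), but the idea is that the master host and port ride along so the agent can phone home on first start:

msiexec /i "System Management Agent.msi" MASTERSERVER=systrack01.lab.local MASTERPORT=57632 /qn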

"Normally" the clients won't notice the SysTrack agent being deployed; no restart is required for the agent installation.
In strict environments you can get a pop-up in Internet Explorer about the LSI Hook browser snap-in. You can suppress this by adding the CLSID of LSI Hook to the add-on list with a value of 1, or you can edit your configuration and set web browser plugins to false. The latter means that web data from any browser is not collected by SysTrack.
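For the suppression route, the Internet Explorer Add-on List policy lives in the registry; a hedged sketch, where the CLSID placeholder must be replaced with the actual LSI Hook CLSID from the pop-up or the Lakeside documentation (a value of 1 means allowed without prompting):

reg add "HKLM\Software\Policies\Microsoft\Internet Explorer\Ext\CLSID" /v "{CLSID-of-LSI-Hook}" /t REG_SZ /d 1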

Configuration Web Browser Plugin.png

In any case, be sure to test the behaviour in your environment before rolling out to a large group of clients.

Conclusion

While the cloud version is deployed within a snap, the data is easily accessed within the provided tools, and the reports fit the why of the assessment, there is a big but… Namely, that a big chunk of organisations don't like this kind of data going into the cloud, even when the user names are anonymized. Pros of the on-site version are that it gives you more customization and reporting possibilities. The downside is that SysTrack on-site is Windows-based, and the architecture will require Windows licenses next to the Lakeside license. All the visualizers and tools can be clicked and drilled down from the interface, but it feels a little like several tools have been duct-taped together. You can customize whatever you want: dashboards, reports and grouping. But you will need a pretty broad skill set, including how to build SQL queries, SSRS reports and the SysTrack products themselves. And what about the requirement for Microsoft Silverlight, a deprecated framework? Tsk tsk. Come on, this is 2017 calling…

But in the end it does not matter whether SysTrack from Lakeside Software or, for example, Stratusphere FIT from Liquidware Labs is used; that is your tool set. The most important part is to know what information is needed from what places, and to know thy ways to present it. Assess the assessment, plan some time, and get mining for diamonds in your environment.

– Happy Mining!

Sources: vmware.com, lakesidesoftware.com

vCenter Server Appliance 6.0 in VMware Workstation

For demos, presentations, breaking environments or just killing time, I have a portable test lab on my notebook. Yes, I know there are also options for permanent labs, hosted labs and Hands-on Labs for these same purposes. Great places for sure, but that is not really what I wanted to discuss here.

As I am break… ehhh, rebuilding my lab to vSphere 6.0, I wanted to install VCSA 6.0 in VMware Workstation. Nice: import a My VMware-downloaded VCSA-versionsomething.ova (after e-mail address number ####### registered over there) and we are done! …….. Well, not quite.
First, the vCenter download contains the OVA, but it is a little bit hidden. The guided installer will not help you here. You will need to mount or extract the downloaded ISO and look for vmware-vcsa in the vcsa/ folder.

VCSA Location
Copy the vmware-vcsa file to a writable location (when you just mounted the ISO) and rename vmware-vcsa to vmware-vcsa.ova. Now we can import the OVA into VMware Workstation. When the import finishes, do not start the VM yet. Certain values that are normally inserted via the vSphere Client or ovftool need to be appended to the VMX file of the imported VCSA. Open the VMX in the location where you let Workstation import the VM and append the following lines:

guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.addr = "10.0.0.11"
guestinfo.cis.appliance.net.prefix = "8"
guestinfo.cis.appliance.net.gateway = "10.0.0.1"
guestinfo.cis.appliance.net.dns.servers = "10.0.0.1"
guestinfo.cis.vmdir.password = "vmware-notsecurepassexample"
guestinfo.cis.appliance.root.passwd = "vmware-notsecurepassexample"

Note: change the net options and the vmdir/appliance password options to the appropriate values for your environment.

If these are not appended, starting the VCSA shows the error "vmdir.password not set, aborting installation" on the console (next to "root password not set"), and the network connection will be dropped even if you configure these on the VCSA console (via F2).

Save the VMX file.

And now it is time to let it rip. Start up your engines. And be patient until… lo and behold:

VCSA Running in workstation

To check if the networking is accepting connections from a server in the same network segment, open the VCSA URL, in Chrome for example. After accepting the self-signed certificate unsecure-site (run away!) message, you will (hopefully) see:

VCSA in Workstation

Next we can log on to the Web Client (click and accept the unsecure connection/certificate) and log in as Administrator@vsphere.local with the password provided in the VMX (in the above example vmware-notsecurepassexample). As a bonus, you now know where to look when you forget your lab VCSA password ;-).

VCSA 6 Web Client

(And now I notice the vCenter Operations Manager icon in the Web Client Home screen. Why is this not updated like vRealize Orchestrator :-) )

-Enjoy!


VMware Utility Belt must have tools – RVTools 3.7 released

In March 2015, RVTools version 3.7 was released.

This, in my opinion, is the tool each VMware consultant must have in his VMware utility belt, together with the other standard presented tools. At this time RVTools is still free, so budget is no constraint to use this tool. More importantly, it's lightweight, very simple in usage, and shows much-wanted information in an ordered overview, or allows for exporting the information in Excel format to analyse offline.

Before using this tool, it is important to understand that it makes a point-in-time snapshot of the infrastructure configuration items in place. In short: what is configured and what is the current operational state. No more, no less. The information can then be used in, for example, operational health checks, or as an AS-IS starting point in the analysis/inventory phase of projects (consolidation or refresh projects). See more use cases further below, and I am sure there are some more examples out there.

No trending or what-ifs, for example; that is something you will have to do yourself, or use other solutions/tools available for the software-defined data center. VMware has some other excellent tools for SDDC management and insights into your virtual environment (for example vRealize Operations and Infrastructure Navigator). But that is a completely different story.

What is RVTools?

RVTools is a Windows .NET application which uses the VI SDK (updated to 5.5 in this release) to display information about your VMware infrastructure.
An inventory connection can be made to vCenter or a single host to get as-is information about hosts, VMs, VMware Tools, datastores, clusters, networking, CPU, health and more. This information is displayed in a tab-page view. Each tab represents a specific type of information, for example hosts or datastores.

RVTools can currently interact with Virtual Center 2.5, ESX Server 3.5, ESX Server 3i, Virtual Center 4.x, ESX(i) Server 4.x, Virtual Center 5.0, Virtual Center Appliance, ESXi Server 5.0, Virtual Center 5.1, ESXi Server 5.1, Virtual Center 5.5, ESXi Server 5.5 (no official 6.0 support in this version).

RVTools can export the inventory to Excel and CSV for further analysis. The same tab from the GUI will be visible in Excel.

image

image

There is also a command-line option to have (for example) a scheduled inventory and let the results be sent via e-mail to an administrative address.
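A hedged sketch of such a scheduled export, with switches from the RVTools command-line documentation (verify against the documentation shipped with your version; server name and paths are examples):

RVTools.exe -s vcenter01.lab.local -u domain\svc_rvtools -p password -c ExportAll2xls -d C:\Reports

Pair it with the bundled RVToolsSendMail and the Windows Task Scheduler, and the daily overview lands in your mailbox.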

Use Cases?

– On site Assessment / Analysis; Get a simple and fast overview of a VMware infrastructure. The presented information is easy to browse through, where in the vSphere Web Client you would go clicking through screens. When there is something interesting in the presented data you can go deeper with the standard vSphere and ESXi tools. Perfect for fast analysis and health checks.

– Off site Assessment / Analysis; Get the information and save the Excel or CSV dump to get a fast overview and dump for later analysis. You will have the complete dump (a point in time reference that is) which you can easily browse through when writing up an analysis/health check report.

– Documentation; The dumped information can be used on or offline to write up documentation. Excel tabs are easily copied in to the documentation.

– (Administrator) reporting; Via the command tool get a daily overview of your VMware infrastructure. Compare your status of today with the point in time overview of the day before or last week (depending on your schedule and/or retention). Use this information in the daily tasks of adding/changing documentation, analysis, reporting and such.

Release 3.7 Notes

For version 3.7 the following has been added:

  • VI SDK reference changed from 5.0 to 5.5
  • Extended the timeout value from 10 to 20 minutes for really big environments
  • New field VM Folder on vCPU, vMemory, vDisk, vPartition, vNetwork, vFloppy, vCD, vSnapshot and vTools tabpages
  • On vDisk tabpage new Storage IO Allocation Information
  • On vHost tabpage new fields: service tag (serial #) and OEM specific string
  • On vNic tabpage new field: Name of (distributed) virtual switch
  • On vMultipath tabpage added multipath info for path 5, 6, 7 and 8
  • On vHealth tabpage new health check: Multipath operational state
  • On vHealth tabpage new health check: Virtual machine consolidation needed check
  • On vInfo tabpage new fields: boot options, firmware and Scheduled Hardware Upgrade Info
  • On statusbar last refresh date time stamp
  • On vhealth tabpage: Search datastore errors are now visible as health messages
  • You can now export the csv files separately from the command line interface (just like the xls export)
  • You can now set an auto-refresh data interval in the preferences dialog box
  • All datetime columns are now formatted as yyyy/mm/dd hh:mm:ss
  • The export dir / filenames now have a formatted datetime stamp yyyy-mm-dd_hh:mm:ss
  • Bug fix: on dvPort tabpage not all networks were displayed
  • Overall improved debug information

Who?

RVTools is written by Rob de Veij aka Robware. You can find Rob on twitter (@rvtools) and via his website http://robware.net.
A big thank you to Rob for unleashing yet another version of this great tool!

As the tool is currently free, please donate if you find the application useful, to help and support Rob in further developing and maintaining RVTools.

vSphere: Working with traffic filtering in the vNetwork Distributed Switch

Introduction

Within a physical and virtual infrastructure there are several options to limit the inbound and outbound traffic from and to a network node, part of the network, or an entire network (security zone). A limit can be filtering (allowing or dropping certain traffic) or prioritization of traffic (QoS / DSCP tagging of the data), where a defined type of traffic is limited versus a kind of traffic with a higher prioritization.

Options include filtering with ACLs, tagging and handling sorts of traffic with QoS / DSCP devices, firewalling (physical or virtual appliances), physical or logical separation, or Private VLANs (PVLANs for short). Furthermore, an often overlooked component: keep all your layers in view when designing the required security. Suppose you are required to filter traffic from a specific data source to a specific group of hosts, where the requirement is that those VMs are not allowed to see or influence the other hosts. Traffic filters set up on the physical network layer will not always be able to "see" the traffic: for example, blade servers in certain blade chassis can access the same trunked switch ports / VLAN, or VMs in the same portgroup / VLAN are able to connect to each other's network, as the traffic never reaches and is not redirected to the physical network infrastructure where these filters are in place. That is, when not using a local firewall in the OS. You could say this is bad designing, but I have seen these described "flaws" pop up a little too often.

Options in the VMware virtual infrastructure

You have the option to use third-party virtual appliances as firewalls, vCloud Suite components, or network virtualization via NSX (SDN), for example. These are not always implemented due to constraints overheard around, like: overhead of the traffic handled by the virtual firewall (sizing), a single point of failure when just using one appliance, added complexity for certain IT ops where networking and virtualization are strictly separated (bad bad bad), or just no budget/intention to implement a solution that goes further than the host virtualization the organization is at (as they probably just started). These are just a few, and not all are valid in my opinion…

From vSphere 5.5 there is another, mostly unknown and unused, option: the traffic filtering and tagging engine in the vNetwork Distributed Switch (vDS or dvSwitch). That is, when you have an Enterprise Plus edition, but hey, without that a vDS is not available in the first place. Traffic filtering was introduced in version 5.5 and can therefore only be implemented on vSphere 5.5+ members of a 5.5+ version vDS. This vDS option is the one I want to show you in this blog post.

Traffic filters, or ACLs, control which network traffic is allowed to enter or return (ingress and/or egress rules) from a VM, a group of VMs, or a network via the port group or an uplink (vmnic). The filters are configured at the uplink or port group, and allow an unlimited number of rules to be set at this level. These handle the traffic from VM to portgroup and/or the traffic from portgroup to the physical uplink port, and vice versa. The rules are processed in the VMkernel; this is fast, and no external appliance is needed. For outgoing traffic, rule processing happens before the traffic leaves the vSphere host, which can also save on ACLs on the physical layer and on network traffic when only certain types of traffic, or traffic to a specific destination, are allowed.

With the traffic filter we have the option to set allow/drop rules (for the ACL) on the following qualifiers:

vDS - image1

The tag action allows setting the traffic tags. For this example we don’t use the tag action.

System Traffic covers the vSphere traffic types you will likely see around, where we can allow a certain type of traffic to a specific network. MAC lets us filter on layer 2, on specific source and/or destination MAC addresses or VLAN IDs. IP lets us filter on layer 3, for the IP traffic types TCP/UDP/ICMP, both IPv4 and IPv6.

The following System traffic types are predefined:

vDS - image2

Make it so, number One

I will demonstrate the filtering option by creating a vDS and adding an ESXi host and a VM to this configuration. Just a simple one to get the concept across.

My test lab vDS is set up with a VM as in this screenshot:

vDS - image3

I got a DSwitch-Testlab vD switch with a dvPortgroup VM-DvS (tsk tsk, I made a typo and am therefore not consistent with cases; please don't follow this example ;-)). A VM, Windows Server 2012 – SRDS, is connected to this portgroup.

The VM details are as follows:

vDS - image4

The IP address 192.168.243.165 is the one we will be looking at.

At the VM-DvS, going to the Manage tab, we can choose Policies. When we push the Edit button we can add or change the traffic filtering (just look for the clever name).

vDS - image5

vDS - image6
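A traffic rule boils down to a small set of fields. As a plain-text sketch, this is the rule used in this example (values from my test lab; the name deliberately contradicts the action, as explained next):

Name: drop ICMP
Action: Allow (changed to Drop further on)
Traffic direction: Ingress/Egress
Qualifier: IP, protocol ICMP
Destination address: 192.168.243.165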

As you can see, I have already created an IP ICMP rule whose action currently says the complete opposite of the rule name. This is on purpose, to show the effect when I change the action. When I ping the VM from a network outside of the ESXi host, I get a nice ICMP response:

vDS - image7

When we change the ICMP rule to the Drop action, we get the following response:

vDS - image8


That's what we want from the action. Other protocols are still allowed, as there are no other rules yet; I can open an RDP session to this Windows Server.

vDS - image9

When you want to allow certain traffic and not other traffic, you will have to create several rules. The applied network traffic rules are processed in a strict order (which you can change). If a packet already satisfies a rule, the packet might not be passed to the next rule in the policy. This concept does not differ from filtering on most physical network devices. Document and draw out your rules and traffic flows carefully, or implementation/troubleshooting will be a pain in the $$.

This concludes my simple demonstration.

– Enjoy!

Sources: vmware.com

Hey come out and play. Join the vSphere Beta Program

On June the 30th, VMware announced the launch of the latest vSphere Beta Program. This program is now open for anyone to register and participate. The beta program used to be for just a select group, but with the VSAN beta VMware started to allow a wider group of participants. In my opinion this is good: with a larger group of software-testing participants, the amount of feedback, lessons learned and experience will be greater. The community and its testers will make the product and its features even better. Hopefully more betas will be opened to a larger group of participants.

A larger group does make it harder to keep information within the group and off, for example, the big bad Intarweb. While this beta program is open to everyone, it is still bound to NDA rules. Details are in the VMware Master Software Beta Test Agreement (MSBTA) and the program rules, which you are required to accept before joining. After that, share your comments, feedback and information in the private community that comes with the program.

What can you expect from the vSphere Beta program?

When you register with a My VMware account, you can expect to download, install, and test vSphere beta software in your environment. The vSphere Beta Program has no established end date. VMware strongly encourages participation and feedback in the first 4-6 weeks of the program (starting on June 30, 2014). What are you waiting for?

Some of the reasons to participate in the vSphere Beta Program are:

  • Receive access to the vSphere Beta products
  • Gain knowledge of and visibility into the product roadmap before others.
  • Interact with the VMware vSphere beta team: a chance to talk to the engineers and such.
  • Provide direct input on product functionality, configuration, usability, and performance.
  • Provide feedback influencing future products, training, documentation, and services.
  • Collaborate with other participants, learn about their use cases, and share advice and learned lessons of your own.

What is expected from the participants?

Provide VMware with valuable insight into how you use vSphere in real-world conditions and with real-world test cases, enabling VMware to better align the products with business needs.

Where?

Sign up and join the vSphere Beta Program today at: https://communities.vmware.com/community/vmtn/vsphere-beta.

– Go ahead, come out and play. Join the vSphere Beta program now!

Source: vmware.com

vSphere Auto deploy TFTP on Citrix PVS

I occasionally do a deployment of Citrix on vSphere. When using Citrix, a returning component is Provisioning Services (PVS) for automated deployment of Citrix session servers or VD images. When using a management cluster (to prioritize and separate control-layer or infrastructure components), an option is to use Auto Deploy for the session image hypervisor hosts (depending on the edition, depending on the cluster requirements).
One of the components used in Auto Deploy is TFTP for the boot image. This can be leveraged for PVS as well, as PVS also needs a type of boot image to start up the VMs. With PVS this is also a TFTP service (or a boot image, but we leave that out of this posting) that is installed on the PVS server.

Why not use this TFTP service, as vSphere Auto Deploy does not include one?

What is Auto Deploy?

vSphere Auto Deploy provides an infrastructure for automatic server provisioning and network deployment (streaming) of the ESXi hypervisor image. It uses a centrally managed image: administrators just need to manage the central image and the host profiles. The deployment can be on local storage, stateful on HDD, SD or USB, or stateless to the host's RAM. It works in conjunction with:

– vCenter,
– host profiles,
– TFTP server,
– Auto Deploy server and Image Builder,
– a PXE boot infrastructure with a DHCP service.

These services can be installed on the vCenter host or hosted/integrated on specific servers. When using the stateless host option, be sure to have a highly available Auto Deploy infrastructure, for example by load balancing the TFTP and Auto Deploy services.

Auto Deploy and host profiles are available from the Enterprise Plus edition. Okay, not every virtual desktop cluster will be set up with Enterprise Plus, but we can also do this in the management cluster as long as we prioritize the right machines.

What is PVS?

Provisioning Services infrastructure is also based on software-streaming technology. This technology allows computers to be provisioned and re-provisioned in real-time from a single shared-disk image. In doing so, administrators can completely eliminate the need to manage and patch individual systems. Instead, all image management is done on the master image.

PVS works with:

– PVS vDisk store for the master image,
– PVS Console for setting up farm, device collections and managing updates and assignments,
– PVS Streaming service,
– PVS Citrix Shared components, such as MSSQL data store, License server,
– PVS Network services DHCP, PXE and TFTP.

Boot image

A boot image, or bootstrap file, is a small kernel used to start up the machine, connect to the network and receive its image via the network. The boot file must be configured so that it contains the information needed to communicate with the streaming services, Auto Deploy or PVS.

Putting it all together

As per the lists above, we have some components that are used in both infrastructures: DHCP, TFTP and PXE. DHCP is used to provide a bootp image server and image name to the PXE clients (option 66 pointing to the PXE/TFTP service on the PVS server, and option 67 for the boot image name). The PXE infrastructure is a networking zone that clients connect to. These clients are configured to boot from the network (change the VM BIOS boot order, for example). This can be a separate network, a logically separated network (VLAN), or just an all-in-one network (policies are advisable to guarantee some networking bandwidth to the different types), depending on the requirements of the organization. We will be using a DHCP scope already set up on Microsoft DHCP, and the TFTP service on the PVS server (why set up more than one?).

We can set the DHCP options on scopes (if using logically separate networks) or on leases (DHCP reservations, for the smallest number of hosts: if we have fewer ESXi hosts than streaming VMs, we put a scope option pointing to the common image and use DHCP reservations for the other image). We will use the same TFTP service (option 66), but depending on the sort of machine (VM or ESXi host) we use a different boot image name, as sketched below.
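A hedged sketch with the Microsoft DHCP netsh syntax; the scope and the reserved IP for an ESXi host are examples, while the TFTP server and image names match the lab below:

netsh dhcp server scope 10.0.0.0 set optionvalue 066 STRING "10.0.0.100"
netsh dhcp server scope 10.0.0.0 set optionvalue 067 STRING "ARDBP32.bin"
netsh dhcp server scope 10.0.0.0 set reservedoptionvalue 10.0.0.21 067 STRING "undionly.kpxe.vmw-hardwired"

The scope-level option 67 points the streaming VMs at the Citrix bootstrap, while the per-reservation option 67 overrides it with the Auto Deploy image for the few ESXi hosts.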

First we get the Auto Deploy TFTP image from the vCenter service and put it in the PVS image location. We connect to the vCenter services via the Web Client and browse to vCenter, the vCenter server name, Manage and Auto Deploy. Here we have the option to download the TFTP Boot Zip.

image

The boot zip is a collection of vSphere bootstraps with the correct IP of your Auto Deploy service (it is created on installation).

image

Next we connect to the PVS server and go to the PVS TFTP image location. This is C:\ProgramData\Citrix\Provisioning Services\Tftpboot.

image

Here we place the Auto Deploy images from the boot zip.

image

Next up, we change the DHCP options accordingly. I used the DHCP scope options for the PVS image, and reservations for the ESXi hosts (as these are just three hosts). Citrix uses the ARDBP32.bin image from the 10.0.0.100 lab PVS server.

image

And VMware is set up to use the undionly.kpxe.vmw-hardwired image from the same 10.0.0.100 lab PVS server.

image

Let’s boot the machines up. First the ESXi host. It receives the correct VMware image and starts to boot the base image from auto deploy.

image

Next up start a VM to use the PVS stream image:

image

It boots up to connect to the PVS server. Unfortunately, I apparently forgot to add an entry for this device (auto-create is off as well), so no vDisk in this example. But if we add a vDisk, this will load as well.

image

Alternatively, we can set up a TFTP service on another host if we want to separate this service from the PVS service, and do the same for the Citrix boot image. Just follow the same procedure and add the Citrix images as well.

Highly Available

By default the TFTP service is not highly available, and when multiple services depend on it, the need to increase availability is even higher. Set up a highly available PVS / Auto Deploy service by, for example, leveraging a NetScaler (or another load balancer with a service-check technique) to load balance the TFTP and streaming services.

VMware Utility Belt must have tools – RVTools 3.6 released

On February the 22nd, RVTools version 3.6 was released. As I use this tool very often, and I notice on some occasions that not all consultants are familiar with it, I wanted to write a post about RVTools to further spread the word.

This, in my opinion, is the tool each VMware specialist must have in his VMware utility belt together with the other standard presented tools. At this time RVTools is free and lightweight, very simple in usage, and the budget required is small (just a donation!), so it does not have to be a constraint to use this tool.

What is RVTools?

RVTools is a Windows .NET application which uses the VI SDK to display information about your VMware infrastructure.
A connection can be made to vCenter or a single host to get information about hosts, VMs, VMware Tools, datastores, clusters, networking, CPU, health and more. This information is displayed in a tab view; each tab represents a type of information, and clicking on a tab displays that specific kind of information from your environment.

RVTools can currently interact with Virtual Center 2.5, ESX Server 3.5, ESX Server 3i, Virtual Center 4.x, ESX(i) Server 4.x, Virtual Center 5.0, Virtual Center Appliance, ESXi Server 5.0, Virtual Center 5.1, ESXi Server 5.1, Virtual Center 5.5, ESXi Server 5.5.

RVTools can export the inventory to Excel and CSV for further analysis. The same tab from the GUI will be visible in Excel.

image

image

There is also a command-line option to have (for example) a scheduled inventory and let the results be sent via e-mail to an administrative address.

Use Cases?

– On site Assessment / Analysis; Get a simple and fast overview of a VMware infrastructure. The presented information is easy to browse through, where in the vSphere Web Client you would go clicking through screens. When there is something interesting in the presented data you can go deeper with the standard vSphere and ESXi tools. Perfect for fast analysis and health checks.

– Off site Assessment / Analysis; Get the information and save the Excel or CSV dump to get a fast overview and dump for later analysis. You will have the complete dump (a point in time reference that is) which you can easily browse through when writing up an analysis/health check report.

– Documentation; The dumped information can be used on or offline to write up documentation. Excel tabs are easily copied in to the documentation.

– (Administrator) reporting; Via the command tool get a daily overview of your VMware infrastructure. Compare your status of today with the point in time overview of the day before or last week (depending on your schedule and/or retention). Use this information in the daily tasks of adding/changing documentation, analysis, reporting and such.

Release 3.6 Notes

From the version information page, at 3.6 the following has been added:

  • New tab page with cluster information
  • New tab page with multi-path information
  • On vInfo tab page new fields HA Isolation response and HA restart priority
  • On vInfo tab page new fields Cluster affinity rule information
  • On vInfo tab page new fields connection state and suspend time
  • On vInfo tab page new field The vSphere HA protection state for a virtual machine (DAS Protection)
  • On vInfo tab page new field guest state.
  • On vCPU tab page new fields Hot Add and Hot Remove information
  • On vCPU tab page cpu/socket/cores information adapted
  • On vHost tab page new fields VMotion support and storage VMotion support
  • On vMemory tab page new field Hot Add
  • On vNetwork tab page new field VM folder.
  • On vSC_VMK tab page new field MTU
  • RVToolsSendMail: you can now also set the mail subject
  • Fixed a data store bug for ESX version 3.5
  • Fixed a vmFolder bug when started from the command line
  • Improved documentation for the command line options

Pretty fly…

Who?

RVTools is written by Rob de Veij aka Robware. You can find Rob on twitter (@rvtools) and via his website http://robware.net.
A big thank you to Rob for unleashing this excellent tool into cyberspace!

As the tool is currently free, please donate if you find the application useful, to help and support Rob in further developing and maintaining RVTools.

vSphere: the statistics gathering

For VMware vSphere infrastructures, and the how and why of what your environment is doing, it is helpful to understand how vSphere and vCenter collect and store statistics by default, and how these are displayed. The point here is that an awful lot of assumption goes into performance reports and troubleshooting. Where do these assumptions come into play? In which counters you look at, and in the way the data is collected (and what is collected), stored and graphed. Especially when selecting intervals for display (or gathering), peaks can flatten out because they have been averaged over the displayed graph time of the historical data (historical vs. realtime), or metrics are missing when needed.

How does the statistics gathering work?

Each host stores statistics data for up to an hour via the local performance manager. The performance manager receives realtime instance data from, for example, the CPU instances. Within the vCenter data collection interval, the vCenter performance manager queries each host (that is, the hosts that are managed by this vCenter), retrieves a subset of the host statistics data, and stores it in the vCenter database. When, what and how much are configurable in your VMware infrastructure. We have two values for that: intervals and levels, which can be set on the vCenter.

These collected historical metrics can then be displayed via the vSphere client.

A small model taken from the VMware Documentation Center.

image

Statistics rollups and intervals

As the ESXi host collects the statistics in realtime (that is, every 20 seconds), these are rolled up to the vCenter database for historical purposes. vCenter collects data from all of the hosts that the vCenter Server manages. The PerformanceManager defines performance intervals that specify time periods for performance data rollup, a methodology for combining data values. The server stores the rolled-up performance counter data in the vCenter database. This is done in four performance intervals that determine how collected instance data is aggregated and stored; the aggregated data is a set of instance data values collected for a performance counter. These intervals can be modified to a limited extent via the collection intervals, which determine the duration for which statistics are aggregated, calculated, rolled up, and archived. Together, the collection interval and collection level determine how much statistical data is gathered and stored in your vCenter Server database.

image
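For reference, these are the default collection intervals as vCenter ships them:

  • 5 minutes, kept for 1 day
  • 30 minutes, kept for 1 week
  • 2 hours, kept for 1 month
  • 1 day, kept for 1 year

So a 20-second realtime peak survives only as part of a 5-minute average once it lands in the vCenter database, and as part of a 2-hour average when you look back over a month.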

Are those rollups evil? Yes, they can be. Peaks can become less pronounced because they have been averaged out over the graph time. But you must not forget that this is historical data: a month graph with 12 data points can show different peak values than a week graph. Know what you are looking for and for which period, and don't base your conclusion on just one graph.

Statistics Levels

To reduce traffic to the vCenter database, vCenter uses a technique to limit which metrics are archived in the database. Certain statistics might be deemed more valuable to you than others. The statistics levels vary from one to four, with one being the least detailed statistics level and four the most detailed.

  • Level one: cluster services, CPU, disk, memory, network, system, and virtual machine operations counters. The default level.
  • Level two: level one plus all disk, memory and VM operations metrics. Use for long-term performance monitoring when device statistics are not required but you want to monitor more than the basic statistics.
  • Level three: levels one and two plus per-device statistics, such as CPU usage of a host on a per-CPU basis, or per-virtual-machine statistics. Use for short-term performance monitoring after encountering problems, or when device statistics are required.
  • Level four: all possible metrics. Use for short-term performance monitoring after encountering problems, or when device statistics are required. Only to be used for the shortest amount of time due to the large quantity of data.

The statistics level dictates whether or not a statistic is stored in the vCenter database. If a metric is a level-two statistic but vCenter is configured at level one, this metric is not stored in the database. Not stored means users are not able to query its historical values. Not a problem all the time, but also not good if you are looking for exactly those counters.

These levels sure have their benefits (information at minimal database cost) and drawbacks (possibly missing metrics), but the ability to change the statistics levels gives something back. We can gather basic information at minimal database cost for the normally running environment with the level-one counters. When needed, in a troubleshooting scenario, we can temporarily increase the statistics level to get more detailed information.

– Happy statistics gathering!

Should I hot add CPU/Mem or go now?

I am doing my VCAP5-DCA prep and using the unofficial official VCAP5-DCA Study Guide to guide me through the subjects (great resource! Check it out at http://www.virtuallanger.com/vcap-dca-5/). And apparently I'm easily distracted, this time by the subject of resource optimization/management, and in particular the hot-plug CPU and memory options.

As you probably know, there are several devices you can add to a running VM, like virtual NICs and virtual disks. vSphere 5.5 even has support for hot add/remove of PCIe SSD. These normally work hot out of the box.
For CPU and memory (the virtual ones, vCPU and vRAM) this is not the case. They are disabled by default. Why? Well, because support in the guest OS is limited, not all hypervisor features remain operable (for example FT), and for applications running in the VM support is even more limited. Often you will have to recycle the application service to let it use the added resource, or there is a possibility of resource degradation/instability. That means downtime, and that isn't what hot add/plug is about. What is the point of using hot add/plug when there is downtime involved? Sure, a guest OS restart vs. an application service restart probably takes more time on the OS part, but applications work in chains, and you will need to know which part (or all) of the layers needs attention. This counts more for vCPU hot add; memory hot add/plug has been around longer and is incorporated in more guest OSes/applications than hot-plug vCPU is.

Secondly, if you haven't selected hot add/plug/remove of vCPU and vRAM at the creation of your VM (or template), and enabled guest OS support, you will have to power down the VM before being able to change these options. Planning ahead (capacity management) is key here, but that goes for resource management as well. You might as well adjust the resource values you require before powering on the VM.

And what about elastic resources: is there a hot-remove option? For vSphere this is currently a no-no, no matter what the guest OS support is (there is some on a few Linux distributions; Windows is also a no-no). Memory removal is only done by powering down the VM.

Which OSes support hot add/plug/remove of CPU/memory?

image

NB: I have left out Windows Server 2003. Yes, there is support in there, but this product is bound for EOL. If you are thinking about hot add for this product, rather start thinking about a step up in life cycle.

NB: For Linux I just included some distributions. There are of course more supported on VMware.

Which applications support hot add/plug/remove of CPU/memory?

This is the harder list to create. For very few applications is there public information on support for (or just the possibility of) hot add of vCPU/vRAM. For applications it is harder to determine whether a resource added to the OS is actually usable by the application. Applications must support multiple processors to take full advantage of a virtual socket that is added.

IIS? No. Needs a recycle of service or application pool.

SQL Server? Needs Enterprise edition. After adding vCPUs, execution of the RECONFIGURE statement is needed before they are used.
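That statement is a T-SQL one-liner from any query window, per the SQL Server hot-add CPU documentation:

RECONFIGURE
GO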

Exchange? A maybe, again a recycle. There are performance issues rumored when using these features, some have to do with paging depletion afterwards.

Be sure to test test test.

What are the prerequisites for hot add/plug/remove of CPU/memory on a running VM?

  • The virtual machine has a guest operating system that supports hot-plug functionality, and the functionality must be turned on in the OS.
  • The virtual machine uses hardware version 7 or later. KB 2050800 mentions an issue with Windows Server 2012 and Windows 8 on hardware version 9 (5.1).
  • VMware Tools is installed.
  • Hot add must be enabled per VM, at creation time or after powering off the VM.
  • The hardware must be able to support hot plug as well, or else there must be resources available to evict the VMs to another host while doing maintenance on the physical host.
  • FT must not be used; hot add does not work in combination with Fault Tolerance.
  • vNUMA must not be needed; enabling CPU hot add disables vNUMA for the VM.
  • Use vSphere Essentials, Essentials Plus, Standard, Enterprise or Enterprise Plus. There are limitations per edition and per version that you will have to take into account (for example, the 5.0 Standard edition will not let you use hot add).
  • Check application licensing: how will this affect a per-core or per-processor application license?

Edit settings of VM to change hot plug/add.

image

image
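Under the hood, those checkboxes are plain VM configuration options. A minimal sketch of the corresponding VMX entries (option names as vSphere writes them; set with the VM powered off):

vcpu.hotadd = "TRUE"
mem.hotadd = "TRUE"

The same effect; the GUI just writes them for you.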

Conclusion

Do we need to enable hot plug by default on our VMs? Depending on the usage, only set it on for VMs that have a tendency to run out of resources quickly, are very in demand, and cannot tolerate a little downtime for reconfiguring. For a small part of the private desktops of the VD environment this can also be a good option to configure beforehand; this is normally a small group of your users, for example developers with private-type desktops. For the standard bunch of VMs/VDs, leave it off, like the product's default. You can normally plan ahead the resources for these kinds of VMs; vCenter Operations is great for resource capacity planning. So, plan ahead and enable it by default on all your VMs? No, not yet; there is too little application support out there, especially for vCPUs. Planning ahead means knowing some more variables, and they are not around yet (on the application level, that is). Planning ahead is having your resources configured and monitored correctly; when there is a need to change, change the resource settings.

The limited OS support and very limited application support is the why of the non-adoption of these hot-plug features, which have already been around a little while. Can we expect this to grow with all the cloud movements and software-defined data centers? This depends; when there is a need, it will be incorporated into more applications. That list is currently small.

– Back to studying.