EUC Layers: Display protocols and graphics – the stars look very different today

In my previous EUC Layer post I discussed the importance of putting insights on screens; in this post I want to discuss the EUC Layer that puts something on the screen of the end user.

Display Protocols

In short, a display protocol transfers mouse, keyboard and screen input and output (ever wondered what that vSphere MKS error was about when it popped up?) between a (virtual) desktop and the physical client endpoint device, and vice versa. Display protocols usually optimize this transfer by encoding, compressing, deduplicating and performing other magical operations to minimize the amount of data transferred between the client endpoint device and the desktop. Less data equals less chance of interference equals a better user experience at the client device. Yes, the one the end user is using.

For this blog post I will stick to the display protocols VMware Horizon has under its hood. VMware Horizon supports four ways of using a display protocol: PCoIP via the Horizon Client, Blast Extreme/BEAT via the Horizon Client, RDP via the Horizon Client or the Microsoft Remote Desktop client, and HTML Blast via any HTML5-compatible browser.

The performance and experience of all these display protocols are influenced by the client endpoint device, everything in between, the desktop agent (for example the Horizon Agent in a virtual desktop) and the road back to the client. USB-redirect a mass storage device to your application: good-bye performance. Network filtering: poof, black screen. Bad WiFi coverage: good-bye session when moving from the office cubicle to the meeting room.

poof-its-gone

RDP

Who? What? Skip this one when you are serious about display protocols. The only reason it is in this list is for troubleshooting when every other method fails. And yes, the Horizon Agent installation uses RDP as a dependency by default.

Blast Extreme

Just Beat it, PCoIP. Not the official statement of VMware. VMware assures its customers that Blast Extreme is not a replacement but an additional display protocol. But yeah… sure…

With Horizon 7.1 VMware introduced BEAT to the Blast Extreme protocol. BEAT stands for Blast Extreme Adaptive Transport: a UDP-based adaptive transport that is part of the Blast Extreme protocol. BEAT is designed to ensure the user experience stays crisp across network conditions of varying quality. You know them, the ones with low bandwidth, high latency, high packet loss, jitter and so on. Great news for mobile and remote workers. And for spaghetti-incident local networks…

Blast uses standardized encoding schemes, by default H.264 for graphical encoding and Opus as the audio codec. If it can’t do H.264 it will fall back to JPG/PNG, so always use H.264 and check your environment for conditions that might cause a fallback. JPG/PNG is more a codec for static graphics, or at least nothing larger than an animated GIF. H.264, the other way around, is primarily a video codec, but it is also very good at encoding static images and will compress them better than JPG/PNG. Plus 90% of client devices are already equipped with the capability to decode H.264. Blast Extreme is also network-friendlier: it uses TCP by default, which is easier to configure and performs better under congestion and packet drops. And it is efficient with client resources, so that, for example, mobile device batteries are not drained by the device burning a lot of power feeding those resources.
Default protocol Blast Extreme selected.
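If you want to verify or steer the encoder choice on the desktop side, the Blast configuration registry keys on the agent are the place to look. A minimal sketch, with the caveat that the value name here is illustrative; the authoritative list is in the “VMware Blast Policy Settings” documentation referenced further below:

REM Hedged example: verify key and value names against the VMware Blast
REM Policy Settings documentation before using them.
reg query "HKLM\SOFTWARE\VMware, Inc.\VMware Blast\Config"
reg add "HKLM\SOFTWARE\VMware, Inc.\VMware Blast\Config" /v EncoderMaxFPS /t REG_SZ /d "30" /f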

PCoIP

PC-over-IP or PCoIP is a display protocol developed by Teradici. PCoIP is available in hardware, like Zero Clients, and in software. VMware and Amazon are licensed to use the PCoIP protocol, in VMware Horizon and Amazon WorkSpaces respectively. For VMware Horizon, PCoIP is an option with the Horizon Client or with PCoIP-optimized Zero Clients.
PCoIP is mainly a UDP-based protocol; it does use TCP, but only in the initial phase (TCP/UDP 4172). PCoIP is host-rendered and multi-codec, and can dynamically adapt itself to the available bandwidth. In low-bandwidth environments it utilizes a lossy compression technique where a highly compressed image is quickly delivered, followed by additional data to refine that image. This process is termed “build to perceptually lossless”. The default protocol behaviour is to build to lossless when minimal network congestion is expected. Lossy compression can also be explicitly disabled, as might be required for use cases where image quality is more important than bandwidth, for example medical imaging.
Images rendered on the server are captured as pixels, compressed and encoded, and then sent to the client, where decryption and decompression happen. Depending on the display content, different codecs are used to encode the pixels, since techniques that compress video images effectively can differ from those more effective for text.

 

HTML

Blast Extreme without the Horizon Client dependency. The client is an HTML5-compatible browser. HTML Access needs to be installed and enabled on the data center side.
HTML Access uses the Blast Extreme display protocol with the JPG/PNG codec. HTML Access does not have feature parity with the Horizon Client, which is why I am listing it as a separate display protocol option. As not all features can be used it is not the best fit for most production environments, but it will be sufficient for plenty of remote or external use cases.

Protocol Selection

Depending on how the pool is configured in Horizon, the end user either has the option to change the display protocol from the Horizon Client, or the protocol is set on the pool with the setting that a user cannot change the protocol. The latter has to be selected when using GPU, but it depends a bit on the workforce and use case whether you would like to leave all the options available to the user.

horizon-client-protocol

Display Protocol Optimizations

Unlike what some might think, display protocol optimization will benefit user experience in all situations. Either from an end user point of view, or from IT having some control over what can and will be sent over the network; network optimizations in the form of QoS, for example. PCoIP and Blast Extreme can also be optimized via policy. You can add the policy items to your template, use Smart Policies and User Environment Manager (highly recommended) to apply them on specific conditions, or use GPOs. IMHO the order to work from is UEM first, then template or GPO.

uem-smart-policy-example

For both protocols you can configure the image quality level and frame rate used during periods of network congestion. This works well for static screen content that does not need to be updated or in situations where only a portion of the display needs to be refreshed.

With regard to the amount of bandwidth a session eats up, you can configure the maximum bandwidth, in kilobits per second. Try to match these settings to the type of network connection, such as an interconnect or an Internet connection, available in your environment. For example, a higher frame rate gives fluent motion but uses more network bandwidth; a lower frame rate is less fluent but costs less network bandwidth. Keep in mind that the network bandwidth includes all the imaging, audio, virtual channel, USB, and PCoIP or Blast control traffic.

You can also configure a lower limit for the bandwidth that is always reserved for the session. With this option set, a user does not have to wait for bandwidth to become available.
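To make that less abstract, here is a minimal sketch of such a configuration expressed as PCoIP session variables in the registry (the PCoIP GPO template writes the same values). The numbers are purely illustrative; verify the variable names against the PCoIP settings documentation before relying on them:

REM Hedged example, illustrative values: an ~8 Mbps session cap,
REM a 1 Mbps reserved floor and a 24 fps frame rate limit.
reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin" /v pcoip.device_bandwidth_limit /t REG_DWORD /d 8000 /f
reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin" /v pcoip.device_bandwidth_floor /t REG_DWORD /d 1000 /f
reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin" /v pcoip.maximum_frame_rate /t REG_DWORD /d 24 /f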

For more information, see the “PCoIP General Settings” and the “VMware Blast Policy Settings” sections in Setting Up Desktop and Application Pools in View on documentation center (https://pubs.vmware.com/horizon-7-view/index.jsp#com.vmware.horizon-view.desktops.doc/GUID-34EA8D54-2E41-4B71-8B7D-F7A03613FB5A.html).

If you are changing these values, do it one setting at a time. Check what the result of your change is and whether it fits your end users’ needs. Yes, again, use real users. Make a note of the setting and the result, and move on to the next. Some values have to be revisited a few times to find the sweet spot that works best. Most values are applied when disconnecting and reconnecting to the session in which you are changing them.

Another optimization is optimizing the virtual desktops themselves, so that less is transferred and resources can be dedicated to encoding instead of, for example, defragmenting non-persistent desktops during working hours. VMware OS Optimization Tool (OSOT) Fling to the rescue, get it here.

Monitoring of the display protocols is essential. Use vROPS for Horizon to get insights into your display protocol performance. Blast Extreme and PCoIP are included in vROPS. The only downside is that these session details are only available while the session is active. There is no history or trending for session information.

Graphic Acceleration

There are other options to help the display protocols on the server side, by offloading some of the graphics rendering and encoding to specialized components. Software acceleration uses a lot of vCPU resources and just doesn’t cut it for playing 1080p full-screen videos. Not even 720p full screen, for that matter. A higher processor clock speed helps graphical applications a lot, but at the cost that those processor types have a lower core count. A lower core count, with a low overcommitment (physical-to-virtual) ratio, lowers the number of desktops on your desktop hosts. Specialized engineering, medical or map-layering software requires graphics capabilities that are not offered by software acceleration, or requires hardware acceleration as a de facto standard. Here we need offloading to specialized hardware for VDI and/or published applications and desktops. NVIDIA, for example.

gpu-oprah-meme

What will those applications be using? How much frame buffer? Will the engineers be using these applications most of the time, or just for a few moments, afterwards doing office work to write their reports? For this NVIDIA supports all kinds of GPU profiles. Need more screens and frame buffer? Choose a profile for that use case. A board can support multiple profiles if it has multiple GPU cores, but per core only one type of profile can be used, multiple times, as long as you are not out of (frame buffer) memory yet. How to find the right profile for your workforce? Assessment and PoC testing. GPU monitoring can be a little hard, as not all monitoring applications have the metrics up there.

And don’t forget that some applications need to be explicitly set to use hardware acceleration before the GPU is used, and that other applications don’t support hardware acceleration, or run worse with it because their main resource request is CPU (Apex maybe).

Engineers only? What about Office Workers?

Windows 10, Office 2016, browsers, and streaming video are used all over the offices. These applications can benefit from graphics acceleration. The number of applications that support and use hardware graphics acceleration has doubled over the past years. That’s why you see the hardware vendors changing their focus as well. NVIDIA’s M10 is targeted at consolidation while its brother the M60 is targeted at performance, though both reach higher consolidation ratios than the older K generation. But they cost a little bit more.

vGPU with one of the 0B/1B profiles, and there is a vGPU for everyone. The Q profiles can be saved for engineering. Set the profiles on the VMs and configure usage on the desktop pools.
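For reference, on the VM side a vGPU profile ends up as a shared PCI device in the VMX file. A sketch of what that can look like for a 1B profile on an M10 board; the profile string is an assumption based on NVIDIA’s naming, and normally you would set this through Edit Settings rather than by hand:

pciPassthru0.present = "TRUE"
pciPassthru0.vgpu = "grid_m10-1b"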

And what can possibly go wrong?

Fast Provisioning – vGPU for instant clones

Yeah. Smashing graphics and deploying those desktops like crazy… me likes! The first iteration of instant clones did not support any GPU hardware acceleration. With the latest Horizon release instant clones can be used with GPU. Awesomesauce.

– Enjoy looking at the stars!

Sources: vmware.com, wikipedia.org, teradici.com, nvidia.com

EUC Toolbox: Helpful tool Desktop Info

As somebody who works with all different kinds of systems from, preferably, one client device: at first glance all those connected desktops look a bit the same. I want to a) see on what specific template I am doing the magic, b) directly see what that system is doing, and c) not break the wrong component. And trust me, the latter will happen sooner rather than later to us all.

dammit-jim

Don’t like having to open even more windows, or searching for metrics in some monitoring application when it does not make sense at that time? Want to see some background information on what the system you are using is doing, right next to the look and feel of the desktop itself? Or keep an eye on the workload of your synthetic load testing? See what, for example, the CPU of your Windows 7 VDI does at the moment an assigned AppStack is direct-attached? And want to easily keep test and production apart in all those clients you are running from your device?

Desktop Info can help you there.

Desktop Info you say?

Desktop Info displays system information on your desktop, in a similar way to for example BGInfo. But unlike BGInfo, the application stays resident in memory and continually updates the display in real time with the information that interests you. It looks like a wallpaper. And it has a very small footprint of its own. Fits perfectly for quick identification of test desktop templates with some realtime information. Or for keeping production infrastructure servers apart, or…

And remember, it’s for information. Desktop Info does not replace your monitoring toolset; it gives the user information on the desktop. So it’s not just a clever name…

How does it work?

Easy, just download, extract and configure how you want Desktop Info to show you the… well… info. For example, put it in your desktop template for a test with the latest application release.

It can be downloaded at http://www.glenn.delahoy.com/software/files/DesktopInfo151.zip. There is no configuration program for Desktop Info. Options are set by editing the ini file in a text editor such as Notepad or whatever you have lying around. The ini file included in the downloaded zip shows all the available options you can set. Think about the layout, top/bottom placement, colors, items to monitor and WMI counters for the specific stuff. Using NVIDIA WMI counters here to see what the GPU is doing would be an excellent option. Just don’t overdo it.

In the readme.txt that is also included in the zip there is some more explanation and there are examples. Keep that one close by.
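To give an idea of the shape of it, here is a minimal sketch; the directive names below are illustrative only, so check the readme.txt for the real syntax and the full option list:

; illustrative sketch, not verbatim Desktop Info syntax
position=topright
font-color=ffffff
; items to show on the desktop
host=active:1
cpu=active:1,interval:1000
memory=active:1,interval:1000
; a WMI item could pull a custom counter, for example an NVIDIA GPU metric
wmi=active:1,interval:5000,query:<your WMI query here>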

capture-basicinformation

Test and save your configuration. Put Desktop Info in a place or tool so that it starts with the user session that needs this information, for example via a startup folder, a shortcut or as a response to an action.

Capturing data

You have the option to use Desktop Info with data logging for reference. Adding csv:filename to items will output the data to a CSV-formatted file. Just keep in mind that the output data is the display-formatted data.
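For example, logging the CPU item could look like the line below; the item options are illustrative, the csv:filename part is the documented bit:

cpu=active:1,interval:1000,csv:c:\temp\cpu-log.csv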

– Enjoy!

vROPS – survive the Kraken – Endpoint Operations Example

Guess who’s back, back again…

Next to doing End User Computing engagements, where user experience, performance and capacity management are also an integral part, I am occasionally involved in separate operations management engagements. And with VMware, vRealize Operations Manager, or vROPS, often shows its face. As some will have noticed, I have mentioned vROPS in articles on this blog before. This time I am going to dive into growing some more tentacles and getting some more insights besides our old vSphere friend.

So how does this getting more insights work again?

Great you asked! First get your vROPS up and running, configured, customized and showing the vSphere insights you wanted to see in the right places. Not there yet? Well, stop reading here and go directly to jail. Do not pass Go. Stop it and slowly turn away from the keyboard. As mentioned, it can be very helpful to have more insights before you create something that jumps back at you and eats you…

Still reading? Okay, I guess you’re ready, just curious, or thinking ahead. Like the vSphere adapter that is included in vROPS as standard, you can add solutions (or adapters, from management packs, or are they called extensions? still following?) to collect information from other sources. Most of the time the other data sources are the management components for those specific components or layers. For example, to get EUC information from Horizon into vROPS, use vROPS for Horizon and connect to a broker agent on the connection server (management layer) and an agent in the desktop or published application. And, something the name does not show at first glance, vROPS for Horizon can also bring in insights from XenApp and XenDesktop.

Anyhow, why would I need this, isn’t the vSphere adapter showing everything from my virtual infrastructure, you ask? Well no, not everything. The vSphere adapter creates visibility for the vSphere layer, that is the hypervisor and its management. That includes information about storage, networking and virtual machines, BUT only from the view of vSphere. Storage: yes, datastores, but not how your storage infrastructure or vSAN is behaving. Networking: yes, vSwitches, but not how your network devices or NSX are behaving. VMs: yes, virtual machines, but not what is happening in the guest. And so on. You can get all that, but you need solutions for it. And you need to size accordingly. And customized dashboards or reports that actually show something of interest. And oh yes, the correct vROPS edition license.

Getting in-guest insights via Endpoint Operations Management

In the old days, before vROPS 6.1, when you wanted in-guest metrics for applications, middleware and databases, you would get the Hyperic beast out. With the 6.1 release of vROPS, VMware merged some of the Hyperic solution into vROPS. This makes it a lot easier to get a view through the vROPS management interface all the way up, or down, to services, processes and the application layer. However, you still have to do a lot of customizing to show something interesting.

servicesdashboard

Fortunately the Solution Exchange shows more and more application services being integrated with vROPS via the Endpoint agent, for example:

  • Active Directory
  • Exchange
  • MSSQL Server
  • IIS
  • Apache Tomcat
  • PostgreSQL
  • vCenter

Visit the VMware Solution Exchange for the latest versions. Note that the vCenter Endpoint Operations solution shows up as a standard management pack, but vROPS needs an Advanced edition license to show the Endpoint integration; the documentation is not quite open about that.

Yeah yeah, enough, show an example please, and give me an in-guest metrics recipe

What ingredients do we need?

1 tablespoon of vROPS evaluation or a minimum of advanced edition
1 teaspoon of Endpoint Operations Management Solution
2 drops of Endpoint Agent deployed on a virtual machine
1 gram of user, with permission to register agents, configured on vROPS
100ml of Solution Exchange Application layer something specific (or your own build something specific)

Stir and let it rest for a while.

vROPS you probably have in a test setup, or you can deploy it as an OVA in a PoC. Just a little warning upfront if you are not in a test or PoC setup: solutions/management packs are added to vROPS easily, but removing them is not an easy task.

You will need a minimum of one node; a remote collector as the node the agents connect to is preferable. The Endpoint Operations Management solution is installed with vROPS and needs no specific configuration of the solution itself. The agents are downloaded from my.vmware.com. There are Linux and Windows platform versions, with or without JRE, as installation packages or just the data bundles. Use what you like or what fits your application provisioning. I go for the JRE bundles.

And yes, I hear you: another agent?!? Yes, unfortunately you currently still need the Endpoint agent. A big-ass agent/VMware Tools integration is not there yet; we need a little patience for that.

For the user, create an Endpoint Management role with permissions on Administration – Manage Agents and Environment – Inventory Trees. Add this role to the user you are planning to use. This user is configured on every agent.

If you have a firewall or other ACLs in between your endpoint agents and the vROPS remote collector or data node(s), open up HTTPS (443) from the endpoint agent range to the remote collector or data node(s).
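On a Linux-based filter that could be as simple as the sketch below; the subnet and collector address are examples, substitute your own ranges and firewall tooling:

# allow the endpoint agent subnet to reach the remote collector on HTTPS
iptables -A FORWARD -s 10.0.50.0/24 -d 10.0.60.10/32 -p tcp --dport 443 -j ACCEPT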

Manually Installing vRealize End Point Operations Agent

Manually installing and updating the vRealize Endpoint Operations agent is only needed for VMs that are not deployed via automation, where there is no application provisioning like SCCM, or that have an issue where a reinstall is needed. Yes, you can also use the MSI or RPM, but with the zip files you get a little insight (you see what I’m doing?) into how the agent works.

Note: preferably the agent is not installed in a template. When there is a need to install the EP Ops agent in a system that will be cloned, do not start EP Ops prior to cloning, or remove the EP Ops token and the data/ directory first. Once started, a client token is created and all clones will show up as the same object in vROPS.

Windows 64-bit Agent

You will need an installation user with permissions to put files, change owner/permissions on the server, install a service and start the service.

Copy the following files from a central file repository:

  • Copy and extract the softwarepackages/vRealize-Endpoint-Operations-Management-Agent-x86-64-win-.zip. Place the files in for example D:\Program Files\epopsagent
  • Edit the agent.properties file in the conf/ directory and put in the following as a minimum:
    • setup.serverIP=data node or LB VIP to connect to
    • setup.serverLogin=User with role to register agent on vROPS
    • setup.serverPword=Password
    • setup.serverCertificateThumbprint=SSL Certificate thumbprint of the node to connect to (the one you entered above)

Note on the password: this password can be added in plaintext. When the agent is installed and started for the first time, the password is encrypted. The key is stored in the agent.scu file in the conf/ directory. You can distribute the agent.properties and the .scu file from a central location and copy them into the conf/ directory. (Linux uses a different .scu file, but the agent.properties can be the same.)
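Put together, a minimal agent.properties could look like this; all values are hypothetical:

# minimal example agent.properties - hypothetical values
setup.serverIP=vrops-collector.lab.local
setup.serverLogin=svc-epops
setup.serverPword=VMware1!
setup.serverCertificateThumbprint=A1B2C3D4E5F6...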

  • open Command Prompt
  • go to the bin directory
  • run epops-agent.bat install
  • run epops-agent.bat start

ep-agentbat

Linux Agent

For the Linux agent use the same flow as the Windows agent, with just a few differences (a combined shell sketch follows this list):

  • Copy and extract the tarball to the extract location, for example /opt/vmware/epops-agent
  • Copy the files to conf/
  • Go to bin/
  • ep-agent.sh start (no need for install)
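In shell form the whole flow is something like this sketch; paths and the tarball name are examples, adjust them to your own download and install locations:

# example locations - adjust to your environment
mkdir -p /opt/vmware/epops-agent
tar -xzf vRealize-Endpoint-Operations-Management-Agent-x86-64-linux-*.tar.gz \
    -C /opt/vmware/epops-agent --strip-components=1
# reuse the centrally prepared configuration (agent.properties and Linux .scu)
cp /tmp/agent.properties /opt/vmware/epops-agent/conf/
cd /opt/vmware/epops-agent/bin
./ep-agent.sh start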

Monitoring specific Windows Service or Linux Process

The current configuration of the agent does not include autodiscovery of Windows services or Linux processes.

The reason is that monitoring all services is simply not an option from a monitoring standpoint. It is more useful to monitor specific groups of Windows services or processes that actually contribute to, or have a direct relation with, a hosted service that needs to be monitored.

monitor-windows-service

Follow these steps to monitor a specific Windows service or Linux multi-process:

  • Go to Environment – Operating Systems – Operating System World – Windows / Linux
  • Select VM hostname
  • Actions – Monitor OS Object – Monitor Windows Service
  • Fill in the details; the service_name must match the Windows service name.

service-details

Note: for autodiscovery of services, when the agent.properties value autodiscovery is true, services are discovered by their Windows service name. As those names don’t contain a server name, all services that are the same get the same name, although each will sit under a different node in the inventory hierarchy. In the services view all services are shown without the node hierarchy: monitoring the Windows Time service on three hosts, for example, will show a Windows Time service three times in this view. You can change the service display name before services are discovered so that the server name is included. Please see the Microsoft documentation on changing service names.
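A hedged one-liner for such a rename, run in an elevated prompt before the agent discovers the service; sc config is the standard Windows way, the display name format is just my own convention:

sc config W32Time displayname= "Windows Time (HOST01)"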

Adding a monitoring Solution

Installing a solution that monitors a service via the Endpoint agent will show you a combination of nice metric additions. Or at least some additional pointers on how to get, and in some cases display, application insights that you can use on your own.

All these packs can be downloaded from the Solution Exchange and are *.pak files. They are installed via vROPS Administration – Solutions – Add Solution; follow the details there.

Be sure you download the ‘for Operations’ packs; there are also ‘for Hyperic’ versions still around. The latter you don’t need.

– Happy fishing

Sources: pubs.vmware.com, blogs.vmware.com, solutionexchange.vmware.com

Digital Workspace Transformation: information security

Yes…. it has been a while since I posted on this blog, but I’m still alive ;-)

For a 2016 starter (what?!? is it June already), I want to ramble on about information security in the digital workspace. With a growing number of digital workspace transformations going on, information security is more important than ever. With the growing variety of client endpoints and methods of access in personal and corporate environments, users are becoming increasingly independent of physical company locations. That makes it interesting to centrally manage storage of data, passwords, access policies, application settings and network access (just examples, not the complete list). For any-place, any-device, any-information and any-application environments for your users (or do we want any user in there), it is not just a couple of clicks on this super-duper secure solution and we’re done.

encrypt-image300x225
(image source blogs.vmware.com)

Storing data on, for example, virtual desktop servers (hello VMware Horizon!) in the data center is (hopefully) a bit more secure than storing it locally on the user’s endpoint. At the same time, allowing users to access virtual desktops remotely puts your network at a higher risk than local-only access. But it’s not all virtual desktops. We have mobile users who would like to have their presentations or applications directly on the tablet or handheld. I, for instance, don’t want to have to open a whole virtual desktop for just one application. Ever tried a virtual desktop on an iPhone? It is technically possible, yes, but it works crappy. Erm, forgot my MacBook HDMI USB-C converter for this presentation? Well, I’ll send the file to your Gmail or Dropbox for access with the native mobile apps in the conference room. And the information is gone from the company sphere… (a hypothetical situation, of course).

Data Leak

Great ideas, all those ways to get company information in and out. But but but… these also pose challenges that a lot of companies have not started thinking about. That sounds a bit foolish, as information is probably the biggest asset of a company. But unfortunately it’s a fact (or maybe it’s just the companies I visit). Sure, these companies have IT departments or IT vendors who think a bit about security. And in effect they mostly make their users’ lives miserable with all sorts of technical barriers installed in the infrastructure. Barriers that the users, business and IT (!) users alike, will find all sorts of ways to get around. Why? First of all to increase their productivity, while effectively decreasing security, and secondly because they are not informed about the important why. And then those barriers are just a nuisance.

Break down the wall

IT’s Business

I have covered this earlier in my post (https://pascalswereld.nl/2015/03/31/design-for-failure-but-what-about-the-failure-in-designs-in-the-big-bad-world). The business needs to have full knowledge of its required processes and information flows, the ones that support the services behind the business strategy, and of the people that are part of the business and operate those services. It needs to decide what can be done with this information and in what ways: is it allowed for certain users to access the information outside of the data center, and so on. Compliance with, for example, certain local privacy laws. Governance with policies and choices, and risk management: do we do this part or not, how do we mitigate some risk if we take approach Y, and what are the consequences if we do (or don’t)?

Commitment from the business and people in the business is of utmost importance for information security. Start explaining, start educating and start listening.
If scratch is the starting point, start the writing at a global level first. What does the business mean by working from everywhere and every place, what is this digital workspace, and so on. What are the risks, how do we approach IAM, what do we have for data loss prevention (DLP), is IT allowed to inspect SSL traffic (decrypt, inspect and encrypt), etc. etc.
Not too detailed at first, that is not necessary, as it can take a long time to get to a version 1.0; we can work on it. And to be fair, information security and the digital workspace are, for a fact, continuously evolving and moving. A continual improvement of these processes must be in place. Be sure to check with legal that there are no loopholes in what has been written in the first iteration.
Then map to logical components (think from the information: why is it there, where does it come from and where does it go, and think for the apps and the users). When you have defined the logical components, IT can add the physical components (insert the providers, vendors, building blocks). Evaluate together: what works, what doesn’t, what’s needed and what is not. And rinse and repeat…

Furthermore, a target of a 100% safe environment, all the time, will just not cut it. Mission impossible. Think about and define how to react to information leaks and how to minimize the surface of a compromise.

Design Considerations

With the above we should have a good starting point for the business requirements phase of a digital workspace design and deployment. And there will also be information from IT flowing back to the business for continual improvement.

Within the design of an EUC environment we have several software components where we can take action to increase (or decrease, but I will leave that part out ;-)) security in the layers of the digital workspace environment. And yes, when software-defined is not an option, there is always hardware…
From the previous phase we also have some idea which technical choices can be made to conform to the business strategy and policies.

If we think of the VMware portfolio and the technical software layers where we need to think about security, we can go from AirWatch/Workspace ONE, Access Point, Identity Manager, Security Server, Horizon and App Volumes to User Environment Manager. And, and… two-factor authentication, one-time passwords (OTP), Microsoft Security Compliance Manager (SCM) for the Windows-based components, anti-virus and anti-malware, network segmentation and access policies with SDDC NSX for Horizon. And what about business continuity and disaster recovery plans, and SRM and vDP?
Enterprise management with vROPS, and Log Insight integration with, for example, a SIEM. vRealize for automating and orchestrating, to mitigate workarounds or faults in manual steps. And so on and so on. We have all sorts of layers in which to implement, or help implement, security and access policies. And how will all of these interact? A lot to think about. (It could be that a new blog post series subject is born…)

But the justification should start at the business… Start explaining and start acting! This is probably 80% of the success rate of implementing information security. The technical components can be made to fit, but… after the strategy, policies and information architecture are somewhat clear…

And after the people in the business support the need for information security in the workspace. (Am I repeating myself a bit? ;-)

Ideas, suggestions, conversation, opinions. Love to hear them.

vROPS: Beware of a Whole Lotta Metrics creating the Spaghetti Incident

When doing consultancy at organisations I often find an initial vRealize Operations (vROPS, or vCOPS) deployment left alone, because the responsible persons in the IT operations department are overwhelmed by the information you (can) receive from vROPS (Manager and optionally other suite components). Mostly this is because of the lack of time invested in getting to know the product, as admins are busy reacting to operational actions/issues and operational processes. The downside is that perfectly useful pieces of information, recommendations and actions are left alone, and the virtual infrastructure (and the IT admins) suffer from neglect. Which in turn only increases the stated problem, and we all go round and round the same roundabout without ever taking the successful vROPS exit. This is what we don’t want.

But let’s start with why we want vROPS in the first place:

  • Continual visibility gained across virtual and physical infrastructure.
  • Pro-actively identify and solve emerging issues with predictive analytics, smart alerts and remediation.
  • Reclaim unused resources and capacity, making the other VMs happier while saving on unnecessary investments (assets and people).
  • Unified IT Management, complete visibility in one place, across applications, cloud, storage and network devices; with an open and extensible Operations Manager architecture.

With the latter, that extensible Operations Manager architecture, start moderately! Just adding product after product and management pack after management pack will get you into a spaghetti incident. And lots of metrics, alerts and headaches…

To keep the information from overwhelming you, here are a few pointers to follow when deploying vROPS in your environment:

  • Determine beforehand what you want to learn and like to see from your environment. What policies do you currently have for your virtual infrastructure and their application workloads, are those sufficient, and is the information presented out-of-the-box close to these policies.
  • Get to know the out-of-the-box insights and policies that vROPS offers. Stop here and take a breather before wanting to customize every aspect.
  • After implementation, let vROPS gather metric data for a minimum of one week, but preferably longer, before trusting the analytics.
  • Start with virtual infrastructure components and then move to next levels.
  • Determine what information you would like next to vSphere and in what particular phase you want to introduce this information. Again; go easy here and take your time. Introduce one adapter (or other vROPS components) at a time. Familiarize yourself with the specific insights offered by this specific adapter (for example what is collected) and after a proven success move on to a next. 
  • Familiarize yourself with the basic vROPS architecture and how the data flows between collectors and nodes. A model I created recently of the data flow in a basic vROPS architecture can be helpful in understanding this:

vROPS 6 - Data Flow

  • When designing a vROPS architecture with multiple physical sites, determine where your users connect to and where the data needs to go. When trying to achieve a single pane of glass, use a distributed architecture with remote collectors for the connected physical locations. When using a vROPS instance per physical site, the users connect to that local vROPS UI with the data of that site only. There is no single pane and metric data is not transferred between instances, ergo complexity your IT operations team needs to handle accordingly.

– vROPS is to be enjoyed, as it gives you very valuable information and lets IT ops move from reactive mode to proactive mode. It is not something to get headaches from!

Sources: vmware.com

 

vCenter Server Appliance 6.0 in VMware Workstation

For demos, presentations, breaking environments or just killing time, I have a portable test lab on my notebook. Yes, I know there are also options for permanent labs, hosted labs and Hands-on Labs for these same purposes. Great places for sure, but that is not really what I wanted to discuss here.

As I am break… ehhh, rebuilding my lab to vSphere 6.0, I wanted to install VCSA 6.0 in VMware Workstation. Nice, import a My VMware downloaded VCSA-versionsomething.ova (after registering e-mail address number ####### over there) and we are done! … Well, not quite.
First, the vCenter download contains the OVA, but it is a little bit hidden. The guided installer will not help you here. You will need to mount or extract the downloaded ISO and look for vmware-vcsa in the vcsa/ folder.

VCSA Location
Copy the vmware-vcsa file to a writable location (if you just mounted the ISO) and rename vmware-vcsa to vmware-vcsa.ova. Now we can import the OVA into VMware Workstation. When the import finishes, do not start the VM yet. Certain values that are normally inserted via the vSphere Client or ovftool have to be appended to the VMX file of the imported VCSA. Open the vmx file in the location where you let Workstation import the VM. Append the following lines:

guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.addr = "10.0.0.11"
guestinfo.cis.appliance.net.prefix = "8"
guestinfo.cis.appliance.net.gateway = "10.0.0.1"
guestinfo.cis.appliance.net.dns.servers = "10.0.0.1"
guestinfo.cis.vmdir.password = "vmware-notsecurepassexample"
guestinfo.cis.appliance.root.passwd = "vmware-notsecurepassexample"

Note: Change the net and vmdir/appliance.password options to the appropriate values for your environment.

If these are not appended, starting the VCSA shows the error ‘vmdir.password not set, aborting installation’ on the console (next to ‘root password not set’), and the network connection will be dropped even if you configure it on the VCSA console (via F2).

Save the VMX file.

And now it is time to let it rip. Start up your engines. And be patient until… lo and behold:

VCSA Running in workstation

And to check whether the networking is accepting connections from a server in the same network segment, open the VCSA URL in Chrome, for example. After accepting the self-signed certificate insecure site (run away!) message you will (hopefully) see:

VCSA in Workstation

Next we can log on to the Web Client (click and accept the insecure connection/certificate) via Administrator@VSPHERE.LOCAL and the password provided in the VMX (in the above example vmware-notsecurepassexample). As a bonus, you now know where to look when you forget your lab VCSA password ;-).

VCSA 6 Web Client

(And now I notice the vCenter Operations Manager icon in the Web Client Home screen. Why is this not updated like vRealize Orchestrator :-) )

-Enjoy!

 

 

Design for failure – but what about the failure in designs in the big bad world?

This post is a random thought post, not quite technical but in my opinion very important. The idea formed after some subjects and discussions at last week’s NL VMUG. This blog post’s main goal is to create a discussion, so why don’t you post a comment with your opinion … Here it goes…

Murphy, hardware failures and engineers tripping over cables in the data center: we tech gals and guys all know them and have probably experienced them. Disaster happens every day. But what about a state-of-the-art application that ticks all the boxes of the functional and technical requirements, while users are not able to use it because they lack knowledge in this field, or because they are clueless about why the business created this thingy (why and how this application or data is supposed to help the information flow of business processes)? Failure is a constant and needs to be handled accordingly, and from all angles.

Techies are used to looking at the environment from the bottom up. We design complete infrastructures with failure in mind and have the technology and knowledge to perfectly execute disaster avoidance or disaster recovery (forget the theoretical RTO/RPO of 0 here). We can do this at a lower cost (CAPEX) than ever before, and there are more benefits (OPEX and minimized downtime for business processes) than before. But subsequently, we should ask ourselves this: what about failing applications, or data that is generated but never reaches the required business processes (the people that are operating or using these processes)?
Designs need to tackle this problem, using a design based on the complete business view and connecting strategy, technical possibilities and users!

And how will we do this then?

Well, first of all, the business needs to have full knowledge of its required processes and information flows, the ones that support or process data in and out for the services supporting the business strategy. Very important. And to be honest, only a few companies have figured this part out. Most experience difficulties. And they give up. Commitment from the business, and from the people in the business, is of utmost importance. Be a strategic partner (to the management). Start by asking why certain choices are made, and explain the why a little more often than just the how, what and when!

Describe why and how information and data is collected, organized and distributed (in a fail-safe and secure method) and what information systems are used. Describe the applications (and their ROI, services, processes and buses), how the information is presented and how it flows back into the business (via the people or automated systems). How does your solution let the business grow and flourish? Keep clear of too much technical detail: present your story in a way the manager understands the added value and knows which team members (future users) to delegate to project meetings.

Next up IT, or ICT as we call it here in the Netherlands: Information and Communication Technology. I really like the Communication part for this post; businesses must do that a little more often. Start looking at the business from different points of view, and make sure you understand the functional parts and what is required to operate them. To prevent people from working on their own without a common goal or reason, internal communication is essential. Know the ins and outs, describe why and how the desired result is achieved. Connect the different business layers. For this, a great part of business IT departments needs to refocus its 1984 vision to the now and the future. IT is not about infrastructure alone; it is a working part within the business, a facilitator, a placeholder (for lack of other words in my current vocabulary). IT needs to be about aligning business services with applications and data, the tools and services that support and provide for the business. That is why IT is there in the first place, not the business that is (connected or not) there for IT. IT’s business. Start listening, start writing first on a global level (what does the business mean by working from everywhere and every place), then map possibilities to logical components (think from the information: why is it there, where does it come from and where does it go, and think for the apps and the users) and then, when you have defined the logical components, you can add the physical components (insert the providers, vendors, hardware building blocks).

Sounds familiar? There are frameworks out there to use. Use your Google-fu: Enterprise Architecture. Is this for enterprise-size organizations only? No, any size of company must know the why, and why, and why. And do something about it. A simplified version will work for SMB-size companies. Below is an example of a simplified model and the layers of attention this architectural framework brings to your organization.

Design for Failure

And…in addition to this, start using the following as a basis to include in your designs:

The best way to avoid failure is to fail constantly

Not my own, but from Netflix. This could not be closer to the truth. No testing of your disaster recovery plan in half-year or yearly iterations: do it constantly, and see whether your environment and business are up to the task when applications go down. Sure, there will be effects, for example services not running at 100% warp speed, but your users still being able to do things with the services is better than nothing at all. And knowing that your service operates with a failure is the important part here. Now you can do something about not reaching full speed, for example scale out so a service failure is allowed but not at a degraded service speed. Or know which of your services can actually go down without influencing business services for a certain time frame. This is valuable feedback that needs to go back to the business. Is going down acceptable for the business, or should we try to handle this part so it does not go down at all? Just don’t do this at the infrastructure level only; include the data, application and information layers as well.
Big words here: trust and commitment. Trust the environment in place and test whether it succeeds in providing the services needed even when hell freezes over (or when some other unexpected thing happens). Trust that your environment can handle failure. Trust that the people can do something with, or about, the failures.
Commitment of the organization not to give up when hitting a brick wall over and over, but to keep going until you are all satisfied. And trust that your people can fail too. Let them be familiar with the procedures and let a broader range of people handle the procedures (not just the current user names mapped to the processes, but with roles defined and mapped to services, so multiple people can operate and analyze the information). Just like with technical testing: your people are not operating 24x7x365, they like to go on leave and sometimes they tend to get ill.

Back to Netflix. For generating failure, Netflix uses Chaos Monkey. With that name another Monkey comes to mind, Monkey Lives: http://www.folklore.org/StoryView.py?project=Macintosh&story=Monkey_Lives.txt. Not sure where the idea came from, but such a service and name cannot be a coincidence (if you believe coincidence exists in the first place). But that is not what this paragraph is about.
The Chaos Monkey’s job is to automatically and randomly kill instances and services within the Netflix infrastructure architecture. When working with Chaos Monkey you will quickly learn that everything happens for a reason. And you will have to do something about it. Pretty awesome. The engineers even shared Chaos Monkey on GitHub: https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey. It must not stop at the battle plan of randomly killing services; fill up the environment with random events where services get into some not-okay state (unlike a dead service) and see how the environment reacts to this.

 

VMware Utility Belt must have tools – RVTools 3.7 released

In March 2015, RVTools version 3.7 was released.

This, in my opinion, is the tool each VMware consultant must have in his VMware utility belt, together with the other standard tools. At this time RVTools is still free, so budget is no constraint on using this tool. More importantly, it’s lightweight, very simple to use and shows much-wanted information in an ordered overview, or allows exporting the information in Excel format for offline analysis.

Before using this tool, it is important to understand that it makes a point-in-time snapshot of the infrastructure configuration items in place. In short: what is configured and what is the current operational state. No more, no less. The information can then be used in, for example, operational health checks, or as the AS-IS starting point in the analysis/inventory phase of projects (consolidation or refresh projects). See more use cases further below, and I am sure there are some more examples out there.

No trending or what-ifs, for example; that is something you will have to do yourself, or use other solutions/tools available for the software-defined data center. VMware has some other excellent tools for SDDC management and insights into your virtual environment (for example vRealize Operations and Infrastructure Navigator). But that is a completely different story.

What is RVTools?

RVTools is a Windows .NET application that uses the VI SDK (updated to 5.5 in this release) to display information about your VMware infrastructure.
An inventory connection can be made to vCenter or a single host to get as-is information about hosts, VMs, VM Tools, datastores, clusters, networking, CPU, health and more. This information is displayed in a tab-page view; each tab represents a specific type of information, for example hosts or datastores.

RVTools can currently interact with Virtual Center 2.5, ESX Server 3.5, ESX Server 3i, Virtual Center 4.x, ESX(i) Server 4.x, Virtual Center 5.0, Virtual Center Appliance, ESXi Server 5.0, Virtual Center 5.1, ESXi Server 5.1, Virtual Center 5.5 and ESXi Server 5.5 (no official 6.0 support in this version).

RVTools can export the inventory to Excel and CSV for further analysis. The same tabs as in the GUI will be visible in Excel.


There is also a command line option to have, for example, an inventory schedule and have the results sent via e-mail to an administrative address.

Use Cases?

– On-site assessment / analysis: get a simple and fast overview of a VMware infrastructure. The presented information is easy to browse through, where in the vSphere Web Client you would be clicking through screens. When there is something interesting in the presented data, you can go deeper with the standard vSphere and ESXi tools. Perfect for fast analysis and health checks.

– Off-site assessment / analysis: get the information and save the Excel or CSV dump for a fast overview and later analysis. You will have the complete dump (a point-in-time reference, that is) which you can easily browse through when writing up an analysis/health check report.

– Documentation: the dumped information can be used on- or offline to write up documentation. Excel tabs are easily copied into the documentation.

– (Administrator) reporting: via the command line tool, get a daily overview of your VMware infrastructure. Compare your status of today with the point-in-time overview of the day before or last week (depending on your schedule and/or retention). Use this information in the daily tasks of adding/changing documentation, analysis, reporting and such.
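A sketch of what such a scheduled export could look like; the flags are from memory of the documentation of this era, so check the RVTools documentation included with the download before relying on them, and the server, account and paths are examples:

REM example daily export of all tabs to one xls file
RVTools.exe -s vcenter.lab.local -u lab\svc-rvtools -p password -c ExportAll2xls -d C:\Reports -f rvtools-daily.xls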

Release 3.7 Notes

For version 3.7 the following has been added:

  • VI SDK reference changed from 5.0 to 5.5
  • Extended the timeout value from 10 to 20 minutes for really big environments
  • New field VM Folder on vCPU, vMemory, vDisk, vPartition, vNetwork, vFloppy, vCD, vSnapshot and vTools tabpages
  • On vDisk tabpage new Storage IO Allocation Information
  • On vHost tabpage new fields: service tag (serial #) and OEM specific string
  • On vNic tabpage new field: Name of (distributed) virtual switch
  • On vMultipath tabpage added multipath info for path 5, 6, 7 and 8
  • On vHealth tabpage new health check: Multipath operational state
  • On vHealth tabpage new health check: Virtual machine consolidation needed check
  • On vInfo tabpage new fields: boot options, firmware and Scheduled Hardware Upgrade Info
  • On statusbar last refresh date time stamp
  • On vhealth tabpage: Search datastore errors are now visible as health messages
  • You can now export the csv files separately from the command line interface (just like the xls export)
  • You can now set an auto-refresh data interval in the preferences dialog box
  • All datetime columns are now formatted as yyyy/mm/dd hh:mm:ss
  • The export dir / filenames now have a formatted datetime stamp yyyy-mm-dd_hh:mm:ss
  • Bug fix: on dvPort tabpage not all networks are displayed
  • Overall improved debug information

Who?

RVTools is written by Rob de Veij, aka Robware. You can find Rob on Twitter (@rvtools) and via his website http://robware.net.
Big thanks to Rob for unleashing yet another version of this great tool!

As the tool is currently free, please donate if you find the application useful, to help and support Rob in further developing and maintaining RVTools.

Let’s get ready to cast your vote: vBlog 2015

Like in the years before, Eric Siebert of vSphere-Land.com has opened the annual vBlog voting for 2015 (http://vsphere-land.com/news/voting-now-open-for-the-2015-top-vmware-virtualization-blogs.html). This year Infinio is the sponsor and the top 50 are going to receive a special custom commemorative coin. All the blogs listed on the vLaunchpad are on the ballot for the general voting. The top vBlog voting contest ranks the most popular vBlogs based on the community (your) votes, and the outcome determines the ranking that is announced on the 19-03 live show (and published on the vLaunchpad website).

Pascalswereld.nl is included on the voting ballot, but please keep in mind there are a lot of better blogs out there. As Eric states: keep in mind quality, frequency, longevity and length of the blogs out there when voting.
And of course your personal preferences ;-)

Ready to participate?

You can place your vote at: http://www.surveygizmo.com/s3/2032977/TopvBlog2015.

Good luck to all the great bloggers out there!

Sources: http://vsphere-land.com, http://info.infinio.com/topvblog2015