Protecting vCenter services, what is around (comes around)

Depending on your environment, there is a need to protect vCenter or some of the services that make up the vCenter system. A big question to ask yourself: how much downtime can you afford according to your service levels, and which protection options do you have, or need to have, in place?

What will go down if you lose a vCenter component?

As said, this depends on your environment and on the components that connect to and from vCenter services. A “plain” server virtualization workload at one company is different from a VDI workload in a demanding organization; the latter probably needs its provisioning a little more urgently than the former. Want to deploy a vCOps vApp or a virtual desktop? Well, wait until your vCenter is back. Using solutions like VMware Data Protection requires an operational vCenter with a functioning vCenter Single Sign-On server to restore a virtual machine, so losing that part of your environment could seriously impact your recovery options. Need to manage or edit a VM with hardware version 10? How will you do that without the vSphere Web Client? You can’t. Have an HA or DRS cluster? Well, HA will still partially function and will react with restarts when needed, but adding hosts to the cluster requires vCenter, and DRS needs vCenter to function in manual or automatic mode. And these are just a few examples.
Important to keep in mind: running VMs will keep on running and HA will keep on HA’ing, so no need to panic there.

Let’s see which components make up vCenter; a little vCenter architecture to start with.

A “standard” vCenter is made up of vCenter SSO (Single Sign-On), the Lookup Service, the Inventory Service, the vSphere Web Client and the vCenter Server itself (with all of its services). Optional services are the Dump Collector, Syslog Collector and Auto Deploy (and optionally TFTP and PXE/DHCP services, but those can live on a separate system, so they are not included as parts of the model). vCenter is also often extended with Update Manager, vCOps and all sorts of plug-ins.

[Image: vCenter component architecture]

What are your standard protection options?

  • Do nothing
    Not advisable, but if you are sure, have a small environment (just a few hosts and VMs) and have insight into your environment (or use some scripting to dump your configuration; see the sketch after this list), you could do nothing. You lose part of your services and, in the worst case, will have to manually rebuild vCenter and your configuration. You will lose any trending information. Recovery time is typically measured in days and requires manual intervention.
  • Backup/restore or replication.
    Backup and restore should be an essential part of any availability solution, full stop. This provides a recovery method utilizing tape, disk, replication or snapshot technology, and also enables recovery when data corruption occurs (depending on the solution, that is). If data is corrupted on the primary VM, replication to the recovery VM can happen after that moment, so vCenter VM replication from the primary to the recovery site should be well monitored (and tested, with SRM plans for example). Preferably protect several layers: the application and the application data (for example databases, certificates, logs, dump locations and so on). Be sure to know your backup and recovery steps (look in the VMware KBs for backing up the vCenter Server Appliance services and the embedded vPostgres database), and document, practice and test them. Recovery time is typically measured in hours or days, and typically requires manual intervention.
  • MS SQL Log shipping – database only
    A simple and cost-effective solution. You can use log shipping to send transaction logs from one database (the primary database) to another (the secondary database) on a constant basis. Continually backing up the transaction logs from a primary database server and then copying and restoring them to a secondary database server keeps the secondary database nearly synchronized (depending on your plan) with the primary database. The destination server acts as a cold standby or backup server. Your destination server can also act as the primary for other databases, so you get some sort of active-active instead of a cold standby. Beware of licensing in this case: a log shipping target only versus a serving database is a different license story! This has to be set up for every database, including your vCenter, Inventory, SSO and such. Recovery time depends on your plan, but can be minutes or hours. Failing over from primary to secondary requires manual intervention.
  • SQL mirror / clustering – database only
    Depending on your MSSQL license, these are more robust solutions than the previously mentioned SQL log shipping. They have a data replication mechanism in place and can automatically detect failures and perform their failovers without manual intervention. Mostly used with a witness outside the cluster/mirror pair to act as a tie-breaker and prevent split-brain scenarios in case of partial failures. Mirroring or clustering has to be set up for every database, including your vCenter, Inventory, SSO and such; clustering can also be done per instance with its included databases. Oracle has its own clustering, with Oracle RAC for example. Recovery time is typically measured in minutes. No intervention is needed to fail over.
  • Hypervisor HA.
    Hypervisor HA will restart your VM after a host failure or VMware Tools timeout. The time it takes to recover depends on the number of free slots, the priority of vCenter versus the other workloads and the number of VMs that need to restart. Depending on your environment this can take some time. Hypervisor HA will not protect against service failures, as it does not monitor any application components, and it will not protect against data corruption either. Hypervisor HA is to be used in conjunction with one or more other protection options, for example a vCenter system on HA with SQL databases on an MSSQL cluster. Recovery time is typically measured in minutes or hours, depending on your consolidation ratio and restart settings.
  • App Aware HA.
    If you have the correct edition and the application-aware components in place, App HA monitors the application and can restart it if it goes down. There is no app-aware HA specifically for vCenter yet, but you can protect parts of the stack with App HA, for example the MSSQL services. Recovery time is typically measured in minutes or hours.
  • FT
    That is currently a no-no. Why did I put it up here? Because it comes up as a question once in a while. FT creates virtual machine “pairs” that run in lockstep, essentially mirroring the execution state of a virtual machine. This only protects against host or VM failures; services that go down, or corruption in the application data, will be mirrored to the secondary VM.
    FT in vSphere 5.5 is still limited to 1 vCPU, and even with a small inventory vCenter needs a minimum of 2 vCPUs. The same goes for, for example, a database server; these also tend to have more vCPUs. Yes, this has been an issue for FT all along, and we know from the VMworld session demos that multi-vCPU FT is work in progress, but unfortunately it is not yet released. A similar technique is up next.
  • vCenter Server Heartbeat
    vCenter Server Heartbeat is a separately licensed vCenter Server plug-in that provides protection of your vCenter system (physical or virtual). Next to protecting against host failures, Heartbeat adds application-level monitoring and intelligence for all vCenter Server components. Heartbeat replicates changes to a cloned virtual machine, and the clone can take over when a failure event is triggered.
    The vCenter recovery can be accomplished by restarting the vCenter service, by restarting the entire application, or by a full failover of the vCenter system. Use it in conjunction with a data protection mechanism like SQL mirroring to protect against corruption. Recovery time is measured in minutes and requires no manual intervention.
  • Scale out / HA service pair
    Move some of your vCenter services to other components, or use multiple servers in the same role to provide highly available and load-balanced services. Not all of the vCenter services can be separated this way, but SSO, for example, can be. These highly available services are placed behind a third-party network load balancer (for example Apache HTTPD, the vCloud Networking and Security vShield Edge load balancer, or a load-balancing appliance like NetScaler).
    Move logs to a Log Insight server, move statistics to vCOps. Keep vCenter lean and mean.
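As an aside to the “do nothing” option above: the configuration dump it mentions can be as simple as a script against the vCenter API. Below is a minimal sketch using the open-source pyVmomi library (the vSphere API Python bindings); the hostname and credentials are placeholders, and a real dump would cover far more objects (folders, networks, datastores, permissions).

```python
import json
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.lab.local",           # placeholder
                  user="administrator@vsphere.local",  # placeholder
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    inventory = []
    for cluster in view.view:
        inventory.append({
            "cluster": cluster.name,
            "ha_enabled": cluster.configuration.dasConfig.enabled,
            "drs_enabled": cluster.configuration.drsConfig.enabled,
            "hosts": [h.name for h in cluster.host],
            # direct children of the root resource pool only; nested pools need recursion
            "vms": [vm.name for vm in cluster.resourcePool.vm],
        })
    view.Destroy()
    with open("vcenter-config-dump.json", "w") as fp:
        json.dump(inventory, fp, indent=2)
finally:
    Disconnect(si)
```

Run something like this on a schedule and keep the JSON with your documentation. It will not restore anything by itself, but it makes a manual rebuild a lot less of a guessing game.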

Conclusion

vCenter Server Heartbeat is a complete package for protecting your vCenter Server system, but it comes at an additional cost. More often you will already have some back-end services in place, like Oracle/MSSQL clustering and backup/restore or replication solutions, or other products with similar needs. A combination of protection mechanisms is the preferred way to utilize those in-place (or to-be-in-place) solutions while matching the required protection and the allowed recovery/downtime. But this is the main thing: know your environment, know how the components interact, know what is needed at which time, and know what will be (temporarily) unavailable when services are down. Protect against unavailability and corruption, and please test regularly to be sure all components are working as expected (even the manual procedures).

And yes, sure, there will be some other great options out there, like a script collection or a cold-standby solution et al. But hey, isn’t that what the comments section is for? Tell me yours. Share.

– Happy managing your environment!

vCenter 5.5 SSO after initial installation steps

As you probably know (or don’t), from vCenter version 5.1 onward Single Sign-On (SSO) has a password policy. This password policy is for the SSO component only, not for external identity sources (like AD DS). For version 5.1 the maximum lifetime of a password is 365 days by default. This means the password will expire after 365 days and the account will be locked. After installation we are used to either changing this to a setting appropriate for your organization’s security policy (smaller or greater than the default) or to a non-expiring (0) value, and adding an additional user/group source to the SSO administrators for access to SSO (like setting permissions on your vSphere infrastructure objects). Preferably, add a domain user/group to your SSO administrators. Yes, the domain also has a password policy, but you only have to worry about it at the domain level: expired SSO admin, log in to SSO, reset, and you are good to go using the SSO admin again. When using a password policy for the SSO admin, be sure to have a procedure in place that notifies you in time that the password will expire, and know how to change it. It happens often enough that the system is installed and this (or saving the password somewhere) is forgotten. It can also be an inconvenience that the vSphere Web Client (or any other task, alarm, whatever) won’t remind you when the password is about to expire. Have some procedure in place.
When using a password policy, be sure to set it at the correct level so your users won’t be post-it-pasting it to their monitors.
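To make that procedure concrete, here is a minimal sketch of such an expiry reminder in plain Python. The dates and the warning window are placeholder assumptions; hook something like this into whatever scheduling and alerting you already have.

```python
# Minimal expiry-reminder sketch: given the date the SSO admin password was
# last set and the policy's maximum lifetime, warn when the deadline nears.
from datetime import date, timedelta

last_set = date(2013, 11, 1)          # placeholder: when the password was last changed
max_lifetime = timedelta(days=90)     # vCenter 5.5 default policy
warn_window = timedelta(days=14)      # assumption: how far ahead to start nagging

expires = last_set + max_lifetime
days_left = (expires - date.today()).days
if days_left <= warn_window.days:
    print("SSO admin password expires in %d day(s), on %s" % (days_left, expires))
```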

Okay, but help me: what is SSO again?

SSO is an authentication and security broker between an identity source (like the local OS, LDAP or Active Directory) and the various vSphere solutions you access. Want to use vCenter? You will be authenticating to SSO. Want to use Operations? You will be authenticated via SSO, and so on. Below is a model taken from the VMware.com site giving a graphical representation of SSO and its role.

[Image: vCenter Single Sign-On architecture model, from VMware.com]

As you see, SSO is a critical component in the authentication/security of a vSphere infrastructure. Lose access to SSO, and you lose access to and functionality of a lot of components (when they are configured to use the expired account). It is a required component when installing a vSphere infrastructure and should be set up at step one.
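To make that tangible: any scripted session against vCenter runs through SSO as well. A minimal sketch with the pyVmomi Python bindings follows; the hostname and credentials are placeholders. If SSO is down, or the admin account has expired, this login is exactly the kind of thing that starts failing.

```python
# Minimal pyVmomi sketch: connecting to vCenter authenticates through SSO
# behind the scenes. Host and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

context = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",  # the SSO admin discussed above
                  pwd="secret", sslContext=context)
try:
    about = si.content.about
    print("Authenticated via SSO to %s (API %s)" % (about.fullName, about.apiVersion))
finally:
    Disconnect(si)
```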

What do we need to know in vCenter SSO 5.5?

There is still a password policy for administrator@vsphere.local (okay, that is also a change from admin@System-Domain), and it is set to expire after 90 days by default. Log in to the vSphere Web Client as administrator@vsphere.local (or another SSO administrator) and go to Administration – Single Sign-On – Configuration – Policies to take a look for yourself.

When trying to reset the maximum lifetime to 0 you will be presented with a nice, non-descriptive error message: “The error has no message.” Whaaaat?!?
Fortunately there is KB article 2053196 in the VMware Knowledge Base for that.

[Image: “The error has no message.” dialog]

In other words, you can’t set it to never expire. The highest you can go is a maximum lifetime of 9999 days, which at roughly 27 years will keep you going for a while.

[Image: password policy with maximum lifetime set to 9999 days]

Be sure to change the other policies according to your organization’s security policy. The same goes for the lockout and token policies.

How do we add the additional users to the SSO admin group again?

Log in to the vSphere Web Client as administrator@vsphere.local (or another SSO administrator). Go to Administration – Single Sign-On – Users / Groups. Select your group and click the add member button (the plus-user icon). In Add Principals, select your identity domain and the users or groups you wish to add.

[Image: Add Principals dialog]

Great to have that sorted again. Now off to add more roles to my vSphere 5.5 test lab.

– Happy SSO’in !

New VMworld sessions on YouTube released via VMworld TV

Video killed the radio star. But for VMworld sessions this is a great way to experience the recorded sessions when you did not go to VMworld or were not able to attend them all (as there are a lot of sessions). Maybe this will get you excited to go the next time around.

VMworld TV released a couple of free videos of some of the VMworld sessions. Here is a list for your convenience to open and watch:

I actually attended some of these sessions in Barcelona, like VSVC4944, VSVC5280 and NET5521. Nice to have a flashback to those moments.

– Happy YouTubing these VMworld sessions!

Get up, stand up and get your Windows XP out of there

April 8, 2014.

Ring a bell? No? Really? That is a date that should ring a bell. Either you are in the “ah, don’t worry, I’ve got my things sorted” camp, or it will send a shiver down your spine.
Been living under a rock, or off on a trip to the end of the universe? Well, this April 2014 day is the day that Microsoft will end support (to be precise, end of extended support) for Windows XP. See this nice matrix that Microsoft has up on their site: http://windows.microsoft.com/en-us/windows/lifecycle.

Ah, but that is no problem for us, because we have a perfectly running system and never had to worry about Microsoft support. Okay, well, congratulations on your perfect operating environment. But it will grind to a halt. Why? You probably need applications on there, and the suppliers of those will also stop (or have already stopped) supporting their products on Windows XP. And if that doesn’t get you started, what do you think about attacks on vulnerabilities? There is a bunch of cyber attackers waiting around, and they will be able to target vulnerabilities in Windows XP without fear that these flaws will be patched. And there will not be much you can do to protect yourself besides upgrading to a newer operating system.

[Edit] Okay, apparently you are not completely on your own. Microsoft announced on 15 January 2014 that it will extend its antimalware support for Windows XP through 14 July 2015 (http://blogs.technet.com/b/mmpc/archive/2014/01/15/microsoft-antimalware-support-for-windows-xp.aspx). This means antimalware signatures and engine updates for the Essentials suite. The EOL of Windows XP still stays on 8 April 2014. If you have Essentials in your environment you will have some support, but keep in mind this doesn’t fix vulnerabilities in the OS itself. The effectiveness of antimalware/antivirus protection, or whatever solution, is limited on an outdated OS. You may have a little more protection than nothing, but be aware that this shouldn’t give a false “oh, I’m okay”. The urge to move away from Windows XP still stands.
[/Edit]

What are your upgrade/migration paths?

Got your attention? Well, there is still some time to make your plans and start moving. To help you get started I will highlight some of the paths you can explore to get that pesky Windows XP out of there.

– Assess. Check your environment: what is out there? Check what applications you have and their requirements, what hardware there is, how your distribution is done, and whether there are any central management solutions in place.
Involve your users. This is key! They are the ones using the environment, they are the ones who will use the target environment, and they know their applications. They are also the ones who test and accept the new environment. Don’t have them in your project? Well, you are bound to fail. Go back a few steps and include those users!
Tools: use the assessment tools out there. You have the Microsoft Assessment and Planning Toolkit, Flexera or AppDNA, for example. Do you use System Center Configuration Manager for deployment? Well, use it to gather the information from your clients. At this point you should have a good picture of what is in your environment. Check with your application suppliers what their support is on newer OS versions; need a new version? No problem, add it to your new deployment and migrate the data as described by those suppliers. Make sure you start collecting your application installation disks plus any necessary product keys. And check with your business what their plans are: if you are currently in a fat-client environment, maybe this is the time to move to a VDI or hybrid environment (I would say the perfect time, but this is up to your organization).

– Pick a new OS version. You have a picture of your environment, what is lying around and what your suppliers’ support is for newer OSes. Take your pick. Part of making your migration plan is picking a new OS, as this influences the way to go. As most Windows XP users will want to go to a newer version of Windows, decide between Windows 7 and Windows 8/8.1. You will have to do a fresh install, as there is no direct upgrade path, and you will have to take care of the personal settings currently on the system; there is no direct interchangeability between Windows XP user settings and Windows 7, for example. You probably have a good view of your application support, so check which OS has the most support. And for Windows 7, be aware that mainstream support will also stop in 2015; Windows 8.x may be the better option when your application suppliers support it.

– Virtual desktops. You will have a picture of your business strategy and your current versus future hardware and application support. This is probably a good time to start thinking about VD as your target. [Edit] To clarify a bit on the ways to deliver a VD (as the comments showed this was needed): depending on the types of users in your organization’s landscape, this can be either a shared, a one-to-one or a hybrid virtual desktop environment. What is the difference? A shared VD is a desktop that is shared by several users on one server (with the same Server OS install base and the same resources), whereas a one-to-one VD is a desktop to which one user connects. And yes, these desktops also run on a shared hypervisor host, but separated from the other users’ desktops; changes to one desktop don’t influence the other users. In most organizations it will either be a large portion of shared, or a hybrid of shared with a small portion of one-to-one. But here again you must decide what is best for your organization. There is no one size that fits all organizations; there is a design choice that can easily be expanded to another solution when needed. [/Edit] If your hardware is still able to support Windows 7 and Windows 8, the need for VD is a little lower. But when you would have to invest a lot in new hardware, a VD is the perfect place to go. It gives you a centrally managed environment with upgrade methods for future OS life cycles.

– Legacy applications. There may be a custom application that won’t work on newer versions of Windows. Okay, but here we also have options other than leaving Windows XP out there. There is application virtualization, for example: sandbox the application in a previous-Windows support mode, with ThinApp or App-V for instance. No application virtualization initiatives yet? There is also some virtualization support in Windows 7, as there are ways to run virtual machines on your desktop: a virtualization feature called Windows XP Mode is included in Windows 7 Professional, and products like VMware Workstation are available. Just run your legacy application in the Windows XP VM and work on a plan to replace it later on.

– Persona migration. Users of Windows XP will have set up their applications and workspace to their needs. Preferably they want a seamless move to a new OS with the possibility to retain these settings. As we have sorted the support for applications, we need to think about a way to get those settings to the new OS. What options do we have here? We can virtualize the profile, via RES Workspace Manager for example. This decouples (or abstracts) the profile and settings from the Windows profile (which has changed from XP to 7, so again there is no direct way). Deploy it on the current Windows XP base and gather the settings needed; when going to Windows 7/8 these settings will be applied there as well. There is a little catch to this method (and, to be clear, to all migration options): none of the solutions can push settings from applications that changed over time. Your Office 2003 settings will not be applied straight onto Office 2010; some conversion will be needed.
Another option is to use Windows Easy Transfer to move your settings from XP to Windows 7. Use your network or a USB hard disk to save the settings and sneakernet them to the new system. 32-bit to 64-bit will be harder to migrate, but there will be some backup/restore options. And yet another option is the layer management upgrade approach.

– Design with workspace layers. Make your design one that is easy to upgrade in the future. OS life cycles will be shorter; Windows 9 is already rumored to be released in 2015. By treating your OS as one of your workspace layers, you will be able to migrate more easily in the future. These layers can be transported, migrated and recovered more easily. Lose your corporate notebook? Well, here is a VD image with your persona, data and application layers restored from a previous (central) point, and off you go! Decouple those layers and they become easier to manage. What are workspace layers? You have your hardware layer, driver layer, base image (OS) layer, one or more application layers and the user layer. Your corporate data will be in several layers, but if you have good insight and working data management (not often seen, to be honest) you can even have a data layer. With these layers you can have different owners, managers and responsibilities (IT vs. user vs. business).

– Desktop layer management. VMware Horizon Mirage is a layered image management solution that separates the desktop into logical layers that are owned and managed by either the IT organization or the end user (persona/applications). You can update the IT-managed layers while maintaining end-user files and personalization. With the centralized management provided by Horizon Mirage you can perform all of the snapshot, migration and recovery tasks remotely. This significantly reduces the manual migration steps, accelerates the migration project and decreases IT costs. When set up and captured correctly, this is the preferred tool to do an online and seamless migration from Windows XP to Windows 7.

– But my business wants to go even further and include all those buzzwords like BYOD, mobility and such. Yup, and why are you still stuck with Windows XP? Take the simple approach: first get your infrastructure up to the right OS and running. Take one step at a time instead of a giant leap. Yes, of course you will have to design with the future in mind (VDI with workspaces will open the environment up to mobility), but you first have to make this big change a successful one. Let the infrastructure sink in, get your issues out, and let the organization get used to this change. After that, with the design in mind, it should be a nice easy project to add mobility to your upgraded environment.

—-

For those on Windows XP: it was time to act, and there is still time to act. But you will have to do it now!

Else it is Tick Tick Boom! (just to get some more earworm out of my head)

– Happy migration!

Keep on IO’in in the VDI world

I have already done some posts about analyzers, optimization and IO offloading (https://pascalswereld.nl/post/72323420934/ram-based-storage-vdi and https://pascalswereld.nl/post/71309552232/reducing-io-at-host), but a question that often arises in VDI projects is: what IO(PS) do we expect in which layers of the infrastructure, and how can we measure it accordingly?

But first start at the start.

What kind of IO can we expect in VDI?

IO in the case of VDI, and other virtualization workloads, is mainly networking and storage IO. Networking IO is measured in the throughput of the network components and in the time a command takes traveling through the components (up- and downstream), measured as latency.

Storage IO is measured in the number of operations (input/output operations per second, IOPS), the characteristics of the IO (sequential or random, read/write ratio), the time a command takes traveling through the components (latency) and the throughput of the storage components.
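To put numbers on those definitions, here is a small worked example. The counters are made up, but the arithmetic is exactly what the tools further below do for you when they turn raw counters into IOPS, ratio, throughput and latency.

```python
# Worked example of the storage IO metrics above, computed from two samples of
# raw counters (the kind perfmon or esxtop expose). All numbers are made up.
interval = 10.0                 # seconds between the two samples
reads, writes = 4200, 1800      # operations completed in the interval
bytes_moved = 96 * 1024 * 1024  # bytes transferred in the interval
busy_time = 4.5                 # seconds the device spent servicing IO

iops = (reads + writes) / interval            # 600 IOPS
read_ratio = reads / float(reads + writes)    # 0.7 -> a 70/30 read/write mix
throughput = bytes_moved / interval / 2**20   # ~9.6 MiB/s
avg_service = busy_time / (reads + writes)    # ~0.00075 s = 0.75 ms per IO (rough)
print(iops, read_ratio, throughput, avg_service)
```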

Why is IO important in VDI?

Resources, and the IO these resources need, are key to delivering a good desktop experience. It does not matter whether the desktop is physical or virtual; it is a matter of ensuring that the desktop has sufficient hardware resources (the four horsemen: CPU, memory, networking and storage) to run the OS and applications at the right time. If there is not enough, or it is slower than needed, the desktop experience will suffer immediately, as will the users.
With physical desktops each of the resources is local to and dedicated to that desktop; with modern systems this is often not a problem, but this is where some of your user experience requirements start. Most users are used to working with physical desktop environments and know what to expect from the experience of those systems. They want the same or faster in VDI. With resources in VDI there will be abstraction and pooling of those (hardware) resources. The resources are presented and pooled at the hypervisor host. For example, the virtual hard drive lives on the hypervisor’s local or centralized storage, and this storage is shared (or replicated) between hypervisor hosts. IO requirements, and the consequences of the IO load, are spread across the virtual and facilitating resource (storage and network) infrastructures. To deliver consistently high performance, virtual desktops require constant access to, preferably, low-latency and high-throughput resource infrastructures.

What is normal IO in the VDI?

Normal? There is no normal. Every organization has its own workload footprint; this is the case in server virtualization and it is also the case for virtual desktops. A Windows image will have a certain IO footprint that is more or less known out there (in averages), but depending on your build, sizing, own components and usage, these differ. The IO footprint of a virtual desktop is influenced by choices in your deployment, for example the optimization of your image, persistent or stateless desktops, offloading of certain applications (like virus protection, to the hypervisor), application virtualization, optimizing the IO characteristics (small random IO into large sequential IO) and de-duplication/compression throughout the infrastructure. With certain techniques you can get your IO down and handled at the host.

How will you be able to know your specific IO? Well: measure, test, baseline, report (or document) and repeat. Preferably start when planning (pilot or PoC) for a VDI.
Sure, you can use one of the available calculators out there as a reference point (please do), but you still have to do some of the magic yourself.

How can we measure the IO of the VDI?

It is important here to know what kind of components are in your infrastructure and how to measure the metrics of these components.

Start from within your guest, as that is where the user will feel the pain... oh wait, will get their warp-speed desktop experience.

Tools as a complete solution:

  • vCenter Operations (vCOps) and, for VDI specifically, vCOps for Horizon View. Complements the vCenter/ESXi monitoring and metrics already in place with excellent views and reporting on the state of your environment.
  • EdgeSight and Desktop Director for XenDesktop VDI.
  • XenServer monitoring, depending on your version you will possibly need to add a Performance Monitoring Enhancement pack.
  • Planning tools like the VMware View Planner (https://pascalswereld.nl/post/66369941380/vmware-view-planner), or analyzer tools like the VMware flings vBenchmark (https://pascalswereld.nl/post/62991166022/flings-vbenchmark) and IO Analyzer (https://pascalswereld.nl/post/58225706990/vmware-io-analyzer-fling), or for Citrix the XenServer PerformanceVM.
  • Login VSI, an industry-standard load testing tool for virtualized desktop environments. Login VSI can be used to test the performance and scalability of every Windows-based virtual desktop solution. It can be used for Horizon View VDI and XenDesktop, as well as heterogeneous solutions where SBC or RDS is in place (there is going to be a blog post on this subject a little later on).
  • Nagios / Cacti and the like. You will have to add your counters and checks to the configuration, which can be a little gruesome if you don’t have any experience, but these are excellent products for monitoring your whole infrastructure with all sorts of counters, with chains of parent and child objects.

Tools at the layers:

This really depends on the solution in place. Knowing your infrastructure and how traffic flows is key here. The monitoring needs to be planned so that you get your counters in the same time window and with the same metrics. Get the disk and network counters from the guest OS itself throughout the infrastructure. Keep an eye on the CPU and memory on all layers, and check for paging/swapping. Averages at layer x versus real time downstream can be a bother. Also, as written in a previous post, time (and time zone) needs to be set correctly and synchronized from the same source.

But to give some pointers (a small guest-level sampling sketch follows after this list):

  • Perfmon / Resource monitor.
  • iostat/vmstat/top and other Linux tools.
  • Hypervisor monitoring, like from the vSphere Web Client/vCenter/ESXi host or XenCenter.
  • Esxtop / XenTop / vscsiStats. See some standards in my post (https://pascalswereld.nl/post/69976450322/vphere-monitoring-metrics)
  • Networking SNMP tools, or the vendor-supplied tools.
  • Storage SNMP tools, or the vendor-supplied tools. When you have switches and controllers in your storage infrastructure, monitor them all the way.
  • Benchmarking tools like Iometer, IOzone or PassMark, or benchmarks specific to a kind of workload (beware, there are a lot out there).
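As a concrete starting point for the guest-level counters mentioned above, here is a small sampling sketch using the cross-platform psutil library. It is a hedged minimal example, not a replacement for the tools listed: it reports the same delta-based numbers perfmon or iostat would give you over the interval, run from inside the virtual desktop you want to profile.

```python
# Sample guest-level disk IO over an interval and report IOPS and throughput.
import time
import psutil

INTERVAL = 5.0  # seconds; assumption, tune to your measurement window
before = psutil.disk_io_counters()
time.sleep(INTERVAL)
after = psutil.disk_io_counters()

read_iops = (after.read_count - before.read_count) / INTERVAL
write_iops = (after.write_count - before.write_count) / INTERVAL
read_mbps = (after.read_bytes - before.read_bytes) / INTERVAL / 2**20
write_mbps = (after.write_bytes - before.write_bytes) / INTERVAL / 2**20
print("reads/s %.1f writes/s %.1f read MiB/s %.2f write MiB/s %.2f"
      % (read_iops, write_iops, read_mbps, write_mbps))
```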

– Happy IO’in in the VDI world (and cue Neil Young)

Atlantis ILIO – RAM Based storage matched for VDI

I personally am very fond of solutions that handle IO close to the source and therefore give more performance to your virtual machine workload while minimizing (or preferably skipping) the storage footprint downstream. I previously wrote a blog post about solutions you can use at the host. One of these solutions is Atlantis ILIO.
As the company I currently work for (Qwise – http://www.qwise.nl) is also a partner for consulting on and delivering Atlantis ILIO solutions, I thought one plus one is... three.

If you’re not familiar with Atlantis ILIO: it works by running an Atlantis appliance (VSA) on each of your hypervisor hosts (dedicated to VDI, for example) and presenting an NFS or iSCSI datastore that all the VMs on that host use. For this datastore it uses a configured part of the host’s RAM, handling all reads and writes directly from that RAM (that is, when you deploy the VMs there and have reserved this RAM for this kind of usage). The IO traffic is first analyzed by Atlantis to reduce the amount of IO, then the data is de-duplicated and compressed before being written to server RAM. When needed, Atlantis ILIO converts small random IO blocks into larger blocks of sequential IO before sending them to storage, increasing storage and desktop performance and countering the IO blender effect.
The OS footprint in RAM is reduced to a rather small one; reductions around 90% can be reached depending on the type of workload. Any data that is written to the external storage (outside of RAM) also undergoes write coalescing before it is written.

Since Atlantis will only store each data block once, regardless of how many VMs on that host use that block, you can run dozens or hundreds of VMs off just a tiny amount of memory.
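To illustrate why that works, here is a toy sketch of content-addressed block deduplication. To be clear, this is not Atlantis code (ILIO does this inline in RAM, with compression and IO coalescing on top); it just shows the principle of storing each unique block once and keeping references for every write.

```python
# Toy block-level deduplication: store each unique block once, keyed by a
# content hash, and keep only references per write.
import hashlib
import zlib

BLOCK = 4096
store = {}      # content hash -> compressed block (stored once)
refs = []       # ordered list of hashes, i.e. the "virtual disk" layout

def write_block(data):
    digest = hashlib.sha1(data).hexdigest()
    if digest not in store:             # only the first writer of a block pays
        store[digest] = zlib.compress(data)
    refs.append(digest)                 # every later write is just a reference

# 100 "desktops" writing the same OS block: one stored copy, 100 references.
os_block = b"\x00" * BLOCK
for _ in range(100):
    write_block(os_block)
print(len(store), "unique block(s) for", len(refs), "writes")
```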

And what does RAM give you? A warp-speed user experience and faster deployment.

Atlantis ILIO can be used for stateless VDI (completely in RAM), persistent VDI (out of server memory or backed by shared storage) and XenApp, and it can also be used with virtual server infrastructures.

Atlantis ILIO Architecture

[Image: Atlantis ILIO architecture]

As written before, Atlantis ILIO is deployed as an appliance on each host, or on a host that serves a complete rack. This appliance is an Atlantis ILIO (or ILIO for short) controller or instance. The Atlantis ILIO appliance uses a defined part of the host’s RAM to present an NFS or iSCSI datastore via the hypervisor. Here you can place the VDs, XenApp servers or other VMs that need accelerating. ILIO sits in the IO stream between your VM, hypervisor and storage. You need the correct Atlantis product to get the optimized features for the wanted solution workload, currently VDI and XenApp. Keep an eye out for other server solutions; they are bound to come out in the first half of 2014.
In the above model the hypervisor is VMware vSphere with a stateless VDI deployment, but this can be Citrix XenServer or Microsoft Hyper-V, as Atlantis supports these as well. The Atlantis-presented storage can easily be used to accelerate PVS or MCS when using XenDesktop provisioning, or in combination with some form of local or shared storage for persistent desktops’ unique user data.

Atlantis ILIO Management Center.

The Atlantis ILIO Management Center sets up, discovers and manages one or more Atlantis ILIO instances. ILIO Center is a virtual appliance that is registered with a VMware vCenter cluster. Once ILIO Center is registered with a vCenter, it can discover Atlantis ILIO instances that are in the same vCenter management cluster and selectively install a Management Agent on them. If additional vCenter clusters with Atlantis ILIO instances exist, an ILIO Center virtual machine can be created and registered for each cluster.
ILIO Center can be used for provisioning of ILIO instances, monitoring and alerting, maintenance (patching and updates) and, probably the most important part, reporting on the status of the ILIO process and the handled IO offload (for example, what amount of blocks is de-duplicated). ILIO Center can also be used to fast-clone a VD image; this clones full desktop VMs in as little as 5 seconds without network or storage traffic.

Hosts and High Availability (mainly for persistent deployments)

Atlantis supports creating a synchronous FT cluster of Atlantis ILIO virtual machines on different hosts to provide resiliency and “zero downtime” during hardware failures. Atlantis supports using HA across multiple hosts, or automatically restarting virtual machines on the same host.

A host that is offering resources to a specific workload, for example the VDs, is called a session host. This session host can use local or shared storage for its unique data storage. With shared storage, when a failure happens you can use hypervisor HA (together with a DRS VM rule to keep the appliance and the VDs together). When using local storage in vSphere this is not an option, as HA requires a form of shared storage; for this you can use ILIO clustering with replication.

With unique data living on the local host, a replication host and a standby host come into the picture.

In an Atlantis ILIO persistent VD solution, a replication host is a centrally placed Atlantis ILIO instance that maintains the master copy of each user’s unique data. The desktop reads and writes its IO in the RAM of the session host. The session host (after handling the IO) then replicates any unique compressed data over to the replication host. This replication is called Fast Replication and is handled over the internal out-of-band Atlantis ILIO network. The replication host is backed by shared storage, where the unique user data is written. There is also a standby host for the replication host; this standby host has the same access to the shared storage location as the replication host. In case the replication host fails, the standby host takes over and has access to the same unique user data on the shared storage. Keep in mind that, depending on your workload, between 5 and 8 session hosts can share a single replication host.
Disk-backed configurations that leverage external shared storage do not need a replication host, as ILIO Fast Replication mirrors the desktop data directly to that external shared storage.

For non-persistent, stateless VDs the data stays purely in RAM. VMware Horizon View or Citrix XenDesktop will notice that the VDs are down when the host fails and will make new VDs available on another host. Users will temporarily experience a disconnect, but their workspaces will reconnect when available again.

Conclusion

With interesting RAM pricing and reduced infrastructure complexity, Atlantis ILIO is a perfect solution to use when (re)building a VD infrastructure with already-in-place solutions or solution components. You can provision lightning-fast VDs and engage your workforce at warp-speed productivity, at a cost of around 150-200 euro per user. You can host hundreds of VDs with just a small amount of configured RAM on one host. Next to this, you will have a much smaller unique IO data footprint on your shared storage. No need to go with expensive accelerated storage infrastructure controllers; you can easily go with a cheaper SAN/NAS/JBOD or a no-SAN solution.

– Happy Atlantis ILIO’ing!