Virtual Infrastructure WAN Benchmarking with Dummynet

Ever had a customer environment where a deployment is done over WAN links, or where the customer wants to change their WAN links, and they (or you, in the planning or testing phase) are interested in seeing what happens in advance? Or do you just want to know how latency or packet loss on a WAN link influences the user experience in VDI or application deployments (well, if the users aren't complaining first ;-)…)?

Or do you want to introduce some real-world networking conditions into your lab setup?

You have some options within the virtual infrastructure itself; vSphere, for example, lets you throttle a vSwitch/portgroup to a certain peak bandwidth (traffic shaping). But bandwidth is just part of the deal: what is latency doing to your desktop or application experience, and what will packet loss do to the communication?

For this sort of testing/benchmarking I often use a Dummynet setup to influence the traffic between infrastructure components. So how does this work? Time to find out in this blog post.

What is Dummynet?

Start at the start, you dummy… Dummynet is a traffic shaper, bandwidth manager and delay emulator, and it is included in FreeBSD. There are several ways to use it: deploy a VM with a FreeBSD image, or use one of the live ISOs out there (Frenzy for example, http://frenzy.org.ua/en/index.shtml). Dummynet was initially implemented as a testing tool for TCP congestion control by Luigi Rizzo <luigi@iet.unipi.it>. See Luigi's site http://info.iet.unipi.it/~luigi/ip_dummynet/. Dummynet uses ipfw rules to classify the traffic and send it through the pipes it creates.

How to use Dummynet in your virtual infrastructure?

Build a Dummynet VM with two networks connected. Connect a client VM and a server VM, one on each of the networks (client in network X and server in network Y). Let the dummynet VM route and pipe the network traffic between those VMs and shape the network according to your testing needs.

A model to help clarify.

image

When installing FreeBSD, be sure to include the src distribution so you can build the Dummynet/ipfw functionality into the kernel. You can either load the kernel modules at runtime or recompile a kernel with the dummynet option included.
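If you go the custom kernel route, this is roughly what the kernel configuration needs (a minimal sketch; the kldload approach shown later in this post works just as well for a lab):

options IPFIREWALL                     # the ipfw packet filter
options IPFIREWALL_DEFAULT_TO_ACCEPT   # without this, ipfw defaults to deny all
options DUMMYNET                       # the dummynet traffic shaper
options HZ=1000                        # finer timer granularity for more accurate delays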

Demo

I put a client VM on the same network as one of the dummynet interfaces. The second dummynet interface is connected to another VM network, where the server VM is connected. The VMs are given IP addresses in their respective IP subnets, and the dummynet VM gets an address in each of them as well (on em0 for the client VM subnet and on em1 for the server VM subnet). I configure the client and server VMs to use the Dummynet VM as their IP gateway (just a route add for the other subnet pointing to the dummynet interface).

On the dummynet VM you can use the following commands, or include them in the /etc/rc.conf and /etc/sysctl.conf files.

ifconfig em0 192.168.243.241 netmask 255.255.255.0
ifconfig em1 10.0.1.1 netmask 255.255.255.0
sysctl net.inet.ip.forwarding=1 (tell FreeBSD to forward packets between the two interfaces)
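To make this persistent across reboots, a rough /etc/rc.conf equivalent (assuming the same em0/em1 names and addresses as above):

ifconfig_em0="inet 192.168.243.241 netmask 255.255.255.0"
ifconfig_em1="inet 10.0.1.1 netmask 255.255.255.0"
gateway_enable="YES"      # sets net.inet.ip.forwarding=1 at boot
firewall_enable="YES"     # load ipfw at boot
firewall_type="open"      # allow everything; shaping is done with your own pipe rules
dummynet_enable="YES"     # load the dummynet module at boot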

Check if you can reach both VMs by pinging their IP addresses from the dummynet host.

image

Yes? Okay, move on. Check if you can reach one VM from the other via the IP forwarding option. The freebsd-server is on the 10.0.1.0 subnet, so first add the route on the client as mentioned before.
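On a FreeBSD client that route would look something like this (a sketch using the example addresses from this post; 192.168.243.241 is the dummynet em0 address):

route add -net 10.0.1.0/24 192.168.243.241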

image

This works. Now introduce some dummynet.

kldload dummynet
ipfw flush

Add a firewall rule to allow all traffic between the first VM and the second:

ipfw add 1000 allow all from any to any

Add a Dummynet pipe to check if ipfw works:
ipfw add 100 pipe 1 ip from any to any

And put some delay on that:

ipfw pipe 1 config delay 10ms

And check with ping:

image

You will see the delay multiplied by the number of times the traffic passes the pipe: the rule matches on both dummynet interfaces and in both directions, so the 10 ms delay is added roughly four times per round trip. The ping goes from 2.5/3 ms in the previous shot to 39/38 ms in the current shot.

When this works you can add some other tests, for example:

Add a rule to delay the matched packets by 50 ms, randomly drop 3% of the packets (the plr value ranges from 0 for no loss to 1 for 100% packet loss) and limit the bandwidth to 1 Mbps (the bandwidth limit can be checked with a file copy or similar).

ipfw pipe 1 config delay 50ms plr .03 bw 1024Kbits/s

image

Sequence 3 of the ping is dropped.

To see what is configured on the pipe use:

ipfw pipe 1 show, and destroy the pipe with ipfw pipe 1 delete.
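If you want to emulate an asymmetric link (ADSL-like, for example), here is a sketch with one pipe per direction, using the subnets from this demo:

ipfw flush
ipfw add 100 pipe 1 ip from 192.168.243.0/24 to 10.0.1.0/24   # client to server direction
ipfw add 200 pipe 2 ip from 10.0.1.0/24 to 192.168.243.0/24   # server to client direction
ipfw pipe 1 config bw 512Kbit/s delay 40ms
ipfw pipe 2 config bw 2Mbit/s delay 40ms
ipfw add 1000 allow ip from any to any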

This concludes this blog post.

– Happy Dummynet network testing!

VMware Horizon View 5.3 is available for download and new feature list

VMware Horizon View 5.3 is available and can be downloaded at the following location: https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_horizon_view/5_3.

So what new features are available with version 5.3?

The following features are new to this version:

  • 3D graphics with Virtual Dedicated Graphics Acceleration (vDGA). This lets View deliver complex 3D graphics in your VDI environment, in combination with vSphere.
  • Virtual Shared Graphics Acceleration (vSGA) now supports AMD/ATI graphics cards in addition to the NVIDIA cards.
  • Improved Real-Time Audio-Video experience and performance. Encoding and compression techniques seriously reduce bandwidth consumption, enabling rich communication and collaboration for end users over WAN links. Available in the Feature Pack.
  • Enhancements to mobility features in HTML5 and Unity Touch. Use Blast HTML Access to provide end users with a mobile workspace experience even when the Horizon client is not available. Also available in the Feature Pack.
  • Windows 8.1 support. Support for the latest Windows version as a virtual desktop.
  • VMware® ThinApp® 5.0. Support for application virtualization of 64-bit applications. Support for 64-bit applications in VDI environments starts with VMware Horizon View 5.3.
  • Manage persistent virtual desktop images with VMware Horizon Mirage™ 4.3. Before 4.3, Mirage was only supported with physical images.
  • Virtual SAN (VSAN) support. Leverage Virtual SAN for your Horizon View VDI deployments (maybe a little overdone to call it support, as Virtual SAN is still in beta).
  • Support for Windows Server 2008 as a virtual desktop.
  • View Agent Direct Connection (VADC). An optional plug-in that lets end users connect to their desktop session without having to authenticate through a Connection Server. This lets your users connect to sessions when a WAN link to the Connection Server isn't available (due to connection problems, poor bandwidth or high latency). Perfect for your mobile workforce.

So go out and download this version if you haven't already. Test, plan and update your reference architecture for new deployments with this version.

The updated versions of View, Mirage, ThinApp and so on are also available via the Horizon Suite download link: https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_horizon_suite/1_0.

– Enjoy delivering a mobile workspace with VMware Horizon!

VMware NSX Series – Data flow without control plane

In my last blog post (you can read it here: https://pascalswereld.nl/post/67365305981/nsx) I wrote about the NSX architecture with the out-of-band components such as the NSX Manager and the NSX controller cluster (management and control plane). But is it really true that they don't interfere with the data I/O?

Time to find out!

I am using the HOL NSX lab to show how this works. This is a preconfigured lab with an NSX Manager, an NSX controller HA pair, an edge router and some Linux-based VMs.

First we set up a logical switch, connect it to the perimeter edge router and connect VMs to this switch.

The VM Network logical switch is being created.

image

Adding it to the perimeter edge with an IP subnet declaration. And yes, don't forget to connect the port.

image

As you will notice the subnet 10.1.40.0/24 is connected. The edge port is given the 10.1.40.1 IP address.

Next up: adding VMs to this distributed logical network.

image

I am using two web servers that are currently on another logical network. This action will move the VMs from Web_Logical_Network to the newly created VM Network.

With PuTTY SSH sessions to the VMs we can verify that they have interfaces connected to this network.

image

We see both VMs in the configured subnet, with web03 at address 10.1.40.13 and web04 at address 10.1.40.14. When we start an ICMP ping we can confirm data is flowing from one VM to the other, and that traffic is flowing from one logical switch port to the other.

image

Okay, now let's see how the traffic flows after we shut down the controller HA pair. We go to the VMs in the vCenter inventory.

image

Here you also notice the edge components.

With the Shut Down Guest OS operation we shut down both NVP_Controller VMs, which effectively takes down the complete HA pair.

image

After this we can retry our ICMP data flow.

image

And lo and behold, data I/O is still flowing between web03 and web04. A ping back from web04 to web03 shows that this direction also works.

image

This small example shows that the controller pair doesn't interfere with already configured components in the data plane. You won't even notice problems when adding new VMs to this logical switch. Let's demonstrate by adding web02.

image

image

Network adapter 1 is connected to the VM Network on the DVS. But why wouldn't it be? The DVS is managed by vCenter and the host is already part of this DVS (for example, web04 is running on the same host). At the IP address we notice something wrong: the guest is in the wrong IP subnet (.30 instead of .40). Opening /etc/sysconfig/network/ifcfg-eth0 shows a static IP configured; again, elementary my dear Watson. Replace the .30 with .40, bring the interface down and up again, and the ping is running.
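On the guest that comes down to something like this (a hypothetical sketch, assuming the old subnet was 10.1.30.0/24 and the SLES-style config file used by the HOL VMs):

sed -i 's/10\.1\.30\./10.1.40./' /etc/sysconfig/network/ifcfg-eth0   # fix the static IP entry
ifdown eth0 && ifup eth0                                             # bounce the interface
ping -c 3 10.1.40.1                                                  # verify the edge gateway answers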

But what will not work with the controllers down? Creating a new logical switch, for example, will fail with a vCNS server error. There is no interaction from the management plane to the hosts' control plane components; for that you need the controllers as the workhorse.

– This concludes this blog post.

VMware NSX Series – Introduction and components

This year VMware introduced some new solutions for the software-defined data center (SDDC): Virtual SAN (or VSAN) for the storage and availability layer, and NSX for the network and security layer. In other words, software-defined storage and software-defined networking respectively.

Virtual SAN will be generally available in H1 2014. The beta has been out for a while now, so there is plenty of opportunity to test this solution. I have done a little blog post about the initial configuration at https://pascalswereld.nl/post/62805854730/vsan-beta-part-what-install.

The other solution is NSX, which I want to go into a little deeper in this blog post. NSX is GA, but you will have to contact VMware sales if you want to get your hands on it. But first, a little SDDC.

Software Defined Data Center (SDDC)

So you have heard this SDDC term before. That's right: if you have been following the keynotes from this and last year's VMworld you will have heard it, and if you are a regular visitor of vmware.com you will have seen even more of it. But what is meant by SDDC?

image

The software-defined data center (SDDC) is an architectural model for IT infrastructures that extends traditional virtualization concepts to all of the data center's resources and services. This started a decade ago with the virtualization of computing resources (CPU and memory) to provide server virtualization (the software server) as the base component of the SDDC.
Software-defined networking, or network virtualization, is the process of merging networking resources and functionality into a software-based virtual network. This creates simplicity by creating virtual network components that are "free" of the underlying physical network and firewall architecture. Well, free: you will still need some cabling and switching to go from your computing cluster to the edge and beyond, but these can be simplified to just providing hardware connectivity. Let the virtualization layer handle the connectivity of VMs, tenants, routing and access control (just a few examples).
Software-defined storage, or storage virtualization, is simple shared storage specifically designed for virtual machines. By simple I mean self-tuning, easy to provision, simple to manage and dynamically scalable. It presents a single datastore distributed across multiple hosts in a vSphere cluster (where VSAN is enabled).

If underlying hardware fails, the virtualization layer automatically redirects workloads to other components in the data center, as long as redundant paths exist.

An important reason for the SDDC is to simplify the provisioning of services for application workloads. Yes, it adds more complexity to the virtualization layer, which is not just computing anymore. But it simplifies provisioning because you no longer have to go back and forth between different IT service silos to get something done; the expertise is there in the virtualization layers.

Well, pretty clear isn't it…

Now for a little intro to network virtualization with VMware NSX. I will try to keep it short, as you could write a book about this subject. I don't think I'm going to be finished in one blog post, so I conveniently used 'series' in the title. That is not a promise but an opening, as I am sure this subject will return.

VMware NSX Architecture

NSX is composed of the following components:

image

These bring components into the network/virtualization layers by means of virtual appliances, plus components close to the hypervisor (on the host). As you will notice (or not), the switching supports Open vSwitch, which allows NSX to be deployed with other hypervisors (and with other I mean other than VMware in this case). For example, KVM and XenServer can be added to provide a true software-defined data center, and not just a VMware software-defined data center. For this there are two flavours of NSX: one optimized for vSphere, and NSX for multi-hypervisor environments.
But the question here is how many organizations use multiple hypervisors in their environments. Often enough I only see a single-flavor install base. But that is a discussion outside the scope of this blog post. Back to the NSX components.

An overview of the NSX components:

NSX Manager. A web-based GUI management dashboard for user-friendly interaction with the VMware NSX controller cluster, via the NSX API. It is primarily used for system setup, administration and troubleshooting. NSX Manager can take snapshots of the entire state of the virtual network for backup, restore, introspection and archival. These services are provided via the NSX APIs. The NSX Manager works together with vCenter for managing cluster and host components.

NSX Controller. The NSX controller cluster is the highly available distributed system of virtual appliances responsible for the programmatic deployment of virtual networks across the entire architecture. The NSX controller cluster accepts API requests from cloud management platforms (e.g. vCloud, OpenStack), calculates the virtual network topology, and proactively programs the hypervisor NSX vSwitches and NSX gateways with the appropriate configuration. While not handling packets directly, the controller cluster is the workhorse of the NSX infrastructure.

The NSX Manager and NSX controller cluster are out of band and never handle data packets. Put another way: the NSX Manager sits in the management plane (together with a vCenter system) and the NSX controllers sit in the control plane of the network virtualization.

NSX Gateways/Edge Router. NSX edge services provide secure paths in and out of the software-defined data center. NSX gateway nodes can be deployed in highly available pairs and offer services such as routing, firewalling, private tunneling and load balancing, for securing and controlling traffic at the edge of one or more virtual networks. NSX gateways are managed by the controller cluster.

NSX vSwitch. The NSX vSwitch is a component that is added to the hypervisor and replaces the traditional switches. Well, sort of, as there still is a distributed logical switch layer, but now it is the NSX vSwitch or Open vSwitch. It can span multiple clusters and provides, for example, layer 2 and layer 3 logical switching.

Host loadable kernel modules. Most networking components use modules provided to the host. For example, to let a host understand the NSX switch and let traffic flow between NSX hosts, they need to talk the same language; with the kernel modules installed, your ESXi host is able to. The installation of the modules can be done using the UI or by bundling the vSphere image with the proper VMware Installation Bundles (VIBs). These modules provide port security, VXLAN, distributed firewall (DFW), distributed switching and distributed router (DR) functions at the host level.
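If you want to check which of those bundles ended up on a host, you can list the installed VIBs from an SSH session on the ESXi host (the exact NSX VIB names depend on the version you deploy, so treat this as a quick sanity check):

esxcli software vib list    # look for the VXLAN/NSX related entries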

—-

Okay, that is enough theory for this blog post.

Would you like some hands-on? VMware has some Hands-on Labs (HOL) sessions on the NSX subject. Take these labs at http://labs.hol.vmware.com/ (or www.projectnee.com). You can choose either, or do both of the HOL-SDC-1303 – VMware NSX: The Network Virtualization Platform and HOL-SDC-1319 – VMware NSX for Multi-Hypervisor Environments sessions.

– Interesting, this network virtualization. To be continued for sure.

Learning Puppet – getting the Puppet learning VM

Before trying to learn Puppet, you probably first want to know what Puppet is.

What is Puppet and what can we do?

Puppet is a configuration management and automation tool from Puppet Labs. It supports automation cross-platform, for example on operating systems like Windows and Linux, and there is also VMware vCenter integration. With that integration, Puppet can provision VMware VMs and automatically install, configure and deploy applications like web servers and databases. For vCloud Director there is also a plug-in to further automate the deployment of multiple tenants.

For this VMware integration you need the Enterprise edition of Puppet. You can try it out and manage up to 10 nodes for free.

From 11 nodes onward you will have to buy an additional enterprise license (including standard support), starting at $99 per node. The more nodes, the lower the per-node pricing. See the details at http://puppetlabs.com/puppet/how-to-buy.

And… even better: at http://docs.puppetlabs.com/learning/ you can get a learning course and a learning VM (VMX or OVF) download, so you have your Puppet learning lab ready. There is a serverless module and an agent/master module to start up your Puppet skills. The learning VM is based on the free Enterprise edition.

The VM

You can download the learning Puppet VM at http://info.puppetlabs.com/download-learning-puppet-VM.html. Here you will have to register and choose your flavour of download: either the OVF or the VMX (recommended for VMware Fusion or Workstation). You can run the VM in different flavours of virtualization software like VMware Workstation or VirtualBox.

You can use the console (user root, password puppet) to access the system, or you can use SSH. The IP is shown on boot-up. There is also a web interface at https://<ip>/, used for managing your environment and connections. There you log in as puppet@example.com with the password learningpuppet.
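Once you are in over SSH or the console, a quick sanity check before you start the course exercises (a minimal sketch; the notify message is just an example):

puppet --version                                              # confirm the Puppet agent is installed
puppet apply -e 'notify { "Hello from the learning VM": }'    # apply a one-line manifest locally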

I am using the VMX download and importing it into the VMware Workstation 10 inventory. It is a 1.5 GB download. Extract it and put it in a VM folder. Open Workstation and browse to the VMX file via File > Open. Alternatively, you can use Explorer and double-click the VMX if the file extension is associated with Workstation.

If you're happy with the configuration, start up the guest. After boot, wait a few seconds at the login prompt and the IP information and default user passwords are shown.

image

image

Your system is now set up to learn Puppet further.

– Enjoy learning automation with Puppet!

VMware Flings – Nested ESXi VMtools

When I was at Barcelona I first heard William Lam (http://www.virtuallyghetto.com/) talk about the upcoming release of a VMware fling that would provide VMware Tools for nested ESXi hosts. That release came on 11 November 2013.

I use lab setups often, to try out a few steps for education or to present features in a demo lab environment. That is, when those features don't need many resources; otherwise I am currently bound to the availability of the Hands-on Labs.

Labs are pretty much set up with nested ESXi, whether a plain data center lab, a cloud setup or mobility with Horizon. Testing and demoing comes with a lot of host actions, and we (it's not just me) were lacking a simple way to control the host (or interact with the vSphere API, as William explains in his http://www.virtuallyghetto.com/2013/11/w00t-vmware-tools-for-nested-esxi.html blog post). And with simple I mean not having to go to the console or SSH and, for example, shut down the host from the DCUI.
With the nested ESXi tools it's just right-click and Shut Down Guest for a clean shutdown.

Great! But to be clear: as this is a VMware fling there is no official support, so only use it in a lab setup (non-production warning).

But what do we need?

You will have to go to the flings page and download the VIB (or you can use the vmware.com source when you have an Internet connection on your hosts). You can find it at http://labs.vmware.com/flings/vmware-tools-for-nested-esxi.

Put the VIB on a datastore that is accessible from your host.

Next, open a console to enable SSH, or enable the SSH service in your host's security profile, and PuTTY to that host.

Install the VIB with the following one liner: esxcli software vib install -v /vmfs/volumes/[DATASTORE]/esx-tools-for-esxi-9.7.0-0.0.00000.i386.vib -f

Here [DATASTORE] is replaced with the datastore where you placed the tools VIB.

It should return installation successful.

You will notice that a reboot is required. So either reboot from the UI, or esxcli system shutdown reboot -r "tools VIB install" (the reason flag is required) will do the trick when you're in the SSH session.
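After the reboot you can do a quick check from the same SSH session to confirm the VIB is there (the name matches the file you installed):

esxcli software vib list | grep esx-tools    # should list the esx-tools-for-esxi VIB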

In your web client you will now see VMware Tools running in your host's summary.

You can also use this for ESXi installations in VMware Workstation. Shut Down Guest will nicely shut down the guest OS (just take a peek with ALT+F12 in the console, or compare with a host without the ESXi tools, which will simply be powered off in a second).

One thing: you cannot click Upgrade VMware Tools. You will have to monitor for new releases and install them as a VIB.

– Thanks for this fling guys!

Benchmarking your VMware Horizon VDI infrastructure with VMware View Planner

So I got myself a VDI infrastructure based on VMware Horizon View. Planning and design phase done, first implementation phase done. Depending on your whys, you are either in the concept planning phase or testing before going to production. Good, let's introduce the next step in the project phases: planning for workload, or testing the infrastructure for the right workload. But how do I measure whether my architecture can be, or is, up to the right numbers…
For vSphere server testing we can use the I/O Analyzer fling or specific service/application tools like SQLIO, Iometer, vscsiStats, IOzone, Citrix EdgeSight for load testing (well, until the end of the year, as Citrix EOLs it per 31 December 2013), website testing with Apache JMeter, and so on. But VDI needs its own benchmarking.

For VMware Horizon View VDI infrastructure we have VMware View Planner as our designated tool for planning or benchmarking.

So what is VMware View Planner?

VMware View Planner is a tool designed to simulate a large-scale deployment of virtualized desktop systems. It does this by generating a workload on the infrastructure that is representative of (many) user operations, from a user or administrative point of view. A selection can be made from user actions that typically take place in a VDI environment, or custom workloads can be added.

VMware View Planner can be downloaded as a virtual appliance (OVF template) from http://www.vmware.com/products/desktop_virtualization/view-planner/overview.html. There you will also find some documentation to get your environment up and running.

What testing can be done?

The tests can be summarized in three categories:

  1. Workload generation; By configuring View Planner to simulate the desired number of users and configured applications, View Planner can accurately represent the load presented in a given VMware VDI deployment. Once the workload is running, resource usage can be measured at the servers, network, and storage infrastructure to determine if bottlenecks exist.
  2. Architectural comparisons; To determine the impact of a particular component of the VDI architecture, you can configure a fixed load using View Planner and measure the latencies of administrative operations (provisioning, powering on virtual machines, and so on) and user operations (steady-state workload execution). Changing the component and measuring the latencies again provides a comparison of the different options. A note: you will have to compare several architecture component configurations (e.g. hosts, storage, networking) against the View Planner workload to get a complete view of the impact.
  3. Scalability testing; The system load is increased gradually until a selected resource is experiencing contention or is exhausted. Resources measured include CPU, memory, and storage bandwidth.

There is more: as VDI deployments are organization-specific (at least, organizations have specific application landscapes), VMware View Planner allows custom applications to be added.

View of predefined workloads:

image

So what is needed? The VMware View Planner architecture.

Basically you need a VMware Horizon View environment where you deploy the VMware View Planner appliance.

I have taken the following model from the VMware website

image

  • VMware View Planner Virtual Appliance. A Linux-based virtual appliance. It runs a web server to present a web interface. The View Planner appliance interacts with a VMware View Connection Server or vCenter Server to control desktop virtual machines. It also communicates with client virtual machines to initiate remote protocol connections.
  • Harness. Part of the appliance, but such an important piece that it needs to be mentioned on its own. It is the central piece in the View Planner architecture. The harness controls everything, from the management of participating desktop virtual machines (which can scale into the thousands, up to 4000), to starting the defined or selected workloads, and collecting results back from these workloads. Results are stored in a database in the appliance. The harness provides monitoring of the state through a web user interface.
  • Web Interface; The graphical user interface for interacting with the VMware View Planner appliance to set up the environment, set up and control the workloads, and view the results.
  • Workload; A predefined set of actions, which can be custom built for your organization. Workloads can be sequenced in any way desired and typically operate in one of two categories: user and admin operations. User operations can include typing documents, browsing the web, reading or printing PDF documents, checking email, and so on. Admin operations can include provisioning virtual machines, cloning operations, powering servers on and off, and so on.
    Workloads are placed in sequences to better simulate a real user environment. An example of user workload sequence steps: start with Excel (open, compute, save, close, minimize, maximize, enter value), followed by Zip (compress), and let's play a video (open, play, close).

If you're not going to stick to local mode workloads only, you will need interaction with Active Directory, vCenter, the ESXi hosts and the View Connection Server.

Test Driving

In my VMware Workstation 10 lab environment I have a Horizon View 5.2 environment with vSphere 5.1 hosts and a 5.1 Windows vCenter Server system. vCenter is running on Windows 2012 with a SQL Express DB. The View components are running on a Windows 2008 R2 server and also use the SQL Express instance. Not much is to be expected from this environment, other than walking through the deployment steps of VMware View Planner.

What will you need in resources? The appliance is configured with 1 vCPU, 3 GB vRAM and circa 15 GB of disk space. It fits in any environment easily.

Into this virtual infrastructure I add the VMware View Planner appliance downloaded from the VMware website. Deploying the OVF is straightforward: just accept the agreement, select a location and networking. Add the appliance to the current network and let it rip.

Connect to the console to run the provided setup scripts (see the installation manual). The console can be accessed with the root/vmware combo.

Change to the /root/ViewPlanner directory. Set the path for Python by running the command: source setup.sh
Configure the virtual machine’s static IP address and the corresponding settings by running the command: python ./harness_setup.pyc -i <ipaddr> -m <netmask> -g <gateway> -d <full-domainname> -n <dnsip1> [,<dnsip2>, …]

Replace <ipaddr>, <netmask>, <gateway>, <full-domainname>, <dnsip1>, and (optionally) additional DNS addresses with the appropriate values, including the static IP address and the fully qualified domain name.
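Filled in with some hypothetical lab values (the addresses and domain name below are just placeholders for illustration):

python ./harness_setup.pyc -i 192.168.1.50 -m 255.255.255.0 -g 192.168.1.1 -d viewplanner.lab.local -n 192.168.1.10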

Fire up the View Planner web interface (http://<the FQDN you configured in the Python harness setup>)

image

and log on with user root and password abc123.

image

Next, set up View Planner to connect to your vCenter and AD DS.

Go back to Run & Reports to start setting up your profiles. First you will have to create some; once you have some saved you can return later and load them when you want to rerun them.

image

Conclusion

For VDI based on VMware products this is the tool to use in your planning phase, to determine whether your infrastructure is up to the challenge and can easily be scaled up when necessary. How are certain architectural decisions going to influence your environment now and in the future? Well, find out with VMware View Planner.
You can also use it to test/benchmark current or just-implemented infrastructures and see what they are capable of. With existing and operational systems (and setups that use shared components), stress testing influences your environment, so be careful of the consequences of your actions. The main focus of this product is in the name: Plan!

As with projects, the same goes for the planner: start small and scale when needed. Get the hang of it, try your use case scenarios and see what results you get before going out into an all-out-war scenario (just fire up all sequences for 4,000 virtual desktops, sergeant!).

– Enjoy your VDI VMware Horizon View planning!

vCenter 5.1 installation on Windows 2012 fails at SPS – non-existent ProtectedStorage service

In the middle of building a VMware Horizon Suite 5.2 lab for a comparison demo (VDI, mobility et al.) I was installing vCenter 5.1 on Windows 2012. I am almost sure an earlier installation attempt didn't show this problem, but hey, my memory sometimes plays a little trick on me (I'm used to the VCSA in my lab environments). It could be Update 1, as plain vCenter 5.1 is not supported on Windows 2012 (cue warning: not for production, and only 5.1 U1 is supported on Windows 2012, without the R2). But vCenter 5.1 without the update is what's in my Horizon Suite evaluation, and this is a lab…

Anyhow, with the installation of SSO and the Inventory Service passed, I tried to install vCenter Server. Nothing new, nothing fancy. Only at the SPS part (Profile-Driven Storage) the installer stops and redirects me to the logs. Okay… While checking the installer logs and the system event log, the following is registered as event ID 7000:

image

Okay, ProtectedStorage. So fire up the service manager via services.msc and errhhh… no ProtectedStorage in the dependencies of the VMware VirtualCenter Server service, and no ProtectedStorage service to be found at all (the Protected Storage service was removed from Windows starting with Windows 8/Server 2012).

image

So power up those registry skills and check the dependencies of the vpxd service. You can find them at Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vpxd, in the DependOnService value.

image

Here we see ProtectedStorage, next to the Workstation service and MSSQL Express (the latter only if you run the database locally). Remove the ProtectedStorage entry and close the registry editor. You will have to restart the server, and on boot-up the vCenter Server service is started. (Yes, really, check your services ;-))
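If you want to double-check what is in that value before and after editing, a quick look from an elevated command prompt (reg query only reads, so it is safe to run):

reg query HKLM\SYSTEM\CurrentControlSet\Services\vpxd /v DependOnService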

You will have to rerun the vCenter Server installation to complete the required components. It will start where it left off.

– Enjoy labbing!