vCenter Server Appliance 6.0 in VMware Workstation

For demos, presentations, breaking environments, or just killing time, I keep a portable test lab on my notebook. Yes, I know there are also options for permanent labs, hosted labs and Hands-on Labs for the same purposes. Great places for sure, but that is not really what I want to discuss here.

As I am break.. ehhh, rebuilding my lab to vSphere 6.0, I wanted to install VCSA 6.0 in VMware Workstation. Nice: import the VCSA OVA downloaded from My VMware (after registering e-mail address number ####### over there) and we are done! …….. Well, not quite.
First, the vCenter download contains the OVA, but it is a little bit hidden, and the guided installer will not help you here. You will need to mount or extract the downloaded ISO and look for vmware-vcsa in the vcsa/ folder.

VCSA Location
Copy the vmware-vcsa file to a writable location (if you only mounted the ISO) and rename it to vmware-vcsa.ova. Now we can import the OVA into VMware Workstation. When the import finishes, do not start the VM yet. Certain values that are normally injected via the vSphere Client or ovftool have to be appended to the VMX file of the imported VCSA. Open the .vmx file in the location where you let Workstation import the VM and append the following lines:

guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.addr = "10.0.0.11"
guestinfo.cis.appliance.net.prefix = "8"
guestinfo.cis.appliance.net.gateway = "10.0.0.1"
guestinfo.cis.appliance.net.dns.servers = "10.0.0.1"
guestinfo.cis.vmdir.password = "vmware-notsecurepassexample"
guestinfo.cis.appliance.root.passwd = "vmware-notsecurepassexample"

Note: Change the net and vmdir/appliance.password options to the appropriate values for your environment.

If these lines are not appended, starting the VCSA shows the error "vmdir.password not set, aborting installation" on the console (next to "root password not set"), and the network connection will be dropped even if you configure these settings on the VCSA console (via F2).

Save the VMX file.
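If you prefer not to hand-edit the file, the append can also be scripted. A minimal sketch, assuming a Linux/macOS host (or Git Bash on Windows); the VMX path is an example, so point it at the .vmx in your own Workstation VM folder:

```shell
# Append the OVF guestinfo properties to the imported appliance's .vmx file.
# The path below is an example; replace it with your own VM's .vmx location.
VMX="./vcsa-demo.vmx"

cat >> "$VMX" <<'EOF'
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.addr = "10.0.0.11"
guestinfo.cis.appliance.net.prefix = "8"
guestinfo.cis.appliance.net.gateway = "10.0.0.1"
guestinfo.cis.appliance.net.dns.servers = "10.0.0.1"
guestinfo.cis.vmdir.password = "vmware-notsecurepassexample"
guestinfo.cis.appliance.root.passwd = "vmware-notsecurepassexample"
EOF

# On a fresh file this should report 8 appended guestinfo lines.
grep -c '^guestinfo\.' "$VMX"
```

Same values, same caveat: change the net and password properties to match your environment before booting the appliance.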

And now it is time to let it rip. Start up your engines. And be patient until… lo and behold:

VCSA Running in workstation

To check whether the network is accepting connections, open the VCSA URL from a server in the same network segment, in Chrome for example. After accepting the self-signed certificate "insecure site (run away!)" message, you will (hopefully) see:

VCSA in Workstation

Next we can log on to the Web Client (click through and accept the insecure connection/certificate) as Administrator@VSPHERE.LOCAL with the password provided in the VMX (in the above example vmware-notsecurepassexample). As a bonus, you now know where to look when you forget your lab VCSA password ;-).

VCSA 6 Web Client

(And now I notice the vCenter Operations Manager icon in the Web Client Home screen. Why is this not updated like vRealize Orchestrator :-) )

-Enjoy!

 

 

Design for failure – but what about the failure in designs in the big bad world?

This post is a random thought post, not quite technical but in my opinion very important. The idea formed after some subjects and discussions at last week’s NL VMUG. This blog post’s main goal is to create a discussion, so why don’t you post a comment with your opinion … Here it goes…

Murphy, hardware failures and engineers tripping over cables in the data center: we tech gals and guys all know them and have probably experienced them. Disaster happens every day. But what about a state-of-the-art application that ticks all the boxes for functional and technical requirements, while users are not able to use it because they lack knowledge in this field, or are clueless as to why the business created this thingy (why and how this application or data is supposed to help the information flow of business processes)? Failure is a constant and needs to be handled accordingly, and from all angles.

Techies are used to looking at the environment from the bottom up. We design complete infrastructures with failure in mind and have the technology and knowledge to perfectly execute disaster avoidance or disaster recovery (forget the theoretical RTO/RPO of 0 here). We can do this at a lower cost (CAPEX) than ever before, and there are more benefits (OPEX and minimized downtime for business processes) than before. But subsequently we should ask ourselves this: what about failing applications, or data that is generated but never reaches the required business processes (the people operating or using those processes)?
Designs need to tackle this problem, based on the complete business view and connecting strategy, technical possibilities and users!

And how will we do this then?

Well, first of all, the business needs full knowledge of its required processes and information flows: the ones that support the services behind the business strategy and move data in and out of them. Very important. And to be honest, only a few companies have figured this part out; most experience difficulties, and they give up. Commitment from the business and the people in the business is of utmost importance. Be a strategic partner (to management). Start by asking why certain choices are made, and explain the why a little more often than just the how, what and when!

Describe why and how information and data is collected, organized and distributed (in a fail-safe and secure manner) and what information systems are used. Describe the applications (and their ROI, services, processes and buses), how the information is presented and how it flows back into the business (via the people or automated systems). How does your solution let the business grow and flourish? Keep clear of too much technical detail: present your story in a way the manager understands the added value, and knows which team members (future users) to delegate to project meetings.

Next up: IT, or ICT as we call it here in the Netherlands, Information and Communication Technology. I really like the Communication part for this post; businesses must do that a little more often. Start looking at the business from different points of view, and make sure you understand the functional parts and what is required to operate them. To prevent people from working on their own without a common goal or reason, internal communication is essential. Know the ins and outs, describe why and how the desired result is achieved, and connect the different business layers. For this, a great part of business IT departments needs to refocus its 1984 vision to the now and the future. IT is not about infrastructure alone; it is a working part within the business, a facilitator, a placeholder (for lack of other words in my current vocabulary). IT needs to be about aligning business services with the applications and data, the tools and services that support and provide for the business. That is why IT is there in the first place; it is not the business that is there (connected or not) for IT. Start listening, and start writing first on a global level (what does the business mean by working from everywhere and every place). Then map possibilities to logical components (think from the information: why is it there, where does it come from and where does it go; and think of the apps and the users). Only when you have defined the logical components do you add the physical components (insert the providers, vendors and hardware building blocks).

Sound familiar? There are frameworks out there to use. Use your Google-Fu: Enterprise Architecture. Is this for enterprise-size organizations only? No, a company of any size must know the why, and the why, and the why, and do something about it. A simplified version will work for SMB-size companies. Below is an example of a simplified model and the layers of attention this architectural framework brings to your organization.

Design for Failure

And…in addition to this, start using the following as a basis to include in your designs:

The best way to avoid failure is to fail constantly

Not my own words, but Netflix's. It couldn't be closer to the truth. Don't test your disaster recovery plan in half-year or yearly iterations; do it constantly, and see whether your environment and business are up to the task when applications go down. Sure, there will be impact, for example services no longer running at 100% warp speed, but users still being able to work with a degraded service beats nothing at all. And knowing that your service keeps operating during a failure is the important part here. Now you can do something about not reaching full speed, for example scale out so a service failure is tolerated without degrading service speed. Or learn which of your services can actually go down for a certain time frame without influencing business services. This is valuable feedback that needs to go back to the business: is going down acceptable, or should we handle this part so it does not go down at all? Just don't do this at the infrastructure level only; include the data, application and information layers as well.
Big words here: trust and commitment. Trust the environment in place and test whether it succeeds in providing the services needed even when hell freezes over (or when some other unexpected thing happens). Trust that your environment can handle failure. Trust that your people can do something with, or about, the failures.
Commitment of the organization not to give up when hitting a brick wall over and over, but to keep going until you are all satisfied. And trust that your people may fail too. Let them be familiar with the procedures, and let a broader range of people handle them (not just the current users' names mapped to the processes; with roles defined and mapped to services, multiple people can operate and analyze the information). Just like technical testing, your people are not operating 24x7x365; they like to go on leave and sometimes they tend to get ill.

Back to Netflix. To generate failures, Netflix uses Chaos Monkey. With that name another monkey comes to mind, Monkey Lives: http://www.folklore.org/StoryView.py?project=Macintosh&story=Monkey_Lives.txt. Not sure where the idea came from, but such a service with such a name can hardly be a coincidence (if you believe coincidence exists in the first place). But that is not what this paragraph is about.
The Chaos Monkey's job is to automatically and randomly kill instances and services within the Netflix infrastructure. When working with Chaos Monkey you quickly learn that everything happens for a reason, and you will have to do something about it. Pretty awesome. The engineers even shared Chaos Monkey on GitHub: https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey.
It must not stop at the battle plan of randomly killing services; fill up the environment with random events where services get into some not-okay state (unlike a dead service) and see how the environment reacts.
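To get a feel for the idea, the kill-something-at-random core can be sketched in a few lines of shell. This is a toy simulation, not Chaos Monkey itself: the service names are invented, and the destructive step is left commented out.

```shell
# Toy chaos-monkey: each run, pick one "service" at random and (pretend to)
# kill it. A real implementation would call the orchestration/cloud API here.
services=("web-frontend" "auth-api" "billing" "recommendations")
victim=${services[RANDOM % ${#services[@]}]}
echo "chaos-monkey: terminating ${victim}"
# e.g. systemctl kill "${victim}"   # <- the destructive step, commented out
```

Run it a few times and a different victim pops up each run; the real lesson starts when your environment has to survive the kill.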

 

VMware Utility Belt must have tools – RVTools 3.7 released

In March 2015, RVTools version 3.7 was released.

This, in my opinion, is the tool every VMware consultant must have in their VMware utility belt, together with the other standard tools. At this time RVTools is still free, so budget is no constraint on using it. More importantly, it's lightweight, very simple to use, and shows much-wanted information in an ordered overview, or allows exporting the information in Excel format to analyse offline.

Before using this tool, it is important to understand that it makes a point-in-time snapshot of the infrastructure configuration items in place: in short, what is configured and what the current operational state is. No more, no less. The information can then be used in, for example, operational health checks, or as the AS-IS starting point in the analysis/inventory phase of projects (consolidation or refresh projects). See more use cases further below, and I am sure there are more examples out there.

No trending or what-ifs, for example; that is something you will have to do yourself, or use other solutions/tools available for the software-defined data center. VMware has some excellent tools for SDDC management and insight into your virtual environment (for example vRealize Operations and Infrastructure Navigator). But that is a completely different story.

What is RVTools?

RVTools is a Windows .NET application which uses the VI SDK (updated to 5.5 in this release) to display information about your VMware infrastructure.
An inventory connection can be made to vCenter or a single host to get as-is information about hosts, VMs, VMware Tools, datastores, clusters, networking, CPU, health and more. This information is displayed in a tab-page view; each tab represents a specific type of information, for example hosts or datastores.

RVTools can currently interact with Virtual Center 2.5, ESX Server 3.5, ESX Server 3i, Virtual Center 4.x, ESX(i) Server 4.x, Virtual Center 5.0, Virtual Center Appliance, ESXi Server 5.0, Virtual Center 5.1, ESXi Server 5.1, Virtual Center 5.5 and ESXi Server 5.5 (no official 6.0 support in this version).

RVTools can export the inventory to Excel and CSV for further analysis. The same tab from the GUI will be visible in Excel.


There is also a command-line option to run (for example) a scheduled inventory and have the results sent via e-mail to an administrative address.

Use Cases?

– On site Assessment / Analysis; Get a simple and fast overview of a VMware infrastructure. The presented information is easy to browse through, whereas in the vSphere Web Client you would be clicking through screens. When there is something interesting in the presented data, you can dig deeper with the standard vSphere and ESXi tools. Perfect for fast analysis and health checks.

– Off site Assessment / Analysis; Get the information and save the Excel or CSV dump for a fast overview and later analysis. You will have the complete dump (a point-in-time reference, that is) which you can easily browse through when writing up an analysis/health check report.

– Documentation; The dumped information can be used on or offline to write up documentation. Excel tabs are easily copied in to the documentation.

– (Administrator) reporting; Via the command-line tool, get a daily overview of your VMware infrastructure. Compare today's status with the point-in-time overview of the day before or last week (depending on your schedule and/or retention). Use this information in the daily tasks of adding/changing documentation, analysis, reporting and such.

Release 3.7 Notes

For version 3.7 the following has been added:

  • VI SDK reference changed from 5.0 to 5.5
  • Extended the timeout value from 10 to 20 minutes for really big environments
  • New field VM Folder on vCPU, vMemory, vDisk, vPartition, vNetwork, vFloppy, vCD, vSnapshot and vTools tabpages
  • On vDisk tabpage new Storage IO Allocation Information
  • On vHost tabpage new fields: service tag (serial #) and OEM specific string
  • On vNic tabpage new field: Name of (distributed) virtual switch
  • On vMultipath tabpage added multipath info for path 5, 6, 7 and 8
  • On vHealth tabpage new health check: Multipath operational state
  • On vHealth tabpage new health check: Virtual machine consolidation needed check
  • On vInfo tabpage new fields: boot options, firmware and Scheduled Hardware Upgrade Info
  • On statusbar last refresh date time stamp
  • On vHealth tabpage: search datastore errors are now visible as health messages
  • You can now export the csv files separately from the command line interface (just like the xls export)
  • You can now set an auto refresh data interval in the preferences dialog box
  • All datetime columns are now formatted as yyyy/mm/dd hh:mm:ss
  • The export dir / filenames now have a formatted datetime stamp yyyy-mm-dd_hh:mm:ss
  • Bug fix: on dvPort tabpage not all networks are displayed
  • Overall improved debug information

Who?

RVTools is written by Rob de Veij aka Robware. You can find Rob on twitter (@rvtools) and via his website http://robware.net.
Big thanks to Rob for unleashing yet another version of this great tool!

The tool is currently free, so if you find the application useful, please donate to help and support Rob in further developing and maintaining RVTools.

Let’s get ready to cast your vote: vBlog 2015

Like in the years before, Eric Siebert of vSphere-Land.com has opened the annual vBlog voting for 2015 (http://vsphere-land.com/news/voting-now-open-for-the-2015-top-vmware-virtualization-blogs.html). This year Infinio is the sponsor, and the top 50 are going to receive a special custom commemorative coin. All the blogs listed on the vLaunchpad are on the ballot for the general voting. The top vBlog voting contest ranks the most popular vBlogs based on the community's (your) votes, and the outcome determines the ranking announced on the 19-03 live show (and published on the vLaunchpad website).

Pascalswereld.nl is included on the voting ballot, but please keep in mind there are a lot of better blogs out there. As Eric states: keep in mind quality, frequency, longevity and length of the blogs out there when voting.
And of course your personal preferences ;-)

Ready to participate?

You can place your vote at: http://www.surveygizmo.com/s3/2032977/TopvBlog2015.

Good luck to all the great bloggers out there!

Sources: http://vsphere-land.com, http://info.infinio.com/topvblog2015

 

vExpert 2015 Announcement

Last year was the first year I was awarded vExpert. This year I can happily repeat the following statement: it's a great honour to be awarded and added to this year's list of vExperts. I'm glad to be part of the community, and a big thank you is in order for being selected for the 2015 list.

Looks like a second star to my record of achievements and the year is just starting.

gold-star

The vExpert Listing

The current listing is 1028 rows long. Not sure if this is the official number, but hey, we are all in this together. The full listing and the announcement blog post can be found here: http://blogs.vmware.com/vmtn/2015/02/vexpert-2014-announcement-2.html.

Congratulations to all the 2015 vExperts, returning and new ones. Keep up the good work!

Community Survey: Project VRC State of the VDI and SBC union 2015

Once in a while a request lands in my e-mail box with content to be included on my blog. This time it was Ruben Spruijt (@rspruijt) and Jeroen van de Kamp (@thejeroen) contacting me about their Project VRC 'State of the VDI and SBC union' community survey for this year. As the community is an important and often high-quality source of opinions, input and discussions, I wanted to take part in reaching out for participation in this year's survey. The success of a survey is determined by the number of high-quality responses, and that is just what the IT community delivers more often than not. Not that I have an awful lot of followers, but hey, it just takes two to tango ;-).

This Project VRC, what's that about?

Project VRC is an independent R&D project; VRC stands for Virtual Reality Check. The project was started in early 2009 and focuses on research in the desktop and application virtualization market. Several white papers have been published about performance impact and best practices regarding different hypervisors, application virtualization solutions, (published) desktop OSes, infrastructure solutions and such for VDI and Server Based Computing environments. The previously published white papers can be downloaded from http://www.projectvrc.com/white-papers. Of course you are also invited to take a look there; previous survey white papers can be found there as well.

State of the VDI and SBC union survey 2015

In 2013 and 2014 Project VRC released the first iterations of its community survey about VDI and SBC environments; over 1300 people have participated so far. As times keep changing (and will keep doing so, on and on), the more community knowledge is provided as input, the better the 2015 Project VRC survey will get. It needs your input (again, if you already participated in previous editions). Who? Well, everyone involved in the strategy, design, implementation and/or maintenance of VDI or SBC environments can give helpful input in this survey. It will probably take no more than 10 minutes of your time. So what are you waiting for? A link, probably: open www.projectvrc.com/blog/23-project-vrc-state-of-the-vdi-and-sbc-union-2015-survey or go directly to https://www.surveymonkey.com/r/VRC2015 to fill out the Project Virtual Reality Check "State of the VDI and SBC Union 2015" survey.

This survey will be closed on February 15th of 2015.

– Have fun!

Source: projectvrc.com

Stories from the VMworld Solutions Exchange: PernixData FVP: The what and installation in the lab.

This year my plan is to write up some blog posts about the solutions of the partners at the VMworld Solutions Exchange. Next to the VMworld general sessions, technical sessions, hanging space and Hands-on Labs, an important part of VMworld is the partner ecosystem at the Solutions Exchange. I visited the Solutions Exchange floor several times this year (not only for the hall crawl) and I wanted to make a series about some of the companies I spoke with. Some I already knew and some were new to me. The series is in absolutely no order of importance, but covers companies that all offer cool products/solutions and present them with a whole lotta love to help businesses and technologies get happy. I have been a bit busy these last couple of weeks, so this is probably a bit later than I first wanted, but here goes…

This time it is about getting familiar with PernixData's FVP. I have seen enough of the concepts on the communities and the big bad Intarweb, and I had a chance to see it in real action in some demos and the shots in the technical presentations by PernixData (thanks for those sessions, guys :-) ). Time to take it for a spin in the test lab. But first a little why and what before the how.

PernixData Logo

What is PernixData FVP?

To know about PernixData FVP you first have to start with the issue this software solution is trying to solve: IO bottlenecks in the primary storage infrastructure. These IO bottlenecks add serious latency to the application workloads the virtual infrastructure is serving out to the users. Slow responses or unusable applications are the result. This leads to frustrated and suffering end users (which leads to anger and the dark side), but it also creates extra work for the IT department, such as more help desk calls, troubleshooting and analyzing performance issues, and extra costs to compensate (in personnel and in hardware to patch up the issues). One of the options often used to try and solve the IO puzzle is to add flash, at first mostly to the storage infrastructure as a caching layer or as all-flash storage arrays. Flash has microsecond response times and delivers more performance than magnetic spinning disks with their millisecond response times. The problems with adding flash to the storage infrastructure are the recurring costs and the fact that it does not really solve the problem. Sure, giving the storage processors faster IO response and more flash capacity will be an improvement of some sort versus traditional storage, but it needs constant new investment whenever the limit is reached again, and the IO is still far from the workload. The IO must still travel through the buses, to the host adapter, over a network and through the storage processor to reach the flash, and back the same way for the acknowledgement of the operation or the requested data. Each component adds its own handling and response time, not to mention the extra load all this additional processing puts on the storage processors. Flash normally does its responses in microseconds; that seems to be a waste.
Okay, no problem, we add flash to the compute layer in the host. That is close to the application workloads. Yes, good: performance needs to cuddle with the workloads. We decouple storage performance from capacity: performance in the host, storage capacity in the traditional storage infrastructure. But just putting flash in the host does not solve it as a whole. The flash still needs to be presented to the workload, and locality issues must be handled for VM mobility requirements (fault tolerance, HA, DRS and such). PernixData is not the only one trying to solve this issue, but some competitors present the local acceleration resources via a VSA (Virtual Storage Appliance) architecture. This in itself introduces yet another layer of complexity and additional IO handling, as those appliances act as an intermediate between workload, hypervisor and flash. Furthermore, as they are part of the virtual infrastructure, they may have to battle with other workloads (which are also using the VSA for IO) for host resources. We need a storage virtualization layer that solves the mobility issue, optimizes IO for flash and talks as directly as possible, or a protection mechanism or smart storage software of some sort for IO appliances (there are some solutions out there that handle these as well). The first is where PernixData FVP comes into play.

PernixData Overview

Architecture

The architecture of FVP is simple. All the intuition, magic and smartness is in the software itself. It uses flash and/or RAM in the host, a host extension on those hosts, and a management server component. It currently works only with the VMware hypervisor (a lot of the smart people at PernixData come from previous work at VMware). It can work with block (FC, iSCSI) or file (NFS) backend storage as well as direct attached storage, as long as it is on the VMware HCL.
The host extension is installed as a VMware VIB. The management server requires a Windows server and a database. It is installed as an extension to vCenter and uses a service account (with rights to the VMware infrastructure and the database) or a local user (which can be SSO only); with the latter it uses Local System as the service account.
When adding a second FVP host, that host is automatically added to the default fault domain. By default, local acceleration is replicated to one peer in the same fault domain (with the Write-Back policy). This works out of the box, but you will probably want to match the domains (add your own) and settings to the architecture you are using. The default fault domain cannot be renamed, removed, or given explicit associations.

Installation

After installing the flash or RAM to use for acceleration in the hosts, we can install the host extension via the ESXi shell (locally or remotely with SSH). I downloaded the FVP components and placed the zip with the host extension on the local datastore, as I'm not installing across a lot of hosts. To install a VIB the host must be in maintenance mode.

~ # esxcli system maintenanceMode set --enable true
~ # esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-extension-vSphere5.5.0_2.0.0.2-32837.zip
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: PernixData_bootbank_pernixcore-vSphere5.5.0_2.0.0.2-32837
VIBs Removed:
VIBs Skipped:
~ # esxcli system maintenanceMode set --enable false

Next up is the management server. The Windows installer itself is pretty straightforward. Have a service account, a database, and access to the database and the vCenter inventory for that service account set up beforehand, and you are ready to roll.

Install - FVP Management Server IP-port Install - SQL Express Install - vCenter

During installation, the FVP Web Client add-on is added when registering with vCenter. Restart any active session by logging off and on again, and FVP will show up in the object navigator.

Web client Inventory

Next up, create your FVP cluster as a transparent IO acceleration tier and associate it with the vCenter cluster containing the hosts to accelerate (with the host extension and local resources installed). The hosts in that vCenter cluster will be added to the FVP cluster. With the cluster created and the hosts visible in the FVP cluster, we add the local acceleration resources (the flash and/or RAM) to use. Next we add datastores and VMs; at this level we can set the policies to use. Depending on your environment, certain policies will be in effect. In the advanced tab under manage we can blacklist VMs to exclude them from acceleration (for VADP or other reasons), and we can also define which network to use for acceleration traffic; by default FVP automatically chooses a single vMotion network. The advanced tab is also the place to put in the license information or create a support bundle if we ever need one. The fault domain tab under manage is not just a clever name: here we can see the default domain and add our own when needed.

Add Flash Devices Add FVP Cluster Fault Domain
The monitoring tab is where we have the opportunity to look at what your environment is doing, and an overview of what FVP is doing is shown in the summary tab. These are great places to get some more insight into what your workloads' IO is doing. My test lab is getting acceleration from the moment it is started.

Monitor Tab Performance results writethrough and writeback two login sessions

I can also show a first comparison of the write policies on a Login VSI workload (keep in mind my test lab isn't that much). LoginVSI-Policies

But that is more something for an other blog post about FVP.

Policies

When an application issues a write IO, the data is committed to the flash device; however, this data must always be written to the storage infrastructure as well. The timing of the write operation to the storage infrastructure is controlled by the write policies in FVP. There are two policies to choose from: write-through and write-back.

Write-Through. When a virtual machine application workload issues an IO operation, FVP determines whether it can serve it from flash. When it's a write IO, the write goes straight to the storage infrastructure and the data is copied to the flash device. FVP acknowledges the completion of the operation to the application after it receives the acknowledgement from the storage system. In effect, write IOs are not accelerated by the flash devices under this policy, but all subsequent reads of that particular data are served from flash. Write IOs still benefit from the read acceleration, as those read operations/requests no longer hit the storage infrastructure, leaving more resources available there to serve the write IO.

Write-Back. This policy accelerates both read and write IO. When a virtual machine application workload issues a write IO operation, FVP forwards the command to the flash device. The flash device acknowledges the write to the application first and then handles the write operation to the storage system in the background. With these delayed writes there is a small time window in which the data is on the host but not yet written to the backend storage infrastructure; if something happens to the vSphere host, this could end in data loss. For this, replicas of the data are used: the data is forwarded to the local acceleration device and to one or more remote acceleration devices. This results in the application seeing flash-level latencies while FVP deals with fault tolerance and with the latency/throughput performance levels of the storage infrastructure.

FVP allows setting policies per datastore or per VM. You can have an environment where a great number of virtual machines run in write-through mode, while others run in write-back.
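The essential difference between the two policies is where in the sequence the acknowledgement is sent back to the application. A toy sketch of the two event orders (illustration only, not FVP code):

```shell
# Simulated event order for the two FVP write policies (illustration only).
write_through() {
  echo "1. write sent to storage array (and copied to flash)"
  echo "2. array acknowledges the write"
  echo "3. FVP acknowledges to the application"
}

write_back() {
  echo "1. write committed to local flash (+ replica peers)"
  echo "2. FVP acknowledges to the application"
  echo "3. write destaged to the storage array in the background"
}

echo "-- write-through --"; write_through
echo "-- write-back --";   write_back
```

In write-through the application waits for the array; in write-back it waits only for flash, which is where the latency win comes from, and the replicas cover the window before the destage completes.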

Simple

You notice from the installation and usage of FVP that this product is to give simplicity to it’s operators. Just add some VIB and initial configuration, and your FVP solution is running and showing improvements within a few minutes (when you have some flash/RAM else it’s installing those first). No calculating, sizing and deploying virtual appliances where IO flows through, with FVP it’s extension it is talking to the right place. Yes you will have to have a Windows server for the Management component, but this is out of band of the IO flow.
If you have some experience with the VMware product line and understand the way the PernixData product is set up, the barrier to entry for FVP is super low. You just have to familiarize yourself with the way FVP handles your workload performance/IO (the policies, settings and the places to set them), next to actually knowing some of the workloads in your environment and what they are doing with IO. And there FVP can be of assistance as well: next to accelerating your IO workloads it gives you a lot of insight into what storage IO is doing in your environment by presenting several metrics in the vSphere management interface. And that is another big simplicity win: seamless integration into one management layer.

 

Sources: pernixdata.com

VMworld Barcelona from the notebook: VMware Strategic Summary

At the VMworld conferences in San Francisco and Barcelona we learned that VMware is continuing the strategic priorities it started almost a year ago. Not a real surprise, as the road still offers a lot of opportunities but also some bumps to take. These are some of the notes that I crafted during my visits to keynotes, sessions and such at VMworld Barcelona. While there were no mind-blowing new technical announcements, they do tell about the ever-changing world we are in and what VMware is bringing to help IT businesses with these changes and challenges.

The VMware strategic priorities are divided into three pillars to continue to serve the liquefying IT world. Within these strategies there are no limits, which was also the theme of VMworld this year (maybe no limits is not that good for the VMworld parties ;-) ).

As we learned from the keynotes, the current IT world is moving from a rigid, known, limited IT environment to a more liquid, unknown, unlimited environment that is accessible from everywhere and on every device. Here new business models are needed, where data and applications are presented in a uniform way to the users and the devices they are using.

Strategy - Overview 1

These IT business models need more AND decisions instead of the OR decisions we currently see. We don't build the infrastructure for traditional applications or cloud applications, on- or off-premises; we build the infrastructure for traditional and cloud applications, available on- and off-premises depending on the user and application requirements. The power of AND. This also goes for the mentioned VMware strategic pillars, where cloud is the returning component in the SDDC, the Hybrid Cloud and EUC for cloud mobility. Cloud in all its glory: private, hybrid, mobile, cloud applications and public cloud services.

Strategy - Power of AND

Software-Defined Data Center (SDDC)

Continuing to further virtualize the data center: compute virtualization via the flagship vSphere (now in vSphere 6.0 Beta), network virtualization (via NSX) and storage virtualization (via Virtual SAN/VSAN and Virtual Volumes). This can be done by designing and building your own building blocks (as long as those blocks are on the VMware compatibility matrix) or with VMware Ready partner building blocks optimized for vSphere and Horizon View. Since VMworld VMware has introduced another building component, the VMware Hyperconverged Infrastructure Architecture, in the form of EVO:RAIL and EVO:RACK (the big brother of EVO:RAIL for cloud scalability). These are complete OEM hardware building blocks combining compute, networking and storage, with VMware vSphere and VSAN ready to go (a somewhat simplified explanation). This reduces deployment times and complexity, and optimizes resources and performance. Rack, cable and create an initial configuration from defined wizards. Deploy VMs in 15 minutes with pre-defined VM configuration blocks, or create your own VM configuration based on your needs, security and such. The latter probably takes a little more than the announced 15 minutes, but still significantly less time than when using your own building blocks or VMware Ready blocks.
At the partner level there is news as well: HP is introduced as a partner for EVO:RAIL, networking and enterprise mobility. Exciting to see what that will bring from the partner eco-sphere.

Strategy - SDDC Compute | Strategy - SDDC Network | Strategy - SDDC Storage

End-User Computing (EUC) in a Mobile Cloud Era

This is one of the layers needed for providing applications and data that run on VMware software products. In the last year there were several knowledge investments (or takeovers) that were needed to put the VMware EUC mobile cloud strategy in the right place on the IT world map. This started with the acquisition of Desktone for Desktop as a Service (DaaS), followed by AirWatch as a leader in enterprise mobile device management and the latest acquisition, CloudVolumes, for delivering virtualized applications (announced around VMworld US). Next to this, VMware updated its own product from a VDI product to a hybrid VDI published application/desktop product suite with the VMware Horizon Suite updates. Additionally VMware announced Just-in-Time Desktops for mobile users, Horizon Flex for offline BYOD desktops and Project Fargo for rapid duplication and sharing of resources of EUC virtual machines.

Hybrid Cloud

Cloud is everywhere. A strategic model with the Hybrid Cloud pillar positioned between the SDDC and EUC pillars may be a little unclear, as it is not a pillar on its own (but that is the whole AND from Pat's keynote). The cloud pillar is partly for transition and partly for allowing new cloud-related functionality from and outside of the VMware product groups. You can also see this a different way: SDDC and EUC are delivered in the cloud and for the cloud, whichever cloud definition applies. But I can see that a business model and strategy require a little more than just a theoretical term that is everywhere.
The VMware strategy breathes and revolves around cloud. Cloud is presented as services for the private cloud (the local on-premises data center services in the SDDC) and the public cloud (the publicly accessible services and cloud applications). Around this are tools to move as seamlessly and as fast as possible from one cloud to the other, without affecting but rather serving the user. Users move from on-premises workspaces, to traveling-worker workspaces, back to the office and to home. All those places have their devices and infrastructures, and all need a form of interaction with the company data and applications. In the private cloud the important products are those of the SDDC. To move from a private to a hybrid cloud, VMware earlier introduced vCloud Hybrid Service. This got more body (more services, like DB as a Service) and a re-branding to vCloud Air. At VMworld a new vCloud Air location for the EMEA market was announced: Germany will offer a new vCloud Air location.
This last year the main usage of the hybrid cloud was as a Disaster Recovery endpoint and for testing and development. This needs to expand into other vCloud services like (but not limited to) virtual private cloud (a starting point for IaaS in the cloud for old and new workloads), DB as a Service (DBaaS, MSSQL and MySQL) and further use of DRaaS.

The IT business experimental phase of cloud is over; now the professional phase is starting, with more and more production workloads landing in the cloud. The growth from 2% of workloads in the cloud in 2009 to 6% in 2014 does not show a lot of cloud adoption, but the exceptional growth in the last year (to that 6%) shows faster cloud adoption. Are you next?

vCloud Air is not only positioned for VMware-related workloads; vCloud Air is also meant to host new cloud applications for mobile devices, or legacy applications created in your own DevOps environment. vCloud Air is a central platform that allows other hypervisors than just VMware's own.

vCloud Connector (free) as a standalone product, or integrated with vCloud Director and vRealize Automation (the artist formerly known as vCloud Automation Center or vCAC), is one of the tools to move workloads from the private cloud to vCloud Air.

The vCloud Air Virtual Private Cloud OnDemand beta is open: an on-demand service offering the flexibility to rapidly expand capacity and to integrate with the existing local infrastructure. A workspace in minutes and within a few easy steps, with direct access to cloud services that are the same as the onsite VMware infrastructure. Just have a credit card ready and pay per minute for the resources you use. There is support for 5000+ VMware-certified applications and 90+ operating systems.

An overview of this and other Beta programs with these announcement can be found at my previous blogpost: https://pascalswereld.nl/2014/10/15/vmworld-barcelona-keynote-mentioned-beta-and-early-access-programs-link-list/.

Docker containers

A combined architecture of VMs and application containers is nothing new for this VMworld. More and more organizations are rapidly adopting the Docker platform, as it allows them to ship applications faster. Whether these applications are delivered to bare metal, a virtualized data center or public cloud infrastructures should not matter. For IT businesses seeking to efficiently build, deliver and run enterprise applications, Docker and VMware deliver the best of both worlds for developers and IT/operations teams. Docker integration is being brought to several VMware products.

Cloud management

Management of the private and public cloud, or physical environments, is delivered via the vRealize suite. vRealize is a suite of management tools for SDDC compute, network and storage virtualization, cloud and EUC (vRealize for Horizon). vRealize is a collection built partly from re-branding of, and new features for, well-known components. Application and infrastructure automated provisioning is done via vRealize Automation (formerly known as vCloud Automation Center or vCAC), management and monitoring via vRealize Operations (vCenter Operations Management) and IT billing and cost management via vRealize Business (ITBM, or IT Business Management). Not just new names, but also improved visualization, proactive alerting, improved capacity planning, project management with what-if scenarios and automated resolution of found issues. And not just for the VMware products: also provisioning and management of physical platforms or other hypervisor platforms such as Hyper-V, KVM or OpenStack clouds.

Announcement overview Strategy - SDDC Management

 

+++ Are you ready to go beyond your current limits?

Looking to find more information on VMware products, take a start here: http://www.vmware.com/products/?src=vmw_so_vex_pheld_277.

Next up I will be drafting, from my VMworld notes, some posts about product demos and technical briefings from my multiple visits to the partner ecosphere at the VMworld Solutions Exchange. I will be doing (or at least trying) a series about the technologies these partners and exhibitors are offering, so stay tuned.

Sources: vmware.com.

 

VMworld Barcelona: Keynote mentioned Beta and early access programs link list

In the keynote sessions several Beta and early access programs were mentioned for the VMware innovations. Betas are excellent if you have access to a lab, and some are available as hosted beta programs if you happen to lack the resources. Get an early look, play, try and break. But do also comment, discuss and return feedback.

So where are those Betas? I have tried to put up a list of the mentioned Beta program URLs for your convenience (okay, it started as my own reference, but I can share ;-)).

vSphere 6.0

VVOLS

VSAN 2.0

vCloud Air

vRealize Air

VMware Integrated OpenStack (VIO)

Please join in the fun and participate in these beta programs (and of course others as well).

Have I missed one or more? That could well be true, but I call sleep deprivation as my witness. Please let me know.

Sources: vmware.com.

 

VMworld Barcelona: day of the tentacle (or my first VMworld 2014 day)

My first VMworld day, or actually pre-day, was probably the same as for a lot of the VMworld visitors: travelling inbound, finding my hotel, registration and vRockstar. Plus trying a little bit of tourist mode.

I had an afternoon flight from the Netherlands that arrived 30 minutes behind schedule (something to do with the weather above France). But on the plus side, this let me arrive in time for the shuttle and the venue start-up. As Sunday registration means no lines, I had my badge in no time. A good addition is the QR express check-in. Next up: get the T-10 metro card (check) from the information stand. I also wanted to pick up the VMworld backpack, but I was denied, apparently because I am here on a blogger's pass. Too bad; now I have to figure out a way to transport my stuff around, as I innocently counted on the backpack.


After that it was time to hop on the metro shuttle and get from the Fira station to Plaça d'Espanya, where my hotel should be. Fortunately it was there, just a little walk around the plaça, because the first time I take a metro exit it is always the one on the opposite side of where I need to be. But hey, with the Barcelona weather I don't mind.

After some freshening up, after-travel drinks and some dinner, I went walking around town. After a few miles of wandering and looking around, I found myself at my last target of the day: the vRockstar party at the Hard Rock Cafe. That was a blast.

Monday will be a day for some further walking around the venue, Partner day and such. Maybe see you around?