My co-worker, Jason, was tasked with our VMware View implementation, and I’m glad to report that it’s been largely successful and, more importantly, easy to deploy. I thought now was a good time to reflect and share how we came to the decision to deploy virtual desktops, where we plan to use them, and what components we have implemented.
I can’t believe it, but it’s been almost a month since my last post. And what a month it’s been around my work. This has been one of the busiest and most difficult months that I can remember with the company. I have my hands in several different technologies; VMware and our blades are just two of my primary responsibilities. Over the past month, though, we experienced a catastrophic failure of one of our blade enclosures. The failure only occurred once, but the fall-out from it has taken almost a month to work out. And honestly, we’re still not through working out the kinks.
Of course, my story has to begin on Friday the 13th… Sometime around 9:00am, we started getting calls for both our SQL 2005 database cluster and our Exchange cluster. After investigation, we found that the active nodes were both in the same enclosure, and a third ESX host in the same enclosure was experiencing problems, too. The problems were affecting both network and disk IO on the blades. All of our blades boot from SAN, so the IO had to be a Fibre Channel issue.
Several hours later, we were finally able to get enough response out of the nodes to force a failover of services for Exchange, shortly followed by SQL 2005. As I worked with HP support, nothing improved on the affected servers. The problem was finally diagnosed as a faulty mid-plane on the enclosure.
While waiting for the mid-plane to be dispatched to the field service folks, I requested that we go ahead and do a complete power-down on the enclosure and bring it up clean. This required physically removing power from the enclosure after powering down everything that I could from the onboard administrator.
After the reboot, everything looked much healthier. The blades came back to life and everything began operating as expected. After intense discussions on the HP side, we reseated our OAs and the sleeve that they plug into on the back side of the enclosure. The net outcome was the same – everything still operating well. Neither the OAs nor the sleeve was loose, so we doubted that was the cause.
One nugget I learned from HP support (please vet this information on your own) is that the Virtual Connect interconnect modules require communication with the Onboard Administrators (OAs). I’m still not sure I fully understand it, but HP support did tell us that if VC lost communication with the OA, it’s possible that this caused our problems. If so, it smells like very, very bad engineering and design…
Continued investigation on HP’s part has pointed us back to the original diagnosis – a faulty mid-plane. We returned to that conclusion only by default, however: it is the only piece of hardware common to all of the problems. Our only other conclusion was that this was a very bad “hiccup” – which obviously buys us no real peace of mind…
So, sometime soon, we will be replacing the mid-plane of our enclosure. I have, of course, lost some faith in the HP blade ecosystem. We have plans to migrate our corporate VMware cluster onto blades, as well as some Citrix and other servers. Losing an enclosure like this has unnerved those plans. We were fortunate to have dragged our feet, with only 3 blades populated and serving anything at the time this happened. I will post updates as we move forward…
I was invited to write a guest entry for Matt Simmons’ Standalone SysAdmin blog while he’s on vacation. My entry about patch management in virtual environments appeared today… http://standalone-sysadmin.blogspot.com/2009/02/software-patching-is-other-benefit-of.html
I’m wondering what everyone else in the world does… I’m investigating running Virtual Center/vCenter as a virtual machine in order to remove two additional physical servers running Windows 2003 Enterprise in a cluster. I’m also very interested in running vCenter as a Linux virtual appliance when that becomes available. Is anyone out there running vCenter as a virtual machine? If so, how do you do it? Do you run it on a separate ESX cluster from the clusters it manages? How do you protect it in the event of a storage or ESX failure of some kind? Does anyone NOT do this for specific reasons? Anyone else running vCenter 2.5 in a Windows cluster (MSCS)? If not, how do you protect your vCenter to make it resilient to failures? Just looking for other people’s personal experiences, not necessarily a corporate recommendation…
As a side note, we recently upgraded our farm from Virtual Center 2.0 to 2.5. Since that time, I’m seeing some things that I don’t think are working properly. We have Update Manager and Converter Enterprise installed, and both services seem to go AWOL on us from time to time, disconnecting or just going unavailable from the Plug-ins in Virtual Infrastructure Client. Not seeing much so far in the VMware KB to help with running these peripheral services in a Windows cluster… Any help there would be greatly appreciated too…
Not long ago, we did a demo of VMware’s VDI product at HTC. To be honest, we were underwhelmed at the time. Fast forward to December, and VMware remedied that problem with the release of VMware View. We are just at the beginning of our trial of the View 3.0 product and haven’t gotten our first virtual desktop running under it, but already our expectations are set pretty high.
Yesterday, we had a conference call with our VMware partner and some of VMware’s technical and sales resources. We talked specifically about the packages available in the View product line and what the two basic packages buy us. VMware is offering Enterprise and Premier bundles of the VMware View software. Enterprise is basically an upgraded version of VDI. Premier, on the other hand, includes the features we were looking forward to using – namely thin provisioning, View Composer, and maybe even VMware ThinApp.
Going into the virtual desktop foray, we knew that storage was likely our biggest hurdle. SAN storage is much more expensive than the SATA drives we currently bundle in desktops, so gigabytes come at a premium in the virtual world. The thin provisioning/linked clones technology is huge as we drive further down this road. I think it has the potential to cut our storage utilization to a tenth of the need we estimated in November.
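To make that “one-tenth” figure concrete, here’s a back-of-the-envelope sketch. Every number below (base image size, per-desktop delta disk size) is an assumption for illustration, not our actual projection:

```python
# Rough storage comparison: full clones vs. linked clones.
# All sizes are assumed for illustration only.
desktops = 55        # matches the number of Pano units we bought
base_gb = 40         # assumed size of a full desktop image
delta_gb = 4         # assumed per-desktop delta disk under linked clones

full_clones_gb = desktops * base_gb                  # every desktop is a full copy
linked_clones_gb = base_gb + desktops * delta_gb     # one replica plus small deltas

print(full_clones_gb)                     # 2200
print(linked_clones_gb)                   # 260
print(linked_clones_gb / full_clones_gb)  # ~0.12, roughly a tenth
```

The savings obviously hinge on how small the per-desktop deltas stay, which is something we’ll have to watch once real users start customizing their desktops.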
All that said, we were concerned about VMware’s price point for the product, especially once I saw that we needed the Premier bundle. But VMware surprised us: the list price is only $200 per virtual desktop, with a buy-in as low as 10 virtual desktops in a starter pack. I haven’t seen our actual quote from our partner yet, and I assume we may get some sort of price break, but even at list the value of the bundle is great in my eyes.
The bundle includes not only the licenses to run 10 virtual desktops, but also an ESX server license – not just the free hypervisor, but the full VI3 Enterprise license – and a copy of VirtualCenter Foundation (a scaled-down version of vCenter limited to 3 ESX hosts). Our thought is to purchase two of these 10-user starter bundles, which will give us two ESX servers on which to run our virtual desktops and some level of redundancy for them – a cluster utilizing HA, DRS and VMotion.
We still must cross the Microsoft licensing hurdles. And, I read earlier this week that Microsoft has again changed their licensing, so we’ll see what impact that has on us, but for now, we’re making progress. Should be an interesting week next week looking into thin provisioning with my co-worker. I’m pretty excited.
As a followup to my previous posts about thin clients and our research, we made a first step into the virtual desktop arena. We purchased 55 Pano Logic units just before the end of last year. They arrived and have been safely stored away while we work out the rest of our implementation plans. This is just our first step at HTC, and we see the potential of choosing one or maybe two more solutions that meet our needs in other areas. We are unsure how the Pano units will work in our business offices. Our initial roll-out will be in our central offices, and we may expand upon that.
We still need to complete the backend infrastructure to handle our roll-out. We are deciding now whether to locate our initial virtual desktop pools in the same cluster as our other virtual servers, or whether to deploy a new cluster. We also have to look at DHCP for our entire network, as our current solution does not appear to be able to hand out the additional options the Pano devices need. I hope to do a more detailed entry about the whole process once we get further down the road.
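For anyone weighing the same DHCP question, the kind of configuration involved looks something like the sketch below for ISC dhcpd. To be clear, the option space name, option code, and vendor-class match string here are all placeholders I made up for illustration; the actual codes and value formats for Pano devices would have to come from Pano Logic’s documentation:

```
# Hypothetical vendor-specific DHCP options for zero clients (ISC dhcpd syntax).
# The space name, option code, and match string are placeholders.

option space pano;
option pano.manager-address code 1 = ip-address;

class "pano-clients" {
    # Match clients whose vendor class begins with "pano" (placeholder match).
    match if substring(option vendor-class-identifier, 0, 4) = "pano";
    vendor-option-space pano;
    # Placeholder address of the broker/manager the devices should contact.
    option pano.manager-address 192.0.2.10;
}
```

The point is that the DHCP server has to support defining custom option spaces and handing them out per client class; it’s that capability our current solution appears to lack.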
I had a conversation with one of the senior systems admins in my group today. The conversation was basically: why is it easier to get to VMware’s patches, and to know what has been released to you, than it is with Microsoft’s patches? Beyond the basic “well, Microsoft has way more software to support” answer, I came to the conclusion that VMware’s website organization of patches for their products is far superior to Microsoft’s, and their email alerts are actually useful.
Twas the night of new years, and all through the house, not a creature was stirring, not even a mouse. The little one had passed out, and we’d put her to bed. We had all celebrated with Carson, Dick Clark and the rest. Mom in her kerchief and I in my cap, had just settled in for a long winter’s nap. When all of a sudden, I awoke to a clatter, it must be my text paging, I wonder what is the matter? I spring from my bed and stumble to the Mac, oh man, my VMware at work has gone all to crap.
That’s how my 2009 started… about 13 hours later, I finally left work and resumed my long-interrupted nap.
There is a nice article out in one of the VMTN blogs which covers all the rebranding that VMware is attempting with their next generation products. I’m not a big fan of rebranding (think Citrix’s Xen-everything confusion) as I think it muddies the water and causes brand confusion, but for some reason or another companies like to do this. VMware is at least doing so with a new release of software and I understand their reasons behind the rebranding, but it is something new to learn.
Anyways, the link. You can find more information on all the renaming here: http://blogs.vmware.com/vmtn/2008/12/do-they-smell-a.html
Early this morning, VMware released the VMware View product. As you may recall, VMware View was one of the announcements from VMworld 2008. It is essentially a renaming and new version of the Virtual Desktop Infrastructure product. VMware rebranded the product line VMware View and demoed quite a few new features and capabilities during the conference and today we find it making its official debut.
Among the improvements that VMware View 3 brings are thin provisioning for the desktops; the ability to run a golden master merged with users’ customized hard drives, including the ability to update the golden master; offline virtual desktops; and a more automated experience for virtual desktop environments.
VMware touts this release as a Unified Client Solution. The concept is that the desktop will follow the user, regardless of their end-point device, and will allow them to access their same familiar desktop. The virtual desktops will run on a variety of types of devices. It introduces two ways to run the virtual desktop – either in the datacenter or on the end-point device, if capable. In the past, virtual desktops were relegated to the datacenter and were unable to run outside of the backend Virtual Infrastructure. But, this release changes that.
The most exciting feature, at least from my perspective, is the offline desktop. View has a check-out feature, much like the local library. From the VMware presentation a couple weeks ago in Colorado Springs, this feature will allow you to boot your laptop into a thin hypervisor (a la ESX) on the client, log into the View Manager, check out your virtual desktop, download the files onto your laptop, and run it locally. For the mobile workforce, this is a great capability. The way that I’ve seen this run, at least so far, is via a bootable USB thumb drive. The thumb drive boots you into the hypervisor and interface for VDI. Once the virtual desktop has downloaded, you then execute it on the laptop or other local device (some vendors, such as Wyse, are introducing thin laptop products). After you’ve finished using the virtual desktop offline, you can reintroduce it to the backend infrastructure, which merges the changes back into your online copy of the virtual desktop. This feature carries an Experimental tag (see this post for an explanation).
In addition to this, View 3 also introduces thin provisioning and the use of VMware’s Linked Clones technology. The technology has long existed in the Workstation and Lab products, and it has finally been bundled into a solution for virtual desktops. From a systems administrator’s perspective, the biggest challenge for virtual desktops is moving their storage off of cheap SATA drives internal to a client device and onto much more expensive datacenter storage. Even if you’re talking about a SATA storage array, the costs are much higher than those of a PC’s hard drive. By using the space on your enterprise arrays more economically, I think VMware View 3 solves this dilemma. In our particular environment, we really don’t have “cheap” storage to use for a virtual desktop deployment.
We’ll begin testing the new product in house within a week and I’ll post again about any impressions once we begin that trial process.