Short version: To grow a VM with an RDM in virtual compatibility mode, you must VMotion the VM after performing your storage rescans so the hosts recognize the additional space. After the VMotion, the guest OS will see the additional space and be able to access it.
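If you would rather script it, here is a minimal PowerCLI sketch of the same procedure. The vCenter, cluster, VM and host names are all hypothetical, so treat this as a starting point rather than a recipe:
#Connect to vCenter (hypothetical server name)
Connect-VIServer -Server vcenter.example.com
#Rescan HBAs and VMFS on every host in the cluster so the grown LUN size is detected
Get-Cluster "ProdCluster" | Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null
#VMotion the VM to another host; after the migration the guest OS can use the added space
Move-VM -VM "FileServer01" -Destination (Get-VMHost "esx02.example.com")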
When Apple last refreshed the Mac Mini line, it introduced a new version with Mac OS X Server preloaded that dropped the optical drive in favor of dual hard drives. This new configuration is quite popular, according to a recent post on AppleInsider. That leads me to wonder: what could be next? I’ve speculated about a home server before…
Mac OS X Server has been incrementally growing in features and capabilities with each major release. Its market penetration and target have largely been Mac-heavy businesses. It has packaged several new products in the last two releases to help Mac organizations reduce their dependence on Microsoft solutions – new products like iCal Server and Address Book Server as well as a mail server, all of which are based heavily on standards, which is great for interoperability across all client platforms.
But, with the introduction of the Mac Mini server, Apple has tilted the OS X Server offering more towards small businesses (who might not purchase a Mac Pro or Xserve to run OS X Server). It could also serve some home users. Could this be the first, quiet step towards a home server from Apple? I don’t have an answer and this is purely speculation, but let me continue.
As I mentioned in my post about lessons learned on the ESX4 rollout, we had a pretty serious hiccup with our storage and the ESX systems in December while trying to bring up our ESX4 environment. The primary trouble uncovered was what I’ll call “controller ping-pong”.
An EVA normally has two controllers (maybe more, I’m not primarily a storage guy) and those handle all the requests received through the SAN. For every LUN, one controller is its master. Both controllers can accept requests for the LUN, but only one actually handles the access. If the controller on fabric A is the primary but the controller on fabric B is getting more requests, eventually the EVA swaps control of the LUN to the controller on fabric B, following wherever the majority of requests are coming from.
This behavior only becomes a problem if you have hosts configured to access the LUN on different fabrics. ESX4 is ALUA (asymmetric logical unit access) aware, meaning it should automatically determine the optimal path. In the case of an EVA, HP support tells me the array is supposed to respond to an ALUA request for the optimal path with the controller that is the master of the LUN.
If you, like us, have an ESX 3.5 cluster with preferred paths set up, you should proceed with caution. The ALUA information apparently isn’t shared between clusters, and if your clusters end up with different optimal paths, you could see controller ping-pong as requests are sent down both fabrics and the LUN bounces between the two: more requests on fabric A, then more on fabric B, forcing the EVA to keep switching masters.
So, while in this migratory state, I think my safest route is to configure the ESX4 hosts to use a preferred path like the ESX 3.5 cluster nodes do. I hate to move away from the default ESX configuration, and this isn’t an official recommendation from HP support, but it certainly makes the most sense to explicitly define which paths are used (except during a failure).
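For what it’s worth, here is a rough PowerCLI sketch of what I mean. The host name, LUN canonical name and controller WWN below are hypothetical, so verify the cmdlet behavior in your own environment before relying on it:
#Grab the LUN on the ESX4 host (hypothetical canonical name)
$lun = Get-VMHost "esx4-01.example.com" | Get-ScsiLun -CanonicalName "naa.600508b4000abc120000500001c00000"
#Switch the LUN from the ALUA-selected default to a fixed policy, matching the ESX 3.5 cluster
Set-ScsiLun -ScsiLun $lun -MultipathPolicy "Fixed"
#Mark the path through the fabric A controller (hypothetical WWN prefix) as preferred
Get-ScsiLunPath -ScsiLun $lun | Where-Object { $_.SanId -like "50:00:1F:E1:*" } | Set-ScsiLunPath -Preferred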
I post this because I feel like there have to be other HP StorageWorks customers who have the same situation or have experienced something similar. I would love to hear from you…
Following my November upgrade to Flex-10 Virtual Connect on my blade enclosures, I have begun my rollout and upgrades to ESX4 on a new blade cluster as well as one existing cluster. There are quite a few lessons that I’ve learned on my rollout.
Upgrading Virtual Connect Ethernet to VC-Ethernet Flex-10 on an HP Bladesystem
We recently purchased the upgrade of our two blade enclosures to HP’s Flex-10 Ethernet technology, and last week, with the help of a partner, we performed the install and upgrade of the Virtual Connect domain.
HP Virtual Connect Flex-10 allows an administrator to take a physical 10Gb connection and partition it into up to four appropriately sized logical NICs presented to a server blade. Virtual Connect Flex-10 incorporates all of the benefits of Virtual Connect (previously explained here) while allowing for these additional logical NICs. With a Flex-10 Ethernet interconnect module and the two onboard Flex-10 NICs on a blade, that allows for 8 NICs total – something formerly only available if all 8 interconnect bays and the two mezzanine slots were populated with Ethernet modules. This greatly reduces the cabling required at the blade enclosure and condenses the number of connections required.
The Flex-10 interconnect modules include SFP connectors which are capable of copper or fiber connections depending on the GBIC installed.
Upgrade Process
We expected going into the installation that we would need to recreate the Virtual Connect domain (a domain is the configuration, server profiles and settings for a Virtual Connect deployment on a specific enclosure or set of enclosures). Since we only had 5 server blades installed and operating, we weren’t overly concerned. Our primary concerns were to ensure that the same Virtual Connect MAC and WWID settings were reinstalled and that our recreated server profiles matched their MAC address and WWID settings from the prior installation. So, to ensure we had all the information, we made printouts of the server profiles, Ethernet networks and Fibre Channel configuration and then proceeded to install the modules.
We shut down all of the existing blades and took them out of service. We removed the existing VC-Ethernet modules from Bays 1 and 2 and installed the new VC-Ethernet Flex-10 modules in those bays. The VC-Eth modules in Bays 1 and 2 contain the domain configuration, so once they were removed, the domain had to be completely recreated. The new modules were at a higher firmware revision (2.12) than the original modules (2.01).
What surprised us during installation was that we were able to successfully restore our original Virtual Connect domain configuration onto the Flex-10 modules. Once restored, we were able to see all of the configuration, and the only items showing an error were the Ethernet networks and shared uplink sets; these didn’t have any active ports assigned to their uplinks. The new VC-Eth Flex-10 modules use a different port numbering scheme to differentiate them from the original VC-Eth modules. After reassigning ports to the uplink sets, all of the Ethernet networks appeared to be back online. The firmware differences didn’t appear to be a problem after the initial restore.
After booting servers in the enclosure (all are boot-from-SAN and booted fine), we determined that we were having some network connectivity issues. After troubleshooting, we upgraded the VC firmware on the modules. This proceeded without problems, but upon reboot, the VC-Ethernet Flex-10 modules would not come back online. After more troubleshooting, we upgraded the enclosure firmware as well. With the newer firmware everywhere, all issues seemed to be resolved and the enclosure was back online. Final time to upgrade was around 5 hours, but much of that was spent waiting on Virtual Connect modules to reboot and wondering why they wouldn’t come back online.
Our second enclosure was expected to be a slam dunk after the largely successful first upgrade, especially since we knew about the firmware issue going into this round, but we hit unexpected problems. This enclosure had a midplane replacement early in its life, and apparently that created confusion between the Virtual Connect domain and the enclosure serial number. We received errors during the restore, and even after we forced the restore and upgraded the firmware, the Ethernet networks never came back online.
This forced us into a recreate scenario. After recreating this VC domain by hand, everything came online and worked as intended. I still believe that this is the cleaner way to handle the upgrade, although the restore worked on the first enclosure and presented very few problems.
Next… Configuring Flex-10 FlexNICs
Deja vu. Well, almost. I sat in on a similar session last year, and I wondered what has changed now that vSphere is available and what new expectations there could be for virtualizing Exchange. I found answers. First of all, as the speaker put it, VMware has eaten its own dogfood and virtualized Exchange 2007 for internal consumption. With approximately 55,000 mailboxes, that is an impressive feat in itself.
Beyond internal consumption, all data points to Exchange evolving into a better workload to run within virtualization. Much of that can probably be attributed to Microsoft’s own virtualization technology, but Exchange on ESX benefits just the same. Performance gains out of ESX 4 make for a good combination with the improved I/O for Exchange 2007. Initial data for Exchange 2010 continues the trend of making Exchange a better workload in general and making it more appropriate to virtualize.
With the advent of vSphere, VMware has released a host of new features. Today I am going to talk about VMware Fault Tolerance. I’ll give you an overview and talk to you about the requirements. Next I’ll walk you through the setup and configuration, and finally we will discuss both the benefits and pitfalls of Fault Tolerance. Oh, and I will provide you with some links to documentation, both throughout the blog and again at the end. Just a little light reading for a rainy day, in case you get bored. I almost forgot! I will also show you a demo of Fault Tolerance as I test failover. * Note: to see the video, please open this in a full window.
Overview:
Officially, VMware states the following
“Maximize uptime in your datacenter and reduce downtime management costs by enabling VMware Fault Tolerance for your virtual machines. VMware Fault Tolerance, based on vLockstep technology, provides zero downtime, zero data loss continuous availability for your applications, without the cost and complexity of traditional hardware or software clustering solutions.” Source http://www.vmware.com/products/fault-tolerance/overview.html
But what does that mean? Basically, Fault Tolerance protects you by running a backup copy of your VM on a second host. Possible? Yes, but as you will see, there are some stiff requirements. However, for those that need continuous availability and are running VMware, it is a great offering. And for the rest of you, it gives you a compelling reason to take a hard look at VMware and vSphere.
VMware has a technical description here: How Fault Tolerance Works, and a use case example here: Fault Tolerance Use Cases.
My first thought is that Fault Tolerance will allow you to achieve incredible uptime with pre-existing, smaller legacy applications, and that it could give smaller shops continuous availability without having to know or configure Microsoft Clustering.
Requirements, Interoperability and Preparation:
There is an extensive amount of documentation around Fault Tolerance Configuration Requirements, Fault Tolerance Interoperability, and Preparing Your Cluster and Hosts for Fault Tolerance. Therefore, I will quote the following from those links.
Fault Tolerance Configuration Requirements
“
Cluster Prerequisites
Unlike VMware HA which, by default, protects every virtual machine in the cluster, VMware Fault Tolerance is enabled on individual virtual machines. For a cluster to support VMware Fault Tolerance, the following prerequisites must be met:
■ VMware HA must be enabled on the cluster. Host Monitoring should also be enabled. If it is not, when Fault Tolerance uses a Secondary VM to replace a Primary VM no new Secondary VM is created and redundancy is not restored.
■ Host certificate checking must be enabled for all hosts that will be used for Fault Tolerance. See Enable Host Certificate Checking.
■ Each host must have a VMotion and a Fault Tolerance Logging NIC configured. See Configure Networking for Host Machines.
■ At least two hosts must have processors from the same compatible processor group. While Fault Tolerance supports heterogeneous clusters (a mix of processor groups), you get the maximum flexibility if all hosts are compatible. See the VMware knowledge base article at http://kb.vmware.com/kb/1008027 for information on supported processors.
■ All hosts must have the same ESX/ESXi version and patch level.
■ All hosts must have access to the virtual machines’ datastores and networks.”
“
Host Prerequisites
A host can support fault tolerant virtual machines if it meets the following requirements.
■ A host must have processors from the FT-compatible processor group. See the VMware knowledge base article at http://kb.vmware.com/kb/1008027.
■ A host must be certified by the OEM as FT-capable. Refer to the current Hardware Compatibility List (HCL) for a list of FT-supported servers (see http://www.vmware.com/resources/compatibility/search.php).
■ The host configuration must have Hardware Virtualization (HV) enabled in the BIOS. Some hardware manufacturers ship their products with HV disabled. The process for enabling HV varies among BIOSes. See the documentation for your hosts’ BIOSes for details on how to enable HV. If HV is not enabled, attempts to power on a fault tolerant virtual machine produce an error and the virtual machine does not power on.
Before Fault Tolerance can be turned on, a virtual machine must meet minimum requirements.
■ Virtual machine files must be stored on shared storage. Acceptable shared storage solutions include Fibre Channel, (hardware and software) iSCSI, NFS, and NAS.
■ Virtual machines must be stored in virtual RDM or virtual machine disk (VMDK) files that are thick provisioned with the Cluster Features option. If a virtual machine is stored in a VMDK file that is thin provisioned or thick provisioned without clustering features enabled and an attempt is made to enable Fault Tolerance, a message appears indicating that the VMDK file must be converted. Users can accept this automatic conversion (which requires the virtual machine to be powered off), allowing the disk to be converted and the virtual machine to be protected with Fault Tolerance. The amount of time needed for this conversion process can vary depending on the size of the disk and the host’s processor type.
■ Virtual machines must be running on one of the supported guest operating systems. See the VMware knowledge base article at http://kb.vmware.com/kb/1008027 for more information.”
Basically, you need to make sure your cluster is set up correctly, that your CPUs and NICs are supported, and that the networking is set up for FT. Additionally, you need to make sure your guest VM is supported and configured with supported virtual hardware, like Fault Tolerance support checked on the hard disks as you create them. I provided the full list above so that everyone is on the same page with a full understanding of the necessary requirements. Here are a couple of screenshot examples: the first is of the network configuration, and the next is of the VM’s hard drive.
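If you would rather script the networking piece shown in that first screenshot than click through the vSphere Client, here is a hedged PowerCLI sketch. The vSwitch, port group names and IP addresses are hypothetical placeholders:
#Pick the host and the vSwitch that carries the VMkernel traffic
$esx = Get-VMHost "esx4-01.example.com"
$vsw = Get-VirtualSwitch -VMHost $esx -Name "vSwitch1"
#Dedicated VMkernel port for Fault Tolerance logging traffic
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup "FT-Logging" -IP 192.168.20.11 -SubnetMask 255.255.255.0 -FaultToleranceLoggingEnabled:$true
#VMkernel port for VMotion, if the host does not already have one
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup "VMotion" -IP 192.168.10.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true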
Fault Tolerance Interoperability
“
The following vSphere features are not supported for fault tolerant virtual machines.
■ Snapshots. Snapshots must be removed or committed before Fault Tolerance can be enabled on a virtual machine. In addition, it is not possible to take snapshots of virtual machines on which Fault Tolerance is enabled.
■ Storage VMotion. You cannot invoke Storage VMotion for virtual machines with Fault Tolerance turned on. To migrate the storage, you should temporarily turn off Fault Tolerance, and perform the storage VMotion action. When this is complete, you can turn Fault Tolerance back on.
■ DRS features. A fault tolerant virtual machine is automatically configured as DRS-disabled. DRS does initially place a Secondary VM, however, DRS does not make recommendations or load balance Primary or Secondary VMs when load balancing the cluster. The Primary and Secondary VMs can be manually migrated during normal operation.”
Here is a link to some other features that are incompatible; it is worth the read.
That covers most of the requirements. Additionally, you want to make sure host certificate checking is enabled and that you have configured everything correctly as stated above.
“Here are the steps from Preparing Your Cluster and Hosts for Fault Tolerance.
The tasks you should complete before attempting to enable Fault Tolerance for your cluster include:
■ Enable host certificate checking (if you are upgrading from a previous version of Virtual Infrastructure)
■ Configure networking for each host
■ Create the VMware HA cluster, add hosts, and check compliance”
Whew, that section was fairly boring, my apologies.
Setup and Configuration:
Let’s hear the drum roll and enable Fault Tolerance! Here are the steps.
Turn On Fault Tolerance for Virtual Machines
- Right-click on the VM and select Fault Tolerance.
- Click Turn On Fault Tolerance.
Yep, too easy. And yes, I made those steps up by enabling Fault Tolerance without following the directions. Please check the above link for the official directions.
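If you prefer scripting, I am not aware of a dedicated FT cmdlet in PowerCLI at this point, but the vSphere API exposes the task behind that menu item. Here is a hedged sketch (the VM name is hypothetical, and it assumes the cluster already meets the requirements above):
#Get the VM and its underlying vSphere API view object
$vm = Get-VM -Name "Exchange2010-FT"
$vmView = Get-View -Id $vm.Id
#Passing $null lets vCenter pick the host for the Secondary VM
$vmView.CreateSecondaryVM_Task($null)
#To turn Fault Tolerance back off later:
#$vmView.TurnOffFaultToleranceForVM_Task()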
Wait, what if you run into problems while powering on? Where do you start?
VMware has a page of answers titled Turning On Fault Tolerance for Virtual Machines: “The option to turn on Fault Tolerance is unavailable (grayed out) if any of these conditions apply.”
I chose not to list all the possibilities here, but I wanted to provide the link for those of you who want to try this out and run into problems. The link also talks about validation checks, and how Fault Tolerance behaves with a powered-on VM versus a powered-off VM.
VMware also has Fault Tolerance Best Practices, VMware Fault Tolerance Configuration Recommendations, and Troubleshooting Fault Tolerance documentation.
Benefits vs. pitfalls:
VMware Fault Tolerance lets you protect your VM by basically running a second copy. This allows for great protection as you need it, or on a 24/7 basis. That alone may be well worth ANY pitfalls. However, there are some system requirements, it does take two hosts, and there is some performance impact. See VMware vSphere 4 Fault Tolerance: Architecture and Performance & Comparing Fault Tolerance Performance & Overhead Utilizing VMmark v1.1.1.
Overall, I would recommend you take a look at meeting the requirements if you see a use for or need of Fault Tolerance and are architecting a new VMware vSphere environment.
Update: a new VMware blog post titled Comparing Performance of 1vCPU Nehalem VM with 2vCPU Harpertown VM compares performance and is worth a read; I quoted a tidbit from the end.
“Conclusion
A 1vCPU Xeon X5500 series based Exchange Server VM can support 50% more users per core than a 2vCPU VM based on previous generation processors while maintaining the same level of performance in terms of Sendmail latency. This is accomplished while the VM’s CPU utilization remains below 50%, allowing plenty of capacity for peaks in workload and making an FT VM practical for use with Exchange Server 2007”
Failover demo:
Here I am showing an Exchange 2010 VM running LoadGen, and a VMware VDI client pinging the Exchange VM while I do a test failover and then restart the Secondary VM. * Note: this is a fresh Exchange install with no configuration done, hence the exceptions in LoadGen. And no, I don’t normally use Facebook or play poker within VMs.
Thanks for Reading!
I want to say thanks to Jason Boche for letting me have access to this lab for my testing. Thanks again! Without people like him, what would the world be like?
Roger L.
John Troyer did a great job on ustream and the VMworld Channel.
There are 25 episodes to date, plus the first two, which are not numbered. I’ll do my best to lay them out.
I kept them in the order of their original upload date.
vmware: VMworld Live 08/31/09 04:46PM
vmware: VMworld Live 08/31/09 05:07PM
vmware: #1 – hello vmworld
vmware: #2 TechExchangeDeveloperDay
vmware: #3 Roundtable vExperts
vmware: #4 – SRM
vmware: #5 VMworldTV and VMware Europe
vmware: #6 SearchServerVirtualization.com
vmware: #7 VMUGs, Partners, Labs
vmware: #8 – Day 1 Commentary, vCloud Express, Booth Babes
vmware: #9 Day 1 comments: vCloud APIs, View, VMware Go
vmware: #10 Andres Martinez
vmware: #11 vSphere 4.0 Quickstart Guide
vmware: #12 Client Virtualization Platform
vmware: #13 vCloud Express, vCloud API
vmware: #14 vStorage APIs, vCalendar
vmware: #15 vCloud API
vmware: #16 Moving to the cloud & Springsource
vmware: #17 Brian Madden & Gabe Knuth
vmware: #18 PowerCLI
vmware: #19 Cost per Application Calculator
vmware: #20 Scott Lowe
vmware: #21 VMworld, IT in the Cloud perspectives
vmware: #22 SpringSource VP Shaun Connolly
vmware: #23 Virtualization EcoShell
vmware: #24 Hypervisor architecture
vmware: #25 David M Davis
Thanks, John, for all the work. I hope to see you do interviews like this on a scheduled basis in the future.
Roger L.
Yep, you guessed it, again: Yet Even More blogs on VMworld 2009!
I take no credit for any of this content; it goes to the authors. I only mean to gather the content here so you don’t have to weed through the various blogs. I have both the blog link and the article title here, with a link to the article. Thanks to everyone for putting these together for everyone to read!
—
http://netapptips.com/ : VMworld 2009: Brian Madden is true to his word
“
As all of us that have been in the industry (for way too long) know, tradeshow season is full of sound and fury, signifying nothing. Lots of announcements, lots of demonstrations, lots of bickering back and forth between vendors about whose widget is more widgety. But at the end of the day, just like the rest of the year, the success or failure of tradeshows is all about people. It takes many, many people and many, many (hundreds of) hours to organize, coordinate and execute a tradeshow. It ultimately comes down to trust, that most basic element of strong relationships.
For quite some time, the TMEs on our team that focus on Virtual Desktops (VDI) have been reading Brian Madden’s blog. They look to it for industry trends, but most importantly they look to it for viewpoints from experienced Desktop Administrators. From time to time we find ourselves being too storage-centric, and Brian’s blog helps us step back and look at the desktop challenges from a broader perspective.
Back in July, we approached Brian about presenting at his BriForum event. We wanted to introduce some new concepts (prior to VMworld), and his audience would give us the feedback (good or bad) that we needed to hear. We’d be the only storage vendor in attendance, but we needed to get close to the customer base.
Not only did we host a booth, but we presented one of the technical sessions. Prior to the event, we had our slides ready to go, and then we saw this challenge from Brian. Always interested in a challenge, Mike Slisinger (@slisinger) took it upon himself to create a winning presentation. Needless to say, the NetApp presentation met and exceeded the criteria and Brian promised to be wearing his NetApp t-shirt around the halls of VMworld.
A man of his word, today we found Brian enthusiastically (as always) walking around the NetApp booth (#2102) proudly wearing his NetApp t-shirt. The shirt looked great, and we really appreciate the feedback Brian has provided our VDI team on our products and solutions.
As I mentioned earlier, the best part about tradeshows is the chance to connect with people. The chance to have open discussions, to learn about new things, and to see how you can improve. Having strong relationships is the foundation of that ability to connect with people.”
VMworld 2009: Get prepared for vSphere upgrades
“
As we heard in Paul Maritz’s keynote, VMware is forecasting that 75% of their customers will be upgrading their virtualized environments to vSphere within six months. If you’re an existing VMware customer or VMware partner, this means that you had better be prepared for the updates.
To help everyone planning to move to vSphere, we’d recommend that you review the following technical documents:
- NetApp and VMware vSphere Storage Best Practices (TR-3749)
- Microsoft Exchange, SQL and Sharepoint (mixed workload) on vSphere on NetApp Unified Storage (TR-3785)
Both of these documents have been fully tested and co-reviewed by all of NetApp’s alliance partners (VMware, Microsoft and Cisco), so you can feel extremely confident in using these as the foundation of your next-generation architectures utilizing vSphere.
And of course, vSphere deployments are covered by the NetApp 50% Virtualization Storage Guarantee. “
—
http://itsjustanotherlayer.com : EA3196 – Virtualizing BlackBerry Enterprise on VMware
“
Once again, another session I didn’t sign up for and had zero issues getting into.
To start off, RIM & VMware have been working together for 2 years, and BES is officially supported on VMware. Together, RIM & VMware have done numerous successful engagements running BES on VMware. The interesting thing is that RIM has run its own BES on VMware for over 3 years now.
Today the BES best practice is no more than 1k users per server, and the servers are not very multi-core friendly. BES is not cluster aware, nor does it have any HA built in. The new 5.0 version of BES is coming with some HA via replication at the application layer. One thing that has been seen in various engagements is that if you put the BES servers on the same VMware hosts as virtualized Exchange, there are noticeable performance improvements.
The support options for BES do clearly state that running on VMware ESX is supported.
One of the big reasons to virtualize BES is that since it cannot use multiple cores effectively, the big 32-core boxes today can only use a fraction of their capacity. By virtualizing, you can get significant consolidation. BES then gets all the advantages of running virtual, such as test/dev deployments, server consolidation, HA, and so on: things that are well known and talked about already.
Template use is encouraged for rapid BES deployments and can potentially save quite a bit of time; the gotcha is just what your company policies and rules allow. This presentation is really trying to show how to use VMware/virtualization with BES for change management improvements, server maintenance, HA, component failures and other base vSphere technologies. VMware is looking towards using Fault Tolerance for its own BES servers.
BES is often not considered Tier 1 for DR events, even though email is often the biggest thing needed after a DR event to get communications started again. The reason generally seems to be the complexity and cost of DR.
Performance testing with the Alliance Team from VMware has been done successfully numerous times over the past couple of years, at both RIM & VMware offices. The main goal of these efforts was to generate white papers and reference architectures that are known to work. The testing used Exchange LoadGen & the PERK load driver (the BES testing driver). Part of this is how to scale out with more VMs, as the scale-up limits are known.
The hardware was 8 CPUs (Intel E5450, 3GHz), 16GB RAM and a NetApp FAS3020, running vSphere 4 & BES 4.1.6. In the 2k-user test with 2 Exchange systems, the result was 23% CPU utilization on 2-vCPU BES VMs. Latency numbers were under 10 ms. Nothing majorly wrong was seen in the testing metrics. Going from ESX 3.5 to vSphere 4 gave a 10-15% CPU reduction in the same workload tests. Adding in hardware assist for memory saw what looks like another 3-5% reduction in CPU usage. In their high-load testing, doing a VMotion causes a small hiccup of about a 10% increase in CPU utilization during the cutover period of the VMotion. This is well within the capacity available on the host and in the guest OS.
Their recommendation is to run no more than 2k users on a 2-vCPU VM; if you need more, add more VMs. BES scales and performs well in this scale-out architecture. Be sure you give the storage the number of spindles needed (the standard statement when talking about virtualization management).
The presenter then went into a couple of reference architecture designs: small business & enterprise, with a couple of different varieties.
BES @ VMware: 3 physical locations, 6,500 Exchange users. 1k of them have 5GB mailboxes and the default for the rest is 2GB. BES has become pretty common. They run Exchange 2007 & Windows 2003 for AD & the guest OS. Looks fairly straightforward.
4 production BES VMs, 1 standby BES VM, 1 attachment BES VM and 1 dedicated BES database VM, running on 7 physical servers with 40 additional VM workloads on this cluster.”
—
http://www.virtuallifestyle.nl : VMworld ‘09 – Long Distance VMotion (TA3105)
“
I’ve attended the breakout session about long distance VMotion, TA3105. This session presented results from a long distance VMotion joint validation research project done by VMware, EMC and Cisco, presented by Shudong Zhou, Staff Engineer at VMware; Balaji Sivasubramanian, Product Marketing Manager at Cisco; and Chad Sakac, VP of the VMware Technology Alliance at EMC.
What’s the use case?
With Paul Maritz mentioning the vCloud a lot, I can see where VMotion can make itself useful. When migrating to or from any internal or external cloud, there’s a good chance you’d want to do so without downtime, i.e. with all your virtual machines in the cloud running.
What are the challenges?
The main challenge to get VMotion working between datacenters isn’t with the VMotion technology itself, but with the adaptations to shared storage and networking. Because a virtual machine being VMotioned cannot have its IP address changed, some challenges exist with the network spanning across datacenters. You’ll need stretched VLANs: a flat network with the same subnet and broadcast domain at both locations.
The same goes for storage. VMotion requires all ESX hosts to have read/write access to the same shared storage, and storage does not travel well over (smaller) WAN links. There needs to be some kind of synchronization, or a different way to present datastores at both sides.
Replication won’t work in this case, as replication doesn’t give active/active access to the data. The secondary datacenter doesn’t have active access to the replicated data; it just has a passive copy of the data, which it can’t write to. Using replication as a method of getting VMotion working will result in major problems, one of which is your boss making you stand in the corner for being a bad, bad boy.
What methods are available now?
Chad explained a couple of methods of making the storage available to the secondary datacenter:
Remote VMotion
This is the simplest way to get VMotion up and running. This method entails a single SAN, with datastores presented to ESX servers in both datacenters, and doing a standard VMotion. Doing this will leave the virtual machine’s files at the primary site and, as such, is not completely what you’d want, as you’re not independent from the primary location.
Storage VMotion before compute VMotion
This method does a Storage VMotion from a datastore at the primary location to the secondary location. After the Storage VMotion, a compute VMotion is done. This solves the problem with the previous method, as the VM moves completely. It will take a lot of time, though, and does not (yet) have any improvements that leverage the vStorage API for, for instance, deduplication.
Remote VMotion with advanced active/active storage model
Here’s where Chad catches fire and really starts to talk his magic. This method involves a SAN solution with additional layers of virtualization built into it, so two physically separated heads and shelves share RAM and CPU. This makes both heads into a single, logical SAN, which is fully geo-redundant. When doing a VMotion, no additional steps are needed on the vSphere platform to make all data (VMDKs, etc.) available at the secondary datacenter, as the SAN itself will do all the heavy lifting. This technique does its trick completely transparently to the vSphere environment, as only a single LUN is presented to the ESX hosts.
What’s VMware’s official statement on long distance VMotion?
VMotion across datacenters is officially supported as of September 2009. You’ll need VMware ESX 4 at both datacenters and a single instance of vCenter 4 for both datacenters. Because VMware DRS and HA aren’t aware of any physical separation, long distance VMotion is supported only when using a separate cluster for each site. Spanned clusters could work, but are simply not supported. The maximum distance at this point is 200 kilometers, which simply indicates that the research did not include latencies higher than a 5ms RTT. The minimum link between sites needs to be OC12 (or 622Mbps). The bandwidth requirement for normal VMotions within the datacenter and cluster will change accordingly. The network needs to be stretched to include both datacenters, as the IP address of the migrated virtual machine cannot change.
There need to be certain extensions to the network for long distance VMotion to be supported. Be prepared to use your credit card to buy a Cisco DCI-capable device like the Catalyst 6500 VSS or Nexus 7000 vPC. On the storage side, extensions like WDM, FCIP and optionally Cisco I/O Acceleration are required.
Best Practices
- Single vCenter instance managing both datacenters;
- At least one cluster per site;
- A single vNetwork Distributed Switch (like the Cisco Nexus 1000v) across clusters and sites;
- Network routing and policies need to be synchronized or adjusted accordingly.
Future Directions
More information
- Check out the PDF “Virtual Machine Mobility with VMware VMotion and Cisco Data Center Interconnect Technologies“
- Information about VMware MetroCluster
- Information about the Cisco Datacenter Interconnect (DCI)
—
http://technodrone.blogspot.com : Best of VMworld 2009 Contest Winners
“
Category: Business Continuity and Data Protection
Gold: Vizioncore Inc., vRanger Pro 4.0
Finalist: Veeam Software Inc., Veeam Backup & Replication
Finalist: PHD Virtual Technologies, esXpress n 3.6
Category: Security and Virtualization
Gold: Hytrust, Hytrust Appliance
Finalist: Catbird Networks Inc., Catbird vCompliance
Category: Virtualization Management
Gold: Netuitive Inc., Netuitive SI for Virtual Data Centers
Finalist: Veeam Software, Veeam Management Suite
Finalist: Embotics Corp., V-Commander 3.0
Category: Hardware for Virtualization
Gold: Cisco Systems Inc., Unified Computing System
Finalist: Xsigo Systems, VP780 I/O Director 2.0
Finalist: AFORE Solutions Inc., ASE3300
Category: Desktop Virtualization
Gold: AppSense, AppSense Environment Manager, 8.0
Finalist: Liquidware Labs, Stratusphere
Finalist: Virtual Computer Inc., NxTop
Category: Cloud Computing Technologies
Gold: Mellanox Technologies, Intalio Cloud Appliance
Finalist: InContinuum Software, CloudController v 1.5
Finalist: Catbird Networks Inc., Catbird V-Security Cloud Edition
Category: New Technology
Winner: VirtenSys, VirtenSys IOV switch VMX-500LSR
Category: Best of Show
Winner: Hytrust, Hytrust Appliance
Congratulations to all the Winners!!!”
—
http://www.chriswolf.com/ : Thoughts on the VMworld Day 2 Keynote
“
I was very impressed by the information disseminated in the second VMworld keynote, led by CTO Steve Herrod. Here’s a summary of the thoughts I tweeted during the morning keynote (in chronological order).
Steve Herrod talked about a “people centric” approach. VMware’s technology needs to understand desktop user behavior. The existing offline VDI model (requiring a manual “check-out”) is not people centric.
VMware’s announcement to OEM RTO Software’s Virtual Profiles was a good move. Burton Group considers profile virtualization a required element of enterprise desktop virtualization architecture.
VMware’s Steve Herrod and Mike Coleman discussed VMware’s software-based PC-over-IP (PCoIP) protocol. Feedback from Burton Group clients who were early PCoIP beta testers indicates that the protocol’s development is progressing well.
Herrod showed a picture of “hosted virtualization” for employee owned PCs on a MacBook. Is that a hint of a forthcoming announcement?
I would like to know if VMware’s Type I CVP client hypervisor will have VMsafe-like support in the 1.0 release. VMware has made few public statements regarding CVP architecture.
VMware’s CVP demo looked good, but it didn’t reach the “wow factor” achieved by Citrix when Citrix demoed a type I client hypervisor on a Mac at their Synergy conference.
The Wyse PocketCloud demonstration was impressive. PocketCloud is VMware’s first answer to the Citrix Receiver for iPhone.
VMware demonstrated the execution of a Google Android application on a Windows Mobile-based smart phone. Many opportunities exist for VMware and Google to collaborate in the user service and application delivery space.
Burton Group client experience backs VMware’s claims that vSphere 4.0 is a suitable platform for tier 1 applications. We recommend that x86 virtualization be the default platform for all newly deployed x86 applications, unless an application owner can justify why physical hardware is required (e.g., for a proprietary interface that is unsupported by virtualization).
To support tier 1 application dynamic load balancing, storage and network I/O must be included in the DRS VM placement calculations. It’s good to see that VMware is heading in that direction. DRS will also need to evaluate non-performance metrics such as vShield Zone membership as part of the VM placement metric (no word on this yet).
I would like to hear more from folks who have tested AppSpeed. Burton Group clients I have spoken with to date have not been impressed.
The DMTF needs to start doing more to evangelize the role of OVF as it pertains to cloud computing and service manifests.
I like vSphere’s VMsafe security API, but I want to see tighter integration with external management (exposed via the SDK), and better integration with VMware’s DRS and DPM services.
VMware talked about Lab Manager as a tool to promote user self-service for server VMs and applications, but I haven’t heard mention of a similar interface for desktop applications (like Citrix Dazzle). A user application service catalog is a missing part of VMware’s current virtual desktop architecture, and will need to be addressed by either VMware or a third party.
The data center on the show floor running 37,248 VMs on 776 physical servers would be more impressive if VMware disclosed the applications running on the VMs, along with the application workloads. Otherwise, the demonstration is really just a density science project.
I liked VMware’s coverage of virtual data centers. They are also defined in Burton Group’s internal cloud hardware infrastructure as a service (HIaaS) reference architecture.
Herrod mentioned forthcoming network L3 improvements that will make it easier to separate location and identity. This is something to follow.
Both Cisco and F5 are enablers for VMware’s long distance VMotion and are vendors to follow as this technology further matures.
VMware’s cloud layered architecture is very similar to the architecture defined in Drue Reeves’ report “Cloud Computing: Transforming IT.”
Herrod did a great job articulating the importance of SpringSource to the VMware software solution. VMware needs an application platform to have a chance at holding off Microsoft long term, and SpringSource gives them that.
That’s it for my thoughts on day 2. As always, I’d love to hear your feedback. VMworld 2009 was a great conference. I enjoyed my time meeting with Burton Group clients as well as the several conversations that I had with many attendees. See you next year!”
—
http://virtualfuture.info/ : VMworld 2009: notes
“
Just some notes about VMworld. I’m also working on a couple of other blog posts, but they are taking more time than I expected. VMworld was again a great experience for me. I was able to talk to a lot of people (people I’ve met at previous VMworld conferences or people I follow on Twitter), check out new/other vendors in the Solutions Exchange, attend interesting sessions and get some hands-on experience with products I hadn’t had the chance to try before. And of course, I found out at VMworld that I passed my VCP4 beta exam!
BTW: I had a comment the other day about the long lines of people waiting for sessions and labs. Well, it wasn’t a real problem at all. First of all, it seems like everyone could eventually get into the session he wanted to (registered or not). Secondly, I don’t understand all those people waiting half an hour before the start of the session when you can just walk in 5 minutes before the session starts. As a matter of fact, I just did the SRM advanced options lab, and there were plenty of seats available.
Here are some interesting things I’ve seen:
– vCloud providers (terremark)
– VMware View: PCoIP software solution
Software PCoIP in View will be released later this year. The good thing is, you can combine software and hardware PCoIP so you can give each type of user the right solution for their needs. Also, Wyse demonstrated the iPhone View connection.
– Client Virtualization Platform (CVP)
Run a bare-metal hypervisor on a client with management through Intel vPro. I’m really curious how this will integrate with VMware View: will the VM hosted in the datacenter be automatically synchronized with the VM on your local client? How will this be licensed? What I do know is that CVP will not be a product you can buy stand-alone; it will be part of VMware View.
– Integration of storage management into vCenter
HP and NetApp already showed great integration of monitoring and configuring hardware within the vCenter Server interface. Maybe there are already other vendors, but the fact that you can control and manage your complete virtual infrastructure (both hardware and software) from one place is really cool.
– Cisco UCS
Cisco really impressed me with their UCS solution: a blade chassis designed for virtualization.
Other (future promises of) vSphere(-related) news:
IO DRS in the future
Long distance VMotion
RTO-profiles integration in View
And really future-future talk: Follow the sun vs. Follow the moon
In other words: datacenter VMotion! Imagine a company with offices in the USA and Europe. If you want your datacenter to be closest to your users, your datacenter will VMotion from the datacenter in Europe to the datacenter in the USA (follow the sun). If you want your datacenter to be wherever energy prices are the lowest (during night-time), you VMotion the datacenter… well… at night (follow the moon).”
—
http://www.techhead.co.uk : VMworld 2009 Video Interview: Rick Scherer (VMwaretips.com)
“Here’s another TechHead video interview from VMworld 2009 in San Francisco. This time I’m talking to Rick Scherer who is well known for his informative VMwareTips.com site. In his daytime capacity he is a UNIX Systems Administrator for San Diego Data Processing Corporation.
Check out his site at VMwareTips.com – well worth a visit”
—
http://www.vmguru.nl/wordpress : Building and maintaining the VMworld 2009 Datacenter
“
It is becoming a sequel: the datacenter VMware has built for this week’s VMworld 2009 at the Moscone Center in San Francisco.
In addition to our two previous articles (art1, art2), today we found two very nice videos loaded with tons of techno porno!
The first video shows the VMware team building the complete datacenter on-site at the Moscone Center. During the video footage the awesome numbers representing this huge infrastructure run by.
In short? 28 racks containing 776 ESX servers, which provide the infrastructure with 37TB of memory, 6,208 CPU cores and 348TB of storage, use 528kW of electricity and service 37,248 virtual machines. You will probably never find such an infrastructure anywhere else in the world; at least I know I won’t.
In the second video Richard Garsthagen interviews Dan, who is responsible for the datacenter at the Moscone Center, in which he gives inside information on how this huge datacenter is designed, built and connected, and what hardware is used. Some interesting figures: 85% of the hardware used is new to market, they only used 3 miles of cable to connect all 776 ESX servers, storage, switches, etc., and the total cost of all hardware used to build this datacenter is estimated at $35M!
Awesome figures, a very very impressive datacenter and a must see for all technology freaks out there.
Respect for VMware for putting together such a datacenter just for a one week event!”
“
Yesterday we showed you how VMware designed and built an awesome infrastructure just for one week of VMworld 2009.
Awesome figures!
776 ESX servers, 37TB RAM, 6,208 cores, 348TB of storage and 37,248 virtual machines. But what is this infrastructure used for? Well, VMware primarily uses it for the 10 different self-paced labs they provide to VMworld participants. VMware built a vCloud which consists of the best of three solutions (vSphere, Lab Manager and a self-service portal) in which users can select the lab they want to follow and within minutes get their own lab environment.
Depending on the chosen lab, virtual machines can be provisioned within minutes: from 3 minutes for a simple two-server lab environment to 7 minutes for a large nine-server lab environment.
The system does not just provision basic virtual machines like Windows; it provisions entire virtual ESX environments, with nested ESX servers, storage, SRM and a vCenter server.
If you want to know more just watch the interview Richard Garsthagen did below”
I believe the Ask the Experts session is going on now; see the picture below.
Roger L.
Yep, you guessed it: Yet Even More blogs on VMworld 2009!
I take no credit for any of this content; it goes to the authors. I only mean to gather the content here so you don’t have to weed through the various blogs. I have both the blog link and the article title here, with a link to the article. Thanks to everyone for putting these together for everyone to read!
—
http://searchservervirtualization.techtarget.com : Virtualization innovators vie for Best of VMworld 2009 Awards
“
SAN FRANCISCO — More than 200 products were considered in The Best of VMworld 2009 Awards in categories ranging from desktop virtualization to cloud computing. The awards, sponsored by TechTarget’s SearchServerVirtualization.com, highlight the most innovative technologies at the show.
In the Security and Virtualization category, the gold winner was HyTrust Inc. for its HyTrust Appliance. The finalist was Catbird Networks Inc. for Catbird vCompliance.
Related content
Watch video of some of the Best of VMworld 2009 Awards finalists and gold winners.
The HyTrust product "provides a single point of control for hypervisor access management, configuration, logging and compliance," according to a statement from the judging panel, which is anonymous.
In the Business Continuity and Data Protection Software category, Vizioncore Inc. took the gold for vRanger Pro 4.0.
There were two finalists in that category: Veeam Software Inc. for Veeam Backup & Replication and PHD Virtual Technologies for esXpress n 3.6.
"While contenders in this category were close rivals, the winner changed the most for the better," the judges wrote. "The gold winner provides a cleaner interface and all-around faster tool, speed being crucial in this category."
In Hardware for Virtualization, Cisco Systems Inc.’s Unified Computing System (UCS) won the gold.
Judges said that UCS "provides a unified platform for hardware and networking that radically reduces the number of devices requiring setup, management, power and cooling, and cabling. … This offering will be hugely impactful in the marketplace."
A group of engineers from the National Defense and Canadian Forces in Ontario who attended a super session on UCS Tuesday said the UCS system is ideal for dedicated technologies such as desktop virtualization.
"It looks like a good technology. … You could get the system and throw a bunch of desktops on the blades, and it would run great," said one of the engineers, who wished to remain anonymous.
The two finalists were Xsigo Systems for VP780 I/O Director 2.0 and AFORE Solutions Inc. for ASE3300
More than 40 companies entered in the Virtualization Management category, and Netuitive Inc. won for its Netuitive SI for Virtual Data Centers. The product impressed the judges by providing "broad management far beyond virtual environments and a self-learning capability that alerts you to problems hours before they happen," the judges said.
Finalists in the management category were Veeam Software for Veeam Management Suite and Embotics Corp. for V-Commander 3.0.
The gold winner in the desktop virtualization category was AppSense for AppSense Environment Manager 8.0. The judges said AppSense "rocked our boat. … It offers the most complete user management environment system out there."
The two finalists in this category were Liquidware Labs for its desktop virtualization diagnostic tool Stratusphere and Virtual Computer Inc. for NxTop.
For Cloud Computing technologies, the gold went to Mellanox Technologies for its Intalio Cloud Appliance. Judges said Mellanox offered "the most impressive integrated private cloud system for building large-scale internal clouds."
The finalists were InContinuum Software for CloudController v 1.5 and Catbird Networks Inc. for Catbird V-Security Cloud Edition.
The best new technology at VMworld 2009 was VirtenSys’ VirtenSys IOV switch VMX-500LSR. Judges considered VirtenSys innovative because of its potential cost savings. "It eliminates an entire layer of switching," the judges wrote. "Without this product, you would have to buy big iron to get all this functionality."
The Best of Show award this year went to HyTrust for HyTrust Appliance because, according to the judges, it offers "the greatest potential to secure virtual environments by providing a single point of access control. It frees admins to set policies once that won’t be overridden by other tools."”
Video: VMworld Awards (http://link.brightcove.com/services/player/bcpid1740037323)
—
http://knudt.net/vblog : VMWorld 2009 Day 4: Wednesday
“
Steve Herrod started off the keynote today discussing VDI, including the announcement of an agreement to embed RTO Software’s Virtual Profiles into View. Their goal is to provide end users the same rich experience no matter the situation (WAN, LAN or offline mobile). Like any good Herrod keynote, live demos ensued, including PCoIP and the Wyse iPhone View Client. The demo of the Mobile Virtualization Phone was pretty interesting, especially when he showed that the demo app was running in an Android VM, completely seamless within the Windows CE environment. The keynote then switched over to the datacenter, where he came out swinging by describing why VMotion is more mature and proven (and a time-tested marriage saver) compared to other “live migration” offerings. Next, he discussed the fact that VMware is currently working towards I/O based DRS, which will include setting shares and IOPS limits per hard disk. He then covered the big features of vSphere, but didn’t cover anything new until the end when he introduced and gave a quick demo of vCenter ConfigControl. Next up was the cloud discussion, but nothing terribly groundbreaking, though he did mention long-distance VMotion as an upcoming feature. Following up on the cloud discussion, Mr. Herrod described IaaS, PaaS and SaaS (Infrastructure, Platform and Software as a Service, respectively), and why SpringSource is so key to the cloud strategy. In essence, it helps to continue to break apart the different layers of the datacenter into individual pieces that can be manipulated independently from one another. The CEO of SpringSource then came out to demo their technology. All in all, another great keynote. Steve Herrod is not to be missed!
After the keynote I attended a session on vSphere deployments in the morning and an AppSpeed presentation in the afternoon. Both were okay, but informational. AppSpeed is definitely worth considering, but still has a lot of maturing to do.
Most of my day was spent in the Solutions Expo chatting with many different vendors. The most impressive product I saw was the new HP MDS600, which is a direct-attached SAS storage solution. It holds 70 SAS drives in 5U. Very impressive when you consider some of the future capabilities of the SAS switches in the c-Class blade system. Go check it out; I believe they have one set up in the Mellanox booth. I also spent some time with VDI-related vendors, including the aforementioned RTO Software, AppSense and LiquidWare Labs. All have very interesting products that will need some lab time.
The highlight of the day was the vExpert lunch and meeting. It was a great opportunity to meet and chat with many familiar names. I can’t possibly list them here, but it was great meeting every one of you. We even got to hear from Steve Herrod who told us he was going to be the executive sponsor of the program going forward.
And then the party…
As always, the party was a great time. The food wasn’t great, but the drinks were free and the band was great, as was the company. Unfortunately, by the time the concert was done, we emerged to find that the entire party was shut down, which was very disappointing! We didn’t have nearly enough time to enjoy anything else that VMware had arranged for us.”
—
http://vmjunkie.wordpress.com : VMworld session DV3260 – Protocol Benchmarking and Comparisons
“
I arrived at this session a bit late as well (noticing a theme here?), but a lot of the basics of this session were very similar to one last year on remote user experience in virtual desktops.
The gist of it is that VMware has done some internal benchmarking using the PCoIP beta code (not final!) on vSphere and compared it to PortICA 2.1 – not the newest version with the HDX stuff; this was asked about in a question pretty early on and they were (deservedly) given some guff for that – and RDP (to an XP VM, so only RDP 5.1).
They talked forever about their testing methodology. Essentially they tested three things:
- A synthetic benchmark they created in-house called RPerf (which I saw last year in the similar session) that basically exercises a display protocol in as low-impact a way as possible to the underlying host (so you can measure how much CPU/memory the protocol takes and not how much CPU/RAM running the benchmark takes)
- A 320×240, 25fps video with mixtures of different types of video that range from fairly static, pans, zooms, areas of motion on still backgrounds, and random static.
- An AutoIT-based workload that tests actual VM performance in addition to the connection protocol.
The results were pretty favorable to PCoIP. In many cases it wasn’t the fastest, but it was never the worst. Sometimes it would barely lose to RDP in the LAN case, and barely lose to PortICA in the WAN case. It was never far behind in any of the tests they showed results for, and in many cases it was the fastest. The other big benefit was that PCoIP had lower overhead in CPU and RAM than either PortICA or RDP. Tests were run entirely with the software PCoIP implementation – no hardware.”
VMworld session DV2801 – Integrating View into your Environment
“
I arrived late to this session, but it looks like the beginning was about how to plug into today’s View product and make automated changes or fire off scripts based on events and such. The basic point was that the integration points you have today are very, very limited – you have the two CLI tools (SVIConfig and VDMAdmin), log file monitoring, and editing the ADAM LDAP directory directly.
In View 4 new features will include an event reporting central warehouse – a database with a rollup of events from all clients, agents, and servers. It will include an event database with information on what events mean what along with resolutions, and will allow for querying using VDMAdmin or SQL tools such as Crystal Reports.
The best news though is PowerShell automation support! That makes View the 3rd product (after vCenter and Update Manager) to get PowerShell support. Using PowerShell should obviate the need to ever directly edit the LDAP, which is good because PowerShell can validate your input and will be far less dangerous. You can use PowerShell to stand up an environment from scratch, everything from global config, pairing it with a vCenter server, and making pools and VMs. You can also query the event warehouse for reporting purposes, and perform actions on sessions and VMs managed by View. Some examples:
#Set View License Key
Set-license -key AA113-XXXXX...
#Set the Pre-Login Message
Update-GlobalConfig -PreloginMessage "message"
#Update the power policy of a pool so you can preboot VMs at 5AM to avoid boot storm
Update-AutomaticDesktop -id DesktopJoe -PowerPolicy AlwaysOn
#Create a new Individual Desktop by using PowerCLI to get VM Object and pipe it to View CLI
Add-IndividualDesktop -id DesktopJoe -DisplayName "Desktop" -vm (Get-VM -name JoeVM)
#Entitle a user to a desktop
Get-User ADUserName | Add-DesktopEntitlement -desktop_id DesktopJoe
#Disconnect an active session
Get-ActiveSession -User "Joe" | Send-SessionDisconnect
This was the best news I’d heard all day. Finally, I can do all the neato stuff I can do in standard vCenter in View!
They then went into a bunch of Microsoft SCOM integration stuff which seemed pretty useless to me, and I was so buzzed from the PowerShell stuff I barely paid attention.”
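Those View cmdlets should compose with ordinary PowerCLI pipelines. As a rough sketch of what bulk provisioning could look like – using only the cmdlet names quoted above, which come from pre-release View 4 bits and may change, and assuming (my assumption, not the session’s) a “Fin-*” VM naming convention with matching AD usernames:
#Hypothetical sketch: create an individual View desktop for every VM whose name
#starts with "Fin-" and entitle the AD user of the same name (assumed naming convention)
Get-VM -Name "Fin-*" | ForEach-Object {
    Add-IndividualDesktop -id $_.Name -DisplayName $_.Name -vm $_
    Get-User $_.Name | Add-DesktopEntitlement -desktop_id $_.Name
}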
VMworld session DV2363 – CVP Tech Deep Dive
“
This session was about VMware’s Client Virtualization Platform, or CVP. CVP was announced a while back by VMware. Here are the highlights of the session.
CVP is a powerful client hypervisor solution that is part of the greater VMware View offering. It is not going to be offered standalone; it is a View product only. It helps create what the presenters called a “thin” thick client.
There are two approaches to doing a client hypervisor: Direct Assignment or Advanced Device Emulation.
In Direct Assignment, technologies like Intel VT-D or other software techniques are used to pass through a physical device (such as a video card) directly into the VM. This has some advantages such as lower overhead, and if you’re running Windows in your VM then all you need is a set of Windows drivers, which are easy to find. Passthrough is also much easier to program…
It has several downsides, however. For example, it ties your VM to that particular hardware which reduces portability. It also becomes difficult to interpose on that device. For example, if the video card is owned by the VM, there’s no way for the hypervisor to access it. Same goes for the network card. The point being – if all you’re doing is passing through your physical devices, why do you need a Client Hypervisor? Just run native. You can’t add value when using passthrough on everything. For some device types (such as USB) where the O/S is expecting hardware to appear and disappear, passthrough is okay.
VMware’s strategy is built around Advanced Device Emulation. The client only needs a driver for the emulated hardware device, because the hypervisor itself contains the driver for the underlying physical hardware. The advantages here are that it divorces the VM from the hardware, making portability easy, as well as simplifying hardware upgrades and recovery. Also, the hypervisor can add functionality by managing the devices, such as enforcing network security policies and the like. This does mean that the hypervisor needs to have complete drivers for the underlying hardware.
VMware’s CVP has the following features:
- Improved guest 3d support using a new type of virtual SVGA card. Supports DirectX 9.0L for Aero Glass.
- Paravirtualized Wireless device. This is important because unlike a wired NIC, a wireless NIC only has one radio, so your hypervisor and VM can’t both be tuned to different networks. You need to give control of the radio to someone, so they allow the guest to control (using its native management capabilities built into the OS) that radio through a special VMware WiFi virtual device. This also means it works with guest-based “supplicants” like iPass.
- USB is fully supported and is passthrough, like Workstation.
- External display and multi-monitor capable. Allows extended desktop, mirroring, and rotation, either via the built-in OS controls (Windows 7) or through a special VMware tab (WinXP/Vista; analogous to the ATI/nVidia control panel applets that do the same).
- External Storage support for eSATA (!!) and built-in laptop card readers.
- Power Management awareness – respond to guest power state (i.e. allow the VM to suspend or shutdown the physical hardware). Respect the guest power policy and connect special events to guest like the lid switch or the sleep/power buttons on the physical hardware.
- Encryption support: the VMX and VMDKs are all encrypted using the onboard Intel vPro TXT and TPM capabilities, with 256-bit AES encryption. When asked whether this would be optional or modifiable, the answer was that this is still to be determined.
- CVP is based on Linux, and in the pre-beta version they showed, it actually had a shell we could break out into. In the final version, we were assured, this would not be available.
So what good is all this supposed to do? The idea is the user checks out a virtual machine (or one is pre-provisioned for them) to their CVP device. That device is managed by View Manager, which accesses an embedded View Agent in the CVP. This is used for policy enforcement, heartbeats, configuration changes, endpoint statistics gathering, and managing transfers from the View server. The VM can run offline and is also smart enough to adapt its virtual hardware (like number of CPUs and GB of RAM) to the underlying physical hardware. VMware is targeting only a 256MB overhead for CVP. Today the CVP can run only one VM at a time, but it can store more than one.
CVP is an embedded Linux Type 1 hypervisor with a minimal set of packages installed. It’s optimized for fast boot time and will be fully qualified on individual hardware platforms (like ESX). It does not contain a general-purpose OS, so no doing work in the CVP itself. VMware provides updates such as patches, bug fixes, and new hardware enablement. It will be updated monolithically like ESXi (full firmware-style updates), with updates delivered from the View Manager server. The codebase is really unrelated to ESX; it’s more based on Workstation for Linux.
CVP requires Intel’s vPro and integrates with its Active Management Technology (AMT) for a bunch of things like inventory collection, remote power on/off, and configuration backup onto the AMT private storage. It will be compatible with all AMT-enabled management tools like Altiris, LANDesk, etc.
The CVP itself has no listening ports, so it should be impossible to break into via the network. The disks are encrypted, and Intel TXT plus Trusted Boot protects the integrity of the hypervisor in hardware. After installation, the laptop will only boot the approved hypervisor (no booting to a rescue CD). Encryption keys are stored in the TPM module and are used to encrypt the drives.
I asked several questions at this session:
- Q: The demo from last year involved booting from a USB key. Will boot from flash be supported?
- A: The initial release installs on the hard disk and runs from there.
- Q: Will the CVP also work as a remote View client (with PCoIP support)?
- A: That is on the roadmap but will not be in version 1; version 1 runs VMs locally only.
- Q: At VMworld 2007, tech for streaming a virtual appliance and booting it while data was still in flight was demoed. Will this be in CVP?
- A: They have the code, but a user-experience issue kept it out of the first release (how does the user know when it’s safe to go offline?). When they resolve that issue, they will bring the code in.
Overall I am pretty excited about CVP. I understand the HCL may be fairly limited at launch, but it really does have tremendous potential for View environments.”
VMworld session TA3438 – Top 10 Performance improvements in vSphere 4
“
This was a really interesting session that broke down a lot of the improvements in vSphere. VMware likes to talk about how vSphere has however many hundred new features; here’s an interesting list of the highlights:
- IO overhead has been cut in half. Also, IO for a VM can execute on a different core than the one the VM monitor is running on, which means a single-vCPU VM can actually make use of two physical cores.
- The CPU scheduler is much better at scheduling SMP workloads. 4-way SMP VMs perform 20% better, and an 8-way VM delivers about 2x the performance of a 4-way with an Oracle OLTP workload, so performance scales well.
- EPT improves performance a LOT. Turning it on also enables large pages by default (which can negatively affect TPS). Applications need to have large pages turned on, like SQL Server (which gains 7% performance).
- Hardware iSCSI has 30% less overhead across the board; software iSCSI is 30% better on reads and 60% better on writes!
- Storage VMotion is significantly faster because of block change tracking and no need to do a self-VMotion (which also means it doesn’t need 2x the RAM).
- In vSphere the performance difference between RDM and VMFS is less than 5%, and while this is the same as ESX 3.5, the performance of a VM on a VMFS volume where another operation (like a VM being cloned) is in progress has improved.
- Big improvement in VDI workloads – a boot storm of 512 VMs is five times faster in vSphere. 20 minutes reduced to 4.
- PVSCSI does some very clever things like sharing the I/O queue depth with the underlying hypervisor, so you have one less queue.
- The vSphere TCP stack is improved (I know from other sessions they’re using the new tcpip2 stack end-to-end).
- VMXNET3 gives big network I/O improvements, especially in Windows SMP VMs.
- Network throughput scales much better, 80% performance improvement with 16 VMs running full blast.
- VMotion 5x faster on active workloads, 2x faster at idle.
- 350K IOPS per ESX Host, 120K IOPS per VM.
All reasons to be running vSphere on your infrastructure today.”
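Two items from that list – VMXNET3 and PVSCSI – can be adopted per VM from PowerCLI today. A minimal sketch, assuming a PowerCLI build that includes New-NetworkAdapter and New-ScsiController; the VM name “TestVM01” and port group “VM Network” are hypothetical, the VM should be powered off, and the guest needs the PVSCSI driver from VMware Tools:
#Add a VMXNET3 adapter to an existing VM
$vm = Get-VM -Name "TestVM01"
New-NetworkAdapter -VM $vm -NetworkName "VM Network" -Type Vmxnet3 -StartConnected:$true
#Attach the VM's last hard disk (assumed here to be a data disk, not the boot disk)
#to a new paravirtual SCSI controller
$dataDisk = Get-HardDisk -VM $vm | Select-Object -Last 1
New-ScsiController -HardDisk $dataDisk -Type ParaVirtual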
—
http://blog.scottlowe.org : TA2384 – Deploying the Nexus 1000V
“
Wednesday, September 2, 2009, by slowe
There is no Internet connectivity in this session, so I’ll have to publish this after the session has concluded.
The Cisco Nexus 1000V is, of course, a Layer 2 distributed virtual switch for VMware vSphere built on Cisco NX-OS (the same operating system that drives the physical Nexus switches). It’s compatible with all switching platforms, meaning that it doesn’t require physical Nexus switches upstream in order to work. The Nexus 1000V brings policy-based VM connectivity, network and security property mobility, and a non-disruptive operational model.
The Nexus 1000V has two components. The first is the Virtual Supervisor Module (VSM); interestingly enough, the slide shows that the VSM can be a virtual or physical instance of NX-OS, though there has been no formal announcement of which I know that discusses using a physical instance of NX-OS as the VSM for the Nexus 1000V. The second component is the Virtual Ethernet Module (VEM), a per-host switching module that resides on each ESX/ESXi host. A VSM can support up to 64 VEMs in a distributed logical switch model, meaning that all VEMs are centrally managed by the VSM. Each VEM appears as a remote line card to the VSM.
The VEM is deployed using vCenter Update Manager (VUM) and supports both ESX and ESXi. The Nexus 1000V supports both 1Gbps and 10Gbps Ethernet uplinks and works with all types of servers (everything on the HCL) and upstream switches.
The Nexus 1000V supports a feature called virtual port channel host mode (vPC-HM). This feature allows the Nexus 1000V to use two uplinks (NICs in the server) connected to two different physical switches and treat them as a single logical uplink. This does not require any upstream switch support. Multiple instances of vPC-HM can be used; for example, four Gigabit Ethernet uplinks, two to each of two physical switches, could be used to create two different vPC-HM uplinks for redundancy and traffic separation.
For upstream switches that support VSS or VBS, you can configure the Nexus 1000V to use all uplinks as a single logical uplink. This requires upstream switch support but provides more bandwidth across all upstream switches. Of course, users can also create multiple port channels to upstream switches for traffic separation. There is a lot of flexibility in how the Nexus 1000V can be connected to the existing network infrastructure.
These network designs can be extrapolated to six NICs (uplinks), eight NICs, and more.
One interesting statement from the presenter was that Layer 8 (the Human layer) can create more problems than Layers 1 through 7.
Next, the presenter went through the use and configuration of the Cisco Nexus 1000V in DMZ environments. Key features for this use case include private VLANs (which can span both physical and virtual systems). Network professionals can also use access-control lists (ACLs) and remote port mirroring (ERSPAN) to improve visibility and control over the virtual networking environment.
At this point, I left the session because it was clear that this session was more about educating users on the features of the Nexus 1000V and not about best practices on how to deploy the Nexus 1000V.”
—
http://dantedog29.blogspot.com : Day 2 Wrap-Up
“
Today was an interesting day. I sat and watched the Day 2 Keynote, looking at what VMware shows as the vision for the future.
I then went to watch a session on Virtualizing Exchange (EA2631).
I learned some things there:
Exchange has, over time, become better suited for virtualization – from Exchange 2003 to Exchange 2007 to Exchange 2010 – due to lower IO and better design. ESX has also improved and the hardware has evolved, so all the pieces have come together.
But on the MS front, we have updated our sites with some new information from our Exchange and virtualization team on virtualizing this Tier 1 application. Check out Zane’s blog post for more information.
After lunch I went to VMware’s head-to-head comparison of vSphere/ESX against Hyper-V with SCVMM, and Citrix. It would have been a better conversation if it had actually presented more than one side. Their big comments on architecture differences and memory overcommitment were old and tired – biased and based on conjecture. They commented that our “integrated” solution is a bunch of applications, which we do need to work on; but when they showed them all, they showed many twice, and included some that you wouldn’t use except in certain cases, and not when loading the other apps they showed.
They said we don’t have a Host Profiles equivalent, but if you look at what System Center Configuration Manager does, it does a lot of what Host Profiles does. Of course, they didn’t mention that to get Host Profiles, customers have to buy the Enterprise Plus SKU.
They failed to mention that if you are comparing vSphere with Microsoft Solutions, you have to include all of the SMSD products, NOT just VMM and a little OM.
They failed to mention that you have to pay three times more for a VMware solution than for the comparable solution from Microsoft. I wonder why?
Are we Enterprise Class? Yes, we are.
Do we have some work to do? Yes, we do.
Is VMware scared? Yes, they are.”
Dr. Stephen Herrod’s Keynote
“
I always enjoy Day 2 keynotes at VMworld. You always get to see something new. Dr. Stephen Herrod started the keynote today by sliding VMware View over to the left, emphasizing that it is the biggest focus for VMware right now. He says managing desktops will be the same as managing servers. I don’t think that is the right way to look at it. Yeah, I believe it resonates with server guys, but there are many, MANY differences between how you have to manage the desktop and the datacenter. It seems to VMware that (like one of our TSPs told me at a conference earlier):
Key agreement with RTO Virtual Profiles coupled with the ThinApp “bubble”. Create a master image of the OS, plop it down, and keep each app out there encapsulated on its own.
Best User Experience to All Endpoints – from WAN to LAN to local, so you can run it over the network or on the local machine to leverage the “media” devices (graphics, etc.). PCoIP is releasing later this year.
Employee-Owned IT – a rebrand and revamp of ACE – a VM on a DVD, or running directly on the laptop with no host OS via a client hypervisor (the Client Virtualization Platform, CVP) with Intel vPro. They have Win7 x64 running in a VM with the CVP underneath.
VMware Mobile Strategy – VCMA – a mobile app to manage your vCenter and now VMware View environment
Mobile Phone to Mobile Personal Computer – “Device Freedom” Mobile Virtualization Platform (MVP), and “Application Freedom”
All of this client stuff still makes me think that they are trying to adapt and fit VDI as the solution for everything. Really now, shouldn’t the customer use the whole toolbox and not just the hammer? VDI works for some cases, but Terminal Services is better for others, and App-V solves other problems. If you have a local user that needs to run a policy-encapsulated VM, MED-V will give you this. Microsoft has a desktop solution you can use for the challenges you face.
VMotion the Foundation of the Giant Computer – First VMotion, then Storage VMotion, then Network VMotion (Distributed Virtual Switch), now Long Distance VMotion
New workload – HPC
DRS – Shuffling VMs around for best performance. Squeezing more out of your systems, extending soon to include IO not just CPU and Memory. Tiering the needs and the applications with the Resources around you.
This could be interesting if they DRS the VMs, and also the Storage as your storage IO patterns change.
AppSpeed – Nothing new
vApp – IT Service Policy Descriptor SLA as metadata to the group of VMs using OVF.
VMsafe – Always on Security and Compliance via APIs. Aware of the application running in the VM, not the VM, so it can be smart in what it protects and secures.
Choice – Lab Manager to allow for self service portals.
Long Distance vMotion – proactively move the datacenter when certain events are about to occur. Cisco, with its Data Center Interconnect, supports up to 200 km. F5 uses BIG-IP Global Traffic Manager to move different iSessions around.
VMware vCloud API – Programmatic Access to resources, Self Service Portals, vSphere Client Plugin (one vCenter to view local and Cloud resources).
After moving View to the left, they added vApps to the right as a fourth pillar. vSphere provides IaaS (Infrastructure as a Service). The software layer is middleware and tools, combined and hooked in underneath, termed PaaS (Platform as a Service). The middle yellow bar is automation, policy enforcement, and scalability. Developers only need to know the application interface – they don’t need to be bothered with anything else – and then there is SaaS (Software as a Service).
PaaS – an open set of interfaces for Ruby on Rails, Python, .NET, and PHP. Rod says we want the developers happy; we want them to know as little about this as possible but enough to be productive. It can be deployed internally or externally, wherever.
How popular are these different interfaces? Azure provides this to our Developer community.
SpringSource –
Take advantage of the revolution…
I don’t see a revolution here, but I think some of the new capabilities are nice, and I’m looking forward to seeing how we respond.”
Interesting comments from VMware’s competition.
—
http://www.brianmadden.com : Brian Madden TV #17 – VMworld 2009 wrap-up: software PC-over-IP, client hypervisor, and RTO OEM
“
Gabe and Brian are at VMworld 2009 in San Francisco this week. (Read Brian’s live blogs of the two keynotes – Day 1 and Day 2.) It’s been a busy few days with a lot of demos. We recorded enough video content for probably two or three weeks worth of shows.
Today’s episode includes the following highlights:
An interview with RTO Software CEO Kevin Goodman. (VMware just announced that they’re OEMing RTO’s "Virtual Profile" product for inclusion in a future version of View.)
A demo of VMware’s client hypervisor called "CVP" from VMware’s Robert Baesman.
A demo of VMware’s upcoming software version of Teradici’s PC-over-IP remote display protocol from Wyse’s Aditya Prasad.
And of course, Brian and Gabe’s thoughts and conversation about the show in general. Oh, and in case you’re wondering, yes, that’s a custom NetApp shirt that I’m wearing. NetApp was the only vendor who met my challenge to use no PowerPoint in their BriForum 2009 breakout session. Thanks to Mike Slisinger for the great presentation!”
Visit the page for the video.
—
http://www.compellentblog.com : VMworld 2009: Jon Toor of Xsigo Talks Virtual I/O
“We’re posting two videos from the conversation we had with Jon Toor, VP of Marketing for Xsigo. In this video, Jon talks about how virtual I/O works.”
Visit the page for the video.
—
http://rodos.haywood.org : VMworld Live Interview : Moving to the cloud and SpringSource
“
Recorded an interview with Dr John Troyer from VMware in the recording booth at VMworld today.”
Visit the page for the video.
—
More blog links
http://erikzandboer.wordpress.com/
VMware ThinApp becoming automagic!
esXpress uses vStorage API for detecting changed blocks
—
VMWorld 2009 – vCloud and Performance Monitoring
—
HP VMworld session: Conquering Costs and Complexity in a Virtualized Environment
Roger L.