VMware Direct Path and more: VMware piles up its next virtual stack for servers – lots of new code coming in 2009

http://www.channelregister.co.uk has an article with material containing information obtained from Bogomil Balkansky, senior director of product marketing at VMware.

Here are some quotes from the body of the article.

The VirtualCenter management tool for ESX Server hypervisors will become vCenter; software that rides atop the hypervisor will be referred to as vServices, and virtual file systems and other storage-related code will be called vStorage.

Balkansky says that in 2009, the hypervisor will double Virtual SMP capability to eight cores and quadruple memory to a 256 GB maximum per virtual machine.

Balkansky is expecting that virtualization-driven server consolidation will accelerate in 2009, and for an interesting reason: to boost performance on a single piece of iron. “A lot of applications running on servers today cannot take advantage of the extra cores chip makers are delivering,” Balkansky explains. “They end up wasting the extra horsepower.”

In the 2009 release, ESX Server will also get the capability to add virtual memory or CPUs to a VM instance without having to shut down and reboot the VM.
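Hot-add like this would presumably be enabled per VM in its configuration file. As a hypothetical sketch (the flag names below are assumptions, not confirmed settings), the .vmx file might carry something like:

```
# Hypothetical .vmx entries enabling hot-add for a VM (flag names are assumptions):
mem.hotadd = "TRUE"      # allow memory to be added while the VM is running
vcpu.hotadd = "TRUE"     # allow virtual CPUs to be added while the VM is running
memSize = "4096"         # current allocation, grown later without a reboot
numvcpus = "2"
```

The guest OS would, of course, also have to recognize the new CPUs or memory without a restart for this to be useful.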

Next year’s hypervisor update from VMware will also include a less-virtual feature called VM Direct Path, which is an I/O passthrough that allows a virtual machine to be tied directly to a physical I/O device, such as a disk controller or network interface card.

VMware wants to do this for two reasons, according to Balkansky. For one thing, by going physical with the link to an I/O device, the performance gets closer to native speeds. Moreover, if you have a peripheral that is not supported by the virtual I/O created inside ESX Server, you can use VM Direct Path to support that peripheral. You don’t have to wait for a driver from VMware or the peripheral supplier to come along. The only issue with VM Direct Path is that supporting peripherals in this manner sacrifices their virtual mobility. So VMotion, which allows a VM to teleport around a network of servers using shared storage, and DRS, the distributed resource scheduling software in the VMware Infrastructure stack, won’t work on VMs that use the VM Direct Path feature.
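Tying a VM to a physical PCI device would presumably also be expressed in the VM's configuration. A hypothetical sketch (the flag names and the device address below are invented for illustration, not taken from the article):

```
# Hypothetical .vmx entries mapping a physical PCI device into a VM:
pciPassthru0.present = "TRUE"    # expose one passthrough device to the guest
pciPassthru0.id = "02:00.0"      # bus:device.function of the physical NIC
# Note the tradeoff described above: a VM configured this way can no longer
# be moved with VMotion or balanced by DRS.
```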

Balkansky says that VMware is also cooking up its own fault tolerant VM capability, not code imported from partners Stratus Technologies or SteelEye Technology. While not divulging a lot of details, Balkansky says the code will keep two virtual machines running on two distinct physical servers in lock-step, with one VM active and the other operating in passive mode until a failure. This code was demonstrated at VMworld last summer, in fact.

On the networking front, the vNetwork Distributed Switch, a software switch interfacing the network links to VMs that VMware created in conjunction with Cisco Systems, will be delivered as a virtual appliance running atop ESX Server (or whatever the future product is called). Cisco will apparently sell its own version of the product as well, called the Cisco Nexus 1000V.

The vStorage APIs are going to be opened up so storage administration tools, which think in terms of LUNs and arrays, can see where VMs and their affiliated VMDK files are located on the disk arrays. Right now, VirtualCenter can’t easily tell you where the VMDKs are physically located on the storage, which means storage admins can’t easily figure out what is running where before they tweak an array in some way.
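To make the gap concrete, here is a hypothetical sketch of the kind of mapping an opened vStorage API could expose: storage tools think in LUNs and arrays, VirtualCenter thinks in VMs and VMDKs, and the join lets a storage admin see what runs where before touching an array. All names and data below are invented for illustration.

```python
# VM -> list of VMDK paths (the VirtualCenter view)
vm_to_vmdks = {
    "web01": ["[datastore1] web01/web01.vmdk"],
    "db01":  ["[datastore2] db01/db01.vmdk", "[datastore2] db01/db01_logs.vmdk"],
}

# datastore -> backing LUN on the array (the storage admin's view)
datastore_to_lun = {"datastore1": "array-A/LUN 7", "datastore2": "array-A/LUN 12"}

def vms_on_lun(lun):
    """Return the VMs whose VMDKs live on the given LUN."""
    hits = []
    for vm, vmdks in vm_to_vmdks.items():
        for vmdk in vmdks:
            # pull the datastore name out of "[datastore] path/file.vmdk"
            datastore = vmdk.split("]")[0].lstrip("[")
            if datastore_to_lun.get(datastore) == lun:
                hits.append(vm)
                break
    return sorted(hits)

print(vms_on_lun("array-A/LUN 12"))  # ['db01']
```

With a mapping like this, "can I safely rejigger LUN 12?" becomes a lookup rather than guesswork.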

VMware is also cooking up a variant of thin provisioning for VMDKs that will allow a virtual machine to overcommit its storage capacity, much as VMware has done for years with memory capacity. Such overcommitment may make some people jumpy, but allocating memory and disk to a VM and then only using a fraction of it is inefficient.
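The arithmetic behind overcommitment is simple enough to sketch. The numbers below are made up for illustration: each VM is allocated a virtual disk size, but a thin VMDK only consumes the blocks actually written, so the sum of allocations can safely exceed the datastore.

```python
# Hypothetical illustration of storage overcommitment with thin-provisioned VMDKs.
datastore_capacity_gb = 500

# (allocated_gb, actually_written_gb) per VM -- invented numbers
vms = [(100, 20), (200, 35), (150, 40), (250, 30)]

allocated = sum(a for a, _ in vms)   # what the guests think they have
consumed = sum(w for _, w in vms)    # what the thin VMDKs really occupy

overcommit_ratio = allocated / float(datastore_capacity_gb)

print(allocated)         # 700 GB promised -- more than the 500 GB datastore
print(consumed)          # 125 GB actually used -- fits comfortably
print(overcommit_ratio)  # 1.4
```

The jumpiness comes from the failure mode: if the guests ever try to write everything they were promised at once, the datastore runs out, so monitoring actual consumption becomes essential.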

I thought this was a great read, and it seems they have compiled a lot of information here. I can’t wait for ’09.