
VMworld 2009 San Francisco, Day 3, Late PM Edition

by Roger Lund

Yet Even More blogs on VMworld 2009!

I take no credit for any of this content; it all goes to the authors. I mean only to gather the content here so you don’t have to weed through the various blogs. For each piece I have included the blog link and the article title, with a link to the article. Thanks to everyone for putting these together for everyone to read!

http://blogs.netapp.com : Vmworld 2009: NetApp updates with VMware Site Recovery Manager

“As we all know, virtualization does an incredible job of reducing costs by creating massively dense pools of computing and storage.  With this added density comes increased risk if a single server fails. Technologies like VMware vMotion, VMware Fault Tolerance, VMware Storage vMotion and NetApp DataMotion help reduce those risks. But in today’s 24×7 economy, high availability of resources is a given and the ability to simplify backup and recovery is a mandatory part of any data center administrator’s world.

Building upon the work that we’ve done with VMware Site Recovery Manager (SRM) over the past several years, we’re proud to announce our next phases of expansion with the integration of NetApp and SRM.  In the demo videos below (which are also shown in the NetApp booth, #2102), you’ll see our integration with the upcoming version of SRM (which includes support for NFS), and continued integration with NetApp SnapMirror and NetApp SMVI 2.0.

In this demo video, you’ll see our latest vCenter plug-in, which truly automates and simplifies the backup and recovery process. Too many other SRM products just handle backup, and then make recovery a tedious manual series of steps that is prone to errors.”

http://blog.scottlowe.org : http://blog.scottlowe.org/2009/09/02/notes-on-some-vmworld-vendor-meetings/

Wednesday, September 2, 2009 in Storage, Virtualization by slowe

“I’ve had the opportunity to speak with a few different vendors over the last couple of days here in San Francisco at VMworld 2009. Here are some notes on my meetings.


My first meeting of the week was with Virsto, an early-stage storage startup (they just closed Series A funding in the last few weeks). Virsto is led by some long-time storage professionals from companies such as StorageTek, Veritas, and others.

Virsto is unique, to me, in that they have an interesting view of the storage component. I met with Alex, one of the founders, and he used a term that I found quite illustrative and useful: “the I/O blender”. This is the term he applied to the effect that the hypervisor has on I/O as it moves from the virtual server to the physical server to the storage layer. If you think about it, it makes sense: I/O from each virtual server has to be multiplexed onto the same HBAs as the I/O from every other virtual server. The end result is, of course, that the storage array ends up having to deal with small, random I/O workloads instead of large, sequential workloads. This impairs performance.
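
The blender effect is easy to see in miniature: interleave a few perfectly sequential per-VM streams onto one shared queue and the result, from the array’s point of view, is random. A toy sketch (hypothetical block addresses, not anything from Virsto):

```python
import itertools

# Three VMs, each issuing perfectly sequential reads in its own LBA range.
vm_streams = {
    "vm1": iter(range(1000, 1006)),
    "vm2": iter(range(5000, 5006)),
    "vm3": iter(range(9000, 9006)),
}

# The hypervisor multiplexes all guest I/O onto the same HBA queue
# (round-robin here, for simplicity).
blended = [lba
           for lbas in itertools.zip_longest(*vm_streams.values())
           for lba in lbas if lba is not None]

# Each stream was sequential, but the queue the array sees is not.
gaps = [abs(b - a) for a, b in zip(blended, blended[1:])]
print(blended[:6])
print(sum(g > 1 for g in gaps), "of", len(gaps), "adjacent requests are jumps")
```

Every per-VM stream here is a clean ascending run, yet every adjacent pair in the blended queue is a large seek, which is exactly the workload rotating disks handle worst.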

The Virsto solution is built around a software component that is currently architected only for Microsoft Hyper-V. Virsto’s software component illustrates both the strength and the weakness of Hyper-V’s indirect I/O model. It’s a strength in that it’s very easy to write a filter driver to run in the management partition to modify VM I/O; the weakness is that it’s really easy to write a filter driver to run in the management partition to modify VM I/O. I’m being partially facetious here, but I hope you get the point. In any case, what Virsto’s software layer does is help undo the I/O blender effect by working in conjunction with a storage staging layer. Typically this would be some sort of high-speed local storage, such as an SSD. As a result of the software working in conjunction with the hardware, Virsto can “re-assemble” I/O into workloads that are better suited for performance at the physical layer and thus undo the I/O blender effect.
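
Conceptually, the staging approach looks something like the following sketch. This is purely illustrative (Virsto has not published its design); it just shows the idea of absorbing random guest writes into a fast sequential log and destaging them in sorted order:

```python
from collections import defaultdict

# Staging log on the fast device: appends are always sequential there,
# no matter how random the guest I/O pattern is.
log = []  # entries of (vm, lba, data)

def stage_write(vm, lba, data):
    """Absorb a guest write by appending it to the sequential staging log."""
    log.append((vm, lba, data))

def destage():
    """Group staged writes per VM and plan them in ascending LBA order,
    turning random arrivals into near-sequential runs for the backing disks."""
    by_vm = defaultdict(list)
    for vm, lba, data in log:
        by_vm[vm].append((lba, data))
    plan = {vm: sorted(writes) for vm, writes in by_vm.items()}
    log.clear()  # staged data has been (notionally) written back
    return plan

# Writes arrive interleaved and out of order...
stage_write("vm2", 5001, b"b")
stage_write("vm1", 1003, b"x")
stage_write("vm1", 1001, b"y")
plan = destage()
print(plan["vm1"])  # ...but destage per VM in ascending LBA order
```

The real product obviously has to handle crash consistency, read routing, and cache eviction; the point of the sketch is only the log-then-sort shape of the idea.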

Virsto’s solution also allows for some forms of storage virtualization, in that different types of underlying block storage can be combined and managed by Virsto. Virsto’s solution also offers snapshots (checkpoints), the ability to split data streams for replication, and better support for disk-to-disk (D2D) backups via their snapshots.

My biggest concern with Virsto is that they are competing in a space with lots of very large, very well-funded organizations that are laser-focused on making their storage work extremely well with VMware vSphere and virtual environments, including Hyper-V. Think of NetApp integration with Hyper-V, or EMC integration with Hyper-V (remember that Virsto supports only Hyper-V at this time). These companies have lots of development talent, lots of money, and an established presence. I fear it will be difficult for Virsto to really gain a foothold in that space.


Xangati (pronounced “zan-gotti”) is an application performance solution. I had a spirited discussion with the Xangati folks about what differentiates them versus other solutions like AppSpeed, BlueStripe, etc., in that Xangati relies upon network traffic information to measure application performance. In Xangati’s case, they rely upon NetFlow (or the various vendor-specific implementations of NetFlow). At first, I found this a bit limiting because I wasn’t aware that NetFlow v5 was supported in ESX 3.5 on vNetwork Standard Switches (I know, this is probably something everyone knows). But it indeed is (see here); the real question is whether it will continue to be supported on vNetwork Standard Switches on vSphere. In any case, Xangati insists that using NetFlow to gather network information is very different (and yields different results) than performing packet analysis. I must admit that I don’t fully see the difference; perhaps a network guru can explain it?

Having stated all that, what Xangati does is pretty interesting. It requires that you enable NetFlow throughout the environment—on both virtual and physical switches and other physical network equipment—and then allows you to see end-to-end network usage on an application-by-application basis. From there, Xangati allows the organization to take “network recordings” of the network behavior and then replay that recording later. Different views can be created for different roles within the organization, allowing IT pros to see only the information they need or want to see.
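
Since Xangati’s collection rests on NetFlow, it’s worth noting how simple the v5 export format is: a fixed 24-byte header followed by fixed 48-byte flow records. A minimal parser sketch following the public v5 field layout (my own illustration, not Xangati’s code):

```python
import struct
from collections import namedtuple

# NetFlow v5 wire format: 24-byte header, then `count` 48-byte flow records.
HEADER = struct.Struct("!HHIIIIBBH")           # version..sampling_interval
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")  # srcaddr..pad2

Flow = namedtuple("Flow", "src dst packets octets srcport dstport proto")

def parse_v5(datagram):
    """Parse one NetFlow v5 export datagram into a list of Flow tuples."""
    version, count, *_ = HEADER.unpack_from(datagram, 0)
    if version != 5:
        raise ValueError(f"not NetFlow v5 (version={version})")
    flows = []
    for i in range(count):
        f = RECORD.unpack_from(datagram, HEADER.size + i * RECORD.size)
        # Field positions per the v5 record layout:
        src, dst = f[0], f[1]
        packets, octets = f[5], f[6]
        srcport, dstport, proto = f[9], f[10], f[13]
        flows.append(Flow(src, dst, packets, octets, srcport, dstport, proto))
    return flows
```

Note what the exporter sends: per-flow counters (packets, bytes, ports, protocol), not packet payloads. That is the practical difference from packet analysis: NetFlow gives aggregated flow metadata cheaply at scale, while packet capture sees content and per-packet timing.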

I’ll go back to my earlier statements and say that while Xangati offers some unique functionality—such as the ability for an end-user to initiate a network recording and submit that as a “Visual Trouble Ticket” to the help desk—I’m still at a loss to explain how, in the end, they are different from AppSpeed, BlueStripe, and others who provide end-to-end application performance and correlation. Yes, they use a different way (perhaps a superior way) of gathering information, but what the customer ultimately wants to know is this: “Which part is slow?” It seems that there are other, more well-established solutions already on the market that are trying to address this. Whether or not Xangati is successful in the space is yet to be seen, but I do wish them the best of luck!

I also met with Virtual Instruments and had a great discussion with them, but as I’m running a bit short on time I’ll have to do their write-up later on. Check back here later for the write-up on Virtual Instruments.”

Some Additional VMworld Vendor Meetings

“Earlier I posted some notes on meetings I’d had with Virsto and Xangati. In this post I’d like to discuss some additional meetings I’ve had with Virtual Instruments and Tranxition.
Virtual Instruments

Virtual Instruments makes a solution that is intended to help troubleshoot and optimize storage environments. I had the opportunity to grab some coffee with them this morning and hear about what they’re doing and how they’re doing it. As a company carved out of Finisar and taken private, their goal is to help drive higher levels of virtualization by providing more visibility into the storage fabric.

Clearly, this message will really only resonate with larger customers, and that is their target market: multiple hundreds of terabytes into the single petabyte range. At this scale, providing visibility into the thousands of virtual machines across hundreds of ESX/ESXi hosts attached to hundreds of Fibre Channel ports is almost impossible. Virtual Instruments tackles this with a multi-prong approach:
First, they use a SAN tap to plug into the Fibre Channel fabric and mirror traffic information to a collection device for analysis. If you’re a networking person, you can think of this as using a SPAN port to mirror traffic. This is done on the storage side to reduce the scale, due to fan-in/fan-out ratios.
Second, they gather SNMP information from the Fibre Channel switches. This enables visibility at the switch level.
Third and finally, Virtual Instruments collects information from VMware vCenter Server. This information provides the final piece necessary to correlate per-host and per-VM traffic to the information being gathered by the fabric taps and the switch monitoring.
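
The value of the third source is the join: once the tap, switch, and vCenter feeds share a common key (here, the HBA port a host logs in through), per-VM and per-port views can be merged into one picture. A rough sketch with hypothetical record shapes (real collectors expose far richer data than this):

```python
# Hypothetical sample records from the three sources, keyed by HBA WWPN.
fabric_taps = {"50:06:01:60": {"mb_per_s": 410, "avg_latency_ms": 7.2}}
switch_snmp = {"50:06:01:60": {"switch_port": "fc1/7", "crc_errors": 0}}
vcenter = [
    {"vm": "sql01", "host": "esx-03", "hba_wwpn": "50:06:01:60"},
    {"vm": "web04", "host": "esx-03", "hba_wwpn": "50:06:01:60"},
]

def correlate(vcenter, fabric_taps, switch_snmp):
    """Merge the three sources into one per-VM view via the shared WWPN key."""
    merged = []
    for vm in vcenter:
        port = vm["hba_wwpn"]
        merged.append({**vm,
                       **fabric_taps.get(port, {}),
                       **switch_snmp.get(port, {})})
    return merged

for row in correlate(vcenter, fabric_taps, switch_snmp):
    print(row["vm"], "->", row["switch_port"], row["avg_latency_ms"], "ms")
```

With that join in hand, a per-VM latency or throughput figure can be traced back to a specific fabric port, which is the visibility being described above.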

What this allows Virtual Instruments to do is to feed information back to vCenter Server to enable I/O-based recommendations for VM movement. It also enables visibility into path utilization so that multipathing information can be configured for optimal performance. Finally, more detailed storage information is exposed that enables organizations to more effectively place VM storage on Tier 1, Tier 2, or Tier 3 according to its storage needs. In some cases, in fact, money saved on buying additional Tier 1 storage can more than pay for an implementation of Virtual Instruments.

Overall, this is a very interesting solution, albeit limited in scope to larger environments. If this describes your organization, though, it may definitely be worth a closer look.

Tranxition

Tranxition makes software to do “personality virtualization.” Apparently they’ve been around since 1998 and are just now becoming more visible, creating a partner program, and starting to expand coverage. Their key product is Adaptive Persona, which some have said can be called “Softricity for user personality data”. The product seems to work a lot like ThinApp in that it creates a virtual file system and virtual Registry that captures all user personality data. This user personality data, which can reside either inside or outside the traditional user profile file system structure, is then continuously streamed back to a central server. When a user logs off, whatever data has not been synchronized to the server is then copied up to the server, and the local system is scrubbed of user personality data. Then, when that same user logs on to a different system, Tranxition streams down only those portions of the user personality that are needed at that moment. All other data is fetched “on demand”. This helps speed up the logon process by decoupling the size of the profile from the time required to log on.
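
The logoff/logon flow described above can be modeled in a few lines. This is a toy illustration of on-demand streaming, not Tranxition’s actual protocol or storage layout (neither is public here):

```python
# Toy model: personality files live on a central store; the local machine
# pulls each file only when it is first touched, and scrubs itself at logoff.
class StreamedProfile:
    def __init__(self, server_store):
        self._server = server_store  # {path: bytes}, standing in for the server
        self._local = {}             # local cache built up during the session

    def read(self, path):
        # First touch pulls the file down; later reads hit the local cache.
        if path not in self._local:
            self._local[path] = self._server[path]
        return self._local[path]

    def write(self, path, data):
        self._local[path] = data     # changes accumulate locally first

    def logoff(self):
        # Push back anything not yet synchronized, then scrub the machine.
        self._server.update(self._local)
        self._local.clear()

store = {"ntuser/wallpaper.reg": b"blue", "appdata/sig.txt": b"--rl"}
session = StreamedProfile(store)
print(session.read("appdata/sig.txt"))  # fetched on demand, not at logon
session.write("appdata/sig.txt", b"--roger")
session.logoff()                        # remainder synced, local copy scrubbed
```

The point of the model is the decoupling the post describes: logon cost no longer scales with profile size, because nothing is copied until it is actually needed.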

Overall, I was fairly impressed with the product. They seem to have done a reasonably good job of taking the principles behind application virtualization and applying them to user personality management. If anyone has any additional feedback on Tranxition (vendors, please disclose yourselves!), I’d love to hear it in the comments.”


VMworld 2009 Keynote, Day 1

“First up this morning was Tod Nielsen, Chief Operating Officer from VMware.

Tod thanked the sponsors for their support and gave us the number: 12,488 attendees at VMworld 2009.  Quite an amazing achievement given this economy.  Tod also described the goal of VMware: To Energize and Save, whether that be Financial Savings, Human Savings, or Earth’s Savings.  He then showed a video of customer testimonials from Siemens and Kroger.  Next up, Tod introduced Paul Maritz, Chief Executive Officer of VMware.

Paul gave a definition of the cloud to the audience.  He went on to talk about how VMware will bridge the gap between the datacenter and the cloud of the future.  He described the VMware journey: where VMware has come from and where he believes VMware is headed.  He reiterated a comment that I’ve heard him make once before: that vSphere had more than 1,500 engineers working on it, which was more than any Windows operating system during his tenure at Microsoft.  This is another reason that vSphere is recognized as a true platform.  He went on to give a review of the vSphere features and how vSphere integrates into the ecosystem of available software.  He talked about customer upgrade plans and how customers polled are planning on upgrading to vSphere in the next few months.  He then introduced Tom Brey, Sr. Technical Staff Engineer, IBM.

Tom showed a great demo of the power consumption of an IBM 3850 inside the vSphere client.  In the standard performance tab, he had real-time counters of power consumption in watts on a per-VM basis.  Very cool to see how much power your VMs are using.  One thing I did note from his preso: he was running it on ESX 4.1 build 000000 (where can I get a copy of that?).

Paul returns after the demo to give us an explanation of all of the things coming to enhance vCenter and the management of the infrastructure.  With that lead-in, Paul introduces us to an engineer named Bruce who goes through a demonstration of Lab Manager 4 and chargeback.

Bruce shows us a demo of checking out a configuration in Lab Manager, and then Paul leads into a new portal called vCloud Express.  This portal lets customers check out VMs, or groups of them, to use for development or for production.  It’s an interface that fulfills a much-needed aspect of the cloud: a self-service portal.

Paul next leads into Desktops as a service and what is happening on that front.  For this Paul introduces Steve Dupree, Director of Platform Virtualization, Hewlett Packard.

Steve talks about the direction that HP is headed with virtualization.  Steve does a great demo of HP Insight integration into the vSphere client.  An extra tab is shown for the physical hosts where admins can find all of the Insight information about the health of the physical hardware.

Next up, Paul introduces Chris Renter, Telus Communications.

Chris demos a bit of the upcoming functionality and experience customers can expect from VMware View with PCoIP integration.  The PCoIP protocol will be a new enhancement for View coming later this year, according to Paul.

The last (whew!) of the guests is Rod Johnson, CEO, SpringSource.

Rod gives us a description of how SpringSource will complement VMware to deliver PaaS (Platform as a Service).  He demos how easily code can be migrated to the cloud using the SpringSource framework.  It’s pretty impressive to see how this migration can happen from a developer’s perspective.

At this point Paul finally closes the opening keynote.  On to the press event and more pictures to follow.

*all photos are copyright vmguy.com.  Any reuse of these photographs without express written consent is prohibited.”

VMworld 2009 Keynote, Day 2

“Got to the general session early to snag a good seat at the bloggers table.  First up: Steve Herrod, CTO, VMware.

Steve begins on the topic of desktops.  He says the industry wants to go from device-centric to people-centric (I totally agree – it’s a better model).  He talks about the goals VMware has in its desktop strategy, and he labels the user experience as the most important.  He describes why vSphere is the right platform for desktop virtualization.  Steve talks about how VMware decomposes the desktop to share images and simplify patching.  Steve announces an OEM agreement with RTO Virtual Profiles (just heard about this, check the link, cool stuff).  Steve goes on to describe PCoIP.  He describes hosted virtualization and how users can check out a desktop.  He describes a use case where users bring their own device to work and check out a VM.  This seems like a great use case for hosted virtualization: the user runs a base OS on their own device and checks out their desktop to run as a VM on top of their personal OS.   He then talks about the scenario of bare-metal virtualization for corporate-owned IT. This is where you would load a hypervisor onto bare-metal laptop hardware and then check out a desktop VM to it.  He then introduces Mike to show CVP.

Mike shows Windows 7 running on CVP.  He shows the Aero interface and some videos running in CVP.  He then shows the Wyse cloud app and demos accessing the same desktop from his iPhone.  Really cool stuff, I love the iPhone app. Now back to Steve.  Steve talks about mobile access to vCenter and describes how the engineers are working on mobile access for managing View desktops as well as vCenter mobile management. Steve describes the Mobile Virtualization Platform (MVP), which runs virtual machines on mobile phones.  For this section he introduces Peter Chiura, from Visa.

Peter demonstrates what Visa is doing.  He shows a Visa application on a phone which can show live transactions from the owner’s Visa card in near real time.  It can then show special offers from vendors you have made purchases through.  He then backs out and shows the audience that the application is really running on Android in a VM on a Windows Mobile phone.  Pretty cool demo, really. Back to Steve.  Steve talks about the platform.  He describes vSphere as “the software mainframe”.  He describes vMotion and its maturity.  He gives the history of VMworld 2003, with 1,600 attendees, and how they did an early preview of vMotion on stage for the first time at that conference (I was there when I was a customer!).  He describes vMotion migrations and how many have occurred, how many dollars they’ve saved, and how many marriages have been saved due to the reduced time administrators spend working (gets a really good laugh from the crowd).

He talks about the breadth of vMotion and how it’s grown into Storage vMotion.  He talks about using vMotion to balance workloads with DRS.  He talks about how you’ll soon be able to extend DRS to network and storage, for instance, moving a virtual machine to another host when the network adapter has been saturated.  He goes on to talk about Distributed Power Management and how it powers off servers when they are not in use, and how this can be very useful for desktop scenarios.  He talks about AppSpeed and describes how it works and how it can provide the answer on who should be called when a multi-tier app is not running optimally.  He then shows vApps and the descriptors that they can have.  He leads into the new VMsafe APIs.  He describes how the vApp descriptors can now have security descriptors.  He then introduces Rob to show a preview of Config Control.

Rob shows how we can see exactly what’s changed in a virtualized environment.  He shows a demo with a port group being changed, how that change was tracked, and what else was impacted by it.  It was a very technical but cool demo of how to see which changes can impact your environment.

Back to Steve, and on to choice.  Lab Manager and providing users a self-service portal. He gives some stats on the datacenter for VMworld: if it were physical, they would need 37,248 machines, 25 megawatts, and 3 football fields worth of physical space. Running everything virtual, they cut it down to 1 end zone, 776 physical servers, and 540 kilowatts. He leads on to the cloud.  He gives an overview of the cloud and how vSphere is the foundation for it.  He talks about connecting the internal and external cloud.  He talks about how SRM handles connectivity between two internal clouds.  He then talks about how long-distance vMotion can be handled.  He describes the challenges: memory and disk sync, and network identity.  It’s very challenging to keep 2 VMs in sync with their memory changes and disk changes.  Steve explains how Long Distance vMotion can be used to follow the sun/moon, or for DR. He talks about how different vendors are doing this in different ways.  He describes Cisco and F5 and their different strategies for doing it. He goes on to the vCloud API and how a console can show the VMs in your datacenter alongside your VMs that are running at a hosting provider.  He describes PaaS and how VMware wants to provide the platform and now can provide the framework to run applications. He introduces Adrian Colyer, CTO, SpringSource.

Adrian shows an application written in SpringSource Cloud Foundry. He describes how the application itself and the method of delivering it at scale are separated.  The app can be coded and then delivered and scaled very easily.  I notice a few people exiting as we reach the end.  Probably not the most interesting section for non-programmers, but I was getting what he was trying to say.

Steve returns to summarize everything that was discussed: how we are in a pendulum shift in the industry, and VMware is poised to assist customers with all of their needs.

*Photos are copyright vmguy.com.  Any reproduction without express written consent is prohibited.

( Thanks for permission to post the pictures from vmguy.com)


And a nice picture of a local MN VMware user, Jason Boche.


Roger L.
