
VMworld 2009 San Francisco, Day 4, PM Edition

by Roger Lund

Yep, you guessed it, again: yet even more blogs on VMworld 2009!

I take no credit for any of this content; it all goes to the original authors. I only mean to collect the content here so you don't have to weed through the various blogs. For each piece I've included the blog link and the article title, with a link to the article. Thanks to everyone for putting these together for everyone to read!

http://netapptips.com/ : VMworld 2009: Brian Madden is true to his word

As all of us who have been in the industry (for way too long) know, tradeshow season is full of sound and fury, signifying nothing.  Lots of announcements, lots of demonstrations, lots of bickering back and forth between vendors about whose widget is more widgety.  But at the end of the day, just like the rest of the year, the success or failure of tradeshows is all about people.  It takes many, many people and many, many (hundreds of) hours to organize, coordinate and execute a tradeshow. It ultimately comes down to trust, that most basic element of strong relationships.

For quite some time, the TMEs on our team that focus on Virtual Desktops (VDI) have been reading Brian Madden’s blog.  They look to it for industry trends, but most importantly they look to it for viewpoints from experienced Desktop Administrators.  From time to time we find ourselves being too storage-centric, and Brian’s blog helps us step back and look at the desktop challenges from a broader perspective.

Back in July, we approached Brian about presenting at his BriForum event.  We wanted to introduce some new concepts (prior to VMworld), and his audience would give us the feedback (good or bad) that we needed to hear.  We’d be the only storage vendor in attendance, but we needed to get close to the customer base.

Not only did we host a booth, but we also presented one of the technical sessions.  Prior to the event, we had our slides ready to go, and then we saw this challenge from Brian.  Always interested in a challenge, Mike Slisinger (@slisinger) took it upon himself to create a winning presentation.  Needless to say, the NetApp presentation met and exceeded the criteria, and Brian promised to wear his NetApp t-shirt around the halls of VMworld.

A man of his word, Brian was spotted today enthusiastically (as always) walking around the NetApp booth (#2102), proudly wearing his NetApp t-shirt.  The shirt looked great, and we really appreciate the feedback Brian has provided our VDI team on our products and solutions.

As I mentioned earlier, the best part about tradeshows is the chance to connect with people: the chance to have open discussions, to learn about new things, and to see how you can improve.  Having strong relationships is the foundation of that ability to connect with people.

 

VMworld 2009: Get prepared for vSphere upgrades

As we heard in Paul Maritz's keynote, VMware is forecasting that 75% of their customers will be upgrading their virtualized environments to vSphere within six months.  If you're an existing VMware customer, or a VMware partner, this means you had better be prepared for the upgrades.

To help everyone planning to move to vSphere, we’d recommend that you review the following technical documents:

Both of these documents have been fully tested and co-reviewed by all of NetApp's alliance partners (VMware, Microsoft and Cisco), so you can feel extremely confident in using these as the foundation of your next generation architectures utilizing vSphere.

And of course, vSphere deployments are covered by the NetApp 50% Virtualization Storage Guarantee.

 

http://itsjustanotherlayer.com : EA3196 – Virtualizing BlackBerry Enterprise on VMware

Once again, another session I didn't sign up for, and I had zero issues getting in.

To start off, RIM and VMware have been working together for two years, and BES is officially supported on VMware.   Together, RIM and VMware have done numerous successful engagements running BES on VMware.   The interesting thing is that RIM has been running its own BES on VMware for over three years now.

Today the BES best practice is no more than 1,000 users per server, and BES is not very multi-core friendly.   It is not cluster aware, nor does it have any HA built in.   The new 5.0 version of BES is coming with some HA capability via replication at the application layer.   One thing that has been seen in various engagements is that if you put the BES servers on the same VMware hosts as virtualized Exchange, there are noticeable performance improvements.

The support options for BES clearly state that it is supported on VMware ESX.

One of the big reasons to virtualize BES is that, since it cannot use multiple cores effectively, today's big 32-core boxes can only use a fraction of their capacity.  By virtualizing, BES can achieve significant consolidation.   Once virtualized, BES also gets all the advantages of running virtual, such as test/dev deployments, server consolidation, HA and so on, things that are well known and talked about already.

Template use is encouraged for rapid BES deployments; the only gotcha is what your company policies and rules allow, and templates can potentially save quite a bit of time.   This presentation is really trying to show how to use VMware/virtualization with BES for change management improvements, server maintenance, HA, component failures and other base vSphere technologies.   VMware is looking toward using Fault Tolerance for their own BES servers.

BES is often not considered Tier 1 for DR events, even though email is often the first thing needed after a DR event to get communications going again.   The reason is generally the complexity and cost of DR.

Performance testing with the Alliance Team from VMware has been done successfully numerous times over the past couple of years, at both RIM and VMware offices.   The main goal of these efforts was to generate white papers and reference architectures that are known to work.   The testing used Exchange LoadGen and the PERK load driver (the BES testing driver).  Part of this is figuring out how to scale out with more VMs, as the scale-up behavior is already known.

The hardware was 8 CPUs (Intel E5450 at 3 GHz), 16 GB RAM and a NetApp FAS3020, running vSphere 4 and BES 4.1.6.  In the 2,000-user test with two Exchange systems, the result was 23% CPU utilization on 2-vCPU BES VMs.   Latency numbers were under 10 ms, and nothing majorly wrong was seen in the testing metrics.   Going from ESX 3.5 to vSphere 4 gave a 10-15% CPU reduction on the same workload tests, and adding in hardware assist for memory looked like another 3-5% reduction in CPU usage.   In their high-load testing, doing a VMotion caused a small hiccup of about a 10% increase in CPU utilization during the cut-over period of the VMotion.   This is well within the capacity available on the host and in the guest OS.

Their recommendation is to put no more than 2,000 users on a 2-vCPU VM; if you need more, add more VMs.   BES scales and performs well in this scale-out architecture.   Be sure you give the storage the number of spindles it needs, the standard statement when talking about virtualization management.
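As a rough illustration of that scale-out rule of thumb, here is a tiny sizing sketch in Python. This is my own back-of-the-envelope helper, not anything shown in the session; only the "2,000 users per 2-vCPU VM" figures come from the talk.

```python
# Rough BES scale-out sizing sketch based on the "no more than 2,000 users
# per 2-vCPU BES VM" guidance quoted above. Purely illustrative; the per-VM
# limits are the only numbers taken from the session.
import math

USERS_PER_VM = 2000   # session guidance: cap each BES VM at ~2k users
VCPUS_PER_VM = 2      # session guidance: 2 vCPUs per BES VM

def size_bes_farm(total_users: int) -> dict:
    """Return a simple scale-out estimate for a given user count."""
    vms = math.ceil(total_users / USERS_PER_VM)
    return {
        "bes_vms": vms,
        "total_vcpus": vms * VCPUS_PER_VM,
        "avg_users_per_vm": math.ceil(total_users / vms),
    }

if __name__ == "__main__":
    # Example: the 6,500-user environment described later in the session
    # comes out at 4 BES VMs, matching the "4 prod BES VMs" design below.
    print(size_bes_farm(6500))  # {'bes_vms': 4, 'total_vcpus': 8, 'avg_users_per_vm': 1625}
```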

The presenter then went into a couple of reference architecture designs: small business and enterprise, in a couple of different varieties.

BES @ VMware: 3 physical locations, 6,500 Exchange users.   1,000 of them have 5 GB mailboxes, and the default for the rest is 2 GB.   BES has become pretty common there.   They run Exchange 2007, with Windows 2003 for AD and as the guest OS.   It looks fairly straightforward.

4 production BES VMs, 1 standby BES VM, 1 attachment BES VM and 1 dedicated BES database VM, running on 7 physical servers alongside 40 additional VM workloads on this cluster.

http://www.virtuallifestyle.nl : VMworld ‘09 – Long Distance VMotion (TA3105)

I attended the breakout session about long distance VMotion, TA3105. This session presented results from a long distance VMotion joint validation project done by VMware, EMC and Cisco, delivered by Shudong Zhou, Staff Engineer at VMware, Balaji Sivasubramanian, Product Marketing Manager at Cisco, and Chad Sakac, VP of the VMware Technology Alliance at EMC.

What’s the use case?

With Paul Maritz mentioning the vCloud a lot, I can see where long distance VMotion can make itself useful. When migrating to or from any internal or external cloud, there's a good chance you'd want to do so without downtime, i.e. with all your virtual machines in the cloud running.

What are the challenges?

The main challenge in getting VMotion working between datacenters isn't the VMotion technology itself, but the adaptations required for shared storage and networking. Because a virtual machine being VMotioned cannot have its IP address changed, some challenges exist with the network spanning datacenters. You'll need stretched VLANs: a flat network with the same subnet and broadcast domain at both locations.

The same goes for storage. VMotion requires all ESX hosts to have read/write access to the same shared storage, and storage does not travel well over (smaller) WAN links. There needs to be some kind of synchronization, or a different way to present datastores on both sides.

Replication won't work in this case, as replication doesn't provide active/active access to the data. The secondary datacenter doesn't have active access to the replicated data; it just has a passive copy of the data, which it can't write to. Using replication as a method of getting VMotion working will result in major problems, one of which is your boss making you stand in the corner for being a bad, bad boy.

What methods are available now?

Chad explained a couple of methods of making the storage available to the secondary datacenter:

Remote VMotion
This is the simplest way to get cross-datacenter VMotion up and running. This method entails a single SAN, with datastores presented to ESX servers in both datacenters, and a standard VMotion. Doing this leaves the virtual machine's files at the primary site and, as such, is not completely what you'd want, as you're not independent of the primary location.

Storage VMotion before compute VMotion
This method does a Storage VMotion from a datastore at the primary location to the secondary location; after the Storage VMotion, a compute VMotion is done (a rough sketch of this two-step flow follows the list of methods below). This solves the problem of the previous method, as the VM moves completely. It does take a lot of time, and does not (yet) have any improvements that leverage the vStorage APIs for, for instance, deduplication.

Remote VMotion with advanced active/active storage model
Here's where Chad catches fire and really starts to talk his magic. This method involves a SAN solution with additional layers of virtualization built into it, so two physically separated heads and shelves share RAM and CPU. This makes both heads into a single, logical SAN, which is fully geo-redundant. When doing a VMotion, no additional steps are needed on the vSphere platform to make all data (VMDKs, etc.) available at the secondary datacenter, as the SAN itself does all the heavy lifting. This technique does its trick completely transparently to the vSphere environment, as only a single LUN is presented to the ESX hosts.
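To make the second method (Storage VMotion followed by a compute VMotion) a bit more concrete, here is a minimal sketch of that two-step flow using the pyVmomi Python SDK. This is purely my own illustration, not code from the session (pyVmomi post-dates 2009); the vCenter address, credentials, VM, datastore and host names are all hypothetical placeholders.

```python
# Minimal sketch of "Storage VMotion, then compute VMotion" with pyVmomi.
# All names (vcenter.example.com, credentials, web01, ds-remote, esx-remote)
# are hypothetical placeholders for this illustration.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl, time

def wait_for_task(task):
    """Block until a vCenter task completes, raising on failure."""
    while task.info.state not in (vim.TaskInfo.State.success,
                                  vim.TaskInfo.State.error):
        time.sleep(2)
    if task.info.state == vim.TaskInfo.State.error:
        raise task.info.error

def find_obj(content, vimtype, name):
    """Look up a managed object (VM, host, datastore, ...) by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vm = find_obj(content, vim.VirtualMachine, "web01")
remote_ds = find_obj(content, vim.Datastore, "ds-remote")
remote_host = find_obj(content, vim.HostSystem, "esx-remote")

# Step 1: Storage VMotion the VM's files to a datastore at the remote site.
relocate = vim.vm.RelocateSpec(datastore=remote_ds)
wait_for_task(vm.RelocateVM_Task(spec=relocate))

# Step 2: compute VMotion the running VM onto a host at the remote site.
wait_for_task(vm.MigrateVM_Task(
    host=remote_host,
    priority=vim.VirtualMachine.MovePriority.defaultPriority))

Disconnect(si)
```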

What's VMware's official statement on long distance VMotion?

VMotion across datacenters is officially supported as of September 2009. You'll need VMware ESX 4 at both datacenters and a single instance of vCenter 4 managing both datacenters. Because VMware DRS and HA aren't aware of any physical separation, long distance VMotion is supported only when using a separate cluster for each site. Spanned clusters could work, but are simply not supported. The maximum distance at this point is 200 kilometers, which simply indicates that the research did not include latencies higher than 5 ms RTT. The minimum link between sites needs to be OC-12 (622 Mbps). The bandwidth requirements for normal VMotions within the datacenter and cluster will change accordingly. The network needs to be stretched to include both datacenters, as the IP address of the migrated virtual machine cannot change.
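To put that 622 Mbps figure in perspective, here is a quick back-of-the-envelope calculation. This is my own arithmetic, not from the session; the 4 GB VM memory size is an assumed example, and real VMotion traffic also depends on page dirtying rate, re-copy passes and protocol overhead.

```python
# Rough time to push one VM's memory image across the minimum OC-12 link.
# Illustrative only: 4 GB is an assumed VM size; overhead is ignored.
link_mbps = 622          # OC-12 minimum inter-site link from the session
vm_memory_gb = 4         # assumed VM memory size

seconds = vm_memory_gb * 8 * 1024 / link_mbps
print(f"~{seconds:.0f} seconds to transfer {vm_memory_gb} GB at {link_mbps} Mbps")  # ~53 s
```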

Certain extensions to the network are needed for long distance VMotion to be supported. Be prepared to use your credit card to buy a Cisco DCI-capable device like the Catalyst 6500 VSS or Nexus 7000 vPC. On the storage side, extensions like WDM, FCIP and optionally Cisco I/O Acceleration are required.

Best Practices

  • Single vCenter instance managing both datacenters;
  • At least one cluster per site;
  • A single vNetwork Distributed Switch (like the Cisco Nexus 1000v) across clusters and sites;
  • Network routing and policies need to be synchronized or adjusted accordingly.

Future Directions

More information

http://technodrone.blogspot.com : Best of VMworld 2009 Contest Winners

Category: Business Continuity and Data Protection

Gold: Vizioncore Inc., vRanger Pro 4.0
Finalist: Veeam Software Inc., Veeam Backup & Replication
Finalist: PHD Virtual Technologies, esXpress 3.6

Category: Security and Virtualization
Gold: HyTrust, HyTrust Appliance
Finalist: Catbird Networks Inc., Catbird vCompliance

Category: Virtualization Management
Gold: Netuitive Inc., Netuitive SI for Virtual Data Centers
Finalist: Veeam Software, Veeam Management Suite
Finalist: Embotics Corp., V-Commander 3.0

Category: Hardware for Virtualization
Gold: Cisco Systems Inc., Unified Computing System
Finalist: Xsigo Systems, VP780 I/O Director 2.0
Finalist: AFORE Solutions Inc., ASE3300

Category: Desktop Virtualization
Gold: AppSense, AppSense Environment Manager, 8.0
Finalist: Liquidware Labs, Stratusphere
Finalist: Virtual Computer Inc., NxTop

Category: Cloud Computing Technologies
Gold: Mellanox Technologies, Italio Cloud Appliance
Finalist: InContinuum Software, CloudController v 1.5
Finalist: Catbird Networks Inc., Catbird V-Security Cloud Edition

Category: New Technology
Winner: VirtenSys, Virtensys IOV switch VMX-500LSR

Category: Best of Show
Winner: HyTrust, HyTrust Appliance

Congratulations to all the Winners!!!

http://www.chriswolf.com/ : Thoughts on the VMworld Day 2 Keynote

I was very impressed by the information disseminated in the second VMworld keynote, led by CTO Steve Herrod. Here's a summary of the thoughts I tweeted during the morning keynote (in chronological order).

Steve Herrod talked about a "people centric" approach. VMware's technology needs to understand desktop user behavior. The existing offline VDI model (requiring a manual "check-out") is not people centric.

VMware’s announcement to OEM RTO Software’s Virtual Profiles was a good move. Burton Group considers profile virtualization a required element of enterprise desktop virtualization architecture.

VMware’s Steve Herrod and Mike Coleman discussed VMware’s software-based PC-over-IP (PCoIP) protocol. Feedback from Burton Group clients who were early PCoIP beta testers indicates that the protocol’s development is progressing well.

Herrod showed a picture of “hosted virtualization” for employee owned PCs on a MacBook. Is that a hint of a forthcoming announcement?

I would like to know if VMware’s Type I CVP client hypervisor will have VMsafe-like support in the 1.0 release. VMware has made few public statements regarding CVP architecture.

VMware’s CVP demo looked good, but it didn’t reach the “wow factor” achieved by Citrix when Citrix demoed a type I client hypervisor on a Mac at their Synergy conference.

The Wyse PocketCloud demonstration was impressive. PocketCloud is VMware’s first answer to the Citrix Receiver for iPhone.

VMware demonstrated the execution of a Google Android application on a Windows Mobile-based smart phone. Many opportunities exist for VMware and Google to collaborate in the user service and application delivery space.

Burton Group client experience backs VMware’s claims that vSphere 4.0 is a suitable platform for tier 1 applications. We recommend that x86 virtualization be the default platform for all newly deployed x86 applications, unless an application owner can justify why physical hardware is required (e.g., for a proprietary interface that is unsupported by virtualization).

To support tier 1 application dynamic load balancing, storage and network I/O must be included in the DRS VM placement calculations. It’s good to see that VMware is heading in that direction. DRS will also need to evaluate non-performance metrics such as vShield Zone membership as part of the VM placement metric (no word on this yet).

I would like to hear more from folks who have tested AppSpeed. Burton Group clients I have spoken with to date have not been impressed.

The DMTF needs to start doing more to evangelize the role of OVF as it pertains to cloud computing and service manifests.

I like vSphere’s VMsafe security API, but I want to see tighter integration with external management (exposed via the SDK), and better integration with VMware’s DRS and DPM services.

VMware talked about Lab Manager as a tool to promote user self-service for server VMs and applications, but I haven’t heard mention of a similar interface for desktop applications (like Citrix Dazzle). A user application service catalog is a missing part of VMware’s current virtual desktop architecture, and will need to be addressed by either VMware or a third party.

The data center on the show floor running 37,248 VMs on 776 physical servers would be more impressive if VMware disclosed the applications running on the VMs, along with the application workloads. Otherwise, the demonstration is really just a density science project.

I liked VMware’s coverage of virtual data centers. They are also defined in Burton Group’s internal cloud hardware infrastructure as a service (HIaaS) reference architecture.

Herrod mentioned forthcoming network L3 improvements that will make it easier to separate location and identity. This is something to follow.

Both Cisco and F5 are enablers for VMware’s long distance VMotion and are vendors to follow as this technology further matures.

VMware's cloud layered architecture is very similar to the architecture defined in Drue Reeves' report "Cloud Computing: Transforming IT."

Herrod did a great job articulating the importance of SpringSource to the VMware software solution. VMware needs an application platform to have a chance at holding off Microsoft long term, and SpringSource gives them that.

That's it for my thoughts on day 2. As always, I'd love to hear your feedback. VMworld 2009 was a great conference; I enjoyed my time meeting with Burton Group clients, as well as the many conversations I had with attendees. See you next year!

http://virtualfuture.info/ : VMworld 2009: notes

Just some notes about VMworld. I'm also working on a couple of other blog posts, but they are taking more time than I expected. VMworld was again a great experience for me. I was able to talk to a lot of people (people I've met at previous VMworld conferences or people I follow on Twitter), check out new/other vendors on the Solutions Exchange, attend interesting sessions and get some hands-on experience with products I hadn't had the chance to try before. And of course, I found out at VMworld that I passed my VCP4 beta exam!

BTW: I had a comment the other day about the long lines of people waiting for sessions and labs. Well, it wasn’t a real problem at all. First of all, it seems like everyone could eventually get into the session he wanted to (registered or not). Secondly, I don’t understand all those people waiting half an hour before the start of the session when you can just walk in 5 minutes before the session starts. As a matter of fact, I just did the SRM advanced options lab, and there were plenty of seats available.

Here are some interesting things I’ve seen:

– vCloud providers (Terremark)

– VMware View: PCoIP software solution
Software PCoIP in View will be released later this year. The good thing is, you  can combine software and hardware PCoIP so you can give each type of user the right solution for their needs. Also Wyse demonstrated the iPhone View connection.

– Client Virtualization Platform (CVP)
Run a bare-metal hypervisor on a client, with management through Intel vPro. I'm really curious how this will integrate with VMware View: will the VM hosted in the datacenter be automatically synchronized with the VM on your local client? How will this be licensed? Because what I do know is that CVP will not be a product you can buy stand-alone; it will be part of VMware View.

– Integration of storage management into vCenter
HP and NetApp already showed great integration of hardware monitoring and configuration into the vCenter Server interface. Maybe there are other vendors already, but the fact that you can control and manage your complete virtual infrastructure (both hardware and software) from one place is really cool.

– Cisco UCS
Cisco really impressed me with their UCS solution: a blade chassis designed for virtualization.

Other (future promises of) vSphere(-related) news:
IO DRS in the future
Long distance VMotion
RTO-profiles integration in View

And really future-future talk: Follow the sun vs Follow the moon

In other words: datacenter VMotion! Imagine a company with offices in the USA and Europe. If you want your datacenter to be closest to your users, your datacenter will VMotion from the datacenter in Europe to the datacenter in the USA (follow the sun). If you want your datacenter to be wherever energy prices are lowest (during night-time), you VMotion the datacenter… well… at night (follow the moon).

 

http://www.techhead.co.uk : VMworld 2009 Video Interview: Rick Scherer (VMwaretips.com)

“Here’s another TechHead video interview from VMworld 2009 in San Francisco.  This time I’m talking to Rick Scherer  who is well known for his informative VMwareTips.com site.  In his daytime capacity he is a UNIX Systems Administrator for San Diego Data Processing Corporation.

Check out his site at VMwareTips.com – well worth a visit”

http://www.vmguru.nl/wordpress : Building and maintaining the VMworld 2009 Datacenter

It is becoming a sequel: the datacenter VMware has built for this week's VMworld 2009 at the Moscone Center in San Francisco.

In addition to our two previous articles (art1, art2), today we found two very nice videos loaded with tons of tech porn!

The first video shows the VMware team building the complete datacenter on-site at the Moscone Center. During the footage, the awesome numbers behind this huge infrastructure scroll by.

In short? 28 racks containing 776 ESX servers, which provide the infrastructure with 37 TB of memory, 6,208 CPU cores and 348 TB of storage, use 528 kW of electricity and service 37,248 virtual machines. You will probably never find such an infrastructure anywhere else in the world; at least I know I won't.
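Just for fun, a quick back-of-the-envelope density calculation from those figures (my own arithmetic, not something stated in the video):

```python
# Per-host density of the VMworld 2009 show-floor datacenter, computed from
# the figures quoted above (results rounded).
hosts, vms, cores, ram_tb = 776, 37_248, 6_208, 37

print(f"{vms / hosts:.0f} VMs per host")               # ~48
print(f"{cores / hosts:.0f} cores per host")           # 8
print(f"{ram_tb * 1024 / hosts:.0f} GB RAM per host")  # ~49
print(f"{vms / cores:.0f} VMs per core")               # 6
```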

In the second video, Richard Garsthagen interviews Dan, who is responsible for the datacenter at the Moscone Center and gives inside information on how this huge datacenter is designed, built and connected, and what hardware is used. Some interesting figures: 85% of the hardware used is new to market, only 3 miles of cable were needed to connect all 776 ESX servers, storage, switches, etc., and the total cost of all the hardware used to build this datacenter is estimated at $35M!

Awesome figures, a very, very impressive datacenter and a must-see for all technology freaks out there.
Respect to VMware for putting together such a datacenter just for a one-week event!

The VMworld 2009 Lab Cloud

Yesterday we showed you how VMware designed and built an awesome infrastructure just for one week of VMworld 2009.

Awesome figures!
776 ESX servers, 37 TB of RAM, 6,208 cores, 348 TB of storage and 37,248 virtual machines. But what is this infrastructure used for?

Well, VMware primarily uses it for the 10 different self-paced labs they provide to VMworld participants. VMware built a vCloud that combines three solutions: vSphere, Lab Manager and a self-service portal in which users can select the lab they want to follow, and within minutes they get their own lab environment.

Depending on the chosen lab, virtual machines can be provisioned within minutes: from 3 minutes for a simple two-server lab environment to 7 minutes for a large nine-server lab environment.

The system does not just provision basic virtual machines like Windows; it provisions entire virtual ESX environments, with nested ESX servers, storage, SRM and a vCenter Server.

If you want to know more, just watch the interview Richard Garsthagen did below.

I believe the Ask the Experts session is going on now; see the picture below.

 

Roger L.
