I shot some video at last night's Official VMworld 2010 Tweetup!
Thanks to http://www.xangati.com/ and http://www.trainsignal.com for sponsoring the event!
Roger L
I found myself talking with Xangati today, and I was told I can release this information. I am looking forward to seeing this product in action at a later date.
“Xangati, the emerging leader in infrastructure performance management, today introduced a new and completely free Xangati for ESX management tool, giving virtualization administrators unprecedented situational awareness into the dynamic environment they are responsible for managing – at absolutely no cost. In addition, Xangati concurrently announced sweeping product enhancements across the Xangati Management Dashboard line, while achieving 10 industry firsts in the process (see related release).
The two announcements further solidify Xangati’s position as an innovator to watch in the IPM market, providing administrators with the most comprehensive solution available today for managing their entire virtualized infrastructures, as well as a 100% free mechanism for trial. To get started and see for yourself, download Xangati for ESX free-of-charge
Xangati for ESX: Features and Functionality
Xangati for ESX is the most fully-featured free VMware tool available today, giving virtualization administrators the ability to actually solve problems in real-time for both server virtualization and Virtual Desktop Infrastructure (VDI) environments. Providing ESX host level visibility, Xangati for ESX offers an unparalleled user interface (UI) presentation with live, continuous “scroll bar” views, DVR recordings and reports – surpassing all other free solutions in terms of its sheer usability. Xangati for ESX expands the number of data sources typically covered and includes breakthrough functionality not found in other free offerings. Features include:
multiple data sources, including: traffic traversing vSwitches and VMware API;
optimized for visibility and troubleshooting;
superior UI with live navigational drill-down and streaming views with full details for VMs and the ESX host;
DVR recordings triggered by VMware alerts; and,
20+ top 10 real-time views.
With a rich feature set available right after install, Xangati leads the market in terms of the turnkey nature of the download and the immediacy of the value it provides. Xangati for ESX installs as an .OVF virtual appliance in less than 10 minutes, giving administrators a much smoother installation process than many of the other free options on the market today, which require pre-existing operating systems to be set up before install.”
I will post a full link once they have the info on the website.
I am glad to see more companies offering a free product, and expanding existing product lines like this.
I am looking forward to many more product announcements for the upcoming VMworld.
Thanks to Xangati for this information.
Roger L.
Well, I have finally gotten all my notes from VMworld processed and posted on the site. I followed the conference with a family vacation on the West Coast, so my time to work on these notes was limited. I attended several additional sessions, but these were the best of the sessions I attended and the ones I felt I got the most information from. I hope they may help you too…
Deja vu. Well, almost. I sat in on a similar session last year, and I wondered what had changed now that vSphere is available, and what new expectations could be had for virtualizing Exchange. I found answers. First of all, as the speaker put it, VMware has eaten its own dogfood and virtualized Exchange 2007 for internal consumption. With approximately 55,000 mailboxes, that is an impressive feat in itself.
Beyond internal consumption, all data points to Exchange evolving into a better workload to run within virtualization. Much of that can probably be attributed to Microsoft’s own virtualization technology, but Exchange on ESX benefits just the same. Performance gains out of ESX 4 make for a good combination with the improved I/O for Exchange 2007. Initial data for Exchange 2010 continues the trend of making Exchange a better workload in general and making it more appropriate to virtualize.
Rodney Haywood @ http://rodos.haywood.org/ put together some videos on VMworld 2009. I will outline them below.
“Each day I hope to give a brief video summary with my thoughts and experiences on the day. Here is today’s.
Of course I have posted some separate videos you can see on the site as well.
Hopefully the lack of sleep will not get to me and I can actually throw one of these out each day. I don’t have my Mac, as I could not extract it from my kids at home, so I am stuck with the features of the software on my Windows XP machine and the Flip camera software. So apologies in advance for the lack of interesting music, the boring transitions and the simplistic textual intros. Maybe one day I will get a Mac for my work machine.
Big day tomorrow; it’s now around 3am and I have to be up at 6:30am to make the keynotes. No rest for this blogger. If only my hotel had faster bandwidth!”
VMworld 2009 Day 1 Video Summary from Rodney Haywood on Vimeo.
“Here is my day two summary. This will be the longest one of the week, I suspect.
"Hopefully the lighting is better. There is some general video at the end of the goings on of the day.
Rodos
P.S. Thanks for the fantastic bandwidth available over at VMworld: a 6 minute upload compared to the estimated 3.5 hours at my hotel! However, it’s taken a while for Vimeo to process it. Hopefully tomorrow will be smoother.”
“Recorded an interview with Dr John Troyer from VMware in the recording booth at VMworld today.
Here is the video below.”
“Here is my summary video for Day 3.
It is a shame that I was not able to post this until now. I stayed up till 3am to record it, but the poor internet quality at my hotel meant I could not upload it until after lunch, when I finally got access to some decent bandwidth (400K a sec rocked). Still, for those who were not there, hopefully it’s helpful.
I have learnt a lot of things not to do in recording these this year, so that’s a great outcome in itself. Next time can be better!”
“Well here is my video for VMworld 2009 Hello Freedom.
After a week of very little sleep due to the long video blogs I did (hey, we all have to try new things), I thought I would end the week with something funny. Okay, an attempt to be funny.
Unfortunately you need to know the people to get the joke at the end, but this is to say thanks to the VMware online community, who were so welcoming, warm and kind with their words this week. You are a great bunch of people who are keen to share your knowledge and offer a friendly hello.
Also a great thanks to the VMware communities team, especially John Troyer, for their commitment and passion.
Guys, this one is for you! I hope you get a laugh.
Rodos
P.S. Thanks to all the participants for being good sports and contributing to something that sounded crazy at the time. Sorry to all those great people that I did not run into today and who did not make it in as a result; I am thinking of you, Dr Troyer!
P.P.S. If you are reading this through RSS the video may not appear, you might have to click through to see it.”
—
Thanks Rodos for taking the time to share your experiences at VMworld; it makes those of us who couldn’t make the event feel as if we were attending!
Roger L
John Troyer did a great job on ustream and the VMworld Channel.
There are 25 episodes to date, plus the first two, which are not numbered. I’ll do my best to lay them out.
I kept the order of original upload date.
Thanks John for all the work; I hope to see you do interviews like this on a scheduled basis in the future.
Roger L.
Yep, you guessed it: again, yet even more blogs on VMworld 2009!
I take no credit for any of this content; it goes to the authors. I mean only to gather the content here so you don’t have to weed through the various blogs. I have the blog link and the article title here, with a link to the article. Thanks to everyone for putting these together for everyone to read!
—
http://netapptips.com/ : VMworld 2009: Brian Madden is true to his word
“
As all of us that have been in the industry (for way too long) know, tradeshow season is full of sound and fury, signifying nothing. Lots of announcements, lots of demonstrations, lots of bickering back and forth between vendors about whose widget is more widgety. But at the end of the day, just like the rest of the year, the success or failure of tradeshows is all about people. It takes many, many people and many, many (hundreds of) hours to organize, coordinate and execute a tradeshow. It ultimately comes down to trust, that most basic element of strong relationships.
For quite some time, the TMEs on our team that focus on Virtual Desktops (VDI) have been reading Brian Madden’s blog. They look to it for industry trends, but most importantly they look to it for viewpoints from experienced Desktop Administrators. From time to time we find ourselves being too storage-centric, and Brian’s blog helps us step back and look at the desktop challenges from a broader perspective.
Back in July, we approached Brian about presenting at his BriForum event. We wanted to introduce some new concepts (prior to VMworld), and his audience would give us the feedback (good or bad) that we needed to hear. We’d be the only storage vendor in attendance, but we needed to get close to the customer base.
Not only did we host a booth, but we presented one of the technical sessions. Prior to the event, we had our slides ready to go, and then we saw this challenge from Brian. Always interested in a challenge, Mike Slisinger (@slisinger) took it upon himself to create a winning presentation. Needless to say, the NetApp presentation met and exceeded the criteria and Brian promised to be wearing his NetApp t-shirt around the halls of VMworld.
A man of his word, Brian was found today enthusiastically (as always) walking around the NetApp booth (#2102), proudly wearing his NetApp t-shirt. The shirt looked great, and we really appreciate the feedback Brian has provided our VDI team on our products and solutions.
As I mentioned earlier, the best part about tradeshows is the chance to connect with people. The chance to have open discussions, to learn about new things, and to see how you can improve. Having strong relationships is the foundation of that ability to connect with people.”
VMworld 2009: Get prepared for vSphere upgrades
“
As we heard in Paul Maritz’s keynote, VMware is forecasting that 75% of their customers will be upgrading their virtualized environments to vSphere within six months. If you’re an existing VMware customer, or a VMware partner, this means that you had better be prepared for the upgrades.
To help everyone planning to move to vSphere, we’d recommend that you review the following technical documents:
- NetApp and VMware vSphere Storage Best Practices (TR-3749)
- Microsoft Exchange, SQL and Sharepoint (mixed workload) on vSphere on NetApp Unified Storage (TR-3785)
Both of these documents have been fully tested and co-reviewed by all of NetApp’s alliance partners (VMware, Microsoft and Cisco), so you can feel extremely confident in using these as the foundation of your next-generation architectures utilizing vSphere.
And of course, vSphere deployments are covered by the NetApp 50% Virtualization Storage Guarantee.”
—
http://itsjustanotherlayer.com : EA3196 – Virtualizing BlackBerry Enterprise on VMware
“
Once again… another session I didn’t sign up for, and zero issues getting into it.
To start off RIM & VMware have been working together for 2 years and it is officially supported on VMware. Together RIM & VMware have done many numerous and successful engagements running BES on VMware. The interesting thing is RIM runs their own BES on VMware for over 3 years now.
Today BES best practice is no more than 1k users per server, and it is not very multi-core friendly. It is not cluster aware, nor does it have any HA built in. The new 5.0 version of BES is coming with some HA via replication at the application layer. One thing that has been seen in various engagements is that if you put the BES servers on the same VMware hosts as virtualized Exchange, there are noticeable performance improvements.
The support options for BES do clearly state that it is supported on VMware ESX.
One of the big reasons to virtualize BES is that, since it cannot use multiple cores effectively, the big 32-core boxes today can only be used to a fraction of their capacity. By virtualizing, BES can get significant consolidation. And once virtualized, BES gets all the advantages of running virtual, such as Test/Dev deployments, server consolidation, HA, etc. Things that are well known and talked about already.
BES encourages template use for rapid deployments; the gotcha is just what your company policies and rules allow, but templates can potentially save quite a bit of time. This presentation is really trying to show how to use VMware/virtualization with BES for change management improvements, server maintenance, HA, component failures and other base vSphere technologies. VMware is looking towards using Fault Tolerance for their own BES servers.
BES is often not considered Tier 1 for DR events, even though email is often the biggest thing needed after a DR event to get communications going again. The reason is generally the complexity and cost of DR.
The performance testing with the Alliance Team from VMware has been done successfully numerous times over the past couple of years. They have done testing at both RIM & VMware offices. The main goal of these efforts was to generate white papers and reference architectures that are known to work. The testing used Exchange LoadGen and the PERK load driver (the BES testing driver). Part of this is how to scale out with more VMs, as the scale-up limits are known.
The hardware was 8 CPUs (Intel E5450 3GHz), 16GB RAM and a NetApp FAS3020, on vSphere 4 & BES 4.1.6. In the 2k user test with 2 Exchange systems, the result was 23% CPU utilization on 2 vCPU BES VMs. Latency numbers were under 10 ms. Nothing majorly wrong was seen in the testing metrics. Going from ESX 3.5 to vSphere 4 gave a 10-15% CPU reduction in the same workload tests. Adding in hardware assist for memory saw what looks like another 3-5% reduction in CPU usage. In their high load testing, doing a VMotion causes a small hiccup of about a 10% increase in CPU utilization during the cut-over period of the VMotion. This is well within the capacity available on the host and in the guest OS.
Their recommendation is to run no more than 2k users on a 2 vCPU VM. If you need more, add more VMs; BES scales and performs well in this scale-out architecture. Be sure you give the storage the number of spindles it needs (the standard statement when talking about virtualization management).
The presenter then went into a couple of reference architecture designs: Small Business & Enterprise, with a couple of different varieties.
BES @ VMware: 3 physical locations, 6,500 Exchange users. 1k of them have 5GB mailboxes, and the default for the rest is 2GB. BES has become pretty common. They run Exchange 2007, with Windows 2003 for AD & the guest OS. Looks fairly straightforward.
4 production BES VMs, 1 standby BES VM, 1 attachment BES VM and 1 dedicated BES database VM, running on 7 physical servers alongside 40 additional VM workloads on this cluster.”
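As a quick illustration of the scale-out math above, here is a back-of-the-envelope sketch using the session's 2k-users-per-2-vCPU-VM guidance. This is my own hypothetical calculation, not a tool from RIM or VMware, and the function names are made up:

```python
# Back-of-the-envelope BES scale-out sizing, based on the session's
# guidance of 2,000 users and 2 vCPUs per BES VM (hypothetical sketch).
def bes_vms_needed(users, users_per_vm=2000):
    """Number of BES VMs for a given user count (ceiling division)."""
    return -(-users // users_per_vm)

users = 6500  # the VMware-internal deployment described above
vms = bes_vms_needed(users)
print(f"{users} users -> {vms} BES VMs, {vms * 2} vCPUs")
# 6500 users -> 4 BES VMs, 8 vCPUs
```

Reassuringly, four production BES VMs is exactly what the VMware-internal deployment above runs; the standby, attachment and database VMs cover roles outside this simple per-user math.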
—
http://www.virtuallifestyle.nl : VMworld ‘09 – Long Distance VMotion (TA3105)
“
I’ve attended the breakout session about long distance VMotion, TA3105. This session presented results from a Long Distance VMotion joint validation research project done by VMware, EMC and Cisco, in the form of Shudong Zhou, Staff Engineer at VMware; Balaji Sivasubramanian, Product Marketing Manager at Cisco; and Chad Sakac, VP of the VMware Technology Alliance at EMC.
What’s the use case?
With Paul Maritz mentioning the vCloud a lot, I can see where WMotion can make itself useful. When migrating to or from any internal or external cloud, there’s a good chance you’d want to do so without downtime, i.e. with all your virtual machines in the cloud running.
What are the challenges?
The main challenge in getting VMotion working between datacenters isn’t with the VMotion technology itself, but with the adaptations to shared storage and networking. Because a virtual machine being VMotioned cannot have its IP address changed, some challenges exist with the network spanning across datacenters. You’ll need stretched VLANs: a flat network with the same subnet and broadcast domain at both locations.
The same goes for storage. VMotion requires all ESX hosts to have read/write access to the same shared storage, and storage does not go well over (smaller) WAN links. There needs to be some kind of synchronisation, or a different way to present datastores at both sides.
Replication won’t work in this case, as replication doesn’t do active/active access to the data. The secondary datacenter doesn’t have active access to the replicated data; it just has a passive copy of the data, which it can’t write to. Using replication as a method of getting WMotion working will result in major problems, one of which is your boss making you stand in the corner for being a bad, bad boy.
What methods are available now?
Chad explained a couple of methods of making the storage available to the secondary datacenter:
Remote VMotion
This is the simplest way to get WMotion up and running. It entails a single SAN, with datastores presented to ESX servers in both datacenters, and doing a standard VMotion. Doing this will leave the virtual machine’s files at the primary site and, as such, is not completely what you’d want, as you’re not independent from the primary location.
Storage VMotion before compute VMotion
This method will do a Storage VMotion from a datastore at the primary location to the secondary location. After the Storage VMotion, a compute VMotion is done. This solves the problem with the previous method, as the VM will move completely. It will take a lot of time, though, and does not (yet) have any improvements that leverage the vStorage API for, for instance, deduplication.
Remote VMotion with advanced active/active storage model
Here’s where Chad catches fire and really starts to talk his magic. This method involves a SAN solution with additional layers of virtualization built into it, so two physically separated heads and shelves share RAM and CPU. This makes both heads into a single, logical SAN, which is fully geo-redundant. When doing a WMotion, no additional steps are needed on the vSphere platform to make all data (VMDKs, etc.) available at the secondary datacenter, as the SAN itself will do all the heavy lifting. This technique does its trick completely transparently to the vSphere environment, as only a single LUN is presented to the ESX hosts.
What’s VMware’s official statement on WMotion?
VMotion across datacenters is officially supported as of September 2009. You’ll need VMware ESX 4 at both datacenters and a single instance of vCenter 4 for both datacenters. Because VMware DRS and HA aren’t aware of any physical separation, WMotion is supported only when using a cluster for each site. Spanned clusters could work, but are simply not supported. The maximum distance at this point is 200 kilometers, which simply indicates that the research did not include any latencies higher than 5ms RTT. The minimum link between sites needs to be OC12 (622Mbps). The bandwidth requirement for normal VMotions, within the datacenter and cluster, will change accordingly. The network needs to be stretched to include both datacenters, as the IP address of the migrated virtual machine cannot change.
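As a quick sanity check on how the 200 km limit relates to the 5 ms RTT ceiling (my own back-of-the-envelope sketch, not something from the session): light travels through fiber at roughly 200 km/ms, so the distance alone accounts for only part of the latency budget.

```python
# Rough propagation-delay estimate for a site-to-site fiber link
# (illustrative only; the 5 ms RTT and 200 km figures come from the session).
SPEED_IN_FIBER_KM_PER_MS = 200  # ~2/3 the speed of light in vacuum

def fiber_rtt_ms(distance_km):
    """Round-trip propagation delay, ignoring equipment and queuing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for km in (50, 100, 200, 400):
    rtt = fiber_rtt_ms(km)
    verdict = "within" if rtt <= 5 else "exceeds"
    print(f"{km:3d} km -> ~{rtt:.1f} ms RTT ({verdict} the 5 ms budget)")
# At 200 km the raw fiber costs only ~2 ms round trip, leaving ~3 ms of
# headroom for switching and routing before the 5 ms ceiling is hit.
```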
Certain extensions to the network are needed for WMotion to be supported. Be prepared to use your credit card to buy a Cisco DCI-capable device like the Catalyst 6500 VSS or Nexus 7000 vPC. On the storage side, extensions like WDM, FCIP and optionally Cisco I/O Acceleration are required.
Best Practices
- Single vCenter instance managing both datacenters;
- At least one cluster per site;
- A single vNetwork Distributed Switch (like the Cisco Nexus 1000v) across clusters and sites;
- Network routing and policies need to be synchronized or adjusted accordingly.
Future Directions
More information
- Check out the PDF “Virtual Machine Mobility with VMware VMotion and Cisco Data Center Interconnect Technologies”
- Information about VMware MetroCluster
- Information about the Cisco Datacenter Interconnect (DCI)
—
http://technodrone.blogspot.com : Best of VMworld 2009 Contest Winners
“
Category: Business Continuity and Data Protection
Gold: Vizioncore Inc., vRanger Pro 4.0
Finalist: Veeam Software Inc., Veeam Backup & Replication
Finalist: PHD Virtual Technologies, esXpress 3.6
Category: Security and Virtualization
Gold: Hytrust, Hytrust Appliance
Finalist: Catbird Networks Inc., Catbird vCompliance
Category: Virtualization Management
Gold: Netuitive Inc., Netuitive SI for Virtual Data Centers
Finalist: Veeam Software, Veeam Management Suite
Finalist: Embotics Corp., V-Commander 3.0
Category: Hardware for Virtualization
Gold: Cisco Systems Inc., Unified Computing System
Finalist: Xsigo Systems, VP780 I/O Director 2.0
Finalist: AFORE Solutions Inc., ASE3300
Category: Desktop Virtualization
Gold: AppSense, AppSense Environment Manager, 8.0
Finalist: Liquidware Labs, Stratusphere
Finalist: Virtual Computer Inc., NxTop
Category: Cloud Computing Technologies
Gold: Mellanox Technologies, Italio Cloud Appliance
Finalist: InContinuum Software, CloudController v 1.5
Finalist: Catbird Networks Inc., Catbird V-Security Cloud Edition
Category: New Technology
Winner: VirtenSys, Virtensys IOV switch VMX-500LSR
Category: Best of Show
Winner: Hytrust, Hytrust Appliance
Congratulations to all the Winners!!!”
—
http://www.chriswolf.com/ : Thoughts on the VMworld Day 2 Keynote
“
I was very impressed by the information disseminated in the second VMworld keynote, led by CTO Steve Herrod. Here’s a summary of the thoughts I tweeted during the morning keynote (in chronological order).
Steve Herrod talked about a “people centric” approach. VMware’s technology needs to understand desktop user behavior. The existing offline VDI model (requiring a manual “check-out”) is not people centric.
VMware’s announcement to OEM RTO Software’s Virtual Profiles was a good move. Burton Group considers profile virtualization a required element of enterprise desktop virtualization architecture.
VMware’s Steve Herrod and Mike Coleman discussed VMware’s software-based PC-over-IP (PCoIP) protocol. Feedback from Burton Group clients who were early PCoIP beta testers indicates that the protocol’s development is progressing well.
Herrod showed a picture of “hosted virtualization” for employee owned PCs on a MacBook. Is that a hint of a forthcoming announcement?
I would like to know if VMware’s Type I CVP client hypervisor will have VMsafe-like support in the 1.0 release. VMware has made few public statements regarding CVP architecture.
VMware’s CVP demo looked good, but it didn’t reach the “wow factor” achieved by Citrix when Citrix demoed a type I client hypervisor on a Mac at their Synergy conference.
The Wyse PocketCloud demonstration was impressive. PocketCloud is VMware’s first answer to the Citrix Receiver for iPhone.
VMware demonstrated the execution of a Google Android application on a Windows Mobile-based smart phone. Many opportunities exist for VMware and Google to collaborate in the user service and application delivery space.
Burton Group client experience backs VMware’s claims that vSphere 4.0 is a suitable platform for tier 1 applications. We recommend that x86 virtualization be the default platform for all newly deployed x86 applications, unless an application owner can justify why physical hardware is required (e.g., for a proprietary interface that is unsupported by virtualization).
To support tier 1 application dynamic load balancing, storage and network I/O must be included in the DRS VM placement calculations. It’s good to see that VMware is heading in that direction. DRS will also need to evaluate non-performance metrics such as vShield Zone membership as part of the VM placement metric (no word on this yet).
I would like to hear more from folks who have tested AppSpeed. Burton Group clients I have spoken with to date have not been impressed.
The DMTF needs to start doing more to evangelize the role of OVF as it pertains to cloud computing and service manifests.
I like vSphere’s VMsafe security API, but I want to see tighter integration with external management (exposed via the SDK), and better integration with VMware’s DRS and DPM services.
VMware talked about Lab Manager as a tool to promote user self-service for server VMs and applications, but I haven’t heard mention of a similar interface for desktop applications (like Citrix Dazzle). A user application service catalog is a missing part of VMware’s current virtual desktop architecture, and will need to be addressed by either VMware or a third party.
The data center on the show floor running 37,248 VMs on 776 physical servers would be more impressive if VMware disclosed the applications running on the VMs, along with the application workloads. Otherwise, the demonstration is really just a density science project.
I liked VMware’s coverage of virtual data centers. They are also defined in Burton Group’s internal cloud hardware infrastructure as a service (HIaaS) reference architecture.
Herrod mentioned forthcoming network L3 improvements that will make it easier to separate location and identity. This is something to follow.
Both Cisco and F5 are enablers for VMware’s long distance VMotion and are vendors to follow as this technology further matures.
VMware’s cloud layered architecture is very similar to the architecture defined in Drue Reeves’ report “Cloud Computing: Transforming IT.”
Herrod did a great job articulating the importance of SpringSource to the VMware software solution. VMware needs an application platform to have a chance at holding off Microsoft long term, and SpringSource gives them that.
That’s it for my thoughts on day 2. As always, I’d love to hear your feedback. VMworld 2009 was a great conference. I enjoyed my time meeting with Burton Group clients as well as the several conversations that I had with many attendees. See you next year!”
—
http://virtualfuture.info/ : VMworld 2009: notes
“
Just some notes about VMworld. I’m also working on a couple of other blog posts, but they are taking more time than I expected. VMworld was again a great experience for me. I was able to talk to a lot of people (people I’ve met at previous VMworld conferences or people I follow on Twitter), check out new/other vendors at the Solution Exchange, attend interesting sessions and get some hands-on experience with products I hadn’t had the chance to try before. And of course, I found out at VMworld that I passed my VCP4 beta exam!
BTW: I had a comment the other day about the long lines of people waiting for sessions and labs. Well, it wasn’t a real problem at all. First of all, it seems like everyone could eventually get into the session they wanted (registered or not). Secondly, I don’t understand all those people waiting half an hour before the start of a session when you can just walk in 5 minutes before it starts. As a matter of fact, I just did the SRM advanced options lab, and there were plenty of seats available.
Here are some interesting things I’ve seen:
– vCloud providers (Terremark)
– VMware View: PCoIP software solution
Software PCoIP in View will be released later this year. The good thing is, you can combine software and hardware PCoIP, so you can give each type of user the right solution for their needs. Also, Wyse demonstrated the iPhone View connection.
– Client Virtualization Platform (CVP)
Run a bare-metal hypervisor on a client with management through Intel vPro. I’m really curious how this will integrate with VMware View: will the VM hosted in the datacenter be automatically synchronized with the VM on your local client? How will this be licensed? What I do know is that CVP will not be a product you can buy stand-alone; it will be part of VMware View.
– Integration of storage management into vCenter
HP and NetApp already showed great integration of monitoring and configuring hardware inside the vCenter Server interface. Maybe there are already other vendors, but the fact that you can control and manage your complete virtual infrastructure (both hardware and software) from one place is really cool.
– Cisco UCS
Cisco really impressed me with their UCS solution: a blade chassis designed for virtualization.
Other (future promises of) vSphere(-related) news:
IO DRS in the future
Long distance VMotion
RTO profiles integration in View
And really future-future talk: follow the sun vs follow the moon
In other words: datacenter VMotion! Imagine a company with offices in the USA and Europe. If you want your datacenter to be closest to your users, your datacenter will VMotion from the datacenter in Europe to the datacenter in the USA (follow the sun). If you want your datacenter to be in the place where energy prices are the lowest (during night-time), you VMotion the datacenter… well… at night (follow the moon).”
—
http://www.techhead.co.uk : VMworld 2009 Video Interview: Rick Scherer (VMwaretips.com)
“Here’s another TechHead video interview from VMworld 2009 in San Francisco. This time I’m talking to Rick Scherer who is well known for his informative VMwareTips.com site. In his daytime capacity he is a UNIX Systems Administrator for San Diego Data Processing Corporation.
Check out his site at VMwareTips.com – well worth a visit”
—
http://www.vmguru.nl/wordpress : Building and maintaining the VMworld 2009 Datacenter
“
It is becoming a sequel: the datacenter VMware has built for this week’s VMworld 2009 at the Moscone Center in San Francisco.
In addition to our two previous articles (art1, art2), today we found two very nice videos loaded with tons of tech porn!
The first video shows the VMware team building the complete datacenter on-site at the Moscone Center. During the footage, the awesome numbers representing this huge infrastructure roll by.
In short? 28 racks containing 776 ESX servers, which give the infrastructure 37TB of memory, 6,208 CPU cores and 348TB of storage, draw 528KW of electricity and service 37,248 virtual machines. You will probably never find such an infrastructure anywhere else in the world; at least I know I won’t.
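For some perspective on those figures, here is a quick bit of per-host arithmetic (an editorial aside: my own calculation from the published numbers, not something from the videos):

```python
# Per-host averages derived from the published VMworld 2009 lab figures:
# 776 hosts, 37TB RAM, 6,208 cores, 37,248 VMs.
hosts, ram_tb, cores, vms = 776, 37, 6208, 37248

print(f"VMs per host:   {vms / hosts:.0f}")               # ~48
print(f"Cores per host: {cores / hosts:.0f}")             # 8
print(f"RAM per host:   {ram_tb * 1024 / hosts:.0f} GB")  # ~49 GB
print(f"VMs per core:   {vms / cores:.1f}")               # ~6.0
```

So each host averages eight cores and roughly 48 VMs, about six VMs per core: a very respectable density for 2009-era hardware.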
In the second video Richard Garsthagen interviews Dan, who is responsible for the datacenter at the Moscone Center, and who gives inside information on how this huge datacenter was designed, built and connected, and what hardware was used. Some interesting figures: 85% of the hardware used is new to market, they used just 3 miles of cable to connect all 776 ESX servers, storage, switches, etc., and the total cost of all hardware used to build this datacenter is estimated at $35M!
Awesome figures, a very very impressive datacenter and a must see for all technology freaks out there.
Respect for VMware for putting together such a datacenter just for a one-week event!
“
Yesterday we showed you how VMware designed and built an awesome infrastructure just for one week of VMworld 2009.
Awesome figures!
776 ESX servers, 37TB RAM, 6,208 cores, 348TB storage and 37,248 virtual machines. But what is this infrastructure used for?
Well, VMware primarily uses it for the 10 different self-paced labs they provide to VMworld participants. VMware built a vCloud which combines the best of three solutions: vSphere, Lab Manager and a self-service portal in which users can select the lab they want to follow, and within minutes they get their own lab environment.
Depending on the desired lab, virtual machines can be provisioned within minutes: from 3 minutes for a simple two-server lab environment to 7 minutes for a large nine-server lab environment.
The system does not just provision basic virtual machines like Windows; it provisions entire virtual ESX environments, with nested ESX servers, storage, SRM and a vCenter server.
If you want to know more, just watch the interview Richard Garsthagen did below.”
I believe the Ask the Experts session is going on now; see the picture below.
Roger L.
VMware Data Recovery is a new feature introduced with vSphere 4 which attempts to be a full-featured backup solution for the ESX lineup. There are some limitations to the software that position it more towards small to medium businesses than enterprise customers; however, I’d consider my company a small enterprise user, and we plan on implementing the technology when we upgrade to vSphere soon.
Data Recovery, like VCB, is an agentless backup technology used to grab either full VM image or file level backups of virtual machines in ESX. Data Recovery also includes de-duplication technology and backup to disk, where VCB is just a method of obtaining the thin VMDK file to be backed up by a third-party solution. Data Recovery also makes use of an innovation in virtual hardware version 7 which allows for block level change tracking. Although Data Recovery can apparently (not 100% sure) back up VMs on earlier virtual hardware versions, it won’t be nearly as fast, because they lack the block level change tracking.
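To see why block level change tracking matters, here is a minimal toy model of the idea (my own hypothetical illustration; VMware's actual change tracking lives in the virtual hardware layer): an incremental pass copies only the blocks flagged since the last backup, instead of scanning the whole disk.

```python
# Toy model of changed-block-tracking incrementals (hypothetical sketch,
# not VMware's implementation).
class TrackedDisk:
    def __init__(self, num_blocks):
        self.blocks = [b"\x00"] * num_blocks
        self.changed = set()      # block indices dirtied since last backup

    def write(self, index, data):
        self.blocks[index] = data
        self.changed.add(index)   # the tracker records every write

    def incremental_backup(self):
        """Copy only the changed blocks, then reset the tracker."""
        delta = {i: self.blocks[i] for i in self.changed}
        self.changed.clear()
        return delta

disk = TrackedDisk(num_blocks=1_000_000)
disk.write(42, b"new data")
disk.write(7, b"other data")
delta = disk.incremental_backup()
print(f"read {len(delta)} blocks instead of {len(disk.blocks)}")
```

Without the tracker, the backup would have to read and hash all one million blocks just to discover the two that changed.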
Data Recovery deploys in two parts: a virtual appliance and a plug-in for the vSphere client. The virtual appliance is imported from OVF format and, with some basic configuration, is ready to begin backups. An IP must be configured, and a VMDK must be added to the virtual appliance as a target for the de-duplicated data. You may have two destination storage locations of up to 1TB each, for a total of 2TB per virtual appliance.
Creating backup jobs is pretty simple. A nice feature is that you may choose folders, hosts or clusters as part of backup jobs, meaning that any new VMs added to that folder will automatically be backed up in addition to the existing VMs. This is a nice function for future-proofing your backup strategy.
Backup jobs are scheduled with a backup window in which to work, and up to 8 jobs can run at a time, but the virtual appliance does all the scheduling and decides when to run the backups. After an initial backup is run with all data, incremental backups run from that point on, grabbing only changed blocks. Retention policies are also set for your backup stores and then enforced to keep a number of versions of your backups, so that you can go back to a point-in-time backup.
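A keep-the-newest-N policy is the simplest shape such retention can take. Here is a hypothetical sketch of the idea (my own illustration, not VMware's actual policy engine; names are made up):

```python
# Hypothetical keep-last-N retention policy, in the spirit of what the
# appliance enforces (illustrative only).
from datetime import datetime

def enforce_retention(restore_points, keep=7):
    """Keep the newest `keep` restore points, prune the rest."""
    ordered = sorted(restore_points, key=lambda p: p["created"], reverse=True)
    return ordered[:keep], ordered[keep:]   # (kept, pruned)

points = [{"name": f"rp-{i}", "created": datetime(2009, 9, i + 1)}
          for i in range(10)]
kept, pruned = enforce_retention(points)
print([p["name"] for p in pruned])  # the 3 oldest points get pruned
```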
Destination storage may be unmounted and exported, backed up or otherwise saved, though this is a manual process.
As another note, this is only available on vSphere hosts and cannot back up ESX 3.5 or vCenter 2.5 infrastructures.
Long Distance VMotion is by far the best session I’ve attended and the most exciting news for me of the VMworld week thus far. The session was a presentation of a research project performed by VMware, EMC and Cisco. It presented four options for performing a long distance VMotion using stock vSphere and existing technologies. Well, almost: three of the four use technologies currently available.
Why would you want to do a long distance VMotion? In my case, we have two datacenters, geographically close to one another. We currently stretch our cluster between the two locations, which allows us to float VMs between them using VMotion. The problem is that if we lose our primary datacenter, all storage is presented from there. Long Distance VMotion is the notion of having two separate clusters, one in each datacenter, and being able to VMotion between them.
What was really news to me from this session (I’ll get to what was presented) was that we can present the same datastores to two different clusters and have them recognized on both clusters. I am pretty sure I tried this way back in the 3.0 days and it failed to work. This must have been added in 3.5 or 4.0; I have not tried it in recent years.
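If you want to check which hosts actually see a given datastore before relying on this, here is a minimal sketch using the open-source pyVmomi SDK (a present-day tool, not something shown in the session; the hostname and credentials are placeholders):

```python
# List which hosts have each datastore mounted, via pyVmomi
# (illustrative sketch; replace the placeholder connection details).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip cert checks: lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        hosts = sorted(mount.key.name for mount in ds.host)
        print(f"{ds.name}: mounted on {len(hosts)} host(s) -> {hosts}")
    view.Destroy()
finally:
    Disconnect(si)
```

A datastore that shows up under hosts from both clusters is one you could, in principle, VMotion across.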
So, what was presented? The three companies worked together to identify and trial a solution to allow for long distance VMotion. At this point, a very narrow set of criteria must be satisfied for this to occur and be supported. Much of the restriction comes on the storage side, but the network also presents some problems. Apparently, everything you need is in vSphere, if you separate each datacenter into its own set of hosts.
Requirements
Surprisingly, we have most of this configured in our environment, and it’s been status quo for us for several years. The biggest difference between our environment and this spec configuration is that we run a stretched cluster to achieve this. Our datacenters are very close to one another, and we only present storage from our primary datacenter so that we don’t have a split-brain scenario. But it does give me new things to think about and discuss with co-workers. We currently don’t run two clusters or SRM because we like the flexibility to VMotion between datacenters; with that now a possibility, we may have something new to investigate…