I had the opportunity to meet with Vkernel today and had them demo the product live, before the show.
http://virtualgeek.typepad.com/virtual_geek posted a nice blog post titled: A “Multivendor Post” to help our mutual iSCSI customers using VMware
Today’s post is one you don’t often find in the blogosphere; you see, today’s post is a collaborative effort initiated by me, Chad Sakac (EMC), which includes contributions from Andy Banta (VMware), Vaughn Stewart (NetApp), Eric Schott (Dell/EqualLogic), Adam Carter (HP/LeftHand), David Black (EMC), and various other folks at each of the companies.
Together, our companies make up the large majority of the iSCSI market, all make great iSCSI targets, and we (as individuals and companies) all want our customers to have iSCSI success.
I have to say, I see this one often – customers struggling to get high throughput out of iSCSI targets on ESX. Sometimes they are OK with that, but often I hear this comment: "…My internal SAS controller can drive 4-5x the throughput of an iSCSI LUN…"
Can you get high throughput with iSCSI over GbE on ESX? The answer is YES. But there are some complications, and some configuration steps that are not immediately apparent. You need to understand some iSCSI fundamentals, some Link Aggregation fundamentals, and know some ESX internals – none of which are immediately obvious…
If you’re interested (and who wouldn’t be interested with a great topic and a bizarro-world “multi-vendor collaboration”… I can feel the space-time continuum collapsing around me :-), read on…
We could start this conversation by playing a trump card – 10GbE – but we’ll save that topic for another discussion. Today 10GbE is relatively expensive per port and relatively rare, and the vast majority of iSCSI and NFS deployments are on GbE. 10GbE is supported by VMware today (see the VMware HCL here), and all of the vendors here either have, or have announced, 10GbE support.
10GbE can support the ideal number of cables from an ESX host – two. This reduction in port count can simplify configurations, reduce the need for link aggregation, provide ample bandwidth, and even unify FC using FCoE on the same fabric for customers with existing FC investments. We all expect to see rapid adoption of 10GbE as prices continue to drop. Chad has blogged on 10GbE and VMware here.
This post is about trying to help people maximize iSCSI on GbE, so we’ll leave 10GbE for a followup.
If you are serious about iSCSI in your production environment, it’s valuable to do a bit of learning, and it’s important to do a little engineering during design. iSCSI is easy to connect and begin using, but like many technologies that excel in their simplicity, the default options and parameters may not be robust enough to provide an iSCSI infrastructure that can support your business.
This is a very long post, and worth reading more than once. Included are diagrams and links. It is a very nice overview for anyone considering iSCSI for their SAN.
Here are some of the topics he covers:
Understanding your Ethernet Infrastructure
Understanding: iSCSI Fundamentals
Understanding: Link Aggregation Fundamentals
Understanding: iSCSI implementation in ESX 3.x
(Strongly) Recommended Additional Reading
ENOUGH WITH THE LEARNING!!! HOW do you get high iSCSI throughput in ESX 3.x?
As discussed earlier, the ESX 3.x software initiator really only works on a single TCP connection for each target – so all traffic to a single iSCSI target will use a single logical interface. Without extra design measures, this limits the amount of I/O available to each iSCSI target to roughly 120 – 160 MB/s of read and write access.
This design does not limit the total amount of I/O bandwidth available to an ESX host configured with multiple GbE links for iSCSI traffic (or more generally VMKernel traffic) connecting to multiple datastores across multiple iSCSI targets, but does for a single iSCSI target without taking extra steps.
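To put rough numbers on that single-link ceiling, here is a back-of-the-envelope sketch (my own figures and assumptions, not from the multivendor post) of what one GbE-bound target can do, and how aggregate bandwidth grows when you spread load across multiple targets:

    # Rough sketch: why one iSCSI target on the ESX 3.x software initiator
    # tops out around a single GbE link, and how more targets scale the total.
    # The efficiency factor below is an assumption, not a measured value.

    GBE_LINE_RATE_MBIT = 1000        # 1 GbE line rate, megabits/s, per direction
    PROTOCOL_EFFICIENCY = 0.85       # rough allowance for TCP/IP + iSCSI overhead

    per_target_mb_s = GBE_LINE_RATE_MBIT / 8 * PROTOCOL_EFFICIENCY
    print(f"One target, one direction: ~{per_target_mb_s:.0f} MB/s")
    # ~106 MB/s one way; counting reads and writes together (GbE is full duplex)
    # pushes the practical total into the 120-160 MB/s range quoted above.

    # Each target rides a single link, so aggregate bandwidth grows with the
    # number of targets/datastores (and the links NMP can place them on):
    for targets in (1, 2, 4):
        total = targets * per_target_mb_s
        print(f"{targets} target(s): ~{total:.0f} MB/s aggregate, one direction")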
Here are the questions that customers usually ask themselves:
Question 1: How do I configure MPIO (in this case, VMware NMP) and my iSCSI targets and LUNs to get the most optimal use of my network infrastructure? How do I scale that up?
Question 2: If I have a single LUN that needs really high bandwidth – more than 160 MB/s – and I can’t wait for the next major ESX version, how do I do that?
Question 3: Do I use the Software Initiator or the Hardware Initiator?
Question 4: Do I use Link Aggregation and if so, how?
I would suggest that anyone considering iSCSI with VMware should feel confident that their deployments can provide high performance and high availability. You would be joining many, many customers enjoying the benefits of VMware and advanced storage that leverages Ethernet.
To make your deployment a success, understand the “one link max per iSCSI target” ESX 3.x iSCSI initiator behavior. Set your expectations accordingly, and if you have to, use the guest iSCSI initiator method for LUNs needing higher bandwidth than a single link can provide.
Most of all ensure that you follow the best practices of your storage vendor and VMware.
Thanks to VirtualGeek for the content.
Eric Sloof blogged on this, and I thought I would as well.
Since we updated our ESX hosts from ESX 3.5 Update 2 to ESX 3.5 Update 3 (plus the Dec 2008 patches), we have experienced abnormally high RAM usage by the hostd management process on the hosts.
The symptoms are that hostd hits its default hard memory limit of 200 MB and shuts itself down. This is indicated by the following error message in /var/log/vmware/hostd.log:
[2009-01-06 06:25:47.707 ‘Memory checker’ 19123120 error] Current value 207184 exceeds hard limit 204800. Shutting down process.
The service is automatically restarted, but as a consequence the VirtualCenter agent also restarts, and the host temporarily becomes unresponsive in VirtualCenter (shown as "not responding" or "disconnected").
We increased the hard limit to 250 MB, but within several hours this limit was also reached on some hosts and the problem re-appeared. So, I suspect there is some kind of memory leak in hostd, and we need to find its cause to finally solve the problem.
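For reference, the 200 MB hard limit in the log above (204800 KB) lives in /etc/vmware/hostd/config.xml on the host. The exact tag names below are an assumption on my part and may differ between ESX builds, so treat this as a sketch of where the limit gets raised to 250 MB rather than a verified recipe – and remember it only buys time, it does not fix the leak:

    <!-- Sketch only: tag names are an assumption and may vary by ESX 3.5 build;
         check your own /etc/vmware/hostd/config.xml before editing. -->
    <config>
      <hostdWarnMemInMB>200</hostdWarnMemInMB>   <!-- warning threshold -->
      <hostdStopMemInMB>250</hostdStopMemInMB>   <!-- hard limit raised from 200 MB -->
    </config>
    <!-- Restart the management agents afterwards: service mgmt-vmware restart -->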
If anyone has any ideas on this, please post a reply on the VMware Community forums, or comment on the blog and I will link the comments on the VMware Community forums.
At the last MN Twin Cities VMUG, a user asked about Site Recovery Manager and who was using it. Zero hands went up. Therefore, when I saw that http://vmetc.com had posted a blog entry titled: VMware Site Recovery Manager – links, lessons, and labs, I took note.
“This post is a collection of VMware Site Recovery Manager (SRM) links that have been building up as various notes, to dos, and draft posts collecting dust for VM /ETC. I’m including links to popular blog posts describing how to set up “in a box” demo lab environments for SRM, links to a couple of free chapters from an SRM book, and links to multiple VMworld 2008 SRM presentations for those readers who did not get to attend either VMworld Europe or VMworld in Las Vegas this year. Finally, at the end of this post I am also including the VMworld 2008 Installing Configuring and Troubleshooting SRM lab materials.
Be sure to follow all links to read the original content, but I am briefly quoting from each source to provide some descriptions.
New SRM posts and links seem to be popping up daily. If you have a great SRM link please leave it in the comments for all to find.”
Here are some examples of the good links in the post:
SRM in a Box final release (the complete setup)
“Site Recovery Manager in a Can” – Celerra Virtual Appliance HOWTO 401
Make sure to check out the full post, with all the links, at the page below.
Planetvm.net posted this blog post: VMware ESXi Alert
Dear ESX 3.5 customer,
Our records indicate you recently downloaded VMware® ESX Version 3.5 U3 from our product download site. This email is to alert you that an issue with that product version could adversely affect your environment. It provides steps you can take to correct any issues that you may currently have, or to prevent you from encountering the issue.
A Virtual Machine may unexpectedly reboot when using VMware HA with Virtual Machine Monitoring on ESX 3.5 Update 3
Virtual Machines may unexpectedly reboot after a VMotion migration to an ESX 3.5 Update 3 Host OR after a Power On operation on an ESX 3.5 Update 3 Host, when the VMware HA feature with Virtual Machine Monitoring is active.
A virtual machine may reboot itself when the following conditions exist:
• Virtual Machine is running on ESX 3.5 Update 3 Host, either by virtue of VMotion or a Power On operation, and
• Host has VMware HA enabled with the “Virtual Machine Monitoring” option active.
Virtual Machine monitoring is dependent on VMware tools heartbeats to determine the state of the Virtual Machines. With ESX Server 3.5 Update 3, after a VMotion or Power On operation, the host agent running on the ESX server may delay sending the heartbeat state of the Virtual Machine to the Host. VMware HA detects this as a failure of the Virtual Machine and attempts to restart the Virtual Machine.
Note: Before you begin please refer to KB 1003490 for important information on restarting the mgmt-vmware service.
To work around this problem:
Option 1: Disable Virtual Machine Monitoring
1. Select the VMware HA cluster and choose Edit Settings from the right-click menu (note that this feature can also be enabled for a new cluster on the VMware HA page of the New Cluster wizard).
2. In the Cluster Settings dialog box, select VMware HA in the left column.
3. Uncheck the Enable virtual machine monitoring check box.
4. Click OK.
Option 2: Set the heartbeat delay to 0 in the hostd configuration
1. Disconnect the host from VC (right-click the host in the VI Client and select “Disconnect”).
2. Login as root to the ESX Server with SSH.
3. Using a text editor such as nano or vi, edit the file /etc/vmware/hostd/config.xml
4. Set the “heartbeatDelayInSecs” tag under “vmsvc” to 0 seconds (see the example snippet after these steps).
5. Restart the management agents for this change to take effect. See Restarting the Management agents on an ESX Server (KB 1003490).
6. Reconnect the host in VC (right-click the host in the VI Client and select “Connect”).
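Step 4 references a snippet that did not make it into this post; as a rough illustration (reconstructed from the step’s description, not copied from the original alert), the relevant section of /etc/vmware/hostd/config.xml would look something like this:

    <!-- Illustration only, reconstructed from step 4 above:
         heartbeatDelayInSecs under the vmsvc section, set to 0 seconds -->
    <config>
      <vmsvc>
        <heartbeatDelayInSecs>0</heartbeatDelayInSecs>
      </vmsvc>
    </config>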
Please consult KB 1007899 for further details on managing packages already registered.
Please consult your local support center if you require further information or assistance. We apologize in advance for any inconvenience this issue may cause you. Your satisfaction is our number one goal.
Read the full post at http://planetvm.net/blog/?p=109
Thanks planetvm.net for posting this!
As a reminder, the VMware Online Virtualization Forum is Tuesday, December 16th, from 8:00am – 5:00pm PST.
Click the links for information.
8:00am – 5:00pm PST
Exhibit Hall and Networking Lounge
Visit sponsors’ booths and chat with VMware product experts.
Chat with your peers in the Networking Lounge
8:00am – 9:00am
Keynote: VMware Future Visions and Future Directions
9:00am – 9:30am
Designing the Next-Generation Datacenter Today with EMC
9:00am – 9:30am
Comprehensive Disaster Recovery and Data Protection with Dell
10:00am – 10:30am
VMware View: Take Control of Your Desktops and Applications
11:00am – 11:30am
From the Virtualized to the Automated Datacenter
12:00pm – 12:30pm
Introduction to Server and Datacenter Virtualization
1:00pm – 1:30pm
IT Consolidation and Datacenter Transformation with HP
1:00pm – 1:30pm
NetApp Storage Solutions for VMware Virtual Desktop Infrastructure (VDI)
2:00pm – 2:30pm
Business Continuity and Disaster Recovery with VMware
3:00pm – 3:30pm
Customer Spotlight: Next Generation Server Computing with the City of San Diego
From Zero to 1,000+ Virtual Machines: A Step-By-Step Guide
Simplifying Remote Office Management with VMware Infrastructure
The Best Platform for Business-Critical Applications
I hope to see you there.
Following my previous Citrix testing on VMware, I thought it best to try a dual-vCPU VM to see if I can get double the users of a single vCPU – sounds logically sensible to me!
So just a reminder that a single vCPU with 25 users ran at around 73% average – a nice figure to run at, giving plenty of CPU headroom for peaks and troughs.
This test of a dual-vCPU VM was undertaken following the successful test of 25 users on one VM with 1 x vCPU; that run served as a baseline to see how the dual-vCPU VM compares to the single-vCPU VM.
This is an interesting look at Citrix on VMware – a must read.
I stumbled across this on the VMware Communities. I am sure countless others have seen and linked this, but it was a new read for me.
This document is intended for someone new to Fusion, and possibly someone who is new to Macs in general. It describes basic terminology/concepts, where to find things, and notes on using virtual machines.
For those of you Mac users who are getting into Fusion, this is a must read.
Looks like a good time to get into VMware for enterprise customers.
VMware Infrastructure Acceleration Kits provide cost-effective options for workgroups or small and medium-sized businesses to take advantage of enterprise-level virtualization features.