
A “Multivendor Post” to help our mutual iSCSI customers using VMware

by Roger Lund


Chad Sakac over at Virtual Geek (http://virtualgeek.typepad.com/virtual_geek) posted a nice blog post titled: A “Multivendor Post” to help our mutual iSCSI customers using VMware.


Today’s post is one you don’t often find in the blogosphere: it is a collaborative effort initiated by me, Chad Sakac (EMC), with contributions from Andy Banta (VMware), Vaughn Stewart (NetApp), Eric Schott (Dell/EqualLogic), Adam Carter (HP/Lefthand), David Black (EMC), and various other folks at each of the companies.

Together, our companies make up the large majority of the iSCSI market, all make great iSCSI targets, and we (as individuals and companies) all want our customers to have iSCSI success.

I have to say, I see this one often – customers struggling to get high throughput out of iSCSI targets on ESX. Sometimes they are OK with that, but often I hear this comment: "…My internal SAS controller can drive 4-5x the throughput of an iSCSI LUN…"

Can you get high throughput with iSCSI over GbE on ESX? The answer is YES. But there are some complications, and some configuration steps that are not immediately apparent. You need to understand some iSCSI fundamentals, some Link Aggregation fundamentals, and some ESX internals – none of which are immediately obvious…

If you’re interested (and who wouldn’t be interested in a great topic and a bizarro-world “multi-vendor collaboration”… I can feel the space-time continuum collapsing around me :-), read on…

We could start this conversation by playing a trump card: 10GbE. But we’ll save that topic for another discussion. Today 10GbE is relatively expensive per port and relatively rare, and the vast majority of iSCSI and NFS deployments are on GbE. 10GbE is supported by VMware today (see the VMware HCL here), and all of the vendors here either have, or have announced, 10GbE support.

10GbE can support the ideal number of cables from an ESX host – two. This reduction in port count can simplify configurations, reduce the need for link aggregation, provide ample bandwidth, and even unify FC using FCoE on the same fabric for customers with existing FC investments. We all expect to see rapid adoption of 10GbE as prices continue to drop. Chad has blogged on 10GbE and VMware here.

This post is about trying to help people maximize iSCSI on GbE, so we’ll leave 10GbE for a followup.

If you are serious about iSCSI in your production environment, it’s valuable to do a bit of learning, and it’s important to do a little engineering during design. iSCSI is easy to connect and begin using, but, like many technologies that excel because of their simplicity, the default options and parameters may not be robust enough to provide an iSCSI infrastructure that can support your business.


This is a very long post, and worth reading more than once. Included are diagrams and links. It is a very nice overview for anyone considering iSCSI for their SAN.


Here are some of the topics he covers:


Understanding your Ethernet Infrastructure

Understanding: iSCSI Fundamentals

Understanding: Link Aggregation Fundamentals

Understanding: iSCSI implementation in ESX 3.x

(Strongly) Recommended Additional Reading

  1. I would STRONGLY recommend reading a series of posts that the inimitable Scott Lowe has done on ESX networking, and start at his recap here: http://blog.scottlowe.org/2008/12/19/vmware-esx-networking-articles/
  2. Also – I would strongly recommend reading the vendor documentation on this carefully.

ENOUGH WITH THE LEARNING!!! HOW do you get high iSCSI throughput in ESX 3.x?

As discussed earlier, the ESX 3.x software initiator uses only a single TCP connection for each target – so all traffic to a single iSCSI target will use a single logical interface. Without extra design measures, this limits the I/O available to each iSCSI target to roughly 120–160 MB/s of combined read and write throughput.
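As a rough sanity check on that ceiling, here is a back-of-the-envelope sketch (plain Python, with an assumed overhead figure rather than a measured one) showing why one GbE link – and therefore one single-connection iSCSI session – tops out in that neighborhood no matter how many NICs the host has. The upper end of the 120–160 MB/s range quoted above presumably counts reads and writes flowing in both directions of the full-duplex link.

```python
# Back-of-the-envelope ceiling for one iSCSI session on a GbE link.
# Illustrative arithmetic only; the overhead figure is an assumption,
# not a measured value.

LINK_GBPS = 1.0                     # GbE line rate
RAW_MBPS = LINK_GBPS * 1000 / 8     # 125 MB/s before any overhead

# Ethernet/IP/TCP/iSCSI headers consume a slice of every frame; assume
# roughly 8% overhead with standard 1500-byte frames.
OVERHEAD = 0.08
usable_mbps = RAW_MBPS * (1 - OVERHEAD)

print(f"Raw GbE line rate:      {RAW_MBPS:.0f} MB/s")
print(f"Usable after overhead:  {usable_mbps:.0f} MB/s")

# Because the ESX 3.x software initiator opens a single TCP connection
# per target, adding GbE uplinks does not raise this per-target ceiling;
# it only adds capacity for traffic to *other* targets.
uplinks = 4
print(f"Per-target ceiling with {uplinks} uplinks: {usable_mbps:.0f} MB/s (unchanged)")
print(f"Host-wide ceiling across many targets:  ~{usable_mbps * uplinks:.0f} MB/s")
```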

This design does not limit the total I/O bandwidth available to an ESX host configured with multiple GbE links for iSCSI traffic (or, more generally, VMkernel traffic) connecting to multiple datastores across multiple iSCSI targets – but it does limit a single iSCSI target unless you take extra steps.
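To make the fan-out idea concrete, here is a toy illustration in Python. The target names, NIC names, and bandwidth figure are hypothetical, and the round-robin assignment is only a stand-in for whatever the vSwitch teaming policy would actually choose; the point is that each target keeps its one-connection cap while aggregate bandwidth grows with the number of targets and links.

```python
# Toy model of the fan-out idea: each iSCSI target still gets exactly one
# connection (the ESX 3.x behaviour), but different targets can land on
# different uplinks, so aggregate bandwidth grows with the number of
# targets and links. All names and figures here are hypothetical.

PER_LINK_MBPS = 115                       # assumed usable GbE bandwidth
uplinks = ["vmnic1", "vmnic2"]
targets = ["iqn.target-a", "iqn.target-b", "iqn.target-c", "iqn.target-d"]

# One connection per target; round-robin is a stand-in for whatever the
# vSwitch teaming policy would actually pick.
assignment = {t: uplinks[i % len(uplinks)] for i, t in enumerate(targets)}
for target, nic in assignment.items():
    print(f"{target:>14} -> {nic}  (each target capped at {PER_LINK_MBPS} MB/s)")

# The aggregate ceiling is bounded by the number of busy links, not by
# the per-target cap.
busy_links = len(set(assignment.values()))
print(f"Aggregate ceiling across datastores: ~{busy_links * PER_LINK_MBPS} MB/s")
```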

Here are the questions that customers usually ask themselves:

Question 1: How do I configure MPIO (in this case, VMware NMP) and my iSCSI targets and LUNs to make the best use of my network infrastructure? How do I scale that up?

Question 2: If I have a single LUN that needs really high bandwidth – more than 160 MB/s – and I can’t wait for the next major ESX version, how do I do that?

Question 3: Do I use the Software Initiator or the Hardware Initiator?

Question 4: Do I use Link Aggregation, and if so, how?
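On Question 4: with an IP-hash style teaming policy, the physical uplink for a flow is derived from the source and destination IP addresses, so a given VMkernel-to-target pair always rides the same link, and spreading load requires multiple target portal IPs. The sketch below is a simplified model of that kind of hash – the exact formula is an assumption for illustration, not the documented VMware implementation, and the addresses are hypothetical.

```python
# Simplified model of an IP-hash style uplink selection policy.
# The real formula used by a given vSwitch or switch vendor may differ;
# the point is only that one source/destination IP pair always lands on
# the same physical link. Addresses are hypothetical.
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Hash the source and destination IPs down to an uplink index."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

vmkernel_ip = "10.0.0.11"
target_portals = ["10.0.0.51", "10.0.0.52", "10.0.0.53"]

for portal in target_portals:
    link = pick_uplink(vmkernel_ip, portal, num_uplinks=2)
    print(f"{vmkernel_ip} -> {portal}: uplink {link}")
```

The mapping is deterministic, so adding NICs to the team does not speed up a single session; spreading load depends on presenting multiple target portal (or VMkernel) IP addresses, which is exactly the sort of design step the full post walks through.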

In closing…

I would suggest that anyone considering iSCSI with VMware should feel confident that their deployments can provide high performance and high availability. You would be joining many, many customers enjoying the benefits of VMware and advanced storage that leverages Ethernet.

To make your deployment a success, understand the “one link max per iSCSI target” ESX 3.x iSCSI initiator behavior. Set your expectations accordingly, and if you have to, use the guest iSCSI initiator method for LUNs needing higher bandwidth than a single link can provide.

Most of all ensure that you follow the best practices of your storage vendor and VMware.

Thanks to Virtual Geek for the content.

Full source post: http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html
