
vSphere Paravirtual SCSI testing

I hopped on the bandwagon and tested vSphere with the new paravirtual SCSI (PVSCSI) driver.

I used Iometer with 10 GB test files, one per drive, running a 32 KB, 100% read workload.

First I compared one LSI Logic VMDK drive vs. one paravirtual VMDK drive vs. one LSI Logic RDM vs. one paravirtual RDM.





Then I compared three LSI Logic VMDK drives vs. three paravirtual VMDK drives vs. three LSI Logic RDMs vs. three paravirtual RDMs.
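A quick note on reading the results: with a fixed 32 KB block size, IOPS and throughput are tied together directly, so it is easy to convert one into the other. Here is a small Python sketch; the IOPS figures in it are made-up placeholders, not my actual numbers:

```python
# Convert Iometer IOPS into throughput for a fixed block size.
# The IOPS values below are hypothetical placeholders, not real results.

BLOCK_SIZE = 32 * 1024  # 32 KB, matching the test's access pattern

def mbps(iops, block_size=BLOCK_SIZE):
    """Throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size / (1024 * 1024)

runs = {
    "LSI Logic VMDK": 4000,
    "Paravirtual VMDK": 4600,
    "LSI Logic RDM": 4200,
    "Paravirtual RDM": 4800,
}

for name, iops in runs.items():
    print(f"{name}: {iops} IOPS -> {mbps(iops):.1f} MB/s")
```

At 32 KB, dividing IOPS by 32 gives MB/s, which makes sanity-checking the charts quick.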





Updated Wiki on Supported Virtualization Products


I updated my wiki with a list of supported virtualization products; this was a question that came up at our first local IT & Virtualization user group meeting.

Here are links with more info about the user group: "Launched, Based on Local Central MN User Group" and "Community has its official launch!"


We are a group of IT professionals dedicated to improving collaboration and education in the central Minnesota IT community.

Our team members work and reside in the central Minnesota community, locally supporting the backbone of the digital age and sharing their passion to expand YOUR abilities by donating their time and talents hosting user groups and technology seminars at no cost to you.


I am pleased to announce that we have a web site up for the Central MN user group; please check it out and spread the word.


More updates to come on both the page and blog.



Roger L.

St. Cloud, Minnesota's IT & Virtualization User Group



I have gotten together with a couple of local IT administrators, and we have started an IT & virtualization user group: the Central MN IT & Virtualization User Group.

We meet for the first time on May 7th at the St. Cloud Library in St. Cloud, MN, from 3-5 P.M. in the Bremmer Room (max 50 people).

Our topic will be an introduction to virtualization.

We will talk about what virtualization is and how it works, and touch on how VMware, Citrix, and Microsoft approach it.

People can register by going to

I hope to see you there!



If you have questions, message me on Twitter.



Roger L.

To P2V or Not to P2V, That Is Not the Question

There is an interesting write-up titled "Is P2V always the best solution?":

I would like to however raise another point of view. Correct, the easiest solution for a system residing on aging hardware is to P2V the system to a VM – and it seems that I have solved my problem. But have I? I posted a question about 9 months ago with this exact same thought.

Do not blindly P2V systems, just because you can.

You will regret doing so in the long run.

I for one do not want ancient Windows/Linux systems in my datacenter. Would you?


That post is in response to a write-up titled "Using Virtualization to Extend the Hardware Lifecycle":

In harder economic times getting real money to spend on server refreshes is difficult. There are the arguments that new kit is more power efficient; supports higher VM/CPU core densities but the reality is that even if you can show a cost saving over time most current project budgets are at best frozen until the economic uncertainty passes, at worst eliminated.

Although power costs have become increasingly visible because they’ve risen so much over the last 18 months this is still a hidden cost to many organisations, particularly if you run servers in your offices where a facilities team picks up the bill the overall energy savings through virtualization and hardware refresh don’t always get through.

So, I propose some alternative thinking to ride out the recession and make the kit you have and can’t get budget to replace last longer, as well as delivering a basic disaster recovery or test & development platform (business value) in the meantime.

Both are interesting reads; I would like to share my thoughts on them.


Instead of framing this as "why should I use, or not use, virtualization to extend my application's life," we should look at it as a reason to promote virtualization as a whole, whether to your company or CEO, or to the first-time user. Shouldn't you instead focus on what virtualization allows you to do? If legacy systems cause extra work in the long term, great: you then have a valid reason to ask for money or staffing. And if you are not allotted either, you have a reason to ask to shut down legacy systems that are no longer needed.

I think today's virtualization, whether VMware, Citrix, or Hyper-V, should be a wake-up call to application vendors, as they are the ones who will be required to support their software products longer; virtual environments have no hardware ties. Gone are the days when software vendors sold both hardware and software in one package and supported both. I say: welcome to today, where customers ask for five-year support contracts from the day of purchase.

In my eyes, gone are the days of sitting in a dusty corner trying to sell a product with a limited life cycle tied to a hardware platform. Those still doing so will undoubtedly be left behind in today's recession-riddled times.

Roger L.

First Talkshoe Discussion, on Backups, a Success

I had my first Talkshoe discussion.

I thought it went well. Thanks to everyone for being on the call.

The next one will be next week. It is an open topic; I will think of a subject, or you can contact me with ideas.

Title: EPISODE2 – Roger’s IT & Virtualization Discussion
Time: 03/31/2009 03:00 PM EDT

Topic: SNMP monitor, SYS LOGGING, Maintaining your ESX Servers

Future topics: e-mail gateways, secure e-mail, virus scanning, spam filtering (all in one or not?), hardware appliance vs. VMware appliance vs. server.

Been Busy, New content soon.

We recently decided on a NetApp FAS 3140 at work (more on that later), so we have been busy implementing it into our environment.

Also recently, we purchased the VMware Foundation kit.

Once the FAS had Fibre Channel hooked up to the ESX server and was set up (more on this later), I moved all of my existing VMware guests from the ESXi hosts to the ESX host.

I also migrated my OnBase SQL, SQL temp, and disk group volumes from locally attached disk (SAS) to the SAN.

Then I configured our 3750G switch for iSCSI (details later), setting up three dual-port NIC teams and enabling jumbo frames, and then configured the ESX server (details later).

I have also been working on configuring Cacti to monitor the FAS 3140, which only supports SNMP v1 (ONTAP 7.2.x), and playing with templates (details later).
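One gotcha worth noting here: SNMP v1 only has 32-bit counters, so a busy interface counter rolls over at 2^32 and a naive delta between polls goes negative. Any rate calculation has to be wrap-aware, roughly like this Python sketch (the function names are my own, not Cacti's):

```python
# Wrap-aware rate calculation for 32-bit SNMP v1 counters.
# SNMP v1 only has 32-bit Counter types, so a busy interface's
# octet counter rolls over at 2**32 and a naive delta goes negative.

COUNTER32_MAX = 2 ** 32

def counter_delta(prev, curr, modulus=COUNTER32_MAX):
    """Octets transferred between two polls, accounting for one wrap."""
    if curr >= prev:
        return curr - prev
    # Counter wrapped between polls: count up to the wrap, then past it.
    return (modulus - prev) + curr

def rate_bps(prev, curr, interval_seconds):
    """Average bits per second over the polling interval."""
    return counter_delta(prev, curr) * 8 / interval_seconds

# Example: the counter wrapped from near the 32-bit limit back to a small value.
print(counter_delta(4294967290, 10))  # -> 16
```

This only handles a single wrap per poll; if an interface can wrap more than once between polls, the polling interval needs to shrink, since there is no way to tell from the counter alone.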

So, as you can see, I have content I need to write up formally and publish. Since I have not posted lately, I wanted to let you all know that I am alive, just busy.

Roger L

One, Two, or Four vCPUs in VMware?



Someone on Twitter today was asking about performance and how it relates to vCPUs.


Personally, I have found much better performance and lower CPU usage with 1 vCPU than with 2 vCPUs, and I have never tried 4 vCPUs.


Let’s see if we can find some other information on the subject.


The VMware paper "Co-scheduling SMP VMs in VMware ESX Server" says the following:

VMware ESX Server efficiently manages a mix of uniprocessor and multiprocessor VMs, providing a rich set of controls for specifying both absolute and relative VM execution rates. For general information on CPU scheduling controls and other resource management topics, please see the official VMware Resource Management Guide.
For a multiprocessor VM (also known as an "SMP VM"), it is important to present the guest OS and applications executing within the VM with the illusion that they are running on a dedicated physical multiprocessor. ESX Server faithfully implements this illusion by supporting near-synchronous coscheduling of the virtual CPUs within a single multiprocessor VM.
The term "coscheduling" refers to a technique used in concurrent systems for scheduling related processes to run on different processors at the same time. This approach, alternately referred to as "gang scheduling", had historically been applied to running high-performance parallel applications, such as scientific computations. VMware ESX Server pioneered a form of coscheduling that is optimized for running SMP VMs efficiently.


But the only document I can find that really addresses this is an older document titled:

 Best Practices Using VMware Virtual SMP

Virtual SMP can provide a significant advantage for multi-threaded applications and applications using multiple processes in execution. However, since not all applications are able to take advantage of a second or multiple CPUs, SMP virtual machines should not be provisioned by default. For each virtual machine, system planners should carefully analyze the trade-off between possible increases in performance and throughput gained through individual virtual machines and the number of virtual machines supported by physical hardware.

The best practice for deploying Virtual SMP depends on a number of performance and configuration factors, some of which are similar to physical SMP. You should analyze your specific applications to understand if deploying them in SMP virtual machines will improve throughput. In addition, you can achieve more efficient use of the underlying hardware by deploying a mixture of single-CPU and SMP virtual machines to ESX Server platform systems.

Some key points regarding Virtual SMP deployment to take away from this white paper are:

• Virtual SMP allows a virtual machine to access two or more CPUs.
• Up to two physical processors can be consumed when you run a two-way virtual machine.
• Only allocate two or more virtual CPUs to a virtual machine if the operating system and the application can truly take advantage of all the virtual CPUs. Otherwise, physical processor resources may be consumed with no application performance benefit and, as a result, other virtual machines on the same physical machine will be penalized.

The key points I took from the document are in its conclusion, quoted above.


What do other bloggers say?

The Lone Sysadmin has a post titled "Why My Two vCPU VM is Slow":

Sometimes computers are counterintuitive. One great case continues to be why a virtual machine with two vCPUs runs more slowly than a virtual machine with one vCPU.

Think of virtualization like a movie. A movie is a series of individual frames, but played back the motion looks continuous. It’s the same way with virtual machines. A physical CPU can only run one thing at a time, which means that only one virtual machine can run at a time. So the hypervisor “shares” a CPU by cutting up the CPU time into chunks. Each virtual machine gets a certain chunk to do its thing, and if it gets chunks of CPU often enough it’s like the movie: it seems like the virtual machine has been running continuously, even when it hasn’t. Modern CPUs are fast enough that they can pull this illusion off.

When one virtual machine stops running another virtual machine has an opportunity to run. If you have a virtual machine with one vCPU it needs a chunk of time from a single physical CPU. When a physical CPU has some free time that single vCPU virtual machine will run. No problem.

Similarly, in order for a virtual machine with two vCPUs to run it needs to have chunks of free time on two physical CPUs. When two physical CPUs are both available that virtual machine can run.

The trouble comes when folks mix and match single and dual-vCPU virtual machines in an environment that doesn’t have a lot of CPU resources available. A two-vCPU virtual machine has to wait for two physical processors to free up, but the hypervisor doesn’t like to have idle CPUs, so it runs a single vCPU virtual machine instead. It ends up being a long time before two physical CPUs free up simultaneously, and the two vCPU virtual machine seems really slow as a result. By “seems really slow” I mean it doesn’t perform very well, but none of the performance graphs show any problems at all.

To fix this you need to set the environment up so that two physical CPUs become free more often. First, you could add CPU resources so that the probability of two CPUs being idle at the same time is higher. Unfortunately this usually means buying stuff, which isn’t quick, easy, or even possible sometimes.

Second, you could set all your virtual machines to have one vCPU. That way they’ll run whenever a single physical CPU is free. This is usually a good stopgap until you can add CPU resources.

Last, you can group all your two vCPU machines together where those pesky single vCPU virtual machines won’t bother them. When a two vCPU virtual machine stops running it’ll always free up two physical CPUs. This usually means cutting up a cluster, though, so that will have also have drawbacks.

Virtualization can be awesome, but it can be pretty counterintuitive sometimes, too.

Very interesting; take what you can from it. To me, it says you should group like VMs together, keeping 1-vCPU VMs and 2-vCPU VMs on different hosts.
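The effect he describes can be illustrated with a toy simulation. This is a deliberate oversimplification (strict gang scheduling with fixed ticks, not ESX's actual relaxed coscheduling), just to show how single-vCPU neighbors starve a 2-vCPU VM on a small host:

```python
import random

# Toy gang-scheduling simulation: a 2-vCPU VM can only run on a tick
# when two physical cores are free at the same time. This is a
# deliberate oversimplification of ESX coscheduling, for illustration.

random.seed(1)

CORES = 2
TICKS = 10_000

def simulate(single_vcpu_vms):
    """Return the fraction of ticks the 2-vCPU VM actually ran."""
    smp_ran = 0
    for _ in range(TICKS):
        free = CORES
        # Single-vCPU VMs grab a core whenever they happen to be runnable.
        for _ in range(single_vcpu_vms):
            if free > 0 and random.random() < 0.6:  # 60% chance runnable
                free -= 1
        # The SMP VM needs ALL of its vCPUs scheduled together.
        if free >= 2:
            smp_ran += 1
    return smp_ran / TICKS

for n in (0, 1, 2, 4):
    print(f"{n} single-vCPU neighbors: 2-vCPU VM ran {simulate(n):.0%} of ticks")
```

Even a couple of busy single-vCPU neighbors collapse the fraction of ticks where both cores are simultaneously idle, which matches the "seems really slow, but the graphs look fine" symptom above.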


I found this on the VMware Communities site.


Multiple vCPU’s (SMP); convincing the business

As most have come to understand, it is not a good practice to designate multiple vCPUs for a VM unless the application running can actually use the 2nd processor effectively (see discussion). I am fairly new to the VM environment at my new job and noticed ~20% of the VMs have either 2 or 4 (I assume very bad) vCPUs assigned. We steadily get requests for VMs with multiple CPUs, and it appears they are coming from requirements that assume physical servers are being used (ex: Two (2) VM Servers, each with 2 Dual-Core/8GB RAM, 108GB usable disk space). The VMware support team understands the implications of using vSMP when not needed, but how do we convince the business that their application will work on a single CPU when their requirements call for 2 dual-core, 4 processor cores? Any documentation available that we can hand out, or any suggestions, is greatly appreciated. Thanks.


Responses that caught my eye are.



It’s not bad using vSMP – one just has to do it intelligently and sparingly. It’s only really bad if you have the same number of cores on the VM as the ESX host. Also, if you assign too many multi-core VMs you could run into really high CPU Ready times and therefore poor performance. Simply put, the ESX host needs to have 4 IDLE cores to satisfy a 4-core VM. On a fairly busy 8-core ESX host, 4 available cores may be hard to come by. In that instance, that 4-core VM could potentially run slower than a single-core VM (depending on the CPU shares assigned and such).
VMWare recommends having at least double the amount of cores on the ESX host as compared to any multicore VMs. I think that works OK with the odd 2-core on a 4-core ESX. However, I’ve insisted on using 16-core ESX hosts for any 4-core VMs. In those two scenarios… with a little monitoring and tweaking… I have seen no issues with CPU Ready counters.


I recommend reading the post, but the point you need to understand is that the host needs enough physical CPUs/cores to run 2- or 4-vCPU VMs without a performance hit.

And as Texiwill points out in the thread, most of the time you will see no gain by going to more vCPUs, but you will see a performance hit.
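On the CPU Ready numbers mentioned in the responses: vCenter reports ready time as a millisecond summation per sample interval rather than a percentage, so it helps to convert it per vCPU. A small sketch, assuming the 20-second real-time chart interval (other chart rollups use longer intervals):

```python
# Convert vCenter's CPU Ready "summation" counter (milliseconds of
# ready time accumulated over one sample interval) into a percentage.
# The 20 s default matches the real-time chart; adjust for rollups.

def ready_percent(ready_ms, interval_seconds=20):
    """Percentage of the interval a vCPU spent ready but not running."""
    return ready_ms / (interval_seconds * 1000) * 100

# Rough rule of thumb (my own hedge, not an official threshold):
# sustained ready time above ~5% per vCPU is worth investigating,
# and above ~10% usually means real CPU contention.
print(ready_percent(1000))  # 1000 ms ready in a 20 s sample -> 5.0
```

Watching this number per vCPU is the practical way to see the coscheduling penalty that the graphs otherwise hide.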



To my understanding, the only time you would want multiple vCPUs is when you have plenty of physical CPUs/cores and you are running a genuinely multithreaded application, and even then you should avoid mixing vCPU counts on the same host/cluster.




Thoughts? Am I understanding this correctly? Please post comments.