
One, Two, or Four vCPUs in VMware?

by Roger Lund


Someone on Twitter today was asking about performance and how it relates to vCPUs.

 

Personally, I have found much better performance and lower CPU usage with 1 vCPU than with 2 vCPUs, and I have never tried 4 vCPUs.

 

Let’s see if we can find some other information on the subject.

 

Co-scheduling SMP VMs in VMware ESX Server says the following:

VMware ESX Server efficiently manages a mix of uniprocessor and multiprocessor VMs, providing a rich set of controls for specifying both absolute and relative VM execution rates. For general information on cpu scheduling controls and other resource management topics, please see the official VMware Resource Management Guide.
For a multiprocessor VM (also known as an "SMP VM"), it is important to present the guest OS and applications executing within the VM with the illusion that they are running on a dedicated physical multiprocessor. ESX Server faithfully implements this illusion by supporting near-synchronous coscheduling of the virtual CPUs within a single multiprocessor VM.
The term "coscheduling" refers to a technique used in concurrent systems for scheduling related processes to run on different processors at the same time. This approach, alternately referred to as "gang scheduling", had historically been applied to running high-performance parallel applications, such as scientific computations. VMware ESX Server pioneered a form of coscheduling that is optimized for running SMP VMs efficiently.

 

But the only document I can find that really addresses this is an older document titled:

 Best Practices Using VMware Virtual SMP

Virtual SMP can provide a significant advantage for multi-threaded applications and applications
using multiple processes in execution. However, since not all applications are able to take
advantage of a second or multiple CPUs, SMP virtual machines should not be provisioned by
default. For each virtual machine, system planners should carefully analyze the trade-off
between possible increases in performance and throughput gained through individual virtual
machines and the number of virtual machines supported by physical hardware.
The best practice for deploying Virtual SMP depends on a number of performance and
configuration factors, some of which are similar to physical SMP. You should analyze your
specific applications to understand if deploying them in SMP virtual machines will improve
throughput. In addition, you can achieve more efficient use of the underlying hardware by
deploying a mixture of single-CPU and SMP virtual machines to ESX Server platform systems.
Some key points regarding Virtual SMP deployment to take away from this white paper are:
• Virtual SMP allows a virtual machine to access two or more CPUs.
• Up to two physical processors can be consumed when you run a two-way virtual machine.
• Only allocate two or more virtual CPUs to a virtual machine if the operating system and the
application can truly take advantage of all the virtual CPUs. Otherwise, physical processor
resources may be consumed with no application performance benefit and, as a result,
other virtual machines on the same physical machine will be penalized.

Those key points come from the conclusion of the document.
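
The point about applications truly taking advantage of extra vCPUs is worth making concrete. Here is a rough Python sketch (my own illustration, with made-up numbers, not part of the VMware paper) showing that a second CPU only helps when the work can actually be split into independent pieces:

```python
# Illustrative only: extra CPUs help only if the workload is parallelizable.
import time
from multiprocessing import Pool

def burn(n):
    """A CPU-bound chunk of work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

CHUNKS = [2_000_000] * 8  # a workload that CAN be split into independent pieces

if __name__ == "__main__":
    # One worker ~ one vCPU doing everything serially.
    start = time.time()
    with Pool(processes=1) as pool:
        pool.map(burn, CHUNKS)
    print(f"1 worker:  {time.time() - start:.2f}s")

    # Two workers ~ two vCPUs; helps only because the chunks are independent.
    start = time.time()
    with Pool(processes=2) as pool:
        pool.map(burn, CHUNKS)
    print(f"2 workers: {time.time() - start:.2f}s")

    # A strictly serial job (one long dependent loop) would see no speedup,
    # which is the paper's point: don't hand out a second vCPU "just in case".
```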

 

What do other bloggers say?

The Lone Sysadmin has a post titled: Why My Two vCPU VM is Slow

Sometimes computers are counterintuitive. One great case continues to be why a virtual machine with two vCPUs runs more slowly than a virtual machine with one vCPU.

Think of virtualization like a movie. A movie is a series of individual frames, but played back the motion looks continuous. It’s the same way with virtual machines. A physical CPU can only run one thing at a time, which means that only one virtual machine can run at a time. So the hypervisor “shares” a CPU by cutting up the CPU time into chunks. Each virtual machine gets a certain chunk to do its thing, and if it gets chunks of CPU often enough it’s like the movie: it seems like the virtual machine has been running continuously, even when it hasn’t. Modern CPUs are fast enough that they can pull this illusion off.

When one virtual machine stops running another virtual machine has an opportunity to run. If you have a virtual machine with one vCPU it needs a chunk of time from a single physical CPU. When a physical CPU has some free time that single vCPU virtual machine will run. No problem.

Similarly, in order for a virtual machine with two vCPUs to run it needs to have chunks of free time on two physical CPUs. When two physical CPUs are both available that virtual machine can run.

The trouble comes when folks mix and match single and dual-vCPU virtual machines in an environment that doesn’t have a lot of CPU resources available. A two-vCPU virtual machine has to wait for two physical processors to free up, but the hypervisor doesn’t like to have idle CPUs, so it runs a single vCPU virtual machine instead. It ends up being a long time before two physical CPUs free up simultaneously, and the two vCPU virtual machine seems really slow as a result. By “seems really slow” I mean it doesn’t perform very well, but none of the performance graphs show any problems at all.

To fix this you need to set the environment up so that two physical CPUs become free more often. First, you could add CPU resources so that the probability of two CPUs being idle at the same time is higher. Unfortunately this usually means buying stuff, which isn’t quick, easy, or even possible sometimes.

Second, you could set all your virtual machines to have one vCPU. That way they’ll run whenever a single physical CPU is free. This is usually a good stopgap until you can add CPU resources.

Last, you can group all your two vCPU machines together where those pesky single vCPU virtual machines won’t bother them. When a two vCPU virtual machine stops running it’ll always free up two physical CPUs. This usually means cutting up a cluster, though, so that will also have drawbacks.

Virtualization can be awesome, but it can be pretty counterintuitive sometimes, too.

Very interesting; take what you can from it. To me it says that you should keep like VMs together: put 1 vCPU and 2 vCPU VMs on different hosts.
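
To see the "waiting for two free physical CPUs" problem in numbers, here is a quick back-of-the-envelope simulation I put together (purely illustrative; the core count and busy probability are assumptions, not measurements):

```python
# Rough sketch: how often could a 1-vCPU vs a 2-vCPU VM be scheduled when the
# host's physical cores are each busy a given fraction of the time?
import random

CORES = 4          # physical cores on the host (assumed)
BUSY_PROB = 0.6    # chance a given core is busy in any time slice (assumed)
SLICES = 100_000   # number of scheduling time slices to sample

one_vcpu_ok = two_vcpu_ok = 0
for _ in range(SLICES):
    idle = sum(1 for _ in range(CORES) if random.random() > BUSY_PROB)
    if idle >= 1:
        one_vcpu_ok += 1   # a 1-vCPU VM needs just one idle core
    if idle >= 2:
        two_vcpu_ok += 1   # a 2-vCPU VM needs two idle cores at the same time

print(f"1-vCPU VM could run in {one_vcpu_ok / SLICES:.0%} of slices")
print(f"2-vCPU VM could run in {two_vcpu_ok / SLICES:.0%} of slices")
```

The real scheduler is smarter than this, but it captures why the two vCPU VM "seems really slow" while the graphs look fine: it simply gets fewer chances to run.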

 

I found this on the VMware Communities site.

 

Multiple vCPU’s (SMP); convincing the business

As most have come to understand, it is not a good practice to designate multiple vCPU’s for a VM unless the application running can actually use the 2nd processor effectively (see discussion http://communities.vmware.com/thread/131269). I am fairly new to the VM environment at my new job and noticed ~20% of the VM’s have either 2 or 4 (I assume very bad) vCPU’s assigned. We steadily get requests for VM’s with multiple CPU’s and it appears they are coming from requirements that assume physical servers are being used (ex: Two (2) VM Servers; Each with 2 Dual-Core/8GB RAM 108GB Usable disk space). The VMware support team understands the implications of using vSMP when not needed, but how do we convince the business that their application will work on a single CPU when their requirements call for 2 dual-core; 4 processor cores. Any documentation available that we can hand out or any suggestions is greatly appreciated. Thanks.

Dave

Here are responses that caught my eye.

 

beagle_jbl

It’s not bad using vSMP – one just has to do it intelligently and sparingly. It’s only really bad if you have the same number of cores on the VM as the ESX host. Also, if you assign too many multi-core VMs you could run into really high CPU Ready times and therefore poor performance. Simply put, the ESX host needs to have 4 IDLE cores to satisfy a 4-core VM. On a fairly busy 8-core ESX host, 4 available cores may be hard to come by. In that instance, that 4-core VM could potentially run slower than a single-core VM (depending on the CPU shares assigned and such).
VMWare recommends having at least double the amount of cores on the ESX host as compared to any multicore VMs. I think that works OK with the odd 2-core on a 4-core ESX. However, I’ve insisted on using 16-core ESX hosts for any 4-core VMs. In those two scenarios… with a little monitoring and tweaking… I have seen no issues with CPU Ready counters.
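
That "at least double the cores" rule of thumb is easy to sanity-check before granting a vSMP request. A rough Python sketch of my own (hypothetical VM names and sizes, not from the thread) might look like this:

```python
# Quick sanity check based on the rule of thumb above: the host should have at
# least twice as many cores as the largest multi-vCPU VM. Hypothetical data.
host_cores = 8
vm_vcpus = {"web01": 1, "sql01": 4, "app01": 2, "build01": 8}

for name, vcpus in vm_vcpus.items():
    if vcpus > 1 and host_cores < 2 * vcpus:
        print(f"{name}: {vcpus} vCPUs on a {host_cores}-core host -- "
              f"expect high CPU Ready time")
    else:
        print(f"{name}: {vcpus} vCPUs looks OK on {host_cores} cores")
```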

 

I recommend reading the post, but the point you need to understand is that the host needs enough physical CPUs / cores to run 2 or 4 vCPU VMs without a performance hit.

And as Texiwill points out in the thread, most of the time you will see no increase by going to more vCPUs, but you will see a performance hit.

 

Summary

To my understanding, the only time you would want to use multiple vCPUs is when you have lots of physical CPUs / cores and you are running a multi-threaded application that can actually use them, and even then you should avoid mixing single and multi vCPU VMs on the same host / cluster.


Thoughts? Am I understanding this correctly? Please post comments.
