
http://www.educait.net/ Launched, Based on the Local Central MN User Group and Community

http://www.educait.net/ has had its official launch!

 

We are a group of IT professionals dedicated to improving collaboration and education in the central Minnesota IT community.

Our team members work and reside in the central Minnesota community, locally supporting the backbone of the digital age and sharing their passion to expand YOUR abilities by donating their time and talents to host user groups and technology seminars at no cost to you.

 

I am pleased to announce that we have a web site up for the Central MN user group; please check it out and spread the word.

 

More updates to come on both the page and blog.

Roger L.

St. Cloud, Minnesota's IT & Virtualization User Group

I have gotten together with a couple of local IT administrators, and we have started an IT & virtualization user group: the Central MN IT & Virtualization User Group.

We meet for the first time on May 7th at the St. Cloud Library in St. Cloud, MN, from 3-5 P.M. in the Bremmer Room (max 50 people).

Our topic is going to be an introduction to virtualization.

We are going to talk about virtualization and how it works, and touch on how VMware, Citrix and Microsoft approach it.

People can register by going to https://sites.google.com/a/centralmnit.com/registration/

I hope to see you there!

If you have questions, message me on Twitter: http://twitter.com/rogerlund

Roger L.

To P2V or Not to P2V, That is not the Question.

http://technodrone.blogspot.com has an interesting write-up titled Is P2V always the best solution?

I would, however, like to raise another point of view. Correct, the easiest solution for a system residing on aging hardware is to P2V the system to a VM, and it seems that I have solved my problem. But have I? I posted a question about 9 months ago with this exact same thought.

Do not blindly P2V systems, just because you can.

You will regret doing so in the long run.

I for one do not want ancient Windows/Linux systems in my datacenter. Would you?

 

This is in response to http://vinf.net's write-up titled Using Virtualization to Extend The Hardware Lifecycle:

In harder economic times, getting real money to spend on server refreshes is difficult. There are the arguments that new kit is more power efficient and supports higher VM/CPU core densities, but the reality is that even if you can show a cost saving over time, most current project budgets are at best frozen until the economic uncertainty passes, at worst eliminated.

Although power costs have become increasingly visible because they've risen so much over the last 18 months, this is still a hidden cost to many organisations, particularly if you run servers in your offices where a facilities team picks up the bill, so the overall energy savings through virtualization and hardware refresh don't always get through.

So, I propose some alternative thinking to ride out the recession and make the kit you have and can’t get budget to replace last longer, as well as delivering a basic disaster recovery or test & development platform (business value) in the meantime.

Each is an interesting read, and I would like to share my thoughts on this.

 

Instead of asking why we should or should not use virtualization to extend an application's life, we should look at this as a reason to promote virtualization as a whole, whether to your company or CEO, or to the first-time user. Shouldn't you focus on what virtualization allows you to do? If legacy systems cause extra work in the long term, great: then you have a valid reason to ask for money or staffing, and if you are not allotted either, you have a reason to ask to shut down legacy systems that are no longer needed.

I think today's virtualization, whether VMware, Citrix or Hyper-V, should be a wake-up call to application vendors, as they are the ones who are going to be required to support their software products longer, since virtual environments have no hardware ties. Gone are the days when software vendors sell both hardware and software in one package and support both. I say, welcome to today, when you can ask for five-year support contracts from the day of purchase.

In my eyes, gone are the times of sitting in a dusty corner trying to sell a product with a limited life cycle tied to an attached hardware platform. Those still doing so will undoubtedly be left behind in today's recession-riddled times.

Roger L.

First Talkshoe Discussion, on Backups, a Success.

I had my first Talkshoe discussion.

I thought it went well. Thanks to everyone for being on the call.

The next one will be next week. It is an open topic; I will think of a subject, or contact me with ideas.

Title: EPISODE2 – Roger’s IT & Virtualization Discussion
Time: 03/31/2009 03:00 PM EDT

http://www.talkshoe.com/tc/44802

Topic: SNMP monitor, SYS LOGGING, Maintaining your ESX Servers

Future Topics: e-mail gateways, secure e-mail, virus scanning, spam filtering (all in one or not?); appliance vs. VMware appliance vs. server.

Been Busy, New content soon.

We recently decided on a NetApp FAS 3140 at work (more on that later), so we have been busy implementing it in our environment.

Also recently, we purchased the VMware Foundation kit.

Once the FAS had the fiber hooked up to the ESX server and was set up (more on this later), I moved all of my existing VMware guests from the ESXi hosts to the ESX host.

I also migrated my OnBase SQL, SQL temp, and disk group from locally attached disk (SAS) to the SAN.

Then I configured our 3750G switch (details later) for iSCSI, setting up three dual-port NIC teams and enabling jumbo frames, and then configured the ESX server (details later).

I have also been working on configuring Cacti to monitor the FAS 3140, which only supports SNMP v1 (ONTAP 7.2.x), and playing with templates (details later).
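Cacti handles the polling itself once the device and graph templates are in place, but if you are curious what a bare SNMP v1 query against the filer looks like, here is a minimal Python sketch assuming the pysnmp library is installed. The hostname and community string are placeholders, and it only pulls the standard MIB-II sysName OID; the NetApp-specific counters live under the vendor's enterprise MIB.

# Minimal SNMP v1 GET sketch (hypothetical hostname and community string).
# Cacti does this kind of polling for you on a schedule; this only shows
# what a single v1 query against the FAS looks like.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

FILER = "fas3140.example.local"   # placeholder hostname
COMMUNITY = "public"              # placeholder read-only community string

# mpModel=0 forces SNMP v1, which is all ONTAP 7.2.x offers here.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY, mpModel=0),
        UdpTransportTarget((FILER, 161)),
        ContextData(),
        # Standard MIB-II sysName; NetApp-specific counters require the
        # vendor MIB under the NetApp enterprise OID tree.
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.5.0")),
    )
)

if error_indication:
    print("SNMP error:", error_indication)
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))

Cacti's NetApp templates do the equivalent at scale, walking the relevant OIDs on a polling interval and graphing the results.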

So, as you can see, I have some content I need to write up formally and publish. Since I have not posted lately, I wanted to tell you all that I am alive, just busy.

Roger L

One, Two, or Four vCPUs in VMware?

Someone on Twitter today was asking about performance and how it relates to vCPUs.

 

Personally, I have found much better performance and less CPU usage with 1 vCPU vs. 2 vCPUs, and I have never tried 4 vCPUs.

 

Let’s see if we can find some other information on the subject.

 

Co-scheduling SMP VMs in VMware ESX Server says the following:

VMware ESX Server efficiently manages a mix of uniprocessor and multiprocessor VMs, providing a rich set of controls for specifying both absolute and relative VM execution rates. For general information on CPU scheduling controls and other resource management topics, please see the official VMware Resource Management Guide.

For a multiprocessor VM (also known as an "SMP VM"), it is important to present the guest OS and applications executing within the VM with the illusion that they are running on a dedicated physical multiprocessor. ESX Server faithfully implements this illusion by supporting near-synchronous coscheduling of the virtual CPUs within a single multiprocessor VM.

The term "coscheduling" refers to a technique used in concurrent systems for scheduling related processes to run on different processors at the same time. This approach, alternately referred to as "gang scheduling", had historically been applied to running high-performance parallel applications, such as scientific computations. VMware ESX Server pioneered a form of coscheduling that is optimized for running SMP VMs efficiently.

 

But the only document I can find that really addresses this is an older document titled:

 Best Practices Using VMware Virtual SMP

Virtual SMP can provide a significant advantage for multi-threaded applications and applications using multiple processes in execution. However, since not all applications are able to take advantage of a second or multiple CPUs, SMP virtual machines should not be provisioned by default. For each virtual machine, system planners should carefully analyze the trade-off between possible increases in performance and throughput gained through individual virtual machines and the number of virtual machines supported by physical hardware.

The best practice for deploying Virtual SMP depends on a number of performance and configuration factors, some of which are similar to physical SMP. You should analyze your specific applications to understand if deploying them in SMP virtual machines will improve throughput. In addition, you can achieve more efficient use of the underlying hardware by deploying a mixture of single-CPU and SMP virtual machines to ESX Server platform systems.

Some key points regarding Virtual SMP deployment to take away from this white paper are:

• Virtual SMP allows a virtual machine to access two or more CPUs.
• Up to two physical processors can be consumed when you run a two-way virtual machine.
• Only allocate two or more virtual CPUs to a virtual machine if the operating system and the application can truly take advantage of all the virtual CPUs. Otherwise, physical processor resources may be consumed with no application performance benefit and, as a result, other virtual machines on the same physical machine will be penalized.

I bolded and underlined the key points I found in the conclusion of the document.

 

What do other bloggers say?

The Lone Sysadmin has a post titled: Why My Two vCPU VM is Slow

Sometimes computers are counterintuitive. One great case continues to be why a virtual machine with two vCPUs runs more slowly than a virtual machine with one vCPU.

Think of virtualization like a movie. A movie is a series of individual frames, but played back the motion looks continuous. It’s the same way with virtual machines. A physical CPU can only run one thing at a time, which means that only one virtual machine can run at a time. So the hypervisor “shares” a CPU by cutting up the CPU time into chunks. Each virtual machine gets a certain chunk to do its thing, and if it gets chunks of CPU often enough it’s like the movie: it seems like the virtual machine has been running continuously, even when it hasn’t. Modern CPUs are fast enough that they can pull this illusion off.

When one virtual machine stops running another virtual machine has an opportunity to run. If you have a virtual machine with one vCPU it needs a chunk of time from a single physical CPU. When a physical CPU has some free time that single vCPU virtual machine will run. No problem.

Similarly, in order for a virtual machine with two vCPUs to run it needs to have chunks of free time on two physical CPUs. When two physical CPUs are both available that virtual machine can run.

The trouble comes when folks mix and match single and dual-vCPU virtual machines in an environment that doesn’t have a lot of CPU resources available. A two-vCPU virtual machine has to wait for two physical processors to free up, but the hypervisor doesn’t like to have idle CPUs, so it runs a single vCPU virtual machine instead. It ends up being a long time before two physical CPUs free up simultaneously, and the two vCPU virtual machine seems really slow as a result. By “seems really slow” I mean it doesn’t perform very well, but none of the performance graphs show any problems at all.

To fix this you need to set the environment up so that two physical CPUs become free more often. First, you could add CPU resources so that the probability of two CPUs being idle at the same time is higher. Unfortunately this usually means buying stuff, which isn’t quick, easy, or even possible sometimes.

Second, you could set all your virtual machines to have one vCPU. That way they’ll run whenever a single physical CPU is free. This is usually a good stopgap until you can add CPU resources.

Last, you can group all your two vCPU machines together where those pesky single vCPU virtual machines won’t bother them. When a two vCPU virtual machine stops running it’ll always free up two physical CPUs. This usually means cutting up a cluster, though, so that will also have drawbacks.

Virtualization can be awesome, but it can be pretty counterintuitive sometimes, too.

Very interesting; take what you can from it. To me, it says that you should keep 1 vCPU VMs and 2 vCPU VMs on different hosts.
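To convince myself of that, here is a toy scheduling simulation in Python. It is not how the real ESX scheduler works (ESX uses relaxed co-scheduling, shares, and so on), and every number in it is invented, but it illustrates the basic effect from the post above: a VM that needs two free cores at the same moment waits far more often than its single-vCPU neighbors on a busy host.

# Toy strict gang-scheduling simulation. NOT the real ESX algorithm; it
# only illustrates why a 2 vCPU VM can rack up "ready" time when mixed
# with busy 1 vCPU VMs on a small host.
import random

random.seed(42)

CORES = 4          # physical cores on the hypothetical host
TICKS = 100_000    # scheduling time slices to simulate
DEMAND = 0.8       # chance a VM wants CPU in any given slice

# (name, vCPUs): one 2 vCPU VM mixed in with 1 vCPU VMs
vms = [("app-2vcpu", 2), ("web-1", 1), ("web-2", 1), ("web-3", 1), ("web-4", 1)]
ready_ticks = {name: 0 for name, _ in vms}
want_ticks = {name: 0 for name, _ in vms}

for _ in range(TICKS):
    free = CORES
    order = vms[:]
    random.shuffle(order)               # no VM gets a fixed priority
    for name, vcpus in order:
        if random.random() > DEMAND:
            continue                    # VM is idle this slice
        want_ticks[name] += 1
        if vcpus <= free:
            free -= vcpus               # all of its vCPUs run together
        else:
            ready_ticks[name] += 1      # had work, but not enough free cores

for name, _ in vms:
    pct = 100.0 * ready_ticks[name] / max(want_ticks[name], 1)
    print(f"{name:10s} waiting {pct:5.1f}% of the time it wanted CPU")

Run it and the 2 vCPU VM ends up waiting several times more often than the single vCPU VMs, even though every VM asks for CPU equally often, which is exactly the "seems really slow but the graphs look fine" symptom described above.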

 

I found this on the VMware Communities site.

 

Multiple vCPU’s (SMP); convincing the business

As most have come to understand, it is not a good practice to designate multiple vCPUs for a VM unless the application running can actually use the 2nd processor effectively (see discussion http://communities.vmware.com/thread/131269). I am fairly new to the VM environment at my new job and noticed ~20% of the VMs have either 2 or 4 (I assume very bad) vCPUs assigned. We steadily get requests for VMs with multiple CPUs, and it appears they are coming from requirements that assume physical servers are being used (ex: Two (2) VM Servers; each with 2 Dual-Core/8GB RAM, 108GB usable disk space). The VMware support team understands the implications of using vSMP when not needed, but how do we convince the business that their application will work on a single CPU when their requirements call for 2 dual-core, 4 processor cores? Any documentation available that we can hand out or any suggestions is greatly appreciated. Thanks.

Dave

Here are the responses that caught my eye.

 

beagle_jbl

It’s not bad using vSMP – one just has to do it intelligently and sparingly. It’s only really bad if you have the same number of cores on the VM as the ESX host. Also, if you assign too many multi-core VMs you could run into really high CPU Ready times and therefore poor performance. Simply put, the ESX host needs to have 4 IDLE cores to satisfy a 4-core VM. On a fairly busy 8-core ESX host, 4 available cores may be hard to come by. In that instance, that 4-core VM could potentially run slower than a single-core VM (depending on the CPU shares assigned and such).
VMWare recommends having at least double the amount of cores on the ESX host as compared to any multicore VMs. I think that works OK with the odd 2-core on a 4-core ESX. However, I’ve insisted on using 16-core ESX hosts for any 4-core VMs. In those two scenarios… with a little monitoring and tweaking… I have seen no issues with CPU Ready counters.

 

I recommend reading the post, but the point you need to understand is that you need enough physical CPUs/cores to use 2 or 4 vCPUs without a performance hit.

And as Texiwill points out in the thread, most of the time you will see no increase by going to more vCPUs, but you will see a performance hit.
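As a back-of-the-napkin version of the rule of thumb quoted above (host cores at least double the vCPU count of any SMP VM), here is a tiny Python helper; the host size and VM requests are hypothetical, invented purely for illustration.

# Sanity-check proposed vCPU counts against the "host should have at least
# double the cores of its biggest SMP VM" rule of thumb quoted above.
# Hypothetical inventory; numbers are made up.
HOST_CORES = 8

# (vm name, requested vCPUs)
requests = [("file-server", 1), ("sql-box", 2), ("terminal-srv", 4), ("dc01", 1)]

for name, vcpus in requests:
    if vcpus == 1:
        verdict = "fine - uniprocessor VM"
    elif HOST_CORES >= 2 * vcpus:
        verdict = "OK per the 2x rule, if the app is truly multi-threaded"
    else:
        verdict = f"risky - needs {vcpus} of {HOST_CORES} cores idle at once"
    print(f"{name:14s} {vcpus} vCPU(s): {verdict}")

It is only a rule of thumb; the real test is watching the CPU Ready counters on the host, as the comment above says.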

 

Summary

To my understanding, the only time you would want multiple vCPUs is when you have lots of physical CPUs/cores and you are running a multi-threaded application, and even then you should not mix vCPU counts on the same host/cluster.

Thoughts? Am I understanding this correctly? Please post comments.

Idle Memory Tax

 

Jason Boche has taken the time to outline Idle Memory Tax in VMware ESX.

Jason, I wanted to thank you for taking the time on this, and I hope you do not mind me quoting your article.

Memory over commit is a money/infrastructure saving feature that fits perfectly within the theme of two of virtualization’s core concepts: doing more with less hardware, and helping save the environment with greenness. While Microsoft Hyper-V offers no memory over commit or page sharing technologies, VMware has understood the value in these technologies long before VI3. I’ve mentioned this before – if you haven’t read it yet, take a look at Carl Waldspurger’s 2002 white paper on Memory Resource management in VMware ESX Server.

One of VMware’s memory over commit technologies is called Idle Memory Tax. IMT basically allows the VMKernel to reclaim unused guest VM memory by assigning a higher “cost value” to unused allocated shares. The last piece of that sentence is key – did you catch it? This mechanism is tied to shares. When do shares come into play? When there is contention for physical host RAM allocated to the VMs. Or in short, when physical RAM on the ESX host has been over committed – we’ve granted more RAM to guest VMs than we actually have on the ESX host to cover at one time. When this happens, there is contention or a battle for who actually gets the physical RAM. Share values are what determine this. I don’t want to get too far off track here as this discussion is specifically on Idle Memory Tax, but shares are the foundation so they are important to understand.

Back to Idle Memory Tax. Quite simply it’s a mechanism to take idle/unused memory from guest VMs that are hogging it in order to give that memory to another VM where it’s more badly needed. Sort of like Robin Hood for VI. By default this is performed using VMware’s balloon driver which is the more optimal of the two available methods. Out of the box, the amount of idle memory that will be reclaimed is 75% as configured by Mem.IdleTax under advanced host configuration. The VMKernel polls for idle memory in guest VMs every 60 seconds. This interval was doubled from ESX2.x where the polling period was every 30 seconds.

Here’s a working example of the scenario:

  • Two guest VMs live on an ESX/ESXi host with 8GB RAM
  • Each VM is assigned 8GB RAM and 8,192 shares. Discounting memory overhead, content based page sharing, and COS memory usage, we’ve effectively over committed our memory by 100%
  • VM1 is running IIS using only 1GB RAM
  • VM2 is running SQL and is requesting the use of all 8GB RAM
  • Idle Memory Tax allows the VMKernel to “borrow” 75% of the 7GB of allocated but unused RAM from VM1 and give it to VM2.  25% of the unused allocated RAM will be left for the VM as a cushion for requests for additional memory before other memory over commit technologies kick in

Here are the values under ESX host advanced configuration that we can tweak to modify the default behavior of Idle Memory Tax:

  • Mem.IdleTax – default: 75, range: 0 to 99, specifies the percent of idle memory that may be reclaimed by the tax
  • Mem.SamplePeriod – default: 60 in ESX3.x 30 in ESX2.x, range: 0 to 180, specifies the polling interval in seconds at which the VMKernel will scan for idle memory
  • Mem.IdleTaxType – default: 1 (variable), range: 0 (flat – use paging mechanism) to 1 (variable – use the balloon driver), specifies the method at which the VMKernel will reclaim idle memory. It is highly recommended to leave this at 1 to use the balloon driver as paging is more detrimental to the performance of the VM

VMware recommends that changes to Idle Memory Tax are not necessary, or even appropriate. If you get into the situation where Idle Memory Tax comes into play, you need to question the VMs that have large quantities of allocated but idle memory. Rather than allocating more memory to the VM than it needs, thus wrongly inflating its share value, consider reducing the allocated amounts of RAM to those VMs.

I thought this was very interesting; I myself have never played with this setting.
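To make Jason's working example concrete, here is the arithmetic as a short Python sketch, using the default Mem.IdleTax of 75%. It is plain arithmetic, not an API call, and the real VMkernel accounting (sampling intervals, ballooning, overhead) is more involved.

# Working example from the quoted article: how much idle memory the
# default 75% Idle Memory Tax could reclaim from VM1 (the idle IIS box)
# for the benefit of VM2 (the busy SQL box).
GB = 1024  # work in MB

idle_tax = 0.75          # Mem.IdleTax default (fraction of idle memory reclaimable)
vm1_allocated = 8 * GB   # VM1 granted 8 GB
vm1_active = 1 * GB      # VM1 (IIS) actually touching about 1 GB

idle = vm1_allocated - vm1_active
reclaimable = idle_tax * idle
cushion = idle - reclaimable

print(f"Idle memory in VM1:        {idle / GB:.2f} GB")
print(f"Reclaimable via IMT (75%): {reclaimable / GB:.2f} GB")
print(f"Left as cushion (25%):     {cushion / GB:.2f} GB")

With the example numbers, that works out to about 5.25GB reclaimable and 1.75GB left as a cushion for VM1.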

 

I did have some questions about performance on both the VMware host side and the VMware guest side. Does either take a performance hit when ESX is using Idle Memory Tax to reclaim that memory?

I could see large uses for this, if you have a system that only uses large amounts of memory at certain times of the day.

In fact, would you want to design your systems to take advantage of this (by forcing low RAM) if you have certain systems that run at night, when other systems (with more RAM) that are normally only used during the day sit idle?

 

Source post: http://www.boche.net/blog/index.php/2009/01/29/idle-memory-tax/

Xiotech Market information and Points of Interest.

A local vendor sent me these links, and I thought I would share.

 

http://www.xiotech.com/About_Press_Release.aspx?ID=NR-09-17-08 Virtual View, recognized in Storage Software for Virtualization category, dramatically simplifies storage monitoring, management and provisioning tasks in VMWare virtual environments
http://www.drunkendata.com/?p=2100  “Xiotech gets my nod for best technology initiative of 2008”
http://www.byteandswitch.com/document.asp?doc_id=169956&WT.svl=news2_1  Article featuring Dan Lewis of USC Marshall School of Business. Lewis declares that his Emprise 7000 system provides an 8x better performance than his old HP EVA3000. He also praises Virtual View, self-healing capabilities, and reduced energy consumption
http://searchstorage.techtarget.com.au/articles/28223-Five-hot-storage-technologies-for-2-9-and-five-flops- ISE among the top 5 technologies for 2009
http://www.drunkendata.com/?p=2085 Jon Toigo comments on the most important IT-related things that happened in 2008. Xiotech holds both the #1 and #2 spots – with ISE and Web Services based management.
http://www.crn.com/storage/212501502;jsessionid=4JXY4Y42KKETWQSNDLRSKH0CJUNN2JVN?pgno=9 The 10 Biggest Storage Stories Of 2008
http://www.xiotech.com/About_Press_Release.aspx?ID=NR-12-15-08  Xiotech Selected as 2009 Hot Companies Finalist
http://www.xiotech.com/About_Press.aspx#  More, more, more
http://www.networkworld.com/supp/2009/ndc1/012609-storage-rethink.html?page=1 Article on Web Services and ISE

 

Source: Xiotech and a local vendor.