Bad news for VMware VI Enterprise customers everywhere. I just found out I have 110 unsupported production and development VMs in my datacenter. Symantec published Document ID 2008101607465248 on 10/15/08 removing VMware VMotion support from its Symantec Antivirus (SAV) and Symantec Endpoint Protection (SEP) products.
Operating systems impacted are: All Windows operating systems.
Reported issues include but are not limited to:
* Client communication problems
* Symantec Endpoint Protection Manager (SEPM) communication issues
* Content update failures
* Policy update failures
* Client data does not get entered into the database
* Replication failures
Bad news indeed. I hope Symantec ups its game and gets this working.
Building on my earlier article on setting up NetApp deduplication, I wanted to follow up with some information on using NetApp deduplication with block storage (LUNs presented via Fibre Channel or iSCSI).
I’m relatively new to NetApp deduplication (formerly A-SIS), so this article won’t be an advanced treatise on NetApp deduplication or its deep inner workings. Instead, this is intended to be a quick guide to setting up NetApp deduplication for others, like myself, who may be familiar with Data ONTAP but not necessarily deduplication.
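For readers who want to try this themselves, here is a minimal sketch of enabling deduplication on a volume holding LUNs. The filer prompt `fas1>` and the volume name `vmfs_vol1` are hypothetical placeholders; substitute your own.

```
fas1> sis on /vol/vmfs_vol1        # enable deduplication (A-SIS) on the volume
fas1> sis start -s /vol/vmfs_vol1  # -s scans and deduplicates existing data
fas1> sis status /vol/vmfs_vol1    # check the progress of the scan
fas1> df -s /vol/vmfs_vol1         # report the space savings achieved
```

Note that existing data in the volume is not deduplicated until you run `sis start -s`; without the `-s` flag, only data written after deduplication was enabled is examined.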
If you’ve worked with Network Appliance storage before, you’re probably already familiar with the idea of snap reserve (storage space set aside to accommodate Snapshots) and fractional reserve (used with LUNs). I’m going to hold the in-depth discussion of why you need snap reserve and fractional reserve for a different day, but I did want to pass on these commands, which were shared with me by a colleague. These Data ONTAP commands, available with Data ONTAP 7.2 or later (some are available in Data ONTAP 7.1), will help you manage the space requirements for LUNs on a NetApp storage area network (SAN).
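As an illustration, the commands below show one common way to configure a volume holding LUNs so that space is grown or reclaimed automatically instead of being permanently reserved. The filer prompt `fas1>`, the volume name `vmfs_vol1`, and the size values are hypothetical; adjust them to your environment.

```
fas1> snap reserve vmfs_vol1 0                    # no snap reserve on LUN volumes
fas1> vol options vmfs_vol1 fractional_reserve 0  # requires Data ONTAP 7.2+
fas1> vol autosize vmfs_vol1 -m 600g -i 20g on    # let the volume grow on demand
fas1> snap autodelete vmfs_vol1 on                # reclaim space from old Snapshots
fas1> vol options vmfs_vol1 try_first volume_grow # grow before deleting Snapshots
```

The last option tells Data ONTAP to try growing the volume (up to the autosize maximum) before it starts deleting Snapshots when space runs low.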
When properly implemented and configured, VMware Virtual Infrastructure can make provisioning new servers a task that takes only minutes. In fact, in my own lab (running equipment that is, admittedly, several years old and woefully underpowered), I can provision new servers running Windows Server 2003 R2 in less than 10 minutes. That’s pretty impressive.
As impressive as those numbers may be (and I’m sure there are readers out there with even more impressive numbers), if we leverage some vendor-specific storage functionality we can achieve some really impressive times. For example, leveraging NetApp FlexClones could allow us to provision new VMs in seconds. Let’s take a quick look at how that’s done.
In this article, I’m going to discuss how to use FlexClones for provisioning new VMs in a VMware VI3 environment. This is not an exhaustive treatise on the subject, but rather an introduction to the process and some of the configuration that needs to take place in your environment. (Disclaimer: Use this stuff at your own risk.)
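At the Data ONTAP level, the core of the process is creating a FlexClone of a “golden” volume that holds the template VM. A rough sketch follows; the filer prompt `fas1>` and the volume and Snapshot names are hypothetical.

```
fas1> snap create vm_gold clone_base              # Snapshot of the golden volume
fas1> vol clone create vm_clone1 -s none -b vm_gold clone_base
fas1> vol status vm_clone1                        # verify the clone is online

# Later, if you want the clone to become independent of its parent:
fas1> vol clone split start vm_clone1
```

The `-s none` option disables space reservation on the clone, so it initially consumes almost no additional space; blocks are only consumed as the clone diverges from its parent. Splitting the clone copies the shared blocks and releases the backing Snapshot, at the cost of the space savings.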
The ability to quickly and easily create new virtual servers in VMware VirtualCenter (using templates and cloning) is a key feature that benefits a lot of VMware customers. A new server running Windows Server 2003 R2 in less than 10 minutes? Who wouldn’t like that functionality? (Some other day, perhaps we’ll discuss that very question.)
VMware’s cloning functionality is great in that it is completely storage-agnostic; it works the same on an HP EVA, a NetApp storage system, an EMC Clariion, or a homebrew iSCSI target. At the same time, VMware’s cloning functionality is not so great in that it is completely storage-agnostic: it doesn’t take advantage of any hardware-specific functionality. In this article, I’d like for us to look at one particular vendor, Network Appliance, and how using some vendor- and hardware-specific functionality (namely, FlexClones) can provide some benefits.
Part 1 in our series on NetApp FlexClones and VMware discussed in greater detail some of the advantages of using FlexClones for VM provisioning. In that article, we saw that using FlexClones can greatly reduce both the storage required for new VMs as well as the time required to provision new VMs, especially when the storage needed by the VMs is large. Both of these advantages can be very compelling.
However, in order to make an informed decision about whether we should use FlexClones we must also look at the disadvantages of this approach. In this part of the series, we’ll take a look at some of those disadvantages.
My recent article on how to provision VMs using FlexClones prompted a reader to ask the question, “What about using LUN clones?” That’s an excellent question, and one that I myself asked when I first started using some of the advanced functionality of Network Appliance storage systems. I had expected that this question would come up, and so I’d already begun preparing an article discussing LUN clones vs. FlexClones. My thanks go to Aaron for prompting the discussion!
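For comparison with the FlexClone example, a LUN clone operates on an individual LUN backed by a Snapshot rather than on a whole volume. A minimal sketch, with a hypothetical filer prompt, volume, LUN names, and igroup:

```
fas1> snap create vol1 lun_base                   # Snapshot backing the LUN clone
fas1> lun clone create /vol/vol1/vm1_clone -b /vol/vol1/vm_gold lun_base
fas1> lun map /vol/vol1/vm1_clone esx_igroup 10   # present the clone to the hosts
```

One consequence of this approach is that the backing Snapshot is locked for as long as the LUN clone depends on it, which is part of the trade-off between LUN clones and FlexClones discussed here.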
A number of times over the last few months, I’ve run into situations where NetApp’s FlexClone technology was being heavily pitched to customers interested in deploying, or expanding their deployment of, VMware Infrastructure.
Wow, what a great bunch of information! Thanks, Scott Lowe.
As we look at NetApp more, I will post findings and resources.
10 Gigabit Ethernet is expected to replace Gigabit Ethernet and become the dominant Ethernet standard in the next few years. VMware ESX now supports a number of 10 Gbps network cards and allows multiple virtual machines to share a single physical NIC. This paper presents results from our single-virtual machine and multi-virtual machine network throughput experiments that show ESX can easily reach line rates on 10 Gbps links. The paper discusses how Jumbo Frames influence networking performance, both on the receive and the transmit paths. The paper also presents the results of our scalability experiments in which up to 16 virtual machines share a single 10 Gbps physical NIC and discusses the allocation of bandwidth to different virtual machines.
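Since the paper highlights the effect of Jumbo Frames, it is worth noting how they are enabled on ESX itself. The sketch below uses the ESX 3.5 service console; the vSwitch name, port group name, and IP addressing are hypothetical, and the physical switches and NICs in the path must also support jumbo frames end to end.

```
# Set a 9000-byte MTU on the virtual switch (run from the service console)
esxcfg-vswitch -m 9000 vSwitch1

# Create a jumbo-frame-capable VMkernel interface on that switch
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 -m 9000 VMkernel-Jumbo

# Verify the MTU setting on the vSwitch
esxcfg-vswitch -l
```

Guest operating systems must also be configured for the larger MTU (and use a vNIC type that supports it) before jumbo frames take effect on the transmit path.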
Monday Night Football kickoff is coming up, but I wanted to follow up quickly with another option (as suggested by Lane Leverett): deploy the VirtualCenter Management Server (VCMS) on a Windows VM hosted on a VMware Virtual Infrastructure cluster. Why is this a good option?
This is what I plan on looking at when we purchase our own VirtualCenter licensing (soon). Jason did a great job writing this up, and I thank him for it.
Zenoss has discovered a security vulnerability related to XML-RPC authentication which, in some cases, allows for un-authenticated method invocation in all versions of Zenoss Professional, Enterprise, Service Provider and Core.
Zenoss strongly recommends you patch this vulnerability immediately. All users should review this advisory; however, those customers who have installed Zenoss on a publicly available network may be at increased risk. “
“ The economic crisis this year has encouraged users to weigh iSCSI SAN vs. FC SAN. Data storage growth will never stop unless the business stops. To keep up with that growth, IT architects will have to provide a cost-effective solution in financially critical times like now. “
A great read, and yet another good article on iSCSI vs. FC.
“ I currently admin a sizable VMware ESX deployment hosting a couple dozen domains and over a hundred Windows servers. Most are for testing and could be rebuilt pretty quickly, but some are for production deployment – at least one of which is fairly irreplaceable. “ This is something I have yet to work on in full force; thanks to Motogobi for posting this.
There are several mentions of this out in the blogosphere, plus it was talked about openly at VMworld by VMware staff in presentations and labs. It’s worth mentioning again because people don’t seem to be catching on:
Future versions of VMware ESX/ESXi will only run on 64-bit-capable CPUs.
You will still be able to run 32-bit guest OSes, but the ESX console OS will only work on CPUs capable of Intel VT & EM64T. This is a big deal”
If you think about this and how it would affect you and your company, I tend to agree with Bob.
We are currently running a couple of ESXi servers on 32-bit hardware, and even if we replace them with 64-bit counterparts at some point, I would like to put them into a test environment or use them at a future DR site.
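If you want to know which of your hosts will be affected, a quick sanity check can be run on a Linux box or the ESX service console. This is a minimal sketch that assumes a Linux-style `/proc/cpuinfo`: the `lm` (long mode) flag indicates a 64-bit-capable (EM64T/AMD64) processor, and the `vmx`/`svm` flags indicate Intel VT / AMD-V hardware virtualization support.

```shell
# Check for the 'lm' CPU flag (64-bit long mode: EM64T on Intel, AMD64 on AMD)
if grep -qw lm /proc/cpuinfo; then CPU64=yes; else CPU64=no; fi

# Check for hardware virtualization flags (vmx = Intel VT, svm = AMD-V)
if grep -Eqw 'vmx|svm' /proc/cpuinfo; then HWVIRT=yes; else HWVIRT=no; fi

echo "64-bit capable: $CPU64  hardware virtualization: $HWVIRT"
```

One caveat: VT may be present in the CPU but disabled in the BIOS, in which case the flag can be hidden, so a `no` for hardware virtualization is worth double-checking in the BIOS setup before writing a host off.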