Category Archives: ESXi

VMware vSphere 6.0 is now available

Looks like VMware has made the 6.0 version of their vSphere and related product lines available today.  Here are the links to download.  Note:  These links require a My VMware account that is licensed for these products.

VMware vCloud Suite 6.0 (You can get ESXi 6.0, vCenter Server 6.0, vSphere Replication 6.0, vSphere Data Protection 6.0, vCenter Site Recovery Manager 6.0,  vRealize Orchestrator Appliance 6.0.1 and vRealize Operations Manager 6.0.1 from this link.  Virtual SAN is included with ESXi and vCenter Server downloads)

Short and simple right?

 

This is from the VMware Site.

 

VMware Announces General Availability of vSphere 6

Today, we are excited to announce the general availability of VMware vSphere 6 along with a slew of other Software-Defined Data Center (SDDC) products including VMware Integrated OpenStack, VMware Virtual SAN 6, VMware vSphere Virtual Volumes, VMware vCloud Suite 6, and VMware vSphere with Operations Management 6.

vSphere 6 is the latest release of the industry-leading virtualization platform and serves as the foundation of the SDDC. This is the largest ever release of vSphere and the first major release of the flagship product in over three years. vSphere 6 is jam-packed with features and innovations that enable users to virtualize any application, including both scale-up and scale-out applications, with confidence. New capabilities include increased scale and performance, breakthrough industry-first availability, storage efficiencies for virtual machines, and simplified management at scale. For more details on the blockbuster features please refer to the vSphere 6 announcement.

If you are interested in learning more about vSphere 6, VMware has several resources available on its site.

 


Move vSphere Replicated VM files from one datastore to another

Recently at my day job we had some new storage allocated at our recovery site to use for vSphere, and I was tasked with decommissioning the old datastores. The problem was that my replicated VMs resided on the old storage. Of course I could have gone into the vSphere Replication settings on each VM, pointed it at the new datastores, and been done with it, but that would have taken quite some time since the VMs would have had to fully replicate again. I wanted an easy way to copy the replicated VMs from the old datastores to the new ones, so I did some Internet searching and found the following blog post: Copy Files Between Datastores – PowerCLI. Dan Hayward posted a useful PowerCLI script that he used to copy ISO files from one datastore to another. I adapted his script to move a VM's files from an old datastore to a new one. I could have scripted it further and passed in the variables from a CSV file, but I wanted to update the vSphere Replication settings one VM at a time. So here is what my script looked like:

Connect-VIServer ServerName

# Sets the old datastore
$oldds = Get-Datastore "OldDatastore"

# Sets the new datastore
$newds = Get-Datastore "NewDatastore"

# Sets the VM folder name
$VMloc = "VMName"

# Map both datastores as PowerShell drives
New-PSDrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\"
New-PSDrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\"

# Copy the VMDK files from the old datastore to the new one
Copy-DatastoreItem -Recurse -Force -Item olddrive:\$VMloc\$VMloc*.vmdk -Destination newdrive:\$VMloc\

Basically the script connects you to your vCenter server, sets the old and new datastore variables, sets the VM folder name, maps both datastores as drives, and copies the VMDK files from the old datastore to the new one. Having the VMDK files copied over to the new datastores allowed me to use them as replication seeds for each disk when I reconfigured replication for the VM. I just updated the script for each VM that I needed to copy to the new datastores.

Obviously this could have been automated even more, as I had to do this for over 120 VMs, but I am not a scripting expert. I am just thankful for a great blog post from Dan Hayward to help me out. Thanks Dan!
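For anyone who does want to drive it from a CSV, here is a rough, untested sketch of how I might loop the same copy over a list of VMs. The CSV path and its VMName, OldDatastore and NewDatastore columns are my own assumptions, so adjust them to fit your environment.

# Rough sketch (untested): loop the datastore copy over a CSV of VM names and datastores
Connect-VIServer ServerName

$rows = Import-Csv "C:\Scripts\replica-moves.csv"   # hypothetical path; columns: VMName,OldDatastore,NewDatastore
foreach ($row in $rows) {
    $oldds = Get-Datastore $row.OldDatastore
    $newds = Get-Datastore $row.NewDatastore

    # Map both datastores as PowerShell drives
    New-PSDrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\" | Out-Null
    New-PSDrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\" | Out-Null

    # Copy this VM's VMDK files to the matching folder on the new datastore
    Copy-DatastoreItem -Recurse -Force -Item "olddrive:\$($row.VMName)\$($row.VMName)*.vmdk" -Destination "newdrive:\$($row.VMName)\"

    Remove-PSDrive -Name olddrive, newdrive
}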

vSphere 6.0 Fault Tolerance – New Features

Today VMware announced the next major release of its Fault Tolerance feature. Note: these details are pre-release, so I will update this post if anything changes. All facts come directly from VMware.

New features

  • Enhanced virtual disk format support
  • Ability to hot-configure FT
  • Greatly increased FT host compatibility
  • 4 vCPUs per FT-protected VM
  • Up to 4 FT-protected VMs or 8 FT vCPUs per host (10 Gb networking is required)
  • Support for vStorage APIs for Data Protection (VADP)
  • API for non-disruptive snapshots
  • Source and destination VMs each have independent VMDK files
  • Source and destination VMs can reside on different datastores
Stay tuned for a walkthrough.
Update: a follow-up post, vSphere 6 – Clarifying the misinformation, has been posted to clarify any changes that have happened or will happen between the beta and this post. I did my best to validate that my information is correct.
Roger L

vSphere 6.0 Platform New Features

Today VMware announced vSphere 6, the largest release in VMware history. Note: these details are pre-release, so I will update this post if anything changes. All facts come directly from VMware.

Increased vSphere Maximums

  • 64 Hosts per Cluster
  • 8000 Virtual Machines per Cluster
  • 480 CPUs per Host
  • 12 TB RAM per Host
  • 1000 Virtual Machines per Host
Virtual Machine Compatibility ESXi 6 (vHW 11)
 
  • 128 vCPUs
  • 4 TB RAM to 12 TB RAM (depending on partner)
  • Hot-add RAM now vNUMA aware
  • WDDM 1.1 GDI acceleration features
  • xHCI 1.0 controller compatible with OS X 10.8+ xHCI driver
  • Serial and parallel port enhancements
  • A virtual machine can now have a maximum of 32 serial ports
  • Serial and parallel ports can now be removed
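As a quick aside, here is a hedged PowerCLI one-liner for moving a powered-off VM up to the new hardware version. The VM name is a placeholder, and it assumes your PowerCLI build already accepts the v11 value, so treat it as a sketch rather than a tested procedure.

# Hedged sketch: upgrade a powered-off VM to virtual hardware version 11 (vHW 11)
Get-VM "MyTestVM" | Set-VM -Version v11 -Confirm:$false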
Local ESXi Account and Password Management Enhancements
 

New ESXCLI Commands

  • Now possible to use ESXCLI commands to:
  • Create a new local user
  • List local user accounts
  • Remove local user account
  • Modify local user account
  • List permissions defined on the host
  • Set / remove permission for individual users or user groups
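A rough PowerCLI sketch of reaching the same commands through the esxcli pass-through is below. The system.account and system.permission namespaces are my assumption for where these new 6.0 commands live, and the host name is a placeholder, so verify against your own build.

# Hedged sketch: list local accounts and host permissions via the esxcli pass-through
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local")   # hypothetical host name
$esxcli.system.account.list()      # list local user accounts
$esxcli.system.permission.list()   # list permissions defined on the host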

Account Lockout

  • Two Configurable Parameters
  • Can set the maximum allowed failed login attempts (10 by default)
  • Can set lockout duration period (2 minutes by default)
  • Configurable via vCenter Host Advanced System Settings
  • Available for SSH and vSphere Web Services SDK
  • DCUI and Console Shell are not locked
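If you would rather script the two lockout parameters than click through the Web Client, something like the following PowerCLI sketch should work. I believe the advanced settings are named Security.AccountLockFailures and Security.AccountUnlockTime, but verify the names on your own 6.0 host; the host name is a placeholder.

# Hedged sketch: tighten the lockout threshold and set a 5 minute lockout window
$esx = Get-VMHost "esxi01.lab.local"   # hypothetical host name
Get-AdvancedSetting -Entity $esx -Name Security.AccountLockFailures | Set-AdvancedSetting -Value 5 -Confirm:$false
Get-AdvancedSetting -Entity $esx -Name Security.AccountUnlockTime | Set-AdvancedSetting -Value 300 -Confirm:$false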

Complexity Rules via Advanced Settings

  • No editing of PAM config files on the host required anymore
  • Change default password complexity rules using VIM API
  • Configurable via vCenter Host Advanced System Settings
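For reference, a hedged PowerCLI sketch of changing the rules is below. As far as I know the advanced setting that replaces the old PAM file edit is Security.PasswordQualityControl, and the value shown mirrors the pam_passwdqc style syntax, so adjust it to your own policy; the host name is a placeholder.

# Hedged sketch: set password complexity rules without touching PAM files on the host
$esx = Get-VMHost "esxi01.lab.local"   # hypothetical host name
Get-AdvancedSetting -Entity $esx -Name Security.PasswordQualityControl |
    Set-AdvancedSetting -Value "retry=3 min=disabled,disabled,disabled,7,7" -Confirm:$false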

Improved Auditability of ESXi Admin Actions

  •  In 6.0, all actions taken at vCenter against an ESXi server now show up in the ESXi logs with the vCenter username
  • Example log entry: [user=vpxuser:CORP\Administrator]

Enhanced Microsoft Clustering (MSCS)

  • Support for Windows 2012 R2 and SQL 2012
  • Failover Clustering and AlwaysOn Availability Groups
  • IPV6 Support
  • PVSCSI and SCSI controller support
  • vMotion Support
  • Clustering across physical hosts (CAB) with physical compatibility mode RDMs
  • Supported on Windows 2008, 2008 R2, 2012 and 2012 R2
Enjoy.
Update: a follow-up post, vSphere 6 – Clarifying the misinformation, has been posted to clarify any changes that have happened or will happen between the beta and this post. I did my best to validate that my information is correct.
Roger L


vSphere 6 Configuration of Fault Tolerance

Make sure to follow the Requirements Here

We will be showing off 4 vCPU Fault Tolerance.

Log in to the vSphere 6 Web Client.

Click on Hosts and Clusters.

Select your first host.

Click on Manage

Click on Networking

Click on Add Host Networking

This pops up.

Select Physical Network Adapter and click Next.

Select New standard switch and click Next.

Click + to add an adapter.

Select vmnic2.

Click Next

Click Finish.

Repeat on all other hosts in the cluster.

Select Host one again.

Click on Manage

Click on Networking

Click on VMkernel adapters

Click on Add Host Networking

This pops up.

Click Next.

Click Browse under Select an existing standard switch.

Select vSwitch1 and click OK.

Click Next.

In Network label, enter a name that fits your needs; in this case I will use Fault Tolerance.
Enter a VLAN ID if required; since this is a lab, I will leave it blank.
Check Fault Tolerance logging and click Next.

Enter static IP information, or use DHCP if you have a scope set up.

Click Next, then Finish.

Repeat on all other hosts in the cluster. Make sure each has a different IP Address.
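If you prefer PowerCLI over the Web Client, here is a rough equivalent of the steps above for a single host. The host name, vSwitch1, the vmnic2 uplink, the port group name and the IP address are assumptions from my lab; you would run it once per host with a unique IP each time.

# Rough PowerCLI sketch of the FT logging network setup above (run once per host, unique IP each time)
$esx = Get-VMHost "esxi01.lab.local"   # hypothetical host name

# New standard switch backed by the spare uplink
$vss = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic "vmnic2"

# VMkernel port with Fault Tolerance logging enabled
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vss -PortGroup "Fault Tolerance" `
    -IP 192.168.50.11 -SubnetMask 255.255.255.0 -FaultToleranceLoggingEnabled:$true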

Next we enable Fault Tolerance on the VM.

I have done this in the video.

I did not have a 10 Gb network, so we could not actually test and show the results.

Roger Lund

In my hunt for a way to optimize my new Server 2012 builds, I ran across the blog post below.

In the post below, the author talks about using PowerShell to tune a VM template for VMware ESXi / vSphere.

Below are some useful commands.

Windows Server 2012: VM template tuning using PowerShell 

 

 

http://johansenreidar.blogspot.com/2013/06/windows-server-2012-vm-template-tuning.html

# Change Drive Letter on DVD Drive to X

gwmi Win32_Volume -Filter "DriveType = '5'" | swmi -Arguments @{DriveLetter = 'X:'}

# Initialize RAW disks

Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle MBR

# Format disk 0 for pagefile
# Verify correct disknumber before use

New-Partition -DiskNumber 0 -UseMaximumSize -AssignDriveLetter | Format-Volume -NewFileSystemLabel 'Pagefile' -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$false

# Disable Indexing on all drives

gwmi Win32_Volume -Filter "IndexingEnabled=$true" | swmi -Arguments @{IndexingEnabled=$false}

# Set location for dedicated dump file at system failure
# Verify correct path before use

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl' -Name 'DedicatedDumpFile' -Value 'D:\MEMORY.DMP'
gwmi Win32_OSRecoveryConfiguration -EnableAllPrivileges | swmi -Arguments @{DebugFilePath='D:\MEMORY.DMP'}

# Use small memory dump at system failure

gwmi Win32_OSRecoveryConfiguration -EnableAllPrivileges | swmi -Arguments @{DebugInfoType=1}

# Change setting to: Do not automatically restart at system failure

gwmi Win32_OSRecoveryConfiguration -EnableAllPrivileges | swmi -Arguments @{AutoReboot=$false}

# Turn off automatically manage paging file size for all drives

gwmi Win32_ComputerSystem -EnableAllPrivileges | swmi -Arguments @{AutomaticManagedPagefile=$false}

Since I am at a basic level with PowerShell, I found this helpful.

What is everyone else doing?

  All Credit to Reidar Johansen http://johansenreidar.blogspot.com

 Roger L

 

Boche.net repost: VMware View 5.0 VDI vHardware 8 vMotion Error

 

Thanks to Jason for this post, as I had this happen this morning:

 

I had this message.

 

 

[Screenshot: vMotion error dialog]

 

I tweeted this, @rogerlund: error when I vmotion aaron’s pc between two ESXi 5 hosts. lockerz.com/s/140661269

Jason responded, @jasonboche: Read my blog post from last night RT @rogerlund: error when I vmotion aaron’s pc between two ESXi 5 hosts. http://lockerz.com/s/140661269

I found the following on his blog.

 

VMware View 5.0 VDI vHardware 8 vMotion Error

General awareness/heads up blog post here on something I stumbled on with VMware View 5.0.  A few weeks ago while working with View 5.0 BETA in the lab, I ran into an issue where a Windows 7 virtual machine would not vMotion from one ESXi 5.0 host to another.  The resulting error in the vSphere Client was:

A general system error occurred: Failed to flush checkpoint data

I did a little searching and found similar symptoms in VMware KB 1011971 which speaks to an issue that can arise  when Video RAM (VRAM) is greater than 30MB for a virtual machine. In my case it was greater than 30MB but I could not adjust it due to the fact that it was being managed by the View Connection Server.  At the same time, a VMware source on Twitter volunteered his assistance and quickly came up with some inside information on the issue.  He had me try adding the following line to /etc/vmware/config on the ESXi 5.0 hosts (no reboot required):

migrate.baseCptCacheSize = "16777216"

The fix worked and I was able to vMotion the Windows 7 VM back and forth between hosts.  The information was taken back to Engineering for a KB to be released.  That KB is now available: VMware KB 2005741 vMotion of a virtual machine fails with the error: A general system error occurred: Failed to flush checkpoint data! The new KB article lists the following background information and several workarounds:

Cause

Due to new features with Hardware Version 8 for the WDDM driver, the vMotion display graphics memory requirement has increased. The default pre-allocated buffer may be too small for certain virtual machines with higher resolutions. The buffer size is not automatically increased to account for the requirements of those new features if mks.enable3d is set to FALSE (the default).

Resolution

To work around this issue, perform one of these options:

  • Change the resolution to a single screen of 1280×1024 or smaller before the vMotion.
  • Do not upgrade to Virtual Machine Hardware version 8.
  • Increase the base checkpoint cache size. Doubling it from its default 8 MB to 16 MB (16777216 bytes) should be enough for any single display resolution. If you are using two displays at 1600×1200 each, increase the setting to 20 MB (20971520 bytes). To increase the base checkpoint cache size:
    1. Power off the virtual machine.
    2. Click the virtual machine in the Inventory.
    3. On the Summary tab for that virtual machine, click Edit Settings.
    4. In the virtual machine Properties dialog box, click the Options tab.
    5. Under Advanced, select General and click Configuration Parameters.
    6. Click Add Row.
    7. In the new row, add migrate.baseCptCacheSize to the name column and add 16777216 to the value column.
    8. Click OK to save the change.

    Note: If you don’t want to power off your virtual machine to make this change, you can also add the parameter to the /etc/vmware/config file on the target host. This adds the option to every VMX process that spawns on that host, which happens when vMotion starts a virtual machine on the server.

  • Set mks.enable3d = TRUE for the virtual machine:
    1. Power off the virtual machine.
    2. Click the virtual machine in the Inventory.
    3. On the Summary tab for that virtual machine, click Edit Settings.
    4. In the virtual machine Properties dialog box, click the Options tab.
    5. Under Advanced, select General and click Configuration Parameters.
    6. Click Add Row.
    7. In the new row, add mks.enable3d to the name column and add True to the value column.
    8. Click OK to save the change.

Caution: This workaround increases the overhead memory reservation by 256MB. As such, it may have a negative impact on HA Clusters with strict Admission Control. However, this memory is only used if the 3d application is active. If, for example, Aero Basic and not Aero Glass is used as a window theme, most of the reservation is not used and the memory could be kept available for the ESX host. The reservation still affects HA Admission Control if large multi-monitor setups are used for the virtual machine and if the CPU is older than a Nehalem processor and does not have the SSE 4.1 instruction set. In this case, using 3d is not recommended. The maximum recommended resolution for using 3d, regardless of CPU type and SSE 4.1 support, is 1920×1200 with dual screens.

The permanent fix for this issue did not make it into the recent View 5.0 GA release but I expect it will be included in a future release or patch.

 

I quoted the whole blog, as I thought it was a great post, Thanks Jason!
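One small addition from me: if you would rather not click through the vSphere Client to add the configuration parameter, a hedged PowerCLI sketch for adding it to a powered-off VM is below. The VM name is a placeholder, and the value mirrors the KB's 16 MB example.

# Hedged sketch: add the checkpoint cache size parameter to a powered-off VM
$vm = Get-VM "Win7-View-Desktop"   # hypothetical VM name
New-AdvancedSetting -Entity $vm -Name "migrate.baseCptCacheSize" -Value "16777216" -Confirm:$false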

 

 

Roger L

ESXi, Restart Management Agents Via Console / SSH

I recently had to use the console / SSH to restart the management agents on ESXi. This is not supported, but in this case, if I hadn't had it unlocked, I would have been up a creek.

Here is the page my Google search turned up (thanks to VM-Help.com):


With ESX it is sometimes necessary to run the command service mgmt-vmware restart should you not be able to connect with the VI client, or if you need the host to re-read the esx.conf configuration file because changes you have made at the console are not visible in VirtualCenter. The service command is not available on ESXi, and the supported method is to access the Direct Console User Interface (DCUI) and then select the restart option from there, as shown in the first image.

If you have console access or have enabled SSH on your ESXi host, you can run the command /sbin/services.sh restart to accomplish the same thing, as shown in the second image. This will restart the agents installed in /etc/init.d/, which with a default install includes hostd, ntpd, sfcbd, sfcbd-watchdog, slpd and wsmand. It will also restart the VMware HA agent (AAM) if that has been enabled on the host.
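As a side note, if the host is still reachable through vCenter or directly with PowerCLI, some of the agents (vpxa, for example) can also be bounced without SSH; hostd itself still needs services.sh or the DCUI. A hedged sketch, with the host name as a placeholder:

# Hedged sketch: restart the vpxa agent with PowerCLI instead of SSH
Get-VMHostService -VMHost (Get-VMHost "esxi01.lab.local") |
    Where-Object { $_.Key -eq "vpxa" } | Restart-VMHostService -Confirm:$false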