VMworld 2015 is right around the corner in San Francisco and vBrainstorm.com will be there as an official blogger of VMworld! What can you expect? All the announcements made at the keynotes will be reported and summarized here. We will be meeting with some of the vendors to report on their products. These reports will be blog posts as well as videos. We will also use Periscope for some live video reports so make sure you check us out there. It should be a great week of virtual goodness and fun so stay tuned!
Shawn Cannon – vBrainstorm Blogger
VMware has made version 6.0 of vSphere and its related product lines available today. Here are the links to download. Note: these links require a My VMware account that is licensed for these products.
VMware vCloud Suite 6.0 (You can get ESXi 6.0, vCenter Server 6.0, vSphere Replication 6.0, vSphere Data Protection 6.0, vCenter Site Recovery Manager 6.0, vRealize Orchestrator Appliance 6.0.1 and vRealize Operations Manager 6.0.1 from this link. Virtual SAN is included with ESXi and vCenter Server downloads)
Short and simple right?
This is from the VMware Site.
VMware Announces General Availability of vSphere 6
Today, we are excited to announce the general availability of VMware vSphere 6 along with a slew of other Software-Defined Data Center (SDDC) products including VMware Integrated OpenStack, VMware Virtual SAN 6, VMware vSphere Virtual Volumes, VMware vCloud Suite 6, and VMware vSphere with Operations Management 6.
vSphere 6 is the latest release of the industry-leading virtualization platform and serves as the foundation of the SDDC. This is the largest ever release of vSphere and the first major release of the flagship product in over three years. vSphere 6 is jam-packed with features and innovations that enable users to virtualize any application, including both scale-up and scale-out applications, with confidence. New capabilities include increased scale and performance, breakthrough industry-first availability, storage efficiencies for virtual machines, and simplified management at scale. For more details on the blockbuster features please refer to the vSphere 6 announcement.
If you are interested in learning more about vSphere 6, there are several options:
- Read more about vSphere 6 at the products pages
- Experience vSphere 6 with our free 60-day evaluation
- Take one of the new instructor-led vSphere 6 courses that are now available:
- VMware vSphere: What’s New [V5.5 to V6] explores the newest features and enhancements in VMware vSphere 6 including VMware vCenter Server 6.
- VMware vSphere: Install, Configure, Manage [V6] is intensive hands-on training that focuses on installing, configuring, and managing VMware vSphere 6.
- VMware vSphere: Optimize and Scale [V6] teaches advanced skills for configuring and maintaining a highly available and scalable virtual infrastructure.
Recently at my day job we had some new storage allocated at our recovery site to use for vSphere storage. I was tasked with decommissioning the old datastores. The problem is that my replicated VMs resided on the old storage. Of course I could go into my vSphere replication settings on each VM and just point it to the new datastores and be done with it. That would have taken quite some time to do since the VMs would have to fully replicate again. I wanted to find an easy way to copy the replicated VMs from the old datastores to the new datastores. So I did some Internet searches and found the following blog post: Copy Files Between Datastores – PowerCLI. Dan Hayward posted a useful PowerCLI script that he used to copy ISO files from one datastore to another. I basically adapted this script and changed it to move a VM from an old datastore to another. I could have scripted it and passed in the variables from a CSV file but I wanted to update the vSphere Replication settings one VM at a time. So here is what my script looked like:
#Connect to your vCenter Server first (replace the server name with your own)
Connect-VIServer -Server "vcenter.example.com"
#Sets the old datastore
$oldds = Get-Datastore "OldDatastore"
#Sets the new datastore
$newds = Get-Datastore "NewDatastore"
#Sets the VM folder name
$VMloc = "VMName"
#Maps both datastores as PowerShell drives
New-PSDrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\"
New-PSDrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\"
#Copies the VMDK files from the old datastore to the new one
Copy-DatastoreItem -Recurse -Force -Item olddrive:\$VMloc\$VMloc*.vmdk newdrive:\$VMloc\
Basically the script connects you to your vCenter server, sets the old and new datastore variables, sets the VM Folder name and then does the magic to map the datastores and copy the VMDK files from the old to the new. Having the VMDK files copied over to the new datastores allowed me to use these as my replication seed for each drive when I reconfigured replication settings for the VM. I just updated this file for each VM that I needed to copy to the new datastores.
Obviously this could have been automated even more as I had to do this for over 120 VMs but I am not a scripting expert. I am just thankful for a great blog post from Dan Hayward to help me out! Thanks Dan!
The following blog post is about VMware Virtual SAN. http://www.vmware.com/products/virtual-san
The following post was made with pre-GA beta content.
All performance numbers are subject to final benchmarking results. Please refer to guidance published at GA.
All content and media is from VMware, as part of the blogger program.
Please also read vSphere 6 – Clarifying the misinformation http://blogs.vmware.com/vsphere/2015/02/vsphere-6-clarifying-misinformation.html
Here is the published What’s New: VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Whats_New.pdf
Here is the published VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Datasheet.pdf
- New On-Disk Format
- New delta-disk type vsanSparse
- Performance Based snapshots and clones
VSAN 5.5 to 6.0
- In-Place modular rolling upgrade
- Seamless In-place Upgrade
- Seamless Upgrade Rollback Supported
- Upgrade performed from RVC CLI
- PowerCLI integration for automation and management
Disk Serviceability Functions
- Ability to manage flash-based and magnetic devices.
- Storage consumption models for policy definition
- Default Storage Policies
- Resync Status dashboard in UI
- VM capacity consumption per VMDK
- Disk/Disk group evacuation
- New Caching Architecture for all-flash VSAN
- Virtual SAN Health Services
- Proactive Rebalance
- Fault domains support
- High Density Storage Systems with Direct Attached Storage
- File Services via 3rd party
- Limited support for hardware encryption and checksum
Virtual SAN Performance and Scale Improvements
2x VMs per host
- Larger Consolidation Ratios
- Due to the increase of supported components per host
- 9000 Components per Host
62TB Virtual Disks
- Greater capacity allocations per VMDK
- VMDK >2TB are supported
Snapshots and Clone
- Larger supported capacity of snapshots and clones per VMs
- 32 per Virtual Machine
- Cluster support raised to match vSphere
- Up to 64 nodes per cluster in vSphere
VSAN can scale up to 64 nodes
Enterprise-Class Scale and Performance
VMware Virtual SAN: All-Flash
- Flash-based devices used for caching as well as persistence
- Cost-effective all-flash 2-tier model:
- Cache is 100% write: uses write-intensive, higher grade flash-based devices
- Persistent storage: can leverage lower cost read-intensive flash-based devices
- Very high IOPS: up to 100K IOPS/Host
- Consistent performance with predictable sub-millisecond latency (hybrid: 30K IOPS/Host; all-flash: 100K IOPS/Host)
Virtual SAN Flash Caching Architectures
All-Flash Cache Tier Sizing
Cache tier should have 10% of the anticipated consumed storage capacity:

| Sizing input | Value |
| --- | --- |
| Projected VM space usage | 20GB |
| Projected number of VMs | 1000 |
| Total projected space consumption | 20GB x 1000 = 20,000 GB = 20 TB |
| Target flash cache capacity percentage | 10% |
| Total flash cache capacity required | 20TB x .10 = 2 TB |
- Cache is entirely write-buffer in all-flash architecture
- Cache devices should be high write endurance models: choose 2+ TBW/day (3650+ TBW over 5 years)
- Total cache capacity percentage should be based on use case requirements.
–For general recommendation visit the VMware Compatibility Guide.
–For write-intensive workloads a higher amount should be configured.
–Increase cache size if expecting heavy use of snapshots
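The sizing rule and worked example above are easy to script. A minimal sketch (the VM count and per-VM size are just the example's inputs; the 10% ratio is the general recommendation from the table):

```python
# Sketch of the all-flash cache-tier sizing rule: cache tier should be
# ~10% of the anticipated consumed storage capacity.
def flash_cache_tb(vm_count, gb_per_vm, cache_pct=0.10):
    total_tb = vm_count * gb_per_vm / 1000.0  # GB -> TB, decimal as in the example
    return total_tb * cache_pct

# The worked example: 1000 VMs x 20 GB = 20 TB consumed, 10% cache
print(flash_cache_tb(1000, 20))  # 2.0 (TB)
```

Write-intensive or snapshot-heavy workloads would push the `cache_pct` input above 10%, per the guidance above.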
New On-Disk Format
- Virtual SAN 6.0 introduces a new on-disk format.
- The new on-disk format enables:
–Higher performance characteristics
–Efficient and scalable high performance snapshots and clones
–Online migration to the new format (RVC only)
- The object store will continue to mount the volumes from all hosts in a cluster and presents them as a single shared datastore.
- The upgrade to the new on-disk format is optional; the on-disk format for Virtual SAN 5.5 will continue to be supported
Performance Snapshots and Clones
- Virtual SAN 6.0 new on-disk format introduces a new VMDK type
–Virtual SAN 5.5 snapshots were based on vmfsSparse (redo logs)
- vsanSparse based snapshots are expected to deliver performance comparable to native SAN snapshots.
–vsanSparse takes advantage of the new on-disk format writing and extended caching capabilities to deliver efficient performance.
- All disks in a vsanSparse disk-chain need to be vsanSparse (except base disk).
–Cannot create linked clones of a VM with vsanSparse snapshots on a non-vsan datastore.
–If a VM has existing redo log based snapshots, it will continue to get redo log based snapshots until the user consolidates and deletes all current snapshots.
Flash Based Devices in Virtual SAN Hybrid: all read and write operations always go directly to the flash tier.
Flash based devices serve two purposes in Virtual SAN hybrid architecture
1. Non-volatile Write Buffer (30%)
–Writes are acknowledged when they enter the prepare stage on the flash-based devices.
–Reduces latency for writes
2. Read Cache (70%)
–Cache hits reduce read latency
–Cache misses retrieve data from the magnetic devices
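The 30%/70% split above can be illustrated with a quick calculation; the 400 GB device size below is a hypothetical input, not a sizing recommendation:

```python
# Split a hybrid Virtual SAN flash device into the 30% write buffer
# and 70% read cache described above.
def hybrid_flash_split_gb(device_gb):
    write_buffer_gb = device_gb * 0.30
    read_cache_gb = device_gb * 0.70
    return write_buffer_gb, read_cache_gb

wb, rc = hybrid_flash_split_gb(400)  # e.g. a 400 GB SSD
print(wb, rc)
```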
Flash Based Devices in Virtual SAN All-Flash: read and write operations always go directly to the flash devices.
Flash based devices serve two purposes in Virtual SAN All Flash:
1. Cache Tier
–High endurance flash devices.
–Listed on the VCG
2. Capacity Tier
–Low endurance flash devices
–Listed on the VCG
•1Gb / 10Gb supported
–10Gb shared with NIOC for QoS will support most environments
–If 1Gb, dedicated links for Virtual SAN are recommended
–Layer 3 network configuration supported in 6.0
•Jumbo Frames will provide nominal performance increase
–Enable for greenfield deployments
–Enable in large deployments to reduce CPU overhead
•Virtual SAN supports both VSS & VDS
–NetIOC requires VDS
•Network bandwidth has more impact on host evacuation and rebuild times than on workload performance
High Density Direct Attached Storage
–Manage disks in enclosures – helps enable blade environment
–Flash acceleration provided on the server or in the subsystem
–Data services delivered via the VSAN Data Services and platform capabilities
–Supports a combination of direct attached disks and high density attached disks (SSDs and HDDs) per disk group
Users are expected to configure the HDDAS switch such that each disk is only seen by one host.
–VSAN protects against misconfigured HDDASs (a disk seen by more than one host).
–The owner of a disk group can be explicitly changed by unmounting and restamping the disk group from the new owner.
•If a host that owns a disk group crashes, manual re-stamping can be done on another host.
–Supported HDDASs will be tightly controlled by the HCL (exact list TBD).
•Applies to HDDASs and controllers
FOLLOW the VMware HCL: www.vmware.com/go/virtualsan-hcl
- Virtual SAN 6.0 has a new on disk format for disk groups and exposes a new delta-disk type, so upgrading from 1.0 to 2.0 involves more than upgrading the ESX/VC software.
Upgrades are performed in multiple phases
Phase 1: Fresh deployment of, or upgrade to, vSphere 6.0
Phase 2: Disk format conversion (DFC)
Reformat disk groups
Disk/Disk Group Evacuation
- In Virtual SAN 5.5, in order to remove a disk/disk group without data loss, hosts had to be placed in maintenance mode with the full data evacuation mode from all disk/disk groups.
- Virtual SAN 6.0 introduces the ability to evacuate data from individual disks/disk groups before removing them from the Virtual SAN cluster.
- Supported in the UI, esxcli and RVC.
Check box in the “Remove disk/disk group” UI screen
Virtual SAN 6.0 introduces a new disk serviceability feature to easily map the location of magnetic disks and flash based devices from the vSphere Web Client.
- Light LED on failures
- When a disk hits a permanent error, it can be challenging to find where that disk sits in the chassis to find and replace it.
- When SSD or MD encounters a permanent error, VSAN automatically turns the disk LED on.
- Turn disk LED on/off
- A user might need to locate a disk, so VSAN supports manually turning an SSD or MD LED on/off.
- Marking a disk as SSD
- Some SSDs might not be recognized as SSDs by ESX.
- Disks can be tagged/untagged as SSDs
- Marking a disk as local
- Some SSDs/MDs might not be recognized by ESX as local disks.
- Disks can be tagged/untagged as local disks.
Virtual SAN Usability Improvements
- What-if-APIs (Scenarios)
- Adding functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.
–Reapplying a Policy
Default Storage Policies
- A Virtual SAN Default Profile is automatically created in SPBM when VSAN is enabled on a cluster.
–Default Profiles are utilized by any VM created without an explicit SPBM profile assigned.
- vSphere admins can designate a preferred VM Storage Policy as the default policy for the Virtual SAN cluster
- vCenter can manage multiple vsanDatastores with different sets of requirements.
- Each vsanDatastore can have a different default profile assigned.
Virtual Machine Usability Improvements
- Virtual SAN 6.0 adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.
- Virtual SAN’s free disk space is raw capacity.
–With replication, actual usable space is less.
- The new UI shows real usage
Displayed in the vSphere Web Client and RVC
Virtual Machine >2TB VMDKs
- In VSAN 5.5, the max size of a VMDK was limited to 2TB.
–Max size of a VSAN component is 255GB.
–Max number of stripes per object was 12.
- In VSAN 6.0 the limit has been increased to allow VMDK up to 62TB.
–Objects are still striped at 255GB.
- The 62TB limit is the same as for VMFS and NFS, so VMDKs can be moved between these datastore types.
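Since objects are still striped into components of at most 255GB, the component count for a given VMDK follows directly. A rough sketch (assuming a binary TB-to-GB conversion, and ignoring the extra components added by replicas and witnesses):

```python
import math

# Minimum number of 255 GB components needed to back a VMDK of a given
# size; FTT/replication would multiply this count.
def min_components(vmdk_tb, component_gb=255):
    return math.ceil(vmdk_tb * 1024 / component_gb)

print(min_components(2))   # components for the old 2 TB limit
print(min_components(62))  # components for a maximum-size 62 TB VMDK
```

Even a maximum-size VMDK stays comfortably inside the 9000-components-per-host limit mentioned earlier.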
There it is. I tried to lay it out as best I could.
Want to try it? Try the Virtual SAN Hosted Evaluation https://my.vmware.com/web/vmware/evalcenter?p=vsan6-hol
What’s New in Virtual SAN 6
Learn how to deploy, configure, and manage VMware’s latest hypervisor-converged storage solution.
VMware has announced the first round of vExperts for 2015 and I am pleased to report that Roger of vBrainstorm.com and I have made the list once again! This is my 3rd year in a row being selected so the picture above reflects that. Here is the announcement from VMware as well as a link to it.
First we would like to say thank you to everyone who applied for the 2015 vExpert program.
I’m pleased to announce the list of 2015 vExperts. Each of these vExperts has demonstrated significant contributions to the community and a willingness to share their expertise with others. Contributing is not always blogging or Twitter as there are many public speakers, book authors, script writers, VMUG leaders, VMTN community moderators and internal champions among this group.
I want to personally thank everyone who applied and point out that a “vExpert” is not a technical certification or even a general measure of VMware expertise. The judges selected people who were particularly engaged with their community and who had developed a substantial personal platform of influence in those communities. There were a lot of very smart, very accomplished people, even VCDXs, that weren’t named as vExpert this year.
If you feel you were passed over in error, that’s entirely possible. The judges may have overlooked or misinterpreted what you wrote in your application. Email us at [email protected] and we can discuss your situation. We looked at all of the 2014 activities to determine the voting results.
We will open the second-half 2015 applications soon, which will only allow for two voting periods this year rather than the three we had last year.
If you were selected as a vExpert 2015, we will be conducting the on-boarding throughout the next few weeks so hold tight and expect future communication from us soon. You must successfully be enrolled in our private vExpert community to be listed in the vExpert directory and to be alerted to opportunities like the beta programs and complimentary licenses that we offer to vExperts. We will provide instructions to gain access to the private forum and the vExpert directory in the next communication via email. We will use the email address provided in your vExpert application.
Congratulations to all the vExperts, new and returning. We’re looking forward to working with you. Command + F away and find your name if you can’t wait for the welcome email.
— the VMware Social Media & Community Team
- Enhanced virtual disk format support
- Ability to hot configure FT
- Greatly increased FT host compatibility
- Up to 4 vCPUs per VM
- Up to 4 FT-protected VMs or 8 FT-protected vCPUs per host (10Gb networking required)
- Support for vStorage APIs for Data Protection (VADP)
- API for non-disruptive snapshots
- Source and Destination VM each have independent vmdk files
- Source and Destination VM are Allowed to be on different datastores
Increased vSphere Maximums
- 64 Hosts per Cluster
- 8000 Virtual Machines per Cluster
- 480 CPUs
- 12 TB RAM
- 1000 Virtual Machines Per Host
- 128 vCPUs
- 4 TB RAM to 12 TB RAM (depending on partner)
- Hot-add RAM now vNUMA aware
- WDDM 1.1 GDI acceleration features
- xHCI 1.0 controller compatible with OS X 10.8+ xHCI driver
- Serial and parallel port enhancements
- A virtual machine can now have a maximum of 32 serial ports
- Serial and parallel ports can now be removed
New ESXCLI Commands
- Now possible to use ESXCLI commands to:
- Create a new local user
- List local user accounts
- Remove local user account
- Modify local user account
- List permissions defined on the host
- Set / remove permission for individual users or user groups
- Two Configurable Parameters
- Can set the maximum allowed failed login attempts (10 by default)
- Can set lockout duration period (2 minutes by default)
- Configurable via vCenter Host Advanced System Settings
- Available for SSH and vSphere Web Services SDK
- DCUI and Console Shell are not locked
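The two parameters above interact in a simple way; the toy model below sketches the default behavior (10 failed attempts, 2-minute lockout) with an injected clock. The class name and structure are illustrative only, not VMware's implementation.

```python
# Toy model of ESXi 6.0 account lockout: too many failed logins lock the
# account, and the lock expires after a configurable duration.
class LockoutPolicy:
    def __init__(self, max_failures=10, lockout_seconds=120):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = 0
        self.locked_until = None

    def attempt(self, password_ok, now):
        """Return True if a login at time `now` (seconds) succeeds."""
        if self.locked_until is not None and now < self.locked_until:
            return False  # still inside the lockout window
        if password_ok:
            self.failures = 0
            self.locked_until = None
            return True
        self.failures += 1
        if self.failures >= self.max_failures:
            self.locked_until = now + self.lockout_seconds
            self.failures = 0
        return False
```

On a real host the thresholds are set in the vCenter Host Advanced System Settings rather than in code.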
Complexity Rules via Advanced Settings
- No editing of PAM config files on the host required anymore
- Change default password complexity rules using VIM API
- Configurable via vCenter Host Advanced System Settings
Improved Auditability of ESXi Admin Actions
- In 6.0, all actions taken at vCenter against an ESXi server now show up in the ESXi logs with the vCenter username
- Example log entry: [user=vpxuser:CORP\Administrator]
Enhanced Microsoft Clustering (MSCS)
- Support for Windows 2012 R2 and SQL 2012
- Failover Clustering and AlwaysOn Availability Groups
- IPV6 Support
- PVSCSI and SCSI controller support
- vMotion Support
- Clustering across physical hosts (CAB) with Physical Compatibility Mode RDMs
- Supported on Windows 2008, 2008 R2, 2012 and 2012 R2
vCenter Server Features – Enhanced Capabilities
Hosts per VC
Powered-On VMs per VC
Hosts per Cluster
VMs per Cluster
Platform Services Controller
The Platform Services Controller takes vCenter beyond just Single Sign-On. It groups:
- Single Sign-On (SSO)
- Certificate Authority
Linked Mode Comparison
- Single Inventory View
- Single Inventory Search
- Roles & Permissions
Certificate Lifecycle Management for vCenter and ESXi
VMware Certificate Authority (VMCA)
Provisions each ESXi host, each vCenter Server and vCenter Server service with certificates that are signed by VMCA
VMware Endpoint Certificate Service (VECS)
Stores Certs and Private Keys for vCenter Services
vCenter Server 6.0 – VMCA
- During installation, VMCA automatically creates a self-signed certificate
- This is a CA certificate, capable of issuing other certificates
- All solutions and endpoint certificates are created (and trusted) from this self-signed CA certificate
- Can replace the default self-signed CA certificate created during installation
- Requires a CSR issued from VMCA to be used in an Enterprise/Commercial CA to generate a new Issuing Certificate
- Requires replacement of all issued default certificates after implementation
Certificate Replacement Options for vCenter Server
- Default installed certificates
- Self-signed VMCA CA certificate as Root
- Possible to regenerate these on demand easily
- Replace VMCA CA certificates with a new CA certificate from the Enterprise PKI
- On removal of the old VMCA CA certificate, all old certificates must be regenerated
- Disable VMCA as CA
- Provision custom leaf certificates for each solution, user and endpoint
- More complicated, for highly security conscious customers
Cross vSwitch vMotion
- Transparent operation to the guest OS
- Works across different types of virtual switches
- vSS to vSS
- vSS to vDS
- vDS to vDS
- Requires L2 network connectivity
- Does not change the IP of the VM
- Transfers vDS port metadata
Cross vCenter vMotion
- Simultaneously changes compute, storage, network, and vCenter Server
- vMotion without shared storage
- Increased scale
- Pool resources across vCenter servers
- Targeted topologies
- vCenter 6.0 and greater
- SSO Domain
- Same SSO domain to use the UI
- Different SSO domain possible if using API
- 250 Mbps network bandwidth per vMotion operation
- L2 network connectivity on VM portgroups
- IP addresses are not updated
- VM UUID maintained across vCenter server instances
- Not the same as MoRef or BIOS UUID
- Data Preservation
- Events, Alarms, Tasks History
- HA/DRS Settings
- Affinity/Anti-Affinity Rules
- Automation level
- Start-up priority
- Host isolation response
- VM Resource Settings
- MAC Address of virtual NIC
- MAC Addresses preserved across vCenters
- Always unique within a vCenter
- Not reused when VM leaves vCenter
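The 250 Mbps-per-operation requirement above turns into a quick capacity check for a given uplink; the link speeds below are just common examples.

```python
# How many concurrent vMotion operations a link can sustain at the
# 250 Mbps-per-vMotion minimum.
def max_concurrent_vmotions(link_mbps, per_vmotion_mbps=250):
    return link_mbps // per_vmotion_mbps

print(max_concurrent_vmotions(1000))   # on a 1 GbE uplink
print(max_concurrent_vmotions(10000))  # on a 10 GbE uplink
```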
Long Distance vMotion
- Cross-continental distances – up to 100ms RTTs
- Maintain standard vMotion guarantees
- Does not require VVOLs
- Use Cases:
- Permanent migrations
- Disaster avoidance
- Multi-site load balancing
- Follow the sun
Increased vMotion Network Flexibility
- vMotion network will cross L3 boundaries
- vMotion can now use its own TCP/IP stack
Content Library Overview
- Simple content management
- VM templates
- ISO images
- Store and manage content
- One central location to manage all content
- Beyond templates within vCenter
- Support for other file types
- Share content
- Store once, share many times
- vCenter -> vCenter
- vCloud Director -> vCenter
- Consume content
- Deploy templates to a host or a cluster
Client Overview and Web client Changes
- It’s still here
- Direct Access to hosts
- VUM remediation
- New features in vSphere 5.1 and newer are only available in the web client
- Added support for virtual hardware versions 10 and 11 *read only*
vSphere Web Client
- Improved login time
- Faster right click menu load
- Faster performance charts
- Recent Tasks moved to bottom
- Flattened right click menus
- Deep lateral linking
- Major Performance Improvements
- Screen by screen code optimization
- Login now 13x faster
- Right click menu now 4x faster
- Most tasks end to end are 50+% faster
- Performance charts
- Charts are available and usable in less than half the time
- VMRC integration
- Advanced virtual machine operations
- Usability Improvements
- Can get anywhere in one click
- Right click menu has been flattened
- Recent tasks are back at the bottom
- Dockable UI
vCenter Cluster Support
vCenter Server is now supported running in a Microsoft cluster.
That’s all of the changes we were presented with from VMware. What a ton of changes, I will dig into these more soon.
Update: a post, vSphere 6 – Clarifying the misinformation, has been posted to clarify any changes that have happened or will happen between beta and this post. I did my best to validate that my information is correct.
Make sure to follow the Requirements Here
We will be showing off 4 vCPU Fault Tolerance.
Log in to the vCenter 6 vSphere Web Client.
Click on Hosts and Clusters.
Select your first host.
Click on Manage
Click on Networking
This pops up.
Select Physical Network Adapter, and Click Next.
Select new Standard Switch, and Click Next.
Repeat on all other hosts in the cluster.
Select Host one again.
Click on Manage
Click on Networking
Click on VMkernel Adapters
This pops up.
Click on Next.
Click Browse under Select an existing standard switch.
Select vSwitch one, and click OK.
In Network Label, enter a name that fits your needs. In this case I will use Fault Tolerance.
Enter a VLAN ID if required; since this is a lab, I will leave it blank.
Check Fault Tolerance Logging, and click next.
Enter in Static IP Information, or DHCP if you have a scope setup.
Click Next, and finish.
Repeat on all other hosts in the cluster. Make sure each has a different IP Address.
Next we enable Fault Tolerance on the VM.
I have done this in the video.
I did not have a 10GB network, so we could not actually test and show the results.