All posts by Roger Lund

VMware and Storage crazy man, vExpert, MN VMUG leader

Getting Opvizor Working by Michael White is worth a read.

Michael White, another vExpert, did a great write-up on Opvizor (www.opvizor.com).

Opvizor has engaged the VMware vExpert program to let us dive into the product. Michael’s write-up shows how to install it.

 

Getting Opvizor Working – and is it interesting or what!

 

“If you watch the video on their home page you will see what I did and perhaps be as interested as I was.  Essentially their product compares your environment against best practices from VMware and or NetApp and then will let you know how your environment differs and provide you with the info to fix it.”

 

He then walks you through the installation.

I will dive into the inside of Opvizor in a follow-up post.

Roger Lund

VMware Virtual SAN 6.0

The following blog post is about VMware VSAN (Virtual SAN): http://www.vmware.com/products/virtual-san

 

This post was written with pre-GA, beta content.

All performance numbers are subject to final benchmarking results; please refer to guidance published at GA.

All content and media are from VMware, as part of the blogger program.

Please also read vSphere 6 – Clarifying the misinformation http://blogs.vmware.com/vsphere/2015/02/vsphere-6-clarifying-misinformation.html

 

Here is the published What’s New: VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Whats_New.pdf

Here is the published VMware Virtual SAN 6.0 datasheet: http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Datasheet.pdf

 

What’s New?

 

[Image: vsan01]

 

Disk Format

  • New On-Disk Format
  • New delta-disk type vsanSparse
  • Performance-based snapshots and clones

VSAN 5.5 to 6.0

  • In-Place modular rolling upgrade
  • Seamless In-place Upgrade
  • Seamless Upgrade Rollback Supported
  • Upgrade performed from RVC CLI
  • PowerCLI integration for automation and management

 

Disk Serviceability Functions

  • Ability to manage flash-based and magnetic devices.
  • Storage consumption models for policy definition
  • Default Storage Policies
  • Resync Status dashboard in UI
  • VM capacity consumption per VMDK
  • Disk/Disk group evacuation

VSAN Platform

  • New Caching Architecture for all-flash VSAN
  • Virtual SAN Health Services
  • Proactive Rebalance
  • Fault domains support
  • High Density Storage Systems with Direct Attached Storage
  • File Services via 3rd party
  • Limited support for hardware encryption and checksum

 

Virtual SAN Performance and Scale Improvements

 

 

2x VMs per host

  • Larger Consolidation Ratios
  • Due to an increase in supported components per host
  • 9000 Components per Host

 

62TB Virtual Disks

  • Greater capacity allocations per VMDK
  • VMDKs >2TB are supported

Snapshots and Clones

  • Larger supported capacity of snapshots and clones per VMs
  • 32 per Virtual Machine

Host Scalability

  • Cluster support raised to match vSphere
  • Up to 64 nodes per cluster in vSphere

VSAN can scale up to 64 nodes

 

Enterprise-Class Scale and Performance

[Image: vsan02]

VMware Virtual SAN : All-Flash

Flash-based devices used for caching as well as persistence

Cost-effective all-flash two-tier model:
  • Cache is 100% write: uses write-intensive, higher-grade flash-based devices
  • Persistent storage: can leverage lower-cost, read-intensive flash-based devices

Very high IOPS: up to 100K IOPS per host
Consistent performance with sub-millisecond latencies

 

[Image: vsan05]

 

 

Hybrid: 30K IOPS per host. All-flash: 100K IOPS per host with predictable sub-millisecond latency.

 

Virtual SAN Flash Caching Architectures

 

[Image: vsan06]

 

 

All-Flash Cache Tier Sizing

The cache tier should have 10% of the anticipated consumed storage capacity.

Measurement Requirement                    Value
Projected VM space usage                   20 GB
Projected number of VMs                    1,000
Total projected space consumption          20 GB x 1,000 = 20,000 GB = 20 TB
Target flash cache capacity percentage     10%
Total flash cache capacity required        20 TB x 0.10 = 2 TB

 

  • Cache is entirely write-buffer in the all-flash architecture
  • Cache devices should be high write endurance models: choose 2+ TBW/day, or 3650+ TBW over 5 years
  • Total cache capacity percentage should be based on use case requirements.

–For general recommendations, visit the VMware Compatibility Guide.

–For write-intensive workloads, a higher amount should be configured.

–Increase cache size if expecting heavy use of snapshots. A worked example of the 10% rule follows.
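To make the 10% sizing rule concrete, here is a minimal PowerShell sketch of the arithmetic from the table above; the VM count, per-VM footprint, and cache percentage are the example values from the table, not recommendations.

```powershell
# Example values from the sizing table above
$vmCount      = 1000    # projected number of VMs
$gbPerVm      = 20      # projected space usage per VM (GB)
$cachePercent = 0.10    # target flash cache capacity percentage

$totalConsumedGB = $vmCount * $gbPerVm              # 20,000 GB = 20 TB
$cacheTierGB     = $totalConsumedGB * $cachePercent # 2,000 GB = 2 TB

"Projected consumption: {0:N0} GB" -f $totalConsumedGB
"Required cache tier:   {0:N0} GB (~{1} TB)" -f $cacheTierGB, ($cacheTierGB / 1000)
```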

 

New On-Disk Format

 

  • Virtual SAN 6.0 introduces a new on-disk format.
  • The new on-disk format enables:

–Higher performance characteristics

–Efficient and scalable high performance snapshots and clones

–Online migration to the new format (RVC only)

  • The object store will continue to mount the volumes from all hosts in a cluster and presents them as a single shared datastore.
  • The upgrade to the new on-disk format is optional; the on-disk format for Virtual SAN 5.5 will continue to be supported.

 

Performance Snapshots and Clones

 

  • Virtual SAN 6.0’s new on-disk format introduces a new VMDK type

–Virtual SAN 5.5 snapshots were based on vmfsSparse (redo logs)

  • vsanSparse based snapshots are expected to deliver performance comparable to native SAN snapshots.

–vsanSparse takes advantage of the new on-disk format writing and extended caching capabilities to deliver efficient performance.

  • All disks in a vsanSparse disk-chain need to be vsanSparse (except base disk).

–Cannot create linked clones of a VM with vsanSparse snapshots on a non-vsan datastore.

–If a VM has existing redo log based snapshots, it will continue to get redo log based snapshots until the user consolidates and deletes all current snapshots.

 

 

 

 

Hardware Requirements

 

[Image: vsan03]

 

In the Virtual SAN hybrid architecture, all read and write operations always go directly to the flash tier. Flash-based devices serve two purposes:

1. Non-volatile write buffer (30%)
–Writes are acknowledged when they enter the prepare stage on the flash-based devices.
–Reduces latency for writes.

2. Read cache (70%)
–Cache hits reduce read latency.
–A cache miss retrieves data from the magnetic devices.

 

In the Virtual SAN all-flash architecture, read and write operations always go directly to the flash devices. Flash-based devices serve two purposes:

1. Cache tier
–High-endurance flash devices.
–Listed on the VCG.

2. Capacity tier
–Low-endurance flash devices.
–Listed on the VCG.
Network

•1Gb / 10Gb supported
–10Gb shared with NIOC for QoS will support most environments
–If 1Gb, dedicated links for Virtual SAN are recommended
–Layer 3 network configuration is supported in 6.0

•Jumbo frames will provide a nominal performance increase – see the sketch below
–Enable for greenfield deployments
–Enable in large deployments to reduce CPU overhead

•Virtual SAN supports both VSS and VDS
–NetIOC requires VDS

•Network bandwidth has more impact on host evacuation and rebuild times than on workload performance
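If you script this, a minimal PowerCLI sketch for enabling jumbo frames on a standard switch and its VMkernel port might look like the following; the host name, vSwitch1, and vmk2 are placeholder values for your environment.

```powershell
# Connect to vCenter first: Connect-VIServer -Server vcenter.example.com
$vmhost = Get-VMHost -Name "esxi01.example.com"

# Raise the MTU on the standard switch carrying VSAN traffic (example: vSwitch1)
Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" |
    Set-VirtualSwitch -Mtu 9000 -Confirm:$false

# Raise the MTU on the VSAN VMkernel adapter to match (example: vmk2)
Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2" |
    Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false
```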

 

High Density Direct Attached Storage

 

[Image: vsan04]

 

 

–Manage disks in enclosures – helps enable blade environments
–Flash acceleration provided on the server or in the subsystem
–Data services delivered via the VSAN data services and platform capabilities
–Supports a combination of direct-attached disks and high-density attached disks (flash devices and magnetic devices) per disk group

 

Users are expected to configure the HDDAS switch such that each disk is only seen by one host.

–VSAN protects against misconfigured HDDASs (a disk is seen by more than 1 host).
–The owner of a disk group can be explicitly changed by unmounting and re-stamping the disk group from the new owner.

•If a host that owns a disk group crashes, manual re-stamping can be done on another host.

–Supported HDDASs will be tightly controlled by the HCL (exact list TBD).
•Applies to HDDASs and controllers.

 

Follow the VMware HCL: www.vmware.com/go/virtualsan-hcl

 

Upgrade

 

  • Virtual SAN 6.0 has a new on-disk format for disk groups and exposes a new delta-disk type, so upgrading from the 1.0 to the 2.0 format involves more than upgrading the ESXi/vCenter software.

Upgrades are performed in multiple phases

Phase 1: Fresh deployment of, or upgrade to, vSphere 6.0

vCenter Server

ESXi Hypervisor

Phase 2: Disk format conversion (DFC)

Reformat disk groups

Object upgrade

Disk/Disk Group Evacuation

  • In Virtual SAN 5.5, removing a disk/disk group without data loss required placing the host in maintenance mode with full data evacuation from all disk/disk groups.
  • Virtual SAN 6.0 introduces the ability to evacuate data from an individual disk/disk group before removing it from Virtual SAN.
  • Supported in the UI, esxcli and RVC.

Check box in the “Remove disk/disk group” UI screen

[Image: vsanexample]

 

Disk Serviceability

 

Virtual SAN 6.0 introduces a new disk serviceability feature to easily map the location of magnetic disks and flash based devices from the vSphere Web Client.

[Image: vsanexample2]

  • Light LED on failures – When a disk hits a permanent error, it can be challenging to find where that disk sits in the chassis to replace it; when an SSD or MD encounters a permanent error, VSAN automatically turns the disk LED on.
  • Turn disk LED on/off – A user might need to locate a disk, so VSAN supports manually turning an SSD or MD LED on/off.
  • Marking a disk as SSD – Some SSDs might not be recognized as SSDs by ESXi; disks can be tagged/untagged as SSDs, as sketched below.
  • Marking a disk as local – Some SSDs/MDs might not be recognized by ESXi as local disks; disks can be tagged/untagged as local disks.
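Tagging can also be done from the command line. Here is a hedged sketch assuming the 6.0-era SATP claim-rule method, driven through PowerCLI's Get-EsxCli -V2 interface (which requires a newer PowerCLI release); the host name and device ID are placeholders.

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# Add a claim rule that tags the device as SSD (device ID is a placeholder)
$esxcli.storage.nmp.satp.rule.add.Invoke(@{
    satp   = "VMW_SATP_LOCAL"
    device = "naa.xxxxxxxxxxxxxxxx"
    option = "enable_ssd"          # use "enable_local" to tag as a local disk
})

# Reclaim the device so the new rule takes effect
$esxcli.storage.core.claiming.reclaim.Invoke(@{device = "naa.xxxxxxxxxxxxxxxx"})
```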

 

Virtual SAN Usability Improvements

  • What-if APIs (scenarios)
  • Adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.

–Creating Policies

–Reapplying a Policy

[Image: vsanexample3]

 

Default Storage Policies

 

  • A Virtual SAN Default Profile is automatically created in SPBM when VSAN is enabled on a cluster.

–Default Profiles are utilized by any VM created without an explicit SPBM profile assigned.

vSphere admins can designate a VM Storage Policy as the preferred default policy for the Virtual SAN cluster.

 

[Image: vsanexample4]

  • vCenter can manage multiple vsanDatastores with different sets of requirements.
  • Each vsanDatastore can have a different default profile assigned.
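As a rough illustration of working with these policies, PowerCLI 6.0 ships SPBM cmdlets; a minimal sketch, where the policy name and the FTT=2 value are examples rather than recommendations:

```powershell
# Build a rule set requiring two host failures to tolerate (example value)
$rule    = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 2
$ruleSet = New-SpbmRuleSet -AllOfRules $rule

# Create the policy; it can then be designated the cluster's preferred default in the Web Client
New-SpbmStoragePolicy -Name "VSAN-FTT2" -Description "Tolerate two host failures" -AnyOfRuleSets $ruleSet
```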

Virtual Machine Usability Improvements

  • Virtual SAN 6.0 adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.
  • Virtual SAN’s free disk space is raw capacity.

–With replication, actual usable space is less.

  • New UI shows real usage on

–Flash Devices

–Magnetic Disks

Displayed in the vSphere Web Client and RVC

[Image: vsanexample5]

 

Virtual Machine >2TB VMDKs

 

  • In VSAN 5.5, the max size of a VMDK was limited to 2TB.

–Max size of a VSAN component is 255GB.

–Max number of stripes per object was 12.

  • In VSAN 6.0 the limit has been increased to allow VMDKs of up to 62TB.

–Objects are still striped at 255GB.

  • The 62TB limit is the same as for VMFS and NFS, so VMDK sizes are consistent across datastore types.

 

[Image: vsanexample6]

 

There it is. I tried to lay it out as best I could.

Want to try it? Try the Virtual SAN Hosted Evaluation https://my.vmware.com/web/vmware/evalcenter?p=vsan6-hol

 

What’s New in Virtual SAN 6

Learn how to deploy, configure, and manage VMware’s latest hypervisor-converged storage solution.


 

Roger Lund

vSphere 6 Configuration of Fault Tolerance

Make sure to follow the Requirements Here

We will be showing off 4 vCPU Fault Tolerance.

Log in to the vCenter Server 6 Web Client.

Click on Hosts and Clusters.

Select your first host.

Click on Manage

Click on Networking

Click on Add Host Networking

This dialog pops up.

Select Physical Network Adapter, and Click Next.

Select new Standard Switch, and Click Next.

Click + to add an adapter.

 Select vmnic2

Click Next

Click Finish.

Repeat on all other hosts in the cluster.

Select Host one again.

Click on Manage

Click on Networking

Click on VMkernel Adapters

Click on Add Host Networking

This dialog pops up.

 Click on Next.

Click Browse under “Select an existing standard switch”.

Select vSwitch1 and click OK.

Click Next.

In Network Label, enter a name that fits your needs. In this case I will use Fault Tolerance.
Enter in VLAN ID if required, since this is a lab, I will leave it blank.
Check Fault Tolerance Logging, and click next.

Enter in Static IP Information, or DHCP if you have a scope setup.

Click Next, and finish.

Repeat on all other hosts in the cluster. Make sure each has a different IP Address.
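The same per-host steps can also be scripted with PowerCLI. A minimal sketch, assuming vmnic2 is free on every host; the cluster name, NIC, and 10.10.10.0/24 addressing are placeholder values:

```powershell
# Connect-VIServer -Server vcenter.example.com
$hosts = Get-Cluster -Name "Lab-Cluster" | Get-VMHost | Sort-Object Name
$i = 10

foreach ($vmhost in $hosts) {
    # New standard switch backed by vmnic2 (matches the manual steps above)
    $vswitch = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic "vmnic2"

    # VMkernel port with Fault Tolerance logging enabled; unique IP per host
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch `
        -PortGroup "Fault Tolerance" -IP "10.10.10.$i" -SubnetMask "255.255.255.0" `
        -FaultToleranceLoggingEnabled:$true
    $i++
}
```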

Next we enable Fault Tolerance on the VM.

I have done this in the video.

I did not have a 10Gb network, so we could not actually test and show the results.

Roger Lund

vCenter Server 6.0 New Features

Today VMware announces the release of vCenter Server 6.0, part of the largest release in VMware history. Note: these facts are pre-release, so I will update anything that changes if necessary. All facts are directly from VMware.

vCenter Server Features – Enhanced Capabilities

Metric                    Windows   Appliance
Hosts per VC              1,000     1,000
Powered-On VMs per VC     10,000    10,000
Hosts per Cluster         64        64
VMs per Cluster           8,000     8,000
Linked Mode

Platform Services Controller 

The Platform Services Controller takes vCenter beyond just Single Sign-On. It groups:

  • Single Sign-On (SSO)
  • Licensing
  • Certificate Authority
Two deployment models:

Embedded
vCenter Server and the Platform Services Controller in one virtual machine
– Recommended for small deployments with fewer than two SSO-integrated solutions

Centralized
vCenter Server and the Platform Services Controller in their own virtual machines
– Recommended for most deployments with two or more SSO-integrated solutions

Linked Mode Comparison

Feature                   vSphere 5.5      vSphere 6.0
Windows                   Yes              Yes
Appliance                 No               Yes
Single Inventory View     Yes              Yes
Single Inventory Search   Yes              Yes
Replication Technology    Microsoft ADAM   Native
Roles & Permissions       Yes              Yes
Licenses                  Yes              Yes
Policies                  No               Yes
Tags                      No               Yes

Certificate Lifecycle Management for vCenter and ESXi

VMware Certificate Authority (VMCA)

Provisions each ESXi host, each vCenter Server and vCenter Server service with certificates that are signed by VMCA

VMware Endpoint Certificate Service (VECS)

Stores certificates and private keys for vCenter services

vCenter Server 6.0 – VMCA

Root CA

  • During installation, VMCA automatically creates a self-signed certificate
  • This is a CA certificate, capable of issuing other certificates
  • All solutions and endpoint certificates are created (and trusted) from this self-signed CA certificate

Issuer CA

  • Can replace the default self-signed CA certificate created during installation
  • Requires a CSR issued from VMCA to be used in an Enterprise/Commercial CA to generate a new Issuing Certificate
  • Requires replacement of all issued default certificates after implementation

Certificate Replacement Options for vCenter Server

VMCA Default

  • Default installed certificates
  • Self-signed VMCA CA certificate as Root
  • Possible to regenerate these on demand easily

VMCA Enterprise

  • Replace VMCA CA certificates with a new CA certificate from the Enterprise PKI
  • On removal of the old VMCA CA certificate, all old certificates must be regenerated

Custom

  • Disable VMCA as CA
  • Provision custom leaf certificates for each solution, user and endpoint
  • More complicated, for highly security conscious customers

Cross vSwitch vMotion

  • Transparent operation to the guest OS
  • Works across different types of virtual switches
  • vSS to vSS
  • vSS to vDS
  • vDS to vDS
  • Requires L2 network connectivity
  • Does not change the IP of the VM
  • Transfers vDS port metadata

Cross vCenter vMotion

  • Simultaneously changes
  • Compute
  • Storage
  • Network
  • vCenter
  • vMotion without shared storage
  • Increased scale
  • Pool resources across vCenter servers
  • Targeted topologies
  • Local
  • Metro
  • Cross-continental

Requirements

  • vCenter 6.0 and greater
  • SSO Domain
  • Same SSO domain to use the UI
  • Different SSO domain possible if using API
  • 250 Mbps network bandwidth per vMotion operation
  • L2 network connectivity on VM portgroups
  • IP addresses are not updated
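For the API route, later PowerCLI releases (6.5 and newer) exposed cross-vCenter vMotion through Move-VM. A hedged sketch; the server names, datastore, and port group are placeholder values:

```powershell
# Connected to both vCenters: Connect-VIServer vc01.example.com, vc02.example.com
$vm       = Get-VM -Name "app01" -Server "vc01.example.com"
$destHost = Get-VMHost -Name "esxi10.example.com" -Server "vc02.example.com"
$destDs   = Get-Datastore -Name "vsanDatastore" -Server "vc02.example.com"
$destPg   = Get-VDPortgroup -Name "VM-Network" -Server "vc02.example.com"

# Simultaneously changes compute, storage, network, and vCenter
Move-VM -VM $vm -Destination $destHost -Datastore $destDs `
    -NetworkAdapter (Get-NetworkAdapter -VM $vm) -PortGroup $destPg
```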

Features

  • VM UUID maintained across vCenter server instances
  • Not the same as MoRef or BIOS UUID
  • Data Preservation
  • Events, Alarms, Tasks History
  • HA/DRS Settings
  • Affinity/Anti-Affinity Rules
  • Automation level
  • Start-up priority
  • Host isolation response
  • VM Resource Settings
  • Shares
  • Reservations
  • Limits
  • MAC Address of virtual NIC
  • MAC Addresses preserved across vCenters
  • Always unique within a vCenter
  • Not reused when VM leaves vCenter

Long Distance vMotion 

  • Cross-continental distances – up to 100ms RTTs
  • Maintain standard vMotion guarantees
  • Does not require VVOLs
  • Use Cases:
  • Permanent migrations
  • Disaster avoidance
  • Multi-site load balancing
  • Follow the sun

Increased vMotion Network Flexibility

 

  • vMotion networks can now cross L3 boundaries
  • vMotion can now use its own TCP/IP stack
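A hedged sketch of binding a new VMkernel interface to the dedicated vMotion netstack through esxcli (driven here via Get-EsxCli -V2); this assumes the 6.0 esxcli network ip interface options, and the host, interface, port group, and addressing are placeholders:

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# Create vmk3 on the vMotion port group, bound to the dedicated vMotion TCP/IP stack
$esxcli.network.ip.interface.add.Invoke(@{
    interfacename = "vmk3"
    portgroupname = "vMotion"
    netstack      = "vmotion"
})

# Give it an address (static IPv4 shown as an example)
$esxcli.network.ip.interface.ipv4.set.Invoke(@{
    interfacename = "vmk3"
    type          = "static"
    ipv4          = "192.168.50.11"
    netmask       = "255.255.255.0"
})
```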

Content Library Overview

  • Simple content management
  • VM templates
  • vApps
  • ISO images
  • Scripts
  • Store and manage content
  • One central location to manage all content
  • Beyond templates within vCenter
  • Support for other file types
  • Share content
  • Store once, share many times
  • Publish/Subscribe
  • vCenter  -> vCenter
  • vCloud Director -> vCenter
  • Consume content
  • Deploy templates to a host or a cluster

Client Overview and Web Client Changes

 Client Comparison

vSphere Client

  •  It’s still here
  • Direct Access to hosts
  • VUM remediation
  • New features in vSphere 5.1 and newer are only available in the web client
  • Added support for virtual hardware versions 10 and 11 *read only*

vSphere Web Client

  • Performance
  • Improved login time
  • Faster right click menu load
  • Faster performance charts

Usability

  • Recent Tasks moved to bottom
  • Flattened right click menus
  • Deep lateral linking
  • Major Performance Improvements

UI

  • Screen by screen code optimization
  • Login now 13x faster
  • Right click menu now 4x faster
  • Most tasks end to end are 50+% faster
  • Performance charts
  • Charts are available and usable in less than half the time
  • VMRC integration
  • Advanced virtual machine operations
  • Usability Improvements
  • Can get anywhere in one click
  • Right click menu has been flattened
  • Recent tasks are back at the bottom
  • Dockable UI

vCenter Cluster Support



vCenter Server is now supported running in a Microsoft cluster.




That’s all of the changes we were presented with from VMware. What a ton of changes; I will dig into these more soon.

 

 

Update: the post vSphere 6 – Clarifying the misinformation has been published to clarify any changes that have happened or will happen between the beta and this post. I did my best to validate that my information is correct.

 

Roger L

vSphere 6.0 Platform New Features

Today VMware announces the release of vSphere 6.0, the largest release in VMware history. Note: these facts are pre-release, so I will update anything that changes if necessary. All facts are directly from VMware.

Increased vSphere Maximums

  • 64 Hosts per Cluster
  • 8000 Virtual Machines per Cluster
  • 480 CPUs
  • 12 TB RAM
  • 1000 Virtual Machines Per Host
Virtual Machine Compatibility ESXi 6 (vHW 11)
 
  • 128 vCPUs
  • 4 TB RAM to 12 TB RAM (depending on partner)
  • Hot-add RAM now vNUMA aware
  • WDDM 1.1 GDI acceleration features
  • xHCI 1.0 controller compatible with OS X 10.8+ xHCI driver
  • Serial and parallel port enhancements
  • A virtual machine can now have a maximum of 32 serial ports
  • Serial and parallel ports can now be removed
Local ESXi Account and Password Management Enhancements
 

New ESXCLI Commands

  • Now possible to use ESXCLI commands to:
  • Create a new local user
  • List local user accounts
  • Remove local user account
  • Modify local user account
  • List permissions defined on the host
  • Set / remove permission for individual users or user groups
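A hedged sketch of those namespaces driven remotely through PowerCLI's Get-EsxCli -V2 interface (which requires a newer PowerCLI release); the host name, username, password, and role are placeholders:

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# Create a local user (equivalent to: esxcli system account add)
$esxcli.system.account.add.Invoke(@{
    id                   = "svc-backup"
    password             = "S0me-Strong-Pass!"
    passwordconfirmation = "S0me-Strong-Pass!"
})

# List local accounts, then grant the new user read-only access
$esxcli.system.account.list.Invoke()
$esxcli.system.permission.set.Invoke(@{id = "svc-backup"; role = "ReadOnly"})
```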

Account Lockout

  • Two Configurable Parameters
  • Can set the maximum allowed failed login attempts (10 by default)
  • Can set lockout duration period (2 minutes by default)
  • Configurable via vCenter Host Advanced System Settings
  • Available for SSH and vSphere Web Services SDK
  • DCUI and Console Shell are not locked

Complexity Rules via Advanced Settings

  • No editing of PAM config files on the host required anymore
  • Change default password complexity rules using VIM API
  • Configurable via vCenter Host Advanced System Settings
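Both the lockout parameters and the complexity rules surface as host advanced settings. A minimal PowerCLI sketch, assuming the Security.* setting names that ship with ESXi 6.0; the host name and values shown are examples only:

```powershell
$vmhost = Get-VMHost -Name "esxi01.example.com"

# Maximum failed login attempts before lockout (10 by default)
Get-AdvancedSetting -Entity $vmhost -Name "Security.AccountLockFailures" |
    Set-AdvancedSetting -Value 5 -Confirm:$false

# Lockout duration in seconds (2 minutes by default)
Get-AdvancedSetting -Entity $vmhost -Name "Security.AccountUnlockTime" |
    Set-AdvancedSetting -Value 300 -Confirm:$false

# Password complexity rules, replacing the old PAM config file edits
Get-AdvancedSetting -Entity $vmhost -Name "Security.PasswordQualityControl" |
    Set-AdvancedSetting -Value "retry=3 min=disabled,disabled,disabled,7,7" -Confirm:$false
```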

Improved Auditability of ESXi Admin Actions

  •  In 6.0, all actions taken at vCenter against an ESXi server now show up in the ESXi logs with the vCenter username
  • [user=vpxuser:CORP\Administrator]

Enhanced Microsoft Clustering (MSCS)

  • Support for Windows 2012 R2 and SQL 2012
  • Failover Clustering and AlwaysOn Availability Groups
  • IPV6 Support
  • PVSCSI and SCSI controller support
  • vMotion Support
  • Clustering across physical hosts (CAB) with physical compatibility mode RDMs
  • Supported on Windows 2008, 2008 R2, 2012 and 2012 R2
Enjoy.
Update: the post vSphere 6 – Clarifying the misinformation has been published to clarify any changes that have happened or will happen between the beta and this post. I did my best to validate that my information is correct.
Roger L