All posts by Roger Lund

VMware and Storage crazy man, vExpert, MN VMUG leader

Opvizor: Inside the Dashboard

This is the first post in a series about Opvizor (http://www.opvizor.com/), a predictive analysis and issue prevention tool.

My previous post, Getting Opvizor Working by Michael White worth a read, covers the install process. I will continue here, assuming you have set up the product.

 

Once we log in at https://opvizor.com/login/ we see the dashboard. Where to start? That is a lot of information.

In the top left you have the environmental overview. This includes your issues per cluster, host, VM, datastore, and network. As you can see, I have induced lots of issues in my lab.

vmware vsphere opvizor home

Let’s dig into each.

 

Overall Status

vmware vsphere opvizor overall Statistics

Overall Statistics

vmware vsphere opvizor overall Statistics

Statistics by Upload

vmware vsphere opvizor Statistics by Upload

Issues by Upload

vmware vsphere opvizor Issues by Upload

Top 5 Largest Datastores

vmware vsphere opvizor Top 5 Largest Datastores

Top 5 Entities by Issue Rate

vmware vsphere opvizor Top 5 Entities by Issue Rate

Top 5 VMs by Snapshot Usage

vmware vsphere opvizor top 5 snapshots

 

Yikes, that seems a little large, doesn’t it?

 

Top 5 VMs by Memory Usage

vmware vsphere opvizor Top 5 VMs by Memory Usage

These are fairly self-explanatory, but we can dig into each to get more information from this screen.

 

If we click the small table icon vmware vsphere opvizor expand icon next to Top 5 Largest Datastores

 

vmware vsphere opvizor Top 5 Largest Datastores expand

 

Here we can see more information on the Datastores in my lab.

 

This is our first look inside Opvizor in the series.

 

 

Roger Lund

Getting Opvizor Working by Michael White is worth a read.

Michael White, another vExpert, did a great write-up on Opvizor (www.opvizor.com).

Opvizor has engaged the VMware vExpert program to let us dive into the product. Michael’s write-up shows how to install it.

 

Getting Opvizor Working – and is it interesting or what!

 

“If you watch the video on their home page you will see what I did and perhaps be as interested as I was.  Essentially their product compares your environment against best practices from VMware and or NetApp and then will let you know how your environment differs and provide you with the info to fix it.”

 

He then walks you through the installation.

I will dive inside Opvizor in a follow-up post.

Roger Lund

VMware Virtual SAN 6.0

The following blog post is about VMware Virtual SAN (VSAN): http://www.vmware.com/products/virtual-san

 

The following post was made with pre-GA, beta content.

All performance numbers are subject to final benchmarking results. Please refer to guidance published at GA.

All content and media is from VMware, as part of the blogger program.

Please also read vSphere 6 – Clarifying the misinformation http://blogs.vmware.com/vsphere/2015/02/vsphere-6-clarifying-misinformation.html

 

Here is the published What’s New: VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Whats_New.pdf

Here is the published VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Datasheet.pdf

 

What’s New?

 

vsan01

 

Disk Format

  • New On-Disk Format
  • New delta-disk type vsanSparse
  • Performance-based snapshots and clones

VSAN 5.5 to 6.0

  • In-Place modular rolling upgrade
  • Seamless In-place Upgrade
  • Seamless Upgrade Rollback Supported
  • Upgrade performed from RVC CLI
  • PowerCLI integration for automation and management

 

Disk Serviceability Functions

  • Ability to manage flash-based and magnetic devices.
  • Storage consumption models for policy definition
  • Default Storage Policies
  • Resync Status dashboard in UI
  • VM capacity consumption per VMDK
  • Disk/Disk group evacuation

VSAN Platform

  • New Caching Architecture for all-flash VSAN
  • Virtual SAN Health Services
  • Proactive Rebalance
  • Fault domains support
  • High Density Storage Systems with Direct Attached Storage
  • File Services via 3rd party
  • Limited support hardware encryption and checksum

 

Virtual SAN Performance and Scale Improvements

 

 

2x VMs per host

  • Larger Consolidation Ratios
  • Due to an increase of supported components per host
  • 9000 Components per Host

 

62TB Virtual Disks

  • Greater capacity allocations per VMDK
  • VMDK >2TB are supported

Snapshots and Clones

  • Larger supported capacity of snapshots and clones per VM
  • 32 per Virtual Machine

Host Scalability

  • Cluster support raised to match vSphere
  • Up to 64 nodes per cluster in vSphere

VSAN can scale up to 64 nodes

 

Enterprise-Class Scale and Performance

vsan02

VMware Virtual SAN : All-Flash

Flash-based devices used for caching as well as persistence

Cost-effective all-flash 2-tier model:
–Cache is 100% write: uses write-intensive, higher grade flash-based devices
–Persistent storage: can leverage lower cost read-intensive flash-based devices

Very high IOPS: up to 100K(1) IOPS/host
Consistent performance with sub-millisecond latencies

 

vsan05

 

 

Hybrid: 30K IOPS/host. All-flash: 100K IOPS/host with predictable sub-millisecond latency.

 

Virtual SAN Flash Caching Architectures

 

vsan06

 

 

All-Flash Cache Tier Sizing

Cache tier should have 10% of the anticipated consumed storage capacity

Measurement Requirements                       Values
Projected VM space usage                       20 GB
Projected number of VMs                        1,000
Total projected space consumption              20 GB x 1,000 = 20,000 GB = 20 TB
Target flash cache capacity percentage         10%
Total flash cache capacity required            20 TB x 0.10 = 2 TB

 

  • Cache is entirely write-buffer in all-flash architecture
  • Cache devices should be high write endurance models: choose 2+ TBW/day (3650+ TBW over 5 years)
  • Total cache capacity percentage should be based on use case requirements.

–For general recommendations, see the VMware Compatibility Guide.

–For write-intensive workloads, a higher amount should be configured.

–Increase cache size if expecting heavy use of snapshots.
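The sizing rule and the worked example above reduce to a one-line calculation. A minimal sketch in Python (the function name is my own, not a VMware tool):

```python
def flash_cache_capacity_tb(vm_space_gb, vm_count, cache_pct=0.10):
    """Size the all-flash cache tier at 10% of the anticipated consumed capacity."""
    total_consumed_gb = vm_space_gb * vm_count   # e.g. 20 GB x 1,000 VMs = 20,000 GB
    return total_consumed_gb * cache_pct / 1000  # GB -> TB (decimal)

# Worked example from the table above: 20 GB per VM, 1,000 VMs, 10% cache
print(flash_cache_capacity_tb(20, 1000))  # approximately 2 TB
```

Remember this is a starting point: per the notes above, write-intensive or snapshot-heavy workloads warrant a larger cache percentage.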

 

New On-Disk Format

 

  • Virtual SAN 6.0 introduces a new on-disk format.
  • The new on-disk format enables:

–Higher performance characteristics

–Efficient and scalable high performance snapshots and clones

–Online migration to the new format (RVC only)

  • The object store will continue to mount the volumes from all hosts in a cluster and present them as a single shared datastore.
  • The upgrade to the new on-disk format is optional; the Virtual SAN 5.5 on-disk format will continue to be supported.

 

Performance Snapshots and Clones

 

  • Virtual SAN 6.0’s new on-disk format introduces a new VMDK type

–Virtual SAN 5.5 snapshots were based on vmfsSparse (redo logs)

  • vsanSparse based snapshots are expected to deliver performance comparable to native SAN snapshots.

–vsanSparse takes advantage of the new on-disk format writing and extended caching capabilities to deliver efficient performance.

  • All disks in a vsanSparse disk chain need to be vsanSparse (except the base disk).

–Cannot create linked clones of a VM with vsanSparse snapshots on a non-VSAN datastore.

–If a VM has existing redo log based snapshots, it will continue to get redo log based snapshots until the user consolidates and deletes all current snapshots.

 

 

 

 

Hardware Requirements

 

vsan03

 

Flash-based devices serve two purposes in the Virtual SAN hybrid architecture, where ALL read and write operations always go directly to the flash tier:

1. Non-volatile Write Buffer (30%)
–Writes are acknowledged when they enter the prepare stage on the flash-based devices.
–Reduces latency for writes

2. Read Cache (70%)
–Cache hits reduce read latency
–Cache miss: retrieve data from the magnetic devices
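The fixed 30/70 split above can be sketched as a quick helper (my own illustration; integer GB keeps the arithmetic exact):

```python
def hybrid_flash_split_gb(flash_device_gb):
    """Split a hybrid-mode flash device per the fixed allocation:
    30% non-volatile write buffer, 70% read cache."""
    write_buffer = flash_device_gb * 30 // 100
    read_cache = flash_device_gb - write_buffer
    return write_buffer, read_cache

# Example: an 800 GB flash device (the size here is a hypothetical example)
print(hybrid_flash_split_gb(800))  # -> (240, 560): 240 GB write buffer, 560 GB read cache
```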

 

Flash-based devices serve two purposes in Virtual SAN all-flash, where read and write operations always go directly to the flash devices:

1. Cache Tier
–High endurance flash devices
–Listed on VCG

2. Capacity Tier
–Low endurance flash devices
–Listed on VCG

Network

•1Gb / 10Gb supported
–10Gb shared with NIOC for QoS will support most environments
–If 1Gb, dedicated links for Virtual SAN are recommended
–Layer 3 network configuration supported in 6.0

•Jumbo Frames will provide a nominal performance increase
–Enable for greenfield deployments
–Enable in large deployments to reduce CPU overhead

•Virtual SAN supports both VSS & VDS
–NetIOC requires VDS

•Network bandwidth has more impact on host evacuation and rebuild times than on workload performance

 

High Density Direct Attached Storage

 

vsan04

 

 

–Manage disks in enclosures – helps enable blade environments
–Flash acceleration provided on the server or in the subsystem
–Data services delivered via the VSAN data services and platform capabilities
–Supports a combination of direct-attached disks and high-density attached disks (flash devices and magnetic devices) per disk group

 

Users are expected to configure the HDDAS switch such that each disk is only seen by one host.

–VSAN protects against misconfigured HDDAS (a disk seen by more than one host).
–The owner of a disk group can be explicitly changed by unmounting and restamping the disk group from the new owner.

•If a host that owns a disk group crashes, manual restamping can be done on another host.

–Supported HDDAS devices will be tightly controlled by the HCL (exact list TBD).
•Applies to HDDAS enclosures and controllers

 

Follow the VMware HCL: www.vmware.com/go/virtualsan-hcl

 

Upgrade

 

  • Virtual SAN 6.0 has a new on-disk format for disk groups and exposes a new delta-disk type, so upgrading the on-disk format from 1.0 to 2.0 involves more than upgrading the ESXi/vCenter software.

Upgrades are performed in multiple phases

Phase 1: Fresh deployment of, or upgrade to, vSphere 6.0

vCenter Server

ESXi Hypervisor

Phase 2: Disk format conversion (DFC)

Reformat disk groups

Object upgrade

Disk/Disk Group Evacuation

  • In Virtual SAN 5.5, in order to remove a disk or disk group without data loss, hosts were placed in maintenance mode with full data evacuation from all disks and disk groups.
  • Virtual SAN 6.0 introduces the ability to evacuate data from individual disks or disk groups before removing them from Virtual SAN.
  • Supported in the UI, esxcli and RVC.

Check box in the “Remove disk/disk group” UI screen

vsanexample

 

Disks Serviceability

 

Virtual SAN 6.0 introduces a new disk serviceability feature to easily map the location of magnetic disks and flash based devices from the vSphere Web Client.

vsanexample2

  • Light LED on failures: when a disk hits a permanent error, it can be challenging to find where that disk sits in the chassis in order to replace it. When an SSD or MD encounters a permanent error, VSAN automatically turns the disk LED on.
  • Turn disk LED on/off: a user might need to locate a disk, so VSAN supports manually turning an SSD or MD LED on or off.
  • Marking a disk as SSD: some SSDs might not be recognized as SSDs by ESX. Disks can be tagged/untagged as SSDs.
  • Marking a disk as local: some SSDs/MDs might not be recognized by ESX as local disks. Disks can be tagged/untagged as local disks.

 

Virtual SAN Usability Improvements

  • What-if-APIs (Scenarios)
  • Adding functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.

–Creating Policies

–Reapplying a Policy

vsanexample3

 

Default Storage Policies

 

  • A Virtual SAN default profile is automatically created in SPBM when VSAN is enabled on a cluster.

–Default profiles are utilized by any VM created without an explicit SPBM profile assigned.

vSphere admins can designate a preferred VM Storage Policy as the default policy for the Virtual SAN cluster.

 

vsanexample4

  • vCenter can manage multiple vsanDatastores with different sets of requirements.
  • Each vsanDatastore can have a different default profile assigned.

Virtual Machine Usability Improvements

  • Virtual SAN 6.0 adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.
  • Virtual SAN’s free disk space is raw capacity.

–With replication, actual usable space is less.

  • New UI shows real usage on

–Flash Devices

–Magnetic Disks

Displayed in the vSphere Web Client and RVC

vsanexample5
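The raw-versus-usable note can be made concrete: with the default mirroring policy, an object configured to tolerate FTT failures keeps FTT+1 full copies, so usable space is roughly raw capacity divided by FTT+1. A rough sketch (my own helper; it ignores witness components and metadata overhead):

```python
def approx_usable_tb(raw_tb, failures_to_tolerate=1):
    """Approximate usable capacity under mirroring:
    each object keeps FTT+1 full replicas, so divide raw by (FTT+1).
    Ignores witness components and on-disk metadata overhead."""
    return raw_tb / (failures_to_tolerate + 1)

print(approx_usable_tb(20))     # 20 TB raw at FTT=1 -> 10.0 TB usable
print(approx_usable_tb(20, 2))  # FTT=2 -> roughly 6.67 TB usable
```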

 

Virtual Machine >2TB VMDKs

 

  • In VSAN 5.5, the max size of a VMDK was limited to 2TB.

–Max size of a VSAN component is 255GB.

–Max number of stripes per object was 12.

  • In VSAN 6.0 the limit has been increased to allow VMDK up to 62TB.

–Objects are still striped at 255GB.

  • The 62TB limit is the same as VMFS and NFS, so VMDKs can be migrated between datastore types.

 

vsanexample6
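The component math behind those limits can be checked quickly. A small sketch (the helper name is mine; it counts only the base stripes and ignores replicas and any extra striping from the storage policy):

```python
import math

COMPONENT_STRIPE_GB = 255  # objects are striped into components at 255 GB

def min_components_per_vmdk(vmdk_size_gb):
    """Minimum number of 255 GB components a VMDK's object splits into,
    before replication or policy-driven striping multiplies the count."""
    return math.ceil(vmdk_size_gb / COMPONENT_STRIPE_GB)

print(min_components_per_vmdk(2 * 1024))   # 2 TB VMDK  -> 9 components
print(min_components_per_vmdk(62 * 1024))  # 62 TB VMDK -> 249 components
```

Against the 9,000-components-per-host budget mentioned earlier, a single fully grown 62TB VMDK alone consumes a noticeable slice of a host's component count.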

 

There it is. I tried to lay it out as best I could.

Want to try it? Try the Virtual SAN Hosted Evaluation https://my.vmware.com/web/vmware/evalcenter?p=vsan6-hol

 

What’s New in Virtual SAN 6

Learn how to deploy, configure, and manage VMware’s latest hypervisor-converged storage solution.


 

Roger Lund

vSphere 6.0 Fault Tolerance – New Features

Today VMware announces the next major release of its Fault Tolerance product. Note: these facts are pre-release, so I will update any changes if necessary. All facts are direct from VMware.

New features

  • Enhanced virtual disk format support
  • Ability to hot configure FT
  • Greatly increased FT host compatibility
  • 4 vCPUs per VM
  • Up to 4 FT VMs per host (10Gb networking required) or 8 FT vCPUs per host
  • Support for vStorage APIs for Data Protection (VADP)
  • API for non-disruptive snapshots
  • Source and Destination VM each have independent vmdk files
  • Source and Destination VM are Allowed to be on different datastores
Stay tuned for a walkthrough.
Update: the post vSphere 6 – Clarifying the misinformation has been posted to clarify any changes that have happened or will happen between the beta and this post. I did my best to validate that my information is correct.
Roger L

vSphere 6.0 Platform New Features

Today VMware announces vSphere 6.0, the largest release in VMware history. Note: these facts are pre-release, so I will update any changes if necessary. All facts are direct from VMware.

Increased vSphere Maximums

  • 64 Hosts per Cluster
  • 8000 Virtual Machines per Cluster
  • 480 CPUs
  • 12 TB RAM
  • 1000 Virtual Machines Per Host
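Note that two of these maximums interact: 64 hosts at 1,000 VMs each would be 64,000 VMs, but the cluster caps at 8,000, so at full host count the cluster limit binds. A quick check (the helper name is mine):

```python
def max_vms_in_cluster(hosts, vms_per_host=1000, cluster_cap=8000):
    """Effective VM ceiling: the lesser of the summed per-host limit
    and the 8,000-VM per-cluster maximum."""
    return min(hosts * vms_per_host, cluster_cap)

print(max_vms_in_cluster(4))   # small cluster: the per-host limit binds -> 4000
print(max_vms_in_cluster(64))  # full 64-host cluster: the cluster cap binds -> 8000
```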
Virtual Machine Compatibility ESXi 6 (vHW 11)
 
  • 128 vCPUs
  • 4 TB RAM to 12 TB RAM (depending on partner)
  • Hot-add RAM now vNUMA aware
  • WDDM 1.1 GDI acceleration features
  • xHCI 1.0 controller compatible with OS X 10.8+ xHCI driver
  • Serial and parallel port enhancements
  • A virtual machine can now have a maximum of 32 serial ports
  • Serial and parallel ports can now be removed
Local ESXi Account and Password Management Enhancements
 

New ESXCLI Commands

  • Now possible to use ESXCLI commands to:
  • Create a new local user
  • List local user accounts
  • Remove local user account
  • Modify local user account
  • List permissions defined on the host
  • Set / remove permission for individual users or user groups

Account Lockout

  • Two Configurable Parameters
  • Can set the maximum allowed failed login attempts (10 by default)
  • Can set lockout duration period (2 minutes by default)
  • Configurable via vCenter Host Advanced System Settings
  • Available for SSH and vSphere Web Services SDK
  • DCUI and Console Shell are not locked
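The two parameters describe a simple policy. This toy sketch (my own illustration, not VMware code) shows the logic the defaults imply: 10 failed attempts lock the account for 2 minutes.

```python
import time

class AccountLockout:
    """Toy model of the stated defaults: 10 failed login attempts
    lock the account for 120 seconds (2 minutes)."""

    def __init__(self, max_failures=10, lockout_seconds=120):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = 0
        self.locked_until = 0.0

    def is_locked(self, now=None):
        now = time.monotonic() if now is None else now
        return now < self.locked_until

    def record_failure(self, now=None):
        """Register a failed login; returns True if the account is now locked."""
        now = time.monotonic() if now is None else now
        if self.is_locked(now):
            return True
        self.failures += 1
        if self.failures >= self.max_failures:
            self.locked_until = now + self.lockout_seconds
            self.failures = 0  # counter starts over once the lockout is served
            return True
        return False

policy = AccountLockout()
for _ in range(9):
    policy.record_failure(now=0.0)
print(policy.is_locked(now=0.0))    # False: 9 failures, still below the threshold
policy.record_failure(now=0.0)      # 10th failure triggers the lockout
print(policy.is_locked(now=60.0))   # True: inside the 2-minute window
print(policy.is_locked(now=121.0))  # False: the lockout has expired
```

On a real host these values live in the advanced system settings rather than in code, and per the bullets above the lockout applies to SSH and the Web Services SDK but not to the DCUI or console shell.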

Complexity Rules via Advanced Settings

  • No editing of PAM config files on the host required anymore
  • Change default password complexity rules using VIM API
  • Configurable via vCenter Host Advanced System Settings

Improved Auditability of ESXi Admin Actions

  •  In 6.0, all actions taken at vCenter against an ESXi server now show up in the ESXi logs with the vCenter username, for example:
  • [user=vpxuser:CORP\Administrator]

Enhanced Microsoft Clustering (MSCS)

  • Support for Windows 2012 R2 and SQL 2012
  • Failover Clustering and AlwaysOn Availability Groups
  • IPV6 Support
  • PVSCSI and SCSI controller support
  • vMotion Support
  • Clustering across physical hosts (CAB) with Physical Compatibility Mode RDM’s
  • Supported on Windows 2008, 2008 R2, 2012 and 2012 R2
Enjoy.
Update: the post vSphere 6 – Clarifying the misinformation has been posted to clarify any changes that have happened or will happen between the beta and this post. I did my best to validate that my information is correct.
Roger L


vCenter Server 6.0 New Features

Today VMware announces vCenter Server 6.0, the largest release in VMware history. Note: these facts are pre-release, so I will update any changes if necessary. All facts are direct from VMware.

vCenter Server Features – Enhanced Capabilities

Metric                     Windows      Appliance
Hosts per VC               1,000        1,000
Powered-On VMs per VC      10,000       10,000
Hosts per Cluster          64           64
VMs per Cluster            8,000        8,000
Linked Mode

Platform Services Controller 

Platform Services Controller takes it beyond just Single Sign-On. It groups:

  • Single Sign-On (SSO)
  • Licensing
  • Certificate Authority

Two deployment models:

Embedded: vCenter Server and Platform Services Controller in one virtual machine
– Recommended for small deployments with fewer than two SSO-integrated solutions

Centralized: vCenter Server and Platform Services Controller in their own virtual machines
– Recommended for most deployments, where there are two or more SSO-integrated solutions

Linked Mode Comparison

                           vSphere 5.5      vSphere 6.0
Windows                    Yes              Yes
Appliance                  No               Yes
Single Inventory View      Yes              Yes
Single Inventory Search    Yes              Yes
Replication Technology     Microsoft ADAM   Native
Roles & Permissions        Yes              Yes
Licenses                   Yes              Yes
Policies                   No               Yes
Tags                       No               Yes

Certificate Lifecycle Management for vCenter and ESXi

VMware Certificate Authority (VMCA)

Provisions each ESXi host, each vCenter Server and vCenter Server service with certificates that are signed by VMCA

VMware Endpoint Certificate Service (VECS)

Stores certificates and private keys for vCenter services

vCenter Server 6.0 – VMCA

Root CA

  • During installation, VMCA automatically creates a self-signed certificate
  • This is a CA certificate, capable of issuing other certificates
  • All solutions and endpoint certificates are created (and trusted) from this self-signed CA certificate

Issuer CA

  • Can replace the default self-signed CA certificate created during installation
  • Requires a CSR issued from VMCA to be used in an Enterprise/Commercial CA to generate a new Issuing Certificate
  • Requires replacement of all issued default certificates after implementation

Certificate Replacement Options for vCenter Server

VMCA Default

  • Default installed certificates
  • Self-signed VMCA CA certificate as Root
  • Possible to regenerate these on demand easily

VMCA Enterprise

  • Replace VMCA CA certificates with a new CA certificate from the Enterprise PKI
  • On removal of the old VMCA CA certificate, all old certificates must be regenerated

Custom

  • Disable VMCA as CA
  • Provision custom leaf certificates for each solution, user and endpoint
  • More complicated, for highly security conscious customers

Cross vSwitch vMotion

  • Transparent operation to the guest OS
  • Works across different types of virtual switches
  • vSS to vSS
  • vSS to vDS
  • vDS to vDS
  • Requires L2 network connectivity
  • Does not change the IP of the VM
  • Transfers vDS port metadata

Cross vCenter vMotion

  • Simultaneously changes
  • Compute
  • Storage
  • Network
  • vCenter
  • vMotion without shared storage
  • Increased scale
  • Pool resources across vCenter servers
  • Targeted topologies
  • Local
  • Metro
  • Cross-continental

Requirements

  • vCenter 6.0 and greater
  • SSO Domain
  • Same SSO domain to use the UI
  • Different SSO domain possible if using API
  • 250 Mbps network bandwidth per vMotion operation
  • L2 network connectivity on VM portgroups
  • IP addresses are not updated
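The 250 Mbps floor gives a back-of-envelope lower bound on how long a migration must take. This sketch ignores precopy iterations, compression, and dirty-page retransmission, and the 16 GB figure is just an example:

```python
def min_transfer_seconds(memory_gb, mbps=250):
    """Lower bound on time to move a VM's memory image at a given link rate.
    250 Mbps is the stated per-vMotion bandwidth requirement."""
    megabits = memory_gb * 1024 * 8  # GB -> megabits (binary GB, 8 bits per byte)
    return megabits / mbps

print(round(min_transfer_seconds(16)))  # a 16 GB VM needs at least ~524 seconds
```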

Features

  • VM UUID maintained across vCenter server instances
  • Not the same as MoRef or BIOS UUID
  • Data Preservation
  • Events, Alarms, Tasks History
  • HA/DRS Settings
  • Affinity/Anti-Affinity Rules
  • Automation level
  • Start-up priority
  • Host isolation response
  • VM Resource Settings
  • Shares
  • Reservations
  • Limits
  • MAC Address of virtual NIC
  • MAC Addresses preserved across vCenters
  • Always unique within a vCenter
  • Not reused when VM leaves vCenter

Long Distance vMotion 

  • Cross-continental distances – up to 100ms RTTs
  • Maintain standard vMotion guarantees
  • Does not require VVOLs
  • Use Cases:
  • Permanent migrations
  • Disaster avoidance
  • Multi-site load balancing
  • Follow the sun

Increased vMotion Network Flexibility

 

  • vMotion network will cross L3 boundaries
  • vMotion can now use its own TCP/IP stack

Content Library Overview

  • Simple content management
  • VM templates
  • vApps
  • ISO images
  • Scripts
  • Store and manage content
  • One central location to manage all content
  • Beyond templates within vCenter
  • Support for other file types
  • Share content
  • Store once, share many times
  • Publish/Subscribe
  • vCenter  -> vCenter
  • vCloud Director -> vCenter
  • Consume content
  • Deploy templates to a host or a cluster

Client Overview and Web client Changes

 Client Comparison

vSphere Client

  •  It’s still here
  • Direct Access to hosts
  • VUM remediation
  • New features in vSphere 5.1 and newer are only available in the web client
  • Added support for virtual hardware versions 10 and 11 *read only*

vSphere Web Client

  • Performance
  • Improved login time
  • Faster right click menu load
  • Faster performance charts

Usability

  • Recent Tasks moved to bottom
  • Flattened right click menus
  • Deep lateral linking
  • Major Performance Improvements

UI

  • Screen by screen code optimization
  • Login now 13x faster
  • Right click menu now 4x faster
  • Most tasks end to end are 50+% faster
  • Performance charts
  • Charts are available and usable in less than half the time
  • VMRC integration
  • Advanced virtual machine operations
  • Usability Improvements
  • Can get anywhere in one click
  • Right click menu has been flattened
  • Recent tasks are back at the bottom
  • Dockable UI

vCenter Cluster Support



vCenter Server is now supported running in a Microsoft cluster.




That’s all of the changes we were presented with from VMware. What a ton of changes! I will dig into these more soon.

 

 

Update: the post vSphere 6 – Clarifying the misinformation has been posted to clarify any changes that have happened or will happen between the beta and this post. I did my best to validate that my information is correct.

 

Roger L

vSphere 6 Configuration of Fault Tolerance

Make sure to follow the Requirements Here

We will be showing off 4 vCPU Fault Tolerance.

Login to vSphere vCenter 6 Web Server.

Click on Hosts and Clusters.

Select your first host.

Click on Manage

Click on Networking

Click on Add Host Networking

This pops up.

Select Physical Network Adapter, and Click Next.

Select new Standard Switch, and Click Next.

Click + to add an adapter

 Select vmnic2

Click Next

Click Finish.

Repeat on all other hosts in the cluster.

Select Host one again.

Click on Manage

Click on Networking

Click on VMkernel Adapters

Click on Add Host Networking

This pops up.

 Click on Next.

Click Browse under Select an existing standard vSwitch.

Select the new vSwitch, and click OK.

Click Next.

In Network Label, set the name to fit your needs. In this case I will use Fault Tolerance.
Enter a VLAN ID if required; since this is a lab, I will leave it blank.
Check Fault Tolerance Logging, and click Next.

Enter in Static IP Information, or DHCP if you have a scope setup.

Click Next, and finish.

Repeat on all other hosts in the cluster. Make sure each has a different IP Address.

Next we enable Fault Tolerance on the VM.

I have done this in the video.

I did not have a 10Gb network, so we could not actually test and show the results.

Roger Lund

Veeam 8.0 Patch 1

I’ve been working on getting Veeam 8.0 up in my lab, and I am planning a series of write-ups on the technology. I noticed that they have released a patch, and wanted to share it.

KB ID: 1982
Products: Veeam Backup & Replication
Version: 8.0.0.917
Published: 
Created: 2014-12-25


They have added a slew of new features.


New Features and Enhancements

General 

  • Retention policy for replication jobs is now processed concurrently for multiple VMs in the job, as opposed to one by one, reducing the overall job run time.
  • Added email notification about File to Tape and Backup to Tape jobs waiting for user interaction (for example, when a tape medium needs to be inserted by the user)
  • VMware Tools quiescence should now log an informational event when VMware Tools are outdated, as opposed to a warning event.
  • Independent virtual disks which are explicitly excluded from processing in the job settings should no longer log a warning during the first time when the given VM is being backed up.
  • Virtual Lab proxy appliance was upgraded to virtual hardware version 7 to allow for it to manage more networks out of box.
  • Jobs should no longer appear to “hang doing nothing” in some circumstances, with the duration not ticking for any event in the job log.



Cloud Connect

  • Cloud backup repository size limit of 63TB has been removed



Linux support enhancements

  • Added support for file level recovery from XFS volumes (default file system in Red Hat Enterprise Linux and CentOS distributions starting from version 7.0)
  • Added support for file level recovery from Btrfs volumes (default file system in SUSE Linux Enterprise Server starting from version 12.0).
  • Added support for file level recovery from VMs with more than 10 LVMs.
  • Improved Linux guest file system indexing performance for incremental backup job runs.
  • FTP access mode of the file level recovery helper appliance now also displays friendly LVM volume names introduced in v8.
  • To enable identity verification of the remote server, SSH key fingerprint is now displayed to a user when registering the new Linux server. Accepted SSH keys are stored in the configuration database to protect from MITM attack.



Performance enhancements

  • Added experimental support for direct data mover communication when both are running on the same server (for example, when backing up to local storage on the backup proxy server). If your local backup jobs report Network as the bottleneck, and you see high load on some backup proxy server NICs when the data was supposed to stay local to the server, you are likely to benefit from this behavior modifier. To enable the alternative data exchange path, create the DataMoverLocalFastPath (DWORD) registry value under HKLM\SOFTWARE\Veeam\Veeam Backup and Replication, and set it to one of the following values:

0: Default behavior (no optimizations)
1: Data exchange through a TCP socket on the loopback interface (faster)
2: Data exchange through shared memory (fastest)

EMC Data Domain integration

  • Synthetic full backup creation and transformation performance has been improved significantly.



HP 3PAR StoreServ integration

  • Remote Copy snapshots are now automatically excluded from rescan, both speeding up the rescan process and reducing storage load.



NetApp integration

  • Added support for NetApp Data ONTAP 8.3.
  • Added support for HA Pair 7-mode configuration.
  • Added support for vFiler DR configuration.



And even more fixes.



 Resolved Issues

General

  • Restored incremental backup job performance back to v7 levels by removing unnecessary metadata queries from previous backup files.
  • Multiple issues when parallel processing is disabled (high CPU usage on backup server, wrong task processing order, reversed incremental backup jobs with disk excluded failing to access backup files).
  • Automatic network traffic encryption (based on public IP address presence) is enabled even when both data movers are running on the same computer, reducing job’s processing performance.
  • Quick Backup operation triggers jobs chained to the job it uses.
  • Lost password recovery option does not function correctly when Enterprise Manager has multiple backup servers registered.
  • Under rare circumstances, backup job with encryption enabled may fail with the “Cannot resize block data is encrypted” error.
  • When multiple network traffic rules having the same source and target ranges are set up, changing settings to one of the rules also updates all other rules.



vSphere

  • In vSphere 5.5, enabling virtual disk updates redirection to the VMFS datastore in Instant VM Recovery settings, and then migrating a recovered VM to the same datastore with Storage vMotion causes data loss of data generated by the running VM.
  • Full VM restore process puts all virtual disks on the datastore selected for the VM configuration file, ignoring virtual disk placement settings specified in the wizard.
  • In certain environments, jobs fail to process VMs in Direct SAN Access mode with the “Failed to create processing task for VM Error: Disk not found” error.
  • Dismounting processed virtual disks from backup proxy server takes longer than expected in the Virtual Appliance (hot add) processing mode.
  • Rescan of vCenter server containing hosts without HBA and SCSI adaptors fails with the “Object Reference not set to an instance of an object” error.
  • Replication jobs to a cluster fail with the “The operation is not allowed in the current state” error if chosen cluster host is in the maintenance mode.
  • Under rare circumstances, jobs with Backup I/O Control enabled may occasionally log the “Operation is RetrievePropertiesOperation” related errors.
  • Enabling Backup I/O Control limits snapshot removal tasks to one per datastore regardless of datastore latency levels.
  • VM Copy jobs always fail over to the Network (NBD) processing mode.
  • VMs with virtual disks having the same names but with different capitalization, and located on the same datastore cannot be backed up.
  • Backing up vCloud Director vApp to a CIFS share may fail with the “Object Reference not set to an instance of an object” error.
  • Snapshot Hunter ignores backup window specified in the corresponding job.
  • vCloud Director VMs created from linked clone template and with a user snapshot present incorrectly trigger Snapshot Hunter.



Hyper-V 

  • CBT does not track changes correctly on VHDX disk files larger than 2 TB.
  • CBT may return incorrect changed blocks information for differential disks, when parent and child disks have the same name.
  • Large amount of unmapped VHDX blocks may cause the job to failover to full scan incremental backup with the “Failed to update unaccounted changes for disk. Change tracking is disabled” error.
  • Existing Hyper-V backup and replication jobs processing VM with SCSI disks start to consume x2 space on target storage after upgrade to v8.
  • Removing a node from a cluster does not remove the corresponding node from the cluster on the Backup Infrastructure tab of the management tree until the next periodic infrastructure rescan (in up to 4 hours).
  • Replication from backup fails with the “Virtual Hard Disk file not found” error if the backup job that created this backup had one or more disks excluded from processing.
  • Under rare circumstances, legacy replicas using non-default block size may fail with the “FIB block points to block located outside the patched FIB” or “Block offset [] does not match block ID []” error messages.
  • Offhost backup from Windows Server 2012 R2 fails with the “Exception of type ‘Veeam.Backup.AgentProvider.AgentClosedException’ was thrown” error when processing a VM that can only be backed up in Saved or Crash-Consistent states, and native Hyper-V quiescence is enabled in the job settings.
  • If a Hyper-V cluster node goes offline during backup from a shared volume snapshot, and does not recover until the Veeam job fails, the CSV volume backup infrastructure resource will not be released by the scheduler. As a result, other jobs waiting to process VMs from the same volume may remain hanging in the “Resource not ready: Snapshot” state.
  • Replication job fails with the “Cannot assign the specified number of processors for virtual machine” error for VMs with core count larger than the one of the target Hyper-V host. Legacy replication job works fine in the same circumstances, however VM failover is not possible.



Application-aware processing

  • VeeamGuestHelper process crashes when backing up Windows 2003 SP2 VM running Oracle.
  • Application-aware processing hangs for extended time when processing a Windows VM containing one or more Oracle databases in the suspended state.
  • Certain advanced guest processing settings lead to VM processing failing with the “Unknown indexing mode ‘None’ ” error.



Multi-OS File Level Recovery

  • Restore to original location does not work in certain LVM configurations.



Enterprise Manager

  • Under certain circumstances, Enterprise Manager data collection may fail with the “Cannot insert the value NULL into column ‘guest_os’ ” error.
  • Restore scope rebuild operation tries to reach out to the hosts already removed from the corresponding backup server, resulting in login errors.



Built-in WAN Acceleration

  • WAN accelerators fail to detect and populate the global cache with Windows Server 2012 R2 and Windows 8.1 OS data, which may result in reduced data reduction ratios.



Cloud Connect

  • The Decompress backup data blocks before storing backup repository option is ignored for Cloud Connect backend repositories.
  • Backup Copy jobs with encryption enabled fail with the “Attempted to compress encrypted data-block” error if backend repository has the Decompress backup data blocks before storing option enabled.
  • Veeam Backup Service crashes with the “An existing connection was forcibly closed by the remote host” if network connection fails immediately upon establishing the initial connection to the Cloud Gateway.
  • Cloud Connect quotas are rounded incorrectly in the service provider’s interface.
  • Tenant statistics data such as data sent/received is only shown for the last 24 hours.



Veeam Explorer for Active Directory

  • “This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms” error is displayed if Use FIPS compliant algorithms for encryption, hashing, and signing Group Policy setting is enabled.
  • Attempting to open single label domain’s AD database fails with the “No writable domain found” error.
  • Attempting to open third level domain’s AD database fails with the “No writable domain found” error.
  • Under certain circumstances, object restore may fail with the “Unable to cast object of type ‘System.String’ to type ‘System.Byte[]’ ” error.



Veeam Explorer for Exchange

  • Process of exporting email messages cannot be stopped.
  • Copying or saving attachments from multiple different emails always copies or saves attachment from the first email for which this operation was performed.
  • Opening public folder’s mailbox fails with the “Unable to find wastebasket” error.



Veeam Explorer for SQL Server

  • Restoring to SQL instance with case sensitive collation enabled fails with the “The multi-part identifier “a.database_id” could not be bound” error.



Backup from Storage Snapshots

  • Backup jobs from storage snapshots on iSCSI SAN take too long to initialize when a large number of iSCSI targets is configured on the backup proxy server.
  • Under certain circumstances, backup jobs from storage snapshots on FC SAN may fail with the “Unable to find volume with Node WWN and LUN ID” error.



HP 3PAR integration

  • Rescan of 3PAR array with empty host sets fails with the “Object reference not set to an instance of an object” error.



NetApp integration

  • Under certain circumstances, storage rescan process may leave temporary storage snapshots behind.



Tape

  • Backup to Tape jobs may appear to run slower after upgrading to v8 because of copying unnecessary backup files (VRB) to tape when source backup job uses reversed incremental backup mode.
  • On standalone drives, tape jobs configured to eject tape upon completion fail with the “Object reference not set to an instance of an object” error after 10 minutes of waiting for the user to insert tape.
  • Tape job configured to use full and incremental media pools hosted by different standalone tape drives fails when processing incremental backup with the “Value cannot be null” error.
  • Tapes are formatted with the block size of the tape medium, as opposed to the default block size of the tape drive.
  • Tape job configured to use full and incremental media pools hosted by different tape libraries fails while waiting for an incremental pool’s tape with the “Tape not exchanged” error.
  • Tape jobs fail with the “GetTapeHeader failed” error on IBM TS3100 tape library with Path failover feature disabled.



PowerShell

  • Objects obtained with Find-VBRViFolder cmdlet have incorrect Path attribute.
  • Get-VBRTapeJob cmdlet fails to obtain job object for tape jobs with incremental processing disabled.
  • Get-VBRTapeLibrary cmdlet fails with the Standard Edition license.
  • All built-in WAN accelerator management cmdlets have been made available in all paid product editions.
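The cmdlets named above can be exercised from the Veeam PowerShell snap-in to confirm the fixes. A minimal sketch, assuming a Veeam Backup & Replication v8 server with the VeeamPSSnapin registered; the cmdlet names come from the release notes, while the property selections are illustrative:

```powershell
# Load the Veeam v8 PowerShell snap-in (must run on/against a backup server).
Add-PSSnapin VeeamPSSnapin

# Find-VBRViFolder objects should now report a correct Path attribute.
Find-VBRViFolder -Server (Get-VBRServer -Type VC) |
    Select-Object Name, Path

# Get-VBRTapeJob should now return tape jobs even when incremental
# processing is disabled on the job.
Get-VBRTapeJob | Select-Object Name

# Get-VBRTapeLibrary should now work under a Standard Edition license.
Get-VBRTapeLibrary | Select-Object Name
```

These are sanity checks rather than an official validation procedure; run them after applying the patch to confirm the cmdlets behave as described.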

To download the patch, go to http://www.veeam.com/download_add_packs/vmware-esx-backup/8.0patch1/ (note: you must be logged into Veeam’s website).




Enjoy

Roger Lund

New Frontier

We all have choices in life. I chose a career in Information Technology for a couple of key reasons. Foremost, I love technology. It is why I am so passionate about my career, and why I work with the technology I do.

After some time in the Information Technology field, I found I enjoyed helping and enabling others. This started in the service field when I was in college, then at my first Information Technology job on the helpdesk, and so on over the years. This is what kept me going as a Systems Admin, and through the long nights over the years. This is what made me crazy enough to start my first user group, then go on to lead two VMware User Groups in Minnesota. This is why I applied for the VMware vExpert Program.

I count myself lucky in some ways. I have had the privilege to be a part of Tech Field Day. I was able to attend social media events around the country as an evangelist, an influencer, or even press. As time passed, I grew my network of connections and learned a great deal about public speaking, independent thinking, and a ton about collaborating with colleagues around the world.

At some point I began to feel like I led two lives. One as an evangelist: someone who shows his passion for the technology and wants to share the vision with others. The other as an Admin, who put out fires and did his best at the day-to-day battle. But I spent 90% of my time as an Admin. I won’t lie, I felt like I was being held back as an Admin. I wanted to do more but wasn’t sure how I should move forward. The company I worked for didn’t have a role that I fit into anymore.

I did what was perhaps the best and most important thing I did in 2014: I talked to my colleagues. I talked to colleagues who are Admins. I talked to colleagues who are in the social media space. I talked to colleagues who are industry analysts. I talked to colleagues who work for partners and vendors.

It was a wonderful experience, both enlightening and humbling. I had a hundred conversations and heard a lot of recommendations about future paths. But none suggested that I should continue to be a System Admin. It was the verification I needed, and it matched my personal feelings about the role I was currently in as an Admin.

After a great deal of thought, I decided that my time as an Admin was over. I wanted to move on to a field where I could leverage the skill set I have developed over the years, in a role where I can still enable and help others. That role ended up being a Solutions Architect for a partner.

A couple of weeks ago I accepted a Solutions Architect role at Deltaware DataSolutions. This marks my first week fully in my role. I will not attempt to mask my excitement, as I feel both liberated and empowered in this role. I am looking forward to working with my new team.

This is my next step. My New Frontier. I feel like a fog has lifted in some ways. I can focus on a single career and job. I plan to get more sleep. And I plan to blog more, talk to others more, and pay more attention to social media than I could as an Admin.


As always Hit me up on twitter @rogerlund or flag me down next time you see me.




Roger Lund

See the data, not just the storage with DataGravity

DataGravity @DataGravityInc has revealed what its secret gravity sauce is all about. Be sure to check out the press release “DataGravity Unveils Industry’s First Data-Aware Storage Platform”.

“NASHUA, N.H., August 19, 2014 – DataGravity today announced the launch of the DataGravity Discovery Series, the first ever data-aware storage platform that tracks data access and analyzes data as it is stored to provide greater visibility, insight and value from a company’s information assets. The DataGravity Discovery Series delivers storage, protection, data governance, search and discovery powered by an enterprise-grade hardware platform and patent-pending software architecture, enabling midmarket companies to glean new insights and make better business decisions.”

We often see new storage arrays with an impressive list of new features. Let’s see how this one stacks up.

From http://datagravity.com/sites/default/files/resource-files/DataGravity-DiscoverySeries-Datasheet-081714.pdf copyright Datagravity

Looks fairly standard, a nice sharp looking box. But what makes this unique?

Why the fuss? Why do we care what is on our storage? With the increasing cost of data, many are looking at what is taking up space and using the disk.

“The unstructured data dilemma is growing, and IDC has been predicting technology would catch up to provide an answer to the market demand,” said Laura Dubois, program vice president of storage at IDC. “The DataGravity approach is transformational in an industry where innovation has been mostly incremental. DataGravity data-aware storage can tell you about the data it’s holding, making the embedded value of stored information accessible to customers who cannot otherwise support the cost and complexity of solutions available today.”

Source http://datagravity.com/datagravity-unveils-first-data-aware-storage

In “The Digital Universe in 2020” report, IDC estimates that the overall volume of digital bits created, replicated, and consumed across the United States will hit 6.6 zettabytes by 2020. That represents a doubling of volume about every three years. For those not up on their Greek numerical prefixes, a zettabyte is 1,000 exabytes, or just over 25 billion 4-TB drives. Source http://www.networkcomputing.com/storage/2014-state-of-storage-cost-worries-grow/a/d-id/1113476

Today companies often leverage things like Hadoop, EMC ESA, or newer offerings like CloudPhysics to learn what people are using storage for in their environment, or to get data out of the system.

This becomes more complex with Big Data.

That’s all great, but how does DataGravity change things?

From http://datagravity.com/sites/default/files/resource-files/DataGravity-DiscoverySeries-Datasheet-081714.pdf copyright DataGravity

DataGravity Discovery Series Demo: End user role-based access

DataGravity Discovery Series Demo: Analyze your file shares

DataGravity Discovery Series Demo: Find Experts and Collaborators

Great, I see some neat features. But what about standard storage features, like data protection and provisioning?

DataGravity Discovery Series Demo: Preview, download and restore files

DataGravity Discovery Series Demo: Provision and protect your storage

I’m excited to see this at VMworld. VMworld attendees can see it in action at the DataGravity booth, #1647, August 24 through 28 in San Francisco.

I will be attending VMworld, and Tech Field Day. Make sure to check back to the DataGravity page. http://techfieldday.com/appearance/datagravity-presents-at-tech-field-day-extra-at-vmworld-us-2014/

Let’s check out what others are saying about DataGravity.

DataGravity – First Look

“Perhaps you’ve heard of DataGravity (@DataGravityInc), or perhaps you haven’t. They’ve been staying pretty quiet about what they’ve been working on. Today, however, they’re dropping the veil. Read on for a look into what you can expect from this exciting announcement!”

by James Green

Why is DataGravity Such a Big Deal?

“DataGravity just released their embargo and my little techie corner of the Internet is on fire. There’s a very good reason for that, but it might not be obvious at a glance. Read on to learn why DataGravity is a Big Deal even though it might not work out.”

by Stephen Foskett

Make sure to check the full articles out.

I hope you enjoyed the write up.

Roger Lund