Tagged in: VMware

VMware Virtual SAN 6.0

The following blog post is about VMware VSAN (Virtual SAN). http://www.vmware.com/products/virtual-san

 

The following post was made with pre-GA, beta content.

All performance numbers are subject to final benchmarking results. Please refer to the guidance published at GA.

All content and media are from VMware, as part of the blogger program.

Please also read vSphere 6 – Clarifying the misinformation http://blogs.vmware.com/vsphere/2015/02/vsphere-6-clarifying-misinformation.html

 

Here is the published What’s New: VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Whats_New.pdf

Here is the published VMware Virtual SAN 6.0 Datasheet http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Datasheet.pdf

 

What’s New?

 

vsan01

 

Disk Format

  • New On-Disk Format
  • New delta-disk type vsanSparse
  • Performance Based snapshots and clones

VSAN 5.5 to 6.0

  • In-Place modular rolling upgrade
  • Seamless In-place Upgrade
  • Seamless Upgrade Rollback Supported
  • Upgrade performed from RVC CLI
  • PowerCLI integration for automation and management

 

Disk Serviceability Functions

  • Ability to manage flash-based and magnetic devices.
  • Storage consumption models for policy definition
  • Default Storage Policies
  • Resync Status dashboard in UI
  • VM capacity consumption per VMDK
  • Disk/Disk group evacuation

VSAN Platform

  • New Caching Architecture for all-flash VSAN
  • Virtual SAN Health Services
  • Proactive Rebalance
  • Fault domains support
  • High Density Storage Systems with Direct Attached Storage
  • File Services via 3rd party
  • Limited support for hardware encryption and checksum

 

Virtual SAN Performance and Scale Improvements

 

 

2x VMs per host

  • Larger Consolidation Ratios
  • Due to an increase in supported components per host
  • 9000 Components per Host

 

62TB Virtual Disks

  • Greater capacity allocations per VMDK
  • VMDKs larger than 2TB are supported

Snapshots and Clone

  • Larger supported number of snapshots and clones per VM
  • 32 per Virtual Machine

Host Scalability

  • Cluster support raised to match vSphere
  • Up to 64 nodes per cluster in vSphere

VSAN can scale up to 64 nodes

 

Enterprise-Class Scale and Performance

vsan02

VMware Virtual SAN : All-Flash

Flash-based devices used for caching as well as persistence

Cost-effective all-flash two-tier model:

–Cache is 100% write: uses write-intensive, higher-grade flash-based devices

–Persistent storage: can leverage lower-cost, read-intensive flash-based devices

Very high IOPS: up to 100K IOPS per host
Consistent performance with sub-millisecond latencies

 

vsan05

 

 

Hybrid: 30K IOPS per host. All-flash: 100K IOPS per host with predictable sub-millisecond latency.

 

Virtual SAN Flash Caching Architectures

 

vsan06

 

 

All-Flash Cache Tier Sizing

The cache tier should be sized at 10% of the anticipated consumed storage capacity.

 Measurement Requirement                  Value
 Projected space usage per VM             20 GB
 Projected number of VMs                  1,000
 Total projected space consumption        20 GB x 1,000 = 20,000 GB = 20 TB
 Target flash cache capacity percentage   10%
 Total flash cache capacity required      20 TB x 0.10 = 2 TB

 

  • Cache is entirely write-buffer in all-flash architecture
  • Cache devices should be high-write-endurance models: choose 2+ TBW per day, or 3650+ TBW over 5 years
  • Total cache capacity percentage should be based on use case requirements.

–For general recommendations, see the VMware Compatibility Guide.

–For write-intensive workloads, a larger cache should be configured.

–Increase the cache size if heavy use of snapshots is expected.
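The sizing rule above is simple arithmetic. Here is a minimal Python sketch that reproduces the worked example from the table; the helper name `flash_cache_required_tb` is my own illustration, not a VMware tool:

```python
def flash_cache_required_tb(vm_space_gb, vm_count, cache_pct=0.10):
    """Total flash cache capacity (TB) = projected consumption x cache percentage."""
    total_consumed_tb = (vm_space_gb * vm_count) / 1000.0  # GB -> TB (decimal, as in the table)
    return total_consumed_tb * cache_pct

# Worked example from the table: 20 GB x 1,000 VMs = 20 TB consumed, 10% -> 2 TB of cache.
print(flash_cache_required_tb(20, 1000))  # 2.0
```

Adjust `cache_pct` upward for write-intensive or snapshot-heavy workloads, per the notes above.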

 

New On-Disk Format

 

  • Virtual SAN 6.0 introduces a new on-disk format.
  • The new on-disk format enables:

–Higher performance characteristics

–Efficient and scalable high performance snapshots and clones

–Online migration to the new on-disk format (RVC only)

  • The object store will continue to mount the volumes from all hosts in a cluster and presents them as a single shared datastore.
  • The upgrade to the new on-disk format is optional; the Virtual SAN 5.5 on-disk format will continue to be supported

 

Performance Snapshots and Clones

 

  • Virtual SAN 6.0 new on-disk format introduces a new VMDK type

–Virtual SAN 5.5 snapshots were based on vmfsSparse (redo logs)

  • vsanSparse based snapshots are expected to deliver performance comparable to native SAN snapshots.

–vsanSparse takes advantage of the new on-disk format writing and extended caching capabilities to deliver efficient performance.

  • All disks in a vsanSparse disk-chain need to be vsanSparse (except base disk).

–Cannot create linked clones of a VM with vsanSparse snapshots on a non-vsan datastore.

–If a VM has existing redo log based snapshots, it will continue to get redo log based snapshots until the user consolidates and deletes all current snapshots.

 

 

 

 

Hardware Requirements

 

vsan03

 

Flash-Based Devices in Virtual SAN Hybrid

In the hybrid architecture, all read and write operations always go directly to the flash tier. Flash-based devices serve two purposes:

1. Non-volatile write buffer (30%)
–Writes are acknowledged when they enter the prepare stage on the flash-based devices.
–Reduces write latency.

2. Read cache (70%)
–Cache hits reduce read latency.
–Cache misses retrieve data from the magnetic devices.
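The 30/70 split above can be illustrated with a small sketch; the function and its defaults are illustrative only, not part of any VMware API:

```python
def hybrid_cache_split(flash_capacity_gb, write_buffer_pct=0.30):
    """Split a hybrid disk group's flash device into write buffer and read cache."""
    write_buffer_gb = flash_capacity_gb * write_buffer_pct
    read_cache_gb = flash_capacity_gb - write_buffer_gb
    return write_buffer_gb, read_cache_gb

# A 400 GB flash device: 120 GB write buffer, 280 GB read cache.
print(hybrid_cache_split(400))  # (120.0, 280.0)
```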

 

Flash-Based Devices in Virtual SAN All-Flash

In the all-flash architecture, read and write operations always go directly to the flash devices. Flash-based devices serve two purposes:

1. Cache tier
–High-endurance flash devices.
–Listed on the VCG.

2. Capacity tier
–Low-endurance flash devices.
–Listed on the VCG.
Network

•1Gb / 10Gb supported
–10Gb shared with NIOC for QoS will support most environments
–If 1Gb, dedicated links for Virtual SAN are recommended
–Layer 3 network configuration is supported in 6.0

•Jumbo frames provide a nominal performance increase
–Enable for greenfield deployments
–Enable in large deployments to reduce CPU overhead

•Virtual SAN supports both VSS and VDS; NetIOC requires VDS
•Network bandwidth has more impact on host evacuation and rebuild times than on workload performance
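Since bandwidth mainly affects evacuation and rebuild times, a back-of-the-envelope estimate helps when choosing between 1Gb and 10Gb. This sketch is my own illustration; the 70% efficiency factor is an assumption, not a VMware figure:

```python
def resync_hours(data_tb, link_gbps, efficiency=0.7):
    """Rough time to resync a given amount of data over one network link."""
    bits_to_move = data_tb * 1e12 * 8          # decimal TB -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # assume only a fraction of line rate is achievable
    return bits_to_move / usable_bps / 3600

# Resyncing 10 TB over a single 10Gb link at 70% efficiency: roughly 3.2 hours.
# The same resync over 1Gb takes ~10x longer, which is why dedicated
# (or 10Gb) links are recommended.
print(round(resync_hours(10, 10), 1))  # 3.2
```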

 

High Density Direct Attached Storage

 

vsan04

 

 

–Manage disks in enclosures, which helps enable blade environments
–Flash acceleration provided on the server or in the storage subsystem
–Data services delivered via the VSAN data services and platform capabilities
–Supports a combination of direct-attached disks and high-density attached disks (flash devices and magnetic devices) per disk group

 

Users are expected to configure the HDDAS switch such that each disk is seen by only one host.

–VSAN protects against misconfigured HDDAS (a disk seen by more than one host).
–The owner of a disk group can be explicitly changed by unmounting the disk group and re-stamping it from the new owner.

•If the host that owns a disk group crashes, manual re-stamping can be done on another host.

–Supported HDDAS devices will be tightly controlled by the HCL (exact list TBD); this applies to both HDDAS enclosures and controllers.

 

FOLLOW the VMware HCL. www.vmware.com/go/virtualsan-hcl

 

Upgrade

 

  • Virtual SAN 6.0 has a new on-disk format for disk groups and exposes a new delta-disk type, so upgrading from the 1.0 to the 2.0 on-disk format involves more than upgrading the ESXi/vCenter software.

Upgrades are performed in multiple phases

Phase 1: Fresh deployment of, or upgrade to, vSphere 6.0

vCenter Server

ESXi Hypervisor

Phase 2: Disk format conversion (DFC)

Reformat disk groups

Object upgrade

Disk/Disk Group Evacuation

  • In Virtual SAN 5.5, in order to remove a disk or disk group without data loss, hosts had to be placed in maintenance mode with full data evacuation of all disk groups.
  • Virtual SAN 6.0 introduces the ability to evacuate data from individual disks or disk groups before removing them from Virtual SAN.
  • Supported in the UI, esxcli and RVC.

Check box in the “Remove disk/disk group” UI screen

vsanexample

 

Disks Serviceability

 

Virtual SAN 6.0 introduces a new disk serviceability feature to easily map the location of magnetic disks and flash based devices from the vSphere Web Client.

vsanexample2

  • Light LED on failures: when a disk hits a permanent error, it can be challenging to find where that disk sits in the chassis in order to replace it. When an SSD or MD encounters a permanent error, VSAN automatically turns the disk LED on.
  • Turn disk LED on/off: a user might need to locate a disk, so VSAN supports manually turning an SSD or MD LED on or off.
  • Marking a disk as SSD: some SSDs might not be recognized as SSDs by ESXi. Disks can be tagged or untagged as SSDs.
  • Marking a disk as local: some SSDs/MDs might not be recognized by ESXi as local disks. Disks can be tagged or untagged as local disks.

 

Virtual SAN Usability Improvements

  • What-if-APIs (Scenarios)
  • Adding functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.

–Creating Policies

–Reapplying a Policy

vsanexample3

 

Default Storage Policies

 

  • A Virtual SAN Default Profile is automatically created in SPBM when VSAN is enabled on a cluster.

–Default Profiles are utilized by any VM created without an explicit SPBM profile assigned.

vSphere admins can designate a preferred VM Storage Policy as the default policy for the Virtual SAN cluster.

 

vsanexample4

  • vCenter can manage multiple vsanDatastores with different sets of requirements.
  • Each vsanDatastore can have a different default profile assigned.

Virtual Machine Usability Improvements

  • Virtual SAN 6.0 adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.
  • Virtual SAN’s free disk space is reported as raw capacity.

–With replication, actual usable space is less.

  • New UI shows real usage on

–Flash Devices

–Magnetic Disks

Displayed in the vSphere Web Client and RVC
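The raw-vs-usable distinction above is easy to quantify: with mirroring, Virtual SAN keeps FTT+1 copies of each object. A minimal sketch, with a helper name of my own invention:

```python
def usable_capacity_tb(raw_tb, ftt=1):
    """Approximate usable capacity when every object keeps ftt+1 mirrored copies."""
    return raw_tb / (ftt + 1)

# 40 TB of raw Virtual SAN capacity with the default FTT=1 yields ~20 TB usable.
print(usable_capacity_tb(40))  # 20.0
```

This ignores witness components and metadata overhead, so treat it as an upper bound on usable space.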

vsanexample5

 

Virtual Machine >2TB VMDKs

 

  • In VSAN 5.5, the max size of a VMDK was limited to 2TB.

–Max size of a VSAN component is 255GB.

–Max number of stripes per object was 12.

  • In VSAN 6.0 the limit has been increased to allow VMDK up to 62TB.

–Objects are still striped at 255GB.

  • The 62TB limit is the same as VMFS and NFS, so VMDKs can be migrated between datastore types.
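Because objects are still striped into 255GB components, a larger VMDK simply consumes more components against the 9,000-per-host budget. A quick sketch (my own helper, not a VMware tool):

```python
import math

def vsan_components(vmdk_gb, ftt=0, component_gb=255):
    """Components consumed by one VMDK: ceil(size/255GB) per replica, (ftt+1) replicas."""
    per_replica = math.ceil(vmdk_gb / component_gb)
    return per_replica * (ftt + 1)

# A 62 TB (63,488 GB) VMDK needs 249 components per replica;
# with FTT=1 mirroring that doubles to 498 (witness components not counted here).
print(vsan_components(63488))         # 249
print(vsan_components(63488, ftt=1))  # 498
```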

 

vsanexample6

 

There it is. I tried to lay it out as best I could.

Want to try it? Try the Virtual SAN Hosted Evaluation https://my.vmware.com/web/vmware/evalcenter?p=vsan6-hol

 

What’s New in Virtual SAN 6

Learn how to deploy, configure, and manage VMware’s latest hypervisor-converged storage solution.


 

Roger Lund

See the data, not just the storage with DataGravity

DataGravity (@DataGravityInc) has revealed what its secret “gravity” sauce is all about. Be sure to check out the press release “DataGravity Unveils Industry’s First Data-Aware Storage Platform”.

“NASHUA, N.H., August 19, 2014 – DataGravity today announced the launch of the DataGravity Discovery Series, the first ever data-aware storage platform that tracks data access and analyzes data as it is stored to provide greater visibility, insight and value from a company’s information assets. The DataGravity Discovery Series delivers storage, protection, data governance, search and discovery powered by an enterprise-grade hardware platform and patent-pending software architecture, enabling midmarket companies to glean new insights and make better business decisions.”

We often see new storage arrays with an impressive list of new features. Let’s see how this one stacks up.

From http://datagravity.com/sites/default/files/resource-files/DataGravity-DiscoverySeries-Datasheet-081714.pdf copyright Datagravity

Looks fairly standard, a nice sharp looking box. But what makes this unique?

Why the fuss? Why do we care what is on our storage? With the increasing cost of data, many are looking at what is taking up space and using the disk.

“The unstructured data dilemma is growing, and IDC has been predicting technology would catch up to provide an answer to the market demand,” said Laura Dubois, program vice president of storage at IDC. “The DataGravity approach is transformational in an industry where innovation has been mostly incremental. DataGravity data-aware storage can tell you about the data it’s holding, making the embedded value of stored information accessible to customers who cannot otherwise support the cost and complexity of solutions available today.”

Source http://datagravity.com/datagravity-unveils-first-data-aware-storage

In “The Digital Universe in 2020” report, IDC estimates that the overall volume of digital bits created, replicated, and consumed across the United States will hit 6.6 zettabytes by 2020. That represents a doubling of volume about every three years. “For those not up on their Greek numerical prefixes, a zettabyte is 1,000 exabytes, or just over 25 billion 4-TB drives.” Source http://www.networkcomputing.com/storage/2014-state-of-storage-cost-worries-grow/a/d-id/1113476

Today companies often leverage things like Hadoop, EMC ESA, or new offerings like CloudPhysics to learn how storage is being used in their environment, or to get data out of the system.

This becomes more complex with Big Data

That’s all great, but how does DataGravity change things?

From http://datagravity.com/sites/default/files/resource-files/DataGravity-DiscoverySeries-Datasheet-081714.pdf copyright DataGravity

DataGravity Discovery Series Demo: End user role-based access

DataGravity Discovery Series Demo: Analyze your file shares

DataGravity Discovery Series Demo: Find Experts and Collaborators

Great, I see some neat features. But what about standard storage features, like data protection?

DataGravity Discovery Series Demo: Preview, download and restore files

DataGravity Discovery Series Demo: Provision and protect your storage

I’m excited to see this at VMworld. VMworld attendees can see it in action at the DataGravity booth, #1647, August 24 through 28 in San Francisco.

I will be attending VMworld, and Tech Field Day. Make sure to check back to the DataGravity page. http://techfieldday.com/appearance/datagravity-presents-at-tech-field-day-extra-at-vmworld-us-2014/

Let’s check out what others are saying about DataGravity.

DataGravity – First Look

“Perhaps you’ve heard of DataGravity (@DataGravityInc), or perhaps you haven’t. They’ve been staying pretty quiet about what they’ve been working on. Today, however, they’re dropping the veil. Read on for a look into what you can expect from this exciting announcement!”

by James Green

Why is DataGravity Such a Big Deal?

“DataGravity just released their embargo and my little techie corner of the Internet is on fire. There’s a very good reason for that, but it might not be obvious at a glance. Read on to learn why DataGravity is a Big Deal even though it might not work out.”

by Stephen Foskett

Make sure to check the full articles out.

I hope you enjoyed the write up.

Roger Lund

Re Post : Veeam Best Practices for VMware on Nutanix

Derek Seaman has a nice write up on Veeam Best Practices for VMware on Nutanix

I’m a fan of Veeam, and use in production today. Thus, I wanted to share the write up.

“The goal of the joint whitepaper between Veeam and Nutanix is to help customers deploy Veeam Backup & Replication v7 on Nutanix, when used with VMware vSphere 5.x. This post will highlight some of the major points and how customers can head off some potential issues. The whitepaper covers all the applicable technologies such as VMware’s VADP, CBT, and Microsoft VSS. It also includes and easy to follow checklist of all the recommendations.”

The official whitepaper can be downloaded here.


Veeam is modern data protection for virtual environments, and is also a great sponsor of my blog. The web-scale Nutanix solution and its data locality technology are complemented by the distributed, scale-out architecture of Veeam Backup & Replication v7. The combined Veeam and Nutanix solutions leverage the strengths of both products to provide network-efficient backups, helping customers meet recovery point objective (RPO) and recovery time objective (RTO) requirements.
The architecture is flexible enough to enable the use of either 100% virtualized Veeam components or a combination of virtual and physical components, depending on customer requirements and available hardware. You could also use existing dedicated backup appliances. In short, our joint solution is flexible enough to meet your requirements and efficiently use your physical assets. For example, if you have requirements for tape-out, then you will need at least one physical server in the mix to connect your library to since tape Fibre Channel/SAS pass-thru is not available in ESXi 5.x.

“When virtualizing solution the last thing you want is your backup data stored in the same location as the data you are trying to protect. So the first best practice for a 100% virtualized solution is to use a secondary Nutanix cluster. The cluster would be comprised of at least three Nutanix nodes. This is where the virtualized Veeam Backup & Replication server (along with the data repository), would reside. Should you have a problem with the production Nutanix cluster, your secondary cluster is unaffected. Depending on the amount of data you are backing up and your retention policies, you may or may not want the same Nutanix hardware models as your production cluster. For example, you may want to consider the 6000 series hardware which are ‘storage heavy’ for your secondary cluster. The following figure depicts a virtualized Veeam backup solution.”

Read the full post.

http://www.derekseaman.com/2014/04/veeam-best-practices-vmware-nutanix.html

Thanks to Derek Seaman for the write up.