Tagged in: VMware

Early peek at the #vBrownBag TechTalks schedule at VMworld US 2015

The professionalvmware.com site, home of the #vBrownBag goodness, has posted an early look at the schedule for the #vBrownBag TechTalks in the hang space at VMworld US 2015. I wanted to re-post the schedule here to increase awareness of these talks. If you are going to be in the hang space, please come and support these individuals!
(Note:  Info shared from http://professionalvmware.com/2015/08/vbrownbag-techtalks-schedule-vmworld-usa-2015/)

Time   Monday Tuesday Wednesday Thursday
10:30 vExpert Daily vExpert Daily vExpert Daily vExpert Daily
11:00 Edward Haletky @Texiwill – 6-Step Program for Hybrid Cloud Edward Haletky @Texiwill – Let’s Throw in Some Security: A Hybrid Cloud Security Model Virtualization Practice Panel Chris Wahl – Scripting and Versioning with PowerShell ISE and Git Shell
11:15 Irfan Ahmad – CloudPhysics – The Unofficial VCDX Toolchest Irfan Ahmad – CloudPhysics – Help! My Dashboards Suck! continued Vaughn Stewart – Run 20% – 40% more VMs per host with Flash Storage
11:30 Michael White – Spotlight on Your Data Kenneth Hui -The Easy Button For Turning Your vSphere Environment Into A Private Cloud Gabriel Chapman – Solidfire – Infrastructure Agility in the Software Defined Data Center Josh Atwell – DevOps with PowerCLI and PowerShell DSC
11:45 Andy Warfield – Coho Data Andy Warfield – Coho Data continued Cody Hosterman – Space Reclamation in vSphere 6
12:00 VMUG LATAM Panel (Spanish) Anthony Spiteri – NSX – An Unexpected Journey VMware Communities Podcast Eric Wright – Getting to the Core with CoreOS
12:15 continued Ken Thomas -One Solution for All: Deploy to VMs and Physical using MDT continued James Brown -Virtual Design Master
12:30 Dave LeClair – Unitrends – Recover Simplicity! Dave LeClair – Unitrends – Is your data protection and DR solution built for a cloud-driven world? continued continued
12:45 Joseph Griffiths – Reference Architecture for Automation Luis Concistre – Horizon View & VMware NSX continued Tim Carr – Versioning deployments with Puppet, R10K, and Git
13:00  Kevin Moats – GPU virtualization in the home lab HP Blogger Briefing Dave Frederick – The Advantages of a Tiered Approach to VM Storage Steve Flanders – The importance of customer experience
13:15 Gina Rosenthal (Minks) – Why you need to #BackThatSaaSUp continued Gabriel Maciel – VMware NSX Architecture and Components Interaction
13:30 Scott Davis – Infinio – Disruptive Storage Innovations and the Impact on Virtual Desktop Solutions continued vBrownBag LATAM Panel (Spanish)
13:45 Scott Davis – cont continued continued
14:00 Jaison Bailley and the vBrisket community continued Paul Braren – Home Virtualization Lab Productivity Tips Hang Space Closed
14:15 Luke Brown – Dinosaurs Didn’t Do DR continued Anthony Chow – Micro-Segmentation – a perfect fit for microservices security
14:30 Craig Waters – Anatomy of an SSD Craig Waters – How to start a user group (BYOUG) Shruti Bhat – Live demo – Running your ESXi workloads in AWS
14:45 Jonathan Frappier and Tim Jabaut – Stop being an “Admin” and learn to embrace “DevOps” Edward Haletky @Texiwill – Proper Use of RBAC to Secure your Virtual Environment John Arrasjid – IT Architect: Foundation in the Art of IT Infrastructure Design
15:00 In Tech We Trust Podcast Speaking in Tech Podcast Geek Whisperers Podcast
15:15 continued continued continued
15:30 continued continued continued
15:45 continued continued continued
16:00 Dennis Bray and Chip Copper – Brocade – NSX based Advanced Network Architecture in SDDC to deploy highly automated IT Service Delivery Elver Sena Sosa and Anuj Dewangan – Brocade -Scaling your cloud network: Building a massively scalable cloud infrastructure with VMware NSX & advanced physical network architectures Tony Foster – Sizing for GPU Workloads
16:15 David Siles – Proactive Data-Awareness Inside Your VMs David Klebanov – SDDC Gone Global with Software Defined WAN Brian Trainor – “Automatic” remediation with vRealize Operations
16:30 Michael Rice – Rackspace – Stop testing scripts in production; Meet vCenter Simulator Luke Huckaba – Rackspace – A distilled version of the SRM talk David Klee – SQL Server Virtualization Gotchas
16:45 Michael Rice – Introduction to pyVmomi Michael Rice – YAVIJAVA an alternative Java SDK for vSphere Michael Rice – vCenter Simulator For Functional Testing
17:00 Hersey Cartwright – Using PowerCLI to manage SSH on ESXi Hersey Cartwright – Managing vCenter Roles and Permissions with PowerCLI Hang Space closed
17:15 Eric Wright – Abstract all the things! Simplifying the whole stack James Green, Emad Younis – “Batman” is Not A Career Strategy
17:30 Cisco Panel Nigel Hickey – The #vExpertSpotlight
17:45  continued Steve Flanders – Quality log messages and how to write them  

My vCenter C: drive ran out of space

I came across something interesting today. The 160 GB C: drive on my vCenter Server ran out of space…rather embarrassing. The first thing I checked was why in the heck my alerts didn’t go off…OK, problem fixed. After a couple of Google searches I came across an interesting VMware KB. Apparently there is a bug in the vCenter 5.5 upgrade that enables debug logging on the VMware Syslog Collector service, which logs to C:\ProgramData\VMware\VMware Syslog Collector\logs\debug.log. In my case, the debug log was 62 GB. The fix is rather simple: stop the ‘VMware Syslog Collector’ service, edit the C:\ProgramData\VMware\VMware Syslog Collector\vmconfig-syslog.xml file (make a copy of it first), and change the following:

<debug> <level>1</level> </debug>

to

<debug> <level>0</level> </debug>

 

Remove the debug.log file and start the service again.
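If you hit this on more than one vCenter, the steps above can be scripted. This is a sketch, not an official tool: the paths come from the KB, the service name assumes a default install, and the XML layout is assumed to match the snippet above.

```python
import os
import shutil
import subprocess
import xml.etree.ElementTree as ET

CONFIG_DIR = r"C:\ProgramData\VMware\VMware Syslog Collector"
CONFIG_FILE = os.path.join(CONFIG_DIR, "vmconfig-syslog.xml")
DEBUG_LOG = os.path.join(CONFIG_DIR, "logs", "debug.log")

def disable_debug_logging(config_file):
    """Back up vmconfig-syslog.xml, then set <debug><level> to 0.

    Returns True if the file was changed, False if debug logging was
    already off (or the element was not found)."""
    shutil.copy2(config_file, config_file + ".bak")  # make a copy first, per the KB
    tree = ET.parse(config_file)
    level = tree.getroot().find(".//debug/level")    # wherever it sits in the tree
    if level is not None and (level.text or "").strip() != "0":
        level.text = "0"
        tree.write(config_file)
        return True
    return False

def apply_workaround():
    """Full workaround: stop the service, fix the config, remove the
    runaway log, restart. Run from an elevated prompt on the vCenter server."""
    subprocess.run(["net", "stop", "VMware Syslog Collector"], check=True)
    disable_debug_logging(CONFIG_FILE)
    if os.path.exists(DEBUG_LOG):
        os.remove(DEBUG_LOG)  # mine was 62 GB
    subprocess.run(["net", "start", "VMware Syslog Collector"], check=True)
```

The backup copy means you can always restore the original config if anything looks off after the service restarts.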

 

After upgrading to vCenter Server 5.5, the debug.log file of syslog collector is growing without limit (2094175)

KB: 2094175

  • Updated: Mar 3, 2015
  • Categories:
    Troubleshooting
  • Languages:
    English
  • Product(s):
    VMware vCenter Server
  • Product Version(s):
    VMware vCenter Server 5.5.x

 

Symptoms

  • After upgrading to vCenter Server 5.5, the debug.log file of syslog collector is growing without limit
  • C:\ProgramData\VMware\VMware Syslog Collector\logs\debug.log continues to grow without being rotated

Resolution

This is a known issue affecting VMware Syslog Collector 5.5.

 

Currently, there is no resolution.

To work around this issue:

  1. Stop the VMware Syslog Collector service. For more information, see Stopping, starting, or restarting VMware vCenter Server services (1003895).
  2. On the server running the VMware Syslog Collector service, navigate to C:\ProgramData\VMware\VMware Syslog Collector and save a copy of vmconfig-syslog.xml.
  3. In a text editor, open vmconfig-syslog.xml and modify the debug level from:

    <debug> <level>1</level> </debug>

    to

    <debug> <level>0</level> </debug>

  4. Start the VMware Syslog Collector service. For more information, see Stopping, starting, or restarting VMware vCenter Server services (1003895).

 

-Aaron

Unitrends Free Edition. Protecting your assets with Asset Protection and Job Creation.

Yesterday I wrote about installing and configuring Unitrends Free Edition. Today I will cover protecting your assets with Asset Protection, and then we will go over Job Creation. This lets you back up your VMs for free!

 

Requirements:

For this scenario we will be using vCenter 5.5.

 

And we are off!

 

1. Browse to the UI.

unitrends free edition UI

 

2. Close the Welcome to Unitrends Free dialog.

unitrends free edition ui welcome

3. Navigate to Configure.

unitrends free edition ui configure

4. Click on the Protected Assets Tab.

unitrends free edition ui configure protected assets

5. Click Add, and select Virtual Host.

unitrends free edition ui configure protected assets add virtual host

 

6. Put in your vCenter information and click Save.

unitrends free edition ui configure protected assets add virtual host details

7. Now navigate to Protect.

unitrends free edition ui protect

8. Highlight your vCenter.

unitrends free edition ui protect select vcenter

9. Check the box next to the VM you would like to back up.

unitrends free edition ui protect select vcenter select vm

10. Click Backup.

unitrends free edition ui protect backup

11. A new window will pop up, titled Create Backup Job.

unitrends free edition ui protect create backup job

12. Click Define Job Settings.

unitrends free edition ui protect create backup job define

For now we are going to just run this job once. This is called an on-demand job.

13. Under Schedule, select when to run this job: check Now, and click Save.

unitrends free edition ui protect create backup job define on demand

 

Right away a job success window pops up.

unitrends free edition ui protect job success

 

Let’s take a look at the job to verify its running status.

14. Click View Jobs.

unitrends free edition ui protect jobs running

 

15. Click on the job, and click View Details.

unitrends free edition ui protect jobs running view details

 

Once it completes, you will see a Successful status.

unitrends free edition ui protect job finished 1

Congrats! You have just protected your first VM!

 

 

Roger Lund

Unitrends offers free backups with Unitrends Free

Product overview:

Unitrends is a software company that has been doing backups since 1985, according to Wikipedia. On May 12th they announced an open beta of Unitrends Free, a free version of their product.

Unitrends Free is a virtual appliance that offers data protection in the form of a Linux-based VMware or Hyper-V VM. You can protect VMs with full backups and incremental backups, on demand or scheduled. You can recover individual files, recover the entire VM, or perform an instant recovery.

 

Features in Unitrends Free include:

  • Free vSphere and Hyper-V Backup for Unlimited Virtual Machines (VMs) and Sockets. Unlike other tools that restrict usage by number of VMs or sockets, Unitrends Free provides hypervisor-level protection for up to 1 terabyte (TB) of data.

  • Instant VM Recovery. Unitrends Free makes it possible to quickly run a VM directly from a backup to reduce downtime. Instant recovery also allows users to spin up copies of their VMs for recovery verification, testing and development.

  • Automated Daily Scheduling. The product features “set it and forget it” scheduling with daily recovery points to keep a user’s system protected at all times – even when no one is around.

  • Fast, Incremental Forever Backups. Unitrends Free delivers changed-block tracking and incremental forever backups to ensure that backups complete rapidly every day – without using a lot of storage.

  • Cloud Integration. Users can also take advantage of low cost long-term storage via integration with third-party clouds such as Google Cloud Storage, Google Nearline and Amazon Simple Storage Service (S3).

  • Unitrends Community Integration. Users benefit from limitless support provided by the Unitrends Community. Directly integrated into the Unitrends Free user interface, IT professionals can search the forum and collaborate to help one another, while also earning attractive rewards. Source http://www.unitrends.com/company/press-releases/2015/unitrends-redefines-free%E2%80%9D-virtualization-backup-w

 

Availability:

The beta version of Unitrends Free is available to download now.  Here is the Unitrends Free Administrator’s Guide

In celebration of the launch, one lucky user will win a $1,500 Visa Giftcard.

UnitrendsFree

To download the software and obtain more details regarding the sweepstakes, please visit: http://www.UnitrendsFree.com.

 

I will be following up with installation and configuration of the product.

 

Roger Lund

Opvizor Inside the Dashboard.

This is the first post in a series about Opvizor (http://www.opvizor.com/). Opvizor is a predictive analysis and issue prevention tool.

Before diving in, the previous post, Getting Opvizor Working by Michael White, is worth a read; Michael outlines the install process. I will continue from there, assuming you have set up the product.

 

Once we log in to the webpage at https://opvizor.com/login/, we see a dashboard. Where to start? That is a lot of information.

In the top left you have the environmental overview. This includes your issues per cluster, host, VM, datastore, and network. As you can see I have induced lots of issues into my lab.

vmware vsphere opvizor home

Let’s dig into each.

 

Overall Status

vmware vsphere opvizor overall Statistics

Overall Statistics

vmware vsphere opvizor overall Statistics

Statistics by Upload

vmware vsphere opvizor Statistics by Upload

Issues by Upload

vmware vsphere opvizor Issues by Upload

Top 5 Largest Datastores

vmware vsphere opvizor Top 5 Largest Datastores

Top 5 Entities by Issue Rate

vmware vsphere opvizor Top 5 Entities by Issue Rate

Top 5 VMs by Snapshot Usage

vmware vsphere opvizor top 5 snapshots

 

Yikes, that seems a little large doesn’t it?

 

Top 5 VMs by Memory Usage

vmware vsphere opvizor Top 5 BMs by Memory Usage

These are fairly self-explanatory, but we can dig into each to get more information from this screen.

 

If we click the small table icon vmware vsphere opvizor expand icon next to Top 5 Largest Datastores:

 

vmware vsphere opvizor Top 5 Largest Datastores expand

 

Here we can see more information on the Datastores in my lab.

 

This wraps up our first look at Opvizor in the series.

 

 

Roger Lund

VMware Virtual SAN 6.0

The following blog post is about VMware Virtual SAN (VSAN). http://www.vmware.com/products/virtual-san

 

The following post was made with pre-GA, beta content.

All performance numbers are subject to final benchmarking results. Please refer to guidance published at GA.

All content and media is from VMware, as part of the blogger program.

Please also read vSphere 6 – Clarifying the misinformation http://blogs.vmware.com/vsphere/2015/02/vsphere-6-clarifying-misinformation.html

 

Here is the published What’s New: VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Whats_New.pdf

Here is the published VMware Virtual SAN 6.0 http://www.vmware.com/files/pdf/products/vsan/VMware_Virtual_SAN_Datasheet.pdf

 

What’s New?

 

vsan01

 

Disk Format

  • New On-Disk Format
  • New delta-disk type vsanSparse
  • Performance Based snapshots and clones

VSAN 5.5 to 6.0

  • In-Place modular rolling upgrade
  • Seamless In-place Upgrade
  • Seamless Upgrade Rollback Supported
  • Upgrade performed from RVC CLI
  • PowerCLI integration for automation and management

 

Disk Serviceability Functions

  • Ability to manage flash-based and magnetic devices.
  • Storage consumption models for policy definition
  • Default Storage Policies
  • Resync Status dashboard in UI
  • VM capacity consumption per VMDK
  • Disk/Disk group evacuation

VSAN Platform

  • New Caching Architecture for all-flash VSAN
  • Virtual SAN Health Services
  • Proactive Rebalance
  • Fault domains support
  • High Density Storage Systems with Direct Attached Storage
  • File Services via 3rd party
  • Limited support hardware encryption and checksum

 

Virtual SAN Performance and Scale Improvements

 

 

2x VMs per host

  • Larger Consolidation Ratios
  • Due to an increase in supported components per host
  • 9000 Components per Host

 

62TB Virtual Disks

  • Greater capacity allocations per VMDK
  • VMDK >2TB are supported

Snapshots and Clone

  • Larger supported capacity of snapshots and clones per VMs
  • 32 per Virtual Machine

Host Scalability

  • Cluster support raised to match vSphere
  • Up to 64 nodes per cluster in vSphere

VSAN can scale up to 64 nodes

 

Enterprise-Class Scale and Performance

vsan02

VMware Virtual SAN : All-Flash

Flash-based devices used for caching as well as persistence

Cost-effective all-flash two-tier model:
  • Cache is 100% write: uses write-intensive, higher-grade flash-based devices
  • Persistent storage: can leverage lower-cost, read-intensive flash-based devices

Very high IOPS: up to 100K IOPS/host
Consistent performance with sub-millisecond latencies

 

vsan05

 

 

Hybrid: 30K IOPS/host. All-flash: 100K IOPS/host with predictable sub-millisecond latency.

 

Virtual SAN Flash Caching Architectures

 

vsan06

 

 

All-Flash Cache Tier Sizing

The cache tier should have 10% of the anticipated consumed storage capacity.

Measurement Requirement: Value
Projected VM space usage: 20 GB
Projected number of VMs: 1,000
Total projected space consumption: 20 GB x 1,000 = 20,000 GB = 20 TB
Target flash cache capacity percentage: 10%
Total flash cache capacity required: 20 TB x 0.10 = 2 TB

 

  • Cache is entirely write-buffer in all-flash architecture
  • Cache devices should be high write endurance models: choose 2+ TBW/day, or 3650+ TBW over 5 years
  • Total cache capacity percentage should be based on use case requirements.

–For general recommendations, visit the VMware Compatibility Guide.

–For write-intensive workloads, a higher amount should be configured.

–Increase cache size if expecting heavy use of snapshots.
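Putting the rule of thumb above into a quick calculation (a sketch; the 10% figure and the example numbers come from the sizing table, not from any official sizing tool):

```python
def flash_cache_capacity_gb(vm_space_gb, vm_count, cache_pct=0.10):
    """Required flash cache capacity: a percentage (default 10%) of the
    total anticipated consumed storage capacity."""
    consumed_gb = vm_space_gb * vm_count  # total projected space consumption
    return consumed_gb * cache_pct

# The example from the table: 1,000 VMs at 20 GB each -> 20 TB consumed
print(flash_cache_capacity_gb(20, 1000))  # 2000.0 GB, i.e. 2 TB of cache
```

For write-heavy workloads or heavy snapshot use, you would simply raise `cache_pct`, per the guidance above.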

 

New On-Disk Format

 

  • Virtual SAN 6.0 introduces a new on-disk format.
  • The new on-disk format enables:

–Higher performance characteristics

–Efficient and scalable high performance snapshots and clones

–Online migration to the new format (RVC only)

  • The object store will continue to mount the volumes from all hosts in a cluster and presents them as a single shared datastore.
  • The upgrade to the new on-disk format is optional; the on-disk format for Virtual SAN 5.5 will continue to be supported

 

Performance Snapshots and Clones

 

  • Virtual SAN 6.0 new on-disk format introduces a new VMDK type

–Virtual SAN 5.5 snapshots were based on vmfsSparse (redo logs)

  • vsanSparse based snapshots are expected to deliver performance comparable to native SAN snapshots.

–vsanSparse takes advantage of the new on-disk format writing and extended caching capabilities to deliver efficient performance.

  • All disks in a vsanSparse disk-chain need to be vsanSparse (except base disk).

–Linked clones of a VM with vsanSparse snapshots cannot be created on a non-VSAN datastore.

–If a VM has existing redo log based snapshots, it will continue to get redo log based snapshots until the user consolidates and deletes all current snapshots.

 

 

 

 

Hardware Requirements

 

vsan03

 

In the Virtual SAN hybrid architecture, all read and write operations always go directly to the flash tier. Flash-based devices serve two purposes in the hybrid architecture:

1. Non-volatile write buffer (30%)
– Writes are acknowledged when they enter the prepare stage on the flash-based devices.
– Reduces latency for writes.

2. Read cache (70%)
– Cache hits reduce read latency.
– Cache misses retrieve data from the magnetic devices.
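Given the fixed 30/70 split, the usable write buffer and read cache on a hybrid flash device are easy to work out. A sketch for illustration only; the 400 GB device size is a hypothetical example, not a recommendation:

```python
def hybrid_flash_split(device_gb):
    """Virtual SAN hybrid splits each flash device into a 30% write buffer
    and a 70% read cache (fixed split)."""
    return {
        "write_buffer_gb": device_gb * 30 / 100,
        "read_cache_gb": device_gb * 70 / 100,
    }

# A hypothetical 400 GB flash device:
print(hybrid_flash_split(400))  # {'write_buffer_gb': 120.0, 'read_cache_gb': 280.0}
```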

 

In the Virtual SAN all-flash architecture, read and write operations always go directly to the flash devices.

Flash-based devices serve two purposes in Virtual SAN all-flash:

1. Cache tier
– High-endurance flash devices.
– Listed on the VCG.

2. Capacity tier
– Low-endurance flash devices.
– Listed on the VCG.

Network

• 1Gb / 10Gb supported
– 10Gb shared with NIOC for QoS will support most environments
– If 1Gb, dedicated links for Virtual SAN are recommended
– Layer 3 network configuration supported in 6.0

• Jumbo Frames provide a nominal performance increase
– Enable for greenfield deployments
– Enable in large deployments to reduce CPU overhead

• Virtual SAN supports both VSS & VDS; NetIOC requires VDS

• Network bandwidth has more impact on host evacuation and rebuild times than on workload performance

 

High Density Direct Attached Storage

 

vsan04

 

 

–Manage disks in enclosures – helps enable blade environments
–Flash acceleration provided on the server or in the subsystem
–Data services delivered via the VSAN Data Services and platform capabilities
–Supports a combination of direct-attached disks and high-density attached storage (SSDs and HDDs) per disk group

 

Users are expected to configure the HDDAS switch such that each disk is seen by only one host.

–VSAN protects against misconfigured HDDASs (a disk seen by more than one host).
–The owner of a disk group can be explicitly changed by unmounting and restamping the disk group from the new owner.

•If a host that owns a disk group crashes, manual re-stamping can be done on another host.

–Supported HDDASs will be tightly controlled by the HCL (exact list TBD). This applies to HDDASs and controllers.

 

FOLLOW the VMware HCL. www.vmware.com/go/virtualsan-hcl

 

Upgrade

 

  • Virtual SAN 6.0 has a new on-disk format for disk groups and exposes a new delta-disk type, so upgrading from 1.0 to 2.0 involves more than upgrading the ESXi/vCenter software.

Upgrades are performed in multiple phases

Phase 1: Fresh deployment of, or upgrade to, vSphere 6.0

vCenter Server

ESXi Hypervisor

Phase 2: Disk format conversion (DFC)

Reformat disk groups

Object upgrade

Disk/Disk Group Evacuation

  • In Virtual SAN 5.5, in order to remove a disk or disk group without data loss, hosts had to be placed in maintenance mode with full data evacuation of all disk groups.
  • Virtual SAN 6.0 introduces the ability to evacuate data from individual disks or disk groups before removing them from Virtual SAN.
  • Supported in the UI, esxcli and RVC.

Check box in the “Remove disk/disk group” UI screen

vsanexample

 

Disks Serviceability

 

Virtual SAN 6.0 introduces a new disk serviceability feature to easily map the location of magnetic disks and flash based devices from the vSphere Web Client.

vsanexample2

  • Light LED on failures: when a disk hits a permanent error, it can be challenging to find where that disk sits in the chassis in order to replace it. When an SSD or MD encounters a permanent error, VSAN automatically turns the disk LED on.
  • Turn disk LED on/off: a user might need to locate a disk, so VSAN supports manually turning an SSD or MD LED on/off.
  • Marking a disk as SSD: some SSDs might not be recognized as SSDs by ESX. Disks can be tagged/untagged as SSDs.
  • Marking a disk as local: some SSDs/MDs might not be recognized by ESX as local disks. Disks can be tagged/untagged as local disks.

 

Virtual SAN Usability Improvements

  • What-if APIs (scenarios)
  • Adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.

–Creating Policies

–Reapplying a Policy

vsanexample3

 

Default Storage Policies

 

  • A Virtual SAN Default Profile is automatically created in SPBM when VSAN is enabled on a cluster.

–Default Profiles are utilized by any VM created without an explicit SPBM profile assigned.

vSphere admins can designate a preferred VM Storage Policy as the default policy for the Virtual SAN cluster.

 

vsanexample4

  • vCenter can manage multiple vsanDatastores with different sets of requirements.
  • Each vsanDatastore can have a different default profile assigned.

Virtual Machine Usability Improvements

  • Virtual SAN 6.0 adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.
  • Virtual SAN’s free disk space is raw capacity.

–With replication, actual usable space is less.

  • New UI shows real usage on

–Flash Devices

–Magnetic Disks

Displayed in the vSphere Web Client and RVC

vsanexample5

 

Virtual Machine >2TB VMDKs

 

  • In VSAN 5.5, the max size of a VMDK was limited to 2TB.

–Max size of a VSAN component is 255GB.

–Max number of stripes per object was 12.

  • In VSAN 6.0 the limit has been increased to allow VMDKs of up to 62TB.

–Objects are still striped at 255GB.

  • The 62TB limit is the same as for VMFS and NFS.
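Since objects are still striped at 255 GB, the component count behind a large VMDK follows directly. A sketch; it ignores witness components and stripe-width policy, and the `copies` parameter (for mirrored replicas) is my own addition for illustration:

```python
import math

COMPONENT_SIZE_GB = 255  # objects are striped at 255 GB

def components_per_vmdk(vmdk_tb, copies=1):
    """Minimum number of 255 GB components backing a VMDK of the given size.
    'copies' models mirrored replicas; witnesses and stripe width ignored."""
    size_gb = vmdk_tb * 1024
    return math.ceil(size_gb / COMPONENT_SIZE_GB) * copies

print(components_per_vmdk(62))     # 249 components for a single copy
print(components_per_vmdk(62, 2))  # 498 with one mirror
```

This is one reason the per-host component limit (9,000 in 6.0) matters when planning for very large VMDKs.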

 

vsanexample6

 

There it is. I tried to lay it out as best I could.

Want to try it? Try the Virtual SAN Hosted Evaluation https://my.vmware.com/web/vmware/evalcenter?p=vsan6-hol

 

What’s New in Virtual SAN 6

Learn how to deploy, configure, and manage VMware’s latest hypervisor-converged storage solution.

– See more at: https://my.vmware.com/web/vmware/evalcenter?p=vsan6-hol#sthash.OcuO1gXQ.dpuf

 

Roger Lund

See the data, not just the storage with DataGravity

DataGravity @DataGravityInc has revealed what its secret gravity sauce is all about. Be sure to check out the press release “DataGravity Unveils Industry’s First Data-Aware Storage Platform”.

“NASHUA, N.H., August 19, 2014 – DataGravity today announced the launch of the DataGravity Discovery Series, the first ever data-aware storage platform that tracks data access and analyzes data as it is stored to provide greater visibility, insight and value from a company’s information assets. The DataGravity Discovery Series delivers storage, protection, data governance, search and discovery powered by an enterprise-grade hardware platform and patent-pending software architecture, enabling midmarket companies to glean new insights and make better business decisions.”

We often see new storage arrays with an impressive list of new features. Let’s see how this one stacks up.

From http://datagravity.com/sites/default/files/resource-files/DataGravity-DiscoverySeries-Datasheet-081714.pdf copyright Datagravity

Looks fairly standard: a nice, sharp-looking box. But what makes this unique?

Why the fuss? Why do we care what is on our storage? With the increasing cost of storing data, many are looking at what is taking up the space and using the disk.

“The unstructured data dilemma is growing, and IDC has been predicting technology would catch up to provide an answer to the market demand,” said Laura Dubois, program vice president of storage at IDC. “The DataGravity approach is transformational in an industry where innovation has been mostly incremental. DataGravity data-aware storage can tell you about the data it’s holding, making the embedded value of stored information accessible to customers who cannot otherwise support the cost and complexity of solutions available today.”

Source http://datagravity.com/datagravity-unveils-first-data-aware-storage

In its “The Digital Universe in 2020” report, IDC estimates that the overall volume of digital bits created, replicated, and consumed across the United States will hit 6.6 zettabytes by 2020. That represents a doubling of volume about every three years. For those not up on their Greek numerical prefixes, a zettabyte is 1,000 exabytes, or just over 25 billion 4-TB drives. Source: http://www.networkcomputing.com/storage/2014-state-of-storage-cost-worries-grow/a/d-id/1113476

Today companies often leverage things like Hadoop, EMC ESA or newer offerings like CloudPhysics to learn what people are using storage for in their environment, or to get data out of the system.

This becomes more complex with Big Data

That’s all great, but how does DataGravity change things?

From http://datagravity.com/sites/default/files/resource-files/DataGravity-DiscoverySeries-Datasheet-081714.pdf copyright DataGravity

DataGravity Discovery Series Demo: End user role-based access

DataGravity Discovery Series Demo: Analyze your file shares

DataGravity Discovery Series Demo: Find Experts and Collaborators

Great, I see some neat features. But what about standard storage features, like data protection?

DataGravity Discovery Series Demo: Preview, download and restore files

DataGravity Discovery Series Demo: Provision and protect your storage

I’m excited to see this at VMworld. VMworld attendees can see it in action at the DataGravity booth, #1647, August 24 through 28 in San Francisco.

I will be attending VMworld, and Tech Field Day. Make sure to check back to the DataGravity page. http://techfieldday.com/appearance/datagravity-presents-at-tech-field-day-extra-at-vmworld-us-2014/

Let’s check out what others are saying about DataGravity.

DataGravity – First Look

“Perhaps you’ve heard of DataGravity (@DataGravityInc), or perhaps you haven’t. They’ve been staying pretty quiet about what they’ve been working on. Today, however, they’re dropping the veil. Read on for a look into what you can expect from this exciting announcement!”

by James Green

Why is DataGravity Such a Big Deal?

“DataGravity just released their embargo and my little techie corner of the Internet is on fire. There’s a very good reason for that, but it might not be obvious at a glance. Read on to learn why DataGravity is a Big Deal even though it might not work out.”

by Stephen Foskett

Make sure to check the full articles out.

I hope you enjoyed the write up.

Roger Lund

Re Post : Veeam Best Practices for VMware on Nutanix

Derek Seaman has a nice write up on Veeam Best Practices for VMware on Nutanix

I’m a fan of Veeam, and use it in production today. Thus, I wanted to share the write up.

“The goal of the joint whitepaper between Veeam and Nutanix is to help customers deploy Veeam Backup & Replication v7 on Nutanix, when used with VMware vSphere 5.x. This post will highlight some of the major points and how customers can head off some potential issues. The whitepaper covers all the applicable technologies such as VMware’s VADP, CBT, and Microsoft VSS. It also includes an easy to follow checklist of all the recommendations.”

The official whitepaper can be downloaded here.


Veeam is modern data protection for virtual environments, and are also a great sponsor of my blog. The web-scale Nutanix solution and its data locality technology are complemented by the distributed and scale-out architecture of Veeam Backup & Replication v7. The combined Veeam and Nutanix solutions leverage the strengths of both products to provide network efficient backups to enable meeting recovery point objective (RPO) and recovery time objective (RTO) requirements.
The architecture is flexible enough to enable the use of either 100% virtualized Veeam components or a combination of virtual and physical components, depending on customer requirements and available hardware. You could also use existing dedicated backup appliances. In short, our joint solution is flexible enough to meet your requirements and efficiently use your physical assets. For example, if you have requirements for tape-out, then you will need at least one physical server in the mix to connect your library to since tape Fibre Channel/SAS pass-thru is not available in ESXi 5.x.

“When virtualizing a solution, the last thing you want is your backup data stored in the same location as the data you are trying to protect. So the first best practice for a 100% virtualized solution is to use a secondary Nutanix cluster. The cluster would be comprised of at least three Nutanix nodes. This is where the virtualized Veeam Backup & Replication server (along with the data repository), would reside. Should you have a problem with the production Nutanix cluster, your secondary cluster is unaffected. Depending on the amount of data you are backing up and your retention policies, you may or may not want the same Nutanix hardware models as your production cluster. For example, you may want to consider the 6000 series hardware which are ‘storage heavy’ for your secondary cluster. The following figure depicts a virtualized Veeam backup solution.”

Read the full post.

http://www.derekseaman.com/2014/04/veeam-best-practices-vmware-nutanix.html

Thanks to Derek Seaman for the write up.