As of writing, the Cloud Native Computing Foundation has 137 different certified distributions or flavors of Kubernetes. That’s a very crowded and fragmented marketplace for consumers to choose from. Much of the differentiation between distributions is within the CNI and CSI – the container networking interface and the container storage interface. A company’s opinion or innovation around how to handle networking and storage generally powers the differentiation of a distribution.
HPE Container Platform was first introduced back in November 2019. Unlike its competitors, HPE offers CSIs for many of its storage products, enabling persistent storage across many Kubernetes distributions. And if the company’s strategy is to support many distros, why have its own? HPE’s Container Platform is based on 100% pure Kubernetes. That’s not differentiated, so in what ways is HPE trying to make its distro different?
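To make the CSI point above concrete, here is a minimal sketch of how a vendor’s CSI driver surfaces to a Kubernetes user: a StorageClass names the driver as its provisioner, and a PersistentVolumeClaim requests storage from it. The provisioner string below is a made-up placeholder, not HPE’s actual driver name:

```shell
# Illustrative manifest only; "csi.vendor.example.com" is a placeholder
# provisioner name, not a real CSI driver.
cat > /tmp/csi-demo.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vendor-block
provisioner: csi.vendor.example.com
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vendor-block
  resources:
    requests:
      storage: 10Gi
EOF

# On a live cluster you would apply it with:
#   kubectl apply -f /tmp/csi-demo.yaml
```

Any workload can then claim persistent storage the same way, regardless of which vendor’s driver sits behind the StorageClass – which is exactly why a CSI portfolio lets HPE span many distributions.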
Kubernetes is Kubernetes. In HPE’s case, the lifecycle controller and management plane that came from the BlueData acquisition are the first place it differentiates its offering. HPE touts it as the first enterprise-ready platform – marketing that is next to meaningless. While BlueData released multiple earlier versions, the first HPE release – version 5, back in November 2019 – made the switch to Kubernetes as the orchestration engine while continuing to use the BlueData controller.
Controllers and management planes are important. Lifecycle is an important concept, and Kubernetes is like any other software: you must upgrade and manage your orchestration software. A lot of the value proposition of VMware’s PKS/TKGi offering is its lifecycle management driven through BOSH. The HPE Container Platform has a very similar value proposition to ease administration for the enterprise.
HPE’s Container Platform is bare-metal. It runs directly on hardware and does not have the extra licensing costs or overhead associated with competitors’ offerings that run inside a hypervisor. But unlike competitors on bare metal, HPE touts that its physical platform can support both stateless and stateful workloads. HPE is squarely talking about VMware here.
VMware has made a lot of noise around Kubernetes in the last couple of years by integrating it as a first-class citizen on its hypervisor in the latest release. VMware’s opinion is that the best way to orchestrate containers is on a hypervisor – and HPE is here to say that it believes bare metal is the best way to run containers.
Cloud Native & Traditional, side by side
HPE touts its data layer as the answer to running both stateless, cloud-native workloads and stateful, traditional workloads side by side on its Container Platform. This is an important concept. A big barrier to entry for cloud-native software and operations is the legacy software and systems that companies have investments in.
Within my own organization, the conversation about what is container-ready and what is not is a big sticking point in migrating to an overall DevOps methodology of software release and management. HPE’s ability to let stateful, legacy software run in a container alongside stateless, cloud-native services is a potentially massive advantage.
Opinionated towards Data
The BlueData acquisition powers most of what is in the Ezmeral portfolio. It is an opinionated release of Kubernetes that packages together AI/ML and data analytics software into an app store. In terms of distributions and addressable markets, HPE is smart to focus on a market of maturing, cloud-native software products that tap into the full ability of bare metal to deliver value to customers, and to develop a full ecosystem for that addressable market.
Artificial Intelligence and Machine Learning both benefit from lots of resources – both CPU and RAM – along with many other data services and analytic software packages. This is a smart bet to align to the company’s strength in hardware and focus on software that extends those benefits.
Think DevOps married to machine learning and you get ML Ops: DevOps-like speed and practices brought to machine learning workflows. Machine learning, by its very nature, is iterative, and workflows are key to iterative work. This is a specialized niche of the industry at this time, but it’s one where HPE can stake out a larger opinion and leadership position – rather than targeting mass adoption, like its failed Helion cloud efforts in the past.
Today, HPE introduced Ezmeral, a new software portfolio and brand name, at its Discover Virtual Experience (DVE), with product offerings spanning container orchestration and management, AI/ML and data analytics, cost control, IT automation and AI-driven operations, and security. The new product line continues the company’s trend of creating software closely bundled with its hardware offerings.
Three years ago, HPE divested itself of its software division. Much of the ‘spin-merge’, as it was called, involved a portfolio of software components that didn’t have a cohesive strategy. The software that remained in the company was closely tied to hardware and infrastructure – its core businesses. And like the software that powers its infrastructure, OneView and InfoSight, the new Ezmeral line is built around infrastructure automation, namely Kubernetes.
Infrastructure orchestration is the core competency of the Ezmeral suite of software, with the HPE Ezmeral Container Platform serving as the foundational building block for the rest of the portfolio. Introduced in November, HPE Container Platform is a pure Kubernetes distribution, certified by the Cloud Native Computing Foundation (CNCF) and supported by HPE. HPE touts it as an enterprise-ready distribution. HPE is not doing a lot of value-add in the container space, but it is working to help enterprises adopt and lifecycle Kubernetes.
Kubernetes has emerged as the de-facto infrastructure orchestration software. Originally developed by Google, it allows container workloads to be defined and orchestrated in an automated way, including remediation and leveling. It provides an abstracted way to handle networking and storage – through the use of plug-ins developed by different providers.
A single piece of software does not make a portfolio, however, so there is more. In addition to the Kubernetes underpinnings, HPE’s acquisition of BlueData provides the company with an app store and a controller that assists with the lifecycle and upgrades of both the products running on the Container Platform and the platform itself. There is a lot of value to companies in these components.
In addition, the HPE Data Fabric is a component of the portfolio, allowing customers to pick and choose their stateful storage services and plugins within the Kubernetes platform.
The focus of the App Store is on analytics and big data – with artificial intelligence and machine learning as a focus as well. The traditional big data software titles – Spark, Kafka, Hadoop, etc. – will continue to be big portions of the workloads intended to run on the HPE Ezmeral Container Platform.
The new announcement at the HPE Discover Virtual Experience centers on machine learning and ML Ops.
According to the press release, HPE Ezmeral ML Ops software leverages containerization to streamline the entire machine learning model lifecycle across on-premises, public cloud, hybrid cloud, and edge environments. The solution introduces a DevOps-like process to standardize machine learning workflows and accelerate AI deployments from months to days. Customers benefit by operationalizing AI / ML data science projects faster, eliminating data silos, seamlessly scaling from pilot to production, and avoiding the costs and risks of moving data. HPE Ezmeral ML Ops will also now be available through HPE GreenLake.
“The HPE Ezmeral software portfolio fuels data-driven digital transformation in the enterprise by modernizing applications, unlocking insights, and automating operations,” said Kumar Sreekanti, CTO and Head of Software for Hewlett Packard Enterprise. “Our software uniquely enables customers to eliminate lock-in and costly legacy licensing models, helping them to accelerate innovation and reduce costs, while ensuring enterprise-grade security. With over 8,300 software engineers in HPE continually innovating across our edge to cloud portfolio and signature customer wins in every vertical, HPE Ezmeral software and HPE GreenLake cloud services will disrupt the industry by delivering an open, flexible, cloud experience everywhere.”
HPE also reiterated its engagement with open-source projects and specifically to the Cloud Native Computing Foundation (CNCF), the sponsor of Kubernetes, with an upgraded membership from silver to gold status. The recent acquisition of Scytale, a founding member of the SPIFFE and SPIRE projects, underscores the commitment, representatives say.
“We are thrilled to increase our involvement in CNCF and double down on our contributions to this important open source community,” said Sreekanti. “With our CNCF-certified Kubernetes distribution and CNCF-certification for Kubernetes services, we are focused on helping enterprises deploy containerized environments at scale for both cloud native and non-cloud native applications. Our participation in the CNCF community is integral to building and providing leading-edge solutions like this for our clients.”
Disclaimer: HPE invited me to attend the HPE Discover Virtual Experience as a social influencer. Views and opinions are my own.
Zerto Announces Plans to Extend IT Resilience Platform to Next-Gen Applications
Zerto to provide disaster recovery, continuous data protection, and mobility in a single, simple, scalable IT Resilience Platform for on-premises, cloud, and Kubernetes applications
Boston, June 10, 2020 – Zerto today opened its first virtual user conference, ZertoCON 2020, by unveiling its plans to extend its IT Resilience Platform™ to support next-generation, cloud-native applications. This news reinforces Zerto’s leadership in the data protection and disaster recovery market by providing disaster recovery, data protection, and mobility in a single, simple, scalable platform for on-premises, cloud, and now next-gen applications. As a first step, Zerto announced Zerto for Kubernetes, which will be platform-agnostic and enable protecting applications across Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (GKE), IBM Cloud Kubernetes Service, Microsoft Azure Kubernetes Service (AKS), Red Hat OpenShift, and VMware Tanzu.
“Our IT Resilience Platform has fundamentally changed the way companies protect and backup data across their organizations. With the clear shift towards containers based application development in the market, we are looking to extend our platform to offer these applications the same level of resilience we have delivered to VM based applications,” said Gil Levonai, CMO and senior vice president of product, Zerto. “Today at our first virtual ZertoCON, we are announcing our vision and plan for resilience in next generation applications. While next-gen applications are built with a lot of internal availability and resilience concepts, they still require an easy and simple way to recover from human error or malicious attacks, or to be mobilized and recovered quickly without interruption. This is where Zerto can help.”
“Zerto is focusing on resilience for next-gen applications with its continuous data protection and journaling technology,” said Christophe Bertrand, senior analyst, ESG. “With Kubernetes becoming the critical technology in container orchestration for managing production applications, the availability of Zerto for Kubernetes is very timely and needed by many organizations. It will enable enterprises to efficiently protect, recover, and also move applications and their persistent data for Intelligent Data Management use cases such as accelerated development and delivery within DevOps workflows.”
Resilience for Next-Gen Applications
Building on its expertise with continuous data protection, Zerto will add new capabilities to its IT Resilience Platform to disrupt the way that backup and disaster recovery is done for next generation applications. Zerto for Kubernetes will include:
- Data protection as code: Integrate both data protection and disaster recovery into the application deployment lifecycle from day one.
- Continuous replication: Always-on replication provides protection and fast recovery of persistent volumes within and between clusters, data centers, or clouds.
- Application-centric protection: Beyond protecting the persistent data, protect, move and recover complex applications as one consistent entity, including all associated Kubernetes objects and metadata.
- Low RPOs for Kubernetes: Continuous journaling for Kubernetes applications, including Persistent Volumes, StatefulSets, Deployments, Services, and ConfigMaps.
- Point-in-time restore: Granular journal provides thousands of recovery checkpoints to overcome challenges of synchronous replication while providing more point-in-times than traditional backup.
- Ordered recovery: Support for ordered recovery of microservices in complex applications.
- Simplicity: Easy management with a powerful kubectl plug-in and native Kubernetes tooling.
- Non-disruptive recovery and testing: Instantiate a full running copy of an entire application in minutes from any point in time for testing or recovery from data corruption or ransomware without impacting production.
- Automated workflows: Workflows for failover, failover test, restore, rollback restore, and commit restore.
“At IFDS, the technology and services we provide underpin the world’s largest financial institutions. Our aim is to continually invest in award winning technology and ongoing R&D to deliver the most accurate, innovative, and reliable financial processing platform,” said Kent Pollard, senior infrastructure architect, IFDS. “As an early adopter of Zerto’s IT Resilience Platform for next-gen applications, data protection has become an integral part of the development cycle. Not only can we protect every piece of the application, but non-disruptive testing can happen throughout the process. It also has low RPO and is so easy to manage.”
Zerto for Kubernetes is targeted to be released next year, with an early adopter program launching later in 2020. Interested customers are encouraged to visit our site to learn more.
“Businesses are increasingly leveraging containers and Kubernetes to manage enterprise applications,” said Manvinder Singh, director, partnerships at Google Cloud. “We’re pleased that Zerto will expand its capabilities in data protection, disaster recovery, automated workflows, and more to help these enterprises deploy and manage Kubernetes at scale.”
“As customers increasingly move business-critical applications to Kubernetes on Azure, Zerto’s continuous data protection technology will be valuable to safeguard stateful production-scale deployments using Azure Kubernetes Service,” said Aung Oo, head of product, Azure Disks & Files, Microsoft Corp.
“Red Hat OpenShift provides an industry-leading, comprehensive enterprise Kubernetes platform, pairing the scale of Kubernetes with the simplicity of Operators and Red Hat’s overall commitment to open source innovation. We’re pleased that Zerto has made its IT Resilience Platform available for Red Hat OpenShift, providing further choice to our customers in how they secure and protect their critical operations,” said Julio Tapia, senior director, partner ecosystem, cloud platforms, Red Hat.
“Developers need to be able to push applications into production quickly and also securely,” said Amit Sabharwal, Director, Tanzu technology and business solutions alliances, VMware. “Kubernetes is a great starting point, and VMware has expanded on its core functionality with a suite of products in the VMware Tanzu portfolio to provide a more developer friendly, enterprise-ready, and secure-by-default experience. The Zerto platform provides a simple user experience and built-in orchestration for disaster recovery for applications running on Kubernetes.”
For further information about Zerto for containerized applications, visit: www.zerto.com/zerto-for-kubernetes/
For more information about ZertoCON, visit: https://www.zerto.com/zertocon/
Zerto helps customers accelerate IT transformation by reducing the risk and complexity of modernization and cloud adoption. By replacing multiple legacy solutions with a single IT Resilience Platform, Zerto is changing the way disaster recovery, data protection and cloud are managed. With enterprise scale, Zerto’s software platform delivers continuous availability for an always-on customer experience while simplifying workload mobility to protect, recover and move applications freely across hybrid and multi-clouds. Zerto is trusted globally by over 8,000 customers, works with more than 1,500 partners and is powering resiliency offerings for 450 managed services providers. Learn more at Zerto.com.
Exciting news from VMware this week.
“Our customers who are most successful in meeting the changing needs of their customers simultaneously work on two initiatives: modernize their approach to applications, and modernize the infrastructure that those applications run on, to meet the needs of their developers and IT teams.
As part of these initiatives, customers want to gain the benefits of a cloud operating model, which means having rapid, self-service access to infrastructure, simple lifecycle management, security, performance, and scalability.
This is exactly what vSphere now provides. vSphere 7 is the biggest release of vSphere in over a decade and delivers these innovations and the rearchitecting of vSphere with native Kubernetes that we introduced at VMworld 2019 as Project Pacific.
The headline news is that vSphere now has native support for Kubernetes, so you can run containers and virtual machines on the same platform, with a simple upgrade of the system that you’ve currently standardized on and adopting VMware Cloud Foundation. In addition, this release is chock-full of new capabilities focused on significantly improving developer and operator productivity, regardless of whether you are running containers.”
Today I want to talk about Snapt:
But wait… let’s talk about modern service delivery and cover some lingo first.
We have all heard the terms “digital transformation” and “Mode 2 bimodal IT,” and businesses in 2020 are trying to drive “digital innovation” (I’ve linked these terms to the first Google matches I could find). In short, companies are trying to use modern app delivery fabrics like Kubernetes with a set of DevOps practices.
Sounds easy, right? Well, traditional application delivery controllers will not support the cloud’s microservices-centric design.
But first, what is Kubernetes? “Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.”
As you can see above, containers give you an easy way to bundle and run applications, and Kubernetes adds a standard framework on top so that processes and tasks can be automated. Let’s see what the Kubernetes website says Kubernetes provides you with:
- Service discovery and load balancing
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
- Storage orchestration
Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
- Automated rollouts and rollbacks
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
- Automatic bin packing
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
- Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
- Secret and configuration management
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
As you can see, this is why Kubernetes has become the default back end of modern service delivery.
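To ground the feature list above, here is a minimal sketch of a manifest that exercises several of those capabilities at once: resource requests for automatic bin packing, replica management for self-healing, and a Secret injected without baking it into the image. All names and values (`web`, `web-secret`, the token) are illustrative placeholders:

```shell
# Write an illustrative manifest; every name and value is a placeholder.
cat > /tmp/web-demo.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: web-secret
stringData:
  api-token: "example-token"   # placeholder; never commit real secrets
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps 3 healthy copies running
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
      - name: web
        image: nginx:1.19
        resources:
          requests:            # used for automatic bin packing
            cpu: "250m"
            memory: "128Mi"
        envFrom:
        - secretRef:
            name: web-secret   # secret injected, not baked into the image
EOF

# On a live cluster you would apply it with:
#   kubectl apply -f /tmp/web-demo.yaml
```

Declaring desired state like this, and letting the control loop reconcile toward it, is the core idea behind every bullet in the list.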
Today I’m going to talk about Snapt Nova.
Snapt-Nova, Centrally Managed, Cloud Native Application Delivery
Kubernetes native ADC platform for DevOps, Developers and Infrastructure IT at companies embracing digital transformation and migrating workloads from legacy load balancers to a more modern app delivery fabric.
- Hyper scale Cloud & Kubernetes native ADC: Load Balancer, WAF, GSLB
- Easy Self-Service – Centrally Orchestrated, Configured, Deployed, Monitored
- Automatically Scale out for bursts, scale back in. No more overprovisioning
- Host on-prem or in Snapt’s cloud
- Dynamic and intelligent GSLB
- Control Plane Native – Easily replicate configurations at scale
- Robust telemetry, reporting, and metrics
- Deploy in any cloud and any location
How Nova Works
- Nova ADC Platform – Centrally deployed anywhere, anytime. Nova deploys ADCs into any cloud, VM and container. Nova ADCs are Layer 7 load balancers, web accelerators, WAF and GSLB.
- Nova Cloud Controller – the Nova Cloud provides centralized deployment, configuration, management, analytics, monitoring, telemetry, and more for all of your Nova ADC nodes.
- Nova Destinations – destinations are powerful Nova powered intelligent DNS addresses that allow auto-scaling, traffic routing, disaster recovery, multi-cloud routing and much more.
Kubernetes native Load Balancer, WAF, GSLB centrally managed, and autoscaling for Kubernetes visibility, performance, and reliability.
Multi-cloud and Multi-site ADC
Snapt Nova is a centralized platform for ADCs. Deploy into any cloud or location and manage, configure, and route the ADCs centrally.
Snapt Nova – Cloud-Native ADC Service Discovery. Snapt Nova features automated service discovery for Kubernetes, Consul, Docker, Rancher, and more.
Snapt Nova – the first purpose-built microservices and cloud-native ADC for DigitalOcean. Droplet discovery, automated deployments, and more.
Snapt for AWS EC2 is a supercharged load balancer, web accelerator, and WAF that is much more robust than ELB. Deploy our AMI right into AWS.
Web/HTTP Load Balancing
Intelligent, powerful Web Layer 7 HTTP/S Load Balancing, WAF and GSLB for your website, e-commerce store, and web apps. Free trial
Snapt intelligent load balancing, WAF and GSLB for Microsoft Exchange 2010, 2013 and 2016, Microsoft RDP and IIS. Snapt is a Microsoft partner.
Who is Snapt-Nova for?
- For companies embracing digital transformation that need full-featured yet right-sized ADCs for different clouds, platforms, locations, and budgets
- For DevOps, developers, and infrastructure IT pros embracing digital transformation who need future-proof, flexible, on-demand application scalability for any platform and need
- For enterprises migrating from legacy hardware load balancers to a modern app delivery fabric that is more scalable and easier to manage than legacy vendors offer.
- For IT looking to consolidate disparate load balancers into a single, powerful, scalable platform for application delivery and visibility in clouds, Kubernetes, on-prem, VMs – anywhere and at any scale
Setup and Deployment
But what’s using this like? How do I sign up? What are my first steps?
- First, some housekeeping. I’m going to show you how easy it is to deploy a node, and even go through the first steps of the orchestration.
- This isn’t an example of a best-practice deployment but rather a snapshot view into the product, to show you how easy and simple it is to use and deploy.
That said, onwards!
Sign up for the free trial at https://nova.snapt.net/
Fill out the details and we’re all set to rock and roll.
You’ll see a page asking about your org, type that in and click create organization.
Next you’ll see Step one through four.
Let’s walk through creating a node, by clicking on Create a Node.
I’m going to add a node called First Node; once I typed it in, I hit Create.
You will see this page, showing you Nova Installation Options.
Here you can proceed with a Docker install or Ubuntu Linux, or grab a virtual machine image.
You’ll also see the Manuals.
I’m going to show you an example of how to deploy this with the above Docker image. I have redacted the ID and key for security reasons.
# --cap-add=NET_ADMIN lets the node manage host network interfaces;
# --network=host runs it on the host's network stack; mounting docker.sock
# lets the Nova client manage local containers; /etc/nova persists config.
sudo docker run --cap-add=NET_ADMIN --ulimit nofile=500000:500000 -d -t \
--network=host --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock -v /etc/nova:/etc/nova \
-e NODE_ID='putidhere' \
-e NODE_KEY='putkeyhere' \
-e NODE_HOST='poll.nova-adc.com' novaadc/nova-client:latest
sudo: docker: command not found
Hmmm, looks like I have to install Docker. Let’s do that quickly.
sudo apt-get update &&
sudo apt install docker.io -y
sudo systemctl start docker &&
sudo systemctl enable docker
Unable to find image ‘novaadc/nova-client:latest’ locally
latest: Pulling from novaadc/nova-client
6abc03819f3e: Pull complete
05731e63f211: Pull complete
0bd67c50d6be: Pull complete
5573ecb4be96: Pull complete
049e27438cf3: Pull complete
eb98b228f2cd: Pull complete
be9c74bbddc6: Pull complete
e315c15c9b6c: Pull complete
d205ee8965c2: Pull complete
73a00ea2ce43: Pull complete
16948cade704: Pull complete
b7db21670be1: Pull complete
Status: Downloaded newer image for novaadc/nova-client:latest
Stage Right, Platform usage.
OK, now we should be set. Next, navigate to ADCs, Management, and Backends, and let’s make a backend.
You’ll see a result like this.
Next we will need an ADC.
As you can see you can change and configure this as needed. Next, I want to attach a node to my ADC.
I clicked Edit. In this example, I’m just going to click my only node, and tag it with all nodes, as it’s my only tag in my example.
I hope this example of what Snapt Nova’s platform has to offer was interesting to you. I hope I get a chance to dive deeper into this interesting technology in the future.
An overview of Big Cloud Fabric
Big Switch labels Big Cloud Fabric the “Next-Gen Data Center Switching Fabric.”
From the above link:
Big Cloud Fabric™ is the next-generation data center switching fabric delivering operational velocity, network automation and visibility for cloud-native applications and software-defined data centers, while staying within flat IT budgets.
Enterprise data centers are challenged today to support cloud-native applications, drive business velocity and work within flat budgets.
The network layer is often cited as the least agile part of data center infrastructure, especially when compared to compute infrastructure. The advent of virtualization changed the server landscape and delivered operational efficiencies across management workflows via automation. Emerging cloud-native applications are expected to demand even greater agility from the underlying infrastructure.
Most data centers are built using an old network architecture, a box-by-box operational paradigm that inhibits the pace of IT operations in meeting the demands of modern applications and software-defined data centers. Click here for more information on the challenges.
The software-defined data center is demanding network innovation. With virtualization going mainstream, networks are required to provide visibility into virtual machines and east-west traffic across VMs, and to deliver network service connectivity easily. Networks are expected not to adversely impact software-defined data center agility by mandating manual box-by-box network configuration and upgrades. Emerging cloud-native applications require rapid application and services deployment, which demands that network operations be more automated instead of relying on manual CLI and limited GUI workflows. Lastly, infrastructure budget trends have flat-lined in most organizations, which demands an innovative approach compared to legacy networks built on proprietary hardware that increase costs.
These network demands are met by software-defined networking (SDN) solutions. Leveraging a centralized controller, SDN networks overcome the box-by-box operational paradigm to deliver business velocity. As applications become more distributed, SDN approaches are required for networks to become agile and automated via orchestrated workflows using RESTful APIs. By leveraging open, industry-standard network hardware, SDN solutions provide vendor choice and drive down costs in a flat-budget environment. This cycle of innovation has been witnessed before in server infrastructure, driven by virtualization and containers. More recently, storage infrastructure is being transformed as well with various software-enabled architectures.
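As a rough illustration of the “orchestrated workflows using RESTful APIs” idea, an automation script talks to the SDN controller over HTTP instead of logging into switches one by one. The controller hostname and API path below are hypothetical placeholders, not Big Switch’s actual API:

```shell
# Hypothetical controller URL and path; consult the vendor's API reference
# for the real endpoints before automating against a fabric.
CONTROLLER="https://controller.example.com"
ENDPOINT="$CONTROLLER/api/v1/switches"

# In a real workflow, the JSON returned by a call like this would feed
# orchestration tooling (Ansible, scripts, CI pipelines):
#   curl -s -H "Authorization: Bearer $API_TOKEN" "$ENDPOINT"
echo "$ENDPOINT"
```

The point is the shape of the workflow: one authenticated HTTP call against a central controller replaces a box-by-box CLI session on every switch.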
Let’s dig in. Below is an overview of the Clos fabric.
What are my use cases? What type of deployments support the fabric?
Who uses the product today?
How do I deploy this with my existing data center, do I need to worry about my legacy network working with Big Switch?
What has defined customer success?
APIs are key – how do you leverage them for automation?
How do you enable me to outscale my competitors?
How do you allow me to see inside my network?
How do you support multi-tenancy?