As of writing, the Cloud Native Computing Foundation lists 137 different certified distributions or flavors of Kubernetes. That is a very crowded and fragmented marketplace for consumers to choose from. Much of the differentiation between distributions lies in the CNI and CSI, the Container Network Interface and Container Storage Interface. A company's opinion or innovation around how to handle networking and storage generally powers the differentiation of a distribution.
HPE Container Platform was first introduced back in November 2019. Unlike its competitors, HPE offers CSI drivers for many of its storage products, enabling persistent storage across many Kubernetes distributions. And if the company's strategy is to support many distros, why have its own? HPE's Container Platform is based on 100% pure Kubernetes. That alone is not differentiated, so in what ways is HPE trying to make its distro different?
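To make the CSI point concrete, here is a minimal sketch of how a CSI driver typically surfaces to a Kubernetes user on any certified distribution: a StorageClass names the driver's provisioner, and workloads request persistent storage through a PersistentVolumeClaim against that class. The provisioner string `csi.hpe.com` is the name commonly associated with the HPE CSI Driver for Kubernetes, but treat it, along with the class and claim names, as illustrative assumptions rather than a reference configuration.

```yaml
# Illustrative sketch only: a StorageClass pointing at a CSI driver's provisioner.
# "csi.hpe.com" is assumed here; check the driver's documentation for real values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-block-storage
provisioner: csi.hpe.com
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# Any workload can then claim persistent storage through that class
# without knowing which array or platform sits behind it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hpe-block-storage
  resources:
    requests:
      storage: 20Gi
```

The point of the abstraction is that the same claim works whether the cluster is HPE's own distribution or someone else's, which is exactly why shipping CSI drivers broadly and shipping a distribution are not contradictory strategies.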
Controller
Kubernetes is Kubernetes. In HPE's case, the lifecycle controller and management plane that its Container Platform gained from the acquisition of BlueData is the first place it differentiates its offering. HPE touts it as the first enterprise-ready platform, marketing that is next to meaningless. While BlueData released multiple earlier versions, the first HPE release, version 5 in November 2019, made the switch to Kubernetes as the orchestration engine while continuing to use the BlueData controller.
Controllers and management planes matter. Lifecycle is an important concept, and Kubernetes is like any other software: you must upgrade it and manage your orchestration layer over time. A lot of the value proposition of VMware's PKS/TKGi offering is its lifecycle management driven through BOSH. The HPE Container Platform makes a very similar promise to ease administration for the enterprise.
Physical
HPE's Container Platform is bare-metal. It runs directly on hardware and avoids the extra licensing costs and overhead associated with competitors' offerings that run inside a hypervisor. But unlike other bare-metal competitors, HPE touts that its physical platform can support both stateless and stateful workloads. HPE is squarely talking about VMware here.
VMware has made a lot of noise around Kubernetes in the last couple of years, integrating it as a first-class citizen on its hypervisor in the latest release. VMware's opinion is that the best way to orchestrate containers is on a hypervisor, and HPE is here to say that it believes bare metal is the best way to run containers.
Cloud Native & Traditional, side by side
HPE touts its data layer as the answer to running stateless, cloud-native workloads and stateful, traditional workloads side by side on its Container Platform. This is an important concept. A big barrier to entry for cloud-native software and operations is the legacy software and systems that companies have already invested in.
Within my own organization, the conversation about what is container-ready and what is not is a big sticking point in migrating to an overall DevOps methodology for software release and management. HPE's ability to run stateful, legacy software in a container alongside stateless, cloud-native services is a potentially massive advantage.
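As a rough illustration of what "side by side" means in practice (a generic Kubernetes sketch, not anything HPE-specific), the stateless piece is typically a Deployment with disposable replicas and no volumes, while the stateful, legacy piece runs as a StatefulSet that claims persistent storage through a CSI-backed class like the one sketched earlier. All names and images below are hypothetical.

```yaml
# Hypothetical stateless front end: disposable replicas, no persistent state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# Hypothetical stateful legacy database: stable identity plus a persistent
# volume claimed from the CSI-backed storage class.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: legacy-db
spec:
  serviceName: legacy-db
  replicas: 1
  selector:
    matchLabels:
      app: legacy-db
  template:
    metadata:
      labels:
        app: legacy-db
    spec:
      containers:
        - name: db
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: hpe-block-storage
        resources:
          requests:
            storage: 50Gi
```

Both objects can live in the same cluster and even the same namespace; the data layer underneath is what determines whether the stateful half performs well enough to be worth containerizing.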
Opinionated towards Data
The BlueData acquisition powers most of what is in the Ezmeral portfolio. The result is an opinionated release of Kubernetes that packages AI/ML and data analytics software into an app store. In terms of distributions and addressable markets, HPE is smart to focus on a market with maturing cloud-native software products, to tap into the full ability of bare metal to deliver value to customers, and to build a full ecosystem around that addressable market.
Artificial Intelligence and Machine Learning both benefit from abundant resources, CPU and RAM alike, along with a range of data services and analytics software packages. This is a smart bet: align to the company's strength in hardware and focus on software that extends those benefits.
ML Ops
Think DevOps married to Machine Learning and you get ML Ops: DevOps-like speed and practice brought to Machine Learning workflows. Machine Learning is, by its very nature, iterative, and workflows are key to iterative work. This is a specialized niche of the industry right now, but it is one where HPE can stake out an opinion and a leadership position rather than chasing mass adoption, as it did with its failed Helion cloud efforts in the past.