For users getting started on vSphere 8 and Tanzu, after you get everything set up and configured, you'll likely hit a snag. VMware's GitHub hosts a quick start guide and YAML manifest for deploying your first Tanzu workload cluster. It will fail with the following error:
Error from server (unable to find a compatible full version matching version hint "1.18" and default OS labels: "os-arch=amd64,os-name=photon,os-type=linux,os-version=3.0")
vCenter and Kubernetes Versioning
Why does this error occur on a brand new cluster? The answer is a versioning issue.
Tanzu Kubernetes Grid is tied to the vCenter version, and since we are running vSphere 8 and vCenter 8, the release supports only specific Kubernetes versions; Kubernetes 1.18 is not one of them. vCenter 8.0.0.10100 ships with Tanzu Kubernetes Grid 1.6.1, and the compatible Kubernetes versions are 1.21, 1.22, and 1.23. The supervisor cluster deployed with vCenter 8.0.0.10100 runs Kubernetes 1.23.10.
My VMware team has told me that each vCenter release ships with a specific Kubernetes release, and upgrading vCenter is the way to upgrade your Tanzu Kubernetes versions. In addition to the vCenter and Tanzu Supervisor Cluster being joined at the hip, VMware will support the current and two prior Kubernetes releases on each vCenter version.
VMware, for its part, doesn't do much to clear up the versioning confusion, as Tanzu has its own versioning, separate from both vCenter and the Kubernetes releases: Tanzu Kubernetes Grid 1.6.1 ships with Kubernetes 1.23.10, TKG 1.5.4 shipped with Kubernetes 1.22.9, and TKG 1.4.2 with Kubernetes 1.21.8.
Demo YAML File
Source: https://raw.githubusercontent.com/vsphere-tmm/vsphere-with-tanzu-quick-start/master/manifests/tkc.yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-1
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      class: guaranteed-small
      count: 1
      storageClass: kubernetes-demo-storage
    workers:
      class: guaranteed-small
      count: 1
      storageClass: kubernetes-demo-storage
  settings:
    storage:
      classes: ["kubernetes-demo-storage"] # Named PVC storage classes
      defaultClass: kubernetes-demo-storage # Default PVC storage class
Troubleshooting the Issue
My first thought: can my supervisor cluster see the library of Kubernetes images, and how can I verify that? Google results pointed me to a post from Anthony Spiteri with some helpful troubleshooting commands. After verifying that my content library was linked correctly, I ran the following command:
kubectl get virtualmachineimages
This output is a good sign: I can see a LOT of images available to this namespace. That's promising, but it's no silver bullet for why I'm getting this error. Digging a bit further into the Google results, I hit another kubectl command:
kubectl get tanzukubernetesreleases
BINGO. In the output, I noticed that the version I requested in the YAML was not listed as "READY" or "COMPATIBLE".
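For reference, the output looks roughly like the following. This is an illustrative sketch, not captured output; release names, versions, and column values will differ in your environment:

```
NAME                      VERSION                  READY   COMPATIBLE
v1.18.x---vmware.x-tkg.x  1.18.x+vmware.x-tkg.x    False   False
v1.23.8---vmware.x-tkg.x  1.23.8+vmware.x-tkg.x    True    True
```

Any release showing True under both READY and COMPATIBLE is a valid candidate for the distribution version in your manifest.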
What needs to be altered to allow this to run on vCenter 8 and Tanzu Kubernetes Grid 1.6.1?
It's a simple change: just adjust the distribution version number to 1.21, 1.22, or 1.23. The error is actually quite helpful, in that it says it can't find a compatible version.
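You can make the edit by hand, or script it. Here is a minimal sketch using GNU sed; the heredoc just recreates the relevant lines of the manifest so the example is self-contained — in practice, point sed at the tkc.yaml you downloaded:

```shell
# Recreate only the relevant portion of the manifest for this demo;
# in practice you would edit the real tkc.yaml from the quick start repo.
cat > tkc.yaml <<'EOF'
spec:
  distribution:
    version: v1.18
EOF

# Swap the unsupported version hint for one the supervisor cluster offers.
sed -i 's/version: v1\.18/version: v1.23/' tkc.yaml

# Confirm the change took effect.
grep 'version:' tkc.yaml
```

Note that sed's `-i` flag behaves differently on macOS/BSD (it requires a backup suffix argument, e.g. `-i ''`).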
Working YAML
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-1
spec:
  distribution:
    version: v1.23
  topology:
    controlPlane:
      class: guaranteed-small
      count: 1
      storageClass: kubernetes-demo-storage
    workers:
      class: guaranteed-small
      count: 1
      storageClass: kubernetes-demo-storage
  settings:
    storage:
      classes: ["kubernetes-demo-storage"] # Named PVC storage classes
      defaultClass: kubernetes-demo-storage # Default PVC storage class
Further assumptions: for this file to work, you need to have created a "kubernetes-demo-storage" storage policy and assigned it to the namespace where you're running this YAML. You also need the default VM class "guaranteed-small" assigned to the namespace and available for consumption. If that's in place, run kubectl apply from the same directory as the YAML file and voilà, you will get a working Tanzu Kubernetes Cluster for workloads!
kubectl apply -f ./tkc.yaml
Happy clustering!