Does anyone know where to get the grid and/or member versions via the API? I would have thought it's in the member node_info field, but it doesn't seem to be there. I also tried to grab it via upgradestatus, but I can't get the current_version field to return. I'm not seeing any other place in the wapidoc that lists the version.
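One trick that helps here: an unauthenticated-style schema request (`GET /wapi/vX.Y/?_schema`) returns, among other things, the list of WAPI versions the appliance supports, which you can use to infer roughly which NIOS release you are talking to. This is a sketch only; the grid master hostname, credentials, and sample response below are hypothetical, and the exact fields in the schema response can vary by NIOS release.

```python
import json


def wapi_schema_url(grid_master: str, wapi_version: str = "1.0") -> str:
    """Build the URL for a WAPI schema request. The response advertises
    the WAPI versions the appliance supports (not the NIOS version itself)."""
    return f"https://{grid_master}/wapi/v{wapi_version}/?_schema"


def supported_wapi_versions(schema_json: str) -> list:
    """Extract the 'supported_versions' list from a schema response body."""
    return json.loads(schema_json).get("supported_versions", [])


# Example usage against a real grid master (hypothetical host/credentials):
# import requests
# resp = requests.get(wapi_schema_url("gm.example.com"),
#                     auth=("admin", "..."), verify=False)
# print(supported_wapi_versions(resp.text))

# Parsing a sample response body:
sample = '{"requested_version": "1.0", "supported_versions": ["1.0", "2.10", "2.12"]}'
print(supported_wapi_versions(sample))  # ['1.0', '2.10', '2.12']
```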
Aha! I always try to use the highest WAPI version from the oldest currently supported NIOS version. That way my examples are pretty much up-to-date without being bleeding edge. Today that would be NIOS 8.4.x and WAPI 2.10.
I'm poring over the SGE documentation and I can't find a simple version kind of command. In particular, I want to read up on this system that is already installed via Rocks 5.2.2 and I want to be sure I'm reading the right docs. Is there an SGE command that will tell me the version?
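SGE doesn't ship a dedicated version command, but most SGE binaries print the release in the first line of their help output, so `qstat -help` (or `qconf -help`) is the usual way to check. A small sketch that extracts the version token from that output; the sample help text below is an assumption based on typical SGE 6.x output:

```python
import re
from typing import Optional


def parse_sge_version(help_output: str) -> Optional[str]:
    """Pull the version token (e.g. '6.2u5') out of the first line of
    `qstat -help` output, which typically begins with something like 'GE 6.2u5'."""
    first_line = help_output.splitlines()[0] if help_output else ""
    match = re.search(r"\b(\d+\.\d+(?:u\d+)?)\b", first_line)
    return match.group(1) if match else None


# Example usage on the cluster itself:
# import subprocess
# out = subprocess.run(["qstat", "-help"], capture_output=True, text=True).stdout
# print(parse_sge_version(out))

print(parse_sge_version("GE 6.2u5\nusage: qstat [options]"))  # 6.2u5
```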
TKG v2.5.x is distributed as a downloadable Tanzu CLI package that deploys a versioned TKG standalone management cluster. TKG v2.5.x supports creating and managing class-based workload clusters with a standalone management cluster that can run on vSphere.
The vSphere with Tanzu Supervisor in vSphere 8.0.1c and later runs TKG v2.2. Earlier versions of vSphere 8 run TKG v2.0, which was not released independently of Supervisor. Standalone management clusters that run TKG 2.x are available from TKG 2.1 onwards. Due to the earlier TKG version that is embedded in Supervisor, some of the features that are available if you are using a standalone TKG 2.5.x management cluster are not available if you are using a vSphere with Tanzu Supervisor to create workload clusters. Later TKG releases will be embedded in Supervisor in future vSphere update releases. Consequently, the version of TKG that is embedded in the latest vSphere with Tanzu version at a given time might not be as recent as the latest standalone version of TKG. However, the versions of the Tanzu CLI that are compatible with all TKG v2.x releases are fully supported for use with Supervisor in all releases of vSphere 8. For example, Tanzu CLI v1.1.x is fully backwards compatible with the TKG 2.2 plugins that Supervisor provides.
Each version of Tanzu Kubernetes Grid adds support for the Kubernetes version of its management cluster, plus additional Kubernetes versions, distributed as Tanzu Kubernetes releases (TKrs), except where noted as a Known Issue.
Minor versions: VMware supports TKG v2.5.x with Kubernetes v1.28, v1.27, and v1.26 at time of release. Once TKG v2.3 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.26 with TKG. Once TKG v2.4 reaches its End of General Support milestone, VMware will no longer support Kubernetes v1.27 with TKG.
Patch versions: After VMware publishes a new TKr patch version for a minor line, it retains support for older patch versions for two months. This gives customers a 2-month window to upgrade to new TKr patch versions. From TKG v2.2 onwards, VMware does not support all TKr patch versions from previous minor lines of Kubernetes.
Minor versions: VMware supports TKG following the N-2 Lifecycle Policy, which applies to the latest and previous two minor versions of TKG. With the release of TKG v2.5.x, TKG v2.2 is no longer supported one year after the v2.2 release. See the VMware Product Lifecycle Matrix for more information.
Patch versions: VMware does not support all previous TKG patch versions. After VMware releases a new patch version of TKG, it retains support for the older patch version for two months. This gives customers a 2-month window to upgrade to new TKG patch versions.
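The two-month window above applies to both TKG and TKr patch versions. As a rough sketch of how that plays out on a calendar (approximating "two months" as 60 days, which is an assumption, not the policy's exact wording):

```python
from datetime import date, timedelta


def patch_support_end(new_patch_release: date, window_days: int = 60) -> date:
    """Approximate the end-of-support date for an older TKG (or TKr) patch
    version: roughly two months after the next patch version ships.
    The 60-day reading of 'two months' is an assumption for illustration."""
    return new_patch_release + timedelta(days=window_days)


# If a hypothetical new patch shipped on 2024-04-12, the previous patch
# would leave support around:
print(patch_support_end(date(2024, 4, 12)))  # 2024-06-11
```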
Package versions in the Tanzu Standard repository for TKG v2.5.x are compatible via TKrs with Kubernetes minor versions v1.28, v1.27, and v1.26, and are listed in the Tanzu Standard Repository Release Notes.
Tanzu Kubernetes Grid v2.5.x supports the following infrastructure platforms and operating systems (OSs), as well as cluster creation and management, networking, storage, authentication, backup and migration, and observability components.
For a list of software component versions that ship with TKG v2.5.x, use imgpkg to pull the repository bundles and then list their contents. For example, you can pull the Tanzu Standard repository bundle for TKG v2.5.1 with imgpkg pull and then list the packages it contains.
When upgrading Kubernetes versions on workload clusters, you cannot skip minor versions. For example, you cannot upgrade a Tanzu Kubernetes cluster directly from v1.26.x to v1.28.x. You must upgrade a v1.26.x cluster to v1.27.x before upgrading the cluster to v1.28.x.
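The no-skipping rule means every intermediate minor version must be visited. A minimal sketch of computing that upgrade path (version strings and error handling are simplified; patch-version selection is omitted):

```python
def upgrade_path(current: str, target: str) -> list:
    """Return the sequence of Kubernetes minor versions a workload cluster
    must step through, since minor versions cannot be skipped.
    Versions are 'major.minor' strings, e.g. '1.26'."""
    cur_major, cur_minor = (int(x) for x in current.split("."))
    tgt_major, tgt_minor = (int(x) for x in target.split("."))
    if tgt_major != cur_major or tgt_minor < cur_minor:
        raise ValueError("only forward upgrades within one major version are sketched here")
    return [f"{cur_major}.{m}" for m in range(cur_minor, tgt_minor + 1)]


# A v1.26.x cluster headed for v1.28.x must pass through v1.27.x:
print(upgrade_path("1.26", "1.28"))  # ['1.26', '1.27', '1.28']
```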
Tanzu Kubernetes Grid v2.5.x does not support the creation of standalone TKG management clusters and TKG workload clusters on AWS and Azure. Use Tanzu Mission Control to create native AWS EKS and Azure AKS clusters on AWS and Azure. For information about how to create native AWS EKS and Azure AKS clusters with Tanzu Mission Control, see Managing the Lifecycle of AWS EKS Clusters and Managing the Lifecycle of Azure AKS Clusters in the Tanzu Mission Control documentation.
From v2.5.1 onwards, Tanzu Kubernetes Grid does not support creating management clusters or workload clusters on vSphere 6.7. TKG v2.5.1 includes critical Cloud Native Storage (CNS) updates for Container Storage Interface (CSI) storage functionality that are not compatible with vSphere 6.7. Creating clusters on vSphere 6.7 is supported on TKG versions up to and including v2.5.0 only. General support for vSphere 6.7 ended in October 2022 and VMware recommends that you upgrade to vSphere 7 or 8.
Deploying and Managing TKG 2.5 Standalone Management Clusters on vSphere includes topics specific to standalone management clusters that are not relevant to using TKG with a vSphere with Tanzu Supervisor.
Attempts to create a ClusterClass workload cluster with an IPv4-primary dual-stack configuration in an air-gapped environment with proxy mode enabled fail with the error unable to wait for cluster nodes to be available: worker nodes are still being created for MachineDeployment 'wl-antrea-md-0-5zlgc', DesiredReplicas=1 Replicas=0 ReadyReplicas=0 UpdatedReplicas=0
Previously, if multiple templates were detected at the same path with the same os-name, os-arch, and os-version, the first one that matched the requirement was used. This has been fixed so that an error is now thrown prompting the user to provide the full path to the desired template.
If you specify the VM Network when deploying a management cluster to vSphere 7, the deployment fails with the error unable to set up management cluster: unable to wait for cluster control plane available: control plane is not available yet.
The following are known issues in Tanzu Kubernetes Grid v2.5.x. Any known issues that were present in v2.5.0 that have been resolved in a subsequent v2.5.x patch release are listed under the Resolved Issues for the patch release in which they were fixed.
For workload clusters that use Photon 3 as the node OS, if you are upgrading the cluster from Kubernetes v1.26 to v1.27, TKG also by default upgrades the cluster to Photon 5, which is the default OS version for TKG clusters running Kubernetes v1.27 on Photon.
If you want the upgraded cluster to continue using Photon 3 as its node OS, follow the additional steps for Kubernetes v1.27 on Photon 3 described in Additional Steps for Certain OS, Kubernetes, and Cluster Type Combinations in Upgrade Workload Clusters. There are different steps for plan-based and class-based workload clusters.
In the decoded package values, check the value for avi_ca_data_b64 under akoOperatorPackage.akoOperator.config. If it differs from the avi-controller-ca value, update tkg-pkg-tkg-system-values with the new value:
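The comparison above is between a base64-encoded value in the package data and a plain-text certificate, so decode before comparing. A minimal sketch (the certificate content is hypothetical; in practice you would pull both values out of the cluster with kubectl):

```python
import base64


def ca_values_match(avi_ca_data_b64: str, avi_controller_ca_pem: str) -> bool:
    """Compare the base64-encoded avi_ca_data_b64 value from the decoded
    tkg-pkg-tkg-system-values data against the plain-text avi-controller-ca
    certificate, ignoring surrounding whitespace."""
    decoded = base64.b64decode(avi_ca_data_b64).decode()
    return decoded.strip() == avi_controller_ca_pem.strip()


# Hypothetical values for illustration:
pem = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----"
encoded = base64.b64encode(pem.encode()).decode()
print(ca_values_match(encoded, pem))  # True
```

If the function returns False, the values differ and tkg-pkg-tkg-system-values needs the update described above.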
When creating a management cluster with configuration variables AVI_CONTROL_PLANE_HA_PROVIDER set to true to use AVI as the control plane HA provider and IDENTITY_MANAGEMENT_TYPE set to oidc to use an external OIDC identity provider, CLI output shows Waiting messages for AKO package and resource mgmt-load-balancer-and-ingress-service and then fails with packageinstalls/mgmt-load-balancer-and-ingress-service ... connect: connection refused.
Workaround: When configuring a management cluster, set VSPHERE_CONTROL_PLANE_ENDPOINT to an IP address that is not the first address in the configured AVI VIP range, AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR if set, or else AVI_DATA_NETWORK_CIDR.
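A quick way to sanity-check a candidate endpoint against a VIP network is Python's ipaddress module. This sketch assumes "first address in the configured range" means the first usable host address of the CIDR, which is an interpretation, not something the workaround text spells out:

```python
import ipaddress


def endpoint_avoids_first_vip(vip_cidr: str, candidate: str) -> bool:
    """Return True if the candidate VSPHERE_CONTROL_PLANE_ENDPOINT is not
    the first usable address of the AVI VIP network (e.g. the value of
    AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR or AVI_DATA_NETWORK_CIDR)."""
    net = ipaddress.ip_network(vip_cidr, strict=False)
    first_usable = net.network_address + 1
    return ipaddress.ip_address(candidate) != first_usable


print(endpoint_avoids_first_vip("10.0.0.0/24", "10.0.0.1"))   # False: first address, avoid
print(endpoint_avoids_first_vip("10.0.0.0/24", "10.0.0.50"))  # True: safe to use
```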
You cannot create a new workload cluster that uses Antrea CNI and runs Kubernetes versions shipped with prior versions of TKG, such as Kubernetes v1.23.10, which was the default Kubernetes version in TKG v1.6.1 as listed in Supported Kubernetes Versions in Tanzu Kubernetes Grid v2.5.
Workaround: Create a workload cluster that runs Kubernetes 1.28.x, 1.27.x, or 1.26.x. The Kubernetes project recommends that you run components on the most recent patch version of any current minor version.
On VMware cloud infrastructure products on AWS, Azure, Oracle Cloud, Google Cloud, and other infrastructures, running tanzu management-cluster delete and other standalone management cluster operations fail with the error failed to create kind cluster.
Workaround: On your bootstrap machine, change your Docker or Docker Desktop cgroup driver default to systemd so that it matches the cgroup setting in the TKG container runtime, as described in the Tanzu CLI installation Prerequisites.
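On a Linux bootstrap machine, the Docker cgroup driver is typically set in /etc/docker/daemon.json using Docker's documented exec-opts daemon option, for example:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Restart the Docker daemon after changing this file; on Docker Desktop, the equivalent setting lives in the Docker Engine configuration panel.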
Due to a known issue in the cluster-api-provider-vsphere (CAPV) project, standalone management clusters on vSphere may leave orphaned VSphereMachine objects behind after cluster upgrade or scale operations.
With TKG v2.5, the Tanzu Standard package repository is versioned and distributed separately from TKG, and its versioning is based on a date stamp. For TKG v2.5.1, the latest compatible Tanzu Standard repository version is v2024.4.12 and both are released around the same date.
Future Tanzu Standard repository versions may publish more frequently than TKG versions, but all patch versions will maintain existing compatibilities between minor versions of TKG and Tanzu Standard.