
chore(deps): update helm release rancher to v2.11.1

Renovate Bot requested to merge renovate/rancher-2.x into main

This MR contains the following updates:

Package           Update  Change
rancher (source)  minor   2.8.5 -> 2.11.1

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

rancher/rancher (rancher)

v2.11.1

Compare Source

Release v2.11.1

[!CAUTION] Note: If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues: the AD FS Relying Party Trust may fail to pick up a signature verification certificate from the federation metadata, which requires manual intervention. This can be corrected either by updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...) or by directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate). For more information see #​48655.

Important: Rancher Kubernetes Engine (RKE/RKE1) will reach end of life on July 31, 2025. Rancher 2.12.0 and later will no longer support provisioning or managing downstream RKE1 clusters. We recommend replatforming RKE1 clusters to RKE2 to ensure continued support and security updates. Learn more about the transition here.

Important: Rancher-Istio will be deprecated in Rancher v2.12.0; users should move to the SUSE Application Collection build of Istio for enhanced security (included in SUSE Rancher Prime subscriptions). Detailed information can be found in this announcement.

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.11.1 is the latest patch release of Rancher. This is a Community and Prime version release that introduces maintenance updates and bug fixes. To learn more about Rancher Prime, see our page on the Rancher Prime Platform.

For more information on new features in the general minor release see the v2.11.0 release notes.

Changes Since v2.11.0

See the full list of changes.

Security Fixes for Rancher Vulnerabilities

This release addresses the following Rancher security issues:

  • An issue was found when using Continuous Delivery with Fleet, where Fleet does not validate a server's certificate when connecting through SSH. This can allow for a man-in-the-middle attack against Fleet. The fix provides a new insecureSkipHostKeyChecks value for the fleet Helm chart. The default value is set to true (opt-in) for Rancher v2.9 - v2.11 for backward compatibility. The default value will be set to false (opt-out) for Rancher v2.12 and later, and Fleet v0.13 and later.
    • If insecureSkipHostKeyChecks is set to true, then not finding any matching known_hosts entry for an SSH host will not lead to any error.
    • If insecureSkipHostKeyChecks is set to false, then strict host key checks are enabled. When enabled, the checks ensure that when using SSH, Fleet rejects connection attempts to hosts not matching any entry found in (decreasing order of precedence):
      • A secret referenced by name in a GitRepo, located in that GitRepo's namespace.
        • If no such secret name is provided, a gitcredential secret located in the same namespace.
      • A new known-hosts ConfigMap, created during the Fleet chart installation time and located in the namespace cattle-fleet-system. For more information, see CVE-2025-23390.
  • A vulnerability was found where users could create a project and then gain access to arbitrary projects. As a fix, a new field called BackingNamespace has been added to projects, representing the namespace created for a project that contains all resources needed for project operations. This includes resources such as ProjectRoleTemplateBindings, project-scoped secrets and workloads.
    • The field is populated automatically during project creation and is formatted as <clusterID>-<project.Name>. For example, if your project is named project-abc123 in a cluster with ID cluster-xyz789, then the project will have the BackingNamespace cluster-xyz789-project-abc123. Existing projects will not be migrated; only newly created projects will use the new namespace naming convention. For more information, see CVE-2025-22031.
  • A vulnerability was found where users with permission to create a service in the Kubernetes cluster where Rancher is deployed can take over the Rancher UI, display their own UI, and gather sensitive information. This is only possible when the setting ui-offline-preferred is set to remote. This release introduces a patch, and the malicious user can no longer serve their own UI. If users can't upgrade, please make sure that only trustable users have access to create a service in the local cluster. For more information, see CVE-2025-32198.
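A sketch of the Fleet host key opt-out described above (the insecureSkipHostKeyChecks key comes from the release note; the GitRepo fields are standard Fleet API fields, while the secret and repository names are illustrative):

```yaml
# fleet Helm chart values: enable strict SSH host key checks
# (on Rancher v2.9 - v2.11 the default is still true).
insecureSkipHostKeyChecks: false
---
# GitRepo whose referenced secret (in the same namespace) can carry the
# known_hosts entries used for host key verification.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: my-repo                     # illustrative name
  namespace: fleet-default
spec:
  repo: git@git.example.com:org/repo.git
  clientSecretName: my-ssh-secret   # takes precedence over gitcredential
```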

For more details, see the Security Advisories and CVEs page in Rancher's documentation or in Rancher's GitHub repo.

Announcements

Rancher Kubernetes API

Rancher v2.8.0 introduced the Rancher Kubernetes API (RK-API). Our new RK-API lets you manage Rancher in the same way you manage Kubernetes. You can now use the RK-API to interact with Rancher CRDs via Kubernetes tooling. This includes convenient documentation via the kubectl explain command. A limited set of our most widely-used CRDs are already supported, and our team is working to add more features on a continuous basis. For more information on RK-API, see the RK-API quick start and reference guide.

Note that while the previous v3 Rancher API is still available:

  • The v3 API is considered an internal API and is not officially supported.
  • No new features will be added to the v3 Rancher API going forward.
  • Customers should plan to re-write any automation built using this API to the new RK-API.

Support for UI plugins for cluster and node drivers based on the legacy Ember UI has been deprecated and will be removed in a future release. These UI plugins should be migrated to the new UI Extensions mechanism. Follow this link for more UI Extensions information.

Frameworks

Major Bug Fixes
  • Fixes an infinite loop caused when a certain watch operation was interrupted after some uptime (normally the watcher timeout). This prevents CPU and memory increase due to the rapid executions of the affected code. See #​49667.

Observability and Backup

Major Bug Fixes
  • This release fixes a bug where when the Prometheus Federator pod is deleted, an install Helm job is triggered and the project monitor is recreated, and as a consequence data is lost. The fix introduces new values to the prometheus-federator chart. These values are important during pod startup and default values are provided. In case of a timeout error occurring during pod initialization you may need to adjust the values under namespaceRegistration as documented in the chart itself. See #​48175.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air-gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.
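For the proxied air-gapped case above, the additional NO_PROXY entries typically cover the cluster-internal networks and service domains; a sketch with illustrative values (the exact list for your environment may differ, see the linked documentation):

```shell
# Cluster-internal ranges and service domains that must bypass the proxy
# (illustrative values; adjust to your network layout).
NO_PROXY="127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
export NO_PROXY
echo "$NO_PROXY"
```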

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Images

  • rancher/rancher:v2.11.1

Tools

Kubernetes Versions for RKE

  • v1.32.3 (Default)
  • v1.31.7
  • v1.30.11

Kubernetes Versions for RKE2/K3s

  • v1.32.3 (Default)
  • v1.31.7
  • v1.30.11

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.

Other Notes

Experimental Features

Rancher now supports the ability to use an OCI Helm chart registry for Apps & Marketplace. View the documentation on using OCI-based Helm chart repositories, and note that this feature is in an experimental stage. See #​29105 and #​45062.
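As an illustrative sketch only (the ClusterRepo kind exists in Rancher's catalog API; the OCI URL form and exact field usage here are assumptions, so consult the documentation referenced above):

```yaml
# Hypothetical chart repository object pointing at an OCI registry.
apiVersion: catalog.cattle.io/v1
kind: ClusterRepo
metadata:
  name: my-oci-charts                      # illustrative name
spec:
  url: oci://registry.example.com/charts   # assumed OCI URL form
```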

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Previous Rancher Behavior Changes

Previous Rancher Behavior Changes - Rancher General

  • Rancher v2.11.0:
    • Kubernetes v1.28 and v1.29 are no longer supported. Before upgrading to Rancher v2.11.0, ensure all clusters are running Kubernetes v1.30 or later. See #​48628.

Previous Rancher Behavior Changes - Rancher App (Global UI)

  • Rancher v2.11.0:
    • Replaced instances of v-tooltip with v-clean-tooltip to fix an issue where the UI did not sanitize cluster description inputs, allowing the possibility of changes to a cluster (local or downstream) description to cause a stored XSS attack. For more information, see CVE-2024-52281 and #​12564.
    • The v2.11.0 release removed the data collection feature, which removed the Rancher legacy telemetry and the subsequent Opt-out of Telemetry setting during set-up. Part of the legacy telemetry has been replaced with the SCC registration process for Prime, and the most important deployment metric is still tracked via the system-charts registry analysis. Going forward, telemetry will be gathered SUSE-wide. For more information, see #​12639.

Previous Rancher Behavior Changes - Cluster Provisioning

  • Rancher v2.11.0:
    • Generic Kubernetes imported clusters now use the v3 management cluster object (cluster.management.cattle.io) for both the initial creation and the updates (POST and PUT API calls respectively). See #​13151.

Previous Rancher Behavior Changes - RKE2/K3s Provisioning

  • Rancher v2.11.0:
    • etcd snapshots are now populated to Rancher by listing the etcdsnapshotfile.k3s.cattle.io resources in the downstream cluster instead of periodically scraping the CLI and rke2/k3s-etcd-snapshots configmap. See #​44452.

Previous Rancher Behavior Changes - Rancher CLI

  • Rancher v2.11.0:
    • CLI commands corresponding to the multi-cluster app legacy feature are no longer available. See #​48252.
    • The deprecated subcommand globaldns was removed from the Rancher CLI. See #​48129.

Previous Rancher Behavior Changes - Role-Based Access Control (RBAC)

  • Rancher v2.11.0:
    • The Restricted Admin role has been removed. Existing users with the Restricted Admin role will have privileges associated with this role revoked upon upgrade. See #​47875.

Previous Rancher Behavior Changes - Continuous Delivery (Fleet)

  • Rancher v2.11.0:
    • Fleet now honors custom certificate authority (CA) bundles configured in Rancher.

      This prevents you from needing to copy your CA bundles to all GitRepos and/or Helm secrets referenced by those GitRepos. Instead, you can configure those bundles directly through a single secret already known by Rancher, which Fleet will transparently use as a fallback option.

      See the Fleet documentation and fleet#2750.

    • Since the move from a StatefulSet to a Deployment and the introduction of leader election for the agents, agent failover has improved. When failover was tested by shutting down a node with a Fleet agent, the pods from that node were observed to stay in the Terminating state. To be clear, this is not a problem with Fleet, nor is it Fleet-related: this is how Kubernetes behaves when a node becomes unreachable. Failover works correctly even if those pods remain in the Terminating state. See fleet#3096 and kubernetes/kubernetes#72226.

Previous Rancher Behavior Changes - Apps & Marketplace

  • Rancher v2.11.0:

    • The Catalog v1, Multi-Cluster App (MCA) legacy feature has been removed. If you upgrade from a previous Rancher version to v2.11, the MCA-associated CRDs and their instances will still exist in the cluster and can be manually deleted by using the following command:

      kubectl delete crds catalogs.management.cattle.io \
        catalogtemplates.management.cattle.io \
        catalogtemplateversions.management.cattle.io \
        templates.management.cattle.io \
        templateversions.management.cattle.io \
        templatecontents.management.cattle.io \
        clustercatalogs.management.cattle.io \
        projectcatalogs.management.cattle.io \
        multiclusterapps.management.cattle.io \
        multiclusterapprevisions.management.cattle.io \
        apps.project.cattle.io \
        apprevisions.project.cattle.io

    See #​39525.

Previous Rancher Behavior Changes - Monitoring

  • Rancher v2.11.0:
    • rancher-alerting-drivers app now uses rancher/kuberlr-kubectl, improving how alerts are sent and received. See #​48849.

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and set spec.unpaused to false. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated, because the plugin is deprecated for upstream Kubernetes v1.27 and later. RKE cluster creation will not go through, as it will raise a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​13655.
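The Edit Yaml workaround described above amounts to setting the annotation directly on the provisioning cluster object; a minimal sketch (the cluster name is hypothetical; structure per the provisioning.cattle.io cluster YAML):

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-rke2-cluster        # hypothetical cluster name
  namespace: fleet-default
  annotations:
    # Set here via "Edit Yaml", since the cluster config UI drops the value.
    provisioning.cattle.io/allow-dynamic-schema-drop: "true"
```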
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.10.0:
    • After deleting a Namespace or Project in the Rancher UI, the Namespace or Project remains visible. As a workaround, refresh the page. See #​12220.
  • Rancher v2.9.2:
    • Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #​11922.
  • Rancher v2.7.7:
    • When creating a cluster, the Rancher UI does not allow the use of an underscore (_) in the Cluster Name field. See #​9416.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #​46106.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you rollback from a version of Rancher v2.7.2 or later, to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors to occur downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #​40816.

Long-standing Known Issues - Harvester

  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.

v2.10.3

Compare Source

Release v2.10.3

[!CAUTION] Note: If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues: the AD FS Relying Party Trust may fail to pick up a signature verification certificate from the federation metadata, which requires manual intervention. This can be corrected either by updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...) or by directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate). For more information see #​48655.

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.10.3 is the latest patch release of Rancher. This is a Community and Prime version release that introduces maintenance updates and bug fixes.

For more information on new features in the general minor release see the v2.10.0 release notes.

Security Fixes for Rancher Vulnerabilities

This release addresses the following Rancher security issues:

  • The User ID required for configuring SAML providers is now stored inside a signed JSON Web Token (JWT), ensuring it is securely protected against tampering. For more information, see CVE-2025-23389 and #​48964.
  • Rancher now allows only the GET method for the public /v3-public/authProviders endpoint. For more information, see CVE-2025-23388 and #​48608.
  • Rancher no longer supports the GET and DELETE methods for the public /v3-public/authTokens endpoint. For more information, see CVE-2025-23387 and #​48616.

For more details, see the Security Advisories and CVEs page in Rancher's documentation or in Rancher's GitHub repository.

Rancher App (Global UI)

Known Issues
  • Extensions are not loaded after logging in via an LDAP authentication provider. As a workaround, refresh the page after logging in. See #​13499.

Changes Since v2.10.2

See the full list of issues addressed.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, EOL for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Images

  • rancher/rancher:v2.10.3

Tools

Kubernetes Versions for RKE

  • v1.31.5 (Default)
  • v1.30.9
  • v1.29.13
  • v1.28.15

Kubernetes Versions for RKE2/K3s

  • v1.31.5 (Default)
  • v1.30.9
  • v1.29.13
  • v1.28.15

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.

Other Notes

Experimental Features

Rancher now supports the ability to use an OCI Helm chart registry for Apps & Marketplace. View the documentation on using OCI-based Helm chart repositories, and note that this feature is in an experimental stage. See #​29105 and #​45062.

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Previous Rancher Behavior Changes

Previous Rancher Behavior Changes - Rancher General

  • Rancher v2.10.0:
    • Kubernetes v1.27 is no longer supported. Before you upgrade to Rancher v2.10.0, make sure that all clusters are running Kubernetes v1.28 or later. See #​47591.
    • The new annotation field.cattle.io/creator-principal-name was introduced in addition to the existing field.cattle.io/creatorId that allows specifying the creator's principal name when creating a cluster or a project. If this annotation is used, the userPrincipalName field of the corresponding ClusterRoleTemplateBinding or ProjectRoleTemplateBinding will be set to the specified principal. The principal should belong to the creator's user, which is enforced by the webhook. See #​46828.
    • When searching for group principals with a SAML authentication provider (with LDAP turned off), Rancher now returns a principal of correct type (group) with the name matching the search term. When searching principals with a SAML provider (with LDAP turned off) without specifying the desired type (as in Add cluster/project member), Rancher now returns both user and group principals with the name matching the search term. See #​44441.
    • Rancher now captures the last used time for Tokens and stores it in the lastUsedAt field. If the Authorized Cluster Endpoint is enabled and used on a downstream cluster Rancher captures the last used time in the ClusterAuthToken object and makes the best effort to sync it back to the corresponding Token in the upstream. See #​45732.
    • Rancher deploys the System Upgrade Controller (SUC) to facilitate Kubernetes upgrades for imported RKE2/K3s clusters. Starting with this version, the mechanism used to deploy this component in downstream clusters has transitioned from legacy V1 apps to fully supported V2 apps, providing a seamless upgrade process for Rancher. For more details, please see this issue comment.

Previous Rancher Behavior Changes - Rancher CLI

  • Rancher v2.10.0:
    • The deprecated subcommand globaldns was removed from the Rancher CLI. See #​48127.

Previous Rancher Behavior Changes - Rancher App (Global UI)

  • Rancher v2.10.0:
    • This release includes a major upgrade to the Dashboard (Cluster Explorer) Vue framework from Vue 2 to Vue 3. Please view our documentation on updating existing UI extensions to be compliant with the Rancher v2.10 UI framework in the v2.10.0 UI extension changelog. If experiencing a page that fails to load please file an issue via the Dashboard repository and choose the "Bug report" option for us to further investigate. See #​7653.

    • The performance of the Clusters lists in the Home page and the Side Menu has greatly improved when there are hundreds of clusters. See #​11995 and #​11993.

    • The previous Dashboard Ember UI (Cluster Manager) will no longer be directly accessible. The relative pages that rely on the previous UI will continue to be embedded in the new Vue UI (Cluster Explorer). See #​11371.

    • Updated the data directory configuration by replacing the checkbox option with 3 user input options below:

      1. Use default data directory configuration
      2. Use a common base directory for data directory configuration (sub-directories will be used for the system-agent, provisioning and distro paths) - This option displays a text input where users can enter a single base directory, to which Rancher programmatically appends the correct subdirectories.
      3. Use custom data directories - This option displays 3 text inputs, one for each subdirectory type where users can input each path individually.

      See #​11560.
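As a rough sketch of how the custom-directories option might look in the resulting cluster YAML (the field names under dataDirectories are assumptions based on the three sub-directory types named above; verify against the YAML the UI generates):

```yaml
spec:
  rkeConfig:
    dataDirectories:                           # assumed field name
      systemAgent: /data/rancher/agent         # system-agent path
      provisioning: /data/rancher/provisioning # provisioning path
      k8sDistro: /data/rancher/rke2            # distro path
```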

Previous Rancher Behavior Changes - RKE Provisioning

  • Rancher v2.10.0:
    • With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

      Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

      RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Previous Rancher Behavior Changes - Virtualization (Harvester)

  • Rancher v2.10.0:
    • On the Cloud Credential list, you can now easily see if a Harvester Credential is about to expire or has expired and choose to renew it. You will also be notified on the Cluster Management Clusters list when an associated Harvester Cloud Credential is about to expire or has expired. When upgrading, an existing expired Harvester Credential will not contain a warning. You can still renew the token on the resources menu. See #​11332.

Previous Rancher Behavior Changes - Windows

  • Rancher v2.10.0:
    • Rancher v2.10.0 includes changes to how Windows nodes behave after a node reboot, and provides two new settings to control how Windows services created by Rancher behave on startup.

      Two new agent environment variables have been added for Windows nodes, CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY and CATTLE_ENABLE_WINS_DELAYED_START. These changes can be configured in the Rancher UI, and will be respected by all nodes running rancher-wins version v0.4.20 or greater.

      • CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY defines a service dependency between RKE2 and rancher-wins, ensuring RKE2 will not start before rancher-wins.
      • CATTLE_ENABLE_WINS_DELAYED_START changes the start type of rancher-wins to AUTOMATIC (DELAYED), ensuring it starts after other Windows services.

      Additionally, Windows nodes will now attempt to execute plans up to 5 times if the initial application fails. This change, along with appropriate use of the two agent environment variables above, aims to address plan failures for Windows nodes after a node reboot.

      See #​42458.

    • A change was made starting with RKE2 versions v1.28.15, v1.29.10, v1.30.6 and v1.31.2 on Windows which allows the user to configure *_PROXY environment variables on the rke2 service after the node has already been provisioned.

      Previously, any attempt to do so was a no-op. With this change, if the *_PROXY environment variables are set on the cluster after a Windows node is provisioned, they can be automatically removed from the rke2 service. However, if the variables are set before the node is provisioned, they cannot be removed.

      More information can be found here. A workaround is to remove the environment variables from the rancher-wins service and restart the service or node, at which point *_PROXY environment variables will no longer be set on either service.

      Remove-ItemProperty HKLM:SYSTEM\CurrentControlSet\Services\rancher-wins -Name Environment
      Restart-Service rancher-wins

      See #​47544.
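The two agent environment variables described above are configured per cluster. A hedged sketch follows, assuming they are set through the provisioning Cluster's `agentEnvVars` field (the variable names come from the note above; the cluster name is hypothetical):

```yaml
# Hypothetical cluster YAML fragment; respected by nodes running
# rancher-wins v0.4.20 or greater, per the note above.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-windows-cluster    # hypothetical name
  namespace: fleet-default
spec:
  agentEnvVars:
    - name: CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY   # RKE2 waits for rancher-wins
      value: "true"
    - name: CATTLE_ENABLE_WINS_DELAYED_START        # AUTOMATIC (DELAYED) start type
      value: "true"
```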

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and set spec.unpaused to false. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated, because the plugin is deprecated for upstream Kubernetes v1.27 and later. RKE cluster creation will not go through, as it raises a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​11435.
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.
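The Edit Yaml workaround for the `provisioning.cattle.io/allow-dynamic-schema-drop` annotation above can be sketched as follows. The annotation key is from the note; the value shown and the cluster name are assumptions.

```yaml
# Hypothetical cluster YAML fragment; set via "Edit Yaml" because the
# cluster config UI drops the annotation's value field.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-cluster            # hypothetical name
  namespace: fleet-default
  annotations:
    provisioning.cattle.io/allow-dynamic-schema-drop: "true"  # value assumed
```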

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.

Long-standing Known Issues - Rancher CLI

  • Rancher v2.10.1:
    • The Rancher CLI uses dedicated HTTP clients in login and SSH commands to download certificates and an SSH key respectively. However, the CLI currently does not respect proxy settings and does not set an HTTP timeout. See #​48321.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.9.2:
    • Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #​11922.
  • Rancher v2.7.7:
    • When creating a cluster in the Rancher UI, the Cluster Name field does not allow underscores (_). See #​9416.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #​46106.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you rollback from a version of Rancher v2.7.2 or later, to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors to occur downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #​40816.

Long-standing Known Issues - Harvester

  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.

Long-standing Known Issues - Continuous Delivery (Fleet)

  • Rancher v2.10.0:
    • Target customization for namespace labels and annotations cannot modify/remove labels when updating. See #​3064.
    • In version 0.10, GitRepo resources provided a comprehensive list of all deployed resources across all clusters in their status. However, in version 0.11, this list has been modified to report resources only once until the feature is integrated into the Rancher UI. While this change addresses a UI freeze issue, it may result in potential inaccuracies in the list of resources and resource counts under some conditions. See #​3027.

v2.10.2

Compare Source

Release v2.10.2

[!CAUTION] Note: If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues, as the AD FS Relying Party Trust may fail to pick up a signature verification certificate from the metadata; this requires manual intervention. This can be corrected either by updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...) or by directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate). For more information see #​48655.

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.10.2 is the latest patch release of Rancher. This is a Community and Prime version release that introduces maintenance updates and bug fixes.

For more information on new features in the general minor release see the v2.10.0 release notes.

Cluster Provisioning

Major Bug Fixes
  • Fixed an issue where stale impersonation secrets were building up in the cattle-impersonation-system namespace. See #​48313.
  • Fixed an issue where the Rancher chart ingress path is set to "/" causing https://<rancher_url>/ to fail with "response 404 (backend NotFound), service rules for the path non-existent." The ingress path can now be configured as needed. See #​48167.
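For the ingress path fix above, a hedged sketch of the relevant Rancher chart values. The exact value key is an assumption; check the chart's values.yaml for your version before relying on it.

```yaml
# Hypothetical values.yaml fragment for the rancher Helm chart.
# The "path" key name is assumed based on the fix description above.
ingress:
  path: /dashboard   # hypothetical custom path; previously fixed to "/"
```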

RKE2 Provisioning

Major Bug Fixes
  • Fixed an issue where clusters containing nodes with split etcd and control plane roles would fail to reconcile when upgrading Rancher. See #​48389.

Rancher App (Global UI)

Major Bug Fixes
  • Fixed an issue where users were able to create or edit clusters even when using an invalid Add-on YAML configuration. See #​12466.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.
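For the air-gapped Docker install requirement above, a minimal sketch of the K3s-style registries.yaml that would be supplied to the docker run command. The registry hostname and certificate path are hypothetical; the mirrors/configs layout follows the K3s private registry format.

```yaml
# Hypothetical registries.yaml for an air-gapped install.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.internal:5000"   # hypothetical mirror
configs:
  "registry.example.internal:5000":
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem      # only if the registry uses a private CA
```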

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, EOL for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Images

  • rancher/rancher:v2.10.2

Tools

Kubernetes Versions for RKE

  • v1.31.4 (Default)
  • v1.30.8
  • v1.29.12
  • v1.28.15

Kubernetes Versions for RKE2/K3s

  • v1.31.4 (Default)
  • v1.30.8
  • v1.29.12
  • v1.28.15

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This prevents simultaneous upstream and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.
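As a small illustration of the versioning scheme above, the Rancher-side version and the upstream version carried in the SemVer build metadata can be separated with plain shell parameter expansion:

```shell
# Split a Rancher chart version of the form <rancher>+up<upstream>.
chart_version="100.0.0+up2.1.0"
rancher_side="${chart_version%%+*}"   # strip "+..." suffix  -> 100.0.0
upstream="${chart_version#*+up}"      # strip through "+up"  -> 2.1.0
echo "$rancher_side $upstream"        # prints: 100.0.0 2.1.0
```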

Other Notes

Experimental Features

Rancher now supports the ability to use an OCI Helm chart registry for Apps & Marketplace. View documentation on using OCI based Helm chart repositories and note this feature is in an experimental stage. See #​29105 and #​45062

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Previous Rancher Behavior Changes

Previous Rancher Behavior Changes - Rancher General

  • Rancher v2.10.0:
    • Kubernetes v1.27 is no longer supported. Before you upgrade to Rancher v2.10.0, make sure that all clusters are running Kubernetes v1.28 or later. See #​47591.
    • The new annotation field.cattle.io/creator-principal-name was introduced in addition to the existing field.cattle.io/creatorId that allows specifying the creator's principal name when creating a cluster or a project. If this annotation is used, the userPrincipalName field of the corresponding ClusterRoleTemplateBinding or ProjectRoleTemplateBinding will be set to the specified principal. The principal should belong to the creator's user, which is enforced by the webhook. See #​46828.
    • When searching for group principals with a SAML authentication provider (with LDAP turned off), Rancher now returns a principal of correct type (group) with the name matching the search term. When searching principals with a SAML provider (with LDAP turned off) without specifying the desired type (as in Add cluster/project member), Rancher now returns both user and group principals with the name matching the search term. See #​44441.
    • Rancher now captures the last used time for Tokens and stores it in the lastUsedAt field. If the Authorized Cluster Endpoint is enabled and used on a downstream cluster, Rancher captures the last used time in the ClusterAuthToken object and makes a best effort to sync it back to the corresponding Token upstream. See #​45732.
    • Rancher deploys the System Upgrade Controller (SUC) to facilitate Kubernetes upgrades for imported RKE2/K3s clusters. Starting with this version, the mechanism used to deploy this component in downstream clusters has transitioned from legacy V1 apps to fully supported V2 apps, providing a seamless upgrade process for Rancher. For more details, please see this issue comment.
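The creator-principal annotation described above can be sketched on a project as follows. The annotation keys come from the note; the project, cluster, user, and principal identifiers are hypothetical.

```yaml
# Hypothetical Project fragment illustrating the creator annotations.
apiVersion: management.cattle.io/v3
kind: Project
metadata:
  name: p-example                  # hypothetical project ID
  namespace: c-example             # hypothetical cluster ID
  annotations:
    field.cattle.io/creatorId: u-abcde                      # hypothetical creator user
    # Must be a principal belonging to the creator's user (enforced by the
    # webhook); sets userPrincipalName on the resulting role template binding.
    field.cattle.io/creator-principal-name: okta_user://someone@example.com
```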

Previous Rancher Behavior Changes - Rancher CLI

  • Rancher v2.10.0:
    • The deprecated subcommand globaldns was removed from the Rancher CLI. See #​48127.

Previous Rancher Behavior Changes - Rancher App (Global UI)

  • Rancher v2.10.0:
    • This release includes a major upgrade to the Dashboard (Cluster Explorer) Vue framework from Vue 2 to Vue 3. Please view our documentation on updating existing UI extensions to be compliant with the Rancher v2.10 UI framework in the v2.10.0 UI extension changelog. If experiencing a page that fails to load please file an issue via the Dashboard repository and choose the "Bug report" option for us to further investigate. See #​7653.

    • The performance of the Clusters lists in the Home page and the Side Menu has greatly improved when there are hundreds of clusters. See #​11995 and #​11993.

    • The previous Dashboard Ember UI (Cluster Manager) will no longer be directly accessible. The relative pages that rely on the previous UI will continue to be embedded in the new Vue UI (Cluster Explorer). See #​11371.

    • Updated the data directory configuration by replacing the checkbox option with 3 user input options below:

      1. Use default data directory configuration
      2. Use a common base directory for data directory configuration (sub-directories will be used for the system-agent, provisioning and distro paths) - This option displays a text input where users can enter a base directory for all 3 subdirectories, to which Rancher programmatically appends the correct subdirectory names.
      3. Use custom data directories - This option displays 3 text inputs, one for each subdirectory type where users can input each path individually.

      See #​11560.

Previous Rancher Behavior Changes - RKE Provisioning

  • Rancher v2.10.0:
    • With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

      Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

      RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Previous Rancher Behavior Changes - Virtualization (Harvester)

  • Rancher v2.10.0:
    • On the Cloud Credential list, you can now easily see if a Harvester Credential is about to expire or has expired and choose to renew it. You will also be notified on the Cluster Management Clusters list when an associated Harvester Cloud Credential is about to expire or has expired. When upgrading, an existing expired Harvester Credential will not contain a warning. You can still renew the token on the resources menu. See #​11332.

Previous Rancher Behavior Changes - Windows

  • Rancher v2.10.0:
    • Rancher v2.10.0 includes changes to how Windows nodes behave after a node reboot, and provides two new settings to control how Windows services created by Rancher behave on startup.

      Two new agent environment variables have been added for Windows nodes, CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY and CATTLE_ENABLE_WINS_DELAYED_START. These changes can be configured in the Rancher UI, and will be respected by all nodes running rancher-wins version v0.4.20 or greater.

      • CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY defines a service dependency between RKE2 and rancher-wins, ensuring RKE2 will not start before rancher-wins.
      • CATTLE_ENABLE_WINS_DELAYED_START changes the start type of rancher-wins to AUTOMATIC (DELAYED), ensuring it starts after other Windows services.

      Additionally, Windows nodes will now attempt to execute plans up to 5 times if the initial application fails. This change, along with appropriate use of the two agent environment variables above, aims to address plan failures for Windows nodes after a node reboot.

      See #​42458.

    • A change was made starting with RKE2 versions v1.28.15, v1.29.10, v1.30.6 and v1.31.2 on Windows which allows the user to configure *_PROXY environment variables on the rke2 service after the node has already been provisioned.

      Previously, any attempt to do so was a no-op. With this change, if the *_PROXY environment variables are set on the cluster after a Windows node is provisioned, they can be automatically removed from the rke2 service. However, if the variables are set before the node is provisioned, they cannot be removed.

      More information can be found here. A workaround is to remove the environment variables from the rancher-wins service and restart the service or node, at which point *_PROXY environment variables will no longer be set on either service.

      Remove-ItemProperty HKLM:SYSTEM\CurrentControlSet\Services\rancher-wins -Name Environment
      Restart-Service rancher-wins

      See #​47544.

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and set spec.unpaused to false. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated, because the plugin is deprecated for upstream Kubernetes v1.27 and later. RKE cluster creation will not go through, as it raises a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​11435.
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.

Long-standing Known Issues - Rancher CLI

  • Rancher v2.10.1:
    • The Rancher CLI uses dedicated HTTP clients in login and SSH commands to download certificates and an SSH key respectively. However, the CLI currently does not respect proxy settings and does not set an HTTP timeout. See #​48321.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.9.2:
    • Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #​11922.
  • Rancher v2.7.7:
    • When creating a cluster in the Rancher UI, the Cluster Name field does not allow underscores (_). See #​9416.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #​46106.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you rollback from a version of Rancher v2.7.2 or later, to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors to occur downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #​40816.

Long-standing Known Issues - Harvester

  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.

Long-standing Known Issues - Continuous Delivery (Fleet)

  • Rancher v2.10.0:
    • Target customization for namespace labels and annotations cannot modify/remove labels when updating. See #​3064.
    • In version 0.10, GitRepo resources provided a comprehensive list of all deployed resources across all clusters in their status. However, in version 0.11, this list has been modified to report resources only once until the feature is integrated into the Rancher UI. While this change addresses a UI freeze issue, it may result in potential inaccuracies in the list of resources and resource counts under some conditions. See #​3027.

v2.10.1

Compare Source

Release v2.10.1

[!CAUTION] Note: If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues, as the AD FS Relying Party Trust may fail to pick up a signature verification certificate from the metadata; this requires manual intervention. This can be corrected either by updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...) or by directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate). For more information see #​48655.

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.10.1 is the latest patch release of Rancher. This is a Community and Prime version release that introduces maintenance updates and bug fixes. For more information on new features in the general minor release see the v2.10.0 release notes.

Cluster Provisioning

Major Bug Fixes
  • Fixed a file permission issue where, after upgrading to Rancher v2.9.3 or newer, deleting a node (i.e., scaling down a node pool) that was present before the upgrade would remove the node from Rancher and the downstream cluster, but leave the underlying virtual machine running in the infrastructure provider. See #​48341.
  • Fixed an issue where ClusterIPs with IPv6 addresses were handled incorrectly, which left IPv6-based deployments stuck waiting for the cluster agent to connect. See #​43878.

K3s Provisioning

Major Bug Fixes
  • Fixed an issue where upgrading the K8s version of the downstream node driver and custom K3s clusters may result in an etcd node reporting NodePressure, and eventually the rancher-system-agent reporting failures to execute plans. If this issue is encountered, it can be resolved by performing a systemctl restart k3s.service on the affected etcd-only nodes. See #​48096.
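As a sketch of the workaround above (shell access to the nodes is assumed; run this on each affected etcd-only node, one at a time):

```shell
# Restart the K3s service on an affected etcd-only node, then verify
# it came back up before moving on to the next node.
sudo systemctl restart k3s.service
systemctl is-active k3s.service
```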

RKE2 Provisioning

Known Issues
  • Clusters containing nodes with split etcd and control plane roles may fail to reconcile when upgrading Rancher. The following can be seen repeatedly in the control plane logs:

    time="(timestamp)" level=info msg="Starting rke2 v1.28.15+rke2r1 (96bb2a62fad8cb938a1761286b1d896623ac7014)"
    time="(timestamp)" level=fatal msg="starting kubernetes: preparing server: failed to get CA certs: Get \"https://(etcd-init-node-ip):9345/cacerts\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
    ...
    Failed to start Rancher Kubernetes Engine v2 (server).
    rke2-server.service: Scheduled restart job, restart counter is at 173.
    Stopped Rancher Kubernetes Engine v2 (server).

    In the event that a control plane node gets stuck during a Rancher upgrade, perform a systemctl restart rke2-server.service on the stuck control plane node, and then on the etcd node it is joined to (specified via the "server" field in /etc/rancher/rke2/config.yaml.d/50-rancher.yaml). This results in the cluster reconciling as normal.

    See #​48389.
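The recovery steps above might look like the following sketch (node access is assumed; the config file path comes from the note above):

```shell
# 1. On the stuck control plane node:
sudo systemctl restart rke2-server.service

# 2. Find the etcd node it is joined to (the "server" field):
grep 'server:' /etc/rancher/rke2/config.yaml.d/50-rancher.yaml

# 3. On that etcd node:
sudo systemctl restart rke2-server.service
```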

Rancher CLI

Known Issues
  • The Rancher CLI uses dedicated HTTP clients in login and SSH commands to download certificates and an SSH key respectively. However, the CLI currently does not respect proxy settings and does not set an HTTP timeout. See #​48321.

Rancher App (Global UI)

Features and Enhancements
  • This release includes a major upgrade to the Dashboard (Cluster Explorer) Vue framework from Vue 2 to Vue 3. Please view our documentation on updating existing UI extensions to be compliant with the Rancher v2.10 UI framework in the v2.10.0 UI extension changelog. If you experience a page that fails to load, please file an issue via the Dashboard repository and choose the "Bug report" option for us to investigate further. See #​7653.

  • As part of the Dashboard Vue framework transition to Vue 3 introduced in Rancher v2.10.0, a migration script was developed to allow extension developers to easily migrate their extensions to the minimum required setup. A number of fixes and enhancements have been made to this script:

    • Added a completion log to inform user to install dependencies.
    • Resolved build errors that would appear after running the migration script by updating the libraries used.
    • Fixed an issue where resolutions were not updating properly.

    See #​12656.

Known Issues
  • Users are able to create or edit clusters even when using an invalid Add-on YAML configuration. See #​12466.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.
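For the Docker-based air-gapped install described above, the registries.yaml mount and the privileged flag might be combined as in this sketch (the registry host, file path, and image tag are placeholders, not values from these notes):

```shell
# registries.yaml (containing mirrors that point at the private registry)
# is mounted into the container; --privileged is required for Docker
# installs of Rancher.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  -v /etc/rancher/k3s/registries.yaml:/etc/rancher/k3s/registries.yaml \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.example.com:5000 \
  rancher/rancher:v2.10.1
```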

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, EOL for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Images

  • rancher/rancher:v2.10.1

Tools

Kubernetes Versions for RKE

  • v1.31.3 (Default)
  • v1.30.7
  • v1.29.11
  • v1.28.15

Kubernetes Versions for RKE2/K3s

  • v1.31.3 (Default)
  • v1.30.7
  • v1.29.11
  • v1.28.15

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.

Other Notes

Experimental Features

Rancher now supports the ability to use an OCI Helm chart registry for Apps & Marketplace. View documentation on using OCI-based Helm chart repositories, and note that this feature is in an experimental stage. See #​29105 and #​45062.

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Previous Rancher Behavior Changes

Previous Rancher Behavior Changes - Rancher General

  • Rancher v2.10.0:
    • Kubernetes v1.27 is no longer supported. Before you upgrade to Rancher v2.10.0, make sure that all clusters are running Kubernetes v1.28 or later. See #​47591.
    • The new annotation field.cattle.io/creator-principal-name was introduced in addition to the existing field.cattle.io/creatorId that allows specifying the creator's principal name when creating a cluster or a project. If this annotation is used, the userPrincipalName field of the corresponding ClusterRoleTemplateBinding or ProjectRoleTemplateBinding will be set to the specified principal. The principal should belong to the creator's user, which is enforced by the webhook. See #​46828.
    • When searching for group principals with a SAML authentication provider (with LDAP turned off), Rancher now returns a principal of correct type (group) with the name matching the search term. When searching principals with a SAML provider (with LDAP turned off) without specifying the desired type (as in Add cluster/project member), Rancher now returns both user and group principals with the name matching the search term. See #​44441.
    • Rancher now captures the last used time for Tokens and stores it in the lastUsedAt field. If the Authorized Cluster Endpoint is enabled and used on a downstream cluster Rancher captures the last used time in the ClusterAuthToken object and makes the best effort to sync it back to the corresponding Token in the upstream. See #​45732.
    • Rancher deploys the System Upgrade Controller (SUC) to facilitate Kubernetes upgrades for imported RKE2/K3s clusters. Starting with this version, the mechanism used to deploy this component in downstream clusters has transitioned from legacy V1 apps to fully supported V2 apps, providing a seamless upgrade process for Rancher. For more details, please see this issue comment.
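As an illustration of the creator annotations described above (a sketch only; the user ID, principal format, and project name are hypothetical):

```shell
# Create a Project whose creator is recorded with an explicit principal.
# The webhook enforces that the principal belongs to the creator's user.
cat <<'EOF' | kubectl create -f -
apiVersion: management.cattle.io/v3
kind: Project
metadata:
  name: p-example
  namespace: local
  annotations:
    field.cattle.io/creatorId: u-abc123
    field.cattle.io/creator-principal-name: okta_user://alice@example.com
spec:
  clusterName: local
  displayName: example-project
EOF
```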

Previous Rancher Behavior Changes - Rancher CLI

  • Rancher v2.10.0:
    • The deprecated subcommand globaldns was removed from the Rancher CLI. See #​48127.

Previous Rancher Behavior Changes - Rancher App (Global UI)

  • Rancher v2.10.0:
    • This release includes a major upgrade to the Dashboard (Cluster Explorer) Vue framework from Vue 2 to Vue 3. Please view our documentation on updating existing UI extensions to be compliant with the Rancher v2.10 UI framework in the v2.10.0 UI extension changelog. If you experience a page that fails to load, please file an issue via the Dashboard repository and choose the "Bug report" option for us to investigate further. See #​7653.

    • The performance of the Clusters lists in the Home page and the Side Menu has greatly improved when there are hundreds of clusters. See #​11995 and #​11993.

    • The previous Dashboard Ember UI (Cluster Manager) will no longer be directly accessible. The relative pages that rely on the previous UI will continue to be embedded in the new Vue UI (Cluster Explorer). See #​11371.

    • Updated the data directory configuration by replacing the checkbox option with 3 user input options below:

      1. Use default data directory configuration
      2. Use a common base directory for data directory configuration (sub-directories will be used for the system-agent, provisioning and distro paths) - This option displays a text input where users can enter a base directory; Rancher programmatically appends the correct sub-directory for each of the 3 paths.
      3. Use custom data directories - This option displays 3 text inputs, one for each subdirectory type where users can input each path individually.

      See #​11560.

Previous Rancher Behavior Changes - RKE Provisioning

  • Rancher v2.10.0:
    • With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

      Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

      RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Previous Rancher Behavior Changes - Virtualization (Harvester)

  • Rancher v2.10.0:
    • On the Cloud Credential list, you can now easily see if a Harvester Credential is about to expire or has expired and choose to renew it. You will also be notified on the Cluster Management Clusters list when an associated Harvester Cloud Credential is about to expire or has expired. When upgrading, an existing expired Harvester Credential will not contain a warning. You can still renew the token on the resources menu. See #​11332.

Previous Rancher Behavior Changes - Windows

  • Rancher v2.10.0:
    • Rancher v2.10.0 includes changes to how Windows nodes behave post node reboot, as well as provides two new settings to control how Windows services created by Rancher behave on startup.

      Two new agent environment variables have been added for Windows nodes, CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY and CATTLE_ENABLE_WINS_DELAYED_START. These changes can be configured in the Rancher UI, and will be respected by all nodes running rancher-wins version v0.4.20 or greater.

      • CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY defines a service dependency between RKE2 and rancher-wins, ensuring RKE2 will not start before rancher-wins.
      • CATTLE_ENABLE_WINS_DELAYED_START changes the start type of rancher-wins to AUTOMATIC (DELAYED), ensuring it starts after other Windows services.

      Additionally, Windows nodes will now retry plan execution up to 5 times if the initial application fails. This change, as well as appropriate use of the above two agent environment variables, aims to address plan failures for Windows nodes after a node reboot.

      See #​42458.

    • A change was made starting with RKE2 versions v1.28.15, v1.29.10, v1.30.6 and v1.31.2 on Windows which allows the user to configure *_PROXY environment variables on the rke2 service after the node has already been provisioned.

      Previously, any attempt to do so would be a no-op. With this change, if the *_PROXY environment variables are set on the cluster after a Windows node is provisioned, they can be automatically removed from the rke2 service. However, if the variables are set before the node is provisioned, they cannot be removed.

      More information can be found here. A workaround is to remove the environment variables from the rancher-wins service and restart the service or node, at which point *_PROXY environment variables will no longer be set on either service.

      Remove-ItemProperty HKLM:SYSTEM\CurrentControlSet\Services\rancher-wins -Name Environment
      Restart-Service rancher-wins

      See #​47544.
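The two Windows agent environment variables above are typically written into the cluster spec that the Rancher UI manages; a minimal sketch (the cluster name is a placeholder, and applying a spec this bare would overwrite other settings on a real cluster):

```shell
# Enable the rancher-wins service dependency and delayed start for all
# Windows nodes in an RKE2 cluster (requires rancher-wins v0.4.20+).
cat <<'EOF' | kubectl apply -f -
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-windows-cluster
  namespace: fleet-default
spec:
  agentEnvVars:
    - name: CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY
      value: "true"
    - name: CATTLE_ENABLE_WINS_DELAYED_START
      value: "true"
EOF
```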

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and setting spec.unpaused to true. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated, due to the plugin being deprecated for upstream Kubernetes v1.27 and later. RKE creation will not go through as it will raise a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​11435.
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.
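As an alternative sketch of the allow-dynamic-schema-drop workaround above, the annotation could also be set from the CLI once the cluster object exists (the cluster name is a placeholder; the release note itself recommends the Edit Yaml option in the UI):

```shell
# Set the annotation on the provisioning cluster object directly,
# bypassing the UI field that loses the value.
kubectl -n fleet-default annotate clusters.provisioning.cattle.io \
  my-rke2-cluster provisioning.cattle.io/allow-dynamic-schema-drop=true
```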

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.9.2:
    • Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #​11922.
  • Rancher v2.7.7:
    • When creating a cluster in the Rancher UI it does not allow the use of an underscore _ in the Cluster Name field. See #​9416.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #​46106.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you rollback from a version of Rancher v2.7.2 or later, to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors to occur downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #​40816.

Long-standing Known Issues - Harvester

  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.

Long-standing Known Issues - Continuous Delivery (Fleet)

  • Rancher v2.10.0:
    • Target customization for namespace labels and annotations cannot modify/remove labels when updating. See #​3064.
    • In version 0.10, GitRepo resources provided a comprehensive list of all deployed resources across all clusters in their status. However, in version 0.11, this list has been modified to report resources only once until the feature is integrated into the Rancher UI. While this change addresses a UI freeze issue, it may result in potential inaccuracies in the list of resources and resource counts under some conditions. See #​3027.

v2.10.0

Compare Source

Release v2.10.0

[!CAUTION] Note: If you are using Active Directory Federation Service (AD FS), upgrading to Rancher v2.10.1 or later may cause authentication issues: the AD FS Relying Party Trust may fail to pick up a signature verification certificate from the metadata, which requires manual intervention. This can be corrected either by updating the Relying Party Trust information from federation metadata (Relying Party Trust -> Update from Federation Metadata...) or by directly adding the certificate (Relying Party Trust -> Properties -> Signature tab -> Add -> Select the certificate). For more information see #​48655.

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.10.0 is the latest minor release of Rancher. This is a Community version release that introduces new features, enhancements, and various updates.

Security Fixes for Rancher Vulnerabilities

This release addresses the following Rancher security issues:

  • Permissions required to view, edit, and upgrade Apps have been revised. Users must now possess the "read" permission for the associated Helm secret in order to view the values used during an App's installation, as well as to edit or upgrade it. For more information, see CVE-2024-52282.
  • Fixed an issue where Rancher API watch requests ignored user permissions, enabling non-privileged Rancher users to view sensitive objects (including secrets and credentials) they do not own. For more information, see CVE-2024-52280.
  • Fixed an issue where namespace filters issued to watch requests through the Rancher API were sometimes ignored. Specifying multiple different namespaces in a watch request by ID now generates a warning, and will be disallowed in a future Rancher version. For more information, see CVE-2024-52280.
  • Several enhancements have been made to the cluster and node driver registration process to prevent the possibility of remote code execution (RCE) through untrusted third-party cluster and node drivers. For more information, see CVE-2024-22036.
  • To avoid credentials being stored in plain-text within the vSphere add-on config when creating a vSphere Rancher HA setup, the provisioningprebootstrap feature was added. For more information, see CVE-2022-45157.
  • Replaced instances of v-tooltip with v-clean-tooltip to fix an issue where the UI did not sanitize cluster description inputs, allowing the possibility of changes to a cluster (local or downstream) description to cause a stored XSS attack. For more information, see CVE-2024-52281.

For more details, see the Security Advisories and CVEs page in Rancher's documentation or in Rancher's GitHub repo.

Highlights

Rancher General

Features and Enhancements
  • Rancher now supports Kubernetes v1.31. See #​46197 for information on Rancher support for Kubernetes v1.31. Additionally, the upstream Kubernetes changelogs for v1.31 can be viewed for a full list of changes.
Behavior Changes
  • Kubernetes v1.27 is no longer supported. Before you upgrade to Rancher v2.10.0, make sure that all clusters are running Kubernetes v1.28 or later. See #​47591.
  • The new annotation field.cattle.io/creator-principal-name was introduced in addition to the existing field.cattle.io/creatorId, allowing the creator's principal name to be specified when creating a cluster or a project. If this annotation is used, the userPrincipalName field of the corresponding ClusterRoleTemplateBinding or ProjectRoleTemplateBinding will be set to the specified principal. The principal should belong to the creator's user, which is enforced by the webhook. See #​46828.
  • When searching for group principals with a SAML authentication provider (with LDAP turned off), Rancher now returns a principal of the correct type (group) with the name matching the search term. When searching principals with a SAML provider (with LDAP turned off) without specifying the desired type (as in Add cluster/project member), Rancher now returns both user and group principals with the name matching the search term. See #​44441.
  • Rancher now captures the last used time for Tokens and stores it in the lastUsedAt field. If the Authorized Cluster Endpoint is enabled and used on a downstream cluster Rancher captures the last used time in the ClusterAuthToken object and makes the best effort to sync it back to the corresponding Token in the upstream. See #​45732.
  • Rancher deploys the System Upgrade Controller (SUC) to facilitate Kubernetes upgrades for imported RKE2/K3s clusters. Starting with this version, the mechanism used to deploy this component in downstream clusters has transitioned from legacy V1 apps to fully supported V2 apps, providing a seamless upgrade process for Rancher. For more details, please see this issue comment.

Rancher App (Global UI)

Behavior Changes
  • This release includes a major upgrade to the Dashboard (Cluster Explorer) Vue framework from Vue 2 to Vue 3. Please view our documentation on updating existing UI extensions to be compliant with the Rancher v2.10 UI framework in the v2.10.0 UI extension changelog. If you experience a page that fails to load, please file an issue via the Dashboard repository and choose the "Bug report" option for us to investigate further. See #​7653.
  • The performance of the Clusters lists in the Home page and the Side Menu has greatly improved when there are hundreds of clusters. See #​11995 and #​11993.
  • The previous Dashboard Ember UI (Cluster Manager) will no longer be directly accessible. The relative pages that rely on the previous UI will continue to be embedded in the new Vue UI (Cluster Explorer). See #​11371.
  • Updated the data directory configuration by replacing the checkbox option with 3 user input options below:
    1. Use default data directory configuration
    2. Use a common base directory for data directory configuration (sub-directories will be used for the system-agent, provisioning and distro paths) -> This option displays a text input where users can enter a base directory; Rancher programmatically appends the correct sub-directory for each of the 3 paths.
    3. Use custom data directories -> This option displays 3 text inputs, one for each subdirectory type where users can input each path individually. See #​11560.
Major Bug Fixes
  • Fixed an issue where when creating a GKE cluster in the Rancher UI you would see provisioning failures as the clusterIpv4CidrBlock and clusterSecondaryRangeName fields conflict. See #​8749.

K3s Provisioning

Known Issues
  • An issue was discovered where upgrading the Kubernetes version of downstream node driver and custom K3s clusters may result in an etcd node reporting NodePressure, and eventually the rancher-system-agent reporting failures to execute plans. If this issue is encountered, it can be resolved by performing a systemctl restart k3s.service on the affected etcd-only nodes. See #​48096 and this issue comment for more information.

RKE Provisioning

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Major Bug Fixes
  • Fixed a permission issue which led to failures when attempting to provision an RKE cluster on vSphere. See #​47938.

Rancher CLI

Major Bug Fixes
  • When using the Rancher CLI, the prompt to choose an auth provider will now always have the local provider in the first position. See #​46128.
Behavior Changes
  • The deprecated subcommand globaldns was removed from the Rancher CLI. See #​48127.

Authentication

Features and Enhancements
  • Added support for SAML logout-all to the SAML-based external auth providers (EAP). A logout-all logs a user out not only of Rancher, but also of the associated session in the EAP, which in turn logs the user out of other applications attached to the same session. When logging into Rancher again, a full authentication has to be performed to establish a new session in the EAP. This is in contrast to a regular logout, where a re-login re-uses the session and bypasses the need for actual re-authentication. The EAP configuration form has been extended so that the configuring admin can choose whether logout-all is available to users and, if so, whether users are forced to always use logout-all instead of choosing between it and a regular logout. See #​38494.
  • There is an option now to force a password reset on first logon when setting up a rancher2_user. See #​45736.

Continuous Delivery (Fleet)

  • Fleet v0.11.0 is also releasing alongside Rancher v2.10 and improves several log and status messages. It reduces the number of reconciles the controllers perform in response to resource changes, and it adds documented Kubernetes events for the GitRepo resource that users can subscribe to.
Known Issues
  • There are a few known issues affecting Rancher that were not fixed in time for this release:

  • Target customization for namespace labels and annotations cannot modify/remove labels when updating. See #​3064.

  • In version 0.10, GitRepo resources provided a comprehensive list of all deployed resources across all clusters in their status. However, in version 0.11, this list has been modified to report resources only once until the feature is integrated into the Rancher UI. While this change addresses a UI freeze issue, it may result in potential inaccuracies in the list of resources and resource counts under some conditions. See #​3027.

Role-Based Access Control (RBAC) Framework

Features and Enhancements
  • Impersonation in downstream clusters via the Rancher proxy is now supported, enabling users with the appropriate permissions to impersonate other users or ServiceAccounts. See #​41988.
  • It is possible to opt out of cluster owner and project owner RBAC for a newly provisioned cluster if the cluster yaml includes the annotation field.cattle.io/no-creator-rbac: "true". This is useful when a service account is provisioning the cluster as service accounts can't have RBAC applied to them. See #​45591.
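
    As a sketch of the opt-out above (assuming the provisioning.cattle.io v1 Cluster schema; the cluster name is hypothetical), the annotation goes in the cluster manifest's metadata:

    ```yaml
    # Hypothetical cluster manifest: opt out of creator RBAC for a
    # cluster provisioned by a service account.
    apiVersion: provisioning.cattle.io/v1
    kind: Cluster
    metadata:
      name: example-cluster        # hypothetical name
      namespace: fleet-default
      annotations:
        field.cattle.io/no-creator-rbac: "true"
    ```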

Virtualization (Harvester)

Features and Enhancements
  • A warning banner has been added when provisioning a multi-node Harvester RKE2 cluster in Rancher, advising that you allocate one more vGPU than the number of nodes to avoid the "un-schedulable" errors seen after cluster updates. See #​10989.
Behavior Changes
  • On the Cloud Credential list, you can now easily see if a Harvester Credential is about to expire or has expired and choose to renew it. You will also be notified on the Cluster Management Clusters list when an associated Harvester Cloud Credential is about to expire or has expired. When upgrading, an existing expired Harvester Credential will not contain a warning. You can still renew the token on the resources menu. See #​11332.

Windows Nodes - General

Behavior Changes
  • Rancher v2.10.0 includes changes to how Windows nodes behave post node reboot, as well as provides two new settings to control how Windows services created by Rancher behave on startup.

    Two new agent environment variables have been added for Windows nodes, CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY and CATTLE_ENABLE_WINS_DELAYED_START. These changes can be configured in the Rancher UI, and will be respected by all nodes running rancher-wins version v0.4.20 or greater.

    • CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY defines a service dependency between RKE2 and rancher-wins, ensuring RKE2 will not start before rancher-wins.
    • CATTLE_ENABLE_WINS_DELAYED_START changes the start type of rancher-wins to AUTOMATIC (DELAYED), ensuring it starts after other Windows services.

    Additionally, Windows nodes will now retry plan execution up to 5 times if the initial application fails. This change, together with appropriate use of the two agent environment variables above, aims to address plan failures for Windows nodes after a node reboot.

    See #​42458.
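
    The two variables above can be set in the cluster's agent environment; a minimal sketch, assuming the provisioning.cattle.io v1 Cluster spec's agentEnvVars field:

    ```yaml
    # Hypothetical fragment of a provisioning.cattle.io/v1 Cluster spec.
    # Both variables are honored by nodes running rancher-wins v0.4.20 or greater.
    spec:
      agentEnvVars:
        - name: CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY
          value: "true"
        - name: CATTLE_ENABLE_WINS_DELAYED_START
          value: "true"
    ```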

Windows Nodes in RKE2 Clusters

Behavior Changes
  • A change was made starting with RKE2 versions v1.28.15, v1.29.10, v1.30.6 and v1.31.2 on Windows which allows the user to configure *_PROXY environment variables on the rke2 service after the node has already been provisioned.

    Previously, any attempt to do so was a no-op. With this change, if the *_PROXY environment variables are set on the cluster after a Windows node is provisioned, they can be automatically removed from the rke2 service. However, if the variables are set before the node is provisioned, they cannot be removed.

    More information can be found here. A workaround is to remove the environment variables from the rancher-wins service and restart the service or node, at which point *_PROXY environment variables will no longer be set on either service.

    # Remove the inherited *_PROXY values from the rancher-wins service
    Remove-ItemProperty HKLM:SYSTEM\CurrentControlSet\Services\rancher-wins -Name Environment
    # Restart rancher-wins so the change takes effect
    Restart-Service rancher-wins

    See #​47544.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.
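
For the air-gapped proxy requirement above, the Rancher Helm chart's proxy and noProxy values are one common place to pass the additional NO_PROXY parameters. A hypothetical sketch (the proxy address is an example; adjust the ranges to your environment):

```yaml
# Hypothetical Helm values for Rancher behind a proxy. The noProxy list
# must include cluster-internal ranges and suffixes so that in-cluster
# traffic bypasses the proxy.
proxy: "http://proxy.example.com:3128"
noProxy: "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
```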

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, EOL for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.

Images

  • rancher/rancher:v2.10.0

Tools

Kubernetes Versions for RKE

  • v1.31.2 (Default)
  • v1.30.6
  • v1.29.10
  • v1.28.15

Kubernetes Versions for RKE2/K3s

  • v1.31.2 (Default)
  • v1.30.6
  • v1.29.10
  • v1.28.15

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.

Other Notes

Experimental Features

Rancher now supports using an OCI Helm chart registry for Apps & Marketplace. View the documentation on using OCI-based Helm chart repositories, and note that this feature is still experimental. See #​29105 and #​45062.

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and set spec.unpaused to false. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated, due to the plugin being deprecated for upstream Kubernetes v1.27 and later. RKE creation will not go through as it will raise a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​11435.
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.9.2:
    • Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #​11922.
  • Rancher v2.7.7:
    • When creating a cluster, the Rancher UI does not allow the use of an underscore (_) in the Cluster Name field. See #​9416.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #​46106.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you rollback from a version of Rancher v2.7.2 or later, to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors to occur downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #​40816.

Long-standing Known Issues - Harvester

  • Rancher v2.9.0:
    • In the Rancher UI, when navigating between Harvester clusters of different versions, a refresh may be required to view version-specific functionality. See #​11559.
  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.

v2.9.3

Compare Source

Release v2.9.3

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.9.3 is the latest patch release of Rancher. This is a Community and Prime version release that introduces new features, enhancements, and various updates. To learn more about Rancher Prime, see our page on the Rancher Prime Platform.

Security Fixes for Rancher Vulnerabilities

This release addresses the following Rancher security issues:

  • Several enhancements have been made to the cluster and node driver registration process to prevent the possibility of remote code execution (RCE) through untrusted third-party cluster and node drivers. For more information, see CVE-2024-22036.
  • Improvements have been made to binaries and configuration files that are known to be executed by administrative accounts to prevent the possibility of privilege escalation. The binaries and configuration files now have stricter ACLs so that only Administrators can amend the files. For more information, see CVE-2023-32197.
  • To avoid credentials being stored in plain-text within the vSphere add-on config when creating a vSphere Rancher HA setup, the provisioningprebootstrap feature was added. For more information, see CVE-2022-45157.

For more details, see the Security Advisories and CVEs page in Rancher's documentation or in Rancher's GitHub repo.

Rancher UI

Features and Enhancements
  • The performance of the Clusters list on the Home page and the Side Menu has greatly improved when there are hundreds of clusters. See #​11999 and #​12009.

Harvester

Features and Enhancements
  • On the Cloud Credential list, you can now easily see if a Harvester Credential is about to expire or has expired and choose to renew it. You will also be notified on the Cluster Management Clusters list when an associated Harvester Cloud Credential is about to expire or has expired. When upgrading, an existing expired Harvester Credential will not contain a warning. You can still renew the token on the resources menu. See #​11270.

Role-Based Access Control (RBAC) Framework

Major Bug Fixes
  • Fixed an issue where multiple stale secrets could be erroneously created for users on downstream clusters. See #​46894.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must re-platform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about re-platforming here.

Images

  • rancher/rancher:v2.9.3

Tools

Kubernetes Versions for RKE

  • v1.30.5 (Default)
  • v1.29.9
  • v1.28.14
  • v1.27.16

Kubernetes Versions for RKE2/K3s

  • v1.30.5 (Default)
  • v1.29.9
  • v1.28.14
  • v1.27.16

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.

Other Notes

Experimental Features

Rancher now supports using an OCI Helm chart registry for Apps & Marketplace. View the documentation on using OCI-based Helm chart repositories, and note that this feature is still experimental. See #​29105 and #​45062.

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Previous Rancher Behavior Changes

Previous Rancher Behavior Changes - Rancher General

  • Rancher v2.9.0:
    • Kubernetes v1.25 and v1.26 are no longer supported. Before you upgrade to Rancher v2.9.0, make sure that all clusters are running Kubernetes v1.27 or later. See #​45882.
    • The external-rules feature flag functionality is removed in Rancher v2.9.0 as the behavior is enabled by default. The feature flag is still present when upgrading from v2.8.5; however, enabling or disabling the feature won't have any effect. For more information, see CVE-2023-32196 and #​45863.
    • Rancher now validates the Container Default Resource Limit on Projects. Validation mimics the upstream behavior of the Kubernetes API server when it validates LimitRanges. The container default resource configuration must have properly formatted quantities for all requests and limits. Limits for any resource must not be less than requests. See #​39700.
  • Rancher v2.8.4:
    • The controller now cleans up instances of ClusterUserAttribute that have no corresponding UserAttribute. See #​44985.
  • Rancher v2.8.3:
    • When Rancher starts, it now identifies all deprecated and unrecognized setting resources and adds a cattle.io/unknown label. You can list these settings with the command kubectl get settings -l 'cattle.io/unknown==true'. In Rancher v2.9 and later, these settings will be removed instead. See #​43992.
  • Rancher v2.8.0:
    • Rancher Compose is no longer supported, and all parts of it are being removed in the v2.8 release line. See #​43341.
    • Kubernetes v1.23 and v1.24 are no longer supported. Before you upgrade to Rancher v2.8.0, make sure that all clusters are running Kubernetes v1.25 or later. See #​42828.

Previous Rancher Behavior Changes - Cluster Provisioning

  • Rancher v2.8.4:
    • Docker CLI 20.x is at end-of-life and no longer supported in Rancher. Please update your local Docker CLI versions to 23.0.x or later. Earlier versions may not recognize OCI compliant Rancher image manifests. See #​45424.
  • Rancher v2.8.0:
    • Kontainer Engine v1 (KEv1) provisioning and the respective cluster drivers are now deprecated. KEv1 provided plug-ins for different targets using cluster drivers. The Rancher-maintained cluster drivers for EKS, GKE and AKS have been replaced by the hosted provider drivers, EKS-Operator, GKE-Operator and AKS-Operator. Node drivers are now available for self-managed Kubernetes.
  • Rancher v2.7.2:
    • When you provision a downstream cluster, the cluster's name must conform to RFC-1123. Previously, characters that did not follow the specification, such as ., were permitted and would result in clusters being provisioned without the necessary Fleet components. See #​39248.
    • Privilege escalation is disabled by default when creating deployments from the Rancher API. See #​7165.

Previous Rancher Behavior Changes - RKE Provisioning

  • Rancher v2.9.0:
    • With the release of Rancher Kubernetes Engine (RKE) v1.6.0, RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

      Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must re-platform from RKE to RKE2 or K3s.

      RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker machine. Learn more about re-platforming at the official SUSE blog.

    • Rancher has added support for external Azure cloud providers in downstream RKE clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #​44857.

    • Weave CNI support for RKE clusters is removed in response to Weave CNI not being supported by upstream Kubernetes v1.30 and later. See #​45954

  • Rancher v2.8.0:
    • Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #​43175.
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated. Weave will be removed in RKE v1.30. See #​42730.

Previous Rancher Behavior Changes - RKE2 Provisioning

  • Rancher v2.9.2:
    • Fixed an issue where downstream RKE2 clusters could become corrupted if KDM data (from the rke-metadata-config setting) was invalid. Note that, with this fix, affected clusters may show an "Updating" status with a message indicating that KDM data is missing, rather than incorrectly reporting "Active". See #​46855.
  • Rancher v2.9.0:
    • Rancher has added support for external Azure cloud providers in downstream RKE2 clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #​44856.
    • Added a new annotation, provisioning.cattle.io/allow-dynamic-schema-drop. When set to true, it drops the dynamicSchemaSpec field from machine pool definitions. This prevents cluster nodes from re-provisioning unintentionally when the cluster object is updated from an external source such as Terraform or Fleet. See #​44618.
  • Rancher v2.8.0:
    • Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE2 clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #​42749.
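
The provisioning.cattle.io/allow-dynamic-schema-drop annotation described above is set on the cluster object itself; a minimal sketch (the cluster name is hypothetical):

```yaml
# Hypothetical cluster manifest fragment: drop dynamicSchemaSpec from
# machine pool definitions so nodes are not re-provisioned when the
# object is updated from an external source (e.g. Terraform or Fleet).
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-rke2-cluster   # hypothetical name
  namespace: fleet-default
  annotations:
    provisioning.cattle.io/allow-dynamic-schema-drop: "true"
```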

Previous Rancher Behavior Changes - Cluster API

  • Rancher v2.7.7:
    • The cluster-api core provider controllers run in a pod in the cattle-provisioning-cattle-system namespace, within the local cluster. These controllers are installed with a Helm chart. Previously, Rancher ran cluster-api controllers in an embedded fashion. This change makes it easier to maintain cluster-api versioning. See #​41094.
    • The token hashing algorithm generates new tokens using SHA3. Existing tokens that don't use SHA3 won't be re-hashed. This change affects ClusterAuthTokens (the downstream synced version of tokens for ACE) and Tokens (only when token hashing is enabled). SHA3 tokens should work with ACE and Token Hashing. Tokens that don't use SHA3 may not work when ACE and token hashing are used in combination. If, after upgrading to Rancher v2.7.7, you experience issues with ACE while token hashing is enabled, re-generate any applicable tokens. See #​42062.

Previous Rancher Behavior Changes - Rancher App (Global UI)

  • Rancher v2.8.0:
    • The built-in restricted-admin role is being deprecated in favor of a more flexible global role configuration that supports use cases beyond the restricted-admin role. If you want to replicate the permissions granted through this role, use the new inheritedClusterRoles feature to create a custom global role. A custom global role, like the restricted-admin role, grants permissions on all downstream clusters. See #​42462. Given its deprecation, the restricted-admin role will continue to be included in future builds of Rancher through the v2.8.x and v2.9.x release lines. However, in accordance with the CVSS standard, only security issues scored as critical will be backported and fixed in the restricted-admin role until it is completely removed from Rancher.
    • Reverse DNS server functionality has been removed. The associated rancher/rdns-server repository is now archived. Reverse DNS is already disabled by default.
    • The Rancher CLI configuration file ~/.rancher/cli2.json previously had permissions set to 0644. Although 0644 would usually indicate that all users have read access to the file, the parent directory would block users' access. New Rancher CLI configuration files will only be readable by the owner (0600). Invoking the CLI will trigger a warning, in case old configuration files are world-readable or group-readable. See #​42838.

Previous Rancher Behavior Changes - Rancher App (Helm Chart)

  • Rancher v2.7.0:
    • When installing or upgrading an official Rancher Helm chart app in a RKE2/K3s cluster, if a private registry exists in the cluster configuration, that registry will be used for pulling images. If no cluster-scoped registry is found, the global container registry will be used. A custom default registry can be specified during the Helm chart install and upgrade workflows. Previously, only the global container registry was used when installing or upgrading an official Rancher Helm chart app for RKE2/K3s node driver clusters.

Previous Rancher Behavior Changes - Continuous Delivery

  • Rancher v2.9.0:
    • Rancher now supports monitoring of continuous delivery. Starting with version v104.0.1 of the Fleet chart (v0.10.1 of Fleet) and the rancher-monitoring chart, continuous delivery provides metrics about the state of its resources, and the rancher-monitoring chart contains dashboards to visualize those metrics. Installing the rancher-monitoring chart to the local/upstream cluster automatically configures Prometheus to scrape metrics from the continuous delivery controllers and installs Grafana dashboards. These dashboards are accessible via Grafana but are not yet integrated into the Rancher UI. You can open Grafana from the Rancher UI by navigating to the Cluster > Monitoring > Grafana view. See rancher/fleet#1408 for implementation details.
    • Continuous delivery in Rancher also introduces sharding with node selectors. See rancher/fleet#1740 for implementation details and the Fleet documentation for instructions on how to use it.
    • We have reduced image size and complexity by integrating the former external gitjob repository and by merging various controller codes. This also means that the gitjob container image (rancher/gitjob) is not needed anymore, as the required functionality is embedded into the rancher/fleet container image. The gitjob deployment will still be created but pointing to the rancher/fleet container image instead. Please also note that a complete list of necessary container images for air-gapped deployments is released alongside Rancher releases. You can find this list as rancher-images.txt in the assets of the release on Github. See rancher/fleet#2342 for more details.
    • Continuous delivery also adds experimental OCI content storage. See rancher/fleet#2561 for implementation details and rancher/fleet-docs#179 for documentation.
    • Continuous delivery now splits components into containers and has switched to the controller-runtime framework. The rewritten controllers switch to structured logging.
    • Leader election can now be configured (see rancher/fleet#1981), as well as the worker count for the fleet-controller (see rancher/fleet#2430).
    • The release deprecates the "fleet test" command in favor of "target" and "deploy" with a dry-run option (see rancher/fleet#2102).
    • Bug fixes enhance drift detection, cluster status reporting, and various operational aspects.

Previous Rancher Behavior Changes - Pod Security Standard (PSS) & Pod Security Admission (PSA)

  • Rancher v2.7.2:
    • You must manually change the psp.enabled value in the chart install yaml when you install or upgrade v102.x.y charts on hardened RKE2 clusters. Instructions for updating the value are available. See #​41018.

Previous Rancher Behavior Changes - Authentication

  • Rancher v2.8.3:
    • Rancher uses additional trusted CAs when establishing a secure connection to the Keycloak OIDC authentication provider. See #​43217.
  • Rancher v2.8.0:
    • The kubeconfig-token-ttl-minutes setting has been replaced by the setting, kubeconfig-default-token-ttl-minutes, and is no longer available in the UI. See #​38535.
    • API tokens now have default time periods after which they expire. Authentication tokens expire after 90 days, while kubeconfig tokens expire after 30 days. See #​41919.
  • Rancher v2.7.2:
    • Rancher might retain resources from a disabled auth provider configuration in the local cluster, even after you configure another auth provider. To manually trigger cleanup for a disabled auth provider, add the management.cattle.io/auth-provider-cleanup annotation with the unlocked value to its auth config. See #​40378.
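For example, the cleanup trigger described above can be set with kubectl. This is a sketch, assuming an Azure AD auth config; the auth config name (here azuread) depends on the provider being cleaned up:

```shell
# Mark a disabled auth provider's config so Rancher cleans up its resources
kubectl annotate --overwrite authconfig azuread \
  management.cattle.io/auth-provider-cleanup=unlocked
```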

Previous Rancher Behavior Changes - Rancher Webhook

  • Rancher v2.8.3:
    • The embedded Cluster API webhook is removed from the Rancher webhook and can no longer be installed from the webhook chart. It has not been used as of Rancher v2.7.7, where it was migrated to a separate Pod. See #​44619.
  • Rancher v2.8.0:
    • Rancher's webhook now honors the bind and escalate verbs for GlobalRoles. Users who have * set on GlobalRoles will now have both of these verbs, and could potentially use them to escalate privileges in Rancher v2.8.0 and later. You should review current custom GlobalRoles, especially cases where bind, escalate, or * are granted, before you upgrade.
  • Rancher v2.7.5:
    • Rancher installs the same pinned version of the rancher-webhook chart not only in the local cluster but also in all downstream clusters. Restoring Rancher from v2.7.5 to an earlier version will result in downstream clusters' webhooks being at the version set by Rancher v2.7.5, which might cause incompatibility issues. Local and downstream webhook versions need to be in sync. See #​41730 and #​41917.
    • The mutating webhook configuration for secrets is no longer active in downstream clusters. See #​41613.

Previous Rancher Behavior Changes - Apps & Marketplace

  • Rancher v2.8.0:
    • Legacy code for the following v1 charts is no longer available in the rancher/system-charts repository:

      • rancher-cis-benchmark
      • rancher-gatekeeper-operator
      • rancher-istio
      • rancher-logging
      • rancher-monitoring

      The code for these charts will remain available for previous versions of Rancher.

    • Helm v2 support is deprecated as of the Rancher v2.7 line and will be removed in Rancher v2.9.

  • Rancher v2.7.0:
    • Rancher no longer validates an app registration's permissions to use Microsoft Graph on endpoint updates or initial setup. You should add Directory.Read.All permissions of type Application. If you configure a different set of permissions, Rancher may not have sufficient privileges to perform some necessary actions within Azure AD, causing errors.
    • The multi-cluster app legacy feature is no longer available. See #​39525.

Previous Rancher Behavior Changes - OPA Gatekeeper

  • Rancher v2.8.0:
    • OPA Gatekeeper is now deprecated and will be removed in a future release. As a replacement for OPA Gatekeeper, consider switching to Kubewarden. See #​42627.

Previous Rancher Behavior Changes - Feature Charts

  • Rancher v2.7.0:
    • A configurable priorityClass is available in the Rancher pod and its feature charts. Previously, pods critical to running Rancher didn't use a priority class. This could cause a cluster with limited resources to evict Rancher pods before other noncritical pods. See #​37927.

Previous Rancher Behavior Changes - Backup/Restore

  • Rancher v2.7.7:
    • If you use a version of backup-restore older than v102.0.2+up3.1.2 to take a backup of Rancher v2.7.7, the migration will encounter a capi-webhook error. Make sure that the chart version used for backups is v102.0.2+up3.1.2, which has cluster.x-k8s.io/v1alpha4 resources removed from the resourceSet. If you can't use v102.0.2+up3.1.2 for backups, delete all cluster.x-k8s.io/v1alpha4 resources from the backup tar before using it. See #​382.
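The tar-cleanup workaround above can be sketched as follows. This is a synthetic demo using hypothetical file names: it skips the extract/re-pack steps and only shows the filtering of manifests that declare the removed cluster.x-k8s.io/v1alpha4 apiVersion:

```shell
# After extracting the backup tar, remove every manifest that references
# the dropped cluster.x-k8s.io/v1alpha4 API, then re-pack the archive.
set -e
work=$(mktemp -d)
# Synthetic stand-ins for extracted backup contents:
printf 'apiVersion: cluster.x-k8s.io/v1alpha4\nkind: Cluster\n' > "$work/capi-cluster.yaml"
printf 'apiVersion: v1\nkind: ConfigMap\n' > "$work/configmap.yaml"
# Delete only the files that declare the removed apiVersion
grep -rl 'cluster.x-k8s.io/v1alpha4' "$work" | xargs -r rm -f
```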

Previous Rancher Behavior Changes - Logging

  • Rancher v2.7.0:
    • Rancher defaults to using the bci-micro image for sidecar audit logging. Previously, the default image was Busybox. See #​35587.

Previous Rancher Behavior Changes - Monitoring

  • Rancher v2.7.2:
    • Rancher maintains a /v1/counts endpoint that the UI uses to display resource counts. The UI subscribes to changes to the counts for all resources through a websocket to receive the new counts for resources.
      • Rancher aggregates the changed counts and only sends a message every 5 seconds. This, in turn, requires the UI to update the counts at most once every 5 seconds, improving UI performance. Previously, Rancher would send a message each time the resource counts changed for a resource type. This led to the UI needing to constantly stop other areas of processing to update the resource counts. See #​36682.
      • Rancher now only sends back a count for a resource type if the count has changed from the previously known number, improving UI performance. Previously, each message from this socket would include all counts for every resource type in the cluster, even if the counts only changed for one specific resource type. This would cause the UI to need to re-update resource counts for every resource type at a high frequency, with a significant performance impact. See #​36681.

Previous Rancher Behavior Changes - Project Monitoring

  • Rancher v2.7.2:
    • The Helm Controller in RKE2/K3s respects the managedBy annotation. In its initial release, Project Monitoring V2 required a workaround to set helmProjectOperator.helmController.enabled: false, since the Helm Controller operated on a cluster-wide level and ignored the managedBy annotation. See #​39724.

Previous Rancher Behavior Changes - Security

  • Rancher v2.9.0:
    • When agent-tls-mode is set to strict, users must provide the certificate authority to Rancher or downstream clusters will disconnect from Rancher, and require manual intervention to fix. This applies to several setup types, including:

      • Let's Encrypt - When set to strict, users must upload the Let's Encrypt Certificate Authority and provide privateCA=true when installing the chart.
      • Bring Your Own Cert - When set to strict, users must upload the Certificate Authority used to generate the cert and provide privateCA=true when installing the chart.
      • Proxy/External - when the setting is strict, users must upload the Certificate Authority used by the proxy and provide privateCA=true when installing the chart.

      See #​45628 and #​45655.

  • Rancher v2.8.0:
    • TLS v1.0 and v1.1 are no longer supported for Rancher app ingresses. See #​42027.
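As an illustration of the privateCA steps above, the CA is supplied as a secret and the chart is installed with privateCA=true. The chart repo name and hostname below are placeholders; the tls-ca secret name follows the Rancher private CA documentation:

```shell
# Provide the certificate authority to Rancher before (or while)
# switching agent-tls-mode to strict
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem

helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set privateCA=true
```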

Previous Rancher Behavior Changes - Extensions

  • Rancher v2.9.0:
    • A new feature flag, uiextensions, has been added for enabling and disabling the UI extension feature (this replaces the need to install the ui-plugin-operator). The first time it's set to true (the default value), it creates the CRD and enables the controllers and endpoints necessary for the feature to work. If set to false, it won't create the CRD if it doesn't already exist, nor will it delete an existing one; it will also disable the controllers and endpoints used by the feature. Enabling or disabling the feature flag causes Rancher to restart. See #​44230 and #​43089.
    • UI extension owners must update and publish a new version of their extensions to be compatible with Rancher v2.9.0 and later. For more information see the Rancher v2.9 extension support page.
  • Rancher v2.8.4:
    • The Rancher dashboard fails to load an extension that uses backported Vue 3 features, displaying the console error object(...) is not a function. New extensions that use defineComponent are not backwards compatible with older versions of the dashboard. Existing extensions should continue to work moving forward. See #​10568.

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and set spec.unpaused to false. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.
    • When you upgrade your Kubernetes cluster, you might see the following error: Cluster health check failed. This is a benign error that occurs as part of the upgrade process, and will self-resolve. It's caused by the Kubernetes API server becoming temporarily unavailable as it is being upgraded within your cluster. See #​41012.
    • Once you configure a setting with an environmental variable, it can't be updated through the Rancher API or the UI. It can only be updated through changing the value of the environmental variable. Setting the environmental variable to "" (the empty string) changes the value in the Rancher API but not in Kubernetes. As a workaround, run kubectl edit setting <setting-name>, then set the value and source fields to "", and re-deploy Rancher. See #​37998.
  • Rancher v2.6.1:

    • When using the Rancher UI to add a new port of type ClusterIP to an existing Deployment created using the legacy UI, the new port won't be created upon your first attempt to save the new port. You must repeat the procedure to add the port again. The Service Type field will display Do not create a service during the second procedure. Change this to ClusterIP and save to create the new port. See #​4280.

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated, due to the plugin being deprecated for upstream Kubernetes v1.27 and later. RKE creation will not go through as it will raise a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • Currently there are known issues with the data directory feature which are outlined below:
      • K3s does not support the data directory feature. See #​10589.
      • Currently, selecting the Use the same path for System-agent, Provisioning and K8s Distro data directories configuration results in Rancher using the same data directory for system agent, provisioning, and distribution components, as opposed to appending the specified component names to the root directory. To mitigate this issue, configure the 3 paths separately; they must follow the guidelines below:
        • Absolute paths (start with /)
        • Clean (do not contain env vars, shell expressions, ., or ..)
        • Not set to the same thing
        • Not nested one within another
        See #​11566.
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​11435.
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.4:
    • RKE2 clusters with invalid values for tolerations or affinity agent customizations don't display an error message, and remain in an Updating state. This causes cluster creation to hang. See #​41606.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.
    • Deleting nodes from custom RKE2/K3s clusters in Rancher v2.7.2 can cause unexpected behavior, if the underlying infrastructure isn't thoroughly cleaned. When deleting a custom node from your cluster, ensure that you delete the underlying infrastructure for it, or run the corresponding uninstall script for the Kubernetes distribution installed on the node. See #​41034.
  • Rancher v2.6.9:
    • Deleting a control plane node results in worker nodes also reconciling. See #​39021.
  • Rancher v2.6.4:
    • Communication between the ingress controller and the pods doesn't work when you create an RKE2 cluster with Cilium as the CNI and activate project network isolation. See documentation and #​34275.
  • Rancher v2.6.3:
    • When provisioning clusters with an RKE2 cluster template, the rootSize for AWS EC2 provisioners doesn't take an integer when it should, and an error is thrown. As a workaround, wrap the EC2 rootSize in quotes. See #​40128.
  • Rancher v2.6.0:
    • Amazon ECR Private Registries don't work from RKE2/K3s. See #​33920.
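The separate-data-directory guidelines noted under v2.9.0 above might be configured like this. The field names and paths are illustrative, assuming the v2.9 provisioning cluster spec layout; check them against your Rancher version:

```yaml
# Hypothetical provisioning cluster YAML fragment: three distinct,
# absolute, non-nested data directories, one per component.
spec:
  rkeConfig:
    dataDirectories:
      systemAgent: /data/system-agent
      provisioning: /data/provisioning
      k8sDistro: /data/rke2
```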

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.
    • Deleting nodes from custom RKE2/K3s clusters in Rancher v2.7.2 can cause unexpected behavior, if the underlying infrastructure isn't thoroughly cleaned. When deleting a custom node from your cluster, ensure that you delete the underlying infrastructure for it, or run the corresponding uninstall script for the Kubernetes distribution installed on the node. See #​41034.
  • Rancher v2.6.0:
    • Deleting a control plane node results in worker nodes also reconciling. See #​39021.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.9.2:
    • Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #​11922.
    • Node drivers that rely on machine configs that contain fields of the type array string render a form requesting a single string instead. See #​11936.
    • Some node drivers incorrectly stipulate that no credentials are required, resulting in the UI skipping the requirement to supply credentials when provisioning a cluster of that type. See #​11974.
  • Rancher v2.7.7:
    • When creating a cluster, the Rancher UI does not allow the use of an underscore (_) in the Cluster Name field. See #​9416.
  • Rancher v2.7.2:
    • When creating a GKE cluster in the Rancher UI you will see provisioning failures as the clusterIpv4CidrBlock and clusterSecondaryRangeName fields conflict. See #​8749.

Long-standing Known Issues - Rancher CLI

  • Rancher v2.9.0:
    • The Rancher CLI currently lists the Azure authentication provider options out of order. See #​46128.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - Docker Install

  • Rancher v2.6.4:
    • Single node Rancher won't start on Apple M1 devices with Docker Desktop 4.3.0 or later. See #​35930.
  • Rancher v2.6.3:
    • On a Docker install upgrade and rollback, Rancher logs repeatedly display the messages "Updating workload ingress-nginx/nginx-ingress-controller" and "Updating service frontend with public endpoints". Ingresses and clusters are functional and active, and logs resolve eventually. See #​35798 and #​40257.
  • Rancher v2.5.0:
    • UI issues may occur due to longer startup times. When launching Docker for the first time, you'll receive an error message stating, "Cannot read property endsWith of undefined", as described in #​28800. You'll then be directed to a login screen. See #​28798.

Long-standing Known Issues - Windows

  • Rancher v2.5.8:
    • Windows nodeAgents are not deleted when performing a helm upgrade after disabling Windows logging on a Windows cluster. See #​32325.
    • If you deploy Monitoring V2 on a Windows cluster with win_prefix_path set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #​32535.

Long-standing Known Issues - Windows Nodes in RKE2 Clusters

  • Rancher v2.6.4:
    • NodePorts do not work on Windows Server 2022 in RKE2 clusters due to a Windows kernel bug. See #​159.

Long-standing Known Issues - AKS

  • Rancher v2.7.2:
    • Imported Azure Kubernetes Service (AKS) clusters don't display workload level metrics. This bug affects Monitoring V1. A workaround is available. See #​4658.
  • Rancher v2.6.x:
    • Windows node pools are not currently supported. See #​32586.
  • Rancher v2.6.0:
    • When editing or upgrading an Azure Kubernetes Service (AKS) cluster, do not make changes from the Azure console or CLI at the same time. These actions must be done separately. See #​33561.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - GKE

  • Rancher v2.5.8:
    • Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to Kubernetes v1.19+ in Rancher. See #​32312.

Long-standing Known Issues - Role-Based Access Control (RBAC) Framework

  • Rancher v2.9.1
    • Temporarily reducing privileges by impersonating an account with lower privileges is currently not supported. See #​41988 and #​46790.

Long-standing Known Issues - Pod Security Standard (PSS) & Pod Security Admission (PSA)

  • Rancher v2.6.4:
    • The deployment's securityContext section is missing when a new workload is created. This prevents pods from starting when Pod Security Policy (PSP) support is enabled. See #​4815.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #​46106.
  • Rancher v2.7.7:
    • The SAML authentication pop-up throws a 404 error on high-availability RKE installations. Single node Docker installations aren't affected. If you refresh the browser window and select Resend, the authentication request will succeed, and you will be able to log in. See #​31163.
  • Rancher v2.6.2:
    • Users on certain LDAP setups don't have permission to search LDAP. When they attempt to perform a search, they receive the error message, Result Code 32 "No Such Object". See #​35259.

Long-standing Known Issues - Encryption

  • Rancher v2.5.4:
    • Rotating encryption keys with a custom encryption provider is not supported. See #​30539.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you rollback from a version of Rancher v2.7.2 or later, to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors to occur downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #​40816.

Long-standing Known Issues - Harvester

  • Rancher v2.8.4:
    • When provisioning a Harvester RKE1 cluster in Rancher, the vGPU field is not displayed under Cluster Management > Advanced Settings, as this is not a supported feature. However, the vGPU field is available when provisioning a Harvester RKE2 cluster. See #​10909.
    • When provisioning a multi-node Harvester RKE2 cluster in Rancher, you need to allocate one vGPU more than the number of nodes you have or provisioning will fail. See #​11009 and v2.9.0 back-port issue #​10989.
  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.
  • Rancher v2.6.1:
    • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #​35049.
    • Upgrades from Harvester v0.3.0 are not supported.

Long-standing Known Issues - Continuous Delivery

  • Rancher v2.7.6:
    • Target customization can produce custom resources that exceed the Rancher API's maximum bundle size. This results in Request entity too large errors when attempting to add a GitHub repo. Only target customizations that modify the Helm chart URL or version are affected. As a workaround, use multiple paths or GitHub repos instead of target customization. See #​1650.
  • Rancher v2.6.1:
    • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #​35049.
  • Rancher v2.6.0:
    • Multiple fleet-agent pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #​33293.

Long-standing Known Issues - Feature Charts

  • Rancher v2.6.5:
    • After installing an app from a partner chart repo, the partner chart will upgrade to feature charts if the chart also exists in the feature charts default repo. See #​5655.

Long-standing Known Issues - CIS Scan

  • Rancher v2.8.3:
    • Some CIS checks related to file permissions fail on RKE and RKE2 clusters with CIS v1.7 and CIS v1.8 profiles. See #​42971.
  • Rancher v2.7.2:
    • When running CIS scans on RKE and RKE2 clusters on Kubernetes v1.25, some tests will fail if the rke-profile-hardened-1.23 or the rke2-profile-hardened-1.23 profile is used. These RKE and RKE2 test cases failing is expected as they rely on PSPs, which have been removed in Kubernetes v1.25. See #​39851.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.6.3:

    • Because Kubernetes v1.22 drops the apiVersion apiextensions.k8s.io/v1beta1, trying to restore an existing backup file into a v1.22+ cluster will fail. The backup file contains CRDs with the apiVersion v1beta1. There are two workarounds for this issue: update the default resourceSet to collect the CRDs with the apiVersion v1, or update the default resourceSet and the client to use the new APIs internally. See the documentation and #​34154.
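A hedged sketch of the first workaround above, assuming the rancher-backup chart's ResourceSet selector format; the field names here are illustrative and should be checked against the installed chart version:

```yaml
# Hypothetical excerpt of the default ResourceSet: collect CRDs with
# the v1 API instead of the dropped v1beta1.
resourceSelectors:
  - apiVersion: apiextensions.k8s.io/v1   # previously apiextensions.k8s.io/v1beta1
    kindsRegexp: ^customresourcedefinition$
```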

Long-standing Known Issues - Istio

  • Istio v1.12 and below do not work on Kubernetes v1.23 clusters. To use the Istio charts, please do not update to Kubernetes v1.23 until the next chart release.

  • Rancher v2.6.4:

    • Applications that inject Istio sidecars fail on SELinux-enabled RHEL 8.4 clusters. A temporary workaround for this issue is to run the following command on each cluster node before creating a cluster: mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni. See #​33291.
  • Rancher v2.6.1:

    • Deprecated resources are not automatically removed and will cause errors during upgrades. Manual steps must be taken to migrate and/or cleanup resources before an upgrade is performed. See #​34699.
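The SELinux workaround mentioned under v2.6.4 above, reformatted as a runnable snippet (run as root on each cluster node before creating the cluster; requires the semanage tool from policycoreutils):

```shell
# Label the istio-cni runtime directory so sidecar injection works under SELinux
mkdir -p /var/run/istio-cni \
  && semanage fcontext -a -t container_file_t /var/run/istio-cni \
  && restorecon -v /var/run/istio-cni
```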

Long-standing Known Issues - Logging

  • Rancher v2.5.8:
    • Windows nodeAgents are not deleted when performing a helm upgrade after disabling Windows logging on a Windows cluster. See #​32325.

Long-standing Known Issues - Monitoring

  • Rancher v2.8.0:
    • Read-only project permissions and the View Monitoring role aren't sufficient to view links on the Monitoring index page. Users won't be able to see monitoring links. As a workaround, you can perform the following steps:

      1. If you haven't already, install Monitoring on the project.
      2. Move the cattle-monitoring-system namespace into the project.
      3. Grant project users the View Monitoring (monitoring-ui-view) role, and read-only or higher permissions on at least one project in the cluster.

      See #​4466.
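Step 2 above (moving the cattle-monitoring-system namespace into the project) can also be done by annotating the namespace. The project ID below is a placeholder for your cluster and project IDs; Rancher also reconciles a matching label, so doing this through the Rancher UI is the safer route:

```shell
# Assign the namespace to a project (format: <cluster ID>:<project ID>)
kubectl annotate --overwrite namespace cattle-monitoring-system \
  field.cattle.io/projectId=c-m-abc123:p-xyz789
```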

Long-standing Known Issues - Project Monitoring

  • Rancher v2.5.8:
    • If you deploy Monitoring V2 on a Windows cluster with win_prefix_path set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #​32535.

v2.9.2

Compare Source

Release v2.9.2

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.9.2 is the latest minor release of Rancher. This is a Community and Prime version release that introduces new features, enhancements, and various updates. To learn more about Rancher Prime, see our page on the Rancher Prime Platform.

RKE2 Provisioning

Major Bug Fixes
  • Fixed an issue where, when upgrading from Rancher v2.7.4 or earlier to a more recent Rancher version with provisioned RKE2/K3s clusters in an unhealthy state, you may have encountered the error message, implausible joined server for entry. This required manually marking the nodes in the cluster with a joined server. A workaround was available. See #​42856.
  • Fixed an issue where downstream RKE2 clusters could become corrupted if KDM data (from the rke-metadata-config setting) was invalid. Note that, per the fix, these clusters' status may change to "Updating", with a message indicating that KDM data is missing, instead of remaining "Active". See #​46855.

Windows Cluster Provisioning Fixes

Major Bug Fixes
  • The following fix only applies to newly provisioned Windows nodes, and existing Windows nodes running the August 2024 patch releases of RKE2 (v1.30.4, v1.29.8, v1.28.13, and v1.27.16): The STRICT_VERIFY environment variable is now successfully passed to Windows nodes. There is a workaround for existing nodes that do not have the August patches. See #​46396.

Rancher App (Global UI)

Known Issues
  • Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #​11922.
  • Node drivers that rely on machine configs that contain fields of the type array string render a form requesting a single string instead. See #​11936.
  • Some node drivers incorrectly stipulate that no credentials are required, resulting in the UI skipping the requirement to supply credentials when provisioning a cluster of that type. See #​11974.

Role-Based Access Control (RBAC) Framework

Known Issues
  • Multiple, stale secrets may be erroneously created for users on downstream clusters. The stale secrets don't get reconciled even if the referenced secret is deleted in the user account. This can cause problems with memory usage and UI slowdown. See #​46894.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.
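
As a sketch of the NO_PROXY requirement noted above for proxied air-gapped installs, the entries below are illustrative placeholders drawn from common cluster defaults; the exact ranges depend on your cluster CIDRs, so verify them against the linked documentation:

```shell
# Illustrative only: cluster-internal ranges and service domains that
# typically must bypass the proxy (adjust to your environment)
export NO_PROXY="127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
```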

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must re-platform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker Machine. Learn more about re-platforming here.

Images

  • rancher/rancher:v2.9.2

Tools

Kubernetes Versions for RKE

  • v1.30.4 (Default)
  • v1.29.8
  • v1.28.13
  • v1.27.16

Kubernetes Versions for RKE2/K3s

  • v1.30.4 (Default)
  • v1.29.7
  • v1.28.13
  • v1.27.16

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.

Other Notes

Experimental Features

Rancher now supports the ability to use an OCI Helm chart registry for Apps & Marketplace. View the documentation on using OCI-based Helm chart repositories, and note that this feature is experimental. See #​29105 and #​45062.

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Previous Rancher Behavior Changes

Previous Rancher Behavior Changes - Rancher General

  • Rancher v2.9.0:
    • Kubernetes v1.25 and v1.26 are no longer supported. Before you upgrade to Rancher v2.9.0, make sure that all clusters are running Kubernetes v1.27 or later. See #​45882.
    • The external-rules feature flag functionality is removed in Rancher v2.9.0 as the behavior is enabled by default. The feature flag is still present when upgrading from v2.8.5; however, enabling or disabling the feature won't have any effect. For more information, see CVE-2023-32196 and #​45863.
    • Rancher now validates the Container Default Resource Limit on Projects. Validation mimics the upstream behavior of the Kubernetes API server when it validates LimitRanges. The container default resource configuration must have properly formatted quantities for all requests and limits. Limits for any resource must not be less than requests. See #​39700.
  • Rancher v2.8.4:
    • The controller now cleans up instances of ClusterUserAttribute that have no corresponding UserAttribute. See #​44985.
  • Rancher v2.8.3:
    • When Rancher starts, it now identifies all deprecated and unrecognized setting resources and adds a cattle.io/unknown label. You can list these settings with the command kubectl get settings -l 'cattle.io/unknown==true'. In Rancher v2.9 and later, these settings will be removed instead. See #​43992.
  • Rancher v2.8.0:
    • Rancher Compose is no longer supported, and all parts of it are being removed in the v2.8 release line. See #​43341.
    • Kubernetes v1.23 and v1.24 are no longer supported. Before you upgrade to Rancher v2.8.0, make sure that all clusters are running Kubernetes v1.25 or later. See #​42828.

Previous Rancher Behavior Changes - Cluster Provisioning

  • Rancher v2.8.4:
    • Docker CLI 20.x is at end-of-life and no longer supported in Rancher. Please update your local Docker CLI versions to 23.0.x or later. Earlier versions may not recognize OCI compliant Rancher image manifests. See #​45424.
  • Rancher v2.8.0:
    • Kontainer Engine v1 (KEv1) provisioning and the respective cluster drivers are now deprecated. KEv1 provided plug-ins for different targets using cluster drivers. The Rancher-maintained cluster drivers for EKS, GKE and AKS have been replaced by the hosted provider drivers, EKS-Operator, GKE-Operator and AKS-Operator. Node drivers are now available for self-managed Kubernetes.
  • Rancher v2.7.2:
    • When you provision a downstream cluster, the cluster's name must conform to RFC-1123. Previously, characters that did not follow the specification, such as ., were permitted and would result in clusters being provisioned without the necessary Fleet components. See #​39248.
    • Privilege escalation is disabled by default when creating deployments from the Rancher API. See #​7165.

Previous Rancher Behavior Changes - RKE Provisioning

  • Rancher v2.9.0:
    • With the release of Rancher Kubernetes Engine (RKE) v1.6.0, RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

      Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must re-platform from RKE to RKE2 or K3s.

      RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker Machine. Learn more about re-platforming at the official SUSE blog.

    • Rancher has added support for external Azure cloud providers in downstream RKE clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #​44857.

    • Weave CNI support for RKE clusters is removed in response to Weave CNI not being supported by upstream Kubernetes v1.30 and later. See #​45954.

  • Rancher v2.8.0:
    • Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #​43175.
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated. Weave will be removed in RKE v1.30. See #​42730.

Previous Rancher Behavior Changes - RKE2 Provisioning

  • Rancher v2.9.0:
    • Rancher has added support for external Azure cloud providers in downstream RKE2 clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #​44856.
    • Added a new annotation, provisioning.cattle.io/allow-dynamic-schema-drop. When set to true, it drops the dynamicSchemaSpec field from machine pool definitions. This prevents cluster nodes from re-provisioning unintentionally when the cluster object is updated from an external source such as Terraform or Fleet. See #​44618.
  • Rancher v2.8.0:
    • Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE2 clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #​42749.
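
A minimal sketch of the provisioning.cattle.io/allow-dynamic-schema-drop annotation described in the v2.9.0 entry above, assuming a standard provisioning-v2 cluster object (the name is a placeholder):

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-cluster        # placeholder name
  namespace: fleet-default
  annotations:
    provisioning.cattle.io/allow-dynamic-schema-drop: "true"
```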

Previous Rancher Behavior Changes - Cluster API

  • Rancher v2.7.7:
    • The cluster-api core provider controllers run in a pod in the cattle-provisioning-cattle-system namespace, within the local cluster. These controllers are installed with a Helm chart. Previously, Rancher ran cluster-api controllers in an embedded fashion. This change makes it easier to maintain cluster-api versioning. See #​41094.
    • The token hashing algorithm generates new tokens using SHA3. Existing tokens that don't use SHA3 won't be re-hashed. This change affects ClusterAuthTokens (the downstream synced version of tokens for ACE) and Tokens (only when token hashing is enabled). SHA3 tokens should work with ACE and Token Hashing. Tokens that don't use SHA3 may not work when ACE and token hashing are used in combination. If, after upgrading to Rancher v2.7.7, you experience issues with ACE while token hashing is enabled, re-generate any applicable tokens. See #​42062.

Previous Rancher Behavior Changes - Rancher App (Global UI)

  • Rancher v2.8.0:
    • The built-in restricted-admin role is being deprecated in favor of a more flexible global role configuration, which is now available for different use cases other than only the restricted-admin. If you want to replicate the permissions given through this role, use the new inheritedClusterRoles feature to create a custom global role. A custom global role, like the restricted-admin role, grants permissions on all downstream clusters. See #​42462. Given its deprecation, the restricted-admin role will continue to be included in future builds of Rancher through the v2.8.x and v2.9.x release lines. However, in accordance with the CVSS standard, only security issues scored as critical will be backported and fixed in the restricted-admin role until it is completely removed from Rancher.
    • Reverse DNS server functionality has been removed. The associated rancher/rdns-server repository is now archived. Reverse DNS is already disabled by default.
    • The Rancher CLI configuration file ~/.rancher/cli2.json previously had permissions set to 0644. Although 0644 would usually indicate that all users have read access to the file, the parent directory would block users' access. New Rancher CLI configuration files will only be readable by the owner (0600). Invoking the CLI will trigger a warning, in case old configuration files are world-readable or group-readable. See #​42838.
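
As a sketch of the inheritedClusterRoles mechanism mentioned above, a custom GlobalRole replicating broad downstream-cluster access might look like the following; the role name and inherited role are illustrative, so consult the Rancher RBAC documentation for the fields your version supports:

```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRole
metadata:
  name: custom-restricted-admin   # illustrative name
inheritedClusterRoles:
  - cluster-owner                 # granted on all downstream clusters
```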

Previous Rancher Behavior Changes - Rancher App (Helm Chart)

  • Rancher v2.7.0:
    • When installing or upgrading an official Rancher Helm chart app in a RKE2/K3s cluster, if a private registry exists in the cluster configuration, that registry will be used for pulling images. If no cluster-scoped registry is found, the global container registry will be used. A custom default registry can be specified during the Helm chart install and upgrade workflows. Previously, only the global container registry was used when installing or upgrading an official Rancher Helm chart app for RKE2/K3s node driver clusters.

Previous Rancher Behavior Changes - Continuous Delivery

  • Rancher v2.9.0:
    • Rancher now supports monitoring of continuous delivery. Starting with version v104.0.1 of the Fleet (v0.10.1 of Fleet) and rancher-monitoring chart, continuous delivery provides metrics about the state of its resources and the rancher-monitoring chart contains dashboards to visualize those metrics. Installing the rancher-monitoring chart to the local/upstream cluster automatically configures Prometheus to scrape metrics from the continuous delivery controllers and installs Grafana dashboards. These dashboards are accessible via Grafana but are not yet integrated into the Rancher UI. You can open Grafana from the Rancher UI by navigating to the Cluster > Monitoring > Grafana view. See rancher/fleet#1408 for implementation details.
    • Continuous delivery in Rancher also introduces sharding with node selectors. See rancher/fleet#1740 for implementation details and the Fleet documentation for instructions on how to use it.
    • We have reduced image size and complexity by integrating the former external gitjob repository and by merging various controller codes. This also means that the gitjob container image (rancher/gitjob) is not needed anymore, as the required functionality is embedded into the rancher/fleet container image. The gitjob deployment will still be created but pointing to the rancher/fleet container image instead. Please also note that a complete list of necessary container images for air-gapped deployments is released alongside Rancher releases. You can find this list as rancher-images.txt in the assets of the release on Github. See rancher/fleet#2342 for more details.
    • Continuous delivery also adds experimental OCI content storage. See rancher/fleet#2561 for implementation details and rancher/fleet-docs#179 for documentation.
    • Continuous delivery now splits components into containers and has switched to the controller-runtime framework. The rewritten controllers switch to structured logging.
    • Leader election can now be configured (see rancher/fleet#1981), as well as the worker count for the fleet-controller (see rancher/fleet#2430).
    • The release deprecates the "fleet test" command in favor of "target" and "deploy" with a dry-run option (see rancher/fleet#2102).
    • Bug fixes enhance drift detection, cluster status reporting, and various operational aspects.

Previous Rancher Behavior Changes - Pod Security Standard (PSS) & Pod Security Admission (PSA)

  • Rancher v2.7.2:
    • You must manually change the psp.enabled value in the chart install yaml when you install or upgrade v102.x.y charts on hardened RKE2 clusters. Instructions for updating the value are available. See #​41018.
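
For reference, the psp.enabled value mentioned above is toggled in the chart's install YAML; a minimal sketch, assuming the chart exposes the key at the top level:

```yaml
# Hardened RKE2 clusters: disable PodSecurityPolicy resources in the chart
psp:
  enabled: false
```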

Previous Rancher Behavior Changes - Authentication

  • Rancher v2.8.3:
    • Rancher uses additional trusted CAs when establishing a secure connection to the keycloak OIDC authentication provider. See #​43217.
  • Rancher v2.8.0:
    • The kubeconfig-token-ttl-minutes setting has been replaced by the setting, kubeconfig-default-token-ttl-minutes, and is no longer available in the UI. See #​38535.
    • API tokens now have default time periods after which they expire. Authentication tokens expire after 90 days, while kubeconfig tokens expire after 30 days. See #​41919.
  • Rancher v2.7.2:
    • Rancher might retain resources from a disabled auth provider configuration in the local cluster, even after you configure another auth provider. To manually trigger cleanup for a disabled auth provider, add the management.cattle.io/auth-provider-cleanup annotation with the unlocked value to its auth config. See #​40378.
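
The cleanup annotation described above is set on the disabled provider's auth config resource; a sketch of the result (the name okta is a placeholder for your provider's auth config):

```yaml
apiVersion: management.cattle.io/v3
kind: AuthConfig
metadata:
  name: okta   # placeholder: use your provider's auth config name
  annotations:
    management.cattle.io/auth-provider-cleanup: unlocked
```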

Previous Rancher Behavior Changes - Rancher Webhook

  • Rancher v2.8.3:
    • The embedded Cluster API webhook is removed from the Rancher webhook and can no longer be installed from the webhook chart. It has not been used as of Rancher v2.7.7, where it was migrated to a separate Pod. See #​44619.
  • Rancher v2.8.0:
    • Rancher's webhook now honors the bind and escalate verbs for GlobalRoles. Users who have * set on GlobalRoles will now have both of these verbs, and could potentially use them to escalate privileges in Rancher v2.8.0 and later. You should review current custom GlobalRoles, especially cases where bind, escalate, or * are granted, before you upgrade.
  • Rancher v2.7.5:
    • Rancher installs the same pinned version of the rancher-webhook chart not only in the local cluster but also in all downstream clusters. Restoring Rancher from v2.7.5 to an earlier version will result in downstream clusters' webhooks being at the version set by Rancher v2.7.5, which might cause incompatibility issues. Local and downstream webhook versions need to be in sync. See #​41730 and #​41917.
    • The mutating webhook configuration for secrets is no longer active in downstream clusters. See #​41613.

Previous Rancher Behavior Changes - Apps & Marketplace

  • Rancher v2.8.0:
    • Legacy code for the following v1 charts is no longer available in the rancher/system-charts repository:

      • rancher-cis-benchmark
      • rancher-gatekeeper-operator
      • rancher-istio
      • rancher-logging
      • rancher-monitoring

      The code for these charts will remain available for previous versions of Rancher.

    • Helm v2 support is deprecated as of the Rancher v2.7 line and will be removed in Rancher v2.9.

  • Rancher v2.7.0:
    • Rancher no longer validates an app registration's permissions to use Microsoft Graph on endpoint updates or initial setup. You should add Directory.Read.All permissions of type Application. If you configure a different set of permissions, Rancher may not have sufficient privileges to perform some necessary actions within Azure AD, causing errors.
    • The multi-cluster app legacy feature is no longer available. See #​39525.

Previous Rancher Behavior Changes - OPA Gatekeeper

  • Rancher v2.8.0:
    • OPA Gatekeeper is now deprecated and will be removed in a future release. As a replacement for OPA Gatekeeper, consider switching to Kubewarden. See #​42627.

Previous Rancher Behavior Changes - Feature Charts

  • Rancher v2.7.0:
    • A configurable priorityClass is available in the Rancher pod and its feature charts. Previously, pods critical to running Rancher didn't use a priority class. This could cause a cluster with limited resources to evict Rancher pods before other noncritical pods. See #​37927.

Previous Rancher Behavior Changes - Backup/Restore

  • Rancher v2.7.7:
    • If you use a version of backup-restore older than v102.0.2+up3.1.2 to take a backup of Rancher v2.7.7, the migration will encounter a capi-webhook error. Make sure that the chart version used for backups is v102.0.2+up3.1.2, which has cluster.x-k8s.io/v1alpha4 resources removed from the resourceSet. If you can't use v102.0.2+up3.1.2 for backups, delete all cluster.x-k8s.io/v1alpha4 resources from the backup tar before using it. See #​382.

Previous Rancher Behavior Changes - Logging

  • Rancher v2.7.0:
    • Rancher defaults to using the bci-micro image for sidecar audit logging. Previously, the default image was Busybox. See #​35587.

Previous Rancher Behavior Changes - Monitoring

  • Rancher v2.7.2:
    • Rancher maintains a /v1/counts endpoint that the UI uses to display resource counts. The UI subscribes to changes to the counts for all resources through a websocket to receive the new counts for resources.
      • Rancher aggregates the changed counts and only sends a message every 5 seconds. This, in turn, requires the UI to update the counts at most once every 5 seconds, improving UI performance. Previously, Rancher would send a message each time the resource counts changed for a resource type. This led to the UI needing to constantly stop other areas of processing to update the resource counts. See #​36682.
      • Rancher now only sends back a count for a resource type if the count has changed from the previously known number, improving UI performance. Previously, each message from this socket would include all counts for every resource type in the cluster, even if the counts only changed for one specific resource type. This would cause the UI to need to re-update resource counts for every resource type at a high frequency, with a significant performance impact. See #​36681.

Previous Rancher Behavior Changes - Project Monitoring

  • Rancher v2.7.2:
    • The Helm Controller in RKE2/K3s respects the managedBy annotation. In its initial release, Project Monitoring V2 required a workaround to set helmProjectOperator.helmController.enabled: false, since the Helm Controller operated on a cluster-wide level and ignored the managedBy annotation. See #​39724.

Previous Rancher Behavior Changes - Security

  • Rancher v2.9.0:
    • When agent-tls-mode is set to strict, users must provide the certificate authority to Rancher or downstream clusters will disconnect from Rancher, and require manual intervention to fix. This applies to several setup types, including:

      • Let's Encrypt - When set to strict, users must upload the Let's Encrypt Certificate Authority and provide privateCA=true when installing the chart.
      • Bring Your Own Cert - When set to strict, users must upload the Certificate Authority used to generate the cert and provide privateCA=true when installing the chart.
      • Proxy/External - when the setting is strict, users must upload the Certificate Authority used by the proxy and provide privateCA=true when installing the chart.

      See #​45628 and #​45655.

  • Rancher v2.8.0:
    • TLS v1.0 and v1.1 are no longer supported for Rancher app ingresses. See #​42027.
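
For the agent-tls-mode scenarios in the v2.9.0 entry above, the Helm values involved might look like this sketch; the key names are assumed from the Rancher chart, so verify them against your chart version:

```yaml
# Strict agent TLS verification: Rancher must be given the CA that signed
# its serving certificate (uploaded separately as the tls-ca secret)
agentTLSMode: strict
privateCA: true
```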

Previous Rancher Behavior Changes - Extensions

  • Rancher v2.9.0:
    • A new feature flag, uiextensions, has been added for enabling and disabling the UI extension feature (this replaces the need to install the ui-plugin-operator). The first time it's set to true (the default), it creates the CRD and enables the controllers and endpoints the feature needs. If set to false, it won't create the CRD if one doesn't already exist, but it won't delete an existing CRD either; it will also disable the controllers and endpoints used by the feature. Enabling or disabling the feature flag causes Rancher to restart. See #​44230 and #​43089.
    • UI extension owners must update and publish a new version of their extensions to be compatible with Rancher v2.9.0 and later. For more information see the Rancher v2.9 extension support page.
  • Rancher v2.8.4:
    • The Rancher dashboard fails to load an extension that utilizes backported Vue 3 features, displaying the console error object(...) is not a function. New extensions that utilize defineComponent will not be backwards compatible with older versions of the dashboard. Existing extensions should continue to work. See #​10568.

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and set spec.unpaused to false. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.
    • When you upgrade your Kubernetes cluster, you might see the following error: Cluster health check failed. This is a benign error that occurs as part of the upgrade process, and will self-resolve. It's caused by the Kubernetes API server becoming temporarily unavailable as it is being upgraded within your cluster. See #​41012.
    • Once you configure a setting with an environmental variable, it can't be updated through the Rancher API or the UI. It can only be updated through changing the value of the environmental variable. Setting the environmental variable to "" (the empty string) changes the value in the Rancher API but not in Kubernetes. As a workaround, run kubectl edit setting <setting-name>, then set the value and source fields to "", and re-deploy Rancher. See #​37998.
  • Rancher v2.6.1:

    • When using the Rancher UI to add a new port of type ClusterIP to an existing Deployment created using the legacy UI, the new port won't be created upon your first attempt to save the new port. You must repeat the procedure to add the port again. The Service Type field will display Do not create a service during the second procedure. Change this to ClusterIP and save to create the new port. See #​4280.
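
The environment-variable settings workaround above (v2.7.2 entry) amounts to blanking two fields on the Setting resource; a sketch of the edited object, with the setting name as a placeholder:

```yaml
apiVersion: management.cattle.io/v3
kind: Setting
metadata:
  name: some-setting   # placeholder: the setting being reset
value: ""
source: ""
```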

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated, due to the plugin being deprecated for upstream Kubernetes v1.27 and later. Creating an RKE cluster with Weave will fail with a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • Currently there are known issues with the data directory feature which are outlined below:
      • K3s does not support the data directory feature. See #​10589.
      • Currently selecting Use the same path for System-agent, Provisioning and K8s Distro data directories configuration results in Rancher using the same data directory for system agent, provisioning, and distribution components as opposed to appending the specified component names to the root directory. To mitigate this issue, configure the 3 paths separately; each path must be:
        • Absolute (starting with /)
        • Clean (containing no env vars, shell expressions, ., or ..)
        • Distinct from the other two paths
        • Not nested within another of the paths
        See #​11566.
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​11435.
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.4:
    • RKE2 clusters with invalid values for tolerations or affinity agent customizations don't display an error message, and remain in an Updating state. This causes cluster creation to hang. See #​41606.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.
    • Deleting nodes from custom RKE2/K3s clusters in Rancher v2.7.2 can cause unexpected behavior, if the underlying infrastructure isn't thoroughly cleaned. When deleting a custom node from your cluster, ensure that you delete the underlying infrastructure for it, or run the corresponding uninstall script for the Kubernetes distribution installed on the node. See #​41034.
  • Rancher v2.6.9:
    • Deleting a control plane node results in worker nodes also reconciling. See #​39021.
  • Rancher v2.6.4:
    • Communication between the ingress controller and the pods doesn't work when you create an RKE2 cluster with Cilium as the CNI and activate project network isolation. See documentation and #​34275.
  • Rancher v2.6.3:
    • When provisioning clusters with an RKE2 cluster template, the rootSize for AWS EC2 provisioners doesn't take an integer when it should, and an error is thrown. As a workaround, wrap the EC2 rootSize in quotes. See #​40128.
  • Rancher v2.6.0:
    • Amazon ECR Private Registries don't work from RKE2/K3s. See #​33920.
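
The rootSize workaround above (v2.6.3 entry) simply quotes the value so it is parsed as a string; an illustrative machine-config fragment, with field placement assumed:

```yaml
# AWS EC2 machine config fragment: quote rootSize to avoid the integer error
rootSize: "16"
```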

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.
    • Deleting nodes from custom RKE2/K3s clusters in Rancher v2.7.2 can cause unexpected behavior, if the underlying infrastructure isn't thoroughly cleaned. When deleting a custom node from your cluster, ensure that you delete the underlying infrastructure for it, or run the corresponding uninstall script for the Kubernetes distribution installed on the node. See #​41034.
  • Rancher v2.6.0:
    • Deleting a control plane node results in worker nodes also reconciling. See #​39021.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.7.7:
    • When creating a cluster, the Rancher UI does not allow the use of an underscore _ in the Cluster Name field. See #​9416.
  • Rancher v2.7.2:
    • When creating a GKE cluster in the Rancher UI you will see provisioning failures as the clusterIpv4CidrBlock and clusterSecondaryRangeName fields conflict. See #​8749.

Long-standing Known Issues - Rancher CLI

  • Rancher v2.9.0:
    • The Rancher CLI currently lists the Azure authentication provider options out of order. See #​46128.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - Docker Install

  • Rancher v2.6.4:
    • Single node Rancher won't start on Apple M1 devices with Docker Desktop 4.3.0 or later. See #​35930.
  • Rancher v2.6.3:
    • On a Docker install upgrade and rollback, Rancher logs repeatedly display the messages "Updating workload ingress-nginx/nginx-ingress-controller" and "Updating service frontend with public endpoints". Ingresses and clusters are functional and active, and logs resolve eventually. See #​35798 and #​40257.
  • Rancher v2.5.0:
    • UI issues may occur due to longer startup times. When launching Docker for the first time, you'll receive an error message stating, "Cannot read property endsWith of undefined", as described in #​28800. You'll then be directed to a login screen. See #​28798.

Long-standing Known Issues - Windows

  • Rancher v2.5.8:
    • Windows nodeAgents are not deleted when performing a helm upgrade after disabling Windows logging on a Windows cluster. See #​32325.
    • If you deploy Monitoring V2 on a Windows cluster with win_prefix_path set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #​32535.

Long-standing Known Issues - Windows Nodes in RKE2 Clusters

  • Rancher v2.6.4:
    • NodePorts do not work on Windows Server 2022 in RKE2 clusters due to a Windows kernel bug. See #​159.

Long-standing Known Issues - AKS

  • Rancher v2.7.2:
    • Imported Azure Kubernetes Service (AKS) clusters don't display workload level metrics. This bug affects Monitoring V1. A workaround is available. See #​4658.
  • Rancher v2.6.x:
    • Windows node pools are not currently supported. See #​32586.
  • Rancher v2.6.0:
    • When editing or upgrading an Azure Kubernetes Service (AKS) cluster, do not make changes from the Azure console or CLI at the same time. These actions must be done separately. See #​33561.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - GKE

  • Rancher v2.5.8:
    • Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to Kubernetes v1.19+ in Rancher. See #​32312.

Long-standing Known Issues - Role-Based Access Control (RBAC) Framework

  • Rancher v2.9.1
    • Temporarily reducing privileges by impersonating an account with lower privileges is currently not supported. See #​41988 and #​46790.

Long-standing Known Issues - Pod Security Standard (PSS) & Pod Security Admission (PSA)

  • Rancher v2.6.4:
    • The deployment's securityContext section is missing when a new workload is created. This prevents pods from starting when Pod Security Policy (PSP) support is enabled. See #​4815.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #46106.
  • Rancher v2.7.7:
    • The SAML authentication pop-up throws a 404 error on high-availability RKE installations. Single node Docker installations aren't affected. If you refresh the browser window and select Resend, the authentication request will succeed, and you will be able to log in. See #​31163.
  • Rancher v2.6.2:
    • Users on certain LDAP setups don't have permission to search LDAP. When they attempt to perform a search, they receive the error message, Result Code 32 "No Such Object". See #​35259.

Long-standing Known Issues - Encryption

  • Rancher v2.5.4:
    • Rotating encryption keys with a custom encryption provider is not supported. See #​30539.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you roll back from Rancher v2.7.2 or later to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors downstream. The Rancher team has developed a script which should be used after rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #40816.

Long-standing Known Issues - Harvester

  • Rancher v2.8.4:
    • When provisioning a Harvester RKE1 cluster in Rancher, the vGPU field is not displayed under Cluster Management > Advanced Settings; vGPU is not a supported feature for RKE1. However, the vGPU field is available when provisioning a Harvester RKE2 cluster. See #10909.
    • When provisioning a multi-node Harvester RKE2 cluster in Rancher, you need to allocate one more vGPU than the number of nodes you have, or provisioning will fail. See #11009 and v2.9.0 back-port issue #10989.
  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.
  • Rancher v2.6.1:
    • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #​35049.
    • Upgrades from Harvester v0.3.0 are not supported.

Long-standing Known Issues - Continuous Delivery

  • Rancher v2.7.6:
    • Target customization can produce custom resources that exceed the Rancher API's maximum bundle size. This results in Request entity too large errors when attempting to add a GitHub repo. Only target customizations that modify the Helm chart URL or version are affected. As a workaround, use multiple paths or GitHub repos instead of target customization. See #​1650.
  • Rancher v2.6.1:
    • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #​35049.
  • Rancher v2.6.0:
    • Multiple fleet-agent pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #​33293.

Long-standing Known Issues - Feature Charts

  • Rancher v2.6.5:
    • After installing an app from a partner chart repo, the partner chart will upgrade to feature charts if the chart also exists in the feature charts default repo. See #​5655.

Long-standing Known Issues - CIS Scan

  • Rancher v2.8.3:
    • Some CIS checks related to file permissions fail on RKE and RKE2 clusters with CIS v1.7 and CIS v1.8 profiles. See #​42971.
  • Rancher v2.7.2:
    • When running CIS scans on RKE and RKE2 clusters on Kubernetes v1.25, some tests will fail if the rke-profile-hardened-1.23 or the rke2-profile-hardened-1.23 profile is used. These failures are expected, as the test cases rely on PSPs, which were removed in Kubernetes v1.25. See #39851.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.6.3:

    • Because Kubernetes v1.22 drops the apiVersion apiextensions.k8s.io/v1beta1, trying to restore an existing backup file into a v1.22+ cluster will fail. The backup file contains CRDs with the apiVersion v1beta1. There are two workarounds for this issue: update the default resourceSet to collect the CRDs with the apiVersion v1, or update the default resourceSet and the client to use the new APIs internally. See the documentation and #​34154.

Long-standing Known Issues - Istio

  • Istio v1.12 and below do not work on Kubernetes v1.23 clusters. To use the Istio charts, please do not update to Kubernetes v1.23 until the next charts' release.

  • Rancher v2.6.4:

    • Applications injecting Istio sidecars fail on SELinux-enabled RHEL 8.4 clusters. A temporary workaround for this issue is to run the following command on each cluster node before creating a cluster: mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni. See #33291.
  • Rancher v2.6.1:

    • Deprecated resources are not automatically removed and will cause errors during upgrades. Manual steps must be taken to migrate and/or cleanup resources before an upgrade is performed. See #​34699.
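The SELinux workaround noted for v2.6.4 above can be captured as a short script. This is a sketch of the same three commands from the release note; it must be run as root on each node before the cluster is created, and it assumes the semanage/restorecon tools from policycoreutils are installed:

```shell
#!/bin/sh
# Workaround for Istio sidecar injection failures on SELinux-enabled
# RHEL 8.4 nodes (see #33291). Run as root on each node BEFORE creating
# the cluster; requires the policycoreutils semanage/restorecon tools.

# Directory the Istio CNI plugin needs to write to.
mkdir -p /var/run/istio-cni

# Label it container_file_t so containers may write there under SELinux.
semanage fcontext -a -t container_file_t /var/run/istio-cni

# Apply the new label to the existing directory.
restorecon -v /var/run/istio-cni
```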

Long-standing Known Issues - Logging

  • Rancher v2.5.8:
    • Windows nodeAgents are not deleted when performing a helm upgrade after disabling Windows logging on a Windows cluster. See #​32325.

Long-standing Known Issues - Monitoring

  • Rancher v2.8.0:
    • Read-only project permissions and the View Monitoring role aren't sufficient to view links on the Monitoring index page. Users won't be able to see monitoring links. As a workaround, you can perform the following steps:

      1. If you haven't already, install Monitoring on the project.
      2. Move the cattle-monitoring-system namespace into the project.
      3. Grant project users the View Monitoring (monitoring-ui-view) role, and read-only or higher permissions on at least one project in the cluster.

      See #​4466.

Long-standing Known Issues - Project Monitoring

  • Rancher v2.5.8:
    • If you deploy Monitoring V2 on a Windows cluster with win_prefix_path set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #​32535.

v2.9.1

Compare Source

Release v2.9.1

Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.

Rancher v2.9.1 is the latest patch release of Rancher. This is a Community and Prime version release that introduces new features, enhancements, and various updates.

Highlights

Cluster Provisioning

Features and Enhancements
  • The vSphere Cloud Storage Interface (CSI) now supports Kubernetes v1.30. See #​45747.
  • The vSphere Cloud Provider Interface (CPI) now supports Kubernetes v1.30. See #​45746.
  • SLE Micro 6 is now supported for RKE2/K3s provisioned clusters. See #​45571.

RKE2 Provisioning

Major Bug Fixes
  • Fixed an issue where vSphere CSI chart deployments would fail on RKE2 with Kubernetes v1.30. See #​46132.
  • Fixed issues related to the data directory feature where:
    • Provisioning an RKE2 cluster with the data directory enabled would cause snapshot restores to fail. See #​46066.
    • The system-agent-upgrader SUC plan was not applied correctly after cluster provisioning, leading to the specified system-agent data directory not being used. See #46361.
    • Provisioning a custom RKE2 cluster with the data directory enabled would result in the registration command not including the user-specified data directory. See #​46362.

Rancher App (Global UI)

Major Bug Fixes
  • Fixed an issue where the Rancher UI would display Azure as a possible in-tree cloud provider option for RKE1 and RKE2 clusters running Kubernetes v1.30, even though the Azure in-tree cloud provider has been removed for Kubernetes v1.30 and later. See #​11363.

Role-Based Access Control (RBAC) Framework

Known Issues
  • Temporarily reducing privileges by impersonating an account with lower privileges is currently not supported. See #​41988 and #​46790.

Install/Upgrade Notes

Upgrade Requirements

  • Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
  • CNI requirements:
    • For Kubernetes v1.19 and later, disable firewalld as it's incompatible with various CNI plugins. See #​28840.
    • When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #​1317.
  • Requirements for air gapped environments:
    • When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to NO_PROXY. See the documentation and issue #​2725.
    • When installing Rancher with Docker in an air-gapped environment, you must supply a custom registries.yaml file to the docker run command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #​28969.
  • Requirements for general Docker installs:
    • When starting the Rancher Docker container, you must use the privileged flag. See documentation.
    • When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #​33685.
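For the air-gapped Docker install requirement above, the registries.yaml supplied to docker run follows the K3s private-registry format. A minimal sketch (the registry hostname, port, and CA path are placeholders for your environment):

```shell
# Write a minimal registries.yaml in the K3s private-registry format.
# registry.example.com:5000 is a placeholder for your internal registry.
cat > registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    tls:
      ca_file: /etc/rancher/ssl/ca.pem   # only needed for a private CA
EOF

# Mount it into the Rancher container at the path K3s reads
# (illustrative command, not executed here):
# docker run -d --restart=unless-stopped --privileged \
#   -p 80:80 -p 443:443 \
#   -v "$(pwd)/registries.yaml:/etc/rancher/k3s/registries.yaml" \
#   rancher/rancher:v2.9.1
```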

Versions

Please refer to the README for the latest and stable Rancher versions.

Please review our version documentation for more details on versioning and tagging conventions.

Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must re-platform from RKE to RKE2 or K3s.

RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker Machine. Learn more about re-platforming here.

Images

  • rancher/rancher:v2.9.1

Tools

Kubernetes Versions for RKE

  • v1.30.3 (Default)
  • v1.29.7
  • v1.28.12
  • v1.27.16

Kubernetes Versions for RKE2/K3s

  • v1.30.3 (Default)
  • v1.29.7
  • v1.28.12
  • v1.27.16

Rancher Helm Chart Versions

In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: 100.0.0+up2.1.0. See #​32294.

Other Notes

Experimental Features

Rancher now supports using an OCI Helm chart registry for Apps & Marketplace. View the documentation on using OCI-based Helm chart repositories, and note that this feature is in an experimental stage. See #29105 and #45062.

Deprecated Upstream Projects

In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #​29306 for details.

Removed Legacy Features

Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.

Also, rancher-external-dns and rancher-global-dns have been deprecated as of the Rancher v2.7 line.

The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #​6864.

UI and Backend

  • CIS Scans v1 (Cluster)
  • Pipelines (Project)
  • Istio v1 (Project)
  • Logging v1 (Project)
  • RancherD

UI

  • Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.

Previous Rancher Behavior Changes

Previous Rancher Behavior Changes - Rancher General

  • Rancher v2.9.0:
    • Kubernetes v1.25 and v1.26 are no longer supported. Before you upgrade to Rancher v2.9.0, make sure that all clusters are running Kubernetes v1.27 or later. See #​45882.
    • The external-rules feature flag functionality is removed in Rancher v2.9.0 as the behavior is enabled by default. The feature flag is still present when upgrading from v2.8.5; however, enabling or disabling the feature won't have any effect. For more information, see CVE-2023-32196 and #​45863.
    • Rancher now validates the Container Default Resource Limit on Projects. Validation mimics the upstream behavior of the Kubernetes API server when it validates LimitRanges. The container default resource configuration must have properly formatted quantities for all requests and limits. Limits for any resource must not be less than requests. See #​39700.
  • Rancher v2.8.4:
    • The controller now cleans up instances of ClusterUserAttribute that have no corresponding UserAttribute. See #​44985.
  • Rancher v2.8.3:
    • When Rancher starts, it now identifies all deprecated and unrecognized setting resources and adds a cattle.io/unknown label. You can list these settings with the command kubectl get settings -l 'cattle.io/unknown==true'. In Rancher v2.9 and later, these settings will be removed instead. See #​43992.
  • Rancher v2.8.0:
    • Rancher Compose is no longer supported, and all parts of it are being removed in the v2.8 release line. See #​43341.
    • Kubernetes v1.23 and v1.24 are no longer supported. Before you upgrade to Rancher v2.8.0, make sure that all clusters are running Kubernetes v1.25 or later. See #​42828.

Previous Rancher Behavior Changes - Cluster Provisioning

  • Rancher v2.8.4:
    • Docker CLI 20.x is at end-of-life and no longer supported in Rancher. Please update your local Docker CLI versions to 23.0.x or later. Earlier versions may not recognize OCI compliant Rancher image manifests. See #​45424.
  • Rancher v2.8.0:
    • Kontainer Engine v1 (KEv1) provisioning and the respective cluster drivers are now deprecated. KEv1 provided plug-ins for different targets using cluster drivers. The Rancher-maintained cluster drivers for EKS, GKE and AKS have been replaced by the hosted provider drivers, EKS-Operator, GKE-Operator and AKS-Operator. Node drivers are now available for self-managed Kubernetes.
  • Rancher v2.7.2:
    • When you provision a downstream cluster, the cluster's name must conform to RFC-1123. Previously, characters that did not follow the specification, such as ., were permitted and would result in clusters being provisioned without the necessary Fleet components. See #​39248.
    • Privilege escalation is disabled by default when creating deployments from the Rancher API. See #​7165.

Previous Rancher Behavior Changes - RKE Provisioning

  • Rancher v2.9.0:
    • With the release of Rancher Kubernetes Engine (RKE) v1.6.0, RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.

      Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must re-platform from RKE to RKE2 or K3s.

      RKE2 and K3s provide stronger security, and move away from the upstream-deprecated Docker machine. Learn more about re-platforming at the official SUSE blog.

    • Rancher has added support for external Azure cloud providers in downstream RKE clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #​44857.

    • Weave CNI support for RKE clusters is removed in response to Weave CNI not being supported by upstream Kubernetes v1.30 and later. See #45954.

  • Rancher v2.8.0:
    • Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #​43175.
    • The Weave CNI plugin for RKE v1.27 and later is now deprecated. Weave will be removed in RKE v1.30. See #​42730.

Previous Rancher Behavior Changes - RKE2 Provisioning

  • Rancher v2.9.0:
    • Rancher has added support for external Azure cloud providers in downstream RKE2 clusters. Note that migration to an external Azure cloud provider is required when running Kubernetes v1.30 and recommended when running Kubernetes v1.29. See #​44856.
    • Added a new annotation, provisioning.cattle.io/allow-dynamic-schema-drop. When set to true, it drops the dynamicSchemaSpec field from machine pool definitions. This prevents cluster nodes from re-provisioning unintentionally when the cluster object is updated from an external source such as Terraform or Fleet. See #​44618.
  • Rancher v2.8.0:
    • Rancher no longer supports the Amazon Web Services (AWS) in-tree cloud provider for RKE2 clusters. This is in response to upstream Kubernetes removing the in-tree AWS provider in Kubernetes v1.27. You should instead use the out-of-tree AWS cloud provider for any Rancher-managed clusters running Kubernetes v1.27 or later. See #​42749.
    • Similar to Rancher v2.7.9, when you upgrade to Rancher v2.8.0 with provisioned RKE2/K3s clusters in an unhealthy state, you may encounter the error message, implausible joined server for entry. This requires manually marking the nodes in the cluster with a joined server. See #​42856.
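The v2.9.0 allow-dynamic-schema-drop annotation described above can be applied to an existing provisioning cluster object with kubectl. A sketch, assuming the cluster object lives in the fleet-default namespace and is named my-cluster (both placeholders):

```shell
# Opt a cluster out of dynamicSchemaSpec-driven re-provisioning when the
# cluster object is updated externally, e.g. by Terraform or Fleet (see #44618).
# "my-cluster" and the fleet-default namespace are placeholders for your setup.
kubectl annotate clusters.provisioning.cattle.io my-cluster \
  -n fleet-default \
  provisioning.cattle.io/allow-dynamic-schema-drop=true
```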

Previous Rancher Behavior Changes - Cluster API

  • Rancher v2.7.7:
    • The cluster-api core provider controllers run in a pod in the cattle-provisioning-cattle-system namespace, within the local cluster. These controllers are installed with a Helm chart. Previously, Rancher ran cluster-api controllers in an embedded fashion. This change makes it easier to maintain cluster-api versioning. See #​41094.
    • The token hashing algorithm generates new tokens using SHA3. Existing tokens that don't use SHA3 won't be re-hashed. This change affects ClusterAuthTokens (the downstream synced version of tokens for ACE) and Tokens (only when token hashing is enabled). SHA3 tokens should work with ACE and Token Hashing. Tokens that don't use SHA3 may not work when ACE and token hashing are used in combination. If, after upgrading to Rancher v2.7.7, you experience issues with ACE while token hashing is enabled, re-generate any applicable tokens. See #​42062.

Previous Rancher Behavior Changes - Rancher App (Global UI)

  • Rancher v2.8.0:
    • The built-in restricted-admin role is being deprecated in favor of a more flexible global role configuration, which is now available for different use cases other than only the restricted-admin. If you want to replicate the permissions given through this role, use the new inheritedClusterRoles feature to create a custom global role. A custom global role, like the restricted-admin role, grants permissions on all downstream clusters. See #​42462. Given its deprecation, the restricted-admin role will continue to be included in future builds of Rancher through the v2.8.x and v2.9.x release lines. However, in accordance with the CVSS standard, only security issues scored as critical will be backported and fixed in the restricted-admin role until it is completely removed from Rancher.
    • Reverse DNS server functionality has been removed. The associated rancher/rdns-server repository is now archived. Reverse DNS is already disabled by default.
    • The Rancher CLI configuration file ~/.rancher/cli2.json previously had permissions set to 0644. Although 0644 would usually indicate that all users have read access to the file, the parent directory would block users' access. New Rancher CLI configuration files will only be readable by the owner (0600). Invoking the CLI will trigger a warning, in case old configuration files are world-readable or group-readable. See #​42838.
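The tightened CLI config permissions described above can also be applied retroactively to an existing, world- or group-readable config file:

```shell
# Restrict the Rancher CLI config to owner read/write only (0600), matching
# the permissions new config files receive in v2.8.0 and later.
mkdir -p ~/.rancher
touch ~/.rancher/cli2.json          # no-op if the file already exists
chmod 600 ~/.rancher/cli2.json
```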

Previous Rancher Behavior Changes - Rancher App (Helm Chart)

  • Rancher v2.7.0:
    • When installing or upgrading an official Rancher Helm chart app in a RKE2/K3s cluster, if a private registry exists in the cluster configuration, that registry will be used for pulling images. If no cluster-scoped registry is found, the global container registry will be used. A custom default registry can be specified during the Helm chart install and upgrade workflows. Previously, only the global container registry was used when installing or upgrading an official Rancher Helm chart app for RKE2/K3s node driver clusters.

Previous Rancher Behavior Changes - Continuous Delivery

  • Rancher v2.9.0:
    • Rancher now supports monitoring of continuous delivery. Starting with version v104.0.1 of the Fleet chart (Fleet v0.10.1) and the rancher-monitoring chart, continuous delivery provides metrics about the state of its resources, and the rancher-monitoring chart contains dashboards to visualize those metrics. Installing the rancher-monitoring chart to the local/upstream cluster automatically configures Prometheus to scrape metrics from the continuous delivery controllers and installs Grafana dashboards. These dashboards are accessible via Grafana but are not yet integrated into the Rancher UI. You can open Grafana from the Rancher UI by navigating to the Cluster > Monitoring > Grafana view. See rancher/fleet#1408 for implementation details.
    • Continuous delivery in Rancher also introduces sharding with node selectors. See rancher/fleet#1740 for implementation details and the Fleet documentation for instructions on how to use it.
    • We have reduced image size and complexity by integrating the former external gitjob repository and by merging various controller codes. This also means that the gitjob container image (rancher/gitjob) is no longer needed, as the required functionality is embedded into the rancher/fleet container image. The gitjob deployment will still be created, but it will point to the rancher/fleet container image instead. Please also note that a complete list of necessary container images for air-gapped deployments is released alongside Rancher releases. You can find this list as rancher-images.txt in the assets of the release on GitHub. See rancher/fleet#2342 for more details.
    • Continuous delivery also adds experimental OCI content storage. See rancher/fleet#2561 for implementation details and rancher/fleet-docs#179 for documentation.
    • Continuous delivery now splits components into containers and has switched to the controller-runtime framework. The rewritten controllers switch to structured logging.
    • Leader election can now be configured (see rancher/fleet#1981), as well as the worker count for the fleet-controller (see rancher/fleet#2430).
    • The release deprecates the "fleet test" command in favor of "target" and "deploy" with a dry-run option (see rancher/fleet#2102).
    • Bug fixes enhance drift detection, cluster status reporting, and various operational aspects.

Previous Rancher Behavior Changes - Pod Security Standard (PSS) & Pod Security Admission (PSA)

  • Rancher v2.7.2:
    • You must manually change the psp.enabled value in the chart install yaml when you install or upgrade v102.x.y charts on hardened RKE2 clusters. Instructions for updating the value are available. See #​41018.

Previous Rancher Behavior Changes - Authentication

  • Rancher v2.8.3:
    • Rancher uses additional trusted CAs when establishing a secure connection to the Keycloak OIDC authentication provider. See #43217.
  • Rancher v2.8.0:
    • The kubeconfig-token-ttl-minutes setting has been replaced by the setting, kubeconfig-default-token-ttl-minutes, and is no longer available in the UI. See #​38535.
    • API tokens now have default time periods after which they expire. Authentication tokens expire after 90 days, while kubeconfig tokens expire after 30 days. See #​41919.
  • Rancher v2.7.2:
    • Rancher might retain resources from a disabled auth provider configuration in the local cluster, even after you configure another auth provider. To manually trigger cleanup for a disabled auth provider, add the management.cattle.io/auth-provider-cleanup annotation with the unlocked value to its auth config. See #​40378.
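The cleanup annotation described above can be set with kubectl. A sketch, assuming the disabled provider is Azure AD (the authconfig name "azuread" is a placeholder; use the name of your disabled auth config):

```shell
# Trigger cleanup of resources left behind by a disabled auth provider
# (see #40378). "azuread" is a placeholder auth config name;
# `kubectl get authconfigs` lists the ones in your cluster.
kubectl annotate --overwrite authconfig azuread \
  management.cattle.io/auth-provider-cleanup=unlocked
```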

Previous Rancher Behavior Changes - Rancher Webhook

  • Rancher v2.8.3:
    • The embedded Cluster API webhook is removed from the Rancher webhook and can no longer be installed from the webhook chart. It has not been used as of Rancher v2.7.7, where it was migrated to a separate Pod. See #​44619.
  • Rancher v2.8.0:
    • Rancher's webhook now honors the bind and escalate verbs for GlobalRoles. Users who have * set on GlobalRoles will now have both of these verbs, and could potentially use them to escalate privileges in Rancher v2.8.0 and later. You should review current custom GlobalRoles, especially cases where bind, escalate, or * are granted, before you upgrade.
  • Rancher v2.7.5:
    • Rancher installs the same pinned version of the rancher-webhook chart not only in the local cluster but also in all downstream clusters. Restoring Rancher from v2.7.5 to an earlier version will result in downstream clusters' webhooks being at the version set by Rancher v2.7.5, which might cause incompatibility issues. Local and downstream webhook versions need to be in sync. See #​41730 and #​41917.
    • The mutating webhook configuration for secrets is no longer active in downstream clusters. See #​41613.

Previous Rancher Behavior Changes - Apps & Marketplace

  • Rancher v2.8.0:
    • Legacy code for the following v1 charts is no longer available in the rancher/system-charts repository:

      • rancher-cis-benchmark
      • rancher-gatekeeper-operator
      • rancher-istio
      • rancher-logging
      • rancher-monitoring

      The code for these charts will remain available for previous versions of Rancher.

    • Helm v2 support is deprecated as of the Rancher v2.7 line and will be removed in Rancher v2.9.

  • Rancher v2.7.0:
    • Rancher no longer validates an app registration's permissions to use Microsoft Graph on endpoint updates or initial setup. You should add Directory.Read.All permissions of type Application. If you configure a different set of permissions, Rancher may not have sufficient privileges to perform some necessary actions within Azure AD, causing errors.
    • The multi-cluster app legacy feature is no longer available. See #​39525.

Previous Rancher Behavior Changes - OPA Gatekeeper

  • Rancher v2.8.0:
    • OPA Gatekeeper is now deprecated and will be removed in a future release. As a replacement for OPA Gatekeeper, consider switching to Kubewarden. See #​42627.

Previous Rancher Behavior Changes - Feature Charts

  • Rancher v2.7.0:
    • A configurable priorityClass is available in the Rancher pod and its feature charts. Previously, pods critical to running Rancher didn't use a priority class. This could cause a cluster with limited resources to evict Rancher pods before other noncritical pods. See #​37927.

Previous Rancher Behavior Changes - Backup/Restore

  • Rancher v2.7.7:
    • If you use a version of backup-restore older than v102.0.2+up3.1.2 to take a backup of Rancher v2.7.7, the migration will encounter a capi-webhook error. Make sure that the chart version used for backups is v102.0.2+up3.1.2, which has cluster.x-k8s.io/v1alpha4 resources removed from the resourceSet. If you can't use v102.0.2+up3.1.2 for backups, delete all cluster.x-k8s.io/v1alpha4 resources from the backup tar before using it. See #​382.

Previous Rancher Behavior Changes - Logging

  • Rancher v2.7.0:
    • Rancher defaults to using the bci-micro image for sidecar audit logging. Previously, the default image was Busybox. See #​35587.

Previous Rancher Behavior Changes - Monitoring

  • Rancher v2.7.2:
    • Rancher maintains a /v1/counts endpoint that the UI uses to display resource counts. The UI subscribes to changes to the counts for all resources through a websocket to receive the new counts for resources.
      • Rancher aggregates the changed counts and only sends a message every 5 seconds. This, in turn, requires the UI to update the counts at most once every 5 seconds, improving UI performance. Previously, Rancher would send a message each time the resource counts changed for a resource type. This led to the UI needing to constantly stop other areas of processing to update the resource counts. See #36682.
      • Rancher now only sends back a count for a resource type if the count has changed from the previously known number, improving UI performance. Previously, each message from this socket would include all counts for every resource type in the cluster, even if the counts only changed for one specific resource type. This would cause the UI to need to re-update resource counts for every resource type at a high frequency, with a significant performance impact. See #​36681.

Previous Rancher Behavior Changes - Project Monitoring

  • Rancher v2.7.2:
    • The Helm Controller in RKE2/K3s respects the managedBy annotation. In its initial release, Project Monitoring V2 required a workaround to set helmProjectOperator.helmController.enabled: false, since the Helm Controller operated on a cluster-wide level and ignored the managedBy annotation. See #​39724.

Previous Rancher Behavior Changes - Security

  • Rancher v2.9.0:
    • When agent-tls-mode is set to strict, users must provide the certificate authority to Rancher, or downstream clusters will disconnect from Rancher and require manual intervention to fix. This applies to several setup types, including:

      • Let's Encrypt - When set to strict, users must upload the Let's Encrypt Certificate Authority and provide privateCA=true when installing the chart.
      • Bring Your Own Cert - When set to strict, users must upload the Certificate Authority used to generate the cert and provide privateCA=true when installing the chart.
      • Proxy/External - When set to strict, users must upload the Certificate Authority used by the proxy and provide privateCA=true when installing the chart.

      See #​45628 and #​45655.

  • Rancher v2.8.0:
    • TLS v1.0 and v1.1 are no longer supported for Rancher app ingresses. See #​42027.

Previous Rancher Behavior Changes - Extensions

  • Rancher v2.9.0:
    • A new feature flag, uiextensions, has been added for enabling and disabling the UI extensions feature (replacing the need to install the ui-plugin-operator). The first time it's set to true (the default value), it creates the CRD and enables the controllers and endpoints necessary for the feature to work. If set to false, it won't create the CRD if it doesn't already exist, but it won't delete an existing one; it also disables the controllers and endpoints used by the feature. Enabling or disabling the feature flag causes Rancher to restart. See #​44230 and #​43089.
    • UI extension owners must update and publish a new version of their extensions to be compatible with Rancher v2.9.0 and later. For more information see the Rancher v2.9 extension support page.
  • Rancher v2.8.4:
    • The Rancher dashboard fails to load an extension that uses backported Vue 3 features, displaying the console error object(...) is not a function. New extensions that use defineComponent are not backwards compatible with older versions of the dashboard. Existing extensions should continue to work moving forward. See #​10568.

Long-standing Known Issues

Long-standing Known Issues - Cluster Provisioning

  • Not all cluster tools can be installed on a hardened cluster.

  • Rancher v2.8.1:

    • When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message [ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again. As a workaround, you can unpause the cluster by running kubectl edit clusters.cluster clustername -n fleet-default and set spec.unpaused to false. See #​43735.
  • Rancher v2.7.2:

    • If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #​8524.
    • When you upgrade your Kubernetes cluster, you might see the following error: Cluster health check failed. This is a benign error that occurs as part of the upgrade process, and will self-resolve. It's caused by the Kubernetes API server becoming temporarily unavailable as it is being upgraded within your cluster. See #​41012.
    • Once you configure a setting with an environment variable, it can't be updated through the Rancher API or the UI; it can only be updated by changing the value of the environment variable. Setting the environment variable to "" (the empty string) changes the value in the Rancher API but not in Kubernetes. As a workaround, run kubectl edit setting <setting-name>, set the value and source fields to "", and re-deploy Rancher. See #​37998.
  • Rancher v2.6.1:

    • When using the Rancher UI to add a new port of type ClusterIP to an existing Deployment created using the legacy UI, the new port won't be created upon your first attempt to save the new port. You must repeat the procedure to add the port again. The Service Type field will display Do not create a service during the second procedure. Change this to ClusterIP and save to create the new port. See #​4280.

Long-standing Known Issues - RKE Provisioning

  • Rancher v2.9.0:
    • The Weave CNI plugin is deprecated for RKE v1.27 and later, because the plugin is deprecated for upstream Kubernetes v1.27 and later. RKE cluster creation with Weave will not go through, as it raises a validation warning. See #​11322.

Long-standing Known Issues - RKE2 Provisioning

  • Rancher v2.9.0:
    • Currently there are known issues with the data directory feature which are outlined below:
      • K3s does not support the data directory feature. See #​10589.
      • Currently, selecting the Use the same path for System-agent, Provisioning and K8s Distro data directories configuration results in Rancher using the same data directory for the system agent, provisioning, and distribution components, rather than appending the specified component names to the root directory. To mitigate this issue, configure the 3 paths separately. They must be:
        - Absolute (start with /)
        - Clean (not containing env vars, shell expressions, ., or ..)
        - Not set to the same value
        - Not nested one within another
        See #​11566.
    • When adding the provisioning.cattle.io/allow-dynamic-schema-drop annotation through the cluster config UI, the annotation disappears before adding the value field. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the dropdown attached to your respective cluster in the Cluster Management view. See #​11435.
  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.4:
    • RKE2 clusters with invalid values for tolerations or affinity agent customizations don't display an error message, and remain in an Updating state. This causes cluster creation to hang. See #​41606.
  • Rancher v2.7.2:
    • When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, spec.rkeConfig.machineGlobalConfig.profile is set to null, which is an invalid configuration. See #​8480.
    • Deleting nodes from custom RKE2/K3s clusters in Rancher v2.7.2 can cause unexpected behavior if the underlying infrastructure isn't thoroughly cleaned. When deleting a custom node from your cluster, ensure that you delete the underlying infrastructure for it, or run the corresponding uninstall script for the Kubernetes distribution installed on the node. See #​41034.
  • Rancher v2.6.9:
    • Deleting a control plane node results in worker nodes also reconciling. See #​39021.
  • Rancher v2.6.4:
    • Communication between the ingress controller and the pods doesn't work when you create an RKE2 cluster with Cilium as the CNI and activate project network isolation. See documentation and #​34275.
  • Rancher v2.6.3:
    • When provisioning clusters with an RKE2 cluster template, the rootSize for AWS EC2 provisioners doesn't take an integer when it should, and an error is thrown. As a workaround, wrap the EC2 rootSize in quotes. See #​40128.
  • Rancher v2.6.0:
    • Amazon ECR Private Registries don't work from RKE2/K3s. See #​33920.
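
The data-directory path rules listed under the v2.9.0 note above can be checked before provisioning with a small validator. This is a sketch of the stated guidelines only (function and error messages are hypothetical, not part of Rancher):

```python
import posixpath

def validate_data_dirs(system_agent, provisioning, distro):
    """Check the three data-directory paths against the v2.9.0 guidelines:
    absolute, clean, distinct, and not nested. Illustrative only."""
    paths = {"system-agent": system_agent,
             "provisioning": provisioning,
             "k8s-distro": distro}
    errors = []
    for name, p in paths.items():
        if not p.startswith("/"):
            errors.append(f"{name}: must be an absolute path")
        # "Clean" here means no env vars, shell expressions, '.' or '..'
        if "$" in p or "`" in p or p != posixpath.normpath(p):
            errors.append(f"{name}: must be clean (no env vars, shell "
                          f"expressions, '.' or '..')")
    values = list(paths.values())
    if len(set(values)) != len(values):
        errors.append("paths must not be set to the same value")
    for a in values:
        for b in values:
            if a != b and (b.rstrip("/") + "/").startswith(a.rstrip("/") + "/"):
                errors.append(f"{b} is nested inside {a}")
    return errors
```

An empty return value means the three paths satisfy all four guidelines; otherwise each violation is reported separately.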

Long-standing Known Issues - K3s Provisioning

  • Rancher v2.7.7:
    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.7.6:
    • Provisioning RKE2/K3s clusters with added (not built-in) custom node drivers causes provisioning to fail. As a workaround, fix the added node drivers after activating. See #​37074.
  • Rancher v2.7.2:
    • Clusters remain in an Updating state even when they contain nodes in an Error state. See #​39164.
    • Deleting nodes from custom RKE2/K3s clusters in Rancher v2.7.2 can cause unexpected behavior if the underlying infrastructure isn't thoroughly cleaned. When deleting a custom node from your cluster, ensure that you delete the underlying infrastructure for it, or run the corresponding uninstall script for the Kubernetes distribution installed on the node. See #​41034.
  • Rancher v2.6.0:
    • Deleting a control plane node results in worker nodes also reconciling. See #​39021.

Long-standing Known Issues - Rancher App (Global UI)

  • Rancher v2.7.7:
    • The Rancher UI doesn't allow underscores (_) in the Cluster Name field when creating a cluster. See #​9416.
  • Rancher v2.7.2:
    • When creating a GKE cluster in the Rancher UI, provisioning fails because the clusterIpv4CidrBlock and clusterSecondaryRangeName fields conflict. See #​8749.

Long-standing Known Issues - Rancher CLI

  • Rancher v2.9.0:
    • The Rancher CLI currently lists the Azure authentication provider options out of order. See #​46128.

Long-standing Known Issues - Hosted Rancher

  • Rancher v2.7.5:
    • The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #​8524.

Long-standing Known Issues - Docker Install

  • Rancher v2.6.4:
    • Single node Rancher won't start on Apple M1 devices with Docker Desktop 4.3.0 or later. See #​35930.
  • Rancher v2.6.3:
    • On a Docker install upgrade and rollback, Rancher logs repeatedly display the messages "Updating workload ingress-nginx/nginx-ingress-controller" and "Updating service frontend with public endpoints". Ingresses and clusters are functional and active, and logs resolve eventually. See #​35798 and #​40257.
  • Rancher v2.5.0:
    • UI issues may occur due to longer startup times. When launching Docker for the first time, you'll receive an error message stating, "Cannot read property endsWith of undefined", as described in #​28800. You'll then be directed to a login screen. See #​28798.

Long-standing Known Issues - Windows

  • Rancher v2.5.8:
    • Windows nodeAgents are not deleted when performing a helm upgrade after disabling Windows logging on a Windows cluster. See #​32325.
    • If you deploy Monitoring V2 on a Windows cluster with win_prefix_path set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #​32535.

Long-standing Known Issues - Windows Nodes in RKE2 Clusters

  • Rancher v2.6.4:
    • NodePorts do not work on Windows Server 2022 in RKE2 clusters due to a Windows kernel bug. See #​159.

Long-standing Known Issues - AKS

  • Rancher v2.7.2:
    • Imported Azure Kubernetes Service (AKS) clusters don't display workload level metrics. This bug affects Monitoring V1. A workaround is available. See #​4658.
  • Rancher v2.6.x:
    • Windows node pools are not currently supported. See #​32586.
  • Rancher v2.6.0:
    • When editing or upgrading an Azure Kubernetes Service (AKS) cluster, do not make changes from the Azure console or CLI at the same time. These actions must be done separately. See #​33561.

Long-standing Known Issues - EKS

  • Rancher v2.7.0:
    • EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #​39392.

Long-standing Known Issues - GKE

  • Rancher v2.5.8:
    • Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to Kubernetes v1.19+ in Rancher. See #​32312.

Long-standing Known Issues - Pod Security Standard (PSS) & Pod Security Admission (PSA)

  • Rancher v2.6.4:
    • The deployment's securityContext section is missing when a new workload is created. This prevents pods from starting when Pod Security Policy (PSP) support is enabled. See #​4815.

Long-standing Known Issues - Authentication

  • Rancher v2.9.0:
    • There are some known issues with the OpenID Connect provider support:
      • When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #​46104.
      • When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #​46105.
      • When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg". However, the project is still created. See #​46106.
  • Rancher v2.7.7:
    • The SAML authentication pop-up throws a 404 error on high-availability RKE installations. Single node Docker installations aren't affected. If you refresh the browser window and select Resend, the authentication request will succeed, and you will be able to log in. See #​31163.
  • Rancher v2.6.2:
    • Users on certain LDAP setups don't have permission to search LDAP. When they attempt to perform a search, they receive the error message, Result Code 32 "No Such Object". See #​35259.

Long-standing Known Issues - Encryption

  • Rancher v2.5.4:
    • Rotating encryption keys with a custom encryption provider is not supported. See #​30539.

Long-standing Known Issues - Rancher Webhook

  • Rancher v2.7.2:
    • A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
      • If you roll back from a version of Rancher v2.7.2 or later to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors downstream. The Rancher team has developed a script which should be used after the rollback is complete (that is, after a Rancher version earlier than v2.7.2 is running); it removes the webhook from affected downstream clusters. See #​40816.

Long-standing Known Issues - Harvester

  • Rancher v2.8.4:
    • When provisioning a Harvester RKE1 cluster in Rancher, the vGPU field is not displayed under Cluster Management > Advanced Settings, as this is not a supported feature. However, the vGPU field is available when provisioning a Harvester RKE2 cluster. See #​10909.
    • When provisioning a multi-node Harvester RKE2 cluster in Rancher, you need to allocate one vGPU more than the number of nodes you have or provisioning will fail. See #​11009 and v2.9.0 back-port issue #​10989.
  • Rancher v2.7.2:
    • If you're using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won't be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #​3750.
  • Rancher v2.6.1:
    • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #​35049.
    • Upgrades from Harvester v0.3.0 are not supported.

Long-standing Known Issues - Continuous Delivery

  • Rancher v2.9.0:
    • The git-sync job that clones the repository does not have access to the CA bundle as a necessary secret is not created from the GitRepo resource data. Cloning from git repositories that require custom certificates will fail. See #​2824.
  • Rancher v2.7.6:
    • Target customization can produce custom resources that exceed the Rancher API's maximum bundle size. This results in Request entity too large errors when attempting to add a GitHub repo. Only target customizations that modify the Helm chart URL or version are affected. As a workaround, use multiple paths or GitHub repos instead of target customization. See #​1650.
  • Rancher v2.6.1:
    • Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #​35049.
  • Rancher v2.6.0:
    • Multiple fleet-agent pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #​33293.

Long-standing Known Issues - Feature Charts

  • Rancher v2.6.5:
    • After installing an app from a partner chart repo, the partner chart will upgrade to feature charts if the chart also exists in the feature charts default repo. See #​5655.

Long-standing Known Issues - CIS Scan

  • Rancher v2.8.3:
    • Some CIS checks related to file permissions fail on RKE and RKE2 clusters with CIS v1.7 and CIS v1.8 profiles. See #​42971.
  • Rancher v2.7.2:
    • When running CIS scans on RKE and RKE2 clusters on Kubernetes v1.25, some tests will fail if the rke-profile-hardened-1.23 or the rke2-profile-hardened-1.23 profile is used. These RKE and RKE2 test cases failing is expected as they rely on PSPs, which have been removed in Kubernetes v1.25. See #​39851.

Long-standing Known Issues - Backup/Restore

  • When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.

  • Rancher v2.7.7:

    • Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve Active status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #​34518 and #​42834.
  • Rancher v2.6.3:

    • Because Kubernetes v1.22 drops the apiVersion apiextensions.k8s.io/v1beta1, trying to restore an existing backup file into a v1.22+ cluster will fail. The backup file contains CRDs with the apiVersion v1beta1. There are two workarounds for this issue: update the default resourceSet to collect the CRDs with the apiVersion v1, or update the default resourceSet and the client to use the new APIs internally. See the documentation and #​34154.

Long-standing Known Issues - Istio

  • Istio v1.12 and below do not work on Kubernetes v1.23 clusters. To use the Istio charts, please do not update to Kubernetes v1.23 until the next charts' release.

  • Rancher v2.6.4:

    • Applications injecting Istio sidecars fail on SELinux-enabled RHEL 8.4 clusters. A temporary workaround for this issue is to run the following command on each cluster node before creating a cluster: mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni. See #​33291.
  • Rancher v2.6.1:

    • Deprecated resources are not automatically removed and will cause errors during upgrades. Manual steps must be taken to migrate and/or cleanup resources before an upgrade is performed. See #​34699.

Long-standing Known Issues - Logging

  • Rancher v2.5.8:
    • Windows nodeAgents are not deleted when performing a helm upgrade after disabling Windows logging on a Windows cluster. See #​32325.

Long-standing Known Issues - Monitoring

  • Rancher v2.8.0:
    • Read-only project permissions and the View Monitoring role aren't sufficient to view links on the Monitoring index page. As a workaround, you can perform the following steps:

      1. If you haven't already, install Monitoring on the project.
      2. Move the cattle-monitoring-system namespace into the project.
      3. Grant project users the View Monitoring (monitoring-ui-view) role, and read-only or higher permissions on at least one project in the cluster.

      See #​4466.

Long-standing Known Issues - Project Monitoring

  • Rancher v2.5.8:
    • If you deploy Monitoring V2 on a Windows cluster with win_prefix_path set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #​32535.

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever MR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this MR and you won't be reminded about this update again.


  • If you want to rebase/retry this MR, check this box

This MR has been generated by Renovate Bot.
