Releases: Azure/AKS
Release 2025-04-06
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250406.
Announcements
- Starting in May 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
- AKS Kubernetes version 1.32 roll out has been delayed and is now expected to reach all regions on or before the end of April. Please use the `az aks get-versions` command to confirm whether Kubernetes version 1.32 is available in your region.
- Kubernetes versions 1.28 and 1.29 will become additional Long Term Support (LTS) versions in AKS, alongside the existing LTS versions 1.27 and 1.30.
- AKS Kubernetes version 1.29 is going out of support in all regions on or before the end of April 2025.
- You can now switch non-LTS clusters on Kubernetes versions 1.25 onwards, and within 3 versions of the current LTS versions, to LTS by switching their tier to Premium; see the example after this list.
- As of 31 March 2025, AKS no longer allows new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
- The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
- The pod security policy feature was retired on 1 August 2023 and removed from AKS versions 1.25 and higher. The PodSecurityPolicy property will be officially removed from the AKS API starting with API version 2025-03-01.
- Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
- Starting on 17 March 2027, AKS will no longer create new node images for Ubuntu 20.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to Kubernetes version 1.34+ by the retirement date.
- HTTP Application Routing (preview) was retired on March 3, 2025, and AKS now blocks new cluster creation with HTTP Application Routing enabled. Affected clusters must migrate to the generally available Application Routing add-on.
- Customers with nodepools using Standard_NC24rsv3 VM sizes should resize or deallocate those VMs. Microsoft will deallocate remaining Standard_NC24rsv3 VMs in the coming weeks.
- Teleport (preview) on AKS will be retired on 15 July 2025. Please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API, meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry like any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
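For the version availability and LTS announcements above, a minimal Azure CLI sketch (resource names and the region are placeholders):

```bash
# Check which Kubernetes versions (e.g. 1.32) are available in your region
az aks get-versions --location eastus --output table

# Move an eligible non-LTS cluster to the Premium tier with Long Term Support
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --tier premium \
  --k8s-support-plan AKSLongTermSupport
```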
Release Notes
Features:
- AKS Security Bulletin and AKS CVE Mitigation Status are now available to track security and CVE mitigations.
- Azure Portal will now show you Deployment Recommendations based on available capacity of virtual machines
- Microsoft Copilot in Azure, including its AKS capabilities, is now generally available.
- AKS cost recommendations in Azure Advisor are now generally available.
- Kubernetes 1.32 is now Generally Available
- AKS Kubernetes patch versions 1.31.7, 1.30.11, and 1.29.15 are now available to resolve CVE-2025-0426.
- You can now enable Federal Information Processing Standard (FIPS) when using Arm64 VM SKUs in Azure Linux 3.0 node pools in Kubernetes version 1.31+.
- Pod Sandboxing confidential mounts are now enabled for the Azure File CSI driver on AKS 1.32.
- The Azure Portal now offers Deployment Recommendations proactively if there are capacity constraints on the selected node pool sku, zone, and region when creating a new AKS cluster.
- Custom Certificate Authority is available as GA in the 2025-01-01 GA API. It will not be available in the CLI until May 2025. To use the GA feature in the CLI before that release, you can use the `az rest` command to add custom certificates during cluster creation, as sketched below. For more information, see aka.ms/aks/custom-certificate-authority.
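As a rough sketch of the `az rest` workaround mentioned above (the request body shape and API version are assumptions; confirm against aka.ms/aks/custom-certificate-authority):

```bash
# PUT the managed cluster with securityProfile.customCATrustCertificates set
# in the request body (base64-encoded PEM certificates); cluster.json is a
# full managed cluster definition, and all names here are placeholders.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster?api-version=2025-01-01" \
  --body @cluster.json
```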
Behavior Changes:
- Added node anti-affinity for FIPS-compliant nodes to prevent retina-agent pods from being scheduled on them, stopping CrashLoopBackOff on FIPS-enabled nodes while a fix for Retina + FIPS is rolled out.
- Increased `tofqdns-endpoint-max-ip-per-hostname` from 50 to 1000 and `tofqdns-min-ttl` from 0 to 3600 in Azure Cilium to better handle large DNS responses and reduce DNS query load.
- Konnectivity agent will now scale based on cluster node count.
- Starting on 15 April 2025, you will be able to update your clusters to add an HTTP proxy configuration. Any update command that adds or changes an HTTP proxy configuration will trigger an automatic reimage, ensuring all node pools in the cluster have the same configuration; see the example after this list. For more information, see aka.ms/aks/http-proxy.
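A minimal sketch of such an update, assuming the documented `--http-proxy-config` flag and a JSON file with your proxy settings (all values are placeholders):

```bash
cat > aks-proxy-config.json <<'EOF'
{
  "httpProxy": "http://proxy.example.com:3128/",
  "httpsProxy": "https://proxy.example.com:3129/",
  "noProxy": ["localhost", "127.0.0.1"]
}
EOF

# Adding or changing the proxy config triggers an automatic reimage of all node pools
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --http-proxy-config aks-proxy-config.json
```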
Component Updates:
- Cost Analysis add-on updated to v0.0.22 to fix CVE-2025-22866
- Updated ip-masq-agent to v0.1.15-2 to address CVE-2024-45338.
- Application routing add-on updated to v0.2.1-patch-8 for Kubernetes below 1.30 and to v0.2.3-patch-6 for Kubernetes 1.30+. This updates ingress-nginx to v1.11.5 to fix CVE-2025-1097, CVE-2025-1098, CVE-2025-1974, CVE-2025-24513, and CVE-2025-24514.
- CoreDNS 1.12.0, which was used in 1.32 AKS clusters, introduced a breaking change. After the issue was discovered, CoreDNS was updated to [v1.11.3-6](https://github.com/aks-lts/coredns/releases/tag/v1.1...
Release 2025-03-16
Monitor the release status by region in the AKS Release Tracker. This release is titled v20250316.
Announcements
- Starting in April 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
- AKS Kubernetes version 1.32 roll out has been delayed and is now expected to reach all regions on or before the end of April. Please use the `az aks get-versions` command to confirm whether Kubernetes version 1.32 is available in your region.
- AKS will be upgrading the KEDA add-on to more recent KEDA versions. The AKS team will add KEDA 2.16 on AKS clusters with K8s versions >=1.32 and KEDA 2.14 for Kubernetes v1.30 and v1.31. These KEDA versions introduce multiple breaking changes. View the troubleshooting guide to learn how to mitigate these breaking changes.
- AKS Kubernetes version 1.28 will soon be available as a Long Term Support version.
- You can now switch non-LTS clusters on Kubernetes versions 1.25 onwards, and within 3 versions of the current LTS versions, to LTS by switching their tier to Premium.
- On 31 March 2025, AKS will no longer allow new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
- The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
- The pod security policy feature was retired on 1 August 2023 and removed from AKS versions 1.25 and higher. The PodSecurityPolicy property will be officially removed from the AKS API starting with API version 2025-03-01.
- Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
- Starting on 17 March 2027, AKS will no longer create new node images for Ubuntu 20.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to Kubernetes version 1.34+ by the retirement date.
- Customers on retired NCv1, NCv2, NDv1, and NVv1 VM sizes should expect to have those node pools deallocated. Please move to supported VM sizes; you can find more information and instructions here.
Release Notes
Features:
- Application routing add-on support for configuring the default NGINX ingress controller visibility is now generally available in API 2025-02-01.
- Kubernetes events for monitoring node auto-repair actions are now available for your AKS cluster. You can ingest these events and create alerts following the same process as other Kubernetes events.
- AKS Kubernetes patch versions 1.29.12, 1.29.13, 1.30.8, 1.30.9, 1.31.4, and 1.31.5 are now available.
- Application Gateway Ingress Controller now supports Azure CNI overlay clusters.
- You can now upgrade AKS clusters with the Istio-based service mesh add-on enabled regardless of compatibility with the current mesh revision, allowing you to recover to a compatible and supported state. For more information, visit the Istio upgrade documentation.
- Istio-based service mesh add-on users can now customize the `externalTrafficPolicy` field in the Istio ingress gateway `Service` spec. AKS will no longer reconcile this field, preserving user-defined values.
- AKS now supports upgrading from Node Subnet to Node Subnet + Cilium and from Node Subnet + Cilium to Azure CNI Overlay + Cilium. For more information, please see our upgrade documentation.
- Message of the day is now generally available.
- You can now enable Federal Information Processing Standard (FIPS) when using Arm64 VM SKUs. This is only supported for Azure Linux 3.0 node pools on Kubernetes version 1.32+; see the example after this list.
- You can now create Windows type Virtual Machine Node Pools. Note that existing Linux type VM node pools cannot be converted to Windows VM node pools. For more information, see Create a Virtual Machine node pool.
- Private clusters are now supported in Automated Deployments.
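A sketch of the FIPS + Arm64 combination above (the VM size and resource names are illustrative; requires Kubernetes 1.32+ per the note):

```bash
# Add a FIPS-enabled Azure Linux Arm64 node pool
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name fipsarm64 \
  --node-vm-size Standard_D4ps_v5 \
  --os-sku AzureLinux \
  --enable-fips-image
```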
Preview Features:
- You can use the `EnableCiliumNodeSubnet` feature in preview to create Cilium node subnet clusters using Azure CNI Powered by Cilium; a registration sketch follows this list.
- Control plane metrics are now available through Azure Monitor platform metrics in preview to monitor critical control plane components such as API server and etcd.
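A registration sketch for the preview flag named above (standard preview-feature flow; propagation can take several minutes):

```bash
az feature register --namespace Microsoft.ContainerService --name EnableCiliumNodeSubnet

# Once the feature shows "Registered", refresh the provider
az provider register --namespace Microsoft.ContainerService
```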
Bug Fixes:
- Fixed an issue with the retina-agent volume to restrict access to only the `/var/run/cilium` directory. Previously retina-agent mounted `/var/run` from the host, which could potentially overwrite data in that directory.
- Fixed an issue where SSHAccess was being reset to the default value `enabled` on partial PUT requests for `managedCluster.AgentPoolProfile.SecurityProfile` that did not specify SSHAccess.
- Fixed an issue where Node Auto Provisioning (Karpenter) failed to properly apply the `kubernetes.azure.com/azure-cni-overlay=true` label to nodes, which resulted in failure to assign pod IPs in some cases.
- Fixed an issue where `calico-typha` could be scheduled on virtual-kubelet due to overly permissive tolerations. Tolerations are now properly restricted to prevent incorrect scheduling. Check this GitHub issue for more details.
- Fixed an issue in Hubble Relay scheduling behavior to prevent deployment on cordoned nodes, allowing the cluster autoscaler to properly scale down nodes.
- Fixed an issue where pods could get stuck in `ContainerCreating` during Cilium+NodeSubnet to Cilium+Overlay upgrades by ensuring the original network configuration is retained on existing nodes.
- Fixed an issue where the priority class wasn't set on the Custom CA Trust DaemonSet. This change ensures that the DaemonSet will not be evicted first in case of node pressure.
- Fixed an issue where policy enforcements through Azure Policy addon were interrupted during cluster scaling or upgrade operations due to a missing Pod Disruption Budget (PDB) for the Gatekee...
Release 2025-02-20
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250220.
Announcements
- AKS Kubernetes version 1.32 is rolling out soon and is expected to reach all regions on or before the end of March. Please use the `az aks get-versions` command to confirm whether Kubernetes version 1.32 is available in your region.
- HTTP Application Routing (preview) is going to be retired on March 3, 2025 and AKS will start to block new cluster creation with HTTP Application Routing (preview) enabled. Affected clusters must migrate to the generally available Application Routing add-on prior to that date. Refer to the migration guide for more information.
- Using the GPU VHD image (preview) to provision GPU-enabled AKS nodes was retired on January 10, 2025 and AKS will block creation of new node pools with the GPU VHD image (preview). Follow the detailed steps to create GPU-enabled node pools using the alternative supported options.
- Extended the AKS security patch release notes in the release tracker to include a package comparison with the previous (current - 1) AKS Ubuntu base image.
Release Notes
Features:
- Application routing add-on support for configuring the default NGINX ingress controller visibility is now generally available in API 2025-02-01.
- Kubernetes events for monitoring node auto-repair actions are now available for your AKS cluster. You can ingest these events and create alerts following the same process as other Kubernetes events.
- AKS Kubernetes patch versions 1.29.12, 1.29.13, 1.30.8, 1.30.9, 1.31.4, and 1.31.5 are now available.
- The default max surge value for node pool upgrade has been set to 10% for new and existing clusters on Kubernetes versions 1.32.0 and above.
- You can now upgrade from one LTS version to another LTS version on your AKS cluster. If you are running version 1.27 LTS, you can directly upgrade to version 1.30 LTS; see the example after this list.
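A sketch of an LTS-to-LTS upgrade (the exact 1.30 patch version is illustrative; list the available targets first):

```bash
# See which upgrade targets your cluster supports
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade directly from 1.27 LTS to a 1.30 LTS patch version
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.30.9
```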
Preview Features:
- You can use the `EnableCiliumNodeSubnet` feature in preview to create Cilium node subnet clusters using Azure CNI Powered by Cilium.
- Control plane metrics are now available through Azure Monitor platform metrics in preview to monitor critical control plane components such as API server, etcd, scheduler, autoscaler, and controller-manager.
Bug Fixes:
- Resolved an issue with Istio service mesh add-on where having multiple operations with the Lua EnvoyFilter (e.g. adding the Lua filter to call an external service and specifying the cluster referenced by Lua code) was not allowed.
- Fixed a bug in Azure CNI Pod Subnet Static Block Allocation mode with Cilium which caused incorrect iptables rules, leading to pod connectivity failures to DNS and IMDS.
- Resolved an issue in Azure CNI static block IP allocation mode, where the updated Azure Table client mishandled untyped numbers, causing static block node pools to be misidentified as dynamic and leading to operation failures.
- Fixed a bug in Azure Kubernetes Fleet Manager hub cluster resource groups (FL_ prefix resource groups) by truncating the name to avoid issues with long generated managed resource group names breaking the maximum length of resource groups.
Behavior Changes:
- Horizontal Pod Autoscaling has been introduced for the `ama-metrics` replica set pod in the Azure Monitor managed service for Prometheus add-on. More details about the configuration of the Horizontal Pod Autoscaler can be found here.
- Starting with Kubernetes v1.32, node subnet mode will be installed via the `azure-cns` DaemonSet, allowing for faster security updates.
- By default, in new create operations on supported Kubernetes versions, if you have selected a VM SKU that supports Ephemeral OS disks but have not specified an OS disk size, AKS will provision an Ephemeral OS disk whose size scales with the total temp storage of the VM SKU, as long as the temp storage is at least 128 GiB. If you want to utilize the temp storage of the VM SKU yourself, specify the OS disk size during deployment (see the sketch after this list); otherwise it will be consumed by default. See more information here.
- `vmSize` is no longer a required parameter in the AKS REST API. For AgentPools created through the SDK without a specified `vmSize`, AKS will find an appropriate VM SKU for your deployment based on quota and capacity. See more information under `properties.vmSize` here.
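A sketch of pinning the OS disk size so temp storage is not consumed by the auto-sized ephemeral OS disk (the SKU, sizes, and names are illustrative):

```bash
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name temppool \
  --node-vm-size Standard_D8ds_v5 \
  --node-osdisk-type Ephemeral \
  --node-osdisk-size 128
```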
Component Updates:
- Updated Windows CNS from v1.6.13 to v1.6.21 and Linux CNS from v1.6.18 to v1.6.21.
- Updated Windows CNI and Linux CNI from v1.6.18 to v1.6.21.
- Updated tigera operator to v1.36.3 and calico to v3.29.0.
- Node Auto Provisioning has been upgraded to use Karpenter v0.7.2.
- Released LTS patch version 1.27.102 to address CVE-2024-9042 (command injection affecting Windows nodes).
- Updated the Retina basic image to v0.0.25 for Linux and Windows to address CVE-2025-23047 and CVE-2024-45338.
- Updated the cost-analysis-agent image from v0.0.20 to v0.0.21. Upgrades the following dependencies in cost-analysis-agent to fix CVE-2024-45341 and CVE-2024-45336:
- AKS Azure Linux v2 image has been updated to 202502.09.0.
- AKS Ubuntu 22.04 node image has been updated to 202502.09.0.
- AKS Ubuntu 24.04 node image has been updated to 202502.09.0.
- AKS Windows Server 2019 image has been updated to 17763.6775.250117.
- AKS Windows Server 2022 image has been updated to 20348.3091.250117.
- AKS Windows Server 23H2 image has been updated to 25398.1369.250117.
Release 2025-01-30
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250130.
Announcements
- General support for AKS Kubernetes version 1.28 ended on January 30, 2025. Upgrade your clusters to version 1.29 or later. Refer to the version support policy and upgrading a cluster for more information.
- Azure Kubernetes Service will no longer support the WebAssembly System Interface (WASI) nodepools (preview). Starting on May 5, 2025 you will no longer be able to create new WASI nodepools. If you'd like to run WebAssembly (WASM) workloads, you can deploy SpinKube to Azure Kubernetes Service (AKS) from Azure Marketplace. For more information on this retirement, see AKS GitHub.
- The open-source project Bridge to Kubernetes will be retired on April 30, 2025. For more information, please see the Bridge to Kubernetes repository.
- The HTTP Application Routing add-on (preview) is going to be retired on March 3, 2025. You will no longer be able to create clusters that enable the add-on. Migrate to the generally available Application Routing add-on now.
Release Notes
Features:
- AKS Kubernetes patch versions 1.29.11, 1.30.7 and 1.31.3 are now available.
- Security patch releases in the release tracker, starting with 20250115T000000Z, will contain release notes for the release.
Preview Features:
- You can now monitor your stateful workloads running on AKS with Azure Container Storage using Azure Monitor managed service for Prometheus in preview. You can use Azure Monitor managed service for Prometheus to collect Azure Container Storage metrics along with other Prometheus metrics from your AKS cluster. For more information, please see [Enable monitoring for Azure Container Storage](https://learn.microsoft.com/azure/storage/container-storage/enable-monitoring?source=recommendations).
- CNI validation for node autoprovisioner now allows all CNI configurations except for Calico and kubenet. See AKS CNI Overview for more information.
- AKS Automatic SKU now supports using a custom virtual network.
- When using NAP, custom subnets can be specified for node use via an update to the AKSNodeClass CRD which adds the vnetSubnetID property.
Behavior change:
- Proper casing will be enforced on PUT of `Microsoft.ContainerService/managedClusters/agentPools` for the `AgentPoolMode` property. See this issue for more detail.
- Removed Prometheus port and scrape annotations from Retina Linux and Windows DaemonSets to avoid double scraping of metrics.
- The standard load balancer can now be customized to include the `port_*` annotations referenced in the documentation. An additional annotation has been added: `external-dns.alpha.kubernetes.io/hostname`. See this document for more information.
Bug Fix:
- Fixed a bug where some AgentPools with `"kubeletDiskType":"OS"` were not validated.
- Fixed a bug where creating a cluster with a private DNS zone could result in an `InvalidTemplateDeployment` error.
- Fixed a race and potential deadlock condition when a non-Cilium cluster is updating to ACNS Cilium.
- Added early validation on cluster creation when attempting to use 169.254.0.0/16 (link local) for pod or service CIDR blocks to prevent later run-time failures.
- Fixed a breaking change between AppArmor and Cilium. Starting on Kubernetes 1.30 and Ubuntu 24.04, Cilium containers could fail with error Init:CreateContainerError since AppArmor annotations are no longer supported. This change keeps AppArmor annotations for Kubernetes versions below 1.30 and adds the new security context field for Kubernetes versions 1.30 and above. Related PR in upstream Cilium charts: cilium/cilium#32199.
- Fixed a bug that prevented upgrade from starting if the PDB `expectedPods` count is less than the `minAvailable` count.
- Fixed an error condition when AKS attempts to remove the taint `disk.csi.azure.com/agent-not-ready=NoExecute` on node startup. More details: kubernetes-sigs/azuredisk-csi-driver#2309
- Addressed an issue related to node subnet `IPAM Invoker Add failed with error: Failed to allocate pool` in the CNI logs and the associated agentbaker release.
- Added validation when a cluster migrates to CNI Overlay to block migration when there is a custom ip-masq-agent config in the kube-system namespace. This prevents loss of connectivity during migration. See the AKS documentation for more information.
Component updates:
- Cilium v1.14 updated from v1.14.18-241220 to v1.14.18-250107 (v1.14.18-1) to include a fix for Cilium dual-stack upgrades. On upgrade, the Cilium config changes bpf-filter-priority from 1 to 2 but does not clean up the old filters at the old priority, which impacts connectivity. This patch fixes the bug; see the GitHub issue cilium/cilium#36172 for more details.
- Update Azure File CSI driver version to v1.29.10 on AKS 1.28
- Update Azure File CSI driver version to v1.30.7 on AKS 1.29 and 1.30
- Update Azure File CSI driver version to v1.31.3 on AKS 1.31
- Update Azure Disk CSI driver to v1.29.12 on AKS 1.28, 1.29
- Update Azure Disk CSI driver to v1.30.7 on AKS 1.30, 1.31
- Update Azure Blob CSI driver to v1.23.10 on AKS 1.28, 1.29
- Update Azure Blob CSI driver to v1.24.6 on AKS 1.30, 1.31
- Update Workload Identity image version to v1.4.0
- CNS/CNI updated to v1.6.18 which includes Cilium nodesubnet support
- Added Multi-Instance GPU support for standard_nc40ads_h100_v5
- Update the OMS image to v3.1.25-1
- Update secret store driver to v1.4.7 and akv provider to v1.6.2.
- Updates the Retina basic image to v0.0.23 on Linux and Windows: release notes
- Update karpenter image version to 0.6.1-aks
- Update Cilium v1.16 from v1.16.5-250108 to v1.16.5-250110 (v1.16.5-1) to include a fix for Cilium dual stack upgrades. This will fix cilium/cilium#36172. Cilium v1.16.5 also contains fix for CVE-2024-52529.
- The following CVEs were patched in Cilium v1.14.15
- Updated the cost-analysis-agent image from v0.0.19 to v0.0.20. This upgrades the following dependencies in cost-analysis-agent to fix CVE-2024-45337 and CVE-2024-45338.
Release 2025-01-06
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250106.
Announcements
- AKS Kubernetes version 1.28 will be deprecated on Jan 30, 2025. Please upgrade your clusters to version 1.29 or above. Refer to the version support policy and upgrading a cluster for more information.
Release Notes
Features:
- AKS Kubernetes version 1.31 is now generally available.
- AKS Kubernetes patch versions 1.29.11, 1.30.7, 1.31.2, and 1.31.3 are now available.
- AKS LTS version 1.27.101 has been available in all regions since December 2024. This patches kubelet CVE-2024-10220.
- Advanced Container Networking Services (ACNS) is now generally available.
Preview features:
- SeccompDefault is now an available parameter in custom node configuration; a sketch follows this list. For more information on enabling seccomp profiles, see Secure container access to resources.
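A sketch of the custom node configuration flow, assuming `seccompDefault` is set in the kubelet config file as in the custom node configuration docs (preview; names are placeholders):

```bash
cat > kubeletconfig.json <<'EOF'
{ "seccompDefault": "RuntimeDefault" }
EOF

az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name seccomppool \
  --kubelet-config ./kubeletconfig.json
```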
Behavior change:
- Invalid values sent to the Azure AKS API for the properties.mode field of AKS AgentPools will now be rejected. Prior to this change, unknown modes were assumed to be User. The only valid values for this field are the (case-sensitive) strings: "User", "System", or "Gateway".
- AKS no longer supports the GPU image (preview) to provision GPU-enabled AKS nodes. Alternative options that are supported today and recommended by AKS include the default experience with manual NVIDIA device plugin installation or the NVIDIA GPU Operator, detailed in AKS GPU node pool documentation.
- Kubernetes version 1.32 is the last version that supports Windows Server 2019. You will not be able to create new or upgrade existing Windows Server 2019 node pools in AKS versions 1.33+. Follow the detailed steps in AKS documentation to transition to Windows Server 2022 or any newly supported Windows Server version by that date. After 1 March 2026, Windows Server 2019 won't be supported.
- New API throttling limit has been added to PutManagedCluster API for AKS. Please see AKS resource provider throttling limits for more details.
Bug Fix:
- A GPU bootstrapping issue impacting GPU provisioning with Node Auto Provisioning has been fixed. Refer to the GitHub issue for more details.
- Fixed an issue in v1.31 where the Cluster Autoscaler did not respond to external changes in a Spot VMSS-based node pool's node count (e.g., evictions), leading to scale-up failures. Refer to GitHub issue 7373 for more details.
- Resolved an issue (NotFound error message) when querying a VM that has been deleted, which resulted in the NodeClaim being stuck in a notReady state and therefore not being deleted.
- Fixed the Windows node CNS pod restart issue (GitHub issue) observed in clusters running AKS Kubernetes versions 1.27 and above.
Component updates:
- The Tigera operator image version has been bumped to v1.34.7 with this release for clusters running Kubernetes version v1.30.0 and above. This patches the following CVEs detected in the Tigera operator: CVE-2021-3999, CVE-2020-1751, CVE-2019-19126, CVE-2021-35942, CVE-2020-1752, CVE-2020-10029, CVE-2019-9169, CVE-2020-6096, CVE-2021-38604, CVE-2018-19591, CVE-2018-20796, CVE-2019-9192, CVE-2021-3326, CVE-2019-6488, CVE-2016-10739, CVE-2019-7309, CVE-2022-23219, CVE-2022-23218, CVE-2019-25013, CVE-2020-27618.
- The Azure Disks CSI driver version has been bumped to v1.30.6 for AKS clusters running Kubernetes version 1.30 and above. This patches the following CVEs: CVE-2024-51744, CVE-2024-50602, CVE-2024-9143, CVE-2019-11255.
- Bumped the Azure CNI version from v1.4.56 to v1.4.58. This patches the CVEs in the grpc 1.52.0 dependencies: CVE-2023-2976, CVE-2020-8908.
- Cilium container image version bumped to v1.14.15-241024 for AKS clusters running k8s version greater than v1.29.
- AKS Azure Linux v2 image has been updated to 202501.12.0
- AKS Azure Linux v3 image has been updated to 202501.05.0
- AKS Ubuntu 22.04 node image has been updated to 202501.12.0
- AKS Windows Server 2022 image has been updated to v20348.2966.241218
- AKS Windows Server 2019 image has been updated to 17763.6659.241226
- AKS Windows Server 23H2 image has been updated to 25398.1308.241226
- App routing operator updated to 0.2.1-patch-6 for K8s < 1.30, which upgrades external-dns to version 0.15.0, fixing a number of CVEs (CVE-2023-39325, GHSA-m425-mq94-257g, CVE-2024-24790, CVE-2023-45283, CVE-2023-45288, CVE-2024-34156).
- App routing operator updated to 0.2.3-patch-3 for K8s 1.30+, which fixes an issue where Open Service Mesh would not reload correctly on Nginx deployment updates. The Prometheus metrics endpoint has been moved to a separate Service called nginx-metrics behind a ClusterIP. Prometheus scraping will continue to work as expected.
- Cost-analysis-agent image upgraded from v0.0.18 to v0.0.19. This upgrades the golang-jwt dependency in cost-analysis-agent to patch CVE-2024-51744.
- Prometheus collector for the Azure Monitor managed service for Prometheus add-on version bumped from 6.10.1-main-10-04-2024-77dcfe3d to 6.11.0-main-10-21-2024-91ec49e3. This fixes a bug where the minimal ingestion profile keep list was not being honored.
- Application Gateway Ingress Controller add-on version bumped from 1.7.4 to 1.7.6 for clusters with AKS Kubernetes version greater than or equal to 1.27. Please find more details here.
- Retina enterprise and operator image version bumped to v0.1.3. This resolves the following CVEs: CVE-2024-37307, CVE-2024-42486, CVE-2024-42487, CVE-2024-42488, CVE-2024-47825, and CVE-2023-45288, and adds high-level filtering of some metric labels, resulting in less irrelevant metric collection, which can affect clusters at large scale.
- Retina basic image version bumped to [v0.0.17](https://github.com/microsoft/retina/releases...
Release 2024-10-25
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20241025.
Announcements
- AKS version 1.28 End of Life is January 15, 2025.
- AKS will be upgrading the KEDA addon to more recent KEDA versions. The AKS team has added KEDA 2.15 on AKS clusters with K8s versions >=1.32, KEDA 2.14 for Kubernetes v1.30 and v1.31. KEDA 2.15 and KEDA 2.14 will introduce multiple breaking changes. View the troubleshooting guide to learn how to mitigate these breaking changes.
- AKS will no longer support the GPU image (preview) to provision GPU-enabled AKS nodes. Starting on Jan 10, 2025 you will no longer be able to create new GPU-enabled node pools with the GPU image. Alternative options that are supported today and recommended by AKS include the default experience with manual NVIDIA device plugin installation or the NVIDIA GPU Operator, detailed in AKS GPU node pool documentation.
- Starting on January 1, 2025, invalid values sent to the Azure AKS API for the properties.mode field of AKS AgentPools will be rejected. Prior to this change, unknown modes were assumed to be User. The only valid values for this field are the (case-sensitive) strings:"User", "System", or "Gateway".
- AKS will start to block new cluster creation with the Basic Load Balancer in January 2025. The Basic Load Balancer will be retired on September 30, 2025, and affected clusters must be migrated to the Standard Load Balancer prior to that date. Refer to the BLB deprecation announcement for more information.
- As of November 30th, 2024, new AKS clusters created with Kubernetes versions 1.28 and 1.29 will no longer enable beta Kubernetes APIs. This matches the behavior of AKS 1.27 LTS and AKS 1.30+ clusters, which no longer enable beta APIs.
Release Notes
Features:
- AKS patch versions 1.28.14, 1.29.9, 1.30.5 are now available. Refer to version support policy and upgrading a cluster for more information.
- AKS version 1.31 is now generally available. Please check the release tracker for when your region will receive the GA update. Some regions may not receive this update until later in November.
- The first official patch version of AKS LTS 1.27, 1.27.100, is being released.
- GitHub Copilot for Azure now supports AKS commands.
- You can now skip one release while upgrading Azure Service Mesh as long as the destination release is a supported revision - for example, asm-1-21 can upgrade directly to asm-1-23.
- You can now fine-tune supported models on KAITO version 0.3.1 with the AI toolchain operator add-on on your AKS cluster.
- Advanced Container Networking Services (ACNS) is now Generally Available. To learn more, please see the ACNS Documentation.
Preview features:
- We've added a new way to optimize your upgrade process drain behavior; see the sketch after this list. By default, a node drain failure causes the upgrade operation to fail, leaving the undrained nodes in a schedulable state; this behavior is called `Schedule`. Alternatively, you can select the `Cordon` behavior, which skips nodes that fail to drain by placing them in a quarantined state, labeling them `kubernetes.azure.com/upgrade-status:Quarantined`, and proceeding with upgrading the remaining nodes. This ensures that all nodes are either upgraded or quarantined, allowing you to troubleshoot drain failures and gracefully manage the quarantined nodes.
- You can now block pod access to the Azure Instance Metadata Service (IMDS) endpoint to enhance security.
- Azure Linux v3 is now in preview for AKS 1.31 clusters. After registering the preview flag `AzureLinuxV3Preview`, newly created Azure Linux node pools will receive the v3 image. Existing Azure Linux v2 node pools will not upgrade to v3 and must be recreated to upgrade.
  - NOTE: Azure Linux v3 changes the cryptographic provider to OpenSSL + SymCrypt. The SymCrypt library will operate in FIPS mode but is still in the final stages of the validation process and thus is not considered to be FIPS-validated at this time. Do not use this preview with FIPS-enabled node pools if you must use a FIPS-validated cryptographic library.
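A sketch of selecting the `Cordon` behavior, assuming the aks-preview CLI extension exposes it through an `--undrainable-node-behavior` flag (the flag name is an assumption; verify against the preview docs):

```bash
# Skip undrainable nodes during upgrade and quarantine them instead of failing
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --undrainable-node-behavior Cordon
```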
Behavior change:
- Virtual Machine node pools creation will be blocked if the cluster is using system-assigned identity and bring-your-own virtual network, as this combination does not function properly. To utilize virtual machine node pools, migrate the cluster to a user-assigned managed identity with the required permissions on the virtual network. Virtual Machine Scale Set pools are unaffected by this change.
- Enabling long term support no longer changes the default cluster upgrade channel to `patch`.
- AKS CoreDNS configuration will now block all queries ending in `reddog.microsoft.com` and some queries ending in `internal.cloudapp.net` from being forwarded to upstream DNS when they are the result of improper search domain completion. See the documentation for more details.
- Azure NPM's CPU request has been lowered from 250m to 50m.
- Azure CNI Overlay now checks that the pod CIDR does not conflict with any subnet in the virtual network, rather than checking whether it conflicts with the virtual network address space as a whole.
- Azure CNI Overlay is now the default networking configuration for AKS clusters. This means that when running `az aks create --name TestCluster --resource-group TestGroup`, Azure CNI Overlay will be the CNI for the cluster by default. Other networking configurations are still available when specified explicitly; see the sketch after this list.
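If you want a configuration other than the new Overlay default, specify it explicitly at create time; for example (a sketch, names and the subnet ID are placeholders):

```bash
# Explicitly choose Azure CNI with node subnet IP allocation instead of Overlay
az aks create \
  --resource-group TestGroup \
  --name TestCluster \
  --network-plugin azure \
  --vnet-subnet-id <subnet-resource-id>
```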
Component updates:
- gMSA support is updated to version v0.10.0, adding support for random hostnames and fixing an issue with multiple containers invalidating domain trusts.
- Image Cleaner has been upgraded to v1.4.0-1.
- The following Azure CSI drivers have been updated:
  - Azure Blob CSI Driver: v1.22.9 for AKS 1.27, v1.23.9 for AKS 1.28 and 1.29, and v1.24.5 for AKS 1.30+. Starting from v1.23.9 and v1.24.5, blobfuse mounts respect the `http_proxy` and `https_proxy` environment variables.
  - Azure Disk CSI Driver: v1.28.11 for AKS 1.27, v1.29.10 for AKS 1.28 and 1.29, and v1.30.5 for AKS 1.30+.
  - Azure Files CSI Driver: v1.28.13 for AKS 1.27, v1.29.9 for AKS 1.28, and v1.30.6 for AKS 1.29+.
- Azure Monitor for Containers has been upgraded to 3.1.24.
- AKS Windows Server 2019 image has been updated to AKSWindows-2019-17763.6414.241010.
- AKS Windows Server 2022 image has been updated to AKSWindows-20348.2762.241009.
- AKS Azure Linux image has been updated to 202410.27.0.
- AKS Ubuntu image has been updated to 202410.27.0.
- cost-analysis-agent image has been updated to v0.0.18
- ip-masq-agent image has been updated to v0.1.14
- Components in the AKS run-command image have been added and upgraded
- New components: jq, awk, grep, xargs
- Upgraded: kubectl to v1.30.5, helm to 3.15.4
Release 2024-10-06
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20241006.
Announcements
- AKS version 1.30 is now available as a Long Term Support version, and AKS version 1.28 End of Life is January 15, 2025.
- Upgrade from LTS 1.27 to LTS 1.30 is now supported.
- AKS will be upgrading the KEDA addon to more recent KEDA versions. The AKS team has added KEDA 2.15 on AKS clusters with K8s versions >=1.31, KEDA 2.14 for Kubernetes v1.30. KEDA 2.15 and KEDA 2.14 will introduce multiple breaking changes. View the troubleshooting guide to learn how to mitigate these breaking changes.
- AKS will no longer support the GPU image (preview) to provision GPU-enabled AKS nodes. Starting on Jan 10, 2025 you will no longer be able to create new GPU-enabled node pools with the GPU image. Alternative options that are supported today and recommended by AKS include the default experience with manual NVIDIA device plugin installation or the NVIDIA GPU Operator, detailed in AKS GPU node pool documentation.
- Starting on January 1, 2025, invalid values sent to the Azure AKS API for the properties.mode field of AKS AgentPools will be rejected. Prior to this change, unknown modes were assumed to be User. The only valid values for this field are the (case-sensitive) strings:"User", "System", or "Gateway".
Release Notes
Preview features:
- AKS version 1.31 is now available in preview.
- You can now specify the GPU driver type when creating a new AKS Windows GPU node pool using the `--driver-type` flag; see the sketch after this list.
- You can now assign a static egress gateway node pool to provide a stable egress IP for your pods.
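A sketch of the new flag (preview; the valid values, reportedly GRID or CUDA, and the VM size are assumptions to verify against the docs):

```bash
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name gpuwin \
  --os-type Windows \
  --node-vm-size Standard_NC6s_v3 \
  --driver-type GRID
```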
Bug fixes:
- Bug fix to address an issue where Calico pods were stuck in Terminating state.
- Fixed a race condition in Azure Network Policy when editing or deleting then re-adding a network policy without a CIDR handle.
- Fixed a race condition between Cilium and Retina CRDs for Cilium (when Retina is updating to Cilium).
- Bug fix for certificate rotation in the gMSA webhook.
- Bug fix for Advanced Network Observability where the Retina operator didn't have proper permissions.
- Bug fix to address an issue where the Retina operator was not reading the configuration from the ConfigMap.
Behavior change:
- Deprecated API detection will now only show usage on non-readonly verbs (i.e., not GET/LIST/WATCH).
- Starting with AKS version 1.31, nodes will now pull container images in parallel by default. In versions prior to 1.31, the pull type remains serialized.
- When cloud-node-manager-windows enables Windows HostProcess containers, a Windows DaemonSet will be deployed to initialize kube-proxy.
Component updates:
- Updated CNI and CNS versions to `v1.6.7`.
- Updated Azure Network Policy Manager (NPM) to `v1.5.37`.
- Updated Azure Policy addon to `v1.7.1`.
- Updated konnectivity-agent image version to `v0.30.3-hotfix.20240819`.
- Updated containerd-spin-shim to `v0.15.1`.
- Updated Istio-based service mesh add-on revision `asm-1-23` to patch `v1.23.1`. `asm-1-20` is now unsupported. Users can restart the workload pods to trigger re-injection of the newer patch version of istio-proxy. More information can be found here.
- Updated Cilium to `v1.14.15-241002`.
- Updated Calico to `v3.28.1`.
- Updated ama-logs to `v3.1.24`.
- Updated azure-cloud-controller-manager to versions `v1.31.1`, `v1.30.7`, `v1.29.11`, `v1.28.13`.
- Updated overlay-vpa to `v1.2.1` for Kubernetes 1.31.0+ and `v1.0.0` for Kubernetes 1.27.0+.
- Azure Linux image has been updated to Azure Linux-202403.25.0.
- Azure Linux image has been updated to Azure Linux-202409.30.0.
- AKS Ubuntu 22.04 image has been updated to AKSUbuntu-202409.30.0.
Release 2024-09-18
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20240918.
Announcements
- AKS version 1.30 is now available as a Long Term Support version, and AKS version 1.28 End of Life is January 15, 2025.
- AKS will be upgrading the KEDA addon to more recent KEDA versions. The AKS team has added KEDA 2.15 on AKS clusters with K8s versions >=1.31, KEDA 2.14 for Kubernetes v1.30. KEDA 2.15 and KEDA 2.14 will introduce multiple breaking changes which are listed below:
- KEDA 2.15 for Kubernetes >=1.31: The removal of Pod Identity support. If you use pod identity, we recommend you move over to workload identity for your authentication.
- KEDA 2.14 for Kubernetes = 1.30: The removal of Azure Data Explorer 'metadata.clientSecret' as it was not safe for managing secrets.
- KEDA 2.14 for Kubernetes = 1.30: Removal of the deprecated metricName from the trigger metadata section. The two impacted Azure scalers are the Azure Blob Scaler and Azure Log Analytics Scaler. If you are using `metricName` today, please move it out of the trigger metadata section to the trigger's `name` field to optionally name your trigger; see the sketch after this list. To view an example of what this would look like, please view the open GitHub issue.
- AKS will no longer support the GPU image (preview) to provision GPU-enabled AKS nodes. Starting on Jan 10, 2025 you will no longer be able to create new GPU-enabled node pools with the GPU image. Alternative options that are supported today and recommended by AKS include the default experience with manual NVIDIA device plugin installation or the NVIDIA GPU Operator, detailed in AKS GPU node pool documentation.
- Starting on January 1, 2025, invalid values sent to the Azure AKS API for the properties.mode field of AKS AgentPools will be rejected. Prior to this change, unknown modes were assumed to be User. The only valid values for this field are the (case-sensitive) strings:"User", "System", or "Gateway".
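A sketch of the KEDA 2.14 `metricName` migration mentioned above, moving the name to the trigger's `name` field (resource names and scaler metadata are illustrative; authentication omitted):

```bash
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: blob-scaledobject
spec:
  scaleTargetRef:
    name: blob-consumer
  triggers:
    - type: azure-blob
      name: blob-trigger        # previously metadata.metricName
      metadata:
        blobContainerName: mycontainer
        accountName: mystorageaccount
EOF
```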
Release Notes
Features:
- AKS patch versions 1.28.13, 1.29.8, 1.30.4 are now available. Refer to version support policy and upgrading a cluster for more information.
Bug fixes:
- Bug fix to address the issue where the OSDiskSize validator throws an error if the existing agent pool does not have a default value set.
- Fixed a bug causing cluster creation to fail when creating a new cluster with multiple agent pools using the Dynamic Pod IP Allocation feature (pod subnet).
- Resolved a race condition that could occur when deleting a CNI Overlay cluster with auto-scaler enabled, ensuring smoother cluster deletion.
Behavior change:
- Abandoned clusters will be deallocated with status `Failed (Deallocated)` instead of `Succeeded (Stopped)`.
- PDB drain errors will now include an additional PDB debug message and the appropriate original error instead of the generic "API call to Kubernetes API Server failed" error message. Example: "PDB debug info: myNode/myPod1 blocked by pdb myPDB (MaxUnavailable: 1) with 1 unready pods: myNode/myPod2".
- Updated Azure NPM version to v1.5.36 to address a race condition in Azure NPM Linux that can occur when editing or deleting a NetworkPolicy with "enough" rules. The race can result in unexpected connectivity for traffic to/from Pods on the impacted node. NPM will now auto-restart ~15 seconds after it enters a broken state caused by the race to mitigate the issue.
- Lowered Linux Azure NPM's CPU request from 250m to 50m. This addresses GitHub Issue 2792.
- Clusters using the Key Management Service (KMS) plugin based on Azure Key Vault with a private endpoint and konnectivity tunnel may run into a deadlock issue resulting in `apiserver` becoming unreachable. Clusters using this configuration will not be allowed starting with Kubernetes version >= 1.31.
- Allow Istio add-on users to add customizations to the Ingress gateway.
- Busybox will be removed from the kube-proxy init container. This eliminates the need for security updates on busybox.
Component updates:
- All revisions of Azure Service Mesh use zipkin as the default tracer config.
- Cost-analysis-agent image upgraded from v0.0.16 to v0.0.17.
- Updated retina linux to v0.0.15.
- Updated ip-masq-agent to v0.1.13 to address CVE-2024-24790, CVE-2023-45288, CVE-2023-45289, CVE-2023-45290, CVE-2024-24783, CVE-2024-24784, CVE-2024-24785, CVE-2024-24789, CVE-2024-24791, CVE-2024-5321.
- Updated CNI versions to v1.5.35 and v1.6.5. Updated CNS versions to v1.5.35 and v1.6.5.
- Updated Azure Container Instances (ACI) connector addon to v1.6.2 and init-validation to v0.3.0.
- Azure Monitor managed service for Prometheus images updated to 09-16-2024 release.
- Updated Azure Disk CSI driver version to v1.29.9 on AKS 1.28, 1.29, and to v1.30.4 on AKS 1.30.
- Updated Azure File CSI driver to v1.29.8 on AKS 1.28.
- Updated tigera operator to v1.30.11 and calico to v3.26.5 for versions running on k8s 1.29 and 1.30 to address CVE patches.
- Updated the Advanced Container Networking Services image tag to fix a bug that caused Cilium pods to crash in ACNS-enabled AKS clusters.
- Retina Enterprise and Operator image updated to v0.1.0.
- Updated the Windows containerd version from v1.6.21 to v1.6.35 for Kubernetes version < 1.28.
- AKS Windows Server 2022 image has been updated to AKSWindows-2022-20348.2700.240911.
- AKS Windows Server 2019 image has been updated to AKSWindows-2019-17763.6293.240911.
- Azure Linux image has been updated to Azure Linux-202409.09.0.
- AKS Ubuntu 22.04 image has been updated to AKSUbuntu-202409.09.0.
Release 2024-08-27
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20240827.
Announcements
- AKS version 1.27 is now deprecated. Enable long-term support for AKS versions if you still need to operate on 1.27.
- The attestation report for the CIS Kubernetes V1.9.0 Benchmark has been published, covering AKS 1.27.x through AKS 1.29.x.
- AKS will be upgrading the KEDA addon to more recent KEDA versions. The AKS team has added KEDA 2.15 on AKS clusters with K8s versions >=1.31, KEDA 2.14 for Kubernetes v1.30. KEDA 2.15 and KEDA 2.14 will introduce multiple breaking changes which are listed below:
- KEDA 2.15 for Kubernetes >=1.31: The removal of Pod Identity support. If you use pod identity, we recommend you move over to workload identity for your authentication.
- KEDA 2.14 for Kubernetes = 1.30: The removal of Azure Data Explorer 'metadata.clientSecret' as it was not safe for managing secrets.
- KEDA 2.14 for Kubernetes = 1.30: Removal of the deprecated metricName from the trigger metadata section. The two impacted Azure scalers are the Azure Blob Scaler and Azure Log Analytics Scaler. If you are using `metricName` today, please move it out of the trigger metadata section to the trigger's `name` field to optionally name your trigger. To view an example of what this would look like, please view the open GitHub issue.
Release Notes
Features:
- Existing Linux node pools can now be updated to enable or disable Federal Information Processing Standard (FIPS); see the example after this list. See documentation for more information.
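A sketch of toggling FIPS on an existing node pool (names are placeholders; use `--disable-fips-image` to turn it off):

```bash
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --enable-fips-image
```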
Bug fixes:
- Fixed an Azure NPM issue where users could encounter unexpected connectivity for Pods on the node when editing a NetworkPolicy with a CIDR "except" field.
- Fix bug to block non-VMSS (VirtualMachineScaleSets) agent pools in the Automatic SKU validation process.
- Fix bug to ensure correct default network plugin settings for Kubernetes clusters using VMAS.
- Fix bug for intermittent precondition failures when applying an AKS Bicep deployment on the pod subnet delegation.
- Fixed a bug where the public IP on VMSS was dropped after a node image upgrade or a reset service principal operation.
- Fix bug #4282 to remove duplicated toleration from Calico components.
- Fix bug to ensure `AnnotationControlled` is correctly populated by default when creating AKS clusters with app routing enabled, and to ensure `AnnotationControlled` is an accepted value for the default nginx ingress controller config for AKS clusters with K8s versions <1.30.
- Fix bug for Cluster Autoscaler that requires an implementation of the `HasInstance` method on AKS. This implementation prevents the Cluster Autoscaler from stalling during scale-up due to node scale-down issues.
- Fix bug Azure/azure-service-operator#3220 to allow creation of AgentPools without the `Count` field specified if the autoscaler is enabled.
- Fix bug to accept user-set `PowerState` field values for API versions that do not support the field. Impacted API versions are 2020-09-01, 2020-11-01, 2020-12-01, 2021-02-01, and 2021-03-01.
Behavior change:
- Non-host-network pods running on AKS nodes can no longer access the wireserver (168.63.129.16) on port 32526. Before this change, wireserver port 80 was already blocked, but port 32526 was accessible.
- When deploying an AKS Automatic (preview) cluster, users no longer need to register extra feature flags for related preview features, such as APIServerVnetIntegration, NRGLockdown, NodeAutoProvisioning, and Safeguards.
- CBL-Mariner 1.0 is end of life; creation of new node pools with OSSKU `cblmariner` is disabled.
- Application Gateway Ingress Controller addon has been assigned the network contributor role.
Component updates:
- AKS Ubuntu 22.04 image has been updated to AKSUbuntu-202408.27.0.
- Azure Linux image has been updated to AzureLinux-202408.27.0.
- Azure Disk CSI driver has been upgraded to v1.30.3 on AKS 1.30, v1.29.8 on AKS 1.28, and v1.28.1 on AKS 1.27.
- Azure Blob CSI driver has been upgraded to v1.24.3 on AKS 1.30, and v1.23.7 on AKS 1.29 and 1.28.
- Azure File CSI driver has been upgraded to v1.30.5 on AKS 1.30 and 1.29, v1.29.7 on AKS 1.28.
- AKS Windows Server 2019 image has been updated to AKSWindows-2019-17763.6189.240814.
- AKS Windows Server 2022 image has been updated to AKSWindows-2022-20348.2655.240814.
- AKS App Routing operator image has been updated to v0.2.3-patch-2 for AKS cluster with K8s versions >=1.30, v0.2.1-patch-4 for AKS cluster with K8s versions <1.30 to address CVEs.
- Windows containerd has been updated to v1.7.20 in AKS cluster with K8s versions >= v1.28.
- Kubernetes Secrets Store CSI Driver has been updated to v1.4.4 and the Azure Key Vault Provider for Secrets Store CSI Driver to v1.5.3.
- Application Gateway Ingress Controller add-on image has been updated to v1.7.5.
- Retina Enterprise and Operator image has been updated to v0.0.9.
- azure-cloud-controller-manager has been updated to version v1.30.5, v1.29.9, v1.28.11, v1.27.19.
- KEDA addon has been updated to v2.14.1 for Kubernetes = 1.30.
- Azure Policy addon has been updated to v1.7.0.
- Istio-based service mesh add-on revision asm-1-20 has been upgraded to patch v1.20.8, revision asm-1-21 has been upgraded to patch v1.21.5, and revision asm-1-22 has been upgraded to patch v1.22.3. Users can restart the workload pods to trigger re-injection of the newer patch version of istio-proxy. More information can be found here.
- Calico v3.28.1 is supported for AKS cluster with K8s versions 1.31.
Release 2024-08-05
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20240805.
Announcements
- AKS will be upgrading the KEDA addon to more recent KEDA versions. The AKS team has added KEDA 2.15 on AKS clusters with K8s versions >=1.31, KEDA 2.14 for Kubernetes v1.30. KEDA 2.15 and KEDA 2.14 will introduce multiple breaking changes which are listed below:
- KEDA 2.15 for Kubernetes >=1.31: The removal of Pod Identity support. If you use pod identity, we recommend you move over to workload identity for your authentication.
- KEDA 2.14 for Kubernetes = 1.30: The removal of Azure Data Explorer 'metadata.clientSecret' as it was not safe for managing secrets.
- KEDA 2.14 for Kubernetes = 1.30: Removal of the deprecated metricName from the trigger metadata section. The two impacted Azure scalers are the Azure Blob Scaler and Azure Log Analytics Scaler. If you are using `metricName` today, please move it out of the trigger metadata section to the trigger's `name` field to optionally name your trigger. To view an example of what this would look like, please view the open GitHub issue.
Release Notes
Features:
- AKS version 1.30 is now available and will be the next LTS version of AKS. You can now upgrade your 1.27 clusters to 1.30 during the LTS period.
- Updating an existing node pool to enable or disable FIPS is now Generally Available.
- AKS patch versions 1.30.3, 1.29.7, 1.28.12, 1.27.16 are now available. Refer to version support policy and upgrading a cluster for more information.
- The Istio add-on now only allows `EnvoyFilter`s of the types Lua, local rate limiting, and gzip compression.
- Telemetry API v1 is now available for the Istio-based service mesh add-on.
- The AKS extension for Visual Studio Code now supports the ability to attach an ACR to your cluster, generate Kubernetes deployment files, generate Dockerfiles, and generate GitHub Actions.
- The ignore-daemonsets-utilization, daemonset-eviction-for-empty-nodes, and daemonset-eviction-for-occupied-nodes parameters on the cluster autoscaler profile are GA from API version 2024-05-01 onwards; see the example after this list. If you are using the CLI to update these flags, please ensure you are using version 2.63 or later.
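A sketch of setting the newly GA'd parameters via the cluster autoscaler profile (requires Azure CLI 2.63+ per the note; values are illustrative):

```bash
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile ignore-daemonsets-utilization=true daemonset-eviction-for-empty-nodes=true
```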
Bug fixes:
- Fixed a bug where sometimes `NodePublicIPPrefixID` could show unset on a cluster even though it was set.
- Previously, as part of an Istio add-on canary upgrade, users had to manually copy their edits to the HorizontalPodAutoscaler from the old revision to the new revision. This has been fixed so that changes to the Horizontal Pod Autoscaler are automatically copied for the newer revision.
- Added validation that if an LTS cluster has a node pool on a non-LTS version, upgrade to the next LTS version is blocked.
Behavior change:
- When Advanced Networking Observability is enabled, an increased memory limit of 700Mi (up from 400Mi) is used for retina-agent.
- `GOMAXPROCS` for coredns has been set equal to the CPU limit to avoid throttling.
- In Azure CNI, the `init-cni-dropgz` initContainer has been renamed to `cni-installer`.
- A minimum of 5 minutes is now enforced for the drain timeout value to prevent drain issues during upgrade.
- The `query` label has been removed from `dns` metrics in Advanced Network Observability.
- Control-plane-only AKS upgrades will now reconcile node pools to the desired state. For example, previously if a user performed a Kubernetes upgrade and a network plugin mode transition to overlay that required reimaging the nodes, the reimage could be skipped; going forward, nodes will be reconciled in these circumstances.
Component updates:
- To address scheduler issues fixed in this upstream change, scheduler versions 1.27.15, 1.28.11, and 1.29.6 will be used for Kubernetes versions 1.27.14, 1.28.10, and 1.29.5 respectively.
- Updated Azure Blob CSI driver to v1.22.7 on AKS version 1.27.
- For Node Auto Provisioning, Azure provider of Karpenter is upgraded to v0.5.1.
- Updated Azure Monitor Container Insights image to v3.1.23.
- Azure Monitor managed service for Prometheus images updated to 07-19-2024 release.
- Updated Eraser version to v1.3.1 for Image Cleaner.
- Updated Azure Disk CSI driver to v1.28.9 on AKS 1.27 and to v1.29.7 on AKS 1.28 and 1.29.
- Updated Azure File CSI driver to v1.28.11 on AKS 1.27, to v1.29.6 on AKS 1.28, and to v1.30.3 on AKS 1.29.
- Updated Ratify image used in Image Integrity to v1.2.0.
- Cilium has been updated to v1.14.12 for AKS clusters with versions >= 1.29 and Advanced Network Observability enabled.
- Istio-based service mesh add-on revision asm-1-21 has been upgraded to patch v1.21.4 and revision asm-1-22 has been upgraded to patch v1.22.2. Users can restart the workload pods to trigger re-injection of the newer patch version of istio-proxy. More information can be found here.
- Updated Windows Kubernetes packages in all AKS versions to address CVE-2024-5321.
- AKS Ubuntu 22.04 image has been updated to AKSUbuntu-202407.29.0.
- Azure Linux image has been updated to AzureLinux-202407.29.0.
- AKS Windows Server 2019 image has been updated to AKSWindows-2019-17763.6054.240716.
- AKS Windows Server 2022 image has been updated to AKSWindows-2022-20348.2582.240716.