Commit 8dee07d

committed
Bump workflow and cluster versions
1 parent 22b1095 commit 8dee07d

File tree: 4 files changed, +54 additions, -49 deletions


.github/workflows/deploy.yaml

Lines changed: 2 additions & 2 deletions
```diff
@@ -19,10 +19,10 @@ jobs:
     runs-on: ubuntu-latest

     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4

       - name: Login to Azure
-        uses: azure/login@v2.1.1
+        uses: azure/login@v2
         with:
           client-id: ${{ secrets.AZURE_CLIENT_ID }}
           tenant-id: ${{ secrets.AZURE_TENANT_ID }}
```

Cluster.bicep

Lines changed: 4 additions & 7 deletions
```diff
@@ -114,9 +114,9 @@ param additionalNodePoolProfiles array = []

 @description('''
 Optional. The base version of Kubernetes to use. Node pools are set to auto patch, so they only use the 'major.minor' part.
-Defaults to 1.28
+Defaults to 1.30
 ''')
-param kubernetesVersion string = '1.28'
+param kubernetesVersion string = '1.30'

 @description('''Optional. Controls automatic upgrades:
 - none. No automatic patching
@@ -218,7 +218,7 @@ module waitForRole 'modules/deploymentScript.bicep' = {
   params: {
     name: 'waitForRoleAssignment'
     location: location
-    azPowerShellVersion : '11.0'
+    azPowerShellVersion : '13.2'
     userAssignedIdentityResourceID: controlPlaneId.outputs.id
     timeout: 'PT60M'
     scriptContent : join([
@@ -253,8 +253,6 @@ module waitForRole 'modules/deploymentScript.bicep' = {
 // }


-
-
 module keyVault 'modules/keyVault.bicep' = {
   name: '${deploymentName}_keyvault'
   params: {
@@ -313,10 +311,9 @@ module fluxId 'modules/userAssignedIdentity.bicep' = {
 @description('Optional. If true, skips Flux extension (you can still deploy it later, or by hand).')
 param installFluxManually bool = false

-// // Managed Flux
+// Managed Flux (obviously depends on the fluxId which depends on aks)
 module flux 'modules/flux.bicep' = if (!installFluxManually) {
   name: '${deploymentName}_flux'
-  dependsOn: [ aks, fluxId ]
   params: {
     baseName: baseName
     identityClientId: fluxId.outputs.clientId
```
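The `kubernetesVersion` description above notes that node pools auto-patch, so only the 'major.minor' part of the version is used; the agent-pool module trims it with the Bicep expression `join(take(split(kubernetesVersion, '.'), 2), '.')`. A quick Python sketch of that trimming, for illustration only (not part of the template):

```python
def major_minor(kubernetes_version: str) -> str:
    # Mirrors the Bicep expression join(take(split(v, '.'), 2), '.'):
    # split on '.', keep at most the first two segments, rejoin.
    return ".".join(kubernetes_version.split(".")[:2])

print(major_minor("1.30"))    # already major.minor, passes through as "1.30"
print(major_minor("1.30.4"))  # a full patch version is trimmed to "1.30"
```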

README.md

Lines changed: 18 additions & 8 deletions
````diff
@@ -46,9 +46,9 @@ New-AzResourceGroupDeployment @Deployment -OutVariable Results

 ## GitOps Configuration

-One thing to note is that I am using a _public_ repository ([PoshCode/Cluster](/PoshCode/Cluster)) for my GitOps configuration. Because it's public, there's no need to configure a PAT token or anything for Flux to be able to access it.
+One thing to note is that I am using a _public_ repository ([PoshCode/Cluster](/PoshCode/Cluster)) for my GitOps configuration. Because it's public, there's no need to configure any sort of authentication tokens for Flux to be able to access it.

-I'm currently using the Azure Kubernetes Flux Extension to install Flux CD for GitOps. That dramatically simplifies everything, because the Bicep deployment is literally all that's required to deploy the working cluster. However, if you needed to configure the credentials, you would just pass `gitOpsGitUsername` and `gitOpsGitPassword` as part of the `TemplateParameterObject`. There is a feature coming later this year to Flux to support Workflow Identity for git authentication, but for now you need to use a read-only deploy token or something.
+I'm currently using the Azure Kubernetes Flux Extension to install Flux CD for GitOps. This dramatically simplifies configuration for Flux: when the Bicep deployment is complete, Flux is already running on the cluster.

 ### Manually bootstrapping Flux

@@ -58,31 +58,41 @@ If you wanted to install flux by hand on an existing cluster, it can be as simpl
 flux bootstrap github --owner PoshCode --repository cluster --path=clusters/poshcode
 ```

-But if you need to customize workload identity, it can get a lot more complex, because you'll need to patch the flux deployment.
+It can get a bit more complex if you need to customize workload identity, but Workload Identity is now supported for access to [Azure DevOps](https://fluxcd.io/flux/components/source/gitrepositories/#azure) and [GitHub](https://fluxcd.io/flux/components/source/gitrepositories/#github), at least.

 ## CURRENT STATUS WARNING

 I'm playing with Cilium Gateway API, so I've set the network plugin to "none" so that I can take control of the cilium install.

+Since the Gateway API in Cilium is part of their Service Mesh, it doesn't seem Azure's AKS team is too keen on getting it working out of the box, so I have to install it manually.
+
+NOTE: in order to use the Gateway API, you need to install the Gateway CRDs. That's handled after the cluster install by Flux.
+However, Cilium CNI has to be installed _before the nodes can even connect_, so it's basically a multipart install, which I have not automated:
+
+1. Install the cluster with the network plugin set to "none".
+2. Install Cilium CNI, and then the nodes will come up.
+3. Install the Gateway API CRDs, and then the Gateway API will be available.
+4. "Upgrade" cilium to enable the Gateway API.
+
 Installing the cilium tools is as simple as downloading the right release from their GitHub release pages and unzipping.

 ```PowerShell
-Install-GitHubRelease cilium cilium
+Install-GitHubRelease cilium cilium-cli
 Install-GitHubRelease cilium hubble
 ```

 And installing it into the AKS cluster is just this, using the same `"rg-$name"` value as the resource group deployment:

 ```PowerShell
-cilium install --version 1.15.3 --set azure.resourceGroup="rg-$name" --set kubeProxyReplacement=true --set gatewayAPI.enabled=true
+cilium install --version 1.17.0 --set azure.resourceGroup="rg-$name" --set kubeProxyReplacement=true --set gatewayAPI.enabled=true
 ```

 If you want to complete the deployment in a single pass, you have to `Import-AzAksCredential` as soon as the cluster shows up in Azure, and then once `kubectl get nodes` shows all your nodes (they won't come up ready, because they won't have a network), you can run the `cilium install` while Azure is showing the Flux deployment is still running (it won't complete successfully until after cilium is installed, so if you don't run the install, it will fail after the time-out, and you'll have to re-run the deployment).

-Currently, I'm running it with prometheus enabled, which is more like:
+Once the cluster is up, and you've installed the Gateway API CRDs, you can run the `cilium upgrade` command to enable the Gateway API. I'm _also_ enabling hubble and prometheus:

 ```PowerShell
-cilium upgrade --version 1.15.3 --set azure.resourceGroup="rg-$name" `
+cilium upgrade --version 1.17.0 --set azure.resourceGroup="rg-$name" `
     --set kubeProxyReplacement=true `
     --set gatewayAPI.enabled=true `
     --set hubble.enabled=true `
@@ -92,4 +102,4 @@ cilium upgrade --version 1.15.3 --set azure.resourceGroup="rg-$name" `
     --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"
 ```

-I haven't even tried to automate this, because I'm honestly not sure I'll keep using cilium gateway, and I still hope the AKS team will expose settings for this option.
+Given it's been more than a year, and Azure's "CNI powered by Cilium" still lists L7 policy enforcement as a limitation, I still have not tried to use that _and_ cilium gateway, so I should probably go ahead and get the Cilium Helm Chart into my GitOps repo 😒
````
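The single-pass flow the README describes (wait for nodes to register, then run `cilium install` before the Flux deployment times out) could be scripted. A minimal sketch in Python; `wait_for_nodes` is a hypothetical helper, and the node-counting callable is injected so that in practice it could wrap `kubectl get nodes --no-headers`:

```python
import time
from typing import Callable

def wait_for_nodes(count_nodes: Callable[[], int], expected: int,
                   timeout_s: float = 1800, poll_s: float = 15) -> bool:
    """Poll until `expected` nodes have registered with the cluster.

    Nodes stay NotReady until Cilium CNI is installed, so we only wait
    for them to *appear*, not to become Ready.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if count_nodes() >= expected:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_s)

# Stubbed here; a real count_nodes would shell out to kubectl, e.g.
# len(subprocess.run(["kubectl", "get", "nodes", "--no-headers"],
#                    capture_output=True, text=True).stdout.splitlines())
assert wait_for_nodes(lambda: 3, expected=3, timeout_s=1, poll_s=0)
```

Once this returns `True`, the `cilium install` command above can run while the Flux deployment is still in progress.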

modules/managedCluster.bicep

Lines changed: 30 additions & 32 deletions
```diff
@@ -198,7 +198,7 @@ param kubeletIdentityId string
 // param logAnalyticsWorkspaceResourceID string

 @description('A private AKS Kubernetes cluster')
-resource cluster 'Microsoft.ContainerService/managedClusters@2024-01-01' = {
+resource cluster 'Microsoft.ContainerService/managedClusters@2024-10-01' = {
   name: 'aks-${baseName}'
   location: location
   tags: tags
@@ -404,56 +404,54 @@ resource cluster 'Microsoft.ContainerService/managedClusters@2024-01-01' = {

 resource agentPools 'Microsoft.ContainerService/managedClusters/agentPools@2024-01-01' = [for (pool, index) in additionalNodePools: {
   // The default is 6 characters long because that's the max for Windows nodes
-  name: contains(pool, 'name') ? pool.name : format('npuser{0:D2}', index)
+  name: pool.?name ?? format('npuser{0:D2}', index)
   parent: cluster
   properties: {
-    availabilityZones: contains(pool, 'availabilityZones') ? pool.availabilityZones : []
+    availabilityZones: pool.?availabilityZones ?? []
     // capacityReservationGroupID: 'string'
-    count: contains(pool, 'count') ? pool.count : 1
+    count: pool.?count ?? 1
     // creationData: { sourceResourceId: 'string' }
-    enableAutoScaling: contains(pool, 'enableAutoScaling') ? pool.enableAutoScaling : true
+    enableAutoScaling: pool.?enableAutoScaling ?? true
     // enableCustomCATrust: contains(pool, 'enableCustomCATrust') ? pool.enableCustomCATrust : false
-    enableEncryptionAtHost: contains(pool, 'enableEncryptionAtHost') ? pool.enableEncryptionAtHost : false
-    enableFIPS: contains(pool, 'enableFIPS') ? pool.enableFIPS : false
-    enableNodePublicIP: contains(pool, 'enableNodePublicIP') ? pool.enableNodePublicIP : false
-    enableUltraSSD: contains(pool, 'enableUltraSSD') ? pool.enableUltraSSD : false
+    enableEncryptionAtHost: pool.?enableEncryptionAtHost ?? false
+    enableFIPS: pool.?enableFIPS ?? false
+    enableNodePublicIP: pool.?enableNodePublicIP ?? false
+    enableUltraSSD: pool.?enableUltraSSD ?? false
     // gpuInstanceProfile: 'string'
     // hostGroupID: 'string'
     // podSubnetID: 'string'
-    kubeletConfig: contains(pool, 'kubeletConfig') ? pool.kubeletConfig : null // If not null, causes error: CustomKubeletConfig or CustomLinuxOSConfig can not be changed for this operation.
-    kubeletDiskType: contains(pool, 'kubeletDiskType') ? pool.kubeletDiskType : 'OS'
-    linuxOSConfig: contains(pool, 'linuxOSConfig') ? pool.linuxOSConfig : null
-    maxCount: contains(pool, 'maxCount') ? pool.maxCount : 10 // int
-    maxPods: contains(pool, 'maxPods') ? pool.maxPods : maxPodsPerNode // int
-    minCount: contains(pool, 'minCount') ? pool.minCount : 1 // int
-    mode: contains(pool, 'mode') ? pool.mode : 'User' // 'string'
-    networkProfile: contains(pool, 'networkProfile') ? pool.networkProfile : {}
-
-    nodeLabels: contains(pool, 'nodeLabels') ? pool.nodeLabels : {} // {}
+    kubeletConfig: pool.?kubeletConfig ?? null
+    kubeletDiskType: pool.?kubeletDiskType ?? 'OS'
+    linuxOSConfig: pool.?linuxOSConfig ?? null
+    maxCount: pool.?maxCount ?? 10
+    maxPods: pool.?maxPods ?? maxPodsPerNode
+    minCount: pool.?minCount ?? 1
+    mode: pool.?mode ?? 'User'
+    networkProfile: pool.?networkProfile ?? {}
+    nodeLabels: pool.?nodeLabels ?? {}
     // nodePublicIPPrefixID: Doesn't support being empty
-    nodeTaints: contains(pool, 'nodeTaints') ? pool.nodeTaints : [] // ['string' ]
+    nodeTaints: pool.?nodeTaints ?? []
     orchestratorVersion: join(take(split(kubernetesVersion, '.'), 2), '.')
-    osDiskSizeGB: contains(pool, 'osDiskSizeGB') ? pool.osDiskSizeGB : 128 // int
-    osDiskType: contains(pool, 'osDiskType') ? pool.osDiskType : 'Ephemeral' // 'string'
-    osSKU: contains(pool, 'osSKU') ? pool.osSKU : 'Ubuntu' // 'string'
-    osType: contains(pool, 'osType') ? pool.osType : 'Linux' // 'string'
+    osDiskSizeGB: pool.?osDiskSizeGB ?? 128
+    osDiskType: pool.?osDiskType ?? 'Ephemeral'
+    osSKU: pool.?osSKU ?? 'Ubuntu'
+    osType: pool.?osType ?? 'Linux'
     // podSubnetID: 'string'
     // powerState: contains(pool,'powerState') ? pool.powerState : //
     // proximityPlacementGroupID: contains(pool,'proximityPlacementGroupID') ? pool.proximityPlacementGroupID : // 'string'
-    scaleDownMode: contains(pool, 'scaleDownMode') ? pool.scaleDownMode : 'Delete' // 'string'
-    scaleSetEvictionPolicy: contains(pool, 'scaleSetEvictionPolicy') ? pool.scaleSetEvictionPolicy : 'Delete' // 'string'
+    scaleDownMode: pool.?scaleDownMode ?? 'Delete'
+    scaleSetEvictionPolicy: pool.?scaleSetEvictionPolicy ?? 'Delete'
     // scaleSetPriority: contains(pool,'scaleSetPriority') ? pool.scaleSetPriority : 'Regular' // causes error: Changing property 'properties.ScaleSetPriority' is not allowed.
     // spotMaxPrice: contains(pool,'spotMaxPrice') ? pool.spotMaxPrice : 0
-    type: contains(pool, 'type') ? pool.type : 'VirtualMachineScaleSets'
-    tags: contains(pool, 'tags') ? pool.tags : tags // tags
-    upgradeSettings: contains(pool, 'upgradeSettings') ? pool.upgradeSettings : { maxSurge: '33%' }
-    vmSize: contains(pool, 'vmSize') ? pool.vmSize : 'Standard_DS2_v2' // 'string'
+    type: pool.?type ?? 'VirtualMachineScaleSets'
+    tags: pool.?tags ?? tags
+    upgradeSettings: pool.?upgradeSettings ?? { maxSurge: '33%' }
+    vmSize: pool.?vmSize ?? 'Standard_DS2_v2'
     // vnetSubnetID: contains(pool, 'vnetSubnetID') ? pool.vnetSubnetID : nodeSubnetId // 'string'
     // windowsProfile: contains(pool,'windowsProfile') ? pool.windowsProfile : {} // { }
-    workloadRuntime: contains(pool, 'workloadRuntime') ? pool.workloadRuntime : 'OCIContainer' // 'string'
+    workloadRuntime: pool.?workloadRuntime ?? 'OCIContainer'
   }
 }]
-
 @description('The id of the AKS cluster')
 output id string = cluster.id
```
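The bulk of this diff replaces the verbose `contains(pool, 'key') ? pool.key : default` pattern with Bicep's safe-dereference and null-coalescing operators, `pool.?key ?? default`. The behavior is roughly that of a dictionary lookup with a fallback; a Python analogy for illustration only (not part of the module):

```python
pool = {"count": 3, "mode": "User"}

# Old Bicep style: contains(pool, 'count') ? pool.count : 1
count_old = pool["count"] if "count" in pool else 1

# New Bicep style: pool.?count ?? 1 (safe-dereference, then coalesce)
count_new = pool.get("count", 1)

assert count_old == count_new == 3
assert {}.get("count", 1) == 1  # a missing key falls back to the default
```

One caveat: Bicep's `??` also coalesces an explicit null value, whereas `dict.get` only applies the default when the key is absent, so the analogy is approximate.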
