
ci: [Service Tags] add public ips with service tags for LBs during cluster creation #3277


Merged: 68 commits, merged Apr 8, 2025

Changes from all commits:
b596518
Create an outbound public ip for LB/Cilium cluster
shubham-pathak-03 Jun 20, 2024
f367851
create and attach public ip for cilium e2e cluster
shubham-pathak-03 Jun 25, 2024
29b1a92
Test cluster independent ip creation
shubham-pathak-03 Jun 25, 2024
e7cd70d
Test outbound public ip creation
shubham-pathak-03 Jun 25, 2024
3697054
Test outbound public ip creation
shubham-pathak-03 Jun 25, 2024
ec41733
Test outbound public ip creation with azcli creds
shubham-pathak-03 Jun 25, 2024
3e6f5c8
Test outbound public ip creation with azcli creds
shubham-pathak-03 Jun 25, 2024
87ea244
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
651a405
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
d390d45
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
edf70ee
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
cb2fa8f
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
2d17d5d
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
00549ce
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
bf50b77
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 25, 2024
17807c6
Merge branch 'master' into spathak/add-service-tag
shubham-pathak-03 Jun 25, 2024
19b5600
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 26, 2024
460a1cf
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 26, 2024
9a332d0
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 26, 2024
73228a4
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 26, 2024
f1c8247
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 26, 2024
926acd6
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 26, 2024
3cbbf71
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 27, 2024
99d5d42
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 27, 2024
aa70640
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 27, 2024
28cba9e
Add lb ip creation alias call for clusters in makefile
shubham-pathak-03 Jun 27, 2024
4a797f7
Add managed identity to public ip/load balancer
shubham-pathak-03 Jun 27, 2024
74efd8c
Add managed identity to public ip/load balancer
shubham-pathak-03 Jun 27, 2024
c631b0c
Test wo managed identity
shubham-pathak-03 Jun 27, 2024
e016323
Test wo managed identity
shubham-pathak-03 Jul 1, 2024
aec6803
Add Public ip to one cluster
shubham-pathak-03 Jul 1, 2024
273d39b
Add public to all cluster creations
shubham-pathak-03 Jul 1, 2024
d6de754
Add public ip
shubham-pathak-03 Jul 1, 2024
04ee4eb
Fix spacing
shubham-pathak-03 Jul 1, 2024
5fa58d0
Fix spacing
shubham-pathak-03 Jul 1, 2024
875354e
Merge branch 'master' into spathak/add-service-tag
shubham-pathak-03 Jul 1, 2024
6ab97b0
Merge branch 'master' into spathak/add-service-tag
shubham-pathak-03 Jul 11, 2024
748eb20
Add LB to win cni v1 cluster
shubham-pathak-03 Jul 11, 2024
bac845d
Add LB to win cni v1 cluster
shubham-pathak-03 Jul 11, 2024
2dc3613
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
5c9924f
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
6a285dd
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
e1e15cb
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
57854f8
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
d110100
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
f10249e
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
1d1d78a
Add ip-tag variable to makefile
shubham-pathak-03 Jul 11, 2024
88a8a92
Add ip-tag variable to makefile
shubham-pathak-03 Jul 12, 2024
64300df
updated service tag to 'DelegatedNetworkControllerTest'
k-routhu Nov 1, 2024
6d50417
Merge branch 'master' into spathak/add-service-tag
k-routhu Dec 13, 2024
9189f09
create public IP as target
k-routhu Dec 13, 2024
ad1101f
add ipv6 public ips to dualstack
k-routhu Dec 13, 2024
ca8cc6e
updated v6 ip
k-routhu Dec 13, 2024
b43dc44
remove space
k-routhu Dec 13, 2024
cae983b
added public ip for nodesubnet-byocni-nokubeproxy-up resource
k-routhu Dec 16, 2024
413f253
addressed comments on PR
k-routhu Dec 18, 2024
cdf97b3
parameterize ip v4 & v6
k-routhu Jan 15, 2025
8a67717
address comments
k-routhu Jan 21, 2025
95c0891
resolve merge conflict
k-routhu Jan 21, 2025
e8bf9c4
Merge branch 'master' into krouthu/service-tag
k-routhu Mar 7, 2025
6a62abe
address PR comments
k-routhu Mar 10, 2025
0fdb4df
Update hack/aks/Makefile
k-routhu Mar 11, 2025
22e90d4
Merge branch 'master' into krouthu/service-tag
k-routhu Apr 3, 2025
07dca3e
Merge branch 'master' into krouthu/service-tag
k-routhu Apr 4, 2025
362b911
test
k-routhu Apr 4, 2025
573c0ef
test
k-routhu Apr 4, 2025
56d2fa5
test
k-routhu Apr 4, 2025
38220a9
Merge branch 'master' into krouthu/service-tag
k-routhu Apr 7, 2025
86 changes: 62 additions & 24 deletions hack/aks/Makefile
Expand Up @@ -19,6 +19,11 @@ OS_SKU_WIN ?= Windows2022
REGION ?= westus2
VM_SIZE ?= Standard_B2s
VM_SIZE_WIN ?= Standard_B2s
IP_TAG ?= FirstPartyUsage=/DelegatedNetworkControllerTest
IP_PREFIX ?= serviceTaggedIp
PUBLIC_IP_ID ?= /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/publicIPAddresses
PUBLIC_IPv4 ?= $(PUBLIC_IP_ID)/$(IP_PREFIX)-$(CLUSTER)-v4
PUBLIC_IPv6 ?= $(PUBLIC_IP_ID)/$(IP_PREFIX)-$(CLUSTER)-v6
KUBE_PROXY_JSON_PATH ?= ./kube-proxy.json

# overrideable variables
Expand All @@ -43,6 +48,23 @@ azcfg: ## Set the $AZCLI to use aks-preview
@$(AZCLI) extension add --name aks-preview --yes
@$(AZCLI) extension update --name aks-preview

ip:
$(AZCLI) network public-ip create --name $(IP_PREFIX)-$(CLUSTER)-$(IPVERSION) \
--resource-group $(GROUP) \
--allocation-method Static \
--ip-tags $(IP_TAG) \
--location $(REGION) \
--sku Standard \
--tier Regional \
--version IP$(IPVERSION)

ipv4:
@$(MAKE) ip IPVERSION=v4

ipv6:
@$(MAKE) ip IPVERSION=v6
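For reference, a minimal sketch of the public-IP name and ip-tag that the `ip` target above composes and passes to `az network public-ip create`. The cluster name here is illustrative; in practice `CLUSTER` is supplied on the `make` command line:

```shell
# Illustrative expansion of the ip target's name/ip-tag arguments.
# "demo" is a hypothetical cluster name, not one from the PR.
IP_PREFIX="serviceTaggedIp"
IP_TAG="FirstPartyUsage=/DelegatedNetworkControllerTest"
CLUSTER="demo"
IPVERSION="v4"
# This is the name the Makefile would hand to `az network public-ip create --name ...`
echo "name=${IP_PREFIX}-${CLUSTER}-${IPVERSION} ip-tags=${IP_TAG}"
```

The same naming is what `PUBLIC_IPv4`/`PUBLIC_IPv6` rely on later when the cluster targets reference the IPs by resource ID.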


set-kubeconf: ## Adds the kubeconf for $CLUSTER
$(AZCLI) aks get-credentials -n $(CLUSTER) -g $(GROUP)

Expand Down Expand Up @@ -89,23 +111,22 @@ overlay-net-up: ## Create vnet, nodenet subnets
$(AZCLI) network vnet create -g $(GROUP) -l $(REGION) --name $(VNET) --address-prefixes 10.0.0.0/8 -o none
$(AZCLI) network vnet subnet create -g $(GROUP) --vnet-name $(VNET) --name nodenet --address-prefix 10.10.0.0/16 -o none


##@ AKS Clusters

byocni-up: swift-byocni-up ## Alias to swift-byocni-up
cilium-up: swift-cilium-up ## Alias to swift-cilium-up
up: swift-up ## Alias to swift-up


nodesubnet-byocni-nokubeproxy-up: rg-up overlay-net-up ## Brings up an NodeSubnet BYO CNI cluster without kube-proxy
nodesubnet-byocni-nokubeproxy-up: rg-up ipv4 overlay-net-up ## Brings up a NodeSubnet BYO CNI cluster without kube-proxy
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-sku standard \
--max-pods 250 \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin none \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--os-sku $(OS_SKU) \
Expand All @@ -114,14 +135,14 @@ nodesubnet-byocni-nokubeproxy-up: rg-up overlay-net-up ## Brings up an NodeSubne
--yes
@$(MAKE) set-kubeconf

overlay-byocni-up: rg-up overlay-net-up ## Brings up a Linux Overlay BYO CNI cluster
overlay-byocni-up: rg-up ipv4 overlay-net-up ## Brings up an Overlay BYO CNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-sku standard \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin none \
--network-plugin-mode overlay \
--pod-cidr 192.168.0.0/16 \
Expand All @@ -134,13 +155,14 @@ ifeq ($(OS),windows)
$(MAKE) windows-nodepool-up
endif

overlay-byocni-nokubeproxy-up: rg-up overlay-net-up ## Brings up an Overlay BYO CNI cluster without kube-proxy
overlay-byocni-nokubeproxy-up: rg-up ipv4 overlay-net-up ## Brings up an Overlay BYO CNI cluster without kube-proxy
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin none \
--network-plugin-mode overlay \
--pod-cidr 192.168.0.0/16 \
Expand All @@ -150,13 +172,14 @@ overlay-byocni-nokubeproxy-up: rg-up overlay-net-up ## Brings up an Overlay BYO
--yes
@$(MAKE) set-kubeconf

overlay-cilium-up: rg-up overlay-net-up ## Brings up an Overlay Cilium cluster
overlay-cilium-up: rg-up ipv4 overlay-net-up ## Brings up an Overlay Cilium cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin azure \
--network-dataplane cilium \
--network-plugin-mode overlay \
Expand All @@ -166,13 +189,14 @@ overlay-cilium-up: rg-up overlay-net-up ## Brings up an Overlay Cilium cluster
--yes
@$(MAKE) set-kubeconf

overlay-up: rg-up overlay-net-up ## Brings up an Overlay AzCNI cluster
overlay-up: rg-up ipv4 overlay-net-up ## Brings up an Overlay AzCNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin azure \
--network-plugin-mode overlay \
--pod-cidr 192.168.0.0/16 \
Expand All @@ -181,14 +205,14 @@ overlay-up: rg-up overlay-net-up ## Brings up an Overlay AzCNI cluster
--yes
@$(MAKE) set-kubeconf

swift-byocni-up: rg-up swift-net-up ## Bring up a SWIFT (Podsubnet) BYO CNI cluster
swift-byocni-up: rg-up ipv4 swift-net-up ## Bring up a SWIFT BYO CNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-sku standard \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin none \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--pod-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/podnet \
Expand All @@ -200,13 +224,14 @@ ifeq ($(OS),windows)
endif
@$(MAKE) set-kubeconf

swift-byocni-nokubeproxy-up: rg-up swift-net-up ## Bring up a SWIFT (Podsubnet) BYO CNI cluster without kube-proxy
swift-byocni-nokubeproxy-up: rg-up ipv4 swift-net-up ## Bring up a SWIFT BYO CNI cluster without kube-proxy, add managed identity and public ip
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin none \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--pod-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/podnet \
Expand All @@ -216,13 +241,14 @@ swift-byocni-nokubeproxy-up: rg-up swift-net-up ## Bring up a SWIFT (Podsubnet)
--yes
@$(MAKE) set-kubeconf

swift-cilium-up: rg-up swift-net-up ## Bring up a SWIFT Cilium cluster
swift-cilium-up: rg-up ipv4 swift-net-up ## Bring up a SWIFT Cilium cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin azure \
--network-dataplane cilium \
--aks-custom-headers AKSHTTPCustomFeatures=Microsoft.ContainerService/CiliumDataplanePreview \
Expand All @@ -232,52 +258,56 @@ swift-cilium-up: rg-up swift-net-up ## Bring up a SWIFT Cilium cluster
--yes
@$(MAKE) set-kubeconf

swift-up: rg-up swift-net-up ## Bring up a SWIFT AzCNI cluster
swift-up: rg-up ipv4 swift-net-up ## Bring up a SWIFT AzCNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin azure \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--pod-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/podnet \
--no-ssh-key \
--yes
@$(MAKE) set-kubeconf

swiftv2-multitenancy-cluster-up: rg-up
swiftv2-multitenancy-cluster-up: rg-up ipv4
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--network-plugin azure \
--network-plugin-mode overlay \
--kubernetes-version $(K8S_VER) \
--nodepool-name "mtapool" \
--node-vm-size $(VM_SIZE) \
--node-count 2 \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--nodepool-tags fastpathenabled=true \
--no-ssh-key \
--yes
@$(MAKE) set-kubeconf

swiftv2-dummy-cluster-up: rg-up swift-net-up ## Bring up a SWIFT AzCNI cluster
swiftv2-dummy-cluster-up: rg-up ipv4 swift-net-up ## Bring up a SWIFT AzCNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--network-plugin azure \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--pod-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/podnet \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--no-ssh-key \
--yes
@$(MAKE) set-kubeconf

# The below Vnet Scale clusters are currently only in private preview and available with Kubernetes 1.28
# These AKS clusters can only be created in a limited subscription listed here:
# https://dev.azure.com/msazure/CloudNativeCompute/_git/aks-rp?path=/resourceprovider/server/microsoft.com/containerservice/flags/network_flags.go&version=GBmaster&line=134&lineEnd=135&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents
vnetscale-swift-byocni-up: rg-up vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT BYO CNI cluster
vnetscale-swift-byocni-up: rg-up ipv4 vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT BYO CNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin none \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--pod-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/podnet \
Expand All @@ -287,13 +317,14 @@ vnetscale-swift-byocni-up: rg-up vnetscale-swift-net-up ## Bring up a Vnet Scale
--yes
@$(MAKE) set-kubeconf

vnetscale-swift-byocni-nokubeproxy-up: rg-up vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT (Podsubnet) BYO CNI cluster without kube-proxy
vnetscale-swift-byocni-nokubeproxy-up: rg-up ipv4 vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT BYO CNI cluster without kube-proxy
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin none \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--pod-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/podnet \
Expand All @@ -304,13 +335,14 @@ vnetscale-swift-byocni-nokubeproxy-up: rg-up vnetscale-swift-net-up ## Bring up
--yes
@$(MAKE) set-kubeconf

vnetscale-swift-cilium-up: rg-up vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT Cilium cluster
vnetscale-swift-cilium-up: rg-up ipv4 vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT Cilium cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin azure \
--network-dataplane cilium \
--aks-custom-headers AKSHTTPCustomFeatures=Microsoft.ContainerService/CiliumDataplanePreview \
Expand All @@ -321,13 +353,14 @@ vnetscale-swift-cilium-up: rg-up vnetscale-swift-net-up ## Bring up a Vnet Scale
--yes
@$(MAKE) set-kubeconf

vnetscale-swift-up: rg-up vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT AzCNI cluster
vnetscale-swift-up: rg-up ipv4 vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT AzCNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--network-plugin azure \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
--pod-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/podnet \
Expand All @@ -336,13 +369,14 @@ vnetscale-swift-up: rg-up vnetscale-swift-net-up ## Bring up a Vnet Scale SWIFT
--yes
@$(MAKE) set-kubeconf

cniv1-up: rg-up overlay-net-up ## Bring up a CNIv1 cluster
cniv1-up: rg-up ipv4 overlay-net-up ## Bring up a CNIv1 cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4) \
--max-pods 250 \
--network-plugin azure \
--vnet-subnet-id /subscriptions/$(SUB)/resourceGroups/$(GROUP)/providers/Microsoft.Network/virtualNetworks/$(VNET)/subnets/nodenet \
Expand All @@ -354,13 +388,14 @@ ifeq ($(OS),windows)
$(MAKE) windows-nodepool-up
endif

dualstack-overlay-up: rg-up overlay-net-up ## Brings up an dualstack Overlay cluster with Linux node only
dualstack-overlay-up: rg-up ipv4 ipv6 overlay-net-up ## Brings up a dualstack Overlay cluster with Linux node only
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4),$(PUBLIC_IPv6) \
--network-plugin azure \
--network-plugin-mode overlay \
--subscription $(SUB) \
Expand All @@ -370,13 +405,14 @@ dualstack-overlay-up: rg-up overlay-net-up ## Brings up an dualstack Overlay clu
--yes
@$(MAKE) set-kubeconf

dualstack-overlay-byocni-up: rg-up overlay-net-up ## Brings up an dualstack Overlay BYO CNI cluster
dualstack-overlay-byocni-up: rg-up ipv4 ipv6 overlay-net-up ## Brings up a dualstack Overlay BYO CNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4),$(PUBLIC_IPv6) \
--network-plugin none \
--network-plugin-mode overlay \
--subscription $(SUB) \
Expand All @@ -389,13 +425,14 @@ ifeq ($(OS),windows)
$(MAKE) windows-nodepool-up
endif

cilium-dualstack-up: rg-up overlay-net-up ## Brings up a Cilium Dualstack Overlay cluster with Linux node only
cilium-dualstack-up: rg-up ipv4 ipv6 overlay-net-up ## Brings up a Cilium Dualstack Overlay cluster with Linux node only
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4),$(PUBLIC_IPv6) \
--network-plugin azure \
--network-plugin-mode overlay \
--network-dataplane cilium \
Expand All @@ -406,13 +443,14 @@ cilium-dualstack-up: rg-up overlay-net-up ## Brings up a Cilium Dualstack Overla
--yes
@$(MAKE) set-kubeconf

dualstack-byocni-nokubeproxy-up: rg-up overlay-net-up ## Brings up a Dualstack overlay BYOCNI cluster with Linux node only and no kube-proxy
dualstack-byocni-nokubeproxy-up: rg-up ipv4 ipv6 overlay-net-up ## Brings up a Dualstack overlay BYOCNI cluster with Linux node only and no kube-proxy
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--auto-upgrade-channel $(AUTOUPGRADE) \
--node-os-upgrade-channel $(NODEUPGRADE) \
--kubernetes-version $(K8S_VER) \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--load-balancer-outbound-ips $(PUBLIC_IPv4),$(PUBLIC_IPv6) \
--network-plugin none \
--network-plugin-mode overlay \
--subscription $(SUB) \
Expand Down
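As a hedged sketch of the variable composition at the top of the Makefile (the subscription, resource group, and cluster names below are placeholders, not values from this PR): the dualstack targets join the v4 and v6 resource IDs with a comma for `--load-balancer-outbound-ips`.

```shell
# Illustrative values; the real Makefile substitutes $(SUB), $(GROUP), $(CLUSTER).
SUB="00000000-0000-0000-0000-000000000000"
GROUP="demo-rg"
CLUSTER="demo"
IP_PREFIX="serviceTaggedIp"
PUBLIC_IP_ID="/subscriptions/${SUB}/resourceGroups/${GROUP}/providers/Microsoft.Network/publicIPAddresses"
PUBLIC_IPv4="${PUBLIC_IP_ID}/${IP_PREFIX}-${CLUSTER}-v4"
PUBLIC_IPv6="${PUBLIC_IP_ID}/${IP_PREFIX}-${CLUSTER}-v6"
# Dualstack targets pass both IDs, comma-separated, to --load-balancer-outbound-ips:
echo "${PUBLIC_IPv4},${PUBLIC_IPv6}"
```

Single-stack targets pass only `PUBLIC_IPv4`; this is why the dualstack targets depend on both the `ipv4` and `ipv6` make targets while the others depend on `ipv4` alone.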