# Commit b626bc6

Merge pull request #79 from sighupio/feat/update-calico-add-compatibility-to-1.29

> Feat: update calico add compatibility to 1.29, release v1.17.0

2 parents c1c9200 + d252aca, commit b626bc6

## File tree

11 files changed: +749 −36 lines

### .drone.yml (+514 −7)

Large diff not rendered by default.

### README.md (+5 −4)

````diff
@@ -5,7 +5,7 @@
 </h1>
 <!-- markdownlint-enable MD033 -->

-![Release](https://img.shields.io/badge/Latest%20Release-v1.15.2-blue)
+![Release](https://img.shields.io/badge/Latest%20Release-v1.17.0-blue)
 ![License](https://img.shields.io/github/license/sighupio/fury-kubernetes-networking?label=License)
 ![Slack](https://img.shields.io/badge/slack-@kubernetes/fury-yellow.svg?logo=slack&label=Slack)

@@ -29,9 +29,9 @@ Kubernetes Fury Networking provides the following packages:

 | Package                    | Version  | Description |
 | -------------------------- | -------- | ----------- |
-| [calico](katalog/calico)   | `3.27.0` | [Calico][calico-page] CNI Plugin. For cluster with `< 50` nodes. |
+| [calico](katalog/calico)   | `3.27.3` | [Calico][calico-page] CNI Plugin. For cluster with `< 50` nodes. |
 | [cilium](katalog/cilium)   | `1.15.2` | [Cilium][cilium-page] CNI Plugin. For cluster with `< 200` nodes. |
-| [tigera](katalog/tigera)   | `1.32.3` | [Tigera Operator][tigera-page], a Kubernetes Operator for Calico, provides pre-configured installations for on-prem and for EKS in policy-only mode. |
+| [tigera](katalog/tigera)   | `1.32.7` | [Tigera Operator][tigera-page], a Kubernetes Operator for Calico, provides pre-configured installations for on-prem and for EKS in policy-only mode. |
 | [ip-masq](katalog/ip-masq) | `2.8.0`  | The `ip-masq-agent` configures iptables rules to implement IP masquerading functionality |

 > The resources in these packages are going to be deployed in `kube-system` namespace. Except for the operator.

@@ -45,6 +45,7 @@ Click on each package to see its full documentation.
 | `1.26.x` | :white_check_mark: | No known issues |
 | `1.27.x` | :white_check_mark: | No known issues |
 | `1.28.x` | :white_check_mark: | No known issues |
+| `1.29.x` | :white_check_mark: | No known issues |

 Check the [compatibility matrix][compatibility-matrix] for additional information on previous releases of the module.

@@ -67,7 +68,7 @@ Check the [compatibility matrix][compatibility-matrix] for additional informatio
 ```yaml
 bases:
   - name: networking
-    version: "v1.16.0"
+    version: "v1.17.0"
 ```

 > See `furyctl` [documentation][furyctl-repo] for additional details about `Furyfile.yml` format.
````

### docs/COMPATIBILITY_MATRIX.md (+11 −10)

```diff
@@ -1,15 +1,16 @@
 # Compatibility Matrix

-| Module Version / Kubernetes Version | 1.24.X | 1.25.X | 1.26.X | 1.27.X | 1.28.X |
-| ----------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
-| v1.10.0 | :white_check_mark: | | | | |
-| v1.11.0 | :white_check_mark: | :white_check_mark: | | | |
-| v1.12.0 | :white_check_mark: | :white_check_mark: | | | |
-| v1.12.1 | :white_check_mark: | :white_check_mark: | | | |
-| v1.12.2 | :white_check_mark: | :white_check_mark: | | | |
-| v1.14.0 | :white_check_mark: | :white_check_mark: | :white_check_mark: | | |
-| v1.15.0 | | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
-| v1.16.0 | | | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Module Version / Kubernetes Version | 1.24.X | 1.25.X | 1.26.X | 1.27.X | 1.28.X | 1.29.X |
+| ----------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
+| v1.10.0 | :white_check_mark: | | | | | |
+| v1.11.0 | :white_check_mark: | :white_check_mark: | | | | |
+| v1.12.0 | :white_check_mark: | :white_check_mark: | | | | |
+| v1.12.1 | :white_check_mark: | :white_check_mark: | | | | |
+| v1.12.2 | :white_check_mark: | :white_check_mark: | | | | |
+| v1.14.0 | :white_check_mark: | :white_check_mark: | :white_check_mark: | | | |
+| v1.15.0 | | :white_check_mark: | :white_check_mark: | :white_check_mark: | | |
+| v1.16.0 | | | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
+| v1.17.0 | | | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |

 :white_check_mark: Compatible
```

### docs/releases/v1.17.0.md (+32, new file)

````markdown
# Networking Core Module Release 1.17.0

Welcome to the latest release of the `Networking` module of [`Kubernetes Fury Distribution`](https://github.com/sighupio/fury-distribution) maintained by team SIGHUP.

This release updates some components and adds support for Kubernetes 1.29.

## Component Images 🚢

| Component         | Supported Version                                                                 | Previous Version |
| ----------------- | --------------------------------------------------------------------------------- | ---------------- |
| `calico`          | [`v3.27.3`](https://docs.tigera.io/calico/3.27/about/)                             | `v3.27.0`        |
| `cilium`          | [`v1.15.2`](https://github.com/cilium/cilium/releases/tag/v1.15.2)                | No update        |
| `ip-masq`         | [`v2.8.0`](https://github.com/kubernetes-sigs/ip-masq-agent/releases/tag/v2.8.0) | No update        |
| `tigera-operator` | [`v1.32.7`](https://github.com/tigera/operator/releases/tag/v1.32.7)             | `v1.32.3`        |

> Please refer to the individual release notes for detailed information on each release.

## Update Guide 🦮

### Process

1. Just deploy as usual:

```bash
kustomize build katalog/calico | kubectl apply -f -
# OR
kustomize build katalog/tigera/on-prem | kubectl apply -f -
# OR
kustomize build katalog/cilium | kubectl apply -f -
```

If you are upgrading from previous versions, please refer to the [`v1.16.0` release notes](https://github.com/sighupio/fury-kubernetes-networking/releases/tag/v1.16.0).
````

### katalog/calico/MAINTENANCE.md (+3 −3)

````diff
@@ -7,7 +7,7 @@ To update the Calico package with upstream, please follow the next steps:
 1. Download upstream manifests:

 ```bash
-export CALICO_VERSION=3.27.0
+export CALICO_VERSION=3.27.3
 curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/calico.yaml -o calico-${CALICO_VERSION}.yaml
 ```

@@ -20,7 +20,7 @@ Compare the `deploy.yaml` file with the downloaded `calico-${CALICO_VERSION}` fi
 3. Update the `kustomization.yaml` file with the right image versions.

 ```bash
-export CALICO_IMAGE_TAG=v3.27.0
+export CALICO_IMAGE_TAG=v3.27.3
 kustomize edit set image docker.io/calico/kube-controllers=registry.sighup.io/fury/calico/kube-controllers:${CALICO_IMAGE_TAG}
 kustomize edit set image docker.io/calico/cni=registry.sighup.io/fury/calico/cni:${CALICO_IMAGE_TAG}
 kustomize edit set image docker.io/calico/node=registry.sighup.io/fury/calico/node:${CALICO_IMAGE_TAG}

@@ -39,7 +39,7 @@ See <https://docs.tigera.io/calico/latest/operations/monitor/monitor-component-m
 1. Download the dashboard from upstream:

 ```bash
-export CALICO_VERSION=3.27.0
+export CALICO_VERSION=3.27.3
 # ⚠️ Assuming $PWD == root of the project
 # We take the `felix-dashboard.json` from the downloaded yaml, we are not deploying `typha`, so we don't need its dashboard.
 curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/grafana-dashboards.yaml | yq '.data["felix-dashboard.json"]' | sed 's/calico-demo-prometheus/prometheus/g' | jq > ./monitoring/dashboards/felix-dashboard.json
````

### katalog/calico/README.md (+3 −3)

```diff
@@ -21,9 +21,9 @@ The deployment of Calico consists of a daemon set running on every node (includi
 ## Image repository and tag

 - calico images:
-  - `calico/kube-controllers:v3.27.0`.
-  - `calico/cni:v3.27.0`.
-  - `calico/node:v3.27.0`.
+  - `calico/kube-controllers:v3.27.3`.
+  - `calico/cni:v3.27.3`.
+  - `calico/node:v3.27.3`.
 - calico repositories:
   - [https://github.com/projectcalico/kube-controllers](https://github.com/projectcalico/calico/tree/master/kube-controllers).
   - [https://github.com/projectcalico/cni-plugin](https://github.com/projectcalico/calico/tree/master/cni-plugin).
```

### katalog/calico/kustomization.yaml (+3 −3)

```diff
@@ -10,13 +10,13 @@ namespace: kube-system
 images:
   - name: docker.io/calico/cni
     newName: registry.sighup.io/fury/calico/cni
-    newTag: v3.27.0
+    newTag: v3.27.3
   - name: docker.io/calico/kube-controllers
     newName: registry.sighup.io/fury/calico/kube-controllers
-    newTag: v3.27.0
+    newTag: v3.27.3
   - name: docker.io/calico/node
     newName: registry.sighup.io/fury/calico/node
-    newTag: v3.27.0
+    newTag: v3.27.3

 # Resources needed for Monitoring
 resources:
```
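Bumps like the one above are mechanical: every `newTag` for the three Calico images moves from `v3.27.0` to `v3.27.3`. A minimal self-contained sketch of scripting such a bump with `sed` (the module's own MAINTENANCE docs use `kustomize edit set image` instead; the generated sample file below only mirrors the structure of the real `kustomization.yaml`):

```shell
# Generate a sample kustomization.yaml so the snippet is self-contained.
cat > kustomization.yaml <<'EOF'
images:
  - name: docker.io/calico/cni
    newName: registry.sighup.io/fury/calico/cni
    newTag: v3.27.0
  - name: docker.io/calico/kube-controllers
    newName: registry.sighup.io/fury/calico/kube-controllers
    newTag: v3.27.0
  - name: docker.io/calico/node
    newName: registry.sighup.io/fury/calico/node
    newTag: v3.27.0
EOF

# Rewrite every newTag line from the old tag to the new one.
OLD="v3.27.0" NEW="v3.27.3"
sed -i "s/newTag: ${OLD}\$/newTag: ${NEW}/" kustomization.yaml

# All three images should now point at the new tag.
grep -c "newTag: ${NEW}" kustomization.yaml   # prints 3
```

`kustomize edit set image` is the safer choice in practice because it parses the YAML instead of pattern-matching on it.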

### katalog/tests/calico/tigera.sh (+157, new file)

```bash
#!/bin/bash
# Copyright (c) 2024-present SIGHUP s.r.l All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.

# shellcheck disable=SC2154

load ./../helper

@test "Nodes in Not Ready state" {
  info
  nodes_not_ready() {
    kubectl get nodes --no-headers | awk '{print $2}' | uniq | grep -q NotReady
  }
  run nodes_not_ready
  [ "$status" -eq 0 ]
}

@test "Install Prerequisites" {
  info
  install() {
    kubectl apply -f 'https://raw.githubusercontent.com/sighupio/fury-kubernetes-monitoring/v3.1.0/katalog/prometheus-operator/crds/0servicemonitorCustomResourceDefinition.yaml'
    kubectl apply -f 'https://raw.githubusercontent.com/sighupio/fury-kubernetes-monitoring/v3.1.0/katalog/prometheus-operator/crds/0prometheusruleCustomResourceDefinition.yaml'
  }
  run install
  [ "$status" -eq 0 ]
}

@test "Install Tigera operator and calico operated" {
  info
  test() {
    apply katalog/tigera/on-prem
  }
  loop_it test 60 5
  status=${loop_it_result}
  [ "$status" -eq 0 ]
}

@test "Calico Kube Controller is Running" {
  info
  test() {
    kubectl get pods -l k8s-app=calico-kube-controllers -o json -n calico-system | jq '.items[].status.containerStatuses[].ready' | uniq | grep -q true
  }
  loop_it test 60 5
  status=${loop_it_result}
  [ "$status" -eq 0 ]
}

@test "Calico Node is Running" {
  info
  test() {
    kubectl get pods -l k8s-app=calico-node -o json -n calico-system | jq '.items[].status.containerStatuses[].ready' | uniq | grep -q true
  }
  loop_it test 60 5
  status=${loop_it_result}
  [ "$status" -eq 0 ]
}

@test "Nodes in ready State" {
  info
  test() {
    kubectl get nodes --no-headers | awk '{print $2}' | uniq | grep -q Ready
  }
  run test
  [ "$status" -eq 0 ]
}

@test "Apply whitelist-system-ns GlobalNetworkPolicy" {
  info
  install() {
    kubectl apply -f examples/globalnetworkpolicies/1.whitelist-system-namespace.yml
  }
  run install
  [ "$status" -eq 0 ]
}

@test "Create a non-whitelisted namespace with an app" {
  info
  install() {
    kubectl create ns test-1
    kubectl apply -f katalog/tests/calico/resources/echo-server.yaml -n test-1
    kubectl wait -n test-1 --for=condition=ready --timeout=120s pod -l app=echoserver
  }
  run install
  [ "$status" -eq 0 ]
}

@test "Test app within the same namespace" {
  info
  test() {
    kubectl create job -n test-1 isolated-test --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
    kubectl wait -n test-1 --for=condition=complete --timeout=30s job/isolated-test
  }
  run test
  [ "$status" -eq 0 ]
}

@test "Test app from a system namespace" {
  info
  test() {
    kubectl create job -n kube-system isolated-test --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
    kubectl wait -n kube-system --for=condition=complete --timeout=30s job/isolated-test
  }
  run test
  [ "$status" -eq 0 ]
}

@test "Test app from a different namespace" {
  info
  test() {
    kubectl create ns test-1-1
    kubectl create job -n test-1-1 isolated-test --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
    kubectl wait -n test-1-1 --for=condition=complete --timeout=30s job/isolated-test
  }
  run test
  [ "$status" -eq 0 ]
}

@test "Apply deny-all GlobalNetworkPolicy" {
  info
  install() {
    kubectl apply -f examples/globalnetworkpolicies/2000.deny-all.yml
  }
  run install
  [ "$status" -eq 0 ]
}

@test "Test app from the same namespace (isolated namespace)" {
  info
  test() {
    kubectl create job -n test-1 isolated-test-1 --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
    kubectl wait -n test-1 --for=condition=complete --timeout=30s job/isolated-test-1
  }
  run test
  [ "$status" -eq 1 ]
}

@test "Test app from a system namespace (isolated namespace)" {
  info
  test() {
    kubectl create job -n kube-system isolated-test-1 --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
    kubectl wait -n kube-system --for=condition=complete --timeout=30s job/isolated-test-1
  }
  run test
  [ "$status" -eq 1 ]
}

@test "Test app from a different namespace (isolated namespace)" {
  info
  test() {
    kubectl create job -n test-1-1 isolated-test-1 --image travelping/nettools -- curl http://echoserver.test-1.svc.cluster.local
    kubectl wait -n test-1-1 --for=condition=complete --timeout=30s job/isolated-test-1
  }
  run test
  [ "$status" -eq 1 ]
}
```
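The tests above use `run`, `info`, and a `loop_it` retry helper provided by the shared `helper.bash`. As an illustration only (the real helper may differ), a retry helper matching the `loop_it <command> <attempts> <delay>` calling convention seen in the tests could look like:

```shell
# Hypothetical sketch of a loop_it-style retry helper: run a command up to
# $2 times, sleeping $3 seconds between attempts, and record the final
# status in loop_it_result (0 on success, 1 if every attempt failed).
loop_it() {
  local cmd=$1 attempts=$2 delay=$3
  loop_it_result=1
  for ((i = 1; i <= attempts; i++)); do
    if "$cmd"; then
      loop_it_result=0
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# Example: a probe that fails twice, then succeeds on the third attempt.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }
loop_it flaky 5 0
echo "$loop_it_result"   # prints 0
```

Retrying like this is what lets the suite tolerate the delay between `kubectl apply` and the operator actually bringing the Calico pods up.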

### katalog/tests/helper.bash (+1 −1)

```diff
@@ -3,7 +3,7 @@

 apply (){
     kustomize build $1 >&2
-    kustomize build $1 | kubectl apply -f - 2>&3
+    kustomize build $1 | kubectl apply --server-side -f - 2>&3
 }

 delete (){
```

### katalog/tigera/MAINTENANCE.md (+3 −3)

````diff
@@ -11,7 +11,7 @@ To update the YAML file, run the following command:

 ```bash
 # assuming katalog/tigera is the root of the repository
-export CALICO_VERSION="3.27.0"
+export CALICO_VERSION="3.27.3"
 curl "https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/tigera-operator.yaml" --output operator/tigera-operator.yaml
 ```

@@ -28,7 +28,7 @@ To download the default configuration from upstream and update the file use the

 ```bash
 # assuming katalog/tigera is the root of the repository
-export CALICO_VERSION="3.27.0"
+export CALICO_VERSION="3.27.3"
 curl https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/custom-resources.yaml --output on-prem/custom-resources.yaml
 ```

@@ -50,7 +50,7 @@ To get the dashboards you can use the following commands:

 ```bash
 # ⚠️ Assuming $PWD == root of the project
-export CALICO_VERSION="3.27.0"
+export CALICO_VERSION="3.27.3"
 # we split the upstream file and store only the json files
 curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/grafana-dashboards.yaml | yq '.data["felix-dashboard.json"]' | sed 's/calico-demo-prometheus/prometheus/g' | jq > ./on-prem/monitoring/dashboards/felix-dashboard.json
 curl -L https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VERSION}/manifests/grafana-dashboards.yaml | yq '.data["typha-dashboard.json"]' | sed 's/calico-demo-prometheus/prometheus/g' | jq > ./on-prem/monitoring/dashboards/typa-dashboard.json
 ```
````

### katalog/tigera/operator/tigera-operator.yaml (+17 −2)

```diff
@@ -983,6 +983,13 @@ spec:
                 Loose]'
               pattern: ^(?i)(Disabled|Strict|Loose)?$
               type: string
+            bpfExcludeCIDRsFromNAT:
+              description: BPFExcludeCIDRsFromNAT is a list of CIDRs that are to
+                be excluded from NAT resolution so that host can handle them. A
+                typical usecase is node local DNS cache.
+              items:
+                type: string
+              type: array
             bpfExtToServiceConnmark:
               description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit
                 mark that is set on connections from an external client to a local

@@ -25102,6 +25109,14 @@ rules:
   verbs:
   - create
   - delete
+# In addition to the above, the operator should have the ability to delete their own resources during uninstallation.
+- apiGroups:
+  - operator.tigera.io
+  resources:
+  - installations
+  - apiservers
+  verbs:
+  - delete
 - apiGroups:
   - networking.k8s.io
   resources:

@@ -25273,7 +25288,7 @@ spec:
       dnsPolicy: ClusterFirstWithHostNet
       containers:
       - name: tigera-operator
-        image: quay.io/tigera/operator:v1.32.3
+        image: quay.io/tigera/operator:v1.32.7
         imagePullPolicy: IfNotPresent
         command:
         - operator

@@ -25291,7 +25306,7 @@ spec:
        - name: OPERATOR_NAME
          value: "tigera-operator"
        - name: TIGERA_OPERATOR_INIT_IMAGE_VERSION
-         value: v1.32.3
+         value: v1.32.7
         envFrom:
         - configMapRef:
             name: kubernetes-services-endpoint
```
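The new `bpfExcludeCIDRsFromNAT` field in the FelixConfiguration CRD above is only relevant when Calico's eBPF dataplane is enabled. A hypothetical usage example (the CIDR shown is the conventional node-local DNS cache address, an assumption for illustration, not something this commit configures):

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # Assumption: eBPF dataplane is in use; exclude the node-local DNS
  # cache address from BPF NAT so the host network stack handles it.
  bpfEnabled: true
  bpfExcludeCIDRsFromNAT:
    - 169.254.20.10/32
```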
