In the second cluster (acting as gateway server) you can find the following resources:

```{admonition} Note
If the status reports **Error**, check the FAQ section [Debug gateway-to-gateway communication issues](../../faq/faq.md#debug-gateway-to-gateway-communication-issues) for hints on how to solve the issue.
```
This section contains the answers to the most frequently asked questions by the community (Slack, GitHub, etc.).
## Table of contents
* [General](FAQGeneralSection)
  * [Cluster limits](FAQClusterLimits)
  * [Why are DaemonSet pods (e.g., Kube-Proxy, CNI pods) scheduled on virtual nodes in OffloadingBackOff?](FAQDaemonsetBackOff)
* [Installation](FAQInstallationSection)
  * [Upgrade the Liqo version installed on a cluster](FAQUpgradeLiqo)
  * [How to install Liqo on DigitalOcean](FAQInstallLiqoDO)
* [Peering](FAQPeeringSection)
  * [How to force unpeer a cluster?](FAQForceUnpeer)
  * [Is it possible to peer clusters using an ingress?](FAQPeerOverIngress)

(FAQGeneralSection)=
## General
(FAQClusterLimits)=
### Cluster limits
The official Kubernetes documentation presents some [general best practices and considerations for large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/), defining some cluster limits.
For instance, the limitation of 110 pods per node is not enforced on Liqo virtual nodes.
The same consideration applies to the maximum number of nodes (5000), since all the remote nodes are hidden behind a single virtual node.
You can find additional information [here](https://github.com/liqotech/liqo/issues/1863).

(FAQDaemonsetBackOff)=
### Why are DaemonSet pods (e.g., Kube-Proxy, CNI pods) scheduled on virtual nodes in OffloadingBackOff?
The virtual nodes generated by Liqo have a [taint](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) that prevents pods from being scheduled on a virtual node (and hence on the remote cluster) unless the pod is created in an [offloaded namespace](../usage/namespace-offloading.md).
To prevent DaemonSet pods from being scheduled on virtual nodes (and hence from ending up in OffloadingBackOff), you can add a `nodeAffinity` rule to the DaemonSet spec, as shown in the sketch below.
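A minimal sketch of such a rule, assuming the standard `liqo.io/type` label carried by Liqo virtual nodes (the exact manifest in the original documentation may differ):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        # Schedule only on nodes that do NOT carry the liqo.io/type label,
        # i.e., keep the pod off Liqo virtual nodes.
        - key: liqo.io/type
          operator: DoesNotExist
```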
This ensures that a pod is **not** created on any nodes with the `liqo.io/type` label.

(FAQInstallationSection)=
## Installation
(FAQUpgradeLiqo)=
### Upgrade the Liqo version installed on a cluster
Unfortunately, this feature is not currently fully supported.
At the moment, upgrading through `liqoctl install` or `helm upgrade` will update manifests and Docker images (excluding the *virtual-kubelet* one, as it is created dynamically by the *controller-manager*), but it will not update any CRD-related changes (see this [issue](https://github.com/liqotech/liqo/issues/1831) for further details).
The easiest way is to unpeer all existing clusters, then uninstall and reinstall Liqo on all of them (making sure to install the same Liqo version on all peered clusters).
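A minimal sketch of this procedure on each cluster using `liqoctl` (the command names come from the standard `liqoctl` CLI, but flags and provider arguments vary between versions, so treat this as illustrative):

```bash
# After unpeering all peers, remove Liqo together with its CRDs...
liqoctl uninstall --purge
# ...then reinstall the desired version (the provider and flags depend on your environment)
liqoctl install <provider>
```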
(FAQInstallLiqoDO)=
### How to install Liqo on DigitalOcean
The installation of Liqo on a DigitalOcean cluster does not work out of the box.
The problem is related to the `liqo-gateway` service and the DigitalOcean load balancer health check, which does not support UDP-based health checks.
This [issue](https://github.com/liqotech/liqo/issues/1668) presents a step-by-step solution to overcome this problem.

(FAQPeeringSection)=
## Peering
(FAQForceUnpeer)=
### How to force unpeer a cluster?
It is highly recommended to first unpeer all existing `foreignclusters` before upgrading or uninstalling Liqo.
```{warning}
This is not a recommended solution; use it only as a last resort if no other viable option is available.
Future upgrades will make it easier to unpeer a cluster or uninstall Liqo.
```
(FAQPeerOverIngress)=
### Is it possible to peer clusters using an ingress?
It is possible to expose the `liqo-auth` service through an ingress, instead of a NodePort/LoadBalancer service, by setting the appropriate Helm values.
Make sure to set `auth.ingress.enable` to `true` and configure the rest of the values accordingly.
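A minimal sketch of the corresponding Helm values (only `auth.ingress.enable` is taken from the text above; any other ingress-related fields depend on the chart version in use and are shown here only as placeholders):

```yaml
auth:
  ingress:
    # Expose the liqo-auth service through an Ingress
    # instead of a NodePort/LoadBalancer service.
    enable: true
    # Host, class, annotations, etc. depend on the chart version in use.
```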
```{admonition} Note
The `liqo-gateway` service cannot be exposed through a common ingress (proxies like nginx work with HTTP only), because it uses UDP.
```
## Network
### Debug gateway-to-gateway communication issues
Follow these steps only if you are receiving an **error** in the **connection** resources.
Run the following command to check the status of the connections:
```bash
kubectl get connection -A
```
#### Check the UDP service
Liqo exposes the **gateway server** using a UDP service.

In the majority of cases, the issue is related to missing support for UDP services in your cloud provider or on-premise environment.

You can manually test whether your UDP **LoadBalancer** or **NodePort** services are working correctly by creating a dummy UDP echo server:
```yaml
# echo-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: ghcr.io/liqotech/udpecho
        ports:
        - containerPort: 5000
          protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server-lb
spec:
  selector:
    app: echo-server
  type: LoadBalancer
  ports:
  - protocol: UDP
    port: 5000
    targetPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server-np
spec:
  selector:
    app: echo-server
  type: NodePort
  ports:
  - protocol: UDP
    port: 5000
    targetPort: 5000
```
Save this file and apply the manifests to create the echo server and expose it:
```bash
kubectl apply -f echo-server.yaml
```
Now you can test the UDP service exposed by the echo server using the following command:
```bash
nc -u <IP> <PORT>
```
In case you want to test a **LoadBalancer** service, replace `<IP>` and `<PORT>` with the values of the `echo-server-lb` service. Otherwise, if you are testing the **NodePort** connectivity, replace `<IP>` with the IP of one of your nodes and `<PORT>` with the NodePort value of the `echo-server-np` service.
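If helpful, one possible way to retrieve these values with `kubectl` (the jsonpath expressions are illustrative and assume the services defined above):

```bash
# LoadBalancer: external IP and UDP port of echo-server-lb
kubectl get svc echo-server-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}{" "}{.spec.ports[0].port}{"\n"}'

# NodePort: pick the IP of any node, then read the allocated nodePort of echo-server-np
kubectl get nodes -o wide
kubectl get svc echo-server-np -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
```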
After you have run the command, you can type a message and press `Enter`. If you see the message echoed back in upper case, the UDP service is working correctly.
### Debug pod-to-pod communication issues
These steps are intended to gather information about network issues between **two clusters**, to be shared with the maintainers when asking for help.
Before starting, check the **connection** resources on your clusters using `kubectl get connection -A`.
If you get an error in their status, refer to the [Debug gateway-to-gateway communication issues](./faq.md#debug-gateway-to-gateway-communication-issues) section.
```{warning}
It's strongly recommended to use 2 clusters with different **pod CIDRs** for debugging.
```
#### Deploy debug pods
Create 2 namespaces, one in each cluster, and deploy a debug pod in each namespace.
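The exact manifests for this step are not shown here; a minimal sketch of what they could look like is below. The `liqo-debug` namespace name matches the one used later in this guide, while the pod name and image are illustrative (any image providing `apt` works with the commands that follow):

```bash
# Run against each cluster (adjust the kubeconfig/context accordingly)
kubectl create namespace liqo-debug
kubectl run debug -n liqo-debug --image=ubuntu -- sleep infinity

# Open a shell inside the debug pod
kubectl exec -it -n liqo-debug debug -- bash
```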
Now, inside each debug pod, install the tools required to test connectivity:
```bash
apt update
apt install iputils-ping -y
```
#### Get the remote pod IP
We now need to obtain the IPs to ping in order to test connectivity.
If you are using 2 different pod CIDRs, you can use the original pod IPs.
```bash
kubectl get pods -n liqo-debug -o wide
```
If you are using the same pod CIDR, you need to **remap** the IPs of the pods.
If you have two clusters called `cluster A` and `cluster B`, to remap the pod IPs of `cluster B` you need to get the **configuration** resource on `cluster A` related to `cluster B`:
```bash
kubectl get configuration -A
```
Now take the **REMAPPED POD CIDR** value, keep the **network** part of the CIDR, and replace the **host** part with that of the pod you want to reach on `cluster B`.
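For example (with purely illustrative addresses): if the pod on `cluster B` has IP `10.243.2.15` within a `10.243.0.0/16` pod CIDR, and `cluster A` reports a **REMAPPED POD CIDR** of `10.71.0.0/16` for `cluster B`, then from `cluster A` you would ping `10.71.2.15`.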
If you want a more detailed explanation, you can find an example of remapping [here](../advanced/external-ip-remapping.md).
#### Sniff the traffic inside the gateway
In your tenant namespace, you can find a pod called `gw-<CLUSTER_ID>`. This pod routes the traffic between the clusters.
In order to check whether the traffic is correctly routed, you can sniff the traffic inside the gateway pod.
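A minimal sketch of how this could be done, assuming `tcpdump` is available (or can be installed) in the gateway pod and that you want to observe the ICMP traffic generated by the ping tests above; the exact container and procedure may differ:

```bash
# Replace <TENANT_NAMESPACE> and <CLUSTER_ID> with your values
kubectl exec -it -n <TENANT_NAMESPACE> gw-<CLUSTER_ID> -- tcpdump -tnni any icmp
```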
0 commit comments