
Commit cc672c2

fix: fixed issue with taskfile forcing an incorrect cluster and context and added a dual cluster readme (#1396)

Signed-off-by: Jeromy Cannon <[email protected]>
Parent: 36f3316

2 files changed: +222 −2 lines


Taskfile.helper.yml (15 additions, 2 deletions)

```diff
@@ -69,7 +69,9 @@ tasks:
       - echo "CONSENSUS_NODE_VERSION=${CONSENSUS_NODE_VERSION}"
       - echo "SOLO_NAMESPACE=${SOLO_NAMESPACE}"
       - echo "SOLO_DEPLOYMENT=${SOLO_DEPLOYMENT}"
+      - echo "CLUSTER_REF=${CLUSTER_REF}"
       - echo "SOLO_CLUSTER_RELEASE_NAME=${SOLO_CLUSTER_RELEASE_NAME}"
+      - echo "CONTEXT=${CONTEXT}"
       - echo "nodes={{ .nodes }}"
       - echo "node_identifiers={{ .node_identifiers }}"
       - echo "use_port_forwards={{ .use_port_forwards }}"
```
```diff
@@ -165,7 +167,18 @@ tasks:
     deps:
       - task: "init"
     cmds:
-      - SOLO_HOME_DIR=${SOLO_HOME_DIR} npm run solo -- deployment create -n {{ .SOLO_NAMESPACE }} --context kind-${SOLO_CLUSTER_NAME} --email {{ .SOLO_EMAIL }} --deployment-clusters kind-${SOLO_CLUSTER_NAME} --cluster-ref kind-${SOLO_CLUSTER_NAME} --deployment "${SOLO_DEPLOYMENT}" --node-aliases {{.node_identifiers}} --dev
+      - |
+        if [[ "${CONTEXT}" != "" ]]; then
+          echo "CONTEXT=${CONTEXT}"
+        else
+          export CONTEXT="kind-${SOLO_CLUSTER_NAME}"
+        fi
+        if [[ "${CLUSTER_REF}" != "" ]]; then
+          echo "CLUSTER_REF=${CLUSTER_REF}"
+        else
+          export CLUSTER_REF="kind-${SOLO_CLUSTER_NAME}"
+        fi
+        SOLO_HOME_DIR=${SOLO_HOME_DIR} npm run solo -- deployment create -n {{ .SOLO_NAMESPACE }} --context ${CONTEXT} --email {{ .SOLO_EMAIL }} --deployment-clusters ${CLUSTER_REF} --cluster-ref ${CLUSTER_REF} --deployment "${SOLO_DEPLOYMENT}" --node-aliases {{.node_identifiers}} --dev

   solo:keys:
     silent: true
```
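The fallback logic in the hunk above (use `CONTEXT` and `CLUSTER_REF` when the caller provides them, otherwise derive both from the kind cluster name) can also be expressed with Bash's `${VAR:-default}` expansion. A minimal standalone sketch; the cluster name `solo-e2e` is an assumed example, not a value from the commit:

```shell
#!/usr/bin/env bash
# Sketch of the fallback behaviour added above: honour CONTEXT and
# CLUSTER_REF if the caller sets them, otherwise derive both from the
# kind cluster name. "solo-e2e" is a hypothetical example value.
SOLO_CLUSTER_NAME="${SOLO_CLUSTER_NAME:-solo-e2e}"

# ${VAR:-default} expands to the default when VAR is unset or empty,
# which is equivalent to the if/else blocks in the task.
CONTEXT="${CONTEXT:-kind-${SOLO_CLUSTER_NAME}}"
CLUSTER_REF="${CLUSTER_REF:-kind-${SOLO_CLUSTER_NAME}}"

echo "CONTEXT=${CONTEXT}"
echo "CLUSTER_REF=${CLUSTER_REF}"
```

The explicit `if/else` in the Taskfile does the same thing while also echoing any caller-supplied value.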
```diff
@@ -197,7 +210,7 @@ tasks:
         export CONSENSUS_NODE_FLAG='--release-tag {{.CONSENSUS_NODE_VERSION}}'
       fi
       if [[ "${SOLO_CHART_VERSION}" != "" ]]; then
-        export SOLO_CHART_FLAG='--solo-chart-version ${SOLO_CHART_VERSION}'
+        export SOLO_CHART_FLAG="--solo-chart-version ${SOLO_CHART_VERSION}"
      fi
      SOLO_HOME_DIR=${SOLO_HOME_DIR} npm run solo -- network deploy --deployment "${SOLO_DEPLOYMENT}" --node-aliases {{.node_identifiers}} ${CONSENSUS_NODE_FLAG} ${SOLO_CHART_FLAG} ${VALUES_FLAG} ${SETTINGS_FLAG} ${LOG4J2_FLAG} ${APPLICATION_PROPERTIES_FLAG} ${GENESIS_THROTTLES_FLAG} ${DEBUG_NODE_FLAG} ${SOLO_CHARTS_DIR_FLAG} ${LOAD_BALANCER_FLAG} ${NETWORK_DEPLOY_EXTRA_FLAGS} -q --dev
   - task: "solo:node:setup"
```
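The single-to-double quote change in the last hunk matters because Bash does not expand variables inside single quotes, so the old line passed the literal text `${SOLO_CHART_VERSION}` to solo instead of the version number. A quick illustration (the version `0.44.0` is just an example value):

```shell
#!/usr/bin/env bash
# Illustrates the quoting bug fixed above: single quotes keep
# ${SOLO_CHART_VERSION} as literal text; double quotes expand it.
SOLO_CHART_VERSION="0.44.0"   # example value

broken='--solo-chart-version ${SOLO_CHART_VERSION}'  # literal, never expanded
fixed="--solo-chart-version ${SOLO_CHART_VERSION}"   # expanded at assignment

echo "broken: $broken"
echo "fixed:  $fixed"
```

Running this prints the unexpanded placeholder for `broken` and `--solo-chart-version 0.44.0` for `fixed`.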

test/e2e/dual-cluster/README.md (207 additions, new file)
# Local Dual Cluster Testing

This document describes how to test the dual cluster setup locally.

## Prerequisites

- Make sure you give Docker sufficient resources:
  - ? CPUs
  - ? GB RAM
  - ? GB Swap
  - ? GB Disk Space
- If you are tight on resources, make sure that no other Kind clusters or other resource-heavy workloads are running on your machine.

## Calling

```bash
# from your Solo root directory run:
./test/e2e/dual-cluster/setup-dual-e2e.sh
```
Output:

```bash
SOLO_CHARTS_DIR:
Deleting cluster "solo-e2e-c1" ...
Deleting cluster "solo-e2e-c2" ...
1051ed73cb755a017c3d578e5c324eef1cae95c606164f97228781db126f80b6
"metrics-server" has been added to your repositories
"metallb" has been added to your repositories
Creating cluster "solo-e2e-c1" ...
 ✓ Ensuring node image (kindest/node:v1.31.4) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-solo-e2e-c1"
You can now use your cluster with:

kubectl cluster-info --context kind-solo-e2e-c1

Thanks for using kind! 😊
Release "metrics-server" does not exist. Installing it now.
NAME: metrics-server
LAST DEPLOYED: Fri Feb 14 16:04:15 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* Metrics Server                                                      *
***********************************************************************
  Chart version: 3.12.2
  App version:   0.7.2
  Image tag:     registry.k8s.io/metrics-server/metrics-server:v0.7.2
***********************************************************************
Release "metallb" does not exist. Installing it now.
NAME: metallb
LAST DEPLOYED: Fri Feb 14 16:04:16 2025
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
ipaddresspool.metallb.io/local created
l2advertisement.metallb.io/local created
namespace/cluster-diagnostics created
configmap/cluster-diagnostics-cm created
service/cluster-diagnostics-svc created
deployment.apps/cluster-diagnostics created
Creating cluster "solo-e2e-c2" ...
 ✓ Ensuring node image (kindest/node:v1.31.4) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-solo-e2e-c2"
You can now use your cluster with:

kubectl cluster-info --context kind-solo-e2e-c2

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
Release "metrics-server" does not exist. Installing it now.
NAME: metrics-server
LAST DEPLOYED: Fri Feb 14 16:05:07 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* Metrics Server                                                      *
***********************************************************************
  Chart version: 3.12.2
  App version:   0.7.2
  Image tag:     registry.k8s.io/metrics-server/metrics-server:v0.7.2
***********************************************************************
Release "metallb" does not exist. Installing it now.
NAME: metallb
LAST DEPLOYED: Fri Feb 14 16:05:08 2025
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
ipaddresspool.metallb.io/local created
l2advertisement.metallb.io/local created
namespace/cluster-diagnostics created
configmap/cluster-diagnostics-cm created
service/cluster-diagnostics-svc created
deployment.apps/cluster-diagnostics created

> @hashgraph/[email protected] build
> rm -Rf dist && tsc && node resources/post-build-script.js


> @hashgraph/[email protected] solo
> node --no-deprecation --no-warnings dist/solo.js init


******************************* Solo *********************************************
Version            : 0.34.0
Kubernetes Context : kind-solo-e2e-c2
Kubernetes Cluster : kind-solo-e2e-c2
Current Command    : init
**********************************************************************************
✔ Setup home directory and cache
✔ Check dependencies
✔ Check dependency: helm [OS: darwin, Release: 23.6.0, Arch: arm64]
✔ Setup chart manager [1s]
✔ Copy templates in '/Users/user/.solo/cache'


***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /Users/user/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
Switched to context "kind-solo-e2e-c1".

> @hashgraph/[email protected] solo
> node --no-deprecation --no-warnings dist/solo.js cluster setup -s solo-setup


******************************* Solo *********************************************
Version            : 0.34.0
Kubernetes Context : kind-solo-e2e-c1
Kubernetes Cluster : kind-solo-e2e-c1
Current Command    : cluster setup
**********************************************************************************
✔ Initialize
✔ Prepare chart values
✔ Install 'solo-cluster-setup' chart [2s]
NAME                NAMESPACE       REVISION  UPDATED                               STATUS    CHART                      APP VERSION
metallb             metallb-system  1         2025-02-14 16:04:16.785411 +0000 UTC  deployed  metallb-0.14.9             v0.14.9
metrics-server      kube-system     1         2025-02-14 16:04:15.593138 +0000 UTC  deployed  metrics-server-3.12.2      0.7.2
solo-cluster-setup  solo-setup      1         2025-02-14 16:05:54.334181 +0000 UTC  deployed  solo-cluster-setup-0.44.0  0.44.0
Switched to context "kind-solo-e2e-c2".

> @hashgraph/[email protected] solo
> node --no-deprecation --no-warnings dist/solo.js cluster setup -s solo-setup


******************************* Solo *********************************************
Version            : 0.34.0
Kubernetes Context : kind-solo-e2e-c2
Kubernetes Cluster : kind-solo-e2e-c2
Current Command    : cluster setup
**********************************************************************************
✔ Initialize
✔ Prepare chart values
✔ Install 'solo-cluster-setup' chart [2s]
NAME                NAMESPACE       REVISION  UPDATED                               STATUS    CHART                      APP VERSION
metallb             metallb-system  1         2025-02-14 16:05:08.226466 +0000 UTC  deployed  metallb-0.14.9             v0.14.9
metrics-server      kube-system     1         2025-02-14 16:05:07.217358 +0000 UTC  deployed  metrics-server-3.12.2      0.7.2
solo-cluster-setup  solo-setup      1         2025-02-14 16:05:58.114619 +0000 UTC  deployed  solo-cluster-setup-0.44.0  0.44.0
Switched to context "kind-solo-e2e-c1".
```
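After the script finishes, both kubeconfig contexts shown in the output above should exist. A hedged sanity check, which simply reports each expected context and skips entirely when `kubectl` is not installed:

```shell
#!/usr/bin/env bash
# Optional sanity check after setup: both contexts created above should
# be registered in your kubeconfig. No-op when kubectl is unavailable.
expected_contexts="kind-solo-e2e-c1 kind-solo-e2e-c2"
if command -v kubectl >/dev/null 2>&1; then
  for ctx in $expected_contexts; do
    if kubectl config get-contexts -o name | grep -qx "$ctx"; then
      echo "found: $ctx"
    else
      echo "missing: $ctx"
    fi
  done
else
  echo "kubectl not found; skipping context check"
fi
```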
## Diagnostics

The `./diagnostics/cluster/deploy.sh` script deploys a `cluster-diagnostics` deployment (and its pod) with a service whose external IP is exposed. It is deployed to both clusters, runs Ubuntu, and has most common diagnostic tools installed. Once it is running, you can shell into the pod and run your own troubleshooting commands, for example to verify network connectivity or DNS resolution between the two clusters.

Calling

```bash
# from your Solo root directory run:
./test/e2e/dual-cluster/diagnostics/cluster/deploy.sh
```
Output:

```bash
namespace/cluster-diagnostics unchanged
configmap/cluster-diagnostics-cm unchanged
service/cluster-diagnostics-svc unchanged
deployment.apps/cluster-diagnostics unchanged
```
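To shell into the diagnostics pod, `kubectl exec` against the deployment works. A sketch: the namespace and deployment name `cluster-diagnostics` come from the deploy output above, while the context name and the `bash` entry shell are assumptions; adjust for your setup:

```shell
#!/usr/bin/env bash
# Sketch: open a shell in cluster 1's diagnostics pod to run network or
# DNS checks by hand. Context name assumed from the setup output above.
ns="cluster-diagnostics"
if command -v kubectl >/dev/null 2>&1; then
  kubectl --context kind-solo-e2e-c1 -n "$ns" \
    exec -it deploy/cluster-diagnostics -- bash \
    || echo "exec failed (are the clusters up?)"
else
  echo "kubectl not found; run this against the live dual-cluster setup"
fi
```

Repeat with `--context kind-solo-e2e-c2` to test from the other side.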
## Cleanup

Calling

```bash
# from your Solo root directory run:
kind delete clusters cluster1 cluster2
```

Output:

```bash
Deleted clusters: ["cluster1" "cluster2"]
```

0 commit comments

Comments
 (0)