Targets are not added to Load balancer in version 1.23.0 #853

Open · ashok-eurostar opened this issue Jan 25, 2025 · 11 comments
Labels: help wanted (Extra attention is needed)

@ashok-eurostar

I have installed the Hetzner Cloud Controller Manager using the Helm chart with the following values:

env:
    - name: HCLOUD_NETWORK
      valueFrom:
        secretKeyRef:
          name: hcloud
          key: network
    - name: HCLOUD_NETWORK_ROUTES_ENABLED
      value: "false"
    - name: HCLOUD_TOKEN
      valueFrom:
        secretKeyRef:
          name: hcloud
          key: token

I use the following chart:

    repoURL: "https://kubernetes.github.io/ingress-nginx"
    chart: ingress-nginx
    targetRevision: 4.11.3

with the following controller values (service annotations):

controller:
  replicaCount: 2
  service:
    annotations:
      load-balancer.hetzner.cloud/name: "k8s-ingress-lb-nbg1"
      load-balancer.hetzner.cloud/location: "nbg1"
      load-balancer.hetzner.cloud/type: "lb11"
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/disable-private-ingress: "true"

The ingress controller is installed and my load balancer is created and attached to the private network, but no targets are added to the load balancer.

I tried deleting and reinstalling the ingress controller, but no success yet. Some logs:


redLoadBalancer" message="Ensured load balancer"
I0125 08:05:59.079349       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0125 08:05:59.136364       1 load_balancers.go:127] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx-ingress-controller" nodes=["w-app-fsn1-1","w-app-hel1-1","w-common-fsn1-1","w-common-hel1-1","w-common-nbg1-1"]
I0125 08:06:00.949499       1 load_balancer.go:501] "attach to network" op="hcops/LoadBalancerOps.attachToNetwork" loadBalancerID=2295634 networkID=10592131
I0125 08:14:38.134134       1 load_balancers.go:171] "reload HC Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" loadBalancerID=2295634
I0125 08:14:38.323126       1 load_balancer.go:897] "add service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=80 loadBalancerID=2295634
I0125 08:14:38.781959       1 load_balancer.go:897] "add service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=443 loadBalancerID=2295634
I0125 08:14:39.200647       1 load_balancers.go:192] "reload HC Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" loadBalancerID=2295634
I0125 08:14:39.333110       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0125 08:14:56.795461       1 load_balancers.go:127] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx-ingress-controller" nodes=["w-app-fsn1-1","w-app-hel1-1","w-common-fsn1-1","w-common-hel1-1","w-common-nbg1-1"]
I0125 08:14:56.796006       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0125 08:14:56.936104       1 load_balancer.go:886] "update service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=80 loadBalancerID=2295634
I0125 08:14:57.067375       1 load_balancer.go:886] "update service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=443 loadBalancerID=2295634
I0125 08:14:57.382871       1 load_balancers.go:192] "reload HC Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" loadBalancerID=2295634
I0125 08:14:57.552179       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0125 08:20:34.512941       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="DeletingLoadBalancer" message="Deleting load balancer"
I0125 08:20:34.689463       1 load_balancers.go:374] "delete Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancerDeleted" loadBalancerID=2295634
I0125 08:20:35.423219       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="DeletedLoadBalancer" message="Deleted load balancer"
I0125 08:20:37.254915       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller-ingress-nginx-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0125 08:20:37.318585       1 load_balancers.go:127] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="nginx-ingress-controller-ingress-nginx-controller" nodes=["w-common-fsn1-1","w-common-hel1-1","w-common-nbg1-1","w-app-fsn1-1","w-app-hel1-1"]
I0125 08:20:38.877864       1 load_balancer.go:501] "attach to network" op="hcops/LoadBalancerOps.attachToNetwork" loadBalancerID=2295649 networkID=10592131
I0125 08:20:50.009995       1 load_balancers.go:171] "reload HC Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" loadBalancerID=2295649
I0125 08:20:50.148277       1 load_balancer.go:897] "add service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=80 loadBalancerID=2295649
I0125 08:20:50.715458       1 load_balancer.go:897] "add service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=443 loadBalancerID=2295649
I0125 08:20:51.510826       1 load_balancers.go:192] "reload HC Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" loadBalancerID=2295649
I0125 08:20:51.657535       1 event.go:389] "Event occurred" object="nginx-ingress-controller/nginx-ingress-controller-ingress-nginx-controller" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

Am I missing something major?

@lukasmetzner
Contributor

lukasmetzner commented Jan 27, 2025

Hey,

I cannot reproduce your issue. Could you provide some more details about your cluster configuration? Are you using the hcloud-cloud-controller-manager in combination with Robot servers?

Could you also provide the output of kubectl -n kube-system describe deployments.apps hcloud-cloud-controller-manager?

Best Regards,
Lukas

@lukasmetzner lukasmetzner added the help wanted Extra attention is needed label Jan 27, 2025
@ashok-eurostar
Author

No, I am not using Robot servers. Here is the output.

kubectl -n kube-system describe deployments.apps hcloud-cloud-controller-manager
Name:                   hcloud-cloud-controller-manager
Namespace:              kube-system
CreationTimestamp:      Sat, 25 Jan 2025 08:44:08 +0100
Labels:                 app.kubernetes.io/managed-by=Helm
Annotations:            deployment.kubernetes.io/revision: 2
                        meta.helm.sh/release-name: hccm
                        meta.helm.sh/release-namespace: kube-system
Selector:               app.kubernetes.io/instance=hccm,app.kubernetes.io/name=hcloud-cloud-controller-manager
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=hccm
                    app.kubernetes.io/name=hcloud-cloud-controller-manager
  Service Account:  hcloud-cloud-controller-manager
  Containers:
   hcloud-cloud-controller-manager:
    Image:      docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.23.0
    Port:       8233/TCP
    Host Port:  0/TCP
    Args:
      --allow-untagged-cloud
      --cloud-provider=hcloud
      --route-reconciliation-period=30s
      --webhook-secure-port=0
      --leader-elect=false
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      HCLOUD_NETWORK:                 <set to the key 'network' in secret 'hcloud'>  Optional: false
      HCLOUD_NETWORK_ROUTES_ENABLED:  false
      HCLOUD_TOKEN:                   <set to the key 'token' in secret 'hcloud'>  Optional: false
    Mounts:                           <none>
  Volumes:                            <none>
  Priority Class Name:                system-cluster-critical
  Node-Selectors:                     <none>
  Tolerations:                        CriticalAddonsOnly op=Exists
                                      node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                                      node-role.kubernetes.io/master:NoSchedule op=Exists
                                      node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                                      node.kubernetes.io/not-ready:NoExecute
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  hcloud-cloud-controller-manager-68948d8d5d (0/0 replicas created)
NewReplicaSet:   hcloud-cloud-controller-manager-587cfd46f9 (1/1 replicas created)
Events:          <none>

kubectl get nodes
NAME              STATUS   ROLES           AGE    VERSION
cp-fsn1-1         Ready    control-plane   4d5h   v1.32.0
cp-hel1-1         Ready    control-plane   4d5h   v1.32.0
cp-nbg1-1         Ready    control-plane   4d5h   v1.32.0
w-app-fsn1-1      Ready    worker          4d5h   v1.32.0
w-app-hel1-1      Ready    worker          4d5h   v1.32.0
w-common-fsn1-1   Ready    worker          4d5h   v1.32.0
w-common-hel1-1   Ready    worker          4d5h   v1.32.0
w-common-nbg1-1   Ready    worker          4d5h   v1.32.0

My cluster is a mix of CAX11 and CAX21 servers (no Robot servers).

@lukasmetzner
Contributor

Hey,

To debug this further, could you set the environment variable HCLOUD_DEBUG="true" for the HCCM and, in addition, provide the output of kubectl describe node for a node in the target location of your load balancer?
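
For reference, one way this might look in the Helm values shown above (a sketch only; the env layout is copied from your values, and HCLOUD_DEBUG is set as a plain string value):

env:
    - name: HCLOUD_NETWORK
      valueFrom:
        secretKeyRef:
          name: hcloud
          key: network
    - name: HCLOUD_NETWORK_ROUTES_ENABLED
      value: "false"
    - name: HCLOUD_DEBUG
      value: "true"
    - name: HCLOUD_TOKEN
      valueFrom:
        secretKeyRef:
          name: hcloud
          key: token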

Best Regards,
Lukas

@ashok-eurostar
Author

Hello @lukasmetzner,

Here is the output again, and below it the output of describe node.
NOTE: I have manually added the node as a target of my ingress controller's load balancer to make it work for the moment.


ashokkumar@Mac ~ % kubectl -n kube-system describe deployments.apps hcloud-cloud-controller-manager
Name:                   hcloud-cloud-controller-manager
Namespace:              kube-system
CreationTimestamp:      Sat, 25 Jan 2025 08:44:08 +0100
Labels:                 app.kubernetes.io/managed-by=Helm
Annotations:            deployment.kubernetes.io/revision: 3
                        meta.helm.sh/release-name: hccm
                        meta.helm.sh/release-namespace: kube-system
Selector:               app.kubernetes.io/instance=hccm,app.kubernetes.io/name=hcloud-cloud-controller-manager
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=hccm
                    app.kubernetes.io/name=hcloud-cloud-controller-manager
  Service Account:  hcloud-cloud-controller-manager
  Containers:
   hcloud-cloud-controller-manager:
    Image:      docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.23.0
    Port:       8233/TCP
    Host Port:  0/TCP
    Args:
      --allow-untagged-cloud
      --cloud-provider=hcloud
      --route-reconciliation-period=30s
      --webhook-secure-port=0
      --leader-elect=false
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      HCLOUD_NETWORK:                 <set to the key 'network' in secret 'hcloud'>  Optional: false
      HCLOUD_NETWORK_ROUTES_ENABLED:  false
      HCLOUD_DEBUG:                   true
      HCLOUD_TOKEN:                   <set to the key 'token' in secret 'hcloud'>  Optional: false
    Mounts:                           <none>
  Volumes:                            <none>
  Priority Class Name:                system-cluster-critical
  Node-Selectors:                     <none>
  Tolerations:                        CriticalAddonsOnly op=Exists
                                      node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                                      node-role.kubernetes.io/master:NoSchedule op=Exists
                                      node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                                      node.kubernetes.io/not-ready:NoExecute
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  hcloud-cloud-controller-manager-68948d8d5d (0/0 replicas created), hcloud-cloud-controller-manager-587cfd46f9 (0/0 replicas created)
NewReplicaSet:   hcloud-cloud-controller-manager-56545bf4ff (1/1 replicas created)
Events:          <none>

Output of describe node

ashokkumar@Mac ~ % kubectl describe node w-common-fsn1-1
Name:               w-common-fsn1-1
Roles:              worker
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    campuszeus.io/purpose=common
                    csi.hetzner.cloud/location=fsn1
                    environment=development
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=w-common-fsn1-1
                    kubernetes.io/os=linux
                    kubernetes.io/zone=fsn1
                    node-role.kubernetes.io/worker=
                    purpose=common
                    role=worker
Annotations:        csi.volume.kubernetes.io/nodeid: {"csi.hetzner.cloud":"59188435"}
                    flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"26:58:13:7f:b9:c8"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 49.13.58.170
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 24 Jan 2025 15:13:00 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  w-common-fsn1-1
  AcquireTime:     <unset>
  RenewTime:       Fri, 31 Jan 2025 20:13:12 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 29 Jan 2025 19:11:21 +0100   Wed, 29 Jan 2025 19:11:21 +0100   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Fri, 31 Jan 2025 20:09:00 +0100   Wed, 29 Jan 2025 16:08:33 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 31 Jan 2025 20:09:00 +0100   Wed, 29 Jan 2025 16:08:33 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 31 Jan 2025 20:09:00 +0100   Wed, 29 Jan 2025 16:08:33 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Fri, 31 Jan 2025 20:09:00 +0100   Wed, 29 Jan 2025 16:08:33 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  49.13.58.170
  Hostname:    w-common-fsn1-1
Capacity:
  cpu:                4
  ephemeral-storage:  78425224Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7916048Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  72276686319
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             7813648Ki
  pods:               110
System Info:
  Machine ID:                 d26834621b874c37bd833f36e15aa674
  System UUID:                d2683462-1b87-4c37-bd83-3f36e15aa674
  Boot ID:                    be67f61a-93e1-4359-9e4b-8171cdaec957
  Kernel Version:             6.8.0-51-generic
  OS Image:                   Ubuntu 24.04.1 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://2.0.1
  Kubelet Version:            v1.32.0
  Kube-Proxy Version:         v1.32.0
PodCIDR:                      10.244.6.0/24
PodCIDRs:                     10.244.6.0/24
Non-terminated Pods:          (15 in total)
  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
  argocd                      argocd-applicationset-controller-6846b8c8c9-xdvjr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d4h
  argocd                      argocd-redis-ha-haproxy-665bd7db7-hvl8n                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d4h
  argocd                      argocd-redis-ha-server-1                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d4h
  argocd                      argocd-repo-server-667d54cc98-wwlsh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d4h
  argocd                      argocd-server-587d4f6865-srfjp                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d4h
  kube-flannel                kube-flannel-ds-5rgqf                                              100m (2%)     0 (0%)      50Mi (0%)        0 (0%)         2d1h
  kube-system                 hcloud-csi-node-crjfm                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d5h
  kube-system                 kube-proxy-9862g                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d5h
  nginx-ingress-controller    nginx-ingress-controller-ingress-nginx-controller-7c6d58688p69m    100m (2%)     0 (0%)      90Mi (1%)        0 (0%)         6d10h
  observability               alertmanager-kube-prometheus-stack-alertmanager-0                  0 (0%)        0 (0%)      200Mi (2%)       0 (0%)         6d22h
  observability               kube-prometheus-stack-operator-7975f96c79-x8wgq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d22h
  observability               kube-prometheus-stack-prometheus-node-exporter-lpqpd               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d22h
  observability               loki-distributed-querier-0                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d22h
  observability               loki-distributed-query-frontend-658646787d-bbcr9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d22h
  observability               promtail-qx2ff                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d22h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (5%)   0 (0%)
  memory             340Mi (4%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:              <none>

Thanks,
Ashok

@lukasmetzner
Contributor

lukasmetzner commented Feb 3, 2025

Hey,

In your output of kubectl describe node ... I can see that the ProviderID is missing; it is usually set by the hcloud-cloud-controller-manager when initializing the nodes. A node's ProviderID should correspond to its hcloud server ID.

[...]
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
ProviderID:                   hcloud://XXXXXXXX
[...]
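
One quick way to check this for all nodes at once (plain kubectl; spec.providerID is the field shown above):

kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID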

The reason might be that you did not start your kubelets with the --cloud-provider flag set to external. This is described in the deployment guide (see 1.).
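
To verify that the flag actually reaches the kubelet on a node, something like this should work (a sketch assuming a systemd-managed kubelet; adjust to your setup):

systemctl cat kubelet | grep -i cloud-provider                      # shows any drop-in that passes the flag
ps -ef | grep '[k]ubelet' | grep -o -- '--cloud-provider=[a-z]*'    # confirms the flag on the running process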

Could you check if this resolves your issue?

Reference:

Best Regards,
Lukas

@ashok-eurostar
Author

@lukasmetzner,
Thanks for pointing this out.
I have the config below, but it still did not work. I am going to debug further and keep you posted.

But in general, I have a question: I see that --cloud-provider=external is deprecated, yet we still use it in the Hetzner setup.

Instead, we have our own custom scripts to set up the k8s cluster.
So when we initialize the cluster or add a new node, can we set the provider ID with kubectl patch?

kubectl patch node <node-name> -p '{"spec":{"providerID":"<provider>://<provider-specific-id>"}}'

Will this work?

Anyway, I already have the following in place, but I will debug why it did not work.

root@cp-nbg1-1:/etc/systemd/system/kubelet.service.d# ls
20-hcloud.conf
root@cp-nbg1-1:/etc/systemd/system/kubelet.service.d# cat 20-hcloud.conf 
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
root@cp-nbg1-1:/etc/systemd/system/kubelet.service.d# 

Regards,
Ashok

@lukasmetzner
Contributor

lukasmetzner commented Feb 4, 2025

Hey,

The hcloud-cloud-controller-manager is based on the cloud-provider library. We do not have direct control over when the dependency on --cloud-provider=external is removed.

Maybe the solution from this comment is helpful to you?

The provider ID is set automatically by the HCCM if you set up everything else correctly. If you want to assign the provider ID manually, you could do it via the --provider-id flag of the kubelet. AFAIK, patching afterwards should also work.
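
For illustration only (the server ID 12345678 is a placeholder; look it up in the Hetzner Cloud console or with the hcloud CLI, and w-app-fsn1-1 is just one of the nodes from above):

# Option 1: set it at registration time via the kubelet flag, e.g. in the systemd drop-in shown above
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external --provider-id=hcloud://12345678"

# Option 2: patch an existing node object afterwards
# (this only succeeds while spec.providerID is still empty; the field is immutable once set)
kubectl patch node w-app-fsn1-1 -p '{"spec":{"providerID":"hcloud://12345678"}}'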

@lukasmetzner lukasmetzner self-assigned this Feb 6, 2025
@aidas-emersoft

Same or similar issue here.

I have several worker nodes with k3s on them, and their ProviderID starts with k3s://, e.g.: k3s://k8-worker-node-ash-0. Upon installing hcloud-cloud-controller-manager and describing the nodes, I see this:

Warning | UnknownProviderIDPrefix | hcloud-cloud-controller-manager | Node could not be added to Load Balancer for service ingress-nginx-controller because the provider ID does not match any known format.

Please advise how this could be solved. Thanks!

@lukasmetzner
Contributor

lukasmetzner commented Feb 7, 2025

@aidas-emersoft The providerID is used by the cloud provider to identify the underlying machine. As you want to use Hetzner Cloud, this should be hcloud://XXXXXXX. Clusters with different provider ID prefixes are not supported.

You probably forgot to start k3s with --disable-cloud-controller. Please also double-check that you run k3s with --kubelet-arg=cloud-provider=external.
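
For reference, a sketch of how this could look in a k3s config file (assuming the standard /etc/rancher/k3s/config.yaml; the keys mirror the CLI flags):

# /etc/rancher/k3s/config.yaml on the server node(s)
disable-cloud-controller: true
kubelet-arg:
  - "cloud-provider=external"

# agent nodes only need the kubelet-arg entry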

@aidas-emersoft

@lukasmetzner thank you. --disable-cloud-controller was definitely missing, but master and worker nodes were configured with --kubelet-arg=cloud-provider=external.

After starting k3s with the --disable-cloud-controller flag and installing hcloud-cloud-controller-manager, all nodes gained a valid ProviderID like hcloud://<ID>; however, the created load balancer has no targets...

@lukasmetzner
Contributor

@aidas-emersoft This now seems unrelated to this issue. Could you please open a new ticket for it? ^^

In addition, please provide steps to reproduce your issue, log messages from hcloud-cloud-controller-manager, and the output of kubectl describe for both the hcloud-cloud-controller-manager deployment and your nodes.
