Helm chart not creating controller-configmap-tcp.yaml when tcp field specified in values #9079


Closed
cbc02009 opened this issue Sep 23, 2022 · 6 comments
Labels
needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
needs-priority
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@cbc02009

cbc02009 commented Sep 23, 2022

What happened:

I am trying to add a TCP service to the nginx controller by adding

tcp:
  "2222": "services/gitea-ssh:2222"

to the values for the helm chart. (chart source available here: https://github.com/cbc02009/k8s-home-ops/blob/c5f17c8833f32ddec3d7e3f9a91b697c4099a1c4/cluster/manifests/network/ingress-nginx/helmrelease.yaml)
However, the appropriate resources are not being generated even though the field is set.

Note: the quotes around 2222 were added to work around kubernetes-sigs/kustomize#3446.
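One way to check whether the chart itself templates the TCP resources, independent of the Flux/kustomize layer, is to render it offline with `helm template` (a minimal sketch; the release name is illustrative and this assumes helm and network access to the chart repo):

```shell
# Render the chart with only the tcp value set; `helm template` needs no
# cluster access, so this isolates chart templating from the Flux pipeline.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm template ingress-nginx ingress-nginx/ingress-nginx \
  --set 'tcp.2222=services/gitea-ssh:2222' \
  | grep -i tcp
```

If the `--tcp-services-configmap` flag and the tcp ConfigMap show up here but not in the cluster, the values are presumably being lost somewhere between the HelmRelease and the rendered release.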

What you expected to happen:

According to the Helm chart source, it should add the argument with

{{- if .Values.tcp }}
- --tcp-services-configmap={{ default "$(POD_NAMESPACE)" .Values.controller.tcp.configMapNamespace }}/{{ include "ingress-nginx.fullname" . }}-tcp
{{- end }}

and should also create the controller-configmap-tcp.yaml file

I'm honestly not sure why this would be the case, since the helm chart is pretty explicit, but you can see from the kubectl output below that the resources are not being created.
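For comparison, when the tcp value is picked up, the chart's controller-configmap-tcp.yaml template should render roughly the following (a sketch based on the chart templates, not output from my cluster; the name comes from the `ingress-nginx.fullname` helper plus `-tcp`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp     # {{ include "ingress-nginx.fullname" . }}-tcp
  namespace: network
data:
  "2222": "services/gitea-ssh:2222"
```

No ConfigMap with that name exists in the namespace, and the corresponding `--tcp-services-configmap` flag is missing from the controller args shown below.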

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.3.1
  Build:         92534fa2ae799b502882c8684db13a25cde68155
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Environment:

  • Cloud provider or hardware configuration: Baremetal home servers using consumer hardware
  • OS (e.g. from /etc/os-release):
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://bugs.archlinux.org/"
LOGO=archlinux-logo
  • Kernel (e.g. uname -a): Linux <hostname> 5.19.9-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 15 Sep 2022 16:08:26 +0000 x86_64 GNU/Linux

  • Install tools: fluxcd using helm and kustomize. bootstrapped with kubeadm.

    • Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
  • Basic cluster related info:

    • kubectl version:
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4", GitCommit:"95ee5ab382d64cfe6c28967f36b53970b8374491", GitTreeState:"archive", BuildDate:"2022-08-23T15:32:20Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.5", GitCommit:"e979822c185a14537054f15808a118d7fcce1d6e", GitTreeState:"clean", BuildDate:"2022-09-14T16:35:41Z", GoVersion:"go1.18.6", Compiler:"gc", Platform:"linux/amd64"}
  • kubectl get nodes -o wide:
NAME     STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE     KERNEL-VERSION    CONTAINER-RUNTIME
anya     Ready    control-plane   3d4h   v1.24.4   10.0.3.9      <none>        Arch Linux   5.19.9-arch1-1    cri-o://1.25.0
sakura   Ready    <none>          2d4h   v1.24.4   10.0.0.12     <none>        Arch Linux   5.19.9-arch1-1    cri-o://1.25.0
uiharu   Ready    <none>          3d3h   v1.24.4   10.0.0.10     <none>        Arch Linux   5.19.10-arch1-1   cri-o://1.25.0
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.3.1
              helm.sh/chart=ingress-nginx-4.2.5
              helm.toolkit.fluxcd.io/name=ingress-nginx
              helm.toolkit.fluxcd.io/namespace=network
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: network
Controller:   k8s.io/ingress-nginx
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide:
NAME                                            READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-7987769596-79pn5   1/1     Running   0          46m     10.11.2.13    uiharu   <none>           <none>
pod/ingress-nginx-controller-7987769596-kns7s   1/1     Running   0          46m     10.11.3.95    sakura   <none>           <none>
pod/ingress-nginx-controller-7987769596-vqvqs   1/1     Running   0          46m     10.11.0.64    anya     <none>           <none>
pod/metallb-controller-94c85f6db-wbcfh          1/1     Running   0          6h48m   10.11.2.196   uiharu   <none>           <none>
pod/metallb-speaker-9jnzq                       1/1     Running   0          6h49m   10.0.0.12     sakura   <none>           <none>
pod/metallb-speaker-q7ckv                       1/1     Running   0          6h49m   10.0.0.10     uiharu   <none>           <none>
pod/metallb-speaker-wpjcf                       1/1     Running   0          6h49m   10.0.3.9      anya     <none>           <none>

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
service/ingress-nginx-controller             LoadBalancer   10.10.33.146    10.0.1.1      80:31279/TCP,443:31335/TCP   46m    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP      10.10.142.0     <none>        443/TCP                      46m    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics     ClusterIP      10.10.110.243   <none>        10254/TCP                    46m    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/metallb-webhook-service              ClusterIP      10.10.5.28      <none>        443/TCP                      3d3h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=metallb,app.kubernetes.io/name=metallb

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE    CONTAINERS   IMAGES                            SELECTOR
daemonset.apps/metallb-speaker   3         3         3       3            3           kubernetes.io/os=linux   3d3h   speaker      quay.io/metallb/speaker:v0.13.5   app.kubernetes.io/component=speaker,app.kubernetes.io/instance=metallb,app.kubernetes.io/name=metallb

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                                                                                                    SELECTOR
deployment.apps/ingress-nginx-controller   3/3     3            3           46m    controller   registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
deployment.apps/metallb-controller         1/1     1            1           3d3h   controller   quay.io/metallb/controller:v0.13.5                                                                                        app.kubernetes.io/component=controller,app.kubernetes.io/instance=metallb,app.kubernetes.io/name=metallb

NAME                                                  DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES                                                                                                                    SELECTOR
replicaset.apps/ingress-nginx-controller-7987769596   3         3         3       46m    controller   registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7987769596
replicaset.apps/metallb-controller-94c85f6db          1         1         1       3d3h   controller   quay.io/metallb/controller:v0.13.5                                                                                        app.kubernetes.io/component=controller,app.kubernetes.io/instance=metallb,app.kubernetes.io/name=metallb,pod-template-hash=94c85f6db
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>:
Name:         ingress-nginx-controller-7987769596-79pn5
Namespace:    network
Priority:     0
Node:         uiharu/10.0.0.10
Start Time:   Fri, 23 Sep 2022 14:06:15 -0400
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=7987769596
Annotations:  policies.kyverno.io/last-applied-patches: delete-cpu-limits.delete-cpu-limits.kyverno.io: added /spec/initContainers/0/resources/limits
Status:       Running
IP:           10.11.2.13
IPs:
  IP:           10.11.2.13
Controlled By:  ReplicaSet/ingress-nginx-controller-7987769596
Init Containers:
  provide-timezone:
    Container ID:  cri-o://2bbe115cee214d40f9b8944be90ef0e9a4301119af154060cd090a93d089cb41
    Image:         quay.io/k8tz/k8tz:0.7.0
    Image ID:      quay.io/k8tz/k8tz@sha256:5c51bd0d0b73dff3b49c80734dfa213bebe127c2c4c6d68b8c1ef4ca2867544a
    Port:          <none>
    Host Port:     <none>
    Args:
      bootstrap
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 23 Sep 2022 14:06:17 -0400
      Finished:     Fri, 23 Sep 2022 14:06:17 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /mnt/zoneinfo from timezone (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mbr5s (ro)
Containers:
  controller:
    Container ID:  cri-o://676ee9fedb8f924e987c58cd1256010b087697a4b3c5594a0c78694369026c93
    Image:         registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Ports:         80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-ssl-certificate=network/[REDACTED]-tls
    State:          Running
      Started:      Fri, 23 Sep 2022 14:06:18 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1000Mi
    Requests:
      cpu:      20m
      memory:   250Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TZ:             America/New_York
      POD_NAME:       ingress-nginx-controller-7987769596-79pn5 (v1:metadata.name)
      POD_NAMESPACE:  network (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /etc/localtime from timezone (ro,path="America/New_York")
      /usr/local/certificates/ from webhook-cert (ro)
      /usr/share/zoneinfo from timezone (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mbr5s (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  timezone:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-mbr5s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
Events:
  Type    Reason     Age   From                      Message
  ----    ------     ----  ----                      -------
  Normal  Scheduled  48m   default-scheduler         Successfully assigned network/ingress-nginx-controller-7987769596-79pn5 to uiharu
  Normal  Pulled     48m   kubelet                   Container image "quay.io/k8tz/k8tz:0.7.0" already present on machine
  Normal  Created    48m   kubelet                   Created container provide-timezone
  Normal  Started    48m   kubelet                   Started container provide-timezone
  Normal  Pulled     48m   kubelet                   Container image "registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974" already present on machine
  Normal  Created    48m   kubelet                   Created container controller
  Normal  Started    48m   kubelet                   Started container controller
  Normal  RELOAD     48m   nginx-ingress-controller  NGINX reload triggered due to a change in configuration


Name:         ingress-nginx-controller-7987769596-kns7s
Namespace:    network
Priority:     0
Node:         sakura/10.0.0.12
Start Time:   Fri, 23 Sep 2022 14:06:15 -0400
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=7987769596
Annotations:  policies.kyverno.io/last-applied-patches: delete-cpu-limits.delete-cpu-limits.kyverno.io: added /spec/initContainers/0/resources/limits
Status:       Running
IP:           10.11.3.95
IPs:
  IP:           10.11.3.95
Controlled By:  ReplicaSet/ingress-nginx-controller-7987769596
Init Containers:
  provide-timezone:
    Container ID:  cri-o://1866d0367c5fc2416abcf9040aae44630c11c153b3df3d9e7d28bb02cd01628e
    Image:         quay.io/k8tz/k8tz:0.7.0
    Image ID:      quay.io/k8tz/k8tz@sha256:5c51bd0d0b73dff3b49c80734dfa213bebe127c2c4c6d68b8c1ef4ca2867544a
    Port:          <none>
    Host Port:     <none>
    Args:
      bootstrap
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 23 Sep 2022 14:06:16 -0400
      Finished:     Fri, 23 Sep 2022 14:06:16 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /mnt/zoneinfo from timezone (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gp4ks (ro)
Containers:
  controller:
    Container ID:  cri-o://d5e5d75a50a83887d680a1952f11a60ab07fb890c7f67ba293528f0a0d733a64
    Image:         registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Ports:         80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-ssl-certificate=network/[REDACTED]-tls
    State:          Running
      Started:      Fri, 23 Sep 2022 14:06:16 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1000Mi
    Requests:
      cpu:      20m
      memory:   250Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TZ:             America/New_York
      POD_NAME:       ingress-nginx-controller-7987769596-kns7s (v1:metadata.name)
      POD_NAMESPACE:  network (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /etc/localtime from timezone (ro,path="America/New_York")
      /usr/local/certificates/ from webhook-cert (ro)
      /usr/share/zoneinfo from timezone (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gp4ks (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  timezone:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-gp4ks:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
Events:
  Type    Reason     Age   From                      Message
  ----    ------     ----  ----                      -------
  Normal  Scheduled  48m   default-scheduler         Successfully assigned network/ingress-nginx-controller-7987769596-kns7s to sakura
  Normal  Pulled     48m   kubelet                   Container image "quay.io/k8tz/k8tz:0.7.0" already present on machine
  Normal  Created    48m   kubelet                   Created container provide-timezone
  Normal  Started    48m   kubelet                   Started container provide-timezone
  Normal  Pulled     48m   kubelet                   Container image "registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974" already present on machine
  Normal  Created    48m   kubelet                   Created container controller
  Normal  Started    48m   kubelet                   Started container controller
  Normal  RELOAD     48m   nginx-ingress-controller  NGINX reload triggered due to a change in configuration


Name:         ingress-nginx-controller-7987769596-vqvqs
Namespace:    network
Priority:     0
Node:         anya/10.0.3.9
Start Time:   Fri, 23 Sep 2022 14:06:15 -0400
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=7987769596
Annotations:  policies.kyverno.io/last-applied-patches: delete-cpu-limits.delete-cpu-limits.kyverno.io: added /spec/initContainers/0/resources/limits
Status:       Running
IP:           10.11.0.64
IPs:
  IP:           10.11.0.64
Controlled By:  ReplicaSet/ingress-nginx-controller-7987769596
Init Containers:
  provide-timezone:
    Container ID:  cri-o://a1105799a3fc19e240beedeef1661a53b52fefa25ce9cf9f799c406e42982a0e
    Image:         quay.io/k8tz/k8tz:0.7.0
    Image ID:      quay.io/k8tz/k8tz@sha256:5c51bd0d0b73dff3b49c80734dfa213bebe127c2c4c6d68b8c1ef4ca2867544a
    Port:          <none>
    Host Port:     <none>
    Args:
      bootstrap
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 23 Sep 2022 14:06:16 -0400
      Finished:     Fri, 23 Sep 2022 14:06:17 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /mnt/zoneinfo from timezone (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brn2j (ro)
Containers:
  controller:
    Container ID:  cri-o://e9425b3ceeacee5086e33a0f8c5f383344030de25c6e9cf32b55686ffd1760ec
    Image:         registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    Ports:         80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-ssl-certificate=network/[REDACTED]-tls
    State:          Running
      Started:      Fri, 23 Sep 2022 14:06:17 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1000Mi
    Requests:
      cpu:      20m
      memory:   250Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TZ:             America/New_York
      POD_NAME:       ingress-nginx-controller-7987769596-vqvqs (v1:metadata.name)
      POD_NAMESPACE:  network (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /etc/localtime from timezone (ro,path="America/New_York")
      /usr/local/certificates/ from webhook-cert (ro)
      /usr/share/zoneinfo from timezone (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brn2j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  timezone:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-brn2j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
Events:
  Type    Reason     Age   From                      Message
  ----    ------     ----  ----                      -------
  Normal  Scheduled  48m   default-scheduler         Successfully assigned network/ingress-nginx-controller-7987769596-vqvqs to anya
  Normal  Pulled     48m   kubelet                   Container image "quay.io/k8tz/k8tz:0.7.0" already present on machine
  Normal  Created    48m   kubelet                   Created container provide-timezone
  Normal  Started    48m   kubelet                   Started container provide-timezone
  Normal  Pulled     48m   kubelet                   Container image "registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974" already present on machine
  Normal  Created    48m   kubelet                   Created container controller
  Normal  Started    48m   kubelet                   Started container controller
  Normal  RELOAD     48m   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>:
Name:                     ingress-nginx-controller
Namespace:                network
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.3.1
                          helm.sh/chart=ingress-nginx-4.2.5
                          helm.toolkit.fluxcd.io/name=ingress-nginx
                          helm.toolkit.fluxcd.io/namespace=network
Annotations:              meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: network
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.10.33.146
IPs:                      10.10.33.146
IP:                       10.0.1.1
LoadBalancer Ingress:     10.0.1.1
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31279/TCP
Endpoints:                10.11.0.64:80,10.11.2.13:80,10.11.3.95:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31335/TCP
Endpoints:                10.11.0.64:443,10.11.2.13:443,10.11.3.95:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31710
Events:
  Type    Reason        Age                  From                Message
  ----    ------        ----                 ----                -------
  Normal  IPAllocated   49m                  metallb-controller  Assigned IP ["10.0.1.1"]
  Normal  nodeAssigned  4m6s (x39 over 49m)  metallb-speaker     announcing from node "sakura" with protocol "layer2"


Name:              ingress-nginx-controller-admission
Namespace:         network
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/part-of=ingress-nginx
                   app.kubernetes.io/version=1.3.1
                   helm.sh/chart=ingress-nginx-4.2.5
                   helm.toolkit.fluxcd.io/name=ingress-nginx
                   helm.toolkit.fluxcd.io/namespace=network
Annotations:       meta.helm.sh/release-name: ingress-nginx
                   meta.helm.sh/release-namespace: network
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.10.142.0
IPs:               10.10.142.0
Port:              https-webhook  443/TCP
TargetPort:        webhook/TCP
Endpoints:         10.11.0.64:8443,10.11.2.13:8443,10.11.3.95:8443
Session Affinity:  None
Events:            <none>


Name:              ingress-nginx-controller-metrics
Namespace:         network
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/part-of=ingress-nginx
                   app.kubernetes.io/version=1.3.1
                   helm.sh/chart=ingress-nginx-4.2.5
                   helm.toolkit.fluxcd.io/name=ingress-nginx
                   helm.toolkit.fluxcd.io/namespace=network
Annotations:       meta.helm.sh/release-name: ingress-nginx
                   meta.helm.sh/release-namespace: network
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.10.110.243
IPs:               10.10.110.243
Port:              http-metrics  10254/TCP
TargetPort:        http-metrics/TCP
Endpoints:         10.11.0.64:10254,10.11.2.13:10254,10.11.3.95:10254
Session Affinity:  None
Events:            <none>

The helm output is really long, so I've added it in a text file here:
ingress-nginx-helm-output.txt

@cbc02009 cbc02009 added the kind/bug Categorizes issue or PR as related to a bug. label Sep 23, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Sep 23, 2022
@k8s-ci-robot
Contributor

@cbc02009: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@longwuyuan
Contributor

longwuyuan commented Sep 23, 2022

/remove-kind bug

Works for me:

% helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx -f values.yaml 
NAME: ingress-nginx
LAST DEPLOYED: Sat Sep 24 02:35:48 2022
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
[~/Documents/ingress-nginx-issues/9079] 
% cat values.yaml 
tcp:
  "2222": "services/gitea-ssh:2222"
[~/Documents/ingress-nginx-issues/9079] 
% k -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                     AGE
ingress-nginx-controller             LoadBalancer   10.96.64.153   172.18.0.2    80:31389/TCP,443:31051/TCP,2222:32423/TCP   2m25s
ingress-nginx-controller-admission   ClusterIP      10.96.81.162   <none>        443/TCP                                     2m25s
[~/Documents/ingress-nginx-issues/9079] 
% k -n ingress-nginx describe svc ingress-nginx-controller
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.3.1
                          helm.sh/chart=ingress-nginx-4.2.5
Annotations:              meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.64.153
IPs:                      10.96.64.153
LoadBalancer Ingress:     172.18.0.2
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31389/TCP
Endpoints:                10.244.0.11:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31051/TCP
Endpoints:                10.244.0.11:443
Port:                     2222-tcp  2222/TCP
TargetPort:               2222-tcp/TCP
NodePort:                 2222-tcp  32423/TCP
Endpoints:                10.244.0.11:2222
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age                  From                Message
  ----    ------        ----                 ----                -------
  Normal  IPAllocated   2m40s                metallb-controller  Assigned IP ["172.18.0.2"]
  Normal  nodeAssigned  36s (x2 over 2m29s)  metallb-speaker     announcing from node "kind-control-plane" with protocol "layer2"
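
For reference, when `.Values.tcp` is set, the chart should also render a `-tcp` ConfigMap (from `controller-configmap-tcp.yaml`) alongside the service shown above. A sketch of the expected object, assuming the default release name `ingress-nginx` in the `ingress-nginx` namespace (so `ingress-nginx.fullname` resolves to `ingress-nginx`):

```yaml
# Sketch only; name and namespace assume the default "ingress-nginx" release
# used in the transcript above. Chart-managed labels are omitted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-nginx
data:
  "2222": "services/gitea-ssh:2222"
```

Its presence can be checked with `kubectl -n ingress-nginx get cm ingress-nginx-tcp -o yaml`.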

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Sep 23, 2022
@longwuyuan
Contributor

The chart you have installed is not published by this project (https://kubernetes.github.io/ingress-nginx/deploy/#quick-start). The install you have seems to include an init container, but the default install of the chart published by this project does not have any such init container:

% k -n ingress-nginx get po ingress-nginx-controller-7bf78659d-pr4sn -o yaml| grep -i image
    image: registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
    imagePullPolicy: IfNotPresent
    image: sha256:b7c8e5e285c0a247776882de8af0895b1e46bab32e7a1090e43e7fbb845bac50
    imageID: registry.k8s.io/ingress-nginx/controller@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974

Please try the suggested process to install the controller. If you find a bug or a problem related to the code of this project, then kindly re-open this issue with the data on the bug or problem you found.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

The chart you have installed is not published by this project (https://kubernetes.github.io/ingress-nginx/deploy/#quick-start). The install you have seems to include an init container, but the default install of the chart published by this project does not have any such init container:

% k -n ingress-nginx get po ingress-nginx-controller-7bf78659d-pr4sn -o yaml| grep -i image
   image: registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
   imagePullPolicy: IfNotPresent
   image: sha256:b7c8e5e285c0a247776882de8af0895b1e46bab32e7a1090e43e7fbb845bac50
   imageID: registry.k8s.io/ingress-nginx/controller@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974

Please try the suggested process to install the controller. If you find a bug or a problem related to the code of this project, then kindly re-open this issue with the data on the bug or problem you found.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cbc02009
Author

cbc02009 commented Sep 24, 2022

@longwuyuan I am using the official chart from the project. I have a Kyverno policy that adds init containers to all my pods to add timezone data. I can try disabling the Kyverno policy, but I don't think that will make a difference, since this happens well before the policy is applied.

Edit: Here's the repo definition to prove I'm using the correct chart: https://github.com/cbc02009/k8s-home-ops/blob/main/cluster/repositories/helm/ingress-nginx-charts.yaml

@longwuyuan
Contributor

longwuyuan commented Sep 24, 2022

Please see my post above with copy/pasted data from a cluster created using kind. It shows that there is no problem with the functionality of opening a port for TCP, in this case 2222.

The reason for closing the issue is to help reduce the resources spent tracking support issues. If the functionality of configuring a TCP port as documented were broken, I would not have been able to create it in my test.

My suggestion is that you try to create the same TCP service in a kind or minikube cluster without any customization, Kyverno policies, or extra settings. Just try the default out-of-the-box install of the ingress-nginx controller with the values.yaml for the TCP port. That way you can see whether the function is broken in all clusters or not. Also, there are many more engineers on Kubernetes Slack, so it's better to get support there, because very few people will offer support here on GitHub for all issues.

It's not possible to reply to closed issues, so kindly post bug-related data as proof of a problem to solve (and reopen the issue), or discuss this on Kubernetes Slack.
