[BUG] Cannot disable autoconf in Kubernetes – all Ingresses are picked up #2277


Open
2 tasks done
Alexxanddr opened this issue May 15, 2025 · 4 comments

@Alexxanddr

What happened?

Description:

Currently, when deploying BunkerWeb on Kubernetes, it automatically picks up all Ingresses in the cluster due to autoconf. There seems to be no way to disable this behavior.

According to the documentation, there is an AUTOCONF_MODE option which should allow disabling autoconf, but it doesn’t appear to have any effect when running on Kubernetes. Even when setting AUTOCONF_MODE=no, BunkerWeb still loads all Ingresses.

In my use case, I only want BunkerWeb to manage a single, manually configured Ingress resource. However, the current autoconf behavior prevents this.

Expected behavior:

There should be an option (e.g. via environment variable or config file) to disable autoconf entirely in Kubernetes so that only explicitly configured Ingresses are handled.

Environment:

  • Platform: Kubernetes
  • BunkerWeb version: 1.6.1
  • Helm chart version: 0.0.7

Suggested solution:

Fix the AUTOCONF_MODE=manual option so that it properly disables autoconf on Kubernetes, or allow passing a custom configuration through the Helm chart values.
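
For illustration, a rough sketch of the kind of override being requested in the chart values. The key names below are hypothetical and may not match the actual chart schema; they only show the shape of the desired configuration:

```yaml
# Hypothetical values.yaml sketch -- key names are NOT taken from the actual
# chart (0.0.7); they only illustrate the kind of override being requested.
controller:
  extraEnv:
    - name: AUTOCONF_MODE
      value: "no"            # requested: fully disable autoconf
    - name: KUBERNETES_INGRESS_CLASS
      value: "bunkerweb"     # requested: only handle this ingress class
```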

How to reproduce?

1. Deploy BunkerWeb on a Kubernetes cluster (e.g. using the Helm chart).
2. Set the environment variable AUTOCONF_MODE=no in the release.
3. Create multiple Ingress resources in the cluster (a minimal example of such an Ingress follows this list).
4. Observe that BunkerWeb still autoconfigures all of them, despite the manual setting.
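
For step 3, a minimal example of an Ingress owned by another controller; the name, namespace, host and class are placeholders. This is the kind of resource that still shows up in BunkerWeb's configuration even though it does not use the bunkerweb class:

```yaml
# Illustrative Ingress belonging to a different controller.
# Name, namespace, host, service and class are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: other-app
  namespace: other-namespace
spec:
  ingressClassName: nginx          # not "bunkerweb", so it should be ignored
  rules:
    - host: other-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: other-app
                port:
                  number: 80
```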

Configuration file(s) (yaml or .env)

Relevant log output

BunkerWeb version

1.6.1

What integration are you using?

Kubernetes

Linux distribution (if applicable)

No response

Removed private data

  • I have removed all private data from the configuration file and the logs

Code of Conduct

  • I agree to follow this project's Code of Conduct
@Alexxanddr Alexxanddr added the bug Something isn't working label May 15, 2025
@TheophileDiot
Member

Hi @Alexxanddr, thank you for opening this issue.

Have you had a chance to review the Kubernetes integration section of the BunkerWeb documentation? You can fine-tune the scope of the ingress controller using either:

  • Namespaces — Limit the controller to a specific namespace using the NAMESPACES environment variable in the bunkerweb-controller deployment:
    👉 Docs - Namespaces

  • Ingress Classes — Filter Ingress resources by setting the KUBERNETES_INGRESS_CLASS environment variable (e.g. bunkerweb) and use ingressClassName: bunkerweb in your Ingress definitions:
    👉 Docs - Ingress Class

These options provide granular control over which Ingress resources BunkerWeb processes, helping you avoid conflicts in shared clusters or multi-tenant environments.
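
Taken together, a minimal sketch of both options might look like the following. Resource, label and namespace names are illustrative and the image tag is shown only as an example; the environment variables are the ones named above:

```yaml
# Sketch combining both scoping options described above.
# Deployment/container/namespace names and the image tag are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bunkerweb-controller
spec:
  selector:
    matchLabels:
      app: bunkerweb-controller
  template:
    metadata:
      labels:
        app: bunkerweb-controller
    spec:
      containers:
        - name: controller
          image: bunkerity/bunkerweb-autoconf:1.6.1
          env:
            - name: NAMESPACES
              value: "my-namespace"          # example: only watch this namespace
            - name: KUBERNETES_INGRESS_CLASS
              value: "bunkerweb"             # example: only handle this ingress class
---
# Matching Ingress that opts in to BunkerWeb via ingressClassName.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
spec:
  ingressClassName: bunkerweb
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```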

@TheophileDiot TheophileDiot self-assigned this May 16, 2025
@Alexxanddr
Author

Hi @TheophileDiot, thank you for the quick reply!

Yes, I’ve reviewed the Kubernetes integration section and I’m already using KUBERNETES_INGRESS_CLASS=bunkerweb correctly. My setup works and BunkerWeb coexists fine with another Ingress controller.

The issue is not about avoiding conflicts between multiple controllers — everything works on that front.

The actual problem is that autoconf still detects all 20 Ingresses in the cluster, even though only 1 of them is using the bunkerweb ingress class. The other 19 use a different class and should be ignored.

Despite using KUBERNETES_INGRESS_CLASS=bunkerweb, BunkerWeb still shows all 20 Ingresses in the generated configuration, which is not ideal. I would expect it to completely ignore those not matching the specified class.

Also, scoping by namespace (with NAMESPACES) has a similar effect to ingress class filtering but is more limited, and doesn’t really solve the issue in clusters with many Ingresses across different namespaces.

What I’m looking for is a way to:

  • Disable autoconf entirely, or
  • Ensure it only includes the Ingresses that actually match the KUBERNETES_INGRESS_CLASS.

Thanks again for your help!

@TheophileDiot
Member

Hi @Alexxanddr, thanks for your message.

It looks like the issue might be related to how the Ingress class is being configured. Typically, it should only process resources that match the specified class, so if it's picking up others, something might be off.

Could you please share more details about your stack—such as your Ingress configuration, controller setup, and any relevant logs? That will help us determine whether this is a configuration issue or a potential bug.

Thanks!

@Alexxanddr
Author

Hi @TheophileDiot, thanks again for the follow-up.

As requested, I’m attaching more details about the setup:

  • Screenshots showing the two Ingress resources, each clearly specifying a different ingressClassName, and the BunkerWeb web UI showing that all Ingresses are recognized
  • YAML definitions of both IngressClasses and of the BunkerWeb DaemonSet
  • Logs from both the BunkerWeb controller and scheduler pods

bunkerweb-issue.zip

Both Ingress controllers are functioning correctly and coexist without problems. However, BunkerWeb still detects and displays all Ingress resources, including the ones that are not assigned to the bunkerweb class. I would expect it to ignore those completely, as they use a different ingress class.

The environment variable KUBERNETES_INGRESS_CLASS=bunkerweb is correctly set in the BunkerWeb daemonset, and both Ingresses explicitly define their respective ingressClassName. Still, the BunkerWeb autoconf logic includes the non-bunkerweb Ingress resources.
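
For readers without access to the archive, an illustrative reconstruction of the two IngressClasses described above (names and controller strings are placeholders, not copied from the attachment; KUBERNETES_INGRESS_CLASS=bunkerweb sits on the DaemonSet as stated):

```yaml
# Illustrative reconstruction only -- the real manifests are in
# bunkerweb-issue.zip; controller strings below are placeholders/examples.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: bunkerweb
spec:
  controller: bunkerweb.io/ingress-controller   # placeholder controller name
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx              # example: the other controller in the cluster
```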

Let me know if you need anything else to help troubleshoot — happy to provide further details.

Thanks!
