[BUG] Failed to check for stale credentials #813
Comments
Any update on this issue? Please share if there is any workaround.
I also had this problem the day before yesterday, using the Helm chart. The only difference was that I got a timeout instead of a refused connection. I tried multiple things.
Then, at some point, it randomly started working again, but only after I had undone all my changes and about an hour had passed. During my whole investigation, the controller was able to sync secrets to k8s; it was just the env injector having trouble. Because of this, I don't think there was an issue with Azure at the time, and (as mentioned before) I had also tested the network connectivity, along the lines of the probe sketched below.
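A minimal Go sketch of that kind of connectivity check. The IP is taken from the error message in this issue; the port is an assumption, so substitute the actual env injector endpoint from your cluster:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical env injector endpoint: the IP comes from the error in
	// this issue, the port is an assumption -- use your cluster's values.
	addr := "10.0.1.80:443"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// "connection refused" means the host was reachable but nothing was
		// listening on the port; a timeout (as in the comment above) usually
		// points to a routing or node-level network problem instead.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded\n", addr)
}
```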
It’s one of the blockers for the upgrade; for now we have downgraded the versions. It would be great if we could get a fix for this.
We had this problem again last week, but it turned out to be a faulty node that was missing some network connectivity. When I replaced it with a new one, the issue went away.
Components and versions
App version 1.7.3
Describe the bug
After upgrading akv2k8s to 1.7.3, we noticed the pod getting restarted with the following error:
main.go:267] "failed to get credentials" error="Failed to check for stale credentials …….. dial 10.0.1.80: connect: connection refused"
After one or two restarts, the pod runs without any issues.
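The fact that a restart or two clears the error suggests the refusal is transient, e.g. the injector endpoint not yet being reachable at the moment the pod dials it. Purely as an illustration (this is not the actual akv2k8s code path, and the endpoint is a placeholder), a retry with backoff inside the process would ride out the same transient failure that currently surfaces as a restart:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry retries a TCP dial with exponential backoff. Illustrative
// only: akv2k8s's real credential check is not shown here.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	backoff := time.Second
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("attempt %d: %v (retrying in %s)\n", i+1, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // double the wait between attempts
	}
	return nil, fmt.Errorf("after %d attempts: %w", attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("10.0.1.80:443", 5) // placeholder endpoint
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	conn.Close()
	fmt.Println("connected")
}
```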
To Reproduce
Create a deployment with environment injection, then delete the pod or trigger a rollout restart (a client-go sketch of the restart follows).
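To script the rollout-restart variant of the reproduction, a restart can be triggered from Go with client-go by stamping the pod template with the same restartedAt annotation that `kubectl rollout restart` uses. The namespace and deployment name below are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Patch the pod template with a restartedAt annotation so new pods are
	// created and the env injector's credential check runs again.
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339))

	// "my-namespace" and "my-deployment" are placeholders.
	_, err = clientset.AppsV1().Deployments("my-namespace").Patch(
		context.TODO(), "my-deployment",
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("rollout restart triggered")
}
```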
Expected behavior
The pod should start without being restarted.