Flask App Error #73
Looks like a bug in the secret.
Notice the extra f in the name. |
Huh, wow, OK. Should I get rid of this and re-apply the secret? kubectl delete secret -f ... ? |
Just fixed. I did |
I forced a pod restart in order to pick up the updated secret.
Not ideal, but effective. |
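A minimal sketch of that recovery path, assuming the secret name "github-app" and the key quoted later in this thread; the deployment name and label selector are placeholders, not values taken from the manifests:

```bash
# Recreate the secret from the corrected key file (idempotent via --dry-run + apply;
# older kubectl versions use plain --dry-run instead of --dry-run=client):
kubectl create secret generic github-app \
  --from-file=issue-label-bot-github-app.private-key.pem=./issue-label-bot-github-app.private-key.pem \
  --dry-run=client -o yaml | kubectl apply -f -

# Mounted secret volumes refresh eventually, but env vars and files read at startup
# do not, so restart the pods to pick up the new key:
kubectl rollout restart deployment/<flask-app-deployment>
# On older kubectl without "rollout restart", deleting the pods has the same effect:
kubectl delete pods -l <app-label-selector>
```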
Looks like the environment variable is also wrong. It is using "kf-label-bot-prod.private-key.pem" but the secret is "issue-label-bot-github-app.private-key.pem". I think the kf-label-bot is specific to KF. |
I'm not sure what the correct value should be. I guess the questions to answer would be
#57 is a little unclear
|
@hamelsmu I think you want to change this line
to have the correct filename; i.e., the filename should match the key in the K8s secret. |
Is this something you feel comfortable changing? How do I get the right k8s secret again? Sorry for the noob questions. |
Yeah, I am completely lost; could use some help. |
Data is a map of filenames to base64-encoded file contents. So Kubernetes will create a file with those contents in whatever location the volume mount specifies. Here
We mount the secret on /var/secrets/github. The secret "github-app" has key "issue-label-bot-github-app.private-key.pem", so the path will be
So we need to update the environment variable in the manifest
To use that path. |
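A sketch of how the pieces line up, using the mount path and secret key quoted above; the pod name, deployment name, and environment variable name are placeholders rather than the repo's actual values:

```bash
# Each key in the mounted Secret becomes a file under the volume's mountPath, so the
# key "issue-label-bot-github-app.private-key.pem" in secret "github-app" ends up at:
#   /var/secrets/github/issue-label-bot-github-app.private-key.pem

# Verify from inside a running pod:
kubectl exec -it <flask-app-pod> -- ls /var/secrets/github

# Point the env var at that path. Shown imperatively for illustration only; the real
# fix is editing the value in deployment.yaml, and GITHUB_APP_PEM_KEY is a guess at
# the variable name:
kubectl set env deployment/<flask-app-deployment> \
  GITHUB_APP_PEM_KEY=/var/secrets/github/issue-label-bot-github-app.private-key.pem
```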
@jlewi how do you know
How can I find this information? Just trying to learn. I also updated Issue-Label-Bot/deployment/base/deployment.yaml with the new value as you suggested and did |
If you look at the Deployment the secret is specified here
So we know the secret is
So there are two files |
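One way to dig this information out yourself, assuming the secret name "github-app" from above; the deployment name is a placeholder:

```bash
# See which Secret the Deployment mounts and where it is mounted:
kubectl get deployment <flask-app-deployment> -o yaml | grep -B2 -A6 secretName

# List the keys stored in the secret (describe shows key names and sizes, not values):
kubectl describe secret github-app
```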
Hmm, I wonder why the error is still showing up; it is almost like I need to refresh or re-deploy something, but I am not sure what. I tried to redeploy but keep seeing the same error. It looks like the containers are stuck in
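For reference, a few generic commands for inspecting stuck pods; the names and label selector are placeholders:

```bash
kubectl get pods -l <app-label-selector>       # overall pod status
kubectl describe pod <pod-name>                # Events at the bottom usually say why it is stuck
kubectl logs <pod-name> --previous             # logs from the last crashed container, if any
kubectl rollout status deployment/<flask-app-deployment>
```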
|
@hamelsmu What command did you run to deploy? I think we are looking at two different Kubernetes clusters. I think the front end (just the Flask app) and the backend (embedding server) should now be running on two different clusters. Here's how we can verify. First, let's figure out where
Let's check if we have an ingress for that IP address
Now we can check which k8s service that ingress is pointing to
So it's pointing at service
We can compare the selector to the labels on a given deployment
So the above confirms that I'm looking at the pods that should map to the IP address. It also looks like the environment variable So it looks to me like we are looking at different clusters (because your namespace isn't showing those pods). We can check which cluster by running cluster-info
If the IP address for your master is different, you are talking to a different Kubernetes cluster. You can get a mapping of kubectl contexts to kubeconfig using
So the full cluster name is
So I'm looking at
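The whole walkthrough, condensed into one hedged sketch; every angle-bracketed name is a placeholder for a value that appeared in the stripped command output above:

```bash
# 1. Which IP does the bot's hostname resolve to?
nslookup <app-hostname>

# 2. Is there an ingress with that address, and which service does it route to?
kubectl get ingress --all-namespaces -o wide
kubectl describe ingress <ingress-name>

# 3. Which pods does that service select? Compare its selector with the deployment labels:
kubectl get service <service-name> -o yaml | grep -A5 selector
kubectl get deployments --show-labels

# 4. Which cluster is kubectl actually talking to?
kubectl cluster-info                 # control-plane endpoint; compare the master IP
kubectl config current-context
kubectl config get-contexts          # maps context names to cluster names
```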
|
I can confirm that I'm using the cluster
I get Not sure how to fix this. I went to the gcloud console and used |
friendly bump |
Ahh, I ssh'd into one of the pods and saw that the path is
I could not figure out how to change the deployment.yaml file in this repo, so I did the hack of
I tried to edit
@jlewi can you direct me to the right yaml file for this, or change this in a PR? Thanks. I can confirm the issue is currently fixed. |
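A sketch of that workaround, with placeholder names; note that kubectl edit only changes the live object, so the change is lost the next time the manifests are applied:

```bash
# Confirm the actual key file path from inside the pod:
kubectl exec -it <flask-app-pod> -- ls /var/secrets/github

# The quick hack: edit the live Deployment so the env var points at that file.
# The durable fix is still updating deployment.yaml in the repo.
kubectl edit deployment <flask-app-deployment>
```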
@jlewi Second question: are the YAML files in this repo up to date? The deployment.yaml files do not match what is currently running in our cluster; for example, I cannot find any yaml files with the deployment |
The answer to the first question is that you need to use kustomize to build the manifests, so
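Roughly, assuming the deployment/base directory mentioned earlier is the right entry point (there may be overlays that should be built instead):

```bash
cd Issue-Label-Bot/deployment/base
kustomize build . | less                  # inspect the rendered manifests
kustomize build . | kubectl apply -f -    # or apply them
```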
The answer again is kustomize. Compare the output of kustomize build and the names should match. If you look at the kustomization file
We use kustomize to add a prefix to all resource names. That's where the "label-bot-" prefix comes from. A systematic way to address this would be to
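An illustrative fragment, not the repo's exact file, showing where the prefix comes from and how to see the final names:

```bash
# kustomization.yaml contains something along the lines of:
#   namePrefix: label-bot-
#   resources:
#   - deployment.yaml
#   - service.yaml

# So a Deployment named "worker" in deployment.yaml is emitted as "label-bot-worker":
kustomize build . | grep "name: label-bot-"
```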
|
Thanks. I won't go into kustomize, but at least this provides a starting point for me to try to do this. Closing the issue since this is fixed. |
You can see the logs on this URL
I am not really sure what is happening. I might need some help debugging, or guidance on how to debug this in Kubernetes; it's abstracted enough that I'm not sure what the best route to debug this is.
Thanks, @jlewi