
helm upgrade deletes and recreates serviceaccount #18731

Closed
@rhooper

Description

Related to #15701, #10336, #9133

Environment

  • Airbyte version: 0.44.17 / 0.44.33
  • OS Version / Instance: macOS ARM, AWS EKS (Graviton, Intel)
  • Deployment: helm chart version 0.44.33
  • Source Connector and version: n/a
  • Destination Connector and version: n/a
  • Step where error happened: Deploy / Upgrade Airbyte

Current Behavior

The ServiceAccount object is deleted and recreated during helm upgrade, which makes it awkward to tweak configuration by updating values.
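
The delete-then-create pairs in the debug output below are how Helm handles hook resources (the default before-hook-creation delete policy removes the previous hook object before creating a new one), so my working assumption is that the chart renders the ServiceAccount with a helm.sh/hook annotation. A quick way to check, as a diagnostic sketch only (release/namespace names taken from the logs below):

$ helm get hooks airybte-airbyte -n airybte
# if the ServiceAccount appears in this output, Helm manages it as a hook and
# will delete/recreate it on every upgrade
$ kubectl get serviceaccount airbyte-admin -n airybte -o yaml
# look for a "helm.sh/hook" annotation in metadata.annotations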

Expected Behavior

The ServiceAccount should be left untouched by the upgrade.

Logs

$ (main) helm upgrade  -f values.yaml -n airybte airybte-airbyte . --debug 
upgrade.go:142: [debug] preparing upgrade for airybte-airbyte
upgrade.go:150: [debug] performing update for airybte-airbyte
upgrade.go:322: [debug] creating upgraded release for airybte-airbyte
client.go:310: [debug] Starting delete for "airbyte-admin" ServiceAccount
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "airbyte-db" StatefulSet
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "airbyte-db-svc" Service
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "airbyte-minio" StatefulSet
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "airbyte-minio-svc" Service
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "airybte-airbyte-airbyte-env" ConfigMap
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "airybte-airbyte-airbyte-secrets" Secret
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "airybte-airbyte-airbyte-bootloader" Pod
client.go:128: [debug] creating 1 resource(s)
client.go:540: [debug] Watching for changes to Pod airybte-airbyte-airbyte-bootloader with timeout of 5m0s
client.go:568: [debug] Add/Modify event for airybte-airbyte-airbyte-bootloader: ADDED
client.go:627: [debug] Pod airybte-airbyte-airbyte-bootloader pending
client.go:568: [debug] Add/Modify event for airybte-airbyte-airbyte-bootloader: MODIFIED
client.go:629: [debug] Pod airybte-airbyte-airbyte-bootloader running
client.go:568: [debug] Add/Modify event for airybte-airbyte-airbyte-bootloader: MODIFIED
client.go:629: [debug] Pod airybte-airbyte-airbyte-bootloader running
client.go:568: [debug] Add/Modify event for airybte-airbyte-airbyte-bootloader: MODIFIED
client.go:622: [debug] Pod airybte-airbyte-airbyte-bootloader succeeded
client.go:229: [debug] checking 13 resources for changes
client.go:512: [debug] Looks like there are no changes for Secret "airybte-airbyte-gcs-log-creds"
client.go:512: [debug] Looks like there are no changes for ConfigMap "airybte-airbyte-pod-sweeper-sweep-pod-script"
client.go:512: [debug] Looks like there are no changes for ConfigMap "airybte-airbyte-temporal-dynamicconfig"
client.go:512: [debug] Looks like there are no changes for Role "airbyte-admin-role"
client.go:521: [debug] Patch RoleBinding "airbyte-admin-binding" in namespace airybte
client.go:512: [debug] Looks like there are no changes for Service "airybte-airbyte-airbyte-server-svc"
client.go:512: [debug] Looks like there are no changes for Service "airybte-airbyte-temporal"
client.go:512: [debug] Looks like there are no changes for Service "airybte-airbyte-airbyte-webapp-svc"
client.go:521: [debug] Patch Deployment "airybte-airbyte-pod-sweeper-pod-sweeper" in namespace airybte
client.go:521: [debug] Patch Deployment "airybte-airbyte-server" in namespace airybte
client.go:521: [debug] Patch Deployment "airybte-airbyte-temporal" in namespace airybte
client.go:521: [debug] Patch Deployment "airybte-airbyte-webapp" in namespace airybte
client.go:521: [debug] Patch Deployment "airybte-airbyte-worker" in namespace airybte
upgrade.go:157: [debug] updating status for upgraded release for airybte-airbyte

worker:

2022-10-31 18:46:09 ERROR i.t.i.w.Poller$PollerUncaughtExceptionHandler(logPollErrors):289 - Failure in thread Host Local Workflow Poller: 4
io.grpc.StatusRuntimeException: UNAVAILABLE: last connection error: connection closed before server preface received
    at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271) ~[grpc-stub-1.49.0.jar:1.49.0]
    at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252) ~[grpc-stub-1.49.0.jar:1.49.0]
    at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165) ~[grpc-stub-1.49.0.jar:1.49.0]
    at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.pollWorkflowTaskQueue(WorkflowServiceGrpc.java:2656) ~[temporal-servicecli
    at io.temporal.internal.worker.WorkflowPollTask.poll(WorkflowPollTask.java:83) ~[temporal-sdk-1.8.1.jar:?]
    at io.temporal.internal.worker.WorkflowPollTask.poll(WorkflowPollTask.java:39) ~[temporal-sdk-1.8.1.jar:?]
    at io.temporal.internal.worker.Poller$PollExecutionTask.run(Poller.java:262) ~[temporal-sdk-1.8.1.jar:?]
    at io.temporal.internal.worker.Poller$PollLoopTask.run(Poller.java:227) ~[temporal-sdk-1.8.1.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
    at java.lang.Thread.run(Thread.java:1589) ~[?:?]

Steps to Reproduce

  1. Install the helm chart with values.
  2. Upgrade the chart with helm upgrade (no need to edit values); a reproduction sketch follows this list.
  3. Check the logs for errors...
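
Reproduction sketch; the chart path and values file are whatever was used for the original install, and the release/namespace names match the logs above:

$ helm install airybte-airbyte . -f values.yaml -n airybte
$ helm upgrade airybte-airbyte . -f values.yaml -n airybte --debug
# even with identical values, the --debug output shows "Starting delete" /
# "creating 1 resource(s)" pairs for the ServiceAccount and several other resources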

If you then try to perform any action that needs to spin up pods (e.g. creating a source or destination), there will be errors about being unable to talk to the Kubernetes API.

pod-sweeper:
error: You must be logged in to the server (Unauthorized)
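
My guess is that the already-running pods still mount a token issued for the old ServiceAccount; since the recreated ServiceAccount has a new UID, that token is rejected as Unauthorized. A hedged way to confirm, plus a workaround sketch (not a fix for the chart itself):

$ kubectl get serviceaccount airbyte-admin -n airybte -o jsonpath='{.metadata.uid}'
# the UID changes after each upgrade if the ServiceAccount is being recreated
$ kubectl rollout restart deployment -n airybte
# restarting the workloads makes them pick up tokens for the new ServiceAccount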

Are you willing to submit a PR?

Probably
