
Pod fails starting (User/Group/SecurityContext problem?) #1542

@fpoyer

Description


What were you trying to do?

I was trying to run a simple Debian container in my cluster to inspect a volume mount permissions issue.

Possibly important: my deployment's spec uses the following securityContext:

securityContext:
  runAsUser: 1001
  runAsGroup: 1001
  fsGroup: 1001

(used so that a container running as a non-root user can read and write the content of the volume mount)
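
For what it's worth, the values can be confirmed on the live deployment with something like this (just a sketch; the kubectl context, namespace and deployment names are the ones that appear in the logs further down):

kubectl --context kubernetes-admin@my_cluster --namespace my-namespace \
  get deployment my-deployment \
  -o jsonpath='{.spec.template.spec.securityContext}'
# should show runAsUser, runAsGroup and fsGroup all set to 1001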

What did you expect to happen?

My container to run, giving me a shell (as user:group 1001:1001) inside my cluster, with volumes mounted at $TELEPRESENCE_ROOT, where I could inspect them.
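
Concretely, I planned to run something along these lines inside that shell (a rough sketch; the sub-path under $TELEPRESENCE_ROOT is a made-up example, not an actual mount point from my deployment):

id                                            # expect uid=1001 gid=1001
ls -ln "$TELEPRESENCE_ROOT"                   # numeric owner/group of the mounted tree
ls -ln "$TELEPRESENCE_ROOT/path/to/volume"    # hypothetical sub-path, adjust to the real mount
touch "$TELEPRESENCE_ROOT/path/to/volume/write-test"   # check write access as 1001:1001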

What happened instead?

Container did not run...


Automatically included information

Command line: ['/usr/local/bin/telepresence', '--expose', '9999:81', '--swap-deployment', 'my-deployment', '--namespace', 'my-namespace', '--docker-run', '-it', '--rm', 'debian', '/bin/bash']
Version: 0.108
Python version: 3.9.0 (default, Dec 2 2020, 10:34:08) [Clang 12.0.0 (clang-1200.0.32.27)]
kubectl version: Client Version: v1.19.4 // Server Version: v1.19.5
oc version: (error: [Errno 2] No such file or directory: 'oc')
OS: Darwin MyMac.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64 x86_64

Traceback (most recent call last):
  File "/usr/local/bin/telepresence/telepresence/cli.py", line 135, in crash_reporting
    yield
  File "/usr/local/bin/telepresence/telepresence/main.py", line 65, in main
    remote_info = start_proxy(runner)
  File "/usr/local/bin/telepresence/telepresence/proxy/operation.py", line 271, in act
    wait_for_pod(runner, self.remote_info)
  File "/usr/local/bin/telepresence/telepresence/proxy/remote.py", line 140, in wait_for_pod
    raise RuntimeError(
RuntimeError: Pod isn't starting or can't be found: {'conditions': [{'lastProbeTime': None, 'lastTransitionTime': '2021-02-26T13:36:46Z', 'status': 'True', 'type': 'Initialized'}, {'lastProbeTime': None, 'lastTransitionTime': '2021-02-26T13:36:46Z', 'message': 'containers with unready status: [my-container]', 'reason': 'ContainersNotReady', 'status': 'False', 'type': 'Ready'}, {'lastProbeTime': None, 'lastTransitionTime': '2021-02-26T13:36:46Z', 'message': 'containers with unready status: [my-container]', 'reason': 'ContainersNotReady', 'status': 'False', 'type': 'ContainersReady'}, {'lastProbeTime': None, 'lastTransitionTime': '2021-02-26T13:36:46Z', 'status': 'True', 'type': 'PodScheduled'}], 'containerStatuses': [{'containerID': 'docker://a4a8ac55321348852100f9ae4ed3c9fac4b60455aa6b467f7828e49a64ab231b', 'image': 'datawire/telepresence-k8s-priv:0.108', 'imageID': 'docker-pullable://datawire/telepresence-k8s-priv@sha256:a577bf403a0a824bad10ee7524fc4b29e0429d8e39efde71d072cbe7b14ec0ff', 'lastState': {}, 'name': 'my-container', 'ready': False, 'restartCount': 0, 'started': False, 'state': {'terminated': {'containerID': 'docker://a4a8ac55321348852100f9ae4ed3c9fac4b60455aa6b467f7828e49a64ab231b', 'exitCode': 1, 'finishedAt': '2021-02-26T13:36:48Z', 'message': 'Could not load host key: /etc/ssh/ssh_host_rsa_key\r\nCould not load host key: /etc/ssh/ssh_host_dsa_key\r\nCould not load host key: /etc/ssh/ssh_host_ecdsa_key\r\nCould not load host key: /etc/ssh/ssh_host_ed25519_key\r\nsshd: no hostkeys available -- exiting.\r\n', 'reason': 'Error', 'startedAt': '2021-02-26T13:36:48Z'}}}], 'hostIP': '51.178.59.234', 'phase': 'Failed', 'podIP': '10.2.0.173', 'podIPs': [{'ip': '10.2.0.173'}], 'qosClass': 'BestEffort', 'startTime': '2021-02-26T13:36:46Z'}

Logs:


 178.3 TEL | [212] Capturing: kubectl --context kubernetes-admin@my_cluster --namespace my-namespace get pod my-deployment-16feffef602245dcb47a445f70214485 -o json
 178.7 TEL | [212] captured in 0.48 secs.
 179.0 TEL | [213] Capturing: kubectl --context kubernetes-admin@my_cluster --namespace my-namespace get pod my-deployment-16feffef602245dcb47a445f70214485 -o json
 179.1 TEL | [213] captured in 0.16 secs.
 179.4 TEL | [214] Capturing: kubectl --context kubernetes-admin@my_cluster --namespace my-namespace get pod my-deployment-16feffef602245dcb47a445f70214485 -o json
 179.6 TEL | [214] captured in 0.18 secs.
 179.8 TEL | [215] Capturing: kubectl --context kubernetes-admin@my_cluster --namespace my-namespace get pod my-deployment-16feffef602245dcb47
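
As a side note, the sshd error shown in the traceback can be read straight out of the failed pod's status with something like this (pod name copied from the log above; it changes on every run):

kubectl --context kubernetes-admin@my_cluster --namespace my-namespace \
  get pod my-deployment-16feffef602245dcb47a445f70214485 \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
# prints the "Could not load host key: ..." / "sshd: no hostkeys available -- exiting." lines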
