Unable to create a cluster inside LXD container #455
@GrosLalo seems you are using a proxy.
Please configure the NO_PROXY env variable with localhost and your docker subnet (I assume it is 172.17.0.0/16) and try again. |
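An editor's sketch of that advice, assuming the default docker bridge subnet of 172.17.0.0/16 (check yours with `ip addr show docker0` before copying this):

```shell
# Exclude localhost and the docker bridge subnet from the proxy.
# 172.17.0.0/16 is docker's default bridge subnet; adjust if yours differs.
export NO_PROXY="localhost,127.0.0.1,172.17.0.0/16"
export no_proxy="$NO_PROXY"   # some tools only read the lowercase variant
```

Note that not every tool honors CIDR notation in NO_PROXY, so behavior can vary by client.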
@aojea: I am getting the following error when setting NO_PROXY:
I then tried to run the
For instance, for the above issues where we have permission denied (e.g. /proc/sys/vm/overcommit_memory), I checked that the default user
Any ideas? |
Do you have enough memory? |
Yes, plenty of memory: 14GB available. |
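A quick way to confirm available memory on a Linux host is to parse /proc/meminfo; this is a sketch, and `mem_available_mib` is just an illustrative helper name:

```shell
# Report MemAvailable from /proc/meminfo, converted from kB to MiB.
mem_available_mib() {
  awk '/^MemAvailable:/ { printf "%d\n", $2 / 1024 }' /proc/meminfo
}
mem_available_mib
```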
That is definitely the relevant part. Are you running with SELinux or AppArmor by any chance? Normally I'd expect a
The question is: can a container open those? I.e. from |
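One quick way to answer the SELinux/AppArmor question is to probe for each LSM; this is a sketch, and `lsm_status` is just an illustrative helper name:

```shell
# Report whether AppArmor or SELinux appears to be active on this host.
lsm_status() {
  if [ -f /sys/module/apparmor/parameters/enabled ]; then
    echo "apparmor: $(cat /sys/module/apparmor/parameters/enabled)"
  else
    echo "apparmor: not present"
  fi
  if command -v getenforce >/dev/null 2>&1; then
    echo "selinux: $(getenforce)"
  else
    echo "selinux: not present"
  fi
}
lsm_status
```

`Y` for AppArmor, or `Enforcing` for SELinux, would suggest an LSM is confining the containers.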
I can run docker run --privileged -it --rm ubuntu on the host. Perhaps I should mention that the host is an LXD container, but the host is configured correctly (at least enough to run the above --privileged command).
Would there be some other test I could perform to see if my host (i.e. the LXD container) is inadequate? Or another test to see if something is buggy in kind on my setup? Thanks in advance. |
Can you access those paths from within the privileged container?
Not working inside an LXD container is not surprising. Kubernetes still
needs access to the host and the container is probably too restrictive.
|
I.e., the test would be:
|
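The concrete test was elided above; a sketch of what it might look like follows, with `check_paths` as a hypothetical helper meant to be run from inside the privileged container (e.g. launched via `docker run --privileged`):

```shell
# Check whether each given path can be opened for writing.
# kubelet needs to write sysctls such as /proc/sys/vm/overcommit_memory.
check_paths() {
  for p in "$@"; do
    if [ -w "$p" ]; then
      echo "writable: $p"
    else
      echo "not-writable: $p"
    fi
  done
}
check_paths /proc/sys/vm/overcommit_memory /proc/sys/kernel/panic
```

Any `not-writable` result for those paths would explain the permission-denied errors.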
Per the docs at https://help.ubuntu.com/lts/serverguide/lxd.html.en, the permissions are likely too restrictive even when the trivial "privileged" setting is enabled. These guides cover some more settings, including read-write /proc and /sys, and may be relevant: |
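As an editor's sketch of the kind of settings those guides cover (not verified here): `security.privileged`, `security.nesting`, and `raw.lxc` are documented LXD configuration keys, but the exact combination needed varies by guide and LXD version.

```yaml
# Illustrative LXD profile fragment; apply with e.g. `lxc profile edit <profile>`.
# Verify each key against your LXD version's documentation before use.
config:
  security.privileged: "true"   # run without UID mapping
  security.nesting: "true"      # allow nested containers (docker inside LXD)
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
```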
Yes, I did those tests inside the container and they do not report anything suspicious:
|
@BenTheElder: The links helped. I realized that AppArmor was interfering. Configuring the container to
This issue can be closed. |
Excellent! Will note this for the next user, thanks for figuring it out and reporting back! :-) |
@GrosLalo it would be nice if you could explain what changes are needed so other users can benefit from your experience |
Having the same problem, and I'm not sure what you guys are talking about, but it's not fixed, and it's likely because yes
no, it's not writable
Also, I would point out that the line specified at https://github.com/kubernetes/kubernetes/blob/v1.23.5/pkg/kubelet/kubelet.go#L1431 is just error-reporting code; that's sort of the Go equivalent of the old object-oriented habit of reporting the outer exception but not the inner one. I was hoping to see how the file was being opened (RO/RW), but I don't know where to find that, so that's on you.
Trying to initialize kubelet with
containerd toml:
|
Hi, we do not have the bandwidth to actively support, develop, or test the LXD environment. To my knowledge, Kubernetes does not either. Most of our users run docker or podman in VMs, or on their developer machines.
You should probably not try to run kubernetes/kind under shared tenancy unless that tenancy is done by way of VMs.
Kubelet behavior is not something we own; we do our best to make KIND meet kubelet's expectations. Go's error-management style is certainly out of scope for this project.
Yes, the /dev/kmem symlink is a bad hack; we should probably drop it or just create an empty file there. |
It would be nice if that were more apparent from the start; I wouldn't have wasted my time. I'm not really a fan of the idea of an "LXD VPS" either, but it's the way nja.la does things, and to be fair it's probably what you would consider a modern OpenVZ VPS. Still, it's what I've got, and nothing between then and now really said it simply won't work; quite the opposite, given the number of feature gates and options that would seem to indicate it's possible. You might consider making that behavior a little more definitive, though, like an option you have to override to try to run it on LXD. I know it's nice to run on a KVM VM if you have one; I have a Kubic VM running |
We've accepted LXD-related fixes in the past and known it to work; this is the first we've heard otherwise in a long time. But we don't particularly have the resources to follow up ourselves. I would say Kubernetes in general hasn't made statements about LXD for the same reasons. The project is not monolithic; the KIND maintainers do not own kubelet's behavior, so we can't directly change those requirements. The host kernel etc. must be Kubernetes-compatible, which is generally the case on Linux, but it's possible not to be. |
What happened:
I have been following the instructions at https://kind.sigs.k8s.io/docs/user/quick-start/ and I am unable to get the cluster created with kind. The kind create cluster process fails at the Starting control-plane step.
What you expected to happen:
I expected the same outcome as described on https://kind.sigs.k8s.io/docs/user/quick-start/
How to reproduce it (as minimally and precisely as possible):
For the given environment below, I just ran the command
kind create cluster --loglevel debug
and then observed the following issues: *** Preflight verification error: ***
However, given that preflight errors are ignored during that stage, I assume that the above kernel version has not been deemed to be a problem. So, subsequent problems worth noting are:
Anything else we need to know?:
Environment:
kind version: 0.3.0-alpha