[BUG] kube-system pods reserve 35% of allocatable memory on a 4 GB node #3525
Comments
Some of the overall settings here are supposed to be configurable in Kubernetes (see e.g. https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#enforcing-node-allocatable), but according to what I've heard so far from Azure Support, they don't seem to be configurable on AKS.
@nemobis you can check the configuration of kubelet yourself by running a debug pod on the node and looking at the process snapshot:
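A minimal sketch of that inspection (the node name and debug image are placeholders; any image with `ps` and `grep` works):

```shell
# Start an interactive debug pod on the node; the node's root filesystem
# is mounted at /host inside the debug container.
kubectl debug node/<node-name> -it --image=busybox

# Inside the debug pod, take a process snapshot of the host and look for
# the kubelet command line, which includes the reservation flags
# (e.g. --kube-reserved and --eviction-hard).
chroot /host ps auxww | grep kubelet
```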
I ran into the same "issue": using a VM with only 4 GiB of memory (Standard_F2s_v2) returns the following:

According to the documentation, kubelet will reserve 25% of memory (i.e. 1 GiB). Indeed, using the method described above, you can see kubelet runs with the following flags:

So in total 1816576 KiB of memory is reserved, and thus 4025836 - 1816576 = 2209260 KiB, i.e. the amount reported by AKS.
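The arithmetic above can be checked directly. This sketch assumes the values observed in this comment: a 4 GiB node reporting 4025836 KiB capacity, a 25% kube-reserved memory reservation, and the default AKS hard-eviction threshold of 750 Mi:

```shell
#!/bin/sh
# Node capacity as reported by `kubectl describe node` (KiB).
capacity_kib=4025836

# 25% of 4 GiB reserved for kubelet (kube-reserved), in KiB.
kube_reserved_kib=$((1024 * 1024))

# Default AKS hard-eviction threshold of 750Mi, in KiB.
eviction_kib=$((750 * 1024))

reserved_kib=$((kube_reserved_kib + eviction_kib))
allocatable_kib=$((capacity_kib - reserved_kib))

echo "reserved:    ${reserved_kib} KiB"     # 1816576 KiB
echo "allocatable: ${allocatable_kib} KiB"  # 2209260 KiB
```

The output matches both the total reservation (1816576 KiB) and the allocatable figure AKS reports (2209260 KiB).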
Action required from @Azure/aks-pm

Issue needing attention of @Azure/aks-leads
Hello, beginning with AKS 1.29 preview and beyond, we shipped changes to the eviction threshold and memory reservation for kube-reserved. The new memory reservation is set to the lesser of: (20 MB * max pods supported on the node + 50 MB) or 25% of the total system memory. The new eviction threshold is 100Mi. See more information here. These changes will help reduce the resources consumed by AKS and can deliver up to 20% more allocatable space depending on your pod configuration. Thanks!
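To illustrate the new formula, here is a sketch for the same 4 GiB node; the `max_pods` value is a hypothetical `--max-pods` setting, not something stated in this thread:

```shell
#!/bin/sh
# AKS >= 1.29 kube-reserved memory: the lesser of
#   (20 MB * max pods + 50 MB)  and  (25% of total memory).
total_mb=4096   # 4 GiB node
max_pods=30     # hypothetical --max-pods setting

by_pods=$((20 * max_pods + 50))   # 650 MB
by_percent=$((total_mb / 4))      # 1024 MB

if [ "$by_pods" -lt "$by_percent" ]; then
  kube_reserved_mb=$by_pods
else
  kube_reserved_mb=$by_percent
fi

echo "kube-reserved: ${kube_reserved_mb} MB"  # 650 MB
```

With 30 max pods the pod-based term (650 MB) wins, reserving noticeably less than the old flat 25% (1024 MB); with the default 110 max pods, the 25% cap still applies.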
Describe the bug
On AKS with Kubernetes 1.24, a node with 4 GB RAM capacity only has 2157 MiB allocatable; yet kube-system alone reserves some 750 MB (of which 550 MB is for azure-cns and azure-npm), leaving less than 1400 MiB available for requests by others.

To Reproduce
Steps to reproduce the behavior: run kube-capacity or kubectl describe node on a recently created node.

Example node:
Expected behavior
A node with 4 GB of RAM should be able to have a pod scheduled on it that requests 1600 MB of RAM (e.g. for Prometheus). (I'm not talking about limits.)
Environment (please complete the following information):
Additional context
There's been a lot of discussion about what the requests and limits should be for various components, but in this case the issue is only with the value of the allocatable memory, so I believe it's orthogonal. If everything in kube-system is requesting far more memory than it needs most of the time, there's no need for such a huge buffer. At the very least it should be configurable, or the actually available memory should be made clearer, so that people can configure their workloads and node pools accordingly without tinkering with eviction thresholds.
#1339
#2125
#3348
#3496
I think it's unrelated to #3443