
Commit 6e62f38

Increase memory limits and requests and update docs
Increase memory limits to 300M and requests to 150M since we now store more info, e.g. source objects, in the forest. Performance tests show that about 700 namespaces with 10 propagatable objects in each namespace use about 200M of memory. Add a note on memory usage to the user-guide FAQ docs.
1 parent 0fd0860 commit 6e62f38

2 files changed: +14 additions, −2 deletions

incubator/hnc/config/manager/manager.yaml

+2 −2

@@ -49,8 +49,8 @@ spec:
         resources:
           limits:
             cpu: 100m
-            memory: 100Mi
+            memory: 300Mi
           requests:
             cpu: 100m
-            memory: 50Mi
+            memory: 150Mi
       terminationGracePeriodSeconds: 10
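
To confirm what the deployed controller actually requests and uses after a change like this, a quick command-line check could look roughly like the following. The `hnc-system` namespace and `hnc-controller-manager` deployment name follow HNC's defaults but are assumptions here, and `kubectl top` requires metrics-server to be installed:

# Show the configured resource requests/limits on the running deployment.
kubectl get deployment hnc-controller-manager -n hnc-system \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'

# Show current memory/CPU usage of the manager pod (needs metrics-server).
kubectl top pod -n hnc-system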

incubator/hnc/docs/user-guide/faq.md

+12 −0

@@ -57,6 +57,18 @@ limitations. You can adjust the `--apiserver-qps-throttle` parameter in the
 manifest to increase it from the default of 50qps if your cluster supports
 higher values.
 
+## How much memory does HNC need?
+
+As of Dec. 2020, the [HNC performance test](../../scripts/performance/README.md)
+shows that 700 namespaces with 10 propagatable objects in each namespace would
+use about 200M memory during HNC startup and about 150M afterwards. Thus, we set
+a default of 300M memory limits and 150M memory requests for HNC.
+
+To change HNC memory limits and requests, you can update the values in
+`config/manager/manager.yaml`, run `make manifests` and reapply the manifest. If
+you are using a GKE cluster, you can view the real-time memory usage in the
+`Workloads` tab and determine what's the best limits and requests for you.
+
 ## Does HNC support high-availability?
 
 HNC is currently deployed as a single in-memory pod and therefore does not
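
As a rough sketch of the update workflow the new FAQ section describes (the final apply step and manifest path are assumptions, not something this commit specifies; use whatever location `make manifests` writes to in your checkout):

# 1. Edit the memory limit/request values under `resources:` in
#    config/manager/manager.yaml (defaults after this commit: 300Mi / 150Mi).
# 2. Regenerate the manifests, as described in the FAQ.
make manifests
# 3. Reapply the regenerated manifest; the file name below is a placeholder,
#    not a real path from this repository.
kubectl apply -f <path-to-generated-hnc-manifest>.yaml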
