This repository was archived by the owner on Apr 17, 2025. It is now read-only.

Simple way to know HierarchicalResourceQuota status #274

Closed
mochizuki875 opened this issue Apr 5, 2023 · 11 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@mochizuki875
Contributor

mochizuki875 commented Apr 5, 2023

Overview

When using HierarchicalResourceQuota, a resource quota is applied across parent and child Namespaces.
However, there is no way to know the quota and resource consumption of a HierarchicalResourceQuota other than inspecting its details.
This may confuse users, so I think there should be a simple way to see it (like kubectl get resourcequotas).

Detail

For example, suppose there are two Namespaces with the following hierarchy.
If a HierarchicalResourceQuota of limits.cpu=1 is applied to the parent test Namespace, the total amount of CPU available across the parent and child Namespaces is 1.

test
└── subns-a

Now, the result of kubectl get hierarchicalresourcequotas looks like this.
We can't see the quota or resource consumption at all.

$ kubectl get hierarchicalresourcequotas -n test
NAME       AGE
test-hrq   80s

To find that out, we need to check the details of the HierarchicalResourceQuota.
In this case, the total CPU consumption across the parent and child Namespaces is 500m/1.

$ kubectl describe hierarchicalresourcequotas test-hrq -n test
Name:         test-hrq
Namespace:    test
Labels:       <none>
Annotations:  <none>
API Version:  hnc.x-k8s.io/v1alpha2
Kind:         HierarchicalResourceQuota
...
Spec:
  Hard:
    limits.cpu:  1
Status:
  Hard:
    limits.cpu:  1
  Used:
    limits.cpu:  500m
Events:          <none>

In addition, if we check the ResourceQuota objects that are automatically created in each Namespace when we create a HierarchicalResourceQuota, the result is this.

$ kubectl get resourcequotas -n test
NAME               AGE   REQUEST   LIMIT
hrq.hnc.x-k8s.io   16s             limits.cpu: 500m/1

$ kubectl get resourcequotas -n subns-a 
NAME               AGE   REQUEST   LIMIT
hrq.hnc.x-k8s.io   19s             limits.cpu: 0/1

These statuses do not fully reflect the quota imposed by the HierarchicalResourceQuota.
In this case, they suggest:

  • test Namespace: 500m of CPU remaining
  • subns-a Namespace: 1 CPU available

However, the actual available CPU is 500m across these Namespaces.

Expectation

I think there should be a simple way to see the HierarchicalResourceQuota status, like this:

NAME               AGE   REQUEST   LIMIT
test-hrq           50s             limits.cpu: 500m/1
@mochizuki875
Contributor Author

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 5, 2023
@mochizuki875
Contributor Author

/assign @mochizuki875

@mochizuki875
Contributor Author

I tried using JSONPath in HierarchicalResourceQuota's CustomResourceDefinition (the .spec.versions.additionalPrinterColumns field), but I couldn't come up with an expression that controls what is displayed.
In this case, we need to choose the value to display depending on whether each field under spec.hard exists.

So I propose a new kubectl-hns sub-command in the following format to view the status of HierarchicalResourceQuota.

kubectl-hns hrq [NAME] [flags]

For example:

$ kubectl hns hrq -n test
NAME         AGE     REQUEST                                              LIMIT
test-hrq     2m37s   requests.cpu: 500m/1, requests.memory: 100Mi/200Mi   limits.cpu: 500m/2, limits.memory: 100Mi/500Mi
test-hrq-2   4m6s    requests.memory: 100Mi/300Mi                         limits.cpu: 500m/1

$ kubectl hns hrq test-hrq -n test
NAME       AGE     REQUEST                                              LIMIT
test-hrq   2m43s   requests.cpu: 500m/1, requests.memory: 100Mi/200Mi   limits.cpu: 500m/2, limits.memory: 100Mi/500Mi

$ kubectl hns hrq --all-namespaces
NAMESPACE   NAME         AGE     REQUEST                                              LIMIT
test        test-hrq     2m49s   requests.cpu: 500m/1, requests.memory: 100Mi/200Mi   limits.cpu: 500m/2, limits.memory: 100Mi/500Mi
test        test-hrq-2   4m18s   requests.memory: 100Mi/300Mi                         limits.cpu: 500m/1
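To make the proposed output above concrete, here is a minimal Go sketch of how such a subcommand might split quota resources into the REQUEST and LIMIT columns and render each as "used/hard". This is purely illustrative; the function name `formatColumns` and the prefix-based split are my assumptions, not the actual implementation.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// formatColumns is a hypothetical sketch: resources whose names start
// with "limits." go in the LIMIT column, everything else (requests.cpu,
// requests.memory, etc.) in REQUEST, each rendered as "used/hard".
func formatColumns(hard, used map[string]string) (request, limit string) {
	keys := make([]string, 0, len(hard))
	for k := range hard {
		keys = append(keys, k)
	}
	sort.Strings(keys) // stable column order

	var reqs, lims []string
	for _, k := range keys {
		entry := fmt.Sprintf("%s: %s/%s", k, used[k], hard[k])
		if strings.HasPrefix(k, "limits.") {
			lims = append(lims, entry)
		} else {
			reqs = append(reqs, entry)
		}
	}
	return strings.Join(reqs, ", "), strings.Join(lims, ", ")
}

func main() {
	hard := map[string]string{"limits.cpu": "2", "requests.cpu": "1"}
	used := map[string]string{"limits.cpu": "500m", "requests.cpu": "500m"}
	req, lim := formatColumns(hard, used)
	fmt.Println("REQUEST:", req)
	fmt.Println("LIMIT:", lim)
}
```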

@mochizuki875
Contributor Author

@adrianludwin
PTAL.
What do you think about this?

@zfrhv
Contributor

zfrhv commented May 17, 2023

Hi,
I'm not sure how to display exactly the same output as ResourceQuota, but we can try adding the following field to the HRQ CRD:

spec:
  versions:
  - name: v1alpha2
    additionalPrinterColumns:
    - description: blah blah
      jsonPath: .status.hard
      name: Hard
      type: string
    - description: blah blah
      jsonPath: .status.used
      name: Used
      type: string

@mochizuki875
Contributor Author

@zfrhv
Thanks for your comment!
Yes, as you say, it is possible to display .status.hard and .status.used by using additionalPrinterColumns, like this:

$ kubectl get hierarchicalresourcequotas.hnc.x-k8s.io -n test
NAME       HARD                                                                                      USED
test-hrq   {"limits.cpu":"2","limits.memory":"500Mi","requests.cpu":"1","requests.memory":"200Mi"}   {"limits.cpu":"500m","limits.memory":"100Mi","requests.cpu":"500m","requests.memory":"100Mi"}
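The HARD and USED columns above come out as raw JSON maps. As an illustration only, the two maps could be merged into the friendlier "used/hard" form with a small amount of Go; the helper name `mergeQuota` is my assumption and not part of HNC.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeQuota is a hypothetical helper: it parses the JSON maps printed
// from .status.hard and .status.used and combines them per resource
// into "used/hard" strings (e.g. "500m/2").
func mergeQuota(hardJSON, usedJSON string) (map[string]string, error) {
	var hard, used map[string]string
	if err := json.Unmarshal([]byte(hardJSON), &hard); err != nil {
		return nil, err
	}
	if err := json.Unmarshal([]byte(usedJSON), &used); err != nil {
		return nil, err
	}
	out := make(map[string]string, len(hard))
	for k, h := range hard {
		out[k] = used[k] + "/" + h
	}
	return out, nil
}

func main() {
	merged, err := mergeQuota(`{"limits.cpu":"2"}`, `{"limits.cpu":"500m"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(merged["limits.cpu"]) // 500m/2
}
```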

However, I don't know how to display exactly the same output as ResourceQuota... (as I said here)

If it's needed, I'll add this :)

@adrianludwin
Contributor

Sorry I didn't respond on the bug, but the PR looks fantastic and I've approved it. Can you please cherry-pick it to v1.1 as well? Thanks!

@adrianludwin
Contributor

I wouldn't worry too much about making this exactly like the RQ output. For now, it's human output, and machines can use the json representation.

@mochizuki875
Contributor Author

Can you please cherry-pick it to v1.1 as well?

@adrianludwin
Thanks!
My friend @keisukesakasai did it for me :)
Please check it.
#284

@mochizuki875
Contributor Author

mochizuki875 commented Jun 12, 2023

These PRs have been merged, so I'll close this issue.
#283
#295

/close

@k8s-ci-robot
Contributor

@mochizuki875: Closing this issue.

In response to this:

These PRs have been merged and I'll close this issue.
#283
#295

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
