[wip] add k8s integration design doc #346
Conversation
Force-pushed from 3bb3554 to be1b905 (Signed-off-by: Wataru Ishida <[email protected]>)
@qiluo-msft Thanks for the comment. I added a glossary to the doc.
They should be mutually exclusive. In standalone mode, the switch itself controls the containers that run on it. In cluster mode, the k8s controller controls them.
The joining procedure needs to be invoked by the switch. So the switch must identify its cluster master, obtain a join token, and ask the master to let it join the cluster.
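If the design ends up reusing the stock kubeadm flow, the join step could look roughly like this (the hostname, token, and hash below are placeholders, not part of the design):

```shell
# On the master: create a bootstrap token and print the full join command
# (kubeadm generates the token and the CA cert hash for us).
kubeadm token create --print-join-command

# On the switch (the joining node): run the command printed above, e.g.:
kubeadm join k8s-master.example.com:6443 \
    --token <token-from-master> \
    --discovery-token-ca-cert-hash sha256:<ca-cert-hash>
```

This is only a sketch of the upstream mechanism; how the switch discovers the master address and fetches the token securely is exactly the part the design doc needs to specify.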
Added in the document. Not sure about CPU usage.
Added in the document.
The official document says it can scale up to 5000 nodes. However, I think this number really depends on the environment. Also
No. k8s master only needs IP reachability to control nodes.
The easiest way would be to unjoin the node from the cluster and rejoin it after the upgrade.
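Assuming standard kubeadm/kubectl tooling, the unjoin/rejoin upgrade could be sketched as follows (node name, master address, and token are placeholders):

```shell
# On the master: drain the switch so its workloads are rescheduled,
# then remove it from the cluster.
kubectl drain sonic-switch-01 --ignore-daemonsets
kubectl delete node sonic-switch-01

# On the switch: reset local kubelet/cluster state, perform the upgrade,
# then rejoin with a fresh token from the master.
kubeadm reset
# ... upgrade the switch image here ...
kubeadm join k8s-master.example.com:6443 \
    --token <token-from-master> \
    --discovery-token-ca-cert-hash sha256:<ca-cert-hash>
```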
T.B.D. I'll investigate what k3s is offering.
T.B.D. I'll investigate what k3s is offering.
Can't we use warm reboot for the transition as we did at the hackathon?
Yes, as I described, this can be supported by using selectors and labels.
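As a sketch of how that could look with stock k8s primitives (the label key `sonic-role`, node names, and image are made up for illustration):

```shell
# Label the switches that should run a given feature container.
kubectl label node sonic-switch-01 sonic-role=tor
kubectl label node sonic-switch-02 sonic-role=spine

# A workload can then target only the labeled switches via nodeSelector.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: bgp-feature
spec:
  selector:
    matchLabels:
      app: bgp-feature
  template:
    metadata:
      labels:
        app: bgp-feature
    spec:
      nodeSelector:
        sonic-role: tor
      containers:
      - name: bgp
        image: example/sonic-bgp:latest
EOF
```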
Force-pushed from afa3afe to f8039fd (Signed-off-by: Wataru Ishida <[email protected]>)
Before upgrading a container, the controller may need to perform some actions, such as taking a BGP snapshot or draining traffic from the switch. After upgrading the container, the controller may need to perform post-upgrade actions, such as comparing against the snapshot or restoring traffic. Any consideration for such actions being supported by k8s?
- Cluster
  - A set of machines, called nodes, that run containerized applications managed by Kubernetes
  - In SONiC use-case, each machine is SONiC switch
This statement looks weird. Do you mean cluster?
A cluster is a set of machines, and each machine is a SONiC switch (except the controller node).
How should I change the statement?
I guess you mean 'In SONiC use-case, a node is a SONiC switch. And a cluster is all the SONiC switches managed by Kubernetes'.
In k8s, this kind of application-specific operation can be implemented as an operator.
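For simple pre/post-upgrade steps, stock k8s container lifecycle hooks may also be enough. A minimal sketch (the script paths are placeholders, not part of the design; `preStop` runs before the old container is stopped, `postStart` right after the new one starts):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bgp
spec:
  containers:
  - name: bgp
    image: example/sonic-bgp:latest
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/usr/local/bin/take-bgp-snapshot.sh"]
      postStart:
        exec:
          command: ["/bin/sh", "-c", "/usr/local/bin/restore-traffic.sh"]
EOF
```

Lifecycle hooks only run commands inside the container itself; anything that needs cluster-wide coordination (e.g. draining traffic before the upgrade is triggered) would still need custom controller logic.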
Force-pushed from 8498931 to 8837dc2
Is this dead?
Signed-off-by: Wataru Ishida [email protected]