Replies: 2 comments
-
Hi everyone, while experimenting with the k8s runner, I've discovered that it actually uses generic ephemeral volumes, which is not mentioned in the documentation. This makes it possible to use any local storage provider, such as:
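To illustrate, here is a minimal sketch of the Helm values for the gha-runner-scale-set chart with a node-local provisioner. The local-path storage class name is an assumption (it requires the Rancher local-path provisioner, or any other local storage class available in your cluster):

```yaml
# values.yaml sketch for the gha-runner-scale-set Helm chart
containerMode:
  type: "kubernetes"
  # The controller turns this claim template into a generic ephemeral
  # volume on each job pod, so any storage class should work,
  # including node-local ones.
  kubernetesModeWorkVolumeClaim:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "local-path"  # assumption: local-path provisioner installed
    resources:
      requests:
        storage: 1Gi
```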
-
@Vijay-train - a note for something we should work out how to add into our docs!
-
Hi everyone!
When using a containerMode: kubernetes runner, it seems that we must provide a workVolumeClaimTemplate, as stated in the documentation here:
If I understand correctly, the constraint of specifying a volume claim comes from the fact that the k8s runner will spawn a new "job" pod, which can be scheduled on any node, i.e. not necessarily on the same node as the runner. Since both pods need to share some information, they need to use a persistent volume.
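For context, a minimal sketch of what such a claim template looks like on a RunnerDeployment (field names per the actions-runner-controller CRDs; the gp2 storage class and sizes are assumptions for an EBS-backed cluster):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: k8s-mode-runner
spec:
  template:
    spec:
      containerMode: kubernetes
      # This claim template backs the volume shared between the
      # runner pod and the "job" pod it spawns.
      workVolumeClaimTemplate:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "gp2"  # assumption: EBS-backed storage class
        resources:
          requests:
            storage: 10Gi
```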
So in the case of an ephemeral k8s runner, the following steps will happen:
Please correct me if I'm wrong, but this means that for each job in a workflow, a new EBS volume is requested. If that's the case, this solution doesn't seem to scale very well.
Another approach would be to use a non-ephemeral k8s runner. In that case, there is one EBS volume per runner, which is better in terms of scalability. However, I'm not sure exactly what is shared between the runner pod and the "job" pod.
As we want to implement reproducible builds:
Any help would be greatly appreciated!