Take a look at the [concepts](https://jobset.sigs.k8s.io/docs/concepts/) page for more details.

**Support for multi-template jobs**: JobSet models a distributed training workload as a group of K8s Jobs. This allows a user to easily specify different pod templates for different distinct groups of pods (e.g. a leader, workers, parameter servers, etc.), something which cannot be done by a single Job.

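As an illustrative sketch (field names follow the JobSet v1alpha2 API; the JobSet name, replica counts, and images are placeholders), a JobSet with distinct leader and worker templates might look like:

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: leader-workers            # placeholder name
spec:
  replicatedJobs:
  - name: leader
    replicas: 1
    template:                     # a standard batch/v1 Job template
      spec:
        completions: 1
        parallelism: 1
        template:
          spec:
            containers:
            - name: leader
              image: registry.example/trainer:latest   # placeholder image
  - name: workers
    replicas: 1
    template:                     # a different pod template for the workers
      spec:
        completions: 4
        parallelism: 4
        template:
          spec:
            containers:
            - name: worker
              image: registry.example/trainer:latest   # placeholder image
```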
**Automatic headless service configuration and lifecycle management**: ML and HPC frameworks require a stable network endpoint for each worker in the distributed workload. Since pod IPs are dynamically assigned and can change between restarts, stable pod hostnames are required for distributed training on k8s. By default, JobSet uses [IndexedJobs](https://kubernetes.io/blog/2021/04/19/introducing-indexed-jobs/) to establish stable pod hostnames, and does automatic configuration and lifecycle management of the headless service to trigger DNS record creation and establish network connectivity via pod hostnames. These networking configurations are defaulted automatically to enable stable network endpoints and pod-to-pod communication via hostnames; however, they can be customized in the JobSet spec: see this [example](./site/static/examples/simple/jobset-with-network.yaml) of using a custom subdomain in your JobSet's network configuration.

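As a sketch of that customization (field names from the JobSet v1alpha2 `network` spec; the subdomain value is a placeholder), the linked example configures something along these lines:

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: network-jobset            # placeholder name
spec:
  network:
    enableDNSHostnames: true      # stable hostnames backed by a headless service
    subdomain: my-subdomain       # custom subdomain for the headless service (placeholder)
  replicatedJobs:
  - name: workers
    replicas: 1
    # ... Job template as usual ...
```

With a custom subdomain, pods resolve each other at hostnames under `my-subdomain` instead of the default service name.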
**Configurable failure policies**: JobSet has configurable failure policies which allow the user to specify a maximum number of times the JobSet should be restarted in the event of a failure. If any job is marked failed, the entire JobSet will be recreated, allowing the workload to resume from the last checkpoint. When no failure policy is specified, if any job fails, the JobSet simply fails. Using JobSet v0.6.0+, the [extended failure policy API](https://github.com/kubernetes-sigs/jobset/tree/main/keps/262-ConfigurableFailurePolicy) allows users to configure different behavior for different error types, enabling them to use compute resources more efficiently and improve ML training goodput.

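A minimal sketch of such a policy (field and action names as described in the KEP linked above; the rule name and failure reason are illustrative):

```yaml
spec:
  failurePolicy:
    maxRestarts: 3                    # recreate all child Jobs up to 3 times before failing
    rules:                            # v0.6.0+ extended API: per-error-type behavior
    - name: fail-on-pod-failure-policy
      action: FailJobSet              # fail immediately instead of restarting
      onJobFailureReasons:
      - PodFailurePolicy
```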
**Configurable success policies**: JobSet has [configurable success policies](./site/static/examples/simple/success-policy.yaml) which target specific ReplicatedJobs, with operators to target `Any` or `All` of their child jobs. For example, you can configure the JobSet to be marked complete if and only if all pods that are part of the “worker” ReplicatedJob are completed. This enables users to use their compute resources more efficiently, allowing a workload to be declared successful and release the resources for the next workload more quickly.

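The “worker” example above can be sketched as (field names from the JobSet v1alpha2 API; assumes a ReplicatedJob named `worker` exists in the spec):

```yaml
spec:
  successPolicy:
    operator: All                 # all child jobs of the targets must succeed
    targetReplicatedJobs:
    - worker                      # only the "worker" ReplicatedJob counts toward success
```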
**Exclusive Placement Per Topology Domain**: JobSet includes an [annotation](https://github.com/kubernetes-sigs/jobset/blob/1ae6c0c039c21d29083de38ae70d13c2c8ec613f/examples/simple/exclusive-placement.yaml#L6) which can be set by the user, specifying that there should be a 1:1 mapping between each child job and a particular topology domain, such as a datacenter rack or zone. This means that all the pods belonging to a child job will be colocated in the same topology domain, while pods from other jobs will not be allowed to run within this domain. This gives the child job exclusive access to compute resources in this domain. You can run this [example](./site/static/examples/simple/exclusive-placement.yaml) yourself to see how exclusive placement works.

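As a sketch of the annotation in the linked example (the annotation key and topology label are taken from that example; the zone label is a common choice, not the only one):

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: exclusive-jobset          # placeholder name
  annotations:
    # 1:1 mapping between each child job and a topology domain keyed by this node label
    alpha.jobset.sigs.k8s.io/exclusive-topology: topology.kubernetes.io/zone
spec:
  replicatedJobs:
  - name: workers
    replicas: 2                   # each replica gets its own zone, exclusively
    # ... Job template as usual ...
```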
**Fast failure recovery**: JobSet recovers from failures by recreating all the child Jobs. When scheduling constraints such as exclusive Job placement are used, fast failure recovery at scale can become challenging. As of JobSet v0.3.0, JobSet uses a failure recovery design that minimizes impact on scheduling throughput. We have benchmarked scheduling throughput during failure recovery at 290 pods/second at a 15k node scale.

**Startup Sequencing**: As of JobSet v0.6.0, users can configure a [startup order](./site/static/examples/startup-policy/startup-driver-ready.yaml) for the ReplicatedJobs in a JobSet. This enables support for patterns like the “leader-worker” paradigm, where the leader must be running before the workers should start up and connect to it.

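A minimal sketch of the startup policy used in the linked example (field names from the JobSet v1alpha2 API; the ReplicatedJob names are placeholders):

```yaml
spec:
  startupPolicy:
    startupPolicyOrder: InOrder   # start ReplicatedJobs in the order they are listed
  replicatedJobs:
  - name: driver                  # must be ready before "workers" is created
    replicas: 1
    # ... Job template as usual ...
  - name: workers
    replicas: 4
    # ... Job template as usual ...
```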
**Integration with Kueue**: Use JobSet v0.2.3+ and [Kueue](https://kueue.sigs.k8s.io/) v0.6.0+ to oversubscribe your cluster with JobSet workloads, placing them in a queue which supports multi-tenancy, resource sharing and more. See the [Kueue documentation](https://kueue.sigs.k8s.io/) for more details on the benefits of managing JobSet workloads via Kueue.

To install the latest release of JobSet in your cluster, run the following command: