
Commit b92cbdc

chore: use symbolic link instead of directory (kubernetes-sigs#630)

* docs: use symbolic link instead of directory
* update version to v0.6.0

1 parent: f30d5a8

30 files changed: +7 −515 lines

README.md (+5 −5)
@@ -17,19 +17,19 @@ Take a look at the [concepts](https://jobset.sigs.k8s.io/docs/concepts/) page fo

- **Support for multi-template jobs**: JobSet models a distributed training workload as a group of K8s Jobs. This allows a user to easily specify different pod templates for distinct groups of pods (e.g. a leader, workers, parameter servers, etc.), something which cannot be done by a single Job.

- **Automatic headless service configuration and lifecycle management**: ML and HPC frameworks require a stable network endpoint for each worker in the distributed workload. Since pod IPs are dynamically assigned and can change between restarts, stable pod hostnames are required for distributed training on k8s. By default, JobSet uses [Indexed Jobs](https://kubernetes.io/blog/2021/04/19/introducing-indexed-jobs/) to establish stable pod hostnames, and automatically configures and manages the lifecycle of the headless service, triggering DNS record creation and establishing network connectivity via pod hostnames. These networking configurations are defaulted automatically to enable stable network endpoints and pod-to-pod communication via hostnames; however, they can be customized in the JobSet spec: see this [example](./site/static/examples/simple/jobset-with-network.yaml) of using a custom subdomain in your JobSet's network configuration, and the first sketch after this list. *(link updated from `examples/simple/jobset-with-network.yaml`)*
- **Configurable failure policies**: JobSet has configurable failure policies which allow the user to specify a maximum number of times the JobSet should be restarted in the event of a failure. If any job is marked failed, the entire JobSet will be recreated, allowing the workload to resume from the last checkpoint. When no failure policy is specified, if any job fails, the JobSet simply fails. Using JobSet v0.6.0+, the [extended failure policy API](https://github.com/kubernetes-sigs/jobset/tree/main/keps/262-ConfigurableFailurePolicy) allows users to configure different behavior for different error types, enabling them to use compute resources more efficiently and improve ML training goodput.

- **Configurable success policies**: JobSet has [configurable success policies](./site/static/examples/simple/success-policy.yaml) which target specific ReplicatedJobs, with operators to target `Any` or `All` of their child jobs. For example, you can configure the JobSet to be marked complete if and only if all pods that are part of the “worker” ReplicatedJob are completed. This enables users to use their compute resources more efficiently, allowing a workload to be declared successful and release its resources for the next workload more quickly; a sketch combining failure and success policies appears after this list. *(link updated from `https://github.com/kubernetes-sigs/jobset/blob/v0.6.0/examples/simple/success-policy.yaml`)*
- **Exclusive Placement Per Topology Domain**: JobSet includes an [annotation](https://github.com/kubernetes-sigs/jobset/blob/1ae6c0c039c21d29083de38ae70d13c2c8ec613f/examples/simple/exclusive-placement.yaml#L6) which can be set by the user, specifying that there should be a 1:1 mapping between each child job and a particular topology domain, such as a datacenter rack or zone. This means that all the pods belonging to a child job will be colocated in the same topology domain, while pods from other jobs will not be allowed to run within this domain. This gives the child job exclusive access to compute resources in that domain. You can run this [example](./site/static/examples/simple/exclusive-placement.yaml) yourself to see how exclusive placement works; a sketch also appears after this list. *(link updated from `https://github.com/kubernetes-sigs/jobset/blob/v0.6.0/examples/simple/exclusive-placement.yaml`)*
- **Fast failure recovery**: JobSet recovers from failures by recreating all the child Jobs. When scheduling constraints such as exclusive Job placement are used, fast failure recovery at scale can become challenging. As of JobSet v0.3.0, JobSet uses a design that minimizes the impact on scheduling throughput: we have benchmarked scheduling throughput during failure recovery at 290 pods/second at a 15k node scale.

- **Startup Sequencing**: As of JobSet v0.6.0, users can configure a [startup order](./site/static/examples/startup-policy/startup-driver-ready.yaml) for the ReplicatedJobs in a JobSet. This enables support for patterns like the “leader-worker” paradigm, where the leader must be running before the workers start up and connect to it; a sketch appears after this list. *(link updated from `https://github.com/kubernetes-sigs/jobset/blob/v0.6.0/examples/startup-policy/startup-driver-ready.yaml`)*
- **Integration with Kueue**: Use JobSet v0.2.3+ and [Kueue](https://kueue.sigs.k8s.io/) v0.6.0+ to oversubscribe your cluster with JobSet workloads, placing them in a queue that supports multi-tenancy, resource sharing, and more. See the [Kueue documentation](https://kueue.sigs.k8s.io/) for more details on the benefits of managing JobSet workloads via Kueue.
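For illustration, the sketches below use the JobSet `v1alpha2` API; the names, images, and job sizes are invented placeholders, not content from this commit. First, a minimal sketch of the custom network configuration mentioned above, which also shows the multi-template model (two ReplicatedJobs with different pod templates). The hostname pattern in the comment reflects JobSet's indexed-Job naming as I understand it; treat it as an assumption rather than a guarantee:

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: network-demo            # hypothetical name
spec:
  network:
    enableDNSHostnames: true    # create DNS records for each pod's hostname
    subdomain: training-net     # custom subdomain for the headless service
  replicatedJobs:
    - name: leader              # one pod template for the leader...
      replicas: 1
      template:
        spec:
          parallelism: 1
          completions: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: leader
                  image: busybox:1.36
                  # pods get stable names roughly of the form
                  # <jobset>-<replicatedJob>-<jobIndex>-<podIndex>.<subdomain>
                  command: ["sh", "-c", "hostname; sleep 30"]
    - name: workers             # ...and a different template for the workers
      replicas: 1
      template:
        spec:
          parallelism: 2
          completions: 2
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: worker
                  image: busybox:1.36
                  command: ["sh", "-c", "hostname; sleep 30"]
```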
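Next, a hedged sketch of how a failure policy and a success policy compose in one spec. `maxRestarts`, `operator`, and `targetReplicatedJobs` are the fields described in the bullets above; everything else is a placeholder:

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: policies-demo           # hypothetical name
spec:
  failurePolicy:
    maxRestarts: 3              # recreate the whole JobSet up to 3 times on failure
  successPolicy:
    operator: All               # complete once every targeted child job succeeds
    targetReplicatedJobs:
      - workers                 # only the "workers" ReplicatedJob is considered
  replicatedJobs:
    - name: workers
      replicas: 2
      template:
        spec:
          parallelism: 2
          completions: 2
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: worker
                  image: busybox:1.36
                  command: ["sh", "-c", "echo training step complete"]
```

With `operator: Any` instead, the JobSet would be marked complete as soon as any targeted child job succeeds, which suits workloads where one finishing worker implies the result is ready.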
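A sketch of annotation-driven exclusive placement. The annotation key follows the one used in JobSet's exclusive-placement example (treat it as an assumption here); the zone topology key and job shape are placeholders:

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: exclusive-demo          # hypothetical name
  annotations:
    # request a 1:1 mapping between each child Job and a topology domain
    alpha.jobset.sigs.k8s.io/exclusive-topology: topology.kubernetes.io/zone
spec:
  replicatedJobs:
    - name: workers
      replicas: 3               # each replica's pods get a zone to themselves
      template:
        spec:
          parallelism: 2
          completions: 2
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: worker
                  image: busybox:1.36
                  command: ["sh", "-c", "sleep 60"]
```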
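Finally, a sketch of startup sequencing, assuming the v0.6.0 `startupPolicy` field: with `InOrder`, ReplicatedJobs are started in the order they are listed, so the driver below comes up before the workers are created:

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: startup-demo            # hypothetical name
spec:
  startupPolicy:
    startupPolicyOrder: InOrder # start ReplicatedJobs in list order
  replicatedJobs:
    - name: driver              # started first
      replicas: 1
      template:
        spec:
          parallelism: 1
          completions: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: driver
                  image: busybox:1.36
                  command: ["sh", "-c", "sleep 120"]
    - name: workers             # created only after the driver is running
      replicas: 1
      template:
        spec:
          parallelism: 2
          completions: 2
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: worker
                  image: busybox:1.36
                  command: ["sh", "-c", "sleep 60"]
```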

@@ -58,7 +58,7 @@ efficiently and improve ML training goodput.

To install the latest release of JobSet in your cluster, run the following command:

```shell
kubectl apply --server-side -f https://github.com/kubernetes-sigs/jobset/releases/download/v0.6.0/manifests.yaml
```

*(manifest URL updated from v0.5.2 to v0.6.0)*

The controller runs in the `jobset-system` namespace.

examples (+1, new symbolic link)

@@ -0,0 +1 @@
+ ./site/static/examples

Git stores a symbolic link as a blob whose content is the link target, which is why this new `examples` entry shows a single added line containing the target path.

Deleted files (replaced by the `examples` symlink):

- examples/pytorch/cnn-mnist/Dockerfile (−4)
- examples/pytorch/cnn-mnist/mnist.py (−155)
- examples/pytorch/cnn-mnist/mnist.yaml (−38)
- examples/pytorch/resnet-cifar10/resnet.yaml (−35)
- examples/simple/exclusive-placement.yaml (−26)
- examples/simple/jobset-with-network.yaml (−61)
