Add features overview to README #452

Merged 4 commits on Mar 22, 2024
Changes from 1 commit
32 changes: 24 additions & 8 deletions README.md
@@ -13,19 +13,21 @@ Take a look at the [concepts](/docs/concepts/README.md) page for a brief descrip
## Conceptual Diagram
<img src="site/static/images/jobset_diagram.png" alt="jobset diagram">

## Installation
## Features overview

**Requires Kubernetes 1.26 or newer**.
- **Support for multi-template jobs**: JobSet models a distributed training workload as a group of K8s Jobs. This allows a user to easily specify different pod templates for distinct groups of pods (e.g. a leader, workers, parameter servers, etc.), something a single Job cannot do (see the sketch after this feature list).

To install the latest release of JobSet in your cluster, run the following command:
- **Automatic headless service configuration and lifecycle management**: ML and HPC frameworks require a stable network endpoint for each worker in the distributed workload. Since pod IPs are dynamically assigned and can change between restarts, stable pod hostnames are required for distributed training on k8s. By default, JobSet uses [IndexedJobs](https://kubernetes.io/blog/2021/04/19/introducing-indexed-jobs/) to establish stable pod hostnames, and automatically configures and manages the lifecycle of the headless service, triggering DNS record creation and establishing network connectivity via pod hostnames.

```shell
kubectl apply --server-side -f https://github.com/kubernetes-sigs/jobset/releases/download/v0.4.0/manifests.yaml
```
- **Configurable success policies**: JobSet has [configurable success policies](https://github.com/kubernetes-sigs/jobset/blob/1ae6c0c039c21d29083de38ae70d13c2c8ec613f/examples/simple/success-policy.yaml) which target specific ReplicatedJobs, with operators to target `Any` or `All` of their child jobs. For example, you can configure the JobSet to be marked complete if and only if all pods that are part of the “worker” ReplicatedJob are completed. This lets users consume their compute resources more efficiently, allowing a workload to be declared successful and its resources released for the next workload more quickly.

The controller runs in the `jobset-system` namespace.
- **Configurable failure policies**: JobSet has [configurable failure policies](https://github.com/kubernetes-sigs/jobset/blob/1ae6c0c039c21d29083de38ae70d13c2c8ec613f/examples/simple/max-restarts.yaml) which allow the user to specify a maximum number of times the JobSet should be restarted in the event of a failure. If any job is marked failed, the entire JobSet will be recreated, allowing the workload to resume from the last checkpoint. When no failure policy is specified, if any job fails, the JobSet simply fails.

Read the [installation guide](/docs/setup/install.md) to learn more.
- **Exclusive Placement Per Topology Domain**: JobSet includes an [annotation](https://github.com/kubernetes-sigs/jobset/blob/1ae6c0c039c21d29083de38ae70d13c2c8ec613f/examples/simple/exclusive-placement.yaml#L6) which can be set by the user, specifying that there should be a 1:1 mapping between a child job and a particular topology domain, such as a datacenter rack or zone. This means that all the pods belonging to a child job will be colocated in the same topology domain, while pods from other jobs will not be allowed to run within this domain. This gives the child job exclusive access to compute resources in this domain.
Contributor Author:

Question for reviewers: I think this feature will make little sense to users without a concrete use case, but the only one I can think of is TPU Multislice training, and since TPUs are specific to Google I didn't include it here. If anyone has a suggestion for a concrete use case here I would appreciate it. I am happy to include TPU multislice training as well, based on feedback.

Contributor:

maybe @vsoch has some ideas of a general example?

Contributor Author:

If we come up with a better concrete example we can add it in a follow-up PR. For now I think we should get the feature overview list into the README so it's clear to potential users glancing at the GitHub landing page what JobSet offers.

Contributor:

Sorry, I missed this comment! Mapping to the level of a rack isn't particularly useful, or at least it doesn't belong at this level: when we deploy to Google Cloud we usually ask for COMPACT mode when we want some guarantee of rack closeness. For topology mapping that is interesting, a better example is 1 pod per node. I think that can typically be achieved with resource requests / limits that are slightly below the node max capacity, and (maybe) a suggestion to the scheduler with affinity rules (but in practice I have found this is not enough). The topology that we are really interested in is more fine-grained than that, and probably would need to be under the jurisdiction of the kubelet.
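
For context, a minimal sketch of the "1 pod per node" scheduler suggestion mentioned above, using a pod anti-affinity rule on the hostname topology key. The label, image, and resource numbers are placeholders, and as noted this hint alone is often not sufficient in practice:

```yaml
# Illustrative pod template fragment: ask the scheduler never to place two
# pods labeled app=worker on the same node, and size requests just below
# node capacity.
metadata:
  labels:
    app: worker                    # placeholder label, matched by the rule below
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: worker
          topologyKey: kubernetes.io/hostname
  containers:
    - name: worker
      image: registry.example.com/worker:latest   # placeholder image
      resources:
        requests:
          cpu: "15"        # assumed: just under a 16-vCPU node
          memory: 60Gi     # assumed: just under a 64Gi node
```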

Contributor:

Also, I'm designing a new project idea that (I think) will use JobSet again; I will ping you / keep you in the loop if/when it manifests. No pun intended! :P


- **Fast failure recovery**: JobSet recovers from failures by recreating all the child Jobs. When scheduling constraints such as exclusive Job placement are used, fast failure recovery at scale can become challenging. As of JobSet v0.3.0, failure recovery is designed to minimize the impact on scheduling throughput. We have benchmarked scheduling throughput during failure recovery at 290 pods/second at a 15k node scale.

- **Startup Sequencing**: As of JobSet v0.4.0, users can configure a [startup order](https://github.com/kubernetes-sigs/jobset/blob/1ae6c0c039c21d29083de38ae70d13c2c8ec613f/examples/startup-policy/startup-driver-ready.yaml) for the ReplicatedJobs in a JobSet. This enables support for patterns like the “leader-worker” paradigm, where the leader must be running before the workers start up and connect to it.
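
Several of the features above can be combined in a single JobSet spec. The following is an illustrative sketch only: the field names follow the linked examples and the `jobset.x-k8s.io/v1alpha2` API, but the image, replica counts, the exclusive-placement annotation key, and the topology label are assumptions that may differ between JobSet versions.

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: example-training
  annotations:
    # Exclusive placement: 1:1 mapping between each child Job and a topology
    # domain (annotation key assumed from the exclusive-placement example
    # linked above; the topology label value is a placeholder).
    alpha.jobset.sigs.k8s.io/exclusive-topology: topology.kubernetes.io/zone
spec:
  # Failure policy: recreate the whole JobSet up to 3 times if any child Job fails.
  failurePolicy:
    maxRestarts: 3
  # Success policy: mark the JobSet complete once all "worker" Jobs complete.
  successPolicy:
    operator: All
    targetReplicatedJobs:
      - worker
  # Startup sequencing (v0.4.0+): start the ReplicatedJobs in the order listed.
  startupPolicy:
    startupPolicyOrder: InOrder
  replicatedJobs:
    # Multi-template jobs: a distinct pod template per group of pods.
    - name: leader
      replicas: 1
      template:
        spec:
          parallelism: 1
          completions: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: leader
                  image: registry.example.com/trainer:latest   # placeholder image
    - name: worker
      replicas: 2
      template:
        spec:
          parallelism: 4
          completions: 4
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: worker
                  image: registry.example.com/trainer:latest   # placeholder image
```

Applying a spec like this with `kubectl apply -f` would create one leader Job and two worker Jobs (each with 4 pods), restart all of them together on failure, and declare the JobSet successful once both worker Jobs finish.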

## Production Readiness status

@@ -44,6 +46,20 @@ Read the [installation guide](/docs/setup/install.md) to learn more.
- ✔️ Security: RBAC based accessibility.
- ✔️ Stable release cycle (2-3 months) for new features, bugfixes, cleanups.

## Installation

**Requires Kubernetes 1.26 or newer**.
Contributor:

Can we say that we follow the Kubernetes release process?

In 1-2 months I think we would want to bump this to Kubernetes 1.27...

Contributor Author:

Sure, so something like:

> Maintains support for latest 3 Kubernetes minor versions. Current: 1.27, 1.28, 1.29

(I know we currently run e2e-tests with 1.26 as well, but we should remove this and just focus on support for latest 3 minors, to align with upstream k8s).

What are your thoughts on this?

Contributor:

My goal would be to avoid having to PR to keep these versions up to date.

> Maintains support for latest 3 Kubernetes minor versions.

Contributor Author:

OK, I added a line to the "production readiness" bullets about this, and then here (installation instructions) I mentioned that one of the last 3 minor versions is required.


To install the latest release of JobSet in your cluster, run the following command:

```shell
kubectl apply --server-side -f https://github.com/kubernetes-sigs/jobset/releases/download/v0.4.0/manifests.yaml
```

The controller runs in the `jobset-system` namespace.
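
To sanity-check the installation, you can list the controller pods and confirm the JobSet CRD is registered. A short sketch; the CRD name below is inferred from the `jobset.x-k8s.io` API group rather than taken from the docs:

```shell
# The controller pod(s) should be Running in the jobset-system namespace.
kubectl get pods -n jobset-system

# The JobSet CRD should be present (name inferred from the API group).
kubectl get crd jobsets.jobset.x-k8s.io
```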

Read the [installation guide](/docs/setup/install.md) to learn more.

## Troubleshooting common issues

See the [troubleshooting](/docs/troubleshooting/README.md) guide for help resolving common issues.