
Proposal for an XGBoost operator #247

Closed
merlintang opened this issue Apr 4, 2019 · 15 comments

@merlintang

Motivation

XGBoost is a state-of-the-art approach for machine learning. In addition to deploying XGBoost over Yarn or Spark, it is necessary to give Kubernetes the ability to handle distributed XGBoost training and prediction. The Kubernetes operator for XGBoost reduces the gap in building distributed XGBoost on Kubernetes, and allows XGBoost applications to be specified, run, and monitored idiomatically on Kubernetes.

The operator allows ML applications based on XGBoost to be specified in a declarative manner (e.g., in a YAML file) and run without the need to deal with the XGBoost submission process. It also enables the status of an XGBoost job to be tracked and presented idiomatically, like other types of workloads on Kubernetes. This document discusses the design and architecture of the operator.

Goals

  1. Provide a common custom resource definition (CRD) for defining a single-node or multi-node XGBoost training and prediction job.

  2. Implement a custom controller to manage the CRD, create dependent resources, and reconcile toward the desired state.
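To make goal 2 concrete, here is a minimal, self-contained Python sketch of the reconcile logic, using plain dicts in place of real Kubernetes API objects; all names here are illustrative assumptions, not the operator's actual implementation.

def reconcile(job, observed_pods):
    # Compare the desired replica counts in the XGBoostJob spec against
    # the pods actually observed, and return the actions needed to converge.
    actions = []
    for spec in job["spec"]["replicaSpecs"]:
        kind = spec["replicaType"]  # e.g. MASTER or WORKER
        desired = spec["replicas"]
        observed = sum(1 for p in observed_pods if p["replicaType"] == kind)
        if observed < desired:
            actions.append(("create", kind, desired - observed))
        elif observed > desired:
            actions.append(("delete", kind, observed - desired))
    return actions

# A fresh job asking for 1 master and 2 workers, with no pods running yet:
job = {"spec": {"replicaSpecs": [
    {"replicaType": "MASTER", "replicas": 1},
    {"replicaType": "WORKER", "replicas": 2},
]}}
print(reconcile(job, observed_pods=[]))
# -> [('create', 'MASTER', 1), ('create', 'WORKER', 2)]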

More details
An XGBoost operator
A way to deploy the operator
A single-pod XGBoost example
A distributed XGBoost example

Non-Goals

Issues or changes not being addressed by this proposal.

UI or API

Custom Resource Definition
The custom resource submitted to the Kubernetes API would look something like this:

apiVersion: "kubeflow.org/v1alpha1"
kind: "XGBoostJob"
metadata:
  name: "xgboost-example-job"
spec:
  backend: "rabit"
  masterPort: "23456"
  replicaSpecs:
    - replicas: 1
      replicaType: MASTER
      template:
        spec:
          containers:
            - image: xgboost/xgboost:latest
              name: master
              imagePullPolicy: IfNotPresent
              command: ["xgboost"]
              args: [
                "-bind-to", "none",
                "-distributed", "yes",
                "-job-type", "train",
                "python", "scripts/xgboost_test/xgboost_test.py",
                "--model", "modelname"
              ]
          restartPolicy: OnFailure
    - replicas: 2
      replicaType: WORKER
      template:
        spec:
          containers:
            - image: xgboost/xgboost:latest
              name: worker
          restartPolicy: OnFailure

This XGBoostJob resembles the existing TFJob for the tf-operator. backend defines the protocol the XGBoost workers use to communicate with each other when initializing the worker group. masterPort defines the port the group will use to communicate with the master's Kubernetes service.

Resulting Master

kind: Service
apiVersion: v1
metadata:
  name: xgboost-master-${job_id}
spec:
  selector:
    app: xgboost-master-${job_id}
  ports:
  - port: 23456
    targetPort: 23456

Details

apiVersion: v1
kind: Pod
metadata:
  name: xgboost-master-${job_id}
  labels:
    app: xgboost-master-${job_id}
spec:
  containers:
  - image: xgboost/xgboost:latest
    imagePullPolicy: IfNotPresent
    name: master
    env:
      - name: MASTER_PORT
        value: "23456"
      - name: MASTER_ADDR
        value: "localhost"
      - name: WORLD_SIZE
        value: "3"
      # Rank 0 is the master
      - name: RANK
        value: "0"
    ports:
      - name: master-port
        containerPort: 23456
  restartPolicy: OnFailure

Resulting Worker

apiVersion: v1
kind: Pod
metadata:
  name: xgboost-worker-${job_id}
spec:
  containers:
  - image: xgboost/xgboost:latest
    imagePullPolicy: IfNotPresent
    name: worker
    env:
    - name: MASTER_PORT
      value: "23456"
    - name: MASTER_ADDR
      value: xgboost-master-${job_id}
    - name: WORLD_SIZE
      value: "3"
    - name: RANK
      value: "1"
  restartPolicy: OnFailure

Each worker spec generates a pod. The workers communicate with the master through the master's service name.
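To make the wiring concrete, below is a minimal Python sketch of what a worker entrypoint could do with these environment variables, assuming they are mapped onto Rabit's DMLC_* tracker arguments; both the mapping and the entrypoint itself are illustrative assumptions, not part of this proposal.

# Hypothetical worker entrypoint (illustrative only): translate the pod's
# env vars into Rabit tracker arguments and join the worker group.
import os
import xgboost as xgb

rabit_args = [
    ("DMLC_TRACKER_URI=%s" % os.environ["MASTER_ADDR"]).encode(),
    ("DMLC_TRACKER_PORT=%s" % os.environ["MASTER_PORT"]).encode(),
    ("DMLC_NUM_WORKER=%s" % os.environ["WORLD_SIZE"]).encode(),
    ("DMLC_TASK_ID=%s" % os.environ["RANK"]).encode(),
]

xgb.rabit.init(rabit_args)
print("connected as rank %d of %d"
      % (xgb.rabit.get_rank(), xgb.rabit.get_world_size()))
xgb.rabit.finalize()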

Design

The design of distributed XGBoost follows XGBoost's Rabit protocol. The Rabit design can be found here. The XGBoost operator thus provides the framework to start the Rabit master node and slave nodes in the following way.

The Rabit master node is initialized first, and each slave node then connects to the master node via the provided IP and port. Each worker pod reads its data locally and maps the input data into XGBoost's DMatrix format.
a. For a training job: one worker is selected as the host, and the other workers use the host's IP and port to build the Rabit network for training, as shown in Figure 1.
b. For a prediction job: the trained model is propagated to each worker node, which runs prediction on its local validation data. (A sketch of both paths follows below.)
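To illustrate the two paths, here is a hedged Python sketch of the per-worker logic, assuming Rabit has been initialized as in the entrypoint sketch above; the data paths, parameters, and file names are placeholders, not part of the proposal.

import xgboost as xgb

def run_training():
    # Each worker loads its local data shard and maps it into DMatrix.
    dtrain = xgb.DMatrix("/data/train.part-%d" % xgb.rabit.get_rank())
    params = {"objective": "binary:logistic", "max_depth": 3}
    # Under Rabit, xgb.train synchronizes gradient statistics across the
    # worker group via allreduce, so every worker ends with the same model.
    model = xgb.train(params, dtrain, num_boost_round=10)
    if xgb.rabit.get_rank() == 0:
        model.save_model("/data/model.bin")

def run_prediction():
    # The trained model has been propagated to each worker node; each
    # worker scores its own local validation data.
    model = xgb.Booster(model_file="/data/model.bin")
    dval = xgb.DMatrix("/data/validate.part-%d" % xgb.rabit.get_rank())
    preds = model.predict(dval)
    print("rank %d produced %d predictions"
          % (xgb.rabit.get_rank(), len(preds)))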

[Figure 1: workers building the Rabit network for distributed training]

Alternatives Considered

Description of possible alternative solutions and the reasons they were not chosen.

@jlewi
Contributor

jlewi commented Apr 8, 2019

Thanks for writing this up!

The link to Rabit isn't working.

What does the heavy lifting of splitting work among the different workers? Is that Rabit?

I'd suggest sharing this issue in slack to make sure others are aware. @johnugeorge and @richardsliu are the primary owners of our job operators so they would probably be the best points of contact in terms of reviewing this proposal.

/assign @johnugeorge
/assign @richardsliu

@johnugeorge
Member

Thanks for this proposal.

I see that your proposed CRD is similar to the v1alpha1 version of the job operators. For a sample of the current v1beta2 (latest) version of an operator, see https://github.com/kubeflow/pytorch-operator/blob/master/examples/mnist/v1beta2/pytorch_job_mnist_gloo.yaml

We have tried to keep behavior consistent across operators; see the tf and pytorch operator v1beta2 specs. Do you see any issues with keeping the same v1beta2 format? This would help in using the common implementation and keeping behavior consistent.

Just to add: we are moving the common code (currently it resides in tf-operator) to a separate repo (https://github.com/kubeflow/common). This will contain the common code and APIs that can be used across all operators (https://github.com/kubeflow/common/blob/master/operator/v1/types.go).

@gaocegege
Member

Thanks for the awesome proposal. I have the same opinion as johnugeorge.

I suggest using the v1beta2-style API (see https://github.com/kubeflow/pytorch-operator/blob/master/examples/mnist/v1beta2/pytorch_job_mnist_gloo.yaml).

@terrytangyuan
Member

Link to Rabit here: https://github.com/dmlc/rabit

+1 to using kubeflow/common. Feel free to create new issues there if you come across anything else that can be reused by the XGBoost operator.

@richardsliu

Yes please use the kubeflow/common library and file issues if you find any problems. Thanks!

@merlintang
Author

merlintang commented Apr 10, 2019 via email

@merlintang
Author

merlintang commented Apr 17, 2019

@jlewi should we create a repo under kubeflow for the xgboost operator?

jlewi added a commit to kubeflow/xgboost-operator that referenced this issue Apr 22, 2019
@jlewi
Contributor

jlewi commented Apr 22, 2019

I have created https://github.com/kubeflow/xgboost

Should we close this issue out and open up more specific issues in that repo?

A good place to start would be to define a ROADMAP.md in that repository.

Can we aim to have an alpha version of the custom resource as part of 0.6 which will be released end of June?

@terrytangyuan
Member

@jlewi Thanks! Would it be better to change the repo name to xgboost-operator so it's more consistent with the repos for the other operators?

@gaocegege
Member

+1 for xgboost-operator

@merlintang
Author

merlintang commented Apr 23, 2019 via email

@richardsliu

This can be closed, right? @merlintang

@terrytangyuan
Member

terrytangyuan commented Jul 17, 2019

Yes.

/close

@k8s-ci-robot

@terrytangyuan: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@merlintang
Author

merlintang commented Jul 18, 2019 via email

xfate123 pushed a commit to xfate123/xgboost-operator that referenced this issue May 16, 2020
woop pushed a commit to woop/community that referenced this issue Nov 16, 2020