Merge pull request #521 from gmfrasca/dspv2-merge
Dspv2 merge - Sprint 9 Check-in Merge 2
openshift-merge-bot[bot] authored Jan 8, 2024
2 parents 54ddfde + 9b39ab9 commit dcdec26
Showing 35 changed files with 308 additions and 760 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.yaml
@@ -23,7 +23,7 @@ body:
      multiple: false
      options:
        - Standalone DSPO (without ODH)
        - Manually deployed Kfdef
        - Manually deployed DataScienceCluster
        - ODH Dashboard UI
    validations:
      required: true
10 changes: 10 additions & 0 deletions .github/resources/datasciencecluster/datasciencecluster.yaml
@@ -0,0 +1,10 @@
kind: DataScienceCluster
apiVersion: datasciencecluster.opendatahub.io/v1
metadata:
  name: data-science-pipelines-operator
spec:
  components:
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
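For reference, the upgrade workflow below applies this manifest as-is; a minimal sketch of the same step from a terminal (the namespace mirrors the workflow's `DSC_NS` value):

```bash
# Apply the DataScienceCluster that enables the dashboard and pipelines components.
# DataScienceCluster is cluster-scoped, so the -n flag mirrors the workflow
# but is not strictly required.
kubectl apply -f .github/resources/datasciencecluster/datasciencecluster.yaml -n opendatahub
```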
16 changes: 0 additions & 16 deletions .github/resources/kfdef/kfdef.yaml

This file was deleted.

32 changes: 16 additions & 16 deletions .github/workflows/upgrade-test.yml
@@ -90,18 +90,18 @@ jobs:

- name: Prepare for Upgrade Testing
run: |
# Update the KfDef manifest with the latest released version
sed -i "s/main/${{ inputs.released-version }}/" ${{ env.RESOURCES_DIR }}/kfdef/kfdef.yaml
working-directory: ${{ env.RESOURCES_DIR }}/kfdef
# Update the DataScienceCluster manifest with the latest released version
sed -i "s/main/${{ inputs.released-version }}/" ${{ env.RESOURCES_DIR }}/datasciencecluster/datasciencecluster.yaml
working-directory: ${{ env.RESOURCES_DIR }}/datasciencecluster
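The `sed` call above pins the manifest to a release by substituting the release tag for `main`; a standalone sketch with a hypothetical tag (`v1.2.0` and the tarball URI below are illustrative, not taken from the repository):

```bash
# Placeholder for ${{ inputs.released-version }}
RELEASED_VERSION=v1.2.0
sed -i "s/main/${RELEASED_VERSION}/" datasciencecluster.yaml
# A hypothetical line such as
#   uri: https://github.com/opendatahub-io/data-science-pipelines-operator/tarball/main
# would become
#   uri: https://github.com/opendatahub-io/data-science-pipelines-operator/tarball/v1.2.0
```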

- name: Print KfDef Manifest Contents
run: cat ${{ env.RESOURCES_DIR }}/kfdef/kfdef.yaml
- name: Print DataScienceCluster Manifest Contents
run: cat ${{ env.RESOURCES_DIR }}/datasciencecluster/datasciencecluster.yaml

- name: Deploy DSPO KfDef
- name: Deploy DataScienceCluster
run: |
kubectl apply -f ${{ env.RESOURCES_DIR }}/kfdef/kfdef.yaml -n ${{ env.DSPO_NS }}
kubectl apply -f ${{ env.RESOURCES_DIR }}/datasciencecluster/datasciencecluster.yaml -n ${{ env.DSC_NS }}
env:
DSPO_NS: "data-science-pipelines-operator"
DSC_NS: "opendatahub"

- name: Print ODH Operator Pod Logs
run: kubectl get pods -n openshift-operators -o jsonpath='{.items[*].metadata.name}' | xargs -I {} kubectl logs -n openshift-operators {}
@@ -122,18 +122,18 @@ jobs:

- name: Prepare for Upgrade Testing
run: |
# Update the KfDef manifest with the candidate version
sed -i "s/${{ inputs.released-version }}/${{ inputs.candidate-version }}/" ${{ env.RESOURCES_DIR }}/kfdef/kfdef.yaml
working-directory: ${{ env.RESOURCES_DIR }}/kfdef
# Update the DataScienceCluster manifest with the candidate version
sed -i "s/${{ inputs.released-version }}/${{ inputs.candidate-version }}/" ${{ env.RESOURCES_DIR }}/datasciencecluster/datasciencecluster.yaml
working-directory: ${{ env.RESOURCES_DIR }}/datasciencecluster

- name: Print KfDef Manifest Contents
run: cat ${{ env.RESOURCES_DIR }}/kfdef/kfdef.yaml
- name: Print DataScienceCluster Manifest Contents
run: cat ${{ env.RESOURCES_DIR }}/datasciencecluster/datasciencecluster.yaml

- name: Deploy KfDef Core for the candidate DSP Version
- name: Deploy DataScienceCluster for the candidate DSP Version
run: |
kubectl apply -f ${{ env.RESOURCES_DIR }}/kfdef/kfdef.yaml -n ${{ env.DSPO_NS }}
kubectl apply -f ${{ env.RESOURCES_DIR }}/datasciencecluster/datasciencecluster.yaml -n ${{ env.DSC_NS }}
env:
DSPO_NS: data-science-pipelines-operator
DSC_NS: opendatahub

- name: Run upgrade tests
run: |
2 changes: 1 addition & 1 deletion Makefile
@@ -51,7 +51,7 @@ IMG ?= quay.io/opendatahub/data-science-pipelines-operator:main
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.25.0
# Namespace to deploy the operator
OPERATOR_NS ?= odh-applications
OPERATOR_NS ?= opendatahub
# Namespace to deploy v2 infrastructure
V2INFRA_NS ?= openshift-pipelines
# Namespace to deploy argo infrastructure
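A sketch of overriding these variables from the command line (assuming the repository's standard kubebuilder-style targets; the custom namespace is illustrative):

```bash
# Deploy into the new default namespace
make deploy

# Or override the image and namespace explicitly
make deploy IMG=quay.io/opendatahub/data-science-pipelines-operator:main OPERATOR_NS=my-namespace
```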
3 changes: 2 additions & 1 deletion OWNERS
@@ -1,6 +1,5 @@
approvers:
- accorvin
- anishasthana
- DharmitD
- dsp-developers
- gmfrasca
@@ -12,7 +11,9 @@ reviewers:
- DharmitD
- gmfrasca
- gregsheremeta
- hbelmiro
- HumairAK
- rimolive
- VaniHaripriya
emeritus_approvers:
- harshad16
70 changes: 46 additions & 24 deletions README.md
@@ -51,7 +51,7 @@ interact with them in the
To get started you will first need to satisfy the following pre-requisites:

## Pre-requisites
1. An OpenShift cluster that is 4.9 or higher.
1. An OpenShift cluster that is 4.11 or higher.
2. You will need to be logged into this cluster as [cluster admin] via [oc client].
3. The OpenShift Cluster must have OpenShift Pipelines 1.8 or higher installed. We recommend channel pipelines-1.8
on OCP 4.10 and pipelines-1.9 or pipelines-1.10 for OCP 4.11, 4.12 and 4.13.
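A quick way to sanity-check these prerequisites from a terminal (a sketch; assumes `oc` is installed and you are already logged in):

```bash
oc version        # the cluster should report OpenShift 4.11 or higher
oc whoami         # confirm you are logged in as a cluster admin
# Confirm the OpenShift Pipelines operator subscription and channel
oc get subscription -n openshift-operators
```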
@@ -62,35 +62,27 @@ To get started you will first need to satisfy the following pre-requisites:

## Deploy the Operator via ODH

On a cluster with ODH installed, create a namespace where you would like to install DSPO:

```bash
DSPO_NS=data-science-pipelines-operator
oc new-project ${DSPO_NS}
```

Then deploy the following `KfDef` into the namespace created above:
Deploy the following `DataScienceCluster`:

```bash
cat <<EOF | oc apply -f -
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
kind: DataScienceCluster
apiVersion: datasciencecluster.opendatahub.io/v1
metadata:
  name: data-science-pipelines-operator
  namespace: ${DSPO_NS}
  name: data-science-pipelines-operator
spec:
  applications:
    - kustomizeConfig:
        repoRef:
          name: manifests
          path: data-science-pipelines-operator/
      name: data-science-pipelines-operator
  repos:
    - name: manifests
      uri: "https://github.com/opendatahub-io/odh-manifests/tarball/master"
  components:
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
EOF
```

> ℹ️ **Note:**
>
> You can also deploy other ODH components using `DataScienceCluster`. See https://github.com/opendatahub-io/opendatahub-operator#example-datasciencecluster for more information.

Confirm the pods are successfully deployed and reach the running state:

```bash
oc get pods -n ${DSPO_NS}
```

@@ -100,6 +92,35 @@
Once all pods are ready, we can proceed to deploy the first Data Science Pipelines (DSP) instance. Instructions
[here](#deploy-dsp-instance).

### Using a development image

To use a different Data Science Pipelines Operator image, you can point the operator at custom manifests from a branch or tag. To do so, modify the `IMAGES_DSPO` value in [config/base/params.env](config/base/params.env) and push the change to that branch or tag.

Create (or edit the existing) `DataScienceCluster`, adding `devFlags.manifests` with the URL of your branch or tag. For example, given the following repository and branch:

* Repository: `https://github.com/a_user/data-science-pipelines-operator`
* Branch: `my_branch`

The `DataScienceCluster` YAML should look like:

```yaml
kind: DataScienceCluster
apiVersion: datasciencecluster.opendatahub.io/v1
metadata:
  name: data-science-pipelines-operator
spec:
  components:
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
      devFlags:
        manifests:
          - uri: https://github.com/a_user/data-science-pipelines-operator/tarball/my_branch
            contextDir: config
            sourcePath: base
```

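Then apply it as you would any other `DataScienceCluster` (a sketch; the file name is arbitrary):

```bash
oc apply -f datasciencecluster.yaml
# Watch the operator pods roll out using manifests from your branch
oc get pods -n opendatahub -w
```
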
## Deploy the Operator standalone
First clone this repository:
@@ -436,8 +457,9 @@ Depending on how you installed DSPO, follow the instructions below accordingly to
To uninstall DSPO via ODH run the following:

```bash
KFDEF_NAME=data-science-pipelines-operator
oc delete kfdef ${KFDEF_NAME} -n ${DSPO_NS}
DSC_NAME=$(oc get DataScienceCluster -o jsonpath='{.items[0].metadata.name}')
DSPO_NS=$(oc get DataScienceCluster -o jsonpath='{.items[0].metadata.namespace}')
oc delete datasciencecluster ${DSC_NAME} -n "${DSPO_NS}"
```

## Cleanup Standalone Installation
6 changes: 6 additions & 0 deletions api/v1alpha1/dspipeline_types.go
@@ -203,6 +203,9 @@ type MariaDB struct {
	// Customize the size of the PVC created for the default MariaDB instance. Default: 10Gi
	// +kubebuilder:default:="10Gi"
	PVCSize resource.Quantity `json:"pvcSize,omitempty"`
	// Volume Mode Filesystem storageClass to use for PVC creation
	// +kubebuilder:validation:Optional
	StorageClassName string `json:"storageClassName,omitempty"`
	// Specify custom Pod resource requirements for this component.
	Resources *ResourceRequirements `json:"resources,omitempty"`
}
@@ -243,6 +246,9 @@ type Minio struct {
	// Customize the size of the PVC created for the Minio instance. Default: 10Gi
	// +kubebuilder:default:="10Gi"
	PVCSize resource.Quantity `json:"pvcSize,omitempty"`
	// Volume Mode Filesystem storageClass to use for PVC creation
	// +kubebuilder:validation:Optional
	StorageClassName string `json:"storageClassName,omitempty"`
	// Specify custom Pod resource requirements for this component.
	Resources *ResourceRequirements `json:"resources,omitempty"`
	// Specify a custom image for Minio pod.
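Together with the matching CRD change below, these fields let a `DataSciencePipelinesApplication` pin its MariaDB and Minio PVCs to a specific storage class. A hedged sketch of how that might look (field paths follow the JSON tags above; `gp3-csi` and the Minio image are illustrative values, not defaults from this repo):

```bash
cat <<EOF | oc apply -f -
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
kind: DataSciencePipelinesApplication
metadata:
  name: sample
spec:
  database:
    mariaDB:
      pvcSize: 10Gi
      storageClassName: gp3-csi   # illustrative storage class
  objectStorage:
    minio:
      image: quay.io/opendatahub/minio:latest   # illustrative image; this field is required
      pvcSize: 10Gi
      storageClassName: gp3-csi   # illustrative storage class
EOF
```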
3 changes: 1 addition & 2 deletions config/base/kustomization.yaml
@@ -1,6 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: odh-applications
namespace: opendatahub
namePrefix: data-science-pipelines-operator-
resources:
- ../crd
@@ -9,7 +9,6 @@ resources:
- ../prometheus
- ../configmaps

# Parameterize images via KfDef in ODH
configMapGenerator:
- name: dspo-parameters
envs:
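A sketch of rendering these manifests directly with kustomize (run from the repository root; assumes `kustomize` and `oc` are installed):

```bash
# Build the operator manifests with the new default namespace baked in
kustomize build config/base | oc apply -f -
```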
8 changes: 8 additions & 0 deletions (DataSciencePipelinesApplication CRD; file path truncated)
@@ -289,6 +289,10 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
storageClassName:
description: Volume Mode Filesystem storageClass to use for
PVC creation
type: string
username:
default: mlpipeline
description: 'The MariadB username that will be created. Should
@@ -637,6 +641,10 @@ spec:
- secretKey
- secretName
type: object
storageClassName:
description: Volume Mode Filesystem storageClass to use for
PVC creation
type: string
required:
- image
type: object
2 changes: 2 additions & 0 deletions config/internal/apiserver/default/service.yaml.tmpl
@@ -10,10 +10,12 @@ metadata:
    component: data-science-pipelines
spec:
  ports:
    {{ if .APIServer.EnableRoute }}
    - name: oauth
      port: 8443
      protocol: TCP
      targetPort: oauth
    {{ end }}
    - name: http
      port: 8888
      protocol: TCP
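With this guard in place, the rendered Service only exposes the `oauth` port when the API server has `EnableRoute` set. One way to verify against a live instance (a sketch; the `ds-pipeline-<name>` service name is an assumed convention):

```bash
# Prints 8443 when the oauth port was rendered; empty output otherwise
oc get service ds-pipeline-sample -n <dspa-namespace> \
  -o jsonpath='{.spec.ports[?(@.name=="oauth")].port}'
```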