diff --git a/README.md b/README.md
index 175d235d1..992511e8d 100644
--- a/README.md
+++ b/README.md
@@ -182,14 +182,14 @@ components, your DB and Storage configurations, and so forth. For now, we'll use
 inspect this sample resource to see other configurable options.

 ```bash
-cd ${WORKING_DIR}/config/samples
+cd ${WORKING_DIR}/config/samples/v2/dspa-simple
 kustomize build . | oc -n ${DSP_Namespace} apply -f -
 ```

 > Note: the sample CR used here deploys a minio instance so DSP may work out of the box
 > this is unsupported in production environments and we recommend to provide your own
 > object storage connection details via spec.objectStorage.externalStorage
-> see ${WORKING_DIR}/config/samples/dspa_simple_external_storage.yaml for an example.
+> see ${WORKING_DIR}/config/samples/v2/external-object-storage/dspa.yaml for an example.

 Confirm all pods reach ready state by running:
@@ -207,7 +207,7 @@ we deployed a `DataSciencePipelinesApplication` resource named `sample`. We can

 ```bash
 DSP_Namespace_2=test-ds-project-2
 oc new-project ${DSP_Namespace_2}
-cd ${WORKING_DIR}/config/samples
+cd ${WORKING_DIR}/config/samples/v2/dspa-simple
 kustomize build . | oc -n ${DSP_Namespace_2} apply -f -
 ```
@@ -220,7 +220,7 @@ you can simply investigate and deploy the following path:

 ```bash
 DSP_Namespace_3=test-ds-project-3
 oc new-project ${DSP_Namespace_3}
-cd ${WORKING_DIR}/config/samples/custom-configs
+cd ${WORKING_DIR}/config/samples/v2/custom-configs
 kustomize build . | oc -n ${DSP_Namespace_3} apply -f -
 ```
@@ -232,16 +232,16 @@ These can be configured by the end user as needed.

 ### Deploy a DSP with external Object Storage
 To specify a custom Object Storage (example an AWS s3 bucket) you will need to provide DSPO with your S3 credentials in
-the form of a k8s `Secret`, see an example of such a secret here `config/samples/external-object-storage/storage-creds.yaml`.
+the form of a k8s `Secret`, see an example of such a secret here `config/samples/v2/external-object-storage/storage-creds.yaml`.
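+
+For reference, a minimal sketch of such a `Secret` might look like the following (the secret name and key names here
+mirror the `s3CredentialsSecret` values used by the sample DSPA in `config/samples/v2/external-object-storage/dspa.yaml`;
+the values are placeholders to be replaced with your own credentials):
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: aws-bucket-creds
+stringData:
+  k8saccesskey: <your-aws-access-key-id>
+  k8ssecretkey: <your-aws-secret-access-key>
+type: Opaque
+```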

 DSPO can deploy a DSPA instance and use this S3 bucket for storing its metadata and pipeline artifacts. A sample
-configuration for a DSPA that does this is found in `config/samples/external-object-storage`, you can update this as
+configuration for a DSPA that does this is found in `config/samples/v2/external-object-storage`, you can update this as
 needed, and deploy this DSPA by running the following:

 ```bash
-DSP_Namespace_3=test-ds-project-4
+DSP_Namespace_4=test-ds-project-4
 oc new-project ${DSP_Namespace_4}
-cd ${WORKING_DIR}/config/samples/external-object-storage
-kustomize build . | oc -n ${DSP_Namespace_3} apply -f -
+cd ${WORKING_DIR}/config/samples/v2/external-object-storage
+kustomize build . | oc -n ${DSP_Namespace_4} apply -f -
 ```
@@ -264,7 +264,7 @@ To understand how these components interact with each other please refer to the
 ## Deploying Optional Components

 ### MariaDB
-To deploy a standalone MariaDB metadata database (rather than providing your own database connection details), simply add a `mariaDB` item under the `spec.database` in your DSPA definition with an `deploy` key set to `true`. All other fields are defaultable/optional, see [All Fields DSPA Example](./config/samples/dspa_all_fields.yaml) for full details. Note that this component is mutually exclusive with externally-provided databases (defined by `spec.database.externalDB`).
+To deploy a standalone MariaDB metadata database (rather than providing your own database connection details), simply add a `mariaDB` item under `spec.database` in your DSPA definition with a `deploy` key set to `true`. All other fields are defaultable/optional, see [All Fields DSPA Example](config/samples/v2/dspa-all-fields/dspa_all_fields.yaml) for full details. Note that this component is mutually exclusive with externally-provided databases (defined by `spec.database.externalDB`).

 ```
 apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
@@ -280,7 +280,7 @@ spec:
 ```

 ### Minio
-To deploy a Minio Object Storage component (rather than providing your own object storage connection details), simply add a `minio` item under the `spec.objectStorage` in your DSPA definition with an `image` key set to a valid minio component container image. All other fields are defaultable/optional, see [All Fields DSPA Example](./config/samples/dspa_all_fields.yaml) for full details. Note that this component is mutually exclusive with externally-provided object stores (defined by `spec.objectStorage.externalStorage`).
+To deploy a Minio Object Storage component (rather than providing your own object storage connection details), simply add a `minio` item under `spec.objectStorage` in your DSPA definition with an `image` key set to a valid minio component container image. All other fields are defaultable/optional, see [All Fields DSPA Example](config/samples/v2/dspa-all-fields/dspa_all_fields.yaml) for full details. Note that this component is mutually exclusive with externally-provided object stores (defined by `spec.objectStorage.externalStorage`).

 ```
 apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
@@ -297,7 +297,7 @@ spec:
 ```

 ### ML Pipelines UI
-To deploy the standalone DS Pipelines UI component, simply add a `spec.mlpipelineUI` item to your DSPA with an `image` key set to a valid ui component container image. All other fields are defaultable/optional, see [All Fields DSPA Example](./config/samples/dspa_all_fields.yaml) for full details.
+To deploy the standalone DS Pipelines UI component, simply add a `spec.mlpipelineUI` item to your DSPA with an `image` key set to a valid UI component container image. All other fields are defaultable/optional, see [All Fields DSPA Example](config/samples/v2/dspa-all-fields/dspa_all_fields.yaml) for full details.

 ```
 apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
@@ -314,7 +314,7 @@ spec:

 ### ML Metadata
-To deploy the ML Metadata artifact linage/metadata component, simply add a `spec.mlmd` item to your DSPA with `deploy` set to `true`. All other fields are defaultable/optional, see [All Fields DSPA Example](./config/samples/dspa_all_fields.yaml) for full details.
+To deploy the ML Metadata artifact lineage/metadata component, simply add a `spec.mlmd` item to your DSPA with `deploy` set to `true`. All other fields are defaultable/optional, see [All Fields DSPA Example](config/samples/v2/dspa-all-fields/dspa_all_fields.yaml) for full details.

 ```
 apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
@@ -643,6 +643,6 @@ Refer to this [repo][kubeflow-pipelines-examples] to see examples of different p
 [thirdparty-images]: https://github.com/opendatahub-io/data-science-pipelines/tree/master/third-party
 [pre-commit-installation]: https://pre-commit.com/
 [kubebuilder-docs]: https://book.kubebuilder.io/
-[dspa-yaml]: https://github.com/opendatahub-io/data-science-pipelines-operator/blob/main/config/samples/dspa_all_fields.yaml#L77
-[sample-yaml]: https://github.com/opendatahub-io/data-science-pipelines-operator/blob/main/config/samples/dspa_simple.yaml
+[dspa-yaml]: https://github.com/opendatahub-io/data-science-pipelines-operator/blob/main/config/samples/v2/dspa-all-fields/dspa_all_fields.yaml#L77
+[sample-yaml]: https://github.com/opendatahub-io/data-science-pipelines-operator/blob/main/config/samples/v2/dspa-simple/dspa_simple.yaml
 [kubeflow-pipelines-examples]: https://github.com/rh-datascience-and-edge-practice/kubeflow-pipelines-examples
diff --git a/config/samples/dspa_simple_external_storage.yaml b/config/samples/dspa_simple_external_storage.yaml
deleted file mode 100644
index e8143e57d..000000000
--- a/config/samples/dspa_simple_external_storage.yaml
+++ /dev/null
@@ -1,21 +0,0 @@
-apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
-kind: DataSciencePipelinesApplication
-metadata:
-  name: sample
-  namespace: data-science-project
-spec:
-  objectStorage:
-    # Configure your external object storage via `externalStorage` field
-    externalStorage:
-      host: minio.com
-      port: "9092"
-      bucket: mlpipeline
-      scheme: https
-      s3CredentialsSecret:
-        secretName: somesecret-db-sample
-        accessKey: AWS_ACCESS_KEY_ID
-        secretKey: AWS_SECRET_ACCESS_KEY
-  # Optional
-  mlpipelineUI:
-    # Image field is required
-    image: 'quay.io/opendatahub/odh-ml-pipelines-frontend-container:beta-ui'
diff --git a/config/samples/custom-configs/artifact_script.yaml b/config/samples/v1/custom-configs/artifact_script.yaml
similarity index 100%
rename from config/samples/custom-configs/artifact_script.yaml
rename to config/samples/v1/custom-configs/artifact_script.yaml
diff --git a/config/samples/custom-configs/db-creds.yaml b/config/samples/v1/custom-configs/db-creds.yaml
similarity index 100%
rename from config/samples/custom-configs/db-creds.yaml
rename to config/samples/v1/custom-configs/db-creds.yaml
diff --git a/config/samples/custom-configs/dspa.yaml b/config/samples/v1/custom-configs/dspa.yaml
similarity index 100%
rename from config/samples/custom-configs/dspa.yaml
rename to config/samples/v1/custom-configs/dspa.yaml
diff --git a/config/samples/v1/custom-configs/kustomization.yaml b/config/samples/v1/custom-configs/kustomization.yaml
new file mode 100644
index 000000000..ac15cb546
--- /dev/null
+++ b/config/samples/v1/custom-configs/kustomization.yaml
@@ -0,0 +1,10 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+resources:
+  # mlpipeline-minio-artifact.yaml is required due to https://issues.redhat.com/browse/RHOAIENG-1720
+  - mlpipeline-minio-artifact.yaml
+  - dspa.yaml
+  - db-creds.yaml
+  - artifact_script.yaml
+  - storage-creds.yaml
+  - ui-configmap.yaml
diff --git a/config/samples/v1/custom-configs/mlpipeline-minio-artifact.yaml b/config/samples/v1/custom-configs/mlpipeline-minio-artifact.yaml
new file mode 100644
index 000000000..6b48df417
--- /dev/null
+++ b/config/samples/v1/custom-configs/mlpipeline-minio-artifact.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mlpipeline-minio-artifact
+  labels:
+    opendatahub.io/dashboard: 'true'
+    opendatahub.io/managed: 'true'
+  annotations:
+    opendatahub.io/connection-type: s3
+    openshift.io/display-name: Minio Data Connection
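+# NOTE: the base64 values below decode to the well-known AWS documentation example credentials
+# (AKIAIOSFODNN7EXAMPLE / wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY) and are placeholders only;
+# replace them with credentials for your own object store.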
+data:
+  accesskey: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
+  secretkey: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQ==
+type: Opaque
diff --git a/config/samples/custom-configs/storage-creds.yaml b/config/samples/v1/custom-configs/storage-creds.yaml
similarity index 100%
rename from config/samples/custom-configs/storage-creds.yaml
rename to config/samples/v1/custom-configs/storage-creds.yaml
diff --git a/config/samples/custom-configs/ui-configmap.yaml b/config/samples/v1/custom-configs/ui-configmap.yaml
similarity index 100%
rename from config/samples/custom-configs/ui-configmap.yaml
rename to config/samples/v1/custom-configs/ui-configmap.yaml
diff --git a/config/samples/dspa_all_fields.yaml b/config/samples/v1/dspa-all-fields/dspa_all_fields.yaml
similarity index 100%
rename from config/samples/dspa_all_fields.yaml
rename to config/samples/v1/dspa-all-fields/dspa_all_fields.yaml
diff --git a/config/samples/dspa_local_dev.yaml b/config/samples/v1/dspa-local-dev/dspa_local_dev.yaml
similarity index 100%
rename from config/samples/dspa_local_dev.yaml
rename to config/samples/v1/dspa-local-dev/dspa_local_dev.yaml
diff --git a/config/samples/dspa_simple.yaml b/config/samples/v1/dspa-simple/dspa_simple.yaml
similarity index 91%
rename from config/samples/dspa_simple.yaml
rename to config/samples/v1/dspa-simple/dspa_simple.yaml
index 8e2a889af..09776c4ef 100644
--- a/config/samples/dspa_simple.yaml
+++ b/config/samples/v1/dspa-simple/dspa_simple.yaml
@@ -6,7 +6,7 @@ spec:
   # One of minio or externalStorage must be specified for objectStorage
   # This example illustrates minimal deployment with minio
   # This is NOT supported and should be used for dev testing/experimentation only.
-  # See dspa_simple_external_storage.yaml for an example with external connection.
+  # See external-object-storage/dspa.yaml for an example with external connection.
   objectStorage:
     minio:
       # Image field is required
diff --git a/config/samples/kustomization.yaml b/config/samples/v1/dspa-simple/kustomization.yaml
similarity index 100%
rename from config/samples/kustomization.yaml
rename to config/samples/v1/dspa-simple/kustomization.yaml
diff --git a/config/samples/external-object-storage/dspa.yaml b/config/samples/v1/external-object-storage/dspa.yaml
similarity index 100%
rename from config/samples/external-object-storage/dspa.yaml
rename to config/samples/v1/external-object-storage/dspa.yaml
diff --git a/config/samples/external-object-storage/kustomization.yaml b/config/samples/v1/external-object-storage/kustomization.yaml
similarity index 100%
rename from config/samples/external-object-storage/kustomization.yaml
rename to config/samples/v1/external-object-storage/kustomization.yaml
diff --git a/config/samples/external-object-storage/storage-creds.yaml b/config/samples/v1/external-object-storage/storage-creds.yaml
similarity index 100%
rename from config/samples/external-object-storage/storage-creds.yaml
rename to config/samples/v1/external-object-storage/storage-creds.yaml
diff --git a/config/samples/v2/custom-configs/db-creds.yaml b/config/samples/v2/custom-configs/db-creds.yaml
new file mode 100644
index 000000000..bf9a3bf6c
--- /dev/null
+++ b/config/samples/v2/custom-configs/db-creds.yaml
@@ -0,0 +1,10 @@
+kind: Secret
+apiVersion: v1
+metadata:
+  name: testdbsecret
+  labels:
+    app: mariadb-sample
+    component: data-science-pipelines
+stringData:
+  password: "testingpassword"
+type: Opaque
diff --git a/config/samples/v2/custom-configs/dspa.yaml b/config/samples/v2/custom-configs/dspa.yaml
new file mode 100644
index 000000000..883f7355b
--- /dev/null
+++ b/config/samples/v2/custom-configs/dspa.yaml
@@ -0,0 +1,89 @@
+apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
+kind: DataSciencePipelinesApplication
+metadata:
+  name: sample
+spec:
+  dspVersion: v2
+  apiServer:
+    deploy: true
+    image: gcr.io/ml-pipeline/api-server:2.0.2
+    stripEOF: true
+    terminateStatus: Cancelled
+    trackArtifacts: true
+    dbConfigConMaxLifetimeSec: 120
+    collectMetrics: true
+    autoUpdatePipelineDefaultVersion: true
+    resources:
+      requests:
+        cpu: 250m
+        memory: 500Mi
+      limits:
+        cpu: 500m
+        memory: 1Gi
+  persistenceAgent:
+    deploy: true
+    image: gcr.io/ml-pipeline/persistenceagent:2.0.2
+    numWorkers: 2
+    resources:
+      requests:
+        cpu: 120m
+        memory: 500Mi
+      limits:
+        cpu: 250m
+        memory: 1Gi
+  scheduledWorkflow:
+    deploy: true
+    image: gcr.io/ml-pipeline/scheduledworkflow:2.0.2
+    cronScheduleTimezone: UTC
+    resources:
+      requests:
+        cpu: 120m
+        memory: 100Mi
+      limits:
+        cpu: 250m
+        memory: 250Mi
+  mlpipelineUI:
+    deploy: true
+    image: gcr.io/ml-pipeline/frontend:2.0.2
+    resources:
+      limits:
+        cpu: 100m
+        memory: 256Mi
+      requests:
+        cpu: 100m
+        memory: 256Mi
+    configMap: custom-ui-configmap
+  database:
+    mariaDB:
+      deploy: true
+      image: registry.redhat.io/rhel8/mariadb-103:1-188
+      username: mlpipeline
+      pipelineDBName: randomDBName
+      pvcSize: 10Gi
+      resources:
+        requests:
+          cpu: 300m
+          memory: 800Mi
+        limits:
+          cpu: "1"
+          memory: 1Gi
+      passwordSecret:
+        name: testdbsecret
+        key: password
+  objectStorage:
+    minio:
+      deploy: true
+      image: quay.io/opendatahub/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance
+      bucket: mlpipeline
+      pvcSize: 10Gi
+      resources:
+        requests:
+          cpu: 200m
+          memory: 100Mi
+        limits:
+          cpu: 250m
+          memory: 1Gi
+      s3CredentialsSecret:
+        secretName: mlpipeline-minio-artifact
+        accessKey: AWS_ACCESS_KEY_ID
+        secretKey: AWS_SECRET_ACCESS_KEY
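+# NOTE: the custom-ui-configmap ConfigMap, the testdbsecret Secret and the mlpipeline-minio-artifact
+# Secret referenced above are expected to be provided by ui-configmap.yaml, db-creds.yaml and
+# storage-creds.yaml in this same kustomization.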
diff --git a/config/samples/custom-configs/kustomization.yaml b/config/samples/v2/custom-configs/kustomization.yaml
similarity index 85%
rename from config/samples/custom-configs/kustomization.yaml
rename to config/samples/v2/custom-configs/kustomization.yaml
index 5b7f5481b..f27b7285d 100644
--- a/config/samples/custom-configs/kustomization.yaml
+++ b/config/samples/v2/custom-configs/kustomization.yaml
@@ -3,6 +3,5 @@ kind: Kustomization
 resources:
   - dspa.yaml
   - db-creds.yaml
-  - artifact_script.yaml
   - storage-creds.yaml
   - ui-configmap.yaml
diff --git a/config/samples/v2/custom-configs/storage-creds.yaml b/config/samples/v2/custom-configs/storage-creds.yaml
new file mode 100644
index 000000000..b30151a50
--- /dev/null
+++ b/config/samples/v2/custom-configs/storage-creds.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mlpipeline-minio-artifact
+  labels:
+    opendatahub.io/dashboard: 'true'
+    opendatahub.io/managed: 'true'
+  annotations:
+    opendatahub.io/connection-type: s3
+    openshift.io/display-name: Minio Data Connection
+data:
+  AWS_ACCESS_KEY_ID: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
+  AWS_SECRET_ACCESS_KEY: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQ==
+  # The following keys are needed while https://github.com/kubeflow/pipelines/issues/9689 is open
+  accesskey: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
+  secretkey: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQ==
+type: Opaque
diff --git a/config/samples/v2/custom-configs/ui-configmap.yaml b/config/samples/v2/custom-configs/ui-configmap.yaml
new file mode 100644
index 000000000..7e2e7ebaf
--- /dev/null
+++ b/config/samples/v2/custom-configs/ui-configmap.yaml
@@ -0,0 +1,11 @@
+apiVersion: v1
+data:
+  viewer-pod-template.json: |-
+    {
+      "spec": {
+        "serviceAccountName": "ds-pipelines-viewer-sample"
+      }
+    }
+kind: ConfigMap
+metadata:
+  name: custom-ui-configmap
diff --git a/config/samples/v2/dspa-all-fields/dspa_all_fields.yaml b/config/samples/v2/dspa-all-fields/dspa_all_fields.yaml
new file mode 100644
index 000000000..272a11476
--- /dev/null
+++ b/config/samples/v2/dspa-all-fields/dspa_all_fields.yaml
@@ -0,0 +1,209 @@
+# This file should not be used to deploy a DataSciencePipelinesApplication
+# Its main purpose is to show all possible fields that can be configured
+# Note that you cannot specify all fields, some are mutually exclusive
+# For example, you can only specify either a minio deployment or an
+# externalStorage connection, but not both
+apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
+kind: DataSciencePipelinesApplication
+metadata:
+  name: sample
+  namespace: data-science-project
+spec:
+  dspVersion: v2
+  apiServer:
+    deploy: true
+    image: quay.io/modh/odh-ml-pipelines-api-server-container:v1.18.0-8
+    argoLauncherImage: quay.io/org/kfp-launcher:latest
+    argoDriverImage: quay.io/org/kfp-driver:latest
+    applyTektonCustomResource: true
+    archiveLogs: false
+    artifactImage: quay.io/modh/odh-ml-pipelines-artifact-manager-container:v1.18.0-8
+    cacheImage: registry.access.redhat.com/ubi8/ubi-minimal
+    moveResultsImage: busybox
+    injectDefaultScript: true
+    stripEOF: true
+    terminateStatus: Cancelled
+    trackArtifacts: true
+    dbConfigConMaxLifetimeSec: 120
+    collectMetrics: true
+    autoUpdatePipelineDefaultVersion: true
+    resources:
+      requests:
+        cpu: 250m
+        memory: 500Mi
+      limits:
+        cpu: 500m
+        memory: 1Gi
+  persistenceAgent:
+    deploy: true
+    image: quay.io/modh/odh-ml-pipelines-persistenceagent-container:v1.18.0-8
+    numWorkers: 2 # Number of workers for sync job.
+    resources:
+      requests:
+        cpu: 120m
+        memory: 500Mi
+      limits:
+        cpu: 250m
+        memory: 1Gi
+  scheduledWorkflow:
+    deploy: true
+    image: quay.io/modh/odh-ml-pipelines-scheduledworkflow-container:v1.18.0-8
+    cronScheduleTimezone: UTC
+    resources:
+      requests:
+        cpu: 120m
+        memory: 100Mi
+      limits:
+        cpu: 250m
+        memory: 250Mi
+  mlpipelineUI:
+    deploy: true
+    image: quay.io/opendatahub/odh-ml-pipelines-frontend-container:beta-ui
+    resources:
+      limits:
+        cpu: 100m
+        memory: 256Mi
+      requests:
+        cpu: 100m
+        memory: 256Mi
+    # requires this configmap to be created beforehand,
+    # otherwise the operator will not deploy the DSPA
+    configMap: ds-pipeline-ui-configmap
+  database:
+    disableHealthCheck: false
+    mariaDB: # mutually exclusive with externalDB
+      deploy: true
+      image: registry.redhat.io/rhel8/mariadb-103:1-188
+      username: mlpipeline
+      pipelineDBName: randomDBName
+      pvcSize: 20Gi
+      resources:
+        requests:
+          cpu: 300m
+          memory: 800Mi
+        limits:
+          cpu: "1"
+          memory: 1Gi
+      # requires this secret to be created beforehand,
+      # otherwise the operator will not deploy the DSPA
+      passwordSecret:
+        name: ds-pipelines-db-sample
+        key: password
+#    externalDB:
+#      host: mysql:3306
+#      port: "8888"
+#      username: root
+#      pipelineDBName: randomDBName
+#      passwordSecret:
+#        name: somesecret
+#        key: somekey
+  objectStorage:
+    disableHealthCheck: false
+    minio: # mutually exclusive with externalStorage
+      deploy: true
+      image: quay.io/opendatahub/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance
+      bucket: mlpipeline
+      pvcSize: 10Gi
+      resources:
+        requests:
+          cpu: 200m
+          memory: 100Mi
+        limits:
+          cpu: 250m
+          memory: 1Gi
+      # requires this secret to be created beforehand,
+      # otherwise the operator will not deploy the DSPA
+      s3CredentialsSecret:
+        secretName: somesecret-sample
+        accessKey: AWS_ACCESS_KEY_ID
+        secretKey: AWS_SECRET_ACCESS_KEY
+#    externalStorage:
+#      host: minio.com
+#      port: "9092"
+#      bucket: mlpipeline
+#      scheme: https
+#      s3CredentialsSecret:
+#        secretName: somesecret-db-sample
+#        accessKey: somekey
+#        secretKey: somekey
+  mlmd: # Deploys an optional ML-Metadata Component
+    deploy: true
+    envoy:
+      image: quay.io/opendatahub/ds-pipelines-metadata-envoy:1.7.0
+      resources:
+        limits:
+          cpu: 100m
+          memory: 256Mi
+        requests:
+          cpu: 100m
+          memory: 256Mi
+    grpc:
+      image: quay.io/opendatahub/ds-pipelines-metadata-grpc:1.0.0
+      port: "8080"
+      resources:
+        limits:
+          cpu: 100m
+          memory: 256Mi
+        requests:
+          cpu: 100m
+          memory: 256Mi
+    writer:
+      image: quay.io/opendatahub/ds-pipelines-metadata-writer:1.1.0
+      resources:
+        limits:
+          cpu: 100m
+          memory: 256Mi
+        requests:
+          cpu: 100m
+          memory: 256Mi
+status:
+  # Reports True iff:
+  # * ApiServerReady, PersistenceAgentReady, ScheduledWorkflowReady, DatabaseReady, ObjectStorageReady report True
+  # AND
+  # * MLPipelinesUIReady is (Ready: True) OR is (Ready: False && DeploymentDisabled)
+  conditions:
+    - type: Ready
+      status: "True"
+      observedGeneration: 4
+      lastTransitionTime: '2023-02-02T21:00:00Z'
+      reason: MinimumReplicasAvailable
+      message: 'some message'
+    - type: ApiServerReady
+      status: "True"
+      observedGeneration: 4
+      lastTransitionTime: '2023-02-02T21:00:00Z'
+      reason: MinimumReplicasAvailable
+      message: 'some message'
+    - type: UserInterfaceReady
+      status: "True"
+      observedGeneration: 4
+      lastTransitionTime: '2023-02-02T21:00:00Z'
+      reason: MinimumReplicasAvailable
+      message: 'some message'
+    - type: PersistenceAgentReady
+      status: "True"
+      observedGeneration: 4
+      lastTransitionTime: '2023-02-02T21:00:00Z'
+      reason: MinimumReplicasAvailable
+      message: 'some message'
+    - type: ScheduledWorkflowReady
+      status: "True"
+      observedGeneration: 4
+      lastTransitionTime: '2023-02-02T21:00:00Z'
+      reason: MinimumReplicasAvailable
+      message: 'some message'
+    # Do we need to do this?? API Server application already
+    # checks for db/storage connectivity, and pod will fail to come up
+    # in such a case.
+    - type: DatabaseReady
+      status: "True"
+      observedGeneration: 4
+      lastTransitionTime: '2023-02-02T21:00:00Z'
+      reason: DataBaseReady
+      message: ''
+    - type: ObjectStorageReady
+      status: "True"
+      observedGeneration: 4
+      lastTransitionTime: '2023-02-02T21:00:00Z'
+      reason: ObjectStorageReady
+      message: ''
diff --git a/config/samples/dspa_simple_v2.yaml b/config/samples/v2/dspa-simple/dspa_simple.yaml
similarity index 100%
rename from config/samples/dspa_simple_v2.yaml
rename to config/samples/v2/dspa-simple/dspa_simple.yaml
diff --git a/config/samples/v2/dspa-simple/kustomization.yaml b/config/samples/v2/dspa-simple/kustomization.yaml
new file mode 100644
index 000000000..d673cd998
--- /dev/null
+++ b/config/samples/v2/dspa-simple/kustomization.yaml
@@ -0,0 +1,2 @@
+resources:
+- dspa_simple.yaml
diff --git a/config/samples/v2/external-object-storage/dspa.yaml b/config/samples/v2/external-object-storage/dspa.yaml
new file mode 100644
index 000000000..2874be390
--- /dev/null
+++ b/config/samples/v2/external-object-storage/dspa.yaml
@@ -0,0 +1,20 @@
+apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
+kind: DataSciencePipelinesApplication
+metadata:
+  name: sample
+spec:
+  dspVersion: v2
+  objectStorage:
+    externalStorage:
+      bucket: rhods-dsp-dev
+      host: s3.us-east-2.amazonaws.com
+      region: us-east-2
+      s3CredentialsSecret:
+        accessKey: k8saccesskey
+        secretKey: k8ssecretkey
+        secretName: aws-bucket-creds
+      scheme: https
+  # Optional
+  mlpipelineUI:
+    # Image field is required
+    image: gcr.io/ml-pipeline/frontend:2.0.2
diff --git a/config/samples/v2/external-object-storage/kustomization.yaml b/config/samples/v2/external-object-storage/kustomization.yaml
new file mode 100644
index 000000000..08838e669
--- /dev/null
+++ b/config/samples/v2/external-object-storage/kustomization.yaml
@@ -0,0 +1,4 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+resources:
+  - dspa.yaml
diff --git a/config/samples/v2/local-dev/dspa.yaml b/config/samples/v2/local-dev/dspa.yaml
new file mode 100644
index 000000000..e7a8125d3
--- /dev/null
+++ b/config/samples/v2/local-dev/dspa.yaml
@@ -0,0 +1,35 @@
+# A simple DSPA with the Database and ObjectStore Health Checks Disabled
+#
+# Since the default database and storage options leverage internal Services,
+# a locally-run DSPO that manages an external cluster (common development practice)
+# would not be able to run the pre-deploy health checks on these prerequisite components
+# and therefore the DSPA will never fully deploy without disabling them, as this DSPA sample does
+apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
+kind: DataSciencePipelinesApplication
+metadata:
+  name: sample
+spec:
+  dspVersion: v2
+  apiServer:
+    deploy: true
+    image: gcr.io/ml-pipeline/api-server:2.0.2
+  persistenceAgent:
+    image: gcr.io/ml-pipeline/persistenceagent:2.0.2
+  scheduledWorkflow:
+    image: gcr.io/ml-pipeline/scheduledworkflow:2.0.2
+  mlmd:
+    deploy: true
+    grpc:
+      image: gcr.io/tfx-oss-public/ml_metadata_store_server:1.14.0
+    envoy:
+      image: gcr.io/ml-pipeline/metadata-envoy:2.0.2
+  database:
+    disableHealthCheck: true
+  objectStorage:
+    disableHealthCheck: true
+    minio:
+      image: quay.io/opendatahub/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance
+  mlpipelineUI:
+    image: gcr.io/ml-pipeline/frontend:2.0.2
+  workflowController:
+    image: gcr.io/ml-pipeline/workflow-controller:v3.3.10-license-compliance
diff --git a/config/samples/v2/local-dev/kustomization.yaml b/config/samples/v2/local-dev/kustomization.yaml
new file mode 100644
index 000000000..8b8a2747f
--- /dev/null
+++ b/config/samples/v2/local-dev/kustomization.yaml
@@ -0,0 +1,3 @@
+resources:
+- dspa.yaml
+- storage-creds.yaml
diff --git a/config/samples/v2/local-dev/storage-creds.yaml b/config/samples/v2/local-dev/storage-creds.yaml
new file mode 100644
index 000000000..6b48df417
--- /dev/null
+++ b/config/samples/v2/local-dev/storage-creds.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mlpipeline-minio-artifact
+  labels:
+    opendatahub.io/dashboard: 'true'
+    opendatahub.io/managed: 'true'
+  annotations:
+    opendatahub.io/connection-type: s3
+    openshift.io/display-name: Minio Data Connection
+data:
+  accesskey: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
+  secretkey: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQ==
+type: Opaque
diff --git a/tests/upgrades/main.sh b/tests/upgrades/main.sh
index 1024ecf95..f6f8d7508 100755
--- a/tests/upgrades/main.sh
+++ b/tests/upgrades/main.sh
@@ -1,3 +1,3 @@
 kubectl create namespace ${DSPA_NS}
-cd ${GITHUB_WORKSPACE}/config/samples
+cd ${GITHUB_WORKSPACE}/config/samples/v1/dspa-simple
 kustomize build . | kubectl -n ${DSPA_NS} apply -f -
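+# (optional) the sample DSPA is named "sample"; if needed, wait for it to report Ready before exercising it, e.g.:
+# kubectl wait --for=condition=Ready datasciencepipelinesapplications/sample -n ${DSPA_NS} --timeout=300s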