
Add PVC for dev.kubeflow.org #139

Merged (1 commit) on May 25, 2018

Conversation

@pdmack (Member) commented on May 23, 2018

Closes kubeflow/kubeflow#854

Hopefully, the d.k.o default StorageClass is good to go? :-)



@pdmack (Member, Author) commented on May 23, 2018

/cc @jlewi

@k8s-ci-robot k8s-ci-robot requested a review from jlewi May 23, 2018 22:54
@jlewi (Contributor) commented on May 24, 2018

Did you actually run recreate_app.sh? Can you do so and check in the new ksonnet app? Can you also run ks apply to actually update dev.kubeflow.org?

Bonus points for fixing .gitignore to check in vendor for the ksonnet app.

@pdmack (Member, Author) commented on May 24, 2018

@jlewi I don't know what "run ks apply to actually update dev.kubeflow.org" means. Adding the vendor dir and trying to ks apply led to lots of errors, starting with the seldon bits.

@@ -3,32 +3,32 @@ environments:
   default:
     destination:
       namespace: kubeflow
-      server: https://35.188.73.10
+      server: https://10.19.47.136:6443
jlewi (Contributor) commented on this diff:

This looks like your minikube cluster. We probably need to point at the dev cluster before running recreate_app.sh

@jlewi (Contributor) commented on May 24, 2018

This is great, thanks for doing it.

By ks apply I meant: run ks apply on the generated app to deploy it to dev.kubeflow.org.

/lgtm
/approve
/hold

Feel free to submit as is just cancel the hold.

I put the hold in there just in case you want to try updating dev.kubeflow.org first.

The GKE cluster is in GCP project: kubeflow-dev
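The update jlewi is asking for could be run roughly like this (a sketch, not the project's documented procedure; the cluster name `dev-cluster` and environment name `default` are assumptions inferred from the PD name and diff shown elsewhere in this thread):

```shell
# Point kubectl at the dev cluster in the kubeflow-dev GCP project.
# Cluster name and zone are assumptions; verify with
# `gcloud container clusters list --project kubeflow-dev`.
gcloud container clusters get-credentials dev-cluster \
    --project kubeflow-dev --zone us-central1-a

# Regenerate the ksonnet app, then deploy it to the cluster.
./recreate_app.sh
cd dev-kubeflow-org/ks-app
ks apply default
```

Running `recreate_app.sh` while kubectl points at a local minikube cluster is what produced the wrong `server:` address in the diff above.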

@jlewi (Contributor) commented on May 24, 2018

Review status: 0 of 63 files reviewed at latest revision, 1 unresolved discussion.


dev-kubeflow-org/ks-app/components/issue-summarization.jsonnet, line 17 at r2 (raw file):

> endpoint

Is this correct? Why is pvcName set to endpoint? Looks like a bug, since you added the param pvcName.


Comments from Reviewable
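For context, the bug being flagged is a parameter wired to the wrong value. A sketch of the intended wiring in a ksonnet params file (the parameter values and PVC name here are illustrative assumptions, not the actual file contents):

```jsonnet
// components/params.libsonnet (sketch)
{
  components+: {
    "issue-summarization"+: {
      endpoint: "http://issue-summarization.kubeflow.svc.cluster.local:8000",
      // pvcName should have its own value; setting it to `endpoint`
      // is the typo jlewi is pointing at.
      pvcName: "issue-summarization-pvc",
    },
  },
}
```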

@pdmack (Member, Author) commented on May 24, 2018

Review status: 0 of 63 files reviewed at latest revision, 2 unresolved discussions.


dev-kubeflow-org/ks-app/components/issue-summarization.jsonnet, line 17 at r2 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

> endpoint
>
> Is this correct? Why is pvcName set to endpoint? Looks like a bug since you added the param pvcName

Typo. Sorry, trying to deal with multiple issues spawned from the original PR.



@jlewi (Contributor) commented on May 25, 2018

Looks like @pdmack was just responding to the comment but didn't push a fix yet.
/issue-summarization.jsonnet

@pdmack (Member, Author) commented on May 25, 2018

/hold cancel
/lgtm

The PV and PVC are definitely created but I think I'll raise a separate issue regarding the fs mount.

# oc describe pv pvc-51019469-6048-11e8-9263-42010a8000db
Name:            pvc-51019469-6048-11e8-9263-42010a8000db
Labels:          failure-domain.beta.kubernetes.io/region=us-central1
                 failure-domain.beta.kubernetes.io/zone=us-central1-a
Annotations:     kubernetes.io/createdby=gce-pd-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/gce-pd
StorageClass:    standard
Status:          Bound
Claim:           kubeflow/claim-accounts-2egoogle-2ecom-3apmackinn-40redhat-2ecom
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        10Gi
Message:
Source:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     gke-dev-cluster-22fdbd-pvc-51019469-6048-11e8-9263-42010a8000db
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         <none>
# oc describe pvc claim-accounts-2egoogle-2ecom-3apmackinn-40redhat-2ecom
Name:          claim-accounts-2egoogle-2ecom-3apmackinn-40redhat-2ecom
Namespace:     kubeflow
StorageClass:  standard
Status:        Bound
Volume:        pvc-51019469-6048-11e8-9263-42010a8000db
Labels:        app=jupyterhub
               heritage=jupyterhub
               hub.jupyter.org/username=accounts_2Egoogle_2Ecom_3Apmackinn_40redhat_2Ecom
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    []
Capacity:      10Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  4m    persistentvolume-controller  Successfully provisioned volume pvc-51019469-6048-11e8-9263-42010a8000db using kubernetes.io/gce-pd
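The PV and PVC above were dynamically provisioned. For reference, a claim that triggers the same behavior against the cluster's default StorageClass looks roughly like this (the claim name is illustrative; JupyterHub derives the real names from escaped usernames, as seen above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-example        # illustrative, not what JupyterHub generates
  namespace: kubeflow
spec:
  # No storageClassName set, so the cluster's default StorageClass is used
  # ("standard" on GKE, backed by the kubernetes.io/gce-pd provisioner).
  accessModes:
    - ReadWriteOnce          # matches the RWO access mode shown above
  resources:
    requests:
      storage: 10Gi
```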

@k8s-ci-robot (Contributor) commented on May 25, 2018
@pdmack: you cannot LGTM your own PR.

In response to this:

/hold cancel
/lgtm

The PV and PVC are definitely created but I think I'll raise a separate issue regarding the fs mount.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pdmack (Member, Author) commented on May 25, 2018

Adding vendor/ meant that kubeform_spawner.py had to be blacklisted for pylint. I'll port that over to kubeflow/kubeflow.

@jlewi, with your lgtm I'd like to move on.
/cc @texasmichelle

@jlewi (Contributor) commented on May 25, 2018

/lgtm
/approve

Thank you so much!

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jlewi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 6d3e487 into kubeflow:master May 25, 2018
@pdmack pdmack deleted the 854-verify-home-jovyan branch May 29, 2018 16:15