OLM failed -- Tag latest not found in repository quay.io/coreos/olm #668

Closed · thebithead opened this issue Jan 15, 2019 · 6 comments

@thebithead commented Jan 15, 2019

-bash-4.2# oc version
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
-bash-4.2# 

OLM Ansible Install:

-bash-4.2# sudo ansible-playbook -i ${HOME}/installcentos/openshift-ansible/inventory/hosts.localhost ${HOME}/installcentos/openshift-ansible/playbooks/olm/config.yml 

PLAY [Initialization Checkpoint Start] *****************************************

TASK [Set install initialization 'In Progress'] ********************************
ok: [localhost]

PLAY [Populate config host groups] *********************************************

TASK [Load group name mapping variables] ***************************************
ok: [localhost]

TASK [Evaluate groups - g_nfs_hosts is single host] ****************************
skipping: [localhost]

TASK [Evaluate oo_all_hosts] ***************************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_masters] *****************************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_first_master] ************************************************
ok: [localhost]

TASK [Evaluate oo_new_etcd_to_config] ******************************************

TASK [Evaluate oo_masters_to_config] *******************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_etcd_to_config] **********************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_first_etcd] **************************************************
ok: [localhost]

TASK [Evaluate oo_etcd_hosts_to_upgrade] ***************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_etcd_hosts_to_backup] ****************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_nodes_to_config] *********************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_lb_to_config] ************************************************

TASK [Evaluate oo_nfs_to_config] ***********************************************

TASK [Evaluate oo_glusterfs_to_config] *****************************************

TASK [Evaluate oo_etcd_to_migrate] *********************************************
ok: [localhost] => (item=localhost)
 [WARNING]: Could not match supplied host pattern, ignoring: oo_lb_to_config

 [WARNING]: Could not match supplied host pattern, ignoring: oo_nfs_to_config


PLAY [Ensure that all non-node hosts are accessible] ***************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [Initialize basic host facts] *********************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [openshift_sanitize_inventory : include_tasks] ****************************
included: /root/installcentos/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for localhost

TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] ***
ok: [localhost]

TASK [openshift_sanitize_inventory : debug] ************************************
skipping: [localhost]

TASK [openshift_sanitize_inventory : set_stats] ********************************
skipping: [localhost]

TASK [openshift_sanitize_inventory : set_fact] *********************************
ok: [localhost]

TASK [openshift_sanitize_inventory : Standardize on latest variable names] *****
ok: [localhost]

TASK [openshift_sanitize_inventory : Normalize openshift_release] **************
skipping: [localhost]

TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : include_tasks] ****************************
included: /root/installcentos/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for localhost

TASK [openshift_sanitize_inventory : set_fact] *********************************

TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Check for deprecated prometheus/grafana install] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : ensure provider configuration variables are defined] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure removed web console extension variables are not set] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure that web console port matches API server port] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : At least one master is schedulable] *******
skipping: [localhost]

TASK [Detecting Operating System from ostree_booted] ***************************
ok: [localhost]

TASK [set openshift_deployment_type if unset] **********************************
skipping: [localhost]

TASK [initialize_facts set fact openshift_is_atomic] ***************************
ok: [localhost]

TASK [Determine Atomic Host Docker Version] ************************************
skipping: [localhost]

TASK [assert atomic host docker version is 1.12 or later] **********************
skipping: [localhost]

PLAY [Retrieve existing master configs and validate] ***************************

TASK [openshift_control_plane : stat] ******************************************
ok: [localhost]

TASK [openshift_control_plane : slurp] *****************************************
ok: [localhost]

TASK [openshift_control_plane : set_fact] **************************************
ok: [localhost]

TASK [openshift_control_plane : Check for file paths outside of /etc/origin/master in master's config] ***
ok: [localhost]

TASK [openshift_control_plane : set_fact] **************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Initialize special first-master variables] *******************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Disable web console if required] *****************************************

TASK [set_fact] ****************************************************************
skipping: [localhost]

PLAY [Setup yum repositories for all hosts] ************************************
skipping: no hosts matched

PLAY [Install packages necessary for installer] ********************************

TASK [Gathering Facts] *********************************************************
skipping: [localhost]

TASK [Determine if chrony is installed] ****************************************
skipping: [localhost]

TASK [Install ntp package] *****************************************************
skipping: [localhost]

TASK [Start and enable ntpd/chronyd] *******************************************
skipping: [localhost]

TASK [Ensure openshift-ansible installer package deps are installed] ***********
skipping: [localhost]

PLAY [Initialize cluster facts] ************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [get openshift_current_version] *******************************************
ok: [localhost]

TASK [set_fact openshift_portal_net if present on masters] *********************
ok: [localhost]

TASK [Gather Cluster facts] ****************************************************
changed: [localhost]

TASK [Set fact of no_proxy_internal_hostnames] *********************************
skipping: [localhost]

TASK [Initialize openshift.node.sdn_mtu] ***************************************
ok: [localhost]

TASK [set_fact l_kubelet_node_name] ********************************************
ok: [localhost]

PLAY [Initialize etcd host variables] ******************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Determine openshift_version to configure on first master] ****************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_role : openshift_version] ****************************************

TASK [openshift_version : Use openshift_current_version fact as version to configure if already installed] ***
ok: [localhost]

TASK [openshift_version : Set openshift_version to openshift_release if undefined] ***
skipping: [localhost]

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "msg": "openshift_pkg_version was not defined. Falling back to -3.11.0"
}

TASK [openshift_version : set_fact] ********************************************
ok: [localhost]

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "msg": "openshift_image_tag was not defined. Falling back to v3.11.0"
}

TASK [openshift_version : set_fact] ********************************************
ok: [localhost]

TASK [openshift_version : assert openshift_release in openshift_image_tag] *****
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [openshift_version : assert openshift_release in openshift_pkg_version] ***
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_release": "3.11"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_image_tag": "v3.11.0"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_pkg_version": "-3.11.0*"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_version": "3.11.0"
}

PLAY [Set openshift_version for etcd, node, and master hosts] ******************
skipping: no hosts matched

PLAY [Verify Requirements] *****************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Run variable sanity checks] **********************************************
ok: [localhost]

TASK [Validate openshift_node_groups and openshift_node_group_name] ************
ok: [localhost]

PLAY [Verify Node NetworkManager] **********************************************
skipping: no hosts matched

PLAY [Initialization Checkpoint End] *******************************************

TASK [Set install initialization 'Complete'] ***********************************
ok: [localhost]

PLAY [OLM Install Checkpoint Start] ********************************************

TASK [Set OLM install 'In Progress'] *******************************************
ok: [localhost]

PLAY [Operator Lifecycle Manager] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [olm : include_tasks] *****************************************************
included: /root/installcentos/openshift-ansible/roles/olm/tasks/install.yaml for localhost

TASK [olm : create operator-lifecycle-manager project] *************************
changed: [localhost]

TASK [olm : Make temp directory for manifests] *********************************
ok: [localhost]

TASK [olm : Copy manifests to temp directory] **********************************
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/aggregated-edit.clusterrole.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/aggregated-view.clusterrole.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/catalogsource.crd.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/certified-operators.catalogsource.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/certified-operators.configmap.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/installplan.crd.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/olm-operator.clusterrole.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/olm-operator.rolebinding.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/olm-operator.serviceaccount.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/rh-operators.catalogsource.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/rh-operators.configmap.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/subscription.crd.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/clusterserviceversion.crd.yaml)

TASK [olm : Set olm-operator template] *****************************************
changed: [localhost]

TASK [olm : Set catalog-operator template] *************************************
changed: [localhost]

TASK [olm : Apply olm-operator-serviceaccount ServiceAccount manifest] *********
changed: [localhost]

TASK [olm : Apply operator-lifecycle-manager ClusterRole manifest] *************
changed: [localhost]

TASK [olm : Apply olm-operator-binding-operator-lifecycle-manager ClusterRoleBinding manifest] ***
changed: [localhost]

TASK [olm : Apply clusterserviceversions.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply catalogsources.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply installplans.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply subscriptions.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply rh-operators ConfigMap manifest] *****************************
changed: [localhost]

TASK [olm : Apply rh-operators CatalogSource manifest] *************************
changed: [localhost]

TASK [olm : Apply certified-operators ConfigMap manifest] **********************
changed: [localhost]

TASK [olm : Apply certified-operators CatalogSource manifest] ******************
changed: [localhost]

TASK [olm : Apply olm-operator Deployment manifest] ****************************
changed: [localhost]

TASK [olm : Apply catalog-operator Deployment manifest] ************************
changed: [localhost]

TASK [olm : Apply aggregate-olm-edit ClusterRole manifest] *********************
changed: [localhost]

TASK [olm : Apply aggregate-olm-view ClusterRole manifest] *********************
changed: [localhost]

TASK [olm : include_tasks] *****************************************************
skipping: [localhost]

PLAY [OLM Install Checkpoint End] **********************************************

TASK [Set OLM install 'Complete'] **********************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=80   changed=20   unreachable=0    failed=0   


INSTALLER STATUS ***************************************************************
Initialization  : Complete (0:00:52)
OLM Install     : Complete (0:00:38)
-bash-4.2# 

Checking:

-bash-4.2# oc adm manage-node localhost.localdomain --list-pods | grep operator-lifecycle-manager

Listing matched pods on node: localhost.localdomain

operator-lifecycle-manager          catalog-operator-599574b497-lgbpk              0/1       ImagePullBackOff   0          2m
operator-lifecycle-manager          olm-operator-66bf8f6bbc-7nwfk                  0/1       ErrImagePull       0          2m
-bash-4.2# oc get events -n operator-lifecycle-manager
LAST SEEN   FIRST SEEN   COUNT     NAME                                                 KIND         SUBOBJECT                           TYPE      REASON              SOURCE                           MESSAGE
2m          2m           1         olm-operator-66bf8f6bbc-7nwfk.157a02e4b60d64ec       Pod                                              Normal    Scheduled           default-scheduler                Successfully assigned operator-lifecycle-manager/olm-operator-66bf8f6bbc-7nwfk to localhost.localdomain
2m          2m           1         olm-operator-66bf8f6bbc.157a02e4b5a3339e             ReplicaSet                                       Normal    SuccessfulCreate    replicaset-controller            Created pod: olm-operator-66bf8f6bbc-7nwfk
2m          2m           1         olm-operator.157a02e4b4297a2e                        Deployment                                       Normal    ScalingReplicaSet   deployment-controller            Scaled up replica set olm-operator-66bf8f6bbc to 1
2m          2m           1         catalog-operator-599574b497.157a02e50838831e         ReplicaSet                                       Normal    SuccessfulCreate    replicaset-controller            Created pod: catalog-operator-599574b497-lgbpk
2m          2m           1         catalog-operator-599574b497-lgbpk.157a02e508eaa92e   Pod                                              Normal    Scheduled           default-scheduler                Successfully assigned operator-lifecycle-manager/catalog-operator-599574b497-lgbpk to localhost.localdomain
2m          2m           1         catalog-operator.157a02e507431281                    Deployment                                       Normal    ScalingReplicaSet   deployment-controller            Scaled up replica set catalog-operator-599574b497 to 1
2m          2m           2         catalog-operator-599574b497-lgbpk.157a02e5df26c029   Pod          spec.containers{catalog-operator}   Normal    Pulling             kubelet, localhost.localdomain   pulling image "quay.io/coreos/catalog"
2m          2m           2         olm-operator-66bf8f6bbc-7nwfk.157a02e57da5ccaf       Pod          spec.containers{olm-operator}       Normal    Pulling             kubelet, localhost.localdomain   pulling image "quay.io/coreos/olm"
2m          2m           2         catalog-operator-599574b497-lgbpk.157a02e6b7be0514   Pod          spec.containers{catalog-operator}   Warning   Failed              kubelet, localhost.localdomain   Failed to pull image "quay.io/coreos/catalog": rpc error: code = Unknown desc = Tag latest not found in repository quay.io/coreos/catalog
2m          2m           2         catalog-operator-599574b497-lgbpk.157a02e6b7be9959   Pod          spec.containers{catalog-operator}   Warning   Failed              kubelet, localhost.localdomain   Error: ErrImagePull
2m          2m           2         olm-operator-66bf8f6bbc-7nwfk.157a02e61b128505       Pod          spec.containers{olm-operator}       Warning   Failed              kubelet, localhost.localdomain   Error: ErrImagePull
2m          2m           2         olm-operator-66bf8f6bbc-7nwfk.157a02e61b11521b       Pod          spec.containers{olm-operator}       Warning   Failed              kubelet, localhost.localdomain   Failed to pull image "quay.io/coreos/olm": rpc error: code = Unknown desc = Tag latest not found in repository quay.io/coreos/olm
2m          2m           7         catalog-operator-599574b497-lgbpk.157a02e70afe6319   Pod                                              Normal    SandboxChanged      kubelet, localhost.localdomain   Pod sandbox changed, it will be killed and re-created.
2m          2m           7         olm-operator-66bf8f6bbc-7nwfk.157a02e64f2c0964       Pod                                              Normal    SandboxChanged      kubelet, localhost.localdomain   Pod sandbox changed, it will be killed and re-created.
2m          2m           6         catalog-operator-599574b497-lgbpk.157a02e7b45556c5   Pod          spec.containers{catalog-operator}   Warning   Failed              kubelet, localhost.localdomain   Error: ImagePullBackOff
2m          2m           6         catalog-operator-599574b497-lgbpk.157a02e7b455140c   Pod          spec.containers{catalog-operator}   Normal    BackOff             kubelet, localhost.localdomain   Back-off pulling image "quay.io/coreos/catalog"
2m          2m           6         olm-operator-66bf8f6bbc-7nwfk.157a02e6d52e6f75       Pod          spec.containers{olm-operator}       Normal    BackOff             kubelet, localhost.localdomain   Back-off pulling image "quay.io/coreos/olm"
2m          2m           6         olm-operator-66bf8f6bbc-7nwfk.157a02e6d52edcce       Pod          spec.containers{olm-operator}       Warning   Failed              kubelet, localhost.localdomain   Error: ImagePullBackOff
-bash-4.2# docker pull quay.io/coreos/olm
Using default tag: latest
Trying to pull repository quay.io/coreos/olm ... 
Pulling repository quay.io/coreos/olm
Tag latest not found in repository quay.io/coreos/olm
-bash-4.2# 
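
The deployments appear to reference quay.io/coreos/olm and quay.io/coreos/catalog with no tag, so the pull falls back to :latest, which does not exist in those repositories. One possible workaround, sketched below and not the official fix, is to pin both deployments to a tag that is actually published; <existing-tag> is a placeholder to replace with a real tag from the quay.io tag list (or with an image from the newer quay.io/operator-framework namespace). The container names match the events above.

# Workaround sketch: pin the untagged OLM images to an existing tag.
# <existing-tag> is a placeholder; pick a tag that really exists on quay.io.
OLM_TAG=<existing-tag>
oc -n operator-lifecycle-manager set image deployment/olm-operator \
    olm-operator=quay.io/coreos/olm:${OLM_TAG}
oc -n operator-lifecycle-manager set image deployment/catalog-operator \
    catalog-operator=quay.io/coreos/catalog:${OLM_TAG}
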
@thebithead (Author) commented:

Will someone fix this?

@dejwsz commented Mar 21, 2019

I had the same issue in OKD 3.11:
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.99.4:8443
openshift v3.11.0+92b7c41-132
kubernetes v1.11.0+d4cacc0

@heisenbergye commented:

I have run into the same problem: quay.io/coreos/olm:latest and quay.io/coreos/catalog:latest are not found.

@flickerfly (Contributor) commented:

Is this resolved now that the operators' namespace has moved from coreos to operator-framework?

I just installed 0.12.0 on OpenShift 3.11 with oc create -f crds.yaml and oc create -f olm.yaml, and didn't run into any bad-tag problems.
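
For reference, a minimal sketch of that install path, assuming the crds.yaml and olm.yaml assets are published with the 0.12.0 release on GitHub:

# Sketch: install OLM 0.12.0 from the published release manifests.
# The asset URLs below are assumed from the 0.12.0 release page.
RELEASE_URL=https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0
oc create -f ${RELEASE_URL}/crds.yaml
oc create -f ${RELEASE_URL}/olm.yaml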

@EpiqSty commented Nov 14, 2019

Actually, even with 'operator-framework' it's still the same issue here:
Failed to pull image "quay.io/operator-framework/olm@sha256:91ac5bf350192e063a3c1be994827f67e254997939eda3d471253777cc840c45": rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/operator-framework/olm@sha256:91ac5bf350192e063a3c1be994827f67e254997939eda3d471253777cc840c45": failed to resolve reference "quay.io/operator-framework/olm@sha256:91ac5bf350192e063a3c1be994827f67e254997939eda3d471253777cc840c45": failed to do request: Head https://quay.io/v2/operator-framework/olm/manifests/sha256:91ac5bf350192e063a3c1be994827f67e254997939eda3d471253777cc840c45: dial tcp: lookup quay.io on 8.8.4.4:53: read udp 10.42.1.0:44889->8.8.4

After running:
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/install.sh | bash -s 0.12.0

But actually, it looks like this is not related to the repo; it's something network-related on the cluster nodes (the DNS lookup of quay.io against 8.8.4.4 is failing) ...
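
A quick way to confirm that (just a sketch; the pod name and busybox image are illustrative) is to test DNS resolution both on the node and from inside the cluster:

# On the affected node: can the configured upstream resolver answer for quay.io?
nslookup quay.io 8.8.4.4

# From inside the cluster network (throwaway pod; deleted when it exits):
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup quay.io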

@ecordell (Member) commented:

I think we have fixed up all of the install scripts at this point - please re-open if there are any further issues.
