
Fails to push image to registry #6

Closed
dacleyra opened this issue Jul 1, 2019 · 6 comments

dacleyra commented Jul 1, 2019

Tekton is deployed from
https://github.com/openshift/tektoncd-pipeline-operator
operator version 0.4.0-1.

First, I set --skip-tls-verify for the kaniko executor in the build-task build-push-step.
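
In the Task definition that looks roughly like this (a sketch; the parameter references follow the kaniko step shown later in this thread, and the flag is simply appended to the executor args):

      steps:
        - name: build-push-step
          image: gcr.io/kaniko-project/executor
          args:
            - --dockerfile=${inputs.params.pathToDockerFile}
            - --destination=${outputs.resources.docker-image.url}
            - --context=${inputs.params.pathToContext}
            - --skip-tls-verify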

The OpenShift container registry automatically associates each service account with a dockercfg secret for the registry.

The builder service account has push permission:
https://docs.openshift.com/container-platform/3.11/dev_guide/service_accounts.html#default-service-accounts-and-roles

We can add the same role to the appsody-sa service account as well:
oc policy add-role-to-user system:image-builder system:serviceaccount:dacleyra:appsody-sa

Even with these credentials present, kaniko does not make use of them correctly:

NAME                         TYPE                      
appsody-sa-dockercfg-kmw7s   kubernetes.io/dockercfg   


Error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "appsody-docker-registry.default.svc:5000/dacleyra/appsody-hello-world:latest": UNAUTHORIZED: authentication required; [map[Type:repository Class: Name:dacleyra/appsody-hello-world Action:pull] map[Class: Name:dacleyra/appsody-hello-world Action:push Type:repository]]
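
To confirm the credentials really are in the secret, the dockercfg payload can be decoded (plain kubectl; the leading dot in the data key has to be escaped in the jsonpath expression):

    kubectl get secret appsody-sa-dockercfg-kmw7s -o jsonpath='{.data.\.dockercfg}' | base64 --decode

In my case the registry entry and auth token are present; kaniko just never picks them up.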

Completely elevating the privileges of the appsody-sa service account to cluster-admin does not help either:
oc adm policy add-cluster-role-to-user cluster-admin -z appsody-sa -n dacleyra

If I switch the pipeline-run service account to builder, the same error occurs; kaniko is not making use of the credential.
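
For reference, that switch amounts to something like this on the PipelineRun (a sketch in the v1alpha1 Tekton syntax of this release; the pipelineRef name is hypothetical):

    apiVersion: tekton.dev/v1alpha1
    kind: PipelineRun
    metadata:
      name: appsody-manual-pipeline-run
    spec:
      serviceAccount: builder
      pipelineRef:
        name: appsody-build-pipeline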

If I take the builder service account's token and create a new kubernetes.io/dockerconfigjson secret named regsecret:

kubectl create secret docker-registry regsecret \
--docker-server=appsody-docker-registry.default.svc:5000 \
--docker-username=serviceaccount \
--docker-password=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkYWNsZXlyYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLTZ6OXRyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYTRlMTYwMi05OGU1LTExZTktOGJiMC0wMDE2YWMxMDFmNDUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGFjbGV5cmE6YnVpbGRlciJ9.bQl55bWrWoGZpw7QZaBO962bce-X4iEzSJpYAu19GLeOYXL_6sOTX4gCgClh9E41BCqdd1GGQMJeq0fPZRCIXurWcaJ7QxBUj15mLjCaOWJwxu8K-0A0XA0041fEMxXMnBIsy0p8rN2JA0HGTXMeHLEDr3dxMYAwJCNhj2vigyRpk5Wnpk2-dIflNN1rUW--gdfthLHswbc-nXhuUCh3otAVqQ_gvjsFDaXA9K38fNMS9JFxu050r-dhTjEEzQSR2Icdl8s255pca158M58qYvBne-GnJEX0tGqck8NSx0Sltn0B8NZgzhMly1r18hCWYhJrIVrURS55RQZE910VLw \
--docker-email=serviceaccount@example.org

and then force-mount it into the kaniko container, the push is successful:

      volumeMounts:
      - mountPath: /kaniko/.docker/config.json
        name: secret-volume-appsody-sa-dockercfg
        subPath: .dockerconfigjson
        
  volumes:
    - name: secret-volume-appsody-sa-dockercfg
      secret:
        defaultMode: 420
        secretName: regsecret

Trying with Docker Hub results in the same error:

error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "index.docker.io/dacleyra/appsody-hello-world:latest": UNAUTHORIZED: authentication required; [map[Type:repository Class: Name:dacleyra/appsody-hello-world Action:pull] map[Type:repository Class: Name:dacleyra/appsody-hello-world Action:push]]

The secret and service account:

kubectl create secret docker-registry regcred --docker-server=docker.io --docker-username=dacleyra --docker-password=PASSWORD --docker-email=dacleyra@us.ibm.com

kubectl get sa appsody-sa -o=yaml
apiVersion: v1
imagePullSecrets:
- name: appsody-sa-dockercfg-gx9gm
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"appsody-sa","namespace":"dacleyra"}}
  creationTimestamp: 2019-07-01T15:22:38Z
  name: appsody-sa
  namespace: dacleyra
  resourceVersion: "3898486"
  selfLink: /api/v1/namespaces/dacleyra/serviceaccounts/appsody-sa
  uid: 0fce26f4-9c14-11e9-8bb0-0016ac101f45
secrets:
- name: appsody-sa-token-h7hnx
- name: appsody-sa-dockercfg-gx9gm
- name: regcred
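
For completeness: the regcred entry in the secrets list above was attached to the service account with something like the following (assuming the oc CLI; editing the ServiceAccount manifest directly works too):

    oc secrets link appsody-sa regcred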

chilanti commented Jul 1, 2019

The problem seems to be limited to OpenShift. The pipeline works fine as-is on minikube: there, the secrets associated with the appsody-sa service account are available throughout the pipeline steps. On OpenShift, the secrets are not made available to the build step (kaniko).


dacleyra commented Jul 2, 2019

Using the debug image, I can see the credentials in /builder/home/.docker/config.json, but kaniko is not consuming them for some reason. If I cp /builder/home/.docker/config.json /kaniko/.docker/config.json, it works:

image: gcr.io/kaniko-project/executor:debug
command: ['/busybox/sh']
args: ['-c', 'cp /builder/home/.docker/config.json /kaniko/.docker/config.json && /kaniko/executor --dockerfile=${inputs.params.pathToDockerFile} --destination=${outputs.resources.docker-image.url} --context=${inputs.params.pathToContext} --skip-tls-verify']


dacleyra commented Jul 2, 2019

This appears to be a manifestation of
GoogleContainerTools/kaniko#507

To work around the issue, we can set the following in the pipeline task, on the kaniko build-push-step container:

      env:
        - name: DOCKER_CONFIG
          value: /builder/home/.docker
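
For context, here is roughly how that sits in the build-push-step (a sketch; the image and args mirror the debug snippet above, minus the cp workaround, which this env setting replaces):

      - name: build-push-step
        image: gcr.io/kaniko-project/executor
        env:
          - name: DOCKER_CONFIG
            value: /builder/home/.docker
        args: ['--dockerfile=${inputs.params.pathToDockerFile}', '--destination=${outputs.resources.docker-image.url}', '--context=${inputs.params.pathToContext}', '--skip-tls-verify']

With DOCKER_CONFIG pointing at /builder/home/.docker, kaniko reads the same config.json that the cp workaround copied into /kaniko/.docker.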

@neeraj-laad

@chilanti I have merged the PR to update the documentation to reflect this additional configuration.

@dacleyra Can you please confirm that the documentation is now accurate for making this work with OpenShift? Given that this is just an example to show integration with Tekton, I'd like to close this issue and just have the additional steps documented.

When we have proper integration with Tekton and other CI systems, we might consider providing more of an out-of-the-box experience.

I'm closing this issue based on this comment. Please re-open if you believe we can do more on this.

neeraj-laad added this to the Milestone-1 milestone Jul 8, 2019
@nastacio

I am seeing this exact same problem with the latest Kabanero foundation installation:
https://kabanero.io/docs/ref/general/#scripted-kabanero-foundation-setup.html.

See output:
oc logs $(oc get pods -l tekton.dev/pipelineRun=appsody-manual-pipeline-run -n kabanero --output=jsonpath={.items[0].metadata.name}) -n kabanero --all-containers > ~/tmp/tekton-issue-6.log
tekton-issue-6.log

@dacleyra, I also verified that the task run already contained the potential workaround mentioned in
#6 (comment)
appsody-build-task.json.txt

"env": [ { "name": "DOCKER_CONFIG", "value": "/builder/home/.docker" } ],

@nastacio

There are two problems here when trying to run this on minishift, because minishift serves its registry over HTTP instead of HTTPS.

  1. kaniko has a known issue with "docker push" always assuming the "https" protocol; see
    "error checking push permissions when pushing to insecure registry" GoogleContainerTools/kaniko#702.

  2. Once that fix is released, we still need the appsody sample to pass the "--insecure" flag so that kaniko will use "http" instead of "https". Since kaniko cannot downgrade security automatically, we need a way to modify the task definition of "appsody-build-task" for usage with minishift (see the sketch after this list). I am currently having some trouble coaxing OpenShift to modify it on the fly, but that is a much smaller problem.
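
Once a kaniko release includes that fix, the task-side change itself is small; a sketch of the adjusted executor args, mirroring the step shown earlier in this thread, with --insecure added for the plain-HTTP registry case:

    args: ['--dockerfile=${inputs.params.pathToDockerFile}', '--destination=${outputs.resources.docker-image.url}', '--context=${inputs.params.pathToContext}', '--insecure']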
