
Issue with installation using the operator: no action set #106

Closed
donmstewart opened this issue Jul 7, 2022 · 2 comments

@donmstewart

When using operator v0.5.2 or v0.5.3, I am seeing failures because the --action parameter is not being set.

The output in the installer pod is as follows:

porter version
porter v1.0.0-beta.1 (eaf8d0d4)
porter invoke --action= control-plane-installer --reference= --debug --debug-plugins --driver=kubernetes --param=name=kubeconfig --param=...
--action is required
Closing plugins

This was on a fresh k8s cluster with a brand new porter operator installation.

The installation file used was:

apiVersion: porter.sh/v1
kind: Installation
metadata:
  name: control-plane-installer
  namespace: porter-operator-system
spec:
  schemaVersion: 1.0.0
  name: control-plane
  namespace: control-plane
  bundle:
    repository: xxx/control-plane
    version: 0.1.7-3-g6da7721
@carolynvs
Member

I can't reproduce this with either v0.5.3 or a custom installation using the latest on main with a beta.1 porter agent.

Can you check the image used on the porter operator pod and make sure that it's the version you expect? I sometimes run into problems where an older deployment isn't removed completely and the operator pod isn't running the desired version of the code.
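
One quick way to check is to read the image straight off the deployment (a sketch; the namespace, deployment, and container names below match the default install shown later in this thread):

$ kubectl get deployment -n porter-operator-system porter-operator-controller-manager \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="manager")].image}'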

Here's what I have after installing v0.5.3 (which corresponds to ghcr.io/getporter/porter-operator@sha256:93e1c6d7b6dc8074b915ce50d09231ee5ea05bcdf43082c1c91e9b532d353b13 for the operator's image). Note that it ran porter installation apply.
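
To double-check which digest a release tag points at, an image inspect such as the following prints it (assuming releases are tagged v0.5.3 on ghcr.io; adjust the tag if not):

$ docker buildx imagetools inspect ghcr.io/getporter/porter-operator:v0.5.3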

Only really old versions of the operator from the first POC ran porter invoke for an installation CRD. You can also get the new version of the operator to run invoke if you manually create an AgentAction and override the command, but it doesn't sound like you did that.
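
For reference, a manually created AgentAction that overrides the command looks roughly like this (a sketch of the porter.sh/v1 AgentAction format; the metadata and invoke arguments are made up for illustration and should be checked against your operator version):

apiVersion: porter.sh/v1
kind: AgentAction
metadata:
  name: control-plane-invoke
  namespace: porter-operator-system
spec:
  # args replace the default porter command that the agent job runs
  args: ["invoke", "--action=my-action", "--reference=xxx/control-plane:0.1.7-3-g6da7721"]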

$ kubectl get pods -n porter-operator-system
NAME                                                  READY   STATUS    RESTARTS   AGE
mongodb-7648d8b5f8-qchnf                              1/1     Running   0          13m
porter-operator-controller-manager-78cdc769bd-rshcz   2/2     Running   0          13m

$ kubectl describe pod -n porter-operator-system porter-operator-controller-manager-78cdc769bd-rshcz
Name:         porter-operator-controller-manager-78cdc769bd-rshcz
Namespace:    porter-operator-system
Priority:     0
Node:         aks-nodepool1-30454888-vmss000000/10.224.0.4
Start Time:   Thu, 07 Jul 2022 11:10:27 -0500
Labels:       control-plane=controller-manager
              pod-template-hash=78cdc769bd
Annotations:  <none>
Status:       Running
IP:           10.244.0.23
IPs:
  IP:           10.244.0.23
Controlled By:  ReplicaSet/porter-operator-controller-manager-78cdc769bd
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://9724fd8d8f90fd8f908e09ec1e0674cec8c058ae175315cb1b80b56abfaf7490
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 07 Jul 2022 11:10:27 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrnx8 (ro)
  manager:
    Container ID:  containerd://1e5c1dee1038ae2180172efda3ee189e09e8d0436074c57581beadedd67139e3
    Image:         ghcr.io/getporter/porter-operator@sha256:93e1c6d7b6dc8074b915ce50d09231ee5ea05bcdf43082c1c91e9b532d353b13
    Image ID:      ghcr.io/getporter/porter-operator@sha256:93e1c6d7b6dc8074b915ce50d09231ee5ea05bcdf43082c1c91e9b532d353b13
    Port:          <none>
    Host Port:     <none>
    Command:
      /app/manager
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
    State:          Running
      Started:      Thu, 07 Jul 2022 11:10:28 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  30Mi
    Requests:
      cpu:        100m
      memory:     20Mi
    Liveness:     http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrnx8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-hrnx8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14m   default-scheduler  Successfully assigned porter-operator-system/porter-operator-controller-manager-78cdc769bd-rshcz to aks-nodepool1-30454888-vmss000000
  Normal  Pulled     14m   kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0" already present on machine
  Normal  Created    14m   kubelet            Created container kube-rbac-proxy
  Normal  Started    14m   kubelet            Started container kube-rbac-proxy
  Normal  Pulling    14m   kubelet            Pulling image "ghcr.io/getporter/porter-operator@sha256:93e1c6d7b6dc8074b915ce50d09231ee5ea05bcdf43082c1c91e9b532d353b13"
  Normal  Pulled     14m   kubelet            Successfully pulled image "ghcr.io/getporter/porter-operator@sha256:93e1c6d7b6dc8074b915ce50d09231ee5ea05bcdf43082c1c91e9b532d353b13" in 287.508988ms
  Normal  Created    14m   kubelet            Created container manager
  Normal  Started    14m   kubelet            Started container manager

$ k get pods -n test
NAME                                      READY   STATUS      RESTARTS   AGE
hello-llama-q75rz-7sdvp--1-6dwww          0/1     Completed   0          5m44s
install-operator-mellama-qptwg--1-mdp6w   0/1     Completed   0          5m22s

$ kubectl logs -n test hello-llama-q75rz-7sdvp--1-6dwww
porter version
porter v1.0.0-alpha.13 (b699de42)
porter installation apply installation.yaml
Created installation	{"installation": "operator/mellama"}
Triggering because the installation has not completed successfully yet
The installation is out-of-sync, running the install action...
executing install action from hello-llama (installation: operator/mellama)
Hello, Porter
execution completed successfully!

@donmstewart
Author

Closing for now, as I built a new k8s cluster and am no longer seeing this.

Porter and Mixins automation moved this from Inbox to Done Jul 12, 2022