From 62174dc88968104c57f290073720e74792a4c6c1 Mon Sep 17 00:00:00 2001 From: Scott Leggett Date: Tue, 16 Feb 2021 23:06:00 +0800 Subject: [PATCH 01/38] feat: update kubectl-build-deploy-dind to support running rootless --- images/kubectl-build-deploy-dind/Dockerfile | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/images/kubectl-build-deploy-dind/Dockerfile b/images/kubectl-build-deploy-dind/Dockerfile index 7135f04685..4d88e5a2c1 100644 --- a/images/kubectl-build-deploy-dind/Dockerfile +++ b/images/kubectl-build-deploy-dind/Dockerfile @@ -1,12 +1,10 @@ ARG IMAGE_REPO FROM ${IMAGE_REPO:-lagoon}/kubectl -# the kubectl image comes with an HOME=/home which is needed to run as unpriviledged, but kubectl-build-deploy-dind will run as root -RUN rm -rf /root && ln -s /home /root ENV LAGOON=kubectl-build-deploy-dind -RUN mkdir -p /kubectl-build-deploy/git -RUN mkdir -p /kubectl-build-deploy/lagoon +RUN mkdir -p /kubectl-build-deploy/git +RUN mkdir -p /kubectl-build-deploy/lagoon WORKDIR /kubectl-build-deploy/git @@ -20,4 +18,7 @@ COPY helmcharts /kubectl-build-deploy/helmcharts ENV IMAGECACHE_REGISTRY=imagecache.amazeeio.cloud +# enable running unprivileged +RUN fix-permissions /home && fix-permissions /kubectl-build-deploy + CMD ["/kubectl-build-deploy/build-deploy.sh"] From 5c58995e5b7dc0761df1c397f3756f44ee6b402e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kasper=20Garn=C3=A6s?= Date: Thu, 3 Mar 2022 14:30:06 +0100 Subject: [PATCH 02/38] Correct version number for Solr 8 --- docs/docker-images/solr/solr-drupal.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/docker-images/solr/solr-drupal.md b/docs/docker-images/solr/solr-drupal.md index 19998337b4..97ad5721d9 100644 --- a/docs/docker-images/solr/solr-drupal.md +++ b/docs/docker-images/solr/solr-drupal.md @@ -14,4 +14,4 @@ For each Solr version, there is a specific `solr-drupal:` Docker image. 
* 6.6 \(available for compatibility, no longer officially supported\) * 7.7 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/7.7.Dockerfile) (no longer actively supported upstream) - `uselagoon/solr-7.7-drupal` * 7 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/7.Dockerfile) - `uselagoon/solr-7-drupal` -* 7 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/8.Dockerfile) - `uselagoon/solr-8-drupal` +* 8 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/8.Dockerfile) - `uselagoon/solr-8-drupal` From 8cec789fa0884fbb85af865833931e0a3f0214d0 Mon Sep 17 00:00:00 2001 From: Toby Bellwood Date: Tue, 22 Mar 2022 19:51:35 +1100 Subject: [PATCH 03/38] initial update --- docs/installing-lagoon/add-deploy-key.md | 2 +- docs/installing-lagoon/add-project.md | 2 +- docs/installing-lagoon/deploy-project.md | 2 +- docs/installing-lagoon/efs-provisioner.md | 31 ++++--- docs/installing-lagoon/gitlab.md | 26 +++--- docs/installing-lagoon/install-harbor.md | 61 +++++++------ .../install-lagoon-remote.md | 86 ++++++++++--------- docs/installing-lagoon/lagoon-backups.md | 5 +- docs/installing-lagoon/lagoon-cli.md | 4 +- docs/installing-lagoon/lagoon-core.md | 13 ++- docs/installing-lagoon/lagoon-logging.md | 4 +- docs/installing-lagoon/logs-concentrator.md | 2 +- docs/installing-lagoon/opendistro.md | 8 +- docs/installing-lagoon/querying-graphql.md | 6 +- docs/installing-lagoon/requirements.md | 28 +++++- docs/using-lagoon-advanced/ssh.md | 2 +- mkdocs.yml | 10 +-- 17 files changed, 157 insertions(+), 135 deletions(-) diff --git a/docs/installing-lagoon/add-deploy-key.md b/docs/installing-lagoon/add-deploy-key.md index aa14d5a208..e536dd7f4a 100644 --- a/docs/installing-lagoon/add-deploy-key.md +++ b/docs/installing-lagoon/add-deploy-key.md @@ -3,5 +3,5 @@ Lagoon creates a deploy key for each project. You now need to add it as a deploy key in your Git repository. 1. Run the following command to get the deploy key: `lagoon get project-key --project ` -2. Copy the key and save it as a deploy key in your Git repository. +2. Copy the key and save it as a deploy key in your Git repository. 1. Instructions for adding a deploy key to [GitHub](https://docs.github.com/en/developers/overview/managing-deploy-keys#deploy-keys), [GitLab](https://docs.gitlab.com/ee/user/project/deploy\_keys/), [Bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/add-access-keys/). diff --git a/docs/installing-lagoon/add-project.md b/docs/installing-lagoon/add-project.md index 8773637579..d5795535fe 100644 --- a/docs/installing-lagoon/add-project.md +++ b/docs/installing-lagoon/add-project.md @@ -4,5 +4,5 @@ 1. The value for `--openshift` is the ID of your Kubernetes cluster. 2. Your production environment should be the name of the branch you want to have as your production environment. 3. The branches you want to deploy might look like this: “^(main|develop)$” - 4. The name of your project is anything you want - “Company Website,” “example,” etc. + 4. The name of your project is anything you want - “Company Website,” “example,” etc. 2. Go to the Lagoon UI, and you should see your project listed! diff --git a/docs/installing-lagoon/deploy-project.md b/docs/installing-lagoon/deploy-project.md index b65cb21366..ae629a27e4 100644 --- a/docs/installing-lagoon/deploy-project.md +++ b/docs/installing-lagoon/deploy-project.md @@ -2,5 +2,5 @@ 1. 
Run the following command to deploy your project: `lagoon deploy branch -p <projectname> -b <branchname>`
diff --git a/docs/installing-lagoon/install-harbor.md b/docs/installing-lagoon/install-harbor.md index 51355edefa..d50ce583d3 100644 --- a/docs/installing-lagoon/install-harbor.md +++ b/docs/installing-lagoon/install-harbor.md @@ -3,40 +3,39 @@ 1. Add Helm repo: `helm repo add harbor https://helm.goharbor.io` 2. Create the file `harbor-values.yml` inside of your config directory: -```yaml title="harbor-values.yml" -expose: - ingress: - annotations: - kubernetes.io/tls-acme: "true" - hosts: - core: harbor.lagoon.example.com - tls: - enabled: true - certSource: secret - secret: - secretName: harbor-harbor-ingress -externalURL: https://harbor.lagoon.example.com -harborAdminPassword: -chartmuseum: - enabled: false -clair: - enabled: false -notary: - enabled: false -trivy: - enabled: false -jobservice: - jobLogger: stdout -registry: - replicas: 1 + ```yaml title="harbor-values.yml" + expose: + ingress: + annotations: + kubernetes.io/tls-acme: "true" + hosts: + core: harbor.lagoon.example.com + tls: + enabled: true + certSource: secret + secret: + secretName: harbor-harbor-ingress + externalURL: https://harbor.lagoon.example.com + harborAdminPassword: + chartmuseum: + enabled: false + clair: + enabled: false + notary: + enabled: false + trivy: + enabled: false + jobservice: + jobLogger: stdout + registry: + replicas: 1 -``` + ``` -1. Install Harbor:`helm upgrade --install --create-namespace --namespace harbor --wait -f harbor-values.yaml --version=1.5.2 harbor harbor/harbor` - 1. We are currently using Harbor version 1.5.2. A recent update to Harbor breaks the API. +1. Install Harbor:`helm upgrade --install --create-namespace --namespace harbor --wait -f harbor-values.yaml --version=1.5.6 harbor harbor/harbor` + 1. We are currently using Harbor version 1.5.6. A recent update to Harbor (Harbor 2.2) breaks the API. 2. Visit Harbor at the URL you set in `harbor.yml`. 1. Username: admin 2. Password: `kubectl -n harbor get secret harbor-harbor-core -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode` -3. Add the above Harbor credentials to the Lagoon Core `values.yml` that you created at the beginning of the process, as well as `harbor-values.yml`. -4. Upgrade lagoon-core release with the updated `values.yml` file: `helm upgrade --namespace lagoon-core -f values.yaml lagoon-core lagoon/lagoon-core` +3. You will need to add the above Harbor credentials to the Lagoon Remote `values.yml` in the next step, as well as `harbor-values.yml`. diff --git a/docs/installing-lagoon/install-lagoon-remote.md b/docs/installing-lagoon/install-lagoon-remote.md index 9e27b66abb..fbee493071 100644 --- a/docs/installing-lagoon/install-lagoon-remote.md +++ b/docs/installing-lagoon/install-lagoon-remote.md @@ -1,48 +1,50 @@ # Install Lagoon Remote -Now we will install Lagoon Remote into the Lagoon namespace. The [RabbitMQ](../docker-images/rabbitmq.md) service is the broker. +Now we will install Lagoon Remote into the Lagoon namespace. The [RabbitMQ](../docker-images/rabbitmq.md) service is the broker. -1. Create `remote-values.yml` in your config directory as you did the previous two files, and update the values. - 1. rabbitMQPassword: `kubectl -n lagoon-core get secret lagoon-core-broker -o jsonpath="{.data.RABBITMQ_PASSWORD}" | base64 --decode` - 2. rabbitMQHostname: `lagoon-core-broker.lagoon-core.svc.local` - 3. 
taskSSHHost: `kubectl get service lagoon-core-broker-amqp-ext -o custom-columns="NAME:.metadata.name,IP ADDRESS:.status.loadBalancer.ingress[*].ip,HOSTNAME:.status.loadBalancer.ingress[*].hostname"` -2. Run `helm upgrade --install --create-namespace --namespace lagoon -f remote-values.yaml lagoon-remote lagoon/lagoon-remote` +1. Create `remote-values.yml` in your config directory as you did the previous two files, and update the values. + 1. **rabbitMQPassword** `kubectl -n lagoon-core get secret lagoon-core-broker -o jsonpath="{.data.RABBITMQ_PASSWORD}" | base64 --decode` + 2. **rabbitMQHostname** `lagoon-core-broker.lagoon-core.svc.local` + 3. **taskSSHHost** `kubectl get service lagoon-core-broker-amqp-ext -o custom-columns="NAME:.metadata.name,IP ADDRESS:.status.loadBalancer.ingress[*].ip,HOSTNAME:.status.loadBalancer.ingress[*].hostname"` + 4. **harbor-password** `kubectl -n harbor get secret harbor-harbor-core -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode` +2. Add the harbor configuration from the previous step +3. Run `helm upgrade --install --create-namespace --namespace lagoon -f remote-values.yaml lagoon-remote lagoon/lagoon-remote` -```yaml title="remote-values.yml" -lagoon-build-deploy: - enabled: true - extraArgs: - - "--enable-harbor=true" - - "--harbor-url=https://harbor.lagoon.example.com" - - "--harbor-api=https://harbor.lagoon.example.com/api/" - - "--harbor-username=admin" - - "--harbor-password=" - rabbitMQUsername: lagoon - rabbitMQPassword: - rabbitMQHostname: lagoon-core-broker.lagoon-core.svc.cluster.local - lagoonTargetName: - taskSSHHost: - taskSSHPort: "22" - taskAPIHost: "api.lagoon.example.com" -dbaas-operator: - enabled: true + ```yaml title="lagoon-remote-values.yml" + lagoon-build-deploy: + enabled: true + extraArgs: + - "--enable-harbor=true" + - "--harbor-url=https://harbor.lagoon.example.com" + - "--harbor-api=https://harbor.lagoon.example.com/api/" + - "--harbor-username=admin" + - "--harbor-password=" + rabbitMQUsername: lagoon + rabbitMQPassword: + rabbitMQHostname: lagoon-core-broker.lagoon-core.svc.cluster.local + lagoonTargetName: + taskSSHHost: + taskSSHPort: "22" + taskAPIHost: "api.lagoon.example.com" + dbaas-operator: + enabled: true - mariadbProviders: - production: - environment: production - hostname: 172.17.0.1.nip.io - readReplicaHostnames: - - 172.17.0.1.nip.io - password: password - port: '3306' - user: root + mariadbProviders: + production: + environment: production + hostname: 172.17.0.1.nip.io + readReplicaHostnames: + - 172.17.0.1.nip.io + password: password + port: '3306' + user: root - development: - environment: development - hostname: 172.17.0.1.nip.io - readReplicaHostnames: - - 172.17.0.1.nip.io - password: password - port: '3306' - user: root -``` + development: + environment: development + hostname: 172.17.0.1.nip.io + readReplicaHostnames: + - 172.17.0.1.nip.io + password: password + port: '3306' + user: root + ``` diff --git a/docs/installing-lagoon/lagoon-backups.md b/docs/installing-lagoon/lagoon-backups.md index 45e76d0579..c5e75c33f2 100644 --- a/docs/installing-lagoon/lagoon-backups.md +++ b/docs/installing-lagoon/lagoon-backups.md @@ -2,16 +2,17 @@ Lagoon uses the k8up backup operator: [https://k8up.io](https://k8up.io). Lagoon isn’t tightly integrated with k8up, it’s more that Lagoon can create its resources in a way that k8up can automatically discover and backup. -1. 
Create new AWS User with policies: [https://gist.github.com/Schnitzel/1ad9761042c388a523029a2b4ff9ed75](https://gist.github.com/Schnitzel/1ad9761042c388a523029a2b4ff9ed75) +1. Create new AWS User with policies: [https://gist.github.com/Schnitzel/1ad9761042c388a523029a2b4ff9ed75](https://gist.github.com/Schnitzel/1ad9761042c388a523029a2b4ff9ed75) 2. Create `k8up-values.yaml`.\ See gist example: [https://gist.github.com/Schnitzel/5b87a9e9ee7c59b2bc6b29f0f0839d56](https://gist.github.com/Schnitzel/5b87a9e9ee7c59b2bc6b29f0f0839d56) -3. Install k8up: +3. Install k8up: `helm repo add appuio https://charts.appuio.ch` `kubectl apply -f https://github.com/vshn/k8up/releases/download/v1.1.0/k8up-crd.yaml` `helm upgrade --install --create-namespace --namespace k8up -f k8up-values.yaml k8up appuio/k8up` + 4. Update `lagoon-core-values.yaml`: ```yaml title="lagoon-core-values.yaml" diff --git a/docs/installing-lagoon/lagoon-cli.md b/docs/installing-lagoon/lagoon-cli.md index e952b8fe9d..ec3521b2c6 100644 --- a/docs/installing-lagoon/lagoon-cli.md +++ b/docs/installing-lagoon/lagoon-cli.md @@ -10,9 +10,9 @@ 1. In the Lagoon UI (the URL is in `values.yml` if you forget), go to **Settings**. 2. Add your public SSH key. 3. You need to set the default Lagoon to _your_ Lagoon so that it doesn’t try to use the amazee.io defaults: - 1. `lagoon config default --lagoon ` + 1. `lagoon config default --lagoon ` 4. Now run `lagoon login` - 1. How the system works: + 1. How the system works: 1. Lagoon talks to SSH and authenticates against your public/private key pair, and gets a token for your username. 2. Verify via `lagoon whoami` that you are logged in. diff --git a/docs/installing-lagoon/lagoon-core.md b/docs/installing-lagoon/lagoon-core.md index 8e12928278..e9748d7ab5 100644 --- a/docs/installing-lagoon/lagoon-core.md +++ b/docs/installing-lagoon/lagoon-core.md @@ -1,13 +1,13 @@ # Install Lagoon Core -1. Add Lagoon Charts repository to your Helm: +1. Add Lagoon Charts repository to your Helm: 1. `helm repo add lagoon https://uselagoon.github.io/lagoon-charts/` -2. Create a directory for the configuration files we will create, and make sure that it’s version controlled. Ensure that you reference this path in commands referencing your `values.yml` files. - 1. Create `values.yml` in the directory you’ve just created. Example: [https://gist.github.com/Schnitzel/58e390bf1b6f93117a37a3eb02e8bae3](https://gist.github.com/Schnitzel/58e390bf1b6f93117a37a3eb02e8bae3) +2. Create a directory for the configuration files we will create, and make sure that it’s version controlled. Ensure that you reference this path in commands referencing your `values.yml` files. + 1. Create `values.yml` in the directory you’ve just created. Example: [https://gist.github.com/Schnitzel/58e390bf1b6f93117a37a3eb02e8bae3](https://gist.github.com/Schnitzel/58e390bf1b6f93117a37a3eb02e8bae3) 2. Update the endpoint URLs (change them from api.lagoon.example.com to your values). 3. Now run `helm upgrade --install` command, pointing to `values.yml`, like so:\ ****`helm upgrade --install --create-namespace --namespace lagoon-core -f values.yml lagoon-core lagoon/lagoon-core` -4. Lagoon Core is now installed! :tada: +4. Lagoon Core is now installed! :tada: 5. Visit the Keycloak dashboard at the URL you defined in the `values.yml` for Keycloak. 1. Click Administration Console 2. Username: `admin` @@ -28,7 +28,4 @@ 3. 
Retrieve the secret: `kubectl -n lagoon-core get secret lagoon-core-keycloak -o jsonpath="{.data.KEYCLOAK_LAGOON_ADMIN_PASSWORD}" | base64 --decode`
You'll need to create a handful of JSON files - put these in the same directory as the values files you've been creating throughout this installation process.
Run `lagoon login` and then cat the `.lagoon.yml` file to get the new token, and replace the old token in the HTTP header with the new one. diff --git a/docs/installing-lagoon/requirements.md b/docs/installing-lagoon/requirements.md index 845408fd80..6fd6ca386c 100644 --- a/docs/installing-lagoon/requirements.md +++ b/docs/installing-lagoon/requirements.md @@ -1,6 +1,6 @@ # Installing Lagoon Into Existing Kubernetes Cluster -## Requirements +## Requirements * Kubernetes 1.19+ (Kubernetes 1.22+ is not yet supported, see https://github.com/uselagoon/lagoon/issues/2816 for progress) * Familiarity with [Helm](https://helm.sh) and [Helm Charts](https://helm.sh/docs/topics/charts/#helm), and [kubectl](https://kubernetes.io/docs/tasks/tools/). @@ -9,4 +9,28 @@ * RWO storage !!! Note "Note:" - We acknowledge that this is a lot of steps, and our roadmap for the immediate future includes reducing the number of steps in this process. + We acknowledge that this is a lot of steps, and our roadmap for the immediate future includes reducing the number of steps in this process. + +## Specific requirements (as of March 2022) + +### Kubernetes +Lagoon supports Kubernetes versions 1.19, 1.20 and 1.21. Support for 1.22 is underway, and mostly complete. There are a number of relevant API deprecations in 1.22 that Lagoon utilised across a number of dependencies. + +### ingress-nginx +Lagoon is currently only for a single ingress-nginx controller, and therefore defining an IngressClass has not been necessary. + +This means that Lagoon currently works best with version 3 of the ingress-nginx helm chart - latest release [3.40.0](https://github.com/kubernetes/ingress-nginx/releases/tag/helm-chart-3.40.0) + +In order to use a version of the helm chart (>=4) that supports Ingress v1 (i.e for Kubernetes 1.22), the following configuration should be used,as per [the ingress-nginx docs](https://kubernetes.github.io/ingress-nginx/#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now) + +- nginx-ingress should be configured as the default controller - set `.controller.ingressClassResource.default: true` in helm values +- nginx-ingress should be configured to watch ingresses without IngressClass set - set `.controller.watchIngressWithoutClass: true` in helm values + +This will configure the controller to create any new ingresses with itself as the IngressClass, and also to handle any existing ingresses without an IngressClass set + +Other configurations may be possible, but have not been tested + +### Harbor +Only Harbor <2.2 is currently supported - the method of retrieving robot accounts was changed in 2.2, and we are working on a fix + +This means you should install Harbor [2.1.6](https://github.com/goharbor/harbor/releases/tag/v2.1.6) with helm chart [1.5.6](https://github.com/goharbor/harbor-helm/releases/tag/1.5.6) diff --git a/docs/using-lagoon-advanced/ssh.md b/docs/using-lagoon-advanced/ssh.md index 3e048a8914..efc376f5d5 100644 --- a/docs/using-lagoon-advanced/ssh.md +++ b/docs/using-lagoon-advanced/ssh.md @@ -38,7 +38,7 @@ SSH key support in Windows has improved markedly as of recently, and is now supp ### Via the UI -You can upload your SSH key(s) through the UI. Login as you normally would. +You can upload your SSH key(s) through the UI. Login as you normally would. 
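    As a quick sketch of that refresh flow (assuming the CLI config sits at the default `~/.lagoon.yml` path and stores the token under a `token:` key):

    ```bash
    # re-authenticate to mint a fresh short-lived token
    lagoon login
    # then read the new token back out of the CLI config
    grep 'token:' ~/.lagoon.yml
    ```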
In the upper right-hand corner, click on **Settings**:
await handle(harborScanningCompleted, webhook, `${webhooktype}:${event}`, channelWrapperWebhooks); + } else { + console.log("NOTE: Harbor scan recieved but not processed because Harbor/Problems integration is disabled"); + } + break case 'harbor:scanningresultfetched' : - await handle(processHarborVulnerabilityList, webhook, `${webhooktype}:${event}`, channelWrapperWebhooks); + if(enableHarborIntegration == true) { + console.log("NOTE: Harbor integration for Problems is deprecated and will be removed from Lagoon in an upcoming release"); + await handle(processHarborVulnerabilityList, webhook, `${webhooktype}:${event}`, channelWrapperWebhooks); + } else { + console.log("NOTE: Harbor scan recieved but not processed because Harbor/Problems integration is disabled"); + } break; case 'drutiny:resultset' : await handle(processDrutinyResultset, webhook, `${webhooktype}:${event}`, channelWrapperWebhooks); From aa352ca025c41826aee0d83c353e5152eed58de5 Mon Sep 17 00:00:00 2001 From: Brandon Williams Date: Tue, 22 Mar 2022 19:39:56 -0500 Subject: [PATCH 05/38] Add example for pinning Node.js version in `php-cli` images --- docs/docker-images/php-cli/README.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/docker-images/php-cli/README.md b/docs/docker-images/php-cli/README.md index ce2843bac2..cbdc0c2364 100644 --- a/docs/docker-images/php-cli/README.md +++ b/docs/docker-images/php-cli/README.md @@ -35,7 +35,7 @@ This image is prepared to be used on Lagoon. There are therefore some things alr The included cli tools are: * [`composer` version 1.9.0](https://getcomposer.org/) \(changeable via `COMPOSER_VERSION` and `COMPOSER_HASH_SHA256`\) -* [`node.js` verison 12](https://nodejs.org/en/) \(as of Jan 2020\) +* [`node.js` verison 17](https://nodejs.org/en/) \(as of Mar 2022\) * [`npm`](https://www.npmjs.com/) * [`yarn`](https://yarnpkg.com/lang/en/) * `mariadb-client` @@ -43,7 +43,12 @@ The included cli tools are: ### Change Node.js Version -By default this image ships with the current Node.js Version \(v12 as of Jan 2020\). If you need another version you can remove the current version and install the one of your choice. +By default this image ships with the `nodejs-current` package \(v17 as of Mar 2022\). If you need another version you can remove the current version and install the one of your choice. 
For example, to install Node.js 16, modify your Dockerfile to include:
environmentIdentifier := fmt.Sprintf("%v", incoming.Meta.EnvironmentID) if incoming.Meta.Environment != "" { environmentIdentifier = fmt.Sprintf("%v:%v", incoming.Meta.Environment, incoming.Meta.EnvironmentID) @@ -190,6 +189,7 @@ func processingIncomingMessageQueueFactory(h *Messaging) func(mq.Message) { environmentWorkflows, err := lagoonclient.GetEnvironmentWorkflowsByEnvironmentId(context.TODO(), client, int(*incoming.Meta.EnvironmentID)) if err != nil { log.Println(err) + message.Ack(false) return } for _, wf := range environmentWorkflows { From dfe4a0e60f2d51fcaf4a10536814a8e46b800183 Mon Sep 17 00:00:00 2001 From: Scott Leggett Date: Wed, 23 Mar 2022 16:29:40 +0800 Subject: [PATCH 08/38] fix: fix formatting and add info to logging documentation --- docs/logging/logging.md | 44 ++++++++++++++++++++++------------------- 1 file changed, 24 insertions(+), 20 deletions(-) diff --git a/docs/logging/logging.md b/docs/logging/logging.md index c3f67c9324..171762b44b 100644 --- a/docs/logging/logging.md +++ b/docs/logging/logging.md @@ -3,29 +3,33 @@ Lagoon provides access to the following logs via Kibana: * Logs from the Kubernetes Routers, including every single HTTP and HTTPS request with: - * Source IP - * URL - * Path - * HTTP verb - * Cookies - * Headers - * User agent - * Project - * Container name - * Response size - * Response time + * Source IP + * URL + * Path + * HTTP verb + * Cookies + * Headers + * User agent + * Project + * Container name + * Response size + * Response time * Logs from containers: - * `stdout` and `stderr` messages - * Container name - * Project + * `stdout` and `stderr` messages + * Container name + * Project * Lagoon logs: - * Webhooks parsing - * Build logs - * Build errors - * Any other Lagoon related logs + * Webhooks parsing + * Build logs + * Build errors + * Any other Lagoon related logs * Application logs: - * Any logs sent by the running application - * For Drupal: install the [Lagoon Logs](https://www.drupal.org/project/lagoon_logs) module in order to receive logs from Drupal Watchdog. + * For Drupal: install the [Lagoon Logs](https://www.drupal.org/project/lagoon_logs) module in order to receive logs from Drupal Watchdog. + * For Laravel: install the [Lagoon Logs for Laravel](https://github.com/amazeeio/laravel_lagoon_logs) package. + * For other workloads: + * Send logs to `udp://application-logs.lagoon.svc:5140` + * Ensure logs are structured as JSON encoded objects. + * Ensure the `type` field contains the name of the Kubernetes namespace (`$LAGOON_PROJECT-$LAGOON_ENVIRONMENT`). To access the logs, please check with your Lagoon administrator to get the URL for the Kibana route \(for amazee.io this is [https://logs.amazeeio.cloud/](https://logs.amazeeio.cloud/)\). From f253b8a4df9017f2b38c08562df314ca5c8430a7 Mon Sep 17 00:00:00 2001 From: Scott Leggett Date: Thu, 24 Mar 2022 10:19:33 +0800 Subject: [PATCH 09/38] chore: appease the markdown link checker --- docs/contributing-to-lagoon/documentation.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/contributing-to-lagoon/documentation.md b/docs/contributing-to-lagoon/documentation.md index 5ad71a4242..295ffdd409 100644 --- a/docs/contributing-to-lagoon/documentation.md +++ b/docs/contributing-to-lagoon/documentation.md @@ -14,6 +14,7 @@ From the root of this repo, just run: docker run --rm -it -p 127.0.0.1:8000:8000 -v ${PWD}:/docs squidfunk/mkdocs-material ``` + This will start a development server on [http://127.0.0.1:8000](http://127.0.0.1:8000), configured to live-reload on any updates. 
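	// default to the numeric environment ID here; the check just below swaps in
	// the "name:ID" form whenever the message also carries an environment name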
The Docker image contains all the necessary extensions. From f22ce918ee7fc942342592ac422b40aa65ab040a Mon Sep 17 00:00:00 2001 From: Michael Schmid Date: Thu, 24 Mar 2022 14:24:44 -0400 Subject: [PATCH 10/38] add rootless rsync commands to drush rsync task With rootless systems running a normal `drush rsync` command causes some issues as rootless systems (like openshift or k8s rootless) do not allow the executing user to do some of the standard things that rsync wants to do (change owner and group to what the source had). Therefore we tell `drush rsync` to run `rsync` with some additional parameters that will prevent it to fail --- services/api/src/resources/task/resolvers.ts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/services/api/src/resources/task/resolvers.ts b/services/api/src/resources/task/resolvers.ts index bf2d32c7e1..7ebdf870c2 100644 --- a/services/api/src/resources/task/resolvers.ts +++ b/services/api/src/resources/task/resolvers.ts @@ -668,7 +668,7 @@ export const taskDrushRsyncFiles: ResolverFn = async ( const command = `LAGOON_ALIAS_PREFIX="" && \ if [[ ! "" = "$(drush | grep 'lagoon:aliases')" ]]; then LAGOON_ALIAS_PREFIX="lagoon.\${LAGOON_PROJECT}-"; fi && \ - drush -y rsync @\${LAGOON_ALIAS_PREFIX}${sourceEnvironment.name}:%files @self:%files`; + drush -y rsync @\${LAGOON_ALIAS_PREFIX}${sourceEnvironment.name}:%files @self:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX`; const taskData = await Helpers(sqlClientPool).addTask({ name: `Sync files ${sourceEnvironment.name} -> ${destinationEnvironment.name}`, From 28d98e74ff727c8f3a214c335baa844a0e16ee74 Mon Sep 17 00:00:00 2001 From: Blaize Kaye Date: Mon, 28 Mar 2022 15:50:14 +1300 Subject: [PATCH 11/38] Adds deprecation messages and disable feature flag for Harbor/Trivy integration --- services/webhooks2tasks/Dockerfile | 2 ++ services/webhooks2tasks/src/webhooks/problems.ts | 8 ++++---- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/services/webhooks2tasks/Dockerfile b/services/webhooks2tasks/Dockerfile index 443ae48e32..5c509243af 100644 --- a/services/webhooks2tasks/Dockerfile +++ b/services/webhooks2tasks/Dockerfile @@ -2,6 +2,7 @@ ARG LAGOON_GIT_BRANCH ARG IMAGE_REPO ARG UPSTREAM_REPO ARG UPSTREAM_TAG +ARG ENABLE_DEPRECATED_TRIVY_INTEGRATION=false # STAGE 1: Loading Image lagoon-node-packages-builder which contains node packages shared by all Node Services FROM ${IMAGE_REPO:-lagoon}/yarn-workspace-builder as yarn-workspace-builder @@ -10,6 +11,7 @@ FROM ${UPSTREAM_REPO:-uselagoon}/node-16:${UPSTREAM_TAG:-latest} ARG LAGOON_VERSION ENV LAGOON_VERSION=$LAGOON_VERSION +ENV ENABLE_DEPRECATED_TRIVY_INTEGRATION=$ENABLE_DEPRECATED_TRIVY_INTEGRATION # Copying generated node_modules from the first stage COPY --from=yarn-workspace-builder /app /app diff --git a/services/webhooks2tasks/src/webhooks/problems.ts b/services/webhooks2tasks/src/webhooks/problems.ts index 4d7fae758a..348e30f49a 100644 --- a/services/webhooks2tasks/src/webhooks/problems.ts +++ b/services/webhooks2tasks/src/webhooks/problems.ts @@ -15,10 +15,10 @@ import { // NOTE: Here we are going through the process of deprecating the Trivy integration const enableHarborIntegration = (() => { if(process.env.ENABLE_DEPRECATED_TRIVY_INTEGRATION && process.env.ENABLE_DEPRECATED_TRIVY_INTEGRATION == "true") { - console.log("enabling trivy"); + console.log("ENABLE_DEPRECATED_TRIVY_INTEGRATION is 'true' -- enabling Harbor/Trivy"); return true; } - console.log("Trivy is not enabled"); + 
console.log("ENABLE_DEPRECATED_TRIVY_INTEGRATION is not 'true' -- Harbor/Trivy integration is not enabled"); return false; })(); @@ -38,7 +38,7 @@ export async function processProblems( console.log("NOTE: Harbor integration for Problems is deprecated and will be removed from Lagoon in an upcoming release"); await handle(harborScanningCompleted, webhook, `${webhooktype}:${event}`, channelWrapperWebhooks); } else { - console.log("NOTE: Harbor scan recieved but not processed because Harbor/Problems integration is disabled"); + console.log("NOTE: Harbor scan recieved but not processed because Harbor/Trivy integration is disabled"); } break @@ -47,7 +47,7 @@ export async function processProblems( console.log("NOTE: Harbor integration for Problems is deprecated and will be removed from Lagoon in an upcoming release"); await handle(processHarborVulnerabilityList, webhook, `${webhooktype}:${event}`, channelWrapperWebhooks); } else { - console.log("NOTE: Harbor scan recieved but not processed because Harbor/Problems integration is disabled"); + console.log("NOTE: Harbor scan recieved but not processed because Harbor/Trivy integration is disabled"); } break; case 'drutiny:resultset' : From a70cdec618e29942fbfae9f5e41572bbc3d589a1 Mon Sep 17 00:00:00 2001 From: Alanna Burke Date: Mon, 28 Mar 2022 15:27:10 -0400 Subject: [PATCH 12/38] Update README.md --- README.md | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index aaeeb8dbcf..923756c780 100644 --- a/README.md +++ b/README.md @@ -4,6 +4,18 @@ # Lagoon - the developer-focused application delivery platform for Kubernetes +## Table of Contents +1. Project Description +2. Usage +3. Architecture +4. Testing +5. Other Lagoon Components +6. Contribution +7. History +8. Connect + +## Project Description + Lagoon solves what developers are dreaming about: A system that allows developers to locally develop their code and their services with Docker and run the exact same system in production. The same container images, the same service configurations and the same code. > Lagoon is an application delivery **platform**. Its primary focus is as a cloud-native tool for the deployment, management, security and operation of many applications. Lagoon greatly reduces the requirement on developers of those applications to have cloud-native experience or knowledge. @@ -12,7 +24,8 @@ Lagoon has been designed to handle workloads that have been traditionally more c Lagoon is fully open-source, built on open-source tools, built collaboratively with our users. -## Installing Lagoon +## Usage +### Installation *Note that is not necessary to install Lagoon on to your local machine if you are looking to maintain websites hosted on Lagoon.* @@ -25,7 +38,7 @@ For more information on developing or contributing to Lagoon, head to https://do For more information on installing and administering Lagoon, head to https://docs.lagoon.sh/administering-lagoon -## Lagoon architecture +### Architecture Lagoon comprises two main components: **Lagoon Core** and **Lagoon Remote**. It's also built on several other third-party services, Operators and Controllers. In a full production setting, we recommend installing Lagoon Core and Remote into different Kubernetes Clusters. A single Lagoon Core installation is capable of serving multiple Remotes, but they can also be installed into the same cluster if preferred. @@ -35,7 +48,7 @@ Lagoon services are mostly built in Node.js. 
More recent development occurs in Go.
@@ -36,6 +37,10 @@ kubectl rollout --insecure-skip-tls-verify -n ${NAMESPACE} status deployment ${S if [[ $ret -ne 0 ]]; then # stop all running stream logs + echo "##############################################" + echo "STEP Applying Deployments: Failed at $(date +"%Y-%m-%d %H:%M:%S") ($(date +"%Z"))" + echo "The information below could be useful in helping debug what went wrong" + echo "##############################################" pkill -P $STREAM_LOGS_PID || true # shows all logs we collected for the new containers @@ -45,6 +50,7 @@ if [[ $ret -ne 0 ]]; then echo "Rollout for ${SERVICE_NAME} failed, tried to gather some startup logs of the containers, hope this helps debugging:" find /tmp/kubectl-build-deploy/logs/container/${SERVICE_NAME}/ -type f -print0 2>/dev/null | xargs -0 -I % sh -c 'echo ======== % =========; cat %; echo' fi + echo "##############################################" # dump the pods of this service and the status/condition message from kubernetes into a table for debugging # Example: # @@ -60,3 +66,4 @@ fi # stop all running stream logs pkill -P $STREAM_LOGS_PID || true +set -x From 15866fabf9e9f9c32875784b6438269cbb8931a5 Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Tue, 29 Mar 2022 12:18:29 +1100 Subject: [PATCH 14/38] refactor: capture errors for deploytargets --- node-packages/commons/src/tasks.ts | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/node-packages/commons/src/tasks.ts b/node-packages/commons/src/tasks.ts index 966c626b1f..d360d553c1 100644 --- a/node-packages/commons/src/tasks.ts +++ b/node-packages/commons/src/tasks.ts @@ -729,7 +729,12 @@ export const createDeployTask = async function(deployData: any) { project, deployData } - return deployTargetBranches(lagoonData) + try { + let result = deployTargetBranches(lagoonData) + return result + } catch (error) { + throw error + } } else if (type === 'pullrequest') { // use deployTargetPullrequest function to handle let lagoonData = { @@ -740,7 +745,12 @@ export const createDeployTask = async function(deployData: any) { pullrequestTitle, deployData } - return deployTargetPullrequest(lagoonData) + try { + let result = deployTargetPullrequest(lagoonData) + return result + } catch (error) { + throw error + } } break; default: From 86705e475a17882eb7ac431c1d2fcf796e0b3ecf Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Tue, 29 Mar 2022 12:38:00 +1100 Subject: [PATCH 15/38] refactor: more errors for deploytargets --- node-packages/commons/src/deploy-tasks.ts | 25 ++++++++--------------- 1 file changed, 8 insertions(+), 17 deletions(-) diff --git a/node-packages/commons/src/deploy-tasks.ts b/node-packages/commons/src/deploy-tasks.ts index 406c2e22df..a605e4b990 100644 --- a/node-packages/commons/src/deploy-tasks.ts +++ b/node-packages/commons/src/deploy-tasks.ts @@ -75,7 +75,7 @@ const deployBranch = async function(data: any) { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, branch deployments disabled` ); - return false + throw new NoNeedToDeployBranch('Branch deployments disabled'); default: { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, regex ${branchesRegex}, testing if it matches` @@ -101,7 +101,9 @@ const deployBranch = async function(data: any) { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, regex ${branchesRegex} did not match branchname, not deploying` ); - return false + throw new NoNeedToDeployBranch( + `configured regex '${branchesRegex}' does not match branchname '${branchName}'` + ); 
} } } @@ -154,10 +156,7 @@ const deployPullrequest = async function(data: any) { ); } case 'false': - logger.debug( - `projectName: ${projectName}, pullrequest: ${branchName}, pullrequest deployments disabled` - ); - return false + throw new NoNeedToDeployBranch('PullRequest deployments disabled'); default: { logger.debug( `projectName: ${projectName}, pullrequest: ${branchName}, regex ${pullrequestRegex}, testing if it matches PR title '${pullrequestTitle}'` @@ -183,7 +182,9 @@ const deployPullrequest = async function(data: any) { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, regex ${pullrequestRegex} did not match PR title, not deploying` ); - return false + throw new NoNeedToDeployBranch( + `configured regex '${pullrequestRegex}' does not match PR Title '${pullrequestTitle}'` + ); } } } @@ -264,11 +265,6 @@ export const deployTargetBranches = async function(data: any) { return deploy } } - if (deploy == false) { - throw new NoNeedToDeployBranch( - `configured regex for all deploytargets does not match branchname '${branchName}'` - ); - } } else { // deploy the project using the projects default target let deployTarget @@ -356,11 +352,6 @@ export const deployTargetPullrequest = async function(data: any) { return deploy } } - if (deploy == false) { - throw new NoNeedToDeployBranch( - `configured regex for all deploytargets does not match pullrequest '${branchName}'` - ); - } } else { // deploy the project using the projects default target let deployTarget From 524526d74bd05ee797ad10395e15f86f7c676c5f Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Tue, 29 Mar 2022 12:40:04 +1100 Subject: [PATCH 16/38] refactor: more errors for deploytargets --- node-packages/commons/src/deploy-tasks.ts | 3 +++ 1 file changed, 3 insertions(+) diff --git a/node-packages/commons/src/deploy-tasks.ts b/node-packages/commons/src/deploy-tasks.ts index a605e4b990..e2a11e1eff 100644 --- a/node-packages/commons/src/deploy-tasks.ts +++ b/node-packages/commons/src/deploy-tasks.ts @@ -156,6 +156,9 @@ const deployPullrequest = async function(data: any) { ); } case 'false': + logger.debug( + `projectName: ${projectName}, pullrequest: ${branchName}, pullrequest deployments disabled` + ); throw new NoNeedToDeployBranch('PullRequest deployments disabled'); default: { logger.debug( From d0f803771efe36be01db4983d02f0fd4393a3848 Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Tue, 29 Mar 2022 14:29:34 +1100 Subject: [PATCH 17/38] refactor: reset some things and throw errors in different spots --- node-packages/commons/src/deploy-tasks.ts | 54 ++++++++++++++++------- 1 file changed, 38 insertions(+), 16 deletions(-) diff --git a/node-packages/commons/src/deploy-tasks.ts b/node-packages/commons/src/deploy-tasks.ts index e2a11e1eff..de9425404a 100644 --- a/node-packages/commons/src/deploy-tasks.ts +++ b/node-packages/commons/src/deploy-tasks.ts @@ -75,7 +75,7 @@ const deployBranch = async function(data: any) { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, branch deployments disabled` ); - throw new NoNeedToDeployBranch('Branch deployments disabled'); + return false default: { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, regex ${branchesRegex}, testing if it matches` @@ -101,9 +101,7 @@ const deployBranch = async function(data: any) { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, regex ${branchesRegex} did not match branchname, not deploying` ); - throw new NoNeedToDeployBranch( - `configured regex '${branchesRegex}' does not 
match branchname '${branchName}'` - ); + return false } } } @@ -159,7 +157,7 @@ const deployPullrequest = async function(data: any) { logger.debug( `projectName: ${projectName}, pullrequest: ${branchName}, pullrequest deployments disabled` ); - throw new NoNeedToDeployBranch('PullRequest deployments disabled'); + return false default: { logger.debug( `projectName: ${projectName}, pullrequest: ${branchName}, regex ${pullrequestRegex}, testing if it matches PR title '${pullrequestTitle}'` @@ -185,9 +183,7 @@ const deployPullrequest = async function(data: any) { logger.debug( `projectName: ${projectName}, branchName: ${branchName}, regex ${pullrequestRegex} did not match PR title, not deploying` ); - throw new NoNeedToDeployBranch( - `configured regex '${pullrequestRegex}' does not match PR Title '${pullrequestTitle}'` - ); + return false } } } @@ -243,7 +239,7 @@ export const deployTargetBranches = async function(data: any) { if (deployTarget) { data.deployTarget = deployTarget let deploy = await deployBranch(data) - // EXISTING DEPLOY VIA ENVIRONMENT OPENSHIFT + // EXISTING DEPLOY VIA ENVIRONMENT KUBERNETES return deploy } @@ -261,13 +257,18 @@ export const deployTargetBranches = async function(data: any) { openshift: deployTargetConfigs.targets[i].deployTarget } data.deployTarget = deployTarget - // NEW DEPLOY VIA DEPLOYTARGETCONFIG OPENSHIFT + // NEW DEPLOY VIA DEPLOYTARGETCONFIG KUBERNETES deploy = await deployBranch(data) if (deploy) { // if the deploy is successful, then return return deploy } } + if (deploy == false) { + throw new NoNeedToDeployBranch( + `configured regex for all deploytargets does not match branchname '${branchName}'` + ); + } } else { // deploy the project using the projects default target let deployTarget @@ -283,8 +284,16 @@ export const deployTargetBranches = async function(data: any) { } data.deployTarget = deployTarget let deploy = await deployBranch(data) - // NEW DEPLOY VIA PROJECT OPENSHIFT - return deploy + // NEW DEPLOY VIA PROJECT KUBERNETES + if (deploy) { + // if the deploy is successful, then return + return deploy + } + if (deploy == false) { + throw new NoNeedToDeployBranch( + `configured regex for project does not match branchname '${branchName}'` + ); + } } throw new NoNeedToDeployBranch( `no deploy targets configured` @@ -330,7 +339,7 @@ export const deployTargetPullrequest = async function(data: any) { if (deployTarget) { data.deployTarget = deployTarget let deploy = await deployPullrequest(data) - // EXISTING DEPLOY VIA ENVIRONMENT OPENSHIFT + // EXISTING DEPLOY VIA ENVIRONMENT KUBERNETES return deploy } @@ -348,13 +357,18 @@ export const deployTargetPullrequest = async function(data: any) { openshift: deployTargetConfigs.targets[i].deployTarget } data.deployTarget = deployTarget - // NEW DEPLOY VIA DEPLOYTARGETCONFIG OPENSHIFT + // NEW DEPLOY VIA DEPLOYTARGETCONFIG KUBERNETES deploy = await deployPullrequest(data) if (deploy) { // if the deploy is successful, then return return deploy } } + if (deploy == false) { + throw new NoNeedToDeployBranch( + `configured regex for all deploytargets does not match pullrequest '${branchName}'` + ); + } } else { // deploy the project using the projects default target let deployTarget @@ -370,8 +384,16 @@ export const deployTargetPullrequest = async function(data: any) { } data.deployTarget = deployTarget let deploy = await deployPullrequest(data) - // NEW DEPLOY VIA PROJECT OPENSHIFT - return deploy + // NEW DEPLOY VIA PROJECT KUBERNETES + if (deploy) { + // if the deploy is successful, then return + return 
deploy + } + if (deploy == false) { + throw new NoNeedToDeployBranch( + `configured regex for all project does not match pullrequest '${branchName}'` + ); + } } throw new NoNeedToDeployBranch( `no deploy targets configured` From 83e3c4a4353990f90350976d9f3829e503bb72ab Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dezs=C5=91=20BICZ=C3=93?= Date: Tue, 29 Mar 2022 15:18:23 +0200 Subject: [PATCH 18/38] Typo fix --- docs/docker-images/solr/solr-drupal.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/docker-images/solr/solr-drupal.md b/docs/docker-images/solr/solr-drupal.md index 19998337b4..97ad5721d9 100644 --- a/docs/docker-images/solr/solr-drupal.md +++ b/docs/docker-images/solr/solr-drupal.md @@ -14,4 +14,4 @@ For each Solr version, there is a specific `solr-drupal:` Docker image. * 6.6 \(available for compatibility, no longer officially supported\) * 7.7 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/7.7.Dockerfile) (no longer actively supported upstream) - `uselagoon/solr-7.7-drupal` * 7 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/7.Dockerfile) - `uselagoon/solr-7-drupal` -* 7 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/8.Dockerfile) - `uselagoon/solr-8-drupal` +* 8 [Dockerfile](https://github.com/uselagoon/lagoon-images/blob/main/images/solr-drupal/8.Dockerfile) - `uselagoon/solr-8-drupal` From c8233a94e8b230a6be5af7d90cc907c8516b97fc Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Thu, 31 Mar 2022 15:53:49 +1100 Subject: [PATCH 19/38] chore: gcp does not like ACL in params --- services/api/src/resources/file/resolvers.ts | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/services/api/src/resources/file/resolvers.ts b/services/api/src/resources/file/resolvers.ts index 47fa43781c..29e4cd0abb 100644 --- a/services/api/src/resources/file/resolvers.ts +++ b/services/api/src/resources/file/resolvers.ts @@ -4,6 +4,7 @@ import { s3Client } from '../../clients/aws'; import { query } from '../../util/db'; import { Sql } from './sql'; import { Sql as taskSql } from '../task/sql'; +import { getConfigFromEnv } from '../../util/config'; export const getDownloadLink: ResolverFn = async ({ s3Key }) => s3Client.getSignedUrl('getObject', { @@ -36,8 +37,7 @@ export const uploadFilesForTask: ResolverFn = async ( const s3_key = `tasks/${task}/${newFile.filename}`; const params = { Key: s3_key, - Body: newFile.createReadStream(), - ACL: 'private' + Body: newFile.createReadStream() }; // @ts-ignore await s3Client.upload(params).promise(); From 417a5f65eb704139647376ce740c34f21e57c8ca Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Thu, 31 Mar 2022 16:24:33 +1100 Subject: [PATCH 20/38] fix: shorten each part of a domain if required for autogenerated urls --- .../scripts/exec-autogenerated-pattern.sh | 80 +++++++++---------- 1 file changed, 39 insertions(+), 41 deletions(-) diff --git a/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh b/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh index 9e4d264530..76a5652807 100644 --- a/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh +++ b/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh @@ -5,7 +5,7 @@ ### Given a router pattern, it will create the required domains ############################################## function routerPattern2DomainGenerator { - ROUTER_URL=${1} + DOMAIN_PARTS=${1} SERVICE=${2} PROJECT=${3} 
ENVIRONMENT=${4} @@ -13,49 +13,48 @@ function routerPattern2DomainGenerator { HAS_SERVICE_PATTERN=false re='(.*)\$\{service\}(.*)' - if [[ $ROUTER_URL =~ $re ]]; then + if [[ $DOMAIN_PARTS =~ $re ]]; then HAS_SERVICE_PATTERN=true - ROUTER_URL2=${BASH_REMATCH[1]}${SERVICE} - ROUTER_URL=${ROUTER_URL2}${BASH_REMATCH[2]} + DOMAIN_PARTS2=${BASH_REMATCH[1]}${SERVICE} + DOMAIN_PARTS=${DOMAIN_PARTS2}${BASH_REMATCH[2]} fi re='(.*)\$\{project\}(.*)' - if [[ $ROUTER_URL =~ $re ]]; then - ROUTER_URL2=${BASH_REMATCH[1]}${PROJECT} - ROUTER_URL=${ROUTER_URL2}${BASH_REMATCH[2]} + if [[ $DOMAIN_PARTS =~ $re ]]; then + DOMAIN_PARTS2=${BASH_REMATCH[1]}${PROJECT} + DOMAIN_PARTS=${DOMAIN_PARTS2}${BASH_REMATCH[2]} fi re='(.*)\$\{environment\}(.*)' - SUFFIX="" - if [[ $ROUTER_URL =~ $re ]]; then - ROUTER_URL2=${BASH_REMATCH[1]}${ENVIRONMENT} - ROUTER_URL=${ROUTER_URL2}${BASH_REMATCH[2]} - SUFFIX=${BASH_REMATCH[2]} + if [[ $DOMAIN_PARTS =~ $re ]]; then + DOMAIN_PARTS2=${BASH_REMATCH[1]}${ENVIRONMENT} + DOMAIN_PARTS=${DOMAIN_PARTS2}${BASH_REMATCH[2]} fi # fallback to the default behaviour which adds the service with a dot # if the pattern doesn't have a service pattern defined in it if [ $HAS_SERVICE_PATTERN == "false" ]; then - ROUTER_URL2=${SERVICE}.${ROUTER_URL2} + DOMAIN_PARTS2=${SERVICE}.${DOMAIN_PARTS2} fi - SUFFIX_HASH=$(echo $ROUTER_URL2 | sha256sum | awk '{print $1}' | cut -c -8) - re='(.*)([.])(.*)' - if [[ $ROUTER_URL2 =~ $re ]]; then - if [ ${#BASH_REMATCH[3]} -gt 63 ]; then - ROUTER_URL2=$(echo ${BASH_REMATCH[3]} | cut -c -55 | sed -e 's/-$//')-${SUFFIX_HASH} - ROUTER_URL2=${BASH_REMATCH[1]}.${ROUTER_URL2} - echo $ROUTER_URL2$SUFFIX - return + # once all the parts of the router pattern have been + DOMAIN_HASH=$(echo $DOMAIN_PARTS | sha256sum | awk '{print $1}' | cut -c -8) + FINAL_DOMAIN="" + # split the domain up by the dot and iterate over each part to check its length + IFS='.' read -ra DOMAIN_PARTS_SPLIT <<< "$DOMAIN_PARTS" + for DOMAIN_PART in ${DOMAIN_PARTS_SPLIT[@]} + do + if [ ${#DOMAIN_PART} -gt 63 ]; then + # if the part of the domain is greater than 63, then keep 54 characters and add the domain hash (8) and a dash (1) + # to the remaining domain part (54+1+8=63) + DOMAIN_PART=$(echo ${DOMAIN_PART} | cut -c -54 | sed -e 's/-$//')-${DOMAIN_HASH} fi - echo $ROUTER_URL2$SUFFIX - else - if [ ${#ROUTER_URL2} -gt 63 ]; then - ROUTER_URL2=$(echo $ROUTER_URL2 | cut -c -55 | sed -e 's/-$//')-${SUFFIX_HASH} - fi - echo $ROUTER_URL2$SUFFIX - fi + # combine the parts + FINAL_DOMAIN=${FINAL_DOMAIN}${DOMAIN_PART}. 
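The loop above enforces the DNS limit of 63 characters per label: any oversized label keeps its first 54 characters (minus a trailing dash) plus a dash and an 8-character hash of the whole generated domain, giving 54+1+8=63. A TypeScript stand-in for that same logic, assuming Node's built-in crypto module — a sketch for clarity, not the script itself:

```typescript
import { createHash } from 'crypto';

// Shorten any DNS label longer than 63 characters, mirroring the bash above.
function shortenDomain(domain: string): string {
  // 8-char hash of the full domain keeps truncated labels unique.
  const hash = createHash('sha256').update(domain).digest('hex').slice(0, 8);
  return domain
    .split('.')
    .map(label =>
      label.length > 63
        ? `${label.slice(0, 54).replace(/-+$/, '')}-${hash}` // 54 + 1 + 8 = 63
        : label
    )
    .join('.');
}

// Example: a generated label of 70 characters is truncated and hash-suffixed.
console.log(shortenDomain(`${'x'.repeat(70)}.example.com`));
```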
+ done + # strip the trailing dot from the domain + echo "$(echo ${FINAL_DOMAIN} | rev | cut -c 2- | rev)" } ############################################## @@ -63,7 +62,7 @@ function routerPattern2DomainGenerator { ### Performs the same function that the build-deploy controller currently does ############################################## function generateShortUrl { - ROUTER_URL=${1} + DOMAIN_PARTS=${1} SERVICE=${2} PROJECT=${3} ENVIRONMENT=${4} @@ -71,31 +70,30 @@ function generateShortUrl { HAS_SERVICE_PATTERN=false re='(.*)\$\{service\}(.*)' - if [[ $ROUTER_URL =~ $re ]]; then + if [[ $DOMAIN_PARTS =~ $re ]]; then HAS_SERVICE_PATTERN=true - ROUTER_URL2=${BASH_REMATCH[1]}${SERVICE} - ROUTER_URL=${ROUTER_URL2}${BASH_REMATCH[2]} + DOMAIN_PARTS2=${BASH_REMATCH[1]}${SERVICE} + DOMAIN_PARTS=${DOMAIN_PARTS2}${BASH_REMATCH[2]} fi re='(.*)\$\{project\}(.*)' - if [[ $ROUTER_URL =~ $re ]]; then + if [[ $DOMAIN_PARTS =~ $re ]]; then SHA256_B32_PROJECT=$(echo -e "import sys\nimport base64\nimport hashlib\nprint(base64.b32encode(bytearray(hashlib.sha256(sys.argv[1].encode()).digest())).decode('utf-8'))" | python3 - "${PROJECT}" | tr '[:upper:]' '[:lower:]' | cut -c -8) - ROUTER_URL2=${BASH_REMATCH[1]}${SHA256_B32_PROJECT} - ROUTER_URL=${ROUTER_URL2}${BASH_REMATCH[2]} + DOMAIN_PARTS2=${BASH_REMATCH[1]}${SHA256_B32_PROJECT} + DOMAIN_PARTS=${DOMAIN_PARTS2}${BASH_REMATCH[2]} fi re='(.*)\$\{environment\}(.*)' - if [[ $ROUTER_URL =~ $re ]]; then + if [[ $DOMAIN_PARTS =~ $re ]]; then SHA256_B32_ENVIRONMENT=$(echo -e "import sys\nimport base64\nimport hashlib\nprint(base64.b32encode(bytearray(hashlib.sha256(sys.argv[1].encode()).digest())).decode('utf-8'))" | python3 - "${ENVIRONMENT}" | tr '[:upper:]' '[:lower:]' | cut -c -8) - ROUTER_URL2=${BASH_REMATCH[1]}${SHA256_B32_ENVIRONMENT} - ROUTER_URL=${ROUTER_URL2}${BASH_REMATCH[2]} + DOMAIN_PARTS2=${BASH_REMATCH[1]}${SHA256_B32_ENVIRONMENT} + DOMAIN_PARTS=${DOMAIN_PARTS2}${BASH_REMATCH[2]} fi # fallback to the default behaviour which adds the service with a dot # if the pattern doesn't have a service pattern defined in it if [ $HAS_SERVICE_PATTERN == "false" ]; then - ROUTER_URL=${SERVICE}.${ROUTER_URL} + DOMAIN_PARTS=${SERVICE}.${DOMAIN_PARTS} fi - - echo $ROUTER_URL + echo $DOMAIN_PARTS } \ No newline at end of file From aa912baaac687d5b702a0b626f7714e08f45de08 Mon Sep 17 00:00:00 2001 From: cdchris12 Date: Thu, 31 Mar 2022 12:18:09 -0500 Subject: [PATCH 21/38] Adding support for additional SSH key types --- .../docker-entrypoint-initdb.d/00-tables.sql | 2 +- .../01-migrations.sql | 2 +- services/ui/src/components/SshKeys/AddSshKey.js | 17 +++++++++++------ 3 files changed, 13 insertions(+), 8 deletions(-) diff --git a/services/api-db/docker-entrypoint-initdb.d/00-tables.sql b/services/api-db/docker-entrypoint-initdb.d/00-tables.sql index 6f5788a069..21a2389ac3 100644 --- a/services/api-db/docker-entrypoint-initdb.d/00-tables.sql +++ b/services/api-db/docker-entrypoint-initdb.d/00-tables.sql @@ -6,7 +6,7 @@ CREATE TABLE IF NOT EXISTS ssh_key ( id int NOT NULL auto_increment PRIMARY KEY, name varchar(100) NOT NULL, key_value varchar(5000) NOT NULL, - key_type ENUM('ssh-rsa', 'ssh-ed25519') NOT NULL DEFAULT 'ssh-rsa', + key_type ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521','sk-ecdsa-sha2-nistp256@openssh.com','sk-ssh-ed25519@openssh.com') NOT NULL DEFAULT 'ssh-rsa', key_fingerprint char(51) NULL UNIQUE, created timestamp DEFAULT CURRENT_TIMESTAMP ); diff --git 
a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql index d4ea35a787..6e5cf021ac 100644 --- a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql +++ b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql @@ -552,7 +552,7 @@ CREATE OR REPLACE PROCEDURE AND column_name = 'key_type' ) THEN ALTER TABLE `ssh_key` - CHANGE `keyType` `key_type` ENUM('ssh-rsa', 'ssh-ed25519') NOT NULL DEFAULT 'ssh-rsa'; + CHANGE `keyType` `key_type` ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521','sk-ecdsa-sha2-nistp256@openssh.com','sk-ssh-ed25519@openssh.com') NOT NULL DEFAULT 'ssh-rsa'; END IF; END; $$ diff --git a/services/ui/src/components/SshKeys/AddSshKey.js b/services/ui/src/components/SshKeys/AddSshKey.js index 0cc68f199f..ba1533e982 100644 --- a/services/ui/src/components/SshKeys/AddSshKey.js +++ b/services/ui/src/components/SshKeys/AddSshKey.js @@ -18,11 +18,16 @@ const AddSshKey = ({me: { id, email }}) => { const isFormValid = values.sshKeyName !== '' && !values.sshKey.includes('\n') && ( - values.sshKey.trim().startsWith('ssh-rsa') || - values.sshKey.trim().startsWith('ssh-ed25519') + values.sshKey.trim().startsWith('ssh-rsa') || + values.sshKey.trim().startsWith('ssh-ed25519') || + values.sshKey.trim().startsWith('ecdsa-sha2-nistp256') || + values.sshKey.trim().startsWith('ecdsa-sha2-nistp384') || + values.sshKey.trim().startsWith('ecdsa-sha2-nistp521') || + values.sshKey.trim().startsWith('sk-ecdsa-sha2-nistp256@openssh.com') || + values.sshKey.trim().startsWith('sk-ssh-ed25519@openssh.com') ); - const regex = /\s*(ssh-\S+)\s+(\S+).*/ + const regex = /\s*(ssh-rsa|ssh-ed25519|ecdsa-sha2-nistp256|ecdsa-sha2-nistp384|ecdsa-sha2-nistp521|sk-ecdsa-sha2-nistp256@openssh.com|sk-ssh-ed25519@openssh.com)\s+(\S+).*/ // First capture group is the type of the ssh key // Second capture group is the actual ssh key // Whitespace and comments are ignored @@ -33,7 +38,7 @@ const AddSshKey = ({me: { id, email }}) => { {(addSshKey, { loading, called, error, data }) => { - const addSshKeyHandler = () => { + const addSshKeyHandler = () => { addSshKey({ variables: { input: { @@ -57,7 +62,7 @@ const AddSshKey = ({me: { id, email }}) => { return (
-      { error ? <div className="error">{error.message.replace('GraphQL error:', '').trim()}</div> : "" }
+      { error ? <div className="error">{error.message.replace('GraphQL error:', '').trim()}</div> : "" }
@@ -80,7 +85,7 @@ const AddSshKey = ({me: { id, email }}) => { type="text" onChange={handleChange} value={values.sshKey} - placeholder="Begins with 'ssh-rsa', 'ssh-ed25519'"/> + placeholder="Begins with 'ssh-rsa', 'ssh-ed25519', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'sk-ecdsa-sha2-nistp256@openssh.com', 'sk-ssh-ed25519@openssh.com'"/>
From 98bfe86e03231928d7bb395d22b14c6c1498081f Mon Sep 17 00:00:00 2001 From: cdchris12 Date: Thu, 31 Mar 2022 12:56:23 -0500 Subject: [PATCH 22/38] Adding support for additional SSH key types, part 2 --- .../api-db/docker-entrypoint-initdb.d/00-tables.sql | 2 +- .../docker-entrypoint-initdb.d/01-migrations.sql | 2 +- services/api/src/resources/sshKey/resolvers.ts | 3 +++ services/api/src/resources/sshKey/sshKey.test.js | 12 ++++++++++-- services/api/src/typeDefs.js | 3 +++ services/ui/src/components/SshKeys/AddSshKey.js | 6 ++---- 6 files changed, 20 insertions(+), 8 deletions(-) diff --git a/services/api-db/docker-entrypoint-initdb.d/00-tables.sql b/services/api-db/docker-entrypoint-initdb.d/00-tables.sql index 21a2389ac3..447b59eca4 100644 --- a/services/api-db/docker-entrypoint-initdb.d/00-tables.sql +++ b/services/api-db/docker-entrypoint-initdb.d/00-tables.sql @@ -6,7 +6,7 @@ CREATE TABLE IF NOT EXISTS ssh_key ( id int NOT NULL auto_increment PRIMARY KEY, name varchar(100) NOT NULL, key_value varchar(5000) NOT NULL, - key_type ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521','sk-ecdsa-sha2-nistp256@openssh.com','sk-ssh-ed25519@openssh.com') NOT NULL DEFAULT 'ssh-rsa', + key_type ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521') NOT NULL DEFAULT 'ssh-rsa', key_fingerprint char(51) NULL UNIQUE, created timestamp DEFAULT CURRENT_TIMESTAMP ); diff --git a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql index 6e5cf021ac..61759277ac 100644 --- a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql +++ b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql @@ -552,7 +552,7 @@ CREATE OR REPLACE PROCEDURE AND column_name = 'key_type' ) THEN ALTER TABLE `ssh_key` - CHANGE `keyType` `key_type` ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521','sk-ecdsa-sha2-nistp256@openssh.com','sk-ssh-ed25519@openssh.com') NOT NULL DEFAULT 'ssh-rsa'; + CHANGE `keyType` `key_type` ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521') NOT NULL DEFAULT 'ssh-rsa'; END IF; END; $$ diff --git a/services/api/src/resources/sshKey/resolvers.ts b/services/api/src/resources/sshKey/resolvers.ts index ea25c2bb85..fad077bf33 100644 --- a/services/api/src/resources/sshKey/resolvers.ts +++ b/services/api/src/resources/sshKey/resolvers.ts @@ -10,6 +10,9 @@ const formatSshKey = ({ keyType, keyValue }) => `${keyType} ${keyValue}`; const sshKeyTypeToString = R.cond([ [R.equals('SSH_RSA'), R.always('ssh-rsa')], [R.equals('SSH_ED25519'), R.always('ssh-ed25519')], + [R.equals('ECDSA-SHA2-NISTP256'), R.always('ecdsa-sha2-nistp256')], + [R.equals('ECDSA-SHA2-NISTP384'), R.always('ecdsa-sha2-nistp384')], + [R.equals('ECDSA-SHA2-NISTP521'), R.always('ecdsa-sha2-nistp521')] [R.T, R.identity] ]); diff --git a/services/api/src/resources/sshKey/sshKey.test.js b/services/api/src/resources/sshKey/sshKey.test.js index 5d7df0875c..45a26ced06 100644 --- a/services/api/src/resources/sshKey/sshKey.test.js +++ b/services/api/src/resources/sshKey/sshKey.test.js @@ -25,10 +25,18 @@ describe('Sql', () => { describe('validateSshKey', () => { test('should return true on valid ssh key format', () => { - const ret = validateSshKey( + const rsa_ret = validateSshKey( 'ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQDEZlms5XsiyWjmnnUyhpt93VgHypse9Bl8kNkmZJTiM3Ex/wZAfwogzqd2LrTEiIOWSH1HnQazR+Cc9oHCmMyNxRrLkS/MEl0yZ38Q+GDfn37h/llCIZNVoHlSgYkqD0MQrhfGL5AulDUKIle93dA6qdCUlnZZjDPiR0vEXR36xGuX7QYAhK30aD2SrrBruTtFGvj87IP/0OEOvUZe8dcU9G/pCoqrTzgKqJRpqs/s5xtkqLkTIyR/SzzplO21A+pCKNax6csDDq3snS8zfx6iM8MwVfh8nvBW9seax1zBvZjHAPSTsjzmZXm4z32/ujAn/RhIkZw3ZgRKrxzryttGnWJJ8OFyF31JTJgwWWuPdH53G15PC83ZbmEgSV3win51RZRVppN4uQUuaqZWG9wwk2a6P5aen1RLCSLpTkd2mAEk9PlgmJrf8vITkiU9pF9n68ENCoo556qSdxW2pxnjrzKVPSqmqO1Xg5K4LOX4/9N4n4qkLEOiqnzzJClhFif3O28RW86RPxERGdPT81UI0oDAcU5euQr8Emz+Hd+PY1115UIld3CIHib5PYL9Ee0bFUKiWpR/acSe1fHB64mCoHP7hjFepGsq7inkvg2651wUDKBshGltpNkMj6+aZedNc0/rKYyjl80nT8g8QECgOSRzpmYp0zli2HpFoLOiWw== ansible-testing', ); - expect(ret).toBeTruthy(); + const ed25519_ret = validateSshKey( + 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHJ7msp3s6HzHv8cYRo3PCAdrg8EwjllEQyRuKTg49D', + ); + const ecdsa_ret = validateSshKey( + 'ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAFAX0rkOBwlrXr2rJNxYVi0fRj8IiHBaFCsAM0zO+o2fh+h4EuL1Mx4F237SX5G0zuL8R6Sbf9LrY2lhKZdDpiFdgF7pP1TZ8RuDvKgasppGDEzAIm9+7bmHR118CejWF7llgHD3oz+/aRHTZVpOOaCyTGkF2oPeUejrI74KoPHk3HHpQ==', + ); + expect(rsa_ret).toBeTruthy(); + expect(ed25519_ret).toBeTruthy(); + expect(ecdsa_ret).toBeTruthy(); }); test('should return false on invalid format', () => { diff --git a/services/api/src/typeDefs.js b/services/api/src/typeDefs.js index cbd6452c7b..dcf3f9e48b 100644 --- a/services/api/src/typeDefs.js +++ b/services/api/src/typeDefs.js @@ -14,6 +14,9 @@ const typeDefs = gql` enum SshKeyType { SSH_RSA SSH_ED25519 + ECDSA_SHA2_NISTP256 + ECDSA_SHA2_NISTP384 + ECDSA_SHA2_NISTP521 } enum DeployType { diff --git a/services/ui/src/components/SshKeys/AddSshKey.js b/services/ui/src/components/SshKeys/AddSshKey.js index ba1533e982..7fcda3d991 100644 --- a/services/ui/src/components/SshKeys/AddSshKey.js +++ b/services/ui/src/components/SshKeys/AddSshKey.js @@ -22,12 +22,10 @@ const AddSshKey = ({me: { id, email }}) => { values.sshKey.trim().startsWith('ssh-ed25519') || values.sshKey.trim().startsWith('ecdsa-sha2-nistp256') || values.sshKey.trim().startsWith('ecdsa-sha2-nistp384') || - values.sshKey.trim().startsWith('ecdsa-sha2-nistp521') || - values.sshKey.trim().startsWith('sk-ecdsa-sha2-nistp256@openssh.com') || - values.sshKey.trim().startsWith('sk-ssh-ed25519@openssh.com') + values.sshKey.trim().startsWith('ecdsa-sha2-nistp521') ); - const regex = /\s*(ssh-rsa|ssh-ed25519|ecdsa-sha2-nistp256|ecdsa-sha2-nistp384|ecdsa-sha2-nistp521|sk-ecdsa-sha2-nistp256@openssh.com|sk-ssh-ed25519@openssh.com)\s+(\S+).*/ + const regex = /\s*(ssh-rsa|ssh-ed25519|ecdsa-sha2-nistp256|ecdsa-sha2-nistp384|ecdsa-sha2-nistp521)\s+(\S+).*/ // First capture group is the type of the ssh key // Second capture group is the actual ssh key // Whitespace and comments are ignored From 0a99e65dad6f259a636a8ee23055be03c7983aa6 Mon Sep 17 00:00:00 2001 From: cdchris12 Date: Thu, 31 Mar 2022 13:07:47 -0500 Subject: [PATCH 23/38] Adding support for additional SSH key types, part 3 --- services/ui/src/components/SshKeys/AddSshKey.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/services/ui/src/components/SshKeys/AddSshKey.js b/services/ui/src/components/SshKeys/AddSshKey.js index 7fcda3d991..5765b7ab73 100644 --- a/services/ui/src/components/SshKeys/AddSshKey.js +++ b/services/ui/src/components/SshKeys/AddSshKey.js @@ -83,7 +83,7 @@ const AddSshKey = ({me: { id, email }}) => { type="text" onChange={handleChange} value={values.sshKey} - 
placeholder="Begins with 'ssh-rsa', 'ssh-ed25519', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'sk-ecdsa-sha2-nistp256@openssh.com', 'sk-ssh-ed25519@openssh.com'"/> + placeholder="Begins with 'ssh-rsa', 'ssh-ed25519', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521'"/>
From 241940c896228c4ef4881e420466cabc6ffcdb42 Mon Sep 17 00:00:00 2001 From: cdchris12 Date: Thu, 31 Mar 2022 13:13:14 -0500 Subject: [PATCH 24/38] Adding support for additional SSH key types, part 4 --- services/api/src/resources/sshKey/resolvers.ts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/services/api/src/resources/sshKey/resolvers.ts b/services/api/src/resources/sshKey/resolvers.ts index fad077bf33..0126be798b 100644 --- a/services/api/src/resources/sshKey/resolvers.ts +++ b/services/api/src/resources/sshKey/resolvers.ts @@ -12,7 +12,7 @@ const sshKeyTypeToString = R.cond([ [R.equals('SSH_ED25519'), R.always('ssh-ed25519')], [R.equals('ECDSA-SHA2-NISTP256'), R.always('ecdsa-sha2-nistp256')], [R.equals('ECDSA-SHA2-NISTP384'), R.always('ecdsa-sha2-nistp384')], - [R.equals('ECDSA-SHA2-NISTP521'), R.always('ecdsa-sha2-nistp521')] + [R.equals('ECDSA-SHA2-NISTP521'), R.always('ecdsa-sha2-nistp521')], [R.T, R.identity] ]); From eec2851014328e723bd8456721215632ca9ef94f Mon Sep 17 00:00:00 2001 From: cdchris12 Date: Thu, 31 Mar 2022 15:15:44 -0500 Subject: [PATCH 25/38] Adding support for additional SSH key types, part 5 --- services/api/src/resources/sshKey/resolvers.ts | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/services/api/src/resources/sshKey/resolvers.ts b/services/api/src/resources/sshKey/resolvers.ts index 0126be798b..a3c832d702 100644 --- a/services/api/src/resources/sshKey/resolvers.ts +++ b/services/api/src/resources/sshKey/resolvers.ts @@ -10,9 +10,9 @@ const formatSshKey = ({ keyType, keyValue }) => `${keyType} ${keyValue}`; const sshKeyTypeToString = R.cond([ [R.equals('SSH_RSA'), R.always('ssh-rsa')], [R.equals('SSH_ED25519'), R.always('ssh-ed25519')], - [R.equals('ECDSA-SHA2-NISTP256'), R.always('ecdsa-sha2-nistp256')], - [R.equals('ECDSA-SHA2-NISTP384'), R.always('ecdsa-sha2-nistp384')], - [R.equals('ECDSA-SHA2-NISTP521'), R.always('ecdsa-sha2-nistp521')], + [R.equals('ECDSA_SHA2_NISTP256'), R.always('ecdsa-sha2-nistp256')], + [R.equals('ECDSA_SHA2_NISTP384'), R.always('ecdsa-sha2-nistp384')], + [R.equals('ECDSA_SHA2_NISTP521'), R.always('ecdsa-sha2-nistp521')], [R.T, R.identity] ]); From 7acff3d21cefdc64ca9f58c2ba50cb6237851d75 Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Fri, 1 Apr 2022 11:24:20 +1100 Subject: [PATCH 26/38] fix: use the right variable for the domain parts --- .../scripts/exec-autogenerated-pattern.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh b/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh index 76a5652807..c614da525d 100644 --- a/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh +++ b/images/kubectl-build-deploy-dind/scripts/exec-autogenerated-pattern.sh @@ -34,7 +34,7 @@ function routerPattern2DomainGenerator { # fallback to the default behaviour which adds the service with a dot # if the pattern doesn't have a service pattern defined in it if [ $HAS_SERVICE_PATTERN == "false" ]; then - DOMAIN_PARTS2=${SERVICE}.${DOMAIN_PARTS2} + DOMAIN_PARTS=${SERVICE}.${DOMAIN_PARTS} fi From 69d78e9bbcae39e89c9694152ff9391e0b66ab52 Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Fri, 1 Apr 2022 13:08:11 +1100 Subject: [PATCH 27/38] chore: remove unneeded reference --- services/api/src/resources/file/resolvers.ts | 1 - 1 file changed, 1 deletion(-) diff --git a/services/api/src/resources/file/resolvers.ts b/services/api/src/resources/file/resolvers.ts index 29e4cd0abb..3485b7e136 
100644 --- a/services/api/src/resources/file/resolvers.ts +++ b/services/api/src/resources/file/resolvers.ts @@ -4,7 +4,6 @@ import { s3Client } from '../../clients/aws'; import { query } from '../../util/db'; import { Sql } from './sql'; import { Sql as taskSql } from '../task/sql'; -import { getConfigFromEnv } from '../../util/config'; export const getDownloadLink: ResolverFn = async ({ s3Key }) => s3Client.getSignedUrl('getObject', { From 684a583420db3fd78d20fef91021f3e4a5c68b55 Mon Sep 17 00:00:00 2001 From: cdchris12 Date: Fri, 1 Apr 2022 12:10:52 -0500 Subject: [PATCH 28/38] Adding support for additional SSH key types, part 6 --- .../01-migrations.sql | 25 ++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql index 61759277ac..8d7203b33b 100644 --- a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql +++ b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql @@ -552,7 +552,7 @@ CREATE OR REPLACE PROCEDURE AND column_name = 'key_type' ) THEN ALTER TABLE `ssh_key` - CHANGE `keyType` `key_type` ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521') NOT NULL DEFAULT 'ssh-rsa'; + CHANGE `keyType` `key_type` ENUM('ssh-rsa', 'ssh-ed25519') NOT NULL DEFAULT 'ssh-rsa'; END IF; END; $$ @@ -1576,6 +1576,28 @@ CREATE OR REPLACE PROCEDURE END; $$ +CREATE OR REPLACE PROCEDURE + add_ecdsa_ssh_key_types() + + BEGIN + DECLARE column_type_argument_type varchar(74); + + SELECT COLUMN_TYPE INTO column_type_argument_type + FROM INFORMATION_SCHEMA.COLUMNS + WHERE + table_name = 'ssh_key' + AND table_schema = 'infrastructure' + AND column_name = 'key_type'; + + IF ( + column_type_argument_type = "enum('ssh-rsa', 'ssh-ed25519')" + ) THEN + ALTER TABLE ssh_key + MODIFY type ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521'); + END IF; + END; +$$ + DELIMITER ; -- If adding new procedures, add them to the bottom of this list @@ -1657,6 +1679,7 @@ CALL add_priority_to_deployment(); CALL add_bulk_id_to_deployment(); CALL drop_legacy_permissions(); CALL change_name_index_for_advanced_task_argument(); +CALL add_ecdsa_ssh_key_types(); -- Drop legacy SSH key procedures DROP PROCEDURE IF EXISTS CreateProjectSshKey; From e5e431f27acbbb8a0f3a3a4bf1fd27a4251b46a5 Mon Sep 17 00:00:00 2001 From: Toby Bellwood Date: Sat, 2 Apr 2022 11:28:43 +1100 Subject: [PATCH 29/38] fix: migration column_type_argument_type length --- services/api-db/docker-entrypoint-initdb.d/01-migrations.sql | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql index 8d7203b33b..971609a0ba 100644 --- a/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql +++ b/services/api-db/docker-entrypoint-initdb.d/01-migrations.sql @@ -1580,7 +1580,7 @@ CREATE OR REPLACE PROCEDURE add_ecdsa_ssh_key_types() BEGIN - DECLARE column_type_argument_type varchar(74); + DECLARE column_type_argument_type varchar(100); SELECT COLUMN_TYPE INTO column_type_argument_type FROM INFORMATION_SCHEMA.COLUMNS From 346c6f0a6dace773f787c283242a75a8aab0f6a5 Mon Sep 17 00:00:00 2001 From: Toby Bellwood Date: Sat, 2 Apr 2022 11:29:30 +1100 Subject: [PATCH 30/38] fix: sshKey.match.replace all dashes in keyType --- services/ui/src/components/SshKeys/AddSshKey.js | 2 +- 1 file changed, 1 
insertion(+), 1 deletion(-) diff --git a/services/ui/src/components/SshKeys/AddSshKey.js b/services/ui/src/components/SshKeys/AddSshKey.js index 5765b7ab73..784b2c0f0f 100644 --- a/services/ui/src/components/SshKeys/AddSshKey.js +++ b/services/ui/src/components/SshKeys/AddSshKey.js @@ -42,7 +42,7 @@ const AddSshKey = ({me: { id, email }}) => { input: { name: values.sshKeyName, keyValue: values.sshKey.match(regex)[2], - keyType: values.sshKey.match(regex)[1].replace('-', '_').toUpperCase(), + keyType: values.sshKey.match(regex)[1].replace(/-/g, '_').toUpperCase(), user: { id, email From d368ff20e80cede5ce3e568a6ef9f4b5b81bfa6b Mon Sep 17 00:00:00 2001 From: Toby Bellwood Date: Sat, 2 Apr 2022 11:30:16 +1100 Subject: [PATCH 31/38] chore: add new keyTypes to tests, mocks, stories --- .../api-data/01-populate-api-data-general.gql | 35 ++++++++++++++++++ .../03-populate-api-data-kubernetes.gql | 36 +++++++++++++++++++ .../docker-entrypoint-initdb.d/00-tables.sql | 2 +- services/api/src/mocks.js | 6 ++-- .../src/components/SshKeys/index.stories.js | 17 ++++----- .../internal/lagoonclient/schema.graphql | 3 ++ 6 files changed, 87 insertions(+), 12 deletions(-) diff --git a/local-dev/api-data-watcher-pusher/api-data/01-populate-api-data-general.gql b/local-dev/api-data-watcher-pusher/api-data/01-populate-api-data-general.gql index e177e5eacf..a9e899e231 100644 --- a/local-dev/api-data-watcher-pusher/api-data/01-populate-api-data-general.gql +++ b/local-dev/api-data-watcher-pusher/api-data/01-populate-api-data-general.gql @@ -61,6 +61,14 @@ mutation PopulateApi { ) { id } + CiCustomerUserEcdsa: addUser( + input: { + email: "ci-customer-user-ecdsa@example.com" + comment: "ci-customer-user-ecdsa" + } + ) { + id + } ### SSH Keys: CredentialtestCustomerAccessSshKey: addSshKey( @@ -115,6 +123,19 @@ mutation PopulateApi { ) { id } + CiCustomerSshKeyEcdsa: addSshKey( + input: { + id: 6 + name: "ci-customer-sshkey-ecdsa" + keyValue: "AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAD8E5wfvLg8vvfO9mmHVsZQK8dNgdKM5FrTxL4ORDq66Z50O8zUzBwF1VTO5Zx+qwB7najMdWsnW00BC6PMysSNJQD5HI4CokyKqmGdeSXcROYwvYOjlDQ+jD5qOSmkllRZZnkEYXE5FVBXaZWToyfGUGIoECvKGUQZxkBDHsbK13JdfA==" + keyType: ECDSA_SHA2_NISTP521 + user: { + email: "ci-customer-user-ecdsa@example.com" + } + } + ) { + id + } ## credentialtestbothgroupaccess_user: Access to group1 and group2 CredentialtestCustomerAccessUserAdd1: addUserToGroup( @@ -196,6 +217,20 @@ mutation PopulateApi { name } + CiCustomerUserAddEcdsa: addUserToGroup( + input: { + user: { + email:"ci-customer-user-ecdsa@example.com" + } + group: { + name: "ci-group" + } + role: OWNER + } + ) { + name + } + # Real RocketChat Hook on the amazeeio RocketChat for testing CiRocketChat: addNotificationRocketChat( input: { diff --git a/local-dev/api-data-watcher-pusher/api-data/03-populate-api-data-kubernetes.gql b/local-dev/api-data-watcher-pusher/api-data/03-populate-api-data-kubernetes.gql index f85ee0ed49..b473b98e58 100644 --- a/local-dev/api-data-watcher-pusher/api-data/03-populate-api-data-kubernetes.gql +++ b/local-dev/api-data-watcher-pusher/api-data/03-populate-api-data-kubernetes.gql @@ -33,6 +33,15 @@ mutation PopulateApi { id } + CiCustomerUserEcdsa: addUser( + input: { + email: "ci-customer-user-ecdsa@example.com" + comment: "ci-customer-user-ecdsa" + } + ) { + id + } + CiCustomerSshKeyRsa: addSshKey( input: { id: 4 @@ -59,6 +68,19 @@ mutation PopulateApi { ) { id } + CiCustomerSshKeyEcdsa: addSshKey( + input: { + id: 6 + name: "ci-customer-sshkey-ecdsa" + keyValue: 
"AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAD8E5wfvLg8vvfO9mmHVsZQK8dNgdKM5FrTxL4ORDq66Z50O8zUzBwF1VTO5Zx+qwB7najMdWsnW00BC6PMysSNJQD5HI4CokyKqmGdeSXcROYwvYOjlDQ+jD5qOSmkllRZZnkEYXE5FVBXaZWToyfGUGIoECvKGUQZxkBDHsbK13JdfA==" + keyType: ECDSA_SHA2_NISTP521 + user: { + email: "ci-customer-user-ecdsa@example.com" + } + } + ) { + id + } CiGroup: addGroup( input: { @@ -96,6 +118,20 @@ mutation PopulateApi { name } + CiCustomerUserAddEcdsa: addUserToGroup( + input: { + user: { + email:"ci-customer-user-ecdsa@example.com" + } + group: { + name: "ci-group" + } + role: OWNER + } + ) { + name + } + # Real RocketChat Hook on the amazeeio RocketChat for testing CiRocketChat: addNotificationRocketChat( input: { diff --git a/services/api-db/docker-entrypoint-initdb.d/00-tables.sql b/services/api-db/docker-entrypoint-initdb.d/00-tables.sql index 447b59eca4..aa034d78ec 100644 --- a/services/api-db/docker-entrypoint-initdb.d/00-tables.sql +++ b/services/api-db/docker-entrypoint-initdb.d/00-tables.sql @@ -6,7 +6,7 @@ CREATE TABLE IF NOT EXISTS ssh_key ( id int NOT NULL auto_increment PRIMARY KEY, name varchar(100) NOT NULL, key_value varchar(5000) NOT NULL, - key_type ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521') NOT NULL DEFAULT 'ssh-rsa', + key_type ENUM('ssh-rsa', 'ssh-ed25519','ecdsa-sha2-nistp256','ecdsa-sha2-nistp384','ecdsa-sha2-nistp521') NOT NULL DEFAULT 'ssh-rsa', key_fingerprint char(51) NULL UNIQUE, created timestamp DEFAULT CURRENT_TIMESTAMP ); diff --git a/services/api/src/mocks.js b/services/api/src/mocks.js index 44a2e1ad43..f32a738608 100644 --- a/services/api/src/mocks.js +++ b/services/api/src/mocks.js @@ -53,7 +53,7 @@ export const generator = (schema, min = 1, max) => { const mocks = { Date: () => faker.date.between('2018-11-01T00:00:00', '2019-10-31T23:59:59').toISOString(), JSON: () => ({ id: faker.random.number(), currency: 'usd' }), - SshKeyType: () => faker.random.arrayElement(['ssh_rsa', 'ssh_ed25519']), + SshKeyType: () => faker.random.arrayElement(['ssh_rsa', 'ssh_ed25519','ecdsa_sha2_nistp256','ecdsa_sha2_nistp384','ecdsa_sha2_nistp521']), DeployType: () => faker.random.arrayElement(['branch', 'pullrequest', 'promote']), EnvType: () => faker.random.arrayElement(['production', 'development']), NotificationType: () => faker.random.arrayElement(['slack', 'rocketchat', 'microsoftteams', 'email']), @@ -157,14 +157,14 @@ mocks.Me = () => ({ sshKeys: [{ id: faker.random.number(), name: faker.random.arrayElement(['key-1', 'key-2', 'key-3']), - keyType: faker.random.arrayElement(['SSH_RSA', 'SSH_ED25519']), + keyType: faker.random.arrayElement(['SSH_RSA', 'SSH_ED25519', 'ECDSA_SHA2_NISTP256', 'ECDSA_SHA2_NISTP384', 'ECDSA_SHA2_NISTP521']), created: mocks.Date(), keyFingerprint: faker.random.uuid() }, { id: faker.random.number(), name: faker.random.arrayElement(['key-1', 'key-2', 'key-3']), - keyType: faker.random.arrayElement(['SSH_RSA', 'SSH_ED25519']), + keyType: faker.random.arrayElement(['SSH_RSA', 'SSH_ED25519', 'ECDSA_SHA2_NISTP256', 'ECDSA_SHA2_NISTP384', 'ECDSA_SHA2_NISTP521']), created: mocks.Date(), keyFingerprint: faker.random.uuid() }] diff --git a/services/ui/src/components/SshKeys/index.stories.js b/services/ui/src/components/SshKeys/index.stories.js index cbdc95b1db..0ec5f5fd5f 100644 --- a/services/ui/src/components/SshKeys/index.stories.js +++ b/services/ui/src/components/SshKeys/index.stories.js @@ -7,20 +7,21 @@ export default { title: 'Components/SshKeys', } -const meData = - { - id: 1, - email: 
'heyyo@me.com', +const meData = + { + id: 1, + email: 'heyyo@me.com', sshKeys: [ {"id":10,"name":"auto-add via api","keyType":"ssh-rsa","created":"1978-01-14 14:25:01","keyFingerprint": "SHA256:iLa2YGy/igmtxjM6C3ywV65umECdET/nIhaCeFlrWNs"}, {"id":12,"name":"My Personal Key","keyType":"ssh-ed25519","created":"2018-01-14 14:25:01","keyFingerprint": "SHA256:iLa2YGy/igmtxjM6C3ywV65umECdET/nIhaCeFlrWNs"} + {"id":14,"name":"My Other Key","keyType":"ecdsa-sha2-nistp521","created":"2022-04-01 14:25:01","keyFingerprint": "SHA256:RBRWA2mJFPK/8DtsxVoVzoSShFiuRAzlUBws7cXkwG0"} ] }; -const meDataNoKeys = - { - id: 1, - email: 'heyyo@me.com', +const meDataNoKeys = + { + id: 1, + email: 'heyyo@me.com', sshKeys: [] }; diff --git a/services/workflows/internal/lagoonclient/schema.graphql b/services/workflows/internal/lagoonclient/schema.graphql index 23d5b1f9d7..af0b49314b 100644 --- a/services/workflows/internal/lagoonclient/schema.graphql +++ b/services/workflows/internal/lagoonclient/schema.graphql @@ -1794,6 +1794,9 @@ type SshKey { enum SshKeyType { SSH_RSA SSH_ED25519 + ECDSA_SHA2_NISTP256 + ECDSA_SHA2_NISTP384 + ECDSA_SHA2_NISTP521 } type Subscription { From 3e26a04b01dceba2c923484b57aeb35a13ebb5da Mon Sep 17 00:00:00 2001 From: Toby Bellwood Date: Sat, 2 Apr 2022 11:31:43 +1100 Subject: [PATCH 32/38] docs: add new types to docs --- docs/administering-lagoon/graphql-queries.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/administering-lagoon/graphql-queries.md b/docs/administering-lagoon/graphql-queries.md index 38e0a42cd9..78bfbe1560 100644 --- a/docs/administering-lagoon/graphql-queries.md +++ b/docs/administering-lagoon/graphql-queries.md @@ -111,7 +111,7 @@ mutation { # This is the actual SSH public key (without the type at the beginning and without the comment at the end, ex. `AAAAB3NzaC1yc2EAAAADAQ...3QjzIOtdQERGZuMsi0p`). keyValue: "" # TODO: Fill in the keyType field. - # Valid values are either SSH_RSA or SSH_ED25519. + # Valid values are either SSH_RSA, SSH_ED25519, ECDSA_SHA2_NISTP256/384/521 keyType: SSH_RSA user: { # TODO: Fill in the userId field. 
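The docs above list the accepted enum values, and the earlier UI and resolver patches convert between the OpenSSH string form and the GraphQL enum (uppercase, dashes to underscores — note the later fix that replaces all dashes, not just the first). A hedged TypeScript sketch of that round trip; the real resolver uses a Ramda `R.cond` table rather than these helper functions:

```typescript
// Enum member list taken from the typeDefs change in this series.
type SshKeyType =
  | 'SSH_RSA'
  | 'SSH_ED25519'
  | 'ECDSA_SHA2_NISTP256'
  | 'ECDSA_SHA2_NISTP384'
  | 'ECDSA_SHA2_NISTP521';

// UI direction: "ecdsa-sha2-nistp521" -> "ECDSA_SHA2_NISTP521".
// A bare '-' replace would only hit the first dash and yield an invalid value.
const toEnumName = (keyType: string): SshKeyType =>
  keyType.replace(/-/g, '_').toUpperCase() as SshKeyType;

// API direction: enum name back to the string stored with the key.
const toOpenSshType = (keyType: SshKeyType): string =>
  keyType.toLowerCase().replace(/_/g, '-');

console.log(toEnumName('ecdsa-sha2-nistp521')); // ECDSA_SHA2_NISTP521
console.log(toOpenSshType('SSH_ED25519'));      // ssh-ed25519
```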
From 66031bb1676d614a7d6a7c6854dcbecbcb246183 Mon Sep 17 00:00:00 2001 From: Toby Bellwood Date: Sat, 2 Apr 2022 11:38:55 +1100 Subject: [PATCH 33/38] chore: also add ecdsa key pair to test --- local-dev/cli_id_ecdsa | 12 ++++++++++++ local-dev/cli_id_ecdsa.pub | 1 + 2 files changed, 13 insertions(+) create mode 100644 local-dev/cli_id_ecdsa create mode 100644 local-dev/cli_id_ecdsa.pub diff --git a/local-dev/cli_id_ecdsa b/local-dev/cli_id_ecdsa new file mode 100644 index 0000000000..bdbb127e99 --- /dev/null +++ b/local-dev/cli_id_ecdsa @@ -0,0 +1,12 @@ +-----BEGIN OPENSSH PRIVATE KEY----- +b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAArAAAABNlY2RzYS +1zaGEyLW5pc3RwNTIxAAAACG5pc3RwNTIxAAAAhQQA/BOcH7y4PL73zvZph1bGUCvHTYHS +jORa08S+DkQ6uumedDvM1MwcBdVUzuWcfqsAe52ozHVrJ1tNAQujzMrEjSUA+RyOAqJMiq +phnXkl3ETmML2Do5Q0Pow+ajkppJZUWWZ5BGFxORVQV2mVk6MnxlBiKBAryhlEGcZAQx7G +ytdyXXwAAAEQ2qoa0tqqGtIAAAATZWNkc2Etc2hhMi1uaXN0cDUyMQAAAAhuaXN0cDUyMQ +AAAIUEAPwTnB+8uDy+9872aYdWxlArx02B0ozkWtPEvg5EOrrpnnQ7zNTMHAXVVM7lnH6r +AHudqMx1aydbTQELo8zKxI0lAPkcjgKiTIqqYZ15JdxE5jC9g6OUND6MPmo5KaSWVFlmeQ +RhcTkVUFdplZOjJ8ZQYigQK8oZRBnGQEMexsrXcl18AAAAQVr/ti+u4L5jRkZFILddaexL +mOE274AeMUG6NKlCQWsDdD2hroKJuUQ59TQdpe6e5jBoUZ300EHjA40wmbU+oC/8AAAAE3 +RvYnliZWxsd29vZEBwb3Atb3M= +-----END OPENSSH PRIVATE KEY----- diff --git a/local-dev/cli_id_ecdsa.pub b/local-dev/cli_id_ecdsa.pub new file mode 100644 index 0000000000..2b8adc2c90 --- /dev/null +++ b/local-dev/cli_id_ecdsa.pub @@ -0,0 +1 @@ +ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAD8E5wfvLg8vvfO9mmHVsZQK8dNgdKM5FrTxL4ORDq66Z50O8zUzBwF1VTO5Zx+qwB7najMdWsnW00BC6PMysSNJQD5HI4CokyKqmGdeSXcROYwvYOjlDQ+jD5qOSmkllRZZnkEYXE5FVBXaZWToyfGUGIoECvKGUQZxkBDHsbK13JdfA== local-cli From e7589b30391b7ab9e96158051a5bbf3f96ad2d00 Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Tue, 5 Apr 2022 10:06:46 +1000 Subject: [PATCH 34/38] refactor: add flag for gcs, default false to add ACL --- services/api/src/resources/file/resolvers.ts | 8 +++-- services/logs2s3/src/readFromRabbitMQ.ts | 34 ++++++++++++-------- 2 files changed, 26 insertions(+), 16 deletions(-) diff --git a/services/api/src/resources/file/resolvers.ts b/services/api/src/resources/file/resolvers.ts index 3485b7e136..f8d5bbfd80 100644 --- a/services/api/src/resources/file/resolvers.ts +++ b/services/api/src/resources/file/resolvers.ts @@ -5,6 +5,9 @@ import { query } from '../../util/db'; import { Sql } from './sql'; import { Sql as taskSql } from '../task/sql'; +// if this is google cloud storage or not +const isGCS = process.env.S3_FILES_GCS || 'false' + export const getDownloadLink: ResolverFn = async ({ s3Key }) => s3Client.getSignedUrl('getObject', { Key: s3Key, @@ -34,9 +37,10 @@ export const uploadFilesForTask: ResolverFn = async ( const resolvedFiles = await Promise.all(files); const uploadAndTrackFiles = resolvedFiles.map(async (newFile: any) => { const s3_key = `tasks/${task}/${newFile.filename}`; - const params = { + let params = { Key: s3_key, - Body: newFile.createReadStream() + Body: newFile.createReadStream(), + ...(isGCS == 'false' && {ACL: 'private'}), }; // @ts-ignore await s3Client.upload(params).promise(); diff --git a/services/logs2s3/src/readFromRabbitMQ.ts b/services/logs2s3/src/readFromRabbitMQ.ts index 29003f9bf0..c68d5b9711 100644 --- a/services/logs2s3/src/readFromRabbitMQ.ts +++ b/services/logs2s3/src/readFromRabbitMQ.ts @@ -9,6 +9,8 @@ const secretAccessKey = process.env.S3_FILES_SECRET_ACCESS_KEY || 'minio123' const bucket = process.env.S3_FILES_BUCKET || 
'lagoon-files' const region = process.env.S3_FILES_REGION const s3Origin = process.env.S3_FILES_HOST || 'http://docker.for.mac.localhost:9000' +// if this is google cloud storage or not +const isGCS = process.env.S3_FILES_GCS || 'false' const config = { origin: s3Origin, @@ -40,18 +42,18 @@ export async function readFromRabbitMQ( const { severity, project, uuid, event, meta, message } = logMessage; - switch (event) { // handle builddeploy build logs from lagoon builds case String(event.match(/^build-logs:builddeploy-kubernetes:.*/)): logger.verbose(`received ${event} for project ${project} environment ${meta.branchName} - name:${meta.jobName}, remoteId:${meta.remoteId}`); - await s3Client.putObject({ + const putParams = { Bucket: bucket, - Key: 'buildlogs/'+project+'/'+meta.branchName+'/'+meta.jobName+'-'+meta.remoteId+'.txt', ContentType: 'text/plain', - Body: Buffer.from(message, 'binary') - }).promise(); - + Body: Buffer.from(message, 'binary'), + Key: 'buildlogs/'+project+'/'+meta.branchName+'/'+meta.jobName+'-'+meta.remoteId+'.txt', + ...(isGCS == 'false' && {ACL: 'private'}), + } + await s3Client.putObject(putParams).promise(); channelWrapperLogs.ack(msg); break; // handle tasks events for tasks logs @@ -73,20 +75,24 @@ export async function readFromRabbitMQ( // some versions of the controller don't send this value in the log meta // the resolver in the api also knows to check in both locations when trying to load logs logger.verbose(`received ${event} for project ${project} environment ${environmentName} - id:${meta.task.id}, remoteId:${meta.remoteId}`); - await s3Client.putObject({ + const putParams = { Bucket: bucket, - Key: 'tasklogs/'+project+'/'+environmentName+'/'+meta.task.id+'-'+meta.remoteId+'.txt', ContentType: 'text/plain', - Body: Buffer.from(message, 'binary') - }).promise(); + Body: Buffer.from(message, 'binary'), + Key: 'tasklogs/'+project+'/'+environmentName+'/'+meta.task.id+'-'+meta.remoteId+'.txt', + ...(isGCS == 'false' && {ACL: 'private'}), + } + await s3Client.putObject(putParams).promise(); } else { logger.verbose(`received ${event} for project ${project} - id:${meta.task.id}, remoteId:${meta.remoteId}`); - await s3Client.putObject({ + const putParams = { Bucket: bucket, - Key: 'tasklogs/'+project+'/'+meta.task.id+'-'+meta.remoteId+'.txt', ContentType: 'text/plain', - Body: Buffer.from(message, 'binary') - }).promise(); + Body: Buffer.from(message, 'binary'), + Key: 'tasklogs/'+project+'/'+meta.task.id+'-'+meta.remoteId+'.txt', + ...(isGCS == 'false' && {ACL: 'private'}), + } + await s3Client.putObject(putParams).promise(); } channelWrapperLogs.ack(msg); break; From 301e6ff30a644b6f3a60c4c6a4bf196fa83e198c Mon Sep 17 00:00:00 2001 From: Ben Jackson Date: Tue, 5 Apr 2022 10:07:30 +1000 Subject: [PATCH 35/38] refactor: revert to const --- services/api/src/resources/file/resolvers.ts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/services/api/src/resources/file/resolvers.ts b/services/api/src/resources/file/resolvers.ts index f8d5bbfd80..82bd793a3d 100644 --- a/services/api/src/resources/file/resolvers.ts +++ b/services/api/src/resources/file/resolvers.ts @@ -37,7 +37,7 @@ export const uploadFilesForTask: ResolverFn = async ( const resolvedFiles = await Promise.all(files); const uploadAndTrackFiles = resolvedFiles.map(async (newFile: any) => { const s3_key = `tasks/${task}/${newFile.filename}`; - let params = { + const params = { Key: s3_key, Body: newFile.createReadStream(), ...(isGCS == 'false' && {ACL: 'private'}), From 
90c30aa5faada24439713af2b7cfa30e487a03ab Mon Sep 17 00:00:00 2001 From: Scott Leggett Date: Tue, 29 Mar 2022 15:51:54 +0800 Subject: [PATCH 36/38] feat: validate TLS for all k8s API interactions --- .../build-deploy-docker-compose.sh | 50 +++++++++---------- .../kubectl-build-deploy-dind/build-deploy.sh | 4 +- .../exec-generate-insights-configmap.sh | 16 +++--- .../scripts/exec-kubectl-mariadb-dbaas.sh | 20 ++++---- .../scripts/exec-kubectl-mongodb-dbaas.sh | 20 ++++---- .../scripts/exec-kubectl-postgres-dbaas.sh | 20 ++++---- .../scripts/exec-monitor-deploy.sh | 8 +-- .../scripts/exec-routes-generation.sh | 8 +-- .../kubectl-get-cluster-capabilities.sh | 4 +- 9 files changed, 75 insertions(+), 75 deletions(-) diff --git a/images/kubectl-build-deploy-dind/build-deploy-docker-compose.sh b/images/kubectl-build-deploy-dind/build-deploy-docker-compose.sh index 8211bcce83..926d1698b5 100755 --- a/images/kubectl-build-deploy-dind/build-deploy-docker-compose.sh +++ b/images/kubectl-build-deploy-dind/build-deploy-docker-compose.sh @@ -77,7 +77,7 @@ function featureFlag() { } set +x -SCC_CHECK=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get pod ${LAGOON_BUILD_NAME} -o json | jq -r '.metadata.annotations."openshift.io/scc" // false') +SCC_CHECK=$(kubectl -n ${NAMESPACE} get pod ${LAGOON_BUILD_NAME} -o json | jq -r '.metadata.annotations."openshift.io/scc" // false') set -x function patchBuildStep() { @@ -104,7 +104,7 @@ function patchBuildStep() { # patch the buildpod with the buildstep if [ "${SCC_CHECK}" == false ]; then - kubectl patch --insecure-skip-tls-verify -n ${4} pod ${LAGOON_BUILD_NAME} \ + kubectl patch -n ${4} pod ${LAGOON_BUILD_NAME} \ -p "{\"metadata\":{\"labels\":{\"lagoon.sh/buildStep\":\"${5}\"}}}" # tiny sleep to allow patch to complete before logs roll again @@ -127,21 +127,21 @@ set -x set +x echo "Updating lagoon-yaml configmap with a pre-deploy version of the .lagoon.yml file" -if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap lagoon-yaml &> /dev/null; then +if kubectl -n ${NAMESPACE} get configmap lagoon-yaml &> /dev/null; then # replace it # if the environment has already been deployed with an existing configmap that had the file in the key `.lagoon.yml` # just nuke the entire configmap and replace it with our new key and file - LAGOON_YML_CM=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap lagoon-yaml -o json) + LAGOON_YML_CM=$(kubectl -n ${NAMESPACE} get configmap lagoon-yaml -o json) if [ "$(echo ${LAGOON_YML_CM} | jq -r '.data.".lagoon.yml" // false')" == "false" ]; then # if the key doesn't exist, then just update the pre-deploy yaml only - kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap lagoon-yaml -o json | jq --arg add "`cat .lagoon.yml`" '.data."pre-deploy" = $add' | kubectl apply -f - + kubectl -n ${NAMESPACE} get configmap lagoon-yaml -o json | jq --arg add "`cat .lagoon.yml`" '.data."pre-deploy" = $add' | kubectl apply -f - else # if the key does exist, then nuke it and put the new key - kubectl --insecure-skip-tls-verify -n ${NAMESPACE} create configmap lagoon-yaml --from-file=pre-deploy=.lagoon.yml -o yaml --dry-run=client | kubectl replace -f - + kubectl -n ${NAMESPACE} create configmap lagoon-yaml --from-file=pre-deploy=.lagoon.yml -o yaml --dry-run=client | kubectl replace -f - fi else # create it - kubectl --insecure-skip-tls-verify -n ${NAMESPACE} create configmap lagoon-yaml --from-file=pre-deploy=.lagoon.yml + kubectl -n ${NAMESPACE} create configmap lagoon-yaml 
--from-file=pre-deploy=.lagoon.yml fi set -x @@ -334,7 +334,7 @@ do if [ "$SERVICE_TYPE" == "mariadb" ]; then # if there is already a service existing with the service_name we assume that for this project there has been a # mariadb-single deployed (probably from the past where there was no mariadb-shared yet, or mariadb-dbaas) and use that one - if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get service "$SERVICE_NAME" &> /dev/null; then + if kubectl -n ${NAMESPACE} get service "$SERVICE_NAME" &> /dev/null; then SERVICE_TYPE="mariadb-single" elif checkDBaaSHealth; then # check if the dbaas operator responds to a health check @@ -372,7 +372,7 @@ do if [ "$SERVICE_TYPE" == "postgres" ]; then # if there is already a service existing with the service_name we assume that for this project there has been a # postgres-single deployed (probably from the past where there was no postgres-shared yet, or postgres-dbaas) and use that one - if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get service "$SERVICE_NAME" &> /dev/null; then + if kubectl -n ${NAMESPACE} get service "$SERVICE_NAME" &> /dev/null; then SERVICE_TYPE="postgres-single" elif checkDBaaSHealth; then # check if the dbaas operator responds to a health check @@ -410,7 +410,7 @@ do if [ "$SERVICE_TYPE" == "mongo" ]; then # if there is already a service existing with the service_name we assume that for this project there has been a # mongodb-single deployed (probably from the past where there was no mongodb-shared yet, or mongodb-dbaas) and use that one - if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get service "$SERVICE_NAME" &> /dev/null; then + if kubectl -n ${NAMESPACE} get service "$SERVICE_NAME" &> /dev/null; then SERVICE_TYPE="mongodb-single" elif checkDBaaSHealth; then # check if the dbaas operator responds to a health check @@ -499,7 +499,7 @@ set -x ############################################## LAGOON_CACHE_BUILD_ARGS=() -readarray LAGOON_CACHE_BUILD_ARGS < <(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get deployments -o yaml -l 'lagoon.sh/service' | yq e '.items[].spec.template.spec.containers[].image | capture("^(?P.+\/.+\/.+\/(?P.+)\@.*)$") | "LAGOON_CACHE_" + .name + "=" + .image' -) +readarray LAGOON_CACHE_BUILD_ARGS < <(kubectl -n ${NAMESPACE} get deployments -o yaml -l 'lagoon.sh/service' | yq e '.items[].spec.template.spec.containers[].image | capture("^(?P.+\/.+\/.+\/(?P.+)\@.*)$") | "LAGOON_CACHE_" + .name + "=" + .image' -) @@ -1099,7 +1099,7 @@ if [[ "${CAPABILITIES[@]}" =~ "backup.appuio.ch/v1alpha1/Schedule" ]]; then HELM_CUSTOM_BAAS_BACKUP_SECRET_KEY=${BAAS_CUSTOM_BACKUP_SECRET_KEY} else set +x - kubectl --insecure-skip-tls-verify -n ${NAMESPACE} delete secret baas-custom-backup-credentials --ignore-not-found + kubectl -n ${NAMESPACE} delete secret baas-custom-backup-credentials --ignore-not-found set -x fi fi @@ -1116,15 +1116,15 @@ if [[ "${CAPABILITIES[@]}" =~ "backup.appuio.ch/v1alpha1/Schedule" ]]; then HELM_CUSTOM_BAAS_RESTORE_SECRET_KEY=${BAAS_CUSTOM_RESTORE_SECRET_KEY} else set +x - kubectl --insecure-skip-tls-verify -n ${NAMESPACE} delete secret baas-custom-restore-credentials --ignore-not-found + kubectl -n ${NAMESPACE} delete secret baas-custom-restore-credentials --ignore-not-found set -x fi fi - if ! kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get secret baas-repo-pw &> /dev/null; then + if ! 
kubectl -n ${NAMESPACE} get secret baas-repo-pw &> /dev/null; then # Create baas-repo-pw secret based on the project secret set +x - kubectl --insecure-skip-tls-verify -n ${NAMESPACE} create secret generic baas-repo-pw --from-literal=repo-pw=$(echo -n "$PROJECT_SECRET-BAAS-REPO-PW" | sha256sum | cut -d " " -f 1) + kubectl -n ${NAMESPACE} create secret generic baas-repo-pw --from-literal=repo-pw=$(echo -n "$PROJECT_SECRET-BAAS-REPO-PW" | sha256sum | cut -d " " -f 1) set -x fi @@ -1239,7 +1239,7 @@ set -x if [ "$(ls -A $YAML_FOLDER/)" ]; then find $YAML_FOLDER -type f -exec cat {} \; - kubectl apply --insecure-skip-tls-verify -n ${NAMESPACE} -f $YAML_FOLDER/ + kubectl apply -n ${NAMESPACE} -f $YAML_FOLDER/ fi set +x @@ -1298,7 +1298,7 @@ if [ ! -z "$LAGOON_PROJECT_VARIABLES" ]; then HAS_PROJECT_RUNTIME_VARS=$(echo $LAGOON_PROJECT_VARIABLES | jq -r 'map( select(.scope == "runtime" or .scope == "global") )') if [ ! "$HAS_PROJECT_RUNTIME_VARS" = "[]" ]; then - kubectl patch --insecure-skip-tls-verify \ + kubectl patch \ -n ${NAMESPACE} \ configmap lagoon-env \ -p "{\"data\":$(echo $LAGOON_PROJECT_VARIABLES | jq -r 'map( select(.scope == "runtime" or .scope == "global") ) | map( { (.name) : .value } ) | add | tostring')}" @@ -1308,7 +1308,7 @@ if [ ! -z "$LAGOON_ENVIRONMENT_VARIABLES" ]; then HAS_ENVIRONMENT_RUNTIME_VARS=$(echo $LAGOON_ENVIRONMENT_VARIABLES | jq -r 'map( select(.scope == "runtime" or .scope == "global") )') if [ ! "$HAS_ENVIRONMENT_RUNTIME_VARS" = "[]" ]; then - kubectl patch --insecure-skip-tls-verify \ + kubectl patch \ -n ${NAMESPACE} \ configmap lagoon-env \ -p "{\"data\":$(echo $LAGOON_ENVIRONMENT_VARIABLES | jq -r 'map( select(.scope == "runtime" or .scope == "global") ) | map( { (.name) : .value } ) | add | tostring')}" @@ -1317,7 +1317,7 @@ fi set -x if [ "$BUILD_TYPE" == "pullrequest" ]; then - kubectl patch --insecure-skip-tls-verify \ + kubectl patch \ -n ${NAMESPACE} \ configmap lagoon-env \ -p "{\"data\":{\"LAGOON_PR_HEAD_BRANCH\":\"${PR_HEAD_BRANCH}\", \"LAGOON_PR_BASE_BRANCH\":\"${PR_BASE_BRANCH}\", \"LAGOON_PR_TITLE\":$(echo $PR_TITLE | jq -R)}}" @@ -1357,7 +1357,7 @@ done ### REDEPLOY DEPLOYMENTS IF CONFIG MAP CHANGES ############################################## -CONFIG_MAP_SHA=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap lagoon-env -o yaml | shyaml get-value data | sha256sum | awk '{print $1}') +CONFIG_MAP_SHA=$(kubectl -n ${NAMESPACE} get configmap lagoon-env -o yaml | shyaml get-value data | sha256sum | awk '{print $1}') # write the configmap to the values file so when we `exec-kubectl-resources-with-images.sh` the deployments will get the value of the config map # which will cause a change in the deployment and trigger a rollout if only the configmap has changed yq3 write -i -- /kubectl-build-deploy/values.yaml 'configMapSha' $CONFIG_MAP_SHA @@ -1580,7 +1580,7 @@ if [ "$(ls -A $YAML_FOLDER/)" ]; then fi find $YAML_FOLDER -type f -exec cat {} \; - kubectl apply --insecure-skip-tls-verify -n ${NAMESPACE} -f $YAML_FOLDER/ + kubectl apply -n ${NAMESPACE} -f $YAML_FOLDER/ fi set -x @@ -1644,7 +1644,7 @@ do continue else #echo "Single cron missing: ${SINGLE_NATIVE_CRONJOB}" - kubectl --insecure-skip-tls-verify -n ${NAMESPACE} delete cronjob ${SINGLE_NATIVE_CRONJOB} + kubectl -n ${NAMESPACE} delete cronjob ${SINGLE_NATIVE_CRONJOB} fi done @@ -1696,12 +1696,12 @@ set -x set +x echo "Updating lagoon-yaml configmap with a post-deploy version of the .lagoon.yml file" -if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap 
+if kubectl -n ${NAMESPACE} get configmap lagoon-yaml &> /dev/null; then
     # replace it, no need to check if the key is different, as that will happen in the pre-deploy phase
-    kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap lagoon-yaml -o json | jq --arg add "`cat .lagoon.yml`" '.data."post-deploy" = $add' | kubectl apply -f -
+    kubectl -n ${NAMESPACE} get configmap lagoon-yaml -o json | jq --arg add "`cat .lagoon.yml`" '.data."post-deploy" = $add' | kubectl apply -f -
 else
     # create it
-    kubectl --insecure-skip-tls-verify -n ${NAMESPACE} create configmap lagoon-yaml --from-file=post-deploy=.lagoon.yml
+    kubectl -n ${NAMESPACE} create configmap lagoon-yaml --from-file=post-deploy=.lagoon.yml
 fi
 set -x
diff --git a/images/kubectl-build-deploy-dind/build-deploy.sh b/images/kubectl-build-deploy-dind/build-deploy.sh
index 05d1b4169c..292924c705 100755
--- a/images/kubectl-build-deploy-dind/build-deploy.sh
+++ b/images/kubectl-build-deploy-dind/build-deploy.sh
@@ -54,14 +54,14 @@ set +x # reduce noise in build logs
 
 DEPLOYER_TOKEN=$(cat /var/run/secrets/lagoon/deployer/token)
 
 kubectl config set-credentials lagoon/kubernetes.default.svc --token="${DEPLOYER_TOKEN}"
-kubectl config set-cluster kubernetes.default.svc --insecure-skip-tls-verify=true --server=https://kubernetes.default.svc
+kubectl config set-cluster kubernetes.default.svc --server=https://kubernetes.default.svc --certificate-authority=/run/secrets/kubernetes.io/serviceaccount/ca.crt
 kubectl config set-context default/lagoon/kubernetes.default.svc --user=lagoon/kubernetes.default.svc --namespace="${NAMESPACE}" --cluster=kubernetes.default.svc
 kubectl config use-context default/lagoon/kubernetes.default.svc
 
 if [ ! -z ${INTERNAL_REGISTRY_URL} ] && [ ! -z ${INTERNAL_REGISTRY_USERNAME} ] && [ ! -z ${INTERNAL_REGISTRY_PASSWORD} ] ; then
   echo "docker login -u '${INTERNAL_REGISTRY_USERNAME}' -p '${INTERNAL_REGISTRY_PASSWORD}' ${INTERNAL_REGISTRY_URL}" | /bin/bash
   # create lagoon-internal-registry-secret if it does not exist yet
-  if ! kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get secret lagoon-internal-registry-secret &> /dev/null; then
+  if ! kubectl -n ${NAMESPACE} get secret lagoon-internal-registry-secret &> /dev/null; then
     kubectl create secret docker-registry lagoon-internal-registry-secret --docker-server=${INTERNAL_REGISTRY_URL} --docker-username=${INTERNAL_REGISTRY_USERNAME} --docker-password=${INTERNAL_REGISTRY_PASSWORD} --dry-run -o yaml | kubectl apply -f -
   fi
   REGISTRY_SECRETS+=("lagoon-internal-registry-secret")
diff --git a/images/kubectl-build-deploy-dind/scripts/exec-generate-insights-configmap.sh b/images/kubectl-build-deploy-dind/scripts/exec-generate-insights-configmap.sh
index 369a9932eb..b4f9ca0e4a 100755
--- a/images/kubectl-build-deploy-dind/scripts/exec-generate-insights-configmap.sh
+++ b/images/kubectl-build-deploy-dind/scripts/exec-generate-insights-configmap.sh
@@ -18,20 +18,20 @@ processImageInspect() {
   set -x
 
   # If lagoon-insights-image-inspect-[IMAGE] configmap already exists then we need to update, else create new
-  if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap $IMAGE_INSPECT_CONFIGMAP &> /dev/null; then
-    kubectl --insecure-skip-tls-verify \
+  if kubectl -n ${NAMESPACE} get configmap $IMAGE_INSPECT_CONFIGMAP &> /dev/null; then
+    kubectl \
       -n ${NAMESPACE} \
       create configmap $IMAGE_INSPECT_CONFIGMAP \
       --from-file=${IMAGE_INSPECT_OUTPUT_FILE} \
       -o json \
       --dry-run=client | kubectl replace -f -
   else
-    kubectl --insecure-skip-tls-verify \
+    kubectl \
       -n ${NAMESPACE} \
       create configmap ${IMAGE_INSPECT_CONFIGMAP} \
       --from-file=${IMAGE_INSPECT_OUTPUT_FILE}
   fi
-  kubectl --insecure-skip-tls-verify \
+  kubectl \
     -n ${NAMESPACE} \
     label configmap ${IMAGE_INSPECT_CONFIGMAP} \
     lagoon.sh/insightsProcessed- \
@@ -64,8 +64,8 @@ processSbom() {
   set -x
 
   # If lagoon-insights-sbom-[IMAGE] configmap already exists then we need to update, else create new
-  if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap $SBOM_CONFIGMAP &> /dev/null; then
-    kubectl --insecure-skip-tls-verify \
+  if kubectl -n ${NAMESPACE} get configmap $SBOM_CONFIGMAP &> /dev/null; then
+    kubectl \
       -n ${NAMESPACE} \
       create configmap $SBOM_CONFIGMAP \
       --from-file=${SBOM_OUTPUT_FILE} \
@@ -73,12 +73,12 @@
       --dry-run=client | kubectl replace -f -
   else
     # Create configmap and add label (#have to add label separately: https://github.com/kubernetes/kubernetes/issues/60295)
-    kubectl --insecure-skip-tls-verify \
+    kubectl \
       -n ${NAMESPACE} \
       create configmap ${SBOM_CONFIGMAP} \
       --from-file=${SBOM_OUTPUT_FILE}
   fi
-  kubectl --insecure-skip-tls-verify \
+  kubectl \
     -n ${NAMESPACE} \
     label configmap ${SBOM_CONFIGMAP} \
     lagoon.sh/insightsProcessed- \
diff --git a/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mariadb-dbaas.sh b/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mariadb-dbaas.sh
index 114d2d4051..30dae3d91d 100644
--- a/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mariadb-dbaas.sh
+++ b/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mariadb-dbaas.sh
@@ -5,7 +5,7 @@ OPERATOR_COUNTER=1
 OPERATOR_TIMEOUT=180
 
 # use the secret name from the consumer to prevent credential clash
-until kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database
+until kubectl -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database
 do
   if [ $OPERATOR_COUNTER -lt $OPERATOR_TIMEOUT ]; then
     let SERVICE_BROKER_COUNTER=SERVICE_BROKER_COUNTER+1
@@ -18,26 +18,26 @@ fi
 done
 set +x
 # Grab the details from the consumer spec
-DB_HOST=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.primary)
-DB_USER=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.username)
-DB_PASSWORD=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.password)
-DB_NAME=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database)
-DB_PORT=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.port)
+DB_HOST=$(kubectl -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.primary)
+DB_USER=$(kubectl -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.username)
+DB_PASSWORD=$(kubectl -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.password)
+DB_NAME=$(kubectl -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database)
+DB_PORT=$(kubectl -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.port)
 
 # Add credentials to our configmap, prefixed with the name of the servicename of this servicebroker
-kubectl patch --insecure-skip-tls-verify \
+kubectl patch \
   -n ${NAMESPACE} \
   configmap lagoon-env \
   -p "{\"data\":{\"${SERVICE_NAME_UPPERCASE}_HOST\":\"${DB_HOST}\", \"${SERVICE_NAME_UPPERCASE}_USERNAME\":\"${DB_USER}\", \"${SERVICE_NAME_UPPERCASE}_PASSWORD\":\"${DB_PASSWORD}\", \"${SERVICE_NAME_UPPERCASE}_DATABASE\":\"${DB_NAME}\", \"${SERVICE_NAME_UPPERCASE}_PORT\":\"${DB_PORT}\"}}"
 
 # only add the DB_READREPLICA_HOSTS variable if it exists in the consumer spec
 # since the operator can support multiple replica hosts being defined, we should comma separate them here
-if DB_READREPLICA_HOSTS=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.replicas); then
+if DB_READREPLICA_HOSTS=$(kubectl -n ${NAMESPACE} get mariadbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.replicas); then
   DB_READREPLICA_HOSTS=$(echo $DB_READREPLICA_HOSTS | cut -c 3- | rev | cut -c 1- | rev | sed 's/^\|$//g' | paste -sd, -)
-  kubectl patch --insecure-skip-tls-verify \
+  kubectl patch \
     -n ${NAMESPACE} \
     configmap lagoon-env \
     -p "{\"data\":{\"${SERVICE_NAME_UPPERCASE}_READREPLICA_HOSTS\":\"${DB_READREPLICA_HOSTS}\"}}"
 fi
-set -x
\ No newline at end of file
+set -x
diff --git a/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mongodb-dbaas.sh b/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mongodb-dbaas.sh
index 85b7a21335..1c5a44d747 100644
--- a/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mongodb-dbaas.sh
+++ b/images/kubectl-build-deploy-dind/scripts/exec-kubectl-mongodb-dbaas.sh
@@ -5,7 +5,7 @@ OPERATOR_COUNTER=1
 OPERATOR_TIMEOUT=180
 
 # use the secret name from the consumer to prevent credential clash
-until kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database
+until kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database
 do
   if [ $OPERATOR_COUNTER -lt $OPERATOR_TIMEOUT ]; then
     let OPERATOR_COUNTER=OPERATOR_COUNTER+1
@@ -18,17 +18,17 @@ fi
 done
 set +x
 # Grab the details from the consumer spec
-DB_HOST=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.primary)
-DB_USER=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.username)
-DB_PASSWORD=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.password)
-DB_NAME=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database)
-DB_PORT=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.port)
-DB_AUTHSOURCE=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.auth.source)
-DB_AUTHMECHANISM=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.auth.mechanism)
-DB_AUTHTLS=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.auth.tls)
+DB_HOST=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.primary)
+DB_USER=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.username)
+DB_PASSWORD=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.password)
+DB_NAME=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database)
+DB_PORT=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.port)
+DB_AUTHSOURCE=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.auth.source)
+DB_AUTHMECHANISM=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.auth.mechanism)
+DB_AUTHTLS=$(kubectl -n ${NAMESPACE} get mongodbconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.auth.tls)
 
 # Add credentials to our configmap, prefixed with the name of the servicename of this servicebroker
-kubectl patch --insecure-skip-tls-verify \
+kubectl patch \
   -n ${NAMESPACE} \
   configmap lagoon-env \
   -p "{\"data\":{\"${SERVICE_NAME_UPPERCASE}_HOST\":\"${DB_HOST}\", \"${SERVICE_NAME_UPPERCASE}_USERNAME\":\"${DB_USER}\", \"${SERVICE_NAME_UPPERCASE}_PASSWORD\":\"${DB_PASSWORD}\", \"${SERVICE_NAME_UPPERCASE}_DATABASE\":\"${DB_NAME}\", \"${SERVICE_NAME_UPPERCASE}_PORT\":\"${DB_PORT}\", \"${SERVICE_NAME_UPPERCASE}_AUTHSOURCE\":\"${DB_AUTHSOURCE}\", \"${SERVICE_NAME_UPPERCASE}_AUTHMECHANISM\":\"${DB_AUTHMECHANISM}\", \"${SERVICE_NAME_UPPERCASE}_AUTHTLS\":\"${DB_AUTHTLS}\" }}"
diff --git a/images/kubectl-build-deploy-dind/scripts/exec-kubectl-postgres-dbaas.sh b/images/kubectl-build-deploy-dind/scripts/exec-kubectl-postgres-dbaas.sh
index 319f6f30e1..ea49292be0 100644
--- a/images/kubectl-build-deploy-dind/scripts/exec-kubectl-postgres-dbaas.sh
+++ b/images/kubectl-build-deploy-dind/scripts/exec-kubectl-postgres-dbaas.sh
@@ -5,7 +5,7 @@ OPERATOR_COUNTER=1
 OPERATOR_TIMEOUT=180
 
 # use the secret name from the consumer to prevent credential clash
-until kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database
+until kubectl -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database
 do
   if [ $OPERATOR_COUNTER -lt $OPERATOR_TIMEOUT ]; then
     let SERVICE_BROKER_COUNTER=SERVICE_BROKER_COUNTER+1
@@ -18,26 +18,26 @@ fi
 done
 set +x
 # Grab the details from the consumer spec
-DB_HOST=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.primary)
-DB_USER=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.username)
-DB_PASSWORD=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.password)
-DB_NAME=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database)
-DB_PORT=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.port)
+DB_HOST=$(kubectl -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.primary)
+DB_USER=$(kubectl -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.username)
+DB_PASSWORD=$(kubectl -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.password)
+DB_NAME=$(kubectl -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.database)
+DB_PORT=$(kubectl -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.provider.port)
 
 # Add credentials to our configmap, prefixed with the name of the servicename of this servicebroker
-kubectl patch --insecure-skip-tls-verify \
+kubectl patch \
   -n ${NAMESPACE} \
   configmap lagoon-env \
   -p "{\"data\":{\"${SERVICE_NAME_UPPERCASE}_HOST\":\"${DB_HOST}\", \"${SERVICE_NAME_UPPERCASE}_USERNAME\":\"${DB_USER}\", \"${SERVICE_NAME_UPPERCASE}_PASSWORD\":\"${DB_PASSWORD}\", \"${SERVICE_NAME_UPPERCASE}_DATABASE\":\"${DB_NAME}\", \"${SERVICE_NAME_UPPERCASE}_PORT\":\"${DB_PORT}\"}}"
 
 # only add the DB_READREPLICA_HOSTS variable if it exists in the consumer spec
 # since the operator can support multiple replica hosts being defined, we should comma separate them here
-if DB_READREPLICA_HOSTS=$(kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.replicas); then
+if DB_READREPLICA_HOSTS=$(kubectl -n ${NAMESPACE} get postgresqlconsumer/${SERVICE_NAME} -o yaml | shyaml get-value spec.consumer.services.replicas); then
   DB_READREPLICA_HOSTS=$(echo $DB_READREPLICA_HOSTS | cut -c 3- | rev | cut -c 1- | rev | sed 's/^\|$//g' | paste -sd, -)
-  kubectl patch --insecure-skip-tls-verify \
+  kubectl patch \
    -n ${NAMESPACE} \
    configmap lagoon-env \
    -p "{\"data\":{\"${SERVICE_NAME_UPPERCASE}_READREPLICA_HOSTS\":\"${DB_READREPLICA_HOSTS}\"}}"
 fi
-set -x
\ No newline at end of file
+set -x
diff --git a/images/kubectl-build-deploy-dind/scripts/exec-monitor-deploy.sh b/images/kubectl-build-deploy-dind/scripts/exec-monitor-deploy.sh
index 39ef1d8bf5..028377c44a 100755
--- a/images/kubectl-build-deploy-dind/scripts/exec-monitor-deploy.sh
+++ b/images/kubectl-build-deploy-dind/scripts/exec-monitor-deploy.sh
@@ -13,10 +13,10 @@ stream_logs_deployment() {
   while [ 1 ]
   do
     # Gather all pods and their containers for the current rollout and stream their logs into files
-    kubectl -n ${NAMESPACE} get --insecure-skip-tls-verify pods -l pod-template-hash=${LATEST_POD_TEMPLATE_HASH} -o json | jq -r '.items[] | .metadata.name + " " + .spec.containers[].name' |
+    kubectl -n ${NAMESPACE} get pods -l pod-template-hash=${LATEST_POD_TEMPLATE_HASH} -o json | jq -r '.items[] | .metadata.name + " " + .spec.containers[].name' |
     {
       while read -r POD CONTAINER ; do
-        kubectl -n ${NAMESPACE} logs --insecure-skip-tls-verify --timestamps -f $POD -c $CONTAINER $SINCE_TIME 2> /dev/null > /tmp/kubectl-build-deploy/logs/container/${SERVICE_NAME}/$POD-$CONTAINER.log &
+        kubectl -n ${NAMESPACE} logs --timestamps -f $POD -c $CONTAINER $SINCE_TIME 2> /dev/null > /tmp/kubectl-build-deploy/logs/container/${SERVICE_NAME}/$POD-$CONTAINER.log &
       done
 
       # this will wait for all log streaming we started to finish
@@ -35,7 +35,7 @@ ret=0
 # default progressDeadlineSeconds is 600, doubling that here for a timeout on the status check for 1200s (20m) as a fallback for exceeding the progressdeadline
 # when there may be another issue with the rollout failing, the progressdeadline doesn't always work
 # (eg, existing pod in previous replicaset fails to terminate properly)
-kubectl rollout --insecure-skip-tls-verify -n ${NAMESPACE} status deployment ${SERVICE_NAME} --watch --timeout=1200s || ret=$?
+kubectl rollout -n ${NAMESPACE} status deployment ${SERVICE_NAME} --watch --timeout=1200s || ret=$?
 
 if [[ $ret -ne 0 ]]; then
   # stop all running stream logs
@@ -55,7 +55,7 @@ if [[ $ret -ne 0 ]]; then
   # solr-abcd12345-abcde Pending PodScheduled 0/3 nodes are available: 3 Too many pods.
   #
   echo "If there is any additional information about the status of pods, it will be available here"
-  kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get pods -l lagoon.sh/service=${SERVICE_NAME} -o json | \
+  kubectl -n ${NAMESPACE} get pods -l lagoon.sh/service=${SERVICE_NAME} -o json | \
     jq -r '["POD/SERVICE NAME","STATUS","CONDITION","MESSAGE"], (.items[] | . as $pod | .status.conditions[] | [ $pod.metadata.name, $pod.status.phase, .type, .message]) | @tsv'
 
   exit 1
diff --git a/images/kubectl-build-deploy-dind/scripts/exec-routes-generation.sh b/images/kubectl-build-deploy-dind/scripts/exec-routes-generation.sh
index 4cd9cea730..8103897827 100644
--- a/images/kubectl-build-deploy-dind/scripts/exec-routes-generation.sh
+++ b/images/kubectl-build-deploy-dind/scripts/exec-routes-generation.sh
@@ -353,15 +353,15 @@ done
 ### Add the merged or to be created routes into a configmap
 echo "${FINAL_ROUTES_JSON}" | jq -r > /kubectl-build-deploy/routes.json
 echo "Updating lagoon-routes configmap with the newly generated routes JSON"
-if kubectl --insecure-skip-tls-verify -n ${NAMESPACE} get configmap lagoon-routes &> /dev/null; then
+if kubectl -n ${NAMESPACE} get configmap lagoon-routes &> /dev/null; then
   # if the key does exist, then nuke it and put the new key
-  kubectl --insecure-skip-tls-verify -n ${NAMESPACE} create configmap lagoon-routes --from-file=lagoon-routes=/kubectl-build-deploy/routes.json -o yaml --dry-run=client | kubectl replace -f -
+  kubectl -n ${NAMESPACE} create configmap lagoon-routes --from-file=lagoon-routes=/kubectl-build-deploy/routes.json -o yaml --dry-run=client | kubectl replace -f -
 else
   # create it
-  kubectl --insecure-skip-tls-verify -n ${NAMESPACE} create configmap lagoon-routes --from-file=lagoon-routes=/kubectl-build-deploy/routes.json
+  kubectl -n ${NAMESPACE} create configmap lagoon-routes --from-file=lagoon-routes=/kubectl-build-deploy/routes.json
 fi
 
 ### Run the generation function to create all the kubernetes resources etc
 echo "Generating the routes templates"
 generateRoutes "$(cat /kubectl-build-deploy/routes.json | jq -r)" false
-set -x
\ No newline at end of file
+set -x
diff --git a/images/kubectl-build-deploy-dind/scripts/kubectl-get-cluster-capabilities.sh b/images/kubectl-build-deploy-dind/scripts/kubectl-get-cluster-capabilities.sh
index 860d4ece71..1dc24ae718 100755
--- a/images/kubectl-build-deploy-dind/scripts/kubectl-get-cluster-capabilities.sh
+++ b/images/kubectl-build-deploy-dind/scripts/kubectl-get-cluster-capabilities.sh
@@ -22,6 +22,6 @@ while IFS='/' read -ra VERSION; do # api groups and versions are separated by `/
     else
       CAPABILITIES+=("${API_GROUP}/${API_VERSION}/${RESOURCE}")
     fi
-  done < <(kubectl --insecure-skip-tls-verify api-resources --no-headers --cached --namespaced=true --api-group="${API_GROUP}" | awk '{print $NF}' )
+  done < <(kubectl api-resources --no-headers --cached --namespaced=true --api-group="${API_GROUP}" | awk '{print $NF}' )
 
-done < <(kubectl --insecure-skip-tls-verify api-versions)
+done < <(kubectl api-versions)

From c9334e5c93f50807cbd92d6e63b429ac080a991f Mon Sep 17 00:00:00 2001
From: Alanna Burke
Date: Wed, 6 Apr 2022 15:42:15 -0400
Subject: [PATCH 37/38] Update install-lagoon-remote.md

Super minor punctuation update.
---
 docs/installing-lagoon/install-lagoon-remote.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/installing-lagoon/install-lagoon-remote.md b/docs/installing-lagoon/install-lagoon-remote.md
index 05fceddf91..d73832554c 100644
--- a/docs/installing-lagoon/install-lagoon-remote.md
+++ b/docs/installing-lagoon/install-lagoon-remote.md
@@ -7,7 +7,7 @@ Now we will install Lagoon Remote into the Lagoon namespace. The [RabbitMQ](../d
    * **rabbitMQHostname** `lagoon-core-broker.lagoon-core.svc.local`
    * **taskSSHHost** `kubectl get service lagoon-core-broker-amqp-ext -o custom-columns="NAME:.metadata.name,IP ADDRESS:.status.loadBalancer.ingress[*].ip,HOSTNAME:.status.loadBalancer.ingress[*].hostname"`
    * **harbor-password** `kubectl -n harbor get secret harbor-harbor-core -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode`
-2. Add the harbor configuration from the previous step
+2. Add the Harbor configuration from the previous step.
 3. Run `helm upgrade --install --create-namespace --namespace lagoon -f remote-values.yaml lagoon-remote lagoon/lagoon-remote`
 
 ```yaml title="lagoon-remote-values.yml"

From b0346c0303eaa1f2e6699e820f84483c021ed7b7 Mon Sep 17 00:00:00 2001
From: Alanna Burke
Date: Wed, 6 Apr 2022 15:44:19 -0400
Subject: [PATCH 38/38] Update requirements.md

Minor text changes.
---
 docs/installing-lagoon/requirements.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/installing-lagoon/requirements.md b/docs/installing-lagoon/requirements.md
index 15c78c95e1..bee118c713 100644
--- a/docs/installing-lagoon/requirements.md
+++ b/docs/installing-lagoon/requirements.md
@@ -14,29 +14,29 @@
 ## Specific requirements (as of March 2022)
 
 ### Kubernetes
-Lagoon supports Kubernetes versions 1.19, 1.20 and 1.21. Support for 1.22 is underway, and mostly complete. There are a number of relevant API deprecations in 1.22 that Lagoon utilised across a number of dependencies.
+Lagoon supports Kubernetes versions 1.19, 1.20 and 1.21. Support for 1.22 is underway, and mostly complete. There are a number of relevant API deprecations in 1.22 that Lagoon utilized across a number of dependencies.
 
 ### ingress-nginx
 Lagoon is currently only for a single ingress-nginx controller, and therefore defining an IngressClass has not been necessary.
 
-This means that Lagoon currently works best with version 3 of the ingress-nginx helm chart - latest release [3.40.0](https://github.com/kubernetes/ingress-nginx/releases/tag/helm-chart-3.40.0)
+This means that Lagoon currently works best with version 3 of the ingress-nginx Helm chart - latest release [3.40.0](https://github.com/kubernetes/ingress-nginx/releases/tag/helm-chart-3.40.0).
 
-In order to use a version of the helm chart (>=4) that supports Ingress v1 (i.e for Kubernetes 1.22), the following configuration should be used,as per [the ingress-nginx docs](https://kubernetes.github.io/ingress-nginx/#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now)
+In order to use a version of the Helm chart (>=4) that supports Ingress v1 (i.e. for Kubernetes 1.22), the following configuration should be used, as per [the ingress-nginx docs](https://kubernetes.github.io/ingress-nginx/#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now).
-- nginx-ingress should be configured as the default controller - set `.controller.ingressClassResource.default: true` in helm values
-- nginx-ingress should be configured to watch ingresses without IngressClass set - set `.controller.watchIngressWithoutClass: true` in helm values
+- nginx-ingress should be configured as the default controller - set `.controller.ingressClassResource.default: true` in Helm values
+- nginx-ingress should be configured to watch ingresses without IngressClass set - set `.controller.watchIngressWithoutClass: true` in Helm values
 
-This will configure the controller to create any new ingresses with itself as the IngressClass, and also to handle any existing ingresses without an IngressClass set
+This will configure the controller to create any new ingresses with itself as the IngressClass, and also to handle any existing ingresses without an IngressClass set.
 
-Other configurations may be possible, but have not been tested
+Other configurations may be possible, but have not been tested.
 
 ### Harbor
-Only Harbor <2.2 is currently supported - the method of retrieving robot accounts was changed in 2.2, and we are working on a fix
+Only Harbor <2.2 is currently supported - the method of retrieving robot accounts was changed in 2.2, and we are working on a fix.
 
-This means you should install Harbor [2.1.6](https://github.com/goharbor/harbor/releases/tag/v2.1.6) with helm chart [1.5.6](https://github.com/goharbor/harbor-helm/releases/tag/1.5.6)
+This means you should install Harbor [2.1.6](https://github.com/goharbor/harbor/releases/tag/v2.1.6) with Helm chart [1.5.6](https://github.com/goharbor/harbor-helm/releases/tag/1.5.6).
 
 ## How much Kubernetes experience/knowledge is required?
-Lagoon uses some very involved Kubernetes and Cloud Native concepts, and whilst full familiarity may not be necessary to install and configure Lagoon, diagnosing issues and contributing may prove difficult without a good level of familiarity.
+Lagoon uses some very involved Kubernetes and Cloud Native concepts, and while full familiarity may not be necessary to install and configure Lagoon, diagnosing issues and contributing may prove difficult without a good level of familiarity.
 
 As an indicator, comfort with the curriculum for the [Certified Kubernetes Administrator](https://www.cncf.io/certification/cka/) would be suggested as a minimum.
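
For illustration, the two chart settings named in the patch above can be combined in a single values file. The following is a minimal sketch for the ingress-nginx Helm chart (version >=4), assuming no other controller customization is needed; it is not a tested configuration:

```yaml
# values.yaml (sketch) for the ingress-nginx Helm chart >=4
controller:
  ingressClassResource:
    default: true                 # make this controller the cluster default IngressClass
  watchIngressWithoutClass: true  # also reconcile Ingresses that set no IngressClass
```

Applied with something like `helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml` (the release name, namespace, and repository alias here are assumptions), this gives the behaviour the requirements describe: new Ingresses default to this controller, and existing class-less Ingresses are still handled.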
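
Relatedly, the `kubectl config` calls in the build-deploy.sh patch earlier in this series amount to writing a kubeconfig that verifies the API server certificate against the mounted serviceaccount CA, instead of passing --insecure-skip-tls-verify on every command. A sketch of the equivalent kubeconfig, with the token and namespace shown as placeholders:

```yaml
# Equivalent kubeconfig (sketch): TLS is verified via the serviceaccount CA bundle
apiVersion: v1
kind: Config
clusters:
- name: kubernetes.default.svc
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority: /run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
- name: lagoon/kubernetes.default.svc
  user:
    token: <token>            # read from /var/run/secrets/lagoon/deployer/token
contexts:
- name: default/lagoon/kubernetes.default.svc
  context:
    cluster: kubernetes.default.svc
    user: lagoon/kubernetes.default.svc
    namespace: <namespace>    # the ${NAMESPACE} the build targets
current-context: default/lagoon/kubernetes.default.svc
```

Because the cluster entry carries the CA, every subsequent kubectl invocation in the build verifies TLS by default, which is what allows the per-command --insecure-skip-tls-verify flags to be dropped throughout these patches.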