From ab674e824e75b021b0f2513ccb9be5e163f73b34 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Wed, 15 Jan 2025 08:58:25 +0100 Subject: [PATCH 01/13] docs: Update local-test-setup.md to use scripts --- docs/contributor/04-local-test-setup.md | 335 ++++++++---------------- scripts/tests/version.sh | 13 +- versions.yaml | 15 +- 3 files changed, 121 insertions(+), 242 deletions(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 3df96bcc2c..60934e73ef 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -16,283 +16,158 @@ This setup is deployed with the following security features enabled: * Strict mTLS connection between Kyma Control Plane (KCP) and SKR clusters * SAN Pinning (SAN of client TLS certificate needs to match the DNS annotation of a corresponding Kyma CR) -## Procedure - -### KCP Cluster Setup - -1. Create a local KCP cluster: +## Prerequisites - ```shell - k3d cluster create kcp-local --port 9443:443@loadbalancer \ - --registry-create k3d-registry.localhost:0.0.0.0:5111 \ - --k3s-arg '--disable=traefik@server:0' \ - --k3s-arg --tls-san=host.k3d.internal@server:* - ``` +The following tooling is required in the versions defined in [`versions.yaml`](../../versions.yaml): -2. Open `/etc/hosts` file on your local system: +- cmctl (cert-manager) +- docker +- go +- golangci-lint +- istioctl +- k3d +- kubectl +- kustomize +- [modulectl](https://github.com/kyma-project/modulectl) +- yq - ```shell - sudo nano /etc/hosts - ``` +## Procedure - Add an entry for your local k3d registry created in step 1: +Execute the following scripts from the project root. - ```txt - 127.0.0.1 k3d-registry.localhost - ``` +## Create Test Clusters -3. Install the following prerequisites required by Lifecycle Manager: +Create local test clusters for SKR and KCP. - 1. Istio CRDs using `istioctl`: +```sh +K8S_VERSION=$(yq e '.k8s' ./versions.yaml) +CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml) +./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION +``` - ```shell - brew install istioctl && \ - istioctl install --set profile=demo -y - ``` +## Install the CRDs - 2. `cert-manager` by Jetstack: +Install the CRDs to the KCP cluster. - ```shell - kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.yaml - ``` +```sh +./scripts/tests/install_crds.sh +``` -4. Deploy Lifecycle Manager in the cluster: +## Deploy lifecycle-manager - ```shell - make local-deploy-with-watcher IMG=europe-docker.pkg.dev/kyma-project/prod/lifecycle-manager:latest - ``` +Deploy a built image from the registry, e.g. the `latest` image from the `prod` registry. - > **TIP:** If you get an error similar to the following, wait a couple of seconds and rerun the command. - > - > ```shell - > Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": no endpoints available for service "cert-manager-webhook" - > ``` +```sh +REGISTRY=prod +TAG=latest +./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG +``` -
- Custom Lifecycle Manager image deployment
- If you want to test a custom image of Lifecycle Manager, run the following:
-
- ```shell
- # build your local image
- make docker-build IMG=k3d-registry.localhost:5111/<image-name>:<image-tag>
- # push the image to the local registry
- make docker-push IMG=k3d-registry.localhost:5111/<image-name>:<image-tag>
- # deploy Lifecycle Manager using the image (note the change to port 5000 which is exposed in the cluster)
- make local-deploy-with-watcher IMG=k3d-registry.localhost:5000/<image-name>:<image-tag>
- ```
+## Deploy a Kyma CR -5. Create a ModuleTemplate CR using [modulectl](https://github.com/kyma-project/modulectl). - The ModuleTemplate CR includes component descriptors for module installations. +```sh +SKR_HOST=host.k3d.internal +./scripts/tests/deploy_kyma.sh $SKR_HOST +``` - In this tutorial, we will create a ModuleTemplate CR from the [`template-operator`](https://github.com/kyma-project/template-operator) repository. - Adjust the path to your `template-operator` local directory or any other reference module operator accordingly. +## Verify if the Kyma becomes Ready - ```shell - cd - - # generate the manifests and save them to the template-operator.yaml file - make build-manifests - - # create the a ModuleTemplate CR and save it to the template.yaml file - modulectl create --config-file ./module-config.yaml --registry http://k3d-registry.localhost:5111 --insecure - ``` +Verify Kyma is Ready in KCP (takes roughly 1-2 minutes). -6. Verify images pushed to the local registry: +```sh +kubectl config use-context k3d-kcp +kubectl get kyma/kyma-sample -n kcp-system +``` - ```shell - curl http://k3d-registry.localhost:5111/v2/_catalog\?n\=100 - ``` +Verify Kyma is Ready in SKR (takes roughly 1-2 minutes). - The output should look like the following: +```sh +kubectl config use-context k3d-skr +kubectl get kyma/default -n kyma-system +``` - ```shell - {"repositories":["component-descriptors/kyma-project.io/module/template-operator"]} - ``` +## [OPTIONAL] Deploy template-operator module -7. Open the generated `template.yaml` file and change the following line: +Build it locally and deploy it. - ```yaml - <...> - - baseUrl: k3d-registry.localhost:5111 - <...> - ``` +```sh +cd - To the following: +make build-manifests +modulectl create --config-file ./module-config.yaml --registry http://localhost:5111 --insecure - ```yaml - <...> - - baseUrl: k3d-registry.localhost:5000 - <...> - ``` +kubectl config use-context k3d-kcp +# repository URL is localhost:5111 on the host machine but must be k3d-kcp-registry.localhost:5000 within the cluster +yq e '.spec.descriptor.component.repositoryContexts[0].baseUrl = "k3d-kcp-registry.localhost:5000"' ./template.yaml | kubectl apply -f - - You need the change because the operators are running inside of two local k3d clusters, and the internal port for the k3d registry is set by default to `5000`. +MT_VERSION=$(yq e '.spec.version' ./template.yaml) +cd +./scripts/tests/deploy_modulereleasemeta.sh template-operator regular:$MT_VERSION +``` -8. Apply the template: +## [OPTIONAL] Add the template-operator module to the Kyma CR and verify if it becomes Ready - ```shell - kubectl apply -f ./template.yaml - ``` +Add the module. -### SKR Cluster Setup +```sh +kubectl config use-context k3d-skr +kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f - +``` -Create a local Kyma runtime (SKR) cluster: +Verify if the module becomes ready (takes roughly 1-2 minutes). -```shell -k3d cluster create skr-local --k3s-arg --tls-san=host.k3d.internal@server:* +```sh +kubectl config use-context k3d-skr +kubectl get kyma/default -n kyma-system -o wide ``` -### Create a Kyma CR and a Remote Secret - -1. Switch the context for using the KCP cluster: - - ```shell - kubectl config use-context k3d-kcp-local - ``` - -2. 
Generate and apply a sample Kyma CR and its corresponding Secret on KCP: - - ```shell - cat < - Running Lifecycle Manager on a local machine and not on a cluster - If you are running Lifecycle Manager on your local machine and not as a deployment on a cluster, use the following to create a Kyma CR and Secret: - - ```shell - cat << EOF | kubectl apply -f - - --- - apiVersion: v1 - kind: Secret - metadata: - name: kyma-sample - namespace: kcp-system - labels: - "operator.kyma-project.io/kyma-name": "kyma-sample" - "operator.kyma-project.io/managed-by": "lifecycle-manager" - data: - config: $(k3d kubeconfig get skr-local | base64 | tr -d '\n') - --- - apiVersion: operator.kyma-project.io/v1beta2 - kind: Kyma - metadata: - annotations: - skr-domain: "example.domain.com" - name: kyma-sample - namespace: kcp-system - spec: - channel: regular - modules: - - name: template-operator - EOF - ``` - - -### Watcher and Module Installation Verification - -Check the Kyma CR events to verify if the `SKRWebhookIsReady` condition is set to `True`. -Also make sure if the state of the `template-operator` is `Ready` and check the overall `state`. - -```yaml -status: - activeChannel: regular - conditions: - - lastTransitionTime: "2023-02-28T06:42:00Z" - message: skrwebhook is synchronized - observedGeneration: 1 - reason: SKRWebhookIsReady - status: "True" - type: Ready - lastOperation: - lastUpdateTime: "2023-02-28T06:42:00Z" - operation: kyma is ready - modules: - - channel: regular - fqdn: kyma-project.io/module/template-operator - manifest: - apiVersion: operator.kyma-project.io/v1beta2 - kind: Manifest - metadata: - generation: 1 - name: kyma-sample-template-operator-3685142144 - namespace: kcp-system - name: template-operator - state: Ready - template: - apiVersion: operator.kyma-project.io/v1beta2 - kind: ModuleTemplate - metadata: - generation: 1 - name: moduletemplate-template-operator - namespace: kcp-system - version: 1.2.3 - state: Ready -``` +To remove the module again. -### (Optional) Check the Functionality of the Watcher Component +```sh +kubectl config use-context k3d-skr +kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f - +``` -1. Switch the context to use the SKR cluster: +## [OPTIONAL] Verify conditions - ```shell - kubectl config use-context k3d-skr-local - ``` +Check the conditions of the Kyma. -2. Change the channel of the `template-operator` module to trigger a watcher event to KCP: +- `SKRWebhook` to determine if the webhook has been installed to the SKR +- `ModuleCatalog` to determine if the ModuleTemplates and ModuleReleaseMetas haven been synced to the SKR cluster +- `Modules` to determine if the added modules are ready - ```yaml - modules: - - name: template-operator - channel: fast - ``` +```sh +kubectl config use-context k3d-kcp +kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions' +``` -### Verify logs +## [OPTIONAL] Verify if watcher events reach KCP -1. By watching the `skr-webhook` deployment logs, verify if the KCP request is sent successfully: +Flick the channel to trigger an event. 
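+
+The `skr-webhook` deployment on the SKR validates the Kyma CR change and forwards the event to KCP. To watch this happen while you flick the channel, one option is to tail the webhook logs (a sketch; the `kyma-system` namespace is an assumption):
+
+```sh
+kubectl config use-context k3d-skr
+kubectl logs deployment/skr-webhook -n kyma-system --follow
+```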
- ```log - 1.6711877286771238e+09 INFO skr-webhook Kyma UPDATE validated from webhook - 1.6711879279507768e+09 INFO skr-webhook incoming admission review for: operator.kyma-project.io/v1alpha1, Kind=Kyma - 1.671187927950956e+09 INFO skr-webhook KCP {"url": "https://host.k3d.internal:9443/v1/lifecycle-manager/event"} - 1.6711879280545895e+09 INFO skr-webhook sent request to KCP successfully for resource default/kyma-sample - 1.6711879280546305e+09 INFO skr-webhook kcp request succeeded - ``` +```sh +kubectl config use-context k3d-skr +kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' | kubectl apply -f - +kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f - +``` -2. In Lifecycle Manager's logs, verify if the listener is logging messages indicating the reception of a message from the watcher: + Verify if lifecyle-manger received the event on KCP. - ```log - {"level":"INFO","date":"2023-01-05T09:21:51.01093031Z","caller":"event/skr_events_listener.go:111","msg":"dispatched event object into channel","context":{"Module":"Listener","resource-name":"kyma-sample"}} - {"level":"INFO","date":"2023-01-05T09:21:51.010985Z","logger":"listener","caller":"controllers/setup.go:100","msg":"event coming from SKR, adding default/kyma-sample to queue","context":{}} - {"level":"INFO","date":"2023-01-05T09:21:51.011080512Z","caller":"controllers/kyma_controller.go:87","msg":"reconciling modules","context":{"controller":"kyma","controllerGroup":"operator.kyma-project.io","controllerKind":"Kyma","kyma":{"name":"kyma-sample","namespace":"default"},"namespace":"default","name":"kyma-sample","reconcileID":"f9b42382-dc68-41d2-96de-02b24e3ac2d6"}} - {"level":"INFO","date":"2023-01-05T09:21:51.043800866Z","caller":"controllers/kyma_controller.go:206","msg":"syncing state","context":{"controller":"kyma","controllerGroup":"operator.kyma-project.io","controllerKind":"Kyma","kyma":{"name":"kyma-sample","namespace":"default"},"namespace":"default","name":"kyma-sample","reconcileID":"f9b42382-dc68-41d2-96de-02b24e3ac2d6","state":"Processing"}} - ``` +```sh +kubectl config use-context k3d-kcp +kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR" +``` -### Cleanup +## [OPTIONAL] Delete the local test clusters -Run the following command to remove the local testing clusters: +Remove the local SKR and KCP test clusters. 
```shell -k3d cluster rm kcp-local skr-local +k3d cluster rm kcp skr ``` diff --git a/scripts/tests/version.sh b/scripts/tests/version.sh index 94e20333d5..999d0c81d7 100755 --- a/scripts/tests/version.sh +++ b/scripts/tests/version.sh @@ -4,12 +4,15 @@ # Using a simplified version of semantic versioning regex pattern, which is bash compatible SEM_VER_REGEX="^([0-9]+)\.([0-9]+)\.([0-9]+)(-[0-9A-Za-z-]+(\.[0-9A-Za-z-]+)*)?(\+[0-9A-Za-z-]+(\.[0-9A-Za-z-]+)*)?$" +# Change to root directory of the project +cd "$(git rev-parse --show-toplevel)" + # Set default values for variables -KUBECTL_VERSION_DEFAULT=$(yq e '.kubectl' versions.yaml) -GO_VERSION_DEFAULT=$(yq e '.go' versions.yaml) -K3D_VERSION_DEFAULT=$(yq e '.k3d' versions.yaml) -DOCKER_VERSION_DEFAULT=$(yq e '.docker' versions.yaml) -ISTIOCTL_VERSION_DEFAULT=$(yq e '.istio' versions.yaml) +KUBECTL_VERSION_DEFAULT=$(yq e '.kubectl' ./versions.yaml) +GO_VERSION_DEFAULT=$(yq e '.go' ./versions.yaml) +K3D_VERSION_DEFAULT=$(yq e '.k3d' ./versions.yaml) +DOCKER_VERSION_DEFAULT=$(yq e '.docker' ./versions.yaml) +ISTIOCTL_VERSION_DEFAULT=$(yq e '.istio' ./versions.yaml) versioning_error=false # Check if required tools are installed diff --git a/versions.yaml b/versions.yaml index 73b46c8fe0..233f5fc24c 100644 --- a/versions.yaml +++ b/versions.yaml @@ -1,12 +1,13 @@ # defines the versions of the tools used in the project -istio: "1.24.1" -k3d: "5.7.4" -modulectl: "1.1.3" certManager: "1.15.0" -k8s: "1.30.3" -kustomize: "5.3.0" controllerTools: "0.14.0" +docker: "27.4.0" +go: "1.23.4" golangciLint: "1.60.3" +istio: "1.24.1" +k3d: "5.7.4" +k8s: "1.30.3" kubectl: "1.31.3" -go: "1.23.4" -docker: "27.4.0" +kustomize: "5.3.0" +modulectl: "1.1.3" +yq: "4.45.1" From dfaf36a2c27a69c4daf557af517b60011fe6223b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Thu, 16 Jan 2025 15:30:07 +0100 Subject: [PATCH 02/13] Update docs/contributor/04-local-test-setup.md --- docs/contributor/04-local-test-setup.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 60934e73ef..928cf824da 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -20,7 +20,6 @@ This setup is deployed with the following security features enabled: The following tooling is required in the versions defined in [`versions.yaml`](../../versions.yaml): -- cmctl (cert-manager) - docker - go - golangci-lint From dcf2572b319fa1d48b57af55ef04cf5cf00f8d6b Mon Sep 17 00:00:00 2001 From: Amritanshu Sikdar Date: Mon, 20 Jan 2025 16:46:49 +0100 Subject: [PATCH 03/13] docs: Reference Mandatory Modules Controller in Architecture Docs (#2197) * reference controller in architecture docs * fix markdown lint * improve docs --- docs/contributor/01-architecture.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/contributor/01-architecture.md b/docs/contributor/01-architecture.md index 2e9b37743e..5f05104bc1 100644 --- a/docs/contributor/01-architecture.md +++ b/docs/contributor/01-architecture.md @@ -32,6 +32,7 @@ Apart from the custom resources, Lifecycle Manager uses also Kyma, Manifest, and * [Kyma controller](../../internal/controller/kyma/controller.go) - reconciles the Kyma CR which means creating Manifest CRs for each Kyma module enabled in the Kyma CR and deleting them when modules are disabled in the Kyma CR. It is also responsible for synchronising ModuleTemplate CRs between KCP and Kyma runtimes. 
* [Manifest controller](../../internal/controller/manifest/controller.go) - reconciles the Manifest CRs created by the Kyma controller, which means, installing components specified in the Manifest CR in the target SKR cluster and removing them when the Manifest CRs are flagged for deletion. +* [Mandatory Modules controller](02-controllers.md#mandatory-modules-controllers) - reconciles the mandatory ModuleTemplate CRs that have the `operator.kyma-project.io/mandatory-module` label, selecting the highest version if duplicates exist. It translates the ModuleTemplate CRs to Manifest CRs linked to the Kyma CR, ensuring changes propagate. For removal, a deletion controller marks the related Manifest CRs, removes finalizers, and deletes the ModuleTemplate CR. * [Purge controller](../../internal/controller/purge/controller.go) - reconciles the Kyma CRs that are marked for deletion longer than the grace period, which means purging all the resources deployed by Lifecycle Manager in the target SKR cluster. * [Watcher controller](../../internal/controller/watcher/controller.go) - reconciles the Watcher CR which means creating Istio Virtual Service resources in KCP when a Watcher CR is created and removing the same resources when the Watcher CR is deleted. This is done to configure the routing of the messages that come from the watcher agent, installed on each Kyma runtime, and go to a listener agent deployed in KCP. From 5d6ce7821c0e02de109a385ec39c1332a25656f4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Mon, 20 Jan 2025 17:16:01 +0100 Subject: [PATCH 04/13] feat: Implement MaintenanceWindow determination logic (#2196) * feat: Add metadata and status helpers to Kyma type * add go sum * cleanup go mod, add api to coverage * remove api from unit-test coverage * feat: MaintenanceWindow service * chore(dependabot): bump k8s.io/apimachinery from 0.32.0 to 0.32.1 in /api (#2185) chore(dependabot): bump k8s.io/apimachinery in /api Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.32.0 to 0.32.1. - [Commits](https://github.com/kubernetes/apimachinery/compare/v0.32.0...v0.32.1) --- updated-dependencies: - dependency-name: k8s.io/apimachinery dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(dependabot): bump sigs.k8s.io/controller-runtime from 0.19.4 to 0.20.0 (#2192) chore(dependabot): bump sigs.k8s.io/controller-runtime Bumps [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) from 0.19.4 to 0.20.0. - [Release notes](https://github.com/kubernetes-sigs/controller-runtime/releases) - [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md) - [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.19.4...v0.20.0) --- updated-dependencies: - dependency-name: sigs.k8s.io/controller-runtime dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * feat: Add metadata and status helpers to Kyma type * cleanup go mod, add api to coverage * remove obsolete comment * fix fake arguments in suite_test * avoid handler name in suite_test * rename to maintenanceWindow consistently * underscore unused receiver arg * omit receiver arg --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- cmd/main.go | 14 +- internal/controller/kyma/controller.go | 3 +- .../maintenance_policy_handler.go | 44 --- .../maintenance_policy_handler_test.go | 112 ------ .../maintenancewindows/maintenance_window.go | 110 ++++++ .../maintenance_window_test.go | 338 ++++++++++++++++++ pkg/templatelookup/regular.go | 17 +- pkg/templatelookup/regular_test.go | 33 +- pkg/testutils/builder/kyma.go | 6 + pkg/testutils/builder/moduletemplate.go | 5 + pkg/testutils/moduletemplate.go | 13 +- .../integration/controller/kcp/suite_test.go | 7 +- .../integration/controller/kyma/suite_test.go | 7 +- 13 files changed, 533 insertions(+), 176 deletions(-) delete mode 100644 internal/maintenancewindows/maintenance_policy_handler.go delete mode 100644 internal/maintenancewindows/maintenance_policy_handler_test.go create mode 100644 internal/maintenancewindows/maintenance_window.go create mode 100644 internal/maintenancewindows/maintenance_window_test.go diff --git a/cmd/main.go b/cmd/main.go index 89f56799e6..d2b10cfada 100644 --- a/cmd/main.go +++ b/cmd/main.go @@ -64,6 +64,7 @@ import ( "github.com/kyma-project/lifecycle-manager/pkg/log" "github.com/kyma-project/lifecycle-manager/pkg/matcher" "github.com/kyma-project/lifecycle-manager/pkg/queue" + "github.com/kyma-project/lifecycle-manager/pkg/templatelookup" "github.com/kyma-project/lifecycle-manager/pkg/watcher" _ "k8s.io/client-go/plugin/pkg/client/auth" @@ -192,15 +193,16 @@ func setupManager(flagVar *flags.FlagVar, cacheOptions cache.Options, scheme *ma kymaMetrics := metrics.NewKymaMetrics(sharedMetrics) mandatoryModulesMetrics := metrics.NewMandatoryModulesMetrics() - // The maintenance windows policy should be passed to the reconciler to be resolved: https://github.com/kyma-project/lifecycle-manager/issues/2101 - _, err = maintenancewindows.InitializeMaintenanceWindowsPolicy(setupLog, maintenanceWindowPoliciesDirectory, + maintenanceWindow, err := maintenancewindows.InitializeMaintenanceWindow(setupLog, + maintenanceWindowPoliciesDirectory, maintenanceWindowPolicyName) if err != nil { setupLog.Error(err, "unable to set maintenance windows policy") } setupKymaReconciler(mgr, descriptorProvider, skrContextProvider, eventRecorder, flagVar, options, skrWebhookManager, - kymaMetrics, setupLog) - setupManifestReconciler(mgr, flagVar, options, sharedMetrics, mandatoryModulesMetrics, setupLog, eventRecorder) + kymaMetrics, setupLog, maintenanceWindow) + setupManifestReconciler(mgr, flagVar, options, sharedMetrics, mandatoryModulesMetrics, setupLog, + eventRecorder) setupMandatoryModuleReconciler(mgr, descriptorProvider, flagVar, options, mandatoryModulesMetrics, setupLog) setupMandatoryModuleDeletionReconciler(mgr, descriptorProvider, eventRecorder, flagVar, options, setupLog) if flagVar.EnablePurgeFinalizer { @@ -277,7 +279,8 @@ func scheduleMetricsCleanup(kymaMetrics *metrics.KymaMetrics, cleanupIntervalInM func setupKymaReconciler(mgr ctrl.Manager, descriptorProvider *provider.CachedDescriptorProvider, skrContextFactory remote.SkrContextProvider, 
event event.Event, flagVar *flags.FlagVar, options ctrlruntime.Options, - skrWebhookManager *watcher.SKRWebhookManifestManager, kymaMetrics *metrics.KymaMetrics, setupLog logr.Logger, + skrWebhookManager *watcher.SKRWebhookManifestManager, kymaMetrics *metrics.KymaMetrics, + setupLog logr.Logger, maintenanceWindow templatelookup.MaintenanceWindow, ) { options.RateLimiter = internal.RateLimiter(flagVar.FailureBaseDelay, flagVar.FailureMaxDelay, flagVar.RateLimiterFrequency, flagVar.RateLimiterBurst) @@ -303,6 +306,7 @@ func setupKymaReconciler(mgr ctrl.Manager, descriptorProvider *provider.CachedDe Metrics: kymaMetrics, RemoteCatalog: remote.NewRemoteCatalogFromKyma(mgr.GetClient(), skrContextFactory, flagVar.RemoteSyncNamespace), + TemplateLookup: templatelookup.NewTemplateLookup(mgr.GetClient(), descriptorProvider, maintenanceWindow), }).SetupWithManager( mgr, options, kyma.SetupOptions{ ListenerAddr: flagVar.KymaListenerAddr, diff --git a/internal/controller/kyma/controller.go b/internal/controller/kyma/controller.go index c81ae9d7b9..1b70afcc2b 100644 --- a/internal/controller/kyma/controller.go +++ b/internal/controller/kyma/controller.go @@ -72,6 +72,7 @@ type Reconciler struct { IsManagedKyma bool Metrics *metrics.KymaMetrics RemoteCatalog *remote.RemoteCatalog + TemplateLookup *templatelookup.TemplateLookup } // +kubebuilder:rbac:groups=operator.kyma-project.io,resources=kymas,verbs=get;list;watch;create;update;patch;delete @@ -504,7 +505,7 @@ func (r *Reconciler) updateKyma(ctx context.Context, kyma *v1beta2.Kyma) error { } func (r *Reconciler) reconcileManifests(ctx context.Context, kyma *v1beta2.Kyma) error { - templates := templatelookup.NewTemplateLookup(client.Reader(r), r.DescriptorProvider).GetRegularTemplates(ctx, kyma) + templates := r.TemplateLookup.GetRegularTemplates(ctx, kyma) prsr := parser.NewParser(r.Client, r.DescriptorProvider, r.InKCPMode, r.RemoteSyncNamespace) modules := prsr.GenerateModulesFromTemplates(kyma, templates) diff --git a/internal/maintenancewindows/maintenance_policy_handler.go b/internal/maintenancewindows/maintenance_policy_handler.go deleted file mode 100644 index 64d004922b..0000000000 --- a/internal/maintenancewindows/maintenance_policy_handler.go +++ /dev/null @@ -1,44 +0,0 @@ -package maintenancewindows - -import ( - "fmt" - "os" - - "github.com/go-logr/logr" - - "github.com/kyma-project/lifecycle-manager/maintenancewindows/resolver" -) - -func InitializeMaintenanceWindowsPolicy(log logr.Logger, - policiesDirectory, policyName string, -) (*resolver.MaintenanceWindowPolicy, error) { - if err := os.Setenv(resolver.PolicyPathENV, policiesDirectory); err != nil { - return nil, fmt.Errorf("failed to set the policy path env variable, %w", err) - } - - policyFilePath := fmt.Sprintf("%s/%s.json", policiesDirectory, policyName) - if !MaintenancePolicyFileExists(policyFilePath) { - log.Info("maintenance windows policy file does not exist") - return nil, nil //nolint:nilnil //use nil to indicate an empty Maintenance Window Policy - } - - maintenancePolicyPool, err := resolver.GetMaintenancePolicyPool() - if err != nil { - return nil, fmt.Errorf("failed to get maintenance policy pool, %w", err) - } - - maintenancePolicy, err := resolver.GetMaintenancePolicy(maintenancePolicyPool, policyName) - if err != nil { - return nil, fmt.Errorf("failed to get maintenance window policy, %w", err) - } - - return maintenancePolicy, nil -} - -func MaintenancePolicyFileExists(policyFilePath string) bool { - if _, err := os.Stat(policyFilePath); os.IsNotExist(err) { - 
return false - } - - return true -} diff --git a/internal/maintenancewindows/maintenance_policy_handler_test.go b/internal/maintenancewindows/maintenance_policy_handler_test.go deleted file mode 100644 index 2eec61ace0..0000000000 --- a/internal/maintenancewindows/maintenance_policy_handler_test.go +++ /dev/null @@ -1,112 +0,0 @@ -package maintenancewindows_test - -import ( - "fmt" - "testing" - "time" - - "github.com/go-logr/logr" - "github.com/stretchr/testify/require" - - "github.com/kyma-project/lifecycle-manager/internal/maintenancewindows" - "github.com/kyma-project/lifecycle-manager/maintenancewindows/resolver" -) - -func TestMaintenancePolicyFileExists_FileNotExists(t *testing.T) { - got := maintenancewindows.MaintenancePolicyFileExists("testdata/file.json") - - require.False(t, got) -} - -func TestMaintenancePolicyFileExists_FileExists(t *testing.T) { - got := maintenancewindows.MaintenancePolicyFileExists("testdata/policy.json") - - require.True(t, got) -} - -func TestInitializeMaintenanceWindowsPolicy_FileNotExist_NoError(t *testing.T) { - got, err := maintenancewindows.InitializeMaintenanceWindowsPolicy(logr.Logger{}, "testdata", "policy-1") - - require.Nil(t, got) - require.NoError(t, err) -} - -func TestInitializeMaintenanceWindowsPolicy_DirectoryNotExist_NoError(t *testing.T) { - got, err := maintenancewindows.InitializeMaintenanceWindowsPolicy(logr.Logger{}, "files", "policy") - - require.Nil(t, got) - require.NoError(t, err) -} - -func TestInitializeMaintenanceWindowsPolicy_InvalidPolicy(t *testing.T) { - got, err := maintenancewindows.InitializeMaintenanceWindowsPolicy(logr.Logger{}, "testdata", "invalid-policy") - - require.Nil(t, got) - require.ErrorContains(t, err, "failed to get maintenance window policy") -} - -func TestInitializeMaintenanceWindowsPolicy_WhenFileExists_CorrectPolicyIsRead(t *testing.T) { - got, err := maintenancewindows.InitializeMaintenanceWindowsPolicy(logr.Logger{}, "testdata", "policy") - require.NoError(t, err) - - ruleOneBeginTime, err := parseTime("01:00:00+00:00") - require.NoError(t, err) - ruleOneEndTime, err := parseTime("01:00:00+00:00") - require.NoError(t, err) - - ruleTwoBeginTime, err := parseTime("21:00:00+00:00") - require.NoError(t, err) - ruleTwoEndTime, err := parseTime("00:00:00+00:00") - require.NoError(t, err) - - defaultBeginTime, err := parseTime("21:00:00+00:00") - require.NoError(t, err) - defaultEndTime, err := parseTime("23:00:00+00:00") - require.NoError(t, err) - - expectedPolicy := &resolver.MaintenanceWindowPolicy{ - Rules: []resolver.MaintenancePolicyRule{ - { - Match: resolver.MaintenancePolicyMatch{ - Plan: resolver.NewRegexp("trial|free"), - }, - Windows: resolver.MaintenanceWindows{ - { - Days: []string{"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"}, - Begin: resolver.WindowTime(ruleOneBeginTime), - End: resolver.WindowTime(ruleOneEndTime), - }, - }, - }, - { - Match: resolver.MaintenancePolicyMatch{ - Region: resolver.NewRegexp("europe|eu-|uksouth"), - }, - Windows: resolver.MaintenanceWindows{ - { - Days: []string{"Sat"}, - Begin: resolver.WindowTime(ruleTwoBeginTime), - End: resolver.WindowTime(ruleTwoEndTime), - }, - }, - }, - }, - Default: resolver.MaintenanceWindow{ - Days: []string{"Sat"}, - Begin: resolver.WindowTime(defaultBeginTime), - End: resolver.WindowTime(defaultEndTime), - }, - } - - require.NoError(t, err) - require.Equal(t, expectedPolicy, got) -} - -func parseTime(value string) (time.Time, error) { - t, err := time.Parse("15:04:05Z07:00", value) - if err != nil { - return time.Time{}, 
fmt.Errorf("failed to parse time: %w", err) - } - - return t, nil -} diff --git a/internal/maintenancewindows/maintenance_window.go b/internal/maintenancewindows/maintenance_window.go new file mode 100644 index 0000000000..2f1a5389f4 --- /dev/null +++ b/internal/maintenancewindows/maintenance_window.go @@ -0,0 +1,110 @@ +package maintenancewindows + +import ( + "errors" + "fmt" + "os" + "time" + + "github.com/go-logr/logr" + + "github.com/kyma-project/lifecycle-manager/api/v1beta2" + "github.com/kyma-project/lifecycle-manager/maintenancewindows/resolver" +) + +var ErrNoMaintenanceWindowPolicyConfigured = errors.New("no maintenance window policy configured") + +type MaintenanceWindowPolicy interface { + Resolve(runtime *resolver.Runtime, opts ...interface{}) (*resolver.ResolvedWindow, error) +} + +type MaintenanceWindow struct { + // make this private once we refactor the API + // https://github.com/kyma-project/lifecycle-manager/issues/2190 + MaintenanceWindowPolicy MaintenanceWindowPolicy +} + +func InitializeMaintenanceWindow(log logr.Logger, + policiesDirectory, policyName string, +) (*MaintenanceWindow, error) { + if err := os.Setenv(resolver.PolicyPathENV, policiesDirectory); err != nil { + return nil, fmt.Errorf("failed to set the policy path env variable, %w", err) + } + + policyFilePath := fmt.Sprintf("%s/%s.json", policiesDirectory, policyName) + if !MaintenancePolicyFileExists(policyFilePath) { + log.Info("maintenance windows policy file does not exist") + return &MaintenanceWindow{ + MaintenanceWindowPolicy: nil, + }, nil + } + + maintenancePolicyPool, err := resolver.GetMaintenancePolicyPool() + if err != nil { + return nil, fmt.Errorf("failed to get maintenance policy pool, %w", err) + } + + maintenancePolicy, err := resolver.GetMaintenancePolicy(maintenancePolicyPool, policyName) + if err != nil { + return nil, fmt.Errorf("failed to get maintenance window policy, %w", err) + } + + return &MaintenanceWindow{ + MaintenanceWindowPolicy: maintenancePolicy, + }, nil +} + +func MaintenancePolicyFileExists(policyFilePath string) bool { + if _, err := os.Stat(policyFilePath); os.IsNotExist(err) { + return false + } + + return true +} + +// IsRequired determines if a maintenance window is required to update the given module. +func (MaintenanceWindow) IsRequired(moduleTemplate *v1beta2.ModuleTemplate, kyma *v1beta2.Kyma) bool { + if !moduleTemplate.Spec.RequiresDowntime { + return false + } + + if kyma.Spec.SkipMaintenanceWindows { + return false + } + + // module not installed yet => no need for maintenance window + moduleStatus := kyma.Status.GetModuleStatus(moduleTemplate.Spec.ModuleName) + if moduleStatus == nil { + return false + } + + // module already installed in this version => no need for maintenance window + installedVersion := moduleStatus.Version + return installedVersion != moduleTemplate.Spec.Version +} + +// IsActive determines if a maintenance window is currently active. 
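+// It resolves the maintenance window policy for the Kyma's runtime (global
+// account, region, platform region, and plan) and reports whether the current
+// time falls between the resolved window's Begin and End.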
+func (mw MaintenanceWindow) IsActive(kyma *v1beta2.Kyma) (bool, error) { + if mw.MaintenanceWindowPolicy == nil { + return false, ErrNoMaintenanceWindowPolicyConfigured + } + + runtime := &resolver.Runtime{ + GlobalAccountID: kyma.GetGlobalAccount(), + Region: kyma.GetRegion(), + PlatformRegion: kyma.GetPlatformRegion(), + Plan: kyma.GetPlan(), + } + + resolvedWindow, err := mw.MaintenanceWindowPolicy.Resolve(runtime) + if err != nil { + return false, err + } + + now := time.Now() + if now.After(resolvedWindow.Begin) && now.Before(resolvedWindow.End) { + return true, nil + } + + return false, nil +} diff --git a/internal/maintenancewindows/maintenance_window_test.go b/internal/maintenancewindows/maintenance_window_test.go new file mode 100644 index 0000000000..299fe33fd8 --- /dev/null +++ b/internal/maintenancewindows/maintenance_window_test.go @@ -0,0 +1,338 @@ +package maintenancewindows_test + +import ( + "errors" + "fmt" + "testing" + "time" + + "github.com/go-logr/logr" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + "github.com/kyma-project/lifecycle-manager/api/shared" + "github.com/kyma-project/lifecycle-manager/api/v1beta2" + "github.com/kyma-project/lifecycle-manager/internal/maintenancewindows" + "github.com/kyma-project/lifecycle-manager/maintenancewindows/resolver" + "github.com/kyma-project/lifecycle-manager/pkg/testutils/builder" + "github.com/kyma-project/lifecycle-manager/pkg/testutils/random" +) + +func TestMaintenancePolicyFileExists_FileNotExists(t *testing.T) { + got := maintenancewindows.MaintenancePolicyFileExists("testdata/file.json") + + require.False(t, got) +} + +func TestMaintenancePolicyFileExists_FileExists(t *testing.T) { + got := maintenancewindows.MaintenancePolicyFileExists("testdata/policy.json") + + require.True(t, got) +} + +func TestInitializeMaintenanceWindowsPolicy_FileNotExist_NoError(t *testing.T) { + got, err := maintenancewindows.InitializeMaintenanceWindow(logr.Logger{}, "testdata", "policy-1") + + require.Nil(t, got.MaintenanceWindowPolicy) + require.NoError(t, err) +} + +func TestInitializeMaintenanceWindowsPolicy_DirectoryNotExist_NoError(t *testing.T) { + got, err := maintenancewindows.InitializeMaintenanceWindow(logr.Logger{}, "files", "policy") + + require.Nil(t, got.MaintenanceWindowPolicy) + require.NoError(t, err) +} + +func TestInitializeMaintenanceWindowsPolicy_InvalidPolicy(t *testing.T) { + got, err := maintenancewindows.InitializeMaintenanceWindow(logr.Logger{}, "testdata", "invalid-policy") + + require.Nil(t, got) + require.ErrorContains(t, err, "failed to get maintenance window policy") +} + +func TestInitializeMaintenanceWindowsPolicy_WhenFileExists_CorrectPolicyIsRead(t *testing.T) { + got, err := maintenancewindows.InitializeMaintenanceWindow(logr.Logger{}, "testdata", "policy") + require.NoError(t, err) + + ruleOneBeginTime, err := parseTime("01:00:00+00:00") + require.NoError(t, err) + ruleOneEndTime, err := parseTime("01:00:00+00:00") + require.NoError(t, err) + + ruleTwoBeginTime, err := parseTime("21:00:00+00:00") + require.NoError(t, err) + ruleTwoEndTime, err := parseTime("00:00:00+00:00") + require.NoError(t, err) + + defaultBeginTime, err := parseTime("21:00:00+00:00") + require.NoError(t, err) + defaultEndTime, err := parseTime("23:00:00+00:00") + require.NoError(t, err) + + expectedPolicy := &resolver.MaintenanceWindowPolicy{ + Rules: []resolver.MaintenancePolicyRule{ + { + Match: resolver.MaintenancePolicyMatch{ + Plan: resolver.NewRegexp("trial|free"), + }, + Windows: 
resolver.MaintenanceWindows{ + { + Days: []string{"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"}, + Begin: resolver.WindowTime(ruleOneBeginTime), + End: resolver.WindowTime(ruleOneEndTime), + }, + }, + }, + { + Match: resolver.MaintenancePolicyMatch{ + Region: resolver.NewRegexp("europe|eu-|uksouth"), + }, + Windows: resolver.MaintenanceWindows{ + { + Days: []string{"Sat"}, + Begin: resolver.WindowTime(ruleTwoBeginTime), + End: resolver.WindowTime(ruleTwoEndTime), + }, + }, + }, + }, + Default: resolver.MaintenanceWindow{ + Days: []string{"Sat"}, + Begin: resolver.WindowTime(defaultBeginTime), + End: resolver.WindowTime(defaultEndTime), + }, + } + + require.NoError(t, err) + require.Equal(t, expectedPolicy, got.MaintenanceWindowPolicy) +} + +func parseTime(value string) (time.Time, error) { + t, err := time.Parse("15:04:05Z07:00", value) + if err != nil { + return time.Time{}, fmt.Errorf("failed to parse time: %w", err) + } + + return t, nil +} + +var installedModuleStatus = v1beta2.ModuleStatus{ + Name: "module-name", + Version: "1.0.0", +} + +func Test_IsRequired_Returns_False_WhenNotRequiringDowntime(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowInactiveStub{}, + } + + kyma := builder.NewKymaBuilder(). + WithModuleStatus(installedModuleStatus). + Build() + moduleTemplate := builder.NewModuleTemplateBuilder(). + WithVersion("2.0.0"). + WithModuleName(installedModuleStatus.Name). + WithRequiresDowntime(false). + Build() + + result := maintenanceWindow.IsRequired(moduleTemplate, kyma) + + assert.False(t, result) +} + +func Test_IsRequired_Returns_False_WhenSkippingMaintenanceWindows(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowInactiveStub{}, + } + + kyma := builder.NewKymaBuilder(). + WithModuleStatus(installedModuleStatus). + WithSkipMaintenanceWindows(true). + Build() + moduleTemplate := builder.NewModuleTemplateBuilder(). + WithVersion("2.0.0"). + WithModuleName(installedModuleStatus.Name). + WithRequiresDowntime(true). + Build() + + result := maintenanceWindow.IsRequired(moduleTemplate, kyma) + + assert.False(t, result) +} + +func Test_IsRequired_Returns_False_WhenModuleIsNotInstalledYet(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowInactiveStub{}, + } + + kyma := builder.NewKymaBuilder(). + WithSkipMaintenanceWindows(false). + Build() + moduleTemplate := builder.NewModuleTemplateBuilder(). + WithVersion("2.0.0"). + WithModuleName(installedModuleStatus.Name). + WithRequiresDowntime(false). + Build() + + result := maintenanceWindow.IsRequired(moduleTemplate, kyma) + + assert.False(t, result) +} + +func Test_IsRequired_Returns_False_WhenSameVersionIsAlreadyInstalled(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowInactiveStub{}, + } + + kyma := builder.NewKymaBuilder(). + WithModuleStatus(installedModuleStatus). + WithSkipMaintenanceWindows(false). + Build() + moduleTemplate := builder.NewModuleTemplateBuilder(). + WithVersion("1.0.0"). + WithModuleName(installedModuleStatus.Name). + WithRequiresDowntime(true). 
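+		// downtime is required, but the template version equals the installed 1.0.0, so no window should be needed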
+ Build() + + result := maintenanceWindow.IsRequired(moduleTemplate, kyma) + + assert.False(t, result) +} + +func Test_IsRequired_Returns_True_WhenMaintenanceWindowIsRequire(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowInactiveStub{}, + } + + kyma := builder.NewKymaBuilder(). + WithModuleStatus(installedModuleStatus). + WithSkipMaintenanceWindows(false). + Build() + moduleTemplate := builder.NewModuleTemplateBuilder(). + WithVersion("2.0.0"). + WithModuleName(installedModuleStatus.Name). + WithRequiresDowntime(true). + Build() + + result := maintenanceWindow.IsRequired(moduleTemplate, kyma) + + assert.True(t, result) +} + +func Test_IsActive_Returns_Error_WhenResolvingMaintenanceWindowPolicyFails(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowErrorStub{}, + } + + kyma := builder.NewKymaBuilder().Build() + + result, err := maintenanceWindow.IsActive(kyma) + + assert.False(t, result) + require.Error(t, err) +} + +func Test_IsActive_Returns_False_WhenOutsideMaintenanceWindow(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowInactiveStub{}, + } + + kyma := builder.NewKymaBuilder().Build() + + result, err := maintenanceWindow.IsActive(kyma) + + assert.False(t, result) + require.NoError(t, err) +} + +func Test_IsActive_Returns_True_WhenInsideMaintenanceWindow(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowActiveStub{}, + } + + kyma := builder.NewKymaBuilder().Build() + + result, err := maintenanceWindow.IsActive(kyma) + + assert.True(t, result) + require.NoError(t, err) +} + +func Test_IsActive_PassesRuntimeArgumentCorrectly(t *testing.T) { + receivedRuntime := resolver.Runtime{} + maintenanceWindowPolicyStub := maintenanceWindowRuntimeArgStub{ + receivedRuntime: &receivedRuntime, + } + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: maintenanceWindowPolicyStub, + } + + runtime := resolver.Runtime{ + GlobalAccountID: random.Name(), + Region: random.Name(), + PlatformRegion: random.Name(), + Plan: random.Name(), + } + kyma := builder.NewKymaBuilder(). + WithLabel(shared.GlobalAccountIDLabel, runtime.GlobalAccountID). + WithLabel(shared.RegionLabel, runtime.Region). + WithLabel(shared.PlatformRegionLabel, runtime.PlatformRegion). + WithLabel(shared.PlanLabel, runtime.Plan). 
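+		// the labels above populate the resolver.Runtime that IsActive hands to Resolve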
+ Build() + + result, err := maintenanceWindow.IsActive(kyma) + + assert.False(t, result) + require.NoError(t, err) + assert.Equal(t, runtime, receivedRuntime) +} + +func Test_IsActive_Returns_False_And_Error_WhenNoPolicyConfigured(t *testing.T) { + maintenanceWindow := maintenancewindows.MaintenanceWindow{ + MaintenanceWindowPolicy: nil, + } + + kyma := builder.NewKymaBuilder().Build() + + result, err := maintenanceWindow.IsActive(kyma) + + assert.False(t, result) + require.ErrorIs(t, err, maintenancewindows.ErrNoMaintenanceWindowPolicyConfigured) +} + +// test stubs + +type maintenanceWindowInactiveStub struct{} + +func (s maintenanceWindowInactiveStub) Resolve(runtime *resolver.Runtime, opts ...interface{}) (*resolver.ResolvedWindow, error) { + return &resolver.ResolvedWindow{ + Begin: time.Now().Add(1 * time.Hour), + End: time.Now().Add(2 * time.Hour), + }, nil +} + +type maintenanceWindowActiveStub struct{} + +func (s maintenanceWindowActiveStub) Resolve(runtime *resolver.Runtime, opts ...interface{}) (*resolver.ResolvedWindow, error) { + return &resolver.ResolvedWindow{ + Begin: time.Now().Add(-1 * time.Hour), + End: time.Now().Add(1 * time.Hour), + }, nil +} + +type maintenanceWindowErrorStub struct{} + +func (s maintenanceWindowErrorStub) Resolve(runtime *resolver.Runtime, opts ...interface{}) (*resolver.ResolvedWindow, error) { + return &resolver.ResolvedWindow{}, errors.New("test error") +} + +type maintenanceWindowRuntimeArgStub struct { + receivedRuntime *resolver.Runtime +} + +func (s maintenanceWindowRuntimeArgStub) Resolve(runtime *resolver.Runtime, opts ...interface{}) (*resolver.ResolvedWindow, error) { + *s.receivedRuntime = *runtime + + return &resolver.ResolvedWindow{}, nil +} diff --git a/pkg/templatelookup/regular.go b/pkg/templatelookup/regular.go index 79a759e060..f617215507 100644 --- a/pkg/templatelookup/regular.go +++ b/pkg/templatelookup/regular.go @@ -25,22 +25,33 @@ var ( ErrTemplateUpdateNotAllowed = errors.New("module template update not allowed") ) +type MaintenanceWindow interface { + IsRequired(moduleTemplate *v1beta2.ModuleTemplate, kyma *v1beta2.Kyma) bool + IsActive(kyma *v1beta2.Kyma) (bool, error) +} + type ModuleTemplateInfo struct { *v1beta2.ModuleTemplate - Err error - DesiredChannel string + Err error + WaitingForNextMaintenanceWindow bool + DesiredChannel string } -func NewTemplateLookup(reader client.Reader, descriptorProvider *provider.CachedDescriptorProvider) *TemplateLookup { +func NewTemplateLookup(reader client.Reader, + descriptorProvider *provider.CachedDescriptorProvider, + maintenanceWindow MaintenanceWindow, +) *TemplateLookup { return &TemplateLookup{ Reader: reader, descriptorProvider: descriptorProvider, + maintenanceWindow: maintenanceWindow, } } type TemplateLookup struct { client.Reader descriptorProvider *provider.CachedDescriptorProvider + maintenanceWindow MaintenanceWindow } type ModuleTemplatesByModuleName map[string]*ModuleTemplateInfo diff --git a/pkg/templatelookup/regular_test.go b/pkg/templatelookup/regular_test.go index a024b8f25d..98a9175309 100644 --- a/pkg/templatelookup/regular_test.go +++ b/pkg/templatelookup/regular_test.go @@ -332,7 +332,7 @@ func Test_GetRegularTemplates_WhenInvalidModuleProvided(t *testing.T) { for _, tt := range tests { test := tt t.Run(tt.name, func(t *testing.T) { - lookup := templatelookup.NewTemplateLookup(nil, provider.NewCachedDescriptorProvider()) + lookup := templatelookup.NewTemplateLookup(nil, provider.NewCachedDescriptorProvider(), maintenanceWindowStub{}) kyma := &v1beta2.Kyma{ 
Spec: test.KymaSpec, Status: test.KymaStatus, @@ -466,7 +466,8 @@ func TestTemplateLookup_GetRegularTemplates_WhenSwitchModuleChannel(t *testing.T t.Run(testCase.name, func(t *testing.T) { lookup := templatelookup.NewTemplateLookup(NewFakeModuleTemplateReader(testCase.availableModuleTemplate, testCase.availableModuleReleaseMeta), - provider.NewCachedDescriptorProvider()) + provider.NewCachedDescriptorProvider(), + maintenanceWindowStub{}) got := lookup.GetRegularTemplates(context.TODO(), testCase.kyma) assert.Equal(t, len(got), len(testCase.want)) for key, module := range got { @@ -539,7 +540,8 @@ func TestTemplateLookup_GetRegularTemplates_WhenSwitchBetweenModuleVersions(t *t t.Run(testCase.name, func(t *testing.T) { lookup := templatelookup.NewTemplateLookup(NewFakeModuleTemplateReader(availableModuleTemplates, availableModuleReleaseMetas), - provider.NewCachedDescriptorProvider()) + provider.NewCachedDescriptorProvider(), + maintenanceWindowStub{}) got := lookup.GetRegularTemplates(context.TODO(), testCase.kyma) assert.Len(t, got, 1) for key, module := range got { @@ -631,7 +633,8 @@ func TestTemplateLookup_GetRegularTemplates_WhenSwitchFromChannelToVersion(t *te t.Run(testCase.name, func(t *testing.T) { lookup := templatelookup.NewTemplateLookup(NewFakeModuleTemplateReader(availableModuleTemplates, availableModuleReleaseMetas), - provider.NewCachedDescriptorProvider()) + provider.NewCachedDescriptorProvider(), + maintenanceWindowStub{}) got := lookup.GetRegularTemplates(context.TODO(), testCase.kyma) assert.Len(t, got, 1) for key, module := range got { @@ -723,7 +726,8 @@ func TestTemplateLookup_GetRegularTemplates_WhenSwitchFromVersionToChannel(t *te t.Run(testCase.name, func(t *testing.T) { lookup := templatelookup.NewTemplateLookup(NewFakeModuleTemplateReader(availableModuleTemplates, availableModuleReleaseMetas), - provider.NewCachedDescriptorProvider()) + provider.NewCachedDescriptorProvider(), + maintenanceWindowStub{}) got := lookup.GetRegularTemplates(context.TODO(), testCase.kyma) assert.Len(t, got, 1) for key, module := range got { @@ -836,7 +840,8 @@ func TestNewTemplateLookup_GetRegularTemplates_WhenModuleTemplateContainsInvalid } lookup := templatelookup.NewTemplateLookup(NewFakeModuleTemplateReader(*givenTemplateList, moduleReleaseMetas), - provider.NewCachedDescriptorProvider()) + provider.NewCachedDescriptorProvider(), + maintenanceWindowStub{}) got := lookup.GetRegularTemplates(context.TODO(), testCase.kyma) assert.Equal(t, len(got), len(testCase.want)) for key, module := range got { @@ -898,7 +903,8 @@ func TestTemplateLookup_GetRegularTemplates_WhenModuleTemplateNotFound(t *testin givenTemplateList := &v1beta2.ModuleTemplateList{} lookup := templatelookup.NewTemplateLookup(NewFakeModuleTemplateReader(*givenTemplateList, v1beta2.ModuleReleaseMetaList{}), - provider.NewCachedDescriptorProvider()) + provider.NewCachedDescriptorProvider(), + maintenanceWindowStub{}) got := lookup.GetRegularTemplates(context.TODO(), testCase.kyma) assert.Equal(t, len(got), len(testCase.want)) for key, module := range got { @@ -1035,7 +1041,8 @@ func TestTemplateLookup_GetRegularTemplates_WhenModuleTemplateExists(t *testing. 
} lookup := templatelookup.NewTemplateLookup(NewFakeModuleTemplateReader(*givenTemplateList, moduleReleaseMetas), - provider.NewCachedDescriptorProvider()) + provider.NewCachedDescriptorProvider(), + maintenanceWindowStub{}) got := lookup.GetRegularTemplates(context.TODO(), testCase.kyma) assert.Equal(t, len(got), len(testCase.want)) for key, module := range got { @@ -1192,3 +1199,13 @@ func (mtlb *ModuleTemplateListBuilder) Build() v1beta2.ModuleTemplateList { func moduleToInstallByVersion(moduleName, moduleVersion string) v1beta2.Module { return testutils.NewTestModuleWithChannelVersion(moduleName, "", moduleVersion) } + +type maintenanceWindowStub struct{} + +func (m maintenanceWindowStub) IsRequired(moduleTemplate *v1beta2.ModuleTemplate, kyma *v1beta2.Kyma) bool { + return false +} + +func (m maintenanceWindowStub) IsActive(kyma *v1beta2.Kyma) (bool, error) { + return false, nil +} diff --git a/pkg/testutils/builder/kyma.go b/pkg/testutils/builder/kyma.go index 2b1cbba064..219029a1e2 100644 --- a/pkg/testutils/builder/kyma.go +++ b/pkg/testutils/builder/kyma.go @@ -122,6 +122,12 @@ func (kb KymaBuilder) WithInternal(internal bool) KymaBuilder { return kb } +// WithSkipMaintenanceWindows sets v1beta2.Kyma.Spec.SkipMaintenanceWindows. +func (kb KymaBuilder) WithSkipMaintenanceWindows(skip bool) KymaBuilder { + kb.kyma.Spec.SkipMaintenanceWindows = skip + return kb +} + // Build returns the built v1beta2.Kyma. func (kb KymaBuilder) Build() *v1beta2.Kyma { return kb.kyma diff --git a/pkg/testutils/builder/moduletemplate.go b/pkg/testutils/builder/moduletemplate.go index 2ba4bf1b5a..b9c87ac5d5 100644 --- a/pkg/testutils/builder/moduletemplate.go +++ b/pkg/testutils/builder/moduletemplate.go @@ -141,6 +141,11 @@ func (m ModuleTemplateBuilder) WithOCMPrivateRepo() ModuleTemplateBuilder { return m } +func (m ModuleTemplateBuilder) WithRequiresDowntime(value bool) ModuleTemplateBuilder { + m.moduleTemplate.Spec.RequiresDowntime = value + return m +} + func (m ModuleTemplateBuilder) Build() *v1beta2.ModuleTemplate { return m.moduleTemplate } diff --git a/pkg/testutils/moduletemplate.go b/pkg/testutils/moduletemplate.go index 395020c25c..707dcb8a07 100644 --- a/pkg/testutils/moduletemplate.go +++ b/pkg/testutils/moduletemplate.go @@ -33,7 +33,8 @@ func GetModuleTemplate(ctx context.Context, namespace string, ) (*v1beta2.ModuleTemplate, error) { descriptorProvider := provider.NewCachedDescriptorProvider() - templateLookup := templatelookup.NewTemplateLookup(clnt, descriptorProvider) + // replace maintenancePolicyHandlerStub with proper implementation for tests + templateLookup := templatelookup.NewTemplateLookup(clnt, descriptorProvider, maintenanceWindowStub{}) availableModule := templatelookup.ModuleInfo{ Module: module, } @@ -170,3 +171,13 @@ func ReadModuleVersionFromModuleTemplate(ctx context.Context, clnt client.Client return ocmDesc.Version, nil } + +type maintenanceWindowStub struct{} + +func (m maintenanceWindowStub) IsRequired(moduleTemplate *v1beta2.ModuleTemplate, kyma *v1beta2.Kyma) bool { + return false +} + +func (m maintenanceWindowStub) IsActive(kyma *v1beta2.Kyma) (bool, error) { + return false, nil +} diff --git a/tests/integration/controller/kcp/suite_test.go b/tests/integration/controller/kcp/suite_test.go index a29f9c504e..784209be82 100644 --- a/tests/integration/controller/kcp/suite_test.go +++ b/tests/integration/controller/kcp/suite_test.go @@ -42,11 +42,13 @@ import ( "github.com/kyma-project/lifecycle-manager/internal/crd" 
"github.com/kyma-project/lifecycle-manager/internal/descriptor/provider" "github.com/kyma-project/lifecycle-manager/internal/event" + "github.com/kyma-project/lifecycle-manager/internal/maintenancewindows" "github.com/kyma-project/lifecycle-manager/internal/pkg/flags" "github.com/kyma-project/lifecycle-manager/internal/pkg/metrics" "github.com/kyma-project/lifecycle-manager/internal/remote" "github.com/kyma-project/lifecycle-manager/pkg/log" "github.com/kyma-project/lifecycle-manager/pkg/queue" + "github.com/kyma-project/lifecycle-manager/pkg/templatelookup" . "github.com/kyma-project/lifecycle-manager/pkg/testutils" "github.com/kyma-project/lifecycle-manager/tests/integration" testskrcontext "github.com/kyma-project/lifecycle-manager/tests/integration/commontestutils/skrcontextimpl" @@ -82,7 +84,8 @@ func TestAPIs(t *testing.T) { var _ = BeforeSuite(func() { ctx, cancel = context.WithCancel(context.TODO()) - logf.SetLogger(log.ConfigLogger(9, zapcore.AddSync(GinkgoWriter))) + logr := log.ConfigLogger(9, zapcore.AddSync(GinkgoWriter)) + logf.SetLogger(logr) var err error By("bootstrapping test environment") @@ -140,6 +143,7 @@ var _ = BeforeSuite(func() { testSkrContextFactory = testskrcontext.NewDualClusterFactory(kcpClient.Scheme(), testEventRec) descriptorProvider = provider.NewCachedDescriptorProvider() crdCache = crd.NewCache(nil) + maintenanceWindow, _ := maintenancewindows.InitializeMaintenanceWindow(logr, "/not-required", "not-required") err = (&kyma.Reconciler{ Client: kcpClient, SkrContextFactory: testSkrContextFactory, @@ -152,6 +156,7 @@ var _ = BeforeSuite(func() { IsManagedKyma: true, Metrics: metrics.NewKymaMetrics(metrics.NewSharedMetrics()), RemoteCatalog: remote.NewRemoteCatalogFromKyma(kcpClient, testSkrContextFactory, flags.DefaultRemoteSyncNamespace), + TemplateLookup: templatelookup.NewTemplateLookup(kcpClient, descriptorProvider, maintenanceWindow), }).SetupWithManager(mgr, ctrlruntime.Options{}, kyma.SetupOptions{ListenerAddr: UseRandomPort}) Expect(err).ToNot(HaveOccurred()) diff --git a/tests/integration/controller/kyma/suite_test.go b/tests/integration/controller/kyma/suite_test.go index 22f39c1b08..0e43ca2a3f 100644 --- a/tests/integration/controller/kyma/suite_test.go +++ b/tests/integration/controller/kyma/suite_test.go @@ -41,11 +41,13 @@ import ( "github.com/kyma-project/lifecycle-manager/internal/controller/kyma" "github.com/kyma-project/lifecycle-manager/internal/descriptor/provider" "github.com/kyma-project/lifecycle-manager/internal/event" + "github.com/kyma-project/lifecycle-manager/internal/maintenancewindows" "github.com/kyma-project/lifecycle-manager/internal/pkg/flags" "github.com/kyma-project/lifecycle-manager/internal/pkg/metrics" "github.com/kyma-project/lifecycle-manager/internal/remote" "github.com/kyma-project/lifecycle-manager/pkg/log" "github.com/kyma-project/lifecycle-manager/pkg/queue" + "github.com/kyma-project/lifecycle-manager/pkg/templatelookup" "github.com/kyma-project/lifecycle-manager/tests/integration" testskrcontext "github.com/kyma-project/lifecycle-manager/tests/integration/commontestutils/skrcontextimpl" @@ -80,7 +82,8 @@ func TestAPIs(t *testing.T) { var _ = BeforeSuite(func() { ctx, cancel = context.WithCancel(context.TODO()) - logf.SetLogger(log.ConfigLogger(9, zapcore.AddSync(GinkgoWriter))) + logr := log.ConfigLogger(9, zapcore.AddSync(GinkgoWriter)) + logf.SetLogger(logr) By("bootstrapping test environment") @@ -134,6 +137,7 @@ var _ = BeforeSuite(func() { kcpClient = mgr.GetClient() testEventRec := 
event.NewRecorderWrapper(mgr.GetEventRecorderFor(shared.OperatorName)) testSkrContextFactory := testskrcontext.NewSingleClusterFactory(kcpClient, mgr.GetConfig(), testEventRec) + maintenanceWindow, _ := maintenancewindows.InitializeMaintenanceWindow(logr, "/not-required", "/not-required") err = (&kyma.Reconciler{ Client: kcpClient, Event: testEventRec, @@ -144,6 +148,7 @@ var _ = BeforeSuite(func() { InKCPMode: false, RemoteSyncNamespace: flags.DefaultRemoteSyncNamespace, Metrics: metrics.NewKymaMetrics(metrics.NewSharedMetrics()), + TemplateLookup: templatelookup.NewTemplateLookup(kcpClient, descriptorProvider, maintenanceWindow), }).SetupWithManager(mgr, ctrlruntime.Options{ RateLimiter: internal.RateLimiter( 1*time.Second, 5*time.Second, From c525f4e1002aa8ab3cf0adfb585222ce301ee4fd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Tue, 21 Jan 2025 08:52:43 +0100 Subject: [PATCH 05/13] title numbering and casing --- docs/contributor/04-local-test-setup.md | 45 +++++++++++++------------ 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 928cf824da..c998239400 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -1,8 +1,5 @@ # Local Test Setup in the Control Plane Mode Using k3d -> ### Supported Versions -> For more information on the tooling versions expected in the project, see [`versions.yaml`](../../versions.yaml). - ## Context This tutorial shows how to configure a fully working e2e test setup including the following components: @@ -34,7 +31,7 @@ The following tooling is required in the versions defined in [`versions.yaml`](. Execute the following scripts from the project root. -## Create Test Clusters +### 1. Create Test Clusters Create local test clusters for SKR and KCP. @@ -44,7 +41,7 @@ CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml) ./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION ``` -## Install the CRDs +### 2. Install the CRDs Install the CRDs to the KCP cluster. @@ -52,7 +49,9 @@ Install the CRDs to the KCP cluster. ./scripts/tests/install_crds.sh ``` -## Deploy lifecycle-manager +### 3. Deploy lifecycle-manager + +#### 3.1 Deploy lifecycle-manager from a Registry Deploy a built image from the registry, e.g. the `latest` image from the `prod` registry. @@ -62,38 +61,40 @@ TAG=latest ./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG ``` -OR build a new image from the local sources, push it to the local KCP registry and deploy it. +#### 3.2 Deploy lifecycle-manager from Local Sources + +Build a new image from the local sources, push it to the local KCP registry and deploy it. ```sh ./scripts/tests/deploy_klm_from_sources.sh ``` -## Deploy a Kyma CR +### 4. Deploy a Kyma CR ```sh SKR_HOST=host.k3d.internal ./scripts/tests/deploy_kyma.sh $SKR_HOST ``` -## Verify if the Kyma becomes Ready +### 5. Verify If The Kyma Becomes Ready -Verify Kyma is Ready in KCP (takes roughly 1-2 minutes). +#### 5.1 Verify If Kyma Is Ready in KCP (Takes Roughly 1–2 Minutes) ```sh kubectl config use-context k3d-kcp kubectl get kyma/kyma-sample -n kcp-system ``` -Verify Kyma is Ready in SKR (takes roughly 1-2 minutes). 
+#### 5.1 Verify If Kyma Is Ready in SKR (Takes Roughly 1-2 Minutes) ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system ``` -## [OPTIONAL] Deploy template-operator module +### 6. [OPTIONAL] Deploy template-operator Module -Build it locally and deploy it. +Build the template-operator module from the local sources, push it to the local KCP registry and deploy it. ```sh cd @@ -110,30 +111,30 @@ cd ./scripts/tests/deploy_modulereleasemeta.sh template-operator regular:$MT_VERSION ``` -## [OPTIONAL] Add the template-operator module to the Kyma CR and verify if it becomes Ready +### 7. [OPTIONAL] Add the template-operator Module to the Kyma CR and Verify If It Becomes Ready -Add the module. +#### 7.1 Add the Module to the Kyma CR Spec ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f - ``` -Verify if the module becomes ready (takes roughly 1-2 minutes). +#### 7.2 Verify If the Module Becomes Ready (Takes Roughly 1–2 Minutes) ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system -o wide ``` -To remove the module again. +#### 7.3 Remove the Module from the Kyma CR Spec ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f - ``` -## [OPTIONAL] Verify conditions +### 8. [OPTIONAL] Verify Conditions Check the conditions of the Kyma. @@ -146,9 +147,9 @@ kubectl config use-context k3d-kcp kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions' ``` -## [OPTIONAL] Verify if watcher events reach KCP +### 9. [OPTIONAL] Verify If Watcher Events Reach KCP -Flick the channel to trigger an event. +#### 9.1 Flick the Channel to Trigger an Event ```sh kubectl config use-context k3d-skr @@ -156,14 +157,14 @@ kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f - ``` - Verify if lifecyle-manger received the event on KCP. +#### 9.2 Verify if lifecyle-manger Received the Event on KCP ```sh kubectl config use-context k3d-kcp kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR" ``` -## [OPTIONAL] Delete the local test clusters +#### 10. [OPTIONAL] Delete the Local Test Clusters Remove the local SKR and KCP test clusters. 
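A note for anyone replaying these steps while reviewing the patch series: between setup and teardown it can help to confirm what actually exists on the machine. The following is a minimal sketch, assuming the default cluster names `kcp` and `skr` used by the scripts; the expected results described in the comments are illustrative, not authoritative.

```sh
# Both test clusters should be listed after setup and gone after teardown.
k3d cluster list

# The kubectl contexts created by k3d follow the k3d-<name> pattern.
kubectl config get-contexts -o name | grep -E 'k3d-(kcp|skr)' || echo "no k3d test contexts found"
```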
From d1a8ac1bb363388e2fcdbd3396670a8b137d92d5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Wed, 22 Jan 2025 13:21:45 +0100 Subject: [PATCH 06/13] Apply suggestions from code review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Małgorzata Świeca --- docs/contributor/04-local-test-setup.md | 58 ++++++++++++------------- 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index c998239400..f7c05f64e3 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -1,4 +1,4 @@ -# Local Test Setup in the Control Plane Mode Using k3d +# Configure a Local Test Setup ## Context @@ -15,21 +15,21 @@ This setup is deployed with the following security features enabled: ## Prerequisites -The following tooling is required in the versions defined in [`versions.yaml`](../../versions.yaml): +Install the following tooling in the versions defined in [`versions.yaml`](../../versions.yaml): -- docker -- go -- golangci-lint -- istioctl -- k3d -- kubectl -- kustomize +- [Docker](https://www.docker.com/) +- [Go](https://go.dev/) +- [golangci-lint](https://golangci-lint.run/) +- [istioctl](https://istio.io/latest/docs/ops/diagnostic-tools/istioctl/) +- [k3d](https://k3d.io/stable/) +- [kubectl](https://kubernetes.io/docs/tasks/tools/) +- [kustomize](https://kustomize.io/) - [modulectl](https://github.com/kyma-project/modulectl) -- yq +- [yq](https://github.com/mikefarah/yq/tree/master) ## Procedure -Execute the following scripts from the project root. +Follow the steps using scripts from the project root. ### 1. Create Test Clusters @@ -41,7 +41,7 @@ CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml) ./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION ``` -### 2. Install the CRDs +### 2. Install the Custom Resource Definitions Install the CRDs to the KCP cluster. @@ -49,11 +49,12 @@ Install the CRDs to the KCP cluster. ./scripts/tests/install_crds.sh ``` -### 3. Deploy lifecycle-manager +### 3. Deploy Lifecycle Manager -#### 3.1 Deploy lifecycle-manager from a Registry +You can deploy Lifecycle Manager either from the registry or local sources. Choose one of the below options: + +3.1 Deploy a built image from the registry, for example, the `latest` image from the `prod` registry. -Deploy a built image from the registry, e.g. the `latest` image from the `prod` registry. ```sh REGISTRY=prod @@ -61,9 +62,8 @@ TAG=latest ./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG ``` -#### 3.2 Deploy lifecycle-manager from Local Sources +3.2 Build a new image from the local sources, push it to the local KCP registry, and deploy it. -Build a new image from the local sources, push it to the local KCP registry and deploy it. ```sh ./scripts/tests/deploy_klm_from_sources.sh @@ -76,16 +76,16 @@ SKR_HOST=host.k3d.internal ./scripts/tests/deploy_kyma.sh $SKR_HOST ``` -### 5. Verify If The Kyma Becomes Ready +### 5. Verify If the Kyma CR Becomes Ready -#### 5.1 Verify If Kyma Is Ready in KCP (Takes Roughly 1–2 Minutes) +5.1 Verify if the Kyma CR is in the `Ready` state in KCP. It takes roughly 1–2 minutes. 
```sh
 kubectl config use-context k3d-kcp
 kubectl get kyma/kyma-sample -n kcp-system
 ```

-#### 5.1 Verify If Kyma Is Ready in SKR (Takes Roughly 1-2 Minutes)
+5.1 Verify if the Kyma CR is in the `Ready` state in SKR. It takes roughly 1-2 minutes.

 ```sh
 kubectl config use-context k3d-skr
 kubectl get kyma/default -n kyma-system
 ```
@@ -94,7 +94,7 @@ kubectl get kyma/default -n kyma-system

 ### 6. [OPTIONAL] Deploy template-operator Module

-Build the template-operator module from the local sources, push it to the local KCP registry and deploy it.
+Build the template-operator module from the local sources, push it to the local KCP registry, and deploy it.

 ```sh
 cd 
@@ -113,21 +113,21 @@ cd 

 ### 7. [OPTIONAL] Add the template-operator Module to the Kyma CR and Verify If It Becomes Ready

-#### 7.1 Add the Module to the Kyma CR Spec
+7.1 Add the template-operator module to the Kyma CR spec.

 ```sh
 kubectl config use-context k3d-skr
 kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f -
 ```

-#### 7.2 Verify If the Module Becomes Ready (Takes Roughly 1–2 Minutes)
+7.2 Verify if the module becomes `Ready`. It takes roughly 1–2 minutes.

 ```sh
 kubectl config use-context k3d-skr
 kubectl get kyma/default -n kyma-system -o wide
 ```

-#### 7.3 Remove the Module from the Kyma CR Spec
+7.3 Remove the module from the Kyma CR spec.

 ```sh
 kubectl config use-context k3d-skr
 kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f -
 ```
@@ -139,8 +139,8 @@ kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' |

 ### 8. [OPTIONAL] Verify Conditions

 Check the conditions of the Kyma.

 - `SKRWebhook` to determine if the webhook has been installed to the SKR
-- `ModuleCatalog` to determine if the ModuleTemplates and ModuleReleaseMetas haven been synced to the SKR cluster
-- `Modules` to determine if the added modules are ready
+- `ModuleCatalog` to determine if the ModuleTemplate CRs and ModuleReleaseMeta CRs have been synced to the SKR cluster
+- `Modules` to determine if the added modules are `Ready`

 ```sh
 kubectl config use-context k3d-kcp
@@ -149,7 +149,7 @@ kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions'

 ### 9. [OPTIONAL] Verify If Watcher Events Reach KCP

-#### 9.1 Flick the Channel to Trigger an Event
+9.1 Flick the channel to trigger an event.

 ```sh
 kubectl config use-context k3d-skr
@@ -157,14 +157,14 @@ kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"'
 kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f -
 ```

-#### 9.2 Verify if lifecyle-manger Received the Event on KCP
+9.2 Verify if Lifecycle Manager received the event in KCP.

 ```sh
 kubectl config use-context k3d-kcp
 kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR"
 ```

-#### 10. [OPTIONAL] Delete the Local Test Clusters
+### 10. [OPTIONAL] Delete the Local Test Clusters

 Remove the local SKR and KCP test clusters.

From c3c6205a239e758d055f932bc5e0440b52f3fd69 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?=
Date: Wed, 22 Jan 2025 13:24:02 +0100
Subject: [PATCH 07/13] Update docs/contributor/04-local-test-setup.md

---
 docs/contributor/04-local-test-setup.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md
index f7c05f64e3..1dd1b339e3 100644
--- a/docs/contributor/04-local-test-setup.md
+++ b/docs/contributor/04-local-test-setup.md
@@ -136,7 +136,7 @@ kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' |

 ### 8. 
[OPTIONAL] Verify Conditions -Check the conditions of the Kyma. +Check the conditions of the Kyma CR in the KCP cluster. - `SKRWebhook` to determine if the webhook has been installed to the SKR - `ModuleCatalog` to determine if the ModuleTemplate CRs and ModuleReleaseMeta CRs haven been synced to the SKR cluster From 781ab2d745d8539286cff358089cc6871affa4cc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Wed, 22 Jan 2025 13:23:45 +0100 Subject: [PATCH 08/13] indent scripts --- docs/contributor/04-local-test-setup.md | 74 ++++++++++++------------- 1 file changed, 37 insertions(+), 37 deletions(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 1dd1b339e3..8822d46233 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -36,9 +36,9 @@ Follow the steps using scripts from the project root. Create local test clusters for SKR and KCP. ```sh -K8S_VERSION=$(yq e '.k8s' ./versions.yaml) -CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml) -./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION + K8S_VERSION=$(yq e '.k8s' ./versions.yaml) + CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml) + ./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION ``` ### 2. Install the Custom Resource Definitions @@ -46,7 +46,7 @@ CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml) Install the CRDs to the KCP cluster. ```sh -./scripts/tests/install_crds.sh + ./scripts/tests/install_crds.sh ``` ### 3. Deploy Lifecycle Manager @@ -57,23 +57,23 @@ You can deploy Lifecycle Manager either from the registry or local sources. Choo ```sh -REGISTRY=prod -TAG=latest -./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG + REGISTRY=prod + TAG=latest + ./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG ``` 3.2 Build a new image from the local sources, push it to the local KCP registry, and deploy it. ```sh -./scripts/tests/deploy_klm_from_sources.sh + ./scripts/tests/deploy_klm_from_sources.sh ``` ### 4. Deploy a Kyma CR ```sh -SKR_HOST=host.k3d.internal -./scripts/tests/deploy_kyma.sh $SKR_HOST + SKR_HOST=host.k3d.internal + ./scripts/tests/deploy_kyma.sh $SKR_HOST ``` ### 5. Verify If the Kyma CR Becomes Ready @@ -81,15 +81,15 @@ SKR_HOST=host.k3d.internal 5.1 Verify if the Kyma CR is in the `Ready` state in KCP. It takes roughly 1–2 minutes. ```sh -kubectl config use-context k3d-kcp -kubectl get kyma/kyma-sample -n kcp-system + kubectl config use-context k3d-kcp + kubectl get kyma/kyma-sample -n kcp-system ``` 5.1 Verify if the Kyma CR is in the `Ready` state in SKR. It takes roughly 1-2 minutes. ```sh -kubectl config use-context k3d-skr -kubectl get kyma/default -n kyma-system + kubectl config use-context k3d-skr + kubectl get kyma/default -n kyma-system ``` ### 6. [OPTIONAL] Deploy template-operator Module @@ -97,18 +97,18 @@ kubectl get kyma/default -n kyma-system Build the template-operator module from the local sources, push it to the local KCP registry, and deploy it. 
```sh
-cd 
+    cd 

-make build-manifests
-modulectl create --config-file ./module-config.yaml --registry http://localhost:5111 --insecure
+    make build-manifests
+    modulectl create --config-file ./module-config.yaml --registry http://localhost:5111 --insecure

-kubectl config use-context k3d-kcp
-# repository URL is localhost:5111 on the host machine but must be k3d-kcp-registry.localhost:5000 within the cluster
-yq e '.spec.descriptor.component.repositoryContexts[0].baseUrl = "k3d-kcp-registry.localhost:5000"' ./template.yaml | kubectl apply -f -
+    kubectl config use-context k3d-kcp
+    # repository URL is localhost:5111 on the host machine but must be k3d-kcp-registry.localhost:5000 within the cluster
+    yq e '.spec.descriptor.component.repositoryContexts[0].baseUrl = "k3d-kcp-registry.localhost:5000"' ./template.yaml | kubectl apply -f -

-MT_VERSION=$(yq e '.spec.version' ./template.yaml)
-cd 
-./scripts/tests/deploy_modulereleasemeta.sh template-operator regular:$MT_VERSION
+    MT_VERSION=$(yq e '.spec.version' ./template.yaml)
+    cd 
+    ./scripts/tests/deploy_modulereleasemeta.sh template-operator regular:$MT_VERSION
 ```

 ### 7. [OPTIONAL] Add the template-operator Module to the Kyma CR and Verify If It Becomes Ready

 7.1 Add the template-operator module to the Kyma CR spec.

 ```sh
-kubectl config use-context k3d-skr
-kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f -
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f -
 ```

 7.2 Verify if the module becomes `Ready`. It takes roughly 1–2 minutes.

 ```sh
-kubectl config use-context k3d-skr
-kubectl get kyma/default -n kyma-system -o wide
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o wide
 ```

 7.3 Remove the module from the Kyma CR spec.

 ```sh
-kubectl config use-context k3d-skr
-kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f -
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f -
 ```

 ### 8. [OPTIONAL] Verify Conditions

 Check the conditions of the Kyma CR in the KCP cluster.

 - `SKRWebhook` to determine if the webhook has been installed to the SKR
 - `ModuleCatalog` to determine if the ModuleTemplate CRs and ModuleReleaseMeta CRs have been synced to the SKR cluster
 - `Modules` to determine if the added modules are `Ready`

 ```sh
-kubectl config use-context k3d-kcp
-kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions'
+    kubectl config use-context k3d-kcp
+    kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions'
 ```

 ### 9. [OPTIONAL] Verify If Watcher Events Reach KCP

 9.1 Flick the channel to trigger an event.

 ```sh
-kubectl config use-context k3d-skr
-kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' | kubectl apply -f -
-kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f -
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' | kubectl apply -f -
+    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f -
 ```

 9.2 Verify if Lifecycle Manager received the event in KCP. 
```sh -kubectl config use-context k3d-kcp -kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR" + kubectl config use-context k3d-kcp + kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR" ``` ### 10. [OPTIONAL] Delete the Local Test Clusters @@ -169,5 +169,5 @@ kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received Remove the local SKR and KCP test clusters. ```shell -k3d cluster rm kcp skr + k3d cluster rm kcp skr ``` From 45882bd63c93faf869cdee970f4a85ad108973a0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Wed, 22 Jan 2025 13:25:43 +0100 Subject: [PATCH 09/13] link CRDs --- docs/contributor/04-local-test-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 8822d46233..54836b6ade 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -43,7 +43,7 @@ Create local test clusters for SKR and KCP. ### 2. Install the Custom Resource Definitions -Install the CRDs to the KCP cluster. +Install the [Lifecycle Manager CRDs](./resources/README.md) to the KCP cluster. ```sh ./scripts/tests/install_crds.sh From b0980c31fef7f643a8870c7ce1c1195107fc1f23 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Wed, 22 Jan 2025 13:48:28 +0100 Subject: [PATCH 10/13] Update docs/contributor/04-local-test-setup.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Małgorzata Świeca --- docs/contributor/04-local-test-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 54836b6ade..325b580d28 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -43,7 +43,7 @@ Create local test clusters for SKR and KCP. ### 2. Install the Custom Resource Definitions -Install the [Lifecycle Manager CRDs](./resources/README.md) to the KCP cluster. +Install the [Lifecycle Manager CRDs](./resources/README.md) in the KCP cluster. ```sh ./scripts/tests/install_crds.sh From 075a967aff14be3c74252580311d63a4863d6c6d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Wed, 22 Jan 2025 13:48:35 +0100 Subject: [PATCH 11/13] Update docs/contributor/04-local-test-setup.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Małgorzata Świeca --- docs/contributor/04-local-test-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 325b580d28..3ac4a39875 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -92,7 +92,7 @@ You can deploy Lifecycle Manager either from the registry or local sources. Choo kubectl get kyma/default -n kyma-system ``` -### 6. [OPTIONAL] Deploy template-operator Module +### 6. [OPTIONAL] Deploy the template-operator Module Build the template-operator module from the local sources, push it to the local KCP registry, and deploy it. 
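Once the template-operator artifacts are pushed and applied, it may be worth confirming that they actually landed in KCP before wiring the module into a Kyma CR. This is a sketch under the assumption that the CRDs expose the plural resource names `moduletemplates` and `modulereleasemetas`, and that the ModuleReleaseMeta spec keeps its channel mapping under `.spec.channels`; adjust if the installed API differs.

```sh
kubectl config use-context k3d-kcp

# The ModuleTemplate and ModuleReleaseMeta CRs created above are expected in kcp-system.
kubectl get moduletemplates -n kcp-system
kubectl get modulereleasemetas -n kcp-system

# Inspect the channel-to-version mapping written by deploy_modulereleasemeta.sh.
kubectl get modulereleasemeta template-operator -n kcp-system -o yaml | yq e '.spec.channels'
```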
From 60d3254e880cbe94ebd945d750dee576ad2df7e1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?= Date: Wed, 22 Jan 2025 13:51:01 +0100 Subject: [PATCH 12/13] fix indentation --- docs/contributor/04-local-test-setup.md | 60 ++++++++++++------------- 1 file changed, 30 insertions(+), 30 deletions(-) diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md index 3ac4a39875..411db221de 100644 --- a/docs/contributor/04-local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -35,19 +35,19 @@ Follow the steps using scripts from the project root. Create local test clusters for SKR and KCP. -```sh + ```sh K8S_VERSION=$(yq e '.k8s' ./versions.yaml) CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml) ./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION -``` + ``` ### 2. Install the Custom Resource Definitions Install the [Lifecycle Manager CRDs](./resources/README.md) in the KCP cluster. -```sh + ```sh ./scripts/tests/install_crds.sh -``` + ``` ### 3. Deploy Lifecycle Manager @@ -56,47 +56,47 @@ You can deploy Lifecycle Manager either from the registry or local sources. Choo 3.1 Deploy a built image from the registry, for example, the `latest` image from the `prod` registry. -```sh + ```sh REGISTRY=prod TAG=latest ./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG -``` + ``` 3.2 Build a new image from the local sources, push it to the local KCP registry, and deploy it. -```sh + ```sh ./scripts/tests/deploy_klm_from_sources.sh -``` + ``` ### 4. Deploy a Kyma CR -```sh + ```sh SKR_HOST=host.k3d.internal ./scripts/tests/deploy_kyma.sh $SKR_HOST -``` + ``` ### 5. Verify If the Kyma CR Becomes Ready 5.1 Verify if the Kyma CR is in the `Ready` state in KCP. It takes roughly 1–2 minutes. -```sh + ```sh kubectl config use-context k3d-kcp kubectl get kyma/kyma-sample -n kcp-system -``` + ``` 5.1 Verify if the Kyma CR is in the `Ready` state in SKR. It takes roughly 1-2 minutes. -```sh + ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system -``` + ``` ### 6. [OPTIONAL] Deploy the template-operator Module Build the template-operator module from the local sources, push it to the local KCP registry, and deploy it. -```sh + ```sh cd make build-manifests @@ -109,30 +109,30 @@ Build the template-operator module from the local sources, push it to the local MT_VERSION=$(yq e '.spec.version' ./template.yaml) cd ./scripts/tests/deploy_modulereleasemeta.sh template-operator regular:$MT_VERSION -``` + ``` ### 7. [OPTIONAL] Add the template-operator Module to the Kyma CR and Verify If It Becomes Ready 7.1 Add the template-operator module to the Kyma CR spec. -```sh + ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f - -``` + ``` 7.2 Verify if the module becomes `Ready`. It takes roughly 1–2 minutes. -```sh + ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system -o wide -``` + ``` 7.3 Remove the module from the Kyma CR spec. -```sh + ```sh kubectl config use-context k3d-skr kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f - -``` + ``` ### 8. [OPTIONAL] Verify Conditions @@ -142,32 +142,32 @@ Check the conditions of the Kyma CR in the KCP cluster. 
- `ModuleCatalog` to determine if the ModuleTemplate CRs and ModuleReleaseMeta CRs have been synced to the SKR cluster
 - `Modules` to determine if the added modules are `Ready`

-```sh
+    ```sh
     kubectl config use-context k3d-kcp
     kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions'
-```
+    ```

 ### 9. [OPTIONAL] Verify If Watcher Events Reach KCP

 9.1 Flick the channel to trigger an event.

-```sh
+    ```sh
     kubectl config use-context k3d-skr
     kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' | kubectl apply -f -
     kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f -
-```
+    ```

 9.2 Verify if Lifecycle Manager received the event in KCP.

-```sh
+    ```sh
     kubectl config use-context k3d-kcp
     kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR"
-```
+    ```

 ### 10. [OPTIONAL] Delete the Local Test Clusters

 Remove the local SKR and KCP test clusters.

-```shell
+    ```shell
     k3d cluster rm kcp skr
-```
+    ```

From 05e0ca78227d57b30d2c7af752ede679b8ec725c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Christoph=20Schw=C3=A4gerl?=
Date: Wed, 22 Jan 2025 15:04:06 +0100
Subject: [PATCH 13/13] fix indentation and numbering

---
 docs/contributor/04-local-test-setup.md | 131 ++++++++++++------------
 1 file changed, 66 insertions(+), 65 deletions(-)

diff --git a/docs/contributor/04-local-test-setup.md b/docs/contributor/04-local-test-setup.md
index 411db221de..718c447bb4 100644
--- a/docs/contributor/04-local-test-setup.md
+++ b/docs/contributor/04-local-test-setup.md
@@ -35,62 +35,63 @@ Follow the steps using scripts from the project root.

 ### 1. Create Test Clusters

 Create local test clusters for SKR and KCP.

-    ```sh
-    K8S_VERSION=$(yq e '.k8s' ./versions.yaml)
-    CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml)
-    ./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION
-    ```
+```sh
+K8S_VERSION=$(yq e '.k8s' ./versions.yaml)
+CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml)
+./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION
+```

 ### 2. Install the Custom Resource Definitions

 Install the [Lifecycle Manager CRDs](./resources/README.md) in the KCP cluster.

-    ```sh
-    ./scripts/tests/install_crds.sh
-    ```
+```sh
+./scripts/tests/install_crds.sh
+```

 ### 3. Deploy Lifecycle Manager

 You can deploy Lifecycle Manager either from the registry or local sources. Choose one of the below options:

-3.1 Deploy a built image from the registry, for example, the `latest` image from the `prod` registry.
+1. Deploy a built image from the registry, for example, the `latest` image from the `prod` registry.

-    ```sh
-    REGISTRY=prod
-    TAG=latest
-    ./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG
-    ```
+    ```sh
+    REGISTRY=prod
+    TAG=latest
+    ./scripts/tests/deploy_klm_from_registry.sh --image-registry $REGISTRY --image-tag $TAG
+    ```

-3.2 Build a new image from the local sources, push it to the local KCP registry, and deploy it.
+1. Build a new image from the local sources, push it to the local KCP registry, and deploy it.

-    ```sh
-    ./scripts/tests/deploy_klm_from_sources.sh
-    ```
+    ```sh
+    ./scripts/tests/deploy_klm_from_sources.sh
+    ```

 ### 4. Deploy a Kyma CR

-    ```sh
-    SKR_HOST=host.k3d.internal
-    ./scripts/tests/deploy_kyma.sh $SKR_HOST
-    ```
+```sh
+SKR_HOST=host.k3d.internal
+./scripts/tests/deploy_kyma.sh $SKR_HOST
+```

 ### 5. 
Verify If the Kyma CR Becomes Ready

-5.1 Verify if the Kyma CR is in the `Ready` state in KCP. It takes roughly 1–2 minutes.
+1. Verify if the Kyma CR is in the `Ready` state in KCP. It takes roughly 1–2 minutes.

-    ```sh
-    kubectl config use-context k3d-kcp
-    kubectl get kyma/kyma-sample -n kcp-system
-    ```
+    ```sh
+    kubectl config use-context k3d-kcp
+    kubectl get kyma/kyma-sample -n kcp-system
+    ```

-5.1 Verify if the Kyma CR is in the `Ready` state in SKR. It takes roughly 1-2 minutes.
+1. Verify if the Kyma CR is in the `Ready` state in SKR. It takes roughly 1–2 minutes.

-    ```sh
-    kubectl config use-context k3d-skr
-    kubectl get kyma/default -n kyma-system
-    ```
+    ```sh
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system
+    ```

 ### 6. [OPTIONAL] Deploy the template-operator Module

 Build the template-operator module from the local sources, push it to the local KCP registry, and deploy it.

 ```sh
 cd 

 make build-manifests
 modulectl create --config-file ./module-config.yaml --registry http://localhost:5111 --insecure

 kubectl config use-context k3d-kcp
 # repository URL is localhost:5111 on the host machine but must be k3d-kcp-registry.localhost:5000 within the cluster
 yq e '.spec.descriptor.component.repositoryContexts[0].baseUrl = "k3d-kcp-registry.localhost:5000"' ./template.yaml | kubectl apply -f -

 MT_VERSION=$(yq e '.spec.version' ./template.yaml)
 cd 
 ./scripts/tests/deploy_modulereleasemeta.sh template-operator regular:$MT_VERSION
 ```

 ### 7. [OPTIONAL] Add the template-operator Module to the Kyma CR and Verify If It Becomes Ready

-7.1 Add the template-operator module to the Kyma CR spec.
+1. Add the template-operator module to the Kyma CR spec.

-    ```sh
-    kubectl config use-context k3d-skr
-    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f -
-    ```
+    ```sh
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.modules[0]={"name": "template-operator"}' | kubectl apply -f -
+    ```

-7.2 Verify if the module becomes `Ready`. It takes roughly 1–2 minutes.
+1. Verify if the module becomes `Ready`. It takes roughly 1–2 minutes.

-    ```sh
-    kubectl config use-context k3d-skr
-    kubectl get kyma/default -n kyma-system -o wide
-    ```
+    ```sh
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o wide
+    ```

-7.3 Remove the module from the Kyma CR spec.
+1. Remove the module from the Kyma CR spec.

-    ```sh
-    kubectl config use-context k3d-skr
-    kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f -
-    ```
+    ```sh
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o yaml | yq e 'del(.spec.modules[0])' | kubectl apply -f -
+    ```

 ### 8. [OPTIONAL] Verify Conditions

 Check the conditions of the Kyma CR in the KCP cluster.

 - `ModuleCatalog` to determine if the ModuleTemplate CRs and ModuleReleaseMeta CRs have been synced to the SKR cluster
 - `Modules` to determine if the added modules are `Ready`

-    ```sh
-    kubectl config use-context k3d-kcp
-    kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions'
-    ```
+```sh
+kubectl config use-context k3d-kcp
+kubectl get kyma/kyma-sample -n kcp-system -o yaml | yq e '.status.conditions'
+```

 ### 9. [OPTIONAL] Verify If Watcher Events Reach KCP

-9.1 Flick the channel to trigger an event.
+1. Flick the channel to trigger an event.

-    ```sh
-    kubectl config use-context k3d-skr
-    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' | kubectl apply -f -
-    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f -
-    ```
+    ```sh
+    kubectl config use-context k3d-skr
+    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="regular"' | kubectl apply -f -
+    kubectl get kyma/default -n kyma-system -o yaml | yq e '.spec.channel="fast"' | kubectl apply -f -
+    ```

-9.2 Verify if Lifecycle Manager received the event in KCP.
+1. 
Verify if Lifecycle Manager received the event in KCP.

    ```sh
    kubectl config use-context k3d-kcp
    kubectl logs deploy/klm-controller-manager -n kcp-system | grep "event received from SKR"
    ```

### 10. [OPTIONAL] Delete the Local Test Clusters

Remove the local SKR and KCP test clusters.

```shell
k3d cluster rm kcp skr
```
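Taken together, the scripted steps lend themselves to an unattended smoke run. Below is a minimal sketch that only chains the scripts documented above with their defaults (`host.k3d.internal` as the SKR host, versions from `versions.yaml`); the `kubectl wait` line assumes the Kyma CR reports its state under `.status.state`.

```sh
K8S_VERSION=$(yq e '.k8s' ./versions.yaml)
CERT_MANAGER_VERSION=$(yq e '.certManager' ./versions.yaml)
./scripts/tests/create_test_clusters.sh --k8s-version $K8S_VERSION --cert-manager-version $CERT_MANAGER_VERSION
./scripts/tests/install_crds.sh
./scripts/tests/deploy_klm_from_sources.sh
./scripts/tests/deploy_kyma.sh host.k3d.internal

# Block until the Kyma CR reports Ready instead of polling by hand.
kubectl config use-context k3d-kcp
kubectl wait kyma/kyma-sample -n kcp-system \
  --for=jsonpath='{.status.state}'=Ready --timeout=300s
```

If the wait times out, the conditions described in step 8 are usually the quickest way to see which part of the reconciliation is stuck.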