This repository has been archived by the owner on Aug 19, 2024. It is now read-only.

Initial Docs (#64)
* yaml/configMap default configuration

* fix make test

* fix with new objects

* fix with new objects

* config small fixes

* fix for #51

* fix for #58

* initial doc commit (WIP)

* initial docs

* initial docs

* Update config/manager/kustomization.yaml

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update docs/developer.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update README.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update README.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update README.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update README.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update README.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update README.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* admin guide

* Update docs/admin.md - typo

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update README.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update docs/admin.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update README.md

Co-authored-by: Armel Soro <armel@rm3l.org>

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* design

* Update design.md

* more docs added and updated

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* fixed examples/bs1.yaml

* added version to config table

* db-service-hl added to admin.yaml

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Fix formatting of ConfigMap keys table in admin.md

* Apply review suggestions

Co-authored-by: Gennady Azarenkov <gazarenkov@redhat.com>
Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update docker/bundle.Dockerfile

* Update docs/admin.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update README.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update README.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update README.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update README.md

Co-authored-by: Nick Boldt <nboldt@redhat.com>

* Update bundle/manifests/backstage-operator.clusterserviceversion.yaml

Co-authored-by: Nick Boldt <nboldt@redhat.com>

---------

Co-authored-by: Armel Soro <armel@rm3l.org>
Co-authored-by: Nick Boldt <nboldt@redhat.com>
Co-authored-by: Armel Soro <asoro@redhat.com>
Co-authored-by: Gennady Azarenkov <gazarenkov@redhat.com>
5 people committed Jan 4, 2024
1 parent 906d808 commit 7f1df4f
Showing 14 changed files with 308 additions and 81 deletions.
115 changes: 40 additions & 75 deletions README.md
````diff
@@ -1,99 +1,64 @@
-# backstage-operator
-Operator for deploying Backstage for Janus-IDP.
+# Backstage Operator
 
-## Description
-Implementing the https://janus-idp.io/docs/deployment/k8s/ procedure.
-At the first stage a CR update does not affect Backstage Objects, just installation (same as Helm).
-TODO: Do we need to continuously sync the states? Which way if so: from CR to Objects, or back, or (somehow) back and forth?
+## The Goal
+The goal of the [Backstage](https://backstage.io) Operator project is to create a Kubernetes Operator for configuring, installing and synchronizing a Backstage instance on Kubernetes/OpenShift.
+The initial target is to support Red Hat's assemblies of Backstage - specifically the [dynamic-plugins](https://github.com/janus-idp/backstage-showcase/blob/main/showcase-docs/dynamic-plugins.md) support on OpenShift. This includes [Janus-IDP](https://janus-idp.io/) and [Red Hat Developer Hub (RHDH)](https://developers.redhat.com/rhdh), but the Operator may be flexible enough to install any compatible Backstage instance on Kubernetes. See additional information under [Configuration](docs/configuration.md).
+The Operator provides clear and flexible configuration options to satisfy a wide range of expectations, from "no configuration for a default quick start" to "highly customized configuration for production".
 
-Make sure the namespace is created.
-
-A local database (PostgreSQL) is created by default; to disable it:
-  spec:
-    skipLocalDb: true
-This way a third-party DB can theoretically be configured. TODO: It just requires some changes in the Backstage appConfig (I think), because it only expects either in-container SQLite or MySQL.
-TODO: should we consider using in-container SQLite for K8s deployment as well (single-container deployment)?
-
-TODO: POSTGRES_HOST = <name-of the service>, POSTGRES_PORT = <port>[5432] can be delivered to the Backstage Deployment out of the Postgres Secret? Indeed, it is not really a secret.
+[More documentation...](#more-documentation)
 
 ## Getting Started
-You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing, or run against a remote cluster.
+You’ll need a Kubernetes or OpenShift cluster. You can use [Minikube](https://minikube.sigs.k8s.io/docs/) or [KIND](https://sigs.k8s.io/kind) for local testing, or deploy to a remote cluster.
 **Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
 
-### Running on the cluster
-1. Install Instances of Custom Resources:
-```sh
-kubectl apply -f config/samples/
-```
-2. Build and push your image to the location specified by `IMG`:
-```sh
-make docker-build docker-push IMG=<some-registry>/backstage-operator:tag
-```
-3. Deploy the controller to the cluster with the image specified by `IMG`:
-```sh
-make deploy IMG=<some-registry>/backstage-operator:tag
-```
-### Uninstall CRDs
-To delete the CRDs from the cluster:
-```sh
-make uninstall
-```
-### Undeploy controller
-Undeploy the controller from the cluster:
-```sh
-make undeploy
-```
-## Build and Deploy with OLM
-1. To build operator, bundle and catalog images:
+To test it on minikube from the source code:
+
+Both **kubectl** and **minikube** must be installed. See [tools](https://kubernetes.io/docs/tasks/tools/).
+
+1. Get your copy of the Operator from GitHub:
 ```sh
-make release-build
+git clone https://github.com/janus-idp/operator
 ```
-2. To push operator, bundle and catalog images to the registry:
+2. Deploy the Operator on the minikube cluster:
 ```sh
-make release-push
+cd <your-janus-idp-operator-project-dir>
+make deploy
 ```
-3. To deploy or update the catalog source:
+You can check whether the Operator pod is up by running:
 ```sh
-make catalog-update
+kubectl get pods -n backstage-system
+
+It should be something like:
+NAME                                           READY   STATUS    RESTARTS   AGE
+backstage-controller-manager-cfc44bdfd-xzk8g   2/2     Running   0          32s
 ```
-4. To deploy the operator with OLM:
+3. Create a Backstage Custom Resource in some namespace (make sure this namespace exists):
 ```sh
-make deploy-olm
+kubectl -n <your-namespace> apply -f examples/bs1.yaml
 ```
-4. To undeploy the operator with OLM:
+You can check whether the Operand pods are up by running:
 ```sh
-make undeploy-olm
-```
-## Contributing
-// TODO(user): Add detailed information on how you would like others to contribute to this project
-
-### How it works
-This project aims to follow the Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/).
+kubectl get pods -n <your-namespace>
+
+It should be something like:
+NAME                         READY   STATUS    RESTARTS   AGE
+backstage-85fc4657b5-lqk6r   1/1     Running   0          78s
+backstage-psql-bs1-0         1/1     Running   0          79s
+```
 
-It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/)
-which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
-
-### Test It Out
-1. Install the CRDs into the cluster:
-```sh
-make install
-```
-2. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
+4. Tunnel the Backstage Service and get a URL to access Backstage:
 ```sh
-make run
+minikube service -n <your-namespace> backstage --url
+
+Output:
+http://127.0.0.1:53245
 ```
-**NOTE:** You can also run this in one step by running: `make install run`
+5. Access your Backstage instance in your browser using this URL.
 
-### Modifying the API definitions
-If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
-```sh
-make manifests
-```
-**NOTE:** Run `make --help` for more information on all potential `make` targets
+## More documentation
+
+- [Openshift deployment](docs/openshift.md)
+- [Configuration](docs/configuration.md)
+- [Developer Guide](docs/developer.md)
+- [Operator Design](docs/design.md)
 
-More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)
 
 ## License
````
bundle/manifests/backstage-operator.clusterserviceversion.yaml
```diff
@@ -21,8 +21,8 @@ metadata:
         }
       ]
     capabilities: Basic Install
-    createdAt: "2023-12-21T16:17:51Z"
-    operators.operatorframework.io/builder: operator-sdk-v1.33.0
+    createdAt: "2024-01-02T10:59:10Z"
+    operators.operatorframework.io/builder: operator-sdk-v1.32.0
     operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
   name: backstage-operator.v0.0.1
   namespace: placeholder
@@ -206,7 +206,7 @@ spec:
                 value: quay.io/fedora/postgresql-15:latest
               - name: RELATED_IMAGE_backstage
                 value: quay.io/janus-idp/backstage-showcase:next
-          image: quay.io/rhdh/backstage-operator:v0.0.1
+          image: quay.io/janus/operator:next
           livenessProbe:
             httpGet:
               path: /healthz
```
2 changes: 1 addition & 1 deletion bundle/metadata/annotations.yaml
```diff
@@ -5,7 +5,7 @@ annotations:
   operators.operatorframework.io.bundle.metadata.v1: metadata/
   operators.operatorframework.io.bundle.package.v1: backstage-operator
   operators.operatorframework.io.bundle.channels.v1: alpha
-  operators.operatorframework.io.metrics.builder: operator-sdk-v1.33.0
+  operators.operatorframework.io.metrics.builder: operator-sdk-v1.32.0
   operators.operatorframework.io.metrics.mediatype.v1: metrics+v1
   operators.operatorframework.io.metrics.project_layout: go.kubebuilder.io/v3
```
2 changes: 1 addition & 1 deletion config/manager/kustomization.yaml
```diff
@@ -4,7 +4,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 images:
 - name: controller
-  newName: quay.io/rhdh/backstage-operator
+  newName: quay.io/janus/operator
   newTag: v0.0.1
 
 generatorOptions:
```
99 changes: 99 additions & 0 deletions docs/admin.md
# Administrator Guide

## Backstage Operator configuration

### Context

As described in the Design doc (TODO), a Backstage CR's desired state is defined using a layered configuration approach, which means:
- By default, each newly created Backstage CR uses the Operator-scope Default Configuration.
- This can be fully or partially overridden for a particular CR instance using a ConfigMap whose name is set in BackstageCR.spec.RawConfig (see the sketch below).
- That, in turn, can be customized by other BackstageCR.spec fields (see the Backstage API doc).
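
For illustration, a minimal Backstage CR pointing at a raw-config ConfigMap might look like the sketch below. The API group/version and the exact field casing are assumptions here; check the CRD in this repository for the real schema:

```yaml
apiVersion: backstage.io/v1alpha1    # assumed group/version; see this repo's CRD
kind: Backstage
metadata:
  name: bs1
  namespace: my-backstage-project    # any existing namespace
spec:
  rawConfig: my-backstage-config     # ConfigMap (same namespace) overriding the Default Configuration
```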

A Cluster Administrator may want to customize the Default Configuration due to internal preferences or limitations, for example:
- Preferences or restrictions on the Backstage and/or PostgreSQL images, e.g. due to an air-gapped environment.
- An existing Taints and Tolerations policy, requiring Backstage Pods to be configured with certain tolerations.
- ...

The Default Configuration is implemented as a ConfigMap called *backstage-default-config*, deployed in the *backstage-system* namespace and mounted into the Backstage controller container as the */default-config* directory.
This ConfigMap contains a set of keys/values which map to file names/contents in */default-config*.
These files contain the YAML manifests of the objects the Backstage controller uses as the initial desired state of a Backstage CR, according to the Backstage Operator configuration model:

![Backstage Default ConfigMap and CR](images/backstage_admin_configmap_and_cr.jpg)


Mapping of ConfigMap keys (YAML files) to runtime objects (NOTE: as of Dec 2023, this mapping is subject to change):

| Key/File name                  | k8s/OCP Kind       | Mandatory*     | Version | Notes                                              |
|--------------------------------|--------------------|----------------|---------|----------------------------------------------------|
| deployment.yaml                | appsv1.Deployment  | Yes            | all     | Backstage Deployment                               |
| service.yaml                   | corev1.Service     | Yes            | all     | Backstage Service                                  |
| db-statefulset.yaml            | appsv1.StatefulSet | For DB enabled | all     | PostgreSQL StatefulSet                             |
| db-service.yaml                | corev1.Service     | For DB enabled | all     | PostgreSQL Service                                 |
| db-service-hl.yaml             | corev1.Service     | For DB enabled | all     | PostgreSQL headless Service                        |
| db-secret.yaml                 | corev1.Secret      | For DB enabled | all     | Secret to connect Backstage to PSQL                |
| route.yaml                     | openshift.Route    | No (OCP only)  | all     | Route exposing the Backstage Service               |
| app-config.yaml                | corev1.ConfigMap   | No             | 0.0.2   | Backstage app-config.yaml                          |
| configmap-files.yaml           | corev1.ConfigMap   | No             | 0.0.2   | Backstage config file inclusions from a ConfigMap  |
| configmap-envs.yaml            | corev1.ConfigMap   | No             | 0.0.2   | Backstage env variables from a ConfigMap           |
| secret-files.yaml              | corev1.Secret      | No             | 0.0.2   | Backstage config file inclusions from a Secret     |
| secret-envs.yaml               | corev1.Secret      | No             | 0.0.2   | Backstage env variables from a Secret              |
| dynamic-plugins.yaml           | corev1.ConfigMap   | No             | 0.0.2   | dynamic-plugins config *                           |
| dynamic-plugins-configmap.yaml | corev1.ConfigMap   | No             | 0.0.1   | dynamic-plugins config *                           |
| backend-auth-configmap.yaml    | corev1.ConfigMap   | No             | 0.0.1   | backend auth config                                |


NOTES:
- "Mandatory" means the key needs to be present in either (or both) the Default and the CR Raw Configuration.
- dynamic-plugins.yaml is a fragment of the app-config.yaml provided with RHDH/Janus-IDP, which is mounted into a dedicated initContainer.
- Items marked as version 0.0.1 are not supported in version 0.0.2.
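
For orientation, here is a trimmed sketch of what *backstage-default-config* can look like. Only two of the keys from the table above are shown, and the manifest bodies are illustrative rather than the Operator's actual defaults:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backstage-default-config
  namespace: backstage-system
data:
  deployment.yaml: |    # initial desired state of the Backstage Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backstage
    spec:
      selector:
        matchLabels:
          app: backstage
      template:
        metadata:
          labels:
            app: backstage
        spec:
          containers:
            - name: backstage-backend
              image: quay.io/janus-idp/backstage-showcase:next
  service.yaml: |    # initial desired state of the Backstage Service
    apiVersion: v1
    kind: Service
    metadata:
      name: backstage
    spec:
      selector:
        app: backstage
      ports:
        - port: 7007    # Backstage backend's default port
```
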
### Operator Bundle configuration

With the Backstage Operator's Makefile you can generate the bundle descriptors using the *make bundle* command.

Along with the CSV manifest, it generates the default-config ConfigMap manifest, which can be modified and applied to the Backstage Operator.
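
For example (the generated manifest's exact file name is an assumption; check the bundle output):

```sh
make bundle
# Edit the generated default-config ConfigMap manifest, then apply it, e.g.:
kubectl apply -n backstage-system -f bundle/manifests/<default-config-configmap>.yaml
```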

[//]: # (TODO: document how an administrator can make changes to the default operator configuration, using their own configuration file (perhaps based on the generated one), and apply it using `kubectl` or `oc`.)

### Kustomize deploy configuration

Make sure the current context in your kubeconfig points to the right cluster, change the necessary parts of your config/manager/default-config (or just replace some of the files with your own), and run:
```sh
make deploy
```

### Direct ConfigMap configuration

You can change the default configuration by directly editing the *backstage-default-config* ConfigMap with kubectl:

- Retrieve the current ConfigMap from the cluster:

```sh
kubectl get -n backstage-system configmap backstage-default-config -o yaml > my-config.yaml
```

- Modify the file in your editor of choice.

- Apply the updated configuration to your cluster:

```sh
kubectl apply -n backstage-system -f my-config.yaml
```

The change is propagated to the controller's container after Kubernetes syncs the mounted ConfigMap, which may take a short while.


### Use Cases

#### Airgapped environment

When the Backstage CR is created, the Operator will try to create a Backstage Pod, deploying:
- the Backstage Container from the image configured in *(deployment.yaml).spec.template.spec.containers[].image*
- an Init Container (used for the RHDH/Janus-IDP dynamic-plugins configuration, usually running the same image as the Backstage Container)

Also, if the Backstage CR is configured with *EnabledLocalDb*, the Operator will create a PostgreSQL container Pod, with the image configured in *(db-statefulset.yaml).spec.template.spec.containers[].image*.

By default, the Backstage Operator is configured to use publicly available images.
If you plan to deploy to a [restricted environment](https://docs.openshift.com/container-platform/4.14/operators/admin/olm-restricted-networks.html),
you will need to configure your cluster or network to allow these images to be pulled.
For the list of related images deployed by the Operator, see the `RELATED_IMAGE_*` env vars or the `relatedImages` section of the [CSV](../bundle/manifests/backstage-operator.clusterserviceversion.yaml).
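
To point the Operator at mirrored images, an administrator can override the image references in the Default Configuration. A sketch (the mirror registry host is a placeholder; the keys to edit are the deployment.yaml and db-statefulset.yaml fragments described above):

```sh
# Export the default configuration, point images at an internal mirror, re-apply.
kubectl get -n backstage-system configmap backstage-default-config -o yaml > default-config.yaml
# In default-config.yaml, edit the image references inside the deployment.yaml
# and db-statefulset.yaml entries, e.g.:
#   image: registry.internal.example.com/janus-idp/backstage-showcase:next
kubectl apply -n backstage-system -f default-config.yaml
```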
1 change: 1 addition & 0 deletions docs/configuration.md
WIP
62 changes: 62 additions & 0 deletions docs/design.md
# Backstage Operator Design [WIP]

The goal of the Backstage Operator is to deploy a Backstage workload to a Kubernetes namespace and keep this workload synchronized with the desired state defined by configuration.

## Backstage Kubernetes Runtime

A Backstage Kubernetes workload consists of a set of Kubernetes resources (Runtime Objects).
An approximate set of the Runtime Objects necessary for a Backstage server on Kubernetes is shown in the diagram below:

![Backstage Kubernetes Runtime](images/backstage_kubernetes_runtime.jpg)

The most important object is the Backstage Pod, created by the Backstage Deployment; this is where the 'backstage-backend' container runs with the Backstage application inside.
The Backstage application is a web server which can be reached through the Backstage Service.
These two objects are the core of the Backstage workload.

The Backstage application uses an SQL database as its data storage, and it is possible to install a PostgreSQL DB in the same namespace as the Backstage instance.
This brings a PostgreSQL StatefulSet/Pod, a Service for Backstage to connect to, and a PV/PVC to store the data.

To provide external access to the Backstage server it is possible, depending on the underlying infrastructure, to use an OpenShift Route or
a K8s Ingress on top of the Backstage Service.
Note that in versions up to 0.0.2, only Route configuration is supported by the Operator.

Finally, the Backstage Operator supports all the [Backstage configuration](https://backstage.io/docs/conf/writing) options, which can be provided by creating dedicated
ConfigMaps and Secrets and contributing them to the Backstage Pod as mounted volumes or environment variables (see the [Configuration](configuration.md) guide for details).

## Configuration

### Configuration layers

The Backstage Operator can be configured to customize the deployed workload.
With no changes to the default configuration, an admin user can deploy a Backstage instance to try it out for a local, personal, or small group test deployment.

When you do want to customize your Backstage instance, there are three layers of configuration available.

![Backstage Operator Configuration Layers](images/backstage_operator_configuration_layers.jpg)

As shown in the picture above:

- There is an Operator (Cluster) level Default Configuration, implemented as a ConfigMap inside the Backstage system namespace
(where the Backstage controller is launched). It provides a configuration that is optimal for most cases and is applied
if there is no other configuration to override it (i.e. the Backstage CR is empty).
- Another layer, overriding the default, is instance (Backstage CR) scoped. It is implemented as a ConfigMap with
the same structure as the default one, but located in the Backstage instance's namespace. The name of this ConfigMap
is specified in the Backstage.Spec.RawConfig field. It offers a very flexible way to configure a particular Backstage instance
(see the sketch below).
- Finally, there is a set of fields on Backstage.Spec to override the configuration made on levels 1 and 2.
It offers simple configuration of selected parameters, so a user is not required to understand the
overall structure of the Backstage runtime objects and can simply configure "the most important" parameters
(see [configuration](configuration.md) for more details).
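
For illustration, an instance-scoped raw-config ConfigMap mirrors the default-config structure. A sketch (names are placeholders; the deployment.yaml body here only overrides the replica count):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-backstage-config          # referenced from Backstage.Spec.RawConfig
  namespace: my-backstage-project    # the Backstage instance's namespace
data:
  deployment.yaml: |                 # overrides the default deployment.yaml fragment
    apiVersion: apps/v1
    kind: Deployment
    spec:
      replicas: 2
```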

### Backstage Application

The Backstage application comes with advanced configuration features.

As per the [Backstage configuration](https://backstage.io/docs/conf/writing) documentation, a user can define and overload multiple _app-config.yaml_
files and flexibly configure them by including environment variables.
The Backstage Operator supports this flexibility by allowing these configuration components to be defined at all the configuration levels
(default, raw, and CR).

![Backstage App with Advanced Configuration](images/backstage_application_advanced_config.jpg)
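
For example, an _app-config.yaml_ fragment can reference environment variables supplied at any of the levels above, using the standard Backstage `${VAR}` substitution (the variable names below are placeholders):

```yaml
app:
  baseUrl: ${BASE_URL}    # resolved from an environment variable at runtime
backend:
  baseUrl: ${BASE_URL}
  listen:
    port: 7007            # Backstage backend's default port
```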

### Networking
TODO
