This repository has been archived by the owner on Aug 19, 2024. It is now read-only.

Initial Docs #64

Merged
merged 56 commits into from
Jan 4, 2024
56 commits
cfb83b2
yaml/configMap default configuration
gazarenkov Nov 28, 2023
c952470
Merge remote-tracking branch 'upstream/main' into status2
gazarenkov Nov 28, 2023
9563358
fix make test
gazarenkov Nov 28, 2023
46f0e7f
Merge remote-tracking branch 'upstream/main' into status2
gazarenkov Nov 30, 2023
99b4e54
fix with new objects
gazarenkov Nov 30, 2023
cffa417
fix with new objects
gazarenkov Nov 30, 2023
2bf0716
config small fixes
gazarenkov Nov 30, 2023
47ba2f9
fix for https://github.com/janus-idp/operator/issues/51
gazarenkov Nov 30, 2023
5882f62
Merge branch 'main' of https://github.com/janus-idp/operator into sta…
gazarenkov Dec 2, 2023
4863874
Merge branch 'main' of https://github.com/janus-idp/operator into doc
gazarenkov Dec 6, 2023
4bdb1a0
fix for https://github.com/janus-idp/operator/issues/58
gazarenkov Dec 6, 2023
1970621
Merge remote-tracking branch 'upstream/main' into doc
gazarenkov Dec 7, 2023
58e0287
initial doc commit (WIP)
gazarenkov Dec 7, 2023
0f24d60
initial docs
gazarenkov Dec 12, 2023
3cc03bd
Merge remote-tracking branch 'upstream/main' into doc
gazarenkov Dec 12, 2023
19dd9ea
initial docs
gazarenkov Dec 12, 2023
f6eabef
Update config/manager/kustomization.yaml
gazarenkov Dec 13, 2023
39fe89d
Update docs/developer.md
gazarenkov Dec 13, 2023
67ed0c6
Update README.md
gazarenkov Dec 13, 2023
71c9480
Update README.md
gazarenkov Dec 20, 2023
1e2682f
Update README.md
gazarenkov Dec 20, 2023
c073d37
Update README.md
gazarenkov Dec 20, 2023
cbc4516
Update README.md
gazarenkov Dec 20, 2023
2c2b627
Update README.md
gazarenkov Dec 20, 2023
dec5278
admin guide
gazarenkov Dec 20, 2023
3ba5ef6
Merge remote-tracking branch 'upstream/main' into doc
gazarenkov Dec 20, 2023
dbf7fb4
Merge remote-tracking branch 'origin/doc' into doc
gazarenkov Dec 20, 2023
3fb9783
Update docs/admin.md - typo
nickboldt Dec 20, 2023
17b0078
Update docs/admin.md
gazarenkov Dec 20, 2023
23b931e
Update docs/admin.md
gazarenkov Dec 20, 2023
e5a518b
Update README.md
gazarenkov Dec 20, 2023
7a7a45f
Update docs/admin.md
gazarenkov Dec 20, 2023
f8a567f
Update README.md
gazarenkov Dec 20, 2023
9924719
Update docs/admin.md
gazarenkov Dec 20, 2023
6469e77
Merge remote-tracking branch 'upstream/main' into doc
gazarenkov Dec 26, 2023
21c09b2
design
gazarenkov Dec 28, 2023
e5cce28
Merge branch 'doc' of https://github.com/gazarenkov/janus-idp-operato…
gazarenkov Dec 28, 2023
c6692dc
Update design.md
gazarenkov Dec 28, 2023
ecb19ad
more docs added and updated
gazarenkov Jan 2, 2024
5eabc7f
Update docs/admin.md
gazarenkov Jan 2, 2024
42c5df6
fixed examples/bs1.yaml
gazarenkov Jan 2, 2024
10e6d34
added version to config table
gazarenkov Jan 2, 2024
d6be975
db-service-hl added to admin.yaml
gazarenkov Jan 2, 2024
d6436c8
Update docs/admin.md
gazarenkov Jan 3, 2024
63dbec6
Update docs/admin.md
gazarenkov Jan 3, 2024
68aba95
Update docs/admin.md
gazarenkov Jan 3, 2024
137b7a0
Update docs/admin.md
gazarenkov Jan 3, 2024
ebb6c8f
Fix formatting of ConfigMap keys table in admin.md
rm3l Jan 3, 2024
eadc109
Apply review suggestions
rm3l Jan 3, 2024
f764850
Update docker/bundle.Dockerfile
nickboldt Jan 4, 2024
2115f53
Update docs/admin.md
gazarenkov Jan 4, 2024
dbd07bf
Update README.md
gazarenkov Jan 4, 2024
ed7e166
Update README.md
gazarenkov Jan 4, 2024
a7e144c
Update README.md
gazarenkov Jan 4, 2024
c1d673b
Update README.md
gazarenkov Jan 4, 2024
93708eb
Update bundle/manifests/backstage-operator.clusterserviceversion.yaml
gazarenkov Jan 4, 2024
115 changes: 40 additions & 75 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,99 +1,64 @@
# backstage-operator
Operator for deploying Backstage for Janus-IDP.
# Backstage Operator

## Description
Implements the https://janus-idp.io/docs/deployment/k8s/ procedure.
At the first stage, a CR update does not affect Backstage objects, just the installation (same as Helm).
TODO: Do we need to continuously sync the states? If so, which way: from the CR to the objects, the other way, or (somehow) both?
## The Goal
The Goal of [Backstage](https://backstage.io) Operator project is creating Kubernetes Operator for configuring, installing and synchronizing Backstage instance on Kubernetes/OpenShift.
Member

I don't think we're targeting k8s, only OpenShift. IIRC the helm chart is for k8s (and OpenShift too) but the operator is ONLY for OpenShift. @jasperchui @christophe-f am I correct here?

Member Author

@gazarenkov gazarenkov Dec 20, 2023

@nickboldt the codebase is working and tested on both K8s AND OpenShift.
This is the message for the GitHub repository README.
We probably need to slightly change this message, and we definitely need to add more information about OpenShift/OperatorHub as soon as we have it, but technically it is the truth, and I doubt it will differ much from the helm chart I guess (support of OpenShift == support of K8s + Route for this operator)

So, I'd definitely welcome improving the message from the marketing standpoint if that is the goal here, but technically the current message is relevant :)

Member

I'll let @christophe-f weigh in from the PM side. I'm OK with technically correct docs, but we don't want to imply support that we won't actually provide.

The initial target is in support of Red Hat's assemblies of Backstage - specifically supporting [dynamic-plugins](https://github.com/janus-idp/backstage-showcase/blob/main/showcase-docs/dynamic-plugins.md) on OpenShift. This includes [Janus-IDP](https://janus-idp.io/) and [Red Hat Developer Hub (RHDH)](https://developers.redhat.com/rhdh) but may be flexible enough to install any compatible Backstage instance on Kubernetes. See additional information under [Configuration](docs/configuration.md).
The Operator provides clear and flexible configuration options to satisfy a wide range of expectations, from "no configuration for default quick start" to "highly customized configuration for production".

Make sure the namespace is created.

Local Database (PostgreSQL) is created by default; to disable it, set:

    spec:
      skipLocalDb: true

This way a third-party DB can theoretically be configured. TODO: it just requires some changes in the Backstage appConfig (I think), because it only expects either in-container SQLite or MySQL.
TODO: should we consider using in-container SQLite for the K8s deployment as well (single-container deployment)?

TODO: can POSTGRES_HOST = <name-of the service>, POSTGRES_PORT = <port>[5432] be delivered to the Backstage
Deployment out of the Postgres Secret? Indeed, they are not really secrets.
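The snippet above can be sketched as a complete CR. This is a hedged illustration: the apiVersion/kind are placeholders (check config/samples/ for the actual group/version), and the `skipLocalDb` field name is taken verbatim from the text above and may differ in newer API revisions.

```yaml
apiVersion: backstage.io/v1alpha1   # placeholder; see config/samples/ for the real value
kind: Backstage
metadata:
  name: my-backstage
spec:
  skipLocalDb: true   # disable the default local PostgreSQL, per the note above
```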
[More documentation...](#more-documentation)

## Getting Started
You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing, or run against a remote cluster.
You’ll need a Kubernetes or OpenShift cluster. You can use [Minikube](https://minikube.sigs.k8s.io/docs/) or [KIND](https://sigs.k8s.io/kind) for local testing, or deploy to a remote cluster.
**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).

### Running on the cluster
1. Install Instances of Custom Resources:
```sh
kubectl apply -f config/samples/
```
2. Build and push your image to the location specified by `IMG`:
```sh
make docker-build docker-push IMG=<some-registry>/backstage-operator:tag
```
3. Deploy the controller to the cluster with the image specified by `IMG`:
```sh
make deploy IMG=<some-registry>/backstage-operator:tag
```
### Uninstall CRDs
To delete the CRDs from the cluster:
```sh
make uninstall
```
### Undeploy controller
Undeploy the controller from the cluster:
```sh
make undeploy
```
## Build and Deploy with OLM
1. To build operator, bundle and catalog images:
To test it on minikube from the source code:

Both **kubectl** and **minikube** must be installed. See [tools](https://kubernetes.io/docs/tasks/tools/).

1. Get your copy of Operator from GitHub:
```sh
make release-build
git clone https://github.com/janus-idp/operator
```
2. To push operator, bundle and catalog images to the registry:
2. Deploy Operator on the minikube cluster:
```sh
make release-push
cd <your-janus-idp-operator-project-dir>
make deploy
```
3. To deploy or update catalog source:
you can check if the Operator pod is up by running
```sh
make catalog-update
kubectl get pods -n backstage-system
It should be something like:
NAME READY STATUS RESTARTS AGE
backstage-controller-manager-cfc44bdfd-xzk8g 2/2 Running 0 32s
```
4. To deploy the operator with OLM
3. Create Backstage Custom resource on some namespace (make sure this namespace exists)
```sh
make deploy-olm
kubectl -n <your-namespace> apply -f examples/bs1.yaml
```
4. To undeploy the operator with OLM
you can check if the Operand pods are up by running
```sh
make undeploy-olm
```
## Contributing
// TODO(user): Add detailed information on how you would like others to contribute to this project

### How it works
This project aims to follow the Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/).
kubectl get pods -n <your-namespace>
It should be something like:
NAME READY STATUS RESTARTS AGE
backstage-85fc4657b5-lqk6r 1/1 Running 0 78s
backstage-psql-bs1-0 1/1 Running 0 79s

It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/)
which provides a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.

### Test It Out
1. Install the CRDs into the cluster:
```sh
make install
```
2. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
4. Tunnel Backstage Service and get URL for access Backstage
```sh
make run
minikube service -n <your-namespace> backstage --url
Output:
>http://127.0.0.1:53245
```
**NOTE:** You can also run this in one step by running: `make install run`
5. Access your Backstage instance in your browser using this URL.

### Modifying the API definitions
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
```sh
make manifests
```
**NOTE:** Run `make --help` for more information on all potential `make` targets
## More documentation

- [OpenShift deployment](docs/openshift.md)
- [Configuration](docs/configuration.md)
- [Developer Guide](docs/developer.md)
- [Operator Design](docs/design.md)

More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)

## License

@@ -21,8 +21,8 @@ metadata:
}
]
capabilities: Basic Install
createdAt: "2023-12-21T16:17:51Z"
operators.operatorframework.io/builder: operator-sdk-v1.33.0
createdAt: "2024-01-02T10:59:10Z"
operators.operatorframework.io/builder: operator-sdk-v1.32.0
operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
name: backstage-operator.v0.0.1
namespace: placeholder
@@ -206,7 +206,7 @@ spec:
value: quay.io/fedora/postgresql-15:latest
- name: RELATED_IMAGE_backstage
value: quay.io/janus-idp/backstage-showcase:next
image: quay.io/rhdh/backstage-operator:v0.0.1
image: quay.io/janus/operator:next
livenessProbe:
httpGet:
path: /healthz
2 changes: 1 addition & 1 deletion bundle/metadata/annotations.yaml
@@ -5,7 +5,7 @@ annotations:
operators.operatorframework.io.bundle.metadata.v1: metadata/
operators.operatorframework.io.bundle.package.v1: backstage-operator
operators.operatorframework.io.bundle.channels.v1: alpha
operators.operatorframework.io.metrics.builder: operator-sdk-v1.33.0
operators.operatorframework.io.metrics.builder: operator-sdk-v1.32.0
operators.operatorframework.io.metrics.mediatype.v1: metrics+v1
operators.operatorframework.io.metrics.project_layout: go.kubebuilder.io/v3

2 changes: 1 addition & 1 deletion config/manager/kustomization.yaml
@@ -4,7 +4,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: controller
newName: quay.io/rhdh/backstage-operator
newName: quay.io/janus/operator
newTag: v0.0.1

generatorOptions:
99 changes: 99 additions & 0 deletions docs/admin.md
@@ -0,0 +1,99 @@
# Administrator Guide

## Backstage Operator configuration

### Context

As described in the Design doc (TODO), a Backstage CR's desired state is defined using a layered configuration approach, which means:
- By default, each newly created Backstage CR uses the Operator-scoped Default Configuration.
- This can be fully or partially overridden for a particular CR instance using the ConfigMap named in BackstageCR.spec.RawConfig.
- That, in turn, can be customized by other BackstageCR.spec fields (see the Backstage API doc).

A Cluster Administrator may want to customize the Default Configuration due to internal preferences or limitations, for example:
- Preferences or restrictions for the Backstage and/or PostgreSQL images due to an air-gapped environment.
- An existing taints and tolerations policy, so Backstage Pods have to be configured with certain toleration restrictions.
- ...

The Default Configuration is implemented as a ConfigMap called *backstage-default-config*, deployed in the *backstage-system* namespace and mounted into the Backstage controller container as the */default-config* directory.
This ConfigMap contains a set of keys/values which map to file names/contents in */default-config*.
These files contain YAML manifests of objects used by the Backstage controller as the initial desired state of a Backstage CR, according to the Backstage Operator configuration model:

![Backstage Default ConfigMap and CR](images/backstage_admin_configmap_and_cr.jpg)


Mapping of ConfigMap keys (YAML files) to runtime objects (NOTE: as of Dec 20, 2023, this mapping is subject to change):

| Key/File name | k8s/OCP Kind | Mandatory* | version | Notes |
|--------------------------------|--------------------|----------------|---------|-------------------------------------------------|
| deployment.yaml | appsv1.Deployment | Yes | all | Backstage deployment |
| service.yaml | corev1.Service | Yes | all | Backstage Service |
| db-statefulset.yaml | appsv1.Statefulset | For DB enabled | all | PostgreSQL StatefulSet |
| db-service.yaml | corev1.Service | For DB enabled | all | PostgreSQL Service |
| db-service-hl.yaml | corev1.Service | For DB enabled | all | PostgreSQL Service |
| db-secret.yaml | corev1.Secret | For DB enabled | all | Secret to connect Backstage to PSQL |
| route.yaml | openshift.Route | No (for OCP) | all | Route exposing Backstage service |
| app-config.yaml | corev1.ConfigMap | No | 0.0.2 | Backstage app-config.yaml |
| configmap-files.yaml | corev1.ConfigMap | No | 0.0.2 | Backstage config file inclusions from configMap |
| configmap-envs.yaml | corev1.ConfigMap | No | 0.0.2 | Backstage env variables from configMap |
| secret-files.yaml | corev1.Secret | No | 0.0.2 | Backstage config file inclusions from Secret |
| secret-envs.yaml | corev1.Secret | No | 0.0.2 | Backstage env variables from Secret |
| dynamic-plugins.yaml | corev1.ConfigMap | No | 0.0.2 | dynamic-plugins config * |
| dynamic-plugins-configmap.yaml | corev1.ConfigMap | No | 0.0.1 | dynamic-plugins config * |
| backend-auth-configmap.yaml | corev1.ConfigMap | No | 0.0.1 | backend auth config |


NOTES:
- Mandatory means the key needs to be present in the Default Configuration, the CR Raw Configuration, or both.
- dynamic-plugins.yaml is a fragment of the app-config.yaml shipped with RHDH/Janus-IDP, which is mounted into a dedicated initContainer.
- Items marked as version 0.0.1 are not supported in version 0.0.2.
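As a sketch of overriding a single key, the fragment below builds a ConfigMap carrying only a replacement `deployment.yaml`. The ConfigMap name and namespace come from the text above; the Deployment body is a minimal illustrative stub, not the real default shipped with the Operator.

```shell
# Build a default-config manifest carrying a custom deployment.yaml key.
# NOTE: the Deployment body below is a placeholder stub for illustration only.
cat > my-default-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: backstage-default-config
  namespace: backstage-system
data:
  deployment.yaml: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backstage
    spec:
      replicas: 1
EOF
# List the keys carried by the generated manifest:
grep -E '^  [a-z-]+\.yaml' my-default-config.yaml
# then: kubectl apply -f my-default-config.yaml
```

In practice you would start from the ConfigMap generated by *make bundle* (next section) rather than writing the manifest from scratch.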
### Operator Bundle configuration

With the Backstage Operator's Makefile you can generate the bundle descriptor using the *make bundle* command.

Along with the CSV manifest it generates the default-config ConfigMap manifest, which can be modified and applied to the Backstage Operator.

[//]: # (TODO: document how an administrator can make changes to the default operator configuration, using their own configuration file, perhaps based on the generated one, and apply it using `kubectl` or `oc`.)

### Kustomize deploy configuration

Make sure the current context in your kubeconfig points to the right cluster, change the necessary parts of your config/manager/default-config (or just replace some of the files with yours), and run:
```sh
make deploy
```

### Direct ConfigMap configuration

You can change the default configuration by directly editing the default-config ConfigMap with kubectl:

- Retrieve the current `default-config` from the cluster:

```sh
kubectl get -n backstage-system configmap default-config -o yaml > my-config.yaml
```

- Modify the file in your editor of choice.

- Apply the updated configuration to your cluster:

```sh
kubectl apply -n backstage-system -f my-config.yaml
```

The updated ConfigMap is propagated to the controller's container after Kubernetes reconciles the mounted volume, which can take some time.
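For example, to swap the Backstage image before re-applying, a hedged sketch: the heredoc below stubs the relevant fragment of the retrieved my-config.yaml so the edit can be shown end to end, and the `v1.0.0` tag is purely illustrative.

```shell
# Stub the fragment of my-config.yaml we care about (in a real session this
# file comes from the `kubectl get ... > my-config.yaml` step above).
cat > my-config.yaml <<'EOF'
data:
  deployment.yaml: |
    spec:
      template:
        spec:
          containers:
            - name: backstage-backend
              image: quay.io/janus-idp/backstage-showcase:next
EOF
# Swap the image tag in place (keeps a .bak copy of the original):
sed -i.bak 's|backstage-showcase:next|backstage-showcase:v1.0.0|' my-config.yaml
grep 'image:' my-config.yaml
# then: kubectl apply -n backstage-system -f my-config.yaml
```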


### Use Cases

#### Airgapped environment

When creating the Backstage CR, the Operator will try to create a Backstage Pod, deploying:
- the Backstage container from the image configured in *(deployment.yaml).spec.template.spec.Containers[].image*
- an Init Container (applied for the RHDH/Janus-IDP configuration, usually from the same image as the Backstage container)

Also, if the Backstage CR is configured with *EnabledLocalDb*, the Operator will create a PostgreSQL pod, using the image configured in *(db-statefulset.yaml).spec.template.spec.Containers[].image*

By default, the Backstage Operator is configured to use publicly available images.
If you plan to deploy to a [restricted environment](https://docs.openshift.com/container-platform/4.14/operators/admin/olm-restricted-networks.html),
you will need to configure your cluster or network to allow these images to be pulled.
For the list of related images deployed by the Operator, see the `RELATED_IMAGE_*` env vars or `relatedImages` section of the [CSV](../bundle/manifests/backstage-operator.clusterserviceversion.yaml).
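To list those images without opening the CSV by hand, a small sketch: the heredoc below stubs the relevant CSV lines (values copied from this PR's diff) so the command is self-contained; in a checkout, point grep at bundle/manifests/backstage-operator.clusterserviceversion.yaml instead.

```shell
# Stub of the RELATED_IMAGE_* env entries as they appear in the CSV
# (illustrative; check the real CSV in a checkout of the repository).
cat > csv-fragment.yaml <<'EOF'
          env:
            - name: RELATED_IMAGE_postgresql
              value: quay.io/fedora/postgresql-15:latest
            - name: RELATED_IMAGE_backstage
              value: quay.io/janus-idp/backstage-showcase:next
EOF
# Print the image reference following each RELATED_IMAGE_* name:
grep -A1 'RELATED_IMAGE_' csv-fragment.yaml | grep 'value:'
```

These are the images that must be mirrored into (or reachable from) the restricted environment.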
1 change: 1 addition & 0 deletions docs/configuration.md
@@ -0,0 +1 @@
WIP
62 changes: 62 additions & 0 deletions docs/design.md
@@ -0,0 +1,62 @@
# Backstage Operator Design [WIP]

The goal of the Backstage Operator is to deploy a Backstage workload to a Kubernetes namespace and keep this workload synced with the desired state defined by the configuration.

## Backstage Kubernetes Runtime

A Backstage Kubernetes workload consists of a set of Kubernetes resources (Runtime Objects).
The approximate set of Runtime Objects necessary for a Backstage server on Kubernetes is shown in the diagram below:

![Backstage Kubernetes Runtime](images/backstage_kubernetes_runtime.jpg)

The most important object is the Backstage Pod created by the Backstage Deployment; that is where the 'backstage-backend' container with the Backstage application runs.
The Backstage application is a web server which can be reached through the Backstage Service.
Together, these two form the core of the Backstage workload.

The Backstage application uses a SQL database as its data storage, and it is possible to install a PostgreSQL DB in the same namespace as the Backstage instance.
This brings a PostgreSQL StatefulSet/Pod, a Service for Backstage to connect to, and a PV/PVC to store the data.

To provide external access to the Backstage server, it is possible, depending on the underlying infrastructure, to use an OpenShift Route or
a K8s Ingress on top of the Backstage Service.
Note that in versions up to 0.0.2, only Route configuration is supported by the Operator.

Finally, the Backstage Operator supports all the [Backstage configuration](https://backstage.io/docs/conf/writing) options, which can be provided by creating dedicated
ConfigMaps and Secrets and contributing them to the Backstage Pod as mounted volumes or environment variables (see the [Configuration](configuration.md) guide for details).

## Configuration

### Configuration layers

The Backstage Operator can be configured to customize the deployed workload.
With no changes to the default configuration, an admin user can deploy a Backstage instance to try it out for a local, personal, or small group test deployment.

When you do want to customize your Backstage instance, there are 3 layers of configuration available.

![Backstage Operator Configuration Layers](images/backstage_operator_configuration_layers.jpg)

As shown in the picture above:

- There is an Operator (cluster) level Default Configuration, implemented as a ConfigMap inside the Backstage system namespace
(where the Backstage controller is launched). It provides a configuration that is optimal for most cases and is applied
if there is no other config to override it (i.e. the Backstage CR is empty).
- Another layer, overriding the default, is instance (Backstage CR) scoped. It is implemented as a ConfigMap which
has the same structure as the default one but lives in the Backstage instance's namespace. The name of this ConfigMap
is specified in the Backstage.Spec.RawConfig field. It offers a very flexible way to configure a certain Backstage instance.
- Finally, there is a set of fields on Backstage.Spec to override the configuration made on levels 1 and 2.
It offers simple configuration of some parameters, so the user is not required to understand the
overall structure of the Backstage runtime objects and is able to simply configure "the most important" parameters
(see [configuration](configuration.md) for more details).
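A hedged sketch of the second (instance-scoped) layer: the field name follows Backstage.Spec.RawConfig from the text above, while the apiVersion/kind, namespace, and ConfigMap name are illustrative placeholders.

```yaml
apiVersion: backstage.io/v1alpha1   # placeholder; check the CRD for the actual group/version
kind: Backstage
metadata:
  name: bs1
  namespace: my-ns
spec:
  rawConfig: my-raw-config   # a ConfigMap in this namespace, with the same structure as the default one
```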

### Backstage Application

Backstage Application comes with advanced configuration features.

As per the [Backstage configuration](https://backstage.io/docs/conf/writing) docs, a user can define and overload multiple _app-config.yaml_
files and flexibly configure them by including environment variables.
The Backstage Operator supports this flexibility by allowing these configuration components to be defined at all the configuration levels
(default, raw, and CR).
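For instance, an _app-config.yaml_ fragment supplied through any of these levels can pull values from environment variables using Backstage's `${VAR}` substitution. The keys below are illustrative; POSTGRES_HOST and POSTGRES_PORT are the variables mentioned elsewhere in this PR.

```yaml
# Illustrative app-config fragment; ${...} values are resolved from the
# Backstage container's environment at startup.
backend:
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
```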

![Backstage App with Advanced Configuration](images/backstage_application_advanced_config.jpg)

### Networking
TODO