Update docs to main
ArtemisCloud Bot committed Jun 5, 2024
1 parent 3362c42 commit de77b5b
Showing 5 changed files with 931 additions and 73 deletions.
117 changes: 44 additions & 73 deletions content/en/docs/help/bundle.md
weight: 630
toc: true
---

# Bundle

## Operator Lifecycle Manager (OLM)

The [Operator Lifecycle Manager](https://olm.operatorframework.io/) can help users to install and manage operators. The ArtemisCloud operator can be built into a bundle image and installed into OLM.

### Install OLM

Check out the latest [releases on github](https://github.com/operator-framework/operator-lifecycle-manager/releases) for release-specific install instructions.
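
If OLM is not already installed on your cluster, one way to add it is with the [operator-sdk](https://sdk.operatorframework.io/) CLI, which earlier revisions of this page used. A minimal sketch, assuming `operator-sdk` is available on your PATH:

```
operator-sdk olm install
```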

## Create a repository

Create a repository that Kubernetes will use to pull your catalog image. You can create a public one for free on quay.io; see [how to create a repo](https://docs.quay.io/guides/create-repo.html).
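
The catalog image built in the next step gets pushed to this repository, so make sure your container tooling is logged in to the registry first. A minimal sketch, assuming quay.io and the `docker` CLI (`podman login` works the same way):

```
docker login quay.io
```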

## Build a catalog image

Set your repository in CATALOG_IMG and execute the following command:

```
make CATALOG_IMG=quay.io/my-org/activemq-artemis-operator-index:latest catalog-build
```

## Push a catalog image

Set your repository in CATALOG_IMG and execute the following command:

```
make CATALOG_IMG=quay.io/my-org/activemq-artemis-operator-index:latest catalog-push
```

## Create a catalog source (e.g. catalog-source.yaml)

Before creating the catalog source, make sure to update the **image** field within the `spec` section with your own built catalog image, as specified by the `CATALOG_IMG` variable. For the `CATALOG_IMG`, refer to the [Build a catalog image](#build-a-catalog-image) section.

```
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: activemq-artemis-operator-source
  namespace: operators
spec:
  displayName: ActiveMQ Artemis Operators
  image: quay.io/my-org/activemq-artemis-operator-index:latest
  sourceType: grpc
```

and deploy it:
```
$ kubectl create -f catalog-source.yaml
```

In a moment you will see that the index image is up and running in the **operators** namespace:

```
$ kubectl get pod -n operators
NAME                                     READY   STATUS    RESTARTS   AGE
activemq-artemis-operator-source-g94fd   1/1     Running   0          42s
```
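
You can also inspect the CatalogSource resource itself to confirm OLM picked it up; this is a quick check using the same `operators` namespace as the manifest above:

```
$ kubectl get catalogsource -n operators
```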

## Create a subscription (e.g. subscription.yaml)

```
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: activemq-artemis-operator-subscription
  namespace: operators
spec:
  channel: upstream
  name: activemq-artemis-operator
  source: activemq-artemis-operator-source
  sourceNamespace: operators
  installPlanApproval: Automatic
```

and deploy it:
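
A minimal sketch of this step, assuming the manifest above was saved as `subscription.yaml`:

```
$ kubectl create -f subscription.yaml
```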
An operator will be installed into the **operators** namespace.
```
$ kubectl get pod -n operators
NAME                                                              READY   STATUS      RESTARTS   AGE
069c5d363d51fc04d639086da1c5180883a6cea8ec9d9f9eedde1a55f6v7jsq   0/1     Completed   0          9m55s
activemq-artemis-controller-manager-54c99b9df6-6xdzh              1/1     Running     0          9m28s
activemq-artemis-operator-source-g94fd                            1/1     Running     0          58m
```
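
You can also check the ClusterServiceVersion created by the subscription; once the install completes it should report the `Succeeded` phase:

```
$ kubectl get csv -n operators
```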

## Create a single ActiveMQ Artemis

This step creates a single ActiveMQ Artemis broker instance by applying the custom resource (CR) defined in the `examples/artemis/artemis_single.yaml` file.

```
$ kubectl apply -f examples/artemis/artemis_single.yaml
```
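
For reference, a minimal single-broker CR looks roughly like the sketch below; the exact contents of `artemis_single.yaml` may differ, so treat the field values as assumptions and check the file in the operator repository:

```
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: artemis-broker
```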

To check the status of the broker, run:

```
$ kubectl get ActivemqArtemis
NAME             READY   AGE
artemis-broker   True    39s
```
142 changes: 142 additions & 0 deletions content/en/docs/help/hostname_resolution.md
---
title: "Resolve you cluster domain"
description: "Various possible configuration to make your local cluster domain
resolvable"
draft: false
images: []
menu:
docs:
parent: "help"
weight: 110
toc: true
---

If you are running a local `k8s` instance, you might want to configure your
local setup so that it can resolve the domain of the cluster, allowing
programs running outside the cluster to access your services.

There are a couple of options available to you, all with their pros & cons. We
will list some of them in this document. Note that this is not an exhaustive
coverage of all the possibilities; feel free to contribute to the
documentation if you have other ways of doing this.

## Prerequisite

Before you start, you need to have access to a running Kubernetes cluster
environment. A [Minikube](https://minikube.sigs.k8s.io/docs/start/) instance
running on your laptop will do fine.

### Start minikube with a parametrized `dns-domain` name

```console
$ minikube start --dns-domain='demo.artemiscloud.io'

😄 minikube v1.32.0 on Fedora 39
🎉 minikube 1.33.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.33.1
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'

✨ Automatically selected the kvm2 driver. Other choices: qemu2, ssh
👍 Starting control plane node minikube in cluster minikube
🔥 Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```

Get minikube's IP address:

```console
$ minikube ip
192.168.39.54
```

Note that every time you restart minikube you'll have to update the IP in
the configuration files.

## /etc/hosts

The generic way to make a URL resolve to an IP address is to update the
`/etc/hosts` file. There's no wildcard support in this file, which means you'll need to
specify every URL you are interested in.

Here's an example for an ingress:
```console
$ cat /etc/hosts
192.168.39.54 ing.sslacceptor.send-receive-0.send-receive-project.demo.artemiscloud.io
```
### Pros

* Works on every setup and is simple

### Cons

* No wildcard, you need to list every domain and subdomain you'll need to access

## NetworkManager's DNSMasq plugin

We will use NetworkManager's dnsmasq plugin
([source](https://fedoramagazine.org/using-the-networkmanagers-dnsmasq-plugin/)) to
configure the IP associated with the domain `demo.artemiscloud.io`. The dnsmasq plugin
supports wildcards, which is better than manually setting every host in the
`/etc/hosts` file.

### Configure DNSMasq

The goal here is to enable dnsmasq and make it resolve your cluster
domain `demo.artemiscloud.io` to the cluster's IP address. Because dnsmasq supports
wildcards, all subdomains will also resolve to the same IP address.

1. Create the following files (writing under `/etc/NetworkManager` requires root privileges):

```console
$ cat << EOF > /etc/NetworkManager/conf.d/00-use-dnsmasq.conf
[main]
dns=dnsmasq
EOF
```

```console
$ cat << EOF > /etc/NetworkManager/dnsmasq.d/00-demo.artemiscloud.io.conf
local=/demo.artemiscloud.io/
address=/.demo.artemiscloud.io/192.168.39.54
EOF
```

```console
$ cat << EOF > /etc/NetworkManager/dnsmasq.d/02-add-hosts.conf
addn-hosts=/etc/hosts
EOF
```

2. Restart NetworkManager:

```console
$ sudo systemctl restart NetworkManager
```
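
To verify the setup, resolve any subdomain of `demo.artemiscloud.io`; it should return the cluster IP from the earlier example (the hostname below is made up for illustration):

```console
$ nslookup broker.demo.artemiscloud.io
```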

### Pros

* Supports wildcards, so you only need to set the IP once

### Cons

* Works only with NetworkManager

## Minikube's `ingress-dns` plugin

[Follow the official documentation.](https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/)
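
The addon can typically be enabled with a single command; see the linked documentation for the DNS forwarding configuration that goes with it:

```console
$ minikube addons enable ingress-dns
```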

### Pros

* Supports wildcards, so you only need to set the IP once
* Supported on every setup (Linux, macOS, Windows)

### Cons

* Can only resolve ingress URLs