Commit

docs: Remove old docs, update README
Michaelpalacce committed Feb 11, 2024
1 parent cb69b5f commit ea6792c
Showing 9 changed files with 131 additions and 341 deletions.
250 changes: 122 additions & 128 deletions README.md
@@ -1,128 +1,122 @@
# Preface
<img src="https://raw.githubusercontent.com/kubernetes/kubernetes/master/logo/logo.png" width="150px" alt="">

> **WARNING**
> Longhorn does not currently support Kubernetes 1.25. Automatic upgrades have been disabled because of this, and upgrades will be done manually for now.

This repository contains basic local Helm charts for installing applications, as well as FluxCD 2 HelmReleases for GitOps.
I intend to keep the local Helm charts where possible, as they make this repository fairly beginner-friendly.

# :open_book: Check out the Documentation
* [Documentation](./docs)

# :checkered_flag: Getting Started
1. [Prerequisites](./docs/Prerequisites.md)
2. [Cluster Setup](./docs/ClusterSetup.md)
3. [Cert Manager](./docs/SettingUpCertManager.md)
4. [Setting Up Renovate](./docs/SettingUpRenovate.md)
5. [Backups](./docs/Backups.md)

# Main tools used
1. **FluxCD 2** - GitOps for my HomeLab.
2. **Renovate** - Checks for updates to GitHub Actions, Helm charts, HelmReleases and Docker containers.
3. **ingress-nginx** - Kubernetes ingress controller. Used to access services through a reverse proxy instead of exposing them on a port.
4. **cert-manager + reflector** - cert-manager generates certificates for my services and reflector duplicates the generated SSL
certificate secret to all namespaces. The secret is called `ingress`.
5. **Longhorn** - K8S-native storage.
6. **SimpleSecrets** - Kubernetes secret manager.
7. **Calico** - Provides networking for my HomeLab.
8. **Ansible** - Used to provision the infrastructure.
9. **Velero** - K8S and PVC backup. Free and open source, by VMware.
10. **Kube-vip** - Provides a virtual IP that I can use to access all my servers.

# GitOps :construction:
GitOps is applied wherever possible using Flux2.
CI/CD is handled by bootstrapping Flux into my cluster. Flux polls GitHub for changes and applies them automatically on my server.
It is currently pretty stable and works fine.
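As a rough illustration of this polling loop ( not copied from this repository; by default `flux bootstrap` names both objects `flux-system`, and the interval shown is an assumption ), a Flux setup boils down to a `GitRepository` that is polled on an interval and a `Kustomization` that applies a path from it:
~~~yaml
---
# Hypothetical sketch: the Git source Flux polls for changes.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 5m # how often Flux polls GitHub
  url: https://github.com/Michaelpalacce/HomeLab
  ref:
    branch: master
---
# Hypothetical sketch: apply manifests from a path in that repository.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 5m
  path: ./cluster/homelab/base
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
~~~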

# Image updates
Image updates are handled by Renovate Bot :robot:. Renovate periodically scans for new image versions and submits a pull request for each change.

# Accessing services ( ingress-nginx, cert-manager )
Apps are currently exposed via ingress-nginx and have SSL certificates provided by cert-manager.
A wildcard certificate is issued for my domain `*.stefangenov.site`, and when the secret is created
it is replicated into all namespaces as `ingress` to be consumed by the ingress resources. This replication is
needed because Let's Encrypt rate-limits certificate requests.
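As a hedged sketch of how such a wildcard certificate plus reflector replication can be wired up ( the issuer name and namespace below are assumptions based on common cert-manager and emberstack reflector conventions, not copied from this repository ):
~~~yaml
---
# Hypothetical wildcard certificate; the reflector annotations on the resulting
# secret allow it to be mirrored into other namespaces.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingress # assumed name
  namespace: cert-manager # assumed namespace
spec:
  secretName: ingress # the secret consumed by the ingress resources
  dnsNames:
    - "*.stefangenov.site"
  issuerRef:
    name: letsencrypt # assumed ClusterIssuer name
    kind: ClusterIssuer
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
~~~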

#### :desktop_computer: Exposing Apps
As a legacy approach, I used to expose my apps via NodePort. This has been removed, but it can easily be brought back by
uncommenting the nodePort values in the Helm charts. I also try to keep this option available for future apps
and services I install.
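For illustration only ( this mirrors the Longhorn values shown later in this README; for other charts the exact keys are an assumption ), re-enabling a NodePort usually means uncommenting a service block like this in the chart values:
~~~yaml
# Hypothetical values snippet: uncomment to expose the UI on a fixed NodePort.
service:
  ui:
    type: NodePort
    nodePort: 30030
~~~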

# Storage ( Longhorn )
Longhorn is a great replicated storage option with a nice UI for better visualisation. It's fast and tailor-made for
k8s, developed by the same people responsible for k3s, Rancher and other great tools. [Official site](https://longhorn.io/)
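As a minimal sketch ( assuming the chart installs a StorageClass named `longhorn`, which is its default; the claim name and namespace are assumptions ), a workload requests replicated storage like any other PVC:
~~~yaml
# Hypothetical PVC backed by Longhorn's default StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-data # assumed name
  namespace: media # assumed namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
~~~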

# Networking ( Calico CNI )
Calico is a great, mature CNI/IPAM solution that is fast, scalable and feature-rich. [Source code](https://github.com/projectcalico/calico)
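Since Calico enforces standard Kubernetes NetworkPolicy objects, a hedged example of what that enables looks like this ( the namespace, labels and policy itself are assumptions for illustration, not policies from this repository ):
~~~yaml
# Hypothetical policy: only allow ingress-nginx to reach pods in this namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-only # assumed name
  namespace: media # assumed namespace
spec:
  podSelector: {} # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
~~~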

# SimpleSecrets ( Secrets Management )
This is a tool that I've been developing in my spare time. **It has not been audited or tested by security professionals!**
It lets you store secrets via the UI/API and create K8S Secrets by creating a SimpleSecrets object instead, which allows
me to commit `SimpleSecrets` objects to Git without exposing anything to the internet.

# Backup ( Velero )
Velero allows me to back up selected namespaces and ( with the help of restic ) ship the data to different destinations.
In my case I'm using the Velero AWS plugin.

The Velero backup runs on a daily schedule during the evening hours, and I pay around $4 each month.
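A hedged sketch of what such a nightly schedule can look like ( the cron expression, namespaces and TTL below are assumptions, not the actual configuration ):
~~~yaml
# Hypothetical Velero schedule: nightly backup of selected namespaces.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly # assumed name
  namespace: velero
spec:
  schedule: "0 22 * * *" # every evening at 22:00
  template:
    includedNamespaces: # assumed namespaces
      - media
      - longhorn-system
    ttl: 168h # keep backups for 7 days
~~~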

# What if I don't want to use Flux
Well it's absolutely fine. You can go to `Helm/apps` and install any app you want ( e.g. `helm install media media -n media --create-namespace` ).
However things like ingress, cert-management, longhorn are handled only via Flux. Information on the helm chart that is
used can be found in the `helm-release.yaml` for the specific service. Let's look at an example:
~~~yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: longhorn-system # What to call the deployment
  namespace: longhorn-system # Where to install the helm chart
spec:
  interval: 5m # How often do we poll for changes
  install:
    createNamespace: true # Same as --create-namespace
  chart:
    spec:
      chart: longhorn # Which chart to use
      version: 1.2.4 # Which version of the chart
      interval: 5m
      # Where to find information for this chart ( in my case I have a HelmRepository defined in cluster/homelab/helm/longhorn-system )
      sourceRef:
        kind: HelmRepository
        name: longhorn-system
        namespace: flux-system
  # Overwriting some values
  values:
    ingress:
      enabled: true
      host: longhorn.stefangenov.site
      ingressClassName: nginx
      tls: true
      tlsSecret: ingress

    service:
      ui:
        type: NodePort
        nodePort: 30030
~~~

This would be the same as:
1. Creating a new file with the content:

`values.yaml`:
~~~yaml
ingress:
  enabled: true
  host: longhorn.stefangenov.site
  ingressClassName: nginx
  tls: true
  tlsSecret: ingress

service:
  ui:
    type: NodePort
    nodePort: 30030
~~~
2. Running: `helm repo add longhorn https://charts.longhorn.io; helm repo update` to add the longhorn helm repo
3. Running: `helm install longhorn longhorn/longhorn --create-namespace -n longhorn-system -f values.yaml`
11 changes: 0 additions & 11 deletions docs/CNI.md

This file was deleted.

14 changes: 4 additions & 10 deletions docs/Flux.md
@@ -1,24 +1,18 @@
# Flux

## Flux bootstrap
1. Add the `GITHUB_TOKEN` environment variable.
2. Run: `flux bootstrap github --owner=Michaelpalacce --repository=HomeLab --branch=master --path=./cluster/homelab/base --personal`
3. Flux needs to run a reconciliation, after which it will bootstrap the cluster with all apps in order.
   1. Note: **This will take a while**

## How does it work?
`cluster/homelab/base` is the entrypoint. It holds Kustomizations for all the other 3 modules, as well as the flux-system one ( which is the Flux installation ).
Each Kustomization is a separate file, and they depend on one another ( see the sketch after the list below ).

### Steps of import:
1. `helm.yaml` - Holds all the helm charts needed.
2. `core.yaml` - Depends on the helm Kustomization and holds the core functionality the cluster needs in order to function, such as storage, certificates and ingress.
3. `apps.yaml` - Depends on both `core.yaml` and `helm.yaml` and holds all the apps currently installed on my cluster.
4. `configs.yaml` - Depends on `core.yaml` and holds all the configurations for the cluster.
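A hedged sketch of what one of these Kustomizations can look like ( the exact file contents are not shown in this commit, so the names and path below are assumptions ); `dependsOn` is what enforces the ordering described above:
~~~yaml
# Hypothetical apps Kustomization: only reconciled after core and helm are ready.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  path: ./cluster/homelab/apps # assumed path
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: core
    - name: helm
~~~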

9 changes: 4 additions & 5 deletions docs/ClusterSetup.md → docs/Getting Started.md
@@ -15,8 +15,7 @@ but if you chose a different ansible user, make sure to modify accordingly ) **N
Ideally you should either pass in your password every time or setup passwordless authentication**
- **If you did not fix the iptables**, ( when it comes to raspberry pis ) do it now: `ansible -i hosts/inventory -b -m shell -a "iptables -F && update-alternatives --set iptables /usr/sbin/iptables-legacy && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy && reboot" all -k`
- Run `ansible-galaxy install -r playbooks/install/requirements.yml` to install all the needed ansible roles from Ansible Galaxy
- Run `ansible-playbook -i hosts/inventory playbooks/install/main.yml --tags preflight -k`. At this point you have
everything needed to set up Kubernetes ( all the needed binaries ).
- Run `ansible-playbook -i hosts/inventory playbooks/install/main.yml --tags setup -k`. This will initialize the
master on the master Pi and add all the workers.
- Run `ansible-playbook -i hosts/inventory playbooks/install/main.yml`
- You should check the Troubleshooting options regarding svclb and enable container IP forwarding.

Next Steps: [Flux](./Flux.md)
5 changes: 0 additions & 5 deletions docs/Hacks.md

This file was deleted.
