Add docs for VMs #310

Merged 3 commits on Oct 8, 2024
27 changes: 27 additions & 0 deletions docs/docs/infrastructure/kubernetes.md
@@ -108,3 +108,30 @@ Maybe in the future, shifting to [Longhorn](https://longhorn.io/) or something s
[Helm](https://helm.sh/) is a tool that packages entire Kubernetes applications. Just like how images are packages of a containerized application or program, Helm works with Charts, which package applications written to work using Kubernetes. This is a more standard way of installing applications onto a Kubernetes cluster, and many charts are stored on [Artifact Hub](https://artifacthub.io/).

To use Helm, the project must be restructured to fit what Helm expects so that it can be packaged into a chart. [This video](https://www.youtube.com/watch?v=5_J7RWLLVeQ) may be a helpful resource, though there are other explanations online that may work better. The current development team has little experience with Helm, so this section lacks a full explanation.
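Once a chart exists (or for installing a third-party chart from Artifact Hub), the typical Helm workflow looks like the sketch below. The repository and chart names (`bitnami/mongodb`) are illustrative stand-ins, since this project does not yet publish its own chart:

```shell
# Sketch: installing an application from a Helm chart.
# bitnami/mongodb is used here only as a well-known example chart.
if command -v helm >/dev/null; then
    # Register the chart repository and refresh the local index.
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    # Install the chart as a release named "my-mongo" in its own namespace.
    helm install my-mongo bitnami/mongodb --namespace db --create-namespace
else
    echo "helm is not installed; see https://helm.sh/docs/intro/install/"
fi
```

Releases installed this way can later be upgraded with `helm upgrade` or removed with `helm uninstall`, which is the main appeal over hand-applied manifests.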

## Glados Virtual Machines

To run the cluster, we have created several virtual machines hosted by Rose-Hulman. The docs below lay out how these virtual machines are structured.

!!! note

    Currently, Ubuntu 22.04 LTS is the operating system of choice; it is supported until April 2027. We ran into some issues running the cluster on Ubuntu 24.04 LTS on our test machine, so we can reevaluate this choice in the future.

| VM Host Name | Description | Specs | What runs here? |
| ------------ | ----------- | ----- | --------------- |
| glados | Runs the Kubernetes control plane; see above for the function of the control plane. | 2 CPU cores, 4GB of RAM, 50GB of storage | control-plane |
| glados-db | Runs and stores the MongoDB instance. | 4 CPU cores, 8GB of RAM, 1TB of storage?, ability to run AVX instructions | database |
| glados-w1 | General worker node. | 4 CPU cores, 8GB of RAM, 50GB of storage | non-specific |
| glados-w2 | General worker node. | 4 CPU cores, 8GB of RAM, 50GB of storage | non-specific |

!!! note

    If we need more worker nodes, they can follow the naming scheme above: glados-w3, glados-w4, etc.
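Joining a new worker such as glados-w3 to the cluster might look like the sketch below. This assumes the cluster was bootstrapped with kubeadm, which the docs do not confirm; adjust if another distribution (k3s, MicroK8s, etc.) is in use:

```shell
# Sketch: adding a new worker node (e.g. glados-w3), assuming a
# kubeadm-managed cluster. The IP, token, and hash are placeholders.
if command -v kubeadm >/dev/null; then
    # On the control plane (glados): print a fresh join command with a token.
    kubeadm token create --print-join-command
    # On the new node (glados-w3): run the printed command, e.g.
    # sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash>
else
    echo "kubeadm is not installed on this machine"
fi
```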

### Justification

The glados VM runs only the control plane so that it can keep running if another node goes down; the control plane can then redistribute work after a crash. It does not need many resources, since it only directs work to the rest of the cluster.

Glados-db runs MongoDB, which means it needs a substantial amount of persistent storage. MongoDB also appears to be fairly resource-intensive, hence the larger CPU and RAM allocation on this VM.
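The AVX requirement in the table can be verified from inside the VM before installing MongoDB, since recent MongoDB releases (5.0 and later) require AVX on x86-64:

```shell
# Sketch: check whether this VM's CPU exposes AVX instructions,
# which MongoDB 5.0+ requires on x86-64.
if grep -qi avx /proc/cpuinfo; then
    echo "AVX supported"
else
    echo "AVX missing: recent MongoDB releases will not start on this VM"
fi
```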

The glados worker nodes can all share the same specs and can be spun up or down as needed. When adding or removing nodes, you must instruct the control plane to do so.
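Removing a worker cleanly involves telling the control plane to evict its workloads first. A sketch, assuming `kubectl` access to the cluster (node name glados-w2 is just an example):

```shell
# Sketch: removing a worker node (e.g. glados-w2) from the cluster.
# Run from a machine with kubectl configured against the control plane.
if command -v kubectl >/dev/null; then
    # Evict workloads safely, then delete the node object.
    kubectl drain glados-w2 --ignore-daemonsets --delete-emptydir-data
    kubectl delete node glados-w2
else
    echo "kubectl is not installed on this machine"
fi
```

After the node object is deleted, the VM itself can be shut down or repurposed.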
1 change: 1 addition & 0 deletions docs/mkdocs.yml
@@ -55,3 +55,4 @@ markdown_extensions:
- def_list
# # https://yakworks.github.io/docmark/extensions/permalinks/
# - toc(permalink=true)
- tables