chore(website): update some clustertool guides
PrivatePuffin committed Oct 24, 2024
1 parent c43f2ba commit 28cbcfa
Showing 3 changed files with 27 additions and 30 deletions.
7 changes: 2 additions & 5 deletions website/src/content/docs/clustertool/csi/topolvm.md
@@ -128,12 +128,9 @@ The following example can be used and adjusted where necessary.
## Snapshots
TBD

## Optional: Non-ClusterTool only

The following steps are already included in clustertool by default.


### Kernel Modules
## Kernel Modules

Add these two kernel modules. Use `modprobe` for typical Linux installs, or add them to your `talconfig.yaml` if using TalHelper or ClusterTool.

@@ -168,7 +165,7 @@

Create a Thin Pool:
```shell
lvcreate -l 100%FREE --chunksize 256 -T -A n -n topolvm_thin topolvm_vg
```
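The exact `talconfig.yaml` block is not visible in this collapsed view. As a minimal sketch, a Talos machine-config patch for loading kernel modules looks like the following; the module names `dm_thin_pool` and `dm_mod` are assumptions, so check the full guide for the exact list:

```yaml
# Hypothetical machine-config patch; the module names below are
# assumptions, not taken from this diff.
machine:
  kernel:
    modules:
      - name: dm_thin_pool
      - name: dm_mod
```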

### Create Privileged Namespace
## Create Privileged Namespace

Create the namespace with these labels:
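The label list itself is not shown in this collapsed view. A minimal sketch of a privileged namespace, assuming the standard Kubernetes Pod Security Admission labels and a hypothetical namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: topolvm-system   # hypothetical name; use the one from the full guide
  labels:
    # Pod Security Admission labels that mark a namespace as privileged
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```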
@@ -111,32 +111,13 @@ These include, but are not limited to:

### Storage Recommendations

The file created on your host's storage device for use by the VM is almost always a single contiguous file. So whilst an SSD will obviously greatly improve the speed at which the VM can access this file, an HDD is adequate.

Additionally, the storage backend we are using on Talos **requires** two separate "disks" to be presented to the Talos VM. These are, however, [sparsely allocated](https://en.wikipedia.org/wiki/Sparse_file). This means that whilst you'd want the entire amount of space to be available, it will not all be used immediately.
An SSD, an HDD+metadata ZFS pool, and/or disabling sync-writes will greatly improve performance and is assumed to be required.

Sparse allocation is advised.
For example: a 512GB "sparsely allocated" disk for the Talos VM, housed on a 1TB disk in the host system, will not immediately or always take up 512GB of space. 512GB is the maximum amount of space the file *could* occupy if needed.
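Sparse allocation can be demonstrated with plain GNU coreutils; this is a generic illustration (the filename is made up), not a clustertool step:

```shell
# Create a sparse 512G disk image: the apparent size is 512G,
# but almost no real blocks are allocated until data is written.
truncate -s 512G talos-data.img

# Apparent size (what the VM would see):
ls -lh talos-data.img

# Actual on-disk usage, which starts out near zero:
du -h talos-data.img
```

TrueNAS Zvols achieve the same effect when created with the sparse option.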

### GPU Recommendations

Unfortunately, AMD (i)GPUs continue to be rather lacklustre in the Kubernetes world. AMD GPUs are *supposed* to work under Kubernetes, but suffer limitations such as only being able to be used by 1 app/chart at a time, which makes them hard to recommend.

Nvidia, and to some extent Intel, GPUs by comparison will almost always work "out of the box".

### SCALE VM Host Caveats

Users running the Talos VM atop a TrueNAS SCALE host system who also want to take advantage of GPU passthrough to the VM will require a **minimum** of 2 *different* GPUs to be present in the system.

The GPU desired to be passed through to the Talos VM will need to be [isolated](/clustertool/virtual-machines/truenas-scale/#gpu-isolation) within SCALE.

This could include any of the following combinations:

**GPU1:** Dedicated Nvidia GPU isolated within SCALE for VM passthrough

**GPU2:** Intel/AMD iGPU

or

**GPU1:** Motherboard IPMI GPU

**GPU2:** Intel iGPU or dedicated Nvidia GPU isolated within SCALE for VM passthrough
@@ -134,8 +134,6 @@ Go back to the "preparation" section and make sure the IP you are trying to move
7. Hit `save` and wait for the system to create the Zvol. The GUI should refresh and then show your Zvol in the list of datasets like so
![ZVOL Creation 2](./img/vm_zvol2.png)

8. As you can see, I have 2 Zvols in my dataset. Multiple storage devices/drives aren't required for Talos, but if you would like them, you can repeat the above steps, this time setting `Size for this zvol` to between `768GiB` and `2TiB`, or however large you desire. The first Zvol you created will be the "system" disk for Talos, and the second will be the "data" disk. Instructions on how to attach the second Zvol to the Talos VM follow below.

</Steps>

## GPU Isolation
@@ -189,9 +187,9 @@ Minimum recommended amount of RAM: `32GB`

![VM CPU And Memory](./img/vm_cpu_memory.png)

### Disks
### Disk

Select the **first** previously created Zvol for your VM as shown below:
Select the previously created Zvol for your VM as shown below:

![VM Disks](./img/vm_disks.png)

@@ -229,6 +227,8 @@ If you followed this guide correctly, the options shown should look similar to this

You can skip this step if you don't have multiple disks configured for use with Talos.

Please be warned: we do NOT actively provide support for multi-disk setups, and this *will* require modifications to the default CSI setup.

:::

Now that we have created the Talos VM, we need to attach the second Zvol we created earlier to it.
@@ -326,3 +326,22 @@ Worker nodes can be pretty basic and should "just work".
```yaml
installDiskSelector:
  size: <= 600GB
```
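As a sketch of context, an `installDiskSelector` like the one above sits under a node entry in a TalHelper `talconfig.yaml`; the hostname and address below are made up:

```yaml
# Hypothetical worker-node entry; hostname and ipAddress are invented
# for illustration only.
nodes:
  - hostname: worker-01
    ipAddress: 192.168.1.51
    controlPlane: false
    installDiskSelector:
      size: <= 600GB
```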
### GPU pass-through Caveats

Users running the Talos VM atop a TrueNAS SCALE host system who also want to take advantage of GPU passthrough to the VM will require a **minimum** of 2 *different* GPUs to be present in the system.

The GPU desired to be passed through to the Talos VM will need to be [isolated](/clustertool/virtual-machines/truenas-scale/#gpu-isolation) within SCALE.

This could include any of the following combinations:

**GPU1:** Dedicated Nvidia GPU isolated within SCALE for VM passthrough

**GPU2:** Intel/AMD iGPU

or

**GPU1:** Motherboard IPMI GPU

**GPU2:** Intel iGPU or dedicated Nvidia GPU isolated within SCALE for VM passthrough
