Background: https://huggingface.co/blog/stable_diffusion
Main project: https://github.com/CompVis/stable-diffusion
Optimized for GPUs with less VRAM: https://github.com/basujindal/stable-diffusion
Installation process:
Stable Diffusion is built on Python and runs inside the project's Anaconda environment. Prerequisite software: Anaconda (from conda.io) and git. Because Anaconda environments live in each user's home directory, the notes below describe a per-user setup that every user must complete for themselves.
# apt install libgl1-mesa-glx libegl1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6
# apt install git
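The per-user steps can be sketched as a shell function. This is a sketch, not a tested procedure: it assumes conda and git are on the user's PATH, and that the env name is "ldm", which is what the upstream repo's environment.yaml defines.

```shell
# Per-user Stable Diffusion setup (sketch).
# Assumes: conda initialized for this shell, git installed system-wide.
setup_sd() {
  git clone https://github.com/CompVis/stable-diffusion "$HOME/stable-diffusion" &&
  cd "$HOME/stable-diffusion" &&
  # "ldm" is the env name declared inside environment.yaml
  conda env create -f environment.yaml &&
  conda activate ldm
}
```

Each user runs this once; after that, `conda activate ldm` is the only step needed per login.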
Created /ckpt at the root level as a shared place to store model checkpoints for all users to access.
Downloaded the recommended v1.4 checkpoint file from Hugging Face: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
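A sketch of fetching the checkpoint into /ckpt and linking it where the project's scripts expect it. The models/ldm/stable-diffusion-v1/model.ckpt path comes from the upstream README; run the linking step from inside your stable-diffusion checkout.

```shell
# Download the v1.4 checkpoint to the shared /ckpt dir, then symlink it
# into a user's repo checkout (sketch; paths assume the upstream layout).
fetch_ckpt() {
  wget -O /ckpt/sd-v1-4.ckpt \
    https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt &&
  mkdir -p models/ldm/stable-diffusion-v1 &&
  ln -s /ckpt/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
}
```

The symlink means each user shares the one ~4 GB checkpoint in /ckpt instead of keeping a private copy.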
Reminder: SSH/SFTP and port 22 are available from VPN only!
hostname: gpu-stats-20212.iac.gatech.edu
username: (your GT username)
password: (your GT password)
port: 22
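The connection details above can go in a per-user ~/.ssh/config entry so that plain `ssh gpu-stats` works; the "gpu-stats" alias here is arbitrary.

```
Host gpu-stats
    HostName gpu-stats-20212.iac.gatech.edu
    # your GT username
    User gburdell3
    Port 22
```

With this in place, `ssh gpu-stats` and `sftp gpu-stats` both work (VPN required, per the note above).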
If the script reports "no CUDA GPUs are available":
sources:
https://stackoverflow.com/questions/70148547/wsl2-pytorch-runtimeerror-no-cuda-gpus-are-available-with-rtx3080
https://www.nvidia.com/Download/index.aspx?lang=en-us
Had to remove the default NVIDIA drivers and CUDA libraries previously installed on GPU-Stats-2021 and reinstall. The old CUDA toolkit was compatible with neither the A100 cards nor the version of PyTorch vended in the Stable Diffusion Python env.
apt-get remove --purge 'nvidia*'
wget https://us.download.nvidia.com/tesla/525.60.13/NVIDIA-Linux-x86_64-525.60.13.run
Version 525.60.13 with CUDA 12 support works with this version of the Stable Diffusion project. This may change in the future.
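The full reinstall sequence, as root, can be sketched as below. The --silent flag runs the NVIDIA runfile installer non-interactively; the final nvidia-smi is just a sanity check that the A100s are visible again.

```shell
# Remove distro NVIDIA packages and install the 525.60.13 Tesla driver
# (sketch; run as root, with the GPUs idle and X/display services stopped).
reinstall_nvidia() {
  apt-get remove --purge 'nvidia*' &&
  wget https://us.download.nvidia.com/tesla/525.60.13/NVIDIA-Linux-x86_64-525.60.13.run &&
  sh NVIDIA-Linux-x86_64-525.60.13.run --silent &&
  # sanity check: the A100s should now be listed
  nvidia-smi
}
```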
If calling conda activate gives an error:
edit the user's .bashrc file and remove the stale conda initialization block (the lines between the "# >>> conda initialize >>>" and "# <<< conda initialize <<<" markers), then re-run $ conda init bash and open a new shell.
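The .bashrc repair can be sketched as a function. The conda-initialize markers used in the sed range are the ones that `conda init bash` itself writes, so deleting that range and re-running init rebuilds the block cleanly; the backup step is just a precaution.

```shell
# Strip the broken conda block from ~/.bashrc and regenerate it (sketch).
repair_conda_init() {
  cp ~/.bashrc ~/.bashrc.bak &&
  sed -i '/# >>> conda initialize >>>/,/# <<< conda initialize <<</d' ~/.bashrc &&
  conda init bash &&
  # reload the shell so the regenerated block takes effect
  exec bash
}
```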