Run Stable Diffusion with companion models on a GPU-enabled Kubernetes cluster - complete with a WebUI and automatic model fetching for a two-step install that takes less than 2 minutes (excluding download times).
Uses the `nvidia/cuda` image as a base.
- Automatic model fetching
- Works with `gpu-operator`, bundling CUDA libraries
- Interactive UI with many features, and more on the way!
  - GFPGAN for face reconstruction, RealESRGAN for super-sampling
  - Textual Inversion
  - many more!
- Kubernetes cluster with GPUs attached to at least one node, and NVIDIA's `gpu-operator` set up successfully
- `helm` installed locally
- Add the helm repo with `helm repo add amithkk-sd https://amithkk.github.io/stable-diffusion-k8s`
- Fetch the latest charts with `helm repo update`
- (Optional) Create your own `values.yaml` with customized settings
  - Some things that you might want to change include the `nodeAffinity`, `cliArgs` (see below), and `ingress` settings (which allow you to access the UI externally without needing `kubectl port-forward`)
- Install with `helm install --generate-name amithkk-sd/stable-diffusion -f <your-values.yaml>`
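If you do create a custom `values.yaml`, a minimal sketch might look like the following. The top-level key names (`nodeAffinity`, `ingress`, `cliArgs`) come from the list above, but the nested structure and the hostname are illustrative assumptions - check the chart's real schema with `helm show values amithkk-sd/stable-diffusion` before using it.

```yaml
# Illustrative values.yaml overrides - verify the exact schema against
# the chart's defaults (helm show values amithkk-sd/stable-diffusion).

# Pin the pod to GPU nodes (standard Kubernetes nodeAffinity shape;
# the label key below is an assumption).
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - key: nvidia.com/gpu.present
            operator: In
            values: ["true"]

# Expose the WebUI externally instead of using kubectl port-forward.
ingress:
  enabled: true
  hosts:
    - host: sd.example.com   # hypothetical hostname
      paths: ["/"]
```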
Wait for the containers to come up and follow the instructions returned by Helm to connect. This may take a while, as it has to download a ~5GiB Docker image and ~5GiB of models.
By extending your `values.yaml` you can change the `cliArgs` key, which contains the arguments that will be passed to the WebUI. By default, `--extra-models-cpu --optimized-turbo` are given, which allow you to use this model on a 6GB GPU. However, some features might not be available in this mode. You can find the full list of arguments here.
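As a sketch, spelling out the default arguments in your own `values.yaml` might look like this (assuming the chart takes `cliArgs` as a single string of WebUI arguments - verify against the chart's defaults):

```yaml
# Matches the chart's stated defaults; edit this string to change
# which arguments are passed to the WebUI.
cliArgs: "--extra-models-cpu --optimized-turbo"
```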
- To enable Textual Inversion, remove the `--optimized` and `--optimized-turbo` flags and add `--no-half` to `cliArgs` when installing; more info here.
- If the output is always a green image, use `--precision full --no-half`.
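Applying the Textual Inversion tip above, a `values.yaml` override might look like this (the key shape is the same assumption as before):

```yaml
# Textual Inversion: the optimized flags are dropped and --no-half added.
cliArgs: "--extra-models-cpu --no-half"
```

On an existing install, such a change could be rolled out with `helm upgrade <your-release-name> amithkk-sd/stable-diffusion -f values.yaml`, where the release name is whatever `helm install --generate-name` printed.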
The author(s) of this project are not responsible for any content generated using this interface.
Special thanks to everyone behind these awesome projects, without them, none of this would have been possible: