Using this repository, you can create your own LoRA models and upload them to CivitAI without using Automatic1111.
LoRA models can be trained with significantly lower compute and memory requirements than approaches like DreamBooth, CLIP fine-tuning, or textual inversion, and the resulting weight files are only a few MB in size.
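The small file size follows from LoRA's structure: the base model stays frozen and only a pair of low-rank matrices per adapted layer is trained. A minimal sketch of the idea (the class name, rank, and dimensions below are illustrative, not taken from this repo):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights stay frozen
        # A is small random, B starts at zero, so the update begins as a no-op.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 4 * 768 = 6144 params instead of 768 * 768 = 589824
```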
Set up a T4 instance using this image.
Create the environment
- Install Diffusers from source, as we will need to modify the SD libraries directly:

  ```
  pip install git+https://github.com/huggingface/diffusers
  ```

  Comment out the lines that contain `# black image` to turn off the safety filter.
- Install the remaining dependencies:

  ```
  pip install accelerate transformers datasets evaluate
  ```
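If you would rather not patch the library source, diffusers also lets you drop the safety checker per pipeline. A minimal sketch (the model id is the one used later in this README):

```python
import torch
from diffusers import StableDiffusionPipeline

# Passing safety_checker=None skips the NSFW filter that would otherwise
# replace flagged outputs with a black image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None,
)
```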
- Assuming torch is already installed:

  ```
  pip3 install torchvision torchaudio
  ```
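Before going further, it is worth confirming that PyTorch can actually see the T4:

```python
import torch

# Expect True and a Tesla T4 on the instance set up above.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```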
- Make a folder containing the images you want to fine-tune a LoRA model on. Don't worry about the folder structure, but make sure the image files are in a format recognized by Pillow.
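To catch unreadable files early, you can verify the folder with Pillow before captioning (the folder name `waifu_dataset` is the one used in the training command below):

```python
from pathlib import Path
from PIL import Image

# Try to open every file in the dataset folder; report anything Pillow rejects.
for path in Path("waifu_dataset").iterdir():
    try:
        with Image.open(path) as img:
            img.verify()  # cheap integrity check without fully decoding
    except Exception as exc:
        print(f"unreadable: {path} ({exc})")
```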
- Create an (image, caption) pairs dataset from your images (change any hardcoded values in the file to match your requirements first):

  ```
  python3 blip_captions.py
  ```
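For reference, `blip_captions.py` presumably does something along these lines. This is a hedged sketch using the public Salesforce BLIP checkpoint and the `metadata.jsonl` layout that the `datasets` imagefolder loader expects, not the repo's exact code:

```python
import json
from pathlib import Path
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to("cuda")

folder = Path("waifu_dataset")  # folder from the previous step
with open(folder / "metadata.jsonl", "w") as f:
    for path in sorted(folder.glob("*.jpg")):  # adjust the pattern to your formats
        image = Image.open(path).convert("RGB")
        inputs = processor(image, return_tensors="pt").to("cuda")
        out = model.generate(**inputs, max_new_tokens=50)
        caption = processor.decode(out[0], skip_special_tokens=True)
        f.write(json.dumps({"file_name": path.name, "text": caption}) + "\n")
```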
- Update `DATASET_NAME_MAPPING` in `sd.py`.
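In the upstream Hugging Face training script, this mapping ties a dataset name to its (image column, caption column) pair. Assuming `sd.py` follows the same pattern, an entry might look like:

```python
# Keyed by dataset name; each value names the (image_column, caption_column)
# pair the trainer should read. The second entry is illustrative, not from the repo.
DATASET_NAME_MAPPING = {
    "lambdalabs/pokemon-blip-captions": ("image", "text"),  # upstream default
    "waifu_dataset": ("image", "text"),
}
```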
- The following command fine-tunes your model. Change the parameter values as necessary.
  ```
  export MODEL_NAME="runwayml/stable-diffusion-v1-5"
  export OUTPUT_DIR="waifu"
  export HUB_MODEL_ID="waifu-lora"

  accelerate launch --mixed_precision="fp16" sd.py \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --train_data_dir="waifu_dataset" \
    --dataloader_num_workers=8 \
    --resolution=512 \
    --center_crop \
    --random_flip \
    --train_batch_size=1 \
    --gradient_accumulation_steps=4 \
    --max_train_steps=15000 \
    --learning_rate=1e-04 \
    --max_grad_norm=1 \
    --lr_scheduler="cosine" \
    --lr_warmup_steps=0 \
    --output_dir=${OUTPUT_DIR} \
    --push_to_hub \
    --hub_model_id=${HUB_MODEL_ID} \
    --checkpointing_steps=500 \
    --validation_prompt="a woman wearing red lipstick with black hair" \
    --seed=1337
  ```
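As a sanity check on the schedule: assuming `max_train_steps` counts optimizer steps, as in the upstream Hugging Face script, each step consumes `train_batch_size × gradient_accumulation_steps` images:

```python
train_batch_size = 1
gradient_accumulation_steps = 4
max_train_steps = 15_000

effective_batch = train_batch_size * gradient_accumulation_steps  # 4 images per optimizer step
images_seen = effective_batch * max_train_steps                   # 60,000 images over the run
print(effective_batch, images_seen)
```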
- To figure out the base model name:

  ```
  python3 sd_test.py
  ```
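`sd_test.py` presumably runs plain base-model inference. A minimal diffusers sketch of that step (the prompt and output path are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model alone, with no LoRA weights applied.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a woman wearing red lipstick with black hair").images[0]
image.save("base_model_output.png")
```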
- To perform inference using the base model and the fine-tuned LoRA weights together (set the correct model path and prompt in the file first):

  ```
  python3 sd_test2.py
  ```
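Loading the LoRA on top of the base model, as `sd_test2.py` presumably does, uses diffusers' attention-processor API for SD 1.5-era LoRAs. A hedged sketch, not the repo's exact code (the weights path is the `OUTPUT_DIR` from the training command; a Hub id such as `<username>/waifu-lora` works as well):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the fine-tuned LoRA weights on top of the frozen base UNet.
pipe.unet.load_attn_procs("waifu")

image = pipe("a woman wearing red lipstick with black hair").images[0]
image.save("lora_output.png")
```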