Diffusion-Models-Implementations

Implement Diffusion Models with PyTorch.

This is a research-oriented repository that implements and reproduces diffusion models and their sampling / guidance algorithms, including DDPM, DDIM, classifier-free guidance, CLIP guidance, mask guidance, ILVR, SDEdit and DDIB (see the Preview section below).

Installation

Clone this repo:

git clone https://github.com/xyfJASON/Diffusion-Models-Implementations.git
cd Diffusion-Models-Implementations

Create and activate a conda environment:

conda create -n diffusion python=3.11
conda activate diffusion

Install dependencies:

pip install -r requirements.txt

Install xformers (optional but recommended):

pip install xformers==0.0.23.post1
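
To sanity-check the environment, the following minimal Python snippet (an illustrative sketch, not part of this repository) verifies that PyTorch can see the GPU and that the optional xformers package is importable:

# verify_env.py -- illustrative environment check, not part of this repository
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

try:
    import xformers  # optional dependency installed above
    print("xformers version:", xformers.__version__)
except ImportError:
    print("xformers not installed (optional)")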

Documentation

For instructions on training / sampling / evaluation, please refer to the docs folder.


Pretrained weights

Checkpoints and training logs

All checkpoints and training logs produced by this repository are uploaded to Hugging Face.
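
As a hedged sketch, a checkpoint can also be fetched programmatically with the huggingface_hub library; the repository id and file name below are placeholders, so substitute the actual values from the Hugging Face page:

# download_ckpt.py -- illustrative sketch; repo_id and filename are placeholders
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="<hf-username>/<hf-repo-name>",  # placeholder: actual Hugging Face repo id
    filename="<checkpoint-name>.pt",         # placeholder: actual checkpoint file name
)
print("Checkpoint downloaded to:", ckpt_path)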

Loading models from other repositories

Training a diffusion model on a large-scale dataset from scratch is time-consuming, especially with limited compute. This repository therefore supports loading models from other open-source repositories, as listed below.

Model Arch.                    | Dataset                  | Resolution | Original Repo                   | Config file
UNet by pesser                 | CelebA-HQ                | 256x256    | pesser/pytorch_diffusion        | config
UNet by pesser                 | LSUN-Church              | 256x256    | pesser/pytorch_diffusion        | config
ADM by openai                  | ImageNet (unconditional) | 256x256    | openai/guided-diffusion         | config
ADM by openai                  | ImageNet (conditional)   | 256x256    | openai/guided-diffusion         | config
ADM by openai                  | AFHQ-Dog                 | 256x256    | jychoi118/ilvr_adm              | config
ADM by openai                  | AFHQ-Cat                 | 256x256    | ChenWu98/cycle-diffusion        | config
ADM by openai                  | AFHQ-Wild                | 256x256    | ChenWu98/cycle-diffusion        | config
ADM by openai                  | CelebA-HQ                | 256x256    | andreas128/RePaint              | config
DiT by meta                    | ImageNet (conditional)   | 256x256    | facebookresearch/DiT            | config
DiT by meta                    | ImageNet (conditional)   | 512x512    | facebookresearch/DiT            | config
MDT                            | ImageNet (conditional)   | 256x256    | sail-sg/MDT                     | config
Stable Diffusion (v1.5 / v2.1) | LAION                    | 512x512    | runwayml/stable-diffusion       | config
Stable Diffusion (v1.5 / v2.1) | LAION                    | 768x768    | Stability-AI/stablediffusion    | config
Stable Diffusion XL            | LAION                    | 1024x1024  | Stability-AI/generative-models  | config

The configuration files are located at ./weights/<github username>/<repo name>/<weights filename>.yaml, so the corresponding model weights are easy to locate. Place the downloaded weights next to their configuration files, for example (a quick check of this layout is sketched after the tree below):

weights
├── pesser
│   └── pytorch_diffusion
│       ├── ema_diffusion_celebahq_model-560000.pt
│       └── ema_diffusion_celebahq_model-560000.yaml
├── openai
│   └── guided-diffusion
│       ├── 256x256_diffusion_uncond.pt
│       ├── 256x256_diffusion_uncond.yaml
│       ├── 256x256_diffusion.pt
│       └── 256x256_diffusion.yaml
└── ...
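
As a quick sanity check of this layout (a minimal sketch, assuming the openai/guided-diffusion files shown above have been downloaded), the checkpoint and its matching YAML configuration can be inspected with torch.load and PyYAML:

# inspect_weights.py -- illustrative sketch using the example paths from the tree above
import torch
import yaml

ckpt_path = "weights/openai/guided-diffusion/256x256_diffusion_uncond.pt"
cfg_path = "weights/openai/guided-diffusion/256x256_diffusion_uncond.yaml"

# Load the state dict on CPU and print a few parameter names and shapes.
state_dict = torch.load(ckpt_path, map_location="cpu")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))

# Load the matching configuration file shipped with this repository.
with open(cfg_path) as f:
    config = yaml.safe_load(f)
print(config)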

Streamlit WebUI

Besides the command-line interface, this repo also provides a WebUI based on the Streamlit library for easy interaction with the implemented models and algorithms. To launch the WebUI, run:

streamlit run streamlit/Hello.py


Preview

This section provides previews of the results generated by the implemented models and algorithms.

For more comprehensive quantitative and qualitative results, please refer to the documentation in the docs folder.

DDPM

DDIM

Classifier-Free Guidance

CLIP Guidance

Mask Guidance

ILVR

SDEdit

DDIB
