
stable_diffusion

Demo


Introduction

This repository provides a comprehensive PyTorch implementation of the CLIP model used in the Stable Diffusion pipeline, focusing on efficient self-attention mechanisms and feedforward layers. It includes a detailed setup guide, dependency management, and data-handling instructions, along with clear documentation and interactive Colab notebooks for easy experimentation and reproducibility.
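
As a rough illustration of the kind of block such an encoder is built from, the sketch below shows a pre-norm transformer layer with multi-head self-attention followed by a GELU feedforward network. It is a minimal stand-alone example: the class name is hypothetical, and the dimensions (d_model=768, n_heads=12, context length 77) are typical CLIP text-encoder values rather than values taken from this repository.

import torch
from torch import nn

class CLIPEncoderLayer(nn.Module):
    # Illustrative sketch (not this repository's module): pre-norm self-attention
    # plus a GELU feedforward sublayer, each wrapped in a residual connection.
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x, attn_mask=None):
        # Self-attention sublayer.
        h = self.norm1(x)
        h, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + h
        # Feedforward sublayer.
        return x + self.mlp(self.norm2(x))

# Example: a batch of 2 sequences of 77 token embeddings (CLIP's context length).
tokens = torch.randn(2, 77, 768)
print(CLIPEncoderLayer()(tokens).shape)  # torch.Size([2, 77, 768])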

Usage

from stable_diffusion_pytorch import model_loader, pipeline

# Load the pretrained weights from the unpacked data/ folder
# (model_loader.preload_models as provided by stable_diffusion_pytorch).
models = model_loader.preload_models('cpu')

prompt = "a photograph of an astronaut riding a horse in mountain"
prompts = [prompt]

# Negative (unconditional) prompt used for classifier-free guidance.
uncond_prompt = ""
uncond_prompts = [uncond_prompt] if uncond_prompt else None

# Optionally upload an input image for image-to-image generation (Colab only).
upload_input_image = False
input_images = None
if upload_input_image:
    from PIL import Image
    from google.colab import files
    print("Upload an input image:")
    path = list(files.upload().keys())[0]
    input_images = [Image.open(path)]

# How strongly the input image is altered (only used when input_images is set).
strength = 0.89

# Classifier-free guidance, output resolution, sampler, and number of denoising steps.
do_cfg = True
cfg_scale = 5
height = 512
width = 512
sampler = "k_lms"
n_inference_steps = 50

# Set use_seed = True for reproducible results.
use_seed = False
if use_seed:
    seed = 42
else:
    seed = None

pipeline.generate(prompts=prompts, uncond_prompts=uncond_prompts,
                  input_images=input_images, strength=strength,
                  do_cfg=do_cfg, cfg_scale=cfg_scale,
                  height=height, width=width, sampler=sampler,
                  n_inference_steps=n_inference_steps, seed=seed,
                  models=models, device='cuda', idle_device='cpu')[0]
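
pipeline.generate produces one image per prompt; the trailing [0] above simply displays the first image in a notebook. Assuming the returned objects are PIL images, a minimal way to capture and save the results instead (output filenames are illustrative) is:

images = pipeline.generate(prompts=prompts, uncond_prompts=uncond_prompts,
                           input_images=input_images, strength=strength,
                           do_cfg=do_cfg, cfg_scale=cfg_scale,
                           height=height, width=width, sampler=sampler,
                           n_inference_steps=n_inference_steps, seed=seed,
                           models=models, device='cuda', idle_device='cpu')

# Save each generated image (assumes PIL.Image objects are returned).
for i, image in enumerate(images):
    image.save(f"output_{i}.png")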

How to Install

  1. Clone or Download the Repository:

    • Clone the repository using Git:
      git clone https://github.com/A7medM0sta/coding_stable_diffusion
    • Or download the repository as a ZIP file and extract it.
  2. Install Dependencies:

    • Navigate to the project directory:
      cd coding_stable_diffusion
    • Then install all dependencies listed in requirements.txt:
      pip install -r requirements.txt
  3. Download and Unpack Data:

    • Download the data.v20221029.tar file from Hugging Face.
    • Unpack the downloaded file into the parent folder of stable_diffusion_pytorch. Your folder structure should look like this:
      coding_stable_diffusion/
      ├─ data/
      │  ├─ ckpt/
      │  └─ ...
      ├─ stable_diffusion_pytorch/
      │  ├─ samplers/
      │  └─ ...
      └─ src/
         ├─ demo.ipynb
         └─ ...
      

Note: The checkpoint files included in the data archive are covered by a separate license. You must agree to that license before using them.
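
Before launching the demo notebook, a quick sanity check (a small illustrative snippet; paths assumed from the tree above) can confirm the data was unpacked into the expected locations:

from pathlib import Path

# Run from the coding_stable_diffusion/ folder; paths follow the layout shown above.
root = Path('.')
for required in ['data/ckpt', 'stable_diffusion_pytorch/samplers', 'src/demo.ipynb']:
    path = root / required
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")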
