
github-actions[bot] edited this page Jun 13, 2024 · 10 revisions

stabilityai-stable-diffusion-xl-refiner-1-0

Overview

SDXL uses an ensemble-of-experts pipeline for latent diffusion: in the first step, the base model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) is used to generate (noisy) latents, which are then further processed with a refinement model specialized for the final denoising steps. Note that the base model can also be used as a standalone module.

Alternatively, we can use a two-stage pipeline as follows: First, the base model is used to generate latents of the desired output size. In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img") to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
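To make the expert handoff concrete, the toy Python sketch below splits an iterative denoising loop between a "base" stage and a "refiner" stage that picks up the base model's latent for the last steps. Everything here (the denoise_step rule, the 80% handoff point, the strengths) is invented for illustration and is not the actual SDXL computation.

```python
import random

def denoise_step(latent, strength):
    # toy "model": pull each latent value a bit closer to a clean target (1.0)
    return [x + strength * (1.0 - x) for x in latent]

def run_expert(latent, steps, strength):
    # run one expert for a contiguous slice of the denoising schedule
    for _ in range(steps):
        latent = denoise_step(latent, strength)
    return latent

random.seed(0)
latent = [random.gauss(0.0, 1.0) for _ in range(4)]  # stand-in for a noisy latent

# ensemble of experts: the base model handles the first 80% of the steps,
# then hands its (still noisy) latent to the refiner for the final steps
total_steps = 50
handoff = int(0.8 * total_steps)
latent = run_expert(latent, handoff, strength=0.1)                # "base"
latent = run_expert(latent, total_steps - handoff, strength=0.2)  # "refiner"

print(latent)  # all values now close to the clean target
```

The key structural point the sketch mirrors is that the refiner never starts from pure noise: it always receives a partially denoised latent, which is why it can specialize in the low-noise regime.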

The model is intended for research purposes only. Possible research areas and tasks include:

  • Generation of artworks and use in design and other artistic processes.
  • Applications in educational or creative tools.
  • Research on generative models.
  • Safe deployment of models which have the potential to generate harmful content.
  • Probing and understanding the limitations and biases of generative models.

Evaluation Results

User-preference evaluations compare SDXL (with and without refinement) against SDXL 0.9, Stable Diffusion 1.5, and Stable Diffusion 2.1. The SDXL base model performs significantly better than the previous variants, and combining it with the refinement module achieves the best overall performance.

Limitations and Biases

Limitations

  • The model does not achieve perfect photorealism.
  • The model cannot render legible text.
  • The model struggles with more difficult tasks that involve compositionality, such as rendering an image corresponding to "A red cube on top of a blue sphere".
  • Faces and people in general may not be generated properly.
  • The autoencoding part of the model is lossy.

Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.

Out-of-Scope Use

The model was not trained to produce factual or true representations of people or events; using it to generate such content is therefore out of scope for this model.

License

CreativeML Open RAIL++-M License

Inference Samples

Inference type | Python sample (Notebook) | CLI with YAML
--- | --- | ---
Real time | image-text-to-image-online-endpoint.ipynb | image-text-to-image-online-endpoint.sh
Batch | image-text-to-image-batch-endpoint.ipynb | image-text-to-image-batch-endpoint.sh

Inference with Azure AI Content Safety (AACS) samples

Inference type | Python sample (Notebook)
--- | ---
Real time | safe-image-text-to-image-online-endpoint.ipynb
Batch | safe-image-text-to-image-batch-endpoint.ipynb

Sample input and output

Sample input

{
    "input_data": {
        "columns": ["prompt", "image"],
        "data": [
            {
                "prompt": "Face of a yellow cat, high resolution, sitting on a park bench",
                "image": "image1"
            },
            {
                "prompt": "Face of a green cat, high resolution, sitting on a park bench",
                "image": "image2"
            }
        ],
        "index": [0, 1]
    }
}

Note:

  • "image1" and "image2" strings are base64 format.

Sample output

[
    {
        "prompt": "Face of a yellow cat, high resolution, sitting on a park bench",
        "generated_image": "generated_image1",
        "nsfw_content_detected": null
    },
    {
        "prompt": "Face of a green cat, high resolution, sitting on a park bench",
        "generated_image": "generated_image2",
        "nsfw_content_detected": null
    }
]

Note:

  • "generated_image1" and "generated_image2" strings are in base64 format.
  • The stabilityai-stable-diffusion-xl-refiner-1-0 model does not check generated images for NSFW content. We highly recommend using the model with Azure AI Content Safety (AACS). Please refer to the sample online and batch notebooks for AACS-integrated deployments.
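To recover the images, each generated_image string is base64-decoded back into raw image bytes. The sketch below uses a fabricated response shaped like the sample output above; real values would be full encoded images, and the decoded bytes could then be written to files.

```python
import base64

# fabricated response in the shape of the sample output; real
# "generated_image" values would be much longer base64 strings
response = [
    {
        "prompt": "Face of a yellow cat, high resolution, sitting on a park bench",
        "generated_image": base64.b64encode(b"fake-image-bytes-1").decode("utf-8"),
        "nsfw_content_detected": None,
    },
    {
        "prompt": "Face of a green cat, high resolution, sitting on a park bench",
        "generated_image": base64.b64encode(b"fake-image-bytes-2").decode("utf-8"),
        "nsfw_content_detected": None,
    },
]

images = []
for item in response:
    # decode the base64 string back into raw bytes; with a real response
    # these bytes would be image data suitable for writing to a .png file
    images.append(base64.b64decode(item["generated_image"]))

print([len(b) for b in images])
```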

Visualization for the prompt - "gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

stabilityai-stable-diffusion-xl-refiner-1-0 input image and output visualization

Version: 4

Tags

  • Preview
  • SharedComputeCapacityEnabled
  • license : creativeml-openrail++-m
  • task : image-to-image
  • hiddenlayerscanned
  • author : stabilityai
  • training_dataset : LAION-5B
  • huggingface_model_id : stabilityai/stable-diffusion-xl-refiner-1.0
  • inference_compute_allow_list : ['Standard_NC6s_v3', 'Standard_NC12s_v3', 'Standard_NC24s_v3', 'Standard_ND40rs_v2', 'Standard_ND96amsr_A100_v4', 'Standard_ND96asr_v4']

View in Studio: https://ml.azure.com/registries/azureml/models/stabilityai-stable-diffusion-xl-refiner-1-0/version/4

License: creativeml-openrail++-m

Properties

SharedComputeCapacityEnabled: True

SHA: 5d4cfe854c9a9a87939ff3653551c2b3c99a4356

inference-min-sku-spec: 6|1|112|736

inference-recommended-sku: Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, Standard_ND40rs_v2, Standard_ND96amsr_A100_v4, Standard_ND96asr_v4
