diff --git a/projects/fair-diffusion/index.html b/projects/fair-diffusion/index.html
index 24b4eb5..af65086 100644
--- a/projects/fair-diffusion/index.html
+++ b/projects/fair-diffusion/index.html
@@ -1,102 +1,130 @@
-  Academic Project Page
+  Fair Diffusion
+  Home
+
+
+
+
+
+

Fair Diffusion:
Instructing Text-to-Image Generation Models on Fairness

+ +
+ TU Darmstadt, hessian.AI, DFKI, LAION, Hugging Face
+
-
- -
- -
-
-
- -

- Aliquam vitae elit ullamcorper tellus egestas pellentesque. Ut lacus tellus, maximus vel lectus at, placerat pretium mi. Maecenas dignissim tincidunt vestibulum. Sed consequat hendrerit nisl ut maximus. -

-
-
-
- -
-
-
-
-

Abstract

-
-

- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Proin ullamcorper tellus sed ante aliquam tempus. Etiam porttitor urna feugiat nibh elementum, et tempor dolor mattis. Donec accumsan enim augue, a vulputate nisi sodales sit amet. Proin bibendum ex eget mauris cursus euismod nec et nibh. Maecenas ac gravida ante, nec cursus dui. Vivamus purus nibh, placerat ac purus eget, sagittis vestibulum metus. Sed vestibulum bibendum lectus gravida commodo. Pellentesque auctor leo vitae sagittis suscipit. -

+
+
+
+

Abstract

+
+

+ Generative AI models have recently achieved astonishing results in quality and are consequently employed in a fast-growing number of applications. However, since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer from degenerated and biased human behavior, as we demonstrate. In fact, they may even reinforce such biases. To not only uncover but also combat these undesired effects, we present a novel strategy, called Fair Diffusion, to attenuate biases after the deployment of generative text-to-image models. Specifically, we demonstrate shifting a bias, based on human instructions, in any direction, yielding arbitrary new proportions for, e.g., identity groups. As our empirical evaluation demonstrates, this introduced control enables instructing generative image models on fairness, with no data filtering or additional training required.

+
+
-
-
-
-
- -
-
-
- -

Video Presentation

-
-
- -
+
+
+ +

Implementation and Usage

+

Fair Diffusion is built on SEGA (Semantic Guidance), which is fully integrated into the diffusers library. For more details, check out the documentation. A minimal usage example could look like this:

+
from diffusers import SemanticStableDiffusionPipeline
+import torch
+
+device = 'cuda'
+pipe = SemanticStableDiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5",
+).to(device)
+
+gen = torch.Generator(device=device)
+
+gen.manual_seed(21)
+out = pipe(prompt='a photo of the face of a firefighter', generator=gen, num_images_per_prompt=1, guidance_scale=7,
+           editing_prompt=['male person',       # Concepts to apply
+                           'female person'],
+           reverse_editing_direction=[True, False], # Direction of guidance i.e. decrease the first and increase the second concept
+           edit_warmup_steps=[10, 10], # Warmup period for each concept
+           edit_guidance_scale=[4, 4], # Guidance scale for each concept
+           edit_threshold=[0.95, 0.95], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions
+           edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance
+           edit_mom_beta=0.6, # Momentum beta
+           edit_weights=[1,1] # Weights of the individual concepts against each other
+          )
+images = out.images
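To steer the outcome towards a chosen proportion of the instructed concepts (e.g., an even split between 'male person' and 'female person'), the guidance direction can be flipped per image. The following is only an illustrative sketch on top of the snippet above, reusing pipe and gen; target_ratio, num_images, and the per-seed loop are hypothetical helpers, not part of diffusers or the official Fair Diffusion code:

# Illustrative sketch: sample several seeds and flip the editing direction
# for a fraction of them, approximating a target proportion (here 50/50).
target_ratio = 0.5   # hypothetical: share of images guided towards the second concept
num_images = 10
balanced_images = []

for seed in range(num_images):
    gen.manual_seed(seed)
    # Guide towards 'female person' for this share of seeds, towards 'male person' otherwise.
    towards_second = (seed / num_images) < target_ratio
    out = pipe(prompt='a photo of the face of a firefighter', generator=gen, num_images_per_prompt=1, guidance_scale=7,
               editing_prompt=['male person', 'female person'],
               # [True, False] decreases the first and increases the second concept; flipped otherwise
               reverse_editing_direction=[towards_second, not towards_second],
               edit_warmup_steps=[10, 10],
               edit_guidance_scale=[4, 4],
               edit_threshold=[0.95, 0.95],
               edit_momentum_scale=0.3,
               edit_mom_beta=0.6,
               edit_weights=[1, 1])
    balanced_images.extend(out.images)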
-
-
- -
-
-
-

Another Carousel

-
-
-
-

Poster

-
-
- -
+
-

BibTeX

-
BibTex Code Here
+

BibTeX

+ If you like or use our work, please cite us:
@article{friedrich2023FairDiffusion,
+      title={Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness},
+      author={Felix Friedrich and Manuel Brack and Lukas Struppek and Dominik Hintersdorf and Patrick Schramowski and Sasha Luccioni and Kristian Kersting},
+      year={2023},
+      journal={arXiv preprint arXiv:2302.10893}
+}
diff --git a/projects/safe-latent-diffusion/index.html b/projects/safe-latent-diffusion/index.html
index 5136db7..98dc655 100644
--- a/projects/safe-latent-diffusion/index.html
+++ b/projects/safe-latent-diffusion/index.html
@@ -76,7 +76,7 @@

Safe Latent Diffusion
 Mitigating Inappropriate Degeneration in Diffusion Models
 Manuel Brack*,
-  Björn Deiseroth
+  Björn Deiseroth,
   Kristian Kersting

diff --git a/static/images/deployment_figure_fair.png b/static/images/deployment_figure_fair.png
new file mode 100644
index 0000000..10e9237
Binary files /dev/null and b/static/images/deployment_figure_fair.png differ
diff --git a/static/images/firefighter_example.png b/static/images/firefighter_example.png
new file mode 100644
index 0000000..cb87ed0
Binary files /dev/null and b/static/images/firefighter_example.png differ