diff --git a/projects/fair-diffusion/index.html b/projects/fair-diffusion/index.html
index 24b4eb5..af65086 100644
--- a/projects/fair-diffusion/index.html
+++ b/projects/fair-diffusion/index.html
@@ -1,102 +1,130 @@
-Lorem ipsum dolor sit amet, consectetur adipiscing elit. Proin ullamcorper tellus sed ante aliquam tempus. Etiam porttitor urna feugiat nibh elementum, et tempor dolor mattis. Donec accumsan enim augue, a vulputate nisi sodales sit amet. Proin bibendum ex eget mauris cursus euismod nec et nibh. Maecenas ac gravida ante, nec cursus dui. Vivamus purus nibh, placerat ac purus eget, sagittis vestibulum metus. Sed vestibulum bibendum lectus gravida commodo. Pellentesque auctor leo vitae sagittis suscipit.
+Generative AI models have recently achieved astonishing results in quality and are consequently employed in a fast-growing number of applications. However, since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer from degenerate and biased human behavior, as we demonstrate. In fact, they may even reinforce such biases. To not only uncover but also combat these undesired effects, we present a novel strategy, called Fair Diffusion, to attenuate biases after the deployment of generative text-to-image models. Specifically, we demonstrate how to shift a bias, based on human instructions, in any direction, yielding arbitrary new proportions for, e.g., identity groups. As our empirical evaluation demonstrates, this introduced control enables instructing generative image models on fairness, with no data filtering or additional training required.
+
+Fair Diffusion is built on SEGA (Semantic Guidance), which is fully integrated into the diffusers library. For more details, check out the documentation.
+A minimal usage example could look like this:
+
+from diffusers import SemanticStableDiffusionPipeline
+import torch
+
+device = 'cuda'
+pipe = SemanticStableDiffusionPipeline.from_pretrained(
+ "runwayml/stable-diffusion-v1-5",
+).to(device)
+
+gen = torch.Generator(device=device)
+
+gen.manual_seed(21)
+out = pipe(prompt='a photo of the face of a firefighter', generator=gen, num_images_per_prompt=1, guidance_scale=7,
+           editing_prompt=['male person',            # Concepts to apply
+                           'female person'],
+           reverse_editing_direction=[True, False],  # Direction of guidance, i.e. decrease the first and increase the second concept
+           edit_warmup_steps=[10, 10],               # Warmup period for each concept
+           edit_guidance_scale=[4, 4],               # Guidance scale for each concept
+           edit_threshold=[0.95, 0.95],              # Quantile of latent dimensions to discard per concept, i.e. threshold=0.99 edits only the top 1% of latent dimensions
+           edit_momentum_scale=0.3,                  # Scale of the momentum term added to the latent guidance
+           edit_mom_beta=0.6,                        # Beta controlling how the guidance momentum accumulates across diffusion steps
+           edit_weights=[1, 1]                       # Weights of the individual concepts against each other
+           )
+images = out.images
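+The pipeline returns standard PIL images in out.images. As a small follow-up sketch (the seed loop and file names below are illustrative assumptions, not part of the original example), the same fairness guidance can be applied across several seeds and the results saved to disk:
+
+for seed in range(4):  # illustrative: sample a handful of seeds
+    gen.manual_seed(seed)
+    result = pipe(prompt='a photo of the face of a firefighter', generator=gen,
+                  num_images_per_prompt=1, guidance_scale=7,
+                  editing_prompt=['male person', 'female person'],
+                  reverse_editing_direction=[True, False],
+                  edit_warmup_steps=[10, 10],
+                  edit_guidance_scale=[4, 4],
+                  edit_threshold=[0.95, 0.95],
+                  edit_momentum_scale=0.3,
+                  edit_mom_beta=0.6,
+                  edit_weights=[1, 1])
+    result.images[0].save(f'firefighter_seed{seed}.png')  # illustrative file name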
BibTeX
+ @article{friedrich2023FairDiffusion,
+ title={Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness},
+ author={Felix Friedrich and Manuel Brack and Lukas Struppek and Dominik Hintersdorf and Patrick Schramowski and Sasha Luccioni and Kristian Kersting},
+ year={2023},
+ journal={arXiv preprint arXiv:2302.10893}
+}