- [2024/03/12] 🔥 Code uploaded.
- Text-Guided Editing: Allows users to select an object within an image and replace or refine it based on a text description. Key features:
  - Generates more realistic details and smoother transitions than alternative methods
  - Focuses edits specifically on the targeted object
  - Preserves unrelated parts of the image
- Image-Guided Editing: Enables users to choose an object from a reference image and transplant it into another image while preserving its identity. Key features:
  - Ensures seamless integration of the object into the new context
  - Adapts the object's appearance to match the target image's style
  - Works effectively even when the object's appearance differs significantly between reference and target images
-
Mask-Based Editing: Involves manipulating objects by directly editing their masks.
- Key features:
- Allows for operations like moving, reshaping, resizing, and refining objects
- Fills in new details according to the object's associated prompt
- Produces natural-looking results that maintain consistency with the overall image
- Key features:
- Item Removal: Enables users to remove objects from images by deleting the mask-object associations. Key features:
  - Intelligently fills in the empty space left by removed objects
  - Ensures a coherent final image
  - Maintains the integrity of the surrounding image elements
- Python >= 3.8 (Anaconda or Miniconda recommended)
- PyTorch >= 2.1.0
conda create --name dedit python=3.10
conda activate dedit
pip install -U pip
# Install requirements
pip install -r requirements.txt
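To confirm the environment matches the requirements above, a quick check like the following can be run (a minimal sketch, not part of the repository):

```python
# Minimal environment check for the versions listed above (not part of the repository).
import sys

import torch

assert sys.version_info >= (3, 8), "Python >= 3.8 is required"
major, minor = (int(v) for v in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (2, 1), "PyTorch >= 2.1.0 is required"
print("CUDA available:", torch.cuda.is_available())
```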
Put the image to be edited (any resolution) into a folder with a name of your choosing, and rename the image to "img.png" or "img.jpg". Then run the segmentation model
sh ./scripts/run_segment.sh
Alternatively, run GroundedSAM to detect objects with a text prompt
sh ./scripts/run_segmentSAM.sh
Optionally, if the segmentation is not good, refine the masks with a GUI by running the mask-editing web app locally:
python ui_edit_mask.py
For image-guided editing, repeat this step for both the reference and target images.
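If it helps to prototype the segmentation step outside the provided scripts, the sketch below shows how candidate masks for "img.png" could be produced with the Segment Anything automatic mask generator. The folder name ("example1"), local checkpoint path, and output file names are illustrative assumptions, not the repository's actual interface.

```python
# Hedged sketch: generate candidate masks for ./example1/img.png with SAM.
# Folder name, checkpoint path, and output layout are illustrative assumptions.
import numpy as np
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

image = np.array(Image.open("./example1/img.png").convert("RGB"))

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(image)  # list of dicts with boolean "segmentation" arrays

# Save each mask as a separate PNG so it can be inspected or refined later.
for i, m in enumerate(masks):
    Image.fromarray(m["segmentation"].astype(np.uint8) * 255).save(f"./example1/mask_{i}.png")
```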
Finetune the UNet cross-attention layers of the diffusion model by running
sh ./scripts/sdxl/run_ft_sdxl_1024.sh
or finetune the full UNet with LoRA
sh ./scripts/sdxl/run_ft_sdxl_1024_fulllora.sh
For image-guided editing, finetune the model on both the reference and target images using
sh ./scripts/sdxl/run_ft_sdxl_1024_fulllora_2imgs.sh
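For intuition, the cross-attention option above trains only a small subset of the UNet. The sketch below shows how those parameters could be selected by name in a diffusers SDXL UNet (in diffusers, the "attn2" modules are the cross-attention layers); it illustrates the idea and is not the repository's training code.

```python
# Hedged sketch: freeze the SDXL UNet except its cross-attention ("attn2") layers.
# This only illustrates parameter selection; the actual training loop lives in the scripts above.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)

trainable = []
for name, param in unet.named_parameters():
    param.requires_grad = ".attn2." in name  # attn2 = cross-attention in diffusers UNets
    if param.requires_grad:
        trainable.append(param)

print(f"{sum(p.numel() for p in trainable) / 1e6:.1f}M trainable parameters")
# `trainable` would then be handed to an optimizer, e.g. torch.optim.AdamW(trainable, lr=1e-5).
```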
To check whether the original image can be reconstructed, run
sh ./scripts/sdxl/run_recon.sh
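A simple way to sanity-check the reconstruction is to compare it with the original image, e.g. via PSNR. The reconstruction path below is a placeholder; substitute whatever file the script writes.

```python
# Hedged sketch: compare the reconstruction with the original image via PSNR.
# "recon.png" is a placeholder path; substitute the file the reconstruction script produces.
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("./example1/img.png").convert("RGB"), dtype=np.float64)
recon_img = Image.open("./example1/recon.png").convert("RGB").resize(
    (orig.shape[1], orig.shape[0])
)
recon = np.asarray(recon_img, dtype=np.float64)

mse = np.mean((orig - recon) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
print(f"PSNR: {psnr:.2f} dB")  # higher is better
```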
Replace the target item (tgt_index) with the item described by the text prompt (tgt_prompt):
sh ./scripts/sdxl/run_text.sh
Replace the target item (tgt_index) in the target image (tgt_name) with the item (src_index) from the reference image:
sh ./scripts/sdxl/run_image.sh
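Conceptually, both editing modes above rely on each item's mask being paired with its own prompt embedding: text-guided editing swaps that embedding for an encoding of tgt_prompt, while image-guided editing reuses the embedding learned for an item in the reference image. The sketch below only illustrates this association; the names and data structures are hypothetical, not the repository's API.

```python
# Hedged sketch of the item-prompt association behind the two editing modes above.
# Names and structures are hypothetical; the repository implements this inside the scripts.
from dataclasses import dataclass
from typing import List

import torch

@dataclass
class Item:
    mask: torch.Tensor            # binary mask of the item, shape (H, W)
    prompt_embeds: torch.Tensor   # text embedding associated with this item

def text_guided_replace(items: List[Item], tgt_index: int, new_prompt_embeds: torch.Tensor) -> None:
    # Swap the embedding tied to the target item's mask; the diffusion model then
    # regenerates only that region under the new prompt.
    items[tgt_index].prompt_embeds = new_prompt_embeds

def image_guided_replace(tgt_items: List[Item], tgt_index: int,
                         src_items: List[Item], src_index: int) -> None:
    # Reuse the embedding learned for an item in the reference image so its identity
    # carries over into the target image.
    tgt_items[tgt_index].prompt_embeds = src_items[src_index].prompt_embeds
```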
For the target items (tgt_indices_list), resize them (resize_list), move them (delta_x, delta_y), or reshape them by manually editing the mask shapes (using the UI).
The resulting new masks (processed by a simple algorithm) can be visualized in './example1/move_resize/seg_move_resize.png'; if they are not reasonable, edit them using the UI.
sh ./scripts/sdxl/run_move_resize.sh
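Conceptually, the move/resize step applies a translate-and-scale to each target item's binary mask. A minimal, illustrative version of such a transform (not the repository's algorithm) is:

```python
# Hedged sketch: scale a binary mask and shift it by (delta_x, delta_y).
# This mimics the kind of mask transform the move/resize step performs; it is not the repo's algorithm.
import numpy as np
from PIL import Image

def move_resize_mask(mask: np.ndarray, delta_x: int, delta_y: int, scale: float) -> np.ndarray:
    """Scale a boolean mask about the image origin, then shift it by (delta_x, delta_y)."""
    h, w = mask.shape
    resized = np.array(
        Image.fromarray(mask.astype(np.uint8) * 255).resize(
            (max(1, int(w * scale)), max(1, int(h * scale))), Image.NEAREST
        )
    ) > 0
    out = np.zeros((h, w), dtype=bool)
    ys, xs = np.nonzero(resized)
    ys, xs = ys + delta_y, xs + delta_x
    keep = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out[ys[keep], xs[keep]] = True
    return out
```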
Remove the target item (tgt_index); the vacated region will be reassigned to nearby regions with a simple algorithm. The resulting new masks (processed by the same algorithm) can be visualized in './example1/remove/seg_removed.png'; if they are not reasonable, edit them using the UI.
sh ./scripts/sdxl/run_remove.sh
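The reassignment of the vacated region can be thought of as a nearest-neighbor fill over the remaining segment labels. The sketch below is illustrative only and is not the repository's exact algorithm:

```python
# Hedged sketch: reassign pixels of a removed segment to the nearest remaining segments.
# Illustrative only; the repository applies its own simple algorithm inside the script above.
import numpy as np
from scipy.ndimage import distance_transform_edt

def reassign_removed_region(seg: np.ndarray, removed_label: int) -> np.ndarray:
    """seg: integer label map; pixels of `removed_label` are relabeled to the nearest other segment."""
    removed = seg == removed_label
    # For every removed pixel, find the indices of the nearest pixel outside the removed region.
    _, (iy, ix) = distance_transform_edt(removed, return_indices=True)
    out = seg.copy()
    out[removed] = seg[iy[removed], ix[removed]]
    return out
```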
- We partition the image into three regions as shown below. Regions with the hard mask are frozen, regions with the active mask are generated with the diffusion model, and regions with the soft mask keep the original content during the first "strength*N" sampling steps (a small sketch of this blending logic follows these notes).
- During editing, if you use an edited segmentation that differs from the one used for finetuning, add --load_edited_mask. For mask-based editing and removal, if you edit the masks that were automatically processed by the algorithm as mentioned above, add --load_edited_processed_mask.
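In code terms, the three-region partition amounts to blending latents with the three masks at each denoising step. The sketch below uses hypothetical tensor names to illustrate that logic; the actual implementation is in the repository's sampling code.

```python
# Hedged sketch of the three-region blending logic described above.
# All tensor names are hypothetical; the real logic lives inside the repository's sampler.
import torch

def blend_step(x_t: torch.Tensor,       # current latent proposed by the diffusion model
               orig_t: torch.Tensor,    # original image latent noised to the same step t
               hard: torch.Tensor,      # 1 where the region is frozen
               active: torch.Tensor,    # 1 where content is freely generated
               soft: torch.Tensor,      # 1 where original content is kept early on
               step: int, total_steps: int, strength: float) -> torch.Tensor:
    keep_soft = step < int(strength * total_steps)  # soft region follows the original only early on
    out = active * x_t + hard * orig_t
    out = out + soft * (orig_t if keep_soft else x_t)
    return out
```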
If you find D-Edit useful for your research and applications, please cite us using this BibTeX:
@article{feng2024dedit,
title={An Item is Worth a Prompt: Versatile Image Editing with Disentangled Control},
author={Aosong Feng and Weikang Qiu and Jinbin Bai and Kaicheng Zhou and Zhen Dong and Xiao Zhang and Rex Ying and Leandros Tassiulas},
journal={arXiv preprint arXiv:2403.04880},
year={2024}
}