
GarmentDreamer
3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details

Boqian Li · Xuan Li · Ying Jiang · Tianyi Xie · Feng Gao · Huamin Wang · Yin Yang · Chenfanfu Jiang

International Conference on 3D Vision (3DV) 2025


Paper PDF · Project Page · Code (GitHub)

📝 TODO

  • Fix GPU selection; fix the final mesh reversal problem
  • Better configs
  • Upload more templates
  • Improve Garment_3DGS to obtain more significant deformation
  • Release the code of AutoEncoder_dgcnn and Garment_Diffusion, together with the pretrained models

🛠️ Environment Setup

  1. Environment:
  • Ubuntu 20.04
  • CUDA 11.8
  2. Create Env:

    conda create -n garmentdreamer python==3.10
    conda activate garmentdreamer
    conda install pytorch==2.3.1 torchvision==0.18.1 pytorch-cuda=11.8 -c pytorch -c nvidia # specific channel matters
    pip install -r requirements.txt
    pip install xformers==0.0.27 --extra-index-url https://download.pytorch.org/whl/cu118
    
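A quick optional sanity check that the CUDA build of PyTorch is active before continuing; assuming the install above succeeded, this should print 2.3.1 True:

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
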
  3. Install pytorch3d:

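The repo does not pin a specific pytorch3d install method; one common route is building from source with pip, as in the official pytorch3d instructions (compatibility with the exact PyTorch 2.3.1 / CUDA 11.8 combination above is an assumption worth verifying):

    pip install "git+https://github.com/facebookresearch/pytorch3d.git"
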
  4. Install requirements for 3DGS:

    pip install Garment_3DGS/gaussiansplatting/submodules/diff-gaussian-rasterization
    pip install Garment_3DGS/gaussiansplatting/submodules/simple-knn
    
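To confirm that both CUDA extensions built and import correctly, a quick check (the import names below are the ones used inside the 3DGS codebase):

    python -c "from diff_gaussian_rasterization import GaussianRasterizer; from simple_knn._C import distCUDA2"
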
  5. Hugging Face login (in mainland China, first run export HF_ENDPOINT=https://hf-mirror.com):

    huggingface-cli login 
    # Then input your huggingface token for authentication
    

🚀 Get Started

🧩 For the Normal Estimator

  • In this version, we use the normal estimator in Metric3D to estimate the normal map of the input garment. You can also use your own normal estimator.
  • As described in Metric3D, download the pretrained normal estimator from here and place it at Garment_3DGS/Normal_estimator_Metric3D/weight/metric_depth_vit_large_800k.pth.
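For example, assuming the checkpoint was downloaded into the current directory (adjust the source path to wherever your download landed):

    mkdir -p Garment_3DGS/Normal_estimator_Metric3D/weight
    mv metric_depth_vit_large_800k.pth Garment_3DGS/Normal_estimator_Metric3D/weight/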

🧩 For Mesh Templates

  • Option 1: Download the mesh templates from here and put them in Garment_Deformer_NeTF/input_data/.

  • Option 2: You can also use your own mesh templates, but make sure the mesh orientation matches the templates from Option 1.

  • Tips:

    • Our provided mesh templates are unwrinkled, but wrinkled meshes can yield better final results.
    • If you get poor results, try another mesh template; it is very likely that the current template is not suitable for your prompt.
    • You may need to translate the mesh template first so that it lies within the camera's field of view; check outputs/{sample_dir}/gs_check to verify that the mesh position is good (see the sketch after this list).
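A minimal sketch of such a translation, assuming trimesh is available (it is not listed in requirements.txt) and using a hypothetical template.obj; the right offset depends on the camera setup, so confirm the result via outputs/{sample_dir}/gs_check:

    # center the template's bounding box at the origin (the needed offset is scene-dependent)
    python -c "import trimesh; m = trimesh.load('template.obj'); m.apply_translation(-m.bounding_box.centroid); m.export('template_centered.obj')"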

🏃‍♂️ Run the Code

  • After setting up the environment and downloading the necessary files, run the main script with:

    CUDA_VISIBLE_DEVICES=<gpu_id> python launch_garmentdreamer.py --template_path /path/to/your/mesh/template.obj --prompt "your prompt" # only a single GPU is supported
  • Tips:

    • I have tried to provide the best configs I can, but they are still not perfect. All important parameters are in the config files referenced by launch_garmentdreamer.py, and you can modify them to get better results. If you have any questions, please feel free to contact me.
    • A very useful strategy to describe the garment is to use the prompt template: a {style} {garment type} made of {color} {material} or a {color} {material} {garment type}. For example, 'a traditional royal style dress made of blue silk' or 'a blue denim tee'.
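For instance, a full invocation combining this prompt template with a downloaded mesh template (dress.obj is a hypothetical filename; use one of the templates in Garment_Deformer_NeTF/input_data/):

    CUDA_VISIBLE_DEVICES=0 python launch_garmentdreamer.py --template_path Garment_Deformer_NeTF/input_data/dress.obj --prompt "a traditional royal style dress made of blue silk"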

Acknowledgment

This implementation is built on top of GaussianDreamer, Metric3D, Neural Deferred Shading, and threefiner.

📬 Contact

Please contact Boqian Li at boqianlihuster@gmail.com.

📑 Citation

If you find this code or our method useful for your academic research, please cite our paper:

@article{li2024garmentdreamer,
  title={GarmentDreamer: 3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details},
  author={Li, Boqian and Li, Xuan and Jiang, Ying and Xie, Tianyi and Gao, Feng and Wang, Huamin and Yang, Yin and Jiang, Chenfanfu},
  journal={arXiv preprint arXiv:2405.12420},
  year={2024}
}
