nttcslab/improved_sfda

Official implementation of the paper "Understanding and Improving Source-free Domain Adaptation from a Theoretical Perspective" [CVPR2024]


This is the official implementation of the CVPR 2024 paper "Understanding and Improving Source-free Domain Adaptation from a Theoretical Perspective". The codebase is built on PyTorch Lightning with Hydra-based configuration.

Abstract

Source-free Domain Adaptation (SFDA) is an emerging and challenging research area that addresses the problem of unsupervised domain adaptation (UDA) without source data. Though numerous successful methods have been proposed for SFDA, a theoretical understanding of why these methods work well is still absent. In this paper, we shed light on the theoretical perspective of existing SFDA methods. Specifically, we find that SFDA loss functions comprising discriminability and diversity losses work in the same way as the training objective in the theory of self-training based on the expansion assumption, which shows the existence of the target error bound. This finding brings two novel insights that enable us to build an improved SFDA method comprising 1) Model Training with Auto-Adjusting Diversity Constraint and 2) Augmentation Training with Teacher-Student Framework, yielding a better recognition performance. Extensive experiments on three benchmark datasets demonstrate the validity of the theoretical analysis and our method.
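
For intuition, discriminability and diversity losses of this kind are commonly instantiated as per-sample entropy minimization plus entropy maximization of the batch-mean prediction (as in information-maximization-style SFDA objectives). The PyTorch sketch below illustrates that generic form only; it is not the exact loss used in the paper, and the function name and weighting are placeholders.

import torch

def disc_div_loss(logits: torch.Tensor, div_weight: float = 1.0) -> torch.Tensor:
    """Generic discriminability + diversity objective (illustrative only)."""
    probs = logits.softmax(dim=1)                            # (batch, classes)
    # discriminability: low per-sample entropy -> confident predictions
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # diversity: high entropy of the batch-mean prediction -> spread over classes
    mean_p = probs.mean(dim=0)
    neg_div = (mean_p * mean_p.clamp_min(1e-8).log()).sum()  # equals -H(mean_p)
    return ent + div_weight * neg_div

Minimizing the second term maximizes the entropy of the average prediction, which is what keeps the model from collapsing onto a few classes.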

YouTube

A video presentation of the paper is available on YouTube.

How to run

Create Environment

You can build the environment using the Dockerfile in the docker directory, for example as shown below.
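
Assuming the Dockerfile lives directly under docker/, a typical build-and-run sequence would be (the image name and mount point here are placeholders; adjust them to your setup):

# hypothetical image name and mounts
docker build -t improved_sfda docker/
docker run --gpus all -it -v $(pwd):/workspace improved_sfda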

Alternatively, you can build the environment by following the instructions below.

# clone project
git clone https://github.com/nttcslab/improved_sfda
cd improved_sfda

# create conda environment
cd docker
conda create -n myenv python=3.10
conda activate myenv

# install pytorch according to instructions
# https://pytorch.org/get-started/
conda install pytorch=2.0.0 torchvision=0.15.0 pytorch-cuda=11.8 -c pytorch -c nvidia

# install requirements
pip install -r requirements.txt

Prepare Dataset

Please download the datasets from their original sources and place them as shown below.

data/
├── office31/
│   ├── amazon
│   ├── dslr
│   ├── webcam
│   └── image_list
├── officehome/
│   ├── Art
│   ├── Clipart
│   ├── Product
│   ├── Real_World
│   └── image_list 
└── visda2017/
    ├── train
    ├── validation
    └── image_list
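
Each image_list directory typically holds one text file per domain, with one `relative/path/to/image.jpg label` pair per line. Assuming this repository follows that common UDA convention, a minimal parser looks like the sketch below (verify against the repository's data loaders):

from pathlib import Path

def read_image_list(list_file: str) -> list[tuple[str, int]]:
    # Parse lines of the form "relative/path/to/image.jpg 3".
    # Assumes the common UDA image_list format; check the repo's loaders.
    samples = []
    for line in Path(list_file).read_text().splitlines():
        if line.strip():
            path, label = line.rsplit(maxsplit=1)
            samples.append((path, int(label)))
    return samples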

Source Training

Training configuration is managed with Hydra. Please refer to the Hydra documentation for the configuration format and usage.

python src/train.py trainer=gpu experiment=office31_src

Target Training

python src/train.py trainer=gpu experiment=office31_tgt_ours_pb_teachaug_directed

See configs/experiment/ for the details of each experiment configuration. Individual values can also be overridden from the command line, as sketched below.
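
Since the codebase follows Lightning-Hydra template conventions, overrides like the following typically work; the keys shown here (seed, trainer.max_epochs) are assumptions, so verify them against the files in configs/:

# hypothetical overrides; check configs/ for the exact keys
python src/train.py trainer=gpu experiment=office31_src seed=42 trainer.max_epochs=50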

View Results

Logs are managed with MLflow.

cd logs/mlflow

mlflow ui
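
By default, mlflow ui serves at http://127.0.0.1:5000. To use a different port, or to expose the UI when working on a remote machine, pass the standard flags:

mlflow ui --host 0.0.0.0 --port 5001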

Acknowledgement

Our implementation builds on the following excellent works, which we gratefully acknowledge.

Citation

@InProceedings{Mitsuzumi_2024_CVPR,
    author    = {Mitsuzumi, Yu and Kimura, Akisato and Kashima, Hisashi},
    title     = {Understanding and Improving Source-free Domain Adaptation from a Theoretical Perspective},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {28515-28524}
}
