
                             ______ __               __                        __  
                            / ____// /              / /_   ___   ____   _____ / /_ 
                           / /_   / /     ______   / __ \ / _ \ / __ \ / ___// __ \
                          / __/  / /___  /_____/  / /_/ //  __// / / // /__ / / / /
                         /_/    /_____/          /_____/ \___//_/ /_/ \___//_/ /_/



This is a benchmark for evaluating well-known traditional, personalized, and domain-generalization federated learning methods. The benchmark is straightforward and easy to extend.

Methods 🧬

Traditional FL Methods

Personalized FL Methods

FL Domain Generalization Methods

Environment Preparation 🧩

PyPI 🐍

pip install -r requirements.txt

Conda 💻

conda env create -f environment.yml

Poetry 🎶

In mainland China

poetry install

Outside mainland China

# remove the mainland-China-specific entries (lines 10-14 of pyproject.toml), then re-lock and install
sed -i "10,14d" pyproject.toml && poetry lock --no-update && poetry install

Docker 🐳

In mainland China

docker build -t fl-bench .

Outside mainland China

docker build \
-t fl-bench \
--build-arg IMAGE_SOURCE=karhou/ubuntu:basic \
--build-arg CHINA_MAINLAND=false \
.

Easy Run 🏃‍♂️

All method classes inherit from FedAvgServer and FedAvgClient. If you want to understand the entire workflow and the details of the variable settings, check src/server/fedavg.py and src/client/fedavg.py.
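
As a quick orientation, here is a hypothetical sketch of how a new method could plug in. Only FedAvgServer itself comes from this repository; the import path, constructor signature, and the overridden hook name are assumptions, so consult src/server/fedavg.py for the real interface.

# Hypothetical sketch -- names and signatures below are assumptions,
# not the benchmark's actual API; see src/server/fedavg.py.
from src.server.fedavg import FedAvgServer

class MyMethodServer(FedAvgServer):
    def __init__(self, args, algo="MyMethod"):
        # Reuse FedAvg's setup (client creation, model, logging, ...).
        super().__init__(args=args, algo=algo)

    def train_one_round(self):
        # Customize the per-round logic (client selection, local training,
        # aggregation) here; otherwise fall back to plain FedAvg.
        super().train_one_round()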

# partition the CIFAR-10 according to Dir(0.1) for 100 clients
python generate_data.py -d cifar10 -a 0.1 -cn 100

# run FedAvg on CIFAR-10 with default settings.
# Use main.py like python main.py <method> [args ...]
# ❗ Method name should be identical to the `.py` file name in `src/server`.
python main.py fedavg -d cifar10

For the full details on how federated datasets are generated, check data/README.md.

Monitor 📈 (recommended 👍)

  1. Run python -m visdom.server in a terminal.
  2. Run python main.py <method> --visible 1.
  3. Open localhost:8097 in your browser.

Generic Arguments 🔧

📢 All generic arguments have default values. Check get_fedavg_argparser() in FL-bench/src/server/fedavg.py for the full details of the generic arguments.

You can also write your own .yaml config file. A template is provided in config, and it is recommended that you save your config files there as well.

One example: python main.py fedavg -cfg config/template.yaml
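
As a purely illustrative sketch of such a config file (the authoritative schema is config/template.yaml; the keys below simply mirror the generic arguments listed in the next section, and the values are arbitrary examples):

# hypothetical config sketch -- check config/template.yaml for the real schema
dataset: cifar10
model: lenet5
seed: 42
join_ratio: 0.1
global_epoch: 100
local_epoch: 5
local_lr: 0.01
batch_size: 32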

For the default values and hyperparameters of the advanced FL methods, check the corresponding FL-bench/src/server/<method>.py for full details.

Argument Description
--dataset The name of the dataset the experiment runs on.
--model The model backbone used in the experiment.
--seed Random seed for running the experiment.
--join_ratio Ratio of (clients participating each round) / (total number of clients).
--global_epoch Global epoch, also called communication round.
--local_epoch Local epoch for client local training.
--finetune_epoch Number of epochs for clients to fine-tune their models before testing.
--test_gap Interval (in rounds) between tests on clients.
--eval_test Non-zero value for performing evaluation on joined clients' testset before and after local training.
--eval_val Non-zero value for performing evaluation on joined clients' valset before and after local training.
--eval_train Non-zero value for performing evaluation on joined clients' trainset before and after local training.
-op, --optimizer Client local optimizer, selected from [sgd, adam].
--local_lr Learning rate for client local training.
--momentum Momentum for the client local optimizer.
--weight_decay Weight decay for client local optimizer.
--verbose_gap Interval (in rounds) between displays of client training performance in the terminal.
--batch_size Data batch size for client local training.
--use_cuda Non-zero value for placing tensors on the GPU.
--visible Non-zero value for using Visdom to monitor algorithm performance on localhost:8097.
--save_log Non-zero value for saving algorithm running log in out/<method>.
--straggler_ratio The ratio of stragglers (in [0, 1]). Stragglers do not perform the full local training that normal clients do; their local epoch count is randomly selected from the range [--straggler_min_local_epoch, --local_epoch) (see the sketch after this list).
--straggler_min_local_epoch The minimum value of local epoch for stragglers.
--external_model_params_file The relative file path of external model parameters. Please confirm yourself that the parameter shapes are compatible with the model. ⚠ This feature is enabled only when unique_model=False, which is pre-defined by each FL method.
--save_model Non-zero value for saving the output model parameters to out/<method>.pt.
--save_fig Non-zero value for saving the accuracy curves shown on Visdom as a .jpeg file in out/<method>.
--save_metrics Non-zero value for saving metrics stats into a .csv file at out/<method>.
--viz_win_name Custom Visdom window name (active when --visible is set to a non-zero value).
--config_file Relative file path of custom config .yaml file.
--check_convergence Non-zero value for checking convergence after training.
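
To make the straggler behaviour described above concrete, here is a hypothetical sketch of how local epochs could be assigned each round. The function and variable names are illustrative assumptions; the benchmark's actual logic lives in src/server/fedavg.py.

import random

# Hypothetical illustration of the straggler mechanism -- not the benchmark's code.
def assign_local_epochs(selected_clients, local_epoch,
                        straggler_ratio, straggler_min_local_epoch):
    # A --straggler_ratio fraction of the selected clients become stragglers.
    num_stragglers = int(len(selected_clients) * straggler_ratio)
    stragglers = set(random.sample(selected_clients, num_stragglers))
    return {
        client_id: (
            # stragglers: epoch drawn from [--straggler_min_local_epoch, --local_epoch)
            random.randint(straggler_min_local_epoch, local_epoch - 1)
            if client_id in stragglers
            # normal clients: full --local_epoch
            else local_epoch
        )
        for client_id in selected_clients
    }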

Supported Models 🚀

This benchmark supports a bunch of common models integrated in Torchvision:

  • ResNet family
  • EfficientNet family
  • DenseNet family
  • MobileNet family
  • LeNet5 ...

🤗 You can define your own custom model by filling in the CustomModel class in src/utils/models.py and use it by specifying --model custom when running.
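
For instance, a minimal CustomModel could look like the sketch below. This assumes CustomModel is an ordinary torch.nn.Module and the layer sizes are arbitrary; the actual skeleton in src/utils/models.py may expect a different structure.

import torch.nn as nn

# Hypothetical way to fill in CustomModel -- layer choices are arbitrary.
class CustomModel(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # expects 3-channel inputs
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

After filling it in, run it with, e.g., python main.py fedavg -d cifar10 --model custom.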

Supported Datasets 🎨

This benchmark only supports the image classification task for now.

Regular Image Datasets

  • MNIST (1 x 28 x 28, 10 classes)

  • CIFAR-10/100 (3 x 32 x 32, 10/100 classes)

  • EMNIST (1 x 28 x 28, 62 classes)

  • FashionMNIST (1 x 28 x 28, 10 classes)

  • Synthetic Dataset

  • FEMNIST (1 x 28 x 28, 62 classes)

  • CelebA (3 x 218 x 178, 2 classes)

  • SVHN (3 x 32 x 32, 10 classes)

  • USPS (1 x 16 x 16, 10 classes)

  • Tiny-ImageNet-200 (3 x 64 x 64, 200 classes)

  • CINIC-10 (3 x 32 x 32, 10 classes)

Domain Generalization Image Datasets

Medical Image Datasets
