🧩 A federated extension of nnU-Net for privacy-preserving medical image segmentation.
FednnU-Net enables training nnU-Net models in a decentralized, privacy-preserving manner — while maintaining full compatibility with the original framework.
🩻 Modalities Evaluated:
- 🩷 Breast MRI
- ❤️ Cardiac MRI
- 🤰 Fetal Ultrasound
🚀 Key Findings:
- ⚙️ Adaptive aggregation enables robust cross-site learning.
- 📈 Achieves performance comparable to or surpassing centralized training.
- 🧩 Demonstrates strong scalability and real-world applicability in federated medical imaging.
Federated nnU-Net was tested on real-world, multi-hospital datasets, ensuring a realistic federated learning setup.
👉 See the Extended Results Table for full performance metrics.
Clone and install FednnU-Net on all computational nodes that take part in the federated training (including the coordinating server):

```
git clone https://github.com/faildeny/FednnUNet
```

Then run the installation script, which clones the original nnUNet as a submodule and installs both libraries:

```
bash ./install.sh
```

✅ If you see:

```
Installation complete. You can now use fednnUNet.
```

then the installation was successful.
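Each node additionally needs the standard nnU-Net environment variables pointing at its local data directories, as described in the nnU-Net documentation (the paths below are placeholders):

```
# Standard nnU-Net v2 environment variables; set these on every node
export nnUNet_raw="/path/to/nnUNet_raw"
export nnUNet_preprocessed="/path/to/nnUNet_preprocessed"
export nnUNet_results="/path/to/nnUNet_results"
```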
Start server

```
python fednnunet/server.py command --num_clients number_of_nodes fold --port network_port
```

Start each node

```
python fednnunet/client.py command dataset_id configuration fold --port network_port
```

For prototyping and simulation purposes, it is possible to start all nodes on the same machine. The dataset IDs for the data centers can then be provided as a single space-separated string, e.g. "dataset_id_1 dataset_id_2 dataset_id_3":
python fednnunet/run.py train "dataset_id_1 dataset_id_2 dataset_id_3" configuration fold --port 8080Prepare local dataset on each node in a format required by nnU-Net.
To run a complete configuration and training, follow the steps of the original nnU-Net pipeline.
First, configure the experiment and preprocess the data by running plan_and_preprocess as the command for both the server and the nodes.
After the experiment is prepared, you can start distributed training with the train command.
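For illustration, a two-node plan_and_preprocess round could look as follows. This sketch is derived from the command templates above; the dataset IDs (301, 302), the port, and the omission of the fold argument for this command (mirroring the run.py example further below) are assumptions:

```
# On the coordinating server: wait for two clients on port 8080
python fednnunet/server.py plan_and_preprocess --num_clients 2 --port 8080

# On node 1 (local dataset ID 301)
python fednnunet/client.py plan_and_preprocess 301 3d_fullres --port 8080

# On node 2 (local dataset ID 302)
python fednnunet/client.py plan_and_preprocess 302 3d_fullres --port 8080
```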
Supported tasks
- extract_fingerprint: creates one common fingerprint based on data from all nodes
- plan_and_preprocess: extracts the fingerprint, creates the experiment plan, and preprocesses data
- train: federated training of nnUNet
fednnunet/client.py supports the same set of CLI arguments that can be passed to a typical nnUNetv2_train command.
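For example, standard nnUNetv2_train options such as --npz (save softmax outputs) and --c (continue an interrupted run) should be forwardable to the client call. This follows from the stated compatibility but is a sketch, not a verified invocation:

```
# Train dataset 301, 3d_fullres configuration, fold 0, with two standard nnUNetv2_train flags
python fednnunet/client.py train 301 3d_fullres 0 --port 8080 --npz --c
```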
To run training with 5 nodes on one machine for the 3d_fullres configuration and all 5 folds:

```
python fednnunet/run.py train "301 302 303 304 305" 3d_fullres all --port 8080
```

Following the FednnU-Net paper, the model can be trained in two ways: Federated Fingerprint Extraction (FFE) and Asymmetric Federated Averaging (AsymFedAvg). While both methods work well in different scenarios, FFE is the best starting point, as it is more robust and gives results very close to models trained in a centralized way.
Run the federated 3d experiment configuration with:
```
python fednnunet/run.py plan_and_preprocess "301 302 303 304 305" 3d_fullres --port 8080
```

This command will:
- extract the dataset fingerprint for each node
- aggregate and distribute its federated version to all clients
- create a training plan on each node
- preprocess all datasets locally for the requested configuration
Run the FFE training with:
python fednnunet/run.py train "301 302 303 304 305" 3d_fullres all --port 8080This will run the FFE federated training for 3d_fullres configuration model and all cross-validation folds.
Run the 2d experiment configuration with:
```
python fednnunet/run.py plan_and_preprocess "301 302 303 304 305" 2d --asym --port 8080
```

This command will:
- extract the dataset fingerprint for each node
- create a training plan on each node
- preprocess all datasets locally for the requested configuration
Run the AsymFedAvg training with:
python fednnunet/run.py train "301 302 303 304 305" 2d 1 --port 8080This will run the AsymFedAvg federated training for 2d configuration model and 1 cross-validation fold.
Each federated training node saves its checkpoints and final model following the default nnU-Net behaviour. To evaluate the trained model on the local data, use the native commands provided in nnU-Net's documentation:

```
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_NAME_OR_ID -c CONFIGURATION --save_probabilities
```
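For example, predictions with the locally trained 3d_fullres model of dataset 301 (dataset ID and folder paths are placeholders) can be generated with:

```
nnUNetv2_predict -i /data/Dataset301/imagesTs -o /data/Dataset301/predictions -d 301 -c 3d_fullres --save_probabilities
```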
Please cite the following paper when using FednnU-Net:

```
@article{skorupko2025federated,
title={Federated nnU-Net for privacy-preserving medical image segmentation},
author={Skorupko, Grzegorz and Avgoustidis, Fotios and Mart{\'\i}n-Isla, Carlos and Garrucho, Lidia and Kessler, Dimitri A and Pujadas, Esmeralda Ruiz and D{\'\i}az, Oliver and Bobowicz, Maciej and Gwo{\'z}dziewicz, Katarzyna and Bargall{\'o}, Xavier and others},
journal={Scientific Reports},
volume={15},
number={1},
pages={38312},
year={2025},
publisher={Nature Publishing Group UK London}
}
```

FednnU-Net is developed by the Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM) at the Universitat de Barcelona.
This work received funding from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No. 101057849 (DataTools4Heart). This work has been supported by the European Union’s research and innovation programmes: Horizon Europe under Grant Agreement No. 101057699 (RadioVal) and Grant Agreement No. 101044779 (AIMIX), Horizon 2020 under Grant Agreement No. 952103 (EuCanImage).