Core federated learning repository covering algorithm baselines, partitioning strategies, communication efficiency, personalization, and privacy-aware training.
federated-learning-core provides reusable FL building blocks:
- FedAvg/FedProx/FedAdam orchestration layers
- IID and non-IID client data partitioning
- Compression strategies for communication cost reduction
- Personalization modules for client-specific adaptation
- DP-SGD privacy mechanisms and accounting
- Engineered for experimentation and modular FL system design
- Supports research workflows and production migration paths
- Includes ready-to-run scenario and experiment scripts
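The orchestration layer's core operation, FedAvg, is a sample-count-weighted average of client model parameters. A minimal sketch of that aggregation step, using plain Python lists (the function name `aggregate_fedavg` is illustrative, not this repo's API):

```python
# Hypothetical minimal FedAvg aggregation: weighted average of client
# parameter dicts by local sample counts. Not this repo's actual API.
from typing import Dict, List

def aggregate_fedavg(
    client_states: List[Dict[str, list]],
    num_samples: List[int],
) -> Dict[str, list]:
    """Average client parameters, weighting each client by its data share."""
    total = sum(num_samples)
    agg = {}
    for key in client_states[0]:
        agg[key] = [
            # each client contributes n_k / N of its parameter value
            sum(n / total * state[key][i]
                for n, state in zip(num_samples, client_states))
            for i in range(len(client_states[0][key]))
        ]
    return agg

clients = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
global_w = aggregate_fedavg(clients, num_samples=[1, 3])
print(global_w["w"])  # [2.5, 3.5] -- second client carries 3/4 of the weight
```

FedProx and FedAdam reuse this same weighted mean on the server side; they differ in the client-side proximal term and the server-side optimizer step, respectively.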
Topics: `federated-learning`, `fedavg`, `non-iid`, `flower`, `differential-privacy`, `distributed-ml`, `pytorch`, `privacy`
- `src/algorithms/`: aggregation and strategy modules
- `src/data/partitioning/`: IID and skew partitioners
- `src/training/flower/`: orchestrated FL training
- `src/optimization/`: compression and personalization
- `src/privacy/dp_sgd/`: private training components
- `src/scenarios/`: cross-silo and vertical setups
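Non-IID (skew) partitioners commonly draw per-class client shares from a Dirichlet distribution, where a smaller `alpha` yields stronger label skew. A hedged sketch of that idea (the function name and signature are assumptions, not the partitioners actually shipped in `src/data/partitioning/`):

```python
# Illustrative Dirichlet label-skew partitioner; names are assumptions.
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label skew.

    Smaller alpha -> stronger skew (each client sees fewer classes).
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # fraction of this class assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

labels = np.array([0] * 50 + [1] * 50)
parts = dirichlet_partition(labels, num_clients=4, alpha=0.5)
assert sum(len(p) for p in parts) == 100  # every sample assigned exactly once
```

With `alpha` large (e.g. 100) the same routine approximates an IID split, so one partitioner can cover both regimes.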
```shell
pip install -r requirements.txt
pytest -q tests/test_public_surfaces.py
```

Experiment entry points:

- `src/experiments/run_fedavg.py`
- `src/experiments/run_flower.py`
- `src/experiments/run_compression.py`
- `src/experiments/run_cross_silo.py`
- `src/experiments/run_vertical.py`
- `src/experiments/run_personalization.py`
- `src/experiments/run_dp.py`
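Communication-cost reduction of the kind the compression experiments target often boils down to sending a sparse update. A minimal top-k sparsification sketch (an assumed technique for illustration, not necessarily the scheme implemented here):

```python
# Hedged sketch: top-k sparsification keeps only the k largest-magnitude
# entries of an update, so the client transmits (indices, values) pairs.
import numpy as np

def topk_compress(update: np.ndarray, k: int):
    """Return (flat indices, values) of the k largest-magnitude entries."""
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, values, shape):
    """Scatter the kept values back into a zero tensor of the given shape."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = values
    return out.reshape(shape)

u = np.array([[0.1, -3.0], [0.02, 2.0]])
idx, vals = topk_compress(u, k=2)
restored = topk_decompress(idx, vals, u.shape)
print(restored)  # only -3.0 and 2.0 survive; the small entries are zeroed
```

Production schemes usually pair this with error feedback (accumulating the dropped residual locally) so the zeroed entries are not lost permanently.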
- FL/ML: PyTorch, Flower, NumPy, scikit-learn
- Optimization: compression and personalization techniques
- Privacy: DP-SGD and privacy accounting utilities
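DP-SGD rests on two mechanics: clipping each per-example gradient to a fixed L2 norm, then adding Gaussian noise scaled to that clip norm before the averaged update is applied. A noiseless-by-default sketch of one such step (the function name and signature are illustrative, not the `src/privacy/dp_sgd/` API):

```python
# Minimal DP-SGD step sketch: per-example clipping + Gaussian noise.
# Names and signature are assumptions for illustration.
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm, noise_multiplier, rng):
    """Clip each gradient to clip_norm, average, add calibrated noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # scale down any gradient whose L2 norm exceeds clip_norm
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_example_grads),
        size=mean.shape,
    )
    return mean + noise

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
# noise_multiplier=0 shows the clipping effect in isolation
update = dp_sgd_update(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
print(update)  # first gradient (norm 5) is clipped to [0.6, 0.8]
```

The accounting utilities then track cumulative privacy loss (epsilon, delta) across rounds as a function of the noise multiplier and sampling rate.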