GeoMX: A fast and unified system for distributed machine learning over geo-distributed data centers.
C3-SL: Circular Convolution-Based Batch-Wise Compression for Communication-Efficient Split Learning (IEEE MLSP 2022)
Stooa's main repository: the online fishbowl tool.
CVPR 2024 accepted paper: "An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning"
AAAI 2024 accepted paper: "FedTGP: Trainable Global Prototypes with Adaptive-Margin-Enhanced Contrastive Learning for Data and Model Heterogeneity in Federated Learning"
Atomo: Communication-efficient Learning via Atomic Sparsification
Event-Triggered Communication in Parallel Machine Learning (a generic sketch of the event-trigger idea appears after this list)
[ICML2022] ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
The implementation of "Two-Stream Federated Learning: Reduce the Communication Costs" (VCIP 2018)
A project that investigated, designed, and evaluated methods to reduce overall uplink (client -> server) communication during federated learning.
FedAnil++ is a privacy-preserving and communication-efficient federated deep learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil++.
Code for the paper "A Quadratic Synchronization Rule for Distributed Deep Learning"
Communication-Efficient Stratified Stochastic Gradient Descent for Distributed Matrix Completion
This project provides a computation- and communication-efficient approach for federated-learning-based urban sensing applications that defends against inference attacks.
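
Most of the repositories above share one idea: shrink what each worker transmits per round. As a point of reference, below is a minimal NumPy sketch of top-k gradient sparsification with error feedback, one common communication-efficient compression scheme. It is a generic illustration only; the class and function names are invented here, and it is not the algorithm of any specific repository listed (Atomo, for instance, sparsifies over an atomic decomposition rather than by entry magnitude).

```python
# Minimal, generic sketch of top-k gradient sparsification with error feedback.
# Illustrative only: names and the magnitude-based rule are assumptions, not the
# actual method of any repository in the list above.
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; return (indices, values)."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k entries
    return idx, flat[idx]

class ErrorFeedbackCompressor:
    """Accumulates the compression error locally and adds it back next round,
    so information dropped in one round is not lost permanently."""
    def __init__(self, shape, k):
        self.residual = np.zeros(shape)
        self.k = k

    def compress(self, grad: np.ndarray):
        corrected = grad + self.residual             # re-inject previous error
        idx, vals = topk_sparsify(corrected, self.k)
        sparse = np.zeros_like(corrected.ravel())
        sparse[idx] = vals
        self.residual = corrected - sparse.reshape(corrected.shape)
        return idx, vals                             # what a worker would send

# Example: compress a 1000-element gradient down to 10 transmitted values.
rng = np.random.default_rng(0)
comp = ErrorFeedbackCompressor(shape=(1000,), k=10)
idx, vals = comp.compress(rng.normal(size=1000))
print(f"sent {len(vals)} of 1000 values")
```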
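
The event-triggered entry above takes a different route: skip communication rounds entirely unless the local model has drifted far enough from the copy the server already holds. The sketch below shows that trigger rule in its simplest form; the threshold test, class, and method names are assumptions for illustration, not the cited paper's actual protocol.

```python
# Minimal sketch of event-triggered communication: a worker transmits its model
# delta only when the drift since the last transmission exceeds a threshold.
# Illustrative assumptions throughout, not a specific paper's protocol.
import numpy as np

class EventTriggeredWorker:
    def __init__(self, dim: int, threshold: float = 1.0):
        self.model = np.zeros(dim)
        self.last_sent = np.zeros(dim)   # copy the server currently has
        self.threshold = threshold

    def local_step(self, grad: np.ndarray, lr: float = 0.1):
        self.model -= lr * grad          # ordinary local SGD step

    def maybe_send(self):
        """Return the model delta if the trigger fires, else None."""
        drift = np.linalg.norm(self.model - self.last_sent)
        if drift > self.threshold:       # event trigger: communicate
            delta = self.model - self.last_sent
            self.last_sent = self.model.copy()
            return delta
        return None                      # skip this round, save bandwidth

# Example: only some rounds result in communication.
rng = np.random.default_rng(1)
w = EventTriggeredWorker(dim=50, threshold=1.5)
for step in range(20):
    w.local_step(rng.normal(size=50))
    delta = w.maybe_send()
    if delta is not None:
        print(f"step {step}: sent update of norm {np.linalg.norm(delta):.2f}")
```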