This repository contains the code required to reproduce the findings in the paper "Using autoencoders on differentially private federated learning GANs" that was written as part of the CSE3000 Research Project of TU Delft in quarter 4 of the academic calendar year 2021-2022.
Below are instructions to reproduce the results in the paper. They assume that conda and bazelisk are installed and on your path, and that you are in the root of this repository.
- Create a conda environment with Python 3.8 (skip this step on subsequent runs):

```shell
conda create --name py3.8 python=3.8
```

- Activate the conda environment:

```shell
conda activate py3.8
```

- Install the required packages (skip this step on subsequent runs):

```shell
pip install --upgrade pip
pip install -r working-requirements.txt
```

- Run the experiment with the hyperparameters from the paper. Modify the paths and exp_name before running:

```shell
bazel run --sandbox_writable_path={path_to_ccache_dir} --strategy=CppCompile=standalone //tensorflow_federated/python/research/gans/experiments/emnist:train -- --root_output_dir={path_to_output_folder} --filtering='by_user' --invert_imagery_probability='0p0' --accuracy_threshold='gt0p939' --num_client_disc_train_steps=6 --num_server_gen_train_steps=6 --dp_l2_norm_clip=0.1 --dp_noise_multiplier=0.01 --num_rounds_per_eval=10 --num_rounds_per_save_images=10 --num_clients_per_round=10 --num_rounds=1000 --use_dp=True --exp_name={experiment_name}
```
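For intuition about the two differential-privacy flags above, `--dp_l2_norm_clip` and `--dp_noise_multiplier` roughly correspond to clipping each client update to a fixed L2 norm and adding Gaussian noise with standard deviation clip × noise_multiplier before aggregation. The following is an illustrative plain-Python sketch of that mechanism, not the DP-FedAvg code TFF actually runs; the function name and the equal-weighting assumption are ours:

```python
import math
import random

def clip_and_noise(update, l2_norm_clip=0.1, noise_multiplier=0.01, rng=random):
    """Clip an update vector to L2 norm l2_norm_clip, then add Gaussian noise.

    Illustrative sketch of the intent behind --dp_l2_norm_clip and
    --dp_noise_multiplier; not the implementation used by TFF.
    """
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, l2_norm_clip / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    # Noise stddev is clip * noise_multiplier, as in DP-SGD-style mechanisms.
    stddev = l2_norm_clip * noise_multiplier
    return [x + rng.gauss(0.0, stddev) for x in clipped]

update = [3.0, 4.0]  # L2 norm 5.0, far above the 0.1 clip
noised = clip_and_noise(update)
```

With `noise_multiplier=0.0` the result is a pure clipping step, so the output norm equals the clip bound; the noise term is what provides the privacy guarantee.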
The code in this repository has been forked and modified from the TensorFlow Federated repository; the original README starts below.
TensorFlow Federated (TFF) is an open-source framework for machine learning and other computations on decentralized data. TFF has been developed to facilitate open research and experimentation with Federated Learning (FL), an approach to machine learning where a shared global model is trained across many participating clients that keep their training data locally. For example, FL has been used to train prediction models for mobile keyboards without uploading sensitive typing data to servers.
TFF enables developers to use the included federated learning algorithms with their models and data, as well as to experiment with novel algorithms. The building blocks provided by TFF can also be used to implement non-learning computations, such as aggregated analytics over decentralized data.
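To make the "shared global model trained across participating clients" idea concrete, here is a hedged plain-Python sketch of federated averaging on a toy one-parameter model. All names and the toy linear model are ours for exposition; TFF's real building blocks live under tff.learning and are far more general:

```python
# Illustrative federated averaging (FedAvg) on a toy 1-parameter model y = w*x.
# Each client trains locally on its own data; only weights reach the server.

def client_update(global_w, data, lr=0.1, steps=5):
    """Locally fit w to minimize (w*x - y)^2 on this client's data."""
    w = global_w
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def server_aggregate(client_weights):
    """Average the locally trained weights (equal client weighting)."""
    return sum(client_weights) / len(client_weights)

# Two clients whose data both satisfy y = 2x; raw data never leaves a client.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
global_w = 0.0
for _ in range(20):  # communication rounds
    global_w = server_aggregate([client_update(global_w, d) for d in clients])
print(round(global_w, 3))  # converges toward 2.0
```

The keyboard-prediction example above follows the same pattern: typing data stays on each device, and only model updates are aggregated on the server.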
TFF's interfaces are organized in two layers:
- Federated Learning (FL) API: The tff.learning layer offers a set of high-level interfaces that allow developers to apply the included implementations of federated training and evaluation to their existing TensorFlow models.
- Federated Core (FC) API: At the core of the system is a set of lower-level interfaces for concisely expressing novel federated algorithms by combining TensorFlow with distributed communication operators within a strongly-typed functional programming environment. This layer also serves as the foundation upon which we've built tff.learning.
TFF enables developers to declaratively express federated computations, so that they can be deployed to diverse runtime environments. Included with TFF is a single-machine simulation runtime for experiments. Please visit the tutorials and try it out yourself!
See the install documentation for instructions on how to install TensorFlow Federated as a package or build TensorFlow Federated from source.
See the get started documentation for instructions on how to use TensorFlow Federated.
There are a number of ways to contribute depending on what you're interested in:
- If you are interested in developing new federated learning algorithms, the best way to start would be to study the implementations of federated averaging and evaluation in tff.learning, and to think of extensions to the existing implementation (or alternative approaches). If you have a proposal for a new algorithm, we recommend starting by staging your project in the research directory and including a colab notebook to showcase the new features. You may also want to develop new algorithms in your own repository. We are happy to feature pointers to academic publications and/or repos using TFF on tensorflow.org/federated.
- If you are interested in applying federated learning, consider contributing a tutorial, a new federated dataset, or an example model that others could use for experiments and testing, or writing helper classes that others can use in setting up simulations.
- If you are interested in helping us improve the developer experience, the best way to start would be to study the implementations behind the tff.learning API, and to reflect on how we could make the code more streamlined. You could contribute helper classes that build upon the FC API or suggest extensions to the FC API itself.
- If you are interested in helping us develop runtime infrastructure for simulations and beyond, please wait for a future release in which we will introduce interfaces and guidelines for contributing to a simulation infrastructure.
Please be sure to review the contribution guidelines for coding style, best practices, etc.
The following table describes the compatibility between the TensorFlow Federated and TensorFlow Python packages.
Use GitHub issues for tracking requests and bugs.
Please direct questions to Stack Overflow using the tensorflow-federated tag.