README.md (6 additions & 5 deletions)

@@ -1,14 +1,14 @@
# PrivateKube

PrivateKube is an extension to the popular Kubernetes datacenter orchestrator that adds privacy as a new type of resource to be managed alongside other traditional compute resources, such as CPU, GPU, and memory. A description of the project can be found on our [webpage](https://systems.cs.columbia.edu/PrivateKube/) and in our OSDI'21 paper, titled [Privacy Budget Scheduling](https://www.usenix.org/conference/osdi21/presentation/luo) (PDF locally available [here](https://columbia.github.io/PrivateKube/papers/osdi2021privatekube.pdf) and an extended version available on [arXiv](https://arxiv.org/abs/2106.15335)).

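To make the resource abstraction concrete, here is a minimal sketch of how privacy budget might be inspected and requested through ordinary `kubectl` commands. The resource kinds and object names below (`privatedatablocks`, `taxi-block-1`, `my-privacy-claim.yaml`) are illustrative assumptions, not the exact custom resources shipped by PrivateKube; see [system](system/) for the actual definitions.

```bash
# List the (hypothetical) custom resources that track per-block privacy budget,
# the same way you would list pods or nodes.
kubectl get privatedatablocks

# Inspect the remaining budget on one (hypothetical) data block.
kubectl describe privatedatablock taxi-block-1

# Submit a (hypothetical) claim that asks the scheduler for a share of that budget,
# much like a pod spec requests CPU or memory.
kubectl apply -f my-privacy-claim.yaml
```
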
## Repo structure

This repository contains the artifact release for the OSDI paper:

- [system](system/): The PrivateKube system, which implements the privacy resource and a new scheduling algorithm for it, called *Dominant Privacy Fairness (DPF)*.
- [privatekube](privatekube/): A Python client for interacting with the PrivateKube system and performing the macrobenchmark evaluation.
- [simulator](simulator/): A simulator for microbenchmarking privacy scheduling algorithms in tightly controlled settings.
- [examples](examples/): Usage examples for various components; please refer to its [README](./examples/README.md) for details.
- [evaluation](evaluation/): Scripts to reproduce the macrobenchmark and microbenchmark evaluation results from our paper.

@@ -27,7 +27,7 @@ This repository contains the artifact release for the OSDI paper:
- [1.4. Example usage in a DP ML pipeline](#14-example-usage-in-a-dp-ml-pipeline)
- [2. Getting started with the simulator](#2-getting-started-with-the-simulator)

(You can learn more about how to use Microk8s without sudo [here](https://github.com/ubuntu/microk8s/blob/feature/dev-docs/docs/access-without-sudo.md).)

You can now start and stop your cluster with:

```bash
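# Reconstructed from context: these are the standard MicroK8s commands
# for starting and stopping the local cluster.
microk8s start
microk8s stop
```
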
@@ -247,8 +248,8 @@ This simulator is used for prototyping and microbenchmark evaluation of privacy

### 2.1 Setup

#### Setup a Python environment

Install Conda, then create and activate an isolated Python environment named "ae".
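A minimal sketch of that setup follows; the pinned Python version is an assumption on our part, not something this README specifies, so adjust it to whatever the simulator requires.

```bash
# Create an isolated Conda environment named "ae".
# The Python version pin below is an assumption; use the version the simulator requires.
conda create -n ae python=3.8

# Activate the environment before installing the simulator's dependencies.
conda activate ae
```
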
evaluation/macrobenchmark/README.md (7 additions & 0 deletions)

@@ -1,14 +1,21 @@
# Macrobenchmark

## Requirements

The commands in this section have to be run from the `macrobenchmark` directory. You can jump there with:

```bash
cd evaluation/macrobenchmark
```

You should have the `privatekube` Python package installed, as described in the [main README](https://github.com/columbia/PrivateKube).
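If the package is not installed yet, one common way to get it is an editable pip install from a clone of this repository; the exact command below is an assumption, so defer to the main README if it prescribes something different.

```bash
# Assumes you have cloned https://github.com/columbia/PrivateKube and are at the repository root.
# The editable (-e) install is an assumption; follow the main README if it differs.
pip install -e ./privatekube
```
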
Please note that the steps below can take several days to run, depending on your hardware. If you want to examine the experimental data without running the preprocessing or the training yourself, you can download some artifacts from this [public bucket](https://storage.googleapis.com/privatekube-public-artifacts).
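Individual objects in that bucket can be fetched over plain HTTPS once you know their path; the object path below is a placeholder, and the listing step assumes the bucket allows anonymous listing.

```bash
# The bucket URL returns an XML listing of the available objects when public listing is enabled.
curl -s https://storage.googleapis.com/privatekube-public-artifacts

# Placeholder object path; substitute one of the paths from the listing above.
ARTIFACT="path/to/artifact"
wget "https://storage.googleapis.com/privatekube-public-artifacts/${ARTIFACT}"
```
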
Training will be faster with an Nvidia GPU, but you can also use your CPU by specifying `--device=cpu` in the script arguments.
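For example, a training run could be pinned to the CPU as sketched below; the script name is a placeholder, and only the `--device=cpu` flag comes from this README.

```bash
# Hypothetical invocation: "train.py" is a placeholder script name,
# but --device=cpu is the flag this README refers to.
python train.py --device=cpu
```
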

## Download and preprocess the dataset

To download a preprocessed and truncated (140 MB instead of 7 GB) version of the dataset, run the following: