This repository has been archived by the owner on Apr 17, 2023. It is now read-only.

Adding doc, adding a timer, fixing bugs. #84

Open. Wants to merge 31 commits into base: main.
Changes from 11 commits (31 commits in total):
- 3950b0c Update README.md (teytaud, Sep 26, 2019)
- c4c4b87 Update README.md (teytaud, Sep 26, 2019)
- d5f5841 Update README.md (teytaud, Sep 26, 2019)
- 8d2cd8f Update README.md (teytaud, Sep 26, 2019)
- c6379f9 Update README.md (teytaud, Sep 26, 2019)
- 1a07faf Update train.py (teytaud, Sep 26, 2019)
- 5599bc2 Update inspirational_generation.py (teytaud, Sep 26, 2019)
- da128a1 Update README.md (teytaud, Sep 26, 2019)
- 39e7ee9 Update README.md (teytaud, Sep 26, 2019)
- 05a8a1c Update train.py (teytaud, Sep 26, 2019)
- b15a162 Update progressive_gan_trainer.py (teytaud, Sep 26, 2019)
- 2b13f24 Update README.md (teytaud, Sep 30, 2019)
- b1f0983 Update README.md (teytaud, Sep 30, 2019)
- 1805b95 Update gan_trainer.py (teytaud, Sep 30, 2019)
- b3d62b0 Update progressive_gan_trainer.py (teytaud, Sep 30, 2019)
- d6577ba Update README.md (teytaud, Sep 30, 2019)
- dcdaa66 Update README.md (teytaud, Sep 30, 2019)
- 57fdfaf Update DCGAN_trainer.py (teytaud, Sep 30, 2019)
- d2a11a9 Update progressive_gan_trainer.py (teytaud, Sep 30, 2019)
- fa54887 Update gan_trainer.py (teytaud, Sep 30, 2019)
- 8374753 Update DCGAN_trainer.py (teytaud, Sep 30, 2019)
- 88eca90 Update README.md (teytaud, Oct 1, 2019)
- d0a40a6 Update README.md (teytaud, Oct 1, 2019)
- 10e2247 add inspirational images example (teytaud, Oct 1, 2019)
- dcb4f4b Update README.md (teytaud, Oct 1, 2019)
- b43210c Update README.md (teytaud, Oct 1, 2019)
- 4451d70 Update README.md (teytaud, Oct 14, 2019)
- 4fc893d Update README.md (teytaud, Oct 14, 2019)
- 3dccf5e Update README.md (teytaud, Oct 14, 2019)
- d812b0a Update train.py (teytaud, Oct 14, 2019)
- 6f0cec7 BUGFIX: minibatchsceduler remaining with DCGAN (Jan 7, 2020)
49 changes: 36 additions & 13 deletions README.md
@@ -3,7 +3,7 @@
A GAN toolbox for researchers and developers with:
- Progressive Growing of GAN(PGAN): https://arxiv.org/pdf/1710.10196.pdf
- DCGAN: https://arxiv.org/pdf/1511.06434.pdf
- To come: StyleGAN https://arxiv.org/abs/1812.04948
Contributor: :(

Comment: So no StyleGAN implementation in the end, @Molugan? 😞

Contributor: There is a PR for styleGAN!

Contributor: styleGAN incoming (#95), so no need to remove this one.

- StyleGAN https://arxiv.org/abs/1812.04948

<img src="illustration.png" alt="illustration">
Picture: Generated samples from GANs trained on celebaHQ, fashionGen, DTD.
@@ -48,6 +48,21 @@ pip install -r requirements.txt
- DTD: https://www.robots.ox.ac.uk/~vgg/data/dtd/
- CIFAR10: http://www.cs.toronto.edu/~kriz/cifar.html

For a quick start with CelebAHQ, you might:
Author: I just wanted to make people's lives easier by copy-pasting the TL;DR.

Contributor: Good idea for celebaHQ, it was a pain in the *** to make the first time. I would, however, add a section "## Quick download":

```
git clone https://github.com/nperraud/download-celebA-HQ.git
cd download-celebA-HQ
conda create -n celebaHQ python=3
source activate celebaHQ
conda install jpeg=8d tqdm requests pillow==3.1.1 urllib3 numpy cryptography scipy
pip install opencv-python==3.4.0.12 cryptography==2.1.4
sudo apt-get install p7zip-full
python download_celebA.py ./
python download_celebA_HQ.py ./
python make_HQ_images.py ./
export PATH_TO_CELEBAHQ=`readlink -f ./celebA-HQ/512`
```

## Quick training

The datasets.py script allows you to prepare your datasets and build their corresponding configuration files.
@@ -64,8 +79,9 @@ And wait for a few days. Your checkpoints will be dumped in output_networks/celebaHQ.
For celebaHQ:

```
python datasets.py celebaHQ $PATH_TO_CELEBAHQ -o $OUTPUT_DATASET - f
python train.py PGAN -c config_celebaHQ.json --restart -n celebaHQ
python datasets.py celebaHQ $PATH_TO_CELEBAHQ -o $OUTPUT_DATASET - f # Prepare the dataset and build the configuration file.
# Author: not sure of this "- f"?
# Contributor: -f accelerates the training by saving intermediate images of smaller sizes.

python train.py PGAN -c config_celebaHQ.json --restart -n celebaHQ # Train.
python eval.py inception -n celebaHQ -m PGAN # If you want to check the inception score.
# Contributor: I'd rather not add the inception part here.

```

Your checkpoints will be dumped in output_networks/celebaHQ. You should get 1024x1024 generations at the end.
@@ -130,7 +146,8 @@ Where:

1 - MODEL_NAME is the name of the model you want to run. Currently, two models are available:
- PGAN(progressive growing of gan)
- PPGAN(decoupled version of PGAN)
- DCGAN
- StyleGAN

2 - CONFIGURATION_FILE(mandatory): path to a training configuration file. This file is a json file containing at least a pathDB entry with the path to the training dataset. See below for more informations about this file.
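
As an illustration, a minimal configuration file can be produced with a few lines of Python. This is only a sketch: the only field documented here as mandatory is "pathDB", and the path used below is a placeholder, not a real dataset location.

```
# Write a minimal training configuration file; "pathDB" is the only
# documented mandatory field, and the path below is a placeholder.
import json

config = {"pathDB": "/path/to/prepared_dataset"}

with open("config_celebaHQ.json", "w") as f:
    json.dump(config, f, indent=2)
```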

@@ -209,19 +226,19 @@ You need to use the eval.py script.

You can generate more images from an existing checkpoint using:
```
python eval.py visualization -n $modelName -m $modelType
python eval.py visualization -n $runName -m $modelName
```

Where modelType is in [PGAN, PPGAN, DCGAN] and modelName is the name given to your model. This script will load the last checkpoint detected at testNets/$modelName. If you want to load a specific iteration, please call:
Where modelName is in [PGAN, StyleGAN, DCGAN] and runName is the name given to your run (trained model). This script will load the last checkpoint detected at output_networks/$runName. If you want to load a specific iteration, please call:

```
python eval.py visualization -n $modelName -m $modelType -s $SCALE -i $ITER
python eval.py visualization -n $runName -m $modelName -s $SCALE -i $ITER
```

If your model is conditioned, you can ask the visualizer to print out some conditioned generations. For example:

```
python eval.py visualization -n $modelName -m $modelType --Class T_SHIRT
python eval.py visualization -n $runName -m $modelName --Class T_SHIRT
```

Will plot a batch of T_SHIRTS in visdom. Please use the option - -showLabels to see all the available labels for your model.
@@ -231,16 +248,21 @@ Will plot a batch of T_SHIRTS in visdom. Please use the option - -showLabels to
To save a randomly generated fake dataset from a checkpoint please use:

```
python eval.py visualization -n $modelName -m $modelType --save_dataset $PATH_TO_THE_OUTPUT_DATASET --size_dataset $SIZE_OF_THE_OUTPUT
python eval.py visualization -n $runName -m $modelName --save_dataset $PATH_TO_THE_OUTPUT_DATASET --size_dataset $SIZE_OF_THE_OUTPUT
```

### SWD metric

Using the same kind of configuration file as above, just launch:

```
python eval.py laplacian_SWD -c $CONFIGURATION_FILE -n $modelName -m $modelType
python eval.py laplacian_SWD -c $CONFIGURATION_FILE -n $runName -m $modelName
```
for the SWD score, to be maximized, or for the inception score:
Contributor @Molugan (Jan 7, 2020): No, the SWD should be minimized.

```
python eval.py inception -c $CONFIGURATION_FILE -n $runName -m $modelName
```
also to be maximized (see https://hal.inria.fr/hal-01850447/document for a discussion).
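
For intuition, here is an assumption-level sketch of a sliced Wasserstein distance between two equally sized sample sets; the repository's laplacian_SWD pipeline works on Laplacian-pyramid patches, and, in line with the review comment above, it is a distance, so lower values indicate closer distributions.

```
# Illustrative sketch (not the repository's laplacian_SWD pipeline) of a
# sliced Wasserstein distance: project samples onto random directions,
# sort the projections, and compare the resulting 1D distributions.
import torch

def sliced_wasserstein(x, y, n_projections=512):
    # x, y: tensors of shape (n_samples, dim), with the same n_samples.
    dim = x.shape[1]
    theta = torch.randn(n_projections, dim)
    theta = theta / theta.norm(dim=1, keepdim=True)  # random unit directions
    proj_x = (x @ theta.t()).sort(dim=0).values      # sorted 1D projections
    proj_y = (y @ theta.t()).sort(dim=0).values
    return (proj_x - proj_y).abs().mean()            # mean 1D transport cost
```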


Where $CONFIGURATION_FILE is the training configuration file called by train.py (see above): it must contain a "pathDB" field pointing to the path of the dataset's directory. For example, if you followed the instructions of the Quick Training section to launch a training session on celebaHQ, your configuration file will be config_celebaHQ.json.

@@ -252,6 +274,7 @@ You can add optional arguments:

### Inspirational generation

Inspirational generation consists in generating, with your GAN, an image that looks like a given input image.
To make an inspirational generation, you first need to build a feature extractor:

```
python save_feature_extractor.py {vgg16, vgg19} $PATH_TO_THE_OUTPUT_FEATURE_EXTRACTOR
```

@@ -261,14 +284,14 @@
Then run your model:

```
python eval.py inspirational_generation -n $modelName -m $modelType --inputImage $pathTotheInputImage -f $PATH_TO_THE_OUTPUT_FEATURE_EXTRACTOR
python eval.py inspirational_generation -n $runName -m $modelName --inputImage $pathTotheInputImage -f $PATH_TO_THE_OUTPUT_FEATURE_EXTRACTOR
```
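
Conceptually, inspirational generation optimizes a latent vector so that the generated image matches the target image's feature representation. The sketch below only illustrates that idea; it is not the eval.py pipeline, and generator, feature_extractor and latent_dim are assumed names.

```
# Conceptual sketch of inspirational generation: gradient descent on a
# latent vector z so that features(G(z)) matches features(target image).
# `generator`, `feature_extractor` and `latent_dim` are assumed names.
import torch

def inspire(generator, feature_extractor, target_image, steps=500, lr=0.05):
    with torch.no_grad():
        target_features = feature_extractor(target_image)
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(
            feature_extractor(generator(z)), target_features)
        loss.backward()
        optimizer.step()
    return generator(z).detach()
```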

### I have generated my metrics. How can i plot them on visdom ?

Just run
```
python eval.py metric_plot -n $modelName
python eval.py metric_plot -n $runName
```

## LICENSE
2 changes: 1 addition & 1 deletion models/eval/inspirational_generation.py
@@ -71,7 +71,7 @@ def updateParser(parser):
parser.add_argument('--weights', type=float, dest='weights',
nargs='*', help="Weight of each classifier. Default \
value is one. If specified, the number of weights must\
match the number of feature exatrcators.")
match the number of feature extractors.")
parser.add_argument('--gradient_descent', help='gradient descent',
action='store_true')
parser.add_argument('--random_search', help='Random search',
9 changes: 7 additions & 2 deletions models/trainer/progressive_gan_trainer.py
@@ -1,5 +1,6 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import os
import time

from .standard_configurations.pgan_config import _C
from ..progressive_gan import ProgressiveGAN
@@ -22,6 +23,7 @@ def __init__(self,
pathdb,
miniBatchScheduler=None,
datasetProfile=None,
max_time=0,
Author: @brozi I added a timer, which will help for DFO or AS for GAN optimization.

configScheduler=None,
**kwargs):
r"""
@@ -30,6 +32,7 @@ def __init__(self,
dataset
- useGPU (bool): set to True if you want to use the available GPUs
for the training procedure
- max_time (int): max number of seconds for training (0 = infinity).
- visualisation (module): if not None, a visualisation module to
follow the evolution of the training
- lossIterEvaluation (int): size of the interval on which the
@@ -46,7 +49,7 @@ def __init__(self,
- stopOnShitStorm (bool): should we stop the training if a diverging
behavior is detected ?
"""

self.max_time = max_time
self.configScheduler = {}
if configScheduler is not None:
self.configScheduler = {
@@ -208,6 +211,7 @@ def train(self):
+ "_train_config.json")
self.saveBaseConfig(pathBaseConfig)

start = time.time()
for scale in range(self.startScale, n_scales):

self.updateDatasetForScale(scale)
@@ -230,7 +234,8 @@ def __init__(self,
shiftAlpha += 1

while shiftIter < self.modelConfig.maxIterAtScale[scale]:

if self.max_time > 0 and time.time() - start > self.max_time:
break
self.indexJumpAlpha = shiftAlpha
status = self.trainOnEpoch(dbLoader, scale,
shiftIter=shiftIter,
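
The hunk above implements a simple wall-clock budget: record a start time and leave the training loop once max_time seconds have elapsed (0 disables the check). A self-contained sketch of the same pattern, with a placeholder training step:

```
# Minimal sketch of a wall-clock training budget; `train_one_iteration`
# is a placeholder, not the trainer's actual API.
import time

def train_with_budget(train_one_iteration, n_iterations, max_time=0):
    start = time.time()
    for it in range(n_iterations):
        if max_time > 0 and time.time() - start > max_time:
            break  # time budget exhausted
        train_one_iteration(it)
```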
5 changes: 4 additions & 1 deletion train.py
@@ -30,13 +30,15 @@ def getTrainer(name):
parser = argparse.ArgumentParser(description='Testing script')
parser.add_argument('model_name', type=str,
help='Name of the model to launch, available models are\
PGAN and PPGAN. To get all possible option for a model\
PGAN and DCGAN and StyleGAN. To get all possible option for a model\
please run train.py $MODEL_NAME -overrides')
parser.add_argument('--no_vis', help=' Disable all visualizations',
action='store_true')
parser.add_argument('--np_vis', help=' Replace visdom by a numpy based \
visualizer (SLURM)',
action='store_true')
parser.add_argument('--max_time', help=' Maximum time in seconds (0 for infinity)', type=int,
dest='max_time', default=0)
parser.add_argument('--restart', help=' If a checkpoint is detected, do \
not try to load it',
action='store_true')
@@ -124,6 +126,7 @@ def getTrainer(name):
lossIterEvaluation=kwargs["evalIter"],
checkPointDir=checkPointDir,
saveIter= kwargs["saveIter"],
max_time=kwargs["max_time"],
modelLabel=modelLabel,
partitionValue=partitionValue,
**trainingConfig)
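
For completeness, an assumption-level sketch of how the new --max_time flag travels from the command line into a trainer object; Trainer here is a stand-in class, not the repository's implementation.

```
# Sketch: parse --max_time and forward it to a trainer; Trainer is a
# stand-in class, not the repository's implementation.
import argparse

class Trainer:
    def __init__(self, max_time=0, **kwargs):
        self.max_time = max_time  # seconds; 0 means no time limit

parser = argparse.ArgumentParser()
parser.add_argument('--max_time', type=int, default=0,
                    help='Maximum time in seconds (0 for infinity)')
args = parser.parse_args(['--max_time', '3600'])  # e.g. a one-hour budget
trainer = Trainer(max_time=args.max_time)
```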