This repository has been archived by the owner on Apr 17, 2023. It is now read-only.

Adding doc, adding a timer, fixing bugs. #84

Open
wants to merge 31 commits into base: main

Changes from all commits (31 commits)
3950b0c
Update README.md
teytaud Sep 26, 2019
c4c4b87
Update README.md
teytaud Sep 26, 2019
d5f5841
Update README.md
teytaud Sep 26, 2019
8d2cd8f
Update README.md
teytaud Sep 26, 2019
c6379f9
Update README.md
teytaud Sep 26, 2019
1a07faf
Update train.py
teytaud Sep 26, 2019
5599bc2
Update inspirational_generation.py
teytaud Sep 26, 2019
da128a1
Update README.md
teytaud Sep 26, 2019
39e7ee9
Update README.md
teytaud Sep 26, 2019
05a8a1c
Update train.py
teytaud Sep 26, 2019
b15a162
Update progressive_gan_trainer.py
teytaud Sep 26, 2019
2b13f24
Update README.md
teytaud Sep 30, 2019
b1f0983
Update README.md
teytaud Sep 30, 2019
1805b95
Update gan_trainer.py
teytaud Sep 30, 2019
b3d62b0
Update progressive_gan_trainer.py
teytaud Sep 30, 2019
d6577ba
Update README.md
teytaud Sep 30, 2019
dcdaa66
Update README.md
teytaud Sep 30, 2019
57fdfaf
Update DCGAN_trainer.py
teytaud Sep 30, 2019
d2a11a9
Update progressive_gan_trainer.py
teytaud Sep 30, 2019
fa54887
Update gan_trainer.py
teytaud Sep 30, 2019
8374753
Update DCGAN_trainer.py
teytaud Sep 30, 2019
88eca90
Update README.md
teytaud Oct 1, 2019
d0a40a6
Update README.md
teytaud Oct 1, 2019
10e2247
add inspirational images example
teytaud Oct 1, 2019
dcb4f4b
Update README.md
teytaud Oct 1, 2019
b43210c
Update README.md
teytaud Oct 1, 2019
4451d70
Update README.md
teytaud Oct 14, 2019
4fc893d
Update README.md
teytaud Oct 14, 2019
3dccf5e
Update README.md
teytaud Oct 14, 2019
d812b0a
Update train.py
teytaud Oct 14, 2019
6f0cec7
BUGFIX: miniBatchScheduler remaining with DCGAN
Jan 7, 2020
74 changes: 60 additions & 14 deletions README.md
@@ -3,7 +3,6 @@
A GAN toolbox for researchers and developers with:
- Progressive Growing of GAN(PGAN): https://arxiv.org/pdf/1710.10196.pdf
- DCGAN: https://arxiv.org/pdf/1511.06434.pdf
- To come: StyleGAN https://arxiv.org/abs/1812.04948
Contributor: :(

Comment: So no StyleGAN implementation in the end, @Molugan? 😞

Contributor: There is a PR for styleGAN!

Contributor: styleGAN incoming (#95), so no need to remove this one.


<img src="illustration.png" alt="illustration">
Picture: Generated samples from GANs trained on celebaHQ, fashionGen, DTD.
@@ -48,6 +47,21 @@ pip install -r requirements.txt
- DTD: https://www.robots.ox.ac.uk/~vgg/data/dtd/
- CIFAR10: http://www.cs.toronto.edu/~kriz/cifar.html

For a quick start with celebaHQ, you might run:
Author: I just wanted to make people's lives easier by copy-pasting the TL;DR.

Contributor: Good idea for celebaHQ, it was a pain in the *** to make the first time. I would, however, add a "## Quick download" section.

```
# Clone the celebA-HQ download helper and set up its environment
git clone https://github.com/nperraud/download-celebA-HQ.git
cd download-celebA-HQ
conda create -n celebaHQ python=3
source activate celebaHQ
conda install jpeg=8d tqdm requests pillow==3.1.1 urllib3 numpy cryptography scipy
pip install opencv-python==3.4.0.12 cryptography==2.1.4
sudo apt-get install p7zip-full
# Download celebA, then the celebA-HQ deltas, then build the HQ images
python download_celebA.py ./
python download_celebA_HQ.py ./
python make_HQ_images.py ./
# Point PATH_TO_CELEBAHQ at the 512x512 images for the training steps below
export PATH_TO_CELEBAHQ=`readlink -f ./celebA-HQ/512`
```

## Quick training

The datasets.py script allows you to prepare your datasets and build their corresponding configuration files.
@@ -64,8 +78,9 @@ And wait for a few days. Your checkpoints will be dumped in output_networks/cele
For celebaHQ:

```
python datasets.py celebaHQ $PATH_TO_CELEBAHQ -o $OUTPUT_DATASET - f
python train.py PGAN -c config_celebaHQ.json --restart -n celebaHQ
python datasets.py celebaHQ $PATH_TO_CELEBAHQ -o $OUTPUT_DATASET # Prepare the dataset and build the configuration file.
python train.py PGAN -c config_celebaHQ.json --restart -n celebaHQ # Train.
python eval.py inception -n celebaHQ -m PGAN # If you want to check the inception score.
```

Contributor: I'd rather not add the inception part here.

Your checkpoints will be dumped in output_networks/celebaHQ. You should get 1024x1024 generations at the end.
@@ -130,7 +145,7 @@ Where:

1 - MODEL_NAME is the name of the model you want to run. Currently, two models are available:
- PGAN(progressive growing of gan)
- PPGAN(decoupled version of PGAN)
- DCGAN

2 - CONFIGURATION_FILE (mandatory): path to a training configuration file. This file is a JSON file containing at least a pathDB entry with the path to the training dataset. See below for more information about this file.
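A minimal example, with only the mandatory field (the path is illustrative; omitted settings keep the model's defaults):

```
{
  "pathDB": "/path/to/your/dataset"
}
```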

@@ -209,19 +224,19 @@ You need to use the eval.py script.

You can generate more images from an existing checkpoint using:
```
python eval.py visualization -n $modelName -m $modelType
python eval.py visualization -n $runName -m $modelName
```

Where modelType is in [PGAN, PPGAN, DCGAN] and modelName is the name given to your model. This script will load the last checkpoint detected at testNets/$modelName. If you want to load a specific iteration, please call:
Where modelName is in [PGAN, DCGAN] and runName is the name given to your run (trained model). This script will load the last checkpoint detected at output_networks/$runName. If you want to load a specific iteration, please call:

```
python eval.py visualization -n $modelName -m $modelType -s $SCALE -i $ITER
python eval.py visualization -n $runName -m $modelName -s $SCALE -i $ITER
```

If your model is conditioned, you can ask the visualizer to print out some conditioned generations. For example:

```
python eval.py visualization -n $modelName -m $modelType --Class T_SHIRT
python eval.py visualization -n $runName -m $modelName --Class T_SHIRT
```

This will plot a batch of T_SHIRT images in visdom. Please use the option --showLabels to see all the available labels for your model.
@@ -231,16 +246,21 @@
To save a randomly generated fake dataset from a checkpoint please use:

```
python eval.py visualization -n $modelName -m $modelType --save_dataset $PATH_TO_THE_OUTPUT_DATASET --size_dataset $SIZE_OF_THE_OUTPUT
python eval.py visualization -n $runName -m $modelName --save_dataset $PATH_TO_THE_OUTPUT_DATASET --size_dataset $SIZE_OF_THE_OUTPUT
```

### SWD metric

Using the same kind of configuration file as above, just launch:

```
python eval.py laplacian_SWD -c $CONFIGURATION_FILE -n $modelName -m $modelType
python eval.py laplacian_SWD -c $CONFIGURATION_FILE -n $runName -m $modelName
```
for the SWD score, to be maximized, or for the inception score:
Molugan (Contributor), Jan 7, 2020: No, the SWD should be minimized.
```
python eval.py inception -c $CONFIGURATION_FILE -n $runName -m $modelName
```
also to be maximized (see https://hal.inria.fr/hal-01850447/document for a discussion).


Where $CONFIGURATION_FILE is the training configuration file called by train.py (see above): it must contain a "pathDB" field pointing to the dataset's directory. For example, if you followed the instructions of the Quick Training section to launch a training session on celebaHQ, your configuration file will be config_celebaHQ.json.
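For intuition, a sliced Wasserstein distance between two sets of descriptors can be estimated as in the following minimal Python sketch. This is illustrative only, not the repo's laplacian_SWD implementation: it assumes two equal-sized descriptor sets are already extracted, whereas laplacian_SWD appears to additionally work per resolution (cf. the --selfNoise option below).

```
import torch

def sliced_wasserstein(x, y, n_projections=512):
    # x, y: (n_samples, dim) tensors of descriptors, same shape
    directions = torch.randn(x.shape[1], n_projections)
    directions /= directions.norm(dim=0, keepdim=True)  # unit-norm axes
    # In 1D, the Wasserstein distance between empirical distributions is the
    # distance between sorted samples; average it over many random axes.
    proj_x = torch.sort(x @ directions, dim=0).values
    proj_y = torch.sort(y @ directions, dim=0).values
    return (proj_x - proj_y).abs().mean()
```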

@@ -250,27 +270,53 @@ You can add optional arguments:
- -i $ITER: specify the iteration to evaluate (if not set, the highest one will be taken)
- --selfNoise: returns the typical noise of the SWD distance for each resolution

### Inspirational generation
### Inspirational generation (https://arxiv.org/abs/1906.11661)

You might want to generate clothes (or faces, or whatever) using an inspirational image, e.g.:
Molugan (Contributor), Jan 7, 2020: typo: clothese -> clothes

I don't find the description very clear :(.

<img src="inspir.png" alt="celeba" class="center">

Inspirational generation consists in using your GAN to generate an image that looks like a given input image.
It works by optimizing the latent vector z so that similarity(GAN(z), TargetImage) is maximized.
To make an inspirational generation, you first need to build a feature extractor:

```
python save_feature_extractor.py {vgg16, vgg19} $PATH_TO_THE_OUTPUT_FEATURE_EXTRACTOR --layers 3 4 5
```
This feature extractor is then used for computing the similarity.
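For intuition, the gradient-descent variant of this optimization boils down to something like the sketch below. It is illustrative only: the generator and feature-extractor interfaces, the latent dimension, and the MSE feature loss are assumptions, not the repo's exact code.

```
import torch

def inspirational_generation(G, feature_extractor, target_image,
                             latent_dim=512, n_steps=1000, lr=0.05):
    # Start from a random latent vector and make it trainable
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    with torch.no_grad():
        target_features = feature_extractor(target_image)
    for _ in range(n_steps):
        optimizer.zero_grad()
        # Feature-space distance between the generated image and the target
        loss = torch.nn.functional.mse_loss(feature_extractor(G(z)),
                                            target_features)
        loss.backward()
        optimizer.step()
    return G(z).detach()
```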

Then run your model:

```
python eval.py inspirational_generation -n $modelName -m $modelType --inputImage $pathTotheInputImage -f $PATH_TO_THE_OUTPUT_FEATURE_EXTRACTOR
python eval.py inspirational_generation -n $runName -m $modelName --inputImage $pathTotheInputImage -f $PATH_TO_THE_OUTPUT_FEATURE_EXTRACTOR
```
You can also try out some of the optimizers from Nevergrad
Contributor: "You can compare choose for the optimization one of the optimizers in Nevergrad" -> "You can also try out some optimizers from Nevergrad"
(https://github.com/facebookresearch/nevergrad/). For example, you can run:
```
python eval.py inspirational_generation -n $runName -m $modelName --inputImage $pathTotheInputImage -f $PATH_TO_THE_OUTPUT_FEATURE_EXTRACTOR --nevergrad CMA
```
if you want to use CMA-ES, or pick another optimizer among 'CMA', 'DE', 'PSO', 'TwoPointsDE', 'PortfolioDiscreteOnePlusOne', 'DiscreteOnePlusOne', 'OnePlusOne'. If you do not specify --nevergrad, then Adam is used.
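Under the hood, a gradient-free search of the latent space with Nevergrad looks roughly like the sketch below. It reuses the assumed G, feature_extractor, and target_features from the sketch above, and the Nevergrad API may differ between versions.

```
import nevergrad as ng
import numpy as np
import torch

def score(z_array):
    # Lower is better: feature distance between GAN(z) and the target
    z = torch.from_numpy(np.asarray(z_array, dtype=np.float32)).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.mse_loss(feature_extractor(G(z)),
                                            target_features).item()

# CMA-ES over a 512-dimensional latent vector, with 500 evaluations
opt = ng.optimizers.CMA(parametrization=ng.p.Array(shape=(512,)), budget=500)
recommendation = opt.minimize(score)
best_z = recommendation.value  # best latent vector found
```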


### I have generated my metrics. How can I plot them in visdom?

Just run
```
python eval.py metric_plot -n $modelName
python eval.py metric_plot -n $runName
```

## LICENSE

This project is under BSD-3 license.

## Citing

```bibtex
@misc{pytorchganzoo,
author = {M. Riviere},
title = {{Pytorch GAN Zoo}},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://GitHub.com/FacebookResearch/pytorch_GAN_zoo}},
}
```
3 changes: 2 additions & 1 deletion datasets.py
@@ -281,7 +281,8 @@ def resizeDataset(inputPath, outputPath, maxSize):
maxSize = 1024
moveLastScale = False
keepOriginalDataset = True
config["miniBatchScheduler"] = {"7": 12, "8": 8}
if args.model_type == 'PGAN':
config["miniBatchScheduler"] = {"7": 12, "8": 8}
if args.model_type == 'DCGAN':
print("WARNING: DCGAN is diverging for celebaHQ")

Binary file added inspir.png
2 changes: 1 addition & 1 deletion models/eval/inspirational_generation.py
@@ -71,7 +71,7 @@ def updateParser(parser):
parser.add_argument('--weights', type=float, dest='weights',
nargs='*', help="Weight of each classifier. Default \
value is one. If specified, the number of weights must\
match the number of feature exatrcators.")
match the number of feature extractors.")
parser.add_argument('--gradient_descent', help='gradient descent',
action='store_true')
parser.add_argument('--random_search', help='Random search',
6 changes: 5 additions & 1 deletion models/trainer/DCGAN_trainer.py
@@ -1,5 +1,6 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import os
import time

from ..DCGAN import DCGAN
from .gan_trainer import GANTrainer
@@ -18,6 +19,7 @@ def getDefaultConfig(self):

def __init__(self,
pathdb,
miniBatchScheduler=None,
Author: Adding this argument is necessary because datasets.py sets up miniBatchScheduler, and we get a crash at runtime because __init__ does not have such an argument?

Contributor: It shouldn't crash unless you add a scheduler in the configuration file.

Author: Presumably there is a problem in the automatic generation of the configuration file. @Molugan, how should we proceed regarding this issue? I would go for keeping my fix so that at least it does not crash.

Contributor: The issue is DCGAN + celebaHQ in the dataset maker. I hadn't considered the possibility that someone would want to use both.

Molugan (Contributor), Jan 7, 2020: Then the dataset maker should be changed, not this part: indeed, the idea of the GANTrainer is to be as general as possible. Specific features like multi-scale are defined by child classes.

**kwargs):
r"""
Args:
@@ -46,10 +48,12 @@ def train(self):
self.saveBaseConfig(pathBaseConfig)

maxShift = int(self.modelConfig.nEpoch * len(self.getDBLoader(0)))

start = time.time()
for epoch in range(self.modelConfig.nEpoch):
dbLoader = self.getDBLoader(0)
self.trainOnEpoch(dbLoader, 0, shiftIter=shift)
if self.max_time > 0 and time.time() - start > self.max_time:
break

shift += len(dbLoader)

3 changes: 3 additions & 0 deletions models/trainer/gan_trainer.py
@@ -27,6 +27,7 @@ def __init__(self,
checkPointDir=None,
modelLabel="GAN",
config=None,
max_time=0,
pathAttribDict=None,
selectedAttributes=None,
imagefolderDataset=False,
Expand All @@ -50,6 +51,7 @@ def __init__(self,
- modelLabel (string): name of the model
- config (dictionary): configuration dictionary.
for all the possible options
- max_time (int): max number of seconds for training (0 = infinity).
Contributor: NIT: working with seconds is not very practical; hours look like a better unit to me.

- pathAttribDict (string): path to the attribute dictionary giving
the labels of the dataset
- selectedAttributes (list): if not None, consider only the listed
@@ -70,6 +72,7 @@ def __init__(self,
self.path_db = pathdb
self.pathPartition = pathPartition
self.partitionValue = partitionValue
self.max_time = max_time

if config is None:
config = {}
6 changes: 4 additions & 2 deletions models/trainer/progressive_gan_trainer.py
@@ -1,5 +1,6 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import os
import time

from .standard_configurations.pgan_config import _C
from ..progressive_gan import ProgressiveGAN
@@ -46,7 +47,6 @@ def __init__(self,
- stopOnShitStorm (bool): should we stop the training if a diverging
behavior is detected ?
"""

self.configScheduler = {}
if configScheduler is not None:
self.configScheduler = {
@@ -208,6 +208,7 @@ def train(self):
+ "_train_config.json")
self.saveBaseConfig(pathBaseConfig)

start = time.time()
for scale in range(self.startScale, n_scales):

self.updateDatasetForScale(scale)
@@ -230,7 +231,8 @@
shiftAlpha += 1

while shiftIter < self.modelConfig.maxIterAtScale[scale]:

if self.max_time > 0 and time.time() - start > self.max_time:
break
self.indexJumpAlpha = shiftAlpha
status = self.trainOnEpoch(dbLoader, scale,
shiftIter=shiftIter,
6 changes: 4 additions & 2 deletions train.py
@@ -14,7 +14,6 @@
def getTrainer(name):

match = {"PGAN": ("progressive_gan_trainer", "ProgressiveGANTrainer"),
"StyleGAN":("styleGAN_trainer", "StyleGANTrainer"),
"DCGAN": ("DCGAN_trainer", "DCGANTrainer")}

if name not in match:
@@ -30,13 +29,15 @@ def getTrainer(name):
parser = argparse.ArgumentParser(description='Testing script')
parser.add_argument('model_name', type=str,
help='Name of the model to launch, available models are\
PGAN and PPGAN. To get all possible option for a model\
PGAN and DCGAN. To get all possible options for a model\
please run train.py $MODEL_NAME -overrides')
parser.add_argument('--no_vis', help=' Disable all visualizations',
action='store_true')
parser.add_argument('--np_vis', help=' Replace visdom by a numpy based \
visualizer (SLURM)',
action='store_true')
parser.add_argument('--max_time', help=' Maximum time in seconds (0 for infinity)', type=int,
dest='max_time', default=0)
parser.add_argument('--restart', help=' If a checkpoint is detected, do \
not try to load it',
action='store_true')
@@ -124,6 +125,7 @@ def getTrainer(name):
lossIterEvaluation=kwargs["evalIter"],
checkPointDir=checkPointDir,
saveIter= kwargs["saveIter"],
max_time=kwargs["max_time"],
modelLabel=modelLabel,
partitionValue=partitionValue,
**trainingConfig)
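A usage sketch of the new flag (model, config file, and run name as in the Quick training section; the duration is illustrative):

```
# Cap training at roughly one hour; 0 (the default) means no time limit
python train.py PGAN -c config_celebaHQ.json -n celebaHQ --max_time 3600
```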