Adding doc, adding a timer, fixing bugs. #84
base: main
Conversation
README.md
Outdated
@@ -64,8 +79,9 @@ And wait for a few days. Your checkpoints will be dumped in output_networks/cele
For celebaHQ:

python datasets.py celebaHQ $PATH_TO_CELEBAHQ -o $OUTPUT_DATASET - f
python train.py PGAN -c config_celebaHQ.json --restart -n celebaHQ
python datasets.py celebaHQ $PATH_TO_CELEBAHQ -o $OUTPUT_DATASET - f # Prepare the dataset and build the configuration file.
Not sure about this `- f`?
-f accelerates the training by saving intermediate images of smaller sizes.
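For readers wondering what that looks like in practice, here is a minimal sketch of pre-computing downscaled copies of a dataset; the directory layout, scale list, and function name are assumptions for illustration, not the actual behavior of datasets.py:

```python
# Hypothetical sketch: pre-compute smaller copies of each image so the
# low-resolution stages of progressive training can load them directly
# instead of resizing full-size images on the fly.
import os
from PIL import Image

def save_intermediate_scales(src_dir, out_dir, scales=(8, 16, 32, 64, 128)):
    for size in scales:
        scale_dir = os.path.join(out_dir, f"{size}x{size}")
        os.makedirs(scale_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            if not name.lower().endswith((".jpg", ".jpeg", ".png")):
                continue
            img = Image.open(os.path.join(src_dir, name)).convert("RGB")
            img.resize((size, size), Image.LANCZOS).save(
                os.path.join(scale_dir, name))
```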
@@ -48,6 +48,21 @@ pip install -r requirements.txt
- DTD: https://www.robots.ox.ac.uk/~vgg/data/dtd/
- CIFAR10: http://www.cs.toronto.edu/~kriz/cifar.html

For a quick start with CelebAHQ, you might:
I just wanted to make people's lives easier by giving them a TL;DR to copy-paste.
Good idea for celebaHQ, it was a pain in the *** to build the first time.
I would however add a section `## Quick download`.
@@ -22,6 +23,7 @@ def __init__(self,
pathdb,
miniBatchScheduler=None,
datasetProfile=None,
max_time=0,
@brozi I added a timer, which will help for DFO or AS for GAN optimization.
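For context, a time budget like this is usually enforced with a check inside the training loop. A minimal sketch, assuming a generic train_one_epoch callable rather than the repository's actual API:

```python
import time

def train_with_budget(train_one_epoch, n_epochs, max_time=0):
    """Stop training once max_time seconds have elapsed (0 = no limit)."""
    start = time.time()
    for epoch in range(n_epochs):
        train_one_epoch(epoch)
        if max_time > 0 and time.time() - start > max_time:
            print(f"Time budget of {max_time}s reached, stopping at epoch {epoch}")
            break
```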
@@ -18,6 +19,7 @@ def getDefaultConfig(self):

def __init__(self,
pathdb,
miniBatchScheduler=None,
Adding this argument is necessary because dataset.py sets up miniBatchScheduler, and we get a crash at runtime because `__init__` does not accept such an argument?
It shouldn't crash unless you add a scheduler in the configuration file.
Presumably there is a problem in the automatic generation of the configuration file.
@Molugan, how should we proceed on this issue? I would go for keeping my fix so that at least it does not crash.
The issue is DCGAN + celebaHQ in the dataset maker. I hadn't considered the possibility that someone would want to use both.
Then the dataset maker should be changed, not this part: the idea of GANTrainer is to be as general as possible, and specific features like multi-scale are defined by child classes.
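For illustration, one way to keep a base trainer tolerant of options that only some child classes use is to accept them with harmless defaults; the argument names mirror the diff above, but the structure is an assumption, not the repository's actual code:

```python
class GANTrainer:
    # Sketch of a permissive base-class signature: a plain DCGAN trainer can
    # accept and ignore miniBatchScheduler, while the progressive trainer
    # interprets it in its own subclass.
    def __init__(self, pathdb, miniBatchScheduler=None, datasetProfile=None,
                 max_time=0, **kwargs):
        self.pathdb = pathdb
        self.miniBatchScheduler = miniBatchScheduler
        self.datasetProfile = datasetProfile
        self.max_time = max_time
```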
@@ -3,7 +3,6 @@
A GAN toolbox for researchers and developers with:
- Progressive Growing of GAN(PGAN): https://arxiv.org/pdf/1710.10196.pdf
- DCGAN: https://arxiv.org/pdf/1511.06434.pdf
- To come: StyleGAN https://arxiv.org/abs/1812.04948
:(
So no StyleGAN implementation in the end, @Molugan? 😞
There is a PR for styleGAN!
styleGAN is incoming (#95), so there is no need to remove this one.
I'm OK with most changes. Just a few small things to correct and this can be merged.
I was waiting for feedback on other possibilities for the bugfix; maybe it's time to go ahead.
python train.py PGAN -c config_celebaHQ.json --restart -n celebaHQ
python datasets.py celebaHQ $PATH_TO_CELEBAHQ -o $OUTPUT_DATASET # Prepare the dataset and build the configuration file.
python train.py PGAN -c config_celebaHQ.json --restart -n celebaHQ # Train.
python eval.py inception -n celebaHQ -m PGAN # If you want to check the inception score.
I'd rather not add the inception part here.
You can compare choose for the optimization one of the optimizers in Nevergrad
You can compare choose for the optimization one of the optimizers in Nevergrad ->
You can also try out some optimizers from Nevergrad
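As a rough sketch of what trying a Nevergrad optimizer looks like in general (the 512-dimensional latent and the toy score function are stand-ins, not the repository's evaluation code; note that a score such as the SWD is minimized directly):

```python
import numpy as np
import nevergrad as ng

def score(latent):
    # Stand-in objective: in the real setting this would be, e.g., the SWD
    # of images generated from `latent`, which is better when lower.
    return float(np.sum(np.asarray(latent) ** 2))

parametrization = ng.p.Array(shape=(512,))
optimizer = ng.optimizers.OnePlusOne(parametrization=parametrization, budget=200)
recommendation = optimizer.minimize(score)
print(recommendation.value[:5])
```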
for the SWD score, to be maximized, or for the inception score:
No, the SWD should be minimized
### Inspirational generation
### Inspirational generation (https://arxiv.org/abs/1906.11661)

You might want to generate clothese (or faces, or whatever) using an inspirational image, e.g.:
typo: clothese -> clothes
I don't find the description very clear :(.
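To make the idea concrete, inspirational generation boils down to searching for a latent vector whose generated image resembles a reference image. A generic sketch, not the repository's actual implementation (generator and feature_extractor are assumed callables):

```python
import torch
import torch.nn.functional as F

def inspirational_search(generator, feature_extractor, reference_image,
                         latent_dim=512, steps=500, lr=0.05):
    # Optimize a latent vector so the generated image's features match those
    # of the reference ("inspirational") image.
    target = feature_extractor(reference_image).detach()
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.mse_loss(feature_extractor(generator(z)), target)
        loss.backward()
        optimizer.step()
    return z.detach()
```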
@@ -50,6 +51,7 @@ def __init__(self,
- modelLabel (string): name of the model
- config (dictionary): configuration dictionnary.
  for all the possible options
- max_time (int): max number of seconds for training (0 = infinity).
Nit: working in seconds is not very practical; hours look like a better unit to me.
The README should be updated. I'm pushing a fix for the dataset maker.
Adding documentation.
Fixing a crash for DCGAN.
Adding a timer.