Discriminator variability on the same image when in different batches #118

Open

erezposner opened this issue Jun 2, 2020 · 4 comments

@erezposner

Hi, I've been using the discriminator of the trained network as a black box to distinguish between generated and real images. My main goal is to determine the range of the discriminator's output values (since it acts as a critic rather than a classifier, I understand it is not bound to a specific range, but it would be nice to have an estimate of it).

My problem is that if I take the same image and run it alongside different images in a batch, its output changes (I assume this has something to do with the minibatch normalization).

Also, if I set the batch size to one, there seems to be a large bias in the discriminator output.
For example:

  • Running a single image usually yields a result in the range -260 to -250. As you increase the batch size, the results slowly get larger and larger.

Is there a way to disable the effect of other images on each image's result at inference?
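
For reference, here is a minimal sketch of a ProGAN-style minibatch standard deviation layer, which is what I suspect is at play (an assumption on my part; the repo's actual miniBatchStdDev may differ in detail). The appended channel is computed across the whole batch, so the same image gets different features depending on its batch mates:

```python
import torch

def minibatch_stddev(x, sub_group_size=4):
    # Sketch of a ProGAN-style minibatch stddev layer: append one feature
    # map holding the standard deviation of activations computed ACROSS the
    # batch, so each sample's features depend on its batch mates.
    n, c, h, w = x.shape
    g = min(sub_group_size, n)
    y = x.view(n // g, g, c, h, w)            # split the batch into sub-groups
    y = y.std(dim=1)                          # std across each sub-group
    y = y.mean(dim=(1, 2, 3), keepdim=True)   # one scalar per sub-group
    y = y.repeat_interleave(g, dim=0)         # broadcast back to every sample
    return torch.cat([x, y.expand(n, 1, h, w)], dim=1)

# The same image placed in two different batches gets a different appended
# statistic, which is what makes the final score batch-dependent:
torch.manual_seed(0)
img = torch.randn(1, 3, 8, 8)
batch_a = torch.cat([img, torch.randn(3, 3, 8, 8)])
batch_b = torch.cat([img, torch.randn(3, 3, 8, 8)])
stat_a = minibatch_stddev(batch_a)[0, -1, 0, 0]
stat_b = minibatch_stddev(batch_b)[0, -1, 0, 0]
print(stat_a.item(), stat_b.item())  # differ, even though image 0 is identical
```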

@Molugan
Contributor

Molugan commented Jun 8, 2020

Hello,

Sorry for the delay. A good way to do this would be to compute the average of both the mean and the standard deviation over batches of size subGroupSize (see the minibatch-stddev code, e.g. the line `y = torch.mean(y, 1).view(G, 1)`) on a set of images representative of the domain you want to work with (or at training time, if you train the model yourself), and use those values to get your scores.
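
Roughly, something like the following sketch; the helpers `calibrate_stddev` and `stddev_feature_fixed` are illustrative names, not this repo's API:

```python
import torch

@torch.no_grad()
def calibrate_stddev(images, sub_group_size=4):
    # Hypothetical helper: average the minibatch-stddev statistic over a
    # representative reference set, once, offline.
    stats = []
    for i in range(0, len(images) - sub_group_size + 1, sub_group_size):
        group = images[i:i + sub_group_size]
        stats.append(group.std(dim=0).mean())
    return torch.stack(stats).mean()  # a single frozen scalar

def stddev_feature_fixed(x, frozen_stat):
    # Append the frozen statistic instead of the live batch statistic, so a
    # test image's score no longer depends on its batch mates.
    n, _, h, w = x.shape
    return torch.cat([x, frozen_stat.expand(n, 1, h, w)], dim=1)

# Usage: calibrate once on reference images, then score single test images.
reference = torch.randn(64, 3, 8, 8)
frozen = calibrate_stddev(reference)
test_img = torch.randn(1, 3, 8, 8)
features = stddev_feature_fixed(test_img, frozen)  # batch-independent
```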

@erezposner
Author

What do you mean? I actually did something pretty similar to what you suggested: hand-picking a control group (a set of images representative of the domain) and then changing only the one sample to be tested.

In my approach I always use the same mini-batch, changing only the sample. How can I use the mean and std on a single sample, then?

@Molugan
Contributor

Molugan commented Jul 1, 2020

You need to compute the mean and std on a selected "training" set and use these values for each test iteration. See, for example, the code of BatchNorm in PyTorch: https://pytorch.org/docs/master/_modules/torch/nn/modules/batchnorm.html#BatchNorm2d
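
As a minimal illustration of the pattern that link describes (plain BatchNorm here, not this repo's discriminator): in eval() mode the frozen running statistics are used, which is exactly what makes a single sample's output batch-independent:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)

# "Training" passes accumulate running_mean / running_var over the data.
bn.train()
for _ in range(10):
    bn(torch.randn(8, 3, 4, 4))

# In eval() mode the frozen running statistics are used instead of the
# current batch's, so the output for one sample no longer depends on the
# rest of the batch:
bn.eval()
x = torch.randn(1, 3, 4, 4)
alone = bn(x)
in_batch = bn(torch.cat([x, torch.randn(5, 3, 4, 4)]))[:1]
print(torch.allclose(alone, in_batch))  # True
```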

@gengcong940126

I set self.gan.netD.eval(), but I still see this problem.
