
3D CT GAN #2

Closed
4 of 5 tasks
bodokaiser opened this issue Sep 7, 2017 · 6 comments
bodokaiser commented Sep 7, 2017

Implementation of 3D CT GAN Image Synthesis.

Steps to reproduce:

  • export interpolated CT, MR volumes
  • extract random overlapping 3D volume patches
  • implement auto-context on patches
  • merge patches to volume
  • implement generator loss lambda1*G_GAN_Loss + lambda2*MSE_Loss + lambda3*Gradient_Difference_Loss
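The combined generator loss from the last step can be sketched as follows. This is a minimal numpy sketch, assuming a non-saturating GAN term and an L1 gradient-difference loss; the function names and the exact GAN/GDL formulations are assumptions, not taken from this issue:

```python
import numpy as np

def gradient_difference_loss(fake, real):
    """L1 gradient-difference loss over the three spatial axes
    of a [batch, depth, height, width, channels] volume."""
    loss = 0.0
    for axis in (1, 2, 3):
        grad_fake = np.abs(np.diff(fake, axis=axis))
        grad_real = np.abs(np.diff(real, axis=axis))
        loss += np.mean(np.abs(grad_fake - grad_real))
    return loss

def generator_loss(fake, real, d_fake_prob, lam1=0.5, lam2=1.0, lam3=1.0):
    """lambda1*G_GAN_Loss + lambda2*MSE_Loss + lambda3*Gradient_Difference_Loss."""
    gan_loss = -np.mean(np.log(d_fake_prob + 1e-8))  # non-saturating GAN loss
    mse_loss = np.mean((fake - real) ** 2)
    gdl = gradient_difference_loss(fake, real)
    return lam1 * gan_loss + lam2 * mse_loss + lam3 * gdl
```

The default weights match the lambda1=0.5, lambda2=lambda3=1 values given below.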

Generator Architecture (Conv3d-BNorm-ReLU):

  1. [10x32x32x32x1] -> [10x32x32x32x32]
  2. [10x32x32x32x32] -> [10x32x32x32x32]
  3. [10x32x32x32x32] -> [10x32x32x32x32]
  4. [10x32x32x32x32] -> [10x32x32x32x64]
  5. [10x32x32x32x64] -> [10x32x32x32x64]
  6. [10x32x32x32x64] -> [10x32x32x32x64]
  7. [10x32x32x32x64] -> [10x32x32x32x64]
  8. [10x32x32x32x64] -> [10x32x32x32x32]
  9. [10x32x32x32x32] -> [10x32x32x32x32]
  10. [10x32x32x32x32] -> [10x16x16x16x1]
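A quick way to sanity-check these shapes: with 'same' padding the 5x5x5 convolutions preserve the 32^3 spatial size, and the 32^3 -> 16^3 reduction in step 10 (matching the discriminator input) is consistent with a stride-2 convolution — the stride is an assumption, since the issue doesn't say how the halving is done. A minimal shape trace:

```python
import math

def conv3d_shape(shape, filters, stride=1):
    """Output shape of a 'same'-padded 3D convolution on a
    [batch, depth, height, width, channels] tensor; with 'same'
    padding the output size does not depend on the kernel size."""
    batch, d, h, w, _ = shape
    return [batch] + [math.ceil(s / stride) for s in (d, h, w)] + [filters]

# Channel progression from the list above; the stride-2 final layer is
# an assumption to explain the 32^3 -> 16^3 reduction in step 10.
shape = [10, 32, 32, 32, 1]
for filters, stride in [(32, 1), (32, 1), (32, 1), (64, 1), (64, 1),
                        (64, 1), (64, 1), (32, 1), (32, 1), (1, 2)]:
    shape = conv3d_shape(shape, filters, stride)
# shape == [10, 16, 16, 16, 1]
```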

Discriminator Architecture (Conv3d-BNorm-ReLU-MaxPool):

  1. [10x16x16x16x1] -> [10x16x16x16x32]
  2. [10x16x16x16x32] -> [10x16x16x16x64]
  3. [10x16x16x16x64] -> [10x16x16x16x128]
  4. [10x16x16x16x128] -> [10x16x16x16x256]

Discriminator Architecture (FCN):

  5. [10x16x16x16x256] -> [10x16x16x16x512]
  6. [10x16x16x16x512] -> [10x16x16x16x128]
  7. [10x16x16x16x128] -> [10x16x16x16x1]
  8. Sigmoid

Additional:

  • filter sizes 5x5x5
  • lambda1=0.5, lambda2=lambda3=1
  • learn_rate=1e-6, beta1=0.5
bodokaiser commented Sep 19, 2017

Further todos:

  • reduce patient information from data.transform to (indices, values), where indices contains the patient id
  • use tf.layers.conv3d compatible output shape for patches
  • remove collection.namedtuple from data.transform
  • use tf.contrib.data.Dataset.from_generator with RandomSampler and GridSampler classes (note: Dataset.from_generator is a Python implementation, hence will be slower)
  • update grid sampling delta to 2
  • move patch aggregation to support class
  • use tf.name_scope and op(..., name='...') to make the graph more readable (no time for this)
  • get training up and running
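For the "move patch aggregation to a support class" item, a minimal sketch of such a class (hypothetical, numpy-based): overlapping patches are summed into an accumulator and divided by a per-voxel count, so overlap regions are averaged.

```python
import numpy as np

class PatchAggregator:
    """Merges overlapping 3D patches back into a volume by
    averaging the regions where patches overlap."""

    def __init__(self, volume_shape):
        self.values = np.zeros(volume_shape, dtype=np.float64)
        self.counts = np.zeros(volume_shape, dtype=np.float64)

    def add(self, patch, corner):
        """Accumulate a patch whose lowest corner sits at (z, y, x)."""
        z, y, x = corner
        d, h, w = patch.shape
        self.values[z:z+d, y:y+h, x:x+w] += patch
        self.counts[z:z+d, y:y+h, x:x+w] += 1.0

    def volume(self):
        # Avoid division by zero where no patch covered a voxel.
        return self.values / np.maximum(self.counts, 1.0)
```

With a grid-sampling delta of 2 (see above), neighbouring patches overlap heavily, so the averaging smooths patch-border seams.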

bodokaiser commented Sep 21, 2017

When training is running on all volumes:

  • add gradient loss
  • add GAN loss
  • add random indices

bodokaiser commented Sep 21, 2017

We are having problems allocating enough memory for volume and weight variables; possible solutions:

  • use tensors with tf.dynamic_stitch instead
  • directly write variables to filesystem

New idea which navigates around the above problem: we stage our training process:

  1. train on 3d patches randomly extracted by our dataset pipeline until convergence
  2. reconstruct 3d volumes with fixed weights
  3. retrain with new weights on 3d patches randomly extracted from our reconstructed 3d volumes*
  4. reconstruct 3d volumes with new weights
  5. repeat

*this way we will get auto-context as a side effect
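The staged process above can be sketched as a loop. Everything below is a toy stand-in — the helper functions are hypothetical placeholders, not the actual training code — meant only to show the alternation between patch-level training and volume reconstruction:

```python
import numpy as np

# Hypothetical stand-ins: "training" nudges a scalar weight toward the
# mean of its source volumes; "reconstruction" fills volumes with it.
def train_on_random_patches(weight, volumes):
    return 0.5 * (weight + np.mean(volumes))

def reconstruct_volumes(weight, volumes):
    return weight * np.ones_like(volumes)

def staged_training(volumes, stages=3):
    """Train on patches, reconstruct full volumes with fixed weights,
    then retrain on patches from the reconstructions, and repeat —
    giving auto-context as a side effect."""
    weight = 0.0
    source = volumes
    for _ in range(stages):
        weight = train_on_random_patches(weight, source)  # steps 1/3
        source = reconstruct_volumes(weight, volumes)     # steps 2/4
    return weight
```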

bodokaiser commented Sep 22, 2017

Network performs badly; ideas:

  • instead of padding the volumes we should crop them, as in the paper
  • ReLUs are misplaced
  • use leaky ReLUs instead of ReLUs
  • use Xavier initialization
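For the last two items, minimal numpy sketches of leaky ReLU and Xavier (Glorot) uniform initialization; the alpha=0.2 slope and the uniform variant are common defaults assumed here, not values from this issue:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """Leaky ReLU keeps a small gradient for negative inputs,
    which helps avoid dead units in GAN training."""
    return np.where(x > 0, x, alpha * x)

def xavier_init(fan_in, fan_out, rng=None):
    """Xavier/Glorot uniform init: weight variance is scaled by fan-in
    and fan-out so activation magnitudes stay stable across layers."""
    if rng is None:
        rng = np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))
```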

bodokaiser commented:
  • only one patient
  • slice away zero data
  • train only generator
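The "slice away zero data" item could look like the following numpy sketch (a hypothetical helper): it crops the leading and trailing slices that are entirely zero along each axis of a 3D volume, so the network never sees empty border regions.

```python
import numpy as np

def crop_zero_slices(volume):
    """Crop leading/trailing all-zero slices along every axis
    of a 3D volume, keeping only the nonzero bounding box."""
    slices = []
    for axis in range(volume.ndim):
        other = tuple(a for a in range(volume.ndim) if a != axis)
        nonzero = np.where(volume.any(axis=other))[0]
        if nonzero.size == 0:  # volume is entirely zero
            return volume[(slice(0, 0),) * volume.ndim]
        slices.append(slice(nonzero[0], nonzero[-1] + 1))
    return volume[tuple(slices)]
```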

bodokaiser commented:
Last missing feature will be continued at #4.
