Generative-Modeling-for-Skin-Lesion-Synthesis

Objectives:

  1. Train a Vector-Quantized Variational Autoencoder (VQ-VAE) on the skin lesion dataset to efficiently encode and decode high-dimensional image data while capturing meaningful latent representations (see the vector-quantization sketch after this list).
  2. Train an Auto-Regressive Model (here, PixelCNN) over the learned discrete latent space to generate new, realistic images.
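
The first objective centers on the vector-quantization bottleneck. The sketch below is a minimal, illustrative PyTorch version of that layer, not the repository's actual code; the codebook size (512), embedding dimension (64), and commitment cost (0.25) are assumed hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient estimator."""
    def __init__(self, num_embeddings=512, embedding_dim=64, commitment_cost=0.25):
        super().__init__()
        self.embedding_dim = embedding_dim
        self.commitment_cost = commitment_cost
        self.codebook = nn.Embedding(num_embeddings, embedding_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_embeddings, 1.0 / num_embeddings)

    def forward(self, z_e):
        # z_e: encoder output of shape (B, C, H, W) with C == embedding_dim.
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, self.embedding_dim)
        # Squared Euclidean distance from every latent vector to every codebook entry.
        distances = (
            flat.pow(2).sum(1, keepdim=True)
            - 2 * flat @ self.codebook.weight.t()
            + self.codebook.weight.pow(2).sum(1)
        )
        indices = distances.argmin(dim=1)
        z_q = self.codebook(indices).view(z_e.shape[0], z_e.shape[2], z_e.shape[3], -1)
        z_q = z_q.permute(0, 3, 1, 2).contiguous()

        # Codebook and commitment losses; gradients reach the encoder via the
        # straight-through estimator on the following line.
        loss = F.mse_loss(z_q, z_e.detach()) + self.commitment_cost * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, indices.view(z_e.shape[0], z_e.shape[2], z_e.shape[3])
```

In the full model, the encoder's output feeds this layer and the quantized result goes to the decoder; the returned grid of codebook indices is what the autoregressive prior is later trained on.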

Dataset: ISIC (International Skin Imaging Collaboration) skin lesion dataset.
Sample images from the dataset are shown below:
(Figure: sample ISIC skin lesion images)

Reconstructed images on test data after 2500 epochs:
(Figure: test-set reconstructions after 2500 epochs)

Generated images using PixelCNN:
(Figure: samples generated by the PixelCNN prior)
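
For orientation, the following is a minimal, assumed sketch (not the repository's code) of how samples like these are typically drawn once a PixelCNN prior has been trained over the VQ-VAE's discrete latent grid; `prior`, `vqvae`, its `quantizer.codebook` and `decode` attributes, and the 32x32 grid size are hypothetical names and values.

```python
import torch

@torch.no_grad()
def sample_images(prior, vqvae, batch_size=8, grid=(32, 32), device="cpu"):
    """Autoregressively fill a grid of codebook indices, then decode it to images."""
    h, w = grid
    indices = torch.zeros(batch_size, h, w, dtype=torch.long, device=device)
    for i in range(h):
        for j in range(w):
            logits = prior(indices)                      # (B, num_embeddings, H, W)
            probs = logits[:, :, i, j].softmax(dim=-1)   # distribution over codebook entries
            indices[:, i, j] = torch.multinomial(probs, 1).squeeze(-1)
    # Map indices back to codebook vectors and decode to image space.
    z_q = vqvae.quantizer.codebook(indices).permute(0, 3, 1, 2)
    return vqvae.decode(z_q)                             # (B, 3, H_img, W_img)
```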

About

Developed a VQ-VAE to learn quantized latent representations and trained an Auto-Regressive Model (PixelCNN) to generate realistic skin lesion images from the ISIC dataset. Used image pre-processing techniques and Weights & Biases (WandB) for tracking, visualizing, and evaluating generative performance.
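
As an illustration of the WandB tracking mentioned above, the snippet below is a rough sketch rather than the project's actual logging code; the project name, config values, and logged keys are assumptions.

```python
import torch
import wandb

# Assumes a configured WandB account; project name and config values are illustrative.
wandb.init(project="skin-lesion-vqvae", config={"epochs": 2500, "codebook_size": 512})

# Inside the training loop, scalar losses and reconstructed images can be logged together.
recon_batch = torch.rand(8, 3, 64, 64)                   # stand-in for decoded images
wandb.log({
    "reconstruction_loss": 0.042,                        # stand-in scalar values
    "vq_loss": 0.011,
    "reconstructions": [wandb.Image(img.permute(1, 2, 0).numpy()) for img in recon_batch],
})
```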
