Implementation of MetaVAE for Few-shot Image Generation in PyTorch. The model is trained on the Omniglot dataset.
The Omniglot dataset is designed for developing more human-like learning algorithms. It contains 1623 different handwritten characters from 50 different alphabets. Each of the 1623 characters was drawn online via Amazon's Mechanical Turk by 20 different people.
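The `omniglot-py/images_background/` path used in the commands below matches the layout produced by torchvision's Omniglot downloader. As a minimal sketch (assuming torchvision is installed), the background split can be fetched like this:

```python
# Minimal sketch: download the Omniglot background split with torchvision.
# This unpacks ./omniglot-py/images_background/<alphabet>/<character>/*.png,
# the directory layout passed to --meta_dataroot in the commands below.
from torchvision import datasets, transforms

background = datasets.Omniglot(
    root=".",                      # dataset is placed under ./omniglot-py/
    background=True,               # "images_background" (training) split
    download=True,
    transform=transforms.ToTensor(),
)
print(len(background))             # number of (image, character-class) samples
```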
- Train Meta_VAE on 5-shot tasks:
$ python meta_train.py --name mt_vae_results --meta_dataroot omniglot-py/images_background/ --k_spt 5 --k_qry 5 --update_step 100 --finetune_step 100 --num_epochs 500
- Test Meta_VAE on 5-shot tasks:
$ python fine_tuning.py --name mt_vae_results --meta_dataroot omniglot-py/images_background/ --k_spt 5 --k_qry 5 --update_step 100 --finetune_step 100 --test_epochs 50
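The `--k_spt` and `--k_qry` flags set the number of support and query images per episode, while `--update_step` and `--finetune_step` presumably control the number of inner-loop gradient steps during meta-training and test-time fine-tuning. The snippet below is only a hypothetical sketch of how a 5-shot support/query episode could be drawn from the `images_background/` folder layout; it is not the repository's actual sampling code.

```python
# Hypothetical sketch (not this repo's code): build one few-shot episode by
# splitting a single character's 20 drawings into k_spt support images and
# k_qry query images.
import glob
import os
import random

from PIL import Image

def sample_episode(meta_dataroot, k_spt=5, k_qry=5):
    # Each character directory holds the 20 drawings of one Omniglot character.
    char_dirs = glob.glob(os.path.join(meta_dataroot, "*", "*"))
    char_dir = random.choice(char_dirs)
    images = glob.glob(os.path.join(char_dir, "*.png"))
    random.shuffle(images)
    support = [Image.open(p).convert("L") for p in images[:k_spt]]
    query = [Image.open(p).convert("L") for p in images[k_spt:k_spt + k_qry]]
    return support, query

support, query = sample_episode("omniglot-py/images_background/", k_spt=5, k_qry=5)
print(len(support), len(query))  # 5 5
```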