This is a personal mini-project. I tried to reimplement the style transfer methods described in Gatys et al.'s paper and Adobe's paper. The code is written in PyTorch.
Typically, for the pretrained VGG19 model (without batch normalization) used here, an image should be preprocessed as follows before being passed to the network:

- rescale it (to a minimum size of 224x224)
- convert it from RGB to BGR (the channel means below are given in BGR order)
- normalize it with `mean=[0.40760392, 0.45795686, 0.48501961]` and `std=[1, 1, 1]`
- scale the resulting tensor by a factor of `255.0`
The postprocessing should be the exact inverse of the preprocessing above.
You can try the following command:

```
python extract.py -c <content-img-fn> -s <style-img-fn> --size <height> <width> --lambd 0.5
```
All images should be located in `./images/` and be in `.jpg` format.
For example, you can try the following command:

```
python extract.py -c girl -s stl --size 600 800 --lambd 0.5
```

which is expected to produce:
Here are some other results:
Compared to the original pictures: