This is an example of realtime Arbitrary Image Stylization using Magenta's arbitrary-image-stylization model. Unlike regular image stylization, this model is not trained on one specific style but can apply any style given by an input style image.
Example made with love by Jonathan Frank 2022
Video available at this Google Drive
For this example we do not have the Python code that produced the model. However, a SavedModel has been uploaded to TensorFlow Hub.
This model requires two inputs: one is the image that gets stylized, the other is the image providing the style. Therefore, we need to use `ofxTF2::Model::runMultiModel` and set the inputs accordingly.
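A minimal sketch of such a call, e.g. from `ofApp::update()`, might look like the following. The member names (`model`, `contentImage`, `styleImage`) are illustrative, and it assumes the addon's `ofxTF2::imageToTensor` / `ofxTF2::tensorToImage` conversion helpers plus float pixels scaled to [0, 1] as stated on the model's TF Hub page:

```cpp
// sketch: members assumed as ofxTF2::Model model; ofImage contentImage, styleImage;
// convert both images to float tensors, add a batch dimension,
// and scale pixel values to the [0, 1] range the model expects
cppflow::tensor content = ofxTF2::imageToTensor(contentImage);
cppflow::tensor style = ofxTF2::imageToTensor(styleImage);
content = cppflow::div(cppflow::expand_dims(content, 0), cppflow::tensor({255.0f}));
style = cppflow::div(cppflow::expand_dims(style, 0), cppflow::tensor({255.0f}));

// the tensor order must match the input names passed to model.setup()
std::vector<cppflow::tensor> outputs = model.runMultiModel({content, style});

// scale the single output back to [0, 255] and convert it for drawing
// (assuming the helper handles the leading batch dimension)
cppflow::tensor output = cppflow::mul(outputs[0], cppflow::tensor({255.0f}));
ofImage result;
ofxTF2::tensorToImage(output, result);
result.update();
```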
Taking a look at the output of the `saved_model_cli` tool, we find that this model expects the inputs to be:
```
inputs['placeholder'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, -1, -1, 3)
    name: serving_default_placeholder:0
inputs['placeholder_1'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, -1, -1, 3)
    name: serving_default_placeholder_1:0
```
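For reference, this listing can be produced with an invocation along these lines (assuming the SavedModel was downloaded into a folder named `model`; adjust the path to where the example keeps it):

```shell
saved_model_cli show --dir model --tag_set serve --signature_def serving_default
```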
Therefore, we set up the model as follows:

```cpp
model.setup({"serving_default_placeholder", "serving_default_placeholder_1"},
            {"StatefulPartitionedCall"});
```
NOTE: Remember that the first dimension is always the batch size, which is usually 1 in realtime applications. Please refer to example_basics_multi_IO for more information.
The image and video are both loaded from `bin/data`.
NOTE: it is always recommended to read the README provided on tfhub.dev:

> It is recommended that the style image is about 256 pixels (this size was used when training the style transfer network). The content image can be any size.
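Following that advice, the style image can be scaled down before it is converted to a tensor; a one-line sketch using openFrameworks' `ofImage::resize` (the exact target size is a judgment call around the ~256 px training resolution):

```cpp
// bring the style image close to the ~256 px resolution used during
// training; the content image can stay at any size
styleImage.resize(256, 256);
```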
NOTE: You can modify the output dimensions in `src/ofApp.h`.
The model is described in the paper "Exploring the structure of a real-time, arbitrary neural artistic stylization network".