The augmentation mechanism of TFOps-Aug is based on Google's AutoAugment paper, "AutoAugment: Learning Augmentation Policies from Data" [1]. This repository implements the augmentation policy logic and many of the augmentation functions. As in the original implementation, the augmentation procedure is defined as a policy, which consists of several subpolicies, each composed of one or more operations.
The augmentation operations are built on TensorFlow 2 operations, which allows for scalability and high computational throughput, even with large images. Furthermore, the augmentation pipeline can be easily integrated into the `tf.data` API, because all operations rely on TensorFlow operations and can be executed on image representations of type `tf.Tensor`. Currently, only image representations of type `tf.uint8` are supported.
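Images in other representations therefore need to be converted first. A minimal sketch of such a conversion for float images in [0, 1], using NumPy (in a TensorFlow pipeline, `tf.image.convert_image_dtype` serves the same purpose):

```python
import numpy as np


def to_uint8(image: np.ndarray) -> np.ndarray:
    """Convert a float image in [0, 1] to uint8 in [0, 255]."""
    scaled = np.clip(image, 0.0, 1.0) * 255.0
    return np.round(scaled).astype(np.uint8)


img = np.array([[0.0, 0.5, 1.0]], dtype=np.float32)
print(to_uint8(img))  # [[  0 128 255]]
```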
The package is available on pypi.org and can be installed with pip:

```shell
pip install tfops-aug
```
```python
policy = {'sub_policy0': {'op0': ['adjust_saturation', 0.2, 2],
                          'op1': ['equalize', 0.1, 6],
                          'op2': ['add_noise', 0.9, 6]},
          'sub_policy1': {'op0': ['adjust_contrast', 0.1, 7],
                          'op1': ['add_noise', 0.0, 10]},
          'sub_policy2': {'op0': ['posterize', 0.9, 6],
                          'op1': ['unbiased_gamma_sampling', 0.5, 1]},
          'sub_policy3': {'op0': ['adjust_brightness', 0.3, 1],
                          'op1': ['adjust_hue', 0.4, 5]},
          'sub_policy4': {'op0': ['adjust_saturation', 0.2, 9],
                          'op1': ['add_noise', 0.1, 0]},
          'sub_policy5': {'op0': ['adjust_contrast', 1.0, 1],
                          'op1': ['unbiased_gamma_sampling', 0.4, 9]},
          'sub_policy6': {'op0': ['unbiased_gamma_sampling', 0.3, 0],
                          'op1': ['adjust_hue', 0.1, 6]},
          'sub_policy7': {'op0': ['solarize', 0.6, 0],
                          'op1': ['adjust_gamma', 0.3, 6]},
          'sub_policy8': {'op0': ['adjust_jpeg_quality', 0.7, 10],
                          'op1': ['adjust_hue', 0.1, 2]},
          'sub_policy9': {'op0': ['equalize', 0.6, 0],
                          'op1': ['solarize', 0.0, 6]}}
```
Similar to Google's AutoAugment, a single augmentation policy consists of several subpolicies, each of which in turn consists of one or more augmentation operations. Each operation is defined as a tuple of augmentation method, probability, and intensity. All operations within one subpolicy are applied in sequence. The augmentation policy above would augment the original image to the following output:
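The selection mechanism behind such a policy can be sketched in plain Python (a simplified, hypothetical version assuming, as in AutoAugment, that one subpolicy is chosen at random per image; the real implementation works on `tf.Tensor` images with TensorFlow ops):

```python
import random


def apply_policy(image, policy, ops):
    """Pick one subpolicy at random and apply its operations in sequence.

    `policy` follows the dictionary layout above; `ops` maps operation
    names to callables taking (image, level). The probability decides
    whether an operation fires, the level (0-10) controls its intensity.
    """
    sub_policy = random.choice(list(policy.values()))
    for name, prob, level in sub_policy.values():
        if random.random() < prob:
            image = ops[name](image, level)
    return image
```

With a probability of 1.0 an operation is always applied, with 0.0 it is always skipped; everything in between makes the augmentation stochastic per image.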
A full example script for image classification can be found in `classification_example.py`. This excerpt demonstrates how simple it is to use the augmentation methods:
```python
import tensorflow as tf

from tfops_aug.augmentation_utils import apply_augmentation_policy


def augmentor_func(img, label):
    img = apply_augmentation_policy(img, policy)
    return img, label


train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    subset="training",
    image_size=(180, 180),
    batch_size=1
).unbatch()

train_dataset = train_dataset.map(augmentor_func).batch(32).prefetch(32)
```
A list of all implemented augmentation techniques is given here. Additional methods will be implemented in the near future. Performance is measured with `test_image.jpg`, which has a size of 2048 x 1024 pixels. All augmentation methods are executed with `level=5`, averaged over 500 samples on an Intel Core i7-8665U processor.
[1] Ekin Dogus Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan and Quoc V. Le. "AutoAugment: Learning Augmentation Policies from Data", 2019. https://arxiv.org/pdf/1805.09501.pdf
- More Augmentation Methods
- Shear X
- Shear Y
- Translate X
- Translate Y
- Random Translation
- Random Rotation
- Make augmentation min and max values configurable
- Implement Learning Pipeline
- Implement augmentation policies identical to those in [1]
- Implement augmentation policy search with Ray Tune
- Clean up Code (Unified Docstrings)
- Create Python package
- Support image representation types other than `uint8`