A generative adversarial network based image transformation policy for visual self-supervision
Distribution Estimation to Automate Transformation Policies for Self-Supervision
In recent visual self-supervision works, a surrogate classification objective, called a pretext task, is established by assigning labels to transformed or augmented input images.
The goal of the pretext task can be to predict which transformations were applied to the image.
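A common instance of such a pretext task is rotation prediction: each image is rotated by a multiple of 90 degrees, and the rotation index serves as the pseudo-label the network must predict. The sketch below illustrates this label-assignment step on toy 2D arrays; `rotate90` and `make_pretext_pair` are illustrative names, not from the paper.

```python
import random

def rotate90(img):
    """Rotate a 2D list (H x W) by 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def make_pretext_pair(img, num_rotations=4):
    """Assign a pseudo-label for the pretext task: apply k
    quarter-turns to the image and label the pair with k.
    The self-supervised objective is then to predict k."""
    k = random.randrange(num_rotations)
    out = img
    for _ in range(k):
        out = rotate90(out)
    return out, k
```

In practice the transformed image is fed to a classifier trained with cross-entropy on the pseudo-label `k`; no human annotation is needed.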
However, it has been observed that image transformations already present in the dataset may be less effective for learning such self-supervised representations.
To address this, we propose a generative adversarial network based framework that automates the transformation policy. This automated policy allows us to estimate the transformation distribution of a dataset and to construct its complementary distribution, from which training pairs are sampled for the pretext task.
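The complementary-distribution idea can be illustrated with a toy discrete version: estimate how often each transformation parameter already occurs in the data, then reweight sampling toward under-represented parameters. The paper's actual estimator is GAN-based; the empirical-counting scheme and all function names below are simplifying assumptions for illustration.

```python
import random
from collections import Counter

def estimate_distribution(observed, bins):
    """Empirical distribution of transformation parameters
    observed in the dataset, over a fixed set of bins."""
    counts = Counter(observed)
    total = len(observed)
    return [counts.get(b, 0) / total for b in bins]

def complementary_distribution(p):
    """Favor transformations under-represented in the dataset:
    weight each bin by (1 - p_i), then renormalize."""
    weights = [1.0 - pi for pi in p]
    z = sum(weights)
    return [w / z for w in weights]

def sample_transformation(bins, q, rng=random):
    """Draw a transformation parameter from the complementary
    distribution q to build a pretext training pair."""
    return rng.choices(bins, weights=q, k=1)[0]
```

Transformations that rarely appear in the dataset receive the highest complementary probability, so the pretext task trains mostly on transformations the data does not already exhibit.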
We evaluated our framework on several visual recognition datasets to demonstrate the efficacy of our automated transformation policy.
Authors
Seunghan Yang, Debasmit Das, Simyung Chang, Sungrack Yun, Fatih Porikli