Synthetic18K: Learning better representations for person re-ID and attribute recognition from 1.4 million synthetic images
Journal Article
Abstract Learning robust representations is critical for the success of person re-identification and attribute recognition systems. However, achieving this requires a large dataset of diverse person images together with annotations of identity labels and/or a set of different attributes. Apart from obvious privacy concerns, the manual annotation process is both time-consuming and costly. In this paper, we instead propose to use synthetic person images to address these difficulties. Specifically, we first introduce Synthetic18K, a large-scale dataset of over 1 million computer-generated person images of 18K unique identities with relevant attributes. Moreover, we demonstrate that pretraining simple deep architectures on Synthetic18K for person re-identification and attribute recognition and then fine-tuning on real data leads to significant improvements in prediction performance, with results better than or comparable to state-of-the-art models.
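
The pretrain-then-fine-tune recipe summarized in the abstract can be sketched as follows. This is a minimal illustrative outline in PyTorch, not the paper's released code: the dataset paths, the ResNet-50 backbone, and all hyperparameters below are assumptions made for the example.

# Hypothetical sketch: pretrain a simple backbone on synthetic person images,
# then fine-tune on a real re-ID dataset. Paths and settings are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms, datasets

def make_model(num_ids):
    # Simple deep architecture: ResNet-50 with a new identity-classification head.
    net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, num_ids)
    return net

def train(net, loader, epochs, lr, device=None):
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    net.to(device).train()
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for imgs, labels in loader:
            imgs, labels = imgs.to(device), labels.to(device)
            opt.zero_grad()
            loss_fn(net(imgs), labels).backward()
            opt.step()
    return net

tfm = transforms.Compose([
    transforms.Resize((256, 128)),  # common person re-ID input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Stage 1: pretrain on the synthetic identities (hypothetical directory layout).
syn = datasets.ImageFolder("data/synthetic18k/train", transform=tfm)
net = make_model(num_ids=len(syn.classes))
net = train(net, DataLoader(syn, batch_size=64, shuffle=True), epochs=10, lr=0.01)

# Stage 2: swap the classifier head for the real dataset and fine-tune.
real = datasets.ImageFolder("data/real_reid/train", transform=tfm)
net.fc = nn.Linear(net.fc.in_features, len(real.classes))
net = train(net, DataLoader(real, batch_size=64, shuffle=True), epochs=10, lr=0.001)

The same two-stage scheme applies to attribute recognition by replacing the identity classifier with per-attribute heads and a suitable multi-label loss.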

BibTeX
@article{uner2021image,
title={Synthetic18K: Learning better representations for person re-ID and attribute recognition from 1.4 million synthetic images},
author={Onur Can Uner and Cem Aslan and Burak Ercan and Tayfun Ates and Ufuk Celikcan and Aykut Erdem and Erkut Erdem},
journal={Signal Processing: Image Communication},
year={2021}
}