I am a first-year PhD student at QUvA Lab, University of Amsterdam (UvA). QUvA Lab is a collaboration between Qualcomm and UvA, directed by Max Welling, Arnold Smeulders, and Cees Snoek. My research interests lie at the intersection of vision and language. Specifically, I am interested in developing visual representations that can be used for image captioning, visual question answering, and visual dialogue systems. For more details about me, please visit: https://kilickaya.github.io/
Understanding Images and Visualizing Text: Semantic Inference and Retrieval by Integrating Computer Vision and Natural Language Processing (as Student)
Data-Driven Image Captioning via Salient Region Discovery
IET Computer Vision
Mert Kilickaya, Burak Kerim Akkus, Ruket Cakici, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis

Re-evaluating Automatic Metrics for Image Captioning
The 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)
Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, Erkut Erdem

Leveraging Captions in the Wild to Improve Object Detection
The 5th Workshop on Vision and Language (VL'16), in conjunction with ACL 2016
Mert Kilickaya, Nazli Ikizler-Cinbis, Erkut Erdem, Aykut Erdem

Data-driven Image Captioning with Meta-class Based Retrieval (Meta-sınıf Tabanlı Getirme ile Veriye Dayalı İmge Altyazılama)
22nd IEEE Signal Processing and Communications Applications Conference (SIU 2014), Trabzon, April 2014
Mert Kilickaya, Erkut Erdem, Aykut Erdem, Nazli Ikizler-Cinbis, Ruket Cakici