Leveraging Semantic Saliency Maps for Query-Specific Video Summarization
Journal Article
Abstract
The immense number of videos being uploaded to video sharing platforms makes it impossible for a person to watch all of them and understand what happens in them. Hence, machine learning techniques are now deployed to index videos by recognizing key objects, actions, and scenes or places. Summarization is an alternative, as it extracts only the important parts while covering the gist of the video content. Ideally, a user may prefer to analyze a certain action or scene by searching for a query term within the video. Current summarization methods generally do not take queries into account, or they require exhaustive data labeling. In this work, we present a weakly supervised query-focused video summarization method. Our approach uses semantic attributes as an indicator of query relevance and semantic attention maps to locate related regions in the frames, and combines both within a submodular maximization framework. We conducted experiments on the recently introduced RAD dataset and obtained highly competitive results. Moreover, to better evaluate the performance of our approach on longer videos, we collected a new dataset consisting of 10 YouTube videos annotated with multiple shot-level attributes. Our dataset enables a much more diverse set of queries that can be used to summarize a video from different perspectives, with more degrees of freedom.
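To give a feel for the submodular maximization framework mentioned above, the sketch below shows a generic greedy selection of shots under a budget, combining a facility-location coverage term with a modular query-relevance term. The objective, function names, and weighting here are illustrative assumptions for exposition only, not the exact formulation used in the paper.

```python
import numpy as np

def summarize(shot_features, query_scores, budget):
    """Greedily pick up to `budget` shots by maximizing a monotone submodular
    score: coverage of all shots plus query relevance of the selected ones.

    shot_features : (n, d) array of per-shot feature vectors (assumed given)
    query_scores  : (n,) array of per-shot query-relevance scores (assumed given)
    budget        : maximum number of shots in the summary
    """
    n = shot_features.shape[0]
    # pairwise similarities drive the facility-location (coverage) term
    sims = shot_features @ shot_features.T
    selected, covered = [], np.zeros(n)

    def objective(cov, sel):
        coverage = cov.sum()                      # how well every shot is represented
        relevance = query_scores[sel].sum() if sel else 0.0
        return coverage + relevance

    for _ in range(budget):
        current = objective(covered, selected)
        best_gain, best_i, best_cov = 0.0, None, None
        for i in range(n):
            if i in selected:
                continue
            new_cov = np.maximum(covered, sims[i])  # coverage if shot i were added
            gain = objective(new_cov, selected + [i]) - current
            if gain > best_gain:
                best_gain, best_i, best_cov = gain, i, new_cov
        if best_i is None:  # stop when no shot yields a positive marginal gain
            break
        selected.append(best_i)
        covered = best_cov
    return selected
```

Because both terms are monotone submodular, this greedy procedure enjoys the standard (1 - 1/e) approximation guarantee for cardinality-constrained maximization.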

BibTeX
@article{cizmeciler2022mtap,
  title   = {Leveraging Semantic Saliency Maps for Query-Specific Video Summarization},
  author  = {Kemal Cizmeciler and Erkut Erdem and Aykut Erdem},
  journal = {Multimedia Tools and Applications},
  year    = {2022},
  volume  = {in press}
}