This is the PyTorch implementation of our paper:
Learning to Generate Grounded Visual Captions without Localization Supervision
Chih-Yao Ma, Yannis Kalantidis, Ghassan AlRegib, Peter Vajda, Marcus Rohrbach, and Zsolt Kira
European Conference on Computer Vision (ECCV), 2020
Clone the repo recursively:
git clone --recursive [email protected]:chihyaoma/cyclical-visual-captioning.git
If you didn't clone with the --recursive flag, you'll need to manually initialize the pybind submodule from the top-level directory:
git submodule update --init --recursive
The proposed cyclical method can be applied directly to both image and video captioning tasks; a minimal sketch of the training cycle is given below.
Currently, the installation guide and our code for video captioning on the ActivityNet-Entities dataset are provided in anet-video-captioning.
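For intuition only, here is a minimal, self-contained PyTorch sketch of the cyclical idea (decoding, then localization, then reconstruction). This is not the repository's actual model or API: the class name, dimensions, and the single shared LSTM standing in for both decoding and reconstruction stages are all hypothetical simplifications.

```python
# Minimal sketch of the cyclical training idea, assuming pre-extracted
# region features and ground-truth captions as inputs. All names here
# are hypothetical; see this repository for the actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CyclicalCaptioner(nn.Module):
    def __init__(self, region_dim=2048, embed_dim=512, vocab_size=10000):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.region_proj = nn.Linear(region_dim, embed_dim)
        # One LSTM stands in for both the decoding and the
        # reconstruction stages (a simplification for this sketch).
        self.decoder = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, vocab_size)

    def localize(self, word_states, regions):
        # Attend each decoded word to the region features; the attended
        # (localized) regions are what the reconstruction stage consumes.
        regions = self.region_proj(regions)                        # (B, R, D)
        scores = torch.bmm(word_states, regions.transpose(1, 2))   # (B, T, R)
        attn = F.softmax(scores, dim=-1)
        return torch.bmm(attn, regions)                            # (B, T, D)

    def forward(self, regions, captions):
        # regions: (B, R, region_dim) region features; captions: (B, T) indices
        emb = self.word_embed(captions)
        h_dec, _ = self.decoder(emb)              # 1) decoding
        grounded = self.localize(h_dec, regions)  # 2) localization
        h_rec, _ = self.decoder(grounded)         # 3) reconstruction
        return self.out(h_dec), self.out(h_rec)

# Both stages are trained with the usual captioning cross-entropy. Because
# reconstruction can only succeed if the localizer attends to the right
# regions, grounding improves without any bounding-box supervision.
model = CyclicalCaptioner()
regions = torch.randn(2, 36, 2048)            # e.g., 36 regions per image
captions = torch.randint(0, 10000, (2, 12))   # toy caption indices
logits_dec, logits_rec = model(regions, captions)
```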
Chih-Yao Ma and Zsolt Kira were partly supported by DARPA’s Lifelong Learning Machines (L2M) program, under Cooperative Agreement HR0011-18-2-0019, as part of their affiliation with Georgia Tech. We thank Chia-Jung Hsu for her valuable and artistic help with the figures.
If you find this repository useful, please cite our paper:
@inproceedings{ma2020learning,
  title={Learning to Generate Grounded Visual Captions without Localization Supervision},
  author={Ma, Chih-Yao and Kalantidis, Yannis and AlRegib, Ghassan and Vajda, Peter and Rohrbach, Marcus and Kira, Zsolt},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020},
  url={https://arxiv.org/abs/1906.00283},
}