Object bounding box annotations for the GTEA Gaze+ dataset

  1. Núñez-Marcos, Adrián (1)
  2. Azkune, Gorka (2)
  3. Arganda-Carreras, Ignacio (3)
  1. DeustoTech Institute, University of Deusto
  2. IXA NLP Group, University of the Basque Country
  3. Department of Computer Science and Artificial Intelligence, University of the Basque Country

Publisher: Zenodo

Year of publication: 2020

Type: Dataset

DOI: 10.5281/ZENODO.3949796 (open access)

Abstract

Object bounding box annotations for the GTEA Gaze+ dataset used in the works "Learning to recognize daily actions using gaze" (Fathi et al., 2012) and "Delving into Egocentric Actions" (Li et al., 2015). The dataset contains a folder for each subject; within each subject folder there is a folder per action, and, within each action folder, a folder for each video. The video folder is named <name of the original video>_<start frame>_<end frame>. Within this folder, a JSON file can be found for some of the frames. The JSON contains two keys: filename and objects. filename is the path to the image, and objects is a dictionary of objects. The keys of the dictionary are the objects present in the image, and each object's value is a list of bounding box coordinate lists. Each coordinate list is composed of the ymin, xmin, ymax and xmax values.
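To illustrate the annotation format described above, here is a minimal Python sketch that loads one of the per-frame JSON files and iterates over its bounding boxes. The file path and frame name are hypothetical placeholders, and the sketch assumes each object maps to a list of [ymin, xmin, ymax, xmax] boxes, as described in the abstract.

    import json
    from pathlib import Path

    # Hypothetical path following the layout
    # <subject>/<action>/<original video>_<start frame>_<end frame>/<frame>.json
    annotation_path = Path("S1/pour/video_000120_000480/frame_000150.json")

    with annotation_path.open() as f:
        annotation = json.load(f)

    image_path = annotation["filename"]   # path to the annotated frame
    objects = annotation["objects"]       # {object name: list of bounding boxes}

    for object_name, boxes in objects.items():
        # Assumption: each box is stored as [ymin, xmin, ymax, xmax]
        for ymin, xmin, ymax, xmax in boxes:
            width, height = xmax - xmin, ymax - ymin
            print(f"{object_name}: top-left ({xmin}, {ymin}), size {width} x {height}")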

References

  • Fathi et al. (2012). Learning to recognize daily actions using gaze. doi:10.1007/978-3-642-33718-5_23
  • Li et al. (2015). Delving into Egocentric Actions. doi:10.1109/cvpr.2015.7298625