Computationally efficient deformable 3D object tracking with a monocular RGB camera

  1. Goenetxea Imaz, Jon
Supervisors:
  1. Fadi Dornaika
  2. Ignacio Arganda-Carreras
  3. Luis Unzueta Irurtia
  4. Blanca Rosa Cases Gutiérrez

Defense university: Universidad del País Vasco - Euskal Herriko Unibertsitatea

Date of defense: 17 December 2020

Examination committee:
  1. María del Carmen Hernández Gómez (Chair)
  2. Oihana Otaegui Madurga (Secretary)
  3. Franck Davone (Member)

Department:
  1. Computer Science and Artificial Intelligence

Type: Thesis

Teseo: 153454 | DIALNET | ADDI

Abstract

Monocular RGB cameras are present in most settings and devices, including embedded environments such as robots, cars and home automation systems. Most of these environments share a significant presence of human operators with whom the system has to interact. This context motivates using the captured monocular images to improve the understanding of the operator and the surrounding scene, enabling more accurate results and applications. However, monocular images lack depth information, which is a crucial element for understanding the 3D scene correctly. Estimating the three-dimensional information of an object in the scene from a single two-dimensional image is already a challenge. The challenge grows when the object is deformable (e.g., a human body or a human face) and its movements and interactions in the scene need to be tracked.

Several methods attempt to solve this task, including modern regression methods based on Deep Neural Networks. However, despite their strong results, most are computationally demanding and therefore unsuitable for many environments. Computational efficiency is a critical feature for computationally constrained setups such as the embedded or onboard systems found in robotics and automotive applications, among others.

This study proposes computationally efficient methodologies to reconstruct and track three-dimensional deformable objects, such as human faces and human bodies, using a single monocular RGB camera. To model the deformability of faces and bodies, it considers two types of deformations: non-rigid deformations for face tracking, and rigid multi-body deformations for body pose tracking. Furthermore, it studies their performance on computationally restricted devices such as smartphones and the onboard systems used in the automotive industry. The information extracted from such devices gives valuable insight into human behaviour, a crucial element in improving human-machine interaction.

We tested the proposed approaches in different challenging application fields, such as onboard driver monitoring systems, human behaviour analysis from monocular videos, and human face tracking on embedded devices.
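As an illustration of the kind of model-based fitting described above, the sketch below fits a linear deformable 3D shape model (a mean shape plus a deformation basis) to 2D landmarks seen by a single pinhole camera, jointly refining the rigid pose and the non-rigid deformation coefficients. This is not the thesis implementation: the model size, camera intrinsics, and data are synthetic, illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation): fit a linear deformable 3D
# shape model (mean shape + deformation basis) to 2D landmarks from a single
# RGB camera by jointly estimating rigid pose and deformation coefficients.
# All model dimensions and data here are synthetic, illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

N_POINTS, N_MODES = 10, 3                      # hypothetical model size
rng = np.random.default_rng(0)
MEAN = rng.normal(size=(N_POINTS, 3))          # mean 3D shape
BASIS = 0.1 * rng.normal(size=(N_MODES, N_POINTS, 3))  # deformation modes
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

def project(params):
    """Deform, rotate, translate and project the model with a pinhole camera."""
    rvec, tvec, alpha = params[:3], params[3:6], params[6:]
    shape = MEAN + np.tensordot(alpha, BASIS, axes=1)              # non-rigid part
    cam = shape @ Rotation.from_rotvec(rvec).as_matrix().T + tvec  # rigid part
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]                                # perspective divide

def residuals(params, observed_2d):
    """2D reprojection error stacked into a flat residual vector."""
    return (project(params) - observed_2d).ravel()

# Synthetic ground truth and noisy 2D observations.
gt = np.concatenate([[0.1, -0.2, 0.05], [0.0, 0.0, 5.0], [0.5, -0.3, 0.2]])
obs = project(gt) + rng.normal(scale=0.5, size=(N_POINTS, 2))

# Least-squares refinement from a neutral initial guess.
x0 = np.concatenate([np.zeros(3), [0.0, 0.0, 4.0], np.zeros(N_MODES)])
fit = least_squares(residuals, x0, args=(obs,), method="lm")
print("estimated pose + deformation coefficients:", np.round(fit.x, 3))
```

The same least-squares structure can, in principle, be adapted to the rigid multi-body case used for body pose tracking, with joint angles of a kinematic chain taking the place of the deformation coefficients.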