Evaluation and analysis of an approach to neural sensory fusion using spiking vision/audio sensors and convolutional neural networks
- Ríos Navarro, José Antonio
- Alejandro Linares Barranco (Director)
- Ángel Jiménez Fernández (Director)
- Gabriel Jiménez Moreno (Director)
Defense university: Universidad de Sevilla
Defense date: 19 July 2017
- Julio Abascal González (Chair)
- Saturnino Vicente Díaz (Secretary)
- Antonio Abad Civit Balcells (Committee member)
- Arturo Morgado Estévez (Committee member)
- Enrique Cabello Pardos (Committee member)
Type: Thesis
Abstract
This work aims to advance the knowledge of Deep Learning mechanisms and their possible hardware implementations, as well as the efficient use of sensory fusion with such mechanisms. First, an analysis and study is performed of current parallel programming approaches and of Deep Learning mechanisms for audiovisual sensory fusion using neuromorphic sensors on FPGA platforms. Based on these studies, a solution implemented in OpenCL, as well as dedicated hardware described in SystemVerilog, is proposed for the acceleration of Deep Learning algorithms, initially using a vision sensor as input; the results are analysed and compared. Next, an audio sensor is added and classical statistical mechanisms are proposed which, without providing learning capacity, allow the integration of information from both sensors; the results obtained are analysed together with their limitations. Finally, to provide the system with learning capacity, Deep Learning mechanisms, in particular CNNs, are used to fuse the audiovisual information and to train the model for a specific task. The performance and efficiency of these mechanisms are then evaluated, conclusions are drawn, and improvements are proposed as future work.
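The abstract describes fusing information from spiking vision and audio sensors at the feature level before classification. A minimal sketch of that idea, purely illustrative and not the thesis implementation: spike address events from each (hypothetical) sensor are aggregated into normalized histograms, concatenated into a single fused feature vector, and fed to an untrained linear classifier; all sizes and names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(spikes, n_bins):
    # Aggregate raw spike addresses into a normalized histogram (toy feature).
    hist, _ = np.histogram(spikes, bins=n_bins, range=(0, n_bins))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def fuse(vision_feat, audio_feat):
    # Feature-level fusion by simple concatenation of per-modality vectors.
    return np.concatenate([vision_feat, audio_feat])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy spike address events (stand-ins for DVS pixel / cochlea channel events)
vision_spikes = rng.integers(0, 16, size=200)   # 16 hypothetical pixel addresses
audio_spikes = rng.integers(0, 8, size=100)     # 8 hypothetical cochlea channels

fused = fuse(extract_features(vision_spikes, 16),
             extract_features(audio_spikes, 8))

# Hypothetical linear classifier over the fused vector (weights untrained)
W = rng.standard_normal((4, fused.size)) * 0.1  # 4 assumed output classes
probs = softmax(W @ fused)
print(fused.size)  # 24
```

In the thesis itself the fusion and classification stages are learned (a CNN), whereas here the classifier is a fixed random linear map; the sketch only shows where the two modality streams meet.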