Assessing the representation of seen and unseen contents in human brains and deep artificial networks

  1. Mei, Ning
Supervised by:
  1. David Soto Blanco (Director)
  2. Roberto Santana Hermida (Director)
  3. Manuel Carreiras Valiña (Tutor)

Defence university: Universidad del País Vasco - Euskal Herriko Unibertsitatea

Date of defence: 26 November 2022

  1. Computer Science and Artificial Intelligence

Type: Thesis

Teseo: 773214 · DIALNET · ADDI


The functional scope of unconscious visual information processing and its implementation in the human brain remain highly contested issues in cognitive neuroscience. The influential global workspace and higher-order theories predict that unconscious visual processing is restricted to representations in the visual cortex, which are not read out further by frontoparietal areas. The present thesis employs fMRI and computational approaches to develop a high-precision, within-subject framework to define the properties of the brain representations of unconscious content associated with null perceptual sensitivity. Machine learning models were used to read out multivariate unconscious content from fMRI signals throughout the ventral visual pathway, and model-based representational similarity analysis examined the properties of both conscious and unconscious representations. Finally, feedforward convolutional neural network (FCNN) models were used to simulate the fMRI results, namely, to probe whether informative representations of visual objects with null perceptual sensitivity also arise in artificial networks. The results show that even when human observers display null perceptual sensitivity at the behavioral level, neural representations of unconscious content are widely distributed throughout the cortex: they are not confined to visual regions but extend to higher-order regions in the ventral visual pathway, parietal and even prefrontal areas. Computational simulations with different FCNN models trained to perform the same visual task with noisy images demonstrated that even when the FCNN models failed to classify the category of the noisy images, their hidden layers contained informative representations that allowed the image class to be decoded. The implications of these results for models of visual consciousness are discussed.
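The core decoding logic behind both the fMRI and the FCNN analyses can be illustrated with a toy simulation. The sketch below is not the thesis's actual pipeline; all parameters (feature counts, the 0.4 effect size, the single-unit threshold) are hypothetical stand-ins. It shows the qualitative phenomenon the abstract describes: a fixed single-unit readout (playing the role of the network's own failing classification of noisy images) performs near chance, while a linear decoder trained on the full "hidden-layer" activation pattern recovers the class label well above chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for hidden-layer activations evoked by very noisy
# images: a weak class signal buried in high-dimensional unit noise.
n_per_class, n_features = 200, 50
mu = np.zeros(n_features)
mu[1:11] = 0.4  # assumed effect size; signal lives in units 1-10, NOT unit 0

X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))        # class 0 trials
X1 = rng.normal(0.0, 1.0, (n_per_class, n_features)) + mu   # class 1 trials
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n_per_class)

# "Network output": a fixed threshold on a single uninformative unit,
# analogous to the model's own (failing) classification of noisy images.
output_acc = np.mean((X[:, 0] > 0.5).astype(int) == y)

# Linear decoder (nearest class mean) with a train/test split, analogous
# to decoding the image class from the hidden representation.
perm = rng.permutation(len(y))
train, test = perm[: len(y) // 2], perm[len(y) // 2 :]
m0 = X[train][y[train] == 0].mean(axis=0)
m1 = X[train][y[train] == 1].mean(axis=0)
d0 = ((X[test] - m0) ** 2).sum(axis=1)   # squared distance to class-0 mean
d1 = ((X[test] - m1) ** 2).sum(axis=1)   # squared distance to class-1 mean
decoder_acc = np.mean((d1 < d0).astype(int) == y[test])

print(f"single-unit 'output' accuracy:  {output_acc:.2f}")   # near chance
print(f"hidden-representation decoder: {decoder_acc:.2f}")   # above chance
```

The same dissociation, between a behavioral or output-level readout at chance and a multivariate decoder above chance, is what the thesis reports for human observers with null perceptual sensitivity and for FCNN models shown noisy images.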