
Robust inter-subject decoding of audiovisual cinematic features – a functional magnetic resonance imaging study


Dear All,

 

The Faculty of Cognitive Psychology, ELTE, is pleased to invite you all to the upcoming lecture by

 

Gal Raz (School of Film and Television, Tel Aviv University, Israel; ResearchGate webpage)

entitled

 

Robust inter-subject decoding of audiovisual cinematic features – a functional magnetic resonance imaging study

 

 

date: 7th February 2019, 14:00

place: room 403, Institute of Psychology, ELTE, 46 Izabella Street, Budapest 1064

 

Abstract:

 

A line of functional magnetic resonance imaging (fMRI) studies of cinematic experience by Uri Hasson and others has consistently pointed to a remarkable similarity across viewers in their neural responses to the same film. Another strand of research has employed naturalistic audiovisual content in the context of neural decoding, that is, the reconstruction of perceived features or mental content from neuroimaging data. Such “brain reading” has been demonstrated for mental states including action intentions, reward assessment, and response inhibition; for low-level features such as visual patterns in dynamic video, geometrical patterns, text, and optical-flow acceleration in a video game; and for semantic elements such as animal and object categories, visual imagery content during sleep, and actions and events in a video game. However, most of the prominent neural decoding achievements were obtained with within-subject designs including only five subjects or fewer (sometimes the authors themselves), which has limited the examination of the reproducibility of the results.

 

In my talk, I will present a recent fMRI study in which my colleagues and I demonstrated successful and robust inter-subject decoding of various audiovisual features. Employing a machine learning approach based on kernel ridge regression, we trained our algorithm on a data set of 234 fMRI scans and tested it on two separate samples: 63 scans acquired during the viewing of 9 different movies, and 93 scans acquired under a music-listening condition. Finally, I will discuss the potential of individual “brain readability” (the accuracy with which a given audiovisual feature can be decoded from a person’s brain) as a possible biomarker.
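For readers curious what such a decoder looks like in practice, below is a minimal sketch in Python using scikit-learn's KernelRidge on synthetic data. It is not the authors' pipeline: the real study involves extensive fMRI preprocessing, high-dimensional feature extraction, and cross-validated model selection, and all names, dimensions, and noise levels here are illustrative assumptions only.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed fMRI data: time points pooled across
# "training subjects" (rows) described by voxel activation vectors (columns).
n_train, n_test, n_voxels = 2000, 500, 1000

# A hypothetical continuous audiovisual feature (e.g. an auditory-envelope-like
# regressor) that is partially encoded, with noise, in the voxel patterns.
weights = rng.normal(size=n_voxels)
X_train = rng.normal(size=(n_train, n_voxels))
X_test = rng.normal(size=(n_test, n_voxels))
y_train = X_train @ weights + rng.normal(scale=5.0, size=n_train)
y_test = X_test @ weights + rng.normal(scale=5.0, size=n_test)

# Kernel ridge regression: ridge-penalised regression in a kernel-induced
# feature space; a linear kernel is used here for simplicity.
model = KernelRidge(kernel="linear", alpha=1.0)
model.fit(X_train, y_train)

# Inter-subject evaluation: predict the feature time course for data from
# held-out "test subjects" and score it against the true time course with
# Pearson correlation, a common decoding-accuracy measure.
y_pred = model.predict(X_test)
r, p = pearsonr(y_test, y_pred)
print(f"decoding accuracy (Pearson r): {r:.3f} (p = {p:.2g})")
```

In the inter-subject setting, the key design choice is that training and test data come from different people, so the decoder must exploit response components that are shared across brains rather than idiosyncratic to one subject.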

 

Background reading:

Raz, Gal, Michele Svanera, Neomi Singer, Gadi Gilam, Maya Bleich Cohen, Tamar Lin, Roee Admon, et al. “Robust inter-subject audiovisual decoding in functional magnetic resonance imaging using high-dimensional regression.” NeuroImage 163 (2017): 244–263.

 

Hope to see you there!