WIN Wednesday Works In Progress

Neuroimaging of Human Reward and Social Behaviour

Presented by Tavish Traut

Abstract: How do people make economic, reward-trading decisions when interacting with social partners with varying preferences? How do neural reward systems integrate the distinct subjective reward values for self and social other into a trading decision variable that maximises both one’s own and the other’s economic utility? We measure neural activity using 3T fMRI while participants trade multicomponent rewards (i.e., ingestible liquid foods) with different social partners. We use formal economic approaches to quantify participants’ and partners’ preferences, evaluate the efficiency of participants’ trading behaviour, and use these variables as regressors for neural activity.
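To illustrate the kind of formal economic quantification the abstract refers to, here is a minimal, generic sketch — not the authors' actual analysis — using hypothetical Cobb-Douglas utility functions for two traders and a Pareto-efficiency check on a proposed reallocation of two reward components. All function names and parameter values are illustrative assumptions.

```python
# Hypothetical sketch: Cobb-Douglas utilities for two traders and a
# Pareto-improvement check for a proposed split of two reward components.
# Not the method from the talk; an illustration of the general idea only.

def utility(alpha, x, y):
    """Cobb-Douglas utility: weight alpha on component x, (1 - alpha) on y."""
    return (x ** alpha) * (y ** (1 - alpha))

def is_pareto_improvement(alpha_self, alpha_other, current, candidate):
    """True if the candidate allocation leaves neither trader worse off and
    at least one strictly better off.
    Allocations are ((x_self, y_self), (x_other, y_other))."""
    u_self_cur = utility(alpha_self, *current[0])
    u_other_cur = utility(alpha_other, *current[1])
    u_self_new = utility(alpha_self, *candidate[0])
    u_other_new = utility(alpha_other, *candidate[1])
    no_one_worse = u_self_new >= u_self_cur and u_other_new >= u_other_cur
    someone_better = u_self_new > u_self_cur or u_other_new > u_other_cur
    return no_one_worse and someone_better
```

With differing preferences (e.g. alpha_self = 0.8, alpha_other = 0.2), shifting each component toward the trader who values it more is a Pareto improvement over an equal split — the sense in which a trade can raise both traders' utility.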




WIN Wednesday Methods Series

Decoding MEG at scale

Presented by Oiwi Parker Jones 

Abstract: The past few years have seen remarkable advances in speech decoding from electrophysiological brain recordings. Two key factors have been increasingly large datasets and the deep learning models that leverage them. Yet even the largest available datasets have been limited by the field’s inability to train effective models across datasets or even, in many cases, between subjects. This has been a major obstacle to further advances in brain-computer interfaces (BCIs), as the field has been unable to make collective use of the many existing public datasets. In this talk, I describe recent work in my group (PNPL) that removes this obstacle, primarily through the use of unsupervised pretext learning. Results will be presented for two downstream decoding tasks in magnetoencephalography (MEG) data acquired while subjects listened to connected speech, demonstrating our current capacity to scale speech decoding over arbitrarily many subjects and datasets, with continued improvements in decoding performance.
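For readers unfamiliar with unsupervised pretext learning, here is a minimal, generic sketch of one common pretext task — masking random time segments of a multi-channel signal so a model can be trained to reconstruct them from context. This is an illustrative assumption about the general family of methods, not the PNPL approach from the talk; the array sizes and function names are hypothetical.

```python
# Hypothetical sketch of a masked-reconstruction pretext task on
# MEG-like multi-channel time series. Illustrative only; not the
# method presented in the talk.
import numpy as np

rng = np.random.default_rng(0)

def mask_segments(x, n_masks=4, width=10):
    """Zero out random contiguous time segments of a (channels, time) array.
    Returns the masked copy and a boolean mask over time points."""
    x_masked = x.copy()
    mask = np.zeros(x.shape[1], dtype=bool)
    for _ in range(n_masks):
        start = rng.integers(0, x.shape[1] - width)
        mask[start:start + width] = True
    x_masked[:, mask] = 0.0
    return x_masked, mask

def pretext_loss(recon, target, mask):
    """Mean squared reconstruction error over the masked time points only."""
    return float(np.mean((recon[:, mask] - target[:, mask]) ** 2))

# Toy "recording": 32 channels x 500 time points of white noise.
x = rng.standard_normal((32, 500))
x_masked, mask = mask_segments(x)
```

Because the pretext target is the signal itself, no labels are required, so a model can in principle be pretrained across many subjects and datasets before being fine-tuned on a labelled downstream decoding task.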