AI-based approaches to reward prediction learning and scene recognition
Shin Ishii, Kyoto University
WIN Wednesday Seminar
Wednesday, 20 September 2023, 12pm to 1pm
Hybrid via Teams and in the Cowey Room, WIN Annexe
Hosted by Ben Seymour
Shin Ishii is currently a full professor at the Graduate School of Informatics, Kyoto University, Kyoto; director of the Neural Information Analysis Laboratories, ATR, Kyoto; and an affiliated faculty member of the International Research Center for Neurointelligence, the University of Tokyo, Tokyo.
Dr. Ishii is a specialist in machine learning, including reinforcement learning and (variational) inference, and is also interested in modeling animals’ decision making and inference. He is currently a co-PI of the Japanese brain initiative Brain/MINDS and leader of the Japanese national AI-based robotics project NEDO Cyborg-AI. In AI/machine learning, his co-authored conference papers have appeared at NeurIPS, ICML, CVPR, ICLR, etc., and in neuroscience his co-authored papers have appeared in Nature, Science, Neuron, PNAS, etc.
In this talk, I present a couple of our attempts to integrate artificial intelligence (AI) and animal intelligence (the brain).
First, I introduce our dichotomous learning hypothesis for the rodent basal ganglia circuit. Striatal dopamine type-1 receptor-expressing (D1) neurons are a well-known locus of reward-association, or reinforcement, learning. In 2020, Yagishita and his colleagues found that striatal dopamine type-2 receptor-expressing (D2) neurons follow a different learning scheme, called discrimination learning, in contrast to the generalization learning of D1 neurons. We proposed several computational models that reproduce this generalization-discrimination, or dichotomous, learning. Interestingly, when we apply some disturbance to the computational/AI circuit, it exhibits aberrant behaviors that might resemble the positive symptoms of psychosis.
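To make the generalization-discrimination contrast concrete, here is a toy illustration, not the speaker's actual model: a Rescorla-Wagner-style value learner over overlapping stimulus features. Training only on a rewarded stimulus lets value spread ("generalize") to a similar novel stimulus, while additional training on an unrewarded stimulus pulls value away from the shared features ("discrimination"). All stimuli, features, and parameters here are hypothetical.

```python
import numpy as np

# Stimuli as overlapping feature vectors (hypothetical).
cs_plus  = np.array([1.0, 1.0, 0.0])  # rewarded stimulus
novel    = np.array([1.0, 0.8, 0.2])  # similar, never-trained stimulus
cs_minus = np.array([0.0, 1.0, 1.0])  # unrewarded stimulus, shares a feature with CS+

alpha = 0.2  # learning rate

# "D1-like" generalization learning: Rescorla-Wagner delta rule on the
# rewarded stimulus only.  Value spreads to anything sharing its features.
w_gen = np.zeros(3)
for _ in range(100):
    w_gen = w_gen + alpha * (1.0 - w_gen @ cs_plus) * cs_plus

# "D2-like" discrimination learning: additionally learn that CS- is
# unrewarded, which removes value from the features CS- shares with CS+.
w_disc = w_gen.copy()
for _ in range(100):
    w_disc = w_disc + alpha * (0.0 - w_disc @ cs_minus) * cs_minus

print(round(float(w_gen @ novel), 2))     # high: value generalizes to the similar stimulus
print(round(float(w_disc @ cs_minus), 2)) # near zero: CS- is discriminated away
```

The point of the sketch is only the qualitative dichotomy: the same delta rule produces generalization when trained on one stimulus and discrimination when trained on the contrast between two.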
Next, I introduce our new AI-based image transformation technology, called GANSID, which transforms natural images into artificial images while leaving their saliency maps unchanged. When human participants viewed these artificial images, their fixation density maps agreed well with the saliency map defined computationally by Itti and colleagues. When we compared fMRI responses while participants looked at natural and artificial images, lower-level visual areas (V1/V2) and higher-level visual areas (V4, etc.) were activated, respectively. Since this technique seems effective for probing cortical involvement in visual processing in awake humans, we are performing a follow-up experiment that attempts to control the participants' prior state while they perform a recognition task on the currently presented image. I will also present model-based analyses of the fMRI data using a Bayesian scene recognition model.
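For orientation, a computational saliency map in the spirit of the Itti-Koch model rests on center-surround contrast. The sketch below is a deliberately simplified single-channel (intensity-only) version using box filters in place of Gaussian pyramids; it is not the GANSID pipeline or the exact model used in the study.

```python
import numpy as np

def box_blur(x, k):
    """Separable box blur of a 2-D array with an odd kernel width k."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, x)
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, out)
    return out

def intensity_saliency(img, center_k=3, surround_k=15):
    """Center-surround saliency: absolute difference between a fine-scale
    and a coarse-scale blur of the intensity image, normalized to [0, 1]."""
    center = box_blur(img, center_k)
    surround = box_blur(img, surround_k)
    sal = np.abs(center - surround)
    peak = sal.max()
    return sal / peak if peak > 0 else sal

# A small bright patch on a dark background pops out in the saliency map.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = intensity_saliency(img)
print(sal.shape)  # → (64, 64)
```

A full Itti-style model would repeat this across color and orientation channels and multiple scales, then sum the normalized conspicuity maps; the center-surround step above is the common core.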