A distributed, hierarchical and recurrent framework for reward-based choice.
Hunt LT, Hayden BY.
Many accounts of reward-based choice argue for distinct component processes that are serial and functionally localized. In this Opinion article, we argue for an alternative viewpoint, in which choices emerge from repeated computations that are distributed across many brain regions. We emphasize how several features of neuroanatomy may support the implementation of choice, including mutual inhibition in recurrent neural networks and the hierarchical organization of timescales for information processing across the cortex. This account also suggests that certain correlates of value are emergent rather than represented explicitly in the brain.
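The mutual-inhibition mechanism mentioned above can be illustrated with a minimal two-unit rate model. This is only a sketch of the general idea, not the authors' model; the function name, parameters, and values below are illustrative assumptions:

```python
# Minimal mutual-inhibition rate model of value-based choice (sketch).
# Two units accumulate evidence for two options; each inhibits the
# other, so a small value difference is amplified into a
# winner-take-all decision. All parameter values are illustrative.

def choose(value_a, value_b, steps=2000, dt=0.01,
           leak=1.0, inhibition=2.0, threshold=1.5):
    """Integrate two mutually inhibiting units; return 0 or 1 for the
    option whose activity first crosses threshold, or None if neither
    does within `steps` iterations."""
    a = b = 0.0
    for _ in range(steps):
        # Each unit is driven by its option's value, decays via a leak
        # term, and is suppressed in proportion to its rival's activity.
        da = value_a - leak * a - inhibition * b
        db = value_b - leak * b - inhibition * a
        a = max(0.0, a + dt * da)  # firing rates cannot go negative
        b = max(0.0, b + dt * db)
        if a >= threshold:
            return 0
        if b >= threshold:
            return 1
    return None
```

In such a network the choice is not read out from an explicit comparator; it emerges from the competitive dynamics themselves, which is in the spirit of the article's claim that some value correlates are emergent rather than explicitly represented.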