Using Recurrent Neural Networks to Understand Human Reward Learning

Mingyu Song, Yael Niv, Ming Bo Cai

Research output: Contribution to conference › Paper › peer-review


Computational models are highly useful in cognitive science for revealing the mechanisms of learning and decision making. However, it is hard to know whether all meaningful variance in behavior has been accounted for by the best-fit model selected through model comparison. In this work, we propose to use recurrent neural networks (RNNs) to assess the limits of predictability afforded by a model of behavior, and to reveal what (if anything) is missing from the cognitive models. We apply this approach to a complex reward-learning task with a large choice space and rich individual variability. The RNN models outperform the best known cognitive model throughout the entire learning phase. By analyzing and comparing model predictions, we show that the RNN models are more accurate at capturing the temporal dependency between subsequent choices, and better at identifying the subspace of choices in which participants’ behavior is more likely to reside. The RNNs can also capture individual differences across participants by utilizing an embedding. The usefulness of this approach suggests promising applications of RNNs for predicting human behavior in complex cognitive tasks, in order to reveal cognitive mechanisms and their variability.
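To make the approach concrete, the following is a minimal sketch of how an RNN can predict a participant's next choice from their choice-and-reward history while conditioning on a per-participant embedding. All dimensions, the vanilla-RNN architecture, and the random (untrained) parameters are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, assumed for illustration only.
N_CHOICES = 6       # size of the choice space
N_PARTICIPANTS = 4
EMBED_DIM = 3       # participant embedding capturing individual differences
HIDDEN = 16

# Randomly initialised parameters of a vanilla RNN (a sketch; the paper's
# architecture and training procedure may differ).
W_in = rng.normal(0, 0.1, (HIDDEN, N_CHOICES + 1 + EMBED_DIM))
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (N_CHOICES, HIDDEN))
embeddings = rng.normal(0, 0.1, (N_PARTICIPANTS, EMBED_DIM))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_choices(choices, rewards, participant):
    """Return predicted next-choice probabilities after each trial."""
    h = np.zeros(HIDDEN)
    emb = embeddings[participant]  # individual-differences embedding
    probs = []
    for c, r in zip(choices, rewards):
        one_hot = np.zeros(N_CHOICES)
        one_hot[c] = 1.0
        # Input: previous choice (one-hot), its reward, participant embedding.
        x = np.concatenate([one_hot, [r], emb])
        h = np.tanh(W_in @ x + W_h @ h)
        probs.append(softmax(W_out @ h))  # prediction for the next trial
    return np.array(probs)

# Example: one participant's short trial sequence.
p = predict_choices(choices=[0, 2, 2, 5], rewards=[1.0, 0.0, 1.0, 0.0],
                    participant=1)
```

In practice the weights (and embeddings) would be trained to maximize the likelihood of participants' observed choices, and the model's predictive accuracy compared against the best-fitting cognitive model, as the abstract describes.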

Original language: American English
Number of pages: 7
State: Published - 2021
Event: 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 - Virtual, Online, Austria
Duration: Jul 26 2021 - Jul 29 2021


Conference: 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021
City: Virtual, Online

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence
  • Computer Science Applications
  • Human-Computer Interaction


Keywords

  • model comparison
  • probabilistic reward learning
  • recurrent neural network
  • sequential decision making
