TY - GEN

T1 - A Bayesian model of memory in a multi-context environment

AU - Kleinschmidt, David

AU - Hemmer, Pernille

N1 - Funding Information: This work was supported by National Science Foundation Grant 1453276 awarded to Pernille Hemmer. Publisher Copyright: © Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019. All rights reserved.

PY - 2019

Y1 - 2019

N2 - In a noisy but structured world, memory can be improved by enhancing limited stimulus-specific memory with statistical information about the context. To do this, people have to learn the statistical structure of their current environment. We present a Sequential Monte Carlo (particle filter) model of how people track the statistical properties of the environment across multiple contexts. This model approximates non-parametric Bayesian clustering of percepts over time, capturing how people impute structure in their perceptual experience in order to more efficiently encode that experience in memory. Each trial is treated as a draw from a context-specific distribution, where the number of contexts is unknown (and potentially infinite). The model maintains a finite set of hypotheses about how the percepts encountered thus far are assigned to contexts, updating these in parallel as each new percept comes in. We apply this model to a recall task where subjects had to recall the position of dots (Robbins, Hemmer, & Tang, 2014). Unbeknownst to subjects, each dot appeared in one of a few pre-defined regions on the screen. Our model captures subjects' ability to learn the inventory of contexts, the statistics of dot positions within each context, and the statistics of transitions between contexts, as reflected in both recall and prediction.

AB - In a noisy but structured world, memory can be improved by enhancing limited stimulus-specific memory with statistical information about the context. To do this, people have to learn the statistical structure of their current environment. We present a Sequential Monte Carlo (particle filter) model of how people track the statistical properties of the environment across multiple contexts. This model approximates non-parametric Bayesian clustering of percepts over time, capturing how people impute structure in their perceptual experience in order to more efficiently encode that experience in memory. Each trial is treated as a draw from a context-specific distribution, where the number of contexts is unknown (and potentially infinite). The model maintains a finite set of hypotheses about how the percepts encountered thus far are assigned to contexts, updating these in parallel as each new percept comes in. We apply this model to a recall task where subjects had to recall the position of dots (Robbins, Hemmer, & Tang, 2014). Unbeknownst to subjects, each dot appeared in one of a few pre-defined regions on the screen. Our model captures subjects' ability to learn the inventory of contexts, the statistics of dot positions within each context, and the statistics of transitions between contexts, as reflected in both recall and prediction.

KW - Bayesian modeling

KW - belief updating

KW - learning

KW - memory

UR - http://www.scopus.com/inward/record.url?scp=85139412887&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85139412887&partnerID=8YFLogxK

M3 - Conference contribution

T3 - Proceedings of the 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019

SP - 2024

EP - 2030

BT - Proceedings of the 41st Annual Meeting of the Cognitive Science Society

PB - The Cognitive Science Society

T2 - 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019

Y2 - 24 July 2019 through 27 July 2019

ER -