Dynamic distortion of inferred reward probability shapes choice over time

This paper is a preprint and has not been certified by peer review.

Authors

Grabenhorst, M.; Maloney, L. T.

Abstract

Many choices are triggered by discrete events whose timing determines which options are rewarded. Without informative sensory evidence between events, behavior must rely on internal estimates of latent variables - most notably elapsed time and reward probability. Existing computational frameworks, including evidence-accumulation models, are not designed for this regime, leaving the principles of time-dependent choice unresolved. Here, we formalize choice as an inference problem governed by uncertainty about both elapsed time and reward over time. Participants learned dynamic reward probabilities to guide choices. Behavior approached optimality but exhibited a systematic distortion of inferred reward probability over time, captured by a linear transformation in log-odds space. Crucially, temporal uncertainty was modulated by reward probability but not by elapsed time itself, contradicting Weber-law scaling. These results identify two interacting computational principles - dynamic mapping of reward probability to choice and reward-based temporal precision - that jointly shape behavior when time and reward must be inferred.
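The distortion described in the abstract, a linear transformation of inferred reward probability in log-odds space, can be sketched as follows. This is an illustrative implementation, not the authors' code: the parameter names `gamma` (slope) and `delta` (intercept) are assumptions, chosen to match the common linear-in-log-odds form where `gamma = 1, delta = 0` recovers the undistorted probability.

```python
import math

def logit(p):
    """Map a probability in (0, 1) to log-odds."""
    return math.log(p / (1.0 - p))

def inv_logit(x):
    """Map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def distort(p, gamma, delta):
    """Linear transformation in log-odds space:
    logit(pi) = gamma * logit(p) + delta.
    gamma and delta are hypothetical parameter names for the
    slope and intercept of the distortion."""
    return inv_logit(gamma * logit(p) + delta)

# gamma < 1 compresses probabilities toward the indifference
# point, a pattern often reported for inferred probabilities.
print(distort(0.5, 1.0, 0.0))  # identity: 0.5 stays 0.5
print(distort(0.9, 0.5, 0.0))  # compressed toward 0.5: 0.75
```

With `gamma < 1`, extreme probabilities are pulled toward the middle; with `gamma > 1`, they are pushed outward, so a single slope-and-intercept pair summarizes how the mapping from inferred probability to choice deviates from optimality.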
