Optimal randomized multilevel Monte Carlo for repeatedly nested expectations

Yasa Syed, Guanyang Wang

Research output: Contribution to journal › Conference article › peer-review

Abstract

The estimation of repeatedly nested expectations is a challenging task that arises in many real-world systems. However, existing methods generally suffer from high computational costs when the number of nestings becomes large. Fix any non-negative integer D for the total number of nestings. Standard Monte Carlo methods typically cost at least O(ε^{-(2+D)}) and sometimes O(ε^{-2(1+D)}) to obtain an estimator up to ε error. More advanced methods, such as multilevel Monte Carlo, currently only exist for D = 1. In this paper, we propose a novel Monte Carlo estimator called READ, which stands for "Recursive Estimator for Arbitrary Depth." Our estimator has an optimal computational cost of O(ε^{-2}) for every fixed D under suitable assumptions, and a nearly optimal computational cost of O(ε^{-2(1+δ)}) for any 0 < δ < 1/2 under much more general assumptions. Our estimator is also unbiased, which makes it easy to parallelize. The key ingredients in our construction are an observation of the problem's recursive structure and the recursive use of the randomized multilevel Monte Carlo method.
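The randomized-MLMC ingredient mentioned above can be illustrated for a single nesting (D = 1). The sketch below is not the paper's READ estimator; it is a minimal Rhee–Glynn-style single-term estimator for E[f(E[Y | X])], with all names (`rmlmc_nested`, the level probability `p`, the inner-sample doubling schedule) chosen for illustration only.

```python
import random

def rmlmc_nested(f, sample_x, sample_y_given_x, p=0.35):
    """One unbiased randomized-MLMC draw for E[f(E[Y | X])] (depth D = 1).

    A random level N is drawn with P(N = n) = (1 - p) * p**n; the level-n
    antithetic correction (fine estimate minus the average of two coarse
    half-sample estimates) is divided by that probability, which makes the
    single draw unbiased for the nested expectation.
    """
    x = sample_x()
    n = 0
    while random.random() < p:          # geometric level: P(N = n) = (1-p) p^n
        n += 1
    prob_n = (1 - p) * p ** n
    m = 2 ** (n + 1)                    # inner sample size doubles per level
    ys = [sample_y_given_x(x) for _ in range(m)]
    fine = f(sum(ys) / m)
    if n == 0:
        delta = fine
    else:
        half = m // 2                   # coarse terms reuse the two halves
        coarse = 0.5 * (f(sum(ys[:half]) / half) + f(sum(ys[half:]) / half))
        delta = fine - coarse
    return delta / prob_n
```

Averaging many i.i.d. draws gives a consistent, unbiased estimate: for example, with X ~ Uniform(0, 1), Y | X ~ N(X, 1), and f(y) = y², the target is E[X²] = 1/3. The paper's contribution, on this reading, is to apply such a construction recursively so the cost stays near O(ε^{-2}) at every nesting depth D.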

Original language: American English
Pages (from-to): 33343-33364
Number of pages: 22
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23, 2023 - Jul 29, 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
