Hi all,
My new paper with Longjian Li (NYU) and Tianling Luo (Columbia) may be of interest to you:
Abstract: We study how a decision-maker (DM) learns from data of unknown quality to form
robust, “general-purpose” posterior beliefs. We develop a framework for robust learning
and belief formation under a minimax-regret criterion, cast as a zero-sum game: the
DM chooses posterior beliefs to minimize ex-ante regret, while an adversarial Nature
selects the data-generating process (DGP). We show that, in large samples of n signal
draws, Nature optimally induces ambiguity by choosing a process whose precision
converges to the uninformative signals at the rate 1/
√
n. As a result, learning against
the adversarial DGP is nontrivial as well as incomplete: the DM’s ex-ante regret
remains strictly positive even with an infinite amount of data. However, when the
true DGP is fixed and informative (even if only slightly), our DM with a robust
updating rule eventually learns the state with enough data. Still, learning occurs at a
sub-exponential rate—quantifying the asymptotic price of robustness—and it exhibits
“under-inference” bias. Our framework provides a decision-theoretic dual to the local
alternatives method in asymptotic statistics, deriving the characteristic 1/√n-scaling
endogenously from the signal ambiguity.
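For readers who want a quick numerical feel for the 1/√n scaling, here is a minimal simulation sketch (my own illustration, not code from the paper, and using a standard Bayesian update rather than the paper's robust rule): with n Gaussian signals of mean θ·μ, a fixed informative μ > 0 drives the posterior on the true state to certainty as n grows, while the local drift μ = c/√n keeps it bounded away from certainty, so learning remains incomplete.

```python
import math
import random

def posterior_true_state(n, mu, rng):
    """Posterior probability of the true state theta = 1 after n signals.

    Signals are X_i ~ N(mu, 1) under theta = 1 and N(0, 1) under theta = 0.
    With a flat prior, the log-likelihood ratio of theta = 1 vs theta = 0 is
        LLR = mu * sum(x_i) - n * mu**2 / 2.
    """
    s = sum(rng.gauss(mu, 1.0) for _ in range(n))
    llr = mu * s - n * mu ** 2 / 2
    return 1.0 / (1.0 + math.exp(-llr))

rng = random.Random(0)
reps = 200
for n in (100, 10_000):
    # Local ("adversarial-style") drift: precision shrinks at rate 1/sqrt(n),
    # so the total informativeness of the sample stays O(1).
    local = sum(posterior_true_state(n, 1.0 / math.sqrt(n), rng)
                for _ in range(reps)) / reps
    # Fixed, slightly informative DGP: informativeness grows linearly in n.
    fixed = sum(posterior_true_state(n, 0.2, rng)
                for _ in range(reps)) / reps
    print(f"n={n}: mean posterior (local drift) = {local:.3f}, "
          f"(fixed mu) = {fixed:.3f}")
```

As n grows, the fixed-μ posterior approaches 1 while the local-drift posterior hovers at roughly the same interior value at every n, mirroring the abstract's point that against a 1/√n-drifting DGP the data never fully resolve the state.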
--
_________________
Yeon-Koo Che
Kelvin J. Lancaster Professor of Economic Theory
420 W. 118th Street, IAB 1029
Columbia University