Reminder: we have a bonus theory seminar by Alex Slivkins today
(03/06) in GCS 302C.
See below for details.
> Hi everyone,
>
> In addition to our regular Thursday noon theory seminar, we will have a
> bonus theory seminar this week on Friday, 03/06/2026, from 2:00-3:00pm. The
> talk will also be in our new home, GCS 302C.
> The speaker will be Alex Slivkins from Microsoft Research.
>
>
> Title: Bandit Social Learning: Exploration (Failures) under Exploitation
>
> Abstract: We consider social learning dynamics in which the agents
> collectively face a multi-armed bandit problem. Self-interested
> users/customers ("agents") choose among available alternatives ("arms")
> under uncertainty about their quality. The agents return feedback on their
> experiences, which is aggregated and served back to future agents. As
> individual agents are reluctant to explore for the sake of others, they may
> catastrophically fail to explore as a collective. The same concern underpins
> and motivates a huge, decades-long literature on algorithmic exploration in
> multi-armed bandits. Yet, these learning dynamics were surprisingly
> poorly understood until recently. We study how they play out, depending on
> the particularities of the agents' behavior and the underlying learning
> problem.
>
> Based on two papers: "Bandit Social Learning under Myopic Behavior"
> (NeurIPS'23; https://arxiv.org/abs/2302.07425) and "Greedy Algorithm for
> Structured Bandits" (NeurIPS'25; https://arxiv.org/abs/2503.04010).
>
> Speaker: Alex Slivkins
> (https://www.microsoft.com/en-us/research/people/slivkins/).
>
> Bio: Alex Slivkins is a Senior Principal Researcher at Microsoft Research NYC.
> Some time ago, he was a researcher at MSR Silicon
> Valley (now defunct), after receiving his Ph.D. from Cornell. His research
> interests span learning theory, algorithmic economics, and networks. He is
> particularly interested in the exploration-exploitation tradeoff and its
> manifestations in socioeconomic environments. His work has been recognized
> with the best paper award at ACM EC 2010, a best paper nomination at WWW
> 2015, and the best student paper award at ACM PODC 2005. His book,
> "Introduction to Multi-Armed Bandits", was published in 2019, with online
> revisions through 2024.
>
> --
> David Kempe <david....@gmail.com>
>
--
David Kempe <david....@gmail.com>