Daily TMLR digest for Nov 05, 2022

TMLR

Nov 4, 2022, 8:00:11 PM
to tmlr-anno...@googlegroups.com

New submissions
===============


Title: A Unified Linear Speedup Analysis of Stochastic and Nesterov Federated Averaging

Abstract: Federated learning (FL) learns a model jointly across a set of participating devices without
any device sharing its privately held data. The characteristics of non-i.i.d. data across
the network, low device participation, high communication costs, and the mandate that
data remain private make it challenging to understand the convergence of FL algorithms,
particularly regarding how convergence scales with the number of participating devices. In
this paper, we focus on Federated Averaging (FedAvg), one of the most popular and effective
FL algorithms in use today, and conduct a systematic study of how its convergence scales with
the number of participating devices under non-i.i.d. data and partial participation in convex
settings. We provide a unified analysis that establishes convergence guarantees for FedAvg
in the strongly convex, convex, and overparameterized strongly convex regimes. We show
that FedAvg enjoys linear speedup in each case, although with different convergence rates and
communication efficiencies. For strongly convex and convex problems, we also characterize
the corresponding convergence rates of the Nesterov accelerated FedAvg algorithm; these
are the first linear speedup guarantees for momentum variants of FedAvg in convex settings.
Empirical studies of the algorithms in various settings support our theoretical results.

URL: https://openreview.net/forum?id=rlg07ZFnzA
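The FedAvg scheme the abstract studies (a few local gradient steps on a sampled subset of devices, then server-side averaging) can be sketched on a toy convex problem. This is a minimal illustration under assumed quadratic local objectives, with hypothetical names and constants throughout; it is not the paper's code.

```python
import numpy as np

# Toy setup (illustrative assumption): device i holds the local objective
# f_i(w) = 0.5 * ||w - c_i||^2, so the global optimum is the mean of the c_i.
rng = np.random.default_rng(0)
num_devices, dim = 10, 5
centers = rng.normal(size=(num_devices, dim))  # non-i.i.d. local optima

def local_sgd(w, center, steps=5, lr=0.1):
    """Run a few local gradient steps on f_i(w) = 0.5 * ||w - center||^2."""
    for _ in range(steps):
        w = w - lr * (w - center)  # exact gradient of the local quadratic
    return w

w = np.full(dim, 10.0)  # deliberately far initial model
for _ in range(100):
    # Partial participation: only a random subset of devices joins each round.
    participants = rng.choice(num_devices, size=5, replace=False)
    updates = [local_sgd(w, centers[i]) for i in participants]
    w = np.mean(updates, axis=0)  # server averages the returned local models

# Distance to the global optimum (mean of all device centers).
final_dist = np.linalg.norm(w - centers.mean(axis=0))
print(final_dist)
```

With full participation the iterate would converge to the exact optimum; under partial participation it settles into a small neighborhood of it, which is the regime the convergence analysis addresses.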

---

Title: Bandits Corrupted by Nature: Lower Bounds on Regret and Robust Optimistic Algorithms

Abstract: We study the corrupted bandit problem, i.e., a stochastic multi-armed bandit problem with $k$ unknown reward distributions that are heavy-tailed and corrupted by a history-independent adversary, or Nature. To be specific, the reward obtained by playing an arm comes from the corresponding heavy-tailed reward distribution with probability $1-\varepsilon \in (0.5,1]$ and from an arbitrary corruption distribution of unbounded support with probability $\varepsilon \in [0,0.5)$.
First, we provide \textit{a problem-dependent lower bound on the regret} of any corrupted bandit algorithm. This lower bound indicates that the corrupted bandit problem is harder than the classical stochastic bandit problem with sub-Gaussian or heavy-tailed rewards.
Following that, we propose a novel UCB-type algorithm for corrupted bandits, namely \texttt{HubUCB}, that builds on Huber's estimator for robust mean estimation. Leveraging a novel concentration inequality for Huber's estimator, we prove that \texttt{HubUCB} achieves a near-optimal regret upper bound.
Since computing Huber's estimator has quadratic complexity, we further introduce a sequential version of Huber's estimator that exhibits linear complexity. We leverage this sequential estimator to design \texttt{SeqHubUCB}, which enjoys similar regret guarantees while reducing the computational burden.
Finally, we experimentally illustrate the efficiency of \texttt{HubUCB} and \texttt{SeqHubUCB} in solving corrupted bandits for different reward distributions and different levels of corruption.

URL: https://openreview.net/forum?id=oGIR0ic3jU
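The robust mean estimation that HubUCB builds on can be illustrated with a small sketch of Huber's M-estimator of location. The fixed-point iteration, the parameter choices, and all names here are illustrative assumptions, not the authors' implementation or their sequential linear-time variant.

```python
import numpy as np

def huber_psi(u, beta):
    """Huber's influence function: identity near 0, clipped at +/- beta."""
    return np.clip(u, -beta, beta)

def huber_mean(x, beta=1.0, tol=1e-8, max_iter=100):
    """Solve mean_i psi(x_i - mu) = 0 by fixed-point iteration (a sketch;
    outliers contribute at most +/- beta each, bounding their influence)."""
    mu = np.median(x)  # robust starting point
    for _ in range(max_iter):
        step = huber_psi(x - mu, beta).mean()
        mu += step
        if abs(step) < tol:
            break
    return mu

# Corrupted sample in the spirit of the abstract: mostly clean rewards,
# plus a small fraction drawn from an arbitrary corruption distribution.
rng = np.random.default_rng(1)
clean = rng.normal(loc=2.0, scale=1.0, size=500)     # true mean 2.0
corrupt = rng.normal(loc=100.0, scale=1.0, size=50)  # adversarial outliers
sample = np.concatenate([clean, corrupt])

print(sample.mean())       # empirical mean is dragged toward the outliers
print(huber_mean(sample))  # Huber's estimate stays near 2.0
```

Each iteration touches every sample, so recomputing the estimator from scratch after each new observation is what gives the naive approach its quadratic cost; the paper's sequential estimator avoids this.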

---
