
REMINDER: All About That... Seminar Series - Stochastic Optimisation, April 18, 2025


All About That Bayes

Apr 17, 2025, 4:05:47 AM
to All About That Bayes

English version below


Dear colleagues,


A quick reminder about the session of the "All About that..." seminar series taking place tomorrow, April 18, 2025, at SCAI (Access map)! The program is below!


The afternoon is open to all, but please indicate your participation by registering via the following link: https://forms.office.com/e/APAHDfyYfQ


We hope to see many of you there!


Best regards,

Julien Stoehr and Sylvain Le Corff
For the Bayesian Statistics Group of the Société Française de Statistique (SFdS)


--------------------------------------------------------------------------------------------------------


Dear colleagues,


Quick reminder about the next session of the "All About that..." seminar series that takes place tomorrow, April 18, 2025, at SCAI (Access map). The program is below!


The event is open to all, but please confirm your attendance by registering via the following link: https://forms.office.com/e/APAHDfyYfQ


We look forward to seeing many of you there!


Best regards,

Julien Stoehr and Sylvain Le Corff

For the Bayesian Statistics Group of the Société Française de Statistique (SFdS)



Theme: Stochastic Optimisation

Date: Friday, April 18, 2025

Time: 14:00 to 16:00

Location: SCAI (Sorbonne Université, Campus Pierre et Marie Curie)


Program:


14:00 - 15:00 Clément Bonet (ENSAE) - Mirror and Preconditioned Gradient Descent in Wasserstein Space

 
Abstract: As the problem of minimizing functionals on the Wasserstein space encompasses many applications in machine learning, different optimization algorithms on $\mathbb{R}^d$ have received their counterparts on the Wasserstein space. We focus here on lifting two explicit algorithms: mirror descent and preconditioned gradient descent. These algorithms have been introduced to better capture the geometry of the function to minimize and are provably convergent under appropriate (namely relative) smoothness and convexity conditions. Adapting these notions to the Wasserstein space, we prove convergence guarantees for some Wasserstein-gradient-based discrete-time schemes for new pairings of objective functionals and regularizers. The difficulty here is to carefully select along which curves the functionals should be smooth and convex. We illustrate the advantages of adapting the geometry induced by the regularizer on ill-conditioned optimization tasks, and showcase the improvement brought by choosing different discrepancies and geometries in a computational biology task of aligning single cells.
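
For readers less familiar with the Euclidean starting point, here is a minimal sketch of entropic mirror descent (exponentiated gradient) on the probability simplex, i.e. the kind of $\mathbb{R}^d$ algorithm the talk lifts to the Wasserstein space; the quadratic objective, step size, and dimension are illustrative assumptions, not taken from the talk.

import numpy as np

# Minimal sketch of mirror descent with the entropy mirror map (exponentiated
# gradient) over the probability simplex -- the Euclidean template that the
# talk lifts to the Wasserstein space. Objective, step size, and dimension
# below are illustrative assumptions, not taken from the talk.

rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((d, d))
A = A.T @ A + np.eye(d)            # assumed positive-definite quadratic form
b = rng.standard_normal(d)

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad_f(x):
    return A @ x - b

x = np.full(d, 1.0 / d)            # start from the uniform distribution
eta = 0.05                         # step size (assumed)
for _ in range(1000):
    # Entropic mirror step: gradient update in the dual (log) coordinates,
    # i.e. a multiplicative update, then renormalise onto the simplex.
    x = x * np.exp(-eta * grad_f(x))
    x = x / x.sum()

print("objective value:", f(x))

Swapping the entropy for another strongly convex mirror map, or adding a preconditioner, changes the geometry of the update; adapting that geometry to functionals over probability measures is the mechanism studied in the talk.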


15:00 - 16:00 Antoine Godichon-Baggioni (Sorbonne Université) - Stochastic Newton algorithms with O(Nd) operations


Abstract: The majority of machine learning methods can be regarded as the minimization of an unavailable risk function. To optimize this function using samples provided in an online fashion, stochastic gradient descent is a common tool. However, it can be highly sensitive to ill-conditioned problems. To address this issue, we focus on Stochastic Newton methods. We first examine a version based on the Riccati (or Sherman-Morrison) formula, which allows recursive estimation of the inverse Hessian with reduced computational time. Specifically, we show that this method leads to asymptotically efficient estimates and requires $O(Nd^2)$ operations (where $N$ is the sample size and $d$ is the dimension). Finally, we explore how to adapt the Stochastic Newton algorithm to a streaming context, where data arrives in blocks, and demonstrate that this approach can reduce the computational requirement to $O(Nd)$ operations.
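
As a rough illustration of the recursive inverse-Hessian idea, here is a minimal sketch of a stochastic Newton step for online least squares, where each rank-one Hessian update is inverted with the Sherman-Morrison formula in $O(d^2)$ per sample, hence $O(Nd^2)$ overall; the model, noise level, and initialisation are assumptions for the example, not the speaker's exact algorithm.

import numpy as np

# Rough sketch of a stochastic Newton step for online least squares. Each
# sample adds a rank-one term x_n x_n^T to the running Hessian, so its inverse
# is updated with the Sherman-Morrison formula in O(d^2) per sample, i.e.
# O(N d^2) overall. Model, noise level, and initialisation are illustrative
# assumptions, not the speaker's exact algorithm.

rng = np.random.default_rng(1)
d, N = 5, 10_000
theta_star = rng.standard_normal(d)     # unknown parameter to recover

theta = np.zeros(d)
H_inv = np.eye(d)                       # inverse of the regularised Hessian sum

for n in range(N):
    x = rng.standard_normal(d)
    y = x @ theta_star + 0.1 * rng.standard_normal()

    # Sherman-Morrison:
    # (H + x x^T)^{-1} = H^{-1} - (H^{-1} x)(H^{-1} x)^T / (1 + x^T H^{-1} x)
    Hx = H_inv @ x
    H_inv -= np.outer(Hx, Hx) / (1.0 + x @ Hx)

    # Newton-type step: the per-sample gradient preconditioned by the current
    # inverse-Hessian estimate (whose growing sum of rank-one terms already
    # provides the 1/n decay of the effective step size).
    grad = (x @ theta - y) * x
    theta -= H_inv @ grad

print("estimation error:", np.linalg.norm(theta - theta_star))

This rank-one recursion is the classical recursive least-squares device; the streaming variant discussed in the talk, which processes data in blocks to bring the cost down to $O(Nd)$, goes beyond this sketch.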


