SMILE seminar - Shai Ben-David, 13th December, Telecom Paristech 14:00 -16:00, save the date


Robert Gower

Dec 2, 2018, 6:31:50 PM
to SMILE in Paris
Dear SMILE participants,

On the 13th of December, 14h00 - 16h00, at Telecom Paristech, Shai Ben-David will give a tutorial for the "SMILE in Paris" session.

Though the title and room details are yet to be defined (and will be sent soon), please save the date. 

------------------------------------------------------------------------------------
Thursday, December. 13th

14h00 - 16h00

Telecom Paristech, room: to be defined
(6 Rue Barrault, 75013 Paris, France)
Shai Ben-David (Professor of Computer Science at the University of Waterloo)

Title: To be defined.

Robert Gower

Dec 5, 2018, 2:30:56 PM
to SMILE in Paris
Dear SMILERS,

just to follow up with the details, Shai Ben-David's tutorial will be:

When: Thursday, December 13th, 14h00 - 16h00

Where: Telecom Paristech, room l'amphi Saphir
(6 Rue Barrault, 75013 Paris, France)

Who: Shai Ben-David (Professor of Computer Science at the University of Waterloo)

Title:  Learnability, uniform convergence and finiteness of a combinatorial dimension

Abstract: Vapnik's "fundamental theorem of statistical learning" states that, for binary classification prediction,
the following conditions on a class H of predictors are equivalent:
1) H is PAC learnable (in both the realizable and the agnostic cases).
2) H has the uniform convergence property (hence is learnable by any ERM learner).
3) H has a finite VC dimension.
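As a concrete illustration of condition (3): the VC dimension of a class is the size of the largest point set it "shatters", i.e. labels in every possible way. The sketch below (not from the abstract; `shatters` is a hypothetical helper, and thresholds are restricted to a finite grid for enumerability) checks shattering for one-dimensional threshold classifiers, whose VC dimension is 1:

```python
def shatters(hypotheses, points):
    """Check whether the hypothesis class realizes every labeling of `points`."""
    achieved = {tuple(h(x) for x in points) for h in hypotheses}
    return len(achieved) == 2 ** len(points)

# Threshold classifiers h_t(x) = 1[x >= t], over a finite grid of thresholds.
thresholds = [t / 10 for t in range(-20, 21)]
hypotheses = [lambda x, t=t: int(x >= t) for t in thresholds]

print(shatters(hypotheses, [0.5]))       # True: a single point can be labeled both ways
print(shatters(hypotheses, [0.3, 0.7]))  # False: the labeling (1, 0) is unachievable
```

Since no two-point set is shattered, the class has VC dimension 1 and, by the theorem, enjoys uniform convergence and is PAC learnable by any ERM learner.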

However, for other natural learning scenarios this equivalence breaks down.
This has been shown for a general model of learning by Shalev-Shwartz et al.,
for a model of learning from unlabeled samples when the labeling rule is known by Ben-David et al.,
and for multi-class learning by Daniely et al. (http://amitdaniely.com/multiclass.pdf).

In this tutorial I will survey those results, in which learnability holds in spite of the failure of uniform convergence.
I shall discuss what is known about the existence of a dimension that characterizes learnability in such cases.
In particular, I will present a recent result that uses set-theoretic tools to prove that there can be no
combinatorial dimension that characterizes learnability in Vapnik's General Model of Statistical Learning.