[seminarios-mlpb] Thursday June 6th, 13:00, PA2: Inês Almeida on "DJAM: Distributed Jacobi Asynchronous Method for Learning Personalized Models"


Sebastião Miranda

May 29, 2019, 5:21:08 AM
to priberam_...@googlegroups.com

Hello all,


Next week, Inês Almeida, a PhD student at the Signal and Image Processing Group (SIPG), Institute for Systems and Robotics (ISR), will present her work on "DJAM: Distributed Jacobi Asynchronous Method for Learning Personalized Models" on Thursday, June 6th, at 13:00 (room PA2, Pav. Matemática, IST).


To avoid the large amount of leftover lunches we have had at past sessions, we kindly ask you to register only if you intend to attend the event:

https://www.eventbrite.pt/e/djam-distributed-jacobi-asynchronous-method-for-learning-personalized-models-tickets-62464635406  

Best regards,
Sebastião Miranda,
sebastia...@priberam.pt


Priberam Labs
http://labs.priberam.com/

Priberam

https://www.priberam.com/

Priberam is hiring!

If you are interested in working with us please send your info to labs@priberam.pt

PRIBERAM SEMINARS   --  Room PA2
__________________________________________________


Priberam Machine Learning Lunch Seminar
Speaker:  Inês Almeida (SIPG - ISR)
Venue: IST Alameda, Room PA2 (Pavilhão de Matemática)
Date: Thursday, June 6th, 2019
Time: 13:00 (Lunch will be provided)

Title:

DJAM: Distributed Jacobi Asynchronous Method for Learning Personalized Models
 

Abstract:

With the widespread deployment of networks of data-collecting agents, distributed optimization and learning methods are becoming preferable to centralized solutions. Typically, distributed machine learning problems are solved by having the network's agents aim for a common (or consensus) model. In certain applications, however, each agent may be interested in meeting a personal goal that differs from the consensus solution. This problem is referred to as (asynchronous) distributed learning of personalized models: each agent reaches a compromise between agreeing with its neighbours and minimizing its personal loss function. We present a Jacobi-like distributed algorithm which converges with probability one to the centralized solution, provided the personal loss functions are strongly convex. We then show that our algorithm's performance is comparable to, or better than, that of distributed ADMM in a number of applications. These same experiments suggest that our Jacobi method converges linearly to the centralized solution.
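
For readers unfamiliar with the problem setup, the short Python sketch below illustrates the kind of Jacobi-style asynchronous update the abstract describes: an agent wakes at random and re-solves its own subproblem against the last-known models of its neighbours. This is an illustrative sketch, not the paper's algorithm; the quadratic losses, the ring network, the penalty weight lam, and all variable names are our assumptions.

import numpy as np

# Illustrative sketch of a Jacobi-style asynchronous update for
# personalized models (not the paper's exact method).
# Assumed setup: each agent i has a quadratic personal loss
# f_i(x) = 0.5 * ||A_i x - b_i||^2, and the joint objective adds an
# agreement penalty (lam/2) * sum over edges (i,j) of ||x_i - x_j||^2.

rng = np.random.default_rng(0)
n_agents, dim, lam = 5, 3, 1.0
# Hypothetical ring network: each agent talks to two neighbours.
neighbours = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
              for i in range(n_agents)}

A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]
x = [np.zeros(dim) for _ in range(n_agents)]  # personalized models

for _ in range(2000):
    i = rng.integers(n_agents)  # a random agent wakes up (asynchrony)
    deg = len(neighbours[i])
    # Jacobi step: minimize the agent's personal loss plus the agreement
    # penalty, holding the last-known neighbour models fixed. With
    # quadratic losses this has the closed form below.
    lhs = A[i].T @ A[i] + lam * deg * np.eye(dim)
    rhs = A[i].T @ b[i] + lam * sum(x[j] for j in neighbours[i])
    x[i] = np.linalg.solve(lhs, rhs)

The quadratic losses here are strongly convex, so each x[i] settles to the personalized solution of the joint objective; the abstract's probability-one and linear-rate claims concern the paper's actual method, not this toy example.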

 

Bio:

Inês is currently pursuing a PhD in distributed optimization at IST/ISR. Before that, she worked for three years as a data scientist at a number of companies, focusing mostly on credit scoring, mobile data analytics, and model explainability. She completed her Master's degree in Physics at IST in 2013.


More info: 
http://labs.priberam.com/Academia-Partnerships/Seminars.aspx

Eventbrite:
https://www.eventbrite.pt/e/djam-distributed-jacobi-asynchronous-method-for-learning-personalized-models-tickets-62464635406   

 
