[Priberam ML Seminars] Priberam Machine Learning Lunch Seminars (T12) - 1 - "TrimTuner: Efficient Optimization of Machine Learning Jobs in the Cloud via Sub-Sampling", Pedro Mendes (IST / INESC-ID)


Rúben Cardoso

Mar 2, 2021, 11:21:26 AM
to priberam_...@googlegroups.com, si...@omni.isr.ist.utl.pt, isr-...@isr.tecnico.ulisboa.pt
Hello all,

We hope you are all safe and healthy. The Priberam Machine Learning Seminars are back for their 12th season!
As usual, we will continue to explore cutting-edge developments in Machine Learning and Artificial Intelligence.
All seminars will take place remotely via Zoom every other week, on Tuesdays at 13:00.

Next Tuesday, March 9th, at 13:00, Pedro G. Mendes, a Ph.D. student at IST / INESC-ID, will present his work on "TrimTuner: Efficient Optimization of Machine Learning Jobs in the Cloud via Sub-Sampling".

You can watch and participate in this seminar through the Zoom link below.
Please note that the seminar is limited to 100 people and admission is on a first-come, first-served basis, so please try to be on time if you wish to attend.

Best Regards,
Rúben Cardoso

Priberam Labs
http://labs.priberam.com/

Priberam is hiring!
If you are interested in working with us, please consult the available positions at priberam.com/careers.

PRIBERAM SEMINARS   --  Zoom 816 8265 1891
__________________________________________________


Priberam Machine Learning Lunch Seminar
Speaker:  Pedro G. Mendes (IST / INESC-ID)
Venue: https://us02web.zoom.us/j/81682651891?pwd=cVUrWUd1N2FvUForOE1ibkp3blFLZz09

Date: Tuesday, March 9th, 2021
Time: 13:00 
Title:
TrimTuner: Efficient Optimization of Machine Learning Jobs in the Cloud via Sub-Sampling
Abstract:
This work introduces TrimTuner, the first system for optimizing machine learning jobs in the cloud that exploits sub-sampling techniques to reduce the cost of the optimization process while taking into account user-specified constraints. TrimTuner jointly optimizes the cloud and application-specific parameters and, unlike state-of-the-art works on cloud optimization, eschews the need to train the model with the full training set every time a new configuration is sampled. Indeed, by leveraging sub-sampling techniques and datasets that are up to 60x smaller than the original one, we show that TrimTuner can reduce the cost of the optimization process by up to 50x.
Further, TrimTuner speeds up the recommendation process by 65x with respect to state-of-the-art techniques for hyperparameter optimization that use sub-sampling. The reasons for this improvement are twofold: i) a novel domain-specific heuristic that reduces the number of configurations for which the acquisition function has to be evaluated; ii) the adoption of an ensemble of decision trees, which speeds up the recommendation process by one additional order of magnitude.
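As a rough illustration of the first speed-up source mentioned in the abstract (evaluating the acquisition function only on configurations that the surrogate ensemble predicts to satisfy the user's constraints), here is a minimal Python sketch. All names (`predict_ensemble`, `filter_candidates`, the toy cost models, and the stand-in acquisition score) are illustrative assumptions for this announcement, not TrimTuner's actual code or models.

```python
# Hypothetical sketch of the constraint-filtering heuristic: instead of
# scoring every candidate configuration with the acquisition function,
# first discard those whose predicted cost violates the user's budget.

def predict_ensemble(models, config):
    """Mean prediction of an ensemble of simple surrogate models
    (stand-ins for TrimTuner's ensemble of decision trees)."""
    preds = [m(config) for m in models]
    return sum(preds) / len(preds)

def filter_candidates(candidates, cost_models, budget):
    """Keep only configurations predicted to satisfy the cost budget."""
    return [c for c in candidates
            if predict_ensemble(cost_models, c) <= budget]

def recommend(candidates, cost_models, acq, budget):
    feasible = filter_candidates(candidates, cost_models, budget)
    # The acquisition function is evaluated only on the smaller feasible set.
    return max(feasible, key=acq) if feasible else None

# Toy example: a configuration is a (num_vms, subsample_fraction) pair.
candidates = [(v, s) for v in (1, 2, 4, 8) for s in (0.1, 0.5, 1.0)]
cost_models = [lambda c: c[0] * c[1] * 1.0,   # two crude surrogate "trees"
               lambda c: c[0] * c[1] * 1.2]   # predicting dollar cost
acq = lambda c: c[0] * c[1]                   # stand-in acquisition score
best = recommend(candidates, cost_models, acq, budget=4.0)
```

In this toy run, configurations such as (8, 0.5) are filtered out before acquisition scoring because their predicted cost exceeds the budget, which is the gist of the heuristic: the expensive scoring step touches far fewer configurations.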
Short Bio:
Pedro Mendes is a doctoral student in Computer Science and Engineering at Instituto Superior Técnico (IST) - Universidade de Lisboa, advised by Prof. Paolo Romano. His research interests include Distributed Systems, Cloud Computing, Virtualization, Optimization, Machine Learning, Computer Networks, and Artificial Intelligence (AI). The present work was developed during his master's thesis and presented last year at the international conference MASCOTS 2020. Currently, Pedro is working on a research project that aims to improve the efficiency of AI platforms while ensuring compliance with real-time constraints during the training and inference phases of machine learning models in the cloud. This work is developed in the context of the CAMELOT project.

Rúben Cardoso

Mar 9, 2021, 5:55:02 AM
to priberam_...@googlegroups.com, si...@omni.isr.ist.utl.pt, isr-...@isr.tecnico.ulisboa.pt
Hello all,

This is just a reminder that the first Priberam ML Lunch Seminar is just a few hours away.

Today, Tuesday, March 9th, at 13:00, Pedro G. Mendes, a Ph.D. student at IST / INESC-ID, will present his work on "TrimTuner: Efficient Optimization of Machine Learning Jobs in the Cloud via Sub-Sampling".

You can watch and participate in this seminar through the Zoom link below.
Please note that the seminar is limited to 100 people and admission is on a first-come, first-served basis, so please try to be on time if you wish to attend.

Best Regards,
Rúben Cardoso

Priberam Labs
http://labs.priberam.com/

Priberam is hiring!
If you are interested in working with us, please consult the available positions at priberam.com/careers.


PRIBERAM SEMINARS   --  Zoom 816 8265 1891
__________________________________________________


Priberam Machine Learning Lunch Seminar
Speaker:  Pedro G. Mendes (IST / INESC-ID)
Venue: https://us02web.zoom.us/j/81682651891?pwd=cVUrWUd1N2FvUForOE1ibkp3blFLZz09
Date: Tuesday, March 9th, 2021
Time: 13:00 
Title:
TrimTuner: Efficient Optimization of Machine Learning Jobs in the Cloud via Sub-Sampling