[Call for Papers] 1st International Workshop “Adapting to Change: Reliable Learning Across Domains” @ ECML-PKDD 2023 - September 18th, Turin, Italy

Raffaello Camoriano

May 23, 2023, 12:55:44 PM
to ai...@aixia.it

Dear colleagues,


We are pleased to announce that the 1st International Workshop “Adapting to Change: Reliable Learning Across Domains” will take place as part of ECML-PKDD 2023 (https://2023.ecmlpkdd.org/) on Monday, September 18th, 2023 in Turin, Italy.

In this workshop, we aim to bring together researchers from across the ML community to present and discuss recent results in reliable learning across domains, foster new connections between theory and practical methods, and identify solutions targeting different modalities (images, videos, language, and more) and application areas.


Submit a contribution: https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023/Track/42/Submission/Create

Workshop website: https://sites.google.com/view/adapting-to-change-ecml-pkdd/


For any questions, feel free to contact us through the form available on the website.

Important Dates


All deadlines are 11:59 PM UTC-12:00 ("anywhere on Earth").

  • Submission deadline: June 12th, 2023
  • Notification of publication decision: July 12th, 2023
  • Camera-ready due: September 1st, 2023 (to be confirmed)
  • Workshop: September 18th, 2023 (afternoon)

Abstract


Most Machine Learning algorithms assume that training and test sets are sampled from the same data distribution. While convenient for analyzing generalization properties, this assumption is easily violated in real-world problems. As a result, the predictive performance of classical methods can be unreliable when deployed in the wild. This is a crucial limitation preventing the application of learning-based solutions to safety-critical settings (e.g., autonomous driving, robotics, medical imaging). At the same time, leveraging data from similar, yet distinct, domains can greatly reduce the labeling costs of target applications. This allows powerful, data-hungry deep models to benefit fields with scarce data via pre-training on general-purpose datasets and fine-tuning on smaller, problem-specific ones.

The growing demand for reliable and data-efficient learning methods able to generalize across domains has fueled research in Transfer Learning. This includes Domain Adaptation (DA), which exploits a few, potentially unlabeled, examples from a target domain to adapt models trained on a different source domain, and Domain Generalization (DG), which aims to enhance model robustness to unseen target-domain variability. Lastly, many applications require models able to deal with continuously shifting target distributions, potentially with novel tasks presented sequentially. This is typically tackled by Continual Learning (CL) methods. Importantly, DA, DG, and CL share many similarities with the Learning-to-Learn framework, which aims to optimize a learner over a meta-distribution of domains so that it generalizes to unseen ones.

In this workshop, we aim to bring together researchers from the above fields and the broader ML community to present and discuss recent results in reliable learning across domains, foster new connections between theory and practical methods, and identify solutions targeting different modalities (images, videos, language, and more) and application areas.

Workshop Topics


The topics of special interest for this workshop (though submissions are not limited to these) are:

  • Transfer Learning
  • Domain Generalization (DG)
    • Data Augmentation Approaches
    • Adversarial Approaches
    • Regularization Approaches
    • ...
  • Domain Adaptation (DA)
    • Zero-shot DA
    • One-shot DA
    • Few-shot DA
    • Source-free Unsupervised DA (SF-UDA)
    • Multi-source & Multi-target DA
    • Test-time Domain Adaptation
    • Black-box model adaptation
    • ...
  • Meta Learning / Learning to Learn
  • Reliable Learning Across Domains
    • Open-set DA
    • Partial-set DA
    • Universal DA
    • Anomaly Detection
    • Uncertainty Quantification in DA
    • Out-of-distribution Detection
    • Distribution Mismatch Measures
    • ...
  • Continual/Lifelong/Incremental Learning
  • Multimodal learning
    • Cross-modal adaptation
    • Multimodal transfer learning
    • ...
  • Multimodal DA/DG
  • Invariance and Equivariance in Deep Neural Networks
  • Weakly-supervised Learning
  • Self-supervised Learning
  • Active Learning
  • Federated Learning (FL)
    • Federated DA
    • Personalized FL
  • Applications
    • Computer Vision
    • Robotics
    • Autonomous Driving
    • NLP
    • ...
  • Evaluation protocols
  • Datasets
  • ...

Submission Instructions

This workshop allows for two different paper formats (maximum file size: 10 MB):

  • Short papers of maximum 4 pages (references included): not eligible for the proceedings; may cover early-stage ideas and/or work recently accepted, published, or under review at other venues;
  • Full-length papers of maximum 8 pages (references included): eligible for the proceedings (opt-in); must cover novel work not previously published and not under review elsewhere.
A maximum of 2 optional supplementary material files (maximum 50 MB) is allowed (note: supplementary files can be uploaded from the Author Console only after the main manuscript has been uploaded):
  • A PDF appendix using the same LaTeX template as the main manuscript;
  • A ZIP archive containing other supplementary material, such as videos in MP4 format (maximum 3 minutes long) and/or code. A readme.txt file describing the contents must be included in the archive.
Workshop papers must be prepared and submitted using the following LaTeX template: https://resource-cms.springernature.com/springer-cms/rest/v1/content/19238648/data/v6

The workshops and tutorials will be included in joint post-workshop proceedings published by Springer in the Communications in Computer and Information Science (CCIS) series, in 1-2 volumes organised by focused scope and possibly indexed by Web of Science. CCIS web page: https://www.springer.com/series/7899

Authors of full-length papers may opt in to or out of the proceedings.

We encourage authors who wish to present and discuss ongoing work to choose the short-paper format, so that the work can be submitted concurrently to other venues without counting as a dual/double submission, or alternatively to opt out of the proceedings.

For any inquiries or technical assistance, please contact the organizers via the form available at: https://sites.google.com/view/adapting-to-change-ecml-pkdd/contacts


Invited Speakers

Organizing Committee

  • Raffaello Camoriano, Ph.D.
    Assistant Professor, VANDAL Lab
    Department of Control and Computer Engineering (DAUIN), Politecnico di Torino
    ELLIS Unit Turin, Member

  • Carlo Masone, Ph.D.
    Assistant Professor, VANDAL Lab
    Department of Control and Computer Engineering (DAUIN), Politecnico di Torino

  • Giuseppe Averta, Ph.D.
    Assistant Professor, VANDAL Lab
    Department of Control and Computer Engineering (DAUIN), Politecnico di Torino
    ELLIS Unit Turin, Member

  • Francesca Pistilli, Ph.D.
    Postdoctoral Researcher, VANDAL Lab
    Department of Control and Computer Engineering (DAUIN), Politecnico di Torino

  • Tatiana Tommasi, Ph.D.
    Associate Professor, VANDAL Lab
    Department of Control and Computer Engineering (DAUIN), Politecnico di Torino
    ELLIS Unit Turin, Director
Acknowledgments


Raffaello Camoriano
Assistant Professor (non-TT, RTDa) in Machine Learning and Robotics
Visual and Multimodal Applied Learning Laboratory (VANDAL), 
ELLIS Unit Turin
DAUIN, Politecnico di Torino, Turin, Italy

Skype ID: raffaello.camoriano
Mailing: C.so Francesco Ferrucci, 112 - 10141 Turin, Italy