NeurIPS 2023 Workshop on Distribution Shifts


Masashi Sugiyama

Sep 5, 2023, 11:42:04 PM
to the Information-Theoretic Learning Theory and Machine Learning (IBISML) mailing list

Dear IBISML members,


This is Masashi Sugiyama from RIKEN / The University of Tokyo.

Following the 2021 and 2022 editions, we will again hold a workshop on distribution shift at NeurIPS 2023.

We look forward to your paper submissions and your participation in the workshop.


==============================


We’d like to invite you to submit to the NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models.


Website: https://sites.google.com/view/distshift2023

Paper submission deadline: October 2, 2023 (Anywhere on Earth)

Author notification: October 27, 2023

Workshop: December 15, 2023, in-person in New Orleans, USA. 


Authors who will not be able to attend in person are still encouraged to submit. Accepted papers will be accompanied by a short pre-recorded video to allow authors to present their work remotely.


Please reach out to distshift-w...@googlegroups.com if you have any questions.

 

Call for papers

Distribution shifts—where a model is deployed on a data distribution different from what it was trained on—pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance. For example, models can systematically fail when tested on patients from different hospitals or people from different demographics. Training models that are robust to such distribution shifts is a rapidly growing area of interest in the ML community.
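The failure mode described above can be made concrete with a small, hypothetical sketch (not part of the call for papers): a linear classifier that latches onto a feature which is only spuriously correlated with the label at training time, and which then fails when that correlation flips at deployment.

```python
# Hedged illustration of distribution shift (synthetic data, not from the CFP):
# a linear classifier relies on a spurious feature whose correlation with the
# label flips between the training and test distributions.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Labels in {-1,+1}; x1 is a weak causal feature, x2 a strong
    spurious one that agrees with y with probability `spurious_corr`."""
    y = rng.integers(0, 2, n) * 2 - 1          # labels in {-1, +1}
    x1 = 0.5 * y + rng.normal(0.0, 1.0, n)     # weak but stable signal
    agree = rng.random(n) < spurious_corr
    x2 = np.where(agree, y, -y) * 2.0          # strong but unstable signal
    return np.column_stack([x1, x2]), y

# Train where x2 agrees with y 95% of the time; test where it agrees only 5%.
Xtr, ytr = make_data(5000, 0.95)
Xte, yte = make_data(5000, 0.05)

# Least-squares linear classifier: predict sign(X @ w).
w, *_ = np.linalg.lstsq(Xtr, ytr.astype(float), rcond=None)

def accuracy(X, y):
    return float(np.mean(np.sign(X @ w) == y))

print(f"train-distribution accuracy: {accuracy(Xtr, ytr):.2f}")
print(f"shifted-test accuracy:       {accuracy(Xte, yte):.2f}")
```

Because the least-squares fit weights the spurious feature heavily, accuracy is high on the training distribution but collapses under the shifted test distribution, even though the weak causal feature alone would have generalized.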


In recent years, foundation models—large pretrained models that can be adapted for a wide range of tasks—have achieved unprecedented performance on a broad variety of discriminative and generative tasks, including in distribution shift scenarios. Foundation models open up an exciting new frontier in the study of distribution shifts. The goal of our workshop is to foster discussions and further research on distribution shifts, especially in the context of foundation models.


Examples of relevant topics include, but are not limited to:

  • Effects of foundation models (e.g., pre-training, scale) on robustness

  • Robust adaptation of foundation models to downstream tasks

  • Distribution shifts from pretraining to downstream distributions, including in the context of generative foundation models

Beyond the above topics, we are broadly interested in methods, empirical studies, and theory of distribution shifts, including those that do not involve foundation models. 


Invited speakers

Aditi Raghunathan, Carnegie Mellon University

Balaji Lakshminarayanan, Google DeepMind

Hoifung Poon, Microsoft Research

Kate Saenko, Boston University

Ludwig Schmidt, University of Washington

Peng Cui, Tsinghua University


Organizers

Becca Roelofs, Google

Fanny Yang, ETH Zurich

Hongseok Namkoong, Columbia University

Jacob Eisenstein, Google

Masashi Sugiyama, RIKEN & University of Tokyo 

Pang Wei Koh, University of Washington

Shiori Sagawa, Stanford University

Tatsunori Hashimoto, Stanford University

Yoonho Lee, Stanford University

