Working on fine-tuning (of large language models or other models) in theory and/or practice?
Consider submitting your paper to the 1st Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability (FITML) at NeurIPS 2024 in Vancouver!
Important dates (AoE time):
- Submission deadline: September 15th, 2024, AoE
- Author notification: October 9th, 2024, AoE
- Workshop date: December 14 or 15, 2024
Topics include but are not limited to:
- New methodologies for fine-tuning across strategies, architectures, and systems: from low-rank to sparse representations, from deep neural networks to LLMs, and from algorithmic design to hardware design.
- Theoretical foundations of fine-tuning, e.g., approximation, optimization, and generalization from the perspectives of transfer learning, deep learning theory, and RLHF. In addition, theoretical understanding of low-rank representation from sketching and signal recovery is also welcome.
- New experimental observations that advance our understanding of the underlying mechanisms of fine-tuning, discrepancies between existing theoretical analyses and practice, and explainability and interpretability of fine-tuning in scientific contexts.
Awards: Among exceptional research papers with high review scores, we will select one Best Paper Award, two runners-up, and a Best Poster Award.
If you have any questions about paper submission or the workshop, please send an email to:
neurips...@outlook.com
We look forward to your contributions!