Dear RL-List community,
We are excited to announce the Call for Papers for the ICLR 2026 Workshop on Scaling Post-Training for LLMs (SPOT).
Post-training, spanning Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), and beyond, has become a compute-intensive, first-class phase that increasingly determines the capabilities, safety, and efficiency of modern foundation models. Despite its importance, post-training at scale remains far less understood than pre-training. SPOT aims to close this gap by developing a principled scientific framework for post-training across algorithms, systems, data, architectures, and objectives.
We warmly invite submissions from the RL community, especially work related to:
RL formulations for LLM post-training, stability, and scaling behavior
Reward modeling, preference learning, and verification
Efficient feedback mechanisms and evaluation at scale
Infrastructure and systems challenges
Architectures for scalable post-training
Data curation, synthetic data, and generalization
Safety, alignment, and real-world or embodied environments
Key dates:
Paper submission: February 05, 2026
Author notification: March 01, 2026
Camera-ready: April 01, 2026
Workshop date: April 26 or 27, 2026
Invited Speakers
We are thrilled to host an outstanding lineup of invited speakers; please see the workshop website for the lineup.
Submission portal: OpenReview Submission Portal
Call for Reviewers: Reviewer Nomination Form
Workshop website: https://spoticlr.github.io
Contact: spot...@gmail.com
We believe SPOT will be a great venue for researchers working at the intersection of RL, alignment, and large-scale post-training to exchange ideas and shape a more rigorous understanding of how post-training should scale.
We look forward to your submissions and participation. Follow us on X (formerly Twitter) to stay updated.
Best regards,
Gagan Jain
Website | Twitter
(on behalf of the SPOT 2026 Organizing Committee)