Starkly Speaking: Transformers Discover Molecular Structure Without Graph Priors

Hannes Stärk

Oct 12, 2025, 2:22:54 PM
to stark...@googlegroups.com
Hi everyone,

Tomorrow we discuss:

Paper:
Transformers Discover Molecular Structure Without Graph Priors https://arxiv.org/abs/2510.02259 (Tobias Kreiman, Yutong Bai, Fadi Atieh, Elizabeth Weaver, Eric Qu, Aditi S. Krishnapriyan)
Graph Neural Networks (GNNs) are the dominant architecture for molecular machine learning, particularly for molecular property prediction and machine learning interatomic potentials (MLIPs). GNNs perform message passing on predefined graphs, often induced by a fixed radius cutoff or k-nearest neighbor scheme. While this design aligns with the locality present in many molecular tasks, a hard-coded graph can limit expressivity due to the fixed receptive field and slow down inference with sparse graph operations. In this work, we investigate whether pure, unmodified Transformers trained directly on Cartesian coordinates, without predefined graphs or physical priors, can approximate molecular energies and forces. As a starting point for our analysis, we demonstrate how to train a Transformer to competitive energy and force mean absolute errors under a matched training compute budget, relative to a state-of-the-art equivariant GNN on the OMol25 dataset. We discover that the Transformer learns physically consistent patterns, such as attention weights that decay inversely with interatomic distance, and flexibly adapts them across different molecular environments due to the absence of hard-coded biases. The use of a standard Transformer also unlocks predictable improvements with respect to scaling training resources, consistent with empirical scaling laws observed in other domains. Our results demonstrate that many favorable properties of GNNs can emerge adaptively in Transformers, challenging the necessity of hard-coded graph inductive biases and pointing toward standardized, scalable architectures for molecular modeling.
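
If you want the gist of the setup before tomorrow, here is a minimal sketch (not the paper's code; sizes, the 5 A cutoff, and the toy readout are my own illustrative assumptions) of the contrast the abstract draws: a fixed radius-cutoff graph as the GNN prior, versus a vanilla Transformer reading raw Cartesian coordinates, with forces taken as -dE/dx as is standard for MLIPs.

import torch
import torch.nn as nn

n_atoms, d_model = 32, 128  # illustrative sizes, not from the paper
coords = torch.randn(n_atoms, 3, requires_grad=True)   # Cartesian coordinates
atom_types = torch.randint(0, 10, (n_atoms,))          # integer element types

# GNN-style hard-coded prior: edges exist only within a fixed radius cutoff,
# so the receptive field is decided before training ever starts.
dist = torch.cdist(coords, coords)                                # pairwise distances
edges = (dist < 5.0) & ~torch.eye(n_atoms, dtype=torch.bool)      # assumed 5 A cutoff

# Transformer-style input: no graph at all. Atom embeddings plus a linear lift
# of the raw coordinates; attention is free to learn which pairs matter.
tokens = nn.Embedding(10, d_model)(atom_types) + nn.Linear(3, d_model)(coords)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
    num_layers=4,
)
h = encoder(tokens.unsqueeze(0))                        # (1, n_atoms, d_model)
energy = h.sum(dim=1) @ torch.randn(d_model, 1)         # toy scalar-energy readout
forces = -torch.autograd.grad(energy.sum(), coords)[0]  # forces as -dE/dx

The paper's finding, as I read the abstract, is that the trained Transformer effectively rediscovers what the cutoff hard-codes: its attention weights decay inversely with interatomic distance, but adaptively rather than by construction.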

Speaker:
Tobias Kreiman, who is a PhD student in Aditi Krishnapriyan's group at UC Berkeley.

Meeting Details:
Every Monday at 12:00 ET / 9:00 PT / 18:00 CE(S)T.  
https://zoom.us/j/5775722530?pwd=ZzlGTXlDNThhUDZOdU4vN2JRMm5pQT09

Add it to your calendar:
Subscribe via Google Calendar, or subscribe via iCal.
Alternatively, add the events, or add this single event.

Slack Workspace for discussion and paper voting:
https://join.slack.com/t/logag/shared_invite/zt-2zuxi7gd1-rLUgxg6gnCkhO7WlRsyElg

All information (schedule of upcoming papers, recordings, etc.):
https://portal.valencelabs.com/starklyspeaking