Starkly Speaking: "UMA: A Family of Universal Models for Atoms"
Hannes Stärk
Jun 29, 2025, 3:33:29 PM
to stark...@googlegroups.com
Hi all,
Tomorrow we cover a recent breakthrough paper in the world of neural network potentials (NNPs):
Paper: UMA: A Family of Universal Models for Atoms (Brandon M. Wood et al.)
https://ai.meta.com/research/publications/uma-a-family-of-universal-models-for-atoms/

Abstract: The ability to quickly and accurately compute properties from atomic simulations is critical for advancing a large number of applications in chemistry and materials science, including drug discovery, energy storage, and semiconductor manufacturing. To address this need, Meta FAIR presents a family of Universal Models for Atoms (UMA), designed to push the frontier of speed, accuracy, and generalization. UMA models are trained on half a billion unique 3D atomic structures (the largest training runs to date) by compiling data across multiple chemical domains, e.g., molecules, materials, and catalysts. We develop empirical scaling laws to help understand how to increase model capacity alongside dataset size to achieve the best accuracy. The UMA small and medium models utilize a novel architectural design we refer to as mixture of linear experts, which enables increasing model capacity without sacrificing speed. For example, UMA-medium has 1.4B parameters but only ~50M active parameters per atomic structure. We evaluate UMA models on a diverse set of tasks across multiple domains and find that, remarkably, a single model without any fine-tuning can perform similarly to or better than specialized models. We are releasing the UMA code, weights, and associated data to accelerate computational workflows and enable the community to continue to build increasingly capable AI models.
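The "mixture of linear experts" mentioned in the abstract is what makes the 1.4B-total / ~50M-active parameter figure possible: rather than routing each atom through a separate expert network, a gate can mix the experts' weight matrices into a single effective linear layer for the whole structure, so the forward pass costs one matmul regardless of how many experts are stored. Below is a minimal PyTorch sketch of that idea to make the talk easier to follow; it is not the paper's actual implementation, and the layer sizes, gating input, and all names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfLinearExperts(nn.Module):
    """Sketch of a mixture-of-linear-experts layer (hypothetical names).

    The gate mixes the expert weight matrices into one effective linear
    layer per structure, so only d_out * d_in parameters are "active"
    per forward pass, however many experts are stored.
    """

    def __init__(self, d_in: int, d_out: int, n_experts: int, d_gate: int):
        super().__init__()
        # One weight matrix per expert: (n_experts, d_out, d_in).
        self.experts = nn.Parameter(torch.randn(n_experts, d_out, d_in) * d_in**-0.5)
        self.bias = nn.Parameter(torch.zeros(d_out))
        # Gate maps a per-structure embedding to mixing coefficients.
        self.gate = nn.Linear(d_gate, n_experts)

    def forward(self, x: torch.Tensor, system_emb: torch.Tensor) -> torch.Tensor:
        # x: (n_atoms, d_in) atom features for ONE structure.
        # system_emb: (d_gate,) global descriptor of the structure
        # (e.g., task/charge/spin embedding -- an assumption here).
        coeff = F.softmax(self.gate(system_emb), dim=-1)    # (n_experts,)
        # Collapse all experts into a single effective weight matrix.
        w = torch.einsum("e,eoi->oi", coeff, self.experts)  # (d_out, d_in)
        return x @ w.T + self.bias

# Toy usage: many stored parameters, few active per structure.
layer = MixtureOfLinearExperts(d_in=256, d_out=256, n_experts=32, d_gate=64)
x = torch.randn(100, 256)   # 100 atoms
emb = torch.randn(64)       # hypothetical per-structure embedding
y = layer(x, emb)           # (100, 256)
# Stored: 32 * 256 * 256 weights; active per structure: 256 * 256.
```

The design point worth noting: because the gating signal is per structure rather than per atom, the expert merge happens once per system, and inference speed stays essentially independent of the number of experts.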
Speaker: Brandon Wood is a research scientist on the FAIR chemistry team in San Francisco. His research lies at the intersection of deep learning, chemistry/physics, and large-scale computing. Lately, he has been working on generalizable machine learning potentials and generative models for molecules and materials. Prior to joining FAIR, he was a postdoctoral fellow at NERSC and completed his PhD with Kristin Persson at UC Berkeley. Contact: bmw...@meta.com.