LoGG tomorrow's paper: Does equivariance matter at scale?
Hannes Stärk
Jan 5, 2025, 3:05:56 PM
to lo...@googlegroups.com
Hi everyone,
Long time no see, and happy new year! The first paper of the year will be:
Paper: Does equivariance matter at scale? https://arxiv.org/abs/2410.23179 (Johann Brehmer, Sönke Behrends, Pim de Haan, Taco Cohen)
Abstract: Given large data sets and sufficient compute, is it beneficial to design neural architectures for the structure and symmetries of each problem? Or is it more efficient to learn them from data? We study empirically how equivariant and non-equivariant networks scale with compute and training samples. Focusing on a benchmark problem of rigid-body interactions and on general-purpose transformer architectures, we perform a series of experiments, varying the model size, training steps, and dataset size. We find evidence for three conclusions. First, equivariance improves data efficiency, but training non-equivariant models with data augmentation can close this gap given sufficient epochs. Second, scaling with compute follows a power law, with equivariant models outperforming non-equivariant ones at each tested compute budget. Finally, the optimal allocation of a compute budget onto model size and training duration differs between equivariant and non-equivariant models.
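For anyone wondering what the data-augmentation baseline in the abstract amounts to in practice: a common way to let a non-equivariant model learn rotational symmetry from data is to apply independent random rotations to the inputs during training. Below is a minimal NumPy/SciPy sketch of that idea for batched point clouds; the function name, batch shapes, and augmentation scheme are illustrative assumptions, not the paper's actual pipeline.

import numpy as np
from scipy.spatial.transform import Rotation

def augment_with_random_rotations(positions):
    """Apply an independent random rotation to each sample in a batch.

    positions: array of shape (batch, num_points, 3) with point coordinates.
    Returns an array of the same shape with each sample rigidly rotated.
    """
    batch = positions.shape[0]
    # One uniformly random rotation matrix per sample, shape (batch, 3, 3).
    rotations = Rotation.random(batch).as_matrix()
    # Rotate every point: result[b, n, i] = sum_j R[b, i, j] * x[b, n, j].
    return np.einsum("bij,bnj->bni", rotations, positions)

# Toy usage: four "rigid bodies" of 16 points each.
rng = np.random.default_rng(0)
points = rng.normal(size=(4, 16, 3))
augmented = augment_with_random_rotations(points)
# Sanity check: rotations preserve pairwise distances within each sample.
d0 = np.linalg.norm(points[0, :, None] - points[0, None, :], axis=-1)
d1 = np.linalg.norm(augmented[0, :, None] - augmented[0, None, :], axis=-1)
assert np.allclose(d0, d1)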
Speaker: Johann Brehmer, who is a physicist turned machine learner and a research scientist at CuspAI in Amsterdam. There he works on machine learning–driven discovery of materials for carbon capture.