The FLAMINGO project: From High-performance computing to simulating the whole Universe | 9am PT Tues Nov 14


Grigory Bronevetsky

Nov 9, 2023, 11:57:16 AM
to ta...@modelingtalks.org

Modeling Talks

The FLAMINGO project: From High-performance computing to simulating the whole Universe

Matthieu Schaller,
Lorentz Institute, Leiden Observatory


Tuesday, Nov 14 | 9am PT

Meet | YouTube Stream


Hi all,


The presentation will be via Meet and all questions will be addressed there. If you cannot attend live, the event will be recorded and can be found afterward at
https://sites.google.com/modelingtalks.org/entry/flamingo-from-hpc-to-simulating-the-whole-universe


Abstract:
The interpretation of data coming from cosmology surveys (such as the recently launched Euclid satellite) relies on comparison with accurate theoretical models that include all the known relevant physical phenomena. The precision reached by modern instruments makes this an extremely challenging task for numerical physicists, who have to make use of some of the largest HPC facilities to run their calculations. In this talk, I will present the recently completed FLAMINGO project, a virtual twin of our own universe. This suite of simulations contains, among other runs, the largest cosmological calculation ever performed. I will introduce the key physics and cosmology questions as well as cover some of the technical computational challenges and the solutions we implemented in the SWIFT cosmology code to overcome them.

Bio:
Matthieu Schaller is an assistant professor in numerical cosmology at the Lorentz Institute for Theoretical Physics and Leiden Observatory, where he works on the development and analysis of cosmological simulations. His research focuses on the development of numerical simulation tools for cosmology and astrophysics, mainly the SWIFT code and associated packages, as well as the preparation, running, and analysis of galaxy formation and cosmology simulations, such as the state-of-the-art EAGLE, SIBELIUS, FLAMINGO, and COLIBRE projects, which have been used in hundreds of subsequent research studies around the world. The research in his group encompasses the low-level technical challenges of high-performance computing, the development of accurate numerical methods, and the construction of tools to interpret the simulated results and confront them with the observed Universe.


More information on previous and future talks: https://sites.google.com/modelingtalks.org/entry/home

Grigory Bronevetsky

Dec 11, 2023, 1:21:48 PM
to Talks, Grigory Bronevetsky

Video Recording: https://youtu.be/1ypdgLu_bN8

Slides: https://docs.google.com/presentation/d/1__dyZeSJNG9X0tFHJZp1iDk8ysh9WP7YCkMLnYO-L_s/edit?usp=sharing


Summary

  • Pillars of modern physics

    • General Relativity: gravity

    • Quantum Field Theory

    • Standard Model of Particle Physics

    • Lambda Cold Dark Matter (ΛCDM) model of the universe

  • Cosmic microwave background: temperature variations in early universe

    • Can describe the power spectrum using a 6-parameter model based on cosmological parameters (see the illustration below)

    • Cosmology tension: model’s predictions differ from astronomical measurements

    • Are we missing laws of physics or is it a measurement error?
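
To make the 6-parameter model above concrete: the flat ΛCDM fit to the CMB power spectrum is usually expressed through six base parameters. The parameter names and approximate (roughly Planck-2018-like) values in this sketch are added for illustration and were not part of the talk.

# Approximate base parameters of the flat LCDM fit to the CMB power
# spectrum (illustrative values only, roughly Planck-2018-like).
lcdm_base_parameters = {
    "omega_b_h2": 0.0224,     # physical baryon density
    "omega_c_h2": 0.120,      # physical cold dark matter density
    "100_theta_s": 1.041,     # angular scale of the sound horizon
    "tau": 0.054,             # optical depth to reionization
    "ln_1e10_A_s": 3.045,     # amplitude of primordial fluctuations
    "n_s": 0.965,             # spectral index of primordial fluctuations
}

for name, value in lcdm_base_parameters.items():
    print(f"{name:>12s} ~ {value}")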

  • Approach: simulation of the universe to predict how it should look after evolving from different starting conditions

    • EAGLE: Evolution and Assembly of GaLaxies and their Environments: https://icc.dur.ac.uk/Eagle

      • Gravitational collapse of overdense regions evolves first into filaments and nodes of denser material, and then into individual galaxies

    • Depending on the initial conditions, the distribution of this filamentary web changes

      • Simulations that incorporate different proposals for dark matter make very different predictions

        • Few massive particles vs. many light particles

    • Details that we may want to model:

      • Initial conditions: known from CMB

      • Physical dynamics: gravity, hydrodynamics, magnetic fields, radiative transfer, cosmic rays

      • Constituents: dark energy, dark matter, normal matter, other details of matter (stars, planets, etc.)

      • Depending on the level of detail, this ranges from easy to very hard on available supercomputers

  • Challenges of developing this simulation

    • Gravity is hard:

      • Very long-range: so need to exchange information among many different simulated objects 

      • Always attractive: errors accumulate during the simulation (additive errors not compensated by subtraction)

      • All scales matter almost equally (hard to approximate)

      • Cheap calculation: each individual interaction is computationally cheap, leading to inefficient use of available compute resources (see the direct-summation sketch after this block)

    • Gas and stars affect the evolution of the universe

      • A galaxy emits hot gas

      • Powered by a central region of the galaxy that is ~1e3 times smaller

      • Powered in turn by a central black hole that is ~1e6 times smaller than that

    • Full resolution computation on an exascale computer would take 1e24 seconds = 1e6 times age of universe
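
To make the gravity challenges above concrete, here is a minimal direct-summation sketch in plain Python/NumPy (not the SWIFT code): every particle interacts with every other one, each interaction is only a few floating-point operations, and the total work grows as O(N^2), which is why production codes rely on tree and mesh approximations.

import numpy as np

def direct_gravity_accel(pos, mass, G=1.0, softening=1e-3):
    """Pairwise gravitational acceleration on each particle, O(N^2) work."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                              # vectors to all particles
        r2 = (d * d).sum(axis=1) + softening**2
        r2[i] = np.inf                                # exclude self-interaction
        acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(1_000, 3))
mass = np.full(1_000, 1.0 / 1_000)
print(direct_gravity_accel(pos, mass).shape)   # (1000, 3); doubling N quadruples the work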

  • Need software solutions to make this tractable: 

    • Multiscale

      • Multi-grid

      • Particle splitting (sketched after this block)

      • (semi-)Lagrangian

      • Focus most of the compute on a single region

    • Load varies in both time and scale, so the numerical scheme must adapt dynamically to the current state of the simulation

      • Dynamic re-meshing

      • Task-based parallelism

      • But this means the computer spends more time on management logic at the expense of computation
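
Particle splitting, one of the multiscale ingredients listed above, can be sketched in a few lines. This is a toy scheme with made-up parameters, not FLAMINGO's actual refinement criterion: a particle that has grown too massive for the local resolution is replaced by lighter children that share its mass and keep its velocity, so mass and momentum are conserved.

import numpy as np

def split_particle(position, velocity, mass, n_children=2, spread=1e-3,
                   rng=np.random.default_rng(0)):
    """Split one particle into n_children lighter particles."""
    child_mass = mass / n_children
    offsets = rng.normal(scale=spread, size=(n_children, 3))
    offsets -= offsets.mean(axis=0)                 # keep the centre of mass fixed
    positions = position + offsets
    velocities = np.tile(velocity, (n_children, 1)) # momentum conserved
    masses = np.full(n_children, child_mass)
    return positions, velocities, masses

pos, vel, m = split_particle(np.zeros(3), np.array([1.0, 0.0, 0.0]), 8.0)
assert np.isclose(m.sum(), 8.0)                     # mass conserved
assert np.allclose((m[:, None] * vel).sum(axis=0), 8.0 * np.array([1.0, 0.0, 0.0]))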

  • Example: Astro-SPH (smoothed particle hydrodynamics)

    • Task-based parallelism

    • Threads work on regions of space that are available
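
Below is a toy rendering of the task-based idea (SWIFT itself is written in C with its own scheduler; this Python sketch only mimics the structure): the domain is cut into cells, each cell becomes an independent task, and a pool of workers picks up whichever cells are ready.

from concurrent.futures import ThreadPoolExecutor, as_completed
import numpy as np

def density_task(cell_id, positions):
    """Stand-in for the per-cell work, e.g. an SPH density loop."""
    return cell_id, len(positions), positions.mean(axis=0)

rng = np.random.default_rng(1)
cells = {i: rng.uniform(size=(int(rng.integers(10, 1000)), 3)) for i in range(64)}

results = {}
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(density_task, i, pts) for i, pts in cells.items()]
    for fut in as_completed(futures):               # results arrive as tasks finish
        cell_id, n_particles, centre = fut.result()
        results[cell_id] = n_particles

print(f"processed {len(results)} cells")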

  • Modeling astrophysical details

    • Large-scale structures of the universe can be fully simulated

    • Small scale structures need to be approximated via sub-grid models

      • cooling/heating of gas

      • Star formation

      • Enrichment of gas by stars

      • Black holes

      • ….

    • These models are approximate, so it is hard to quantify their error or how it propagates into the dynamics of the larger simulation

      • Simple empirical parameter-fit models

      • Machine learning approach: 

        • Run simulation of target phenomenon and train an ML model on its dynamics

        • ML model is approximate and very cheap to run

        • Use ML model to simulate small-scale phenomena

      • Observational approach: use observations of the relevant phenomenon directly
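
The machine-learning approach above can be sketched as follows. The "expensive" small-scale model here is an invented placeholder function, not real sub-grid physics: the point is only that a cheap regressor trained on its outputs can stand in for it inside the large simulation.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def expensive_subgrid_model(log_density, log_temperature):
    """Placeholder for a costly small-scale calculation (toy formula)."""
    return np.sin(log_density) + 0.5 * log_temperature**2

X = rng.uniform(-2.0, 2.0, size=(5000, 2))          # (log density, log temperature)
y = expensive_subgrid_model(X[:, 0], X[:, 1])

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X, y)

# Cheap evaluation inside the big simulation loop:
print(emulator.predict([[0.3, -1.2]]))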

  • FLAMINGO project: https://flamingo.strw.leidenuniv.nl/

    • Models many phenomena simultaneously: gravity, gases, stars, black holes

    • Calibration of sub-grid models

      • Run many simulations with different parameters for the sub-grid models

      • Use Gaussian processes to infer the probability distribution over the values of these parameters (sketched below)

      • Validated against observational data

      • Fine-grained simulations not used for validation because the basic physics of stars, black holes, etc. are still being developed
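
A minimal sketch of the calibration loop described above. The parameter name, the simulated outputs, and the "observed" value are all invented for illustration; the real calibration involves more parameters and more observational data.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Pretend each row is one simulation: (feedback_efficiency,) -> gas fraction
theta = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])
gas_fraction = np.array([0.17, 0.14, 0.11, 0.09, 0.08])   # toy outputs

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                              normalize_y=True).fit(theta, gas_fraction)

grid = np.linspace(0.05, 0.95, 200).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)

observed = 0.12                                     # toy "observed" value
best = grid[np.argmin(np.abs(mean - observed))]
print(f"best-fitting feedback efficiency ~ {best[0]:.2f}")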

    • Computation Cost:

      • 42 days on 30k CPUs

      • 31M CPU hours

      • 145 MWh of power use 

      • £20k cost

      • 3.9 tonnes of CO2
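
A quick arithmetic check of the figures above; only the ratios are derived here, the inputs are the quoted numbers.

days, cpus = 42, 30_000
cpu_hours = days * 24 * cpus
print(f"{cpu_hours / 1e6:.1f} million CPU hours")          # ~30 million, consistent with 31M

energy_kwh, cost_gbp, co2_tonnes = 145_000, 20_000, 3.9
print(f"{cost_gbp / energy_kwh:.3f} GBP per kWh")           # ~0.14
print(f"{co2_tonnes * 1e6 / energy_kwh:.0f} g CO2 per kWh") # ~27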

    • Validation against observational data must account for the properties of telescopes, atmosphere, interference from the moon (affects the distribution of sky patches that could be observed), etc.

    • Outcome: predictions still differ from the observations, so more work to be done on understanding the physics that drive the universe

  • Debugging

    • For solvable/standard parts of the PDEs, one can compare to reference solutions or other simulations (toy example below)

    • To figure out which sub-grid models are most responsible for the error, need to look for which parts of the prediction are wrong

    • There are questions about whether the major laws (e.g. gravity) need revision. Simulation allows modeling of alternatives.
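
As a toy illustration of comparing against a reference solution: integrate a system whose exact answer is known (a harmonic oscillator) with the same kind of kick-drift-kick leapfrog scheme used in N-body codes, and measure the error against the analytic solution. The system and step size here are chosen arbitrarily for illustration.

import math

def leapfrog_oscillator(x0=1.0, v0=0.0, omega=1.0, dt=0.01, steps=10_000):
    """Kick-drift-kick integration of x'' = -omega^2 x."""
    x, v = x0, v0
    for _ in range(steps):
        v += -omega**2 * x * dt / 2.0     # half kick
        x += v * dt                       # drift
        v += -omega**2 * x * dt / 2.0     # half kick
    return x

t_end = 0.01 * 10_000
numerical = leapfrog_oscillator()
analytic = math.cos(t_end)                # exact solution for x0=1, v0=0
print(f"error after {t_end:.0f} time units: {abs(numerical - analytic):.2e}")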
