Molecular Simulations (ragdolls)


Mood Phaneuf

Aug 3, 2024, 4:22:54 PM8/3/24
to backrestthetu

I'm a programmer. I code in C++, C#, HTML5, and PHP. There are many graphics engines I have at my disposal. The question is: does there exist a graphics engine that is as true to our reality as possible, given our current understanding of physics? For instance, I can easily create macroscopic objects in a 3D space, but what about all of the elements in reality that make up these macroscopic objects? What if, for instance, I wanted to start from the bottom up, creating simulations at the Planck scale, then particles, atomic structures, cells, microbiology, etc.? What if I want to simulate quantum mechanics? Of course I can make a model of an atom, but it winds up not being exactly analogous to real life.

I would like to correctly simulate these structures and their behaviors. Assume also that I have access to an immense amount of parallel computing processing power (I do). Perhaps some non-gaming scientific graphics engines exist that I'm not aware of.

Unless you are an important person in the Chinese computational science world (using Tianhe-2), or you have access to secret government computers us mere mortals don't know exist (so they don't appear in rankings of the best supercomputers in the world), I probably have access to more ;) And I can't even imagine tackling one billionth of one billionth of the problem you want to tackle. In fact, I'm certain the combined efforts of every computer on the planet, secret or not, could not begin to approach the problem you want to solve.

You very much overestimate the amount of computing power available on Earth. To connect atomic scales to macroscopic ones, a useful number to have in mind is Avogadro's number, about $6\times10^{23}$. That's how many atoms there are in a few grams of typical materials. Going back to that top computer in the world, it has about $1.5\times10^{15}$ bytes of memory. That is, even at one byte per atom, you couldn't store enough information in memory to represent a speck of dust. And at less than $10^{17}$ floating point operations per second, it would take eons to do any useful calculation on even that reduced dataset.
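A quick back-of-envelope check of those figures. The memory and FLOPS numbers are from the text; the one-microgram speck and carbon-like composition are illustrative assumptions:

```python
# Can the top supercomputer hold one byte per atom of even a speck of dust?
AVOGADRO = 6e23            # atoms per mole (a few grams of typical material)
TOP_MEMORY_BYTES = 1.5e15  # ~1.5 petabytes of RAM, per the text

atoms_per_gram = AVOGADRO / 12    # assuming carbon: 12 g per mole
speck_of_dust_g = 1e-6            # assumed one-microgram speck
atoms_in_speck = atoms_per_gram * speck_of_dust_g  # ~5e16 atoms

print(f"Atoms in a 1 ug speck: {atoms_in_speck:.1e}")
print(f"Bytes of memory:       {TOP_MEMORY_BYTES:.1e}")
print(f"Shortfall factor:      {atoms_in_speck / TOP_MEMORY_BYTES:.0f}x")
```

Even at an absurdly optimistic one byte per atom, the machine comes up more than an order of magnitude short of a microgram of carbon.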

Add in the Planck scale being a good $20$ to $25$ orders of magnitude smaller than the atomic scale, and the problem becomes mind-bogglingly overwhelming. Note there are only about $10^{50}$ atoms on Earth, so turning the entire planet into a single computer, composed of science-fiction-level single-atom transistors, would still fall short of being able to render macroscopic things exactly from such first principles.
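The Planck-scale gap can be illustrated the same way. A sketch, assuming the standard Planck length of about $1.6\times10^{-35}$ m (that value is not from the text):

```python
# How many Planck-scale cells fit in one cubic centimetre of material?
PLANCK_LENGTH_M = 1.6e-35   # assumed standard value
EARTH_ATOMS = 1e50          # figure from the text

cm3_in_m3 = 1e-6
planck_volume_m3 = PLANCK_LENGTH_M ** 3
cells = cm3_in_m3 / planck_volume_m3   # ~2e98 cells

print(f"Planck cells in 1 cm^3: {cells:.1e}")
print(f"Atoms on Earth:         {EARTH_ATOMS:.0e}")
```

Roughly $10^{98}$ cells per cubic centimetre, against only $10^{50}$ atoms on the whole planet to build transistors from.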

A laudable goal, and one shared by many physicists. However, a key ingredient in physics is knowing what approximations to make so as to make a problem tractable. This applies to computation as much as anything else. In particular, physics often divides up large, complicated systems into components with simple rules, whose behavior is inferred from dividing up examples of them into smaller components.

To make this concrete, consider a hypothetical example. If you want to model a person, you have components at the organ level, like blood and skin. Each one has reasonably simple, aggregate behavior that is an approximation to its "true" underlying, fundamental nature (but often a very good approximation!). You know the behavior of blood because you can do simulations of ideal, continuum fluids with similar viscosities. You know how viscosity works from empirical experiment, but if you wanted to simulate it you could make a simulation of a small patch of approximate water molecules imbued with electrostatic interactions. These electrostatic interactions can come from molecular dynamics simulations, and so on.

In fact, I posit that if all you do is simulate nature as closely as possible, you have done nothing at all. You could have just let nature run its course. Simulation is only valuable as a tool, like experiment or theory, for gaining new insights into how nature works.

Finite element analysis means taking a solid body and breaking it down into tetrahedral or cubic elements and applying the laws of physics (normally just the stress-strain relationship) to each one. To get a really good result on a relatively small object, you're going to need about 1000 elements in each of 3 dimensions, so that's a gigabyte of memory assuming one byte per element (in reality each element will need at least ten times that to store, as a minimum, its 6 degrees of translational and rotational freedom). This is already looking like a memory issue for a regular PC. By coarsening the mesh a bit (using larger, and therefore fewer, elements), we can run such a simulation on a PC, but it may take several hours for the effects of even a static load to propagate through the model to convergence. Modeling oscillation (time + 3 spatial dimensions) is pretty much impossible on a PC, both in terms of the time taken and the amount of data generated (several gigabytes per timestep). Reducing to time + 2 spatial dimensions helps a lot.
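A minimal sketch of that memory estimate, assuming double-precision storage for the 6 degrees of freedom per element:

```python
# Memory needed for the 1000^3 finite element mesh described above.
n = 1000                  # elements along each axis
elements = n ** 3         # one billion elements
dof_per_element = 6       # 3 translational + 3 rotational degrees of freedom
bytes_per_value = 8       # assumed double-precision floats

bytes_total = elements * dof_per_element * bytes_per_value
print(f"Elements:     {elements:.0e}")
print(f"Memory (GiB): {bytes_total / 2**30:.0f}")
```

Around 45 GiB just for the state vector, before any stiffness matrices or solver workspace, which is indeed beyond a regular PC.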

In order to make the calculations reasonable, civil and structural engineers use a simplification to perform structural analysis. Programs like Staad Pro work with elements such as beams and columns, assuming they will bend according to known models. The engineer builds a Meccano-like model for the program input, specifying the nodes where the beams connect, indicating whether the joint is fixed or free rotation is possible, etc. In this way, full four-dimensional (time + 3 space) analysis is possible.
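The "known model" such a beam element relies on is classical Euler-Bernoulli bending: the whole beam reduces to a closed-form formula instead of millions of mesh cells. A sketch for a cantilever with an end load; all numbers below are illustrative assumptions, not from the text:

```python
# Euler-Bernoulli cantilever: tip deflection under an end load F is
#   delta = F * L^3 / (3 * E * I)
F = 1_000.0        # end load, N (assumed)
L = 2.0            # beam length, m (assumed)
E = 200e9          # Young's modulus of steel, Pa
b = h = 0.05       # square cross-section, m (assumed)
I = b * h**3 / 12  # second moment of area, m^4

delta = F * L**3 / (3 * E * I)
print(f"Tip deflection: {delta * 1000:.2f} mm")
```

One multiply-and-divide per member, which is why a whole building's frame fits comfortably on a PC.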

Computational fluid dynamics is the equivalent of finite element analysis, but for fluids rather than solids. Again we use a mesh of cubes or tetrahedra to represent the volume, but there are different issues. This is the type of simulation I have personal experience of, using Floworks software, which uses a cubic mesh and very usefully allows you to reduce the mesh scale to a half or a quarter of the main mesh in critical areas. Nevertheless, the experience has led me to believe that you can "predict" almost anything you want with computational fluid dynamics software. I see it as a useful qualitative tool for identifying problem areas, rather than a means of quantitatively predicting pressure drop vs velocity.

Again we need about a billion elements for a really good simulation of a small object in 3 spatial dimensions, with at least pressure and three degrees of freedom in velocity for each element. Again, predicting flow in three spatial dimensions plus time uses excessive computing power. But unfortunately in the case of computational fluid dynamics, a system with stable inlet and outlet flow is very likely to have an oscillation somewhere inside the model, which is not the case for finite element analysis under constant load. Sometimes we can simplify to 2 spatial dimensions. A 2-spatial-dimension + time analysis of a cross section of a chimney with wind blowing across it can be done, and it may reveal that the system oscillates due to vortex shedding, which, apart from the cyclic stress placed on the chimney, results in greater drag than would be seen with a time-averaged model. The equations used are called the Navier-Stokes equations, and though very simple in concept, they can lead to surprisingly complex results if turbulence ensues (Google "Reynolds number" for more info). It isn't really possible to extend the calculation into 4 dimensions on a PC, so approximations have to be made to account for the effect of turbulence.
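The chimney case can be sketched numerically. The wind speed and diameter below are assumed values, and the Strouhal number of about 0.2 for a circular cylinder in this regime is a standard textbook figure, not from the text:

```python
# Vortex shedding off a chimney cross-section (2-D + time example above).
# Strouhal relation: shedding frequency f = St * U / D.
U = 10.0      # wind speed, m/s (assumed)
D = 1.0       # chimney diameter, m (assumed)
nu = 1.5e-5   # kinematic viscosity of air, m^2/s

reynolds = U * D / nu          # ~7e5: well into the turbulent regime
strouhal = 0.2                 # textbook value for a circular cylinder
f_shedding = strouhal * U / D  # Hz

print(f"Re = {reynolds:.1e}, shedding at ~{f_shedding:.1f} Hz")
```

A shedding frequency of a couple of hertz is exactly the kind of cyclic load that a time-averaged model would miss entirely.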

In my field (combustion and heat transfer) the burner manufacturers introduce some simple combustion thermochemistry into their models. That adds another level of complexity which means a pretty powerful computer is needed.

So good luck, go ahead and perform a computational fluid dynamics simulation with a 1000x1000x1000 mesh for 1000 timesteps and you will generate several terabytes of data. Don't forget that each iteration will need to converge properly before you continue to the next timestep. Interpreting all that data is another issue. Do this every day for a year and you will have several petabytes. Do you have that much storage? You will quickly see why engineers prefer to use the relationship between Reynolds number and Friction factor rather than use the Navier-Stokes equations to calculate everything from first principles.
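The data-volume arithmetic above, assuming double precision, four stored values per cell, and one run per day:

```python
# Data generated by the 1000x1000x1000 CFD mesh over 1000 timesteps.
cells = 1000 ** 3
values_per_cell = 4    # pressure + 3 velocity components
bytes_per_value = 8    # assumed double precision
timesteps = 1000

per_step = cells * values_per_cell * bytes_per_value  # bytes per timestep
per_run = per_step * timesteps                        # bytes per run
per_year = per_run * 365                              # one run per day

print(f"Per timestep: {per_step / 1e9:.0f} GB")
print(f"Per run:      {per_run / 1e12:.0f} TB")
print(f"One run/day for a year: {per_year / 1e15:.1f} PB")
```

Tens of gigabytes per timestep, tens of terabytes per run, and over ten petabytes a year, before you even start interpreting any of it.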

Whoa! That really is a lot of computing power. A molecule of haemoglobin weighs about 64,000 daltons (about the same as 64,000 hydrogen atoms). A dalton is 1.66E-24 g, the reciprocal of Avogadro's number. Haemoglobin is an interesting protein because it has four separate binding sites for oxygen, and binding of one oxygen causes a change in conformation that enhances the binding strength of the others; it's a kind of natural molecular machine (quite apart from the rather more complex molecular machines like ribosomes and cell membranes).

I don't have the exact molecular formula for haemoglobin handy, but let's make some assumptions. Let's assume the number of protons and neutrons is equal. That means there are 32,000 protons and 32,000 electrons. (I'm not really interested in the protons, and we'll stay away from nuclear physics, but it's the best way to get an idea of the number of electrons.) The average atomic mass will be somewhat similar to glucose: about 7.5. In round figures, let's say there are 10,000 atoms. In addition, proteins keep their shape due to being surrounded by solvent, so let's say we need to multiply both those numbers by 10: that's 320,000 electrons and 100,000 atoms.
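That arithmetic can be checked in a few lines (note the solvated electron count comes out to 320,000, i.e. 32,000 × 10):

```python
# Rough electron and atom count for haemoglobin, per the estimate above.
mass_daltons = 64_000
protons = mass_daltons // 2   # assume equal numbers of protons and neutrons
electrons = protons           # neutral molecule: 32,000 electrons

avg_atomic_mass = 7.5         # glucose-like composition (C6H12O6: 180/24)
atoms = 10_000                # 64,000 / 7.5 ~ 8,500; call it 10,000

# Include a surrounding solvent shell ~10x the protein itself:
solvated_electrons = electrons * 10  # 320,000
solvated_atoms = atoms * 10          # 100,000

print(protons, electrons, atoms, solvated_electrons, solvated_atoms)
```

So a "first-principles" treatment of one solvated protein already means a few hundred thousand interacting electrons.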

Now you might hope to make some simplifications regarding the charge interactions and be able to predict the shape of the molecule, and maybe even the binding of O2 (though that is rather dependent on the iron atom where the O2 is bound, so you might prefer to rely on known data for that). This type of thing is indeed done and is one way of trying to find suitable drug molecules that bind to receptors. But when I was in the industry 15 years ago, it was far more fashionable to use automation to physically synthesize and screen vast numbers of potential drug substances.
