DEME Bin Max Allowance


Mohammad Wasfi

Dec 29, 2022, 9:13:02 PM
to ProjectChrono
Hello, 

This is a DEME-related question.

How does the function SetCDUpdateFreq affect the maximum bin allowance? I have been running a simulation with SetCDUpdateFreq(6) and everything was working fine. However, when I changed the input from 6 to 15, I started getting an error saying "Bin 32 contains 264 triangular mesh facets, exceeding maximum allowance (256)". Also, I am running a cone penetration test; is a value of 6 considered low for such a test?


Thank you so much in advance, 
 

Ruochun Zhang

Dec 30, 2022, 4:21:09 AM
to ProjectChrono
Hi Mohammad,

SetCDUpdateFreq sets the maximum number of time steps by which the DEM physics can run ahead of the contact detection. So naturally, if this number is large, the contact detection subroutine needs to add a bigger envelope/safety margin to ensure it does not miss a contact that could appear in the "future". That is why, when you increase it, more geometries can end up in one bin and potentially cause problems, since "geometries" here means the geometries with the safety margin included.
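
To put a rough number on that envelope: if dT is allowed to run N steps ahead of kT, a geometry moving at speed v can travel up to about v * N * dt before the next contact detection, so each geometry is effectively fattened by roughly that much. Below is only a back-of-envelope illustration of this idea, with made-up numbers, not the exact margin formula DEME uses internally:

    // Back-of-envelope only -- not the exact margin formula DEME uses internally.
    #include <cstdio>

    int main() {
        double dt = 5e-6;         // physics time step [s], example value
        double v_max = 2.0;       // estimated max geometry speed [m/s], example value
        int cd_update_freq = 15;  // the value passed to SetCDUpdateFreq

        // dT may advance up to cd_update_freq steps before kT refreshes contacts,
        // so each geometry is conceptually enlarged by about this much:
        double margin = v_max * cd_update_freq * dt;
        std::printf("approx. extra margin per geometry: %g m\n", margin);
        return 0;
    }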

But this should really be less of a problem for meshes, and I am a bit surprised here. You know, I wrote this error message, but I myself have never seen it in my simulation outputs. Meshes used in DEME are expected to have triangles somewhat larger than the particles (if you do need the mesh to represent geometric features that are smaller than your particles, then the contact force won't be calculated accurately anyway, because your particle shapes are approximations too), but that does not seem to be the cause in your simulation, as you see problems with the mesh before you see them with the particles. If you just happen to be using huge particles (and therefore comparably small mesh facets) for this simulation and you just want it to run, then you can always use smaller bin sizes to resolve the problem, as in the sketch below.
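
For example (a sketch only: SetCDUpdateFreq is the only call mentioned in this thread, so treat the header path and the bin-size setter below as assumptions to be checked against the API version you have):

    // Sketch, assuming DEME's DEMSolver API; verify the names against your headers.
    #include <DEM/API.h>   // header path is an assumption

    void ConfigureCD(deme::DEMSolver& solver, double particle_radius) {
        solver.SetCDUpdateFreq(15);
        // Smaller bins -> fewer facets per bin, at the cost of more bins in total:
        solver.SetInitBinSize(2.0 * particle_radius);  // setter name is an assumption
    }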

As for what number you should use, this is hardware- and problem-dependent. The target is always the smallest number that minimizes how often dT is held back, which means right now you have to try things out to find the best choice. I found that with A100s, many typical use cases benefit from setting it to something like 20. With consumer-grade GPUs, it tends to be larger, like 30 or 40. Just so you know, if the sweet spot is 20 and you set it to 5, for example, then you are pretty much running at 25% speed; if you set it to 30, on the other hand, you will be running at a slightly reduced speed, but not too bad. A number too large, however, carries the risk of putting too many geometries in a bin.
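
The 25% figure is just bookkeeping: if one contact detection pass takes about as long as 20 dT steps, but dT is only allowed to run 5 steps ahead, dT spends most of its time waiting. Here is a toy model of that argument (it deliberately ignores the extra work that larger safety margins create, which is presumably why values above the sweet spot run slightly slower):

    // Toy model of the dT/kT pipeline, only to illustrate the 5-vs-20 argument.
    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    int main() {
        int kT_cost_in_dT_steps = 20;  // suppose one CD pass takes as long as 20 dT steps
        for (int update_freq : {5, 20, 30}) {
            // dT can do at most update_freq steps per CD pass before it must wait,
            // so throughput is capped at update_freq / kT_cost (up to 100%):
            double speed = std::min(1.0, double(update_freq) / kT_cost_in_dT_steps);
            std::printf("SetCDUpdateFreq(%d): ~%.0f%% of the no-wait speed\n",
                        update_freq, 100.0 * speed);
        }
        return 0;
    }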

Now there is good news. After deliberating, I think I know a way to make the bin size and CDUpdateFreq adapt automatically and to remove the limit on the number of geometries in a bin, meaning that once it materializes you will no longer see this error. If it pans out, I'll let people know.

Thank you,
Ruochun

Mohammad Wasfi

Jan 1, 2023, 4:08:01 PM
to ProjectChrono
Hi Ruochun, 

Thank you so much for your reply. I have been trying to reduce the bin size to resolve the problem. However, I keep getting similar errors until I reach a point where I exceed the maximum number of bins allowed. I have also tried to shrink my domain to make smaller bins feasible. I got to a point where my domain is reduced to the very minimum and my bin size is the smallest possible without exceeding the maximum number of bins, but I am still having the same problem, e.g. "Bin 7 contains 357 triangular mesh facets, exceeding maximum allowance (256)". I was wondering what it would cost to increase either the maximum number of mesh facets a bin can contain or the maximum number of bins in a simulation.

About the mesh size: I am trying to verify my implementation of the cohesion model from this paper (Discrete element modeling of planetary ice analogs: mechanical behavior upon sintering | SpringerLink). In that paper, the cone has a small mesh (0.2 mm) and the shaft has a larger mesh (2 mm), while the particle size is 2 mm. Do you see any obvious problems with that? You mentioned that having mesh facets smaller than the particles makes the force calculations inaccurate. Could you please explain that a little more?

Thank you so much for your help, I really appreciate it. 

Ruochun Zhang

Jan 2, 2023, 12:01:09 AM
to ProjectChrono
Hi Mohammad,

Yes, I see a problem: the mesh facets are 10 times smaller than the particles. Think of it this way: the characteristic length scale of your simulation is then 0.2 mm, not 2 mm, and the bin size has to be set based on that 0.2 mm estimate. Considering the domain size, this therefore becomes a massive simulation. That said, you can re-define the data type for binID so it can support a much larger number of bins. In VariableTypes.h, change the line

    typedef unsigned int binID_t;

to

    typedef uint64_t binID_t;

Be warned, after this you need to do a clean rebuild (starting from a new, empty build directory) to have any hope of this working. Also, binID_t as uint64 is less tested and I cannot be sure it works for your case. But you are welcome to try, and if anything goes wrong, you can always let us know.
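
For reference, a quick back-of-envelope shows why the 32-bit binID_t runs out so quickly once the bins shrink (the domain size below is a placeholder, not yours):

    // Rough estimate: smallest bin size a 32-bit binID_t can support in a cubic domain.
    #include <cmath>
    #include <cstdio>

    int main() {
        double domain_edge = 0.5;           // cubic domain edge length [m], placeholder
        double max_bins_32 = 4294967296.0;  // 2^32 distinct IDs with a 32-bit binID_t
        // Roughly (domain_edge / bin_size)^3 bins fill a cubic domain, so:
        double min_bin_size = domain_edge / std::cbrt(max_bins_32);
        std::printf("smallest bin size before 32-bit IDs run out: ~%g m\n", min_bin_size);
        return 0;
    }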

In terms of how this impacts performance, I have to do more testing to understand. But it should make kT run slower and eat much more memory. This is not only because of the 64-bit integers, but also because with small bins each sphere touches more bins, making the array storing bin--sphere pairs much larger. Meanwhile, a sphere cannot touch more than 65536 bins with this code (unless you also re-define binsSphereTouches_t, that is), so you also don't want the bins to be extremely small; the estimate below shows how quickly that count grows.
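
As a rough way to see how many bins one sphere touches (again only an estimate, not the exact counting kT does):

    // Rough estimate of how many bins one sphere's bounding box overlaps.
    #include <cmath>
    #include <cstdio>

    int main() {
        double r = 1e-3;  // sphere radius [m], example value
        double b = 1e-4;  // bin size [m], example value
        // The sphere's bounding box spans about 2r/b bins per axis, plus partial bins:
        double bins_per_axis = 2.0 * r / b + 2.0;
        double bins_touched = std::pow(bins_per_axis, 3);
        std::printf("bins touched per sphere: ~%.0f (stock binsSphereTouches_t limit: 65536)\n",
                    bins_touched);
        return 0;
    }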

Right now you can probably agree that the better approach is to do something about your mesh, rather than what I just described. The point of my previous post is that you probably don't need the smallest facet in your mesh to be 0.2 mm if the smallest particle in the simulation is 2 mm. How much does the finest feature in your mesh really contribute to simulation accuracy? You won't capture the physics you had hoped to achieve with the 0.2 mm mesh anyway, because the particles are so large that they are the bottleneck and the main source of systematic error. I imagine the 0.2 mm facets are fillet facets of some sort, or simply a lot of triangles representing a shape that does not require that many triangles to approximate. If that is the case, you can always remesh to remove those small features so that your mesh is made of large facets, on par with or larger than the particles. Then everything is easier, and you are not missing any important physics, I suppose.

I understand that sometimes we get our hands on meshes that just have fine features. We probably don't care about those features, but we don't want to change the mesh either; we just want the simulation to run. That is why I was working on some changes to ease this restriction. I don't want these changes to promote inefficient usage of this code though, so my main argument here is that you have to think about whether using a mesh this fine is meaningful, given the particle size you are using.

Thank you,
Ruochun