Hi Mohammad,
Yes, I see a problem. The mesh facet size is 10 times smaller than the particle. Think of it this way: the characteristic length scale of your simulation is in this case 0.2mm, not 2mm, and the bin size has to be set based on that 0.2mm estimate. Given the domain size, this therefore becomes a massive simulation. You can indeed re-define the data type for binID so it supports a much larger number of bins: in VariableTypes.h, change the line
typedef unsigned int binID_t;
to
typedef uint64_t binID_t;
Be warned, after this you need to do a clean rebuild (starting from a new, empty build directory) for this to have any chance of working. Also, binID_t as uint64_t is less tested and I cannot guarantee it works for your case. You are welcome to try, though, and if anything goes wrong, you can always let us know.
In terms of how this impacts performance, I have to do more testing to understand it fully. But it should make the kT run slower and use much more memory. This is not only because of the 64-bit integers, but also because with small bins each sphere touches more bins, making the array storing bin--sphere pairs much larger. Meanwhile, a sphere cannot touch more than 65536 bins with this code (unless you also re-define binsSphereTouches_t, that is), so you don't want the bins to be extremely small either.
Right now you can probably agree that the best approach is to do something about your mesh, rather than what I just described. The point of my previous post is that you probably don't need the smallest facet in your mesh to be 0.2mm when the smallest particle in the simulation is 2mm. How does the finest feature in your mesh help with simulation accuracy? You won't capture the physics you hoped to achieve with the 0.2mm mesh anyway, because the particles are so large that they are the bottleneck and the main source of systematic error. I imagine the 0.2mm facets are fillet facets of some sort, or simply a large number of triangles representing a shape that does not require that many triangles to approximate. If that is the case, you can always remesh to remove those small features so your mesh is made of large facets, on par with or larger than the particles. Then everything is easier, and I suppose you are not missing any important physics.
I understand that sometimes we get our hands on meshes that just have fine features. We probably don't care about those features, but we don't want to change the mesh either; we just want the simulation to run. I understand. That is why I have been working on some changes to ease this restriction. I don't want those changes to promote inefficient use of this code, though, so my main argument stands: you have to think about whether using a mesh this fine is meaningful, given the particle size you are using.
Thank you,
Ruochun