Hello everyone,
I am currently using DEM-Engine to model planetary regolith in scenarios involving interactions with space exploration objects. I would now like to extend this modeling to include the study of regolith–fluid interactions.
In your opinion, what would be the most convenient approach: integrating DEM with a solver such as DualSPHysics, or directly implementing a fluid model within DEM itself? In both cases, this would require modifications to the DEM codebase. That is why I am writing here, hoping to get some feedback from the developers: perhaps there is already something undocumented, or maybe you have already considered an approach in this direction.
Thanks in advance,
Sabrina
Sabrina,
In theory, you can do this with the SPH solver in Chrono; hopefully my colleague Radu will comment on this. It'd require very long sim times because the number of SPH particles would be very large, which is needed to capture the dynamics of the grains.
Another way to do it is DEM-LBM. Chrono has no support for this and no plan to implement it in the immediate future. The sim times would probably be very long, but it'd be a nice approach. If Ruochun sees this, he might comment on the idea.
Lastly, you can homogenize this and represent the regolith–fluid interactions through a continuum and then use the CRM solver in Chrono. You'd need the right material model, which means going beyond the hypo-elastoplastic material model we have there right now (Drucker-Prager plasticity, with no cap).
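(For concreteness, in standard notation rather than anything Chrono-specific: by Drucker-Prager I mean the open-cone yield condition f = sqrt(J2) + alpha*I1 - k <= 0, with I1 the first stress invariant and J2 the second deviatoric invariant; "no cap" means there is no second surface closing the cone on the compressive side, so purely hydrostatic compression never yields plastically.)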
Dan
---------------------------------------------
Bernard A. and Frances M. Weideman Professor
NVIDIA CUDA Fellow
Department of Mechanical Engineering
Department of Computer Science
University of Wisconsin - Madison
4150ME, 1513 University Avenue
Madison, WI 53706-1572
---------------------------------------------
Hi Sabrina,
This is very ambitious indeed. I can comment on it based on what I know, and you can decide if it is relevant for your research.
First, I hope the APIs provided in DEM-Engine are enough to let you include the thermal and electrostatic systems. It seems they are, but feel free to post new threads here if you later discover other needs.
The biggest bottleneck in regolith–fluid simulations is the enormous scale required, and that's why Dan suggested using another, unified model for it. But since your focus is on building a comprehensive model, not an engineering solution for a particular problem, that's not an option, and I assume you'd want two-way coupling (i.e., as much coupling as possible) in your simulation. I'd also assume you don't need extreme fluid velocities, say above Mach 0.3. Then the biggest question is: since your DEM-side model is already heavy, how much emphasis would you like to put on the fluid part? Or, put another way, I think it's a question of which fluid–solid ping-pong paradigm to use, not which package to use. One thing is for sure: none of the approaches will be "convenient" to make happen.
Using SPH is fine, but I suspect you'll need SPH markers much smaller than the DEM particles, so limiting the overall problem scale is important. It may also face more challenges if the Reynolds number is high. In addition, it would mean integrating two GPU packages, which is a more serious software engineering task; there might be people who have tried that on the DualSPHysics forum. I'd say if you go this route, you are treating the fluid part no less seriously than the DEM part, and consulting the developers there beforehand is certainly needed.
FVM- or FEA-based CFD solvers are fine too, and I can imagine myself building/using a PETSc-based solver for this task. The key would be to update the boundary represented by the DEM particles (if using a moving mesh) or to track/mark the nodes influenced by the moving boundary (if using an immersed boundary). That has very little to do with DEM itself: it only needs particle position/velocity info, which DEM-Engine can certainly supply. I'd probably recommend an immersed-boundary approach, for reasons I'll give in the LBM-related part. This is also how I imagined DEM-Engine users would do fluid co-simulation. As you will have a lot of things to do on the host anyway (mesh adjustment, node marking...), you'll use DEM-Engine's trackers to bring the information to the host, update the mesh and the fluid solver, run it, and then feed the fluid force back to DEM-Engine. This positions you more as a user of computing packages than as a solver developer. This approach can be used regardless of whether you think the fluid is an emphasis, as you can always use fewer features of the solver to make the fluid part easier and faster, or do the opposite. But you probably won't modify the fluid solver much, so you may have less coding flexibility.
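To make that ping-pong loop concrete, here is a rough host-side sketch in Python. Every name in it (the tracker calls, fluid.update_solid_region(), dem.apply_external_force(), dem.advance()) is a placeholder for whatever your tracker objects and fluid solver actually expose, not verbatim DEM-Engine or CFD API, and the coupling window is one you would choose yourself:

    def couple_one_window(dem, tracker, fluid, n_particles, radii, dt_couple):
        """One coupling window of the ping-pong loop; all callee names are placeholders."""
        # 1) Pull particle states from the GPU-resident DEM solver via trackers.
        pos = [tracker.Pos(i) for i in range(n_particles)]
        vel = [tracker.Vel(i) for i in range(n_particles)]
        # 2) Update the moving mesh / immersed-boundary marking in the fluid solver.
        fluid.update_solid_region(pos, radii, vel)
        # 3) Advance the fluid over the window and extract the hydrodynamic
        #    force (and torque, if needed) acting on each particle.
        fluid.advance(dt_couple)
        forces = fluid.particle_forces()
        # 4) Feed those forces back to the DEM solver and advance it over the
        #    same window, then the caller repeats for the next window.
        for i in range(n_particles):
            dem.apply_external_force(i, forces[i])
        dem.advance(dt_couple)

Note that steps 2 and 3 never look inside DEM: they consume positions/velocities and produce forces, which is why the choice of fluid package is largely decoupled from the DEM side.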
You can also write your own fluid solver, but most likely that means the fluid is not a main focus of the research you want to present. If you do, then, as Dan said, I would say LBM is a good choice; I only recently became interested in LBM's use in related co-simulations. Two main benefits:
It's fully Eulerian and therefore easy to use alongside DEM, as the DEM particles are the only moving part. For the LBM part, you just mark the DEM particle-shadowed grid points as solid (a minimal sketch follows right after these two points). It's similar to why I think the immersed boundary is better for your use case. The method is also, in general, easy to implement; you can literally ask ChatGPT to write one for you, after you read the basics of it.
It's massively parallel, and should go well with DEM-Engine on GPUs.
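Here is the minimal sketch I promised above: a plain D2Q9 BGK step in Python/NumPy with full-way bounce-back at the particle-shadowed nodes. The 2D slice, the relaxation time, and the circle-based masking are illustrative assumptions only; a real setup would be 3D (D3Q19 or D3Q27) and would compute the momentum exchange at the boundary to obtain the force fed back to DEM-Engine.

    import numpy as np

    # D2Q9 lattice: discrete velocities, weights, and opposite directions (for bounce-back).
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

    nx, ny, tau = 200, 100, 0.6                   # grid size and BGK relaxation time (illustrative)
    f = np.tile(w[:, None, None], (1, nx, ny))    # populations at rest, density 1

    def solid_mask(centers, radii, dx):
        """Mark lattice nodes shadowed by DEM particles; 2D circles for illustration."""
        X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
        mask = np.zeros((nx, ny), dtype=bool)
        for (px, py), r in zip(centers, radii):
            mask |= (X - px)**2 + (Y - py)**2 < r**2
        return mask

    def lbm_step(f, solid):
        """One collide / bounce-back / stream step of D2Q9 BGK."""
        rho = f.sum(axis=0)                                    # macroscopic density
        u = np.einsum("qi,qxy->ixy", c, f) / rho               # macroscopic velocity
        cu = np.einsum("qi,ixy->qxy", c, u)
        feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(u**2).sum(axis=0))
        f_post = f - (f - feq) / tau                           # BGK collision (fluid nodes)
        f_post[:, solid] = f[opp][:, solid]                    # bounce-back at solid nodes
        for q in range(9):                                     # streaming
            f_post[q] = np.roll(f_post[q], shift=tuple(c[q]), axis=(0, 1))
        return f_post

Per coupling step you would rebuild the solid mask from the current DEM particle positions, run a few lbm_step() calls, and sum the momentum exchanged at the bounced-back populations to get the drag force on each particle.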
The downside is that LBM is certainly much less used and appreciated than, say, FVM. While it should be very serviceable for you, convincing the community might be another issue. You could, of course, implement an FVM solver yourself; it's again very doable if you don't aim too high. It really doesn't matter whether it's fully on GPU and only exchanges device arrays with DEM ("implementing a fluid model within DEM itself") or brings data to the host for processing; in the grand scheme of such an ambitious project it's a minor issue, and we can always resolve it later if it matters.
As for the software we can provide: publishing a GPU-based LBM solver is a possibility in the longer term, but you have a PhD to finish, so it doesn't seem you can wait for us. You could write it yourself, as making one that is at least usable is not too hard. I do have a plan to provide a performance-centric FEA/FVM-based fluid solver on GPU relatively soon. If you are going to spend a couple of months looking into the DEM model before having to consider the fluid, then the timing may line up. It should naturally go well in co-simulation with DEM-Engine or Chrono, as it is from the same family and also allows for step-wise advancing of the simulation. However, as it stands, we cannot promise you a ready-to-use, DEM-capable fluid solution right now.
Ruochun – I read your email, super thoughtful.
Circling back to Sabrina – the question is also how big the problem is.
The FSI solver in Chrono has made huge progress over the last 12 months – Radu and Luning and Huzaifa can speak to that.
If we are talking about 1,000 DEM particles here, then I think the easiest way to go is to simply simulate the entire thing in Chrono: the DEM part using DEME, the CFD side in Chrono::FSI. The solution would be entirely on the GPU. We never optimized the communication for DEM-FSI "on-the-GPU" simulation since we've never been faced with such requests. But the big deal here is that the memory for DEME and FSI is GPU resident and can therefore draw on the TB/s bandwidth and low latency of device memory (compared to host-device traffic). I truly think that if there were funding to do this GPU-GPU, FSI-DEME co-simulation, a full-blown Chrono solution would be top notch.
However, if for the problem at hand Sabrina needs, say, 1,000,000 DEM particles, that's a different story. I think no matter what approach is taken in that case, it's going to be really, really slow if one fully resolves the dynamics of both the particles and the fluid.
Dan
---------------------------------------------
Bernard A. and Frances M. Weideman Professor
NVIDIA CUDA Fellow
Department of Mechanical Engineering
Department of Computer Science
University of Wisconsin - Madison
4150ME, 1513 University Avenue
Madison, WI 53706-1572
---------------------------------------------
Thank you very much for your replies and for the suggestions.
To give you an idea of the scale of my work: during my Master’s thesis I carried out simulations with between 500,000 and 1,000,000 elements. For the fluid extension, I expect the order of magnitude to be similar.
Addressing the point raised by Ruochun about fluid velocities: in my case I need to consider both the solar wind impacting the surface and the rocket exhaust gases. These are therefore regimes that can exceed Mach 1. Moreover, the focus of my work will not be so much on defining the fluid model itself, but rather on analyzing the interactions with the regolith. For this reason, and based on your suggestions, it seems to me that the most reasonable options are:
to evaluate CFD solvers based on FVM or FEA, if I can find a package suitable for my case;
or to consider developing an LBM solver, which from what you say could be easier to integrate.
In the coming months I will be focusing on reviewing the literature, and I will certainly keep in mind the solver you are developing, should it become available in the near future.
I would also like to ask for your advice on hardware. At the moment I am using a workstation with two RTX A4000 GPUs: with this setup, a simulation of about one million particles over a simulated duration of 430 seconds takes around 6 days of computation. Looking ahead to future co-simulations, I have the option to upgrade: which GPUs would you recommend, staying within the mid or mid-high range and leaving top-of-the-line models aside?
Thank you again for your time and support.
Best regards,
Sabrina
Sabrina – a few comments:
I believe you underestimate the work required to implement a fluid solver from scratch and couple it with a multibody and/or DEM solver. The work on the preCICE interfaces will start relatively soon (in a couple of months or so). I think you're better off waiting for us to have an updated architecture of the Chrono::FSI framework and first implementations of preCICE adapters for Chrono MBD and Chrono::SPH, at which point you could look into providing a similar adapter for DEME.
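For orientation only, here is roughly what the main loop of such a DEME adapter could look like with the preCICE Python bindings (v3-style calls, explicit coupling only; implicit coupling needs the additional checkpoint calls). The participant/mesh/data names, the config file name, and every deme_* helper below are placeholders that would have to match the eventual precice-config.xml and the actual DEME API:

    import numpy as np
    import precice

    n = 1000                               # number of DEM particles (placeholder)

    def deme_positions():                  # placeholder for a DEME tracker query, (n, 3) centers
        return np.zeros((n, 3))

    def deme_velocities():                 # placeholder for a DEME tracker query
        return np.zeros((n, 3))

    def deme_apply_forces(forces):         # placeholder: push fluid forces into DEME
        pass

    def deme_advance(dt):                  # placeholder: DoDynamics-style stepping of the DEM solver
        pass

    participant = precice.Participant("DEME", "precice-config.xml", 0, 1)
    vertex_ids = participant.set_mesh_vertices("ParticleMesh", deme_positions())
    participant.initialize()

    while participant.is_coupling_ongoing():
        dt = participant.get_max_time_step_size()
        # Read the hydrodynamic forces computed by the fluid participant, apply them, step DEM.
        forces = participant.read_data("ParticleMesh", "Force", vertex_ids, dt)
        deme_apply_forces(forces)
        deme_advance(dt)
        # Write back the updated particle kinematics and advance the coupling.
        participant.write_data("ParticleMesh", "Velocity", vertex_ids, deme_velocities())
        participant.advance(dt)

    participant.finalize()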
--Radu