Hello all,
I am using deal.II to solve a system of Hamilton-Jacobi equations, and I need to use the FE_Nothing element in part of my domain.
I have read a couple of posts in this forum about this, but I still have some questions. I would appreciate it if anyone could answer them.
Since I need MPI but also hp::DoFHandler, I cannot use parallel::distributed::Triangulation. So I have to use a plain Triangulation and partition it with METIS.
In one of the posts I read:
"you can always partition regular triangulation by
GridTools::partition_triangulation (n_mpi_processes, triangulation);
and adjust your assembly loops to
if (cell->subdomain_id() == this_mpi_process)
to parallelize with MPI. The downside is that all your processes will now own the **complete mesh**.
There are also some other minor downsides like serial SolutionTransfer and not straight-forward serialization/restart."
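If I understand the suggestion correctly, the setup and assembly loop would look roughly like this (just my own sketch of the suggested approach, assuming triangulation, dof_handler, n_mpi_processes and this_mpi_process are already set up):

#include <deal.II/grid/grid_tools.h>

// Partition the serial (replicated) Triangulation among the MPI ranks:
GridTools::partition_triangulation(n_mpi_processes, triangulation);

// ... distribute dofs, set up sparsity pattern, etc. ...

// In the assembly loop, only work on the cells assigned to this rank:
for (const auto &cell : dof_handler.active_cell_iterators())
  {
    if (cell->subdomain_id() != this_mpi_process)
      continue;

    // assemble the local matrix and rhs on this cell and copy them
    // into the global (distributed) objects as usual
  }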
1. What exactly are the minor downsides regarding SolutionTransfer and restart? Up to now I have been using parallel::distributed::SolutionTransfer, and I need to keep a checkpoint/restart option available (see the first sketch after this list for roughly what I do now).
2. Is there no concept of locally owned or ghost cells if I use GridTools::partition_triangulation()? Until now, some of my computations, such as moving the mesh, ran over both locally owned and ghost cells (see the second sketch after this list). What happens to those here?
3. Will anything change regarding locally owned DoFs and locally relevant DoFs?
4. Even with parallel::distributed::Triangulation, the coarsest mesh is stored on all processors. So what exactly does "all your processes will now own the **complete mesh**" mean in this context?
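For context on question 1, this is roughly the checkpoint/restart pattern I use today with parallel::distributed::Triangulation (only a sketch; the vector type, variable names and file name are placeholders from my code):

#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/lac/la_parallel_vector.h>

// checkpoint:
parallel::distributed::SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>>
  solution_transfer(dof_handler);
solution_transfer.prepare_for_serialization(locally_relevant_solution);
triangulation.save("restart.checkpoint");

// restart:
triangulation.load("restart.checkpoint");
dof_handler.distribute_dofs(fe_collection);
solution_transfer.deserialize(locally_owned_solution);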
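And for question 2, this is the kind of loop I currently run over both locally owned and ghost cells, e.g. to move mesh vertices (again only a sketch; the displacement here is a made-up placeholder):

#include <deal.II/base/geometry_info.h>
#include <deal.II/base/tensor.h>

const Tensor<1, dim> displacement; // placeholder: in my code this comes from the solution
std::vector<bool> vertex_moved(triangulation.n_vertices(), false);

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned() || cell->is_ghost())
    for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
      if (!vertex_moved[cell->vertex_index(v)])
        {
          cell->vertex(v) += displacement;
          vertex_moved[cell->vertex_index(v)] = true;
        }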
Thanks.