Hello deal.II community,
I wrote a small application that uses a fully distributed (simplicial) mesh, the PETSc wrappers, and the hp machinery to handle a Lagrange multiplier on a boundary. With the non-hp version of this solver, I can checkpoint the current and previous solutions and restart without any issue, as done in step-83, but the restart fails with the hp version.
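For context, the non-hp checkpoint/restart path in my example looks roughly like this (simplified sketch; solution and old_solution are my ghosted PETSc vectors, and the file name is arbitrary):

    // Checkpoint: attach both vectors to a SolutionTransfer, then save the
    // triangulation together with the attached data.
    SolutionTransfer<dim, PETScWrappers::MPI::Vector> solution_transfer(dof_handler);
    solution_transfer.prepare_for_serialization(
      std::vector<const PETScWrappers::MPI::Vector *>{&solution, &old_solution});
    triangulation.save("checkpoint.tria");

    // Restart: reload the triangulation, redistribute DoFs, then read the
    // vectors back (into non-ghosted vectors) in the order they were written.
    triangulation.load("checkpoint.tria");
    dof_handler.distribute_dofs(fe);
    PETScWrappers::MPI::Vector tmp_solution(locally_owned_dofs, mpi_communicator);
    PETScWrappers::MPI::Vector tmp_old_solution(locally_owned_dofs, mpi_communicator);
    std::vector<PETScWrappers::MPI::Vector *> all_in{&tmp_solution, &tmp_old_solution};
    SolutionTransfer<dim, PETScWrappers::MPI::Vector> restart_transfer(dof_handler);
    restart_transfer.deserialize(all_in);

This non-hp path works fine for me on any number of ranks.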
As I understand it, I also have to serialize the active FE indices with dof_handler.prepare_for_serialization_of_active_fe_indices(), but this function is only implemented for parallel::distributed::Triangulation, not for the fully distributed kind I am using.
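For reference, the documented hp pattern I would like to reproduce is, if I read the DoFHandler documentation correctly, roughly the following (only valid on a parallel::distributed::Triangulation):

    // hp checkpoint: additionally register the active FE indices for
    // transfer before saving.
    dof_handler.distribute_dofs(fe_collection);
    dof_handler.prepare_for_serialization_of_active_fe_indices();
    solution_transfer.prepare_for_serialization(solution);
    triangulation.save("checkpoint.tria");

    // hp restart: restore the indices after load() and before
    // distribute_dofs(), so that DoFs are enumerated with the correct
    // finite element on each cell.
    triangulation.load("checkpoint.tria");
    dof_handler.deserialize_active_fe_indices();
    dof_handler.distribute_dofs(fe_collection);
    solution_transfer.deserialize(tmp_solution);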
I have attached a minimal example that compares a checkpoint/restart in the non-hp and hp settings, for both quads and simplices. On my end, it succeeds on a single MPI rank, but on more ranks it either reads back mismatched vectors or segfaults without showing a stack trace in debug mode. I tried to reproduce the exact behavior of my solver in the example, but the two fail in slightly different ways and I'm not sure why. Both fail in restart(): the attached example fails in SolutionTransfer::deserialize(...), in interpolate(...) from solution_transfer.templates.h, after successfully loading the triangulation, whereas the actual solver fails while loading the triangulation and throws an std::bad_alloc at line 1051 of grid/tria.cc in the current master (dest_data_variable.resize(size_on_proc);).
Assuming the issue is indeed that I need to serialize/deserialize the active FE indices, is there a workaround that works for fully distributed triangulations? I saw that prepare_for_serialization_of_active_fe_indices() uses CellDataTransfer, which does not seem to exist for fully distributed meshes, so I'm assuming it is not that trivial.
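To make the question concrete: since save()/load() of a fully distributed triangulation requires the same number of ranks and, as far as I understand, restores the same partitioning, would a manual workaround along these lines be viable? This is only an untested sketch (the fe_indices.<rank> file name is made up, and fe_collection/mpi_communicator are from my example):

    #include <deal.II/base/mpi.h>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/serialization/map.hpp>
    #include <fstream>
    #include <map>

    const unsigned int my_rank = Utilities::MPI::this_mpi_process(mpi_communicator);

    // Checkpoint (per rank): record the active FE index of every locally
    // owned cell, keyed by the cell's CellId string.
    std::map<std::string, unsigned int> fe_indices;
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        fe_indices[cell->id().to_string()] = cell->active_fe_index();
    {
      std::ofstream out("fe_indices." + std::to_string(my_rank));
      boost::archive::text_oarchive oa(out);
      oa << fe_indices;
    }

    // Restart (per rank): after triangulation.load(), read the map back and
    // set the indices before distribute_dofs().
    std::map<std::string, unsigned int> restored;
    {
      std::ifstream in("fe_indices." + std::to_string(my_rank));
      boost::archive::text_iarchive ia(in);
      ia >> restored;
    }
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        cell->set_active_fe_index(restored.at(cell->id().to_string()));
    dof_handler.distribute_dofs(fe_collection);

(Ghost cells would then get their indices through distribute_dofs(), if I understand the hp machinery correctly.)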
Thank you for your time,
Arthur