Hello,
I want to know how to apply boundary constraints in a time-dependent, adaptively refined, distributed-memory code. Within each time step, I want to perform several refinement cycles. I could not find such an example in the deal.II tutorials.
I have the following code; the parts I am unsure about are marked in the comments. Basically, when transferring the old solution to the new mesh, I need the boundary constraints for one time step back, but when assembling the system, I need the boundary constraints for the current time step. I don't see how to satisfy both with a single constraints object. (A simplified sketch of my setup_system() is included after the loop for reference.)
for (unsigned int time_step = 0; time_step < n_time_steps; ++time_step)
  {
    time += dt;
    old_locally_relevant_solution = locally_relevant_solution;

    for (unsigned int refine_step = 0; refine_step < n_refine_steps; ++refine_step)
      {
        // assemble_system() reads old_locally_relevant_solution and calls
        // constraints.distribute_local_to_global(...) to add the local matrices
        // to the global matrix. Problem: the boundary constraints were computed
        // for time - dt, i.e. one time step back, but what is needed here are
        // the boundary constraints for the current time step.
        assemble_system();
        solve(); // updates locally_relevant_solution

        if (refine_step + 1 < n_refine_steps)
          {
            // Estimate Kelly errors and flag cells for refinement/coarsening;
            ...
            // Refine the mesh and transfer the old solution to the new mesh.
            triangulation.prepare_coarsening_and_refinement();
            parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector>
              soltrans(dof_handler);
            soltrans.prepare_for_coarsening_and_refinement(old_locally_relevant_solution);
            triangulation.execute_coarsening_and_refinement();

            // setup_system() reinitializes the vectors, the matrix and the
            // constraints for the new mesh. The boundary function is evaluated
            // at time - dt, since that is the time at which
            // old_locally_relevant_solution was computed.
            setup_system();

            PETScWrappers::MPI::Vector interpolated_solution(locally_owned_dofs,
                                                             mpi_communicator);
            soltrans.interpolate(interpolated_solution);
            constraints.distribute(interpolated_solution);
            old_locally_relevant_solution = interpolated_solution;
          }
      }
  }
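For reference, setup_system() rebuilds the constraints roughly along the following lines. This is only a simplified sketch, not my exact code: boundary_values is an instance of my own time-dependent Function<dim> class, boundary id 0 is just an example, and the member names (fe, dof_handler, mpi_communicator, ...) follow the usual step-40-style layout.

void setup_system()
{
  dof_handler.distribute_dofs(fe);

  locally_owned_dofs    = dof_handler.locally_owned_dofs();
  locally_relevant_dofs = DoFTools::extract_locally_relevant_dofs(dof_handler);

  locally_relevant_solution.reinit(locally_owned_dofs,
                                   locally_relevant_dofs,
                                   mpi_communicator);

  constraints.clear();
  constraints.reinit(locally_relevant_dofs); // newer deal.II: reinit(owned, relevant)
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);

  // This is the ambiguous part: the interpolated old solution that these
  // constraints are distributed onto belongs to time - dt, but the very next
  // assemble_system() call needs the boundary values at the current time.
  boundary_values.set_time(time - dt); // or time?
  VectorTools::interpolate_boundary_values(dof_handler,
                                           0, // boundary id
                                           boundary_values,
                                           constraints);
  constraints.close();

  // ... reinit the sparsity pattern, system matrix and right-hand side here
}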
Thank you!