How to apply boundary constraints in time-dependent and adaptive code?


Junchao Zhang

Jul 21, 2016, 6:34:10 PM
to dea...@googlegroups.com
Hello,
  I want to know how to apply boundary constraints in a time-dependent, adaptively refined, distributed-memory code. Within each time step, I want to do multiple refinement cycles. I could not find such an example in the deal.II tutorials.
  I have the following code; the questionable code is marked with "PROBLEM" in the comments. Basically, when transferring the old solution I need the boundary constraints from one time step back, but when assembling the system I need the boundary constraints for the current time step, and I don't know how to satisfy both.
  
      for (int time_step = 0; time_step < n_time_steps; time_step++) {
        time += dt;
        old_locally_relevant_solution = locally_relevant_solution;

        for (int refine_step = 0; refine_step < n_refine_steps; refine_step++) {
          // assemble_system() accesses old_locally_relevant_solution and calls
          // constraints.distribute_local_to_global(...) to add the local matrices to the
          // global matrix.  PROBLEM: the boundary constraints were computed for time - dt,
          // i.e. one time step back, but what is needed here are the boundary constraints
          // for the current time step.
          assemble_system();
          solve(); // updates locally_relevant_solution

          if (refine_step + 1 < n_refine_steps) {
            // Estimate Kelly errors and flag cells for refinement/coarsening
            ...

            // Refine the triangulation and transfer the old solution to the new mesh
            triangulation.prepare_coarsening_and_refinement();
            parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector>
              soltrans(dof_handler);
            soltrans.prepare_for_coarsening_and_refinement(old_locally_relevant_solution);
            triangulation.execute_coarsening_and_refinement();

            // Reinit vectors, matrix and constraints for the new mesh.  The boundary
            // function is evaluated at time - dt, since that is the time at which
            // old_locally_relevant_solution was computed.
            setup_system();

            PETScWrappers::MPI::Vector interpolated_solution(locally_owned_dofs,
                                                             mpi_communicator);
            soltrans.interpolate(interpolated_solution);

            constraints.distribute(interpolated_solution);
            old_locally_relevant_solution = interpolated_solution;
          }
        }
      }


   Thank you!

--Junchao Zhang

Daniel Arndt

Jul 22, 2016, 1:50:56 PM
to deal.II User Group
Junchao,

For what matters here, you are just interested in how to deal with boundary constraints for a time-dependent problem.

I am not quite sure what your problem is. You have time-dependent boundary conditions, so you want to use boundary
conditions corresponding to the correct time step. Why would you want to change the boundary conditions for an old solution?

The interpolated solution should already have the correct (old) boundary conditions.

Best,
Daniel

Junchao Zhang

Jul 22, 2016, 4:44:21 PM
to deal.II User Group


On Friday, July 22, 2016 at 12:50:56 PM UTC-5, Daniel Arndt wrote:
Junchao,

For what matters here, you are just interested in how to deal with boundary constraints for a time-dependent problem.

I am not quite sure what your problem is. You have time-dependent boundary conditions, so you want to use boundary
conditions corresponding to the correct time step. Why would you want to change the boundary conditions for an old solution?
My PDE is just a simple convection-diffusion equation: \dot{u} = D \Delta u - \vec{v} \cdot \nabla u. The initial value and boundary values are given by a time-dependent function.

The interpolated solution should already have the correct (old) boundary conditions.
Is that true? As you see, I call constraints.distribute(interpolated_solution) to impose boundary constraints, which are computed in setup_system() as follows:

  constraints.clear ();
  constraints.reinit(locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);
  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            ExactSolution<2>(1, time-dt),
                                            constraints);
  constraints.close ();

When the code then runs assemble_system() in the inner loop, which calls constraints.distribute_local_to_global(), I think it actually imposes the old (wrong) boundary constraints. What we need is to assemble the matrix and impose the boundary constraints of the current time step.
Am I right?

Daniel Arndt

Jul 22, 2016, 5:16:47 PM
to deal.II User Group
Junchao,


The interpolated solution should already have the correct (old) boundary conditions.
Is that true? As you see, I call constraints.distribute(interpolated_solution) to impose boundary constraints, which are computed in setup_system() as follows
What I meant to say is that the interpolated vector has the old boundary values if you don't apply additional constraints.
On cells that you refine, you can improve the approximation of the boundary values by applying a ConstraintMatrix (at time-dt) and hanging node constraints.

For the current time step, you then initialize a different ConstraintMatrix containing both Dirichlet boundary values (at time instead of time-dt) and hanging node constraints
and use this when assembling your system.
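
In setup_system() this could look roughly like the following untested sketch, which just repeats your own snippet twice with different times; old_constraints and current_constraints are placeholder names for two ConstraintMatrix members:

      // Constraints for the transferred old solution:
      // hanging nodes plus Dirichlet values at time - dt.
      old_constraints.clear();
      old_constraints.reinit(locally_relevant_dofs);
      DoFTools::make_hanging_node_constraints(dof_handler, old_constraints);
      VectorTools::interpolate_boundary_values(dof_handler,
                                               0,
                                               ExactSolution<2>(1, time - dt),
                                               old_constraints);
      old_constraints.close();

      // Constraints for assembling the current system:
      // hanging nodes plus Dirichlet values at the current time.
      current_constraints.clear();
      current_constraints.reinit(locally_relevant_dofs);
      DoFTools::make_hanging_node_constraints(dof_handler, current_constraints);
      VectorTools::interpolate_boundary_values(dof_handler,
                                               0,
                                               ExactSolution<2>(1, time),
                                               current_constraints);
      current_constraints.close();

In the loop you would then call old_constraints.distribute(interpolated_solution) right after soltrans.interpolate(), and pass current_constraints to distribute_local_to_global() in assemble_system().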

Best,
Daniel

Junchao Zhang

Jul 22, 2016, 5:42:22 PM
to deal.II User Group
OK, I see. I need to build two ConstraintMatrix objects. There is a little waste here, since both are built for the same mesh and contain the same hanging node constraints.
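Maybe I can reduce that a little by building the hanging node constraints once per mesh and merging them into each of the two boundary-value objects. An untested sketch of what I have in mind (old_constraints/current_constraints are the same placeholder names as above):

      ConstraintMatrix hanging_node_constraints(locally_relevant_dofs);
      DoFTools::make_hanging_node_constraints(dof_handler, hanging_node_constraints);
      hanging_node_constraints.close();

      // Start from the hanging node constraints, then add Dirichlet values at time - dt
      old_constraints.clear();
      old_constraints.reinit(locally_relevant_dofs);
      old_constraints.merge(hanging_node_constraints);
      VectorTools::interpolate_boundary_values(dof_handler,
                                               0,
                                               ExactSolution<2>(1, time - dt),
                                               old_constraints);
      old_constraints.close();

      // ... and the same for current_constraints, using ExactSolution<2>(1, time).

But the savings are probably negligible anyway.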
 
