vector<unsigned int> new_number(dof_handler.n_dofs());
for (unsigned int i = 0; i < dof_handler.n_dofs(); ++i)
  new_number[i] = dof_handler.n_dofs() - i - 1; // simple example: reverse order

vector<unsigned int> local_new_number;
for (unsigned int dof : info.locally_owned)
  local_new_number.push_back(new_number[dof]);

dof_handler.renumber_dofs(local_new_number);
info.locally_owned = dof_handler.locally_owned_dofs();
DoFTools::extract_locally_relevant_dofs(dof_handler, info.locally_relevant);

LA::MPI::Vector tmp_newton, tmp_rhs;
tmp_newton.reinit(info.locally_owned, MPI_COMM_WORLD);
tmp_rhs.reinit(info.locally_owned, MPI_COMM_WORLD);
tmp_newton = newton_update;
tmp_rhs = system_rhs;

solver.solve(system_matrix, tmp_newton, tmp_rhs);
cout << fmt::format("[{:d}] mat = {:e}", rank, system_matrix.l1_norm()) << endl;
cout << fmt::format("[{:d}] rhs = {:e}", rank, tmp_rhs.l2_norm()) << endl;
cout << fmt::format("[{:d}] sol = {:e}", rank, tmp_newton.l2_norm()) << endl;

solution.reinit(info.locally_owned, info.locally_relevant, MPI_COMM_WORLD);              // ghosted, for fe_values
old_timestep_solution.reinit(info.locally_owned, info.locally_relevant, MPI_COMM_WORLD); // same as solution
newton_update.reinit(info.locally_owned, info.locally_relevant, MPI_COMM_WORLD);         // ghosted, because of solution += newton_update
system_rhs.reinit(info.locally_owned, MPI_COMM_WORLD); // ghosted / non-ghosted ?
DynamicSparsityPattern dsp(info.locally_relevant);
DoFTools::make_flux_sparsity_pattern(dof_handler, dsp, constraints, false);
SparsityTools::distribute_sparsity_pattern(dsp,
                                           dof_handler.n_locally_owned_dofs_per_processor(),
                                           MPI_COMM_WORLD,
                                           info.locally_relevant);
system_matrix.reinit(info.locally_owned, info.locally_owned, dsp, MPI_COMM_WORLD);

- Debug is enabled (at least for deal.II; I will have to rebuild Trilinos with debug later)
- I am not sure if I understood you correctly, but if I use a regular Triangulation, then every rank owns all dofs, and the initialization of the distributed vectors fails (as expected)
What I additionally tried (with 2 ranks) is:
1) assemble the rhs / matrix in serial,
2) create a partition by hand: [0, n/2), [n/2, n),
3) copy/distribute,
4) solve in parallel,
which works. However, when I changed the partition into something like { 0, 2, 4, ... }, { 1, 3, 5, ... }, it fails, which makes me believe that non-contiguous partitions are not (completely) supported by deal.II or Trilinos.
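To make the two hand-made partitions concrete, here is a stand-alone sketch (plain C++, no deal.II or MPI; the helper names are made up for illustration) that builds the contiguous and the interleaved ownership sets described above for n dofs over 2 ranks:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Contiguous partition: rank 0 owns [0, n/2), rank 1 owns [n/2, n).
std::vector<std::set<std::size_t>> contiguous_partition(std::size_t n)
{
  std::vector<std::set<std::size_t>> owned(2);
  for (std::size_t i = 0; i < n / 2; ++i)
    owned[0].insert(i);
  for (std::size_t i = n / 2; i < n; ++i)
    owned[1].insert(i);
  return owned;
}

// Interleaved partition: rank 0 owns { 0, 2, 4, ... }, rank 1 owns { 1, 3, 5, ... }.
std::vector<std::set<std::size_t>> interleaved_partition(std::size_t n)
{
  std::vector<std::set<std::size_t>> owned(2);
  for (std::size_t i = 0; i < n; ++i)
    owned[i % 2].insert(i);
  return owned;
}
```

Both are valid disjoint partitions of [0, n); the only difference is that the second one is not a set of contiguous index ranges, which is the property the failure above seems to hinge on.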
On 10 Aug 2016, at 13:47, Daniel Jodlbauer <jdsc...@gmx.at> wrote:
Ok, if I use SolverGMRES<>, it reports the error "Column map of matrix does not fit with vector map!", however, TrilinosWrappers::SolverGMRES seems to work.
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to a topic in the Google Groups "deal.II User Group" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/dealii/ncEIt6y7EHg/unsubscribe.
To unsubscribe from this group and all its topics, send an email to dealii+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
<test_renumbering.cc>
On 10 Aug 2016, at 18:09, Daniel Jodlbauer <jdsc...@gmx.at> wrote:
I think DoFRenumbering::Cuthill_McKee(dof_handler) does the renumbering only on the locally owned dofs, so these index sets won't change.
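A stand-alone way to see why the index sets would stay the same: if the renumbering only permutes indices *within* the locally owned set (a bijection of the owned set onto itself), then applying it cannot change which indices the set contains. A minimal sketch (plain C++, no deal.II; `renumber` is a hypothetical helper, with new_number mapping old index to new index as in the code earlier in this thread):

```cpp
#include <set>
#include <vector>

// Apply a renumbering (old index -> new index) to a set of owned dofs.
std::set<unsigned int> renumber(const std::set<unsigned int> &owned,
                                const std::vector<unsigned int> &new_number)
{
  std::set<unsigned int> result;
  for (const unsigned int dof : owned)
    result.insert(new_number[dof]);
  return result;
}
```

If new_number restricted to the owned dofs maps them back onto the same index range, renumber(owned, new_number) == owned, i.e. locally_owned_dofs() is unchanged even though individual dofs were relabeled.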