Difference between 2 Ways to Initialize System Matrix in MPI World


Lex Lee

Sep 27, 2023, 9:15:14 PM
to deal.II User Group
Hello all, 


Previously, I initialized my system matrix in this way (option A)

DoFTools::make_flux_sparsity_pattern(dof_handler,
                                     dsp,
                                     cell_coupling,
                                     face_coupling);
constraints_newton_update.condense(dsp);
SparsityTools::distribute_sparsity_pattern(dsp,
                                           hp_index_set,
                                           mpi_communicator,
                                           hp_relevant_set);
system_matrix.reinit(hp_index_set,
                     hp_index_set,
                     dsp,
                     mpi_communicator);

However, the resulting system matrix is singular, and I was stuck on this singular-matrix problem for about two weeks.


Then I initialized the matrix with option B, which generates a full-rank matrix. However, I don't see the difference between these two approaches. Can someone explain this to me? Thanks in advance.

DoFTools::make_flux_sparsity_pattern(dof_handler,
                                     dsp,
                                     cell_coupling,
                                     face_coupling);
constraints_newton_update.condense(dsp);
sparsity_pattern.copy_from(dsp);
system_matrix.reinit(hp_index_set,
                     sparsity_pattern,
                     mpi_communicator);

Abbas Ballout

Sep 28, 2023, 1:27:17 PM
to deal.II User Group
Did you try to run in Debug mode? If not, give it a try and see if an exception is thrown.

Abbas

Lex Lee

Sep 28, 2023, 2:50:02 PM
to deal.II User Group
Hi Abbas, thanks for the reply. I did run option A in Debug mode, but no error was reported and no exception was thrown. To me, option A looks the same as option B; however, the two system matrices they generate are not the same.

Timo Heister

Sep 28, 2023, 4:22:49 PM
to dea...@googlegroups.com
Lex,

I would strongly advise against using AffineConstraints::condense() in an MPI-parallel computation. It is inefficient and also likely incorrect if you run it before distribute_sparsity_pattern().
Instead, use AffineConstraints::distribute_local_to_global() during assembly (see step-40 for an example).
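
Roughly, the assembly loop then looks like this (a sketch along the lines of step-40, not code I have compiled for your case; cell_matrix, cell_rhs, local_dof_indices, fe_values, system_matrix, and system_rhs are the usual local and parallel objects):

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      fe_values.reinit(cell);
      cell_matrix = 0.;
      cell_rhs    = 0.;

      // ... fill cell_matrix and cell_rhs for this cell ...

      cell->get_dof_indices(local_dof_indices);

      // Apply the constraints while copying the local contributions into
      // the global objects; no condense() call is needed afterwards.
      constraints_newton_update.distribute_local_to_global(cell_matrix,
                                                           cell_rhs,
                                                           local_dof_indices,
                                                           system_matrix,
                                                           system_rhs);
    }

system_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);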


From: dea...@googlegroups.com <dea...@googlegroups.com> on behalf of Lex Lee <hitl...@gmail.com>
Sent: Wednesday, September 27, 2023 8:15:14 PM
To: deal.II User Group <dea...@googlegroups.com>
Subject: [deal.II] Difference between 2 Ways to Initialize System Matrix in MPI World
 



Lex Lee

Sep 28, 2023, 5:05:50 PM
to deal.II User Group
Hello Timo, 

Thanks for your reply and suggestion. Now, I know what the difference is.

Since I'm using the "make_flux_sparsity_pattern" function and have to pass in the DoF coupling tables ("cell_coupling" and "face_coupling"), it seemed I had no choice but to call "constraints.condense(dsp)" after "make_flux_sparsity_pattern".

There are four overloads of "make_flux_sparsity_pattern" listed at this link: https://www.dealii.org/current/doxygen/deal.II/group__constraints.html
In my opinion, only #3 is suitable for my case.
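
For reference, this is roughly what I am picturing (only a sketch, not yet tested on my end; the exact parameter list of that overload may differ between deal.II versions, so the documentation should be checked):

// Build the pattern with the constraints passed in directly, so that
// condense() is no longer needed; constrained entries are left out
// because keep_constrained_dofs is false.
DoFTools::make_flux_sparsity_pattern(dof_handler,
                                     dsp,
                                     constraints_newton_update,
                                     /*keep_constrained_dofs =*/ false,
                                     cell_coupling,
                                     face_coupling);

SparsityTools::distribute_sparsity_pattern(dsp,
                                           hp_index_set,
                                           mpi_communicator,
                                           hp_relevant_set);

system_matrix.reinit(hp_index_set,
                     hp_index_set,
                     dsp,
                     mpi_communicator);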

Any better idea?


Again, thanks a lot for sharing your ideas with me. 


Best,
Lex