Zero-flux condition on interfaces


Nikhil Vaidya

Mar 19, 2020, 7:06:22 AM
to moose-users
Hello,

I am solving a problem involving ohmic heating. There are interfaces in the geometry which need to be electrically insulated, but thermally conducting. How can this be implemented in MOOSE?

Best regards,
Nikhil

Alexander Lindsay

Mar 19, 2020, 12:13:45 PM
to moose...@googlegroups.com
Can you be more specific about what you mean by interfaces? Do you have different kernels/variables living on different subdomains?

At a boundary, the default boundary condition is no-flux, i.e. if you don't set a boundary condition in your input file, you will get no "stuff" coming into or out of your boundary.

--
You received this message because you are subscribed to the Google Groups "moose-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to moose-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/ce822712-0cd8-40e0-9313-f00584cbef19%40googlegroups.com.

Nikhil Vaidya

Mar 19, 2020, 5:19:32 PM
to moose...@googlegroups.com
I am attaching an image of a simple geometry that illustrates the situation. The geometry consists of two subdomains, \Omega_1 and \Omega_2 (think of them as different materials).

The transient heat conduction PDE is solved on both subdomains (i.e. the variable 'temperature' lives on both). At each time step, a steady electric potential diffusion equation is also solved on both subdomains (i.e. the variable 'electric_potential' lives on both). The electrical conductivity is temperature dependent, and the heat conduction equation includes an ohmic heat source (which lives on both subdomains).

Now, the boundary \Gamma_t has a non-zero Dirichlet boundary condition on 'electric_potential', whereas the boundary \Gamma_b has a zero Dirichlet BC on 'electric_potential'. The boundaries \Gamma_l1, \Gamma_l2, \Gamma_r1 and \Gamma_r2 are electrically insulated. The internal interfaces between the two subdomains are \Gamma_i1, \Gamma_i2 and \Gamma_c. The interface \Gamma_c conducts both electricity and heat, while the interfaces \Gamma_i1 and \Gamma_i2 allow only heat to flow through; on \Gamma_i1 and \Gamma_i2, the electric potential gradient should be zero. How should I implement this? (The temperature boundary conditions on the external boundaries of the domain are irrelevant here.)

Geom_schematic.png
Schematic_Geom.png

Nikhil Vaidya

Mar 19, 2020, 5:20:46 PM
to moose...@googlegroups.com
I attached two files by mistake. Please refer to the figure 'Schematic_Geom.png'.


Alexander Lindsay

Mar 20, 2020, 12:09:00 PM
to moose...@googlegroups.com
The way I would do this is to create different variables for the electric potential on omega1 and omega2, and then apply an InterfaceKernel for the electric potential on Gamma_c. You could either use:

1) moose/test/src/interfacekernels/PenaltyInterfaceDiffusion with a sufficiently large penalty to simultaneously enforce continuity of the potential and of the potential flux (this method is optimally convergent with respect to the FEM for sufficiently large penalties), or
2) something like moose/test/src/interfacekernels/InterfaceDiffusion to enforce continuity of the potential flux, plus moose/framework/src/bcs/MatchedValueBC to enforce continuity of the potential. This method also works but is not optimally convergent.

If you move to different variables for the electric potential, then you can simply not provide any boundary or interface conditions at Gamma_i1 and Gamma_i2, and you should observe a zero potential gradient there (although I'm curious whether you will run into solver convergence issues at the corners where Gamma_i* and Gamma_c meet).
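To make that concrete, here is a minimal sketch of the input blocks for option 1. The block and boundary names (omega1, omega2, gamma_c) are placeholders for whatever your mesh actually defines, and the penalty value is only illustrative:

[Variables]
  [./potential_1]
    # electric potential restricted to the first subdomain
    block = omega1
  [../]
  [./potential_2]
    # electric potential restricted to the second subdomain
    block = omega2
  [../]
[]

[InterfaceKernels]
  # penalty enforcement of continuity of potential and potential flux on Gamma_c
  [./gamma_c_electric]
    type = PenaltyInterfaceDiffusion
    variable = potential_1
    neighbor_var = potential_2
    boundary = gamma_c
    penalty = 1e6
  [../]
[]

Note that nothing is specified on Gamma_i1 and Gamma_i2: with the two block-restricted variables, the natural (do-nothing) boundary condition there gives the zero potential flux you want.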

Nikhil Vaidya

Mar 26, 2020, 5:07:50 AM
to moose-users
Hi Alex,

Thanks for the hints. I implemented them, and I indeed get the convergence issues you mentioned. I tried changing the penalty parameter value, but the solve still diverges for all of the values I tested (from 1e0 to 1e10). I am curious as to why this happens in my problem but not in the interface kernel tests.

Best,
Nikhil



Alexander Lindsay

Mar 26, 2020, 1:23:16 PM
to moose...@googlegroups.com
Do we have nodes at which the connected faces have different interface conditions in our tests? I think probably not, but I could be wrong.


Nikhil Vaidya

Mar 31, 2020, 10:28:59 AM
to moose-users
I checked the test interfacekernels/2d_interface/coupled_value_coupled_flux.i. Indeed, it contains no two interfaces with common points on which different InterfaceKernels are applied. I wanted to check whether the convergence problem occurs in the coupled_value_coupled_flux example when there are multiple interfaces with different InterfaceKernels. I created my own mesh, very similar to the one in the test, but with the interface split into two parts (the input file and mesh are attached). On running this I got a memory fault. Interestingly, the code runs properly when the InterfaceKernels block is commented out. I investigated the memory fault location in gdb; the backtrace is as follows:

#0  0x00007fffd19eb681 in __strlen_sse2_pminub () from /usr/lib64/libc.so.6
#1  0x00007fffd2abc985 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) () from /usr/lib64/libstdc++.so.6
#2  0x00007fffdde51fd6 in CommandLine::CommandLine (this=0x6d92e8, argc=3, argv=0x6ffad0) at /home/2014-0004_focal_therapy/PhDs/AdapTT/Nikhil/moose/framework/src/parser/CommandLine.C:23
#3  0x00007fffde788836 in __gnu_cxx::new_allocator<CommandLine>::construct<CommandLine, int&, char**&> (this=0x7fffffffcc20, __p=0x6d92e8, __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0)
    at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/ext/new_allocator.h:120
#4  0x00007fffde7887e2 in std::allocator_traits<std::allocator<CommandLine> >::_S_construct<CommandLine, int&, char**&> (__a=..., __p=0x6d92e8, __args=@0x7fffffffd1f8: 0x6ffad0,
    __args=@0x7fffffffd1f8: 0x6ffad0) at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/alloc_traits.h:254
#5  0x00007fffde7885c5 in std::allocator_traits<std::allocator<CommandLine> >::construct<CommandLine, int&, char**&> (__a=..., __p=0x6d92e8, __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0)
    at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/alloc_traits.h:393
#6  0x00007fffde7884e2 in std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2>::_Sp_counted_ptr_inplace<int&, char**&> (this=0x6d92d0, __a=...,
    __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0) at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/shared_ptr_base.h:399
#7  0x00007fffde7883cd in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2> >::construct<std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2>, std::allocator<CommandLine> const, int&, char**&>(std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2>*, std::allocator<CommandLine> const&&, int&, char**&) (this=0x7fffffffcd78, __p=0x6d92d0, __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0)
    at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/ext/new_allocator.h:120
#8  0x00007fffde788347 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2> > >::_S_construct<std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2>, std::allocator<CommandLine> const, int&, char**&> (__a=..., __p=0x6d92d0, __args=@0x7fffffffd1f8: 0x6ffad0,
    __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0) at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/alloc_traits.h:254
#9  0x00007fffde7881ea in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2> > >::construct<std::_Sp_counted_ptr_inplace<CommandLine, std::allocator<CommandLine>, (__gnu_cxx::_Lock_policy)2>, std::allocator<CommandLine> const, int&, char**&> (__a=..., __p=0x6d92d0, __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0,
    __args=@0x7fffffffd1f8: 0x6ffad0) at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/alloc_traits.h:393
#10 0x00007fffde78805e in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<CommandLine, std::allocator<CommandLine>, int&, char**&> (this=0x7fffffffd1e8, __a=...,
    __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0) at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/shared_ptr_base.h:502
#11 0x00007fffde787f6a in std::__shared_ptr<CommandLine, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<CommandLine>, int&, char**&> (this=0x7fffffffd1e0, __tag=..., __a=...,
    __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0) at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/shared_ptr_base.h:956
#12 0x00007fffde787ef2 in std::shared_ptr<CommandLine>::shared_ptr<std::allocator<CommandLine>, int&, char**&> (this=0x7fffffffd1e0, __tag=..., __a=..., __args=@0x7fffffffd1f8: 0x6ffad0,
    __args=@0x7fffffffd1f8: 0x6ffad0) at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/shared_ptr.h:316
#13 0x00007fffde787e44 in std::allocate_shared<CommandLine, std::allocator<CommandLine>, int&, char**&> (__a=..., __args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0)
    at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/shared_ptr.h:597
#14 0x00007fffde786ef5 in std::make_shared<CommandLine, int&, char**&> (__args=@0x7fffffffd1f8: 0x6ffad0, __args=@0x7fffffffd1f8: 0x6ffad0)
    at /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../include/c++/4.8.5/bits/shared_ptr.h:613
#15 0x00007fffde78625f in AppFactory::createAppShared (app_type="MooseTestApp", argc=3, argv=0x6ffad0, comm_world_in=1140850688)
    at /home/2014-0004_focal_therapy/PhDs/AdapTT/Nikhil/moose/framework/src/base/AppFactory.C:40
#16 0x000000000040cf95 in main (argc=3, argv=0x6ffad0) at /home/2014-0004_focal_therapy/PhDs/AdapTT/Nikhil/moose/test/src/main.C:28


Is there a workaround in case multiple interfaces with different interfacekernels are involved?

Best,
Nikhil
coupled_value_coupled_flux_user_test.i
user_test.msh

Alexander Lindsay

Mar 31, 2020, 10:40:04 AM
to moose...@googlegroups.com
This stack trace doesn't appear to have anything to do with interface kernels, so I don't have anything helpful off the top of my head.

Can you share the input file and the mesh that led to that stack trace, so that I can try to run it and reproduce?


Nikhil Vaidya

Mar 31, 2020, 11:20:15 AM
to moose-users
Thanks for your quick response. The input file and mesh are attached to my previous message.

Alexander Lindsay

Mar 31, 2020, 4:15:37 PM
to moose...@googlegroups.com
Your input runs fine for me in both oprof and dbg modes. I'd suggest rebuilding, and/or making sure you're using the shared libraries you intend to be using. Maybe something got corrupted somewhere.

lindad@pop-os:~/projects/moose/test(next)$ ./moose_test-oprof -i coupled_value_coupled_flux_user_test.i


Framework Information:
MOOSE Version:           git commit 8fdf5f581b on 2020-03-31
LibMesh Version:         ef7dd34ad117998e7b7ef909e85a6ae53f57e057
PETSc Version:           3.12.1
SLEPc Version:           3.12.0
Current Time:            Tue Mar 31 13:12:47 2020
Executable Timestamp:    Tue Mar 31 13:12:39 2020

Parallelism:
  Num Processors:          1
  Num Threads:             1

Mesh:
  Parallel Type:           replicated
  Mesh Dimension:          2
  Spatial Dimension:       2
  Nodes:                  
    Total:                 13
    Local:                 13
  Elems:                  
    Total:                 16
    Local:                 16
  Num Subdomains:          2
  Num Partitions:          1

Nonlinear System:
  Num DOFs:                16
  Num Local DOFs:          16
  Variables:               "u" "v"
  Finite Element Types:    "LAGRANGE" "LAGRANGE"
  Approximation Orders:    "FIRST" "FIRST"

Execution Information:
  Executioner:             Steady
  Solver Mode:             NEWTON
  MOOSE Preconditioner:    SMP


Postprocessor Values:
+----------------+----------------+----------------+
| time           | u_int          | v_int          |
+----------------+----------------+----------------+
|   0.000000e+00 |   0.000000e+00 |   0.000000e+00 |
+----------------+----------------+----------------+

 0 Nonlinear |R| = 9.718253e-01
      0 Linear |R| = 9.718253e-01
      1 Linear |R| = 9.198999e-01
      2 Linear |R| = 3.955706e-01
      3 Linear |R| = 1.373886e-03
      4 Linear |R| = 7.735885e-07
 1 Nonlinear |R| = 7.735885e-07
      0 Linear |R| = 7.735885e-07
      1 Linear |R| = 9.377470e-08
      2 Linear |R| = 2.328850e-08
      3 Linear |R| = 1.379170e-09
      4 Linear |R| = 4.228744e-12
 2 Nonlinear |R| = 4.228056e-12
 Solve Converged!

Postprocessor Values:
+----------------+----------------+----------------+
| time           | u_int          | v_int          |
+----------------+----------------+----------------+
|   0.000000e+00 |   0.000000e+00 |   0.000000e+00 |
|   1.000000e+00 |   3.113636e+00 |   0.000000e+00 |
+----------------+----------------+----------------+



Nikhil Vaidya

Apr 2, 2020, 10:43:43 AM
to moose-users
I tried rebuilding MOOSE and the memory error problem persists.

I realized that the input file I sent you has the InterfaceKernels block commented out (hence the zero value of the v_int postprocessor). I get the memory fault when the InterfaceKernels block is included in the simulation. Could you please try to run the input file and mesh attached to this message?

Best,
Nikhil
user_test.msh
coupled_value_coupled_flux_user_test.i

Alexander Lindsay

Apr 2, 2020, 12:53:25 PM
to moose...@googlegroups.com
So I believe the problem stems from how we interact with gmsh mesh files.

In libMesh we interact most regularly with cubit-generated exodus formats. In the exodus format, a sideset is associated with a particular element face. However, in the gmsh mesh format (at least in the version 2 format), the "sideset" physical entities are associated only with the lower-dimensional geometric entity that lies between two higher-dimensional elements. It does not have a sidedness to it the way the exodus format does.

So in MOOSE we loop over elements, and then for boundary objects like interface kernels we loop over sides. If there is a boundary ID associated with an element side, then we go and run the interface kernel. The place where you trigger the assertion is the face between elements 2 and 11, which are on subdomains 5 and 6 respectively. Because your sideset does not have "sidedness", we attempt to run the interface kernel twice. When we try to run it for element 2, we trigger the assertion because there are no degrees of freedom for variable u on subdomain 5.

I think I can probably add a check that will prevent this from happening...


Alexander Lindsay

Apr 2, 2020, 3:17:51 PM
to moose...@googlegroups.com
I've created a pull request that addresses the issue here: https://github.com/idaholab/moose/pull/15024

I've also created a ticket in libmesh for being able to mimic exodus sidedness through gmsh physical names here: https://github.com/libMesh/libmesh/issues/2494, although I don't anticipate anyone getting to that very soon.

Nikhil Vaidya

Apr 3, 2020, 3:27:17 AM
to moose-users
Hi Alexander,

Thanks so much!

Best,
Nikhil

Nikhil Vaidya

Apr 9, 2020, 5:02:00 AM
to moose-users
Hi Alexander,

I tried to run the same input file as before, but with the modified interface kernel implementation. The residuals now do converge (albeit slowly), despite the interfaces that share points but carry different conditions. However, when I change the geometry, the convergence issues return. Is there some workaround in this case?

Best,
Nikhil

Alexander Lindsay

Apr 9, 2020, 12:44:17 PM
to moose...@googlegroups.com
I would need a little more detail to give informed comments. Are you sure that your Jacobian is accurate? You can test the accuracy of your Jacobian on a small problem by passing the command line option `-snes_test_jacobian`. You want a ratio around 1e-7 to 1e-8.


Nikhil Vaidya

Apr 10, 2020, 12:06:36 PM
to moose-users
If I understand correctly, the '-snes_test_jacobian' option is to be provided to the PETSc solver from within the Executioner block (please correct me if I am wrong). I tried this in the following way:

petsc_options_iname = '-snes_test_jacobian'
petsc_options_value = '1'

but this does not produce any output different from the usual one.

Instead, I tried using the analyzejacobian.py script. It told me that I have not implemented the off-diagonal Jacobian terms (I am trying to set up coupled PDEs). I remember reading in the forum that the PJFNK solver does not require a Jacobian to be provided by the user and computes its action from the residuals through finite differences.

I am attaching my input files here. Some of the kernels used have been written by me, and are not part of the moose source code. Please let me know if you need more information.
coupled_eqn_simple_geom.i
user_test.msh

Alexander Lindsay

Apr 10, 2020, 1:06:29 PM
to moose...@googlegroups.com
What is your PETSc version? If it's old, you might have to use `-snes_check_jacobian` instead. I would pass this option directly on the command line. If you really want it in the input file, use the `petsc_options` parameter, like this:

[Executioner]
  type = Steady
  solve_type = PJFNK
  petsc_options = '-snes_test_jacobian'
[]

As to commenting on PJFNK: I would need to know what is slow about the residual convergence. Is the linear solve taking a lot of iterations? Or is the nonlinear solve? You can paste in a representative solve if you would like. The convergence rate of the linear iterations is determined by how strong/accurate your preconditioner is. If you are missing important coupling terms in your preconditioning matrix, then the linear convergence may be quite slow.

Now with PJFNK your nonlinear convergence should be quite good, given that you solve the linear problem accurately (and our default linear solve tolerance of 1e-5 should be sufficiently accurate). If the nonlinear convergence is not good, then that may mean that your problem is not very well scaled and the finite-difference approximation of the Jacobian action is not accurate. You can often check whether the finite-differences are working by passing the PETSc option `-ksp_monitor_true_residual`. If the true residuals converge, then the finite differences are working well.

[Executioner]
  type = Steady
  solve_type = PJFNK
  petsc_options = '-snes_test_jacobian -ksp_monitor_true_residual'
[]

[Outputs]
  blah blah
  print_linear_residuals = false
[]

To unsubscribe from this group and stop receiving emails from it, send an email to moose-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/9a293027-abfd-44ce-888c-ff8a48a10e0c%40googlegroups.com.

Nikhil Vaidya

Apr 13, 2020, 7:41:27 AM
to moose-users
@petsc version
Indeed, I have an old PETSc version. The option -snes_check_jacobian works. The Jacobian check ratio is ~1e-9 for the first non-linear iteration and then ~1e-5 for the subsequent non-linear iterations. How is this interpreted? (I know that my Jacobian is missing the off-diagonal terms. I am using PJFNK.)

@PJFNK
The linear solve is the one requiring a lot of iterations. 4 non-linear iterations are required, while the number of linear iterations in each non-linear iteration is of the order of 100 (see attached output file).

@PJFNK non-linear convergence
I am not sure what "good" non-linear convergence means, so I printed out the true residuals. I observe the following:

Non-linear iteration 0: the true linear residual falls to ~1e-5 times its initial value
Non-linear iteration 1: the true linear residual falls to ~1e-1 times its initial value
Non-linear iteration 2: the true linear residual falls to ~1e-4 times its initial value
Non-linear iteration 3: the true linear residual falls to ~1e-3 times its initial value

I should clarify a statement from an earlier message, where I said the solve did not converge for my complex geometry: the solve does converge for the complex geometry when PJFNK is used, but, as you can see in the attached output file, the linear convergence is slow.

Given the background, I have the following questions:
1. The linear convergence is slow. Could this be fixed by including the coupling terms in the jacobian?
2. I'm not sure whether the true linear residual is converging, since it drops by different orders of magnitude in different non-linear iterations. Is it? If not, should I provide the Jacobian myself and use the NEWTON solver instead of PJFNK?
3. What is the impact of the nodes being shared by the interfaces with different interfacekernels? Is the node-sharing certainly a problem which needs a workaround?

Best,
Nikhil
output_PJFNK_complex_true_residual.txt

Alexander Lindsay

Apr 13, 2020, 12:57:58 PM
to moose...@googlegroups.com
On Mon, Apr 13, 2020 at 4:41 AM Nikhil Vaidya <nikhilv...@gmail.com> wrote:

Given the background, I have the following questions:
1. The linear convergence is slow. Could this be fixed by including the coupling terms in the jacobian?

It very well might help. However, that ratio of 1e-5 is not all that bad. For debugging purposes I would try running with `-pc_type lu` and see how many linear iterations that takes. If it takes 10 or more, then I would say you definitely should add the off-diagonal terms. If `lu` takes a handful of linear iterations or less, then your hand-coded approximate Jacobian is fine, but you may need to use a stronger preconditioning "process". Since you're running in parallel, which it looks like you are, you might try running with something like `-pc_type asm -pc_asm_overlap 2 -sub_pc_factor_levels 4`. asm will be a stronger parallel preconditioner than block Jacobi, which is the default. Increasing the overlap increases the strength of the preconditioner at the cost of more communication. Increasing the number of factor levels on the sub-block also increases the strength of the preconditioner; as factor_levels -> large_number, you approach a direct solve on the sub-block.
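In input-file form, the options suggested above might look like the following sketch (the option values are the starting points mentioned, not tuned numbers):

[Executioner]
  type = Steady
  solve_type = PJFNK
  # additive Schwarz: stronger in parallel than the default block Jacobi
  petsc_options_iname = '-pc_type -pc_asm_overlap -sub_pc_factor_levels'
  petsc_options_value = 'asm 2 4'
[]

For the LU debugging run you would instead pass '-pc_type lu' (on a small, serial problem).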

2. I'm not sure whether the true linear residual is converging, since it drops by different orders of magnitude in different non-linear iterations. Is it? If not, should I provide the Jacobian myself and use the NEWTON solver instead of PJFNK?

There definitely is some deviation of the true residuals from the unpreconditioned residual norm, which is troubling but not surprising for PJFNK. However, at least for the solve you pasted here, the true residuals seem to be dropping "enough" such that your non-linear residuals are dropping well. Increasing the strength of the preconditioner could lead to better correspondence between the true residual and the unpreconditioned residual. You could also try playing around with the PETSc parameter `-mat_mffd_err`. The default value is around 1e-8; you might try `-mat_mffd_err 1e-5` and see how that affects the true residuals.
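As a sketch, loosening the matrix-free differencing parameter while watching the true residuals might look like this in the input file (the 1e-5 value is just the experiment suggested above):

[Executioner]
  type = Steady
  solve_type = PJFNK
  petsc_options = '-ksp_monitor_true_residual'
  # loosen the matrix-free finite-difference parameter from its ~1e-8 default
  petsc_options_iname = '-mat_mffd_err'
  petsc_options_value = '1e-5'
[]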

If it's not too much work to get your Jacobian right, it very well may be beneficial to switch to NEWTON. We should also add ADInterfaceKernel so that you don't have to write Jacobians :-)

3. What is the impact of the nodes being shared by the interfaces with different interfacekernels? Is the node-sharing certainly a problem which needs a workaround?

I don't know. In general discontinuities in any simulation make a solve harder. If you get to a place where you feel like you've done everything you can to make your solve process optimal, and it still doesn't work, then it may be time to think about a way to reformulate the problem so that it's smoother.

But I'd first try debugging with `-pc_type lu` and see where you get.

Alex


Nikhil Vaidya

Apr 15, 2020, 12:22:57 PM
to moose-users
Thanks a lot for your suggestions.

I have now included the off-diagonal Jacobian terms in my kernel block. The Jacobian check returns ratios of ~1e-8, so I think the Jacobian is correct. I am now using NEWTON as the solve_type. I have also included the asm preconditioner as you suggested earlier. Now the linear solve converges in a handful of iterations. The non-linear residual, on the other hand, reduces by only 2 to 3 orders of magnitude even after 50 non-linear iterations.
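The relevant pieces of the input file look roughly like this (a sketch, not my exact file):

```
[Preconditioning]
  [smp]
    type = SMP
    full = true # include the off-diagonal coupling blocks
  []
[]

[Executioner]
  type = Transient
  solve_type = NEWTON
  petsc_options_iname = '-pc_type -pc_asm_overlap -sub_pc_factor_levels'
  petsc_options_value = 'asm 2 4'
[]
```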

I am wondering what the "correct" value for nl_rel_tol would be.

I played around with the initial values a bit. These are having a big influence on the non-linear residual. In general, how does one know that enough non-linear iterations have been performed?

Best regards,
Nikhil

Alexander Lindsay

Apr 15, 2020, 6:28:23 PM
to moose...@googlegroups.com
Is your application somewhere that I can clone it? It would probably be faster for me to be able to play around directly with your app instead of iterating over email.

It's good to hear that the linear solve is converging. The fact that the non-linear residual is not converging (whereas it does with PJFNK, right?) suggests to me that your Jacobian actually is not correct (although maybe it is correct for the first non-linear iteration corresponding to the initial condition; do you have constant initial conditions? Maybe there is an error in a term involving a gradient...)

The size of the initial residual can be important for numerics, but we need to nail down that the Jacobian is accurate first.


Nikhil Vaidya

Apr 16, 2020, 5:55:33 AM
to moose-users
Here is the link to the repository: https://github.com/nikhilgv91/my_app

It is private, but I have added you as a collaborator, so you should be able to clone it. I am attaching my input file and the corresponding mesh to this message.

Best,
Nikhil
coupled_eqn_w_interface.i
Mesh.msh

Alexander Lindsay

Apr 17, 2020, 2:01:52 PM
to moose...@googlegroups.com
Ok, I will take a look soon


Nikhil Vaidya

Apr 17, 2020, 3:01:37 PM
to moose...@googlegroups.com
Thanks!


Alexander Lindsay

Apr 23, 2020, 3:32:51 PM
to moose...@googlegroups.com
Ok, the attached input file runs well for me. I changed two lines:

1. Reduced the penalty on the interface kernel from 1e6 to 1e2
2. Removed the `nl_rel_tol = 8e-3` line, because with change 1. the problem converges to the default relative tolerance of 1e-8

It seems that the physics/residuals simply don't allow tying the interface solution values any tighter than what is achieved with a penalty of 1e2. If I try to bump it to even 1e3, the problem no longer converges.
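For reference, the interface tie with the reduced penalty looks something like this in the input file (variable and boundary names are placeholders, not from your actual file):

```
[InterfaceKernels]
  [potential_tie]
    type = PenaltyInterfaceDiffusion
    variable = pot_1
    neighbor_var = pot_3
    boundary = interface   # hypothetical sideset name
    penalty = 1e2          # reduced from 1e6
  []
[]
```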

Based on the excellent convergence with a penalty of 1e2, I'm inclined to think that your Jacobian is good, but I can't do really robust testing of the Jacobian unless the problem is < 500-1000 degrees of freedom. I can say that your solve runs better with SMP, NEWTON than it does with either FDP, NEWTON or SMP, PJFNK, so that is usually a pretty good indication that your Jacobian is right!

Actually, the only way I can get an SMP, PJFNK solve to converge is by using `-pc_type lu`, because when using any other preconditioning process we see classic PJFNK truncation error in the matrix-free operator, resulting in true residuals actually increasing:

   50 KSP unpreconditioned resid norm 4.500626851836e-03 true resid norm 5.632099807950e-03 ||r(i)||/||b|| 4.063439724707e-02
   51 KSP unpreconditioned resid norm 4.380993432456e-03 true resid norm 5.706280253799e-03 ||r(i)||/||b|| 4.116959332089e-02
   52 KSP unpreconditioned resid norm 4.170625674049e-03 true resid norm 6.032467763036e-03 ||r(i)||/||b|| 4.352296653503e-02
   53 KSP unpreconditioned resid norm 3.721582948545e-03 true resid norm 6.692541274787e-03 ||r(i)||/||b|| 4.828525594810e-02
   54 KSP unpreconditioned resid norm 3.713086495009e-03 true resid norm 6.715476938719e-03 ||r(i)||/||b|| 4.845073186492e-02
   55 KSP unpreconditioned resid norm 3.709427759415e-03 true resid norm 6.846552137071e-03 ||r(i)||/||b|| 4.939641142683e-02
   56 KSP unpreconditioned resid norm 3.260896792648e-03 true resid norm 8.104937748678e-03 ||r(i)||/||b|| 5.847539485676e-02
   57 KSP unpreconditioned resid norm 2.826555731824e-03 true resid norm 9.348816360129e-03 ||r(i)||/||b|| 6.744971337886e-02
   58 KSP unpreconditioned resid norm 2.076467343757e-03 true resid norm 1.123073725484e-02 ||r(i)||/||b|| 8.102737070576e-02
   59 KSP unpreconditioned resid norm 1.221657623341e-03 true resid norm 1.166892892163e-02 ||r(i)||/||b|| 8.418882999552e-02

On Thu, Apr 23, 2020 at 7:23 AM Nikhil Vaidya <nikhilv...@gmail.com> wrote:
Hi Alexander,

Sorry for that. I have updated my code. Could you please have a look again?

Best regards,
Nikhil

On Thu, Apr 23, 2020 at 12:13 AM Alexander Lindsay <alexlin...@gmail.com> wrote:
It appears that your application is not compatible with modern MOOSE. Could you update it? I see errors related to discarding const qualifiers from `Function` which was a change that happened a long time ago...

coupled_eqn_w_interface.i

Alexander Lindsay

Apr 23, 2020, 3:46:07 PM
to moose...@googlegroups.com
An alternative formulation of your residual would be to use something like `InterfaceDiffusion` (with a modification so that you can use your electrical conductivity temperature functions) in conjunction with a MatchedValueBC so that there's no penalty factor and you are strongly imposing a match for the potential values on the interface (I actually don't know if that's what you want?)

The moose/test/tests/interfacekernels/1d_interface/coupled_value_coupled_flux.i input file actually has all the objects needed to run both the way you're currently running and the way I just proposed. See the corresponding tests file for how we set up each one.
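A sketch of what that alternative formulation might look like for your problem (sideset name and conductivity property names are assumptions, and as noted you'd need to modify InterfaceDiffusion to take your temperature-dependent conductivities):

```
[InterfaceKernels]
  [flux_match]
    type = InterfaceDiffusion
    variable = pot_1
    neighbor_var = pot_3
    boundary = interface   # hypothetical sideset name
    D = sigma_1            # assumed conductivity property names
    D_neighbor = sigma_3
  []
[]

[BCs]
  [value_match]
    # Strongly impose pot_3 = pot_1 on the interface nodes
    type = MatchedValueBC
    variable = pot_3
    boundary = interface
    v = pot_1
  []
[]
```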

Alexander Lindsay

Apr 23, 2020, 4:00:35 PM
to moose...@googlegroups.com
A bit rusty on my Maxwell's equations. Based on the last page here: http://home.sandiego.edu/~ekim/e171f00/lectures/boundary.pdf, the continuity of the tangential electric field component at an interface translates to continuity of potential... so yes, I'm guessing you really want pot_1 and pot_3 to be as matched as possible at that interface. So try out the second method I proposed in my last email and see whether it converges...
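Spelled out (a sketch of the argument, with $\varphi$ the potential and $\partial_\tau$ the derivative along a tangential direction on the interface $\Gamma$):

```latex
\mathbf{E} = -\nabla \varphi, \qquad
E_\tau^{(1)} = E_\tau^{(2)}
\;\Longrightarrow\; \partial_\tau \varphi_1 = \partial_\tau \varphi_2 \ \text{on } \Gamma
\;\Longrightarrow\; \varphi_1 - \varphi_2 = \text{const on } \Gamma
```

and since both potentials are measured against the same reference (the grounded boundary) and there is no surface dipole layer, the constant is zero, i.e. $\varphi_1 = \varphi_2$ on the interface.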

Nikhil Vaidya

May 8, 2020, 11:40:35 AM
to moose-users
Hi Alex,

Thanks for the suggestion about InterfaceDiffusion and MatchedValueBC. That method works great for the steady case. My goal is ultimately to solve a transient interface problem. I tried a transient run, and now the non-linear residuals do reduce, but not below the default non-linear tolerance. The reason for non-convergence is a diverged line search. Again, the Jacobian ratios are ~1e-7. Does this mean that the Jacobians are incorrect? Attached is a sample output from the solve.

Best,
Nikhil
output.txt

Alexander Lindsay

May 11, 2020, 12:47:32 PM
to moose...@googlegroups.com
On Fri, May 8, 2020 at 8:40 AM Nikhil Vaidya <nikhilv...@gmail.com> wrote:
Hi Alex,

Thanks for the suggestion about InterfaceDiffusion and MatchedValueBC. That method works great for the steady case.

That's great to hear!

My goal is to ultimately solve a transient interface problem. I tried a transient run, and now the non-linear residuals do reduce, but not below the default non-linear tolerance. The reason for non-convergence is a diverged line search. Again the jacobian ratios are ~1e-7. Does this mean that the Jacobians are incorrect? Attached is a sample output from the solve.

A ratio of 1e-7 is pretty good. I think we have a block like this:

[Debug]
  show_var_residual_norms = true
[]

Can you try that? It looks like your residual is getting jammed and just can't go any lower. I'm curious if there's one particular variable that is the problem. That being said, you're still getting a drop of 7 orders of magnitude in the nonlinear residual, which isn't bad. You might just try loosening the nonlinear residual tolerance (nl_rel_tol) and see whether you're satisfied with the results.

Alex


Nikhil Vaidya

May 12, 2020, 3:03:06 AM
to moose...@googlegroups.com
Hi Alex,

I tried running with the Debug block in the input file. The output I obtained is attached. The residuals for the variables "T" and "pot_5" seem to have convergence problems. The residual for variable "T" starts at 1e-13 in each non-linear iteration. My problem has constant initial conditions for "T".

Best regards,
Nikhil

output.txt

Alexander Lindsay

May 12, 2020, 11:32:39 AM
to moose...@googlegroups.com
Hmm, I still think this looks ok. T jumps up to a value > 1 after the first nonlinear iteration and finishes in the 1e-8 range. pot_5 finishes ~8 orders of magnitude below where it starts. So I'd try just loosening the relative tolerance a little bit and seeing whether your results look good to the eye.
