Questions related to FE_Nothing


RAJAT ARORA

Nov 17, 2017, 6:38:25 PM11/17/17
to deal.II User Group
Hello all, 

I am using deal.II to solve a system of Hamilton-Jacobi equations. I need to use the FE_Nothing element in a part of my domain.

I read a couple of posts in this forum regarding this, but I still have some questions. I would appreciate it if anyone could answer them.

Since I need MPI and also hp::DoFHandler, I cannot use parallel::distributed::Triangulation. So I need to use a plain Triangulation and then use METIS to partition it.

In one of the posts I read:

"You can always partition a regular triangulation by
GridTools::partition_triangulation (n_mpi_processes, triangulation);
and adjust your assembly loops to
if (cell->subdomain_id() == this_mpi_process)
to parallelize with MPI. The downside is that all your processes will now own the **complete mesh**.
There are also some other minor downsides like serial SolutionTransfer and not straightforward serialization/restart."
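
To make sure I understand that suggestion, here is an untested sketch of how I read it (dim = 2, the hypercube mesh, and FE_Q(1) are just placeholders for my actual setup):

#include <deal.II/base/mpi.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const unsigned int n_mpi_processes =
    Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
  const unsigned int this_mpi_process =
    Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  const int dim = 2;
  Triangulation<dim> triangulation;   // every rank stores the full mesh
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);

  // Assign a subdomain id to every active cell via METIS.
  GridTools::partition_triangulation(n_mpi_processes, triangulation);

  FE_Q<dim>       fe(1);
  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // Assembly loop: each rank skips the cells belonging to other subdomains.
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->subdomain_id() == this_mpi_process)
      {
        // ...assemble local matrix/rhs and distribute to the global objects...
      }
}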

1. What exactly are the minor downsides regarding solution transfer and restart? I have been using parallel::distributed::SolutionTransfer up to now and need to keep the checkpoint/restart option available.

2. Is there no concept of locally owned or ghost cells if I use GridTools::partition_triangulation()? Until now, some calculations I was doing, such as moving the mesh, were for both locally owned and ghost cells. What happens here?

3. Will there be any change regarding locally owned DoFs and locally relevant DoFs?

4. Even with parallel::distributed::Triangulation, the coarsest mesh was owned by all processors. So what exactly does this mean: "The downside is that all your processes will now own the **complete mesh**"?

Thanks.

RAJAT ARORA

Nov 19, 2017, 9:58:25 PM11/19/17
to deal.II User Group
Hello,

I am partly answering my own questions. I had missed step-18, which discusses some of the things I asked about. (I had visited step-18 when I was using version 8.3 and didn't know that it has changed since deal.II 8.4.)

One can use parallel::shared::Triangulation, which automatically partitions the grid using METIS. Also, the concepts of locally owned DoFs, locally relevant DoFs, and locally owned and ghost cells remain valid when using parallel::shared::Triangulation.
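
For reference, an untested sketch of the setup I mean (dim, the mesh, and FE_Q(1) are placeholders):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/shared_tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  const int dim = 2;
  parallel::shared::Triangulation<dim> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);   // cells get subdomain ids via METIS automatically

  FE_Q<dim>       fe(1);
  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // The usual index sets are available, just as with p::d::Triangulation:
  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
  IndexSet       locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  // Cell loops can still be restricted to locally owned cells:
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        // ...work on cells owned by this rank...
      }
}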

But I am still not sure about the answers to the other questions (mentioned below). I would appreciate it if anyone can help me with these.

1. Can parallel::shared::Triangulation be used with hp::DoFHandler? I want to use FE_Nothing in a certain part of the domain.

2. What are the minor downsides regarding solution transfer and restart? Until now I was using parallel::distributed::SolutionTransfer and was able to use Triangulation::save() / Triangulation::load() for serialization and deserialization. How can I use checkpoint/restart with parallel::shared::Triangulation?
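
For reference, the checkpoint/restart workflow I mean looked roughly like this with parallel::distributed::Triangulation (untested sketch; the vector type, file name, and the surrounding setup are placeholders):

#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;

template <int dim>
void checkpoint(const DoFHandler<dim> &dof_handler,
                const LinearAlgebra::distributed::Vector<double> &ghosted_solution,
                parallel::distributed::Triangulation<dim> &triangulation)
{
  parallel::distributed::SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>>
    solution_transfer(dof_handler);
  solution_transfer.prepare_for_serialization(ghosted_solution);
  triangulation.save("checkpoint.mesh");   // also writes the attached solution data
}

template <int dim>
void restart(DoFHandler<dim> &dof_handler,
             const FiniteElement<dim> &fe,
             LinearAlgebra::distributed::Vector<double> &owned_solution,
             parallel::distributed::Triangulation<dim> &triangulation)
{
  // The same coarse mesh must have been re-created before calling load().
  triangulation.load("checkpoint.mesh");
  dof_handler.distribute_dofs(fe);

  parallel::distributed::SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>>
    solution_transfer(dof_handler);
  solution_transfer.deserialize(owned_solution);   // non-ghosted output vector
}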


Thanks.

Bruno Turcksin

Nov 20, 2017, 8:53:16 AM11/20/17
to deal.II User Group
Hi,


On Sunday, November 19, 2017 at 9:58:25 PM UTC-5, RAJAT ARORA wrote:

But I am still not sure about the answers to the other questions (mentioned below). I would appreciate it if anyone can help me with these.

1. Can parallel::shared::Triangulation be used with hp::DoFHandler? I want to use FE_Nothing in a certain part of the domain.
Yes, but you will need the development version of deal.II (see https://github.com/dealii/dealii/pull/4593). hp support for parallel computations is still a work in progress; see https://github.com/dealii/dealii/projects/5
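
An untested sketch of what that combination could look like with the development version (FE_Q(1) for the active part of the domain and material_id as the selection criterion are just placeholder choices):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/shared_tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/hp/dof_handler.h>
#include <deal.II/hp/fe_collection.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_nothing.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  const int dim = 2;
  parallel::shared::Triangulation<dim> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(3);

  hp::FECollection<dim> fe_collection;
  fe_collection.push_back(FE_Q<dim>(1));       // index 0: the "real" element
  fe_collection.push_back(FE_Nothing<dim>());  // index 1: no unknowns

  hp::DoFHandler<dim> dof_handler(triangulation);

  // Choose the element on each cell before distributing DoFs:
  for (const auto &cell : dof_handler.active_cell_iterators())
    cell->set_active_fe_index(cell->material_id() == 0 ? 0 : 1);

  dof_handler.distribute_dofs(fe_collection);
}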

Best,

Bruno