DG explicit time integration for linear advection equation with MeshWorker (suggestions)


vachan potluri

Sep 17, 2019, 3:03:52 AM9/17/19
to deal.II User Group
Hello all,


I am a beginner in deal.II. I want to solve a linear, transient advection equation explicitly in two dimensions using DG. The resulting discrete equation will have a mass matrix as the system matrix, and the right-hand side will be a sum of terms that depend on the previous solution (multiplied by the mass, differentiation, flux, and boundary matrices).

[attachment: linearAdvection2D.png]


Instead of using MeshWorker::loop for every single time step, I think the following approach would be better. I am using a ghost-cell approach for the boundary condition: it can be imposed through an appropriately calculated normal numerical flux.

  1. Before any time steps, use one MeshWorker::loop for each of the four matrices: mass, differentiation, flux and boundary
  2. During each update
    1. Again use MeshWorker::loop, but this time only to calculate the normal numerical flux.
    2. Use the normal numerical flux and the previous solution to obtain the RHS using appropriate matrix-vector products
    3. Solve the system
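
To make this concrete, the update in step 2 would look roughly like the following (only a sketch; the matrix and vector names are placeholders, and the exact signs/scaling of the terms depend on the actual discretisation):

// Assembled once before time stepping (step 1):
//   mass_matrix, diff_matrix, flux_matrix, boundary_matrix : SparseMatrix<double>
// Updated every step:
//   old_solution, solution, normal_flux, rhs : Vector<double>
for (unsigned int step = 0; step < n_time_steps; ++step)
  {
    // 2.1: compute 'normal_flux' on faces (e.g. with a reduced MeshWorker::loop)

    // 2.2: build the right-hand side from matrix-vector products
    diff_matrix.vmult(rhs, old_solution);         // volume term
    flux_matrix.vmult_add(rhs, normal_flux);      // interior-face term
    boundary_matrix.vmult_add(rhs, normal_flux);  // boundary term
    rhs *= time_step;
    mass_matrix.vmult_add(rhs, old_solution);     // M u^n + dt * (...)

    // 2.3: solve M u^{n+1} = rhs
    SolverControl            control(1000, 1e-12);
    SolverCG<Vector<double>> solver(control);
    solver.solve(mass_matrix, solution, rhs, PreconditionIdentity());

    old_solution = solution;
  }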
I have a few questions regarding this approach.
  1. Is it feasible?
  2. Can it give a significant improvement in performance over assembling at every time step?
  3. (Assuming the answers to the above are positive) For higher orders, the flux and boundary matrices will be very sparse. The normal numerical flux (which will be a vector) will also be sparse. Can the matrix-vector products involving these be optimised by using an appropriate sparsity pattern? Can a sparsity pattern be specified for a vector too?

Doug

Sep 17, 2019, 7:17:38 PM9/17/19
to deal.II User Group
Hello Vachan,

What you are describing is very similar to Hesthaven's framework of building matrix operators and applying them to your degrees of freedom. The main advantage of this operator form is to have common mass (for linear elements), differentiation, flux, and boundary operators for all your elements, since the operators are defined on the reference element. Therefore, using a MeshWorker loop to go through every single element to build those matrices would result in building redundant local matrices and storing them in a global matrix, which can be very expensive memory-wise.

Yes, you could use MeshWorker to compute only your normal fluxes. 

So, it's feasible, but I wouldn't use the MeshWorker loop to build those matrices. You are welcome to look at step-33 to see how you can loop through your elements and then build your different mass matrices. Then use a similar loop to apply your interpolation, differentiation, and lifting matrices to the local solution to assemble your right-hand side.

Yes, if you build those operators, you will save a significant amount of time in explicit time stepping, since you will basically have concatenated multiple operators (interpolation, differentiation, integration, etc.) into a single matrix-vector product.
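
For instance (just a sketch with made-up names, on small local FullMatrix operators), you can fold the inverse mass matrix into the differentiation operator once during setup:

FullMatrix<double> mass(n, n), diff(n, n), mass_inv(n, n), combined(n, n);
// ... fill 'mass' and 'diff' on the reference element ...
mass_inv.invert(mass);           // small local matrix, cheap to invert once
mass_inv.mmult(combined, diff);  // combined = M^{-1} D, applied at every time step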

I think I know what you are referring to with that sparse flux matrix. If it's a local matrix, you would want to store the lifting operator, which is the mass inverse times your sparse flux matrix, and which is not necessarily sparse (unless the mass matrix is diagonal). Either way, I wouldn't worry too much about that operation taking a long time. Your other operations on the volume take much longer, and if you decide to go with nonlinear fluxes, the time it takes to apply this lifted flux vector is negligible. deal.II has a SparseMatrix class, in case you want to use it, and you can set the SparsityPattern yourself if you want.
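
For example (only a sketch; dof_handler is whatever DoFHandler you already have), a pattern that includes the DG face couplings would be set up like this:

#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_pattern.h>
#include <deal.II/lac/sparse_matrix.h>

DynamicSparsityPattern dsp(dof_handler.n_dofs());
DoFTools::make_flux_sparsity_pattern(dof_handler, dsp); // cell + face-neighbor couplings
SparsityPattern sparsity;
sparsity.copy_from(dsp);                 // 'sparsity' must outlive the matrix
SparseMatrix<double> flux_matrix;
flux_matrix.reinit(sparsity);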

If you were planning on building multiple global matrices, then yes, your approach of using SparseMatrix makes sense; just be careful about memory. If you only plan on doing linear advection, your problems should be simple enough that you wouldn't care about memory or computational time. I haven't seen people build those global matrices, but it should be faster than assembling at every time step. It's the old trade-off between memory and computation.

Doug

Praveen C

Sep 17, 2019, 10:27:04 PM9/17/19
to Deal. II Googlegroup
If you are working only on Cartesian grids, these matrices will be the same for all cells, except possibly for scalar factors. In this case, you can first compute them on a reference cell. For assembly, you loop over cells and perform a small matrix-vector product on each cell. Similar things can be done for the face integrals.
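
Roughly, the assembly loop could look like this (just a sketch with indicative names: ref_diff would be the small differentiation matrix computed once on the reference cell, and cell_scaling_factor is a hypothetical helper for the geometric factor, which depends on the operator and the cell size):

const unsigned int dofs_per_cell = fe.dofs_per_cell;
Vector<double> local_u(dofs_per_cell), local_rhs(dofs_per_cell);
std::vector<types::global_dof_index> dof_indices(dofs_per_cell);

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    cell->get_dof_values(old_solution, local_u);  // restrict u^n to the cell
    ref_diff.vmult(local_rhs, local_u);           // small dense matrix-vector product
    local_rhs *= cell_scaling_factor(cell);       // hypothetical geometric factor
    cell->get_dof_indices(dof_indices);
    for (unsigned int i = 0; i < dofs_per_cell; ++i)
      rhs(dof_indices[i]) += local_rhs(i);
  }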
best
praveen


vachan potluri

Sep 18, 2019, 12:33:43 AM9/18/19
to deal.II User Group
Doug and Praveen,

Thanks for your answers. I had a look at step-33. As far as I understand, although the looping over cells is not done through MeshWorker, the assembly is still global! So, for a non-Cartesian mesh, I think you are suggesting using such a loop over all cells to calculate the local matrices. Would these cell-local matrices be stored as an array of matrices in the conservation law class?

I now have one more question. In the assemble_face_term function of step-33, the normal numerical flux is calculated. This function is called for every cell-neighbor pair from the assemble_system function. This means, if I am not wrong, that at every interior face quadrature point the numerical flux is calculated twice. This might be very costly. Is there a way to avoid this? One could probably force only one cell to calculate the numerical flux at a face based on the orientation of the face normal, but then how can the calculated flux be communicated to the other cell? Or, even better, is it possible to loop over faces? We could then add contributions to both cells sharing the face without the double computation.

Thanks again!

Praveen C

Sep 18, 2019, 1:53:11 AM9/18/19
to Deal. II Googlegroup


This means, if I am not wrong, that at every interior face quadrature point the numerical flux is calculated twice. [...] Or, even better, is it possible to loop over faces?
step-33 does compute interior face integrals twice.

One way I handle this is to attach a cell user index 

// set cell number
unsigned int index = 0;
for (typename Triangulation<dim,spacedim>::active_cell_iterator
       cell = triangulation.begin_active();
     cell != triangulation.end(); ++cell, ++index)
  cell->set_user_index(index);

and calculate the face integral only once

for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
  {
    if (cell->face(f)->at_boundary())
      {
        // boundary flux integral
      }
    else if (cell->neighbor_is_coarser(f))
      {
        // assemble from the finer side;
        // add residual to this cell, subtract from the coarser neighbor
      }
    else if (cell->user_index() < cell->neighbor(f)->user_index())
      {
        // compute the interior face integral once;
        // add residual to this cell, subtract from the neighbor
      }
  }

You add/subtract the face integral to/from the two cells adjacent to the face.

If you have an adaptively refined grid, then you assemble from the finer side.

Best
praveen

Doug

Sep 18, 2019, 2:10:23 AM9/18/19
to deal.II User Group
Hello Vachan,

Quick note: I'm only suggesting writing the loops yourself because I am not totally familiar with MeshWorker. Writing the loops myself exposed the mechanics a bit more, which is what I wanted.

I was pointing to step-33 for the loop, not the operators. I don't remember seeing a deal.II example that implements the operator form.

So, I suggested looping through the cells to compute the operators that are not common to all cells. This is usually just the mass matrix if your cells are not linear (i.e. not a Cartesian mesh). The other operators can be made the same for all the cells, except for a cofactor Jacobian matrix that transfers derivatives and normals from reference space to physical space. If you only ever plan on using a Cartesian mesh, then all your local matrices will be the same, and there is no point in assembling a global matrix to operate on your solution. If I were you, I'd take a look at chapters 3 & 6 of Hesthaven's book, where he builds the same local operators that are used on each cell. Therefore, you wouldn't have an array of differentiation matrices, just one small differentiation matrix that applies to all cells. Feel free to look at how I do the [mass matrix](https://github.com/dougshidong/PHiLiP/blob/master/src/dg/dg.cpp#L752).
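
To illustrate what I mean by a single small matrix (a sketch only, not code from my repository): a differentiation-type operator can be built once on the unit cell and then reused for every Cartesian cell, for example

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/full_matrix.h>

template <int dim>
dealii::FullMatrix<double> reference_differentiation_matrix(const unsigned int degree)
{
  using namespace dealii;
  Triangulation<dim> reference_cell;
  GridGenerator::hyper_cube(reference_cell);   // a single unit cell
  FE_DGQ<dim>   fe(degree);
  QGauss<dim>   quadrature(degree + 1);
  FEValues<dim> fe_values(fe, quadrature,
                          update_values | update_gradients | update_JxW_values);
  fe_values.reinit(reference_cell.begin_active());

  FullMatrix<double> D(fe.dofs_per_cell, fe.dofs_per_cell); // zero-initialized
  for (unsigned int q = 0; q < quadrature.size(); ++q)
    for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
      for (unsigned int j = 0; j < fe.dofs_per_cell; ++j)
        D(i, j) += fe_values.shape_value(i, q)
                   * fe_values.shape_grad(j, q)[0]  // x-derivative, as an example
                   * fe_values.JxW(q);
  return D;
}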

Step-33 is CG, by the way; therefore, face terms between two cells at the same level (no hanging nodes) don't need to be added to the residual, so they don't show that case. Feel free to look at my [loops](https://github.com/dougshidong/PHiLiP/blob/master/src/dg/dg.cpp#L246), where I loop over all the cells and all their faces. For internal faces, only one cell is responsible for evaluating the flux and adding it to both sides.

If someone wants to chime in with a better way to loop over all the faces just once, I would love to know about it too.

You can also keep an eye on the code I linked. It currently does not pre-compute the operators, but it's only a matter of moving the loops out into pre-processing.

Praveen,

My loops were indeed inspired by your dflo ;). Note that if you use a distributed grid, the user_index() will not be consistent across MPI boundaries; therefore, you might end up with doubly-counted faces. Here is a [commit](https://github.com/dougshidong/PHiLiP/commit/d0d7e6bd663b0cef032c7f953d2b2dddce4950a7#diff-d2da12f45bddf4e965acafc118ff8366) that replaces that if with an if-statement that works for refined/distributed grids.

Praveen C

Sep 18, 2019, 2:57:47 AM9/18/19
to Deal. II Googlegroup


On 18-Sep-2019, at 11:40 AM, Doug <doug.s...@gmail.com> wrote:

Note that if you use a distributed grid, the user_index() will not be consistent across MPI boundaries; therefore, you might end up with doubly-counted faces.

Thanks a lot for pointing this out. I suppose I never stumbled across this issue since I used MeshWorker in my parallel code.

Best regards
praveen

vachan potluri

Sep 19, 2019, 1:45:18 AM9/19/19
to deal.II User Group
step-33 does compute interior face integrals twice.

One way I handle this is to attach a cell user index
 Thanks Praveen, this is similar to the owner/neighbour concept in OpenFOAM :).


Doug,
Many thanks for the detailed explanations and your code :).