
Shahin Heydari

May 20, 2021, 4:11:47 AM
to dea...@googlegroups.com
Dear all
Hello, hope you are safe and healthy.

I am trying to use deal.II for my work, but I have encountered some difficulties during the implementation and would highly appreciate your help.

I want to use a flux-corrected transport (FCT) scheme. For this, I need to loop over all neighbors of a given node and compute the fluxes into that node; there is also a flux limiter that depends on these fluxes. So I cannot simply go cell by cell as many of the deal.II tutorials suggest; instead, I need to work on a patch consisting of all neighboring cells. Is there a class in deal.II that supports this kind of patch-wise operation? I would really appreciate any help and feedback.

On the other hand, there is a part in which I should compute a lumped mass matrix and add it to the scheme, but I feel this is also wrong and should not be done over a single cell (please see the attachment).

I am also trying to implement some initial conditions. Whenever I run each part of the code separately I get the expected picture, but when I put it all together in a loop it does not work (please see the attachment). I would be thankful if you could let me know how to fix it.

I am looking forward to hearing from you.
Sincerely 
Shahin Heydari

Capture.JPG
Capture1.JPG
lump.pdf

Bruno Turcksin

May 20, 2021, 8:48:58 AM
to deal.II User Group
Shahin,

I am not sure I understand what your question about lumping is.

Best,

Bruno

Shahin Heydari

May 21, 2021, 2:54:32 PM
to dea...@googlegroups.com
Dear Bruno

Thank you so much for your reply; it was very helpful. Regarding the mass lumping question, I just compute it as follows:

for (unsigned int q_index = 0; q_index < n_q_points; ++q_index)
{
  double lump = 0.0;
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    for (unsigned int j = 0; j < dofs_per_cell; ++j)
      lump += fe_values.shape_value(i, q_index) *
              fe_values.shape_value(j, q_index) *
              fe_values.JxW(q_index);

  lump_cell_matrix(i,i) = lump;
}

 

Another way could be:

for (unsigned int q_index = 0; q_index < n_q_points; ++q_index)
{
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    for (unsigned int j = 0; j < dofs_per_cell; ++j)
      if (i == j)
        lump_cell_matrix(i,j) += fe_values.shape_value(i, q_index) *
                                 fe_values.JxW(q_index);
}

But I am not sure whether this always works. What if I want to use Q2 elements? I feel it may not be correct. I would be happy to hear your opinion or advice.

Regards
Shahin


--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dealii+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dealii/c77c7def-c89e-4bdb-8f2d-6cc23fc1eec8n%40googlegroups.com.


Bruno Turcksin

May 21, 2021, 3:27:54 PM
to dea...@googlegroups.com
Shahin,

The two codes are not equivalent, and I think they are both wrong. The second code is not lumping anything; you just keep the diagonal of the matrix, and that's it. The first code sets lump = 0.0 in the wrong place, and you either forgot a += or need to reorder the loops. I think you want something like this:

for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
  double lump = 0.;
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
  {
    for (unsigned int q_index = 0; q_index < n_q_points; ++q_index)
    {
      lump += fe_values.shape_value(i, q_index) *
              fe_values.shape_value(j, q_index) *
              fe_values.JxW(q_index);
    }
  }
  lump_cell_matrix(i,i) = lump;
}



Best,

Bruno


Shahin Heydari

May 21, 2021, 3:52:47 PM
to dea...@googlegroups.com
Thank you for your help, I really appreciate it. I thought that with the second version we could also compute the lumped matrix, because we have
m_i = sum_j m_ij = int(phi_i dx)
and
M_L = diag(m_1, m_2, ..., m_N).


Regards
Shahin



Wolfgang Bangerth

May 24, 2021, 11:46:27 AM
to dea...@googlegroups.com
On 5/21/21 1:52 PM, Shahin Heydari wrote:
> m_i = sigma (m_ij) = int(phi_i dx)

Right. You compute the integral as a sum over the integrals on each cell. So
when you do

for (cell = ...)
  for (i = ...)
  {
    lump = 0;
    for (q = ...)
      lump += ...

    lump_cell_matrix(i,i) = lump;
  }

then this is correct if you then add lump_cell_matrix(i,i) to your global matrix.

You mention in your original question that you are unsure about what to do
with Q2 (or higher) elements. This is a difficult question, but there is a
good amount of literature on the topic. Just search for "mass lumping higher
order elements".

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Shahin Heydari

May 25, 2021, 5:52:15 AM
to dea...@googlegroups.com
Thank you so much, I appreciate your help.

Regards
Shahin


Simon Wiesheier

Jun 4, 2021, 11:15:01 AM
to dea...@googlegroups.com
Hello,

I am dealing with a question similar to Shahin's, i.e., computing global L2-projections and approximations of mass matrices with lumped integration, so I thought it makes sense to continue in this thread.

I produced two fields via L2-projection:
- In the first case I computed the mass matrix as a full matrix, i.e., (phi_i, phi_j).
- In the second case I approximated the mass matrix with lumped integration. Instead of storing a mass matrix, I computed a "mass vector" by summing up all off-diagonal elements in each row of the corresponding mass matrix and adding them to the corresponding diagonal entry. From the start I directly used a vector instead of a matrix.

The distribution of the field (stresses) in terms of critical peaks, etc. is of course the same.
However, I am a little surprised that the "lumped" nodal values come out lower at most places. I actually expected the other way around: thinking of a simple bar element with two nodes, lumped integration concentrates the total mass of the bar at the nodes, so the inertia becomes bigger compared with other quadrature formulas.

So, intuitively speaking, what is the effect of lumped integration? From a mathematical viewpoint there are no more couplings between the DoFs. But does this lead to lower nodal values, higher nodal values, or is it context dependent?

Best
Simon


Wolfgang Bangerth

Jun 5, 2021, 7:58:39 PM
to dea...@googlegroups.com
On 6/4/21 9:14 AM, Simon Wiesheier wrote:
>
> So intuitively speaking, what is the effect of the lumped integration? From a
> mathematical viewpoint there are no more couplings between the DoFs. But does
> this lead to lower nodal values, higher nodal values or is this context dependent?

I have to admit that I have no intuition.

Do both schemes converge? To the same solution?

Simon

Jun 7, 2021, 10:12:20 AM
to deal.II User Group
Yes, both schemes do converge to the same solution.
Maybe I should do some postprocessing with a visualization to figure out what the lumped integration does with the values at the quadrature points.

I've got one more basic question regarding the determination of the convergence rate as it is done in many tutorials (step-7, step-51, ...). For that purpose I watched your new video lectures (3.9, 3.91, 3.93, 3.95), where you derived the error estimates for the Laplace equation in certain norms. You mentioned there that for many PDEs one arrives at the same result, i.e., that the finite element error is at least as good as the interpolation error times some equation-dependent constant.
I want to determine the convergence rate for the elasticity equations with the PDE div(stress tensor) = 0.
- Since I determine the convergence rate, the aforementioned constant in the inequality cancels out, i.e., I should get the same convergence rate as for the Laplace equation. Is that right?

- I actually solve the nonlinear elasticity equations (hyperelasticity); the finite element spaces, test functions, etc. are the same as for linear elasticity, but the stresses now depend nonlinearly on the gradient of the displacements. My question is whether the nonlinearity changes the error estimates.
Let's assume I determine the convergence rate in the L2 norm for linear elasticity, which in the best case is 2 for p = 1. Should I also get 2 for the nonlinear elasticity equations?

I know that these questions are not directly related to deal.II, but I guess you have deep knowledge in this area. I would appreciate it if you could help me answer them.

Best
Simon

Wolfgang Bangerth

Jun 7, 2021, 11:47:51 AM
to dea...@googlegroups.com
On 6/7/21 8:12 AM, Simon wrote:
> Yes, actually both schemes converge to the same solution.
> Maybe I should do some postprocessing with a visualiuation in order to figure
> out what the lumped integration does with the values at the qps.

Then the difference you observe is simply numerical error.


> I want to determine the convergence rate for the elasticity equations with the
> PDE: div(stress-tensor) = 0.
> -Since I determine the convergence rate, the before mentioned *constant* in
> the inequality cancels out, i.e. I should get the same convergence rate as for
> the laplace equation. Is that right?

Correct. The *rate* is independent of the factor.


> -I actually solve the nonlinear elasticity equations (hyperelasticity); The
> finite element spaces, test functions,... are the same as for linear
> elasticity, but the stresses depend now nonlinear on the gradient of the
> displacements. My question is if the nonlinearity changes the error estimates?
> Let´s assume I determine the convergence rate in the L2-norm for linear
> elasticity, which is in the best case 2 for for p=1. Should I also get 2 for
> the nonlinear elasticity equations?

I'm sure someone has proved this, but my intuition is that for every elliptic
equation without an advection term, you will get the same convergence rate as
for the Laplace equation as long as the coefficient in the elliptic operator
is nice enough. Nice enough here will mean that your stress-strain
relationship does not degenerate, i.e.,
||sigma|| / ||eps||
does not go to zero or infinity, and is a smooth function of some sort.

Simon Wiesheier

Jun 7, 2021, 1:07:27 PM
to dea...@googlegroups.com
" I'm sure someone has proved this, but my intuition is that for every elliptic
equation without an advection term, you will get the same convergence rate as
for the Laplace equation as long as the coefficient in the elliptic operator
is nice enoug."

I'll will search in the literature to get deeper information, so thanks for the hint!
And the nonlinear equations of elasticity are actually elliptic equations?

One last question regarding the convergence rate:
As I said previously in this thread, I am projecting the dim*dim coefficients of the stress tensor, which are only available at the quadrature points, to the nodes using a global L2-projection and, for comparison, a superconvergent approach (superconvergent patch recovery, introduced by Zienkiewicz and Zhu).
I will then also determine the convergence rate of the two approaches. As the name suggests, the superconvergent approach should give me a higher convergence rate than expected.
- Is the expected rate the one that the Laplace equation (assuming this carries over to elasticity, as just discussed) yields for the gradients of the solution, i.e., ||grad(u - u_h)||_L2 <= ... h^p?

It is not yet fully clear to me how the convergence rate of a projection method (L2, SPR) is linked to ||grad(u - u_h)||_L2 <= ... h^p.
Basically, the input for the projection methods are the gradients (stresses) at the quadrature points. Of course these values will be changed by the method (apply the projection -> get_function_values), but is this "change" totally independent of the rate p from ||grad(u - u_h)||_L2 <= ... h^p?
This seems to be the case, since in the literature people compare the rate of an L2-projection with the rate of an SPR projection (or other methods).

Thanks again for helping me!

Best
Simon


Wolfgang Bangerth

Jun 7, 2021, 4:49:56 PM
to dea...@googlegroups.com, Simon Wiesheier

> And the nonlinear equations of elasticity are actually elliptic equations?

Yes. If the stress-strain relationship is monotonic, i.e., if the stress
increases with increasing strain. (So no bizarre materials with negative
compressibility, for example.)


> -Is the *expected rate* the rate which the laplace equation (assuming that
> this holds for elasticity as just discussed) yields for the gradients of the
> solution, i.e. ||grad (u-u_h)||_L2 <= ...h^p ?

Yes. You'd expect h^p for the gradients, and the ZZ approach can yield
something like h^{p+1}, at least in certain points. That is, you'd get
something like
|| r - nabla u || = O(h^{p+1})
where r is the reconstructed gradient. (Assuming your coefficient is 1.
Otherwise you'd have something like
|| r - kappa nabla u || = O(h^{p+1})
where r is the reconstructed stress and kappa is the stress-strain tensor.)


> For me it's not yet fully clear how convergence rate of a projection method
> (L2, SPR) is linked to ||grad (u-u_h)||_L2 <= ...h^p .
> Basically the input for the projection methods are the gradients (stresses) at
> the qps. Of course these values will be changed by the method( apply
> projection -> get_function_values), but is this "change" totally independent
> of the rate p from ||grad (u-u_h)||_L2 <= ...h^p ?

When you say "these values will be changed", can you explain what you refer
to? What is "before" and what is "after" this change?