Compute the divergence of a tensor


Sclah

Apr 4, 2025, 11:24:38 AM
to deal.II User Group
Hello everyone,
I am trying to implement a continuum mechanics simulation and I am running into trouble computing the divergence of the inverse of the deformation gradient. I need it to assemble the terms of the variational form.
My code reads something like the following:

- take the gradient of the previous solution
fe_values[components].get_function_gradients(old_sol, local_old_grad);
- for every quadrature point "q" compute the deformation gradient (F = Identity + grad(u))
Tensor<2, dim> F = Identity + local_old_grad[q];
Tensor<2, dim> F_inv = invert(F);
Now I'd like to compute the divergence of F_inv; is there a way to do that?
I was thinking about automatic differentiation, but maybe there is a more straightforward way.

Thank you

Wolfgang Bangerth

Apr 4, 2025, 11:49:05 AM
to dea...@googlegroups.com
On 4/4/25 09:24, Sclah wrote:
>
> - take the gradient of the previous solution
> fe_values[components].get_function_gradients(old_sol, local_old_grad);
> - for every quadrature point "q" compute the deformation gradient
> (F = Identity + grad(u))
> Tensor<2, dim> F = Identity + local_old_grad[q];
> Tensor<2, dim> F_inv = invert(F);
> Now I'd like to compute the divergence of F_inv; is there a way to do that?
> I was thinking about automatic differentiation, but maybe there is a more
> straightforward way.

You need to apply the chain rule. If I understand correctly, you need
div ( [I + grad(u)]^{-1} )
= trace grad ( [I + grad(u)]^{-1} )
which is of the form
grad ( A^{-1} )
and which you can compute by observing that on the one hand
grad ( A A^{-1} ) = grad I = 0
and on the other hand by using the product rule
grad ( A A^{-1} ) = (grad A) A^{-1} + A (grad A^{-1})
and so
grad A^{-1} = - A^{-1} (grad A) A^{-1}
with an appropriate choice of contraction over indices. So that gives you
something (you'll have to check the details) of the form
grad ( [I + grad(u)]^{-1} )
= - Finv grad (I + grad(u)) Finv
= - Finv [grad^2(u)] Finv
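
In deal.II terms, this might look like the sketch below. It is untested, and you will want to verify the index contractions yourself; I am assuming the Tensor<3, dim> objects returned by get_function_hessians() have components H[i][j][k] = d^2 u_i / (dx_j dx_k), and that you have added update_hessians to your FEValues update flags:

  // hypothetical names, mirroring your snippet above:
  std::vector<Tensor<3, dim>> local_old_hess(n_q_points);
  fe_values[components].get_function_hessians(old_sol, local_old_hess);

  // at quadrature point q:
  const Tensor<3, dim> &H = local_old_hess[q]; // H[i][j][k] = d^2 u_i / (dx_j dx_k)
  const Tensor<2, dim> F_inv = invert(F);

  // grad(F^{-1})[i][j][k] = - F_inv[i][a] H[a][b][k] F_inv[b][j],
  // and the divergence contracts the last two indices (k = j):
  Tensor<1, dim> div_F_inv;
  for (unsigned int i = 0; i < dim; ++i)
    for (unsigned int j = 0; j < dim; ++j)
      for (unsigned int a = 0; a < dim; ++a)
        for (unsigned int b = 0; b < dim; ++b)
          div_F_inv[i] -= F_inv[i][a] * H[a][b][j] * F_inv[b][j];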

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/


Sclah

Apr 4, 2025, 12:25:01 PM
to deal.II User Group
Yes! Thank you very much, I was indeed missing the mathematical derivation.
I checked my computations more carefully, and I actually need:
div([I + grad(u)]^{-T}) instead of div([I + grad(u)]^{-1}), but following your steps leads to a similar result:
div([I + grad(u)]^{-T}) = - Finv [grad(grad(u)^T)] Finv
Then I guess that using get_function_hessians() on u is what I need to assemble my weak form correctly.
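
For reference, my quadrature-point computation now reads roughly as follows (still to be double-checked; H and F_inv are the Hessian Tensor<3, dim> and invert(F) from your sketch, with H[i][j][k] = d^2 u_i / (dx_j dx_k); the only change with respect to the F^{-1} case is which indices get contracted):

  // div(F^{-T})[i] = - F_inv[j][a] H[a][b][j] F_inv[b][i]
  Tensor<1, dim> div_F_inv_T;
  for (unsigned int i = 0; i < dim; ++i)
    for (unsigned int j = 0; j < dim; ++j)
      for (unsigned int a = 0; a < dim; ++a)
        for (unsigned int b = 0; b < dim; ++b)
          div_F_inv_T[i] -= F_inv[j][a] * H[a][b][j] * F_inv[b][i];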

Thank you again!

Wolfgang Bangerth

Apr 4, 2025, 12:29:26 PM
to dea...@googlegroups.com
On 4/4/25 10:25, Sclah wrote:
>
> Yes! Thank you very much, I was indeed missing the mathematical derivation.
> I checked my computations more carefully, and I actually need:
> div([I + grad(u)]^{-T}) instead of div([I + grad(u)]^{-1}), but following
> your steps leads to a similar result:
> div([I + grad(u)]^{-T}) = - Finv [grad(grad(u)^T)] Finv
> Then I guess that using get_function_hessians() on u is what I need to
> assemble my weak form correctly.

Yes. I imagine you're aware of this, but if you're using the usual set of
finite elements, the gradient is discontinuous and so the second derivatives
are not defined at cell interfaces; in general, the second derivative of the
finite element solution is a poor approximation of the second derivative of
the exact solution. (In the extreme case, if you were using linear elements on
triangles, the gradient would be cell-wise constant and the second derivative
identically zero.)

Sclah

Apr 4, 2025, 12:51:00 PM
to deal.II User Group
Yes, thank you for the warning; that could indeed be an issue. In my case I am using P2 elements, so I'll look into the best way to approach this. Thank you!