nodal solutions and postprocessors


Jesse Carter

Jul 31, 2015, 5:24:01 PM
to moose-users
Hello all -

A couple questions that came up while looking through the source code:

1. A Nodal Postprocessor operates on the element nodes, obviously, but why in some of the MOOSE Nodal Postprocessors is the value set equal to _u[_qp]? That looks like the value of the variable at a quadrature point, which is inside the element (at least for Gauss quadrature on linear elements), and not the value at a node, which lies on the boundary of the element. Does it have something to do with the way _u is initialized to a nodal solution instead of just a regular solution like in an Element Postprocessor? If so, what is a nodal solution and how is it indexed?

2. Is there a nodal time derivative and if so how do you initialize it? In other words, var.sln() is to var.uDot() as var.nodalSln() is to ???

That's it for now!

Cody Permann

Aug 3, 2015, 10:14:38 AM
to moose-users
MOOSE uses _qp for indexing both "integrated" and "non-integrated" calculations. We do this for consistency and code re-use. For instance, you can code up an Auxiliary Kernel and change it from a nodal to an elemental (integrated) type without changing any code or recompiling, simply by changing the variable type in your input file. So if you are working with a nodal Postprocessor, it is being called on single nodes, so accessing the solution at _u[_qp] always gives the solution at *this* node (i.e. _qp is always zero).
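Cody's point about code re-use can be sketched with a toy (non-MOOSE) snippet: the same computeValue()-style body, indexed by a quadrature-point argument, serves both variable types, because for the nodal path the framework hands it a one-entry array and _qp stays at 0. All names here are mine, for illustration only:

```cpp
#include <cstddef>
#include <vector>

// Some pointwise operation on the variable, indexed by qp -- the same
// body works whether qp ranges over quadrature points or is pinned to 0.
double computeValue(const std::vector<double>& u, std::size_t qp)
{
    return 2.0 * u[qp];
}

// Nodal path: one call per node; the "solution" is a single entry, qp == 0.
double evaluateAtNode(double nodalValue)
{
    std::vector<double> u{nodalValue};
    return computeValue(u, 0);
}

// Elemental (integrated) path: one call per quadrature point.
std::vector<double> evaluateAtQps(const std::vector<double>& uAtQps)
{
    std::vector<double> out;
    for (std::size_t qp = 0; qp < uAtQps.size(); ++qp)
        out.push_back(computeValue(uAtQps, qp));
    return out;
}
```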

There is no nodal time derivative and hopefully you don't have to make calls to any of the solution types in MooseVariable yourself. Normally we give you the right values through the coupling interface.

Cody

--
You received this message because you are subscribed to the Google Groups "moose-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to moose-users...@googlegroups.com.
Visit this group at http://groups.google.com/group/moose-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/62bf0b2d-f67a-4584-975e-8b5d2b6fc3c4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Jesse Carter

Aug 10, 2015, 3:58:34 PM
to moose-users
That's convenient!

The reason I ask is that I was looking into using the NodalL2Norm value of the "save_in" variable from the TimeDerivative kernel to inform my TimeStepper, or possibly implementing this without the "save_in" function as a standalone Postprocessor (to [maybe?] avoid a slow-down, as Derek suggested here). Looking into the code a bit to see what "save_in" does exactly, I see that in framework/src/kernels/Kernel.C the code is simply saving the vector of element-integrated values to the auxiliary variable while also adding its contribution to the overall residual vector. Going off what Derek suggested here, I had begun to wonder whether NodalL2Norm was the right Postprocessor for this: since it loops over all quadrature points in the test function, it would be double-counting (or more, depending on the order of the shape function) that element-integrated value, because it is called at each quadrature point. Based on what you're saying, though, this is not the case and MOOSE correctly calls the variable value only once at the node.

What is being done then in the case where you have an element-averaged quantity like this and you save it to a nodal auxiliary variable? Is it just averaging the value of adjacent elements and saving that to a node?

Derek Gaston

Aug 10, 2015, 4:07:08 PM
to moose...@googlegroups.com
On Mon, Aug 10, 2015 at 3:58 PM Jesse Carter <jesse....@gmail.com> wrote:
What is being done then in the case where you have an element-averaged quantity like this and you save it to a nodal auxiliary variable? Is it just averaging the value of adjacent elements and saving that to a node?

What "element-averaged quantity" are you referring to here? The residual values are not "averaged"... they're summed together for each degree of freedom. To get technical, what we're doing here is taking advantage of the fact that Lagrange shape functions have one degree of freedom for each node in the domain... so the "residual" naturally lines up with the nodes and is easy to plot. It also makes it possible to use a "nodal" Postprocessor to do things like compute an L2-norm of the save_in residual (again, because there is a single value at each node that corresponds to an entry in the residual vector).
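The Lagrange property Derek is leaning on — exactly one shape function takes the value 1.0 at each node and all others vanish there — can be seen in a two-line sketch for the 1D linear case on the reference element [-1, 1] (function names are mine, not MOOSE's):

```cpp
// 1D Linear Lagrange shape functions on the reference element [-1, 1].
// phi_i is 1.0 at node i and 0.0 at the other node, which is why each
// shape function (and thus each residual entry) pairs off with one node.
double phi0(double xi) { return 0.5 * (1.0 - xi); }
double phi1(double xi) { return 0.5 * (1.0 + xi); }
```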

This wouldn't work with any other shape functions. 

Derek

Jesse Carter

Aug 10, 2015, 5:08:14 PM
to moose-users
Ah, yes, I meant to say "element-integrated" quantity, which, implemented numerically, means doing quadrature.

So when doing a "save_in" operation, (which is just this, right? https://github.com/idaholab/moose/blob/master/framework/src/kernels/Kernel.C#L64), is it more appropriate to use an "element" (constant monomial) auxiliary variable since the residual contribution is essentially a quantity that is valid for the whole element, i.e. the element-integrated (by quadrature) value?

And let me see if I understand what you're saying, Derek. If I were to use a "nodal" (linear lagrange) auxiliary variable, is the "save_in" value for element i getting saved to node i, element i+1 to node i+1, and so on (at least for internal nodes/elements)?

Derek Gaston

Aug 10, 2015, 6:58:15 PM
to moose-users
save_in can only happen to an Auxiliary variable of the exact same type.

Let me see if I can explain this better:

Let's say you have one element with 4 nodes and you are solving one equation for "u" using Linear Lagrange shape functions (i.e. one "linear Quad").

You will have _4_ entries in your residual vector.  One for each Lagrange shape function you are testing your residual against.  For a refresher on what goes in each entry, check out the last equation in the "Numerical Integration" section (the first section) here: http://mooseframework.org/wiki/MooseTraining/FEM/NumericalImplementation/.  Note that that equation is for R_i where "i" is the test function.

So, for each test function (and there is one associated with each node with Linear Lagrange) you need to do an integration over the element and fill in R_i. So while you are "integrating" something over the element, all of the contributions related to one test function flow into _one_ entry in the residual. It's not spreading it out, or averaging it. It's integrating the current residual against that test function on that element and putting that value into R_i.
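A minimal sketch of that accumulation (my toy code, not MOOSE source): build the two residual entries R_i for one 1D linear element of length h, for the simple integrand c * phi_i (think of c as a constant body-force term), using 2-point Gauss quadrature. Note how everything tested against phi_i flows into the single entry R[i]:

```cpp
#include <array>
#include <cmath>

// Two-entry element residual R_i = integral over the element of c * phi_i,
// computed with 2-point Gauss quadrature on the reference element [-1, 1].
std::array<double, 2> elementResidual(double h, double c)
{
    const double gp[2] = {-1.0 / std::sqrt(3.0), 1.0 / std::sqrt(3.0)};
    const double w[2]  = {1.0, 1.0};   // Gauss weights on [-1, 1]
    const double jac   = h / 2.0;      // dx/dxi for the element mapping
    std::array<double, 2> R = {0.0, 0.0};
    for (int qp = 0; qp < 2; ++qp)
    {
        const double phi[2] = {0.5 * (1.0 - gp[qp]),   // test fn of node 0
                               0.5 * (1.0 + gp[qp])};  // test fn of node 1
        for (int i = 0; i < 2; ++i)
            R[i] += w[qp] * jac * c * phi[i];  // accumulate into R_i only
    }
    return R;
}
```

With c = 1 each entry comes out to h/2, i.e. the integral of that test function over the element; nothing is averaged or spread between entries.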

Because of the way Lagrange shape functions are constructed (such that only one shape function has a value at each node... and that value is 1.0), you can think of each of those test functions as being associated with one node. So, think about the residual as a vector for a moment. It has four entries in it... and we have 4 nodes. So if we want to do a vector L2 norm of the residual we have two choices:

1)  We can loop through the residual directly... squaring each entry and summing up the result and ultimately taking the square root.  This is what MOOSE (and PETSc) do internally during the solve when we're printing out the norm of the residual.

2)  We can save a copy of the residual over in the Auxiliary system in a Linear Lagrange variable (which will give us 4 values just like the residual vector has, and in the same order). Then we can loop through the nodes and look up which entry in the residual vector (really the auxiliary solution vector that is holding a copy of the residual vector) goes with each node... then we can square that value at each node, sum all of those, and take a square root at the end. That's what the NodalL2Norm Postprocessor does.

Both things achieve the same goal.  They are just two different ways of thinking about the residual vector.
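Derek's two choices can be sketched side by side (my illustrative code, not the actual NodalL2Norm source). For Lagrange, where the node-to-dof map is one-to-one, they give the same number:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Choice 1: loop through the residual vector directly.
double directL2Norm(const std::vector<double>& residual)
{
    double sum = 0.0;
    for (double r : residual)
        sum += r * r;
    return std::sqrt(sum);
}

// Choice 2: loop through the nodes and look up the one residual entry per
// node (valid for Lagrange). nodeToDof maps each node to its entry in the
// auxiliary copy of the residual vector.
double nodalL2Norm(const std::vector<double>& residual,
                   const std::vector<std::size_t>& nodeToDof)
{
    double sum = 0.0;
    for (std::size_t dof : nodeToDof)
        sum += residual[dof] * residual[dof];
    return std::sqrt(sum);
}
```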

NOW... like I said, that only works for Lagrange. If you are using more exotic shape functions like 3rd-order Hermites, then they actually have multiple residual entries associated with each node... so looping through the nodes, getting one value per node (like NodalL2Norm does), squaring, summing, and square-rooting will NOT give you the same value as looping through the residual vector directly.
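The pitfall can be made concrete with a toy sketch (mine, not MOOSE's). Suppose each node carries two dofs, stored consecutively in the residual, as 1D 3rd-order Hermites do (value and derivative): a "one value per node" loop sees only half the entries and the two norms disagree:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Norm over every entry of the residual vector.
double fullVectorNorm(const std::vector<double>& residual)
{
    double sum = 0.0;
    for (double r : residual)
        sum += r * r;
    return std::sqrt(sum);
}

// NodalL2Norm-style loop: one value per node, assuming the dofs of each
// node are stored consecutively -- it skips the extra dofs entirely.
double oneValuePerNodeNorm(const std::vector<double>& residual,
                           std::size_t dofsPerNode)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < residual.size(); i += dofsPerNode)
        sum += residual[i] * residual[i];
    return std::sqrt(sum);
}
```

For a 2-node, 2-dofs-per-node residual {3, 4, 3, 4}, the full norm is sqrt(50) while the per-node loop gives sqrt(18): not the expected result.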

I apologize if that last point threw you off before. I just wanted you to be aware that using NodalL2Norm with shape functions other than Lagrange will NOT yield the expected result...

Derek


Jesse Carter

Aug 12, 2015, 9:34:54 AM
to moose-users
Thanks, Derek, for taking the time to explain that. I lose track sometimes of what's going on under the hood.

A follow-up question: testing the residual at every node implies testing the solution at every node, meaning the nodes are where you are solving your system of PDEs. So for a system using Linear Lagrange elements, is there a difference between the solution and the nodal solution, i.e. var.sln() vs. var.nodalSln(), since you are solving at the nodes? Does it have anything to do with quadrature points, since the solution is often referenced (in Kernels or Postprocessors, for example) at the quadrature points (e.g., _u[_qp]), or are those just interpolated on the fly using the shape functions? Now that I think of it, how does MOOSE know which element is being evaluated when the solution is only indexed by the local quadrature point number (0 or 1 in this case)? There must be some fancy coding at work.

     - Jesse

Derek Gaston

Aug 12, 2015, 10:49:31 AM
to moose-users
The residuals are not "nodal". The residual vector entries are integrals over each element testing against one test function. Those integrals are performed at quadrature points (which are typically not located at the nodes). However, in the special case of Linear Lagrange there happens to be exactly one test function associated with each node... so that naturally leads to a kind of parity between nodes and the entries in the residual vector.

The solution is a set of coefficients for basis functions.  Those coefficients, together with their basis functions, describe functions over the whole domain.  You can evaluate a finite-element solution at any point within the domain by evaluating the basis functions at that point, multiplying by their respective coefficients and summing.
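In the 1D Linear Lagrange case this "coefficients times basis functions" evaluation is a one-liner. A minimal sketch for a single element [xLeft, xRight] with nodal coefficients cLeft and cRight (names are mine, for illustration):

```cpp
// Evaluate a 1D Linear Lagrange solution at any point x inside the
// element [xLeft, xRight]: map x to the reference coordinate, evaluate
// each basis function there, multiply by its coefficient, and sum.
double evaluateSolution(double x, double xLeft, double xRight,
                        double cLeft, double cRight)
{
    const double xi = (x - xLeft) / (xRight - xLeft);  // map to [0, 1]
    return cLeft * (1.0 - xi) + cRight * xi;           // sum c_i * phi_i(x)
}
```

At a node this returns the coefficient itself (the Lagrange property), and at a quadrature point in the interior it returns the interpolated value: the kind of number you see as _u[_qp].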


Derek

Jesse Carter

Aug 13, 2015, 11:02:08 AM
to moose-users
I'm with you, Derek. I have been dealing solely with Linear Lagrange elements, so the natural parity between the nodes and the residual vector that you speak of is the only way I've really looked at things.

Now I've been playing around with this concept and I've run across an issue. I thought I'd test Cody's statement above about code re-use for both nodal and elemental (integrated) variables (just for fun; I do trust you guys). I made an AuxKernel that basically copies the solution to an auxiliary variable using a coupling operation: _v(coupledValue("coupled_variable")) in the constructor. Indeed, using some cout statements right before the return statement in computeValue(), when I pass it a Linear Lagrange (nodal) aux variable, it gives me a vector of size N+1, where N is the number of elements (one per node), and when I pass it a constant monomial (elemental) variable, it gives me a vector of size 2N (one per quadrature point), all with one piece of code. Oh, and I'm just doing 1D for now.

Then I thought I'd instead copy over the time derivative with coupledDot() for a transient simulation. Even though Cody said above that there isn't a nodal time derivative, I thought it would be fun to see what happens (error message? crash?). Plus, the coupledDot function makes reference to nodalSlnDot (here), and I wanted to see what it was.

So here's the thing: I was getting all zeros for my time-derivative variables in the output mesh, for both the elemental and nodal variables, at all timesteps. To investigate, I had the cout statement also write out the time derivative (_v_dot[_qp]). Of course the time derivative is zero when stepping to timestep 1. At the beginning of subsequent steps, the cout statement shows non-zero, reasonable-looking values of _v_dot for both the elemental and nodal variables. But then the cout statements are written again immediately before the first (actually 0th) nonlinear residual is displayed, and at that point the _v_dot values are all zero while the _v values are unchanged. The _v_dot values continue to be zero after further nonlinear iterations, so it is not surprising that all zeros are written to the output mesh.

What is happening? I'm probably missing something.