On 1/20/20 7:59 AM, 'Maxi Miller' via deal.II User Group wrote:
>
> I wrote a short test program which should solve the diffusion equation
> using the time-stepping class, and implemented both methods. When
> calculating the matrix and applying it to my solution vector, I get a
> different result compared to reading the gradients from the solution
> with get_function_gradients() and multiplying it with the gradients
> returned by shape_grad(). The results obtained from the matrix
> multiplication are correct compared to the expected solution, the
> results obtained from the direct approach are not. Why is that?
Maxi -- if I understand you correctly, you're asking what the difference
is between computing
F_i = \int \nabla\varphi_i \cdot \nabla u_h
and
F_i = (AU)_i
where A is the Laplace matrix and U the coefficient vector corresponding to u_h.
There shouldn't be a difference in principle, but you have to pay
attention to what hanging nodes and Dirichlet boundary conditions do. In
particular, you might have to call F.condense() in the first case.
You only say that the results are different, but not *how* they are
different. Have you looked at that? Are these two vectors different only
in hanging nodes? Only for shape functions at the boundary?
Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth          email: bang...@colostate.edu
                           www:   http://www.math.colostate.edu/~bangerth/