Tianju,
> I am solving a Laplace equation with a mixed finite element method,
> using the Taylor-Hood element pair (Q2, Q1) over a unit square. The
> governing equations are simply:
>
> u + grad p = 0
> div u = 0
>
> As for boundary conditions, I specify no-flow boundary conditions where
> boundary_id()==1 and pressure boundary conditions where
> boundary_id()==0:
>
>             1
>      ----------------
>      |              |
>      |              |
>   0  |              |  0
>      |              |
>      |              |
>      ----------------
>             1
>
> |
> As you can tell, I am following what step-20 does, so my code can be
> viewed as a simplified version of step-20, except that it runs in
> parallel and uses a direct solver.
>
> However, the output solution does not meet my expectations. For the
> (u, p) pair, I found that p is right, but there are problems with the
> solution for u. Here is a picture of the magnitude of u:
>
> [Attachment: result.png, showing the magnitude of u]
>
> Ideally, it should be uniformly of value 1. However, there are two
> "bad spots". Can anyone help me figure out why? I appreciate it!
Have you figured the problem out in the meantime? If not, a good approach to
debugging these sorts of problems is to try and run the program with just one
processor. In your case, you say that you're running with 4. If the problem
goes away when you run with 1 processor, you know that the issue has something
to do with the parallel mesh handling -- for example, with the way you compute
constraints. If the problem persists with 1 processor, then you know that the
parallel handling is not at fault and that the issue must be somewhere else.
Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth          email: bang...@colostate.edu
                           www:   http://www.math.colostate.edu/~bangerth/