Thanks to everybody for the comments.
I know that a correction term is needed to account for the tangential flux. At the same time, when the mesh distortions are small we can live with a first-order approximation. I know you are working on implementing the mimetic finite difference (MFD) method for the MPHASE module; this should take care not only of unstructured meshes but also of distorted structured meshes. Is it still a priority?
We would like to test the MFD already implemented for the RICHARDS mode.
This is implemented only for structured meshes, but it should handle distortions correctly, right?
Are the quadrilateral faces divided into two triangular ones in the local MFD discretisation (i.e. is the hex treated as a general polyhedron and split into tets)?
We came across several models where distorted structured meshes would be very handy (i.e. representing the real geology while keeping the simplicity and efficiency of structured meshes). We are considering implementing a mesh loader for structured meshes that allows a different vertical node distribution for each vertex column. From the code we have analysed so far, it seems we need to introduce/change several functions called during initialisation to accomplish two main tasks: (a) create a new data structure to read in the additional dz,
(b) use some of the UMesh tools to compute cell centres, volume and areas.
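For what it's worth, the two tasks can be sketched in a few lines of Python. This is a hypothetical illustration only, not PFLOTRAN code: the function names, the per-column data layout, and the downward z convention are all assumptions.

```python
# Hypothetical sketch of tasks (a) and (b): read one dz column per vertex
# column, then derive cell-centre geometry from the resulting vertex
# coordinates. Not PFLOTRAN code; names and layout are assumptions.

def build_vertex_z(dz_columns, nz, z_top=0.0):
    """dz_columns: dict mapping vertex column (i, j) -> list of nz dz values.
    Returns dict mapping (i, j) -> list of nz+1 z coordinates, top down."""
    z = {}
    for (i, j), dzs in dz_columns.items():
        assert len(dzs) == nz
        col = [z_top]
        for dz in dzs:
            col.append(col[-1] - dz)  # grid grows downwards
        z[(i, j)] = col
    return z

def cell_centre_z(z, i, j, k):
    """Vertical centre of hex cell (i, j, k): average of its 8 vertex z's."""
    corners = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
    return sum(z[c][k] + z[c][k + 1] for c in corners) / 8.0

# Example: a 1x1x2 grid where one corner column is stretched.
dz_cols = {(0, 0): [1.0, 1.0], (1, 0): [1.0, 1.0],
           (0, 1): [1.0, 1.0], (1, 1): [2.0, 2.0]}
z = build_vertex_z(dz_cols, nz=2)
print(cell_centre_z(z, 0, 0, 0))  # -> -0.625
```

Cell volumes and face areas would then come from the UMesh-style tools mentioned in (b) rather than from anything this simple.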
However, once the grid is formed there should not be any other changes, right?
Do you see any major issues in doing this?
Would you be interested in checking this extra feature and adding it to the standard release once we have done it? A more sophisticated way to handle fluxes can be added later.
Paolo
On Sunday, September 8, 2013 8:47:45 PM UTC+2, Gautam Bisht wrote:

Hi Paolo,

On Sun, Sep 8, 2013 at 11:06 AM, Satish Karra <satk...@gmail.com> wrote:

Can you generate a Voronoi mesh with this domain? If yes, you can use the explicit unstructured grid method of reading that Voronoi mesh. The problem with orthogonality would be solved.
--
Satish
Sent from my iPhone

I think you would need to add some correction terms to get sufficiently accurate results.
Peter

On Sep 8, 2013, at 1:44 AM, Paolo Orsini - BLOSint <paolo....@gmail.com> wrote:

Hi,

Is it possible to define a structured grid with the dx and dy spacings given in the usual way, but with a dz spacing that varies for each column of nodes?

Example: say we have a structured grid with NX x NY x NZ cells and (NX+1) x (NY+1) x (NZ+1) points. Is it possible to input NX dx values in the x direction, NY dy values in the y direction, and (NX+1) x (NY+1) x NZ dz values for the vertical direction Z, i.e. one set per vertical node column?

I believe for a structured grid, PFLOTRAN only allows specification of NZ 'dz' values [not (NX+1)*(NY+1)*NZ dz values]. The unstructured grid format in PFLOTRAN will be able to accommodate your grid. You can use:
- the implicit unstructured grid format, in which you should split hexahedrons into prismatic control volumes, or
- the explicit unstructured grid format, in which you can use hexahedron control volumes.
-Gautam

I know that in this case the horizontal faces will not respect the orthogonality condition with respect to the line joining the cell centres; however, in many reservoir problems the variations in Z are very small compared to the dx and dy spacing, and the error should be contained. If not implemented, it should not be a huge amount of work to adapt the code to do this, or am I missing something? What do you think?

Paolo
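As a side note, the non-orthogonality error discussed here is easy to quantify: it is the angle between the face normal and the line joining the two cell centres (zero when the two-point flux orthogonality condition holds). A small standalone sketch, with hypothetical names and made-up numbers for a typical reservoir aspect ratio:

```python
import math

def nonorthogonality_deg(c1, c2, n):
    """Angle in degrees between face normal n and the line joining cell
    centres c1 -> c2; 0 means the TPFA orthogonality condition holds."""
    d = [b - a for a, b in zip(c1, c2)]
    dot = sum(x * y for x, y in zip(d, n))
    nd = math.sqrt(sum(x * x for x in d))
    nn = math.sqrt(sum(x * x for x in n))
    return math.degrees(math.acos(abs(dot) / (nd * nn)))

# Two stacked cells, dx = dy = 100 m, with the shared horizontal face
# tilted by a dz variation of 1 m across 100 m in x.
centre_lower = (50.0, 50.0, -7.5)
centre_upper = (50.0, 50.0, -2.5)
face_normal = (0.01, 0.0, 1.0)  # tilt of 1 m per 100 m in x
print(round(nonorthogonality_deg(centre_lower, centre_upper, face_normal), 3))  # -> 0.573
```

For tilts this small the angle is a fraction of a degree, which is the quantitative version of the "error should be contained" argument above.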
Hi Gautam,
Thanks for your answers/comments.

Our idea is to implement the "standard FV with distorted cells without tangential flux correction" [not implemented for structured grids].
Note that you will need to read (nx+1)*(ny+1) dz’s. This is all based on the number of connections in the horizontal. A similar capability existed in the past with general_grid.F90, but that capability is out of date. It lists me as the author, but I simply converted routines that Lichtner or Lu wrote years earlier.
Glenn
--
You received this message because you are subscribed to the Google Groups "pflotran-dev" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/pflotran-dev/bc382b75-f412-4ff8-bd24-bf0954a0dac0%40googlegroups.com.
As you said this can already be handled by PFLOTRAN using the unstructured grids, but we see the following problems/limitations:
- Using parallel unstructured grids requires a more complex and less efficient domain partitioning. Load balancing will always be easier to control and more efficient with structured grids.
- Parallel unstructured grids require ParMetis, which is not free and does not follow the open-source model. One solution could be to replace ParMetis with the Scotch library, which, from what I gathered, has already been linked to PETSc. However, I don't know how much work it would be, how hard it would be, or whether there is any interest in doing this. Any comments?
It basically seems a shame not to use structured meshes when it is possible, and we have realised that structured meshes are still heavily used by reservoir engineers.
A tangential flux correction for this distorted-structured grid could be added later; this requires more work and more careful thought.
We want to test the MFD (as you recommended) and look into more classical approaches (least-squares methods, cell-gradient reconstruction [e.g. Turner proposes several methods]). Again, the structured nature of the grid should give more options for implementing a correction.
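To illustrate the least-squares option (a generic textbook sketch, not tied to any PFLOTRAN implementation), a cell gradient can be reconstructed from neighbouring cell values by solving the 2x2 normal equations; for a linear field the reconstruction is exact:

```python
def ls_gradient_2d(centre, value, neighbours):
    """Least-squares cell-gradient reconstruction in 2D.
    neighbours: list of ((x, y), value) pairs for neighbouring cells.
    Solves the 2x2 normal equations for grad = (gx, gy) minimising
    sum_i (value_i - value - grad . d_i)^2, with d_i the centre offsets."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), v in neighbours:
        dx, dy, dv = x - centre[0], y - centre[1], v - value
        a11 += dx * dx; a12 += dx * dy; a22 += dy * dy
        b1 += dx * dv; b2 += dy * dv
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# For a linear field p = 2x + 3y the reconstruction is exact.
grad = ls_gradient_2d((0.0, 0.0), 0.0,
                      [((1.0, 0.1), 2.3), ((-0.2, 1.0), 2.6), ((0.5, -1.0), -2.0)])
print(grad)  # -> approximately (2.0, 3.0)
```

The reconstructed gradient can then supply the tangential part of the flux that the plain two-point approximation drops on a distorted face.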
Having said that, I recognise the power of unstructured grids, and having a proper unstructured solver in PFLOTRAN for every mode would be great (e.g. MFD for unstructured meshes developed in every mode). At the same time, a structured grid capability that can handle real geometries at this stage could attract many more users.
What do you think about this distorted-structured alternative?
I would argue that you can still use two-point flux on unstructured/distorted grids; just make sure that you generate a Voronoi mesh for your domain. PFLOTRAN can read face areas and volumes via the explicit unstructured capability. This can handle real geometries and domains with discrete fracture networks (you can find a two-intersecting-fracture example in pflotran-dev/regression_tests/default/discretization/dfn_explicit.in) and gives more accurate results (than a hex mesh, in your case).
Thanks,
Satish
Hi Jed,

I found out what the problem is.

First of all, I printed all the matrices out to be sure that MatConvert was working fine. That was OK: AIJ_mat had the same connectivity table as Adj_mat.

Then I compared the dual graph (stored in DualMat) computed by the ParMetis routine (MatMeshToCellGraph) with the one from MatMatTransposeMult, and they were quite different. Computing the dual graph of the mesh is a bit more complicated than multiplying the adjacency matrix by its transpose, but not far off. With that operation, cells that share only one node are also connected in the dual graph, whereas the minimum number of common nodes should be >1 (2 in 2D problems, 3 in 3D problems). In fact, this is an input of MatMeshToCellGraph; I should have understood this before. It can be computed by forming the transpose of the adjacency matrix (Adj_T), multiplying Adj by Adj_T row by row, and discarding the non-zero entries coming from elements that share fewer nodes than the imposed minimum. I have not implemented this yet; any suggestion is welcome.

I also found out that Scotch has a facility to compute a dual graph from a mesh, but PT-Scotch does not. Once the graph is computed, PT-Scotch can load the central dual graph and distribute it over several processors during loading. Am I right to say that PETSc is interfaced only with PT-Scotch and not with Scotch?

To check whether the PT-Scotch partitioning works (within PFLOTRAN), I am computing a DualMat with ParMetis and saving it to a file. Then I recompile the code (using a PETSc compiled with PT-Scotch) and load the DualMat from the file rather than forming a new one. I did a successful test running on one processor, but I am having trouble when trying on more.

I thought the dual graph was computed only once, even during an MPI run, but instead it seems to be recomputed more than once. Not sure why... surely I am missing something?
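The pairing rule described above (connect two elements in the dual graph only if they share at least a minimum number of nodes, which is the extra input MatMeshToCellGraph takes) can be illustrated with a small standalone sketch. This is pure Python for illustration, not the ParMETIS implementation, and the brute-force pairwise loop is nothing like the sparse matrix-product approach discussed in the email:

```python
from itertools import combinations

def dual_graph(cells, ncommon):
    """cells: list of node-id tuples, one per element.
    Returns the set of dual-graph edges (i, j) connecting elements that
    share at least ncommon nodes (ncommon=2 for 2D meshes, 3 for 3D),
    mirroring the 'common nodes' input of ParMETIS_V3_Mesh2Dual."""
    edges = set()
    for i, j in combinations(range(len(cells)), 2):
        if len(set(cells[i]) & set(cells[j])) >= ncommon:
            edges.add((i, j))
    return edges

# Three 2D quads: 0 and 1 share an edge (2 nodes); 1 and 2 share one corner.
quads = [(0, 1, 4, 3), (1, 2, 5, 4), (5, 6, 8, 7)]
print(sorted(dual_graph(quads, 2)))  # -> [(0, 1)]
print(sorted(dual_graph(quads, 1)))  # -> [(0, 1), (1, 2)]
```

With ncommon=1 the corner-only neighbour appears in the dual graph, which is exactly the spurious connectivity a plain Adj * Adj_T product produces.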
Hi Paolo,

On Sun, Nov 3, 2013 at 3:53 AM, Paolo Orsini <paolo....@gmail.com> wrote:
I thought the dual graph was computed only once, even during an MPI run, but it seems to be recomputed more than once. Am I missing something?

In PFLOTRAN, MatCreateMPIAdj() is called:
- Once, if the unstructured grid is specified in IMPLICIT format [in unstructured_grid.F90];
- Twice, if the unstructured grid is specified in EXPLICIT format [in unstructured_grid.F90].
Hi Jed,

I have tried to compile PFLOTRAN after compiling PETSc with PT-Scotch. The problem is that PFLOTRAN calls MatMeshToCellGraph to create the dual graph, an operation done using the ParMetis function ParMETIS_V3_Mesh2Dual. Is there an equivalent of MatMeshToCellGraph that uses PT-Scotch?

Paolo
On Mon, Oct 7, 2013 at 9:03 PM, Jed Brown <jedb...@mcs.anl.gov> wrote:
From: pflotr...@googlegroups.com <pflotr...@googlegroups.com>
On Behalf Of Paolo Orsini
Sent: Friday, October 25, 2019 6:12 AM
To: pflotran-dev <pflotr...@googlegroups.com>
Subject: [EXTERNAL] Re: [pflotran-dev: 5805] Re: [pflotran-users: 998] Distorted structured grid
Hi Paolo,
We always build with PT-Scotch for the master branch, as ParMetis is open but not free.
However, there are limitations: mainly, you cannot use it with implicit unstructured grids, as the domain decomposition of those types of meshes requires a ParMetis function to create the dual graph (if I remember correctly).
Unless something has changed recently; I haven't looked at the implicit grids lately.
However, for explicit unstructured meshes, which are the only ones we use, that function is not required, and you can build entirely without ParMetis.
Decomposition of the “implicit” unstructured grids currently requires a call to MatMeshToCellGraph, which is wired to ParMETIS (https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MatMeshToCellGraph.html). We would need to work with PETSc to have PT-SCOTCH replace it, if possible. I note that the PT-SCOTCH CeCILL-C license is compatible with GNU LGPL.
The “explicit” unstructured grid does not require ParMETIS as it calls MatPartitioningApply, which can leverage a number of partitioning types (https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPartitioningType.html#MatPartitioningType).
I would first look into gaining permission to use METIS/ParMETIS before looking into alternate partitioners.
Glenn