Re: [pflotran-users: 998] Distorted structured grid


Gautam Bisht

Oct 6, 2013, 5:45:19 PM
to pflotra...@googlegroups.com, pflotr...@googlegroups.com
Hi Paolo,

On Sun, Oct 6, 2013 at 11:11 AM, Paolo Orsini - BLOSint <paolo....@gmail.com> wrote:
Thanks to everybody for the comments.

I know that a correction term is needed to account for the tangential flux. At the same time, when the mesh distortions are small we can live with a first-order approximation. I know you are working to implement the mimetic finite difference (MFD) method for the MPHASE module; this should take care not only of unstructured meshes but also of structured distorted meshes. Is it still a priority?

To my knowledge, there is no ongoing MFD development for any mode.
 

We would like to test the MFD already implemented for the RICHARDS mode.

I would highly recommend the above step, since we have only done limited testing with MFD.

 
This is implemented only for structured meshes, but it should handle distortions correctly, right?

MFD has been extended to support unstructured grids (but very limited testing has been done). Have a look at regression_tests_legacy/mfd/ugrid_45x1x50, for which you will have to compile the code with the following option:

make -f makefile_legacy mfd=1 mfd_ugrid=1 pflotran

 
Are the quadrilateral faces divided into two triangular ones in the local MFD discretisation (i.e. is the hex considered as a general polyhedron and split into tets)?

At present, [FV/MFD]-structured grid does not allow for distortion and control volumes are treated as hexahedrons.
 

We came across several models where the use of distorted structured meshes can be very handy (i.e. representing the real geology while keeping the simplicity and efficiency of structured meshes). We are considering implementing a mesh loader for structured meshes to allow a different vertical node distribution for each vertex column. From the code we have analysed so far, it seems that we need to introduce/change several functions called in the initialisation to accomplish two main tasks: (a) create a new data structure to read in the additional dz,

I would recommend using unstructured grid for distorted control volumes instead of creating a new "distorted structured grid type" because:
- structured grid in PFLOTRAN implies control volumes are orthogonal, and
- you will not have to implement "(a) create new data structure to read in the additional dz,"
 
(b) use some of the UMesh tools to compute cell centres, volumes and areas.

For distorted control volumes, new subroutines may be needed to compute geometric properties while accounting for cell distortion.
 
However, once the grid is formed there should not be any other changes, right?

Apart from grid setup, I too don't see any additional changes.
 
Do you see any major issues in doing this?  
 Will you be interested in checking and adding this extra feature to the standard release once we have done it? ... a more sophisticated way to handle fluxes can be added later 

Now, I'm lost as to what feature you plan to implement in the model: 
- MFD with distorted cells, or
- Standard FV with distorted cells without tangential flux correction [already implemented as unstructured grid], or
- Standard FV with distorted cells with tangential flux correction [not implemented]?
 
-Gautam.


Paolo
   


On Sunday, September 8, 2013 8:47:45 PM UTC+2, Gautam Bisht wrote:
Hi Paolo,


On Sun, Sep 8, 2013 at 11:06 AM, Satish Karra <satk...@gmail.com> wrote:
Can you generate a voronoi mesh with this domain? If yes, you can use the explicit unstructured grid method of reading that voronoi mesh. The problem with orthogonality would be solved. 

--Satish

Sent from my iPhone

On Sep 8, 2013, at 9:07 AM, Peter Lichtner <peter.l...@gmail.com> wrote:

I think you would need to add some correction terms to get sufficiently accurate results.
...Peter

On Sep 8, 2013, at 1:44 AM, Paolo Orsini - BLOSint <paolo....@gmail.com> wrote:

Hi,

Is it possible to define a structured grid defining dx and dy spacing in the usual way, but having a dz spacing that varies for each column of nodes?
Example
Say we have a structured grid with NX x NY x NZ cells, (NX+1)x(NY+1)x(NZ+1) points.
Is it possible to input NX dx values in the x direction, NY dy values in the y direction, and (NX+1)x(NY+1)xNZ dz values for the vertical direction Z? i.e. one column of dz values for each vertical node column
I believe for a structured grid, PFLOTRAN only allows specification of NZ 'dz' values [ not (NX+1)*(NY+1)*NZ dz values].

Unstructured grid format in PFLOTRAN will be able to accommodate your grid. You can use:
- Implicit unstructured grid format: in which you should split hexahedrons into prismatic control volumes, or
- Explicit unstructured grid format: in which you can use hexahedron control volumes.

-Gautam

 

I know that in this case the horizontal faces will not respect the orthogonality condition with respect to the line joining the cell centres; however, in many reservoir problems the variation in Z is very small compared to the dx and dy spacing, and the error should be contained.

If this is not implemented, it should not be a huge amount of work to adapt the code to do it, or am I missing something?

what do you think?

Paolo
     

________________
Peter Lichtner
Santa Fe, NM 87507
OFM Research/LANL Guest Scientist



Paolo Orsini - BLOSint

Oct 7, 2013, 9:44:46 AM
to pflotra...@googlegroups.com, pflotr...@googlegroups.com
Hi Gautam,

Thanks for your answers/comments.
Our idea is to implement the "Standard FV with distorted cell without tangential flux correction" [not implemented for structured grid] 

As you said this can already be handled by PFLOTRAN using the unstructured grids, but we see the following problems/limitations:

- Using parallel unstructured grids requires a more complex and less efficient domain partition. The load balancing will always be easier to control and more efficient when using structured grids.

- When using parallel unstructured grids, ParMetis is required, which is not free and does not follow the open source model. One solution could be to replace ParMetis with the Scotch library, which from what I gathered has already been linked to PETSc. However, I don't know how much work there is to do, how hard it would be, or whether there is any interest in doing this. Any comments on this?

It basically seems a shame not to use structured meshes when it is possible, and we realised that structured meshes are still heavily used by reservoir engineers.

A tangential flux correction for this distorted-structured grid could be added later; it requires more work and more careful thought.
We want to test the MFD (as you recommended), and look into more classical approaches (least-squares methods, cell gradient reconstruction [e.g. Turner proposes several methods]). Again, the structured nature of the grid should give more options for the implementation of a correction.

Having said that, I recognise the power of unstructured grids, and having a proper unstructured solver in PFLOTRAN for every mode would be great (e.g. MFD developed for unstructured meshes in every mode). At the same time, also having a structured grid capability that can handle real geometries at this stage could attract many more users.

What do you think about this distorted-structured alternative ? 

Paolo

Jed Brown

Oct 7, 2013, 10:46:42 AM
to Paolo Orsini - BLOSint, pflotra...@googlegroups.com, pflotr...@googlegroups.com
Paolo Orsini - BLOSint <paolo....@gmail.com> writes:
> - Using parallel unstructured requires a more complex and less efficient
> domain partition. The load balancing will be always easier to control and
> more efficient when using structured grids

I'm not sure I agree with this. For example, chemistry may be much more
expensive in some parts of the domain and load balancing with
non-uniform weights is trivial with unstructured grids, but tends to
break structured partitions.

> - When using the parallel unstructured grids, ParMetis is required, which
> is not free, and does not follow the open source model. One solution could
> be to replace Parmetis with the Scotch library, which from what I gathered
> has been already linked to PETSC. However, I don't know how much work there
> is to do and how hard it would be. And if there is any interest in doing
> this. Any comments on this?

Apart from a couple needless PETSC_HAVE_PARMETIS guards, it's a run-time
option -mat_partitioning_type ptscotch.

Note that in our experience, PTScotch is often a bit higher quality than
ParMETIS, but tends to be substantially more expensive.

Gautam Bisht

Oct 7, 2013, 11:11:06 AM
to pflotra...@googlegroups.com, pflotr...@googlegroups.com
Hi Paolo,

On Mon, Oct 7, 2013 at 6:44 AM, Paolo Orsini - BLOSint <paolo....@gmail.com> wrote:
Hi Gautam,

Thanks for your answers/comments.
Our idea is to implement the "Standard FV with distorted cell without tangential flux correction" [not implemented for structured grid] 

The low level subroutines in PFLOTRAN that compute residual/jacobian terms loop over a list of cell connections (not a list of cells). Thus, your above mentioned approach is already supported as "FV unstructured mesh using two-point flux" and I don't see an advantage of introducing a new "distorted-structured grid".
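For illustration only, a minimal Fortran sketch of what such a connection-based two-point-flux loop looks like; this is not PFLOTRAN's actual code, and all names (num_connections, conn_id_up, conn_id_dn, area, perm_face, dist, pressure, residual) are assumptions:

! Illustrative sketch: a two-point flux sees each connection only as a pair of
! cell ids plus a face area and a centre-to-centre distance, so cell distortion
! never enters once the connection list has been built.
! The arrays conn_id_up(:), conn_id_dn(:), area(:), perm_face(:), dist(:),
! pressure(:) and residual(:) are assumed to be filled elsewhere.
integer :: iconn, id_up, id_dn
real(8) :: trans, flux

do iconn = 1, num_connections
  id_up = conn_id_up(iconn)                          ! cell on one side of the face
  id_dn = conn_id_dn(iconn)                          ! cell on the other side
  trans = area(iconn)*perm_face(iconn)/dist(iconn)   ! two-point transmissibility
  flux  = trans*(pressure(id_up) - pressure(id_dn))  ! flux across the face
  residual(id_up) = residual(id_up) + flux
  residual(id_dn) = residual(id_dn) - flux
enddo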

-Gautam.

Paolo Orsini - BLOSint

Oct 7, 2013, 12:00:48 PM
to pflotra...@googlegroups.com, pflotr...@googlegroups.com
Hi Gautam

The idea is to write only a different "mesh loader", which allows structured meshes to be defined with a dz spacing that can vary column by column.
This means modifying/adding only a few functions before the computational grid is formed.

The numerical scheme will be unchanged: the two-point flux formula (i.e. the same Jacobian and residual computations already implemented).
The only advantage would be dealing with a structured mesh and not using the unstructured partitioning that requires ParMetis or PTScotch.
Basically a different pre-processor feature, no more than that. 

My comments on the flux corrections were for a separate implementation task, to improve the numerical scheme. But that is a different and much bigger piece of work.

Paolo  

Hammond, Glenn E

Oct 7, 2013, 12:15:43 PM
to pflotr...@googlegroups.com, pflotra...@googlegroups.com

Note that you will need to read (nx+1)*(ny+1) dz’s.  This is all based on the number of connections in the horizontal.  A similar capability existed in the past with general_grid.F90, but that capability is out of date.  It lists me as the author, but I simply converted routines that Lichtner or Lu wrote years earlier.

 

Glenn

 


Gautam Bisht

Oct 7, 2013, 12:21:45 PM
to pflotr...@googlegroups.com, pflotra...@googlegroups.com
Hi Paolo,

For an unstructured mesh comprised of hexahedron cells, below is the pseudo-code you can use to compute dx/dy/dz:

do local_id = 1, unstructured_grid%nlmax

  if (unstructured_grid%cell_vertices(0,local_id) == 8) then

    do ii = 1, 4
      ! This assumes that the unstructured mesh was created such that
      ! vertices 1,2,3,4 make up the bottom face of the hexahedron, and
      ! vertices 5,6,7,8 make up the top face of the hexahedron

      vertex_id_bot = unstructured_grid%cell_vertices(ii,local_id)
      vertex_id_top = unstructured_grid%cell_vertices(ii+4,local_id)

      point_bot = unstructured_grid%vertices(vertex_id_bot)
      point_top = unstructured_grid%vertices(vertex_id_top)

      dx = point_top%x - point_bot%x
      dy = point_top%y - point_bot%y
      dz = point_top%z - point_bot%z
    enddo

  endif

enddo
 
This will save you the trouble of writing a "mesh loader". But if you want to resurrect the general_grid capability (which I didn't realize existed in the code), feel free to do so.

-Gautam.



Karra, Satish

Oct 7, 2013, 12:41:22 PM
to pflotr...@googlegroups.com, pflotra...@googlegroups.com
Hi Paolo,

As you said this can already be handled by PFLOTRAN using the unstructured grids, but we see the following problems/limitations:

 

- Using parallel unstructured requires a more complex and less efficient domain partition. The load balancing will be always easier to control and more efficient when using structured grids

 

- When using the parallel unstructured grids, ParMetis is required, which is not free, and does not follow the open source model. One solution could be to replace Parmetis with the Scotch library, which from what I gathered has been already linked to PETSC. However, I don't know how much work there is to do and how hard it would be. And if there is any interest in doing this. Any comments on this?

 

It basically seems a shame not to use structured meshes when is possible, and we realised that structured meshes are still heavily used by reservoir engineers. 

 

A tangential flux correction to this distorted-structured grid could be added later, this requires more work and a more careful thought. 

We want to test the MFD (as you recommended), and look into more classical approaches ( least square method, cell gradient reconstruction [e.g. Turner proposes several methods] ). Again, the structured nature of the grid should give more options for the implementation of a correction.

 

Having said that, I recognise the power of unstructured grids, and having a proper unstructured solver in PFLOTRAN for every mode will be great (e.g. MFD for unstructured meshes in every mode developed). At the same time, having also a structured grid capability that can handle real geometries at this stage can attract many more users.

What do you think about this distorted-structured alternative ? 

I would argue that you can still use two-point flux on unstructured/distorted grids; just make sure that you generate a Voronoi mesh for your domain. PFLOTRAN can read the areas of faces and the cell volumes via the explicit unstructured capability. This can handle real geometries and domains with discrete fracture networks (you can find an example with two intersecting fractures in pflotran-dev/regression_tests/default/discretization/dfn_explicit.in) and give results with better accuracy (than a hex mesh, in your case).
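Purely as an illustration of the explicit unstructured input mentioned above (the numbers are made up, and the exact keyword/record layout should be checked against the PFLOTRAN user manual), each cell is listed with its centre coordinates and volume, and each connection with a face centre and face area, so an externally generated Voronoi mesh can be supplied directly:

CELLS 2
1  0.5  0.5  0.5  1.0
2  1.5  0.5  0.5  1.0
CONNECTIONS 1
1  2  1.0  0.5  0.5  1.0

(each CELLS line is: id, x, y, z, volume; each CONNECTIONS line is: upwind id, downwind id, face centre x, y, z, face area)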

Thanks,

Satish  




======================================
Dr. Satish Karra
Staff Scientist
EES-16: Computational Earth Science Group
Los Alamos National Laboratory
Los Alamos, NM 87545
======================================


Peter Lichtner

Oct 7, 2013, 1:09:24 PM
to pflotr...@googlegroups.com, pflotra...@googlegroups.com
Here is an example of the kind of trouble you get into with the 2-pt. flux approx. for a structured grid with varying heights. This is the SACROC grid. Pressure is plotted; the contours should be uniform and parallel to the sides. Uniform properties are used for porosity and permeability. MFD gave parallel pressure contours for this grid.
...Peter


Jed Brown

Oct 7, 2013, 3:03:43 PM
to Paolo Orsini, pflotran-dev
Paolo Orsini <paolo....@gmail.com> writes:

> Hi Jed,
>
> Thanks for this.
>
> So to use PTScotch, all we need to do is to (a) recompile PETSC (adding the
> option "--download-ptscotch -mat_partitioning_type ptscotch", and (b)
> change two PFLOTRAN guards from PETSC_HAVE_PARMETIS to PETSC_HAVE_SCOTCH
> (in discretization.F90 and unstructured_grid.F90) ?

Change the guards to accept either (assuming you want compile-time
failure when neither is available).

> That's all?

Should be, yes.

> Paolo
>
>
> On Mon, Oct 7, 2013 at 6:10 PM, Jed Brown <jedb...@mcs.anl.gov> wrote:
>
>> Paolo Orsini <paolo....@gmail.com> writes:
>> > When you say that PTScotch tends to be substantially more expensive, do
>> you
>> > refer to computational cost? Or actual financial cost?
>>
>> Computational cost.
>>
>> PTScotch uses the CeCILL-C V1 license, which is similar to LGPL.
>>

Paolo Orsini

Nov 1, 2013, 3:09:43 PM
to Jed Brown, pflotran-dev
Hi Jed,

I have tried to compile PFLOTRAN, after compiling petsc with ptscotch. 

The problem is that PFLOTRAN calls MatMeshToCellGraph to create the Dual graph, an operation which is done using the ParMETIS function ParMETIS_V3_Mesh2Dual.

Is there an equivalent function of MatMeshToCellGraph that uses PTScotch?

Paolo

Jed Brown

Nov 1, 2013, 3:25:35 PM
to Paolo Orsini, pflotran-dev
Paolo Orsini <paolo....@gmail.com> writes:

> Hi Jed,
>
> I have tried to compile PFLOTRAN, after compiling petsc with ptscotch.
>
> The problem is that PFLOTRAN calls MatMeshToCellGraph to create the Dual
> graph, operation which is done by using the the parmetis function
> ParMETIS_V3_Mesh2Dual.
>
> Is there an equivalent function of MatMeshToCellGraph that uses PTScotch?

PTScotch does not provide this function. I don't know if they have
something similar that could be used. If they do, it would be worth
adding, but it's not currently supported.

Jed Brown

Nov 1, 2013, 5:18:56 PM
to Paolo Orsini, pflotran-dev
Please keep the discussion on at least one list so it can be archived
and so others can comment.

Paolo Orsini <paolo....@gmail.com> writes:

> I really don't know how the DualGraph is going to affect the the parallel
> computation efficiency...
> I am trying to use this partitioning as a black box, not sure what's the
> right thing to do..
>
> In the code calling the partitioning (PFLOTRAN), i can see that MPIAdj is
> built, then the Dual graph is computed, and finally the partitioning takes
> place, see below:
>
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
> unstructured_grid%num_vertices_global, &
> local_vertex_offset, &
> local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)
>
> call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)

You can MatConvert Adj_mat to AIJ or assemble as AIJ to begin with, then
use MatMatTransposeMult to create Dual_mat (which will be in AIJ
format).

> .........
>
> call MatPartitioningCreate(option%mycomm,Part,ierr)
> call MatPartitioningSetAdjacency(Part,Dual_mat,ierr)
> call MatPartitioningSetFromOptions(Part,ierr)
> call MatPartitioningApply(Part,is_new,ierr)
> call MatPartitioningDestroy(Part,ierr)
>
> allocate(cell_counts(option%mycommsize))
> call ISPartitioningCount(is_new,option%mycommsize,cell_counts,ierr)
> num_cells_local_new = cell_counts(option%myrank+1)
> call
> MPI_Allreduce(num_cells_local_new,iflag,ONE_INTEGER_MPI,MPIU_INTEGER, &
> MPI_MIN,option%mycomm,ierr)
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>
> You are telling me we can build a dual graph simply multiplying Adj_mat by
> its transpose?

Yeah, doesn't it connect all cells that share at least one vertex? That
is exactly the nonzero pattern of the matrix product.

> Why using a simple Aij when we are pre-processing a parallel process?

MPIAIJ implements MatMatTransposeMult, but we don't have an
implementation for MPIAdj (without calling ParMETIS).

> Sorry, may be what I want to do is less black box than thought...
>
> And thanks again for your patience and help
>
> Paolo
>
>
>
>
> On Fri, Nov 1, 2013 at 9:17 PM, Jed Brown <jedb...@mcs.anl.gov> wrote:
>
>> Paolo Orsini <paolo....@gmail.com> writes:
>>
>> > Hi Jed,
>> >
>> > Thanks for your prompt reply.
>> >
>> > Is it possible to do the partitioning with PTSchotch, without
>> > using MatMeshToCellGraph?
>> >
>> > Is there any other function (not using ParMetis) to compute the adjacency
>> > graph needed by MatPartitioningSetAdjacency, for parallel unstructured
>> > grids?
>>
>> You can either compute the dual graph yourself or we can implement that
>> function without calling ParMETIS. Actually, I believe the operation is
>> just symbolic
>>
>> DualGraph = E * E^T
>>
>> where E is the element connectivity matrix. So you could build a normal
>> (AIJ instead of MPIAdj) matrix with the connectivity, then call
>> MatMatTransposeMult().
>>
>> Would that work for you?
>>

Paolo Orsini

Nov 1, 2013, 7:44:22 PM
to Jed Brown, pflotran-dev
Hi Jed,

I believe you are right. 
Multiplying the connectivity matrix by its transpose returns an ncell x ncell matrix with non-zero entries only for those cells sharing at least one node. Hence the Dual Graph we need....

Following your suggestions I changed the code as follows:

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
                       unstructured_grid%num_vertices_global, &
                       local_vertex_offset, &
                       local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)

call MatConvert(Adj_mat, MATAIJ,MAT_INITIAL_MATRIX,AIJ_mat,ierr)

call MatMatTransposeMult(AIJ_mat,AIJ_mat,MAT_INITIAL_MATRIX,PETSC_DEFAULT_DOUBLE_PRECISION,Dual_mat,ierr)
!!  call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)

  call MatDestroy(Adj_mat,ierr)
  call MatDestroy(AIJ_mat,ierr)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

When I try to run, the MatConvert takes ages (I suppose I can form the AIJ_mat directly), the PTScotch partitioning seems quick, but then I get an error later on in the code, when attempting to compute the internal connections.... Something is wrong...

I need to double check... do you see anything wrong so far?

P



  

Jed Brown

Nov 1, 2013, 10:57:03 PM
to Paolo Orsini, pflotran-dev, pets...@mcs.anl.gov
Paolo Orsini <paolo....@gmail.com> writes:

> Hi Jed,
>
> I believe you are right.
> Multiplying the connectivity matrix by its tranpose returns an ncell x
> ncell matrix with non-zero entries only for those cell sharing at least one
> node. Hence the Dual Graph we need....
>
> Following your suggestions I change the code as follows:
>
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
> unstructured_grid%num_vertices_global, &
> local_vertex_offset, &
> local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)
>
> call MatConvert(Adj_mat, MATAIJ,MAT_INITIAL_MATRIX,AIJ_mat,ierr)
>
> call
> MatMatTransposeMult(AIJ_mat,AIJ_mat,MAT_INITIAL_MATRIX,PETSC_DEFAULT_DOUBLE_PRECISION,Dual_mat,ierr)
> !! call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)
>
> call MatDestroy(Adj_mat,ierr)
> call MatDestroy(AIJ_mat,ierr)
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>
> When I try to run, the MatConvert takes ages, (I suppose I can form
> directly the AIJ_mat),

I see MatConvert_Basic does not even try to preallocate, which is
certainly why it takes a long time. That's easy to fix in PETSc (at
least for AIJ).

PETSc devs, why doesn't it make one pass counting and another pass
inserting?

> the PTScotch partitioning seems quick, but then I get an error later
> on in the code, when attempting to compute the internal
> connection.... Something is wrong...

Always send the entire error message.

Paolo Orsini

Nov 3, 2013, 6:53:22 AM
to Jed Brown, pflotran-dev, pets...@mcs.anl.gov
Hi Jed,

I found out what the problem is.
First of all I printed all the matrices out, to be sure that MatConvert was working fine.
And that was ok: AIJ_mat had the same connectivity table as Adj_mat.
Then I compared the dual graph (stored in DualMat) computed by the ParMETIS routine (MatMeshToCellGraph) with the one computed by MatMatTransposeMult, and they were quite different.
  
The computation of the dual graph of the mesh is a bit more complicated than multiplying the adjacency matrix by its transpose, but not far off. With that operation, cells that share only one node are also connected in the dual graph.
Instead, the minimum number of common nodes must be >1 (2 in 2D problems, 3 in 3D problems). In fact, this is an input of MatMeshToCellGraph; I should have understood this before.

This can be computed by taking the transpose of the adjacency matrix (Adj_T), then doing the multiplication line by line of Adj times Adj_T, and discarding the non-zero entries coming from elements that share fewer nodes than the minimum number of common nodes imposed. I have not implemented this yet; any suggestion is welcome.

I also found out that Scotch has a facility to compute a dual graph from a mesh, but PTScotch does not.
Once the graph is computed, PTScotch can load the central dual graph and distribute it across several processors during the loading.
Am I right to say that PETSc is interfaced only with PTScotch and not with Scotch?

To check if the PTScotch partition works (within PFLOTRAN), I am computing a DualMat with ParMETIS and saving it to a file. Then I recompile the code (using a PETSc compiled with PTScotch) and load the DualMat from the file rather than forming a new one. I did a successful test when running on one processor, but I am having trouble when I try on more.

I thought the dual graph was computed only once, even during the MPI process; instead it seems to be recomputed more than once. Not sure why.... surely I am missing something?
See below the original call from PFLOTRAN:

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
!!! this is the pass where the adjacency matrix is filled
  num_cells_local_old = unstructured_grid%nlmax 
  allocate(local_vertices(unstructured_grid%max_nvert_per_cell* &
                          num_cells_local_old))
  allocate(local_vertex_offset(num_cells_local_old+1))
  local_vertices = 0
  local_vertex_offset = 0
  count = 0
  local_vertex_offset(1) = 0
  do local_id = 1, num_cells_local_old
    do ivertex = 1, unstructured_grid%max_nvert_per_cell
      if (unstructured_grid%cell_vertices(ivertex,local_id) == 0) exit
      count = count + 1
      ! local vertices must be zero-based for MatCreateMPIAdj; thus subtract 1
      local_vertices(count) = &
        unstructured_grid%cell_vertices(ivertex,local_id) - 1
    enddo
    local_vertex_offset(local_id+1) = count 
  enddo
    
  select case (unstructured_grid%grid_type)
    case(TWO_DIM_GRID)
      num_common_vertices = 2 ! cells must share at least this number of vertices
    case(THREE_DIM_GRID)
      num_common_vertices = 3 ! cells must share at least this number of vertices
    case default
        option%io_buffer = 'Grid type not recognized '
        call printErrMsg(option)
    end select

  ! determine the global offset from 0 for cells on this rank
  global_offset_old = 0
  call MPI_Exscan(num_cells_local_old,global_offset_old, &
                  ONE_INTEGER_MPI,MPIU_INTEGER,MPI_SUM,option%mycomm,ierr)

  !! the adjacency matrix is computed
  call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
                       unstructured_grid%num_vertices_global, &
                       local_vertex_offset, &
                       local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)

!! the mesh dual graph is computed
#if defined(PETSC_HAVE_PARMETIS)
   call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)
#endif
  call MatDestroy(Adj_mat,ierr)
 
  call UGridPartition(unstructured_grid,option,Dual_mat,is_new, &
                      num_cells_local_new)
  
  if (allocated(local_vertices)) deallocate(local_vertices)
  if (allocated(local_vertex_offset)) deallocate(local_vertex_offset)

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Paolo


Jed Brown

Nov 3, 2013, 7:34:53 AM
to Paolo Orsini, pflotran-dev, pets...@mcs.anl.gov
Paolo Orsini <paolo....@gmail.com> writes:

> The computation of the dual graph of the mesh is a bit more complicated
> than multiplying the adjacency matrix by its transpose, but not far off.
> With this operation, also the cells that share only one node are connected
> in the dual graph.
> Instead, the minimum number of common nodes is >1 (2 in 2D probelms, 3 in
> 3D problems). In fact, this is an input of MatMeshToCellGraph, I should
> have understood this before.
>
> This can be computed doing the transpose adjacency matrix (Adj_T), then
> doing the multiplication line by line of Adj time Adj_T, and discard the
> non zero entries coming from to elements that share a number of nodes less
> than the minimum number of common nodes imposed. I have not implement this
> yet, any suggestion is welcome.


You can just put in weights of 1.0, call MatMatTransposeMult, and filter
the resulting matrix. It'll take a bit more memory, but probably not
prohibitively much.

> I also found out that Schotch has a facility to compute a dual graph from a
> mesh, but not PTScotch.
> Once the graph is computed, PTSchotch can load the central dual graph, and
> distribute it into several processors during the loading.
> Am i right to say that PETSC is interfaced only with PTSchotch and not with
> Scotch?

Scotch is serial so even if we had a specialized Scotch interface, it
would not be scalable (memory or time).

> To check if the PTSchotch partition works (within PFLOTRAN ), I am
> computing a DualMat with parmetis, saving it into a file. Then I recompile
> the code (using a petsc compiled with ptscotch), an load the DualMat from a
> file rather then forming a new one. I did a successful test when running on
> one processor. but I am having trouble when try on more.

Make sure you load on the same communicator with the same sizes set (so
that the distribution matches what you expect). You'll have to be more
specific if you want more help.

> I though the the dual graph was computed only once, even during the mpi
> process, instead it seems to be recomputed more than once. Not sure why....
> sure i am missing something ???

That seems like a question about your program logic. Should be easy to
figure out if you set a breakpoint in the debugger.

Gautam Bisht

Nov 3, 2013, 10:40:32 AM
to pflotr...@googlegroups.com, Jed Brown, pets...@mcs.anl.gov
Hi Paolo,


On Sun, Nov 3, 2013 at 3:53 AM, Paolo Orsini <paolo....@gmail.com> wrote:
Hi Jed,

I found out what the problem is.
First of all I printed all the matrices out, to be sure that MatConvert was working fine.
And that was ok: AIJ_mat had the same connectivity table than Adj_mat.
Then I compared the dual graph (stored in DualMat) computed by the Parmetis routine (MatMeshToCellGraph) and with MatMatTransposeMult. And they were quite different.
  
The computation of the dual graph of the mesh is a bit more complicated than multiplying the adjacency matrix by its transpose, but not far off. With this operation, also the cells that share only one node are connected in the dual graph.
Instead, the minimum number of common nodes is >1 (2 in 2D probelms, 3 in 3D problems). In fact, this is an input of MatMeshToCellGraph, I should have understood this before.

This can be computed doing the transpose adjacency matrix (Adj_T), then doing the multiplication line by line of Adj time Adj_T, and discard the non zero entries coming from to elements that share a number of nodes less than  the minimum number of common nodes imposed. I have not implement this yet, any suggestion is welcome.

I also found out that Schotch has a facility to compute a dual graph from a mesh, but not PTScotch.
Once the graph is computed, PTSchotch can load the central dual graph, and distribute it into several processors during the loading.  
Am i right to say that PETSC is interfaced only with PTSchotch and not with Scotch?

To check if the PTSchotch partition works (within PFLOTRAN ), I am computing a DualMat with parmetis, saving it into a file. Then I recompile the code (using a petsc compiled with ptscotch), an load the DualMat from a file rather then forming a new one. I did a successful test when running on one processor. but I am having trouble when try on more.    

I though the the dual graph was computed only once, even during the mpi process, instead it seems to be recomputed more than once. Not sure why.... sure i am missing something ???

In PFLOTRAN, MatCreateMPIAdj() is called:
- Once, if unstructured grid is specified in IMPLICIT format [in unstructured_grid.F90];
- Twice, if unstructured grid is specified in EXPLICIT format [in unstructured_grid.F90]. The first call creates a Adjacency matrix based on 'CONNECTIONS' data of the explicit grid; while the second call is used to create a Dual matrix.

In order to better understand the code, you could compile the code with 'ugrid_debug=1' and look at the Adj*.out/Dual*.out for the two runs:
- regression_tests/default/discretization/mixed_explicit.in
- regression_tests/default/discretization/mixed_implicit.in

-Gautam.

 

Gautam Bisht

Nov 3, 2013, 12:48:32 PM
to pflotr...@googlegroups.com, Jed Brown, pets...@mcs.anl.gov
On Sun, Nov 3, 2013 at 7:40 AM, Gautam Bisht <gbi...@lbl.gov> wrote:
Hi Paolo,


On Sun, Nov 3, 2013 at 3:53 AM, Paolo Orsini <paolo....@gmail.com> wrote:
Hi Jed,

I found out what the problem is.
First of all I printed all the matrices out, to be sure that MatConvert was working fine.
And that was ok: AIJ_mat had the same connectivity table than Adj_mat.
Then I compared the dual graph (stored in DualMat) computed by the Parmetis routine (MatMeshToCellGraph) and with MatMatTransposeMult. And they were quite different.
  
The computation of the dual graph of the mesh is a bit more complicated than multiplying the adjacency matrix by its transpose, but not far off. With this operation, also the cells that share only one node are connected in the dual graph.
Instead, the minimum number of common nodes is >1 (2 in 2D probelms, 3 in 3D problems). In fact, this is an input of MatMeshToCellGraph, I should have understood this before.

This can be computed doing the transpose adjacency matrix (Adj_T), then doing the multiplication line by line of Adj time Adj_T, and discard the non zero entries coming from to elements that share a number of nodes less than  the minimum number of common nodes imposed. I have not implement this yet, any suggestion is welcome.

I also found out that Schotch has a facility to compute a dual graph from a mesh, but not PTScotch.
Once the graph is computed, PTSchotch can load the central dual graph, and distribute it into several processors during the loading.  
Am i right to say that PETSC is interfaced only with PTSchotch and not with Scotch?

To check if the PTSchotch partition works (within PFLOTRAN ), I am computing a DualMat with parmetis, saving it into a file. Then I recompile the code (using a petsc compiled with ptscotch), an load the DualMat from a file rather then forming a new one. I did a successful test when running on one processor. but I am having trouble when try on more.    

I though the the dual graph was computed only once, even during the mpi process, instead it seems to be recomputed more than once. Not sure why.... sure i am missing something ???

In PFLOTRAN, MatCreateMPIAdj() is called:
- Once, if unstructured grid is specified in IMPLICIT format [in unstructured_grid.F90];
- Twice, if unstructured grid is specified in EXPLICIT format [in unstructured_grid.F90].

Typo on my part, EXPLICIT format is read from unstructured_explicit.F90

Paolo Orsini

Nov 15, 2013, 1:19:16 PM
to Jed Brown, pflotran-dev, pets...@mcs.anl.gov
Hi Jed,

I am back to work on this now...

Thanks for explaining how I can do this. It makes perfect sense:

A. Assign entries a value of 1 when forming the Adj matrix,

B. Convert the Adj matrix to an AIJ matrix, so I can use MatMatTransposeMult (or form the AIJ directly): can I preallocate the AIJ before calling MatConvert, to avoid slow performance? Is there any conflict between the preallocation functions and MatConvert?

C. Filter the AIJ, to remove entries with values <3 (for 3D problems). What function shall I use to do this operation?

Thank you 

Paolo 
   

Jed Brown

Nov 15, 2013, 1:34:19 PM
to Paolo Orsini, pflotran-dev, pets...@mcs.anl.gov
Paolo Orsini <paolo....@gmail.com> writes:

> Hi Jed,
>
> I am back to work on this now...
>
> Thanks for explaining how I can do this. it makes perfectly sense:
>
> A. Assign entries values = 1, when forming the Adj matrix,
>
> B. Convert the Adj matrix to AIJ matrix, so I can use MatMatTransposeMult
> (or form AIJ directly): can I preallocate AIJ before calling MatConvert? To
> avoid slow performance? Is there any conflict between the preallocoation
> functions and MatConvert?

MatConvert() should be fast.

> C. Filter AIJ, to remove entries with values <3 (for 3d problems). What
> function shall I use to do this operation?

Don't try to do it in-place. Do you want the result in a matrix or in
another data structure? Anyway, I would just walk along calling
MatGetRow() and picking out the column indices that have values of at
least 3.
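A minimal Fortran sketch of that row-by-row filtering, assuming the product matrix was built with unit weights so its values count shared vertices (Product_mat, MAX_COLS and the other names below are placeholders, not existing PFLOTRAN variables):

! cols(:) and vals(:) must be dimensioned to bound the longest row.
PetscInt       :: row, row_start, row_end, ncols, icol
PetscInt       :: cols(MAX_COLS)
PetscScalar    :: vals(MAX_COLS)
PetscErrorCode :: ierr

call MatGetOwnershipRange(Product_mat, row_start, row_end, ierr)
do row = row_start, row_end - 1                    ! PETSc rows are zero-based
  call MatGetRow(Product_mat, row, ncols, cols, vals, ierr)
  do icol = 1, ncols
    if (cols(icol) /= row .and. vals(icol) >= num_common_vertices) then
      ! keep cols(icol) as a dual-graph neighbour of cell 'row'
    endif
  enddo
  call MatRestoreRow(Product_mat, row, ncols, cols, vals, ierr)
enddo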

Paolo Orsini

Nov 15, 2013, 1:56:05 PM
to Jed Brown, pflotran-dev, pets...@mcs.anl.gov
MatConvert was slow when I tested 10 days ago, because it was not preallocating the new matrix (see the initial part of this thread). I was trying to find a workaround.
Unless you have already fixed this, did you?... I haven't reinstalled PETSc since then.

I need the final Dual Graph (the filtered matrix) in Adj mat format.
I will extract indices from the AIJ matrix (the matrix to filter), and I will create a new Adj (avoiding an in-place operation, as you suggested).

Thanks

Paolo
 
  

Jed Brown

Nov 15, 2013, 3:47:35 PM
to Paolo Orsini, pflotran-dev, pets...@mcs.anl.gov
Paolo Orsini <paolo....@gmail.com> writes:

> MatConvert was slow when I tested 10 days ago, because was not
> preallocating the new matrix (see initial part of this thread). I was
> trying to find a work around.
> Unless you already fixed this, did you?... I haven't reinstalled petsc from
> then

Thanks for the reminder, I forgot that we were missing a specialized
function. What I recommended should work and be fast with 'next'. Can
you try that?

> I need the final Dual Graph (the filtered matrix) in Adj mat format.
> I will extract indices from the AIJ matrix (the matrix to filter), and I I
> will create a new Adj (avoiding to do an in-place operation as you
> suggested)

Yeah, it can't really be done in-place anyway.

Jed Brown

Nov 15, 2013, 3:50:04 PM
to Paolo Orsini, pflotran-dev, pets...@mcs.anl.gov
Jed Brown <jedb...@mcs.anl.gov> writes:
> Thanks for the reminder, I forgot that we were missing a specialized
> function. What I recommended should work and be fast with 'next'. Can
> you try that?

Hang on, I messed something up. Will fix and reply here.

Jed Brown

Nov 15, 2013, 4:07:08 PM
to Paolo Orsini, pflotran-dev, pets...@mcs.anl.gov

Paolo Orsini

Nov 18, 2013, 5:31:51 AM
to Jed Brown, pflotran-dev, pets...@mcs.anl.gov
Hi Jed,

I tried the updated MatConvert version (from MPIAdj type to MPIAIJ type), but it still seems as slow as before.

Anyway, don't worry too much. I have abandoned the idea of the conversion.
I am forming the adjacency matrix directly in MPI AIJ format, so I don't have to do the conversion.
I also realised that the Dual Graph is formed in parallel (in the current PFLOTRAN implementation), so I can't use MatMatTransposeMult (which works only for sequential matrices).

I will use instead MatCreateTranspose and MatMatMult, then filter the resulting matrix row by row to form the DualGraph.
It will require more memory, but it should work. Do you see any problem with this?

Paolo

Jed Brown

Nov 18, 2013, 11:07:31 AM
to Paolo Orsini, pflotran-dev, pets...@mcs.anl.gov
Paolo Orsini <paolo....@gmail.com> writes:

> Hi Jed,
>
> I tried the MatConvert updated version, (from MPIAdj type to MPIAij type)
> but it seems still slow as before.

Did you update to 'next' after my changes on Friday afternoon? I tested
performance with a 3D Laplacian:

diff --git i/src/ksp/ksp/examples/tutorials/ex45.c w/src/ksp/ksp/examples/tutorials/ex45.c
index 4a67263..36f61d7 100644
--- i/src/ksp/ksp/examples/tutorials/ex45.c
+++ w/src/ksp/ksp/examples/tutorials/ex45.c
@@ -149,5 +149,13 @@ PetscErrorCode ComputeMatrix(KSP ksp,Mat jac,Mat B,MatStructure *stflg,void *ctx
ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
*stflg = SAME_NONZERO_PATTERN;
+
+ {
+ Mat Badj,Baij;
+ ierr = MatConvert(B,MATMPIADJ,MAT_INITIAL_MATRIX,&Badj);CHKERRQ(ierr);
+ ierr = MatConvert(Badj,MATMPIAIJ,MAT_INITIAL_MATRIX,&Baij);CHKERRQ(ierr);
+ ierr = MatDestroy(&Badj);CHKERRQ(ierr);
+ ierr = MatDestroy(&Baij);CHKERRQ(ierr);
+ }
PetscFunctionReturn(0);
}


$ mpiexec -n 4 ./ex45 -da_grid_x 100 -da_grid_y 100 -da_grid_z 100 -ksp_max_it 1 -log_summary | grep MatConvert
MatConvert 2 1.0 3.6934e-01 1.0 0.00e+00 0.0 1.6e+01 1.0e+04 1.9e+01 41 0 25 14 24 41 0 25 14 25 0

> Anyway, don't worry to much. I abandoned the idea of the conversion.
> I am forming the adjacency matrix directly in MPI AIJ format, so I don't
> have to do the conversion.
> I also realised that the Dual Graph is formed in parallel (in the current
> PFLOTRAN implementation), so I can't use the MatMatTransposeMult (which
> works only for sequential matrix).
>
> I will use instead MatCreatTranspose

Use MatTranspose(). MatCreateTranspose doesn't move any data around, it
just makes a matrix that behaves like the transpose.

> and MAtMatMult, then filtering the resulting matrix row by row and
> from the DualGraph. It will require more memory, but it should
> work. Do you see any problem with this?

Forming the AIJ matrix up-front is less memory than creating a MatAdj.
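Putting the two suggestions together, a sketch of the parallel variant might look as follows (AIJ_mat_T and Product_mat are placeholder names; the row-by-row filtering discussed earlier still has to follow):

call MatTranspose(AIJ_mat, MAT_INITIAL_MATRIX, AIJ_mat_T, ierr)
call MatMatMult(AIJ_mat, AIJ_mat_T, MAT_INITIAL_MATRIX, &
                PETSC_DEFAULT_DOUBLE_PRECISION, Product_mat, ierr)
call MatDestroy(AIJ_mat_T, ierr)
! With unit weights in AIJ_mat, Product_mat(i,j) counts the vertices shared
! by cells i and j; filter it as discussed to obtain the dual graph.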

Dan Zhou

Jul 26, 2015, 5:39:31 AM
to pflotran-users, pflotr...@googlegroups.com, peter.l...@gmail.com
Hi Peter,

I have a similar structured grid with varying heights that needs to be set up for PFLOTRAN, like the following grid easily made in Visual MODFLOW. I have no idea how to make this grid for PFLOTRAN. Could you please recommend some way to do that? e.g. a mesh maker that is easy to convert into PFLOTRAN format. Bests, Dan

Gautam Bisht

Jul 27, 2015, 10:58:35 AM
to pflotra...@googlegroups.com, pflotr...@googlegroups.com, Peter Lichtner
Hi Dan,

You can use PFLOTRAN's implicit unstructured grid format. The following information will help you get started:

- Example inputdeck: regression_tests/default/discretization/mixed_implicit.in
- Example mesh: regression_tests/default/discretization/mixed.ugi
- Documentation about unstructured grid: Section 4.27.1 of user_manual.pdf (docs/user_manual/user_manual.tex)

-Gautam.

Dan Zhou

Jul 27, 2015, 11:35:13 AM
to pflotran-users, pflotr...@googlegroups.com, peter.l...@gmail.com, gbi...@lbl.gov
Great thanks.

paolo....@gmail.com

Sep 10, 2016, 1:23:52 PM
to pflotran-dev, pflotra...@googlegroups.com
Hi Peter,

I am playing with some flux correction terms.
Do you still have the PFLOTRAN mesh/input you used in the case below (SACROC grid)?
If you don't mind sharing it, it would help me prepare a test.

Thanks

Paolo  

Peter Lichtner

Sep 10, 2016, 6:09:12 PM
to pflotr...@googlegroups.com, pflotra...@googlegroups.com
Hi Paolo: I think this was done with TOUGH. I will have to take a look around. Unfortunately I am leaving for travel for a week and would have to look through some older hard drives which I won’t have access to.
…Peter

Paolo Orsini

Sep 12, 2016, 9:33:47 AM
to pflotran-dev, pflotra...@googlegroups.com
Hi Peter,
Take your time. If you can find it, even if it is in TOUGH format it would help.
Otherwise I will build another case.
Thanks  for your help
Paolo



paolo.t...@amphos21.com

Oct 25, 2019, 8:49:25 AM
to pflotran-dev

Hi Paolo, hi all,

sorry to bring back this very old discussion. I wonder whether you have successfully compiled PETSc with PTScotch. If so, is there any available fork of PFLOTRAN that can be used?

Thanks in advance

The other Paolo


On Friday, November 1, 2013, 20:09:43 (UTC+1), Paolo Orsini wrote:
Hi Jed,

I have tried to compile PFLOTRAN, after compiling petsc with ptscotch. 

The problem is that PFLOTRAN calls MatMeshToCellGraph to create the Dual graph, operation which is done by using the  the parmetis function ParMETIS_V3_Mesh2Dual. 

Is there an equivalent function of MatMeshToCellGraph that uses PTScotch?

Paolo

On Mon, Oct 7, 2013 at 9:03 PM, Jed Brown <jedb...@mcs.anl.gov> wrote:

Paolo Orsini

Oct 25, 2019, 9:12:29 AM
to pflotran-dev
Hi Paolo,

We build with PTScotch all the time for the master branch, as ParMetis is open but not free.
However, there are limitations: mainly, you cannot use it with implicit unstructured grids, as the domain decomposition of that type of mesh requires a ParMetis function to create the dual graph (if I remember correctly).
Unless something has changed recently, as I haven't looked at the implicit grids lately.

However, for explicit unstructured meshes, which are the only ones we use, that function is not required, and you can build entirely without ParMetis.

Note that this build is straightforward on Linux, but on Windows it can be very tricky.

I hope this helps

Paolo



Hammond, Glenn E

Oct 28, 2019, 11:09:10 AM
to pflotr...@googlegroups.com

From: pflotr...@googlegroups.com <pflotr...@googlegroups.com> On Behalf Of Paolo Orsini
Sent: Friday, October 25, 2019 6:12 AM
To: pflotran-dev <pflotr...@googlegroups.com>
Subject: [EXTERNAL] Re: [pflotran-dev: 5805] Re: [pflotran-users: 998] Distorted structured grid

 

Hi Paolo,

 

We build with PTScotch all the times, for the master branch, as ParMetis is open but not free. 

However there are limitations, mainly you cannot use it with Implicit unstructured grids, as the domain decomposition of those type of meshes required a Parmetis function to create the dual graph (if I remember well).

Unless something has changed recently, as I haven't looked at the implicit grids recently.

 

However, for unstructured explicit mesh, which is the only ones we use, that function is not required, and you can build entirely without parmetis.

 

Decomposition of the “implicit” unstructured grids currently requires a call to MatMeshToCellGraph, which is wired to ParMETIS (https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MatMeshToCellGraph.html).  We would need to work with PETSc to have PT-SCOTCH replace it, if possible.  I note that the PT-SCOTCH CeCILL-C license is compatible with GNU LGPL.

 

The “explicit” unstructured grid does not require ParMETIS as it calls MatPartitioningApply, which can leverage a number of partitioning types (https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPartitioningType.html#MatPartitioningType).
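In that case the partitioner is a run-time choice through the PETSc options database, along the lines of (the input file name below is a placeholder):

mpirun -n 4 pflotran -pflotranin my_run.in -mat_partitioning_type ptscotch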

 

I would first look into gaining permission to use METIS/ParMETIS before looking into alternate partitioners.

 

Glenn

