Error in Running Official GeoClaw Radial Flat Example


Muhammad Ali

unread,
Jul 18, 2024, 10:14:00 PM7/18/24
to claw-users

Dear Community,

I have successfully installed Clawpack 5.10 on my machine. However, when I try to run the official example provided for Boussinesq solvers in two space dimensions (located at geoclaw/examples/bouss/radial_flat within the Clawpack root folder), I encounter the following error:
```
 ==> Applying Bouss equations to selected grids between levels 1 and 10
 ==> Use Bouss. in water deeper than 1.0000000000000000
 Using a PETSc solver
 Using Bouss equations from the start
 rnode allocated...
 node allocated...
 listOfGrids allocated...
 Storage allocated...
 bndList allocated...
Gridding level 1 at t = 0.000000E+00: 4 grids with 10000 cells
 Setting initial dt to 2.9999999999999999E-002
At line 42 of file /home/saad/project/clawpack/geoclaw/src/2d/bouss/setMatrixIndex.f90
Fortran runtime error: Index '20' of dimension 1 of array 'node' above upper bound of 19
Error termination. Backtrace:
#0 0x799231223960 in ???
#1 0x7992312244d9 in ???
#2 0x799231224ad6 in ???
#3 0x58f1d7e6fac4 in ???
#4 0x58f1d7e4e5e1 in ???
#5 0x58f1d7e59c94 in ???
#6 0x58f1d7e5ad19 in ???
#7 0x799230e29d8f in __libc_start_call_main at ../sysdeps/nptl/libc_start_call_main.h:58
#8 0x799230e29e3f in __libc_start_main_impl at ../csu/libc-start.c:392
#9 0x58f1d7dbf5b4 in ???
#10 0xffffffffffffffff in ???
```

I have PETSc 3.21 installed on my machine. Any assistance in resolving this issue would be greatly appreciated.

Best regards,
Muhammad Ali

Muhammad Ali

unread,
Jul 19, 2024, 1:53:02 AM7/19/24
to claw-users

This error was resolved by downgrading to Clawpack version 5.9.2. The downgrade occurred automatically when I reinstalled Clawpack in another folder. After the downgrade, the 2D solvers started running successfully.

Thank you for your assistance.

Best regards,

Ren

unread,
Jul 19, 2024, 8:07:59 AM7/19/24
to claw-users
Hi, Ali,

I have also run into some problems running the 2D Boussinesq case. Here is my problem: https://groups.google.com/g/claw-users/c/teW0B02sWU4

Could you also show me how you installed the 2D Boussinesq version of GeoClaw? Thank you very much!

Regards,
Zhiyuan 

berger Marsha

unread,
Jul 19, 2024, 9:24:06 AM7/19/24
to claw-...@googlegroups.com
That happens when the compiler is using the wrong module. Please delete the file called amr_module.mod in the directory $CLAW/amrclaw/src/2d/  and recompile. Then it should use the one in the Bouss directory.
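In shell terms that is roughly the following (a sketch, assuming $CLAW points at your Clawpack checkout and that `make new` forces a full recompile of the example):

```shell
# remove the stale module file so the Bouss version gets used instead
rm $CLAW/amrclaw/src/2d/amr_module.mod
cd $CLAW/geoclaw/examples/bouss/radial_flat
make new      # rebuilds all objects and modules from scratch
```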

Marsha


Muhammad Ali

unread,
Jul 22, 2024, 1:36:04 PM7/22/24
to claw-...@googlegroups.com
I followed the steps below to run the 2D Boussinesq code.
*Preliminaries.*
Update the packages and install the build dependencies:
```shell
sudo apt update && sudo apt upgrade -y
sudo apt install git build-essential gfortran virtualenv meson ninja-build python3-dev libblas-dev liblapack-dev

```

Set up the root directory for the project (update the path accordingly):
```shell
export PROJECT_DIR=$HOME

```


Set up the Python environment and install commonly required dependencies:
```shell
virtualenv venv --python=3.10
source venv/bin/activate

pip install numpy scipy matplotlib netcdf4 meson-python ninja spin pytest nose
```

If you plan to run `nosetests -sv`, add `collections.Callable = collections.abc.Callable` after `import collections` in `suite.py` at `$PROJECT_DIR/venv/lib/python3.10/site-packages/nose/suite.py` (nose has not been updated for Python 3.10+, where `collections.Callable` was removed).
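A one-liner that applies the same patch (just a sketch; it assumes GNU sed, and the pythonX.Y directory must match your virtualenv):

```shell
# append the compatibility shim right after the "import collections" line in nose's suite.py
sed -i '/^import collections$/a collections.Callable = collections.abc.Callable' \
    $PROJECT_DIR/venv/lib/python3.10/site-packages/nose/suite.py
```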

*Set up PETSc*
Obtain the PETSc 3.20.5 source code as a tar file:
```shell
cd $PROJECT_DIR
curl -o petsc-3.20.5.tar.gz https://web.cels.anl.gov/projects/petsc/download/release-snapshots/petsc-3.20.5.tar.gz
```

Extract, configure and build
```shell
tar xf petsc-3.20.5.tar.gz

cd petsc-3.20.5
./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack
make all check
```
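Once the build finishes, configure/make report the PETSC_DIR/PETSC_ARCH pair for this build; exporting them now avoids trouble later (the values below are assumptions for a default source build; use whatever your configure output reports):

```shell
export PETSC_DIR=$PROJECT_DIR/petsc-3.20.5
export PETSC_ARCH=arch-linux-c-debug   # assumed default arch name; check the configure output
```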

*Setup clawpack*
Alternative 1. (Works better)
```shell
cd $PROJECT_DIR
git clone https://github.com/clawpack/clawpack.git clawpack
cd clawpack
git checkout v5.10.0     # or an older version; `git tag -l` to list options
git submodule init
git submodule update
pip install --no-build-isolation -e ./
```


Alternative 2.
```shell
cd $PROJECT_DIR
pip install --src=$PROJECT_DIR --no-build-isolation -e git+https://github.com/clawpack/clawpa...@v5.9.2#egg=clawpack
```
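
With either alternative, the later steps rely on `$CLAW` and a Fortran compiler being set; per the Clawpack install docs, that looks roughly like this (a sketch, assuming the checkout lands in $PROJECT_DIR/clawpack, which both alternatives above should produce):

```shell
export CLAW=$PROJECT_DIR/clawpack   # used by the Makefiles and the steps below
export FC=gfortran                  # Fortran compiler used to build the examples
python -c "import clawpack; print(clawpack.__version__)"   # quick sanity check
```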

Run tests (FORTRAN)
```shell
cd $CLAW/classic/tests
nosetests -sv
```

*Setup geoclaw with bouss*
```shell
cd $PROJECT_DIR

git clone https://github.com/clawpack/geoclaw_1d geoclaw_1d
git clone -b bouss_2d --single-branch https://github.com/clawpack/geoclaw geoclaw_bouss
```

```shell
cd $PROJECT_DIR

cp -R geoclaw_1d/src/1d_classic $CLAW/geoclaw/src/
cp -R geoclaw_1d/src/python/geoclaw_1d $CLAW/geoclaw/src/python/
cp -R geoclaw_bouss/src/python/geoclaw/* $CLAW/geoclaw/src/python/geoclaw/
cp -R geoclaw_bouss/src/2d/bouss $CLAW/geoclaw/src/2d/
cp -R geoclaw_bouss/examples/1d_classic $CLAW/geoclaw/examples/
cp -R geoclaw_bouss/examples/bouss $CLAW/geoclaw/examples/
```

Try an example (GEOCLAW)
```shell
cd $CLAW/geoclaw/examples/tsunami/chile2010
make all
```
Afterwards you can move to geoclaw/examples/bouss/radial_flat.
First run `make .output` inside the 1D directory of this radial example, then return to the root of radial_flat and run `make .plots` (it will run `make .output` for the 2D problem automatically); see the sketch below.
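A condensed sketch of that run order (the name of the 1D subdirectory is an assumption here; check the README in radial_flat for the exact path):

```shell
cd $CLAW/geoclaw/examples/bouss/radial_flat/1d_radial   # 1D reference solution first
make .output
cd ..                                                   # back to the radial_flat root
make .plots                                             # runs make .output for the 2D case too
```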
For further clarification, read the README inside the radial_flat folder.
Regards,

--
You received this message because you are subscribed to a topic in the Google Groups "claw-users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/claw-users/ChdgLREhHBc/unsubscribe.
To unsubscribe from this group and all its topics, send an email to claw-users+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/claw-users/65bd92ef-c300-41fb-876b-8ad193895049n%40googlegroups.com.

Muhammad Ali

unread,
Jul 22, 2024, 1:40:58 PM7/22/24
to claw-...@googlegroups.com
Hi Ren!
Please check the steps I sent in my earlier reply. You can use the latest PETSc version, 3.21.3; I forgot to update the version number in that reply, although 3.20 also works.
Regards

Ren

unread,
Aug 5, 2024, 11:26:39 PM8/5/24
to claw-users
Hi Ali,

Sorry for the late reply due to the holiday. I tried installing petsc-3.20.5 instead of 3.21.2:
./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack
make all check


However, when I run "make all check", I still get errors (attached).

I guess I may have made some mistakes when installing PETSc.

By the way, could you show me your environment variables?
make all check_errors.txt

Praveen C

unread,
Aug 5, 2024, 11:35:44 PM8/5/24
to Clawpack Google Group
If you are using conda, you can install petsc through that. This has worked well for me.
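
For reference, that route is roughly the following (a sketch; the package name assumes the conda-forge channel):

```shell
conda install -c conda-forge petsc
```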

best
praveen


Ren

unread,
Aug 7, 2024, 12:16:08 AM8/7/24
to claw-users
Dear Praveen,

Thanks for the reply. I used conda to install PETSc and it seems to have worked well. But do you know how to set the environment variables for PETSc?

Regards,
Zhiyuan

Axel Loïc Giboulot

unread,
Aug 7, 2024, 6:37:38 AM8/7/24
to Ren, claw-...@googlegroups.com
Hi! 

I never found a smart way to determine PETSC_DIR and PETSC_ARCH, so I just searched the whole computer, first running:

find / -name petsc 2>/dev/null

Only then could I locate a path resembling full_path_to/petsc<version>/architecture (/usr/lib/petscdir/petsc3.19/x86_64-linux-gnu-real in my case). That path decomposes into two variables: $PETSC_DIR/$PETSC_ARCH.

Finally, you simply append your findings to ~/.bashrc or ~/.bash_profile (as per PETSc's recommendations) or to your virtual environment's activate file:

export PETSC_DIR=full_path_to/petsc<version>
export PETSC_ARCH=architecture
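
A quick way to sanity-check the pair once exported (a sketch; the exact layout varies by install method):

```shell
# the combination should contain the PETSc libraries; an empty listing means the split is wrong
ls $PETSC_DIR/$PETSC_ARCH/lib | grep -i petsc
```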

Hope this helps. 
Axel

Praveen C

unread,
Sep 20, 2024, 11:47:23 AM9/20/24
to claw-users
Did you manage to run this? I am getting a crash like the one below; it looks like the executable itself is crashing. I am using Pe...@3.21.5

I have not changed the code; I am running the version in geoclaw. Is anybody able to run this?

Thanks
praveen

  Using SGN equations

==> Applying Bouss equations to selected grids between levels   1 and  10
 ==> Use Bouss. in water deeper than    1.0000000000000000    
 Using a PETSc solver
 Using Bouss equations from the start
 rnode allocated...
 node allocated...
 listOfGrids allocated...
 Storage allocated...
 bndList allocated...
Gridding level   1 at t =  0.000000E+00:     4 grids with       10000 cells
   Setting initial dt to    2.9999999999999999E-002
  max threads set to            1
 
 Done reading data, starting computation ...  
 
 Total zeta at initial time:    39269.907650665169    
GEOCLAW: Frame    0 output files done at time t =  0.000000D+00

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF
  Proc: [[26140,1],0]
  Errorcode: 59

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
prterun has exited due to process rank 0 with PID 0 on node euler calling
"abort". This may have caused other processes in the application to be
terminated by signals sent by prterun (as reported here).
--------------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/praveen/Applications/clawpack/clawutil/src/python/clawutil/runclaw.py", line 242, in runclaw
    proc = subprocess.check_call(cmd_split,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniforge/envs/claw/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/opt/miniforge/envs/claw/./bin/mpiexec', '-n', '6', '/tmp/bouss/radial_flat/xgeoclaw']' returned non-zero exit status 59.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/praveen/Applications/clawpack/clawutil/src/python/clawutil/runclaw.py", line 341, in <module>
    runclaw(*args)
  File "/home/praveen/Applications/clawpack/clawutil/src/python/clawutil/runclaw.py", line 249, in runclaw
    raise ClawExeError(exe_error_str, cpe.returncode, cpe.cmd,
ClawExeError:

*** FORTRAN EXE FAILED ***

make[1]: *** [/home/praveen/Applications/clawpack/clawutil/src/Makefile.common:246: output] Error 1
make[1]: Leaving directory '/tmp/bouss/radial_flat'
make: *** [/home/praveen/Applications/clawpack/clawutil/src/Makefile.common:240: .output] Error 2

Ren

unread,
Oct 11, 2024, 11:28:42 PM10/11/24
to claw-users
Dear  praveen,

I have met the same problem. Did you solve it? Thanks!

Zhiyuan Ren 

Praveen C

unread,
Oct 12, 2024, 12:20:21 AM10/12/24
to Clawpack Google Group
No luck. It looks like something is going wrong inside petsc.

best
praveen

Praveen C

unread,
Oct 12, 2024, 2:43:27 AM10/12/24
to Clawpack Google Group
With pe...@3.22 I am getting somewhat different errors. All of the packages are installed with conda.

See below. It is complaining about the given PETSc options and is not able to use them. After printing this, the code does not progress; it seems to keep running but produces no output.

$ make check

===================

CLAW = /Users/praveen/Applications/clawpack

OMP_NUM_THREADS = 1

BOUSS_MPI_PROCS = 6

PETSC_OPTIONS=-options_file /Users/praveen/Applications/clawpack/geoclaw/examples/bouss/petscMPIoptions

PETSC_DIR = /opt/homebrew/Caskroom/miniforge/base/envs/claw

PETSC_ARCH = .

RUNEXE = /opt/homebrew/Caskroom/miniforge/base/envs/claw/./bin/mpiexec -n 6

FFLAGS = -march=armv8.3-a -ftree-vectorize -fPIC -fno-stack-protector -O2 -pipe -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include -DHAVE_PETSC -ffree-line-length-none

===================


best
praveen

 Using a PETSc solver

 Using Bouss equations from the start

 rnode allocated...

 node allocated...

 listOfGrids allocated...

 Storage allocated...

 bndList allocated...

Gridding level   1 at t =  0.000000E+00:     4 grids with       10000 cells

   Setting initial dt to    2.9999999999999999E-002

  max threads set to            1

  

 Done reading data, starting computation ...  

  

 Total zeta at initial time:    39269.907650665169     

GEOCLAW: Frame    0 output files done at time t =  0.000000D+00


[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------

[0]PETSC ERROR: Petsc has generated inconsistent data

[0]PETSC ERROR: Unable to locate PCMPI allocated shared address 0x140358000

[0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the program crashed before usage or a spelling mistake, etc!

[0]PETSC ERROR:   Option left: name:-ksp_type value: preonly source: file

[0]PETSC ERROR:   Option left: name:-mpi_ksp_max_it value: 200 source: file

[0]PETSC ERROR:   Option left: name:-mpi_ksp_reuse_preconditioner (no value) source: file

[0]PETSC ERROR:   Option left: name:-mpi_ksp_rtol value: 1.e-9 source: file

[0]PETSC ERROR:   Option left: name:-mpi_ksp_type value: gmres source: file

[0]PETSC ERROR:   Option left: name:-mpi_linear_solver_server_view (no value) source: file

[0]PETSC ERROR:   Option left: name:-mpi_pc_gamg_sym_graph value: true source: file

[0]PETSC ERROR:   Option left: name:-mpi_pc_gamg_symmetrize_graph value: true source: file

[0]PETSC ERROR:   Option left: name:-mpi_pc_type value: gamg source: file

[0]PETSC ERROR:   Option left: name:-pc_mpi_minimum_count_per_rank value: 5000 source: file

[0]PETSC ERROR:   Option left: name:-pc_type value: mpi source: file

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.22.0, Sep 28, 2024 

[0]PETSC ERROR: /tmp/bouss/radial_flat/xgeoclaw with 6 MPI process(es) and PETSC_ARCH  on MacMiniHome.local by praveen Sat Oct 12 10:54:16 2024

[0]PETSC ERROR: Configure options: AR=arm64-apple-darwin20.0.0-ar CC=mpicc CXX=mpicxx FC=mpifort CFLAGS="-ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include  " CPPFLAGS="-D_FORTIFY_SOURCE=2 -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include -mmacosx-version-min=11.0 -mmacosx-version-min=11.0" CXXFLAGS="-ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include  " FFLAGS="-march=armv8.3-a -ftree-vectorize -fPIC -fno-stack-protector -O2 -pipe -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include  " LDFLAGS="-Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,/opt/homebrew/Caskroom/miniforge/base/envs/claw/lib -L/opt/homebrew/Caskroom/miniforge/base/envs/claw/lib" LIBS="-Wl,-rpath,/opt/homebrew/Caskroom/miniforge/base/envs/claw/lib -lmpi_mpifh -lgfortran" --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-clib-autodetect=0 --with-cxxlib-autodetect=0 --with-fortranlib-autodetect=0 --with-debugging=0 --with-blas-lib=libblas.dylib --with-lapack-lib=liblapack.dylib --with-yaml=1 --with-hdf5=1 --with-fftw=1 --with-hwloc=0 --with-hypre=1 --with-metis=1 --with-mpi=1 --with-mumps=1 --with-parmetis=1 --with-pthread=1 --with-ptscotch=1 --with-shared-libraries --with-ssl=0 --with-scalapack=1 --with-superlu=1 --with-superlu_dist=1 --with-superlu_dist-include=/opt/homebrew/Caskroom/miniforge/base/envs/claw/include/superlu-dist --with-superlu_dist-lib=-lsuperlu_dist --with-suitesparse=1 --with-suitesparse-dir=/opt/homebrew/Caskroom/miniforge/base/envs/claw --with-x=0 --with-scalar-type=real   --with-cuda=0 --with-batch --prefix=/opt/homebrew/Caskroom/miniforge/base/envs/claw

[0]PETSC ERROR: #1 PetscShmgetMapAddresses() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/sys/utils/server.c:114

[0]PETSC ERROR: #2 PCMPISetMat() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/pc/impls/mpi/pcmpi.c:269

[0]PETSC ERROR: #3 PCSetUp_MPI() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/pc/impls/mpi/pcmpi.c:853

[0]PETSC ERROR: #4 PCSetUp() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/pc/interface/precon.c:1071

[0]PETSC ERROR: #5 KSPSetUp() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/ksp/interface/itfunc.c:415

[0]PETSC ERROR: #6 KSPSolve_Private() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/ksp/interface/itfunc.c:826

[0]PETSC ERROR: #7 KSPSolve() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/ksp/interface/itfunc.c:1075


Ren

unread,
Oct 12, 2024, 4:59:35 AM10/12/24
to claw-users
Dear  Praveen,

Thanks for your reply! It is really hard work!  /(ㄒoㄒ)/~~

Regards,
Zhiyuan

berger Marsha

unread,
Oct 13, 2024, 2:54:47 PM10/13/24
to claw-...@googlegroups.com, Barry Smith
I actually have the same error, having just ported to Linux from my Mac. What machines are you two running on?

I will look into this with the PETSc development team.

-- Marsha 


Ren

unread,
Oct 14, 2024, 9:29:37 PM10/14/24
to claw-users
Dear Marsha,

I am using Ubuntu 20.04 with PETSc 3.21.2. In fact, I found that many people have met this problem; see https://github.com/clawpack/geoclaw/issues/606.
So far I find that modifying "petscMPIoptions" avoids the problem and gives good results. However, the simulation time is very long. The modified "petscMPIoptions" is shown below.
I do not think this is the best solution; even when I use 40 threads, it still runs for a very long time. We should find a better way to solve this problem.

Regards,
Zhiyuan Ren



# linear solver:
-mpi_linear_solver_server
-ksp_type gmres
-mpi_ksp_type gmres
-mpi_ksp_max_it 200
-mpi_ksp_reuse_preconditioner

# preconditioner:
-pc_type none
-mpi_pc_type gamg
-mpi_pc_gamg_symmetrize_graph true
-mpi_pc_gamg_sym_graph true
-mpi_linear_solver_server_view


frame0020fig20.png

Ren

unread,
Oct 14, 2024, 9:35:00 PM10/14/24
to claw-users
Sorry, it should be Ubuntu 22.04

berger Marsha

unread,
Oct 15, 2024, 6:10:47 PM10/15/24
to claw-...@googlegroups.com
Thanks to the users who noticed that the new PETSc didn't work properly.

Please put the following options in your petscMPIoptions instead. I will commit to git soon, but wanted to tell you first and have you try it.

# set min numbers of matrix rows per MPI rank (default is 10000)
-mpi_linear_solve_minimum_count_per_rank 5000

# Krylov linear solver:
-mpi_linear_solver_server
-mpi_linear_solver_server_view
-ksp_type gmres
-ksp_max_it 200
-ksp_reuse_preconditioner
-ksp_rtol 1.e-9

# preconditioner:
-pc_type gamg
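
For reference, the example setup passes this file to PETSc through PETSC_OPTIONS (visible in the `make check` output quoted earlier in this thread), so edits to petscMPIoptions take effect on the next run:

```shell
export PETSC_OPTIONS="-options_file $CLAW/geoclaw/examples/bouss/petscMPIoptions"
```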




Praveen C

unread,
Oct 15, 2024, 10:55:41 PM10/15/24
to Clawpack Google Group
I am still getting the same error as before. Which version of PETSc worked for you? I am using 3.22, installed via miniforge.

After printing the following, nothing more happens; I can see geoclaw running in top, but no output is produced.

Thanks
praveen

  Using SGN equations

==> Applying Bouss equations to selected grids between levels   1 and  10

 ==> Use Bouss. in water deeper than    1.0000000000000000     

 Using a PETSc solver

 Using Bouss equations from the start

 rnode allocated...

 node allocated...

 listOfGrids allocated...

 Storage allocated...

 bndList allocated...

Gridding level   1 at t =  0.000000E+00:     4 grids with       10000 cells

   Setting initial dt to    2.9999999999999999E-002

  max threads set to            1

  

 Done reading data, starting computation ...  

  

 Total zeta at initial time:    39269.907650665169     

GEOCLAW: Frame    0 output files done at time t =  0.000000D+00


[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------

[0]PETSC ERROR: Petsc has generated inconsistent data

[0]PETSC ERROR: Unable to locate PCMPI allocated shared address 0x128488000

[0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the program crashed before usage or a spelling mistake, etc!

[0]PETSC ERROR:   Option left: name:-ksp_max_it value: 200 source: file

[0]PETSC ERROR:   Option left: name:-ksp_reuse_preconditioner (no value) source: file

[0]PETSC ERROR:   Option left: name:-ksp_rtol value: 1.e-9 source: file

[0]PETSC ERROR:   Option left: name:-ksp_type value: gmres source: file

[0]PETSC ERROR:   Option left: name:-mpi_linear_solve_minimum_count_per_rank value: 5000 source: file

[0]PETSC ERROR:   Option left: name:-mpi_linear_solver_server_view (no value) source: file

[0]PETSC ERROR:   Option left: name:-pc_type value: gamg source: file

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.22.0, Sep 28, 2024 

[0]PETSC ERROR: /tmp/bouss/radial_flat/xgeoclaw with 6 MPI process(es) and PETSC_ARCH  on chandra.tifrbng.res.in by praveen Wed Oct 16 08:22:01 2024

[0]PETSC ERROR: Configure options: AR=arm64-apple-darwin20.0.0-ar CC=mpicc CXX=mpicxx FC=mpifort CFLAGS="-ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include  " CPPFLAGS="-D_FORTIFY_SOURCE=2 -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include -mmacosx-version-min=11.0 -mmacosx-version-min=11.0" CXXFLAGS="-ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include  " FFLAGS="-march=armv8.3-a -ftree-vectorize -fPIC -fno-stack-protector -O2 -pipe -isystem /opt/homebrew/Caskroom/miniforge/base/envs/claw/include  " LDFLAGS="-Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,/opt/homebrew/Caskroom/miniforge/base/envs/claw/lib -L/opt/homebrew/Caskroom/miniforge/base/envs/claw/lib" LIBS="-Wl,-rpath,/opt/homebrew/Caskroom/miniforge/base/envs/claw/lib -lmpi_mpifh -lgfortran" --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-clib-autodetect=0 --with-cxxlib-autodetect=0 --with-fortranlib-autodetect=0 --with-debugging=0 --with-blas-lib=libblas.dylib --with-lapack-lib=liblapack.dylib --with-yaml=1 --with-hdf5=1 --with-fftw=1 --with-hwloc=0 --with-hypre=1 --with-metis=1 --with-mpi=1 --with-mumps=1 --with-parmetis=1 --with-pthread=1 --with-ptscotch=1 --with-shared-libraries --with-ssl=0 --with-scalapack=1 --with-superlu=1 --with-superlu_dist=1 --with-superlu_dist-include=/opt/homebrew/Caskroom/miniforge/base/envs/claw/include/superlu-dist --with-superlu_dist-lib=-lsuperlu_dist --with-suitesparse=1 --with-suitesparse-dir=/opt/homebrew/Caskroom/miniforge/base/envs/claw --with-x=0 --with-scalar-type=real   --with-cuda=0 --with-batch --prefix=/opt/homebrew/Caskroom/miniforge/base/envs/claw

[0]PETSC ERROR: #1 PetscShmgetMapAddresses() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/sys/utils/server.c:114

[0]PETSC ERROR: #2 PCMPISetMat() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/pc/impls/mpi/pcmpi.c:269

[0]PETSC ERROR: #3 PCSetUp_MPI() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/pc/impls/mpi/pcmpi.c:853

[0]PETSC ERROR: #4 PCSetUp() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/pc/interface/precon.c:1071

[0]PETSC ERROR: #5 KSPSetUp() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/ksp/interface/itfunc.c:415

[0]PETSC ERROR: #6 KSPSolve_Private() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/ksp/interface/itfunc.c:826

[0]PETSC ERROR: #7 KSPSolve() at /Users/runner/miniforge3/conda-bld/petsc_1728030427805/work/src/ksp/ksp/interface/itfunc.c:1075

Ren

unread,
Oct 16, 2024, 9:45:03 AM10/16/24
to claw-users
Dear Marsha,

Thanks a lot! It works now! However, I am not sure whether the MPI part is working.
For this case, I first modified the grid size: mx=my=500, amr_max_level=1.
Even with 80 threads, the SGN case takes 2600 seconds, whereas the shallow water equation case takes only 8 seconds. The difference is very large. Do you know the reason? Thank you very much!

Regards,
Zhiyuan

11.jpg

Ren

unread,
Oct 18, 2024, 4:23:51 AM10/18/24
to claw-users
To update the information:

I tested the hypothetical asteroid impact case. The paper says: "The simulation up to 50 minutes required about 30 minutes (wall time) on a laptop as described at the beginning of section 4. A simulation using the SWE with the same refinement levels and strategy (not shown) required about 4 minutes."

However, modeling 1800 s of tsunami propagation takes 5600 s for me, so I think the MPI part may not be working. The run also reports: "There is one unused database option. It is Option left: name: -mpi_linear_solver_server_minimum_count_per_rank : 5000 source".
I checked that -mpi_linear_solver_server_minimum_count_per_rank 5000 is the correct option name; see https://web.cels.anl.gov/projects/petsc/vault/petsc-3.21/docs/manualpages/PC/PCMPI.html

Regards,
Zhiyuan

Ren

unread,
Oct 27, 2024, 10:46:51 PM10/27/24
to claw-users
Dear Marsha,

May I ask how you set up PETSc on your laptop? Which version of PETSc do you use?

Regards,
Zhiyuan

NAVEEN R

unread,
Mar 17, 2025, 2:14:58 PMMar 17
to claw-users
Hi,

I was wondering if anyone else is still having this kind of output issue with the PETSc solver: https://github.com/clawpack/geoclaw/issues/606
In spite of testing the different PETSc MPI options discussed here, my results don't change.

I am using Clawpack 5.11 and tested it with both PETSc 3.21 and 3.22 on Ubuntu Linux.

Thanks
Naveen