Error when exporting results (but calculation finished)


Francis Jenner Bernales

Nov 21, 2021, 8:08:06 PM
to openqua...@googlegroups.com
Dear OpenQuake developers,
Good day!
I have encountered an error in our classical PSHA calculation when exporting the results (the calculation phase had already finished).
This is the first time I've encountered this problem, so your advice would be really helpful.
For disclosure, this was run on a Windows machine with OQ installed via the universal installation script.
Attached is our config file for the hazard calculation; the traceback is pasted below.
Looking forward to hearing from you.
Thank you!

Traceback (most recent call last):
  File "C:\Users\patrick.selda\openquake\lib\site-packages\openquake\engine\engine.py", line 257, in run_calc
    calc.run()
  File "C:\Users\patrick.selda\openquake\lib\site-packages\openquake\calculators\base.py", line 236, in run
    self.post_execute(self.result)
  File "C:\Users\patrick.selda\openquake\lib\site-packages\openquake\calculators\classical.py", line 666, in post_execute
    self.store_stats()
  File "C:\Users\patrick.selda\openquake\lib\site-packages\openquake\calculators\classical.py", line 724, in store_stats
    parallel.Starmap(
  File "C:\Users\patrick.selda\openquake\lib\site-packages\openquake\baselib\parallel.py", line 835, in reduce
    return self.submit_all().reduce(agg, acc)
  File "C:\Users\patrick.selda\openquake\lib\site-packages\openquake\baselib\parallel.py", line 594, in reduce
    acc = agg(acc, result)
  File "C:\Users\patrick.selda\openquake\lib\site-packages\openquake\calculators\classical.py", line 631, in save_hazard
    for kind in pmap_by_kind:  # hmaps-XXX, hcurves-XXX
TypeError: 'NoneType' object is not iterable

job_hazard.ini

Michele Simionato

Nov 22, 2021, 4:01:37 AM
to OpenQuake Users
This error means that you ran out of memory.

francisj...@gmail.com

Nov 22, 2021, 4:33:33 AM
to OpenQuake Users
Hi Michele, 

What is your recommended solution here?

This hasn't occurred for us in older versions of OQ, but it is happening in OQ 3.12.

This machine has 32GB of RAM and ~24GB of free disk space, so I think that's enough for this single-site calculation.

By any chance, could this be connected to some Administrator restriction in Windows, since this was run with user-level access (although the error message does not suggest anything like that)?

Michele Simionato

Nov 22, 2021, 12:54:25 PM
to OpenQuake Users

My guess is that you have too many realizations (how many do you have?).
If that's the case, set number_of_logic_tree_samples to a smaller number.
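For example, something along these lines in the job.ini (the value 100 below is only an illustration; pick whatever makes sense for your study):

    [logic_tree]
    number_of_logic_tree_samples = 100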

       Michele

Francis Jenner Bernales

Nov 23, 2021, 11:49:03 PM
to openqua...@googlegroups.com
For this run, we have 1 source model (containing 20 area sources) x 3 b-value increments x 3 Mmax increments x 4 GMPEs for ASC x 3 GMPEs for Subduction Interface x 3 GMPEs for Subduction Intraslab.

So, that makes 324 logic tree paths/realizations? By the way, this calculation is set to full-path enumeration. Any guidance on a reasonable value we could adopt for number_of_logic_tree_samples with Monte Carlo sampling?




Michele Simionato

Nov 23, 2021, 11:59:33 PM
to OpenQuake Users
This is a really small calculation that should not run out of memory, unless you are doing something wrong like using an area discretization that is too small.
I cannot say much more without having the input files.

  Michele

francisj...@gmail.com

Dec 10, 2021, 4:27:33 AM
to OpenQuake Users
Hi, Michele.

For our PSHA calculations involving horizontal-component IMs, setting the pointsource_distance parameter allowed the classical PSHA calculations to run successfully with no memory error.
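For reference, we set it with a single line in the job.ini (the 100 km figure below is only illustrative, not necessarily the exact value we adopted):

    pointsource_distance = 100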

However, when using the newly implemented V/H GSIMs, the memory errors persist. We have already tried logic tree sampling instead of the default full-path enumeration, in addition to the pointsource_distance parameter, but the memory errors are still there.

This makes me curious why a calculation involving 1 source model (containing 20 area sources) x 3 b-value increments x 3 Mmax increments x 3 GMPEs for ASC x 3 GMPEs for Subduction Interface x 3 GMPEs for Subduction Intraslab still runs into memory errors like this, given that it is not really a large calculation.

Is this somehow tied to how the V/H models work, given that they utilize horizontal and vertical GSIMs concurrently? Does that make it twice as large as a calculation involving only horizontal or only vertical GSIMs?

I am attaching herewith the input files we're using in the calc, as well as the corresponding traceback.

Francis

traceback_VH.txt
PSHA_Trial.rar

Michele Simionato

Dec 10, 2021, 5:20:29 AM
to OpenQuake Users
I am not sure why you have the idea that this is a small calculation. By running it I get this line in the log:

PointSource ruptures: 7_390_800

i.e. there are over 7 million ruptures, so it is a medium/large calculation. The fact that there is a single site does not make it small, since the GMPEs you are using are not vectorized. So a good idea would be to vectorize them, but this can be difficult. The easy way is to increase the area_source_discretization parameter from 5 km to 10 km: doubling the spacing roughly quarters the number of discretized point sources, so you will be about 4 times faster (see the job.ini fragment below). About the memory consumption I am not sure; with current master the calculation is using well below 1 GB per core on my workstation with 10 cores. Which version of the engine are you using?
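A minimal job.ini fragment for that change, assuming the parameter sits in the usual [erf] section (keep the rest of your settings as they are):

    [erf]
    area_source_discretization = 10.0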

         Michele

Francis Jenner Bernales

Dec 10, 2021, 9:42:26 PM
to openqua...@googlegroups.com
Dear Michele,

Thanks for your insights. Really helpful.

I had the misconception that the memory consumed by a calculation depends primarily on the total number of LT paths/samples. I failed to account for the total number of ruptures generated from the whole source model.

The fact that there is a single site
does not make it small, since the GMPEs you are using are not vectorized

I am using OQ 3.12 for this calculation. I also thought that in 3.12 the GSIM library had already been vectorized, but based on this, am I correct that the vectorization is still ongoing?

The easy way is to increase the area_source_discretization parameter from 5 km to 10 km
and you will be 4 times faster.

I've also considered increasing the grid spacing for area_source_discretization but have not tried it yet. Do you think decreasing the number of LT samples in Latin Hypercube sampling would reduce memory consumption more significantly than increasing area_source_discretization?

About the memory consumption I am not sure, with current master the calculation is using well below
1 GB per core on my workstation with 10 cores, which version of the engine are you using?

Interesting. The machine used for this calculation has an 8-core processor, 32GB RAM, and ~24GB of free disk space, so it should not be far off, right? Is there anything else I may be missing that is causing this problem?

Again, thank you very much for your help, Michele. Really appreciate it.

Francis


Michele Simionato

Dec 11, 2021, 3:13:19 AM
to openqua...@googlegroups.com
On Sat, Dec 11, 2021 at 3:42 AM Francis Jenner Bernales <francisj...@gmail.com> wrote:
Dear Michele,

Thanks for your insights. Really helpful.

I had the misconception that the size of the calculation that will consume a lot of memory is primarily dependent on the total LT paths/samples. I failed to account for the part of the total number of ruptures generated from the whole source model.

The fact that there is a single site
does not make it small, since the GMPEs you are using are not vectorized

I am using OQ 3.12 for this calculation. I also thought that in 3.12, the GSIM library has already been vectorized, but am I correct that it is still ongoing based on this?


The vectorization is done on-demand when there are paying projects, otherwise old GMPEs are left as they are. But you can do the vectorization yourself if you are willing to invest the time and contribute back to hazardlib. That would be much appreciated.
 
The easy way is to increase the area_source_discretization parameter from 5 km to 10 km
and you will be 4 times faster.

I've also considered increasing the grid size for area_source_discretization but have not tried it yet. Do you think decreasing the number of LT samples in Latin Hypercube sampling will have a more significant reduction in memory consumption than increasing area_source_discretization?

No, changing the sampling will change next to nothing. Actually I would use full enumeration in your case since there are not so many realizations.
The performance will likely be the same or nearly the same. 
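If I remember the defaults correctly, that simply means setting the parameter to zero (or omitting it altogether), which gives full path enumeration:

    number_of_logic_tree_samples = 0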

About the memory consumption I am not sure, with current master the calculation is using well below
1 GB per core on my workstation with 10 cores, which version of the engine are you using?

Interesting. The machine used for this calculation has an 8-core processor, 32GB RAM, and ~24GB of free disk space, so it should not be far off, right? Any other things I may be missing that are causing this problem?


I dunno, I should run the calculation with engine 3.12 to see if I can reproduce the memory issue.
Increasing the grid spacing may also solve the memory problem; I would try that first.

            Michele 

Michele Simionato

Dec 11, 2021, 4:18:31 AM
to openqua...@googlegroups.com
I confirm that the calculation runs out of memory with engine-3.12 for me too. Luckily engine 3.13 has strong memory optimizations, so you can just upgrade to the current master.

    Michele