Dear OpenQuake developers,
Good day!
I have encountered an error in our classical PSHA calculation when exporting the results (the calculation phase had already finished).
This is the first time I have encountered this problem, so your advice would be really helpful.
For disclosure, this was run on a Windows machine with OQ installed via the universal installation script.
Attached herewith is our config file for the hazard calculation; the traceback is pasted below.
Looking forward to hearing from you.
Thank you!
Traceback (most recent call last):
TypeError: 'NoneType' object is not iterable
The fact that there is a single site does not make it small, since the GMPEs you are using are not vectorized.
The easy way is to increase the area_source_discretization parameter from 5 km to 10 km, and you will be 4 times faster: area sources are discretized into a two-dimensional grid of point sources, so doubling the spacing quarters the number of points to process.
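In job.ini terms this is a one-line change. A minimal sketch, assuming the [erf] section layout used in the demo files (the value is just the suggestion above; tune it to your model):

    [erf]
    # coarser discretization of area sources => fewer point sources and ruptures
    area_source_discretization = 10.0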
About the memory consumption I am not sure: with the current master the calculation is using well below 1 GB per core on my workstation with 10 cores. Which version of the engine are you using?
Dear Michele,

Thanks for your insights. Really helpful. I had the misconception that the memory consumed by a calculation depends primarily on the total number of LT paths/samples; I had failed to account for the total number of ruptures generated by the whole source model.

> The fact that there is a single site does not make it small, since the GMPEs you are using are not vectorized.

I am using OQ 3.12 for this calculation. I had also thought that in 3.12 the GSIM library had already been vectorized, but am I correct that, based on this, the work is still ongoing?

> The easy way is to increase the area_source_discretization parameter from 5 km to 10 km, and you will be 4 times faster.

I have also considered increasing the grid size for area_source_discretization but have not tried it yet. Do you think decreasing the number of LT samples in Latin hypercube sampling would reduce memory consumption more than increasing area_source_discretization?
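For reference, a minimal job.ini sketch of the sampling knob, assuming the [logic_tree] section layout used in the demo files; the value is a placeholder, and sampling_method = early_latin is an assumption based on your mention of Latin hypercube sampling (it may require a newer engine than 3.12):

    [logic_tree]
    # fewer samples => fewer realizations held in memory, at some cost in sampling accuracy
    number_of_logic_tree_samples = 500
    # assumption: Latin hypercube sampling selected via sampling_method
    sampling_method = early_latin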
> About the memory consumption I am not sure: with the current master the calculation is using well below 1 GB per core on my workstation with 10 cores. Which version of the engine are you using?

Interesting. The machine used for this calculation has an 8-core processor, 32 GB of RAM, and ~24 GB of free disk space, so it should not be far off, right? Is there anything else I may be missing that could be causing this problem?