Practical Implementation of Non-Ergodic PSHA in OpenQuake?


EarthKuei

Jun 14, 2021, 2:38:00 PM
to OpenQuake Users

Hi all,

We are interested in performing a practical application/study of non-ergodic (i.e., site-specific) PSHA using OpenQuake, following the methodology discussed in Stewart et al. (2017) and Stewart and Hashash (2014). The basic premise of their recipe is to characterize the mean amplification function and its dispersion using a combination of site-specific ground response analysis and empirical data/inference. These terms are then carried into the seismic hazard model so that the hazard integration reflects a more appropriate, site-specific amplification term.
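In rough equation form, as we understand it: the surface motion is Z = X * AF, where X is the reference-rock motion from the GMM and ln(AF) is treated as normally distributed with a site-specific mean mu_lnAF(x) (possibly dependent on the rock amplitude x) and standard deviation sigma_lnAF; the hazard integration then uses this site-specific mean and dispersion in place of the GMM's ergodic site term.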

Has anyone had experience with implementing this approach within the OpenQuake framework? It would be great to hear about anyone’s experience (successful or otherwise) with the implementation, as well as potential limitations and challenges along the way, before we embark down this route ourselves.

My understanding is that there might be two possible ways of approaching the problem:

1) Using amplification tables; or

2) Cloning and modifying the OpenQuake source files directly.

At a superficial level, the first approach seems like the cleanest way to implement this, but I have not really seen any documentation or examples of its use in the user/advanced-user manuals. I am also not sure whether amplification tables are only appropriate for trivial/simple models, or whether they can be applied with multiple GMMs. Could anyone confirm or chime in on this?

For context, we would be working with either the 5th or 6th Generation Canadian Seismic Model. My concern is the overall complexity of this base model: the logic tree is not trivial, and Stewart’s method requires separating the different uncertainty components, which I am not certain we can obtain from all of the GMMs used.
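On that last point, I assume we could at least check programmatically which hazardlib GMMs report the between-event and within-event components separately, with something along these lines (a rough sketch only; the module layout may differ between engine versions):

# List the GMMs that define inter- and intra-event standard deviations
# separately (i.e. tau and phi, not just the total sigma).
from openquake.hazardlib.gsim import get_available_gsims
from openquake.hazardlib import const

needed = {const.StdDev.INTER_EVENT, const.StdDev.INTRA_EVENT}
for name, cls in get_available_gsims().items():
    if needed <= set(cls.DEFINED_FOR_STANDARD_DEVIATION_TYPES):
        print(name)

We would still need to verify which of the GMMs actually used in the Canadian model show up in that list.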

Regards,

Kevin

 

Some similar requests in earlier threads:

https://groups.google.com/g/openquake-users/c/syfIYWwSCj4/m/A1oOr2j-CAAJ

https://groups.google.com/g/openquake-users/c/K_JkoVVyWms/m/JbvE-YQ3BwAJ

 

 

Marco Pagani

Jun 15, 2021, 12:13:04 PM
to OpenQuake Users

The OQ Engine currently does not support the Stewart et al. (2017) approach, but it could be implemented (for example as part of a project or a collaboration).

As mentioned in your message, the approach available in the OQ Engine for performing site-specific analyses uses amplification tables in conjunction with the convolution approach (see for example https://github.com/gem/oq-engine/tree/master/openquake/qa_tests_data/classical/case_49). You can, for example, account for epistemic uncertainties in the ground motion model using the modifiable GMPE (https://github.com/gem/oq-engine/blob/master/openquake/hazardlib/gsim/mgmpe/modifiable_gmpe.py).

These features are not documented since we still consider them experimental; you can use them at your own risk. Any feedback is of course welcome.
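Conceptually, the convolution combines the rock hazard curve with the distribution of the amplification factor conditioned on the rock motion. Just to illustrate the idea (a deliberately coarse numpy/scipy sketch assuming a lognormal amplification, not the engine implementation):

import numpy as np
from scipy.stats import norm

def convolve_site_hazard(x_rock, poe_rock, z_surface, median_af, sigma_lnaf):
    # x_rock: increasing rock IM levels; poe_rock: their exceedance probabilities
    # z_surface: surface IM levels at which the site hazard curve is wanted
    # median_af(x): median amplification factor given rock motion x
    # sigma_lnaf: lognormal standard deviation of the amplification
    x_rock = np.asarray(x_rock, dtype=float)
    poe_rock = np.asarray(poe_rock, dtype=float)
    z_surface = np.asarray(z_surface, dtype=float)
    # occurrence probabilities per rock-motion bin (zero exceedance assumed beyond the last level)
    p_occ = -np.diff(np.append(poe_rock, 0.0))
    poe_surface = np.zeros_like(z_surface)
    for xj, pj in zip(x_rock, p_occ):
        mu = np.log(median_af(xj))
        # P(AF > z / xj | rock motion = xj), amplification assumed lognormal
        prob_af = 1.0 - norm.cdf((np.log(z_surface / xj) - mu) / sigma_lnaf)
        poe_surface += prob_af * pj
    return poe_surface

The amplification tables in the test case linked above roughly play the role of median_af and sigma_lnaf here.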
Thanks
Hazard Team



MARCO PAGANI | Seismic Hazard Team Lead | Skype mm.pagani | +39-0382-5169863
GLOBAL EARTHQUAKE MODEL | working together to assess risk

Farhan Javed

Jun 15, 2021, 12:24:08 PM
to openqua...@googlegroups.com
Hi,

I am new to using OpenQuake for hazard assessment!

Could you please advise which system I should install OpenQuake on?

1) A desktop PC with a Core i7, 64 GB of RAM, and a 1 TB hard disk
2) A single-node cluster with 12 cores and 24 GB of RAM

Which system would be more suitable for fast calculations?
Thanks for your time!

Farhan




Michele Simionato

Jun 15, 2021, 11:59:35 PM
to OpenQuake Users
If you want to run GEM models at a continental scale you need some serious hardware, e.g. 120 cores and 240 GB of RAM (minimum).
With what you have, you can run site-specific analyses and computations for a few sites. The major issue is memory: with few
cores you will be slow, but without enough memory the calculation will not run at all.

                   Michele
