Hazard calculation result shows all zeros in the hazard_map output


許銘凱

Jul 16, 2015, 4:59:39 AM
to openqua...@googlegroups.com
Dear admin,

I tried to run the hazard calculation with my own exposure_file and fragility_file. However, the resulting hazard_map XML file shows all zeros in "iml".
The attachments are the files I used, and the following is the run log:
-------------------------------------------------------------------------------------------------------------------------------------------------------------
kensheu@kensheu-VirtualBox:~/oq-engine$ ./bin/oq-engine --nd --rh=./CGS1410292015_m0_5.0/job_500.ini 
[2015-07-01 09:51:29,125 hazard job #186 - PROGRESS MainProcess/6718] **  pre_executing (hazard)
[2015-07-01 09:53:22,533 hazard job #186 - PROGRESS MainProcess/6718] **  initializing sites
[2015-07-01 09:53:25,554 hazard job #186 - PROGRESS MainProcess/6718] **  initializing site collection
[2015-07-01 09:53:25,567 hazard job #186 - PROGRESS MainProcess/6718] **  initializing sources
[2015-07-01 09:53:25,696 hazard job #186 - INFO MainProcess/6718] Considering 2 of 4 sources for model CGStaiwan_source_model-141027.xml('b1',), TRT=Interface subduction zone
[2015-07-01 09:53:25,798 hazard job #186 - INFO MainProcess/6718] Considering 12 of 12 sources for model CGStaiwan_source_model-141027.xml('b1',), TRT=Intraslab subduction zone
[2015-07-01 09:53:25,798 hazard job #186 - INFO MainProcess/6718] Submitting task _filter_sources #1
[2015-07-01 09:53:25,918 hazard job #186 - INFO MainProcess/6718] Submitting task _filter_sources #2
[2015-07-01 09:53:26,047 hazard job #186 - INFO MainProcess/6718] Submitting task _filter_sources #3
[2015-07-01 09:53:26,192 hazard job #186 - INFO MainProcess/6718] Submitting task _filter_sources #4
[2015-07-01 09:53:26,339 hazard job #186 - INFO MainProcess/6718] Sent 0M of data
[2015-07-01 09:53:26,340 hazard job #186 - INFO MainProcess/6718] spawned 4 tasks of kind _filter_sources
[2015-07-01 09:53:26,340 hazard job #186 - INFO MainProcess/6718] _filter_sources  25%
[2015-07-01 09:53:26,340 hazard job #186 - INFO MainProcess/6718] _filter_sources  50%
[2015-07-01 09:53:26,340 hazard job #186 - INFO MainProcess/6718] _filter_sources  75%
[2015-07-01 09:53:26,340 hazard job #186 - INFO MainProcess/6718] _filter_sources 100%
[2015-07-01 09:53:26,340 hazard job #186 - INFO MainProcess/6718] Received 0M of data
[2015-07-01 09:53:26,341 hazard job #186 - INFO MainProcess/6718] Considering 46 of 62 sources for model CGStaiwan_source_model-141027.xml('b1',), TRT=Active Shallow Crust
[2015-07-01 09:53:27,086 hazard job #186 - INFO MainProcess/6718] splitting <TrtModel #0 Interface subduction zone, 25 source(s)>
[2015-07-01 09:53:40,117 hazard job #186 - INFO MainProcess/6718] splitting <TrtModel #1 Intraslab subduction zone, 332 source(s)>
[2015-07-01 09:53:42,734 hazard job #186 - INFO MainProcess/6718] splitting <TrtModel #2 Active Shallow Crust, 1959 source(s)>
[2015-07-01 09:53:42,745 hazard job #186 - INFO MainProcess/6718] Total weight of the sources=1042687.0
[2015-07-01 09:53:42,746 hazard job #186 - INFO MainProcess/6718] Expected output size=32784.0
[2015-07-01 09:53:42,758 hazard job #186 - PROGRESS MainProcess/6718] **  executing (hazard)
[2015-07-01 09:53:42,763 hazard job #186 - PROGRESS MainProcess/6718] **  Submitting task compute_hazard_curves #1
/usr/lib/python2.7/dist-packages/openquake/hazardlib/gsim/base.py:601: RuntimeWarning: divide by zero encountered in log
  return numpy.log(values)
[2015-07-01 10:10:28,986 hazard job #186 - PROGRESS MainProcess/6718] **  Submitting task compute_hazard_curves #2
[... 53 similar lines (Submitting task compute_hazard_curves #3 through #55, 10:18-15:56) elided ...]
[2015-07-01 17:07:45,488 hazard job #186 - PROGRESS MainProcess/6718] **  Submitting task compute_hazard_curves #56
[2015-07-01 17:51:33,926 hazard job #186 - PROGRESS MainProcess/6718] **  Sent 0M of data
[2015-07-01 17:51:33,926 hazard job #186 - PROGRESS MainProcess/6718] **  spawned 56 tasks of kind compute_hazard_curves
[2015-07-01 17:51:33,933 hazard job #186 - PROGRESS MainProcess/6718] **  compute_hazard_curves   1%
[... 54 similar progress lines (compute_hazard_curves 3% through 98%) elided ...]
[2015-07-01 17:51:34,305 hazard job #186 - PROGRESS MainProcess/6718] **  compute_hazard_curves 100%
[2015-07-01 17:51:34,306 hazard job #186 - PROGRESS MainProcess/6718] **  Received 0M of data
[2015-07-01 17:51:34,316 hazard job #186 - PROGRESS MainProcess/6718] **  post_executing (hazard)
[2015-07-01 17:51:34,317 hazard job #186 - PROGRESS MainProcess/6718] **  initializing realizations
[2015-07-01 17:51:34,319 hazard job #186 - INFO MainProcess/6718] Creating 1 GMPE realization(s) for model CGStaiwan_source_model-141027.xml, ('b1',)
[2015-07-01 17:51:34,434 hazard job #186 - INFO MainProcess/6718] saving 4098 hazard curves for 406||hazard_curve||Hazard Curve rlz-27-SA(1.0), imt=SA(1.0)
[2015-07-01 17:51:34,924 hazard job #186 - PROGRESS MainProcess/6718] **  post_processing (hazard)
/usr/lib/python2.7/dist-packages/openquake/engine/calculators/hazard/post_processing.py:74: RuntimeWarning: divide by zero encountered in log
  imls = numpy.log(numpy.array(imls[::-1]))
[2015-07-01 17:51:35,951 hazard job #186 - PROGRESS MainProcess/6718] **  export (hazard)
[2015-07-01 17:51:35,957 hazard job #186 - PROGRESS MainProcess/6718] **  clean_up (hazard)
[2015-07-01 17:51:35,961 hazard job #186 - PROGRESS MainProcess/6718] **  complete (hazard)
Calculation 186 completed in 28806 seconds. Results:
  id | output_type | name
 406 | Hazard Curve | Hazard Curve rlz-27-SA(1.0)
 405 | Hazard Curve (multiple imts) | hc-multi-imt-rlz-27
 408 | Hazard Map | Hazard Map(0.02) SA(1.0) rlz-27

CGStaiwan_source_model-141027.xml
exposure_model_test20150625ex.xml
gmpe_logic_tree_Han_140505.xml
job_500ex.ini
structural_fragility_modelex.xml

Michele Simionato

Jul 17, 2015, 4:29:24 AM
to openqua...@googlegroups.com
Here you are extracting the intensity measure levels from the fragility function. They are 0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4.
The problem is that your hazard curves are (essentially) 1 for the first level and 0 for all the other levels.
Your poes are 0.1 and 0.02, and for such values the hazard is zero, so the hazard maps are zero.
So, technically, the engine is doing the right thing.

I will leave it to the risk scientists to answer whether the fragility function and the IMLs are the right ones for your case.
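The effect Michele describes can be illustrated with a minimal sketch (this is not the engine's actual interpolation code; the hazard map value is approximated here as the highest IML whose PoE still exceeds the target):

```python
import numpy as np

# IMLs extracted from the fragility model, and a hazard curve that is
# (essentially) 1 at the first level and 0 at all the others.
imls = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
poes = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

def hazard_map_iml(target_poe, imls, poes):
    """Simplified stand-in for the curve-to-map step: the highest IML
    whose probability of exceedance still exceeds the target PoE."""
    exceeding = imls[poes > target_poe]
    return exceeding.max() if exceeding.size else 0.0

# For both target PoEs in the job (0.1 and 0.02) only the first
# level (IML = 0.0) exceeds the target, so the map value is 0.
print(hazard_map_iml(0.10, imls, poes))  # -> 0.0
print(hazard_map_iml(0.02, imls, poes))  # -> 0.0
```

With levels starting below the curve's support, any target PoE smaller than the PoE at the second level collapses to the first level, 0.0.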

Michele Simionato

Jul 17, 2015, 4:40:23 AM
to openqua...@googlegroups.com
BTW, I see that you have used a fragility function for Nepal whereas your exposure is for Taiwan, so I assume you are just doing some random experiment.

anirudh.rao

Jul 17, 2015, 6:22:33 AM
to openqua...@googlegroups.com
Following up on Michele's reply, here are a couple of suggestions:

1. In your file `job_500ex.ini`, the investigation time is set twice, once to 50 years and then again to 1 year. In this case the second value is the one used by OpenQuake. If you are interested in obtaining the hazard maps for 2% and 10% poe in 50 years, you should remove the second definition of the investigation_time parameter from your job file.

2. The intensity levels specified in your fragility function start at 0.2g. As Michele mentioned, the probabilities of exceedance for all SA values ≥ 0.2g in your calculation are zero. Thus, instead of specifying the intensity measure levels via a fragility model, try specifying them directly in the job file, for example:
intensity_measure_types_and_levels = {"SA(1.0)": [0.0010, 0.0015, 0.0025, 0.0040, 0.0065, 0.010, 0.015, 0.025, 0.040, 0.065, 0.10, 0.15, 0.25, 0.40, 0.65, 1.0, 1.5, 2.5]}
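For reference, a sketch of the relevant lines of the job file after both fixes (values as suggested above; the rest of the file stays unchanged):

```ini
# a single definition of the investigation time
investigation_time = 50.0

# IMLs specified directly rather than extracted from the fragility model
intensity_measure_types_and_levels = {"SA(1.0)": [0.0010, 0.0015, 0.0025, 0.0040, 0.0065, 0.010, 0.015, 0.025, 0.040, 0.065, 0.10, 0.15, 0.25, 0.40, 0.65, 1.0, 1.5, 2.5]}
```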



On Friday, July 17, 2015 at 10:29:24 AM UTC+2, Michele Simionato wrote:

許銘凱

Jul 19, 2015, 10:35:43 PM
to openqua...@googlegroups.com
Dear admin,

   if I set intensity_measure_types_and_levels in job.ini, it shows this message:
------------------------------------------------------------------------------------- 
Traceback (most recent call last):
  File "./bin/oq-engine", line 576, in <module>
    main()
  File "./bin/oq-engine", line 489, in main
    log_file, args.exports)
  File "/usr/lib/python2.7/dist-packages/openquake/engine/engine.py", line 352, in run_job
    hazard_calculation_id)
  File "/usr/lib/python2.7/dist-packages/django/db/transaction.py", line 217, in inner
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/openquake/engine/engine.py", line 464, in job_from_file
    oqparam = readinput.get_oqparam(params, calculators)
  File "/usr/lib/python2.7/dist-packages/openquake/commonlib/readinput.py", line 112, in get_oqparam
    oqparam = OqParam(**job_ini)
  File "/usr/lib/python2.7/dist-packages/openquake/commonlib/valid.py", line 753, in __init__
    raise ValueError(doc + '\nGot:\n' + dump)
ValueError: If the IMTs and levels are extracted from the risk models,
they must not be set directly. Moreover, if
`intensity_measure_types_and_levels` is set directly,
`intensity_measure_types` must not be set.
Got:
area_source_discretization=10.0
base_path=/home/kensheu/oq-engine/./CGS1410292015_m0_5.0
calculation_mode=classical
description=2014 TEM_Model (141029: CGS fault model:more period of Sa)
export_dir=/tmp
hazard_maps=True
inputs={'gsim_logic_tree': '/home/kensheu/oq-engine/./CGS1410292015_m0_5.0/gmpe_logic_tree_Han_140505.xml', 'exposure': '/home/kensheu/oq-engine/./CGS1410292015_m0_5.0/exposure_model_test20150625.xml', 'fragility': '/home/kensheu/oq-engine/./CGS1410292015_m0_5.0/structural_fragility_model.xml', 'source_model_logic_tree': '/home/kensheu/oq-engine/./CGS1410292015_m0_5.0/source_model_logic_tree.xml', 'source': ['/home/kensheu/oq-engine/./CGS1410292015_m0_5.0/CGStaiwan_source_model-141027.xml']}
intensity_measure_types_and_levels={'SA(1.0)': [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13], 'SA(0.5)': [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13], 'SA(0.3)': [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13], 'PGA': [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13], 'SA(0.1)': [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13]}
investigation_time=50.0
maximum_distance=200.0
mean_hazard_curves=False
number_of_logic_tree_samples=0
poes=[0.1, 0.02]
quantile_hazard_curves=[]
random_seed=23
reference_depth_to_1pt0km_per_sec=100.0
reference_depth_to_2pt5km_per_sec=5.0
reference_vs30_type=measured
reference_vs30_value=760.0
rupture_mesh_spacing=2.5
truncation_level=2.0
uniform_hazard_spectra=True
width_of_mfd_bin=0.1
-----------------------------------------------------------------------------------------------------------------

On Friday, July 17, 2015 at 6:22:33 PM UTC+8, anirudh.rao wrote:

anirudh.rao

Jul 20, 2015, 2:15:01 AM
to openqua...@googlegroups.com
Once you specify the intensity_measure_types_and_levels directly, you no longer need to provide the fragility model in the job file. Removing the line specifying the path to the fragility_file from your job.ini should resolve this error.
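A minimal sketch of the corresponding change in the job file (the fragility file name is the one from this thread; only one IMT is shown):

```ini
# remove or comment out the fragility model reference:
# fragility_file = structural_fragility_model.xml

# and keep the directly specified levels:
intensity_measure_types_and_levels = {"SA(1.0)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13]}
```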

許銘凱

Jul 20, 2015, 11:56:51 PM
to openqua...@googlegroups.com
Dear admin,

 Thanks a lot for your answer; I can now run the hazard model. But when I run the risk calculation, it shows this message:

The attachment is my job_risk.ini
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
kensheu@kensheu-VirtualBox:~/oq-engine$ ./bin/oq-engine --nd --rr=./CGS1410292015_m0_5.0/job_risk.ini --hazard-calculation-id 192
[2015-07-21 05:49:20,604 hazard job #195 - WARNING MainProcess/22307] The parameter 'steps_per_interval' is unknown, ignoring
Traceback (most recent call last):
  File "./bin/oq-engine", line 576, in <module>
    main()
  File "./bin/oq-engine", line 505, in main
    hazard_calculation_id=args.hazard_calculation_id)
  File "/usr/lib/python2.7/dist-packages/openquake/engine/engine.py", line 352, in run_job
    hazard_calculation_id)
  File "/usr/lib/python2.7/dist-packages/django/db/transaction.py", line 217, in inner
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/openquake/engine/engine.py", line 464, in job_from_file
    oqparam = readinput.get_oqparam(params, calculators)
  File "/usr/lib/python2.7/dist-packages/openquake/commonlib/readinput.py", line 112, in get_oqparam
    oqparam = OqParam(**job_ini)
  File "/usr/lib/python2.7/dist-packages/openquake/commonlib/valid.py", line 753, in __init__
    raise ValueError(doc + '\nGot:\n' + dump)
ValueError: Invalid calculation_mode="classical_risk" or missing
fragility_file/vulnerability_file in the .ini file.
Got:
base_path=/home/kensheu/oq-engine/./CGS1410292015_m0_5.0
calculation_mode=classical_risk
description=Classical PSHA-Based Damage
export_dir=/tmp
inputs={'fragility': '/home/kensheu/oq-engine/./CGS1410292015_m0_5.0/structural_fragility_model.xml'}
investigation_time=100.0
maximum_distance=20.0
region_constraint=POLYGON((119.5 21.5, 122.5 21.5, 122.5 25.5, 119.5 25.5, 119.5 21.5))


On Monday, July 20, 2015 at 2:15:01 PM UTC+8, anirudh.rao wrote:
job_risk.ini

anirudh.rao

Jul 21, 2015, 4:44:08 AM
to openqua...@googlegroups.com
If you intend to run a classical PSHA based risk calculation, the parameter calculation_mode must be set to classical_risk and you will need to provide the path to your vulnerability model using the parameter vulnerability_file.

Whereas if you wish to run a classical PSHA based damage calculation, the parameter calculation_mode must be set to classical_damage and you will need to provide the path to your fragility model using the parameter fragility_file.

Also, please note that the name of the parameter maximum_distance in the risk configuration file has been changed recently to asset_hazard_distance in order to avoid confusion with another parameter of the same name in the hazard configuration file.
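A minimal sketch of a classical damage job file reflecting these points (section header and file name are illustrative; the parameter names are the ones given above):

```ini
[general]
calculation_mode = classical_damage
description = Classical PSHA-Based Damage

# path to the fragility model
fragility_file = structural_fragility_model.xml

# renamed from maximum_distance to avoid confusion with the hazard-side parameter
asset_hazard_distance = 20.0
```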

許銘凱

Jul 21, 2015, 5:22:07 AM
to openqua...@googlegroups.com
Dear admin,
I have tried changing calculation_mode = classical_damage and it shows this message:
--------------------------------------------------------------------------
kensheu@kensheu-VirtualBox:~/oq-engine$ ./bin/oq-engine --nd --rr=./CGS1410292015_m0_5.0/job_risk.ini --hazard-calculation-id 192
Traceback (most recent call last):
  File "./bin/oq-engine", line 576, in <module>
    main()
  File "./bin/oq-engine", line 505, in main
    hazard_calculation_id=args.hazard_calculation_id)
  File "/usr/lib/python2.7/dist-packages/openquake/engine/engine.py", line 352, in run_job
    hazard_calculation_id)
  File "/usr/lib/python2.7/dist-packages/django/db/transaction.py", line 217, in inner
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/openquake/engine/engine.py", line 453, in job_from_file
    check_hazard_risk_consistency(haz_job, params['calculation_mode'])
  File "/usr/lib/python2.7/dist-packages/openquake/engine/engine.py", line 397, in check_hazard_risk_consistency
    expected_mode = RISK_HAZARD_MAP[risk_mode]
KeyError: 'classical_damage'
-------------------

On Tuesday, July 21, 2015 at 4:44:08 PM UTC+8, anirudh.rao wrote:

anirudh.rao

Jul 21, 2015, 8:48:46 AM
to openqua...@googlegroups.com
The classical PSHA based damage calculator was introduced in version 1.4 of the engine, so the calculation will not run on previous versions.

Could you check which version of the engine you are using? You can find out using the command: oq-engine --version

In case you are already running version 1.4, could you please upload your files for both the hazard and risk calculations, so that we can better identify the source of the error?

許銘凱

Jul 21, 2015, 8:53:38 AM
to openqua...@googlegroups.com
Dear admin,
My OpenQuake version is 1.2.2-git; does that mean the calculation cannot run on my installation?

On Tuesday, July 21, 2015 at 8:48:46 PM UTC+8, anirudh.rao wrote:

anirudh.rao

Jul 21, 2015, 9:42:22 AM
to openqua...@googlegroups.com
That is indeed the case. This calculator (classical_damage) is only available starting with v1.4 of the engine and will not run on previous versions. Thus you would need to upgrade your installation if you wish to run this particular calculator.

許銘凱

Jul 22, 2015, 10:47:01 PM
to OpenQuake Users, aniru...@globalquakemodel.org
Dear admin,
  I ran the risk model and it completed successfully; however, the result file dmg-per-asset.csv contains the data shown below.
What could be the reason for this? Is it because my fragility function's intensity measure levels are set to 0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4?
I tried setting the intensity measure levels to 0 0.01 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0, and then dmg-per-asset.csv shows nothing.
By the way, the intensity measure levels in my hazard job.ini are:

intensity_measure_types_and_levels = {"PGA": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(0.1)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(0.3)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(0.5)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(1.0)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13]}

So, does the problem come from setting different intensity measure levels?
--------------------------------------------------------------------------------------------------------   
a1,          NAN,          NAN,          NAN,          NAN,          NAN
a10,          NAN,          NAN,          NAN,          NAN,          NAN
a100,          NAN,          NAN,          NAN,          NAN,          NAN
a1000,          NAN,          NAN,          NAN,          NAN,          NAN
a10000,          NAN,          NAN,          NAN,          NAN,          NAN
a10001,          NAN,          NAN,          NAN,          NAN,          NAN
a10002,          NAN,          NAN,          NAN,          NAN,          NAN
a10003,          NAN,          NAN,          NAN,          NAN,          NAN
a10004,          NAN,          NAN,          NAN,          NAN,          NAN
a10005,          NAN,          NAN,          NAN,          NAN,          NAN
a10006,          NAN,          NAN,          NAN,          NAN,          NAN
a10007,          NAN,          NAN,          NAN,          NAN,          NAN
a10008,          NAN,          NAN,          NAN,          NAN,          NAN
a10009,          NAN,          NAN,          NAN,          NAN,          NAN

On Tuesday, July 21, 2015 at 9:42:22 PM UTC+8, anirudh.rao wrote:

anirudh.rao

Jul 23, 2015, 12:07:30 PM
to OpenQuake Users, kensh...@gmail.com
It is difficult to diagnose without looking at the input files used for the analysis. If possible, could you please provide the input files you used when you obtained these results? If they cannot be publicly disclosed, please send them directly to in...@openquake.org

許銘凱

Aug 13, 2015, 2:05:37 PM
to OpenQuake Users, kensh...@gmail.com
Dear admin,

I sent the files to in...@openquake.org about two weeks ago and haven't received any reply. Is there any problem with my attachments?

On Friday, July 24, 2015 at 12:07:30 AM UTC+8, anirudh.rao wrote:

許銘凱

Aug 28, 2015, 12:39:16 PM
to OpenQuake Users
Dear admin,
  I ran the risk model and it completed successfully; however, the result file dmg-per-asset.csv contains the data shown below.
What could be the reason for this? Is it because my fragility function's intensity measure levels are set to 0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4?
I tried setting the intensity measure levels to 0 0.01 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0, and then dmg-per-asset.csv shows nothing.
By the way, the intensity measure levels in my hazard job.ini are:

intensity_measure_types_and_levels = {"PGA": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(0.1)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(0.3)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(0.5)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13],"SA(1.0)": [0.005, 0.0098, 0.0192, 0.0376, 0.0738, 0.145, 0.284, 0.556, 1.09, 1.52, 2.13]}

So, does the problem come from setting different intensity measure levels?
--------------------------------------------------------------------------------------------------------   
a1,          NAN,          NAN,          NAN,          NAN,          NAN
a10,          NAN,          NAN,          NAN,          NAN,          NAN
a100,          NAN,          NAN,          NAN,          NAN,          NAN
a1000,          NAN,          NAN,          NAN,          NAN,          NAN
a10000,          NAN,          NAN,          NAN,          NAN,          NAN
a10001,          NAN,          NAN,          NAN,          NAN,          NAN
a10002,          NAN,          NAN,          NAN,          NAN,          NAN
a10003,          NAN,          NAN,          NAN,          NAN,          NAN
a10004,          NAN,          NAN,          NAN,          NAN,          NAN
a10005,          NAN,          NAN,          NAN,          NAN,          NAN
a10006,          NAN,          NAN,          NAN,          NAN,          NAN
a10007,          NAN,          NAN,          NAN,          NAN,          NAN
a10008,          NAN,          NAN,          NAN,          NAN,          NAN
a10009,          NAN,          NAN,          NAN,          NAN,          NAN

On Friday, July 17, 2015 at 4:29:24 PM UTC+8, Michele Simionato wrote:

anirudh.rao

Sep 1, 2015, 7:44:51 AM
to OpenQuake Users
Dear user,

Thank you for your message.

The problem appears to be the conversion of the hazard curves from probabilities of exceedance to annual rates of exceedance, since many of the 50-year exceedance probabilities in the hazard curves for your calculation are 1.0. The conversion equation from probabilities to rates based on the Poisson assumption, λ = –ln(1–p), breaks down at p=1.0. Some suggestions to help avoid running into this problem:

1) Remove the very low intensity levels and the corresponding damage state probabilities from the fragility model, or set the noDamageLimit to a higher intensity level. For instance, in your fragility model, the intensity levels 0.005g and 0.0098g may be safely removed without affecting the final results. An alternative is to set the noDamageLimit to 0.05g instead of 0.005g.
2) Reduce the investigation_time in the hazard calculation job file from 50 years to 1 year. Instead, you can set the parameter risk_investigation_time in the risk job file to 50 years to get the 50 year damage state probabilities.

With these two modifications, your computation should run without problems. Meanwhile, we're adding a clear and helpful error message when this situation with 1.0 probabilities of exceedance in the hazard curves is encountered. We're also investigating the option of extracting the annual rates of exceedance directly from the hazard calculation instead of using the Poissonian conversion equation to derive them from the probabilities.
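The breakdown can be reproduced in a few lines (a sketch of the Poissonian conversion itself, not of the engine's internal code):

```python
import numpy as np

def poe_to_annual_rate(p, investigation_time=50.0):
    """Convert a probability of exceedance over the investigation time
    to an annual rate via the Poisson assumption: lam = -ln(1 - p) / t."""
    with np.errstate(divide='ignore'):  # log(0) at p = 1.0 is expected here
        return -np.log(1.0 - np.asarray(p, dtype=float)) / investigation_time

rates = poe_to_annual_rate([0.99, 1.0, 1.0])
print(rates)                 # the p = 1.0 entries become inf
print(rates[1] - rates[2])   # inf - inf -> nan, hence the NaN damage output
```

Once a rate is infinite, any subsequent arithmetic on it (for instance differencing rates between adjacent intensity levels) yields NaN, which is consistent with the NAN columns in dmg-per-asset.csv above.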