design return period


chiara....@gmail.com

May 23, 2013, 4:42:25 AM
to reliabil...@googlegroups.com
Hi Steve,
we spoke a lot during the course about the famous 475-year return period, which is used all over the world even though it was calibrated for California only. Are there references that explain how it was derived, and what would be the right method to follow when designing in other countries?
Thanks a lot
Chiara

Steve W

May 23, 2013, 12:18:36 PM
to reliabil...@googlegroups.com
Chiara,

Thanks for the question. I think Paolo is a lot closer to this return-period problem in the seismic context, so I'll ask him to respond to it specifically. For wind/wave loads on offshore structures, design practice is to use the 100-year load, multiplied by a conservative load factor on the order of 1.3. This leads to inconsistent reliability levels across the world -- e.g., higher reliability in the North Sea than in the Gulf of Mexico -- due to the different statistical behavior of the load PDF tails in these regions. Design against (onshore) wind typically uses the 50-year event, again with a load factor of about 1.3. There is a push in some places to add an additional code check of the 10,000-year event, with no conservative load factor. This would presumably be the "right" thing to do if you wish to achieve a failure probability no worse than 1/10,000 per year. It requires, of course, that the load statistics be specified out further -- to a return period of 10,000 rather than 100 years -- but that seems a more honest approach than hiding behind a mysterious factor like 1.3 that can't be universally valid.
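The arithmetic behind these return periods is easy to check. A minimal sketch (the 20-year design life below is a hypothetical value for illustration, not a number from the thread), assuming independent years:

```python
def annual_exceedance_prob(return_period_yr: float) -> float:
    """Annual probability that the T-year event is exceeded."""
    return 1.0 / return_period_yr

def lifetime_exceedance_prob(return_period_yr: float, life_yr: float) -> float:
    """Probability of at least one exceedance of the T-year event
    during a design life, assuming independent years."""
    return 1.0 - (1.0 - 1.0 / return_period_yr) ** life_yr

# 100-year load over a hypothetical 20-year platform life:
print(lifetime_exceedance_prob(100, 20))   # about 0.18

# The proposed 10,000-year check corresponds to an annual exceedance
# probability of 1e-4, i.e. a failure probability no worse than 1/10,000 per year
# (if the structure fails exactly when that load is exceeded).
print(annual_exceedance_prob(10_000))      # 0.0001
```

The point of the sketch is that a fixed return period only pins down the annual exceedance probability of the *load*; the resulting *failure* probability still depends on the tail shape and the load factor, which is why a single 1.3 cannot be universally valid.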

paolo bazzurro

May 24, 2013, 2:59:17 AM
to Steve W, reliabil...@googlegroups.com
Hi all,

Unfortunately I cannot be of much help here, because I am unaware of a paper that explains how the 475-year level came about. I had several discussions with Allin Cornell about its genesis. The main idea behind that work was a calibration of the return period to hazard levels that were already generally in tune with the pre-existing acceleration design levels in building codes. Of course, the 475-year acceleration close to most faults was much higher than the level adopted by the code and, hence, the code people applied a magic 2/3 fudge factor to reduce the design level where the 475-year rule was too high to bear for them. Note that in most places in CA the hazard computation at these return periods is controlled by the high rate of occurrence of earthquakes rather than by the uncertainty in the GMPE. This is, however, NOT true in many other less seismic places, including the Eastern US and Italy. Given what the premises for its selection were, there is no reason whatsoever that the 475-year rule would make sense anywhere else in the world.
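As context (a commonly cited piece of arithmetic, not something stated in the thread): the 475-year figure is the return period implied by a 10% probability of exceedance in a 50-year design life, under a Poisson (independent-year) model. A quick check:

```python
import math

def return_period(p_exceed: float, life_yr: float) -> float:
    """Return period implied by probability p_exceed of at least one
    exceedance in life_yr years (Poisson / independent-year model)."""
    return -life_yr / math.log(1.0 - p_exceed)

print(return_period(0.10, 50))  # about 474.6, rounded to the familiar 475
```

Note that this only explains where the round number comes from; as Paolo says, the 10%-in-50-years target itself was calibrated against pre-existing Californian design levels, which is exactly why it does not transfer elsewhere.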

The way to go, of course, is towards a risk-consistent design level, not a hazard-consistent one. The work by Nico Luco at USGS goes right in that direction, and I believe it has recently been adopted by one of the guidelines.

Cheers

Paolo


--
You received this message because you are subscribed to the Google Groups "ReliabilityatRose" group.