Verification and Validation Report for Calculations Performed for the Thyspunt PSHA and Compliance with RD-0016
J.J. Bommer, F.O. Strasser, E.M. Rathje, A. Rodriguez-Marek, P.J. Stafford, and E. Hattingh
Council of Geoscience Report Number 2013-0032 (Rev. 0)
Confidential
The downloaded document is uncontrolled; therefore the user must ensure that it conforms to the authorised database version.
DOCUMENT APPROVAL SHEET
REFERENCE: CGS Report 2013-0032 (Eskom)
REVISION: 0
COPY No.:
TITLE: Verification and Validation Report for Calculations Performed for the Thyspunt PSHA and Compliance with RD-0016
DATE OF RELEASE: 21 May 2013
CONFIDENTIAL

REVISION | DESCRIPTION OF REVISION | DATE | MINOR REVISIONS | APPROVAL

COMPILED BY: J.J. Bommer, F.O. Strasser, P.J. Stafford, E.M. Rathje, A. Rodriguez-Marek, E. Hattingh
ACCEPTED BY: N. Keyser
AUTHORISED BY: G. Graham
Table of Contents

Acknowledgements iii
1. INTRODUCTION 1
2. SCOPE 2
2.1. Defining Validation and Verification (V&V) 2
2.2. RD-0016 Requirements and Intent 3
2.3. The Role of PSHA in Safety Calculations 4
3. DEFINITION of SEISMIC ACTIONS for DESIGN 6
3.1. Earthquake Processes and Variability 6
3.2. Probabilistic Seismic Hazard Analysis 13
3.3. Uncertainty and Logic-Trees 17
3.4. The SSHAC Level 3 Process 20
3.5. QA, V&V and the SSHAC Level 3 Process 22
4. V&V for SSC MODEL CALCULATIONS 25
4.1. Mmax Calculations 25
4.2. Recurrence Calculations 27
5. V&V for GMC MODEL for BEDROCK MOTIONS CALCULATIONS 32
5.1. Host GMPE Implementations 33
5.2. IRVT-Generated Fourier Spectra 36
5.3. Application of Vs-Kappa Adjustments 39
5.4. Sigma Values 42
6. V&V for SITE RESPONSE CALCULATIONS 49
6.1. STRATA Software for Site Response Calculations 49
6.2. Convolution of Hazard Calculations with Site Amplification Factors 52
7. V&V for HAZARD CALCULATIONS 53
7.1. The Nature of Checks on PSHA Calculations 53
7.2. Choice of FRISK88 Software 54
7.3. Implementation of GMC Model 56
7.4. Implementation of SSC Model 57
7.5. Checks on Pre-processing Steps for Hazard Inputs 63
7.6. Setup of ATTENDLL 74
7.7. Monitoring of Runs 78
7.8. File Transfers and Integrity Checks 80
7.9. Checks on Post-processing of Hazard Results 81
REFERENCES 84
APPENDIX A: Supplementary Files 87
Acknowledgements
The authors of this report are indebted to many individuals who assisted in one way or another
with the calculations and checks reported herein. The most notable contributions in this respect
are as follows:
• Marinda Havenga and Vunganai Midzi, for their work as members of the Hazard
Calculation Team, for assisting with the checks on the FRISK88 input and output files
• Dr John Douglas of BRGM, France, for his work as a Specialty Contractor assisting with
many checks related to the implementation of the GMC and SSC models in the hazard
software, and for checking the MatLab routines for post-processing of the hazard results
• Dr Marco Pagani and Dr Damiano Monelli of the Global Earthquake Model (GEM)
Foundation for their hard work related to the checking of the model implementations
using their state-of-the-art seismic hazard code in OpenQuake; special thanks are also
due to Marco Pagani for travelling to South Africa to participate in Workshop #3 and to
London for meetings to plan the hazard implementation checks
• Dr Rui Pinho and Dr Helen Crowley, Secretary General and Assistant Secretary General
of the GEM Foundation, for their support for the contribution made to the Thyspunt
project through the application of the OpenQuake software
1. INTRODUCTION
As part of the characterisation of the Thyspunt site for the preparation of a license application to
the National Nuclear Regulator (NNR), a probabilistic seismic hazard analysis (PSHA) has been
conducted for the site by the Council for Geoscience (CGS), assisted by an international team
of specialist consultants. This report presents the validation and verification exercises
undertaken for the calculations conducted within the PSHA project, in order to demonstrate
compliance with the NNR Requirements Document Requirements for Authorisation
Submissions Involving Computer Software and Evaluation Models for Safety Calculations (RD-
0016; NNR, 2006). The report takes cognisance of the guidance provided in the Eskom
interpretation document Specification for Validation and Verification Tasks for Simulation
Models Used in Nuclear Siting (NSIP02761; Eskom, 2013) but is focused primarily on
addressing the requirement of RD-0016.
Rather than entering directly into a presentation of tasks executed to address the requirements
listed in RD-0016, the report first presents the scope of the exercise. In particular, Chapter 2
discusses how the PSHA study relates to both the specific requirements and the general
intentions of RD-0016. The key distinction made therein is that the outcome of the PSHA study
provides input to safety-related calculations, but it is not a safety calculation in itself. This does
not diminish its importance, however, nor the need to ensure that there is confidence in the
seismic design actions that are defined using the output from the seismic hazard analysis.
Chapter 2 concludes by identifying the ways in which the procedures followed in the Thyspunt
PSHA address the need to provide assurance regarding the numerical outputs.
Chapter 3 then provides an overview of how the PSHA study addresses this need, which
includes some explanations of the processes involved. These are provided because the study
relates to a highly specialised discipline, which differs in several respects from routine
engineering calculations. As well as explaining the PSHA process, Chapter 3 also presents the
SSHAC Level 3 framework within which the PSHA was conducted, which is critical to
understanding the treatment of uncertainties in the study. The chapter concludes with an
overview of how Quality Assurance (QA) and Validation and Verification (V&V) relate to the
SSHAC Level 3 process, specifically distinguishing between information that informs the
decision-making process by the expert teams in developing the inputs to the PSHA calculations,
and the numerical values obtained from calculations that have a direct impact on the PSHA
calculations. The remaining chapters of the report then document the V&V for the calculations
falling into the second category.
2. SCOPE
This section addresses the authors’ understanding of the V&V requirements of RD-0016, as
they apply to the Thyspunt PSHA. The purpose of this section is to explain to the reader the
ways in which this document is intended to meet the specific requirements of RD-0016, where
this is appropriate, and also how it intends to satisfy the broader intent of that document. This is
not an attempt, in any shape or form, to sideline the importance of V&V with regard to the
calculation of seismic hazard, but rather to illustrate the full extent to which the Thyspunt PSHA
project has addressed the issues and concerns that motivate the requirement for validation and
verification of all calculations related to the safety of nuclear installations.
2.1. Defining Validation and Verification
The NNR document RD-0016 defines validation as follows (NNR, 2006; p.5):
“The evidence that demonstrates that the calculation method is fit for its purpose. When
calculating physical processes it may mean showing that the calculation is bounding with
a suitable degree of confidence rather than a best estimate.”
This is interpreted to mean that in all parts of the Thyspunt PSHA, appropriate types of
calculations have been adopted. The arguments to support these choices made in this project—
which in most cases are simply global best practice but in some instances actually relate to
state-of-the-art developments—are presented in the following chapters.
Verification is defined in the following way in RD-0016 (NNR, 2006; p.5):
“The process of ensuring that the controlling physical equations have been correctly
translated into software coding, or in the case of hand calculations, correctly
incorporated into the calculation procedure. For the purposes of this document
verification is taken to be part of the validation submission.”
This is interpreted to imply that measures are taken to demonstrate that the selected calculation
approach is correctly implemented in the software used for the project. In this report,
explanations are given regarding the confidence that can be provided that this is the case for all
of the software, including both commercial packages and in-house programs, that have been
used to obtain numerical output that is part of the PSHA. In addition, the following chapters
report the various checks that were conducted to demonstrate that the input data were correctly
entered into the calculations. This is particularly important for the actual PSHA calculations, since
the input parameters are very complex, as is made clear from the discussions in Section 3.3.
The different ways that the inputs to the PSHA calculations were checked are presented in
Chapter 7 of this report.
2.2. RD-0016 Requirements and Intent
The principal requirements of RD-0016 are presented in overview form in Section 3 of that
document, and are summarised here as follows:
1. Information about computer software and evaluation models used for safety
calculations; an evaluation model is defined as “a calculation framework consisting of
one or more calculation models and specific inputs used to model specific system
behaviour under certain conditions and linked to specific Safety Case assessment(s)
and/or objective(s).”
2. Demonstration that all models used are robust and have been directly or indirectly
benchmarked against experimental data.
3. A complete technical description of each evaluation model, sufficient to permit technical
review of the calculation procedure, its implementation, and the parameter values
employed.
4. Demonstration of solution convergence for each calculation (for example, in terms of
time steps in dynamic calculations).
5. Sensitivity studies for the influence of variations in calculation features (such as time
steps) and justification for the choices made.
6. Comparisons of empirical models and correlations with relevant data. For predictions
from the entire evaluation model, the comparisons must be made with applicable
experimental data. For an evaluation model for the behaviour of a plant system during a
postulated accident using one or more computer programs, the overall program
behaviour needs to be checked against results from standard problems or benchmarks.
Review of these requirements, particularly items #2 and #6 referring to experimental data and,
in the latter case, plant system response, strongly suggests that the focus of RD-0016 is on
calculations of the behaviour of systems, structures and components in nuclear plants. This is
fully understandable, since such calculations are ultimately the basis for a safety case. The
Thyspunt PSHA does not involve any calculations related to the proposed nuclear power plant
at the site or any of its components, and indeed is completely independent of the chosen
technology for the reactor units. The PSHA rather provides input into calculations of structural
response under the specific case of earthquake loading. The distinction is important for a
number of reasons, one being that seismic hazard results cannot be validated against
experimental data or other observations, as explained in more detail in Section 7.1. The
guidance in RD-0016 suggests that for the calculations with which it is primarily concerned,
empirical validation is feasible, as demonstrated, for example by the following text from Section
4.1.5 of that document (NNR, 2006; p.9):
“One way of gaining calculation validation evidence is by analysing experiments and
comparing the predictions of the software (or other calculation tool) against experimental
results.”
In passing, it may be noted here that RD-0016 gives no indication of the
expected levels of accuracy and precision for calculations related to extreme events. The
document refers to limiting physical behaviour of the systems, structures and components
comprising a nuclear power plant, but might also be interpreted to have some implications for
calculations of earthquake-induced ground shaking of very low probability, which is also an
extreme case insofar as it lies far beyond available observations. In this regard, the
following text, from the Scope of RD-0016, is noted (NNR, 2006; p.6):
“Calculations of severe accident conditions may involve predictions of extreme physical
behaviour and the calculation methods used are not so amenable to rigorous validation.
Nevertheless, any validation submissions for severe accident calculation methods
should conform in a general way to the requirements given in this document.”
In the case of PSHA, at the annual probability levels that are used to define the design basis
earthquake loading (from 10^-4 to 10^-5), it is not that the calculation methods employed break
down or become unstable, but rather that the calculations are based on extreme extrapolations
of available observations and empirical data. In this regard, as explained in Section 2.3 below,
one of the key challenges in a PSHA for a safety-critical facility is to ensure that the hazard
estimates take full account of the uncertainties associated with their calculation.
2.3. The Role of PSHA in Safety Calculations
Although the calculations conducted within the PSHA are not directly related to plant behaviour,
they clearly do have implications for the safety levels claimed or inferred from such structural
response calculations. If the level of seismic loading is underestimated, then calculated safety
margins may be unsafely overestimated, and therefore it is imperative that the calculations of
earthquake loading provide an adequate degree of assurance regarding their likelihood of being
exceeded. In the characterisation of earthquake hazard at the sites of nuclear power plants, this
concern is addressed in three different ways, all of which are applicable to the Thyspunt study
to which this report relates:
• A probabilistic rather than deterministic approach is adopted for calculating the design
basis ground motions. The use of PSHA is implicitly required by the specification of
target annual exceedance frequencies in other NNR documents (e.g., RD-0024; NNR,
2008), and it is generally preferred because it allows a risk-informed approach to
earthquake-resistant design. As explained in Section 3.2, PSHA allows an appropriate
level of ground acceleration to be adopted for design on the basis of its associated
probability of being exceeded. This feature of PSHA contrasts with deterministic
approaches that provide estimates of ground motion of unknown likelihood, and which in
practice are often dangerously unconservative.
• As is explained in Section 3.2, PSHA takes account of the inherent variability in the
occurrence and characteristics of future earthquakes, but consideration also needs to be
given to the uncertainty associated with models developed to represent the earthquake
processes that could affect a site (Section 3.3). This epistemic uncertainty must be
identified, quantified and incorporated into the calculations so that its full influence on the
hazard estimates can be accounted for in the definition of the design basis ground
motions. In this regard, a noteworthy feature of the Thyspunt study is the adoption of the
state-of-the-art approach to dealing with uncertainty in PSHA, namely the SSHAC
Level 3 process (Section 3.4).
• The calculations must be performed correctly. This means that an appropriate method of
calculation is selected, and this method is appropriately implemented into the software
employed. As was noted in Section 2.1, it also means that the parameter values were
correctly entered into the programs, and that all the results were processed without error.
Chapters 4, 5, 6 and 7 of this report all address the steps that were taken to ensure that
the calculations were made correctly, with a particular focus on the calculations from
which numerical output directly influences the final seismic hazard results and the
design basis ground motions (Section 3.5).
3. DEFINITION of SEISMIC ACTIONS for DESIGN
The purpose of this section is to provide background on the way seismic actions for design and
analysis of nuclear facilities are assessed using probabilistic seismic hazard analysis (PSHA).
After a brief overview of earthquake and ground-motion processes in the first section, the
second section introduces the mechanics of PSHA calculations and how these account for all
sources of natural variability in these processes. This includes an explanation of the
shortcomings of deterministic seismic hazard analysis (DSHA) and why that approach often fails
to provide genuinely ‘conservative’ estimates of the earthquake hazard. The next section then
introduces the nature of epistemic uncertainties in the inputs to PSHA and how these are
generally handled through a logic-tree. This third section also explains the role of expert
judgement in the quantification of epistemic uncertainty. This leads to the SSHAC Level 3
process as a formal framework for conducting such assessments with multiple experts, which is
described in the fourth section. The emphasis in that description is how the process inherently
includes review and ensures that uncertainties are effectively captured. The fifth and final
section discusses the relationship between QA, V&V and the SSHAC process.
3.1. Earthquake Processes and Variability
Natural earthquakes (as distinct from those generated by anthropogenic causes such as
mining) are the result of the rupture of geological faults; possible exceptions to this statement
are volcanic earthquakes associated with the movement of magma and very deep earthquakes
in subduction zones that are thought to be associated with phase changes. When an
earthquake occurs, the rupture initiates at a point on the fault (called the focus or hypocentre)
and then propagates at high velocity (2-3 km/s) along the fault. As the fault breaks, the
surrounding rocks of the Earth’s crust relax, releasing stored strain energy which radiates away
from the fault in the form of seismic waves. The size of an earthquake is measured by
magnitude or seismic moment (the latter generally being expressed as a moment magnitude),
which is proportional to the seismic energy released. The length of the fault rupture associated
with the earthquake also increases with the magnitude, as does the slip on the rupture
(Figure 3.1).
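The magnitude scaling in Figure 3.1 can be sketched numerically. The coefficients below are the commonly quoted all-slip-type regressions of Wells & Coppersmith (1994), reproduced from the published paper as a hedged illustration; they should be checked against the original source before any use beyond this sketch, and the figures in this report remain authoritative.

```python
import math

def wc94_rupture_length_km(magnitude):
    """Median surface rupture length (km) from the Wells & Coppersmith (1994)
    all-slip-type regression: log10(SRL) = -3.22 + 0.69 * M."""
    return 10.0 ** (-3.22 + 0.69 * magnitude)

def wc94_average_slip_m(magnitude):
    """Median average displacement (m) from the all-slip-type regression:
    log10(AD) = -4.80 + 0.69 * M."""
    return 10.0 ** (-4.80 + 0.69 * magnitude)

# Both rupture length and slip grow rapidly with magnitude
for m in (5.0, 6.0, 7.0):
    print(m, round(wc94_rupture_length_km(m), 1), round(wc94_average_slip_m(m), 2))
```

For a magnitude 7 event these relations give a median rupture length of roughly 40 km and average slip of about a metre, consistent with the trends shown in Figure 3.1.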
Geological investigations, sometimes supplemented by historical studies when a large
earthquake has occurred in the relatively recent past, can reveal the size, location and
approximate timing of co-seismic displacements on faults, from which the magnitude and
average recurrence intervals for earthquakes on that structure can be estimated. These can be
expressed as recurrence relationships, which for individual faults will generally conform to the
maximum magnitude model or the characteristic earthquake model (Figure 3.2.).
Figure 3.1. Median values of predicted rupture lengths (left) and fault slip (right) as a function of
magnitude obtained from the empirical relationships of Wells & Coppersmith (1994).
Figure 3.2. Earthquake recurrence relationships for faults expressed in discrete (upper row) and
cumulative (lower row) forms: maximum magnitude model (left) and characteristic earthquake model (right) (Bommer & Stafford, 2008).
As the seismic waves propagate away from the earthquake source, many of the ray paths are
reflected and refracted towards the ground surface, where they can cause strong shaking at
sites located within a few tens of kilometres of the fault rupture (and at greater distances if there
are very soft soils that can amplify the motion). Recordings of the ground shaking can be
obtained on seismic instruments such as accelerographs which display the acceleration of the
ground as a function of time (Figure 3.3). From such recordings, the values of parameters such
as peak ground acceleration (PGA) and response spectra of acceleration (Figure 3.4) can be
calculated. These parameters characterise the ground shaking in a quantitative manner that
reflects the damage potential of the motion for different types of structures, and they can also
be used as input to the seismic design and analysis of systems, structures and components.
Figure 3.3. Strong-motion accelerogram.
Figure 3.4. Acceleration response spectra (for 4 damping levels) for the accelerogram in Figure 3.3.
Taking large numbers of such recordings, empirical equations can be derived to predict
selected ground-motion parameter values as a function of magnitude (M), distance (R), site
classification (S) and other characteristics of the earthquake and the environment. The data
always display a large scatter around the curves representing these equations (Figure 3.5),
which arises because even the most complex ground-motion prediction equations (GMPEs) are
very simple representations of very complex processes. With respect to the predictions obtained
from any GMPE, the scatter appears to be random, for which reason it is generally referred to
as aleatory variability.
Figure 3.5. PGA values recorded during the magnitude 6 Parkfield, California, earthquake of September
2004, and predicted median values from a Californian GMPE.
The scatter is characterised by the residuals, δ, which are calculated as the observed value
minus the predicted value (actually the logarithmic values since the regressions are always
performed on the logarithmic values of acceleration). The logarithmic residuals are generally
found to fit a standard normal or Gaussian distribution that can be fully characterised by a
standard deviation, which is often referred to as sigma because it is generally represented by
the Greek letter σ. The general form of a GMPE for a ground-motion parameter such as PGA is:
log(PGA) = f(M, R, S) + δ = f(M, R, S) + εσ (3.1)
where ε is the number of standard deviations above (or below, if negative) the logarithmic mean.
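The role of ε and σ in Equation (3.1), and the exceedance probabilities quoted in this section, can be illustrated with a short numerical sketch. The median value and σ below are arbitrary placeholders, not values from any real GMPE; only the treatment of the standard normal residual follows the text.

```python
import math

def exceedance_probability(epsilon):
    """P(log PGA > f + epsilon * sigma) for a standard normal residual:
    the complement of the Gaussian CDF evaluated at epsilon."""
    return 0.5 * math.erfc(epsilon / math.sqrt(2.0))

log10_median = math.log10(0.2)  # placeholder median PGA of 0.2 g
sigma = 0.25                    # placeholder sigma in log10 units

# eps = 0 gives the median, exceeded with probability 0.5;
# eps = 1 gives the median-plus-one-sigma level, exceeded with probability ~0.16,
# which for sigma = 0.25 is a factor of 10**0.25 (roughly 80%) above the median.
pga_84 = 10.0 ** (log10_median + 1.0 * sigma)
print(exceedance_probability(0.0), exceedance_probability(1.0), pga_84 / 0.2)
```

This reproduces the statement in the text that the median has a 0.5 probability of being exceeded while the plus-one-sigma level, some 80% higher for many GMPEs, has a probability of about 0.16.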
The standard deviation is as much a part of the GMPE as the coefficients for predicting the
median values of the ground-motion parameter, and these equations should be thought of not
as predicting unique values but rather of predicting probabilistic distributions of, for example,
PGA values (Figure 3.6). For a given site and a specific earthquake (in terms of magnitude and
distance from the site), the median values of PGA from a GMPE will have a probability of 0.5 of
being exceeded. The mean-plus-one-standard-deviation value of PGA, which would be about
80% greater for many GMPEs, would have a probability of just 0.16 of being exceeded.
Figure 3.6. Predicted PGA values on rock sites at different distances for a magnitude 7 earthquake from the same Californian GMPE shown in Figure 3.5. The thick line represents the median values; the shaded
band shows the interval for ±σ; there is a probability of about two-thirds that observed values of acceleration will fall within this band. Both panels show identical information, plotted on linear (left) and
logarithmic (right) axes.
We can now consider a very simple case for which an estimate is required of potential
earthquake actions (vibratory ground motions) to be considered in the analysis or design of a
critical facility. Imagine the site of the project is located about 1 km away from a geological fault
on which the seismic activity is well represented by the maximum magnitude model (Figure 3.2),
with earthquakes of about magnitude 7 occurring, on average, once every 450 years with
almost no activity in between, and where other sources of earthquakes are sufficiently remote
for the hazard at our site to be dominated by the nearby fault. In passing, it may be noted that
this is roughly the situation for the new Pacific locks for the expanded Panama Canal and the
Pedro Miguel fault (Rockwell et al., 2011). Now, to assess the appropriate levels of design
ground motion at the site, the magnitude in this case is known (it is the characteristic value), but
the timing and location (along the fault) of the next earthquake are unknowns that may be
considered as random variables. However, it would not be unreasonable to assume that the
earthquake will occur during the useful life of the engineering structure under consideration, and
to assume that the rupture will include the closest location of the fault to the site. Knowing the
site conditions, we then have unique values of M, R and S that can be entered into the GMPE
to predict values of the chosen ground-motion parameter. However, because the GMPE
predicts a probabilistic distribution of ground motions rather than a unique value, we also need to
choose a value of ε (i.e., how many standard deviations from the logarithmic mean). Ignoring
this issue means taking a value of 0, and then predicting accelerations that would have a 50%
chance of being exceeded in the event of this earthquake, and the owner and/or regulator
would need to make a judgement as to whether this would be sufficiently safe. Alternatively,
positive values of ε could be chosen, higher values yielding greater values of acceleration with
lower probabilities of being exceeded. The decision regarding the appropriate ground-motion
level demonstrates that even in such a case that seems to lend itself perfectly to the application
of DSHA, probabilistic considerations are unavoidable.
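The unavoidable probabilistic considerations in this scenario can be made concrete. Assuming Poisson occurrence, the 450-year recurrence interval from the text implies roughly a 10% chance of the characteristic event during a structure's life; the 50-year life and the combination with the ε choice below are illustrative assumptions, not project values.

```python
import math

recurrence_years = 450.0  # characteristic-event recurrence, from the text
life_years = 50.0         # assumed useful life, for illustration only

# Poisson probability of at least one characteristic event during the life
p_occurrence = 1.0 - math.exp(-life_years / recurrence_years)

def p_exceed_given_event(epsilon):
    """Probability that the motion exceeds the epsilon-sigma design level,
    given that the earthquake occurs (standard normal residual)."""
    return 0.5 * math.erfc(epsilon / math.sqrt(2.0))

# Higher epsilon buys a lower chance that the design level is exceeded
for eps in (0.0, 1.0, 2.0):
    print(eps, round(p_occurrence * p_exceed_given_event(eps), 4))
```

The product of the occurrence probability and the conditional exceedance probability is precisely the kind of quantity that the decision-maker implicitly trades off when choosing ε, which is the point the text makes about DSHA.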
If the seismic activity associated with the nearby fault were better described by the
characteristic earthquake distribution (Figure 3.2), the decision-making process becomes
slightly more complicated. The distance may still be fixed, but we might need to consider the
possible impact of earthquakes that are smaller than the characteristic event but which, by virtue of
their shorter recurrence intervals are actually more likely to happen during the useful life of the
structure or facility. One could argue that the ‘conservative’ approach would be to continue to
consider only the largest earthquake, but the decision regarding the ε value still needs to be
made. Since this relates to a probability level, there is a rational basis for the decision, but this
may be undermined to some extent if one is ignoring the probability of occurrence of the
earthquake itself.
In the vast majority of applications, the situation regarding the sources of future earthquakes is
far less well defined, and even if seismogenic faults are mapped and characterised, many
earthquakes will not be associated with these active structures. There are many reasons why it
is difficult to unambiguously associate all observed earthquakes with known geological faults,
including the fact that the epicentre (the point on the Earth’s surface directly above the hypocentre)
is inevitably located with an uncertainty of a few kilometres, and focal depths carry even greater
uncertainty. These location uncertainties increase as we go back in time and look at older
earthquakes, especially those which occurred before the advent of instrumental seismology
around the beginning of the 20th Century. At the same time, the rupture dimensions and
associated slip of smaller earthquakes are such that these can easily go undetected (Figure
3.1), to which we need to add that the ruptures may be offshore, entirely buried within the crust,
or obscured by vegetation or surface processes such as erosion and agriculture. Since not all
seismic activity can be assigned to known faults, area seismic sources are always defined, in
which earthquakes may be considered equally likely to occur at any location, and recurrence
relationships take the form of the classic Gutenberg-Richter model, truncated at a value
considered to be the physical upper limit for magnitudes within that source zone (Figure 3.7). A
typical Seismic Source Characterisation (SSC) model will therefore consist of both fault and
area sources, as is the case for the Thyspunt site characterisation study (Figure 3.8).
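A truncated Gutenberg-Richter recurrence model of the kind shown in Figure 3.7 can be sketched as follows, using the standard doubly truncated exponential form; the a- and b-values and the truncation magnitude are arbitrary illustrative choices, not Thyspunt SSC parameters.

```python
import math

def truncated_gr_rate(m, a, b, m_min, m_max):
    """Annual rate of earthquakes with magnitude >= m for a Gutenberg-Richter
    relation truncated at m_max (doubly truncated exponential form)."""
    if m >= m_max:
        return 0.0  # no events above the physical upper limit
    beta = b * math.log(10.0)
    rate_min = 10.0 ** (a - b * m_min)  # rate of events at or above m_min
    num = math.exp(-beta * (m - m_min)) - math.exp(-beta * (m_max - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return rate_min * num / den

# Cumulative rates fall off exponentially and vanish at the truncation magnitude
for m in (4.5, 5.0, 6.0, 6.9):
    print(m, truncated_gr_rate(m, a=3.0, b=1.0, m_min=4.0, m_max=7.0))
```

The truncation forces the cumulative rate to zero at the magnitude considered the physical upper limit for the source zone, which is the feature that distinguishes Figure 3.7 from an unbounded Gutenberg-Richter relation.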
Figure 3.7. Earthquake recurrence relationships for area sources expressed in discrete (left) and
cumulative (right) forms (Bommer & Stafford, 2008).
Figure 3.8. Seismic source characterisation for the seismic hazard analysis for the Thyspunt site (red star). There are five area sources (ECC, KAR, CK, NAM and SYN) and five fault sources (purple lines).
3.2. Probabilistic Seismic Hazard Analysis
Once the Seismic Source Characterisation model is defined and an appropriate GMPE selected,
a DSHA would seek to estimate the ground motions at the site resulting from single
earthquakes for each seismic source, and then adopt the most severe of these as the basis for
the design motions. The approach is usually to adopt the largest possible earthquake size
(sometimes referred to as the Maximum Credible Earthquake) and then to place this event at
the closest possible location to the site, and calculate the resulting ground motions at the
median or 84-percentile levels. The claim of proponents of DSHA is that this approach
envelopes uncertainties by defining the ‘worst-case scenario’, which would be consistent with
the ‘Biased Calculations’ approach envisaged in Clause 4.1.9 of RD-0016 (NNR, 2006), which
states that “Uncertainties in the representation of important physical processes may be such
that pessimistic models of these processes are deliberately built into the calculation procedure.”
However, the specification in the clause also states that “conservatism shall mean that the
calculated relevant safety parameters...are biased on the conservative side throughout the
calculation” [emphasis added]. In terms of seismic hazard analysis, this would mean placing the
largest earthquake considered physically possible in the ECC source zone directly beneath the
site (i.e., at zero distance) and then calculating the resulting ground motions using a value of at
least 3 or 4 for ε. The resulting levels of motion would very probably be prohibitively high, which
is one of the reasons that in practice DSHA always moves away from this extreme case (the
actual worst-case scenario) and places the scenario earthquake at some distance from the site,
using a value no higher than 1 for ε. The level of conservatism of the resulting ground
motions is therefore unknown and the level of safety that will result from taking this as the
design basis is difficult to assess.
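The effect of ε can be illustrated numerically: ground-motion residuals are lognormally distributed about the GMPE median, so a motion ε standard deviations above the median equals the median multiplied by exp(ε·σ). The sketch below is illustrative only; the σ value and the median PGA are invented and are not taken from the Thyspunt GMC model.

```python
import math

def ground_motion(median_g, epsilon, sigma_ln=0.6):
    """PGA at epsilon logarithmic standard deviations above the median,
    assuming lognormally distributed GMPE residuals (sigma_ln is illustrative)."""
    return median_g * math.exp(epsilon * sigma_ln)

median = 0.15  # invented median PGA (g) for a scenario earthquake
print(ground_motion(median, 1.0))  # 84th percentile: ~1.8 times the median
print(ground_motion(median, 3.0))  # epsilon = 3: ~6 times the median
```

This multiplicative growth is why scenario motions calculated at ε of 3 or 4 rapidly become prohibitively high.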
In order to avoid making arbitrary decisions regarding appropriate values of M, R and ε,
probabilistic seismic hazard analysis (PSHA) treats all three of these quantities as random
variables and integrates over their values to define the resulting frequency or probability of
exceeding different levels of acceleration at the site under study. This was expressed succinctly
by Cornell (1968) in the landmark paper that is the basis for PSHA: “In the determination of the
distribution of maximum annual earthquake intensity at a site, one must consider not only the
distribution of the size (magnitude) of an event, but also its uncertain distance from the site and
the uncertain number of events in any time period.” The ranges of values of M and R, and the
average recurrence rates for the former, are defined by the SSC model, and the GMPE—which
is, in the simplest case, the Ground Motion Characterisation (GMC) model—defines the
distribution of ε. For each combination of these three parameters, levels of motion are
calculated at the site, and the corresponding exceedance frequency is calculated as the
probability associated with the value of ε divided by the recurrence interval of M. Assuming all
the earthquakes to be independent events, the frequencies can be summed to obtain the total
frequency of exceeding each level of motion, and these quantities can be plotted to obtain a
seismic hazard curve at the site for the chosen ground-motion parameter (Figure 3.9). The
advantage of this approach is that a level of design motion can then be chosen that has a
suitably low probability of being exceeded, and the hazard curves can be combined with fragility
curves for different components and structures to calculate risk in terms of total probability of
failure.
Figure 3.9. Schematic overview of the steps involved in PSHA (adapted from Reiter, 1990)
The Gutenberg-Richter recurrence relationship (Figure 3.7) for a seismic source can be
expressed in the following way:

$$N(m) = \nu_{m_{\min}} \; \frac{e^{-\beta(m - m_{\min})} - e^{-\beta(m_{\max} - m_{\min})}}{1 - e^{-\beta(m_{\max} - m_{\min})}} \qquad (3.2)$$
where mmin is the smallest magnitude considered to be of engineering significance (usually 5.0
for nuclear projects); mmax is the largest magnitude considered possible for that seismic
source; $\nu_{m_{\min}}$ is the annual rate of earthquakes of magnitude mmin and greater; and the parameter
β is equal to the b-value in the classical Gutenberg-Richter equation multiplied by ln(10). The
PSHA calculations then integrate the following expression to determine the annual rate at which
any given level of PGA will be exceeded at the site:
$$\lambda(pga^{*}) = \sum_{i} \nu_{i} \iiint I\left(PGA > pga^{*} \mid m, r, \varepsilon\right) f_{M,R,E}(m, r, \varepsilon) \; dm \, dr \, d\varepsilon \qquad (3.3)$$
where I(PGA > pga*) is an indicator function taking a value of unity if PGA > pga* and zero
otherwise; $\nu_i$ is as defined in Equation (3.2); and the last term in the integrand is the joint probability
density function of magnitude, distance and ground-motion exceedance (ε). Figure 3.10 provides
a graphical representation of this integration.
Figure 3.10. (a) Source zone and site at distance ri from the 1 km x 1 km sub-element under
consideration; (b) non-cumulative recurrence relationship for the source; (c) for the minimum magnitude earthquake in this sub-element, two standard deviations are required to generate a PGA of 0.2 g at the
site, implying a low exceedance level (e), whereas for a larger earthquake (M 6.4), the median prediction of PGA suffices (d), so the ground-motion exceedance is higher for this less frequent earthquake
scenario (f).
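In practice, the integral in Equation (3.3) is evaluated as a discrete sum over magnitude, distance and ε. The following single-source sketch illustrates the mechanics only: the recurrence parameters, the point-mass distance distribution and the toy GMPE coefficients are all invented and bear no relation to the Thyspunt model.

```python
import math

def hazard_rate(pga_star, nu_min=0.08, b=0.85, m_min=5.0, m_max=8.0,
                r_km=(10.0, 30.0, 60.0), r_prob=(0.2, 0.5, 0.3),
                sigma_ln=0.6, n_bins=60):
    """Annual rate of exceeding pga_star (in g): a discrete form of Eq. (3.3)
    for a single source, summing over magnitude and distance bins and
    integrating over epsilon via the normal survival function."""
    beta = b * math.log(10)
    dm = (m_max - m_min) / n_bins
    norm = 1.0 - math.exp(-beta * (m_max - m_min))  # truncation constant, cf. Eq. (3.2)
    total = 0.0
    for i in range(n_bins):
        m = m_min + (i + 0.5) * dm
        f_m = beta * math.exp(-beta * (m - m_min)) / norm   # magnitude density
        rate_m = nu_min * f_m * dm                          # annual rate in this bin
        for r, p_r in zip(r_km, r_prob):
            # toy GMPE: median ln(PGA); coefficients are invented
            ln_med = -4.0 + 1.0 * m - 1.3 * math.log(r + 10.0)
            eps = (math.log(pga_star) - ln_med) / sigma_ln
            p_exceed = 0.5 * math.erfc(eps / math.sqrt(2))  # P(epsilon > eps)
            total += rate_m * p_r * p_exceed
    return total

# the exceedance rate decreases as the target acceleration rises
for pga in (0.05, 0.1, 0.2, 0.4):
    print(pga, hazard_rate(pga))
```

Summing such contributions over all sources, and repeating the calculation for a range of target accelerations, yields a hazard curve of the kind shown in Figure 3.9.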
By conducting the PSHA calculations in terms of the spectral accelerations at different oscillator
frequencies, hazard curves can be obtained in terms of each spectral ordinate. By reading off
the accelerations for each oscillator frequency at a given annual exceedance frequency,
uniform hazard response spectra (UHRS) can be constructed (Figure 3.11). These UHRS can
be associated with their return periods, which are simply the reciprocals of the annual
exceedance frequencies.
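Constructing a UHRS from a set of hazard curves amounts to interpolating each curve at the target annual exceedance frequency. A minimal sketch using log-log interpolation follows; the hazard-curve values are invented purely for illustration.

```python
import math

def sa_at_aef(sa_levels, aefs, target_aef):
    """Log-log interpolate a hazard curve to find the spectral acceleration
    whose annual exceedance frequency equals target_aef (aefs decreasing)."""
    for j in range(len(aefs) - 1):
        if aefs[j + 1] <= target_aef <= aefs[j]:
            t = (math.log(target_aef) - math.log(aefs[j])) / \
                (math.log(aefs[j + 1]) - math.log(aefs[j]))
            return sa_levels[j] * (sa_levels[j + 1] / sa_levels[j]) ** t
    raise ValueError("target AEF outside the range of the hazard curve")

# invented hazard curves for two oscillator frequencies
curves = {
    "10 Hz": ([0.05, 0.1, 0.2, 0.4, 0.8], [1e-2, 3e-3, 8e-4, 1.5e-4, 2e-5]),
    "1 Hz":  ([0.02, 0.05, 0.1, 0.2, 0.4], [1e-2, 2e-3, 5e-4, 8e-5, 1e-5]),
}
target = 1.0e-4  # annual exceedance frequency; return period = 1/1e-4 = 10,000 years
uhrs = {f: sa_at_aef(sa, aef, target) for f, (sa, aef) in curves.items()}
print(uhrs)  # one spectral ordinate per oscillator frequency: a point-wise UHRS
```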
Figure 3.11. Seismic hazard curves for different ground-motion parameters (upper) and uniform hazard
response spectra (lower) calculated from such curves (ASCE, 2004).
In the discussions up until this point, it is implicitly assumed that the SSC and GMC models are
uniquely defined, in which case performing the integrations described by Equation (3.3) is
quite straightforward. In practice, however, this is never the case, because the nature of the
data available for any region and site are such that there will be an appreciable degree of
uncertainty. The uncertainty arises for two main reasons, the first of which is that different
subject matter experts may have different interpretations of the data, all of which are technically
defensible. The second reason is that the PSHA calculations will include scenarios (for example,
ground-motion levels at short distances from large earthquakes) for which there are no data in
the region in question, and possibly none globally. These uncertainties then reflect our lack of
knowledge about earthquake and ground-motion processes both in general and in the specific
region under study, for which reason they are referred to by the name epistemic uncertainty
(derived from the Greek word for knowledge). One of the most important challenges in
conducting a PSHA that can provide public and regulatory assurance is to identify and quantify
epistemic uncertainties.
3.3. Uncertainty and Logic-Trees
The best practice in seismic hazard assessment is to start by gathering all available data
relevant to the region and the site; this must involve compilation of existing data, and may also
include the collection of new data depending on the constraints of schedule and budget.
Analysis and assessment of these data will then lead to the development of best estimates of
the various parameters that define the SSC and GMC models. Then it is necessary to define the
epistemic uncertainty range about this best estimate, in effect defining a distribution. The best
estimate models define the centre of this distribution, and the alternative (but less favoured)
interpretations of the data define the body of the distribution. The limiting values, in terms of
upper and lower bounds that must be considered since they lie beyond what the data themselves can
reveal, define the range of the distribution.
The approach of developing a best estimate model is entirely consistent with Clause 4.1.10 of
RD-0016: “A best-estimation calculation employs modelling that attempts to describe
realistically the physical processes occurring in the plant”, although in the case of seismic
hazard analysis it is the physical processes occurring in the Earth. The same clause of RD-0016
goes on to state that the key challenge in this approach is defining the associated uncertainty:
“Deriving the overall uncertainty for a best-estimate calculation method may be a difficult
undertaking.” The requirement on the analyst is then very clearly expressed as follows:
“When a calculation method is used to make unbiased or best-estimate calculations the
validation submission must present a detailed derivation of the uncertainty bounds to be
associated with important results” (NNR, 2006).
In PSHA, it is recognised that there will be epistemic uncertainty associated with all of the
parameters defining the SSC and GMC model, and the tool adopted to handle these is the
logic-tree. A node is defined for each component of the models and branches emerging from
each node carry the alternative models or parameter values that the analysts select to represent
the centre, the body and the range of technically-defensible interpretations (or CBR of the TDI).
Each branch is assigned a weight that reflects the relative confidence of the analysts in that
model or value being the most appropriate; the branch weights at each node, which are
subsequently treated as probabilities, sum to one (Figure 3.12).
Figure 3.12. Example of a logic-tree for PSHA; the first three nodes define the SSC model, the final node
the GMC model (McGuire, 2004).
With a logic-tree established, instead of performing a single set of PSHA calculations (for each
ground-motion parameter), hazard calculations are conducted for every single possible
combination of branches, which can be computationally very demanding (even for a very simple
logic-tree like the one in Figure 3.12, the hazard calculations will be repeated 16 times; in
practice for critical facilities, there will often be hundreds or even thousands of logic-tree branch
combinations). Whereas the integration over aleatory variability influences the shape of a single
hazard curve, the explicit consideration of epistemic uncertainty leads to multiple hazard curves.
The total weight, or probability, assigned to each resulting hazard curve is obtained from the
product of the individual branch weights. Rather than display hundreds or thousands of
individual hazard curves, the statistics of the hazard (exceedance frequency), for each ground-
motion amplitude, are calculated and the hazard expressed in terms of fractiles and the mean
value (Figure 3.13).
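The enumeration and weighting described above can be sketched as follows; the node names, branch weights and exceedance frequencies are all invented for illustration and do not represent the Thyspunt logic-tree.

```python
import itertools

# three logic-tree nodes; branch weights at each node sum to one
nodes = [
    [("GR-trunc", 0.7), ("GR-char", 0.3)],   # recurrence model
    [("Mmax 7.5", 0.4), ("Mmax 8.0", 0.6)],  # maximum magnitude
    [("GMPE-A", 0.5), ("GMPE-B", 0.5)],      # ground-motion model
]

# every end-branch combination; its total weight is the product of branch weights
combos = list(itertools.product(*nodes))
weights = [w1 * w2 * w3 for (_, w1), (_, w2), (_, w3) in combos]
assert abs(sum(weights) - 1.0) < 1e-12  # weights behave as probabilities

# invented exceedance frequencies (one hazard curve per combination, read at one amplitude)
rates = [3.1e-4, 2.4e-4, 5.0e-4, 3.9e-4, 1.8e-4, 1.2e-4, 2.9e-4, 2.2e-4]

mean_rate = sum(w * r for w, r in zip(weights, rates))

def fractile(rates, weights, q):
    """Weighted fractile: sort by rate, accumulate weight up to q."""
    acc = 0.0
    for r, w in sorted(zip(rates, weights)):
        acc += w
        if acc >= q:
            return r
    return max(rates)

print(mean_rate, fractile(rates, weights, 0.5))
```

Repeating this at every ground-motion amplitude traces out the mean and fractile hazard curves of Figure 3.13.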
Figure 3.13. Hazard curves for spectral accelerations at 0.2 s period at the Thyspunt site; the separation
of the median and mean hazard curves is an indication of the capture of epistemic uncertainty.
The construction of a logic-tree for the SSC and GMC models that define the input to PSHA,
and consequently the range of uncertainty in the resulting hazard estimates, inevitably requires
expert judgement. The likelihood of identifying and quantifying all sources of epistemic
uncertainty is increased by obtaining the judgements of several experts. The challenge that
then arises is to establish a framework for the interaction among the experts (to avoid
differences arising from misunderstandings or access to different datasets, while at the same
time ensuring that artificial consensus is not constructed) and for combining the expert
judgements. The state-of-the-art approach to conducting multiple-expert hazard assessment for
safety-critical facilities is the SSHAC Level 3 framework, which was adopted for the Thyspunt
PSHA study.
3.4. The SSHAC Level 3 Process
The Senior Seismic Hazard Analysis Committee (SSHAC) was formed by the US Nuclear
Regulatory Commission (USNRC), the US Department of Energy (DOE) and the Electric Power
Research Institute (EPRI) to investigate the large discrepancies found both within and between
two major multiple-expert PSHA studies conducted in the 1980s for nuclear power plant sites in
Central and Eastern United States. The SSHAC report (Budnitz et al., 1997) concluded that the problems
were primarily procedural, rather than technical, and in response proposed formal frameworks
for organising expert judgments and quantification of uncertainty in seismic hazard studies.
Recognising that different levels of endeavour were needed to address seismic hazard
assessments for different applications, the SSHAC report proposed four study levels—
increasing in complexity and resource requirements from Level 1 to Level 4—for conducting
seismic hazard studies. Only Level 3 and Level 4 studies are considered appropriate for nuclear
sites but no distinction is made between these two levels in terms of the degree of regulatory
assurance they provide (USNRC, 2012b).
In a SSHAC Level 3 study, the Technical Integration (TI) Teams are responsible for developing
the SSC and GMC models, and are required to assume intellectual ownership for these models
and their technical bases. The first stage of a SSHAC Level 3 project is the development of
SSC and GMC databases, which are developed by the TI Teams assisted by database
developers; for the Thyspunt PSHA, the database developers were staff members from the
Council for Geoscience (CGS) and external Specialty Contractors. Once the SSC and GMC
databases are considered to be complete, the TI Teams are charged with evaluating the
available data, methods and models in terms of their quality and their applicability to the seismic
characterisation of the region and site under study. In this process, the TI Teams are informed
by Resource Experts (individuals having knowledge of a particular dataset, for example) and
Proponent Experts (individuals who advocate the use of a particular model, generally developed
by themselves). The interactions of the TI Teams with the Resource and Proponent Experts
take place mainly in the first two of three formal Workshops that are essential requirements of a
SSHAC Level 3 process (Figure 3.14). Informed by the evaluation process, the TI Teams then
move into the integration phase of the project in which they develop SSC and GMC logic-trees
to capture the CBR of the TDI. Preliminary models are developed in the period between the
second and third Workshops, and extensive hazard sensitivity analyses are then presented at
Workshop #3 in order to provide insights into the most important uncertainties in the SSC and
GMC models, allowing the TI Teams to identify those elements of the models that warrant
additional investigation and refinement.
Figure 3.14. Schematic overview of the structure of a SSHAC Level 3 PSHA, with time running from top
to bottom of the diagram. PM is Project Management (other terms are defined in the text).
A vitally important role in the SSHAC Level 3 process is played by the Participatory Peer
Review Panel (PPRP). The PPRP is an independent body of experts who conduct process and
technical review of the entire project from beginning to end. The charge of the PPRP involves
several facets, which can be summarised as follows:
1. To ascertain whether the TI Teams identified and evaluated all the available data,
methods and models that may be pertinent to the seismic characterisation of the site.
2. To assess whether the technical bases for all of the decisions by the TI Teams—with
regards to models used and models rejected, and also with regards to the weights
assigned to models—are adequately justified and fully documented.
3. To judge whether the final logic-trees, and consequently the resulting hazard results,
have captured the centre, body and range of the technically-defensible interpretations of
the available data, methods and models.
The PPRP executes its review of the SSHAC process and the technical aspects of the hazard
assessment through a series of activities, which for the Thyspunt PSHA included all of the
following:
• Attendance of the full Panel, as observers, at all three Workshops
• Review and feedback on the proposed list of Proponent Experts invited to Workshop #2
• Attendance of at least one Panel member, as an observer, at each of the four
formal Working Meetings conducted by both the SSC and GMC TI Teams
• Posing direct questions to the TI Teams about the technical bases for the SSC and
GMC models during Workshop #3
• A comprehensive review of the final draft report followed by review of the TI Team
responses and the final report
After reviewing the TI Teams’ responses to the original review comments and the revised report,
the PPRP issues a closure letter that forms part of the project record. If the PPRP is fully
satisfied on all three of the criteria listed above, without exception or reservation, then this
closing letter states the concurrence of the PPRP with the conduct and deliverables of the
PSHA. This is the basis for acceptance of the seismic hazard study, and USNRC (and other
regulatory bodies) have accepted SSHAC Level 3 and Level 4 PSHA studies on the basis of the
PPRP concurrence. The PPRP only gives such concurrence if entirely convinced that the
project has been conducted to acceptable standards both in terms of process and the technical
bases for the hazard model. The members of the PPRP, collectively and individually, stake their
reputations on the endorsement of the study, which is therefore not given lightly.
Another aspect of the SSHAC Level 3 process that is noteworthy in this context is that
the TI Teams need to produce logic-trees that represent the final consensus of the Team
members and for which each and every Team member is prepared to take full ownership (which
means being able and willing to explain and defend the model and its technical bases). This
inevitably generates considerable technical challenge and defence among the members of the
TI Teams, and this process of scientific discussion and exploration ensures that the models
undergo appreciable internal review and examination.
3.5. QA, V&V and the SSHAC Level 3 Process
Quality Assurance is clearly a very important aspect of any nuclear project and becomes
particularly relevant for safety-related issues such as the characterisation of external hazards.
All data collection activities for the Thyspunt study were subject to QA, which is fully
documented elsewhere. The issues of relevance here are QA for the SSC and GMC models,
and QA for the hazard calculations using those models.
The first question that needs to be addressed is whether the SSHAC Level 3 process can be
considered to comply with nuclear-level QA. The position of the USNRC on this question is
clear and has been stated in NUREG-2117 (Section 5.12) as follows: “Therefore, it is the
collective, informed judgment of the TI Team (via the process of data evaluation and model
integration) and the concurrence of the PPRP (via the participatory peer review process) as well
as adherence to the national standards described above that ultimately leads to the assurance
of quality in the process followed and in the products resulting from the SSHAC hazard
assessment framework” (USNRC, 2012b). This obviates the need for any additional QA review
process for the products of a SSHAC Level 3 or 4 study. In other words, adherence to the
requirements of the SSHAC process in terms of the objective evaluation of all the relevant
sources of information identified and the development of a logic-tree to capture the CBR of the
TDI, all reviewed and approved by the PPRP, is sufficient to provide assurance regarding the
hazard models.
This then leads to the next question, which relates to actual calculations using the hazard model.
Clause 4.2 of RD-0016 identifies three aspects to this question, namely:
• The calculations should be performed by suitably qualified and experienced individuals,
and the V&V activities for the calculations should be performed by people who are
sufficiently independent of the software developer and those who compiled the input
data
• The software used must be appropriate to the application both in terms of the
mathematical models it embodies and in terms of the correct implementation of these
models
• The input data must be checked by establishing “suitable measures...to trap input data
errors and erroneous results.”
In terms of the first of these requirements, this information will be presented in Appendix 1 of the
final report on the Thyspunt PSHA, which will provide biographical summaries for all of the key
participants involved in the development of the models, their review and their implementation. A
survey among other practitioners in this field would confirm that the project was able to bring
together many of the foremost experts in seismic hazard analysis globally to occupy the various
roles within this application of the SSHAC Level 3 process; there is no doubt whatsoever that
the technical staff of the study were eminently qualified to undertake these analyses.
The remainder of this report is devoted to addressing the remaining questions related to the
software used for calculations and the correct implementation of the hazard model in the
software. The focus is only on those calculations that produced quantitative results that were
used directly as input to the hazard calculations, and on the hazard calculations themselves.
Other calculations that may have been conducted only to inform the TI Teams as they
undertook the evaluation and integration processes are not included since there is no need for
such calculations to be subject to the same degree of V&V. Indeed, to create the need for all
calculations to be subject to full V&V and documented in this way would have the result of
actively discouraging the exploratory analyses that characterise scientific investigations.
Chapter 4 presents the V&V activities for the calculations related to estimating the parameters
of the recurrence relationships for the seismic sources, which constitute the SSC model.
Chapter 5 presents the V&V for the calculations that were involved in the development of the
GMC model for ground motions in the deep horizon within the Goudini bedrock below the
Thyspunt site. Chapter 6 then presents the V&V for the site response analyses conducted to
estimate the amplification factors for the overlying rock and the convolution of these factors with
the bedrock hazard to obtain the final hazard estimates at the reference elevation at the surface
of the Goudini deposit. Finally, Chapter 7 presents the extensive V&V activities conducted to
ensure that the hazard calculations were conducted correctly.
4. V&V for SSC MODEL CALCULATIONS
The SSC model is defined essentially by the geographical coordinates and fault geometry of the
sources, and the recurrence relationships that define the average rates of occurrence for
earthquakes of different magnitudes. The recurrence parameters for the fault sources (Figure
3.8) are inferred from geological information and geo-chronology measurements, whereas for
the area sources the recurrence parameters are calculated from the statistics of the earthquake
catalogue. The calculations involved in the derivation of the recurrence parameters for the
area source zones included in the model are the focus of this chapter.
4.1. Mmax Calculations
Mmax calculations were undertaken for the preliminary SSC model presented at WS3 in August
2012, following the same methodology as implemented in the CEUS SSC project (USNRC,
2012a). These calculations were subsequently updated to reflect changes made to the model
during the finalisation stage. Additionally, the logic-tree branches corresponding to the
calculation of Mmax values were translated into an equivalent set of non-repeated Mmax values
with equivalent weights, in order to enhance the efficiency of the final hazard calculations.
4.1.1 Setup of Mmax calculations and V&V
The Mmax model adopted for the Thyspunt PSHA SSC model includes multiple approaches to
Mmax calculation, including:
1. the Bayesian approach developed by Johnston et al. (1994) with the updated SCR
priors developed for the CEUS SSC project (USNRC, 2012a); this approach is further
subdivided into a single-prior and two-prior approach, with the latter considering source-
specific weights for priors corresponding to MESE (Mesozoic extension) and NMESE
(no Mesozoic extension) conditions,
2. the Kijko-Sellevoll-Bayes (K-S-B) approach developed by Kijko (2004).
Both these approaches are described in detail in Chapter 5 of the CEUS SSC report (USNRC,
2012a), which served as the reference for the implementation. The implementation was done
using the commercial scientific computing package MatLab®. Self-checks in the form of
graphical displays of intermediate results were included during development of the version used
for the Thyspunt PSHA preliminary SSC model presented at WS3. For each of the five area
seismic sources, and each of the possible Mmax calculation approaches, conditional
distributions of Mmax values binned in 0.1 magnitude unit-wide bins were developed. A
composite distribution combining the individual distributions with appropriate weighting was also
developed. These results were circulated within the SSC TI Team and presented at WS3.
The Mmax distributions for the Thyspunt PSHA final model were developed using an updated
version of the routines developed for the preliminary model. The updates consisted of lowering
the upper bound on Mmax from 8.25 to 8.00, as well as revisions to the coordinates
of the area sources. The final Mmax calculation routines were therefore integrated with version-
controlled versions of the earthquake catalogue and source boundaries. The distributions
developed took the same format as for the preliminary model, and were circulated in graphical
format to the SSC TI Team for review.
For implementation in the final hazard calculations, the Mmax distributions for each model were
discretised using the 5-point distribution of Miller & Rice (1983) and rounded to the nearest 0.1
magnitude unit, as recommended by Dr Robert Youngs at WS2 and in subsequent interactions.
The resulting Mmax values were tabulated in the Hazard Input Document (HID), and
subsequently implemented in the input files for the hazard calculations using FRISK88 (see
Section 7.5.1 for details).
The V&V activities on the development of the codes used for the Mmax calculations are
documented in the CGS/TP10/FM04 forms included in the electronic appendix to this report, in
the folder 4_SSC_MODEL_CALCULATIONS\4_1_MmaxCalculation\MmaxCalculationCode\.
4.1.2 Collapsing to unique Mmax branches
The full tree for Mmax (including three levels for the Mmax calculation approach plus one level for
the 5-point discretisation) leads to partial duplication of hazard calculations in cases where
multiple approaches lead to the same Mmax value (within the 0.1 unit precision used). This
increases calculation time unnecessarily, since the same hazard integration is performed more
than once. Therefore, in the numerical implementation, these four levels are collapsed into a
single equivalent logic-tree level branching between all possible Mmax values, as described
below and illustrated in Figure 4.1.
In the original setup, each seismic source geometry is linked with 20 Mmax values resulting
from four approaches and the 5-point discretisation. Some of these values may be repeated,
and others may have zero weight if the corresponding approach has zero weight. To obtain the
equivalent representation, the Mmax values are re-ordered in increasing order and the weights
for any duplicate values summed to obtain an equivalent weight. Finally, only values with non-
zero weights are retained. This reduces the number of Mmax branches to consider to a number
between 7 and 16, depending on the source zone considered.
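The collapsing step can be sketched as follows; the Mmax values and weights below are invented (a shortened list with duplicates and a zero-weight branch, standing in for the full set of 20).

```python
from collections import defaultdict

def collapse_mmax(branches):
    """Merge (Mmax, weight) branches: drop zero-weight branches, sum the
    weights of duplicate Mmax values (already rounded to 0.1 units), and
    return the unique values in ascending order."""
    merged = defaultdict(float)
    for mmax, w in branches:
        if w > 0.0:
            merged[round(mmax, 1)] += w
    return sorted(merged.items())

# invented example: two branches share Mmax 6.9, one branch has zero weight
branches = [(6.5, 0.101), (6.9, 0.244), (7.2, 0.310), (6.9, 0.145),
            (7.6, 0.200), (8.0, 0.0)]
collapsed = collapse_mmax(branches)
print(collapsed)  # duplicates merged, zero-weight branch dropped
assert abs(sum(w for _, w in collapsed) - 1.0) < 1e-9
```

Each surviving (Mmax, weight) pair then drives exactly one hazard integration, rather than one per original branch combination.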
Figure 4.1. Equivalent representation of Mmax branches in SSC logic-tree using non-repeated Mmax values and equivalent weights.
Given that the recurrence parameters depend only on the Mmax value (and not how it is
obtained), source zone geometry and (if applicable) completeness scaling factor, the hazard
results are exactly the same for both implementations provided an equivalent precision is
retained for the equivalent weights. This equivalent precision can be obtained by summing the
precisions of the individual weights, as shown in Figure 4.1. In the case under consideration,
the equivalent precision was determined to be 7 decimal digits.
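The precision argument can be checked numerically with exact decimal arithmetic; the weight values below are invented, chosen only to have 2, 1, 1 and 3 decimal digits respectively.

```python
from decimal import Decimal

# invented branch weights with 2, 1, 1 and 3 decimal digits
w_kb, w_np, w_pm, w_mv = Decimal("0.75"), Decimal("0.4"), Decimal("0.5"), Decimal("0.125")
w_eq = w_kb * w_np * w_pm * w_mv
print(w_eq)  # the product carries exactly 2 + 1 + 1 + 3 = 7 decimal places
```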
The implementation of this transformation into equivalent branches was independently checked,
and is documented in the CGS/TP10/FM03 form that can be found in the electronic appendix to
this report, in the folder 4_SSC_MODEL_CALCULATIONS\4_1_MmaxCalculation\MmaxEquivalentBranches.
[Figure 4.1 diagram labels: original tree with weights wKB (prec = 2), wPM (prec = 1), wNP (prec = 1), wMV (prec = 3); equivalent tree with non-repeated Mmax values Mmax1(SZ) to MmaxN(SZ) and equivalent weights weq1 to weqN. Equivalent weights: Weqi = wKB·wNP·wPM·wMV; equivalent precision: prec(Weqi) = prec(wKB) + prec(wNP) + prec(wPM) + prec(wMV) = 2 + 1 + 1 + 3 = 7.]
4.2. Recurrence Calculations
The second step of the recurrence calculations for area seismic sources consisted of
determining the recurrence parameters characterising the absolute and relative levels of
seismicity within the context of a truncated exponential (Gutenberg-Richter) distribution model,
namely the activity rate a and the b-value. In FRISK88, these are represented by the cumulative
annual number of earthquakes above the minimum magnitude considered in the calculations
(NU) and the parameter BETA, which is simply equal to ln(10) times the b-value. Following
PPRP recommendations at WS3 and interactions with Dr Robert Youngs, the approach adopted
is the penalised maximum-likelihood approach developed by Veneziano & Van Dyck (1985),
which is an extension of the widely used maximum-likelihood approach of Weichert (1980).
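The penalised-likelihood idea can be illustrated with a simplified Weichert-style grouped-magnitude likelihood plus a quadratic penalty pulling the b-value towards its prior. This is a sketch only, not the Veneziano & Van Dyck (1985) formulation used in the project (it ignores the Mmax truncation and does not estimate the activity rate); the bin data of Table 4.1 are reused as inputs, but the simplified likelihood will not reproduce the tabulated values exactly.

```python
import math

def penalised_b(m_bins, counts, periods, b_prior=0.85, penalty=16 / math.log(10)):
    """Grid-search estimate of b from grouped magnitudes with unequal
    completeness periods (Weichert-style conditional likelihood), with a
    quadratic penalty towards the prior b-value. Simplified sketch only."""
    def score(b):
        beta = b * math.log(10)
        z = [t * math.exp(-beta * m) for m, t in zip(m_bins, periods)]
        total = sum(z)
        # conditional multinomial log-likelihood of the observed bin counts
        ll = sum(n * math.log(zi / total) for n, zi in zip(counts, z) if n > 0)
        return ll - penalty * (b - b_prior) ** 2
    return max((b / 1000 for b in range(300, 2001)), key=score)

# bin data from Table 4.1 (test case 1)
m_bins  = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5]
counts  = [7, 7, 2, 1, 1, 0, 0, 0]
periods = [203, 209, 256, 303, 338, 363, 363, 363]
print(penalised_b(m_bins, counts, periods))
```

Increasing the penalty weight pulls the estimate ever closer to the prior; with the penalty removed, the estimate reverts to the unpenalised maximum-likelihood value.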
4.2.1 Setup of recurrence calculations and V&V
The maximum-likelihood approach of Veneziano & Van Dyck (1985) was implemented in the
commercial software MatLab® based on the description of the method provided in Johnston et
al. (1994). This implementation was independently reviewed by Dr Peter Stafford from Imperial
College London, and benchmarked against the implementation in the programme LIKEMAX
v5.0 developed and used by Dr Robert Youngs of AMEC using a set of test runs based on the
same inputs. These test runs are summarised in Tables 4.1 and 4.2, and show that the two
implementations give the same results within the numerical precision expected from using
different software packages in the implementation.
Table 4.1. Benchmark tests for recurrence calculations, test case 1.

EXAMPLE CASE 1: Total area = 32101 km2; Prior on b = 0.85; Penalty weight = 16/ln(10); Mmin = 4.0; Mmax = 8.0

Mi    4.0   4.5   5.0   5.5   6.0   6.5   7.0   7.5
Ki      7     7     2     1     1     0     0     0
TEi   203   209   256   303   338   363   363   363

RESULTS       Without penalty function           Penalised likelihood
              LIKEMAX       Thyspunt PSHA        LIKEMAX       Thyspunt PSHA
N(m0)         0.8130E-01    n0MLEo  0.0813       0.8205E-01    n0MLEp  0.0825
s[N(m0)]      0.1935E-01    sign0o  0.0193       0.1946E-01    sign0p  0.0195
b             0.688         bMLEo   0.6884       0.734         bMLEp   0.7664
s[b]          0.153         sigbo   0.1532       0.136         sigbp   0.1178
rho           0.139         rhoo    0.1386       0.110         rhop    0.0873
The V&V activities on the development of the codes used for the recurrence calculations are
documented in the CGS/TP10/FM03 and CGS/TP10/FM04 forms included in the electronic
appendix to this report, in the folder 4_SSC_MODEL_CALCULATIONS\4_2_RecurrenceCalculation\RecurrenceCalculationCode\.
Table 4.2. Benchmark tests for recurrence calculations, test case 2.

EXAMPLE CASE 2: Total area = 185657 km2; Prior on b = 0.85; Penalty weight = 16/ln(10); Mmin = 4.0; Mmax = 8.0

Mi    4.0   4.5   5.0   5.5   6.0   6.5   7.0   7.5
Ki     11     3     1     0     0     0     0     0
TEi   105   129   159   199   236   262   304   346

RESULTS       Without penalty function          Penalised likelihood
              LIKEMAX      Thyspunt PSHA        LIKEMAX      Thyspunt PSHA
N(m0)         0.1345E+00   n0MLEo  0.1345       0.1296E+00   n0MLEp  0.1276
s[N(m0)]      0.3496E-01   sign0o  0.0350       0.3367E-01   sign0p  0.0331
b             1.383        bMLEo   1.3831       1.079        bMLEp   0.9921
s[b]          0.340        sigbo   0.3397       0.177        sigbp   0.1355
rho           0.117        rhoo    0.1172       0.110        rhop    0.0998
The penalty weight adopted for the final calculation was developed in consultation with
Dr Youngs, who reviewed the penalised b-values obtained through application of the method.
The recurrence parameter distributions were then generated based on the values of the
likelihood function on a 5-by-5 grid centred on the maximum-likelihood estimates of NU and
BETA, with marginal ranges of parameters spanning from -2.5 to +2.5 standard deviations
around this central value, resulting in 25 recurrence curves for each combination of source
geometry and associated Mmax value. Given the low level of seismicity associated with the
ECC host source zone, this process resulted in a number of bins with a negative NU value,
which is not physically possible. The probability weights for these bins were set to 0, and the
probability weights for the remaining bins renormalised so that the total probabilities add up to
1.0. The recurrence curves thus obtained were circulated to the SSC TI team for review and
incorporated into the HID, as well as being displayed in graphical format in the final Thyspunt
PSHA report.
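The grid discretisation and renormalisation described above can be illustrated schematically. In the sketch below the bivariate-normal weighting is an assumed stand-in for the actual likelihood values, and the equally spaced node placement is illustrative:

```python
import math

def recurrence_grid(nu_c, beta_c, s_nu, s_beta, rho, n=5, span=2.5):
    """Build an n-by-n grid of (NU, BETA) pairs centred on the
    maximum-likelihood estimates, spanning +/- span standard deviations
    on each axis, with probability weights. Bins with a negative NU are
    given zero weight and the remaining weights renormalised to sum to 1."""
    # Node offsets in units of standard deviations (equally spaced)
    offs = [-span + (2.0 * span) * i / (n - 1) for i in range(n)]
    nodes, weights = [], []
    for du in offs:
        for db in offs:
            nu = nu_c + du * s_nu
            beta = beta_c + db * s_beta
            # Bivariate-normal density as a stand-in for the likelihood
            z = (du * du - 2.0 * rho * du * db + db * db) / (1.0 - rho * rho)
            w = math.exp(-0.5 * z)
            if nu < 0.0:        # negative rates are not physically possible
                w = 0.0
            nodes.append((nu, beta))
            weights.append(w)
    total = sum(weights)
    weights = [w / total for w in weights]
    return nodes, weights
```

For a low-seismicity source (small central NU relative to its standard deviation, as for the ECC host zone) the lowermost NU nodes fall below zero, receive zero weight, and the remaining weights are scaled up so that the total probability is 1.0.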
4.2.2 Generation of *.REC files and V&V
For the generation of the recurrence inputs incorporated in the HID, the penalised maximum-
likelihood routine was integrated with version-controlled catalogue and seismic source files
containing information regarding source coordinates, source zone areas and maximum
magnitude values. This resulted in the information listed in the electronic appendix to the HID,
which was simultaneously stored in MATLAB® binary format.
As discussed in Section 4.1.2, the maximum magnitude values have been reduced to a set of
non-repeated values with equivalent weights for each source. For similar considerations (i.e.,
avoiding the duplication of the same integration in the hazard calculations), the completeness
scaling factors for the ECC sources, which are applied as weighted multiplicative factors on the
NU parameter, were also integrated into the recurrence curve files used in the final hazard
calculations. For compatibility with the inputs used by FRISK88, these files were prepared as a
set of ASCII files based on the binary information stored in MATLAB®, which is identical to the
data in the HID. There was one such file prepared for each combination of source geometry and
Mmax value, following the file naming convention described in Figure 4.2. As for other FRISK88
basic input files, the essential information characterising each file was reproduced in the file
header, as illustrated in Figure 4.3, and later passed on to the FRISK88 run input (*.INP) files,
as explained in Section 7.5.1.
Figure 4.2. Explanation of file-naming conventions for area source recurrence parameter files used in the
final hazard calculations.
There is one *.REC file for each unique combination of:
• Non-repeated Mmax value (which goes into the maximum-likelihood calculation of the parameters);
• Source geometry alternative (since the parameter listed for activity is the number of earthquakes per year, which is area-dependent).
In the case of multiple completeness scaling factors R (which operate on the seismicity rate), these are combined with the appropriate weights and applied to the seismicity rate. This is reflected in the file name: e.g. A10M6p0Rco.rec is to be interpreted as Area source 10, Mmax = 6.0, with appropriately combined R-factors. This information is also summarised in the header of the file, which also explains the data format.
Source indexing convention: A10 = ECC; A11 = ECC_alt1; A12 = ECC_alt2; A13 = ECC_alt3; A20 = SYN; A21 = SYNalt; A30 = KAR; A40 = CK; A50 = NAM.
Self-checks in the form of spot checks were performed as part of the development process, to
ensure that the recurrence information listed in the *.REC files was identical to that incorporated
in the HID. The completeness and integrity of the sets of *.REC files, as well as their correct
linking to the appropriate source geometry and Mmax option, were verified as part of the checks
and pre-processing operations on the FRISK88 input files (see Section 7.5.1 for details).
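The file-naming convention can be expressed as a small helper (hypothetical code; only the A10M6p0Rco.rec example appears in the report, other names follow the same pattern, and a single decimal place for Mmax is assumed):

```python
def rec_filename(source_index, mmax, r_code="Rco"):
    """Compose a *.REC file name: 'A' + area source index, 'M' + Mmax
    with the decimal point written as 'p', then the completeness-factor
    code ('Rco' = appropriately combined R-factors), e.g. A10M6p0Rco.rec."""
    mmax_tag = f"{mmax:.1f}".replace(".", "p")
    return f"A{source_index}M{mmax_tag}{r_code}.rec"
```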
Figure 4.3. Example of *.REC file used in the seismic hazard calculations, with control features highlighted: (1) Full version-controlled file path; (2) Creation timestamp; (3) Echo of source geometry option features; (4) Echo of Mmax information; (5) Check on sum of probability weights; (6) Echo of
completeness factor information.
5 V&V for GMC MODEL for BEDROCK MOTIONS CALCULATIONS
The hazard calculations were performed for ground motions at a horizon deep within the
Goudini formation below the Thyspunt site, where the shear-wave velocity reaches 3 km/s. The
GMC logic-tree was composed of three published GMPEs, but these needed to be adjusted to
this very hard reference rock condition. The V&V activities associated with the calculations to
develop and apply these adjustments are described in the first three sections of this chapter.
The final section describes the V&V for calculations undertaken in the development of the logic-
tree branches for the associated aleatory variability (sigma values) of the GMC logic-tree.
The development of the GMC model required that a series of adjustments be made to a chosen
set of ‘backbone’ ground-motion models. These adjustments, which were made to both the
median predictions and the logarithmic standard deviations of the models, necessarily require
that the original models be implemented correctly in the first instance. If the final
adjusted models were compared to observations, an implementation error in the backbone would
have no practical consequence provided that the same implementation of the backbone was
used in the hazard calculations. This is because the adjustment factors that were developed
would correct for both the implementation error and the actual difference between the host
ground-motion model predictions and the observations of the target region. However, within this
project, the same implementations are not used for the GMC model development and the
hazard calculations, and we are not making direct comparisons with data. It is therefore
imperative that the backbone models be implemented correctly so that there is consistency with
the PSHA calculations, but also so that the adjustments that are made can be viewed from a
physical perspective. That is, it is important to know what the physical implication of scaling the
model by a given factor is (e.g., what does a factor of 1.2 mean in terms of stress drop
difference), and this can only be achieved by having confidence in the implementation of the
backbone and from understanding what physical parameters are consistent with these
backbone models.
As noted above, the two main changes that were made to the ground-motion models used in
the hazard calculations related to a modification to the median predictions to account for the
specific velocity profile and near-surface diminution of the reference bedrock conditions, and a
modification for the partial-ergodic assumption that affects the logarithmic standard deviation.
The following sections describe the various checks that were performed in order to ensure that
the adjustment factors related to these two components were derived correctly.
5.1 Host GMPE Implementations
The implementations of the three backbone ground-motion models were used for developing
Vs-Kappa adjustment factors and were also used subsequently for defining the target spectra
for the site response calculations (discussed in detail in Chapter 6). The three ground-motion
models that were implemented were Abrahamson & Silva (2008), Akkar & Cagnan (2010) and
Chiou & Youngs (2008).
However, in the case of Abrahamson & Silva (2008), two slight adjustments to the originally
published model were included. The first is a correction of their model, made by
Abrahamson & Silva (2009) and published on the PEER NGA website
(http://peer.berkeley.edu/ngawest/nga_models.html) in a file named ‘AS08_NGA_errata.pdf’.
This file contains a correction to the published equations for the standard deviation of the model,
as well as a new adjustment to the treatment of hanging wall effects. Note that the only
correction of relevance for the present study is that related to the standard deviation as the
hanging wall effects were not considered within the hazard calculations.
The second modification relates to the inclusion of a long-period adjustment that is advocated
within the original Abrahamson & Silva (2008) publication, but that is not adopted for the
Thyspunt PSHA. This modification was proposed by the authors on the basis of reasoning that
was shown to be questionable during the development of the GMC model.
Two independent implementations of these ground-motion models were used within the
development of the GMC model. For the purposes of deriving Fourier spectra from response
spectral predictions using Inverse Random Vibration Theory (IRVT) (explained in more detail in
Section 5.2) Mathematica (http://www.wolfram.com/mathematica/) scripts were written and used
by Frank Scherbaum. For the purpose of providing target response spectra for the site response
calculations performed by Ellen Rathje, scripts written in R (http://www.r-project.org) were used
by Peter Stafford. The implementation of the models within the Mathematica scripts used by
Frank Scherbaum were validated by comparing the predictions of the Mathematica
implementations with those from the R implementations of Peter Stafford. These checks were
only carried out for a particular set of earthquake scenarios relevant for the IRVT calculations.
There is no need to check the implementation of the Mathematica models for other earthquake
scenarios as these scenarios were not directly considered elsewhere in the development of the
Vs-Kappa corrections.
The reference implementations were therefore those of Peter Stafford written using the R
language. For two of the models, Abrahamson & Silva (2008) and Chiou & Youngs (2008), it is
possible to make use of an openly available package called ‘nga’ (Kaklamanos & Thompson,
2011). This package is written entirely in R, with the principal developer of the code being James
Kaklamanos. Version 1.4-1 of this package was used for the purposes of the present study and
this version includes the correction to the Abrahamson & Silva (2008) model mentioned
previously. The implementation of the ‘nga’ package itself has previously been tested and
validated in the report by Kaklamanos et al. (2010). This report, co-authored by two of the NGA
model developers, systematically compared the implementations of all of the NGA ground-
motion models in both R and Fortran. The underlying Fortran code that was used for those
comparisons is also the same code that has been used for checking the implementation of the
ground-motion models for the hazard calculations of this study.
Of the three models considered by the GMC TI Team, the ‘nga’ package only provides
implementations of Abrahamson & Silva (2008, 2009) and Chiou & Youngs (2008). While
the implementations within the ‘nga’ package could be assumed to be robust given the extent of
verification and validation already conducted by Kaklamanos et al. (2010), independent checks
on this code have also been carried out by Peter Stafford. These checks were made using
independent implementations of the models written in C++ and MATLAB. In all cases, the
predicted motions are found to be consistent to within the precision possible. For ground-motion
models the precision of the predictions is governed by the number of significant figures provided
by the model developers when they publish tables of model coefficients. This precision is
typically four to five significant figures.
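A comparison at this level of precision can be sketched as a relative-tolerance check (a generic illustration, not the project scripts; the function names are invented):

```python
def agree_to_sig_figs(a, b, sig_figs=4):
    """True if two predictions agree to within roughly one unit in the
    last of sig_figs significant figures, i.e. a relative difference
    below about 10**(1 - sig_figs)."""
    ref = max(abs(a), abs(b))
    if ref == 0.0:
        return True
    return abs(a - b) / ref < 10.0 ** (1 - sig_figs)

def compare_spectra(pred_1, pred_2, sig_figs=4):
    """Compare two sets of spectral ordinates (e.g. from independent
    GMPE implementations) ordinate by ordinate."""
    return all(agree_to_sig_figs(x, y, sig_figs)
               for x, y in zip(pred_1, pred_2))
```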
The other remaining model of Akkar & Cagnan (2010) was coded in R by Peter Stafford. In this
case, there is no publicly available package against which one can check the implementation
of the model. However, of the three models, the model of Akkar & Cagnan (2010) is the
simplest and is the least likely model to cause implementation problems. The reason for this is
that the functional form of the model is relatively traditional, and contains fewer functional terms
than the other backbone models. In order to check the implementation of the Akkar & Cagnan
(2010) model in R, Peter Stafford compared the predictions of his R implementation with
separate implementations written by himself in C++ and MATLAB. Agreement to a level of
precision available from the model coefficients was obtained in all cases. In addition, agreement
to within this same precision was also obtained when making the comparisons between the
Mathematica implementation of Frank Scherbaum and the R implementation of Peter Stafford.
The earthquake scenario used for the development of the final Vs-Kappa correction factors was
defined by the following set of meta-data:
• Moment magnitude, Mw 5.75
• Joyner-Boore distance, RJB 10 km
• Rupture distance, RRUP 12.7 km
• Average shear-wave velocity, VS30 620 m/s
• Depth to top of rupture, ZTOR 4.4 km
• Depth to velocity horizon of 1.0 km/s, Z1.0 45 m
• Normal style-of-faulting flag, FNM 1
• Reverse style-of-faulting flag, FRV 0
• Rupture dip, 60 degrees
• Rupture width, W 8.3 km
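For reference, the scenario meta-data above can be collected in a single structure (values exactly as listed; the field names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """Earthquake scenario used for developing the Vs-Kappa correction factors."""
    mw: float = 5.75      # moment magnitude, Mw
    r_jb: float = 10.0    # Joyner-Boore distance, RJB (km)
    r_rup: float = 12.7   # rupture distance, RRUP (km)
    vs30: float = 620.0   # average shear-wave velocity, VS30 (m/s)
    z_tor: float = 4.4    # depth to top of rupture, ZTOR (km)
    z_1p0: float = 45.0   # depth to 1.0 km/s velocity horizon, Z1.0 (m)
    f_nm: int = 1         # normal style-of-faulting flag, FNM
    f_rv: int = 0         # reverse style-of-faulting flag, FRV
    dip: float = 60.0     # rupture dip (degrees)
    width: float = 8.3    # rupture width, W (km)

SCENARIO = Scenario()
```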
For the models of Abrahamson & Silva (2008) and Chiou & Youngs (2008) the earthquake was
assumed to be a mainshock event, and the average shear-wave velocity was assumed to have
been measured rather than estimated. A comparison of the predictions of response spectral
ordinates made for this scenario is shown in Figure 5.1.
Figure 5.1. Visual comparison of the predictions made from the Mathematica implementation of Frank
Scherbaum and the R implementation of Peter Stafford. Here, AC2010, CY2008 and AS2008 denote the models of Akkar & Cagnan (2010), Chiou & Youngs (2008) and Abrahamson & Silva (2008), respectively,
and FS and PS denote Frank Scherbaum and Peter Stafford.
In Figure 5.1 it should be noted that there are a series of markers corresponding to spectral
ordinates beyond a response period of 2 seconds for Frank Scherbaum’s implementation of the
Akkar & Cagnan (2010) model. However, this model only provides coefficients up to and
including a response period of 2 seconds. The predictions from the implementation of Peter
Stafford stop at 2 seconds while Frank Scherbaum has extrapolated the Akkar & Cagnan
(2010) predictions in this instance. This extrapolation is not related to the implementation of the
model itself and there was no reason to check this as a result. In addition to the visual
comparison shown in Figure 5.1, the numerical values were also compared directly.
5.2 IRVT-Generated Fourier Spectra
The Vs-Kappa correction coefficients were computed using an approach that first requires that
an estimate of the underlying Fourier spectrum for a given response spectrum be available.
However, there is not a unique mathematical mapping between the Fourier spectrum and the
response spectrum and so any inversion approach to obtain a Fourier spectrum using only
information about the corresponding response spectrum is also non-unique. The specific
Fourier spectrum that is obtained from a given inversion depends upon the implementation
details of the inversion procedure.
For the Thyspunt study, Frank Scherbaum derived the Vs-Kappa adjustment factors by inverting
the response spectral predictions of the backbone ground-motion models using IRVT. The
process is to start with a prediction of the acceleration response spectrum for a particular
scenario (using the Mathematica code validated as described in Section 5.1) and to invert this
using IRVT to obtain the Fourier spectrum of acceleration for this same scenario. The shape
of this Fourier spectrum at relatively high frequencies can then be used to infer the value of
Kappa that is implicitly included within the backbone models. That is, we infer the host value of
Kappa from the shape of the high-frequency Fourier spectral ordinates.
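The slope-based inference of Kappa rests on the high-frequency decay form A(f) ∝ exp(−πκf) commonly attributed to Anderson & Hough (1984), so κ follows from the slope of ln A(f) against f over the high-frequency range (5-15 Hz in this study). A minimal sketch, with a made-up noise-free spectrum:

```python
import math

def kappa_from_fas(freqs, fas, f_lo=5.0, f_hi=15.0):
    """Estimate kappa from the decay of the Fourier amplitude spectrum:
    with A(f) ~ A0 * exp(-pi*kappa*f), the slope of ln A(f) against f
    over [f_lo, f_hi] is -pi*kappa, fitted here by least squares."""
    pts = [(f, math.log(a)) for f, a in zip(freqs, fas) if f_lo <= f <= f_hi]
    n = len(pts)
    mean_f = sum(f for f, _ in pts) / n
    mean_y = sum(y for _, y in pts) / n
    slope = (sum((f - mean_f) * (y - mean_y) for f, y in pts)
             / sum((f - mean_f) ** 2 for f, _ in pts))
    return -slope / math.pi

# Noise-free synthetic spectrum with kappa = 0.04 s (made up for checking)
freqs = [0.5 * i for i in range(1, 61)]              # 0.5 to 30 Hz
fas = [2.0 * math.exp(-math.pi * 0.04 * f) for f in freqs]
kappa = kappa_from_fas(freqs, fas)                   # recovers ~0.04
```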
Because this process is used to infer values for physical quantities it is important to be confident
that the inversion procedure is robust with respect to the implementation of the method. For that
reason, two independent implementations of the IRVT approach were compared. One
implementation was made in Mathematica by Frank Scherbaum (and this was the
implementation actually used for the development of the adjustment factors) and the inverted
Fourier spectra from this implementation were compared to an entirely independent
implementation made in C++ by Albert Kotke and Ellen Rathje. This latter C++ implementation
is that contained within the program STRATA, which is discussed in detail in Section 6.1.
While the conceptual framework behind each of these implementations is the same, the
practical methods used by each set of developers are different. The main differences lie in three
places: (1) how the initial estimate of the Fourier spectrum is obtained; (2) what modifications, if
any, are made at the low-frequency end of the spectrum; and (3) what modification, if any, is
made at the high-frequency end of the spectrum. Of these three, the approach for obtaining the
initial estimate of the Fourier spectrum has an impact on how quickly the inversion procedure
converges rather than on the actual shape of the inverted spectrum. The low frequency
adjustments can have important implications for other applications, but here the main interest is
upon the high frequency end of the spectrum and so the details of the inversion schemes for
low frequencies are not of great interest. At the high-frequency end of the spectrum, the
frequency range from 5-15 Hz was used as the range for inferring the slope of the spectra, and
hence for obtaining Kappa values. It is therefore this frequency range that is of greatest interest
when making comparisons of the implementations.
In addition to the high frequency end of the spectrum, the shape of the Fourier spectrum over
the intermediate frequency range is also important because the IRVT approach is based upon
the use of spectral moments. These moments are found from integrating the Fourier spectrum
(or the product of the square of this spectrum and a generally increasing function of frequency)
over the full frequency range. For low-order moments, the greatest contributions come from
the intermediate frequency range where the amplitudes are greatest, but the contributions
then move to higher frequencies as higher-order moments are considered. When making
comparisons between the Fourier spectra it is therefore important to look both at the specific
range from 5-15 Hz, as well as at lower frequencies down to below 1 Hz. Figure 5.2 shows a
comparison between the Fourier spectra that were obtained by inverting the response spectrum
shown in Figure 5.1 for the model of Akkar & Cagnan (2010).
Figure 5.2. Comparison of the Fourier spectra obtained for the GMPE of Akkar & Cagnan (2010) by Frank Scherbaum (Mathematica) and Ellen Rathje (C++) for the earthquake scenario described in Section 5.1.
Similar figures for the other two backbone models are shown in Figures 5.3 (Abrahamson &
Silva) and 5.4 (Chiou & Youngs).
Figure 5.3. Comparison of the Fourier spectra obtained for the GMPE of Abrahamson & Silva by Frank
Scherbaum (Mathematica) and Ellen Rathje (C++) for the earthquake scenario described in Section 5.1.
Figure 5.4. Comparison of the Fourier spectra obtained for the GMPE of Chiou & Youngs by Frank
Scherbaum (Mathematica) and Ellen Rathje (C++) for the earthquake scenario described in Section 5.1.
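The spectral moments referred to in the discussion above take, in the RVT framework, the form m_k = 2∫(2πf)^k |A(f)|² df. The short sketch below (with a made-up smooth spectrum) shows numerically that the characteristic frequency associated with the moments rises with moment order, which is why the high-frequency end matters more for higher-order moments:

```python
import math

def spectral_moment(freqs, fas, k):
    """k-th spectral moment m_k = 2 * integral of (2*pi*f)**k * |A(f)|**2 df,
    evaluated with the trapezoidal rule."""
    y = [2.0 * (2.0 * math.pi * f) ** k * a * a for f, a in zip(freqs, fas)]
    return sum(0.5 * (y[i] + y[i + 1]) * (freqs[i + 1] - freqs[i])
               for i in range(len(freqs) - 1))

# Made-up smooth acceleration spectrum: rising low-frequency branch,
# peak at intermediate frequencies, exponential high-frequency decay.
freqs = [0.1 * i for i in range(1, 501)]             # 0.1 to 50 Hz
fas = [f / (1.0 + (f / 5.0) ** 2) * math.exp(-math.pi * 0.02 * f)
       for f in freqs]

m0 = spectral_moment(freqs, fas, 0)
m2 = spectral_moment(freqs, fas, 2)
m4 = spectral_moment(freqs, fas, 4)

# Characteristic frequencies increase with moment order, i.e. the
# higher-order moments are controlled by the high-frequency end.
f_z = math.sqrt(m2 / m0) / (2.0 * math.pi)   # zero-crossing frequency
f_e = math.sqrt(m4 / m2) / (2.0 * math.pi)   # extrema frequency
```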
What is clear from consideration of Figures 5.2 through to 5.4 is that over the frequency range
of interest, the two independent implementations of the IRVT framework are providing very
consistent results. The implication is that the values of Kappa that would be inferred from the
host GMPEs using either implementation would be very similar, and that both
implementations would convey essentially the same information about the physical
character of the near-surface diminution.
While figures, such as those presented in this section, were used to judge the degree of
agreement between the models, the numerical values of the spectral ordinates were also
computed and compared. No criteria were established prior to running these inversions for the
purpose of defining an acceptable degree of agreement, the reason being that such
comparisons have, to the knowledge of the authors, never previously been made. Indeed, the
intuitive expectation prior to conducting this validation exercise was that the algorithms used in
each implementation were sufficiently different that greater differences were to be expected.
That the results are in such close agreement over the frequency range of interest implies that
the IRVT approach, when used for the purpose intended in this study, is robust with respect to
the particular implementation of the method.
5.3 Application of Vs-Kappa Adjustments
The IRVT approach implemented by Frank Scherbaum results in the generation of Vs-Kappa
adjustment factors that are subsequently applied to the backbone ground-motion models. For
the hazard calculations, only a particular set of response periods are considered and this set is
not a direct subset of the response periods for which the Vs-Kappa adjustment factors are
obtained. The majority of response periods required for the TNSP either have Vs-Kappa
adjustment values provided directly, or have coefficients for response periods that are extremely
close to those desired. In order to ensure that we have adjustment factors specifically
prescribed at the periods required for the hazard calculations within the TNSP an interpolation
procedure was applied.
The function ‘splinefun’ included with the default ‘stats’ package of the R statistical programming
environment was used to develop cubic spline interpolants that were fitted to the Vs-Kappa
adjustments provided to Peter Stafford from Frank Scherbaum. Examples of the fits that were
obtained for each of the three backbone models are provided in Figures 5.5 to 5.7.
Figure 5.5. Vs-Kappa adjustment factors developed by Frank Scherbaum (blue) for the model of Akkar &
Cagnan (2010), and the interpolated values at the TNSP response periods (Peter Stafford). The three curves, consistently from top to bottom, are the factors for the target Kappa of 0.001, 0.003 and 0.005.
Vertical grey lines denote the TNSP periods.
Figure 5.6. Vs-Kappa adjustment factors developed by Frank Scherbaum (blue) for the model of
Abrahamson & Silva (2008), and the interpolated values at the TNSP response periods (Peter Stafford). The three curves, consistently from top to bottom, are the factors for the target Kappa of 0.001, 0.003 and
0.005. Vertical grey lines denote the TNSP periods.
Figure 5.7. Vs-Kappa adjustment factors developed by Frank Scherbaum (blue) for the model of Chiou &
Youngs (2008), and the interpolated values at the TNSP response periods (Peter Stafford). The three curves, consistently from top to bottom, are the factors for the target Kappa of 0.001, 0.003 and 0.005.
Vertical grey lines denote the TNSP periods.
The spline interpolation here does not make use of any physical constraints and is a pure data-
fitting exercise. The ‘splinefun’ function in R was used directly, without modification, and the
abscissa values were passed as (natural) logarithmic periods. During the process of checking
these interpolated values, Adrian Rodriguez-Marek independently fit a cubic spline to the same
data using the built-in ‘spline’ function within the MATLAB program. His numbers, while
comparable, did not match those from Stafford, but this is simply a result of the two
implementations of the spline algorithms in R and MATLAB being different. In particular, the
manner in which they treat data points near the end of the range is different. Given that the
interpolations are performed on very closely-spaced data values and that no physical
constraints are imposed upon the fitting, there is no reason to prefer one set of interpolations
over another. Equally, very similar results would have been obtained using an alternative
interpolation scheme, such as log-linear interpolation.
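As an illustration of the log-linear alternative mentioned above, interpolation of adjustment factors on a logarithmic period axis can be sketched as follows (the tabulated factor values are invented for illustration; the project itself used R's ‘splinefun’ on log-period):

```python
import math

def loglinear_interp(periods, factors, target_period):
    """Interpolate an adjustment factor at target_period, working
    linearly in ln(period); the factors themselves are interpolated
    linearly (interpolating ln(factor) would be another valid choice)."""
    x = [math.log(t) for t in periods]
    xt = math.log(target_period)
    for i in range(len(x) - 1):
        if x[i] <= xt <= x[i + 1]:
            w = (xt - x[i]) / (x[i + 1] - x[i])
            return (1.0 - w) * factors[i] + w * factors[i + 1]
    raise ValueError("target period outside the tabulated range")

# Invented adjustment factors at tabulated periods (illustration only)
periods = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
factors = [1.30, 1.25, 1.18, 1.10, 1.05, 1.02, 1.01, 1.00]
```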
The overall process for developing the final backbone models for use within the PSHA
calculations involved the following Verification and Validation steps:
1. The earthquake scenario used for defining the Vs-Kappa adjustments was identified.
2. Peter Stafford computed any meta-data associated with this earthquake scenario that
did not come directly from the disaggregation information (such as obtaining consistent
distance metrics, and depth to the top of rupture information). The full set of meta-data
was then passed to Frank Scherbaum.
3. Frank Scherbaum then computed the acceleration response spectra using his
Mathematica implementations of the backbone ground-motion models. Peter Stafford
also computed the response spectra using his implementations in R and checked these
against those from Scherbaum.
4. Once agreement was obtained from the previous step, Frank Scherbaum applied his
Mathematica implementation of IRVT to obtain the corresponding Fourier spectra for
each ground-motion model. At the same time, Ellen Rathje used her C++
implementation to also compute the Fourier spectra.
5. After checking that the two implementations of the IRVT approach were yielding
sufficiently similar results, Frank Scherbaum then proceeded to derive the Vs-Kappa
adjustment factors using his Mathematica routines.
6. The final Vs-Kappa adjustment factors were then obtained for the TNSP response
periods by Peter Stafford using the R implementation of the spline interpolation. The
interpolated values were checked against an independent spline fitting in MATLAB
performed by Adrian Rodriguez-Marek.
7. The final sets of adjustment factors, and all other information required for
implementation of the median backbone models were compiled with the HID document
and were checked by all members of the GMC TI Team before being handed over to the
hazard calculation team.
5.4 Sigma Values
The model for standard deviation or sigma for the TNSP project is given by Eq. (5.1):
σTNSP = √[(X·τ)² + (X·φss)² + (δφS2S)²]          (5.1)
where τ is the between-event standard deviation, φss is the single-station phi, δφS2S is a
correction term to account for site-to-site variability, and X is a multiplier that quantifies the
epistemic uncertainty in the sigma model. The components of Eq. (5.1) are given in Table 5.1,
both for a magnitude-independent branch (HOMO) and a magnitude-dependent branch (HETERO),
and the values of X and δφS2S are given in Tables 5.2 and 5.3, respectively.
Section 5.4.1 of this report presents the V&V for the φss and τ values, and the V&V for the
computation of the total sigma is given in Section 5.4.2.
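Eq. (5.1) can be checked directly against the tabulated values. The following Python sketch is our own illustration, not project code; the input values are taken from Tables 5.1, 5.3 and 5.6, and the two printed results should reproduce entries of Table 5.7:

```python
import math

def sigma_tnsp(x, tau, phi_ss, dphi_s2s):
    """Eq. (5.1): X scales tau and phi_ss but not the site-to-site term."""
    return math.sqrt((x * tau) ** 2 + (x * phi_ss) ** 2 + dphi_s2s ** 2)

# Central HOMO branch, T = 0.01 s: X = 1.0, tau = 0.3037 (Table 5.6),
# phi_ss = 0.45 (Table 5.1), dphi_S2S = 0 (Table 5.3)
print(round(sigma_tnsp(1.0, 0.3037, 0.45, 0.0), 4))        # 0.5429 (Table 5.7)

# High HOMO branch, T = 2 s: X = 1.16, tau = 0.4023, phi_ss = 0.4206,
# dphi_S2S = 0.0999
print(round(sigma_tnsp(1.16, 0.4023, 0.4206, 0.0999), 4))  # 0.6825 (Table 5.7)
```

Note that applying X to the δφS2S term as well would give 0.6850 rather than 0.6825 for the second case, confirming that the multiplier acts only on τ and φss.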
Table 5.1. Definition of the components of the magnitude-independent (HOMO) and magnitude-dependent (HETERO) branches of the TNSP sigma model (Eq. 5.1)

Component   HOMO                             HETERO
φss         0.45 (a)                         0.8 of φ value from AS08 (b)
τ           Value from CY08 (c) at Mw 6.0    Value from CY08
δφS2S       Table 5.3                        Table 5.3

a. Except at T = 2 sec, where φss is equal to 0.4206
b. AS08 denotes Abrahamson & Silva (2008). Vs30 set higher than VLIN so that non-linear site response effects are not included, and with the flag for 'measured' Vs30 value
c. CY08 denotes Chiou & Youngs (2008)
Table 5.2. Definition of epistemic uncertainty branches for the sigma model

Sub-branch   X      Weight
High         1.16   0.2
Central      1.0    0.6
Low          0.84   0.2
Table 5.3. Values of δφS2S to be used in Eq. (5.1)

Period (s)   0.01   0.02   0.03   0.04   0.05   0.1   0.2   0.4      1        2
δφS2S        0      0      0      0      0      0     0     0.0717   0.0991   0.0999
5.4.1 Validation and Verification for φss and τ
The value of φss=0.45 (and φss = 0.4206 at T = 2 seconds, where the φss was modified such that
the magnitude-independent model matches the magnitude-dependent model at M=5.7) for the
magnitude-independent branch is obtained based on φss values computed from regression
analyses presented in Rodriguez-Marek et al. (2012). These regressions have been checked by
repeating the analyses through independent regressions. Additional checks were performed by
comparing the resulting models with other publications that use a similar dataset. A sample of
these comparisons is shown in Figure 5.8. The value 0.45 is the rounded average of the data
for spectral periods up to T = 1 second. These data are shown numerically in Table 5.4.
Figure 5.8. Comparison of φss values obtained in Rodriguez-Marek et al. (2012, denoted by Rea12) and Lin et al. (2011, denoted by Lea11). Where only one model is seen, the data plot on top of each other
(except for T = 0.2 sec, where there are no data in Lin et al., 2011).
Table 5.4. φss values from Rodriguez-Marek et al. (2012)

Period (s)   φss
0.01         0.4438
0.1          0.4383
0.2          0.464
0.3          0.4663
0.5          0.4584
1.0          0.4477
Average      0.4531
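As a trivial arithmetic check of the average and rounding described above (values from Table 5.4):

```python
# phi_ss values from Table 5.4, spectral periods 0.01 s to 1.0 s
phi_ss = [0.4438, 0.4383, 0.464, 0.4663, 0.4584, 0.4477]
avg = sum(phi_ss) / len(phi_ss)
print(round(avg, 4))  # 0.4531, rounded to 0.45 for the HOMO branch
```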
The GMPEs used for the sigma models are the Chiou & Youngs (2008) and the Abrahamson &
Silva (2008) models. The implementation of the sigma values from these models was checked
by comparisons with an independent implementation of the models by Dr. Peter Stafford. In
addition, the Abrahamson & Silva (2008) standard deviations used in the sigma model were
compared as follows. The φ model in Abrahamson & Silva (2008) is given in Equations 23 to 28
in that publication, but for Vs30 ≥ VLIN it reduces to:
φ = s1   for M ≤ 5
φ = s2   for M ≥ 7                                                           (5.2)
Figure 5.9 is a copy of Table 6 from Abrahamson & Silva (2008) listing the values of s1 and s2.
These values are reproduced in Table 5.5 and compared to the values proposed for the TNSP
model. Note that the TNSP model values are identical to the scaled Abrahamson & Silva (2008)
values.
Figure 5.9. Reproduction of Table 6 from Abrahamson & Silva (2008)
Table 5.5. φ values reproduced from Table 6 in Abrahamson & Silva (2008), and φss values for the TNSP sigma computation. Columns s1 and s2 are from Abrahamson & Silva (2008) for Vs30 measured; the scaled columns are 0.8 times those values; the final two columns are the TNSP φss values

Period (s)   s1      s2      0.8*s1   0.8*s2   TNSP M=5   TNSP M=7
0.01         0.576   0.453   0.4608   0.3624   0.4608     0.3624
0.02         0.576   0.453   0.4608   0.3624   0.4608     0.3624
0.03         0.591   0.461   0.4728   0.3688   0.4728     0.3688
0.04         0.602   0.466   0.4816   0.3728   0.4816     0.3728
0.05         0.610   0.471   0.4880   0.3768   0.4880     0.3768
0.1          0.617   0.485   0.4936   0.3880   0.4936     0.3880
0.2          0.614   0.495   0.4912   0.3960   0.4912     0.3960
0.4          0.608   0.501   0.4864   0.4008   0.4864     0.4008
1.0          0.594   0.503   0.4752   0.4024   0.4752     0.4024
2.0          0.544   0.491   0.4352   0.3928   0.4352     0.3928
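The 0.8 scaling check in Table 5.5 can be sketched in a few lines of Python. This is our own helper, not project code; the linear interpolation in magnitude between 5 and 7 follows the published Abrahamson & Silva (2008) model:

```python
def phi_as08(m, s1, s2):
    """Eq. (5.2) endpoints, with linear interpolation in M between 5 and 7
    as in Abrahamson & Silva (2008), for Vs30 >= VLIN."""
    if m <= 5.0:
        return s1
    if m >= 7.0:
        return s2
    return s1 + (s2 - s1) * (m - 5.0) / 2.0

# T = 0.01 s: s1 = 0.576, s2 = 0.453 ('measured' Vs30 flag, Figure 5.9)
print(round(0.8 * phi_as08(5.0, 0.576, 0.453), 4))  # 0.4608 (Table 5.5, M=5)
print(round(0.8 * phi_as08(7.0, 0.576, 0.453), 4))  # 0.3624 (Table 5.5, M=7)
```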
The τ values are obtained from Chiou & Youngs (2008). The τ model in Chiou & Youngs (2008)
is given in Equation 19 in that publication. For the three magnitudes needed to compute the τ
values for the TNSP model (M=5, M=6, and M=7), this equation reduces to:
τ = τ1              for M ≤ 5
τ = (τ1 + τ2)/2     for M = 6
τ = τ2              for M ≥ 7                                                (5.3)
The values of τ1 and τ2 are given in Table 4 in Chiou & Youngs (2008), which is copied in
Figure 5.10. Table 5.6 reproduces these values and shows the average of τ1 and τ2. These
values are compared with the values proposed for the TNSP model. Note that the TNSP model
values are identical to the respective Chiou & Youngs (2008) values.
Figure 5.10. Reproduction of Table 4 from Chiou & Youngs (2008)
Table 5.6. Values for τ1 and τ2 reproduced from Table 4 in Chiou & Youngs (2008), and values of τ for the TNSP model

Period (s)   τ1       τ2       (τ1+τ2)/2   Hetero (M5)   Hetero (M7)   Homo
0.01         0.3437   0.2637   0.3037      0.3437        0.2637        0.3037
0.02         0.3471   0.2671   0.3071      0.3471        0.2671        0.3071
0.03         0.3603   0.2803   0.3203      0.3603        0.2803        0.3203
0.04         0.3718   0.2918   0.3318      0.3718        0.2918        0.3318
0.05         0.3848   0.3048   0.3448      0.3848        0.3048        0.3448
0.1          0.3835   0.3152   0.34935     0.3835        0.3152        0.3493
0.2          0.3601   0.3076   0.33385     0.3601        0.3076        0.3338
0.4          0.3351   0.2984   0.31675     0.3351        0.2984        0.3167
1            0.3577   0.3419   0.3498      0.3577        0.3419        0.3498
2            0.4023   0.4023   0.4023      0.4023        0.4023        0.4023
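Eq. (5.3) can be sketched as follows (our own helper, not project code; the linear magnitude scaling between M 5 and 7 follows Chiou & Youngs, 2008, so the M = 6 value is the average of τ1 and τ2):

```python
def tau_cy08(m, tau1, tau2):
    """Eq. (5.3): Chiou & Youngs (2008) tau, linear in M between 5 and 7."""
    if m <= 5.0:
        return tau1
    if m >= 7.0:
        return tau2
    return tau1 + (tau2 - tau1) * (m - 5.0) / 2.0

# T = 0.01 s: tau1 = 0.3437, tau2 = 0.2637 (Figure 5.10)
print(round(tau_cy08(6.0, 0.3437, 0.2637), 4))  # 0.3037 = HOMO tau (Table 5.6)
```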
5.4.2 Validation and Verification for the TNSP sigma computations
The sigma models were presented in tabular format, giving sigma values for all periods of interest
and for each of the logic-tree branches (reproduced in Table 5.7 below).
Table 5.7. Values of sigma for the TNSP model
Sigma Model \ Period (s)   0.01 0.02 0.03 0.04 0.05 0.1 0.2 0.4 1 2
High HOMO 0.6298 0.6320 0.6407 0.6486 0.6576 0.6608 0.6500 0.6424 0.6685 0.6825
HOMO 0.5429 0.5448 0.5524 0.5591 0.5669 0.5697 0.5603 0.5550 0.5785 0.5905
Low HOMO 0.4560 0.4576 0.4640 0.4696 0.4762 0.4785 0.4707 0.4678 0.4889 0.4990
High HETERO (M5) 0.6668 0.6692 0.6895 0.7058 0.7209 0.7251 0.7065 0.6889 0.6970 0.6947
High HETERO (M7) 0.5199 0.5222 0.5373 0.5492 0.5622 0.5799 0.5817 0.5841 0.6205 0.6598
HETERO (M5) 0.5749 0.5769 0.5944 0.6084 0.6215 0.6251 0.6091 0.5950 0.6030 0.6010
HETERO (M7) 0.4482 0.4502 0.4632 0.4734 0.4846 0.4999 0.5014 0.5048 0.5373 0.5711
Low HETERO (M5) 0.4829 0.4846 0.4993 0.5111 0.5220 0.5251 0.5116 0.5013 0.5093 0.5078
Low HETERO (M7) 0.3765 0.3782 0.3891 0.3977 0.4071 0.4199 0.4212 0.4258 0.4545 0.4827
These values are computed using Eq. (5.1) along with the information in Tables 5.1, 5.2, and
5.3. These values were computed independently by Dr. Adrian Rodriguez-Marek and Dr. Peter
Stafford, and both implementations coincided to within four decimal digits. The comparison
done by Dr. Peter Stafford is shown in Table 5.8 (from file TNSP Final GMC HID_Rev 2 PJS).
This table shows the difference between the sigma values computed by Dr. Stafford, and the
values in the HID. Differences in the fifth decimal place or smaller are not significant. The only
differences larger than 5E-05 occur at a period of 2 seconds for the HOMO branches
(highlighted cells in Table 5.8). These differences, in the fourth decimal place, are due to a
slightly different sequence of averaging the models such that the magnitude-independent model
matches the magnitude-dependent model at M = 5.7.
Table 5.8. Evaluation of the sigma values performed by Dr. Peter Stafford. Shown below are the
differences between the sigma values obtained by Dr. Stafford, and the sigma values in the HID (From email exchange with Dr. Stafford, October 19, 2012)
Branch \ Period (s)   0.01     0.02     0.03     0.04     0.05     0.1      0.2      0.4      1        2
High HOMO             -4E-05   -3E-05   3E-05    -5E-05   2E-05    4E-05    -3E-05   -4E-05   5E-05    -2E-04
HOMO                  -6E-06   3E-06    -5E-05   -1E-06   1E-05    -1E-05   2E-05    -5E-05   2E-05    -1E-04
Low HOMO              3E-05    3E-05    -2E-05   4E-05    4E-06    4E-05    -3E-05   -2E-05   2E-05    -1E-04
High HETERO (M5)      -7E-06   -5E-05   -2E-05   -4E-05   2E-05    5E-05    5E-06    5E-06    3E-05    5E-06
High HETERO (M7)      -1E-06   6E-06    -5E-05   -4E-05   2E-05    5E-05    -4E-05   -5E-05   -1E-05   3E-05
HETERO (M5)           5E-05    -2E-05   2E-05    -2E-06   -5E-06   -5E-05   -4E-05   -6E-06   -2E-05   2E-05
HETERO (M7)           3E-05    2E-05    4E-05    4E-05    2E-06    2E-05    3E-05    1E-06    -5E-05   -3E-05
Low HETERO (M5)       6E-06    1E-05    -4E-05   3E-05    -3E-05   -4E-05   7E-06    7E-06    5E-05    -4E-05
Low HETERO (M7)       -5E-05   2E-05    3E-05    3E-05    -2E-05   -4E-07   3E-06    1E-05    -1E-05   5E-05
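The style of check summarised in Table 5.8 amounts to a simple tolerance comparison. The sketch below uses invented values for the independent re-computation; only the 5E-05 threshold mirrors the discussion above:

```python
def flag_discrepancies(table_a, table_b, periods, tol=5e-5):
    """Periods at which two independently computed sigma tables differ by
    more than tol (here 5E-05, i.e. beyond the fourth decimal place)."""
    return [t for t, a, b in zip(periods, table_a, table_b)
            if abs(a - b) > tol]

periods = [0.01, 0.1, 1.0, 2.0]
hid_vals = [0.5429, 0.5697, 0.5785, 0.5905]  # central HOMO row of Table 5.7
indep = [0.5429, 0.5697, 0.5785, 0.5904]     # hypothetical re-computation
print(flag_discrepancies(hid_vals, indep, periods))  # [2.0]
```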
6 V&V for SITE RESPONSE CALCULATIONS
The incorporation of site amplification into the hazard assessment at TNSP involves calculating
the site response for the local rock/soil conditions and convolving the site amplification with the
bedrock hazard curves to generate the hazard curves for the surface elevation. The site
response is performed using the site response computer code STRATA (version 394, Kottke &
Rathje, 2008), and the convolution is performed using a spreadsheet program. The validation and
verification of these calculations are described below.
6.1 STRATA Software for Site Response Calculations
Site response calculations for seismic design are almost exclusively performed using one-
dimensional (1D) analysis in which only vertically propagating shear waves and vertical
variations in shear wave velocity are taken into account. While three-dimensional (3D) wave
propagation through 3D variations in shear wave velocity certainly occurs in nature, 1D analysis
has been shown to adequately predict site amplification when compared to field recordings
derived from borehole and surface sensors (e.g., Lee et al., 2006).
STRATA performs 1D wave propagation in the frequency domain using a linear elastic transfer
function that converts a Fourier Amplitude Spectrum in the bedrock to a Fourier Amplitude
Spectrum at the ground surface. The transfer function assumes a layered site and is computed
based on the layer thicknesses and the properties (i.e., shear modulus (G), damping ratio (D),
and mass density) of the layers. The transfer function also assumes a linear viscoelastic
response for the layers. The nonlinear response of the soil/rock layers is taken into account
using the equivalent-linear approach, first introduced by Schnabel et al. (1972). The equivalent-
linear approach assigns properties (i.e., G, D) to each layer that are compatible with the shear
strains induced by the input motion. Shear modulus reduction and damping curves describe the
variation of G and D with shear strain, and an iterative approach is used to assign the
appropriate strain-compatible properties.
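To illustrate the frequency-domain transfer-function approach described above, consider the textbook case of a single uniform, undamped layer over an elastic half-space (see e.g. Kramer, 1996). This sketch is emphatically not STRATA itself, and the layer properties are invented:

```python
import math

def amplification(f, vs_layer, vs_rock, h, rho_layer=1.0, rho_rock=1.0):
    """Linear-elastic (undamped) amplification |H(f)| for a single uniform
    soil layer of thickness h over an elastic half-space, for vertically
    propagating SH waves."""
    k_h = 2.0 * math.pi * f * h / vs_layer                  # kH, dimensionless
    alpha = (rho_layer * vs_layer) / (rho_rock * vs_rock)   # impedance ratio
    return 1.0 / math.sqrt(math.cos(k_h) ** 2 + (alpha * math.sin(k_h)) ** 2)

# Invented layer: 50 m of Vs = 200 m/s soil over Vs = 800 m/s rock
f0 = 200.0 / (4.0 * 50.0)   # fundamental frequency = Vs/4H = 1.0 Hz
print(amplification(f0, 200.0, 800.0, 50.0))     # 4.0 = 1/alpha at resonance
print(amplification(0.001, 200.0, 800.0, 50.0))  # ~1.0 at very low frequency
```

STRATA generalises this to many layers, adds material damping via complex shear-wave velocities, and wraps the transfer function in the equivalent-linear iteration described above.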
The verification of STRATA was performed using the independent site response code Shake91
(Idriss & Sun, 1992). Shake91 is a modified version of the original equivalent-linear site
response program Shake (Schnabel et al., 1972), and it is commonly used in practice. Thus,
Shake91 is the ideal code to use for verification. The verification involved performing linear
elastic and equivalent-linear analyses for a hypothetical test site using both codes. The test site
represents a 91-m thick alluvium site over bedrock and the shear wave velocity profile is shown
in Figure 6.1. The shear modulus reduction and damping curves used for the linear-elastic (LE)
and equivalent-linear (EQL) analyses are shown in Figure 6.2. The input motion is the VAS090
recording from Vasquez Rocks Park during the 1994 Northridge, California earthquake. The
response spectrum for this input motion is shown in Figure 6.3.
Figure 6.1. Shear-wave velocity profile used in verification analyses
Figure 6.2. Shear modulus reduction and damping curves used in verification analyses
Figure 6.3. Response spectrum of the VAS090 input motion used in verification analyses
Figure 6.4 shows the computed surface acceleration response spectra for STRATA and
Shake91 for both linear elastic (LE) and equivalent-linear (EQL) analyses. The LE analyses use
the small-strain, linear-elastic properties to compute the response and therefore no iterations
are required to find the strain-compatible properties. The EQL analyses are initiated with the
linear-elastic properties but iterations are used to identify the shear modulus and damping ratio
for each layer that are compatible with the induced strains. The computed surface response
spectra are very similar and this result verifies that STRATA accurately computes equivalent-
linear site response.
Figure 6.4. Surface response spectra calculated by STRATA and Shake91 for Linear Elastic (LE) and
Equivalent-Linear (EQL) analyses
6.2 Convolution of Hazard Calculations with Site Amplification Factors
The bedrock hazard curves are convolved with site amplification curves to generate the hazard
curves for the surface elevation. This approach to developing site-specific hazard curves for the
surface elevation is based on the recommendations in McGuire et al. (2001) and Bazzurro &
Cornell (2004), and is one of the few approaches that incorporate the variability and uncertainty
in the site response analysis.
The convolution was performed using an Excel spreadsheet developed by Dr Ellen Rathje. It
uses as input: (1) the bedrock hazard curve at a specific period, (2) an amplification function
that describes the variation of amplification factor (AF) with input spectral acceleration intensity,
and (3) the standard deviation of the natural log of AF. The output is the hazard curve at the
surface elevation at a specific period. Dr Adrian Rodriguez-Marek developed an independent
MATLAB code to perform the same convolution calculation and the resulting surface elevation
hazard curves were compared. The comparison was performed using the preliminary bedrock
hazard curve and the preliminary site amplification function for T = 0.01 s. The resulting surface
elevation hazard curves are shown in Figure 6.5, and the curves are almost identical.
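The convolution follows the logic of Bazzurro & Cornell (2004): the AFE of exceeding a surface amplitude z accumulates, over each rock amplitude x, the probability that the lognormally distributed AF exceeds z/x, weighted by the occurrence rate of x. A minimal numerical sketch with an invented rock hazard curve and amplification function (not the project inputs):

```python
import math

def surface_hazard(z, rock_sa, rock_afe, median_af, sigma_ln_af):
    """AFE of surface Sa exceeding z, from convolving a rock hazard curve
    with a lognormal amplification function (after Bazzurro & Cornell, 2004)."""
    total = 0.0
    for i in range(len(rock_sa) - 1):
        x = math.sqrt(rock_sa[i] * rock_sa[i + 1])   # bin midpoint (log scale)
        d_rate = rock_afe[i] - rock_afe[i + 1]       # occurrence rate in bin
        # P(AF > z/x), with AF lognormal about median_af(x)
        eps = (math.log(z / x) - math.log(median_af(x))) / sigma_ln_af
        total += d_rate * 0.5 * math.erfc(eps / math.sqrt(2.0))
    return total

# Invented inputs: power-law rock hazard curve and a constant median AF of 2.0
rock_sa = [0.01 * 1.2 ** i for i in range(60)]
rock_afe = [1e-2 * (s / 0.01) ** -2.0 for s in rock_sa]
afe = surface_hazard(0.2, rock_sa, rock_afe, lambda x: 2.0, 0.3)
print(afe)  # close to the rock AFE at 0.2/2 = 0.1 g (1e-4), slightly inflated
```

The slight inflation relative to simply shifting the rock curve by the median AF is exactly the effect of the AF variability that this approach is designed to capture.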
Figure 6.5. Surface hazard curves computed using the spreadsheet of Dr Ellen Rathje (EMR) and MATLAB
code of Dr Adrian Rodriguez-Marek (ARM)
7 V&V for HAZARD CALCULATIONS
This is the most important chapter of this report, since the hazard calculations and their output
are the final deliverable of the PSHA project (the hazard results are subsequently convolved
with the site amplification functions, as described in Section 6.2, but those are relatively
straightforward calculations). The first section of this chapter explains exactly what is, and
indeed what is not, the focus of the V&V activities for the hazard calculations, and this maps out
the structure for the remaining sections.
7.1 The Nature of Checks on PSHA Calculations
An understanding of this section is essential to making sense of the remainder of this chapter,
which cannot be properly appreciated without a clear view of the purpose and focus of the
activities it describes.
The purpose of the V&V activities on the hazard assessment overall is to provide assurance
that the calculations undertaken are appropriate and correct for the specific goal of delivering a
seismic characterisation of the Thyspunt site in terms of vibratory ground motion due to natural
earthquakes. To fully meet this objective, seven sequential steps can be identified:
1. To establish that PSHA is an appropriate approach to the characterisation of the ground
shaking hazard at the site
2. To select a software package that is certified as being a correct and valid
implementation of the PSHA procedure
3. To define the input to the PSHA calculations that gives a best estimate of the hazard
together with estimates of the associated uncertainty
4. To demonstrate that any calculations involved in defining inputs to the PSHA
calculations have been carried out appropriately and correctly
5. To check that the input parameters and relationships have been correctly implemented
in the PSHA software
6. To provide clear and documented control of all input and output files related to the PSHA
calculations
7. To check the post-processing applied to the output from the PSHA calculations in order
to combine the results for different input sets and calculate means and fractiles of the
hazard estimates, as well as performing disaggregations
Requirement #1 is satisfied by adherence to the guidelines for seismic hazard assessment in
USNRC Regulatory Guide RG 1.208 (USNRC, 2007), which specifically requires the design
earthquake motions be calculated using PSHA. Moreover, NNR requirement documents specify
external hazards in terms of annual probabilities of exceedance, which in itself makes PSHA
necessary.
Requirement #2 is satisfied by selecting the FRISK88 software for the formal project
calculations conducted at CGS. There are several software packages available for performing
PSHA calculations, including several that are free to install and use, but FRISK88 was selected
precisely because it has undergone QA for nuclear applications. This is discussed in Section
7.2.
Requirement #3 is satisfied by virtue of the adoption and application of the SSHAC Level 3
process for conducting the PSHA, as discussed in Chapter 3. As explained in Section 3.5, PPRP
concurrence obviates the need for any additional QA with regards to the SSC and GMC models
as appropriate representations for the hazard input to capture best estimates and associated
uncertainties. The only exception to this is that any calculations involved in developing
numerical values directly used in the hazard calculations—or directly applied to the hazard
results, as in the case of the site amplification functions (Chapter 6)—need to be subject to V&V.
This has been covered in detail in Chapters 4, 5 and 6, which collectively address requirement
#4 from the list above.
Requirement #5 is addressed in Sections 7.3 and 7.4 below. Requirement #6 is addressed in
Sections 7.5 through to 7.8, and requirement #7 is addressed in Section 7.9.
7.2 Choice of FRISK88 Software
As noted above, the FRISK88 software was chosen—despite the considerable cost for a
licence to install and operate this program—precisely because of its nuclear credentials (see
Figure 7.1). The FRISK88 software has been used in many nuclear applications, including in
the United States and in Switzerland, and it has been accepted by the nuclear regulatory bodies
in those countries as a program for performing PSHA calculations.
The important point to emphasise, therefore, is that no tests were necessary to validate or verify
the FRISK88 software as a correct implementation of the PSHA calculation procedures. The
exercises described in the following sections were concerned only with the input files entered
into the program and the post-processing applied to assemble the output files obtained from
running the program. The FRISK88 calculations themselves, by virtue of their having been
certified for nuclear applications, are not explored or investigated since the software has already
been subject to V&V outside of this project, and we deployed a fully qualified version of the
program.
Figure 7.1. Letter from Risk Engineering Inc. explaining the certification of FRISK88. The letter also confirms that Risk Engineering Inc. provided in-house training for staff at the Council for Geoscience
in Pretoria regarding the use of the software, which further supports the claim that the calculations were performed by suitably qualified and competent individuals
7.3 Implementations of GMC Model
The GMC model (for bedrock motions) is a logic-tree with a total of 216 branches defining the
median values of spectral acceleration at the 10 target oscillator periods and the associated
aleatory variability (sigma values). For calculating the hazard associated with each of these 10
response spectral accelerations, hazard calculations were run for each branch of the SSC logic-
tree combined with each of these 216 branches, one by one.
Since there are many more branches in the SSC logic-tree, it was decided to test the
implementation of the GMC logic-tree in FRISK88 before testing the full hazard calculations.
This would make it easier to locate the source of any discrepancies encountered when
implementing the hazard calculations for the full logic-tree associated with a particular seismic
source.
For this exercise, a three-way check was carried out whereby the GMC logic-tree was
implemented in FRISK88 by the Hazard Calculation Team at CGS, in the OpenQuake software
(see Section 7.4) by the GEM Foundation team in Pavia, Italy, and in a separate application by
Dr John Douglas at BRGM. The Project Technical Integrator, Dr Julian Bommer, generated a
list of almost 6,500 test scenarios based on different combinations of GMPE, sigma model,
oscillator period, magnitude, style-of-faulting, and distance, and including median and 84-
percentile values.
The branches of the GMC logic-tree for median motions are three published GMPEs, all of
which have rather complex functional forms, and these needed to be implemented with specific
parameter settings. Moreover, the equations were modified by the application of Vs-kappa
adjustment factors, which varied for each GMPE and each oscillator period (Section 5.3). For
each median branch, there were six sigma branches, which were based on modifications of
published sigma values rather than the actual values published with each of the GMPEs
(Section 5.4). In summary, the GMC model is sufficiently complicated to warrant
careful checks on the implementation in terms of functional form, coefficients and values of
those parameters held constant.
The three-way check involved the calculation of response spectral accelerations for all 6,500
scenarios in three different (and completely independent) implementations. The reason for a 3-
way check was to allow the source of any discrepancies to be located more easily (on the basis
of the reasonable assumption that if two implementations were in agreement, any error was
most likely to be encountered in the third implementation that differed). The exercise concluded
with exact agreement in the predicted accelerations, to several decimal places, for all of the
scenarios. The full exercise is documented in Douglas (2012b).
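The structure of such a check can be sketched generically: build a grid of scenarios over the relevant parameters and flag any scenario for which two implementations disagree. The toy 'GMPE' below is entirely invented and merely stands in for the real Mathematica, R and FRISK88 implementations, which are of course not reproduced here:

```python
import math
from itertools import product

def compare_implementations(impl_a, impl_b, scenarios, tol=1e-12):
    """Return the scenarios for which two implementations disagree."""
    return [s for s in scenarios if abs(impl_a(*s) - impl_b(*s)) > tol]

# Toy stand-ins for two independent codings of the same (fictitious) GMPE
def gmpe_a(m, r, t):
    return math.exp(1.0 + 0.5 * m - 1.2 * math.log(r + 10.0) - 0.1 * t)

def gmpe_b(m, r, t):
    return math.exp(1.0 + 0.5 * m - 1.2 * math.log(r + 10.0) - 0.1 * t)

scenarios = list(product([5.0, 6.0, 7.0],      # magnitude
                         [10.0, 50.0, 100.0],  # distance (km)
                         [0.01, 0.1, 1.0]))    # oscillator period (s)
print(compare_implementations(gmpe_a, gmpe_b, scenarios))  # []: exact agreement
```

With three independent implementations, any non-empty list from one pairing can be localised to the implementation that disagrees with the other two, which is precisely the rationale for the three-way check.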
Although the GMC logic-tree was implemented in three different computer applications, the
purpose was to check the implementation in FRISK88, which was duly achieved. However, it
was also necessary to check the implementation in the nhlib calculation engine of OpenQuake,
as the starting point for checking the implementation of the SSC logic-tree in FRISK88.
7.4 Implementations of SSC Model
The SSC logic-tree for each of the 10 seismic sources includes hundreds of branches that
represent the different combinations of parameters defining the source geometry, the geometry
and slip mechanism of faults, and the recurrence models included to capture the full range of
epistemic uncertainty in this part of the hazard model. Although rigorous systems were put in
place to check the input and output files for the model implementation in the FRISK88 code, it
was considered highly desirable to also conduct some independent checks on the
implementation of the model through a check on the resulting hazard estimates. There is no
closed form solution for such checks, given the complexity of the models and necessity for
extensive numerical integrations, so the approach adopted was to seek another hazard
software and to implement the SSC model in both FRISK88 and that second program. For this
purpose, the project was fortunate to have the opportunity to run selected branches of the
logic-tree in the OpenQuake software, which has been developed
within the Global Earthquake Model (GEM) project (http://globalquakemodel.org), a venture
sponsored by both the insurance industry and several government agencies worldwide. The
OpenQuake software is ultimately intended for the calculation of risk in terms of losses due to
earthquake events, but at its core is a state-of-the-art code for the calculation of probabilistic
seismic hazard. The PSHA code nhlib within OpenQuake is based on the work of Field et al.
(2003) and has been subjected to extensive QA and testing, at various levels, during its
ongoing development, including systematic reproduction of the test cases documented in the
report by Thomas et al. (2010). The OpenQuake software is capable of executing PSHA
calculations for the configurations defined by the SSC and GMC models for the Thyspunt study,
and it was established through presentations and discussions at Workshop #3 that it performs
the hazard calculations in a very similar manner to FRISK88.
A point that needs to be strongly emphasised here is that independent hazard runs were not
carried out to test the FRISK88 software (or, for that matter, the OpenQuake software) since
both of these hazard codes have been extensively tested and validated as implementations of
PSHA. The purpose of the exercise was rather to test the implementation of the actual hazard
model in the FRISK88 code, which is a desirable objective given the complexity of the model.
There was no expectation that the two hazard codes would produce identical results for any
given test case, since each hazard code employs slightly different discretisation intervals and
integration schemes to perform the calculation of the total AFE for a specified ground-motion
amplitude. Examples of the differences in results that may be obtained from different hazard
calculation codes, even for relatively simple seismic source configurations, may be seen in the
freely available report by Thomas et al. (2010). Examples are presented in Figure 7.2, which
shows the hazard curves obtained at two sites affected by a single fault source obtained with a
single GMPE, and using various widely-used hazard software packages. These comparisons,
which only show the results at rather large annual frequencies of exceedance (AFE), should be
borne in mind when viewing the figures below, which show the comparative hazard results
obtained as the weighted mean of hundreds of calculations (all the logic-tree branches for a
single seismic source) and plotted to AFEs as low as 10^-7.
Figure 7.2. Comparisons of hazard curves obtained with various PSHA programs for sites 1 (left) and 2 (right) for the simple test set #1, case 9(c) specified in Thomas et al. (2010).
Since some degree of divergence between the results from different hazard codes can
therefore be accepted as inevitable (and not indicative of one or the other being
invalidated), the desired outcome from these tests was not exact agreement between the mean
hazard curves in each case. Rather, correct implementation of the SSC model in both FRISK88
and OpenQuake would be indicated by only small differences that were stable and consistent,
meaning that the differences would not increase markedly with decreasing AFE, even as low as
10^-7.
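This acceptance criterion can be made operational as a check on the log-ratio of the two codes' hazard curves at common amplitudes, sketched here with invented curves (the 2% offset and the 0.05 tolerance are illustrative, not project values):

```python
import math

def max_log_ratio(afe_a, afe_b):
    """Largest absolute log-ratio between two hazard curves sampled at the
    same ground-motion amplitudes."""
    return max(abs(math.log(a / b)) for a, b in zip(afe_a, afe_b))

# Invented curves: a stable ~2% difference over the full AFE range
afe_frisk88 = [10.0 ** -e for e in range(2, 8)]   # 1e-2 down to 1e-7
afe_openquake = [1.02 * v for v in afe_frisk88]
print(max_log_ratio(afe_frisk88, afe_openquake) < 0.05)  # True: small and stable
```

A drift that grows as the AFE decreases would, by contrast, signal a genuine difference in model implementation rather than benign numerical discretisation effects.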
For the SSC logic-tree implementation testing, mean hazard curves were generated for 15
branches of the total logic-tree. For ease of reference, the sources defined in the SSC model
are shown in Figure 7.3. Each hazard curve was generated for the contributions from a single
seismic source, effectively providing the mean hazard for all of the logic-tree branches within
that source. One such run was performed for each of the sources except the host ECC zone
within which the Thyspunt site is located, for which six runs were performed. The emphasis on
checking the implementation of the model for ECC was made because it is generally found that
in low-seismicity regions the hazard at a site is dominated by contributions from the host
source. The 15 combinations of source,
backbone GMPE, sigma model and response period are summarised in Table 7.1.
In order to increase the chances of picking up on any problems, all three backbone GMPEs
were used in different runs, and both the homo- and heteroscedastic sigma models (using the
medium branch in both cases) were employed. Three different response periods were used
because of their different sensitivities to earthquake magnitude, which once again was designed
to increase the chances of identifying any discrepancies. For the fault sources, which generally
produce only larger magnitude events, the longest of the three response periods was chosen.
For the critical ECC host zone, all three GMPEs and oscillator periods, as well as both sigma
models, were used in the runs.
Table 7.1. Hazard input for the 15 test cases checked against OpenQuake

Source   Backbone   Sigma   T(s)
ECC      AS08       HOMmd   0.01
ECC      AS08       HOMmd   0.10
ECC      CY08       HETmd   0.10
ECC      CY08       HETmd   1.00
ECC      AC10       HOMmd   0.01
ECC      AC10       HETmd   1.00
SYN      AS08       HETmd   0.10
KAR      AC10       HOMmd   0.10
CK       CY08       HOMmd   0.10
NAM      CY08       HOMmd   0.10
KNG      AS08       HETmd   1.00
AFZ      AC10       HETmd   1.00
GAM      AS08       HETmd   1.00
PLE      AC10       HOMmd   1.00
WOR      CY08       HOMmd   1.00
Figure 7.3. Seismic source characterisation for the seismic hazard analysis for the Thyspunt site (red star). There are five area sources (ECC, KAR, CK, NAM and SYN) and five fault sources (purple lines)
The runs were performed independently and the two teams submitted their sets of results to
Dr John Douglas and to the PTI, but not to each other. Dr Douglas and the PTI discussed each
set of results and in the few cases where there was appreciable divergence, this was
communicated to the teams individually, without providing any details regarding the other set of
results, and making suggestions regarding elements that could be checked. After a few iterations—
including one case where it became necessary to obtain some intermediate results in order to
identify the subtle difference in implementation for one of the fault sources—convergence
was judged to have been reached. The results are presented in Figures 7.4 to 7.7, grouped by
source types and response periods.
Several important observations can be made from the plots in these figures, the first being that
the level of agreement is very high considering the complexity of some of the sources and
the SSC logic-trees defining the ranges of activity and Mmax. For the two fault sources closest
to the site, PLE and GAM (Figure 7.4), the two pairs of hazard curves are almost identical. For
the area sources, the agreement is generally even better, and neither of the hazard codes
produces consistently higher or lower results (which suggests that in the full hazard
calculations the differences would remain small, since many of the minor differences would
cancel each other out).
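The report does not state the quantitative criterion used to judge "appreciable divergence" between the two sets of curves. As an illustration only, a divergence check of this kind can be sketched in Python; the curve values below are hypothetical and are not taken from the project results:

```python
import numpy as np

def max_log_discrepancy(afe_a, afe_b):
    """Maximum absolute difference, in natural-log units, between two
    annual-frequency-of-exceedance (AFE) curves evaluated at the same
    set of intensity measure levels."""
    return float(np.max(np.abs(np.log(afe_a) - np.log(afe_b))))

# Hypothetical hazard curves from two codes at five common intensity levels:
frisk88   = np.array([1e-2, 2.0e-3, 6.0e-4, 2.0e-5, 1.0e-6])
openquake = np.array([1e-2, 2.1e-3, 5.8e-4, 2.2e-5, 1.1e-6])

# A discrepancy of ~0.1 natural-log units corresponds to ~10% in AFE:
print(round(max_log_discrepancy(frisk88, openquake), 3))  # prints 0.095
```

A metric of this form makes "divergence" a single number per curve pair, which is convenient when many source/GMPE/period combinations must be screened.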
Figure 7.4. Hazard curves obtained for the fault sources using FRISK88 (solid curves) and OpenQuake
(dashed curves)
Figure 7.5. Hazard curves for Sa(0.01s) obtained for the host ECC source using FRISK88 (solid curves)
and OpenQuake (dashed curves) for two different GMPEs
Figure 7.6. Hazard curves for Sa(1.0s) obtained for the host ECC source using FRISK88 (solid curves)
and OpenQuake (dashed curves) for two different GMPEs
Figure 7.7. Hazard curves for Sa(0.1s) obtained for all area sources using FRISK88 (solid curves) and
OpenQuake (dashed curves)
The most interesting, and probably most important, observations concern the hazard
contributions from the host ECC source zone (Figures 7.5 to 7.7). For the AC10 GMPE (green
curves in Figures 7.5 and 7.6) the differences are very small, whereas with the other two
backbone GMPEs larger differences are seen at low AFEs. This is the result of the AC10
equation using the RJB distance metric whereas the AS08 and CY08 equations both use Rrup,
which makes the latter two models far more sensitive to the details of how extended fault
ruptures within the area sources are simulated. This interpretation is supported by the fact that
where this divergence is observed, it does not occur at high AFEs, where the hazard is
dominated by small-magnitude earthquakes that correspond to very small virtual ruptures.
As the AFE decreases, larger magnitudes come into play and the differences in the way the
virtual ruptures are simulated become more influential. For the ECC source zone, where
differences occur the FRISK88 code generally gives higher results. The clear conclusion from
these comparisons is that the hazard model was correctly implemented and therefore the final
hazard results can be accepted with confidence.
7.5 Checks on Pre-processing Steps for Hazard Inputs
The first step in executing the hazard calculations consists of implementing the SSC
and GMC models as specified in the HID (see Sections 7.3 and 7.4) in the file formats used by
FRISK88 (see Figure 7.8). All these steps occur in a dedicated folder (C:\F88_IN\) with a fixed
directory structure reflecting the different file types, as explained below.
7.5.1 Preparation and V&V of *.SRC and *.TREE files
The SSC model is implemented in the *.SRC and *.TREE files, with the recurrence information
for the area sources listed in *.REC files linked to the *.SRC files. The *.REC files were created
automatically as described in Section 4.2.2. For operational reasons, each of the seismic
sources described in the HID was split into source alternatives reflecting global options (i.e.
choices on parameters that affect more than one seismic source jointly). In the Thyspunt PSHA
SSC model, the global options considered were the crustal thickness, which is coupled across
all sources (3 options), and the boundary between ECC and SYN, which is coupled across
these two sources (2 options). As a result, there are 6 source alternatives for each of the ECC
and SYN sources, and 3 source alternatives for all other sources, as summarised in Table 7.2.
The runs are performed separately for each source alternative, and consequently there is one
*.SRC file per source alternative (i.e., 36 *.SRC files in total). The *.TREE files controlling
logic-tree options other than those routinely included in the setup of the *.SRC files were
common to all source alternatives, hence there were 10 *.TREE files in total.
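The count of 36 *.SRC files follows directly from the coupling of the global options. This can be sketched in Python with alternative labels mirroring those in Table 7.2 (the label construction here is illustrative only):

```python
crustal_thickness = ["H1", "H2", "H3"]   # thin, medium, thick (coupled across all sources)
ecc_syn_boundary = ["a", "b"]            # boundary position A or B (coupled across ECC and SYN)
boundary_sources = ["ECC", "SYN"]
other_sources = ["KAR", "CK", "NAM", "KNG", "AFZ", "GAM", "PLE", "WOR"]

alternatives = []
for source in boundary_sources:          # 2 sources x 2 boundaries x 3 thicknesses
    for b in ecc_syn_boundary:
        for h in crustal_thickness:
            alternatives.append(source + b + h)   # e.g. "ECCaH1"
for source in other_sources:             # 8 sources x 3 thicknesses
    for h in crustal_thickness:
        alternatives.append(source + "o" + h)     # e.g. "KARoH1"

# One *.SRC file per source alternative:
print(len(alternatives))  # 2*6 + 8*3 = 36
```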
Figure 7.8. General overview of the setup of the FRISK88M bedrock hazard calculations. (A) Preparation of inputs. (B) Pre-processing using the FRISK88M pre-processor PREP88. (C) Calculations using the FRISK88M main calculation engine. (D) Post-processing using project-specific routines developed in
MatLab.
Table 7.2. Summary of source alternatives
                              Crustal Thickness
Source  ECC/SYN Boundary  Thin    Medium  Thick
ECC     Position A        ECCaH1  ECCaH2  ECCaH3
ECC     Position B        ECCbH1  ECCbH2  ECCbH3
SYN     Position A        SYNaH1  SYNaH2  SYNaH3
SYN     Position B        SYNbH1  SYNbH2  SYNbH3
KAR     Not applicable    KARoH1  KARoH2  KARoH3
CK      Not applicable    CKFoH1  CKFoH2  CKFoH3
NAM     Not applicable    NAMoH1  NAMoH2  NAMoH3
KNG     Not applicable    KNGoH1  KNGoH2  KNGoH3
AFZ     Not applicable    AFZoH1  AFZoH2  AFZoH3
GAM     Not applicable    GAMoH1  GAMoH2  GAMoH3
PLE     Not applicable    PLEoH1  PLEoH2  PLEoH3
WOR     Not applicable    WORoH1  WORoH2  WORoH3
Additionally, the *.SRC files for the area sources reference *.REC files containing information
about the recurrence curves (cumulative annual number of earthquakes greater than the
minimum magnitude, NU5, and BETA parameter equal to ln(10) times the Gutenberg-Richter b-
value, with associated probability weight P). Since the {NU5, BETA, P} triplets depend on
both the Mmax value and the source geometry, the call to the recurrence parameter files was
set up as a conditional call using uniquely defined file names (see Section 4.2.2), including full
filepaths for version-control purposes. As noted in Section 4.2.2, the recurrence portion of the
logic-tree for area sources was collapsed to consider unique values of Mmax for
numerical efficiency. Similarly, the completeness scaling factors for ECC were integrated into
the *.REC files with the appropriate weights.
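The conversion from Gutenberg-Richter parameters to the FRISK88 recurrence inputs can be sketched as follows. This is a Python illustration only: the function names and the treatment of the completeness factor as a simple multiplier on the rate are assumptions for exposition, not the project's actual *.REC-generation code.

```python
import math

def beta_from_b(b_value):
    """FRISK88 BETA parameter: ln(10) times the Gutenberg-Richter b-value."""
    return math.log(10.0) * b_value

def rec_triplet(nu5, b_value, weight, completeness_factor=1.0):
    """One {NU5, BETA, P} entry for a *.REC file.

    nu5    : cumulative annual rate of earthquakes above the minimum magnitude
    weight : probability weight P attached to this recurrence alternative
    The completeness scaling factor (applied for the ECC source) is folded
    into the rate here as a simple multiplier -- an assumption for this sketch.
    """
    return (nu5 * completeness_factor, beta_from_b(b_value), weight)

nu, beta, p = rec_triplet(0.02, 0.9, 0.25, completeness_factor=1.1)
print(nu, round(beta, 4), p)
```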
During development, self-checks were undertaken in the form of direct checks as well as
sample runs with the FRISK88 pre-processor PREP88, followed by sample FRISK88 runs of the
*.INP input files thus generated, with examination of the echo files (*.EPR and *.ECH) and
error files (*.ERR) created, to check that the setup was syntactically correct. Following these
self-checks, the *.SRC and *.TREE files were independently checked against the information
contained in the HID. The CGS/TP10/FM04 forms documenting these checks can be found in
the electronic appendix to this report in the folder 7_5_HazardPreProcessing\SRC and TREE
Check Forms.
These checks focused on ensuring that all of the information contained in the HID was
implemented, and that there were no transfer errors in the implementations. Figures 7.9 to 7.11
and the accompanying Tables 7.3 to 7.5 give details of the implementation and associated
checks.
Figure 7.9. SSC logic-tree branches common to all seismic sources
Table 7.3. Implementation of SSC model branches common to all sources
ID Logic-Tree Branch
Implementation details Reference Checks
SSC1 Crustal thickness
Implemented as separate source alternatives. Option reflected in name of *.SRC file and specification of DEPTH parameter therein. Weights applied at post-processing stage.
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix) Post-processing checks (weights).
SSC2 Source selection alternatives
Implemented as separate source alternatives. Option reflected in name of *.SRC file and specification of GEOM parameter therein. Weights applied at post-processing stage.
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix) Post-processing checks (weights).
SSC3 Applicability of clustering behaviour
Conceptual branch, not implemented - -
SSC4 Clustering behaviour
Only implemented for KNG via TEMPCLUSTER user-specified variable in *.SRC and *.TREE files for KNG.
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC5 Source-specific coordinates
Specification of the GEOM parameter in each of the *.SRC files. For ECC, the coordinates are also conditional on parameter ECCEAST specified as user-defined variable in the *.SRC and *.TREE files
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix) Sample runs for OpenQuake checks confirmed setup (with corrections to SYN to have a simple polygon specification)
Figure 7.10. SSC logic-tree branches specific to area sources
Table 7.4. Implementation of SSC model branches specific to area sources
ID Logic-Tree Branch Implementation details Reference Checks
SSC6a Maximum magnitude approach
Integrated into calculation of unique Mmax values and equivalent weights (See Section 4.1.2)
Excel spreadsheet with unique Mmax values (See Section 4.1.2)
Checks on Mmax spreadsheet (see Section 4.1.2)
SSC7a Number of priors
Integrated into calculation of unique Mmax values and equivalent weights (See Section 4.1.2)
Excel spreadsheet with unique Mmax values (See Section 4.1.2)
Checks on Mmax spreadsheet (see Section 4.1.2)
SSC8a Prior model
Integrated into calculation of unique Mmax values and equivalent weights (See Section 4.1.2)
Excel spreadsheet with unique Mmax values (See Section 4.1.2)
Checks on Mmax spreadsheet (see Section 4.1.2)
SSC9a Mmax value
Mmax values implemented are the unique Mmax values, specified as user-defined parameter MMAXVAL in *.SRC and *.TREE files to allow conditional specification of recurrence parameters in the PAIR block of *.SRC files
Excel spreadsheet with unique Mmax values (See Section 4.1.2)
Checks on Mmax spreadsheet (see Section 4.1.2)
SSC10a Recurrence model
ICORE parameter in source file, set to ICORE = 1 for the truncated exponential model
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC11a Recurrence parameters
Listed in *.REC files conditional on Mmax value and source geometry, referenced in the specification of the PAIR parameter
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix) for correct file association Checks on *.REC files (see Section 4)
SSC12a Completeness scaling
Integrated into calculation of NU5 parameter in *.REC files HID Checks on *.REC files (see
Section 4)
Figure 7.11. SSC logic-tree branches specific to fault sources
Table 7.5. Implementation of SSC model branches specific to fault sources
ID Logic-Tree Branch Implementation details Reference Checks
SSC6f Recurrence model
MAGMOD user-specified variable in *.SRC and *.TREE files, and choice of ICORE parameter in the PAIR block of the *.SRC files. The switch between the characteristic and maximum-moment models is implemented through the setting of the Mmin parameter in the MMAX block of the *.SRC files
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC7f Characteristic magnitude
MCHAR user-specified variable in *.SRC and *.TREE files, controlling the setting of the Mmax value in the MMAX block of the *.SRC files
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC8f b-value Specification of the BETA parameter in the PAIR block of the *.SRC files HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC9f Recurrence rate estimator
RECAPP user-specified variable in *.SRC and *.TREE files, and choice of ICORE parameter in the PAIR block of the *.SRC files
HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC10f Recurrence rate parameter
Specification of the NU parameter in the PAIR block of the *.SRC files HID
Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC11f Style-of-faulting
Specification of SoF percentages as coefficients C9 and C10 in *.ATT files to use with user-defined SoF option in ATTENDLL (JY = 9, see below).
HID Spot-checks on *.ATT files and OpenQuake runs (see Section 7.4)
SSC12f Dip direction
Reflected in value of dip angle using FRISK88 convention in DEPTH block of *.SRC files. This convention depends on order of specification of coordinates.
HID Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
SSC13f Dip angle
Value of dip angle specified using FRISK88 convention in DEPTH block of *.SRC files. This convention includes information about dip direction.
HID Direct checks on *.SRC files (CGS/TP10/FM04 forms in Appendix)
7.5.2 Preparation and V&V of *.ATT files
The GMC model is implemented in the project-specific DLL (discussed separately in Section
7.6), supplemented by *.ATT files with run-specific coefficients. The different branches of the
GMC logic-tree are identified via the JCALC parameter, which uniquely identifies the GMPE
corresponding to each branch tip. The JCALC values were set up as 6-digit integers reflecting
the various branches of the logic-tree, as shown in Figures 7.12 and 7.13 and explained in Table
7.6.
Figure 7.12. Schematic explanation of JCALC value conventions (see also Table 7.6)
Figure 7.13. Schematic overview of GMC logic-tree branches in relation to the various digits of the
JCALC value (see also Table 7.6)
Table 7.6. Conventions for JCALC value digits.

ID   Description                       Options
JX1  Backbone GMPE model               JX1 = 1  Abrahamson & Silva (2008)
                                       JX1 = 4  Chiou & Youngs (2008)
                                       JX1 = 5  Akkar & Çağnan (2010)
JX2  Unused suboption                  JX2 = 0 for all Thyspunt PSHA runs
JX3  Kappa option                      JX3 = 1  Kappa = 0.005 s
                                       JX3 = 2  Kappa = 0.003 s
                                       JX3 = 3  Kappa = 0.0011 s
JX4  Stress parameter scaling option   JX4 = 1  Scaled by 1.50
                                       JX4 = 2  Scaled by 1.25
                                       JX4 = 3  Scaled by 1.00
                                       JX4 = 4  Scaled by 0.75
JY   Style-of-faulting option          JY = 0  Generic/unspecified
                                       JY = 1  Strike-slip
                                       JY = 2  Normal
                                       JY = 3  Reverse
                                       JY = 9  User-defined weighted combination controlled by
                                               coefficients C9 [pNO] and C10 [pSS] in *.ATT file
                                       NB: All Thyspunt PSHA runs set up with option JY = 9 for
                                       more flexibility and consistency
JZ   Sigma model                       JZ = 0  As-published sigma
                                       JZ = 1  As-published sigma (option 2)
                                       JZ = 2  Thyspunt PSHA homoscedastic model, high value
                                       JZ = 3  Thyspunt PSHA homoscedastic model, medium value
                                       JZ = 4  Thyspunt PSHA homoscedastic model, low value
                                       JZ = 5  Thyspunt PSHA heteroscedastic model, high value
                                       JZ = 6  Thyspunt PSHA heteroscedastic model, medium value
                                       JZ = 7  Thyspunt PSHA heteroscedastic model, low value
                                       NB: All Thyspunt PSHA runs use only JZ = 2 through JZ = 7.
Details of the implementation of the GMC branches are given in Table 7.7. Note that for ease of
implementation, all runs used the JY = 9 option, even when the style-of-faulting for a given
source was "pure" and could have been implemented using one of the other options. The
proportions pNO and pSS of normal-faulting and strike-slip events are specified in the *.ATT
files as coefficients C9 and C10 that are passed to the ATTENDLL. Similarly, other coefficients
of the *.ATT files are used to pass user-defined explanatory variables of the GMPE backbone
models, whereas the coefficients of the equations themselves are hard-wired into the relevant
FORTRAN code files. The parameters set in the *.ATT files are listed in Table 7.8. These
parameters also include identification of the seismic source and source alternative to enable a
calculation of the depth-to-top of rupture, ZTOR, that takes into account the crustal thickness
and the specific source geometries (with rupture width controlled by the dip angle). The *.ATT
files also include parameters relating to the specification of the run, such as the site coordinates
and the target ground-motion amplitude levels: 18 values ranging from 0.0001g to 5.0g for the
first set of runs (AFE0, i.e. the full hazard curve), and a single ground-motion value determined
from the mean UHS for the subsequent sets of runs corresponding to disaggregation (AFE4,
AFE5 and AFE6). As a result, there is one *.ATT file per combination of target AFE, response
period, source alternative and JCALC value.
Table 7.7. Implementation of GMC model logic-tree branches and associated checks
ID Logic-Tree Branch
Implementation details Reference Checks
GMC1 Backbone GMPE
Equations implemented in ATTENDLL in as-published form in the following subroutines: TNSPUlt_AbrahamsonSilva2008_v1.f, TNSPUlt_ChiouYoungs2008_v1.f, TNSPUlt_AkkarCagnan2010_v1.f. Identified by JX1 (1st digit of JCALC):
Original publications (incl. published errata): Abrahamson & Silva (2008, 2009) Chiou & Youngs (2008) Akkar & Çağnan (2010) HID for specific assumptions and coefficient interpolation.
(1) Direct checks on FORTRAN subroutines (see forms CGS/TP10/FM04 in Appendix). (2) 3-way blind comparison exercise (see Section 7.3) (3) Weights checked as part of checks on post-processing routines
GMC2 Vs-kappa adjustment
Adjustments implemented in ATTENDLL via the mean adjustment factor implemented in subroutine TNSPUlt_MedianAdjustFactor_v1.f Identified by JX3 (3rd digit of JCALC):
HID including electronic supplements
(1) Direct checks on FORTRAN subroutines (see forms CGS/TP10/FM04 in Appendix). (2) Partial checks via comparison with OpenQuake (3) Weights checked as part of checks on post-processing routines
GMC3 Stress parameter adjustment
Adjustments implemented in ATTENDLL via the mean adjustment factor implemented in subroutine TNSPUlt_MedianAdjustFactor_v1.f Identified by JX4 (4th digit of JCALC):
HID including electronic supplements
(1) Direct checks on FORTRAN subroutines (see forms CGS/TP10/FM04 in Appendix). (2) Partial checks via comparison with OpenQuake (3) Weights checked as part of checks on post-processing routines
GMC4 Homoscedastic vs. heteroscedastic sigma
Implemented in ATTENDLL in subroutine TNSPult_Sigma_v1.f Identified by JZ (6th digit of JCALC value), in combination with GMC5
HID including electronic supplements
(1) Direct checks on FORTRAN subroutines (see forms CGS/TP10/FM04 in Appendix). (2) Partial checks via comparison with OpenQuake (3) Weights checked as part of checks on post-processing routines
GMC5 High, medium or low sigma
Implemented in ATTENDLL in subroutine TNSPult_Sigma_v1.f Identified by JZ (6th digit of JCALC value), in combination with GMC4
(1) Direct checks on FORTRAN subroutines (see forms CGS/TP10/FM04 in Appendix). (2) Partial checks via comparison with OpenQuake (3) Weights checked as part of checks on post-processing routines
Table 7.8. Parameters set in the *.ATT files

Coefficient  Description                                               Notes
C1     Response period, T, in seconds, or period index among the       Negative index approach always used,
       Thyspunt PSHA periods (-1 to -10)                               for numerical efficiency
C2     Average shear-wave velocity over top 30 m, VS30, in m/s         Set to 620 m/s throughout as per HID
C3     Unused                                                          Always set to zero
C4     Unused                                                          Always set to zero
C5     Depth to 1000 m/s shear-wave velocity horizon, Z1.0, in m       Set to 45 m for AS08 and CY08 as per
                                                                       HID, left at 0 for AC10
C6     Identifier of crustal thickness option                          1 = thin, 2 = medium, 3 = thick
C7     Identifier for source                                           10 times the source index from 1 to 10
C8     Identifier for source geometry alternative                      0 = single alternative, 1 = alternative A,
                                                                       2 = alternative B
C9     Proportion of normal-faulting events, pNO                       Set to source-specific value as per HID
C10    Proportion of strike-slip events, pSS                           Set to source-specific value as per HID
RZERO  Pseudo-depth                                                    Always set to zero (has to be 0 for
                                                                       RJB-based equations)
RONE   Unused                                                          Always set to zero
JCALC  101192 to 503497 (not all combinations)                         Reflects GMC logic-tree branch; see
                                                                       Section 7.5.2 for conventions
SIGA   Unused                                                          Always set to zero
ITRUN  Truncation option                                               Always set to zero (no truncation)
TRUN   Truncation parameter                                            Always set to zero
IDIST  Distance metric                                                 Always set to zero
For a given AFE, response ordinate and source, the *.ATT files were generated automatically
by looping through all source alternatives and JCALC values, and setting any dependent
parameters appropriately. The successful creation of the *.ATT files was monitored via run logs,
which can be found in the electronic appendix to this report in the folder
7_5_HazardPreProcessing\ATT_FILE_LOGS.
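A hypothetical sketch of such a generation loop is shown below in Python. The file-naming scheme is invented for illustration and does not reproduce the project's actual conventions; the logging call stands in for both file creation and the run-log monitoring described above.

```python
import itertools
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def att_file_names(afe_tag, period_idx, source_alternatives, jcalc_values):
    """Generate one *.ATT file name per combination of source alternative
    and JCALC value, for a given target AFE and response period, logging
    each creation (the project monitored these loops via run logs)."""
    names = []
    for alt, jcalc in itertools.product(source_alternatives, jcalc_values):
        name = f"{afe_tag}_T{period_idx}_{alt}_{jcalc}.ATT"
        logging.info("created %s", name)   # stands in for writing the file
        names.append(name)
    return names

files = att_file_names("AFE0", 1, ["ECCaH1", "ECCaH2"], [101192, 101292])
print(len(files))  # 2 alternatives x 2 JCALC values = 4 files
```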
7.5.3 Preparation and V&V of *.MAS files
Each individual run, for a given AFE, corresponds to a source alternative and a branch of the
GMC logic-tree (JCALC value) at a given response period. The relevant *.SRC, *.TREE and
*.ATT files are linked by specifying the full filepaths in a *.MAS (master) file. This file is passed
to the FRISK88 pre-processor program PREP88, which retrieves the information and translates
it into a FRISK88 input file (*.INP). Since the *.ATT files are linked to a specific source
alternative, there is one *.MAS file per *.ATT file, linked to the appropriate *.SRC and *.TREE files.
For a given AFE, response ordinate and source, the *.MAS files were generated automatically
by looping through all source alternatives and JCALC values. The successful creation of the
*.MAS files was monitored via run logs, which can be found in the electronic appendix to this
report in the folder 7_5_HazardPreProcessing\MAS_FILE_LOGS.
7.5.4 Preparation and V&V of *.INP files using PREP88
The *.MAS files described above constitute the input files for the FRISK88 pre-processor
programme PREP88, which retrieves the information and translates it into a FRISK88 input file
(*.INP).
PREP88 returns an error message and suspends execution if it encounters a syntax error;
otherwise it returns a message confirming that the *.INP file was created successfully, as shown
in Figure 7.14. In view of the large number of input files to be created, the PREP88 runs were
executed in batch mode, with one batch run covering a given combination of AFE, response
period, source alternative and backbone GMPE. For easier tracking, all batch runs
for a given source were regrouped into high-level batch runs stepping through all combinations of
source alternatives and backbone models (i.e., 9 or 18 batch runs depending on the source).
The PREP88 batch runs were monitored through logs corresponding to the low-level batch runs,
as illustrated in Figure 7.14. A full set of these logs can be found in the electronic appendix to
this report, in the folder 7_5_HazardPreProcessing\P88_RUN_LOGS.
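The batch-run counts of 9 or 18 follow from the combinations involved, as this minimal Python sketch shows:

```python
def n_batch_runs(n_source_alternatives, n_backbone_gmpes=3):
    """Number of low-level PREP88 batch runs grouped into one high-level
    batch run for a source: one per (source alternative, backbone GMPE)
    combination, for a fixed AFE and response period."""
    return n_source_alternatives * n_backbone_gmpes

print(n_batch_runs(3))  # most sources: 3 alternatives x 3 GMPEs = 9
print(n_batch_runs(6))  # ECC and SYN: 6 alternatives x 3 GMPEs = 18
```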
Figure 7.14. Example of PREP88 run log, highlighting control features: (1) Batch run header summarising
run parameters; (2) Full path of version-controlled unambiguous batch file; (3) Header of batch file with time stamp; (4) Message confirming successful completion of individual run.
7.5.5 Preparation of FRISK88 batch input files
Once all individual run *.INP and associated diagnostic files for a given high-level batch run had
been generated, they were moved from the *.MAS folder into dedicated folders in C:\F88_IN
using an automated routine which monitors that all files have been copied successfully.
Similarly to the execution of the PREP88 batch runs, batch files listing all input files for a given
AFE, response period, source alternative and backbone model were generated to be used as
inputs for the FRISK88 calculation engine. Since these batch files also need to contain
machine-specific user-credentials, two sets of batch files were generated, one set for use on the
FRISK88 desktop PC (prefixed with “D_”) and a second set for use on the FRISK88 laptop PC
(prefixed with “L_”). Following the creation of the batch files, these were copied, along with all
the *.INP files listed therein, from the input folder C:\F88_IN\INP\ to the calculation folder
C:\F88_CALC\INP\, as well as being exported to an external hard drive and imported on the
second machine (e.g. the desktop PC, if starting from the C:\F88_IN\ directory on the laptop
PC). The success of the copy operations was monitored using log files, a full set of which can
be found in the electronic appendix to this report, in the folder
7_5_HazardPreProcessing\INP_2_CALC. This monitoring would also pick up any missing *.INP
and batch files, prompting the user to create them.
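A file-copy monitoring routine of this kind might look as follows. This is a Python sketch only: the project's actual routine is not reproduced in this report, and the content-hash comparison shown here is one possible implementation.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def verify_copy(src_dir, dst_dir, pattern="*.INP"):
    """Return the names of files matching `pattern` in src_dir that are
    missing from dst_dir or differ in content, so the user can be
    prompted to (re)create them."""
    def digest(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()
    problems = []
    for src in sorted(Path(src_dir).glob(pattern)):
        dst = Path(dst_dir) / src.name
        if not dst.exists() or digest(dst) != digest(src):
            problems.append(src.name)
    return problems

# Demo with temporary directories standing in for the input and
# calculation folders:
with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    (Path(a) / "RUN1.INP").write_text("input 1")
    (Path(a) / "RUN2.INP").write_text("input 2")
    shutil.copy(Path(a) / "RUN1.INP", Path(b) / "RUN1.INP")
    print(verify_copy(a, b))  # -> ['RUN2.INP'] (missing on destination)
```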
7.6 Setup of ATTENDLL
Since every PSHA project uses a different GMC model reflecting the specific conditions of the
site(s) under consideration, the FRISK88 software models ground-motion prediction equations
(GMPEs) via a customisable dynamic-link library (DLL) called ATTENDLL. This dynamic-link
library is coded up in FORTRAN and compiled using the commercial software Visual Fortran
Composer XE 2011 for Windows developed by Intel, which is the compiler prescribed by the
FRISK88 user manual.
As discussed in Section 7.3, the GMC model is implemented using backbone models
corresponding to existing, published GMPEs, which are adapted to the needs of the project
using customised adjustment factors as well as a customised model for the aleatory variability
(sigma, see Section 5.4). The correct implementation of the as-published backbone GMPEs
was validated through a blind three-way checking exercise, as already discussed in Section 7.3.
The present section addresses more specifically the implementation of the project-specific
components (adjustment factors and sigma) and related V&V activities. The overall structure of
the ATTENDLL setup is illustrated in Figure 7.15.
Figure 7.15. Flowchart illustrating the setup of ATTEN.DLL within the FRISK88 calculation package.
In view of the critical nature of this component, multiple and redundant checks were
implemented and executed throughout the development phase, including:
(1) Direct checks of the final FORTRAN code to ensure there are no translation/reporting
errors with respect to the HID, documented in the CGS/TP10/FM04 forms listed in the
electronic appendix to this report in the folder
7_HAZARD_CALCULATIONS\7_6_ATTNDLLSetup\FORTRANSourceFileChecks
(2) Three-way blind-check of the coding of the as-published backbone models (see Section
7.3) for selected parameter combinations spanning the parameter space of the
explanatory variables considered in the Thyspunt PSHA hazard calculations (Douglas,
2012b)
(3) Blind comparison with an independent implementation in OpenQuake (validated for the
backbone implementation, see Sections 7.3 and 7.4 above) for a selection of
scenarios spanning the space of the project-specific adjustments (Douglas, 2013)
(4) Early checks and review of the FORTRAN code during the development phase (Douglas,
2012a)
7.6.1 Development
The FORTRAN compiler used (Intel Visual Fortran Composer XE 2011 integration for Microsoft
Visual Studio 2008, version 12.0.3470.2008) is embedded into a Microsoft Developer Studio shell,
which includes a FORTRAN language debugger returning, when necessary, error messages
and warnings as part of the compilation process. During development, the individual
subroutines were checked using this debugger until they were found to be free of errors, and
then the DLL was compiled, again iterating until it was found to be error-free. The successful
compilation of the DLL is documented in the build log, included as file BuildLog.htm in the
electronic appendix to this report, in the folder
7_HAZARD_CALCULATIONS\7_6_ATTNDLLSetup\Compiler.
In the development phase, the code was written in verbose mode (i.e. returning values of inputs
and outputs to the screen upon execution) as an additional self-check. In the final version, this
feature was turned off because it considerably slows down execution, which is undesirable
given the large number of runs to be performed. However, a minimum level of output, allowing
the identification and checking of the ATTENDLL version as well as of the data inputs read from
the *.ATT file, was preserved at the beginning of each individual run (corresponding to a single
*.INP file), as discussed in more detail in Section 7.7. Additionally, the code was deliberately set
up so as to minimise the potential for human mistakes and maximise the early detection of errors:
(1) Version control of all subroutines via the subroutine name and file name, with a
development history detailing the changes from the previous version (or noting the lack
thereof).
(2) Explicit declaration of all variables used (the “IMPLICIT NONE” statement in FORTRAN)
to prevent the accidental use of undeclared or mistyped variables.
(3) Hard-wiring of equation coefficients and other constants common to all runs in the
FORTRAN subroutines; these “master” coefficients were directly checked against the
relevant publications and the HID.
(4) Use of a version-controlled path for ATTENDLL in the FRISK88 runs to eliminate
version-control issues that could arise from the use of a fixed file name
7.6.2 File naming conventions
For operational reasons, the GMC model is implemented in such a way that each branch of the
GMC model corresponds to a single GMPE uniquely identified in ATTENDLL as well as in the
*.ATT files through the JCALC variable. For the Thyspunt PSHA project, a convention was
adopted to use 6-digit JCALC numbers, with each digit modelling a specific portion of the GMC
logic-tree, as explained previously in Section 7.5.1.
The GMC branches based on the Abrahamson & Silva (2008) and Chiou & Youngs (2008)
backbone models use finite-fault rupture metrics, including the depth-to-top-of-rupture
parameter ZTOR. Whilst FRISK88 considers this parameter in its internal source-to-site
distance calculations, for area sources modelled using virtual faults it does not return ZTOR
among the parameters that can be passed to ATTENDLL. Consequently, average values of the
ZTOR parameter obtained by emulating the weighted approach implemented in FRISK88 were
calculated for each source and crustal thickness option. This required the splitting of each
seismic source into “source alternatives” reflecting each of the three crustal thickness options,
as explained in Section 7.5.2. This was reflected in the *.ATT files by explicitly identifying the
seismic source alternatives and crustal depth options via coefficients C6, C7 and C8, which
were passed to the subroutine to ensure that the appropriate average value of ZTOR was used for each
source alternative. The ZTOR values thus calculated are source-specific, since they also
depend on the dip distribution that has been specified for each individual source. The validity of
this approach to modelling ZTOR was confirmed by the excellent match obtained with the
results calculated using OpenQuake (see Section 7.3), which calculates ZTOR explicitly for
each virtual fault generated.
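The weighted-averaging approach described above can be sketched as follows. This is a geometric simplification, not the FRISK88 internals: the fraction of the rupture width assumed to lie above the hypocentre (`frac_above`) is an illustrative parameter, not a value taken from the project documentation.

```python
import math

def ztor_for_rupture(z_hyp_km, width_km, dip_deg, frac_above=0.5):
    """Depth-to-top of a virtual fault whose hypocentre sits a fraction
    frac_above down the rupture width (assumed geometry), clipped at
    the free surface."""
    return max(0.0, z_hyp_km - frac_above * width_km * math.sin(math.radians(dip_deg)))

def average_ztor(alternatives):
    """Weight-average ZTOR over (weight, z_hyp_km, width_km, dip_deg)
    alternatives, emulating the weighted approach described in the text."""
    total = sum(w for w, *_ in alternatives)
    return sum(w * ztor_for_rupture(z, wd, d) for w, z, wd, d in alternatives) / total
```

Because the averaging runs over the dip and depth alternatives of each source, the resulting ZTOR values are source-specific, consistent with the discussion above.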
Thus the switching between the various options implemented in the ATTENDLL file is entirely
controlled by the JCALC value and specific coefficients listed in the *.ATT file that were passed
to the *.INP file in the pre-processing step.
Another file naming issue concerns the DLL itself, which for compatibility with FRISK88
conventions needs to have the fixed file name ATTENDLL.DLL. For purposes of version
control, the DLL used in the calculations was only renamed ATTENDLL at the end, keeping a
copy of the DLL file with a version-controlled name. Identity between these two files can be
established from the file properties (datestamp and file size), and both files are stored in a folder
with a name that also reflects version information. Finally, version information about the DLL is
displayed on-screen on the first call to the DLL for each FRISK88 run (see next section).
7.6.3 Direct checks on code
Direct independent checks on the FORTRAN source code were undertaken at CGS. These
checks are documented in the CGS/TP10/FM04 forms stored in
7_HAZARD_CALCULATIONS\ 7_6_ATTNDLLSetup\FORTRANSourceFileChecks. The main
focus of these checks is to ensure that there are no transfer errors in the implementation of
backbone models, adjustment factors and sigma values with respect to the information
contained in the original publications and the HID. These checks also verified that any issues
identified during the blind three-way checks documented in Douglas (2012b) had been
corrected in the final versions of these files.
7.7 Monitoring of Runs
During execution of the runs, the correct execution of the software was monitored through the
user interface of the FRISK88 software, which returns status messages at each step of the
execution and concludes each run with a message indicating either the successful completion
of the run, or an abort message indicating an error. In the latter case, the error message is
also reported in the *.ERR file created by FRISK88. In the former case, the *.ERR file remains
blank, with a file size of 1 kB.
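A self-check of this kind can be sketched as below; the helper function is illustrative (the project's actual checks were performed via the FRISK88 interface and batch routines), but the logic is the one described above: a run is flagged as failed whenever its *.ERR file contains an error message rather than being blank.

```python
from pathlib import Path

def failed_runs(calc_dir):
    """Return the names of *.ERR files containing an error message,
    i.e. runs that aborted; a blank *.ERR file signals successful
    completion of the corresponding run."""
    return sorted(p.name for p in Path(calc_dir).glob("*.ERR")
                  if p.read_text().strip())
```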
The runs were executed through the use of version-controlled batch files, with one batch file per
source alternative, backbone GMPE and PC used, i.e. cycling over 72 JCALC options for a
given AFE and response period. The size of these batch files was deliberately kept relatively
modest in order to preserve flexibility in the scheduling of the runs, as well as to enable timely
corrective action to be taken in case of operational problems (e.g. loss of power or internet
connection). The routine launching the batch file returns an error and aborts execution when
the version-controlled ATTENDLL or the input *.BAT file cannot be found. The progress for each
batch run was monitored using the run-time outputs of FRISK88, which were kept as log files,
an example of which is shown in Figure 7.16. A complete set of these files can be found in the
electronic appendix to this report, in the directory 7_HAZARD_CALCULATIONS\7_7_FRISK88
RunMonitoring\F88_RUN_LOGS.
For ease of progress monitoring, the runs were by default cycled automatically over all the
source alternatives corresponding to a given seismic source (e.g., ECCaH1, ECCaH2, ECCaH3,
ECCbH1, ECCbH2, ECCbH3 for ECC, and KARoH1, KARoH2 and KARoH3 for KAR), as well
as over the three backbone models, for a given AFE and response period. In the following,
these are termed high-level batch runs, corresponding to either 18 (ECC and SYN) or 9 (other
sources) individual FRISK88 batch input files. The execution of low-level batch runs
corresponding to a single source alternative and backbone model (and thus a single FRISK88
batch input file) was still possible; this mode was used predominantly to repeat batch runs when
operational problems had been encountered. In some cases, only a few individual runs had to
be repeated; these were executed manually using the FRISK88 graphical user-interface (GUI).
Figure 7.16. Example of FRISK88 control run, highlighting control features. Standard FRISK88 information is highlighted in blue: (1) Time-stamped initiation message; (2) Status message documenting
execution progress; (3) Time-stamped successful completion message. Additional control features developed for the Thyspunt PSHA project are highlighted in red: (1) Header summarising batch run
parameters; (2) Calculation directory; (3) Header of batch file; (4) Version-controlled path to FRISK88 calculation engine and ATTENDLL.DLL (5) Unambiguous, version-controlled input file name; (6) Display
of information passed to ATTENDLL.DLL on first call.
Overall progress was monitored using an Excel spreadsheet with colour-coded cells (at the low-
level batch run level) indicating successful runs, runs that needed correction and runs still to be
done, as assessed by self-checks testing the presence and size of the expected output files.
These diagnostics were then summarised at the high-level batch run level (all source
alternatives and backbone models for a given AFE, response period and seismic source). In the
vast majority of cases, the corrections had to do with operational issues (most commonly, loss
of internet during execution), with missing runs picked up by the analyst during self-checks of
the outputs produced for each run. Where a more substantial number of individual runs was
missing, the runs were repeated using a manually edited version of the batch input file listing
only the missing runs, in order to avoid re-running runs that had been successful. A few
instances of blank input files were also picked up; these were traced back to file access
conflicts and addressed similarly, by regenerating the input files in question, copying them to
the calculation directory and re-running the individual runs concerned. The ultimate successful
completion of the high-level batch runs following these self-checks and remedial actions was
confirmed by the successful transfer of the output files from the calculation directory
(C:\F88_CALC) to the post-processing directory (C:\F88_OUT\), as
described in Section 7.8. The checking process of the runs was recorded in the seismic hazard
run forms CGS/TP10/FM10, a full set of which can be found in the electronic appendix to this
report, in the directory 7_HAZARD_CALCULATIONS\7_7_FRISK88RunMonitoring\FM10_HAZ
ARD_RUN_FORMS.
7.8 File Transfer and Integrity Checks
As discussed in previous sections, the self-check process before and after execution of the
FRISK88 runs included file checks to ascertain the completeness of the input and output file
sets in the calculation directory (C:\F88_CALC\). Once these checks had been successful, the
output files of the runs were exported to the post-processing directory (C:\F88_OUT) of the
laptop PC; for runs that were executed on the desktop PC, this was done via the F88_OUT
directory on a mobile hard drive, in a similar manner to the transfer of the *.INP files. The
completeness and integrity of the files were checked again after performing the file transfer.
Whilst these file checks are redundant with those performed during the execution of the post-
processing routines, which are set up in such a way that they abort execution with an error
message and/or produce unusable results (NaN instead of numerical values) when files are
missing, corrupted or incomplete, they were nevertheless carried out at each file transfer
stage, since experience had shown that individual files may in some cases become corrupted
during transfer.
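Such a transfer-integrity check might be sketched as follows. A checksum comparison is stricter than the datestamp/file-size listings described here, but catches the same class of transfer corruption; the function name is illustrative.

```python
import hashlib
from pathlib import Path

def transfer_mismatches(src_dir, dst_dir):
    """Compare every file in src_dir against its copy in dst_dir by size
    and MD5 digest; return (name, problem) pairs for files that are
    missing from the destination or differ from the source."""
    problems = []
    for src in sorted(p for p in Path(src_dir).iterdir() if p.is_file()):
        dst = Path(dst_dir) / src.name
        if not dst.is_file():
            problems.append((src.name, "missing"))
        elif src.stat().st_size != dst.stat().st_size:
            problems.append((src.name, "size differs"))
        elif hashlib.md5(src.read_bytes()).hexdigest() != hashlib.md5(dst.read_bytes()).hexdigest():
            problems.append((src.name, "content differs"))
    return problems
```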
The file transfers from the F88_CALC to the F88_OUT directory are documented in the set of
log files provided in the electronic appendix to this report, in the folder
7_HAZARD_CALCULATIONS\7_8_FileTransfersAndIntegrity\F88_CleanUp_LOGS\. The
successful transfers to the laptop F88_OUT directory are documented in the detailed listings
providing file names, date stamps and file sizes of the files in the post-processing directory, a
complete set of which can be found in the electronic appendix to this report, in the directory
7_HAZARD_CALCULATIONS\7_8_FileTransfersAndIntegrity\F88OUT_FileListings\. These files
complement the FM10 forms described in Section 7.7.
7.9 Checks on Post-processing of Hazard Results
For each run, FRISK88 generates a standard set of output files with results and diagnostic
information (see Figure 7.8 for details), which were exported from the calculation directory
(C:\F88_CALC\) to the post-processing directory (C:\F88_OUT\) following successful file
existence checks. All the runs were regrouped onto a single machine for post-processing, since
the latter entails combining the results from all branches of the logic-tree and all sources,
applying the appropriate weights. The post-processing steps were undertaken using a set of
MatLab routines developed specifically for the project, which were rigorously checked and
tested by an independent reviewer, Dr John Douglas of BRGM. These checks are documented
in the CGS/TP10/FM04 forms provided in the electronic appendix to this report in the folder
7_HAZARD_CALCULATIONS\7_9_ResultPostProcessing. Details of the various post-
processing steps are given in the individual sections below.
7.9.1 Mean Hazard Results
The first post-processing step consists in the retrieval of the mean hazard curves at the site
based on the results listed in the *.FRAC files of the first set of runs (indexed AFE0) considering
18 target ground-motion amplitude levels. The routines used for this extract the data from the
individual *.FRAC files, apply the appropriate weights for each source alternative and JCALC
value, and sum the AFEs at each ground-motion level to obtain the full hazard curve at the site,
for each of the 10 response periods considered. These full hazard curves were stored in the
binary format (*.MAT) used by MatLab, and therefrom exported to Excel to serve as inputs to
the site response calculations (see Chapter 6).
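The weighted recombination described above can be sketched as follows; the array shapes and the helper name are illustrative, and the weights are assumed to already incorporate the source-alternative and JCALC branch weights.

```python
import numpy as np

def combine_afes(run_afes, run_weights):
    """Sum weighted per-run AFE curves at each common ground-motion
    level. Each run's weight is the product of its logic-tree branch
    weights; since contributions from different sources add, the total
    weights need not sum to one (illustrative sketch)."""
    return np.asarray(run_weights, dtype=float) @ np.asarray(run_afes, dtype=float)
```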
The V&V on this code is documented in the CGS/TP10/FM04 form in the electronic appendix of
this report, which can be found in the folder 7_HAZARD_CALCULATIONS\7_9_ResultPostProc
essing\MeanHazard\. These checks included direct review of the code as well as testing of the
execution using a sample set of *.FRAC files. Graphical representations of the results, which
were reviewed by the PTI, served as additional checks for these results.
7.9.2 Uniform Hazard Spectra
To obtain the uniform hazard spectrum (UHS) for a given response period and target AFE, the
mean hazard curve at fixed ground-motion levels needs to be interpolated in order to obtain the
ground-motion amplitude at the target AFE. Following common practice, the interpolation was
implemented as a linear interpolation in log(GM)-log(AFE) space.
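The interpolation step can be sketched as below; the function names are illustrative, but the inversion of the hazard curve follows the log(GM)-log(AFE) linear interpolation described above.

```python
import numpy as np

def gm_at_target_afe(gm_levels, afes, target_afe):
    """Invert one mean hazard curve: linear interpolation in
    log(GM)-log(AFE) space down to the target AFE."""
    log_gm, log_afe = np.log(gm_levels), np.log(afes)
    # AFE decreases with GM, so reverse both arrays to satisfy
    # np.interp's requirement of ascending x-coordinates
    return float(np.exp(np.interp(np.log(target_afe), log_afe[::-1], log_gm[::-1])))

def uhs(curves_by_period, target_afe):
    """Uniform hazard spectrum: one interpolated amplitude per response
    period, where curves_by_period maps period -> (gm_levels, afes)."""
    return {T: gm_at_target_afe(g, a, target_afe) for T, (g, a) in curves_by_period.items()}
```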
The checks of the relevant routines are documented in the CGS/TP10/FM04 form in the
electronic appendix of this report, which can be found in the folder
7_HAZARD_CALCULATIONS\7_9_ResultPostProcessing\UHS\. These checks included direct
review of the code as well as testing of the execution using the *.MAT file created during the
check of the mean hazard results. The UHS for all periods for AFEs of 10⁻⁴, 10⁻⁵ and 10⁻⁶
furthermore served as input to the site response calculations described in Chapter 6, hence
these results also benefited from scrutiny by experienced down-stream users. Finally, graphical
representations of the results, which were reviewed by the PTI, served as additional checks for
these results, for all the target AFEs considered.
7.9.3 Disaggregation
A second post-processing step consists in the development of disaggregation plots illustrating
the contributions of individual magnitude, distance and in some cases epsilon bins to the total
hazard. This post-processing step was performed for the AFEs of 10⁻⁴, 10⁻⁵ and 10⁻⁶ considered
in the site response calculations (Chapter 6). The FRISK88 setup requires runs to be repeated
for the appropriate ground-motion level (extracted from the UHS described in the previous
section) used as target ground-motion level in the *.ATT files and consequently the *.INP files.
The runs for AFEs of 10⁻⁴, 10⁻⁵ and 10⁻⁶ were indexed AFE4, AFE5 and AFE6, respectively.
The disaggregation post-processing extracts the results from the *.MRD files output by these
FRISK88 runs, which were recombined with application of the appropriate weights in a similar
manner to the *.FRAC file results used for the determination of the mean hazard curves (see
Section 7.9.1). The checks of the relevant routines are documented in the CGS/TP10/FM04
form in the electronic appendix of this report, which can be found in the folder
7_HAZARD_CALCULATIONS\7_9_ResultPostProcessing\Disaggregation\. These checks
included direct review of the code as well as testing of the execution using a sample set of
*.MRD files. Finally, graphical representations of the results in formats that had been presented
at several working meetings and workshops, and which were reviewed by the PTI, served as
additional checks for these disaggregation results.
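The weighted recombination of the *.MRD results described above can be sketched as follows; the matrix contents and helper name are illustrative.

```python
import numpy as np

def combine_disaggregation(mrd_matrices, weights):
    """Weight-combine per-run magnitude-distance contribution matrices
    (as read from the *.MRD files) and normalise so that the bins give
    fractional contributions to the total hazard (illustrative)."""
    combined = sum(w * np.asarray(m, dtype=float)
                   for w, m in zip(weights, mrd_matrices))
    return combined / combined.sum()
```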
7.9.4 Fractiles
The third post-processing step consisted in the calculation of fractiles of the hazard curve
distribution, in addition to the mean hazard results discussed in Section 7.9.1. This
post-processing step required extraction of the information in the *.HCUR and *.CRPT files
generated by FRISK88, with application of the appropriate weights. The regrouping of hazard
curves for fractile calculation is essentially a multiplicative operation (when combining across
sources) and thus differs from the additive operations undertaken previously. Unlike the mean
hazard calculations, the fractile calculations also need to keep track of the particular global
alternative (logic-tree branch affecting several sources jointly) to which a given hazard curve
corresponds, since only hazard curves corresponding to the same global alternative can be
combined across sources. For any given source and global alternative, the various hazard
curves have to be concatenated in order to obtain a weighted distribution. The combination of
the hazard curves was therefore done in several steps, namely:
(1) Extraction, in batch mode, of the information contained in the *.HCUR and *.CRPT files
for each individual run;
(2) Concatenation of the GMC logic-tree branches for each individual source alternative to
obtain weighted hazard curve distributions;
(3) Combination of the hazard curve distributions across sources, for a given crustal
thickness;
(4) Combination of the hazard curve distributions across crustal thickness options, in order
to obtain the final hazard curve distribution.
The combination of hazard curves across sources is a memory-intensive operation; therefore,
the relevant cumulative distribution functions (CDFs) were resampled to a discretised version
considering the AFEs corresponding to 1000 equally-spaced points on the CDF.
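The discretisation step can be sketched as follows; the function is a simplified illustration operating at a single ground-motion level, with an assumed helper name.

```python
import numpy as np

def resample_cdf(afes, weights, n=1000):
    """Resample a weighted discrete distribution of AFE values (one per
    hazard-curve realisation at a fixed ground-motion level) onto n
    equally spaced points of its CDF, reducing the memory footprint of
    the subsequent combination across sources (sketch)."""
    order = np.argsort(afes)
    afes = np.asarray(afes, dtype=float)[order]
    weights = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(weights) / np.sum(weights)
    probes = np.linspace(0.0, 1.0, n)
    # smallest AFE whose cumulative weight reaches each probe level
    idx = np.searchsorted(cdf, probes, side="left")
    return afes[np.clip(idx, 0, len(afes) - 1)]
```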
The checks of the relevant routines are documented in the CGS/TP10/FM04 forms in the
electronic appendix of this report, which can be found in the folder
7_HAZARD_CALCULATIONS\7_9_ResultPostProcessing\Fractiles\. These checks included
direct review of the code as well as testing of the execution using a sample set of *.HCUR and
*.CRPT files.
References
Abrahamson, N.A. & Silva, W.J. (2008). Summary of the Abrahamson & Silva NGA ground-motion relations. Earthquake Spectra 24(1), 67-97.
Abrahamson, N.A. & Silva, W.J. (2009). Errata for ‘Summary of the Abrahamson & Silva NGA ground-motion relations’. http://peer.berkeley.edu/ngawest/nga_models.html.
Akkar, S. & Çağnan, Z. (2010). A local ground-motion predictive model for Turkey and its comparison with other regional and global ground-motion models. Bulletin of the Seismological Society of America 100(6), 2978-2995.
ASCE (2004). Seismic design criteria for structures, systems, and components in nuclear facilities and commentary. ASCE Standard 43-05, American Society of Civil Engineers.
Bazzurro, P. and C.A. Cornell (2004). Nonlinear soil-site effects in probabilistic seismic hazard analysis. Bulletin of the Seismological Society of America 94(6), 2110-2123.
Bommer, J.J. and P.J. Stafford (2008). Seismic hazard and earthquake actions. In: Seismic Design of Buildings to Eurocode 8, A.Y. Elghazouli ed., Taylor and Francis, 6-46.
Chiou, B.S.J. & Youngs, R.R. (2008). An NGA model for the average horizontal component of peak ground motion and response spectra. Earthquake Spectra 24(1), 173-215.
Cornell, C.A. (1968). Engineering seismic risk analysis. Bulletin of the Seismological Society of America 58(5), 1583-1606. Erratum: 59(4), 1733.
Douglas, J. (2012a). Verification of hazard calculations made during the Thyspunt SSHAC level 3 project: Phase 2. CGS Report No. 2012-0071, Council for Geoscience, Pretoria.
Douglas, J. (2012b). Verification of GMPE implementation made during the Thyspunt SSHAC level 3 project: Phase 3. CGS Report No. 2012-0231, Rev. 0, Council for Geoscience, Pretoria.
Douglas, J. (2013). Verification of hazard calculations made during the Thyspunt SSHAC level 3 project: Phase 3. CGS Report No. 2013-0004, Rev. 0, Council for Geoscience, Pretoria.
Eskom (2013). Specification for validation and verification tasks for model used in nuclear siting. Report NSIP02761, Eskom Nuclear Engineering/Nuclear Sites.
Field, E.H., T.H. Jordan and C.A. Cornell (2003). OpenSHA – A developing community modeling environment for seismic hazard analysis. Seismological Research Letters 74, 406-419.
Idriss, I. and J. Sun (1992). User’s Manual for SHAKE91. Center for Geotechnical Modelling, University of California, Davis, CA.
Johnston, A.C., K.J. Coppersmith, L.R. Kanter and C.A. Cornell (1994). The Earthquakes of Stable Continental Regions, Five vols. Report for Electric Power Research Institute (EPRI), Palo Alto, CA, EPRI TR-102261.
Kaklamanos, J., D.M. Boore, E.M. Thompson and K.W. Campbell (2010). Implementation of the Next Generation Attenuation (NGA) ground-motion prediction equations in Fortran and R. U.S. Geological Survey Open-File Report 2010-1296, 43 pp.
Kaklamanos, J. and E.M. Thompson (2011). nga: NGA earthquake ground motion prediction equations. R package version 1.4-1, http://CRAN.R-project.org/package=nga
Kijko, A. (2004). Estimation of the maximum earthquake magnitude, mmax. Pure and Applied Geophysics 161, 1-27.
Kottke, A.R. and E.M. Rathje (2008). Technical Manual for Strata. PEER Report 2008/10, Pacific Earthquake Engineering Research Center, University of California at Berkeley, February, 84 pp.
Lee, C.-P., Y.-B. Tsai and K.-L. Wen (2006). Analysis of nonlinear site response using the LSST downhole accelerometer array data. Soil Dynamics and Earthquake Engineering 26, 435-460.
Lin, P-S., B.S-J. Chiou, N.A. Abrahamson, M. Walling, C.-T. Lee and C.-T. Cheng (2011). Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models. Bulletin of the Seismological Society of America 101(5), 2281-2295.
McGuire, R.K. (2004). Seismic hazard and risk analysis. EERI Monograph MNO-10, Earthquake Engineering Research Institute, Oakland, California.
McGuire, R.K., W.J. Silva and C.J. Costantino (2001). Technical basis for revision of regulatory guidance on design ground motions: Hazard- and risk-consistent ground motion spectra guidelines. NUREG/CR-6728, US Nuclear Regulatory Commission, Washington D.C.
Miller, A.C. and T.R. Rice (1983). Discrete approximations of probability distributions. Management Science 29, 352-362.
NNR (2006). Requirements for authorisation submissions involving computer software and evaluation models for safety calculations. Requirements Document RD-0016, National Nuclear Regulator.
NNR (2008). Requirements on risk assessment and compliance with principal safety criteria for nuclear installations. Requirements Document RD-0024, National Nuclear Regulator.
Rockwell, T., E. Gath, T. González, C. Madden, D. Verdugo, C. Lippincott, T. Dawson, L.A. Owen, M. Fuchs, A. Cadena, P. Williams, E. Weldon, and P. Franceschi (2010). Neotectonics and paleoseismology of the Limón and Pedro Miguel Faults in Panamá: Earthquake hazard to the Panamá Canal. Bulletin of the Seismological Society of America 100(6), 3097-3129.
Rodriguez-Marek, A., F. Cotton, N.A. Abrahamson, S. Akkar, L. Al Atik, B. Edwards, G.A. Montalva and H. Dawood (2012). A model for single-station standard deviation using data from various tectonic regions. Bulletin of the Seismological Society of America (submitted for publication in December 2012).
Schnabel, P.B., H.B. Seed and J.B. Lysmer (1972). SHAKE: A computer program for earthquake response analysis of horizontally layered sites. Report No. UCB/EERC-72/12, Earthquake Engineering Research Center, University of California, Berkeley, CA.
Thomas, P., I. Wong and N. Abrahamson (2010). Verification of probabilistic seismic hazard analysis computer programs. PEER Report 2010/106, Pacific Earthquake Engineering Research Center, UC Berkeley, California.
USNRC (2007). A performance-based approach to define the site-specific earthquake ground motion. Regulatory Guide 1.208, US Nuclear Regulatory Commission, Washington D.C.
USNRC (2012a). Central and Eastern United States Seismic Source Characterization for Nuclear Facilities. NUREG-2115, US Nuclear Regulatory Commission, Washington DC.
USNRC (2012b). Practical implementation guidelines for SSHAC Level 3 and 4 hazard studies. NUREG-2117, Rev. 1, April, US Nuclear Regulatory Commission, Washington DC.
Veneziano, D. and J. Van Dyck (1985). Statistical discrimination of aftershocks and their contribution to seismic hazard. In: Seismic Hazard Methodology for Nuclear Facilities in the Eastern U.S., Volume 2, Appendix A-4, EPRI/SOG Draft 85-1.
Weichert, D.H. (1980). Estimation of the earthquake recurrence parameters for unequal observation periods for different magnitudes. Bulletin of the Seismological Society of America 70(4), 1337-1346.
Wells, D.L. and K.J. Coppersmith (1994). New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement. Bulletin of the Seismological Society of America 84(4), 974-1002.
APPENDIX A
Supplementary Files (Files on CD at the back of this report)
The electronic appendix to this report is structured with folders and sub-folders whose
numbering reflects the section numbering in the body of the report, containing electronic
versions of the files discussed in the latter part of the report (Chapters 4 to 7), which
deals with the specific V&V activities undertaken for specific components of the
Thyspunt PSHA. These files pertain to Chapter 4 (SSC Model Calculations) and Chapter
7 (Hazard Calculations). For Chapters 5 and 6, both of which fall under technical
procedure CGS/TP10/PR16 (“Usage of expert software”), there are no additional files as
all relevant results are included in the text.
Folder 4_SSC_MODEL_CALCULATIONS
This folder contains all the files related to the V&V activities on calculations related to the
development of the SSC model, which comprise the following:
• Calculations of the maximum magnitude, Mmax (Section 4.1); the corresponding
files are included in subfolder 4_1_MmaxCalculation, as detailed in Table A.1.
• Calculations of the recurrence parameters for area sources (Section 4.2); the
corresponding files are included in subfolder 4_2_RecurrenceCalculation, as
detailed in Table A.2.
Folder 7_HAZARD_CALCULATIONS
This folder contains all the files related to the V&V activities on the Thyspunt PSHA
hazard calculations, which comprise the following:
• Checks of the files prepared at the pre-processing stage, as documented in
Section 7.5; the corresponding files are included in subfolder
7_5_HazardPreProcessing, as detailed in Table A.3.
• Checks of the setup of the dynamic-link library ATTENDLL.dll providing the project-specific
ground-motion predictions (Section 7.6); the corresponding files are
included in subfolder 7_6_ATTENDLLSetup, as detailed in Table A.4.
• Monitoring of the FRISK88 runs during execution, as described in Section 7.7; the
corresponding files are included in subfolder 7_7_FRISK88RunMonitoring, as
detailed in Table A.5.
• File integrity checks on the output files created by completed runs following
transfers between the various machines used for calculations and post-processing
(Section 7.8); the corresponding files are included in subfolder
7_8_FileTransfersAndIntegrity, as detailed in Table A.6.
• Checks on the routines used for the post-processing of the results (Section 7.9);
the corresponding files are included in subfolder 7_9_ResultPostProcessing, as
detailed in Table A.7.
The files dealing with input and output files, as well as with run execution, are organised in file
structures mirroring those used for the calculations. For input and output files,
these are grouped by response period, as illustrated in the left panel of Figure A.1; for run
execution, the files consider individual combinations of seismic source alternatives (see
Section 7.7 for details), and are additionally grouped by seismic source, as illustrated in
the right panel of Figure A.1. The file and folder naming conventions used throughout for
this type of file are summarised in Table A.8.
Figure A.1: Illustration of directory structure used for (a) input and output related V&V files; (b) V&V files related to FRISK88 run execution.
Table A.1: Files relating to V&V on Mmax calculations

Subfolder: MmaxCalculationCode
  FM04_MmaxBayesianCalculations_Rev0.docx: Form CGS/TP10/FM04 documenting the V&V activities relating to the code used to calculate Mmax using the Bayesian approach.
  FM04_MmaxKijkoCalculations_Rev0.docx: Form CGS/TP10/FM04 documenting the V&V activities relating to the code used to calculate Mmax using the Kijko approach.

Subfolder: MmaxEquivalentBranches
  FM03_MmaxEquivalentBranchesXLS.docx: Form CGS/TP10/FM03 documenting the checks on the computation of equivalent weights for the Mmax branches combining duplicate values for each source.
  MmaxImplementation.ppt: PowerPoint slide explaining the equivalent branches approach for Mmax.
  MmaxBranchesEquivalentBranches_2012_11_15_v1.xlsx: Excel spreadsheet containing the Mmax equivalent weight calculations.
Table A.2: Files relating to V&V on recurrence parameter calculations

Subfolder: RecurrenceCalculationCode
  FM03_RecurrenceCalcDerivationDOC_Rev0.doc: Form CGS/TP10/FM03 documenting the V&V of the detailed calculation method implemented for the recurrence calculations for area sources.
  FM04_RecurrenceData_Rev0_doc: Form CGS/TP10/FM04 documenting the V&V checks on the calculation routine developed to calculate the recurrence parameters for area sources using a penalised maximum likelihood approach.
  Recurrence_Examples_Benchmark3: Outputs from the CGS recurrence calculation routine for two benchmark examples.
  satest.out: Outputs from Dr Robert Youngs’s software LIKEMAX for the same two benchmark examples.
Table A.3: Files relating to V&V on hazard pre-processing steps; see Figure A.1 and Table A.8 for an explanation of the folder and file naming conventions

Subfolder: ATT_FILE_LOGS
  ATT_TNSPult_V01_AFE[x]_A[x]_T[xx].LOG
    Files logging the successful automated creation of the *.ATT files passed to the PREP88 pre-processor. Each of the four run-type subfolders (AFE0, AFE4, AFE5 and AFE6) contains ten subfolders, one for each response period (T01 to T10), which in turn each contain 10 of these files (5 for the area sources A1 to A5, plus 5 for the fault sources F1 to F5).

Subfolder: INP_2_CALC
  INP2CALCTNSPult_V01_AFE[x]_A[x]_T[xx].LOG
    Files logging the successful transfer of the input files created by the PREP88 pre-processor, as well as the corresponding batch files, to the corresponding calculation directory. Each of the four run-type subfolders (AFE0, AFE4, AFE5 and AFE6) contains ten subfolders, one for each response period (T01 to T10), which in turn each contain 10 of these files (5 for the area sources A1 to A5, plus 5 for the fault sources F1 to F5).

Subfolder: MAS_FILE_LOGS
  MAS_TNSPult_V01_AFE[x]_A[x]_T[xx].LOG
    Files logging the successful automated creation of the *.MAS files passed to the PREP88 pre-processor. Each of the four run-type subfolders (AFE0, AFE4, AFE5 and AFE6) contains ten subfolders, one for each response period (T01 to T10), which in turn each contain 10 of these files (5 for the area sources A1 to A5, plus 5 for the fault sources F1 to F5).

Subfolder: P88_RUN_LOGS
  P88run_TNSPult_V01_AFE[x]_A[x]_T[xx].LOG
    Files logging the successful execution of the PREP88 pre-processor routine creating the input files (*.INP) for the FRISK88 runs. Each of the four run-type subfolders (AFE0, AFE4, AFE5 and AFE6) contains ten subfolders, one for each response period (T01 to T10), which in turn each contain 10 of these files (5 for the area sources A1 to A5, plus 5 for the fault sources F1 to F5).

Subfolder: SRC and TREE Check Forms
  FM04_SRCandTREE_AFZ.docx, FM04_SRCandTREE_CK.docx, FM04_SRCandTREE_ECC.docx, FM04_SRCandTREE_GAM.docx, FM04_SRCandTREE_KAR.docx, FM04_SRCandTREE_KNG.docx, FM04_SRCandTREE_NAM.docx, FM04_SRCandTREE_PLE.docx, FM04_SRCandTREE_SYN.docx, FM04_SRCandTREE_WOR.docx
    CGS/TP10/FM04 forms documenting the V&V checks on the manually prepared *.SRC and *.TREE files passed to the PREP88 pre-processor. There is one such form per seismic source, with the source identified in the filename.
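The folder layout described in Table A.3 implies a fixed, enumerable set of expected log files (4 run types x 10 periods x 10 sources = 400 per log family), so completeness can be checked mechanically. A sketch of such a check (root path and helper names are illustrative, not part of the project scripts):

```python
import os

RUN_TYPES = ["AFE0", "AFE4", "AFE5", "AFE6"]
PERIODS = ["T%02d" % i for i in range(1, 11)]                  # T01..T10
SOURCES = ["A%d" % i for i in range(1, 6)] + \
          ["F%d" % i for i in range(1, 6)]                     # A1..A5, F1..F5

def expected_att_logs(version="TNSPult_V01"):
    """Enumerate the 400 *.ATT creation logs implied by the folder
    layout: one per run type, response period and seismic source."""
    paths = []
    for afe in RUN_TYPES:
        for period in PERIODS:
            for src in SOURCES:
                name = "ATT_%s_%s_%s_%s.LOG" % (version, afe, src, period)
                paths.append(os.path.join("ATT_FILE_LOGS", afe, period, name))
    return paths

def missing_logs(root, paths):
    """Return the subset of expected logs absent under the given root."""
    return [p for p in paths if not os.path.isfile(os.path.join(root, p))]

print(len(expected_att_logs()))  # 400
```

The same enumeration applies to the INP_2_CALC, MAS_FILE_LOGS and P88_RUN_LOGS families, with only the filename prefix changed.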
Table A.4: Files relating to V&V on the setup of the ATTENDLL.dll dynamic-link library

Subfolder: Compiler
  BuildLog.htm
    Log documenting the building of the ATTENDLL file by the Intel compiler, by linking the relevant source-code files. This includes documentation of warnings and errors ("0 error(s), 0 warning(s)" in this case).

Subfolder: FORTRANSourceFileChecks
  FM04_AbrahamsonSilva2008.docx
    CGS/TP10/FM04 form documenting the direct checks on the FORTRAN source code implementing the Abrahamson & Silva (2008) backbone model.
  FM04_AkkarCagnan2010_v1.docx
    CGS/TP10/FM04 form documenting the direct checks on the FORTRAN source code implementing the Akkar & Çağnan (2010) backbone model.
  FM04_ChiouYoungs2008_v1.docx
    CGS/TP10/FM04 form documenting the direct checks on the FORTRAN source code implementing the Chiou & Youngs (2008) backbone model.
  FM04_MedianAdjustFactor_v1.docx
    CGS/TP10/FM04 form documenting the direct checks on the FORTRAN source code implementing the Thyspunt PSHA project-specific adjustments to the median ground motion, as described in the Hazard Input Document (HID).
  FM04_ModularATTENDLL_v1.docx
    CGS/TP10/FM04 form documenting the direct checks on the FORTRAN source code of the main module of the ATTENDLL program.
  FM04_Sigma_v1
    CGS/TP10/FM04 form documenting the direct checks on the FORTRAN source code implementing the Thyspunt PSHA project-specific ground-motion sigma model, as described in the Hazard Input Document (HID).
Table A.5: Files relating to V&V on the monitoring of the FRISK88 runs; see Figure A.1 and Table A.8 for an explanation of the folder and file naming conventions

Subfolder: F88_RUN_LOGS
  F88RUN_TNSPult_V01_AFE[x]_[SrcAlt]_[GMPE]_T[xx].LOG
    Log documenting the execution of the FRISK88 runs at the low-level batch-file level. The log contains headers for checking and identification, followed by the screen output generated by FRISK88. Each of the four run-type subfolders (AFE0, AFE4, AFE5 and AFE6) contains ten subfolders, one for each response period (T01 to T10), which in turn each contain 10 subfolders, one for each seismic source (5 for the area sources A1 to A5, plus 5 for the fault sources F1 to F5). Each of these latter subfolders contains a number of files reflecting the number of low-level batch runs executed in a high-level batch run (18 for ECC and SYN, 9 for all other sources).

Subfolder: FM10_HAZARD_RUN_FORMS
  FM10_TNSPultV01_AFE0_AFZ_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_CKF_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_ECC_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_GAM_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_KAR_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_KNG_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_NAM_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_PLE_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_SYN_T[xx]_rev0.docx, FM10_TNSPultV01_AFE0_WOR_T[xx]_rev0.docx
    CGS/TP10/FM10 forms documenting the V&V on the FRISK88 runs performed. Each of the four run-type subfolders (AFE0, AFE4, AFE5 and AFE6) contains ten subfolders, one for each response period (T01 to T10), which in turn each contain 10 of these forms, one for each seismic source (5 for the area sources, plus 5 for the fault sources).
Table A.6: Files relating to V&V on file integrity and file transfer in hazard calculations; see Figure A.1 and Table A.8 for an explanation of the folder and file naming conventions

Subfolder: F88_CleanUp_LOGS
  F88CleanUpTNSPult_V01_AFE[x]_A[x]_T[xx].LOG
    Files logging the successful transfer of the output files created by FRISK88 to the corresponding post-processing directory (C:\F88_OUT\).

Subfolder: F88OUT_FileListings
  F88OUT_TNSPult_V01_AFE[x]_A[x]_T[xx].LIST
    Files listing all the output files in the post-processing directory, at the high-level batch run level, along with their time stamp and file size. Each of the four run-type subfolders (AFE0, AFE4, AFE5 and AFE6) contains ten subfolders, one for each response period (T01 to T10), which in turn each contain 10 of these files (5 for the area sources A1 to A5, plus 5 for the fault sources F1 to F5).
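A listing of output files with time stamp and file size, analogous to the *.LIST integrity files described in Table A.6, can be generated with a few lines of code. This is a sketch only; the actual project listings were produced by the run scripts themselves:

```python
import os
import time

def listing(directory):
    """Return one row per regular file in the given directory, giving
    its name, last-modification time stamp and size in bytes, in the
    spirit of the *.LIST file-integrity records."""
    rows = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            st = os.stat(path)
            stamp = time.strftime("%Y-%m-%d %H:%M:%S",
                                  time.localtime(st.st_mtime))
            rows.append("%s  %s  %d" % (name, stamp, st.st_size))
    return rows
```

Comparing such a listing taken before and after a file transfer (same names, same sizes) is a simple way to confirm the transfer completed without truncation.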
Table A.7: Files relating to V&V on hazard calculation results post-processing; see Figure A.1 and Table A.8 for an explanation of the folder and file naming conventions

Subfolder: Disaggregation
  FM04_ExtractMRE_fs21dec2012.docx
    CGS/TP10/FM04 form documenting the V&V on the routine to extract disaggregation results from the *.MRD files created by FRISK88, for disaggregation runs (AFE4, AFE5, AFE6).
  FM04_MREAFES_fs21dec2012.docx
    CGS/TP10/FM04 form documenting the V&V on the routine summing the disaggregation AFEs across all 10 seismic sources considered.
  FM04_MRESingleSource_fs21dec2012.docx
    CGS/TP10/FM04 form documenting the V&V on the routine summing the extracted disaggregation AFEs for an individual seismic source.

Subfolder: Fractiles
  FM04_CombineHazCurvSglH_19jan2013.docx
    CGS/TP10/FM04 form documenting the V&V on the routine that combines, across seismic sources, the hazard curves already combined across GMC branches, for fractile calculations, for a fixed value of the seismogenic crustal thickness.
  FM04_ExtractHCURandCRPT_fs07jan2013.docx
    CGS/TP10/FM04 form documenting the V&V on the routine to extract the individual hazard curves from the *.HCUR and *.CRPT files created by FRISK88, for fractile calculations.
  FM04_FractHazCurve_19jan2013.docx
    CGS/TP10/FM04 form documenting the V&V on the routine providing the overall fractiles of the hazard results for a given response period.
  FM04_JCALCBranches_fs07jan2013
    CGS/TP10/FM04 form documenting the V&V on the routine to combine the individual hazard curves across GMC branches, for fractile calculations.

Subfolder: MeanHazard
  FM04_SaveMEANS_fs21dec2012.docx
    CGS/TP10/FM04 form documenting the V&V on the routine extracting the mean hazard results from the *.FRAC files generated by FRISK88.

Subfolder: UHS
  FM04_UHS_fs21dec2012.docx
    CGS/TP10/FM04 form documenting the V&V on the routine extracting uniform hazard spectrum (UHS) values from the mean hazard curves of AFE0-type runs (full hazard curve).
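The UHS extraction checked by the last form amounts to reading each mean hazard curve at the target AFE. A common choice, assumed here purely for illustration, is log-log interpolation between the tabulated points of the hazard curve (the function name is hypothetical, not the project routine):

```python
import math

def uhs_value(gm_levels, afes, target_afe):
    """Interpolate one mean hazard curve (ground-motion levels versus
    annual frequencies of exceedance) at a target AFE, in log-log
    space, yielding one ordinate of the uniform hazard spectrum.

    gm_levels increase while afes decrease along the curve.
    """
    pts = sorted(zip(afes, gm_levels))  # ascending in AFE
    for (a1, g1), (a2, g2) in zip(pts, pts[1:]):
        if a1 <= target_afe <= a2:
            f = (math.log(target_afe) - math.log(a1)) / \
                (math.log(a2) - math.log(a1))
            return math.exp(math.log(g1) + f * (math.log(g2) - math.log(g1)))
    raise ValueError("target AFE outside hazard curve range")

# Hypothetical 3-point hazard curve: 0.2 g is exceeded at 1e-4 per year
print(uhs_value([0.1, 0.2, 0.4], [1e-3, 1e-4, 1e-5], 1e-4))  # 0.2
```

Repeating this at each of the ten response periods (T01 to T10) for a fixed target AFE assembles the UHS.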
Table A.8: Naming conventions for files and folders dealing with FRISK88 input and output files, as well as run execution; see Figure A.1 for the folder structure. The square brackets in the "Tag" column identify the portion of the tag that changes according to the option chosen, as listed under each tag

Tag: TNSPult_V[xx]
  Identifies the version of the runs.
    TNSPult_V01 = First version of the final Thyspunt PSHA model (November 2012 – January 2013)

Tag: AFE[x]
  Identifies the type of run (full hazard curve or disaggregation) and, for disaggregation runs, the target annual frequency of exceedance (AFE).
    AFE0 = Full hazard curve (18 target ground-motion levels)
    AFE4 = Disaggregation at 10^-4 AFE
    AFE5 = Disaggregation at 10^-5 AFE
    AFE6 = Disaggregation at 10^-6 AFE

Tag: A[x]
  Identifies area sources in some file names, and in source alternative tags. In the latter, the source tag is followed by "a" (ECC and SYN, western position of the ECC-SYN boundary), "b" (ECC and SYN, eastern position of the ECC-SYN boundary) or "o" (all other sources), and by the tag identifying the seismogenic crustal thickness (see below).
    A1 = ECC (SrcAlt = ECCaH[x] or ECCbH[x])
    A2 = SYN (SrcAlt = SYNaH[x] or SYNbH[x])
    A3 = KAR (SrcAlt = KARoH[x])
    A4 = CK (SrcAlt = CKFoH[x])
    A5 = NAM (SrcAlt = NAMoH[x])

Tag: F[x]
  Identifies fault sources in some file names, and in source alternative tags. In the latter, the source tag is followed by "o" (all fault sources) and by the tag identifying the seismogenic crustal thickness (see below).
    F1 = KNG (SrcAlt = F1oH[x])
    F2 = AFZ (SrcAlt = F2oH[x])
    F3 = GAM (SrcAlt = F3oH[x])
    F4 = PLET (SrcAlt = F4oH[x])
    F5 = WOR (SrcAlt = F5oH[x])

Tag: T[xx]
  Identifies the response period under consideration in folder and file names.
    T01 = 0.01 s   T02 = 0.02 s   T03 = 0.03 s   T04 = 0.04 s   T05 = 0.05 s
    T06 = 0.10 s   T07 = 0.20 s   T08 = 0.40 s   T09 = 1.00 s   T10 = 2.00 s

Tag: [GMPE]
  Identifies the backbone GMPE in low-level batch run files.
    AS08 = Abrahamson & Silva (2008)
    AC10 = Akkar & Çağnan (2010)
    CY08 = Chiou & Youngs (2008)

Tag: H[x]
  Identifies the seismogenic crustal thickness in low-level batch run files.
    H1 = thin seismogenic crust
    H2 = medium seismogenic crust
    H3 = thick seismogenic crust
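The tags in Table A.8 compose into file names such as the FRISK88 run logs of Table A.5. A sketch of assembling and parsing such a name under these conventions (the functions are illustrative, not project code):

```python
def f88_run_log(version="TNSPult_V01", afe="AFE0", src_alt="ECCaH1",
                gmpe="AS08", period="T01"):
    """Assemble a FRISK88 run-log filename following the Table A.8
    pattern F88RUN_TNSPult_V[xx]_AFE[x]_[SrcAlt]_[GMPE]_T[xx].LOG."""
    return "F88RUN_%s_%s_%s_%s_%s.LOG" % (version, afe, src_alt, gmpe, period)

def parse_f88_run_log(name):
    """Split a run-log filename back into its naming-convention tags."""
    stem = name[len("F88RUN_"):-len(".LOG")]
    project, ver, afe, src_alt, gmpe, period = stem.split("_")
    return {"version": project + "_" + ver, "afe": afe,
            "src_alt": src_alt, "gmpe": gmpe, "period": period}

name = f88_run_log()
print(name)  # F88RUN_TNSPult_V01_AFE0_ECCaH1_AS08_T01.LOG
```

Round-tripping every filename through such a parser is a cheap way to verify that no file in the run directories violates the naming convention.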