PECOS: Predictive Engineering and Computational Sciences
Accelerating Forward Uncertainty Propagation
Roy H. Stogner, Vikram Garg
Institute for Computational Engineering and Sciences, The University of Texas at Austin
October 18, 2012
Outline
1 Notation
2 Enhanced Model Evaluation: Goal-Oriented Adaptivity
3 Enhanced Monte Carlo: Enhanced Sampling Strategies, Enhanced Estimators, Results
Notation
Primal/Forward Problem
Parameter value(s): $\xi \in \Xi_R \subset \Xi \equiv \mathbb{R}^{N_p}$
System solution: $u(x, t; \xi) \in U_R \subset U$
Primal weighted residual: $R(u(\xi), v; \xi) \equiv 0 \quad \forall v \in V$
Quantity of interest functional: $Q : U_R \times \Xi_R \to \mathbb{R}$
Quantity of interest output: $q = Q(u(\xi), \xi)$
• Highly nonlinear forward solves
• Large $N_p$-dimensional parameter space
• Few quantities of interest
Enhanced Model Evaluation: Goal-Oriented Adaptivity
Notation
Adjoint Problem
Adjoint solution: $z(x, t; \xi) \in V$
Adjoint equation: $R_u(u, z; \xi)(\tilde u) \equiv Q_u(u; \xi)(\tilde u) \quad \forall \tilde u \in U$
Adjoint-based sensitivities: $\dfrac{dq}{d\xi} = Q_\xi(u; \xi) - R_\xi(u, z; \xi)$
Adjoint-based error: $Q(u^h) - Q(u) = R(u^h, z; \xi) + \text{H.O.T.}$
• Linear, relatively cheap adjoint solves
• One linear solve per QoI
• Sensitivity: One dot product per parameter
• Error: One weighted residual evaluation per QoI
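To make the adjoint bookkeeping concrete, here is a minimal discrete sketch, assuming a linear residual $R(u;\xi) = A(\xi)u - b(\xi)$ and a linear QoI $q = c \cdot u$; the function and argument names are illustrative, not from the talk.

```python
import numpy as np

def adjoint_sensitivity(A, b, c, dA_dxi, db_dxi):
    """Adjoint-based QoI sensitivity for R(u; xi) = A(xi) u - b(xi) = 0 and q = c . u.

    dA_dxi, db_dxi: lists holding dA/dxi_k and db/dxi_k, one entry per parameter.
    """
    u = np.linalg.solve(A, b)          # forward solve
    z = np.linalg.solve(A.T, c)        # one (linear) adjoint solve per QoI
    q = c @ u
    # dq/dxi_k = -z . (dA/dxi_k u - db/dxi_k): one dot product per parameter
    dq_dxi = np.array([-z @ (dA_k @ u - db_k) for dA_k, db_k in zip(dA_dxi, db_dxi)])
    return q, dq_dxi
```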
Adjoint Residual Error Indicator
Error Indicators
• Efficiently bounding $e_Q$ via per-element terms
• From our error estimator,
  $R(u^h, z; \xi) = \sum_E R^E\left( u^h|_E, z|_E; \xi \right)$
• $z^h$ is cheaper than a higher-order approximation of $z$
• $z^h$ is already needed for sensitivity calculations
Ignoring higher-order terms:

$q - q^h = -R_u(u, z - z^h; \xi)(u - u^h)$

$\left| q - q^h \right| \le \sum_E \left\| R^E_u \right\|_{B(U_E, V_E^*)} \left\| u|_E - u^h|_E \right\|_{U_E} \left\| z|_E - z^h|_E \right\|_{V_E}$
Adjoint Residual Error Indicator
$\eta_E \equiv \left\| u|_E - u^h|_E \right\|_{U_E} \left\| z|_E - z^h|_E \right\|_{V_E}$ (neglecting $\left\| R^E_u \right\|$)
[Figure: element-wise fields of the primal error, adjoint error, and QoI error indicator]
Norm vs QoI Based Refinement
[Figure: convergence of the interior QoI, $\epsilon = 10^{-8}$; log10(absolute error) vs. log10(degrees of freedom) for uniform refinement, a flux-jump error estimator, and the adjoint error estimator]
• Global error indicator ignores QoI sensitivity, plateaus
• Rapid convergence from adjoint-based refinement
Multiphysics and Adjoint Weighting
Where does $\left\| R^E_u \right\|_{B(U_E, V_E^*)}$ become non-negligible?
• Spatially varying constitutive parameters, nonlinearities
• Variable scaling in dimensionalized multiphysics problems
• Interaction pattern between multiphysics variables
Generalized Adjoint Residual Error Indicator
$\left| Q(u^h) - Q(u) \right| \le \left| R(u^h, z - z^h)(u - u^h) \right| \le \sum_E \left\| z_i - z^h_i \right\|_i M_{ij} \left\| u_j - u^h_j \right\|_j$

Choose $M$, $\|\cdot\|_i$, $\|\cdot\|_j$ to match the physics
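As a small illustration of the matrix weighting, here is a sketch of the per-element indicator with an example coupling matrix (the function name and the sample numbers are ours; the coupling pattern mirrors the Stokes matrix form shown next):

```python
import numpy as np

def weighted_indicator(z_err_norms, M, u_err_norms):
    """Per-element indicator: sum_ij ||z_i - z_i^h||_i  M_ij  ||u_j - u_j^h||_j."""
    return z_err_norms @ M @ u_err_norms

# Example coupling pattern: velocity-gradient errors weight pressure errors and vice versa.
M = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]])
eta_E = weighted_indicator(np.array([0.10, 0.20, 0.05]), M, np.array([0.30, 0.10, 0.40]))
```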
Multiphysics Adjoint Weighting Example
Stokes Flow
$Q(u) - Q(u^h) = \int_\Omega \nabla(\vec u - \vec u^h) \cdot \nabla(\vec u^* - \vec u^*_h)\, dx - \int_\Omega \nabla \cdot (\vec u - \vec u^h)\,(p^* - p^*_h)\, dx - \int_\Omega \nabla \cdot (\vec u^* - \vec u^*_h)\,(p - p^h)\, dx$
Matrix Form
$Q(u) - Q(u^h) \le \begin{bmatrix} e(u_1) \\ e(u_2) \end{bmatrix}^T \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} e(u^*_1) \\ e(u^*_2) \end{bmatrix} + \begin{bmatrix} e(u_{1,1}) \\ e(u_{2,2}) \\ e(p) \end{bmatrix}^T \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} e(u^*_{1,1}) \\ e(u^*_{2,2}) \\ e(p^*) \end{bmatrix}$
Multiphysics Adjoint Weighting Example
[Figure: log10(relative error) vs. log10(degrees of freedom) for uniform refinement, the adjoint residual indicator with naive weighting, and the adjoint residual indicator with matrix weighting]
• Stokes flow + transport, corner singularities
• Naive weighting "staggers"
• Matrix weighting converges smoothly to penalty BC precision
Enhanced Monte Carlo: Enhanced Sampling Strategies
Notation
Uncertainty Propagation Problems
Input PDF: $\xi \sim p_\xi$
Expected (mean) output: $\bar q \equiv E[q] \equiv \int_{\Xi_R} q(\xi)\, dp_\xi$
Output standard deviation: $\sigma_q \equiv \sqrt{E\left[(q - \bar q)^2\right]}$
Output risk: $P_C \equiv E[q \in C_q]$

• Integrals in high-dimensional spaces $\mathbb{R}^{N_p}$
• "Curse of dimensionality"
  – Deterministic integration cost exponential in $N_p$
• Monte Carlo: dimension independent
Monte Carlo Integration
Errors
• $\bar q^{MC} - \bar q$ variance: $\sigma^2_{e[\bar q]} = \sigma^2 / N_s$
• $\left(\sigma^{MC}_q\right)^2 - \sigma^2_q$ variance: $\sigma^2_{e[\sigma^2]} = \sigma^4\left(\dfrac{2}{N_s - 1} + \dfrac{\kappa}{N_s}\right)$
• $P^{MC}_C - P_C$ variance: $\sigma^2_{e[P_C]} = \left(P_C - P_C^2\right) / N_s$

• Sampling-based scheme errors are PDFs
• Errors based on variance $\sigma^2$ of the sampled entity
• Error "bound" width: $\sigma_e \propto N_s^{-1/2}$ (sketched below)
• Importance sampling improves convergence constant, not rate
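A minimal sketch of the plain MC estimators and their $N_s^{-1/2}$ error widths (the function names and the threshold-set form of $C_q$ are illustrative assumptions):

```python
import numpy as np

def mc_mean(q_samples):
    """Sample mean of the QoI and the sigma/sqrt(Ns) width of its error "bound"."""
    q = np.asarray(q_samples, float)
    return q.mean(), q.std(ddof=1) / np.sqrt(len(q))

def mc_risk(q_samples, threshold):
    """Sample risk P_C = P(q in C_q), here with C_q = (threshold, inf), and its error width."""
    q = np.asarray(q_samples, float)
    P_C = np.mean(q > threshold)
    return P_C, np.sqrt((P_C - P_C**2) / len(q))   # sqrt((P_C - P_C^2)/Ns)
```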
Latin Hypercube Sampling
Algorithm (sketched below)
• Quantiles in each parameter
• Samples permuted to bins
  – Reducing correlations?
• Random sample placement within bins

Uses
• Reduces variance from additive components
• Higher-order convergence for separable functions
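A minimal Latin hypercube sketch on the unit hypercube, assuming uniform marginals (names are ours); non-uniform marginals follow by pushing each column through the corresponding inverse CDF.

```python
import numpy as np

def lhs(n_samples, n_params, rng=None):
    """Latin hypercube sample in [0,1)^Np: one point per quantile bin in each parameter."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random((n_samples, n_params))                    # random placement within bins
    bins = np.column_stack([rng.permutation(n_samples)       # independent bin permutation
                            for _ in range(n_params)])       # per parameter
    return (bins + u) / n_samples
```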
Hierarchical Latin Hypercube Sequences
• LHS improves convergence but is not naturally incremental
• HLHS-based methods offer more flexible incrementation (one refinement step is sketched after the figure)
[Figure: (a) hierarchical Latin hypercubes; (b) tree structure of sample subsets $X_{i,j}$, with $2^i$ samples per subset at levels $i = 2, \dots, 5$]
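One possible refinement step, sketched under the assumption that we hold an existing $n$-point LHS in $[0,1)^{N_p}$ (e.g. from the lhs sketch above); it doubles the sample while keeping the old points and remaining an LHS at the finer level.

```python
import numpy as np

def hlhs_refine(x, rng=None):
    """Double an n-point LHS in [0,1)^Np to a 2n-point LHS that reuses the old points.

    Each old point already occupies one half of its refined (width 1/(2n)) bin in every
    dimension; the new companion points fill the empty halves, with the assignment
    permuted per dimension to reduce spurious correlation.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    old_bins = np.floor(x * 2 * n).astype(int)
    new_bins = np.empty_like(old_bins)
    for j in range(d):
        occupied = set(old_bins[:, j].tolist())
        free = np.array([b for b in range(2 * n) if b not in occupied])
        new_bins[:, j] = rng.permutation(free)
    x_new = (new_bins + rng.random((n, d))) / (2 * n)
    return np.vstack([x, x_new])
```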
Numerical Experiments
• Correlation reduction used for both ILHS and HLHS
• HILHS points show finer-grained increments
[Figure: mean error in the mean, log10(e_mu) vs. log10(N_s), for HILHS, ILHS, and SRS: (c) $Q(\xi) = \mathrm{round}\left(\sum_{i=1}^{N_p=16} \xi_i\right)$; (d) $Q(\xi) = e^{\sum_{i=1}^{N_p=16} \xi_i}$ (HILHS and ILHS only)]
Enhanced Monte Carlo: Enhanced Estimators
Control Variate

Known Surrogate
• Quantity of interest $q$ has a correlated surrogate statistic $q_s$
• "Known" mean $\bar q_s \equiv E[q_s]$

$c_{qq_s} \equiv \dfrac{E[(q - \bar q)(q_s - \bar q_s)]}{\sigma_q \sigma_{q_s}}$
Unbiased, Variance-reduced estimator
$E[q] = E[q - \alpha q_s] + E[\alpha q_s] = E[s]$

$s \equiv q - \alpha q_s + \alpha \bar q_s$

$\sigma^2_s = \sigma^2_q - 2\alpha c_{qq_s} \sigma_q \sigma_{q_s} + \alpha^2 \sigma^2_{q_s}$

Integrating $s$ via MC sampling: $\alpha \equiv 1$, $q_s \to q$ gives $\sigma_{e[\bar q]} \to 0$.
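A minimal control-variate sketch for the mean, assuming paired samples of $q$ and $q_s$ and a known $\bar q_s$ (names are illustrative). The talk takes $\alpha \equiv 1$; the variance-minimizing choice would be $\alpha^* = c_{qq_s}\sigma_q/\sigma_{q_s}$.

```python
import numpy as np

def control_variate_mean(q, qs, qs_mean, alpha=1.0):
    """Estimate E[q] via s = q - alpha*q_s + alpha*E[q_s]; variance shrinks as q_s -> q."""
    q, qs = np.asarray(q, float), np.asarray(qs, float)
    s = q - alpha * qs + alpha * qs_mean
    return s.mean(), s.std(ddof=1) / np.sqrt(len(s))   # estimate and its error width
```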
Sensitivity Derivative Enhancement

SDEMC Surrogate (sketched below)
• Justification: adjoints are cheap
• One linearization at the input mean
  – Adds one forward solve, one sensitivity solve
• $q_s(\xi) \equiv q(\bar\xi) + q_\xi(\bar\xi)(\xi - \bar\xi)$
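A sketch of that combination under these assumptions: one forward solve and one adjoint-based gradient at the input mean define the surrogate, whose mean is known exactly because it is linear in $\xi$ (the callables and names are ours).

```python
import numpy as np

def sdemc_mean(q_fn, grad_q_fn, xi_samples, xi_mean):
    """SDEMC-style mean: surrogate linearized at the input mean, used as a control variate.

    Assumes the samples are drawn from a distribution whose mean is xi_mean, so that
    E[q_s] = q(xi_mean) exactly.
    """
    xi = np.asarray(xi_samples, float)            # shape (Ns, Np)
    q_bar = q_fn(xi_mean)                         # one extra forward solve at the mean
    g = np.asarray(grad_q_fn(xi_mean), float)     # one sensitivity (adjoint-based) solve
    qs = q_bar + (xi - xi_mean) @ g               # q_s(xi) = q(xi_bar) + q_xi(xi_bar)(xi - xi_bar)
    q = np.array([q_fn(x) for x in xi])           # the Ns true forward solves
    s = q - qs + q_bar                            # alpha = 1 control-variate combination
    return s.mean()
```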
Local Sensitivity Derivative Enhancement
LSDEMC Surrogate (sketched below)
• Take any input sample set
• Evaluate forward and adjoint at each sample
  – Adds one sensitivity solve per sample
• Linearize around each nearby sample
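A sketch of the local surrogate, assuming forward values and adjoint-based gradients are stored at every sample and reading "nearby" as the nearest anchor point (a Voronoi-cell rule); this is our illustration, not necessarily the talk's exact construction.

```python
import numpy as np

def lsdemc_surrogate(x, anchors, q_at_anchors, grad_at_anchors):
    """Piecewise-linear surrogate: linearize about the nearest anchor sample."""
    anchors = np.asarray(anchors, float)
    k = int(np.argmin(np.linalg.norm(anchors - x, axis=1)))   # nearest anchor (Voronoi cell)
    return q_at_anchors[k] + np.asarray(grad_at_anchors[k]) @ (x - anchors[k])
```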
Bias
Problem
• Use every sample to construct the surrogate?
• $E\left[\sum_{i=1}^{N_s} q(\xi_i) - q_s(\xi_i)\right] = 0$
• $E\left[\bar q^L\right] = \bar q^L_s \ne \bar q$
• May have systematic bias for any problem
• Huge error for high-dimensional benchmark problems
Solution
• Subdivide the sample set (e.g. into 2, 4, or 8 subsets; HLHS applicable)
• Use one subset to construct the surrogate, the remainder to integrate the bias
• Repeat for all subsets; average.
Bias
Unbiased LSDEMC Estimator
$\dfrac{1}{N_R}\sum_{r=1}^{N_R}\left[\dfrac{1}{N_{ss}}\sum_{l=1}^{N_{ss}} Q_{r,1}(\xi_l) + \dfrac{1}{N_s - N_s/N_R}\sum_{c=1}^{N_s - N_s/N_R}\left(Q(\xi_c) - Q_{r,1}(\xi_c)\right)\right]$
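A sketch of the subset-averaged estimator above, assuming $Q_{r,1}$ denotes the surrogate built from subset $r$, the $\xi_l$ are cheap fresh surrogate samples from $p_\xi$, and the $\xi_c$ are the held-out true samples; the sampler callable, the nearest-anchor surrogate rule, and the names are ours.

```python
import numpy as np

def unbiased_lsdemc_mean(xi, q, grad_q, sample_inputs, n_subsets=4, n_surrogate=512, rng=None):
    """Subset-averaged LSDEMC mean: surrogate integral plus bias correction from held-out samples.

    xi, q, grad_q : true samples (Ns, Np), their QoI values (Ns,), and gradients (Ns, Np)
    sample_inputs : callable n -> (n, Np) fresh samples from the input PDF p_xi
    """
    rng = np.random.default_rng() if rng is None else rng
    xi, q, grad_q = np.asarray(xi, float), np.asarray(q, float), np.asarray(grad_q, float)
    subsets = np.array_split(rng.permutation(len(q)), n_subsets)

    def surrogate(x, anchors):
        # piecewise-linear surrogate built only from the anchor subset
        k = anchors[np.argmin(np.linalg.norm(xi[anchors] - x, axis=1))]
        return q[k] + grad_q[k] @ (x - xi[k])

    estimates = []
    for r in range(n_subsets):
        anchors = subsets[r]
        held_out = np.concatenate([subsets[s] for s in range(n_subsets) if s != r])
        surr_term = np.mean([surrogate(x, anchors) for x in sample_inputs(n_surrogate)])
        bias_term = np.mean([q[c] - surrogate(xi[c], anchors) for c in held_out])
        estimates.append(surr_term + bias_term)
    return float(np.mean(estimates))
```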
Convergence
Proofs
• Holding $N_s/N_R$ constant
• Assuming sufficiently smooth $Q$
• LSDEMC converges asymptotically faster than Monte Carlo

Issues
• How many subsets is optimal?
  – More subsets ⇒ larger $N$ integrating the bias term
  – Fewer subsets ⇒ smaller $\sigma$ in the bias term
  – Numerical results show dimension dependence
• What convergence rate to expect?
Enhanced Monte Carlo: Results
Visualizing 250 Thousand Trials
Methodology
• Dependent variable: mean(abs(error)) from 256 trials per case
• 32, 128, 512 true sample evaluations per trial
• 32, 128, 512 surrogate samples per true sample
• Statistics: mean, standard deviation
• Sampling: SRS, HLHS
• Methods: MC, SDEMC, LSDEMC2/4/8
Graph Key
• Purple: MC, Red: SDEMC
• Blue/Cyan/Green: LSDEMC2/4/8
• X: SRS, O: LHS

[Figure: Forward UQ convergence (mean); error in approximated output vs. number of forward evaluations for MC, SDEMC, and LSDEMC2/4/8, each with SRS and LHS]
Benchmark Problem: Smooth
Normal → Lognormal Distribution

$\xi \equiv \left(\xi_1, \dots, \xi_{N_p}\right)$

$q(\xi) \equiv e^{\sum_i \xi_i}$

$\xi_i \sim N\left(\dfrac{\mu}{N_p}, \dfrac{\sigma}{\sqrt{N_p}}\right)$

$q \sim \text{Log-}N(\mu, \sigma)$
• µ ≡ 1, σ ≡ 1
• Arbitrary dimensionality Np
• All parameters equally significant
• Simple analytic exact solution moments
• Variance, MC error independent of Np
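A sketch of this benchmark: since $\sum_i \xi_i \sim N(\mu, \sigma)$, the exact lognormal mean is $e^{\mu + \sigma^2/2}$, independent of $N_p$. The harness below just reports the absolute error of plain MC for one trial (names are ours).

```python
import numpy as np

def exp_benchmark_error(n_samples, n_params=16, mu=1.0, sigma=1.0, rng=None):
    """|MC mean - exact mean| for q(xi) = exp(sum_i xi_i), xi_i ~ N(mu/Np, sigma/sqrt(Np))."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.normal(mu / n_params, sigma / np.sqrt(n_params), size=(n_samples, n_params))
    q = np.exp(xi.sum(axis=1))
    exact_mean = np.exp(mu + sigma**2 / 2)      # lognormal mean; independent of Np
    return abs(q.mean() - exact_mean)
```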
1 Parameter
[Figure: Forward UQ convergence, exp benchmark mean, 1 parameter; error in approximated output vs. number of forward evaluations for MC, SDEMC, and LSDEMC2/4/8 with SRS and LHS]
4 Parameters
[Figure: Forward UQ convergence, exp benchmark mean, 4 parameters; same curves as above]
16 Parameters
[Figure: Forward UQ convergence, exp benchmark mean, 16 parameters; same curves as above]
64 Parameters

[Figure: Forward UQ convergence, exp benchmark mean, 64 parameters; same curves as above]
Benchmark Problem: $C^0$ abs
Normal → Folded Normal Distribution
$\xi \equiv \left(\xi_1, \dots, \xi_{N_p}\right)$

$q(\xi) \equiv \left|\sum_i \xi_i\right|$

$\xi_i \sim N\left(\dfrac{\mu}{N_p}, \dfrac{\sigma}{\sqrt{N_p}}\right)$

$q \sim \mathcal{F}\text{-}N(\mu, \sigma)$
• µ ≡ 1, σ ≡ 1
• Same benefits as previous benchmark
• Response function now piecewise linear
• Differentiable except on one hyperplane
• Derivative defined as $\vec 0$ there (handled as such in the sketch below)
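A matching sketch for this benchmark (names are ours). The exact folded-normal mean $\sigma\sqrt{2/\pi}\,e^{-\mu^2/(2\sigma^2)} + \mu\,\mathrm{erf}\left(\mu/(\sigma\sqrt{2})\right)$ is the standard formula, and the gradient follows the zero-on-the-kink convention above.

```python
import math
import numpy as np

def abs_benchmark(n_samples, n_params=16, mu=1.0, sigma=1.0, rng=None):
    """Samples, gradients, and exact mean for q(xi) = |sum_i xi_i|, xi_i ~ N(mu/Np, sigma/sqrt(Np))."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.normal(mu / n_params, sigma / np.sqrt(n_params), size=(n_samples, n_params))
    s = xi.sum(axis=1)
    q = np.abs(s)
    grad = np.sign(s)[:, None] * np.ones_like(xi)   # dq/dxi_i = sign(sum); 0 on the kink hyperplane
    exact_mean = (sigma * math.sqrt(2.0 / math.pi) * math.exp(-mu**2 / (2 * sigma**2))
                  + mu * math.erf(mu / (sigma * math.sqrt(2.0))))
    return xi, q, grad, exact_mean
```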
4 Parameters
[Figure: Forward UQ convergence, abs benchmark mean, 4 parameters; error in approximated output vs. number of forward evaluations for MC, SDEMC, and LSDEMC2/4/8 with SRS and LHS]
64 Parameters
[Figure: Forward UQ convergence, abs benchmark mean, 64 parameters; same curves as above]
Future Improvements
Adaptivity
• Regularization of "peak value" QoIs

HILHS
• Correlation reduction improvements

LSDEMC
• Stochastic Voronoi moments lemma
• Anisotropic Voronoi metric
• Smooth (multi-sample-based) surrogates
Questions?