GEO7600 Inverse Theory 09 Oct 2008
Inverse Theory: Goals are to (1) Solve for parameters from observational data; (2) Know something about the range of models that can fit the data within uncertainties
From last time, Nonlinear Inverse Problems F(m) = d can be solved by any of several approaches:
Approach 1: Apply a linearizing transformation (but then must apply the same transformation to data uncertainties, which may not be trivial!)
Approach 2: Grid search via forward modeling all possible combinations of the parameter vector m, evaluating the resulting residual norms. This can be very computationally expensive, and one must be careful to explore the entire likely solution space…
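A minimal sketch of Approach 2 in NumPy, using the exponential forward model that appears later in these notes; the grid bounds, grid resolution, and synthetic "observed" data here are arbitrary choices for illustration:

```python
import numpy as np

# Hypothetical forward model: F_i(m) = m1 * exp(-m2 * x_i)
x = np.linspace(0.0, 4.0, 20)
m_true = np.array([2.0, 0.7])
d_obs = m_true[0] * np.exp(-m_true[1] * x)   # noise-free synthetic data

# Grid search: forward-model every (m1, m2) combination on a grid and
# keep the parameter pair with the smallest residual norm.
m1_grid = np.linspace(0.5, 4.0, 200)
m2_grid = np.linspace(0.1, 2.0, 200)
best = (np.inf, None)
for m1 in m1_grid:
    for m2 in m2_grid:
        r = d_obs - m1 * np.exp(-m2 * x)
        misfit = np.linalg.norm(r)
        if misfit < best[0]:
            best = (misfit, (m1, m2))
print(best[1])   # grid point nearest (2.0, 0.7)
```

Note the cost: even this toy 2-parameter problem requires 40,000 forward models; the count grows exponentially with the number of parameters, which is why the gradient-based approach below is usually preferred when derivatives are available.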
Inverse Theory for Nonlinear Problems
Approach 3: Iterative solution based on gradient search:
Multi-dimensional Taylor series expansion about some initial model “guess” $\mathbf{m}_0$ is given by:

$$F_i(\mathbf{m}_0 + \Delta\mathbf{m}) = F_i(\mathbf{m}_0) + \frac{\partial F_i(\mathbf{m}_0)}{\partial m_1}\,\Delta m_1 + \frac{\partial F_i(\mathbf{m}_0)}{\partial m_2}\,\Delta m_2 + \cdots + \frac{\partial F_i(\mathbf{m}_0)}{\partial m_M}\,\Delta m_M + O\!\left(\|\Delta\mathbf{m}\|^2\right)$$

where

$$\Delta\mathbf{m} = \left(\Delta m_1, \Delta m_2, \ldots, \Delta m_M\right)^T$$
We want to find the Δm that approximates the difference between m0 & mtrue. For N data points,
$$F(\mathbf{m}_0 + \Delta\mathbf{m}) - F(\mathbf{m}_0) \approx G(\mathbf{m}_0)\,\Delta\mathbf{m}$$

with sensitivity coefficients

$$G_{ij} = \frac{\partial F_i(\mathbf{m}_0)}{\partial m_j}$$
To get the desired $\Delta\mathbf{m}$, we seek the $\Delta\mathbf{m}$ that satisfies:

$$\mathbf{d} = F(\mathbf{m}_0 + \Delta\mathbf{m})$$

Substituting for $F(\mathbf{m}_0 + \Delta\mathbf{m})$ in the Taylor series approximation,

$$\mathbf{d} - F(\mathbf{m}_0) \equiv \Delta\mathbf{d} = G\,\Delta\mathbf{m}$$

We can solve for $\Delta\mathbf{m}$ using the same linear pseudo-inverse techniques we have used up to now!

$$\Delta\mathbf{m} = G^{+}\,\Delta\mathbf{d}$$
Because we truncated the Taylor series at first order, our estimate is only accurate to $O\!\left(\|\Delta\mathbf{m}\|^2\right)$. But we can update our model “guess” to be

$$\mathbf{m}_1 = \mathbf{m}_0 + \Delta\mathbf{m}$$

and then iterate…
Since each new model mk should be closer to mtrue than the previous one, and the error in each successive estimate decreases in proportion to the squared length of Δm, the model estimate should converge to mtrue, provided the starting guess m0 is close enough (and subject to errors in d, as in the linear case).
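The iteration above can be sketched in a few lines of NumPy. The forward model here is the exponential example used later in these notes; the synthetic data, starting guess, and iteration limits are arbitrary choices for illustration:

```python
import numpy as np

def F(m, x):
    """Forward model F_i(m) = m1 * exp(-m2 * x_i)."""
    return m[0] * np.exp(-m[1] * x)

def G(m, x):
    """Sensitivity matrix G_ij = dF_i/dm_j (N x M)."""
    return np.column_stack([np.exp(-m[1] * x),
                            -m[0] * x * np.exp(-m[1] * x)])

x = np.linspace(0.0, 4.0, 30)
m_true = np.array([2.0, 0.7])
d = F(m_true, x)                  # noise-free synthetic data

m = np.array([1.5, 0.5])          # initial guess m0
for _ in range(20):               # iterate m_{k+1} = m_k + G^+ (d - F(m_k))
    dd = d - F(m, x)              # Delta d
    dm = np.linalg.pinv(G(m, x)) @ dd   # Delta m via pseudo-inverse
    m = m + dm
    if np.linalg.norm(dm) < 1e-10:      # stop when the update is negligible
        break
print(m)   # recovers approximately [2.0, 0.7]
```

With noise-free data and a reasonable starting guess the iteration converges in a handful of steps; with a poor starting guess it can stall or diverge, which is the practical cost of linearizing.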
The sensitivity coefficients

$$G_{ij} = \frac{\partial F_i}{\partial m_j}$$

are determined for i = 1,2,…,N data and j = 1,2,…,M model parameters, so G is an N×M matrix. Note however that this implies the physical model must be differentiable…

Consider as an example fitting an exponential:

$$F_i(\mathbf{m}) = m_1 \exp\left(-m_2 x_i\right)$$

We will have:

$$G_{i1} = \exp\left(-m_2 x_i\right)$$

$$G_{i2} = -m_1 x_i \exp\left(-m_2 x_i\right)$$
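Hand-derived sensitivities like these are easy to get wrong (sign errors especially), so it is worth spot-checking them against central finite differences; the x values and test point below are arbitrary:

```python
import numpy as np

x = np.linspace(0.0, 3.0, 10)
m1, m2 = 2.0, 0.7                # arbitrary test point

# Analytic sensitivity columns for F_i(m) = m1 * exp(-m2 * x_i)
G1 = np.exp(-m2 * x)             # dF_i/dm1
G2 = -m1 * x * np.exp(-m2 * x)   # dF_i/dm2

# Central finite differences as an independent check
h = 1e-6
F = lambda a, b: a * np.exp(-b * x)
G1_fd = (F(m1 + h, m2) - F(m1 - h, m2)) / (2 * h)
G2_fd = (F(m1, m2 + h) - F(m1, m2 - h)) / (2 * h)
print(np.max(np.abs(G1 - G1_fd)), np.max(np.abs(G2 - G2_fd)))
```

Both differences should be near machine-precision small; a large mismatch in one column points directly at the offending derivative.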
Note that most of the same tools we used for the linear problem (e.g., estimates of parameter error from the parameter covariance matrix; model resolution and covariance matrices; etc.) still apply to the iterative solution for a nonlinear model… The main difference is that we apply these metrics to the final (iterated) model estimate, using the sensitivity matrix G of the best-fitting model.
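As a sketch of this last point, here is the linear-theory error propagation applied at the converged model of the exponential example; the converged estimate, data standard deviation, and assumption of independent, equal-variance data errors are all illustrative:

```python
import numpy as np

# Sensitivity matrix G evaluated at the best-fitting (converged) model
x = np.linspace(0.0, 4.0, 30)
m = np.array([2.0, 0.7])                     # converged estimate (assumed)
G = np.column_stack([np.exp(-m[1] * x),
                     -m[0] * x * np.exp(-m[1] * x)])

sigma = 0.05                                 # assumed data standard deviation
Cm = sigma**2 * np.linalg.inv(G.T @ G)       # parameter covariance matrix
param_err = np.sqrt(np.diag(Cm))             # 1-sigma parameter errors
R = np.linalg.pinv(G) @ G                    # model resolution matrix
print(param_err)
```

For this overdetermined, full-rank problem the resolution matrix R is the identity, as in the linear case; the covariance matrix is where the nonlinearity shows up indirectly, through the dependence of G on the final model.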