Lecture 10: Nonuniqueness and Localized Averages
Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems
Purpose of the Lecture
Show that null vectors are the source of nonuniqueness
Show why some localized averages of model parameters are unique while others aren’t
Show how nonunique averages can be bounded using prior information on the bounds of the underlying model parameters
Introduce the Linear Programming Problem
Part 1: null vectors as the source of nonuniqueness in linear inverse problems
suppose two different solutions, m(1) and m(2), exactly satisfy the same data
G m(1) = dobs and G m(2) = dobs
since there are two, the solution is nonunique
then the difference between the solutions satisfies
G [m(1) – m(2)] = 0
the quantity
mnull = m(1) – m(2)
is called a null vector
it satisfies
G mnull = 0
an inverse problem can have more than one null vector
mnull(1), mnull(2), mnull(3), ...
any linear combination of null vectors is a null vector
α mnull(1) + β mnull(2) + γ mnull(3) is a null vector for any α, β, γ
suppose that a particular choice of model parameters mpar satisfies
G mpar = dobs with error E
then the general solution
mgen = mpar + Σi αi mnull(i)
has the same error E for any choice of the αi
since
e = dobs – G mgen = dobs – G mpar – Σi αi G mnull(i) = dobs – G mpar + Σi αi 0 = dobs – G mpar
since the αi are arbitrary, the solution is nonunique
hence an inverse problem is nonunique if it has null vectors
example
consider the inverse problem
[1/4, 1/4, 1/4, 1/4] [m1, m2, m3, m4]T = [d1]
a solution with zero error is mpar = [d1, d1, d1, d1]T
the null vectors are easy to work out:
mnull(1) = [1, –1, 0, 0]T, mnull(2) = [0, 1, –1, 0]T, mnull(3) = [0, 0, 1, –1]T
note that G times any of these vectors is zero
the general solution to the inverse problem is
mgen = [d1, d1, d1, d1]T + α1 mnull(1) + α2 mnull(2) + α3 mnull(3)
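As a quick numerical check of this example, here is a minimal MATLAB sketch (using the averaging kernel and null vectors written above) verifying that G annihilates the null vectors and that the general solution fits the datum for any choice of the αi:

  % check the null vectors and the general solution for the 1x4 averaging kernel
  G  = [1/4, 1/4, 1/4, 1/4];      % data kernel: d1 is the mean of the four model parameters
  d1 = 3.0;                       % an arbitrary value for the observed datum
  mpar = d1*ones(4,1);            % particular solution with zero error
  % the three null vectors, one per column
  Mnull = [ 1  0  0;
           -1  1  0;
            0 -1  1;
            0  0 -1 ];
  disp(G*Mnull)                   % all zeros: G annihilates every null vector
  alpha = randn(3,1);             % arbitrary coefficients
  mgen  = mpar + Mnull*alpha;     % the general solution
  disp(G*mgen - d1)               % zero: the error does not depend on alpha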
Part 2: why some localized averages are unique while others aren't
let's denote a weighted average of the model parameters as
<m> = aT m, where a is the vector of weights
a may or may not be “localized”
examples:
a = [0.25, 0.25, 0.25, 0.25]T (not localized)
a = [0.90, 0.07, 0.02, 0.01]T (localized near m1)
now compute the average of the general solution:
<m> = aT mgen = aT mpar + Σi αi aT mnull(i)
if the term aT mnull(i) is zero for all i, then <m> does not depend on the αi,
so the average is unique
an average <m> = aT m is unique if the average of every null vector is zero, that is, if aT mnull(i) = 0 for all i
if we just pick an average out of the hat because we like it ... it's nicely localized ...
chances are that it will not zero all the null vectors,
so the average will not be unique
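For instance, the two example averaging vectors above can be tested against a null-space basis of the Part 1 example kernel (a minimal MATLAB sketch, assuming that kernel):

  % test whether an averaging vector gives a unique average
  G  = [1/4, 1/4, 1/4, 1/4];
  Mnull = null(G);                 % orthonormal basis for the null space of G (4x3)
  a1 = [0.25; 0.25; 0.25; 0.25];   % not localized
  a2 = [0.90; 0.07; 0.02; 0.01];   % localized near m1
  disp(a1'*Mnull)                  % zero (to machine precision): <m> = a1'*m is unique
  disp(a2'*Mnull)                  % nonzero entries: <m> = a2'*m is nonunique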
relationship to the model resolution matrix R
a row of the resolution matrix R = G–g G has the form aT, with aT a linear combination of the rows of the data kernel G; since every row of G zeroes every null vector, such an average is unique
if we just pick an average out of the hat because we like it ... it's nicely localized ...
it's not likely that it can be built out of the rows of G,
so it will not be unique
suppose we pick an average that is not unique
is it of any use?
Part 3: bounding localized averages, even though they are nonunique
we will now show that if we can put weak bounds on m, they may translate into stronger bounds on <m>
example
<m> = aT m with a = [1/3, 1/3, 1/3, 0]T
so, substituting the general solution mgen = [d1, d1, d1, d1]T + α1 mnull(1) + α2 mnull(2) + α3 mnull(3),
<m> = d1 + (1/3) α3
nonunique, because α3 is arbitrary
but suppose mi is bounded: 0 ≤ mi ≤ 2d1
the bound on m4 = d1 – α3 implies
smallest α3 = –d1, largest α3 = +d1
so
(2/3) d1 ≤ <m> ≤ (4/3) d1
bounds on <m> tighter than bounds on mi
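The tightened bounds in this small example can be checked by brute force; the MATLAB sketch below assumes the kernel, null-vector basis, and averaging vector written above:

  % brute-force check of the bounds on <m> for the Part 1 example
  d1 = 1;                                    % arbitrary value for the datum
  mpar = d1*[1; 1; 1; 1];                    % zero-error particular solution
  % null-vector basis from Part 1, one vector per column
  Mnull = [ 1  0  0;
           -1  1  0;
            0 -1  1;
            0  0 -1 ];
  a = [1/3; 1/3; 1/3; 0];                    % localized average of the first three m's
  step = 0.05*d1;
  lo = +Inf;  hi = -Inf;
  for a1 = -2*d1:step:2*d1
    for a2 = -2*d1:step:2*d1
      for a3 = -2*d1:step:2*d1
        m = mpar + Mnull*[a1; a2; a3];       % a candidate general solution
        if all(m >= 0) && all(m <= 2*d1)     % keep only models obeying the prior bounds
          v  = a'*m;
          lo = min(lo, v);  hi = max(hi, v);
        end
      end
    end
  end
  fprintf('%.3f <= <m> <= %.3f\n', lo, hi);  % approximately (2/3)*d1 and (4/3)*d1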
the question is how to do this in more complicated cases
Part 4: the Linear Programming Problem
the Linear Programming problem:
find the x that minimizes z = fT x
subject to A x ≤ b, Aeq x = beq, and blower ≤ x ≤ bupper
flipping the sign of f switches minimization to maximization
flipping the signs of A and b switches A x ≤ b to A x ≥ b
in business
f is the unit profit of each product, x is the quantity of each product, and z = fT x is the total profit
one maximizes the profit
subject to constraints: no negative production, the physical limitations of the factory, government regulations, etc.
and cares about both the profit z and the product quantities x
in our case
f is the averaging vector a, x is m, and z is the average <m> = aT m
the A x ≤ b constraint is not needed; the bounds blower ≤ x ≤ bupper are the prior bounds on m; the equality constraint Aeq x = beq is G m = d
first minimize <m>, then maximize <m>, to obtain its lower and upper bounds
we care only about <m>, not about m itself
In MatLab
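As a minimal sketch (assuming the data kernel G, data dobs, averaging vector a, and bound vectors mlb and mub are already defined), the bounding calculation can be set up with the Optimization Toolbox function linprog:

  % bound the localized average <m> = a'*m subject to G*m = dobs and mlb <= m <= mub
  [mmin, lower] = linprog( a, [], [], G, dobs, mlb, mub);   % minimize  a'*m
  [mmax, upper] = linprog(-a, [], [], G, dobs, mlb, mub);   % maximize  a'*m
  upper = -upper;                                           % undo the sign flip of f
  fprintf('%f <= <m> <= %f\n', lower, upper);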
Example 1
simple data kernel, one datum: the sum of the mi is zero
bounds: |mi| ≤ 1
average: unweighted average of K model parameters
[Figure: bound on the absolute value of the weighted average, |<m>|, versus the averaging width K]
if you know that the sum of 20 things is zero, and
if you know that each of the things is bounded by ±1, then you know that
the average of 19 of the things is bounded by about ±0.1
for K > 10, <m> has tighter bounds than mi
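Example 1 can be reproduced with linprog along the following lines (a minimal MATLAB sketch; the variable names and plotting commands are illustrative):

  % bound the unweighted average of the first K of M=20 model parameters,
  % given the single datum sum(m) = 0 and the prior bounds |mi| <= 1
  M    = 20;
  G    = ones(1,M);                 % data kernel: the single datum is the sum of the m's
  dobs = 0;                         % the sum is observed to be zero
  mlb  = -ones(M,1);                % lower bounds, mi >= -1
  mub  =  ones(M,1);                % upper bounds, mi <= +1
  boundK = zeros(M,1);
  for K = 1:M
      a = [ones(K,1)/K; zeros(M-K,1)];                   % unweighted average of the first K m's
      [~, lo] = linprog( a, [], [], G, dobs, mlb, mub);  % minimum of <m>
      [~, hi] = linprog(-a, [], [], G, dobs, mlb, mub);  % -(maximum of <m>)
      boundK(K) = max(abs(lo), abs(hi));                 % bound on |<m>|
  end
  plot(1:M, boundK, 'k-');
  xlabel('width K');  ylabel('bound on |<m>|');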
Example 2
more complicated data kernel: dk is a weighted average of the first 5k/2 of the m's
bounds: 0 ≤ mi ≤ 1
average: localized average of 5 neighboring model parameters
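The exact kernel of this example is only summarized above, so the MATLAB sketch below uses a hypothetical stand-in: row k of G is a normalized, exponentially decaying (Laplace-transform-like) weighting whose width grows with k, and the true model increases with depth. It shows how the 5-point localized average could be bounded with linprog:

  % Example 2 sketch with a hypothetical, Laplace-transform-like kernel
  M = 100;  N = 40;
  z = (1:M)'/10;                        % depths z_i
  mtrue = z/10;                         % hypothetical true model, increasing with depth
  G = zeros(N,M);
  for k = 1:N
      w = exp(-(1:M)/(2.5*k));          % decaying weights whose width grows with k
      G(k,:) = w/sum(w);                % row k: a weighted average of the shallow m's
  end
  dobs = G*mtrue;                       % noise-free synthetic data
  mlb = zeros(M,1);  mub = ones(M,1);   % prior bounds 0 <= mi <= 1
  i0 = 50;                              % center of the 5-point localized average
  a  = zeros(M,1);  a(i0-2:i0+2) = 1/5;
  [~, lower] = linprog( a, [], [], G, dobs, mlb, mub);   % minimize <m>
  [~, upper] = linprog(-a, [], [], G, dobs, mlb, mub);   % maximize <m>
  fprintf('%.3f <= <m> <= %.3f (true value %.3f)\n', lower, -upper, a'*mtrue);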
[Figure, panels (A) and (B): the data kernel G and the data dobsj; the model m, including the true mi(zi), plotted against depth zi; w denotes the averaging width]
the data kernel G is complicated, but reminiscent of a Laplace Transform kernel
the true mi increases with depth zi
minimum length solution
lower bound and upper bound on the solution
lower bound and upper bound on the average