-
Inverse Theory Applications in
Petrophysics
K.C. Hari Kumar
ONGC, Baroda. 2013
-
Presentation Overview
1. Interpretation: Issues and Challenges
2. Inverse Theory Application
3. SVD and Anatomy of Inverse Problem
4. Regularization Methods
ONGC a Wealth Creator
-
Data Inversion
-
Role of numerical methods in Petrophysics
The down-hole measurements (N in number) make up the data used for volumetric parameter estimation:
    data: d = [d1, d2, ..., dN]^T
The earth model, in terms of mineral volumes, is to be retrieved:
    model parameters: m = [m1, m2, ..., mM]^T
A physical theory or quantitative model that predicts the data constitutes the forward problem. Inverse theory aims at estimating an earth model from the data.
-
Forward Theory: model estimates (m^est) → quantitative model → data predictions (d^pre)
Inverse Theory: data observations (d^obs) → quantitative model → model estimates (m^est)
-
Play of Errors
d^pre vs d^obs: observational errors
m^est vs m^true: error propagation
Understanding the effects of observational error is central to inverse theory.
-
7
Down-hole Measurements
The tool response at a depth point, T(d_i), is a composite function of an array of formation properties and other parameters g_i. The log values at depth d_i represent a measurement averaged over an earth volume surrounding the point of measurement, together with the collective signal from various parts of the borehole.
The surrounding formation is represented by a convolution integral of the form:

T(d_i) = ∫∫∫ K(d_i; x, r, θ) g(x, r, θ) dx dr dθ

The kernel K(d_i; x, r, θ) accounts for the geometrical effects included in the integration, ensuring that the signal belongs to the formation in the vicinity of the depth point; x, r, θ are the cylindrical coordinates and g(x, r, θ) is the geophysical property distribution over the probe's sensitive volume.
-
8
Well Log Data Inversion
The integral equation is converted into a set of m linear algebraic equations in n unknowns (m ≥ n or m = n), represented as Ax = b.
A is the response matrix composed of the parameter values of each tool for 100 % of each formation component, x the volume vector of formation components, and b the data vector constituted of the different tool measurements.

Tool   | A: Quartz Calcite Kaolinite Muscovite Water | x: Volumes | b: Tool Data
RHOB   |  2.65    2.71    2.41      2.79      1      |   0.41     |   2.196
NPHI   | -0.06   -0.02    0.37      0.25      1      |   0.07     |   0.318
DT     | 55.5    48       120       55        189    |   0.22     | 102.515
GR     | 12       0       105       270       0      |   0.05     |  41.520
VOLSUM |  1       1       1         1         1      |   0.25     |   1.000
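The forward problem above can be sketched as follows (NumPy used for illustration; the tool-response values are those of the table):

```python
import numpy as np

# Response matrix A: one row per tool (RHOB, NPHI, DT, GR, VOLSUM),
# one column per formation component (quartz, calcite, kaolinite, muscovite, water).
A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],   # RHOB
    [-0.06, -0.02,   0.37,   0.25,   1.0],   # NPHI
    [55.5,  48.0,  120.0,   55.0,  189.0],   # DT
    [12.0,   0.0,  105.0,  270.0,   0.0],    # GR
    [ 1.0,   1.0,    1.0,    1.0,   1.0],    # VOLSUM (unity constraint)
])
x = np.array([0.41, 0.07, 0.22, 0.05, 0.25])  # fractional volumes

b = A @ x  # forward problem: predict the tool data from the volumes
print(b)   # ≈ [2.196, 0.318, 102.515, 41.52, 1.0]
```

This is the "inverse crime" setup used throughout the deck: the data vector b is generated exactly from a known volume vector.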
-
9
The Inverse Problem: x = A⁻¹b
The linear equations Ax = b are solved under the constrained condition m = n, subject to appropriate handling of the ill-conditioned system. One of the earliest demonstrations of the method may be seen in Doveton (1986). The sensitivity of A⁻¹ makes the solution unstable in the presence of noise and round-off error.

x: Volumes      | A⁻¹: Inverse Operator                      | b: Tool Data
Quartz    0.41  | -4.519 -10.642  0.022  0.011 10.968 |   2.196
Calcite   0.07  |  3.827   9.567 -0.030 -0.014 -7.745 |   0.318
Kaolinite 0.22  |  2.174   0.494  0.023 -0.002 -6.976 | 102.515
Muscovite 0.05  | -0.645   0.281 -0.010  0.004  2.225 |  41.520
Water     0.25  | -0.838   0.300 -0.005  0.000  2.528 |   1.000

Geophysical inverse problems are ill-posed, as their solutions are non-unique, unstable, or both.
Hadamard (1902): Existence, Uniqueness and Stability
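A minimal check of this instability (A and b as in the table above; NumPy for illustration): the direct solve reproduces the volumes for noise-free data, but the condition number of A warns that relative errors in b can be amplified by a factor of several thousand:

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])

x = np.linalg.solve(A, b)   # x = A^-1 b, exact for noise-free data
cond = np.linalg.cond(A)    # 2-norm condition number, about 6.8e3
print(x, cond)
```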
-
In general the model that one seeks is a continuous function
of the space variables with infinitely many degrees of freedom.
On the other hand, the data space is discrete and always of
finite dimension because any real experiment can only result
in a finite number of measurements.
A simple count of variables shows that the mapping from the
data to a model cannot be unique; or equivalently, there must
be elements of the model space that have no influence on the
data.
This lack of uniqueness is apparent even for problems
involving idealized, noise-free measurements. The problem
only becomes worse when the uncertainties of real
measurements are taken into account.
There is no guarantee that Ax = b contains enough information for a unique estimate of x; ascertaining the sufficiency of that information devolves upon discrete inverse theory.
10
The Challenge arising from Ax = b
-
11
Model Space and Data Space
The Earth is a model space with infinitely many degrees of freedom
The physics of the experiment decides the model parameterization and the finite data
-
Earth model and the linear Ax = b formulation: A is also known as the sensitivity matrix; it contains the measurements (for 100 % volumes of the minerals) corresponding to the end members.
The volume vector has the obvious constraint that its elements sum to 1.
The tool vector consists of the different measurements, or data, being inverted.
12
Design matrix A · volume vector x = tool data vector b
A linear model from the earth
-
13
Generalized Inverse = Least Squares
Each equation is scaled by its assumed variance (1.5 % of the measurement), so that the data variance is reduced to unity:

A (scaled)                             | x    | b (scaled) | Variance
 77.97   79.73   70.91   82.09   29.42 | 0.41 |   66.67    | 1
-14.16   -4.72   87.32   59.00  235.99 | 0.09 |   66.67    | 1
 38.14   32.98   82.46   37.79  129.88 | 0.22 |   66.67    | 1
 17.05    0.00  149.19  383.63    0.00 | 0.07 |   66.67    | 1
 66.67   66.67   66.67   66.67   66.67 | 0.21 |   66.67    | 1
-
Given the assumptions of normally distributed, uncorrelated errors, the L2 norm is employed to characterize the solution vector x. The generalized inverse gives the minimum-length solution for m > n and becomes the maximum-likelihood solution when m = n.
The method fails when (AᵀA) has no inverse.
14
x = (AᵀA)⁻¹Aᵀb for the scaled system:

(AᵀA)⁻¹                                     | Aᵀb      | x
 0.0538 -0.0424 -0.0276  0.0084  0.0103 | 12377.58 | 0.41
-0.0424  0.0340  0.0204 -0.0061 -0.0077 | 11644.29 | 0.09
-0.0276  0.0204  0.0175 -0.0056 -0.0063 | 30436.01 | 0.22
 0.0084 -0.0061 -0.0056  0.0018  0.0020 | 41945.12 | 0.07
 0.0103 -0.0077 -0.0063  0.0020  0.0023 | 30796.89 | 0.21
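The normal-equations route can be sketched as follows (unscaled A and b from the running example; with noise-free data the weighting does not change the answer):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])

# Generalized inverse via the normal equations: x = (A^T A)^-1 A^T b.
# This fails outright when A^T A is singular; np.linalg.lstsq is the safer route.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_ne, x_ls)
```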
-
15
Parameter Covariance Matrix: C = σ²(AᵀA)⁻¹
The limits, or uncertainty, of the model parameters are estimated from C = σ²(AᵀA)⁻¹. When σ² = 1, the inverse Hessian operator (AᵀA)⁻¹ is the model variance-covariance matrix, with the diagonal elements as the variances.
The uncertainty in each estimated model parameter x_i is obtained as the square root of its variance, converted into a percentage.

Model Uncertainty % (diagonal) & Correlation of x_i | x
23.19 -0.04 -0.03  0.01  0.01 | 0.41
-0.04 18.45  0.02 -0.01 -0.01 | 0.09
-0.03  0.02 13.23 -0.01 -0.01 | 0.22
 0.01 -0.01 -0.01  4.25  0.00 | 0.07
 0.01 -0.01 -0.01  0.00  4.80 | 0.21
-
16
Example 2

Tool | QUAR  SM1   SMEC  ILLI  KAOL  CHLO  W    | x    | b       | STD Error
RHOB | 2.65  2.35  2.12  2.53  2.42  2.77  1.09 | 0.35 | 2.3153  | 0.035
SGR  | 5     150   180   12    44    2.04  0    | 0.05 | 51.333  | 0.770
TH/K | 3     10    12    3.5   14    16    0    | 0.2  | 7.085   | 0.106
TNPH | 0.04  0.4   0.44  0.3   0.37  0.52  1    | 0.09 | 0.3513  | 0.005
PEF  | 2     2     2.04  3.45  1.83  6.37  0.8  | 0.11 | 2.3254  | 0.035
DT   | 53.5  60    60    87    77    100   189  | 0.08 | 80.705  | 1.211
SUM  | 1     1     1     1     1     1     1    | 0.12 | 1       | 0.015

Model Uncertainty % (diagonal) & Correlation of x_i | x
46.82  0.06 -0.01 -0.29 -0.13  0.11  0.05 | 0.35
 0.06 24.75 -0.04 -0.09 -0.04  0.03  0.02 | 0.05
-0.01 -0.04 16.86  0.02  0.01 -0.01  0.00 | 0.20
-0.29 -0.09  0.02 63.24  0.18 -0.14 -0.07 | 0.09
-0.13 -0.04  0.01  0.18 28.18 -0.06 -0.03 | 0.11
 0.11  0.03 -0.01 -0.14 -0.06 22.56  0.03 | 0.08
 0.05  0.02  0.00 -0.07 -0.03  0.03 11.29 | 0.12
-
Anatomy of Inversion
-
18
Singular Value Decomposition
The ill-posed character of the discrete problem, i.e. the structure and numerical behavior of the response matrix A and the modes of the linear transformation, can be understood with the help of singular value decomposition (SVD).
The operator matrix A can be resolved into three component matrices such that:

A = UΣVᵀ = Σ_{i=1..k} σᵢ uᵢ vᵢᵀ

U and V are the orthogonal matrices of left and right singular vectors, representing the data space and the model space respectively, while Σ is a diagonal matrix of singular values σᵢ, which are the amplitudes of the modes of transformation. The generalized-inverse solution is then:

x = A⁺b = Σ_{i=1..k} (uᵢᵀb / σᵢ) vᵢ
-
19
SVD of Operator A
A = UΣVᵀ, with A as given:

 2.65  2.71  2.41  2.79    1
-0.06 -0.02  0.37  0.25    1
 55.5    48   120    55  189
   12     0   105   270    0
    1     1     1     1    1

Left Singular Vectors U:
-0.01336  0.00499  0.94988 -0.09353 -0.29796
-0.00216  0.00364 -0.12703  0.75599 -0.64212
-0.54151  0.84058 -0.01229 -0.00658  0.00126
-0.84056 -0.54164 -0.00878 -0.00058  0.00081
-0.00566  0.00454  0.28525  0.64783  0.70633

Singular Value Spectrum: Σ = diag(319.69, 201.053, 3.281, 0.185, 0.047)

Right Singular Vectors V (columns are the vectors vᵢ):
-0.12569  0.1998   0.61649 -0.09546  0.74506
-0.08144  0.20077  0.69249  0.34282 -0.59665
-0.47946  0.21892  0.04011 -0.80287 -0.27565
-0.80321 -0.49734 -0.0431   0.31647  0.07408
-0.32021  0.79025 -0.37004  0.35862  0.0862
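The decomposition can be reproduced directly (NumPy shown for illustration; A as in the running example):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
U, s, Vt = np.linalg.svd(A)   # rows of Vt are the vectors v_i^T

print(s)               # singular values, largest first
print(s[0] / s[-1])    # condition number, about 6.8e3

# Consistency check: the factors reconstruct A.
assert np.allclose(U @ np.diag(s) @ Vt, A)
```

Note that the signs of individual singular-vector pairs are arbitrary, so a library may return vectors flipped relative to a printed table.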
-
Important insights offered by the SVD in the present case:
The condition number, derived as the ratio of the largest to the smallest singular value, is 6834.
The ill-conditioning indicated by Cond(A) finds support in the elements of the left and right singular vectors uᵢ and vᵢ, which become more oscillatory, i.e. show more sign changes, as the index i increases and the corresponding singular values decrease.
A singular value σᵢ that is small compared to σ₁ = ‖A‖₂ points towards the existence of a certain linear combination of the columns of A, spanned by the elements of the right singular vector vᵢ, such that ‖Avᵢ‖₂ = σᵢ is small. A similar argument holds for the left singular vector uᵢ and the rows of A. Small singular values therefore indicate that the operator is nearly rank-deficient and that the associated uᵢ and vᵢ are effectively numerical null vectors of Aᵀ and A respectively.
20
Implications of the SVD
-
21
Impact of Truncation
The cumulative contribution ratio for the possible truncation stages is expressed as:

CCR(k) = ( Σ_{i=1..k} σᵢ / Σ_{i=1..n} σᵢ ) × 100 %

Mode | σᵢ      | %     | Cumulative % | Normalized σᵢ %
1    | 319.69  | 60.98 | 60.98        | 100.0
2    | 201.053 | 38.35 | 99.33        | 62.9
3    | 3.281   | 0.63  | 99.96        | 1.0
4    | 0.185   | 0.04  | 99.99        | 0.1
5    | 0.047   | 0.01  | 100.00       | 0.0

99.33 % of the total variance is contributed by the 1st and 2nd modes. u₁ and v₁ have no sign changes at all, and sign changes increase through the subsequent columns. σ₃ is 1.02 % of σ₁, while σ₄ and σ₅ are abysmally lower, suggesting a rank-3 approximation.
-
22
Rank 3 Approximation
A₃ = U₃Σ₃V₃ᵀ, x = V₃Σ₃⁻¹U₃ᵀb
The minimum-norm solution x obtained is [0.23, 0.24, 0.21, 0.06, 0.26]. The reconstructed kernel A₃ ≈ A has an effective condition number of 97, against 6834 for A.

Original A                  | Rank-3 Reconstruction A₃           | x
 2.65  2.71  2.41  2.79   1 |  2.66   2.71   2.39   2.80   1.01 | 0.232
-0.06 -0.02  0.37  0.25   1 | -0.02  -0.09   0.47   0.21   0.95 | 0.236
 55.5    48   120    55 189 | 55.50  48.00 120.00  55.00 189.00 | 0.214
   12     0   105   270   0 | 12.00   0.00 105.00 270.00   0.00 | 0.060
    1     1     1     1   1 |  0.99   0.98   1.10   0.96   0.95 | 0.261

Truncated SVD solutions:
x_true | b       | Rank 5 | Rank 4 | Rank 3 | Rank 2
0.41   | 2.196   | 0.41   | 0.24   | 0.23   | 0.10
0.07   | 0.318   | 0.07   | 0.21   | 0.24   | 0.09
0.22   | 102.515 | 0.22   | 0.28   | 0.21   | 0.20
0.05   | 41.520  | 0.05   | 0.03   | 0.06   | 0.07
0.25   | 1.000   | 0.25   | 0.23   | 0.26   | 0.34
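The truncated solutions can be sketched as follows (A, b as before; the rank-k solution keeps only the first k SVD modes):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])
U, s, Vt = np.linalg.svd(A)

def tsvd_solution(k):
    """Truncated SVD solution: sum over the first k modes of (u_i.b / s_i) v_i."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

x3 = tsvd_solution(3)
print(x3)            # ≈ [0.232, 0.236, 0.214, 0.060, 0.261]
print(s[0] / s[2])   # effective condition number of A3, ≈ 97
```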
-
23
Model Resolution (V₅V₅ᵀ = I₅)
Diagonal elements of the resolution matrix equal to 1 indicate good resolution and a unique solution.
Non-zero off-diagonal elements in a row indicate that the predicted value of the parameter is a weighted average of the observed data.
Considering the first row of the rank-4 matrix, for example, [0.44, 0.44, 0.21, -0.06, -0.06], the first element of the predicted data vector b^p = [b1^p, b2^p, b3^p, b4^p, b5^p], viz. b1^p, will be predicted as a weighted average of the observed data d = [d1, d2, d3, d4, d5]. With d = [2.1936, 0.3225, 102.12, 40.38, 0.99], b^p = [2.128, 0.326, 103.141, 39.572, 0.99].
Note: 0.44×2.196 + 0.44×2.196 + 0.21×2.196 − 0.06×2.196 − 0.06×2.196 = 2.128

Model Resolution V₄V₄ᵀ (Rank 4):
 0.44  0.44  0.21 -0.06 -0.06
 0.44  0.64 -0.16  0.04  0.05
 0.21 -0.16  0.92  0.02  0.02
-0.06  0.04  0.02  0.99 -0.01
-0.06  0.05  0.02 -0.01  0.99

Model Resolution V₃V₃ᵀ (Rank 3):
 0.44  0.48  0.13 -0.02 -0.03
 0.48  0.53  0.11 -0.06 -0.07
 0.13  0.11  0.28  0.27  0.31
-0.02 -0.06  0.27  0.89 -0.12
-0.03 -0.07  0.31 -0.12  0.86
-
The data resolution matrix presents the information density, i.e. which data contribute independent information to the solution. A value of 1 for a diagonal element indicates information independent of the other observations. U₃U₃ᵀ ≠ I₃ shows that there exists a data null space and that the model fit to the data will be poor.
These results agree with the ill-conditioned behavior of the original design matrix A, which has fewer effectively independent vectors than its nominal rank of 5 indicates.
24
Data Resolution

Data Resolution U₄U₄ᵀ (Rank 4):
 0.91 -0.19  0.00  0.00  0.21
-0.19  0.59  0.00  0.00  0.45
 0.00  0.00  1.00  0.00  0.00
 0.00  0.00  0.00  1.00  0.00
 0.21  0.45  0.00  0.00  0.50

Data Resolution U₃U₃ᵀ (Rank 3):
 0.90 -0.12  0.00  0.00  0.27
-0.12  0.02  0.01  0.00 -0.04
 0.00  0.01  1.00  0.00  0.00
 0.00  0.00  0.00  1.00  0.00
 0.27 -0.04  0.00  0.00  0.08
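Both resolution matrices follow directly from the truncated SVD factors (a sketch, with A as before):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
U, s, Vt = np.linalg.svd(A)

k = 3
Vk = Vt[:k].T        # first k right singular vectors (columns)
Uk = U[:, :k]        # first k left singular vectors (columns)

R = Vk @ Vk.T        # model resolution: x_est = R @ x_true
N = Uk @ Uk.T        # data resolution:  b_pred = N @ b_obs

# Full rank gives perfect resolution (identity matrix).
assert np.allclose(Vt.T @ Vt, np.eye(5))
print(np.diag(R))    # how well each volume is resolved at rank 3
```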
-
25
SVD Conclusions
The three energetic model factors of the activated space explain 99.96 % of the variance in the data. The first pair u₁ and v₁ contributes 60.98 %, followed by u₂ and v₂ with 38.35 % and u₃ and v₃ adding 0.63 %. Vₖ represents the optimized distribution of row vectors of the model space and Uₖ the optimized distribution of column vectors of the data space.
Singular value decomposition thus enables an objective ranking of uncorrelated modes of variability, or latent variables, which helps to differentiate the data from noise.

Mode | σᵢ      | %     | Cumulative % | Normalized σᵢ %
1    | 319.69  | 60.98 | 60.98        | 100.0
2    | 201.053 | 38.35 | 99.33        | 62.9
3    | 3.281   | 0.63  | 99.96        | 1.0
4    | 0.185   | 0.04  | 99.99        | 0.1
5    | 0.047   | 0.01  | 100.00       | 0.0
-
The original kernel A (5×5) may be viewed as a rank-3 data kernel perturbed by a noise matrix of norm ε. Such a perturbation will change the zero singular values by up to ε, and therefore any singular value < ε stands the chance of being contributed by the noise.
R(A) = 5 needs to be contrasted with the ε-rank R(A, ε) by examining the norm of the data error possible with the kernel A.
For any of the variables in the kernel A, i.e. the columns, the standard error is 1.5 %, and for the set of observations considered, viz. RHOB, NPHI, DT, GR and VOL_SUM, the likely norm of the error may be computed from a likely data vector, say [2.196, 0.318, 102.515, 41.52, 1], for which the errors are [0.033, 0.005, 1.538, 0.623, 0.015]; the norm of the error is then 1.66.
If we consider the balanced uncertainties given by the proprietors of certain software tools, the noise vector would be [0.027, 0.015, 2.25, 6, 1.5], but the value for GR (= 6) is almost unpredictable, and its inclusion may lead to a high error norm. Excluding GR, we obtain 2.7. Hence, by any count, the low singular values seen above, σ₄ = 0.185 and σ₅ = 0.047, cannot be treated as making a genuine addition to the range of the problem.
26
Impact of Noise
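The 1.5 % error-norm figure used above can be checked directly (a sketch; errors taken as 1.5 % of each element of the data vector):

```python
import numpy as np

b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])
e = 0.015 * b                 # 1.5 % standard error per channel
print(e)                      # ≈ [0.033, 0.005, 1.538, 0.623, 0.015]
print(np.linalg.norm(e))      # ≈ 1.66, well above sigma_4 = 0.185 and sigma_5 = 0.047
```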
-
The truncated solutions, theoretically free from noise, differ considerably from the true model x_true used for deriving the data vector b. The Rank-4 and Rank-3 solutions are nearly the same, but significantly different from x_true.
Here a question arises: does the linear formulation have sufficient information content to permit the retrieval of the true model?
Each column of A represents an end-member configuration realizable as data to 2 decimal digits, like 2.65, 0.35, etc. In other words, the forward problem is expressed in terms of 2-decimal-digit values multiplied by a fractional volume vector to yield data accurate to at most 1 decimal digit. Given such a forward problem, can the measurements carry enough information to retrieve the volume vector to 2 decimal digits?
27
Truth of the linear formulation Ax = b
-
The Rank-4 and Rank-3 operators have not contributed much to the stability of the solution, as may be noted from the accompanying data.
Constraints are needed to ensure positive values and to make the solution agree with prior information about the solution x, the volume vector.
The situation demands one or another of the known regularization methods, their suitability judged domain-wise.
28
Round-off Error
b = [2.196, 0.318, 102.515, 41.52, 1]; perturbed data b* = [2.22, 0.3, 103.51, 42.5, 1]

Rank   | x (from b)                        | x* (from b*)                       | Δ L2
Rank-5 | 0.41, 0.07, 0.22, 0.05, 0.25      | 0.53, -0.05, 0.28, 0.02, 0.22      | 0.186
Rank-4 | 0.240, 0.206, 0.282, 0.033, 0.231 | 0.253, 0.165, 0.385, -0.003, 0.188 | 0.125
Rank-3 | 0.232, 0.236, 0.214, 0.060, 0.261 | 0.253, 0.165, 0.385, -0.003, 0.188 | 0.005
-
The application of regularization methods is problem-specific. Discussion of the advantage of one method over another for addressing instability due to rank deficiency in specific kinds of problems is missing in the literature. Tikhonov regularization is quite popular in application.
"Common for all these regularization methods is that they replace the ill-posed problem with a nearby well-posed problem which is much less sensitive to perturbations" - Per Christian Hansen.
Given the rank-deficient problem of comparatively small kernel size that we have, Tikhonov regularization is one of the best options.
29
Model Space
-
Equations of the kind Ax = b do not yield the right numerical solution when the data b contain noise. If it is known that the given data satisfy an error estimate ‖Ax − b‖ ≤ δ, Tikhonov states that an approximate solution can be found by minimizing the regularization functional:

30
Tikhonov Regularization

T_α(x) = ‖Ax − b‖₂² + α²‖Lx‖₂², α > 0

L is typically the identity matrix or a well-conditioned discrete approximation to some derivative operator. In operator notation (with L = I):

x_α = (AᵀA + α²I)⁻¹Aᵀb, α > 0
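A minimal sketch of the Tikhonov solve with L = I (A, b as in the running example; α is the regularization parameter):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])

def tikhonov(alpha):
    """x_alpha = (A^T A + alpha^2 I)^-1 A^T b, i.e. Tikhonov with L = I."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# A vanishing alpha reproduces the least-squares solution;
# increasing alpha shrinks the solution norm at the cost of the data fit.
for alpha in (1e-6, 0.1, 1.0, 5.0):
    x = tikhonov(alpha)
    print(alpha, np.linalg.norm(x), np.linalg.norm(A @ x - b))
```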
-
31
Tikhonov continued
In terms of the SVD, the regularized inverse applies the filter factors σᵢ/(σᵢ² + α²) to each mode:

x_α = V diag( σᵢ / (σᵢ² + α²) ) Uᵀ b        (Regularized Inverse A_α⁺)

As α → 0 the filter factors tend to 1/σᵢ, recovering the generalized inverse:

A⁺ = V diag( 1/σᵢ ) Uᵀ                      (Generalized Inverse A⁺)
-
The application of a small value for α, such as α = 1, in fact gave a solution closer to the truncated A₃ solution. The minimum-norm solution occurs at α = 0; this is not the expected behavior and may be a consequence of the design, or of the use of hypothetical data.
Petrophysicists need the mathematicians' help to interpret the results.
32
Tikhonov Example-1
b       | x_true | α = 0.001: x, Δx | α = 1: x, Δx | α = 5: x, Δx
2.196   | 0.41   | 0.36   0.05      | 0.22   0.19  | 0.19   0.22
0.318   | 0.07   | 0.11  -0.04      | 0.22  -0.15  | 0.19  -0.12
102.515 | 0.22   | 0.24  -0.02      | 0.22   0.00  | 0.21   0.01
41.52   | 0.05   | 0.05   0.00      | 0.06  -0.01  | 0.06  -0.01
1       | 0.25   | 0.24   0.01      | 0.27  -0.02  | 0.29  -0.04
L2 norm | 0.54   | 0.51   0.07      | 0.47   0.24  | 0.45   0.25
(Δx = x_true − x)
-
The L-curve is a plot of the size of the regularized solution versus the size of the corresponding residual for all valid regularization parameters.
The L-curve here does not depict a distinct elbow, but the point of maximum curvature can be identified to lie between α = σ₄ = 0.185 and α = 1.
33
Tikhonov Example-2
b       | x (α = 0) | α = σ₅: x, Ax−b | α = σ₄: x, Ax−b | α = 1: x, Ax−b | α = σ₃: x, Ax−b
2.401   | -1.29     |  0.20  -0.034   | 0.24  -0.046    | 0.24  -0.096   | 0.22  -0.206
0.412   |  1.44     |  0.00  -0.040   | 0.26  -0.032    | 0.25  -0.022   | 0.23  -0.007
115.260 |  1.01     |  0.34   0.000   | 0.27   0.000    | 0.24  -0.001   | 0.23  -0.003
36.749  | -0.20     | -0.01   0.000   | 0.02   0.000    | 0.03   0.001   | 0.04   0.004
1.0     |  0.04     |  0.26   0.082   | 0.29   0.090    | 0.31   0.079   | 0.33   0.047
‖x‖₂, ‖Ax−b‖₂ | 4.8 |  0.31   0.0096  | 0.29   0.0111   | 0.28   0.0160  | 0.26   0.0446
[L-curve: ‖x‖₂ versus ‖Ax−b‖₂, with points marked at α = σ₅ = 0.046, α = σ₄ = 0.185, α = 1 and α = σ₃ = 3.28]
-
Considering the few solutions likely to be valid, for α = σ₄ = 0.185, α = 1 and α = 2, it can be seen that, for the log data vector used, the porosity value x₅ increases as α increases, i.e. 0.29, 0.31, 0.32. This raises a question mark over the validity of the regularization, as the log values cannot correspond to so high a porosity.
34
Validity of the Regularized Solution

b       | α = σ₄: x, Ax−b | α = 1: x, Ax−b | α = 2: x, Ax−b
2.401   | 0.24  -0.046    | 0.24  -0.096   | 0.23  -0.149
0.412   | 0.26  -0.032    | 0.25  -0.022   | 0.24  -0.014
115.260 | 0.27   0.000    | 0.24  -0.001   | 0.24  -0.002
36.749  | 0.02   0.000    | 0.03   0.001   | 0.03   0.002
1.0     | 0.29   0.090    | 0.31   0.079   | 0.32   0.064
‖x‖₂, ‖Ax−b‖₂ | 0.29  0.0111 | 0.28  0.0160 | 0.27  0.0265
x₅₁ = 1 − (x₁₁ + … + x₄₁) | 0.21 | 0.24 | 0.26

With the unity constraint imposed and the porosity derived as x₅₁ = 1 − (x₁₁ + … + x₄₁), the porosity appears distorted, exceeding 0.20: viz. 0.24 when α = 1 and 0.26 when α = 2. For a matrix density of 2.65, the porosity could have been only around 0.15.
-
35
Tikhonov Regularization with a Prior
x_α = arg min ‖Ax − b‖₂² + α²‖x − x_prior‖₂², whence x_α = (AᵀA + α²I)⁻¹(Aᵀb + α²x_prior)

x_prior | α = 0.23: x, Ax−b | α = 0.30: x, Ax−b | α = 1.0: x, Ax−b | α = 5: x, Ax−b | α = 15: x, Ax−b
0.46    | 0.40  -0.033      | 0.43  -0.033      | 0.46  -0.028     | 0.48   0.027   | 0.48   0.049
0.07    | 0.11  -0.057      | 0.09  -0.056      | 0.07  -0.054     | 0.08  -0.061   | 0.09  -0.064
0.22    | 0.32   0.000      | 0.29   0.000      | 0.25  -0.001     | 0.25  -0.012   | 0.25  -0.098
0.05    | 0.00   0.000      | 0.00   0.000      | 0.02   0.000     | 0.02   0.005   | 0.02   0.046
0.20    | 0.27   0.085      | 0.27   0.088      | 0.29   0.096     | 0.28   0.113   | 0.28   0.119
‖x‖₂, ‖Ax−b‖₂: prior 0.31 | 0.3418  0.0116 | 0.3511  0.0121 | 0.3672  0.0129 | 0.3775  0.0174 | 0.3810  0.0325
Σx      | 1.09              | 1.09              | 1.10             | 1.11           | 1.12

x₅₁ kept increasing as α increased, and the unity constraint was not adhered to in the above exercise of regularization. With the x_prior initially assumed for deriving b, the minimum-norm solution and the minimum residual norm occur at α = 0.
-
The unity constraint may be enforced by deriving x₅₁ = 1 − (x₁₁ + x₂₁ + x₃₁ + x₄₁), or by altering the algorithm to solve for only (n − 1) elements and deriving the nth element as 1 minus the sum of the other (n − 1).

36
Prior with Unity Constraint

Volumes | α = 0.23 | α = 0.30 | α = 1 | α = 5 | α = 15 | No Prior
x₁₁     | 0.40     | 0.43     | 0.46  | 0.48  | 0.48   | 0.24
x₂₁     | 0.11     | 0.09     | 0.07  | 0.08  | 0.09   | 0.25
x₃₁     | 0.32     | 0.29     | 0.25  | 0.25  | 0.25   | 0.24
x₄₁     | 0.00     | 0.00     | 0.02  | 0.02  | 0.02   | 0.03
x₅₁ = 1 − (x₁₁ + … + x₄₁) | 0.18 | 0.19 | 0.19 | 0.17 | 0.16 | 0.23

With the unity constraint imposed, the porosity values become reasonable; but when this is contrasted with the no-prior solution, it becomes apparent that the linear formulation without prior values cannot return even a reasonably true model.
-
No distinct elbow is seen; in the following slide an alternative prior vector is used to study the issue further.

37
L-Curve in the Above Case
[L-curve: ‖x‖₂ versus ‖Ax−b‖₂ - no distinct elbow]
-
In this case it is seen that the prior without the unity constraint fails to retrieve a reasonable solution.
With α = 1 and the sum-of-volumes = 1 constraint, the prior is almost reproduced.
Can such use of priors have any meaning, or lead to a valid technique, when Ax = b lacks the information?
38
Alternate Prior

Prior2 | α = 0.6: x, Ax−b | α = 1: x, Ax−b | α = 2: x, Ax−b | α = 3: x, Ax−b | α = 5: x, Ax−b
0.35   | 0.55   0.011     | 0.36  -0.030   | 0.26  -0.180   | 0.21  -0.33    | 0.17  -0.53
0.15   | 0.00  -0.071     | 0.16  -0.043   | 0.21  -0.013   | 0.19   0.01    | 0.15   0.04
0.22   | 0.28   0.000     | 0.25  -0.001   | 0.24  -0.005   | 0.23  -0.01    | 0.23  -0.04
0.08   | 0.00   0.000     | 0.02   0.000   | 0.03   0.003   | 0.04   0.01    | 0.04   0.01
0.20   | 0.27   0.103     | 0.30   0.096   | 0.32   0.054   | 0.34   0.01    | 0.37  -0.05
‖x‖₂, ‖Ax−b‖₂: prior 0.24 | 0.4536  0.0159 | 0.3094  0.0120 | 0.2690  0.0356 | 0.2521  0.1088 | 0.2378  0.2831
x₅₁ = 1 − (x₁₁ + … + x₄₁) | 0.17 | 0.20 | 0.27 | 0.33 | 0.42
Σx     | 1.10             | 1.10           | 1.05           | 1.01           | 0.95
-
The L-curve depicts a distinct elbow, and α = 1 becomes an obvious choice of regularization parameter.
The interpretation has to be specific to the problem.

39
L-Curve in the Above Case
[L-curve: ‖x‖₂ versus ‖Ax−b‖₂ - distinct elbow near α = 1]
-
The SVD solution for x can be expressed as:

40
Discrete Picard Condition

x = A⁺b = Σ_{i=1..k} (uᵢᵀb / σᵢ) vᵢ

The discrete Picard condition states that the numerators |uᵢᵀb| must decay faster than the denominators σᵢ. But the literature presents contradictory opinions on the trust that can be placed in the discrete Picard condition. Due to noise in A, or for other reasons of the design of the inverse model, the DPC is found to be violated.
Maybe it is not applicable to the small-size problem discussed above, but why that is so remains to be discussed in the applied-mathematics literature.
-
The DPC for A, and for A scaled to remove the units (AUF):

41
Discrete Picard Condition
The discrete Picard condition is apparently not satisfied. Is this relevant to small-size inversion problems?

A: |uᵢᵀb| | A: σᵢ  | A: |uᵢᵀb|/σᵢ | AUF: |uᵢᵀb| | AUF: σᵢ | AUF: |uᵢᵀb|/σᵢ
90.45     | 319.69 | 0.28         | 118.73      | 506.15  | 0.23
63.70     | 201.05 | 0.32         | 82.20       | 259.24  | 0.32
0.71      | 3.28   | 0.22         | 37.13       | 140.34  | 0.26
0.02      | 0.18   | 0.09         | 1.67        | 14.06   | 0.12
0.01      | 0.05   | 0.23         | 0.67        | 3.12    | 0.21
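The Fourier coefficients in the table can be reproduced from the SVD (a sketch for the unscaled A and the noise-free b of the running example):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])

U, s, Vt = np.linalg.svd(A)
fourier = np.abs(U.T @ b)   # Fourier coefficients |u_i^T b|, one per mode

# Discrete Picard condition: |u_i^T b| should decay faster than s_i.
for fi, si in zip(fourier, s):
    print(f"{fi:10.2f} {si:10.3f} {fi / si:8.2f}")
```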
-
Data from a 7×7 example is used here to illustrate the failure of the DPC. But what does it mean physically for the problem?
42
Picard Plot-1
-
43
Impact of Weights
The kernel A (as before) is scaled with the weight matrix W = diag(0.555, 1.00, 0.0051, 0.0007, 0.01) to give the modified kernel Aw = AW:

 1.47  2.71  0.01  0.00  0.01
-0.03 -0.02  0.00  0.00  0.01
30.83 48.00  0.62  0.04  1.89
 6.67  0.00  0.54  0.20  0.00
 0.56  1.00  0.01  0.00  0.01

Aw = UΣVᵀ:

U:
-0.054 -0.043  0.949 -0.128 -0.280
 0.001 -0.003 -0.106 -0.990  0.093
-0.996 -0.061 -0.060  0.006 -0.004
-0.064  0.997  0.041 -0.007  0.001
-0.020 -0.014  0.288  0.059  0.955

Σ = diag(57.295, 5.630, 0.102, 0.004, 0.00027)

Vᵀ:
-0.545 -0.838 -0.011 -0.001 -0.033
 0.834 -0.543  0.088  0.033 -0.021
 0.000  0.039 -0.014  0.075 -0.996
 0.085 -0.042 -0.914 -0.394 -0.019
 0.006 -0.002 -0.395  0.915  0.074

1. It is instantly striking that the scaled matrix presents a far greater condition number, and worse conditioning, than the original matrix.
2. The vectors u₁ and v₁ depict increased oscillation.
-
The number of energetic modes is reduced when contrasted with the SVD of A.
A plot of the singular values has a distinct elbow, which calls for a rank-2 approximation, in contrast with the rank-3 approximation of A.
The number of manifest variables has become 2 instead of the 3 in A, and the singular values to the right of the elbow are discarded as noise.

44
Modes and Variances

Mode | σᵢ     | %      | Cumulative % | Normalized σᵢ % | σ₁/σᵢ
1    | 57.295 | 90.899 | 90.899       | 100             | 1
2    | 5.6303 | 8.9326 | 99.832       | 9.83            | 10
3    | 0.1016 | 0.1612 | 99.993       | 0.18            | 564
4    | 0.004  | 0.0064 | 100          | 0.01            | 14235
5    | 0.0003 | 0.0004 | 100          | 0.00            | 215394
-
A number of issues arise against the background of the above discussion:
1. Where can we find inversion theory applicable to small-scale problems of the above kind?
2. The applied-mathematics literature is silent on diagnosis in respect of (a) the information content of the linear formulation, (b) SVD analysis of small-size problems, (c) the weights and scaling of such problems, (d) regularization, and (e) the discrete Picard condition.
3. Can iterative procedures do any magic when direct solutions are not valid?
4. Can the Levenberg-Marquardt algorithm offer something extra in an iterative process?
45
Diagnosis for a valid solution
-
46
Levenberg-Marquardt Algorithm

x_{k+1} = x_k + (H + λI)⁻¹ Aᵀ(b − Ax_k)          (Levenberg)
x_{k+1} = x_k + (H + λ·diag(H))⁻¹ Aᵀ(b − Ax_k)   (Marquardt)

where H = AᵀA is the Hessian with which the weighted least-squares method is sought to be implemented.
H will be more ill-conditioned than A, and hence a positive constant λ times the diagonal elements of H is added to ensure larger eigenvalues: H + λ·diag(H).
The direction of the error decides the tuning of λ up and down to minimize the error; Marquardt's innovation helped to take a large step in directions with low curvature and a small step when the update reaches a direction with high curvature.
An equivalence is drawn between the stochastic inverse A⁺ₛ and the LM algorithm:

A⁺ₛ = Aᵀ(AAᵀ + (σₙ²/σₘ²)I)⁻¹ = (AᵀA + (σₙ²/σₘ²)I)⁻¹Aᵀ
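The Marquardt-damped normal equations can be sketched for the running 5×5 example (λ fixed per solve; the true LM algorithm adapts λ between iterations):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])
H = A.T @ A   # Hessian of the least-squares objective

def lm_solution(lam):
    """Solve (H + lam * diag(H)) x = A^T b  --  Marquardt damping."""
    M = H + lam * np.diag(np.diag(H))
    return np.linalg.solve(M, A.T @ b)

# lam = 0 gives the undamped (ill-conditioned) solution;
# larger lam stabilizes the weak directions at the cost of bias.
for lam in (0.0, 0.01, 1.0, 5.0):
    x = lm_solution(lam)
    print(lam, x, np.linalg.norm(A @ x - b))
```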
-
47
Example: Uncertainty vs λ

Tool | QUAR  SM1   SMEC  ILLI  KAOL  CHLO  W    | b       | b* = 0.015b |
RHOB | 2.65  2.35  2.12  2.53  2.42  2.77  1.09 | 2.3287  | 0.034931 | 638.77
SGR  | 5     150   180   12    44    2.04  0    | 32.082  | 0.48123  | 334.89
TH/K | 3     10    12    3.5   14    16    0    | 6.455   | 0.096825 | 192.39
TNPH | 0.04  0.4   0.44  0.3   0.37  0.52  1    | 0.3338  | 0.005007 | 89.15
PEF  | 2     2     2.04  3.45  1.83  6.37  0.8  | 2.1498  | 0.032247 | 64.75
DT   | 53.5  60    60    87    77    100   189  | 83.29   | 1.24935  | 3.82
SUM  | 1     1     1     1     1     1     1    | 1       | 0.015    | 1.12

Each column pair: x, Uncertainty %
λ = 0        | λ = 1       | λ = 2       | λ = 3       | λ = 3.82    | λ = 5
0.4    47.06 | 0.37  26.54 | 0.35  11.91 | 0.34   6.58 | 0.33   4.55 | 0.33   3.04
0.05   24.81 | 0.04  20.20 | 0.05  15.80 | 0.05  12.26 | 0.06   9.86 | 0.07   7.26
0.07   16.84 | 0.07  15.58 | 0.06  13.01 | 0.06  10.25 | 0.05   8.28 | 0.05   6.11
0.09   63.58 | 0.13  35.51 | 0.16  15.30 | 0.17   7.89 | 0.17   5.14 | 0.17   3.17
0.2    28.33 | 0.22  15.87 | 0.23   6.97 | 0.23   3.77 | 0.24   2.65 | 0.24   1.91
0.05   22.68 | 0.04  12.68 | 0.03   5.50 | 0.02   2.90 | 0.02   1.97 | 0.02   1.34
0.14   11.33 | 0.13   6.37 | 0.13   2.82 | 0.13   1.55 | 0.12   1.09 | 0.12   0.78
-
48
Model & Data Resolution for λ = 0 and λ = 2
For λ = 0, the model resolution R = A⁺A and the data resolution N = AA⁺ are both the identity I₇.

λ = 2, Model Resolution:
 0.78 -0.04 -0.01  0.29  0.13 -0.10 -0.05
-0.04  0.86  0.11  0.09  0.03 -0.03 -0.01
-0.01  0.11  0.91 -0.01  0.00  0.01  0.00
 0.29  0.09 -0.01  0.61 -0.17  0.14  0.07
 0.13  0.03  0.00 -0.17  0.92  0.06  0.03
-0.10 -0.03  0.01  0.14  0.06  0.95 -0.02
-0.05 -0.01  0.00  0.07  0.03 -0.02  0.99

λ = 2, Data Resolution:
 0.85 -0.01  0.00  0.00  0.02 -0.05  0.19
-0.01  1.00  0.00  0.01  0.00 -0.03  0.02
 0.00  0.00  1.00  0.02  0.00 -0.03  0.02
 0.00  0.01  0.02  0.91  0.00  0.17 -0.11
 0.02  0.00  0.00  0.00  1.00  0.00 -0.02
-0.05 -0.03 -0.03  0.17  0.00  0.66  0.28
 0.19  0.03  0.02 -0.11 -0.02  0.28  0.61
-
Given the uncertainty and resolution shown above, can the linear formulation Ax = b under discussion contain sufficient information to facilitate the retrieval of a reasonably true solution?
If the above deductions are incorrect, how can the efficiency of a linear formulation and of a method of inversion be analysed?
How can small-scale problems be better understood with the help of mathematical theory?
49
Model & Data Resolution for λ = 5

λ = 5, Model Resolution:
 0.71  0.00 -0.06  0.36  0.16 -0.13 -0.07
 0.00  0.62  0.30  0.09  0.03 -0.04 -0.01
-0.06  0.30  0.75 -0.01  0.01  0.00 -0.01
 0.36  0.09 -0.01  0.51 -0.22  0.17  0.09
 0.16  0.03  0.01 -0.22  0.90  0.08  0.04
-0.13 -0.04  0.00  0.17  0.08  0.94 -0.03
-0.07 -0.01 -0.01  0.09  0.04 -0.03  0.98

λ = 5, Data Resolution:
 0.66  0.00  0.02 -0.10  0.04  0.05  0.33
 0.00  1.00  0.00  0.02  0.00 -0.04  0.03
 0.02  0.00  0.99  0.03  0.00 -0.05  0.01
-0.10  0.02  0.03  0.82  0.02  0.29 -0.09
 0.04  0.00  0.00  0.02  0.99 -0.01 -0.03
 0.05 -0.04 -0.05  0.29 -0.01  0.48  0.28
 0.33  0.03  0.01 -0.09 -0.03  0.28  0.47
-
50
Constrained Optimization
The Lagrangian to be minimized is defined here as:

Φ(x) = ‖Ax − b‖₂² + α₁²‖x − x_ref‖₂² + α₂²‖Sx‖₂²

where S is a discrete derivative (first-difference) operator:

S =
 1 -1  0  0  0
 0  1 -1  0  0
 0  0  1 -1  0
 0  0  0  1 -1

Differentiating the Lagrangian with respect to x and equating to zero, the solution x can be obtained as:

x = (AᵀA + α₁²I + α₂²SᵀS)⁻¹(Aᵀb + α₁²x_ref)
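This penalized solve can be sketched as follows (assuming a first-difference S and the A, b of the running example; α₁ weights the prior, α₂ the smoothness, and x_ref is an illustrative prior equal to the true volumes):

```python
import numpy as np

A = np.array([
    [ 2.65,  2.71,   2.41,   2.79,   1.0],
    [-0.06, -0.02,   0.37,   0.25,   1.0],
    [55.5,  48.0,  120.0,   55.0,  189.0],
    [12.0,   0.0,  105.0,  270.0,   0.0],
    [ 1.0,   1.0,    1.0,    1.0,   1.0],
])
b = np.array([2.196, 0.318, 102.515, 41.52, 1.0])
x_ref = np.array([0.41, 0.07, 0.22, 0.05, 0.25])  # prior volume vector (illustrative)

# First-difference (derivative) operator S: rows are [1, -1, 0, 0, 0], etc.
S = np.eye(5)[:-1] - np.eye(5)[1:]

def penalized(a1, a2):
    """x = (A^T A + a1^2 I + a2^2 S^T S)^-1 (A^T b + a1^2 x_ref)."""
    M = A.T @ A + a1**2 * np.eye(5) + a2**2 * (S.T @ S)
    return np.linalg.solve(M, A.T @ b + a1**2 * x_ref)

x = penalized(1.0, 1.0)
print(x, np.linalg.norm(A @ x - b))
```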
-
There is no appreciable improvement in the solution characteristics from the use of the derivative operator.
With the parameter choice α₁ = 1 and α₂ = 1, errors to the tune of 1.5 % do not alter the regularized solution computed against a specific and precise prior solution vector (x_ref).
But the departure of the regularized solution from the prior solution vector, and the relation of both to the true solution vector, becomes a subtle issue that needs more detailed deliberation.
51
(α₁ = 1, α₂ = 1)
b₁      | x̂    | x_ref | x = x₁ | Ax−b₁  | b₂ = b₁+e | x̂    | x₂   | b₃ = b₂+e | x̂    | x₃
2.2518  | 0.46 | 0.46  | 0.31   | -0.019 | 2.2856    | 0.30 | 0.31 | 2.3199    | 0.30 | 0.32
0.2829  | 0.05 | 0.05  | 0.19   |  0.023 | 0.2871    | 0.17 | 0.19 | 0.2915    | 0.17 | 0.19
97.36   | 0.2  | 0.2   | 0.16   |  0.000 | 98.8204   | 0.31 | 0.17 | 100.3027  | 0.31 | 0.17
45.42   | 0.07 | 0.07  | 0.09   |  0.000 | 46.1013   | 0.04 | 0.09 | 46.7928   | 0.04 | 0.09
1       | 0.22 | 0.22  | 0.24   |  0.001 | 1         | 0.19 | 0.25 | 1.0000    | 0.19 | 0.25
L2-norm | 0.55 | 0.55  | 0.48   |  0.03  |           | 0.50 | 0.48 |           | 0.50 | 0.49
Solutions by the Lagrange multiplier method
-
Here the precise prior solution vector has been changed to see the impact on the solution, and it can be found that the regularized solution did not significantly respond to the prior vector used.
But when a log vector is applied to the same scenario, the solution departs from the unity constraint.
When the data vector used is different from the inverse-crime scenario, the solution is perturbed, suggesting instability of the solution despite the LMM implementation.
52
(α₁ = 1, α₂ = 1)
b₁      | x̂    | x_ref | x    | Ax−b₁  | x_ref | x    | b_log    | x_ref | x̂    | x    | Ax−b_log
2.2518  | 0.46 | 0.50  | 0.32 | -0.007 | 0.40  | 0.29 | 2.468    | 0.40  | -2.46 | 0.34 |   0.144
0.2829  | 0.05 | 0.10  | 0.19 |  0.022 | 0.20  | 0.22 | 0.4782   | 0.20  |  2.54 | 0.26 |   0.070
97.36   | 0.2  | 0.15  | 0.15 |  0.000 | 0.10  | 0.15 | 110.3399 | 0.10  |  1.09 | 0.14 |  12.980
45.42   | 0.07 | 0.03  | 0.09 |  0.000 | 0.08  | 0.10 | 32.0829  | 0.08  | -0.19 | 0.05 | -13.336
1       | 0.22 | 0.22  | 0.25 |  0.006 | 0.22  | 0.25 | 1        | 0.22  |  0.03 | 0.31 |   0.104
L2-norm | 0.55 | 0.55  | 0.48 |  0.02  | 0.55  | 0.48 |          | 0.55  |       | 0.55 |
Changed Prior Vector
-
The Tikhonov solution vectors are different, with many elements
differing drastically from the constrained optimization output.
Here the issues come up:
How to choose between different optimization methods?
Under what circumstances is one regularization method
preferred over the other?
53
Tikhonov versus LMM solutions
Tikhonov solutions xλ and residuals Ax - b (λ = 0 gives the minimum-norm solution x+):

b        x+ (λ=0)   λ=0.04678      λ=0.18459      λ=0.001          λ=0.01          λ=1
                    x      Ax-b    x      Ax-b    x      Ax-b      x      Ax-b     x      Ax-b
2.2518   0.46       0.25  -0.01    0.24   0.00    0.39   0.0006    0.29   0.002    0.23  -0.06
0.2829   0.05       0.23   0.02    0.24   0.02    0.10   0.0031    0.19   0.010    0.23   0.03
97.36    0.20       0.23   0.00    0.22   0.00    0.22   0.0002    0.25   0.000    0.21   0.00
45.42    0.07       0.07   0.00    0.07   0.00    0.06  -0.0008    0.06  -0.001    0.08   0.00
1        0.22       0.22   0.00    0.22   0.00    0.21  -0.0028    0.21  -0.006    0.23  -0.02
L2-norm  0.55       0.47   0.023   0.47   0.016   0.51   0.00      0.48   0.01     0.46   0.07
(L2)^2   0.31       0.22   0.00    0.22   0.00    0.26   0.00      0.23   0.00     0.21   0.01
-
1. Where does the problem lie: is A a smoothing operator?
When A is a smoothing operator, the forward solution of the
equation Ax = b dampens the variability in the vector space x
when transforming it to the vector space b.
Smoothing operators under inversion act like amplifiers.
The smoothing can also dampen signals in the input data b
below the noise level, effectively leading to a loss of
information, with no means of getting it back in an inversion
operation.
2. Is it a problem with the accuracy of the data vector b?
Can enhanced instrumentation and better accuracy in b bring
more quality or efficiency to the data inversion process?
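The smoothing-then-amplification effect can be demonstrated with a toy operator. The moving-average matrix below is purely illustrative (it is not from the slides): applying it forward damps an oscillatory model, while inverting it amplifies whatever noise the data carry.

```python
import numpy as np

# Illustrative sketch: a 5-point moving-average operator A is a smoothing
# operator. Forward application damps variability in x; inversion of the
# same operator amplifies the noise in the data.
n = 50
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 2), min(n, i + 3)):
        A[i, j] = 0.2                       # 5-point running average

x_true = np.sin(np.linspace(0.0, 12.0 * np.pi, n))   # oscillatory model
b_clean = A @ x_true                                 # forward: damped
rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(n)
x_rec = np.linalg.solve(A, b_clean + noise)          # inversion: amplified

damping = np.linalg.norm(b_clean) / np.linalg.norm(x_true)        # < 1
amplification = np.linalg.norm(x_rec - x_true) / np.linalg.norm(noise)
```

Since all singular values of this averaging operator are at most 1, the forward map can only shrink signal while the inverse map can only grow error, which is exactly the amplifier behaviour described above.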
54
Diagnosis for a valid solution x from Ax = b
-
3. Ill-conditioning of the problem is also a result of the
discretization of continuous functions, in which information
is lost. As the resolution of the discretized function increases,
each component of the solution vector x has less and less
influence on the data b.
In the inversion problem, the influence of x can be viewed as
the information content of x in each data point bi. Therefore,
as the influence of x decreases, each data point bi contains
less information about the higher-resolution elements of x.
As the information content decreases, it is subjected to higher
and higher amplification to reconstruct the unknowns. This
amplification works on the errors in the observations and the
kernel as well, creating situations where the errors in the
reconstructed values are much larger than the values
themselves. Thus, a trade-off exists between solution
resolution and error.
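One standard way to see this resolution/error trade-off is a truncated SVD: keeping only the k largest singular values limits noise amplification at the cost of resolution. A sketch, using the 5x5 well-log example operator from a later slide:

```python
import numpy as np

# Truncated-SVD reconstruction: keep the k largest singular triplets.
# Small k = low resolution, low noise amplification; k = n = full
# resolution, full amplification of data error.

def tsvd_solve(A, b, k):
    """Reconstruct x from b using the k largest singular triplets of A."""
    U, s, Vt = np.linalg.svd(A)
    return Vt[:k].T @ ((U.T @ b)[:k] / s[:k])

A = np.array([[2.65, 2.71, 2.41, 2.79, 1.0],
              [-0.06, -0.02, 0.37, 0.25, 1.0],
              [55.5, 48.0, 120.0, 55.0, 189.0],
              [12.0, 0.0, 105.0, 270.0, 0.0],
              [1.0, 1.0, 1.0, 1.0, 1.0]])
x_true = np.array([0.41, 0.07, 0.22, 0.05, 0.25])

# Exact data, full rank: k = 5 recovers the model.
x_exact = tsvd_solve(A, A @ x_true, k=5)

# Noisy data: error norm as a function of retained resolution k.
rng = np.random.default_rng(1)
b_noisy = A @ x_true + 0.01 * rng.standard_normal(5)
errors = [np.linalg.norm(tsvd_solve(A, b_noisy, k) - x_true)
          for k in range(1, 6)]
```

The `errors` list traces how reconstruction error behaves as resolution is added in the presence of data noise.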
55
Diagnosis for a valid solution x from Ax = b
-
1. Small-scale inverse problems of the kind encountered in
well log data inversion (Petrophysics) call for adequate
theory to explain the usefulness of the method.
2. Interpretation of the inverse theory elements and SVD is
domain specific, and Petrophysics remains an untouched
area.
3. Areas like the use of regularization methods, the L-curve,
and the applicability of the discrete Picard condition towards
the diagnosis of a valid solution remain to be explored.
4. The efficiency of Tikhonov regularization and the Levenberg-
Marquardt algorithm is to be studied against the linear earth
model used to describe the measurements.
5. The possibility of developing better alternatives and their
relative merits over the deterministic approach are to be
reckoned.
6. Quantification of the uncertainty of model parameters is an
essential requirement.
56
Conclusions
-
57
-
The condition number is 4,108,157, and it is obvious that the
residual cannot be any indication of the quality of the
solution.
The efficacy of iterative methods also comes into question, as
the Picard coefficients are unfavorably distributed against the
small singular values.
58
SVD Analysis Example
ui'b      σi        ui'b/σi   σi^2          % variance   |ui'b/σi|   % energy   ratio to max
20.1974   4502.54    0.004    20272866.45   0.95         0.004       0.004      0.012
-7.1581   1069.07   -0.007     1142910.66   0.05         0.007       0.006      0.018
 4.5046    108.86    0.041       11850.72   0.00         0.041       0.037      0.111
-0.4225     10.58   -0.040         112.02   0.00         0.040       0.036      0.108
 1.8436      4.97    0.371          24.66   0.00         0.371       0.335      1.000
 0.3201      1.07    0.299           1.14   0.00         0.299       0.270      0.806
 0.0004      0.00    0.344           0.00   0.00         0.344       0.311      0.928
-
Scaling can be resorted to in order to reduce the condition
number; applying a weight matrix derived as the inverse of the
error estimates on the data (1.5%) gives a weighted operator
with a condition number of 427,708.
As we are working in an inverse crime scenario where the data
vector has been artificially created from the model, we get the
model x used above as the minimum-norm solution.
Adding noise of mean zero and standard deviation 1 to the
data vector b, the weighted least squares solutions show
residuals varying from 1.13 to 3.14 for the same solution vector.
In fact, the prior is dominating the solution, as the linear
operator lacks the structure needed to illuminate the model space
with information retrieved from the data space.
No other explanation suggests itself for the Picard
coefficients being weighted towards the ill-conditioned part of
the model space.
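The weighting step can be sketched directly: scale each equation of Ax = b by the inverse of its assumed error (1.5% of the datum). The operator and the depth-1 data vector are from the inverse-crime slides below; note that weighting a consistent square system changes its conditioning but not its solution.

```python
import numpy as np

# Weight matrix W = diag(1 / (1.5% of each datum)): after weighting,
# every row of (WA)x = Wb carries the same error bar of 1.

A = np.array([[2.65, 2.71, 2.41, 2.79, 1.0],
              [-0.06, -0.02, 0.37, 0.25, 1.0],
              [55.5, 48.0, 120.0, 55.0, 189.0],
              [12.0, 0.0, 105.0, 270.0, 0.0],
              [1.0, 1.0, 1.0, 1.0, 1.0]])
b = np.array([2.2518, 0.2829, 97.36, 45.42, 1.0])   # depth-1 data

W = np.diag(1.0 / (0.015 * np.abs(b)))
cond_A = np.linalg.cond(A)
cond_WA = np.linalg.cond(W @ A)

# A consistent square system keeps the same solution under weighting:
x_plain = np.linalg.solve(A, b)
x_weighted = np.linalg.solve(W @ A, W @ b)
```

Comparing `cond_A` and `cond_WA` reproduces the kind of condition-number change the slide reports for its weighted operator.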
59
SVD Analysis contd.
-
Inverse Crime Operations: Axi = bi and xi = A+bi
60
Data vector at a depth
Depth  Q     C     K     M     W     | Column of A'              | Data b
1      0.46  0.05  0.20  0.07  0.22  | 2.65  -0.06  55.5   12  1 | 2.252  0.283  97.36   45.42
2      0.48  0.03  0.18  0.07  0.24  | 2.71  -0.02  48      0  1 | 2.222  0.295  98.89   43.56
3      0.50  0.03  0.16  0.05  0.26  | 2.41   0.37  120   105  1 | 2.191  0.301  100.28  36.3
4      0.50  0.04  0.16  0.05  0.25  | 2.79   0.25  55    270  1 | 2.209  0.291  98.87   36.3
5      0.52  0.03  0.15  0.03  0.27  | 1      1     189     0  1 | 2.175  0.301  100.98  30.09

Model vectors x1..x5 (columns) and the corresponding data vectors bi = Axi:

x1    x2    x3    x4    x5   |  b1      b2      b3      b4      b5
0.46  0.48  0.50  0.50  0.52 |  2.2518  2.2224  2.1914  2.2085  2.1745
0.05  0.03  0.03  0.04  0.03 |  0.2829  0.2947  0.3011  0.2909  0.3012
0.20  0.18  0.16  0.16  0.15 |  97.36   98.89   100.28  98.87   100.98
0.07  0.07  0.05  0.05  0.03 |  45.42   43.56   36.3    36.3    30.09
0.22  0.24  0.26  0.25  0.27 |  1       1       1       1       1
-
The coefficient matrix is retrieved from observed tool data by
using the inverse of the volume matrix.
61
Kernel A from the data
Inverse of volume matrix V^-1:
-22.5   54   -106.5   42   34
 27.5  -96     93.5   42  -66
 27.5    4     -6.5  -58   34
-22.5    4     43.5   42  -66
 27.5  -96    193.5  -58  -66

Observed data D (rows = depths):
2.252  0.283  97.36   45.42  1
2.222  0.295  98.89   43.56  1
2.191  0.301  100.28  36.3   1
2.209  0.291  98.87   36.3   1
2.175  0.301  100.98  30.09  1

Coefficient matrix A' = V^-1 D (rows = minerals):
2.65  -0.06  55.5   12   1
2.71  -0.02  48      0   1
2.41   0.37  120   105   1
2.79   0.25  55    270   1
1      1     189     0   1
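The retrieval above is a single matrix product: each depth gives a data row di = vi A', so stacking the five depths as D = V A' and inverting the volume matrix yields A' = V^-1 D. A numpy check using the five-depth inverse-crime example:

```python
import numpy as np

# Recover the kernel A (transposed) from the volume matrix V
# (rows = depths, columns = mineral volumes) and the observed data D
# (rows = depths, columns = log readings): A' = V^-1 D.

V = np.array([[0.46, 0.05, 0.20, 0.07, 0.22],
              [0.48, 0.03, 0.18, 0.07, 0.24],
              [0.50, 0.03, 0.16, 0.05, 0.26],
              [0.50, 0.04, 0.16, 0.05, 0.25],
              [0.52, 0.03, 0.15, 0.03, 0.27]])
D = np.array([[2.2518, 0.2829, 97.36, 45.42, 1.0],
              [2.2224, 0.2947, 98.89, 43.56, 1.0],
              [2.1914, 0.3011, 100.28, 36.3, 1.0],
              [2.2085, 0.2909, 98.87, 36.3, 1.0],
              [2.1745, 0.3012, 100.98, 30.09, 1.0]])

A_T = np.linalg.inv(V) @ D      # rows of A_T = one mineral's responses
```

The recovered first row reproduces the first column of A (2.65, -0.06, 55.5, 12, 1), confirming the coefficient matrix on the slide.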
-
A                                      x      Ax = b    Error (1.5% of b)
 2.65   2.71   2.41   2.79    1        0.41   2.196     0.0329
-0.06  -0.02   0.37   0.25    1        0.07   0.318     0.0048
55.5   48     120     55    189        0.22   102.515   1.5377
12      0     105    270      0        0.05   41.52     0.6228
 1      1       1      1      1        0.25   1         0.015
62
Additional Examples-1
Unit-free operator AUF (each row of [A | b] scaled so that bUF = 66.66667 and the error estimate = 1):

 80.449   82.271   73.163   84.699   30.358
-12.579   -4.193   77.568   52.411  209.644
 36.092   31.215   78.037   35.767  122.909
 19.268    0.000  168.593  433.526    0.000
 66.667   66.667   66.667   66.667   66.667

SVD of AUF = U S V':

U:
-0.25    0.15    0.684  -0.133   0.656
-0.229   0.716  -0.502  -0.38    0.195
-0.185   0.483   0.093   0.844  -0.107
-0.896  -0.393  -0.2     0.046  -0.033
-0.221   0.277   0.482  -0.352  -0.721

S = diag(506.15, 259.24, 140.34, 14.06, 3.12)

V':
-0.11   -0.079  -0.427  -0.875  -0.184
 0.121   0.166   0.218  -0.325   0.897
 0.662   0.665   0.119  -0.14   -0.292
 0.145  -0.455   0.786  -0.311  -0.239
-0.717   0.563   0.371  -0.113  -0.138
-
Tikhonov Regularization for AUF
Scaling did show a significant effect on the singular value spectrum
of the operator. The rationale followed in achieving a good scaling is
that the uncertainties in all the elements of A are of the same order.
The quality of the scaling done may be understood by looking at the
norms of the columns of the operator.
63
b        xtrue   λ=0          λ=1           λ=2           λ=3
                 x      e     x      e      x      e      x      e
2.196    0.41    0.41   0.00  0.40   0.015  0.38   0.026  0.37   0.04
0.318    0.07    0.07   0.00  0.08  -0.014  0.09  -0.024  0.10  -0.03
102.515  0.22    0.21   0.00  0.22  -0.012  0.23  -0.017  0.23  -0.02
41.52    0.05    0.05   0.00  0.05   0.000  0.05   0.001  0.05   0.00
1        0.25    0.25   0.00  0.25   0.001  0.25   0.003  0.24   0.01
L2 norm          0.53   0.01  0.53   0.023  0.52   0.04   0.52   0.05

b        xtrue   λ=5            λ=10           λ=30           λ=50
                 x      e       x      e      x      e       x      e
2.196    0.41    0.36   0.053   0.33   0.079  0.29   0.119   0.28   0.13
0.318    0.07    0.11  -0.045   0.14  -0.066  0.17  -0.101   0.18  -0.11
102.515  0.22    0.24  -0.029   0.25  -0.040  0.26  -0.052   0.26  -0.05
41.52    0.05    0.05   0.005   0.04   0.008  0.04   0.011   0.04   0.01
1        0.25    0.24   0.008   0.24   0.012  0.23   0.017   0.23   0.02
L2 norm          0.51   0.08    0.50   0.11   0.49   0.17    0.48   0.18
-
How to interpret the change in the norm?
64
Interpreting the norm of the columns
Operator   Norms of the columns
A           57    48   159   276   189
AUF        113   110   224   451   254
-
A: L-curve parameters and the regularized solution [xλ]'

λ        ||Ax-b||  ||x||  ||e||   x1    x2    x3    x4    x5
0.0468   0.01      0.48   0.22    0.24  0.22  0.24  0.05  0.25
1        0.06      0.47   0.24    0.22  0.22  0.22  0.06  0.27
3        0.16      0.46   0.25    0.20  0.20  0.21  0.06  0.28
5        0.23      0.45   0.25    0.19  0.19  0.21  0.06  0.29
10       0.34      0.44   0.27    0.17  0.16  0.21  0.06  0.30
20       0.46      0.43   0.28    0.15  0.14  0.21  0.07  0.31
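The L-curve points above are simply (residual norm, solution norm) pairs collected over a λ sweep; plotted on log-log axes they trace the L-shaped trade-off curve whose corner is used to pick λ. A sketch with the same λ values:

```python
import numpy as np

# Collect L-curve points (||Ax-b||, ||x||) over a lambda sweep.

def tikhonov(A, b, lam):
    U, s, Vt = np.linalg.svd(A)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

A = np.array([[2.65, 2.71, 2.41, 2.79, 1.0],
              [-0.06, -0.02, 0.37, 0.25, 1.0],
              [55.5, 48.0, 120.0, 55.0, 189.0],
              [12.0, 0.0, 105.0, 270.0, 0.0],
              [1.0, 1.0, 1.0, 1.0, 1.0]])
b = A @ np.array([0.41, 0.07, 0.22, 0.05, 0.25])

lams = [0.0468, 1.0, 3.0, 5.0, 10.0, 20.0]
curve = [(np.linalg.norm(A @ tikhonov(A, b, lam) - b),
          np.linalg.norm(tikhonov(A, b, lam)))
         for lam in lams]
```

Along the sweep the residual norm is nondecreasing and the solution norm nonincreasing, matching the monotone columns of the table.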
65
Tikhonov in both the cases
A is good at minimizing the residual norm, while AUF minimizes
the solution error norm; the norm of x is almost the same in
both. Tikhonov regularization gives the approximate solution xλ
as the unique minimizer of the quadratic cost function
F(x) = ||Ax - b||^2 + λ^2 ||x||^2, i.e. two options are available (contd.)
-
66
Tikhonov in both the cases
(a) Define an upper bound for the solution error norm and
minimize the residual.
(b) Limit the residual by choice and minimize the error norm
||e|| = ||x - xλ||.
AUF: L-curve parameters and the regularized solution [xλ]'

λ     ||Ax-b||  ||x||  ||e||   x1    x2    x3    x4    x5
5     0.22      0.51   0.07    0.36  0.11  0.24  0.05  0.24
10    0.29      0.50   0.11    0.33  0.14  0.25  0.04  0.24
20    0.41      0.49   0.14    0.30  0.16  0.26  0.04  0.23
50    0.70      0.48   0.18    0.28  0.18  0.26  0.04  0.23
100   1.06      0.48   0.20    0.26  0.20  0.26  0.04  0.23
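Option (b), limiting the residual by choice, can be automated: since the Tikhonov residual grows monotonically with λ, a bisection on λ finds the value whose residual equals a chosen bound. The bound `delta` below is an arbitrary illustrative number, not a value from the slides.

```python
import numpy as np

def tikhonov(A, b, lam):
    U, s, Vt = np.linalg.svd(A)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

def lam_for_residual(A, b, delta, lo=1e-6, hi=1e6, iters=100):
    """Bisect (in log space) for lambda with ||A x_lam - b|| = delta,
    using the monotone growth of the residual in lambda."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        lo, hi = (mid, hi) if r < delta else (lo, mid)
    return np.sqrt(lo * hi)

A = np.array([[2.65, 2.71, 2.41, 2.79, 1.0],
              [-0.06, -0.02, 0.37, 0.25, 1.0],
              [55.5, 48.0, 120.0, 55.0, 189.0],
              [12.0, 0.0, 105.0, 270.0, 0.0],
              [1.0, 1.0, 1.0, 1.0, 1.0]])
b = A @ np.array([0.41, 0.07, 0.22, 0.05, 0.25])

lam = lam_for_residual(A, b, delta=0.5)   # hypothetical residual bound
x = tikhonov(A, b, lam)
```

With the residual bound matched to the data error estimate, this is the discrepancy-principle style of choosing λ, one concrete answer to the "how to choose" question raised earlier.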
-