EE616: Computer Aided Analysis of Electronic Circuits
Dr. Janusz Starzyk
EE616
Dr. Janusz Starzyk
Computer Aided Analysis of Electronic Circuits
• Innovations in numerical techniques had a profound impact on CAD:
  – Sparse matrix methods.
  – Multi-step methods for the solution of differential equations.
  – Adjoint techniques for sensitivity analysis.
  – Sequential quadratic programming in optimization.
Fundamental Concepts
• NETWORK ELEMENTS:
  – One-ports (with port voltage v and port current i):

    Resistor    voltage controlled:  i = f(v)
                or current controlled:  v = f(i)
    Capacitor   i = dq/dt  and  q = f(v)
    Inductor    v = dφ/dt  and  φ = f(i)
                with the condition  f(0) = 0
    Independent voltage source:  v = const.
    Independent current source:  i = const.
Fundamental Concepts
– Two-ports (with port voltages v1, v2 and port currents i1, i2):

  Voltage to voltage transducer (VVT):  i1 = 0,  v2 = μ v1
  Voltage to current transducer (VCT):  i1 = 0,  i2 = g v1
  Current to voltage transducer (CVT):  v1 = 0,  v2 = r i1
  Current to current transducer (CCT):  v1 = 0,  i2 = α i1
  Ideal transformer (IT):               v1 = n v2,  i1 = -(1/n) i2
  Ideal gyrator (IG):                   v1 = r i2,  v2 = -r i1
Fundamental Concepts
Positive impedance converter (PIC):   v1 = k1 v2,  i1 = -k2 i2
Negative impedance converter (NIC):   v1 = k1 v2,  i1 = k2 i2
Ideal operational amplifier (OPAMP):  v1 = 0,  i1 = 0

The OPAMP is equivalent to a nullor constructed from two singular one-ports:
  the nullator:  v = 0,  i = 0
  and the norator:  v, i arbitrary.
Network Scaling
A typical design deals with network elements having resistance from ohms to megohms, capacitance from fF to mF, and inductance from mH to H, within a frequency range up to 10^9 Hz. Consider an

EXAMPLE: Calculate the derivative with 6-digit accuracy from

  f'(x_o) ≈ [f(x_o + Δx) - f(x_o)] / Δx

Let f(x_o) = 1, Δx = 10^-5, and f(x_o + Δx) = 1.0000086, so the derivative is 0.86. But because of roundoff to 6 digits we actually compute with

  f(x_o + Δx) = 1.00001  and  f(x_o) = 1,

which gives a derivative of 1.0, a 16% error.
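The loss of accuracy can be reproduced in a few lines. This is a minimal Python sketch using the slide's numbers; the function values are assumed for illustration and are not computed from any particular f:

```python
# Forward-difference derivative with the slide's (assumed) values.
dx = 1e-5
f_x0 = 1.0
f_x0_dx = 1.0000086            # exact value: the derivative is 0.86

exact = (f_x0_dx - f_x0) / dx  # about 0.86

# With only 6 significant digits, 1.0000086 rounds to 1.00001:
f_rounded = round(f_x0_dx, 5)           # 1.00001
approx = (f_rounded - f_x0) / dx        # about 1.0

error = abs(approx - exact) / exact     # roughly 0.16, i.e. 16%
print(exact, approx, error)
```

Rounding f(x_o + Δx) to six significant digits moves the difference quotient from 0.86 to 1.0, exactly the 16% error quoted above.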
Scaling is used to bring network impedances close to unity.

Impedance scaling: design values have the subscript d and scaled values the subscript s. For a scaling factor K we get:

  Z_s = Z_d / K,  so  R_s = R_d / K,  L_s = L_d / K,  C_s = K C_d

Frequency scaling: has an effect on the reactive elements. With the normalized frequency ω_s = ω_d / ω_o:

  Z_L = j ω_d L_d = j (ω_d / ω_o)(ω_o L_d) = j ω_s L_s
  Z_C = 1 / (j ω_d C_d) = 1 / [j (ω_d / ω_o)(ω_o C_d)] = 1 / (j ω_s C_s)

with

  L_s = ω_o L_d  and  C_s = ω_o C_d
For both impedance and frequency scaling we have:

  R_s = R_d / K,   L_s = ω_o L_d / K,   C_s = K ω_o C_d

VVT, CCT, IT, PIC, NIC, and the OPAMP remain unchanged.
For the VCT the transconductance g is multiplied by K.
For the CVT and IG the transresistance r is divided by K.
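The combined scaling rules can be checked numerically. In this Python sketch the element values, the scaling factors, and the test frequency are illustrative assumptions; the check confirms that every branch impedance is divided by K:

```python
# Combined impedance (K) and frequency (w0) scaling, per the formulas above.
K = 1e4          # impedance scaling factor (assumed)
w0 = 1e6         # frequency scaling factor in rad/s (assumed)

Rd, Ld, Cd = 1e4, 1e-3, 1e-9    # design values: 10 kOhm, 1 mH, 1 nF
wd = 2e6                         # a design frequency in rad/s

# Scaled element values and scaled frequency:
Rs = Rd / K                      # 1.0
Ls = w0 * Ld / K                 # 0.1
Cs = K * w0 * Cd                 # 10.0
ws = wd / w0                     # 2.0

# Every impedance is divided by K, so the response shape is preserved
# while the numbers the solver sees stay near unity:
ZL_ratio = (ws * Ls) / (wd * Ld)                 # should equal 1/K
ZC_ratio = (1 / (ws * Cs)) / (1 / (wd * Cd))     # should equal 1/K
print(Rs, Ls, Cs, ZL_ratio * K, ZC_ratio * K)
```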
NODAL EQUATIONS

For an (n+1)-terminal network with node voltages V1, ..., Vn+1 and injected node currents j1, ..., jn+1:

  Y V = J

or, written out:

  | y11     y12     ...  y1,n+1   | | V1   |   | j1   |
  | y21     y22     ...  y2,n+1   | | V2   | = | j2   |
  | ...                           | | ...  |   | ...  |
  | yn+1,1  yn+1,2  ...  yn+1,n+1 | | Vn+1 |   | jn+1 |
Y is called the indefinite admittance matrix.

For a network with R, L, C elements and VCTs we can obtain Y directly from the network by inspection.

For a VCT with controlling voltage V1 taken from node i to node j, and output current gV1 flowing from node k to node m, the entries ±g are stamped into rows k and m, columns i and j:

           col i   col j
  row k     +g      -g
  row m     -g      +g
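The stamp can be written as a small routine. This is a hedged Python sketch: the zero-based node numbering, the 4-terminal example, and the sign convention (output current taken from node k to node m) are assumptions for illustration:

```python
import numpy as np

def stamp_vct(Y, i, j, k, m, g):
    """Stamp a VCT with transconductance g into the indefinite
    admittance matrix Y (controlling voltage from node i to node j,
    output current g*V1 from node k to node m)."""
    Y[k, i] += g
    Y[k, j] -= g
    Y[m, i] -= g
    Y[m, j] += g

n = 4                       # number of terminals (assumed example)
Y = np.zeros((n, n))
stamp_vct(Y, i=0, j=1, k=2, m=3, g=0.5)

# Every row and every column of an indefinite admittance matrix sums to zero:
print(Y.sum(axis=0), Y.sum(axis=1))
```

The zero row and column sums are the defining property of the indefinite admittance matrix, so they make a convenient sanity check on any stamp.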
When k = i and m = j we have a one-port described by i = Yv, with g = Y, which stamps

           col i   col j
  row i     +Y      -Y
  row j     -Y      +Y

Linear Equations and Gaussian Elimination:
For a linear network the nodal equations are linear. Nonlinear networks can be solved by linearization about an operating point. Thus the solution of linear equations is basic to many problems.

Consider the system of linear equations

  A x = b
The solution can be obtained by inverting the matrix,

  x = A^(-1) b,

but this approach is not practical.

Gaussian elimination:
Rewrite the equations in explicit form and denote bi by ai,n+1 to simplify notation:

  | a11  a12  ...  a1n | | x1 |   | b1 |
  | a21  a22  ...  a2n | | x2 | = | b2 |
  | ...                | | ...|   | ...|
  | an1  an2  ...  ann | | xn |   | bn |

The resulting n × (n+1) array of coefficients is denoted Ã.
How to start Gaussian elimination? The system written out is

  a11 x1 + a12 x2 + ... + a1n xn = a1,n+1
  a21 x1 + a22 x2 + ... + a2n xn = a2,n+1
  ...
  an1 x1 + an2 x2 + ... + ann xn = an,n+1

Divide the first equation by a11, obtaining

  x1 + a12^(1) x2 + ... + a1n^(1) xn = a1,n+1^(1)

where

  a1j^(1) = a1j / a11,   j = 2, ..., n+1.

Multiply this equation by -a21 and add it to the second. The coefficients of the new second equation are

  a2j^(1) = a2j - a21 a1j^(1),   j = 2, ..., n+1;

with this transformation a21^(1) becomes zero. Similarly for the other equations, setting
  aij^(1) = aij - ai1 a1j^(1),   i = 2, ..., n;  j = 2, ..., n+1

makes all coefficients of the first column zero with the exception of a11^(1) = 1.
We repeat this process, selecting the diagonal elements as dividers, and obtain the general formulas

  akj^(k) = akj^(k-1) / akk^(k-1),           j = k, ..., n+1
  aij^(k) = aij^(k-1) - aik^(k-1) akj^(k),   i = k+1, ..., n;  j = k, ..., n+1

for k = 1, ..., n, where the superscript shows how many changes were made. The resulting equations have the form:

  x1 + a12^(1) x2 + ... + a1n^(1) xn = a1,n+1^(1)
       x2 + ... + a2n^(2) xn = a2,n+1^(2)
       ...
       xn = an,n+1^(n)
Back substitution is used to obtain the solution: the last equation gives xn directly, which is used to obtain xn-1, and so on. In general:

  xi = ai,n+1^(i) - Σ (j = i+1 to n) aij^(i) xj,   i = n-1, ..., 1

Gaussian elimination requires approximately n³/3 operations.

EXAMPLE:
Example 2.5.b (p70)
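The elimination and back-substitution formulas above can be sketched in a few lines of Python. This is a minimal illustration on an assumed 2×2 system, not the textbook's routine, and it omits pivoting:

```python
def gauss_solve(A, b):
    """Gaussian elimination with the pivot row normalized to 1,
    using an augmented matrix whose column n holds b (a[i][n] = b_i)."""
    n = len(b)
    a = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = a[k][k]                 # pivot (assumed nonzero here)
        for j in range(k, n + 1):     # normalize pivot row k
            a[k][j] /= piv
        for i in range(k + 1, n):     # eliminate column k below the pivot
            f = a[i][k]
            for j in range(k, n + 1):
                a[i][j] -= f * a[k][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))
    return x

# Assumed example: 2 x1 + x2 = 5, x1 + 3 x2 = 10
x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
print(x)   # x == [1.0, 3.0]
```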
Back substitution requires approximately n²/2 operations.

Triangular decomposition:
Triangular decomposition has an advantage over Gaussian elimination in that it gives a simple solution for systems with different right-hand-side vectors, and for the transpose systems required in sensitivity computations.

Assume we can factor the matrix A as

  A = L U

where

  L = | l11             0  |      U = | 1  u12  u13  ...  u1n |
      | l21  l22           |          |    1    u23  ...  u2n |
      | ...                |          |         ...           |
      | ln1  ln2  ...  lnn |          | 0              1      |
L stands for lower triangular and U for upper triangular. Replacing A by LU, the system of equations takes the form

  L U x = b

Define an auxiliary vector z by

  U x = z

Then L z = b, and z can be found easily from

  l11 z1 = b1
  l21 z1 + l22 z2 = b2
  ...
  ln1 z1 + ln2 z2 + ... + lnn zn = bn

so

  z1 = b1 / l11

and

  zi = (bi - Σ (j = 1 to i-1) lij zj) / lii,   i = 2, ..., n
This is called forward elimination. The solution of U x = z is called backward substitution. We have

  x1 + u12 x2 + ... + u1n xn = z1
       x2 + ... + u2n xn = z2
       ...
       xn = zn

so xn = zn and

  xi = zi - Σ (j = i+1 to n) uij xj,   i = n-1, ..., 1

To find the LU decomposition, consider a 4×4 matrix. Taking the product of L and U we have:

  A = | l11  l11 u12        l11 u13                  l11 u14                           |
      | l21  l21 u12 + l22  l21 u13 + l22 u23        l21 u14 + l22 u24                 |
      | l31  l31 u12 + l32  l31 u13 + l32 u23 + l33  l31 u14 + l32 u24 + l33 u34       |
      | l41  l41 u12 + l42  l41 u13 + l42 u23 + l43  l41 u14 + l42 u24 + l43 u34 + l44 |
From the first column we have

  li1 = ai1,   i = 1, ..., 4

from the first row we find

  u1j = a1j / l11,   j = 2, ..., 4

from the second column we have

  li2 = ai2 - li1 u12,   i = 2, ..., 4

and so on.

In machine implementation L and U overwrite A, with L occupying the lower and U the upper triangle of A.

In general, the algorithm of LU decomposition can be written as follows (Crout algorithm):

1. Set k = 1 and go to step 3.
2. Compute column k of L using
   lik = aik - Σ (m = 1 to k-1) lim umk,   i = k, ..., n;

   if k = n, stop.
3. Compute row k of U using

   ukj = (akj - Σ (m = 1 to k-1) lkm umj) / lkk,   j = k+1, ..., n.

4. Set k = k+1 and go to step 2.

This technique is implemented in the text by the CROUT subroutine. A modification which deals with rows only is implemented by LUROW, and a modification of Gaussian elimination which gives the LU decomposition is realized by the LUG subroutine.

Features of LU decomposition:
1. Simple calculation of the determinant:

   det A = Π (i = 1 to n) lii

2. If only the right-hand-side vector b is changed, there is no need to recalculate the decomposition; only the forward and backward substitutions are performed, which takes n² operations.
3. The transpose system A^T x = c required for sensitivity calculation can be solved easily, since A^T = U^T L^T.
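The Crout scheme above, together with forward elimination and backward substitution, can be sketched in Python. This is not the text's CROUT subroutine, only a small illustration; the 2×2 test matrix and right-hand side are assumed examples:

```python
def crout_lu(A):
    """Crout decomposition: L has a general diagonal, U a unit diagonal,
    and both overwrite a copy of A (L in the lower, U in the upper triangle)."""
    n = len(A)
    a = [row[:] for row in A]
    for k in range(n):
        for i in range(k, n):       # column k of L
            a[i][k] -= sum(a[i][m] * a[m][k] for m in range(k))
        for j in range(k + 1, n):   # row k of U (unit diagonal implied)
            a[k][j] = (a[k][j] - sum(a[k][m] * a[m][j] for m in range(k))) / a[k][k]
    return a

def lu_solve(a, b):
    """Forward elimination (L z = b), then backward substitution (U x = z)."""
    n = len(b)
    z = [0.0] * n
    for i in range(n):
        z[i] = (b[i] - sum(a[i][j] * z[j] for j in range(i))) / a[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
    return x

A = [[4.0, 2.0], [2.0, 3.0]]        # assumed example
a = crout_lu(A)
det = 1.0
for i in range(len(A)):
    det *= a[i][i]                  # det A = product of l_ii
x = lu_solve(a, [10.0, 9.0])
print(det, x)                       # det == 8.0, x == [1.5, 2.0]
```

Once `crout_lu` has run, `lu_solve` can be called repeatedly for new right-hand sides at n² cost, which is exactly feature 2 above.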
4. The number of operations required for the LU decomposition is

   M = (n³ - n) / 3

(equivalent to Gaussian elimination.)
Example 2.5.1
2.6 PIVOTING:
The element by which we divide in Gaussian elimination is called the pivot; it must not be zero. To improve accuracy the pivot element should have a large absolute value.

Partial pivoting: search for the largest element in the current column.
Full pivoting: search for the largest element in the remaining matrix.
Example 2.6.1
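The effect of partial pivoting can be seen on an assumed 2×2 system with a very small a11; this Python sketch solves it with and without the row interchange:

```python
def solve2(a11, a12, b1, a21, a22, b2, pivot):
    """2x2 Gaussian elimination; optionally swap rows so the larger
    first-column element becomes the pivot (partial pivoting)."""
    if pivot and abs(a21) > abs(a11):
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    f = a21 / a11              # elimination factor
    a22 -= f * a12
    b2 -= f * b1
    x2 = b2 / a22
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# Assumed example: a11 = 1e-17 is a terrible pivot in double precision.
# The exact solution is very close to x1 = 1, x2 = 1.
naive = solve2(1e-17, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=False)
pivoted = solve2(1e-17, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=True)
print(naive, pivoted)   # naive x1 is badly wrong; pivoted is accurate
```

Without the interchange the huge factor f = a21/a11 swamps the other coefficients and x1 is lost entirely; one row swap restores full accuracy.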
SPARSE MATRIX PRINCIPLES

When many coefficients of the matrix A are zero, we use sparse matrix techniques to reduce the number of operations. This not only reduces the time required to solve the system of equations but also reduces the memory requirements, as zero coefficients are not stored at all.

(Read section 2.7.)

Pivot selection strategies are motivated mostly by the possibility of reducing the number of operations.