
Numerical Analysis of Electronic Circuits

Amit Mehrotra

September 27, 2002


Contents

1 Formulation of Circuit Equations: Modified Nodal Analysis
  1.1 Conventions
  1.2 Fundamental Laws
  1.3 Widely Used Circuit Elements
  1.4 Modified Nodal Analysis
  1.5 MNA Stamps of common devices

2 Solution of Linear Equations: Direct Methods
  2.1 Gaußian Elimination
    2.1.1 LU Decomposition
  2.2 Pivoting
    2.2.1 Pivoting Strategies
  2.3 Error Mechanism
    2.3.1 Detecting Ill-Conditioning
    2.3.2 Finite Precision Arithmetic
    2.3.3 Scaling and Equilibration
  2.4 Sparse Matrix
    2.4.1 Determination of Pivots: Markowitz Criterion

3 Iterative Methods for Solving Large Linear Systems
  3.1 Krylov Subspace
  3.2 Generation of Krylov Subspace
    3.2.1 Arnoldi Process
    3.2.2 Lanczos Process
  3.3 Conjugate Gradient
  3.4 MINRES
  3.5 GMRES
  3.6 QMR
  3.7 Preconditioning

4 Solution of Nonlinear Equations
  4.1 Introduction
  4.2 Solution Methods
    4.2.1 Contraction Mapping Theorem
  4.3 Newton Raphson (scalar)
    4.3.1 Newton Raphson Convergence
  4.4 Nonlinear Circuit Elements
  4.5 Globally Convergent Newton's Method
    4.5.1 General Idea
    4.5.2 Backtracking
    4.5.3 Exact Line Search
  4.6 Continuation Methods and Homotopies
    4.6.1 Continuation Methods: Theory
    4.6.2 Saddle Node Bifurcation Point
    4.6.3 Circuit Implementation

5 Numerical Integration of Differential Equations
  5.1 Solution of Differential Equations
  5.2 Linear Multistep Methods
  5.3 Local Truncation Error
    5.3.1 Algorithm for Choosing LMS
  5.4 Stability
  5.5 Absolute Stability
    5.5.1 Region of Absolute Stability
    5.5.2 Finding Region of Absolute Stability
    5.5.3 A-Stable Methods
  5.6 Stiff Equations and Stiff Stability
    5.6.1 Requirements for Stiff Stability: Numerical Methods
    5.6.2 Choice of Step Size
    5.6.3 Application of LMS Methods to Circuits
    5.6.4 Stability with Variable Time Steps

6 Small Signal and Noise Analysis of Circuits
  6.1 General Formulation
  6.2 AC Analysis
  6.3 Noise Analysis

7 Steady-State Methods for Solving Periodic Circuits
  7.1 Periodic Steady-State Analysis
    7.1.1 Finite Difference Method
    7.1.2 Shooting Method
    7.1.3 Harmonic Balance Method
  7.2 Oscillator Steady-State Analysis
    7.2.1 Finite Difference
    7.2.2 Shooting
    7.2.3 Harmonic Balance

8 Periodic Small Signal Analysis of Circuits
  8.1 Periodic AC Analysis
  8.2 Periodic Noise Analysis

9 Model Order Reduction of Large Linear Circuits
  9.1 Pade Approximation
  9.2 Pade Via Lanczos


Chapter 1

Formulation of Circuit Equations: Modified Nodal Analysis

1.1 Conventions

Current is always treated as positive when flowing from the positive node to the negative node of an element, as shown below.

[Figure: a two-terminal element with branch voltage v, branch current i, and terminal node voltages e+ and e−]

Currents are denoted by i, branch voltages are denoted by v, and node voltages are denoted by e. For the above element

$$v = e_+ - e_-$$

Also, at a given node, currents going out of the node are considered positive.

1.2 Fundamental Laws

The fundamental laws governing circuit equations are

Kirchhoff's Current Law (KCL) states that the sum of all currents at a node is zero.

Kirchhoff's Voltage Law (KVL) states that the sum of all branch voltages around a loop is zero.

Branch equations are mathematical models of the circuit components in terms of voltage, current, charge, flux, etc.

KCL and KVL can be conveniently expressed in matrix form. Consider the following circuit:

[Figure: example circuit with nodes 1-4 and ground node 0, containing resistors R1, R3, R4, R8, independent current source IS5, voltage-controlled current source G2·v3, independent voltage source ES6, and voltage-controlled voltage source E7·v3]


KCL can be written as

$$
\begin{bmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & -1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{bmatrix}
\begin{bmatrix} i_1 \\ i_2 \\ i_3 \\ i_4 \\ i_5 \\ i_6 \\ i_7 \\ i_8 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
$$

or

$$A I = 0$$

KVL can be written as

$$
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \\ v_6 \\ v_7 \\ v_8 \end{bmatrix}
-
\begin{bmatrix}
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
1 & -1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & -1 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & -1
\end{bmatrix}
\begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
$$

or

$$V - A^T E = 0$$

1.3 Widely Used Circuit Elements

We now describe branch equations of widely used linear elements. Nonlinear elements will be considered later.

Resistors are characterized by an algebraic relationship between their current and voltage.

[Figure: resistor carrying current iR with branch voltage vR]

In general, the relationship can be nonlinear:

$$i_R = i(v_R)$$

Linear resistors have a linear relationship between the voltage and current:

$$i_R = G v_R = \frac{1}{R} v_R$$

Capacitors are characterized by an algebraic relationship between their charge and voltage.

[Figure: capacitor with branch voltage vC and current iC]

$$q_C = q(v_C) \qquad i_C = \frac{dq_C}{dt} = \frac{dq(v_C)}{dt}$$

One might be tempted to write

$$i_C = C(v_C) \frac{dv_C}{dt}$$

The two forms of equations give identical results for linear capacitors, but when the relationship is nonlinear it is better to use the charge equation, for numerical reasons that will be shown later.


Inductors are characterized by an algebraic relationship between their flux and current.

[Figure: inductor carrying current iL with branch voltage vL]

$$\phi_L = \phi(i_L) \qquad v_L = \frac{d\phi(i_L)}{dt}$$

Independent Voltage Sources are ideal elements which maintain a given voltage across their terminals and can deliver any amount of current.

[Figure: independent voltage source v(t) with branch current iV]

Independent Current Sources are ideal elements which generate a given current independent of the voltage across them.

[Figure: independent current source i(t) with branch voltage vI]

Voltage Controlled Current Sources are elements of the type shown below: a current source of value g·vc controlled by the voltage vc across a pair of controlling nodes.

[Figure: voltage-controlled current source g·vc with controlling voltage vc]

The element connected between the controlling nodes can be arbitrary.

Voltage Controlled Voltage Sources are elements of the type shown below: a voltage source of value e·vc controlled by the voltage vc across a pair of controlling nodes.

[Figure: voltage-controlled voltage source e·vc with controlling voltage vc]

The element connected between the controlling nodes can be arbitrary.

Current Controlled Current Sources are elements of the type shown below: a current source of value f·ic controlled by the current ic in a controlling branch.

[Figure: current-controlled current source f·ic with controlling current ic]

The element connected between the controlling nodes can be arbitrary. However, for ease of implementation, many circuit simulators require that for current-controlled sources the element between the controlling nodes be a voltage source or an inductor.

Current Controlled Voltage Sources are elements of the type shown below: a voltage source of value h·ic controlled by the current ic in a controlling branch.

[Figure: current-controlled voltage source h·ic with controlling current ic]


1.4 Modified Nodal Analysis

This is a compact and efficient way of generating the circuit matrix from the circuit description. It is best illustrated with an example.

[Figure: the same example circuit as in Section 1.2, with nodes 1-4 and ground node 0, resistors R1, R3, R4, R8, current source IS5, VCCS G2·v3, voltage source ES6, and VCVS E7·v3]

The modified nodal analysis proceeds as follows:

1. Write KCL

i1 + i2 + i3 = 0

−i3 + i4 − i5 − i6 = 0

i6 + i8 = 0

i7 − i8 = 0

2. Use branch equations to eliminate as many branch currents as possible

$$\frac{v_1}{R_1} + G_2 v_3 + \frac{v_3}{R_3} = 0$$
$$-\frac{v_3}{R_3} + \frac{v_4}{R_4} - i_6 = I_{S5}$$
$$i_6 + \frac{v_8}{R_8} = 0$$
$$i_7 - \frac{v_8}{R_8} = 0$$

3. Write down the unused branch equations

$$v_6 = E_{S6} \qquad v_7 - E_7 v_3 = 0$$

4. Use KVL to eliminate branch voltages

$$\frac{e_1}{R_1} + G_2 (e_1 - e_2) + \frac{e_1 - e_2}{R_3} = 0$$
$$-\frac{e_1 - e_2}{R_3} + \frac{e_2}{R_4} - i_6 = I_{S5}$$
$$i_6 + \frac{e_3 - e_4}{R_8} = 0$$
$$i_7 - \frac{e_3 - e_4}{R_8} = 0$$
$$e_3 - e_2 = E_{S6}$$
$$e_4 - E_7 (e_1 - e_2) = 0$$

In matrix form

$$
\begin{bmatrix}
\frac{1}{R_1} + G_2 + \frac{1}{R_3} & -G_2 - \frac{1}{R_3} & 0 & 0 & 0 & 0 \\
-\frac{1}{R_3} & \frac{1}{R_3} + \frac{1}{R_4} & 0 & 0 & -1 & 0 \\
0 & 0 & \frac{1}{R_8} & -\frac{1}{R_8} & 1 & 0 \\
0 & 0 & -\frac{1}{R_8} & \frac{1}{R_8} & 0 & 1 \\
0 & -1 & 1 & 0 & 0 & 0 \\
E_7 & -E_7 & 0 & -1 & 0 & 0
\end{bmatrix}
\begin{bmatrix} e_1 \\ e_2 \\ e_3 \\ e_4 \\ i_6 \\ i_7 \end{bmatrix}
=
\begin{bmatrix} 0 \\ I_{S5} \\ 0 \\ 0 \\ E_{S6} \\ 0 \end{bmatrix}
$$


or

$$\begin{bmatrix} Y_n & B \\ C & 0 \end{bmatrix} \begin{bmatrix} e \\ i \end{bmatrix} = S$$

• all node voltages are unknowns

• a branch current is introduced as an additional variable for each voltage source or inductor

• MNA can be used for any circuit

• MNA can be assembled directly from the input

1.5 MNA Stamps of common devices

Each device adds entries to the MNA matrix; these entries are called the "stamp" of the device. The MNA stamps of some common devices are shown below.

Resistor Let a resistor of resistance R be connected between nodes i and j. Then

KCL at node i: $(\text{other currents}) + \dfrac{e_i - e_j}{R} = 0$

KCL at node j: $(\text{other currents}) - \dfrac{e_i - e_j}{R} = 0$

Therefore the resistor stamp is

$$
\begin{array}{c|cc}
 & i & j \\ \hline
i & \frac{1}{R} & -\frac{1}{R} \\
j & -\frac{1}{R} & \frac{1}{R}
\end{array}
$$

Voltage controlled current source of value G between nodes i and j, with the controlling voltage being the potential difference of nodes k and l:

$$
\begin{array}{c|cc}
 & k & l \\ \hline
i & G & -G \\
j & -G & G
\end{array}
$$

Independent current source of value IS between nodes i and j (entries in the right-hand-side vector only):

$$
\begin{array}{c|c}
 & \text{RHS} \\ \hline
i & -I_S \\
j & I_S
\end{array}
$$

Independent voltage source of value V between nodes i and j at branch k:

$$
\begin{array}{c|ccc|c}
 & i & j & k & \text{RHS} \\ \hline
i & 0 & 0 & 1 & 0 \\
j & 0 & 0 & -1 & 0 \\
k & 1 & -1 & 0 & V
\end{array}
$$

Voltage controlled voltage source of value E between nodes i and j at branch k, with the controlling voltage being the potential difference of nodes l and m:

$$
\begin{array}{c|ccccc}
 & i & j & k & l & m \\ \hline
i & 0 & 0 & 1 & 0 & 0 \\
j & 0 & 0 & -1 & 0 & 0 \\
k & 1 & -1 & 0 & -E & E
\end{array}
$$


Current controlled current source of value F between nodes i and j, with the controlling current at branch k:

$$
\begin{array}{c|ccc}
 & i & j & k \\ \hline
i & 0 & 0 & F \\
j & 0 & 0 & -F
\end{array}
$$

Current controlled voltage source of value H between nodes i and j at branch k, with the controlling current at branch l:

$$
\begin{array}{c|cccc}
 & i & j & k & l \\ \hline
i & 0 & 0 & 1 & 0 \\
j & 0 & 0 & -1 & 0 \\
k & 1 & -1 & 0 & -H
\end{array}
$$

The MNA stamps of nonlinear devices are somewhat complicated and will be addressed later.
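To make the stamping procedure concrete, the sketch below assembles an MNA matrix for a small circuit one stamp at a time. It is a minimal Python/NumPy illustration written for these notes, not code from the original text; the function names (stamp_resistor, stamp_vsource) and the convention that node 0 is ground (and is not stored in the matrix) are choices made for this example.

```python
import numpy as np

def stamp_resistor(Y, i, j, R):
    """Add the resistor stamp 1/R between nodes i and j (node 0 = ground, not stored)."""
    G = 1.0 / R
    if i > 0: Y[i-1, i-1] += G
    if j > 0: Y[j-1, j-1] += G
    if i > 0 and j > 0:
        Y[i-1, j-1] -= G
        Y[j-1, i-1] -= G

def stamp_vsource(Y, rhs, i, j, k, V):
    """Add an independent voltage source between nodes i and j; k is its branch row/column."""
    if i > 0:
        Y[i-1, k] += 1.0
        Y[k, i-1] += 1.0
    if j > 0:
        Y[j-1, k] -= 1.0
        Y[k, j-1] -= 1.0
    rhs[k] += V

# Example: V1 = 5 V from node 1 to ground, R1 = 1 kOhm between nodes 1 and 2,
# R2 = 1 kOhm from node 2 to ground.  Unknowns: e1, e2, i(V1).
n_nodes, n_branches = 2, 1
Y = np.zeros((n_nodes + n_branches, n_nodes + n_branches))
rhs = np.zeros(n_nodes + n_branches)

stamp_resistor(Y, 1, 2, 1e3)
stamp_resistor(Y, 2, 0, 1e3)
stamp_vsource(Y, rhs, 1, 0, 2, 5.0)   # branch-current row/column has index 2

x = np.linalg.solve(Y, rhs)
print("e1 =", x[0], "e2 =", x[1], "i(V1) =", x[2])   # e1 = 5.0, e2 = 2.5
```

The voltage-source stamp above is exactly the table given earlier: a 1/−1 pair coupling the node rows to the new branch current, and a branch row enforcing e_i − e_j = V.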


Chapter 2

Solution of Linear Equations: Direct Methods

Here we present methods of directly solving $Ax = b$, $A \in \mathbb{C}^{n \times n}$, $x, b \in \mathbb{C}^n$.

2.1 Gaußian Elimination

Consider the following system of equations

a11x1 + a12x2 + . . . + a1nxn = b1

a21x1 + a22x2 + . . . + a2nxn = b2

...

an1x1 + an2x2 + . . . + annxn = bn

or in matrix form

Ax = b

From the first equation, $x_1$ can be written as

$$x_1 = \frac{b_1 - \sum_{i=2}^{n} a_{1i} x_i}{a_{11}}$$

Substituting this value of x1 in all other equations

$$a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n = b_1$$
$$\left(a_{22} - \frac{a_{12} a_{21}}{a_{11}}\right) x_2 + \left(a_{23} - \frac{a_{13} a_{21}}{a_{11}}\right) x_3 + \ldots + \left(a_{2n} - \frac{a_{1n} a_{21}}{a_{11}}\right) x_n = b_2 - \frac{a_{21}}{a_{11}} b_1$$
$$\vdots$$
$$\left(a_{n2} - \frac{a_{12} a_{n1}}{a_{11}}\right) x_2 + \left(a_{n3} - \frac{a_{13} a_{n1}}{a_{11}}\right) x_3 + \ldots + \left(a_{nn} - \frac{a_{1n} a_{n1}}{a_{11}}\right) x_n = b_n - \frac{a_{n1}}{a_{11}} b_1$$

In matrix form

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\
0 & a_{22} - \frac{a_{12} a_{21}}{a_{11}} & a_{23} - \frac{a_{13} a_{21}}{a_{11}} & \ldots & a_{2n} - \frac{a_{1n} a_{21}}{a_{11}} \\
\vdots & & & & \vdots \\
0 & a_{n2} - \frac{a_{12} a_{n1}}{a_{11}} & a_{n3} - \frac{a_{13} a_{n1}}{a_{11}} & \ldots & a_{nn} - \frac{a_{1n} a_{n1}}{a_{11}}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 - \frac{a_{21}}{a_{11}} b_1 \\ \vdots \\ b_n - \frac{a_{n1}}{a_{11}} b_1 \end{bmatrix}
$$


or

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\
0 & a^{(2)}_{22} & a^{(2)}_{23} & \ldots & a^{(2)}_{2n} \\
\vdots & & & & \vdots \\
0 & a^{(2)}_{n2} & a^{(2)}_{n3} & \ldots & a^{(2)}_{nn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b^{(2)}_2 \\ \vdots \\ b^{(2)}_n \end{bmatrix}
$$

where

$$b^{(2)}_i = b_i - l^{(1)}_{i1} b_1 \qquad a^{(2)}_{ij} = a_{ij} - a_{1j} l^{(1)}_{i1} \qquad l^{(1)}_{i1} = \frac{a^{(1)}_{i1}}{a^{(1)}_{11}}$$

In matrix form we can write this as $A^{(2)} x = b^{(2)}$. The new coefficient matrix $A^{(2)}$ is related to the original matrix as follows:

$$
A^{(2)} =
\begin{bmatrix}
1 & 0 & 0 & \ldots & 0 \\
-l^{(1)}_{21} & 1 & 0 & \ldots & 0 \\
-l^{(1)}_{31} & 0 & 1 & \ldots & 0 \\
\vdots & & & \ddots & \\
-l^{(1)}_{n1} & 0 & \ldots & 0 & 1
\end{bmatrix}
A = L_1^{-1} A
$$

Also

$$b^{(2)} = L_1^{-1} b$$

Note that

$$L_1^{-1} = I - l^{(1)} e_1^T$$

where

$$e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad \text{and} \qquad l^{(1)} = \begin{bmatrix} 0 \\ l^{(1)}_{21} \\ l^{(1)}_{31} \\ \vdots \\ l^{(1)}_{n1} \end{bmatrix}$$

At the $k$th step of Gaußian elimination, we multiply the $k$th equation by $l^{(k)}_{ik}$ and subtract it from the $i$th equation ($i > k$), where

$$l^{(k)}_{ik} = \frac{a^{(k)}_{ik}}{a^{(k)}_{kk}} \qquad a^{(k+1)}_{ij} = a^{(k)}_{ij} - l^{(k)}_{ik} a^{(k)}_{kj}$$

We can represent this as a matrix multiplication

$$L_k^{-1} A^{(k)} = A^{(k+1)}$$

where

$$L_k^{-1} = I - l^{(k)} e_k^T$$

and

$$e_k = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} \leftarrow k\text{th entry} \qquad \text{and} \qquad l^{(k)} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ l^{(k)}_{k+1,k} \\ \vdots \\ l^{(k)}_{nk} \end{bmatrix}$$


After $n$ steps

$$a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n = b_1$$
$$a^{(2)}_{22} x_2 + \ldots + a^{(2)}_{2n} x_n = b^{(2)}_2$$
$$\vdots$$
$$a^{(n)}_{nn} x_n = b^{(n)}_n$$

or in matrix form

$$U x = \begin{bmatrix} b_1 \\ b^{(2)}_2 \\ \vdots \\ b^{(n)}_n \end{bmatrix} = \tilde{b}$$

Note that

$$U = L_n^{-1} L_{n-1}^{-1} \ldots L_1^{-1} A$$

Further,

$$L_k = \left( I - l^{(k)} e_k^T \right)^{-1} = I + l^{(k)} e_k^T$$

It can be shown that

$$L_i L_j = I + l^{(i)} e_i^T + l^{(j)} e_j^T \qquad i < j$$

Therefore

$$L_1 L_2 \ldots L_n = L = \text{lower triangular matrix}$$

and

$$A = LU$$

Back substitution: $U x = \tilde{b}$ is solved as follows

$$x_n = \frac{\tilde{b}_n}{u_{nn}}$$
$$x_{n-1} = \frac{\tilde{b}_{n-1} - u_{n-1,n} x_n}{u_{n-1,n-1}}$$
$$\vdots$$
$$x_i = \frac{\tilde{b}_i - \sum_{j=i+1}^{n} u_{i,j} x_j}{u_{i,i}}$$

2.1.1 LU Decomposition

To solve Ax = b

• create LU = A

• Let y = U x. Then Ly = b. Solve for y (forward substitution)

• U x = y. Solve for x (backward substitution)

$$a_{ij} = \begin{cases} \sum_{k=1}^{i} l_{ik} u_{kj} & i \le j \\ \sum_{k=1}^{j} l_{ik} u_{kj} & i > j \end{cases}$$

$$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj} \qquad \text{if } i \le j$$

$$l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj}}{u_{jj}} \qquad \text{if } i > j$$
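As a concrete illustration of the Doolittle-style formulas above, the following sketch (Python/NumPy, written for this note rather than taken from the text) factors a dense matrix without pivoting and then solves Ax = b by forward and backward substitution.

```python
import numpy as np

def lu_decompose(A):
    """Return L (unit lower triangular) and U with A = L @ U.  No pivoting."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros_like(A, dtype=float)
    for i in range(n):
        # u_ij = a_ij - sum_{k<i} l_ik u_kj   for j >= i
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        # l_ji = (a_ji - sum_{k<i} l_jk u_ki) / u_ii   for j > i
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):             # backward substitution: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[4., 3., 0.], [3., 4., -1.], [0., -1., 4.]])
b = np.array([24., 30., -24.])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A), lu_solve(L, U, b))  # True [ 3.  4. -5.]
```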


2.2 Pivoting

Example: After two steps of GE, the matrix becomes

$$
\begin{bmatrix}
* & * & 0 & 0 & 0 & 0 \\
0 & * & 0 & 0 & * & 0 \\
0 & 0 & \frac{1}{R} & -\frac{1}{R} & 1 & 0 \\
0 & 0 & -\frac{1}{R} & \frac{1}{R} & 0 & 1 \\
0 & 0 & * & 0 & * & 0 \\
0 & 0 & 0 & * & * & 0
\end{bmatrix}
$$

Another GE step gives

$$
\begin{bmatrix}
* & * & 0 & 0 & 0 & 0 \\
0 & * & 0 & 0 & * & 0 \\
0 & 0 & \frac{1}{R} & -\frac{1}{R} & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & * & * & 0 \\
0 & 0 & 0 & * & * & 0
\end{bmatrix}
$$

so that

$$l_{i4} = \frac{a^{(4)}_{i4}}{a^{(4)}_{44}} = \infty$$

Solution: interchange rows and/or columns to bring a nonzero element into position $(k, k)$, e.g.

$$
\begin{bmatrix} 0 & 0 & * \\ * & * & 0 \\ * & * & 0 \end{bmatrix}
\rightarrow
\begin{bmatrix} * & * & 0 \\ 0 & 0 & * \\ * & * & 0 \end{bmatrix}
$$

This would be a problem even if exact arithmetic were used, because of the structure of the matrix. Most of the time, however, problems occur due to the finite precision of the computer. Consider the following example:

$$
\begin{bmatrix} 1.25 \times 10^{-4} & 1.25 \\ 12.5 & 12.5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 6.25 \\ 75 \end{bmatrix}
$$

Assume finite arithmetic with 3-digit floating point. After the first step of GE

$$
\begin{bmatrix} 1.25 \times 10^{-4} & 1.25 \\ 0 & -1.25 \times 10^{5} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 6.25 \\ -6.25 \times 10^{5} \end{bmatrix}
$$

which implies $x_2 = 5.00$. Substituting this in the first equation

$$1.25 \times 10^{-4} x_1 + 1.25 x_2 = 6.25$$

we have $x_1 = 0.00$. The actual solution (to 3 digits accuracy) is $x_1 = 1.00$ and $x_2 = 5.00$. The error in this computation occurred because $a_{11}$ was too small relative to the other numbers. This can be easily rectified by a row interchange:

$$
\begin{bmatrix} 12.5 & 12.5 \\ 1.25 \times 10^{-4} & 1.25 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 75 \\ 6.25 \end{bmatrix}
$$

After one step of GE

$$
\begin{bmatrix} 12.5 & 12.5 \\ 0 & 1.25 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 75 \\ 6.25 \end{bmatrix}
$$

whose solution is $x_1 = 1$, $x_2 = 5$. Therefore, we need to use pivoting even to maintain accuracy.


2.2.1 Pivoting Strategies

1. Partial pivoting (row interchange only): choose $l$ as the smallest integer such that

$$\left| a^{(k)}_{lk} \right| = \max_{j=k,\ldots,n} \left| a^{(k)}_{jk} \right|$$

and interchange rows $k$ and $l$.

2. Complete pivoting (row and column interchange): choose $l$ and $m$ as the smallest integers such that

$$\left| a^{(k)}_{lm} \right| = \max_{\substack{i=k,\ldots,n \\ j=k,\ldots,n}} \left| a^{(k)}_{ij} \right|$$

and interchange rows $k$ and $l$ and columns $k$ and $m$.

3. Threshold pivoting: apply partial pivoting only if $\left| a^{(k)}_{kk} \right| < p \left| a^{(k)}_{lk} \right|$; apply complete pivoting only if $\left| a^{(k)}_{kk} \right| < p \left| a^{(k)}_{lm} \right|$.
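A compact sketch of Gaußian elimination with partial pivoting, written in Python/NumPy for illustration (the in-place layout and loop structure are this example's own choices, not prescribed by the text): at step k the entry of largest magnitude in column k on or below the diagonal is brought to position (k, k) before eliminating.

```python
import numpy as np

def ge_partial_pivot(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (row interchanges)."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # partial pivoting: pick the largest |a_jk|, j = k..n-1, as the pivot
        l = k + np.argmax(np.abs(A[k:, k]))
        if l != k:
            A[[k, l]], b[[k, l]] = A[[l, k]], b[[l, k]]
        for i in range(k + 1, n):
            lik = A[i, k] / A[k, k]
            A[i, k:] -= lik * A[k, k:]
            b[i] -= lik * b[k]
    # back substitution
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[1.25e-4, 1.25], [12.5, 12.5]])
b = np.array([6.25, 75.0])
print(ge_partial_pivot(A, b))   # [1. 5.]
```

Run on the 2x2 example from the previous section, the pivoted elimination recovers the correct solution directly.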

2.3 Error Mechanism

The Gaußian elimination algorithm works well if exact arithmetic with pivoting is used. However, in finite precision arithmetic, errors may occur even with pivoting. The two main reasons for this are

• numerically singular matrix

• numerical stability of the method

Consider the following example:

x − y = 0

x + y = 1

The solution of this system can be computed accurately. Now consider the following system

x − y = 0

x − 1.01y = 0.01

The system here is called ill-conditioned and the solution cannot be computed very accurately. We need a way to detect this.


2.3.1 Detecting Ill-Conditioning

Vector Norms are defined as follows:

$$L_1: \quad \|x\|_1 = \sum_{i=1}^{n} |x_i|$$
$$L_2: \quad \|x\|_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}$$
$$L_\infty: \quad \|x\|_\infty = \max_i |x_i|$$

Regardless of which norm is used, they satisfy the following properties:

$$\|x\| = 0 \Leftrightarrow x = 0$$
$$\|\alpha x\| = |\alpha| \, \|x\| \qquad \alpha \text{ scalar}$$
$$\|x + y\| \le \|x\| + \|y\|$$

Matrix Norms can be defined from vector norms as follows:

$$\|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|}$$

From the above definition, various matrix norms can be computed as:

$$L_1: \quad \|A\|_1 = \max_j \sum_{i=1}^{n} |a_{ij}|$$
$$L_2: \quad \|A\|_2 = \sqrt{\text{largest eigenvalue of } A^T A}$$
$$L_\infty: \quad \|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$$

Matrix norms satisfy the following additional properties:

$$\|AB\| \le \|A\| \, \|B\|$$
$$\|Ax\| \le \|A\| \, \|x\|$$

Now consider $A$, $x$ and $b$ such that $Ax = b$, and consider the effect of perturbing $b$ or $A$ on the solution $x$:

• when $b$ is perturbed, $b \to b + \delta b$:

$$A(x + \delta x) = b + \delta b$$
$$A \, \delta x = \delta b$$
$$\|\delta x\| \le \|A^{-1}\| \, \|\delta b\|$$
$$\frac{\|\delta x\|}{\|x\|} \le \|A\| \, \|A^{-1}\| \, \frac{\|\delta b\|}{\|b\|} = k(A) \, \frac{\|\delta b\|}{\|b\|}$$

where

$$k(A) = \|A\| \, \|A^{-1}\|$$

• when $A$ is perturbed, $A \to A + \delta A$:

$$(A + \delta A)(x + \delta x) = b$$
$$A \, \delta x + \delta A (x + \delta x) = 0$$
$$\|\delta x\| \le \|A^{-1}\| \, \|\delta A\| \, \|x + \delta x\|$$
$$\frac{\|\delta x\|}{\|x + \delta x\|} \le \|A\| \, \|A^{-1}\| \, \frac{\|\delta A\|}{\|A\|} = k(A) \, \frac{\|\delta A\|}{\|A\|}$$

$k(A)$ is called the condition number of $A$. A large $k(A)$ implies that $A$ is close to being singular, i.e., it is ill-conditioned.
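The bound above is easy to check numerically. The short NumPy experiment below (an illustration added here, not part of the original notes) perturbs the right-hand side of a well-conditioned and of an ill-conditioned 2x2 system and compares the observed relative change in x against k(A)·||δb||/||b||.

```python
import numpy as np

def relative_sensitivity(A, b, db):
    """Compare the actual relative change in x with the bound k(A) * |db|/|b|."""
    x  = np.linalg.solve(A, b)
    dx = np.linalg.solve(A, db)          # A (x + dx) = b + db  =>  A dx = db
    actual = np.linalg.norm(dx) / np.linalg.norm(x)
    bound  = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
    return actual, bound

b  = np.array([1.0, 1.0])
db = np.array([1e-6, -1e-6])

well = np.array([[1.0, -1.0], [1.0, 1.0]])        # well-conditioned
ill  = np.array([[1.0, -1.0], [1.0, -1.01]])      # nearly parallel rows

print(relative_sensitivity(well, b, db))   # actual change and bound are both tiny
print(relative_sensitivity(ill,  b, db))   # both are several orders of magnitude larger
```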


2.3.2 Finite Precision Arithmetic

Let $u$ be the rounding unit of the floating point representation of a number in a computer. E.g. on Linux,

$$u \approx \begin{cases} 10^{-8} & \text{single precision floating point} \\ 10^{-16} & \text{double precision} \\ 10^{-24} \text{ or } 10^{-32} & \text{long double precision} \end{cases}$$

Then for any number $a$, let $\bar{a}$ denote its floating point machine representation; then

$$|\bar{a} - a| \le u |a|$$

Thus

$$\|\delta A\|_\infty = \|\bar{A} - A\|_\infty \le u \|A\|_\infty \qquad \|\delta b\|_\infty = \|\bar{b} - b\|_\infty \le u \|b\|_\infty$$

Before we proceed, it is useful to consider the nearly ideal situation in which no roundoff errors occur during the entire solution process except when $A$ and $b$ are stored. The computed solution $\bar{x} = x + \delta x$ then satisfies

$$(A + \delta A)(x + \delta x) = b + \delta b$$

and, to first order,

$$\|\bar{x} - x\|_\infty \le \|\delta x_b\|_\infty + \|\delta x_A\|_\infty \le 2 u \, k(A) \, \|x\|_\infty$$

where $\delta x_b$ and $\delta x_A$ denote the perturbations in the solution due to $\delta b$ and $\delta A$ respectively.

Therefore there is error in the solution $x$ even with a perfect algorithm. The following two theorems (stated without proof) determine the bounds on the error in GE.

Theorem: Assume that $A$ is an $n \times n$ matrix of floating point numbers. If no zero pivots are encountered during the execution of Gaußian elimination, then the computed triangular matrices $\bar{L}$ and $\bar{U}$ satisfy

$$\bar{L} \bar{U} = A + H$$
$$\|H\| \le 3(n-1) \, u \left( \|A\| + \|\bar{L}\| \, \|\bar{U}\| \right) + O(u^2)$$

Theorem: Let $\bar{L}$ and $\bar{U}$ be the computed LU factors of $A$ obtained using Gaußian elimination, and let forward and back substitution be used to produce the computed solution $\bar{y}$ to $\bar{L} y = b$ and the computed solution $\bar{x}$ to $\bar{U} x = \bar{y}$. Then $(A + E)\bar{x} = b$ with

$$\|E\| \le n u \left( 3\|A\| + 5\|\bar{L}\| \, \|\bar{U}\| \right) + O(u^2)$$

Partial and/or complete pivoting ensure that $\|\bar{L}\|_\infty \le n$. Define the growth factor $\rho$ as

$$\rho = \frac{\max_{i,j,k} \left| a^{(k)}_{i,j} \right|}{\|A\|_\infty}$$

It follows that

$$\|E\|_\infty \le 8 n^3 \rho \, \|A\|_\infty u + O(u^2)$$

For partial pivoting $\rho \le 2^{n-1}$ and for complete pivoting $\rho \le 1.8 \, n^{\frac{1}{4} \ln n}$. However, typically $\rho < 10$.

2.3.3 Scaling and Equilibration

$$A x = b$$

Rescale the unknowns

$$\tilde{x} = D_1^{-1} x$$

and equilibrate

$$D_2 A D_1 \tilde{x} = D_2 b$$

$D_1$ and $D_2$ can be chosen such that the condition number of $D_2 A D_1$ is much smaller than the condition number of $A$. In circuit applications, one can rescale unknowns to reflect the difference in units, e.g. volts and microamperes. Choose $D_2$ such that each row satisfies

$$\max_j |a_{ij}| \le 1$$

Scaling and equilibration only affect the choice of pivot elements in a pivoting scheme.

2.4 Sparse Matrix

Typically in a circuit, the number of elements connected to a node is limited to 5 or 6. Therefore for a large circuit, the number of zero elements in the circuit matrix is very large. Such matrices are called sparse. The computational and storage complexity of sparse matrices can be reduced by the following optimizations:

• avoid storing zeros - use data structure of linked lists or pointers

• avoid trivial operations 0 × x = 0, 0 + x = x

• avoid losing sparsity, minimize “fill-ins”

For example, ordering the "arrow" matrix below with the full row and column first causes the LU factors to fill in completely, whereas the reverse ordering produces no fill-ins:

$$
\begin{bmatrix}
* & * & * & * & * & * \\
* & * & & & & \\
* & & * & & & \\
* & & & * & & \\
* & & & & * & \\
* & & & & & *
\end{bmatrix}
\xrightarrow{LU}
\begin{bmatrix}
* & * & * & * & * & * \\
* & * & * & * & * & * \\
* & * & * & * & * & * \\
* & * & * & * & * & * \\
* & * & * & * & * & * \\
* & * & * & * & * & *
\end{bmatrix}
$$

$$
\begin{bmatrix}
* & & & & & * \\
& * & & & & * \\
& & * & & & * \\
& & & * & & * \\
& & & & * & * \\
* & * & * & * & * & *
\end{bmatrix}
\xrightarrow{LU}
\begin{bmatrix}
* & & & & & * \\
& * & & & & * \\
& & * & & & * \\
& & & * & & * \\
& & & & * & * \\
* & * & * & * & * & *
\end{bmatrix}
$$

2.4.1 Determination of Pivots: Markowitz Criterion

Consider the choice of the $k$th pivot in the reduced matrix $A^{(k)}$.

[Example: a 5x5 reduced matrix $A^{(k)}$ with row nonzero counts $r_i = 4, 3, 2, 2, 2$ and column nonzero counts $c_j = 3, 3, 2, 2, 3$]

The Markowitz product for a candidate pivot $A^{(k)}_{i,j} \ne 0$ is

• $(r_i - 1)(c_j - 1)$, which is

• the maximum possible number of fill-ins caused by pivoting on $(i, j)$; moreover,

• the number of multiplications required in GE to pivot on $(i, j)$ is $r_i (c_j - 1)$.

Therefore, for sparse matrices, an additional criterion for pivoting is to choose a pivot which causes the least number of fill-ins.


Chapter 3

Iterative Methods for Solving Large Linear Systems

The complexity of solving a dense system of equations is $O(n^3)$, which can be prohibitive for large $n$. Even when $A$ is sparse, directly solving a linear equation can be very expensive. In many of these cases, the solution can be obtained efficiently using a Krylov subspace method. Therefore, before proceeding further, we first give the definition of a Krylov subspace.

3.1 Krylov Subspace

Definition Given a matrix $A$ and a vector $v$, the $i$th order Krylov subspace is defined as

$$K_i(v, A) = \text{span}\left\{ v, Av, A^2 v, \ldots, A^{i-1} v \right\}$$

Obviously $i$ cannot be made arbitrarily large. If the rank of the matrix $A$ is $n$ then $i \le n$. More precisely, $i$ is the order of the annihilating polynomial for the matrix $A$ and vector $v$.

Definition The annihilating polynomial is the polynomial

$$p(x) = x^i + a_{i-1} x^{i-1} + \ldots + a_1 x + a_0$$

of minimum degree $i$ such that $p(A) v = 0$.

It can be shown that the annihilating polynomial is unique for a given $A$ and $v$.

A generic Krylov subspace algorithm for solving $Ax = b$ can be described as follows:

1. guess a solution $x^{(0)}$, let

$$r^{(0)} = b - A x^{(0)}$$

and $i = 0$.

2. while $\|r^{(i)}\| \ge \epsilon$, where $\epsilon$ is some predefined error tolerance:

(a) $i \to i + 1$.

(b) generate $K_i\left(r^{(0)}, A\right)$.

(c) generate $x^{(i)} \in x^{(0)} + K_i\left(r^{(0)}, A\right)$ such that $\|r^{(i)}\|$ is minimized in "some sense".


Krylov subspace methods differ from each other in (a) how they generate the Krylov subspace in step 2b and (b) how they minimize the residue in step 2c. As will be seen in the following section, generation of the Krylov subspace involves only matrix-vector products. Therefore Krylov subspace methods can be easily applied to situations where the matrix may not be directly available and generating, storing and multiplying with that matrix involves significant overhead. For instance, in the harmonic balance method, the coefficient matrix is available as a sequence of transforms which can be efficiently applied to a vector. However, the coefficient matrix generated from the product of those transforms is dense, which involves significant overhead in storage and multiplication; factoring it is obviously out of the question.

3.2 Generation of Krylov Subspace

We now describe some methods for generating the set of vectors which span the Krylov subspace $K_i(v, A)$. The obvious way of doing this is to successively generate the vectors $A^i v$. This is numerically a bad way of generating the Krylov subspace. To understand why, let $A$ be diagonalized as

$$A = W \Lambda W^{-1}$$

Then

$$A^i = W \Lambda^i W^{-1}$$

On a finite precision computer, as $i$ increases, $A^i$ only has information about the dominant eigenvalues of $A$ and all other eigenvalues disappear. In other words, as the dimension of $K_i(v, A)$ is increased, the new basis vector, which was supposed to increase the dimension of this subspace by 1, has a component in the new dimension which is numerically insignificant compared to the components which point in the previously generated dimensions. Therefore, numerically, the dimension of the subspace has not increased.

3.2.1 Arnoldi Process

One obvious way of circumventing this problem is to remove the components in the previously generated directions and optionally renormalize the resulting vector. At the $i$th step, let

$$K_i(v, A) = \text{span}\left\{ b_0, b_1, \ldots, b_{i-1} \right\}$$

with $b_j^T b_k = 0$, $j \ne k$. To increase the dimension by 1, form

$$d_i = A b_{i-1}$$

orthogonalize it against $b_0, b_1, \ldots, b_{i-1}$

$$c_i = d_i - \sum_{j=0}^{i-1} \beta_j b_j$$

where the $\beta_j$ are chosen such that $b_j^T c_i = 0 \; \forall j < i$, and normalize it

$$b_i = \frac{c_i}{\|c_i\|}$$

From the condition $c_i^T b_j = 0$, it follows that

$$\beta_j = \frac{b_j^T A b_{i-1}}{b_j^T b_j}$$

This orthogonalization procedure is the so-called modified Gram-Schmidt orthogonalization procedure. The obvious disadvantage of this method is that as $i$ increases, the computational cost of each iteration increases. The method itself is $O(n^2)$.
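The sketch below is one way to write the Arnoldi process in Python/NumPy for illustration: it builds an orthonormal basis of K_m(v, A) column by column using modified Gram-Schmidt, and also returns the (m+1) x m Hessenberg matrix of orthogonalization coefficients that GMRES later relies on. The function name and return convention are choices made for this example.

```python
import numpy as np

def arnoldi(A, v, m):
    """Build an orthonormal basis Q of K_m(v, A) and the Hessenberg matrix H
    with A @ Q[:, :m] = Q @ H (modified Gram-Schmidt orthogonalization)."""
    n = len(v)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ Q[:, j]                    # new direction
        for i in range(j + 1):             # remove components along q_0 .. q_j
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:            # invariant subspace reached
            return Q[:, :j + 1], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
v = rng.standard_normal(50)
Q, H = arnoldi(A, v, 10)
print(np.allclose(Q.T @ Q, np.eye(Q.shape[1]), atol=1e-10))   # columns are orthonormal
print(np.allclose(A @ Q[:, :-1], Q @ H, atol=1e-10))          # Arnoldi relation holds
```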


3.2.2 Lanczos Process

Another method for generating the Krylov subspace is the Lanczos process, which was originally proposed for matrix tridiagonalization (used for eigenvalue computation). Consider the following set of equations

$$A b_1 = b_1 r_{1,1} + b_2$$
$$A b_2 = b_1 r_{1,2} + b_2 r_{2,2} + b_3$$
$$A b_3 = b_1 r_{1,3} + b_2 r_{2,3} + b_3 r_{3,3} + b_4$$
$$\vdots \tag{3.1}$$
$$A b_{k-1} = b_1 r_{1,k-1} + b_2 r_{2,k-1} + \ldots + b_{k-1} r_{k-1,k-1} + b_k$$
$$A b_k = b_1 r_{1,k} + b_2 r_{2,k} + \ldots + b_k r_{k,k}$$

for some $r_{i,j}$. Similarly we can write these equations for the $c_i$ and $A^*$:

$$A^* c_1 = c_1 s_{1,1} + c_2$$
$$A^* c_2 = c_1 s_{1,2} + c_2 s_{2,2} + c_3$$
$$A^* c_3 = c_1 s_{1,3} + c_2 s_{2,3} + c_3 s_{3,3} + c_4$$
$$\vdots \tag{3.2}$$
$$A^* c_{k-1} = c_1 s_{1,k-1} + c_2 s_{2,k-1} + \ldots + c_{k-1} s_{k-1,k-1} + c_k$$
$$A^* c_k = c_1 s_{1,k} + c_2 s_{2,k} + \ldots + c_k s_{k,k}$$

Here $k$ is the minimum of the degrees of the annihilating polynomials for $(b_1, A)$ and $(c_1, A^*)$. It follows from the above equations that $b_1, \ldots, b_i$ span the Krylov subspace $K_{i-1}(b_1, A)$. We can rewrite the above set of equations as $AB = B(J + R)$, where $B = [b_1, b_2 \ldots b_k]$, $R$ is an upper triangular matrix and $J = [e_2, e_3 \ldots e_k \; 0]$, i.e.,

$$J = \begin{bmatrix}
0 & 0 & 0 & \ldots & 0 & 0 \\
1 & 0 & 0 & \ldots & 0 & 0 \\
0 & 1 & 0 & \ldots & 0 & 0 \\
& & & \ddots & & \\
0 & 0 & 0 & \ldots & 1 & 0
\end{bmatrix}$$

Similarly we can form $A^* C = C(J + S)$, where $C = [c_1, c_2 \ldots c_k]$ and $S$ is another upper triangular matrix. We select $r_{1,1}$ and $s_{1,1}$ such that $c_1^* b_2 = 0$ and $b_1^* c_2 = 0$; $r_{1,2}$, $r_{2,2}$, $s_{1,2}$ and $s_{2,2}$ such that $c_1^* b_3 = c_2^* b_3 = b_1^* c_3 = b_2^* c_3 = 0$; and so on, such that $C^* B = D$ is diagonal.

Now we will show that the elements of $S$ and $R$ can be chosen such that $C^* B = D$ is a nonsingular diagonal matrix. Assume $c_1^* b_2 = b_1^* c_2 = 0$. This simplifies to

$$c_1^*(A b_1 - r_{1,1} b_1) = 0 \quad \text{or} \quad r_{1,1} = \frac{c_1^* A b_1}{c_1^* b_1}$$
$$b_1^*(A^* c_1 - s_{1,1} c_1) = 0 \quad \text{or} \quad s_{1,1} = \frac{b_1^* A^* c_1}{b_1^* c_1} = \bar{r}_{1,1}$$

where $\bar{\cdot}$ denotes conjugation. Hence we have to choose $b_1$ and $c_1$ such that $c_1^* b_1 \ne 0$.

Requiring $c_1^* b_3 = c_2^* b_3 = b_1^* c_3 = b_2^* c_3 = 0$ we obtain

$$c_1^*(A b_2 - r_{1,2} b_1 - r_{2,2} b_2) = 0 \quad \text{or} \quad r_{1,2} = \frac{c_1^* A b_2}{c_1^* b_1} \quad \text{since } c_1^* b_2 = 0$$
$$c_2^*(A b_2 - r_{1,2} b_1 - r_{2,2} b_2) = 0 \quad \text{or} \quad r_{2,2} = \frac{c_2^* A b_2}{c_2^* b_2} \quad \text{since } c_2^* b_1 = 0$$
$$b_1^*(A^* c_2 - s_{1,2} c_1 - s_{2,2} c_2) = 0 \quad \text{or} \quad s_{1,2} = \frac{b_1^* A^* c_2}{b_1^* c_1} \quad \text{since } b_1^* c_2 = 0$$
$$b_2^*(A^* c_2 - s_{1,2} c_1 - s_{2,2} c_2) = 0 \quad \text{or} \quad s_{2,2} = \frac{b_2^* A^* c_2}{b_2^* c_2} \quad \text{since } b_2^* c_1 = 0$$

Hence we have to choose $c_1$ and $b_1$ such that $c_2^* b_2 \ne 0$.

Requiring $c_1^* b_4 = c_2^* b_4 = c_3^* b_4 = b_1^* c_4 = b_2^* c_4 = b_3^* c_4 = 0$ we obtain

$$c_1^*(A b_3 - r_{1,3} b_1 - r_{2,3} b_2 - r_{3,3} b_3) = 0 \quad \text{or} \quad r_{1,3} = \frac{c_1^* A b_3}{c_1^* b_1} \quad \text{since } c_1^* b_2 = c_1^* b_3 = 0$$
$$c_2^*(A b_3 - r_{1,3} b_1 - r_{2,3} b_2 - r_{3,3} b_3) = 0 \quad \text{or} \quad r_{2,3} = \frac{c_2^* A b_3}{c_2^* b_2} \quad \text{since } c_2^* b_1 = c_2^* b_3 = 0$$
$$c_3^*(A b_3 - r_{1,3} b_1 - r_{2,3} b_2 - r_{3,3} b_3) = 0 \quad \text{or} \quad r_{3,3} = \frac{c_3^* A b_3}{c_3^* b_3} \quad \text{since } c_3^* b_1 = c_3^* b_2 = 0$$
$$b_1^*(A^* c_3 - s_{1,3} c_1 - s_{2,3} c_2 - s_{3,3} c_3) = 0 \quad \text{or} \quad s_{1,3} = \frac{b_1^* A^* c_3}{b_1^* c_1} \quad \text{since } b_1^* c_2 = b_1^* c_3 = 0$$
$$b_2^*(A^* c_3 - s_{1,3} c_1 - s_{2,3} c_2 - s_{3,3} c_3) = 0 \quad \text{or} \quad s_{2,3} = \frac{b_2^* A^* c_3}{b_2^* c_2} \quad \text{since } b_2^* c_1 = b_2^* c_3 = 0$$
$$b_3^*(A^* c_3 - s_{1,3} c_1 - s_{2,3} c_2 - s_{3,3} c_3) = 0 \quad \text{or} \quad s_{3,3} = \frac{b_3^* A^* c_3}{b_3^* c_3} \quad \text{since } b_3^* c_1 = b_3^* c_2 = 0$$

Hence we have to choose $c_1$ and $b_1$ such that $c_3^* b_3 \ne 0$. In general we can show that

$$r_{i,j} = \frac{c_i^* A b_j}{c_i^* b_i} \qquad \text{and} \qquad s_{i,j} = \frac{b_i^* A^* c_j}{b_i^* c_i}$$

Consider

$$r_{i,j} = \frac{c_i^* A b_j}{c_i^* b_i} \qquad i < j - 1$$

Substituting $A^* c_i = s_{1,i} c_1 + s_{2,i} c_2 + \ldots + s_{i,i} c_i + c_{i+1}$ in the above equation we obtain

$$r_{i,j} = \frac{\left( \bar{s}_{1,i} c_1^* + \bar{s}_{2,i} c_2^* + \ldots + \bar{s}_{i,i} c_i^* + c_{i+1}^* \right) b_j}{c_i^* b_i} = 0$$

Similarly $s_{i,j} = 0$ for $i < j - 1$. Hence $R$ and $S$ have nonzero entries along the diagonal and superdiagonal only. This also implies that the $c_i$ are $A$-orthogonal to the $b_j$ for $i < j - 1$ and the $b_i$ are $A^*$-orthogonal to the $c_j$ for $i < j - 1$.

Consider

$$r_{i,j} = \frac{c_i^* A b_j}{c_i^* b_i} \qquad i = j - 1$$

Substituting the value of $A^* c_i$ as above we obtain

$$r_{i,j} = \frac{c_{i+1}^* b_j}{c_i^* b_i}$$

Substituting $i = j - 1$ we have

$$r_{j-1,j} = \frac{c_j^* b_j}{c_{j-1}^* b_{j-1}}$$

Similarly we can show that

$$s_{j-1,j} = \frac{b_j^* c_j}{b_{j-1}^* c_{j-1}} = \bar{r}_{j-1,j}$$

We have already seen that

$$s_{j,j} = \bar{r}_{j,j}$$

Hence the Lanczos process can be written as:

0. Choose $b_1$ and $c_1$ such that $b_1 \ne 0$, $c_1 \ne 0$ and $c_1^* b_1 \ne 0$. Set $b_0 = c_0 = 0$ and $c_0^* b_0 = 1$.


For $i = 1, 2, \ldots$ do

1. Compute $c_i^* b_i$. If $c_i^* b_i = 0$, then stop.

2. Set

$$b_{i+1} = A b_i - \frac{c_i^* A b_i}{c_i^* b_i} b_i - \frac{c_i^* b_i}{c_{i-1}^* b_{i-1}} b_{i-1}$$

$$c_{i+1} = A^* c_i - \frac{b_i^* A^* c_i}{b_i^* c_i} c_i - \frac{b_i^* c_i}{b_{i-1}^* c_{i-1}} c_{i-1}$$

This algorithm terminates if $c_i = 0$ or $b_i = 0$. It can be shown that $b_i = 0$ occurs iff $K_i(b_1, A)$ is an $A$-invariant subspace and the vectors $b_1, \ldots, b_i$ form a basis of the subspace. This is the regular termination of the algorithm. However, the procedure breaks down if $c_i \ne 0$ and $b_i \ne 0$ but $c_i^* b_i = 0$ or $c_i^* b_i \approx 0$. In finite precision arithmetic exact cancellations are unlikely, but we can have $c_i^* b_i \approx 0$ with $c_i$ and $b_i$ far from zero. This causes numerical instability in subsequent iterations.
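The following is a minimal sketch of the two-sided (nonsymmetric) Lanczos recursion above, written in Python/NumPy for illustration and restricted to a real matrix A (so that A* = A^T and the conjugations drop out). It does not include the look-ahead safeguards mentioned later; it simply stops when c_i^T b_i becomes numerically zero.

```python
import numpy as np

def lanczos_biorth(A, b1, c1, m):
    """Two-sided Lanczos for a real matrix A: build b_1..b_m and c_1..c_m
    such that C^T B is (numerically) diagonal."""
    n = len(b1)
    B, C = np.zeros((n, m)), np.zeros((n, m))
    B[:, 0], C[:, 0] = b1, c1
    b_prev, c_prev, d_prev = np.zeros(n), np.zeros(n), 1.0   # convention: c_0^T b_0 = 1
    for i in range(m - 1):
        d = C[:, i] @ B[:, i]                 # c_i^T b_i
        if abs(d) < 1e-14:                    # (near) breakdown or regular termination
            return B[:, :i + 1], C[:, :i + 1]
        Ab, Atc = A @ B[:, i], A.T @ C[:, i]
        B[:, i + 1] = Ab  - (C[:, i] @ Ab)  / d * B[:, i] - d / d_prev * b_prev
        C[:, i + 1] = Atc - (B[:, i] @ Atc) / d * C[:, i] - d / d_prev * c_prev
        b_prev, c_prev, d_prev = B[:, i], C[:, i], d
    return B, C

rng = np.random.default_rng(1)
A  = rng.standard_normal((30, 30))
b1 = rng.standard_normal(30)
B, C = lanczos_biorth(A, b1, b1.copy(), 8)
D = C.T @ B
off = D - np.diag(np.diag(D))
print(np.abs(off).max() / np.abs(np.diag(D)).max())   # tiny: C^T B is numerically diagonal
```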

3.3 Conjugate Gradient

Conjugate gradient is a Krylov subspace method for solving $Ax = b$ where $A$ is symmetric (Hermitian if complex) and positive definite. It uses the Lanczos process for a Hermitian matrix and, at each step, minimizes the $A^{-1}$-norm of the residue, i.e., $r^T A^{-1} r$. Since $A$ is Hermitian and positive definite, $r^T A^{-1} r$ is a well-defined norm. Before proceeding, we first point out the relevant properties of the Lanczos process for Hermitian matrices.

If A∗ = A, then:

1. for Lanczos process if one chooses c1 = b1, then ci = bi ∀i

2. since ci = bi and bi and ci are biorthogonal, it follows that bi are themselves orthogonal.

3. since ci is A-orthogonal to bj , i < j − 1, it follows that bi is A-orthogonal to bj i < j − 1.

Now consider the choice of $x^{(i)} \in x^{(0)} + K_i\left(r^{(0)}, A\right)$, i.e.,

$$x^{(i)} = x^{(0)} + B_i y$$

where $B_i = \begin{bmatrix} b_1 & b_2 & \ldots & b_i \end{bmatrix}$, such that $\left(r^{(i)}\right)^T A^{-1} r^{(i)}$ is minimized. Consider

$$\left(r^{(i)}\right)^T A^{-1} r^{(i)} = \left(b - A x^{(i)}\right)^T A^{-1} \left(b - A x^{(i)}\right) = \left(r^{(0)} - A B_i y\right)^T A^{-1} \left(r^{(0)} - A B_i y\right)$$
$$= \left(r^{(0)}\right)^T A^{-1} r^{(0)} - 2 \left(r^{(0)}\right)^T B_i y + y^T B_i^T A B_i y$$

Setting the derivative of the above with respect to $y$ to zero we get

$$B_i^T A B_i y = B_i^T r^{(0)}$$

If in every iteration of conjugate gradient we needed to solve the above equation and then form $x^{(i)} = x^{(0)} + B_i y$, the storage and computational requirements could be excessive. However, as shown below, this is not needed and $x^{(i)}$ can be related to $x^{(i-1)}$ using a very simple recursion.

First note that the $b_i$ are orthogonal, therefore $B_i^T r^{(0)} = b_1^T b_1 e_1$ where $e_1$ is the first unit vector. Further, note that since $b_i$ is $A$-orthogonal to $b_j$, $i < j - 1$, $B_i^T A B_i$ is a symmetric tridiagonal matrix whose diagonal terms are $b_i^T A b_i$ and whose off-diagonal terms are $b_{i-1}^T A b_i = b_i^T b_i$. Let $T_i = B_i^T A B_i$. Then

$$x^{(i)} = x^{(0)} + B_i T_i^{-1} B_i^T r^{(0)}$$


Let $T_i$ be factored as $T_i = L_i D_i L_i^T$ where

$$L_i = \begin{bmatrix}
1 & 0 & \ldots & 0 \\
\mu_1 & 1 & \ldots & 0 \\
& \ddots & \ddots & \\
0 & \ldots & \mu_{i-1} & 1
\end{bmatrix}
\qquad
D_i = \text{diag}(d_1 \; d_2 \; \ldots \; d_i)$$

A direct comparison yields $d_1 = b_1^T A b_1$ and

$$\mu_{k-1} = \frac{b_k^T b_k}{d_{k-1}} \qquad d_k = b_k^T A b_k - b_k^T b_k \, \mu_{k-1}$$

Therefore we need only calculate

$$\mu_{i-1} = \frac{b_i^T b_i}{d_{i-1}} \qquad d_i = b_i^T A b_i - b_i^T b_i \, \mu_{i-1}$$

in order to obtain $L_i$ and $D_i$ from $L_{i-1}$ and $D_{i-1}$. Now consider

$$x^{(i)} = x^{(0)} + B_i T_i^{-1} B_i^T r^{(0)} = x^{(0)} + B_i L_i^{-T} D_i^{-1} L_i^{-1} B_i^T r^{(0)}$$

Let

$$G_i = B_i L_i^{-T} \;\Rightarrow\; G_i L_i^T = B_i \qquad\qquad p_i = D_i^{-1} L_i^{-1} B_i^T r^{(0)} \;\Rightarrow\; L_i D_i p_i = B_i^T r^{(0)}$$

Then

$$x^{(i)} = x^{(0)} + G_i p_i$$

Let $G_i = \begin{bmatrix} g_1 & g_2 & \ldots & g_i \end{bmatrix}$. Then

$$g_1 = b_1 \qquad \mu_{k-1} g_{k-1} + g_k = b_k$$

i.e., $g_i$ can be computed from $b_i$ and $g_{i-1}$ only:

$$g_i = b_i - \mu_{i-1} g_{i-1}$$

Also let

$$p_i = \begin{bmatrix} \rho_1 \\ \rho_2 \\ \vdots \\ \rho_i \end{bmatrix}$$

Then $L_i D_i p_i = B_i^T r^{(0)}$ becomes

$$
\begin{bmatrix}
L_{i-1} D_{i-1} & 0 \\
0 \;\ldots\; 0 \;\; \mu_{i-1} d_{i-1} & d_i
\end{bmatrix}
\begin{bmatrix} \rho_1 \\ \rho_2 \\ \vdots \\ \rho_{i-1} \\ \rho_i \end{bmatrix}
=
\begin{bmatrix} b_1^T b_1 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix}
$$


Since $L_{i-1} D_{i-1} p_{i-1} = B_{i-1}^T r^{(0)}$, it follows that

$$p_i = \begin{bmatrix} p_{i-1} \\ \rho_i \end{bmatrix} \qquad \text{where} \qquad \rho_i = -\frac{\mu_{i-1} d_{i-1} \rho_{i-1}}{d_i}$$

Therefore we can write

$$x^{(i)} = x^{(0)} + G_i p_i = x^{(0)} + G_{i-1} p_{i-1} + g_i \rho_i = x^{(i-1)} + g_i \rho_i$$

Therefore the conjugate gradient algorithm becomes

1. guess $x^{(0)}$ as the solution; let

$$r^{(0)} = b - A x^{(0)}$$
$$i = 1$$
$$b_1 = r^{(0)} \qquad b_0 = 0$$
$$d_1 = b_1^T A b_1$$
$$g_1 = b_1$$
$$\rho_1 = \frac{b_1^T b_1}{d_1}$$
$$x^{(1)} = x^{(0)} + \rho_1 g_1$$

2. while $\|r^{(i)}\| > \epsilon$

(a) $i \to i + 1$

(b) increase the dimension of the Krylov subspace by setting

$$b_i = A b_{i-1} - \frac{b_{i-1}^T A b_{i-1}}{b_{i-1}^T b_{i-1}} b_{i-1} - \frac{b_{i-1}^T b_{i-1}}{b_{i-2}^T b_{i-2}} b_{i-2}$$

(c) compute

$$\mu_{i-1} = \frac{b_i^T b_i}{d_{i-1}}$$
$$d_i = b_i^T A b_i - b_i^T b_i \, \mu_{i-1}$$
$$g_i = b_i - \mu_{i-1} g_{i-1}$$
$$\rho_i = -\frac{\mu_{i-1} d_{i-1} \rho_{i-1}}{d_i}$$
$$x^{(i)} = x^{(i-1)} + \rho_i g_i$$
$$r^{(i)} = r^{(i-1)} - \rho_i A g_i$$

Let $x^*$ be the exact solution, i.e., $A x^* = b$, and let $e^{(i)}$ be the error

$$e^{(i)} = x^{(i)} - x^*$$

It can be shown that

$$\left\| e^{(i)} \right\|_A \le 2 \left( \frac{\sqrt{k(A)} - 1}{\sqrt{k(A)} + 1} \right)^i \left\| e^{(0)} \right\|_A$$

where $k(A)$ is the condition number of $A$.
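Below is a compact conjugate gradient sketch in Python/NumPy following the recursion derived above; the variable names mirror the text, but the implementation itself is written for this note, not taken from it. The guard on the b_{i-2} term handles the first step, where b_0 = 0.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=200):
    """CG written to follow the b_i / mu / d / g / rho recursion in the text."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x
    b_prev2 = np.zeros_like(b)            # b_{i-2}
    b_prev  = r.copy()                    # b_1 = r^(0)
    d_prev  = b_prev @ (A @ b_prev)       # d_1
    g_prev  = b_prev.copy()               # g_1
    rho     = (b_prev @ b_prev) / d_prev  # rho_1
    x = x + rho * g_prev                  # x^(1)
    r = r - rho * (A @ g_prev)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Ab = A @ b_prev
        bi = Ab - (b_prev @ Ab) / (b_prev @ b_prev) * b_prev
        if b_prev2 @ b_prev2 > 0:         # skip the b_{i-2} term on the first step
            bi -= (b_prev @ b_prev) / (b_prev2 @ b_prev2) * b_prev2
        mu  = (bi @ bi) / d_prev
        d   = bi @ (A @ bi) - (bi @ bi) * mu
        g   = bi - mu * g_prev
        rho = -mu * d_prev * rho / d
        x   = x + rho * g
        r   = r - rho * (A @ g)
        b_prev2, b_prev, d_prev, g_prev = b_prev, bi, d, g
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((40, 40))
A = M @ M.T + 40 * np.eye(40)             # symmetric positive definite test matrix
b = rng.standard_normal(40)
x = conjugate_gradient(A, b, np.zeros(40))
print(np.linalg.norm(A @ x - b))          # small residual
```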


3.4 MINRES

When the matrix is symmetric but indefinite, the Lanczos process can still be used to generate the Krylov subspace. However, $x^T A x$ is no longer a valid norm and therefore cannot be used for minimization. The MINRES method overcomes this limitation by minimizing the $L_2$ norm of the residue. Eliminating the search direction $g_i$ from the conjugate gradient equations we get

$$A r^{(i)} = r^{(i+1)} t_{i+1,i} + r^{(i)} t_{i,i} + r^{(i-1)} t_{i-1,i}$$

Recall that the above equation is indeed the Lanczos process for a symmetric matrix. This can be written in matrix form as

$$A R_i = R_{i+1} \bar{T}_i$$

where

$$R_i = \begin{bmatrix} r^{(0)} & r^{(1)} & \ldots & r^{(i-1)} \end{bmatrix}$$

and $\bar{T}_i$ is an $(i+1) \times i$ tridiagonal matrix (its only nonzero entries lie on the main diagonal, the first subdiagonal and the first superdiagonal).

Since $x^T A x$ is no longer a valid norm, we minimize the residue in the $L_2$ norm. First choose

$$x^{(i)} \in x^{(0)} + K_i\left(r^{(0)}, A\right) = x^{(0)} + \text{span}\left\{ r^{(0)}, r^{(1)}, \ldots, r^{(i-1)} \right\}$$

i.e.,

$$x^{(i)} = x^{(0)} + R_i y$$

such that $\left\| A x^{(i)} - b \right\|_2$ is minimized. Note that

$$\left\| A x^{(i)} - b \right\|_2 = \left\| A R_i y - \left( b - A x^{(0)} \right) \right\|_2 = \left\| R_{i+1} \bar{T}_i y - r^{(0)} \right\|_2$$

Let

$$D_{i+1} = \text{diag}\left( \left\| r^{(0)} \right\|, \left\| r^{(1)} \right\|, \ldots, \left\| r^{(i)} \right\| \right)$$

Then $R_{i+1} D_{i+1}^{-1}$ is an orthonormal transformation with respect to the current Krylov subspace, and

$$\left\| A x^{(i)} - b \right\|_2 = \left\| R_{i+1} \bar{T}_i y - r^{(0)} \right\|_2 = \left\| D_{i+1} \bar{T}_i y - \left\| r^{(0)} \right\|_2 e_1 \right\|_2$$

where $e_1$ is the first unit vector. This final expression can be seen as a minimum norm least squares problem. The element in the $(i+1, i)$ position can be removed by a Givens rotation and the resulting bidiagonal system can be easily solved. This method is known as the Minimum Residual (MINRES) method.


3.5 GMRES

When the matrix is not symmetric, the Krylov subspace cannot be implicitly formed by the residues; it needs to be formed explicitly. One option is to use the Arnoldi process to form the Krylov subspace. One can then use the exact same minimization procedure as above, i.e., choose

$$x^{(i)} \in x^{(0)} + K_i\left(r^{(0)}, A\right) = x^{(0)} + \text{span}\left\{ r^{(0)}, r^{(1)}, \ldots, r^{(i-1)} \right\}$$

i.e.,

$$x^{(i)} = x^{(0)} + R_i y$$

such that $\left\| A x^{(i)} - b \right\|_2$ is minimized. Again

$$A R_i = R_{i+1} \bar{H}_i$$

but $\bar{H}_i$ is an upper Hessenberg matrix instead of a simple tridiagonal matrix. Therefore the computational complexity of this algorithm is quadratic in the number of iterations. This method is known as the Generalized Minimum Residual (GMRES) method.

3.6 QMR

Instead of using the (expensive) Arnoldi process to generate the Krylov subspace, one can use the Lanczos process for this purpose. Using a least squares minimization similar to the one above yields the so-called Quasi-Minimum Residual (QMR) method. In order to prevent breakdown in the underlying Lanczos process, look-ahead Lanczos can be used instead.

3.7 Preconditioning

Recall that the convergence rate of a Krylov subspace method strongly depends on the condition number or spectral properties of the coefficient matrix. Therefore one may speed up convergence by transforming the original system into another one which has the same solution but more favourable spectral properties or condition number. This process is called preconditioning. For instance, if the matrix $M$ approximates the coefficient matrix in some way, then the transformed system

$$M^{-1} A x = M^{-1} b$$

has the same solution as the original system, but the spectral properties of $M^{-1} A$ may be more favourable. The successful use of Krylov subspace methods in most situations critically hinges on the ability to form an appropriate preconditioner. Obviously, solving $M x = y$ should be easy.
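As a small illustration of the idea (an assumption-laden sketch added here, not something prescribed by the text), the snippet below applies a Jacobi (diagonal) preconditioner M = diag(A) to a nearly-diagonal test system and compares the condition numbers of A and M^{-1}A; SciPy's gmres is used only to show how a preconditioner is passed to an off-the-shelf Krylov solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(3)
n = 200
# Test matrix: a small random perturbation of a diagonal matrix with entries spanning 10 .. 1e4.
A = 0.1 * rng.standard_normal((n, n)) + np.diag(np.logspace(1, 4, n))
b = rng.standard_normal(n)
d = np.diag(A)

# Jacobi preconditioner M = diag(A): applying M^{-1} is just an elementwise division.
print(np.linalg.cond(A), np.linalg.cond(A / d[:, None]))   # cond(M^{-1}A) << cond(A)

M = LinearOperator((n, n), matvec=lambda y: y / d)
x, info = gmres(A, b, M=M)          # pass the preconditioner to an off-the-shelf solver
print(info == 0, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```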


Chapter 4

Solution of Nonlinear Equations

4.1 Introduction

Recall that a linear system of equations $Ax = b$ either has exactly one solution (when $A$ is nonsingular) or has an entire continuum of solutions (when $A$ is rank deficient). This is not the case if the equations are nonlinear. Consider the following circuit.

[Figure: voltage source E in series with a 1 Ω resistor driving a tunnel diode with voltage v and current i]

The diode shown above is a tunnel diode whose current is related to the diode voltage by the following nonlinear relation

$$i = f(v) = 17.76 v - 103.79 v^2 + 229.62 v^3 - 226.31 v^4 + 83.72 v^5$$

Let us investigate the solutions of this circuit when the source voltage $E$ is increased from 0. The figure below plots the diode current $i = f(v)$ and the voltage source current $i = (E - v)/1$ (also known as the load line). The intersection point(s) of these curves are the solutions of the circuit.

[Figure: plot of the diode characteristic i = f(v) together with load lines for several values of E; the axes are v (0 to 1 V) and i (0 to 2 A)]


In the range 0 ≤ E < 0.6 and E > 1.15, the circuit has one solution. In the range 0.6 < E < 1.15, there are three distinct solutions. At E = 0.6 and E = 1.15, there are two solutions, but one of the solutions is "degenerate" or "non-isolated". Consider the E = 0.6 load line and the solution close to v = 0.47. If E is increased slightly, two solutions appear, and if E is decreased slightly, both solutions disappear. Such points are known as bifurcation points, and systems with such devices can be chaotic.
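Because the diode characteristic here is a polynomial, the DC solutions for a given E can be checked directly by finding the real roots of f(v) − (E − v) = 0. The short NumPy experiment below (an illustration added to these notes) reproduces the one-solution / three-solution behaviour described above.

```python
import numpy as np

# f(v) = 17.76 v - 103.79 v^2 + 229.62 v^3 - 226.31 v^4 + 83.72 v^5
f_coeffs = np.array([83.72, -226.31, 229.62, -103.79, 17.76, 0.0])   # highest power first

def dc_solutions(E, R=1.0):
    """Real intersections of the diode curve with the load line i = (E - v)/R."""
    # f(v) - (E - v)/R = 0  ->  add 1/R to the linear term and subtract E/R from the constant
    p = f_coeffs.copy()
    p[-2] += 1.0 / R
    p[-1] -= E / R
    roots = np.roots(p)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and -0.1 < r.real < 1.1)

for E in (0.4, 0.8, 1.3):
    print(E, dc_solutions(E))
# E = 0.4 and E = 1.3 give a single operating point, E = 0.8 gives three.
```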

4.2 Solution Methods

Solve

$$f(x) = 0 \qquad f : \mathbb{R}^n \to \mathbb{R}^n$$

Generalize to: solve

$$f(x, \lambda) = 0 \qquad f : \mathbb{R}^n \to \mathbb{R}^n \qquad \lambda \in \mathbb{R}$$

Iterative method:

• start from an initial guess $x^{(0)}$

• generate successive approximations $x^{(1)}, x^{(2)}, \ldots$ to the solution $x^*$ using an iteration function

$$x^{(i+1)} = \phi\left( x^{(i)} \right)$$

• stop when $\left\| x^{(i+1)} - x^{(i)} \right\| \le \epsilon_1$ and $\left\| f\left( x^{(i+1)} \right) \right\| \le \epsilon_2$

If $\xi$ is a fixed point, i.e., $\xi = \phi(\xi)$, if all fixed points are zeros of $f$, and if $\phi$ is continuous in the neighbourhood of each fixed point, then the limit point of the sequence $x^{(i)}$ is a fixed point of $\phi$ and hence a zero of $f$.

4.2.1 Contraction Mapping Theorem

Suppose $\phi : D \subset \mathbb{R}^n \to \mathbb{R}^n$ maps a closed set $D_0 \subset D$ into itself and

$$\|\phi(x) - \phi(y)\| \le \alpha \|x - y\| \qquad \forall x, y \in D_0$$

for some $0 < \alpha < 1$. Then $\phi$ has a unique fixed point $x^* \in D_0$ and the sequence

$$x^{(k+1)} = \phi\left( x^{(k)} \right)$$

converges to $x^*$ for all $x^{(0)} \in D_0$.

Theorem: Let $J(x) = \frac{\partial \phi}{\partial x}$ exist in a set

$$D_0 = \{ x \mid \|x - x^*\| < \rho \}$$

Then if $\|J(x)\| \le m < 1$ for all $x \in D_0$, the iteration converges to $x^*$ for all $x^{(0)} \in D_0$.

Proof:

$$\left\| x^* - x^{(k+1)} \right\| = \left\| \phi(x^*) - \phi\left( x^{(k)} \right) \right\| \le \alpha \left\| x^* - x^{(k)} \right\|$$


4.3 Newton Raphson (scalar)

If $\xi$ is a zero of $f(x) : \mathbb{R} \to \mathbb{R}$ and $f$ is sufficiently differentiable in a neighbourhood $N$ of $\xi$, then forming the Taylor series expansion about $x^{(0)}$

$$f(\xi) = 0 = f\left( x^{(0)} \right) + \left( \xi - x^{(0)} \right) f'\left( x^{(0)} \right) + \frac{1}{2!} \left( \xi - x^{(0)} \right)^2 f''\left( x^{(0)} \right) + \ldots$$

$$0 \approx f\left( x^{(0)} \right) + \left( \xi - x^{(0)} \right) f'\left( x^{(0)} \right)$$

$$\xi = x^{(0)} - \frac{f\left( x^{(0)} \right)}{f'\left( x^{(0)} \right)}$$

Therefore the iteration function $\phi$ is

$$\phi(x) = x - \frac{f(x)}{f'(x)}$$

The progression of Newton-Raphson is illustrated below.

[Figure: successive Newton-Raphson iterates x^(0), x^(1), x^(2) on the curve f(x), approaching the root x*]

In the vector case,

$$f(x) \approx f\left( x^{(k)} \right) + \frac{\partial f\left( x^{(k)} \right)}{\partial x} \left( x - x^{(k)} \right)$$

Here $\frac{\partial f}{\partial x}$ is the Jacobian of $f$, denoted by $J(x)$:

$$J_{ij} = \frac{\partial f_i}{\partial x_j}$$

Newton iteration:

$$x^{(k+1)} = x^{(k)} - J^{-1}\left( x^{(k)} \right) f\left( x^{(k)} \right)$$

Solve

$$J\left( x^{(k)} \right) \Delta x^{(k+1)} = -f\left( x^{(k)} \right) \qquad x^{(k+1)} = x^{(k)} + \Delta x^{(k+1)}$$

or

$$J\left( x^{(k)} \right) x^{(k+1)} = J\left( x^{(k)} \right) x^{(k)} - f\left( x^{(k)} \right)$$
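A minimal multivariate Newton-Raphson sketch in Python/NumPy (written for illustration; the convergence tests follow the two criteria listed in Section 4.2, with tolerances chosen arbitrarily here):

```python
import numpy as np

def newton_raphson(f, J, x0, eps1=1e-10, eps2=1e-10, max_iter=50):
    """Solve f(x) = 0 by Newton-Raphson: at each step solve J(x) dx = -f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        dx = np.linalg.solve(J(x), -fx)
        x = x + dx
        if np.linalg.norm(dx) <= eps1 and np.linalg.norm(f(x)) <= eps2:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Example: intersection of a circle and a parabola.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])
J = lambda x: np.array([[2*x[0], 2*x[1]], [-2*x[0], 1.0]])
print(newton_raphson(f, J, [1.0, 1.0]))   # converges to (~1.25, ~1.56)
```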

4.3.1 Newton Raphson Convergence

An iteration $\left\{ x^{(k)} \right\}$ is said to converge with order $q$ if there exists a vector norm such that $\forall k$

$$\left\| x^{(k+1)} - x^* \right\| \le \alpha \left\| x^{(k)} - x^* \right\|^q$$

Theorem: If

• $J(x)$ is Lipschitz continuous, i.e., $\|J(x) - J(\tilde{x})\| \le l_0 \|x - \tilde{x}\| \; \forall x, \tilde{x}$

• $J(x^*)$ is nonsingular

then

1. $x^{(k)} \to x^*$ provided $x^{(0)}$ is sufficiently close to $x^*$

2. $\left\| x^{(k+1)} - x^* \right\| \le C \left\| x^{(k)} - x^* \right\|^2$

Newton-Raphson is not a fool-proof method and can run into convergence problems. Consider the following function.

[Figure: a function f(x) on which NR keeps oscillating between two points without ever converging to the right solution]

Similarly, consider the case when the derivative is incorrectly computed, as shown below,

[Figure: NR iterates computed with an incorrect derivative (wrong tangent slope)]

or when the function is discontinuous, as shown below.

[Figure: a discontinuous function f(x)]

In all these cases, NR will not converge to the solution. These observations have some important implications:

1. Device model equations must be continuous, with continuous derivatives.

2. Derivative calculations must be accurate.

3. Nodes must not be left floating (otherwise $J$ is singular).

4. Good initial guesses should be given for $x^{(0)}$.

5. Most model computations produce errors in function values and derivatives. We want a convergence criterion $\left\| x^{(k+1)} - x^{(k)} \right\| < \epsilon$ with $\epsilon$ larger than the model errors/numerical precision.

4.4 Nonlinear Circuit Elements

[Figure: a nonlinear element (diode) with voltage v and current i, and its linearized companion model: conductance G in parallel with current source I]


$$i = f(v) = I_S \left( \exp\left( \frac{v}{V_T} \right) - 1 \right) \qquad \frac{df}{dv} = \frac{I_S}{V_T} \exp\left( \frac{v}{V_T} \right)$$

Newton:

$$i^{(k+1)} = f\left( v^{(k)} \right) + \left. \frac{df}{dv} \right|_{v^{(k)}} \left( v^{(k+1)} - v^{(k)} \right)
= \underbrace{\left[ f\left( v^{(k)} \right) - \left. \frac{df}{dv} \right|_{v^{(k)}} v^{(k)} \right]}_{I^{(k)}}
+ \underbrace{\left. \frac{df}{dv} \right|_{v^{(k)}}}_{G^{(k)}} \, v^{(k+1)}$$

Therefore the diode can be replaced by a conductance of value $G^{(k)}$ and a current source $I^{(k)}$ at every iteration.

Exponential nonlinearities such as diodes and BJTs present a special challenge to NR. Consider the following circuit.

[Figure: a current source I_O in parallel with a resistor R driving a diode; the load line intersects the diode curve f(v), and the first Newton iterate from v^(0) = 0 lands near v = R·I_O, where f(v^(1)) is effectively infinite]

If NR is started from 0, the very next iterate can be so large that the current is effectively infinite. In such cases, for better numerical behaviour, the current at the next iterate is calculated from the tangent to the I-V curve at $v^{(0)}$, as shown below.

[Figure: the current at the next iterate v^(1) is taken from the tangent at v^(0); the corresponding voltage v_lim is read back from the diode I-V curve]

Once the current at the next iterate is computed, the corresponding voltage $v_{\text{lim}}$ is calculated from the original I-V curve and the Newton step is "limited" such that the new voltage is $v_{\text{lim}}$.
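The sketch below illustrates the companion-model iteration for the circuit above (current source I_O in parallel with conductance 1/R and a diode), together with a simple junction-voltage limiting rule. The specific limiting formula used here (damping large forward steps logarithmically once they exceed a few V_T) is one common heuristic, written as an assumption for this example rather than the particular scheme described in the text; its role is to keep the iterates on the physically meaningful part of the exponential and speed convergence.

```python
import numpy as np

IS, VT = 1e-14, 0.0259       # diode saturation current and thermal voltage
IO, R  = 1e-3, 1e3           # Norton source driving the diode

def f(v):  return IS * (np.exp(v / VT) - 1.0)
def df(v): return IS / VT * np.exp(v / VT)

def limit_vd(v_new, v_old, vcrit=10 * VT):
    """Heuristic junction-voltage limiting: damp large forward steps logarithmically."""
    if v_new - v_old > vcrit:
        return v_old + VT * np.log1p((v_new - v_old) / VT)
    return v_new

v = 0.0
for k in range(100):
    G, I = df(v), f(v) - df(v) * v        # companion model: i = G v + I
    # one node, KCL: v/R + G v + I = IO  ->  solve the linearized circuit
    v_new = (IO - I) / (G + 1.0 / R)
    v_new = limit_vd(v_new, v)            # without limiting, the first step jumps to ~R*IO = 1 V
    if abs(v_new - v) < 1e-12:
        break
    v = v_new

print(k, v, f(v) + v / R - IO)            # converged operating point, KCL residual ~ 0
```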

4.5 Globally Convergent Newton’s Method

4.5.1 General Idea

Recall that the Newton step for the set of equations

$$f(x) = 0 \tag{4.1}$$

(where $f, x \in \mathbb{R}^{n \times 1}$) is

$$x^{(k+1)} = x^{(k)} + \Delta x^{(k)}$$

where

$$\Delta x^{(k)} = -\left[ J\left( x^{(k)} \right) \right]^{-1} f\left( x^{(k)} \right)$$


where $J\left( x^{(k)} \right)$ is the Jacobian matrix. A reasonable strategy to use when deciding whether to accept the Newton step $\Delta x^{(k)}$ is to require that the step decrease $|f|^2 = f^T f$. Let

$$g = \frac{1}{2} f^T f$$

Note that the Newton step is a descent direction for $g$:

$$\nabla g^T \Delta x = (f^T J) \cdot (-J^{-1} f) = -f^T f < 0$$

Therefore, for a globally convergent Newton, the following strategy will be used: we always first try the full Newton step, so that we get quadratic convergence when close to the solution. However, at each iteration we check whether the proposed step reduces $g$. If not, we backtrack along the Newton direction until we find an acceptable step. Because the Newton step is a descent direction for $g$, we are guaranteed to find an acceptable step by backtracking.

4.5.2 Backtracking

The modified Newton step now becomes

$$x^{(k+1)} = x^{(k)} + \lambda \Delta x^{(k)} \qquad 0 < \lambda \le 1$$

The aim is to find $\lambda$ such that $g\left( x^{(k)} + \lambda \Delta x^{(k)} \right)$ has decreased sufficiently. Note that it is not sufficient to require merely that $g\left( x^{(k+1)} \right) < g\left( x^{(k)} \right)$. This criterion can fail to converge to a minimum of $g$ in one of the following two ways:

1. It is possible to construct a sequence of steps satisfying the criterion with $g$ decreasing too slowly relative to the step lengths.

2. One can have a sequence where the step lengths are too small relative to the initial rate of decrease of $g$.

A simple way to fix the first problem is to require that the average rate of decrease of $g$ be at least some fraction $\alpha$ of the initial rate of decrease $\nabla g^T \Delta x$, i.e.,

$$g\left( x^{(k+1)} \right) \le g\left( x^{(k)} \right) + \alpha \, \nabla g^T \left( x^{(k+1)} - x^{(k)} \right)$$

where $0 \le \alpha \le 1$. We can get away with quite small values of $\alpha$; $\alpha = 10^{-4}$ is a good choice.

The second problem can be addressed by requiring that the rate of decrease of $g$ at $x^{(k+1)}$ be greater than some fraction $\beta$ of the rate of decrease of $g$ at $x^{(k)}$. In practice, this is automatically ensured because the backtracking algorithm has a built-in cutoff to avoid taking steps that are too small.

Define

$$h(\lambda) \equiv g\left( x^{(k)} + \lambda \Delta x^{(k)} \right)$$

so that

$$h'(\lambda) = \frac{dh(\lambda)}{d\lambda} = \nabla g^T \Delta x$$

If we need to backtrack, then we model $h$ with the most current information we have and choose $\lambda$ to minimize the model. We start with $h(0)$ and $h'(0)$ available. The first step is always the Newton step, $\lambda = 1$. If this step is not acceptable, we have $h(1)$ available as well. We can therefore model $h(\lambda)$ as a quadratic

$$h(\lambda) \approx \left[ h(1) - h(0) - h'(0) \right] \lambda^2 + h'(0) \lambda + h(0)$$

Taking the derivative of this quadratic, we find that it is minimized when

$$\lambda = -\frac{h'(0)}{2 \left[ h(1) - h(0) - h'(0) \right]}$$

Since the Newton step (i.e., $\lambda = 1$) failed, we can show that $\lambda \lesssim \frac{1}{2}$ for small $\alpha$. We need to guard against too small a value of $\lambda$, however; we set $\lambda_{\min} = 0.1$.


If subsequent backtracks are required, we model h as a cubic in λ, using the most recent value h(λ1) and the second most recent value h(λ2):

h(λ) = aλ³ + bλ² + h′(0)λ + h(0)

where a and b are chosen such that the cubic gives the correct values of h at λ1 and λ2, i.e.,

[ a ]        1      [  1/λ1²    −1/λ2²  ] [ h(λ1) − h′(0)λ1 − h(0) ]
[ b ]  =  ———————   [ −λ2/λ1²    λ1/λ2² ] [ h(λ2) − h′(0)λ2 − h(0) ]
          λ1 − λ2

The minimum of the cubic is at

λ = ( −b + √(b² − 3a h′(0)) ) / (3a)

We enforce that λ lie between λ_max = 0.5 λ1 and λ_min = 0.1 λ1.
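The following Python sketch (not from the original text; the test function and all constants are illustrative) implements the damped Newton iteration with the quadratic backtracking model described above.

```python
import numpy as np

def damped_newton(f, jac, x0, alpha=1e-4, tol=1e-10, max_iter=50):
    """Globally convergent Newton: full step first, backtrack on g = 0.5 f^T f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(jac(x), -fx)        # Newton direction
        g0 = 0.5 * fx @ fx
        slope = -2.0 * g0                        # grad(g)^T dx = -f^T f
        lam = 1.0
        for _ in range(30):                      # backtrack at most 30 times
            x_new = x + lam * dx
            g_new = 0.5 * f(x_new) @ f(x_new)
            if g_new <= g0 + alpha * lam * slope:
                break                            # sufficient decrease achieved
            # quadratic model of h(lam); keep lam in [0.1, 0.5] of its old value
            lam_q = -slope * lam**2 / (2.0 * (g_new - g0 - slope * lam))
            lam = min(max(lam_q, 0.1 * lam), 0.5 * lam)
        x = x_new
    return x

# illustrative test system: f(x) = [x0^2 + x1^2 - 4, x0 - x1]
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(damped_newton(f, J, [3.0, -1.0]))          # converges to (sqrt(2), sqrt(2))
```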

4.5.3 Exact Line Search

Another idea is to use a line search to exactly minimize g. At each step k,

x^{(k+1)} = x^{(k)} + λ∆x^{(k)}

We want to find λ such that g(x^{(k+1)}) is minimized. We will further limit λ to

0 < λ ≤ γ ‖J(x^{(k)} + λ∆x^{(k)})‖

for some γ. The intuition for the upper limit is that if the slope of the function is small at the current iterate, Newton-Raphson will take a large step and we need to limit it more. We can choose λ in many ways. One possible method is the following. Recall that g is minimized when dg/dλ = 0, i.e.,

0 = ( dg/dx |_{x^{(k)}+λ∆x^{(k)}} )^T ∆x^{(k)} = f^T(x^{(k)} + λ∆x^{(k)}) J(x^{(k)} + λ∆x^{(k)}) ∆x^{(k)}

A binary search algorithm can be used to estimate the value of λ. Note that this method requires the evaluation of the Jacobian at each binary search step and is therefore expensive compared to the previous method. However, no matrix inversion is required. In general the previous method is preferred, but if the line search takes a lot of iterations, or Newton-Raphson and the line search start opposing each other, this exact line search can be invoked.

4.6 Continuation Methods and Homotopies

Sometimes, using the above line search method also turns out to be insufficient. Continuation methods are usedin such cases to obtain a solution.

4.6.1 Continuation Methods: Theory

Consider the following generalization of (4.1)

f(x, λ) = 0,   f, x ∈ R^n,   λ ∈ R^p                                        (4.2)

We can view the solution process of (4.1) as a particular case of the solution of (4.2) for a given λ. Now consider the problem of obtaining the family of solutions x as λ is varied over a range, as shown in Figure 4.1.

In many circuits, the solution manifold may fold around itself. Consider the point λ_SNB. For λ > λ_SNB there are no solutions and for λ < λ_SNB there are two solutions. Therefore there is no neighbourhood of λ_SNB in which a unique solution exists. Such points are called saddle node bifurcation (SNB) points.

A somewhat naïve method of obtaining this manifold is to solve for a sequence of λ values in the given range; the set of solutions then describes the manifold. However, NR may not converge at many of the points and the computational complexity can be prohibitive. We will consider some alternatives below.

Page 36: Numerical Analysis of Electronic Circuits by Mehrotra 2002

7/21/2019 Numerical Analysis of Electronic Circuits by Mehrotra 2002

http://slidepdf.com/reader/full/numerical-analysis-of-electronic-circuits-by-mehrotra-2002 36/86

36 CHAPTER 4. SOLUTION OF NONLINEAR EQUATIONS

[Figure 4.1: Solution manifold for variations in the parameter λ (x versus λ, with the fold at λ_SNB)]

[Figure 4.2: Source stepping]

First we will specialize the problem somewhat as follows: solve (4.2) between λ1 and λ2, i.e., along the straight line

λ(µ) = µλ1 + (1 − µ)λ2 µ ∈ R 0 ≤ µ ≤ 1

A more pertinent form of the above problem (in the context of DC analysis of circuit simulation) is, given (x1, λ1)

such that f (x1, λ1) = 0, find x2 given λ2 such that f (x2, λ2) = 0. Typically λ1 is chosen such that x1 is the trivialsolution x1 = 0. It is also assumed that at λ2, straight NR does not yield a solution because the initial guess is veryfar from the solution.

The general algorithm for obtaining x2 is the following:

1. let µi = 0 and µf = 1 and the known solution at µi be x(λ(µi))

2. use the solution at µi to predict the solution at µf (predictor step)

3. use the predicted value of the solution at µf to obtain the actual solution at µf (corrector step)

4. (a) if the corrector step failed, set µf = 0.5(µi + µf ) and repeat

(b) if the corrector step succeeded

i. if µf = 1, then done
ii. else set µi = µf , µf = 1 and repeat

Now consider various alternatives of predictors and correctors

• NR as the corrector and solution at µi as the predictor. This is called source stepping in context of circuitsimulation. This scheme is graphically depicted in Figure 4.2. The disadvantage of this method is that thestep size is small compared to some of the other methods and so the computational complexity is large. Theother major disadvantage is that if the solution manifold “folds” around itself (for instance around λSNB inFigure 4.2), this technique cannot follow the solution as λ varies.

Page 37: Numerical Analysis of Electronic Circuits by Mehrotra 2002

7/21/2019 Numerical Analysis of Electronic Circuits by Mehrotra 2002

http://slidepdf.com/reader/full/numerical-analysis-of-electronic-circuits-by-mehrotra-2002 37/86

4.6. CONTINUATION METHODS AND HOMOTOPIES 37

[Figure 4.3: Tangent following continuation method]

• The second idea is to use NR as the corrector but use a predictor based on a linear approximation of the manifold at the previous solution, as shown in Figure 4.3. Differentiating (4.2) we have

  df = 0 = (∂f/∂x) dx + (∂f/∂λ) dλ   ⇒   dx = −(∂f/∂x)^{-1} (∂f/∂λ) dλ

  Therefore the predicted value of x(µf) is given by

  x_predicted(µf) = x(µi) − [ ∂f/∂x (x(µi), λ(µi)) ]^{-1} [ ∂f/∂λ (x(µi), λ(µi)) ] ( λ(µf) − λ(µi) )

  Since NR is used at the corrector step and ∂f/∂x is the Jacobian of the system, the LU factors of ∂f/∂x are already available at µi. Therefore this computation is very inexpensive.

  The advantage of this method over the previous one is that the λ step size is much larger and therefore the computational complexity is much smaller. However, this method fails to follow the manifold around λ_SNB, because ∂f/∂x is singular at λ_SNB and ill-conditioned around it. There are two ways around this problem.

  Reparameterization  Instead of fixing a value of λ, one component of x can be fixed and solved for. Let xi be the component that is fixed. Define

  y = [ x1 … x_{i−1}  x_{i+1} … xn  λ ]^T

  Then

  df = 0 = (∂f/∂y) dy + (∂f/∂xi) dxi   ⇒   dy = −(∂f/∂y)^{-1} (∂f/∂xi) dxi

  It can be shown that this matrix is generally nonsingular. The only drawback is that the Jacobian is different from the Jacobian in NR and needs to be refactorized. The method is shown in Figure 4.4.

  Euler Homotopy (EH)  This can be used to follow the folding of the manifold around λ_SNB. This method uses the unit tangent vector v as the predictor, but moves a specified distance τ along v rather than a specified distance along the λ axis (as in the previous methods). Let

  z = [ x  λ ]^T

  Recall that the tangent vector v satisfies

  (∂f/∂z) v = 0


[Figure 4.4: Reparameterization around the SNB point]

[Figure 4.5: Euler Homotopy]

  A practical way of computing v is to write the Jacobian ∂f/∂z in terms of its components, i.e.,

  [ ∂f/∂x   ∂f/∂λ ] [ vx ]
                    [ vλ ]   = 0

  Rearranging the above equation we have

  vx = −(∂f/∂x)^{-1} (∂f/∂λ) vλ

  Therefore, to compute v, choose an arbitrary vλ, compute vx and then normalize v such that v^T v = 1. This method does not work around λ_SNB, where the Jacobian is ill-conditioned. In such cases, arbitrarily choose v_{xj} = 1 for some j and proceed.

  Now if the starting point is

  zi = [ xi  λi ]^T

  then the predictor point is

  zp = zi + τ v

  The corrector uses the hyperplane passing through zp and normal to v, given by

  (z − zp)^T v = 0   ⇒   (z − zi)^T v = τ

  The corrector finds the intersection of the solution manifold and this hyperplane:

  f(z) = 0
  (z − zi)^T v − τ = 0

  This new set of nonlinear equations (in n + 1 unknowns) can be solved using NR. The Jacobian for the above system is

  [ ∂f/∂x   ∂f/∂λ ]
  [ vx^T    vλ^T  ]


Around tight corners, step size τ may need to be reduced. This approach does not solve directly for a“final” solution corresponding to λ2 though it can be used for that purpose. It is mainly used for pathfollowing. The method is shown in Figure 4.5.

4.6.2 Saddle Node Bifurcation Point

With the continuation methods, one can determine the approximate location of the saddle node bifurcation point (x_SNB, λ_SNB). However, a more accurate determination is also possible. Recall that the saddle node bifurcation point has to satisfy the nonlinear equation, i.e.,

f(x_SNB, λ_SNB) = 0

Furthermore, the Jacobian of f with respect to x is singular at that point, i.e., ∃ v ≠ 0 such that

J(x_SNB, λ_SNB) v = 0

where as before

J = ∂f/∂x

If v satisfies the above equation then αv satisfies it for any α ≠ 0. Therefore, v ≠ 0 can be enforced by insisting that

v^T v = 1

Therefore the set of equations to be solved is

f(x, λ) = 0
J(x, λ) v = 0
v^T v − 1 = 0

The Jacobian for this system of equations is

[ ∂f/∂x          ∂f/∂λ           0     ]
[ (∂²f/∂x²) v    (∂²f/∂x∂λ) v    ∂f/∂x ]
[ 0              0               2v^T  ]

The obvious disadvantage of this formulation is that the device equations need to be twice differentiable and the device models need to supply the entries of the Hessian ∂²f/∂x². Note that ∂²f/∂x∂λ is still a matrix, since λ ∈ R. The approximate values of the bifurcation points obtained from the continuation curves can be used as initial guesses for the above system. The vector v is the unit tangent vector to the manifold at the saddle node bifurcation point.

4.6.3 Circuit Implementation

The circuit equations can be written as

f(x, λ) = g(x) + λb = 0

where b is a vector of (all or some of) the independent voltage and current sources. For λ = 0, the solution of this equation is x = 0 if all the independent sources are included in b, and we are interested in the solution at λ = 1. Here,

∂f/∂x = ∂g/∂x

is the circuit Jacobian obtained by running NR, and ∂f/∂λ = b. Since we are interested in obtaining the solution for λ = 1 and not as much in describing the entire curve (as opposed to a DC sweep, where we are interested in the entire curve), we use the tangent-predictor continuation for the most part, except for points close to the SNB where we use EH. We can


detect whether we are close to the SNB by monitoring the condition number of the circuit Jacobian ∂f/∂x. When using EH, one might be tempted to view the Jacobian as a 2 × 2 block matrix and use block LU decomposition to factor it. This would save the explicit formation of an (n + 1) × (n + 1) sparse Jacobian. However, since EH is to be used close to the SNB point, where the Jacobian ∂f/∂x is ill-conditioned and vλ ≈ 0, this natural block decomposition is not very well suited for our purpose. Therefore, one needs to form the Jacobian explicitly.


Chapter 5

Numerical Integration of Differential Equations

Consider the following circuit

[Figure: a current source I_S driving the parallel combination of a resistor R, a diode, and a nonlinear capacitor with charge q(v) = q_0 (exp(v/V_T) − 1)]

The circuit equations are

I_S = v/R + I_0 ( exp(v/V_T) − 1 ) + dq/dt
q(v) = q_0 ( exp(v/V_T) − 1 )                                               (5.1)

These are differential equations and their solution is a function (of time) rather than just a number.

5.1 Solution of Differential Equations

In its most general form, the problem of solving a first-order differential equation is as follows. Given a function

F : R^n × R^n × R → R^n

and initial values

x_0 ∈ R^n,   t_0 ∈ R

find a vector-valued function

x(t) ∈ R^n,   t ≥ t_0

such that

F(ẋ(t), x(t), t) = 0,   t ≥ t_0
x(t_0) = x_0

Using a first-order differential equation is not a limitation, since any higher-order differential equation can be reformulated as a first-order differential equation. Since x(t_0) is given, the above system is called an initial value problem. An initial value problem may have no solution or it may have multiple solutions.

An important special case is a differential equation for which the function F is such that

F(ẋ(t), x(t), t) = f(x, t) − ẋ


In this case, the initial value problem can be written in the following explicit form:

ẋ = f(x, t)
x(t_0) = x_0

Theorem  Let f(x, t) be continuous for all x ∈ R^n and all t ≥ t_0 and, in addition, let f be Lipschitz continuous with respect to x, i.e., ∃L (independent of x and t) such that

‖f(x, t) − f(y, t)‖ ≤ L ‖x − y‖

∀x, y ∈ R^n and all t ≥ t_0. Then for any x_0 ∈ R^n there exists a unique function

x(t),   t ≥ t_0

such that

ẋ = f(x, t)
x(t_0) = x_0

5.2 Linear Multistep Methods

One way to solve (5.1) is to think of dq/dt as a new variable. Then (5.1) has three variables v, q and dq/dt and only two equations, so we need one more relationship between dq/dt and the rest of the variables. This is provided by the so-called Linear Multistep (LMS) Methods. Let

dx/dt = f(x)

Then a k-step linear multistep method is defined as

0 = Σ_{i=0}^{k} α_i x_{n−i} + h_n Σ_{i=0}^{k} β_i ẋ_{n−i}

Here

x_{n−i} ≡ x(t_{n−i}),   ẋ_{n−i} ≡ (dx/dt)(t_{n−i}),   t_n = t_{n−1} + h_n

Examples

1. Forward Euler: α_0 = 1, α_1 = −1, β_1 = −1

   y_n − y_{n−1} − h_n ẏ_{n−1} = 0   ⇔   ẏ_{n−1} = (y_n − y_{n−1}) / h_n

2. Backward Euler: α_0 = 1, α_1 = −1, β_0 = −1

   y_n − y_{n−1} − h_n ẏ_n = 0   ⇔   ẏ_n = (y_n − y_{n−1}) / h_n

3. Trapezoidal: α_0 = 1, α_1 = −1, β_0 = −1/2, β_1 = −1/2

   y_n − y_{n−1} − (h_n/2)(ẏ_n + ẏ_{n−1}) = 0   ⇔   (1/2)(ẏ_n + ẏ_{n−1}) = (y_n − y_{n−1}) / h_n
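As a small illustration (not part of the original notes; the scalar test problem and step size are assumptions), the following Python snippet takes one step of each of the three methods on ẋ = −x. The implicit methods solve their defining equations with a short Newton loop.

```python
import numpy as np

def fe_step(f, x, h):
    return x + h * f(x)                          # forward Euler (explicit)

def be_step(f, dfdx, x, h, tol=1e-12):
    y = x + h * f(x)                             # predictor
    for _ in range(50):                          # Newton on y - x - h f(y) = 0
        r = y - x - h * f(y)
        y = y - r / (1.0 - h * dfdx(y))
        if abs(r) < tol:
            break
    return y

def trap_step(f, dfdx, x, h, tol=1e-12):
    y = x + h * f(x)
    for _ in range(50):                          # Newton on y - x - h/2 (f(x)+f(y)) = 0
        r = y - x - 0.5 * h * (f(x) + f(y))
        y = y - r / (1.0 - 0.5 * h * dfdx(y))
        if abs(r) < tol:
            break
    return y

f, dfdx = lambda x: -x, lambda x: -1.0           # scalar test problem x' = -x
print(fe_step(f, 1.0, 0.1), be_step(f, dfdx, 1.0, 0.1), trap_step(f, dfdx, 1.0, 0.1))
```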


Linear multistep methods with β_0 = 0 are called explicit methods, as opposed to implicit methods, for which β_0 ≠ 0. Most often α_0 = 1.

Therefore, given a differential equation of the form

dq(x)/dt + f(x) + b(t) = 0

discretize the time scale into a number of time steps and use an appropriate linear multistep method to eliminate q̇, as follows:

− Σ_{i=1}^{k} (α_i / (h_n β_0)) q_{n−i} − Σ_{i=1}^{k} (β_i / β_0) q̇_{n−i} − (α_0 / (h_n β_0)) q(x_n) + f(x_n) + b(t_n) = 0

This is a nonlinear equation with x_n as the unknown (since q_i and q̇_i for i < n are known). Therefore it can be solved using Newton-Raphson. The Jacobian for this system is

− (α_0 / (h_n β_0)) dq/dx + df/dx

As discussed earlier, it is crucial to have a good initial guess x_n^{(0)} for x_n in order to guarantee quadratic convergence of Newton's method. Such an initial guess can be generated by an explicit k-step method

x_n = − Σ_{i=1}^{k} [ (α̂_i / α̂_0) x_{n−i} + h_n (β̂_i / α̂_0) ẋ_{n−i} ]

where the α̂_i and β̂_i are the parameters of the explicit linear multistep method. This approach of combining an implicit k-step method with an explicit k-step method for generating initial values for Newton's method is called a predictor-corrector method.

5.3 Local Truncation Error

Sources of error:

• local error, due to the finite time-step

• global error, due to the flow of the differential equation, round-off, and the accumulation of local errors


A linear multistep method computes an approximate solution of an initial value problem, and it is desirable that the approximation error be as small as possible. Local truncation error (LTE) measures the error introduced in taking one time-step of the linear multistep method, assuming that all the values computed at previous time points are exact. Let x(t_n) denote the exact solution and x_n the computed (approximate) solution. Then the local truncation error measures how close x_n is to x(t_n) in the following sense:

LTE ≡ x(t_n) − x_n

Consider a test problem ẋ = f(x). Then x_n satisfies the following equation:

x_n + Σ_{i=1}^{k} α_i x(t_{n−i}) + h_n [ β_0 f(x_n) + Σ_{i=1}^{k} β_i ẋ(t_{n−i}) ] = 0

As a numerical example, consider ẋ = −x, x(0) = 1. The exact solution of this equation is x(t) = exp(−t). Consider integrating this equation with backward Euler with time-steps of 0.1. At time t_n, for this test problem the following relationship holds:

(x_n − x_{n−1}) / h_n = ẋ_n = −x_n   ⇒   x_n = x_{n−1} / (1 + h_n)

Now consider the solution at time t = 0.5. If x_{n−1} is assumed perfect, i.e., x_4 = 0.67032, then

x_5 = 0.67032 / (1 + 0.1) = 0.60938

whereas the exact solution is 0.60653. Therefore the LTE is

LTE = 0.60653 − 0.60938 = −0.0028512

Recall that the linear multistep method is itself an approximation which relates the x_i and ẋ_i by a linear relationship. Since this is an approximation, if the exact solution is substituted into an LMS equation, the result will not be zero. Again for the above example,

(x(t_n) − x(t_{n−1})) / h_n − ẋ(t_n) = (x(t_n) − x(t_{n−1})) / h_n + x(t_n) = (0.60653 − 0.67032) / 0.1 + 0.60653 = −0.031363

The amount by which the linear multistep method is incorrect is called the local error (E_k):

E_k ≡ Σ_{i=0}^{k} α_i x(t_{n−i}) + Σ_{i=0}^{k} h_n β_i ẋ(t_{n−i})

Therefore in the previous example the local error is −0.031363.

Local error and local truncation error are obviously related:

LTE = x(t_n) − x_n
    = x(t_n) + Σ_{i=1}^{k} α_i x(t_{n−i}) + h_n [ β_0 f(x_n) + Σ_{i=1}^{k} β_i ẋ(t_{n−i}) ]
    = x(t_n) + Σ_{i=1}^{k} α_i x(t_{n−i}) + h_n [ β_0 f(x(t_n)) + Σ_{i=1}^{k} β_i ẋ(t_{n−i}) ] + h_n β_0 [ f(x_n) − f(x(t_n)) ]
    = E_k + h_n β_0 [ f(x_n) − f(x(t_n)) ]

Since f(x) is Lipschitz continuous,

‖f(x_n) − f(x(t_n))‖ ≤ l ‖x_n − x(t_n)‖

for some l. Therefore

‖LTE‖ ≤ ‖E_k‖ + h_n |β_0| ‖f(x_n) − f(x(t_n))‖
‖LTE‖ ≤ ‖E_k‖ + h_n |β_0| l ‖x_n − x(t_n)‖
‖LTE‖ ≤ ‖E_k‖ + h_n |β_0| l ‖LTE‖
‖LTE‖ ≤ ‖E_k‖ / (1 − h_n l |β_0|)
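The numbers in the example above can be reproduced with a few lines of Python (illustrative only):

```python
import numpy as np

h = 0.1
t_n, t_nm1 = 0.5, 0.4
x_exact = lambda t: np.exp(-t)

# backward Euler step starting from the exact value at t_{n-1}
x_prev = x_exact(t_nm1)                    # 0.67032
x_be   = x_prev / (1.0 + h)                # 0.60938
lte    = x_exact(t_n) - x_be               # local truncation error, about -0.00285

# local error: substitute the exact solution into the LMS formula
E_k = (x_exact(t_n) - x_exact(t_nm1)) / h + x_exact(t_n)   # about -0.03136

print(x_be, lte, E_k)
```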


Typically we use E_k instead of the LTE to estimate the error (the two are the same if β_0 = 0).

The advantage of using the local error is that we can relate the accuracy (or order) of a linear multistep method to it. The local error can be viewed as a quantity which is a function of h_n. Obviously, as h_n → 0 we require E_k → 0. Expand E_k in a Taylor series around h_n = 0 as follows:

E[x(t), h] = E[x, 0] + E^{(1)}[x, 0] h + E^{(2)}[x, 0] h²/2! + … + E^{(k+1)}[x, 0] h^{k+1}/(k+1)! + O(h^{k+2})

A multistep formula is said to be a pth order method if

E^{(i)}[x, 0] = 0,   0 ≤ i ≤ p

for all x(t) with at least p + 1 derivatives. Equivalently, E[q(t), h] = 0 for any polynomial q(t) of degree p or less.

Consider the following basis polynomials:

q(t) = ( (t_n − t) / h_n )^l,   l = 0, 1, …, p

The local error for these polynomials is given by

E[q(t), h] = Σ_{i=0}^{k} [ α_i q(t_{n−i}) + h_n β_i q̇(t_{n−i}) ]
           = Σ_{i=0}^{k} [ α_i ( (t_n − t_{n−i}) / h_n )^l − l β_i ( (t_n − t_{n−i}) / h_n )^{l−1} ]

Therefore for a pth order method we want

0 = Σ_{i=0}^{k} [ α_i ( (t_n − t_{n−i}) / h_n )^l − l β_i ( (t_n − t_{n−i}) / h_n )^{l−1} ]

for l = 0, 1, …, p. These conditions are known as exactness equations.

For a uniform step size they reduce to

Σ_{i=0}^{k} [ α_i i^l − l β_i i^{l−1} ] = 0

Examples

1. Forward Euler (α_0 = 1, α_1 = −1, β_1 = −1):
   l = 0:  Σ α_i = 0
   l = 1:  Σ (i α_i − β_i) = 0
   l = 2:  Σ (i² α_i − 2i β_i) ≠ 0

2. Backward Euler (α_0 = 1, α_1 = −1, β_0 = −1):
   l = 0:  Σ α_i = 0
   l = 1:  Σ (i α_i − β_i) = 0
   l = 2:  Σ (i² α_i − 2i β_i) ≠ 0


3. Trapezoidal (α_0 = 1, α_1 = −1, β_0 = −1/2, β_1 = −1/2):
   l = 0:  Σ α_i = 0
   l = 1:  Σ (i α_i − β_i) = 0
   l = 2:  Σ (i² α_i − 2i β_i) = 0
   l = 3:  Σ (i³ α_i − 3i² β_i) ≠ 0

Therefore Forward Euler and Backward Euler are first-order methods, while Trapezoidal is a second-order method. Usually α_0 = 1, so we have 2k + 1 unknowns and p + 1 exactness conditions. Therefore

k ≥ p/2

5.3.1 Algorithm for Choosing LMS

1. choose the order of accuracy p

2. choose the step length k of the method, where k ≥ p/2

3. write down the p + 1 exactness equations

4. if k > p/2, choose 2k − p other constraints

Example: with p = k, the additional k constraints might be β_1 = β_2 = … = β_k = 0:

Σ_{i=0}^{k} α_i x_{n−i} + h_n β_0 ẋ_n = 0   ⇒   ẋ_n = − (1 / (β_0 h_n)) Σ_{i=0}^{k} α_i x_{n−i}

This is the kth order backward differentiation method (Gear's method).
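As a sketch (uniform step size assumed, and using the sign convention of these notes in which β_0 = −1 for backward Euler), the exactness equations above can be solved numerically for the BDF (Gear) coefficients:

```python
import numpy as np

def bdf_coefficients(k):
    """Solve the uniform-step exactness equations for the k-step BDF (Gear) method.
    Unknowns: alpha_1..alpha_k and beta_0, with alpha_0 = 1 and beta_1..beta_k = 0."""
    A = np.zeros((k + 1, k + 1))
    rhs = np.zeros(k + 1)
    for l in range(k + 1):                      # exactness for l = 0..k
        for i in range(1, k + 1):
            A[l, i - 1] = i**l                  # coefficient of alpha_i
        A[l, k] = -1.0 if l == 1 else 0.0       # coefficient of beta_0
        rhs[l] = -1.0 if l == 0 else 0.0        # move the alpha_0 = 1 term across
    sol = np.linalg.solve(A, rhs)
    return sol[:k], sol[k]                      # (alpha_1..alpha_k, beta_0)

for k in (1, 2):
    print(k, bdf_coefficients(k))
# k = 1 gives alpha_1 = -1, beta_0 = -1 (backward Euler)
# k = 2 gives alpha_1 = -4/3, alpha_2 = 1/3, beta_0 = -2/3 (Gear 2)
```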

We now relate the local error to the solution x(t). Assuming a smooth solution, expand x(t) in a Taylor series around t_n as follows:

x(t) = x(t_n) + x^{(1)}(t_n)(t − t_n) + …                    [≡ q_p(t), the Taylor polynomial of degree p]
       + x^{(p+1)}(t_n)(t − t_n)^{p+1}/(p+1)! + …            [≡ r(t), the remainder]

For a pth order LMS, the local error will be

E_k = E[q_p(t), h_n] + E[r, h_n]
    = 0 + E[r, h_n]
    = Σ_{i=0}^{k} [ α_i x^{(p+1)}(t_n) (t_{n−i} − t_n)^{p+1}/(p+1)! + h_n β_i x^{(p+1)}(t_n) (t_{n−i} − t_n)^p/p! ] + O(h_n^{p+2})
    = (−1)^{p+1} Σ_{i=0}^{k} [ α_i ( (t_n − t_{n−i})/h_n )^{p+1} − (p+1) β_i ( (t_n − t_{n−i})/h_n )^p ] · x^{(p+1)}(t_n)/(p+1)! · h_n^{p+1} + O(h_n^{p+2})
    = ε_{p+1} x^{(p+1)}(t_n) h_n^{p+1} + O(h_n^{p+2})

where

ε_{p+1} = ( (−1)^{p+1}/(p+1)! ) Σ_{i=0}^{k} [ α_i ( (t_n − t_{n−i})/h_n )^{p+1} − (p+1) β_i ( (t_n − t_{n−i})/h_n )^p ]

For example


1. Forward Euler: α_0 = 1, α_1 = −1, β_0 = 0, β_1 = −1, p = 1

   ε_2 = ( −1 − 2·(−1) ) / 2! = 1/2

2. Backward Euler: α_0 = 1, α_1 = −1, β_0 = −1, β_1 = 0, p = 1

   ε_2 = −1 / 2! = −1/2

3. Trapezoidal: α_0 = 1, α_1 = −1, β_0 = −1/2, β_1 = −1/2, p = 2

   ε_3 = −( −1 − 3·(−1/2) ) / 3! = −1/12
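A small helper (illustrative only, uniform step size assumed, using the sign convention of these notes) evaluates ε_{p+1} directly from the coefficients and reproduces these values:

```python
import numpy as np
from math import factorial

def lte_coefficient(alpha, beta, p):
    """epsilon_{p+1} for a uniform-step LMS written as
    sum_i alpha_i x_{n-i} + h * sum_i beta_i xdot_{n-i} = 0."""
    k = len(alpha) - 1
    s = sum(alpha[i] * i**(p + 1) - (p + 1) * beta[i] * i**p for i in range(k + 1))
    return (-1)**(p + 1) * s / factorial(p + 1)

print(lte_coefficient([1, -1], [0, -1], 1))          # forward Euler:  +1/2
print(lte_coefficient([1, -1], [-1, 0], 1))          # backward Euler: -1/2
print(lte_coefficient([1, -1], [-0.5, -0.5], 2))     # trapezoidal:    -1/12
```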

Therefore, to summarize: in order to perform transient simulation of

q̇(x) + f(x) + b(t) = 0

choose a linear multistep method

0 = − (1/(h_n β_0)) [ Σ_{i=0}^{k} α_i q(x_{n−i}) + h_n Σ_{i=1}^{k} β_i q̇(x_{n−i}) ] + f(x_n) + b(t_n)

The differential equation then becomes a nonlinear algebraic equation

0 = F(x_n) ≡ − (α_0/(h_n β_0)) q(x_n) + f(x_n) + c_n

where

c_n = − (1/(h_n β_0)) Σ_{i=1}^{k} [ α_i q(x_{n−i}) + h_n β_i q̇(x_{n−i}) ] + b(t_n)

is known.

cn known

The Newton iteration for this equation is

x(k+1)n = x(k)

n −

df

dx(xn) −

α0

hnβ 0

dq

dx(xn)

−1

− 1

hnβ 0α0q (xn) + f (xn) + cn

Let us investigate the convergence properties of the Newton iteration. Let df dx and dq

dx be Lipschitz continuous. Let

J (x) = df

dx −

α0

hnβ 0

dq

dx

Then

J (x1) − J (x2) = df dx

(x1) − α0

hnβ 0dq dx

(x1) − df dx

(x2) + α0

hnβ 0dq dx

(x2)≤

df

dx(x1) −

df

dx(x2)

+

− α0

hnβ 0

dq

dx(x1) +

αi

hnβ 0

dq

dx(x2)

lf + lq

α0

hnβ 0

x1 − x2

Thus J (x) Lipschitz continuous. By NR convergence theorem, convergence if x(0)n close to x∗n. As pointed out earlier,

we can use a predictor for generating a x(0)n from previous points.


5.4 Stability

A method is said to be consistent if p ≥ 1, so that

lim_{h→0} LTE_i / h = 0

Recall that the global error is given by

Global Error = Σ_i LTE_i ≈ (T/h) · LTE

Thus, if the global error is to go to 0 as h → 0, we need lim_{h→0} LTE_i / h = 0.

A method is said to be convergent if

lim_{h→0} max_{0≤m≤M} ‖x_m − x(t_m)‖ → 0

where x_m is the computed solution, x(t_m) is the true solution, t_m = mh and M = T/h.

A method is stable if ∃ h_0 and k < ∞ such that for any two different initial conditions x_0 and x̃_0 and h = T/M < h_0,

‖x(t_m) − x̃(t_m)‖ ≤ k ‖x_0 − x̃_0‖

Classical theorem: consistency + stability ⇔ convergence.

5.5 Absolute Stability

Here we examine the stability properties of various linear multistep methods, i.e., the ranges of parameters over which the computed solution is stable and those over which it is unstable. Ideally we want the linear multistep method to have the same stability properties as the original system: in the linear case, if the eigenvalues are in the left half plane, the approximate solution generated by the LMS should be stable, and if the eigenvalues are in the right half plane, the approximate solution generated by the LMS should be unstable. Before formally introducing the concept, consider the test problem ẋ = −x, x(0) = 1, with exact solution e^{−t}. Let us try to solve this using the explicit midpoint method

ẋ_{n−1} = (x_n − x_{n−2}) / (2h_n)

For h = 0.1 the solution looks like the following:

[Plot: explicit midpoint with h = 0.1; computed x versus time, 0 ≤ t ≤ 20]


This is obviously unstable and undesirable. Now let us try to solve this by reducing the time step. For h = 0.01 thesolution looks like the following

[Plot: explicit midpoint with h = 0.01; computed x versus time, 0 ≤ t ≤ 20]

Again the solution is unstable while the original system was stable. It can be shown that this method is unstable forall h! Now consider Forward Euler with h = 0.1.

[Plot: forward Euler with h = 0.1; computed x versus time, 0 ≤ t ≤ 20]

The computed approximation is reasonably close to the solution. Let us increase the step size to h = 1.


[Plot: forward Euler with h = 1; computed x versus time, 0 ≤ t ≤ 20]

Now the calculated solution is quite different from the exact solution. Let us try to increase the time-step even more.Let h = 3.

[Plot: forward Euler with h = 3; computed x versus time, 0 ≤ t ≤ 20]

Now the method becomes unstable. Therefore Forward Euler is conditionally stable. Let us try Backward Euler andTrapezoidal for h = 0.1, 1, 3.


[Plot: backward Euler with h = 0.1; computed x versus time, 0 ≤ t ≤ 20]

[Plot: backward Euler with h = 1; computed x versus time, 0 ≤ t ≤ 20]


[Plot: backward Euler with h = 3; computed x versus time, 0 ≤ t ≤ 20]

[Plot: trapezoidal with h = 0.1; computed x versus time, 0 ≤ t ≤ 20]


[Plot: trapezoidal with h = 1; computed x versus time, 0 ≤ t ≤ 20]

[Plot: trapezoidal with h = 3; computed x versus time, 0 ≤ t ≤ 20]

Therefore Backward Euler and Trapezoidal seem to be stable for all h. Also for a given h, Trapezoidal seems to bethe most accurate of the three. Let us now formalize this notion of absolute stability of a linear multistep method.

Consider the following test problem:

ẋ = λx,   x(0) = 1,   λ complex

One might be tempted to use a more complicated test problem, but it turns out that this problem suffices because

1. it is simple;

2. the local behaviour of nonlinear systems can be approximated by the linearization around the current operating point, δẋ = A(t) δx;

3. a system ẋ = Ax can often be diagonalized to the form ẋ̃ = D x̃, where D is the diagonal matrix of eigenvalues of A.


For a general linear multistep method

Σ_{i=0}^{k} [ α_i x_{n−i} + h_n β_i ẋ_{n−i} ] = 0

we have

Σ_{i=0}^{k} [ α_i x_{n−i} + h_n β_i λ x_{n−i} ] = 0 = Σ_{i=0}^{k} (α_i + qβ_i) x_{n−i},   q ≡ λh

This can be treated as a difference equation, and we can use the discrete-time transform variable z to rewrite it as

Σ_{i=0}^{k} (α_i + qβ_i) z^{k−i} = 0

Since the degree of this polynomial is k, it has k roots r_i, and the generic solution for distinct roots is

x_n = Σ_{i=1}^{k} c_i r_i^n

If each r_i has multiplicity m_i (with Σ m_i = k),

x_n = … + ( c_{i0} + c_{i1} n + … + c_{i,m_i−1} n^{m_i−1} ) r_i^n + …

This system is stable if |r_i| < 1 for all i, or if |r_i| = 1 then m_i = 1 and all other roots satisfy |r_i| < 1. Otherwise the system is unstable.
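For a given q this root condition is easy to automate; the sketch below (illustrative code, not from the text) forms the characteristic polynomial, computes its roots with numpy, and applies the condition:

```python
import numpy as np

def stable_for(alpha, beta, q, tol=1e-9):
    """Check whether q = lambda*h lies in the region of absolute stability of the
    LMS with coefficients alpha_i, beta_i (char. poly sum_i (alpha_i+q*beta_i) z^{k-i})."""
    coeffs = [a + q * b for a, b in zip(alpha, beta)]    # z^k, z^{k-1}, ..., z^0
    roots = np.roots(coeffs)
    if np.any(np.abs(roots) > 1 + tol):
        return False
    # roots on the unit circle must be simple
    on_circle = roots[np.abs(np.abs(roots) - 1) <= tol]
    for r in on_circle:
        if np.sum(np.abs(roots - r) <= tol) > 1:
            return False
    return True

fe = ([1, -1], [0, -1])       # forward Euler
be = ([1, -1], [-1, 0])       # backward Euler
print(stable_for(*fe, q=-1.0), stable_for(*fe, q=-3.0))   # True, False
print(stable_for(*be, q=-3.0), stable_for(*be, q=0.5))    # True, False
```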

5.5.1 Region of Absolute Stability

The region of absolute stability of an LMS method is the set of q = λh such that all solutions of the difference equation

0 = Σ_{i=0}^{k} (α_i + qβ_i) x_{n−i}

remain bounded as n → ∞. A method is stable if the stability region contains q = 0.

Theorem  The solutions of the difference equation are bounded iff all roots of the associated polynomial

Σ_{i=0}^{k} (α_i + qβ_i) z^{k−i} = 0

are inside or on the complex unit circle (|z| ≤ 1), and the roots on the unit circle have multiplicity 1.

The roots of the above equation change as q changes. The region of absolute stability is the set of all values of q for which the necessary and sufficient conditions are satisfied.

As an example consider Backward Euler:

x_n = x_{n−1} + h ẋ_n   ⇒   z = 1/(1 − q)

|z| ≤ 1   ⇔   |1/(1 − q)| ≤ 1   ⇔   |1 − q| ≥ 1

The region of absolute stability is therefore the exterior of the unit circle centred at q = 1 (the shaded area in the original figure).


The region of absolute stability can be determined in several ways:

1. Choose a value of q, compute the roots, and repeat for many values of q (as John McEnroe would put it, "You can't be serious!").

2. Solve for q = −p(z)/σ(z), where

   p(z) = Σ_{i=0}^{k} α_i z^{k−i}
   σ(z) = Σ_{i=0}^{k} β_i z^{k−i}

   Look at the set

   S_q = { q | q = −p(z)/σ(z), |z| ≤ 1 }

   i.e., let z wander around |z| ≤ 1 and record all values of q seen. This method is also not very useful, because we might get some q values for two or more different z's, one with |z| ≤ 1 and one with |z| > 1.

The most efficient method is to use the concept of conformal mapping from the theory of complex variables. Let C(q) be the contour defined by

q = −p(z)/σ(z),   z = exp(ıθ)

[Figure: the unit circle in the z-plane and its image contour C(q) in the q-plane]

Then we can use some basic results from the theory of complex variables:

1. the mapping −p(z)/σ(z) is conformal;

2. the q-plane is separated into disjoint sets; in each set, the number of roots from outside the unit circle is constant;

3. the boundary of the stability region is a subset of C(q).

Examples:

• FE:

  q(z = exp(ıθ)) = exp(ıθ) − 1

  [Figure: image of the unit circle in the q-plane (FE)]

  The unit circle in the z-plane just gets shifted by (−1, 0); therefore the area outside this circle is the unstable region.

• BE:

  q(z = exp(ıθ)) = 1 − 1/exp(ıθ) = 1 − exp(−ıθ)


  [Figure: image of the unit circle in the q-plane (BE)]

  In this transformation the unit circle is shifted by (1, 0) and its orientation is reversed; hence the area inside the circle is unstable.

• Trapezoidal:

  x_n = x_{n−1} + (q/2)(x_{n−1} + x_n)   ⇒   q(z = exp(ıθ)) = 2(exp(ıθ) − 1) / (exp(ıθ) + 1)

  [Figure: image of the unit circle in the q-plane (Trapezoidal)]

  In this case the unit circle is mapped onto the imaginary axis. Hence the area to the left of the imaginary axis is the region of absolute stability.
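The boundary-locus mapping can be sampled numerically; the following sketch (illustrative, sampling a few points of the contour for the three methods above) evaluates q = −p(z)/σ(z) on the unit circle:

```python
import numpy as np

def boundary_locus(alpha, beta, n=9):
    """Map points of the unit circle z = exp(i*theta) to the q-plane via
    q = -p(z)/sigma(z); the stability-region boundary is a subset of this contour."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * theta)
    p = np.polyval(alpha, z)           # p(z)     = sum alpha_i z^{k-i}
    sigma = np.polyval(beta, z)        # sigma(z) = sum beta_i  z^{k-i}
    return -p / sigma

print(np.round(boundary_locus([1, -1], [0, -1]), 3))        # FE:   exp(i*theta) - 1
print(np.round(boundary_locus([1, -1], [-1, 0]), 3))        # BE:   1 - exp(-i*theta)
print(np.round(boundary_locus([1, -1], [-0.5, -0.5]), 3))   # trap: purely imaginary
```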

As an example, consider the 7th-order Adams-Bashforth method:

Σ_{i=0}^{7} (α_i + qβ_i) z^{7−i} = 0,   β_0 = 0,   α_2 = … = α_7 = 0

Here

p(z) = z^7 − z^6
σ(z) = β_1 z^6 + … + β_7

The characteristic polynomial is 0 = p(z) + qσ(z). For q = 0, p(z) = z^7 − z^6, whose roots are z = 0 with multiplicity 6 and z = 1 with multiplicity 1, so the method is stable at q = 0.


[Plot: region of absolute stability of the 7th-order Adams-Bashforth method in the q-plane; the labels indicate the number of roots outside the unit circle in each region]

The region just outside the stability region has one unstable root, and so on.

5.5.3 A-Stable Methods

A method is A-stable if the region of absolute stability includes the entire left-half q plane. Examples: backward Euler, trapezoidal.

Dahlquist’s Theorem

1. An A-stable LMS method cannot exceed second order of accuracy

2. The most accurate A-stable method (smallest local truncation error) is the trapezoidal rule.

5.6 Stiff Equations and Stiff Stability

As an example, consider the following system of equations:

ẋ = −λ_1 (x − s(t)) + ds/dt,   x(0) = x_0
s(t) = 1 − exp(−λ_2 t)
λ_1 = 10^6,   λ_2 = 1

The exact solution of this system is

x(t) = x_0 exp(−λ_1 t) + 1 − exp(−λ_2 t)

which is plotted below (not to scale):

which is plotted below (not to scale)

[Figure: x versus t, showing the fast component x_0 exp(−λ_1 t), which dies out by t ≈ 5 × 10^{-6}, and the slow component 1 − exp(−λ_2 t), which settles by t ≈ 5]


For t ≥ 5 × 10^{-6}, x_0 exp(−λ_1 t) ≈ 0. For t ≥ 5, 1 − exp(−λ_2 t) ≈ 1. The interval of interest is [0, 5]. If a uniform step size is used in the numerical integration of this set of equations, then for accuracy we need h ≤ 10^{-6}. This would imply that we need to take 5 × 10^6 steps!

A more sensible strategy is to take 5 steps of size 10^{-6} for accuracy during the initial phase and then 5 steps of size 1. Trying this with forward Euler gives:

t:  0    1µ   2µ   3µ   4µ   5µ   1          2           3            4           5
x:  1    1µ   2µ   3µ   4µ   5µ   1   −3.7×10^5   3.7×10^11   −3.7×10^17   3.7×10^23

Forward Euler is obviously not suited to this problem because it has a very small stability region, |1 + q| ≤ 1. Therefore, for solving practical differential equations, we need

1. variable time steps

2. methods with large region of stability

3. methods which are stable for variable time steps

Stiff problems occur when

1. natural time constants

2. input time constants

3. interval of interest

are widely separated.

5.6.1 Requirements for Stiff Stability: Numerical Methods

Consider the following set of equations:

ẋ = Ax,   λ(A) = γ + ıδ

The solution behaves as

x(t) ∼ exp(γt) cos(δt)

1. For accuracy we want to follow the sinusoid accurately, so we want at least 8 steps ∆t = h per cycle:

   δh/2π ≤ 1/8   ⇒   |Im(q)| ≤ π/4

2. For accuracy when Re(λ) = γ > 0, we want the method to be accurate for 0 ≤ Re(q) ≤ µ.

3. We also want to be able to take large time steps once the fast transients have died out (t ≫ 1/|γ|), no matter what λ is.

Thus the stability region should include q = −∞. Recall that

q = −p(z)/σ(z)

For q = −∞, σ(z) = 0. Let β_1 = … = β_k = 0. Then as q → −∞, all roots → 0 with multiplicity k. Therefore such a method includes −∞ in its region of stability.

This class of methods, with β_1 = … = β_k = 0, is called the Backward Differentiation Formulae (BDF), or Gear's methods. For these methods the order of accuracy is p = k (the step length), β_0 ≠ 0, and β_1 = … = β_k = 0:

Σ_{i=0}^{k} α_i x_{n−i} + h β_0 ẋ_n = 0

or

ẋ_n = − (1/(h_n β_0)) Σ_{i=0}^{k} α_i x_{n−i}

k = 1 corresponds to backward Euler. Note that there are k + 1 coefficients β_0, α_1, …, α_k, and to get accuracy p we need to satisfy the p + 1 exactness conditions. Hence the number of exactness conditions is equal to the number of unknowns. The region of absolute stability for Gear methods of various orders with uniform step size is shown below:


[Plot: regions of absolute stability of the Gear (BDF) methods of orders 1 through 6 in the q-plane]

The main difference between trapezoidal method and 2nd order Gear method is that trapezoidal method requiresknowledge of only the previous step whereas Gear’s method requires the knowledge of previous two time steps.Therefore the coefficients are functions of the time step.

0 = α_0 + α_1 + α_2
0 = α_1 (t_n − t_{n−1})/h_n + α_2 (t_n − t_{n−2})/h_n − β_0 = α_1 + α_2 (1 + h_{n−1}/h_n) − β_0
0 = α_1 + α_2 (1 + h_{n−1}/h_n)²

Letting r = h_{n−1}/h_n (with α_0 = 1),

α_2 = 1 / ( r(r + 2) )
α_1 = −(1 + r)² / ( r(r + 2) )
β_0 = −(r + 1) / (r + 2)

Note, however, that the coefficients depend only on the step-size ratio and not on the absolute step sizes.
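A quick numerical check (illustrative code) of these variable-step Gear-2 coefficients and of the three exactness conditions:

```python
def gear2_coeffs(h_n, h_nm1):
    """Variable-step Gear-2 coefficients (alpha_0 = 1); they depend only on r = h_{n-1}/h_n."""
    r = h_nm1 / h_n
    alpha2 = 1.0 / (r * (r + 2.0))
    alpha1 = -(1.0 + r)**2 / (r * (r + 2.0))
    beta0 = -(r + 1.0) / (r + 2.0)
    return alpha1, alpha2, beta0

a1, a2, b0 = gear2_coeffs(0.1, 0.1)                # uniform step, r = 1
print(a1, a2, b0)                                  # -4/3, 1/3, -2/3
# verify the three exactness conditions for r = 1
r = 1.0
print(1 + a1 + a2, a1 + a2 * (1 + r) - b0, a1 + a2 * (1 + r)**2)   # all zero
```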


5.6.2 Choice of Step Size

For efficiency, the goal is to take the smallest number of time steps consistent with the error bounds. Recall that the local error is given by

LE_n = ε h_n^{p+1} x^{(p+1)}(t_n)/(p+1)! + O(h_n^{p+2})

where

ε = Σ_{i=1}^{k} [ α_i ( (t_n − t_{n−i})/h_n )^{p+1} − (p+1) β_i ( (t_n − t_{n−i})/h_n )^p ]

At each time step we want the local error to be less than a given error bound E_n. This implies

h_n ≤ | (p + 1)! E_n / ( ε x^{(p+1)}(t_n) ) |^{1/(p+1)}

For a given multistep method we have a formula for ε. If we knew x^{(p+1)}, we would take h_n equal to

h_n = | (p + 1)! E_n / ( ε x^{(p+1)}(t_n) ) |^{1/(p+1)}

One way to estimate x^{(p+1)}(t_n) is to use divided differences:

DD_1 = (x_n − x_{n−1}) / h_n ≈ ẋ_n
DD_2 = ( DD_1(t_n) − DD_1(t_{n−1}) ) / (h_n + h_{n−1}) ≈ x^{(2)}(t_n)/2!
DD_{k+1} = ( DD_k(t_n) − DD_k(t_{n−1}) ) / ( Σ_{i=0}^{k} h_{n−i} ) ≈ x^{(k+1)}(t_n)/(k+1)!

In principle, if we have a choice of methods with different accuracy p, we would choose the method which gives the largest step size

h_n = | E_n / ( ε DD_{p+1} ) |^{1/(p+1)}

However,

1. DD is error prone

2. DD is expensive to compute

3. it is expensive to switch p

Therefore, typical circuit simulators follow some heuristic rules:

1. don't change the step size h_n too often;

2. change the order k only if it improves the usable step size by about a factor of two;

3. change the step size and order only if LE < E_n for k + 1 steps after the last change, or if the error is too large.
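A minimal sketch of the step-size selection formula, using divided differences built from the last few solution points (all data and tolerances below are illustrative):

```python
import numpy as np

def next_step(ts, xs, p, eps, err_bound):
    """Estimate x^(p+1)(t_n)/(p+1)! with divided differences over the last p+2 points
    and return the step size keeping |eps * DD_{p+1} * h^{p+1}| below err_bound."""
    dd = np.asarray(xs, dtype=float)
    for j in range(1, p + 2):                          # build DD_1 .. DD_{p+1}
        dd = (dd[1:] - dd[:-1]) / (np.asarray(ts[j:]) - np.asarray(ts[:-j]))
    dd_p1 = dd[-1]                                     # approx x^(p+1)(t_n)/(p+1)!
    return (err_bound / abs(eps * dd_p1))**(1.0 / (p + 1))

# illustrative data: samples of exp(-t), backward Euler (p = 1, eps = -1/2)
ts = [0.0, 0.1, 0.2, 0.3]
xs = list(np.exp(-np.array(ts)))
print(next_step(ts, xs, p=1, eps=-0.5, err_bound=1e-4))
```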

5.6.3 Application of LMS Methods to Circuits

As an example, consider a nonlinear capacitor whose charge is related to its voltage by

q(v) = q_0 exp(v/V_T)


Therefore the current through the capacitor is given by

dq/dt = i = (q_0/V_T) exp(v/V_T) dv/dt

Using backward Euler we have

i_n = (q_0/V_T) exp(v_n/V_T) (v_n − v_{n−1})/h_n

Therefore the capacitor looks like a nonlinear voltage-dependent resistor in parallel with a current source. This can be solved using Newton-Raphson as

i_n^{(k+1)} = (q_0/V_T) exp(v_n^{(k)}/V_T) (v_n^{(k)} − v_{n−1})/h_n
              + [ (q_0/(h_n V_T)) exp(v_n^{(k)}/V_T) + (q_0/V_T²) exp(v_n^{(k)}/V_T) (v_n^{(k)} − v_{n−1})/h_n ] ( v_n^{(k+1)} − v_n^{(k)} )
            = G v_n^{(k+1)} + I

where

G = (q_0/(h_n V_T)) exp(v_n^{(k)}/V_T) [ 1 + (v_n^{(k)} − v_{n−1})/V_T ]

I = −G v_n^{(k)} + (q_0/V_T) exp(v_n^{(k)}/V_T) (v_n^{(k)} − v_{n−1})/h_n

Another way of doing this is to apply the LMS directly to the charge, as follows:

i_n = ( q_n − q_{n−1} ) / h_n

i_n^{(k+1)} = [ q(v_n^{(k)}) + q′(v_n^{(k)}) ( v_n^{(k+1)} − v_n^{(k)} ) − q(v_{n−1}) ] / h_n = G v_n^{(k+1)} + I

where

G = q′(v_n^{(k)}) / h_n

I = [ −q′(v_n^{(k)}) v_n^{(k)} + q(v_n^{(k)}) − q(v_{n−1}) ] / h_n
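A minimal sketch of this charge-based companion model (illustrative parameter values; G and I are the equivalent conductance and current source of the linearized capacitor):

```python
import numpy as np

q0, VT = 1e-12, 0.025
q     = lambda v: q0 * np.exp(v / VT)
dqdv  = lambda v: q0 / VT * np.exp(v / VT)      # C(v) = dq/dv

def companion_charge_based(v_k, v_prev, h):
    """Backward-Euler companion model from i_n = (q_n - q_{n-1})/h_n, linearized at v_k."""
    G = dqdv(v_k) / h
    I = (-dqdv(v_k) * v_k + q(v_k) - q(v_prev)) / h
    return G, I                                  # i = G*v + I

v_k, v_prev, h = 0.6, 0.59, 1e-9
G, I = companion_charge_based(v_k, v_prev, h)
print(G, I, G * v_k + I)                         # last value: current at v = v_k
```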

In this formulation q and q′ are calculated from the nonlinear charge expression. The question now is which formulation is better. Consider the two ways of writing the above equations again:

dq(x)/dt + f(x) + b(t) = 0
C(x) dx/dt + f(x) + b(t) = 0

Now, the total charge in the circuit should be conserved, i.e.,

Σ_{i=1}^{m+1} q_i(x) = K

Since the total charge is constant, it also follows that

Σ_{i=1}^{m+1} [ f_i(x) + b_i(t) ] = 0


However, what happens when we apply a numerical integration method to the two forms? We hope that charge is conserved. Let us apply backward Euler to the second equation:

0 = C(x_n) ( x_n − x_{n−1} )/h_n + f(x_n) + b(t_n)

Now expand q(x_{n−1}) in a Taylor series around x_n:

q(x_{n−1}) = q(x_n) + C(x_n)( x_{n−1} − x_n ) + O(h_n²)

Eliminating C(x_n) from the above equations, we have

Σ_{i=1}^{m+1} q_i(x_{n−1}) = Σ_{i=1}^{m+1} q_i(x_n) + Σ_{i=1}^{m+1} h_n ( f_i(x_n) + b_i(t_n) ) + O(h_n²)

K = K + 0 + O(h_n²)

which implies that charge is not conserved, which is unphysical!

Now apply backward Euler to the first equation:

0 = ( q(x_n) − q(x_{n−1}) )/h_n + f(x_n) + b(t_n)

Σ_{i=1}^{m+1} q_i(x_{n−1}) = Σ_{i=1}^{m+1} q_i(x_n) + Σ_{i=1}^{m+1} h_n ( f_i(x_n) + b_i(t_n) )

K = K + 0

Thus charge is conserved.

Theorem: Any consistent multistep method conserves charge when applied to

dq(x)/dt + f(x) + b(t) = 0

5.6.4 Stability with Variable Time Steps

Once again consider the test problem ẋ = λx. Applying backward Euler to it, we have

x_n = x_{n−1} / (1 − q_n)

This recurrence is stable for all h such that |1/(1 − λh_n)| ≤ 1. Similarly, Trapezoidal is stable for all h such that λh lies in the left half plane. Recall that the backward Euler and Trapezoidal methods require information at only two time points and their coefficients are independent of the time steps.

However, for Gear 2 the coefficients depend on r_n = h_n/h_{n−1}, so stability does depend on the step-size ratio. The following theorem establishes the conditions under which Gear 2 is stable:

Theorem

β_0(r_n) q_n x_n + x_n + α_1(r_n) x_{n−1} + α_2(r_n) x_{n−2} = 0

is stable for all q_n such that

|π − Im(q_n)| ≤ π/4,   Re(q_n) ≤ 0,   r_n ≤ 1.2


Chapter 6

Small Signal and Noise Analysis of Circuits

6.1 General Formulation

Consider again the problem of solving

dq(x)/dt + f(x) + b = 0                                                      (6.1)

where b is assumed to be independent of time. Let x_s be the DC (steady-state) solution of this system. Now suppose that a "small" input signal D(x)ξ(t) is added to the above equation, i.e.,

dq(x)/dt + f(x) + b + D(x)ξ(t) = 0                                           (6.2)

We would like to find the solution of the above equation. From linear perturbation analysis, the solution of the above system is x_s + x_p(t), where x_p(t) is small. Substituting this form of the solution in (6.2), we have

0 = dq(x_s + x_p)/dt + f(x_s + x_p) + b + D(x_s + x_p) ξ(t)

Expanding q, f and D in Taylor series around x_s and ignoring second-order terms in the expansion, we have

0 ≈ d[ q(x_s) + (∂q/∂x)|_{x_s} x_p(t) ]/dt + f(x_s) + (∂f/∂x)|_{x_s} x_p(t) + b + [ D(x_s) + (∂D/∂x)|_{x_s} x_p(t) ] ξ(t)

Let

G = (∂f/∂x)|_{x_s},   C = (∂q/∂x)|_{x_s}

Also note that x_s is independent of t and satisfies (6.1), and that (∂D/∂x)|_{x_s} x_p(t) ξ(t) is a second-order term. Therefore

0 = C dx_p(t)/dt + G x_p(t) + D(x_s) ξ(t)                                    (6.3)

This small-signal analysis can be used for the so-called AC and noise analyses of circuits.


6.2 AC Analysis

In AC analysis, D(x)ξ(t) is a small sinusoidal source A exp(j2πft) whose frequency f is swept over a range. If the circuit is nonoscillatory, x_p(t) will also be a sinusoid at the same frequency, i.e.,

x_p(t) = X_p exp(j2πft)

Note that X_p is complex. Substituting the above form in (6.3), we get

[ (j2πf C + G) X_p + A ] exp(j2πft) = 0   ⇒   (j2πf C + G) X_p + A = 0

X_p can now be obtained using a complex linear solver.
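A minimal AC-analysis sketch (the one-node RC example, the element values and the unit source vector A are all assumptions for illustration):

```python
import numpy as np

def ac_sweep(C, G, A, freqs):
    """Small-signal AC analysis: solve (j*2*pi*f*C + G) Xp = -A at each frequency."""
    return [np.linalg.solve(1j * 2 * np.pi * f * C + G, -A) for f in freqs]

# illustrative 1-node RC circuit driven by a unit-amplitude current source
R, Cval = 1e3, 1e-9
G = np.array([[1.0 / R]])
C = np.array([[Cval]])
A = np.array([1.0])
for f, Xp in zip([1e3, 1e6, 1e9], ac_sweep(C, G, A, [1e3, 1e6, 1e9])):
    print(f, abs(Xp[0]))          # magnitude rolls off as R / sqrt(1 + (2*pi*f*R*C)^2)
```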

6.3 Noise Analysis

In noise analysis, D(x_s) ≡ D(x_s, f) and ξ(t) are unit, uncorrelated white and flicker noise sources. (6.3) can be viewed as a linear time-invariant system with some impulse response h(t) whose Fourier transform is H(f) = −(j2πf C + G)^{-1}. From stochastic differential equation theory,

x_p(t) = ∫_{−∞}^{∞} h(t − s) D(x_s) ξ(s) ds

Typically in circuit simulation we are interested in the second-order statistics (power spectral density, total noise power, etc.) of one component of x_p(t). Let e_i be the ith unit vector, where i is the index of the component of x_p(t) of interest. Therefore

e_i^T x_p(t) = ∫_{−∞}^{∞} e_i^T h(t − s) D(x_s) ξ(s) ds

and the autocorrelation function is given by

E[ e_i^T x_p(t_1) x_p^T(t_2) e_i ] = E[ ∫∫ e_i^T h(t_1 − s_1) D(x_s) ξ(s_1) ξ^T(s_2) D^T(x_s) h^T(t_2 − s_2) e_i ds_1 ds_2 ]

where E[·] denotes the expectation operator. Interchanging the order of expectation and integration and using the fact that

E[ ξ(s_1) ξ^T(s_2) ] = I δ(s_1 − s_2)

we have

E[ e_i^T x_p(t_1) x_p^T(t_2) e_i ] = ∫∫ e_i^T h(t_1 − s_1) D(x_s) δ(s_1 − s_2) D^T(x_s) h^T(t_2 − s_2) e_i ds_1 ds_2
                                   = ∫ e_i^T h(t_1 − s_1) D(x_s) D^T(x_s) h^T(t_2 − s_1) e_i ds_1

Expressing h(t) in terms of its Fourier transform,

E[ e_i^T x_p(t_1) x_p^T(t_2) e_i ]
  = ∫∫∫ e_i^T H(f_1) exp(j2πf_1(t_1 − s_1)) D(x_s) D^T(x_s) H^T(f_2) exp(j2πf_2(t_2 − s_1)) e_i ds_1 df_1 df_2
  = ∫∫∫ e_i^T H(f_1) D(x_s) D^T(x_s) H^T(f_2) e_i exp[ j2π(f_1 t_1 + f_2 t_2) ] exp[ −j2π(f_1 + f_2) s_1 ] ds_1 df_1 df_2
  = ∫∫ e_i^T H(f_1) D(x_s) D^T(x_s) H^T(f_2) e_i exp[ j2π(f_1 t_1 + f_2 t_2) ] δ(f_1 + f_2) df_1 df_2
  = ∫ e_i^T H(f_1) D(x_s) D^T(x_s) H^T(−f_1) e_i exp[ j2πf_1(t_1 − t_2) ] df_1

Therefore the autocorrelation function of x_p is a function only of t_1 − t_2, i.e., x_p is a wide-sense stationary stochastic process. The Fourier transform of the autocorrelation function is therefore given by

S_{x_{p_i}, x_{p_i}}(f) = e_i^T H(f) D(x_s, f) D^T(x_s, f) H^T(−f) e_i

The power spectral density S_{x_{p_i}, x_{p_i}}(f) can be calculated by solving H^T(f) x = e_i, multiplying the result by D^T(x_s, f), and taking the absolute value of the result.


Chapter 7

Steady-State Methods for Solving Periodic Circuits

7.1 Periodic Steady-State Analysis

Consider a nonautonomous circuit whose equations are given by

dq(x(t))/dt + f(x(t)) + b(t) = 0                                             (7.1)

The independent sources are assumed to be periodic with period T. Since the circuit is nonautonomous, the circuit steady-state response x(t) will also be periodic with period T. A trivial method for determining x(t) is to run a transient analysis and wait for all the waveforms to settle to their steady state. However, this may take too long, so we will discuss methods which compute the steady-state response x_s(t) over one period directly. The first two methods are in the time domain, while the last method is in the frequency domain.

7.1.1 Finite Difference Method

First discretize the time period [0, T] into n steps t_0, t_1, …, t_n, where t_0 = 0 and t_n = T. Further, define

h_i = t_i − t_{i−1}

Note that these steps need not be equal. We rewrite (7.1) at each of these time steps by discretizing the differential operator using Backward Euler (for example):

( q(x_1) − q(x_0) )/h_1 + f(x_1) + b(t_1) = 0
( q(x_2) − q(x_1) )/h_2 + f(x_2) + b(t_2) = 0
⋮
( q(x_n) − q(x_{n−1}) )/h_n + f(x_n) + b(t_n) = 0

Periodicity of the solution requires that x_0 = x_n. Then the above equations become

         [ ( q(x_1) − q(x_n) )/h_1 + f(x_1) + b(t_1)     ]   [ 0 ]
F_fd  =  [ ( q(x_2) − q(x_1) )/h_2 + f(x_2) + b(t_2)     ] = [ 0 ]
         [                    ⋮                          ]   [ ⋮ ]
         [ ( q(x_n) − q(x_{n−1}) )/h_n + f(x_n) + b(t_n) ]   [ 0 ]


The above system has nm equations in the nm variables

X = [ x_1  x_2  …  x_n ]^T

where m is the circuit size. Therefore these equations can be solved using Newton's method. The Jacobian for the above system of equations is

        [ C_1/h_1 + G_1        0             …          −C_n/h_1      ]
J_fd =  [   −C_1/h_2      C_2/h_2 + G_2      ⋱                        ]
        [                      ⋱             ⋱              0         ]
        [       0              …        −C_{n−1}/h_n   C_n/h_n + G_n  ]

where, as usual,

C_i = (∂q/∂x)|_{x_i},   G_i = (∂f/∂x)|_{x_i}

Instead of solving the above system of equations by direct factorization, we will use Krylov subspace methods. Recall that the success of a Krylov subspace method depends critically on the choice of a good preconditioner. For this case, write the Jacobian as

J_fd = L + B

where

     [ C_1/h_1 + G_1        0             …              0            ]
L =  [   −C_1/h_2      C_2/h_2 + G_2      ⋱                           ]
     [                      ⋱             ⋱              0            ]
     [       0              …        −C_{n−1}/h_n   C_n/h_n + G_n     ]

is the block lower triangular part of J_fd and

     [ 0   0   …   −C_n/h_1 ]
B =  [ 0   0   ⋱            ]
     [           ⋱     0    ]
     [ 0   …    0      0    ]

Instead of solving

J_fd ∆X = −F_fd

we solve

L^{-1} J_fd ∆X = (I + L^{-1}B) ∆X = −L^{-1} F_fd

Since L is a block lower bidiagonal matrix, solving linear systems of the form Lx = y is very cheap (O(n m^{1.3})). The preconditioned system can easily be solved using a Krylov subspace method: recall that the only computation involved is matrix-vector products, and multiplication by I + L^{-1}B can be performed very efficiently.

7.1.2 Shooting Method

Recall that transient analysis is the solution of an initial value problem, i.e., solving (7.1) given an initial condition x(t_0). Shooting methods are used for solving so-called boundary-value problems, where a desired solution x(t_n) at some time point t_n is given and the problem is to obtain an initial condition x(t_0) and (optionally) the trajectory x(t). We


can use shooting to determine the steady-state solution of (7.1). For this problem, we need to find x_0 such that at time T, x_0 = x(T). The solution trajectory can be viewed as a function of both the time t and the initial condition x_0, i.e.,

x(t) = φ(t, x_0)

Therefore the shooting equation can be written as

F_sh = φ(T, x_0) − x_0 = 0

The above equation can be viewed as a nonlinear equation in the m variables x_0 and can therefore be solved using Newton's method. The Jacobian for this system is

J_sh = ∂φ(T, x_0)/∂x_0 − I

In order to use Newton's method, we need to be able to evaluate F_sh and J_sh for a given x_0. φ(T, x_0), and therefore F_sh, can be evaluated by running a transient analysis with initial condition x_0 for time T. To evaluate J_sh, note that

∂φ(T, x_0)/∂x_0 = ∂x_n/∂x_0

Using the chain rule,

∂x_n/∂x_0 = Π_{i=1}^{n} ∂x_i/∂x_{i−1}

To evaluate ∂x_i/∂x_{i−1}, recall that

( q(x_i) − q(x_{i−1}) )/h_i + f(x_i) + b(t_i) = 0

Differentiating the above equation with respect to x_{i−1},

( C_i/h_i + G_i ) ∂x_i/∂x_{i−1} − C_{i−1}/h_i = 0

which yields

∂x_i/∂x_{i−1} = ( C_i/h_i + G_i )^{-1} C_{i−1}/h_i

Therefore

∂x_n/∂x_0 = Π_{i=1}^{n} ( C_i/h_i + G_i )^{-1} C_{i−1}/h_i

Note that the matrices C_i/h_i + G_i are already factored during the transient solution phase. The computational cost is therefore dominated by factoring the Jacobian J_sh, which is a dense matrix, so this method becomes impractical for large circuits. However, the following observation facilitates the use of Krylov subspace methods for shooting. Recall that the preconditioned coefficient matrix for the finite difference method is

I + L^{-1}B

The last block column of L^{-1}B is given by

[ −( C_1/h_1 + G_1 )^{-1} C_n/h_1                                       ]
[ −( C_2/h_2 + G_2 )^{-1} (C_1/h_2) ( C_1/h_1 + G_1 )^{-1} C_n/h_1      ]
[                              ⋮                                        ]
[ −Π_{i=1}^{n} ( C_i/h_i + G_i )^{-1} C_{i−1}/h_i                       ]


Note that the last entry is −∂x_n/∂x_0. This suggests that, instead of solving

J_sh δx_0 = −F_sh

we solve the system

( −L^{-1}B − I ) δX = [ 0  0  …  0  −F_sh ]^T

using a Krylov subspace method and recover δx_0 as the last m entries of δX.

7.1.3 Harmonic Balance Method

Unlike shooting and finite difference, which solve (7.1) in the time domain, harmonic balance solves it in the frequency domain. Since the circuit is nonoscillatory, if the input signal is T-periodic, the steady-state solution x(t) and its functions q(x) and f(x) are T-periodic. Since these signals are T-periodic, they can be expanded in Fourier series as follows:

b(t) = Σ_{i=−∞}^{∞} B_i exp(j2πift)
x(t) = Σ_{i=−∞}^{∞} X_i exp(j2πift)
f(t) = Σ_{i=−∞}^{∞} F_i exp(j2πift)
q(t) = Σ_{i=−∞}^{∞} Q_i exp(j2πift)

where f = 1/T. (7.1) can be written in the frequency domain as

Σ_{i=−∞}^{∞} [ j2πif Q_i + F_i + B_i ] exp(j2πift) = 0

Since the exp(j2πift) are orthogonal, it follows that

j2πif Q_i + F_i + B_i = 0

∀i. In practice, the infinite summations are truncated to a finite number of harmonics k. Collocating the above equation at i ∈ [−k, k], we have

j2π(−k)f Q_{−k} + F_{−k} + B_{−k} = 0
⋮
j2π·0·f Q_0 + F_0 + B_0 = 0
⋮
j2πkf Q_k + F_k + B_k = 0

In matrix form,

F_hb = j2πf Ω Q + F + B = 0                                                  (7.2)


where

Ω = diag( −k, …, 0, …, k )

Q = [ Q_{−k}  …  Q_0  …  Q_k ]^T

and B, F and X are similarly defined. (7.2) represents a system of m(2k + 1) equations in the m(2k + 1) unknowns X, which can be solved using Newton's method. The Jacobian for (7.2) is given by

J_hb = j2πf Ω ∂Q/∂X + ∂F/∂X

Therefore, given an X, one needs to evaluate Q and F and the Jacobian of the above system. The relationship between the various frequency-domain and time-domain quantities is best illustrated by an example. Consider a circuit of size 2 and k = 1. Then (7.2) is written as

        [ −1  0  0  0  0  0 ] [ Q^{(1)}_{−1} ]   [ F^{(1)}_{−1} ]   [ B^{(1)}_{−1} ]   [ 0 ]
        [  0 −1  0  0  0  0 ] [ Q^{(2)}_{−1} ]   [ F^{(2)}_{−1} ]   [ B^{(2)}_{−1} ]   [ 0 ]
j2πf ·  [  0  0  0  0  0  0 ] [ Q^{(1)}_{0}  ] + [ F^{(1)}_{0}  ] + [ B^{(1)}_{0}  ] = [ 0 ]
        [  0  0  0  0  0  0 ] [ Q^{(2)}_{0}  ]   [ F^{(2)}_{0}  ]   [ B^{(2)}_{0}  ]   [ 0 ]
        [  0  0  0  0  1  0 ] [ Q^{(1)}_{1}  ]   [ F^{(1)}_{1}  ]   [ B^{(1)}_{1}  ]   [ 0 ]
        [  0  0  0  0  0  1 ] [ Q^{(2)}_{1}  ]   [ F^{(2)}_{1}  ]   [ B^{(2)}_{1}  ]   [ 0 ]

where the superscripts denote the circuit variable index. Let P be a permutation matrix such that

P [ Q^{(1)}_{−1}  Q^{(2)}_{−1}  Q^{(1)}_{0}  Q^{(2)}_{0}  Q^{(1)}_{1}  Q^{(2)}_{1} ]^T = [ Q^{(1)}_{−1}  Q^{(1)}_{0}  Q^{(1)}_{1}  Q^{(2)}_{−1}  Q^{(2)}_{0}  Q^{(2)}_{1} ]^T

i.e.,

     [ 1 0 0 0 0 0 ]
     [ 0 0 1 0 0 0 ]
P =  [ 0 0 0 0 1 0 ]
     [ 0 1 0 0 0 0 ]
     [ 0 0 0 1 0 0 ]
     [ 0 0 0 0 0 1 ]

Let D denote the three-point DFT matrix, i.e.,

     [ 1   1   1  ]
D =  [ ω²  1   ω  ],    ω = exp(j2π/3)
     [ ω   1   ω² ]

Recall that

              [ 1   ω   ω² ]
D^{-1} = (1/3)[ 1   1   1  ]
              [ 1   ω²  ω  ]

Let

𝒟 = [ D  0 ]
    [ 0  D ]


where 0 is the 3 × 3 null matrix. Therefore

𝒟 P Q = [ D  0 ] [ Q^{(1)}_{−1}  Q^{(1)}_{0}  Q^{(1)}_{1}  Q^{(2)}_{−1}  Q^{(2)}_{0}  Q^{(2)}_{1} ]^T
        [ 0  D ]
      = [ q^{(1)}_{t_1}  q^{(1)}_{t_2}  q^{(1)}_{t_3}  q^{(2)}_{t_1}  q^{(2)}_{t_2}  q^{(2)}_{t_3} ]^T

Multiplying the above by P^{-1}, we have

P^{-1} 𝒟 P Q = [ q^{(1)}_{t_1}  q^{(2)}_{t_1}  q^{(1)}_{t_2}  q^{(2)}_{t_2}  q^{(1)}_{t_3}  q^{(2)}_{t_3} ]^T ≡ Q̃

i.e., the time-domain charge samples ordered by time point. This relationship can be used to compute the Jacobian J_hb. ∂Q/∂X can be written in terms of its time-domain counterpart as

∂Q/∂X = ∂( P 𝒟^{-1} P^{-1} Q̃ ) / ∂X = P 𝒟^{-1} P^{-1} ( ∂Q̃/∂X̃ ) P 𝒟 P^{-1}

Let C(t) be given by

C(t) = (∂q/∂x)|_{x(t)}

and define

𝒞 = diag( C(t_1), C(t_2), C(t_3) )

Note that

𝒞 = ∂Q̃/∂X̃

Therefore

J_hb = P 𝒟^{-1} P^{-1} ( j2πf Ω 𝒞 + 𝒢 ) P 𝒟 P^{-1}


This Jacobian is large and dense, and therefore storing, multiplying or factoring it is extremely inefficient. However, if Krylov subspace methods are used to solve

J_hb ΔX = −F_hb

the only computation involved is the matrix-vector product. This can be achieved using permutations (no cost), Fourier transforms (O(m(2k + 1) log(2k + 1)) operations) and sparse matrix-vector multiplications (O(m(2k + 1)) operations). Therefore, a properly preconditioned Krylov subspace method can quickly find the solution. For an appropriate choice of preconditioner, assume that C(t) and G(t) are constant. In this case, the Jacobian is

[ j2π(−k)fC + G                                               ]
[                j2π(−(k−1))fC + G                            ]
[                                     ⋱                       ]
[                                          j2πkfC + G         ]

Therefore, for problems with mild nonlinearity, the above matrix is a very good preconditioner for the harmonic balance Jacobian. For strongly nonlinear problems, harmonic balance runs into convergence problems.
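A minimal sketch of how the matrix-free Jacobian product and the block-diagonal preconditioner might be organised. The (2k+1) × m array layout (harmonics ordered −k..k), the function names, and the use of a single representative pair (C0, G0) in the preconditioner are illustrative assumptions; the FFT plays the role of P and 𝒟 up to ordering and scaling conventions:

import numpy as np

def jhb_matvec(V, C_t, G_t, f):
    """Apply the harmonic-balance Jacobian to V without forming it.
    V: (2k+1, m) harmonic coefficients of a vector, ordered -k..k.
    C_t, G_t: (2k+1, m, m) arrays holding C(t_j), G(t_j) at the collocation points."""
    n = V.shape[0]
    k = (n - 1) // 2
    # frequency -> time samples (inverse DFT along the harmonic axis)
    v_time = np.fft.ifft(np.fft.ifftshift(V, axes=0), axis=0) * n
    # apply the time-domain blocks C(t_j) and G(t_j)
    cv = np.einsum('jab,jb->ja', C_t, v_time)
    gv = np.einsum('jab,jb->ja', G_t, v_time)
    # time samples -> frequency, then scale the C part by j*2*pi*i*f (the Omega factor)
    CV = np.fft.fftshift(np.fft.fft(cv, axis=0), axes=0) / n
    GV = np.fft.fftshift(np.fft.fft(gv, axis=0), axes=0) / n
    harmonics = np.arange(-k, k + 1).reshape(-1, 1)
    return 1j * 2 * np.pi * harmonics * f * CV + GV

def block_preconditioner_solve(R, C0, G0, f):
    """Apply the block-diagonal preconditioner: one small solve per harmonic,
    using a single representative pair (C0, G0), e.g. time-averaged matrices."""
    n = R.shape[0]
    k = (n - 1) // 2
    out = np.empty_like(R, dtype=complex)
    for idx, h in enumerate(range(-k, k + 1)):
        out[idx] = np.linalg.solve(1j * 2 * np.pi * h * f * C0 + G0, R[idx])
    return out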

7.2 Oscillator Steady-State Analysis

The oscillator equations are given by

dq(x(t))/dt + f(x(t)) + b = 0

The difference between the above equation and (7.1) is that b is constant and the period is also an unknown. Similar to the nonautonomous case, the steady-state response can be computed either in the time domain or in the frequency domain. First consider time-domain methods.

7.2.1 Finite Difference

Let the time period be discretized into n steps h_1, . . . , h_n. Discretizing the differential operator using Backward Euler (for example), we have

(q(x_1) − q(x_0))/h_1 + f(x_1) + b = 0
(q(x_2) − q(x_1))/h_2 + f(x_2) + b = 0
    ⋮
(q(x_n) − q(x_{n−1}))/h_n + f(x_n) + b = 0

Enforcing that x_0 = x_n, we have

(q(x_1) − q(x_n))/h_1 + f(x_1) + b = 0
(q(x_2) − q(x_1))/h_2 + f(x_2) + b = 0
    ⋮
(q(x_n) − q(x_{n−1}))/h_n + f(x_n) + b = 0

The above equations can be solved using Newton-Raphson to obtain the steady-state response of the oscillator. However, unlike the nonautonomous case, the period T is unknown and therefore h_1, . . . , h_n are also unknown. One way to fix this problem is to insist that the ratios

h_i / T = α_i


are fixed throughout the Newton iteration. The α_i's can be predetermined by running an initial transient. This still leaves us with n equations and n + 1 unknowns, which implies that there is a continuum of solutions and Newton-Raphson will not work because the solutions are nonisolated. In terms of equations, the Jacobian of the above system is singular at the solution, with a rank deficiency of 1. This observation is physically consistent because, for the oscillator steady state, if x_s(t) is a solution then x_s(t + β) is also a solution for any fixed β. In order to rectify this, one of the variables is assigned a fixed value, which fixes the phase, and then the equations can be solved.

Therefore the system of equations is

(q(x_1) − q(x_n))/h_1 + f(x_1) + b = 0
(q(x_2) − q(x_1))/h_2 + f(x_2) + b = 0
    ⋮
(q(x_n) − q(x_{n−1}))/h_n + f(x_n) + b = 0
x_n − x_{n0} = 0

with unknowns

[ x_1  x_2  …  x_n  T ]^T

The Jacobian for this system of equations is

[ C_1/h_1 + G_1       0          …        −C_n/h_1         −(q(x_1) − q(x_n))/(h_1 T)     ]
[ −C_1/h_2      C_2/h_2 + G_2                               −(q(x_2) − q(x_1))/(h_2 T)     ]
[                        ⋱                                              ⋮                  ]
[      0        …     −C_{n−1}/h_n     C_n/h_n + G_n        −(q(x_n) − q(x_{n−1}))/(h_n T) ]
[      0        …          0                 1                           0                ]

The linear system can be efficiently solved using Krylov subspace methods, with the block lower-triangular portion of the Jacobian as a preconditioner. The preconditioned Krylov subspace method is then guaranteed to converge in at most m + 2 iterations, where m is the circuit size.
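A sketch (not the book's code) of how the residual and a dense Jacobian of this augmented system could be assembled for Newton's method. The callables q, f and their Jacobians dq, df, and all other names, are assumptions; the dense assembly is only meant to make the structure explicit, since in practice the system is solved matrix-free with the preconditioned Krylov method just described:

import numpy as np

def oscillator_fd_system(x, T, q, f, dq, df, b, alphas, phase_var, phase_val):
    """x: (n, m) current iterate x_1..x_n;  T: current period estimate.
    alphas: fixed ratios h_i/T;  phase_var/phase_val pin one entry of x_n."""
    n, m = x.shape
    h = alphas * T
    F = np.zeros(n * m + 1)
    J = np.zeros((n * m + 1, n * m + 1))
    for i in range(n):
        xi, xim1 = x[i], x[i - 1]            # i - 1 == -1 wraps around to x_n (periodicity)
        dqdt = (q(xi) - q(xim1)) / h[i]
        F[i * m:(i + 1) * m] = dqdt + f(xi) + b
        Ci, Cim1, Gi = dq(xi), dq(xim1), df(xi)
        J[i * m:(i + 1) * m, i * m:(i + 1) * m] = Ci / h[i] + Gi
        jprev = (i - 1) % n
        J[i * m:(i + 1) * m, jprev * m:(jprev + 1) * m] += -Cim1 / h[i]
        J[i * m:(i + 1) * m, -1] = -dqdt / T      # column of derivatives w.r.t. the period T
    F[-1] = x[-1, phase_var] - phase_val          # phase condition
    J[-1, (n - 1) * m + phase_var] = 1.0
    return F, J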

7.2.2 Shooting

The oscillator shooting equations are written as

φ(x(0), 0, T) − x(0) = 0
x_n − x_{n0} = 0

where φ(x(0), 0, T) is the nonlinear state-transition function. The derivatives with respect to x(0) and T can be written as

dx_n/dx_0 = Π_{i=1}^{n} dx_i/dx_{i−1} = Π_{i=1}^{n} (C_i/h_i + G_i)^{−1} (C_{i−1}/h_i)

dx_i/dT = (C_i/h_i + G_i)^{−1} [ (C_{i−1}/h_i) dx_{i−1}/dT + (q(x_i) − q(x_{i−1}))/(h_i T) ]


In matrix notation, these equations can be written as

[ C_1/h_1 + G_1                                              ] [ dx_1/dx_0 ]   [ C_0/h_1 ]
[ −C_1/h_2       C_2/h_2 + G_2                               ] [ dx_2/dx_0 ]   [    0    ]
[                      ⋱               ⋱                     ] [     ⋮     ] = [    ⋮    ]
[                             −C_{n−1}/h_n    C_n/h_n + G_n  ] [ dx_n/dx_0 ]   [    0    ]

[ C_1/h_1 + G_1                                              ] [ dx_1/dT ]   [ (q(x_1) − q(x_0))/(h_1 T)     ]
[ −C_1/h_2       C_2/h_2 + G_2                               ] [ dx_2/dT ]   [ (q(x_2) − q(x_1))/(h_2 T)     ]
[                      ⋱               ⋱                     ] [    ⋮    ] = [            ⋮                  ]
[                             −C_{n−1}/h_n    C_n/h_n + G_n  ] [ dx_n/dT ]   [ (q(x_n) − q(x_{n−1}))/(h_n T) ]

The Jacobian for oscillator shooting is of the form

[ dx_n/dx_0 − I    dx_n/dT ]
[       1             0    ]

Consider the finite-difference Jacobian J multiplied by the inverse of its block lower-triangular preconditioner L, where

L = [ C_1/h_1 + G_1       0        …        0             0 ]
    [ −C_1/h_2      C_2/h_2 + G_2           0               ]
    [                      ⋱                                ]
    [      0        …  −C_{n−1}/h_n   C_n/h_n + G_n       0 ]
    [      0        …       0               0             1 ]

J = [ C_1/h_1 + G_1       0        …      −C_n/h_1        −(q(x_1) − q(x_n))/(h_1 T)     ]
    [ −C_1/h_2      C_2/h_2 + G_2                          −(q(x_2) − q(x_1))/(h_2 T)     ]
    [                      ⋱                                           ⋮                  ]
    [      0        …   −C_{n−1}/h_n    C_n/h_n + G_n      −(q(x_n) − q(x_{n−1}))/(h_n T) ]
    [      0        …        0                1                         0                ]

Then

L^{−1} J = [ I   0   …   −dx_1/dx_0      −dx_1/dT ]
           [ 0   I   …   −dx_2/dx_0      −dx_2/dT ]
           [            ⋱      ⋮             ⋮    ]
           [ 0   …      I − dx_n/dx_0    −dx_n/dT ]
           [ 0   …           1               0    ]

Therefore, instead of directly solving with the Jacobian, the following system matrix is used with a Krylov subspace method:

[ 0   0   …   dx_1/dx_0    dx_1/dT ]   [ I  0  …  0  0 ]
[ 0   0   …   dx_2/dx_0    dx_2/dT ]   [ 0  I  …  0  0 ]
[             ⋱    ⋮          ⋮    ] − [       ⋱       ]
[ 0   …       dx_n/dx_0    dx_n/dT ]   [ 0  …  I  0  0 ]
[ 0   …           1           0    ]   [ 0  …  0  0  0 ]

The initial condition and the period are the last m+1 variables of this system of equations and all others are discarded.
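A sketch (illustrative only) of how the shooting sensitivities dx_n/dx_0 and dx_n/dT can be accumulated during the transient sweep using the recursions above. The containers C_list, G_list and q_list are assumed to hold C(x_i), G(x_i) and q(x_i) for i = 0, ..., n, and h holds the n step sizes:

import numpy as np

def shooting_sensitivities(C_list, G_list, q_list, h, T):
    n = len(h)                        # number of time steps over one period
    m = C_list[0].shape[0]            # circuit size
    dx_dx0 = np.eye(m)                # dx_0/dx_0 = I
    dx_dT = np.zeros((m, 1))          # dx_0/dT = 0
    for i in range(1, n + 1):
        A = C_list[i] / h[i - 1] + G_list[i]
        rhs0 = (C_list[i - 1] / h[i - 1]) @ dx_dx0
        rhsT = (C_list[i - 1] / h[i - 1]) @ dx_dT \
             + (q_list[i] - q_list[i - 1]).reshape(-1, 1) / (h[i - 1] * T)
        dx_dx0 = np.linalg.solve(A, rhs0)     # dx_i/dx_0
        dx_dT = np.linalg.solve(A, rhsT)      # dx_i/dT
    return dx_dx0, dx_dT              # the blocks entering the shooting Jacobian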

7.2.3 Harmonic Balance

Let ω_0 = 2π/T. Since x(t) is T-periodic,

x(t) = Σ_{i=−∞}^{∞} X_i exp(jiω_0 t)

Assume that the Fourier series is truncated to the kth harmonic, i.e.,

x(t) ≈ Σ_{i=−k}^{k} X_i exp(jiω_0 t)


Collocating the equations at these 2k + 1 harmonics, we have

jωΩQ + F + B = 0

where Ω, Q, F and B are as defined earlier. Similar to the time-domain case, we have a continuum of solutions, and in order for Newton-Raphson to be used we need to fix one condition, say X_i = X_{i0}, which can be determined by a DC analysis. So the combined set of equations to be solved is

[ jωΩQ + F + B ]   [ 0 ]
[ X_i − X_{i0} ] = [ 0 ]

with unknowns (X, ω). The Jacobian for this system of equations is

[ jωΩ P^{−1} 𝒟^{−1} P C̃ P^{−1} 𝒟 P + P^{−1} 𝒟^{−1} P G̃ P^{−1} 𝒟 P      jΩQ ]
[ e_i^T                                                                    0  ]

where P, 𝒟, C̃ and G̃ are as defined before. The matrix equation can be solved efficiently using a Krylov subspace method. The preconditioner for the regular harmonic balance, appended with an extra row and column that have 1 as the diagonal entry in the last position, works well.
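A sketch (illustrative, with all names assumed) of the augmented oscillator harmonic-balance residual, with ω as an extra unknown and one Fourier coefficient pinned to fix the phase. Q_of_X and F_of_X stand for the frequency-domain evaluations of q and f, for example via the FFT route sketched in the previous section:

import numpy as np

def oscillator_hb_residual(X, omega, Q_of_X, F_of_X, B, k, m, pin_index, pin_value):
    harmonics = np.repeat(np.arange(-k, k + 1), m)   # Omega acting blockwise on X
    Q = Q_of_X(X)
    F = F_of_X(X)
    r = 1j * omega * harmonics * Q + F + B           # j*omega*Omega*Q + F + B = 0
    phase = X[pin_index] - pin_value                 # X_i - X_i0 = 0 fixes the phase
    return np.concatenate([r, [phase]])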


Chapter 8

Periodic Small Signal Analysis of Circuits

Consider again the problem of solving

dq(x)/dt + f(x) + b(t) = 0    (8.1)

where b(t) is assumed to be T-periodic. Let x_s(t) be the steady-state T-periodic solution of this system. Now consider that a "small" input signal D(x)ξ(t) is added to the above equation, i.e.,

dq(x)/dt + f(x) + b + D(x)ξ(t) = 0    (8.2)

We would like to find the solution of the above equation. From linear perturbation analysis, the solution of the above system is x_s(t) + x_p(t), where x_p(t) is small. Substituting this form of the solution in (8.2), we have

0 = dq(x_s + x_p)/dt + f(x_s + x_p) + b + D(x_s + x_p)ξ(t)

Expanding q, f and D in Taylor series around x_s and ignoring second-order terms in the expansion, we have

0 = dq(x_s + x_p)/dt + f(x_s + x_p) + b + D(x_s + x_p)ξ(t)
  ≈ d[q(x_s) + (∂q/∂x)|_{x_s} x_p(t)]/dt + f(x_s) + (∂f/∂x)|_{x_s} x_p(t) + b + [D(x_s) + (∂D/∂x)|_{x_s} x_p(t)] ξ(t)

Ignoring all the second-order terms we get

0 = d[C(x_s(t)) x_p(t)]/dt + G(x_s(t)) x_p(t) + D(x_s(t))ξ(t)    (8.3)

where C and G are as defined before. Note that all the coefficients in the above equation are T-periodic. This small-signal analysis can be used for the so-called periodic AC and periodic noise analyses of circuits.

8.1 Periodic AC Analysis

Just as in AC analysis, D(x)ξ(t) is a small sinusoidal source A exp(j2πft) whose frequency f is swept over a range. Given that the system described by (8.3) is linear periodic time-varying, we will first establish the generic form of the response x_p(t). Recall that

x_p(t) = − ∫_{−∞}^{∞} h(t, s) D(x_s(s)) ξ(s) ds

The transfer function h(t, s) is T-periodic in both its arguments, i.e.,

h(t + T, s + T) = h(t, s)


Therefore h(t, s) can be expanded in a Fourier series as

h(t, s) = Σ_{i=−∞}^{∞} h_i(t − s) exp(j2πif_0 s)

The h_i's can be expressed in the frequency domain as

h_i(t) = ∫_{−∞}^{∞} H_i(f) exp(j2πft) df

Combining the above two expressions,

h(t, s) = Σ_{i=−∞}^{∞} ∫_{−∞}^{∞} H_i(f) exp(j2πf(t − s)) exp(j2πif_0 s) df

and

x_p(t) = − Σ_{i=−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} H_i(f) exp(j2πf(t − s)) exp(j2πif_0 s) A exp(j2πf_1 s) ds df
       = − Σ_{i=−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} H_i(f) A exp(j2πft) exp(j2π(if_0 + f_1 − f)s) ds df
       = − Σ_{i=−∞}^{∞} ∫_{−∞}^{∞} H_i(f) A exp(j2πft) δ(if_0 + f_1 − f) df
       = − Σ_{i=−∞}^{∞} H_i(if_0 + f_1) A exp(j2π(if_0 + f_1)t)

Therefore

x_p(t) = − Σ_{i=−∞}^{∞} X_{pi}(if_0 + f_1) exp(j2π(if_0 + f_1)t)    (8.4)

Consider x_p(t + T). From the above expression it is clear that

x_p(t + T) = x_p(t) exp(j2πf_1 T)

We can use this property to develop our periodic AC analysis method. (8.3) can be discretized on time points t_0, t_1, . . . , t_n as before:

0 = (C_1 x_{p1} − C_0 x_{p0})/h_1 + G_1 x_{p1} + A exp(j2πf_1 t_1)
0 = (C_2 x_{p2} − C_1 x_{p1})/h_2 + G_2 x_{p2} + A exp(j2πf_1 t_2)
    ⋮
0 = (C_n x_{pn} − C_{n−1} x_{p,n−1})/h_n + G_n x_{pn} + A exp(j2πf_1 t_n)

Since the large-signal response is periodic, C_0 = C_n. Also,

x_{p0} = exp(−j2πf_1 T) x_{pn}

Substituting these in the above set of equations,

[ C_1/h_1 + G_1        0          …      −(C_n/h_1) exp(−j2πf_1 T) ] [ x_{p1} ]        [ exp(j2πf_1 t_1) ]
[ −C_1/h_2       C_2/h_2 + G_2           0                         ] [ x_{p2} ]        [ exp(j2πf_1 t_2) ]
[                       ⋱                                          ] [   ⋮    ] = −A   [        ⋮        ]
[      0         …    −C_{n−1}/h_n       C_n/h_n + G_n             ] [ x_{pn} ]        [ exp(j2πf_1 t_n) ]


The above equation can be solved using a Krylov subspace method with L^{−1} as the preconditioner, where L is the block lower-triangular part of the coefficient matrix. The Fourier series expansion of the resulting x_{pi} yields X_{pi}(if_0 + f_1).
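A small sketch of the periodic AC system above, mainly to make the block structure and the wrap-around phase factor explicit. The dense assembly and all container names are illustrative assumptions; in practice the system is applied matrix-free:

import numpy as np

def periodic_ac_system(C_list, G_list, h, t, T, A, f1):
    """C_list[i], G_list[i]: C and G at t_{i+1} along the periodic steady state;
    h, t: step sizes and time points; A: source vector; f1: small-signal frequency."""
    n = len(h)
    m = C_list[0].shape[0]
    M = np.zeros((n * m, n * m), dtype=complex)
    rhs = np.zeros(n * m, dtype=complex)
    for i in range(n):
        M[i*m:(i+1)*m, i*m:(i+1)*m] = C_list[i] / h[i] + G_list[i]
        if i > 0:
            M[i*m:(i+1)*m, (i-1)*m:i*m] = -C_list[i-1] / h[i]
        else:
            # wrap-around block carries the phase factor exp(-j 2*pi*f1*T)
            M[:m, (n-1)*m:] = -(C_list[n-1] / h[0]) * np.exp(-2j*np.pi*f1*T)
        rhs[i*m:(i+1)*m] = -A * np.exp(2j*np.pi*f1*t[i])
    return M, rhs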

This analysis is done at a range of frequencies f_1. A naïve method would be to solve the above linear equation repeatedly for each f_1. However, note that the preconditioned coefficient matrix in the above equation is of the form

I + α(f_1)E,    E = L^{−1}B

where α(f_1) = exp(−j2πf_1 T) is a scalar and B collects the wrap-around corner block. It can be shown that the Krylov subspace for the family of matrices of the type I + αG is invariant for any α. Furthermore,

(I + α_2 G)v = (α_2/α_1)(I + α_1 G)v + (1 − α_2/α_1)v

Therefore, as α(f_1) is swept, the Krylov subspace vectors need not be generated using matrix-vector products; they can be obtained by algebraic manipulation of the basis vectors already computed for the previous value of α. As f_1 varies, the dimension of the Krylov subspace may need to be increased, but in spite of this the reuse results in large savings in this computation, especially if the frequency range is large. This is called Krylov subspace recycling.
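A quick numerical check of the recycling identity above (a self-contained sketch with arbitrary data):

import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6))
v = rng.standard_normal(6)
a1, a2 = 0.7, -2.3

lhs = (np.eye(6) + a2 * G) @ v
rhs = (a2 / a1) * ((np.eye(6) + a1 * G) @ v) + (1 - a2 / a1) * v
print(np.allclose(lhs, rhs))      # True: the new matrix-vector product is a linear
                                  # combination of quantities already available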

8.2 Periodic Noise Analysis

If the circuit is driven by large periodic excitations,

x_p(t) = − ∫_{−∞}^{∞} h(t, s) D(x_s(s)) dB(s)

where dB(s) denotes the vector of white-noise (Brownian-motion) increments driving the circuit.

To find the autocorrelation of one particular component of x_p(t) with itself,

R_{xpi}(t_1, t_2) = E[ ∫_{−∞}^{∞} ∫_{−∞}^{∞} e_i^T h(t_1, s_1) D(x_s(s_1)) dB(s_1) dB^T(s_2) D^T(x_s(s_2)) h^T(t_2, s_2) e_i ]
                  = ∫_{−∞}^{∞} e_i^T h(t_1, s_1) D(x_s(s_1)) D^T(x_s(s_1)) h^T(t_2, s_1) e_i ds_1

Using the fact that R(t_1, t_2) = R(t_1 + T, t_2 + T), rewrite the above equation as

R_{xpi}(τ, t_2) = ∫_{−∞}^{∞} e_i^T h(τ + t_2, s_1) D(x_s(s_1)) D^T(x_s(s_1)) h^T(t_2, s_1) e_i ds_1

Since R_{xpi}(τ, t_2) is periodic in t_2, let the stationary component of R_{xpi}(τ, t_2) be denoted by R^0_{xpi}(τ). Then

R^0_{xpi}(τ) = (1/T) ∫_0^T R_{xpi}(τ, t_2) dt_2
             = (1/T) ∫_0^T ∫_{−∞}^{∞} e_i^T h(τ + t_2, s_1) D(x_s(s_1)) D^T(x_s(s_1)) h^T(t_2, s_1) e_i ds_1 dt_2

The Fourier transform of R^0_{xpi}(τ) is

S^0_{xpi}(f) = ∫_{−∞}^{∞} R^0_{xpi}(τ) exp(−j2πfτ) dτ
             = ∫_{−∞}^{∞} (1/T) ∫_0^T ∫_{−∞}^{∞} e_i^T h(τ + t_2, s_1) D(x_s(s_1)) D^T(x_s(s_1)) h^T(t_2, s_1) e_i exp(−j2πfτ) ds_1 dt_2 dτ    (8.5)

Since the steady-state response is periodic,

D(x_s(s_1)) = Σ_{k=−∞}^{∞} D_k exp(j2πkf_0 s_1)


Furthermore,

h(t_1, t_2) = h(t_1 − t_2, t_2)
            = Σ_{k=−∞}^{∞} h_k(t_1 − t_2) exp(j2πkf_0 t_2)
            = Σ_{k=−∞}^{∞} ∫_{−∞}^{∞} H_k(f) exp(j2πf(t_1 − t_2)) df · exp(j2πkf_0 t_2)

Substituting all these expressions in (8.5),

S^0_{xpi}(f) = (1/T) ∫_{−∞}^{∞} ∫_0^T ∫_{−∞}^{∞} Σ_{k,l,m,n=−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} e_i^T H_k(f_1) exp(j2πf_1(τ + t_2 − s_1)) exp(j2πkf_0 s_1) D_l exp(j2πlf_0 s_1) D_m^T exp(j2πmf_0 s_1) H_n^T(f_2) exp(j2πf_2(t_2 − s_1)) exp(j2πnf_0 s_1) exp(−j2πfτ) e_i df_1 df_2 ds_1 dt_2 dτ

= Σ_{k,l,m,n=−∞}^{∞} (1/T) ∫_0^T ∫_{−∞}^{∞} ∫_{−∞}^{∞} e_i^T H_k(f_1) exp(j2πf_1(t_2 − s_1)) exp(j2πkf_0 s_1) D_l exp(j2πlf_0 s_1) D_m^T exp(j2πmf_0 s_1) H_n^T(f_2) exp(j2πf_2(t_2 − s_1)) exp(j2πnf_0 s_1) δ(f_1 − f) e_i df_1 df_2 ds_1 dt_2

= Σ_{k,l,m,n=−∞}^{∞} (1/T) ∫_0^T ∫_{−∞}^{∞} ∫_{−∞}^{∞} e_i^T H_k(f_1) exp(j2πf_1 t_2) D_l D_m^T H_n^T(f_2) exp(j2πf_2 t_2) δ(f_1 − f) δ(−f_1 − f_2 + (k + l + m + n)f_0) e_i df_1 df_2 dt_2

= Σ_{k,l,m,n=−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} e_i^T H_k(f_1) D_l D_m^T H_n^T(f_2) e_i δ(f_1 − f) δ(−f_1 − f_2 + (k + l + m + n)f_0) [exp(j2π(f_1 + f_2)T) − 1] / [j2π(f_1 + f_2)T] df_1 df_2

= Σ_{k,l,m,n=−∞}^{∞} e_i^T H_k(f) D_l D_m^T H_n^T((k + l + m + n)f_0 − f) e_i [exp(j2π(k + l + m + n)f_0 T) − 1] / [j2π(k + l + m + n)f_0 T]

= Σ_{k,l,m=−∞}^{∞} e_i^T H_k(f) D_l D_m^T H_{−k−l−m}^T(−f) e_i

= Σ_{k,l,m=−∞}^{∞} e_i^T H_k(f) D_l D_m^T H_{k+l+m}^∗(f) e_i

where ∗ denotes the conjugate transpose. H_k^T e_i can be computed using the recycled Krylov subspace method as in the periodic AC analysis case.
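A sketch (illustrative; the containers H and D, the truncation K and all names are assumptions) of evaluating the truncated final sum, given the harmonic transfer matrices H_k(f) and the Fourier blocks D_l of D(x_s(t)):

import numpy as np

def stationary_psd(H, D, K, i):
    """H, D: dicts mapping harmonic index -> (m x m) complex matrix at the frequency
    of interest; returns the sum truncated to |k|, |l|, |m| <= K."""
    total = 0.0 + 0.0j
    for k in range(-K, K + 1):
        for l in range(-K, K + 1):
            for mm in range(-K, K + 1):
                idx = k + l + mm
                if idx not in H:                 # term lost to the truncation
                    continue
                total += H[k][i, :] @ D[l] @ D[mm].T @ H[idx].conj().T[:, i]
    return total                                  # real up to truncation/round-off error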


Chapter 9

Model Order Reduction of Large Linear Circuits

Typically an extracted circuit of a layout is a few orders of magnitude larger than the original circuit. The increase in size is due to the fact that nodes in a circuit schematic translate to physical interconnects in layouts, and high-accuracy extractors will generate a large number of linear resistors, capacitors, inductors and mutual inductors for such interconnects. The simulation of such an extracted netlist is prohibitively expensive. The basic aim of a model order reduction method is to replace a large linear network with a much smaller model which captures the input-output behaviour of the large network over a specified range of frequencies.

9.1 Pade Approximation

A linear circuit is described by the following system of equations

C ẋ + Gx + b = 0

For a single-input single-output system b = de, where e is a scalar, and the output is

y = l^T x

The transfer function in the Laplace domain for this system is given by

Y(s)/E(s) = H(s) = −l^T (G + sC)^{−1} d

Let s_0 ∈ ℂ be an arbitrary but fixed expansion point such that G + s_0 C is nonsingular. Let

s = s_0 + σ
A = −(G + s_0 C)^{−1} C
r = −(G + s_0 C)^{−1} d

Then the transfer function becomes

H(s_0 + σ) = −l^T (s_0 C + G + σC)^{−1} d = l^T (I − σA)^{−1} r

Let A be diagonalized as A = W Λ W^{−1}. Then

H(s_0 + σ) = (l^T W)(I − σΛ)^{−1}(W^{−1} r) = Σ_{i=1}^{n} f_i g_i / (1 − σλ_i)

where f^T = l^T W and g = W^{−1} r.


This is impractical because, as the size of A increases, its diagonalization becomes very expensive. Therefore the above transfer function is approximated by

H_p(σ) = (b_0 + b_1σ + … + b_{p−1}σ^{p−1}) / (1 + a_1σ + … + a_pσ^p)

such that the Taylor series of H_p(σ) matches the Taylor series of H(s_0 + σ) at least in the first 2p terms. The Taylor series expansion of H(s_0 + σ) is given by

H(s_0 + σ) = l^T (I + σA + σ^2 A^2 + …) r = Σ_{i=0}^{∞} l^T A^i r σ^i = Σ_{i=0}^{∞} m_i σ^i

Equating H(s_0 + σ) and H_p(σ), we have

Σ_{i=0}^{p−1} b_i σ^i ≈ ( Σ_{i=0}^{p} a_i σ^i ) ( Σ_{i=0}^{∞} m_i σ^i )

with a_0 = 1. Equating the first 2p powers of σ (σ^0 through σ^{2p−1}), we get

[ b_0     ]   [ m_0                               ] [ 1       ]
[ b_1     ]   [ m_1     m_0                       ] [ a_1     ]
[ b_2     ] = [ m_2     m_1     m_0               ] [ a_2     ]
[  ⋮      ]   [  ⋮                   ⋱            ] [  ⋮      ]
[ b_{p−1} ]   [ m_{p−1} m_{p−2}  …   m_1    m_0   ] [ a_{p−1} ]

[ m_0      m_1    …   m_{p−1}  ] [ a_p     ]     [ m_p      ]
[ m_1      m_2    …   m_p      ] [ a_{p−1} ]     [ m_{p+1}  ]
[  ⋮                   ⋮       ] [  ⋮      ] = − [  ⋮       ]
[ m_{p−1}  m_p    …   m_{2p−2} ] [ a_1     ]     [ m_{2p−1} ]
                                                                (9.1)

The roots of the denominator polynomial

0 = 1 + a_1σ + … + a_pσ^p

can be computed using standard methods. Once the roots are determined, H_p(σ) can be written in partial fraction form as

H_p(σ) = Σ_{i=1}^{p} r_i / (σ − p_i)

where ri is the residue corresponding to the ith pole pi.
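A sketch (assuming the moments m_0, …, m_{2p−1} are available, and with all names illustrative) of this explicit moment-matching construction: solve the Hankel system in (9.1) for the denominator coefficients, back out the numerator, and convert to poles and residues. As the next paragraph points out, computing the moments explicitly is numerically problematic, so this is shown only to make the procedure concrete:

import numpy as np

def pade_from_moments(moments, p):
    m = np.asarray(moments)                        # needs at least 2p moments
    # Hankel system of (9.1):  H @ [a_p, ..., a_1]^T = -[m_p, ..., m_{2p-1}]^T
    H = np.array([[m[i + j] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(H, -m[p:2 * p])[::-1]      # reversed -> a_1, ..., a_p
    # numerator from the lower-triangular relation b_n = sum_{j<=n} a_j m_{n-j}, a_0 = 1
    a_full = np.concatenate([[1.0], a])
    b = np.array([sum(a_full[j] * m[n - j] for j in range(n + 1)) for n in range(p)])
    # denominator 1 + a_1*sigma + ... + a_p*sigma^p (np.poly1d wants highest power first)
    den = np.poly1d(np.concatenate([a[::-1], [1.0]]))
    num = np.poly1d(b[::-1])
    poles = np.roots(den.coeffs)
    residues = num(poles) / den.deriv()(poles)     # residue at a simple pole
    return poles, residues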

The problem with this approach is that the explicit computation of the m_i results in the same problem we discussed earlier in the generation of Krylov subspaces. Clearly, one should generate A^i r from the basis vectors of the Krylov subspace generated either by the Arnoldi process or by the Lanczos process.

9.2 Pade Via Lanczos

Recall that the Lanczos algorithm is


1. Initialization: set

   ρ_1 = ‖r‖_2,   η_1 = ‖l‖_2,   b_1 = r/ρ_1,   c_1 = l/η_1,   b_0 = 0,   c_0 = 0,   δ_0 = 1

2. Main iteration: for n = 1, 2, . . . , p do

   (a) compute

       δ_n = c_n^T b_n

   (b) set

       α_n = c_n^T A b_n / δ_n,   β_n = (δ_n/δ_{n−1}) η_n,   γ_n = (δ_n/δ_{n−1}) ρ_n

   (c) set

       b = A b_n − α_n b_n − β_n b_{n−1}
       c = A^T c_n − α_n c_n − γ_n c_{n−1}

   (d) set

       ρ_{n+1} = ‖b‖_2,   η_{n+1} = ‖c‖_2,   b_{n+1} = b/ρ_{n+1},   c_{n+1} = c/η_{n+1}

Note that the only difference in the above algorithm compared to the one described earlier is that the Krylov subspace basis vectors are normalized at every step. The reason for this normalization will be explained later on.
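A compact sketch of the normalized two-sided Lanczos iteration listed above (breakdown, i.e. δ_n = 0, is not handled here; look-ahead variants address that case; all names are illustrative):

import numpy as np

def lanczos_two_sided(A, r, l, p):
    """Returns B_p, C_p, the tridiagonal T_p and the scalars rho_1, eta_1, delta_1..p."""
    n = A.shape[0]
    B = np.zeros((n, p + 1)); C = np.zeros((n, p + 1))   # column 0 holds b_0 = c_0 = 0
    alpha = np.zeros(p + 1); beta = np.zeros(p + 1); gamma = np.zeros(p + 1)
    rho = np.zeros(p + 2); eta = np.zeros(p + 2); delta = np.zeros(p + 1)
    rho[1], eta[1] = np.linalg.norm(r), np.linalg.norm(l)
    B[:, 1], C[:, 1] = r / rho[1], l / eta[1]
    delta[0] = 1.0
    for k in range(1, p + 1):
        delta[k] = C[:, k] @ B[:, k]                     # breakdown if this vanishes
        alpha[k] = C[:, k] @ (A @ B[:, k]) / delta[k]
        beta[k] = delta[k] / delta[k - 1] * eta[k]
        gamma[k] = delta[k] / delta[k - 1] * rho[k]
        b = A @ B[:, k] - alpha[k] * B[:, k] - beta[k] * B[:, k - 1]
        c = A.T @ C[:, k] - alpha[k] * C[:, k] - gamma[k] * C[:, k - 1]
        rho[k + 1], eta[k + 1] = np.linalg.norm(b), np.linalg.norm(c)
        if k < p:
            B[:, k + 1], C[:, k + 1] = b / rho[k + 1], c / eta[k + 1]
    # T_p: diagonal alpha_1..alpha_p, subdiagonal rho_2..rho_p, superdiagonal beta_2..beta_p
    T = np.diag(alpha[1:]) + np.diag(rho[2:p + 1], -1) + np.diag(beta[2:p + 1], 1)
    return B[:, 1:], C[:, 1:], T, rho[1], eta[1], delta[1:]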

Recall that

1. c_n and b_n are biorthogonal


2. let

   T_p = [ α_1  β_2   0    …    0  ]        T̃_p = [ α_1  γ_2   0    …    0  ]
         [ ρ_2  α_2  β_3        0  ]              [ η_2  α_2  γ_3        0  ]
         [  0   ρ_3   ⋱            ]              [  0   η_3   ⋱            ]
         [             ⋱     β_p   ]              [             ⋱     γ_p   ]
         [  0    0    ρ_p    α_p   ]              [  0    0    η_p    α_p   ]

   then

   A B_p = B_p T_p + [ 0  …  0  b_{p+1} ρ_{p+1} ]
   A^T C_p = C_p T̃_p + [ 0  …  0  c_{p+1} η_{p+1} ]

   where, as before,

   B_p = [ b_1  b_2  …  b_p ],   C_p = [ c_1  c_2  …  c_p ]

3. since the basis vectors are normalized at every step, the two tridiagonal matrices are not equal (or conjugate in the complex case) but are related to each other by the following relationship

   T_p^T = D_p T̃_p D_p^{−1}

   where D_p = diag(δ_1, . . . , δ_p)

Now consider the evaluation of A^i r as follows:

A^i r = ρ_1 A^i b_1 = ρ_1 A^i B_p e_1 = ρ_1 B_p T_p^i e_1

Similarly,

l^T A^i = ((A^T)^i l)^T
        = (η_1 (A^T)^i c_1)^T
        = (η_1 (A^T)^i C_p e_1)^T
        = (η_1 C_p T̃_p^i e_1)^T
        = (η_1 C_p D_p^{−T} (T_p^T)^i D_p^T e_1)^T
        = (η_1 C_p D_p^{−1} (T_p^T)^i D_p e_1)^T


The last step follows since D_p is diagonal. Furthermore, D_p e_1 = δ_1 e_1, therefore

l^T A^i = (η_1 δ_1 C_p D_p^{−1} (T_p^T)^i e_1)^T = η_1 δ_1 e_1^T T_p^i D_p^{−1} C_p^T

Therefore, in order to evaluate l^T A^i r, let i = i′ + i″ with i′, i″ ≥ 0. Then

l^T A^i r = l^T A^{i′} A^{i″} r = η_1 δ_1 e_1^T T_p^{i′} D_p^{−1} C_p^T ρ_1 B_p T_p^{i″} e_1

Since C_p^T B_p = D_p, we have

l^T A^i r = η_1 δ_1 ρ_1 e_1^T T_p^{i′} T_p^{i″} e_1 = η_1 δ_1 ρ_1 e_1^T T_p^i e_1

Note that η_1 δ_1 ρ_1 = l^T r. Therefore

m_i = l^T A^i r = l^T r · e_1^T T_p^i e_1

Note that if the Lanczos process were carried out without normalization,

m_i = l^T A^i r = e_1^T T̂_p^{i′} Ĉ_p^T B̂_p T̂_p^{i″} e_1 = e_1^T T̂_p^{i′} D̂_p T̂_p^{i″} e_1

where ˆ denotes the corresponding quantities generated by the Lanczos process without normalization. Since D̂_p is an arbitrary diagonal matrix, one cannot in general write

m_i = k e_1^T T̂_p^i e_1

for some constant k, which is critical for the Pade approximation. Therefore, after running the Lanczos process for p steps, the moments m_0, . . . , m_{2p−1} are available. However, we need not

explicitly solve (9.1). To see this, first note that

l^T r e_1^T (I − σT_p)^{−1} e_1 = Σ_{i=0}^{∞} l^T r e_1^T T_p^i e_1 σ^i
                                = Σ_{i=0}^{2p−1} l^T A^i r σ^i + O(σ^{2p})
                                = Σ_{i=0}^{2p−1} m_i σ^i + O(σ^{2p})

Therefore

H_p(σ) ≈ l^T r e_1^T (I − σT_p)^{−1} e_1

Let T_p be diagonalized as

T_p = W_p Λ_p W_p^{−1}

Then

H_p(σ) = l^T r e_1^T W_p (I − σΛ_p)^{−1} W_p^{−1} e_1 = l^T r μ^T (I − σΛ_p)^{−1} ν

where

μ = W_p^T e_1,   ν = W_p^{−1} e_1


Therefore

H_p(σ) = Σ_{i=1}^{p} l^T r μ_i ν_i / (1 − σλ_{p,i}) = Σ_{i=1}^{p} [ −l^T r μ_i ν_i / λ_{p,i} ] / (σ − 1/λ_{p,i})

Therefore the poles and residues of the approximate transfer function are readily available. If the expansion is to be accurate over some frequency range −f_max ≤ f ≤ f_max, then it is recommended that

s_0 = 2πf_max
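A final sketch (illustrative; it reuses the lanczos_two_sided() sketch above and assumes the scalar l^T r is supplied) of turning the Lanczos tridiagonal T_p into the order-p Pade model via its eigendecomposition, returning the poles and residues in the variable σ = s − s_0:

import numpy as np

def pvl_poles_residues(T, lTr):
    lam, W = np.linalg.eig(T)                     # T_p = W_p Lambda_p W_p^{-1}
    e1 = np.eye(T.shape[0])[:, 0]
    mu = W.T @ e1                                 # mu = W_p^T e_1
    nu = np.linalg.solve(W, e1)                   # nu = W_p^{-1} e_1
    poles = 1.0 / lam                             # pole locations sigma_i = 1/lambda_i
    residues = -lTr * mu * nu / lam               # residues of r_i / (sigma - sigma_i)
    return poles, residues

def evaluate_reduced(poles, residues, sigma):
    """Evaluate the reduced-order transfer function H_p at a point sigma = s - s0."""
    return np.sum(residues / (sigma - poles))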