
University of Tartu, Institute of Computer Science

Introduction to Scientific Computing

MTAT.08.025

eero.vainikko@ut.ee

Spring 2016

2 Practical information

Lectures: Liivi 2 - 202/511, WED 14:15

Computer classes: Liivi 2 - 205, THU 12:15, Amnir Hadachi <amnir.hadachi@ut.ee>

3 ECTS

Lectures: 16h; Computer classes: 16h; Independent work: 46h

The final grade is formed from:

1. Active participation at lectures 10%

2. Stand-up quiz(es) 10%

3. Computer class activities 50%

4. Exam 30%

Course homepage (http://courses.cs.ut.ee/2016/isc)

3 Introduction 1.1 Syllabus

1 Introduction

1.1 Syllabus

Lectures:

• Python for Scientific Computing: NumPy, SciPy

• Scientific Computing - an Overview

• Floating point numbers, how to deal with roundoff errors

• Large problems in Linear Algebra, condition number

• Memory hierarchies and making use of them

• Numerical integration and differentiation

• Numerical solution of differential and integral equations

• Fast Fourier Transform

4 Introduction 1.1 Syllabus

Computer Classes (preliminary plan)

1. Python & Sage; Fibonacci numbers; Collatz conjecture

2. Discretization and round-off errors

3. NumPy arrays, matrices; LU-factorisation with the Gauss Elimination Method (GEM)

4. UT HPC server; LU-Factorization and GEM on HPC cluster

5. Floating point numbers

6. Fractals

7. Fourier series and Fast Fourier Transform

8. Discrete-time models and ordinary differential equations

5 Introduction 1.2 Literature

1.2 Literature

General Scientific Computing:

1. RH Landau, A First Course in Scientific Computing. Symbolic, Graphic, and Numeric Modeling Using Maple, Java, Mathematica, and Fortran90. Princeton University Press, 2005.

2. LR Scott, T Clark, B Bagheri. Scientific Parallel Computing. Princeton University Press, 2005.

3. MT Heath, Scientific Computing; ISBN: 007112229X, McGraw-Hill Companies, 2001.

4. JW Demmel, Applied Numerical Linear Algebra; ISBN: 0898713897, Society for Industrial & Applied Mathematics, Paperback, 1997.

6 Introduction 1.2 Literature

Python & Sage:

1. Sage Tutorials and documentation: accessible through UT Sage (http://sage.math.ut.ee)

2. Hans Petter Langtangen, A Primer on Scientific Programming with Python, Springer, 2009. Book webpage (http://vefur.simula.no/intro-programming/).

3. Hans Petter Langtangen, Python Scripting for Computational Science. Third Edition, Springer 2008. Website for the book (http://folk.uio.no/hpl/scripting/).

4. Neeme Kahusk, Sissejuhatus Pythonisse (Introduction to Python, in Estonian) (http://www.cl.ut.ee/inimesed/nkahusk/sissejuhatus-pythonisse/).

5. Travis E. Oliphant, Guide to NumPy (http://www.tramy.us), Trelgol Publishing 2006.

7 Introduction 1.3 Scripting vs programming

1.3 Scripting vs programming

1.3.1 What is a script?

• Very high-level, often short, program written in a high-level scripting language

• Scripting languages:

– Unix shells,

– Tcl,

– Perl,

– Python,

– Ruby,

– Scheme,

– Rexx,

– JavaScript,

– VisualBasic,

– ...

8 Introduction 1.3 Scripting vs programming

1.3.2 Characteristics of a script

• Glue other programs together

• Extensive text processing

• File and directory manipulation

• Often special-purpose code

• Many small interacting scripts may yield a big system

• Perhaps a special-purpose GUI on top

• Portable across Unix, Windows, Mac

• Interpreted program (no compilation+linking)

9 Introduction 1.3 Scripting vs programming

1.3.3 Why not stick to Java, C/C++ or Fortran?

Features of Perl and Python compared with Java, C/C++ and Fortran:

• shorter, more high-level programs

• much faster software development

• more convenient programming

• you feel more productive

• no variable declarations, but lots of consistency checks at run time

• lots of standardized libraries and tools

10 Introduction 1.4 Scripts yield short code

1.4 Scripts yield short code

Consider reading real numbers from a file, where each line can contain an arbitrary number of real numbers:

1.1 9 5.2
1.762543E-02
0 0.01 0.001 9 3 7

Python solution:

F = open(filename, 'r')
n = F.read().split()

Perl solution:

open F, $filename;
$s = join "", <F>;
@n = split ' ', $s;

11 Introduction 1.5 Performance issues

Ruby solution:

n = IO.readlines(filename).join.split

Doing this in C++ or Java requires at least a loop, and in Fortran and C quite some code lines are necessary.

1.5 Performance issues

1.5.1 Scripts can be slow

• Perl and Python scripts are first compiled to byte-code

• The byte-code is then interpreted

• Text processing is usually as fast as in C

• Loops over large data structures might be very slow

12 Introduction 1.5 Performance issues

for i in range(len(A)):
    A[i] = ...

• Fortran, C and C++ compilers are good at optimizing such loops at compile time and produce very efficient assembly code (e.g. 100 times faster)

• Fortunately, long loops in scripts can easily be migrated to Fortran or C (or special libraries like NumPy!), as the sketch below illustrates
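A minimal sketch (not from the original slides; the array size is an arbitrary illustrative choice) of such a migration: the elementwise loop is replaced by a single NumPy ufunc call that runs in compiled C code.

from numpy import arange, sin
A = arange(1000000, dtype=float)
# slow pure-Python version:
#   for i in range(len(A)): A[i] = sin(A[i])
# vectorised equivalent, executed in C:
A = sin(A)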

13 Introduction 1.5 Performance issues

1.5.2 Scripts may be fast enough

Read 100 000 (x,y) data points from a file and write (x, f(y)) out again

• Pure Python: 4s

• Pure Perl: 3s

• Pure Tcl: 11s

• Pure C (fscanf/fprintf): 1s

• Pure C++ (iostream): 3.6s

• Pure C++ (buffered streams): 2.5s

• Numerical Python modules: 2.2s (!)

• Remark: in practice, 100 000 data points are written and read in binary format, resulting in much smaller differences

14 Introduction 1.5 Performance issues

1.5.3 When scripting is convenient

• The application’s main task is to connect together existing components

• The application includes a graphical user interface

• The application performs extensive string/text manipulation

• The design of the application code is expected to change significantly

• CPU-time intensive parts can be migrated to C/C++ or Fortran

• The application can be made short if it operates heavily on list or hash structures

• The application is supposed to communicate with Web servers

• The application should run without modifications on Unix, Windows, and Macintosh computers, also when a GUI is included

15 Introduction 1.5 Performance issues

1.5.4 When to use C, C++, Java, Fortran

• Does the application implement complicated algorithms and data structures?

• Does the application manipulate large datasets so that execution speed is critical?

• Are the application’s functions well-defined and changing slowly?

• Will type-safe languages be an advantage, e.g., in large development teams?

16 Introduction 1.5 Performance issues

This course

• we use python

• through sage environment http://sage.math.ut.ee

• + numeric libraries numpy and scipy

NB! HOME EXERCISE for tomorrow's computer class:

• Get your “hands dirty” programming in Python!

– tutorials (found under Help) on the UT Sage server

– videos on YouTube, or

∗ last year's course used these slides; the lecture was recorded at Lecture 1 video (starting from the 47th minute...)

17 What is Scientific Computing? 2.1 Introduction to Scientific Computing

2 What is Scientific Computing?

2.1 Introduction to Scientific Computing

• Scientific computing – a subject at the crossroads of

– physics, chemistry, [social, engineering, ...] sciences

– problems typically translated into

∗ linear algebraic problems

∗ sometimes combinatorial problems

• a computational scientist needs knowledge of some aspects of

– numerical analysis

– linear algebra

– discrete mathematics

18 What is Scientific Computing? 2.1 Introduction to Scientific Computing

• An efficient implementation needs some understanding of

– computer architecture

∗ both on the CPU level

∗ on the level of parallel computing

– some specific skills of software management

Scientific Computing – the field of study concerned with constructing mathematical models and numerical solution techniques, and using computers to analyse and solve scientific and engineering problems

• typically – the application of computer simulation and other forms of computation to problems in various scientific disciplines.

19 What is Scientific Computing? 2.1 Introduction to Scientific Computing

Main purpose of Scientific Computing:

• mirroring

• prediction

of real-world processes'

• characteristics

– behaviour

– development

Example of computational simulation

ASTROPHYSICS: what happens when two black holes collide in the universe?

A situation which is

• impossible to observe in nature,

• impossible to test in a lab,

• barely possible to estimate theoretically

20 What is Scientific Computing? 2.1 Introduction to Scientific Computing

Computer simulation CAN HELP

But what is needed for simulation?

• an adequate mathematical model (Einstein's general relativity theory)

• an algorithm for the numerical solution of the equations

• a big enough computer for the actual realisation of the algorithms

Frequently there is a need to simulate situations that could be performed experimentally, but simulation on computers is needed because of:

• HIGH COST OF THE REAL EXPERIMENT. Examples:

– car crash-tests

– simulation of gas explosions

– nuclear explosion simulation

– behaviour of ships in Ocean waves

21 What is Scientific Computing? 2.1 Introduction to Scientific Computing

– airplane aerodynamics

– strength calculations in big constructions (for example oil-platforms)

– oil-field simulations

• TIME FACTOR. Some examples:

– Climate change predictions

– Geological development of the Earth (including oil-fields)

– Glacier flow model

– Weather prediction

• SCALE OF THE PROBLEM. Some examples:

– modeling chemical reactions on the molecular level

– development of biological ecosystems

22 What is Scientific Computing? 2.1 Introduction to Scientific Computing

• PROCESSES THAT CANNOT BE INTERVENED IN. Some examples:

– human heart model

– global economy model

23 What is Scientific Computing? 2.2 Specifics of computational problems

2.2 Specifics of computational problems

Usually, computer simulation consists of:

1. Creation of a mathematical model – usually in the form of equations – describing the physical properties and dependencies of the subject

2. Algorithm creation for numerical solution of the equations

3. Application of the algorithms in computer software

4. Using the created software on a computer in a particular simulationprocess

5. Visualizing the results in an understandable way, using computer graphics, for example

6. Integration of the results and repetition/redesign of any given step above

24 What is Scientific Computing? 2.2 Specifics of computational problems

Most often:

• algorithm

– written down in an intuitive way

– and/or using special modeling software

• computer program written, based on the algorithm

• testing

• iterating

Explorative nature of Scientific Computing!

25 What is Scientific Computing? 2.3 Mathematical model

2.3 Mathematical model

GENERAL STRATEGY:

REPLACE A DIFFICULT PROBLEM WITH A SIMPLER ONE

• which has the same solution

• or at least an approximate solution

– but still reflecting the most important features of the problem

26 What is Scientific Computing? 2.3 Mathematical model

SOME EXAMPLES OF SUCH TECHNIQUES:

• Replacing infinite spaces with finite ones (in the mathematical sense)

• Replacing infinite processes with finite ones

– replacing integrals with finite sums

– derivatives replaced by finite differences

• Replacing differential equations with algebraic equations

• Nonlinear equations replaced by linear equations

• Replacing higher order systems with lower order ones

• Replacing complicated functions with more simple ones (like polynomials)

• Replacing arbitrarily structured matrices with more simply structured ones

27 What is Scientific Computing? 2.3 Mathematical model

IN THIS COURSE WE TRY TO GIVE:

an overview of some methods and analysis for the development of reliable and efficient software for Scientific Computing

Reliability means here both the reliability of the software as well as the adequacy of the results – how much can one rely on the achieved results:

• Is the solution acceptable at all? Is it a real solution? (an extraneous solution? instability of the solution? etc.) Does the solution algorithm guarantee a solution at all?

• How big is the calculated solution's deviation from the real solution? How well does the simulation reflect the real world?

Another aspect: software reliability

28 What is Scientific Computing? 2.3 Mathematical model

Efficiency is expressed on various levels of the solution process:

• speed

• amount of used resources

Resources can be:

– Time

– Cost

– Number of CPU cycles

– Number of processes

– Amount of RAM

– Human labour

29 What is Scientific Computing? 2.3 Mathematical model

General formula:

minimise (approximation error × time)

Even more generally: minimise the time of the solution

An efficient method requires:

(i) good discretisation

(ii) good computer implementation

(iii) ...which depends on the computer architecture (processor speed, RAM size, memory bus speed, availability of cache, number of cache levels and other properties)

30 Approximation 3.1 Sources of approximation error

3 Approximation in Scientific Computing

3.1 Sources of approximation error

3.1.1 Error sources that are under our control

MODELLING ERRORS – some physical entities in the model are simplified or not taken into account at all (for example: air resistance, viscosity, friction, etc.)

(Usually this is OK, but sometimes not...) (You may want to look at http://en.wikipedia.org/wiki/Spherical_cow :-)

31 Approximation 3.1 Sources of approximation error

MEASUREMENT ERRORS – laboratory equipment has its precision

Errors also come out of

• random measurement deviation

• background noise

As an example, the Newton and Planck constants are used with 8-9 decimal places, while laboratory measurements are performed with much less precision!

THE EFFECT OF PREVIOUS CALCULATIONS – the input for calculations is often already the output of some previous calculation, with some computational errors

32 Approximation 3.1 Sources of approximation error

3.1.2 Errors created during the calculations

Discretisation

As an example:

• replacing derivatives with finite differences

• finite sums used instead of infinite series

• etc

Round-off errors – errors created during the calculations due to the limited precision with which the calculations are performed

33 Approximation 3.1 Sources of approximation error

Example 4.1

Suppose a computer program can find the function value f(x) for arbitrary x.

Task: find an algorithm for calculating an approximation to the derivative f′(x).

Algorithm: choose a small h > 0 and approximate

f′(x) ≈ [f(x+h) − f(x)]/h

The discretisation error is

T := |f′(x) − [f(x+h) − f(x)]/h|.

Using the Taylor series, we get the estimate

T ≤ (h/2)·‖f″‖∞. (1)

34 Approximation 3.1 Sources of approximation error

The computational error is created by using finite precision arithmetic, approximating the real f(x) with an approximation f̄(x). The computational error C is

C = | [f̄(x+h) − f̄(x)]/h − [f(x+h) − f(x)]/h | = | ([f̄(x+h) − f(x+h)] − [f̄(x) − f(x)])/h |,

which gives the estimate

C ≤ (2/h)·‖f̄ − f‖∞. (2)

The resulting error is

| f′(x) − [f̄(x+h) − f̄(x)]/h |,

which can be estimated using (1) and (2):

T + C ≤ (h/2)·‖f″‖∞ + (2/h)·‖f̄ − f‖∞. (3)

35 Approximation 3.1 Sources of approximation error

=⇒ if h is large, the discretisation error dominates; if h is small, the computational error starts to dominate. The sketch below illustrates this trade-off numerically.
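A minimal numerical sketch (not from the original slides; f = sin and x = 1 are arbitrary illustrative choices): the forward-difference error first decreases with h, roughly as (h/2)|f″|, and then grows again as round-off error of order 2ε/h takes over.

from numpy import sin, cos
x = 1.0
for k in range(1, 16):
    h = 10.0**(-k)
    approx = (sin(x + h) - sin(x)) / h        # forward difference
    print('h = 1e-%02d   error = %.3e' % (k, abs(approx - cos(x))))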

3.1.3 Forward error (error of the result) and backward error (error of the input data)

Consider computing y = f(x). Usually we can only compute an approximation of y; we denote the approximately calculated value by ȳ. We can observe two measures of the error associated with this computation.

Forward error

The forward error is a measure of the difference between the approximation ȳ and the true value y:

absolute forward error: |ȳ − y|

relative forward error: |ȳ − y| / |y|

36 Approximation 3.1 Sources of approximation error

The forward error would be a natural quantity to measure, but usually (since we do not know the actual value of y) we can only get an upper bound on it. Moreover, tight upper bounds on it can be very difficult to obtain.

Backward error

The question we might want to ask: for what input data did we actually perform the calculations? We would like to find the smallest ∆x for which

ȳ = f(x + ∆x)

– here ȳ is the exact value of f(x + ∆x). The value |∆x| (or |∆x|/|x|) is called the backward error. This means the backward error is the error we have in the input (just as the forward error is the error we observe in the output of the calculation or algorithm).

37 Approximation 3.1 Sources of approximation error

Condition number – an upper limit on their ratio:

forward error ≤ condition number × backward error

From (2) it follows that in Example 4.1 the value of the condition number is 2/h. In the given calculations all the values are absolute: actual values of the approximated entities are not considered. The relative forward error and relative backward error are in this case

C / |(f(x+h) − f(x))/h|   and   ‖f̄ − f‖∞ / ‖f‖∞.

Assuming that min_x |f′(x)| > 0, it follows easily from (2) that

C / |(f(x+h) − f(x))/h| ≤ [ 2‖f‖∞ / (h·min_x |f′(x)|) ] · ‖f̄ − f‖∞ / ‖f‖∞.

38 Approximation 3.1 Sources of approximation error

The value in the brackets [·] is called the relative condition number of the problem. In general:

• If the (absolute or relative) condition number is small,

– then an (absolute or relative) error in the input data can produce only a small error in the result.

• If the condition number is large,

– then a large error in the result can be caused even by a small error in the input data

– such problems are said to be ill-conditioned

39 Approximation 3.1 Sources of approximation error

• Sometimes, in the case of finite precision arithmetic:

– the backward error is much simpler to estimate than the forward error

– the backward error combined with the condition number makes it possible to estimate the forward error (absolute or relative)

40 Approximation 3.1 Sources of approximation error

Example 4.2 (one of the key problems in Scientific Computing). Consider solving the system of linear equations

Ax = b, (4)

where the input consists of

• a nonsingular n×n matrix A

• b ∈ Rⁿ

The task is to calculate an approximate solution x̄ ∈ Rⁿ. Suppose that instead of the exact matrix A we are given its approximation Ā = A + δA, but (for simplicity) b is known exactly. The solution x̄ = x + δx satisfies the system of equations

(A + δA)(x + δx) = b. (5)

41 Approximation 3.1 Sources of approximation error

Then from (4), (5) it follows that

(A + δA)·δx = −(δA)·x.

Multiplying by (A + δA)⁻¹ and taking norms, we get the estimate

‖δx‖ ≤ ‖(A + δA)⁻¹‖·‖δA‖·‖x‖.

It follows that if x ≠ 0 and A ≠ 0, we have

‖δx‖/‖x‖ ≤ ‖(A + δA)⁻¹‖·‖A‖ · (‖δA‖/‖A‖) ≅ ‖A⁻¹‖·‖A‖ · (‖δA‖/‖A‖), (6)

where the last approximation holds for sufficiently small δA.

42 Approximation 3.1 Sources of approximation error

• =⇒ for the calculation of x an important factor is the relative condition number κ(A) := ‖A⁻¹‖·‖A‖.

– It is usually called the condition number of the matrix A.

– It depends on the norm ‖ · ‖.

Therefore, common practice for forward error estimation is to:

• find an estimate of the backward error

• use the estimate (6), as the sketch below demonstrates
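A small sketch (not from the original slides; the random test matrix and perturbation size are arbitrary illustrative choices) checking estimate (6) numerically with NumPy:

from numpy import random, dot, linalg
n = 5
A = random.uniform(-1, 1, (n, n))
x = random.uniform(-1, 1, n)
b = dot(A, x)                                  # exact right-hand side
dA = 1e-10 * random.uniform(-1, 1, (n, n))     # small perturbation of A
x_pert = linalg.solve(A + dA, b)               # solution of perturbed system
rel_fwd = linalg.norm(x_pert - x) / linalg.norm(x)
rel_bwd = linalg.norm(dA, 2) / linalg.norm(A, 2)
print('kappa(A) = %.3g' % linalg.cond(A))
print('relative forward error %.3g <= kappa(A) * relative backward error %.3g'
      % (rel_fwd, linalg.cond(A) * rel_bwd))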

43 Approximation 3.2 Floating-Point Numbers

3.2 Floating-Point Numbers

The number −3.1416 in scientific notation is −0.31416 × 10¹ or (as computer output) -0.31416E01:

−.31416 × 10¹
 (sign)(mantissa)(base)(exponent)

– floating-point numbers in computer notation. Usually the base is 2 (with a few exceptions: the IBM 370 had base 16; base 10 is used in most hand-held calculators; base 3 in an ill-fated Russian computer).

For example, .10101₂ × 2³ = 5.25₁₀.

Formally, a floating-point number system F is characterised by four integers:

• Base (or radix) β > 1

• Precision p > 0

• Exponent range [L,U]: L < 0 < U

44 Approximation 3.2 Floating-Point Numbers

Any floating-point number x ∈ F has the form

x = ±(d₀ + d₁β⁻¹ + ... + d_{p−1}β^{1−p})·β^E, (7)

where the integers dᵢ satisfy

0 ≤ dᵢ ≤ β − 1, i = 0, ..., p−1,

and E ∈ [L,U] (E is a positive, zero or negative integer). The number E is called the exponent, and the part in the brackets (·) is called the mantissa.

Example. In arithmetic with precision 4 and base 10, the number 2347 is represented as

(2 + 3×10⁻¹ + 4×10⁻² + 7×10⁻³) × 10³.

Note that an exact representation of this number with precision 3 and base 10 is not possible!

45 Approximation 3.3 Normalised floating-point numbers

3.3 Normalised floating-point numbers

A number is normalised if d₀ > 0.

Example. The number .10101₂ × 2³ is normalised, but .010101₂ × 2⁴ is not.

Floating-point systems are usually normalised because:

• Representation of each number is then unique

• No digits are wasted on leading zeros

• In a normalised binary (β = 2) system, the leading bit is always 1 =⇒ no need to store it!

The smallest positive normalised number in form (7) is 1 × β^L – the underflow threshold. (In case of underflow, the result is smaller than the smallest representable floating-point number.)

46 Approximation 3.3 Normalised floating-point numbers

The largest positive normalised number in form (7) is

(β − 1)·(1 + β⁻¹ + ... + β^{1−p})·β^U = (1 − β^{−p})·β^{U+1}

– the overflow threshold.

If the result of an arithmetic operation is an exact number not representable in the floating-point number system F, the result is represented by a (hopefully close) element of F. (Rounding)

47 Approximation 3.4 IEEE (Normalised) Arithmetics

3.4 IEEE (Normalised) Arithmetics

• β = 2 (binary)

• d0 = 1 always – not stored

Single precision:

• p = 24, L = −126, U = 127

• Underflow threshold = 2⁻¹²⁶ ≈ 10⁻³⁸

• Overflow threshold = 2¹²⁷ · (2 − 2⁻²³) ≈ 2¹²⁸ ≈ 10³⁸

• One bit for sign, 23 for mantissa and 8 for exponent:

| 1 | 23 | 8 |

– a 32-bit word.

48 Approximation 3.4 IEEE (Normalised) Arithmetics

Double precision:

• p = 53, L = −1022, U = 1023

• Underflow threshold = 2⁻¹⁰²² ≈ 10⁻³⁰⁸

• Overflow threshold = 2¹⁰²³ · (2 − 2⁻⁵²) ≈ 2¹⁰²⁴ ≈ 10³⁰⁸

• One bit for sign, 52 for mantissa and 11 for exponent:

| 1 | 52 | 11 |

– a 64-bit word

• IEEE arithmetic standard – rounding towards the nearest element in F.

• (If the result is exactly between two elements of F, rounding is towards the number whose least significant bit equals 0 – rounding towards the closest even number.)
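These parameters can be inspected directly; a small sketch (not from the original slides) using NumPy's finfo:

from numpy import finfo, float32, float64
for t in (float32, float64):
    fi = finfo(t)
    print('%s: p = %d bits, eps = %g, smallest normalised = %g, max = %g'
          % (t.__name__, fi.nmant + 1, fi.eps, fi.tiny, fi.max))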

49 Approximation 3.4 IEEE (Normalised) Arithmetics

IEEE subnormal numbers – unnormalised numbers with the minimal possible exponent.

• Between 0 and the smallest normalised floating-point value.

• Guarantees that fl(x − y) (the result of the operation x − y in floating-point arithmetic) is never zero when x ≠ y – to avoid underflow in such situations

IEEE symbols Inf and NaN – Inf (±∞), NaN (Not a Number)

• Inf – in case of overflow

– x/±∞ = 0 for arbitrary finite floating-point x

– +∞ + ∞ = +∞, etc.

• NaN is returned when an operation does not have a well-defined finite or infinite value, for example

50 Approximation 3.4 IEEE (Normalised) Arithmetics

– ∞ − ∞

– 0/0

– √−1

– NaN ⊙ x (where ⊙ is one of the operations +, −, *, /), etc.

IEEE also defines double extended floating-point values

• 64-bit mantissa; 15-bit exponent

• most compilers do not support it

• Many platforms also support quadruple precision (double*16)

– often emulated in lower precision and therefore with slow performance
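A tiny sketch (not from the original slides) of these Inf/NaN rules, using plain Python floats, which follow IEEE double precision:

inf = float('inf')
print(1e308 * 10)                     # inf (overflow)
print(1.0 / inf)                      # 0.0: x/+-inf == 0 for finite x
print(inf + inf)                      # inf
nan = inf - inf                       # inf - inf has no well-defined value
print('%s %s' % (nan, nan == nan))    # nan False: NaN is unequal even to itself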

51 Python in SC 4.1 Numerical Python (NumPy)

4 Python in Scientific Computing

4.1 Numerical Python (NumPy)

• NumPy enables efficient numerical computing in Python

• NumPy is a package of modules which offers efficient arrays (contiguous storage) with associated array operations coded in C or Fortran

• There are three implementations of Numerical Python:

– Numeric from the mid 90s

– numarray from about 2000

– numpy from 2006 (the new and leading implementation)

• numpy (by Travis Oliphant) – recommended

52 Python in SC 4.1 Numerical Python (NumPy)

# A taste of NumPy: a least-squares procedure
from numpy import *
n = 100; x = linspace(0.0, 1.0, n)       # coordinates
y_line = -2*x + 3
y = y_line + random.normal(0, 0.55, n)   # line with noise
# create and solve least squares system:
A = array([x, ones(n)])
A = A.transpose()
result = linalg.lstsq(A, y)
# result is a 4-tuple, the solution (a,b) is the 1st entry:
a, b = result[0]
p = [(x[i], y[i]) for i in range(len(x))]
p0 = (0, a*0 + b); p1 = (1, a*1 + b)
G = list_plot(p, color='red') + line([(0,3),(1,1)], color='blue')
G = G + line([p0, p1], color='red')
G = G + text('Blue - original line -2*x+3', (0.7, 3.5), color='blue')
G = G + text('Red - line fitted to data', (0.3, 0.5), color='red')
show(G)   # note: retype the quote characters when copy-pasting code to Sage

53 Python in SC 4.1 Numerical Python (NumPy)

Resulting plot: (figure not reproduced in this transcript)

54 Python in SC 4.1 Numerical Python (NumPy)

4.1.1 NumPy: making arrays

>>> from numpy import *
>>> n = 4
>>> a = zeros(n)        # one-dim. array of length n
>>> print a             # str(a); float (C double) is default type
[ 0.  0.  0.  0.]
>>> a                   # repr(a)
array([ 0.,  0.,  0.,  0.])
>>> p = q = 2
>>> a = zeros((p,q,3))  # p*q*3 three-dim. array
>>> print a
[[[ 0.  0.  0.]
  [ 0.  0.  0.]]

 [[ 0.  0.  0.]
  [ 0.  0.  0.]]]
>>> a.shape             # a's dimensions
(2, 2, 3)

55 Python in SC 4.1 Numerical Python (NumPy)

4.1.2 NumPy: making float, int, complex arrays

>>> a = zeros(3)
>>> print a.dtype           # a's data type
float64
>>> a = zeros(3, int)
>>> print a, a.dtype
[0 0 0] int64
(or int32, depending on architecture)
>>> a = zeros(3, float32)   # single precision
>>> print a
[ 0.  0.  0.]
>>> print a.dtype
float32
>>> a = zeros(3, complex); a
array([ 0.+0.j,  0.+0.j,  0.+0.j])
>>> a.dtype
dtype('complex128')

56 Python in SC 4.1 Numerical Python (NumPy)

• Given an array a, make a new array of same dimension and data type:

>>> x = zeros(a.shape, a.dtype)

4.1.3 Array with a sequence of numbers

• linspace(a, b, n) generates n uniformly spaced coordinates, starting with a and ending with b

>>> x = linspace(-5, 5, 11)

>>> print x

[-5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5.]

57 Python in SC 4.1 Numerical Python (NumPy)

• arange works like range:

>>> x = arange(-5, 5, 1, float)

>>> print x # upper limit 5 is not included

[-5. -4. -3. -2. -1. 0. 1. 2. 3. 4.]

4.1.4 Warning: arange is dangerous

• arange's upper limit may or may not be included (due to round-off errors)

4.1.5 Array construction from a Python list

array(list, [datatype]) generates an array from a list:

>>> pl = [0, 1.2, 4, -9.1, 5, 8]

>>> a = array(pl)

58 Python in SC 4.1 Numerical Python (NumPy)

• The array elements are of the simplest possible type:

>>> z = array([1, 2, 3])

>>> print z # int elements possible

[1 2 3]

>>> z = array([1, 2, 3], float)

>>> print z

[ 1.  2.  3.]

• A two-dim. array from two one-dim. lists:

>>> x = [0, 0.5, 1]; y = [-6.1, -2, 1.2] # Python lists

>>> a = array([x, y]) # form array with x and y as rows

59 Python in SC 4.1 Numerical Python (NumPy)

• From array to list: alist = a.tolist()

4.1.6 From “anything” to a NumPy array

• Given an object a, a = asarray(a)

converts a to a NumPy array (if possible/necessary)

• Arrays can be ordered as in C (default) or Fortran:

a = asarray(a, order='Fortran')
isfortran(a)   # returns True if a's order is Fortran

60 Python in SC 4.1 Numerical Python (NumPy)

• Use asarray to, e.g., allow flexible arguments in functions:

def myfunc(some_sequence, ...):

a = asarray(some_sequence)

# work with a as array

myfunc([1,2,3], ...)

myfunc((-1,1), ...)

myfunc(zeros(10), ...)

61 Python in SC 4.1 Numerical Python (NumPy)

4.1.7 Changing array dimensions

>>> a = array([0, 1.2, 4, -9.1, 5, 8])

>>> a.shape = (2,3) # turn a into a 2x3 matrix

>>> a.shape

(2, 3)

>>> a.size

6

>>> a.shape = (a.size,)   # turn a into a vector of length 6 again

>>> a.shape

(6,)

>>> a = a.reshape(2,3) # same effect as setting a.shape

>>> a.shape

(2, 3)

62 Python in SC 4.1 Numerical Python (NumPy)

4.1.8 Array initialization from a Python function

>>> def myfunc(i, j):
...     return (i+1)*(j+4-i)
...

>>> # make 3x6 array where a[i,j] = myfunc(i,j):

>>> a = fromfunction(myfunc, (3,6))

>>> a

array([[ 4., 5., 6., 7., 8., 9.],

[ 6., 8., 10., 12., 14., 16.],

[ 6., 9., 12., 15., 18., 21.]])

63 Python in SC 4.1 Numerical Python (NumPy)

4.1.9 Basic array indexing

a = linspace(-1, 1, 6)

# array([-1. , -0.6, -0.2, 0.2, 0.6, 1. ])

a[2:4] = -1 # set a[2] and a[3] equal to -1

a[-1] = a[0] # set last element equal to first one

a[:] = 0 # set all elements of a equal to 0

a.fill(0) # set all elements of a equal to 0

a.shape = (2,3) # turn a into a 2x3 matrix

print a[0,1] # print element (0,1)

a[i,j] = 10 # assignment to element (i,j)

a[i][j] = 10 # equivalent syntax (slower)

print a[:,k] # print column with index k

print a[1,:] # print second row

a[:,:] = 0 # set all elements of a equal to 0

64 Python in SC 4.1 Numerical Python (NumPy)

4.1.10 More advanced array indexing

>>> a = linspace(0, 29, 30)

>>> a.shape = (5,6)

>>> a

array([[  0.,   1.,   2.,   3.,   4.,   5.],
       [  6.,   7.,   8.,   9.,  10.,  11.],
       [ 12.,  13.,  14.,  15.,  16.,  17.],
       [ 18.,  19.,  20.,  21.,  22.,  23.],
       [ 24.,  25.,  26.,  27.,  28.,  29.]])

>>> a[1:3,:-1:2] # a[i,j] for i=1,2 and j=0,2,4

array([[ 6., 8., 10.],

[ 12., 14., 16.]])

>>> a[::3,2:-1:2] # a[i,j] for i=0,3 and j=2,4

array([[ 2., 4.],

[ 20., 22.]])

65 Python in SC 4.1 Numerical Python (NumPy)

>>> i = slice(None, None, 3); j = slice(2, -1, 2)

>>> a[i,j]

array([[ 2., 4.],

[ 20., 22.]])

4.1.11 Slices refer the array data

• With a as list, a[:] makes a copy of the data

• With a as array, a[:] is a reference to the data!!!

>>> b = a[1,:]   # extract 2nd row of a

>>> print a[1,1]

12.0

>>> b[1] = 2

>>> print a[1,1]

66 Python in SC 4.1 Numerical Python (NumPy)

2.0   # change in b is reflected in a

• Take a copy to avoid referencing via slices:

>>> b = a[1,:].copy()

>>> print a[1,1]

12.0

>>> b[1] = 2 # b and a are two different arrays now

>>> print a[1,1]

12.0 # a is not affected by change in b

67 Python in SC 4.1 Numerical Python (NumPy)

4.1.12 Integer arrays as indices

• An integer array or list can be used as a (vectorized) index:

>>> a = linspace(1, 8, 8)

>>> a

array([ 1., 2., 3., 4., 5., 6., 7., 8.])

>>> a[[1,6,7]] = 10

>>> a # ?

array([ 1., 10., 3., 4., 5., 6., 10., 10.])

>>> a[range(2,8,3)] = -2

>>> a # ?

array([ 1., 10., -2., 4., 5., -2., 10., 10.])

>>> a[a < 0] # pick out the negative elements of a

array([-2., -2.])

>>> a[a < 0] = a.max()

>>> a # ?

68 Python in SC 4.1 Numerical Python (NumPy)

array([  1.,  10.,  10.,   4.,   5.,  10.,  10.,  10.])

• Such array indices are important for efficient vectorized code

4.1.13 Loops over arrays

• Standard loop over each element:

for i in xrange(a.shape[0]):

for j in xrange(a.shape[1]):

a[i,j] = (i+1)*(j+1)*(j+2)

print 'a[%d,%d]=%g ' % (i,j,a[i,j]),

print # newline after each row

69 Python in SC 4.1 Numerical Python (NumPy)

• A standard for loop iterates over the first index:

>>> print a

[[ 2. 6. 12.]

[ 4. 12. 24.]]

>>> for e in a:

... print e

...

[ 2. 6. 12.]

[  4.  12.  24.]

• View the array as one-dimensional and iterate over all elements:

for e in a.flat:

print e

70 Python in SC 4.1 Numerical Python (NumPy)

• For loop over all index tuples and values:

>>> for index, value in ndenumerate(a):

... print index, value

...

(0, 0) 2.0

(0, 1) 6.0

(0, 2) 12.0

(1, 0) 4.0

(1, 1) 12.0

(1, 2) 24.0

71 Python in SC 4.1 Numerical Python (NumPy)

4.1.14 Array computations

• Arithmetic operations can be used with arrays:

b = 3*a - 1   # a is array, b becomes array

1) compute t1 = 3*a, 2) compute t2 = t1 - 1, 3) set b = t2

72 Python in SC 4.1 Numerical Python (NumPy)

• Array operations are much faster than element-wise operations:

>>> import time                  # module for measuring CPU time
>>> a = linspace(0, 1, 1E+07)    # create some array
>>> t0 = time.clock()
>>> b = 3*a - 1
>>> t1 = time.clock()            # t1-t0 is the CPU time of 3*a-1
>>> for i in xrange(a.size): b[i] = 3*a[i] - 1
>>> t2 = time.clock()
>>> print '3*a-1: %g sec, loop: %g sec' % (t1-t0, t2-t1)
3*a-1: 2.09 sec, loop: 31.27 sec

4.1.15 In-place array arithmetics

• Expressions like 3*a-1 generate temporary arrays

73 Python in SC 4.1 Numerical Python (NumPy)

• With in-place modifications of arrays, we can avoid temporary arrays (to some extent)

b = a
b *= 3   # or multiply(b, 3, b)
b -= 1   # or subtract(b, 1, b)

Note: a is changed; use b = a.copy() to avoid that

74 Python in SC 4.1 Numerical Python (NumPy)

• In-place operations:

a *= 3.0   # multiply a's elements by 3
a -= 1.0   # subtract 1 from each element
a /= 3.0   # divide each element by 3
a += 1.0   # add 1 to each element
a **= 2.0  # square all elements

• Assign values to all elements of an existing array:

a[:] = 3*c - 1

75 Python in SC 4.1 Numerical Python (NumPy)

4.1.16 Standard math functions can take array arguments

# let b be an array

c = sin(b)

c = arcsin(c)

c = sinh(b)

# same functions for the cos and tan families

c = b**2.5 # power function

c = log(b)

c = exp(b)

c = sqrt(b)

76 Python in SC 4.1 Numerical Python (NumPy)

4.1.17 Other useful array operations

# a is an array

a.clip(min=3, max=12) # clip elements

a.mean(); mean(a) # mean value

a.var(); var(a) # variance

a.std(); std(a) # standard deviation

median(a)

cov(x,y) # covariance

trapz(a) # Trapezoidal integration

diff(a)     # finite differences (da/dx)

# more Matlab-like functions:
corrcoef, cumprod, diag, eig, eye, fliplr, flipud, max, min,
prod, ptp, rot90, squeeze, sum, svd, tri, tril, triu

77 Python in SC 4.1 Numerical Python (NumPy)

4.1.18 Temporary arrays

• Let us evaluate f1(x) for a vector x:

def f1(x):
    return exp(-x*x)*log(1+x*sin(x))

1. temp1 = -x

2. temp2 = temp1*x

3. temp3 = exp(temp2)

4. temp4 = sin(x)

5. temp5 = x*temp4

6. temp6 = 1 + temp5

7. temp7 = log(temp6)

8. result = temp3*temp7
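A sketch (not from the original slides; f1_inplace is an illustrative name) of the same evaluation using the in-place operations of Section 4.1.15 to cut down on temporaries:

from numpy import exp, log, sin

def f1_inplace(x):
    t = sin(x)        # one working array
    t *= x            # x*sin(x), in place
    t += 1            # 1 + x*sin(x)
    log(t, t)         # log(1 + x*sin(x)), written back into t
    t *= exp(-x*x)    # still one temporary for exp(-x*x)
    return t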

78 Python in SC 4.1 Numerical Python (NumPy)

4.1.19 More useful array methods and attributes

>>> a = zeros(4) + 3

>>> a

array([ 3., 3., 3., 3.]) # float data

>>> a.item(2) # more efficient than a[2]

3.0

>>> a.itemset(3,-4.5) # more efficient than a[3]=-4.5

>>> a

array([ 3. , 3. , 3. , -4.5])

>>> a.shape = (2,2)

>>> a

array([[ 3. , 3. ],

[ 3. , -4.5]])

79 Python in SC 4.1 Numerical Python (NumPy)

>>> a.ravel() # from multi-dim to one-dim

array([ 3. , 3. , 3. , -4.5])

>>> a.ndim # no of dimensions

2

>>> len(a.shape) # no of dimensions

2

>>> rank(a) # no of dimensions

2

>>> a.size # total no of elements

4

>>> b = a.astype(int)   # change data type
>>> b
array([[ 3,  3],
       [ 3, -4]])

80 Python in SC 4.1 Numerical Python (NumPy)

4.1.20 Complex number computing

>>> from math import sqrt

>>> sqrt(-1) # ?

Traceback (most recent call last):

File "<stdin>", line 1, in <module>

ValueError: math domain error

>>> from numpy import sqrt

>>> sqrt(-1) # ?

Warning: invalid value encountered in sqrt

nan

>>> from cmath import sqrt # complex math functions

>>> sqrt(-1) # ?

1j

>>> sqrt(4) # cmath functions always return complex...

(2+0j)

81 Python in SC 4.1 Numerical Python (NumPy)

>>> from numpy.lib.scimath import sqrt

>>> sqrt(4)

2.0 # real when possible

>>> sqrt(-1)

1j # otherwise complex

82 Python in SC 4.1 Numerical Python (NumPy)

4.1.21 A root function

# Goal: compute roots of a parabola; return real roots when possible,
# otherwise complex
def roots(a, b, c):
    # compute roots of a*x^2 + b*x + c = 0
    from numpy.lib.scimath import sqrt
    q = sqrt(b**2 - 4*a*c)   # q is real or complex
    r1 = (-b + q)/(2*a)
    r2 = (-b - q)/(2*a)
    return r1, r2

>>> a = 1; b = 2; c = 100

>>> roots(a, b, c) # complex roots

((-1+9.94987437107j), (-1-9.94987437107j))

>>> a = 1; b = 4; c = 1

>>> roots(a, b, c) # real roots

(-0.267949192431, -3.73205080757)

83 Python in SC 4.1 Numerical Python (NumPy)

4.1.22 Array type and data type

>>> import numpy
>>> a = numpy.zeros(5)
>>> type(a)
<type 'numpy.ndarray'>
>>> isinstance(a, numpy.ndarray)   # is a of type ndarray?
True
>>> a.dtype            # data (element) type object
dtype('float64')
>>> a.dtype.name
'float64'
>>> a.dtype.char       # character code
'd'
>>> a.dtype.itemsize   # no of bytes per array element
8

84 Python in SC 4.1 Numerical Python (NumPy)

>>> b = zeros(6, float32)

>>> a.dtype == b.dtype # do a and b have the same data type?

False

>>> c = zeros(2, float)

>>> a.dtype == c.dtype

True

85 Python in SC 4.1 Numerical Python (NumPy)

4.1.23 Matrix objects

• NumPy has an array type, matrix, much like Matlab's array type:

>>> x1 = array([1, 2, 3], float)
>>> x2 = matrix(x1)   # or just mat(x1)

>>> x2 # row vector

matrix([[ 1., 2., 3.]])

>>> x3 = mat(x1).transpose()   # column vector

>>> x3

matrix([[ 1.],

[ 2.],

[ 3.]])

>>> type(x3)

<class 'numpy.core.defmatrix.matrix'>

>>> isinstance(x3, matrix)

True

86 Python in SC 4.1 Numerical Python (NumPy)

• Only 1- and 2-dimensional arrays can be matrix

• For matrix objects, the * operator means matrix-matrix or matrix-vector multiplication (not elementwise multiplication):

>>> A = eye(3) # identity matrix

>>> A = mat(A) # turn array to matrix

>>> A

matrix([[ 1., 0., 0.],

[ 0., 1., 0.],

[ 0., 0., 1.]])

>>> y2 = x2*A # vector-matrix product

>>> y2

matrix([[ 1., 2., 3.]])

>>> y3 = A*x3 # matrix-vector product

>>> y3

matrix([[ 1.],

[ 2.],

[ 3.]])

87 Python in SC 4.2 NumPy: Vectorisation

4.2 NumPy: Vectorisation

• Loops over an array run slowly

• Vectorization = replace explicit loops by function calls such that the whole loop is implemented in C (or Fortran)

• Explicit loops:

r = zeros(x.shape, x.dtype)
for i in xrange(x.size):
    r[i] = sin(x[i])

• Vectorised version:

r = sin(x)

88 Python in SC 4.2 NumPy: Vectorisation

• Arithmetic expressions work for both scalars and arrays

• Many fundamental functions work for scalars and arrays

• Ex: x**2 + abs(x) works for x scalar or array

A mathematical function written for scalar arguments can (normally) take array arguments:

>>> def f(x):

... return x**2 + sinh(x)*exp(-x) + 1

...

>>> # scalar argument:

>>> x = 2

>>> f(x)

5.4908421805556333

>>> # array argument:

>>> y = array([2, -1, 0, 1.5])

89 Python in SC 4.2 NumPy: Vectorisation

>>> f(y)

array([ 5.49084218, -1.19452805,  1.        ,  3.72510647])

4.2.1 Vectorisation of functions with if tests; problem

• Consider a function with an if test:

def somefunc(x):
    if x < 0:
        return 0
    else:
        return sin(x)

# or
def somefunc(x): return 0 if x < 0 else sin(x)

90 Python in SC 4.2 NumPy: Vectorisation

• This function works with a scalar x but not an array

• Problem: x < 0 results in a boolean array, not a boolean value that can be used in the if test

>>> x = linspace(-1, 1, 3); print x

[-1. 0. 1.]

>>> y = x < 0

>>> y

array([ True, False, False], dtype=bool)

>>> 'ok' if y else 'not ok'   # test of y in a scalar boolean context
...
ValueError: The truth value of an array with more than one
element is ambiguous. Use a.any() or a.all()

91 Python in SC 4.2 NumPy: Vectorisation

4.2.2 Vectorisation of functions with if tests; solutions

A. Simplest remedy: call NumPy's vectorize function to allow array arguments to a function:

>>> somefuncv = vectorize(somefunc, otypes='d')

>>> # test:

>>> x = linspace(-1, 1, 3); print x

[-1. 0. 1.]

>>> somefuncv(x) # ?

array([ 0.        ,  0.        ,  0.84147098])

Note: the data type must be specified as a character

• The speed of somefuncv is unfortunately quite slow

92 Python in SC 4.2 NumPy: Vectorisation

B. A better solution, using where:

def somefunc_NumPy2(x):
    x1 = zeros(x.size, float)
    x2 = sin(x)
    return where(x < 0, x1, x2)

93 Python in SC 4.2 NumPy: Vectorisation

4.2.3 General vectorization of if-else tests

def f(x):   # scalar x
    if condition:
        x = <expression1>
    else:
        x = <expression2>
    return x

def f_vectorized(x):   # scalar or array x
    x1 = <expression1>
    x2 = <expression2>
    return where(condition, x1, x2)

94 Python in SC 4.2 NumPy: Vectorisation

4.2.4 Vectorization via slicing

• Consider a recursion scheme (which arises from a one-dimensional diffusionequation)

• Straightforward (slow) Python implementation:

n = size(u)-1
for i in xrange(1,n,1):
    u_new[i] = beta*u[i-1] + (1-2*beta)*u[i] + beta*u[i+1]

• Slices enable us to vectorize the expression:

u_new[1:n] = beta*u[0:n-1] + (1-2*beta)*u[1:n] + beta*u[2:n+1]
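A runnable version of the sliced update (a sketch, not from the original slides; beta and the initial data u are arbitrary illustrative choices):

from numpy import linspace, sin, pi, size
beta = 0.4
u = sin(pi * linspace(0.0, 1.0, 11))   # some initial data
n = size(u) - 1
u_new = u.copy()                       # keeps the boundary values
u_new[1:n] = beta*u[0:n-1] + (1 - 2*beta)*u[1:n] + beta*u[2:n+1]
print(u_new)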

95 Python in SC 4.3 NumPy: Random numbers

4.3 NumPy: Random numbers

• Drawing scalar random numbers:

import random
random.seed(2198)   # control the seed
print 'uniform random number on (0,1):', random.random()
print 'uniform random number on (-1,1):', random.uniform(-1,1)
print 'Normal(0,1) random number:', random.gauss(0,1)

• Vectorized drawing of random numbers (arrays):

from numpy import random

random.seed(12) # set seed

u = random.random(n) # n uniform numbers on (0,1)

u = random.uniform(-1, 1, n) # n uniform numbers on (-1,1)

u = random.normal(m, s, n) # n numbers from N(m,s)

96 Python in SC 4.3 NumPy: Random numbers

• Note that both modules have the name random! A remedy:

import random as random_number # rename random for scalars

from numpy import * # random is now numpy.random

97 Python in SC 4.4 NumPy: Basic linear algebra

4.4 NumPy: Basic linear algebra

NumPy contains the linalg module for

• solving linear systems

• computing the determinant of a matrix

• computing the inverse of a matrix

• computing eigenvalues and eigenvectors of a matrix

• solving least-squares problems

• computing the singular value decomposition of a matrix

• computing the Cholesky decomposition of a matrix

98 Python in SC 4.4 NumPy: Basic linear algebra

4.4.1 A linear algebra session

from numpy import *   # includes import of linalg
n = 100   # fill matrix A and vectors x and b:
A = random.uniform(0.0, 1.0, (n,n)); x = random.uniform(-1, 1, n)
b = dot(A, x)               # matrix-vector product
y = linalg.solve(A, b)      # solve A*y = b
if allclose(x, y, atol=1.0E-12, rtol=1.0E-12):
    print '--correct solution'
d = linalg.det(A); B = linalg.inv(A)
# check result:
R = dot(A, B) - eye(n)      # residual
R_norm = linalg.norm(R)     # Frobenius norm of matrix R
print 'Residual R = A*A-inverse - I:', R_norm
A_eigenvalues = linalg.eigvals(A)   # eigenvalues only
A_eigenvalues, A_eigenvectors = linalg.eig(A)
for e, v in zip(A_eigenvalues, A_eigenvectors.T):   # eigenvectors are columns
    print 'eigenvalue %s has corresponding vector\n%s' % (e, v)

99 Python in SC 4.5 Python: Plotting modules

4.5 Python: Plotting modules

By default, Sage comes with:

• Interface to Gnuplot (curve plotting, 2D scalar and vector fields)

• Matplotlib (curve plotting, 2D scalar and vector fields)

• 3D: Tachyon (ray-tracing), Jmol (interactive plotting)

Available Python interfaces to:

• Interface to Vtk (2D/3D scalar and vector fields)

• Interface to OpenDX (2D/3D scalar and vector fields)

• Interface to IDL

• Interface to Grace

• Interface to Matlab

• Interface to R

• Interface to Blender

• PyX (PostScript/TEX-like drawing)

100 Python in SC 4.5 Python: Plotting modules

from numpy import *
n = 100; x = linspace(0.0, 1.0, n); y = linspace(0.0, 1.0, n)
a = -2; b = 3; c = 7
z_line = a*x + b*y + c
rscal = 0.05
xx = x + random.normal(0, rscal, n)
yy = y + random.normal(0, rscal, n)
zz = z_line + random.normal(0, rscal, n)
A = array([xx, yy, ones(n)])
A = A.transpose()
result = linalg.lstsq(A, zz)
aa, bb, cc = result[0]
p0 = (x[0], y[0], a*x[0]+b*y[0]+c)
p1 = (x[n-1], y[n-1], a*x[n-1]+b*y[n-1]+c)

101 Python in SC 4.5 Python: Plotting modules

pp = [(xx[i], yy[i], zz[i]) for i in range(len(x))]
p = [(x[i], y[i], z_line[i]) for i in range(len(x))]
pp0 = (xx[0], yy[0], aa*xx[0]+bb*yy[0]+cc)
pp1 = (xx[n-1], yy[n-1], aa*xx[n-1]+bb*yy[n-1]+cc)
G = line3d([p0, p1], color='blue')
G = G + list_plot(pp, color='red', opacity=0.2)
G = G + line3d([pp0, pp1], color='red')
G = G + text3d('Blue - original line: ' + '%.4f*x+%.4f*y+%.4f' % (a,b,c),
               (p[0][0], p[0][1], p[0][2]), color='blue')
G = G + text3d('Red - fitted line: ' + '%.4f*x+%.4f*y+%.4f' % (aa,bb,cc),
               (p[n-1][0], p[n-1][1], p[n-1][2]), color='red')
show(G)

102 Python in SC 4.5 Python: Plotting modules

103 Python in SC 4.6 I/O

4.6 I/O

4.6.1 File I/O with arrays; plain ASCII format

• Plain text output to file (just dump repr(array)):

a = linspace(1, 21, 21); a.shape = (3,7)   # note: 3*7 = 21 elements
# In case of the Sage Notebook, use the variable DATA, which holds the
# current working directory name for the current worksheet
file = open(DATA + 'tmp.dat', 'w')
file.write('Here is an array a:\n')
file.write(repr(a))   # dump string representation of a
file.close()

(If you need the objects in a different worksheet, use the directory name that was stored in the variable DATA of the original worksheet...)

104 Python in SC 4.6 I/O

• Plain text input (just take eval on input line):

file = open(DATA + 'tmp.dat', 'r')
file.readline()   # load the first line (a comment)
b = eval(file.read())
file.close()

105 Python in SC 4.6 I/O

4.6.2 File I/O with arrays; binary pickling

• Dump (serialized) arrays with cPickle:

# a1 and a2 are two arrays

import cPickle

file = open(DATA + 'tmp.dat', 'wb')
file.write('This is the array a1:\n')
cPickle.dump(a1, file)
file.write('Here is another array a2:\n')
cPickle.dump(a2, file)
file.close()

106 Python in SC 4.6 I/O

Read in the arrays again (in correct order):

file = open(DATA + 'tmp.dat', 'rb')
file.readline()   # swallow the initial comment line
b1 = cPickle.load(file)
file.readline()   # swallow the next comment line
b2 = cPickle.load(file)
file.close()

In Sage: almost all Sage objects x can be saved in compressed form to disk using save(x, filename) (or in many cases x.save(filename)):

A = matrix(RR, 10, range(100))
save(A, DATA + 'A')
B = load(DATA + 'A')

107 Python in SC 4.7 SciPy

4.7 SciPy

4.7.1 Overview

• SciPy is a comprehensive package (by Eric Jones, Travis Oliphant, Pearu Peterson) for scientific computing with Python

• Much overlap with ScientificPython

• SciPy interfaces many classical Fortran packages from Netlib (QUADPACK, ODEPACK, MINPACK, ...)

• Functionality: special functions, linear algebra, numerical integration, ODEs, random variables and statistics, optimization, root finding, interpolation, ...

• May require some installation effort (uses ATLAS)

Included in Sage by default; see the SciPy homepage (http://www.scipy.org). A small taste of SciPy is sketched below.
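An illustrative sketch (not from the original slides): numerical integration and root finding with SciPy, which wrap QUADPACK/MINPACK-style routines.

from numpy import sin, pi
from scipy.integrate import quad
from scipy.optimize import brentq
val, err = quad(sin, 0, pi)                # integrate sin on [0, pi]; exact value 2
root = brentq(lambda x: x**2 - 2, 0, 2)    # root of x^2 - 2 on [0, 2]
print('integral of sin on [0,pi] = %.15f (error estimate %.1e)' % (val, err))
print('root of x^2-2 on [0,2]    = %.15f' % root)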

108 Systems of LE 5.1 Systems of Linear Equations

5 Solving Systems of Linear Equations

5.1 Systems of Linear Equations

System of linear equations:

a11·x1 + a12·x2 + ... + a1n·xn = b1
a21·x1 + a22·x2 + ... + a2n·xn = b2
. . . . . . . . . . . . . .
am1·x1 + am2·x2 + ... + amn·xn = bm

109 Systems of LE 5.1 Systems of Linear Equations

Matrix form: A – given matrix, b – given vector, x – vector of unknowns:

[ a11 a12 ... a1n ] [ x1 ]   [ b1 ]
[ a21 a22 ... a2n ] [ x2 ] = [ b2 ]
[ ...  ...    ... ] [ ...]   [ ...]
[ am1 am2 ... amn ] [ xn ]   [ bm ]

or Ax = b (8)

Suppose

• m > n – an overdetermined system. Does this system have a solution? A system with more equations than unknowns usually has no solution.

• m < n – an underdetermined system. How many solutions does it have? A system with fewer equations than unknowns usually has infinitely many solutions.

• m = n – what about the solution in this case? A system with the same number of equations and unknowns usually has a single unique solution. ←− We will deal only with this case now.

110 Systems of LE 5.2 Classification

5.2 Classification

Two main types of systems of linear equations: systems with

• full matrix

– most of the values are nonzero

– how to store it? Storage in a 2D array.

• sparse matrix

– most of the matrix values are zero

– how to store such matrices? Storing them in a full matrix would be a waste of memory

∗ different sparse matrix storage schemes

Quite different strategies apply for the solution of systems with full or sparse matrices.

111 Systems of LE 5.2 Classification

5.2.1 Problem Transformation

• A common strategy is to modify the problem (8) such that

– the solution remains the same

– the modified problem is easier to solve

What kind of transformations do not change the solution?

• It is possible to multiply both sides of the equation (8) by an arbitrary nonsingular matrix M without changing the solution.

– To check this, notice that the solution of MAz = Mb is:

z = (MA)⁻¹Mb = A⁻¹M⁻¹Mb = A⁻¹b = x.

112 Systems of LE 5.2 Classification

– For example, M = D – diagonal matrix,

– or M = P permutation matrix

NB! Although theoretically the multiplication of (8) by a nonsingular matrix M does not change the solution, we will see later that it may change the numerical process of the solution and the exactness of the solution...

The next question we ask: what type of systems are easy to solve?

113 Systems of LE 5.3 Triangular linear systems

5.3 Triangular linear systems

If the system matrix A has a row i with a nonzero only on the diagonal, it is easy to calculate xi = bi/aii; if there is now a row j where, apart from the diagonal ajj ≠ 0, the only nonzero is at position aji, we find that xj = (bj − aji·xi)/ajj; and again, if there exists a row k such that akk ≠ 0 and akl = 0 for l ≠ i, j, we can have xk = (bk − aki·xi − akj·xj)/akk, etc.

• Such systems – easy to solve

• called triangular systems.

With rearranging rows and unknowns (columns) it is possible to transform the system to lower triangular form L or upper triangular form U:

L = [ l11   0  ...   0 ]        U = [ u11 u12 ... u1n ]
    [ l21  l22 ...   0 ]            [  0  u22 ... u2n ]
    [ ...  ... ... ... ]            [ ... ... ... ... ]
    [ ln1  ln2 ... lnn ]            [  0   0  ... unn ]

114 Systems of LE 5.3 Triangular linear systems

Solving the system Lx = b is called Forward Substitution:

x1 = b1/l11
xi = (bi − Σ_{j=1}^{i−1} lij·xj) / lii

Solving the system Ux = b is called Back Substitution:

xn = bn/unn
xi = (bi − Σ_{j=i+1}^{n} uij·xj) / uii
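A minimal NumPy sketch (not from the original slides; the function names are illustrative) of both substitutions, following the formulas above:

from numpy import zeros, dot

def forward_substitution(L, b):
    # solve L x = b for lower triangular L
    n = len(b); x = zeros(n)
    for i in range(n):
        x[i] = (b[i] - dot(L[i, :i], x[:i])) / L[i, i]
    return x

def back_substitution(U, b):
    # solve U x = b for upper triangular U
    n = len(b); x = zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - dot(U[i, i+1:], x[i+1:])) / U[i, i]
    return x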

But how to transform an arbitrary matrix to a triangular form?

115 Systems of LE 5.4 Elementary Elimination Matrices

5.4 Elementary Elimination Matrices

For a1 ≠ 0:

[    1    0 ] [ a1 ]   [ a1 ]
[ −a2/a1  1 ] [ a2 ] = [  0 ].

In the general case, if a = [a1, a2, ..., an]ᵀ and ak ≠ 0:

       [ 1 ···    0       0  ··· 0 ] [ a1      ]   [ a1 ]
       [ ···     ···           ··· ] [ ···     ]   [ ···]
Mk a = [ 0 ···    1       0  ··· 0 ] [ ak      ] = [ ak ]
       [ 0 ··· −m_{k+1}   1  ··· 0 ] [ a_{k+1} ]   [ 0  ]
       [ ···     ···           ··· ] [ ···     ]   [ ···]
       [ 0 ···  −m_n      0  ··· 1 ] [ an      ]   [ 0  ],

where m_i = a_i/a_k, i = k+1, ..., n.

116 Systems of LE 5.4 Elementary Elimination Matrices

The divisor ak is called the pivot (juhtelement). The matrix Mk is also called an elementary elimination matrix or Gauss transformation.

1. Mk is nonsingular ⇐= Why? Being lower triangular with unit diagonal.

2. Mk = I − m·ekᵀ, where m = [0, ..., 0, m_{k+1}, ..., m_n]ᵀ and ek is column k of the unit matrix.

3. Lk :=(def) Mk⁻¹ = I + m·ekᵀ.

4. If Mj, j > k, is some other elementary elimination matrix with multiplier vector t, then

Mk·Mj = I − m·ekᵀ − t·ejᵀ + m·ekᵀ·t·ejᵀ = I − m·ekᵀ − t·ejᵀ,

due to ekᵀ·t = 0.

117 Systems of LE 5.5 Gauss Elimination and LU Factorisation

5.5 Gauss Elimination and LU Factorisation

• apply a series of Gauss elimination matrices from the left: M1, M2, ..., M_{n−1}, taking M = M_{n−1}···M1

– we get the linear system:

MAx = M_{n−1}···M1·A·x = M_{n−1}···M1·b = Mb

– MA is upper triangular =⇒

∗ easy to solve.

The process is called the Gauss Elimination Method (GEM)

118 Systems of LE 5.5 Gauss Elimination and LU Factorisation

• Denoting U = MA and L = M⁻¹, we get that

L = M⁻¹ = (M_{n−1}···M1)⁻¹ = M1⁻¹···M_{n−1}⁻¹ = L1···L_{n−1}

is lower unit triangular (ones on the diagonal)

• =⇒ A = LU.

Expressed in an algorithm:

119 Systems of LE 5.5 Gauss Elimination and LU Factorisation

Algorithm 5.1. LU-factorisation using the Gauss elimination method (GEM)

do k = 1, ..., n-1               # cycle over matrix columns
    if akk == 0 then stop        # stop in case pivot == 0
    do i = k+1, n
        mik = aik/akk            # coefficient calculation in column k
    enddo
    do i = k+1, n
        do j = k+1, n            # applying transformations to
            aij = aij − mik·akj  #   the rest of the matrix
        enddo
    enddo
enddo

NB! In a practical implementation: store mik in the corresponding elements of A (they will be zeroes anyway). A NumPy sketch of this follows.
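A compact sketch (not from the original slides; lu_gem is an illustrative name) of Algorithm 5.1 in NumPy, storing the multipliers m_ik in the zeroed part of A as suggested above:

from numpy import random, tril, triu, eye, dot, allclose

def lu_gem(A):
    # LU factorisation without pivoting; returns (L, U)
    A = A.copy(); n = A.shape[0]
    for k in range(n - 1):
        if A[k, k] == 0:
            raise ZeroDivisionError('zero pivot in column %d' % k)
        A[k+1:, k] = A[k+1:, k] / A[k, k]                  # multipliers m_ik
        A[k+1:, k+1:] -= A[k+1:, k:k+1] * A[k:k+1, k+1:]   # rank-1 update
    return tril(A, -1) + eye(n), triu(A)

A = random.uniform(0, 1, (4, 4))
L, U = lu_gem(A)
print(allclose(dot(L, U), A))   # True (up to round-off)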

120 Systems of LE 5.6 Number of operations in GEM

5.6 Number of operations in GEM

Finding operation counts for Alg. 5.1:

• Replace the loops with corresponding sums over the number of particular operations:

Σ_{i=1}^{n−1} ( Σ_{j=i+1}^{n} 1 + Σ_{j=i+1}^{n} Σ_{k=i+1}^{n} 2 ) = Σ_{i=1}^{n−1} ( (n−i) + 2(n−i)² ) = (2/3)n³ + O(n²)

• used that Σ_{i=1}^{m} i^k = m^{k+1}/(k+1) + O(m^k) (which is enough for finding the number of operations of the highest order)

• How many operations are there for solving a triangular system? The number of operations for forward and backward substitution with L and U is O(n²)

121 Systems of LE 5.7 GEM with row permutations

• =⇒ the solution of the whole system Ax = b takes (2/3)n³ + O(n²) operations

5.7 GEM with row permutations

• If pivot == 0 GEM won’t work

• Row permutations or partial pivoting may help

• For numerical stability, also the pivot must not be small

Example 5.1

• Consider the matrix

A = [ 0 1 ]
    [ 1 0 ]

– non-singular

– but LU-factorisation is impossible without row permutations

122 Systems of LE 5.7 GEM with row permutations

• But on the contrary, the matrix

A = [ 1 1 ]
    [ 1 1 ]

– has the LU-factorisation

A = [ 1 1 ] = [ 1 0 ] [ 1 1 ] = LU
    [ 1 1 ]   [ 1 1 ] [ 0 0 ]

– But what is wrong with matrix A? A is actually singular!

123 Systems of LE 5.7 GEM with row permutations

Example 5.2. Small pivots

• Consider

A = [ ε 1 ]
    [ 1 1 ],

with ε such that 0 < ε < ε_mach in the given floating-point system

– (i.e. 1 + ε = 1 in floating-point arithmetic)

– Without row permutation we get (in floating-point arithmetic):

M = [  1   0 ]  =⇒  L = [  1   0 ],  U = [ ε    1    ] = [ ε   1   ]
    [ −1/ε 1 ]          [ 1/ε  1 ]       [ 0  1 − 1/ε]   [ 0  −1/ε ]

• But then

LU = [  1   0 ] [ ε   1   ] = [ ε 1 ] ≠ A
     [ 1/ε  1 ] [ 0  −1/ε ]   [ 1 0 ]

124 Systems of LE 5.7 GEM with row permutations

Using row permutation

• the pivot is 1;

• the multiplier is −ε =⇒

M = [  1  0 ]  =⇒  L = [ 1  0 ],  U = [ 1   1  ] = [ 1 1 ]
    [ −ε  1 ]          [ ε  1 ]       [ 0  1−ε ]   [ 0 1 ]

in floating-point arithmetic

• =⇒

LU = [ 1 0 ] [ 1 1 ] = [ 1 1 ] ← OK!
     [ ε 1 ] [ 0 1 ]   [ ε 1 ]

125 Systems of LE 5.7 GEM with row permutations

Algorithm 5.2. LU-factorisation with GEM using row permutations

do k = 1, ..., n-1                         # cycle over matrix columns
    find index p such that:                # looking for the best pivot
        |apk| ≥ |aik|, k ≤ i ≤ n           #   in the given column
    if p ≠ k then interchange rows k and p
    if akk == 0 then continue with next k  # skip such a column
    do i = k+1, n
        mik = aik/akk                      # multiplier calculation in column k
    enddo
    do i = k+1, n
        do j = k+1, n                      # applying transformations to
            aij = aij − mik·akj            #   the rest of the matrix
        enddo
    enddo
enddo

126 Systems of LE 5.7 GEM with row permutations

As a result, MA = U, where U is upper triangular. OK so far? But actually

M = M_{n−1}·P_{n−1} ··· M1·P1

• Is M⁻¹ still lower triangular? It is not lower triangular any more, although it is still denoted by L

– but is it triangular? We still have a triangular L

– knowing the permutations P = P_{n−1}···P1 in advance would give

PA = LU,

where L is indeed a lower triangular matrix

127 Systems of LE 5.7 GEM with row permutations

• But do we really need to actually perform the row exchanges explicitly? Instead of row exchanges we can perform just an appropriate mapping of matrix (and vector) indexes

– We start with the unit index mapping p = [1, 2, 3, 4, ..., n]

– If rows i and j need to be exchanged, we exchange the corresponding values p[i] and p[j]

– In the algorithm, take everywhere a_{p[i],j} instead of a_{ij} (and treat other arrays correspondingly)

To solve the system Ax = b (8), how does the whole algorithm look now?

• Solve the lower triangular system Ly = Pb with forward substitution

• Solve the upper triangular system Ux = y with back substitution

The term partial pivoting comes from the fact that we seek the best pivot only in the current column (starting from the diagonal and below) of the matrix; see the library-based sketch below.
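An illustrative sketch (not from the original slides) using SciPy's lu_factor/lu_solve, which implement exactly this scheme: LU with partial pivoting, the multipliers stored in place, and the row exchanges encoded in a pivot index array.

from numpy import random, dot, allclose
from scipy.linalg import lu_factor, lu_solve
n = 6
A = random.uniform(-1, 1, (n, n)); b = random.uniform(-1, 1, n)
lu, piv = lu_factor(A)        # PA = LU; piv encodes the row exchanges
x = lu_solve((lu, piv), b)    # forward + back substitution
print(allclose(dot(A, x), b)) # True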

128 Systems of LE 5.7 GEM with row permutations

• Complete pivoting – the best pivot is chosen from the whole remaining part ofthe matrix

– This means exchanging both rows and columns of the matrix

PAQ = LU,

where P and Q are permutation matrices.

– The system is solved in three stages: Ly = Pb; Uz = y and x = Qz

Although numerical stability is better in case of complete pivoting it is rarely used,because

• more costly

• usually not needed

129 Systems of LE 5.8 Reliability of the LU-factorisation with partial pivoting

5.8 Reliability of the LU-factorisation with partial pivoting

Introduce the vector norm

‖x‖∞ = max_i |x_i|, x ∈ Rⁿ,

and the corresponding matrix norm

‖A‖∞ = sup_{x ≠ 0} ‖Ax‖∞ / ‖x‖∞.

Looking at the rounding errors in GEM, it can be shown that we actually find L and U which satisfy the relation

PA = LU − E,

where the error E can be estimated by

‖E‖∞ ≤ n·ε·‖L‖∞·‖U‖∞ (9)

130 Systems of LE 5.8 Reliability of the LU-factorisation with partial pivoting

where ε is the machine epsilon. In practice we replace the system PAx = Pb with the system LUx̄ = Pb =⇒ the system we are actually solving is

(PA + E)·x̄ = Pb

for finding the approximate solution x̄.

• How far is it from the real solution?

From matrix perturbation theory: let us solve the system

Ax = b (10)

where A ∈ R^{n×n} and b ∈ Rⁿ are given and x ∈ Rⁿ is unknown. Suppose A is given with an error Ā = A + δA. The perturbed solution satisfies the system of linear equations

(A + δA)(x + δx) = b. (11)

131 Systems of LE 5.8 Reliability of the LU-factorisation with partial pivoting

Theorem 5.1. Let A be nonsingular and δA sufficiently small, such that

‖δA‖_∞ ‖A⁻¹‖_∞ ≤ 1/2. (12)

Then (A+δA) is nonsingular and

‖δx‖_∞ / ‖x‖_∞ ≤ 2κ(A) ‖δA‖_∞ / ‖A‖_∞, (13)

where κ(A) = ‖A‖_∞‖A⁻¹‖_∞ is the condition number.

Proof of the theorem (http://www.ut.ee/~eero/SC/konspekt/perturb-toest/perturb-proof.pdf), (EST) (http://www.ut.ee/~eero/SC/konspekt/perturb-toest/perturb-toest.pdf)

132 Systems of LE 5.8 Reliability of the LU-factorisation with partial pivoting

Remarks

1. The result holds for an arbitrary matrix norm derived from the corresponding vector norm

2. Theorem 5.1 says that in case of a small condition number, a small relative error in matrix A can cause only a small error in the solution x

3. If the condition number is big, what can happen? Everything can happen

4. It is not simple to calculate the condition number, but it can be estimated (see the sketch below)
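In NumPy the condition number is available directly (for large sparse matrices one would only estimate it); a small illustration (ours, using the classically ill-conditioned Hilbert matrices) of how κ(A) governs the accuracy of a solve:

import numpy as np
from scipy.linalg import hilbert

for n in (4, 8, 12):
    A = hilbert(n)                        # notoriously ill-conditioned
    x = np.ones(n)
    b = A @ x                             # manufactured right-hand side
    x_hat = np.linalg.solve(A, b)         # LU with partial pivoting inside
    rel_err = np.linalg.norm(x_hat - x, np.inf) / np.linalg.norm(x, np.inf)
    print(f"n={n:2d}  cond={np.linalg.cond(A, np.inf):.1e}  rel.err={rel_err:.1e}")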

Combining the result (9) with Theorem 5.1, we see that the GEM forward error can be estimated as follows:

‖δx‖_∞ / ‖x‖_∞ ≤ 2nε κ(A) G,

where the coefficient G = ‖L‖_∞‖U‖_∞ / ‖A‖_∞ is called the growth factor

133 Systems of LE 5.8 Reliability of the LU-factorisation with partial pivoting

Conclusions

• If G is not large, then a well-conditioned matrix gives a fairly good answer (i.e. error O(nε)).

• In case of partial pivoting, the elements of matrix L are ≤ 1 in absolute value.

– Nevertheless, there exist examples where, with a well-conditioned matrix A, the elements of U are exponentially large in comparison with the elements of A.

– But such examples are mostly “academic” – in practice one rarely encounters them (and the method can be used without fear)

134 BLAS 6.1 Motivation

6 BLAS (Basic Linear Algebra Subroutines)

6.1 Motivation

How to optimise programs that use a lot of linear algebra operations?

• Efficiency depends on

– processor speed

– the number of arithmetic operations

• but also on:

– the speed of memory references

• Hierarchical memory structure:

135 BLAS 6.1 Motivation

[Figure: the memory hierarchy pyramid – from bottom to top: tape/CD/DVD/network storage, hard disk, RAM, cache, registers; the lower levels are slow, large and cheap, the upper ones fast, small and expensive]

• Where in the picture are the arithmetic operations performed? Useful arithmetic operations happen only at the top of the hierarchy

• What is the direction of data movement before/after an arithmetic operation? Before the operations the data needs to be moved up; afterwards the data is moved down the hierarchy

• Is information movement faster at the top or at the bottom? Faster at the top

• What is faster, arithmetic operations or data movement? As a rule, arithmetic operations are faster than data movement

136 BLAS 6.1 Motivation

Consider an arbitrary algorithm. Denote:

• f – # flops (arithmetic operations: +, −, ×, /)

• m – # memory references

Introduce q = f/m.

Why is this number important? Is it preferable for q to be small or large?

• t_f – time spent on 1 flop; t_m – time for one memory access

Then the calculation time is:

f·t_f + m·t_m = f·t_f (1 + (1/q)(t_m/t_f))

In general t_m ≫ t_f, and therefore the total time reflects the processor speed only if q is large

137 BLAS 6.1 Motivation

Example. Gauss elimination method – for each i – key operations:

A(i+1:n, i) = A(i+1:n, i)/A(i,i), (14)

A(i+1:n, i+1:n) = A(i+1:n, i+1:n) − A(i+1:n, i) ∗ A(i, i+1:n). (15)

• Operation (14) represents the following general operation:

y = ax + y, x,y ∈ Rⁿ, a ∈ R (16)

– Operation (16) is called saxpy ((single-precision) a times x plus y)

• (15) represents:

A = A − vwᵀ, A ∈ Rⁿˣⁿ, v,w ∈ Rⁿ (17)

– (17) is a rank-1 update of matrix A (the matrix vwᵀ has rank 1 – why? Because each row of it is a multiple of the vector wᵀ)

138 BLAS 6.1 Motivation

Operation (16) analysis:

• m = 3n+1 memory references:

– 2n+1 reads (vectors x and y, scalar a)

– n writes (the new y)

• Computations take f = 2n flops

• =⇒ q = 2/3 + O(1/n) ≈ 2/3 for large n

Operation (17) analysis:

• m = 2n² + 2n memory references:

– n² + 2n reads

– n² writes

• Computations take f = 2n² flops

• =⇒ q = 1 + O(1/n) ≈ 1 for large n

(16) is a 1st order operation (O(n) flops); (17) is a 2nd order operation (O(n²) flops). Note that the coefficient q is O(1) in both cases

139 BLAS 6.1 Motivation

Faster results come from 3rd order operations (O(n³) operations on O(n²) memory references). For example, matrix multiplication:

C = AB + C, A, B, C ∈ Rⁿˣⁿ. (18)

Here m = 4n² and f = n²(2n−1) + n² = 2n³ (check it!) =⇒ q = n/2 → ∞ as n → ∞. With good algorithm scheduling, this operation can make the processor work near its peak performance!
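A quick NumPy experiment (ours, not from the lecture slides) illustrating the difference between 1st and 3rd order operations – both operations below touch comparable amounts of data, but the measured flop rates differ sharply because q differs (the actual numbers depend on the BLAS library NumPy is linked against):

import time
import numpy as np

n = 2000
A, B = np.random.rand(n, n), np.random.rand(n, n)
x, y = np.random.rand(n * n), np.random.rand(n * n)   # same data volume

t0 = time.perf_counter()
y += 2.5 * x                        # 1st order (saxpy-like), q ≈ 2/3
t1 = time.perf_counter()
C = A @ B                           # 3rd order (matrix multiply), q ≈ n/2
t2 = time.perf_counter()

print(f"axpy:   {2 * n * n / (t1 - t0):.2e} flop/s")
print(f"matmul: {2 * n**3 / (t2 - t1):.2e} flop/s")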

140 BLAS 6.2 BLAS implementations

6.2 BLAS implementations

• BLAS – standard library for simple 1st, 2nd and 3rd order operations

– BLAS – freeware, available for example from netlib (http://www.netlib.org/blas/)

– Processor vendors often supply their own implementation

– ATLAS (http://math-atlas.sourceforge.net/) – a self-optimising BLAS implementation

Example of using BLAS (Fortran 90):

• LU factorisation using BLAS3 operations (http://www.ut.ee/~eero/SC/konspekt/Naited/lu1blas3.f90.html)

• main program for testing different BLAS levels (http://www.ut.ee/~eero/SC/konspekt/Naited/testblas3.f90.html)
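The BLAS routines underneath NumPy/SciPy can also be called directly from Python; a small sketch (ours) using scipy.linalg.blas with a level 1 and a level 3 routine:

import numpy as np
from scipy.linalg import blas

n = 4
a = 2.0
x, y = np.random.rand(n), np.random.rand(n)
A, B, C = (np.random.rand(n, n) for _ in range(3))

y_ref = a * x + y                        # reference results computed first,
C_ref = A @ B + C                        # since BLAS may update operands in place

y_new = blas.daxpy(x, y, a=a)            # level 1: y := a*x + y  (daxpy)
C_new = blas.dgemm(1.0, A, B, 1.0, C)    # level 3: C := 1*A@B + 1*C

print(np.allclose(y_new, y_ref), np.allclose(C_new, C_ref))   # True True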

141 DE 7.1 Ordinary Differential Equations (ODE)

7 Numerical Solution of Differential Equations

Differential equation – a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and of its derivatives of various orders

Example: the velocity of a ball falling through the air, considering only gravity and air resistance.

Order of a differential equation – the order of the highest derivative of the dependent variable with respect to the independent variable

7.1 Ordinary Differential Equations (ODE)

An Ordinary Differential Equation (ODE) is a differential equation in which the unknown function (also known as the dependent variable) is a function of a single independent variable

142 DE 7.1 Ordinary Differential Equations (ODE)

Initial value problem

y′(t) = f(t, y(t)), y(t₀) = y₀,

where f : [t₀,∞) × Rᵈ → Rᵈ, and y₀ ∈ Rᵈ is the initial condition

• (Boundary value problem – the solution is given at more than one point (on the boundaries))

We consider here only first-order ODEs

• A higher-order ODE can be converted into a system of first-order ODEs

– Example: y″ = −y can be rewritten as two first-order equations: y′ = z and z′ = −y.

7.1.1 Numerical methods for solving ODEs

143 DE 7.1 Ordinary Differential Equations (ODE)

Euler method (or forward Euler method)

• finite difference approximation

y′(t) ≈ (y(t+h) − y(t)) / h

⇒ y(t+h) ≈ y(t) + h y′(t) ⇒ y(t+h) ≈ y(t) + h f(t, y(t))

Starting with t₀ and t₁ = t₀ + h, t₂ = t₀ + 2h, etc.:

y_{n+1} = y_n + h f(t_n, y_n).

• Explicit method – the new value (y_{n+1}) depends only on values already known (y_n)
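A minimal Python sketch of the forward Euler method (function and variable names are ours):

import numpy as np

def forward_euler(f, t0, y0, h, n_steps):
    """Forward Euler: y_{n+1} = y_n + h*f(t_n, y_n)."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)          # explicit update, uses known values only
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# y' = -y, y(0) = 1; exact solution is e^{-t}
ts, ys = forward_euler(lambda t, y: -y, 0.0, 1.0, 0.1, 50)
print(abs(ys[-1] - np.exp(-ts[-1])))     # small discretisation error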

144 DE 7.1 Ordinary Differential Equations (ODE)

Backward Euler method

• A different finite difference version:

y′(t) ≈ (y(t) − y(t−h)) / h

⇒ y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})

• Implicit method – an equation needs to be solved to find y_{n+1}!

145 DE 7.1 Ordinary Differential Equations (ODE)

Comparison of the methods

• Implicit methods are computationally more complex

• Explicit methods can be unstable – in the case of stiff equations

Stiff equation – a differential equation for which certain numerical methods are numerically unstable, unless the step size is taken extremely small

146 DE 7.1 Ordinary Differential Equations (ODE)

Example 1

Initial value problem: y′(t) = −15y(t), t ≥ 0, y(0) = 1
Exact solution: y(t) = e^{−15t} ⇒ y(t) → 0 as t → ∞

Explicit schemes with h = 1/4, h = 1/8
Adams–Moulton scheme (trapezoidal method):

y_{n+1} = y_n + (h/2) ( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) )
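A sketch (ours) comparing the three schemes on this stiff problem; since f(t,y) = −15y is linear, the implicit updates can be solved in closed form, so no equation solver is needed:

import numpy as np

lam, h, T = -15.0, 0.25, 2.0                 # y' = lam*y with h = 1/4
y_fe = y_be = y_tr = 1.0
for _ in range(int(T / h)):
    y_fe = (1 + h * lam) * y_fe                          # forward Euler (explicit)
    y_be = y_be / (1 - h * lam)                          # backward Euler (implicit)
    y_tr = y_tr * (1 + h * lam / 2) / (1 - h * lam / 2)  # trapezoidal
print(y_fe)             # |1 + h*lam| = 2.75 > 1: blows up, oscillating
print(y_be, y_tr)       # both decay in magnitude towards 0, like e^{-15t}
print(np.exp(lam * T))  # exact value for comparison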

147 DE 7.1 Ordinary Differential Equations (ODE)

Example 2

Partial differential equation (see below): the wave equation in (1D and) 2D

Wave equation

−( ∂²u/∂x² + ∂²u/∂y² ) + ∂²u/∂t² = f(x,y,t),

where

• u(x,y,t) – the height of a surface (e.g. water level) at point (x,y) at time t

• f(x,y,t) – external force applied to the surface at time t (for simplicity, here f(x,y,t) = 0)

• solving on the domain (x,y) ∈ Ω = [0,1]×[0,1] at time t ∈ [0,T]

• Dirichlet boundary conditions u(x,y,t) = 0 for (x,y) ∈ ∂Ω, and the values of the derivatives ∂u/∂t|_{t=0} given for (x,y) ∈ Ω

148 DE 7.1 Ordinary Differential Equations (ODE)

Some examples on comparison of explicit vs implicit schemes (http://courses.cs.ut.ee/2009/sc/Main/FDMschemes)

1D wave equation failing with larger h value (http://www.ut.ee/~eero/SC/1DWaveEqExpl-failure.avi)

149 PDE 7.2 Partial Differential Equations (PDE)

7.2 Partial Differential Equations (PDE)

PDE overview

Examples of PDEs:

• Laplace’s equation

– important in many fields of science,

∗ electromagnetism

∗ astronomy

∗ fluid dynamics

– behaviour of electric, gravitational, and fluid potentials

– The general theory of solutions to Laplace’s equation – potential theory

– In the study of heat conduction, the Laplace equation is the steady-state heat equation

150 PDE 7.2 Partial Differential Equations (PDE)

• Maxwell's equations – the relationships between electric and magnetic fields – a set of four partial differential equations

– describe the properties of the electric and magnetic fields and relate them to their sources, charge density and current density

• Navier-Stokes equations – fluid dynamics (dependencies between the pressure, the speed of fluid particles and the fluid viscosity)

• Equations of linear elasticity – vibrations in elastic materials with given properties under compression and stretching

• Schrödinger equations – quantum mechanics – how the quantum state of a physical system changes in time; as central to quantum mechanics as Newton's laws are to classical mechanics

• Einstein field equations – a set of ten equations in Einstein's theory of general relativity – describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy.

151 PDE 7.3 2nd order PDEs

7.3 2nd order PDEs

We consider now only the single-equation case. In many practical cases 2nd order PDEs occur, for example:

• Heat equation: ut = uxx

• Wave equation: utt = uxx

• Laplace’s equation: uxx +uyy = 0.

A general second order PDE has the (canonical) form:

a u_xx + b u_xy + c u_yy + d u_x + e u_y + f u + g = 0.

Assuming not all of a, b and c are zero, then depending on the discriminant b² − 4ac:

b² − 4ac > 0: hyperbolic equation, typical representative – the wave equation;
b² − 4ac = 0: parabolic equation, typical representative – the heat equation;
b² − 4ac < 0: elliptic equation, typical representative – the Poisson equation.

152 PDE 7.3 2nd order PDEs

• If the coefficients change in time, an equation can change its type

• In systems of equations, each equation can be of a different type

• Of course, the problem can be non-linear or of higher order as well

In general,

• Hyperbolic PDEs describe time-dependent conservative physical processes, like wave propagation

• Parabolic PDEs describe time-dependent dissipative (or scattering) physical processes, like diffusion, which evolve towards some fixed point

• Elliptic PDEs describe systems that have reached a fixed point and are therefore independent of time

153 PDE 7.4 Time-independent PDE-s

7.4 Time-independent PDE-s

7.4.1 Finite Difference Method (FDM)

• A discrete mesh in the solution region

• Derivatives are replaced with approximations by finite differences

Example. Consider the Poisson equation in 2D:

−uxx−uyy = f , 0≤ x≤ 1, 0≤ y≤ 1, (19)

• boundary values as on the figure on the left:

154 PDE 7.4 Time-independent PDE-s

[Figure: the unit square (x,y) ∈ [0,1]×[0,1] shown twice – on the left with the given boundary values, on the right with the discretisation mesh]

• Define the discrete nodes as in the figure on the right

• The inner nodes, where the computations are carried out, are defined by

(x_i, y_j) = (ih, jh), i, j = 1,...,n

– (in our case n = 2 and h = 1/(n+1) = 1/3)

155 PDE 7.4 Time-independent PDE-s

Consider here that f = 0. Replacing the 2nd order derivatives with standard 2nd order differences at the mesh points, we get

(u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/h² = 0, i, j = 1,...,n,

where u_{i,j} is the approximation of the real solution u = u(x_i, y_j) at the point (x_i, y_j), and is a known boundary value if i or j equals 0 or n+1. As a result we get:

4u_{1,1} − u_{0,1} − u_{2,1} − u_{1,0} − u_{1,2} = 0
4u_{2,1} − u_{1,1} − u_{3,1} − u_{2,0} − u_{2,2} = 0
4u_{1,2} − u_{0,2} − u_{2,2} − u_{1,1} − u_{1,3} = 0
4u_{2,2} − u_{1,2} − u_{3,2} − u_{2,1} − u_{2,3} = 0.

In matrix form:

156 PDE 7.4 Time-independent PDE-s

Ax = \begin{bmatrix} 4 & -1 & -1 & 0 \\ -1 & 4 & 0 & -1 \\ -1 & 0 & 4 & -1 \\ 0 & -1 & -1 & 4 \end{bmatrix} \begin{bmatrix} u_{1,1} \\ u_{2,1} \\ u_{1,2} \\ u_{2,2} \end{bmatrix} = \begin{bmatrix} u_{0,1}+u_{1,0} \\ u_{3,1}+u_{2,0} \\ u_{0,2}+u_{1,3} \\ u_{3,2}+u_{2,3} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} = b.

This positive definite system can be solved either directly with Cholesky factorisation (Gauss elimination for a symmetric matrix, where the factorisation A = LᵀL is found) or iteratively. The exact solution of the problem is:

x = (u_{1,1}, u_{2,1}, u_{1,2}, u_{2,2})ᵀ = (0.125, 0.125, 0.375, 0.375)ᵀ.
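A quick NumPy check of this small system (the code is ours):

import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([0., 0., 1., 1.])

print(np.linalg.solve(A, b))        # [0.125 0.125 0.375 0.375]

L = np.linalg.cholesky(A)           # possible because A is positive definite
print(np.allclose(L @ L.T, A))      # True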

157 PDE 7.4 Time-independent PDE-s

In the general case the n²×n² Laplace matrix has the form

A = \begin{bmatrix} B & -I & 0 & \cdots & 0 \\ -I & B & -I & \ddots & \vdots \\ 0 & -I & B & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & -I \\ 0 & \cdots & 0 & -I & B \end{bmatrix}, (20)

where the n×n matrix B is of the form

B = \begin{bmatrix} 4 & -1 & 0 & \cdots & 0 \\ -1 & 4 & -1 & \ddots & \vdots \\ 0 & -1 & 4 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & -1 \\ 0 & \cdots & 0 & -1 & 4 \end{bmatrix}.

This means that most of the elements of matrix A are zero. What are such matrices called? – It is a sparse matrix
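A sketch (ours, with scipy.sparse; the helper name laplace_2d is an assumption) of building the matrix (20) for general n from Kronecker products:

import scipy.sparse as sp

def laplace_2d(n):
    """The n² x n² five-point Laplace matrix (20), assembled sparsely."""
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))   # 1D second difference
    return sp.kron(I, T) + sp.kron(T, I)   # diagonal blocks B, off-diagonal -I

print(laplace_2d(2).toarray())             # the 4x4 matrix from the example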

158 PDE 7.4 Time-independent PDE-s

7.4.2 Finite Element Method (FEM)

Consider as an example the Poisson equation

−∆u(x) = f(x), ∀x ∈ Ω,

u(x) = g(x), ∀x ∈ Γ,

where the Laplacian ∆ is defined by

(∆u)(x) = ( ∂²u/∂x² + ∂²u/∂y² )(x), x = (x, y)ᵀ

In the Finite Element Method the region is divided into finite elements.

159 PDE 7.4 Time-independent PDE-s

A region divided into Finite Elements:

160 PDE 7.4 Time-independent PDE-s

Consider the unit square.

[Figure: a triangular finite element mesh on the unit square]

The problem in variational formulation: find u_h ∈ V_h such that

a(u_h, v) = (f, v), ∀v ∈ V_h (21)

where in the case of the Poisson equation

a(u, v) = ∫_Ω ∇u · ∇v dx

161 PDE 7.4 Time-independent PDE-s

and (u,v) = ∫_Ω u(x)v(x) dx. The gradient ∇ of a scalar function f(x,y) is defined by:

∇f = ( ∂f/∂x, ∂f/∂y )

• In FEM the equation (21) needs to be satisfied for a set of test functions ϕ_i = ϕ_i(x),

– which are defined such that

ϕ_i(x_j) = 1 if j = i, and ϕ_i(x_j) = 0 if j ≠ i

• and it is demanded that (21) is satisfied for each ϕ_i (i = 1,...,N).

• As a result, the matrix of a system of linear equations is obtained

162 PDE 7.5 Sparse matrix storage schemes

• The matrix is identical to the matrix from (20)!

• Benefits of FEM over finite difference schemes:

– more flexibility in choosing the discretisation

– existence of a thorough mathematical framework for proofs of convergence and error estimates

7.5 Sparse matrix storage schemes

• As we saw, different discretisation schemes give systems with similar matrix structures

• (In addition to FDM and FEM, other discretisation schemes are often used as well, like the Finite Volume Method, but we do not consider it here)

• In each case the result is a system of linear equations with a sparse matrix.

How to store sparse matrices?

163 PDE 7.5 Sparse matrix storage schemes

7.5.1 Triple storage format

• For an n×m matrix A, each nonzero is stored with 3 values: the integers i and j and (in most applications) the real matrix element a_{ij} =⇒ three arrays:

indi(1:nz), indj(1:nz), vals(1:nz)

of length nz – the number of nonzeroes of matrix A

Advantages of the scheme:

• Easy to refer to a particular element

• Freedom to choose the order of the elements

Disadvantages:

• Nontrivial to find, for example, all nonzeroes of a particular row or column and their positions

164 PDE 7.5 Sparse matrix storage schemes

7.5.2 Column-major storage format

For each column j of matrix A, a vector row_ind giving the row numbers i for which a_{ij} ≠ 0.

• To store the whole matrix, the nonzeros of each column are

– appended into a 1-dimensional array row_ind(1:nz)

– and cptr(1:M) is introduced, referring to the start of each column in row_ind:

row_ind(1:nz), cptr(1:M), vals(1:nz)

Advantages:

• Easy to find the nonzeroes of a matrix column together with their positions

165 PDE 7.5 Sparse matrix storage schemes

Disadvantages:

• Algorithms become more difficult to read

• Difficult to find nonzeroes in a particular row

7.5.3 Row-major storage format

For each row i of matrix A, a vector col_ind giving the column numbers j for which a_{ij} ≠ 0.

• To store the whole matrix, the nonzeros of each row are

– appended into a 1-dimensional array col_ind(1:nz)

– and rptr(1:N) is introduced, referring to the start of each row in col_ind:

col_ind(1:nz), rptr(1:N), vals(1:nz)

166 PDE 7.5 Sparse matrix storage schemes

Advantages:

• Easy to find matrix row nonzeroes together with their positions

Disadvantages:

• Algorithms become more difficult to read.

• Difficult to find nonzeroes in a particular column.
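The three layouts above correspond directly to the COO, CSR and CSC formats of scipy.sparse; a brief illustration (ours):

import numpy as np
import scipy.sparse as sp

A = np.array([[4., 0., 1.],
              [0., 3., 0.],
              [2., 0., 5.]])

coo = sp.coo_matrix(A)      # triple format: indi / indj / vals
print(coo.row, coo.col, coo.data)

csr = sp.csr_matrix(A)      # row-major: col_ind / rptr / vals
print(csr.indices, csr.indptr, csr.data)

csc = sp.csc_matrix(A)      # column-major: row_ind / cptr / vals
print(csc.indices, csc.indptr, csc.data)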

7.5.4 Combined schemes

The triple format enhanced with cols(1:nz), cptr(1:M), rows(1:nz), rptr(1:N). Here cols and rows refer to the corresponding matrix A values in the triple format. E.g., to access row-major type structures, one indexes through rows(1:nz)

Advantages:

167 PDE 7.5 Sparse matrix storage schemes

• All operations easy to perform

Disadvantages:

• More memory needed.

• Reference through indexing in all cases

168 Iterative methods 8.1 Problem setup

8 Iterative methods

8.1 Problem setup

Iterative methods for solving systems of linear equations with sparse matrices

Consider the system of linear equations

Ax = b, (22)

where the N×N matrix A

• is sparse,

– the number of elements for which A_{ij} ≠ 0 is O(N).

• Typical example: Poisson equation discretisation on an n×n mesh (N = n×n)

– on average 5 nonzeros per row of A

169 Iterative methods 8.1 Problem setup

In case of direct methods, like LU-factorisation:

• memory consumption (together with fill-in): O(N²) = O(n⁴)

• flops: (2/3)·N³ + O(N²) = O(n⁶)

Banded matrix LU-decomposition:

• memory consumption (together with fill-in): O(N·L) = O(n³), where L is the bandwidth

• flops: (2/3)·N·L² + O(N·L) = O(n⁴)

170 Iterative methods 8.2 Jacobi Method

8.2 Jacobi Method

• An iterative method for solving (22)

• With a given initial approximation x^{(0)}, the approximations x^{(k)}, k = 1,2,3,... of the real solution x of (22) are calculated as follows:

– the i-th component of x^{(k+1)} is obtained by taking from (22) only the i-th row:

A_{i,1}x_1 + ··· + A_{i,i}x_i + ··· + A_{i,N}x_N = b_i

– solving this with respect to x_i, an iterative scheme is obtained:

x_i^{(k+1)} = (1/A_{i,i}) ( b_i − Σ_{j≠i} A_{i,j} x_j^{(k)} ) (23)

171 Iterative methods 8.2 Jacobi Method

The calculations are in essence parallel with respect to i – there is no dependence on the other components x_j^{(k+1)}, j ≠ i. As the iteration stopping criterion one can take, for example:

‖x^{(k+1)} − x^{(k)}‖ < ε or k+1 ≥ k_max, (24)

– ε – a given error tolerance

– k_max – the maximal number of iterations

• memory consumption (no fill-in):

– N_{A≠0} – the number of nonzeroes of matrix A

• Number of iterations needed to reduce ‖x^{(k)} − x‖₂ < ε ‖x^{(0)} − x‖₂:

#IT ≥ (2 ln ε⁻¹ / π²) (n+1)² = O(n²)
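A minimal NumPy sketch of the Jacobi iteration (23) with the stopping criterion (24) (names are ours):

import numpy as np

def jacobi(A, b, eps=1e-8, kmax=10000):
    """Jacobi iteration: x_i <- (b_i - sum_{j!=i} A_ij x_j) / A_ii."""
    D = np.diag(A)                       # diagonal of A
    R = A - np.diagflat(D)               # off-diagonal part
    x = np.zeros_like(b)
    for k in range(1, kmax + 1):
        x_new = (b - R @ x) / D          # all components updated independently
        if np.linalg.norm(x_new - x, np.inf) < eps:
            return x_new, k
        x = x_new
    return x, kmax

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([0., 0., 1., 1.])
x, its = jacobi(A, b)
print(x, its)                            # tends to [0.125 0.125 0.375 0.375]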

172 Iterative methods 8.2 Jacobi Method

• flops/iteration ≈ 10·N = O(n²) =⇒

#IT · flops/iteration = Cn⁴ + O(n³) = O(n⁴).

The coefficient C in front of n⁴ is:

C ≈ (2 ln ε⁻¹ / π²) · 10 ≈ 2 ln ε⁻¹

• Is this good or bad? It is not very good at all... We need better methods, because

– for LU-decomposition (banded matrices) we had C = 2/3

173 Iterative methods 8.3 Conjugate Gradient Method (CG)

8.3 Conjugate Gradient Method (CG)

Calculate r^{(0)} = b − Ax^{(0)} with given starting vector x^{(0)}
for i = 1, 2, ...
    solve M z^{(i−1)} = r^{(i−1)}        # we assume here that M = I for now
    ρ_{i−1} = r^{(i−1)T} z^{(i−1)}
    if i == 1
        p^{(1)} = z^{(0)}
    else
        β_{i−1} = ρ_{i−1}/ρ_{i−2}
        p^{(i)} = z^{(i−1)} + β_{i−1} p^{(i−1)}
    endif
    q^{(i)} = A p^{(i)} ;  α_i = ρ_{i−1} / p^{(i)T} q^{(i)}
    x^{(i)} = x^{(i−1)} + α_i p^{(i)} ;  r^{(i)} = r^{(i−1)} − α_i q^{(i)}
    check convergence; continue if needed
end
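With SciPy the same method is available off the shelf; a usage sketch (ours, reusing the illustrative laplace_2d helper from the FDM section):

import numpy as np
from scipy.sparse.linalg import cg

n = 50
A = laplace_2d(n)                  # sparse n² x n² Poisson matrix, SPD
b = np.ones(n * n)

x, info = cg(A, b)                 # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))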

174 Iterative methods 8.3 Conjugate Gradient Method (CG)

• memory consumption (no fill-in):

N_{A≠0} + O(N) = O(n²),

where N_{A≠0} – # nonzeroes of A

• Number of iterations to achieve ‖x^{(k)} − x‖₂ < ε ‖x^{(0)} − x‖₂:

#IT ≈ (ln ε⁻¹ / 2) √κ(A) = O(n)

• Flops/iteration ≈ 24·N = O(n²) =⇒

#IT · flops/iteration = Cn³ + O(n²) = O(n³),

175 Iterative methods 8.3 Conjugate Gradient Method (CG)

where C ≈ 12 ln ε⁻¹ · √κ₂(A).

=⇒ C depends on the condition number of A! This paves the way for the preconditioning technique

176 Iterative methods 8.4 Preconditioning

8.4 Preconditioning

Idea: replace Ax = b with the system M⁻¹Ax = M⁻¹b, i.e. apply CG to

Bx = c, (25)

where B = M⁻¹A and c = M⁻¹b.

But how to choose M? The preconditioner M = Mᵀ is to be chosen such that

(i) the problem Mz = r is easy to solve

(ii) matrix B is better conditioned than A, meaning that κ₂(B) < κ₂(A)

177 Iterative methods 8.4 Preconditioning

Then

#IT(25) = O(√κ₂(B)) < O(√κ₂(A)) = #IT(22)

but

flops/iteration(25) = flops/iteration(22) + (i) > flops/iteration(22)

• =⇒ We need to make a compromise!

• (The extreme cases are M = I and M = A)

• Preconditioned Conjugate Gradient (PCG) Method

– obtained by taking M ≠ I in the previous algorithm

178 Iterative methods 8.5 Preconditioner examples

8.5 Preconditioner examples

Diagonal Scaling (or Jacobi method)

M = diag(A)

(i) flops/iteration = N

(ii) κ₂(B) = κ₂(A)

⇒ Is this good? No large improvement is to be expected
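A sketch (ours) of diagonal scaling used as a preconditioner with SciPy's cg; SciPy expects M to act as an approximation of A⁻¹, so we apply M⁻¹ = diag(A)⁻¹ (laplace_2d is the illustrative helper from above):

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

n = 50
A = laplace_2d(n)
b = np.ones(n * n)

d_inv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda r: d_inv * r)

x, info = cg(A, b, M=M)                    # PCG with M = diag(A)
print(info)                                # 0 on convergence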

179 Iterative methods 8.5 Preconditioner examples

Incomplete LU-factorisation

M = L̄Ū,

• L̄ and Ū – approximations of the actual factors L and U in the LU-decomposition

– nonzeroes in L̄_{ij} and Ū_{ij} only where A_{ij} ≠ 0 (i.e. fill-in is ignored in the LU-factorisation algorithm)

(i) flops/iteration = O(N)

(ii) κ₂(B) < κ₂(A)

How good is this preconditioner? At least some improvement is to be expected!

κ₂(B) = O(n²)

180 Iterative methods 8.5 Preconditioner examples

Gauss-Seidel method

do k=1,2,...
    do i=1,...,n
        x_i^{(k+1)} = (1/A_{i,i}) ( b_i − Σ_{j=1}^{i−1} A_{i,j} x_j^{(k+1)} − Σ_{j=i+1}^{n} A_{i,j} x_j^{(k)} ) (26)
    enddo
enddo

Note that in a real implementation the method is carried out in place:

do k=1,2,...
    do i=1,...,n
        x_i = (1/A_{i,i}) ( b_i − Σ_{j≠i} A_{i,j} x_j ) (27)
    enddo
enddo

Do you see a problem with this preconditioner (with the PCG method)? The preconditioner is not symmetric, which makes CG fail to converge!

181 Iterative methods 8.5 Preconditioner examples

Symmetric Gauss-Seidel method

To get a symmetric preconditioner, a backward sweep is added after each forward sweep:

do k=1,2,...
    do i=1,...,n                    # forward sweep
        x_i = (1/A_{i,i}) ( b_i − Σ_{j≠i} A_{i,j} x_j )
    enddo
    do i=n,...,1                    # backward sweep
        x_i = (1/A_{i,i}) ( b_i − Σ_{j≠i} A_{i,j} x_j )
    enddo
enddo
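A compact NumPy sketch (ours) of one symmetric Gauss-Seidel iteration, updating x in place:

import numpy as np

def sym_gauss_seidel_sweep(A, b, x):
    """One forward plus one backward Gauss-Seidel sweep over x."""
    n = len(b)
    for i in list(range(n)) + list(range(n - 1, -1, -1)):
        s = A[i, :] @ x - A[i, i] * x[i]    # sum over j != i, current values
        x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([0., 0., 1., 1.])
x = np.zeros(4)
for _ in range(25):
    sym_gauss_seidel_sweep(A, b, x)
print(x)                                    # approaches [0.125 0.125 0.375 0.375]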

182 Iterative methods CONTENTS

Contents

1 Introduction 3
1.1 Syllabus . . . 3
1.2 Literature . . . 5
1.3 Scripting vs programming . . . 7
1.3.1 What is a script? . . . 7
1.3.2 Characteristics of a script . . . 8
1.3.3 Why not stick to Java, C/C++ or Fortran? . . . 9
1.4 Scripts yield short code . . . 10
1.5 Performance issues . . . 11
1.5.1 Scripts can be slow . . . 11
1.5.2 Scripts may be fast enough . . . 13
1.5.3 When scripting is convenient . . . 14
1.5.4 When to use C, C++, Java, Fortran . . . 15

2 What is Scientific Computing? 17
2.1 Introduction to Scientific Computing . . . 17
2.2 Specifics of computational problems . . . 23
2.3 Mathematical model . . . 25

3 Approximation in Scientific Computing 30
3.1 Sources of approximation error . . . 30
3.1.1 Error sources that are under our control . . . 30
3.1.2 Errors created during the calculations . . . 32
3.1.3 Forward error (arvutuslik viga e. tulemuse viga) and backward error (algandmete viga) . . . 35
3.2 Floating-Point Numbers . . . 43
3.3 Normalised floating-point numbers . . . 45
3.4 IEEE (Normalised) Arithmetics . . . 47

4 Python in Scientific Computing 51
4.1 Numerical Python (NumPy) . . . 51
4.1.1 NumPy: making arrays . . . 54
4.1.2 NumPy: making float, int, complex arrays . . . 55
4.1.3 Array with a sequence of numbers . . . 56
4.1.4 Warning: arange is dangerous . . . 57
4.1.5 Array construction from a Python list . . . 57
4.1.6 From “anything” to a NumPy array . . . 59
4.1.7 Changing array dimensions . . . 61
4.1.8 Array initialization from a Python function . . . 62
4.1.9 Basic array indexing . . . 63
4.1.10 More advanced array indexing . . . 64
4.1.11 Slices refer the array data . . . 65
4.1.12 Integer arrays as indices . . . 67
4.1.13 Loops over arrays . . . 68
4.1.14 Array computations . . . 71
4.1.15 In-place array arithmetics . . . 72
4.1.16 Standard math functions can take array arguments . . . 75
4.1.17 Other useful array operations . . . 76
4.1.18 Temporary arrays . . . 77
4.1.19 More useful array methods and attributes . . . 78
4.1.20 Complex number computing . . . 80
4.1.21 A root function . . . 82
4.1.22 Array type and data type . . . 83
4.1.23 Matrix objects . . . 85
4.2 NumPy: Vectorisation . . . 87
4.2.1 Vectorisation of functions with if tests; problem . . . 89
4.2.2 Vectorisation of functions with if tests; solutions . . . 91
4.2.3 General vectorization of if-else tests . . . 93
4.2.4 Vectorization via slicing . . . 94
4.3 NumPy: Random numbers . . . 95
4.4 NumPy: Basic linear algebra . . . 97
4.4.1 A linear algebra session . . . 98
4.5 Python: Plotting modules . . . 99
4.6 I/O . . . 103
4.6.1 File I/O with arrays; plain ASCII format . . . 103
4.6.2 File I/O with arrays; binary pickling . . . 105
4.7 SciPy . . . 107
4.7.1 Overview . . . 107

5 Solving Systems of Linear Equations 108
5.1 Systems of Linear Equations . . . 108
5.2 Classification . . . 110
5.2.1 Problem Transformation . . . 111
5.3 Triangular linear systems . . . 113
5.4 Elementary Elimination Matrices . . . 115
5.5 Gauss Elimination and LU Factorisation . . . 117
5.6 Number of operations in GEM . . . 120
5.7 GEM with row permutations . . . 121
5.8 Reliability of the LU-factorisation with partial pivoting . . . 129

6 BLAS (Basic Linear Algebra Subroutines) 134
6.1 Motivation . . . 134
6.2 BLAS implementations . . . 140

7 Numerical Solution of Differential Equations 141
7.1 Ordinary Differential Equations (ODE) . . . 141
7.1.1 Numerical methods for solving ODEs . . . 142
7.2 Partial Differential Equations (PDE) . . . 149
7.3 2nd order PDEs . . . 151
7.4 Time-independent PDE-s . . . 153
7.4.1 Finite Difference Method (FDM) . . . 153
7.4.2 Finite Element Method (FEM) . . . 158
7.5 Sparse matrix storage schemes . . . 162
7.5.1 Triple storage format . . . 163
7.5.2 Column-major storage format . . . 164
7.5.3 Row-major storage format . . . 165
7.5.4 Combined schemes . . . 166

8 Iterative methods 168
8.1 Problem setup . . . 168
8.2 Jacobi Method . . . 170
8.3 Conjugate Gradient Method (CG) . . . 173
8.4 Preconditioning . . . 176
8.5 Preconditioner examples . . . 178
