
Introduction to Saddle Point Problems

Michele Benzi

Department of Mathematics and Computer Science, Emory University

Atlanta, GA

Woudschoten, 4 October 2012


Motivation and goals

Let A be a real, symmetric, n × n matrix and let f ∈ Rⁿ be given.

Let 〈·, ·〉 denote the standard inner product in Rⁿ.

Consider the following two problems:

1 Solve Au = f

2 Minimize the function J(u) = ½〈Au, u〉 − 〈f, u〉

Note that ∇J(u) = Au − f. Hence, if A is positive definite (SPD), the two problems are equivalent, and there exists a unique solution u∗ = A⁻¹f.

Many algorithms exist for solving SPD linear systems: Cholesky, Preconditioned Conjugate Gradients, AMG, etc.
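As a quick sanity check of this equivalence, here is a minimal NumPy/SciPy sketch (the matrix and right-hand side are random stand-ins, not from the slides): it builds an SPD A, solves Au = f directly and with Conjugate Gradients, and verifies that the gradient of J vanishes at the solution.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 50
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD by construction
f = rng.standard_normal(n)

u_star = np.linalg.solve(A, f)         # solve Au = f directly
u_cg, info = cg(A, f)                  # Conjugate Gradients (info == 0: converged)

grad = A @ u_star - f                  # gradient of J(u) = 1/2<Au,u> - <f,u>
print(np.linalg.norm(grad))            # ~ 0: the solution minimizes J
print(info, np.linalg.norm(u_star - u_cg))
```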


Motivation and goals (cont.)

Now we add a set of linear constraints:

Minimize J(u) = ½〈Au, u〉 − 〈f, u〉

subject to Bu = g

where

A is n × n, symmetric

B is m × n, with m < n

f ∈ Rⁿ, g ∈ Rᵐ are given (either f or g could be 0, but not both)

Standard approach: introduce Lagrange multipliers p ∈ Rᵐ

Lagrangian: L(u, p) = ½〈Au, u〉 − 〈f, u〉 + 〈p, Bu − g〉

First-order optimality conditions: ∇ᵤL = 0, ∇ₚL = 0


Motivation and goals (cont.)

Optimality conditions:

∇ᵤL = Au + Bᵀp − f = 0,   ∇ₚL = Bu − g = 0

or, in matrix form,

\[
\begin{pmatrix} A & B^{T} \\ B & O \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\tag{1}
\]

System (1) is a saddle point problem. Its solutions (u∗, p∗) are saddle points for the Lagrangian L(u, p):

\[
\min_{u} \max_{p} L(u, p) = L(u^{*}, p^{*}) = \max_{p} \min_{u} L(u, p)
\]

Also called a KKT system (Karush–Kuhn–Tucker), or equilibrium equations.

Gil Strang calls (1) “the fundamental problem of scientific computing.”
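The block structure of (1) is easy to verify numerically. A small sketch (with random stand-ins: a random SPD A and a generically full-rank B) assembles the KKT matrix and checks both optimality conditions at the computed solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 8
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))        # full row rank (generically)
f = rng.standard_normal(n)
g = rng.standard_normal(m)

K = np.block([[A, B.T],
              [B, np.zeros((m, m))]])  # the saddle point matrix of (1)
sol = np.linalg.solve(K, np.concatenate([f, g]))
u, p = sol[:n], sol[n:]

print(np.linalg.norm(A @ u + B.T @ p - f))   # grad_u L = 0
print(np.linalg.norm(B @ u - g))             # grad_p L = 0 (constraint holds)
```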


Motivation and goals (cont.)

Saddle point problems do occur frequently, e.g.:

Incompressible flow problems (Stokes, linearized Navier–Stokes)

Linear elasticity

Mixed FEM formulations of 2nd- and 4th-order elliptic PDEs

PDE-constrained optimization (e.g., variational data assimilation)

SQP and IP methods for nonlinear constrained optimization

Structural analysis

Resistive networks, power network analysis

Image processing (e.g., image registration)

Comprehensive survey: M. Benzi, G. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numerica 14 (2005), pp. 1–137.

The bibliography in this paper contains 535 items.

Google Scholar reports over 700 citations to date.


Motivation and goals (cont.)

The aim of these lectures:

To review the basic properties of saddle point systems

To give a broad overview of “standard” solution algorithms

To present some recent developments

The emphasis of this lecture (and the next) will be on iterative solvers for large, sparse saddle point problems, with a focus on our own recent work on preconditioners for incompressible flow problems.

The ultimate goal: to develop robust preconditioners that perform uniformly well independently of discretization details and problem parameters.

For flow problems, we would like to have solvers that converge fast regardless of mesh size, viscosity, etc. Moreover, the cost per iteration should be linear in the number of unknowns.


Outline

1 Properties of saddle point matrices

2 Examples of saddle point problems

3 Some solution algorithms


Properties of saddle point matrices

Solvability of saddle point problems

The following result establishes necessary and sufficient conditions for the unique solvability of the saddle point problem (1).

Theorem. Assume that

A is symmetric positive semidefinite n × n

B has full rank: rank (B) = m

Then the coefficient matrix

\[
\mathcal{A} = \begin{pmatrix} A & B^{T} \\ B & O \end{pmatrix}
\]

is nonsingular ⇔ Null(A) ∩ Null(B) = {0}.

Furthermore, 𝒜 is indefinite, with n positive and m negative eigenvalues.

In particular, 𝒜 is invertible if A is SPD and B has full rank (“standard case”).
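The inertia statement can be observed numerically; the following sketch (random stand-ins: SPD A, generically full-rank B) counts the positive and negative eigenvalues of 𝒜:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 8
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD, so Null(A) = {0}
B = rng.standard_normal((m, n))        # full rank with probability 1

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
eigs = np.linalg.eigvalsh(K)           # K is symmetric
print((eigs > 0).sum(), (eigs < 0).sum())   # expect n positive, m negative
```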


Generalizations, I

In some cases, a stabilization (or regularization) term needs to be added in the (2,2) position, leading to linear systems of the form

\[
\begin{pmatrix} A & B^{T} \\ B & -\beta C \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\tag{2}
\]

where β > 0 is a small parameter and the m × m matrix C is symmetric positive semidefinite, and often singular, with ‖C‖₂ = 1.

This type of system arises, for example, from the stabilization of FEM pairs that do not satisfy the LBB (‘inf-sup’) condition.

Another important example is the discretization of the Reissner–Mindlin plate model in linear elasticity. In this case β is related to the thickness of the plate; the limit case β = 0 can be seen as a reformulation of the biharmonic problem.
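To illustrate why the −βC term helps, here is a sketch with a deliberately rank-deficient B (all matrices are illustrative stand-ins, not an actual FEM discretization): the unstabilized matrix is singular, while adding −βC with a PSD, singular C that acts on the deficient pressure mode restores invertibility.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 8
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                # SPD (1,1) block
B = rng.standard_normal((m, n))
B[-1] = B[0]                               # force rank(B) = m - 1

z = (np.eye(m)[0] - np.eye(m)[-1]) / np.sqrt(2)   # unit vector spanning Null(B^T)
C = np.outer(z, z)                         # PSD, singular, ||C||_2 = 1
beta = 1e-2

K0 = np.block([[A, B.T], [B, np.zeros((m, m))]])
K  = np.block([[A, B.T], [B, -beta * C]])
print(np.linalg.matrix_rank(K0), n + m)    # n + m - 1: singular without C
print(np.linalg.matrix_rank(K),  n + m)    # n + m: nonsingular with -beta*C
```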


Generalizations, II

In other cases, the matrix A is not symmetric: A ≠ Aᵀ. In this case, the saddle point system does not arise from a constrained minimization problem.

The most important examples of this case are linear systems arising from the Picard and Newton linearizations of the steady incompressible Navier–Stokes equations. The following result is applicable to the Picard linearization (Oseen problem):

Theorem. Assume that

H = ½(A + Aᵀ) is symmetric positive semidefinite n × n

B has full rank: rank(B) = m

Then

Null(H) ∩ Null(B) = {0} ⇒ 𝒜 invertible

𝒜 invertible ⇒ Null(A) ∩ Null(B) = {0}.
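A quick numerical illustration of the first implication (random stand-ins: A is built as an SPD “diffusion” part plus a skew-symmetric “convection” part, mimicking an Oseen matrix, so that H is SPD):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 20, 8
G = rng.standard_normal((n, n))
D = G @ G.T + n * np.eye(n)            # SPD part
N = rng.standard_normal((n, n))
A = D + (N - N.T) / 2                  # H = (A + A^T)/2 = D is SPD
B = rng.standard_normal((m, n))        # full rank (generically)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
print(np.linalg.matrix_rank(K) == n + m)   # True: Null(H) ∩ Null(B) = {0} ⇒ invertible
```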


Nonsymmetric, positive definite form

Consider the following equivalent formulation:

\[
\begin{pmatrix} A & B^{T} \\ -B & O \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ -g \end{pmatrix}
\]

Theorem. Assume B has full rank. If H = ½(A + Aᵀ) is positive definite, then the spectrum of

\[
\mathcal{A}_{-} := \begin{pmatrix} A & B^{T} \\ -B & O \end{pmatrix}
\]

lies entirely in the open right-half plane Re(z) > 0. Moreover, if A is SPD and the following condition holds:

λmin(A) > 4 λmax(S), where S = BA⁻¹Bᵀ (“Schur complement”),

then 𝒜₋ is diagonalizable with real positive eigenvalues. In this case, there exists a non-standard inner product on Rⁿ⁺ᵐ in which 𝒜₋ is self-adjoint and positive definite, and a corresponding conjugate gradient method (B./Simoncini, NM 2006).
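The following sketch checks both statements numerically (random stand-ins; if the eigenvalue condition fails for the initial A, a diagonal shift is applied so that it holds):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 20, 8
G = rng.standard_normal((n, n))
A = G @ G.T + np.eye(n)                        # SPD
B = rng.standard_normal((m, n))

S = B @ np.linalg.solve(A, B.T)                # Schur complement B A^{-1} B^T
lam_min_A = np.linalg.eigvalsh(A)[0]
lam_max_S = np.linalg.eigvalsh(S)[-1]
if lam_min_A <= 4 * lam_max_S:
    A = A + 4 * lam_max_S * np.eye(n)          # enforce lambda_min(A) > 4 lambda_max(S)

K_minus = np.block([[A, B.T], [-B, np.zeros((m, m))]])
eigs = np.linalg.eigvals(K_minus)
print(eigs.real.min() > 0)                     # spectrum in the right-half plane
print(np.abs(eigs.imag).max())                 # ~ 0: eigenvalues are real
```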


Examples of saddle point problems


Example 1: Mixed formulation of the Poisson equation

Let Ω be a domain in Rᵈ and consider the system

u − ∇p = f in Ω,

div u = g in Ω,

u · n = 0 on ∂Ω.

Eliminating the vector field u, the scalar field p must satisfy

−∆p = div f − g in Ω,    ∂p/∂n = −f · n on ∂Ω.

Weak formulation: Find (u, p) ∈ (L²(Ω))ᵈ × (H¹(Ω) ∩ L²₀(Ω)) such that

〈u, v〉 − 〈∇p, v〉 = 〈f, v〉,   v ∈ (L²(Ω))ᵈ,

−〈u, ∇q〉 = 〈g, q〉,   q ∈ H¹(Ω) ∩ L²₀(Ω),

where 〈·, ·〉 denotes the L² inner product.


Example 1: Mixed formulation of the Poisson equation (cont.)

Discretization using LBB-stable finite element pairs leads to an algebraic saddle point problem:

\[
\begin{pmatrix} M & B^{T} \\ B & O \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\]

Here M is a mass matrix, B the discrete divergence, and Bᵀ the discrete (negative) gradient.

For this problem, several optimal preconditioned iterative solvers have been developed.

By optimal we mean the following:

The convergence rate is independent of discretization parameters

The cost of each iteration is linear in the number of unknowns
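Since the (1,1) block is a mass matrix (SPD and easy to invert), one natural solution strategy is block elimination via the Schur complement, sketched below with random SPD/full-rank stand-ins for M and B:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 20, 8
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)            # stand-in for the SPD mass matrix
B = rng.standard_normal((m, n))        # stand-in for the discrete divergence
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# eliminate u: (B M^{-1} B^T) p = B M^{-1} f - g, then u = M^{-1}(f - B^T p)
S = B @ np.linalg.solve(M, B.T)        # Schur complement (SPD)
p = np.linalg.solve(S, B @ np.linalg.solve(M, f) - g)
u = np.linalg.solve(M, f - B.T @ p)

print(np.linalg.norm(M @ u + B.T @ p - f), np.linalg.norm(B @ u - g))
```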


Example 2: the generalized Stokes problem

Let Ω be a domain in Rᵈ and let α ≥ 0, ν > 0. Consider the system

αu − ν∆u + ∇p = f in Ω,

div u = 0 in Ω,

u = 0 on ∂Ω.

Weak formulation: Find (u, p) ∈ (H¹₀(Ω))ᵈ × L²₀(Ω) such that

α〈u, v〉 + ν〈∇u, ∇v〉 − 〈p, div v〉 = 〈f, v〉,   v ∈ (H¹₀(Ω))ᵈ,

〈q, div u〉 = 0,   q ∈ L²₀(Ω).

The standard Stokes problem is obtained for α = 0 (steady case). In this case we can assume ν = 1.


Example 2: the generalized Stokes problem (cont.)

Discretization using LBB-stable finite element pairs or other div-stable schemes leads to an algebraic saddle point problem:

\[
\begin{pmatrix} A & B^{T} \\ B & O \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix}
\]

Here A is a discrete reaction-diffusion operator, Bᵀ the discrete gradient, and B the discrete (negative) divergence. For α = 0, A is just the discrete vector Laplacian.

If an unstable FEM pair is used, then a regularization term −βC is added in the (2,2) block of 𝒜. The specific choice of β and C depends on the particular discretization used.

Robust, optimal solvers have been developed for this problem: Cahouet–Chabard for α > 0; Silvester–Wathen for α = 0.
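The idea behind such solvers can be sketched as block-diagonally preconditioned MINRES: precondition with diag(Â, Ŝ), where Â approximates the (1,1) block (e.g., one multigrid cycle) and Ŝ approximates the Schur complement (for steady Stokes, the pressure mass matrix). The sketch below uses random stand-ins and, for simplicity, the exact Schur complement, in which case MINRES converges in a handful of iterations:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(7)
n, m = 40, 15
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                   # stand-in for the (1,1) block
B = rng.standard_normal((m, n))
S = B @ np.linalg.solve(A, B.T)               # exact Schur complement ("ideal" S_hat)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

def apply_prec(x):
    # block-diagonal solves; in practice: multigrid for A, mass matrix for S
    return np.concatenate([np.linalg.solve(A, x[:n]),
                           np.linalg.solve(S, x[n:])])

P = LinearOperator((n + m, n + m), matvec=apply_prec)
iters = []
x, info = minres(K, rhs, M=P, callback=lambda xk: iters.append(1))
print(len(iters), np.linalg.norm(K @ x - rhs))   # few iterations, small residual
```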


Sparsity pattern: 2D Stokes (Q1–P0)

[Figure: sparsity pattern of the saddle point matrix without stabilization (C = O)]

[Figure: sparsity pattern with stabilization (C ≠ O)]


Example 3: the generalized Oseen problem

Let Ω be a domain in R^d and let α ≥ 0 and ν > 0. Also, let w be a divergence-free vector field on Ω. Consider the system

αu − ν∆u + (w · ∇)u + ∇p = f   in Ω,
div u = 0   in Ω,
u = 0   on ∂Ω.

Note that for w = 0 we recover the generalized Stokes problem.

Weak formulation: Find (u, p) ∈ (H^1_0(Ω))^d × L^2_0(Ω) such that

α〈u, v〉 + ν〈∇u, ∇v〉 + 〈(w · ∇)u, v〉 − 〈p, div v〉 = 〈f, v〉   for all v ∈ (H^1_0(Ω))^d,

〈q, div u〉 = 0   for all q ∈ L^2_0(Ω).

The standard Oseen problem is obtained for α = 0 (steady case).


Example 3: the generalized Oseen problem (cont.)

Discretization using LBB-stable finite element pairs or other div-stable schemes leads to an algebraic saddle point problem:

( A   B^T ) ( u )   ( f )
( B   O   ) ( p ) = ( 0 )

Now A is a discrete reaction-convection-diffusion operator. For α = 0, A is just a discrete vector convection-diffusion operator. Note that now A ≠ A^T.

The Oseen problem arises from Picard iteration applied to the steady incompressible Navier–Stokes equations, and from fully implicit schemes applied to the unsteady NSE. The ‘wind’ w represents an approximation of the solution u obtained from the previous Picard step, or from time-lagging.

As we will see, this problem can be very challenging to solve, especially for small values of the viscosity ν and on stretched meshes.
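The Picard iteration itself is easy to write down. The sketch below (an assumption-laden outline, not the lecture's code) lags the convective term: each step assembles and solves one linear Oseen system, with the wind taken from the previous velocity iterate. The helper assemble_oseen(w) is hypothetical; it stands for whatever routine returns the blocks A(w) and B of the discretization.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def picard(assemble_oseen, f, n, m, tol=1e-8, maxit=25):
    """Steady NSE by Picard: at each step the wind w is the previous
    velocity iterate, so only a linear Oseen problem is solved."""
    u = np.zeros(n)                      # zero initial wind: a Stokes-like first step
    for k in range(maxit):
        A, B = assemble_oseen(u)         # A(w): reaction-convection-diffusion block
        K = sp.bmat([[A, B.T], [B, None]], format="csc")
        rhs = np.concatenate([f, np.zeros(m)])
        x = spla.spsolve(K, rhs)         # direct solve; a preconditioned Krylov
        u_new, p = x[:n], x[n:]          # method would be used in practice
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new, p
        u = u_new
    return u, p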


Eigenvalues of discrete Oseen problem (ν = 0.01), indefinite form

Eigenvalues of the Oseen matrix 𝒜 = ( A  B^T ; B  O ), MAC discretization.

[Figure: spectrum in the complex plane; real parts roughly between −50 and 250, imaginary parts between −4 and 4]

Note the different scales in the x and y axes.


Eigenvalues of discrete Oseen problem (ν = 0.01), positive definite form

Eigenvalues of the Oseen matrix 𝒜₋ = ( A  B^T ; −B  O ), MAC discretization.

[Figure: spectrum in the complex plane; real parts roughly between 0 and 200, imaginary parts between −6 and 6]

Note the different scales in the x and y axes.
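The effect of flipping the sign of the B block is easy to reproduce on small random stand-ins (not the MAC discretization). When the symmetric part of A is positive definite, the quadratic form of 𝒜₋ shows its spectrum lies in the closed right half-plane, while the indefinite form has eigenvalues with negative real part.

import numpy as np
import scipy.sparse as sp

n, m = 40, 15
M = sp.random(n, n, density=0.2, random_state=0).toarray()
A = M @ M.T + n * np.eye(n) + 0.1 * (M - M.T)   # nonsymmetric, SPD symmetric part
B = np.random.default_rng(1).standard_normal((m, n))

K_ind = np.block([[A, B.T], [B, np.zeros((m, m))]])    # indefinite form
K_min = np.block([[A, B.T], [-B, np.zeros((m, m))]])   # "minus" form

# min real part: negative for the indefinite form,
# nonnegative for the minus form (symmetric part is PSD)
print("indefinite:", np.linalg.eigvals(K_ind).real.min())
print("minus form:", np.linalg.eigvals(K_min).real.min())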


Some solution algorithms

Outline

1 Properties of saddle point matrices

2 Examples of saddle point problems

3 Some solution algorithms


Overview of available solvers

Two main classes of solvers exist:

1 Direct methods: based on factorization of 𝒜

High-quality software exists (Duff et al.; Demmel et al.)
Quite popular in some areas
Stability issues (indefiniteness)
Large amounts of fill-in
Not feasible for 3D problems
Difficult to parallelize

2 Krylov subspace methods (MINRES, GMRES, Bi-CGSTAB, ...)

Appropriate for large, sparse problems
Tend to converge slowly
Number of iterations increases as problem size grows
Effective preconditioners a must

Much effort has been put into developing preconditioners, with optimality and robustness w.r.t. parameters as the ultimate goals. Parallelizability also needs to be taken into account.


Preconditioners

Preconditioning: Find an invertible matrix P such that Krylov methods applied to the preconditioned system

P⁻¹𝒜x = P⁻¹b

will converge rapidly (possibly, independently of the discretization parameter h).

In practice, fast convergence is typically observed when the eigenvalues of the preconditioned matrix P⁻¹𝒜 are clustered away from zero. However, it is not an easy matter to characterize the rate of convergence, in general.

To be effective, a preconditioner must significantly reduce the total amount of work:

Setting up P must be inexpensive
Evaluating z = P⁻¹r must be inexpensive
Convergence must be rapid
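The mechanics of supplying a preconditioner to a Krylov solver are simple: only the action z = P⁻¹r is needed, not P itself. A minimal sketch with SciPy's GMRES, using a 1D diffusion-like stand-in matrix and a Jacobi (diagonal) preconditioner purely for illustration:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

Dinv = 1.0 / A.diagonal()
P_inv = spla.LinearOperator((n, n), matvec=lambda r: Dinv * r)  # z = P^{-1} r

x, info = spla.gmres(A, b, M=P_inv)    # M is the (approximate inverse) preconditioner
print("info =", info, " residual =", np.linalg.norm(b - A @ x))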


Preconditioners

Options include:

1 ILU preconditioners

2 Coupled multigrid methods (geometric and algebraic; Vanka-type)

3 Schur complement-based methods (‘segregated’ approach)

Block diagonal preconditioning
Block triangular preconditioning (Elman, Silvester, Wathen et al.)
SIMPLE and its variants (ur Rehman, Segal, Vuik)

4 Constraint preconditioning (‘null space methods’)

5 Schilders factorization

6 Augmented Lagrangian-based techniques (AL)

The choice of an appropriate preconditioner is highly problem-dependent.


Preconditioners (cont.)

Example: The Silvester–Wathen preconditioner for the Stokes problem is

P = ( Â   O  )
    ( O   M̂p )

where the action of Â⁻¹ is given by a multigrid V-cycle applied to linear systems with coefficient matrix A, and M̂p is the diagonal of the pressure mass matrix.

This preconditioner is provably optimal:

MINRES preconditioned with P converges at a rate independent of the mesh size h
Each preconditioned MINRES iteration costs O(n + m) flops
Efficient parallelization is possible
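A hedged sketch of the application of such a block diagonal preconditioner is given below. The pressure mass matrix Mp is assumed given, and an incomplete factorization stands in for the multigrid V-cycle of the slides; note that MINRES requires a symmetric positive definite preconditioner, so the ILU stand-in only illustrates the mechanics (a symmetric multigrid cycle would be used in practice).

import numpy as np
import scipy.sparse.linalg as spla

def block_diag_prec(A, Mp):
    """Action z = P^{-1} r of a Silvester-Wathen-type block diagonal
    preconditioner: approximate A-solve on the velocity part,
    diag(Mp)^{-1} scaling on the pressure part."""
    n, m = A.shape[0], Mp.shape[0]
    A_ilu = spla.spilu(A.tocsc())            # stand-in for one MG V-cycle
    mp_diag = Mp.diagonal()
    def apply(r):
        z = np.empty_like(r)
        z[:n] = A_ilu.solve(r[:n])           # approximate A^{-1} r_u
        z[n:] = r[n:] / mp_diag              # (diag Mp)^{-1} r_p
        return z
    return spla.LinearOperator((n + m, n + m), matvec=apply)

# usage with MINRES on K = [A B^T; B O]:
#   P_inv = block_diag_prec(A, Mp)
#   x, info = spla.minres(K, rhs, M=P_inv)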


Example: Silvester–Wathen preconditioner

Number of MINRES iterations needed for a 10⁻⁶ reduction in residual for locally stabilized Q1–P0 mixed finite elements for Stokes flow in a cavity. Block diagonal preconditioner: Â is one multigrid V-cycle with (1,1) relaxed Jacobi smoothing and M̂p is the diagonal pressure mass matrix. The cpu time (in seconds) is on a Sun sparcv9 502 MHz processor with 1024 Mb of memory. The cpu time is also given for a sparse direct solve (UMFPACK in MATLAB).

grid        n        m        iterations   cpu time   sparse direct cpu
64 × 64     8450     4096     38           14.3       6.8
128 × 128   33282    16384    37           37.7       48.0
256 × 256   132098   65536    36           194.6      897
512 × 512   526339   263169   35           6903       out of memory

Note: the lack of scalability for finer meshes is caused by memory management issues.

But what about more difficult problems?


Block preconditioners

If A is invertible, the saddle point matrix 𝒜 has the block LU factorization

𝒜 = ( A   B^T ) = ( I      O ) ( A   B^T )
    ( B   O   )   ( BA⁻¹   I ) ( O   S   ),

where S = −BA⁻¹B^T (Schur complement).

Let

P_D = ( A    O )        P_T = ( A   B^T )
      ( O   −S ),              ( O   S  ),

then

The spectrum of P_D⁻¹𝒜 is σ(P_D⁻¹𝒜) = { 1, (1 ± √5)/2 }
The spectrum of P_T⁻¹𝒜 is σ(P_T⁻¹𝒜) = { 1 }

GMRES converges in three iterations with P_D, and in two iterations with P_T.
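These spectra are easy to confirm numerically on small dense random stand-ins (A SPD, B full row rank); a quick sketch:

import numpy as np

n, m = 12, 5
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                       # SPD (1,1) block
B = rng.standard_normal((m, n))

S = -B @ np.linalg.solve(A, B.T)                  # Schur complement (negative definite)
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
P_D = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), -S]])   # diag(A, -S), SPD
P_T = np.block([[A, B.T], [np.zeros((m, n)), S]])

for P in (P_D, P_T):
    ev = np.linalg.eigvals(np.linalg.solve(P, K)).real
    print(sorted(set(np.round(ev, 6))))
# P_D: [-0.618034, 1.0, 1.618034], i.e. {(1-sqrt(5))/2, 1, (1+sqrt(5))/2}
# P_T: [1.0] up to roundoff (defective: minimal polynomial of degree 2)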


Block preconditioners (cont.)

In practice, it is necessary to replace A and S with easily invertible approximations:

P_D = ( Â   O )        P_T = ( Â   B^T )
      ( O   Ŝ ),              ( O   Ŝ  )

Â should be spectrally equivalent to A: that is, we want cond(Â⁻¹A) ≤ c for some constant c independent of h

Often a small, fixed number of multigrid V-cycles will do

Approximating S is more involved, except in special situations; for example, in the case of Stokes we can use the pressure mass matrix (Ŝ = Mp) or its diagonal, assuming the LBB condition holds. This is the Silvester–Wathen preconditioner.

For the Oseen problem this does not work, except for very small Reynolds numbers.
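Applying the block triangular variant P_T is a short back-substitution: one Ŝ-solve and one Â-solve per application. A minimal sketch, with solve_Ahat and solve_Shat assumed given (e.g. a multigrid cycle and a pressure mass matrix solve in the Stokes case):

import numpy as np
import scipy.sparse.linalg as spla

def block_triangular_prec(solve_Ahat, solve_Shat, BT, n, m):
    """Action z = P_T^{-1} r for P_T = [Ahat B^T; O Shat] (upper triangular):
    back-substitution from the pressure block to the velocity block."""
    def apply(r):
        z = np.empty(n + m)
        z[n:] = solve_Shat(r[n:])                 # z_p = Shat^{-1} r_p
        z[:n] = solve_Ahat(r[:n] - BT @ z[n:])    # z_u = Ahat^{-1}(r_u - B^T z_p)
        return z
    return spla.LinearOperator((n + m, n + m), matvec=apply)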


Block preconditioners (cont.)

Recall that S = −BA⁻¹B^T is a discretization of the operator

S = div (−ν∆ + w · ∇)⁻¹ ∇

A plausible (if non-rigorous) approximation of the inverse of this operator is

Ŝ⁻¹ := ∆⁻¹ (−ν∆ + w · ∇)_p

where the subscript p indicates that the convection-diffusion operator acts on the pressure space. Hence, the action of S⁻¹ can be approximated by a matrix-vector multiply with a discrete pressure convection-diffusion operator, followed by a Poisson solve.

This is known as the pressure convection-diffusion preconditioner (PCD), introduced and analyzed by Kay, Loghin, and Wathen (SISC, 2001).

This preconditioner performs well for small or moderate Reynolds numbers.
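A hedged sketch of the resulting approximate Schur complement action: a matvec with a pressure convection-diffusion matrix Fp followed by a pressure Poisson solve with Ap. Both matrices are assumed to be supplied by the discretization, and sign and boundary-condition conventions vary in the literature.

import scipy.sparse.linalg as spla

def pcd_schur_inverse(Ap, Fp):
    """LinearOperator approximating the Schur complement inverse via
    Shat^{-1} = Ap^{-1} Fp (PCD): convection-diffusion matvec, then a
    Poisson solve (up to sign conventions)."""
    Ap_lu = spla.splu(Ap.tocsc())        # factor the pressure Poisson matrix once
    m = Ap.shape[0]
    return spla.LinearOperator((m, m), matvec=lambda r: Ap_lu.solve(Fp @ r))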


Some solution algorithms

Results for Kay, Loghin, and Wathen preconditioner

Test problems: steady Oseen, homogeneous Dirichlet BCs, two choices of the wind function.

A constant wind problem: w = (1, 0)ᵀ

A recirculating flow (vortex) problem: w = (4(2y − 1)(1 − x)x, −4(2x − 1)(1 − y)y)ᵀ

Uniform FEM discretizations: isoP2-P0 and isoP2-P1. These discretizations satisfy the inf-sup condition: no pressure stabilization is needed. SUPG stabilization is used for the velocities.

The Krylov subspace method used is Bi-CGSTAB. This method requires two matrix-vector multiplies with A and two applications of the preconditioner at each iteration.

A preconditioning step requires two convection-diffusion solves (three in 3D) and one Poisson solve at each iteration, plus some mat-vecs.
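
For orientation, a minimal sketch of the outer solve (Python/SciPy; the assembled saddle point matrix K, right-hand side b, and a preconditioner application apply_prec are assumed inputs, not taken from the talk):

import scipy.sparse.linalg as spla

def solve_oseen(K, b, apply_prec, rtol=1e-6):
    n = K.shape[0]
    # the preconditioner enters Bi-CGSTAB as a linear operator;
    # it is applied twice per iteration, as is K
    M = spla.LinearOperator((n, n), matvec=apply_prec)
    # stop once ||b - K x||_2 < rtol * ||b||_2, as in the tables below
    x, info = spla.bicgstab(K, b, M=M, rtol=rtol)  # keyword is 'tol' in older SciPy
    return x, info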


Some solution algorithms

Results for Kay, Loghin, and Wathen preconditioner (cont.)

Results for KLW preconditioner.

                           viscosity ν
mesh size h     1          0.1        0.01       0.001           0.0001

constant wind
1/16            6 / 12     8 / 16     12 / 24    30 / 34         100 / 80
1/32            6 / 10     10 / 16    14 / 24    24 / 28         86 / 92
1/64            6 / 10     8 / 14     16 / 24    22 / 32         64 / 66
1/128           6 / 10     8 / 12     16 / 26    24 / 36         64 / 58

rotating vortex
1/16            6 / 8      10 / 12    30 / 40    > 400 / 188
1/32            6 / 8      10 / 12    30 / 40    > 400 / 378
1/64            4 / 6      8 / 12     26 / 40    > 400 / > 400
1/128           4 / 6      8 / 10     22 / 44    228 / > 400

Number of Bi-CGSTAB iterations. (Note: exact solves used throughout. Stopping criterion: ‖b − Axₖ‖₂ < 10⁻⁶ ‖b‖₂.)


Some solution algorithms

Results with Vanka-type MG preconditioner

Results for Vanka-MG-BiCGStab approach, isoP2-P0 FEM.

                           viscosity ν
mesh size h     1          0.1        0.01       10⁻³       10⁻⁴

constant wind
1/16            4          4          3          4          5
1/32            4          4          3          4          4
1/64            4          4          4          3          5
1/128           4          4          4          3          4

rotating vortex
1/16            4          5          5          8          14
1/32            4          4          6          9          17
1/64            4          4          5          8          40
1/128           4          4          4          9          > 400

Better than the KLW preconditioner, but still not robust for low viscosity. Can we do better than this?


Some solution algorithms

References

1. M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numerica, 14 (2005), pp. 1–137.

2. H. C. Elman, D. J. Silvester and A. J. Wathen, Finite Elements and Fast Iterative Solvers: With Applications in Incompressible Fluid Dynamics, Oxford University Press, 2005.

3. K. A. Mardal and R. Winther, Preconditioning discretizations of systems of partial differential equations, Numer. Linear Algebra Appl., 18 (2011), pp. 1–40.